Uniform approximation is more appropriate for Wilcoxon Rank-Sum Test in gene set analysis.
Directory of Open Access Journals (Sweden)
Zhide Fang
Gene set analysis is widely used to facilitate biological interpretation in analyses of differential expression from high-throughput profiling data. The Wilcoxon Rank-Sum (WRS) test is one of the methods commonly used in gene set enrichment analysis. It compares the ranks of genes in a gene set against those of genes outside the set. The method is easy to implement, and it avoids dichotomizing genes into significant and non-significant in a competitive hypothesis test. Because of the large number of genes being examined, it is impractical to calculate the exact null distribution for the WRS test, so the normal distribution is commonly used as an approximation. However, as we demonstrate in this paper, the normal approximation is problematic when a gene set with a relatively small number of genes is tested against the large number of genes in the complementary set. In this situation, a uniform approximation is substantially more powerful, more accurate, and less computationally intensive. We demonstrate the advantage of the uniform approximation in Gene Ontology (GO) term analysis using simulations and real data sets.
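The normal approximation that the abstract critiques can be sketched in a few lines. The following is a minimal illustration, not the paper's code: the function name and the toy gene-set ranks are ours, and ties are ignored.

```python
import math

def wrs_normal_approx(set_ranks, n_total):
    """Two-sided p-value for the rank-sum W of a gene set of size m
    inside a ranked list of n_total genes, via the usual normal
    approximation (ties ignored)."""
    m = len(set_ranks)
    n = n_total - m
    W = sum(set_ranks)                           # observed rank-sum
    mu = m * (n_total + 1) / 2                   # E[W] under the null
    sigma = math.sqrt(m * n * (n_total + 1) / 12.0)
    z = (W - mu) / sigma
    p = math.erfc(abs(z) / math.sqrt(2))         # 2 * (1 - Phi(|z|))
    return W, z, p

# toy example: a 5-gene set near the top of a 1000-gene ranking
W, z, p = wrs_normal_approx([1, 3, 7, 12, 20], 1000)
```

The paper's point is that for small m and large n_total, the tail of this normal approximation can be badly calibrated, which is where the uniform approximation comes in.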
Škrbić, Biljana; Héberger, Károly; Durišić-Mladenović, Nataša
2013-10-01
Sum of ranking differences (SRD) was applied for comparing multianalyte results obtained by several analytical methods used in one or in different laboratories, i.e., for ranking the overall performances of the methods (or laboratories) in simultaneous determination of the same set of analytes. The data sets for testing the applicability of SRD contained the results reported during one of the proficiency tests (PTs) organized by the EU Reference Laboratory for Polycyclic Aromatic Hydrocarbons (EU-RL-PAH). In this way, SRD was also tested as a discriminant method alternative to existing average performance scores used to compare multianalyte PT results. SRD should be used along with the z scores, the most commonly used PT performance statistics. SRD was further developed to handle identical rankings (ties) among laboratories. Two benchmark concentration series were selected as reference: (a) the assigned PAH concentrations (determined precisely beforehand by the EU-RL-PAH) and (b) the averages of all individual PAH concentrations determined by each laboratory. Ranking relative to the assigned values and also to the average (or median) values pointed to the laboratories with the most extreme results, and revealed groups of laboratories with similar overall performances. SRD reveals differences between methods or laboratories even if classical tests cannot. The ranking was validated using comparison of ranks by random numbers (a randomization test) and using sevenfold cross-validation, which highlighted the similarities among the (methods used in) laboratories. Principal component analysis and hierarchical cluster analysis justified the findings based on SRD ranking/grouping. If the PAH concentrations are row-scaled (i.e., z scores are analyzed as input for ranking), SRD can still be used for checking the normality of errors. Moreover, cross-validation of SRD on z scores groups the laboratories similarly. The SRD technique is general in nature, i.e., it can
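In its simplest form, SRD ranks each result vector and sums the absolute rank differences against the reference ranking. The sketch below is our illustration of that basic idea (no tie handling, and the toy concentration values are invented, not EU-RL-PAH data):

```python
def srd(reference, candidate):
    """Sum of Ranking Differences, simplest form (no tie handling):
    rank both result vectors and sum the absolute rank differences.
    SRD = 0 means the candidate orders the analytes exactly as the
    reference does; larger values mean poorer agreement."""
    def ranks(values):
        order = sorted(range(len(values)), key=values.__getitem__)
        r = [0] * len(values)
        for rank, idx in enumerate(order, start=1):
            r[idx] = rank
        return r
    return sum(abs(a - b) for a, b in zip(ranks(reference), ranks(candidate)))

# toy example: assigned PAH concentrations vs. one lab's reported values
assigned = [1.2, 3.4, 0.8, 2.1]
lab      = [1.1, 3.0, 1.5, 2.5]
score = srd(assigned, lab)
```

The randomization test mentioned in the abstract then compares such a score against the SRD distribution of randomly permuted rankings.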
Wang, Ling; Xia, Jie-lai; Yu, Li-li; Li, Chan-juan; Wang, Su-zhen
2008-06-01
To explore several numerical methods for ordinal variables in a one-way ordinal contingency table and their interrelationship, and to compare the corresponding statistical analysis methods such as Ridit analysis and the rank sum test. Formula deduction was based on five simplified grading approaches: rank_r(i), ridit_r(i), ridit_r(ci), ridit_r(mi), and table scores. A practical data set was verified by SAS 8.2 in clinical practice (to test the effect of Shiwei solution in treatment for chronic tracheitis). Because of the linear relationship rank_r(i) = N ridit_r(i) + 1/2 = N ridit_r(ci) = (N + 1) ridit_r(mi), the exact chi-square values in Ridit analysis based on ridit_r(i), ridit_r(ci), and ridit_r(mi) were completely the same, and they were equivalent to the Kruskal-Wallis H test. Traditional Ridit analysis was based on ridit_r(i), and its corresponding chi-square value calculated with an approximate variance (1/12) was conservative. The exact chi-square test of Ridit analysis should be used when comparing multiple groups in clinical research because of its special merits, such as the distribution of the mean ridit value on (0,1) and clear graphical expression. The exact chi-square test of Ridit analysis can be output directly by proc freq of SAS 8.2 with the ridit and modridit options (SCORES =). The exact chi-square test of Ridit analysis is equivalent to the Kruskal-Wallis H test, and should be used when comparing multiple groups in clinical research.
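The linear relation rank_r(i) = N ridit_r(i) + 1/2 that drives the equivalence above is easy to verify numerically. A minimal sketch (function name and the toy frequency table are ours):

```python
def ridits_and_midranks(freqs):
    """Ridits and mid-ranks for an ordered frequency table.
    For category i with frequency f_i and N = sum of frequencies:
        ridit_i = (cumulative count below i + f_i / 2) / N
    and the mid-rank obeys the linear relation
        rank_i = N * ridit_i + 1/2,
    which is why Ridit analysis and the Kruskal-Wallis H test yield
    identical exact chi-square values."""
    N = sum(freqs)
    ridits, ranks, below = [], [], 0
    for f in freqs:
        r = (below + f / 2) / N
        ridits.append(r)
        ranks.append(N * r + 0.5)
        below += f
    return ridits, ranks

# toy 4-level ordinal outcome (e.g. cured / improved / unchanged / worse)
ridits, ranks = ridits_and_midranks([10, 25, 40, 25])
```

For the first category (10 observations occupying ranks 1..10) the mid-rank is (1+10)/2 = 5.5, matching N·ridit + 1/2 = 100·0.05 + 0.5.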
The Distribution of the Sum of Signed Ranks
Albright, Brian
2012-01-01
We describe the calculation of the distribution of the sum of signed ranks and develop an exact recursive algorithm for the distribution as well as an approximation of the distribution using the normal. The results have applications to the non-parametric Wilcoxon signed-rank test.
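An exact recursive algorithm of the kind the abstract describes can be sketched as a small dynamic program: each rank independently carries a + or - sign with probability 1/2, so the distribution is built up one rank at a time. This is our illustration of the standard recursion, not the authors' code:

```python
def signed_rank_distribution(n):
    """Exact null distribution of W+ (the sum of positive signed ranks)
    for sample size n without ties. Returns {w: count}; divide each
    count by 2**n to obtain a probability."""
    counts = {0: 1}
    for r in range(1, n + 1):
        nxt = {}
        for w, c in counts.items():
            nxt[w] = nxt.get(w, 0) + c          # rank r gets a minus sign
            nxt[w + r] = nxt.get(w + r, 0) + c  # rank r gets a plus sign
        counts = nxt
    return counts

dist = signed_rank_distribution(10)
```

The distribution is symmetric about n(n+1)/4, which is also what makes the normal approximation mentioned in the abstract natural for larger n.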
Lachin, John M
2011-11-10
The power of a chi-square test, and thus the required sample size, are a function of the noncentrality parameter that can be obtained as the limiting expectation of the test statistic under an alternative hypothesis specification. Herein, we apply this principle to derive simple expressions for two tests that are commonly applied to discrete ordinal data. The Wilcoxon rank sum test for the equality of distributions in two groups is algebraically equivalent to the Mann-Whitney test. The Kruskal-Wallis test applies to multiple groups. These tests are equivalent to a Cochran-Mantel-Haenszel mean score test using rank scores for a set of C discrete categories. Although various authors have assessed the power function of the Wilcoxon and Mann-Whitney tests, herein it is shown that the power of these tests with discrete observations, that is, with tied ranks, is readily provided by the power function of the corresponding Cochran-Mantel-Haenszel mean scores test for two and R > 2 groups. These expressions yield results virtually identical to those derived previously for rank scores and also apply to other score functions. The Cochran-Armitage test for trend assesses whether there is a monotonically increasing or decreasing trend in the proportions with a positive outcome or response over the C ordered categories of an ordinal independent variable, for example, dose. Herein, it is shown that the power of the test is a function of the slope of the response probabilities over the ordinal scores assigned to the groups, which yields simple expressions for the power of the test. Copyright © 2011 John Wiley & Sons, Ltd.
Convolutional Codes with Maximum Column Sum Rank for Network Streaming
Mahmood, Rafid; Badr, Ahmed; Khisti, Ashish
2015-01-01
The column Hamming distance of a convolutional code determines the error correction capability when streaming over a class of packet erasure channels. We introduce a metric known as the column sum rank, which parallels the column Hamming distance when streaming over a network with link failures. We prove rank analogues of several known column Hamming distance properties and introduce a new family of convolutional codes that maximize the column sum rank up to the code memory. Our construction invol...
Eisinga, R.N.; Heskes, T.M.; Pelzer, B.J.; Grotenhuis, H.F. te
2017-01-01
Background: The Friedman rank sum test is a widely-used nonparametric method in computational biology. In addition to examining the overall null hypothesis of no significant difference among any of the rank sums, it is typically of interest to conduct pairwise comparison tests. Current approaches to
Sharp bounds on the ranks of negativity of certain sums
Indian Academy of Sciences (India)
with a finite rank of negativity k (i.e., k is the maximal dimension of any linear subspace … By linear algebra, we can choose a linear subspace E of L which is mapped … Mathematics and its applications (Cambridge University Press) (1994) vol. 49.
Use of rank sum method in identifying high occupational dose jobs for ALARA implementation
International Nuclear Information System (INIS)
Cho, Yeong Ho; Kang, Chang Sun
1998-01-01
The cost-effective reduction of occupational radiation exposure (ORE) dose at a nuclear power plant cannot be achieved without an extensive analysis of the accumulated ORE dose data of existing plants. It is necessary to identify the high ORE jobs for ALARA implementation. In this study, the Rank Sum Method (RSM) is used to identify high ORE jobs. As a case study, the database of ORE-related maintenance and repair jobs for Kori Units 3 and 4 is used for assessment, and the top twenty high ORE jobs are identified. The results are also verified and validated using the Friedman test, and RSM is found to be a very efficient way of analyzing the data. (author)
Cointegration rank testing under conditional heteroskedasticity
DEFF Research Database (Denmark)
Cavaliere, Giuseppe; Rahbek, Anders Christian; Taylor, Robert M.
2010-01-01
We analyze the properties of the conventional Gaussian-based cointegrating rank tests of Johansen (1996, Likelihood-Based Inference in Cointegrated Vector Autoregressive Models) in the case where the vector of series under test is driven by globally stationary, conditionally heteroskedastic … relative to tests based on the asymptotic critical values or the i.i.d. bootstrap, the wild bootstrap rank tests perform very well in small samples under a variety of conditionally heteroskedastic innovation processes. An empirical application to the term structure of interest rates is given.
Li, Mengtian; Zhang, Ruisheng; Hu, Rongjing; Yang, Fan; Yao, Yabing; Yuan, Yongna
2018-03-01
Identifying influential spreaders is a crucial problem that can help authorities to control the spreading process in complex networks. Based on the classical degree centrality (DC), several improved measures have been presented. However, these measures cannot rank spreaders accurately. In this paper, we first calculate the sum of the degrees of the nearest neighbors of a given node, and based on the calculated sum, a novel centrality named clustered local-degree (CLD) is proposed, which combines the sum and the clustering coefficients of nodes to rank spreaders. By assuming that the spreading process in networks follows the susceptible-infectious-recovered (SIR) model, we perform extensive simulations on a series of real networks to compare the performances of the CLD centrality and six other measures. The results show that the CLD centrality has a competitive performance in distinguishing the spreading ability of nodes, and shows the best performance in identifying influential spreaders accurately.
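The two ingredients of CLD (neighbour-degree sum and clustering coefficient) can be sketched on a toy graph. The exact combination rule below, a neighbour-degree sum discounted by the node's clustering coefficient, is our assumption for illustration and is not necessarily the paper's formula:

```python
def cld_scores(adj):
    """Illustrative clustered local-degree (CLD) scores on an undirected
    graph given as a dict of neighbour sets. Each node's score combines
    (a) the sum of its nearest neighbours' degrees with (b) its local
    clustering coefficient; the discounting form used here is an
    assumption, not the paper's exact definition."""
    def clustering(v):
        nbrs = adj[v]
        k = len(nbrs)
        if k < 2:
            return 0.0
        links = sum(1 for a in nbrs for b in nbrs if a < b and b in adj[a])
        return 2.0 * links / (k * (k - 1))
    return {v: sum(len(adj[u]) for u in adj[v]) / (1.0 + clustering(v))
            for v in adj}

# toy undirected graph: a triangle 0-1-2 with a pendant path 0-3-4
graph = {
    0: {1, 2, 3},
    1: {0, 2},
    2: {0, 1},
    3: {0, 4},
    4: {3},
}
scores = cld_scores(graph)
```

On this toy graph the hub node 0 receives the highest score, matching the intuition that well-connected nodes in loosely clustered neighbourhoods spread most effectively.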
Comparing survival curves using rank tests
Albers, Willem
1990-01-01
Survival times of patients can be compared using rank tests in various experimental setups, including the two-sample case and the case of paired data. Attention is focussed on two frequently occurring complications in medical applications: censoring and tail alternatives. A review is given of the
Leclerc, Arnaud; Thomas, Phillip S.; Carrington, Tucker
2017-08-01
Vibrational spectra and wavefunctions of polyatomic molecules can be calculated at low memory cost using low-rank sum-of-product (SOP) decompositions to represent basis functions generated using an iterative eigensolver. Using a SOP tensor format does not determine the iterative eigensolver. The choice of the iterative eigensolver is limited by the need to restrict the rank of the SOP basis functions at every stage of the calculation. We have adapted, implemented and compared different reduced-rank algorithms based on standard iterative methods (block-Davidson algorithm, Chebyshev iteration) to calculate vibrational energy levels and wavefunctions of the 12-dimensional acetonitrile molecule. The effect of using low-rank SOP basis functions on the different methods is analysed and the numerical results are compared with those obtained with the reduced rank block power method. Relative merits of the different algorithms are presented, showing that the advantage of using a more sophisticated method, although mitigated by the use of reduced-rank SOP functions, is noticeable in terms of CPU time.
On Locally Most Powerful Sequential Rank Tests
Czech Academy of Sciences Publication Activity Database
Kalina, Jan
2017-01-01
Roč. 36, č. 1 (2017), s. 111-125 ISSN 0747-4946 R&D Projects: GA ČR GA17-07384S Grant - others:Nadační fond na podporu vědy(CZ) Neuron Institutional support: RVO:67985807 Keywords : nonparametric test s * sequential ranks * stopping variable Subject RIV: BA - General Mathematics OBOR OECD: Pure mathematics Impact factor: 0.339, year: 2016
Generalized reduced rank tests using the singular value decomposition
Kleibergen, F.R.; Paap, R.
2002-01-01
We propose a novel statistic to test the rank of a matrix. The rank statistic overcomes deficiencies of existing rank statistics, like: necessity of a Kronecker covariance matrix for the canonical correlation rank statistic of Anderson (1951), sensitivity to the ordering of the variables for the LDU
Generalized Reduced Rank Tests using the Singular Value Decomposition
F.R. Kleibergen (Frank); R. Paap (Richard)
2003-01-01
We propose a novel statistic to test the rank of a matrix. The rank statistic overcomes deficiencies of existing rank statistics, like: necessity of a Kronecker covariance matrix for the canonical correlation rank statistic of Anderson (1951), sensitivity to the ordering of the variables
Rank-based Tests of the Cointegrating Rank in Semiparametric Error Correction Models
Hallin, M.; van den Akker, R.; Werker, B.J.M.
2012-01-01
Abstract: This paper introduces rank-based tests for the cointegrating rank in an Error Correction Model with i.i.d. elliptical innovations. The tests are asymptotically distribution-free, and their validity does not depend on the actual distribution of the innovations. This result holds despite the
Asymptotic efficiency of signed-rank symmetry tests under skew alternatives.
Alessandra Durio; Yakov Nikitin
2002-01-01
The efficiency of some known tests for symmetry, such as the sign test, the Wilcoxon signed-rank test, or more general linear signed rank tests, was studied mainly under the classical alternatives of location. However, it is interesting to compare the efficiencies of these tests under asymmetric alternatives like the so-called skew alternative proposed in Azzalini (1985). We find and compare local Bahadur efficiencies of linear signed-rank statistics for skew alternatives and also discuss the con...
Aspects of analysis of small-sample right censored data using generalized Wilcoxon rank tests
Öhman, Marie-Louise
1994-01-01
The estimated bias and variance of commonly applied and jackknife variance estimators and observed significance level and power of standardised generalized Wilcoxon linear rank sum test statistics and tests, respectively, of Gehan and Prentice are compared in a Monte Carlo simulation study. The variance estimators are the permutational-, the conditional permutational- and the jackknife variance estimators of the test statistic of Gehan, and the asymptotic- and the jackknife variance estimator...
Co-integration Rank Testing under Conditional Heteroskedasticity
DEFF Research Database (Denmark)
Cavaliere, Giuseppe; Rahbek, Anders; Taylor, A.M. Robert
… null distributions of the rank statistics coincide with those derived by previous authors who assume either i.i.d. or (strict and covariance) stationary martingale difference innovations. We then propose wild bootstrap implementations of the co-integrating rank tests and demonstrate that the associated bootstrap rank statistics replicate the first-order asymptotic null distributions of the rank statistics. We show the same is also true of the corresponding rank tests based on the i.i.d. bootstrap of Swensen (2006). The wild bootstrap, however, has the important property that, unlike the i.i.d. bootstrap, it preserves in the re-sampled data the pattern of heteroskedasticity present in the original shocks. Consistent with this, numerical evidence suggests that, relative to tests based on the asymptotic critical values or the i.i.d. bootstrap, the wild bootstrap rank tests perform very well in small samples un…
International Nuclear Information System (INIS)
Cho, Y.H.; Ko, H.S.; Kim, S.H.; Kang, C.S.; Moon, J.H.; Kim, K.D.
2004-01-01
The cost-effective reduction of occupational radiation dose (ORD) at a nuclear power plant cannot be achieved without an extensive analysis of the accumulated ORD data of existing plants. Through the data analysis, it is necessary to identify the jobs with repetitively high ORD at the nuclear power plant. In general, the commonly used point value method over-estimates the role of mean and median values in identifying the high ORD jobs, which can lead to misjudgment. In this study, the Percentile Rank Sum Method (PRSM) is proposed to identify repetitive high ORD jobs, based on non-parametric statistical theory. As a case study, the method is applied to ORD data of maintenance and repair jobs at Kori units 3 and 4, which are pressurized water reactors with 950 MWe capacity that have been operated since 1986 and 1987, respectively, in Korea. The results were verified and validated, and PRSM has been demonstrated to be an efficient method of analyzing the data. (authors)
Development of a new biofidelity ranking system for anthropomorphic test devices.
Rhule, Heather H; Maltese, Matthew R; Donnelly, Bruce R; Eppinger, Rolf H; Brunner, Jill K; Bolte, John H
2002-11-01
A new biofidelity assessment system is being developed and applied to three side impact dummies: the WorldSID-alpha, the ES-2 and the SID-HIII. This system quantifies (1) the ability of a dummy to load a vehicle as a cadaver does, "External Biofidelity," and (2) the ability of a dummy to replicate those cadaver responses that best predict injury potential, "Internal Biofidelity." The ranking system uses cadaver and dummy responses from head drop tests, thorax and shoulder pendulum tests, and whole body sled tests. Each test condition is assigned a weight factor based on the number of human subjects tested to form the biomechanical response corridor and how well the biofidelity tests represent FMVSS 214, side NCAP (SNCAP) and FMVSS 201 Pole crash environments. For each response requirement, the cumulative variance of the dummy response relative to the mean cadaver response (DCV) and the cumulative variance of the mean cadaver response relative to the mean plus one standard deviation (CCV) are calculated. The ratio of DCV/CCV expresses how well the dummy response duplicates the mean cadaver response: a smaller ratio indicating better biofidelity. For each test condition, the square root is taken of each Response Comparison Value (DCV/CCV), and then these values are averaged and multiplied by the appropriate Test Condition Weight. The weighted and averaged comparison values are then summed and divided by the sum of the Test Condition Weights to obtain a rank for each body region. Each dummy obtains an overall rank for External Biofidelity and an overall rank for Internal Biofidelity comprised of an average of the ranks from each body region. Of the three dummies studied, the selected comparison test data indicate that the WorldSID-alpha prototype dummy demonstrated the best overall External Biofidelity although improvement is needed in all of the dummies to better replicate human kinematics. All three dummies estimate potential injury assessment with similar levels of
Identification of significant features by the Global Mean Rank test.
Klammer, Martin; Dybowski, J Nikolaj; Hoffmann, Daniel; Schaab, Christoph
2014-01-01
With the introduction of omics-technologies such as transcriptomics and proteomics, numerous methods for the reliable identification of significantly regulated features (genes, proteins, etc.) have been developed. Experimental practice requires these tests to successfully deal with conditions such as small numbers of replicates, missing values, non-normally distributed expression levels, and non-identical distributions of features. With the MeanRank test we aimed at developing a test that performs robustly under these conditions, while favorably scaling with the number of replicates. The test proposed here is a global one-sample location test, which is based on the mean ranks across replicates, and internally estimates and controls the false discovery rate. Furthermore, missing data is accounted for without the need of imputation. In extensive simulations comparing MeanRank to other frequently used methods, we found that it performs well with small and large numbers of replicates, feature dependent variance between replicates, and variable regulation across features on simulation data and a recent two-color microarray spike-in dataset. The tests were then used to identify significant changes in the phosphoproteomes of cancer cells induced by the kinase inhibitors erlotinib and 3-MB-PP1 in two independently published mass spectrometry-based studies. MeanRank outperformed the other global rank-based methods applied in this study. Compared to the popular Significance Analysis of Microarrays and Linear Models for Microarray methods, MeanRank performed similar or better. Furthermore, MeanRank exhibits more consistent behavior regarding the degree of regulation and is robust against the choice of preprocessing methods. MeanRank does not require any imputation of missing values, is easy to understand, and yields results that are easy to interpret. The software implementing the algorithm is freely available for academic and commercial use.
A Rank Test on Equality of Population Medians
Pooi Ah Hin
2012-01-01
The Kruskal-Wallis test is a non-parametric test for the equality of K population medians. The test statistic involved is a measure of the overall closeness of the K average ranks in the individual samples to the average rank in the combined sample. The resulting acceptance region of the test, however, may not be the smallest region with the required acceptance probability under the null hypothesis. Presently an alternative acceptance region is constructed such that it has the smallest size, ap...
Generating pseudo test collections for learning to rank scientific articles
Berendsen, R.; Tsagkias, M.; de Rijke, M.; Meij, E.
2012-01-01
Pseudo test collections are automatically generated to provide training material for learning to rank methods. We propose a method for generating pseudo test collections in the domain of digital libraries, where data is relatively sparse, but comes with rich annotations. Our intuition is that
Adaptive linear rank tests for eQTL studies.
Szymczak, Silke; Scheinhardt, Markus O; Zeller, Tanja; Wild, Philipp S; Blankenberg, Stefan; Ziegler, Andreas
2013-02-10
Expression quantitative trait loci (eQTL) studies are performed to identify single-nucleotide polymorphisms that modify average expression values of genes, proteins, or metabolites, depending on the genotype. As expression values are often not normally distributed, statistical methods for eQTL studies should be valid and powerful in these situations. Adaptive tests are promising alternatives to standard approaches, such as the analysis of variance or the Kruskal-Wallis test. In a two-stage procedure, skewness and tail length of the distributions are estimated and used to select one of several linear rank tests. In this study, we compare two adaptive tests that were proposed in the literature using extensive Monte Carlo simulations of a wide range of different symmetric and skewed distributions. We derive a new adaptive test that combines the advantages of both literature-based approaches. The new test does not require the user to specify a distribution. It is slightly less powerful than the locally most powerful rank test for the correct distribution and at least as powerful as the maximin efficiency robust rank test. We illustrate the application of all tests using two examples from different eQTL studies. Copyright © 2012 John Wiley & Sons, Ltd.
Another Argument in Favour of Wilcoxon's Signed Rank Test
Rosenblatt, Jonathan; Benjamini, Yoav
2013-01-01
The Wilcoxon Signed Rank test is typically called upon when testing whether a symmetric distribution has a specified centre and the Gaussianity is in question. As with all insurance policies it comes with a cost, even if small, in terms of power versus a t-test, when the distribution is indeed Gaussian. In this note we further show that even when the distribution tested is Gaussian there need not be power loss at all, if the alternative is of a mixture type rather than a shift. The signed ran...
Partial sums of lagged cross-products of AR residuals and a test for white noise
de Gooijer, J.G.
2008-01-01
Partial sums of lagged cross-products of AR residuals are defined. By studying the sample paths of these statistics, changes in residual dependence can be detected that might be missed by statistics using only the total sum of cross-products. Also, a test statistic for white noise is proposed. It is
Testing rank-dependent utility theory for health outcomes.
Oliver, Adam
2003-10-01
Systematic violations of expected utility theory (EU) have been reported in the context of both money and health outcomes. Rank-dependent utility theory (RDU) is currently the most popular and influential alternative theory of choice under circumstances of risk. This paper reports a test of the descriptive performance of RDU compared to EU in the context of health. When one of the options is certain, violations of EU that can be explained by RDU are found. When both options are risky, no evidence that RDU is a descriptive improvement over EU is found, though this finding may be due to the low power of the tests. Copyright 2002 John Wiley & Sons, Ltd.
Test elements of direct sums and free products of free Lie algebras
Indian Academy of Sciences (India)
We give a characterization of test elements of a direct sum of free Lie algebras in terms of test elements of the factors. In addition, we construct certain types of test elements and we prove that in a free product of free Lie algebras, a product of the homogeneous test elements of the factors is also a test element.
A multivariate rank test for comparing mass size distributions
Lombard, F.
2012-04-01
Particle size analyses of a raw material are commonplace in the mineral processing industry. Knowledge of particle size distributions is crucial in planning milling operations to enable an optimum degree of liberation of valuable mineral phases, to minimize plant losses due to an excess of oversize or undersize material, or to attain a size distribution that fits a contractual specification. The problem addressed in the present paper is how to test the equality of two or more underlying size distributions. A distinguishing feature of these size distributions is that they are not based on counts of individual particles. Rather, they are mass size distributions giving the fractions of the total mass of a sampled material lying in each of a number of size intervals. As such, the data are compositional in nature, using the terminology of Aitchison [1]; that is, multivariate vectors the components of which add to 100%. In the literature, various versions of Hotelling's T 2 have been used to compare matched pairs of such compositional data. In this paper, we propose a robust test procedure based on ranks as a competitor to Hotelling's T 2. In contrast to the latter statistic, the power of the rank test is not unduly affected by the presence of outliers or of zeros among the data. © 2012 Copyright Taylor and Francis Group, LLC.
How Well Does the Sum Score Summarize the Test? Summability as a Measure of Internal Consistency
Goeman, J.J.; de Jong, N.H.
2018-01-01
Many researchers use Cronbach's alpha to demonstrate internal consistency, even though it has been shown numerous times that Cronbach's alpha is not suitable for this. Because the intention of questionnaire and test constructors is to summarize the test by its overall sum score, we advocate
Impact of controlling the sum of error probability in the sequential probability ratio test
Directory of Open Access Journals (Sweden)
Bijoy Kumarr Pradhan
2013-05-01
A generalized modified method is proposed to control the sum of error probabilities in the sequential probability ratio test, minimizing the weighted average of the two average sample numbers under a simple null hypothesis and a simple alternative hypothesis, with the restriction that the sum of error probabilities is a pre-assigned constant, in order to find the optimal sample size. Finally, a comparison is made with the optimal sample size found from the fixed sample size procedure. The results are applied to cases where the random variate follows a normal law as well as a Bernoullian law.
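For context, the classical Wald SPRT boundaries that such a modification builds on can be sketched as follows. This shows only the textbook Wald approximations for one particular split of the error budget; the paper's generalized method, which optimizes the split subject to alpha + beta = c, is not reproduced here:

```python
import math

def wald_log_thresholds(alpha, beta):
    """Classical Wald SPRT log-likelihood-ratio boundaries: continue
    sampling while log_B < LLR < log_A; accept H1 when LLR >= log_A,
    accept H0 when LLR <= log_B. (Textbook approximations, not the
    paper's generalized procedure.)"""
    log_A = math.log((1 - beta) / alpha)
    log_B = math.log(beta / (1 - alpha))
    return log_B, log_A

# e.g. split a total error budget c = 0.10 evenly: alpha = beta = 0.05
log_B, log_A = wald_log_thresholds(0.05, 0.05)
```

With an even split the boundaries are symmetric about zero; an uneven split of the same budget c shifts them, which is exactly the degree of freedom the paper's optimization exploits.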
Tupker, RA; Bunte, EE; Fidler; Wiechers, JW; Coenraads, PJ
Discrepancies between the one-time patch test and the wash test regarding the ranking of irritancy of detergents have been found in the literature. The aim of the present study was to investigate the concordance of irritancy rank order of 4 anionic detergents tested by 3 different exposure methods,
Strategic alternatives ranking methodology: Multiple RCRA incinerator evaluation test case
International Nuclear Information System (INIS)
Baker, G.; Thomson, R.D.; Reece, J.; Springer, L.; Main, D.
1988-01-01
This paper presents an important process approach to permit quantification and ranking of multiple alternatives being considered in remedial actions or hazardous waste strategies. This process is a methodology for evaluating programmatic options in support of site selection or environmental analyses. Political or other less tangible motivations for alternatives may be quantified by establishing the range of significant variables, weighting their importance, and establishing specific criteria for scoring individual alternatives. An application of the process to a recent AFLC program permitted ranking incineration alternatives from a list of over 130 options. The process forced participation by the organizations to be affected, allowed a consensus of opinion to be achieved, allowed complete flexibility to evaluate factor sensitivity, and resulted in strong, quantifiable support for any subsequent site-selection NEPA documents.
A Bootstrap Cointegration Rank Test for Panels of VAR Models
DEFF Research Database (Denmark)
Callot, Laurent
functions of the individual Cointegrated VARs (CVAR) models. A bootstrap based procedure is used to compute empirical distributions of the trace test statistics for these individual models. From these empirical distributions two panel trace test statistics are constructed. The satisfying small sample...
Mathur, Sunil; Sadana, Ajit
2015-12-01
We present a rank-based test statistic for the identification of differentially expressed genes using a distance measure. The proposed test statistic is highly robust against extreme values and does not assume the distribution of the parent population. Simulation studies show that the proposed test is more powerful than some of the commonly used methods, such as the paired t-test, the Wilcoxon signed rank test, and significance analysis of microarrays (SAM), under certain non-normal distributions. The asymptotic distribution of the test statistic and the p-value function are discussed. The application of the proposed method is shown using a real-life data set. © The Author(s) 2011.
Test the principle of maximum entropy in constant sum 2×2 game: Evidence in experimental economics
International Nuclear Information System (INIS)
Xu, Bin; Zhang, Hongen; Wang, Zhijian; Zhang, Jianbo
2012-01-01
By using laboratory experimental data, we test the uncertainty of strategy type in various competing environments with the two-person constant sum 2×2 game in a social system. We first show that, in these competing game environments, the outcome of human decision-making obeys the principle of maximum entropy. -- Highlights: ► Test the uncertainty in two-person constant sum games with experimental data. ► At the game level, the constant sum game fits the principle of maximum entropy. ► At the group level, all empirical entropy values are close to the theoretical maxima. ► The results can be different for games that are not constant sum games.
Directory of Open Access Journals (Sweden)
CHRISTOPHER H. TIENKEN
2008-04-01
Examining a popular political notion, this article presents results from a series of Spearman Rho calculations conducted to investigate relationships between countries' rankings on international tests of mathematics and science and future economic competitiveness as measured by the 2006 World Economic Forum's Growth Competitiveness Index (GCI). The study investigated the existence of relationships between international test rankings from three different time periods during the last 50 years of U.S. education policy development (i.e., 1957–1982, 1983–2000, and 2001–2006) and 2006 GCI ranks. It extends previous research on the topic by investigating how GCI rankings in the top 50 percent and bottom 50 percent relate to rankings on international tests for the countries that participated in each test. The study found that the relationship between ranks on international tests of mathematics and science and future economic strength is stronger among nations with lower-performing economies. Nations with strong economies, such as the United States, demonstrate a weaker, nonsignificant relationship.
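The Spearman Rho underlying the study above is the Pearson correlation of rank vectors. A minimal stdlib sketch (not the authors' code; the handling of ties by average ranks is standard):

```python
import math

def average_ranks(v):
    """Ranks 1..n, with tied values receiving the average of their positions."""
    order = sorted(range(len(v)), key=lambda i: v[i])
    r = [0.0] * len(v)
    i = 0
    while i < len(order):
        j = i
        # extend j to cover the whole tie group
        while j + 1 < len(order) and v[order[j + 1]] == v[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average 1-based position of the tie group
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman_rho(x, y):
    """Pearson correlation computed on the rank vectors."""
    rx, ry = average_ranks(x), average_ranks(y)
    mx, my = sum(rx) / len(rx), sum(ry) / len(ry)
    num = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    den = math.sqrt(sum((a - mx) ** 2 for a in rx) *
                    sum((b - my) ** 2 for b in ry))
    return num / den
```

Applied to test ranks versus GCI ranks per country, rho near +1 would indicate the strong link the "popular political notion" assumes.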
Analysis of diffractive pd to Xd and pp to Xp interactions and test of the finite-mass sum rule
Akimov, Y; Golovanov, L B; Goulianos, K; Gross, D; Malamud, E; Melissinos, A C; Mukhin, S; Nitz, D; Olsen, S; Sticker, H; Tsarev, V A; Yamada, R; Zimmerman, P
1976-01-01
The first moment finite mass sum rule is tested by utilising cross-sections for pp to Xp extracted from recent Fermilab data on pd to Xd and also comparing with CERN ISR data. The dependences on M_x^2, t and s are also discussed. (11 refs).
Different goodness of fit tests for Rayleigh distribution in ranked set sampling
Directory of Open Access Journals (Sweden)
Amer Al-Omari
2016-03-01
In this paper, different goodness-of-fit tests for the Rayleigh distribution are considered based on simple random sampling (SRS) and ranked set sampling (RSS) techniques. The performance of the suggested tests is evaluated in terms of the power of the tests by using Monte Carlo simulation. It is found that the suggested RSS tests perform better than their counterparts in SRS.
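Ranked set sampling itself is easy to sketch: draw m sets of m units, rank each set (by judgment or a cheap covariate), and measure only the i-th smallest unit of the i-th set. The sketch below ranks by the measured value itself, i.e. it idealizes perfect ranking; it is illustrative, not the paper's simulation code.

```python
import random

def rss_sample(draw, m, rng):
    """One ranked-set sample of size m under perfect ranking.

    draw: callable taking an rng and returning one measurement.
    """
    sample = []
    for i in range(m):
        judged = sorted(draw(rng) for _ in range(m))  # rank a fresh set of m units
        sample.append(judged[i])                      # keep the i-th order statistic
    return sample

rng = random.Random(42)
sample = rss_sample(lambda r: r.random(), 4, rng)
```

Although m² units are ranked, only m are actually measured; the spread of order statistics across sets is what gives RSS estimators their efficiency gain over SRS.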
Niu, Sunny X; Tienda, Marta
2012-04-01
Using administrative data for five Texas universities that differ in selectivity, this study evaluates the relative influence of two key indicators of college success: high school class rank and standardized tests. Empirical results show that class rank is the superior predictor of college performance and that test score advantages do not insulate lower ranked students from academic underperformance. Using the UT-Austin campus as a test case, we conduct a simulation to evaluate the consequences of capping students admitted automatically using both achievement metrics. We find that using class rank to cap the number of students eligible for automatic admission would have roughly uniform impacts across high schools, but imposing a minimum test score threshold on all students would have highly unequal consequences, greatly reducing the admission eligibility of the highest-performing students who attend poor high schools while not jeopardizing the admissibility of students who attend affluent high schools. We discuss the implications of the Texas admissions experiment for higher education in Europe.
Tien, Flora F.; Blackburn, Robert T.
1996-01-01
A study explored the relationship between the traditional system of college faculty rank and faculty research productivity from the perspectives of behavioral reinforcement theory and selection function. Six hypotheses were generated and tested, using data from a 1989 national faculty survey. Results failed to support completely either the…
Tienken, Christopher H.
2008-01-01
Examining a popular political notion, this article presents results from a series of Spearman Rho calculations conducted to investigate relationships between countries' rankings on international tests of mathematics and science and future economic competitiveness as measured by the 2006 World Economic Forum's Growth Competitiveness Index (GCI).…
Adaptive designs for the one-sample log-rank test.
Schmidt, Rene; Faldum, Andreas; Kwiecien, Robert
2017-09-22
Traditional designs in phase IIa cancer trials are single-arm designs with a binary outcome, for example, tumor response. In some settings, however, a time-to-event endpoint might appear more appropriate, particularly in the presence of loss to follow-up. Then the one-sample log-rank test might be the method of choice. It allows the survival curve of the patients under treatment to be compared with a prespecified reference survival curve. The reference curve usually represents the expected survival under standard of care. In this work, convergence of the one-sample log-rank statistic to Brownian motion is proven using Rebolledo's martingale central limit theorem while accounting for staggered entry times of the patients. On this basis, a confirmatory adaptive one-sample log-rank test is proposed in which provision is made for data-dependent sample size reassessment. The focus is on applying the inverse normal method. This is done in two different directions. The first strategy exploits the independent increments property of the one-sample log-rank statistic. The second strategy is based on the patient-wise separation principle. It is shown by simulation that the proposed adaptive test might help to rescue an underpowered trial and at the same time lowers the average sample number (ASN) under the null hypothesis as compared to a single-stage fixed sample design. © 2017, The International Biometric Society.
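As a rough illustration of the single-stage statistic (not the adaptive procedure developed in the paper), the one-sample log-rank test compares the observed number of events O with the expected number E obtained by evaluating the reference cumulative hazard at each patient's follow-up time. The exponential reference curve below is an assumption made for the sketch.

```python
import math

def one_sample_logrank(times, events, ref_rate):
    """Z = (O - E) / sqrt(E) against an exponential reference with hazard ref_rate.

    times  : follow-up time per patient (event or censoring time)
    events : 1 if the event was observed, 0 if the patient was censored
    """
    observed = sum(events)
    expected = sum(ref_rate * t for t in times)  # cumulative hazard H(t) = rate * t
    return (observed - expected) / math.sqrt(expected)

# Three patients: events at t=1 and t=2, one censored at t=3,
# against a reference hazard of 0.5 events per unit time.
z = one_sample_logrank(times=[1.0, 2.0, 3.0], events=[1, 1, 0], ref_rate=0.5)
```

A markedly negative Z would indicate fewer events than the reference curve predicts, i.e. survival better than standard of care.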
Nadine Chlass; Jens J. Krueger
2007-01-01
This Monte-Carlo study investigates sensitivity of the Wilcoxon signed rank test to certain assumption violations in small samples. Emphasis is put on within-sample-dependence, between-sample dependence, and the presence of ties. Our results show that both assumption violations induce severe size distortions and entail power losses. Surprisingly, these consequences do vary substantially with other properties the data may display. Results provided are particularly relevant for experimental set...
Critical test of isotropic periodic sum techniques with group-based cut-off schemes.
Nozawa, Takuma; Yasuoka, Kenji; Takahashi, Kazuaki Z
2018-03-08
Truncation is still chosen for many long-range intermolecular interaction calculations to efficiently compute free-boundary systems, macromolecular systems and net-charge molecular systems, for example. Advanced truncation methods have been developed for long-range intermolecular interactions. Every truncation method can be implemented as one of two basic cut-off schemes, namely either an atom-based or a group-based cut-off scheme. The former computes interactions of "atoms" inside the cut-off radius, whereas the latter computes interactions of "molecules" inside the cut-off radius. In this work, the effect of group-based cut-off is investigated for isotropic periodic sum (IPS) techniques, which are promising cut-off treatments to attain advanced accuracy for many types of molecular system. The effect of group-based cut-off is clearly different from that of atom-based cut-off, and severe artefacts are observed in some cases. However, no severe discrepancy from the Ewald sum is observed with the extended IPS techniques.
[Computerized ranking test in three French universities: Staff experience and students' feedback].
Roux, D; Meyer, G; Cymbalista, F; Bouaziz, J-D; Falgarone, G; Tesniere, A; Gervais, J; Cariou, A; Peffault de Latour, R; Marat, M; Moenaert, E; Guebli, T; Rodriguez, O; Lefort, A; Dreyfuss, D; Hajage, D; Ricard, J-D
2016-03-01
The year 2016 will be pivotal for the evaluation of French medical students with the introduction of the first computerized National Ranking Test (ECNi). SIDES, the online electronic system for medical student evaluation, was created for this purpose. All the universities have already organized faculty exams, but few have run a joint computerized ranking test at several universities simultaneously. We report our experience of organizing a mock ECNi at the universities Paris Descartes, Paris Diderot and Paris 13. Docimological, administrative and technical working groups were created to organize this ECNi. Students in their fifth year of medical studies, who will be the first to sit the official ECNi in 2016, were invited to attend this mock exam, which represented more than 50% of what will be proposed in 2016. A final electronic questionnaire allowed a docimological and organizational evaluation by students. An analysis of scores and rankings and their distribution on a 1000-point scale was performed. Sixty-four percent of enrolled students (i.e., 654) attended the three half-day exams. No difference in total score or ranking between the three universities was observed. Students' feedback was extremely positive. After normalization to 1000 points, 99% of the students' scores spanned only 300 points. Progressive clinical cases were the most discriminating test. The organization of a mock ECNi involving multiple universities was a docimological and technical success but required an important administrative, technical and teaching investment. Copyright © 2016 Société nationale française de médecine interne (SNFMI). Published by Elsevier SAS. All rights reserved.
Pearson's chi-square test and rank correlation inferences for clustered data.
Shih, Joanna H; Fay, Michael P
2017-09-01
Pearson's chi-square test has been widely used in testing for association between two categorical responses. Spearman rank correlation and Kendall's tau are often used for measuring and testing association between two continuous or ordered categorical responses. However, the established statistical properties of these tests are only valid when each pair of responses are independent, where each sampling unit has only one pair of responses. When each sampling unit consists of a cluster of paired responses, the assumption of independent pairs is violated. In this article, we apply the within-cluster resampling technique to U-statistics to form new tests and rank-based correlation estimators for possibly tied clustered data. We develop large sample properties of the new proposed tests and estimators and evaluate their performance by simulations. The proposed methods are applied to a data set collected from a PET/CT imaging study for illustration. Published 2017. This article is a U.S. Government work and is in the public domain in the USA.
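The within-cluster resampling idea can be sketched generically: repeatedly draw one pair per cluster (so each resampled data set satisfies the independence assumption), compute the rank correlation on it, and average over resamples. The sketch below uses Kendall's tau for simplicity and is an illustration of the general technique, not the U-statistic estimators developed in the article.

```python
import random
from itertools import combinations

def kendall_tau(pairs):
    """Kendall's tau-a for a list of (x, y) pairs without ties."""
    n = len(pairs)
    s = 0
    for (x1, y1), (x2, y2) in combinations(pairs, 2):
        s += ((x1 > x2) - (x1 < x2)) * ((y1 > y2) - (y1 < y2))
    return s / (n * (n - 1) / 2)

def within_cluster_tau(clusters, n_resamples=500, seed=0):
    """Average tau over resamples keeping one randomly chosen pair per cluster."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_resamples):
        one_per_cluster = [rng.choice(c) for c in clusters]
        total += kendall_tau(one_per_cluster)
    return total / n_resamples

# Toy clusters whose pairs are perfectly concordant: every resample gives tau = 1.
clusters = [[(1, 1), (1.2, 1.3)], [(2, 2), (2.5, 2.4)], [(3, 3)]]
est = within_cluster_tau(clusters)
```

Averaging over resamples rather than picking a single pair per cluster removes the arbitrariness of the selection while keeping each individual resample valid for standard inference.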
Directory of Open Access Journals (Sweden)
Donald W. Zimmerman
2004-01-01
It is well known that the two-sample Student t test fails to maintain its significance level when the variances of treatment groups are unequal, and, at the same time, sample sizes are unequal. However, introductory textbooks in psychology and education often maintain that the test is robust to variance heterogeneity when sample sizes are equal. The present study discloses that, for a wide variety of non-normal distributions, especially skewed distributions, the Type I error probabilities of both the t test and the Wilcoxon-Mann-Whitney test are substantially inflated by heterogeneous variances, even when sample sizes are equal. The Type I error rate of the t test performed on ranks replacing the scores (rank-transformed data) is inflated in the same way and always corresponds closely to that of the Wilcoxon-Mann-Whitney test. For many probability densities, the distortion of the significance level is far greater after transformation to ranks and, contrary to known asymptotic properties, the magnitude of the inflation is an increasing function of sample size. Although nonparametric tests of location also can be sensitive to differences in the shape of distributions apart from location, the Wilcoxon-Mann-Whitney test and rank-transformation tests apparently are influenced mainly by skewness that is accompanied by specious differences in the means of ranks.
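The kind of Type I error simulation described can be sketched with the standard library: draw both groups from zero-mean populations with unequal variances, apply the pooled-variance t test, and count rejections. The skewed populations below (shifted exponentials) are illustrative choices, not the exact densities used in the study.

```python
import math
import random
import statistics

def pooled_t(a, b):
    """Two-sample Student t statistic with pooled variance."""
    na, nb = len(a), len(b)
    sp2 = ((na - 1) * statistics.variance(a) +
           (nb - 1) * statistics.variance(b)) / (na + nb - 2)
    return (statistics.mean(a) - statistics.mean(b)) / math.sqrt(sp2 * (1 / na + 1 / nb))

random.seed(1)
n, reps, crit = 10, 2000, 2.101            # crit ~ two-sided t_{0.975} with 18 df
rejections = 0
for _ in range(reps):
    # Equal means (H0 true), equal n, skewed shapes, unequal variances (1 vs 9).
    a = [random.expovariate(1.0) - 1.0 for _ in range(n)]
    b = [3.0 * (random.expovariate(1.0) - 1.0) for _ in range(n)]
    if abs(pooled_t(a, b)) > crit:
        rejections += 1
type1_rate = rejections / reps             # nominal level is 0.05
```

Comparing `type1_rate` with the nominal 0.05, and repeating with ranks substituted for scores, reproduces the kind of inflation the article documents.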
Endogenous Versus Exogenous Shocks in Complex Networks: An Empirical Test Using Book Sale Rankings
Sornette, D.; Deschâtres, F.; Gilbert, T.; Ageon, Y.
2004-11-01
We study the precursory and recovery signatures accompanying shocks in complex networks, that we test on a unique database of the Amazon.com ranking of book sales. We find clear distinguishing signatures classifying two types of sales peaks. Exogenous peaks occur abruptly and are followed by a power law relaxation, while endogenous peaks occur after a progressively accelerating power law growth followed by an approximately symmetrical power law relaxation which is slower than for exogenous peaks. These results are rationalized quantitatively by a simple model of epidemic propagation of interactions with long memory within a network of acquaintances. The observed relaxation of sales implies that the sales dynamics is dominated by cascades rather than by the direct effects of news or advertisements, indicating that the social network is close to critical.
American Society for Testing and Materials. Philadelphia
2005-01-01
1.1 This test method covers laboratory procedures for determining the resistance of materials to sliding wear. The test utilizes a block-on-ring friction and wear testing machine to rank pairs of materials according to their sliding wear characteristics under various conditions. 1.2 An important attribute of this test is that it is very flexible. Any material that can be fabricated into, or applied to, blocks and rings can be tested. Thus, the potential materials combinations are endless. However, the interlaboratory testing has been limited to metals. In addition, the test can be run with various lubricants, liquids, or gaseous atmospheres, as desired, to simulate service conditions. Rotational speed and load can also be varied to better correspond to service requirements. 1.3 The values stated in SI units are to be regarded as the standard. The values given in parentheses are for information only. Wear test results are reported as the volume loss in cubic millimetres for both the block and ring. Materials...
International Nuclear Information System (INIS)
Jackson, A.D.; Weiss, C.; Wirzba, A.
1990-01-01
The Skyrme model has the same high density behavior as a free quark gas. However, the inclusion of higher-order terms spoils this agreement. We consider the all-order sum of a class of chiral invariant Lagrangians of even order in L_μ suggested by Marleau. We prove Marleau's conjecture that these terms are of second order in the derivatives of the chiral angle for the hedgehog case and show the terms are unique under the additional condition that, for each order, the identity map on the 3-sphere S^3(L) is a solution. The general form of the summation can be restricted by physical constraints leading to stable results. Under the assumption that the Lagrangian scales like the non-linear sigma model at low densities and like the free quark gas at high densities, we prove that a chiral phase transition must occur. (orig.)
Hall, M B; Mertens, D R
2012-04-01
In vitro neutral detergent fiber (NDF) digestibility (NDFD) is an empirical measurement of fiber fermentability by rumen microbes. Variation is inherent in all assays and may be increased as multiple steps or differing procedures are used to assess an empirical measure. The main objective of this study was to evaluate variability within and among laboratories of 30-h NDFD values analyzed in repeated runs. Subsamples of alfalfa (n=4), corn forage (n=5), and grass (n=5) ground to pass a 6-mm screen passed a test for homogeneity. The 14 samples were sent to 10 laboratories on 3 occasions over 12 mo. Laboratories ground the samples and ran 1 to 3 replicates of each sample within fermentation run and analyzed 2 or 3 sets of samples. Laboratories used 1 of 2 NDFD procedures: 8 labs used procedures related to the 1970 Goering and Van Soest (GVS) procedure using fermentation vessels or filter bags, and 2 used a procedure with preincubated inoculum (PInc). Means and standard deviations (SD) of sample replicates within run within laboratory (lab) were evaluated with a statistical model that included lab, run within lab, sample, and lab × sample interaction as factors. All factors affected mean values for 30-h NDFD. The lab × sample effect suggests against a simple lab bias in mean values. The SD ranged from 0.49 to 3.37% NDFD and were influenced by lab and run within lab. The GVS procedure gave greater NDFD values than PInc, with an average difference across all samples of 17% NDFD. Because of the differences between GVS and PInc, we recommend using results in contexts appropriate to each procedure. The 95% probability limits for within-lab repeatability and among-lab reproducibility for GVS mean values were 10.2 and 13.4%, respectively. These percentages describe the span of the range around the mean into which 95% of analytical results for a sample fall for values generated within a lab and among labs. This degree of precision was supported in that the average maximum
Schnur, P
1977-11-01
Two experiments investigated the effects of exemplar ranking on retention. High-ranking exemplars are words judged to be prototypical of a given category; low-ranking exemplars are words judged to be atypical of a given category. In Experiment 1, an incidental learning paradigm was used to measure reaction time to answer an encoding question as well as subsequent recognition. It was found that low-ranking exemplars were classified more slowly but recognized better than high-ranking exemplars. Other comparisons of the effects of category encoding, rhyme encoding, and typescript encoding on response latency and recognition replicated the results of Craik and Tulving (1975). In Experiment 2, unanticipated free recall of five previously learned paired-associate lists revealed that a list composed of low-ranking exemplars was better recalled than a comparable list composed of high-ranking exemplars. Moreover, this was true only when the lists were studied in the context of appropriate category cues. These findings are discussed in terms of the encoding elaboration hypothesis.
Safieddine, Doha; Chkeir, Aly; Herlem, Cyrille; Bera, Delphine; Collart, Michèle; Novella, Jean-Luc; Dramé, Moustapha; Hewson, David J; Duchêne, Jacques
2017-11-01
Falls are a major cause of death in older people. One method used to predict falls is analysis of Centre of Pressure (CoP) displacement, which provides a measure of balance quality. The Balance Quality Tester (BQT) is a device based on a commercial bathroom scale that calculates instantaneous values of vertical ground reaction force (Fz) as well as the CoP in both anteroposterior (AP) and mediolateral (ML) directions. The entire testing process needs to take no longer than 12 s to ensure subject compliance, making it vital that calculations related to balance are only calculated for the period when the subject is static. In the present study, a method is presented to detect the stabilization period after a subject has stepped onto the BQT. Four different phases of the test are identified (stepping-on, stabilization, balancing, stepping-off), ensuring that subjects are static when parameters from the balancing phase are calculated. The method, based on a simplified cumulative sum (CUSUM) algorithm, could detect the change between unstable and stable stance. The time taken to stabilize significantly affected the static balance variables of surface area and trajectory velocity, and was also related to Timed-up-and-Go performance. Such a finding suggests that the time to stabilize could be a worthwhile parameter to explore as a potential indicator of balance problems and fall risk in older people. Copyright © 2017 IPEM. Published by Elsevier Ltd. All rights reserved.
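The BQT's exact detector is not given in the abstract, but a simplified one-sided CUSUM of the kind mentioned can be sketched as follows; the target, slack, and threshold values are illustrative, and a real stabilization detector would track a drop in force variability rather than a clean step.

```python
def cusum_detect(signal, target, slack, threshold):
    """Return the first index where the one-sided CUSUM exceeds the threshold."""
    s = 0.0
    for i, x in enumerate(signal):
        s = max(0.0, s + (x - target - slack))  # accumulate upward drift only
        if s > threshold:
            return i
    return None

# Noise-free step change at index 20: the alarm fires a few samples later.
signal = [0.0] * 20 + [1.0] * 20
alarm = cusum_detect(signal, target=0.0, slack=0.25, threshold=2.0)
```

The slack parameter makes the statistic insensitive to small fluctuations around the target, while the threshold trades detection delay against false alarms — the same trade-off that determines how quickly the 12-second test window can be segmented into its four phases.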
Data depth and rank-based tests for covariance and spectral density matrices
Chau, Joris; Ombao, Hernando; Sachs, Rainer von
2017-06-26
In multivariate time series analysis, objects of primary interest to study cross-dependences in the time series are the autocovariance or spectral density matrices. Non-degenerate covariance and spectral density matrices are necessarily Hermitian and positive definite, and our primary goal is to develop new methods to analyze samples of such matrices. The main contribution of this paper is the generalization of the concept of statistical data depth for collections of covariance or spectral density matrices by exploiting the geometric properties of the space of Hermitian positive definite matrices as a Riemannian manifold. This allows one to naturally characterize most central or outlying matrices, but also provides a practical framework for rank-based hypothesis testing in the context of samples of covariance or spectral density matrices. First, the desired properties of a data depth function acting on the space of Hermitian positive definite matrices are presented. Second, we propose two computationally efficient pointwise and integrated data depth functions that satisfy each of these requirements. Several applications of the developed methodology are illustrated by the analysis of collections of spectral matrices in multivariate brain signal time series datasets.
Kim, Seonghoon
2013-01-01
With known item response theory (IRT) item parameters, Lord and Wingersky provided a recursive algorithm for computing the conditional frequency distribution of number-correct test scores, given proficiency. This article presents a generalized algorithm for computing the conditional distribution of summed test scores involving real-number item…
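The classic dichotomous-item version of the Lord-Wingersky recursion is easy to state: fold the items in one at a time, splitting each probability mass between "correct" and "incorrect". The sketch below covers only 0/1-scored items, not the real-number scoring generalization the article develops.

```python
def lw_score_distribution(p_correct):
    """Conditional distribution of the number-correct score given proficiency.

    p_correct: probability of a correct response on each item at a fixed theta
               (e.g. computed from IRT item parameters).
    Returns a list where entry s is P(summed score = s).
    """
    dist = [1.0]                          # zero items: score 0 with certainty
    for p in p_correct:
        new = [0.0] * (len(dist) + 1)
        for s, mass in enumerate(dist):
            new[s] += mass * (1.0 - p)    # item answered incorrectly
            new[s + 1] += mass * p        # item answered correctly
        dist = new
    return dist

dist = lw_score_distribution([0.5, 0.5])  # two items, each with p = 0.5
```

The recursion runs in O(items × scores) time, which is why it remains the standard way to obtain summed-score distributions from IRT models.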
The spin-dependent structure function of the proton g(1)(p) and a test of the Bjorken sum rule
Czech Academy of Sciences Publication Activity Database
Alekseev, M.; Alexakhin, V. Yu.; Alexandrov, Yu.; Alexeev, G. D.; Amoroso, A.; Austregisilio, A.; Badelek, B.; Balestra, F.; Ball, J.; Barth, J.; Baum, G.; Bedfer, Y.; Bernhard, J.; Bertini, R.; Bettinelli, M.; Birsa, R.; Bisplinghoff, J.; Bordalo, P.; Bradamante, F.; Bravar, A.; Bressan, A.; Brona, G.; Burtin, E.; Bussa, M.; Chaberny, D.; Chiosso, M.; Chung, S.U.; Cicuttin, A.; Colantoni, M.; Cotic, D.; Crespo, M.; Dalla Torre, S.; Das, S.; Dasgupta, S. S.; Denisov, O.; Dhara, L.; Diaz, V.; Donskov, S.; Doshita, N.; Duic, V.; Dünnweber, W.; Efremov, A.V.; El Alaoui, A.; Eversheim, P.; Eyrich, W.; Faessler, M.; Ferrero, A.; Filin, A.; Finger, M.; Finger jr., M.; Fischer, H.; Franco, C.; Friedrich, J.; Garfagnini, R.; Gautheron, F.; Gavrichtchouk, O.; Gazda, R.; Gerassimov, S.; Geyer, R.; Giorgi, M.; Gnesi, I.; Gobbo, B.; Goertz, S.; Grabmüller, S.; Grasso, A.; Grube, B.; Gushterski, R.; Guskov, A.; Haas, F.; von Harrach, D.; Hasegawa, T.; Heinsius, F.; Hermann, R.; Herrmann, F.; Hess, C.; Hinterberger, F.; Horikawa, N.; Höppner, Ch.; d'Hose, N.; Ilgner, C.; Ishimoto, S.; Ivanov, O.; Ivanshin, Yu.; Iwata, T.; Jahn, R.; Jasinski, P.; Jegou, G.; Joosten, R.; Kabuss, E.; Käfer, W.; Kang, D.; Ketzer, B.; Khaustov, G.; Khokhlov, Y.; Kisselev, Y.; Klein, F.; Klimaszewski, K.; Koblitz, S.; Koivuniemi, J.; Kolosov, V.; Kondo, K.; Königsmann, K.; Konopka, R.; Konorov, I.; Konstantinov, V.; Korzenev, A.; Kotzinian, A.; Kouznetsov, O.; Kowalik, K.; Krämer, M.; Kral, A.; Kroumchtein, Z.; Kuhn, R.; Kunne, F.; Kurek, K.; Lauser, L.; Le Goff, J.; Lednev, A.; Lehmann, A.; Levorato, S.; Lichtenstadt, J.; Liska, T.; Maggiora, A.; Maggiora, M.; Magnon, A.; Mallot, G.; Mann, A.; Marchand, C.; Marroncle, J.; Martin, A.; Marzec, J.; Massmann, F.; Matsuda, T.; Meyer, W.; Michigami, T.; Mikhailov, Y.; Moinester, M.; Mutter, A.; Nagaytsev, A.; Nagel, T.; Nassalski, J.; Negrini, S.; Nerling, F.; Neubert, S.; Neyret, D.; Nikolaenko, V.; Nunes, A.S.; Olshevsky, A.; Ostrick, M.; Padee, A.; 
Panknin, R.; Panzieri, D.; Parsamyan, B.; Paul, S.; Pawlukiewicz-Kaminska, B.; Perevalova, E.; Pesaro, G.; Peshekhonov, D.; Piragino, G.; Platchkov, S.; Pochodzalla, J.; Polak, J.; Polyakov, V.; Pontecorvo, G.; Pretz, J.; Quintans, C.; Rajotte, J.; Ramos, S.; Rapatsky, V.; Reicherz, G.; Richter, A.; Robinet, F.; Rocco, E.; Rondio, E.; Ryabchikov, D.; Samoylenko, V.; Sandacz, A.; Santos, H.; Sapozhnikov, M.; Sarkar, S.; Savin, I.; Sbrizzai, G.; Schiavon, P.; Schill, C.; Schlütter, T.; Schmitt, L.; Schopferer, S.; Schröder, W.; Shevchenko, O.; Siebert, H.; Silva, L.; Sinha, L.; Sissakian, A.; Slunecka, M.; Smirnov, G.; Sosio, S.; Sozzi, F.; Srnka, Aleš; Stolarski, M.; Sulc, M.; Sulej, R.; Takekawa, S.; Tessaro, S.; Tessarotto, F.; Teufel, A.; Tkatchev, L.; Uhl, S.; Uman, I.; Virius, M.; Vlassov, N.; Vossen, A.; Weitzel, Q.; Windmolders, R.; Wislicki, W.; Wollny, H.; Zaremba, K.; Zavertyaev, M.; Zemlyanichkina, E.; Ziembicki, M.; Zhao, J.; Zhuravlev, N.; Zvyagin, A.
2010-01-01
Roč. 690, č. 5 (2010), s. 466-472 ISSN 0370-2693 R&D Projects: GA MŠk ME 492 Institutional research plan: CEZ:AV0Z20650511 Keywords : deep inelastic scattering * structure function * QCD analysis * Bjorken sum rule Subject RIV: BG - Nuclear, Atomic and Molecular Physics, Colliders Impact factor: 5.255, year: 2010
RankProdIt: A web-interactive Rank Products analysis tool
Directory of Open Access Journals (Sweden)
Laing Emma
2010-08-01
Background: The first objective of a DNA microarray experiment is typically to generate a list of genes or probes that are found to be differentially expressed or represented (in the case of comparative genomic hybridizations and/or copy number variation) between two conditions or strains. Rank Products analysis comprises a robust algorithm for deriving such lists from microarray experiments that comprise small numbers of replicates, for example, fewer than the number required for the commonly used t-test. Currently, users wishing to apply Rank Products analysis to their own microarray data sets have been restricted to the use of command line-based software, which can limit its usage within the biological community. Findings: Here we have developed a web interface to existing Rank Products analysis tools, allowing users to quickly process their data in an intuitive and step-wise manner to obtain the respective Rank Product or Rank Sum, probability of false prediction and p-values in a downloadable file. Conclusions: The online interactive Rank Products analysis tool RankProdIt, for analysis of any data set containing measurements for multiple replicated conditions, is available at: http://strep-microarray.sbs.surrey.ac.uk/RankProducts
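The core Rank Product statistic behind the tool is straightforward: rank the genes within each replicate and take the geometric mean of a gene's ranks, so small values flag genes that are consistently near the top. The sketch ignores ties and the permutation step that converts RP values into p-values and false-prediction probabilities.

```python
import math

def rank_products(data):
    """data[g][j]: fold change of gene g in replicate j. Returns one RP per gene."""
    n_genes, n_reps = len(data), len(data[0])
    ranks = [[0] * n_reps for _ in range(n_genes)]
    for j in range(n_reps):
        by_fold_change = sorted(range(n_genes), key=lambda g: -data[g][j])
        for r, g in enumerate(by_fold_change):
            ranks[g][j] = r + 1           # rank 1 = most up-regulated
    return [math.prod(ranks[g]) ** (1.0 / n_reps) for g in range(n_genes)]

# Three genes, two replicates: gene 0 is consistently top-ranked.
rp = rank_products([[10.0, 9.0], [1.0, 2.0], [5.0, 5.0]])
```

Because only ranks enter the statistic, a single outlying fold change cannot dominate the result, which is why the method is usable with very few replicates.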
Minkowski metrics in creating universal ranking algorithms
Directory of Open Access Journals (Sweden)
Andrzej Ameljańczyk
2014-06-01
The paper presents a general procedure for creating rankings of a set of objects in which the preference relation is based on an arbitrary ranking function. The analysis of admissible ranking functions begins by showing the fundamental drawbacks of the commonly used functions of weighted-sum form. As a special case of the ranking procedure in the space of a relation, a procedure based on the notion of an ideal element and the generalized Minkowski distance from that element is proposed. This procedure, presented as a universal ranking algorithm, eliminates most of the disadvantages of weighted-sum ranking functions. Keywords: ranking functions, preference relation, ranking clusters, categories, ideal point, universal ranking algorithm
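The ideal-point idea can be sketched directly: take the per-criterion optimum over the objects as the ideal element and rank objects by their Minkowski distance of order p to it. The data and the choice p = 2 below are illustrative, not from the paper.

```python
def minkowski_ideal_rank(objects, p=2.0):
    """Rank objects (criteria assumed to be maximized) by distance to the ideal point."""
    ideal = [max(column) for column in zip(*objects)]  # per-criterion best value

    def distance(obj):
        return sum(abs(i - x) ** p for i, x in zip(ideal, obj)) ** (1.0 / p)

    return sorted(range(len(objects)), key=lambda k: distance(objects[k]))

# Object 0 attains the ideal on both criteria; object 2 is closer than object 1.
ranking = minkowski_ideal_rank([[1.0, 1.0], [0.2, 0.9], [0.9, 0.6]])
```

Unlike a weighted sum, this distance penalizes being far from the ideal on any criterion, and varying p interpolates between total-deviation (p = 1) and worst-criterion (p → ∞) behavior.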
Gershenson, Carlos
Studies of rank distributions have been popular for decades, especially since the work of Zipf. For example, if we rank words of a given language by use frequency (most used word in English is 'the', rank 1; second most common word is 'of', rank 2), the distribution can be approximated roughly with a power law. The same applies for cities (most populated city in a country ranks first), earthquakes, metabolism, the Internet, and dozens of other phenomena. We recently proposed "rank diversity" to measure how ranks change in time, using the Google Books Ngram dataset. Studying six languages between 1800 and 2009, we found that the rank diversity curves of languages are universal, adjusted with a sigmoid on log-normal scale. We are studying several other datasets (sports, economies, social systems, urban systems, earthquakes, artificial life). Rank diversity seems to be universal, independently of the shape of the rank distribution. I will present our work in progress towards a general description of the features of rank change in time, along with simple models which reproduce it
The spin structure function g1p of the proton and a test of the Bjorken sum rule
Directory of Open Access Journals (Sweden)
C. Adolph
2016-02-01
Full Text Available New results for the double spin asymmetry A1p and the proton longitudinal spin structure function g1p are presented. They were obtained by the COMPASS Collaboration using polarised 200 GeV muons scattered off a longitudinally polarised NH3 target. The data were collected in 2011 and complement those recorded in 2007 at 160 GeV, in particular at lower values of x. They improve the statistical precision of g1p(x) by about a factor of two in the region x≲0.02. A next-to-leading order QCD fit to the g1 world data is performed. It leads to a new determination of the quark spin contribution to the nucleon spin, ΔΣ, ranging from 0.26 to 0.36, and to a re-evaluation of the first moment of g1p. The uncertainty of ΔΣ is mostly due to the large uncertainty in the present determinations of the gluon helicity distribution. A new evaluation of the Bjorken sum rule based on the COMPASS results for the non-singlet structure function g1NS(x,Q2) yields for the ratio of the axial and vector coupling constants |gA/gV| = 1.22 ± 0.05 (stat.) ± 0.10 (syst.), which validates the sum rule to an accuracy of about 9%.
Custodio, Tomas; Garcia, Jose; Markovski, Jasmina; McKay Gifford, James; Hristovski, Kiril D; Olson, Larry W
2017-12-15
The underlying hypothesis of this study was that pseudo-equilibrium and column testing conditions would provide the same sorbent ranking trends, although the values of the sorbents' performance descriptors (e.g. sorption capacity) may vary because of the different kinetics and competition effects induced by the two testing approaches. To address this hypothesis, nano-enabled hybrid media were fabricated and their removal performances were assessed for two model contaminants under multi-point batch pseudo-equilibrium and continuous-flow conditions. Calculation of simultaneous removal capacity (SRC) indices demonstrated that the more resource-demanding continuous-flow tests generate the same performance rankings as the ones obtained by conducting the simpler pseudo-equilibrium tests. Furthermore, the continuous overlap between the 98% confidence boundaries for each SRC index trend not only validated the hypothesis that both testing conditions provide the same ranking trends, but also indicated that the SRC indices are statistically the same for each medium, regardless of the employed method. In scenarios where rapid screening of new media is required to obtain the best-performing synthesis formulation, use of pseudo-equilibrium tests proved to be reliable. Considering that kinetics-induced effects on sorption capacity must not be neglected, the more resource-demanding column tests could be conducted only with the top-performing media that exhibit the highest sorption capacity. Copyright © 2017 Elsevier B.V. All rights reserved.
On the matched pairs sign test using bivariate ranked set sampling ...
African Journals Online (AJOL)
A matched pairs sign test based on bivariate ranked set sampling (BVRSS) is introduced and investigated. We show that this test is asymptotically more efficient than its counterpart sign test based on a bivariate simple random sample (BVSRS). The asymptotic null distribution and the efficiency of the test are derived.
Bakker, Marjan; Wicherts, Jelte M
2014-09-01
In psychology, outliers are often excluded before running an independent samples t test, and data are often nonnormal because of the use of sum scores based on tests and questionnaires. This article concerns the handling of outliers in the context of independent samples t tests applied to nonnormal sum scores. After reviewing common practice, we present results of simulations of artificial and actual psychological data, which show that the removal of outliers based on commonly used Z value thresholds severely increases the Type I error rate. We found Type I error rates of above 20% after removing outliers with a threshold value of Z = 2 in a short and difficult test. Inflations of Type I error rates are particularly severe when researchers are given the freedom to alter threshold values of Z after having seen the effects thereof on outcomes. We recommend the use of nonparametric Mann-Whitney-Wilcoxon tests or robust Yuen-Welch tests without removing outliers. These alternatives to independent samples t tests are found to have nominal Type I error rates with a minimal loss of power when no outliers are present in the data and to have nominal Type I error rates and good power when outliers are present. PsycINFO Database Record (c) 2014 APA, all rights reserved.
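A minimal sketch of the recommended alternative: instead of removing outliers with a Z-value rule before a t test, keep all observations and compare the groups with the rank-based Mann-Whitney-Wilcoxon test. The data below are hypothetical sum scores; SciPy is assumed:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Two groups of sum scores from the same skewed distribution
# (no true group difference), with one extreme score in group b.
a = rng.binomial(n=10, p=0.2, size=30).astype(float)
b = rng.binomial(n=10, p=0.2, size=30).astype(float)
b[0] = 10.0  # an extreme score that a Z > 2 rule would tempt us to remove

# Rank-based comparison: the outlier contributes only its rank,
# so no ad hoc removal decision is needed.
u_stat, p_value = stats.mannwhitneyu(a, b, alternative="two-sided")
```

Because the test operates on ranks, a single extreme value cannot dominate the statistic, which is why the nominal Type I error rate is preserved without any data-dependent trimming.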
The Spin-dependent Structure Function of the Proton $g_{1}^p$ and a Test of the Bjorken Sum Rule
Alekseev, M.G.; Alexandrov, Yu.; Alexeev, G.D.; Amoroso, A.; Austregesilo, A.; Badelek, B.; Balestra, F.; Ball, J.; Barth, J.; Baum, G.; Bedfer, Y.; Bernhard, J.; Bertini, R.; Bettinelli, M.; Birsa, R.; Bisplinghoff, J.; Bordalo, P.; Bradamante, F.; Bravar, A.; Bressan, A.; Brona, G.; Burtin, E.; Bussa, M.P.; Chaberny, D.; Cotic, D.; Chiosso, M.; Chung, S.U.; Cicuttin, A.; Colantoni, M.; Crespo, M.L.; Dalla Torre, S.; Das, S.; Dasgupta, S.S.; Denisov, O.Yu.; Dhara, L.; Diaz, V.; Donskov, S.V.; Doshita, N.; Duic, V.; Dunnweber, W.; Efremov, A.; El Alaoui, A.; Eversheim, P.D.; Eyrich, W.; Faessler, M.; Ferrero, A.; Filin, A.; Finger, M.; Finger, M., Jr.; Fischer, H.; Franco, C.; Friedrich, J.M.; Garfagnini, R.; Gautheron, F.; Gavrichtchouk, O.P.; Gazda, R.; Gerassimov, S.; Geyer, R.; Giorgi, M.; Gnesi, I.; Gobbo, B.; Goertz, S.; Grabmuller, S.; Grasso, A.; Grube, B.; Gushterski, R.; Guskov, A.; Haas, F.; von Harrach, D.; Hasegawa, T.; Heinsius, F.H.; Hermann, R.; Herrmann, F.; Hess, C.; Hinterberger, F.; Horikawa, N.; Hoppner, Ch.; d'Hose, N.; Ilgner, C.; Ishimoto, S.; Ivanov, O.; Ivanshin, Yu.; Iwata, T.; Jahn, R.; Jasinski, P.; Jegou, G.; Joosten, R.; Kabuss, E.; Kafer, W.; Kang, D.; Ketzer, B.; Khaustov, G.V.; Khokhlov, Yu.A.; Kisselev, Yu.; Klein, F.; Klimaszewski, K.; Koblitz, S.; Koivuniemi, J.H.; Kolosov, V.N.; Kondo, K.; Konigsmann, K.; Konopka, R.; Konorov, I.; Konstantinov, V.F.; Korzenev, A.; Kotzinian, A.M.; Kouznetsov, O.; Kowalik, K.; Kramer, M.; Kral, A.; Kroumchtein, Z.V.; Kuhn, R.; Kunne, F.; Kurek, K.; Lauser, L.; Le Goff, J.M.; Lednev, A.A.; Lehmann, A.; Levorato, S.; Lichtenstadt, J.; Liska, T.; Maggiora, A.; Maggiora, M.; Magnon, A.; Mallot, G.K.; Mann, A.; Marchand, C.; Marroncle, J.; Martin, A.; Marzec, J.; Massmann, F.; Matsuda, T.; Maximov, A.N.; Meyer, W.; Michigami, T.; Mikhailov, Yu.V.; Moinester, M.A.; Mutter, A.; Nagaytsev, A.; Nagel, T.; Nassalski, J.; Negrini, T.; Nerling, F.; Neubert, S.; Neyret, D.; Nikolaenko, V.I.; Nunes, A.S.; 
Olshevsky, A.G.; Ostrick, M.; Padee, A.; Panknin, R.; Panzieri, D.; Parsamyan, B.; Paul, S.; Pawlukiewicz-Kaminska, B.; Perevalova, E.; Pesaro, G.; Peshekhonov, D.V.; Piragino, G.; Platchkov, S.; Pochodzalla, J.; Polak, J.; Polyakov, V.A.; Pontecorvo, G.; Pretz, J.; Quintans, C.; Rajotte, J.F.; Ramos, S.; Rapatsky, V.; Reicherz, G.; Richter, A.; Robinet, F.; Rocco, E.; Rondio, E.; Ryabchikov, D.I.; Samoylenko, V.D.; Sandacz, A.; Santos, H.; Sapozhnikov, M.G.; Sarkar, S.; Savin, I.A.; Sbrizzai, G.; Schiavon, P.; Schill, C.; Schmitt, L.; Schluter, T.; Schopferer, S.; Schroder, W.; Shevchenko, O.Yu.; Siebert, H.W.; Silva, L.; Sinha, L.; Sissakian, A.N.; Slunecka, M.; Smirnov, G.I.; Sosio, S.; Sozzi, F.; Srnka, A.; Stolarski, M.; Sulc, M.; Sulej, R.; Takekawa, S.; Tessaro, S.; Tessarotto, F.; Teufel, A.; Tkatchev, L.G.; Uhl, S.; Uman, I.; Virius, M.; Vlassov, N.V.; Vossen, A.; Weitzel, Q.; Windmolders, R.; Wislicki, W.; Wollny, H.; Zaremba, K.; Zavertyaev, M.; Zemlyanichkina, E.; Ziembicki, M.; Zhao, J.; Zhuravlev, N.; Zvyagin, A.
2010-01-01
The inclusive double-spin asymmetry, $A_{1}^{p}$, has been measured at COMPASS in deep-inelastic polarised muon scattering off a large polarised NH3 target. The data, collected in the year 2007, cover the range Q2 > 1 (GeV/c)^2, 0.004 < x < 0.7 and improve the statistical precision of g_{1}^{p}(x) by a factor of two in the region x < 0.02. The new proton asymmetries are combined with those previously published for the deuteron to extract the non-singlet spin-dependent structure function g_1^NS(x,Q2). The isovector quark density, Delta_q_3(x,Q2), is evaluated from a NLO QCD fit of g_1^NS. The first moment of Delta_q_3 is in good agreement with the value predicted by the Bjorken sum rule and corresponds to a ratio of the axial and vector coupling constants g_A/g_V = 1.28 ± 0.07 (stat.) ± 0.10 (syst.).
The Spin Structure Function $g_1^{\\rm p}$ of the Proton and a Test of the Bjorken Sum Rule
Adolph, C.; Alexeev, M.G.; Alexeev, G.D.; Amoroso, A.; Andrieux, V.; Anosov, V.; Austregesilo, A.; Azevedo, C.; Badelek, B.; Balestra, F.; Barth, J.; Baum, G.; Beck, R.; Bedfer, Y.; Bernhard, J.; Bicker, K.; Bielert, E.R.; Birsa, R.; Bisplinghoff, J.; Bodlak, M.; Boer, M.; Bordalo, P.; Bradamante, F.; Braun, C.; Bressan, A.; Buchele, M.; Burtin, E.; Capozza, L.; Chang, W.C.; Chiosso, M.; Choi, I.; Chung, S.U.; Cicuttin, A.; Crespo, M.L.; Curiel, Q.; Dalla Torre, S.; Dasgupta, S.S.; Dasgupta, S.; Denisov, O.Yu.; Dhara, L.; Donskov, S.V.; Doshita, N.; Duic, V.; Dziewiecki, M.; Efremov, A.; Eversheim, P.D.; Eyrich, W.; Ferrero, A.; Finger, M.; M. Finger jr; Fischer, H.; Franco, C.; von Hohenesche, N. du Fresne; Friedrich, J.M.; Frolov, V.; Fuchey, E.; Gautheron, F.; Gavrichtchouk, O.P.; Gerassimov, S.; Giordano, F.; Gnesi, I.; Gorzellik, M.; Grabmuller, S.; Grasso, A.; Grosse-Perdekamp, M.; Grube, B.; Grussenmeyer, T.; Guskov, A.; Haas, F.; Hahne, D.; von Harrach, D.; Hashimoto, R.; Heinsius, F.H.; Herrmann, F.; Hinterberger, F.; Horikawa, N.; d'Hose, N.; Hsieh, C.Yu; Huber, S.; Ishimoto, S.; Ivanov, A.; Ivanshin, Yu.; Iwata, T.; Jahn, R.; Jary, V.; Jorg, P.; Joosten, R.; Kabuss, E.; Ketzer, B.; Khaustov, G.V.; Khokhlov, Yu. A.; Kisselev, Yu.; Klein, F.; Klimaszewski, K.; Koivuniemi, J.H.; Kolosov, V.N.; Kondo, K.; Konigsmann, K.; Konorov, I.; Konstantinov, V.F.; Kotzinian, A.M.; Kouznetsov, O.; Kramer, M.; Kremser, P.; Krinner, F.; Kroumchtein, Z.V.; Kuchinski, N.; Kunne, F.; Kurek, K.; Kurjata, R.P.; Lednev, A.A.; Lehmann, A.; Levillain, M.; Levorato, S.; Lichtenstadt, J.; Longo, R.; Maggiora, A.; Magnon, A.; Makins, N.; Makke, N.; Mallot, G.K.; Marchand, C.; Martin, A.; Marzec, J.; Matousek, J.; Matsuda, H.; Matsuda, T.; Meshcheryakov, G.; Meyer, W.; Michigami, T.; Mikhailov, Yu. V.; Miyachi, Y.; Nagaytsev, A.; Nagel, T.; Nerling, F.; Neyret, D.; Nikolaenko, V.I.; Novy, J.; Nowak, W.D.; Nunes, A.S.; Olshevsky, A.G.; Orlov, I.; Ostrick, M.; Panzieri, D.; Parsamyan, B.; Paul, S.; Peng, J.C.; Pereira, F.; Pesek, M.; Peshekhonov, D.V.; Platchkov, S.; Pochodzalla, J.; Polyakov, V.A.; Pretz, J.; Quaresma, M.; Quintans, C.; Ramos, S.; Regali, C.; Reicherz, G.; Riedl, C.; Rocco, E.; Rossiyskaya, N.S.; Ryabchikov, D.I.; Rychter, A.; Samoylenko, V.D.; Sandacz, A.; Santos, C.; Sarkar, S.; Savin, I.A.; Sbrizzai, G.; Schiavon, P.; Schmidt, K.; Schmieden, H.; Schonning, K.; Schopferer, S.; Selyunin, A.; Shevchenko, O.Yu.; Silva, L.; Sinha, L.; Sirtl, S.; Slunecka, M.; Sozzi, F.; Srnka, A.; Stolarski, M.; Sulc, M.; Suzuki, H.; Szabelski, A.; Szameitat, T.; Sznajder, P.; Takekawa, S.; Wolbeek, J. ter; Tessaro, S.; Tessarotto, F.; Thibaud, F.; Tosello, F.; Tskhay, V.; Uhl, S.; Veloso, J.; Virius, M.; Weisrock, T.; Wilfert, M.; Windmolders, R.; Zaremba, K.; Zavertyaev, M.; Zemlyanichkina, E.; Ziembicki, M.; Zink, A.
2016-02-10
New results for the double spin asymmetry $A_1^{\\rm p}$ and the proton longitudinal spin structure function $g_1^{\\rm p}$ are presented. They were obtained by the COMPASS collaboration using polarised 200 GeV muons scattered off a longitudinally polarised NH$_3$ target. The data were collected in 2011 and complement those recorded in 2007 at 160\\,GeV, in particular at lower values of $x$. They improve the statistical precision of $g_1^{\\rm p}(x)$ by about a factor of two in the region $x\\lesssim 0.02$. A next-to-leading order QCD fit to the $g_1$ world data is performed. It leads to a new determination of the quark spin contribution to the nucleon spin, $\\Delta \\Sigma$, ranging from 0.26 to 0.36, and to a re-evaluation of the first moment of $g_1^{\\rm p}$. The uncertainty of $\\Delta \\Sigma$ is mostly due to the large uncertainty in the present determinations of the gluon helicity distribution. A new evaluation of the Bjorken sum rule based on the COMPASS results for the non-singlet structure function $g_1^{\\rm NS}(x,Q^2)$ yields for the ratio of the axial and vector coupling constants $|g_A/g_V| = 1.22 \\pm 0.05\\,(\\mathrm{stat.}) \\pm 0.10\\,(\\mathrm{syst.})$, which validates the sum rule to an accuracy of about 9%.
International Nuclear Information System (INIS)
Shiles, E.; Sasaki, T.; Inokuti, M.; Smith, D.Y.
1980-01-01
An iterative, self-consistent procedure for the Kramers-Kronig analysis of data from reflectance, ellipsometric, transmission, and electron-energy-loss measurements is presented. This procedure has been developed for practical dispersion analysis since experimentally no single optical function can be readily measured over the entire range of frequencies as required by the Kramers-Kronig relations. The present technique is applied to metallic aluminum as an example. The results are then examined for internal consistency and for systematic errors by various optical sum rules. The present procedure affords a systematic means of preparing a self-consistent set of optical functions provided some optical or energy-loss data are available in all important spectral regions. The analysis of aluminum discloses that currently available data exhibit an excess oscillator strength, apparently in the vicinity of the L edge. A possible explanation is a systematic experimental error in the absorption-coefficient measurements resulting from surface layers (possibly oxides) present in thin-film transmission samples. A revised set of optical functions has been prepared by an ad hoc reduction of the reported absorption coefficient above the L edge by 14%. These revised data lead to a total oscillator strength consistent with the known electron density and are in agreement with dc-conductivity and stopping-power measurements as well as with absorption coefficients inferred from the cross sections of neighboring elements in the periodic table. The optical functions resulting from this study show evidence for both the redistribution of oscillator strength between energy levels and the effects on real transitions of the shielding of conduction electrons by virtual processes in the core states.
Alabdulmohsin, Ibrahim M.
2018-01-01
In this chapter, we extend the previous results of Chap. 2 to the more general case of composite finite sums. We describe what composite finite sums are and how their analysis can be reduced to the analysis of simple finite sums using the chain rule. We then apply these techniques to numerical integration and to some identities of Ramanujan.
Alabdulmohsin, Ibrahim M.
2018-03-07
In this chapter, we extend the previous results of Chap. 2 to the more general case of composite finite sums. We describe what composite finite sums are and how their analysis can be reduced to the analysis of simple finite sums using the chain rule. We then apply these techniques to numerical integration and to some identities of Ramanujan.
International Nuclear Information System (INIS)
Morgenthaler, Stephan; Thilly, William G.
2007-01-01
A method is described to discover if a gene carries one or more allelic mutations that confer risk for any specified common disease. The method does not depend upon genetic linkage of risk-conferring mutations to high frequency genetic markers such as single nucleotide polymorphisms. Instead, the sums of allelic mutation frequencies in case and control cohorts are determined and a statistical test is applied to discover if the difference in these sums is greater than would be expected by chance. A statistical model is presented that defines the ability of such tests to detect significant gene-disease relationships as a function of case and control cohort sizes and key confounding variables: zygosity and genicity, environmental risk factors, errors in diagnosis, limits to mutant detection, linkage of neutral and risk-conferring mutations, ethnic diversity in the general population and the expectation that among all exonic mutants in the human genome greater than 90% will be neutral with regard to any effect on disease risk. Means to test the null hypothesis for, and determine the statistical power of, each test are provided. For this 'cohort allelic sums test' or 'CAST', the statistical model and test are provided as an Excel (TM) program, CASTAT (C) at http://epidemiology.mit.edu. Based on genetics, technology and statistics, a strategy of enumerating the mutant alleles carried in the exons and splice sites of the estimated ∼25,000 human genes in case cohort samples of 10,000 persons for each of 100 common diseases is proposed and evaluated: A wide range of possible conditions of multi-allelic or mono-allelic and monogenic, multigenic or polygenic (including epistatic) risk are found to be detectable using the statistical criteria of 1 or 10 "false positive" gene associations per 25,000 gene-disease pair-wise trials and a statistical power of >0.8. Using estimates of the distribution of both neutral and gene-inactivating nondeleterious mutations in humans and
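The core idea of comparing summed allele counts between cohorts can be sketched as a simple collapsing test: count carriers of any qualifying rare allele in cases versus controls and apply a 2x2 exact test. This is an illustrative test in the spirit of CAST, not the authors' CASTAT program, and the counts are hypothetical; SciPy is assumed:

```python
from scipy.stats import fisher_exact

def cast_like_test(case_carriers, n_cases, control_carriers, n_controls):
    """Compare summed (collapsed) rare-allele carrier counts between
    case and control cohorts with Fisher's exact test on a 2x2 table.
    """
    table = [[case_carriers, n_cases - case_carriers],
             [control_carriers, n_controls - control_carriers]]
    odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
    return odds_ratio, p_value

# Hypothetical counts: 40 of 1000 cases vs 15 of 1000 controls carry
# some rare allele of the gene under test.
odds, p = cast_like_test(40, 1000, 15, 1000)
```

Collapsing over alleles is what gives the approach power for genes with many individually rare risk-conferring mutations, at the cost of dilution when many of the collapsed alleles are neutral.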
Directory of Open Access Journals (Sweden)
Massimiliano Malgieri
2017-01-01
Full Text Available In this paper we present the results of a research-based teaching-learning sequence on introductory quantum physics based on Feynman’s sum over paths approach in the Italian high school. Our study focuses on students’ understanding of two founding ideas of quantum physics, wave-particle duality and the uncertainty principle. In view of recent research reporting the fragmentation of students’ mental models of quantum concepts after initial instruction, we collected and analyzed data using the assessment tools provided by knowledge integration theory. Our results on the group of n=14 students who performed the final test indicate that the functional explanation of wave-particle duality provided by the sum over paths approach may be effective in leading students to build consistent mental models of quantum objects, and in providing them with a unified perspective on both the photon and the electron. Results on the uncertainty principle are less clear cut, as the improvements over traditional instruction appear less significant. Given the low number of students in the sample, this work should be interpreted as a case study, and we do not attempt to draw definitive conclusions. However, our study suggests that (i) the sum over paths approach may deserve more attention from researchers and educators as a possible route to introduce basic concepts of quantum physics in high school, and (ii) more research should be focused not only on the correctness of students’ mental models on individual concepts, but also on the ability of students to connect different ideas and experiments related to quantum theory in an organized whole.
Holmes, Tyson H; Li, Shou-Hua; McCann, David J
2016-11-23
The design of pharmacological trials for management of substance use disorders is shifting toward outcomes of successful individual-level behavior (abstinence or no heavy use). While binary success/failure analyses are common, McCann and Li (CNS Neurosci Ther 2012; 18: 414-418) introduced "number of beyond-threshold weeks of success" (NOBWOS) scores to avoid dichotomized outcomes. NOBWOS scoring employs an efficacy "hurdle" with values reflecting duration of success. Here, we evaluate NOBWOS scores rigorously. Formal analysis of the mathematical structure of NOBWOS scores is followed by simulation studies spanning diverse conditions to assess the operating characteristics of five linear-rank tests on NOBWOS scores. Simulations include assessment of Fisher's exact test applied to the hurdle component. On average, statistical power was approximately equal across the five linear-rank tests. Under none of the conditions examined did Fisher's exact test exhibit greater statistical power than any of the linear-rank tests. These linear-rank tests provide good Type I and Type II error control for comparing distributions of NOBWOS scores between groups (e.g. active vs. placebo). All methods were applied to re-analyses of data from four clinical trials of differing lengths and substances of abuse. These linear-rank tests agreed across all trials in rejecting (or not) their null (equality of distributions) at ≤ 0.05. © The Author(s) 2016.
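One member of the linear-rank family applied to NOBWOS-style scores might look like the sketch below. The scores are hypothetical: zeros for participants who never cleared the efficacy hurdle, positive week counts for the rest; the Wilcoxon rank-sum test handles the mass of zeros through ties. SciPy is assumed:

```python
import numpy as np
from scipy import stats

# Hypothetical NOBWOS scores for a 12-week trial: 0 means the participant
# never cleared the efficacy hurdle; otherwise the score is the number of
# beyond-threshold weeks of success.
active  = np.array([0, 0, 2, 5, 8, 0, 12, 3, 7, 0])
placebo = np.array([0, 0, 0, 1, 0, 2, 0, 0, 3, 0])

# Wilcoxon rank-sum (Mann-Whitney U), one of the linear-rank tests,
# comparing the full score distributions rather than a success/failure split.
stat, p_value = stats.mannwhitneyu(active, placebo, alternative="two-sided")
```

Unlike a dichotomized analysis, the rank test uses both the hurdle (zero vs. nonzero) and the duration information in a single comparison.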
Intensity rankings of plyometric exercises using joint power absorption.
Van Lieshout, Kathryn G; Anderson, Joy G; Shelburne, Kevin B; Davidson, Bradley S
2014-09-01
Athletic trainers and physical therapists often progress patients through rehabilitation by selecting plyometric exercises of increasing intensity in preparation for return to sport. The purpose of this study was to quantify the intensity of seven plyometric movements commonly used in lower-extremity rehabilitation by joint-specific peak power absorption and the sum of the peak power. Ten collegiate athletes performed submaximal plyometric exercises in a single test session: vertical jump, forward jump, backward jump, box drop, box jump up, tuck jump, and depth jump. Three-dimensional kinematics and force platform data were collected to generate joint kinetics. Peak power absorption normalized to body mass was calculated at the ankle, knee, and hip, and averaged across repetitions. Joint peak power data were pooled across athletes and summed to obtain the sum of peak power. Movements were ranked from 1 (low) to 7 (high) based on the sum of peak power and joint peak power (ankle, knee, hip). The sum of peak power did not correspond with standard low, medium, and high subjective intensity ratings or joint peak power in all joints. Mixed model analyses revealed significant variance between the sum of peak power and joint peak power ranks in the forward jump, backward jump, box drop, and depth jump (P<0.05), but not in the vertical jump, box jump up, and tuck jump. Results provide intensity rankings that can be used directly by athletic trainers and physical therapists in developing protocols for rehabilitation specific to the injured joint. Copyright © 2014 Elsevier Ltd. All rights reserved.
The Seven Deadly Sins of World University Ranking: A Summary from Several Papers
Soh, Kaycheng
2017-01-01
World university rankings use the weight-and-sum approach to process data. Although this seems to pass the common sense test, it has statistical problems. In recent years, seven such problems have been uncovered: spurious precision, weight discrepancies, assumed mutual compensation, indicator redundancy, inter-system discrepancy, negligence of…
DEFF Research Database (Denmark)
Schneider, Jesper Wiborg
2012-01-01
In this paper we discuss and question the use of statistical significance tests in relation to university rankings as recently suggested. We outline the assumptions behind and interpretations of statistical significance tests and relate this to examples from the recent SCImago Institutions Rankin...
Alabdulmohsin, Ibrahim M.
2018-03-07
We will begin our treatment of summability calculus by analyzing what will be referred to, throughout this book, as simple finite sums. Even though the results of this chapter are particular cases of the more general results presented in later chapters, they are important to start with for a few reasons. First, this chapter serves as an excellent introduction to what summability calculus can markedly accomplish. Second, simple finite sums are encountered more often and, hence, they deserve special treatment. Third, the results presented in this chapter for simple finite sums will, themselves, be used as building blocks for deriving the most general results in subsequent chapters. Among others, we establish that fractional finite sums are well-defined mathematical objects and show how various identities related to the Euler constant as well as the Riemann zeta function can actually be derived in an elementary manner using fractional finite sums.
Alabdulmohsin, Ibrahim M.
2018-01-01
We will begin our treatment of summability calculus by analyzing what will be referred to, throughout this book, as simple finite sums. Even though the results of this chapter are particular cases of the more general results presented in later chapters, they are important to start with for a few reasons. First, this chapter serves as an excellent introduction to what summability calculus can markedly accomplish. Second, simple finite sums are encountered more often and, hence, they deserve special treatment. Third, the results presented in this chapter for simple finite sums will, themselves, be used as building blocks for deriving the most general results in subsequent chapters. Among others, we establish that fractional finite sums are well-defined mathematical objects and show how various identities related to the Euler constant as well as the Riemann zeta function can actually be derived in an elementary manner using fractional finite sums.
International Nuclear Information System (INIS)
Arenhoevel, H.; Drechsel, D.; Weber, H.J.
1978-01-01
Generalized sum rules are derived by integrating the electromagnetic structure functions along lines of constant ratio of momentum and energy transfer. For non-relativistic systems these sum rules are related to the conventional photonuclear sum rules by a scaling transformation. The generalized sum rules are connected with the absorptive part of the forward scattering amplitude of virtual photons. The analytic structure of the scattering amplitudes and the possible existence of dispersion relations have been investigated in schematic relativistic and non-relativistic models. While for the non-relativistic case analyticity does not hold, the relativistic scattering amplitude is analytical for time-like (but not for space-like) photons and relations similar to the Gell-Mann-Goldberger-Thirring sum rule exist. (Auth.)
University Rankings: The Web Ranking
Aguillo, Isidro F.
2012-01-01
The publication in 2003 of the Ranking of Universities by Jiao Tong University of Shanghai has revolutionized not only academic studies on Higher Education, but has also had an important impact on the national policies and the individual strategies of the sector. The work gathers the main characteristics of this and other global university…
Yurinsky, Vadim Vladimirovich
1995-01-01
Surveys the methods currently applied to study sums of infinite-dimensional independent random vectors in situations where their distributions resemble Gaussian laws. Covers probabilities of large deviations, Chebyshev-type inequalities for seminorms of sums, a method of constructing Edgeworth-type expansions, estimates of characteristic functions for random vectors obtained by smooth mappings of infinite-dimensional sums to Euclidean spaces. A self-contained exposition of the modern research apparatus around CLT, the book is accessible to new graduate students, and can be a useful reference for researchers and teachers of the subject.
Chan, Kwun Chuen Gary; Qin, Jing
2015-10-01
Existing linear rank statistics cannot be applied to cross-sectional survival data without follow-up, since all subjects are essentially censored. However, partial survival information is available from backward recurrence times, which are frequently collected in health surveys without prospective follow-up. Under length-biased sampling, a class of linear rank statistics is proposed based only on backward recurrence times without any prospective follow-up. When follow-up data are available, the proposed rank statistic and a conventional rank statistic that utilizes follow-up information from the same sample are shown to be asymptotically independent. We discuss four ways to combine these two statistics when follow-up is present. Simulations show that all combined statistics have substantially improved power compared with conventional rank statistics, and a Mantel-Haenszel test performed the best among the proposed statistics. The method is applied to a cross-sectional health survey without follow-up and a study of Alzheimer's disease with prospective follow-up. © The Author 2015. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
International Nuclear Information System (INIS)
Staebler, A.; Fink, U.; Siuda, S.; Neville, S.
1989-01-01
Three groups of patients (n = 55, 52 and 54) were examined with the X-ray contrast media Gastrografin, Peritrast-Oral GI, and Telebrix Gastro to assess the diagnostic ranking, side effects and taste of water-soluble oral contrast media. No significant differences were seen in respect of diagnostic ranking and side effects. Side effects were exclusively abdominal symptoms; there was no difference with regard to laxative action. Telebrix Gastro was accepted significantly better in respect of taste than Gastrografin and Peritrast-Oral GI. (orig.)
DEFF Research Database (Denmark)
Brodal, Gerth Stølting; Jørgensen, Allan Grønlund
2008-01-01
In an array of n numbers, each of the n(n+1)/2 contiguous subarrays defines a sum. In this paper we focus on algorithms for selecting and reporting maximal sums from an array of numbers. First, we consider the problem of reporting k subarrays inducing the k largest sums among all subarrays of length at least l and at most u. For this problem we design an optimal O(n + k) time algorithm. Secondly, we consider the problem of selecting a subarray storing the k’th largest sum. For this problem we prove a time bound of Θ(n · max {1, log(k/n)}) by describing an algorithm with this running time and by proving a matching lower bound. Finally, we combine the ideas and obtain an O(n · max {1, log(k/n)}) time algorithm that selects a subarray storing the k’th largest sum among all subarrays of length at least l and at most u.
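The length-constrained reporting problem can be stated concretely with a brute-force O(n²) sketch; this is for illustration only and is not the paper's optimal O(n + k) algorithm:

```python
import heapq

def k_largest_sums(a, k, lo=1, hi=None):
    """Report the k largest sums over all contiguous subarrays of a
    whose length is between lo and hi (inclusive).

    Brute force: enumerate all O(n^2) subarray sums, then pick the top k.
    """
    n = len(a)
    hi = n if hi is None else hi
    sums = []
    for i in range(n):
        s = 0
        for j in range(i, n):
            s += a[j]                      # running sum of a[i..j]
            if lo <= j - i + 1 <= hi:
                sums.append(s)
    return heapq.nlargest(k, sums)

top = k_largest_sums([2, -1, 3, -4, 2], k=3)  # → [4, 3, 2]
```

The gap between this O(n²) enumeration and the paper's O(n + k) bound is what makes the reporting result interesting: the optimal algorithm never materializes all subarray sums.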
Sanders, Anthony P; Brannon, Rebecca M
2014-02-01
This research has developed a novel test method for evaluating the wear resistance of ceramic materials under severe contact stresses simulating edge loading in prosthetic hip bearings. Simply shaped test specimens - a cylinder and a spheroid - were designed as surrogates for an edge-loaded, head/liner implant pair. Equivalency of the simpler specimens was assured in the sense that their theoretical contact dimensions and pressures were identical, according to Hertzian contact theory, to those of the head/liner pair. The surrogates were fabricated in three ceramic materials: Al2O3, zirconia-toughened alumina (ZTA), and ZrO2. They were mated in three different material pairs and reciprocated under a 200 N normal contact force for 1000-2000 cycles, which created small wear scars. Material pairs were ranked by their wear resistance, quantified by the volume of abraded material measured using an interferometer. Similar tests were performed on edge-loaded hip implants in the same material pairs. The surrogates replicated the wear rankings of their full-scale implant counterparts and mimicked their friction force trends. The results show that a proxy test using simple test specimens can validly rank the wear performance of ceramic materials under severe, edge-loading contact stresses, while replicating the beginning stage of edge-loading wear. This simple wear test is therefore potentially useful for screening and ranking new, prospective materials early in their development, to produce optimized candidates for more complicated full-scale hip simulator wear tests. Copyright © 2013 Wiley Periodicals, Inc.
Multiparty symmetric sum types
DEFF Research Database (Denmark)
Nielsen, Lasse; Yoshida, Nobuko; Honda, Kohei
2010-01-01
This paper introduces a new theory of multiparty session types based on symmetric sum types, by which we can type non-deterministic orchestration choice behaviours. While the original branching type in session types can represent a choice made by a single participant and accepted by others, determining how the session proceeds, the symmetric sum type represents a choice made by agreement among all the participants of a session. Such behaviour can be found in many practical systems, including collaborative workflow in healthcare systems for clinical practice guidelines (CPGs). Processes with the symmetric sums can be embedded into the original branching types using conductor processes. We show that this type-driven embedding preserves typability, satisfies semantic soundness and completeness, and meets the encodability criteria adapted to the typed setting. The theory leads to an efficient…
DEFF Research Database (Denmark)
T. Frandsen, Mads; Masina, Isabella; Sannino, Francesco
2011-01-01
We introduce new sum rules that allow us to determine universal properties of the unknown component of the cosmic rays, and we show how they can be used to predict the positron fraction at energies not yet explored by current experiments and to constrain specific models.
Undurraga, Eduardo A; Nyberg, Colleen; Eisenberg, Dan T A; Magvanjav, Oyunbileg; Reyes-García, Victoria; Huanca, Tomás; Leonard, William R; McDade, Thomas W; Tanner, Susan; Vadez, Vincent; Godoy, Ricardo
2010-12-01
Growing evidence suggests that economic inequality in a community harms the health of a person. Using panel data from a small-scale, preindustrial rural society, we test whether individual wealth rank and village wealth inequality affect self-reported poor health in a foraging-farming native Amazonian society. A person's wealth rank was negatively but weakly associated with self-reported morbidity. Each step up per year in the village wealth hierarchy reduced total self-reported days ill by 0.4 percent. The Gini coefficient of village wealth inequality bore a positive association with self-reported poor health that was large in size, but not statistically significant. We found small village wealth inequality, and evidence that individual economic rank did not change. The modest effects may have to do with having used subjective rather than objective measures of health, having small village wealth inequality, and with the possibly true modest effect of a person's wealth rank on health in a small-scale, kin-based society. Finally, we also found that an increase in mean individual wealth by village was related to worse self-reported health. As the Tsimane' integrate into the market economy, their possibilities of wealth accumulation rise, which may affect their well-being. Our work contributes to recent efforts in biocultural anthropology to link the study of social inequalities, human biology, and human-environment interactions.
Statistical methods for ranking data
Alvo, Mayer
2014-01-01
This book introduces advanced undergraduates, graduate students, and practitioners to statistical methods for ranking data. An important branch of nonparametric statistics is oriented towards the analysis of ranking data. Rank correlation is defined through the notion of distance functions, and the notion of compatibility is introduced to deal with incomplete data. Ranking data are also modeled using a variety of modern tools such as CART, MCMC, the EM algorithm, and factor analysis. This book deals with statistical methods used for analyzing such data and provides a novel and unifying approach for hypothesis testing. The techniques described in the book are illustrated with examples, and the statistical software is provided on the authors' website.
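The distance-function view of rank correlation mentioned in this record can be illustrated with two classical rank distances. The following is a minimal sketch (the function names and example rankings are hypothetical, chosen for illustration):

```python
from itertools import combinations

def kendall_distance(r1, r2):
    """Kendall distance: number of item pairs ordered oppositely by the
    two rankings. r1, r2 map each item to its rank position."""
    items = list(r1)
    return sum(
        1
        for a, b in combinations(items, 2)
        if (r1[a] - r1[b]) * (r2[a] - r2[b]) < 0
    )

def spearman_footrule(r1, r2):
    """Spearman footrule: sum of absolute rank displacements."""
    return sum(abs(r1[x] - r2[x]) for x in r1)

r1 = {"a": 1, "b": 2, "c": 3, "d": 4}
r2 = {"a": 2, "b": 1, "c": 3, "d": 4}   # a and b swapped
print(kendall_distance(r1, r2))   # 1 discordant pair
print(spearman_footrule(r1, r2))  # total displacement 2
```

Both distances are zero exactly when the rankings agree, which is what makes them usable as the distance functions underlying rank correlation coefficients.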
Malgieri, Massimiliano; Onorato, Pasquale; De Ambrosis, Anna
2017-01-01
In this paper we present the results of a research-based teaching-learning sequence on introductory quantum physics based on Feynman's sum over paths approach in the Italian high school. Our study focuses on students' understanding of two founding ideas of quantum physics, wave particle duality and the uncertainty principle. In view of recent…
African Journals Online (AJOL)
Tracie1
Abstract. Translation is a very complicated process that requires extralinguistic knowledge on the translator's part. This work is based on literary translation. Literary translation deals with literary texts, which comprise poetry, drama, and prose. Literary translation has some problems ...
Alabdulmohsin, Ibrahim M.
2018-03-07
In this chapter, we use the theory of summability of divergent series, presented earlier in Chap. 4, to derive the analogs of the Euler-Maclaurin summation formula for oscillating sums. These formulas will, in turn, be used to perform many remarkable deeds with ease. For instance, they can be used to derive analytic expressions for summable divergent series, obtain asymptotic expressions of oscillating series, and even accelerate the convergence of series by several orders of magnitude. Moreover, we will prove the notable fact that, as far as the foundational rules of summability calculus are concerned, summable divergent series behave exactly as if they were convergent.
Darmann, Andreas; Nicosia, Gaia; Pferschy, Ulrich; Schauer, Joachim
2014-03-16
In this work we address a game theoretic variant of the Subset Sum problem, in which two decision makers (agents/players) compete for the usage of a common resource represented by a knapsack capacity. Each agent owns a set of integer weighted items and wants to maximize the total weight of its own items included in the knapsack. The solution is built as follows: Each agent, in turn, selects one of its items (not previously selected) and includes it in the knapsack if there is enough capacity. The process ends when the remaining capacity is too small for including any item left. We look at the problem from a single agent point of view and show that finding an optimal sequence of items to select is an [Formula: see text]-hard problem. Therefore we propose two natural heuristic strategies and analyze their worst-case performance when (1) the opponent is able to play optimally and (2) the opponent adopts a greedy strategy. From a centralized perspective we observe that some known results on the approximation of the classical Subset Sum can be effectively adapted to the multi-agent version of the problem.
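The alternating selection process described in this record can be sketched in a few lines. The code below is a toy version under stated assumptions: both agents play a greedy rule (insert the largest own item that still fits), and an agent with no fitting item simply passes; names and the example instance are hypothetical.

```python
def greedy_game(items_a, items_b, capacity):
    """Toy sketch of the sequential Subset Sum game: agents alternate
    turns, each greedily inserting its largest item that still fits.
    Returns [weight packed by A, weight packed by B]."""
    pools = [sorted(items_a, reverse=True), sorted(items_b, reverse=True)]
    packed = [0, 0]
    turn = 0
    passes = 0
    while passes < 2:  # stop once neither agent can place an item
        pool = pools[turn]
        choice = next((w for w in pool if w <= capacity), None)
        if choice is None:
            passes += 1          # no item of this agent fits: pass
        else:
            passes = 0
            pool.remove(choice)
            capacity -= choice
            packed[turn] += choice
        turn = 1 - turn
    return packed

print(greedy_game([5, 4, 1], [6, 2], capacity=10))  # [6, 2]
```

In the example, agent B's largest item (6) never fits once A has moved first, which is the kind of adversarial effect the worst-case analysis in the paper quantifies.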
Counting Triangles to Sum Squares
DeMaio, Joe
2012-01-01
Counting complete subgraphs of three vertices in complete graphs yields combinatorial arguments for identities for sums of squares of integers, odd integers, even integers, and sums of the triangular numbers.
DeTemple, Duane
2010-01-01
Purely combinatorial proofs are given for the sum of squares formula, 1² + 2² + ... + n² = n(n + 1)(2n + 1)/6, and the sum of sums of squares formula, 1² + (1² + 2²) + ... + (1² + 2² + ... + n²) = n(n + 1)²…
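Both closed forms in this record can be checked numerically. In the sketch below, the closed form used for the sum of sums of squares, n(n+1)²(n+2)/12, is an assumption obtained by summing the inner closed form (the record's own statement is truncated), so it is verified against direct summation:

```python
def sum_squares(n):
    """1^2 + 2^2 + ... + n^2 via the closed form n(n+1)(2n+1)/6."""
    return n * (n + 1) * (2 * n + 1) // 6

def sum_of_sums_of_squares(n):
    """1^2 + (1^2+2^2) + ... + (1^2+...+n^2), assumed closed form
    n(n+1)^2(n+2)/12 (derived by summing the inner closed form)."""
    return n * (n + 1) ** 2 * (n + 2) // 12

for n in range(1, 50):
    assert sum_squares(n) == sum(k * k for k in range(1, n + 1))
    assert sum_of_sums_of_squares(n) == sum(sum_squares(m) for m in range(1, n + 1))
print("identities verified for n = 1..49")
```

The integer divisions are exact because the numerators are always divisible by 6 and 12 respectively, which the assertions confirm for the tested range.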
Khan, Haseeb Ahmad
2005-01-28
Due to their versatile diagnostic and prognostic fidelity, molecular signatures or fingerprints are anticipated to be the most powerful tools for cancer management in the near future. Notwithstanding the experimental advancements in microarray technology, methods for analyzing either whole arrays or gene signatures have not been firmly established. Recently, an algorithm, ArraySolver, was reported by Khan for two-group comparison of microarray gene expression data using the two-tailed Wilcoxon signed-rank test. Most molecular signatures are composed of two sets of genes (hybrid signatures) wherein up-regulation of one set and down-regulation of the other set collectively define the purpose of a gene signature. Since the direction of a selected gene's expression (positive or negative) with respect to a particular disease condition is known, application of one-tailed statistics could be a more relevant choice. A novel method, ArrayVigil, is described for comparing hybrid signatures using a segregated one-tailed (SOT) Wilcoxon signed-rank test, and the results are compared with integrated two-tailed (ITT) procedures (SPSS and ArraySolver). ArrayVigil resulted in lower P values than those obtained from ITT statistics when comparing real data from four signatures.
DEFF Research Database (Denmark)
Johansen, Søren
2008-01-01
The reduced rank regression model is a multivariate regression model with a coefficient matrix with reduced rank. The reduced rank regression algorithm is an estimation procedure, which estimates the reduced rank regression model. It is related to canonical correlations and involves calculating...
Electronuclear sum rules for the lightest nuclei
International Nuclear Information System (INIS)
Efros, V.D.
1992-01-01
It is shown that the model-independent longitudinal electronuclear sum rules for nuclei with A = 3 and A = 4 have an accuracy on the order of a percent in the traditional single-nucleon approximation with free nucleons for the nuclear charge-density operator. This makes it possible to test this approximation by using these sum rules. The longitudinal sum rules for A = 3 and A = 4 are calculated using the wave functions of these nuclei corresponding to a large set of realistic NN interactions. The values of the model-independent sum rules lie in the range of values calculated by this method. Model-independent expressions are obtained for the transverse sum rules for nuclei with A = 3 and A = 4. These sum rules are calculated using a large set of realistic wave functions of these nuclei. The contribution of the convection current and the changes in the results for different versions of realistic NN forces are given. 29 refs., 4 tabs
American Society for Testing and Materials. Philadelphia
2003-01-01
1.1 This test method covers laboratory procedures for determining the resistance of plastics to sliding wear. The test utilizes a block-on-ring friction and wear testing machine to rank plastics according to their sliding wear characteristics against metals or other solids. 1.2 An important attribute of this test is that it is very flexible. Any material that can be fabricated into, or applied to, blocks and rings can be tested. Thus, the potential materials combinations are endless. In addition, the test can be run with different gaseous atmospheres and elevated temperatures, as desired, to simulate service conditions. 1.3 Wear test results are reported as the volume loss in cubic millimetres for the block and ring. Materials of higher wear resistance will have lower volume loss. 1.4 The values stated in SI units are to be regarded as the standard. The values given in parentheses are for information only. 1.5 This standard does not purport to address all of the safety concerns, if any, associated with it...
Social Security Administration — Staging Instance for all SUMs Counts related projects including: Redeterminations/Limited Issue, Continuing Disability Resolution, CDR Performance Measures, Initial...
Ranking Operations Management conferences
Steenhuis, H.J.; de Bruijn, E.J.; Gupta, Sushil; Laptaned, U
2007-01-01
Several publications have appeared in the field of Operations Management which rank Operations Management related journals. Several ranking systems exist for journals based on, for example, perceived relevance and quality, citation, and author affiliation. Many academics also publish at conferences
Owen, Rhiannon K; Cooper, Nicola J; Quinn, Terence J; Lees, Rosalind; Sutton, Alex J
2018-07-01
Network meta-analyses (NMA) have been used extensively to compare the effectiveness of multiple interventions for health care policy and decision-making. However, methods for evaluating the performance of multiple diagnostic tests are less established. In a decision-making context, we are often interested in comparing and ranking the performance of multiple diagnostic tests, at varying levels of test thresholds, in one simultaneous analysis. Motivated by an example of cognitive impairment diagnosis following stroke, we synthesized data from 13 studies assessing the efficiency of two diagnostic tests: Mini-Mental State Examination (MMSE) and Montreal Cognitive Assessment (MoCA), at two test thresholds, accounting for the correlations between multiple test accuracy measures from the same study. We developed and successfully fitted a model comparing multiple test/threshold combinations while imposing threshold constraints. Using this model, we ranked the test/threshold combinations to inform decision making. Copyright © 2018 The Authors. Published by Elsevier Inc. All rights reserved.
Damanik, Asan
2018-03-01
Neutrino mass sum-rules are a very important research subject from the theoretical side, because neutrino oscillation experiments have only given us two squared-mass differences and three mixing angles. We review the neutrino mass sum-rules in the literature that have been reported by many authors and discuss their phenomenological implications.
Maua, Denis Deratani; Cozman, Fabio Gagli; Conaty, Diarmaid; de Campos, Cassio P.
2017-01-01
Sum-product networks are a relatively new and increasingly popular class of (precise) probabilistic graphical models that allow for marginal inference with polynomial effort. As with other probabilistic models, sum-product networks are often learned from data and used to perform classification.
A bayesian approach to QCD sum rules
International Nuclear Information System (INIS)
Gubler, Philipp; Oka, Makoto
2010-01-01
QCD sum rules are analyzed with the help of the Maximum Entropy Method. We develop a new technique based on Bayesian inference theory, which allows us to directly obtain the spectral function of a given correlator from the results of the operator product expansion given in the deep Euclidean 4-momentum region. The most important advantage of this approach is that one does not have to make any a priori assumptions about the functional form of the spectral function, such as the 'pole + continuum' ansatz that has been widely used in QCD sum rule studies, but only needs to specify the asymptotic values of the spectral function at high and low energies as an input. As a first test of the applicability of this method, we have analyzed the sum rules of the ρ-meson, a case where the sum rules are known to work well. Our results show a clear peak structure in the region of the experimental mass of the ρ-meson. We thus demonstrate that the Maximum Entropy Method is successfully applied and that it is an efficient tool in the analysis of QCD sum rules. (author)
Moreno, Carlos J
2005-01-01
Introduction Prerequisites Outline of Chapters 2-8 Elementary Methods Introduction Some Lemmas Two Fundamental Identities Euler's Recurrence for Sigma(n) More Identities Sums of Two Squares Sums of Four Squares Still More Identities Sums of Three Squares An Alternate Method Sums of Polygonal Numbers Exercises Bernoulli Numbers Overview Definition of the Bernoulli Numbers The Euler-MacLaurin Sum Formula The Riemann Zeta Function Signs of Bernoulli Numbers Alternate The von Staudt-Clausen Theorem Congruences of Voronoi and Kummer Irregular Primes Fractional Parts of Bernoulli Numbers Exercises Examples of Modular Forms Introduction An Example of Jacobi and Smith An Example of Ramanujan and Mordell An Example of Wilton: τ(n) Modulo 23 An Example of Hamburger Exercises Hecke's Theory of Modular Forms Introduction Modular Group Γ and its Subgroup Γ0(N) Fundamental Domains for Γ and Γ0(N) Integral Modular Forms Modular Forms of Type Mk(Γ0(N); χ) and Euler-Poincaré Series Hecke Operators Dirichlet Series and ...
PageRank as a method to rank biomedical literature by importance.
Yates, Elliot J; Dixon, Louise C
2015-01-01
Optimal ranking of literature importance is vital in overcoming article overload. Existing ranking methods are typically based on raw citation counts, giving a sum of 'inbound' links with no consideration of citation importance. PageRank, an algorithm originally developed for ranking webpages at the search engine Google, could potentially be adapted to bibliometrics to quantify the relative importance weightings of a citation network. This article seeks to validate such an approach on the freely available PubMed Central open access subset (PMC-OAS) of biomedical literature. On-demand cloud computing infrastructure was used to extract a citation network from over 600,000 full-text PMC-OAS articles. PageRanks and citation counts were calculated for each node in this network. PageRank is highly correlated with citation count (R = 0.905). PageRank can be trivially computed on commodity cluster hardware and is linearly correlated with citation count. Given its putative benefits in quantifying relative importance, we suggest it may enrich the citation network, thereby overcoming the existing inadequacy of citation counts alone. We thus suggest PageRank as a feasible supplement to, or replacement of, existing bibliometric ranking methods.
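The ranking idea in this record can be sketched with a plain power-iteration PageRank over a toy citation graph. This is a minimal illustration, not the cloud-based implementation the article describes; the damping factor 0.85 and the example graph are assumptions:

```python
def pagerank(links, d=0.85, iters=100):
    """Power-iteration PageRank over a dict of outbound links.
    links: {node: [cited nodes]}. Dangling mass is spread uniformly."""
    nodes = list(links)
    n = len(nodes)
    rank = {v: 1.0 / n for v in nodes}
    for _ in range(iters):
        new = {v: (1.0 - d) / n for v in nodes}
        for v, outs in links.items():
            if outs:
                share = d * rank[v] / len(outs)
                for w in outs:
                    new[w] += share
            else:  # dangling node: redistribute its mass uniformly
                for w in nodes:
                    new[w] += d * rank[v] / n
        rank = new
    return rank

# A cites B and C; B cites C; C cites nothing.
citations = {"A": ["B", "C"], "B": ["C"], "C": []}
ranks = pagerank(citations)
print(sorted(ranks, key=ranks.get, reverse=True))  # C, cited most, ranks first
```

Unlike a raw citation count, the score a node passes on depends on its own importance, which is the "citation importance" weighting the abstract contrasts with inbound-link counts.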
Fractional cointegration rank estimation
DEFF Research Database (Denmark)
Lasak, Katarzyna; Velasco, Carlos
… the parameters of the model under the null hypothesis of the cointegration rank r = 1, 2, ..., p-1. This step provides consistent estimates of the cointegration degree, the cointegration vectors, the speed of adjustment to the equilibrium parameters, and the common trends. In the second step we carry out a sup-likelihood ratio test of no-cointegration on the estimated p - r common trends that are not cointegrated under the null. The cointegration degree is re-estimated in the second step to allow for new cointegration relationships with different memory. We augment the error correction model in the second step … to control for stochastic trend estimation effects from the first step. The critical values of the tests proposed depend only on the number of common trends under the null, p - r, and on the interval of the cointegration degrees b allowed, but not on the true cointegration degree b0. Hence, no additional …
Sparse structure regularized ranking
Wang, Jim Jing-Yan
2014-04-17
Learning ranking scores is critical for the multimedia database retrieval problem. In this paper, we propose a novel ranking score learning algorithm by exploring the sparse structure and using it to regularize ranking scores. To explore the sparse structure, we assume that each multimedia object could be represented as a sparse linear combination of all other objects, and combination coefficients are regarded as a similarity measure between objects and used to regularize their ranking scores. Moreover, we propose to learn the sparse combination coefficients and the ranking scores simultaneously. A unified objective function is constructed with regard to both the combination coefficients and the ranking scores, and is optimized by an iterative algorithm. Experiments on two multimedia database retrieval data sets demonstrate the significant improvements of the proposed algorithm over state-of-the-art ranking score learning algorithms.
Coulomb sum rules in the relativistic Fermi gas model
International Nuclear Information System (INIS)
Do Dang, G.; L'Huillier, M.; Nguyen Giai, Van.
1986-11-01
Coulomb sum rules are studied in the framework of the Fermi gas model. A distinction is made between mathematical and observable sum rules. Differences between non-relativistic and relativistic Fermi gas predictions are stressed. A method to deduce a Coulomb response function from the longitudinal response is proposed and tested numerically. This method is applied to the 40Ca data to obtain the experimental Coulomb sum rule as a function of momentum transfer
Proximinality in generalized direct sums
Directory of Open Access Journals (Sweden)
Darapaneni Narayana
2004-01-01
Full Text Available We consider proximinality and transitivity of proximinality for subspaces of finite codimension in generalized direct sums of Banach spaces. We give several examples of Banach spaces where proximinality is transitive among subspaces of finite codimension.
'Sum rules' for preequilibrium reactions
International Nuclear Information System (INIS)
Hussein, M.S.
1981-03-01
Evidence that suggests a correct relationship between the optical transmission matrix, P, and the several correlation widths, Γn, found in multistep compound (preequilibrium) nuclear reactions, is presented. A second sum rule is also derived within the shell model approach to nuclear reactions. Indications of the potential usefulness of the sum rules in preequilibrium studies are given. (Author)
Sum rules in classical scattering
International Nuclear Information System (INIS)
Bolle, D.; Osborn, T.A.
1981-01-01
This paper derives sum rules associated with the classical scattering of two particles. These sum rules are the analogs of Levinson's theorem in quantum mechanics which provides a relationship between the number of bound-state wavefunctions and the energy integral of the time delay of the scattering process. The associated classical relation is an identity involving classical time delay and an integral over the classical bound-state density. We show that equalities between the Nth-order energy moment of the classical time delay and the Nth-order energy moment of the classical bound-state density hold in both a local and a global form. Local sum rules involve the time delay defined on a finite but otherwise arbitrary coordinate space volume S and the bound-state density associated with this same region. Global sum rules are those that obtain when S is the whole coordinate space. Both the local and global sum rules are derived for potentials of arbitrary shape and for scattering in any space dimension. Finally the set of classical sum rules, together with the known quantum mechanical analogs, are shown to provide a unified method of obtaining the high-temperature expansion of the classical, respectively the quantum-mechanical, virial coefficients
Low Rank Approximation Algorithms, Implementation, Applications
Markovsky, Ivan
2012-01-01
Matrix low-rank approximation is intimately related to data modelling; a problem that arises frequently in many different fields. Low Rank Approximation: Algorithms, Implementation, Applications is a comprehensive exposition of the theory, algorithms, and applications of structured low-rank approximation. Local optimization methods and effective suboptimal convex relaxations for Toeplitz, Hankel, and Sylvester structured problems are presented. A major part of the text is devoted to application of the theory. Applications described include: system and control theory: approximate realization, model reduction, output error, and errors-in-variables identification; signal processing: harmonic retrieval, sum-of-damped exponentials, finite impulse response modeling, and array processing; machine learning: multidimensional scaling and recommender system; computer vision: algebraic curve fitting and fundamental matrix estimation; bioinformatics for microarray data analysis; chemometrics for multivariate calibration; ...
An electrophysiological signature of summed similarity in visual working memory.
van Vugt, Marieke K; Sekuler, Robert; Wilson, Hugh R; Kahana, Michael J
2013-05-01
Summed-similarity models of short-term item recognition posit that participants base their judgments of an item's prior occurrence on that item's summed similarity to the ensemble of items on the remembered list. We examined the neural predictions of these models in 3 short-term recognition memory experiments using electrocorticographic/depth electrode recordings and scalp electroencephalography. On each experimental trial, participants judged whether a test face had been among a small set of recently studied faces. Consistent with summed-similarity theory, participants' tendency to endorse a test item increased as a function of its summed similarity to the items on the just-studied list. To characterize this behavioral effect of summed similarity, we successfully fit a summed-similarity model to individual participant data from each experiment. Using the parameters determined from fitting the summed-similarity model to the behavioral data, we examined the relation between summed similarity and brain activity. We found that 4-9 Hz theta activity in the medial temporal lobe and 2-4 Hz delta activity recorded from frontal and parietal cortices increased with summed similarity. These findings demonstrate direct neural correlates of the similarity computations that form the foundation of several major cognitive theories of human recognition memory. PsycINFO Database Record (c) 2013 APA, all rights reserved.
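The behavioral core of the summed-similarity account in this record can be sketched in a few lines. Everything below is an illustrative assumption rather than the fitted model from the article: items are reduced to feature vectors, similarity decays exponentially with distance (rate `tau`), and a fixed criterion converts summed similarity into an old/new judgment:

```python
import math

def summed_similarity(probe, study_list, tau=1.0):
    """Summed exponential similarity between a probe item and every item
    on the study list (items are feature tuples)."""
    def dist(x, y):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(x, y)))
    return sum(math.exp(-tau * dist(probe, item)) for item in study_list)

def old_new_judgment(probe, study_list, criterion=1.0, tau=1.0):
    """Endorse the probe as 'old' when summed similarity exceeds a criterion."""
    return "old" if summed_similarity(probe, study_list, tau) > criterion else "new"

study = [(0.0, 0.0), (1.0, 0.0)]            # two studied faces as feature points
print(old_new_judgment((0.0, 0.0), study))  # old: probe matches a studied item
print(old_new_judgment((5.0, 5.0), study))  # new: distant lure, low summed similarity
```

A lure that resembles several studied items can still exceed the criterion, which is exactly the graded endorsement effect the experiments measure and correlate with theta and delta activity.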
Finding Sums for an Infinite Class of Alternating Series
Chen, Zhibo; Wei, Sheng; Xiao, Xuerong
2012-01-01
Calculus II students know that many alternating series are convergent by the Alternating Series Test. However, they know few alternating series (except geometric series and some trivial ones) for which they can find the sum. In this article, we present a method that enables the students to find sums for infinitely many alternating series in the…
Small sum privacy and large sum utility in data publishing.
Fu, Ada Wai-Chee; Wang, Ke; Wong, Raymond Chi-Wing; Wang, Jia; Jiang, Minhao
2014-08-01
While the study of privacy preserving data publishing has drawn a lot of interest, some recent work has shown that existing mechanisms do not limit all inferences about individuals. This paper is a positive note in response to this finding. We point out that not all inference attacks should be countered, in contrast to all existing works known to us, and based on this we propose a model called SPLU. This model protects sensitive information, by which we refer to answers for aggregate queries with small sums, while queries with large sums are answered with higher accuracy. Using SPLU, we introduce a sanitization algorithm to protect data while maintaining high data utility for queries with large sums. Empirical results show that our method behaves as desired. Copyright © 2014 Elsevier Inc. All rights reserved.
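The small-sum/large-sum asymmetry in this record can be illustrated with a toy query answerer. The threshold and the refusal rule below are assumptions for illustration, not the article's sanitization algorithm (which perturbs the data while preserving large-sum utility):

```python
def answer_sum_query(data, ids, small_sum=10):
    """Toy SPLU-style rule: aggregate sums at or below `small_sum` are
    treated as sensitive and suppressed; larger sums are answered exactly."""
    total = sum(data[i] for i in ids)
    if total <= small_sum:
        return None   # small sum: refuse, protecting sensitive values
    return total

values = {"u1": 2, "u2": 3, "u3": 50}
print(answer_sum_query(values, ["u1", "u2"]))        # suppressed: None
print(answer_sum_query(values, ["u1", "u2", "u3"]))  # 55
```

The point of the model is that only the first kind of query lets an attacker infer much about individuals, so only it needs protection; broad aggregates retain full utility.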
Bradshaw, Corey J A; Brook, Barry W
2016-01-01
There are now many methods available to assess the relative citation performance of peer-reviewed journals. Regardless of their individual faults and advantages, citation-based metrics are used by researchers to maximize the citation potential of their articles, and by employers to rank academic track records. The absolute value of any particular index is arguably meaningless unless compared to other journals, and different metrics result in divergent rankings. To provide a simple yet more objective way to rank journals within and among disciplines, we developed a κ-resampled composite journal rank incorporating five popular citation indices: Impact Factor, Immediacy Index, Source-Normalized Impact Per Paper, SCImago Journal Rank and Google 5-year h-index; this approach provides an index of relative rank uncertainty. We applied the approach to six sample sets of scientific journals from Ecology (n = 100 journals), Medicine (n = 100), Multidisciplinary (n = 50); Ecology + Multidisciplinary (n = 25), Obstetrics & Gynaecology (n = 25) and Marine Biology & Fisheries (n = 25). We then cross-compared the κ-resampled ranking for the Ecology + Multidisciplinary journal set to the results of a survey of 188 publishing ecologists who were asked to rank the same journals, and found a 0.68-0.84 Spearman's ρ correlation between the two rankings datasets. Our composite index approach therefore approximates relative journal reputation, at least for that discipline. Agglomerative and divisive clustering and multi-dimensional scaling techniques applied to the Ecology + Multidisciplinary journal set identified specific clusters of similarly ranked journals, with only Nature & Science separating out from the others. When comparing a selection of journals within or among disciplines, we recommend collecting multiple citation-based metrics for a sample of relevant and realistic journals to calculate the composite rankings and their relative uncertainty windows.
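The composite-rank idea in this record, averaging a journal's rank across several citation metrics and resampling to get an uncertainty window, can be sketched as follows. This is a toy bootstrap version, not the authors' κ-resampling procedure, and the journal names and metric values are invented:

```python
import random
import statistics

def composite_rank(metrics, n_boot=1000, seed=42):
    """Composite journal ranking with a bootstrap uncertainty window.
    metrics: {journal: [metric1, metric2, ...]}, higher = better.
    Returns {journal: (mean rank, rank std dev)}, rank 0 = best."""
    rng = random.Random(seed)
    journals = list(metrics)
    n_metrics = len(next(iter(metrics.values())))

    def avg_ranks(idx_sample):
        # average each journal's rank across the sampled metrics
        return {
            j: statistics.mean(
                sorted(journals, key=lambda x: -metrics[x][i]).index(j)
                for i in idx_sample
            )
            for j in journals
        }

    boot = {j: [] for j in journals}
    for _ in range(n_boot):
        sample = [rng.randrange(n_metrics) for _ in range(n_metrics)]
        for j, r in avg_ranks(sample).items():
            boot[j].append(r)
    return {j: (statistics.mean(v), statistics.pstdev(v)) for j, v in boot.items()}

metrics = {"J1": [10.0, 8.0, 9.0], "J2": [4.0, 5.0, 3.0], "J3": [7.0, 6.0, 6.5]}
result = composite_rank(metrics)
print(sorted(result, key=lambda j: result[j][0]))  # best-to-worst composite order
```

When one journal dominates on every metric (as J1 does here), its resampled rank has zero spread; disagreement among metrics is what widens the uncertainty window.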
Sum rules for collisional processes
International Nuclear Information System (INIS)
Oreg, J.; Goldstein, W.H.; Bar-Shalom, A.; Klapisch, M.
1991-01-01
We derive level-to-configuration sum rules for dielectronic capture and for collisional excitation and ionization. These sum rules give the total transition rate from a detailed atomic level to an atomic configuration. For each process, we show that it is possible to factor out the dependence on continuum-electron wave functions. The remaining explicit level dependence of each rate is then obtained from the matrix element of an effective operator acting on the bound orbitals only. In a large class of cases, the effective operator reduces to a one-electron monopole whose matrix element is proportional to the statistical weight of the level. We show that even in these cases, nonstatistical level dependence enters through the dependence of radial integrals on continuum orbitals. For each process, explicit analytic expressions for the level-to-configuration sum rules are given for all possible cases. Together with the well-known J-file sum rule for radiative rates [E. U. Condon and G. H. Shortley, The Theory of Atomic Spectra (University Press, Cambridge, 1935)], the sum rules offer a systematic and efficient procedure for collapsing high-multiplicity configurations into ''effective'' levels for the purpose of modeling the population kinetics of ionized heavy atoms in plasma
Block models and personalized PageRank.
Kloumann, Isabel M; Ugander, Johan; Kleinberg, Jon
2017-01-03
Methods for ranking the importance of nodes in a network have a rich history in machine learning and across domains that analyze structured data. Recent work has evaluated these methods through the "seed set expansion problem": given a subset [Formula: see text] of nodes from a community of interest in an underlying graph, can we reliably identify the rest of the community? We start from the observation that the most widely used techniques for this problem, personalized PageRank and heat kernel methods, operate in the space of "landing probabilities" of a random walk rooted at the seed set, ranking nodes according to weighted sums of landing probabilities of different length walks. Both schemes, however, lack an a priori relationship to the seed set objective. In this work, we develop a principled framework for evaluating ranking methods by studying seed set expansion applied to the stochastic block model. We derive the optimal gradient for separating the landing probabilities of two classes in a stochastic block model and find, surprisingly, that under reasonable assumptions the gradient is asymptotically equivalent to personalized PageRank for a specific choice of the PageRank parameter [Formula: see text] that depends on the block model parameters. This connection provides a formal motivation for the success of personalized PageRank in seed set expansion and node ranking generally. We use this connection to propose more advanced techniques incorporating higher moments of landing probabilities; our advanced methods exhibit greatly improved performance, despite being simple linear classification rules, and are even competitive with belief propagation.
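The "weighted sums of landing probabilities" framing in this record has a compact numerical form: personalized PageRank is the geometric sum (1-α) Σ_k α^k s Pᵏ of length-k walk distributions from the seed s. The sketch below checks this against the closed form on an assumed toy graph (α = 0.85 is likewise an assumption):

```python
import numpy as np

# Toy 4-node undirected graph, row-stochastic random-walk matrix.
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
P = A / A.sum(axis=1, keepdims=True)
seed = np.array([1.0, 0.0, 0.0, 0.0])  # walk rooted at node 0
alpha = 0.85

# Weighted sum of landing probabilities of length-k walks.
ppr_series = (1 - alpha) * sum(
    alpha**k * seed @ np.linalg.matrix_power(P, k) for k in range(200)
)

# Closed form: (1 - alpha) * seed @ (I - alpha P)^{-1}.
ppr_exact = (1 - alpha) * seed @ np.linalg.inv(np.eye(4) - alpha * P)
print(np.allclose(ppr_series, ppr_exact))  # True
```

The geometric weights α^k are exactly the walk-length weighting the block-model analysis in the paper shows to be asymptotically optimal for a particular α.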
Hoede, C.
In this paper the concept of page rank for the world wide web is discussed. The possibility of describing the distribution of page rank by an exponential law is considered. It is shown that the concept is essentially equal to that of status score, a centrality measure discussed already in 1953 by
Dobbs, David E.
2012-01-01
This note explains how Emil Artin's proof that row rank equals column rank for a matrix with entries in a field leads naturally to the formula for the nullity of a matrix and also to an algorithm for solving any system of linear equations in any number of variables. This material could be used in any course on matrix theory or linear algebra.
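The rank-nullity relation the note builds on is easy to check numerically. This is an illustrative NumPy sketch (with hypothetical data), not Artin's proof itself: row rank equals column rank, and nullity equals the number of columns minus the rank.

```python
import numpy as np

# A 3x3 matrix with one dependent row (row 2 = 2 * row 1), so rank 2.
A = np.array([[1., 2., 3.],
              [2., 4., 6.],
              [1., 0., 1.]])

rank = np.linalg.matrix_rank(A)
row_rank = np.linalg.matrix_rank(A.T)   # rank of the transpose = row rank
nullity = A.shape[1] - rank             # rank-nullity: nullity = #columns - rank

print(rank, row_rank, nullity)  # 2 2 1
```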
Chapman, David W.
2008-01-01
Recently, Samford University was ranked 27th in the nation in a report released by "Forbes" magazine. In this article, the author relates how the people working at Samford University were surprised at its ranking. Although Samford is the largest private institution in Alabama, its distinguished academic achievements aren't even…
Tensor rank is not multiplicative under the tensor product
DEFF Research Database (Denmark)
Christandl, Matthias; Jensen, Asger Kjærulff; Zuiddam, Jeroen
2018-01-01
The tensor rank of a tensor t is the smallest number r such that t can be decomposed as a sum of r simple tensors. Let s be a k-tensor and let t be an ℓ-tensor. The tensor product of s and t is a (k+ℓ)-tensor. Tensor rank is sub-multiplicative under the tensor product. We revisit the connection b...
Tensor rank is not multiplicative under the tensor product
M. Christandl (Matthias); A. K. Jensen (Asger Kjærulff); J. Zuiddam (Jeroen)
2018-01-01
The tensor rank of a tensor t is the smallest number r such that t can be decomposed as a sum of r simple tensors. Let s be a k-tensor and let t be an ℓ-tensor. The tensor product of s and t is a (k+ℓ)-tensor. Tensor rank is sub-multiplicative under the tensor product. We revisit the
Polarizability sum rules in QED
International Nuclear Information System (INIS)
Llanta, E.; Tarrach, R.
1978-01-01
The well-founded total-photoproduction polarizability sum rule and the longitudinal-photoproduction sum rule, assumed to be subtraction free, are checked in QED at the lowest non-trivial order. The first is shown to hold, whereas the second turns out to need a subtraction, which makes its usefulness for determining the electromagnetic polarizabilities of the nucleons quite doubtful. (Auth.)
International Nuclear Information System (INIS)
Frandsen, Mads T.; Masina, Isabella; Sannino, Francesco
2011-01-01
We introduce new sum rules that allow universal properties of the unknown component of the cosmic rays to be determined; we show how they can be used to predict the positron fraction at energies not yet explored by current experiments, and to constrain specific models.
Sum rules for neutrino oscillations
International Nuclear Information System (INIS)
Kobzarev, I.Yu.; Martemyanov, B.V.; Okun, L.B.; Schepkin, M.G.
1981-01-01
Sum rules for neutrino oscillations are obtained. The derivation of the general form of the S-matrix for the two-stage process l_i^- → ν → l_k^± (where l_i^- = e, μ, τ, ... are initial leptons with flavor i and l_k^± is the final lepton) is presented. Considering the two-stage process l_i^- → ν → l_k^± makes it possible to take neutrino masses into account and to obtain expressions for the oscillating cross sections. In the case of Dirac and left-handed Majorana neutrinos, the sum rule for the quantities (1/v_k)σ(l_i^- → l_k^±) is obtained, where v_k is the velocity of l_k. In the left-handed Majorana case there is an additional antineutrino admixture leading to the process l_i^- → l_k^+. Both components (neutrino and antineutrino) oscillate independently. The sums Σ_k (1/v_k)σ(l_i^- → l_k^±) then oscillate due to the presence of left-handed antineutrinos and right-handed neutrinos, which do not take part in weak interactions. If right-handed currents are added, sum rules analogous to those considered above may be obtained. All conclusions are valid in the general case when CP is not conserved [ru
Sums of Generalized Harmonic Series
Indian Academy of Sciences (India)
Sums of Generalized Harmonic Series: For Kids from Five to Fifteen. Zurab Silagadze. General Article, Resonance – Journal of Science Education, Volume 20, Issue 9, September 2015, pp. 822-843.
Recurrent fuzzy ranking methods
Hajjari, Tayebeh
2012-11-01
With the increasing development of fuzzy set theory in various scientific fields comes the need to compare fuzzy numbers in different areas. Ranking of fuzzy numbers therefore plays a very important role in linguistic decision-making, engineering, business and other fuzzy application systems. Several strategies have been proposed for ranking fuzzy numbers, and each of these techniques has been shown to produce non-intuitive results in certain cases. In this paper, we review some recent ranking methods, which will be useful for researchers who are interested in this area.
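As a concrete illustration of one classical strategy of the kind such surveys review (not this paper's own proposal), triangular fuzzy numbers can be ranked by their centroids. The fuzzy numbers below are hypothetical.

```python
# Rank triangular fuzzy numbers (a, b, c) by the centroid (a + b + c) / 3,
# a simple defuzzification-based ranking strategy (illustrative only).
def centroid(tfn):
    a, b, c = tfn
    return (a + b + c) / 3.0

fuzzy_numbers = {"A": (1, 2, 4), "B": (2, 3, 4), "C": (0, 3, 5)}
ranking = sorted(fuzzy_numbers, key=lambda k: centroid(fuzzy_numbers[k]), reverse=True)
print(ranking)  # ['B', 'C', 'A']
```

Fuzzy numbers with equal centroids but different shapes are exactly the cases where such a method yields non-intuitive results, motivating the alternative methods the survey reviews.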
Khoromskaia, Venera; Khoromskij, Boris N.
2014-12-01
Our recent method for low-rank tensor representation of sums of arbitrarily positioned electrostatic potentials discretized on a 3D Cartesian grid reduces the 3D tensor summation to operations involving only 1D vectors, while retaining linear complexity scaling in the number of potentials. Here, we introduce and study a novel tensor approach for fast and accurate assembled summation of a large number of lattice-allocated potentials represented on a 3D N × N × N grid, with computational requirements only weakly dependent on the number of summed potentials. It is based on the assembled low-rank canonical tensor representation of the collected potentials using pointwise sums of shifted canonical vectors representing a single generating function, say the Newton kernel. For a sum of electrostatic potentials over an L × L × L lattice embedded in a box, the required storage scales linearly in the 1D grid size, O(N), while the numerical cost is estimated by O(NL). For periodic boundary conditions, the storage demand remains proportional to the 1D grid size of a unit cell, n = N/L, while the numerical cost reduces to O(N), which outperforms the FFT-based Ewald-type summation algorithms of complexity O(N^3 log N). The complexity in the grid parameter N can be reduced even to the logarithmic scale O(log N) by using data-sparse representation of the canonical N-vectors via the quantics tensor approximation. For justification, we prove an upper bound on the quantics ranks for the canonical vectors in the overall lattice sum. The presented approach is beneficial in applications which require further functional calculus with the lattice potential, say, scalar product with a function, integration or differentiation, which can be performed easily in tensor arithmetics on large 3D grids with 1D cost. Numerical tests illustrate the performance of the tensor summation method and confirm the estimated bounds on the tensor ranks.
Ranking as parameter estimation
Czech Academy of Sciences Publication Activity Database
Kárný, Miroslav; Guy, Tatiana Valentine
2009-01-01
Vol. 4, No. 2 (2009), pp. 142-158 ISSN 1745-7645 R&D Projects: GA MŠk 2C06001; GA AV ČR 1ET100750401; GA MŠk 1M0572 Institutional research plan: CEZ:AV0Z10750506 Keywords : ranking * Bayesian estimation * negotiation * modelling Subject RIV: BB - Applied Statistics, Operational Research http://library.utia.cas.cz/separaty/2009/AS/karny- ranking as parameter estimation.pdf
Validating rankings in soccer championships
Directory of Open Access Journals (Sweden)
Annibal Parracho Sant'Anna
2012-08-01
Full Text Available The final ranking of a championship is determined by quality attributes combined with other factors which should be filtered out of any decision on relegation or draft for upper level tournaments. Factors like referees' mistakes and difficulty of certain matches due to its accidental importance to the opponents should have their influence reduced. This work tests approaches to combine classification rules considering the imprecision of the number of points as a measure of quality and of the variables that provide reliable explanation for it. Two home-advantage variables are tested and shown to be apt to enter as explanatory variables. Independence between the criteria is checked against the hypothesis of maximal correlation. The importance of factors and of composition rules is evaluated on the basis of correlation between rank vectors, number of classes and number of clubs in tail classes. Data from five years of the Brazilian Soccer Championship are analyzed.
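Correlation between rank vectors, one of the evaluation criteria mentioned above, can be sketched with Spearman's formula. The two rank vectors below are hypothetical, not data from the Brazilian championship.

```python
# Spearman rank correlation between two rankings of the same clubs
# (assumes no ties; illustrative data only).
def spearman(r1, r2):
    n = len(r1)
    d2 = sum((a - b) ** 2 for a, b in zip(r1, r2))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

points_rank   = [1, 2, 3, 4, 5, 6]  # final table by points
adjusted_rank = [1, 3, 2, 4, 6, 5]  # after filtering other factors (hypothetical)
print(spearman(points_rank, adjusted_rank))  # ≈ 0.886
```

A value near 1 indicates that the composition rule preserves the points-based ordering; lower values flag rules (or factors) that reshuffle the table substantially.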
Tensor rank is not multiplicative under the tensor product
Christandl, Matthias; Jensen, Asger Kjærulff; Zuiddam, Jeroen
2017-01-01
The tensor rank of a tensor t is the smallest number r such that t can be decomposed as a sum of r simple tensors. Let s be a k-tensor and let t be an l-tensor. The tensor product of s and t is a (k + l)-tensor. Tensor rank is sub-multiplicative under the tensor product. We revisit the connection between restrictions and degenerations. A result of our study is that tensor rank is not in general multiplicative under the tensor product. This answers a question of Draisma and Saptharishi. Specif...
Hierarchical partial order ranking
International Nuclear Information System (INIS)
Carlsen, Lars
2008-01-01
Assessing the potential impact on environmental and human health from the production and use of chemicals or from polluted sites involves a multi-criteria evaluation scheme. A priori, several parameters must be addressed, e.g., production tonnage, specific release scenarios, and geographical and site-specific factors, in addition to various substance-dependent parameters. Further socio-economic factors may be taken into consideration. The number of parameters to be included may well appear prohibitive for developing a sensible model. This study introduces hierarchical partial order ranking (HPOR), which remedies this problem. In HPOR the original parameters are initially grouped based on their mutual connection, and a set of meta-descriptors is derived representing the rankings corresponding to the single groups of descriptors, respectively. A second partial order ranking is then carried out based on the meta-descriptors, the final ranking being disclosed through average ranks. An illustrative example on the prioritisation of polluted sites is given. - Hierarchical partial order ranking of polluted sites has been developed for prioritization based on a large number of parameters
African Journals Online (AJOL)
ISONIC
Abstract. Cardisoma armatum is a species of land crab found in West Africa, in particular in ... optical [microscopy] following histological treatment revealed some identification criteria of the species and ...... In Côte d'Ivoire it is not rare to see during favourable seasons. Cardisoma ...
Halu, Arda; Mondragón, Raúl J; Panzarasa, Pietro; Bianconi, Ginestra
2013-01-01
Many complex systems can be described as multiplex networks in which the same nodes can interact with one another in different layers, thus forming a set of interacting and co-evolving networks. Examples of such multiplex systems are social networks where people are involved in different types of relationships and interact through various forms of communication media. The ranking of nodes in multiplex networks is one of the most pressing and challenging tasks that research on complex networks is currently facing. When pairs of nodes can be connected through multiple links and in multiple layers, the ranking of nodes should necessarily reflect the importance of nodes in one layer as well as their importance in other interdependent layers. In this paper, we draw on the idea of biased random walks to define the Multiplex PageRank centrality measure in which the effects of the interplay between networks on the centrality of nodes are directly taken into account. In particular, depending on the intensity of the interaction between layers, we define the Additive, Multiplicative, Combined, and Neutral versions of Multiplex PageRank, and show how each version reflects the extent to which the importance of a node in one layer affects the importance the node can gain in another layer. We discuss these measures and apply them to an online multiplex social network. Findings indicate that taking the multiplex nature of the network into account helps uncover the emergence of rankings of nodes that differ from the rankings obtained from one single layer. Results provide support in favor of the salience of multiplex centrality measures, like Multiplex PageRank, for assessing the prominence of nodes embedded in multiple interacting networks, and for shedding a new light on structural properties that would otherwise remain undetected if each of the interacting networks were analyzed in isolation.
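A rough sketch in the spirit of the Multiplicative version can be given in a few lines: the centrality computed in one layer biases the teleportation of a PageRank walk in another layer. The exact update rules are defined in the paper; the two-layer network and parameters below are hypothetical.

```python
import numpy as np

def pagerank(A, alpha=0.85, bias=None, iters=200):
    """Power iteration for PageRank on adjacency matrix A, with an optional
    node-bias (teleportation) vector. Dangling columns leak mass, so the
    result is renormalized at the end (a simplification for this sketch)."""
    n = A.shape[0]
    bias = np.ones(n) / n if bias is None else bias / bias.sum()
    out = A.sum(axis=0)
    out[out == 0] = 1.0
    P = A / out                       # column-stochastic where defined
    x = np.ones(n) / n
    for _ in range(iters):
        x = alpha * P @ x + (1 - alpha) * bias
    return x / x.sum()

# Two layers over the same 4 nodes (hypothetical multiplex network).
layer1 = np.array([[0,1,1,1],[1,0,0,0],[1,0,0,0],[1,0,0,0]], float)  # hub at node 0
layer2 = np.array([[0,0,0,0],[0,0,1,1],[0,1,0,1],[0,1,1,0]], float)  # triangle 1-2-3

x1 = pagerank(layer1)
# Multiplicative-style coupling: layer-1 centrality biases the walk in layer 2.
x_multiplex = pagerank(layer2, bias=x1)
# Node 0 is the hub of layer 1 but isolated in layer 2, so its multiplex
# centrality drops -- the interplay between layers changes the ranking.
print(np.round(x_multiplex, 3))
```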
Directory of Open Access Journals (Sweden)
Arda Halu
Full Text Available Many complex systems can be described as multiplex networks in which the same nodes can interact with one another in different layers, thus forming a set of interacting and co-evolving networks. Examples of such multiplex systems are social networks where people are involved in different types of relationships and interact through various forms of communication media. The ranking of nodes in multiplex networks is one of the most pressing and challenging tasks that research on complex networks is currently facing. When pairs of nodes can be connected through multiple links and in multiple layers, the ranking of nodes should necessarily reflect the importance of nodes in one layer as well as their importance in other interdependent layers. In this paper, we draw on the idea of biased random walks to define the Multiplex PageRank centrality measure in which the effects of the interplay between networks on the centrality of nodes are directly taken into account. In particular, depending on the intensity of the interaction between layers, we define the Additive, Multiplicative, Combined, and Neutral versions of Multiplex PageRank, and show how each version reflects the extent to which the importance of a node in one layer affects the importance the node can gain in another layer. We discuss these measures and apply them to an online multiplex social network. Findings indicate that taking the multiplex nature of the network into account helps uncover the emergence of rankings of nodes that differ from the rankings obtained from one single layer. Results provide support in favor of the salience of multiplex centrality measures, like Multiplex PageRank, for assessing the prominence of nodes embedded in multiple interacting networks, and for shedding a new light on structural properties that would otherwise remain undetected if each of the interacting networks were analyzed in isolation.
Groundwater contaminant plume ranking
International Nuclear Information System (INIS)
1988-08-01
Contaminant plumes at Uranium Mill Tailings Remedial Action (UMTRA) Project sites were ranked to assist in Subpart B (i.e., restoration requirements of 40 CFR Part 192) compliance strategies for each site, to prioritize aquifer restoration, and to budget future requests and allocations. The rankings roughly estimate hazards to the environment and human health, and thus assist in determining for which sites cleanup, if appropriate, will provide the greatest benefits for funds available. The rankings are based on scores obtained using the US Department of Energy's (DOE) Modified Hazard Ranking System (MHRS). The MHRS and HRS consider and score three hazard modes for a site: migration, fire and explosion, and direct contact. The migration hazard mode score reflects the potential for harm to humans or the environment from migration of a hazardous substance off a site by groundwater, surface water, and air; it is a composite of separate scores for each of these routes. For ranking the contaminant plumes at UMTRA Project sites, it was assumed that each site had been remediated in compliance with the EPA standards and that relict contaminant plumes were present. Therefore, only the groundwater route was scored, and the surface water and air routes were not considered. Section 2.0 of this document describes the assumptions and procedures used to score the groundwater route, and Section 3.0 provides the resulting scores for each site. 40 tabs
Neutrino mass sum rules and symmetries of the mass matrix
Energy Technology Data Exchange (ETDEWEB)
Gehrlein, Julia [Karlsruhe Institute of Technology, Institut fuer Theoretische Teilchenphysik, Karlsruhe (Germany); Universidad Autonoma de Madrid, Departamento de Fisica Teorica, Madrid (Spain); Instituto de Fisica Teorica UAM/CSIC, Madrid (Spain); Spinrath, Martin [Karlsruhe Institute of Technology, Institut fuer Theoretische Teilchenphysik, Karlsruhe (Germany); National Center for Theoretical Sciences, Physics Division, Hsinchu (China)
2017-05-15
Neutrino mass sum rules have recently regained attention as a powerful tool to discriminate and test various flavour models in the near future. A related question which has not yet been discussed fully satisfactorily is the origin of these sum rules and whether they are related to any residual or accidental symmetry. We address this open issue here systematically and find previous statements confirmed. Namely, the sum rules are not related to any enhanced symmetry of the Lagrangian after family symmetry breaking; they are simply the result of a reduction of free parameters due to skillful model building. (orig.)
Ranking economic history journals
DEFF Research Database (Denmark)
Di Vaio, Gianfranco; Weisdorf, Jacob Louis
2010-01-01
This study ranks - for the first time - 12 international academic journals that have economic history as their main topic. The ranking is based on data collected for the year 2007. Journals are ranked using standard citation analysis where we adjust for age, size and self-citation of journals. We also compare the leading economic history journals with the leading journals in economics in order to measure the influence on economics of economic history, and vice versa. With a few exceptions, our results confirm the general idea about which economic history journals are the most influential for economic history, and that, although economic history is quite independent from economics as a whole, knowledge exchange between the two fields is indeed going on.
Ranking Economic History Journals
DEFF Research Database (Denmark)
Di Vaio, Gianfranco; Weisdorf, Jacob Louis
This study ranks - for the first time - 12 international academic journals that have economic history as their main topic. The ranking is based on data collected for the year 2007. Journals are ranked using standard citation analysis where we adjust for age, size and self-citation of journals. We also compare the leading economic history journals with the leading journals in economics in order to measure the influence on economics of economic history, and vice versa. With a few exceptions, our results confirm the general idea about which economic history journals are the most influential for economic history, and that, although economic history is quite independent from economics as a whole, knowledge exchange between the two fields is indeed going on.
DEFF Research Database (Denmark)
Frandsen, Gudmund Skovbjerg; Frandsen, Peter Frands
2009-01-01
We consider maintaining information about the rank of a matrix under changes of the entries. For n×n matrices, we show an upper bound of O(n^1.575) arithmetic operations and a lower bound of Ω(n) arithmetic operations per element change. The upper bound is valid when changing up to O(n^0.575) entries in a single column of the matrix. We also give an algorithm that maintains the rank using O(n^2) arithmetic operations per rank-one update. These bounds appear to be the first nontrivial bounds for the problem. The upper bounds are valid for arbitrary fields, whereas the lower bound is valid for algebraically closed fields. The upper bound for element updates uses fast rectangular matrix multiplication, and the lower bound involves further development of an earlier technique for proving lower bounds for dynamic computation of rational functions.
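The setting can be illustrated naively: under a rank-one update, the rank of a matrix changes by at most one. This brute-force NumPy check (with hypothetical random data) recomputes the rank from scratch and is not the paper's algorithm, which is far cheaper per update.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((6, 6))
A[:, 5] = A[:, 0] + A[:, 1]      # force a linear dependence: rank 5

u = rng.standard_normal(6)
v = rng.standard_normal(6)
B = A + np.outer(u, v)           # a rank-one update of A

rA = np.linalg.matrix_rank(A)
rB = np.linalg.matrix_rank(B)
print(rA, rB, abs(rA - rB) <= 1)  # the rank moves by at most 1
```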
DEFF Research Database (Denmark)
Gross, Fridolin; Green, Sara
2017-01-01
Systems biologists often distance themselves from reductionist approaches and formulate their aim as understanding living systems “as a whole”. Yet, it is often unclear what kind of reductionism they have in mind, and in what sense their methodologies offer a more comprehensive approach. To addre......-up”. Specifically, we point out that system-level properties constrain lower-scale processes. Thus, large-scale modeling reveals how living systems at the same time are more and less than the sum of the parts....
Borwein, J M; McPhedran, R C
2013-01-01
The study of lattice sums began when early investigators wanted to go from mechanical properties of crystals to the properties of the atoms and ions from which they were built (the literature of Madelung's constant). A parallel literature was built around the optical properties of regular lattices of atoms (initiated by Lord Rayleigh, Lorentz and Lorenz). For over a century many famous scientists and mathematicians have delved into the properties of lattices, sometimes unwittingly duplicating the work of their predecessors. Here, at last, is a comprehensive overview of the substantial body of
Diversifying customer review rankings.
Krestel, Ralf; Dokoohaki, Nima
2015-06-01
E-commerce Web sites owe much of their popularity to consumer reviews accompanying product descriptions. On-line customers spend hours and hours going through heaps of textual reviews to decide which products to buy. At the same time, each popular product has thousands of user-generated reviews, making it impossible for a buyer to read everything. Current approaches to display reviews to users or recommend an individual review for a product are based on the recency or helpfulness of each review. In this paper, we present a framework to rank product reviews by optimizing the coverage of the ranking with respect to sentiment or aspects, or by summarizing all reviews with the top-K reviews in the ranking. To accomplish this, we make use of the assigned star rating for a product as an indicator for a review's sentiment polarity and compare bag-of-words (language model) with topic models (latent Dirichlet allocation) as a means to represent aspects. Our evaluation on manually annotated review data from a commercial review Web site demonstrates the effectiveness of our approach, outperforming plain recency ranking by 30% and obtaining best results by combining language and topic model representations. Copyright © 2015 Elsevier Ltd. All rights reserved.
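The coverage objective can be sketched as a greedy top-K selection over aspect sets. The review names and aspect sets below are hypothetical; the paper derives aspects from language models and LDA topics rather than hand-labeled sets.

```python
# Greedy coverage-based ranking: repeatedly pick the review that adds the most
# not-yet-covered aspects (a sketch of the coverage idea, illustrative data).
reviews = {
    "r1": {"battery", "screen"},
    "r2": {"battery"},
    "r3": {"price", "shipping"},
    "r4": {"screen", "price", "camera"},
}

def rank_by_coverage(reviews, k):
    ranking, covered = [], set()
    pool = dict(reviews)
    for _ in range(k):
        best = max(pool, key=lambda r: len(pool[r] - covered))
        ranking.append(best)
        covered |= pool.pop(best)
    return ranking, covered

ranking, covered = rank_by_coverage(reviews, 2)
print(ranking)  # ['r4', 'r1']
```

The greedy rule is the standard approximation for set-cover-style objectives: the top-2 reviews together already cover battery, screen, price and camera.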
College Rankings. ERIC Digest.
Holub, Tamara
The popularity of college ranking surveys published by "U.S. News and World Report" and other magazines is indisputable, but the methodologies used to measure the quality of higher education institutions have come under fire by scholars and college officials. Criticisms have focused on methodological flaws, such as failure to consider…
DEFF Research Database (Denmark)
Müller, Emmanuel; Assent, Ira; Steinhausen, Uwe
2008-01-01
Outlier detection is an important data mining task for consistency checks, fraud detection, etc. Binary decision making on whether or not an object is an outlier is not appropriate in many applications and moreover hard to parametrize. Thus, recently, methods for outlier ranking have been proposed...
Statistical sums of strings on hyperelliptic surfaces
International Nuclear Information System (INIS)
Lebedev, D.; Morozov, A.
1987-01-01
Contributions of hyperelliptic surfaces to the statistical sums of string theories are presented. Available results on hyperelliptic surfaces give the opportunity to check the factorization of the three-loop statistical sum. Some remarks on the vanishing statistical sum are presented
Improving Ranking Using Quantum Probability
Melucci, Massimo
2011-01-01
The paper shows that ranking information units by quantum probability differs from ranking them by classical probability, provided the same data are used for parameter estimation. As probability of detection (also known as recall or power) and probability of false alarm (also known as fallout or size) measure the quality of ranking, we point out and show that ranking by quantum probability yields higher probability of detection than ranking by classical probability provided a given probability of ...
Momentum sum rules for fragmentation functions
International Nuclear Information System (INIS)
Meissner, S.; Metz, A.; Pitonyak, D.
2010-01-01
Momentum sum rules for fragmentation functions are considered. In particular, we give a general proof of the Schaefer-Teryaev sum rule for the transverse momentum dependent Collins function. We also argue that corresponding sum rules for related fragmentation functions do not exist. Our model-independent analysis is supplemented by calculations in a simple field-theoretical model.
Adiabatic quantum algorithm for search engine ranking.
Garnerone, Silvano; Zanardi, Paolo; Lidar, Daniel A
2012-06-08
We propose an adiabatic quantum algorithm for generating a quantum pure state encoding of the PageRank vector, the most widely used tool in ranking the relative importance of internet pages. We present extensive numerical simulations which provide evidence that this algorithm can prepare the quantum PageRank state in a time which, on average, scales polylogarithmically in the number of web pages. We argue that the main topological feature of the underlying web graph allowing for such a scaling is the out-degree distribution. The top-ranked log(n) entries of the quantum PageRank state can then be estimated with a polynomial quantum speed-up. Moreover, the quantum PageRank state can be used in "q-sampling" protocols for testing properties of distributions, which require exponentially fewer measurements than all classical schemes designed for the same task. This can be used to decide whether to run a classical update of the PageRank.
1991 Acceptance priority ranking
International Nuclear Information System (INIS)
1991-12-01
The Standard Contract for Disposal of Spent Nuclear Fuel and/or High-Level Radioactive Waste (10 CFR Part 961) that the Department of Energy (DOE) has executed with the owners and generators of civilian spent nuclear fuel requires annual publication of the Acceptance Priority Ranking (APR). The 1991 APR details the order in which DOE will allocate Federal waste acceptance capacity. As required by the Standard Contract, the ranking is based on the age of permanently discharged spent nuclear fuel (SNF), with the owners of the oldest SNF, on an industry-wide basis, given the highest priority. The 1991 APR will be the basis for the annual allocation of waste acceptance capacity to the Purchasers in the 1991 Annual Capacity Report (ACR), to be issued later this year. This document is based on SNF discharges as of December 31, 1990, and reflects Purchaser comments and corrections, as appropriate, to the draft APR issued on May 15, 1991
Some relations between rank, chromatic number and energy of graphs
International Nuclear Information System (INIS)
Akbari, S.; Ghorbani, E.; Zare, S.
2006-08-01
The energy of a graph G is defined as the sum of the absolute values of all eigenvalues of G, denoted E(G). Let G be a graph and rank(G) the rank of the adjacency matrix of G. In this paper we characterize all graphs with E(G) = rank(G). Among other results, we show that apart from a few families of graphs, E(G) ≥ 2 max(χ(G), n − χ(Ḡ)), where Ḡ and χ(G) are the complement and the chromatic number of G, respectively. Moreover, some new lower bounds for E(G) in terms of rank(G) are given. (author)
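Both quantities are easy to compute for a small example. For the complete graph K3 the adjacency eigenvalues are {2, −1, −1}, so E(G) = 4 while rank(G) = 3; this sketch checks that numerically.

```python
import numpy as np

# Adjacency matrix of the complete graph K3.
A = np.array([[0, 1, 1],
              [1, 0, 1],
              [1, 1, 0]], dtype=float)

eigs = np.linalg.eigvalsh(A)          # symmetric matrix: real eigenvalues
energy = np.abs(eigs).sum()           # E(G) = sum of |eigenvalues|
rank = np.linalg.matrix_rank(A)       # rank of the adjacency matrix

print(energy, rank)  # energy ≈ 4.0, rank = 3
```

Here E(G) > rank(G); the paper characterizes exactly which graphs attain equality.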
International Nuclear Information System (INIS)
Menezes, Filipe; Fedorov, Alexander; Baleizão, Carlos; Berberan-Santos, Mário N; Valeur, Bernard
2013-01-01
Ensemble fluorescence decays are usually analyzed with a sum of exponentials. However, broad continuous distributions of lifetimes, either unimodal or multimodal, occur in many situations. A simple and flexible fitting function for these cases that encompasses the exponential is the Becquerel function. In this work, the applicability of the Becquerel function for the analysis of complex decays of several kinds is tested. For this purpose, decays of mixtures of four different fluorescence standards (binary, ternary and quaternary mixtures) are measured and analyzed. For binary and ternary mixtures, the expected sum of narrow distributions is well recovered from the Becquerel functions analysis, if the correct number of components is used. For ternary mixtures, however, satisfactory fits are also obtained with a number of Becquerel functions smaller than the true number of fluorophores in the mixture, at the expense of broadening the lifetime distributions of the fictitious components. The quaternary mixture studied is well fitted with both a sum of three exponentials and a sum of two Becquerel functions, showing the inevitable loss of information when the number of components is large. Decays of a fluorophore in a heterogeneous environment, known to be represented by unimodal and broad continuous distributions (as previously obtained by the maximum entropy method), are also measured and analyzed. It is concluded that these distributions can be recovered by the Becquerel function method with an accuracy similar to that of the much more complex maximum entropy method. It is also shown that the polar (or phasor) plot is not always helpful for ascertaining the degree (and kind) of complexity of a fluorescence decay. (paper)
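The conventional sum-of-exponentials analysis that the Becquerel-function approach generalizes can be sketched as a nonlinear least-squares fit to synthetic data. All amplitudes, lifetimes and noise levels below are hypothetical, not the paper's standards mixtures.

```python
import numpy as np
from scipy.optimize import curve_fit

# Synthetic two-component decay plus small Gaussian noise (illustrative data).
rng = np.random.default_rng(1)
t = np.linspace(0, 20, 400)
true_decay = 0.7 * np.exp(-t / 2.0) + 0.3 * np.exp(-t / 8.0)
data = true_decay + rng.normal(0, 0.002, t.size)

def biexp(t, a1, tau1, a2, tau2):
    """Sum of two exponentials: the standard ensemble-decay model."""
    return a1 * np.exp(-t / tau1) + a2 * np.exp(-t / tau2)

popt, _ = curve_fit(biexp, t, data, p0=[0.5, 1.0, 0.5, 5.0])
print(np.round(popt, 2))  # close to the true parameters [0.7, 2.0, 0.3, 8.0]
```

For decays with underlying continuous lifetime distributions, such discrete-exponential fits are exactly where the Becquerel function offers a more flexible alternative.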
Ranking Baltic States Researchers
Directory of Open Access Journals (Sweden)
Gyula Mester
2017-10-01
Full Text Available In this article, using the h-index and the total number of citations, the best 10 Lithuanian, Latvian and Estonian researchers from several disciplines are ranked. The list may be formed based on the h-index and the total number of citations given in the Web of Science, Scopus, Publish or Perish and Google Scholar databases. Data for the first 10 researchers are presented. Google Scholar is the most complete; therefore, to define a single indicator, the h-index calculated by Google Scholar may be a good and simple one. The author chooses the Google Scholar database as it is the broadest one.
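The h-index used for such rankings is straightforward to compute from per-paper citation counts. The researchers and citation numbers below are hypothetical, not any actual record from the article.

```python
# h-index: the largest h such that at least h papers have >= h citations each.
def h_index(citations):
    cs = sorted(citations, reverse=True)
    h = 0
    for i, c in enumerate(cs, start=1):
        if c >= i:
            h = i
        else:
            break
    return h

researchers = {"A": [25, 8, 5, 3, 3, 1], "B": [50, 2, 1], "C": [6, 6, 6, 6, 6, 6]}
ranking = sorted(researchers, key=lambda r: h_index(researchers[r]), reverse=True)
print([(r, h_index(researchers[r])) for r in ranking])  # [('C', 6), ('A', 3), ('B', 2)]
```

Note how B's single highly cited paper does not lift the h-index, which is precisely why the article pairs it with the total citation count.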
International Nuclear Information System (INIS)
Marrakchi, A.E.L.; Tapia, V.
1992-05-01
Some cosmological implications of the recently proposed fourth-rank theory of gravitation are studied. The model exhibits the possibility of being free from the horizon and flatness problems at the price of introducing a negative pressure. The field equations we obtain are compatible with k_obs = 0 and Ω_obs … t_class ≈ 10^20 t_Planck ≈ 10^-23 s. When interpreted in the light of General Relativity the treatment is shown to be almost equivalent to that of the standard model of cosmology combined with the inflationary scenario. Hence, an interpretation of the negative pressure hypothesis is provided. (author). 8 refs
University Rankings and Social Science
Marginson, S.
2014-01-01
University rankings widely affect the behaviours of prospective students and their families, university executive leaders, academic faculty, governments and investors in higher education. Yet the social science foundations of global rankings receive little scrutiny. Rankings that simply recycle reputation without any necessary connection to real outputs are of no common value. It is necessary that rankings be soundly based in scientific terms if a virtuous relationship between performance and...
University Rankings and Social Science
Marginson, Simon
2014-01-01
University rankings widely affect the behaviours of prospective students and their families, university executive leaders, academic faculty, governments and investors in higher education. Yet the social science foundations of global rankings receive little scrutiny. Rankings that simply recycle reputation without any necessary connection to real…
Sum rules for quasifree scattering of hadrons
Peterson, R. J.
2018-02-01
The areas dσ/dΩ of fitted quasifree scattering peaks from bound nucleons for continuum hadron-nucleus spectra measuring d²σ/dΩdω are converted to sum rules akin to the Coulomb sums familiar from continuum electron scattering spectra from nuclear charge. Hadronic spectra with or without charge exchange of the beam are considered. These sums are compared to the simple expectations of a nonrelativistic Fermi gas, including a Pauli blocking factor. For scattering without charge exchange, the hadronic sums are below this expectation, as also observed with Coulomb sums. For charge exchange spectra, the sums are near or above the simple expectation, with larger uncertainties. The strong role of hadron-nucleon in-medium total cross sections is noted from use of the Glauber model.
Extremum uncertainty product and sum states
Energy Technology Data Exchange (ETDEWEB)
Mehta, C L; Kumar, S [Indian Inst. of Tech., New Delhi. Dept. of Physics
1978-01-01
The extremum product states and sum states of the uncertainties in non-commuting observables have been examined. These are illustrated by two specific examples: the harmonic oscillator and the angular momentum states. It is shown that the coherent states of the harmonic oscillator are characterized by the minimum uncertainty sum ⟨(Δq)²⟩ + ⟨(Δp)²⟩. The extremum values of the sums and products of the uncertainties of the components of the angular momentum are also obtained.
QCD Sum Rules, a Modern Perspective
Colangelo, Pietro; Colangelo, Pietro; Khodjamirian, Alexander
2001-01-01
An introduction to the method of QCD sum rules is given for those who want to learn how to use this method. Furthermore, we discuss various applications of sum rules, from the determination of quark masses to the calculation of hadronic form factors and structure functions. Finally, we explain the idea of the light-cone sum rules and outline the recent development of this approach.
Decompounding random sums: A nonparametric approach
DEFF Research Database (Denmark)
Hansen, Martin Bøgsted; Pitts, Susan M.
Observations from sums of random variables with a random number of summands, known as random, compound or stopped sums, arise within many areas of engineering and science. Quite often it is desirable to infer properties of the distribution of the terms in the random sum. In the present paper we review a number of applications and consider the nonlinear inverse problem of inferring the cumulative distribution function of the components in the random sum. We review the existing literature on non-parametric approaches to the problem. The models amenable to the analysis are generalized considerably…
Sum rules in the response function method
International Nuclear Information System (INIS)
Takayanagi, Kazuo
1990-01-01
Sum rules in the response function method are studied in detail. A sum rule can be obtained theoretically by integrating the imaginary part of the response function over the excitation energy with a corresponding energy weight. Generally, the response function is calculated perturbatively in terms of the residual interaction, and the expansion can be described by diagrammatic methods. In this paper, we present a classification of the diagrams so as to clarify which diagram has what contribution to which sum rule. This will allow us to get insight into the contributions to the sum rules of all the processes expressed by Goldstone diagrams. (orig.)
A novel three-stage distance-based consensus ranking method
Aghayi, Nazila; Tavana, Madjid
2018-05-01
In this study, we propose a three-stage weighted sum method for identifying the group ranks of alternatives. In the first stage, a rank matrix, similar to the cross-efficiency matrix, is obtained by computing the individual rank position of each alternative based on importance weights. In the second stage, a secondary goal is defined to limit the vector of weights since the vector of weights obtained in the first stage is not unique. Finally, in the third stage, the group rank position of alternatives is obtained based on a distance of individual rank positions. The third stage determines a consensus solution for the group so that the ranks obtained have a minimum distance from the ranks acquired by each alternative in the previous stage. A numerical example is presented to demonstrate the applicability and exhibit the efficacy of the proposed method and algorithms.
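The third stage described above chooses a group ranking at minimum distance from the individual rank positions. A brute-force sketch of that idea under simplifying assumptions (absolute rank distance, exhaustive search over permutations; the paper's actual optimization model is more elaborate):

```python
from itertools import permutations

def consensus_ranks(rank_matrix):
    """rank_matrix[i][j] is the rank of alternative i under weight vector j.
    Return the group ranking (1 = best) minimizing the total absolute
    distance to the individual rank positions (exhaustive search, small n)."""
    n = len(rank_matrix)
    best, best_cost = None, float("inf")
    for perm in permutations(range(1, n + 1)):
        cost = sum(abs(perm[i] - r) for i in range(n) for r in rank_matrix[i])
        if cost < best_cost:
            best, best_cost = perm, cost
    return list(best)

# three alternatives ranked under two different weight vectors
print(consensus_ranks([[1, 2], [2, 1], [3, 3]]))  # → [1, 2, 3]
```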
Selection of suitable e-learning approach using TOPSIS technique with best ranked criteria weights
Mohammed, Husam Jasim; Kasim, Maznah Mat; Shaharanee, Izwan Nizal Mohd
2017-11-01
This paper compares the performances of four rank-based weighting assessment techniques, Rank Sum (RS), Rank Reciprocal (RR), Rank Exponent (RE), and Rank Order Centroid (ROC), on five identified e-learning criteria in order to select the best weighting method. A total of 35 experts in a public university in Malaysia were asked to rank the criteria and to evaluate five e-learning approaches: blended learning, flipped classroom, ICT-supported face-to-face learning, synchronous learning, and asynchronous learning. The best ranked criteria weights, defined as the weights with the least total absolute difference from the geometric mean of all weights, were then used to select the most suitable e-learning approach with the TOPSIS method. The results show that the RR weights are the best, and that the flipped classroom is the most suitable approach. This paper thus develops a decision framework to aid decision makers (DMs) in choosing the most suitable weighting method for solving MCDM problems.
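The four rank-based weighting schemes compared in this record have standard textbook formulas. A sketch (the exponent p for RE is an assumption; the record does not state the value used):

```python
def rank_weights(n, method="RS", p=2):
    """Criteria weights from importance ranks r = 1..n (1 = most important).
    RS: rank sum; RR: rank reciprocal; RE: rank exponent (exponent p is an
    assumption here); ROC: rank order centroid."""
    ranks = range(1, n + 1)
    if method == "RS":
        raw = [n - r + 1 for r in ranks]
    elif method == "RR":
        raw = [1.0 / r for r in ranks]
    elif method == "RE":
        raw = [(n - r + 1) ** p for r in ranks]
    elif method == "ROC":
        raw = [sum(1.0 / k for k in range(r, n + 1)) / n for r in ranks]
    else:
        raise ValueError(method)
    total = sum(raw)
    return [w / total for w in raw]

print(rank_weights(5, "ROC"))  # ROC is the steepest scheme for the top rank
```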
Tensor rank is not multiplicative under the tensor product
M. Christandl (Matthias); A. K. Jensen (Asger Kjærulff); J. Zuiddam (Jeroen)
2017-01-01
textabstractThe tensor rank of a tensor is the smallest number r such that the tensor can be decomposed as a sum of r simple tensors. Let s be a k-tensor and let t be an l-tensor. The tensor product of s and t is a (k + l)-tensor (not to be confused with the "tensor Kronecker product" used in
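As an illustration of the definition quoted above, a tensor has rank at most r if it can be written as a sum of r simple (outer-product) tensors. A small pure-Python sketch:

```python
def simple_tensor(u, v, w):
    """A simple (rank-one) 3-tensor u (x) v (x) w as nested lists."""
    return [[[ui * vj * wk for wk in w] for vj in v] for ui in u]

def add(s, t):
    """Entrywise sum of two 3-tensors of the same shape."""
    return [[[s[i][j][k] + t[i][j][k] for k in range(len(s[0][0]))]
             for j in range(len(s[0]))] for i in range(len(s))]

# rank <= 2: the 2x2x2 "diagonal" tensor written as a sum of two simple tensors
d = add(simple_tensor([1, 0], [1, 0], [1, 0]),
        simple_tensor([0, 1], [0, 1], [0, 1]))
print(d)  # → [[[1, 0], [0, 0]], [[0, 0], [0, 1]]]
```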
Rankings, creatividad y urbanismo
Directory of Open Access Journals (Sweden)
JOAQUÍN SABATÉ
2008-08-01
Full Text Available Competition between cities is one of the driving factors of urban renewal processes, and rankings have become instruments for measuring the quality of cities. We examine the case of an old industrial quarter currently being transformed into a "creative" district through a large-scale urban intervention. Its analysis reveals three critical points. First, it obliges us to reconsider the definition of urban innovation and how past, identity and memory are integrated into the construction of the future. Second, it leads us to understand that innovation and knowledge do not arise by chance, but are the fruit of a long and complex network involving forms of knowledge, spaces, actors and institutions diverse in nature, scale and magnitude. Finally, it obliges us to reflect on the value attributed to the local in urban renewal processes.
Ranking nodes in growing networks: When PageRank fails.
Mariani, Manuel Sebastian; Medo, Matúš; Zhang, Yi-Cheng
2015-11-10
PageRank is arguably the most popular ranking algorithm which is being applied in real systems ranging from information to biological and infrastructure networks. Despite its outstanding popularity and broad use in different areas of science, the relation between the algorithm's efficacy and properties of the network on which it acts has not yet been fully understood. We study here PageRank's performance on a network model supported by real data, and show that realistic temporal effects make PageRank fail in individuating the most valuable nodes for a broad range of model parameters. Results on real data are in qualitative agreement with our model-based findings. This failure of PageRank reveals that the static approach to information filtering is inappropriate for a broad class of growing systems, and suggests that time-dependent algorithms that are based on the temporal linking patterns of these systems are needed to better rank the nodes.
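For reference, the static PageRank discussed above can be sketched as a simple power iteration (a minimal didactic version, not the time-aware variants the authors argue for):

```python
def pagerank(adj, d=0.85, iters=100):
    """Static PageRank by power iteration on an adjacency dict
    {node: [out-neighbours]}; dangling nodes spread their mass uniformly."""
    nodes = list(adj)
    n = len(nodes)
    pr = {u: 1.0 / n for u in nodes}
    for _ in range(iters):
        new = {u: (1.0 - d) / n for u in nodes}
        for u in nodes:
            if adj[u]:
                share = d * pr[u] / len(adj[u])
                for v in adj[u]:
                    new[v] += share
            else:  # dangling node: teleport everywhere
                for v in nodes:
                    new[v] += d * pr[u] / n
        pr = new
    return pr

pr = pagerank({'a': ['b'], 'b': ['a', 'c'], 'c': ['a']})
print({u: round(s, 3) for u, s in pr.items()})
```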
Some Finite Sums Involving Generalized Fibonacci and Lucas Numbers
Directory of Open Access Journals (Sweden)
E. Kılıç
2011-01-01
Full Text Available By considering Melham's sums (Melham, 2004), we compute various more general nonalternating sums, alternating sums, and sums that alternate according to (−1)^(n(n+1)/2) involving the generalized Fibonacci and Lucas numbers.
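As a concrete instance of sums of this kind, a sketch using a generalized Fibonacci recurrence U_{k+1} = pU_k + qU_{k-1} and the classical nonalternating sum F_1 + … + F_n = F_{n+2} − 1 (a textbook identity, not one of the paper's new sums):

```python
def gen_fib(n, p=1, q=1, f0=0, f1=1):
    """First n+1 generalized Fibonacci numbers U_0..U_n with
    U_{k+1} = p*U_k + q*U_{k-1}; p = q = 1 gives the Fibonacci numbers."""
    seq = [f0, f1]
    for _ in range(n - 1):
        seq.append(p * seq[-1] + q * seq[-2])
    return seq[:n + 1]

F = gen_fib(12)
# classical nonalternating sum: F_1 + ... + F_n = F_{n+2} - 1
assert sum(F[1:11]) == F[12] - 1
print(gen_fib(5, p=2))  # → [0, 1, 2, 5, 12, 29], the Pell numbers
```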
Where Does Latin "Sum" Come From?
Nyman, Martti A.
1977-01-01
The derivation of Latin "sum,""es(s),""est" from Indo-European "esmi,""est,""esti" involves methodological problems. It is claimed here that the development of "sum" from "esmi" is related to the origin of the variation "est-st" (less than"esti"). The study is primarily concerned with this process, but chronological suggestions are also made. (CHK)
Compound sums and their applications in finance
R. Helmers (Roelof); B. Tarigan
2003-01-01
textabstractCompound sums arise frequently in insurance (total claim size in a portfolio) and in accountancy (total error amount in audit populations). As the normal approximation for compound sums usually performs very badly, one may look for better methods for approximating the distribution of a
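The poor normal approximation noted above is easy to reproduce by simulation: compound sums of skewed claim sizes put far more mass in the right tail than a normal fit predicts. A sketch with compound Poisson sums of Pareto claims; the parameters are illustrative assumptions:

```python
import math
import random

def compound_poisson_sample(lam, claim, size, rng):
    """Draw `size` realizations of S = X_1 + ... + X_N with N ~ Poisson(lam)."""
    out = []
    for _ in range(size):
        # Poisson draw by CDF inversion (stdlib-only sketch)
        n, p, target = 0, math.exp(-lam), rng.random()
        acc = p
        while acc < target:
            n += 1
            p *= lam / n
            acc += p
        out.append(sum(claim(rng) for _ in range(n)))
    return out

rng = random.Random(1)
# skewed claim sizes (Pareto, shape 3): the right tail is heavier than normal
s = compound_poisson_sample(2.0, lambda r: r.paretovariate(3.0), 20000, rng)
mean = sum(s) / len(s)
sd = (sum((x - mean) ** 2 for x in s) / len(s)) ** 0.5
tail = sum(x > mean + 3 * sd for x in s) / len(s)
print(mean, tail)  # a normal approximation would put ~0.0013 beyond mean+3sd
```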
Gauss Sum Factorization with Cold Atoms
International Nuclear Information System (INIS)
Gilowski, M.; Wendrich, T.; Mueller, T.; Ertmer, W.; Rasel, E. M.; Jentsch, Ch.; Schleich, W. P.
2008-01-01
We report the first implementation of a Gauss sum factorization algorithm by an internal state Ramsey interferometer using cold atoms. A sequence of appropriately designed light pulses interacts with an ensemble of cold rubidium atoms. The final population in the involved atomic levels determines a Gauss sum. With this technique we factor the number N=263193.
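The number-theoretic criterion behind the interferometric readout can be sketched numerically: the truncated Gauss sum has unit modulus exactly at trial factors dividing N (a classical computation, not a model of the atomic physics):

```python
import cmath

def gauss_sum(N, ell, M):
    """|A_N^(M)(ell)|, the modulus of the truncated Gauss sum;
    it equals 1 exactly when the trial factor ell divides N."""
    total = sum(cmath.exp(-2j * cmath.pi * m * m * N / ell) for m in range(M))
    return abs(total) / M

N = 263193  # the number factored in the experiment reported above
candidates = [ell for ell in range(2, 200) if gauss_sum(N, ell, 10) > 0.99]
print(candidates)  # → [3, 7, 21, 83, 151], the divisors of N below 200
```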
Shapley Value for Constant-sum Games
Khmelnitskaya, A.B.
2002-01-01
It is proved that Young's axiomatization for the Shapley value by marginalism, efficiency, and symmetry is still valid for the Shapley value defined on the class of nonnegative constant-sum games and on the entire class of constant-sum games as well. To support an interest to study the class of
Superconvergent sum rules for the normal reflectivity
International Nuclear Information System (INIS)
Furuya, K.; Zimerman, A.H.; Villani, A.
1976-05-01
Families of superconvergent relations for the normal reflectivity function are written. Sum rules connecting the difference of phases of the reflectivities of two materials are also considered. Finally superconvergence relations and sum rules for magneto-reflectivity in the Faraday and Voigt regimes are also studied
Neophilia Ranking of Scientific Journals.
Packalen, Mikko; Bhattacharya, Jay
2017-01-01
The ranking of scientific journals is important because of the signal it sends to scientists about what is considered most vital for scientific progress. Existing ranking systems focus on measuring the influence of a scientific paper (citations); these rankings do not reward journals for publishing innovative work that builds on new ideas. We propose an alternative ranking based on the proclivity of journals to publish papers that build on new ideas, and we implement this ranking via a text-based analysis of all published biomedical papers dating back to 1946. In addition, we compare our neophilia ranking to citation-based (impact factor) rankings; this comparison shows that the two ranking approaches are distinct. Prior theoretical work suggests an active role for our neophilia index in science policy. Absent an explicit incentive to pursue novel science, scientists underinvest in innovative work because of a coordination problem: for work on a new idea to flourish, many scientists must decide to adopt it in their work. Rankings that are based purely on influence thus do not provide sufficient incentives for publishing innovative work. By contrast, adoption of the neophilia index as part of journal-ranking procedures by funding agencies and university administrators would provide an explicit incentive for journals to publish innovative work and thus help solve the coordination problem by increasing scientists' incentives to pursue innovative work.
Sums and products of sets and estimates of rational trigonometric sums in fields of prime order
Energy Technology Data Exchange (ETDEWEB)
Garaev, Mubaris Z [National Autonomous University of Mexico, Institute of Mathematics (Mexico)
2010-11-16
This paper is a survey of main results on the problem of sums and products of sets in fields of prime order and their applications to estimates of rational trigonometric sums. Bibliography: 85 titles.
Current algebra sum rules for Reggeons
Carlitz, R
1972-01-01
The interplay between the constraints of chiral SU(2)×SU(2) symmetry and Regge asymptotic behaviour is investigated. The author reviews the derivation of various current algebra sum rules in a study of the reaction π + α → π + β. These sum rules imply that all particles may be classified in multiplets of SU(2)×SU(2) and that each of these multiplets may contain linear combinations of an infinite number of physical states. Extending his study to the reaction π + α → π + π + β, he derives new sum rules involving commutators of the axial charge with the reggeon coupling matrices of the ρ and f Regge trajectories. Some applications of these new sum rules are noted, and the general utility of these and related sum rules is discussed. (17 refs).
Study of QCD medium by sum rules
Energy Technology Data Exchange (ETDEWEB)
Mallik, S [Saha Institute of Nuclear Physics, Calcutta (India)
1998-08-01
Though it has no analogue in condensed matter physics, the thermal QCD sum rules can, nevertheless, answer questions of condensed matter type about the QCD medium. The ingredients needed to write such sum rules, viz. the operator product expansion and the spectral representation at finite temperature, are reviewed in detail. The sum rules are then actually written for the case of correlation function of two vector currents. Collecting information on the thermal average of the higher dimension operators from other sources, we evaluate these sum rules for the temperature dependent ρ-meson parameters. Possibility of extracting more information from the combined set of all sum rules from different correlation functions is also discussed. (author) 30 refs., 2 figs.
Coloring sums of extensions of certain graphs
Directory of Open Access Journals (Sweden)
Johan Kok
2017-12-01
Full Text Available We recall that the minimum number of colors that allow a proper coloring of graph $G$ is called the chromatic number of $G$ and denoted $\chi(G)$. Motivated by the introduction of the concept of the $b$-chromatic sum of a graph, the concepts of $\chi'$-chromatic sum and $\chi^+$-chromatic sum are introduced in this paper. The extended graph $G^x$ of a graph $G$ was recently introduced for certain regular graphs. This paper furthers the concepts of $\chi'$-chromatic sum and $\chi^+$-chromatic sum to extended paths and cycles. Bipartite graphs also receive some attention. The paper concludes with patterned structured graphs; these last-mentioned graphs are typically found in chemical and biological structures.
Jackknife Variance Estimator for Two Sample Linear Rank Statistics
1988-11-01
Keywords: strong consistency; linear rank test; influence function.
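A delete-one jackknife variance estimate for a two-sample rank statistic, as in this report's setting, can be sketched as follows (the normalized Mann–Whitney statistic serves as the linear rank statistic; a simplified textbook variant, not the report's exact estimator):

```python
def rank_sum_stat(x, y):
    """Normalized Mann-Whitney statistic: the fraction of (x, y) pairs
    with x greater, an estimate of P(X > Y)."""
    return sum(1.0 for xi in x for yj in y if xi > yj) / (len(x) * len(y))

def jackknife_var(x, y):
    """Delete-one jackknife variance estimate, jackknifing each sample
    in turn (a simplified textbook variant)."""
    m, n = len(x), len(y)
    ps_x = [rank_sum_stat(x[:i] + x[i + 1:], y) for i in range(m)]
    ps_y = [rank_sum_stat(x, y[:j] + y[j + 1:]) for j in range(n)]
    mx, my = sum(ps_x) / m, sum(ps_y) / n
    vx = (m - 1) / m * sum((t - mx) ** 2 for t in ps_x)
    vy = (n - 1) / n * sum((t - my) ** 2 for t in ps_y)
    return vx + vy

print(jackknife_var([1, 3], [2, 4]))  # → 0.125
```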
VaRank: a simple and powerful tool for ranking genetic variants
Directory of Open Access Journals (Sweden)
Véronique Geoffroy
2015-03-01
Full Text Available Background. Most genetic disorders are caused by single nucleotide variations (SNVs) or small insertions/deletions (indels). High throughput sequencing has broadened the catalogue of human variation, including common polymorphisms, rare variations or disease causing mutations. However, identifying one variation among hundreds or thousands of others is still a complex task for biologists, geneticists and clinicians. Results. We have developed VaRank, a command-line tool for the ranking of genetic variants detected by high-throughput sequencing. VaRank scores and prioritizes variants annotated either by Alamut Batch or SnpEff. A barcode allows users to quickly view the presence/absence of variants (with homozygote/heterozygote status) in analyzed samples. VaRank supports the commonly used VCF input format for variants analysis, thus allowing it to be easily integrated into NGS bioinformatics analysis pipelines. VaRank has been successfully applied to disease-gene identification as well as to molecular diagnostics setup for several hundred patients. Conclusions. VaRank is implemented in Tcl/Tk, a scripting language which is platform-independent but has been tested only on a Unix environment. The source code is available under the GNU GPL, and together with sample data and detailed documentation can be downloaded from http://www.lbgi.fr/VaRank/.
African Journals Online (AJOL)
project that used the World Wide Web to assess the health needs of secondary school students, and tested the web's ..... running water, indoor bathroom, refrigerator, gas or electric stove, metal or wooden bed, sofa, bicycle, car (motor vehicle),.
African Journals Online (AJOL)
associated with knowing the partner's HIV status and discussion about HIV testing prior to seeking services, while for women it was .... In 2001, women were 50% of people living with HIV (Ministry of ...... Tracking a multisectoral approach.
A New Sum Analogous to Gauss Sums and Its Fourth Power Mean
Directory of Open Access Journals (Sweden)
Shaofeng Ru
2014-01-01
Full Text Available The main purpose of this paper is to use the analytic methods and the properties of Gauss sums to study the computational problem of one kind of new sum analogous to Gauss sums and give an interesting fourth power mean and a sharp upper bound estimate for it.
Energy Technology Data Exchange (ETDEWEB)
Weber, G. F.; Laudal, D. L.
1989-01-01
This work is a compilation of reports on ongoing research at the University of North Dakota. Topics include: Control Technology and Coal Preparation Research (SO_x/NO_x control, waste management), Advanced Research and Technology Development (turbine combustion phenomena, combustion inorganic transformation, coal/char reactivity, liquefaction reactivity of low-rank coals, gasification ash and slag characterization, fine particulate emissions), Combustion Research (fluidized bed combustion, beneficiation of low-rank coals, combustion characterization of low-rank coal fuels, diesel utilization of low-rank coals), Liquefaction Research (low-rank coal direct liquefaction), and Gasification Research (hydrogen production from low-rank coals, advanced wastewater treatment, mild gasification, color and residual COD removal from Synfuel wastewaters, Great Plains Gasification Plant, gasifier optimization).
Ranking Specific Sets of Objects.
Maly, Jan; Woltran, Stefan
2017-01-01
Ranking sets of objects based on an order between the single elements has been thoroughly studied in the literature. In particular, it has been shown that it is in general impossible to find a total ranking, jointly satisfying properties such as dominance and independence, on the whole power set of objects. However, in many applications certain elements from the entire power set might not be required and can be neglected in the ranking process. For instance, certain sets might be ruled out due to hard constraints or might not satisfy some background theory. In this paper, we treat the computational problem of whether an order on a given subset of the power set of elements, satisfying different variants of dominance and independence, can be found, given a ranking on the elements. We show that this problem is tractable for partial rankings and NP-complete for total rankings.
Wikipedia ranking of world universities
Lages, José; Patt, Antoine; Shepelyansky, Dima L.
2016-03-01
We use the directed networks between articles of 24 Wikipedia language editions to produce the Wikipedia Ranking of World Universities (WRWU) using the PageRank, 2DRank and CheiRank algorithms. This approach allows us to incorporate various cultural views on world universities using mathematical statistical analysis independent of cultural preferences. The Wikipedia ranking of the top 100 universities provides about 60% overlap with the Shanghai university ranking, demonstrating the reliability of this approach. At the same time, the WRWU incorporates all knowledge accumulated in the 24 Wikipedia editions, giving stronger weight to historically important universities and leading to a different estimation of the efficiency of world countries in university education. The historical development of university ranking is analyzed over the ten centuries of university history.
Efficient Rank Reduction of Correlation Matrices
I. Grubisic (Igor); R. Pietersz (Raoul)
2005-01-01
textabstractGeometric optimisation algorithms are developed that efficiently find the nearest low-rank correlation matrix. We show, in numerical tests, that our methods compare favourably to the existing methods in the literature. The connection with the Lagrange multiplier method is established,
A generalization of Friedman's rank statistic
Kroon, de J.; Laan, van der P.
1983-01-01
In this paper a very natural generalization of the two-way analysis of variance rank statistic of FRIEDMAN is given. The general distribution-free test procedure based on this statistic for the effect of J treatments in a random block design can be applied in general two-way layouts without
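Friedman's two-way analysis of variance rank statistic, which the paper generalizes, can be sketched as:

```python
def friedman_statistic(blocks):
    """Friedman's chi-square-type rank statistic for a two-way layout.
    `blocks` is a list of b blocks, each holding one observation per
    treatment; within-block ties get midranks."""
    b, k = len(blocks), len(blocks[0])
    rank_sums = [0.0] * k
    for obs in blocks:
        for j, v in enumerate(obs):
            less = sum(1 for w in obs if w < v)
            equal = sum(1 for w in obs if w == v)
            rank_sums[j] += less + (equal + 1) / 2.0
    return 12.0 * sum(r * r for r in rank_sums) / (b * k * (k + 1)) \
        - 3.0 * b * (k + 1)

# identical ordering in every block gives the maximal value b*(k-1)
print(friedman_statistic([[1, 2, 3], [1, 2, 3], [1, 2, 3]]))  # → 6.0
```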
CSIR Research Space (South Africa)
O'Quigley, DGF
1997-01-01
Full Text Available This paper reports the results of a comprehensive investigation into the abrasion resistance of WC-Co alloys, as measured by the ASTM Standard B 611-85 test. The alloys ranged from 3 to 50 wt% and from 0.6 to 5 μm average grain size. Careful...
A cross-benchmark comparison of 87 learning to rank methods
Tax, N.; Bockting, S.; Hiemstra, D.
2015-01-01
Learning to rank is an increasingly important scientific field that comprises the use of machine learning for the ranking task. New learning to rank methods are generally evaluated on benchmark test collections. However, comparison of learning to rank methods based on evaluation results is hindered
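Benchmark evaluation of learning-to-rank methods typically reports rank-quality metrics such as NDCG. A minimal sketch of NDCG@k (standard log2 positional discount, linear gains; these are assumptions, since the record does not fix a metric):

```python
import math

def ndcg_at_k(relevances, k):
    """NDCG@k for a ranked list of graded relevance labels
    (standard log2 positional discount, linear gains)."""
    def dcg(rels):
        return sum(r / math.log2(i + 2) for i, r in enumerate(rels[:k]))
    ideal = dcg(sorted(relevances, reverse=True))
    return dcg(relevances) / ideal if ideal > 0 else 0.0

print(ndcg_at_k([3, 2, 1, 0], 4))  # → 1.0 (already perfectly ordered)
```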
Fast local fragment chaining using sum-of-pair gap costs
DEFF Research Database (Denmark)
Otto, Christian; Hoffmann, Steve; Gorodkin, Jan
2011-01-01
, and rank the fragments to improve the specificity. Results: Here we present a fast and flexible fragment chainer that for the first time also supports a sum-of-pair gap cost model. This model has proven to achieve a higher accuracy and sensitivity in its own field of application. Due to a highly time...... alignment heuristics alone. By providing both the linear and the sum-of-pair gap cost model, a wider range of application can be covered. The software clasp is available at http://www.bioinf.uni-leipzig.de/Software/clasp/....
Social class rank, essentialism, and punitive judgment.
Kraus, Michael W; Keltner, Dacher
2013-08-01
Recent evidence suggests that perceptions of social class rank influence a variety of social cognitive tendencies, from patterns of causal attribution to moral judgment. In the present studies we tested the hypotheses that upper-class rank individuals would be more likely to endorse essentialist lay theories of social class categories (i.e., that social class is founded in genetically based, biological differences) than would lower-class rank individuals and that these beliefs would decrease support for restorative justice, which seeks to rehabilitate offenders rather than punish unlawful action. Across studies, higher social class rank was associated with increased essentialism of social class categories (Studies 1, 2, and 4) and decreased support for restorative justice (Study 4). Moreover, manipulated essentialist beliefs decreased preferences for restorative justice (Study 3), and the association between social class rank and class-based essentialist theories was explained by the tendency to endorse beliefs in a just world (Study 2). Implications for how class-based essentialist beliefs potentially constrain social opportunity and mobility are discussed.
International Nuclear Information System (INIS)
Akbari, S.; Khosrovshahi, G.B.; Mofidi, A.
2010-07-01
Let D be a t-(v, k, λ) design and let N_i(D), for 1 ≤ i ≤ t, be the higher incidence matrix of D, a (0, 1)-matrix of size (v choose i) × b, where b is the number of blocks of D. A zero-sum flow of D is a nowhere-zero real vector in the null space of N_1(D). A zero-sum k-flow of D is a zero-sum flow with values in {±1, …, ±(k−1)}. In this paper we show that every non-symmetric design admits an integral zero-sum flow, and consequently we conjecture that every non-symmetric design admits a zero-sum 5-flow. Similarly, the definition of zero-sum flow can be extended to N_i(D), 1 ≤ i ≤ t. Let D = t-(v, k, (v−t choose k−t)) be the complete design. We conjecture that N_t(D) admits a zero-sum 3-flow and prove this conjecture for t = 2. (author)
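The t = 2 case proved in the paper can be checked by brute force on a tiny instance: the complete 2-design on v = 4 points with block size k = 2 admits a zero-sum 3-flow, i.e. nowhere-zero block values in {±1, ±2} summing to zero on every point. A sketch:

```python
from itertools import combinations, product

def zero_sum_flow(v, k, values=(-2, -1, 1, 2)):
    """Brute-force search for a zero-sum flow of the complete 2-design on v
    points with block size k: nowhere-zero block values whose sum over the
    blocks through each point is zero."""
    blocks = list(combinations(range(v), k))
    for f in product(values, repeat=len(blocks)):
        if all(sum(f[i] for i, b in enumerate(blocks) if p in b) == 0
               for p in range(v)):
            return dict(zip(blocks, f))
    return None

# values in {+-1, +-2}, i.e. a zero-sum 3-flow, for the complete design on 4 points
flow = zero_sum_flow(4, 2)
print(flow is not None)  # → True
```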
Discovering author impact: A PageRank perspective
Yan, Erjia; Ding, Ying
2010-01-01
This article provides an alternative perspective for measuring author impact by applying PageRank algorithm to a coauthorship network. A weighted PageRank algorithm considering citation and coauthorship network topology is proposed. We test this algorithm under different damping factors by evaluating author impact in the informetrics research community. In addition, we also compare this weighted PageRank with the h-index, citation, and program committee (PC) membership of the International So...
Fixed mass and scaling sum rules
International Nuclear Information System (INIS)
Ward, B.F.L.
1975-01-01
Using the correspondence principle (continuity in dynamics), the approach of Keppell-Jones-Ward-Taha to fixed mass and scaling current algebraic sum rules is extended so as to consider explicitly the contributions of all classes of intermediate states. A natural, generalized formulation of the truncation ideas of Cornwall, Corrigan, and Norton is introduced as a by-product of this extension. The formalism is illustrated in the familiar case of the spin-independent Schwinger term sum rule. New sum rules are derived which relate the Regge residue functions of the respective structure functions to their fixed hadronic mass limits for q² → ∞. (Auth.)
Directory of Open Access Journals (Sweden)
Irene Machado
2008-11-01
Full Text Available The burning topics of research in a given field of knowledge are not always presented in an organic, organized way to the community of researchers who address them. Scientific journals that set out to make such topics accessible to their readers can rarely harmonize them without running the risk of undue approximations. The only way to avoid dangerous misunderstandings is to embrace the idiosyncrasy of the diversified set of themes that constitutes the field in question. The reader who now begins a dialogue with this seventh issue of Galáxia should not take this preamble as a warning, but as the journal's attempt to remain consistent with its commitment to serve as a voice for the themes and problems of communication and culture through the prism of the semiotic theories that guide the eyes of the various contributors who find in this space a forum open to the traffic of differences. A glance at the table of contents of this issue confirms this claim. The texts that make up the Forum section, their singularities respected, deal with themes that are dear to approaches to communication and semiotics in culture. We have the privilege of publishing the first Portuguese translation of a text by Jakob von Uexküll in which the author presents his theory of the Umwelt, characterizing formulations of biosemiotics about the meaning of the surroundings or surrounding space, which are valuable for understanding perception, interaction, context, information, and codes in environments of semiosis. From another standpoint, one modulated by the logic of language, Lucrécia Ferrara scrutinizes the conceptual field that understands design not through the lens of operativity but as a semiotic-cognitive process. The third corner of what may be a trialogue is given by Vilém Flusser's communicology. For Michael Hanke, Flusser was one of the great theorists to investigate the importance of media for the
A Bayesian analysis of QCD sum rules
International Nuclear Information System (INIS)
Gubler, Philipp; Oka, Makoto
2011-01-01
A new technique has recently been developed, in which the Maximum Entropy Method is used to analyze QCD sum rules. This approach has the virtue of being able to directly generate the spectral function of a given operator, without the need of making an assumption about its specific functional form. To investigate whether useful results can be extracted within this method, we have first studied the vector meson channel, where QCD sum rules are traditionally known to provide a valid description of the spectral function. Our results show a significant peak in the region of the experimentally observed ρ-meson mass, which is in agreement with earlier QCD sum rules studies and suggests that the Maximum Entropy Method is a strong tool for analyzing QCD sum rules.
Sum rules for nuclear collective excitations
International Nuclear Information System (INIS)
Bohigas, O.
1978-07-01
Characterizations of the response function and of integral properties of the strength function via a moment expansion are discussed. Sum rule expressions for the moments in the RPA are derived. The validity of these sum rules for both density independent and density dependent interactions is proved. For forces of the Skyrme type, analytic expressions for the plus one and plus three energy weighted sum rules are given for isoscalar monopole and quadrupole operators. From these, a close relationship between the monopole and quadrupole energies is shown and their dependence on incompressibility and effective mass is studied. The inverse energy weighted sum rule is computed numerically for the monopole operator, and an upper bound for the width of the monopole resonance is given. Finally the reliability of moments given by the RPA with effective interactions is discussed using simple soluble models for the hamiltonian, and also by comparison with experimental data
3He electron scattering sum rules
International Nuclear Information System (INIS)
Kim, Y.E.; Tornow, V.
1982-01-01
Electron scattering sum rules for ³He are derived with a realistic ground-state wave function. The theoretical results are compared with the experimentally measured integrated cross sections. (author)
Sum formulas for reductive algebraic groups
DEFF Research Database (Denmark)
Andersen, Henning Haahr; Kulkarni, Upendra
2008-01-01
\\supset V^1 \\cdots \\supset V^r = 0$. The sum of the positive terms in this filtration satisfies a well known sum formula. If $T$ denotes a tilting module either for $G$ or $U_q$ then we can similarly filter the space $\\Hom_G(V,T)$, respectively $\\Hom_{U_q}(V,T)$ and there is a sum formula for the positive...... terms here as well. We give an easy and unified proof of these two (equivalent) sum formulas. Our approach is based on an Euler type identity which we show holds without any restrictions on $p$ or $l$. In particular, we get rid of previous such restrictions in the tilting module case....
Gaussian sum rules for optical functions
International Nuclear Information System (INIS)
Kimel, I.
1981-12-01
A new (Gaussian) type of sum rules (GSR) for several optical functions is presented. The functions considered are: dielectric permeability, refractive index, energy loss function, rotatory power and ellipticity (circular dichroism). While reducing to the usual type of sum rules in a certain limit, the GSR contain in general a Gaussian factor that serves to improve convergence. GSR might be useful in analysing experimental data. (Author)
Structural relations between nested harmonic sums
International Nuclear Information System (INIS)
Bluemlein, J.
2008-07-01
We describe the structural relations between nested harmonic sums emerging in the description of physical single scale quantities up to the 3-loop level in renormalizable gauge field theories. These are weight w=6 harmonic sums. We identify universal basic functions which allow to describe a large class of physical quantities and derive their complex analysis. For the 3-loop QCD Wilson coefficients 35 basic functions are required, whereas a subset of 15 describes the 3-loop anomalous dimensions. (orig.)
The Gross-Llewellyn Smith sum rule
International Nuclear Information System (INIS)
Scott, W.G.
1981-01-01
We present the most recent data on the Gross-Llewellyn Smith sum rule obtained from the combined BEBC Narrow Band Neon and GGM-PS Freon neutrino/antineutrino experiments. The data for the Gross-Llewellyn Smith sum rule as a function of q² suggest a smaller value for the QCD coupling constant parameter Λ than is obtained from the analysis of the higher moments. (author)
PageRank tracker: from ranking to tracking.
Gong, Chen; Fu, Keren; Loza, Artur; Wu, Qiang; Liu, Jia; Yang, Jie
2014-06-01
Video object tracking is widely used in many real-world applications, and it has been extensively studied for over two decades. However, tracking robustness is still an issue in most existing methods, due to the difficulties of adapting to environmental or target changes. In order to improve adaptability, this paper formulates the tracking process as a ranking problem and applies the PageRank algorithm, the well-known webpage ranking algorithm used by Google. Labeled and unlabeled samples in the tracking application are analogous to query webpages and the webpages to be ranked, respectively. Therefore, determining the target is equivalent to finding the unlabeled sample most strongly associated with the existing labeled set. We modify the conventional PageRank algorithm in three aspects for the tracking application: graph construction, PageRank vector acquisition, and target filtering. Our simulations with the use of various challenging public-domain video sequences reveal that the proposed PageRank tracker outperforms the mean-shift tracker, co-tracker, semiboosting and beyond-semiboosting trackers in terms of accuracy, robustness and stability.
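As a rough illustration of the machinery this tracker builds on, here is a minimal PageRank power iteration. The graph, damping factor, and dangling-node handling below are generic textbook choices, not details of the paper's modified algorithm.

```python
def pagerank(links, d=0.85, tol=1e-10):
    """Power-iteration PageRank.

    links: dict mapping node -> list of nodes it links to.
    Dangling nodes (no out-links) spread their rank uniformly.
    """
    nodes = sorted(set(links) | {v for outs in links.values() for v in outs})
    n = len(nodes)
    rank = {u: 1.0 / n for u in nodes}
    while True:
        new = {u: (1.0 - d) / n for u in nodes}
        for u in nodes:
            outs = links.get(u, [])
            if outs:
                share = d * rank[u] / len(outs)
                for v in outs:
                    new[v] += share
            else:  # dangling node
                for v in nodes:
                    new[v] += d * rank[u] / n
        if sum(abs(new[u] - rank[u]) for u in nodes) < tol:
            return new
        rank = new

# Tiny graph: "c" is linked to by both "a" and "b", and links back to "a"
r = pagerank({"a": ["c"], "b": ["c"], "c": ["a"]})
```

In the tracking analogy, the labeled (query) samples would seed the teleportation vector rather than the uniform `(1 - d) / n` term used in this generic sketch.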
Transition sum rules in the shell model
Lu, Yi; Johnson, Calvin W.
2018-03-01
Sum rules are an important characterization of electromagnetic and weak transitions in atomic nuclei. We focus on the non-energy-weighted sum rule (NEWSR), or total strength, and the energy-weighted sum rule (EWSR); the ratio of the EWSR to the NEWSR is the centroid or average energy of transition strengths from a nuclear initial state to all allowed final states. These sum rules can be expressed as expectation values of operators, which in the case of the EWSR is a double commutator. While most prior applications of the double commutator have been to special cases, we derive general formulas for matrix elements of both operators in a shell-model framework (occupation space), given the input matrix elements for the nuclear Hamiltonian and for the transition operator. With these new formulas, we easily evaluate centroids of transition strength functions, with no need to calculate daughter states. We apply this simple tool to a number of nuclides and demonstrate that the sum rules follow smooth secular behavior as a function of initial energy, as well as compare the electric dipole (E1) sum rule against the famous Thomas-Reiche-Kuhn version. We also find surprising systematic behaviors for ground-state electric quadrupole (E2) centroids in the sd shell.
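The centroid relation described above (EWSR divided by NEWSR) can be illustrated directly from a discrete strength function; the toy energies and strengths below are invented for the example, not data from the paper.

```python
def newsr(strengths):
    # Non-energy-weighted sum rule (total strength): S0 = sum_f B(f)
    return sum(B for _, B in strengths)

def ewsr(strengths):
    # Energy-weighted sum rule: S1 = sum_f (E_f - E_i) B(f), taking E_i = 0
    return sum(E * B for E, B in strengths)

def centroid(strengths):
    # Average transition energy = EWSR / NEWSR
    return ewsr(strengths) / newsr(strengths)

# Toy strength function: (excitation energy in MeV, strength B)
toy = [(10.0, 1.0), (15.0, 2.0), (20.0, 1.0)]
```

For this toy distribution the centroid is (10 + 30 + 20) / 4 = 15 MeV, i.e. the strength-weighted mean energy.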
Using incomplete citation data for MEDLINE results ranking.
Herskovic, Jorge R; Bernstam, Elmer V
2005-01-01
Information overload is a significant problem for modern medicine. Searching MEDLINE for common topics often retrieves more relevant documents than users can review. Therefore, we must identify documents that are not only relevant, but also important. Our system ranks articles using citation counts and the PageRank algorithm, incorporating data from the Science Citation Index. However, citation data is usually incomplete. Therefore, we explore the relationship between the quantity of citation information available to the system and the quality of the result ranking. Specifically, we test the ability of citation count and PageRank to identify "important articles" as defined by experts from large result sets with decreasing citation information. We found that PageRank performs better than simple citation counts, but both algorithms are surprisingly robust to information loss. We conclude that even an incomplete citation database is likely to be effective for importance ranking.
International Nuclear Information System (INIS)
Heald, D.
1983-01-01
Accurate investment plans are difficult to make for projects such as power stations because of the timescale over which economic growth, technical performance and relative fuel prices must be estimated. For the Sizewell B PWR reactor this is 7½ years for construction and 35 years for operating life. Past errors in forecasting, the political opposition from environmental groups and the Conservative government's dislike of nationalized industries have all combined to make the Central Electricity Generating Board (CEGB) defensive about the figures it uses to promote its power station building programme. These figures are questioned here in a summary of a report by the Electricity Consumers' Council. In particular it is suggested that the CEGB's estimate of the net effective cost was systematically biased in favour of nuclear power over coal, that the sensitivity analysis was inadequate, that the 5% discount rate chosen by the CEGB favours nuclear power with its higher capital costs, and that the external presentation of results of past investments was misleading. The CEGB in its submission to the Sizewell B enquiry has rectified the first two criticisms. However, the continued use of the 5% discount rate without sensitivity tests is questioned. A table of comparative generation costs is also discussed. (U.K.)
Prototyping a Distributed Information Retrieval System That Uses Statistical Ranking.
Harman, Donna; And Others
1991-01-01
Built using a distributed architecture, this prototype distributed information retrieval system uses statistical ranking techniques to provide better service to the end user. Distributed architecture was shown to be a feasible alternative to centralized or CD-ROM information retrieval, and user testing of the ranking methodology showed both…
Directory of Open Access Journals (Sweden)
Rohmah Neng
2018-01-01
Full Text Available Only 15% of the industries in the Citarum Watershed, specifically in Bandung Regency, West Bandung Regency, Sumedang Regency, Bandung City and Cimahi City, are registered as PROPER industries. They must comply with indicators set in the Minister of Environment and Forestry Decree No. 3 of 2014 concerning Industrial Performance Rank in Environmental Management, as a requirement to apply for PROPER. Wastewater treatment and management, referencing the Minister of Environment and Forestry Decree No. 5 of 2014 concerning Wastewater Effluent Standards, must be performed to be registered as a PROPER industry. Monitoring only the physical-chemical parameters of wastewater is insufficient to determine the safety of wastewater discharged into the river; therefore, additional toxicity tests involving a bioindicator are required to determine the acute toxicity characteristics of the wastewater. The acute toxicity test quantifies the LC50 value based on the death response of bioindicators at given dosages. Daphnia magna was used as the bioindicator in the toxicity test, and probit software for the analysis. In 2015-2016, the number of industries discharging wastewater exceeding the standard was greater among non-PROPER industries than among PROPER industries. Based on the toxicity level, both PROPER and non-PROPER industries have toxic properties; however, the PROPER industries of 2015-2016 were more toxic, with an LC50 (96 h) value reaching 2.79%.
Universal scaling in sports ranking
International Nuclear Information System (INIS)
Deng Weibing; Li Wei; Cai Xu; Bulou, Alain; Wang Qiuping A
2012-01-01
Ranking is a ubiquitous phenomenon in human society. On the web pages of Forbes, one may find all kinds of rankings, such as the world's most powerful people, the world's richest people, the highest-earning tennis players, and so on and so forth. Herewith, we study a specific kind—sports ranking systems in which players' scores and/or prize money are accrued based on their performances in different matches. By investigating 40 data samples which span 12 different sports, we find that the distributions of scores and/or prize money follow universal power laws, with exponents nearly identical for most sports. In order to understand the origin of this universal scaling we focus on the tennis ranking systems. By checking the data we find that, for any pair of players, the probability that the higher-ranked player tops the lower-ranked opponent is proportional to the rank difference between the pair. Such a dependence can be well fitted to a sigmoidal function. By using this feature, we propose a simple toy model which can simulate the competition of players in different matches. The simulations yield results consistent with the empirical findings. Extensive simulation studies indicate that the model is quite robust with respect to the modifications of some parameters. (paper)
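A minimal simulation sketch loosely inspired by the toy model described above, under assumptions: the sigmoidal win-probability form is taken from the abstract, but the `scale` parameter, match counts, and scoring rule below are illustrative choices, not the fitted values from the study.

```python
import math
import random

def better_player_win_prob(rank_diff, scale=50.0):
    """Sigmoidal probability that the higher-ranked player wins,
    as a function of the (non-negative) rank difference.
    The scale parameter is a hypothetical choice for illustration."""
    return 1.0 / (1.0 + math.exp(-rank_diff / scale))

def play_season(n_players=100, n_matches=5000, points=10, seed=1):
    """Accrue scores over random pairwise matches; player index is used
    as a fixed rank proxy (0 = best)."""
    random.seed(seed)
    score = [0] * n_players
    for _ in range(n_matches):
        i, j = random.sample(range(n_players), 2)
        hi, lo = min(i, j), max(i, j)  # smaller index = higher rank
        winner = hi if random.random() < better_player_win_prob(lo - hi) else lo
        score[winner] += points
    return score
```

With equal ranks the model gives a fair coin (probability 0.5), and the advantage grows smoothly with rank difference; the resulting score distribution can then be inspected for heavy tails.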
Expansion around half-integer values, binomial sums, and inverse binomial sums
International Nuclear Information System (INIS)
Weinzierl, Stefan
2004-01-01
I consider the expansion of transcendental functions in a small parameter around rational numbers. This includes in particular the expansion around half-integer values. I present algorithms which are suitable for an implementation within a symbolic computer algebra system. The method is an extension of the technique of nested sums. The algorithms allow in addition the evaluation of binomial sums, inverse binomial sums and generalizations thereof
QCD sum rules in a Bayesian approach
International Nuclear Information System (INIS)
Gubler, Philipp; Oka, Makoto
2011-01-01
A novel technique is developed, in which the Maximum Entropy Method is used to analyze QCD sum rules. The main advantage of this approach lies in its ability to directly generate the spectral function of a given operator. This is done without the need of making an assumption about the specific functional form of the spectral function, such as the 'pole + continuum' ansatz that is frequently used in QCD sum rule studies. Therefore, with this method it should in principle be possible to distinguish narrow pole structures from continuum states. To check whether meaningful results can be extracted within this approach, we have first investigated the vector meson channel, where QCD sum rules are traditionally known to provide a valid description of the spectral function. Our results exhibit a significant peak in the region of the experimentally observed ρ-meson mass, which agrees with earlier QCD sum rules studies and shows that the Maximum Entropy Method is a useful tool for analyzing QCD sum rules.
International Nuclear Information System (INIS)
Frahm, K M; Shepelyansky, D L; Chepelianskii, A D
2012-01-01
We set up a directed network tracing links from a given integer to its divisors and analyze the properties of the Google matrix of this network. The PageRank vector of this matrix is computed numerically, and it is shown that its probability is approximately inversely proportional to the PageRank index, being similar to the Zipf law and to the dependence established for the World Wide Web. The spectrum of the Google matrix of integers is characterized by a large gap and a relatively small number of nonzero eigenvalues. A simple semi-analytical expression for the PageRank of integers is derived that allows us to find this vector for matrices of billion size. This network provides a new PageRank order of integers. (paper)
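The divisor network itself is easy to reconstruct. The sketch below builds it for small N and runs a generic power-iteration PageRank (damping 0.85, with the integer 1 treated as a dangling node since it has no proper divisors); the parameters are illustrative, not the billion-scale semi-analytical computation of the paper.

```python
def divisor_links(N):
    """Directed network on {1..N}: n links to each proper divisor of n."""
    return {n: [d for d in range(1, n) if n % d == 0] for n in range(1, N + 1)}

def pagerank(links, d=0.85, iters=200):
    """Plain power iteration on the Google matrix of the network."""
    nodes = list(links)
    n = len(nodes)
    rank = {u: 1.0 / n for u in nodes}
    for _ in range(iters):
        new = {u: (1.0 - d) / n for u in nodes}
        for u, outs in links.items():
            if outs:
                for v in outs:
                    new[v] += d * rank[u] / len(outs)
            else:  # node 1 is dangling: spread its rank uniformly
                for v in nodes:
                    new[v] += d * rank[u] / n
        rank = new
    return rank

r = pagerank(divisor_links(50))
```

Since every integer links to 1, node 1 dominates the PageRank, and highly divisible small integers outrank large primes (which receive no in-links below N).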
Freudenthal ranks: GHZ versus W
International Nuclear Information System (INIS)
Borsten, L
2013-01-01
The Hilbert space of three-qubit pure states may be identified with a Freudenthal triple system. Every state has a unique Freudenthal rank ranging from 1 to 4, which is determined by a set of automorphism group covariants. It is shown here that the optimal success rates for winning a three-player non-local game, varying over all local strategies, are strictly ordered by the Freudenthal rank of the shared three-qubit resource. (paper)
Ranking Queries on Uncertain Data
Hua, Ming
2011-01-01
Uncertain data is inherent in many important applications, such as environmental surveillance, market analysis, and quantitative economics research. Due to the importance of those applications and the rapidly increasing amounts of uncertain data collected and accumulated, analyzing large collections of uncertain data has become an important task. Ranking queries (also known as top-k queries) are often natural and useful in analyzing uncertain data. Ranking Queries on Uncertain Data discusses the motivations/applications, challenging problems, the fundamental principles, and the evaluation algorithms.
Ranking in evolving complex networks
Liao, Hao; Mariani, Manuel Sebastian; Medo, Matúš; Zhang, Yi-Cheng; Zhou, Ming-Yang
2017-05-01
Complex networks have emerged as a simple yet powerful framework to represent and analyze a wide range of complex systems. The problem of ranking the nodes and the edges in complex networks is critical for a broad range of real-world problems because it affects how we access online information and products, how success and talent are evaluated in human activities, and how scarce resources are allocated by companies and policymakers, among others. This calls for a deep understanding of how existing ranking algorithms perform, and which are their possible biases that may impair their effectiveness. Many popular ranking algorithms (such as Google's PageRank) are static in nature and, as a consequence, they exhibit important shortcomings when applied to real networks that rapidly evolve in time. At the same time, recent advances in the understanding and modeling of evolving networks have enabled the development of a wide and diverse range of ranking algorithms that take the temporal dimension into account. The aim of this review is to survey the existing ranking algorithms, both static and time-aware, and their applications to evolving networks. We emphasize both the impact of network evolution on well-established static algorithms and the benefits from including the temporal dimension for tasks such as prediction of network traffic, prediction of future links, and identification of significant nodes.
Vacuum structure and QCD sum rules
International Nuclear Information System (INIS)
Shifman, M.A.
1992-01-01
The method of the QCD sum rules was and still is one of the most productive tools in a wide range of problems associated with the hadronic phenomenology. Many heuristic ideas, computational devices, specific formulae which are useful to theorists working not only in hadronic physics, have been accumulated in this method. Some of the results and approaches which have originally been developed in connection with the QCD sum rules can be and are successfully applied in related fields, as supersymmetric gauge theories, nontraditional schemes of quarks and leptons, etc. The amount of literature on these and other more basic problems in hadronic physics has grown enormously in recent years. This volume presents a collection of papers which provide an overview of all basic elements of the sum rule approach and priority has been given to the works which seemed most useful from a pedagogical point of view
Experimental results of the betatron sum resonance
International Nuclear Information System (INIS)
Wang, Y.; Ball, M.; Brabson, B.
1993-06-01
The experimental observations of motion near the betatron sum resonance, ν_x + 2ν_z = 13, are presented. A fast quadrupole (a Panofsky-style ferrite picture-frame magnet with a pulsed power supply) producing a betatron tune shift of the order of 0.03 with a rise time of 1 μs was used. This quadrupole was used to produce betatron tunes which jumped past and then crossed back through a betatron sum resonance line. The beam response as a function of initial betatron amplitude was recorded turn by turn. The correlated growth of the action variables, J_x and J_z, was observed. The phase space plots in the resonance frame reveal the features of particle motion near the nonlinear sum resonance region.
Inverse-moment chiral sum rules
International Nuclear Information System (INIS)
Golowich, E.; Kambor, J.
1996-01-01
A general class of inverse-moment sum rules was previously derived by the authors in a chiral perturbation theory (ChPT) study at two-loop order of the isospin and hypercharge vector-current propagators. Here, we address the evaluation of the inverse-moment sum rules in terms of existing data and theoretical constraints. Two kinds of sum rules are seen to occur: those which contain as-yet undetermined O(q⁶) counterterms and those free of such quantities. We use the former to obtain phenomenological evaluations of two O(q⁶) counterterms. Light is shed on the important but difficult issue regarding contributions of higher orders in the ChPT expansion. copyright 1996 The American Physical Society
Least square regularized regression in sum space.
Xu, Yong-Li; Chen, Di-Rong; Li, Han-Xiong; Liu, Lu
2013-04-01
This paper proposes a least square regularized regression algorithm in a sum space of reproducing kernel Hilbert spaces (RKHSs) for nonflat function approximation, and obtains the solution of the algorithm by solving a system of linear equations. This algorithm can approximate the low- and high-frequency components of the target function with large- and small-scale kernels, respectively. The convergence and learning rate are analyzed. We measure the complexity of the sum space by its covering number and demonstrate that the covering number can be bounded by the product of the covering numbers of the basic RKHSs. For a sum space of RKHSs with Gaussian kernels, by choosing appropriate parameters we trade off the sample error and the regularization error, and obtain a polynomial learning rate, which is better than that in any single RKHS. The utility of this method is illustrated with two simulated data sets and five real-life databases.
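A hedged sketch of regression in a sum space of two Gaussian-kernel RKHSs: a kernel ridge fit whose kernel is the sum of a large-scale and a small-scale Gaussian, so smooth trends and sharp wiggles are captured by different components. The widths, regularization parameter, and the tiny linear solver are illustrative choices, not the paper's algorithm details.

```python
import math

def gauss_kernel(x, z, sigma):
    return math.exp(-(x - z) ** 2 / (2.0 * sigma ** 2))

def sum_kernel(x, z, sigmas=(2.0, 0.2)):
    # Sum-space kernel: large-scale + small-scale Gaussian components
    return sum(gauss_kernel(x, z, s) for s in sigmas)

def solve(A, b):
    # Gaussian elimination with partial pivoting (small systems only)
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def fit(xs, ys, lam=1e-6):
    # Solve (K + lam*I) alpha = y for the representer coefficients
    n = len(xs)
    K = [[sum_kernel(xs[i], xs[j]) + (lam if i == j else 0.0) for j in range(n)]
         for i in range(n)]
    return solve(K, list(ys))

def predict(xs, alpha, x):
    return sum(a * sum_kernel(xi, x) for a, xi in zip(alpha, xs))
```

With a tiny regularization parameter the fit nearly interpolates the training data; larger `lam` smooths the estimate toward the large-scale component.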
Systematics of strength function sum rules
Directory of Open Access Journals (Sweden)
Calvin W. Johnson
2015-11-01
Full Text Available Sum rules provide useful insights into transition strength functions and are often expressed as expectation values of an operator. In this letter I demonstrate that non-energy-weighted transition sum rules have strong secular dependences on the energy of the initial state. Such non-trivial systematics have consequences: the simplification suggested by the generalized Brink–Axel hypothesis, for example, does not hold for most cases, though it weakly holds in at least some cases for electric dipole transitions. Furthermore, I show the systematics can be understood through spectral distribution theory, calculated via traces of operators and of products of operators. Seen through this lens, violation of the generalized Brink–Axel hypothesis is unsurprising: one expects sum rules to evolve with excitation energy. Furthermore, to lowest order the slope of the secular evolution can be traced to a component of the Hamiltonian being positive (repulsive) or negative (attractive).
On the Computation of Correctly Rounded Sums
DEFF Research Database (Denmark)
Kornerup, Peter; Lefevre, Vincent; Louvet, Nicolas
2012-01-01
This paper presents a study of some basic blocks needed in the design of floating-point summation algorithms. In particular, in radix-2 floating-point arithmetic, we show that among the set of algorithms with no comparisons performing only floating-point additions/subtractions, the 2Sum algorithm introduced by Knuth is minimal, both in terms of number of operations and depth of the dependency graph. We investigate the possible use of another algorithm, Dekker's Fast2Sum algorithm, in radix-10 arithmetic. We give methods for computing, in radix 10, the floating-point number nearest the average value of two floating-point numbers. We also prove that under reasonable conditions, an algorithm performing only round-to-nearest additions/subtractions cannot compute the round-to-nearest sum of at least three floating-point numbers. Starting from an algorithm due to Boldo and Melquiond, we also…
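The two building blocks discussed, Knuth's 2Sum and Dekker's Fast2Sum, are standard error-free transformations and can be written down exactly (here in Python, whose floats are radix-2 IEEE doubles):

```python
def two_sum(a, b):
    """Knuth's 2Sum: returns (s, e) with s = fl(a + b) and e the exact
    rounding error, so that a + b == s + e exactly.
    Six floating-point operations, no comparisons."""
    s = a + b
    b_virtual = s - a
    a_virtual = s - b_virtual
    b_roundoff = b - b_virtual
    a_roundoff = a - a_virtual
    return s, a_roundoff + b_roundoff

def fast_two_sum(a, b):
    """Dekker's Fast2Sum: only three operations, but requires |a| >= |b|."""
    s = a + b
    return s, b - (s - a)

# 1.0 is lost in the rounded sum, but recovered exactly in the error term:
s, e = two_sum(1e16, 1.0)   # s == 1e16, e == 1.0
```

The error term makes compensated summation possible: accumulating the `e` values alongside the `s` values recovers digits that naive summation throws away.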
Sum rules for the quarkonium systems
International Nuclear Information System (INIS)
Burnel, A.; Caprasse, H.
1980-01-01
In the framework of the radial Schroedinger equation we derive in a very simple way sum rules relating the potential to physical quantities such as the energy eigenvalues and the square of the lth derivative of the eigenfunctions at the origin. These sum rules contain as particular cases well-known results such as the quantum version of the Clausius theorem in classical mechanics as well as Kramers's relations for the Coulomb potential. Several illustrations are given and the possibilities of applying them to the quarkonium systems are considered
Integrals of Lagrange functions and sum rules
Energy Technology Data Exchange (ETDEWEB)
Baye, Daniel, E-mail: dbaye@ulb.ac.be [Physique Quantique, CP 165/82, Universite Libre de Bruxelles, B 1050 Bruxelles (Belgium); Physique Nucleaire Theorique et Physique Mathematique, CP 229, Universite Libre de Bruxelles, B 1050 Bruxelles (Belgium)
2011-09-30
Exact values are derived for some matrix elements of Lagrange functions, i.e. orthonormal cardinal functions, constructed from orthogonal polynomials. They are obtained with exact Gauss quadratures supplemented by corrections. In the particular case of Lagrange-Laguerre and shifted Lagrange-Jacobi functions, sum rules provide exact values for matrix elements of 1/x and 1/x² as well as for the kinetic energy. From these expressions, new sum rules involving Laguerre and shifted Jacobi zeros and weights are derived. (paper)
RANK and RANK ligand expression in primary human osteosarcoma
Directory of Open Access Journals (Sweden)
Daniel Branstetter
2015-09-01
Our results demonstrate that RANKL expression was observed in the tumor element in 68% of human OS samples using IHC. However, the staining intensity was relatively low and only 37% (29/79) of samples exhibited ≥10% RANKL-positive tumor cells. RANK expression was not observed in OS tumor cells. In contrast, RANK expression was clearly observed in other cells within OS samples, including the myeloid osteoclast precursor compartment, osteoclasts and giant osteoclast cells. The intensity and frequency of RANKL and RANK staining in OS samples were substantially less than those observed in GCTB samples. The observation that RANKL is expressed in OS cells themselves suggests that these tumors may mediate an osteoclastic response, and anti-RANKL therapy may potentially be protective against bone pathologies in OS. However, the absence of RANK expression in primary human OS cells suggests that any autocrine RANKL/RANK signaling in human OS tumor cells is not operative, and anti-RANKL therapy would not directly affect the tumor.
Ranking structures and rank-rank correlations of countries: The FIFA and UEFA cases
Ausloos, Marcel; Cloots, Rudi; Gadomski, Adam; Vitanov, Nikolay K.
2014-04-01
Ranking of agents competing with each other in complex systems may lead to paradoxes depending on the pre-chosen measures. A discussion is presented of such rank-rank correlations, similar or not, based on the case of European countries ranked by UEFA and FIFA from different soccer competitions. The first question to be answered is whether an empirical and simple law is obtained for such (self-)organizations of complex sociological systems under such different measuring schemes. It is found that the power-law form is not the best description, contrary to many modern expectations; the stretched exponential is much more adequate. Moreover, it is found that the measuring rules lead to some inner structures in both cases.
Poortvliet, P. Marijn; Janssen, Onne; Van Yperen, N.W.; Van de Vliert, E.
This investigation tested the joint effect of achievement goals and ranking information on information exchange intentions with a commensurate exchange partner. Results showed that individuals with performance goals were less inclined to cooperate with an exchange partner when they had low or high
Acar, Elif F.; Sun, Lei
2012-01-01
Motivated by genetic association studies of SNPs with genotype uncertainty, we propose a generalization of the Kruskal-Wallis test that incorporates group uncertainty when comparing k samples. The extended test statistic is based on probability-weighted rank-sums and follows an asymptotic chi-square distribution with k-1 degrees of freedom under the null hypothesis. Simulation studies confirm the validity and robustness of the proposed test in finite samples. Application to a genome-wide asso...
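A hedged sketch of the probability-weighted rank-sum idea: each observation carries a probability of belonging to each group, and ranks are summed with those weights. The function names are hypothetical, and the sketch stops at the weighted rank sums; the paper's test statistic further normalizes them into an asymptotic chi-square statistic with k − 1 degrees of freedom.

```python
def midranks(values):
    """1-based ranks, with tied values receiving their average rank."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2.0 + 1.0  # average of 1-based positions i..j
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def weighted_rank_sums(values, probs):
    """probs[i][g] = probability that observation i belongs to group g
    (each row sums to 1).  With hard 0/1 memberships this reduces to the
    ordinary Kruskal-Wallis per-group rank sums."""
    r = midranks(values)
    n_groups = len(probs[0])
    return [sum(p[g] * ri for p, ri in zip(probs, r)) for g in range(n_groups)]
```

With genotype uncertainty, the rows of `probs` would come from imputation posteriors instead of hard genotype calls.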
Goehring, Jenny L.; Neff, Donna L.; Baudhuin, Jacquelyn L.; Hughes, Michelle L.
2014-01-01
The first objective of this study was to determine whether adaptive pitch-ranking and electrode-discrimination tasks with cochlear-implant (CI) recipients produce similar results for perceiving intermediate “virtual-channel” pitch percepts using current steering. Previous studies have not examined both behavioral tasks in the same subjects with current steering. A second objective was to determine whether a physiological metric of spatial separation using the electrically evoked compound action potential spread-of-excitation (ECAP SOE) function could predict performance in the behavioral tasks. The metric was the separation index (Σ), defined as the difference in normalized amplitudes between two adjacent ECAP SOE functions, summed across all masker electrodes. Eleven CII or 90 K Advanced Bionics (Valencia, CA) recipients were tested using pairs of electrodes from the basal, middle, and apical portions of the electrode array. The behavioral results, expressed as d′, showed no significant differences across tasks. There was also no significant effect of electrode region for either task. ECAP Σ was not significantly correlated with pitch ranking or electrode discrimination for any of the electrode regions. Therefore, the ECAP separation index is not sensitive enough to predict perceptual resolution of virtual channels. PMID:25480063
Pentaquarks in QCD Sum Rule Approach
International Nuclear Information System (INIS)
Rodrigues da Silva, R.; Matheus, R.D.; Navarra, F.S.; Nielsen, M.
2004-01-01
We estimate the masses of the recently observed pentaquark states Ξ⁻(1862) and Θ⁺(1540) using two kinds of interpolating fields, each containing two highly correlated diquarks, in the QCD sum rule approach. We obtain good agreement with the experimental values, using the standard continuum threshold.
Sums of two-dimensional spectral triples
DEFF Research Database (Denmark)
Christensen, Erik; Ivan, Cristina
2007-01-01
construct a sum of two-dimensional modules which reflects some aspects of the topological dimensions of the compact metric space, but this will only give the metric back approximately. At the end we make an explicit computation of the last module for the unit interval in. The metric is recovered exactly…
Summing threshold logs in a parton shower
International Nuclear Information System (INIS)
Nagy, Zoltan; Soper, Davison E.
2016-05-01
When parton distributions are falling steeply as the momentum fractions of the partons increase, there are effects that occur at each order in α_s that combine to affect hard scattering cross sections and need to be summed. We show how to accomplish this in a leading approximation in the context of a parton shower Monte Carlo event generator.
On Learning Ring-Sum-Expansions
DEFF Research Database (Denmark)
Fischer, Paul; Simon, H. -U.
1992-01-01
The problem of learning ring-sum-expansions from examples is studied. Ring-sum-expansions (RSE) are representations of Boolean functions over the base {∧, ⊕, 1}, which reflect arithmetic operations in GF(2). k-RSE is the class of ring-sum-expansions containing only monomials of length at most k; k-term-RSE is the class of ring-sum-expansions having at most k monomials. It is shown that k-RSE, k ≥ 1, is learnable, while k-term-RSE, k > 2, is not learnable if RP ≠ NP. Without using a complexity-theoretical hypothesis, it is proven that k-RSE, k ≥ 1, and k-term-RSE, k ≥ 2, cannot be learned from positive (negative) examples alone. However, if the restriction that the hypothesis output by the learning algorithm is also a k-RSE is suspended, then k-RSE is learnable from positive (negative) examples only. Moreover, it is proved that 2-term-RSE is learnable by a conjunction…
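The representation itself is easy to illustrate: a ring-sum-expansion is the XOR (ring sum over GF(2)) of AND-monomials. This sketch shows evaluation only, not the learning algorithms the abstract is about.

```python
def eval_rse(monomials, x):
    """Evaluate a ring-sum-expansion over GF(2).

    monomials: iterable of tuples of variable indices; each tuple is an
    AND-monomial, and the empty tuple () is the constant 1.
    x: tuple of 0/1 variable values.
    """
    result = 0
    for mono in monomials:
        term = 1
        for i in mono:
            term &= x[i]   # conjunction = multiplication in GF(2)
        result ^= term     # ring sum = XOR = addition in GF(2)
    return result

# Parity of three bits as a 1-RSE (all monomials of length 1):
parity = [(0,), (1,), (2,)]
```

Every Boolean function has a unique such expansion (its algebraic normal form); the classes k-RSE and k-term-RSE restrict monomial length and monomial count, respectively.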
Stark resonances: asymptotics and distributional Borel sum
International Nuclear Information System (INIS)
Caliceti, E.; Grecchi, V.; Maioli, M.
1993-01-01
We prove that the Stark effect perturbation theory of a class of bound states uniquely determines the position and the width of the resonances by Distributional Borel Sum. In particular the small field asymptotics of the width is uniquely related to the large order asymptotics of the perturbation coefficients. Similar results apply to all the ''resonances'' of the anharmonic and double well oscillators. (orig.)
Fibonacci Identities via the Determinant Sum Property
Spivey, Michael
2006-01-01
We use the sum property for determinants of matrices to give a three-stage proof of an identity involving Fibonacci numbers. Cassini's and d'Ocagne's Fibonacci identities are obtained at the ends of stages one and two, respectively. Catalan's Fibonacci identity is also a special case.
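The identities mentioned in the abstract can be checked numerically; a quick sketch verifying Cassini's identity and its generalization, Catalan's identity:

```python
def fib(n):
    """Iterative Fibonacci with F(0) = 0, F(1) = 1."""
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

# Cassini: F(n-1)*F(n+1) - F(n)^2 = (-1)^n
for n in range(1, 20):
    assert fib(n - 1) * fib(n + 1) - fib(n) ** 2 == (-1) ** n

# Catalan: F(n)^2 - F(n-r)*F(n+r) = (-1)^(n-r) * F(r)^2
for n in range(2, 15):
    for r in range(1, n):
        assert fib(n) ** 2 - fib(n - r) * fib(n + r) == (-1) ** (n - r) * fib(r) ** 2
```

Cassini's identity is the r = 1 special case of Catalan's, mirroring the staged structure of the proof described above.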
Summing threshold logs in a parton shower
Energy Technology Data Exchange (ETDEWEB)
Nagy, Zoltán [DESY,Notkestrasse 85, 22607 Hamburg (Germany); Soper, Davison E. [Institute of Theoretical Science, University of Oregon,Eugene, OR 97403-5203 (United States)
2016-10-05
When parton distributions are falling steeply as the momentum fractions of the partons increases, there are effects that occur at each order in α{sub s} that combine to affect hard scattering cross sections and need to be summed. We show how to accomplish this in a leading approximation in the context of a parton shower Monte Carlo event generator.
Demonstration of a Quantum Nondemolition Sum Gate
DEFF Research Database (Denmark)
Yoshikawa, J.; Miwa, Y.; Huck, Alexander
2008-01-01
The sum gate is the canonical two-mode gate for universal quantum computation based on continuous quantum variables. It represents the natural analogue to a qubit C-NOT gate. In addition, the continuous-variable gate describes a quantum nondemolition (QND) interaction between the quadrature...
Sum rule approach to nuclear vibrations
International Nuclear Information System (INIS)
Suzuki, T.
1983-01-01
Velocity field of various collective states is explored by using sum rules for the nuclear current. It is shown that an irrotational and incompressible flow model is applicable to giant resonance states. Structure of the hydrodynamical states is discussed according to Tomonaga's microscopic theory for collective motions. (author)
Generalizations of some zero sum theorems
Indian Academy of Sciences (India)
Let G be an abelian group of order n, written additively. The Davenport constant D(G) is defined to be the smallest natural number t such that any sequence of length t over G has a non-empty subsequence whose sum is zero. Another combinatorial invariant E(G) (known as the EGZ constant) is the smallest natural number t ...
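For small cyclic groups the Davenport constant can be found by brute force, which illustrates the definition; the function name `davenport_Zn` is illustrative and the exhaustive search is only feasible for tiny n:

```python
from itertools import product, combinations

def davenport_Zn(n):
    """Brute-force D(Z_n): the smallest t such that every length-t sequence
    over Z_n has a non-empty subsequence summing to 0 (mod n)."""
    def has_zero_subseq(seq):
        for r in range(1, len(seq) + 1):
            for sub in combinations(seq, r):
                if sum(sub) % n == 0:
                    return True
        return False

    t = 1
    while True:
        if all(has_zero_subseq(seq) for seq in product(range(n), repeat=t)):
            return t
        t += 1

# For the cyclic group Z_n, D(Z_n) = n; the sequence 1,1,...,1 of
# length n-1 witnesses that t = n-1 is not enough.
values = [davenport_Zn(n) for n in (2, 3, 4, 5)]
```

This confirms D(Z_n) = n on the cases checked, consistent with the classical result for cyclic groups.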
Succinct partial sums and fenwick trees
DEFF Research Database (Denmark)
Bille, Philip; Christiansen, Anders Roy; Prezza, Nicola
2017-01-01
We consider the well-studied partial sums problem in succinct space where one is to maintain an array of n k-bit integers subject to updates such that partial sums queries can be efficiently answered. We present two succinct versions of the Fenwick Tree – which is known for its simplicity...... and practicality. Our results hold in the encoding model where one is allowed to reuse the space from the input data. Our main result is the first that only requires nk + o(n) bits of space while still supporting sum/update in O(log_b n)/O(b log_b n) time where 2 ≤ b ≤ log^O(1) n. The second result shows how optimal...... time for sum/update can be achieved while only slightly increasing the space usage to nk + o(nk) bits. Beyond Fenwick Trees, the results are primarily based on bit-packing and sampling – making them very practical – and they also allow for simple optimal parallelization....
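For reference, the classic (non-succinct) Fenwick tree that the paper compresses supports prefix sums and point updates in O(log n); a standard textbook sketch, not the paper's succinct version:

```python
class FenwickTree:
    """Classic Fenwick tree: prefix-sum queries and point updates
    in O(log n) time, using one extra word per array element."""

    def __init__(self, n):
        self.n = n
        self.tree = [0] * (n + 1)  # 1-indexed internally

    def update(self, i, delta):
        """Add `delta` to element a[i] (0-indexed)."""
        i += 1
        while i <= self.n:
            self.tree[i] += delta
            i += i & (-i)  # jump to the next node covering index i

    def prefix_sum(self, i):
        """Sum of elements a[0..i] inclusive."""
        i += 1
        s = 0
        while i > 0:
            s += self.tree[i]
            i -= i & (-i)  # strip the lowest set bit
        return s

ft = FenwickTree(8)
for idx, v in enumerate([3, 1, 4, 1, 5, 9, 2, 6]):
    ft.update(idx, v)
```

The succinct versions in the paper keep this update/query pattern but pack the nk bits of payload and the tree overhead into nk + o(n) bits.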
Sensitivity ranking for freshwater invertebrates towards hydrocarbon contaminants.
Gerner, Nadine V; Cailleaud, Kevin; Bassères, Anne; Liess, Matthias; Beketov, Mikhail A
2017-11-01
Hydrocarbons are of utmost economic importance but may also cause substantial ecological impacts due to accidents or inadequate transportation and use. Currently, freshwater biomonitoring methods lack an indicator that can unequivocally reflect the impacts caused by hydrocarbons while being independent from effects of other stressors. The aim of the present study was to develop a sensitivity ranking for freshwater invertebrates towards hydrocarbon contaminants, which can be used in hydrocarbon-specific bioindicators. We employed the Relative Sensitivity method and developed the sensitivity ranking S hydrocarbons based on literature ecotoxicological data supplemented with rapid and mesocosm test results. A first validation of the sensitivity ranking based on an earlier field study has been conducted and revealed the S hydrocarbons ranking to be promising for application in sensitivity-based indicators. Thus, the first results indicate that the ranking can serve as the core component of future hydrocarbon-specific and sensitivity trait based bioindicators.
A Case-Based Reasoning Method with Rank Aggregation
Sun, Jinhua; Du, Jiao; Hu, Jian
2018-03-01
In order to improve the accuracy of case-based reasoning (CBR), this paper proposes a new CBR framework based on the principle of rank aggregation. First, ranking methods are defined in each attribute subspace of a case, yielding an ordering relation between cases on each attribute and hence a ranking matrix. Second, similar-case retrieval from the ranking matrix is formulated as a rank aggregation optimization problem using the Kemeny optimal. On this basis, a rank aggregation case-based reasoning algorithm, named RA-CBR, is designed. Experimental results on UCI data sets show that the case retrieval accuracy of RA-CBR is higher than that of Euclidean distance CBR and Mahalanobis distance CBR, so the RA-CBR method can increase both the performance and the efficiency of CBR.
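Kemeny-optimal aggregation, the core of the retrieval step above, can be illustrated by exhaustive search over permutations; this is only feasible for a handful of items, and the function names are illustrative rather than the authors' implementation:

```python
from itertools import permutations

def kendall_tau(r1, r2):
    """Number of item pairs that the two rankings order differently."""
    items = list(r1)
    return sum(
        (r1.index(a) < r1.index(b)) != (r2.index(a) < r2.index(b))
        for i, a in enumerate(items) for b in items[i + 1:]
    )

def kemeny_aggregate(rankings):
    """Exhaustive Kemeny-optimal aggregation: the permutation minimizing
    the total Kendall-tau distance to all input rankings."""
    items = rankings[0]
    return min(
        permutations(items),
        key=lambda cand: sum(kendall_tau(cand, r) for r in rankings),
    )

# three attribute-wise rankings of cases A..D
votes = [("A", "B", "C", "D"), ("A", "C", "B", "D"), ("B", "A", "C", "D")]
consensus = kemeny_aggregate(votes)
```

Exact Kemeny aggregation is NP-hard in general, which is why practical systems like the one described use it only over small candidate sets or resort to approximations.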
Ranking agility factors affecting hospitals in Iran
Directory of Open Access Journals (Sweden)
M. Abdi Talarposht
2017-04-01
Full Text Available Background: Agility is an effective response to a changing and unpredictable environment, using these changes as opportunities for organizational improvement. Objective: The aim of the present study was to rank the factors affecting the agile supply chain of hospitals in Iran. Methods: This applied study was conducted by a cross-sectional descriptive method in 2015 over one year. The research population included managers, administrators, faculty members and experts of the selected hospitals. A total of 260 people were selected as a sample from the health centers. The construct validity of the questionnaire was approved by a confirmatory factor analysis test and its reliability by Cronbach's alpha (α=0.97). All data were analyzed by Kolmogorov-Smirnov, Chi-square and Friedman tests. Findings: The development of staff skills, the use of information technology, the integration of processes, appropriate planning, customer satisfaction and product quality had a significant impact on the agility of public hospitals in Iran (P<0.001). New product introduction earned the highest ranking and the development of staff skills the lowest. Conclusion: New product introduction, market responsiveness and sensitivity, cost reduction, and the integration of organizational processes earned the best ratings for hospital agility in Iran. Therefore, hospital planners and officials should promote the quality and variety of customer-oriented services and provide a basis for investment in hospitals, in order to achieve an agile supply chain in Iran's public hospitals.
Rank-Constrained Beamforming for MIMO Cognitive Interference Channel
Directory of Open Access Journals (Sweden)
Duoying Zhang
2016-01-01
Full Text Available This paper considers the spectrum sharing multiple-input multiple-output (MIMO) cognitive interference channel, in which multiple primary users (PUs) coexist with multiple secondary users (SUs). An interference alignment (IA) approach is introduced that guarantees that secondary users access the licensed spectrum without causing harmful interference to the PUs. A rank-constrained beamforming design is proposed where the rank of the interferences and the desired signals is concerned. The standard interference metric for the primary link, that is, interference temperature, is investigated and redesigned. The work provides a further improvement that optimizes the dimension of the interferences in the cognitive interference channel, instead of the power of the interference leakage. Due to the nonconvexity of the rank, the developed optimization problems are further approximated in convex form and are solved by choosing the transmitter precoder and receiver subspace iteratively. Numerical results show that the proposed designs can improve the achievable degrees of freedom (DoF) of the primary links and provide considerable sum rate for both secondary and primary transmissions under the rank constraints.
Ranking species in mutualistic networks
Domínguez-García, Virginia; Muñoz, Miguel A.
2015-02-01
Understanding the architectural subtleties of ecological networks, believed to confer them enhanced stability and robustness, is a subject of utmost relevance. Mutualistic interactions have been profusely studied and their corresponding bipartite networks, such as plant-pollinator networks, have been reported to exhibit a characteristic ``nested'' structure. Assessing the importance of any given species in mutualistic networks is a key task when evaluating extinction risks and possible cascade effects. Inspired by a recently introduced algorithm (similar in spirit to Google's PageRank but with a built-in non-linearity), here we propose a method which, by exploiting their nested architecture, allows us to derive a sound ranking of species importance in mutualistic networks. This method clearly outperforms other existing ranking schemes and can become very useful for ecosystem management and biodiversity preservation, where decisions on what aspects of ecosystems to explicitly protect need to be made.
Ranking Theory and Conditional Reasoning.
Skovgaard-Olsen, Niels
2016-05-01
Ranking theory is a formal epistemology that has been developed in over 600 pages in Spohn's recent book The Laws of Belief, which aims to provide a normative account of the dynamics of beliefs that presents an alternative to current probabilistic approaches. It has long been received in the AI community, but it has not yet found application in experimental psychology. The purpose of this paper is to derive clear, quantitative predictions by exploiting a parallel between ranking theory and a statistical model called logistic regression. This approach is illustrated by the development of a model for the conditional inference task using Spohn's (2013) ranking theoretic approach to conditionals. Copyright © 2015 Cognitive Science Society, Inc.
University rankings in computer science
DEFF Research Database (Denmark)
Ehret, Philip; Zuccala, Alesia Ann; Gipp, Bela
2017-01-01
This is a research-in-progress paper concerning two types of institutional rankings, the Leiden and QS World ranking, and their relationship to a list of universities’ ‘geo-based’ impact scores, and Computing Research and Education Conference (CORE) participation scores in the field of computer...... science. A ‘geo-based’ impact measure examines the geographical distribution of incoming citations to a particular university’s journal articles for a specific period of time. It takes into account both the number of citations and the geographical variability in these citations. The CORE participation...... score is calculated on the basis of the number of weighted proceedings papers that a university has contributed to either an A*, A, B, or C conference as ranked by the Computing Research and Education Association of Australasia. In addition to calculating the correlations between the distinct university...
Development and first application of an operating events ranking tool
International Nuclear Information System (INIS)
Šimić, Zdenko; Zerger, Benoit; Banov, Reni
2015-01-01
Highlights: • A method using the analytical hierarchy process for ranking operating events is developed and tested. • The method is applied to 5 years of U.S. NRC Licensee Event Reports (1453 events). • Uncertainty and sensitivity of the ranking results are evaluated. • Real events assessment shows the potential of the method for operating experience feedback. - Abstract: The operating experience feedback is important for maintaining and improving safety and availability in nuclear power plants. Detailed investigation of all events is challenging since it requires excessive resources, especially in case of large event databases. This paper presents an event groups ranking method to complement the analysis of individual operating events. The basis for the method is the use of an internationally accepted events characterization scheme that allows different ways of events grouping and ranking. The ranking method itself consists of implementing the analytical hierarchy process (AHP) by means of a custom developed tool which allows events ranking based on ranking indexes pre-determined by expert judgment. Following the development phase, the tool was applied to analyze a complete set of 5 years of real nuclear power plants operating events (1453 events). The paper presents the potential of this ranking method to identify possible patterns throughout the event database and therefore to give additional insights into the events as well as to give quantitative input for the prioritization of further more detailed investigation of selected event groups
Subtracting a best rank-1 approximation may increase tensor rank
Stegeman, Alwin; Comon, Pierre
2010-01-01
It has been shown that a best rank-R approximation of an order-k tensor may not exist when R >= 2 and k >= 3. This poses a serious problem to data analysts using tensor decompositions. It has been observed numerically that, generally, this issue cannot be solved by consecutively computing and
Consistent ranking of volatility models
DEFF Research Database (Denmark)
Hansen, Peter Reinhard; Lunde, Asger
2006-01-01
We show that the empirical ranking of volatility models can be inconsistent for the true ranking if the evaluation is based on a proxy for the population measure of volatility. For example, the substitution of a squared return for the conditional variance in the evaluation of ARCH-type models can...... variance in out-of-sample evaluations rather than the squared return. We derive the theoretical results in a general framework that is not specific to the comparison of volatility models. Similar problems can arise in comparisons of forecasting models whenever the predicted variable is a latent variable....
Evaluating chiral symmetry restoration through the use of sum rules
Directory of Open Access Journals (Sweden)
Rapp Ralf
2012-11-01
Full Text Available We pursue the idea of assessing chiral restoration via in-medium modifications of hadronic spectral functions of chiral partners. The usefulness of sum rules in this endeavor is illustrated, focusing on the vector/axial-vector channel. We first present an update on obtaining quantitative results for pertinent vacuum spectral functions. These serve as a basis upon which the in-medium spectral functions can be constructed. A novel feature of our analysis of the vacuum spectral functions is the need to include excited resonances, dictated by satisfying the Weinberg-type sum rules. This includes excited states in both the vector and axial-vector channels. We also analyze the QCD sum rule for the finite temperature vector spectral function, based on a ρ spectral function tested in dilepton data which develops a shoulder at low energies. We find that the ρ′ peak flattens off which may be a sign of chiral restoration, though a study of the finite temperature axial-vector spectral function remains to be carried out.
Simulation approach to coincidence summing in γ-ray spectrometry
Energy Technology Data Exchange (ETDEWEB)
Dziri, S., E-mail: samir.dziri@iphc.cnrs.fr [Groupe RaMsEs, Institut Pluridisciplinaire Hubert Curien (IPHC), University of Strasbourg, CNRS, IN2P3, UMR 7178, 23 rue de Loess, BP 28, 67037 Strasbourg Cedex 2 (France); Nourreddine, A.; Sellam, A.; Pape, A.; Baussan, E. [Groupe RaMsEs, Institut Pluridisciplinaire Hubert Curien (IPHC), University of Strasbourg, CNRS, IN2P3, UMR 7178, 23 rue de Loess, BP 28, 67037 Strasbourg Cedex 2 (France)
2012-07-15
Some of the radionuclides used for efficiency calibration of a HPGe spectrometer are subject to coincidence-summing (CS) and account must be taken of the phenomenon to obtain quantitative results when counting samples to determine their activity. We have used MCNPX simulations, which do not take CS into account, to obtain γ-ray peak intensities that were compared to those observed experimentally. The loss or gain of a measured peak intensity relative to the simulated peak is attributed to CS. CS correction factors are compared with those of ETNA and GESPECOR. Application to a test sample prepared with known radionuclides gave values close to the published activities. - Highlights: • Coincidence summing occurs when the solid angle is increased. • The loss of counts gives rise to approximate efficiency curves and hence to incorrect quantitative data. • To overcome this problem one needs mono-energetic sources; otherwise, MCNPX simulation allows the coincidence-summing correction factors to be obtained by comparison with the experimental data. • By multiplying these factors by the approximate efficiency, we obtain the accurate efficiency.
Limiting law excess sum rule for polyelectrolytes.
Landy, Jonathan; Lee, YongJin; Jho, YongSeok
2013-11-01
We revisit the mean-field limiting law screening excess sum rule that holds for rodlike polyelectrolytes. We present an efficient derivation of this law that clarifies its region of applicability: The law holds in the limit of small polymer radius, measured relative to the Debye screening length. From the limiting law, we determine the individual ion excess values for single-salt electrolytes. We also consider the mean-field excess sum away from the limiting region, and we relate this quantity to the osmotic pressure of a dilute polyelectrolyte solution. Finally, we consider numerical simulations of many-body polymer-electrolyte solutions. We conclude that the limiting law often accurately describes the screening of physical charged polymers of interest, such as extended DNA.
Geometric optimization and sums of algebraic functions
Vigneron, Antoine E.
2014-01-01
We present a new optimization technique that yields the first FPTAS for several geometric problems. These problems reduce to optimizing a sum of nonnegative, constant description complexity algebraic functions. We first give an FPTAS for optimizing such a sum of algebraic functions, and then we apply it to several geometric optimization problems. We obtain the first FPTAS for two fundamental geometric shape-matching problems in fixed dimension: maximizing the volume of overlap of two polyhedra under rigid motions and minimizing their symmetric difference. We obtain the first FPTAS for other problems in fixed dimension, such as computing an optimal ray in a weighted subdivision, finding the largest axially symmetric subset of a polyhedron, and computing minimum-area hulls.
Second harmonic generation and sum frequency generation
International Nuclear Information System (INIS)
Pellin, M.J.; Biwer, B.M.; Schauer, M.W.; Frye, J.M.; Gruen, D.M.
1990-01-01
Second harmonic generation and sum frequency generation are increasingly being used as in situ surface probes. These techniques are coherent and inherently surface sensitive by the nature of the medium's response to intense laser light. Here we will review these two techniques using aqueous corrosion as an example problem. Aqueous corrosion of technologically important materials such as Fe, Ni and Cr proceeds from a reduced metal surface with layer by layer growth of oxide films mitigated by compositional changes in the chemical makeup of the growing film. Passivation of the metal surface is achieved after growth of only a few tens of atomic layers of metal oxide. Surface Second Harmonic Generation and a related nonlinear laser technique, Sum Frequency Generation, have demonstrated an ability to probe the surface composition of growing films even in the presence of aqueous solutions. 96 refs., 4 figs
Let Us Rank Journalism Programs
Weber, Joseph
2014-01-01
Unlike law, business, and medical schools, as well as universities in general, journalism schools and journalism programs have rarely been ranked. Publishers such as "U.S. News & World Report," "Forbes," "Bloomberg Businessweek," and "Washington Monthly" do not pay them much mind. What is the best…
On Rank Driven Dynamical Systems
Veerman, J. J. P.; Prieto, F. J.
2014-08-01
We investigate a class of models related to the Bak-Sneppen (BS) model, initially proposed to study evolution. The BS model is extremely simple and yet captures some forms of "complex behavior" such as self-organized criticality that is often observed in physical and biological systems. In this model, random fitnesses in [0, 1] are associated with agents located at the vertices of a graph. Their fitnesses are ranked from worst (0) to best (1). At every time-step the agent with the worst fitness and some others with a priori given rank probabilities are replaced by new agents with random fitnesses. We consider two cases: The exogenous case where the new fitnesses are taken from an a priori fixed distribution, and the endogenous case where the new fitnesses are taken from the current distribution as it evolves. We approximate the dynamics by making a simplifying independence assumption. We use Order Statistics and Dynamical Systems to define a rank-driven dynamical system that approximates the evolution of the distribution of the fitnesses in these rank-driven models, as well as in the BS model. For this simplified model we can find the limiting marginal distribution as a function of the initial conditions. Agreement with experimental results of the BS model is excellent.
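The classic Bak-Sneppen dynamics (replacing the worst agent and its two nearest neighbours on a ring, a special case of the rank-driven replacement rules studied above) can be simulated in a few lines; the parameters, seed, and function name are illustrative:

```python
import random

def bak_sneppen(n_agents=100, steps=20000, seed=0):
    """Minimal Bak-Sneppen model on a ring: at each step, replace the
    worst-fitness agent and its two neighbours with fresh uniform fitnesses."""
    rng = random.Random(seed)
    fit = [rng.random() for _ in range(n_agents)]
    for _ in range(steps):
        worst = min(range(n_agents), key=fit.__getitem__)
        for j in (worst - 1, worst, worst + 1):
            fit[j % n_agents] = rng.random()
    return fit

# After a long run the fitness distribution self-organizes: almost all
# values sit above a critical threshold (roughly 0.667 for the ring).
fit = bak_sneppen()
```

Tracking the empirical distribution of `fit` over time is exactly the marginal whose limit the rank-driven dynamical system approximates.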
African Journals Online (AJOL)
maths/stats
... GAUSS-SEIDEL'S NUMERICAL ALGORITHMS IN PAGE RANK ANALYSIS. ... The convergence is guaranteed if the absolute value of the largest eigen ... improved Gauss-Seidel iteration algorithm, based on the decomposition M = D + L + U. ... This corresponds to determining the eigenvector of T with eigenvalue 1.
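The Gauss-Seidel iteration referred to above splits the matrix as M = D + L + U and sweeps the solution vector in place, so each component update uses the freshest values; a minimal generic sketch (illustrative, not the paper's code):

```python
def gauss_seidel(A, b, iters=50):
    """Gauss-Seidel iteration for Ax = b: sweep components in order,
    using already-updated entries of x within the same sweep."""
    n = len(b)
    x = [0.0] * n
    for _ in range(iters):
        for i in range(n):
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            x[i] = (b[i] - s) / A[i][i]
    return x

# diagonally dominant system, so convergence is guaranteed;
# the exact solution is x = (1, 1, 1)
A = [[4.0, 1.0, 0.0],
     [1.0, 3.0, 1.0],
     [0.0, 1.0, 4.0]]
b = [5.0, 5.0, 5.0]
x = gauss_seidel(A, b)
```

In the PageRank setting the same sweep is applied to the linear system derived from the transition matrix, and typically converges in fewer sweeps than Jacobi or power iteration.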
Old tensor mesons in QCD sum rules
International Nuclear Information System (INIS)
Aliev, T.M.; Shifman, M.A.
1981-01-01
Tensor mesons f, A₂ and A₃ are analyzed within the framework of QCD sum rules. The effects of gluon and quark condensates are accounted for phenomenologically. Accurate estimates of meson masses and coupling constants of the lowest-lying states are obtained. It is shown that the masses are reproduced within a theoretical uncertainty of about 80 MeV. The coupling of the f meson to the corresponding quark current is determined. The results are in good agreement with experimental data
Disjoint sum forms in reliability theory
Directory of Open Access Journals (Sweden)
B. Anrig
2014-01-01
Full Text Available The structure function f of a binary monotone system is assumed to be known and given in a disjunctive normal form, i.e. as the logical union of products of the indicator variables of the states of its subsystems. Based on this representation of f, an improved Abraham algorithm is proposed for generating the disjoint sum form of f. This form is the base for subsequent numerical reliability calculations. The approach is generalized to multivalued systems. Examples are discussed.
Singlet axial constant from QCD sum rules
International Nuclear Information System (INIS)
Belitskij, A.V.; Teryaev, O.V.
1995-01-01
We analyze the singlet axial form factor of the proton at small momentum transfer in the framework of QCD sum rules, using the interpolating nucleon current which explicitly accounts for the gluonic degrees of freedom. As a result we come to a quantitative prediction of the singlet axial constant. It is shown that the bilocal power corrections play the most important role in the analysis. 21 refs., 3 figs
Beautiful mesons from QCD spectral sum rules
International Nuclear Information System (INIS)
Narison, S.
1991-01-01
We discuss the beautiful meson from the point of view of the QCD spectral sum rules (QSSR). The bottom quark mass and the mixed light quark-gluon condensates are determined quite accurately. The decay constant f_B is estimated and we present some arguments supporting this result. The decay constants and the masses of the other members of the beautiful meson family are predicted. (orig.)
Multiple graph regularized protein domain ranking
Wang, Jim Jing-Yan
2012-11-19
Background: Protein domain ranking is a fundamental task in structural biology. Most protein domain ranking methods rely on the pairwise comparison of protein domains while neglecting the global manifold structure of the protein domain database. Recently, graph regularized ranking that exploits the global structure of the graph defined by the pairwise similarities has been proposed. However, the existing graph regularized ranking methods are very sensitive to the choice of the graph model and parameters, and this remains a difficult problem for most of the protein domain ranking methods. Results: To tackle this problem, we have developed the Multiple Graph regularized Ranking algorithm, MultiG-Rank. Instead of using a single graph to regularize the ranking scores, MultiG-Rank approximates the intrinsic manifold of protein domain distribution by combining multiple initial graphs for the regularization. Graph weights are learned with ranking scores jointly and automatically, by alternately minimizing an objective function in an iterative algorithm. Experimental results on a subset of the ASTRAL SCOP protein domain database demonstrate that MultiG-Rank achieves a better ranking performance than single graph regularized ranking methods and pairwise similarity based ranking methods. Conclusion: The problem of graph model and parameter selection in graph regularized protein domain ranking can be solved effectively by combining multiple graphs. This aspect of generalization introduces a new frontier in applying multiple graphs to solving protein domain ranking applications. 2012 Wang et al; licensee BioMed Central Ltd.
Multiple graph regularized protein domain ranking
Wang, Jim Jing-Yan; Bensmail, Halima; Gao, Xin
2012-01-01
Background: Protein domain ranking is a fundamental task in structural biology. Most protein domain ranking methods rely on the pairwise comparison of protein domains while neglecting the global manifold structure of the protein domain database. Recently, graph regularized ranking that exploits the global structure of the graph defined by the pairwise similarities has been proposed. However, the existing graph regularized ranking methods are very sensitive to the choice of the graph model and parameters, and this remains a difficult problem for most of the protein domain ranking methods. Results: To tackle this problem, we have developed the Multiple Graph regularized Ranking algorithm, MultiG-Rank. Instead of using a single graph to regularize the ranking scores, MultiG-Rank approximates the intrinsic manifold of protein domain distribution by combining multiple initial graphs for the regularization. Graph weights are learned with ranking scores jointly and automatically, by alternately minimizing an objective function in an iterative algorithm. Experimental results on a subset of the ASTRAL SCOP protein domain database demonstrate that MultiG-Rank achieves a better ranking performance than single graph regularized ranking methods and pairwise similarity based ranking methods. Conclusion: The problem of graph model and parameter selection in graph regularized protein domain ranking can be solved effectively by combining multiple graphs. This aspect of generalization introduces a new frontier in applying multiple graphs to solving protein domain ranking applications. 2012 Wang et al; licensee BioMed Central Ltd.
14 CFR 1214.1105 - Final ranking.
2010-01-01
... 14 Aeronautics and Space 5 2010-01-01 2010-01-01 false Final ranking. 1214.1105 Section 1214.1105... Recruitment and Selection Program § 1214.1105 Final ranking. Final rankings will be based on a combination of... preference will be included in this final ranking in accordance with applicable regulations. ...
Multiple graph regularized protein domain ranking.
Wang, Jim Jing-Yan; Bensmail, Halima; Gao, Xin
2012-11-19
Protein domain ranking is a fundamental task in structural biology. Most protein domain ranking methods rely on the pairwise comparison of protein domains while neglecting the global manifold structure of the protein domain database. Recently, graph regularized ranking that exploits the global structure of the graph defined by the pairwise similarities has been proposed. However, the existing graph regularized ranking methods are very sensitive to the choice of the graph model and parameters, and this remains a difficult problem for most of the protein domain ranking methods. To tackle this problem, we have developed the Multiple Graph regularized Ranking algorithm, MultiG-Rank. Instead of using a single graph to regularize the ranking scores, MultiG-Rank approximates the intrinsic manifold of protein domain distribution by combining multiple initial graphs for the regularization. Graph weights are learned with ranking scores jointly and automatically, by alternately minimizing an objective function in an iterative algorithm. Experimental results on a subset of the ASTRAL SCOP protein domain database demonstrate that MultiG-Rank achieves a better ranking performance than single graph regularized ranking methods and pairwise similarity based ranking methods. The problem of graph model and parameter selection in graph regularized protein domain ranking can be solved effectively by combining multiple graphs. This aspect of generalization introduces a new frontier in applying multiple graphs to solving protein domain ranking applications.
Multiple graph regularized protein domain ranking
Directory of Open Access Journals (Sweden)
Wang Jim
2012-11-01
Full Text Available Abstract Background Protein domain ranking is a fundamental task in structural biology. Most protein domain ranking methods rely on the pairwise comparison of protein domains while neglecting the global manifold structure of the protein domain database. Recently, graph regularized ranking that exploits the global structure of the graph defined by the pairwise similarities has been proposed. However, the existing graph regularized ranking methods are very sensitive to the choice of the graph model and parameters, and this remains a difficult problem for most of the protein domain ranking methods. Results To tackle this problem, we have developed the Multiple Graph regularized Ranking algorithm, MultiG-Rank. Instead of using a single graph to regularize the ranking scores, MultiG-Rank approximates the intrinsic manifold of protein domain distribution by combining multiple initial graphs for the regularization. Graph weights are learned with ranking scores jointly and automatically, by alternately minimizing an objective function in an iterative algorithm. Experimental results on a subset of the ASTRAL SCOP protein domain database demonstrate that MultiG-Rank achieves a better ranking performance than single graph regularized ranking methods and pairwise similarity based ranking methods. Conclusion The problem of graph model and parameter selection in graph regularized protein domain ranking can be solved effectively by combining multiple graphs. This aspect of generalization introduces a new frontier in applying multiple graphs to solving protein domain ranking applications.
A 2-categorical state sum model
Energy Technology Data Exchange (ETDEWEB)
Baratin, Aristide, E-mail: abaratin@uwaterloo.ca [Department of Applied Mathematics, University of Waterloo, 200 University Ave W, Waterloo, Ontario N2L 3G1 (Canada); Freidel, Laurent, E-mail: lfreidel@perimeterinstitute.ca [Perimeter Institute for Theoretical Physics, 31 Caroline Str. N, Waterloo, Ontario N2L 2Y5 (Canada)
2015-01-15
It has long been argued that higher categories provide the proper algebraic structure underlying state sum invariants of 4-manifolds. This idea has been refined recently, by proposing to use 2-groups and their representations as specific examples of 2-categories. The challenge has been to make these proposals fully explicit. Here, we give a concrete realization of this program. Building upon our earlier work with Baez and Wise on the representation theory of 2-groups, we construct a four-dimensional state sum model based on a categorified version of the Euclidean group. We define and explicitly compute the simplex weights, which may be viewed as a categorified analogue of Racah-Wigner 6j-symbols. These weights solve a hexagon equation that encodes the formal invariance of the state sum under the Pachner moves of the triangulation. This result unravels the combinatorial formulation of the Feynman amplitudes of quantum field theory on flat spacetime proposed in A. Baratin and L. Freidel [Classical Quantum Gravity 24, 2027–2060 (2007)], which was shown to lead after gauge-fixing to Korepanov's invariant of 4-manifolds.
A Survey on PageRank Computing
Berkhin, Pavel
2005-01-01
This survey reviews the research related to PageRank computing. Components of a PageRank vector serve as authority weights for web pages independent of their textual content, solely based on the hyperlink structure of the web. PageRank is typically used as a web search ranking component. This defines the importance of the model and the data structures that underlie PageRank processing. Computing even a single PageRank is a difficult computational task. Computing many PageRanks is a much mor...
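The basic computation the survey addresses can be sketched as a small power iteration; the damping factor, tolerance, and toy link graph below are illustrative.

```python
# Sketch: PageRank by power iteration on a tiny link graph.

def pagerank(links, d=0.85, tol=1e-10):
    """links[i] = list of pages that page i links to."""
    n = len(links)
    pr = [1.0 / n] * n
    while True:
        new = [(1 - d) / n] * n
        for i, outs in enumerate(links):
            if outs:  # distribute this page's rank over its out-links
                share = d * pr[i] / len(outs)
                for j in outs:
                    new[j] += share
            else:     # dangling node: spread its rank uniformly
                for j in range(n):
                    new[j] += d * pr[i] / n
        if max(abs(a - b) for a, b in zip(new, pr)) < tol:
            return new
        pr = new

# Toy web: 0 -> {1, 2}, 1 -> {2}, 2 -> {0}
ranks = pagerank([[1, 2], [2], [0]])
```

Page 2, which receives links from both other pages, ends up with the highest authority weight; the survey's subject is how to make this computation scale to web-sized graphs.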
Variants of the Borda count method for combining ranked classifier hypotheses
van Erp, Merijn; Schomaker, Lambert; Vuurpijl, Louis
2000-01-01
The Borda count is a simple yet effective method of combining rankings. In pattern recognition, classifiers are often able to return a ranked set of results. Several experiments have been conducted to test the ability of the Borda count and two variant methods to combine these ranked classifier
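The plain Borda count described above can be sketched as follows; the class labels and the three classifier rankings are hypothetical.

```python
# Sketch: plain Borda count. Each classifier awards a candidate
# (n_candidates - position) points; candidates are re-ranked by total.

def borda(rankings):
    """rankings: list of rankings, each a list of labels, best first."""
    n = len(rankings[0])
    scores = {}
    for ranking in rankings:
        for pos, label in enumerate(ranking):
            scores[label] = scores.get(label, 0) + (n - pos)
    return sorted(scores, key=scores.get, reverse=True)

# Three classifiers rank four class labels:
combined = borda([
    ["a", "b", "c", "d"],
    ["b", "a", "d", "c"],
    ["a", "c", "b", "d"],
])
```

Here "a" wins with 11 points; the variants studied in the paper modify how the per-position points are assigned or weighted.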
Use of the dry-weight-rank method of botanical analysis in the ...
African Journals Online (AJOL)
The dry-weight-rank method of botanical analysis was tested in the highveld of the Eastern Transvaal and was found to be an efficient and accurate means of determining the botanical composition of veld herbage. Accuracy was increased by weighting ranks on the basis of quadrat yield, and by allocation of equal ranks to ...
Time evolution of Wikipedia network ranking
Eom, Young-Ho; Frahm, Klaus M.; Benczúr, András; Shepelyansky, Dima L.
2013-12-01
We study the time evolution of ranking and spectral properties of the Google matrix of the English Wikipedia hyperlink network during the years 2003-2011. The statistical properties of ranking of Wikipedia articles via PageRank and CheiRank probabilities, as well as the matrix spectrum, are shown to be stabilized for 2007-2011. Special emphasis is placed on the ranking of Wikipedia personalities and universities. We show that PageRank selection is dominated by politicians, while 2DRank, which combines PageRank and CheiRank, gives more weight to personalities from the arts. The Wikipedia PageRank of universities recovers 80% of the top universities in the Shanghai ranking during the considered time period.
Weighted Discriminative Dictionary Learning based on Low-rank Representation
International Nuclear Information System (INIS)
Chang, Heyou; Zheng, Hao
2017-01-01
Low-rank representation has been widely used in the field of pattern classification, especially when both training and testing images are corrupted with large noise. The dictionary plays an important role in low-rank representation. With respect to the semantic dictionary, the optimal representation matrix should be block-diagonal. However, traditional low-rank representation based dictionary learning methods cannot effectively exploit the discriminative information between data and dictionary. To address this problem, this paper proposes weighted discriminative dictionary learning based on low-rank representation, in which a weighted representation regularization term is constructed. The regularization term associates label information of both training samples and dictionary atoms, and encourages the generation of a discriminative representation with a class-wise block-diagonal structure, which can further improve classification performance when both training and testing images are corrupted with large noise. Experimental results demonstrate the advantages of the proposed method over state-of-the-art methods. (paper)
Low-ranking female Japanese macaques make efforts for social grooming.
Kurihara, Yosuke
2016-04-01
Grooming is essential to build social relationships in primates. Its importance is universal among animals of different ranks; however, rank-related differences in feeding patterns can lead to conflicts between feeding and grooming in low-ranking animals. Unifying the effects of dominance rank on feeding and grooming behaviors contributes to revealing the importance of grooming. Here, I tested whether the grooming behavior of low-ranking females was similar to that of high-ranking females despite differences in their feeding patterns. I followed 9 adult female Japanese macaques (Macaca fuscata fuscata) from the Arashiyama group, and analyzed the feeding patterns and grooming behaviors of low- and high-ranking females. Low-ranking females fed on natural foods away from the provisioning site, whereas high-ranking females obtained more provisioned food at the site. Due to these differences in feeding patterns, low-ranking females spent less time grooming than high-ranking females. However, both low- and high-ranking females performed grooming around the provisioning site, which was linked to the number of neighboring individuals for low-ranking females and to feeding on provisioned foods at the site for high-ranking females. The similarity in grooming area led to a range and diversity of grooming partners that did not differ with rank. Thus, low-ranking females can obtain small amounts of provisioned foods and perform grooming with as many partners around the provisioning site as high-ranking females. These results highlight the efforts made by low-ranking females to perform grooming and suggest the importance of grooming behavior in group-living primates.
Kuzmickienė, Jurgita; Kaubrys, Gintaras
2016-10-08
BACKGROUND The primary manifestation of Alzheimer's disease (AD) is decline in memory. Dysexecutive symptoms have tremendous impact on functional activities and quality of life. Data regarding frontal-executive dysfunction in mild AD are controversial. The aim of this study was to assess the presence and specific features of executive dysfunction in mild AD based on Cambridge Neuropsychological Test Automated Battery (CANTAB) results. MATERIAL AND METHODS Fifty newly diagnosed, treatment-naïve, mild, late-onset AD patients (MMSE ≥20, AD group) and 25 control subjects (CG group) were recruited in this prospective, cross-sectional study. The CANTAB tests CRT, SOC, PAL, SWM were used for in-depth cognitive assessment. Comparisons were performed using the t test or Mann-Whitney U test, as appropriate. Correlations were evaluated by Pearson r or Spearman R. Statistical significance was set at p<0.05. RESULTS AD and CG groups did not differ according to age, education, gender, or depression. Few differences were found between groups in the SOC test for performance measures: Mean moves (minimum 3 moves): AD (Rank Sum=2227), CG (Rank Sum=623), p<0.001. However, all SOC test time measures differed significantly between groups: SOC Mean subsequent thinking time (4 moves): AD (Rank Sum=2406), CG (Rank Sum=444), p<0.001. Correlations were weak between executive function (SOC) and episodic/working memory (PAL, SWM) (R=0.01-0.38) or attention/psychomotor speed (CRT) (R=0.02-0.37). CONCLUSIONS Frontal-executive functions are impaired in mild AD patients. Executive dysfunction is highly prominent in time measures, but minimal in performance measures. Executive disorders do not correlate with a decline in episodic and working memory or psychomotor speed in mild AD.
First rank symptoms for schizophrenia.
Soares-Weiser, Karla; Maayan, Nicola; Bergman, Hanna; Davenport, Clare; Kirkham, Amanda J; Grabowski, Sarah; Adams, Clive E
2015-01-25
(5515 were included in the analysis). Studies were conducted from 1974 to 2011, with 80% of the studies conducted in the 1970s, 1980s, or 1990s. Most studies did not report study methods sufficiently and many had high applicability concerns. In 20 studies, FRS differentiated schizophrenia from all other diagnoses with a sensitivity of 57% (50.4% to 63.3%) and a specificity of 81.4% (74% to 87.1%). In seven studies, FRS differentiated schizophrenia from non-psychotic mental health disorders with a sensitivity of 61.8% (51.7% to 71%) and a specificity of 94.1% (88% to 97.2%). In sixteen studies, FRS differentiated schizophrenia from other types of psychosis with a sensitivity of 58% (50.3% to 65.3%) and a specificity of 74.7% (65.2% to 82.3%). The synthesis of old studies of limited quality in this review indicates that FRS correctly identifies people with schizophrenia 75% to 95% of the time. The use of FRS to diagnose schizophrenia in triage will incorrectly diagnose around five to 19 people in every 100 who have FRS as having schizophrenia, and specialists will not agree with this diagnosis. These people will still merit specialist assessment and help due to the severity of disturbance in their behaviour and mental state. Again, with a sensitivity of FRS of 60%, reliance on FRS to diagnose schizophrenia in triage will not correctly diagnose around 40% of people that specialists will consider to have schizophrenia. Some of these people may experience a delay in getting appropriate treatment. Others, whom specialists will consider to have schizophrenia, could be prematurely discharged from care if triage relies on the presence of FRS to diagnose schizophrenia. Empathetic, considerate use of FRS as a diagnostic aid - with known limitations - should avoid a good proportion of these errors. We hope that newer tests - to be included in future Cochrane reviews - will show better results. However, symptoms of first rank can still be helpful where newer tests are not available.
Atp1a3-deficient heterozygous mice show lower rank in the hierarchy and altered social behavior.
Sugimoto, H; Ikeda, K; Kawakami, K
2017-10-23
Atp1a3 is the Na-pump alpha3 subunit gene expressed mainly in neurons of the brain. Atp1a3-deficient heterozygous mice (Atp1a3 +/- ) show altered neurotransmission and deficits of motor function after stress loading. To understand the function of Atp1a3 in a social hierarchy, we evaluated social behaviors (social interaction, aggression, social approach and social dominance) of Atp1a3 +/- and compared the rank and hierarchy structure between Atp1a3 +/- and wild-type mice within a housing cage using the round-robin tube test and barbering observations. Formation of a hierarchy decreases social conflict and promotes social stability within the group. The hierarchical rank is a reflection of social dominance within a cage, which is heritable and can be regulated by specific genes in mice. Here we report: (1) The degree of social interaction but not aggression was lower in Atp1a3 +/- than wild-type mice, and Atp1a3 +/- approached Atp1a3 +/- mice more frequently than wild type. (2) The frequency of barbering was lower in the Atp1a3 +/- group than in the wild-type group, while no difference was observed in the mixed-genotype housing condition. (3) Hierarchy formation was not different between Atp1a3 +/- and wild type. (4) Atp1a3 +/- showed a lower rank in the mixed-genotype housing condition than the wild type, indicating that Atp1a3 regulates social dominance. In sum, Atp1a3 +/- showed unique social behavior characteristics of lower social interaction and preference to approach the same genotype mice and a lower ranking in the hierarchy. © 2017 John Wiley & Sons Ltd and International Behavioural and Neural Genetics Society.
International Nuclear Information System (INIS)
Tapia, V.
1992-04-01
Recently we have explored the consequences of describing the metric properties of our universe through a quartic line element. In this geometry the natural object is a fourth-rank metric, i.e., a tensor with four indices. Based on this geometry we constructed a simple field theory for the gravitational field. The field equations coincide with the Einstein field equations in the vacuum case. This fact, however, does not guarantee the observational equivalence of both theories since one must still verify that, as a consequence of the field equations, test particles move along geodesics. This letter is aimed at establishing this result. (author). 7 refs
Identifying the most influential spreaders in complex networks by an Extended Local K-Shell Sum
Yang, Fan; Zhang, Ruisheng; Yang, Zhao; Hu, Rongjing; Li, Mengtian; Yuan, Yongna; Li, Keqin
Identifying influential spreaders is crucial for developing strategies to control the spreading process on complex networks. Following the well-known K-Shell (KS) decomposition, several improved measures are proposed. However, these measures cannot identify the most influential spreaders accurately. In this paper, we define a Local K-Shell Sum (LKSS) by calculating the sum of the K-Shell indices of the neighbors within 2-hops of a given node. Based on the LKSS, we propose an Extended Local K-Shell Sum (ELKSS) centrality to rank spreaders. The ELKSS is defined as the sum of the LKSS of the nearest neighbors of a given node. By assuming that the spreading process on networks follows the Susceptible-Infectious-Recovered (SIR) model, we perform extensive simulations on a series of real networks to compare the performance between the ELKSS centrality and other six measures. The results show that the ELKSS centrality has a better performance than the six measures to distinguish the spreading ability of nodes and to identify the most influential spreaders accurately.
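Under the definitions quoted above (LKSS as the sum of K-Shell indices of neighbors within 2 hops, ELKSS as the sum of the LKSS of the nearest neighbors), a minimal sketch might look like this; the toy graph and the naive peeling-based K-Shell decomposition are illustrative, not the authors' implementation.

```python
def core_numbers(adj):
    """Naive K-Shell decomposition: repeatedly peel minimum-degree nodes."""
    deg = {v: len(ns) for v, ns in adj.items()}
    alive = set(adj)
    ks, k = {}, 0
    while alive:
        k = max(k, min(deg[v] for v in alive))
        changed = True
        while changed:  # peel everything whose degree has dropped to <= k
            changed = False
            for v in list(alive):
                if v in alive and deg[v] <= k:
                    ks[v] = k
                    alive.remove(v)
                    for u in adj[v]:
                        if u in alive:
                            deg[u] -= 1
                    changed = True
    return ks

def lkss(adj, ks, v):
    """Local K-Shell Sum: K-Shell indices summed over neighbors within 2 hops."""
    within = set()
    for u in adj[v]:
        within.add(u)
        within.update(adj[u])
    within.discard(v)
    return sum(ks[u] for u in within)

def elkss(adj, ks, v):
    """Extended LKSS: sum of LKSS over the nearest neighbors of v."""
    return sum(lkss(adj, ks, u) for u in adj[v])

# Toy graph: a triangle {0, 1, 2} with a pendant node 3 attached to 0.
adj = {0: [1, 2, 3], 1: [0, 2], 2: [0, 1], 3: [0]}
ks = core_numbers(adj)
ranking = sorted(adj, key=lambda v: -elkss(adj, ks, v))
```

In this toy graph the triangle nodes fall in shell 2 and the pendant in shell 1, and node 0, which bridges both, gets the highest ELKSS.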
A random-sum Wilcoxon statistic and its application to analysis of ROC and LROC data.
Tang, Liansheng Larry; Balakrishnan, N
2011-01-01
The Wilcoxon-Mann-Whitney statistic is commonly used for a distribution-free comparison of two groups. One requirement for its use is that the sample sizes of the two groups are fixed. This is violated in some applications, such as medical imaging studies and diagnostic marker studies; in the former, the violation occurs since the number of correctly localized abnormal images is random, while in the latter the violation is due to some subjects not having observable measurements. For this reason, we propose here a random-sum Wilcoxon statistic for comparing two groups in the presence of ties, and derive its variance as well as its asymptotic distribution for large sample sizes. The proposed statistic includes the regular Wilcoxon rank-sum statistic as a special case. Finally, we apply the proposed statistic for summarizing location response operating characteristic data from a liver computed tomography study, and also for summarizing diagnostic accuracy of biomarker data.
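For context, the fixed-sample-size Wilcoxon rank-sum statistic with midranks for ties, which the random-sum statistic generalizes, can be computed as in this sketch; the sample values are illustrative.

```python
# Sketch: classical Wilcoxon rank-sum statistic with midranks for ties.

def rank_sum(x, y):
    """Return the sum of midranks of sample x within the pooled sample."""
    pooled = sorted(x + y)
    ranks = {}
    i = 0
    while i < len(pooled):
        j = i
        while j < len(pooled) and pooled[j] == pooled[i]:
            j += 1
        # midrank of a tied value = average of 1-based positions i+1 .. j
        ranks[pooled[i]] = (i + 1 + j) / 2
        i = j
    return sum(ranks[v] for v in x)

# The value 2.3 is tied across the two samples and receives midrank 4.
w = rank_sum([1.2, 2.3, 2.3, 4.0], [0.5, 2.3, 3.1])
```

The random-sum version of the paper replaces the fixed sample sizes here with random ones and adjusts the variance accordingly.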
Iacovacci, Jacopo; Rahmede, Christoph; Arenas, Alex; Bianconi, Ginestra
2016-10-01
Recently it has been recognized that many complex social, technological and biological networks have a multilayer nature and can be described by multiplex networks. Multiplex networks are formed by a set of nodes connected by links having different connotations forming the different layers of the multiplex. Characterizing the centrality of the nodes in a multiplex network is a challenging task since the centrality of a node naturally depends on the importance associated to links of a certain type. Here we propose to assign to each node of a multiplex network a centrality called Functional Multiplex PageRank that is a function of the weights given to every different pattern of connections (multilinks) existent in the multiplex network between any two nodes. Since multilinks distinguish all the possible ways in which the links in different layers can overlap, the Functional Multiplex PageRank can describe important non-linear effects when large relevance or small relevance is assigned to multilinks with overlap. Here we apply the Functional Multiplex PageRank to the multiplex airport networks, to the neuronal network of the nematode C. elegans, and to social collaboration and citation networks between scientists. This analysis reveals important differences between the most central nodes of these networks, and the correlations between their so-called pattern to success.
Low rank magnetic resonance fingerprinting.
Mazor, Gal; Weizman, Lior; Tal, Assaf; Eldar, Yonina C
2016-08-01
Magnetic Resonance Fingerprinting (MRF) is a relatively new approach that provides quantitative MRI using randomized acquisition. Extraction of physical quantitative tissue values is performed off-line, based on acquisition with varying parameters and a dictionary generated according to the Bloch equations. MRF uses hundreds of radio frequency (RF) excitation pulses for acquisition, and therefore a high under-sampling ratio in the sampling domain (k-space) is required. This under-sampling causes spatial artifacts that hamper the ability to accurately estimate the quantitative tissue values. In this work, we introduce a new approach for quantitative MRI using MRF, called Low Rank MRF. We exploit the low rank property of the temporal domain, on top of the well-known sparsity of the MRF signal in the generated dictionary domain. We present an iterative scheme that consists of a gradient step followed by a low rank projection using the singular value decomposition. Experiments on real MRI data demonstrate superior results compared to conventional implementation of compressed sensing for MRF at a 15% sampling ratio.
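The low-rank projection step in such an iterative scheme is typically a truncated SVD; the sketch below (using NumPy, with illustrative matrix sizes and noise level) shows that step in isolation, not the full MRF reconstruction.

```python
# Sketch: project a matrix onto the set of rank-<=r matrices by keeping
# only its r largest singular values (truncated SVD).
import numpy as np

def low_rank_project(X, r):
    """Best rank-r approximation of X in the Frobenius norm."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    s[r:] = 0.0  # zero out all but the r leading singular values
    return U @ np.diag(s) @ Vt

rng = np.random.default_rng(0)
# A rank-2 matrix contaminated with small noise.
A = rng.standard_normal((6, 2)) @ rng.standard_normal((2, 5))
A_noisy = A + 1e-3 * rng.standard_normal((6, 5))
A_hat = low_rank_project(A_noisy, 2)
```

In the paper's scheme this projection alternates with a data-consistency gradient step on the under-sampled k-space measurements.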
Ranking Support Vector Machine with Kernel Approximation
Directory of Open Access Journals (Sweden)
Kai Chen
2017-01-01
Full Text Available Learning to rank algorithm has become important in recent years due to its successful application in information retrieval, recommender system, and computational biology, and so forth. Ranking support vector machine (RankSVM) is one of the state-of-art ranking models and has been favorably used. Nonlinear RankSVM (RankSVM with nonlinear kernels) can give higher accuracy than linear RankSVM (RankSVM with a linear kernel) for complex nonlinear ranking problems. However, the learning methods for nonlinear RankSVM are still time-consuming because of the calculation of the kernel matrix. In this paper, we propose a fast ranking algorithm based on kernel approximation to avoid computing the kernel matrix. We explore two types of kernel approximation methods, namely, the Nyström method and random Fourier features. A primal truncated Newton method is used to optimize the pairwise L2-loss (squared Hinge-loss) objective function of the ranking model after the nonlinear kernel approximation. Experimental results demonstrate that our proposed method achieves a much faster training speed than kernel RankSVM and comparable or better performance over state-of-the-art ranking algorithms.
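One of the two approximations mentioned, random Fourier features for the Gaussian (RBF) kernel, can be sketched as follows; the feature count, kernel width, and data are illustrative, and the linear ranking model that would be trained on the mapped features is omitted.

```python
# Sketch: random Fourier features approximating the RBF kernel
# k(x, z) = exp(-gamma * ||x - z||^2), so a linear model on the mapped
# features behaves like a kernel model without the full kernel matrix.
import numpy as np

def rff_map(X, n_features, gamma, rng):
    """Map rows of X so that inner products approximate the RBF kernel."""
    d = X.shape[1]
    # For k(x, z) = exp(-gamma ||x - z||^2) the frequencies are N(0, 2*gamma).
    W = rng.normal(scale=np.sqrt(2 * gamma), size=(d, n_features))
    b = rng.uniform(0, 2 * np.pi, size=n_features)
    return np.sqrt(2.0 / n_features) * np.cos(X @ W + b)

rng = np.random.default_rng(42)
X = rng.standard_normal((20, 3))
Z = rff_map(X, n_features=2000, gamma=0.5, rng=rng)

exact = np.exp(-0.5 * np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1))
approx = Z @ Z.T
err = np.abs(exact - approx).max()
```

The approximation error shrinks roughly as one over the square root of the number of random features, which is what makes the subsequent linear (primal) training fast.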
Remark on the computation of mode sums
International Nuclear Information System (INIS)
Allen, Theodore J.; Olsson, M. G.; Schmidt, Jeffrey R.
2000-01-01
The computation of mode sums of the types encountered in basic quantum field theoretic applications is addressed with an emphasis on their expansions into functions of distance that can be interpreted as potentials. We show how to regularize and calculate the Casimir energy for the continuum Nambu-Goto string with massive ends as well as for the discrete Isgur-Paton non-relativistic string with massive ends. As an additional example, we examine the effect on the interquark potential of a constant Kalb-Ramond field strength interacting with a QCD string. (c) 2000 The American Physical Society
Sum rules in extended RPA theories
International Nuclear Information System (INIS)
Adachi, S.; Lipparini, E.
1988-01-01
Different moments m_k of the excitation strength function are studied in the framework of the second RPA and of the extended RPA in which 2p2h correlations are explicitly introduced into the ground state by using first-order perturbation theory. Formal properties of the equations of motion concerning sum rules are derived and compared with those exhibited by the usual 1p1h RPA. The problem of the separation of the spurious solutions in extended RPA calculations is also discussed. (orig.)
DEFF Research Database (Denmark)
Triantafillou, Peter
2017-01-01
Supreme audit institutions (SAIs) are fundamental institutions in liberal democracies as they enable control of the exercise of state power. In order to maintain this function, SAIs must enjoy a high level of independence. Moreover, SAIs are increasingly expected to be also relevant for government...... and the execution of its policies by way of performance auditing. This article examines how and why the performance auditing of the Danish SAI pursues independence and relevance. It is argued that, in general, the simultaneous pursuit of independence and relevance is highly challenging and amounts to a zero-sum or...
Nuclear Symmetry Energy with QCD Sum Rule
International Nuclear Information System (INIS)
Jeong, K.S.; Lee, S.H.
2013-01-01
We calculate the nucleon self-energies in an isospin asymmetric nuclear matter using QCD sum rule. Taking the difference of these for the neutron and proton enables us to express an important part of the nuclear symmetry energy in terms of local operators. Calculating the operator product expansion up to mass dimension six operators, we find that the main contribution to the difference comes from the iso-vector scalar and vector operators, which is reminiscent of the case of relativistic mean field type theories, where mesons with the aforementioned quantum numbers produce the difference and provide the dominant mechanism for nuclear symmetry energy. (author)
Statistical sum of bosonic string, compactified on an orbifold
International Nuclear Information System (INIS)
Morozov, A.; Ol'shanetskij, M.
1986-01-01
An expression for the statistical sum of a bosonic string compactified on a singular orbifold is presented. All the information about the orbifold is encoded in the specific combination of theta-functions through which the statistical sum is expressed.
Robinson's radiation damping sum rule: Reaffirmation and extension
International Nuclear Information System (INIS)
Mane, S.R.
2011-01-01
Robinson's radiation damping sum rule is one of the classic theorems of accelerator physics. Recently Orlov has claimed to find serious flaws in Robinson's proof of his sum rule. In view of the importance of the subject, I have independently examined the derivation of the Robinson radiation damping sum rule. Orlov's criticisms are without merit: I work through Robinson's derivation and demonstrate that Orlov's criticisms violate well-established mathematical theorems and are hence not valid. I also show that Robinson's derivation, and his damping sum rule, is valid in a larger domain than that treated by Robinson himself: Robinson derived his sum rule under the approximation of a small damping rate, but I show that Robinson's sum rule applies to arbitrary damping rates. I also display more concise derivations of the sum rule using matrix differential equations. I also show that Robinson's sum rule is valid in the vicinity of a parametric resonance.
A new generalization of Hardy–Berndt sums
Indian Academy of Sciences (India)
Berndt and Goldberg [4] found analytic properties of these sums and established infinite trigonometric series representations for them. The most important properties of Hardy–Berndt sums are reciprocity theorems due to Berndt [3] ...
Isospin sum rules for inclusive cross-sections
Rotelli, P.; Suttorp, L.G.
1972-01-01
A systematic analysis of isospin sum rules is presented for the distribution functions of strong, electromagnetic, and weak inclusive processes. The general expression for these sum rules is given and some new examples are presented.
SibRank: Signed bipartite network analysis for neighbor-based collaborative ranking
Shams, Bita; Haratizadeh, Saman
2016-09-01
Collaborative ranking is an emerging field of recommender systems that utilizes users' preference data rather than rating values. Unfortunately, neighbor-based collaborative ranking has gained little attention despite its greater flexibility and justifiability. This paper proposes a novel framework, called SibRank, that seeks to improve the state of the art neighbor-based collaborative ranking methods. SibRank represents users' preferences as a signed bipartite network, and finds similar users through a novel personalized ranking algorithm in signed networks.
QCD sum-rules for V-A spectral functions
International Nuclear Information System (INIS)
Chakrabarti, J.; Mathur, V.S.
1980-01-01
The Borel transformation technique of Shifman et al is used to obtain QCD sum-rules for V-A spectral functions. In contrast to the situation in the original Weinberg sum-rules and those of Bernard et al, the problem of saturating the sum-rules by low lying resonances is brought under control. Furthermore, the present sum-rules, on saturation, directly determine useful phenomenological parameters
7 CFR 42.132 - Determining cumulative sum values.
2010-01-01
... 7 Agriculture 2 2010-01-01 2010-01-01 false Determining cumulative sum values. 42.132 Section 42... Determining cumulative sum values. (a) The parameters for the on-line cumulative sum sampling plans for AQL's... [flattened table of plan parameters omitted] (b) At the beginning of the basic inspection period, the CuSum value is set equal to...
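For illustration only, here is a generic one-sided CUSUM accumulator of the kind such sampling plans build on; this is not the exact 7 CFR 42.132 procedure, and the target and allowance values are hypothetical.

```python
# Sketch: generic one-sided CUSUM. Each observation's excess over
# (target + allowance) is accumulated; the sum is clipped at zero so
# only sustained upward drift builds up.

def cusum(observations, target, allowance, start=0.0):
    """Return the CuSum path S_n = max(0, S_{n-1} + x_n - (target + allowance))."""
    s = start
    path = []
    for x in observations:
        s = max(0.0, s + x - (target + allowance))
        path.append(s)
    return path

# Defect counts per sample; the process drifts upward at the end.
path = cusum([0, 1, 0, 2, 1, 3, 4], target=1.0, allowance=0.5)
```

In an acceptance-sampling plan, a lot would be rejected once the accumulated value crosses a tabulated action limit.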
Deriving the Normalized Min-Sum Algorithm from Cooperative Optimization
Huang, Xiaofei
2006-01-01
The normalized min-sum algorithm can achieve near-optimal performance at decoding LDPC codes. However, it is a critical question to understand the mathematical principle underlying the algorithm. Traditionally, people thought that the normalized min-sum algorithm is a good approximation to the sum-product algorithm, the best known algorithm for decoding LDPC codes and Turbo codes. This paper offers an alternative approach to understand the normalized min-sum algorithm. The algorithm is derive...
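The check-node update at the heart of the normalized min-sum algorithm can be sketched as follows; the normalization factor alpha and the incoming log-likelihood ratios are illustrative.

```python
# Sketch: normalized min-sum check-node update for LDPC decoding.
# Each outgoing message is the product of the signs of the other incoming
# messages times the minimum of their magnitudes, scaled by alpha < 1
# (the normalization that corrects min-sum's overestimation).

def check_node_update(msgs, alpha=0.8):
    """Return outgoing messages for one check node given incoming LLRs."""
    out = []
    for i in range(len(msgs)):
        others = msgs[:i] + msgs[i + 1:]
        sign = 1.0
        for m in others:
            if m < 0:
                sign = -sign
        out.append(alpha * sign * min(abs(m) for m in others))
    return out

out = check_node_update([2.0, -1.5, 0.5])
```

With alpha = 1 this reduces to plain min-sum; the paper's point is that the scaled version can also be derived from cooperative optimization rather than as an approximation to sum-product.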
Rank Two Affine Manifolds in Genus 3
Aulicino, David; Nguyen, Duc-Manh
2016-01-01
We complete the classification of rank two affine manifolds in the moduli space of translation surfaces in genus three. Combined with a recent result of Mirzakhani and Wright, this completes the classification of higher rank affine manifolds in genus three.
Gottfried sum rule and mesonic exchanges in deuteron
International Nuclear Information System (INIS)
Kaptari, L.P.
1991-01-01
Recent NMC data on the experimental value of the Gottfried Sum are discussed. It is shown that the Gottfried Sum is sensitive to nuclear structure corrections, viz. the mesonic exchanges and binding effects. A new estimation of the Gottfried Sum is given. The obtained result is close to the quark-parton prediction of 1/3. 11 refs.; 2 figs
Extremal extensions for the sum of nonnegative selfadjoint relations
Hassi, Seppo; Sandovici, Adrian; De Snoo, Henk; Winkler, Henrik
2007-01-01
The sum A + B of two nonnegative selfadjoint relations (multivalued operators) A and B is a nonnegative relation. The class of all extremal extensions of the sum A + B is characterized as products of relations via an auxiliary Hilbert space associated with A and B. The so-called form sum extension
Moments of the weighted sum-of-digits function | Larcher ...
African Journals Online (AJOL)
The weighted sum-of-digits function is a slight generalization of the well-known sum-of-digits function, with the difference that here the digits are weighted by some weights. For example, the alternating sum-of-digits function is also included in this concept. In this paper we compute the first and the second moment of the ...
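The weighted sum-of-digits function described above admits a direct implementation; the weight vectors below (all-ones and alternating) reproduce the ordinary and alternating digit sums, and the cycling of a short weight vector over the digits is an illustrative convention.

```python
# Sketch: weighted sum-of-digits. Digit d_i of n in base q (least
# significant first) is multiplied by weight w_(i mod len(weights)).

def weighted_digit_sum(n, weights, q=10):
    """Sum of w_i * d_i over the base-q digits d_i of n."""
    total, i = 0, 0
    while n > 0:
        n, d = divmod(n, q)
        total += weights[i % len(weights)] * d
        i += 1
    return total

s_plain = weighted_digit_sum(1234, [1])    # ordinary sum of digits
s_alt = weighted_digit_sum(1234, [1, -1])  # alternating sum of digits
```

For 1234 the digits from the least significant end are 4, 3, 2, 1, giving 10 with unit weights and 4 - 3 + 2 - 1 = 2 with alternating weights.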
7 CFR 1726.205 - Multiparty lump sum quotations.
2010-01-01
... 7 Agriculture 11 2010-01-01 2010-01-01 false Multiparty lump sum quotations. 1726.205 Section 1726....205 Multiparty lump sum quotations. The borrower or its engineer must contact a sufficient number of... basis of written lump sum quotations, the borrower will select the supplier or contractor based on the...
Operator-sum representation for bosonic Gaussian channels
International Nuclear Information System (INIS)
Ivan, J. Solomon; Sabapathy, Krishna Kumar; Simon, R.
2011-01-01
Operator-sum or Kraus representations for single-mode bosonic Gaussian channels are developed, and several of their consequences explored. The fact that the two-mode metaplectic operators acting as unitary purification of these channels do not, in their canonical form, mix the position and momentum variables is exploited to present a procedure which applies uniformly to all families in the Holevo classification. In this procedure the Kraus operators of every quantum-limited Gaussian channel can be simply read off from the matrix elements of a corresponding metaplectic operator. Kraus operators are employed to bring out, in the Fock basis, the manner in which the antilinear, unphysical matrix transposition map, when accompanied by injection of a threshold classical noise, becomes a physical channel, denoted D(κ) in the Holevo classification. The matrix transposition channels D(κ), D(κ⁻¹) turn out to be a dual pair in the sense that their Kraus operators are related by the adjoint operation. The amplifier channel with amplification factor κ and the beam-splitter channel with attenuation factor κ⁻¹ turn out to be mutually dual in the same sense. The action of the quantum-limited attenuator and amplifier channels as simply scaling maps on suitable quasiprobabilities in phase space is examined in the Kraus picture. Consideration of cumulants is used to examine the issue of fixed points. In the cases of entanglement-breaking channels a description in terms of rank-1 Kraus operators is shown to emerge quite simply. In contradistinction, it is shown that there is not even one finite-rank operator in the entire linear span of Kraus operators of the quantum-limited amplifier or attenuator families, an assertion far stronger than the statement that these are not entanglement-breaking channels. The semigroup property of the amplifier and attenuator families leads in both cases to a Zeno-like effect arising as a consequence of interrupted evolution. A characterization of
The Privilege of Ranking: Google Plays Ball.
Wiggins, Richard
2003-01-01
Discussion of ranking systems used in various settings, including college football and academic admissions, focuses on the Google search engine. Explains the PageRank mathematical formula that scores Web pages based on the number of links connecting to them; limitations, including authenticity and accuracy of ranked Web pages; relevancy; adjusting algorithms;…
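The PageRank iteration itself is compact enough to sketch (a three-page toy web is invented here; the damping factor 0.85 is the conventional choice, not a value taken from the article):

```python
import numpy as np

def pagerank(links, d=0.85, tol=1e-10):
    """Basic PageRank by power iteration. `links` maps each page to the
    pages it links to; d is the damping factor; dangling pages are
    treated as linking to every page uniformly."""
    pages = sorted(links)
    n = len(pages)
    idx = {p: i for i, p in enumerate(pages)}
    M = np.zeros((n, n))           # column-stochastic transition matrix
    for p, outs in links.items():
        if outs:
            for q in outs:
                M[idx[q], idx[p]] = 1.0 / len(outs)
        else:
            M[:, idx[p]] = 1.0 / n
    r = np.full(n, 1.0 / n)
    while True:
        r_new = (1 - d) / n + d * M @ r
        if np.abs(r_new - r).sum() < tol:
            return dict(zip(pages, r_new))
        r = r_new

# C collects links from both A and B, so it should rank highest.
ranks = pagerank({"A": ["B", "C"], "B": ["C"], "C": ["A"]})
```

The scores always sum to one, so they can be read as the stationary distribution of a random surfer who follows links with probability d and teleports otherwise.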
A Comprehensive Analysis of Marketing Journal Rankings
Steward, Michelle D.; Lewis, Bruce R.
2010-01-01
The purpose of this study is to offer a comprehensive assessment of journal standings in Marketing from two perspectives. The discipline perspective of rankings is obtained from a collection of published journal ranking studies during the past 15 years. The studies in the published ranking stream are assessed for reliability by examining internal…
Sum rules for charge transition density
Energy Technology Data Exchange (ETDEWEB)
Gul' karov, I S [Tashkentskij Politekhnicheskij Inst. (USSR)
1979-01-01
The form factors of the quadrupole and octupole oscillations of the ¹²C nucleus are compared with the predictions of the sum rules for the charge transition density (CTD). These rules allow one to obtain various CTDs which contain the components r^(λ+2k−2)ρ(r) and r^(λ+2k−1)(dρ(r)/dr) (k = 0, 1, 2, ...) and can be applied to analyze the inelastic scattering of high-energy particles by nuclei. It is shown that the CTDs under consideration have different radius dependence and describe the data substantially better (though ambiguously) than the Tassie and Steinwedel-Jensen models do. Recurrence formulas are derived for the ratios of the higher-order transition matrix elements and CTDs. These formulas can be used to predict the CTD behavior for highly excited nuclear states.
Neutron matter within QCD sum rules
Cai, Bao-Jun; Chen, Lie-Wen
2018-05-01
The equation of state (EOS) of pure neutron matter (PNM) is studied in QCD sum rules (QCDSRs). It is found that the QCDSR results on the EOS of PNM are in good agreement with predictions by current advanced microscopic many-body theories. Moreover, the higher-order density terms in quark condensates are shown to be important to describe the empirical EOS of PNM in the density region around and above nuclear saturation density although they play a minor role at subsaturation densities. The chiral condensates in PNM are also studied, and our results indicate that the higher-order density terms in quark condensates, which are introduced to reasonably describe the empirical EOS of PNM at suprasaturation densities, tend to hinder the appearance of chiral symmetry restoration in PNM at high densities.
Scattering and; Delay, Scale, and Sum Migration
Energy Technology Data Exchange (ETDEWEB)
Lehman, S K
2011-07-06
How do we see? What is the mechanism? Consider standing in an open field on a clear sunny day. In the field are a yellow dog and a blue ball. From a wave-based remote sensing point of view the sun is a source of radiation. It is a broadband electromagnetic source of which, for the purposes of this introduction, only the visible spectrum is considered (approximately 390 to 750 nanometers, or 400 to 769 terahertz). The source emits an incident field into the known background environment which, for this example, is free space. The incident field propagates until it strikes an object or target, either the yellow dog or the blue ball. The interaction of the incident field with an object results in a scattered field. The scattered field arises from a mismatch between the background refractive index, considered to be unity, and the scattering object's refractive index ('yellow' for the case of the dog, and 'blue' for the ball). This is also known as an impedance mismatch. The scattering objects are referred to as secondary sources of radiation, that radiation being the scattered field, which propagates until it is measured by the two receivers known as 'eyes'. The eyes focus the measured scattered field to form images which are processed by the 'wetware' of the brain for detection, identification, and localization. When time series representations of the measured scattered field are available, the image-forming focusing process can be mathematically modeled by delayed, scaled, and summed migration. These concepts of optical propagation, scattering, and focusing have one-to-one equivalents in the acoustic realm. This document is intended to present the basic concepts of scalar scattering and migration used in wide-band wave-based remote sensing and imaging. The terms beamforming and (delayed, scaled, and summed) migration are used interchangeably but are to be distinguished from the narrow-band (frequency domain) beamforming to determine
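The delayed, scaled, and summed migration described above can be sketched for a 1-D toy problem. Geometry, impulse traces, and unit wave speed are all invented for illustration: a point scatterer acts as a secondary source firing at t = 0, and each candidate image point is scored by summing the receiver traces at the predicted arrival times.

```python
import numpy as np

def delay_and_sum(traces, rx, dt, grid, c=1.0):
    """Minimal delay-and-sum migration: score each candidate image point
    by summing every receiver trace at the time a wave emitted from that
    point at t = 0 would arrive, t = |point - receiver| / c."""
    image = np.zeros(len(grid))
    for k, x in enumerate(grid):
        for trace, r in zip(traces, rx):
            t = abs(x - r) / c               # one-way travel time
            i = int(round(t / dt))           # nearest time sample
            if i < len(trace):
                image[k] += trace[i]
    return image

# Synthetic data: a scatterer at x = 3.0 fires an impulse at t = 0;
# receivers at x = 0 and x = 10 each record a single arrival spike.
dt, c = 0.01, 1.0
rx = [0.0, 10.0]
traces = []
for r in rx:
    tr = np.zeros(2000)
    tr[int(round(abs(3.0 - r) / c / dt))] = 1.0
    traces.append(tr)

grid = np.arange(0.0, 10.0, 0.5)
img = delay_and_sum(traces, rx, dt, grid, c)
# The image peaks at the true scatterer location, x = 3.0, where the
# delayed traces add coherently.
```

Only at the true scatterer position do the delayed contributions from both receivers align, which is exactly the focusing the document attributes to the eye and brain.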
On sum rules for charge transition density
International Nuclear Information System (INIS)
Gul'karov, I.S.
1979-01-01
The form factors of the quadrupole and octupole oscillations of the ¹²C nucleus are compared with the predictions of the sum rules for the charge transition density (CTD). These rules allow one to obtain various CTDs which contain the components r^(λ+2k−2)ρ(r) and r^(λ+2k−1)(dρ(r)/dr) (k = 0, 1, 2, ...) and can be applied to analyze the inelastic scattering of high-energy particles by nuclei. It is shown that the CTDs under consideration have different radius dependence and describe the data substantially better (though ambiguously) than the Tassie and Steinwedel-Jensen models do. Recurrence formulas are derived for the ratios of the higher-order transition matrix elements and CTDs. These formulas can be used to predict the CTD behaviour for highly excited nuclear states
About the use of rank transformation in sensitivity analysis of model output
International Nuclear Information System (INIS)
Saltelli, Andrea; Sobol', Ilya M
1995-01-01
Rank transformations are frequently employed in numerical experiments involving a computational model, especially in the context of sensitivity and uncertainty analyses. Response surface replacement and parameter screening are tasks which may benefit from a rank transformation. Ranks can cope with nonlinear (albeit monotonic) input-output distributions, allowing the use of linear regression techniques. Rank-transformed statistics are more robust, and provide a useful solution in the presence of long-tailed input and output distributions. As is known to practitioners, care must be employed when interpreting the results of such analyses, as any conclusion drawn using ranks does not translate easily to the original model. In the present note a heuristic approach is taken to explore, by way of practical examples, the effect of a rank transformation on the outcome of a sensitivity analysis. An attempt is made to identify trends, and to correlate these effects to a model taxonomy. Employing sensitivity indices, whereby the total variance of the model output is decomposed into a sum of terms of increasing dimensionality, we show that the main effect of the rank transformation is to increase the relative weight of the first-order terms (the 'main effects'), at the expense of the 'interactions' and 'higher-order interactions'. As a result the influence of those parameters which influence the output mostly by way of interactions may be overlooked in an analysis based on the ranks. This difficulty increases with the dimensionality of the problem, and may lead to the failure of a rank-based sensitivity analysis. We suggest that the models can be ranked, with respect to the complexity of their input-output relationship, by means of an 'Association' index I_y. I_y may complement the usual model coefficient of determination R_y² as a measure of model complexity for the purpose of uncertainty and sensitivity analysis
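The central point, that ranks linearize monotone input-output relations, is easy to demonstrate numerically (the exponential test function below is our own choice, not one of the paper's examples):

```python
import numpy as np

def ranks(a):
    """Rank values from 1..n (no tie handling; fine for continuous data)."""
    r = np.empty(len(a))
    r[np.argsort(a)] = np.arange(1, len(a) + 1)
    return r

rng = np.random.default_rng(0)
x = rng.uniform(0.0, 1.0, 500)
y = np.exp(8.0 * x)      # strongly nonlinear but monotonic in x

r_raw = np.corrcoef(x, y)[0, 1]                   # Pearson on raw values
r_rank = np.corrcoef(ranks(x), ranks(y))[0, 1]    # = Spearman correlation

# Because y is a strictly increasing function of x, the ranks of y equal
# the ranks of x, so r_rank is 1 while r_raw is noticeably lower.
```

This is exactly why rank-based regression can succeed where raw-value regression fails, and also why rank-based conclusions do not transfer directly back to the original, nonlinear model.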
Spin Sum Rules and Polarizabilities: Results from Jefferson Lab
International Nuclear Information System (INIS)
Jian-Ping Chen
2006-01-01
The nucleon spin structure has been an active, exciting and intriguing subject of interest for the last three decades. Recent experimental data on nucleon spin structure at low to intermediate momentum transfers provide new information in the confinement regime and the transition region from the confinement regime to the asymptotic freedom regime. New insight is gained by exploring moments of spin structure functions and their corresponding sum rules (i.e. the generalized Gerasimov-Drell-Hearn, Burkhardt-Cottingham and Bjorken). The Burkhardt-Cottingham sum rule is verified to good accuracy. The spin structure moments data are compared with Chiral Perturbation Theory calculations at low momentum transfers. It is found that chiral perturbation calculations agree reasonably well with the first moment of the spin structure function g₁ at momentum transfers of 0.05 to 0.1 GeV² but fail to reproduce the neutron data in the case of the generalized polarizability δ_LT (the δ_LT puzzle). New data have been taken on the neutron (³He), the proton and the deuteron at very low Q², down to 0.02 GeV². They will provide benchmark tests of chiral dynamics in the kinematic region where Chiral Perturbation Theory is expected to work
Learning to rank figures within a biomedical article.
Directory of Open Access Journals (Sweden)
Feifan Liu
Full Text Available Hundreds of millions of figures are available in biomedical literature, representing important biomedical experimental evidence. This ever-increasing sheer volume has made it difficult for scientists to effectively and accurately access figures of their interest, the process of which is crucial for validating research facts and for formulating or testing novel research hypotheses. Current figure search applications can't fully meet this challenge as the "bag of figures" assumption doesn't take into account the relationship among figures. In our previous study, hundreds of biomedical researchers have annotated articles in which they serve as corresponding authors. They ranked each figure in their paper based on a figure's importance at their discretion, referred to as "figure ranking". Using this collection of annotated data, we investigated computational approaches to automatically rank figures. We exploited and extended the state-of-the-art listwise learning-to-rank algorithms and developed a new supervised-learning model BioFigRank. The cross-validation results show that BioFigRank yielded the best performance compared with other state-of-the-art computational models, and the greedy feature selection can further boost the ranking performance significantly. Furthermore, we carry out the evaluation by comparing BioFigRank with three-level competitive domain-specific human experts: (1) First Author, (2) Non-Author-In-Domain-Expert, who is not the author nor co-author of an article but who works in the same field as the corresponding author of the article, and (3) Non-Author-Out-Domain-Expert, who is not the author nor co-author of an article and who may or may not work in the same field as the corresponding author of an article. Our results show that BioFigRank outperforms Non-Author-Out-Domain-Expert and performs as well as Non-Author-In-Domain-Expert. Although BioFigRank underperforms First Author, since most biomedical researchers are either in- or
Automatic figure ranking and user interfacing for intelligent figure search.
Directory of Open Access Journals (Sweden)
Hong Yu
2010-10-01
Full Text Available Figures are important experimental results that are typically reported in full-text bioscience articles. Bioscience researchers need to access figures to validate research facts and to formulate or to test novel research hypotheses. On the other hand, the sheer volume of bioscience literature has made it difficult to access figures. Therefore, we are developing an intelligent figure search engine (http://figuresearch.askhermes.org). Existing research in figure search treats each figure equally, but we introduce a novel concept of "figure ranking": figures appearing in a full-text biomedical article can be ranked by their contribution to the knowledge discovery. We empirically validated the hypothesis of figure ranking with over 100 bioscience researchers, and then developed unsupervised natural language processing (NLP) approaches to automatically rank figures. Evaluating on a collection of 202 full-text articles in which authors have ranked the figures based on importance, our best system achieved a weighted error rate of 0.2, which is significantly better than several other baseline systems we explored. We further explored a user-interfacing application in which we built novel user interfaces (UIs) incorporating figure ranking, allowing bioscience researchers to efficiently access important figures. Our evaluation results show that 92% of the bioscience researchers chose, as their top two preferences, the user interfaces in which the most important figures are enlarged. With our automatic figure ranking NLP system, bioscience researchers preferred the UIs in which the most important figures were predicted by our NLP system over the UIs in which the most important figures were randomly assigned. In addition, our results show that there was no statistical difference in bioscience researchers' preference between the UIs generated by automatic figure ranking and UIs by human ranking annotation. The evaluation results conclude that automatic figure ranking and user
Learning to rank figures within a biomedical article.
Liu, Feifan; Yu, Hong
2014-01-01
Hundreds of millions of figures are available in biomedical literature, representing important biomedical experimental evidence. This ever-increasing sheer volume has made it difficult for scientists to effectively and accurately access figures of their interest, the process of which is crucial for validating research facts and for formulating or testing novel research hypotheses. Current figure search applications can't fully meet this challenge as the "bag of figures" assumption doesn't take into account the relationship among figures. In our previous study, hundreds of biomedical researchers have annotated articles in which they serve as corresponding authors. They ranked each figure in their paper based on a figure's importance at their discretion, referred to as "figure ranking". Using this collection of annotated data, we investigated computational approaches to automatically rank figures. We exploited and extended the state-of-the-art listwise learning-to-rank algorithms and developed a new supervised-learning model BioFigRank. The cross-validation results show that BioFigRank yielded the best performance compared with other state-of-the-art computational models, and the greedy feature selection can further boost the ranking performance significantly. Furthermore, we carry out the evaluation by comparing BioFigRank with three-level competitive domain-specific human experts: (1) First Author, (2) Non-Author-In-Domain-Expert who is not the author nor co-author of an article but who works in the same field of the corresponding author of the article, and (3) Non-Author-Out-Domain-Expert who is not the author nor co-author of an article and who may or may not work in the same field of the corresponding author of an article. Our results show that BioFigRank outperforms Non-Author-Out-Domain-Expert and performs as well as Non-Author-In-Domain-Expert. Although BioFigRank underperforms First Author, since most biomedical researchers are either in- or out
Demographic Ranking of the Baltic Sea States
Directory of Open Access Journals (Sweden)
Sluka N.
2014-06-01
Full Text Available The relevance of the study lies in the acute need to modernise the tools for a more accurate and comparable reflection of the demographic reality of spatial objects of different scales. This article aims to test the method of “demographic ranking” developed by Yermakov and Shmakov. The method is based on the principles of indirect standardisation of the major demographic coefficients relative to the age structure. The article describes the first attempt to apply the method to the analysis of birth and mortality rates in 1995 and 2010 for 140 countries against the global average, and for the Baltic Sea states against the European average. The grouping of countries and the analysis of changes over the given period confirmed a number of demographic development trends and the persistence of wide territorial disparities in the major indicators. The authors identify opposite trends in ranking based on the standardised birth rates (consolidation of countries at the level of averaged values) and mortality rates (polarisation). The features of demographic process development in the Baltic Sea states are described against the global and European background. The study confirmed the validity of the demographic ranking method, which can be instrumental in solving not only scientific but also practical tasks, including those in the field of demographic and social policy.
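Indirect standardisation, the core of the ranking method described above, can be shown with made-up numbers: apply a reference population's age-specific rates to the study population's age structure to get an expected count, then compare observed to expected.

```python
# Indirect standardisation sketch (all numbers invented): the reference
# population's age-specific death rates are applied to the study
# population's age structure, giving expected deaths; the standardised
# mortality ratio SMR = observed / expected removes the effect of the
# age structure from the comparison.
ref_rates = {"0-14": 0.001, "15-64": 0.004, "65+": 0.050}    # reference rates
study_pop = {"0-14": 20_000, "15-64": 60_000, "65+": 20_000} # study age structure
observed_deaths = 1_100

expected = sum(ref_rates[a] * study_pop[a] for a in study_pop)
smr = observed_deaths / expected
# expected = 20 + 240 + 1000 = 1260 deaths; SMR ≈ 0.87, i.e. the study
# population experiences fewer deaths than its age structure predicts.
```

Ranking countries by such standardised coefficients, rather than by crude rates, is what makes populations with very different age pyramids comparable.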
Two-dimensional ranking of Wikipedia articles
Zhirov, A. O.; Zhirov, O. V.; Shepelyansky, D. L.
2010-10-01
The Library of Babel, described by Jorge Luis Borges, stores an enormous amount of information. The Library exists ab aeterno. Wikipedia, a free online encyclopaedia, becomes a modern analogue of such a Library. Information retrieval and ranking of Wikipedia articles become the challenge of modern society. While PageRank highlights very well known nodes with many ingoing links, CheiRank highlights very communicative nodes with many outgoing links. In this way the ranking becomes two-dimensional. Using CheiRank and PageRank we analyze the properties of two-dimensional ranking of all Wikipedia English articles and show that it gives their reliable classification with rich and nontrivial features. Detailed studies are done for countries, universities, personalities, physicists, chess players, Dow-Jones companies and other categories.
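The two-dimensional idea can be sketched in a few lines: CheiRank is simply PageRank computed on the network with every link reversed. The star network and parameter values below are made up for illustration.

```python
import numpy as np

def pagerank_vec(adj, d=0.85, iters=200):
    """PageRank of a 0/1 adjacency matrix (adj[i, j] = 1 for a link i -> j).
    Dangling nodes are spread uniformly; d is the damping factor."""
    n = adj.shape[0]
    out = adj.sum(axis=1, keepdims=True)
    M = np.where(out > 0, adj / np.maximum(out, 1), 1.0 / n).T
    r = np.full(n, 1.0 / n)
    for _ in range(iters):
        r = (1 - d) / n + d * M @ r
    return r

# Toy network: node 0 links to everyone (a communicative hub), while
# everyone links to node 1 (a popular authority).
A = np.zeros((4, 4))
A[0, 1:] = 1          # node 0: many outgoing links
A[1:, 1] = 1          # node 1: many ingoing links
np.fill_diagonal(A, 0)

pr = pagerank_vec(A)      # PageRank: highlights node 1 (many ingoing links)
cr = pagerank_vec(A.T)    # CheiRank: PageRank of the inverted network,
                          # highlights node 0 (many outgoing links)
```

Plotting each article at the point (PageRank position, CheiRank position) gives exactly the two-dimensional ranking plane the abstract describes.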
24 CFR 599.401 - Ranking of applications.
2010-04-01
... 24 Housing and Urban Development 3 2010-04-01 2010-04-01 false Ranking of applications. 599.401... Communities § 599.401 Ranking of applications. (a) Ranking order. Rural and urban applications will be ranked... applications ranked first. (b) Separate ranking categories. After initial ranking, both rural and urban...
Leclerc, Arnaud; Carrington, Tucker
2014-05-07
We propose an iterative method for computing vibrational spectra that significantly reduces the memory cost of calculations. It uses a direct product primitive basis, but does not require storing vectors with as many components as there are product basis functions. Wavefunctions are represented in a basis each of whose functions is a sum of products (SOP), and the factorizable structure of the Hamiltonian is exploited. If the factors of the SOP basis functions are properly chosen, wavefunctions are linear combinations of a small number of SOP basis functions. The SOP basis functions are generated using a shifted block power method. The factors are refined with a rank reduction algorithm to cap the number of terms in a SOP basis function. The ideas are tested on a 20-D model Hamiltonian and a realistic CH3CN (12-dimensional) potential. For the 20-D problem, to use a standard direct product iterative approach one would need to store vectors with about 10²⁰ components and would hence require about 8 × 10¹¹ GB. With the approach of this paper only 1 GB of memory is necessary. Results for CH3CN agree well with those of a previous calculation on the same potential.
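In two dimensions the rank-reduction step has a closed form via the SVD, so the idea of capping the number of terms in a sum of products can be shown with a toy 2-D analogue (the genuinely high-dimensional SOP reduction used in the paper is iterative, not a single SVD; sizes and data below are invented):

```python
import numpy as np

# A "sum of products" with r terms in 2-D is a rank-r matrix. Build a
# function that is essentially 3 product terms plus one negligible term,
# then cap it back to 3 terms with a truncated SVD (the best rank-3
# approximation in the least squares sense).
rng = np.random.default_rng(0)
A = sum(np.outer(rng.normal(size=50), rng.normal(size=50)) for _ in range(3))
A = A + 1e-6 * np.outer(rng.normal(size=50), rng.normal(size=50))  # tiny 4th term

U, s, Vt = np.linalg.svd(A)
r = 3
A_capped = (U[:, :r] * s[:r]) @ Vt[:r]    # keep only 3 product terms

rel_err = np.linalg.norm(A - A_capped) / np.linalg.norm(A)
# rel_err is of the order of the dropped term: capping the rank barely
# changes the represented function while bounding the storage.
```

This is the essence of why capping the number of SOP terms controls memory: storage grows with the number of product terms, not with the size of the full direct product grid.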
Wilcoxon's signed-rank statistic: what null hypothesis and why it matters.
Li, Heng; Johnson, Terri
2014-01-01
In statistical literature, the term 'signed-rank test' (or 'Wilcoxon signed-rank test') has been used to refer to two distinct tests: a test for symmetry of distribution and a test for the median of a symmetric distribution, sharing a common test statistic. To avoid potential ambiguity, we propose to refer to those two tests by different names, as 'test for symmetry based on signed-rank statistic' and 'test for median based on signed-rank statistic', respectively. The utility of such terminological differentiation should become evident through our discussion of how those tests connect and contrast with sign test and one-sample t-test. Published 2014. This article is a U.S. Government work and is in the public domain in the USA.
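Both tests named above share the same statistic, so it is worth seeing how it is computed. A minimal sketch of W⁺, the sum of the ranks of the positive differences (no handling of zeros or tied absolute values, which production implementations such as scipy.stats.wilcoxon do provide):

```python
def signed_rank_stat(diffs):
    """Wilcoxon's W+: rank the nonzero differences by absolute value
    (rank 1 = smallest |difference|) and sum the ranks of the positive
    ones. Assumes no ties in |difference|."""
    nonzero = [d for d in diffs if d != 0]
    by_abs = sorted(nonzero, key=abs)
    return sum(rank for rank, d in enumerate(by_abs, start=1) if d > 0)

# Mostly positive differences give a large W+. Under either null
# hypothesis discussed above, E[W+] = n(n+1)/4 (here 10.5 for n = 6).
d = [1.2, 0.8, -0.5, 2.1, 0.3, 1.7]
w_plus = signed_rank_stat(d)   # ranks 1,3,4,5,6 are positive: W+ = 19
```

The statistic is the same either way; what differs between the two tests is the null hypothesis under which its distribution is derived, which is precisely the distinction the paper urges readers to keep explicit.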
Cioca, L. I.; Giurea, R.; Precazzini, I.; Ragazzi, M.; Achim, M. I.; Schiavon, M.; Rada, E. C.
2018-05-01
The global growth of tourism has generated significant research interest in its impact on the environment and local communities. The purpose of this study is to introduce a new ranking for the classification of tourist accommodation establishments of the agro-tourism boarding house type, by examining the agro-tourism sector on the basis of research aimed at improving the economic, socio-cultural and environmental performance of agro-tourism structures. This paper links the criteria for the classification of agro-tourism boarding houses (ABHs) to the impact of agro-tourism activities on the environment, promoting an eco-friendly approach to agro-tourism activities by increasing the quality reputation of agro-tourism products and services. Taking its environmental impact into account, agro-tourism can play an important role in protecting and conserving the environment.
Evaluation of the convolution sum involving the sum of divisors function for 22, 44 and 52
Directory of Open Access Journals (Sweden)
Ntienjem Ebénézer
2017-04-01
The convolution sum Σ σ(l)σ(m), taken over positive integers l, m with αl + βm = n, where αβ = 22, 44, 52, is evaluated for all natural numbers n. Modular forms are used to achieve these evaluations. Since the modular space of level 22 is contained in that of level 44, we almost completely use the basis elements of the modular space of level 44 to carry out the evaluation of the convolution sums for αβ = 22. We then use these convolution sums to determine formulae for the number of representations of a positive integer by the octonary quadratic forms a(x₁² + x₂² + x₃² + x₄²) + b(x₅² + x₆² + x₇² + x₈²), where (a, b) = (1, 11), (1, 13).
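The convolution sum being evaluated can be checked numerically by brute force, assuming the standard definition Σ σ(l)σ(m) over αl + βm = n (which matches the title; the paper's closed-form modular-form evaluations are not reproduced here):

```python
def sigma(n):
    """Sum of the positive divisors of n."""
    return sum(d for d in range(1, n + 1) if n % d == 0)

def conv_sum(alpha, beta, n):
    """Brute-force W_(alpha,beta)(n) = sum of sigma(l)*sigma(m) over all
    positive integers l, m with alpha*l + beta*m = n."""
    total = 0
    for l in range(1, n // alpha + 1):
        rem = n - alpha * l
        if rem > 0 and rem % beta == 0:
            total += sigma(l) * sigma(rem // beta)
    return total

# alpha = 1, beta = 22 gives alpha*beta = 22, one of the levels treated:
w = conv_sum(1, 22, 23)   # only (l, m) = (1, 1) contributes: sigma(1)^2 = 1
# Sanity check against the classical Besge case alpha = beta = 1:
# conv_sum(1, 1, 4) = sigma(1)sigma(3) + sigma(2)sigma(2) + sigma(3)sigma(1)
#                   = 4 + 9 + 4 = 17
```

Such brute-force values are exactly what one compares against the closed-form modular-form formulae when verifying an evaluation of this kind.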
Rank-based model selection for multiple ions quantum tomography
International Nuclear Information System (INIS)
Guţă, Mădălin; Kypraios, Theodore; Dryden, Ian
2012-01-01
The statistical analysis of measurement data has become a key component of many quantum engineering experiments. As standard full state tomography becomes unfeasible for large-dimensional quantum systems, one needs to exploit prior information and the ‘sparsity’ properties of the experimental state in order to reduce the dimensionality of the estimation problem. In this paper we propose model selection as a general principle for finding the simplest, or most parsimonious, explanation of the data, by fitting different models and choosing the estimator with the best trade-off between likelihood fit and model complexity. We apply two well-established model selection methods—the Akaike information criterion (AIC) and the Bayesian information criterion (BIC)—to models consisting of states of fixed rank and datasets such as are currently produced in multiple ions experiments. We test the performance of AIC and BIC on randomly chosen low-rank states of four ions, and study the dependence of the selected rank on the number of measurement repetitions for one-ion states. We then apply the methods to real data from a four-ions experiment aimed at creating a Smolin state of rank 4. By applying the two methods together with the Pearson χ² test we conclude that the data can be suitably described with a model whose rank is between 7 and 9. Additionally we find that the mean square error of the maximum likelihood estimator for pure states is close to that of the optimal over all possible measurements. (paper)
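The AIC/BIC trade-off described above can be illustrated with made-up log-likelihoods for a few candidate ranks (the numbers are hypothetical, not taken from the ions experiment):

```python
import math

def aic(loglik, k):
    """Akaike information criterion: 2k - 2 log L (lower is better)."""
    return 2 * k - 2 * loglik

def bic(loglik, k, n):
    """Bayesian information criterion: k log n - 2 log L (lower is better)."""
    return k * math.log(n) - 2 * loglik

n = 1000  # hypothetical number of measurement repetitions
# rank -> (maximized log-likelihood, number of free parameters); higher
# rank fits a little better but costs more parameters.
candidates = {1: (-1500.0, 6), 2: (-1490.0, 12), 3: (-1489.0, 18)}

best_aic = min(candidates, key=lambda r: aic(*candidates[r]))
best_bic = min(candidates, key=lambda r: bic(*candidates[r], n))
# AIC's milder penalty (2 per parameter) accepts the rank-2 model, while
# BIC's stronger penalty (log n ≈ 6.9 per parameter) keeps rank 1.
```

The same mechanism is at work in the tomography setting: the selected rank is whichever model wins this fit-versus-complexity trade-off, and BIC's harsher penalty tends to select lower ranks than AIC for the same data.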
The Asymptotic Joint Distribution of Self-Normalized Censored Sums and Sums of Squares
Hahn, Marjorie G.; Kuelbs, Jim; Weiner, Daniel C.
1990-01-01
Empirical versions of appropriate centering and scale constants for random variables which can fail to have second or even first moments are obtainable in various ways via suitable modifications of the summands in the partial sum. This paper discusses a particular modification, called censoring (which is a kind of winsorization), where the (random) number of summands altered tends to infinity but the proportion altered tends to zero as the number of summands increases. Some analytic advantage...
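The censoring modification can be illustrated numerically. In the sketch below (the clipping convention, replacing the k largest magnitudes by the next-largest magnitude, is our assumption made for illustration), the number of altered summands k would grow with n while k/n tends to zero in the regime the paper studies:

```python
import numpy as np

def censored_sum(x, k):
    """Censor (a kind of winsorization): clip the k largest |values| to
    the magnitude of the (k+1)-th largest, then sum."""
    a = np.sort(np.abs(x))
    cutoff = a[-(k + 1)]                 # (k+1)-th largest magnitude
    return np.clip(x, -cutoff, cutoff).sum()

# Tiny deterministic example: the outlier 100 is clipped to 3,
# so the censored sum is 1 + 2 + 3 + 3 = 9.
small = censored_sum(np.array([1.0, 2.0, 3.0, 100.0]), k=1)

# Heavy-tailed demo: Cauchy samples have no finite mean, so the raw
# partial sum is dominated by a few huge observations, while the
# censored sum is stabilized by clipping them.
rng = np.random.default_rng(1)
x = rng.standard_cauchy(10_000)
cens_mean = censored_sum(x, k=100) / len(x)
```

Clipping only a vanishing proportion of summands is what lets the censored sums admit the normal-type joint limits the paper studies even when the raw sums do not.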
Sum rules and constraints on passive systems
International Nuclear Information System (INIS)
Bernland, A; Gustafsson, M; Luger, A
2011-01-01
A passive system is one that cannot produce energy, a property that naturally poses constraints on the system. A system in convolution form is fully described by its transfer function, and the class of Herglotz functions, holomorphic functions mapping the open upper half-plane to the closed upper half-plane, is closely related to the transfer functions of passive systems. Following a well-known representation theorem, Herglotz functions can be represented by means of positive measures on the real line. This fact is exploited in this paper in order to rigorously prove a set of integral identities for Herglotz functions that relate weighted integrals of the function to its asymptotic expansions at the origin and infinity. The integral identities are the core of a general approach introduced here to derive sum rules and physical limitations on various passive physical systems. Although similar approaches have previously been applied to a wide range of specific applications, this paper is the first to deliver a general procedure together with the necessary proofs. This procedure is described thoroughly and exemplified with examples from electromagnetic theory.
Sum frequency generation for surface vibrational spectroscopy
International Nuclear Information System (INIS)
Hunt, J.H.; Guyot-Sionnest, P.; Shen, Y.R.
1987-01-01
Surface vibrational spectroscopy is one of the best means for characterizing molecular adsorbates. For this reason, many techniques have been developed in the past. However, most of them suffer from poor sensitivity, low spectral and temporal resolution, and applications limited to vacuum-solid interfaces. Recently, the second harmonic generation (SHG) technique was proved repeatedly to be a simple but versatile surface probe. It is highly sensitive and surface specific; it is also capable of achieving high temporal, spatial, and spectral resolution. Being an optical technique, it can be applied to any interface accessible by light. The only serious drawback is its lack of molecular selectivity. An obvious remedy is the extension of the technique to IR-visible sum frequency generation (SFG). Surface vibrational spectroscopy with submonolayer sensitivity is then possible using SFG with the help of a tunable IR laser. The authors report here an SFG measurement of the C-H stretch vibration of monolayers of molecules at air-solid and air-liquid interfaces
Fernandez-Gamboa, Iosu; Yanci, Javier; Granados, Cristina; Camara, Jesus
2017-08-01
Fernandez-Gamboa, I, Yanci, J, Granados, C, and Camara, J. Comparison of anthropometry and lower limb power qualities according to different levels and ranking position of competitive surfers. J Strength Cond Res 31(8): 2231-2237, 2017-The aim of this study was to compare competitive surfers' lower limb power output depending on their competitive level, and to evaluate the association between competition rankings. Twenty competitive surfers were divided according to the competitive level as follows: international (INT) or national (NAT), and competitive ranking (RANK1-50 or RANK51-100). Vertical jump and maximal peak power of the lower limbs were measured. No differences were found between INT and NAT surfers in the anthropometric variables, in the vertical jump, or in lower extremity power; although the NAT group had higher levels on the elasticity index, squat jumps (SJs), and counter movement jumps (CMJs) compared with the INT group. The RANK1-50 group had a lower biceps skinfold (p RANK1-50 group. Moderate to large significant correlations were obtained between the surfers' ranking position and some skinfolds, the sum of skinfolds, and vertical jump. Results demonstrate that surfers' physical performance seems to be an accurate indicator of ranking positioning, also revealing that vertical jump capacity and anthropometric variables play an important role in their competitive performance, which may be important when considering their power training.
A Multiobjective Programming Method for Ranking All Units Based on Compensatory DEA Model
Directory of Open Access Journals (Sweden)
Haifang Cheng
2014-01-01
Full Text Available In order to rank all decision making units (DMUs) on the same basis, this paper proposes a multiobjective programming (MOP) model based on a compensatory data envelopment analysis (DEA) model to derive a common set of weights that can be used for the full ranking of all DMUs. We first revisit a compensatory DEA model for ranking all units, point out the existing problem in solving the model, and present an improved algorithm by which an approximate global optimal solution of the model can be obtained by solving a sequence of linear programs. Then, we apply the key idea of the compensatory DEA model to develop the MOP model, in which the objectives are to simultaneously maximize all common weights under the constraints that the sum of efficiency values of all DMUs is equal to unity and the sum of all common weights is also equal to unity. In order to solve the MOP model, we transform it into a single objective programming (SOP) model using a fuzzy programming method and solve the SOP model using the proposed approximation algorithm. To illustrate the proposed ranking method, two numerical examples are solved.
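The common-set-of-weights idea at the heart of the approach can be illustrated with a toy scoring step (data and weights below are invented, and in the paper the weights come out of the MOP model rather than being fixed by hand):

```python
import numpy as np

# Three DMUs, two inputs, two outputs (all numbers made up). With a
# SINGLE common weight vector, every DMU's ratio efficiency is computed
# on the same basis, so the scores are directly comparable and yield a
# full ranking -- unlike classical DEA, where each DMU picks its own
# most favorable weights.
inputs  = np.array([[2.0, 3.0], [4.0, 1.0], [3.0, 2.0]])   # DMU x input
outputs = np.array([[5.0, 1.0], [3.0, 2.0], [6.0, 1.5]])   # DMU x output
v = np.array([0.5, 0.5])   # common input weights  (sum to 1)
u = np.array([0.5, 0.5])   # common output weights (sum to 1)

eff = (outputs @ u) / (inputs @ v)    # ratio efficiency of each DMU
ranking = np.argsort(-eff)            # best DMU first
```

Classical DEA lets each unit choose weights that flatter it, which is why many units tie at efficiency 1; fixing one common weight vector, as above, is what breaks those ties and makes a full ranking possible.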
Error analysis of stochastic gradient descent ranking.
Chen, Hong; Tang, Yi; Li, Luoqing; Yuan, Yuan; Li, Xuelong; Tang, Yuanyan
2013-06-01
Ranking is always an important task in machine learning and information retrieval, e.g., in collaborative filtering, recommender systems, and drug discovery. A kernel-based stochastic gradient descent algorithm with the least squares loss is proposed for ranking in this paper. The implementation of this algorithm is simple, and an expression for the solution is derived via a sampling operator and an integral operator. An explicit convergence rate for learning a ranking function is given in terms of suitable choices of the step size and the regularization parameter. The analysis technique used here is capacity independent and is novel in the error analysis of ranking learning. Experimental results on real-world data show the effectiveness of the proposed algorithm in ranking tasks, which verifies the theoretical analysis of the ranking error.
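As a rough illustration of the idea in the abstract above (not the paper's algorithm: this sketch uses a plain linear scorer rather than a kernel expansion, toy data, and an assumed decaying step size), pairwise least-squares ranking by stochastic gradient descent looks like:

```python
import random

def sgd_rank(pairs, dim, steps=2000, step0=0.5, lam=0.01, seed=0):
    """Pairwise least-squares ranking by SGD: for a sampled pair
    ((x, rx), (y, ry)), take a gradient step on
    (w.(x - y) - (rx - ry))**2 + lam * |w|**2."""
    rng = random.Random(seed)
    w = [0.0] * dim
    for t in range(1, steps + 1):
        (x, rx), (y, ry) = rng.choice(pairs)
        d = [xi - yi for xi, yi in zip(x, y)]
        err = sum(wi * di for wi, di in zip(w, d)) - (rx - ry)
        eta = step0 / t ** 0.5      # decaying step size
        w = [wi - eta * (2 * err * di + 2 * lam * wi) for wi, di in zip(w, d)]
    return w

# toy data: true relevance equals the first coordinate
items = [([i / 4.0, 1.0], i / 4.0) for i in range(5)]
pairs = [(u, v) for u in items for v in items if u is not v]
w = sgd_rank(pairs, dim=2)
scores = [sum(wi * xi for wi, xi in zip(w, x)) for x, _ in items]
assert all(scores[i] < scores[i + 1] for i in range(4))  # order recovered
```

The learned scores reproduce the true ordering of the toy items; the kernel version replaces the linear scorer with an expansion over sampled points.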
Light cone sum rules in nonabelian gauge field theory
Energy Technology Data Exchange (ETDEWEB)
Mallik, S [Bern Univ. (Switzerland). Inst. fuer Theoretische Physik
1981-03-24
The author examines, in the context of nonabelian gauge field theory, the derivation of the light cone sum rules which were obtained earlier on the assumption of dominance of canonical singularity in the current commutator on the light cone. The retarded scaling functions appearing in the sum rules are numbers known in terms of the charges of the quarks and the number of quarks and gluons in the theory. Possible applications of the sum rules are suggested.
On the Laplace transform of the Weinberg type sum rules
International Nuclear Information System (INIS)
Narison, S.
1981-09-01
We consider the Laplace transform of various sum rules of the Weinberg type, including the leading non-perturbative effects. We show from the third-type Weinberg sum rule that the A1 coupling to the W boson lies in the range 7.5 to 8.9, while the second sum rule gives an upper bound on the A1 mass (M(A1) ≲ 1.25 GeV). (author)
Premium Pricing of Liability Insurance Using Random Sum Model
Kartikasari, Mujiati Dwi
2017-01-01
Premium pricing is one of the important activities in insurance. A nonlife insurance premium is calculated from the expected value of historical claim data. The historical claims form a sum of an independent random number of terms, which is called a random sum. In premium pricing using random sums, the claim frequency distribution and the claim severity distribution are combined. The combination of these distributions is called a compound distribution. By using liability claim insurance data, we ...
On poisson-stopped-sums that are mixed poisson
Valero Baya, Jordi; Pérez Casany, Marta; Ginebra Molins, Josep
2013-01-01
Maceda (1948) characterized the mixed Poisson distributions that are Poisson-stopped-sum distributions based on the mixing distribution. In an alternative characterization of the same set of distributions, the Poisson-stopped-sum distributions that are mixed Poisson distributions are here proved to be the set of Poisson-stopped-sums of either a mixture of zero-truncated Poisson distributions or a zero-modification of it. Peer Reviewed
Inclusive sum rules and spectra of neutrons at the ISR
International Nuclear Information System (INIS)
Grigoryan, A.A.
1975-01-01
Neutron spectra in pp collisions at ISR energies are studied in the framework of sum rules for inclusive processes. The contributions of protons, π- and K-mesons to the energy sum rule are calculated at √s = 53 GeV. It is shown by means of this sum rule that the spectra of neutrons at the ISR are in contradiction with the spectra of other particles also measured at the ISR
Singular f-sum rule for superfluid 4He
International Nuclear Information System (INIS)
Wong, V.K.
1979-01-01
The validity and applicability to inelastic neutron scattering of a singular f-sum rule for superfluid helium, proposed by Griffin to explain the ρ_s dependence in S(k, ω) as observed by Woods and Svensson, are examined in the light of similar sum rules rigorously derived for anharmonic crystals and Bose liquids. It is concluded that the singular f-sum rules are only of microscopic interest. (Auth.)
Compound Poisson Approximations for Sums of Random Variables
Serfozo, Richard F.
1986-01-01
We show that a sum of dependent random variables is approximately compound Poisson when the variables are rarely nonzero and, given they are nonzero, their conditional distributions are nearly identical. We give several upper bounds on the total-variation distance between the distribution of such a sum and a compound Poisson distribution. Included is an example for Markovian occurrences of a rare event. Our bounds are consistent with those that are known for Poisson approximations for sums of...
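The flavor of such an approximation can be checked numerically. The sketch below uses an assumed toy model (independent variables that are zero with high probability and otherwise uniform on {1, 2, 3}, rather than the paper's dependent setting) and compares the exact distribution of the sum with the matching compound Poisson law in total variation:

```python
import math

def convolve(f, g):
    """Convolution of two pmfs given as lists indexed by value."""
    h = [0.0] * (len(f) + len(g) - 1)
    for i, fi in enumerate(f):
        for j, gj in enumerate(g):
            h[i + j] += fi * gj
    return h

n, p, m = 200, 0.02, 3          # 200 variables, each nonzero w.p. 0.02
x_pmf = [1 - p] + [p / m] * m   # one variable: 0, or uniform on {1,2,3}
s_pmf = [1.0]                   # exact pmf of the sum, by repeated convolution
for _ in range(n):
    s_pmf = convolve(s_pmf, x_pmf)

# matching compound Poisson: N ~ Poisson(n*p), severities uniform on {1,2,3}
lam, g = n * p, [0.0] + [1.0 / m] * m
cp, gk = [0.0] * len(s_pmf), [1.0]   # gk holds the k-fold convolution of g
for k in range(60):
    weight = math.exp(-lam) * lam ** k / math.factorial(k)
    for s, prob in enumerate(gk[: len(cp)]):
        cp[s] += weight * prob
    gk = convolve(gk, g)

tv = 0.5 * sum(abs(a - b) for a, b in zip(s_pmf, cp))
assert tv < 0.05                # the two laws are close in total variation
```

The small total-variation distance reflects Le Cam-type bounds (here at most np² = 0.08, and in practice much less) for rarely-nonzero summands.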
Dietary risk ranking for residual antibiotics in cultured aquatic products around Tai Lake, China.
Song, Chao; Li, Le; Zhang, Cong; Qiu, Liping; Fan, Limin; Wu, Wei; Meng, Shunlong; Hu, Gengdong; Chen, Jiazhang; Liu, Ying; Mao, Aimin
2017-10-01
Antibiotics are widely used in aquaculture and therefore may present a dietary risk in cultured aquatic products. Using the Tai Lake Basin as a study area, we assessed the presence of 15 antibiotics in 5 widely cultured aquatic species using a newly developed dietary risk ranking approach. By assigning scores to each factor involved in the ranking matrices, the dietary risk scores per antibiotic and per aquatic species were calculated. The results indicated that fluoroquinolone antibiotics posed the highest dietary risk in all aquatic species. The total score per aquatic species was then obtained by summing the scores of all 15 antibiotics; crab (Eriocheir sinensis) was found to carry the highest dietary risk. Finally, the antibiotic category and aquatic species of greatest concern were identified. This study highlights the importance of dietary risk ranking in the production and consumption of cultured aquatic products around Tai Lake. Copyright © 2017 Elsevier Inc. All rights reserved.
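The summing step of such a ranking matrix can be sketched as follows (with purely illustrative scores and a reduced set of antibiotic classes and species, not the study's values):

```python
# purely illustrative scores (not the study's data): each cell is a
# per-antibiotic, per-species dietary-risk score from the ranking matrix
scores = {
    "fluoroquinolone": {"crab": 9, "carp": 7, "shrimp": 6},
    "sulfonamide":     {"crab": 4, "carp": 3, "shrimp": 2},
    "tetracycline":    {"crab": 5, "carp": 4, "shrimp": 3},
}

# total risk per species = sum of its scores over all antibiotic classes
per_species = {}
for antibiotic, row in scores.items():
    for species, s in row.items():
        per_species[species] = per_species.get(species, 0) + s

ranked = sorted(per_species.items(), key=lambda kv: kv[1], reverse=True)
assert ranked[0][0] == "crab"   # highest summed risk in this toy matrix
```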
Application of third order stochastic dominance algorithm in investments ranking
Directory of Open Access Journals (Sweden)
Lončar Sanja
2012-01-01
Full Text Available The paper presents the use of third order stochastic dominance in ranking investment alternatives, using TSD algorithms (Levy, 2006) for testing third order stochastic dominance. The main goal of using the TSD rule is minimization of the efficient investment set for an investor with risk aversion who prefers more money and likes positive skewness.
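A numerical sketch of a TSD check is given below. It is a grid approximation of the dominance conditions on empirical CDFs (alternative a third-order dominates b when the twice-integrated CDF of a never exceeds that of b and a has at least the mean of b), not Levy's exact algorithm; grid size and tolerance are illustrative choices:

```python
def tsd_dominates(a, b, grid_n=2000):
    """Grid check of third-order stochastic dominance of sample a over b:
    the twice-integrated empirical CDF of a must never exceed that of b,
    and a must have at least the mean of b."""
    lo, hi = min(a + b), max(a + b)
    step = (hi - lo) / grid_n

    def cdf(sample, x):
        return sum(1 for v in sample if v <= x) / len(sample)

    f2a = f2b = 0.0     # running first integrals of the CDFs
    f3a = f3b = 0.0     # running second integrals
    for i in range(grid_n + 1):
        x = lo + i * step
        f3a += f2a * step
        f3b += f2b * step
        if f3a > f3b + 1e-9:
            return False
        f2a += cdf(a, x) * step
        f2b += cdf(b, x) * step
    return sum(a) / len(a) >= sum(b) / len(b)

# an upward shift dominates (first-order dominance implies TSD)
assert tsd_dominates([2, 3, 4], [1, 2, 3])
assert not tsd_dominates([1, 2, 3], [2, 3, 4])
```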
Adaptive Game Level Creation through Rank-based Interactive Evolution
DEFF Research Database (Denmark)
Liapis, Antonios; Martínez, Héctor Pérez; Togelius, Julian
2013-01-01
as fitness functions for the optimization of the generated content. The preference models are built via ranking-based preference learning, while the content is generated via evolutionary search. The proposed method is evaluated on the creation of strategy game maps, and its performance is tested using...
Website visibility the theory and practice of improving rankings
Weideman, Melius
2009-01-01
The quest to achieve high website rankings in search engine results is a prominent subject for both academics and website owners/coders. Website Visibility marries academic research results to the world of the information practitioner and contains a focused look at the elements which contribute to website visibility, providing support for the application of each element with relevant research. A series of real-world case studies with tested examples of research on website visibility elements and their effect on rankings are reviewed.Written by a well-respected academic and practitioner in the
Premium Pricing of Liability Insurance Using Random Sum Model
Directory of Open Access Journals (Sweden)
Mujiati Dwi Kartikasari
2017-03-01
Full Text Available Premium pricing is one of the important activities in insurance. A nonlife insurance premium is calculated from the expected value of historical claim data. The historical claims form a sum of an independent random number of terms, which is called a random sum. In premium pricing using random sums, the claim frequency distribution and the claim severity distribution are combined. The combination of these distributions is called a compound distribution. By using liability claim insurance data, we analyze premium pricing using a random sum model based on a compound distribution.
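By Wald's identity, the pure premium under a compound model is E[S] = E[N]·E[X]. A quick Monte Carlo check under assumed Poisson claim frequency and exponential claim severity (an illustration with made-up parameters, not the paper's data):

```python
import math, random

rng = random.Random(42)

def poisson(lam):
    """Knuth's inversion sampler for a Poisson random variate."""
    limit = math.exp(-lam)
    k, prod = 0, 1.0
    while True:
        prod *= rng.random()
        if prod <= limit:
            return k
        k += 1

freq_mean, sev_mean = 3.0, 100.0       # E[N] = 3 claims, E[X] = 100 per claim
n_sims = 20000
total = 0.0
for _ in range(n_sims):
    n_claims = poisson(freq_mean)      # random number of claims...
    total += sum(rng.expovariate(1.0 / sev_mean)   # ...summed severities
                 for _ in range(n_claims))
pure_premium = total / n_sims

# Wald's identity: E[S] = E[N] * E[X] = 3 * 100 = 300
assert abs(pure_premium - freq_mean * sev_mean) < 15.0
```

A loaded premium would then apply a safety margin, e.g. (1 + θ)·E[S].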
QCD sum rules and applications to nuclear physics
International Nuclear Information System (INIS)
Cohen, T.D.; Xuemin, J.
1994-12-01
Applications of QCD sum-rule methods to the physics of nuclei are reviewed, with an emphasis on calculations of baryon self-energies in infinite nuclear matter. The sum-rule approach relates spectral properties of hadrons propagating in the finite-density medium, such as optical potentials for quasinucleons, to matrix elements of QCD composite operators (condensates). The vacuum formalism for QCD sum rules is generalized to finite density, and the strategy and implementation of the approach is discussed. Predictions for baryon self-energies are compared to those suggested by relativistic nuclear physics phenomenology. Sum rules for vector mesons in dense nuclear matter are also considered. (author)
Adler Function, DIS sum rules and Crewther Relations
International Nuclear Information System (INIS)
Baikov, P.A.; Chetyrkin, K.G.; Kuehn, J.H.
2010-01-01
The current status of the Adler function and two closely related Deep Inelastic Scattering (DIS) sum rules, namely the Bjorken sum rule for polarized DIS and the Gross-Llewellyn Smith sum rule, is briefly reviewed. A new result is presented: an analytical calculation of the coefficient function of the latter sum rule in a generic gauge theory at order O(α_s^4). It is demonstrated that the corresponding Crewther relation allows one to fix two of the three colour structures in the O(α_s^4) contribution to the singlet part of the Adler function.
Model dependence of energy-weighted sum rules
International Nuclear Information System (INIS)
Kirson, M.W.
1977-01-01
The contribution of the nucleon-nucleon interaction to energy-weighted sum rules for electromagnetic multipole transitions is investigated. It is found that only isoscalar electric transitions might have model-independent energy-weighted sum rules. For these transitions, explicit momentum and angular momentum dependence of the nuclear force give rise to corrections to the sum rule which are found to be negligibly small, thus confirming the model independence of these specific sum rules. These conclusions are unaffected by correlation effects. (author)
Methodology for ranking restoration options
International Nuclear Information System (INIS)
Hedemann Jensen, Per
1999-04-01
The work described in this report has been performed as a part of the RESTRAT Project FI4P-CT95-0021a (PL 950128) co-funded by the Nuclear Fission Safety Programme of the European Commission. The RESTRAT project has the overall objective of developing generic methodologies for ranking restoration techniques as a function of contamination and site characteristics. The project includes analyses of existing remediation methodologies and contaminated sites, and is structured in the following steps: characterisation of relevant contaminated sites; identification and characterisation of relevant restoration techniques; assessment of the radiological impact; development and application of a selection methodology for restoration options; formulation of generic conclusions and development of a manual. The project is intended to apply to situations in which sites with nuclear installations have been contaminated with radioactive materials as a result of the operation of these installations. The areas considered for remedial measures include contaminated land areas, rivers and sediments in rivers, lakes, and sea areas. Five contaminated European sites have been studied. Various remedial measures have been envisaged with respect to the optimisation of the protection of the populations being exposed to the radionuclides at the sites. Cost-benefit analysis and multi-attribute utility analysis have been applied for optimisation. Health, economic and social attributes have been included and weighting factors for the different attributes have been determined by the use of scaling constants. (au)
Citation graph based ranking in Invenio
Marian, Ludmila; Rajman, Martin; Vesely, Martin
2010-01-01
Invenio is the web-based integrated digital library system developed at CERN. Within this framework, we present four types of ranking models based on the citation graph that complement the simple approach based on citation counts: time-dependent citation counts, a relevancy ranking which extends the PageRank model, a time-dependent ranking which combines the freshness of citations with PageRank and a ranking that takes into consideration the external citations. We present our analysis and results obtained on two main data sets: Inspire and CERN Document Server. Our main contributions are: (i) a study of the currently available ranking methods based on the citation graph; (ii) the development of new ranking methods that correct some of the identified limitations of the current methods such as treating all citations of equal importance, not taking time into account or considering the citation graph complete; (iii) a detailed study of the key parameters for these ranking methods. (The original publication is ava...
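The baseline citation-graph model that these ranking variants extend is plain PageRank. A minimal power-iteration sketch on a toy citation graph (the time-dependent weighting and the other extensions described in the abstract are omitted):

```python
def pagerank(nodes, edges, d=0.85, iters=100):
    """Plain power-iteration PageRank; edges are (citing, cited) pairs,
    so rank flows from a paper to the papers it cites."""
    out = {n: [] for n in nodes}
    for u, v in edges:
        out[u].append(v)
    r = {n: 1.0 / len(nodes) for n in nodes}
    for _ in range(iters):
        nxt = {n: (1 - d) / len(nodes) for n in nodes}
        for u in nodes:
            if out[u]:
                share = d * r[u] / len(out[u])
                for v in out[u]:
                    nxt[v] += share
            else:               # dangling paper: spread its mass evenly
                for v in nodes:
                    nxt[v] += d * r[u] / len(nodes)
        r = nxt
    return r

papers = ["a", "b", "c", "d"]
cites = [("b", "a"), ("c", "a"), ("d", "a"), ("d", "c")]
r = pagerank(papers, cites)
assert r["a"] == max(r.values())   # the most-cited paper ranks highest here
```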
Communities in Large Networks: Identification and Ranking
DEFF Research Database (Denmark)
Olsen, Martin
2008-01-01
We study the problem of identifying and ranking the members of a community in a very large network with link analysis only, given a set of representatives of the community. We define the concept of a community justified by a formal analysis of a simple model of the evolution of a directed graph. ...... and its immediate surroundings. The members are ranked with a “local” variant of the PageRank algorithm. Results are reported from successful experiments on identifying and ranking Danish Computer Science sites and Danish Chess pages using only a few representatives....
Ranking Entities in Networks via Lefschetz Duality
DEFF Research Database (Denmark)
Aabrandt, Andreas; Hansen, Vagn Lundsgaard; Poulsen, Bjarne
2014-01-01
then be ranked according to how essential their positions are in the network by considering the effect of their respective absences. Defining a ranking of a network which takes the individual position of each entity into account has the purpose of assigning different roles to the entities, e.g. agents......, in the network. In this paper it is shown that the topology of a given network induces a ranking of the entities in the network. Further, it is demonstrated how to calculate this ranking and thus how to identify weak sub-networks in any given network....
Modified Phenomena Identification and Ranking Table (PIRT) for Uncertainty Analysis
International Nuclear Information System (INIS)
Gol-Mohamad, Mohammad P.; Modarres, Mohammad; Mosleh, Ali
2006-01-01
This paper describes a methodology for characterizing important phenomena, which is part of a broader research effort by the authors called 'Modified PIRT'. The methodology provides a robust process of phenomena identification and ranking for more precise quantification of uncertainty. It is a two-step identification and ranking methodology based on thermal-hydraulic (TH) importance as well as uncertainty importance. The Analytical Hierarchical Process (AHP) has been used as a formal approach for TH identification and ranking. A formal uncertainty importance technique is used to estimate the degree of credibility of the TH model(s) used to represent the important phenomena. This part uses subjective justification by evaluating available information and data from experiments and code predictions. The proposed methodology was demonstrated by developing a PIRT for a large break loss of coolant accident (LBLOCA) for the LOFT integral facility with highest core power (test LB-1). (authors)
The DHG sum rule measured with medium energy photons
International Nuclear Information System (INIS)
Hicks, K.; Ardashev, K.; Babusci, D.
1997-01-01
The structure of the nucleon has many important features that are yet to be uncovered. Of current interest is the nucleon spin-structure, which can be measured in double-polarization experiments with photon beams of medium energy (0.1 to 2 GeV). One such experiment uses dispersion relations, applied to the Compton scattering amplitude, to relate a measurement of the total reaction cross section, integrated over the incident photon energy, to the nucleon anomalous magnetic moment. At present, no single facility spans the entire range of photon energies necessary to test this sum rule. The Laser-Electron Gamma Source (LEGS) facility will measure the double-polarization observables at photon energies between 0.15 and 0.47 GeV. Either the SPring8 facility, the GRAAL facility (France), or Jefferson Laboratory could make similar measurements at higher photon energies. A high-precision measurement of the spin-polarizability and the Drell-Hearn-Gerasimov sum rule is now possible with the advent of highly polarized solid HD targets at medium energy polarized photon facilities such as LEGS, GRAAL and SPring8. Other facilities with lower polarization in either the photon beam or the target (or both) are also pursuing these measurements because of the high priority associated with this physics. The Spin-asymmetry (SASY) detector that will be used at LEGS has been briefly outlined in this paper. The detector efficiencies have been explored in simulation studies using the GEANT software, with the result that both charged and uncharged pions can be detected with a reasonable efficiency (> 30%) over a large solid angle. Tracking with a TPC, which will be built at LEGS over the next few years, will improve the capabilities of these measurements
Direct and reverse inclusions for strongly multiple summing operators
Indian Academy of Sciences (India)
and strongly multiple summing operators under the assumption that the range has finite cotype. Keywords: multiple (q, p)-summing operators. An operator is multiple (q, p)-summing if there exists a constant C ≥ 0 such that for every choice of systems (x^j_{i_j})_{1 ≤ i_j ≤ m_j} ... Ideals and their Applications in Theoretical Physics (1983) (Leipzig: Teubner-Texte) pp. 185-199.
29 CFR 4044.75 - Other lump sum benefits.
2010-07-01
... sum benefits. The value of a lump sum benefit which is not covered under § 4044.73 or § 4044.74 is equal to— (a) The value under the qualifying bid, if an insurer provides the benefit; or (b) The present value of the benefit as of the date of distribution, determined using reasonable actuarial assumptions...
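The present-value computation in (b) is standard discounting. As a generic sketch (the payment, rate, and horizon are hypothetical; actual valuations use the prescribed actuarial assumptions, not a flat rate):

```python
def present_value(payment, annual_rate, years):
    """Value today of a single future payment, discounted at a flat
    annual rate (illustrative inputs only; real valuations use the
    plan's prescribed actuarial assumptions)."""
    return payment / (1.0 + annual_rate) ** years

pv = present_value(10000.0, 0.05, 10)
assert pv < 10000.0                            # discounting reduces the value
assert abs(pv - 10000.0 / 1.05 ** 10) < 1e-9
```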
Lattice QCD evaluation of baryon magnetic moment sum rules
International Nuclear Information System (INIS)
Leinweber, D.B.
1991-05-01
Magnetic moment combinations and sum rules are evaluated using recent results for the magnetic moments of octet baryons determined in a numerical simulation of quenched QCD. The model-independent and parameter-free results of the lattice calculations remove some of the confusion and contradiction surrounding past magnetic moment sum rule analyses. The lattice results reveal the underlying quark dynamics investigated by magnetic moment sum rules and indicate the origin of magnetic moment quenching for the non-strange quarks in Σ. In contrast to previous sum rule analyses, the magnetic moments of nonstrange quarks in Ξ are seen to be enhanced in the lattice results. In most cases, the spin-dependent dynamics and center-of-mass effects giving rise to baryon dependence of the quark moments are seen to be sufficient to violate the sum rules in agreement with experimental measurements. In turn, the sum rules are used to further examine the results of the lattice simulation. The Sachs sum rule suggests that quark loop contributions not included in present lattice calculations may play a key role in removing the discrepancies between lattice and experimental ratios of magnetic moments. This is supported by other sum rules sensitive to quark loop contributions. A measure of the isospin symmetry breaking in the effective quark moments due to quark loop contributions is in agreement with model expectations. (Author) 16 refs., 2 figs., 2 tabs
Luttinger and Hubbard sum rules: are they compatible?
International Nuclear Information System (INIS)
Matho, K.
1992-01-01
A so-called Hubbard sum rule determines the weight of a satellite in fermionic single-particle excitations with strong local repulsion (U→∞). Together with the Luttinger sum rule, this imposes two different energy scales on the remaining finite excitations. In the Hubbard chain, this has been identified microscopically as being due to a separation of spin and charge. (orig.)
Chain hexagonal cacti with the extremal eccentric distance sum.
Qu, Hui; Yu, Guihai
2014-01-01
Eccentric distance sum (EDS), which can predict biological and physical properties, is a topological index based on the eccentricity of a graph. In this paper we characterize the chain hexagonal cactus with the minimal and the maximal eccentric distance sum among all chain hexagonal cacti of length n, respectively. Moreover, we present exact formulas for EDS of two types of hexagonal cacti.
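The index itself is straightforward to compute from breadth-first-search distances: EDS(G) = Σ_v ecc(v)·D(v), where ecc(v) is the eccentricity of v and D(v) the sum of distances from v to all vertices. A small sketch, checked on a path with three vertices:

```python
from collections import deque

def bfs_dist(adj, src):
    """Unweighted shortest-path distances from src via BFS."""
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def eccentric_distance_sum(adj):
    """EDS(G) = sum over vertices v of ecc(v) * D(v): eccentricity
    times the total distance from v to all other vertices."""
    total = 0
    for v in adj:
        dist = bfs_dist(adj, v)
        total += max(dist.values()) * sum(dist.values())
    return total

# path 0-1-2: ecc = (2, 1, 2), D = (3, 2, 3), so EDS = 6 + 2 + 6 = 14
path3 = {0: [1], 1: [0, 2], 2: [1]}
assert eccentric_distance_sum(path3) == 14
```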
A sum rule description of giant resonances at finite temperature
International Nuclear Information System (INIS)
Meyer, J.; Quentin, P.; Brack, M.
1983-01-01
A generalization of the sum rule approach to collective motion at finite temperature is presented. The m_1 and m_{-1} sum rules for the isovector dipole and the isoscalar monopole electric modes have been evaluated with the modified SkM force for the 208Pb nucleus. The variation of the resulting giant resonance energies with temperature is discussed. (orig.)
Partial sums of arithmetical functions with absolutely convergent ...
Indian Academy of Sciences (India)
Keywords. Ramanujan expansions; average order; error terms; sum-of-divisors functions; Jordan's totient functions. 2010 Mathematics Subject Classification. 11N37, 11A25, 11K65. 1. Introduction. The theory of Ramanujan sums and Ramanujan expansions has emerged from the seminal article [10] of Ramanujan. In 1918 ...
28 CFR 523.16 - Lump sum awards.
2010-07-01
... satisfactory performance of an unusually hazardous assignment; (c) An act which protects the lives of staff or... TRANSFER COMPUTATION OF SENTENCE Extra Good Time § 523.16 Lump sum awards. Any staff member may recommend... award is calculated. No seniority is accrued for such awards. Staff may recommend lump sum awards of...
Analytic and algorithmic aspects of generalized harmonic sums and polylogarithms
International Nuclear Information System (INIS)
Ablinger, Jakob; Schneider, Carsten
2013-01-01
In recent three-loop calculations of massive Feynman integrals within Quantum Chromodynamics (QCD) and, e.g., in recent combinatorial problems the so-called generalized harmonic sums (in short S-sums) arise. They are characterized by rational (or real) numerator weights also different from ±1. In this article we explore the algorithmic and analytic properties of these sums systematically. We work out the Mellin and inverse Mellin transform which connects the sums under consideration with the associated Poincare iterated integrals, also called generalized harmonic polylogarithms. In this regard, we obtain explicit analytic continuations by means of asymptotic expansions of the S-sums which started to occur frequently in current QCD calculations. In addition, we derive algebraic and structural relations, like differentiation w.r.t. the external summation index and different multi-argument relations, for the compactification of S-sum expressions. Finally, we calculate algebraic relations for infinite S-sums, or equivalently for generalized harmonic polylogarithms evaluated at special values. The corresponding algorithms and relations are encoded in the computer algebra package HarmonicSums.
On contribution of instantons to nucleon sum rules
International Nuclear Information System (INIS)
Dorokhov, A.E.; Kochelev, N.I.
1989-01-01
The contribution of instantons to nucleon QCD sum rules is obtained. It is shown that this contribution does provide stabilization of the sum rules and leads to formation of a nucleon as a bound state of quarks in the instanton field. 17 refs.; 3 figs
Analytic and algorithmic aspects of generalized harmonic sums and polylogarithms
Energy Technology Data Exchange (ETDEWEB)
Ablinger, Jakob; Schneider, Carsten [Johannes Kepler Univ., Linz (Austria). Research Inst. for Symbolic Computation; Bluemlein, Johannes [Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany)
2013-01-15
In recent three-loop calculations of massive Feynman integrals within Quantum Chromodynamics (QCD) and, e.g., in recent combinatorial problems the so-called generalized harmonic sums (in short S-sums) arise. They are characterized by rational (or real) numerator weights also different from ±1. In this article we explore the algorithmic and analytic properties of these sums systematically. We work out the Mellin and inverse Mellin transform which connects the sums under consideration with the associated Poincaré iterated integrals, also called generalized harmonic polylogarithms. In this regard, we obtain explicit analytic continuations by means of asymptotic expansions of the S-sums which started to occur frequently in current QCD calculations. In addition, we derive algebraic and structural relations, like differentiation w.r.t. the external summation index and different multi-argument relations, for the compactification of S-sum expressions. Finally, we calculate algebraic relations for infinite S-sums, or equivalently for generalized harmonic polylogarithms evaluated at special values. The corresponding algorithms and relations are encoded in the computer algebra package HarmonicSums.
Sum formula for SL2 over imaginary quadratic number fields
Lokvenec-Guleska, H.
2004-01-01
The subject of this thesis is the generalization of the classical sum formula of Bruggeman and Kuznetsov to the upper half-space H3. The derivation of the preliminary sum formula involves computation of the inner product of two specially chosen Poincaré series in two different ways: the spectral
An electrophysiological signature of summed similarity in visual working memory
Van Vugt, Marieke K.; Sekuler, Robert; Wilson, Hugh R.; Kahana, Michael J.
Summed-similarity models of short-term item recognition posit that participants base their judgments of an item's prior occurrence on that item's summed similarity to the ensemble of items on the remembered list. We examined the neural predictions of these models in 3 short-term recognition memory
Faraday effect revisited: sum rules and convergence issues
DEFF Research Database (Denmark)
Cornean, Horia; Nenciu, Gheorghe
2010-01-01
This is the third paper of a series revisiting the Faraday effect. The question of the absolute convergence of the sums over the band indices entering the Verdet constant is considered. In general, sum rules and traces per unit volume play an important role in solid-state physics, and they give...
Semiempirical search for oxide superconductors based on bond valence sums
International Nuclear Information System (INIS)
Tanaka, S.; Fukushima, N.; Niu, H.; Ando, K.
1992-01-01
Relationships between crystal structures and electronic states of layered transition-metal oxides are analyzed in the light of bond valence sums. Correlations between the superconducting transition temperature T_c and the bond-valence-sum parameters are investigated for the high-T_c cuprate compounds. The possibility of making nonsuperconducting oxides superconducting is discussed. (orig.)
Efficient yellow beam generation by intracavity sum frequency ...
Indian Academy of Sciences (India)
2014-02-06
We present our studies on dual wavelength operation using a single Nd:YVO4 crystal and its intracavity sum frequency generation, considering the influence of the thermal lensing effect on the performance of the laser. A KTP crystal cut for type-II phase matching was used for intracavity sum frequency ...
Analytic and algorithmic aspects of generalized harmonic sums and polylogarithms
Energy Technology Data Exchange (ETDEWEB)
Ablinger, Jakob; Schneider, Carsten [Research Institute for Symbolic Computation (RISC), Johannes Kepler University, Altenbergerstraße 69, A-4040, Linz (Austria); Blümlein, Johannes [Deutsches Elektronen–Synchrotron, DESY, Platanenallee 6, D-15738 Zeuthen (Germany)
2013-08-15
In recent three-loop calculations of massive Feynman integrals within Quantum Chromodynamics (QCD) and, e.g., in recent combinatorial problems the so-called generalized harmonic sums (in short S-sums) arise. They are characterized by rational (or real) numerator weights also different from ±1. In this article we explore the algorithmic and analytic properties of these sums systematically. We work out the Mellin and inverse Mellin transform which connects the sums under consideration with the associated Poincaré iterated integrals, also called generalized harmonic polylogarithms. In this regard, we obtain explicit analytic continuations by means of asymptotic expansions of the S-sums which started to occur frequently in current QCD calculations. In addition, we derive algebraic and structural relations, like differentiation with respect to the external summation index and different multi-argument relations, for the compactification of S-sum expressions. Finally, we calculate algebraic relations for infinite S-sums, or equivalently for generalized harmonic polylogarithms evaluated at special values. The corresponding algorithms and relations are encoded in the computer algebra package HarmonicSums.
Volume sums of polar Blaschke–Minkowski homomorphisms
Indian Academy of Sciences (India)
In this article, we establish Minkowski and Aleksandrov-Fenchel type inequalities for the volume sum of polars of Blaschke-Minkowski homomorphisms. Keywords. Blaschke-Minkowski homomorphism; volume differences; volume sum; projection body operator. 2010 Mathematics Subject Classification. 52A40, 52A30.
Learning Preference Models from Data: On the Problem of Label Ranking and Its Variants
Hüllermeier, Eyke; Fürnkranz, Johannes
The term “preference learning” refers to the application of machine learning methods for inducing preference models from empirical data. In the recent literature, corresponding problems appear in various guises. After a brief overview of the field, this work focuses on a particular learning scenario called label ranking where the problem is to learn a mapping from instances to rankings over a finite number of labels. Our approach for learning such a ranking function, called ranking by pairwise comparison (RPC), first induces a binary preference relation from suitable training data, using a natural extension of pairwise classification. A ranking is then derived from this relation by means of a ranking procedure. This paper elaborates on a key advantage of such an approach, namely the fact that our learner can be adapted to different loss functions by using different ranking procedures on the same underlying order relations. In particular, the Spearman rank correlation is minimized by using a simple weighted voting procedure. Moreover, we discuss a loss function suitable for settings where candidate labels must be tested successively until a target label is found. In this context, we propose the idea of “empirical conditioning” of class probabilities. A related ranking procedure, called “ranking through iterated choice”, is investigated experimentally.
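The weighted voting step described above can be sketched as follows (with hypothetical learned pairwise preference probabilities; the abstract notes that sorting by these votes minimizes the Spearman rank correlation loss):

```python
def rank_by_voting(labels, pref):
    """RPC-style weighted voting: score(a) sums the probability P(a
    preferred to b) over all other labels b, then labels are sorted by
    score to produce the predicted ranking."""
    score = {a: sum(pref[(a, b)] for b in labels if b != a) for a in labels}
    return sorted(labels, key=score.get, reverse=True)

# hypothetical pairwise preferences P(row preferred to column)
pref = {("x", "y"): 0.9, ("y", "x"): 0.1,
        ("x", "z"): 0.8, ("z", "x"): 0.2,
        ("y", "z"): 0.6, ("z", "y"): 0.4}
# scores: x = 1.7, y = 0.7, z = 0.6
assert rank_by_voting(["x", "y", "z"], pref) == ["x", "y", "z"]
```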
Ranking scientific publications: the effect of nonlinearity
Yao, Liyang; Wei, Tian; Zeng, An; Fan, Ying; di, Zengru
2014-10-01
Ranking the significance of scientific publications is a long-standing challenge. The network-based analysis is a natural and common approach for evaluating the scientific credit of papers. Although the number of citations has been widely used as a metric to rank papers, recently some iterative processes such as the well-known PageRank algorithm have been applied to the citation networks to address this problem. In this paper, we introduce nonlinearity to the PageRank algorithm when aggregating resources from different nodes to further enhance the effect of important papers. The validation of our method is performed on the data of American Physical Society (APS) journals. The results indicate that the nonlinearity improves the performance of the PageRank algorithm in terms of ranking effectiveness, as well as robustness against malicious manipulations. Although the nonlinearity analysis is based on the PageRank algorithm, it can be easily extended to other iterative ranking algorithms and similar improvements are expected.
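A minimal sketch of the idea of adding nonlinearity to the aggregation step of PageRank follows. The exponent form `(s_j / k_j) ** theta` and the value of `theta` are assumptions for illustration; the paper's exact functional form may differ.

```python
import numpy as np

def nonlinear_pagerank(adj, theta=1.2, d=0.85, tol=1e-10, max_iter=1000):
    """Power iteration for a PageRank variant in which each node's
    per-link contribution is raised to an exponent theta before being
    aggregated (theta = 1 recovers standard PageRank).  Because the
    nonlinear step does not conserve mass, scores are renormalised
    on every iteration."""
    n = adj.shape[0]
    out_deg = adj.sum(axis=1)
    out_deg[out_deg == 0] = 1          # avoid division by zero at sinks
    s = np.ones(n) / n
    for _ in range(max_iter):
        contrib = (s / out_deg) ** theta          # nonlinear aggregation
        new = (1 - d) / n + d * adj.T @ contrib
        new /= new.sum()
        if np.abs(new - s).sum() < tol:
            break
        s = new
    return s

# Toy citation network: paper 2 is cited by both other papers.
A = np.array([[0, 0, 1],
              [1, 0, 1],
              [0, 0, 0]], dtype=float)
scores = nonlinear_pagerank(A)       # node 2 should rank first
```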
Neural Ranking Models with Weak Supervision
Dehghani, M.; Zamani, H.; Severyn, A.; Kamps, J.; Croft, W.B.
2017-01-01
Despite the impressive improvements achieved by unsupervised deep neural networks in computer vision and NLP tasks, such improvements have not yet been observed in ranking for information retrieval. The reason may be the complexity of the ranking problem, as it is not obvious how to learn from
A Rational Method for Ranking Engineering Programs.
Glower, Donald D.
1980-01-01
Compares two methods for ranking academic programs: the opinion poll vs. examination of the career successes of the program's alumni. For the latter, "Who's Who in Engineering" and levels of research funding provided data. Tables display the resulting data and compare rankings by the two methods for chemical engineering and civil engineering. (CS)
Lerot: An Online Learning to Rank Framework
Schuth, A.; Hofmann, K.; Whiteson, S.; de Rijke, M.
2013-01-01
Online learning to rank methods for IR allow retrieval systems to optimize their own performance directly from interactions with users via click feedback. In the software package Lerot, presented in this paper, we have bundled all ingredients needed for experimenting with online learning to rank for
Adaptive distributional extensions to DFR ranking
DEFF Research Database (Denmark)
Petersen, Casper; Simonsen, Jakob Grue; Järvelin, Kalervo
2016-01-01
-fitting distribution. We call this model Adaptive Distributional Ranking (ADR) because it adapts the ranking to the statistics of the specific dataset being processed each time. Experiments on TREC data show ADR to outperform DFR models (and their extensions) and be comparable in performance to a query likelihood...
Contests with rank-order spillovers
M.R. Baye (Michael); D. Kovenock (Dan); C.G. de Vries (Casper)
2012-01-01
This paper presents a unified framework for characterizing symmetric equilibrium in simultaneous move, two-player, rank-order contests with complete information, in which each player's strategy generates direct or indirect affine "spillover" effects that depend on the rank-order of her
Classification of rank 2 cluster varieties
DEFF Research Database (Denmark)
Mandel, Travis
We classify rank 2 cluster varieties (those whose corresponding skew-form has rank 2) according to the deformation type of a generic fiber U of their X-spaces, as defined by Fock and Goncharov. Our approach is based on the work of Gross, Hacking, and Keel for cluster varieties and log Calabi...
Using centrality to rank web snippets
Jijkoun, V.; de Rijke, M.; Peters, C.; Jijkoun, V.; Mandl, T.; Müller, H.; Oard, D.W.; Peñas, A.; Petras, V.; Santos, D.
2008-01-01
We describe our participation in the WebCLEF 2007 task, targeted at snippet retrieval from web data. Our system ranks snippets based on a simple similarity-based centrality, inspired by the web page ranking algorithms. We experimented with retrieval units (sentences and paragraphs) and with the
Mining Feedback in Ranking and Recommendation Systems
Zhuang, Ziming
2009-01-01
The amount of online information has grown exponentially over the past few decades, and users become more and more dependent on ranking and recommendation systems to address their information seeking needs. The advance in information technologies has enabled users to provide feedback on the utilities of the underlying ranking and recommendation…
Entity ranking using Wikipedia as a pivot
Kaptein, R.; Serdyukov, P.; de Vries, A.; Kamps, J.; Huang, X.J.; Jones, G.; Koudas, N.; Wu, X.; Collins-Thompson, K.
2010-01-01
In this paper we investigate the task of Entity Ranking on the Web. Searchers looking for entities are arguably better served by presenting a ranked list of entities directly, rather than a list of web pages with relevant but also potentially redundant information about these entities. Since
Rank 2 fusion rings are complete intersections
DEFF Research Database (Denmark)
Andersen, Troels Bak
We give a non-constructive proof that fusion rings attached to a simple complex Lie algebra of rank 2 are complete intersections.
A Ranking Method for Evaluating Constructed Responses
Attali, Yigal
2014-01-01
This article presents a comparative judgment approach for holistically scored constructed response tasks. In this approach, the grader rank orders (rather than rate) the quality of a small set of responses. A prior automated evaluation of responses guides both set formation and scaling of rankings. Sets are formed to have similar prior scores and…
Ranking Music Data by Relevance and Importance
DEFF Research Database (Denmark)
Ruxanda, Maria Magdalena; Nanopoulos, Alexandros; Jensen, Christian Søndergaard
2008-01-01
Due to the rapidly increasing availability of audio files on the Web, it is relevant to augment search engines with advanced audio search functionality. In this context, the ranking of the retrieved music is an important issue. This paper proposes a music ranking method capable of flexibly fusing...
Ranking of Unwarranted Variations in Healthcare Treatments
Moes, Herry; Brekelmans, Ruud; Hamers, Herbert; Hasaart, F.
2017-01-01
In this paper, we introduce a framework designed to identify and rank possible unwarranted variation of treatments in healthcare. The innovative aspect of this framework is a ranking procedure that aims to identify healthcare institutions where unwarranted variation is most severe, and diagnosis
The Rankings Game: Who's Playing Whom?
Burness, John F.
2008-01-01
This summer, Forbes magazine published its new rankings of "America's Best Colleges," implying that it had developed a methodology that would give the public the information that it needed to choose a college wisely. "U.S. News & World Report," which in 1983 published the first annual ranking, just announced its latest ratings last week--including…
Dynamic collective entity representations for entity ranking
Graus, D.; Tsagkias, M.; Weerkamp, W.; Meij, E.; de Rijke, M.
2016-01-01
Entity ranking, i.e., successfully positioning a relevant entity at the top of the ranking for a given query, is inherently difficult due to the potential mismatch between the entity's description in a knowledge base, and the way people refer to the entity when searching for it. To counter this
The sum of friends’ and lovers’ self-control scores predicts relationship quality
Vohs, K.D.; Finkenauer, C.; Baumeister, R.F.
2011-01-01
What combination of partners' trait self-control levels produces the best relationship outcomes? The authors tested three hypotheses-complementarity (large difference in trait self-control scores), similarity (small difference in self-control scores), and totality (large sum of self-control
Simulations of charge summing and threshold dispersion effects in Medipix3
International Nuclear Information System (INIS)
Pennicard, D.; Ballabriga, R.; Llopart, X.; Campbell, M.; Graafsma, H.
2011-01-01
A novel feature of the Medipix3 photon-counting pixel readout chip is inter-pixel communication. By summing together the signals from neighbouring pixels at a series of 'summing nodes', and assigning each hit to the node with the highest signal, the chip can compensate for charge-sharing effects. However, previous experimental tests have demonstrated that the node-to-node variation in the detector's response is very large. Using computer simulations, it is shown that this variation is due to threshold dispersion, which results in many hits being assigned to whichever summing node in the vicinity has the lowest threshold level. A reduction in threshold variation would attenuate but not solve this issue. A new charge summing and hit assignment process is proposed, where the signals in individual pixels are used to determine the hit location, and then signals from neighbouring pixels are summed to determine whether the total photon energy is above threshold. In simulation, this new mode accurately assigns each hit to the pixel with the highest pulse height without any losses or double counting. - Research highlights: → Medipix3 readout chip compensates charge sharing using inter-pixel communication. → In initial production run, the flat-field response is unexpectedly nonuniform. → This effect is reproduced in simulation, and is caused by threshold dispersion. → A new inter-pixel communication process is proposed. → Simulations demonstrate the new process should give much better uniformity.
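The proposed hit-assignment mode described above can be sketched in a few lines. The 2x2 neighbourhood, the charge values, and the threshold below are illustrative assumptions, not the chip's actual geometry or calibration.

```python
import numpy as np

def assign_hit(charge, threshold):
    """Sketch of the proposed mode: the hit is placed on the pixel with
    the highest individual pulse height, and the charge from a small
    neighbourhood around it is summed to test the total photon energy
    against the threshold.  Returns (row, col) or None."""
    r, c = np.unravel_index(np.argmax(charge), charge.shape)
    r0, c0 = max(r - 1, 0), max(c - 1, 0)
    total = charge[r0:r + 2, c0:c + 2].sum()   # neighbourhood charge sum
    return (r, c) if total >= threshold else None

# A photon whose charge of 10 (arbitrary units) is shared across pixels.
grid = np.array([[0.0, 0.0, 0.0],
                 [0.0, 4.0, 3.0],
                 [0.0, 2.0, 1.0]])
print(assign_hit(grid, threshold=8.0))   # charge sum 10 >= 8 -> (1, 1)
```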
QCD and power corrections to sum rules in deep-inelastic lepton-nucleon scattering
International Nuclear Information System (INIS)
Ravindran, V.; Neerven, W.L. van
2001-01-01
In this paper we study QCD and power corrections to sum rules which show up in deep-inelastic lepton-hadron scattering. Furthermore we will make a distinction between fundamental sum rules which can be derived from quantum field theory and those which are of a phenomenological origin. Using current algebra techniques the fundamental sum rules can be expressed into expectation values of (partially) conserved (axial-)vector currents sandwiched between hadronic states. These expectation values yield the quantum numbers of the corresponding hadron which are determined by the underlying flavour group SU(n) F . In this case one can show that there exist an intimate relation between the appearance of power and QCD corrections. The above features do not hold for phenomenological sum rules, hereafter called non-fundamental. They have no foundation in quantum field theory and they mostly depend on certain assumptions made for the structure functions like super-convergence relations or the parton model. Therefore only the fundamental sum rules provide us with a stringent test of QCD
Comparing classical and quantum PageRanks
Loke, T.; Tang, J. W.; Rodriguez, J.; Small, M.; Wang, J. B.
2017-01-01
Following recent developments in quantum PageRanking, we present a comparative analysis of discrete-time and continuous-time quantum-walk-based PageRank algorithms. Relative to classical PageRank and to different extents, the quantum measures better highlight secondary hubs and resolve ranking degeneracy among peripheral nodes for all networks we studied in this paper. For the discrete-time case, we investigated the periodic nature of the walker's probability distribution for a wide range of networks and found that the dominant period does not grow with the size of these networks. Based on this observation, we introduce a new quantum measure using the maximum probabilities of the associated walker during the first couple of periods. This is particularly important, since it leads to a quantum PageRanking scheme that is scalable with respect to network size.
Universal emergence of PageRank
Energy Technology Data Exchange (ETDEWEB)
Frahm, K M; Georgeot, B; Shepelyansky, D L, E-mail: frahm@irsamc.ups-tlse.fr, E-mail: georgeot@irsamc.ups-tlse.fr, E-mail: dima@irsamc.ups-tlse.fr [Laboratoire de Physique Theorique du CNRS, IRSAMC, Universite de Toulouse, UPS, 31062 Toulouse (France)
2011-11-18
The PageRank algorithm enables us to rank the nodes of a network through a specific eigenvector of the Google matrix, using a damping parameter α ∈ ]0, 1[. Using extensive numerical simulations of large web networks, with a special accent on British University networks, we determine numerically and analytically the universal features of the PageRank vector at its emergence when α → 1. The whole network can be divided into a core part and a group of invariant subspaces. For α → 1, PageRank converges to a universal power-law distribution on the invariant subspaces whose size distribution also follows a universal power law. The convergence of PageRank at α → 1 is controlled by eigenvalues of the core part of the Google matrix, which are extremely close to unity, leading to large relaxation times as, for example, in spin glasses. (paper)
Harmonic sums, polylogarithms, special numbers, and their generalizations
International Nuclear Information System (INIS)
Ablinger, Jakob
2013-04-01
In these introductory lectures we discuss classes of presently known nested sums, associated iterated integrals, and special constants which hierarchically appear in the evaluation of massless and massive Feynman diagrams at higher loops. These quantities are elements of stuffle and shuffle algebras implying algebraic relations being widely independent of the special quantities considered. They are supplemented by structural relations. The generalizations are given in terms of generalized harmonic sums, (generalized) cyclotomic sums, and sums containing in addition binomial and inverse-binomial weights. To all these quantities iterated integrals and special numbers are associated. We also discuss the analytic continuation of nested sums of different kind to complex values of the external summation bound N.
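The nested harmonic sums discussed above are commonly defined recursively, following the standard convention in the literature for integer indices:

```latex
S_{a_1,\dots,a_m}(N) \;=\; \sum_{k=1}^{N}
\frac{\bigl(\operatorname{sign}(a_1)\bigr)^{k}}{k^{|a_1|}}\,
S_{a_2,\dots,a_m}(k),
\qquad S_{\emptyset}(N) \equiv 1,\quad a_i \in \mathbb{Z}\setminus\{0\},
```

so that, for example, $S_1(N)$ is the ordinary harmonic number $H_N$.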
Evaluation of the multi-sums for large scale problems
International Nuclear Information System (INIS)
Bluemlein, J.; Hasselhuhn, A.; Schneider, C.
2012-02-01
A big class of Feynman integrals, in particular, the coefficients of their Laurent series expansion w.r.t. the dimension parameter ε can be transformed to multi-sums over hypergeometric terms and harmonic sums. In this article, we present a general summation method based on difference fields that simplifies these multi--sums by transforming them from inside to outside to representations in terms of indefinite nested sums and products. In particular, we present techniques that assist in the task to simplify huge expressions of such multi-sums in a completely automatic fashion. The ideas are illustrated on new calculations coming from 3-loop topologies of gluonic massive operator matrix elements containing two fermion lines, which contribute to the transition matrix elements in the variable flavor scheme. (orig.)
Ranking metrics in gene set enrichment analysis: do they matter?
Zyla, Joanna; Marczyk, Michal; Weiner, January; Polanska, Joanna
2017-05-12
There exist many methods for describing the complex relation between changes of gene expression in molecular pathways or gene ontologies under different experimental conditions. Among them, Gene Set Enrichment Analysis seems to be one of the most commonly used (over 10,000 citations). An important parameter, which could affect the final result, is the choice of a metric for the ranking of genes. Applying a default ranking metric may lead to poor results. In this work 28 benchmark data sets were used to evaluate the sensitivity and false positive rate of gene set analysis for 16 different ranking metrics including new proposals. Furthermore, the robustness of the chosen methods to sample size was tested. Using k-means clustering algorithm a group of four metrics with the highest performance in terms of overall sensitivity, overall false positive rate and computational load was established i.e. absolute value of Moderated Welch Test statistic, Minimum Significant Difference, absolute value of Signal-To-Noise ratio and Baumgartner-Weiss-Schindler test statistic. In case of false positive rate estimation, all selected ranking metrics were robust with respect to sample size. In case of sensitivity, the absolute value of Moderated Welch Test statistic and absolute value of Signal-To-Noise ratio gave stable results, while Baumgartner-Weiss-Schindler and Minimum Significant Difference showed better results for larger sample size. Finally, the Gene Set Enrichment Analysis method with all tested ranking metrics was parallelised and implemented in MATLAB, and is available at https://github.com/ZAEDPolSl/MrGSEA . Choosing a ranking metric in Gene Set Enrichment Analysis has critical impact on results of pathway enrichment analysis. The absolute value of Moderated Welch Test has the best overall sensitivity and Minimum Significant Difference has the best overall specificity of gene set analysis. When the number of non-normally distributed genes is high, using Baumgartner
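One of the four recommended metrics, the absolute signal-to-noise ratio, can be sketched as follows. The formula `|mean_a - mean_b| / (sd_a + sd_b)` is a common GSEA convention; the study's exact implementation details and the toy data below are illustrative.

```python
import numpy as np

def abs_signal_to_noise(group_a, group_b):
    """Absolute signal-to-noise ratio per gene, computed row-wise over
    genes x samples expression matrices for the two conditions."""
    a = np.asarray(group_a, dtype=float)
    b = np.asarray(group_b, dtype=float)
    num = np.abs(a.mean(axis=1) - b.mean(axis=1))
    den = a.std(axis=1, ddof=1) + b.std(axis=1, ddof=1)
    return num / den

rng = np.random.default_rng(1)
treated = rng.normal(0.0, 1.0, size=(100, 5))
control = rng.normal(0.0, 1.0, size=(100, 5))
treated[0] += 3.0                         # gene 0 is truly differential
snr = abs_signal_to_noise(treated, control)
print(snr[0] > np.median(snr))            # True: gene 0 ranks well above typical
```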
PageRank and rank-reversal dependence on the damping factor
Son, S.-W.; Christensen, C.; Grassberger, P.; Paczuski, M.
2012-12-01
PageRank (PR) is an algorithm originally developed by Google to evaluate the importance of web pages. Considering how deeply rooted Google's PR algorithm is to gathering relevant information or to the success of modern businesses, the question of rank stability and choice of the damping factor (a parameter in the algorithm) is clearly important. We investigate PR as a function of the damping factor d on a network obtained from a domain of the World Wide Web, finding that rank reversal happens frequently over a broad range of PR (and of d). We use three different correlation measures, Pearson, Spearman, and Kendall, to study rank reversal as d changes, and we show that the correlation of PR vectors drops rapidly as d changes from its frequently cited value, d0=0.85. Rank reversal is also observed by measuring the Spearman and Kendall rank correlation, which evaluate relative ranks rather than absolute PR. Rank reversal happens not only in directed networks containing rank sinks but also in a single strongly connected component, which by definition does not contain any sinks. We relate rank reversals to rank pockets and bottlenecks in the directed network structure. For the network studied, the relative rank is more stable by our measures around d=0.65 than at d=d0.
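The comparison of PR vectors at two damping factors via rank correlation can be sketched as below. The random test network is an assumption for illustration; it stands in for the WWW-domain network studied in the paper.

```python
import numpy as np

def pagerank(adj, d, tol=1e-12):
    """Standard PageRank power iteration with damping factor d."""
    n = adj.shape[0]
    out = adj.sum(axis=1)
    out[out == 0] = 1                  # treat sinks as self-normalised
    s = np.ones(n) / n
    while True:
        new = (1 - d) / n + d * adj.T @ (s / out)
        new /= new.sum()
        if np.abs(new - s).sum() < tol:
            return new
        s = new

def spearman(x, y):
    """Spearman rank correlation as the Pearson correlation of the two
    rank vectors (no tie handling -- illustrative only)."""
    rx = np.argsort(np.argsort(x)).astype(float)
    ry = np.argsort(np.argsort(y)).astype(float)
    return float(np.corrcoef(rx, ry)[0, 1])

rng = np.random.default_rng(0)
A = (rng.random((30, 30)) < 0.1).astype(float)   # assumed toy network
rho = spearman(pagerank(A, 0.85), pagerank(A, 0.65))
print(round(rho, 3))   # < 1.0 whenever ranks reverse between the two d values
```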
A tilting approach to ranking influence
Genton, Marc G.
2014-12-01
We suggest a new approach, which is applicable for general statistics computed from random samples of univariate or vector-valued or functional data, to assessing the influence that individual data have on the value of a statistic, and to ranking the data in terms of that influence. Our method is based on, first, perturbing the value of the statistic by ‘tilting’, or reweighting, each data value, where the total amount of tilt is constrained to be the least possible, subject to achieving a given small perturbation of the statistic, and, then, taking the ranking of the influence of data values to be that which corresponds to ranking the changes in data weights. It is shown, both theoretically and numerically, that this ranking does not depend on the size of the perturbation, provided that the perturbation is sufficiently small. That simple result leads directly to an elegant geometric interpretation of the ranks; they are the ranks of the lengths of projections of the weights onto a ‘line’ determined by the first empirical principal component function in a generalized measure of covariance. To illustrate the generality of the method we introduce and explore it in the case of functional data, where (for example) it leads to generalized boxplots. The method has the advantage of providing an interpretable ranking that depends on the statistic under consideration. For example, the ranking of data, in terms of their influence on the value of a statistic, is different for a measure of location and for a measure of scale. This is as it should be; a ranking of data in terms of their influence should depend on the manner in which the data are used. Additionally, the ranking recognizes, rather than ignores, sign, and in particular can identify left- and right-hand ‘tails’ of the distribution of a random function or vector.
A Ranking Approach to Genomic Selection.
Blondel, Mathieu; Onogi, Akio; Iwata, Hiroyoshi; Ueda, Naonori
2015-01-01
Genomic selection (GS) is a recent selective breeding method which uses predictive models based on whole-genome molecular markers. Until now, existing studies formulated GS as the problem of modeling an individual's breeding value for a particular trait of interest, i.e., as a regression problem. To assess predictive accuracy of the model, the Pearson correlation between observed and predicted trait values was used. In this paper, we propose to formulate GS as the problem of ranking individuals according to their breeding value. Our proposed framework allows us to employ machine learning methods for ranking which had previously not been considered in the GS literature. To assess ranking accuracy of a model, we introduce a new measure originating from the information retrieval literature called normalized discounted cumulative gain (NDCG). NDCG rewards more strongly models which assign a high rank to individuals with high breeding value. Therefore, NDCG reflects a prerequisite objective in selective breeding: accurate selection of individuals with high breeding value. We conducted a comparison of 10 existing regression methods and 3 new ranking methods on 6 datasets, consisting of 4 plant species and 25 traits. Our experimental results suggest that tree-based ensemble methods including McRank, Random Forests and Gradient Boosting Regression Trees achieve excellent ranking accuracy. RKHS regression and RankSVM also achieve good accuracy when used with an RBF kernel. Traditional regression methods such as Bayesian lasso, wBSR and BayesC were found less suitable for ranking. Pearson correlation was found to correlate poorly with NDCG. Our study suggests two important messages. First, ranking methods are a promising research direction in GS. Second, NDCG can be a useful evaluation measure for GS.
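The NDCG measure described above can be sketched as follows. Using the raw trait values as gains and a `1/log2(rank+1)` discount is one common convention; the paper's exact gain and discount choices may differ.

```python
import numpy as np

def ndcg(true_values, predicted_scores, k=None):
    """Normalized discounted cumulative gain: rewards models that place
    individuals with high true breeding value near the top of the
    predicted ranking.  Returns a value in (0, 1], with 1 meaning the
    predicted order matches the ideal order."""
    true_values = np.asarray(true_values, dtype=float)
    order = np.argsort(predicted_scores)[::-1]        # predicted ranking
    if k is None:
        k = len(true_values)
    discounts = 1.0 / np.log2(np.arange(2, k + 2))    # 1 / log2(rank + 1)
    dcg = np.sum(true_values[order][:k] * discounts)
    ideal = np.sum(np.sort(true_values)[::-1][:k] * discounts)
    return dcg / ideal

# A perfect ranking scores 1; a partly wrong one scores less.
truth = [3.0, 1.0, 2.0]
print(ndcg(truth, [0.9, 0.1, 0.5]))         # 1.0 (predicted order matches truth)
print(ndcg(truth, [0.1, 0.9, 0.5]) < 1.0)   # True
```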
Contextual effects on the perceived health benefits of exercise: the exercise rank hypothesis.
Maltby, John; Wood, Alex M; Vlaev, Ivo; Taylor, Michael J; Brown, Gordon D A
2012-12-01
Many accounts of social influences on exercise participation describe how people compare their behaviors to those of others. We develop and test a novel hypothesis, the exercise rank hypothesis, of how this comparison can occur. The exercise rank hypothesis, derived from evolutionary theory and the decision by sampling model of judgment, suggests that individuals' perceptions of the health benefits of exercise are influenced by how individuals believe the amount of exercise ranks in comparison with other people's amounts of exercise. Study 1 demonstrated that individuals' perceptions of the health benefits of their own current exercise amounts were as predicted by the exercise rank hypothesis. Study 2 demonstrated that the perceptions of the health benefits of an amount of exercise can be manipulated by experimentally changing the ranked position of the amount within a comparison context. The discussion focuses on how social norm-based interventions could benefit from using rank information.
African Journals Online (AJOL)
2010-12-04
Maximising information recovery from rank-order codes
Sen, B.; Furber, S.
2007-04-01
The central nervous system encodes information in sequences of asynchronously generated voltage spikes, but the precise details of this encoding are not well understood. Thorpe proposed rank-order codes as an explanation of the observed speed of information processing in the human visual system. The work described in this paper is inspired by the performance of SpikeNET, a biologically inspired neural architecture using rank-order codes for information processing, and is based on the retinal model developed by VanRullen and Thorpe. This model mimics retinal information processing by passing an input image through a bank of Difference of Gaussian (DoG) filters and then encoding the resulting coefficients in rank-order. To test the effectiveness of this encoding in capturing the information content of an image, the rank-order representation is decoded to reconstruct an image that can be compared with the original. The reconstruction uses a look-up table to infer the filter coefficients from their rank in the encoded image. Since the DoG filters are approximately orthogonal functions, they are treated as their own inverses in the reconstruction process. We obtained a quantitative measure of the perceptually important information retained in the reconstructed image relative to the original using a slightly modified version of an objective metric proposed by Petrovic. It is observed that around 75% of the perceptually important information is retained in the reconstruction. In the present work we reconstruct the input using a pseudo-inverse of the DoG filter-bank with the aim of improving the reconstruction and thereby extracting more information from the rank-order encoded stimulus. We observe that there is an increase of 10 - 15% in the information retrieved from a reconstructed stimulus as a result of inverting the filter-bank.
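The encode/decode cycle described above can be sketched in miniature. The geometric look-up table below is an assumption for illustration; the retinal model derives its table from image statistics, and real DoG filter coefficients replace the toy vector.

```python
import numpy as np

def rank_order_encode(coeffs):
    """Transmit only the order of coefficient magnitudes (largest first),
    discarding their actual values -- the essence of a rank-order code."""
    return np.argsort(np.abs(coeffs))[::-1]

def rank_order_decode(order, lut):
    """Reconstruct coefficient magnitudes from their ranks using a fixed
    look-up table of typical values (signs are lost in this sketch)."""
    rec = np.zeros(len(order))
    rec[order] = lut[:len(order)]
    return rec

coeffs = np.array([0.1, -2.0, 0.7, 1.5])
order = rank_order_encode(coeffs)             # [1, 3, 2, 0]
lut = 2.0 * 0.7 ** np.arange(len(coeffs))     # assumed magnitude table
print(rank_order_decode(order, lut))          # rank order of inputs preserved
```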
Lim, Kim-Hui,; Har, Wai-Mun
2008-01-01
The lack of an academic and thinking culture is growing more worrying and has become a major challenge for the academic community in the 21st century. Several trends that move academia from "cogito ergo sum" to "consumo ergo sum" are leading us toward "the end of academia". Those trends are: (1) the death of dialectic;…
The effect of uncertainties in distance-based ranking methods for multi-criteria decision making
Jaini, Nor I.; Utyuzhnikov, Sergei V.
2017-08-01
Data in multi-criteria decision making are often imprecise and changeable. Therefore, it is important to carry out a sensitivity analysis for the multi-criteria decision making problem. The paper aims to present a sensitivity analysis for some ranking techniques based on distance measures in multi-criteria decision making. Two types of uncertainties are considered for the sensitivity analysis test. The first uncertainty is related to the input data, while the second concerns the Decision Maker's preferences (weights). The ranking techniques considered in this study are TOPSIS, the relative distance and trade-off ranking methods. TOPSIS and the relative distance method measure a distance from an alternative to the ideal and anti-ideal solutions. In turn, the trade-off ranking calculates a distance of an alternative to the extreme solutions and other alternatives. Several test cases are considered to study the performance of each ranking technique under both types of uncertainties.
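Of the three techniques, TOPSIS is the most standard and easiest to sketch. The following minimal implementation is illustrative only: the scores and weights are invented, and real TOPSIS variants differ in normalization details.

```python
import numpy as np

def topsis(matrix, weights, benefit):
    """Minimal TOPSIS sketch: rank alternatives (rows) on criteria (columns).
    benefit[j] is True if criterion j is to be maximized, False for a cost."""
    M = np.asarray(matrix, dtype=float)
    # vector-normalize each column, then apply criterion weights
    V = M / np.linalg.norm(M, axis=0) * np.asarray(weights, dtype=float)
    # ideal and anti-ideal solutions, per criterion direction
    ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
    anti = np.where(benefit, V.min(axis=0), V.max(axis=0))
    d_pos = np.linalg.norm(V - ideal, axis=1)  # distance to ideal
    d_neg = np.linalg.norm(V - anti, axis=1)   # distance to anti-ideal
    closeness = d_neg / (d_pos + d_neg)
    return np.argsort(-closeness)  # indices of alternatives, best first

# three alternatives scored on two benefit criteria and one cost criterion
scores = [[7, 9, 9], [8, 7, 8], [9, 6, 8]]
ranking = topsis(scores, weights=[0.5, 0.3, 0.2], benefit=[True, True, False])
```

A sensitivity test of the kind the paper describes would perturb `scores` or `weights` and observe whether `ranking` changes.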
Bernardi, Richard A.; Zamojcin, Kimberly A.; Delande, Taylor L.
2016-01-01
This research tests whether Holderness Jr., D. K., Myers, N., Summers, S. L., & Wood, D. A. [(2014). "Accounting education research: Ranking institutions and individual scholars." "Issues in Accounting Education," 29(1), 87-115] accounting-education rankings are sensitive to a change in the set of journals used. It provides…
The Influence of Wealth, Transparency, and Democracy on the Number of Top Ranked Universities
Jabnoun, Naceur
2015-01-01
Purpose: This paper aims to explore the influence of wealth, transparency and democracy on the number of universities per million people ranked among the top 300 and 500. The highly ranked universities in the world tend to be concentrated in a few countries. Design/Methodology/Approach: ANOVA was used to test the differences between the two groups…
Bastedo, Michael N.; Bowman, Nicholas A.
2011-01-01
Higher education administrators believe that revenues are linked to college rankings and act accordingly, particularly those at research universities. Although rankings are clearly influential for many schools and colleges, this fundamental assumption has yet to be tested empirically. Drawing on data from multiple resource providers in higher…
Ranking Adverse Drug Reactions With Crowdsourcing
Gottlieb, Assaf; Hoehndorf, Robert; Dumontier, Michel; Altman, Russ B
2015-03-23
Background: There is no publicly available resource that provides the relative severity of adverse drug reactions (ADRs). Such a resource would be useful for several applications, including assessment of the risks and benefits of drugs and improvement of patient-centered care. It could also be used to triage predictions of drug adverse events. Objective: The intent of the study was to rank ADRs according to severity. Methods: We used Internet-based crowdsourcing to rank ADRs according to severity. We assigned 126,512 pairwise comparisons of ADRs to 2589 Amazon Mechanical Turk workers and used these comparisons to rank order 2929 ADRs. Results: There is good correlation (rho=.53) between the mortality rates associated with ADRs and their rank. Our ranking highlights severe drug-ADR predictions, such as cardiovascular ADRs for raloxifene and celecoxib. It also triages genes associated with severe ADRs such as epidermal growth-factor receptor (EGFR), associated with glioblastoma multiforme, and SCN1A, associated with epilepsy. Conclusions: ADR ranking lays a first stepping stone in personalized drug risk assessment. Ranking of ADRs using crowdsourcing may have useful clinical and financial implications, and should be further investigated in the context of health care decision making.
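The core computational step, turning pairwise severity judgements into a rank order, can be sketched with a simple win-fraction score. The paper's actual aggregation model is not specified here, so this is only a stand-in, with invented judgement data.

```python
from collections import defaultdict

def rank_from_pairwise(comparisons):
    """Rank items from pairwise judgements given as (winner, loser) pairs.
    Score = fraction of comparisons an item wins (a stand-in for more
    principled models such as Bradley-Terry)."""
    wins, total = defaultdict(int), defaultdict(int)
    for winner, loser in comparisons:
        wins[winner] += 1
        total[winner] += 1
        total[loser] += 1
    score = {item: wins[item] / total[item] for item in total}
    return sorted(score, key=score.get, reverse=True)

# hypothetical crowd judgements: left item judged more severe than right
judgements = [("death", "nausea"), ("death", "headache"),
              ("nausea", "headache"), ("death", "nausea")]
ranking = rank_from_pairwise(judgements)  # most severe first
```

With 126,512 comparisons over 2929 ADRs, each item is only compared to a small sample of the others, which is exactly the regime where such pairwise aggregation is useful.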
Correlation of Cognitive Abilities Level, Age and Ranks in Judo
Directory of Open Access Journals (Sweden)
Kraček Stanislav
2016-11-01
Full Text Available The aim of this paper is to ascertain the correlation between selected cognitive abilities, age and performance of judokas according to ranking. The study group consisted of judokas in the age group 18 ± 2.4 years. The Stroop Color-Word Test - Victoria Version (VST) was the instrument used to determine the level of cognitive abilities. The data obtained were analysed using the Pearson correlation (r) test. The results of the study show an indirect (negative) correlation (p < 0.01) between age and all three categories of the Stroop test: the higher the age, the lower the time (i.e., the better the performance) of the probands in the Stroop test. There was no statistically significant correlation between performance in the categories of the Stroop test and rankings. The outcomes show that the level of the selected cognitive abilities depends on age, but does not affect the ranking of the judokas.
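The study's main statistic is Pearson's r. A minimal sketch with invented data that reproduces the reported direction of the effect (an indirect, i.e. negative, correlation between age and Stroop completion time):

```python
import numpy as np

def pearson_r(x, y):
    """Pearson product-moment correlation coefficient."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    xm, ym = x - x.mean(), y - y.mean()
    return float((xm * ym).sum() / np.sqrt((xm**2).sum() * (ym**2).sum()))

# hypothetical data: older probands finish the Stroop task faster here
age = [16, 17, 18, 19, 20]
time = [62, 60, 55, 52, 50]
r = pearson_r(age, time)
assert r < 0  # indirect correlation, as in the study
```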
Zero-sum bias: perceived competition despite unlimited resources
Directory of Open Access Journals (Sweden)
Daniel V Meegan
2010-11-01
Full Text Available Zero-sum bias describes intuitively judging a situation to be zero-sum (i.e., resources gained by one party are matched by corresponding losses to another party) when it is actually non-zero-sum. The experimental participants were students at a university where students’ grades are determined by how the quality of their work compares to a predetermined standard of quality rather than to the quality of the work produced by other students. This creates a non-zero-sum situation in which high grades are an unlimited resource. In three experiments, participants were shown the grade distribution after a majority of the students in a course had completed an assigned presentation, and asked to predict the grade of the next presenter. When many high grades had already been given, there was a corresponding increase in low grade predictions. This suggests a zero-sum bias, in which people perceive a competition for a limited resource despite unlimited resource availability. Interestingly, when many low grades had already been given, there was not a corresponding increase in high grade predictions. This suggests that a zero-sum heuristic is only applied in response to the allocation of desirable resources. A plausible explanation for the findings is that a zero-sum heuristic evolved as a cognitive adaptation to enable successful intra-group competition for limited resources. Implications for understanding inter-group interaction are also discussed.
Harmonic sums and polylogarithms generated by cyclotomic polynomials
Energy Technology Data Exchange (ETDEWEB)
Ablinger, Jakob; Schneider, Carsten [Johannes Kepler Univ., Linz (Austria). Research Inst. for Symbolic Computation; Bluemlein, Johannes [Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany)
2011-05-15
The computation of Feynman integrals in massive higher order perturbative calculations in renormalizable Quantum Field Theories requires extensions of multiply nested harmonic sums, which can be generated as real representations by Mellin transforms of Poincare-iterated integrals including denominators of higher cyclotomic polynomials. We derive the cyclotomic harmonic polylogarithms and harmonic sums and study their algebraic and structural relations. The analytic continuation of cyclotomic harmonic sums to complex values of N is performed using analytic representations. We also consider special values of the cyclotomic harmonic polylogarithms at argument x=1, resp., for the cyclotomic harmonic sums at N → ∞, which are related to colored multiple zeta values, deriving various relations among them, based on the stuffle and shuffle algebras and three multiple argument relations. We also consider infinite generalized nested harmonic sums at roots of unity which are related to the infinite cyclotomic harmonic sums. Basis representations are derived for weight w=1,2 sums up to cyclotomy l=20. (orig.)
RankExplorer: Visualization of Ranking Changes in Large Time Series Data.
Shi, Conglei; Cui, Weiwei; Liu, Shixia; Xu, Panpan; Chen, Wei; Qu, Huamin
2012-12-01
For many applications involving time series data, people are often interested in the changes of item values over time as well as their ranking changes. For example, people search many words via search engines like Google and Bing every day. Analysts are interested in both the absolute searching number for each word as well as their relative rankings. Both sets of statistics may change over time. For very large time series data with thousands of items, how to visually present ranking changes is an interesting challenge. In this paper, we propose RankExplorer, a novel visualization method based on ThemeRiver to reveal the ranking changes. Our method consists of four major components: 1) a segmentation method which partitions a large set of time series curves into a manageable number of ranking categories; 2) an extended ThemeRiver view with embedded color bars and changing glyphs to show the evolution of aggregation values related to each ranking category over time as well as the content changes in each ranking category; 3) a trend curve to show the degree of ranking changes over time; 4) rich user interactions to support interactive exploration of ranking changes. We have applied our method to some real time series data and the case studies demonstrate that our method can reveal the underlying patterns related to ranking changes which might otherwise be obscured in traditional visualizations.
Augmenting the Deliberative Method for Ranking Risks.
Susel, Irving; Lasley, Trace; Montezemolo, Mark; Piper, Joel
2016-01-01
The Department of Homeland Security (DHS) characterized and prioritized the physical cross-border threats and hazards to the nation stemming from terrorism, market-driven illicit flows of people and goods (illegal immigration, narcotics, funds, counterfeits, and weaponry), and other nonmarket concerns (movement of diseases, pests, and invasive species). These threats and hazards pose a wide diversity of consequences with very different combinations of magnitudes and likelihoods, making it very challenging to prioritize them. This article presents the approach that was used at DHS to arrive at a consensus regarding the threats and hazards that stand out from the rest based on the overall risk they pose. Due to time constraints for the decision analysis, it was not feasible to apply multiattribute methodologies like multiattribute utility theory or the analytic hierarchy process. Using a holistic approach was considered, such as the deliberative method for ranking risks first published in this journal. However, an ordinal ranking alone does not indicate relative or absolute magnitude differences among the risks. Therefore, the use of the deliberative method for ranking risks is not sufficient for deciding whether there is a material difference between the top-ranked and bottom-ranked risks, let alone deciding what the stand-out risks are. To address this limitation of ordinal rankings, the deliberative method for ranking risks was augmented by adding an additional step to transform the ordinal ranking into a ratio scale ranking. This additional step enabled the selection of stand-out risks to help prioritize further analysis. © 2015 Society for Risk Analysis.
Measurement sum theory and application - Application to low level measurements
International Nuclear Information System (INIS)
Puydarrieux, S.; Bruel, V.; Rivier, C.; Crozet, M.; Vivier, A.; Manificat, G.; Thaurel, B.; Mokili, M.; Philippot, B.; Bohaud, E.
2015-09-01
In laboratories, most of the Total Sum methods implemented today use substitution or censure methods for nonsignificant or negative values, and thus create biases which can sometimes be quite large. They are usually positive, and generate, for example, becquerel (Bq) counting or 'administrative' quantities of materials (= 'virtual'), thus artificially falsifying the records kept by the laboratories under regulatory requirements (environment release records, waste records, etc.). This document suggests a methodology which will enable the user to avoid such biases. It is based on the following two fundamental rules: - The Total Sum of measurement values must be established based on all the individual measurement values, even those considered non-significant including the negative values. Any modification of these values, under the pretext that they are not significant, will inevitably lead to biases in the accumulated result and falsify the evaluation of its uncertainty. - In Total Sum operations, the decision thresholds are arrived at in a similar way to the approach used for uncertainties. The document deals with four essential aspects of the notion of 'measurement Total Sums': - The expression of results and associated uncertainties close to Decision Thresholds, and Detection or Quantification Limits, - The Total Sum of these measurements: sum or mean, - The calculation of the uncertainties associated with the Total Sums, - Result presentation (particularly when preparing balance sheets or reports, etc.) Several case studies arising from different situations are used to illustrate the methodology: environmental monitoring reports, release reports, and chemical impurity Total Sums for the qualification of a finished product. The special case of material balances, in which the measurements are usually all significant and correlated (the covariance term cannot then be ignored) will be the subject of a future second document. This
Communities in Large Networks: Identification and Ranking
DEFF Research Database (Denmark)
Olsen, Martin
2008-01-01
We show that the problem of deciding whether a non-trivial community exists is NP-complete. Nevertheless, experiments show that a very simple greedy approach can identify members of a community in the Danish part of the web graph with time complexity only dependent on the size of the found community and its immediate surroundings. The members are ranked with a “local” variant of the PageRank algorithm. Results are reported from successful experiments on identifying and ranking Danish Computer Science sites and Danish Chess pages using only a few representatives.
2016-01-01
A mere hyperbolic law, like the Zipf law power function, is often inadequate to describe rank-size relationships. An alternative theoretical distribution is proposed, based on theoretical physics arguments, starting from the Yule-Simon distribution. A modeling is proposed leading to a universal form. A theoretical suggestion for the “best (or optimal) distribution” is provided through an entropy argument. The ranking of areas by the number of cities in various countries, and some sports competition rankings, serve as the present illustrations.
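As a toy illustration of the rank-size relationships discussed above, the exponent of a hyperbolic (Zipf-like) law can be estimated by a least-squares fit in log-log coordinates. This sketch is not the paper's Yule-Simon-based model; it only shows the baseline power-law fit such models improve upon.

```python
import numpy as np

def zipf_exponent(sizes):
    """Estimate alpha in the rank-size law s(r) ~ C * r**(-alpha)
    by least squares on log(size) vs. log(rank)."""
    s = np.sort(np.asarray(sizes, float))[::-1]  # rank 1 = largest item
    r = np.arange(1, len(s) + 1)
    slope, _intercept = np.polyfit(np.log(r), np.log(s), 1)
    return -slope

# exact Zipf data s(r) = 1000 / r should recover alpha = 1
sizes = [1000 / r for r in range(1, 51)]
alpha = zipf_exponent(sizes)
```

On real rank-size data (city counts, sports rankings), systematic curvature of the log-log plot around this fit is precisely the inadequacy the abstract refers to.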
Light cone sum rules for single-pion electroproduction
International Nuclear Information System (INIS)
Mallik, S.
1978-01-01
Light cone dispersion sum rules (of low energy and superconvergence types) are derived for nucleon matrix elements of the commutator involving electromagnetic and divergence of axial vector currents. The superconvergence type sum rules in the fixed mass limit are rewritten without requiring the knowledge of Regge subtractions. The retarded scaling functions occurring in these sum rules are evaluated within the framework of quark light cone algebra of currents. Besides a general consistency check of the framework underlying the derivation, the author infers, on the basis of crude evaluation of scaling functions, an upper limit of 100 MeV for the bare mass of nonstrange quarks. (Auth.)
Parity of Θ+(1540) from QCD sum rules
International Nuclear Information System (INIS)
Lee, Su Houng; Kim, Hungchong; Kwon, Youngshin
2005-01-01
The QCD sum rule for the pentaquark Θ + , first analyzed by Sugiyama, Doi and Oka, is reanalyzed with a phenomenological side that explicitly includes the contribution from the two-particle reducible kaon-nucleon intermediate state. The magnitude for the overlap of the Θ + interpolating current with the kaon-nucleon state is obtained by using soft-kaon theorem and a separate sum rule for the ground state nucleon with the pentaquark nucleon interpolating current. It is found that the K-N intermediate state constitutes only 10% of the sum rule so that the original claim that the parity of Θ + is negative remains valid
The Eccentric-distance Sum of Some Graphs
P, Padmapriya; Mathad, Veena
2017-01-01
Let $G = (V,E)$ be a simple connected graph. The eccentric-distance sum of $G$ is defined as $\xi^{ds}(G) = \sum_{\{u,v\}\subseteq V(G)} [e(u)+e(v)]\, d(u,v)$, where $e(u)$ is the eccentricity of the vertex $u$ in $G$ and $d(u,v)$ is the distance between $u$ and $v$. In this paper, we establish formulae to calculate the eccentric-distance sum for some graphs, namely wheel, star, broom, lollipop, double star, friendship, multi-star graph and the join of $P_{n-2}$ and $P_2$.
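The defining formula lends itself to a direct computation via breadth-first search; a small sketch (graph representation and function names are my own):

```python
from collections import deque

def bfs_dists(adj, src):
    """Single-source shortest-path distances in an unweighted graph."""
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def eccentric_distance_sum(adj):
    """xi^ds(G) = sum over unordered pairs {u,v} of [e(u)+e(v)] * d(u,v)."""
    nodes = list(adj)
    d = {u: bfs_dists(adj, u) for u in nodes}
    ecc = {u: max(d[u].values()) for u in nodes}  # eccentricity e(u)
    total = 0
    for i, u in enumerate(nodes):
        for v in nodes[i + 1:]:
            total += (ecc[u] + ecc[v]) * d[u][v]
    return total

# star K_{1,3}: centre 0 joined to leaves 1, 2, 3
star = {0: [1, 2, 3], 1: [0], 2: [0], 3: [0]}
```

For the star above, the centre has eccentricity 1 and each leaf 2, giving three centre-leaf pairs contributing 3 each and three leaf-leaf pairs contributing 8 each, so `eccentric_distance_sum(star)` is 33; closed-form formulae of the kind the paper derives avoid the quadratic pair loop.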
Moessbauer sum rules for use with synchrotron sources
International Nuclear Information System (INIS)
Lipkin, Harry J.
1999-01-01
The availability of tunable synchrotron radiation sources with millivolt resolution has opened new prospects for exploring dynamics of complex systems with Moessbauer spectroscopy. Early Moessbauer treatments and moment sum rules are extended to treat inelastic excitations measured in synchrotron experiments, with emphasis on the unique new conditions absent in neutron scattering and arising in resonance scattering: prompt absorption, delayed emission, recoil-free transitions and coherent forward scattering. The first moment sum rule normalizes the inelastic spectrum. New sum rules obtained for higher moments include the third moment, proportional to the second derivative of the potential acting on the Moessbauer nucleus and independent of temperature in the harmonic approximation.
Derivation of sum rules for quark and baryon fields
International Nuclear Information System (INIS)
Bongardt, K.
1978-01-01
In a way analogous to the Weinberg sum rules, two spectral-function sum rules for quark and baryon fields are derived by means of the concept of lightlike charges. The baryon sum rules are valid for the case of SU(3) as well as for SU(4), and the one-particle approximation yields a linear mass relation. This relation is not in disagreement with the normal linear GMO formula for the baryons. The calculated masses of the first resonance states agree very well with the experimental data.
Adler-Weisberger sum rule for W_LW_L → W_LW_L scattering
International Nuclear Information System (INIS)
Pham, T.N.
1991-01-01
We analyse the Adler-Weisberger sum rule for W_LW_L → W_LW_L scattering. We find that at some energy, the W_LW_L total cross section must be large to saturate the sum rule. Measurements at future colliders would be needed to check the sum rule and to obtain the decay rates Γ(H → W_LW_L, Z_LZ_L), which would be modified by the existence of a P-wave vector meson resonance in the standard model with a strongly interacting Higgs sector or in technicolour models. (orig.)
A toolbox for Harmonic Sums and their analytic continuations
Energy Technology Data Exchange (ETDEWEB)
Ablinger, Jakob; Schneider, Carsten [RISC, J. Kepler University, Linz (Austria); Bluemlein, Johannes [DESY, Zeuthen (Germany)
2010-07-01
The package HarmonicSums, implemented in the computer algebra system Mathematica, is presented. It supports higher-loop calculations in QCD and QED to represent single-scale quantities like anomalous dimensions and Wilson coefficients. The package allows one to reduce general harmonic sums using their algebraic and structural relations. We provide a general framework for these reductions and the explicit representations up to weight w=8. For use in experimental analyses we also provide an analytic formalism to continue the harmonic sums from their integer arguments into the complex plane, which includes their recursions and asymptotic representations. The main ideas are illustrated by specific examples.
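HarmonicSums itself is a Mathematica package; the objects it manipulates can, however, be illustrated in any language. The sketch below computes (nested) harmonic sums exactly with rationals and checks one of the algebraic (quasi-shuffle) relations the package exploits for its reductions:

```python
from fractions import Fraction

def harmonic_sum(a, N):
    """Single harmonic sum S_a(N) = sum_{k=1}^{N} sign(a)^k / k^|a|."""
    sgn = -1 if a < 0 else 1
    return sum(Fraction(sgn**k, k**abs(a)) for k in range(1, N + 1))

def nested_harmonic_sum(indices, N):
    """Nested sum S_{a1,a2,...}(N) = sum_{k=1}^{N} sign(a1)^k / k^|a1|
    * S_{a2,...}(k), with the empty sum S_{}(k) = 1."""
    if not indices:
        return Fraction(1)
    a, rest = indices[0], indices[1:]
    sgn = -1 if a < 0 else 1
    return sum(Fraction(sgn**k, k**abs(a)) * nested_harmonic_sum(rest, k)
               for k in range(1, N + 1))

# S_1(4) is the ordinary harmonic number H_4 = 25/12
assert harmonic_sum(1, 4) == Fraction(25, 12)
# algebraic relation used in such reductions: S_1(N)^2 = 2 S_{1,1}(N) - S_2(N)
N = 6
assert harmonic_sum(1, N)**2 == 2 * nested_harmonic_sum((1, 1), N) - harmonic_sum(2, N)
```

Relations of this kind let a weight-w basis represent all products of lower-weight sums, which is the reduction the abstract describes.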
A folk-psychological ranking of personality facets
Directory of Open Access Journals (Sweden)
Eka Roivainen
2016-10-01
Full Text Available Background Which personality facets should a general personality test measure? No consensus exists on the facet structure of personality, the nature of facets, or the correct method of identifying the most significant facets. However, it can be hypothesized (the lexical hypothesis) that high-frequency personality-describing words more likely represent important personality facets and rarely used words refer to less significant aspects of personality. Participants and procedure A ranking of personality facets was performed by studying the frequency of the use of popular personality adjectives in causal clauses (“because he is a kind person”) on the Internet and in books as attributes of the word person (“kind person”). Results In Study 1, the 40 most frequently used adjectives had a cumulative usage frequency equal to that of the rest of the 295 terms studied. When terms with a higher-ranking dictionary synonym or antonym were eliminated, 23 terms remained, which represent 23 different facets. In Study 2, clusters of synonymous terms were examined. Within the top 30 clusters, personality terms were used 855 times compared to 240 for the 70 lower-ranking clusters. Conclusions It is hypothesized that personality facets represented by the top-ranking terms and clusters of terms are important and impactful independent of their correlation with abstract underlying personality factors (five/six-factor models). Compared to hierarchical personality models, lists of important facets probably better cover those aspects of personality that are situated between the five or six major domains.
Scalable Faceted Ranking in Tagging Systems
Orlicki, José I.; Alvarez-Hamelin, J. Ignacio; Fierens, Pablo I.
Nowadays, web collaborative tagging systems which allow users to upload, comment on and recommend contents, are growing. Such systems can be represented as graphs where nodes correspond to users and tagged-links to recommendations. In this paper we analyze the problem of computing a ranking of users with respect to a facet described as a set of tags. A straightforward solution is to compute a PageRank-like algorithm on a facet-related graph, but it is not feasible for online computation. We propose an alternative: (i) a ranking for each tag is computed offline on the basis of tag-related subgraphs; (ii) a faceted order is generated online by merging rankings corresponding to all the tags in the facet. Based on the graph analysis of YouTube and Flickr, we show that step (i) is scalable. We also present efficient algorithms for step (ii), which are evaluated by comparing their results with two gold standards.
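Step (ii), merging the precomputed per-tag rankings into one faceted order, can be illustrated with a simple positional (Borda-style) merge. This is a stand-in with invented data, not the paper's actual merging algorithms:

```python
def merge_rankings(rankings):
    """Merge several per-tag rankings into one faceted order by Borda
    count: an item at position p in a ranking of n items scores n - p.
    Ties are broken alphabetically for determinism."""
    scores = {}
    for ranking in rankings:
        n = len(ranking)
        for pos, item in enumerate(ranking):
            scores[item] = scores.get(item, 0) + (n - pos)
    return sorted(scores, key=lambda item: (-scores[item], item))

# hypothetical per-tag offline rankings for the two tags of a facet
by_tag = [["alice", "bob", "carol"],   # ranking for tag "music"
          ["bob", "alice", "dave"]]    # ranking for tag "guitar"
faceted = merge_rankings(by_tag)
```

Because each per-tag ranking is computed offline, the online cost is just this merge, which is what makes the scheme feasible where a per-facet PageRank is not.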
Evaluation of treatment effects by ranking
DEFF Research Database (Denmark)
Halekoh, U; Kristensen, K
2008-01-01
In crop experiments measurements are often made by a judge evaluating the crops' condition after treatment. In the present paper an analysis is proposed for experiments where plots of crops treated differently are mutually ranked. In the experimental layout the crops are treated on consecutive plots, usually placed side by side in one or more rows. In the proposed method a judge ranks several neighbouring plots, say three, from best to worst. For the next observation the judge moves on by no more than two plots, such that up to two plots will be re-evaluated in a comparison with the new plot(s). Data from studies using this set-up were analysed by a Thurstonian random utility model, which assumed that the judge's rankings were obtained by comparing latent continuous utilities or treatment effects. For the latent utilities a variance component model was considered…
Superfund Hazard Ranking System Training Course
The Hazard Ranking System (HRS) training course is a four-and-a-half-day, intermediate-level course designed for personnel who are required to compile, draft, and review preliminary assessments (PAs), site inspections (SIs), and HRS documentation records/packages.
Who's bigger? where historical figures really rank
Skiena, Steven
2014-01-01
Is Hitler bigger than Napoleon? Washington bigger than Lincoln? Picasso bigger than Einstein? Quantitative analysts are rapidly finding homes in social and cultural domains, from finance to politics. What about history? In this fascinating book, Steve Skiena and Charles Ward bring quantitative analysis to bear on ranking and comparing historical reputations. They evaluate each person by aggregating the traces of millions of opinions, just as Google ranks webpages. The book includes a technical discussion for readers interested in the details of the methods, but no mathematical or computational background is necessary to understand the rankings or conclusions. Along the way, the authors present the rankings of more than one thousand of history's most significant people in science, politics, entertainment, and all areas of human endeavor. Anyone interested in history or biography can see where their favorite figures place in the grand scheme of things.
Ranking Forestry Investments With Parametric Linear Programming
Paul A. Murphy
1976-01-01
Parametric linear programming is introduced as a technique for ranking forestry investments under multiple constraints; it combines the advantages of simple ranking and linear programming as capital budgeting tools.
Dufner, Michael; Leising, Daniel; Gebauer, Jochen E
2016-05-01
How are people who generally see others positively evaluated themselves? We propose that the answer to this question crucially hinges on the content domain: We hypothesize that Agency follows a "zero-sum principle" and therefore people who see others as high in Agency are perceived as low in Agency themselves. In contrast, we hypothesize that Communion follows a "non-zero-sum principle" and therefore people who see others as high in Communion are perceived as high in Communion themselves. We tested these hypotheses in a round-robin and a half-block study. Perceiving others as agentic was indeed linked to being perceived as low in Agency. To the contrary, perceiving others as communal was linked to being perceived as high in Communion, but only when people directly interacted with each other. These results help to clarify the nature of Agency and Communion and offer explanations for divergent findings in the literature. © 2016 by the Society for Personality and Social Psychology, Inc.
Experimental study of isovector spin sum rules
International Nuclear Information System (INIS)
Alexandre Deur; Peter Bosted; Volker Burkert; Donald Crabb; Kahanawita Dharmawardane; Gail Dodge; Tony Forest; Keith Griffioen; Sebastian Kuhn; Ralph Minehart; Yelena Prok
2008-01-01
We present the Bjorken integral extracted from Jefferson Lab experiment EG1b for Q² down to 0.05 GeV². The integral is fit to extract the twist-4 element f_2^(p-n), which is large and negative. Systematic studies of this higher-twist analysis establish its legitimacy at Q² around 1 GeV². We also extracted the isovector part of the generalized forward spin polarizability γ_0. Although this quantity provides a robust test of Chiral Perturbation Theory, our data disagree with the calculations.
Experimental study of isovector spin sum rules
International Nuclear Information System (INIS)
Deur, A.; Bosted, P.; Burkert, V.; Crabb, D.; Minehart, R.; Prok, Y.; Dharmawardane, V.; Dodge, G. E.; Kuhn, S. E.; Forest, T. A.; Griffioen, K. A.
2008-01-01
We present the Bjorken integral extracted from Jefferson Lab experiment EG1b for Q² down to 0.05 GeV². The integral is fit to extract the twist-4 element f_2^(p-n), which appears to be relatively large and negative. Systematic studies of this higher-twist analysis establish its legitimacy at Q² around 1 GeV². We also performed an isospin decomposition of the generalized forward spin polarizability γ_0. Although its isovector part provides a reliable test of the calculation techniques of chiral perturbation theory, our data disagree with the calculations.
Block models and personalized PageRank
Kloumann, Isabel M.; Ugander, Johan; Kleinberg, Jon
2016-01-01
Methods for ranking the importance of nodes in a network have a rich history in machine learning and across domains that analyze structured data. Recent work has evaluated these methods through the seed set expansion problem: given a subset $S$ of nodes from a community of interest in an underlying graph, can we reliably identify the rest of the community? We start from the observation that the most widely used techniques for this problem, personalized PageRank and heat kernel methods, operate...
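The seed set expansion setting can be sketched with a minimal personalized PageRank: teleportation is restricted to the seed set, so scores concentrate in the seed's community. The graph, seed set, and damping factor below are illustrative assumptions, not taken from the paper.

```python
# Personalized PageRank via power iteration; restart mass goes only to seeds.
def personalized_pagerank(adj, seeds, alpha=0.85, iters=100):
    nodes = list(adj)
    restart = {v: (1.0 / len(seeds) if v in seeds else 0.0) for v in nodes}
    rank = dict(restart)
    for _ in range(iters):
        nxt = {v: (1 - alpha) * restart[v] for v in nodes}
        for u in nodes:
            out = adj[u]
            for v in out:                      # spread rank along out-edges
                nxt[v] += alpha * rank[u] / len(out)
        rank = nxt
    return rank

# Two triangles joined by a single edge; the seed sits in the first triangle.
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2, 4, 5], 4: [3, 5], 5: [3, 4]}
pr = personalized_pagerank(adj, seeds={0})
ranking = sorted(adj, key=pr.get, reverse=True)  # seed's community ranks first
```

Thresholding this ranking recovers the seed's community, which is exactly the expansion task the abstract describes.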
Ravishankar, Saiprasad; Nadakuditi, Raj Rao; Fessler, Jeffrey A
2017-12-01
The sparsity of signals in a transform domain or dictionary has been exploited in applications such as compression, denoising and inverse problems. More recently, data-driven adaptation of synthesis dictionaries has shown promise compared to analytical dictionary models. However, dictionary learning problems are typically non-convex and NP-hard, and the usual alternating minimization approaches for these problems are often computationally expensive, with the computations dominated by the NP-hard synthesis sparse coding step. This paper exploits the ideas that drive algorithms such as K-SVD, and investigates in detail efficient methods for aggregate sparsity penalized dictionary learning by first approximating the data with a sum of sparse rank-one matrices (outer products) and then using a block coordinate descent approach to estimate the unknowns. The resulting block coordinate descent algorithms involve efficient closed-form solutions. Furthermore, we consider the problem of dictionary-blind image reconstruction, and propose novel and efficient algorithms for adaptive image reconstruction using block coordinate descent and sum of outer products methodologies. We provide a convergence study of the algorithms for dictionary learning and dictionary-blind image reconstruction. Our numerical experiments show the promising performance and speedups provided by the proposed methods over previous schemes in sparse data representation and compressed sensing-based image reconstruction.
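The sum-of-outer-products idea can be illustrated with a minimal block coordinate descent sketch in NumPy. This is not the authors' SOUP implementation; the dimensions, sparsity level, and update schedule are invented, but each inner step has a closed-form solution, mirroring the efficient closed-form updates the abstract mentions.

```python
import numpy as np

def soup_like(Y, K=3, sparsity=4, iters=30, seed=0):
    """Approximate Y by sum_k d_k c_k^T, with each coefficient vector c_k
    kept sparse by hard thresholding (a simplified SOUP-style update)."""
    rng = np.random.default_rng(seed)
    m, n = Y.shape
    D = rng.standard_normal((m, K))
    D /= np.linalg.norm(D, axis=0)          # unit-norm atoms
    C = np.zeros((n, K))
    for _ in range(iters):
        for k in range(K):
            # Residual with atom k's own contribution added back in.
            R = Y - D @ C.T + np.outer(D[:, k], C[:, k])
            c = R.T @ D[:, k]               # least-squares coefficients
            keep = np.argsort(np.abs(c))[-sparsity:]
            mask = np.zeros_like(c)
            mask[keep] = 1.0
            c *= mask                       # hard threshold: sparse c_k
            C[:, k] = c
            d = R @ c                       # optimal atom direction,
            nrm = np.linalg.norm(d)         # then renormalize
            if nrm > 0:
                D[:, k] = d / nrm
    return D, C

# Toy data: approximate an 8x10 matrix with 3 sparse rank-1 outer products.
rng = np.random.default_rng(0)
Y = rng.standard_normal((8, 10))
D, C = soup_like(Y)
rel_err = np.linalg.norm(Y - D @ C.T) / np.linalg.norm(Y)
```

Because every block update minimizes the objective exactly, the residual is non-increasing, which is the property the paper's convergence study formalizes.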
Sum Rate Maximization of D2D Communications in Cognitive Radio Network Using Cheating Strategy
Directory of Open Access Journals (Sweden)
Yanjing Sun
2018-01-01
Full Text Available This paper focuses on the cheating algorithm for device-to-device (D2D) pairs that reuse the uplink channels of cellular users. We are concerned with how D2D pairs are matched with cellular users (CUs) to maximize their sum rate. In contrast with Munkres' algorithm, which gives the optimal matching in terms of the maximum throughput, the Gale-Shapley algorithm ensures the stability of the system at the same time and achieves a men-optimal stable matching. In our system, D2D pairs play the role of "men," so that each D2D pair can be matched to the CU that ranks as high as possible in the D2D pair's preference list. Previous studies have found that, by unilaterally falsifying preference lists in a particular way, some men can get better partners, while no men get worse off. We utilize this theory to derive the best cheating strategy for D2D pairs. We find that to acquire such a cheating strategy, we need to seek as many and as large cabals as possible. To this end, we develop a cabal-finding algorithm named RHSTLC, and we prove that it reaches Pareto optimality. In comparison with algorithms proposed in related works, the results show that our algorithm can considerably improve the sum rate of D2D pairs.
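The Gale-Shapley matching underlying the scheme can be sketched with D2D pairs as proposers, so each pair is matched to the highest-ranked CU it can stably obtain. The preference lists below are invented for illustration; the cabal-finding step (RHSTLC) is not shown.

```python
# Deferred acceptance: proposers propose down their lists; reviewers keep
# their best offer so far. Returns the proposer-optimal stable matching.
def gale_shapley(proposer_prefs, reviewer_prefs):
    free = list(proposer_prefs)
    next_choice = {p: 0 for p in proposer_prefs}
    engaged = {}                          # reviewer -> current proposer
    rank = {r: {p: i for i, p in enumerate(prefs)}
            for r, prefs in reviewer_prefs.items()}
    while free:
        p = free.pop()
        r = proposer_prefs[p][next_choice[p]]
        next_choice[p] += 1
        if r not in engaged:
            engaged[r] = p
        elif rank[r][p] < rank[r][engaged[r]]:
            free.append(engaged[r])       # r prefers p: old partner is freed
            engaged[r] = p
        else:
            free.append(p)                # r rejects p; p proposes again
    return {p: r for r, p in engaged.items()}

d2d_prefs = {"D1": ["C1", "C2", "C3"], "D2": ["C1", "C3", "C2"],
             "D3": ["C2", "C1", "C3"]}
cu_prefs = {"C1": ["D2", "D1", "D3"], "C2": ["D3", "D1", "D2"],
            "C3": ["D1", "D2", "D3"]}
match = gale_shapley(d2d_prefs, cu_prefs)   # {"D1": "C3", "D2": "C1", "D3": "C2"}
```

The cheating strategy the paper studies works by falsifying the `cu_prefs`-side lists so that some proposers land higher in their own lists without making any proposer worse off.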
Tuan, Pham Viet; Koo, Insoo
2017-10-06
In this paper, we consider multiuser simultaneous wireless information and power transfer (SWIPT) for cognitive radio systems where a secondary transmitter (ST) with an antenna array provides information and energy to multiple single-antenna secondary receivers (SRs) equipped with a power splitting (PS) receiving scheme when multiple primary users (PUs) exist. The main objective of the paper is to maximize weighted sum harvested energy for SRs while satisfying their minimum required signal-to-interference-plus-noise ratio (SINR), the limited transmission power at the ST, and the interference threshold of each PU. For the perfect channel state information (CSI), the optimal beamforming vectors and PS ratios are achieved by the proposed PSO-SDR in which semidefinite relaxation (SDR) and particle swarm optimization (PSO) methods are jointly combined. We prove that SDR always has a rank-1 solution, and is indeed tight. For the imperfect CSI with bounded channel vector errors, the upper bound of weighted sum harvested energy (WSHE) is also obtained through the S-Procedure. Finally, simulation results demonstrate that the proposed PSO-SDR has fast convergence and better performance as compared to the other baseline schemes.
Compton scattering from nuclei and photo-absorption sum rules
International Nuclear Information System (INIS)
Gorchtein, Mikhail; Hobbs, Timothy; Londergan, J. Timothy; Szczepaniak, Adam P.
2011-01-01
We revisit the photo-absorption sum rule for real Compton scattering from the proton and from nuclear targets. In analogy with the Thomas-Reiche-Kuhn sum rule appropriate at low energies, we propose a new 'constituent quark model' sum rule that relates the integrated strength of hadronic resonances to the scattering amplitude on constituent quarks. We study the constituent quark model sum rule for several nuclear targets. In addition, we extract the α=0 pole contribution for both proton and nuclei. Using the modern high-energy proton data, we find that the α=0 pole contribution differs significantly from the Thomson term, in contrast with the original findings by Damashek and Gilman.
Unidirectional ring-laser operation using sum-frequency mixing
DEFF Research Database (Denmark)
Tidemand-Lichtenberg, Peter; Cheng, Haynes Pak Hay; Pedersen, Christian
2010-01-01
A technique enforcing unidirectional operation of ring lasers is proposed and demonstrated. The approach relies on sum-frequency mixing between a single-pass laser and one of the two counterpropagating intracavity fields of the ring laser. Sum-frequency mixing introduces a parametric loss for one of the counterpropagating fields, and the technique is applicable where lossless second-order nonlinear materials are available. Numerical modeling and experimental demonstration of parametric-induced unidirectional operation of a diode-pumped solid-state 1342 nm cw ring laser are presented.
A simple derivation of new sum rules of Bessel functions
International Nuclear Information System (INIS)
Ciocci, F.; Dattoli, G.; Dipace, A.
1985-01-01
In this note a recently suggested technique is exploited to obtain simple expressions for a class of sum rules of Bessel functions appearing in plasma physics; their relevance to the numerical evaluation of the Turkin function is also discussed
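A classical example of the kind of Bessel sum rule that appears in plasma-physics expansions is J₀(x)² + 2 Σ_{n≥1} Jₙ(x)² = 1. The self-contained series implementation of Jₙ below is an illustrative sketch (not optimized, and not the note's technique).

```python
from math import factorial

def bessel_j(n, x, terms=40):
    """Taylor series J_n(x) = sum_k (-1)^k (x/2)^(2k+n) / (k! (k+n)!)."""
    return sum((-1) ** k * (x / 2) ** (2 * k + n)
               / (factorial(k) * factorial(k + n))
               for k in range(terms))

# Numerical check of the sum rule J_0(x)^2 + 2*sum_{n>=1} J_n(x)^2 = 1.
x = 2.7
total = bessel_j(0, x) ** 2 + 2 * sum(bessel_j(n, x) ** 2
                                      for n in range(1, 25))
# total is very close to 1 for any real x, since J_n(x) decays rapidly in n.
```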
Asymptotic distribution of products of sums of independent random ...
Indian Academy of Sciences (India)
integrable random variables (r.v.) are asymptotically log-normal. This fact ... the product of the partial sums of i.i.d. positive random variables as follows. ...
QCD sum rules and applications to nuclear physics
Energy Technology Data Exchange (ETDEWEB)
Cohen, T D [Maryland Univ., College Park, MD (United States). Dept. of Physics; [Washington Univ., Seattle, WA (United States). Dept. of Physics and Inst. for Nuclear Theory; Furnstahl, R J [Ohio State Univ., Columbus, OH (United States). Dept. of Physics; Griegel, D K [Maryland Univ., College Park, MD (United States). Dept. of Physics; [TRIUMF, Vancouver, BC (Canada); Xuemin, J
1994-12-01
Applications of QCD sum-rule methods to the physics of nuclei are reviewed, with an emphasis on calculations of baryon self-energies in infinite nuclear matter. The sum-rule approach relates spectral properties of hadrons propagating in the finite-density medium, such as optical potentials for quasinucleons, to matrix elements of QCD composite operators (condensates). The vacuum formalism for QCD sum rules is generalized to finite density, and the strategy and implementation of the approach is discussed. Predictions for baryon self-energies are compared to those suggested by relativistic nuclear physics phenomenology. Sum rules for vector mesons in dense nuclear matter are also considered. (author). 153 refs., 8 figs.
Effectiveness evaluation of contingency sum as a risk management ...
African Journals Online (AJOL)
Ethiopian Journal of Environmental Studies and Management ... manage risks prone projects have adopted several methods, one of which is contingency sum. ... initial project cost, cost overrun and percentage allowed for contingency.
Power sums of fibonacci and Lucas numbers | Chu | Quaestiones ...
African Journals Online (AJOL)
Lucas numbers are established, which include, as special cases, four for-mulae for odd power sums of Melham type on Fibonacci and Lucas numbers, obtained recently by Ozeki and Prodinger (2009). Quaestiones Mathematicae 34(2011), 75- ...
González-Galván, María del Carmen; Mosqueda-Taylor, Adalberto; Bologna-Molina, Ronell; Setien-Olarra, Amaia; Marichalar-Mendia, Xabier; Aguirre-Urizar, José-Manuel
2018-01-01
Background Odontogenic myxoma (OM) is a benign intraosseous neoplasm that exhibits local aggressiveness and high recurrence rates. Osteoclastogenesis is an important phenomenon in the tumor growth of maxillary neoplasms. RANK (Receptor Activator of Nuclear Factor kappa B) is the signaling receptor of RANK-L (Receptor Activator of Nuclear Factor kappa-B Ligand) that activates the osteoclasts. OPG (osteoprotegerin) is a decoy receptor for RANK-L that inhibits pro-osteoclastogenesis. The RANK/RANK-L/OPG system participates in the regulation of osteolytic activity under normal conditions, and its alteration has been associated with greater bone destruction and also with tumor growth. Objectives To analyze the immunohistochemical expression of OPG, RANK and RANK-L proteins in odontogenic myxomas (OMs) and their relationship with tumor size. Material and Methods Eighteen OMs, 4 small (<3 cm) and 14 large (≥3 cm), and 18 dental follicles (DF) included as controls were studied by means of a standard immunohistochemical procedure with RANK, RANK-L and OPG antibodies. For the evaluation, 5 fields (40x) of representative areas of OM and DF were selected, in which the expression of each antibody was determined. Descriptive and comparative statistical analyses were performed with the obtained data. Results There are significant differences in the expression of RANK in OM samples as compared to DF (p = 0.022) and between the small and large OMs (p = 0.032). A strong association is also recognized between the expression of RANK-L and OPG in OM samples. Conclusions Activation of the RANK/RANK-L/OPG triad seems to be involved in the mechanisms of bone balance and destruction, as well as associated with tumor growth in odontogenic myxomas. Key words: Odontogenic myxoma, dental follicle, RANK, RANK-L, OPG, osteoclastogenesis. PMID:29680857
How Many Alternatives Can Be Ranked? A Comparison of the Paired Comparison and Ranking Methods.
Ock, Minsu; Yi, Nari; Ahn, Jeonghoon; Jo, Min-Woo
2016-01-01
To determine the feasibility of converting ranking data into paired comparison (PC) data and suggest the number of alternatives that can be ranked by comparing a PC and a ranking method. Using a total of 222 health states, a household survey was conducted in a sample of 300 individuals from the general population. Each respondent performed a PC 15 times and a ranking method 6 times (two attempts of ranking three, four, and five health states, respectively). The health states of the PC and the ranking method were constructed to overlap each other. We converted the ranked data into PC data and examined the consistency of the response rate. Applying probit regression, we obtained the predicted probability of each method. Pearson correlation coefficients were determined between the predicted probabilities of those methods. The mean absolute error was also assessed between the observed and the predicted values. The overall consistency of the response rate was 82.8%. The Pearson correlation coefficients were 0.789, 0.852, and 0.893 for ranking three, four, and five health states, respectively. The lowest mean absolute error was 0.082 (95% confidence interval [CI] 0.074-0.090) in ranking five health states, followed by 0.123 (95% CI 0.111-0.135) in ranking four health states and 0.126 (95% CI 0.113-0.138) in ranking three health states. After empirically examining the consistency of the response rate between a PC and a ranking method, we suggest that using five alternatives in the ranking method may be superior to using three or four alternatives. Copyright © 2016 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
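The conversion step the study relies on (turning ranked data into paired-comparison data) can be sketched directly; a ranking of k alternatives implies C(k, 2) pairwise preferences. The health-state labels below are invented.

```python
from itertools import combinations

def ranking_to_pairs(ranking):
    """A ranking [best, ..., worst] implies 'a preferred to b' for every
    pair (a, b) where a is ranked above b."""
    return [(a, b) for a, b in combinations(ranking, 2)]

# Ranking five health states yields C(5, 2) = 10 implied paired comparisons,
# which is why ranking is far more data-efficient than direct PC elicitation.
pairs = ranking_to_pairs(["A", "B", "C", "D", "E"])
```

Consistency between these implied pairs and directly elicited PC responses is what the reported 82.8% response-rate agreement measures.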
Rank distributions: A panoramic macroscopic outlook
Eliazar, Iddo I.; Cohen, Morrel H.
2014-01-01
This paper presents a panoramic macroscopic outlook of rank distributions. We establish a general framework for the analysis of rank distributions, which classifies them into five macroscopic "socioeconomic" states: monarchy, oligarchy-feudalism, criticality, socialism-capitalism, and communism. Oligarchy-feudalism is shown to be characterized by discrete macroscopic rank distributions, and socialism-capitalism is shown to be characterized by continuous macroscopic size distributions. Criticality is a transition state between oligarchy-feudalism and socialism-capitalism, which can manifest allometric scaling with multifractal spectra. Monarchy and communism are extreme forms of oligarchy-feudalism and socialism-capitalism, respectively, in which the intrinsic randomness vanishes. The general framework is applied to three different models of rank distributions—top-down, bottom-up, and global—and unveils each model's macroscopic universality and versatility. The global model yields a macroscopic classification of the generalized Zipf law, an omnipresent form of rank distributions observed across the sciences. An amalgamation of the three models establishes a universal rank-distribution explanation for the macroscopic emergence of a prevalent class of continuous size distributions, ones governed by unimodal densities with both Pareto and inverse-Pareto power-law tails.
Fair ranking of researchers and research teams.
Vavryčuk, Václav
2018-01-01
The main drawback of ranking of researchers by the number of papers, citations or by the Hirsch index is ignoring the problem of distributing authorship among authors in multi-author publications. So far, the single-author or multi-author publications contribute to the publication record of a researcher equally. This full counting scheme is apparently unfair and causes unjust disproportions, in particular, if ranked researchers have distinctly different collaboration profiles. These disproportions are removed by less common fractional or authorship-weighted counting schemes, which can distribute the authorship credit more properly and suppress a tendency to unjustified inflation of co-authors. The urgent need of widely adopting a fair ranking scheme in practice is exemplified by analysing citation profiles of several highly-cited astronomers and astrophysicists. While the full counting scheme often leads to completely incorrect and misleading ranking, the fractional or authorship-weighted schemes are more accurate and applicable to ranking of researchers as well as research teams. In addition, they suppress differences in ranking among scientific disciplines. These more appropriate schemes should urgently be adopted by scientific publication databases such as the Web of Science (Thomson Reuters) or the Scopus (Elsevier).
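The difference between full and fractional counting can be shown in a few lines. The author names and paper lists below are invented for illustration; the authorship-weighted variants the abstract mentions would simply replace the equal 1/n share with position-dependent weights.

```python
from collections import defaultdict

def credit(papers, scheme="full"):
    """papers: list of author lists. 'full' gives each author 1 credit per
    paper; 'fractional' splits one unit of credit equally among co-authors."""
    score = defaultdict(float)
    for authors in papers:
        share = 1.0 if scheme == "full" else 1.0 / len(authors)
        for a in authors:
            score[a] += share
    return dict(score)

papers = [["Smith"], ["Smith", "Jones", "Lee"], ["Jones", "Lee"]]
full = credit(papers, "full")          # Smith: 2, Jones: 2, Lee: 2 (all tie)
frac = credit(papers, "fractional")    # Smith: 4/3, Jones: 5/6, Lee: 5/6
```

Under full counting the three authors are indistinguishable; fractional counting separates the solo author from the habitual co-authors, which is exactly the disproportion the abstract argues full counting hides.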
The black hole interior and a curious sum rule
International Nuclear Information System (INIS)
Giveon, Amit; Itzhaki, Nissan; Troost, Jan
2014-01-01
We analyze the Euclidean geometry near non-extremal NS5-branes in string theory, including regions beyond the horizon and beyond the singularity of the black brane. The various regions have an exact description in string theory, in terms of cigar, trumpet and negative level minimal model conformal field theories. We study the worldsheet elliptic genera of these three superconformal theories, and show that their sum vanishes. We speculate on the significance of this curious sum rule for black hole physics
QCD sum rule for nucleon in nuclear matter
International Nuclear Information System (INIS)
Mallik, S.; Sarkar, Sourav
2010-01-01
We consider the two-point function of nucleon current in nuclear matter and write a QCD sum rule to analyse the residue of the nucleon pole as a function of nuclear density. The nucleon self-energy needed for the sum rule is taken as input from calculations using a phenomenological NN potential. Our result shows a decrease in the residue with increasing nuclear density, as is known to be the case with similar quantities. (orig.)
The black hole interior and a curious sum rule
Energy Technology Data Exchange (ETDEWEB)
Giveon, Amit [Racah Institute of Physics, The Hebrew University,Jerusalem, 91904 (Israel); Itzhaki, Nissan [Physics Department, Tel-Aviv University,Ramat-Aviv, 69978 (Israel); Troost, Jan [Laboratoire de Physique Théorique,Unité Mixte du CRNS et de l’École Normale Supérieure,associée à l’Université Pierre et Marie Curie 6,UMR 8549 École Normale Supérieure,24 Rue Lhomond Paris 75005 (France)
2014-03-12
We analyze the Euclidean geometry near non-extremal NS5-branes in string theory, including regions beyond the horizon and beyond the singularity of the black brane. The various regions have an exact description in string theory, in terms of cigar, trumpet and negative level minimal model conformal field theories. We study the worldsheet elliptic genera of these three superconformal theories, and show that their sum vanishes. We speculate on the significance of this curious sum rule for black hole physics.
GDH sum rule measurement at low Q2
International Nuclear Information System (INIS)
Bianchi, N.
1996-01-01
The Gerasimov-Drell-Hearn (GDH) sum rule is based on a general dispersive relation for forward Compton scattering. Multipole analysis suggested the possible violation of the sum rule. Some propositions have been made to modify the original GDH expression. An effort is now being made in several laboratories to shed some light on this topic. The purposes of the different planned experiments are briefly presented according to their Q² range
A Quantum Approach to Subset-Sum and Similar Problems
Daskin, Ammar
2017-01-01
In this paper, we study the subset-sum problem using a quantum heuristic approach similar to the verification circuit of quantum Arthur-Merlin games. Under certain described assumptions, we show that the exact solution of the subset-sum problem may be obtained in polynomial time and that an exponential speed-up over the classical algorithms may be possible. We give a numerical example and discuss the complexity of the approach and its further application to the knapsack problem.
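For contrast with the quantum heuristic, the classical baseline is the pseudo-polynomial dynamic program for subset sum; the instance below is invented.

```python
def subset_sum(nums, target):
    """Return a subset of nums summing to target, or None.
    Classic DP over achievable sums, O(len(nums) * #achievable sums)."""
    reach = {0: []}                        # achievable sum -> witness subset
    for x in nums:
        for s, subset in list(reach.items()):
            if s + x not in reach:
                reach[s + x] = subset + [x]
    return reach.get(target)

sol = subset_sum([3, 34, 4, 12, 5, 2], 9)  # one witness, e.g. [4, 5]
```

The exponential worst case appears when the achievable sums do not collide, which is the regime where a quantum speed-up would matter.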
Spectral sum rule for time delay in R²
International Nuclear Information System (INIS)
Osborn, T.A.; Sinha, K.B.; Bolle, D.; Danneels, C.
1985-01-01
A local spectral sum rule for nonrelativistic scattering in two dimensions is derived for the potential class v ∈ L^(4/3)(R²). The sum rule relates the integral over all scattering energies of the trace of the time-delay operator for a finite region Σ ⊂ R² to the contributions in Σ of the pure point and singularly continuous spectra
Light-cone sum rules: A SCET-based formulation
De Fazio, F; Hurth, Tobias; Feldmann, Th.
2007-01-01
We describe the construction of light-cone sum rules (LCSRs) for exclusive $B$-meson decays into light energetic hadrons from correlation functions within soft-collinear effective theory (SCET). As an example, we consider the SCET sum rule for the $B \\to \\pi$ transition form factor at large recoil, including radiative corrections from hard-collinear loop diagrams at first order in the strong coupling constant.
A Global Optimization Algorithm for Sum of Linear Ratios Problem
Yuelin Gao; Siqiao Jin
2013-01-01
We equivalently transform the sum of linear ratios programming problem into bilinear programming problem, then by using the linear characteristics of convex envelope and concave envelope of double variables product function, linear relaxation programming of the bilinear programming problem is given, which can determine the lower bound of the optimal value of original problem. Therefore, a branch and bound algorithm for solving sum of linear ratios programming problem is put forward, and the c...
An Algorithm to Solve the Equal-Sum-Product Problem
Nyblom, M. A.; Evans, C. D.
2013-01-01
A recursive algorithm is constructed which finds all solutions to a class of Diophantine equations connected to the problem of determining ordered n-tuples of positive integers satisfying the property that their sum is equal to their product. An examination of the use of binary search trees in implementing the algorithm into a working program is given. In addition, an application of the algorithm for searching possible extra exceptional values of the equal-sum-product problem is explored after...
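The equal-sum-product property can be illustrated with a brute-force search (the paper's recursive algorithm is far more efficient; the bound below is an invented illustration).

```python
from itertools import combinations_with_replacement
from math import prod

def equal_sum_product(n, bound=20):
    """Non-decreasing n-tuples of positive integers up to `bound`
    whose sum equals their product."""
    return [t for t in combinations_with_replacement(range(1, bound + 1), n)
            if sum(t) == prod(t)]

triples = equal_sum_product(3)   # the classic solution: 1 + 2 + 3 = 1 * 2 * 3
```

Padding with 1s is what makes solutions exist for every n, e.g. (1, 1, 2, 4) for n = 4, since the 1s raise the sum without changing the product.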
Hadronic final states and sum rules in deep inelastic processes
International Nuclear Information System (INIS)
Pal, B.K.
1977-01-01
In order to extract maximum information on the hadronic final states and sum rules in deep inelastic processes, Regge phenomenology and the quark-parton model have been used. A unified picture for the production of hadrons of type i as a function of the Bjorken and Feynman variables, with only one adjustable parameter, is formulated. The results of neutrino experiments and the production of charm particles are discussed in the context of sum rules. (author)
Comment on QCD sum rules and weak bottom decays
International Nuclear Information System (INIS)
Guberina, B.; Machet, B.
1982-07-01
QCD sum rules derived by Bourrely et al. are applied to B-decays to obtain a lower and an upper bound for the decay rate. The sum rules are shown to be essentially controlled by the large mass scales involved in the process. These bounds, combined with the experimental value of BR(B→eνX), provide an upper bound for the lifetime of the B⁺ meson. A comparison is made with D-meson decays
On the general Dedekind sums and its reciprocity formula
Indian Academy of Sciences (India)
if x is an integer. The various properties of S(h, q) were investigated by many authors. Maybe the most famous property of Dedekind sums is the reciprocity formula (see [2–4]): S(h, q) + S(q, h) = (h² + q² + 1)/(12hq) − 1/4 (1) for all (h, q) = 1, q > 0, h > 0. The main purpose of this paper is to introduce a general Dedekind sum:
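The reciprocity formula quoted above can be checked exactly with rational arithmetic, using the standard definition S(h, q) = Σ_{a=1}^{q−1} ((a/q))((ah/q)) with ((x)) = x − ⌊x⌋ − 1/2 for non-integer x and 0 otherwise.

```python
from fractions import Fraction
from math import floor, gcd

def saw(x):
    """Sawtooth ((x)): x - floor(x) - 1/2 for non-integer x, else 0."""
    if x == int(x):
        return Fraction(0)
    return x - floor(x) - Fraction(1, 2)

def dedekind_sum(h, q):
    return sum(saw(Fraction(a, q)) * saw(Fraction(a * h, q))
               for a in range(1, q))

h, q = 5, 7
assert gcd(h, q) == 1                      # reciprocity requires (h, q) = 1
lhs = dedekind_sum(h, q) + dedekind_sum(q, h)
rhs = Fraction(h * h + q * q + 1, 12 * h * q) - Fraction(1, 4)
# lhs == rhs exactly, as formula (1) asserts.
```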
Root and Critical Point Behaviors of Certain Sums of Polynomials
Indian Academy of Sciences (India)
There is an extensive literature concerning roots of sums of polynomials. Many papers and books ([5], [6], [7]) have been written about these polynomials. Perhaps the most immediate question about sums of polynomials, A + B = C, is "given bounds for the roots of A and B, what bounds can be given for the roots of C?" By Fell [3], if ...
Chiral corrections to the Adler-Weisberger sum rule
Beane, Silas R.; Klco, Natalie
2016-12-01
The Adler-Weisberger sum rule for the nucleon axial-vector charge, g_A, offers a unique signature of chiral symmetry and its breaking in QCD. Its derivation relies on both algebraic aspects of chiral symmetry, which guarantee the convergence of the sum rule, and dynamical aspects of chiral symmetry breaking (as exploited using chiral perturbation theory), which allow the rigorous inclusion of explicit chiral symmetry breaking effects due to light-quark masses. The original derivations obtained the sum rule in the chiral limit and, without the benefit of chiral perturbation theory, made various attempts at extrapolating to nonvanishing pion masses. In this paper, the leading, universal, chiral corrections to the chiral-limit sum rule are obtained. Using PDG data, a recent parametrization of the pion-nucleon total cross sections in the resonance region given by the SAID group, as well as recent Roy-Steiner equation determinations of subthreshold amplitudes, threshold parameters, and correlated low-energy constants, the Adler-Weisberger sum rule is confronted with experimental data. With uncertainty estimates associated with the cross-section parametrization, the Goldberger-Treiman discrepancy, and the truncation of the sum rule at O(M_π⁴) in the chiral expansion, this work finds g_A = 1.248 ± 0.010 ± 0.007 ± 0.013.
Dexter, Franklin; Bayman, Emine O; Dexter, Elisabeth U
2017-12-01
We examined type I and II error rates for analysis of (1) mean hospital length of stay (LOS) versus (2) percentage of hospital LOS that are overnight. These 2 end points are suitable for when LOS is treated as a secondary economic end point. We repeatedly resampled LOS for 5052 discharges of thoracoscopic wedge resections and lung lobectomy at 26 hospitals. Unequal variances t test (Welch method) and Fisher exact test both were conservative (ie, type I error rate less than nominal level). The Wilcoxon rank sum test was included as a comparator; the type I error rates did not differ from the nominal level of 0.05 or 0.01. Fisher exact test was more powerful than the unequal variances t test at detecting differences among hospitals; estimated odds ratio for obtaining P < .05 with Fisher exact test versus unequal variances t test = 1.94, with 95% confidence interval, 1.31-3.01. Fisher exact test and Wilcoxon-Mann-Whitney had comparable statistical power in terms of differentiating LOS between hospitals. For studies with LOS to be used as a secondary end point of economic interest, there is currently considerable interest in the planned analysis being for the percentage of patients suitable for ambulatory surgery (ie, hospital LOS equals 0 or 1 midnight). Our results show that there need not be a loss of statistical power when groups are compared using this binary end point, as compared with either Welch method or Wilcoxon rank sum test.
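The binary end point the authors describe (hospital LOS of 0 or 1 midnight, i.e., "ambulatory-suitable") reduces the comparison between two hospitals to a 2×2 table. The counts below are invented, and the plain two-sided Fisher exact implementation is a self-contained sketch, not the authors' resampling analysis.

```python
from math import comb

def fisher_exact_p(a, b, c, d):
    """Two-sided Fisher exact p-value for the 2x2 table [[a, b], [c, d]]:
    sum the hypergeometric probabilities of all tables at least as extreme
    (i.e., no more probable) than the observed one."""
    r1, r2, c1, n = a + b, c + d, a + c, a + b + c + d
    def pmf(k):
        return comb(r1, k) * comb(r2, c1 - k) / comb(n, c1)
    p_obs = pmf(a)
    return sum(pmf(k) for k in range(max(0, c1 - r2), min(r1, c1) + 1)
               if pmf(k) <= p_obs * (1 + 1e-9))

# Hospital A: 30 of 50 discharges ambulatory-suitable; hospital B: 15 of 50.
p = fisher_exact_p(30, 20, 15, 35)
# A small p suggests the hospitals differ on the binary LOS end point.
```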
RANK/RANK-Ligand/OPG: A new therapeutic approach in the treatment of osteoporosis
Directory of Open Access Journals (Sweden)
Preisinger E
2007-01-01
Full Text Available Research into the coupling mechanisms underlying osteoclastogenesis, bone resorption and remodeling has opened up possible new therapeutic approaches in the treatment of osteoporosis. A key role in bone resorption is played by the RANK ("receptor activator of nuclear factor (NF-κB") ligand (RANKL). Binding of RANKL to the receptor RANK initiates bone resorption. OPG (osteoprotegerin), as well as the human monoclonal antibody (IgG2 denosumab developed for clinical use, block the binding of RANK ligand to RANK and thus prevent bone loss.
Country-specific determinants of world university rankings
Pietrucha, Jacek
2017-01-01
This paper examines country-specific factors that affect the three most influential world university rankings (the Academic Ranking of World Universities, the QS World University Ranking, and the Times Higher Education World University Ranking). We run a cross-sectional regression that covers 42–71 countries (depending on the ranking and data availability). We show that the position of universities from a country in the ranking is determined by the following country-specific variables: econom...
Kriging accelerated by orders of magnitude: combining low-rank with FFT techniques
Litvinenko, Alexander; Nowak, Wolfgang
2014-01-01
Kriging algorithms based on FFT, the separability of certain covariance functions and low-rank representations of covariance functions have been investigated. The current study combines these ideas, and so combines the individual speedup factors of all ideas. For separable covariance functions, the results are exact, and non-separable covariance functions can be approximated through sums of separable components. Speedup factor is 1e+8, problem sizes 1.5e+13 and 2e+15 estimation points for Kriging and spatial design.
Decomposing tensors with structured matrix factors reduces to rank-1 approximations
DEFF Research Database (Denmark)
Comon, Pierre; Sørensen, Mikael; Tsigaridas, Elias
2010-01-01
Tensor decompositions make it possible to estimate, in a deterministic way, the parameters in a multi-linear model. Applications have already been pointed out in antenna array processing and digital communications, among others, and are extremely attractive provided some diversity at the receiver is available. As opposed to the widely used ALS algorithm, non-iterative algorithms are proposed in this paper to compute the required tensor decomposition into a sum of rank-1 terms, when some factor matrices enjoy some structure, such as block-Hankel, triangular, band, etc.
Kriging accelerated by orders of magnitude: combining low-rank with FFT techniques
Litvinenko, Alexander; Nowak, Wolfgang
2014-01-01
Kriging algorithms based on FFT, the separability of certain covariance functions and low-rank representations of covariance functions have been investigated. The current study combines these ideas, and so combines the individual speedup factors of all ideas. The reduced computational complexity is O(dL log L), where L := max_i n_i, i = 1..d. For separable covariance functions, the results are exact, and non-separable covariance functions can be approximated through sums of separable components. The speedup factor is 10⁸, with problem sizes of 1.5·10¹³ and 2·10¹⁵ estimation points for Kriging and spatial design.
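The separability idea can be illustrated with a Kronecker-structured covariance: a matrix-vector product with C = C1 ⊗ C2 never needs the full n1·n2 × n1·n2 matrix, which is one source of the orders-of-magnitude speedups reported. The sizes and covariance factors below are invented, and this sketch omits the FFT and low-rank components of the actual method.

```python
import numpy as np

def kron_matvec(C1, C2, v):
    """Compute (C1 kron C2) @ v without forming the Kronecker product:
    cost O(n1*n2*(n1+n2)) instead of O((n1*n2)^2). The row-major reshape
    matches np.kron's block ordering."""
    n1, n2 = C1.shape[0], C2.shape[0]
    X = v.reshape(n1, n2)
    return (C1 @ X @ C2.T).reshape(-1)

rng = np.random.default_rng(1)
C1 = rng.standard_normal((3, 3)); C1 = C1 @ C1.T   # SPD covariance factors
C2 = rng.standard_normal((4, 4)); C2 = C2 @ C2.T
v = rng.standard_normal(12)
fast = kron_matvec(C1, C2, v)
slow = np.kron(C1, C2) @ v                         # reference: full 12x12
```

Non-separable covariances are then handled, as the abstract states, by summing several such separable (Kronecker) terms.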
DEFF Research Database (Denmark)
Sebastiani, Paola; Hadley, Evan C; Province, Michael
2009-01-01
Family studies of exceptional longevity can potentially identify genetic and other factors contributing to long life and healthy aging. Although such studies seek families that are exceptionally long lived, they also need living members who can provide DNA and phenotype information. On the basis of these considerations, the authors developed a metric to rank families for selection into a family study of longevity. Their measure, the family longevity selection score (FLoSS), is the sum of 2 components: 1) an estimated family longevity score built from birth-, gender-, and nation-specific cohort survival probabilities and 2) a bonus for older living siblings. The authors examined properties of FLoSS-based family rankings by using data from 3 ongoing studies: the New England Centenarian Study, the Framingham Heart Study, and screenees for the Long Life Family Study. FLoSS-based selection yields families...
Global network centrality of university rankings
Guo, Weisi; Del Vecchio, Marco; Pogrebna, Ganna
2017-10-01
Universities and higher education institutions form an integral part of the national infrastructure and prestige. As academic research benefits increasingly from international exchange and cooperation, many universities have increased investment in improving and enabling their global connectivity. Yet, the relationship between university performance and global physical connectedness has not been explored in detail. We conduct, to our knowledge, the first large-scale data-driven analysis into whether there is a correlation between a university's relative ranking performance and its global connectivity via the air transport network. The results show that local access to global hubs (as measured by air transport network betweenness) strongly and positively correlates with ranking growth (statistical significance in different models ranges between the 5% and 1% levels). We also found that the local airport's aggregate flight paths (degree) and capacity (weighted degree) have no effect on university ranking, further showing that global connectivity distance is more important than the capacity of flight connections. We also examined the effect of local city economic development as a confounding variable; no effect was observed, suggesting that access to global transportation hubs outweighs economic performance as a determinant of university ranking. The impact of this research is that we have determined the importance of the centrality of global connectivity and, hence, established initial evidence for further exploring potential connections between university ranking and regional investment policies on improving global connectivity.
Diversity rankings among bacterial lineages in soil.
Youssef, Noha H; Elshahed, Mostafa S
2009-03-01
We used rarefaction curve analysis and diversity ordering-based approaches to rank the 11 most frequently encountered bacterial lineages in soil according to diversity in 5 previously reported 16S rRNA gene clone libraries derived from agricultural, undisturbed tall grass prairie and forest soils (n = 26,140, 28,328, 31,818, 13,001 and 53,533). The Planctomycetes, Firmicutes and the delta-Proteobacteria were consistently ranked among the most diverse lineages in all data sets, whereas the Verrucomicrobia, Gemmatimonadetes and beta-Proteobacteria were consistently ranked among the least diverse. On the other hand, the rankings of alpha-Proteobacteria, Acidobacteria, Actinobacteria, Bacteroidetes and Chloroflexi varied widely in different soil clone libraries. In general, lineages exhibiting the largest differences in diversity rankings also exhibited the largest differences in relative abundance in the data sets examined. Within these lineages, a positive correlation between relative abundance and diversity was observed within the Acidobacteria, Actinobacteria and Chloroflexi, and a negative diversity-abundance correlation was observed within the Bacteroidetes. The ecological and evolutionary implications of these results are discussed.
RANK and RANKL - from bone to breast carcinoma
Directory of Open Access Journals (Sweden)
Sigl V
2012-01-01
Full Text Available RANK (Receptor Activator of NF-κB) and its ligand RANKL are key molecules in bone metabolism and play an essential role in the development of pathological bone changes. Deregulation of the RANK/RANKL system is, for example, a main cause of postmenopausal osteoporosis in women. A further essential function of RANK and RANKL lies in the development of milk-secreting glands during pregnancy. Sex hormones such as progesterone regulate the expression of RANKL and thereby induce the proliferation of epithelial cells of the breast. It has long been known that RANK and RANKL are involved in the formation of metastases by breast cancer cells in bone tissue. We have now identified the RANK/RANKL system as an essential mechanism in the development of hormone-driven breast cancer as well. In this contribution we therefore pay particular attention to the latest findings and consider them critically with respect to breast cancer development.
Google and the mind: predicting fluency with PageRank.
Griffiths, Thomas L; Steyvers, Mark; Firl, Alana
2007-12-01
Human memory and Internet search engines face a shared computational problem, needing to retrieve stored pieces of information in response to a query. We explored whether they employ similar solutions, testing whether we could predict human performance on a fluency task using PageRank, a component of the Google search engine. In this task, people were shown a letter of the alphabet and asked to name the first word beginning with that letter that came to mind. We show that PageRank, computed on a semantic network constructed from word-association data, outperformed word frequency and the number of words for which a word is named as an associate as a predictor of the words that people produced in this task. We identify two simple process models that could support this apparent correspondence between human memory and Internet search, and relate our results to previous rational models of memory.
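As an illustration of the kind of computation involved, here is a minimal PageRank power iteration on a toy word-association graph; the words, edges and damping factor are invented for this sketch and are not the study's free-association norms.

```python
import numpy as np

# Toy directed word-association graph: an edge (a, b) means cue word a
# elicits word b as an associate (all data invented for illustration).
words = ["apple", "banana", "cat", "dog", "animal"]
edges = [(0, 1), (0, 4), (1, 0), (2, 3), (2, 4),
         (3, 2), (3, 4), (4, 2), (4, 3)]

n = len(words)
A = np.zeros((n, n))
for src, dst in edges:
    A[dst, src] = 1.0
A /= A.sum(axis=0, keepdims=True)  # column-stochastic: normalise out-links

d = 0.85                            # standard damping factor
r = np.full(n, 1.0 / n)
for _ in range(100):                # power iteration to the stationary vector
    r = (1 - d) / n + d * A @ r

print(words[int(np.argmax(r))])     # -> "animal" (most-linked hub node)
```

Words with many incoming associations from other well-connected words accumulate rank, which is the property the study exploits to predict which word comes to mind first.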
QV modal distance displacement - a criterion for contingency ranking
Energy Technology Data Exchange (ETDEWEB)
Rios, M.A.; Sanchez, J.L.; Zapata, C.J. [Universidad de Los Andes (Colombia). Dept. of Electrical Engineering], Emails: mrios@uniandes.edu.co, josesan@uniandes.edu.co, cjzapata@utp.edu.co
2009-07-01
This paper proposes a new methodology using concepts from fast decoupled load flow, modal analysis and contingency ranking, where the impact of each contingency is measured hourly, taking into account its influence on the mathematical model of the system, i.e. the Jacobian matrix. The method computes the displacement of the eigenvalues of the reduced Jacobian matrix used in voltage stability analysis as a criterion for contingency ranking, accounting for the fact that the lowest eigenvalue in the normal operating condition is not necessarily the lowest eigenvalue under an N-1 contingency condition. This is done using all branches in the system as well as specific branches selected according to the IBPF index. The test system used is the IEEE 118-node system. (author)
Analysis of some methods for reduced rank Gaussian process regression
DEFF Research Database (Denmark)
Quinonero-Candela, J.; Rasmussen, Carl Edward
2005-01-01
While there is strong motivation for using Gaussian Processes (GPs) due to their excellent performance in regression and classification problems, their computational complexity makes them impractical when the size of the training set exceeds a few thousand cases. This has motivated the recent proliferation of a number of cost-effective approximations to GPs, both for classification and for regression. In this paper we analyze one popular approximation to GPs for regression: the reduced rank approximation. While generally GPs are equivalent to infinite linear models, we show that Reduced Rank Gaussian Processes (RRGPs) are equivalent to finite sparse linear models. We also introduce the concept of degenerate GPs and show that they correspond to inappropriate priors. We show how to modify the RRGP to prevent it from being degenerate at test time. Training RRGPs consists both in learning…
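The finite-linear-model view of RRGPs can be sketched as follows (synthetic data; the support points `Z`, kernel and noise level are assumptions for illustration, not the paper's experiments): with m kernel basis functions, training reduces to an m-by-m linear solve instead of the full GP's O(n^3) factorization.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, noise = 200, 15, 0.1
X = np.sort(rng.uniform(-3, 3, n))
y = np.sin(X) + noise * rng.standard_normal(n)
Z = np.linspace(-3, 3, m)                 # m support points (assumed)

def k(a, b, ell=1.0):
    """Squared-exponential kernel matrix (assumed kernel)."""
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell**2)

Phi = k(X, Z)                              # n x m feature matrix: phi_j(x) = k(x, z_j)
Kzz = k(Z, Z)                              # Gaussian prior coupling on the m weights
A = Phi.T @ Phi + noise**2 * Kzz           # only an m x m system to solve
w = np.linalg.solve(A, Phi.T @ y)

x_test = np.array([0.5])
f_test = k(x_test, Z) @ w                  # posterior-mean prediction
print(float(f_test[0]))                    # close to sin(0.5) ~ 0.48
```

The prior on the finite weight vector is what makes this a (degenerate) GP rather than plain ridge regression; the paper's test-time modification addresses the variance behaviour of exactly this construction.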
BridgeRank: A novel fast centrality measure based on local structure of the network
Salavati, Chiman; Abdollahpouri, Alireza; Manbari, Zhaleh
2018-04-01
Ranking nodes in complex networks has become an important task in many application domains. In a complex network, influential nodes are those that have the most spreading ability. Thus, identifying influential nodes based on their spreading ability is a fundamental task in applications such as viral marketing. One of the most important centrality measures for ranking nodes is closeness centrality, which is effective but suffers from high computational complexity, O(n^3). This paper improves on closeness centrality by utilizing the local structure of nodes and presents a new ranking algorithm, called BridgeRank centrality. The proposed method computes a local centrality value for each node. For this purpose, communities are first detected, ignoring the relationships between communities. Then, by applying a centrality measure within each community, one best critical node is extracted from each community. Finally, the nodes are ranked by the sum of their shortest-path lengths to the obtained critical nodes. We have also modified the proposed method by weighting the original BridgeRank and selecting several nodes from each community based on the density of that community. Our method finds the best nodes with high spreading ability at low time complexity, which makes it applicable to large-scale networks. To evaluate the performance of the proposed method, we use the SIR diffusion model. Experiments on real and artificial networks show that our method identifies influential nodes efficiently and achieves better performance than other recent methods.
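A minimal sketch of the BridgeRank pipeline described above (not the authors' implementation): the community partition is assumed given rather than detected, and highest degree stands in for the per-community centrality.

```python
from collections import deque

# Two small communities joined by a bridge; adjacency list and the
# partition are invented for illustration.
graph = {
    0: [1, 2], 1: [0, 2], 2: [0, 1, 3],
    3: [2, 4, 5], 4: [3, 5], 5: [3, 4],
}
communities = [[0, 1, 2], [3, 4, 5]]

def bfs_dist(src):
    """Shortest-path lengths from src by breadth-first search."""
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for v in graph[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

# One critical node per community: highest degree inside the community
# (a stand-in for whatever per-community centrality is used).
critical = [max(c, key=lambda u: len(graph[u])) for c in communities]

# Rank all nodes by the sum of distances to the critical nodes (lower = better).
dists = [bfs_dist(c) for c in critical]
score = {u: sum(d[u] for d in dists) for u in graph}
ranking = sorted(graph, key=score.get)
print(ranking[0])  # -> 2, a bridge node between the two communities
```

Only one BFS per community is needed, which is where the claimed speedup over full closeness centrality (one BFS per node) comes from.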
COMPARISON OF SIMPLE SUM AND DIVISIA MONETARY AGGREGATES USING PANEL DATA ANALYSIS
Directory of Open Access Journals (Sweden)
Sadullah CELIK
2009-07-01
Full Text Available It is well documented that financial innovation has led to the poor performance of the simple sum method of monetary aggregation, destabilizing the historical relationship between monetary aggregates and ultimate target variables, such as the rate of growth and the rate of unemployment, during the liberalization period of the 1980s. This study emphasizes the superiority of an alternative method of aggregation over the simple sum method, namely Divisia monetary aggregates, employing panel data analysis for the United States, the United Kingdom, the Euro Area and Japan for the period between 1980Q1 and 1993Q3. After investigating the order of stationarity of the panel data set through several panel unit root tests, we perform advanced panel cointegration tests to check for the existence of a long-run link between the Divisia monetary aggregates and income and interest rates in a simple Keynesian money demand function.
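For illustration, a hedged sketch of how a discrete (Törnqvist-Theil) Divisia aggregate differs from the simple sum, using invented two-period data for two monetary components; the user-cost formula pi_i = (R - r_i) / (1 + R) follows the standard Divisia monetary aggregation literature, not this study's data.

```python
import numpy as np

# Invented two-period data: currency (zero own return) and an
# interest-bearing savings component.
m = np.array([[100.0, 50.0],      # period t-1 quantities
              [102.0, 60.0]])     # period t quantities
r = np.array([[0.00, 0.03],       # own rates of return per period
              [0.00, 0.03]])
R = 0.06                          # benchmark rate (assumed)

pi = (R - r) / (1 + R)            # user costs per component and period
s = pi * m / (pi * m).sum(axis=1, keepdims=True)  # expenditure shares
s_bar = s.mean(axis=0)            # shares averaged over the two periods

divisia_growth = float(s_bar @ np.log(m[1] / m[0]))
simple_sum_growth = float(np.log(m[1].sum() / m[0].sum()))
print(divisia_growth, simple_sum_growth)
```

Because the fast-growing savings component carries a low user-cost share, the Divisia aggregate grows more slowly than the simple sum here, illustrating how the two measures can diverge after financial innovation.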
Ranking the adaptive capacity of nations to climate change when socio-political goals are explicit
International Nuclear Information System (INIS)
Haddad, B.M.
2005-01-01
The typical categories for measuring national adaptive capacity to climate change include a nation's wealth, technology, education, information, skills, infrastructure, access to resources, and management capabilities. Resulting rankings predictably mirror more general rankings of economic development, such as the Human Development Index. This approach is incomplete since it does not consider the normative or motivational context of adaptation. For what purpose or toward what goal does a nation aspire, and in that context, what is its adaptive capacity? This paper posits 11 possible national socio-political goals that fall into the three categories of teleological legitimacy, procedural legitimacy, and norm-based decision rules. A model that sorts nations in terms of adaptive capacity based on national socio-political aspirations is presented. While the aspiration of maximizing summed utility matches typical existing rankings, alternative aspirations, including contractarian liberalism, technocratic management, and dictatorial/religious rule alter the rankings. An example describes how this research can potentially inform how priorities are set for international assistance for climate change adaptation. (author)
Resolution of ranking hierarchies in directed networks
Barucca, Paolo; Lillo, Fabrizio
2018-01-01
Identifying hierarchies and rankings of nodes in directed graphs is fundamental in many applications such as social network analysis, biology, economics, and finance. A recently proposed method identifies the hierarchy by finding the ordered partition of nodes which minimises a score function, termed agony. This function penalises the links violating the hierarchy in a way depending on the strength of the violation. To investigate the resolution of ranking hierarchies we introduce an ensemble of random graphs, the Ranked Stochastic Block Model. We find that agony may fail to identify hierarchies when the structure is not strong enough and the size of the classes is small with respect to the whole network. We analytically characterise the resolution threshold and we show that an iterated version of agony can partly overcome this resolution limit. PMID:29394278
Ranking beta sheet topologies of proteins
DEFF Research Database (Denmark)
Fonseca, Rasmus; Helles, Glennie; Winter, Pawel
2010-01-01
One of the challenges of protein structure prediction is to identify long-range interactions between amino acids. To reliably predict such interactions, we enumerate, score and rank all beta-topologies (partitions of beta-strands into sheets, orderings of strands within sheets and orientations of paired strands) of a given protein. We show that the beta-topology corresponding to the native structure is, with high probability, among the top-ranked. Since full enumeration is very time-consuming, we also suggest a method to deal with proteins with many beta-strands. The results reported in this paper are highly relevant for ab initio protein structure prediction methods based on decoy generation. The top-ranked beta-topologies can be used to find initial conformations from which conformational searches can be started. They can also be used to filter decoys by removing those with poorly…
Data envelopment analysis of randomized ranks
Directory of Open Access Journals (Sweden)
Sant'Anna Annibal P.
2002-01-01
Full Text Available Probabilities and odds, derived from vectors of ranks, are here compared as measures of efficiency of decision-making units (DMUs. These measures are computed with the goal of providing preliminary information before starting a Data Envelopment Analysis (DEA or the application of any other evaluation or composition of preferences methodology. Preferences, quality and productivity evaluations are usually measured with errors or subject to influence of other random disturbances. Reducing evaluations to ranks and treating the ranks as estimates of location parameters of random variables, we are able to compute the probability of each DMU being classified as the best according to the consumption of each input and the production of each output. Employing the probabilities of being the best as efficiency measures, we stretch distances between the most efficient units. We combine these partial probabilities in a global efficiency score determined in terms of proximity to the efficiency frontier.
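The rank-randomization step can be sketched with a small Monte Carlo experiment (the normal noise model and the rank vector are assumptions for illustration, not the paper's data): each DMU's observed rank is treated as the location of a random variable, and the probability of being ranked best is estimated by simulation.

```python
import numpy as np

rng = np.random.default_rng(7)
ranks = np.array([1, 2, 3, 5])    # observed ranks of 4 DMUs, 1 = best (invented)

# Treat each rank as the mean of a unit-variance normal and simulate.
draws = ranks + rng.standard_normal((100_000, ranks.size))

# Probability that each DMU attains the lowest (best) simulated rank.
best = draws.argmin(axis=1)
p_best = np.mean(best[:, None] == np.arange(ranks.size), axis=0)
print(p_best.round(3))            # probabilities sum to 1; DMU 0 leads
```

Using these probabilities (rather than the raw ranks) as efficiency scores stretches the distances between the most efficient units, which is the effect the abstract describes.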