WorldWideScience

Sample records for ranking methods based

  1. International Conference on Robust Rank-Based and Nonparametric Methods

    CERN Document Server

    McKean, Joseph

    2016-01-01

    The contributors to this volume include many of the distinguished researchers in this area. Many of these scholars have collaborated with Joseph McKean to develop underlying theory for these methods, obtain small sample corrections, and develop efficient algorithms for their computation. The papers cover the scope of the area, including robust nonparametric rank-based procedures through Bayesian and big data rank-based analyses. Areas of application include biostatistics and spatial areas. Over the last 30 years, robust rank-based and nonparametric methods have developed considerably. These procedures generalize traditional Wilcoxon-type methods for one- and two-sample location problems. Research into these procedures has culminated in complete analyses for many of the models used in practice including linear, generalized linear, mixed, and nonlinear models. Settings are both multivariate and univariate. With the development of R packages in these areas, computation of these procedures is easily shared with r...

  2. Research on B Cell Algorithm for Learning to Rank Method Based on Parallel Strategy.

    Directory of Open Access Journals (Sweden)

    Yuling Tian

    Full Text Available For the purposes of information retrieval, users must find highly relevant documents from within a system (and often a quite large one comprised of many individual documents) based on input query. Ranking the documents according to their relevance within the system to meet user needs is a challenging endeavor, and a hot research topic; there already exist several rank-learning methods based on machine learning techniques which can generate ranking functions automatically. This paper proposes a parallel B cell algorithm, RankBCA, for rank learning which utilizes a clonal selection mechanism based on biological immunity. The novel algorithm is compared with traditional rank-learning algorithms through experimentation and shown to outperform the others with respect to accuracy, learning time, and convergence rate; taken together, the experimental results show that the proposed algorithm indeed effectively and rapidly identifies optimal ranking functions.
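
    As a rough illustration of the clonal-selection idea behind such rank-learning algorithms (a generic sketch, not the paper's RankBCA; all data and parameters below are made up), a population of linear scoring functions can be evolved against a pairwise ranking-accuracy fitness:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 200 documents with 5 features and binary relevance labels.
X = rng.normal(size=(200, 5))
true_w = np.array([1.5, -0.5, 0.8, 0.0, 2.0])
y = (X @ true_w + 0.3 * rng.normal(size=200) > 0.5).astype(int)

def pairwise_accuracy(w):
    """Fraction of (relevant, non-relevant) pairs ranked in the right order."""
    scores = X @ w
    pos, neg = scores[y == 1], scores[y == 0]
    return np.mean(pos[:, None] > neg[None, :])

# Clonal-selection-style search (illustrative, not the paper's RankBCA).
pop = rng.normal(size=(30, X.shape[1]))          # initial antibody population
for generation in range(50):
    fitness = np.array([pairwise_accuracy(w) for w in pop])
    order = np.argsort(fitness)[::-1]
    elite = pop[order[:10]]                      # select the best antibodies
    clones = np.repeat(elite, 3, axis=0)         # clone the elite
    # Hypermutation: clones of lower-fitness parents get larger perturbations.
    rates = np.repeat(1.0 - fitness[order[:10]], 3)[:, None]
    clones += rng.normal(size=clones.shape) * rates
    pop = np.vstack([elite, clones])             # next generation

best = pop[np.argmax([pairwise_accuracy(w) for w in pop])]
print("pairwise ranking accuracy:", round(pairwise_accuracy(best), 3))
```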

  3. Research on B Cell Algorithm for Learning to Rank Method Based on Parallel Strategy

    OpenAIRE

    Tian, Yuling; Zhang, Hongxian

    2016-01-01

    For the purposes of information retrieval, users must find highly relevant documents from within a system (and often a quite large one comprised of many individual documents) based on input query. Ranking the documents according to their relevance within the system to meet user needs is a challenging endeavor, and a hot research topic; there already exist several rank-learning methods based on machine learning techniques which can generate ranking functions automatically. This paper proposes a...

  4. Recurrent fuzzy ranking methods

    Science.gov (United States)

    Hajjari, Tayebeh

    2012-11-01

    With the increasing development of fuzzy set theory in various scientific fields, there is a growing need to compare fuzzy numbers in different areas. Ranking of fuzzy numbers therefore plays a very important role in linguistic decision-making, engineering, business and some other fuzzy application systems. Several strategies have been proposed for ranking of fuzzy numbers. Each of these techniques has been shown to produce non-intuitive results in certain cases. In this paper, we review some recent ranking methods, which will be useful for the researchers who are interested in this area.
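
    One of the simplest strategies covered by such surveys is centroid-based ranking of triangular fuzzy numbers; the toy sketch below (illustrative values only) also shows how easily ties and counter-intuitive orderings arise, which is exactly the weakness the review discusses:

```python
import numpy as np

def centroid(tfn):
    """Centroid (x-coordinate) of a triangular fuzzy number (a, m, b)."""
    a, m, b = tfn
    return (a + m + b) / 3.0

# Three triangular fuzzy numbers to be ranked (illustrative values).
fuzzy_numbers = {"A": (0.1, 0.3, 0.5), "B": (0.2, 0.3, 0.4), "C": (0.15, 0.4, 0.45)}

ranked = sorted(fuzzy_numbers, key=lambda k: centroid(fuzzy_numbers[k]), reverse=True)
print(ranked)  # A and B tie on the centroid even though their spreads differ
```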

  5. The effect of uncertainties in distance-based ranking methods for multi-criteria decision making

    Science.gov (United States)

    Jaini, Nor I.; Utyuzhnikov, Sergei V.

    2017-08-01

    Data in multi-criteria decision making are often imprecise and changeable. Therefore, it is important to carry out a sensitivity analysis for the multi-criteria decision making problem. This paper presents a sensitivity analysis for some ranking techniques based on distance measures in multi-criteria decision making. Two types of uncertainty are considered for the sensitivity analysis. The first uncertainty is related to the input data, while the second concerns the Decision Maker's preferences (weights). The ranking techniques considered in this study are TOPSIS, the relative distance and the trade-off ranking methods. TOPSIS and the relative distance method measure a distance from an alternative to the ideal and anti-ideal solutions. In turn, the trade-off ranking calculates a distance of an alternative to the extreme solutions and other alternatives. Several test cases are considered to study the performance of each ranking technique under both types of uncertainty.
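
    A minimal TOPSIS sketch with a crude weight-perturbation loop conveys the kind of sensitivity test described here; the decision matrix, weights and perturbation scale are all illustrative assumptions:

```python
import numpy as np

def topsis(decision_matrix, weights, benefit):
    """Rank alternatives by relative closeness to the ideal solution."""
    X = np.asarray(decision_matrix, dtype=float)
    w = np.asarray(weights, dtype=float)
    norm = X / np.linalg.norm(X, axis=0)          # vector normalization
    v = norm * w                                  # weighted normalized matrix
    ideal = np.where(benefit, v.max(axis=0), v.min(axis=0))
    anti = np.where(benefit, v.min(axis=0), v.max(axis=0))
    d_plus = np.linalg.norm(v - ideal, axis=1)    # distance to ideal
    d_minus = np.linalg.norm(v - anti, axis=1)    # distance to anti-ideal
    closeness = d_minus / (d_plus + d_minus)
    return np.argsort(-closeness), closeness

# Four alternatives, three criteria (first two benefits, last a cost).
M = [[7, 9, 9], [8, 7, 8], [9, 6, 8], [6, 7, 6]]
base_w = np.array([0.4, 0.35, 0.25])
benefit = np.array([True, True, False])

print("baseline ranking:", topsis(M, base_w, benefit)[0])

# Crude weight-perturbation test in the spirit of a sensitivity analysis.
rng = np.random.default_rng(1)
for _ in range(3):
    w = base_w + rng.normal(scale=0.05, size=3)
    w = np.clip(w, 1e-6, None); w /= w.sum()
    print("perturbed ranking:", topsis(M, w, benefit)[0])
```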

  6. Evaluating user reputation in online rating systems via an iterative group-based ranking method

    Science.gov (United States)

    Gao, Jian; Zhou, Tao

    2017-05-01

    Reputation is a valuable asset in online social lives and it has drawn increased attention. Due to the existence of noisy ratings and spamming attacks, how to evaluate user reputation in online rating systems is especially significant. However, most of the previous ranking-based methods either follow a debatable assumption or have unsatisfactory robustness. In this paper, we propose an iterative group-based ranking method by introducing an iterative reputation-allocation process into the original group-based ranking method. More specifically, the reputation of users is calculated based on the weighted sizes of the user rating groups after grouping all users by their rating similarities, and the ratings of high-reputation users have larger weights in dominating the corresponding user rating groups. The reputation of users and the user rating group sizes are iteratively updated until they become stable. Results on two real data sets with artificial spammers suggest that the proposed method has better performance than the state-of-the-art methods and that its robustness is considerably improved compared with the original group-based ranking method. Our work highlights the positive role of considering users' grouping behaviors towards better online user reputation evaluation.

  7. A Multiobjective Programming Method for Ranking All Units Based on Compensatory DEA Model

    Directory of Open Access Journals (Sweden)

    Haifang Cheng

    2014-01-01

    Full Text Available In order to rank all decision making units (DMUs) on the same basis, this paper proposes a multiobjective programming (MOP) model based on a compensatory data envelopment analysis (DEA) model to derive a common set of weights that can be used for the full ranking of all DMUs. We first revisit a compensatory DEA model for ranking all units, point out the existing problem in solving the model, and present an improved algorithm with which an approximate global optimal solution of the model can be obtained by solving a sequence of linear programs. Then, we apply the key idea of the compensatory DEA model to develop the MOP model, in which the objectives are to simultaneously maximize all common weights under the constraints that the sum of efficiency values of all DMUs is equal to unity and the sum of all common weights is also equal to unity. In order to solve the MOP model, we transform it into a single objective programming (SOP) model using a fuzzy programming method and solve the SOP model using the proposed approximation algorithm. To illustrate the proposed ranking method, two numerical examples are solved.

  8. NDRC: A Disease-Causing Genes Prioritized Method Based on Network Diffusion and Rank Concordance.

    Science.gov (United States)

    Fang, Minghong; Hu, Xiaohua; Wang, Yan; Zhao, Junmin; Shen, Xianjun; He, Tingting

    2015-07-01

    Disease-causing gene prioritization is very important for understanding disease mechanisms and for biomedical applications, such as the design of drugs. Previous studies have shown that promising candidate genes are mostly ranked according to their relatedness to known disease genes or closely related disease genes. Therefore, a dangling gene (isolated gene) with no edges in the network cannot be effectively prioritized. These approaches tend to prioritize those genes that are highly connected in the PPI network while performing poorly when they are applied to loosely connected disease genes. To address these problems, we propose a new disease-causing gene prioritization method based on network diffusion and rank concordance (NDRC). The method is evaluated by leave-one-out cross validation on 1931 diseases in which at least one gene is known to be involved, and it is able to rank the true causal gene first in 849 of all 2542 cases. The experimental results suggest that NDRC significantly outperforms other existing methods such as RWR, VAVIEN, DADA and PRINCE on identifying loosely connected disease genes and successfully puts dangling genes forward as potential candidate disease genes. Furthermore, we apply the NDRC method to study three representative diseases, Meckel syndrome 1, Protein C deficiency and Peroxisome biogenesis disorder 1A (Zellweger). Our study has also found that certain complex disease-causing genes can be divided into several modules that are closely associated with different disease phenotypes.
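
    Random walk with restart (RWR), one of the baselines named above, is the classic way to diffuse information from known disease genes over a network; a small sketch follows (toy graph and restart probability are illustrative, and this is not the NDRC method itself):

```python
import numpy as np

# Toy undirected protein network as an adjacency matrix (6 proteins).
A = np.array([
    [0, 1, 1, 0, 0, 0],
    [1, 0, 1, 1, 0, 0],
    [1, 1, 0, 0, 1, 0],
    [0, 1, 0, 0, 1, 1],
    [0, 0, 1, 1, 0, 0],
    [0, 0, 0, 1, 0, 0],
], dtype=float)

def rwr(adj, seeds, restart=0.3, tol=1e-10):
    """Random walk with restart: diffuse from known disease genes (seeds)."""
    W = adj / adj.sum(axis=0, keepdims=True)        # column-normalized transitions
    p0 = np.zeros(adj.shape[0]); p0[seeds] = 1.0 / len(seeds)
    p = p0.copy()
    while True:
        p_next = (1 - restart) * W @ p + restart * p0
        if np.abs(p_next - p).sum() < tol:
            return p_next
        p = p_next

scores = rwr(A, seeds=[0])            # seed with one known disease gene
print(np.argsort(-scores))            # candidate genes ranked by diffusion score
```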

  9. A Point-Set-Based Footprint Model and Spatial Ranking Method for Geographic Information Retrieval

    Directory of Open Access Journals (Sweden)

    Yong Gao

    2016-07-01

    Full Text Available In the recent big data era, massive amounts of spatially related data are continuously generated and gathered from various sources, and accurate geographic information is urgently demanded. How to accurately retrieve desired geographic information has become a prominent issue that needs to be resolved with high priority. The key technologies in geographic information retrieval are modeling document footprints and ranking documents based on their similarity evaluation. Traditional spatial similarity evaluation methods are mainly performed using an MBR (Minimum Bounding Rectangle) footprint model. However, due to its simplified and rough nature, the results of traditional methods tend to be isotropic and space-redundant. In this paper, a new model that constructs the footprints in the form of point-sets is presented. The point-set-based footprint coincides with the nature of place names in web pages, so it is redundancy-free, consistent, accurate, and anisotropic in describing the spatial extents of documents, and it can handle multi-scale geographic information. The corresponding spatial ranking method is also presented based on the point-set-based model. The new similarity evaluation algorithm of this method first measures multiple distances for spatial proximity across different scales, and then combines the frequency of place names to improve accuracy and precision. The experimental results show that the proposed method outperforms the traditional methods with higher accuracies under different searching scenarios.

  10. A New Method for Defuzzification and Ranking of Fuzzy Numbers Based on the Statistical Beta Distribution

    Directory of Open Access Journals (Sweden)

    A. Rahmani

    2016-01-01

    Full Text Available Granular computing is an emerging computing theory and paradigm that deals with the processing of information granules, which are defined as a number of information entities grouped together due to their similarity, physical adjacency, or indistinguishability. In most aspects of human reasoning, these granules have an uncertain formation, so the concept of granularity of fuzzy information could be of special interest for the applications where fuzzy sets must be converted to crisp sets to avoid uncertainty. This paper proposes a novel method of defuzzification based on the mean value of the statistical Beta distribution and an algorithm for ranking fuzzy numbers based on the crisp number ranking system on R. The proposed method is quite easy to use, but the main reason for following this approach is the equality of the left spread, right spread, and mode of the Beta distribution with their corresponding values in fuzzy numbers within the (0,1) interval, in addition to the fact that the resulting method can satisfy all reasonable properties of fuzzy quantity ordering defined by Wang et al. The algorithm is illustrated through several numerical examples and it is then compared with some of the other methods provided in the literature.
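
    A heavily simplified sketch of the general idea: map a fuzzy number on (0,1) to a Beta distribution whose mode matches the fuzzy mode, then use the Beta mean as the crisp value. The concentration parameter below is an illustrative assumption, not the paper's exact construction from the left and right spreads:

```python
def beta_mean_defuzzify(mode, concentration=6.0):
    """Crisp value for a fuzzy number on (0, 1) via the mean of a Beta
    distribution whose mode equals the fuzzy mode.  The concentration
    (alpha + beta) is an illustrative choice, not taken from the paper."""
    a = 1.0 + mode * (concentration - 2.0)
    b = 1.0 + (1.0 - mode) * (concentration - 2.0)
    return a / (a + b)                      # mean of Beta(a, b)

# Rank three fuzzy numbers on (0, 1) by their defuzzified values.
modes = {"A": 0.35, "B": 0.50, "C": 0.42}
crisp = {k: beta_mean_defuzzify(m) for k, m in modes.items()}
print(sorted(crisp, key=crisp.get, reverse=True), crisp)
```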

  11. Statistical methods for ranking data

    CERN Document Server

    Alvo, Mayer

    2014-01-01

    This book introduces advanced undergraduate, graduate students and practitioners to statistical methods for ranking data. An important aspect of nonparametric statistics is oriented towards the use of ranking data. Rank correlation is defined through the notion of distance functions and the notion of compatibility is introduced to deal with incomplete data. Ranking data are also modeled using a variety of modern tools such as CART, MCMC, EM algorithm and factor analysis. This book deals with statistical methods used for analyzing such data and provides a novel and unifying approach for hypotheses testing. The techniques described in the book are illustrated with examples and the statistical software is provided on the authors’ website.
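
    Rank correlations defined through distance functions, such as Kendall's tau (pairwise discordances) and Spearman's rho (squared rank differences), are readily available in scipy; a minimal illustration:

```python
from scipy.stats import kendalltau, spearmanr

# Two complete rankings of five items (rank 1 = best).
r1 = [1, 2, 3, 4, 5]
r2 = [2, 1, 3, 5, 4]

tau, _ = kendalltau(r1, r2)        # based on pairwise discordances
rho, _ = spearmanr(r1, r2)         # based on squared rank differences
print(f"Kendall tau = {tau:.3f}, Spearman rho = {rho:.3f}")
```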

  12. The optimized expansion based low-rank method for wavefield extrapolation

    KAUST Repository

    Wu, Zedong

    2014-03-01

    Spectral methods are fast becoming an indispensable tool for wavefield extrapolation, especially in anisotropic media, because they tend to be dispersion and artifact free as well as highly accurate when solving the wave equation. However, for inhomogeneous media, we face difficulties in dealing with the mixed space-wavenumber domain extrapolation operator efficiently. To solve this problem, we evaluated an optimized expansion method that can approximate this operator with a low-rank variable separation representation. The rank defines the number of inverse Fourier transforms for each time extrapolation step, and thus, the lower the rank, the faster the extrapolation. The method uses optimization instead of matrix decomposition to find the optimal wavenumbers and velocities needed to approximate the full operator with its explicit low-rank representation. As a result, we obtain lower rank representations compared with the standard low-rank method within reasonable accuracy and thus cheaper extrapolations. Additional bounds set on the range of propagated wavenumbers to adhere to the physical wave limits yield unconditionally stable extrapolations regardless of the time step. An application on the BP model provided superior results compared to those obtained using the decomposition approach. For transversely isotropic media, because we used the pure P-wave dispersion relation, we obtained solutions that were free of shear wave artifacts, and the algorithm does not require that η > 0. In addition, the required rank for the optimization approach to obtain high accuracy in anisotropic media was lower than that obtained by the decomposition approach, and thus, it was more efficient. A reverse time migration result for the BP tilted transverse isotropy model using this method as a wave propagator demonstrated the ability of the algorithm.
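
    For contrast with the optimized expansion, the standard decomposition route can be sketched in a few lines: build the mixed space-wavenumber phase operator for a toy 1-D velocity model and separate it with a truncated SVD. Grid sizes, velocities and the time step below are illustrative, not the paper's setup:

```python
import numpy as np

# 1-D toy medium: mixed-domain extrapolation phase operator W[x, k].
nx, nk, dt = 200, 128, 2e-3
v = np.linspace(1500.0, 3500.0, nx)            # velocity varies in space (m/s)
k = np.linspace(0.0, 0.25, nk) * 2 * np.pi     # wavenumbers (1/m)
W = np.exp(1j * np.outer(v, k) * dt)           # full mixed-domain operator

# Truncated SVD gives a separated (low-rank) representation
# W ~ sum_r a_r(x) * b_r(k); the rank sets the number of FFTs per time step.
U, s, Vh = np.linalg.svd(W, full_matrices=False)
for rank in (1, 2, 3, 4):
    W_r = (U[:, :rank] * s[:rank]) @ Vh[:rank]
    err = np.linalg.norm(W - W_r) / np.linalg.norm(W)
    print(f"rank {rank}: relative error {err:.2e}")
```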

  13. Rank-Based Methods for Selection of Landscape Metrics for Land Cover Pattern Change Detection

    Directory of Open Access Journals (Sweden)

    Priyakant Sinha

    2016-02-01

    Full Text Available Often landscape metrics are not thoroughly evaluated with respect to remote sensing data characteristics, such as their behavior in relation to variation in spatial and temporal resolution, number of land cover classes or dominant land cover categories. In such circumstances, it may be difficult to ascertain whether a change in a metric is due to landscape pattern change or due to the inherent variability in multi-temporal data. This study builds on this important consideration and proposes a rank-based metric selection process through computation of four difference-based indices (β, γ, ξ and θ) using a Max–Min/Max normalization approach. Land cover classification was carried out for two contrasting provinces, the Liverpool Range (LR) and Liverpool Plains (LP), of the Brigalow Belt South Bioregion (BBSB) of NSW, Australia. Landsat images, Multi Spectral Scanner (MSS) of 1972–1973 and TM of 1987–1988, 1993–1994, 1999–2000 and 2009–2010, were classified using object-based image analysis methods. A total of 30 landscape metrics were computed and their sensitivities towards variation in spatial and temporal resolutions, number of land cover classes and dominant land cover categories were evaluated by computing a score based on Max–Min/Max normalization. The landscape metrics selected on the basis of the proposed methods (Diversity index (MSIDI), Area weighted mean patch fractal dimension (SHAPE_AM), Mean core area (CORE_MN), Total edge (TE), No. of patches (NP), Contagion index (CONTAG), Mean nearest neighbor index (ENN_MN) and Mean patch fractal dimension (FRAC_MN)) were successful and effective in identifying changes over five different change periods. Major changes in land cover pattern after 1993 were observed, and though the trends were similar in both cases, the LP region became more fragmented than the LR. The proposed method was straightforward to apply, and can deal with multiple metrics when selection of an appropriate set can become
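
    The scoring step can be sketched generically: for each metric, compute its Max–Min/Max variation across data configurations and rank the metrics by stability (the metric names are taken from the abstract, but the values below are made up for illustration):

```python
import numpy as np

# Hypothetical values of three landscape metrics computed under four
# different data configurations (e.g. varying spatial resolution).
metrics = {
    "NP":       np.array([120.0, 131.0, 118.0, 125.0]),
    "CONTAG":   np.array([55.2, 54.8, 55.9, 55.0]),
    "SHAPE_AM": np.array([1.42, 1.44, 1.40, 1.43]),
}

def max_min_max(values):
    """Relative variation of a metric across configurations: (max - min) / max."""
    return (values.max() - values.min()) / values.max()

scores = {name: max_min_max(v) for name, v in metrics.items()}
# Lower variation = more stable metric = better candidate for change detection.
for name in sorted(scores, key=scores.get):
    print(f"{name}: {scores[name]:.3f}")
```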

  14. Sequential rank agreement methods for comparison of ranked lists

    DEFF Research Database (Denmark)

    Ekstrøm, Claus Thorn; Gerds, Thomas Alexander; Jensen, Andreas Kryger

    2015-01-01

    The comparison of alternative rankings of a set of items is a general and prominent task in applied statistics. Predictor variables are ranked according to magnitude of association with an outcome, prediction models rank subjects according to the personalized risk of an event, and genetic studies ... are illustrated using gene rankings, and using data from two Danish ovarian cancer studies where we assess the within and between agreement of different statistical classification methods.

  15. Augmenting the Deliberative Method for Ranking Risks.

    Science.gov (United States)

    Susel, Irving; Lasley, Trace; Montezemolo, Mark; Piper, Joel

    2016-01-01

    The Department of Homeland Security (DHS) characterized and prioritized the physical cross-border threats and hazards to the nation stemming from terrorism, market-driven illicit flows of people and goods (illegal immigration, narcotics, funds, counterfeits, and weaponry), and other nonmarket concerns (movement of diseases, pests, and invasive species). These threats and hazards pose a wide diversity of consequences with very different combinations of magnitudes and likelihoods, making it very challenging to prioritize them. This article presents the approach that was used at DHS to arrive at a consensus regarding the threats and hazards that stand out from the rest based on the overall risk they pose. Due to time constraints for the decision analysis, it was not feasible to apply multiattribute methodologies like multiattribute utility theory or the analytic hierarchy process. Using a holistic approach was considered, such as the deliberative method for ranking risks first published in this journal. However, an ordinal ranking alone does not indicate relative or absolute magnitude differences among the risks. Therefore, the use of the deliberative method for ranking risks is not sufficient for deciding whether there is a material difference between the top-ranked and bottom-ranked risks, let alone deciding what the stand-out risks are. To address this limitation of ordinal rankings, the deliberative method for ranking risks was augmented by adding an additional step to transform the ordinal ranking into a ratio scale ranking. This additional step enabled the selection of stand-out risks to help prioritize further analysis. © 2015 Society for Risk Analysis.

  16. Citation graph based ranking in Invenio

    CERN Document Server

    Marian, Ludmila; Rajman, Martin; Vesely, Martin

    2010-01-01

    Invenio is the web-based integrated digital library system developed at CERN. Within this framework, we present four types of ranking models based on the citation graph that complement the simple approach based on citation counts: time-dependent citation counts, a relevancy ranking which extends the PageRank model, a time-dependent ranking which combines the freshness of citations with PageRank and a ranking that takes into consideration the external citations. We present our analysis and results obtained on two main data sets: Inspire and CERN Document Server. Our main contributions are: (i) a study of the currently available ranking methods based on the citation graph; (ii) the development of new ranking methods that correct some of the identified limitations of the current methods such as treating all citations of equal importance, not taking time into account or considering the citation graph complete; (iii) a detailed study of the key parameters for these ranking methods. (The original publication is ava...
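
    A minimal sketch of the freshness-weighted idea (not Invenio's implementation): weight each citation edge by an exponential decay in its age and run PageRank on the weighted graph. The toy graph, decay rate and damping factor are illustrative assumptions:

```python
import numpy as np

# Toy citation graph: cites[i, j] = 1 if paper i cites paper j.
cites = np.array([
    [0, 1, 1, 0],
    [0, 0, 1, 0],
    [0, 0, 0, 1],
    [1, 0, 0, 0],
], dtype=float)
years = np.array([2009, 2007, 2004, 2001])     # publication year of the citing paper

# Weight each citation by its freshness (exponential decay with age).
decay = 0.85
age = 2010 - years
W = cites * (decay ** age)[:, None]

def pagerank(weights, d=0.85, iters=200):
    """Power iteration on a column-stochastic matrix built from edge weights."""
    n = weights.shape[0]
    out = weights.sum(axis=1, keepdims=True)
    P = np.where(out > 0, weights / np.where(out == 0, 1, out), 1.0 / n).T
    r = np.full(n, 1.0 / n)
    for _ in range(iters):
        r = (1 - d) / n + d * P @ r
    return r / r.sum()

print(pagerank(W))     # citation-graph ranking with time-weighted links
```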

  17. A comparison of sequential and information-based methods for determining the co-integration rank in heteroskedastic VAR MODELS

    DEFF Research Database (Denmark)

    Cavaliere, Giuseppe; Angelis, Luca De; Rahbek, Anders

    2015-01-01

    In this article, we investigate the behaviour of a number of methods for estimating the co-integration rank in VAR systems characterized by heteroskedastic innovation processes. In particular, we compare the efficacy of the most widely used information criteria, such as the Akaike Information Criterion (AIC) and the Bayesian Information Criterion (BIC), with the commonly used sequential approach of Johansen [Likelihood-based Inference in Cointegrated Vector Autoregressive Models (1996)] based around the use of either asymptotic or wild bootstrap-based likelihood ratio type tests. Complementing recent work done for the latter in Cavaliere, Rahbek and Taylor [Econometric Reviews (2014) forthcoming], we establish the asymptotic properties of the procedures based on information criteria in the presence of heteroskedasticity (conditional or unconditional) of a quite general and unknown form...

  18. A multicriteria model for ranking of improvement approaches in construction companies based on the PROMETHÉE II method

    Directory of Open Access Journals (Sweden)

    Renata Maciel de Melo

    2015-03-01

    Full Text Available The quality of the construction production process may be improved using several different methods such as Lean Construction, ISO 9001, ISO 14001 or ISO 18001. Construction companies need a preliminary study and systematic implementation of changes to become more competitive and efficient. This paper presents a multicriteria decision model for the selection and ranking of such alternatives for improvement approaches regarding the aspects of quality, sustainability and safety, based on the PROMETHEE II method. The adoption of this model provides more confidence and visibility for decision makers. One of the differentiators of this model is the use of a fragmented set of improvement alternatives. These alternatives were combined with some restrictions to create a global set of alternatives. An application to three scenarios, considering realistic data, was developed. The results of the application show that the model should be incorporated into the strategic planning process of organizations.
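
    A compact PROMETHEE II sketch using the simple "usual criterion" preference function (strict preference without thresholds); the criteria values and weights are illustrative, and a full application would add a preference function and thresholds per criterion:

```python
import numpy as np

def promethee_ii(X, weights, maximize):
    """Net outranking flows with the 'usual criterion' preference function:
    P(a, b) = 1 if a is strictly better than b on the criterion, else 0."""
    X = np.asarray(X, dtype=float)
    n, _ = X.shape
    X = np.where(maximize, X, -X)        # orient every criterion as "larger is better"
    phi = np.zeros(n)
    for a in range(n):
        for b in range(n):
            if a == b:
                continue
            pref_ab = np.dot(weights, (X[a] > X[b]).astype(float))
            pref_ba = np.dot(weights, (X[b] > X[a]).astype(float))
            phi[a] += pref_ab - pref_ba
    return phi / (n - 1)                 # net flow; rank by decreasing value

# Alternatives scored on quality, sustainability and safety criteria.
X = [[8, 6, 7], [7, 8, 6], [9, 5, 8]]
w = np.array([0.5, 0.3, 0.2])
phi = promethee_ii(X, w, maximize=[True, True, True])
print(np.argsort(-phi), phi)
```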

  19. A Modification on the Hesitant Fuzzy Set Lexicographical Ranking Method

    Directory of Open Access Journals (Sweden)

    Xiaodi Liu

    2016-12-01

    Full Text Available Recently, a novel hesitant fuzzy set (HFS ranking technique based on the idea of lexicographical ordering is proposed and an example is presented to demonstrate that the proposed ranking method is invariant with multiple occurrences of any element of a hesitant fuzzy element (HFE. In this paper, we show by examples that the HFS lexicographical ordering method is sometimes invalid, and a modified ranking method is presented. In comparison with the HFS lexicographical ordering method, the modified ranking method is more reasonable in more general cases.

  20. Critical review of methods for risk ranking of food related hazards, based on risks for human health

    DEFF Research Database (Denmark)

    van der Fels-Klerx, H. J.; van Asselt, E. D.; Raley, M.

    2017-01-01

    ... multi-criteria decision analysis, risk matrix, flow charts/decision trees, stated preference techniques and expert synthesis. Method categories were described by their characteristics, weaknesses and strengths, data resources, and fields of application. It was concluded there is no single best method for risk ranking...

  1. Application of fuzzy-MOORA method: Ranking of components for reliability estimation of component-based software systems

    Directory of Open Access Journals (Sweden)

    Zeeshan Ali Siddiqui

    2016-01-01

    Full Text Available Component-based software system (CBSS) development is an emerging discipline that promises to take software development into a new era. As hardware systems are presently being constructed from kits of parts, software systems may also be assembled from components. It is more reliable to reuse software than to create it. It is the glue code and the reliability of the individual components that contribute to the reliability of the overall system. Every component contributes to overall system reliability according to the number of times it is used; this is known as the usage frequency of the component, and some components are of critical usage. The usage frequency decides the weight of each component, and according to these weights each component contributes to the overall reliability of the system. Therefore, a ranking of components may be obtained by analyzing their reliability impacts on the overall application. In this paper, we propose the application of fuzzy multi-objective optimization on the basis of ratio analysis (Fuzzy-MOORA). The method helps us find the most suitable alternative, a software component, from a set of available feasible alternatives named software components. It is an accurate and easy to understand tool for solving multi-criteria decision making problems that have imprecise and vague evaluation data. By the use of ratio analysis, the proposed method determines the most suitable alternative among all possible alternatives, and dimensionless measurement realizes the job of ranking components for estimating CBSS reliability in a non-subjective way. Finally, three case studies are shown to illustrate the use of the proposed technique.
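
    The crisp MOORA ratio system underlying the fuzzy variant can be sketched briefly (the fuzzy extension in the paper replaces crisp entries with triangular fuzzy numbers); the component scores and weights below are illustrative:

```python
import numpy as np

def moora(X, weights, benefit):
    """MOORA ratio system: normalize each criterion by its Euclidean norm,
    then add weighted benefit ratios and subtract weighted cost ratios."""
    X = np.asarray(X, dtype=float)
    N = X / np.sqrt((X ** 2).sum(axis=0))       # ratio normalization
    signs = np.where(benefit, 1.0, -1.0)
    y = (N * weights * signs).sum(axis=1)       # assessment value per alternative
    return np.argsort(-y), y

# Components scored on usage frequency, fault density (cost) and coupling (cost).
X = [[0.30, 12, 5], [0.45, 9, 7], [0.25, 15, 4]]
w = np.array([0.5, 0.3, 0.2])
benefit = np.array([True, False, False])
order, y = moora(X, w, benefit)
print("component ranking:", order, y)
```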

  2. PageRank as a method to rank biomedical literature by importance.

    Science.gov (United States)

    Yates, Elliot J; Dixon, Louise C

    2015-01-01

    Optimal ranking of literature importance is vital in overcoming article overload. Existing ranking methods are typically based on raw citation counts, giving a sum of 'inbound' links with no consideration of citation importance. PageRank, an algorithm originally developed for ranking webpages at the search engine Google, could potentially be adapted to bibliometrics to quantify the relative importance weightings of a citation network. This article seeks to validate such an approach on the freely available PubMed Central open access subset (PMC-OAS) of biomedical literature. On-demand cloud computing infrastructure was used to extract a citation network from over 600,000 full-text PMC-OAS articles. PageRanks and citation counts were calculated for each node in this network. PageRank is highly correlated with citation count (R = 0.905, P ...). PageRank can be trivially computed on commodity cluster hardware and is linearly correlated with citation count. Given its putative benefits in quantifying relative importance, we suggest it may enrich the citation network, thereby overcoming the existing inadequacy of citation counts alone. We thus suggest PageRank as a feasible supplement to, or replacement of, existing bibliometric ranking methods.
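
    The core comparison can be reproduced in miniature on a synthetic citation network: compute PageRank by power iteration and correlate it with raw citation counts. The network size, density and damping factor below are arbitrary choices, not the PMC-OAS pipeline:

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(42)

# Random sparse "citation network": row i cites column j.
n = 2000
A = (rng.random((n, n)) < 0.002).astype(float)
np.fill_diagonal(A, 0.0)

# Power-iteration PageRank (dangling papers redistribute uniformly).
d, r = 0.85, np.full(n, 1.0 / n)
out = A.sum(axis=1)
for _ in range(100):
    spread = np.where(out > 0, r / np.where(out == 0, 1, out), 0.0)
    dangling = r[out == 0].sum() / n
    r = (1 - d) / n + d * (A.T @ spread + dangling)

citation_counts = A.sum(axis=0)                 # inbound links per paper
print(pearsonr(r, citation_counts))             # typically strongly correlated
```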

  3. Ecological Efficiency Based Ranking of Cities: A Combined DEA Cross-Efficiency and Shannon’s Entropy Method

    Directory of Open Access Journals (Sweden)

    Corrado lo Storto

    2016-01-01

    Full Text Available In this paper, a method is proposed to calculate a comprehensive index of the ecological efficiency of a city by combining the measurements provided by several Data Envelopment Analysis (DEA) cross-efficiency models using the Shannon's entropy index. The DEA models include non-discretionary uncontrollable inputs, desirable and undesirable outputs. The method is implemented to compute the ecological efficiency of a sample of 116 Italian provincial capital cities in 2011 as a case study. Results emerging from the case study show that the proposed index has good discrimination power and performs better than the ranking provided by the Sole24Ore, which is generally used in Italy to conduct benchmarking studies. While the sustainability index proposed by the Sole24Ore utilizes a set of subjective weights to aggregate individual indicators, the adoption of the DEA-based method limits the subjectivity to the selection of the models. The ecological efficiency measurements generated by the implementation of the method for the Italian cities indicate that they perform very differently, and generally the largest cities in terms of population size achieve a higher efficiency score.

  4. Monte Carlo methods of PageRank computation

    NARCIS (Netherlands)

    Litvak, Nelli

    2004-01-01

    We describe and analyze an on-line Monte Carlo method of PageRank computation. The PageRank is estimated based on the results of a large number of short independent simulation runs initiated from each page that contains outgoing hyperlinks. The method does not require any storage of the hyperlink
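
    One simple Monte Carlo variant of this idea runs short random walks from every page, terminating each step with probability 1 − d, and uses normalized visit counts as the PageRank estimate; a toy sketch (the graph and run counts are illustrative):

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy web graph given as adjacency lists of outgoing hyperlinks.
links = {0: [1, 2], 1: [2], 2: [0], 3: [0, 2]}
n, d, runs = 4, 0.85, 2000

# Monte Carlo estimate: short random walks started from every page;
# a walk terminates at each step with probability 1 - d.
visits = np.zeros(n)
for start in range(n):
    for _ in range(runs):
        page = start
        while True:
            visits[page] += 1
            if rng.random() > d or not links[page]:
                break
            page = rng.choice(links[page])

print(visits / visits.sum())      # approximates the PageRank vector
```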

  5. Efficient l1-norm-based low-rank matrix approximations for large-scale problems using alternating rectified gradient method.

    Science.gov (United States)

    Kim, Eunwoo; Lee, Minsik; Choi, Chong-Ho; Kwak, Nojun; Oh, Songhwai

    2015-02-01

    Low-rank matrix approximation plays an important role in the area of computer vision and image processing. Most of the conventional low-rank matrix approximation methods are based on the l2-norm (Frobenius norm) with principal component analysis (PCA) being the most popular among them. However, this can give a poor approximation for data contaminated by outliers (including missing data), because the l2-norm exaggerates the negative effect of outliers. Recently, to overcome this problem, various methods based on the l1-norm, such as robust PCA methods, have been proposed for low-rank matrix approximation. Despite the robustness of the methods, they require heavy computational effort and substantial memory for high-dimensional data, which is impractical for real-world problems. In this paper, we propose two efficient low-rank factorization methods based on the l1-norm that find proper projection and coefficient matrices using the alternating rectified gradient method. The proposed methods are applied to a number of low-rank matrix approximation problems to demonstrate their efficiency and robustness. The experimental results show that our proposals are efficient in both execution time and reconstruction performance unlike other state-of-the-art methods.

  6. Ranking the Online Documents Based on Relative Credibility Measures

    Directory of Open Access Journals (Sweden)

    Ahmad Dahlan

    2009-05-01

    Full Text Available Information searching is the most popular activity on the Internet. Usually the search engine provides the search results ranked by relevance. However, for certain purposes that concern information credibility, particularly citing information for scientific works, another approach to ranking the search engine results is required. This paper presents a study on developing a new ranking method based on the credibility of information. The method is built upon two well-known algorithms, PageRank and Citation Analysis. The result of the experiment, which used the Spearman Rank Correlation Coefficient to compare the proposed rank (generated by the method) with the standard rank (generated manually by a group of experts), showed that the average Spearman 0 < rS < critical value. It means that the correlation was proven but it was not significant. Hence the proposed rank does not satisfy the standard, but the performance could be improved.
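
    The evaluation step described here boils down to a Spearman rank correlation between the proposed ranking and the expert ranking, compared against a critical value; a minimal sketch with made-up positions (the critical value shown is the approximate two-sided 5% table value for n = 10):

```python
from scipy.stats import spearmanr

# Positions assigned to ten documents by the proposed credibility-based
# ranking and by a panel of experts (illustrative numbers).
proposed = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
expert   = [2, 1, 5, 3, 4, 8, 6, 10, 7, 9]

rs, p_value = spearmanr(proposed, expert)
critical = 0.648          # approximate two-sided 5% critical value for n = 10
print(rs, "significant" if rs > critical else "not significant")
```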

  7. Ranking the Online Documents Based on Relative Credibility Measures

    Directory of Open Access Journals (Sweden)

    Ahmad Dahlan

    2013-09-01

    Full Text Available Information searching is the most popular activity on the Internet. Usually the search engine provides the search results ranked by relevance. However, for certain purposes that concern information credibility, particularly citing information for scientific works, another approach to ranking the search engine results is required. This paper presents a study on developing a new ranking method based on the credibility of information. The method is built upon two well-known algorithms, PageRank and Citation Analysis. The result of the experiment, which used the Spearman Rank Correlation Coefficient to compare the proposed rank (generated by the method) with the standard rank (generated manually by a group of experts), showed that the average Spearman 0 < rS < critical value. It means that the correlation was proven but it was not significant. Hence the proposed rank does not satisfy the standard, but the performance could be improved.

  8. A new method for class prediction based on signed-rank algorithms applied to Affymetrix® microarray experiments

    Directory of Open Access Journals (Sweden)

    Vassal Aurélien

    2008-01-01

    Full Text Available Abstract Background The huge amount of data generated by DNA chips is a powerful basis to classify various pathologies. However, constant evolution of microarray technology makes it difficult to mix data from different chip types for class prediction of limited sample populations. Affymetrix® technology provides both a quantitative fluorescence signal and a decision (detection call: absent or present) based on signed-rank algorithms applied to several hybridization repeats of each gene, with a per-chip normalization. We developed a new prediction method for class belonging based on the detection call only from recent Affymetrix chip types. Biological data were obtained by hybridization on U133A, U133B and U133Plus 2.0 microarrays of purified normal B cells and cells from three independent groups of multiple myeloma (MM) patients. Results After a call-based data reduction step to filter out non class-discriminative probe sets, the gene list obtained was reduced to a predictor with correction for multiple testing by iterative deletion of probe sets that sequentially improve inter-class comparisons and their significance. The error rate of the method was determined using leave-one-out and 5-fold cross-validation. It was successfully applied to (i) determine a sex predictor with the normal donor group classifying gender with no error in all patient groups except for male MM samples with a Y chromosome deletion, (ii) predict the immunoglobulin light and heavy chains expressed by the malignant myeloma clones of the validation group and (iii) predict sex, light and heavy chain nature for every new patient. Finally, this method was shown to be powerful when compared to the popular classification method Prediction Analysis of Microarray (PAM). Conclusion This normalization-free method is routinely used for quality control and correction of collection errors in patient reports to clinicians. It can be easily extended to multiple class prediction suitable with

  9. Toward optimal feature selection using ranking methods and classification algorithms

    Directory of Open Access Journals (Sweden)

    Novaković Jasmina

    2011-01-01

    Full Text Available We presented a comparison between several feature ranking methods used on two real datasets. We considered six ranking methods that can be divided into two broad categories: statistical and entropy-based. Four supervised learning algorithms are adopted to build models, namely, IB1, Naive Bayes, C4.5 decision tree and the RBF network. We showed that the selection of ranking methods could be important for classification accuracy. In our experiments, ranking methods with different supervised learning algorithms give quite different results for balanced accuracy. Our cases confirm that, in order to be sure that a subset of features giving the highest accuracy has been selected, the use of many different indices is recommended.

  10. Equal opportunity for low-degree network nodes: a PageRank-based method for protein target identification in metabolic graphs.

    Science.gov (United States)

    Bánky, Dániel; Iván, Gábor; Grolmusz, Vince

    2013-01-01

    Biological network data, such as metabolic-, signaling- or physical interaction graphs of proteins are increasingly available in public repositories for important species. Tools for the quantitative analysis of these networks are being developed today. Protein network-based drug target identification methods usually return protein hubs with large degrees in the networks as potentially important targets. Some known, important protein targets, however, are not hubs at all, and perturbing protein hubs in these networks may have several unwanted physiological effects, due to their interaction with numerous partners. Here, we show a novel method applicable in networks with directed edges (such as metabolic networks) that compensates for the low degree (non-hub) vertices in the network, and identifies important nodes, regardless of their hub properties. Our method computes the PageRank for the nodes of the network, and divides the PageRank by the in-degree (i.e., the number of incoming edges) of the node. This quotient is the same in all nodes in an undirected graph (even for large- and low-degree nodes, that is, for hubs and non-hubs as well), but may differ significantly from node to node in directed graphs. We suggest to assign importance to non-hub nodes with large PageRank/in-degree quotient. Consequently, our method gives high scores to nodes with large PageRank, relative to their degrees: therefore non-hub important nodes can easily be identified in large networks. We demonstrate that these relatively high PageRank scores have biological relevance: the method correctly finds numerous already validated drug targets in distinct organisms (Mycobacterium tuberculosis, Plasmodium falciparum and MRSA Staphylococcus aureus), and consequently, it may suggest new possible protein targets as well. Additionally, our scoring method was not chosen arbitrarily: its value for all nodes of all undirected graphs is constant; therefore its high value captures importance in the
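
    The scoring rule itself is short: compute PageRank on the directed graph and divide each node's score by its in-degree; the toy graph below is illustrative, not a real metabolic network:

```python
import numpy as np

# Small directed metabolic-style graph: edge i -> j means i feeds into j.
edges = [(0, 1), (2, 1), (3, 1), (1, 4), (4, 5), (5, 0)]
n = 6
A = np.zeros((n, n))
for i, j in edges:
    A[i, j] = 1.0

# Plain PageRank by power iteration (dangling mass spread uniformly).
d, r = 0.85, np.full(n, 1.0 / n)
out = A.sum(axis=1)
for _ in range(200):
    spread = np.where(out > 0, r / np.where(out == 0, 1, out), 0.0)
    r = (1 - d) / n + d * (A.T @ spread + r[out == 0].sum() / n)

in_degree = A.sum(axis=0)
quotient = np.where(in_degree > 0, r / np.where(in_degree == 0, 1, in_degree), 0.0)
print(np.argsort(-quotient))   # low-degree but "important" nodes float up
```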

  11. Equal opportunity for low-degree network nodes: a PageRank-based method for protein target identification in metabolic graphs.

    Directory of Open Access Journals (Sweden)

    Dániel Bánky

    Full Text Available Biological network data, such as metabolic-, signaling- or physical interaction graphs of proteins are increasingly available in public repositories for important species. Tools for the quantitative analysis of these networks are being developed today. Protein network-based drug target identification methods usually return protein hubs with large degrees in the networks as potentially important targets. Some known, important protein targets, however, are not hubs at all, and perturbing protein hubs in these networks may have several unwanted physiological effects, due to their interaction with numerous partners. Here, we show a novel method applicable in networks with directed edges (such as metabolic networks) that compensates for the low degree (non-hub) vertices in the network, and identifies important nodes, regardless of their hub properties. Our method computes the PageRank for the nodes of the network, and divides the PageRank by the in-degree (i.e., the number of incoming edges) of the node. This quotient is the same in all nodes in an undirected graph (even for large- and low-degree nodes, that is, for hubs and non-hubs as well), but may differ significantly from node to node in directed graphs. We suggest to assign importance to non-hub nodes with large PageRank/in-degree quotient. Consequently, our method gives high scores to nodes with large PageRank, relative to their degrees: therefore non-hub important nodes can easily be identified in large networks. We demonstrate that these relatively high PageRank scores have biological relevance: the method correctly finds numerous already validated drug targets in distinct organisms (Mycobacterium tuberculosis, Plasmodium falciparum and MRSA Staphylococcus aureus), and consequently, it may suggest new possible protein targets as well. Additionally, our scoring method was not chosen arbitrarily: its value for all nodes of all undirected graphs is constant; therefore its high value captures

  12. A cross-benchmark comparison of 87 learning to rank methods

    NARCIS (Netherlands)

    Tax, Niek; Bockting, Sander; Hiemstra, Djoerd

    2015-01-01

    Learning to rank is an increasingly important scientific field that comprises the use of machine learning for the ranking task. New learning to rank methods are generally evaluated on benchmark test collections. However, comparison of learning to rank methods based on evaluation results is hindered

  13. CNN-based ranking for biomedical entity normalization.

    Science.gov (United States)

    Li, Haodi; Chen, Qingcai; Tang, Buzhou; Wang, Xiaolong; Xu, Hua; Wang, Baohua; Huang, Dong

    2017-10-03

    Most state-of-the-art biomedical entity normalization systems, such as rule-based systems, merely rely on morphological information of entity mentions but rarely consider their semantic information. In this paper, we introduce a novel convolutional neural network (CNN) architecture that regards biomedical entity normalization as a ranking problem and benefits from the semantic information of biomedical entities. The CNN-based ranking method first generates candidates using handcrafted rules, and then ranks the candidates according to their semantic information modeled by the CNN as well as their morphological information. Experiments on two benchmark datasets for biomedical entity normalization show that our proposed CNN-based ranking method outperforms the traditional rule-based method, achieving state-of-the-art performance. We propose a CNN architecture that regards biomedical entity normalization as a ranking problem. Comparison results show that semantic information is beneficial to biomedical entity normalization and can be well combined with morphological information in our CNN architecture for further improvement.

  14. Quantum probability ranking principle for ligand-based virtual screening.

    Science.gov (United States)

    Al-Dabbagh, Mohammed Mumtaz; Salim, Naomie; Himmat, Mubarak; Ahmed, Ali; Saeed, Faisal

    2017-04-01

    Chemical libraries contain thousands of compounds that need screening, which increases the need for computational methods that can rank or prioritize compounds. The tools of virtual screening are widely exploited to enhance the cost effectiveness of lead drug discovery programs by ranking chemical compound databases in decreasing probability of biological activity based upon the probability ranking principle (PRP). In this paper, we develop a novel ranking approach for molecular compounds inspired by quantum mechanics, called the quantum probability ranking principle (QPRP). The QPRP ranking criteria draw an analogy between physical experiments and the molecular structure ranking process for 2D fingerprints in ligand-based virtual screening (LBVS). The development of the QPRP criteria in LBVS employs quantum concepts at three different levels. First, at the representation level, the model develops a new framework of molecular representation by connecting molecular compounds with a mathematical quantum space. Second, it estimates the similarity between chemical libraries and references based on a quantum-based similarity searching method. Finally, it ranks the molecules using the QPRP approach. Simulated virtual screening experiments with MDL Drug Data Report (MDDR) data sets showed that QPRP outperformed the classical probability ranking principle (PRP) for molecular chemical compounds.

  15. Quantum probability ranking principle for ligand-based virtual screening

    Science.gov (United States)

    Al-Dabbagh, Mohammed Mumtaz; Salim, Naomie; Himmat, Mubarak; Ahmed, Ali; Saeed, Faisal

    2017-04-01

    Chemical libraries contain thousands of compounds that need screening, which increases the need for computational methods that can rank or prioritize compounds. The tools of virtual screening are widely exploited to enhance the cost effectiveness of lead drug discovery programs by ranking chemical compound databases in decreasing probability of biological activity based upon the probability ranking principle (PRP). In this paper, we develop a novel ranking approach for molecular compounds inspired by quantum mechanics, called the quantum probability ranking principle (QPRP). The QPRP ranking criteria draw an analogy between physical experiments and the molecular structure ranking process for 2D fingerprints in ligand-based virtual screening (LBVS). The development of the QPRP criteria in LBVS employs quantum concepts at three different levels. First, at the representation level, the model develops a new framework of molecular representation by connecting molecular compounds with a mathematical quantum space. Second, it estimates the similarity between chemical libraries and references based on a quantum-based similarity searching method. Finally, it ranks the molecules using the QPRP approach. Simulated virtual screening experiments with MDL Drug Data Report (MDDR) data sets showed that QPRP outperformed the classical probability ranking principle (PRP) for molecular chemical compounds.

  16. Evaluating ranking methods on heterogeneous digital library collections

    CERN Document Server

    Canévet, Olivier; Marian, Ludmila; Chonavel, Thierry

    In the frame of research in particle physics, CERN has been developing its own web-based software /Invenio/ to run the digital library of all the documents related to CERN and fundamental physics. The documents (articles, photos, news, theses, ...) can be retrieved through a search engine. The results matching the query of the user can be displayed in several ways: sorted by latest first, author, title and also ranked by word similarity. The purpose of this project is to study and implement a new ranking method in Invenio: distributed-ranking (D-Rank). This method aims at aggregating several ranking scores coming from different ranking methods into a new score. In addition to query-related scores such as word similarity, the goal of the work is to take into account non-query-related scores such as citations, journal impact factor and in particular scores related to the document access frequency in the database. The idea is that for two equally query-relevant documents, if one has been more downloaded for inst...

  17. Fuzzy Multicriteria Ranking of Aluminium Coating Methods

    Science.gov (United States)

    Batzias, A. F.

    2007-12-01

    This work deals with multicriteria ranking of aluminium coating methods. The alternatives used are: sulfuric acid anodization, A1; oxalic acid anodization, A2; chromic acid anodization, A3; phosphoric acid anodization, A4; integral color anodizing, A5; chemical conversion coating, A6; electrostatic powder deposition, A7. The criteria used are: cost of production, f1; environmental friendliness of production process, f2; appearance (texture), f3; reflectivity, f4; response to coloring, f5; corrosion resistance, f6; abrasion resistance, f7; fatigue resistance, f8. Five experts from relevant industrial units assigned grades to the criteria vector and the preference matrix according to a properly modified Delphi method. Sensitivity analysis of the first-ranked alternative A1 against the 'second best', which was A3 at low and A7 at high resolution levels, proved that the solution is robust. The dependence of anodized product quality on upstream processes is presented and the impact of energy price increases on industrial cost is discussed.

  18. Application of the Simplified Dow Chemical Company Relative Ranking Hazard Assessment Method for Air Combat Command Bases

    Science.gov (United States)

    1993-09-01

    ... and Operability Studies (HAZOP). Both of these methods use systematic ways of considering the consequences of unexpected events. In the What-If Method ... problems and all probable consequences, assigning a probability of hazard to each consequence based on the probability of occurrence. (Davis 1987:50) HAZOP

  19. THE USE OF RANKING SAMPLING METHOD WITHIN MARKETING RESEARCH

    Directory of Open Access Journals (Sweden)

    CODRUŢA DURA

    2011-01-01

    Full Text Available Marketing and statistical literature available to practitioners provides a wide range of sampling methods that can be implemented in the context of marketing research. The ranking sampling method is based on dividing the general population into several strata, namely into several subdivisions which are relatively homogeneous regarding a certain characteristic. In fact, the sample will be composed by selecting, from each stratum, a certain number of components (which can be proportional or non-proportional to the size of the stratum) until the pre-established volume of the sample is reached. Using ranking sampling within marketing research requires the determination of some relevant statistical indicators - average, dispersion, sampling error etc. To that end, the paper contains a case study which illustrates the actual approach used in order to apply the ranking sample method within a marketing research study made by a company which provides Internet connection services, on a particular category of customers – small and medium enterprises.

  20. A network-based dynamical ranking system

    CERN Document Server

    Motegi, Shun

    2012-01-01

    Ranking players or teams in sports is of practical interest. From the viewpoint of networks, a ranking system is equivalent to a centrality measure for sports networks, whereby a directed link represents the result of a single game. Previously proposed network-based ranking systems are derived from static networks, i.e., aggregations of the results of games over time. However, the score (i.e., strength) of a player, for example, depends on time. Defeating a renowned player at peak performance is intuitively more rewarding than defeating the same player in other periods. To account for this factor, we propose a dynamic variant of such a network-based ranking system and apply it to professional men's tennis data. Our ranking system, also interpreted as a centrality measure for directed temporal networks, has two parameters. One parameter represents the exponential decay rate of the past score, and the other parameter controls the effect of indirect wins on the score. We derive a set of linear online update equ...

  1. Weighted Discriminative Dictionary Learning based on Low-rank Representation

    Science.gov (United States)

    Chang, Heyou; Zheng, Hao

    2017-01-01

    Low-rank representation has been widely used in the field of pattern classification, especially when both training and testing images are corrupted with large noise. The dictionary plays an important role in low-rank representation. With respect to the semantic dictionary, the optimal representation matrix should be block-diagonal. However, traditional low-rank representation based dictionary learning methods cannot effectively exploit the discriminative information between data and dictionary. To address this problem, this paper proposes weighted discriminative dictionary learning based on low-rank representation, where a weighted representation regularization term is constructed. The regularization associates the label information of both training samples and dictionary atoms, and encourages the generation of a discriminative representation with a class-wise block-diagonal structure, which can further improve the classification performance when both training and testing images are corrupted with large noise. Experimental results demonstrate the advantages of the proposed method over state-of-the-art methods.

  2. Analysis and research of influence factors ranking of fuzzy language translation accuracy in literary works based on catastrophe progression method

    Directory of Open Access Journals (Sweden)

    Wei Dong

    2017-02-01

    Full Text Available This paper studies the problem of the decline in translation accuracy caused by language “vagueness” in literary translation, and proposes to use the catastrophe model for the importance ranking of the various factors affecting fuzzy language translation accuracy in literary works, finally giving the order of factors to be considered before translation. The multi-level evaluation system can be used to construct the relevant catastrophe progression model, and the normalization formula can be used to calculate the relative membership degree of each system and evaluation index, with the evaluation made in combination with the evaluation criteria table. The results show that, in fuzzy language translation, in order to improve translation accuracy, the indicators need to be considered in the following order: A2 fuzzy language context → A1 word attributes → A3 specific meaning of digital words; B2 fuzzy semantics, B3 blurred color words → B1 multiple meanings of words → B4 fuzzy digital words; C3 combination with context and cultural background, C4 specific connotation of color words → C1 combination with word emotion, C2 selection of word meaning → C5 combination with digits and language background.

  3. An Analytic Hierarchy Process-based Method to Rank the Critical Success Factors of Implementing a Pharmacy Barcode System.

    Science.gov (United States)

    Alharthi, Hana; Sultana, Nahid; Al-Amoudi, Amjaad; Basudan, Afrah

    2015-01-01

    Pharmacy barcode scanning is used to reduce errors during the medication dispensing process. However, this technology has rarely been used in hospital pharmacies in Saudi Arabia. This article describes the barriers to successful implementation of a barcode scanning system in Saudi Arabia. A literature review was conducted to identify the relevant critical success factors (CSFs) for a successful dispensing barcode system implementation. Twenty-eight pharmacists from a local hospital in Saudi Arabia were interviewed to obtain their perception of these CSFs. In this study, planning (process flow issues and training requirements), resistance (fear of change, communication issues, and negative perceptions about technology), and technology (software, hardware, and vendor support) were identified as the main barriers. The analytic hierarchy process (AHP), one of the most widely used tools for decision making in the presence of multiple criteria, was used to compare and rank these identified CSFs. The results of this study suggest that resistance barriers have a greater impact than planning and technology barriers. In particular, fear of change is the most critical factor, and training is the least critical factor.
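
    The AHP step referred to here derives priority weights from a pairwise comparison matrix via its principal eigenvector and checks consistency; a minimal sketch with an illustrative 3x3 matrix for the three barrier groups:

```python
import numpy as np

# Pairwise comparison matrix for three barrier groups
# (planning, resistance, technology) on Saaty's 1-9 scale (illustrative values).
A = np.array([
    [1.0, 1/3, 2.0],
    [3.0, 1.0, 4.0],
    [1/2, 1/4, 1.0],
])

# Priorities = principal right eigenvector, normalized to sum to 1.
eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
w = np.abs(eigvecs[:, k].real)
w /= w.sum()

# Consistency ratio (random index RI = 0.58 for a 3x3 matrix).
ci = (eigvals[k].real - len(A)) / (len(A) - 1)
print("weights:", np.round(w, 3), "CR:", round(ci / 0.58, 3))
```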

  4. Optimal Combination of Classification Algorithms and Feature Ranking Methods for Object-Based Classification of Submeter Resolution Z/I-Imaging DMC Imagery

    Directory of Open Access Journals (Sweden)

    Fulgencio Cánovas-García

    2015-04-01

    Full Text Available Object-based image analysis allows several different features to be calculated for the resulting objects. However, a large number of features means longer computing times and might even result in a loss of classification accuracy. In this study, we use four feature ranking methods (maximum correlation, average correlation, Jeffries–Matusita distance and mean decrease in the Gini index) and five classification algorithms (linear discriminant analysis, naive Bayes, weighted k-nearest neighbors, support vector machines and random forest). The objective is to discover the optimal algorithm and feature subset to maximize accuracy when classifying a set of 1,076,937 objects, produced by the prior segmentation of a 0.45-m resolution multispectral image, with 356 features calculated on each object. The study area is both large (9070 ha) and diverse, which increases the possibility to generalize the results. The mean decrease in the Gini index was found to be the feature ranking method that provided highest accuracy for all of the classification algorithms. In addition, support vector machines and random forest obtained the highest accuracy in the classification, both using their default parameters. This is a useful result that could be taken into account in the processing of high-resolution images in large and diverse areas to obtain a land cover classification.

  5. Ranking Journals Using Social Choice Theory Methods: A Novel Approach in Bibliometrics

    Energy Technology Data Exchange (ETDEWEB)

    Aleskerov, F.T.; Pislyakov, V.; Subochev, A.N.

    2016-07-01

    We use data on economics, management and political science journals to produce quantitative estimates of the (in)consistency of evaluations based on seven popular bibliometric indicators (impact factor, 5-year impact factor, immediacy index, article influence score, h-index, SNIP and SJR). We propose a new approach to aggregating journal rankings: since rank aggregation is a multicriteria decision problem, ordinal ranking methods from social choice theory may solve it. We apply either a direct ranking method based on majority rule (the Copeland rule, the Markovian method) or a sorting procedure based on a tournament solution, such as the uncovered set and the minimal externally stable set. We demonstrate that aggregate rankings reduce the number of contradictions and represent the set of single-indicator-based rankings better than any of the seven rankings themselves. (Author)
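
    A minimal sketch of one of the social-choice aggregation rules mentioned above, the Copeland rule, applied to hypothetical journal rankings (the journals and indicator rankings are invented for illustration):

    ```python
    import itertools

    def copeland_aggregate(rankings):
        """Aggregate several rankings (lists of items, best first) with the Copeland rule."""
        items = rankings[0]
        pos = [{item: r.index(item) for item in r} for r in rankings]
        score = {item: 0 for item in items}
        for a, b in itertools.combinations(items, 2):
            a_wins = sum(p[a] < p[b] for p in pos)   # indicators ranking a above b
            b_wins = len(rankings) - a_wins
            if a_wins > b_wins:
                score[a] += 1; score[b] -= 1
            elif b_wins > a_wins:
                score[b] += 1; score[a] -= 1
        return sorted(items, key=lambda x: -score[x])

    # Hypothetical rankings of four journals under three indicators (not the study's data).
    r_if  = ["J1", "J2", "J3", "J4"]
    r_h   = ["J2", "J1", "J4", "J3"]
    r_sjr = ["J1", "J3", "J2", "J4"]
    print(copeland_aggregate([r_if, r_h, r_sjr]))
    ```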

  6. Ranking images based on aesthetic qualities.

    OpenAIRE

    Gaur, Aarushi

    2015-01-01

    The qualitative assessment of image content and aesthetic impression is affected by various image attributes and relations between the attributes. Modelling of such assessments in the form of objective rankings and learning image representations based on them is not a straightforward problem. The criteria can be varied with different levels of complexity for various applications. A highly-complex problem could involve a large number of interrelated attributes and features alongside varied rul...

  7. Image Registration based on Low Rank Matrix: Rank-Regularized SSD.

    Science.gov (United States)

    Ghaffari, Aboozar; Fatemizadeh, Emad

    2017-08-25

    Similarity measures are a core component of image registration algorithms. Spatially varying intensity distortion is an important challenge that affects the performance of similarity measures, and correlation among pixels is the main characteristic of this distortion. Similarity measures such as the sum of squared differences (SSD) and mutual information (MI) ignore this correlation; hence, perfect registration cannot be achieved in the presence of this distortion. In this paper, we model this correlation with the aid of low-rank matrix theory. Based on this model, we compensate for this distortion analytically and introduce Rank-Regularized SSD (RRSSD). This new similarity measure is a modified SSD based on the singular values of the difference image in mono-modal imaging. In fact, image registration and distortion correction are performed simultaneously in the proposed model. Based on our experiments, the RRSSD similarity measure achieves clinically acceptable registration results and outperforms other state-of-the-art similarity measures such as the well-known residual complexity method.

  8. A STUDY ON RANKING METHOD IN RETRIEVING WEB PAGES BASED ON CONTENT AND LINK ANALYSIS: COMBINATION OF FOURIER DOMAIN SCORING AND PAGERANK SCORING

    Directory of Open Access Journals (Sweden)

    Diana Purwitasari

    2008-01-01

    Full Text Available The ranking module is an important component of the search process, which sorts through relevant pages. Since a collection of Web pages has additional information inherent in the hyperlink structure of the Web, this information can be represented as a link score and then combined with the usual information retrieval techniques of content scoring. In this paper we report our studies on a ranking score for Web pages that combines link analysis, via PageRank Scoring, with content analysis, via Fourier Domain Scoring. Our experiments use a collection of Web pages related to the subject of Statistics from Wikipedia, with the objective of checking the correctness and evaluating the performance of the combined ranking method. Evaluation of PageRank Scoring shows that the highest-scoring pages do not always relate to Statistics. Since links within Wikipedia articles exist so that users are always one click away from more information on any point that has a link attached, it is possible that topics unrelated to Statistics are frequently mentioned in the collection. The combination experiments show that the proportional weight given to the link score relative to the content score of Web pages does affect the retrieval results.
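
    The following sketch illustrates the general idea of mixing a link score with a content score under a proportional weight; it uses networkx PageRank as the link score and treats the content score as given, whereas the paper uses Fourier Domain Scoring for the content side. The graph, scores and weight are assumptions.

    ```python
    import networkx as nx

    def combined_ranking(graph, content_score, alpha=0.3):
        """Rank pages by a weighted sum of a link score (PageRank) and a content score.

        `content_score` maps page -> relevance score from any content model.
        `alpha` is the assumed proportional weight on the link score.
        """
        link_score = nx.pagerank(graph, alpha=0.85)
        def norm(d):
            lo, hi = min(d.values()), max(d.values())
            return {k: (v - lo) / (hi - lo + 1e-12) for k, v in d.items()}
        ls, cs = norm(link_score), norm(content_score)
        combined = {p: alpha * ls[p] + (1 - alpha) * cs.get(p, 0.0) for p in graph}
        return sorted(combined, key=combined.get, reverse=True)

    # Toy example (hypothetical pages and content scores, not the Wikipedia collection).
    G = nx.DiGraph([("A", "B"), ("B", "C"), ("C", "A"), ("A", "C")])
    print(combined_ranking(G, {"A": 0.9, "B": 0.2, "C": 0.5}))
    ```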

  9. Image Re-Ranking Based on Topic Diversity.

    Science.gov (United States)

    Qian, Xueming; Lu, Dan; Wang, Yaxiong; Zhu, Li; Tang, Yuan Yan; Wang, Meng

    2017-08-01

    Social media sharing Websites allow users to annotate images with free tags, which significantly contributes to the development of web image retrieval. Tag-based image search is an important method to find images shared by users in social networks. However, making the top-ranked results both relevant and diverse is challenging. In this paper, we propose a topic-diverse ranking approach for tag-based image retrieval with the consideration of promoting the topic coverage performance. First, we construct a tag graph based on the similarity between each pair of tags. Then, a community detection method is conducted to mine the topic community of each tag. After that, inter-community and intra-community ranking are introduced to obtain the final retrieved results. In the inter-community ranking process, an adaptive random walk model is employed to rank the communities based on the multi-information of each topic community. In addition, we build an inverted index structure for images to accelerate the searching process. Experimental results on the Flickr and NUS-WIDE data sets show the effectiveness of the proposed approach.

  10. Note---A Preference Ranking Organisation Method

    OpenAIRE

    J. P. Brans; Ph. Vincke

    1985-01-01

    Principles for a new family of outranking methods are given. The main aim of the proposed PROMETHEE approach is to be as easily understood as possible by the decision-maker. It is based on extensions of the notion of criterion. Six possible extensions are considered. These extensions can easily be identified by the decision-maker because the parameters to be defined (at most 2) have an economic significance. A valued outranking graph is constructed by using a preference index. Two possibiliti...

  11. Adaptive Game Level Creation through Rank-based Interactive Evolution

    DEFF Research Database (Denmark)

    Liapis, Antonios; Martínez, Héctor Pérez; Togelius, Julian

    2013-01-01

    This paper introduces Rank-based Interactive Evolution (RIE) which is an alternative to interactive evolution driven by computational models of user preferences to generate personalized content. In RIE, the computational models are adapted to the preferences of users which, in turn, are used...... as fitness functions for the optimization of the generated content. The preference models are built via ranking-based preference learning, while the content is generated via evolutionary search. The proposed method is evaluated on the creation of strategy game maps, and its performance is tested using...

  12. PREFERENCE BASED TERM WEIGHTING FOR ARABIC FIQH DOCUMENT RANKING

    Directory of Open Access Journals (Sweden)

    Khadijah Fahmi Hayati Holle

    2015-03-01

    Full Text Available In document retrieval, besides the suitability of the query to the search results, there is also a subjective user assessment that is expected to be a deciding factor in document ranking. This preference aspect arises in fiqh document searching: people tend to prefer a certain fiqh methodology without rejecting the others. It is therefore necessary to investigate the preference factor in addition to the relevance factor in document ranking. This research proposes a term weighting method based on preference to rank documents according to user preference. The proposed method is combined with term weighting based on the document index and book index, so that it captures both the relevance and preference aspects. The proposed method is Inverse Preference Frequency with an α value (IPFα): the preference value is calculated by IPF term weighting, and the preference values of terms that match the query are multiplied by α. IPFα combined with the existing weighting methods becomes TF.IDF.IBF.IPFα. Experiments with the proposed method on a dataset of Arabic fiqh documents are evaluated using recall, precision, and f-measure. The proposed term weighting method ranks the documents in the right order according to user preference, reaching recall of 75%, precision of 100%, and f-measure of 85.7%.
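
    As a loose sketch of the kind of composite weighting described above, the code below multiplies TF by several inverse-frequency factors and boosts preference terms that match the query by α. The exact IDF/IBF/IPF formulas and the data structures are assumptions; the paper's definitions may differ.

    ```python
    import math
    from collections import Counter

    def idf_like(total, containing):
        """Generic inverse-frequency factor, used here for IDF, IBF and IPF alike
        (assumption; the paper's exact formulas may differ)."""
        return math.log((1 + total) / (1 + containing)) + 1

    def score_term(term, doc_tokens, docs, books, pref_classes, query_terms, alpha=1.5):
        """Composite TF.IDF.IBF.IPFα-style weight of `term` in one document.
        `docs`, `books`, `pref_classes` are collections of token sets."""
        tf  = Counter(doc_tokens)[term]
        idf = idf_like(len(docs),         sum(term in d for d in docs))
        ibf = idf_like(len(books),        sum(term in b for b in books))
        ipf = idf_like(len(pref_classes), sum(term in p for p in pref_classes))
        if term in query_terms:           # boost terms matching the preference query
            ipf *= alpha
        return tf * idf * ibf * ipf
    ```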

  13. Ranking Scientific Publications Based on Their Citation Graph

    CERN Document Server

    Marian, L; Rajman, M

    2009-01-01

    CDS Invenio is the web-based integrated digital library system developed at CERN. It is a suite of applications which provides the framework and tools for building and managing an autonomous digital library server. Within this framework, the goal of this project is to implement new ranking methods based on the bibliographic citation graph extracted from the CDS Invenio database. As a first step, we implemented the Citation Count as a baseline ranking method. The major disadvantage of this method is that all citations are treated equally, disregarding their importance and their publication date. To overcome this drawback, we consider two different approaches: a link-based approach which extends the PageRank model to the bibliographic citation graph and a time-dependent approach which takes into account time in the citation counts. In addition, we also combined these two approaches in a hybrid model based on a time-dependent PageRank. In the present document, we describe the conceptual background behind our new...

  14. Investigation of oxidation and tautomerization of a recently synthesized Schiff base in micellar media using multivariate curve resolution-alternating least squares and rank annihilation factor analysis methods.

    Science.gov (United States)

    Afkhami, Abbas; Khajavi, Farzad; Khanmohammadi, Hamid

    2009-08-11

    The oxidation of the recently synthesized Schiff base 3,6-bis((2-aminoethyl-5-Br-salicyliden)thio)pyridazine (PABST) with hydrogen peroxide was investigated using spectrophotometric studies. The reaction rate order and observed rate constant of the oxidation reaction were obtained in a mixture of N,N-dimethylformamide (DMF):water (30:70, v/v) at pH 10 using the multivariate curve resolution-alternating least squares (MCR-ALS) method and rank annihilation factor analysis (RAFA). The parameters affecting the oxidation rate constant, such as the percentage of DMF, the presence of transition metals like Cu(2+), Zn(2+), Mn(2+) and Hg(2+), and the presence of surfactants, were investigated. The keto-enol equilibrium in DMF:water (30:70, v/v) solution at pH 7.6 was also investigated in the presence of surfactants. At concentrations above the critical micelle concentration (cmc) of the cationic surfactant cetyltrimethylammonium bromide (CTAB), the keto form was the predominant species, while at concentrations above the cmc of the anionic surfactant sodium dodecyl sulfate (SDS), the enol form was the predominant species. The kinetic reaction order and the rate constant of tautomerization in micellar medium were obtained using MCR-ALS and RAFA. The results obtained by the two methods were in good agreement with each other. The effect of different volume percentages of DMF on the rate constant of tautomerization was also investigated. The neutral surfactant (Triton X-100) had no effect on the tautomerization equilibrium.

  15. An Adaptive Reordered Method for Computing PageRank

    Directory of Open Access Journals (Sweden)

    Yi-Ming Bu

    2013-01-01

    Full Text Available We propose an adaptive reordered method to deal with the PageRank problem. It has been shown that one can reorder the hyperlink matrix of the PageRank problem to calculate a reduced system and get the full PageRank vector through forward substitutions. This method can provide a speedup for calculating the PageRank vector. We observe that in the existing reordered method, the cost of the recursive reordering procedure could offset the computational reduction brought by minimizing the dimension of the linear system. With this observation, we introduce an adaptive reordered method to accelerate the total calculation, in which we terminate the reordering procedure at an appropriate point instead of reordering to the end. Numerical experiments show the effectiveness of this adaptive reordered method.

  16. On the ranking of chemicals based on their PBT characteristics: comparison of different ranking methodologies using selected POPs as an illustrative example.

    Science.gov (United States)

    Sailaukhanuly, Yerbolat; Zhakupbekova, Arai; Amutova, Farida; Carlsen, Lars

    2013-01-01

    Knowledge of the environmental behavior of chemicals is a fundamental part of the risk assessment process. The present paper discusses various methods of ranking a series of persistent organic pollutants (POPs) according to their persistence, bioaccumulation and toxicity (PBT) characteristics. Traditionally, ranking has been done as an absolute (total) ranking, applying multicriteria data analysis methods such as simple additive ranking (SAR) or various utility function (UF) based rankings. An attractive alternative to these ranking methodologies appears to be partial order ranking (POR). The present paper compares ranking methods such as SAR, UF and POR. Significant discrepancies between the rankings are noted, and it is concluded that partial order ranking, as a method without any pre-assumptions concerning possible relations between the single parameters, appears to be the most attractive ranking methodology. In addition to the initial ranking, partial order methodology offers a wide variety of analytical tools to elucidate the interplay between the objects to be ranked and the ranking parameters. The present study includes an analysis of the relative importance of the single P, B and T parameters. Copyright © 2012 Elsevier Ltd. All rights reserved.
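
    A minimal sketch of the partial order (dominance) idea underlying POR: one chemical is placed above another only if it is at least as high on every P, B, T criterion; incomparable pairs are simply left unordered. The scores below are invented for illustration.

    ```python
    import numpy as np

    def dominates(a, b):
        """a dominates b if it is at least as high on every criterion and higher on one."""
        return bool(np.all(a >= b) and np.any(a > b))

    def partial_order(scores):
        """Return the dominance relation among objects described by rows of `scores`
        (columns = P, B, T criteria, higher = more hazardous) and the maximal elements."""
        n = len(scores)
        relation = [(i, j) for i in range(n) for j in range(n)
                    if i != j and dominates(scores[i], scores[j])]
        dominated = {j for _, j in relation}
        maximal = [i for i in range(n) if i not in dominated]   # top of the Hasse diagram
        return relation, maximal

    # Hypothetical P, B, T scores for four chemicals (illustration only).
    pbt = np.array([[3, 2, 3],
                    [2, 3, 1],
                    [3, 3, 3],
                    [1, 1, 2]])
    rel, top = partial_order(pbt)
    print("dominance pairs:", rel, "maximal elements:", top)
    ```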

  17. Competitive Sports Athletes' Ranking Method

    Directory of Open Access Journals (Sweden)

    Mikhail M. Polevshchikov

    2012-06-01

    Full Text Available The contribution presents a method of ranking competitive sports athletes according to the accuracy of their physical actions. The criterion of accuracy considered is the maximum absolute value of the error in the response to a moving object. The results of an experimental investigation are reported.

  18. Low-Rank Affinity Based Local-Driven Multilabel Propagation

    Directory of Open Access Journals (Sweden)

    Teng Li

    2013-01-01

    Full Text Available This paper presents a novel low-rank affinity based local-driven algorithm to robustly propagate the multilabels from training images to test images. A graph is constructed over the segmented local image regions. The labels for vertices from the training data are derived based on the context among different training images, and the derived vertex labels are propagated to the unlabeled vertices via the graph. The multitask low-rank affinity, which jointly seeks the sparsity-consistent low-rank affinities from multiple feature matrices, is applied to compute the edge weights between graph vertices. The inference process of multitask low-rank affinity is formulated as a constrained nuclear norm and ℓ2,1-norm minimization problem. The optimization is conducted efficiently with the augmented Lagrange multiplier method. Based on the learned local patch labels we can predict the multilabels for the test images. Experiments on multilabel image annotation demonstrate the encouraging results from the proposed framework.

  19. A stable systemic risk ranking in China's banking sector: Based on principal component analysis

    Science.gov (United States)

    Fang, Libing; Xiao, Binqing; Yu, Honghai; You, Qixing

    2018-02-01

    In this paper, we compare five popular systemic risk rankings and apply a principal component analysis (PCA) model to provide a stable systemic risk ranking for the Chinese banking sector. Our empirical results indicate that the five methods suggest vastly different systemic risk rankings for the same bank, while the combined systemic risk measure based on PCA provides a reliable ranking. Furthermore, according to the factor loadings of the first component, the PCA combined ranking is mainly based on fundamentals instead of market price data. We clearly find that price-based rankings are not as practical as fundamentals-based ones. The PCA combined ranking directly shows the systemic risk contribution of each bank for banking supervision purposes and reminds banks to prepare for and cope with financial crises in advance.
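
    A compact sketch of the PCA-combination step described above: standardize the individual risk measures, project the banks onto the first principal component, and rank by the component scores. The input matrix here is random, not the Chinese banking data.

    ```python
    import numpy as np

    def pca_combined_ranking(measures):
        """Combine several systemic-risk measures (banks x measures) into one ranking
        using scores on the first principal component."""
        X = np.asarray(measures, dtype=float)
        Z = (X - X.mean(axis=0)) / X.std(axis=0)        # standardize each measure
        U, s, Vt = np.linalg.svd(Z, full_matrices=False)
        pc1 = Vt[0]
        if pc1.sum() < 0:                               # fix sign so higher = riskier
            pc1 = -pc1
        scores = Z @ pc1                                # first-component scores
        return np.argsort(-scores)                      # bank indices, riskiest first

    # Hypothetical matrix: 5 banks x 5 risk measures (illustration only).
    M = np.random.default_rng(0).normal(size=(5, 5))
    print(pca_combined_ranking(M))
    ```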

  20. Aggregate Interview Method of ranking orthopedic applicants predicts future performance.

    Science.gov (United States)

    Geissler, Jacqueline; VanHeest, Ann; Tatman, Penny; Gioe, Terence

    2013-07-01

    This article evaluates and describes a process of ranking orthopedic applicants using what the authors term the Aggregate Interview Method. The authors hypothesized that applicants ranked higher by this method at their institution would perform better than lower-ranked applicants on multiple measures of resident performance. A retrospective review of 115 orthopedic residents was performed at the authors' institution. Residents were grouped into 3 categories by matching rank numbers: 1-5, 6-14, and 15 or higher. Each rank group was compared on resident performance as measured by faculty evaluations, the Orthopaedic In-Training Examination (OITE), and American Board of Orthopaedic Surgery (ABOS) test results. Residents ranked 1-5 scored significantly better on patient care, behavior, and overall competence by faculty evaluation. Highly ranked orthopedic resident candidates scored highly on the Accreditation Council for Graduate Medical Education resident core competencies as measured by faculty evaluations, performed above the national average on the OITE, and passed the ABOS part 1 examination at rates exceeding the national average. Copyright 2013, SLACK Incorporated.

  1. A Hybrid Distance-Based Ideal-Seeking Consensus Ranking Model

    Directory of Open Access Journals (Sweden)

    Madjid Tavana

    2007-01-01

    Full Text Available Ordinal consensus ranking problems have received much attention in the management science literature. A problem arises in situations where a group of k decision makers (DMs) is asked to rank order n alternatives. The question is how to combine the DM rankings into one consensus ranking. Several different approaches have been suggested to aggregate DM responses into a compromise or consensus ranking; however, the similarity of consensus rankings generated by the different algorithms is largely unknown. In this paper, we propose a new hybrid distance-based ideal-seeking consensus ranking model (DCM). The proposed hybrid model combines parts of the two commonly used consensus ranking techniques of Beck and Lin (1983) and Cook and Kress (1985) into an intuitive and computationally simple model. We illustrate our method and then run a Monte Carlo simulation across a range of k and n to compare the similarity of the consensus rankings generated by our method with the best-known method of Borda and Kendall (Kendall 1962) and the two methods proposed by Beck and Lin (1983) and Cook and Kress (1985). DCM and Beck and Lin's method yielded the most similar consensus rankings, whereas the Cook-Kress method and the Borda-Kendall method yielded the least similar consensus rankings.
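
    For context, the sketch below implements the simple Borda-Kendall consensus (rank-sum) baseline mentioned above and evaluates it with an absolute rank-distance criterion of the kind that distance-based models minimize; it is not the proposed DCM, and the example rankings are hypothetical.

    ```python
    import numpy as np

    def borda_kendall_consensus(rank_matrix):
        """Consensus ranking by the Borda-Kendall method: order alternatives by the
        sum of ranks assigned by the k decision makers (lower total = better)."""
        totals = np.asarray(rank_matrix).sum(axis=0)
        order = np.argsort(totals)                 # alternative indices, best first
        consensus = np.empty_like(order)
        consensus[order] = np.arange(1, len(order) + 1)
        return consensus

    def total_abs_distance(consensus, rank_matrix):
        """Sum of absolute rank differences between the consensus and each DM ranking."""
        return int(np.abs(np.asarray(rank_matrix) - consensus).sum())

    # Hypothetical rankings of 4 alternatives by 3 decision makers (1 = best).
    R = [[1, 2, 3, 4],
         [2, 1, 4, 3],
         [1, 3, 2, 4]]
    c = borda_kendall_consensus(R)
    print("consensus:", c, "distance:", total_abs_distance(c, R))
    ```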

  2. RRCRank: a fusion method using rank strategy for residue-residue contact prediction.

    Science.gov (United States)

    Jing, Xiaoyang; Dong, Qiwen; Lu, Ruqian

    2017-09-02

    In the structural biology area, protein residue-residue contacts play a crucial role in protein structure prediction. Some researchers have found that predicted residue-residue contacts can effectively constrain the conformational search space, which is significant for de novo protein structure prediction. Over the last few decades, researchers have developed various methods to predict residue-residue contacts; in particular, significant performance gains have been achieved by fusion methods in recent years. In this work, a novel fusion method based on a rank strategy is proposed to predict contacts. Unlike the traditional regression or classification strategies, the contact prediction task is regarded as a ranking task. Two kinds of features are extracted from correlated mutations methods and ensemble machine-learning classifiers, and the proposed method then uses a learning-to-rank algorithm to predict the contact probability of each residue pair. First, we perform two benchmark tests for the proposed fusion method (RRCRank) on the CASP11 and CASP12 datasets. The test results show that the RRCRank method outperforms other well-developed methods, especially for medium and short range contacts. Second, in order to verify the superiority of the ranking strategy, we predict contacts using the traditional regression and classification strategies based on the same features as the ranking strategy. Compared with these two traditional strategies, the proposed ranking strategy shows better performance for all three contact types, in particular for long range contacts. Third, the proposed RRCRank has been compared with several state-of-the-art methods in CASP11 and CASP12. The results show that RRCRank achieves comparable prediction precision and is better than three methods in most assessment metrics. The learning-to-rank algorithm is introduced to develop a novel rank-based method for the residue-residue contact prediction of proteins, which
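
    A generic sketch of a pairwise learning-to-rank step (RankSVM-style), not the paper's exact algorithm or features: residue-pair feature differences become preference examples, and the learned weights score pairs so that likely contacts rank first. Data and dimensions are invented.

    ```python
    import numpy as np
    from sklearn.svm import LinearSVC

    def pairwise_transform(X, y):
        """Turn pointwise labels into pairwise preference examples:
        for each pair with different labels, the feature difference gets label +/-1."""
        Xp, yp = [], []
        for i in range(len(y)):
            for j in range(i + 1, len(y)):
                if y[i] == y[j]:
                    continue
                Xp.append(X[i] - X[j])
                yp.append(1 if y[i] > y[j] else -1)
        return np.array(Xp), np.array(yp)

    # Hypothetical residue-pair features and contact labels (1 = in contact).
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 10))
    y = (X[:, 0] + 0.5 * rng.normal(size=200) > 0).astype(int)

    Xp, yp = pairwise_transform(X, y)
    ranker = LinearSVC(C=1.0).fit(Xp, yp)
    scores = X @ ranker.coef_.ravel()          # higher score = more likely contact
    top_pairs = np.argsort(-scores)[:10]       # top-L style shortlist
    print(top_pairs)
    ```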

  3. Diagrammatic perturbation methods in networks and sports ranking combinatorics

    Science.gov (United States)

    Park, Juyong

    2010-04-01

    Analytic and computational tools developed in statistical physics are being increasingly applied to the study of complex networks. Here we present recent developments in the diagrammatic perturbation methods for the exponential random graph models, and apply them to the combinatoric problem of determining the ranking of nodes in directed networks that represent pairwise competitions.

  4. Ranking Fuzzy Numbers with a Distance Method using Circumcenter of Centroids and an Index of Modality

    Directory of Open Access Journals (Sweden)

    P. Phani Bushan Rao

    2011-01-01

    Full Text Available Ranking fuzzy numbers is an important aspect of decision making in a fuzzy environment. Since their inception in 1965, many authors have proposed different methods for ranking fuzzy numbers. However, there is no method which gives a satisfactory result in all situations: most of the methods proposed so far are nondiscriminating and counterintuitive. This paper proposes a new method for ranking fuzzy numbers based on the circumcenter of centroids and uses an index of optimism to reflect the decision maker's optimistic attitude, as well as an index of modality that represents the neutrality of the decision maker. This method ranks various types of fuzzy numbers, including normal, generalized trapezoidal, and triangular fuzzy numbers, along with crisp numbers, which are treated as particular cases of fuzzy numbers.
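
    A hedged sketch of a circumcenter-of-centroids ranking value for a generalized trapezoidal fuzzy number: the trapezoid is split into a left triangle, a rectangle and a right triangle, the circumcenter of the three centroids is computed, and its distance from the origin is used as a ranking value. The partition and the bare index (without the optimism and modality indices) are assumptions and may differ from the paper's exact definitions.

    ```python
    import math

    def circumcenter(p1, p2, p3):
        """Circumcenter of three non-collinear points in the plane."""
        (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
        d = 2 * (x1 * (y2 - y3) + x2 * (y3 - y1) + x3 * (y1 - y2))
        ux = ((x1**2 + y1**2) * (y2 - y3) + (x2**2 + y2**2) * (y3 - y1)
              + (x3**2 + y3**2) * (y1 - y2)) / d
        uy = ((x1**2 + y1**2) * (x3 - x2) + (x2**2 + y2**2) * (x1 - x3)
              + (x3**2 + y3**2) * (x2 - x1)) / d
        return ux, uy

    def rank_index(a, b, c, d, w=1.0):
        """Assumed ranking value for the trapezoidal fuzzy number (a, b, c, d; w)."""
        g1 = ((a + 2 * b) / 3, w / 3)      # left triangle centroid
        g2 = ((b + c) / 2, w / 2)          # middle rectangle centroid
        g3 = ((2 * c + d) / 3, w / 3)      # right triangle centroid
        x0, y0 = circumcenter(g1, g2, g3)
        return math.hypot(x0, y0)

    # Two hypothetical trapezoidal fuzzy numbers; the larger index ranks higher.
    print(rank_index(0.1, 0.3, 0.4, 0.6), rank_index(0.2, 0.4, 0.5, 0.7))
    ```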

  5. Do PageRank-based author rankings outperform simple citation counts?

    CERN Document Server

    Fiala, Dalibor; Žitnik, Slavko; Bajec, Marko

    2015-01-01

    The basic indicators of a researcher's productivity and impact are still the number of publications and their citation counts. These metrics are clear, straightforward, and easy to obtain. When a ranking of scholars is needed, for instance in grant, award, or promotion procedures, their use is the fastest and cheapest way of prioritizing some scientists over others. However, due to their nature, there is a danger of oversimplifying scientific achievements. Therefore, many other indicators have been proposed including the usage of the PageRank algorithm known for the ranking of webpages and its modifications suited to citation networks. Nevertheless, this recursive method is computationally expensive and even if it has the advantage of favouring prestige over popularity, its application should be well justified, particularly when compared to the standard citation counts. In this study, we analyze three large datasets of computer science papers in the categories of artificial intelligence, software engineering,...

  6. Combining Document-and Paragraph-Based Entity Ranking

    NARCIS (Netherlands)

    Rode, H.; Serdyukov, Pavel; Hiemstra, Djoerd

    2008-01-01

    We study entity ranking on the INEX entity track and propose a simple graph-based ranking approach that enables scores on the document and paragraph level to be combined. The combined approach improves the retrieval results not only on the INEX test set, but similarly on TREC's expert finding task.

  7. Sentiment analysis of feature ranking methods for classification accuracy

    Science.gov (United States)

    Joseph, Shashank; Mugauri, Calvin; Sumathy, S.

    2017-11-01

    Text pre-processing and feature selection are important and critical steps in text mining. Pre-processing large volumes of data is a difficult task, as unstructured raw data must be converted into a structured format. Traditional methods of processing and weighting take much time and are less accurate. To overcome this challenge, feature ranking techniques have been devised. A feature set from text pre-processing is fed as input to feature selection, and feature selection helps improve text classification accuracy. Of the three feature selection categories available, the filter category is the focus here. Five feature ranking methods, namely document frequency, standard deviation, information gain, chi-square, and weighted log-likelihood ratio, are analyzed.
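
    As a small illustration of filter-style feature ranking, the sketch below scores terms with the chi-square statistic (one of the five methods listed) on a tiny invented corpus; the data and the scikit-learn pipeline are assumptions, not the paper's setup.

    ```python
    import numpy as np
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.feature_selection import chi2

    # Tiny illustrative corpus with sentiment labels (assumption, not the paper's data).
    docs   = ["great phone, love the screen", "terrible battery, awful build",
              "love the camera quality", "awful screen and terrible support"]
    labels = [1, 0, 1, 0]

    vec = CountVectorizer()
    X = vec.fit_transform(docs)

    # Chi-square scores measure the dependence between each term and the class label;
    # higher-scored terms are ranked first in the filter-based selection step.
    scores, _ = chi2(X, labels)
    terms = np.array(vec.get_feature_names_out())
    print(terms[np.argsort(scores)[::-1][:5]])
    ```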

  8. INTEL: Intel based systems move up in supercomputing ranks

    CERN Document Server

    2002-01-01

    "The TOP500 supercomputer rankings released today at the Supercomputing 2002 conference show a dramatic increase in the number of Intel-based systems being deployed in high-performance computing (HPC) or supercomputing areas" (1/2 page).

  9. Ranking the dermatology programs based on measurements of academic achievement.

    Science.gov (United States)

    Wu, Jashin J; Ramirez, Claudia C; Alonso, Carol A; Berman, Brian; Tyring, Stephen K

    2007-07-13

    The only dermatology rankings in the past were based on National Institutes of Health (NIH) funding and journal citations. To determine the highest ranking academic dermatology programs based on 5 outcome measures and on an overall ranking scale. To the best of our knowledge, this is the first report to rank the dermatology programs on 4 of the following outcome measures of academic achievement and with an overall ranking. We collected extensive 2001 to 2004 data ranging from total publications to grant funding on 107 U.S. dermatology programs and their full-time faculty. Data from part-time and volunteer faculty were not used. Publications in 2001 to 2004; NIH funding in 2004; Dermatology Foundation grants in 2001 to 2004; faculty lectures in 2004 delivered at national conferences; number of full-time faculty members who were on the editorial boards of the top 3 U.S. dermatology journals and the top 4 subspecialty journals. We used the 5 outcome measures to tabulate the highest ranking programs in each category. Using a weighted ranking system, we also tabulated the overall top 30 dermatology programs based on these 5 outcome measures. We were not able to determine the total amount of NIH funding in dollars of the dermatology divisions. The impact factors of the journals in which these publications appeared were not factored into our calculations. Since faculty members may collaborate on the same publication, some publications may have been double-counted. In descending order, the 5 highest ranked academic programs are the University of Pennsylvania; University of California, San Francisco; Yale-New Haven Medical Center; New York University; and University of Michigan. This ranking system may allow residents and faculty to improve the academic achievements at their respective programs.

  10. A network-based dynamical ranking system for competitive sports

    Science.gov (United States)

    Motegi, Shun; Masuda, Naoki

    2012-12-01

    From the viewpoint of networks, a ranking system for players or teams in sports is equivalent to a centrality measure for sports networks, whereby a directed link represents the result of a single game. Previously proposed network-based ranking systems are derived from static networks, i.e., aggregation of the results of games over time. However, the score of a player (or team) fluctuates over time. Defeating a renowned player in the peak performance is intuitively more rewarding than defeating the same player in other periods. To account for this factor, we propose a dynamic variant of such a network-based ranking system and apply it to professional men's tennis data. We derive a set of linear online update equations for the score of each player. The proposed ranking system predicts the outcome of the future games with a higher accuracy than the static counterparts.
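
    The paper derives specific linear online update equations, which are not reproduced here; the sketch below only conveys the general flavor of a dynamic, network-style ranking in which scores decay over time and beating a currently strong player transfers more credit. All constants and the update rule itself are assumptions.

    ```python
    import math
    from collections import defaultdict

    class DynamicRanking:
        """Simplified dynamic ranking: each win transfers credit proportional to the
        loser's current score, and all scores decay back toward a base value over time
        (illustrative only; not the paper's derived update equations)."""
        def __init__(self, decay=0.01, credit=0.1, base=1.0):
            self.decay, self.credit, self.base = decay, credit, base
            self.score = defaultdict(lambda: base)
            self.last_t = defaultdict(float)

        def _decayed(self, player, t):
            dt = t - self.last_t[player]
            self.score[player] = self.base + (self.score[player] - self.base) * math.exp(-self.decay * dt)
            self.last_t[player] = t
            return self.score[player]

        def record_game(self, winner, loser, t):
            loser_strength = self._decayed(loser, t)
            self._decayed(winner, t)
            self.score[winner] += self.credit * loser_strength  # beating a strong player is worth more

        def ranking(self):
            return sorted(self.score, key=self.score.get, reverse=True)

    r = DynamicRanking()
    r.record_game("PlayerA", "PlayerB", t=1.0)
    r.record_game("PlayerB", "PlayerC", t=2.0)
    print(r.ranking())
    ```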

  11. A network-based dynamical ranking system for competitive sports.

    Science.gov (United States)

    Motegi, Shun; Masuda, Naoki

    2012-01-01

    From the viewpoint of networks, a ranking system for players or teams in sports is equivalent to a centrality measure for sports networks, whereby a directed link represents the result of a single game. Previously proposed network-based ranking systems are derived from static networks, i.e., aggregation of the results of games over time. However, the score of a player (or team) fluctuates over time. Defeating a renowned player in the peak performance is intuitively more rewarding than defeating the same player in other periods. To account for this factor, we propose a dynamic variant of such a network-based ranking system and apply it to professional men's tennis data. We derive a set of linear online update equations for the score of each player. The proposed ranking system predicts the outcome of the future games with a higher accuracy than the static counterparts.

  12. Monte Carlo methods in PageRank computation: When one iteration is sufficient

    NARCIS (Netherlands)

    Avrachenkov, K.; Litvak, Nelli; Nemirovsky, D.; Osipova, N.

    2005-01-01

    PageRank is one of the principle criteria according to which Google ranks Web pages. PageRank can be interpreted as a frequency of visiting a Web page by a random surfer and thus it reflects the popularity of a Web page. Google computes the PageRank using the power iteration method which requires
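
    A minimal Monte Carlo sketch of the random-surfer interpretation: PageRank is estimated as the visit frequency of many short random walks that follow out-links with the damping probability and terminate otherwise. This is one simple variant, not necessarily the specific estimators analyzed in the paper.

    ```python
    import random
    from collections import Counter

    def monte_carlo_pagerank(graph, n_walks_per_node=100, damping=0.85):
        """Estimate PageRank as the frequency of visits by random surfers."""
        nodes = list(graph)
        visits = Counter()
        for start in nodes:
            for _ in range(n_walks_per_node):
                node = start
                while True:
                    visits[node] += 1
                    # terminate (teleport) with probability 1 - damping or at dangling nodes
                    if random.random() > damping or not graph[node]:
                        break
                    node = random.choice(graph[node])
        total = sum(visits.values())
        return {n: visits[n] / total for n in nodes}

    # Toy hyperlink graph (adjacency lists of out-links).
    web = {"A": ["B", "C"], "B": ["C"], "C": ["A"], "D": ["C"]}
    print(sorted(monte_carlo_pagerank(web).items(), key=lambda kv: -kv[1]))
    ```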

  13. Monte Carlo methods in PageRank computation: When one iteration is sufficient

    NARCIS (Netherlands)

    Avrachenkov, K.; Litvak, Nelli; Nemirovsky, D.; Osipova, N.

    PageRank is one of the principle criteria according to which Google ranks Web pages. PageRank can be interpreted as a frequency of visiting a Web page by a random surfer, and thus it reflects the popularity of a Web page. Google computes the PageRank using the power iteration method, which requires

  14. Severe language effect in university rankings: particularly Germany and France are wronged in citation-based rankings.

    Science.gov (United States)

    van Raan, Anthony F J; van Leeuwen, Thed N; Visser, Martijn S

    2011-08-01

    We applied a set of standard bibliometric indicators to monitor the scientific state of the art of 500 universities worldwide and constructed a ranking on the basis of these indicators (Leiden Ranking 2010). We find a dramatic and hitherto largely underestimated language effect in the bibliometric, citation-based measurements of research performance when comparing the ranking based on all Web of Science (WoS) covered publications with the ranking based on only English-language WoS covered publications, particularly for Germany and France.

  15. Image Inpainting Algorithm Based on Low-Rank Approximation and Texture Direction

    Directory of Open Access Journals (Sweden)

    Jinjiang Li

    2014-01-01

    Full Text Available Existing image inpainting algorithms based on low-rank matrix approximation are not suitable for complex, large-scale, damaged texture images. An inpainting algorithm based on low-rank approximation and texture direction is proposed in this paper. First, we decompose the image using a low-rank approximation method. Then the area to be repaired is interpolated by a level set algorithm, and we reconstruct a new image from the boundary values of the level set. In order to obtain a better restoration effect, we iterate the low-rank decomposition and level set interpolation. Taking into account the impact of texture direction, we segment the texture and perform the low-rank decomposition along the texture direction. Experimental results show that the new algorithm is suitable for texture recovery and maintains the overall consistency of the structure, and it can be used to repair large-scale damaged images.

  16. Consistent linguistic fuzzy preference relations method with ranking fuzzy numbers

    Science.gov (United States)

    Ridzuan, Siti Amnah Mohd; Mohamad, Daud; Kamis, Nor Hanimah

    2014-12-01

    Multi-Criteria Decision Making (MCDM) methods have been developed to help decision makers select the best criteria or alternatives from the options given. One of the well-known methods in MCDM is the Consistent Fuzzy Preference Relation (CFPR) method, which essentially utilizes a pairwise comparison approach. This method was later improved to cater for subjectivity in the data by using fuzzy sets, and is known as the Consistent Linguistic Fuzzy Preference Relations (CLFPR) method. The CLFPR method uses the additive transitivity property in the evaluation of pairwise comparison matrices. However, the calculation involved is lengthy and cumbersome. To overcome this problem, a method of defuzzification was introduced by researchers. Nevertheless, the defuzzification process has a major setback in that some information may be lost due to the simplification process. In this paper, we propose a method of CLFPR that preserves the fuzzy number form throughout the process. In obtaining the desired ordering result, a method of ranking fuzzy numbers is utilized in the procedure. This improved procedure for CLFPR is applied to a case study to verify its effectiveness. The method is useful for solving decision making problems and can be applied to many areas of application.

  17. A method for integrating and ranking the evidence for biochemical pathways by mining reactions from text

    Science.gov (United States)

    Miwa, Makoto; Ohta, Tomoko; Rak, Rafal; Rowley, Andrew; Kell, Douglas B.; Pyysalo, Sampo; Ananiadou, Sophia

    2013-01-01

    Motivation: To create, verify and maintain pathway models, curators must discover and assess knowledge distributed over the vast body of biological literature. Methods supporting these tasks must understand both the pathway model representations and the natural language in the literature. These methods should identify and order documents by relevance to any given pathway reaction. No existing system has addressed all aspects of this challenge. Method: We present novel methods for associating pathway model reactions with relevant publications. Our approach extracts the reactions directly from the models and then turns them into queries for three text mining-based MEDLINE literature search systems. These queries are executed, and the resulting documents are combined and ranked according to their relevance to the reactions of interest. We manually annotate document-reaction pairs with the relevance of the document to the reaction and use this annotation to study several ranking methods, using various heuristic and machine-learning approaches. Results: Our evaluation shows that the annotated document-reaction pairs can be used to create a rule-based document ranking system, and that machine learning can be used to rank documents by their relevance to pathway reactions. We find that a Support Vector Machine-based system outperforms several baselines and matches the performance of the rule-based system. The successful query extraction and ranking methods are used to update our existing pathway search system, PathText. Availability: An online demonstration of PathText 2 and the annotated corpus are available for research purposes at http://www.nactem.ac.uk/pathtext2/. Contact: makoto.miwa@manchester.ac.uk Supplementary information: Supplementary data are available at Bioinformatics online. PMID:23813008

  18. Fast and precise method of contingency ranking in modern power system

    DEFF Research Database (Denmark)

    Rather, Zakir Hussain; Chen, Zhe; Thøgersen, Paul

    2011-01-01

    Contingency Analysis is one of the most important aspects of Power System Security Analysis. This paper presents a fast and precise method of contingency ranking for effective power system security analysis. The method proposed in this research work takes due consideration of both apparent power...... is based on a realistic approach taking practical situations into account. Besides taking real situations into consideration, the proposed method is fast enough to be considered for on-line security analysis....

  19. MONITORING THE PROCESS MEAN BASED ON QUALITY CONTROL CHARTS USING ON FOLDED RANKED SET SAMPLING

    Directory of Open Access Journals (Sweden)

    Amjad Al-Nasser

    2013-02-01

    Full Text Available In this paper, we propose a new quality control chart for the sample mean based on the folded ranked set sampling (FRSS) method. The new charts are compared with the classical control charts using simple random sampling (SRS) and ranked set sampling (RSS). A simulation study shows that the FRSS control charts have a smaller average run length (ARL) compared with their counterpart charts using SRS and RSS.
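
    To make the ARL comparison concrete, the sketch below estimates the average run length of an X-bar chart by simulation with a pluggable subgroup sampler; simple random sampling is shown, and an RSS or FRSS sampler would be substituted in its place. Chart constants and the shift size are assumptions.

    ```python
    import numpy as np

    def xbar_arl(sample_fn, mu0=0.0, sigma=1.0, n=5, shift=0.5, L=3.0, reps=2000, rng=None):
        """Estimate the average run length (ARL) of an X-bar chart by simulation.
        `sample_fn(rng, n, mu, sigma)` draws one subgroup; plugging in a ranked-set
        or folded-ranked-set sampler instead of SRS is the kind of change the paper studies."""
        rng = rng or np.random.default_rng(0)
        ucl = mu0 + L * sigma / np.sqrt(n)
        lcl = mu0 - L * sigma / np.sqrt(n)
        run_lengths = []
        for _ in range(reps):
            t = 0
            while True:
                t += 1
                xbar = sample_fn(rng, n, mu0 + shift, sigma).mean()
                if xbar > ucl or xbar < lcl:      # out-of-control signal
                    run_lengths.append(t)
                    break
        return np.mean(run_lengths)

    def srs(rng, n, mu, sigma):                   # simple random sampling subgroup
        return rng.normal(mu, sigma, n)

    print("ARL under SRS with a 0.5-sigma shift:", xbar_arl(srs))
    ```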

  20. Entity-based Stochastic Analysis of Search Results for Query Expansion and Results Re-Ranking

    Science.gov (United States)

    2015-11-20

    ...dynamically and analyzed stochastically using a Random Walk method. The result of this analysis is exploited in two different contexts: for automatic query expansion and for re-ranking a set of retrieved results. Evaluation results in the 2015 TREC Clinical Decision Support Track illustrate that query

  1. Correlation Test Application of Supplier’s Ranking Using TOPSIS and AHP-TOPSIS Method

    Directory of Open Access Journals (Sweden)

    Ika Yuniwati

    2016-05-01

    Full Text Available The supplier selection process in firms can be carried out using multi-criteria decision making (MCDM) methods. There are many MCDM methods, but a firm must choose the one that suits its condition. Company A has analyzed its supplier ranking using the TOPSIS method. The TOPSIS method has a major weakness in its subjective weighting; this flaw is overcome by using AHP weighting that has undergone a consistency test. In this study, the supplier rankings obtained with TOPSIS and with AHP-TOPSIS are compared using a correlation test, with the aim of determining whether the results of the two methods differ. Supplier ranking data are ordinal, so Spearman's rank and Kendall's tau-b correlations are considered: if most of the ranked scores are tied, Kendall's tau-b should be used; otherwise, Spearman's rank correlation should be used. In this study most of the ranked scores differ, so Spearman's rank correlation was used, giving a p-value of 0.505. Since this is greater than 0.05, there is no statistically significant correlation between the two methods; an increase or decrease of a supplier's rank under one method is not significantly related to its increase or decrease under the other method.
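
    A compact sketch of the workflow described above: rank suppliers with TOPSIS under two weight vectors (one subjective, one standing in for AHP-derived weights) and compare the two rankings with Spearman's correlation. The decision matrix and weights are invented for illustration.

    ```python
    import numpy as np
    from scipy.stats import spearmanr

    def topsis(decision, weights, benefit):
        """Rank alternatives with TOPSIS via distances to the ideal and anti-ideal solutions.
        `benefit[j]` is True for criteria to maximize, False for cost criteria."""
        X = np.asarray(decision, dtype=float)
        R = X / np.sqrt((X ** 2).sum(axis=0))          # vector normalization
        V = R * np.asarray(weights)
        ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
        anti  = np.where(benefit, V.min(axis=0), V.max(axis=0))
        d_pos = np.sqrt(((V - ideal) ** 2).sum(axis=1))
        d_neg = np.sqrt(((V - anti) ** 2).sum(axis=1))
        closeness = d_neg / (d_pos + d_neg)
        return (-closeness).argsort().argsort() + 1     # rank 1 = best

    # Hypothetical decision matrix: 4 suppliers x 3 criteria (cost, quality, delivery).
    D = [[200, 8, 7], [180, 7, 9], [220, 9, 6], [190, 8, 8]]
    benefit = np.array([False, True, True])
    ranks_subjective = topsis(D, [0.3, 0.4, 0.3], benefit)    # subjective weights
    ranks_ahp        = topsis(D, [0.25, 0.5, 0.25], benefit)  # AHP-derived weights (assumed)
    rho, p = spearmanr(ranks_subjective, ranks_ahp)
    print(ranks_subjective, ranks_ahp, rho, p)
    ```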

  2. Ranking of bank branches with undesirable and fuzzy data: A DEA-based approach

    Directory of Open Access Journals (Sweden)

    Sohrab Kordrostami

    2016-07-01

    Full Text Available Banks are one of the most important financial sectors for the economic development of a country, and efficiency scores and ranks of banks are significant inputs to future planning. Sometimes the performance of banks must be measured in the presence of undesirable and vague factors. For these reasons, the current paper introduces a procedure based on data envelopment analysis (DEA) for evaluating the efficiency and obtaining a complete ranking of decision making units (DMUs) where undesirable and fuzzy measures exist. In the presence of undesirable and fuzzy measures, DMUs are evaluated using a fuzzy expected value approach, and DMUs with similar efficiency scores are ranked using additional constraints and the Maximal Balance Index based on the optimal shadow prices. The efficiency scores of 25 branches of an Iranian commercial bank are then evaluated using the proposed method, and a complete ranking of the branches is presented to discriminate among them.

  3. A rank-based Prediction Algorithm of Learning User's Intention

    Science.gov (United States)

    Shen, Jie; Gao, Ying; Chen, Cang; Gong, HaiPing

    Internet search has become an important part of people's daily life. People can find many types of information to meet different needs through search engines on the Internet. There are two issues with current search engines: first, users must predetermine the type of information they want and then switch to the appropriate type of search engine interface. Second, most search engines support multiple kinds of search functions, each with its own separate interface, so users who need different types of information must switch between interfaces. In practice, most queries correspond to several types of results; for example, the query "Palace" matches websites introducing the National Palace Museum, blogs, Wikipedia, pictures and video information. This paper presents a new aggregation algorithm for all kinds of search results. It filters and sorts the search results by learning from three sources, the query words, the search results and the search history logs, in order to detect the user's intention. Experiments demonstrate that this rank-based method for multiple types of search results is effective: it meets users' search needs, enhances user satisfaction, provides an effective and rational model for optimizing search engines and improves the search experience.

  4. Patch-Based Image Inpainting via Two-Stage Low Rank Approximation.

    Science.gov (United States)

    Guo, Qiang; Gao, Shanshan; Zhang, Xiaofeng; Yin, Yilong; Zhang, Caiming

    2017-05-09

    To recover corrupted pixels, traditional inpainting methods based on low-rank priors generally need to solve a convex optimization problem by an iterative singular value shrinkage algorithm. In this paper, we propose a simple method for image inpainting using low-rank approximation, which avoids the time-consuming iterative shrinkage. Specifically, if similar patches of a corrupted image are identified and reshaped as vectors, then a patch matrix can be constructed by collecting these similar patch-vectors. Because its columns are highly linearly correlated, this patch matrix is low-rank. Instead of using an iterative singular value shrinkage scheme, the proposed method utilizes low-rank approximation with truncated singular values to derive a closed-form estimate for each patch matrix. Based on the observation that there exists a distinct gap in the singular spectrum of the patch matrix, the rank of each patch matrix is empirically determined by a heuristic procedure. Inspired by inpainting algorithms with component decomposition, a two-stage low-rank approximation (TSLRA) scheme is designed to recover image structures and refine texture details of corrupted images. Experimental results on various inpainting tasks demonstrate that the proposed method is comparable and even superior to some state-of-the-art inpainting algorithms.
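
    A minimal sketch of the closed-form, shrinkage-free step described above: approximate a patch matrix by a truncated SVD whose rank is picked at the largest gap in the singular spectrum. The gap heuristic and the toy data are simplifications of the paper's procedure.

    ```python
    import numpy as np

    def low_rank_patch_estimate(patch_matrix):
        """Closed-form low-rank estimate of a patch matrix via truncated SVD,
        with the rank chosen at the largest gap in the singular spectrum
        (a simple stand-in for the heuristic rank-selection procedure)."""
        U, s, Vt = np.linalg.svd(patch_matrix, full_matrices=False)
        gaps = s[:-1] - s[1:]
        r = int(np.argmax(gaps)) + 1            # keep components before the biggest gap
        return (U[:, :r] * s[:r]) @ Vt[:r, :], r

    # Stack of similar patches (rows = flattened patches); here a noisy rank-2 toy example.
    rng = np.random.default_rng(0)
    base = rng.normal(size=(40, 2)) @ rng.normal(size=(2, 64))
    noisy = base + 0.05 * rng.normal(size=base.shape)
    estimate, rank = low_rank_patch_estimate(noisy)
    print("estimated rank:", rank)
    ```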

  5. ConformRank: A conformity-based rank for finding top-k influential users

    Science.gov (United States)

    Wang, Qiyao; Jin, Yuehui; Cheng, Shiduan; Yang, Tan

    2017-05-01

    Finding influential users is a hot topic in social networks. For example, advertisers identify influential users to run a successful campaign. Retweeters forward messages from original users, who originally publish the messages; this action is referred to as retweeting. Retweeting behaviors generate influence: original users have influence on retweeters. Whether retweeters keep the same sentiment as the original users is taken into consideration in this study, and influence is calculated based on conformity from an emotional perspective after retweeting. A conformity-based algorithm, called ConformRank, is proposed to find the top-k influential users, i.e., those who lead the most users to keep the same sentiment after retweeting their messages. Emotional conformity is introduced to denote how users conform to original users from the emotional perspective, and conforming weights are introduced to denote how two users keep the same sentiment after retweeting. Emotional conformity is applied to users and conforming weights are used for relations. Experiments were conducted on Sina Weibo, and the results show that users have larger influence when they publish positive messages.

  6. GoDec+: Fast and Robust Low-Rank Matrix Decomposition Based on Maximum Correntropy.

    Science.gov (United States)

    Guo, Kailing; Liu, Liu; Xu, Xiangmin; Xu, Dong; Tao, Dacheng

    2017-04-24

    GoDec is an efficient low-rank matrix decomposition algorithm. However, its optimal performance depends on sparse errors and Gaussian noise. This paper aims to address the problem of a matrix composed of a low-rank component and unknown corruptions. We introduce a robust local similarity measure called correntropy to describe the corruptions and, in doing so, obtain a more robust and faster low-rank decomposition algorithm: GoDec+. Based on half-quadratic optimization and the greedy bilateral paradigm, we deliver a solution to the maximum correntropy criterion (MCC)-based low-rank decomposition problem. Experimental results show that GoDec+ is efficient and robust to different corruptions, including Gaussian noise, Laplacian noise, salt & pepper noise, and occlusion, on both synthetic and real vision data. We further apply GoDec+ to more general applications, including classification and subspace clustering. For classification, we construct an ensemble subspace from the low-rank GoDec+ matrix and introduce an MCC-based classifier. For subspace clustering, we utilize the low-rank matrix produced by GoDec+ for MCC-based self-expression and combine it with spectral clustering. Face recognition, motion segmentation, and face clustering experiments show that the proposed methods are effective and robust. In particular, we achieve state-of-the-art performance on the Hopkins 155 data set and the first 10 subjects of extended Yale B for subspace clustering.

  7. Introducing Trimming and Function Ranking to SolidWorks based on Function Analysis

    NARCIS (Netherlands)

    Chechurin, L.S.; Wits, Wessel Willems; Bakker, Hans M.; Vaneker, Thomas H.J.

    2015-01-01

    TRIZ based Function Analysis models existing products based on functional interactions between product parts. Such a function model description is the ideal starting point for product innovation. Design engineers can apply (TRIZ) methods such as trimming and function ranking to this function model

  8. Introducing trimming and function ranking to Solid Works based on function analysis

    NARCIS (Netherlands)

    Chechurin, Leonid S.; Wits, Wessel Willems; Bakker, Hans M.; Cascini, G.; Vaneker, Thomas H.J.

    2011-01-01

    TRIZ based Function Analysis models existing products based on functional interactions between product parts. Such a function model description is the ideal starting point for product innovation. Design engineers can apply (TRIZ) methods such as trimming and function ranking to this function model

  9. Low-Rank Representation-Based Object Tracking Using Multitask Feature Learning with Joint Sparsity

    Directory of Open Access Journals (Sweden)

    Hyuncheol Kim

    2014-01-01

    Full Text Available We address the object tracking problem as a multitask feature learning process based on a low-rank representation of features with joint sparsity. We first select features with a low-rank representation within a number of initial frames to obtain the subspace basis. Next, the features represented by the low-rank and sparse property are learned using a modified joint sparsity-based multitask feature learning framework. Both the features and sparse errors are then optimally updated using a novel incremental alternating direction method. The low-rank minimization problem for learning multitask features can be solved by a few sequences of efficient closed-form updates. Since the proposed method performs feature learning in both a multitask and a low-rank manner, it can not only reduce the dimension but also improve the tracking performance without drift. Experimental results demonstrate that the proposed method outperforms existing state-of-the-art tracking methods for tracking objects in challenging image sequences.

  10. CT image sequence restoration based on sparse and low-rank decomposition.

    Directory of Open Access Journals (Sweden)

    Shuiping Gou

    Full Text Available Blurry organ boundaries and soft tissue structures present a major challenge in biomedical image restoration. In this paper, we propose a low-rank decomposition-based method for computed tomography (CT) image sequence restoration, where the CT image sequence is decomposed into a sparse component and a low-rank component. A new point spread function for the Wiener filter is employed to efficiently remove blur in the sparse component, and Wiener filtering with a Gaussian PSF is used to recover the average image of the low-rank component. We then obtain the recovered CT image sequence by combining the recovered low-rank image with the recovered sparse image sequence. Our method achieves restoration results with higher contrast, sharper organ boundaries and richer soft tissue structure information, compared with existing CT image restoration methods. The robustness of our method was assessed with numerical experiments using three different low-rank models: Robust Principal Component Analysis (RPCA), Linearized Alternating Direction Method with Adaptive Penalty (LADMAP) and Go Decomposition (GoDec). Experimental results demonstrated that the RPCA model was the most suitable for small-noise CT images whereas the GoDec model was the best for large-noise CT images.

  11. CT image sequence restoration based on sparse and low-rank decomposition.

    Science.gov (United States)

    Gou, Shuiping; Wang, Yueyue; Wang, Zhilong; Peng, Yong; Zhang, Xiaopeng; Jiao, Licheng; Wu, Jianshe

    2013-01-01

    Blurry organ boundaries and soft tissue structures present a major challenge in biomedical image restoration. In this paper, we propose a low-rank decomposition-based method for computed tomography (CT) image sequence restoration, where the CT image sequence is decomposed into a sparse component and a low-rank component. A new point spread function for the Wiener filter is employed to efficiently remove blur in the sparse component, and Wiener filtering with a Gaussian PSF is used to recover the average image of the low-rank component. We then obtain the recovered CT image sequence by combining the recovered low-rank image with the recovered sparse image sequence. Our method achieves restoration results with higher contrast, sharper organ boundaries and richer soft tissue structure information, compared with existing CT image restoration methods. The robustness of our method was assessed with numerical experiments using three different low-rank models: Robust Principal Component Analysis (RPCA), Linearized Alternating Direction Method with Adaptive Penalty (LADMAP) and Go Decomposition (GoDec). Experimental results demonstrated that the RPCA model was the most suitable for small-noise CT images whereas the GoDec model was the best for large-noise CT images.

  12. Computational Methods for Large Spatio-temporal Datasets and Functional Data Ranking

    KAUST Repository

    Huang, Huang

    2017-07-16

    This thesis focuses on two topics, computational methods for large spatial datasets and functional data ranking. Both are tackling the challenges of big and high-dimensional data. The first topic is motivated by the prohibitive computational burden in fitting Gaussian process models to large and irregularly spaced spatial datasets. Various approximation methods have been introduced to reduce the computational cost, but many rely on unrealistic assumptions about the process and retaining statistical efficiency remains an issue. We propose a new scheme to approximate the maximum likelihood estimator and the kriging predictor when the exact computation is infeasible. The proposed method provides different types of hierarchical low-rank approximations that are both computationally and statistically efficient. We explore the improvement of the approximation theoretically and investigate the performance by simulations. For real applications, we analyze a soil moisture dataset with 2 million measurements with the hierarchical low-rank approximation and apply the proposed fast kriging to fill gaps for satellite images. The second topic is motivated by rank-based outlier detection methods for functional data. Compared to magnitude outliers, it is more challenging to detect shape outliers as they are often masked among samples. We develop a new notion of functional data depth by taking the integration of a univariate depth function. Having a form of the integrated depth, it shares many desirable features. Furthermore, the novel formation leads to a useful decomposition for detecting both shape and magnitude outliers. Our simulation studies show the proposed outlier detection procedure outperforms competitors in various outlier models. We also illustrate our methodology using real datasets of curves, images, and video frames. Finally, we introduce the functional data ranking technique to spatio-temporal statistics for visualizing and assessing covariance properties, such as

  13. Implementation of ranked positional weight method (RPWM) for double-sided assembly line balancing problems

    Science.gov (United States)

    Hamzas, M. F. M. A.; Bareduan, S. A.; Zakaria, M. Z.; Ghazali, S.; Zairi, S.

    2017-09-01

    Line balancing is about arranging a production line so that there is an even flow of production from one workstation to the next. Manufacturers need to achieve high productivity, improve efficiency and reduce expenditure in the manufacturing process. The aim of this study is to minimize the number of workstations in assembly line balancing and to solve the assembly line balancing problem using the Ranked Positional Weight heuristic method. The study focuses on double-sided assembly line balancing using the Ranked Positional Weight heuristic method and is applied to the automotive industry. The results show that line efficiency increases from 86% to 92% and that the number of workstations can be reduced from 17 to 16.
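
    A sketch of the classic single-sided Ranked Positional Weight heuristic (the paper applies the idea to double-sided lines, which adds side-assignment constraints not handled here); task times, precedence relations and the cycle time are hypothetical.

    ```python
    def ranked_positional_weight(tasks, precedence, cycle_time):
        """Single-model line balancing with the Ranked Positional Weight heuristic:
        weight each task by its own time plus the times of all its successors, then
        assign tasks to stations in weight order, respecting precedence and cycle time."""
        def successors(t, seen=None):
            seen = seen if seen is not None else set()
            for s in precedence.get(t, []):
                if s not in seen:
                    seen.add(s)
                    successors(s, seen)
            return seen

        weight = {t: tasks[t] + sum(tasks[s] for s in successors(t)) for t in tasks}
        order = sorted(tasks, key=lambda t: -weight[t])
        predecessors = {t: {p for p, succ in precedence.items() if t in succ} for t in tasks}

        stations, assigned = [], set()
        while len(assigned) < len(tasks):
            load, station = 0.0, []
            progress = True
            while progress:
                progress = False
                for t in order:
                    if (t not in assigned and predecessors[t] <= assigned
                            and load + tasks[t] <= cycle_time):
                        station.append(t); assigned.add(t); load += tasks[t]
                        progress = True
            stations.append(station)
        return stations

    # Hypothetical task times (minutes) and precedence relations (task -> successors).
    times = {"A": 4, "B": 3, "C": 2, "D": 5, "E": 1}
    prec  = {"A": ["C"], "B": ["C"], "C": ["D"], "D": ["E"]}
    print(ranked_positional_weight(times, prec, cycle_time=7))
    ```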

  14. Analysis of some methods for reduced rank Gaussian process regression

    DEFF Research Database (Denmark)

    Quinonero-Candela, J.; Rasmussen, Carl Edward

    2005-01-01

    proliferation of a number of cost-effective approximations to GPs, both for classification and for regression. In this paper we analyze one popular approximation to GPs for regression: the reduced rank approximation. While generally GPs are equivalent to infinite linear models, we show that Reduced Rank......While there is strong motivation for using Gaussian Processes (GPs) due to their excellent performance in regression and classification problems, their computational complexity makes them impractical when the size of the training set exceeds a few thousand cases. This has motivated the recent...... Gaussian Processes (RRGPs) are equivalent to finite sparse linear models. We also introduce the concept of degenerate GPs and show that they correspond to inappropriate priors. We show how to modify the RRGP to prevent it from being degenerate at test time. Training RRGPs consists both in learning...

  15. Hyper-local, directions-based ranking of places

    DEFF Research Database (Denmark)

    Venetis, Petros; Gonzalez, Hector; Jensen, Christian S.

    2011-01-01

    …enables so-called hyper-local web querying where the location of a user is accurate at a much finer granularity than with IP-based positioning. This paper addresses the problem of determining the importance of points of interest, or places, in local-search results. In doing so, the paper proposes…, including the frequency with which places have been referred to in directions queries. Next, the paper proposes an algorithm and accompanying data structures capable of ranking places in response to hyper-local web queries. Finally, an empirical study with very large directions query logs offers insight…

  16. Cardinal priority ranking based decision making for economic ...

    African Journals Online (AJOL)

    To access the indifference band, interaction with the decision maker is obtained via cardinal priority ranking (CPR) of the objectives. The cardinal priority ranking is constructed in the functional space and then transformed into the decision space, so the cardinal priority ranking of objectives relate the decision maker's ...

  17. Life-Cycle Assessment Harmonization and Soil Science Ranking Results on Food-Waste Management Methods.

    Science.gov (United States)

    Morris, Jeffrey; Brown, Sally; Cotton, Matthew; Matthews, H Scott

    2017-05-16

    This study reviewed 147 life cycle studies, with 28 found suitable for harmonizing food waste management methods' climate and energy impacts. A total of 80 scientific soil productivity studies were assessed to rank management method soil benefits. Harmonized climate impacts per kilogram of food waste range from -0.20 kg of carbon dioxide equivalents (CO2e) for anaerobic digestion (AD) to 0.38 kg of CO2e for landfill gas-to-energy (LFGTE). Aerobic composting (AC) emits -0.10 kg of CO2e. In-sink grinding (ISG) via a food-waste disposer and flushing for management with other sewage at a wastewater treatment plant emits 0.10 kg of CO2e. Harmonization reduced climate emissions versus nonharmonized averages. Harmonized energy impacts range from -0.32 MJ for ISG to 1.14 MJ for AC. AD at 0.27 MJ and LFGTE at 0.40 MJ fall in between. Rankings based on soil studies show AC first for carbon storage and water conservation, with AD second. AD ranks first for fertilizer replacement, with AC second, and AC and AD are tied for first for plant yield increase. ISG ranks third and LFGTE fourth on all four soil-quality and productivity indicators. Suggestions for further research include developing soil benefits measurement methods and resolving inconsistencies in the results between life-cycle assessments and soil science studies.

  18. Rank-Based Analysis of Unbalanced Repeated Measures Data

    Directory of Open Access Journals (Sweden)

    M. Mushfiqur Rashid

    2012-07-01

    Full Text Available In this article, we have developed a rank (intra-subject) based analysis of clinical trials with unbalanced repeated measures data. We assume that the errors within each patient are exchangeable and continuous random variables. This rank-based inference is valid when the unbalanced data are missing either completely at random or by design. A drop in dispersion test is developed for general linear hypotheses. A numerical example is given to illustrate the procedure.

  19. Image restoration via patch orientation-based low-rank matrix approximation and nonlocal means

    Science.gov (United States)

    Zhang, Di; He, Jiazhong; Du, Minghui

    2016-03-01

    Low-rank matrix approximation and nonlocal means (NLM) are two popular techniques for image restoration. Although the basic principle for applying these two techniques is the same, i.e., similar image patches are abundant in the image, previously published related algorithms use either low-rank matrix approximation or NLM because they manipulate the information of similar patches in different ways. We propose a method for image restoration by jointly using low-rank matrix approximation and NLM in a unified minimization framework. To improve the accuracy of determining similar patches, we also propose a patch similarity measurement based on curvelet transform. Extensive experiments on image deblurring and compressive sensing image recovery validate that the proposed method achieves better results than many state-of-the-art algorithms in terms of both quantitative measures and visual perception.

  20. Ranking of office lease options by multi-criteria methods

    Directory of Open Access Journals (Sweden)

    Ginevičius Romualdas

    2016-12-01

    Full Text Available Due to the growing scales of business and internationalisation, the issue of the office lease is becoming more and more relevant for companies. They are becoming an inherent part of business, on which the results of the commercial activity depend. Currently existing methodologies for the assessment of the office lease options are imperfect as they lack complexity; they are not associated with the objective of the lease – the improvement of the business results; the methods of the quantitative assessment of lease options are far from perfect. The paper aims at formulating the hierarchical indicator system of commercial real estate facilities (offices adjusted for the multi-criteria assessment and at calculating the lease options based on this indicator system. To achieve this goal, the following methods have been used: scientific literature analysis and multi-criteria assessment methods.

  1. The Extrapolation-Accelerated Multilevel Aggregation Method in PageRank Computation

    Directory of Open Access Journals (Sweden)

    Bing-Yuan Pu

    2013-01-01

    Full Text Available An accelerated multilevel aggregation method is presented for calculating the stationary probability vector of an irreducible stochastic matrix in PageRank computation, where the vector extrapolation method is its accelerator. We show how to periodically combine the extrapolation method together with the multilevel aggregation method on the finest level for speeding up the PageRank computation. Detailed numerical results are given to illustrate the behavior of this method, and comparisons with the typical methods are also made.
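
    The multilevel aggregation scheme is not reproduced here, but the flavor of periodically extrapolating PageRank iterates can be sketched with a plain power iteration plus an occasional component-wise Aitken delta-squared step; the link graph, damping factor and extrapolation period are illustrative assumptions, not the paper's algorithm.

```python
import numpy as np

def pagerank_extrapolated(P, alpha=0.85, tol=1e-10, extrap_every=10):
    """Power iteration for the PageRank vector of a column-stochastic matrix P,
    with a periodic component-wise Aitken delta-squared extrapolation step."""
    n = P.shape[0]
    x = np.full(n, 1.0 / n)
    history = [x]
    for it in range(1, 1000):
        x_new = alpha * P @ x + (1 - alpha) / n
        x_new /= x_new.sum()
        history.append(x_new)
        if it % extrap_every == 0 and len(history) >= 3:
            x0, x1, x2 = history[-3], history[-2], history[-1]
            denom = x2 - 2 * x1 + x0
            safe = np.abs(denom) > 1e-14
            x_acc = np.where(safe, x2 - (x2 - x1) ** 2 / np.where(safe, denom, 1.0), x2)
            x_new = np.clip(x_acc, 0, None)
            x_new /= x_new.sum()
            history = [x_new]            # restart the extrapolation window
        if np.abs(x_new - x).sum() < tol:
            return x_new, it
        x = x_new
    return x, it

# Tiny 4-page web graph (illustrative): entry j lists the pages that page j links to.
links = {0: [1, 2], 1: [2], 2: [0], 3: [0, 2]}
n = 4
P = np.zeros((n, n))
for j, outs in links.items():
    for i in outs:
        P[i, j] = 1.0 / len(outs)

pr, iters = pagerank_extrapolated(P)
print(np.round(pr, 4), "after", iters, "iterations")
```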

  2. Tensor Rank

    OpenAIRE

    Erdtman, Elias; Jönsson, Carl

    2012-01-01

    This master's thesis addresses numerical methods of computing the typical ranks of tensors over the real numbers and explores some properties of tensors over finite fields. We present three numerical methods to compute typical tensor rank. Two of these have already been published and can be used to calculate the lowest typical ranks of tensors and an approximate percentage of how many tensors have the lowest typical ranks (for some tensor formats), respectively. The third method was developed...

  3. A practical sensitivity analysis method for ranking sources of uncertainty in thermal–hydraulics applications

    Energy Technology Data Exchange (ETDEWEB)

    Pourgol-Mohammad, Mohammad, E-mail: pourgolmohammad@sut.ac.ir [Department of Mechanical Engineering, Sahand University of Technology, Tabriz (Iran, Islamic Republic of); Hoseyni, Seyed Mohsen [Department of Basic Sciences, East Tehran Branch, Islamic Azad University, Tehran (Iran, Islamic Republic of); Hoseyni, Seyed Mojtaba [Building & Housing Research Center, Tehran (Iran, Islamic Republic of); Sepanloo, Kamran [Nuclear Science and Technology Research Institute, Tehran (Iran, Islamic Republic of)

    2016-08-15

    Highlights: • Existing uncertainty ranking methods prove inconsistent for TH applications. • Introduction of a new method for ranking sources of uncertainty in TH codes. • Modified PIRT qualitatively identifies and ranks uncertainty sources more precisely. • The importance of parameters is calculated by a limited number of TH code executions. • Methodology is applied successfully on LOFT-LB1 test facility. - Abstract: In application to thermal–hydraulic calculations by system codes, sensitivity analysis plays an important role for managing the uncertainties of code output and risk analysis. Sensitivity analysis is also used to confirm the results of qualitative Phenomena Identification and Ranking Table (PIRT). Several methodologies have been developed to address uncertainty importance assessment. Generally, uncertainty importance measures, mainly devised for the Probabilistic Risk Assessment (PRA) applications, are not affordable for computationally demanding calculations of the complex thermal–hydraulics (TH) system codes. In other words, for effective quantification of the degree of the contribution of each phenomenon to the total uncertainty of the output, a practical approach is needed by considering high computational burden of TH calculations. This study aims primarily to show the inefficiency of the existing approaches and then introduces a solution to cope with the challenges in this area by modification of variance-based uncertainty importance method. Important parameters are identified by the modified PIRT approach qualitatively then their uncertainty importance is quantified by a local derivative index. The proposed index is attractive from its practicality point of view on TH applications. It is capable of calculating the importance of parameters by a limited number of TH code executions. Application of the proposed methodology is demonstrated on LOFT-LB1 test facility.

  4. A Comparative Approach for Ranking Contaminated Sites Based on the Risk Assessment Paradigm Using Fuzzy PROMETHEE

    Science.gov (United States)

    Zhang, Kejiang; Kluck, Cheryl; Achari, Gopal

    2009-11-01

    A ranking system for contaminated sites based on comparative risk methodology using fuzzy Preference Ranking Organization METHod for Enrichment Evaluation (PROMETHEE) was developed in this article. It combines the concepts of fuzzy sets to represent uncertain site information with the PROMETHEE, a subgroup of Multi-Criteria Decision Making (MCDM) methods. Criteria are identified based on a combination of the attributes (toxicity, exposure, and receptors) associated with the potential human health and ecological risks posed by contaminated sites, chemical properties, site geology and hydrogeology and contaminant transport phenomena. Original site data are directly used avoiding the subjective assignment of scores to site attributes. When the input data are numeric and crisp the PROMETHEE method can be used. The Fuzzy PROMETHEE method is preferred when substantial uncertainties and subjectivities exist in site information. The PROMETHEE and fuzzy PROMETHEE methods are both used in this research to compare the sites. The case study shows that this methodology provides reasonable results.
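
    The crisp PROMETHEE II core that the fuzzy variant builds on is compact. A minimal sketch with the "usual" preference function follows; the site-by-criterion matrix, weights and criterion directions are made-up values, not the article's data.

```python
import numpy as np

def promethee_ii(X, weights, maximize):
    """Net outranking flows (PROMETHEE II) with the 'usual' preference function:
    P(d) = 1 if d > 0 else 0, applied criterion by criterion."""
    n, m = X.shape
    pi = np.zeros((n, n))                 # aggregated preference of a over b
    for j in range(m):
        col = X[:, j] if maximize[j] else -X[:, j]
        d = col[:, None] - col[None, :]
        pi += weights[j] * (d > 0)
    phi_plus = pi.sum(axis=1) / (n - 1)   # positive (leaving) flow
    phi_minus = pi.sum(axis=0) / (n - 1)  # negative (entering) flow
    return phi_plus - phi_minus           # net flow: larger = preferred

# Illustrative site matrix: rows = sites, columns = toxicity score,
# exposed population, distance to receptors (km).
X = np.array([[7.0, 1200, 0.4],
              [5.5,  300, 1.2],
              [8.2,  800, 0.9]])
weights = np.array([0.5, 0.3, 0.2])
maximize = [False, False, True]   # lower toxicity/exposure and larger distance preferred

phi = promethee_ii(X, weights, maximize)
print("sites ordered by net flow (preferred first):", list(np.argsort(-phi)))
```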

  5. A method for generating permutation distribution of ranks in a k ...

    African Journals Online (AJOL)

    sample experiment is presented. This provides a methodology for constructing exact test of significance of a rank statistic. The proposed method is linked to the partition of integers and in a combinatorial sense the distribution of the ranks is ...

  6. Using Metric Distance Ranking Method to Find Intuitionistic Fuzzy Critical Path

    Directory of Open Access Journals (Sweden)

    P. Jayagowri

    2015-01-01

    Full Text Available Network analysis is a technique which determines the various sequences of activities concerning a project and the project completion time. The widely used methods of this technique are the critical path method and the program evaluation and review technique. The aim of this paper is to present an analytical method for measuring the criticality in an (Atanassov) intuitionistic fuzzy project network. Vague parameters in the project network are represented by (Atanassov) intuitionistic trapezoidal fuzzy numbers. A metric distance ranking method for (Atanassov) intuitionistic fuzzy numbers applied to the critical path method is proposed. The (Atanassov) intuitionistic fuzzy critical length of the project network is found without converting the (Atanassov) intuitionistic fuzzy activity times to classical numbers. The fuzzified conversion of the problem has been discussed with a numerical example. We also apply four different ranking procedures and compare them with the metric distance ranking method. The comparison reveals that the proposed ranking method is better than the other ranking procedures.

  7. Development of geopolitically relevant ranking criteria for geoengineering methods

    Science.gov (United States)

    Boyd, Philip W.

    2016-11-01

    A decade has passed since Paul Crutzen published his editorial essay on the potential for stratospheric geoengineering to cool the climate in the Anthropocene. He synthesized the effects of the 1991 Pinatubo eruption on the planet's radiative budget and used this large-scale event to broaden and deepen the debate on the challenges and opportunities of large-scale geoengineering. Pinatubo had pronounced effects, both in the short and longer term (months to years), on the ocean, land, and the atmosphere. This rich set of data on how a large-scale natural event influences many regional and global facets of the Earth System provides a comprehensive viewpoint to assess the wider ramifications of geoengineering. Here, I use the Pinatubo archives to develop a range of geopolitically relevant ranking criteria for a suite of different geoengineering approaches. The criteria focus on the spatial scales needed for geoengineering and whether large-scale dispersal is a necessary requirement for a technique to deliver significant cooling or carbon dioxide reductions. These categories in turn inform whether geoengineering approaches are amenable to participation (the "democracy of geoengineering") and whether they will lead to transboundary issues that could precipitate geopolitical conflicts. The criteria provide the requisite detail to demarcate different geoengineering approaches in the context of geopolitics. Hence, they offer another tool that can be used in the development of a more holistic approach to the debate on geoengineering.

  8. Who Should Rank Our Journals...And Based on What?

    Science.gov (United States)

    Cherkowski, Sabre; Currie, Russell; Hilton, Sandy

    2012-01-01

    Purpose: This study aims to establish the use of active scholar assessment (ASA) in the field of education leadership as a new methodology in ranking administration and leadership journals. The secondary purpose of this study is to respond to the paucity of research on journal ranking in educational administration and leadership.…

  9. Enhancing Sketch-Based Image Retrieval by Re-Ranking and Relevance Feedback.

    Science.gov (United States)

    Xueming Qian; Xianglong Tan; Yuting Zhang; Richang Hong; Meng Wang

    2016-01-01

    A sketch-based image retrieval often needs to optimize the tradeoff between efficiency and precision. Index structures are typically applied to large-scale databases to realize efficient retrievals. However, the performance can be affected by quantization errors. Moreover, the ambiguousness of user-provided examples may also degrade the performance, when compared with traditional image retrieval methods. Sketch-based image retrieval systems that preserve the index structure are challenging. In this paper, we propose an effective sketch-based image retrieval approach with re-ranking and relevance feedback schemes. Our approach makes full use of the semantics in query sketches and the top ranked images of the initial results. We also apply relevance feedback to find more relevant images for the input query sketch. The integration of the two schemes results in mutual benefits and improves the performance of the sketch-based image retrieval.

  10. Methods of computing vocabulary size for the two-parameter rank distribution

    Science.gov (United States)

    Edmundson, H. P.; Fostel, G.; Tung, I.; Underwood, W.

    1972-01-01

    A summation method is described for computing the vocabulary size for given parameter values in the 1- and 2-parameter rank distributions. Two methods of determining the asymptotes for the family of 2-parameter rank-distribution curves are also described. Tables are computed and graphs are drawn relating pairs of parameter values to the vocabulary size. The partial product formula for the Riemann zeta function is investigated as an approximation to the partial sum formula for the Riemann zeta function. An error bound is established that indicates that the partial product should not be used to approximate the partial sum in calculating the vocabulary size for the 2-parameter rank distribution.

  11. Selection and ranking of occupational safety indicators based on fuzzy AHP: A case study in road construction companies

    OpenAIRE

    Janackovic, Goran Lj.; Suzana M. Savic; Stankovic, Miomir S.

    2013-01-01

    This paper presents the factors, performance, and indicators of occupational safety, as well as a method to select and rank occupational safety indicators based on the expert evaluation method and the fuzzy analytic hierarchy process (fuzzy AHP). A case study is done on road construction companies in Serbia. The key safety performance indicators for the road construction industry are identified and ranked according to the results of a survey that included experts who assessed occupational saf...

  12. A network-based dynamical ranking system for competitive sports

    National Research Council Canada - National Science Library

    Motegi, Shun; Masuda, Naoki

    2012-01-01

    From the viewpoint of networks, a ranking system for players or teams in sports is equivalent to a centrality measure for sports networks, whereby a directed link represents the result of a single game...

  13. Personalized Profile Based Search Interface With Ranked and Clustered Display

    National Research Council Canada - National Science Library

    Kumar, Sachin; Oztekin, B. U; Ertoz, Levent; Singhal, Saurabh; Han, Euihong; Kumar, Vipin

    2001-01-01

    We have developed an experimental meta-search engine, which takes the snippets from traditional search engines and presents them to the user either in the form of clusters, indices or re-ranked list...

  14. Structural MRI-based detection of Alzheimer's disease using feature ranking and classification error.

    Science.gov (United States)

    Beheshti, Iman; Demirel, Hasan; Farokhian, Farnaz; Yang, Chunlan; Matsuda, Hiroshi

    2016-12-01

    This paper presents an automatic computer-aided diagnosis (CAD) system based on feature ranking for detection of Alzheimer's disease (AD) using structural magnetic resonance imaging (sMRI) data. The proposed CAD system is composed of four systematic stages. First, global and local differences in the gray matter (GM) of AD patients compared to the GM of healthy controls (HCs) are analyzed using a voxel-based morphometry technique. The aim is to identify significant local differences in the volume of GM as volumes of interests (VOIs). Second, the voxel intensity values of the VOIs are extracted as raw features. Third, the raw features are ranked using seven feature-ranking methods, namely statistical dependency (SD), mutual information (MI), information gain (IG), Pearson's correlation coefficient (PCC), t-test score (TS), Fisher's criterion (FC), and the Gini index (GI). The features with higher scores are more discriminative. To determine the number of top features, the estimated classification error based on a training set made up of the AD and HC groups is calculated, with the vector size that minimizes this error selected as the number of top discriminative features. Fourth, the classification is performed using a support vector machine (SVM). In addition, a data fusion approach among feature ranking methods is introduced to improve the classification performance. The proposed method is evaluated using a data-set from ADNI (130 AD and 130 HC) with 10-fold cross-validation. The classification accuracy of the proposed automatic system for the diagnosis of AD is up to 92.48% using the sMRI data. An automatic CAD system for the classification of AD based on feature-ranking methods and classification errors is proposed. In this regard, seven feature-ranking methods (i.e., SD, MI, IG, PCC, TS, FC, and GI) are evaluated. The optimal size of top discriminative features is determined by the classification error estimation in the training phase. The experimental results indicate that
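
    As a schematic of the ranking-then-error-estimation idea (not the paper's pipeline), the sketch below ranks synthetic features by a single t-test criterion and keeps the number of top features that minimizes cross-validated SVM error; the data, the candidate sizes and the use of scikit-learn are assumptions for illustration.

```python
import numpy as np
from scipy.stats import ttest_ind
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n_per_group, n_features = 60, 200
# Synthetic "voxel intensity" features: only the first 10 carry group signal.
X_ad = rng.standard_normal((n_per_group, n_features)); X_ad[:, :10] += 0.8
X_hc = rng.standard_normal((n_per_group, n_features))
X = np.vstack([X_ad, X_hc])
y = np.array([1] * n_per_group + [0] * n_per_group)

# Rank features by absolute t-statistic (one of several possible scores).
t_stat, _ = ttest_ind(X[y == 1], X[y == 0], axis=0)
order = np.argsort(-np.abs(t_stat))

# Estimate classification error for increasing numbers of top-ranked features
# and keep the size that minimizes it.
sizes = [5, 10, 20, 50, 100]
errors = []
for k in sizes:
    acc = cross_val_score(SVC(kernel="linear"), X[:, order[:k]], y, cv=10).mean()
    errors.append(1 - acc)
best_k = sizes[int(np.argmin(errors))]
print("best number of top-ranked features:", best_k)
```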

  15. Assessment of Entrepreneurial Territorial Attractiveness by the Ranking Method

    Science.gov (United States)

    Gavrilova, Marina A.; Shepelev, Victor M.; Kosyakova, Inessa V.; Belikova, Lyudmila F.; Chistik, Olga F.

    2016-01-01

    The relevance of the researched problem is caused by existence of differentiation in development of separate regional units (urban districts and municipalities) within the region. The aim of this article is to offer a method, which determines the level of differentiation in development of various components of the region, and also in producing a…

  16. Speech Denoising in White Noise Based on Signal Subspace Low-rank Plus Sparse Decomposition

    Directory of Open Access Journals (Sweden)

    yuan Shuai

    2017-01-01

    Full Text Available In this paper, a new subspace speech enhancement method using low-rank and sparse decomposition is presented. In the proposed method, we firstly structure the corrupted data as a Toeplitz matrix and estimate its effective rank for the underlying human speech signal. Then the low-rank and sparse decomposition is performed with the guidance of speech rank value to remove the noise. Extensive experiments have been carried out in white Gaussian noise condition, and experimental results show the proposed method performs better than conventional speech enhancement methods, in terms of yielding less residual noise and lower speech distortion.
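
    A heavily simplified stand-in for the low-rank part of such a decomposition is plain rank truncation of a Hankel-structured trajectory matrix followed by anti-diagonal averaging. The sketch below uses that substitution on a synthetic two-tone signal; the sparse term and the rank-estimation step described in the paper are omitted, and the frame length, rank and noise level are illustrative.

```python
import numpy as np

def hankelize(x, L):
    """Stack length-L windows of x into a Hankel-structured trajectory matrix."""
    K = len(x) - L + 1
    return np.column_stack([x[i:i + L] for i in range(K)])

def dehankelize(H):
    """Average the anti-diagonals back into a 1-D signal."""
    L, K = H.shape
    out, counts = np.zeros(L + K - 1), np.zeros(L + K - 1)
    for j in range(K):
        out[j:j + L] += H[:, j]
        counts[j:j + L] += 1
    return out / counts

def subspace_denoise(x, L=64, rank=8):
    H = hankelize(x, L)
    U, s, Vt = np.linalg.svd(H, full_matrices=False)
    H_low = (U[:, :rank] * s[:rank]) @ Vt[:rank]   # low-rank "speech" part
    return dehankelize(H_low)

rng = np.random.default_rng(0)
t = np.arange(2000) / 8000.0
clean = np.sin(2 * np.pi * 200 * t) + 0.5 * np.sin(2 * np.pi * 450 * t)
noisy = clean + 0.3 * rng.standard_normal(t.size)
denoised = subspace_denoise(noisy)
print("noisy RMSE:", np.sqrt(np.mean((noisy - clean) ** 2)).round(3),
      "denoised RMSE:", np.sqrt(np.mean((denoised - clean) ** 2)).round(3))
```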

  17. COMPARATIVE ANALYSIS OF SOME PROMINENT MCDM METHODS: A CASE OF RANKING SERBIAN BANKS

    Directory of Open Access Journals (Sweden)

    Dragisa Stanujkic

    2013-11-01

    Full Text Available In the literature, many multiple criteria decision making methods have been proposed. There are also a number of papers devoted to comparing their characteristics and performances. However, a definitive answer to the questions of which method is most suitable and which is most effective is still lacking. Therefore, in this paper, the use of some prominent multiple criteria decision making methods is considered on the example of ranking Serbian banks. The objective of this paper is not to determine which method is most appropriate for ranking banks, but to emphasize that the use of various multiple criteria decision making methods can sometimes produce different ranking orders of alternatives, to highlight some reasons which lead to different results, and to indicate that the different results obtained by different MCDM methods are not just a random event, but rather a reality.

  18. Improving predicted protein loop structure ranking using a Pareto-optimality consensus method.

    Science.gov (United States)

    Li, Yaohang; Rata, Ionel; Chiu, See-wing; Jakobsson, Eric

    2010-07-20

    Accurate protein loop structure models are important to understand functions of many proteins. Identifying the native or near-native models by distinguishing them from the misfolded ones is a critical step in protein loop structure prediction. We have developed a Pareto Optimal Consensus (POC) method, which is a consensus model ranking approach to integrate multiple knowledge- or physics-based scoring functions. The procedure of identifying the models of best quality in a model set includes: 1) identifying the models at the Pareto optimal front with respect to a set of scoring functions, and 2) ranking them based on the fuzzy dominance relationship to the rest of the models. We apply the POC method to a large number of decoy sets for loops of 4 to 12 residues in length using a functional space composed of several carefully-selected scoring functions: Rosetta, DOPE, DDFIRE, OPLS-AA, and a triplet backbone dihedral potential developed in our lab. Our computational results show that the sets of Pareto-optimal decoys, which are typically composed of approximately 20% or less of the overall decoys in a set, have a good coverage of the best or near-best decoys in more than 99% of the loop targets. Compared to the individual scoring function yielding best selection accuracy in the decoy sets, the POC method yields 23%, 37%, and 64% fewer false positives in distinguishing the native conformation and identifying a near-native model (RMSD …). … Using scoring functions based on Pareto optimality and fuzzy dominance, the POC method is effective in distinguishing the best loop models from the other ones within a loop model set.
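
    The first step, finding the decoys on the Pareto front of several scoring functions (with lower scores taken as better), is a standard non-dominance check. A minimal sketch with made-up scores follows; the fuzzy-dominance ranking of the second step is not shown.

```python
import numpy as np

def pareto_front(scores):
    """Indices of rows not dominated by any other row (all scores: lower is better)."""
    n = scores.shape[0]
    front = []
    for i in range(n):
        dominated = np.any(
            np.all(scores <= scores[i], axis=1) & np.any(scores < scores[i], axis=1)
        )
        if not dominated:
            front.append(i)
    return front

# Rows = candidate loop models, columns = scoring functions (values illustrative only).
scores = np.array([[-5.1, 210.0, 0.8],
                   [-4.0, 250.0, 1.1],
                   [-5.5, 205.0, 0.9],
                   [-3.2, 260.0, 1.5]])
print("Pareto-optimal decoys:", pareto_front(scores))
```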

  19. Systematic comparison of hedonic ranking and rating methods demonstrates few practical differences.

    Science.gov (United States)

    Kozak, Marcin; Cliff, Margaret A

    2013-08-01

    Hedonic ranking is one of the commonly used methods to evaluate consumer preferences. Some authors suggest that it is the best methodology for discriminating among products, while others recommend hedonic rating. These mixed findings suggest the statistical outcome(s) are dependent on the experimental conditions or a user's expectation of "what is" and "what is not" desirable for evaluating consumer preferences. Therefore, sensory and industry professionals may be uncertain or confused regarding the appropriate application of hedonic tests. This paper would like to put this controversy to rest, by evaluating 3 data sets (3 yogurts, 79 consumers; 6 yogurts, 109 consumers; 4 apple cultivars, 70 consumers) collected using the same consumers and by calculating nontied ranks from hedonic scores. Consumer responses were evaluated by comparing bivariate associations between the methods (nontied ranks, tied ranks, hedonic rating scores) using trellis displays, determining the number of consumers with discrepancies in their responses between the methods, and comparing mean values using conventional statistical analyses. Spearman's rank correlations (0.33-0.84) revealed significant differences between the methods for all products, whether or not means separation tests differentiated the products. The work illustrated the inherent biases associated with hedonic ranking and recommended alternate hedonic methodologies. © 2013 Institute of Food Technologists®

  20. Evaluation of an automatic dry eye test using MCDM methods and rank correlation.

    Science.gov (United States)

    Peteiro-Barral, Diego; Remeseiro, Beatriz; Méndez, Rebeca; Penedo, Manuel G

    2017-04-01

    Dry eye is an increasingly common disease in modern society which affects a wide range of population and has a negative impact on their daily activities, such as working with computers or driving. It can be diagnosed through an automatic clinical test for tear film lipid layer classification based on color and texture analysis. Up to now, researchers have mainly focused on the improvement of the image analysis step. However, there is still large room for improvement on the machine learning side. This paper presents a methodology to optimize this problem by means of class binarization, feature selection, and classification. The methodology can be used as a baseline in other classification problems to provide several solutions and evaluate their performance using a set of representative metrics and decision-making methods. When several decision-making methods are used, they may offer disagreeing rankings that will be solved by conflict handling in which rankings are merged into a single one. The experimental results prove the effectiveness of the proposed methodology in this domain. Also, its general purpose allows to adapt it to other classification problems in different fields such as medicine and biology.

  1. Ranking differentially expressed genes from Affymetrix gene expression data: methods with reproducibility, sensitivity, and specificity

    OpenAIRE

    Shimizu Kentaro; Nakai Yuji; Kadota Koji

    2009-01-01

    Abstract Background To identify differentially expressed genes (DEGs) from microarray data, users of the Affymetrix GeneChip system need to select both a preprocessing algorithm to obtain expression-level measurements and a way of ranking genes to obtain the most plausible candidates. We recently recommended suitable combinations of a preprocessing algorithm and gene ranking method that can be used to identify DEGs with a higher level of sensitivity and specificity. However, in addition to th...

  2. A note on ranking efficient DMUs based on a single virtual inefficient DMU in DEA

    Directory of Open Access Journals (Sweden)

    Farzad Rezai Balf

    2017-11-01

    Full Text Available In this paper we comment on the paper by Shetty and Pakkala [U. Shetty, T.P.M. Pakkala (2010), Ranking efficient DMUs based on single virtual inefficient DMU in DEA. OPSEARCH, 47(1), 50-72]. They proposed an approach to rank the efficient decision making units based on a virtual DMU in the constant returns to scale DEA model. The input and output levels of this virtual DMU are the averages of the inputs and outputs of all DMUs. We propose another method that selects the virtual DMU as a negative ideal DMU. Our approach does not require the existence of at least one inefficient DMU in the set of DMUs. In addition, it can be used in constant and variable returns to scale DEA models. This brief comment provides an alternative approach to their work.

  3. Ranking of CMIP5-based global climate models for India using compromise programming

    Science.gov (United States)

    Srinivasa Raju, K.; Sonali, P.; Nagesh Kumar, D.

    2017-05-01

    Thirty-six Coupled Model Intercomparison Project-5-based global climate models (GCMs) are explored to evaluate the performance of maximum ( T max) and minimum ( T min) temperature simulations for India covering 40 grid points. Three performance indicators used for evaluating GCMs are correlation coefficient (CC), normalised root mean square error (NRMSE) and skill score (SS). Entropy method is applied to compute the weights of the three indicators employed. However, equal weights are also considered as part of sensitivity analysis studies. Compromise programming (CP), a distance-based decision-making technique, is employed to rank the GCMs. Group decision-making approach is used to aggregate the ranking patterns obtained for individual grid points. A simple but effective ensemble approach is also suggested.
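
    One common form of the compromise programming distance, combined with entropy weights, can be sketched as follows; the GCM-by-indicator skill matrix is made up, and the indicators are assumed to be oriented so that larger values are better.

```python
import numpy as np

def entropy_weights(M):
    """Objective criterion weights from the Shannon entropy of a benefit-oriented matrix."""
    P = M / M.sum(axis=0)
    E = -(P * np.log(P + 1e-12)).sum(axis=0) / np.log(M.shape[0])
    d = 1 - E                      # degree of diversification per indicator
    return d / d.sum()

def compromise_ranking(M, weights, p=2):
    """One common L_p compromise programming metric: distance from the per-indicator ideal."""
    ideal, worst = M.max(axis=0), M.min(axis=0)
    rel = (ideal - M) / (ideal - worst + 1e-12)
    Lp = ((weights * rel) ** p).sum(axis=1) ** (1.0 / p)
    return np.argsort(Lp)          # smaller distance = better rank

# Illustrative GCM skill matrix: rows = GCMs, columns = CC, (1 - NRMSE), SS.
M = np.array([[0.85, 0.70, 0.65],
              [0.80, 0.75, 0.60],
              [0.90, 0.60, 0.70],
              [0.78, 0.72, 0.62]])
w = entropy_weights(M)
print("entropy weights:", np.round(w, 3))
print("GCMs ranked best to worst:", list(compromise_ranking(M, w)))
```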

  4. Low-rank approximations with sparse factors II: Penalized methods with discrete Newton-like iterations

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, Zhenyue [Zhejiang Univ., Hangzhou (People' s Republic of China); Zha, Hongyuan [Pennsylvania State Univ., University Park, PA (United States); Simon, Horst [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)

    2006-07-31

    In this paper, we developed numerical algorithms for computing sparse low-rank approximations of matrices, and we also provided a detailed error analysis of the proposed algorithms together with some numerical experiments. The low-rank approximations are constructed in a certain factored form with the degree of sparsity of the factors controlled by some user-specified parameters. In this paper, we cast the sparse low-rank approximation problem in the framework of penalized optimization problems. We discuss various approximation schemes for the penalized optimization problem which are more amenable to numerical computations. We also include some analysis to show the relations between the original optimization problem and the reduced one. We then develop a globally convergent discrete Newton-like iterative method for solving the approximate penalized optimization problems. We also compare the reconstruction errors of the sparse low-rank approximations computed by our new methods with those obtained using the methods in the earlier paper and several other existing methods for computing sparse low-rank approximations. Numerical examples show that the penalized methods are more robust and produce approximations with factors which have fewer columns and are sparser.

  5. Ranking alternatives based on imprecise multi-criteria data and pairwise overlap dominance relations

    DEFF Research Database (Denmark)

    Franco de los Rios, Camilo Andres; Hougaard, Jens Leth; Nielsen, Kurt

    This paper explores a multi-criteria outranking methodology that is designed to both handle uncertain and imprecise data in describing alternatives as well as treating the decision maker's preference information in a sensible way that reflects the difficulties in articulating preferences. Based on fuzzy interval degrees, representing and measuring data imprecision, this procedure obtains a set of semi-equivalence classes assigning an intransitive order on the alternatives. Relevance measures are then explored for ranking alternatives with respect to the semi-equivalence classes, and a final… An illustrative example is given for comparison with standard methods like PROMETHEE. The proposed methodology takes into account the risk attitudes of decision makers, organizing the alternatives and ranking them according to their relevance. The whole interactive decision support allows understanding…

  6. L1-norm low-rank matrix factorization by variational Bayesian method.

    Science.gov (United States)

    Zhao, Qian; Meng, Deyu; Xu, Zongben; Zuo, Wangmeng; Yan, Yan

    2015-04-01

    The L1-norm low-rank matrix factorization (LRMF) has been attracting much attention due to its wide applications to computer vision and pattern recognition. In this paper, we construct a new hierarchical Bayesian generative model for the L1-norm LRMF problem and design a mean-field variational method to automatically infer all the parameters involved in the model by closed-form equations. The variational Bayesian inference in the proposed method can be understood as solving a weighted LRMF problem with different weights on matrix elements based on their significance and with L2-regularization penalties on parameters. Throughout the inference process of our method, the weights imposed on the matrix elements can be adaptively fitted so that the adverse influence of noises and outliers embedded in data can be largely suppressed, and the parameters can be appropriately regularized so that the generalization capability of the problem can be statistically guaranteed. The robustness and the efficiency of the proposed method are substantiated by a series of synthetic and real data experiments, as compared with the state-of-the-art L1-norm LRMF methods. Especially, attributed to the intrinsic generalization capability of the Bayesian methodology, our method can always predict better on the unobserved ground truth data than existing methods.

  7. MicroRNA prediction with a novel ranking algorithm based on random walks.

    Science.gov (United States)

    Xu, Yunpen; Zhou, Xuefeng; Zhang, Weixiong

    2008-07-01

    MicroRNAs (miRNAs) play essential roles in post-transcriptional gene regulation in animals and plants. Several existing computational approaches have been developed to complement experimental methods in the discovery of miRNAs that are expressed restrictively in specific environmental conditions or cell types. These computational methods require a sufficient number of characterized miRNAs as training samples, and rely on genome annotation to reduce the number of predicted putative miRNAs. However, most sequenced genomes have not been well annotated and many of them have very few experimentally characterized miRNAs. As a result, the existing methods are not effective or even feasible for identifying miRNAs in these genomes. Aiming at identifying miRNAs from genomes with few known miRNAs and/or little annotation, we propose and develop a novel miRNA prediction method, miRank, based on our new random walks-based ranking algorithm. We first tested our method on the Homo sapiens genome; using very few known human miRNAs as samples, our method achieved a prediction accuracy greater than 95%. We then applied our method to predict 200 miRNAs in Anopheles gambiae, which is the most important vector of malaria in Africa. Our further study showed that 78 out of the 200 putative miRNA precursors encode mature miRNAs that are conserved in at least one other animal species. These conserved putative miRNAs are good candidates for further experimental study to understand malaria infection. MiRank is programmed in Matlab on the Windows platform. The source code is available upon request.
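
    MiRank's specific graph construction is not reproduced here, but the underlying machinery, a random walk with restart from the known miRNAs on a candidate-similarity graph, can be sketched as follows with an illustrative toy graph and seed.

```python
import numpy as np

def random_walk_ranking(W, seeds, restart=0.3, tol=1e-9):
    """Random walk with restart on a weighted similarity graph.
    W[i, j] >= 0 is the similarity between candidates i and j; seeds are the
    indices of known positives. Returns a stationary ranking score per node."""
    P = W / W.sum(axis=0, keepdims=True)     # column-normalised transition matrix
    p0 = np.zeros(W.shape[0])
    p0[seeds] = 1.0 / len(seeds)             # restart distribution on known miRNAs
    p = p0.copy()
    while True:
        p_next = (1 - restart) * P @ p + restart * p0
        if np.abs(p_next - p).sum() < tol:
            return p_next
        p = p_next

# Toy 6-candidate similarity graph (symmetric, illustrative values).
W = np.array([[0, 3, 1, 0, 0, 0],
              [3, 0, 2, 1, 0, 0],
              [1, 2, 0, 0, 1, 0],
              [0, 1, 0, 0, 2, 1],
              [0, 0, 1, 2, 0, 2],
              [0, 0, 0, 1, 2, 0]], dtype=float)
scores = random_walk_ranking(W, seeds=[0])   # node 0 plays the role of a known miRNA
print("candidates ranked by score:", list(np.argsort(-scores)))
```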

  8. Rankings from Fuzzy Pairwise Comparisons

    NARCIS (Netherlands)

    van den Broek, P.M.; Noppen, J.A.R.; Mohammadian, M.

    2006-01-01

    We propose a new method for deriving rankings from fuzzy pairwise comparisons. It is based on the observation that quantification of the uncertainty of the pairwise comparisons should be used to obtain a better crisp ranking, instead of a fuzzified version of the ranking obtained from crisp pairwise

  9. Greedy bases in rank 2 quantum cluster algebras.

    Science.gov (United States)

    Lee, Kyungyong; Li, Li; Rupel, Dylan; Zelevinsky, Andrei

    2014-07-08

    We identify a quantum lift of the greedy basis for rank 2 coefficient-free cluster algebras. Our main result is that our construction does not depend on the choice of initial cluster, that it builds all cluster monomials, and that it produces bar-invariant elements. We also present several conjectures related to this quantum greedy basis and the triangular basis of Berenstein and Zelevinsky.

  10. A rank based social norms model of how people judge their levels of drunkenness whilst intoxicated

    Directory of Open Access Journals (Sweden)

    Simon C. Moore

    2016-09-01

    Full Text Available Abstract Background A rank based social norms model predicts that drinkers' judgements about their drinking will be based on the rank of their breath alcohol level amongst that of others in the immediate environment, rather than their actual breath alcohol level, with lower relative rank associated with greater feelings of safety. This study tested this hypothesis and examined how people judge their levels of drunkenness and the health consequences of their drinking whilst they are intoxicated in social drinking environments. Methods Breath alcohol testing of 1,862 people (mean age = 26.96 years; 61.86 % male) in drinking environments. A subset (N = 400) also answered four questions asking about their perceptions of their drunkenness and the health consequences of their drinking (plus background measures). Results Perceptions of drunkenness and the health consequences of drinking were regressed on: (a) breath alcohol level, (b) the rank of the breath alcohol level amongst that of others in the same environment, and (c) covariates. Only rank of breath alcohol level predicted perceptions: How drunk they felt (b = 3.78, 95 % CI 1.69 to 5.87), how extreme they regarded their drinking that night (b = 3.7, 95 % CI 1.3 to 6.20), how at risk their long-term health was due to their current level of drinking (b = 4.1, 95 % CI 0.2 to 8.0) and how likely they felt they would experience liver cirrhosis (b = 4.8, 95 % CI 0.7 to 8.8). People were more influenced by more sober others than by more drunk others. Conclusion Whilst intoxicated and in drinking environments, people base judgements regarding their drinking on how their level of intoxication ranks relative to that of others of the same gender around them, not on their actual levels of intoxication. Thus, when in the company of others who are intoxicated, drinkers were found to be more likely to underestimate their own level of drinking, drunkenness and associated risks. The implications of these results, for example

  11. Bibliometric Rankings of Journals Based on the Thomson Reuters Citations Database

    NARCIS (Netherlands)

    C-L. Chang (Chia-Lin); M.J. McAleer (Michael)

    2015-01-01

    Abstract Virtually all rankings of journals are based on citations, including self citations by journals and individual academics. The gold standard for bibliometric rankings based on citations data is the widely-used Thomson Reuters Web of Science (2014) citations database,

  12. Interpolation method for accurate affinity ranking of arrayed ligand analyte interactions

    NARCIS (Netherlands)

    Schasfoort, Richardus B.M.; Andree, Kiki Carlijn; van der Velde, N.; van der Kooi, A.; Stojanovic, Ivan; Terstappen, Leonardus Wendelinus Mathias Marie

    2016-01-01

    The values of the affinity constants (kd, ka, and KD) that are determined by label-free interaction analysis methods are affected by the ligand density. This article outlines a surface plasmon resonance (SPR) imaging method that yields high-throughput globally fitted affinity ranking values using a

  13. Iteratively reweighted generalized rank annihilation method 1. Improved handling of prediction bias

    NARCIS (Netherlands)

    Faber, N.M.; Ferre, J.; Boque, R.

    2001-01-01

    The generalized rank annihilation method (GRAM) is a method for curve resolution and calibration that uses two bilinear matrices simultaneously, i.e., one for the unknown and one for the calibration sample. A GRAM calculation amounts to solving an eigenvalue problem for which the eigenvalues are

  14. Fuzzy ranking based non-dominated sorting genetic algorithm-II for network overload alleviation

    Directory of Open Access Journals (Sweden)

    Pandiarajan K.

    2014-09-01

    Full Text Available This paper presents an effective method of network overload management in power systems. The three competing objectives, (1) generation cost, (2) transmission line overload and (3) real power loss, are optimized to provide pareto-optimal solutions. A fuzzy ranking based non-dominated sorting genetic algorithm-II (NSGA-II) is used to solve this complex nonlinear optimization problem. The minimization of competing objectives is done by generation rescheduling. Fuzzy ranking method is employed to extract the best compromise solution out of the available non-dominated solutions depending upon its highest rank. N-1 contingency analysis is carried out to identify the most severe lines and those lines are selected for outage. The effectiveness of the proposed approach is demonstrated for different contingency cases in IEEE 30 and IEEE 118 bus systems with smooth cost functions and their results are compared with other single objective evolutionary algorithms like Particle swarm optimization (PSO) and Differential evolution (DE). Simulation results show the effectiveness of the proposed approach to generate well distributed pareto-optimal non-dominated solutions of the multi-objective problem.
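
    The fuzzy ranking step used to pick the best compromise from the non-dominated set is typically a linear membership function per objective. A minimal sketch over an assumed set of non-dominated solutions follows; the objective values are invented, and the NSGA-II search itself is not shown.

```python
import numpy as np

def best_compromise(F):
    """Fuzzy ranking of non-dominated solutions (rows of F, objectives to minimise).
    Each objective gets a linear membership in [0, 1]; the solution with the largest
    normalised total membership is taken as the best compromise."""
    f_min, f_max = F.min(axis=0), F.max(axis=0)
    mu = (f_max - F) / (f_max - f_min + 1e-12)    # 1 at the best value, 0 at the worst
    mu_norm = mu.sum(axis=1) / mu.sum()
    return int(np.argmax(mu_norm)), mu_norm

# Illustrative non-dominated solutions: columns = generation cost ($/h),
# line overload (MVA), real power loss (MW).
F = np.array([[802.0, 35.0, 9.8],
              [815.0, 22.0, 10.5],
              [830.0, 15.0, 11.2]])
best, ranks = best_compromise(F)
print("membership values:", np.round(ranks, 3), "-> best compromise:", best)
```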

  15. APPLYING ROBUST RANKING METHOD IN TWO PHASE FUZZY OPTIMIZATION LINEAR PROGRAMMING PROBLEMS (FOLPP)

    Directory of Open Access Journals (Sweden)

    Monalisha Pattnaik

    2014-12-01

    Full Text Available Background: This paper explores the solutions to fuzzy optimization linear programming problems (FOLPP) where some parameters are fuzzy numbers. In practice, there are many problems in which all decision parameters are fuzzy numbers, and such problems are usually solved by either probabilistic programming or multi-objective programming methods. Methods: In this paper, using the concept of comparison of fuzzy numbers, a very effective method is introduced for solving these problems. The paper extends linear programming based problems to a fuzzy environment. With the problem assumptions, the optimal solution can still be theoretically obtained using the two phase simplex based method in a fuzzy environment. The fuzzy decision variables can be initially generated and then solved and improved sequentially using the fuzzy decision approach by introducing a robust ranking technique. Results and conclusions: The model is illustrated with an application and a post optimal analysis approach is obtained. The proposed procedure was programmed with MATLAB (R2009a) software for plotting the four dimensional slice diagram for the application. Finally, a numerical example is presented to illustrate the effectiveness of the theoretical results, and to gain additional managerial insights.
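
    Robust ranking integrates the midpoint of each α-cut of a fuzzy number over α in [0, 1], which for a triangular fuzzy number (a, b, c) reduces to (a + 2b + c)/4. The short sketch below converts assumed triangular objective coefficients to crisp ones this way; the subsequent two-phase simplex solve is not shown.

```python
def robust_rank_triangular(a, b, c):
    """Robust ranking index of a triangular fuzzy number (a, b, c):
    the integral over alpha in [0, 1] of 0.5 * (a_L(alpha) + a_U(alpha)),
    which evaluates to (a + 2*b + c) / 4 for the triangular case."""
    return (a + 2 * b + c) / 4.0

# Illustrative fuzzy objective coefficients of a small LP, as triangular numbers.
fuzzy_costs = {"x1": (2, 3, 4), "x2": (4, 5, 7)}
crisp_costs = {v: robust_rank_triangular(*tri) for v, tri in fuzzy_costs.items()}
print(crisp_costs)   # the resulting crisp LP can then be solved with any simplex solver
```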

  16. Application of Two Decision Making Methods in Analyzing Experts’ Ranking of Change Models

    Directory of Open Access Journals (Sweden)

    Maryam Nakhoda

    2011-10-01

    Full Text Available Using a consensus method and analyzing the ranked viewpoints, this paper aimed to select an appropriate change model for the Iranian academic libraries, so that these libraries can apply the selected model to overcome the challenges resulting from change and improve change management. The research questionnaire, including three change models selected from the literature together with criteria for ranking them, was submitted to a group of experts in Library and Information Science and Management. The analysis of the experts' viewpoints and the choice of the preferred option were done using methods in Multiple Attribute Decision Making (MADM). The findings suggested the selection of “Information Services Framework for Managing Change” by Penfold

  17. Flexible manufacturing system selection using preference ranking methods : A comparative study

    Directory of Open Access Journals (Sweden)

    Prasenjit Chatterjee

    2014-04-01

    Full Text Available Flexible manufacturing systems (FMSs offer opportunities for the manufacturers to improve their technology, competitiveness and profitability through a highly efficient and focused approach to manufacturing effectiveness. Justification, evaluation and selection of FMSs have now been receiving significant attention in the manufacturing environment. Evaluating alternative FMSs in the presence of multiple conflicting criteria and performance measures is often a difficult task for the decision maker. Preference ranking tools are special types of multi-criteria decision-making methods in which the decision maker’s preferences on criteria are aggregated together to arrive at the final evaluation and selection of the alternatives. This paper deals with the application of six most potential preference ranking methods for selecting the best FMS for a given manufacturing organization. It is observed that although the performances of these six methods are almost similar, ORESTE (Organization, Rangement Et Synthese De Donnes Relationnelles method slightly outperforms the others. These methods use some preference function or utility value or Besson ranking of criteria and alternatives, to indicate how much an alternative is preferred to the others. Most of these methods need quantification of criteria weights or different preference parameters, but ORESTE method, being an ordinal outranking approach, only requires ordinal data and attribute rankings according to their importance. Therefore, it is particularly applicable to those situations where the decision maker is unable to provide crisp evaluation data and attribute weights.

  18. Methods for evaluating and ranking transportation energy conservation programs. Final report

    Energy Technology Data Exchange (ETDEWEB)

    1981-04-30

    Methods for comparative evaluations of the Office of Transportation programs designed to help achieve significant reductions in the consumption of petroleum by different forms of transportation while maintaining public, commercial, and industrial mobility are described. Assessments of the programs in terms of petroleum savings, incremental costs to consumers of the technologies and activities, probability of technical and market success, and external impacts due to environmental, economic, and social factors are inputs to the evaluation methodologies presented. The methods described for evaluating the programs on a comparative basis are three ranking functions and a policy matrix listing important attributes of the programs and of the technologies and activities with which they are concerned. The ranking functions include the traditional net present value measure, which computes the present worth of petroleum savings less the present worth of costs. This is modified by dividing by the present value of DOE funding to obtain a net present value per program dollar, which is the second ranking function. The third ranking function is broader in that it takes external impacts into account and is known as the comprehensive ranking function. Procedures are described for making computations of the ranking functions and the attributes that require computation. Computations are made for the electric vehicle, Stirling engine, gas turbine, and MPG mileage guide program. (MCW)
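
    The first two ranking functions reduce to simple present-value arithmetic; a sketch with made-up yearly figures and an assumed discount rate is shown below.

```python
def present_value(flows, rate=0.07):
    """Discount a list of yearly amounts (year 0 first) to present value."""
    return sum(f / (1 + rate) ** t for t, f in enumerate(flows))

# Illustrative yearly figures (millions of dollars) for one conservation program.
petroleum_savings = [0, 5, 12, 20, 25]   # value of petroleum saved
incremental_costs = [2, 4, 6, 8, 8]      # extra costs to consumers
doe_funding       = [3, 3, 2, 1, 0]      # DOE program funding

# Ranking function 1: net present value of savings less costs.
npv = present_value(petroleum_savings) - present_value(incremental_costs)
# Ranking function 2: net present value per program dollar.
npv_per_program_dollar = npv / present_value(doe_funding)
print(round(npv, 2), round(npv_per_program_dollar, 2))
```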

  19. Hazard Ranking Method for Populations Exposed to Arsenic in Private Water Supplies: Relation to Bedrock Geology

    Directory of Open Access Journals (Sweden)

    Helen Crabbe

    2017-12-01

    Full Text Available Approximately one million people in the UK are served by private water supplies (PWS where main municipal water supply system connection is not practical or where PWS is the preferred option. Chronic exposure to contaminants in PWS may have adverse effects on health. South West England is an area with elevated arsenic concentrations in groundwater and over 9000 domestic dwellings here are supplied by PWS. There remains uncertainty as to the extent of the population exposed to arsenic (As, and the factors predicting such exposure. We describe a hazard assessment model based on simplified geology with the potential to predict exposure to As in PWS. Households with a recorded PWS in Cornwall were recruited to take part in a water sampling programme from 2011 to 2013. Bedrock geologies were aggregated and classified into nine Simplified Bedrock Geological Categories (SBGC, plus a cross-cutting “mineralized” area. PWS were sampled by random selection within SBGCs and some 508 households volunteered for the study. Transformations of the data were explored to estimate the distribution of As concentrations for PWS by SBGC. Using the distribution per SBGC, we predict the proportion of dwellings that would be affected by high concentrations and rank the geologies according to hazard. Within most SBGCs, As concentrations were found to have log-normal distributions. Across these areas, the proportion of dwellings predicted to have drinking water over the prescribed concentration value (PCV for As ranged from 0% to 20%. From these results, a pilot predictive model was developed calculating the proportion of PWS above the PCV for As and hazard ranking supports local decision making and prioritization. With further development and testing, this can help local authorities predict the number of dwellings that might fail the PCV for As, based on bedrock geology. The model presented here for Cornwall could be applied in areas with similar geologies

  20. Hazard Ranking Method for Populations Exposed to Arsenic in Private Water Supplies: Relation to Bedrock Geology.

    Science.gov (United States)

    Crabbe, Helen; Fletcher, Tony; Close, Rebecca; Watts, Michael J; Ander, E Louise; Smedley, Pauline L; Verlander, Neville Q; Gregory, Martin; Middleton, Daniel R S; Polya, David A; Studden, Mike; Leonardi, Giovanni S

    2017-12-01

    Approximately one million people in the UK are served by private water supplies (PWS) where main municipal water supply system connection is not practical or where PWS is the preferred option. Chronic exposure to contaminants in PWS may have adverse effects on health. South West England is an area with elevated arsenic concentrations in groundwater and over 9000 domestic dwellings here are supplied by PWS. There remains uncertainty as to the extent of the population exposed to arsenic (As), and the factors predicting such exposure. We describe a hazard assessment model based on simplified geology with the potential to predict exposure to As in PWS. Households with a recorded PWS in Cornwall were recruited to take part in a water sampling programme from 2011 to 2013. Bedrock geologies were aggregated and classified into nine Simplified Bedrock Geological Categories (SBGC), plus a cross-cutting "mineralized" area. PWS were sampled by random selection within SBGCs and some 508 households volunteered for the study. Transformations of the data were explored to estimate the distribution of As concentrations for PWS by SBGC. Using the distribution per SBGC, we predict the proportion of dwellings that would be affected by high concentrations and rank the geologies according to hazard. Within most SBGCs, As concentrations were found to have log-normal distributions. Across these areas, the proportion of dwellings predicted to have drinking water over the prescribed concentration value (PCV) for As ranged from 0% to 20%. From these results, a pilot predictive model was developed calculating the proportion of PWS above the PCV for As and hazard ranking supports local decision making and prioritization. With further development and testing, this can help local authorities predict the number of dwellings that might fail the PCV for As, based on bedrock geology. The model presented here for Cornwall could be applied in areas with similar geologies. Application of the method

  1. Reference Information Based Remote Sensing Image Reconstruction with Generalized Nonconvex Low-Rank Approximation

    Directory of Open Access Journals (Sweden)

    Hongyang Lu

    2016-06-01

    Full Text Available Because of the contradiction between the spatial and temporal resolution of remote sensing images (RSI) and quality loss in the process of acquisition, it is of great significance to reconstruct RSI in remote sensing applications. Recent studies have demonstrated that reference image-based reconstruction methods have great potential for higher reconstruction performance, while lacking accuracy and quality of reconstruction. For this application, a new compressed sensing objective function incorporating a reference image as prior information is developed. We resort to the reference prior information inherent in interior and exterior data simultaneously to build a new generalized nonconvex low-rank approximation framework for RSI reconstruction. Specifically, the innovation of this paper consists of the following three respects: (1) we propose a nonconvex low-rank approximation for reconstructing RSI; (2) we inject reference prior information to overcome over-smoothed edges and texture detail losses; (3) on this basis, we combine conjugate gradient algorithms and a single-value threshold (SVT) simultaneously to solve the proposed algorithm. The performance of the algorithm is evaluated both qualitatively and quantitatively. Experimental results demonstrate that the proposed algorithm improves several dBs in terms of peak signal to noise ratio (PSNR) and preserves image details significantly compared to most of the current approaches without reference images as priors. In addition, the generalized nonconvex low-rank approximation of our approach is naturally robust to noise, and therefore, the proposed algorithm can handle low resolution with noisy inputs in a more unified framework.
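
    The singular value thresholding building block can be shown on its own. The minimal sketch below soft-thresholds the singular values of a synthetic noisy low-rank matrix; the nonconvex surrogate, the reference-image prior and the conjugate-gradient data-fidelity steps of the full algorithm are not shown, and the matrix size, rank, noise level and threshold are illustrative.

```python
import numpy as np

def svt(M, tau):
    """Singular value soft-thresholding: the proximal operator of the nuclear norm."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    s_shrunk = np.maximum(s - tau, 0.0)
    return (U * s_shrunk) @ Vt

rng = np.random.default_rng(0)
low_rank = rng.standard_normal((60, 5)) @ rng.standard_normal((5, 60))  # rank-5 "image"
noisy = low_rank + 0.5 * rng.standard_normal((60, 60))

denoised = svt(noisy, tau=5.0)
err_before = np.linalg.norm(noisy - low_rank) / np.linalg.norm(low_rank)
err_after = np.linalg.norm(denoised - low_rank) / np.linalg.norm(low_rank)
print(round(err_before, 3), round(err_after, 3))
```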

  2. Acceleration of MR parameter mapping using annihilating filter-based low rank hankel matrix (ALOHA).

    Science.gov (United States)

    Lee, Dongwook; Jin, Kyong Hwan; Kim, Eung Yeop; Park, Sung-Hong; Ye, Jong Chul

    2016-12-01

    MR parameter mapping is one of clinically valuable MR imaging techniques. However, increased scan time makes it difficult for routine clinical use. This article aims at developing an accelerated MR parameter mapping technique using annihilating filter based low-rank Hankel matrix approach (ALOHA). When a dynamic sequence can be sparsified using spatial wavelet and temporal Fourier transform, this results in a rank-deficient Hankel structured matrix that is constructed using weighted k-t measurements. ALOHA then utilizes the low rank matrix completion algorithm combined with a multiscale pyramidal decomposition to estimate the missing k-space data. Spin-echo inversion recovery and multiecho spin echo pulse sequences for T1 and T2 mapping, respectively, were redesigned to perform undersampling along the phase encoding direction according to Gaussian distribution. The missing k-space is reconstructed using ALOHA. Then, the parameter maps were constructed using nonlinear regression. Experimental results confirmed that ALOHA outperformed the existing compressed sensing algorithms. Compared with the existing methods, the reconstruction errors appeared scattered throughout the entire images rather than exhibiting systematic distortion along edges and the parameter maps. Given that many diagnostic errors are caused by the systematic distortion of images, ALOHA may have a great potential for clinical applications. Magn Reson Med 76:1848-1864, 2016. © 2016 International Society for Magnetic Resonance in Medicine.

  3. Hyperspectral Anomaly Detection Based on Low-Rank Representation and Learned Dictionary

    Directory of Open Access Journals (Sweden)

    Yubin Niu

    2016-03-01

    Full Text Available In this paper, a novel hyperspectral anomaly detector based on low-rank representation (LRR) and a learned dictionary (LD) is proposed. This method assumes that a two-dimensional matrix transformed from a three-dimensional hyperspectral image can be decomposed into two parts: a low-rank matrix representing the background and a sparse matrix standing for the anomalies. The direct application of the LRR model is sensitive to a tradeoff parameter that balances the two parts. To mitigate this problem, a learned dictionary is introduced into the decomposition process. The dictionary is learned from the whole image with a random selection process and can therefore be viewed as the spectra of the background only. The learned dictionary also reduces the computational cost. The statistical characteristics of the sparse matrix allow the application of a basic anomaly detection method to obtain detection results. Experimental results demonstrate that, compared to other anomaly detection methods, the proposed method based on LRR and LD is robust and achieves a satisfactory anomaly detection result.

  4. PROPOSITION OF A METHOD TO MEASURE RANKINGS USING THE DELPHI TECHNIQUE

    Directory of Open Access Journals (Sweden)

    Sidney Chaves

    2013-01-01

    Full Text Available The Delphi technique has been widely used in research, both in its traditional form, focused on estimations, and in its ranking-type form. However, despite the technique having been established for over 60 years, the academic literature still lacks properly structured and complete schemes or scripts for applying the Delphi technique in its ranking-type form. To fill this gap, a new script is presented in this paper: the Method for Achieving Rankings Using the Delphi Technique (MARD). In addition, a study in which MARD was applied is described. Because it is derived from scripts that have already been tested, MARD takes into account several aspects that the authors of those scripts consider relevant, and it also presents, in a more detailed way, the steps to be performed across all the stages required to carry out a Delphi panel in its ranking-type form. MARD proved to be complete and robust, providing adequate methodological support for the Delphi panel presented as an example, and revealing potential to be generalized and therefore applied to other sorts of ranking-type Delphi panels.

  5. Ranking and selection of commercial off-the-shelf using fuzzy distance based approach

    Directory of Open Access Journals (Sweden)

    Rakesh Garg

    2015-06-01

    Full Text Available There has been tremendous growth in the use of the component-based software engineering (CBSE) approach for the development of software systems. The selection of the best-suited COTS components that fulfil the necessary requirements for software development has become a major challenge for software developers. The complexity of the optimal selection problem increases with an increase in alternative potential COTS components and the corresponding selection criteria. In this research paper, the problem of ranking and selecting Data Base Management System (DBMS) components is modeled as a multi-criteria decision-making problem. A ‘Fuzzy Distance Based Approach’ (FDBA) method is proposed for the optimal ranking and selection of DBMS COTS components of an e-payment system based on 14 selection criteria grouped under three major categories, i.e., ‘Vendor Capabilities’, ‘Business Issues’ and ‘Cost’. The results of this method are compared with those of the Analytical Hierarchy Process (AHP), a typical multi-criteria decision-making approach. The proposed methodology is explained with an illustrative example.

  6. Prewhitening for Rank-Deficient Noise in Subspace Methods for Noise Reduction

    DEFF Research Database (Denmark)

    Hansen, Per Christian; Jensen, Søren Holdt

    2005-01-01

    A fundamental issue in connection with subspace methods for noise reduction is that the covariance matrix for the noise is required to have full rank, in order for the prewhitening step to be defined. However, there are important cases where this requirement is not fulfilled, e.g., when the noise...

  7. Combinatoric Models of Information Retrieval Ranking Methods and Performance Measures for Weakly-Ordered Document Collections

    Science.gov (United States)

    Church, Lewis

    2010-01-01

    This dissertation answers three research questions: (1) What are the characteristics of a combinatoric measure, based on the Average Search Length (ASL), that performs the same as a probabilistic version of the ASL?; (2) Does the combinatoric ASL measure produce the same performance result as the one that is obtained by ranking a collection of…

  8. RANWAR: rank-based weighted association rule mining from gene expression and methylation data.

    Science.gov (United States)

    Mallik, Saurav; Mukhopadhyay, Anirban; Maulik, Ujjwal

    2015-01-01

    Ranking of association rules is currently an interesting topic in data mining and bioinformatics. The huge number of rules over items (or genes) produced by association rule mining (ARM) algorithms confuses the decision maker. In this article, we propose a weighted rule-mining technique (RANWAR, or rank-based weighted association rule mining) to rank the rules using two novel rule-interestingness measures, viz., the rank-based weighted condensed support (wcs) and weighted condensed confidence (wcc) measures, to bypass this problem. These measures depend on the ranks of the items (genes). Using the rank, we assign a weight to each item. RANWAR generates far fewer frequent itemsets than the state-of-the-art association rule mining algorithms, and thus reduces execution time. We run RANWAR on gene expression and methylation datasets. The genes of the top rules are biologically validated by Gene Ontologies (GOs) and KEGG pathway analyses. Many top-ranked rules extracted by RANWAR that hold poor ranks in traditional Apriori are highly biologically significant to the related diseases. Finally, the top rules produced by RANWAR that are not found by Apriori are reported.
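
    The sketch below illustrates the general idea of rank-derived item weights folded into a support measure. The specific weight and "weighted support" formulas here are illustrative assumptions, not the published wcs/wcc definitions, and the toy transactions and gene ranks are made up.

```python
from itertools import combinations

# toy transactions over genes g1..g5 and an assumed importance ranking (1 = most important)
transactions = [{"g1", "g2", "g3"}, {"g1", "g2"}, {"g2", "g4"}, {"g1", "g2", "g5"}]
rank = {"g1": 1, "g2": 2, "g3": 3, "g4": 4, "g5": 5}

def item_weight(item, n_items):
    # higher-ranked items (smaller rank number) get larger weights
    return (n_items - rank[item] + 1) / n_items

def weighted_support(itemset, transactions):
    """Plain support scaled by the mean rank-based weight of the items.

    An illustrative stand-in for a rank-weighted support measure,
    not the RANWAR formula.
    """
    support = sum(itemset <= t for t in transactions) / len(transactions)
    w = sum(item_weight(i, len(rank)) for i in itemset) / len(itemset)
    return w * support

for pair in combinations(sorted(rank), 2):
    print(pair, round(weighted_support(set(pair), transactions), 3))
```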

  9. Unsupervised ensemble ranking of terms in electronic health record notes based on their importance to patients.

    Science.gov (United States)

    Chen, Jinying; Yu, Hong

    2017-04-01

    Allowing patients to access their own electronic health record (EHR) notes through online patient portals has the potential to improve patient-centered care. However, EHR notes contain abundant medical jargon that can be difficult for patients to comprehend. One way to help patients is to reduce information overload and help them focus on medical terms that matter most to them. Targeted education can then be developed to improve patient EHR comprehension and the quality of care. The aim of this work was to develop FIT (Finding Important Terms for patients), an unsupervised natural language processing (NLP) system that ranks medical terms in EHR notes based on their importance to patients. We built FIT on a new unsupervised ensemble ranking model derived from the biased random walk algorithm to combine heterogeneous information resources for ranking candidate terms from each EHR note. Specifically, FIT integrates four single views (rankers) for term importance: patient use of medical concepts, document-level term salience, word co-occurrence based term relatedness, and topic coherence. It also incorporates partial information of term importance as conveyed by terms' unfamiliarity levels and semantic types. We evaluated FIT on 90 expert-annotated EHR notes and used the four single-view rankers as baselines. In addition, we implemented three benchmark unsupervised ensemble ranking methods as strong baselines. FIT achieved 0.885 AUC-ROC for ranking candidate terms from EHR notes to identify important terms. When including term identification, the performance of FIT for identifying important terms from EHR notes was 0.813 AUC-ROC. Both performance scores significantly exceeded the corresponding scores from the four single rankers (P<0.001). FIT also outperformed the three ensemble rankers for most metrics. Its performance is relatively insensitive to its parameter. FIT can automatically identify EHR terms important to patients. It may help develop future interventions

  10. Selection and ranking of occupational safety indicators based on fuzzy AHP: A case study in road construction companies

    Directory of Open Access Journals (Sweden)

    Janackovic, Goran Lj.

    2013-11-01

    Full Text Available This paper presents the factors, performance, and indicators of occupational safety, as well as a method to select and rank occupational safety indicators based on the expert evaluation method and the fuzzy analytic hierarchy process (fuzzy AHP). A case study was conducted on road construction companies in Serbia. The key safety performance indicators for the road construction industry are identified and ranked according to the results of a survey that included experts who assessed occupational safety risks in these companies. The case study confirmed that organisational factors have a dominant effect on the quality of the occupational health and safety management system in Serbian road construction companies.

  11. Ranking Based Locality Sensitive Hashing Enabled Cancelable Biometrics: Index-of-Max Hashing

    OpenAIRE

    Jin, Zhe; Lai, Yen-Lung; Hwang, Jung-Yeon; Kim, Soohyung; Teoh, Andrew Beng Jin

    2017-01-01

    In this paper, we propose a ranking based locality sensitive hashing inspired two-factor cancelable biometrics, dubbed "Index-of-Max" (IoM) hashing for biometric template protection. With externally generated random parameters, IoM hashing transforms a real-valued biometric feature vector into discrete index (max ranked) hashed code. We demonstrate two realizations from IoM hashing notion, namely Gaussian Random Projection based and Uniformly Random Permutation based hashing schemes. The disc...

  12. A result-driven minimum blocking method for PageRank parallel computing

    Science.gov (United States)

    Tao, Wan; Liu, Tao; Yu, Wei; Huang, Gan

    2017-01-01

    Matrix blocking is a common method for improving the computational efficiency of PageRank, but the blocking rules are hard to determine and the subsequent calculation is complicated. To tackle these problems, we propose a minimum blocking method driven by result needs to accomplish a parallel implementation of the PageRank algorithm. The minimum blocking stores only the elements that are necessary for the result matrix. In return, the subsequent calculation becomes simple and the I/O transmission cost is cut down. We conducted experiments on several matrices of different data sizes and sparsity degrees. The results show that the proposed method has better computational efficiency than traditional blocking methods.
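
    For orientation, a plain power-iteration PageRank on a sparse adjacency matrix is sketched below; it does not implement the paper's result-driven minimum blocking scheme, and the damping factor and toy graph are arbitrary.

```python
import numpy as np
from scipy.sparse import csr_matrix

def pagerank(adj, d=0.85, tol=1e-10, max_iter=200):
    """Power-iteration PageRank on a (sparse) adjacency matrix."""
    n = adj.shape[0]
    out_deg = np.asarray(adj.sum(axis=1)).ravel()
    out_deg[out_deg == 0] = 1.0   # avoid division by zero; dangling-node mass simply leaks in this sketch
    r = np.full(n, 1.0 / n)
    for _ in range(max_iter):
        r_new = (1 - d) / n + d * (adj.T @ (r / out_deg))
        if np.abs(r_new - r).sum() < tol:
            break
        r = r_new
    return r

# tiny 4-node web graph
edges = np.array([[0, 1, 1, 0],
                  [0, 0, 1, 0],
                  [1, 0, 0, 1],
                  [0, 0, 1, 0]], dtype=float)
print(pagerank(csr_matrix(edges)).round(3))
```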

  13. Ranking and similarity of conventional, microwave and ultrasound element sequential extraction methods.

    Science.gov (United States)

    Relić, Dubravka; Héberger, Károly; Sakan, Sanja; Škrbić, Biljana; Popović, Aleksandar; Đorđević, Dragana

    2018-01-03

    This study aims to compare three extraction techniques across four sequential element extraction steps from soil and sediment samples that were taken from the location of the Pančevo petrochemical industry (Serbia). Elements were extracted using three different techniques: conventional, microwave and ultrasound extraction. A novel procedure - sum of the ranking differences (SRD) - was able to rank the techniques and elements, to see whether this method is a suitable tool to reveal the similarities and dissimilarities in element extraction techniques, provided that a proper ranking reference is available. The concentrations of the following elements were determined by ICP-OES: Al, Ba, Ca, Cd, Co, Cr, Cu, Fe, K, Mg, Mn, Na, Ni, Pb, Si, Sn, Sr, V and Zn. The different efficiencies and recovery values of element concentrations for each of the three extraction techniques were examined using the CRM BCR-701. Using SRD, we obtained better separation between the different extraction techniques and steps when ranking their differences across the samples, while lower separation was obtained across the analysed elements. Applying this method to order the elements could be useful for three purposes: (i) to find possible associations among the elements; (ii) to find elements that have outlier concentrations; or (iii) to detect differences in the geochemical origin or behaviour of elements. Cross-validation of the SRD values in combination with cluster and principal component analysis revealed the same groups of extraction steps and techniques. Copyright © 2018 Elsevier Ltd. All rights reserved.
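
    A minimal sketch of the core SRD computation is given below, assuming the row-wise mean as the ranking reference; the published SRD procedure also includes validation steps (e.g., comparison with random rankings) that are not shown, and the toy data are made up.

```python
import numpy as np
from scipy.stats import rankdata

def srd(data, reference=None):
    """Sum of ranking differences for each column (method) of `data`.

    Rows are objects (e.g., samples or elements), columns are the methods
    to be compared. The reference is the row-wise mean by default;
    a smaller SRD means the method is closer to the reference ranking.
    """
    data = np.asarray(data, dtype=float)
    if reference is None:
        reference = data.mean(axis=1)              # consensus reference
    ref_rank = rankdata(reference)
    col_ranks = np.apply_along_axis(rankdata, 0, data)
    return np.abs(col_ranks - ref_rank[:, None]).sum(axis=0)

# toy example: 6 objects measured by 3 extraction techniques
data = [[1.0, 1.2, 3.0],
        [2.1, 2.0, 2.5],
        [3.2, 3.1, 1.0],
        [4.0, 4.3, 4.1],
        [5.5, 5.1, 6.0],
        [6.2, 6.0, 5.2]]
print(srd(data))   # one SRD value per technique
```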

  14. Weakly intrusive low-rank approximation method for nonlinear parameter-dependent equations

    KAUST Repository

    Giraldi, Loic

    2017-06-30

    This paper presents a weakly intrusive strategy for computing a low-rank approximation of the solution of a system of nonlinear parameter-dependent equations. The proposed strategy relies on a Newton-like iterative solver which only requires evaluations of the residual of the parameter-dependent equation and of a preconditioner (such as the differential of the residual) for instances of the parameters independently. The algorithm provides an approximation of the set of solutions associated with a possibly large number of instances of the parameters, with a computational complexity which can be orders of magnitude lower than when using the same Newton-like solver for all instances of the parameters. The reduction of complexity requires efficient strategies for obtaining low-rank approximations of the residual, of the preconditioner, and of the increment at each iteration of the algorithm. For the approximation of the residual and the preconditioner, weakly intrusive variants of the empirical interpolation method are introduced, which require evaluations of entries of the residual and the preconditioner. Then, an approximation of the increment is obtained by using a greedy algorithm for low-rank approximation, and a low-rank approximation of the iterate is finally obtained by using a truncated singular value decomposition. When the preconditioner is the differential of the residual, the proposed algorithm is interpreted as an inexact Newton solver for which a detailed convergence analysis is provided. Numerical examples illustrate the efficiency of the method.

  15. Efficient completion for corrupted low-rank images via alternating direction method

    Science.gov (United States)

    Li, Wei; Zhao, Lei; Xu, Duanqing; Lu, Dongming

    2014-05-01

    We propose an efficient and easy-to-implement method to solve the inpainting problem for low-rank images, following recent studies on low-rank matrix completion. In general, our method has three steps: first, corresponding to the three channels of the RGB color space, an incomplete image is split into three incomplete matrices; second, each matrix is restored by solving a convex problem derived from the nuclear norm relaxation; finally, the three recovered matrices are merged to produce the final output. During the process, in order to efficiently solve the nuclear norm minimization problem, we employ the alternating direction method. Beyond the basic image inpainting problem, our method also handles cases where corrupted images not only have missing values but also have noisy entries. Our experiments show that our method outperforms the existing inpainting techniques both quantitatively and qualitatively. We also demonstrate that our method is capable of processing many other situations, including block-wise low-rank image completion, large-scale image restoration, and object removal.
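
    As an illustration of the per-channel low-rank completion idea, the sketch below fills missing entries with a simple iterative singular-value soft-thresholding loop; this is a simplified stand-in for the nuclear-norm ADMM solver described above, and the threshold, iteration count, and toy data are arbitrary.

```python
import numpy as np

def complete_channel(channel, mask, tau=5.0, n_iter=200):
    """Fill missing entries of one color channel by iterative singular value
    soft-thresholding (a simple stand-in for a nuclear-norm ADMM solver)."""
    X = np.where(mask, channel, 0.0).astype(float)
    for _ in range(n_iter):
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        X = (U * np.maximum(s - tau, 0.0)) @ Vt   # low-rank shrinkage
        X[mask] = channel[mask]                   # keep the observed pixels
    return X

# toy low-rank "image" channel with roughly 40% of its pixels missing
rng = np.random.default_rng(1)
truth = rng.random((60, 4)) @ rng.random((4, 80)) * 255
mask = rng.random(truth.shape) > 0.4
restored = complete_channel(truth, mask)
print(np.abs(restored - truth)[~mask].mean())     # error on the missing pixels
```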

  16. Fuzzy Group Decision Making Approach for Ranking Work Stations Based on Physical Pressure

    Directory of Open Access Journals (Sweden)

    Hamed Salmanzadeh

    2014-06-01

    Full Text Available This paper proposes a Fuzzy Group Decision Making approach for ranking work stations based on physical pressure. The fuzzy group decision making approach allows experts to evaluate different ergonomic factors using linguistic terms such as very high, high, medium, low, and very low, rather than precise numerical values. In this way, there is no need to measure parameters, and the evaluation can easily be made as a group. Because ergonomic work content and situations involve multiple parameters and uncertainties, fuzzy group decision making is well suited to evaluating such an elusive concept. A case study was conducted to apply the approach and illustrate its use in ergonomic assessment and in ranking work stations based on work pressure; it was found that this approach provides flexibility, practicality, and efficiency in ergonomic decision making. The normalized defuzzified values resulting from this method were compared with the result of the quantitative assessment of the Automotive Assembly Work Sheet; the proposed method's result is approximately 10% lower than that of the Automotive Assembly Work Sheet.
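
    A minimal sketch of the kind of computation involved is shown below: linguistic ratings are mapped to triangular fuzzy numbers, averaged across experts, and defuzzified by the centroid to rank work stations. The scale values, ratings, and aggregation choices are assumptions for illustration, not the paper's exact procedure.

```python
import numpy as np

# assumed linguistic scale as triangular fuzzy numbers (low, mid, high)
SCALE = {
    "very low":  (0.0, 0.0, 0.25),
    "low":       (0.0, 0.25, 0.5),
    "medium":    (0.25, 0.5, 0.75),
    "high":      (0.5, 0.75, 1.0),
    "very high": (0.75, 1.0, 1.0),
}

def aggregate(ratings):
    """Average the experts' triangular fuzzy ratings component-wise."""
    tfns = np.array([SCALE[r] for r in ratings], dtype=float)
    return tfns.mean(axis=0)

def defuzzify(tfn):
    """Centroid of a triangular fuzzy number."""
    return float(sum(tfn)) / 3.0

# hypothetical physical-pressure ratings of three work stations by three experts
stations = {
    "station A": ["high", "very high", "medium"],
    "station B": ["low", "medium", "low"],
    "station C": ["very high", "high", "high"],
}
scores = {s: defuzzify(aggregate(r)) for s, r in stations.items()}
for s, v in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{s}: physical-pressure score {v:.3f}")
```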

  17. Acceleration of MR parameter mapping using annihilating filter‐based low rank hankel matrix (ALOHA)

    National Research Council Canada - National Science Library

    Lee, Dongwook; Jin, Kyong Hwan; Kim, Eung Yeop; Park, Sung‐Hong; Ye, Jong Chul

    2016-01-01

    .... However, increased scan time makes it difficult for routine clinical use. This article aims at developing an accelerated MR parameter mapping technique using annihilating filter based low-rank Hankel matrix approach (ALOHA...

  18. Multiple graph regularized protein domain ranking

    Directory of Open Access Journals (Sweden)

    Wang Jim

    2012-11-01

    Full Text Available Background: Protein domain ranking is a fundamental task in structural biology. Most protein domain ranking methods rely on the pairwise comparison of protein domains while neglecting the global manifold structure of the protein domain database. Recently, graph regularized ranking that exploits the global structure of the graph defined by the pairwise similarities has been proposed. However, the existing graph regularized ranking methods are very sensitive to the choice of the graph model and parameters, and this remains a difficult problem for most of the protein domain ranking methods. Results: To tackle this problem, we have developed the Multiple Graph regularized Ranking algorithm, MultiG-Rank. Instead of using a single graph to regularize the ranking scores, MultiG-Rank approximates the intrinsic manifold of protein domain distribution by combining multiple initial graphs for the regularization. Graph weights are learned with ranking scores jointly and automatically, by alternately minimizing an objective function in an iterative algorithm. Experimental results on a subset of the ASTRAL SCOP protein domain database demonstrate that MultiG-Rank achieves a better ranking performance than single graph regularized ranking methods and pairwise similarity based ranking methods. Conclusion: The problem of graph model and parameter selection in graph regularized protein domain ranking can be solved effectively by combining multiple graphs. This aspect of generalization introduces a new frontier in applying multiple graphs to solving protein domain ranking applications.

  19. Multiple graph regularized protein domain ranking

    KAUST Repository

    Wang, Jim Jing-Yan

    2012-11-19

    Background: Protein domain ranking is a fundamental task in structural biology. Most protein domain ranking methods rely on the pairwise comparison of protein domains while neglecting the global manifold structure of the protein domain database. Recently, graph regularized ranking that exploits the global structure of the graph defined by the pairwise similarities has been proposed. However, the existing graph regularized ranking methods are very sensitive to the choice of the graph model and parameters, and this remains a difficult problem for most of the protein domain ranking methods. Results: To tackle this problem, we have developed the Multiple Graph regularized Ranking algorithm, MultiG-Rank. Instead of using a single graph to regularize the ranking scores, MultiG-Rank approximates the intrinsic manifold of protein domain distribution by combining multiple initial graphs for the regularization. Graph weights are learned with ranking scores jointly and automatically, by alternately minimizing an objective function in an iterative algorithm. Experimental results on a subset of the ASTRAL SCOP protein domain database demonstrate that MultiG-Rank achieves a better ranking performance than single graph regularized ranking methods and pairwise similarity based ranking methods. Conclusion: The problem of graph model and parameter selection in graph regularized protein domain ranking can be solved effectively by combining multiple graphs. This aspect of generalization introduces a new frontier in applying multiple graphs to solving protein domain ranking applications. 2012 Wang et al; licensee BioMed Central Ltd.

  20. How to Rank Journals.

    Science.gov (United States)

    Bradshaw, Corey J A; Brook, Barry W

    2016-01-01

    There are now many methods available to assess the relative citation performance of peer-reviewed journals. Regardless of their individual faults and advantages, citation-based metrics are used by researchers to maximize the citation potential of their articles, and by employers to rank academic track records. The absolute value of any particular index is arguably meaningless unless compared to other journals, and different metrics result in divergent rankings. To provide a simple yet more objective way to rank journals within and among disciplines, we developed a κ-resampled composite journal rank incorporating five popular citation indices: Impact Factor, Immediacy Index, Source-Normalized Impact Per Paper, SCImago Journal Rank and Google 5-year h-index; this approach provides an index of relative rank uncertainty. We applied the approach to six sample sets of scientific journals from Ecology (n = 100 journals), Medicine (n = 100), Multidisciplinary (n = 50); Ecology + Multidisciplinary (n = 25), Obstetrics & Gynaecology (n = 25) and Marine Biology & Fisheries (n = 25). We then cross-compared the κ-resampled ranking for the Ecology + Multidisciplinary journal set to the results of a survey of 188 publishing ecologists who were asked to rank the same journals, and found a 0.68-0.84 Spearman's ρ correlation between the two rankings datasets. Our composite index approach therefore approximates relative journal reputation, at least for that discipline. Agglomerative and divisive clustering and multi-dimensional scaling techniques applied to the Ecology + Multidisciplinary journal set identified specific clusters of similarly ranked journals, with only Nature & Science separating out from the others. When comparing a selection of journals within or among disciplines, we recommend collecting multiple citation-based metrics for a sample of relevant and realistic journals to calculate the composite rankings and their relative uncertainty windows.

  1. Lake and Reservoir Evaporation Estimation: Sensitivity Analysis and Ranking Existing Methods

    Directory of Open Access Journals (Sweden)

    maysam majidi

    2016-02-01

    were acquired from the Doosti Dam weather station. Relative humidity, wind speed, atmospheric pressure and precipitation were acquired from the Pol-Khatoon weather station. Dew point temperature and sunshine data were collected from the Sarakhs weather station. Lake area was estimated from a hypsometric curve in relation to lake level data. Temperature measurements were often performed in 16-day periods or biweekly from September 2011 to September 2012. The temperature profile of the lake (required for lake evaporation estimation) was measured at different points of the reservoir using a portable multi-meter. The eighteen existing methods were compared and ranked based on the Bowen ratio energy balance method (BREB). Results and Discussion: The annual evaporation values estimated by all of the applied methods in this study ranged from 21 to 113 mcm (million cubic meters). The BREB annual evaporation value was 69.86 mcm and the evaporation rate averaged 5.47 mm d-1 during the study period. According to the results, there is a relatively large difference between the evaporation values obtained from the adopted methods. The sensitivity analysis of the evaporation methods for some input parameters indicated that the Hamon method (Eq. 16) was the most sensitive to the input parameters, followed by the Brutsaert-Stricker and BREB methods, and the radiation-temperature methods (Makkink, Jensen-Haise and Stephen-Stewart) had the least sensitivity to input data. Besides, the air temperature, solar radiation (sunshine data), water surface temperature and wind speed data had the most effect on lake evaporation estimations, respectively. Finally, all evaporation estimation methods in this study have been ranked based on RMSD values. On a daily basis, the Jensen-Haise and Makkink (solar radiation, temperature group), Penman (combination group) and Hamon (temperature, day length group) methods had relatively reasonable performance. As for the results on a monthly scale, the Jensen-Haise and

  2. A Citation-Based Ranking of German-Speaking Researchers in Business Administration with Data of Google Scholar

    Science.gov (United States)

    Dilger, Alexander; Müller, Harry

    2013-01-01

    Rankings of academics can be constructed in two different ways, either based on journal rankings or based on citations. Although citation-based rankings promise some fundamental advantages they are still not common in German-speaking business administration. However, the choice of the underlying database is crucial. This article argues that for…

  3. Research on the Fusion of Dependent Evidence Based on Rank Correlation Coefficient.

    Science.gov (United States)

    Shi, Fengjian; Su, Xiaoyan; Qian, Hong; Yang, Ning; Han, Wenhua

    2017-10-16

    In order to meet the higher accuracy and system reliability requirements, the information fusion for multi-sensor systems is an increasing concern. Dempster-Shafer evidence theory (D-S theory) has been investigated for many applications in multi-sensor information fusion due to its flexibility in uncertainty modeling. However, classical evidence theory assumes that different pieces of evidence are independent of each other, which is often unrealistic. Ignoring the relationship between the evidence may lead to unreasonable fusion results, and even lead to wrong decisions. This assumption severely prevents D-S evidence theory from practical application and further development. In this paper, an innovative evidence fusion model to deal with dependent evidence based on the rank correlation coefficient is proposed. The model first uses the rank correlation coefficient to measure the dependence degree between different pieces of evidence. Then, a total discount coefficient is obtained based on the dependence degree, which also considers the impact of the reliability of the evidence. Finally, the discount evidence fusion model is presented. An example is illustrated to show the use and effectiveness of the proposed method.

  4. Research on the Fusion of Dependent Evidence Based on Rank Correlation Coefficient

    Directory of Open Access Journals (Sweden)

    Fengjian Shi

    2017-10-01

    Full Text Available In order to meet the higher accuracy and system reliability requirements, the information fusion for multi-sensor systems is an increasing concern. Dempster–Shafer evidence theory (D–S theory) has been investigated for many applications in multi-sensor information fusion due to its flexibility in uncertainty modeling. However, classical evidence theory assumes that different pieces of evidence are independent of each other, which is often unrealistic. Ignoring the relationship between the evidence may lead to unreasonable fusion results, and even lead to wrong decisions. This assumption severely prevents D–S evidence theory from practical application and further development. In this paper, an innovative evidence fusion model to deal with dependent evidence based on the rank correlation coefficient is proposed. The model first uses the rank correlation coefficient to measure the dependence degree between different pieces of evidence. Then, a total discount coefficient is obtained based on the dependence degree, which also considers the impact of the reliability of the evidence. Finally, the discount evidence fusion model is presented. An example is illustrated to show the use and effectiveness of the proposed method.
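
    To make the idea concrete, the sketch below computes a Spearman rank correlation between two bodies of evidence as a dependence degree and turns it into a discount factor. The mapping from dependence to discount (and the toy mass assignments) are illustrative assumptions, not the paper's formula.

```python
import numpy as np
from scipy.stats import spearmanr

# two bodies of evidence: mass assigned to singleton hypotheses A..D
hyps = ["A", "B", "C", "D"]
m1 = np.array([0.55, 0.25, 0.15, 0.05])
m2 = np.array([0.50, 0.30, 0.10, 0.10])

rho, _ = spearmanr(m1, m2)           # rank correlation as the dependence degree
dependence = max(rho, 0.0)           # ignore negative correlation for discounting
alpha = 1.0 - 0.5 * dependence       # illustrative discount factor, not the paper's formula

m2_disc = {h: alpha * v for h, v in zip(hyps, m2)}
m2_disc["Theta"] = 1.0 - alpha       # discounted mass is moved to total ignorance
print(f"rho={rho:.2f}, alpha={alpha:.2f}, discounted evidence={m2_disc}")
```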

  5. A Direct Elliptic Solver Based on Hierarchically Low-Rank Schur Complements

    KAUST Repository

    Chávez, Gustavo

    2017-03-17

    A parallel fast direct solver for rank-compressible block tridiagonal linear systems is presented. Algorithmic synergies between Cyclic Reduction and Hierarchical matrix arithmetic operations result in a solver with $O(N \log^2 N)$ arithmetic complexity and $O(N \log N)$ memory footprint. We provide a baseline for performance and applicability by comparing with well-known implementations of the $\mathcal{H}$-LU factorization and algebraic multigrid within a shared-memory parallel environment that leverages the concurrency features of the method. Numerical experiments reveal that this method is comparable with other fast direct solvers based on Hierarchical Matrices such as $\mathcal{H}$-LU and that it can tackle problems where algebraic multigrid fails to converge.

  6. A DYNAMIC FEATURE SELECTION METHOD FOR DOCUMENT RANKING WITH RELEVANCE FEEDBACK APPROACH

    Directory of Open Access Journals (Sweden)

    K. Latha

    2010-07-01

    Full Text Available Ranking search results is essential for information retrieval and Web search. Search engines need to not only return highly relevant results, but also be fast to satisfy users. As a result, not all available features can be used for ranking, and in fact only a small percentage of these features can be used. Thus, it is crucial to have a feature selection mechanism that can find a subset of features that both meets latency requirements and achieves high relevance. In this paper we describe a 0/1 knapsack procedure for automatically selecting features to use within a Generalization model for Document Ranking. We propose an approach for Relevance Feedback using the Expectation Maximization method and evaluate the algorithm on the TREC Collection for describing classes of feedback textual information retrieval features. Experimental results, evaluated on the standard TREC-9 part of the OHSUMED collections, show that our feature selection algorithm produces models that are either significantly more effective than, or equally effective as, models such as the Markov Random Field model, the Correlation Coefficient method and the Count Difference method.
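
    The 0/1 knapsack selection itself can be sketched as a classic dynamic program, as below; the relevance gains and latency costs are hypothetical, and treating gains as additive is a simplifying assumption.

```python
def knapsack_select(gains, costs, budget):
    """Classic 0/1 knapsack DP: pick the subset of features maximizing
    total relevance gain without exceeding the latency budget."""
    n = len(gains)
    best = [[0.0] * (budget + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        g, c = gains[i - 1], costs[i - 1]
        for b in range(budget + 1):
            best[i][b] = best[i - 1][b]
            if c <= b:
                best[i][b] = max(best[i][b], best[i - 1][b - c] + g)
    # backtrack to recover the chosen feature indices
    chosen, b = [], budget
    for i in range(n, 0, -1):
        if best[i][b] != best[i - 1][b]:
            chosen.append(i - 1)
            b -= costs[i - 1]
    return sorted(chosen), best[n][budget]

# hypothetical features: (relevance gain, evaluation cost in ms)
gains = [0.30, 0.22, 0.18, 0.10, 0.05]
costs = [4, 3, 5, 1, 1]
print(knapsack_select(gains, costs, budget=8))
```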

  7. A Core Journal Decision Model Based on Weighted Page Rank

    Science.gov (United States)

    Wang, Hei-Chia; Chou, Ya-lin; Guo, Jiunn-Liang

    2011-01-01

    Purpose: The paper's aim is to propose a core journal decision method, called the local impact factor (LIF), which can evaluate the requirements of the local user community by combining both the access rate and the weighted impact factor, and by tracking citation information on the local users' articles. Design/methodology/approach: Many…

  8. A generalized rank-order method for nonparametric analysis of data from exercise science: a tutorial.

    Science.gov (United States)

    Thomas, J R; Nelson, J K; Thomas, K T

    1999-03-01

    Frequent violations of the assumption that data are normally distributed occur in exercise science and other life and behavioral sciences. When this assumption is violated, parametric statistical analyses may be inappropriate for data analysis. We provide a rationale for using a generalized form of nonparametric analysis based on the Puri and Sen (1985) L statistic treated as a χ² approximation. If data do not meet the assumption of normality, this nonparametric approach has substantial power and is easy to use. An advantage of this generalized technique is that ranked data may be used in standard parametric statistical programs widely available on desktop and mainframe computers, for example, regression, analysis of variance (ANOVA), and multivariate analysis of variance (MANOVA) within BioMed, SAS, and SPSS. Once the data are ranked and analyzed with these programs, the only adjustment required is to use a standard formula to calculate the nonparametric test statistic, L, instead of the parametric test statistic (e.g., F). Thus, rank-order nonparametric models become parallel with their parametric counterparts, allowing the researcher to select between them based on characteristics of the data distribution. Examples of this approach are provided using data from exercise science for regression, ANOVA (including repeated measures) and MANOVA techniques from SPSSPC. Using these procedures, researchers can easily examine data distributions and make an appropriate decision about parametric or nonparametric analyses while continuing to use their regular statistical packages.
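
    A minimal sketch of this rank-transform workflow for a simple regression is given below. The statistic L = (N − 1)R², referred to a χ² distribution with degrees of freedom equal to the number of predictors, is the commonly cited regression form of the Puri–Sen procedure; the data are synthetic and the exact formula should be checked against the original reference before use.

```python
import numpy as np
from scipy.stats import rankdata, chi2

rng = np.random.default_rng(42)

# toy exercise-science style data: skewed outcome vs. one predictor
x = rng.random(40) * 10
y = np.exp(0.3 * x) + rng.exponential(2.0, size=40)   # clearly non-normal errors

# 1) rank-transform both variables
rx, ry = rankdata(x), rankdata(y)

# 2) ordinary least squares on the ranks gives R^2
slope, intercept = np.polyfit(rx, ry, 1)
resid = ry - (slope * rx + intercept)
r2 = 1.0 - resid.var() / ry.var()

# 3) Puri-Sen style statistic (commonly cited regression form): L = (N - 1) * R^2
n, df = len(y), 1
L = (n - 1) * r2
print(f"L = {L:.2f}, p = {chi2.sf(L, df):.4f}")
```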

  9. Discovering urban mobility patterns with PageRank based traffic modeling and prediction

    Science.gov (United States)

    Wang, Minjie; Yang, Su; Sun, Yi; Gao, Jun

    2017-11-01

    An urban transportation system can be viewed as a complex network with time-varying traffic flows as links connecting adjacent regions as networked nodes. By computing urban traffic evolution on such a temporal complex network with PageRank, it is found that for most regions there exists a linear relation between the traffic congestion measure at the present time and the PageRank value at the previous time. Since the PageRank measure of a region results from the mutual interactions of the whole network, it implies that the traffic state of a local region does not evolve independently but is affected by the evolution of the whole network. As a result, the PageRank values can act as signatures in predicting upcoming traffic congestion. We observe the aforementioned laws experimentally based on the trajectory data of 12,000 taxis in Beijing over one month.

  10. Low-rank Quasi-Newton updates for Robust Jacobian lagging in Newton methods

    Energy Technology Data Exchange (ETDEWEB)

    Brown, J.; Brune, P. [Mathematics and Computer Science Division, Argonne National Laboratory, 9700 S. Cass Ave., Argonne, IL 60439 (United States)

    2013-07-01

    Newton-Krylov methods are standard tools for solving nonlinear problems. A common approach is to 'lag' the Jacobian when assembly or preconditioner setup is computationally expensive, in exchange for some degradation in the convergence rate and robustness. We show that this degradation may be partially mitigated by using the lagged Jacobian as an initial operator in a quasi-Newton method, which applies unassembled low-rank updates to the Jacobian until the next full reassembly. We demonstrate the effectiveness of this technique on problems in glaciology and elasticity. (authors)
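
    The sketch below illustrates the flavor of such low-rank (Broyden-style) updates applied on top of a lagged Jacobian for a toy 2-D system. It forms the updated Jacobian densely for clarity, whereas a practical implementation would apply the rank-one updates matrix-free; it is not the authors' implementation.

```python
import numpy as np

def current_jacobian(J_lagged, updates):
    """Lagged Jacobian plus the accumulated rank-1 (Broyden) corrections.
    Formed densely here only for clarity."""
    J = J_lagged.copy()
    for u, v in updates:
        J += np.outer(u, v)
    return J

def broyden_solve(F, x0, J_lagged, n_iter=25):
    """Newton-like iteration that never reassembles the Jacobian: it keeps the
    lagged Jacobian and adds 'good Broyden' rank-1 corrections so that the
    approximation satisfies the secant condition J_new @ dx = dF."""
    x, updates = x0.astype(float), []
    for _ in range(n_iter):
        J = current_jacobian(J_lagged, updates)
        dx = np.linalg.solve(J, -F(x))
        dF = F(x + dx) - F(x)
        u = (dF - J @ dx) / (dx @ dx)      # rank-1 correction direction
        updates.append((u, dx))
        x = x + dx
    return x

# toy nonlinear system (one solution is (1, 2)); the Jacobian is assembled once
# at the initial iterate and afterwards only low-rank-updated
F = lambda x: np.array([x[0] ** 2 + x[1] - 3.0, x[0] + x[1] ** 2 - 5.0])
J_lagged = np.array([[2.0, 1.0], [1.0, 2.0]])   # dF/dx at the initial guess (1, 1)
x = broyden_solve(F, np.array([1.0, 1.0]), J_lagged)
print(x.round(6), F(x).round(8))
```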

  11. Web based collaborative decision making in flood risk management: Application of TOPSIS and visualisation techniques for ranking of alternatives

    Science.gov (United States)

    Evers, Mariele; Almoradie, Adrian; Jonoski, Andreja

    2014-05-01

    Development of flood risk management (FRM) plans is ideally carried out in a participatory process with relevant stakeholders. Integrating stakeholders' knowledge and information in the decision-making process creates trust amongst decision makers and stakeholders, which often leads to a successful implementation of measures. Stakeholder participation, however, does not come without challenges and hindrances (e.g. limitation of resources, spatial distribution and interest to participate). The most challenging type of participation is collaborative decision making (CDM). A web-based mobile or computer-aided environment offers an innovative approach to address these challenges and hindrances, and it also enhances participation. Different phases or steps of a CDM process include addressing relevant management objectives, identifying scenarios and sets of proposed alternatives, individually ranking these alternatives in order of preference, and presenting an aggregated rank to view the group's position. In individual ranking, the formulation of judgement should combine scientific facts with stakeholders' beliefs and attitudes. This paper presents a developed web-based CDM framework and its implementation, highlighting the application of a Multi-Criteria Decision Making (MCDM) method for individual ranking of alternatives, namely the Technique for Order Preference by Similarity to Ideal Solution (TOPSIS) with fuzzy logic. Moreover, an innovative visualisation technique for stakeholders' group ranking is also presented. Case studies are the Alster catchment (Hamburg, Germany) and the Cranbrook catchment (London, UK). A series of stakeholder workshops was held to test and evaluate the environments. It shows that the TOPSIS method provides a close representation of the stakeholders' preferences regarding the measures and alternatives. Overall, the evaluation shows that web-based environments can address the challenges and hindrances and enhance participation in flood risk management. The
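
    For reference, a crisp (non-fuzzy) TOPSIS ranking is sketched below; the alternatives, criteria, and weights are hypothetical, and the fuzzy extension used in the study is not shown.

```python
import numpy as np

def topsis(matrix, weights, benefit):
    """Rank alternatives with crisp TOPSIS.

    matrix: alternatives x criteria scores; weights sum to 1;
    benefit[j] is True if criterion j is to be maximized.
    """
    X = np.asarray(matrix, dtype=float)
    R = X / np.linalg.norm(X, axis=0)              # vector-normalize each criterion
    V = R * weights
    ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
    anti = np.where(benefit, V.min(axis=0), V.max(axis=0))
    d_pos = np.linalg.norm(V - ideal, axis=1)
    d_neg = np.linalg.norm(V - anti, axis=1)
    closeness = d_neg / (d_pos + d_neg)
    return closeness, np.argsort(-closeness)        # higher closeness = better rank

# hypothetical flood-risk alternatives scored on cost, protection level, ecology
matrix = [[4.0, 7.0, 5.0],
          [6.0, 9.0, 4.0],
          [3.0, 6.0, 8.0]]
weights = np.array([0.3, 0.5, 0.2])
benefit = np.array([False, True, True])            # cost is minimized
scores, order = topsis(matrix, weights, benefit)
print(scores.round(3), "best alternative index:", order[0])
```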

  12. A Similarity-Ranking Method on Semantic Computing for Providing Information-Services in Station-Concierge System

    Directory of Open Access Journals (Sweden)

    Motoki Yokoyama

    2017-07-01

    Full Text Available The prevalence of smartphones and wireless broadband networks has created a new railway information environment. With the spread of such devices and information technology, various types of information can be obtained from databases connected to the Internet. One scenario for obtaining such a wide variety of information resources arises while the user is in transit. This paper proposes an information provision system, named the Station Concierge System, that matches the situation and intention of passengers. The purpose of this system is to estimate the needs of passengers, like station staff or a hotel concierge would, and to provide information resources that satisfy users' expectations dynamically. The most important module of the system is constructed based on a new information ranking method for passenger intention prediction and service recommendation. This method has three main features: (1) projecting a user into a semantic vector space by using her current context, (2) predicting the intention of a user based on selecting a semantic vector subspace, and (3) ranking the services in descending order of relevance scores to the user's intention. By comparing the predicted results of our method with those of two straightforward computation methods, the experimental studies show the effectiveness and efficiency of the proposed method. Using this system, users can obtain transit information and a service map that dynamically matches their context.

  13. A new method for ranking mode-specific sensitivity of freshwater arthropods to insecticides and its relationship to biological traits.

    Science.gov (United States)

    Rubach, Mascha N; Baird, Donald J; Van den Brink, Paul J

    2010-02-01

    The problem of how to deal with species sensitivity differences to toxic substances has been addressed successfully with the species sensitivity distribution (SSD), yet this has not increased understanding about the underlying mechanisms of sensitivity. Other researchers have identified the mode of action of chemicals and also biological traits of species as determinants for sensitivity, yet no systematic approach combines these factors. To achieve this, first existing data on organophosphate, carbamate, and pyrethroid toxicity and mode of action and also species trait information were mined. Second, we linked taxon sensitivity to their traits at the family level to generate empirical and mechanistic hypotheses about sensitivity-trait relationships. In this way, a mode-specific sensitivity (MSS) ranking method was developed, and tested at the taxonomic level of family and genus. The application of several quality criteria indicated overall confidence in rankings, but confidence in exact taxon rank was less certain, due to data insufficiency for certain groups. The MSS rankings were found to be applicable for trait-based approaches and were successfully linked to existing trait data to identify traits with predictive potential. Although this empirical analysis cannot test causality relationships between traits and sensitivity, testable hypotheses were generated, for further experimental investigation. Single traits as well as combinations of traits can be used to predict laboratory sensitivity to the substances tested, although associations were not as strong as in previous studies. We conclude that existing trait data are not suitable for every trait-based research question and that important traits remain to be identified and quantified in relation to the processes of toxicity, i.e., the toxicokinetics and toxicodynamics. Copyright 2009 SETAC.

  14. The ranking of scientists based on scientific publications assessment.

    Science.gov (United States)

    Zerem, Enver

    2017-11-01

    It is generally accepted that the scientific impact factor (Web of Science) and the total number of citations of the articles published in a journal are the most relevant parameters of the journal's significance. However, the significance of scientists is much more complicated to establish, and the value of their scientific production cannot be directly reflected by the importance of the journals in which their articles are published. Evaluating the significance of scientists' accomplishments involves more complicated metrics than just their publication records. Based on long-term academic experience, the author proposes objective criteria to estimate the scientific merit of an individual's publication record. This metric can serve as a pragmatic tool and the nidus for discussion within the readership of this journal. Copyright © 2017 Elsevier Inc. All rights reserved.

  15. Rank-Based miRNA Signatures for Early Cancer Detection

    Directory of Open Access Journals (Sweden)

    Mario Lauria

    2014-01-01

    Full Text Available We describe a new signature definition and analysis method to be used as a biomarker for early cancer detection. Our new approach is based on the construction of a reference map of transcriptional signatures of both healthy and cancer-affected individuals using circulating miRNA from a large number of subjects. Once such a map is available, the diagnosis for a new patient can be performed by observing the relative position of his/her transcriptional signature on the map. To demonstrate its efficacy for this specific application, we report the results of applying our method to published datasets of circulating miRNA, and we quantify its performance compared to current state-of-the-art methods. A number of additional features make this method an ideal candidate for large-scale use, for example, as a mass screening tool for early cancer detection or for at-home diagnostics. Specifically, our method is minimally invasive (because it works well with circulating miRNA), it is robust with respect to lab-to-lab protocol variability and batch effects (it requires only that the relative ranking of miRNA expression values in a profile be accurate, not their absolute values), and it is scalable to a large number of subjects. Finally, we discuss the need for HPC capability in a widespread application of our or similar methods.

  16. Ranking Support Vector Machine with Kernel Approximation.

    Science.gov (United States)

    Chen, Kai; Li, Rongchun; Dou, Yong; Liang, Zhengfa; Lv, Qi

    2017-01-01

    Learning-to-rank algorithms have become important in recent years due to their successful application in information retrieval, recommender systems, and computational biology, among other areas. Ranking support vector machine (RankSVM) is one of the state-of-the-art ranking models and has been favorably used. Nonlinear RankSVM (RankSVM with nonlinear kernels) can give higher accuracy than linear RankSVM (RankSVM with a linear kernel) for complex nonlinear ranking problems. However, the learning methods for nonlinear RankSVM are still time-consuming because of the calculation of the kernel matrix. In this paper, we propose a fast ranking algorithm based on kernel approximation to avoid computing the kernel matrix. We explore two types of kernel approximation methods, namely, the Nyström method and random Fourier features. The primal truncated Newton method is used to optimize the pairwise L2-loss (squared hinge-loss) objective function of the ranking model after the nonlinear kernel approximation. Experimental results demonstrate that our proposed method trains much faster than kernel RankSVM and achieves comparable or better performance than state-of-the-art ranking algorithms.
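
    The random Fourier feature idea can be sketched as below: an explicit feature map whose inner products approximate the RBF kernel, so a linear pairwise (RankSVM-style) solver can be trained without ever forming the kernel matrix. The data, kernel width, and feature count are arbitrary.

```python
import numpy as np

def rbf_random_fourier_features(X, n_features, gamma, seed=0):
    """Map X so that dot products approximate the RBF kernel
    exp(-gamma * ||x - y||^2) (Rahimi & Recht random Fourier features)."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    W = rng.normal(scale=np.sqrt(2.0 * gamma), size=(d, n_features))
    b = rng.uniform(0.0, 2.0 * np.pi, size=n_features)
    return np.sqrt(2.0 / n_features) * np.cos(X @ W + b)

rng = np.random.default_rng(1)
X = rng.standard_normal((200, 10))
gamma = 0.5

Z = rbf_random_fourier_features(X, n_features=2000, gamma=gamma)
approx = Z @ Z.T                                        # approximate kernel matrix

sq_dists = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
exact = np.exp(-gamma * sq_dists)                       # exact RBF kernel matrix
print("mean abs error:", np.abs(approx - exact).mean())
# Z can now be fed to a linear pairwise (RankSVM-style) solver instead of
# computing the full kernel matrix.
```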

  17. Ranking library materials

    OpenAIRE

    Lewandowski, Dirk

    2015-01-01

    Purpose: This paper discusses ranking factors suitable for library materials and shows that ranking in general is a complex process and that ranking for library materials requires a variety of techniques. Design/methodology/approach: The relevant literature is reviewed to provide a systematic overview of suitable ranking factors. The discussion is based on an overview of ranking factors used in Web search engines. Findings: While there are a wide variety of ranking factors appl...

  18. An Efficient Normalized Rank Based SVM for Room Level Indoor WiFi Localization with Diverse Devices

    Directory of Open Access Journals (Sweden)

    Yasmine Rezgui

    2017-01-01

    Full Text Available This paper proposes an efficient and effective WiFi fingerprinting-based indoor localization algorithm, which uses the Received Signal Strength Indicator (RSSI) of WiFi signals. In practical harsh indoor environments, RSSI variation and hardware variance can significantly degrade the performance of fingerprinting-based localization methods. To address the problem of hardware variance and signal fluctuation in WiFi fingerprinting-based localization, we propose a novel normalized rank based Support Vector Machine classifier (NR-SVM). Moving from RSSI value based analysis to normalized rank transformation based analysis, the principal features are prioritized and the dimensionalities of signature vectors are taken into account. The proposed method has been tested using sixteen different devices in a shopping mall with 88 shops. The experimental results demonstrate its robustness with no less than 98.75% correct estimation in 93.75% of the tested cases and a 100% correct rate in 56.25% of cases. In the experiments, the new method shows better performance than the KNN, Naïve Bayes, Random Forest, and Neural Network algorithms. Furthermore, we have compared the proposed approach with three popular calibration-free transformation based methods, including the difference method (DIFF), Signal Strength Difference (SSD), and the Hyperbolic Location Fingerprinting (HLF) based SVM. The results show that NR-SVM outperforms these popular methods.

  19. A rank-based sequence aligner with applications in phylogenetic analysis.

    Directory of Open Access Journals (Sweden)

    Liviu P Dinu

    Full Text Available Recent tools for aligning short DNA reads have been designed to optimize the trade-off between correctness and speed. This paper introduces a method for assigning a set of short DNA reads to a reference genome, under Local Rank Distance (LRD). The rank-based aligner proposed in this work aims to improve correctness over speed. However, some indexing strategies to speed up the aligner are also investigated. The LRD aligner is improved in terms of speed by storing k-mer positions in a hash table for each read. Another improvement, which produces an approximate LRD aligner, is to consider only the positions in the reference that are likely to represent a good positional match of the read. The proposed aligner is evaluated and compared to other state-of-the-art alignment tools in several experiments. A set of experiments is conducted to determine the precision and the recall of the proposed aligner in the presence of contaminated reads. In another set of experiments, the proposed aligner is used to find the order, the family, or the species of a new (or unknown) organism, given only a set of short Next-Generation Sequencing DNA reads. The empirical results show that the aligner proposed in this work is highly accurate from a biological point of view. Compared to the other evaluated tools, the LRD aligner has the important advantage of being very accurate even for very low base coverage. Thus, the LRD aligner can be considered a good alternative to standard alignment tools, especially when the accuracy of the aligner is of high importance. Source code and UNIX binaries of the aligner are freely available for future development and use at http://lrd.herokuapp.com/aligners. The software is implemented in C++ and Java, being supported on UNIX and MS Windows.

  20. PageRank-based identification of signaling crosstalk from transcriptomics data: the case of Arabidopsis thaliana.

    Science.gov (United States)

    Omranian, Nooshin; Mueller-Roeber, Bernd; Nikoloski, Zoran

    2012-04-01

    The levels of cellular organization, from gene transcription to translation to protein-protein interaction and metabolism, operate via tightly regulated mutual interactions, facilitating organismal adaptability and various stress responses. Characterizing the mutual interactions between genes, transcription factors, and proteins involved in signaling, termed crosstalk, is therefore crucial for understanding and controlling cells' functionality. We aim at using high-throughput transcriptomics data to discover previously unknown links between signaling networks. We propose and analyze a novel method for crosstalk identification which relies on transcriptomics data and overcomes the lack of complete information for signaling pathways in Arabidopsis thaliana. Our method first employs a network-based transformation of the results from the statistical analysis of differential gene expression in given groups of experiments under different signal-inducing conditions. The stationary distribution of a random walk (similar to the PageRank algorithm) on the constructed network is then used to determine the putative transcripts interrelating different signaling pathways. With the help of the proposed method, we analyze a transcriptomics data set including experiments from four different stresses/signals: nitrate, sulfur, iron, and hormones. We identified promising gene candidates, downstream of the transcription factors (TFs), associated to signaling crosstalk, which were validated through literature mining. In addition, we conduct a comparative analysis with the only other available method in this field which used a biclustering-based approach. Surprisingly, the biclustering-based approach fails to robustly identify any candidate genes involved in the crosstalk of the analyzed signals. We demonstrate that our proposed method is more robust in identifying gene candidates involved downstream of the signaling crosstalk for species for which large transcriptomics data sets

  1. Fast Multipole Method as a Matrix-Free Hierarchical Low-Rank Approximation

    KAUST Repository

    Yokota, Rio

    2018-01-03

    There has been a large increase in the amount of work on hierarchical low-rank approximation methods, where the interest is shared by multiple communities that previously did not intersect. The objective of this article is two-fold: to provide a thorough review of the recent advancements in this field from both analytical and algebraic perspectives, and to present a comparative benchmark of two highly optimized implementations of contrasting methods for some simple yet representative test cases. The first half of this paper has the form of a survey paper, to achieve the former objective. We categorize the recent advances in this field from the perspective of the compute-memory tradeoff, which has not been considered in much detail in this area. Benchmark tests reveal that there is a large difference in memory consumption and performance between the different methods.

  2. A new method for comparing rankings through complex networks: Model and analysis of competitiveness of major European soccer leagues

    Science.gov (United States)

    Criado, Regino; García, Esther; Pedroche, Francisco; Romance, Miguel

    2013-12-01

    In this paper, we show a new technique to analyze families of rankings. In particular, we focus on sports rankings and, more precisely, on soccer leagues. We consider that two teams compete when they change their relative positions in consecutive rankings. This allows us to define a graph by linking teams that compete. We show how to use some structural properties of this competitivity graph to measure to what extent the teams in a league compete. These structural properties are the mean degree, the mean strength, and the clustering coefficient. We give a generalization of Kendall's correlation coefficient to more than two rankings. We also show how to make a dynamic analysis of a league and how to compare different leagues. We apply this technique to analyze the four major European soccer leagues: Bundesliga, Italian Lega, Spanish Liga, and Premier League. We compare our results with the classical analysis of sports rankings based on measures of competitive balance.

  3. TERM WEIGHTING BASED ON POSITIVE IMPACT FACTOR QUERY FOR ARABIC FIQH DOCUMENT RANKING

    Directory of Open Access Journals (Sweden)

    Rizka Sholikah

    2017-02-01

    Full Text Available The query is one of the most decisive factors in document searching. A query contains several words, one of which becomes the key term. A key term is a word that carries more information and value than the other words in the query. It can be used with any kind of text document, including Arabic Fiqh documents. Using the key term in the term weighting process can improve result relevancy. In Arabic Fiqh document searching, not using a proper term weighting method diminishes the important value of the key term. In this paper, we propose a new term weighting method based on the Positive Impact Factor Query (PIFQ) for Arabic Fiqh document ranking. PIFQ is calculated using the key term's frequency in each category (mazhab) of Fiqh. A key term that frequently appears in a certain mazhab gets a higher score for that mazhab, and vice versa. After the PIFQ values are acquired, TF.IDF calculation is performed for each word. Then, the PIFQ weight is combined with the TF.IDF result to produce the new weight values for each word. Experimental results on a number of queries over 143 Arabic Fiqh documents show that the proposed method is better than traditional TF.IDF, with 77.9% precision, 83.1% recall, and 80.1% F-measure, respectively.
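
    The sketch below illustrates one plausible way a per-category key-term impact weight could be combined with TF.IDF, in the spirit of PIFQ. The toy documents, the category frequencies, and the weighting formula are assumptions for illustration, not the paper's definition.

```python
import math
from collections import Counter

docs = {
    "d1": "zakat rules according to shafii scholars on zakat of trade goods".split(),
    "d2": "prayer times and prayer conditions in hanafi jurisprudence".split(),
    "d3": "zakat threshold nisab discussion among hanafi scholars".split(),
}
# hypothetical per-category (mazhab) frequencies of the key term "zakat"
key_term, category_freq = "zakat", {"shafii": 12, "hanafi": 4, "maliki": 2}

def tf_idf(term, doc, docs):
    tf = Counter(doc)[term] / len(doc)
    df = sum(term in d for d in docs.values())
    return tf * math.log(len(docs) / df)

def pifq(term, category):
    """Illustrative positive-impact weight: 1 plus the key term's share of
    occurrences that fall in the given category (not the paper's formula)."""
    if term != key_term:
        return 1.0
    return 1.0 + category_freq[category] / sum(category_freq.values())

category = "shafii"
scores = {name: sum(tf_idf(t, doc, docs) * pifq(t, category) for t in set(doc))
          for name, doc in docs.items()}
print(sorted(scores.items(), key=lambda kv: -kv[1]))
```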

  4. Sum of ranking differences (SRD) to ensemble multivariate calibration model merits for tuning parameter selection and comparing calibration methods

    Energy Technology Data Exchange (ETDEWEB)

    Kalivas, John H., E-mail: kalijohn@isu.edu [Department of Chemistry, Idaho State University, Pocatello, ID 83209 (United States); Héberger, Károly [Research Centre for Natural Sciences, Hungarian Academy of Sciences, Pusztaszeri út 59-67, 1025 Budapest (Hungary); Andries, Erik [Center for Advanced Research Computing, University of New Mexico, Albuquerque, NM 87106 (United States); Department of Mathematics, Central New Mexico Community College, Albuquerque, NM 87106 (United States)

    2015-04-15

    Highlights: • Sum of ranking differences (SRD) used for tuning parameter selection based on fusion of multiple criteria. • No weighting scheme is needed for the multiple criteria. • SRD allows automatic selection of one model or a collection of models if so desired. • SRD allows simultaneous comparison of different calibration methods with tuning parameter selection. • New MATLAB programs are described and made available. Abstract: Most multivariate calibration methods require selection of tuning parameters, such as partial least squares (PLS) or the Tikhonov regularization variant ridge regression (RR). Tuning parameter values determine the direction and magnitude of respective model vectors, thereby setting the resultant prediction abilities of the model vectors. Simultaneously, tuning parameter values establish the corresponding bias/variance and the underlying selectivity/sensitivity tradeoffs. Selection of the final tuning parameter is often accomplished through some form of cross-validation and the resultant root mean square error of cross-validation (RMSECV) values are evaluated. However, selection of a “good” tuning parameter with this one model evaluation merit is almost impossible. Including additional model merits assists tuning parameter selection to provide better balanced models as well as allowing for a reasonable comparison between calibration methods. Using multiple merits requires decisions to be made on how to combine and weight the merits into an information criterion. An abundance of options is possible. Presented in this paper is the sum of ranking differences (SRD) to ensemble a collection of model evaluation merits varying across tuning parameters. It is shown that the SRD consensus ranking of model tuning parameters allows automatic selection of the final model, or a collection of models if so desired. Essentially, the user’s preference for the degree of balance between bias and variance ultimately decides the merits used in SRD

  5. Conflict-cost based random sampling design for parallel MRI with low rank constraints

    Science.gov (United States)

    Kim, Wan; Zhou, Yihang; Lyu, Jingyuan; Ying, Leslie

    2015-05-01

    In compressed sensing MRI, it is very important to design the sampling pattern for random undersampling. For example, SAKE (simultaneous auto-calibrating and k-space estimation) is a parallel MRI reconstruction method using random undersampling. It formulates image reconstruction as a structured low-rank matrix completion problem. Variable density (VD) Poisson discs are typically adopted for 2D random sampling. The basic concept of Poisson disc generation is to guarantee that samples are neither too close to nor too far away from each other. However, it is difficult to meet such a condition, especially in the high-density region, so the sampling becomes inefficient. In this paper, we present an improved random sampling pattern for SAKE reconstruction. The pattern is generated based on a conflict cost with a probability model. The conflict cost measures how many already assigned samples surround a target location, while the probability model adopts the generalized Gaussian distribution, which includes uniform and Gaussian-like distributions as special cases. Our method preferentially assigns a sample to a k-space location with the least conflict cost on the circle of the highest probability. To evaluate the effectiveness of the proposed random pattern, we compare the performance of SAKE reconstructions using both VD Poisson discs and the proposed pattern. Experimental results for brain data show that the proposed pattern yields lower normalized mean square error (NMSE) than VD Poisson discs.

  6. Ranking system for national regulatory jurisdictions based on pesticide standard values in major exposures

    Directory of Open Access Journals (Sweden)

    Zijian Li

    2017-07-01

    Full Text Available To control the risk of human exposure to pesticides, about 50 nations have promulgated pesticide soil regulatory guidance values (RGVs), and 104 nations have provided pesticide drinking water maximum concentration levels (MCLs). In addition, 90 nations have regulated pesticide agricultural commodity maximum residue limits (MRLs). Pesticide standard values (PSVs) for a single pesticide can vary over a range of six, seven, or even eight orders of magnitude. Some PSVs are too large to prevent the impact of pesticides on human health, and to date many nations have not provided PSVs for some commonly used pesticides. This research introduces several completeness-value and numerical-value methods to evaluate national jurisdictions' performance on PSVs on a nation-by-nation basis. The national jurisdiction ranking system developed by these methods will be beneficial to environmental regulators in the management of PSVs. Results also indicate that European countries perform better in the regulation of pesticide soil RGVs, drinking water MCLs, and agricultural commodity MRLs.

  7. Enhancement of dynamic myocardial perfusion PET images based on low-rank plus sparse decomposition.

    Science.gov (United States)

    Lu, Lijun; Ma, Xiaomian; Mohy-Ud-Din, Hassan; Ma, Jianhua; Feng, Qianjin; Rahmim, Arman; Chen, Wufan

    2018-02-01

    The absolute quantification of dynamic myocardial perfusion (MP) PET imaging is challenged by the limited spatial resolution of individual frame images due to division of the data into shorter frames. This study aims to develop a method for restoration and enhancement of dynamic PET images. We propose that the image restoration model should be based on multiple constraints rather than a single constraint, given that the image characteristics are hardly described by a single constraint alone. At the same time, it may be possible, but not optimal, to regularize the image with multiple constraints simultaneously. Fortunately, MP PET images can be decomposed into a superposition of background vs. dynamic components via low-rank plus sparse (L + S) decomposition. Thus, we propose an L + S decomposition based MP PET image restoration model and express it as a convex optimization problem. An iterative soft thresholding algorithm was developed to solve the problem. Using realistic dynamic 82Rb MP PET scan data, we optimized and compared its performance with other restoration methods. The proposed method resulted in substantial visual as well as quantitative accuracy improvements in terms of noise versus bias performance, as demonstrated in extensive 82Rb MP PET simulations. In particular, the myocardial defect in the MP PET images showed improved visual quality as well as contrast versus noise tradeoff. The proposed algorithm was also applied to an 8-min clinical cardiac 82Rb MP PET study performed on the GE Discovery PET/CT, and demonstrated improved quantitative accuracy (CNR and SNR) compared to other algorithms. The proposed method is effective for restoration and enhancement of dynamic PET images. Copyright © 2017 Elsevier B.V. All rights reserved.
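    As a minimal sketch of the generic L + S idea, the code below alternates proximal (soft-thresholding) updates for the simple objective ||M - L - S||_F^2 + lam_l*||L||_* + lam_s*||S||_1; the paper's actual PET restoration model and its iterative soft-thresholding algorithm are more involved, and the penalty values here are placeholders.

```python
import numpy as np

def soft(x, t):
    """Entrywise soft thresholding."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def svt(x, t):
    """Singular value soft thresholding (promotes low rank)."""
    u, s, vt = np.linalg.svd(x, full_matrices=False)
    return u @ np.diag(soft(s, t)) @ vt

def l_plus_s(M, lam_l=1.0, lam_s=0.1, n_iter=100):
    """Generic low-rank plus sparse decomposition M ~ L + S via alternating
    proximal steps. For dynamic PET, M would hold one vectorized frame per column."""
    L = np.zeros_like(M)
    S = np.zeros_like(M)
    for _ in range(n_iter):
        L = svt(M - S, lam_l)      # update the background (low-rank) component
        S = soft(M - L, lam_s)     # update the dynamic (sparse) component
    return L, S
```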

  8. A heuristic ranking approach on capacity benefit margin determination using Pareto-based evolutionary programming technique.

    Science.gov (United States)

    Othman, Muhammad Murtadha; Abd Rahman, Nurulazmi; Musirin, Ismail; Fotuhi-Firuzabad, Mahmud; Rajabi-Ghahnavieh, Abbas

    2015-01-01

    This paper introduces a novel multiobjective approach for capacity benefit margin (CBM) assessment taking into account tie-line reliability of interconnected systems. CBM is the imperative information utilized as a reference by the load-serving entities (LSE) to estimate a certain margin of transfer capability so that a reliable access to generation through interconnected system could be attained. A new Pareto-based evolutionary programming (EP) technique is used to perform a simultaneous determination of CBM for all areas of the interconnected system. The selection of CBM at the Pareto optimal front is proposed to be performed by referring to a heuristic ranking index that takes into account system loss of load expectation (LOLE) in various conditions. Eventually, the power transfer based available transfer capability (ATC) is determined by considering the firm and nonfirm transfers of CBM. A comprehensive set of numerical studies is conducted on the modified IEEE-RTS79 and the performance of the proposed method is numerically investigated in detail. The main advantage of the proposed technique is in terms of flexibility offered to an independent system operator in selecting an appropriate solution of CBM simultaneously for all areas.

  9. Efficient Multi-keyword Ranked Search over Outsourced Cloud Data based on Homomorphic Encryption

    Directory of Open Access Journals (Sweden)

    Nie Mengxi

    2016-01-01

    Full Text Available With the development of cloud computing, more and more data owners are motivated to outsource their data to the cloud server for greater flexibility and lower expenditure. Because the security of outsourced data must be guaranteed, encryption has to be used, which makes traditional plaintext-based data utilization, e.g. keyword search, obsolete. Several schemes have therefore been proposed to enable search over encrypted data, e.g. top-k single- or multi-keyword retrieval. However, the efficiency of these schemes is not high enough to be practical in cloud computing. In this paper, we propose a new scheme based on homomorphic encryption to solve this challenging problem of privacy-preserving, efficient multi-keyword ranked search over outsourced cloud data. In our scheme, the inner product is adopted to measure the relevance scores and the technique of relevance feedback is used to reflect the search preference of the data users. Security analysis shows that the proposed scheme can meet strict privacy requirements for such a secure cloud data utilization system. Performance evaluation demonstrates that the proposed scheme achieves low overhead on both computation and communication.

  10. Environmental and health hazard ranking and assessment of plastic polymers based on chemical composition

    Energy Technology Data Exchange (ETDEWEB)

    Lithner, Delilah, E-mail: delilah.lithner@gmail.com; Larsson, Ake; Dave, Goeran

    2011-08-15

    Plastics constitute a large material group with a global annual production that has doubled in 15 years (245 million tonnes in 2008). Plastics are present everywhere in society and the environment, especially the marine environment, where large amounts of plastic waste accumulate. The knowledge of human and environmental hazards and risks from chemicals associated with the diversity of plastic products is very limited. Most chemicals used for producing plastic polymers are derived from non-renewable crude oil, and several are hazardous. These may be released during the production, use and disposal of the plastic product. In this study the environmental and health hazards of chemicals used in 55 thermoplastic and thermosetting polymers were identified and compiled. A hazard ranking model was developed for the hazard classes and categories in the EU classification and labelling (CLP) regulation which is based on the UN Globally Harmonized System. The polymers were ranked based on monomer hazard classifications, and initial assessments were made. The polymers that ranked as most hazardous are made of monomers classified as mutagenic and/or carcinogenic (category 1A or 1B). These belong to the polymer families of polyurethanes, polyacrylonitriles, polyvinyl chloride, epoxy resins, and styrenic copolymers. All have a large global annual production (1-37 million tonnes). A considerable number of polymers (31 out of 55) are made of monomers that belong to the two worst of the ranking model's five hazard levels, i.e. levels IV-V. The polymers that are made of level IV monomers and have a large global annual production (1-5 million tonnes) are phenol formaldehyde resins, unsaturated polyesters, polycarbonate, polymethyl methacrylate, and urea-formaldehyde resins. This study has identified hazardous substances used in polymer production for which the risks should be evaluated for decisions on the need for risk reduction measures, substitution, or even phase out

  11. Ranking critical success factor in chaos management using BSC and AHP method

    Directory of Open Access Journals (Sweden)

    Ehsan Khosravi Asil

    2013-06-01

    Full Text Available Managing an organization under chaos and uncertainty is a frequent concern of the academic community. These days, we may face unpleasant natural, economic or even political incidents that managers need to handle properly. This paper presents an empirical survey of an electromotor maker facing different chaotic conditions. The proposed study uses the balanced scorecard with its four perspectives: internal process, learning and growth, customer, and financial performance. For each perspective, the study uses the analytical hierarchy process (AHP) to rank different sub-criteria. Based on the results of our survey, profit margin is the most important item, followed by profit capability and brand name, while productivity and sales force performance are the least important items.
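    For the AHP step, criteria weights are conventionally obtained from the principal eigenvector of a reciprocal pairwise-comparison matrix. The sketch below shows that generic computation with a made-up 3x3 comparison matrix; it is not based on the paper's survey data.

```python
import numpy as np

def ahp_weights(pairwise):
    """AHP priority weights: the principal eigenvector of a reciprocal
    pairwise-comparison matrix, normalized to sum to 1."""
    A = np.asarray(pairwise, dtype=float)
    vals, vecs = np.linalg.eig(A)
    w = np.abs(np.real(vecs[:, np.argmax(np.real(vals))]))
    return w / w.sum()

# Hypothetical comparison of 3 sub-criteria (illustrative judgments only)
A = [[1,   3,   5],
     [1/3, 1,   2],
     [1/5, 1/2, 1]]
print(ahp_weights(A))   # a larger weight means a higher-ranked sub-criterion
```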

  12. Ranking multiple docking solutions based on the conservation of inter-residue contacts

    KAUST Repository

    Oliva, Romina M.

    2013-06-17

    Molecular docking is the method of choice for investigating the molecular basis of recognition in a large number of functional protein complexes. However, correctly scoring the obtained docking solutions (decoys) to rank native-like (NL) conformations in the top positions is still an open problem. Herein we present CONSRANK, a simple and effective tool to rank multiple docking solutions, which relies on the conservation of inter-residue contacts in the analyzed decoys ensemble. First it calculates a conservation rate for each inter-residue contact, then it ranks decoys according to their ability to match the more frequently observed contacts. We applied CONSRANK to 102 targets from three different benchmarks, RosettaDock, DOCKGROUND, and Critical Assessment of PRedicted Interactions (CAPRI). The method performs consistently well, both in terms of NL solutions ranked in the top positions and of values of the area under the receiver operating characteristic curve. Its ideal application is to solutions coming from different docking programs and procedures, as in the case of CAPRI targets. For all the analyzed CAPRI targets where a comparison is feasible, CONSRANK outperforms the CAPRI scorers. The fraction of NL solutions in the top ten positions in the RosettaDock, DOCKGROUND, and CAPRI benchmarks is enriched on average by a factor of 3.0, 1.9, and 9.9, respectively. Interestingly, CONSRANK is also able to specifically single out the high/medium quality (HMQ) solutions from the docking decoys ensemble: it ranks 46.2 and 70.8% of the total HMQ solutions available for the RosettaDock and CAPRI targets, respectively, within the top 20 positions. © 2013 Wiley Periodicals, Inc.
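    A compact sketch of the scoring idea described above, i.e. ranking decoys by the conservation of their inter-residue contacts across the ensemble; it is not the published CONSRANK code, and the contact representation (sets of residue-pair labels) is an assumption.

```python
from collections import Counter

def consensus_rank(decoys):
    """Rank docking decoys by the average conservation of their inter-residue
    contacts across the whole ensemble.

    decoys: list of sets of contacts, e.g. {("A:12", "B:45"), ...} per decoy.
    Returns (indices sorted from most to least consensus-like, per-decoy scores).
    """
    n = len(decoys)
    counts = Counter(c for d in decoys for c in set(d))
    conservation = {c: k / n for c, k in counts.items()}   # conservation rate per contact
    scores = [sum(conservation[c] for c in d) / max(len(d), 1) for d in decoys]
    order = sorted(range(n), key=lambda i: scores[i], reverse=True)
    return order, scores
```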

  13. An efficient matrix bi-factorization alternative optimization method for low-rank matrix recovery and completion.

    Science.gov (United States)

    Liu, Yuanyuan; Jiao, L C; Shang, Fanhua; Yin, Fei; Liu, F

    2013-12-01

    In recent years, matrix rank minimization problems have aroused considerable interest from the machine learning, data mining and computer vision communities. All of these problems can be solved via their convex relaxations, which minimize the trace norm instead of the rank of the matrix, have to be solved iteratively, and involve a singular value decomposition (SVD) at each iteration. Therefore, algorithms for trace norm minimization problems suffer from the high computation cost of multiple SVDs. In this paper, we propose an efficient Matrix Bi-Factorization (MBF) method to approximate the original trace norm minimization problem and mitigate the computation cost of performing SVDs. The proposed MBF method can be used to address a wide range of low-rank matrix recovery and completion problems such as low-rank and sparse matrix decomposition (LRSD), low-rank representation (LRR) and low-rank matrix completion (MC). We also present three small scale matrix trace norm models for the LRSD, LRR and MC problems, respectively. Moreover, we develop two concrete linearized proximal alternative optimization algorithms for solving the above three problems. Experimental results on a variety of synthetic and real-world data sets validate the efficiency, robustness and effectiveness of our MBF method compared with state-of-the-art trace norm minimization algorithms. Copyright © 2013 Elsevier Ltd. All rights reserved.
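    For context, the classical singular value thresholding (SVT) iteration for trace-norm matrix completion sketched below illustrates the per-iteration SVD cost that bi-factorization methods such as MBF are designed to avoid; parameter values are illustrative only.

```python
import numpy as np

def svt_complete(M, mask, tau=5.0, step=1.2, n_iter=200):
    """Singular value thresholding (SVT) for matrix completion: the classical
    trace-norm approach that needs one full SVD per iteration.

    M: observed matrix (values outside the mask are ignored); mask: 1 = observed.
    """
    Y = np.zeros_like(M)
    X = np.zeros_like(M)
    for _ in range(n_iter):
        U, s, Vt = np.linalg.svd(Y, full_matrices=False)
        X = U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt   # shrink the singular values
        Y = Y + step * mask * (M - X)                     # gradient step on observed entries
    return X
```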

  14. In vitro to in vivo extrapolation for drug-induced liver injury using a pair ranking method.

    Science.gov (United States)

    Liu, Zhichao; Fang, Hong; Borlak, Jürgen; Roberts, Ruth; Tong, Weida

    2017-01-01

    Preclinical animal toxicity studies may not accurately predict human toxicity. In light of this, in vitro systems have been developed that have the potential to supplement or even replace animal use. We examined in vitro to in vivo extrapolation (IVIVE) of gene expression data obtained from The Open Japanese Toxicogenomics Project-Genomics Assisted Toxicity Evaluation System (Open TG-GATEs) for 131 compounds given to rats for 28 days, and to human or rat hepatocytes for 24 hours. Notably, a pair ranking (PRank) method was developed to assess IVIVE potential with a PRank score based on the preservation of the order of similarity rankings of compound pairs between the platforms using a receiver operating characteristic (ROC) curve analysis to measure area under the curve (AUC). A high IVIVE potential was noted for rat primary hepatocytes when compared to rat 28-day studies (PRank score = 0.71) whereas the IVIVE potential for human primary hepatocytes compared to rat 28-day studies was lower (PRank score = 0.58), indicating that species difference plays a critical role in IVIVE. When limiting the analysis to only those drugs causing drug-induced liver injury, the IVIVE potential was slightly improved both for rats (from 0.71 to 0.76) and for humans (from 0.58 to 0.62). Similarly, PRank scores were improved when the analysis focused on specific hepatotoxic endpoints such as hepatocellular injury, or cholestatic injury. In conclusion, toxicogenomic data generated in vitro yields a ranking of drugs regarding their potential to cause toxicity which is comparable to that generated by in vivo analyses.

  15. A multimedia retrieval framework based on semi-supervised ranking and relevance feedback.

    Science.gov (United States)

    Yang, Yi; Nie, Feiping; Xu, Dong; Luo, Jiebo; Zhuang, Yueting; Pan, Yunhe

    2012-04-01

    We present a new framework for multimedia content analysis and retrieval which consists of two independent algorithms. First, we propose a new semi-supervised algorithm called ranking with Local Regression and Global Alignment (LRGA) to learn a robust Laplacian matrix for data ranking. In LRGA, for each data point, a local linear regression model is used to predict the ranking scores of its neighboring points. A unified objective function is then proposed to globally align the local models from all the data points so that an optimal ranking score can be assigned to each data point. Second, we propose a semi-supervised long-term Relevance Feedback (RF) algorithm to refine the multimedia data representation. The proposed long-term RF algorithm utilizes both the multimedia data distribution in multimedia feature space and the history RF information provided by users. A trace ratio optimization problem is then formulated and solved by an efficient algorithm. The algorithms have been applied to several content-based multimedia retrieval applications, including cross-media retrieval, image retrieval, and 3D motion/pose data retrieval. Comprehensive experiments on four data sets have demonstrated its advantages in precision, robustness, scalability, and computational efficiency.

  16. Color correction with blind image restoration based on multiple images using a low-rank model

    Science.gov (United States)

    Li, Dong; Xie, Xudong; Lam, Kin-Man

    2014-03-01

    We present a method that can handle the color correction of multiple photographs with blind image restoration simultaneously and automatically. We prove that the local colors of a set of images of the same scene exhibit the low-rank property locally both before and after a color-correction operation. This property allows us to correct all kinds of errors in an image under a low-rank matrix model without particular priors or assumptions. The possible errors may be caused by changes of viewpoint, large illumination variations, gross pixel corruptions, partial occlusions, etc. Furthermore, a new iterative soft-segmentation method is proposed for local color transfer using color influence maps. Due to the fact that the correct color information and the spatial information of images can be recovered using the low-rank model, more precise color correction and many other image-restoration tasks, including image denoising, image deblurring, and gray-scale image colorizing, can be performed simultaneously. Experiments have verified that our method can achieve consistent and promising results on uncontrolled real photographs acquired from the Internet and that it outperforms current state-of-the-art methods.

  17. Interval estimation for rank correlation coefficients based on the probit transformation with extension to measurement error correction of correlated ranked data.

    Science.gov (United States)

    Rosner, Bernard; Glynn, Robert J

    2007-02-10

    The Spearman (rho(s)) and Kendall (tau) rank correlation coefficients are routinely used as measures of association between non-normally distributed random variables. However, confidence limits for rho(s) are only available under the assumption of bivariate normality, and for tau under the assumption of asymptotic normality of tau. In this paper, we introduce another approach for obtaining confidence limits for rho(s) or tau based on the arcsin transformation of sample probit score correlations. This approach is shown to be applicable for an arbitrary bivariate distribution. The arcsin-based estimators for rho(s) and tau (denoted by rho(s,a), tau(a)) are shown to have asymptotic relative efficiency (ARE) of 9/π² compared with the usual estimators rho(s) and tau when rho(s) and tau are, respectively, 0. In some nutritional applications, the Spearman rank correlation between nutrient intake as assessed by a reference instrument versus nutrient intake as assessed by a surrogate instrument is used as a measure of validity of the surrogate instrument. However, if only a single replicate (or a few replicates) is available for the reference instrument, then the estimated Spearman rank correlation will be downwardly biased due to measurement error. In this paper, we use the probit transformation as a tool for specifying an ANOVA-type model for replicate ranked data resulting in a point and interval estimate of a measurement error corrected rank correlation. This extends previous work by Rosner and Willett for obtaining point and interval estimates of measurement error corrected Pearson correlations. 2006 John Wiley & Sons, Ltd.
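    To make the probit (normal-scores) transformation concrete, the sketch below computes a generic normal-scores correlation from the ranks; it only illustrates the kind of transformation the paper builds on and is not the authors' arcsin-based interval estimator or their measurement-error correction.

```python
import numpy as np
from scipy.stats import norm, rankdata

def probit_score_correlation(x, y):
    """Pearson correlation of the normal (probit) scores of the ranks:
    each observation's rank is mapped through the inverse normal CDF."""
    n = len(x)
    zx = norm.ppf(rankdata(x) / (n + 1))
    zy = norm.ppf(rankdata(y) / (n + 1))
    return np.corrcoef(zx, zy)[0, 1]
```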

  18. A Hybrid Distance-Based Ideal-Seeking Consensus Ranking Model

    OpenAIRE

    Tavana, Madjid; LoPinto, Frank; Smither, James W.

    2007-01-01

    Ordinal consensus ranking problems have received much attention in the management science literature. A problem arises in situations where a group of k decision makers (DMs) is asked to rank order n alternatives. The question is how to combine the DM rankings into one consensus ranking. Several different approaches have been suggested to aggregate DM responses into a compromise or consensus ranking; however, the similarity of consensus rankings generated by the differe...
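    As background for the consensus-ranking problem described above, the sketch below shows a plain Borda-count aggregation of several decision makers' rankings; it is a common baseline only, not the hybrid distance-based ideal-seeking model proposed by the authors.

```python
from collections import defaultdict

def borda_consensus(rankings):
    """Borda-count aggregation of k decision makers' rankings of n alternatives
    (each ranking is a list from best to worst)."""
    scores = defaultdict(int)
    for r in rankings:
        n = len(r)
        for position, alt in enumerate(r):
            scores[alt] += n - 1 - position     # best gets n-1 points, worst gets 0
    return sorted(scores, key=scores.get, reverse=True)

print(borda_consensus([["A", "B", "C"], ["B", "A", "C"], ["A", "C", "B"]]))
```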

  19. Data depth and rank-based tests for covariance and spectral density matrices

    KAUST Repository

    Chau, Joris

    2017-06-26

    In multivariate time series analysis, objects of primary interest to study cross-dependences in the time series are the autocovariance or spectral density matrices. Non-degenerate covariance and spectral density matrices are necessarily Hermitian and positive definite, and our primary goal is to develop new methods to analyze samples of such matrices. The main contribution of this paper is the generalization of the concept of statistical data depth for collections of covariance or spectral density matrices by exploiting the geometric properties of the space of Hermitian positive definite matrices as a Riemannian manifold. This allows one to naturally characterize most central or outlying matrices, but also provides a practical framework for rank-based hypothesis testing in the context of samples of covariance or spectral density matrices. First, the desired properties of a data depth function acting on the space of Hermitian positive definite matrices are presented. Second, we propose two computationally efficient pointwise and integrated data depth functions that satisfy each of these requirements. Several applications of the developed methodology are illustrated by the analysis of collections of spectral matrices in multivariate brain signal time series datasets.

  20. Personalization by Relevance Ranking Feedback in Impression-based Retrieval for Multimedia Database

    Directory of Open Access Journals (Sweden)

    Tsuyoshi TAKAYAMA

    2005-04-01

    Full Text Available This paper proposes an approach to personalization by relevance `ranking' feedback in impression-based retrieval for a multimedia database. Impression-based retrieval is a kind of ambiguous retrieval, and it enables a database user to find not only data already known to them but also data previously unknown to them. Conventional approaches using relevance feedback return only binary information, `relevant' or `not relevant', about the user's retrieval intention. In the proposed approach, the user returns a relevance ranking with respect to his/her retrieval intention for the top n items of a retrieval result. From this feedback information, an adjustment profile specific to the user is produced and utilized for personalization. We show its effectiveness by an evaluation using our pilot system.

  1. Discrepancies between multicriteria decision analysis-based ranking and intuitive ranking for pharmaceutical benefit-risk profiles in a hypothetical setting.

    Science.gov (United States)

    Hoshikawa, K; Ono, S

    2017-02-01

    Multicriteria decision analysis (MCDA) has been generally considered a promising decision-making methodology for the assessment of drug benefit-risk profiles. There have been many discussions in both the public and private sectors on its feasibility and applicability, but it has not been employed in official decision-making. For the purpose of examining to what extent MCDA would reflect the first-hand, intuitive preference of evaluators in practical pharmaceutical assessments, we conducted a questionnaire survey involving the participation of employees of pharmaceutical companies. Each respondent was shown profiles of the efficacy and safety of four hypothetical drugs and asked to rank them following the standard MCDA process and then to rank them intuitively (i.e. without applying any analytical framework). These two approaches resulted in substantially different ranking patterns from the same individuals, and the concordance rate was surprisingly low (17%). Although many respondents intuitively showed a preference for mild, balanced risk-benefit profiles over profiles with a conspicuous advantage in either risk or benefit, the ranking orders based on MCDA scores did not reflect the intuitive preference. Observed discrepancies between the rankings seemed to be primarily attributed to the structural characteristics of MCDA, which assumes that evaluation on each benefit and risk component should have a monotonic impact on final scores. It would be difficult for MCDA to reflect commonly observed non-monotonic preferences for risk and benefit profiles. Possible drawbacks of MCDA should be further investigated prior to the real-world application of its benefit-risk assessment. © 2016 John Wiley & Sons Ltd.

  2. Interpretation of personal genome sequencing data in terms of disease ranks based on mutual information.

    Science.gov (United States)

    Na, Young-Ji; Sohn, Kyung-Ah; Kim, Ju Han

    2015-01-01

    The rapid advances in genome sequencing technologies have resulted in an unprecedented number of genome variations being discovered in humans. However, there has been very limited coverage of interpretation of the personal genome sequencing data in terms of diseases. In this paper we present the first computational analysis scheme for interpreting personal genome data by simultaneously considering the functional impact of damaging variants and curated disease-gene association data. This method is based on mutual information as a measure of the relative closeness between the personal genome and diseases. We hypothesize that a higher mutual information score implies that the personal genome is more susceptible to a particular disease than other diseases. The method was applied to the sequencing data of 50 acute myeloid leukemia (AML) patients in The Cancer Genome Atlas. The utility of associations between a disease and the personal genome was explored using data of healthy (control) people obtained from the 1000 Genomes Project. The ranks of the disease terms in the AML patient group were compared with those in the healthy control group using "Leukemia, Myeloid, Acute" (C04.557.337.539.550) as the corresponding MeSH disease term. Overall, the area under the receiver operating characteristics curve was significantly larger for the AML patient data than for the healthy controls. This methodology could contribute to consequential discoveries and explanations for mining personal genome sequencing data in terms of diseases, and have versatility with respect to genomic-based knowledge such as drug-gene and environmental-factor-gene interactions.

  3. Ranking Entity Based on Both of Word Frequency and Word Sematic Features

    OpenAIRE

    Jin, Xiao-Bo; Geng, Guang-Gang; Huang, Kaizhu; Yan, Zhi-Wei

    2016-01-01

    Entity search is a new application meeting either precise or vague requirements from search engine users. The Baidu Cup 2016 Challenge provided just such a chance to tackle the problem of entity search. We achieved first place in average MAP score across 4 tasks: movie, tvShow, celebrity and restaurant. In this paper, we propose a series of similarity features based on both word frequency features and word semantic features and describe our ranking architecture

  4. A rank based social norms model of how people judge their levels of drunkenness whilst intoxicated.

    Science.gov (United States)

    Moore, Simon C; Wood, Alex M; Moore, Laurence; Shepherd, Jonathan; Murphy, Simon; Brown, Gordon D A

    2016-09-13

    A rank based social norms model predicts that drinkers' judgements about their drinking will be based on the rank of their breath alcohol level amongst that of others in the immediate environment, rather than their actual breath alcohol level, with lower relative rank associated with greater feelings of safety. This study tested this hypothesis and examined how people judge their levels of drunkenness and the health consequences of their drinking whilst they are intoxicated in social drinking environments. Breath alcohol testing of 1,862 people (mean age = 26.96 years; 61.86% male) in drinking environments. A subset (N = 400) also answered four questions asking about their perceptions of their drunkenness and the health consequences of their drinking (plus background measures). Perceptions of drunkenness and the health consequences of drinking were regressed on: (a) breath alcohol level, (b) the rank of the breath alcohol level amongst that of others in the same environment, and (c) covariates. Only rank of breath alcohol level predicted perceptions: how drunk they felt (b = 3.78, 95% CI 1.69 to 5.87), how extreme they regarded their drinking that night (b = 3.7, 95% CI 1.3 to 6.20), how at risk their long-term health was due to their current level of drinking (b = 4.1, 95% CI 0.2 to 8.0) and how likely they felt they would experience liver cirrhosis (b = 4.8, 95% CI 0.7 to 8.8). People were more influenced by more sober others than by more drunk others. Whilst intoxicated and in drinking environments, people base judgements regarding their drinking on how their level of intoxication ranks relative to that of others of the same gender around them, not on their actual levels of intoxication. Thus, when in the company of others who are intoxicated, drinkers were found to be more likely to underestimate their own level of drinking, drunkenness and associated risks. The implications of these results, for example that increasing the numbers of sober people in night time

  5. Enhancements to Graph based methods for Multi Document Summarization

    Directory of Open Access Journals (Sweden)

    Rengaramanujam Srinivasan

    2009-01-01

    Full Text Available This paper focuses its attention on extractive summarization using popular graph based approaches. Graph based methods can be broadly classified into two categories: non-PageRank type and PageRank type methods. Of the methods already proposed, the Centrality Degree method belongs to the former category while the LexRank and Continuous LexRank methods belong to the latter category. The paper goes on to suggest two enhancements to both PageRank type and non-PageRank type methods. The first modification is that of recursively discounting the selected sentences, i.e. if a sentence is selected it is removed from further consideration and the next sentence is selected based upon the contributions of the remaining sentences only. Next the paper suggests a method of incorporating position weight into these schemes. In all, 14 methods, six of non-PageRank type and eight of PageRank type, have been investigated. To clearly distinguish between the various schemes, we call the methods incorporating the discounting and position weight enhancements over Lexical Rank schemes Sentence Rank (SR) methods. Intrinsic evaluation of all 14 graph based methods was done using the conventional Precision metric and metrics earlier proposed by us: Effectiveness1 (E1) and Effectiveness2 (E2). The experimental study brings out that the proposed SR methods are superior to all the other methods.
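    A toy sketch of the degree-centrality variant with the two enhancements mentioned above (recursive discounting of selected sentences and an optional position weight); the similarity threshold and the multiplicative position weighting are assumptions, and the LexRank-style variants are not reproduced.

```python
import numpy as np

def rank_sentences(similarity, position_weight=None, k=3, threshold=0.1):
    """Greedy degree-centrality sentence selection with discounting.

    similarity: n x n sentence-similarity matrix.
    position_weight: optional length-n weights favoring, e.g., early sentences.
    Returns the indices of the k selected sentences in selection order.
    """
    S = (np.asarray(similarity) > threshold).astype(float)   # binarized similarity graph
    np.fill_diagonal(S, 0.0)
    n = S.shape[0]
    w = np.ones(n) if position_weight is None else np.asarray(position_weight, float)
    remaining = set(range(n))
    selected = []
    for _ in range(min(k, n)):
        idx = list(remaining)
        degrees = S[np.ix_(idx, idx)].sum(axis=1) * w[idx]    # degree among remaining only
        best = idx[int(np.argmax(degrees))]
        selected.append(best)
        remaining.remove(best)        # discounting: drop it from further consideration
    return selected
```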

  6. Novel Opportunistic Network Routing Based on Social Rank for Device-to-Device Communication

    Directory of Open Access Journals (Sweden)

    Tong Wang

    2017-01-01

    Full Text Available In recent years, there has been a dramatic proliferation of research concerned with fifth-generation (5G) mobile communication networks, among which device-to-device (D2D) communication is one of the key technologies. Due to the intermittent connection of nodes, the D2D network topology may be disconnected frequently, which will lead to failures in the transmission of large data files. In opportunistic networks, nodes that encounter others they have never met before tend to flood messages blindly, causing tremendous network overhead; to address this, a novel opportunistic network routing protocol based on social rank and intermeeting time (SRIT) is proposed in this paper. An improved utility approach applied in utility replication, based on encounter durations and intermeeting time, is put forward to enhance the routing efficiency. Meanwhile, in order to select better candidate nodes in the network, a social graph among people is established, reflecting how they socially relate to each other, for social rank replication. The results under the evaluated scenario show an advantage of the proposed opportunistic network routing based on social rank and intermeeting time (SRIT) over the compared algorithms in terms of delivery ratio, average delivery latency, and overhead ratio.

  7. Automated confidence ranked classification of randomized controlled trial articles: an aid to evidence-based medicine

    Science.gov (United States)

    Smalheiser, Neil R; McDonagh, Marian S; Yu, Clement; Adams, Clive E; Davis, John M; Yu, Philip S

    2015-01-01

    Objective: For many literature review tasks, including systematic review (SR) and other aspects of evidence-based medicine, it is important to know whether an article describes a randomized controlled trial (RCT). Current manual annotation is not complete or flexible enough for the SR process. In this work, highly accurate machine learning predictive models were built that include confidence predictions of whether an article is an RCT. Materials and Methods: The LibSVM classifier was used with forward selection of potential feature sets on a large human-related subset of MEDLINE to create a classification model requiring only the citation, abstract, and MeSH terms for each article. Results: The model achieved an area under the receiver operating characteristic curve of 0.973 and mean squared error of 0.013 on the held out year 2011 data. Accurate confidence estimates were confirmed on a manually reviewed set of test articles. A second model not requiring MeSH terms was also created, and performs almost as well. Discussion: Both models accurately rank and predict article RCT confidence. Using the model and the manually reviewed samples, it is estimated that about 8000 (3%) additional RCTs can be identified in MEDLINE, and that 5% of articles tagged as RCTs in Medline may not be identified. Conclusion: Retagging human-related studies with a continuously valued RCT confidence is potentially more useful for article ranking and review than a simple yes/no prediction. The automated RCT tagging tool should offer significant savings of time and effort during the process of writing SRs, and is a key component of a multistep text mining pipeline that we are building to streamline SR workflow. In addition, the model may be useful for identifying errors in MEDLINE publication types. The RCT confidence predictions described here have been made available to users as a web service with a user query form front end at: http://arrowsmith.psych

  8. Non-Convex Sparse and Low-Rank Based Robust Subspace Segmentation for Data Mining.

    Science.gov (United States)

    Cheng, Wenlong; Zhao, Mingbo; Xiong, Naixue; Chui, Kwok Tai

    2017-07-15

    Parsimony, including sparsity and low-rank, has shown great importance for data mining in social networks, particularly in tasks such as segmentation and recognition. Traditionally, such modeling approaches rely on an iterative algorithm that minimizes an objective function with convex l₁-norm or nuclear norm constraints. However, the obtained results by convex optimization are usually suboptimal to solutions of original sparse or low-rank problems. In this paper, a novel robust subspace segmentation algorithm has been proposed by integrating lp-norm and Schatten p-norm constraints. Our so-obtained affinity graph can better capture local geometrical structure and the global information of the data. As a consequence, our algorithm is more generative, discriminative and robust. An efficient linearized alternating direction method is derived to realize our model. Extensive segmentation experiments are conducted on public datasets. The proposed algorithm is revealed to be more effective and robust compared to five existing algorithms.

  9. ROCIT : a visual object recognition algorithm based on a rank-order coding scheme.

    Energy Technology Data Exchange (ETDEWEB)

    Gonzales, Antonio Ignacio; Reeves, Paul C.; Jones, John J.; Farkas, Benjamin D.

    2004-06-01

    This document describes ROCIT, a neural-inspired object recognition algorithm based on a rank-order coding scheme that uses a light-weight neuron model. ROCIT coarsely simulates a subset of the human ventral visual stream from the retina through the inferior temporal cortex. It was designed to provide an extensible baseline from which to improve the fidelity of the ventral stream model and explore the engineering potential of rank order coding with respect to object recognition. This report describes the baseline algorithm, the model's neural network architecture, the theoretical basis for the approach, and reviews the history of similar implementations. Illustrative results are used to clarify algorithm details. A formal benchmark to the 1998 FERET fafc test shows above average performance, which is encouraging. The report concludes with a brief review of potential algorithmic extensions for obtaining scale and rotational invariance.

  10. Evaluation of Vienna’s World Economic Position Based on Global and World City Rankings

    Directory of Open Access Journals (Sweden)

    Andrea Uszkai

    2016-12-01

    Full Text Available Metropolitan areas play a dominant role in today's economic, social and environmental processes; therefore, scientific interest in global and world cities has also increased. They can be considered key players of the world economy, and a very complex competition takes place among them which crosses national state borders. Every city tries to reach the most favorable position, and this rivalry has helped the birth of several city rankings. This paper has two important aims. Firstly, it explains the terms world city and global city based on the international literature and asks to which of these categories the Austrian capital belongs. Secondly, it examines the position of Vienna in the different world and global city rankings.

  11. Assembly line balancing with resource constraints using new rank-based crossovers

    Science.gov (United States)

    Kamarudin, N. H.; Rashid, M. F. F. Ab.

    2017-10-01

    Assembly line balancing (ALB) is about distributing the assembly tasks into workstations with almost equal workloads. Recently, researchers have started to consider resource constraints in ALB, such as machines and workers, to make the assembly layout more efficient. This paper presents an ALB problem with resource constraints (ALB-RC) to minimize the number of workstations, machines and workers. For the optimization purpose, a genetic algorithm (GA) with two new crossovers is introduced. The crossovers are developed using a ranking approach and are known as rank-based crossover type I and type II (RBC-I and RBC-II). These crossovers are tested against popular combinatorial crossovers using 17 benchmark problems. The computational experiment results indicate that RBC-II has better overall performance because of the balance between divergence and guidance in the reproduction process. In future work, RBC-I and RBC-II will be tested on different variants of the ALB problem.

  12. GLRT-Based Spectrum Sensing with Blindly Learned Feature under Rank-1 Assumption

    CERN Document Server

    Zhang, Peng

    2011-01-01

    Prior knowledge can improve the performance of spectrum sensing. Instead of using universal features as prior knowledge, we propose to blindly learn the localized feature at the secondary user. Motivated by pattern recognition in machine learning, we define signal feature as the leading eigenvector of the signal's sample covariance matrix. Feature learning algorithm (FLA) for blind feature learning and feature template matching algorithm (FTM) for spectrum sensing are proposed. Furthermore, we implement the FLA and FTM in hardware. Simulations and hardware experiments show that signal feature can be learned blindly. In addition, by using signal feature as prior knowledge, the detection performance can be improved by about 2 dB. Motivated by experimental results, we derive several GLRT based spectrum sensing algorithms under rank-1 assumption, considering signal feature, signal power and noise power as the available parameters. The performance of our proposed algorithms is tested on both synthesized rank-1 sig...
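    To illustrate the feature definition used above (the leading eigenvector of the sample covariance matrix) together with a simple template match, here is a hedged sketch; it is not the authors' FLA/FTM implementation, and the matching threshold is an assumption.

```python
import numpy as np

def learn_feature(samples):
    """Simplified FLA-style step: the signal feature is the leading eigenvector
    of the sample covariance matrix. `samples` has shape (dim, n_samples)."""
    X = np.asarray(samples)
    C = X @ X.conj().T / X.shape[1]          # sample covariance
    _, vecs = np.linalg.eigh(C)
    return vecs[:, -1]                       # eigenvector of the largest eigenvalue

def feature_match(stored_feature, new_samples, threshold=0.8):
    """Simplified FTM-style step: declare the band occupied if the newly learned
    leading eigenvector aligns with the stored feature (threshold is illustrative)."""
    new_feature = learn_feature(new_samples)
    score = np.abs(np.vdot(stored_feature, new_feature))   # |cosine| between unit vectors
    return score > threshold, score
```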

  13. Environmental and health hazard ranking and assessment of plastic polymers based on chemical composition.

    Science.gov (United States)

    Lithner, Delilah; Larsson, Ake; Dave, Göran

    2011-08-15

    Plastics constitute a large material group with a global annual production that has doubled in 15 years (245 million tonnes in 2008). Plastics are present everywhere in society and the environment, especially the marine environment, where large amounts of plastic waste accumulate. The knowledge of human and environmental hazards and risks from chemicals associated with the diversity of plastic products is very limited. Most chemicals used for producing plastic polymers are derived from non-renewable crude oil, and several are hazardous. These may be released during the production, use and disposal of the plastic product. In this study the environmental and health hazards of chemicals used in 55 thermoplastic and thermosetting polymers were identified and compiled. A hazard ranking model was developed for the hazard classes and categories in the EU classification and labelling (CLP) regulation which is based on the UN Globally Harmonized System. The polymers were ranked based on monomer hazard classifications, and initial assessments were made. The polymers that ranked as most hazardous are made of monomers classified as mutagenic and/or carcinogenic (category 1A or 1B). These belong to the polymer families of polyurethanes, polyacrylonitriles, polyvinyl chloride, epoxy resins, and styrenic copolymers. All have a large global annual production (1-37 million tonnes). A considerable number of polymers (31 out of 55) are made of monomers that belong to the two worst of the ranking model's five hazard levels, i.e. levels IV-V. The polymers that are made of level IV monomers and have a large global annual production (1-5 million tonnes) are phenol formaldehyde resins, unsaturated polyesters, polycarbonate, polymethyl methacrylate, and urea-formaldehyde resins. This study has identified hazardous substances used in polymer production for which the risks should be evaluated for decisions on the need for risk reduction measures, substitution, or even phase out. Copyright

  14. Academic rankings: an approach to a Portuguese ranking

    OpenAIRE

    Bernardino, Pedro; Marques,Rui

    2009-01-01

    The academic rankings are a controversial subject in higher education. However, despite all the criticism, academic rankings are here to stay, and more and more different stakeholders use rankings to obtain information about the institutions’ performance. The two most well-known rankings, The Times and the Shanghai Jiao Tong University rankings, have different methodologies. The Times ranking is based on peer review, whereas the Shanghai ranking has only quantitative indicators and is mainly ba

  15. Ranking Operations Management conferences

    NARCIS (Netherlands)

    Steenhuis, H.J.; de Bruijn, E.J.; Gupta, Sushil; Laptaned, U

    2007-01-01

    Several publications have appeared in the field of Operations Management which rank Operations Management related journals. Several ranking systems exist for journals based on, for example, perceived relevance and quality, citation, and author affiliation. Many academics also publish at conferences

  16. Methodological Bases for Ranking the European Union Countries in Terms of Macroeconomic Security

    Directory of Open Access Journals (Sweden)

    Tymoshenko Olena V.

    2015-11-01

    Full Text Available The fundamental contradictions of existing methodical approaches to assessing the level of state economic security have been substantiated, and proposals on the introduction of a unified methodology for its assessment, which would be acceptable for use at the international level or for a specific cluster of countries, have been developed. Based on the conducted research it has been found that there are no unified criteria for such a classification of countries. To determine the most significant coefficients and critical values of the indicators of economic security, it is appropriate that the countries be grouped in terms of the level of economic development proposed by the UN Commission and the IMF. Analysis of the economic security level has been conducted for the countries-members of the European Union as a separate cluster of countries on the example of macroeconomic security indicators. Based on the evaluation it has been found that the proposed list of indicators and their critical values is economically sound and built on the principle of adequacy, representativeness and comprehensiveness. In 2004 the most secure countries of the EU corresponding to the macroeconomic security standards were Austria, Denmark, Sweden and Finland, whereas in 2014 the percentage of absolutely secure countries decreased from 14.3 to 7.1%, and only Denmark and Sweden remained in the ranking. During the analyzed period Bulgaria and Croatia got into the risk zone, while Estonia, Lithuania, Latvia and Romania were in the danger zone. In 2014 Ukraine, in terms of its macroeconomic security, was in a critical state, which testified to serious structural and systemic imbalances in its development.

  17. A Topology Evolution Model Based on Revised PageRank Algorithm and Node Importance for Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Xiaogang Qi

    2015-01-01

    Full Text Available A wireless sensor network (WSN) is a classical self-organizing communication network, and its topology evolution has currently become one of the attractive issues in this research field. Accordingly, the problem is divided into two subproblems: one is to design a new preferential attachment method and the other is to analyze the dynamics of the network topology evolution. To solve the first subproblem, a revised PageRank algorithm, called Con-rank, is proposed to evaluate the node importance based on the existing node contraction, and then a novel preferential attachment is designed based on the node importance calculated by the proposed Con-rank algorithm. To solve the second one, we firstly analyze the network topology evolution dynamics in a theoretical way and then simulate the evolution process. Theoretical analysis proves that the network topology evolution of our model agrees with a power-law distribution, and simulation results are well consistent with the conclusions obtained from the theoretical analysis, while simultaneously showing that our topology evolution model is superior to the classic BA model in the average path length and the clustering coefficient, and that the network topology is more robust and can tolerate random attacks.
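    For reference, the base PageRank computation that Con-rank revises is a plain power iteration; the sketch below shows only that generic version (the node-contraction revision and the preferential-attachment rule are not reproduced).

```python
import numpy as np

def pagerank(adj, d=0.85, tol=1e-8, max_iter=200):
    """Plain PageRank by power iteration on a (possibly weighted) adjacency matrix.
    Dangling nodes are given a uniform transition row."""
    A = np.asarray(adj, dtype=float)
    n = A.shape[0]
    rowsum = A.sum(axis=1, keepdims=True)
    P = np.divide(A, rowsum, out=np.full_like(A, 1.0 / n), where=rowsum > 0)  # row-stochastic
    r = np.full(n, 1.0 / n)
    for _ in range(max_iter):
        r_new = (1 - d) / n + d * (P.T @ r)
        if np.abs(r_new - r).sum() < tol:
            return r_new
        r = r_new
    return r
```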

  18. A Simple Model to Rank Shellfish Farming Areas Based on the Risk of Disease Introduction and Spread.

    Science.gov (United States)

    Thrush, M A; Pearce, F M; Gubbins, M J; Oidtmann, B C; Peeler, E J

    2017-08-01

    The European Union Council Directive 2006/88/EC requires that risk-based surveillance (RBS) for listed aquatic animal diseases is applied to all aquaculture production businesses. The principle behind this is the efficient use of resources directed towards high-risk farm categories, animal types and geographic areas. To achieve this requirement, fish and shellfish farms must be ranked according to their risk of disease introduction and spread. We present a method to risk rank shellfish farming areas based on the risk of disease introduction and spread and demonstrate how the approach was applied in 45 shellfish farming areas in England and Wales. Ten parameters were used to inform the risk model, which were grouped into four risk themes based on related pathways for transmission of pathogens: (i) live animal movement, (ii) transmission via water, (iii) short distance mechanical spread (birds) and (iv) long distance mechanical spread (vessels). Weights (informed by expert knowledge) were applied both to individual parameters and to risk themes for introduction and spread to reflect their relative importance. A spreadsheet model was developed to determine quantitative scores for the risk of pathogen introduction and risk of pathogen spread for each shellfish farming area. These scores were used to independently rank areas for risk of introduction and for risk of spread. Thresholds were set to establish risk categories (low, medium and high) for introduction and spread based on risk scores. Risk categories for introduction and spread for each area were combined to provide overall risk categories to inform a risk-based surveillance programme directed at the area level. Applying the combined risk category designation framework for risk of introduction and spread suggested by European Commission guidance for risk-based surveillance, 4, 10 and 31 areas were classified as high, medium and low risk, respectively. © 2016 Crown copyright.
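    A minimal sketch of the kind of weighted scoring such a model performs: parameter scores are weighted within each risk theme, theme scores are weighted and summed, and the result is mapped to a low/medium/high category. All weights and thresholds below are placeholders, not the expert-derived values used in the paper.

```python
def area_risk_score(parameters, parameter_weights, theme_weights):
    """Weighted risk score for one shellfish farming area.

    parameters: {theme: {parameter_name: score}}, scores on a common scale.
    parameter_weights and theme_weights mirror that structure (placeholder values)."""
    total = 0.0
    for theme, params in parameters.items():
        theme_score = sum(v * parameter_weights[theme][p] for p, v in params.items())
        total += theme_weights[theme] * theme_score
    return total

def categorize(score, low_cut, high_cut):
    """Map a score to the low/medium/high categories used for risk-based surveillance."""
    if score < low_cut:
        return "low"
    return "medium" if score < high_cut else "high"
```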

  19. Super-resolution reconstruction of 4D-CT lung data via patch-based low-rank matrix reconstruction

    Science.gov (United States)

    Fang, Shiting; Wang, Huafeng; Liu, Yueliang; Zhang, Minghui; Yang, Wei; Feng, Qianjin; Chen, Wufan; Zhang, Yu

    2017-10-01

    Lung 4D computed tomography (4D-CT), which is a time-resolved CT data acquisition, performs an important role in explicitly including respiratory motion in treatment planning and delivery. However, the radiation dose is usually reduced at the expense of inter-slice spatial resolution to minimize radiation-related health risk. Therefore, resolution enhancement along the superior-inferior direction is necessary. In this paper, a super-resolution (SR) reconstruction method based on a patch low-rank matrix reconstruction is proposed to improve the resolution of lung 4D-CT images. Specifically, a low-rank matrix related to every patch is constructed by using a patch searching strategy. Thereafter, the singular value shrinkage is employed to recover the high-resolution patch under the constraints of the image degradation model. The output high-resolution patches are finally assembled to output the entire image. This method is extensively evaluated using two public data sets. Quantitative analysis shows that the proposed algorithm decreases the root mean square error by 9.7%-33.4% and the edge width by 11.4%-24.3%, relative to linear interpolation, back projection (BP) and Zhang et al’s algorithm. A new algorithm has been developed to improve the resolution of 4D-CT. In all experiments, the proposed method outperforms various interpolation methods, as well as BP and Zhang et al’s method, thus indicating the effectivity and competitiveness of the proposed algorithm.

  20. Reference-free single-pass EPI Nyquist ghost correction using annihilating filter-based low rank Hankel matrix (ALOHA).

    Science.gov (United States)

    Lee, Juyoung; Jin, Kyong Hwan; Ye, Jong Chul

    2016-12-01

    MR measurements from an echo-planar imaging (EPI) sequence produce Nyquist ghost artifacts that originate from inconsistencies between odd and even echoes. Several reconstruction algorithms have been proposed to reduce such artifacts, but most of these methods require either additional reference scans or multipass EPI acquisition. This article proposes a novel and accurate single-pass EPI ghost artifact correction method that does not require any additional reference data. After converting a ghost correction problem into separate k-space data interpolation problems for even and odd phase encoding, our algorithm exploits an observation that the differential k-space data between the even and odd echoes is a Fourier transform of an underlying sparse image. Accordingly, we can construct a rank-deficient Hankel structured matrix, whose missing data can be recovered using an annihilating filter-based low rank Hankel structured matrix completion approach. The proposed method was applied to EPI data for both single and multicoil acquisitions. Experimental results using in vivo data confirmed that the proposed method can completely remove ghost artifacts successfully without prescan echoes. Owing to the discovery of the annihilating filter relationship from the intrinsic EPI image property, the proposed method successfully suppresses ghost artifacts without a prescan step. Magn Reson Med 76:1775-1789, 2016. © 2016 International Society for Magnetic Resonance in Medicine.

  1. Quantitative Analysis of Mixtures of Monoprotic Acids Applying Modified Model-Based Rank Annihilation Factor Analysis on Variation Matrices of Spectrophotometric Acid-Base Titrations

    Directory of Open Access Journals (Sweden)

    Ebrahim Ghorbani-Kalhor

    2015-04-01

    Full Text Available In the current work, a new version of rank annihilation factor analysis was developed to circumvent the rank deficiency problem in multivariate data measurements. Simultaneous determination of the dissociation constant and concentration of monoprotic acids was performed by applying model-based rank annihilation factor analysis to variation matrices of spectrophotometric acid-base titration data. Variation matrices can be obtained by subtracting the first row of the data matrix from all rows of the main data matrix. This method uses variation matrices instead of multivariate spectrophotometric acid-base titration matrices to circumvent the rank deficiency problem in the rank quantitation step. The applicability of this approach was evaluated with simulated data at the first stage; then the binary mixtures of ascorbic and sorbic acids as model compounds were investigated by the proposed method. Finally, the proposed method was successfully applied to resolving ascorbic and sorbic acid in an orange juice real sample. Therefore, unique results were achieved by applying rank annihilation factor analysis to the variation matrix and using the hard-soft model combination advantage without any problem or difficulty in rank determination.

  2. A dose-volume-based tool for evaluating and ranking IMRT treatment plans.

    Science.gov (United States)

    Miften, Moyed M; Das, Shiva K; Su, Min; Marks, Lawrence B

    2004-01-01

    External beam radiotherapy is commonly used for patients with cancer. While tumor shrinkage and palliation are frequently achieved, local control and cure remain elusive for many cancers. With regard to local control, the fundamental problem is that radiotherapy-induced normal tissue injury limits the dose that can be delivered to the tumor. While intensity-modulated radiation therapy (IMRT) allows for the delivery of higher tumor doses and the sparing of proximal critical structures, multiple competing plans can be generated based on dosimetric and/or biological constraints, and these plans need to be compared. In this work, an IMRT treatment plan evaluation and ranking tool based on dosimetric criteria is presented. The treatment plan with the highest uncomplicated target conformity index (TCI+) is ranked at the top. The TCI+ is a dose-volume-based index that considers both a target conformity index (TCI) and a normal tissue-sparing index (NTSI). TCI+ is designed to assist in the process of judging the merit of a clinical treatment plan. To demonstrate the utility of this tool, several competing lung and prostate IMRT treatment plans are compared. Results show that the plan with the highest TCI+ value best accomplished the competing goals of tumor coverage and critical-structure sparing among rival treatment plans for both treatment sites. The study demonstrates, first, that dose-volume-based indices, which summarize complex dose distributions through a single value, can be used to automatically select the optimal plan among competing plans and, second, that this dose-volume-based index may be appropriate for ranking IMRT dose distributions.
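
    The record does not give the TCI+ formula, so the sketch below uses stand-in definitions (a Paddick-style conformity term and a simple organ-at-risk sparing fraction) purely to illustrate how a single dose-volume score can rank competing plans; it is not the index of Miften et al.

```python
# Hypothetical illustration of a dose-volume-based plan score combining a
# target conformity term and a normal-tissue sparing term. The formulas are
# stand-ins chosen only to show the ranking mechanics.
import numpy as np

def plan_score(dose, target_mask, oar_mask, d_presc, d_tol):
    """Return a single score in [0, 1]; higher is assumed to be better."""
    covered = (dose >= d_presc)
    # Conformity: coverage x selectivity (a Paddick-style surrogate for TCI)
    tci = (covered & target_mask).sum()**2 / (target_mask.sum() * max(covered.sum(), 1))
    # Sparing: fraction of organ-at-risk voxels kept below tolerance (surrogate for NTSI)
    ntsi = (dose[oar_mask] < d_tol).mean()
    return tci * ntsi

# Rank two toy 1-D "plans" on a 100-voxel phantom.
target = np.zeros(100, bool); target[40:60] = True
oar = np.zeros(100, bool); oar[60:80] = True
plan_a = np.where(target, 60.0, 20.0)                 # conformal plan
plan_b = np.full(100, 60.0)                           # no sparing at all
scores = {name: plan_score(d, target, oar, 60.0, 30.0)
          for name, d in [("A", plan_a), ("B", plan_b)]}
print(sorted(scores, key=scores.get, reverse=True))   # -> ['A', 'B']
```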

  3. Population models and simulation methods: The case of the Spearman rank correlation.

    Science.gov (United States)

    Astivia, Oscar L Olvera; Zumbo, Bruno D

    2017-11-01

    The purpose of this paper is to highlight the importance of a population model in guiding the design and interpretation of simulation studies used to investigate the Spearman rank correlation. The Spearman rank correlation has been known for over a hundred years to applied researchers and methodologists alike and is one of the most widely used non-parametric statistics. Still, certain misconceptions can be found, either explicitly or implicitly, in the published literature because a population definition for this statistic is rarely discussed within the social and behavioural sciences. By relying on copula distribution theory, a population model is presented for the Spearman rank correlation, and its properties are explored both theoretically and in a simulation study. Through the use of the Iman-Conover algorithm (which allows the user to specify the rank correlation as a population parameter), simulation studies from previously published articles are explored, and it is found that many of the conclusions purported in them regarding the nature of the Spearman correlation would change if the data-generation mechanism better matched the simulation design. More specifically, issues such as small sample bias and lack of power of the t-test and r-to-z Fisher transformation disappear when the rank correlation is calculated from data sampled where the rank correlation is the population parameter. A proof for the consistency of the sample estimate of the rank correlation is shown as well as the flexibility of the copula model to encompass results previously published in the mathematical literature. © 2017 The British Psychological Society.
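
    A minimal sketch of treating the Spearman correlation as a population parameter is given below. It uses a Gaussian copula rather than the Iman-Conover algorithm itself, relying on the bivariate-normal identity rho_S = (6/pi) arcsin(r/2) to choose the Pearson correlation that yields a target population Spearman correlation; the marginal distributions are arbitrary choices.

```python
# Sketch of sampling bivariate data whose *population* Spearman correlation is
# a known target, via a Gaussian copula (same spirit as fixing the rank
# correlation as a population parameter, but not the Iman-Conover algorithm).
import numpy as np
from scipy import stats

def sample_with_spearman(rho_s, n, rng):
    r = 2.0 * np.sin(np.pi * rho_s / 6.0)     # Pearson r giving Spearman rho_s
    cov = [[1.0, r], [r, 1.0]]
    z = rng.multivariate_normal([0.0, 0.0], cov, size=n)
    u = stats.norm.cdf(z)                     # copula: uniform margins
    # Any monotone marginal transform preserves the rank correlation:
    return stats.expon.ppf(u[:, 0]), stats.norm.ppf(u[:, 1])

rng = np.random.default_rng(1)
x, y = sample_with_spearman(0.5, 100_000, rng)
print(stats.spearmanr(x, y)[0])               # sample estimate close to 0.5
```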

  4. Detecting determinism with improved sensitivity in time series: Rank-based nonlinear predictability score

    Science.gov (United States)

    Naro, Daniel; Rummel, Christian; Schindler, Kaspar; Andrzejak, Ralph G.

    2014-09-01

    The rank-based nonlinear predictability score was recently introduced as a test for determinism in point processes. We here adapt this measure to time series sampled from time-continuous flows. We use noisy Lorenz signals to compare this approach against a classical amplitude-based nonlinear prediction error. Both measures show an almost identical robustness against Gaussian white noise. In contrast, when the amplitude distribution of the noise has a narrower central peak and heavier tails than the normal distribution, the rank-based nonlinear predictability score outperforms the amplitude-based nonlinear prediction error. For this type of noise, the nonlinear predictability score has a higher sensitivity for deterministic structure in noisy signals. It also yields a higher statistical power in a surrogate test of the null hypothesis of linear stochastic correlated signals. We show the high relevance of this improved performance in an application to electroencephalographic (EEG) recordings from epilepsy patients. Here the nonlinear predictability score again appears of higher sensitivity to nonrandomness. Importantly, it yields an improved contrast between signals recorded from brain areas where the first ictal EEG signal changes were detected (focal EEG signals) versus signals recorded from brain areas that were not involved at seizure onset (nonfocal EEG signals).
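
    The exact definition of the rank-based nonlinear predictability score is not reproduced in this record. The sketch below is only loosely modelled on it: it makes nearest-neighbour predictions on a delay embedding and scores them with a Spearman rank correlation, which is enough to separate a noisy deterministic signal from white noise. Embedding dimension, delay, horizon and Theiler window are arbitrary choices.

```python
# Simplified rank-based predictability sketch: predict each point from the
# future of its nearest embedded neighbour and score the predictions by their
# Spearman correlation with the true values.
import numpy as np
from scipy import stats

def rank_predictability(x, dim=3, tau=1, horizon=1, theiler=10):
    N = len(x) - (dim - 1) * tau - horizon
    emb = np.array([x[i:i + dim * tau:tau] for i in range(N)])
    preds, truths = [], []
    for i in range(N):
        d = np.linalg.norm(emb - emb[i], axis=1)
        d[max(0, i - theiler):i + theiler + 1] = np.inf  # exclude temporal neighbours
        j = int(np.argmin(d))
        preds.append(x[j + (dim - 1) * tau + horizon])
        truths.append(x[i + (dim - 1) * tau + horizon])
    return stats.spearmanr(preds, truths)[0]

rng = np.random.default_rng(2)
t = np.arange(2000) * 0.1
deterministic = np.sin(t) + 0.1 * rng.standard_normal(t.size)
noise = rng.standard_normal(t.size)
print(rank_predictability(deterministic), rank_predictability(noise))
```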

  5. Detecting determinism with improved sensitivity in time series: rank-based nonlinear predictability score.

    Science.gov (United States)

    Naro, Daniel; Rummel, Christian; Schindler, Kaspar; Andrzejak, Ralph G

    2014-09-01

    The rank-based nonlinear predictability score was recently introduced as a test for determinism in point processes. We here adapt this measure to time series sampled from time-continuous flows. We use noisy Lorenz signals to compare this approach against a classical amplitude-based nonlinear prediction error. Both measures show an almost identical robustness against Gaussian white noise. In contrast, when the amplitude distribution of the noise has a narrower central peak and heavier tails than the normal distribution, the rank-based nonlinear predictability score outperforms the amplitude-based nonlinear prediction error. For this type of noise, the nonlinear predictability score has a higher sensitivity for deterministic structure in noisy signals. It also yields a higher statistical power in a surrogate test of the null hypothesis of linear stochastic correlated signals. We show the high relevance of this improved performance in an application to electroencephalographic (EEG) recordings from epilepsy patients. Here the nonlinear predictability score again appears of higher sensitivity to nonrandomness. Importantly, it yields an improved contrast between signals recorded from brain areas where the first ictal EEG signal changes were detected (focal EEG signals) versus signals recorded from brain areas that were not involved at seizure onset (nonfocal EEG signals).

  6. Pareto-Ranking Based Quantum-Behaved Particle Swarm Optimization for Multiobjective Optimization

    Directory of Open Access Journals (Sweden)

    Na Tian

    2015-01-01

    Full Text Available A study on Pareto-ranking based quantum-behaved particle swarm optimization (QPSO) for multiobjective optimization problems is presented in this paper. During the iterations, an external repository is maintained to remember the nondominated solutions, from which the global best position is chosen. A comparison between different elitist selection strategies (preference order, sigma value, and random selection) is performed on four benchmark functions and two metrics. The results demonstrate that QPSO with preference order performs comparably to QPSO with sigma value across different numbers of objectives. Finally, QPSO with sigma value is applied to solve multiobjective flexible job-shop scheduling problems.
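
    A minimal sketch of the external repository of nondominated solutions mentioned above is shown below; the global best is drawn from the archive uniformly at random, standing in for the preference-order and sigma-value strategies compared in the paper.

```python
# External archive for Pareto-ranking based multiobjective PSO variants:
# keep only nondominated solutions and pick a global best from the archive.
import random

def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimisation)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def update_archive(archive, candidate):
    if any(dominates(a, candidate) for a in archive):
        return archive                                   # candidate is dominated
    archive = [a for a in archive if not dominates(candidate, a)]
    archive.append(candidate)
    return archive

def pick_global_best(archive, rng=random):
    return rng.choice(archive)                           # random elitist selection

archive = []
for point in [(1.0, 5.0), (2.0, 2.0), (5.0, 1.0), (3.0, 3.0), (0.5, 6.0)]:
    archive = update_archive(archive, point)
print(archive)                  # (3.0, 3.0) is dominated by (2.0, 2.0) and dropped
print(pick_global_best(archive))
```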

  7. Facilitating Software Architecting by Ranking Requirements based on their Impact on the Architecture Process

    NARCIS (Netherlands)

    Galster, Matthias; Eberlein, Armin; Sprinkle, J; Sterritt, R; Breitman, K

    2011-01-01

    Ranking software requirements helps decide what requirements to implement during a software development project, and when. Currently, requirements ranking techniques focus on resource constraints or stakeholder priorities and neglect the effect of requirements on the software architecture process.

  8. Environmental restoration risk-based prioritization work package planning and risk ranking methodology. Revision 2

    Energy Technology Data Exchange (ETDEWEB)

    Dail, J.L.; Nanstad, L.D.; White, R.K.

    1995-06-01

    This document presents the risk-based prioritization methodology developed to evaluate and rank Environmental Restoration (ER) work packages at the five US Department of Energy, Oak Ridge Field Office (DOE-ORO) sites [i.e., Oak Ridge K-25 Site (K-25), Portsmouth Gaseous Diffusion Plant (PORTS), Paducah Gaseous Diffusion Plant (PGDP), Oak Ridge National Laboratory (ORNL), and the Oak Ridge Y-12 Plant (Y-12)], the ER Off-site Program, and Central ER. This prioritization methodology was developed to support the increased rigor and formality of work planning in the overall conduct of operations within the DOE-ORO ER Program. Prioritization is conducted as an integral component of the fiscal ER funding cycle to establish program budget priorities. The purpose of the ER risk-based prioritization methodology is to provide ER management with the tools and processes needed to evaluate, compare, prioritize, and justify fiscal budget decisions for a diverse set of remedial action, decontamination and decommissioning, and waste management activities. The methodology provides the ER Program with a framework for (1) organizing information about identified DOE-ORO environmental problems, (2) generating qualitative assessments of the long- and short-term risks posed by DOE-ORO environmental problems, and (3) evaluating the benefits associated with candidate work packages designed to reduce those risks. Prioritization is conducted to rank ER work packages on the basis of the overall value (e.g., risk reduction, stakeholder confidence) each package provides to the ER Program. Application of the methodology yields individual work package "scores" and rankings that are used to develop fiscal budget requests. This document presents the technical basis for the decision support tools and process.

  9. An Automatic Web Service Composition Framework Using QoS-Based Web Service Ranking Algorithm.

    Science.gov (United States)

    Mallayya, Deivamani; Ramachandran, Baskaran; Viswanathan, Suganya

    2015-01-01

    Web services have become the technology of choice for service-oriented computing to meet the interoperability demands of web applications. In the Internet era, the exponential addition of web services makes "quality of service" an essential parameter for discriminating between web services. In this paper, a user preference based web service ranking (UPWSR) algorithm is proposed to rank web services based on user preferences and the QoS aspects of the web service. When the user's request cannot be fulfilled by a single atomic service, several existing services should be composed and delivered as a composition. The proposed framework allows the user to specify local and global constraints for composite web services, which improves flexibility. The UPWSR algorithm identifies the best-fit services for each task in the user request and, by choosing the number of candidate services for each task, reduces the time to generate the composition plans. To tackle the problem of web service composition, the QoS-aware automatic web service composition (QAWSC) algorithm proposed in this paper is based on the QoS aspects of the web services and user preferences. The proposed framework also allows the user to provide feedback about the composite service, which improves the reputation of the services.

  10. Monte Carlo methods for top-k personalized PageRank lists and name disambiguation

    NARCIS (Netherlands)

    Avrachenkov, Konstatin; Litvak, Nelli; Nemirovsky, Danil; Smirnova, Elena; Sokol, Marina

    We study a problem of quick detection of top-k Personalized PageRank lists. This problem has a number of important applications such as finding local cuts in large graphs, estimation of similarity distance and name disambiguation. In particular, we apply our results to construct efficient algorithms
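
    A minimal sketch of the basic Monte Carlo idea for Personalized PageRank is given below: short random walks with restart from the seed node, with nodes ranked by visit frequency. The toy graph, damping factor and number of walks are illustrative and are not taken from the paper.

```python
# Monte Carlo estimate of Personalized PageRank: run random walks that restart
# with probability (1 - alpha) and rank nodes by how often they are visited.
import random
from collections import Counter

def mc_personalized_pagerank(adj, seed, alpha=0.85, n_walks=20000, rng=random):
    visits = Counter()
    for _ in range(n_walks):
        node = seed
        while True:
            visits[node] += 1
            if rng.random() > alpha or not adj[node]:
                break                                 # restart: the walk ends here
            node = rng.choice(adj[node])
    total = sum(visits.values())
    return {v: c / total for v, c in visits.items()}  # PPR estimates

adj = {0: [1, 2], 1: [2], 2: [0], 3: [0]}             # toy directed graph
ppr = mc_personalized_pagerank(adj, seed=0)
top_k = sorted(ppr, key=ppr.get, reverse=True)[:3]
print(top_k)
```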

  11. Ranking Method for Peak-Load Shifting Considering Different Types of Data

    DEFF Research Database (Denmark)

    Wang, Peng; Wen, Fushuan; Pinson, Pierre

    2016-01-01

    Management measures for peak-load shifting are employed to alleviate power shortages during the peak hours in some countries with power-supply shortages, such as China. One of the most popular measures is to rank the electricity users with respect to their relative importance in the society. In r...

  12. New approach for evaluating risk and ranking spillways based on operational safety

    Energy Technology Data Exchange (ETDEWEB)

    Briand, M.H.; Huard, M.O.; Hanno, H. [RSW Inc., Montreal, PQ (Canada); Manescu, D.; Morin, J.P. [Hydro-Quebec, Montreal, PQ (Canada). Dept. of Dam Safety

    2009-07-01

    This paper discussed a method developed by Hydro-Quebec to rank dam spillways according to their ability to react safely to mechanical or electrical component failures; uncertainties in hydrological conditions; unexpected human responses; and spillway component degradation. A computer simulation was used to model and assess the impact of various reservoir-level scenarios. Four case studies were used to investigate a wide variety of spillway characteristics and hydrological conditions. Results from power plant tripping events using functional equipment were used as an index of the spillway's safety. Results from the sensitivity analyses of equipment failures and modified hydrology simulations were then used to build a vulnerability index for the development of inspection and maintenance priorities. Spillway conditions during flood routing, taken from inspection reports, were then used to prepare a functionality index. A sensitivity matrix was then developed to produce a global ranking index for each spillway in the study. The method is now being applied to existing hydro-electric developments as a validation procedure. 3 refs., 11 tabs., 2 figs.

  13. Davidon-Broyden rank-one minimization methods in Hilbert space with application to optimal control problems

    Science.gov (United States)

    Straeter, T. A.

    1972-01-01

    The Davidon-Broyden class of rank one, quasi-Newton minimization methods is extended from Euclidean spaces to infinite-dimensional, real Hilbert spaces. For several techniques of choosing the step size, conditions are found which assure convergence of the associated iterates to the location of the minimum of a positive definite quadratic functional. For those techniques, convergence is achieved without the problem of the computation of a one-dimensional minimum at each iteration. The application of this class of minimization methods for the direct computation of the solution of an optimal control problem is outlined. The performance of various members of the class is compared by solving a sample optimal control problem. Finally, the sample problem is solved by other known gradient methods, and the results are compared with those obtained with the rank one quasi-Newton methods.
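
    The sketch below shows a finite-dimensional symmetric rank-one (SR1) quasi-Newton iteration on a positive definite quadratic, in the spirit of the rank-one class discussed above; it uses a fixed unit step rather than the step-size rules analysed in the paper.

```python
# Symmetric rank-one (SR1) quasi-Newton sketch on a quadratic functional.
import numpy as np

def sr1_minimize(grad, x0, n_iter=50, step=1.0, skip_tol=1e-8):
    x = np.asarray(x0, dtype=float)
    H = np.eye(len(x))                           # inverse-Hessian approximation
    g = grad(x)
    for _ in range(n_iter):
        p = -H @ g
        x_new = x + step * p
        g_new = grad(x_new)
        if np.linalg.norm(g_new) < 1e-10:        # converged
            return x_new
        s, y = x_new - x, g_new - g
        v = s - H @ y
        denom = v @ y
        # Standard SR1 safeguard: skip the update when the denominator is tiny.
        if abs(denom) > skip_tol * np.linalg.norm(v) * np.linalg.norm(y):
            H = H + np.outer(v, v) / denom       # rank-one correction
        x, g = x_new, g_new
    return x

# Minimise the positive definite quadratic f(x) = 0.5 x^T A x - b^T x.
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])
x_star = sr1_minimize(lambda x: A @ x - b, x0=[0.0, 0.0])
print(x_star, np.linalg.solve(A, b))             # the two should agree
```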

  14. Validation of a model for ranking aquaculture facilities for risk-based disease surveillance.

    Science.gov (United States)

    Diserens, Nicolas; Falzon, Laura Cristina; von Siebenthal, Beat; Schüpbach-Regula, Gertraud; Wahli, Thomas

    2017-09-15

    A semi-quantitative model for risk ranking of aquaculture facilities in Switzerland with regard to the introduction and spread of Viral Haemorrhagic Septicaemia (VHS) and Infectious Haematopoietic Necrosis (IHN) was developed in a previous study (Diserens et al., 2013). The objective of the present study was to validate this model using data collected during field visits at aquaculture sites in four Swiss cantons, compared to data collected through a questionnaire in the previous study. A discrepancy between the values obtained with the two different methods was found in 32.8% of the parameters, resulting in a significant difference (psystem could be advantageous for the factors which were identified as being more likely to vary over time, in particular for factors concerning fish movements, which showed a marginally significant difference in their risk scores (p≥0.1) within a six-month period. Nevertheless, the model proved to be stable over the considered period of time, as no substantial fluctuations in the risk categorisation were observed (Kappa agreement of 0.77). Finally, the model proved to be suitable to deliver a reliable risk ranking of Swiss aquaculture facilities according to their risk of becoming infected with or spreading VHS and IHN, as the five facilities that tested positive for these diseases in the last ten years were ranked as medium or high risk. Moreover, because the seven fish farms that were infected with Infectious Pancreatic Necrosis (IPN) during the same period also belonged to the risk categories medium and high, the classification appeared to correlate with the occurrence of this third viral fish disease. Copyright © 2017 Elsevier B.V. All rights reserved.

  15. A rank-based approach for correcting systematic biases in spatial disaggregation of coarse-scale climate simulations

    Science.gov (United States)

    Nahar, Jannatun; Johnson, Fiona; Sharma, Ashish

    2017-07-01

    Use of General Circulation Model (GCM) precipitation and evapotranspiration sequences for hydrologic modelling can result in unrealistic simulations due to the coarse scales at which GCMs operate and the systematic biases they contain. The Bias Correction Spatial Disaggregation (BCSD) method is a popular statistical downscaling and bias correction method developed to address this issue. The advantage of BCSD is its ability to reduce biases in the distribution of precipitation totals at the GCM scale and then introduce more realistic variability at finer scales than simpler spatial interpolation schemes. Although BCSD corrects biases at the GCM scale before disaggregation, at finer spatial scales biases are re-introduced by the assumptions made in the spatial disaggregation process. Our study focuses on this limitation of BCSD and proposes a rank-based approach that aims to reduce the spatial disaggregation bias, especially for low and high precipitation extremes. BCSD requires the specification of a multiplicative bias correction anomaly field that represents the ratio of the fine scale precipitation to the disaggregated precipitation. It is shown that there is significant temporal variation in the anomalies, which is masked when a mean anomaly field is used. This can be improved by modelling the anomalies in rank-space. Results from the application of the rank-BCSD procedure improve the match between the distributions of observed and downscaled precipitation at the fine scale compared to the original BCSD approach. Further improvements in the distribution are identified when a scaling correction to preserve mass in the disaggregation process is implemented. An assessment of the approach using a single GCM over Australia shows clear advantages, especially in the simulation of particularly low and high downscaled precipitation amounts.
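
    The sketch below is a toy illustration of working in rank space rather than with a single mean anomaly: each disaggregated value is mapped onto the observed fine-scale value of equal rank (an empirical quantile mapping). It mimics the motivation of the rank-based approach but is not the full rank-BCSD procedure.

```python
# Toy rank-space correction: replace each disaggregated value by the observed
# fine-scale value of equal rank, so the corrected distribution matches the
# observed one while preserving the ordering of the disaggregated series.
import numpy as np

def rank_space_correction(disagg, obs_fine):
    order = np.argsort(np.argsort(disagg))             # rank of each value (0..n-1)
    obs_sorted = np.sort(obs_fine)
    # Map ranks of the disaggregated series onto the observed quantiles.
    idx = np.round(order / (len(disagg) - 1) * (len(obs_fine) - 1)).astype(int)
    return obs_sorted[idx]

rng = np.random.default_rng(3)
obs_fine = rng.gamma(shape=0.8, scale=10.0, size=5000)     # "observed" precipitation
disagg = 0.7 * rng.gamma(shape=1.5, scale=6.0, size=5000)  # biased disaggregation
corrected = rank_space_correction(disagg, obs_fine)
print(np.percentile(obs_fine, [50, 95, 99]))
print(np.percentile(corrected, [50, 95, 99]))              # distributions now match
```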

  16. Quantified Risk Ranking Model for Condition-Based Risk and Reliability Centered Maintenance

    Science.gov (United States)

    Chattopadhyaya, Pradip Kumar; Basu, Sushil Kumar; Majumdar, Manik Chandra

    2017-06-01

    In the recent past, the risk and reliability centered maintenance (RRCM) framework was introduced with a shift in the methodological focus from reliability and probabilities (expected values) to reliability, uncertainty and risk. In this paper the authors explain a novel methodology for risk quantification and for ranking critical items to prioritize maintenance actions on the basis of condition-based risk and reliability centered maintenance (CBRRCM). The critical items are identified through criticality analysis of the RPN values of the items of a system, and the maintenance significant precipitating factors (MSPF) of the items are evaluated. The criticality of risk is assessed using three risk coefficients. The likelihood risk coefficient treats the probability as a fuzzy number. The abstract risk coefficient deduces risk influenced by uncertainty and sensitivity, besides other factors. The third, hazardous risk coefficient accounts for anticipated hazards which may occur in the future; its risk is deduced from criteria of consequences on safety, environment, maintenance and economic risks with corresponding costs for the consequences. The characteristic values of all three risk coefficients are obtained with a particular test. With a few more tests on the system, the values may change significantly within the controlling range of each coefficient; hence 'random number simulation' is used to obtain one distinctive value for each coefficient. The risk coefficients are statistically added to obtain the final risk coefficient of each critical item, and the final rankings of the critical items are then estimated. The prioritization and ranking of critical items using the developed mathematical model for risk assessment should be useful for optimizing financial losses and the timing of maintenance actions.

  17. Semantic descriptor ranking: a quantitative method for evaluating qualitative verbal reports of visual cognition in the laboratory or the clinic.

    Science.gov (United States)

    Maestri, Matthew; Odel, Jeffrey; Hegdé, Jay

    2014-01-01

    For scientific, clinical, and machine learning purposes alike, it is desirable to quantify the verbal reports of high-level visual percepts. Methods to do this simply do not exist at present. Here we propose a novel methodological principle to help fill this gap, and provide empirical evidence designed to serve as the initial "proof" of this principle. In the proposed method, subjects view images of real-world scenes and describe, in their own words, what they saw. The verbal description is independently evaluated by several evaluators. Each evaluator assigns a rank score to the subject's description of each visual object in each image using a novel ranking principle, which takes advantage of the well-known fact that semantic descriptions of real life objects and scenes can usually be rank-ordered. Thus, for instance, "animal," "dog," and "retriever" can be regarded as increasingly finer-level, and therefore higher ranking, descriptions of a given object. These numeric scores can preserve the richness of the original verbal description, and can be subsequently evaluated using conventional statistical procedures. We describe an exemplar implementation of this method and empirical data that show its feasibility. With appropriate future standardization and validation, this novel method can serve as an important tool to help quantify the subjective experience of the visual world. In addition to being a novel, potentially powerful testing tool, our method also represents, to our knowledge, the only available method for numerically representing verbal accounts of real-world experience. Given its minimal requirements, i.e., a verbal description and the ground truth that elicited the description, our method has a wide variety of potential real-world applications.

  18. Semantic Descriptor Ranking: A Quantitative Method for Evaluating Qualitative Verbal Reports of Visual Cognition in the Laboratory or the Clinic

    Directory of Open Access Journals (Sweden)

    Matthew eMaestri

    2014-03-01

    Full Text Available For scientific, clinical, and machine learning purposes alike, it is desirable to quantify the verbal reports of high-level visual percepts. Methods to do this simply do not exist at present. Here we propose a novel methodological principle to help fill this gap, and provide empirical evidence designed to serve as the initial ‘proof’ of this principle. In the proposed method, subjects view images of real-world scenes and describe, in their own words, what they saw. The verbal description is independently evaluated by several evaluators. Each evaluator assigns a rank score to the subject’s description of each visual object in each image using a novel ranking principle, which takes advantage of the well-known fact that semantic descriptions of real-life objects and scenes can usually be rank-ordered. Thus, for instance, ‘animal’, ‘dog’, and ‘retriever’ can be regarded as increasingly finer-level, and therefore higher-ranking, descriptions of a given object. These numeric scores can preserve the richness of the original verbal description, and can be subsequently evaluated using conventional statistical procedures. We describe an exemplar implementation of this method and empirical data that show its feasibility. With appropriate future standardization and validation, this novel method can serve as an important tool to help quantify the subjective experience of the visual world. In addition to being a novel, potentially powerful testing tool, our method also represents, to our knowledge, the only available method for numerically representing verbal accounts of real-world experience. Given its minimal requirements, i.e., a verbal description and the ground truth that elicited the description, our method has a wide variety of potential real-world applications.

  19. A Novel Multi-Sensor Environmental Perception Method Using Low-Rank Representation and a Particle Filter for Vehicle Reversing Safety

    Directory of Open Access Journals (Sweden)

    Zutao Zhang

    2016-06-01

    Full Text Available Environmental perception and information processing are two key steps of active safety for vehicle reversing. Single-sensor environmental perception cannot meet the need for vehicle reversing safety due to its low reliability. In this paper, we present a novel multi-sensor environmental perception method using low-rank representation and a particle filter for vehicle reversing safety. The proposed system consists of four main steps, namely multi-sensor environmental perception, information fusion, target recognition and tracking using low-rank representation and a particle filter, and vehicle reversing speed control modules. First of all, the multi-sensor environmental perception module, based on a binocular-camera system and ultrasonic range finders, obtains the distance data for obstacles behind the vehicle when the vehicle is reversing. Secondly, the information fusion algorithm using an adaptive Kalman filter is used to process the data obtained with the multi-sensor environmental perception module, which greatly improves the robustness of the sensors. Then the framework of a particle filter and low-rank representation is used to track the main obstacles. The low-rank representation is used to optimize an objective particle template that has the smallest L-1 norm. Finally, the electronic throttle opening and automatic braking are controlled by the proposed vehicle reversing control strategy prior to any potential collision, making the reversing control safer and more reliable. The final system simulation and practical testing results demonstrate the validity of the proposed multi-sensor environmental perception method using low-rank representation and a particle filter for vehicle reversing safety.

  20. Rank Dynamics

    Science.gov (United States)

    Gershenson, Carlos

    Studies of rank distributions have been popular for decades, especially since the work of Zipf. For example, if we rank words of a given language by use frequency (most used word in English is 'the', rank 1; second most common word is 'of', rank 2), the distribution can be approximated roughly with a power law. The same applies for cities (most populated city in a country ranks first), earthquakes, metabolism, the Internet, and dozens of other phenomena. We recently proposed "rank diversity" to measure how ranks change in time, using the Google Books Ngram dataset. Studying six languages between 1800 and 2009, we found that the rank diversity curves of languages are universal, well fitted by a sigmoid on a log-normal scale. We are studying several other datasets (sports, economies, social systems, urban systems, earthquakes, artificial life). Rank diversity seems to be universal, independent of the shape of the rank distribution. I will present our work in progress towards a general description of the features of rank change in time, along with simple models which reproduce it.
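
    One common formalisation of rank diversity is sketched below: for each rank, count how many distinct elements ever occupy it across the observed time points and normalise by the number of time points. This is offered as an illustration of the concept rather than the exact definition used in the talk.

```python
# Rank diversity sketch: how many distinct elements occupy each rank over time.
from collections import defaultdict

def rank_diversity(rankings):
    """rankings: list of lists, each an ordered ranking of elements at one time."""
    seen = defaultdict(set)
    for ranking in rankings:
        for k, element in enumerate(ranking, start=1):
            seen[k].add(element)
    T = len(rankings)
    return {k: len(elements) / T for k, elements in sorted(seen.items())}

# Toy example: the top rank is stable, lower ranks churn more.
rankings = [["the", "of", "and", "cat"],
            ["the", "of", "dog", "and"],
            ["the", "and", "of", "bird"],
            ["the", "of", "and", "fish"]]
print(rank_diversity(rankings))   # rank 1 -> 0.25 (one word across four snapshots)
```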

  1. Modeling ranking, time trade-off, and visual analog scale values for EQ-5D health states: a review and comparison of methods.

    Science.gov (United States)

    Craig, Benjamin M; Busschbach, Jan J V; Salomon, Joshua A

    2009-06-01

    There is rising interest in eliciting health state valuations using rankings. Due to their relative simplicity, ordinal measurement methods may offer an attractive practical alternative to cardinal methods, such as time trade-off (TTO) and visual analog scale (VAS). In this article, we explore alternative models for estimating cardinal health state values from rank responses in a unique multicountry database. We highlight an estimation challenge pertaining to health states just below perfect health (the "nonoptimal gap") and propose an analytic solution to ameliorate this problem. Using a standardized protocol developed by the EuroQol Group, rank, VAS, and TTO responses were collected for 43 health states in 8 countries: Slovenia, Argentina, Denmark, Japan, Netherlands, Spain, United Kingdom, and United States, yielding a sample of 179,431 state responses from 11,483 subjects. States were described using the EQ-5D system, which allows for 3 different possible levels on 5 different dimensions of health. We estimated conditional logit and probit regression models for rank responses. The regressions included 17 health state attribute variables reflecting specific levels on each dimension and counts of different levels across dimensions. This flexible specification accommodates previously published valuation models, such as models applied in the United Kingdom and United States. In addition to fitting standard conditional logit and probit models, which assume equal variance across health states (homoscedasticity), we examined a heteroscedastic probit model that assumes no variance for the 2 points anchoring the scale ("optimal health" and "dead") and relaxes the equal-variance assumption for all other states. Rank-based predictions for the 243 unique states defined by the EQ-5D system were compared with predictions from conventional linear models fitted to TTO and VAS responses. By construction, the TTO and VAS models assume no variance around the anchoring states of
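
    The sketch below shows the rank-ordered ("exploded") logit likelihood that underlies conditional logit models for ranking data: an observed ranking is treated as a sequence of multinomial-logit choices over the states not yet ranked. The utilities are arbitrary illustrative numbers, not an EQ-5D valuation model.

```python
# Rank-ordered (exploded) logit log-likelihood for a single observed ranking.
import numpy as np

def exploded_logit_loglik(utilities, ranking):
    """Log-probability of `ranking` (best to worst, indices into utilities)."""
    remaining = list(range(len(utilities)))
    ll = 0.0
    for chosen in ranking:
        v = np.array([utilities[i] for i in remaining])
        ll += utilities[chosen] - np.log(np.exp(v).sum())   # logit choice over the rest
        remaining.remove(chosen)
    return ll

utilities = np.array([1.2, 0.3, -0.5, -1.1])    # latent values of 4 hypothetical states
print(exploded_logit_loglik(utilities, ranking=[0, 1, 2, 3]))  # consistent ranking
print(exploded_logit_loglik(utilities, ranking=[3, 2, 1, 0]))  # reversed, less likely
```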

  2. Ranking in Swiss system chess team tournaments

    OpenAIRE

    Csató, László

    2015-01-01

    The paper uses paired comparison-based scoring procedures for ranking the participants of a Swiss system chess team tournament. We present the main challenges of ranking in Swiss system, the features of individual and team competitions as well as the failures of official lexicographical orders. The tournament is represented as a ranking problem, our model is discussed with respect to the properties of the score, generalized row sum and least squares methods. The proposed procedure is illustra...

  3. Correlated Spatio-Temporal Data Collection in Wireless Sensor Networks Based on Low Rank Matrix Approximation and Optimized Node Sampling

    Directory of Open Access Journals (Sweden)

    Xinglin Piao

    2014-12-01

    Full Text Available The emerging low rank matrix approximation (LRMA) method provides an energy efficient scheme for data collection in wireless sensor networks (WSNs) by randomly sampling a subset of sensor nodes for data sensing. However, the existing LRMA-based methods generally underutilize the spatial or temporal correlation of the sensing data, resulting in uneven energy consumption and thus shortening the network lifetime. In this paper, we propose a correlated spatio-temporal data collection method for WSNs based on LRMA. In the proposed method, both the temporal consistency and the spatial correlation of the sensing data are simultaneously integrated under a new LRMA model. Moreover, the network energy consumption issue is considered in the node sampling procedure. We use the Gini index to measure both the spatial distribution of the selected nodes and the evenness of the network energy status, then formulate and solve an optimization problem to achieve optimized node sampling. The proposed method is evaluated on both simulated and real wireless networks and compared with state-of-the-art methods. The experimental results show the proposed method efficiently reduces the energy consumption of the network and prolongs the network lifetime with high data recovery accuracy and good stability.
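
    The generic low-rank recovery step is sketched below for a sensing matrix (sensors x time) observed only at randomly sampled entries, assuming the rank is known; the spatio-temporal regularisation and Gini-index-based node sampling of the paper are not reproduced.

```python
# Generic low-rank matrix completion for sparsely sampled sensor readings:
# alternate rank truncation with re-imposing the observed entries.
import numpy as np

def complete_lowrank(M_obs, mask, rank=2, n_iter=200):
    """Fill unobserved entries assuming the full matrix has the given low rank."""
    X = np.where(mask, M_obs, 0.0)
    for _ in range(n_iter):
        U, s, Vh = np.linalg.svd(X, full_matrices=False)
        X = (U[:, :rank] * s[:rank]) @ Vh[:rank]      # project onto low-rank matrices
        X[mask] = M_obs[mask]                         # re-impose the observed readings
    return X

rng = np.random.default_rng(4)
# Synthetic rank-2 sensing field: 30 sensors x 100 time steps.
M = rng.standard_normal((30, 2)) @ rng.standard_normal((2, 100))
mask = rng.random(M.shape) < 0.3                      # only ~30% of entries observed
M_hat = complete_lowrank(M, mask)
print(np.linalg.norm(M_hat - M) / np.linalg.norm(M))  # relative recovery error
```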

  4. Comparison of Document Index Graph Using TextRank and HITS Weighting Method in Automatic Text Summarization

    Science.gov (United States)

    Hadyan, Fadhlil; Shaufiah; Arif Bijaksana, Moch.

    2017-01-01

    Automatic summarization helps a reader grasp the core information of a long text instantly by summarizing the text automatically. Many summarization systems have already been developed, but many problems remain in those systems. This work proposes a summarization method using a document index graph. The method adapts the PageRank and HITS formulas, originally used to assess web pages, to assess the words and sentences of a text document. The expected outcome is a system that can summarize a single document by utilizing a document index graph with TextRank and HITS to automatically improve the quality of the summary results.
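
    A minimal TextRank-style sketch is given below: sentences are linked by word overlap and scored with PageRank (HITS could be run on the same graph with networkx.hits). The similarity measure and graph construction are simplified choices, not the document index graph of the paper.

```python
# TextRank-style extractive summarisation: rank sentences with PageRank on a
# word-overlap similarity graph and return the top-scoring ones.
import networkx as nx

def textrank_summary(sentences, top_n=2):
    def overlap(a, b):
        wa, wb = set(a.lower().split()), set(b.lower().split())
        return len(wa & wb) / (1 + len(wa | wb))
    g = nx.Graph()
    g.add_nodes_from(range(len(sentences)))
    for i in range(len(sentences)):
        for j in range(i + 1, len(sentences)):
            w = overlap(sentences[i], sentences[j])
            if w > 0:
                g.add_edge(i, j, weight=w)
    scores = nx.pagerank(g, weight="weight")
    best = sorted(scores, key=scores.get, reverse=True)[:top_n]
    return [sentences[i] for i in sorted(best)]        # keep original sentence order

docs = ["Automatic summarization extracts the core information of a long text.",
        "A document index graph links sentences through the words they share.",
        "PageRank and HITS score each sentence from the structure of that graph.",
        "The highest scoring sentences form the summary of the document."]
print(textrank_summary(docs))
```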

  5. A new betweenness centrality measure based on an algorithm for ranking the nodes of a network

    OpenAIRE

    Agryzkov, Taras; Oliver, Jose L.; Tortosa Grau, Leandro; Vicent, Jose F.

    2014-01-01

    We propose and discuss a new centrality index for urban street patterns represented as networks in geographical space. This centrality measure, that we call ranking-betweenness centrality, combines the idea behind the random-walk betweenness centrality measure and the idea of ranking the nodes of a network produced by an adapted PageRank algorithm. We initially use a PageRank algorithm in which we are able to transform some information of the network that we want to analyze into numerical val...

  6. Fast live cell imaging at nanometer scale using annihilating filter-based low-rank Hankel matrix approach

    Science.gov (United States)

    Min, Junhong; Carlini, Lina; Unser, Michael; Manley, Suliana; Ye, Jong Chul

    2015-09-01

    Localization microscopy such as STORM/PALM can achieve nanometer-scale spatial resolution by iteratively localizing fluorescent molecules. It has been shown that imaging densely activated molecules can improve temporal resolution, which has been considered a major limitation of localization microscopy. However, such higher-density imaging requires advanced localization algorithms to deal with overlapping point spread functions (PSFs). To address this technical challenge, we previously developed a localization algorithm called FALCON using a quasi-continuous localization model with a sparsity prior in image space, and demonstrated it for both 2D and 3D live cell imaging; however, it still leaves several aspects to be improved. Here, we propose a new localization algorithm using an annihilating filter-based low-rank Hankel structured matrix approach (ALOHA). According to the ALOHA principle, sparsity in the image domain implies the existence of a rank-deficient Hankel structured matrix in Fourier space. Thanks to this fundamental duality, the new algorithm can perform data-adaptive PSF estimation and deconvolution of the Fourier spectrum, followed by truly grid-free localization using a spectral estimation technique. Furthermore, all these optimizations are conducted in Fourier space only. We validated the performance of the new method with numerical experiments and a live cell imaging experiment. The results confirmed higher localization performance in both experiments in terms of accuracy and detection rate.

  7. Ecological vulnerability in wildlife: application of a species-ranking method to food chains and habitats.

    Science.gov (United States)

    De Lange, Hendrika J; Lahr, Joost; Van der Pol, Joost J C; Faber, Jack H

    2010-12-01

    Nature development in The Netherlands is often planned on contaminated soils or sediments. This contamination may present a risk for wildlife species desired at those nature development sites and must be assessed by specific risk assessment methods. In a previous study, we developed a method to predict ecological vulnerability in wildlife species by using autecological data and expert judgment; in the current study, this method is further extended to assess ecological vulnerability of food chains and terrestrial and aquatic habitats typical for The Netherlands. The method is applied to six chemicals: Cd, Cu, Zn, dichlorodiphenyltrichloroethane, chlorpyrifos, and ivermectin. The results indicate that species in different food chains differ in vulnerability, with earthworm-based food chains the most vulnerable. Within and between food chains, vulnerability varied with habitat, particularly at low trophic levels. The concept of habitat vulnerability was applied to a case study of four different habitat types in floodplains contaminated with cadmium and zinc along the river Dommel, The Netherlands. The alder floodplain forest habitat contained the most vulnerable species. The differences among habitats were significant for Cd. We further conclude that the method has good potential for application in mapping of habitat vulnerability. © 2010 SETAC.

  8. Ranking serials in oceanography: An analysis based on the Indian contributions and their citations

    Digital Repository Service at National Institute of Oceanography (India)

    Tapaswi, M.P.; Maheswarappa, B.S.

    negative correlation with a marginal difference of -0.214 is observed between these two rank lists. This difference is attributed to studies from different geographical areas in these two rank sets. Bradford graphs for all datasets, but one, showed typical...

  9. Google goes cancer: improving outcome prediction for cancer patients by network-based ranking of marker genes.

    Directory of Open Access Journals (Sweden)

    Christof Winter

    Full Text Available Predicting the clinical outcome of cancer patients based on the expression of marker genes in their tumors has received increasing interest in the past decade. Accurate predictors of outcome and response to therapy could be used to personalize and thereby improve therapy. However, state of the art methods used so far often found marker genes with limited prediction accuracy, limited reproducibility, and unclear biological relevance. To address this problem, we developed a novel computational approach to identify genes prognostic for outcome that couples gene expression measurements from primary tumor samples with a network of known relationships between the genes. Our approach ranks genes according to their prognostic relevance using both expression and network information in a manner similar to Google's PageRank. We applied this method to gene expression profiles which we obtained from 30 patients with pancreatic cancer, and identified seven candidate marker genes prognostic for outcome. Compared to genes found with state of the art methods, such as Pearson correlation of gene expression with survival time, we improve the prediction accuracy by up to 7%. Accuracies were assessed using support vector machine classifiers and Monte Carlo cross-validation. We then validated the prognostic value of our seven candidate markers using immunohistochemistry on an independent set of 412 pancreatic cancer samples. Notably, signatures derived from our candidate markers were independently predictive of outcome and superior to established clinical prognostic factors such as grade, tumor size, and nodal status. As the amount of genomic data of individual tumors grows rapidly, our algorithm meets the need for powerful computational approaches that are key to exploit these data for personalized cancer therapies in clinical practice.

  10. In Silico target fishing: addressing a “Big Data” problem by ligand-based similarity rankings with data fusion

    Science.gov (United States)

    2014-01-01

    Background Ligand-based in silico target fishing can be used to identify the potential interacting target of bioactive ligands, which is useful for understanding the polypharmacology and safety profile of existing drugs. The underlying principle of the approach is that known bioactive ligands can be used as reference to predict the targets for a new compound. Results We tested a pipeline enabling large-scale target fishing and drug repositioning, based on simple fingerprint similarity rankings with data fusion. A large library containing 533 drug relevant targets with 179,807 active ligands was compiled, where each target was defined by its ligand set. For a given query molecule, its target profile is generated by similarity searching against the ligand sets assigned to each target, for which individual searches utilizing multiple reference structures are then fused into a single ranking list representing the potential target interaction profile of the query compound. The proposed approach was validated by 10-fold cross validation and two external tests using data from DrugBank and Therapeutic Target Database (TTD). The use of the approach was further demonstrated with some examples concerning the drug repositioning and drug side-effects prediction. The promising results suggest that the proposed method is useful for not only finding promiscuous drugs for their new usages, but also predicting some important toxic liabilities. Conclusions With the rapid increasing volume and diversity of data concerning drug related targets and their ligands, the simple ligand-based target fishing approach would play an important role in assisting future drug design and discovery. PMID:24976868
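
    A minimal sketch of similarity-based target fishing with data fusion is shown below: the query fingerprint is compared with each target's ligand set by Tanimoto similarity and the per-ligand similarities are fused with the maximum. The fingerprints are toy random bit vectors, and the fingerprint type and fusion rule used in the paper may differ.

```python
# Ligand-based target fishing sketch: Tanimoto similarity against each
# target's ligand set, MAX data fusion, then ranking of targets.
import numpy as np

def tanimoto(a, b):
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return inter / union if union else 0.0

def rank_targets(query_fp, target_ligands):
    """target_ligands: dict target -> boolean array of ligand fingerprints (n x bits)."""
    scores = {t: max(tanimoto(query_fp, fp) for fp in fps)   # MAX data fusion
              for t, fps in target_ligands.items()}
    return sorted(scores, key=scores.get, reverse=True), scores

rng = np.random.default_rng(5)
targets = {f"T{i}": rng.random((50, 256)) < 0.1 for i in range(5)}  # toy ligand sets
query = targets["T3"][0].copy()
query[:8] = ~query[:8]                            # slightly perturbed known T3 ligand
ranking, scores = rank_targets(query, targets)
print(ranking[0], round(scores[ranking[0]], 2))   # T3 expected at the top
```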

  11. Recognition and ranking of customer financing methods in international trade: An approach to customers’ satisfaction in the Export Development Bank of Iran

    Directory of Open Access Journals (Sweden)

    Fatemeh Heidari

    2015-05-01

    Full Text Available One of the most important developments of the present era is the rapid growth of international trade and the participation of countries in its different areas, which has posed various new challenges in recent years. These challenges are mostly managed with the intention of achieving two objectives: economic development and entry into different areas of international economics and global trade. Nowadays, customer satisfaction is one of the most important principles in the competition among customer-oriented organizations, because customers are the main driver of commercial organizations seeking improvement and progress. Therefore, in such a market-based economy, the banking system, which is one of the main constituents of every country's economy and bears great responsibilities, is no exception to this principle. The purpose of this study was to identify and rank customer financing methods in international trade with a focus on customer satisfaction. To achieve this purpose, the research questions were tested using a descriptive survey method. The statistical population consisted of all staff working in the branches of the Export Development Bank of Iran; the sample, selected by cluster random sampling, consisted of 217 staff members. The data gathering instrument was a questionnaire of 16 questions with a reliability of 0.85. According to the findings, open account, prepayment, letter of credit, and documentary collection are the customer financing methods used in international trade. Based on the TOPSIS ranking, open account and prepayment obtained the higher ranks among traditional payment methods, and letter of credit and documentary collection took priority over
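
    For reference, a minimal TOPSIS sketch is given below; the decision matrix, weights and benefit flags are invented for illustration and are not the questionnaire data of the study.

```python
# Minimal TOPSIS: rank alternatives by relative closeness to the ideal solution.
import numpy as np

def topsis(matrix, weights, benefit):
    X = np.asarray(matrix, float)
    X = X / np.linalg.norm(X, axis=0)                  # vector normalisation
    V = X * np.asarray(weights, float)
    ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
    anti = np.where(benefit, V.min(axis=0), V.max(axis=0))
    d_pos = np.linalg.norm(V - ideal, axis=1)
    d_neg = np.linalg.norm(V - anti, axis=1)
    return d_neg / (d_pos + d_neg)                     # closeness coefficient, higher is better

alternatives = ["open account", "prepayment", "letter of credit", "documentary collection"]
matrix = [[8, 6, 7],        # illustrative criteria, e.g. cost, speed, risk coverage
          [7, 8, 5],
          [6, 5, 9],
          [5, 6, 6]]
scores = topsis(matrix, weights=[0.5, 0.2, 0.3], benefit=[True, True, True])
print(sorted(zip(alternatives, scores), key=lambda t: -t[1]))
```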

  12. A comparative study on the ranking performance of some multi-criteria decision-making methods for industrial robot selection

    Directory of Open Access Journals (Sweden)

    Vijay Manikrao Athawale

    2011-10-01

    Full Text Available Industrial robots are mainly employed to perform repetitive and hazardous production jobs, multi-shift operations, etc., to reduce delivery time, improve the work environment, lower production cost and even increase the product range to fulfill customers' needs. When a choice is to be made from among several alternative robots for a given industrial application, it is necessary to compare their performance characteristics in a decisive way. As the industrial robot selection problem involves multiple conflicting criteria and a finite set of candidate alternatives, different multi-criteria decision-making (MCDM) methods can be effectively used to solve this type of problem. In this paper, the ten most popular MCDM methods are considered and their relative performance is compared with respect to the rankings of alternative robots for an industrial pick-and-place operation. It is observed that all these methods give almost the same rankings of the alternative robots, although the performance of the WPM, TOPSIS and GRA methods is slightly better than that of the others. It can be concluded that for a given industrial robot selection problem, more attention should be paid to the proper selection of the relevant criteria and alternatives than to choosing the most appropriate MCDM method.

  13. An Integrated MCDM Method in Ranking BSC Perspectives and key Performance Indicators (KPIs

    Directory of Open Access Journals (Sweden)

    Mohsen Alvandi

    2012-04-01

    Full Text Available The balanced scorecard (BSC) approach is an effective technique for performance evaluation. BSC can better reflect the dependence and feedback problems of each factor in real-world situations. This study aims at developing a set of appropriate key performance indicators according to the BSC approach for SAPCO using multiple criteria decision making (MCDM) methods. We derive key performance indicators from literature reviews and the ideas of experts at SAPCO, which is one of the biggest vehicle spare-part suppliers in Iran. The proposed study uses the decision making trial and evaluation laboratory (DEMATEL) and the analytic network process (ANP), respectively, to measure the causal relationships between the perspectives as well as their relative weights. The results based on the ANP method show that ‘‘Customer’’ is the most influential factor, while internal process, financial, and learning and growth rank second to fourth. The three most important key performance indicators are total price of parts, customer satisfaction and lack of parts in production.

  14. An empirical investigation on ranking financial risk factors using AHP method

    Directory of Open Access Journals (Sweden)

    Hassan Ghodrati

    2014-05-01

    Full Text Available This paper determines and ranks financial risk factors in Iranian corporations using the analytical hierarchy process (AHP). The present research includes one main question and four sub-questions. Its population includes managers and production and financial personnel of large corporations active on the Tehran Stock Exchange, who were selected to assess the importance and weight of economic risk indices. Large corporations were identified through the Companies Registration Organization in Tehran Province, which lists 120 such corporations. The results indicate that financing risk has the highest priority, followed by credit risk, liquidity risk, inflation risk and exchange risk. Among the risks associated with financing, the risk of profit per share has the highest priority, followed by the risk of divisional profit per share, the risk of recession or boom, and the risk of an increasing partial-pay profit rate. In terms of credit risk, loan risk has the highest priority, followed by the risk of inability to repay loans and interest. For liquidity risk, demand has been the most important factor, followed by rules and regulations and inflation risk. In terms of inflation, producer price risk has been the most important factor, followed by consumer price risk, gross domestic product and producer price risk. Finally, among the factors influencing exchange risk, export-related issues are considered the most important.
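
    A minimal AHP sketch is shown below: priority weights are taken from the principal eigenvector of a pairwise comparison matrix and checked with a consistency ratio. The comparison matrix is a made-up example over the five risk factors named above, not the study's survey data.

```python
# AHP sketch: priority weights from the principal eigenvector of a pairwise
# comparison matrix, plus Saaty's consistency ratio.
import numpy as np

def ahp_weights(pairwise):
    A = np.asarray(pairwise, float)
    eigvals, eigvecs = np.linalg.eig(A)
    k = np.argmax(eigvals.real)                        # principal eigenvalue
    w = np.abs(eigvecs[:, k].real)
    w = w / w.sum()
    n = A.shape[0]
    ci = (eigvals[k].real - n) / (n - 1)               # consistency index
    ri = {3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24}[n]       # Saaty's random index
    return w, ci / ri                                  # weights, consistency ratio

# Rows/columns: financing, credit, liquidity, inflation, exchange risk (illustrative).
pairwise = [[1,   2,   3,   4,   5],
            [1/2, 1,   2,   3,   4],
            [1/3, 1/2, 1,   2,   3],
            [1/4, 1/3, 1/2, 1,   2],
            [1/5, 1/4, 1/3, 1/2, 1]]
w, cr = ahp_weights(pairwise)
print(np.round(w, 3), round(cr, 3))                    # CR < 0.1 means acceptably consistent
```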

  15. Behavioral economics and socio-economics journals: A citation-based ranking

    OpenAIRE

    Azar, Ofer H.

    2006-01-01

    Journal quality is a major consideration for authors, readers, and promotion and tenure committees, among others. Unfortunately, most behavioral economics and socio-economics journals are not included in published rankings or in Journal Citation Reports. Consequently, no objective ranking of these journals exists. To address this need, a list of journals in behavioral economics and socio-economics was compiled, and the number of articles that cited each journal was recorded for the periods 20...

  16. Personalized PageRank Clustering: A graph clustering algorithm based on random walks

    Science.gov (United States)

    A. Tabrizi, Shayan; Shakery, Azadeh; Asadpour, Masoud; Abbasi, Maziar; Tavallaie, Mohammad Ali

    2013-11-01

    Graph clustering has been an essential part in many methods and thus its accuracy has a significant effect on many applications. In addition, exponential growth of real-world graphs such as social networks, biological networks and electrical circuits demands clustering algorithms with nearly-linear time and space complexity. In this paper we propose Personalized PageRank Clustering (PPC) that employs the inherent cluster exploratory property of random walks to reveal the clusters of a given graph. We combine random walks and modularity to precisely and efficiently reveal the clusters of a graph. PPC is a top-down algorithm so it can reveal inherent clusters of a graph more accurately than other nearly-linear approaches that are mainly bottom-up. It also gives a hierarchy of clusters that is useful in many applications. PPC has a linear time and space complexity and has been superior to most of the available clustering algorithms on many datasets. Furthermore, its top-down approach makes it a flexible solution for clustering problems with different requirements.

  17. A frequency-based technique to improve the spelling suggestion rank in medical queries.

    Science.gov (United States)

    Crowell, Jonathan; Zeng, Qing; Ngo, Long; Lacroix, Eve-Marie

    2004-01-01

    There is an abundance of health-related information online, and millions of consumers search for such information. Spell checking is of crucial importance in returning pertinent results, so the authors propose a technique for increasing the effectiveness of spell-checking tools used for health-related information retrieval. A sample of incorrectly spelled medical terms was submitted to two different spell-checking tools, and the resulting suggestions, derived under two different dictionary configurations, were re-sorted according to how frequently each term appeared in log data from a medical search engine. Univariable analysis was carried out to assess the effect of each factor (spell-checking tool, dictionary type, re-sort, or no re-sort) on the probability of success. The factors that were statistically significant in the univariable analysis were then used in multivariable analysis to evaluate the independent effect of each of the factors. The re-sorted suggestions proved to be significantly more accurate than the original list returned by the spell-checking tool. The odds of finding the correct suggestion in the number one rank were increased by 63% after re-sorting using the authors' method. This effect was independent of both the dictionary and the spell-checking tools that were used. Using knowledge about the frequency of a given word's occurrence in the medical domain can significantly improve spelling correction for medical queries.
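
    The re-sorting step itself is simple to sketch: take the suggestion list from any spell checker and sort it by how often each candidate occurs in domain query logs, as below. The frequency counts are invented for illustration.

```python
# Frequency-based re-ranking of spell-checker suggestions using domain query-log counts.
def rerank_suggestions(suggestions, term_frequency):
    """Stable sort: ties in frequency keep the spell checker's original order."""
    return sorted(suggestions, key=lambda term: -term_frequency.get(term, 0))

suggestions = ["diabetics", "diabetes", "dietetics"]     # raw spell-checker output
query_log_freq = {"diabetes": 51230, "diabetics": 2210, "dietetics": 180}
print(rerank_suggestions(suggestions, query_log_freq))   # 'diabetes' moves to rank 1
```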

  18. PosMed: ranking genes and bioresources based on Semantic Web Association Study

    Science.gov (United States)

    Makita, Yuko; Kobayashi, Norio; Yoshida, Yuko; Doi, Koji; Mochizuki, Yoshiki; Nishikata, Koro; Matsushima, Akihiro; Takahashi, Satoshi; Ishii, Manabu; Takatsuki, Terue; Bhatia, Rinki; Khadbaatar, Zolzaya; Watabe, Hajime; Masuya, Hiroshi; Toyoda, Tetsuro

    2013-01-01

    Positional MEDLINE (PosMed; http://biolod.org/PosMed) is a powerful Semantic Web Association Study engine that ranks biomedical resources such as genes, metabolites, diseases and drugs, based on the statistical significance of associations between user-specified phenotypic keywords and resources connected directly or inferentially through a Semantic Web of biological databases such as MEDLINE, OMIM, pathways, co-expressions, molecular interactions and ontology terms. Since 2005, PosMed has long been used for in silico positional cloning studies to infer candidate disease-responsible genes existing within chromosomal intervals. PosMed is redesigned as a workbench to discover possible functional interpretations for numerous genetic variants found from exome sequencing of human disease samples. We also show that the association search engine enhances the value of mouse bioresources because most knockout mouse resources have no phenotypic annotation, but can be associated inferentially to phenotypes via genes and biomedical documents. For this purpose, we established text-mining rules to the biomedical documents by careful human curation work, and created a huge amount of correct linking between genes and documents. PosMed associates any phenotypic keyword to mouse resources with 20 public databases and four original data sets as of May 2013. PMID:23761449

  19. Rank-based biomarker index to assess cadmium ecotoxicity on the earthworm Eisenia andrei.

    Science.gov (United States)

    Panzarino, O; Hyršl, P; Dobeš, P; Vojtek, L; Vernile, P; Bari, G; Terzano, R; Spagnuolo, M; de Lillo, E

    2016-02-01

    A proper soil risk assessment needs to estimate the processes that affect the fate and behaviour of a contaminant, which are influenced by soil biotic and abiotic components. For this reason, the measurement of biomarkers in soil bioindicator organisms, such as earthworms, has recently received increasing attention. In this study, the earthworm Eisenia andrei was used to assess the pollutant-induced stress syndrome after 14 d of exposure to sublethal concentrations of Cd (10 or 100 μg g(-1)) in OECD soil. Cadmium bioaccumulation and potential biomarkers such as catalase (CAT), hydrogen peroxide (H2O2), glutathione-S-transferase (GST), malondialdehyde (MDA), phenoloxidase (PO), metallothioneins (MTs) and genotoxic damage were determined. Results suggested that exposure to 10 and 100 μg g(-1) Cd significantly increased Cd bioaccumulation, MTs and MDA; 100 μg g(-1) Cd contamination produced significantly higher values of H2O2 content and PO activity; CAT activity was inhibited at the higher concentration, while GST and the Comet assay did not show any significant differences from the control. The rank-based biomarker index showed that both contaminated soils had an effect on the earthworms and allowed the ecotoxicological relevance of this battery of biomarkers to be validated for a promising integrated multi-marker approach in soil monitoring and assessment. Copyright © 2015 Elsevier Ltd. All rights reserved.

  20. Ranking REACH registered neutral, ionizable and ionic organic chemicals based on their aquatic persistency and mobility.

    Science.gov (United States)

    Arp, H P H; Brown, T N; Berger, U; Hale, S E

    2017-07-19

    The contaminants that have the greatest chances of appearing in drinking water are those that are mobile enough in the aquatic environment to enter drinking water sources and persistent enough to survive treatment processes. Herein a screening procedure to rank neutral, ionizable and ionic organic compounds for being persistent and mobile organic compounds (PMOCs) is presented and applied to the list of industrial substances registered under the EU REACH legislation as of December 2014. This comprised 5155 identifiable, unique organic structures. The minimum cut-off criteria considered for PMOC classification herein are a freshwater half-life >40 days, which is consistent with the REACH definition of freshwater persistency, and an upper cut-off on log Doc (the organic carbon-water distribution coefficient). Experimental data were given the highest priority, followed by data from an array of available quantitative structure-activity relationships (QSARs), and as a third resort, an original Iterative Fragment Selection (IFS) QSAR. In total, 52% of the unique REACH structures met the minimum criteria to be considered a PMOC, and 21% achieved the highest PMOC ranking (half-life > 40 days, log Doc < 1.0 between pH 4-10). Only 9% of neutral substances received the highest PMOC ranking, compared to 30% of ionizable compounds and 44% of ionic compounds. Predicted hydrolysis products for all REACH parents (contributing 5043 additional structures) were found to have higher PMOC rankings than their parents, due to increased mobility but not persistence. The fewest experimental data available were for ionic compounds; therefore, their ranking is more uncertain than that of neutral and ionizable compounds. The most sensitive parameter for the PMOC ranking was freshwater persistency, which was also the parameter that QSARs performed the most poorly at predicting. Several prioritized drinking water contaminants in the EU and USA, and other contaminants of concern, were identified as PMOCs. This

  1. CONSRANK: a server for the analysis, comparison and ranking of docking models based on inter-residue contacts

    KAUST Repository

    Chermak, Edrisse

    2014-12-21

    Summary: Herein, we present CONSRANK, a web tool for analyzing, comparing and ranking protein–protein and protein–nucleic acid docking models, based on the conservation of inter-residue contacts and its visualization in 2D and 3D interactive contact maps.

  2. Local constructions of gender-based violence amongst IDPs in northern Uganda: analysis of archival data collected using a gender- and age-segmented participatory ranking methodology.

    Science.gov (United States)

    Ager, Alastair; Bancroft, Carolyn; Berger, Elizabeth; Stark, Lindsay

    2018-01-01

    Gender-based violence (GBV) is a significant problem in conflict-affected settings. Understanding local constructions of such violence is crucial to developing preventive and responsive interventions to address this issue. This study reports on a secondary analysis of archived data collected as part of formative qualitative work - using a group participatory ranking methodology (PRM) - informing research on the prevalence of GBV amongst IDPs in northern Uganda in 2006. Sixty-four PRM group discussions were held with women, with men, with girls (aged 14 to 18 years), and with boys (aged 14 to 18 years) selected on a randomized basis across four internally displaced persons (IDP) camps in Lira District. Discussions elicited problems facing women in the camps, and - through structured participatory methods - consensus ranking of their importance and narrative accounts explaining these judgments. Amongst forms of GBV faced by women, rape was ranked as the greatest concern amongst participants (with a mean problem rank of 3.4), followed by marital rape (mean problem rank of 4.5) and intimate partner violence (mean problem rank of 4.9). Girls ranked all forms of GBV as higher priority concerns than other participants. Discussions indicated that these forms of GBV were generally considered normalized within the camp. Gender roles and power, economic deprivation, and physical and social characteristics of the camp setting emerged as key explanatory factors in accounts of GBV prevalence, although these played out in different ways with respect to differing forms of violence. All groups acknowledged GBV to represent a significant threat - among other major concerns such as transportation, water, shelter, food and security - for women residing in the camps. Given evidence of the significantly higher risk in the camp of intimate partner violence and marital rape, the relative prominence of the issue of rape in all rankings suggests normalization of violence within the home

  3. An evaluation of the use of the dry-weight-rank and the comparative yield biomass estimation methods in paramo ecosystem research

    Directory of Open Access Journals (Sweden)

    Hofstede Robert G.M.

    1993-12-01

    Full Text Available The use of the combination of the semi-destructive comparative yield method for overall biomass estimation and the non-destructive dry-weight-rank method for studying botanical composition on a dry weight basis in an undisturbed páramo vegetation in the Los Nevados national park (Colombian Central Cordillera) was evaluated. These methods, developed for Australian production grasslands, were adapted for use in the páramo ecosystem. The average above-ground biomass in the area was estimated as 2864 g dry weight m-2 (s.d. 48), of which the bunchgrass Calamagrostis effusa contributed ca. 70%. When used with some adaptations, the comparative yield method seems suitable for biomass estimations in the páramo ecosystem. The estimation of botanical composition presented here with this method gave better results than the dry-weight-rank method, which had too many shortcomings for use in the complex páramo grassland ecosystem. The applicability of a combination of two methods for estimating overall biomass and botanical composition was evaluated in natural páramo vegetation in the Los Nevados National Natural Park (Central Cordillera of Colombia). The first method (comparative yield) determines overall biomass by partially destroying the vegetation of the sampling quadrats, and the second (dry-weight rank) determines botanical composition on a dry-weight basis without destroying the vegetation. These methods, initially developed for forage grasslands in Australia, were adapted for use in the páramo ecosystem. As a result, an above-ground biomass estimate of 2864 g dry weight m-2 (standard deviation 48) was obtained, to which the grass Calamagrostis effusa contributed 70%. It can be concluded that the comparative yield method is useful for estimating biomass in the páramo ecosystem, provided the adaptations mentioned above are used. On the other hand, the estimation of the botanical comp

  4. A fuzzy rule based remedial priority ranking system for contaminated sites.

    Science.gov (United States)

    Polat, Sener; Aksoy, Aysegul; Unlu, Kahraman

    2015-01-01

    Contaminated site remediation is generally difficult, time consuming, and expensive. As a result, ranking may aid in the efficient allocation of resources. In order to rank the priorities of contaminated sites, input parameters relevant to contaminant fate and transport and to exposure assessment should be as accurate as possible. Yet, in most cases these parameters are vague or imprecise. Most of the current remediation priority ranking methodologies overlook the vagueness in parameter values or do not go beyond assigning a contaminated site to a risk class. The main objective of this study is to develop an alternative remedial priority ranking system (RPRS) for contaminated sites in which vagueness in parameter values is considered. RPRS aims to evaluate potential human health risks due to contamination using sufficiently comprehensive and readily available parameters to describe the fate and transport of contaminants in air, soil, and groundwater. Vagueness in parameter values is handled by means of fuzzy set theory. A fuzzy expert system is proposed for the evaluation of contaminated sites, and software (ConSiteRPRS) is developed on the Microsoft Office Excel 2007 platform. Rankings are performed for hypothetical and real sites. Results show that RPRS is successful in distinguishing between the higher and lower risk cases. © 2014, National Ground Water Association.

  5. Reliability of journal impact factor rankings

    Science.gov (United States)

    Greenwood, Darren C

    2007-01-01

    Background Journal impact factors and their ranks are used widely by journals, researchers, and research assessment exercises. Methods Based on citations to journals in research and experimental medicine in 2005, Bayesian Markov chain Monte Carlo methods were used to estimate the uncertainty associated with these journal performance indicators. Results Intervals representing plausible ranges of values for journal impact factor ranks indicated that most journals cannot be ranked with great precision. Only the top and bottom few journals could place any confidence in their rank position. Intervals were wider and overlapping for most journals. Conclusion Decisions placed on journal impact factors are potentially misleading where the uncertainty associated with the measure is ignored. This article proposes that caution should be exercised in the interpretation of journal impact factors and their ranks, and specifically that a measure of uncertainty should be routinely presented alongside the point estimate. PMID:18005435

  6. Reliability of journal impact factor rankings

    Directory of Open Access Journals (Sweden)

    Greenwood Darren C

    2007-11-01

    Full Text Available Abstract Background Journal impact factors and their ranks are used widely by journals, researchers, and research assessment exercises. Methods Based on citations to journals in research and experimental medicine in 2005, Bayesian Markov chain Monte Carlo methods were used to estimate the uncertainty associated with these journal performance indicators. Results Intervals representing plausible ranges of values for journal impact factor ranks indicated that most journals cannot be ranked with great precision. Only the top and bottom few journals could place any confidence in their rank position. Intervals were wider and overlapping for most journals. Conclusion Decisions placed on journal impact factors are potentially misleading where the uncertainty associated with the measure is ignored. This article proposes that caution should be exercised in the interpretation of journal impact factors and their ranks, and specifically that a measure of uncertainty should be routinely presented alongside the point estimate.
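
    The uncertainty intervals described in these two records can be illustrated with a much simpler stand-in than the Bayesian Markov chain Monte Carlo analysis used in the study: a bootstrap over resampled citation counts. The journal names, citation totals and Poisson resampling model below are assumptions made purely for illustration.

```python
# Toy bootstrap of impact-factor rank uncertainty (a stand-in for the study's
# Bayesian MCMC approach). All journal data below are fictitious.
import numpy as np

rng = np.random.default_rng(0)

journals = ["J1", "J2", "J3", "J4", "J5"]
citations = np.array([420, 410, 300, 295, 120])   # total citations (fictitious)
items = np.array([200, 210, 150, 160, 80])        # citable items (fictitious)

def ranks_from_counts(cits):
    """Rank journals by citations per item; rank 1 = highest impact factor."""
    impact = cits / items
    order = np.argsort(-impact)
    ranks = np.empty_like(order)
    ranks[order] = np.arange(1, len(order) + 1)
    return ranks

# Resample each journal's citation total as Poisson noise around the observed
# count and recompute the rank each time.
B = 5000
boot = np.array([ranks_from_counts(rng.poisson(citations)) for _ in range(B)])

observed = ranks_from_counts(citations)
for j, name in enumerate(journals):
    lo, hi = np.percentile(boot[:, j], [2.5, 97.5])
    print(f"{name}: observed rank {observed[j]}, 95% interval [{int(lo)}, {int(hi)}]")
```

    Closely ranked journals (here J1/J2 and J3/J4) end up with wide, overlapping rank intervals, which is the qualitative point made in the abstract.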

  7. A method for the design and development of medical or health care information websites to optimize search engine results page rankings on Google

    National Research Council Canada - National Science Library

    Dunne, Suzanne; Cummins, Niamh Maria; Hannigan, Ailish; Shannon, Bill; Dunne, Colum; Cullen, Walter

    2013-01-01

    .... This study describes a method of evaluating material for new medical/health care websites, or for assessing those already in existence, which is correlated with higher rankings on Google's Search Engine Results Pages (SERPs...

  8. Reply to “Ranking filter methods for concentrating pathogens in lake water”

    Science.gov (United States)

    Bushon, Rebecca N.; Francy, Donna S.; Gallardo, Vicente J.; Lindquist, H.D. Alan; Villegas, Eric N.; Ware, Michael W.

    2013-01-01

    Accurately comparing filtration methods is indeed difficult. Our method (1) and the method described by Borchardt et al. for determining recoveries are both acceptable approaches; however, each is designed to achieve a different research goal. Our study was designed to compare recoveries of multiple microorganisms in surface-water samples. Because, in practice, water-matrix effects come into play throughout filtration, concentration, and detection processes, we felt it important to incorporate those effects into the recovery results.

  9. Identifying and ranking implicit leadership strategies to promote evidence-based practice implementation in addiction health services.

    Science.gov (United States)

    Guerrero, Erick G; Padwa, Howard; Fenwick, Karissa; Harris, Lesley M; Aarons, Gregory A

    2016-05-14

    Despite a solid research base supporting evidence-based practices (EBPs) for addiction treatment such as contingency management and medication-assisted treatment, these services are rarely implemented and delivered in community-based addiction treatment programs in the USA. As a result, many clients do not benefit from the most current and efficacious treatments, resulting in reduced quality of care and compromised treatment outcomes. Previous research indicates that addiction program leaders play a key role in supporting EBP adoption and use. The present study expanded on this previous work to identify strategies that addiction treatment program leaders report using to implement new practices. We relied on a staged and iterative mixed-methods approach to achieve the following four goals: (a) collect data using focus groups and semistructured interviews and conduct analyses to identify implicit managerial strategies for implementation, (b) use surveys to quantitatively rank strategy effectiveness, (c) determine how strategies fit with existing theories of organizational management and change, and (d) use a consensus group to corroborate and expand on the results of the previous three stages. Each goal corresponded to a methodological phase, which included data collection and analytic approaches to identify and evaluate leadership interventions that facilitate EBP implementation in community-based addiction treatment programs. Findings show that the top-ranked strategies involved the recruitment and selection of staff members receptive to change, offering support and requesting feedback during the implementation process, and offering in vivo and hands-on training. Most strategies corresponded to emergent implementation leadership approaches that also utilize principles of transformational and transactional leadership styles. Leadership behaviors represented orientations such as being proactive to respond to implementation needs, supportive to assist staff members

  10. Ranking provinces based on development scale in agriculture sector using taxonomy technique

    Directory of Open Access Journals (Sweden)

    Shahram Rostampour

    2012-08-01

    Full Text Available The purpose of this paper is to determine a comparative ranking of agricultural development in different provinces of Iran using the taxonomy technique. The independent variables are the amount of annual rainfall, the number of permanent rivers, the extent of pastures and forest, the cultivated area of agricultural and garden crops, the number of beehives, the number of fish farming ranches, the number of tractors and combines, the number of cooperative production societies, and the number of industrial cattle breeding and aviculture units. A sketch of the ranking computation is given below. The results indicate that the maximum development coefficient value is associated with Razavi Khorasan province, followed by Mazandaran and East Azarbayjan, while the minimum ranking value belongs to Bushehr province.
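
    As a hedged illustration of the taxonomy technique referred to above, the sketch below implements a common Hellwig-style development measure: standardize the indicators, measure each unit's distance to an ideal development pattern, and convert the distance to a development coefficient. The provinces, indicators and values are invented, and the exact variant used in the paper may differ.

```python
# Minimal sketch of a taxonomy (Hellwig-style) development measure.
# Province names and indicator values are invented for illustration.
import numpy as np

provinces = ["A", "B", "C", "D"]
# rows = provinces, columns = indicators (e.g. rainfall, rivers, tractors, ...)
X = np.array([[900., 12, 5000],
              [700., 20, 8000],
              [300.,  4, 1500],
              [500.,  9, 4000]])

Z = (X - X.mean(axis=0)) / X.std(axis=0)      # standardize each indicator
ideal = Z.max(axis=0)                         # ideal "development pattern"
c = np.linalg.norm(Z - ideal, axis=1)         # distance of each unit to the ideal
c0 = c.mean() + 2 * c.std()                   # normalizing constant
f = 1 - c / c0                                # development coefficient, roughly in [0, 1]

for name, fi in sorted(zip(provinces, f), key=lambda t: -t[1]):
    print(f"{name}: development coefficient {fi:.3f}")
```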

  11. Stellar surface as low-rank modification in iterative methods for binary neutron stars

    Science.gov (United States)

    Lau, Stephen R.

    2017-11-01

    We present a new multidomain spectral method for the treatment of non-spherical stellar surfaces in iterative methods for binary neutron stars. A stellar surface changes throughout the course of an iterative solution, potentially stalling the convergence. Our method affords low-complexity updates of the relevant subdomain preconditioners, thereby avoiding such stalling. Unlike current collocation (or nodal) approaches for treating surfaces (which rely on coordinate transformations to ensure that stellar surfaces arise at subdomain boundaries), our approach requires no regridding or nontrivial Jacobians. For polytropes with an equation of state specified by an integer polytropic index, our method delivers exponential accuracy with increased truncation, although for "stiff" equations of state (e.g. fractional) it suffers from the same accuracy loss as current methods. We have presented an outline of our approach before, but here present details with numerical tests.

  12. A hybrid method of grey relational analysis and data envelopment analysis for evaluating and selecting efficient suppliers plus a novel ranking method for grey numbers

    Directory of Open Access Journals (Sweden)

    Mohsen Sayyah Markabi

    2014-10-01

    Full Text Available Purpose: Evaluation and selection of efficient suppliers is one of the key issues in supply chain management, which depends on a wide range of qualitative and quantitative criteria. The aim of this research is to develop a mathematical model for evaluating and selecting efficient suppliers when faced with supply and demand uncertainties. Design/methodology/approach: In this research, Grey Relational Analysis (GRA) and Data Envelopment Analysis (DEA) are used to evaluate and select efficient suppliers under uncertainty. Furthermore, a novel ranking method is introduced for units whose efficiencies are obtained in the form of interval grey numbers. Findings: The study indicates that the proposed model, in addition to providing satisfactory and acceptable results, avoids time-consuming computations and consequently reduces the solution time. Another advantage of the proposed model is that it enables decisions to be made based on different levels of risk. Originality/value: The paper presents a mathematical model for evaluating and selecting efficient suppliers in a stochastic environment that companies can use in order to make better decisions.
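
    The grey relational analysis half of the hybrid model can be sketched compactly; the DEA stage and the interval-grey ranking method proposed in the paper are not reproduced here. The supplier data, criteria and distinguishing coefficient below are illustrative assumptions.

```python
# Compact grey relational analysis (GRA) sketch for scoring suppliers.
# Supplier data and criteria are hypothetical; only the GRA step of the
# hybrid GRA/DEA model is shown.
import numpy as np

# rows = suppliers, columns = benefit-type criteria (higher is better)
X = np.array([[0.80, 70, 9.1],
              [0.92, 55, 8.4],
              [0.75, 90, 7.9]])

norm = (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0))  # normalize to [0, 1]
ref = np.ones(norm.shape[1])                                  # ideal reference series
delta = np.abs(ref - norm)                                    # deviation from the reference
rho = 0.5                                                     # distinguishing coefficient
xi = (delta.min() + rho * delta.max()) / (delta + rho * delta.max())
grade = xi.mean(axis=1)                                       # grey relational grade

print("grey relational grades:", np.round(grade, 3))
print("ranking (best first):", np.argsort(-grade) + 1)
```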

  13. Ranking the quality of protein structure models using sidechain based network properties.

    Science.gov (United States)

    Ghosh, Soma; Vishveshwara, Saraswathi

    2014-01-01

    Determining the correct structure of a protein given its sequence still remains an arduous task with many researchers working towards this goal. Most structure prediction methodologies result in the generation of a large number of probable candidates with the final challenge being to select the best amongst these. In this work, we have used Protein Structure Networks of native and modeled proteins in combination with Support Vector Machines to estimate the quality of a protein structure model and finally to provide ranks for these models. Model ranking is performed using regression analysis and helps in model selection from a group of many similar and good quality structures. Our results show that structures with a rank greater than 16 exhibit native protein-like properties while those below 10 are non-native like. The tool is also made available as a web-server ( http://vishgraph.mbu.iisc.ernet.in/GraProStr/native_non_native_ranking.html), where, 5 modelled structures can be evaluated at a given time.

  14. Stakeholder Perspectives on Citation and Peer-Based Rankings of Higher Education Journals

    Science.gov (United States)

    Wilkins, Stephen; Huisman, Jeroen

    2015-01-01

    The purpose of this article is to identify and discuss the possible uses of higher education journal rankings, and the associated advantages and disadvantages of using them. The research involved 40 individuals--lecturers, university managers, journal editors and publishers--who represented a range of stakeholders involved with research into…

  15. Inhibitor Ranking Through QM based Chelation Calculations for Virtual Screening of HIV-1 RNase H inhibition

    DEFF Research Database (Denmark)

    Poongavanam, Vasanthanathan; Svendsen, Casper Steinmann; Kongsted, Jacob

    2014-01-01

    Quantum mechanical (QM) calculations have been used to predict the binding affinity of a set of ligands towards HIV-1 RT associated RNase H (RNH). The QM based chelation calculations show improved binding affinity prediction for the inhibitors compared to using an empirical scoring function...... of the methods based on the use of a training set of molecules, QM based chelation calculations were used as filter in virtual screening of compounds in the ZINC database. By this, we find, compared to regular docking, QM based chelation calculations to significantly reduce the large number of false positives...

  16. Ranking scientific publications: the effect of nonlinearity.

    Science.gov (United States)

    Yao, Liyang; Wei, Tian; Zeng, An; Fan, Ying; Di, Zengru

    2014-10-17

    Ranking the significance of scientific publications is a long-standing challenge. The network-based analysis is a natural and common approach for evaluating the scientific credit of papers. Although the number of citations has been widely used as a metric to rank papers, recently some iterative processes such as the well-known PageRank algorithm have been applied to the citation networks to address this problem. In this paper, we introduce nonlinearity to the PageRank algorithm when aggregating resources from different nodes to further enhance the effect of important papers. The validation of our method is performed on the data of American Physical Society (APS) journals. The results indicate that the nonlinearity improves the performance of the PageRank algorithm in terms of ranking effectiveness, as well as robustness against malicious manipulations. Although the nonlinearity analysis is based on the PageRank algorithm, it can be easily extended to other iterative ranking algorithms and similar improvements are expected.
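
    A minimal sketch of a nonlinear PageRank-style iteration is given below. The placement of the exponent theta in the aggregation step and the toy citation network are assumptions for illustration; this is not claimed to be the exact formulation evaluated on the APS data.

```python
# Hedged sketch of a PageRank iteration with a nonlinearity applied when
# aggregating scores from citing papers.
import numpy as np

def nonlinear_pagerank(A, theta=1.2, d=0.85, tol=1e-10, max_iter=1000):
    """A[i, j] = 1 if paper j cites paper i."""
    n = A.shape[0]
    out_deg = np.maximum(A.sum(axis=0), 1.0)   # references made by each paper; 1 avoids /0
    s = np.ones(n) / n
    for _ in range(max_iter):
        s_new = (1 - d) / n + d * (A @ (s ** theta / out_deg))  # nonlinear aggregation
        s_new /= s_new.sum()                   # renormalize after the nonlinearity
        if np.abs(s_new - s).sum() < tol:
            break
        s = s_new
    return s

# Tiny toy citation network: paper 2 is cited by papers 0, 1 and 3;
# paper 1 is cited by paper 0.
A = np.array([[0, 0, 0, 0],
              [1, 0, 0, 0],
              [1, 1, 0, 1],
              [0, 0, 0, 0]], dtype=float)
print(np.round(nonlinear_pagerank(A), 3))
```

    With theta = 1, the iteration reduces to an ordinary PageRank-style update; theta > 1 further boosts papers that already accumulate large scores, which is the qualitative effect the abstract describes.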

  17. Ranking scientific publications: the effect of nonlinearity

    Science.gov (United States)

    Yao, Liyang; Wei, Tian; Zeng, An; Fan, Ying; di, Zengru

    2014-10-01

    Ranking the significance of scientific publications is a long-standing challenge. The network-based analysis is a natural and common approach for evaluating the scientific credit of papers. Although the number of citations has been widely used as a metric to rank papers, recently some iterative processes such as the well-known PageRank algorithm have been applied to the citation networks to address this problem. In this paper, we introduce nonlinearity to the PageRank algorithm when aggregating resources from different nodes to further enhance the effect of important papers. The validation of our method is performed on the data of American Physical Society (APS) journals. The results indicate that the nonlinearity improves the performance of the PageRank algorithm in terms of ranking effectiveness, as well as robustness against malicious manipulations. Although the nonlinearity analysis is based on the PageRank algorithm, it can be easily extended to other iterative ranking algorithms and similar improvements are expected.

  18. Ranking the types of intersections for assessing the safety of pedestrians using TOPSIS method

    Directory of Open Access Journals (Sweden)

    Călin ŞERBU

    2014-11-01

    Full Text Available Every year, more than 1500 accidents involving pedestrians occur at intersections in Romania. The number of accidents involving pedestrians at roundabout-type intersections in Romania increased approximately threefold in 2013 compared to 2009. This alarming increase led to the need to assess the safety of pedestrians at intersections with or without safety systems. The safety systems for pedestrians and drivers include road markings, pedestrian crossing markings, the signing of intersections with road signs, traffic lights, and pedestrian safety barriers. We propose to assess the types of intersections with the TOPSIS method.
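
    For readers unfamiliar with TOPSIS, the sketch below shows the standard steps: normalize, weight, measure distances to the ideal and anti-ideal solutions, and rank by relative closeness. The intersection types, criteria, weights and scores are invented and are not taken from the study.

```python
# Generic TOPSIS sketch for ranking alternatives (here, hypothetical
# intersection types) against weighted criteria. All numbers are invented.
import numpy as np

alternatives = ["signalized", "roundabout", "unsignalized", "pedestrian bridge"]
# columns: pedestrian accidents/year (cost), crossing time in s (cost), visibility score (benefit)
X = np.array([[4.0, 25, 8],
              [7.0, 15, 6],
              [9.0, 10, 4],
              [1.0, 40, 9]], dtype=float)
weights = np.array([0.5, 0.2, 0.3])
benefit = np.array([False, False, True])   # which criteria are "larger is better"

V = weights * X / np.linalg.norm(X, axis=0)          # weighted, vector-normalized matrix
ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
anti = np.where(benefit, V.min(axis=0), V.max(axis=0))
d_best = np.linalg.norm(V - ideal, axis=1)
d_worst = np.linalg.norm(V - anti, axis=1)
closeness = d_worst / (d_best + d_worst)             # 1 = closest to the ideal solution

for name, c in sorted(zip(alternatives, closeness), key=lambda t: -t[1]):
    print(f"{name}: closeness {c:.3f}")
```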

  19. Characterization of lignin monomers in low rank coal humic acids using the derivatization/reductive cleavage method

    Energy Technology Data Exchange (ETDEWEB)

    Grasset, L.; Vlckova, Z.; Kucerik, J.; Ambles, A. [University of Poitiers, Poitiers (France)

    2010-09-15

    Traditional CuO oxidation and thermochemolysis with tetramethylammonium hydroxide are the two main methods for lignin characterization in gymnosperm wood, and in soils and sediments formed from degraded gymnosperm wood, or for assessing the supply of terrestrial organic matter to marine sediments. In some cases, the overall lignin yield and the compound ratios used as plant source proxies have been found to be considerably different, depending on the method used. Thus, there is a need for finding efficient and more selective methods for lignin alpha- and beta-aryl ether cleavage. Derivatization followed by reductive cleavage (the DFRC method) is suitable for lignocellulose material. Results from the DFRC method applied to the characterization of humic acids of a lignite (low rank coal) from the Czech Republic show that they contain intact lignin monomers with a dominance of coniferyl units, in accord with the gymnosperm origin of the lignite. It is expected that DFRC will be suitable also for tracing lignin in other sediments.

  20. Heat Exchanger Ranking Program Using Genetic Algorithm and ε-NTU Method for Optimal Design

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Soon Ho; Kim, Minsung; Ha, Man Yeong; Park, Sang-Hu; Min, June Kee [Pusan National University, Busan (Korea, Republic of)

    2014-11-15

    Today, computational fluid dynamics (CFD) is widely used in industry because of the availability of high performance computers. However, full-scale analysis poses problems owing to limited resources and time. In this study, the performance and optimal size of a heat exchanger were calculated using the effectiveness-number of transfer units (ε-NTU) method and a database of heat exchanger characteristics. Information about the geometry and performance of various heat exchangers is collected, and the performance of the heat exchanger is calculated under the given operating conditions. To determine the optimal size of the heat exchanger, a Genetic Algorithm (GA) is used, and MATLAB and REFPROP are used for the calculation.
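
    The ε-NTU calculation at the core of such a ranking program can be illustrated for a single counterflow exchanger; the UA value, capacity rates and inlet temperatures below are hypothetical, and the genetic-algorithm sizing loop is not shown.

```python
# Minimal effectiveness-NTU (counterflow) rating calculation.
# Operating data are invented for illustration.
import math

def effectiveness_counterflow(ntu, cr):
    """Effectiveness of a counterflow exchanger; cr = C_min / C_max."""
    if abs(cr - 1.0) < 1e-12:
        return ntu / (1.0 + ntu)
    e = math.exp(-ntu * (1.0 - cr))
    return (1.0 - e) / (1.0 - cr * e)

UA = 1800.0                        # overall conductance, W/K (hypothetical)
C_hot, C_cold = 900.0, 1500.0      # capacity rates m_dot * cp, W/K (hypothetical)
C_min, C_max = min(C_hot, C_cold), max(C_hot, C_cold)

ntu = UA / C_min
eps = effectiveness_counterflow(ntu, C_min / C_max)
q = eps * C_min * (150.0 - 30.0)   # heat duty for 150 °C hot / 30 °C cold inlets, W

print(f"NTU = {ntu:.2f}, effectiveness = {eps:.3f}, duty = {q/1000:.1f} kW")
```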

  1. A collaborative filtering recommendation algorithm based on weighted SimRank and social trust

    Science.gov (United States)

    Su, Chang; Zhang, Butao

    2017-05-01

    Collaborative filtering is one of the most widely used recommendation technologies, but the data sparsity and cold start problems of collaborative filtering algorithms are difficult to solve effectively. In order to alleviate the data sparsity problem in collaborative filtering, a weighted, improved SimRank algorithm is first proposed to compute the rating similarity between users in the rating data set. The improved SimRank can find more nearest neighbors for target users by exploiting the transitivity of rating similarity. Then, we build a trust network and introduce the calculation of trust degree from the trust relationship data set. Finally, we combine rating similarity and trust into a comprehensive similarity in order to find more appropriate nearest neighbors for the target user. Experimental results show that the proposed algorithm effectively improves the recommendation precision of the collaborative filtering algorithm.
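
    The final blending step described above can be sketched as a simple convex combination of rating similarity and trust; the weight alpha and all similarity and trust values are illustrative assumptions, and the weighted SimRank computation itself is not reproduced.

```python
# Hedged sketch of blending rating similarity with trust into one
# "comprehensive" similarity used to pick nearest neighbours.
import numpy as np

def comprehensive_similarity(rating_sim, trust, alpha=0.6):
    """Convex combination of rating similarity and trust, both assumed in [0, 1]."""
    return alpha * rating_sim + (1 - alpha) * trust

# similarity/trust of the target user to four candidate neighbours (made up)
rating_sim = np.array([0.82, 0.40, 0.65, 0.10])
trust      = np.array([0.30, 0.90, 0.70, 0.95])

combined = comprehensive_similarity(rating_sim, trust)
k = 2
neighbours = np.argsort(-combined)[:k]
print("comprehensive similarity:", np.round(combined, 3))
print("top-k neighbours:", neighbours)
```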

  2. Microseismic Event Grouping Based on PageRank Linkage at the Newberry Volcano Geothermal Site

    Science.gov (United States)

    Aguiar, A. C.; Myers, S. C.

    2016-12-01

    The Newberry Volcano DOE FORGE site in Central Oregon has been stimulated two times using high-pressure fluid injection to study the Enhanced Geothermal Systems (EGS) technology. Several hundred microseismic events were generated during the first stimulation in the fall of 2012. Initial locations of this microseismicity do not show well defined subsurface structure in part because event location uncertainties are large (Foulger and Julian, 2013). We focus on this stimulation to explore the spatial and temporal development of microseismicity, which is key to understanding how subsurface stimulation modifies stress, fractures rock, and increases permeability. We use PageRank, Google's initial search algorithm, to determine connectivity within the events (Aguiar and Beroza, 2014) and assess signal-correlation topology for the micro-earthquakes. We then use this information to create signal families and compare these to the spatial and temporal proximity of associated earthquakes. We relocate events within families (identified by PageRank linkage) using the Bayesloc approach (Myers et al., 2007). Preliminary relocations show tight spatial clustering of event families as well as evidence of events relocating to a different cluster than originally reported. We also find that signal similarity (linkage) at several stations, not just one or two, is needed in order to determine that events are in close proximity to one another. We show that indirect linkage of signals using PageRank is a reliable way to increase the number of events that are confidently determined to be similar to one another, which may lead to efficient and effective grouping of earthquakes with similar physical characteristics, such as focal mechanisms and stress drop. Our ultimate goal is to determine whether changes in the state of stress and/or changes in the generation of subsurface fracture networks can be detected using PageRank topology as well as aid in the event relocation to obtain more accurate

  3. Citation-Based Journal Rankings for AI Research A Business Perspective

    OpenAIRE

    Cheng, Chun Hung; Clyde W. Holsapple; Lee, Anita

    1996-01-01

    A significant and growing area of business-computing research is concerned with AI. Knowledge about which journals are the most influential forums for disseminating AI research is important for business school faculty, students, administrators, and librarians. To date, there has been only one study attempting to rank AI journals from a business-computing perspective. It used a subjective methodology, surveying opinions of business faculty about a prespecified list of 30 journals. Here, we rep...

  4. A Distributed Taxation Based Rank Adaptation Scheme for 5G Small Cells

    DEFF Research Database (Denmark)

    Catania, Davide; Cattoni, Andrea Fabio; Mahmood, Nurul Huda

    2015-01-01

    approach is to discourage the choice of transmitting with multiple spatial streams in highly interfered scenarios, and to exploit and encourage the usage of multiple spatial transmission streams in low interference scenarios. We show that our proposed algorithm can be adjusted to preserve and guarantee...... a good outage performance, while providing the benefit of higher average throughputs, in both low and highly interfered scenarios, when compared to fixed rank configurations, and distributed selfish schemes....

  5. Rankings of High School Sports Injury Rates Differ Based on Time Loss Assessments.

    Science.gov (United States)

    Kerr, Zachary Y; Roos, Karen G; Djoko, Aristarque; Dompier, Thomas P; Marshall, Stephen W

    2017-11-01

    To examine how injury definition inclusiveness affects the rank order of injury rates in 27 high school (HS) sports. The National Athletic Treatment, Injury and Outcomes Network (NATION) used certified athletic trainers (ATs) to collect injury and athlete-exposure (AE) data in practices and competitions for 27 HS sports during the 2011/2012 to 2013/2014 academic years. Time loss (TL) injuries resulted in ≥24 hours of participation restriction; nontime loss (NTL) injuries resulted in <24 hours of participation restriction. Participants were HS student-athletes, and sports injury data were drawn from NATION. Time loss and TL + NTL injury rates were calculated. Sport-specific rates were placed in rank order, stratified by gender. Most of the 47 014 injuries reported were NTL (82.8%). Among boys' sports, TL injury rates were greatest in football (3.27/1000AE) and wrestling (2.43/1000AE); TL + NTL injury rates were greatest also in football (15.29/1000AE) and wrestling (11.62/1000AE). Among girls' sports, TL injury rates were greatest in soccer (1.97/1000AE) and basketball (1.76/1000AE); TL + NTL injury rates were greatest in field hockey and lacrosse (both 11.32/1000AE). The rank order of injury rates and the resulting injury prevention priorities may depend on injury definition inclusiveness, particularly in female HS sports.

  6. A tilting approach to ranking influence

    KAUST Repository

    Genton, Marc G.

    2014-12-01

    We suggest a new approach, which is applicable for general statistics computed from random samples of univariate or vector-valued or functional data, to assessing the influence that individual data have on the value of a statistic, and to ranking the data in terms of that influence. Our method is based on, first, perturbing the value of the statistic by ‘tilting’, or reweighting, each data value, where the total amount of tilt is constrained to be the least possible, subject to achieving a given small perturbation of the statistic, and, then, taking the ranking of the influence of data values to be that which corresponds to ranking the changes in data weights. It is shown, both theoretically and numerically, that this ranking does not depend on the size of the perturbation, provided that the perturbation is sufficiently small. That simple result leads directly to an elegant geometric interpretation of the ranks; they are the ranks of the lengths of projections of the weights onto a ‘line’ determined by the first empirical principal component function in a generalized measure of covariance. To illustrate the generality of the method we introduce and explore it in the case of functional data, where (for example) it leads to generalized boxplots. The method has the advantage of providing an interpretable ranking that depends on the statistic under consideration. For example, the ranking of data, in terms of their influence on the value of a statistic, is different for a measure of location and for a measure of scale. This is as it should be; a ranking of data in terms of their influence should depend on the manner in which the data are used. Additionally, the ranking recognizes, rather than ignores, sign, and in particular can identify left- and right-hand ‘tails’ of the distribution of a random function or vector.

  7. Ranking Economic History Journals

    DEFF Research Database (Denmark)

    Di Vaio, Gianfranco; Weisdorf, Jacob Louis

    This study ranks - for the first time - 12 international academic journals that have economic history as their main topic. The ranking is based on data collected for the year 2007. Journals are ranked using standard citation analysis where we adjust for age, size and self-citation of journals. We...... also compare the leading economic history journals with the leading journals in economics in order to measure the influence on economics of economic history, and vice versa. With a few exceptions, our results confirm the general idea about what economic history journals are the most influential...

  8. Ranking economic history journals

    DEFF Research Database (Denmark)

    Di Vaio, Gianfranco; Weisdorf, Jacob Louis

    2010-01-01

    This study ranks-for the first time-12 international academic journals that have economic history as their main topic. The ranking is based on data collected for the year 2007. Journals are ranked using standard citation analysis where we adjust for age, size and self-citation of journals. We also...... compare the leading economic history journals with the leading journals in economics in order to measure the influence on economics of economic history, and vice versa. With a few exceptions, our results confirm the general idea about what economic history journals are the most influential for economic...

  9. Diffusion of scientific credits and the ranking of scientists

    OpenAIRE

    Radicchi, Filippo; Fortunato, Santo; Markines, Benjamin; Vespignani, Alessandro

    2009-01-01

    Recently, the abundance of digital data enabled the implementation of graph based ranking algorithms that provide system level analysis for ranking publications and authors. Here we take advantage of the entire Physical Review publication archive (1893-2006) to construct authors' networks where weighted edges, as measured from opportunely normalized citation counts, define a proxy for the mechanism of scientific credit transfer. On this network we define a ranking method based on a diffusion ...

  10. Complex step-based low-rank extended Kalman filtering for state-parameter estimation in subsurface transport models

    KAUST Repository

    El Gharamti, Mohamad

    2014-02-01

    The accuracy of groundwater flow and transport model predictions highly depends on our knowledge of subsurface physical parameters. Assimilation of contaminant concentration data from shallow dug wells could help improving model behavior, eventually resulting in better forecasts. In this paper, we propose a joint state-parameter estimation scheme which efficiently integrates a low-rank extended Kalman filtering technique, namely the Singular Evolutive Extended Kalman (SEEK) filter, with the prominent complex-step method (CSM). The SEEK filter avoids the prohibitive computational burden of the Extended Kalman filter by updating the forecast along the directions of error growth only, called filter correction directions. CSM is used within the SEEK filter to efficiently compute model derivatives with respect to the state and parameters along the filter correction directions. CSM is derived using complex Taylor expansion and is second order accurate. It is proven to guarantee accurate gradient computations with zero numerical round-off errors, but requires complexifying the numerical code. We perform twin-experiments to test the performance of the CSM-based SEEK for estimating the state and parameters of a subsurface contaminant transport model. We compare the efficiency and the accuracy of the proposed scheme with two standard finite difference-based SEEK filters as well as with the ensemble Kalman filter (EnKF). Assimilation results suggest that the use of the CSM in the context of the SEEK filter may provide up to 80% more accurate solutions when compared to standard finite difference schemes and is competitive with the EnKF, even providing more accurate results in certain situations. We analyze the results based on two different observation strategies. We also discuss the complexification of the numerical code and show that this could be efficiently implemented in the context of subsurface flow models. © 2013 Elsevier B.V.
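
    The complex-step method itself is easy to demonstrate in isolation: for a real-analytic function, the first derivative can be obtained from a single complex-valued evaluation with essentially no subtractive round-off error. The test function below is a standard textbook example, not part of the subsurface transport model.

```python
# Complex-step derivative: f'(x) ~= Im(f(x + i*h)) / h, accurate to machine
# precision because no subtractive cancellation occurs.
import numpy as np

def complex_step_derivative(f, x, h=1e-20):
    return np.imag(f(x + 1j * h)) / h

# A classic test function (not from the paper's model).
f = lambda x: np.exp(x) / np.sqrt(np.sin(x) ** 3 + np.cos(x) ** 3)
x0 = 1.5

csm = complex_step_derivative(f, x0)
fd = (f(x0 + 1e-6) - f(x0 - 1e-6)) / 2e-6      # central finite difference, for contrast
print(f"complex step : {csm:.15f}")
print(f"finite diff  : {fd:.15f}")
```

    The contrast with the finite-difference estimate illustrates why the abstract emphasizes zero numerical round-off error: the complex step can be made arbitrarily small without cancellation.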

  11. Dual channel rank-based intensity weighting for quantitative co-localization of microscopy images

    LENUS (Irish Health Repository)

    Singan, Vasanth R

    2011-10-21

    Abstract Background Accurate quantitative co-localization is a key parameter in the context of understanding the spatial co-ordination of molecules and therefore their function in cells. Existing co-localization algorithms consider either the presence of co-occurring pixels or correlations of intensity in regions of interest. Depending on the image source, and the algorithm selected, the co-localization coefficients determined can be highly variable, and often inaccurate. Furthermore, this choice of whether co-occurrence or correlation is the best approach for quantifying co-localization remains controversial. Results We have developed a novel algorithm to quantify co-localization that improves on and addresses the major shortcomings of existing co-localization measures. This algorithm uses a non-parametric ranking of pixel intensities in each channel, and the difference in ranks of co-localizing pixel positions in the two channels is used to weight the coefficient. This weighting is applied to co-occurring pixels thereby efficiently combining both co-occurrence and correlation. Tests with synthetic data sets show that the algorithm is sensitive to both co-occurrence and correlation at varying levels of intensity. Analysis of biological data sets demonstrate that this new algorithm offers high sensitivity, and that it is capable of detecting subtle changes in co-localization, exemplified by studies on a well characterized cargo protein that moves through the secretory pathway of cells. Conclusions This algorithm provides a novel way to efficiently combine co-occurrence and correlation components in biological images, thereby generating an accurate measure of co-localization. This approach of rank weighting of intensities also eliminates the need for manual thresholding of the image, which is often a cause of error in co-localization quantification. We envisage that this tool will facilitate the quantitative analysis of a wide range of biological data sets

  12. A least square support vector machine-based approach for contingency classification and ranking in a large power system

    Directory of Open Access Journals (Sweden)

    Bhanu Pratap Soni

    2016-12-01

    Full Text Available This paper proposes an effective supervised learning approach for static security assessment of a large power system. The supervised learning approach employs a least square support vector machine (LS-SVM) to rank the contingencies and predict the system severity level. The severity of a contingency is measured by two scalar performance indices (PIs): the line MVA performance index (PIMVA) and the voltage-reactive power performance index (PIVQ). The SVM works in two steps: in Step I, both standard indices (PIMVA and PIVQ) are estimated under different operating scenarios, and in Step II, contingency ranking is carried out based on the values of the PIs. The effectiveness of the proposed methodology is demonstrated on the IEEE 39-bus (New England) system. The approach can be a beneficial, less time-consuming tool for accurate security assessment and contingency analysis at the energy management center.
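
    A generic line-loading performance index of the PIMVA type can be written in a few lines; the exponent, weights and post-contingency flows below are illustrative assumptions rather than values from the IEEE 39-bus study.

```python
# Generic line-loading performance index:
#   PI = sum_l (w_l / 2n) * (S_l / S_l_max)^(2n)
# Exponent n, weights and flows are illustrative assumptions.
import numpy as np

def pi_mva(S, S_max, w=None, n=2):
    S, S_max = np.asarray(S, float), np.asarray(S_max, float)
    w = np.ones_like(S) if w is None else np.asarray(w, float)
    return np.sum((w / (2 * n)) * (S / S_max) ** (2 * n))

# post-contingency MVA flows vs. line ratings for a small hypothetical case
flows   = [95.0, 160.0, 40.0, 210.0]
ratings = [100.0, 150.0, 120.0, 200.0]
print(f"PI_MVA = {pi_mva(flows, ratings):.3f}")   # larger PI -> more severe contingency
```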

  13. 3D Forest Structure Estimation from SAR Tomography by Means of a Full Rank Polarimetric Inversion Based on Compressive Sensing

    Science.gov (United States)

    Cazcarra Bes, Victor; Tello-Alonso, Maria; Papathanassiou, Kostas

    2015-04-01

    SAR tomography (TomoSAR) techniques allow a direct 3D imaging by exploiting angular diversity with different passes of the sensor. One of the main drawbacks of SAR tomography is that the estimation of the vertical reflectivity profile has to be performed through a limited set of multibaseline acquisitions, which requires solving a highly underdetermined system of equations. In TomoSAR literature, the Capon and the Fourier beamforming spectral estimators are widely employed. As an alternative, the application of Compressive Sensing (CS) techniques to the estimation of forest profiles has been recently introduced. In this paper, a different algorithm based on CS is proposed. It performs a full rank polarimetric inversion, allowing thus an estimation of the 3D coherency matrices. To study the full rank polarimetric TomoSAR inversion, a temporal series of airborne data is used. The results of the 3D polarimetric inversion will be contrasted to in situ measurements and LIDAR data.

  14. Personnel Selection Based on Fuzzy Methods

    Directory of Open Access Journals (Sweden)

    Lourdes Cañós

    2011-03-01

    Full Text Available The decisions of managers regarding the selection of staff strongly determine the success of the company. A correct choice of employees is a source of competitive advantage. We propose a fuzzy method for staff selection, based on competence management and on comparison with the valuation that the company considers the best in each competence (the ideal candidate). Our method is based on the Hamming distance and a Matching Level Index. The algorithms, implemented in the software StaffDesigner, allow us to rank the candidates, even when the competences of the ideal candidate have been evaluated only in part. Our approach is applied in a numerical example.
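
    A minimal sketch in the spirit of this method ranks candidates by the normalized Hamming distance between their fuzzy competence valuations and the ideal profile; the competences, valuations and candidate names are invented, and the paper's Matching Level Index is not reproduced.

```python
# Rank candidates by normalized Hamming distance to an ideal fuzzy profile.
# All competence values and names are invented for illustration.
import numpy as np

competences = ["leadership", "analysis", "communication", "languages"]
ideal = np.array([0.9, 0.8, 0.7, 0.6])            # ideal candidate profile in [0, 1]

candidates = {
    "Ana":   np.array([0.8, 0.9, 0.6, 0.5]),
    "Luis":  np.array([0.6, 0.7, 0.9, 0.9]),
    "Marta": np.array([0.9, 0.6, 0.7, 0.4]),
}

def hamming_distance(profile, ideal_profile):
    """Normalized Hamming distance between two fuzzy valuations."""
    return np.abs(profile - ideal_profile).mean()

ranking = sorted(candidates, key=lambda name: hamming_distance(candidates[name], ideal))
for name in ranking:  # smallest distance to the ideal candidate first
    print(f"{name}: distance {hamming_distance(candidates[name], ideal):.3f}")
```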

  15. Extreme learning machine for ranking: generalization analysis and applications.

    Science.gov (United States)

    Chen, Hong; Peng, Jiangtao; Zhou, Yicong; Li, Luoqing; Pan, Zhibin

    2014-05-01

    The extreme learning machine (ELM) has attracted increasing attention recently with its successful applications in classification and regression. In this paper, we investigate the generalization performance of ELM-based ranking. A new regularized ranking algorithm is proposed based on the combinations of activation functions in ELM. The generalization analysis is established for the ELM-based ranking (ELMRank) in terms of the covering numbers of hypothesis space. Empirical results on the benchmark datasets show the competitive performance of the ELMRank over the state-of-the-art ranking methods. Copyright © 2014 Elsevier Ltd. All rights reserved.

  16. KaM_CRK: Clustering and Ranking Knowledge for Reasonable Results Based on Behaviors and Contexts

    Directory of Open Access Journals (Sweden)

    Changhong Hu

    2013-01-01

    Full Text Available A model named KaM_CRK is proposed, which can supply clustered and ranked knowledge to users in different contexts. By comparing the attributes of contexts and JANs, our findings indicate that the model can group together those JANs whose attributes are similar to the user's contexts. By applying the KaM_CLU algorithm and the Centre rank strategy within the KaM_CRK model, the accuracy of the knowledge provided to users is significantly improved. By analyzing users' behaviors, the dynamic coefficient BehaviorF is introduced for the first time in KaM_CLU. Compared to the traditional K_means and DBSCAN approaches, the KaM_CLU algorithm does not need the number of clusters to be initialized. Additionally, its results are more accurate, more reasonable, and a better fit for users than those of other approaches. Our evaluation on real data shows that the strategy performs better in terms of time efficiency and user satisfaction, with a saving of 30% and an improvement of 5%, respectively.

  17. Standard test method for ranking resistance of plastic materials to sliding wear using a block-on-ring configuration

    CERN Document Server

    American Society for Testing and Materials. Philadelphia

    1997-01-01

    1.1 This test method covers a laboratory procedure to measure the resistance of plastic materials under dry sliding conditions. The test utilizes a block-on-ring geometry to rank materials according to their sliding wear characteristics under various conditions. 1.2 The test specimens are small so that they can be molded or cut from fabricated plastic parts. The test may be run at the load, velocity, and temperature which simulate the service condition. 1.3 Wear test results are reported as specific wear rates calculated from volume loss, sliding distance, and load. Materials with superior wear resistance have lower specific wear rates. 1.4 This test method allows the use of both single- and multi-station apparatus to determine the specific wear rates. 1.5 The values stated in SI units are to be regarded as the standard. The values given in parentheses are for information only. 1.6 This standard does not purport to address all of the safety concerns, if any, associated with its use. It is the responsibil...
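
    The specific wear rate reported by this test method is simply the measured volume loss divided by the product of normal load and sliding distance; a small worked example with invented measurements is shown below.

```python
# Worked example of a specific wear rate:
#   k = volume loss / (normal load * sliding distance), in mm^3 / (N*m).
# The measurement values are illustrative, not taken from the standard.
volume_loss_mm3 = 2.4        # measured volume loss of the plastic block
load_N = 50.0                # applied normal load
sliding_distance_m = 2000.0  # total sliding distance

k = volume_loss_mm3 / (load_N * sliding_distance_m)
print(f"specific wear rate = {k:.2e} mm^3/(N*m)")   # lower k = better wear resistance
```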

  18. Modeling Ranking, Time Trade-Off and Visual Analogue Scale Values for EQ-5D Health States : A Review and Comparison of Methods

    NARCIS (Netherlands)

    B.M. Craig (Benjamin); J.J. van Busschbach (Jan); J.A. Salomon (Joshua)

    2009-01-01

    Abstract BACKGROUND: There is rising interest in eliciting health state valuations using rankings. Due to their relative simplicity, ordinal measurement methods may offer an attractive practical alternative to cardinal methods, such as time trade-off (TTO) and visual analog scale (VAS).

  19. Using a consensus approach based on the conservation of inter-residue contacts to rank CAPRI models

    KAUST Repository

    Vangone, Anna

    2013-10-17

    Herein we propose the use of a consensus approach, CONSRANK, for ranking CAPRI models. CONSRANK relies on the conservation of inter-residue contacts in the analyzed decoys ensemble. Models are ranked according to their ability to match the most frequently observed contacts. We applied CONSRANK to 19 CAPRI protein-protein targets, covering a wide range of prediction difficulty and involved in a variety of biological functions. CONSRANK results are consistently good, both in terms of native-like (NL) solutions ranked in the top positions and of values of the Area Under the receiver operating characteristic Curve (AUC). For targets having a percentage of NL solutions above 3%, an excellent performance is found, with AUC values approaching 1. For the difficult target T46, having only 3.4% NL solutions, the number of NL solutions in the top 5 and 10 ranked positions is enriched by a factor 30, and the AUC value is as high as 0.997. AUC values below 0.8 are only found for targets featuring a percentage of NL solutions within 1.1%. Remarkably, a false consensus emerges only in one case, T42, which happens to be an artificial protein, whose assembly details remain uncertain, based on controversial experimental data. We also show that CONSRANK still performs very well on a limited number of models, provided that more than 1 NL solution is included in the ensemble, thus extending its applicability to cases where few dozens of models are available.© 2013 Wiley Periodicals, Inc.

  20. Standard test method for ranking resistance of materials to sliding wear using block-on-ring wear test

    CERN Document Server

    American Society for Testing and Materials. Philadelphia

    2005-01-01

    1.1 This test method covers laboratory procedures for determining the resistance of materials to sliding wear. The test utilizes a block-on-ring friction and wear testing machine to rank pairs of materials according to their sliding wear characteristics under various conditions. 1.2 An important attribute of this test is that it is very flexible. Any material that can be fabricated into, or applied to, blocks and rings can be tested. Thus, the potential materials combinations are endless. However, the interlaboratory testing has been limited to metals. In addition, the test can be run with various lubricants, liquids, or gaseous atmospheres, as desired, to simulate service conditions. Rotational speed and load can also be varied to better correspond to service requirements. 1.3 The values stated in SI units are to be regarded as the standard. The values given in parentheses are for information only. Wear test results are reported as the volume loss in cubic millimetres for both the block and ring. Materials...

  1. Research of Subgraph Estimation Page Rank Algorithm for Web Page Rank

    Directory of Open Access Journals (Sweden)

    LI Lan-yin

    2017-04-01

    Full Text Available The traditional PageRank algorithm cannot efficiently handle the ranking of large-scale Web page data. This paper proposes an accelerated algorithm named topK-Rank, which is based on PageRank and runs on the MapReduce platform. It can find the top k nodes efficiently for a given graph without sacrificing accuracy. In order to identify the top k nodes, the topK-Rank algorithm prunes unnecessary nodes and edges in each iteration to dynamically construct subgraphs, and iteratively estimates lower/upper bounds of PageRank scores through the subgraphs. Theoretical analysis shows that this method guarantees result exactness. Experiments show that the topK-Rank algorithm can find the k nodes much faster than existing approaches.

  2. Ranking Adverse Drug Reactions With Crowdsourcing

    KAUST Repository

    Gottlieb, Assaf

    2015-03-23

    Background: There is no publicly available resource that provides the relative severity of adverse drug reactions (ADRs). Such a resource would be useful for several applications, including assessment of the risks and benefits of drugs and improvement of patient-centered care. It could also be used to triage predictions of drug adverse events. Objective: The intent of the study was to rank ADRs according to severity. Methods: We used Internet-based crowdsourcing to rank ADRs according to severity. We assigned 126,512 pairwise comparisons of ADRs to 2589 Amazon Mechanical Turk workers and used these comparisons to rank order 2929 ADRs. Results: There is good correlation (rho=.53) between the mortality rates associated with ADRs and their rank. Our ranking highlights severe drug-ADR predictions, such as cardiovascular ADRs for raloxifene and celecoxib. It also triages genes associated with severe ADRs such as epidermal growth-factor receptor (EGFR), associated with glioblastoma multiforme, and SCN1A, associated with epilepsy. Conclusions: ADR ranking lays a first stepping stone in personalized drug risk assessment. Ranking of ADRs using crowdsourcing may have useful clinical and financial implications, and should be further investigated in the context of health care decision making.
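
    Turning pairwise severity judgements into a rank order can be illustrated with a toy win-fraction score; the ADR names, comparisons and the aggregation rule below are assumptions for illustration and may differ from the procedure used in the study.

```python
# Toy aggregation of crowdsourced pairwise severity judgements into a ranking.
# ADR names and votes are invented; the study's own aggregation may differ.
from collections import defaultdict

# (more_severe, less_severe) judgements from hypothetical workers
comparisons = [
    ("cardiac arrest", "nausea"), ("cardiac arrest", "rash"),
    ("seizure", "nausea"), ("seizure", "rash"),
    ("cardiac arrest", "seizure"), ("rash", "nausea"),
    ("nausea", "rash"), ("cardiac arrest", "nausea"),
]

wins, total = defaultdict(int), defaultdict(int)
for winner, loser in comparisons:
    wins[winner] += 1
    total[winner] += 1
    total[loser] += 1

score = {adr: wins[adr] / total[adr] for adr in total}   # fraction of comparisons "won"
for adr in sorted(score, key=score.get, reverse=True):
    print(f"{adr}: severity score {score[adr]:.2f}")
```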

  3. A Study of the Effects of Rank and Gender on Officers' Club Membership and Club Usage at U.S. Air Force Bases in the Continental United States

    National Research Council Canada - National Science Library

    Smith, C

    1999-01-01

    Scope and Method of Study: The purpose of this study was to examine relationships between both officer rank and officer gender and both club membership and member usage at Air Force officers' clubs in the Continental United States (CONUS...

  4. Neophilia Ranking of Scientific Journals

    Science.gov (United States)

    Packalen, Mikko; Bhattacharya, Jay

    2017-01-01

    The ranking of scientific journals is important because of the signal it sends to scientists about what is considered most vital for scientific progress. Existing ranking systems focus on measuring the influence of a scientific paper (citations)—these rankings do not reward journals for publishing innovative work that builds on new ideas. We propose an alternative ranking based on the proclivity of journals to publish papers that build on new ideas, and we implement this ranking via a text-based analysis of all published biomedical papers dating back to 1946. In addition, we compare our neophilia ranking to citation-based (impact factor) rankings; this comparison shows that the two ranking approaches are distinct. Prior theoretical work suggests an active role for our neophilia index in science policy. Absent an explicit incentive to pursue novel science, scientists underinvest in innovative work because of a coordination problem: for work on a new idea to flourish, many scientists must decide to adopt it in their work. Rankings that are based purely on influence thus do not provide sufficient incentives for publishing innovative work. By contrast, adoption of the neophilia index as part of journal-ranking procedures by funding agencies and university administrators would provide an explicit incentive for journals to publish innovative work and thus help solve the coordination problem by increasing scientists' incentives to pursue innovative work. PMID:28713181

  5. Compression and Combining Based on Channel Shortening and Rank Reduction Technique for Cooperative Wireless Sensor Networks

    KAUST Repository

    Ahmed, Qasim Zeeshan

    2013-12-18

    This paper investigates and compares the performance of wireless sensor networks where sensors operate on the principles of cooperative communications. We consider a scenario where the source transmits signals to the destination with the help of L sensors. As the destination has the capacity of processing only U out of these L signals, the strongest U signals are selected while the remaining (L-U) signals are suppressed. A preprocessing block similar to channel-shortening is proposed in this contribution. However, this preprocessing block employs a rank-reduction technique instead of channel-shortening. By employing this preprocessing, we are able to decrease the computational complexity of the system without affecting the bit error rate (BER) performance. From our simulations, it can be shown that these schemes outperform the channel-shortening schemes in terms of computational complexity. In addition, the proposed schemes have a superior BER performance as compared to channel-shortening schemes when sensors employ fixed gain amplification. However, for sensors which employ variable gain amplification, a tradeoff exists in terms of BER performance between the channel-shortening and these schemes. These schemes outperform the channel-shortening scheme for lower signal-to-noise ratios.

  6. Reservoir characterization based on tracer response and rank analysis of production and injection rates

    Energy Technology Data Exchange (ETDEWEB)

    Refunjol, B.T. [Lagoven, S.A., Pdvsa (Venezuela); Lake, L.W. [Univ. of Texas, Austin, TX (United States)

    1997-08-01

    Quantification of the spatial distribution of properties is important for many reservoir-engineering applications. But, before applying any reservoir-characterization technique, the type of problem to be tackled and the information available should be analyzed. This is important because difficulties arise in reservoirs where production records are the only information for analysis. This paper presents the results of a practical technique to determine preferential flow trends in a reservoir. The technique is a combination of reservoir geology, tracer data, and Spearman rank correlation coefficient analysis. The Spearman analysis, in particular, will prove to be important because it appears to be insightful and uses injection/production data that are prevalent in circumstances where other data are nonexistent. The technique is applied to the North Buck Draw field, Campbell County, Wyoming. This work provides guidelines to assess information about reservoir continuity in interwell regions from widely available measurements of production and injection rates at existing wells. The information gained from the application of this technique can contribute to both the daily reservoir management and the future design, control, and interpretation of subsequent projects in the reservoir, without the need for additional data.
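
    For illustration, the Spearman rank correlation step can be sketched as follows; the rate series below are hypothetical and simply stand in for monthly injection/production records.

```python
# Minimal sketch: Spearman rank correlation between an injector's and a
# producer's rate series, as used to infer preferential flow trends.
# The series are hypothetical; in practice they come from monthly records.
import numpy as np
from scipy.stats import spearmanr

injection_rate = np.array([1200, 1350, 1100, 1500, 1480, 1300, 1250, 1400])  # e.g. bbl/day
production_rate = np.array([310, 340, 290, 395, 380, 335, 320, 355])         # e.g. bbl/day

rho, p_value = spearmanr(injection_rate, production_rate)
print(f"Spearman rank correlation: {rho:.2f} (p = {p_value:.3f})")
# A high positive rho suggests hydraulic communication between the
# injector-producer pair; values near zero suggest poor connectivity.
```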

  7. Incorporation of Socio-Economic Features' Ranking in Multicriteria Analysis Based on Ecosystem Services for Marine Protected Area Planning.

    Science.gov (United States)

    Portman, Michelle E; Shabtay-Yanai, Ateret; Zanzuri, Asaf

    2016-01-01

    Developed decades ago for spatial choice problems related to zoning in the urban planning field, multicriteria analysis (MCA) has more recently been applied to environmental conflicts and presented in several documented cases for the creation of protected area management plans. Its application is considered here for the development of zoning as part of a proposed marine protected area management plan. The case study incorporates spatially-explicit conservation features while considering stakeholder preferences, expert opinion and characteristics of data quality. It involves the weighting of criteria using a modified analytical hierarchy process. Experts ranked physical attributes which include socio-economically valued physical features. The parameters used for the ranking of (physical) attributes important for socio-economic reasons are derived from the field of ecosystem services assessment. Inclusion of these feature values results in protection that emphasizes those areas closest to shore, most likely because of accessibility and familiarity parameters and because of data biases. Therefore, other spatial conservation prioritization methods should be considered to supplement the MCA and efforts should be made to improve data about ecosystem service values farther from shore. Otherwise, the MCA method allows incorporation of expert and stakeholder preferences and ecosystem services values while maintaining the advantages of simplicity and clarity.
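
    The paper's modified analytical hierarchy process is not specified here; the sketch below shows only the standard AHP step of deriving criterion weights from a reciprocal pairwise comparison matrix via its principal eigenvector. The matrix values and criterion names are hypothetical.

```python
# Standard AHP weighting step (a sketch, not the paper's modified procedure):
# derive criterion weights from a pairwise comparison matrix via its
# principal eigenvector, and check consistency.
import numpy as np

# Hypothetical 3x3 comparison matrix (Saaty 1-9 scale); criteria could be,
# say, conservation value, accessibility, and data quality.
A = np.array([
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 2.0],
    [1/5, 1/2, 1.0],
])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)                 # principal eigenvalue
w = np.abs(eigvecs[:, k].real)
w /= w.sum()                                # normalized priority weights

lambda_max = eigvals[k].real
n = A.shape[0]
ci = (lambda_max - n) / (n - 1)             # consistency index
cr = ci / 0.58                              # Saaty's random index for n = 3
print("weights:", np.round(w, 3), "CR:", round(cr, 3))  # CR < 0.1 is acceptable
```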

  8. Incorporation of Socio-Economic Features' Ranking in Multicriteria Analysis Based on Ecosystem Services for Marine Protected Area Planning.

    Directory of Open Access Journals (Sweden)

    Michelle E Portman

    Full Text Available Developed decades ago for spatial choice problems related to zoning in the urban planning field, multicriteria analysis (MCA) has more recently been applied to environmental conflicts and presented in several documented cases for the creation of protected area management plans. Its application is considered here for the development of zoning as part of a proposed marine protected area management plan. The case study incorporates spatially-explicit conservation features while considering stakeholder preferences, expert opinion and characteristics of data quality. It involves the weighting of criteria using a modified analytical hierarchy process. Experts ranked physical attributes which include socio-economically valued physical features. The parameters used for the ranking of (physical) attributes important for socio-economic reasons are derived from the field of ecosystem services assessment. Inclusion of these feature values results in protection that emphasizes those areas closest to shore, most likely because of accessibility and familiarity parameters and because of data biases. Therefore, other spatial conservation prioritization methods should be considered to supplement the MCA and efforts should be made to improve data about ecosystem service values farther from shore. Otherwise, the MCA method allows incorporation of expert and stakeholder preferences and ecosystem services values while maintaining the advantages of simplicity and clarity.

  9. Methodical bases of geodemographic forecasting

    Directory of Open Access Journals (Sweden)

    Катерина Сегіда

    2016-10-01

    Full Text Available The article addresses methodological features of forecasting population size and composition. The essence and features of probabilistic demographic forecasting are considered for both the component method and the method of dynamic (time) series, and the input indicators required for each type of forecast are specified. It is noted that the geodemographic forecast is an important component of a regional geodemographic characteristic. The development of a demographic forecast by the component method (age shifting) is described, and the basic calculation formulae are presented, including the demographic balance equation, the age- and sex-specific shifting formula, and the survival coefficient. The basic methodical principles of demographic forecasting by extrapolation (dynamic series) are also given, with calculations based on generalized indicators such as the average net increase, the average growth rate, and the average rate of increase. To develop a population forecast, retrospective extrapolation (for the short-term forecast) and the component method (for the mid-term forecast) are most often used. An example of such a forecast developed by the component method for the sex and age structure of the population of Kharkiv region is provided with a step-by-step explanation of the calculation, and a forecast of Kharkiv region's population is also developed by the dynamic-series method. Having calculated the main forecast indicators by administrative unit, it is possible to determine features of further regional demographic development and to reveal internal territorial differences in demographic development. Applying individual forecasting methods allows forecasts for particular indicators, but the essential variety, nonlinearity and non-stationarity of the processes constituting demographic development force the search for new approaches and
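
    For reference, the demographic balance equation mentioned in the abstract is standard; one common statement (the notation here is an assumption, not taken from the article) is:

```latex
\[
P_{t+1} = P_{t} + B_{(t,\,t+1)} - D_{(t,\,t+1)} + I_{(t,\,t+1)} - E_{(t,\,t+1)},
\]
```

    where $P_t$ is the population at time $t$, and $B$, $D$, $I$, $E$ are births, deaths, in-migration and out-migration over the interval.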

  10. Sensitivity ranking for freshwater invertebrates towards hydrocarbon contaminants.

    Science.gov (United States)

    Gerner, Nadine V; Cailleaud, Kevin; Bassères, Anne; Liess, Matthias; Beketov, Mikhail A

    2017-11-01

    Hydrocarbons are of utmost economic importance but may also cause substantial ecological impacts due to accidents or inadequate transportation and use. Currently, freshwater biomonitoring methods lack an indicator that can unequivocally reflect the impacts caused by hydrocarbons while being independent of the effects of other stressors. The aim of the present study was to develop a sensitivity ranking for freshwater invertebrates towards hydrocarbon contaminants, which can be used in hydrocarbon-specific bioindicators. We employed the Relative Sensitivity method and developed the sensitivity ranking S_hydrocarbons based on literature ecotoxicological data supplemented with rapid and mesocosm test results. A first validation of the ranking against an earlier field study has been conducted and showed the S_hydrocarbons ranking to be promising for application in sensitivity-based indicators. Thus, the first results indicate that the ranking can serve as the core component of future hydrocarbon-specific, sensitivity-trait-based bioindicators.

  11. Hierarchical Rank Aggregation with Applications to Nanotoxicology.

    Science.gov (United States)

    Patel, Trina; Telesca, Donatello; Rallo, Robert; George, Saji; Xia, Tian; Nel, André E

    2013-06-01

    The development of high throughput screening (HTS) assays in the field of nanotoxicology provides new opportunities for the hazard assessment and ranking of engineered nanomaterials (ENMs). It is often necessary to rank lists of materials based on multiple risk assessment parameters, often aggregated across several measures of toxicity and possibly spanning an array of experimental platforms. Bayesian models coupled with the optimization of loss functions have been shown to provide an effective framework for conducting inference on ranks. In this article we present various loss-function-based ranking approaches for comparing ENMs within experiments and toxicity parameters. Additionally, we propose a framework for the aggregation of ranks across different sources of evidence while allowing for differential weighting of this evidence based on its reliability and importance in risk ranking. We apply these methods to high throughput toxicity data on two human cell lines, exposed to eight different nanomaterials, and measured in relation to four cytotoxicity outcomes. This article has supplementary material online.
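
    The paper's Bayesian, loss-function-based aggregation is not reproduced here; the sketch below shows only the simpler idea of aggregating ranks across evidence sources with reliability weights (a weighted Borda-style average). The materials, ranks and weights are hypothetical.

```python
# Simple weighted rank aggregation sketch (weighted average of ranks),
# illustrating aggregation across evidence sources with differential
# reliability weights. NOT the paper's Bayesian loss-function approach.
import numpy as np

materials = ["ENM_A", "ENM_B", "ENM_C", "ENM_D"]
# Rows: evidence sources (e.g. cytotoxicity outcomes); columns: materials.
# Entry = rank of the material under that source (1 = most hazardous).
ranks = np.array([
    [1, 3, 2, 4],
    [2, 1, 3, 4],
    [1, 2, 4, 3],
])
weights = np.array([0.5, 0.3, 0.2])   # reliability/importance of each source

aggregate = weights @ ranks            # weighted mean rank per material
order = np.argsort(aggregate)          # lower aggregate rank = higher hazard
for pos, idx in enumerate(order, start=1):
    print(pos, materials[idx], round(float(aggregate[idx]), 2))
```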

  12. VHB-JOURQUAL2: Method, Results, and Implications of the German Academic Association for Business Research's Journal Ranking

    Directory of Open Access Journals (Sweden)

    Ulf Schrader

    2009-12-01

    Full Text Available VHB-JOURQUAL represents the official journal ranking of the German Academic Association for Business Research. Since its introduction in 2003, the ranking has become the most influential journal evaluation approach in German-speaking countries, impacting several key managerial decisions of German, Austrian, and Swiss business schools. This article reports the methodological approach of the ranking’s second edition. It also presents the main results and additional analyses on the validity of the rating and the underlying decision processes of the respondents. Selected implications for researchers and higher-education institutions are discussed.

  13. Ranking production units according to marginal efficiency contribution

    DEFF Research Database (Denmark)

    Ghiyasi, Mojtaba; Hougaard, Jens Leth

    League tables associated with various forms of service activities from schools to hospitals illustrate the public need for ranking institutions by their productive performance. We present a new method for ranking production units which is based on each unit's marginal contribution to the technical...

  14. Ranking the strategies for Indian medical tourism sector through the integration of SWOT analysis and TOPSIS method.

    Science.gov (United States)

    Ajmera, Puneeta

    2017-10-09

    Purpose Organizations have to evaluate their internal and external environments in this highly competitive world. Strengths, weaknesses, opportunities and threats (SWOT) analysis is a very useful technique which analyzes the strengths, weaknesses, opportunities and threats of an organization for taking strategic decisions, and it also provides a foundation for the formulation of strategies. The drawback of SWOT analysis, however, is that it does not quantify the importance of the individual factors affecting the organization; the individual factors are described only briefly, without weighting. For this reason, SWOT analysis can be integrated with a multiple attribute decision-making (MADM) technique such as the technique for order preference by similarity to ideal solution (TOPSIS) or the analytical hierarchy process to evaluate the best alternative among the available strategic alternatives. The paper aims to discuss these issues. Design/methodology/approach In this study, SWOT analysis is integrated with a multicriteria decision-making technique called TOPSIS to rank different strategies for Indian medical tourism in order of priority. Findings The SO strategy (providing the best facilitation and care to medical tourists, on par with developed countries) is the best strategy; it matches the four elements of S, W, O and T of the SWOT matrix and 35 strategic indicators. Practical implications This paper proposes a solution based on a combined SWOT analysis and TOPSIS approach to help organizations evaluate and select strategies. Originality/value Creating a new technology or administering a new strategy always meets some degree of resistance from employees. To minimize resistance, the author has used TOPSIS as it involves group thinking, requiring every manager of the organization to analyze and evaluate different alternatives and an average measure of each parameter in the final decision matrix.
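
    For illustration, a minimal TOPSIS implementation is sketched below; the decision matrix, weights and criterion directions are hypothetical and do not reproduce the paper's 35 strategic indicators.

```python
# Minimal TOPSIS sketch: rank alternatives by closeness to the ideal solution.
# Decision matrix, weights and criterion directions are hypothetical.
import numpy as np

def topsis(X, weights, benefit):
    """X: alternatives x criteria; weights sum to 1; benefit[j]=True if larger is better."""
    R = X / np.linalg.norm(X, axis=0)          # vector normalization
    V = R * weights                            # weighted normalized matrix
    ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
    anti  = np.where(benefit, V.min(axis=0), V.max(axis=0))
    d_plus  = np.linalg.norm(V - ideal, axis=1)
    d_minus = np.linalg.norm(V - anti,  axis=1)
    return d_minus / (d_plus + d_minus)        # closeness coefficient in [0, 1]

X = np.array([[7.0, 9.0, 6.0],    # e.g. SO strategy
              [6.0, 7.0, 8.0],    # ST strategy
              [5.0, 8.0, 7.0],    # WO strategy
              [4.0, 6.0, 9.0]])   # WT strategy
weights = np.array([0.5, 0.3, 0.2])
benefit = np.array([True, True, False])        # third criterion treated as a cost

scores = topsis(X, weights, benefit)
print(np.argsort(-scores) + 1)                 # strategies from best to worst
```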

  15. Binding mode prediction and MD/MMPBSA-based free energy ranking for agonists of REV-ERBα/NCoR

    Science.gov (United States)

    Westermaier, Yvonne; Ruiz-Carmona, Sergio; Theret, Isabelle; Perron-Sierra, Françoise; Poissonnet, Guillaume; Dacquet, Catherine; Boutin, Jean A.; Ducrot, Pierre; Barril, Xavier

    2017-08-01

    The knowledge of the free energy of binding of small molecules to a macromolecular target is crucial in drug design, as is the ability to predict the functional consequences of binding. We highlight how a molecular dynamics (MD)-based approach can be used to predict the free energy of small molecules, and to provide priorities for the synthesis and the validation via in vitro tests. Here, we study the dynamics and energetics of the nuclear receptor REV-ERBα with its co-repressor NCoR and 35 novel agonists. Our in silico approach combines molecular docking, molecular dynamics (MD), solvent-accessible surface area (SASA) and molecular mechanics Poisson-Boltzmann surface area (MMPBSA) calculations. While docking yielded initial hints on the binding modes, their stability was assessed by MD. The SASA calculations revealed that the presence of the ligand led to a higher exposure of hydrophobic REV-ERB residues for NCoR recruitment. MMPBSA was very successful in ranking ligands by potency in a retrospective and prospective manner. In particular, the prospective MMPBSA ranking-based validations for four compounds, three predicted to be active and one weakly active, were confirmed experimentally.

  16. Ranking nodes in growing networks: When PageRank fails.

    Science.gov (United States)

    Mariani, Manuel Sebastian; Medo, Matúš; Zhang, Yi-Cheng

    2015-11-10

    PageRank is arguably the most popular ranking algorithm which is being applied in real systems ranging from information to biological and infrastructure networks. Despite its outstanding popularity and broad use in different areas of science, the relation between the algorithm's efficacy and properties of the network on which it acts has not yet been fully understood. We study here PageRank's performance on a network model supported by real data, and show that realistic temporal effects make PageRank fail in individuating the most valuable nodes for a broad range of model parameters. Results on real data are in qualitative agreement with our model-based findings. This failure of PageRank reveals that the static approach to information filtering is inappropriate for a broad class of growing systems, and suggests that time-dependent algorithms that are based on the temporal linking patterns of these systems are needed to better rank the nodes.
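
    As a reference point, the static PageRank algorithm whose limitations the paper studies can be computed by power iteration; a minimal sketch on a small, hypothetical adjacency matrix follows.

```python
# Standard PageRank by power iteration (the static algorithm examined in
# the paper). The small adjacency matrix below is hypothetical.
import numpy as np

def pagerank(A, alpha=0.85, tol=1e-10, max_iter=200):
    n = A.shape[0]
    out_deg = A.sum(axis=1)
    # Column-stochastic transition matrix; dangling nodes spread uniformly.
    M = np.where(out_deg[:, None] > 0, A / np.maximum(out_deg[:, None], 1), 1.0 / n).T
    r = np.full(n, 1.0 / n)
    for _ in range(max_iter):
        r_new = alpha * M @ r + (1 - alpha) / n   # teleportation term
        if np.abs(r_new - r).sum() < tol:
            break
        r = r_new
    return r_new

A = np.array([[0, 1, 1, 0],
              [0, 0, 1, 0],
              [1, 0, 0, 1],
              [0, 0, 1, 0]], dtype=float)
print(np.argsort(-pagerank(A)))   # nodes from highest to lowest score
```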

  17. PageRank versatility analysis of multilayer modality-based network for exploring the evolution of oil-water slug flow.

    Science.gov (United States)

    Gao, Zhong-Ke; Dang, Wei-Dong; Li, Shan; Yang, Yu-Xuan; Wang, Hong-Tao; Sheng, Jing-Ran; Wang, Xiao-Fan

    2017-07-14

    Numerous irregular flow structures exist in complicated multiphase flow and result in many disparate spatial dynamical flow behaviors. Vertical oil-water slug flow continues to attract considerable research interest on account of its practical importance. Based on the spatial transient flow information acquired through our designed double-layer distributed-sector conductance sensor, we construct a multilayer modality-based network to encode the intricate spatial flow behavior. In particular, we calculate the PageRank versatility and the multilayer weighted clustering coefficient to quantitatively explore the inferred multilayer modality-based networks. Our analysis allows characterizing the complicated evolution of oil-water slug flow, from the initial formation of oil slugs, to the subsequent collision and coalescence among oil slugs, and then to dispersed oil bubbles. These properties render our method particularly powerful for mining the essential flow features from the multilayer sensor measurements.

  18. OutRank

    DEFF Research Database (Denmark)

    Müller, Emmanuel; Assent, Ira; Steinhausen, Uwe

    2008-01-01

    Outlier detection is an important data mining task for consistency checks, fraud detection, etc. Binary decision making on whether or not an object is an outlier is not appropriate in many applications and moreover hard to parametrize. Thus, recently, methods for outlier ranking have been proposed...

  19. CONSTRUCTION OF REGULAR LDPC LIKE CODES BASED ON FULL RANK CODES AND THEIR ITERATIVE DECODING USING A PARITY CHECK TREE

    Directory of Open Access Journals (Sweden)

    H. Prashantha Kumar

    2011-09-01

    Full Text Available Low density parity check (LDPC) codes are capacity-approaching codes, which means that practical constructions exist that allow the noise threshold to be set very close to the theoretical Shannon limit for a memoryless channel. LDPC codes are finding increasing use in applications such as LTE networks, digital television, high-density data storage systems, deep-space communication systems, etc. Several algebraic and combinatorial methods are available for constructing LDPC codes. In this paper we discuss a novel low-complexity algebraic method for constructing regular LDPC-like codes derived from full rank codes. We demonstrate that by employing these codes over AWGN channels, coding gains in excess of 2 dB over uncoded systems can be realized when soft iterative decoding using a parity check tree is employed.

  20. From rankings to mission.

    Science.gov (United States)

    Kirch, Darrell G; Prescott, John E

    2013-08-01

    Since the 1980s, school ranking systems have been a topic of discussion among leaders of higher education. Various ranking systems are based on inadequate data that fail to illustrate the complex nature and special contributions of the institutions they purport to rank, including U.S. medical schools, each of which contributes uniquely to meeting national health care needs. A study by Tancredi and colleagues in this issue of Academic Medicine illustrates the limitations of rankings specific to primary care training programs. This commentary discusses, first, how each school's mission and strengths, as well as the impact it has on the community it serves, are distinct, and, second, how these schools, which are each unique, are poorly represented by overly subjective ranking methodologies. Because academic leaders need data that are more objective to guide institutional development, the Association of American Medical Colleges (AAMC) has been developing tools to provide valid data that are applicable to each medical school. Specifically, the AAMC's Medical School Admissions Requirements and its Missions Management Tool each provide a comprehensive assessment of medical schools that leaders are using to drive institutional capacity building. This commentary affirms the importance of mission while challenging the leaders of medical schools, teaching hospitals, and universities to use reliable data to continually improve the quality of their training programs to improve the health of all.

  1. High-Accuracy Approximation of High-Rank Derivatives: Isotropic Finite Differences Based on Lattice-Boltzmann Stencils

    Directory of Open Access Journals (Sweden)

    Keijo Kalervo Mattila

    2014-01-01

    Full Text Available We propose isotropic finite differences for high-accuracy approximation of high-rank derivatives. These finite differences are based on direct application of lattice-Boltzmann stencils. The presented finite-difference expressions are valid in any dimension, particularly in two and three dimensions, and any lattice-Boltzmann stencil isotropic enough can be utilized. A theoretical basis for the proposed utilization of lattice-Boltzmann stencils in the approximation of high-rank derivatives is established. In particular, the isotropy and accuracy properties of the proposed approximations are derived directly from this basis. Furthermore, in this formal development, we extend the theory of Hermite polynomial tensors in the case of discrete spaces and present expressions for the discrete inner products between monomials and Hermite polynomial tensors. In addition, we prove an equivalency between two approaches for constructing lattice-Boltzmann stencils. For the numerical verification of the presented finite differences, we introduce 5th-, 6th-, and 8th-order two-dimensional lattice-Boltzmann stencils.

  2. High-accuracy approximation of high-rank derivatives: isotropic finite differences based on lattice-Boltzmann stencils.

    Science.gov (United States)

    Mattila, Keijo Kalervo; Hegele Júnior, Luiz Adolfo; Philippi, Paulo Cesar

    2014-01-01

    We propose isotropic finite differences for high-accuracy approximation of high-rank derivatives. These finite differences are based on direct application of lattice-Boltzmann stencils. The presented finite-difference expressions are valid in any dimension, particularly in two and three dimensions, and any lattice-Boltzmann stencil isotropic enough can be utilized. A theoretical basis for the proposed utilization of lattice-Boltzmann stencils in the approximation of high-rank derivatives is established. In particular, the isotropy and accuracy properties of the proposed approximations are derived directly from this basis. Furthermore, in this formal development, we extend the theory of Hermite polynomial tensors in the case of discrete spaces and present expressions for the discrete inner products between monomials and Hermite polynomial tensors. In addition, we prove an equivalency between two approaches for constructing lattice-Boltzmann stencils. For the numerical verification of the presented finite differences, we introduce 5th-, 6th-, and 8th-order two-dimensional lattice-Boltzmann stencils.
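
    As a concrete illustration of the general idea (not an expression taken from the paper), the weights of the common D2Q9 lattice-Boltzmann stencil yield the well-known isotropic nine-point approximation of the Laplacian:

```latex
\[
\nabla^2 f(\mathbf{x}) \;\approx\; \frac{2}{c_s^2\,\Delta x^2}\sum_i w_i
\bigl[f(\mathbf{x}+\mathbf{c}_i\,\Delta x)-f(\mathbf{x})\bigr]
\;=\;\frac{1}{6\,\Delta x^2}\Bigl[\,4\!\!\sum_{\text{nearest}}\!\! f
\;+\sum_{\text{diagonal}}\! f \;-\; 20\,f(\mathbf{x})\Bigr],
\]
```

    where $w_i$ and $\mathbf{c}_i$ are the D2Q9 weights and lattice velocities and $c_s^2 = 1/3$; the second form follows from the weights $4/9$, $1/9$ and $1/36$.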

  3. Drogue detection for vision-based autonomous aerial refueling via low rank and sparse decomposition with multiple features

    Science.gov (United States)

    Gao, Shibo; Cheng, Yongmei; Song, Chunhua

    2013-09-01

    Vision-based probe-and-drogue autonomous aerial refueling is a demanding task in modern aviation for both manned and unmanned aircraft. A key issue is to accurately determine the relative orientation and position of the drogue and the probe for the relative navigation system during the approach phase, which requires locating the drogue precisely. Drogue detection is challenging due to the disorderly motion of the drogue caused by both the tanker wake vortex and atmospheric turbulence. In this paper, the problem of drogue detection is treated as a moving-object detection problem. A drogue detection algorithm based on low-rank and sparse decomposition with local multiple features is proposed. The global and local information of the drogue is introduced into the detection model in a unified way. Experimental results on real autonomous aerial refueling videos show that the proposed drogue detection algorithm is effective.
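
    The paper's formulation adds multiple local features; the sketch below shows only the generic low-rank plus sparse decomposition (robust PCA via an inexact augmented Lagrange multiplier scheme) that such detectors build on. The parameter choices follow common defaults and are assumptions, and the tiny demo matrix is synthetic.

```python
# Generic robust PCA sketch: decompose a data matrix into a low-rank part
# ("background") and a sparse part ("moving object"). Not the paper's
# multi-feature model; parameters follow common defaults.
import numpy as np

def _shrink(X, tau):
    """Soft-thresholding operator."""
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def _svd_shrink(X, tau):
    """Singular value thresholding."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(_shrink(s, tau)) @ Vt

def rpca(D, lam=None, tol=1e-7, max_iter=500):
    m, n = D.shape
    lam = lam if lam is not None else 1.0 / np.sqrt(max(m, n))
    mu = 0.25 * m * n / np.abs(D).sum()        # common default step size
    Y = np.zeros_like(D)
    S = np.zeros_like(D)
    norm_D = np.linalg.norm(D)
    for _ in range(max_iter):
        L = _svd_shrink(D - S + Y / mu, 1.0 / mu)
        S = _shrink(D - L + Y / mu, lam / mu)
        residual = D - L - S
        Y += mu * residual
        if np.linalg.norm(residual) / norm_D < tol:
            break
    return L, S

# Tiny demo: rank-one "background" plus a few sparse "foreground" spots.
rng = np.random.default_rng(0)
frames = np.outer(rng.random(40), rng.random(30))
frames[5, 7] += 5.0
frames[20, 3] += 5.0
L, S = rpca(frames)
print("rank of L:", np.linalg.matrix_rank(L, tol=1e-3),
      "nonzeros in S:", int((np.abs(S) > 1e-3).sum()))
```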

  4. AptRank: an adaptive PageRank model for protein function prediction on bi-relational graphs.

    Science.gov (United States)

    Jiang, Biaobin; Kloster, Kyle; Gleich, David F; Gribskov, Michael

    2017-06-15

    Diffusion-based network models are widely used for protein function prediction using protein network data and have been shown to outperform neighborhood-based and module-based methods. Recent studies have shown that integrating the hierarchical structure of the Gene Ontology (GO) data dramatically improves prediction accuracy. However, previous methods usually either used the GO hierarchy to refine the prediction results of multiple classifiers, or flattened the hierarchy into a function-function similarity kernel. No study has taken the GO hierarchy into account together with the protein network as a two-layer network model. We first construct a Bi-relational graph (Birg) model comprising both protein-protein association and function-function hierarchical networks. We then propose two diffusion-based methods, BirgRank and AptRank, both of which use PageRank to diffuse information on this two-layer graph model. BirgRank is a direct application of traditional PageRank with fixed decay parameters. In contrast, AptRank utilizes an adaptive diffusion mechanism to improve the performance of BirgRank. We evaluate the ability of both methods to predict protein function on yeast, fly and human protein datasets, and compare with four previous methods: GeneMANIA, TMC, ProteinRank and clusDCA. We design four different validation strategies: missing function prediction, de novo function prediction, guided function prediction and newly discovered function prediction to comprehensively evaluate predictability of all six methods. We find that both BirgRank and AptRank outperform the previous methods, especially in missing function prediction when using only 10% of the data for training. The MATLAB code is available at https://github.rcac.purdue.edu/mgribsko/aptrank. Contact: gribskov@purdue.edu. Supplementary data are available at Bioinformatics online.

  5. LogDet Rank Minimization with Application to Subspace Clustering.

    Science.gov (United States)

    Kang, Zhao; Peng, Chong; Cheng, Jie; Cheng, Qiang

    2015-01-01

    Low-rank matrices are desired in many machine learning and computer vision problems. Most recent studies use the nuclear norm as a convex surrogate of the rank operator. However, the nuclear norm simply adds all singular values together, so the rank may not be well approximated in practical problems. In this paper, we propose using a log-determinant (LogDet) function as a smooth and closer, though nonconvex, approximation to rank for obtaining a low-rank representation in subspace clustering. An augmented Lagrange multiplier strategy is applied to iteratively optimize the LogDet-based nonconvex objective function on potentially large-scale data. By making use of the angular information of the principal directions of the resulting low-rank representation, an affinity graph matrix is constructed for spectral clustering. Experimental results on motion segmentation and face clustering data demonstrate that the proposed method often outperforms state-of-the-art subspace clustering algorithms.
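
    The general idea of a log-determinant surrogate for rank (Fazel's heuristic; the paper's exact variant may differ) is to replace the nuclear norm by a function that grows only logarithmically in the singular values:

```latex
\[
\operatorname{rank}(X)\;\approx\;\log\det\!\bigl(XX^{\mathsf T}+\delta I\bigr)
\;=\;\sum_i \log\bigl(\sigma_i^2(X)+\delta\bigr)
\qquad\text{vs.}\qquad
\|X\|_{*} \;=\; \sum_i \sigma_i(X),
\]
```

    so large singular values are penalized far less than under the nuclear norm, which preserves the dominant structure while still driving small singular values toward zero.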

  6. Ranking procedure based on mechanical, durability and thermal behavior of mortars with incorporation of phase change materials

    Directory of Open Access Journals (Sweden)

    Cunha, S.

    2015-12-01

    Full Text Available Nowadays, considering the high variety of construction products, adequate material selection, based on their properties and function, becomes increasingly important. In this research, a ranking procedure developed by Czarnecki and Lukowski is applied to mortars with incorporation of phase change materials (PCM). The ranking procedure transforms experimental results of properties into one numerical value. The products can be classified according to their individual properties or even an optimized combination of different properties. The main purpose of this study was the ranking of mortars with incorporation of different contents of PCM based on different binders. Aerial lime, hydraulic lime, gypsum and cement were the binders studied. For each binder, three different mortars were developed and tested: reference mortars, mortars with incorporation of 40% of PCM, and mortars with incorporation of 40% of PCM and 1% of fibers. Results show that the incorporation of PCM in mortars changes their global performance.

  7. A Statistical Ontology-Based Approach to Ranking for Multiword Search

    Science.gov (United States)

    Kim, Jinwoo

    2013-01-01

    Keyword search is a prominent data retrieval method for the Web, largely because the simple and efficient nature of keyword processing allows a large amount of information to be searched with fast response. However, keyword search approaches do not formally capture the clear meaning of a keyword query and fail to address the semantic relationships…

  8. Manifold ranking based scoring system with its application to cardiac arrest prediction: A retrospective study in emergency department patients.

    Science.gov (United States)

    Liu, Tianchi; Lin, Zhiping; Ong, Marcus Eng Hock; Koh, Zhi Xiong; Pek, Pin Pin; Yeo, Yong Kiang; Oh, Beom-Seok; Ho, Andrew Fu Wah; Liu, Nan

    2015-12-01

    The recently developed geometric distance scoring system has shown the effectiveness of scoring systems in predicting cardiac arrest within 72 h and the potential to predict other clinical outcomes. However, the geometric distance scoring system predicts scores based only on the local structure embedded in the data, thus leaving much room for improvement in terms of prediction accuracy. We developed a novel scoring system for predicting cardiac arrest within 72 h. The scoring system was developed based on a semi-supervised learning algorithm, manifold ranking, which explores both the local and global consistency of the data. System evaluation was conducted on emergency department patients' data, including both vital signs and heart rate variability (HRV) parameters. Comparison of the proposed scoring system with previous work was given in terms of sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV). Out of 1025 patients, 52 (5.1%) met the primary outcome. Experimental results show that the proposed scoring system was able to achieve a higher area under the curve (AUC) on both the balanced dataset (0.907 vs. 0.824) and the imbalanced dataset (0.774 vs. 0.734) compared to the geometric distance scoring system. The proposed scoring system improved the prediction accuracy by utilizing the global consistency of the training data. We foresee the potential of extending this scoring system, as well as the manifold ranking algorithm, to other medical decision-making problems. Furthermore, we will investigate the parameter selection process and other techniques to improve performance on the imbalanced dataset. Copyright © 2015 Elsevier Ltd. All rights reserved.
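
    The scoring system itself is not reproduced here; the sketch below shows the generic manifold-ranking step in the style of Zhou et al., which propagates seed labels over a similarity graph. The features, labels and parameters are hypothetical.

```python
# Generic manifold ranking sketch (closed-form solution, Zhou et al.-style),
# not the paper's full scoring system. Features and labels are hypothetical.
import numpy as np

def manifold_rank(X, y, alpha=0.99, sigma=1.0):
    """X: samples x features; y: initial scores (1 for positive seeds, 0 otherwise)."""
    # Gaussian affinity matrix with zero self-similarity.
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.exp(-sq / (2 * sigma ** 2))
    np.fill_diagonal(W, 0.0)
    # Symmetric normalization S = D^{-1/2} W D^{-1/2}.
    d = W.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(np.maximum(d, 1e-12)))
    S = D_inv_sqrt @ W @ D_inv_sqrt
    # Closed-form ranking scores F* = (I - alpha * S)^(-1) y.
    n = X.shape[0]
    return np.linalg.solve(np.eye(n) - alpha * S, y)

rng = np.random.default_rng(1)
X = rng.normal(size=(50, 6))       # e.g. vital signs + HRV parameters
y = np.zeros(50)
y[:5] = 1.0                        # hypothetical positive (cardiac arrest) seeds
scores = manifold_rank(X, y)
print(np.argsort(-scores)[:10])    # ten highest-risk patients in this toy example
```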

  9. A consensus-based tool for ranking the risk of blood-transmissible infections

    NARCIS (Netherlands)

    Oei, Welling; Neslo, Rabin; Janssen, Mart P.|info:eu-repo/dai/nl/304818208

    2016-01-01

    BACKGROUND: Emerging infectious diseases (EIDs) pose a threat to blood transfusion safety. Despite a lack of evidence, safety interventions may be required. However, what should decision makers base their decisions on? A model was developed that allows valuing the perceived risk of an EID for blood

  10. An AHP-based methodology to rank Critical Success Factors of Executive Information Systems

    OpenAIRE

    Salmerón, Jose L.; Herrero, Inés

    2005-01-01

    For academics and practitioners concerned with computer-based Information Systems, one central issue is the study of Critical Success Factors of Information Systems development and implementation. Whereas several Critical Success Factors analyses appear in the literature, most of them do not have any technical background. In this paper we propose the use of the Analytic Hierarchy Process to set Critical Success Factors priorities. Results suggest that technical elements are less critical than...

  11. Ranking nodes in growing networks: When PageRank fails

    Science.gov (United States)

    Mariani, Manuel Sebastian; Medo, Matúš; Zhang, Yi-Cheng

    2015-11-01

    PageRank is arguably the most popular ranking algorithm which is being applied in real systems ranging from information to biological and infrastructure networks. Despite its outstanding popularity and broad use in different areas of science, the relation between the algorithm’s efficacy and properties of the network on which it acts has not yet been fully understood. We study here PageRank’s performance on a network model supported by real data, and show that realistic temporal effects make PageRank fail in individuating the most valuable nodes for a broad range of model parameters. Results on real data are in qualitative agreement with our model-based findings. This failure of PageRank reveals that the static approach to information filtering is inappropriate for a broad class of growing systems, and suggests that time-dependent algorithms that are based on the temporal linking patterns of these systems are needed to better rank the nodes.

  12. Ranking beta sheet topologies of proteins

    DEFF Research Database (Denmark)

    Fonseca, Rasmus; Helles, Glennie; Winter, Pawel

    2010-01-01

    One of the challenges of protein structure prediction is to identify long-range interactions between amino acids. To reliably predict such interactions, we enumerate, score and rank all beta-topologies (partitions of beta-strands into sheets, orderings of strands within sheets and orientations of paired strands) of a given protein. We show that the beta-topology corresponding to the native structure is, with high probability, among the top-ranked. Since full enumeration is very time-consuming, we also suggest a method to deal with proteins with many beta-strands. The results reported in this paper are highly relevant for ab initio protein structure prediction methods based on decoy generation. The top-ranked beta-topologies can be used to find initial conformations from which conformational searches can be started. They can also be used to filter decoys by removing those with poorly...

  13. Diversifying customer review rankings.

    Science.gov (United States)

    Krestel, Ralf; Dokoohaki, Nima

    2015-06-01

    E-commerce Web sites owe much of their popularity to consumer reviews accompanying product descriptions. On-line customers spend hours and hours going through heaps of textual reviews to decide which products to buy. At the same time, each popular product has thousands of user-generated reviews, making it impossible for a buyer to read everything. Current approaches to display reviews to users or recommend an individual review for a product are based on the recency or helpfulness of each review. In this paper, we present a framework to rank product reviews by optimizing the coverage of the ranking with respect to sentiment or aspects, or by summarizing all reviews with the top-K reviews in the ranking. To accomplish this, we make use of the assigned star rating for a product as an indicator for a review's sentiment polarity and compare bag-of-words (language model) with topic models (latent Dirichlet allocation) as a means to represent aspects. Our evaluation on manually annotated review data from a commercial review Web site demonstrates the effectiveness of our approach, outperforming plain recency ranking by 30% and obtaining best results by combining language and topic model representations. Copyright © 2015 Elsevier Ltd. All rights reserved.
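
    The framework's coverage objective over sentiments and topics is not reproduced exactly; the sketch below shows only the underlying greedy top-K selection that maximizes aspect coverage. The reviews and aspect sets are hypothetical; in practice they would come from a topic model.

```python
# Greedy top-K selection maximizing aspect coverage: a minimal sketch of the
# coverage idea, not the paper's LDA/sentiment formulation.
def greedy_coverage_ranking(review_aspects, k):
    covered, ranking = set(), []
    candidates = dict(review_aspects)
    for _ in range(min(k, len(candidates))):
        # Pick the review adding the most not-yet-covered aspects.
        best = max(candidates, key=lambda r: len(candidates[r] - covered))
        ranking.append(best)
        covered |= candidates.pop(best)
    return ranking

reviews = {
    "r1": {"battery", "screen"},
    "r2": {"battery"},
    "r3": {"price", "shipping"},
    "r4": {"screen", "price", "camera"},
}
print(greedy_coverage_ranking(reviews, k=2))   # prints ['r4', 'r1'] (ties broken by order)
```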

  14. Perbandingan Metode Analytical Hierarchy Process (AHP) dan Metode Preference Ranking Organization Method For Enrichment Evaluation (PROMETHEE) untuk Pemilihan Hardisk Eksternal

    OpenAIRE

    Pratama, Didi

    2015-01-01

    A decision support system gives the decision maker an alternative way of arriving at a choice. The many kinds of external hard drives, with their differing functions and uses, pose a new problem for the decision maker. The purpose of this research is to show how to choose an external hard drive based on user needs, on the values of the selection criteria, and with attention to the kind of decision involved. The decision support system used in this study is the Analytical Hierarchy Process (AHP) method...

  15. Phylogeny, historical biogeography, and taxonomic ranking of Parnassiinae (Lepidoptera, Papilionidae) based on morphology and seven genes.

    Science.gov (United States)

    Nazari, Vazrick; Zakharov, Evgueni V; Sperling, Felix A H

    2007-01-01

    We tested the taxonomic utility of morphology and seven mitochondrial or nuclear genes in a phylogenetic reconstruction of swallowtail butterflies in the subfamily Parnassiinae. Our data included 236 morphological characters and DNA sequences for seven genes that are commonly used to infer lepidopteran relationships (COI+COII, ND5, ND1, 16S, EF-1alpha, and wg; total 5775 bp). Nuclear genes performed best for inferring phylogenies, particularly at higher taxonomic levels, while there was substantial variation in performance among mitochondrial genes. Multiple analyses of molecular data (MP, ML and Bayesian) consistently produced a tree topology different from that obtained by morphology alone. Based on molecular evidence, sister-group relationships were confirmed between the genera Hypermnestra and Parnassius, as well as between Archon and Luehdorfia, while the monophyly of the subfamily was weakly supported. We recognize three tribes within Parnassiinae, with Archon and Luehdorfia forming the tribe Luehdorfiini Tutt, 1896 [stat. rev.]. Three fossil taxa were incorporated into a molecular clock analysis with biogeographic time constraints. Based on dispersal-vicariance (DIVA) analysis, the most recent common ancestor of Parnassiinae occurred in the Iranian Plateau and Central Asia to China. Early diversification of Parnassiinae took place at the same time that India collided into Eurasia, 65-42 million years ago.

  16. Design and Analysis of a Ranking Approach to Private Location-Based Services

    DEFF Research Database (Denmark)

    Yiu, Man Lung; Jensen, Christian Søndergaard; Møller, Jesper

    2011-01-01

    Users of mobile services wish to retrieve nearby points of interest without disclosing their locations to the services. This article addresses the challenge of optimizing the query performance while satisfying given location privacy and query accuracy requirements. The article's proposal, SpaceTwist, aims to offer location privacy for k nearest neighbor (kNN) queries at low communication cost without requiring a trusted anonymizer. The solution can be used with a conventional DBMS as well as with a server optimized for location-based services. In particular, we believe that this is the first solution that expresses the server-side functionality in a single SQL statement. In its basic form, SpaceTwist utilizes well-known incremental NN query processing on the server. When augmented with a server-side granular search technique, SpaceTwist is capable of exploiting relaxed query accuracy...

  17. Standard test method for ranking resistance of plastics to sliding wear using block-on-ring wear test—cumulative wear method

    CERN Document Server

    American Society for Testing and Materials. Philadelphia

    2003-01-01

    1.1 This test method covers laboratory procedures for determining the resistance of plastics to sliding wear. The test utilizes a block-on-ring friction and wear testing machine to rank plastics according to their sliding wear characteristics against metals or other solids. 1.2 An important attribute of this test is that it is very flexible. Any material that can be fabricated into, or applied to, blocks and rings can be tested. Thus, the potential materials combinations are endless. In addition, the test can be run with different gaseous atmospheres and elevated temperatures, as desired, to simulate service conditions. 1.3 Wear test results are reported as the volume loss in cubic millimetres for the block and ring. Materials of higher wear resistance will have lower volume loss. 1.4 The values stated in SI units are to be regarded as the standard. The values given in parentheses are for information only. 1.5 This standard does not purport to address all of the safety concerns, if any, associated with it...

  18. Social norm influences on evaluations of the risks associated with alcohol consumption: applying the rank-based decision by sampling model to health judgments.

    Science.gov (United States)

    Wood, Alex M; Brown, Gordon D A; Maltby, John

    2012-01-01

    The research first tested whether perceptions of other people's alcohol consumption influenced drinkers' perceptions of the riskiness of their own consumption. Second, the research tested how such comparisons are made: whether, for example, people compare their drinking to the 'average' drinker's or 'rank' their consumption amongst other people's. The latter untested possibility, suggested by the recent Decision by Sampling Model of judgment, would imply different cognitive mechanisms and suggest that information should be presented differently to people in social norm interventions. Study 1 surveyed students who provided information on (a) their own drinking, (b) their perceptions of the distribution of drinking in the UK and (c) their perceived risk of various alcohol-related disorders. Study 2 experimentally manipulated the rank of 'target' units of alcohol within the context of units viewed simultaneously. In both studies, the rank of an individual's drinking in a context of other drinkers predicted perceptions of developing alcohol-related disorders. There was no evidence for the alternative hypothesis that people compared with the average of other drinkers' consumptions. The position that subjects believed they occupied in the ranking of other drinkers predicted their perceived risk, and did so as strongly as how much they actually drank. Drinking comparisons are rank-based, which is consistent with other judgments in social, emotional and psychophysical domains. Interventions should be designed to work with people's natural ways of information processing, through providing clients with information on their drinking rank rather than how their drinking differs from the average.

  19. U.S. Natural Gas Storage Risk-Based Ranking Methodology and Results

    Energy Technology Data Exchange (ETDEWEB)

    Folga, Steve [Argonne National Lab. (ANL), Argonne, IL (United States); Portante, Edgar [Argonne National Lab. (ANL), Argonne, IL (United States); Shamsuddin, Shabbir [Argonne National Lab. (ANL), Argonne, IL (United States); Tompkins, Angeli [Argonne National Lab. (ANL), Argonne, IL (United States); Talaber, Leah [Argonne National Lab. (ANL), Argonne, IL (United States); McLamore, Mike [Argonne National Lab. (ANL), Argonne, IL (United States); Kavicky, Jim [Argonne National Lab. (ANL), Argonne, IL (United States); Conzelmann, Guenter [Argonne National Lab. (ANL), Argonne, IL (United States); Levin, Todd [Argonne National Lab. (ANL), Argonne, IL (United States)

    2016-10-01

    This report summarizes the methodology and models developed to assess the risk to energy delivery from the potential loss of underground gas storage (UGS) facilities located within the United States. The U.S. has a total of 418 existing storage fields, of which 390 are currently active. The models estimate the impacts of a disruption of each of the active UGS facilities on their owners/operators, including (1) local distribution companies (LDCs), (2) directly connected transporting pipelines and thus on the customers in downstream States, and (3) third-party entities and thus on contracted customers expecting the gas shipment. Impacts are measured across all natural gas customer classes. For the electric sector, impacts are quantified in terms of natural gas-fired electric generation capacity potentially affected from the loss of a UGS facility. For the purpose of calculating the overall supply risk, the overall consequence of the disruption of an UGS facility across all customer classes is expressed in terms of the number of expected equivalent residential customer outages per year, which combines the unit business interruption cost per customer class and the estimated number of affected natural gas customers with estimated probabilities of UGS disruptions. All models and analyses are based on publicly available data. The report presents a set of findings and recommendations in terms of data, further analyses, regulatory requirements and standards, and needs to improve gas/electric industry coordination for electric reliability.

  20. Hazard ranking system evaluation of CERCLA inactive waste sites at Hanford: Volume 2: Engineered-facility sites (HISS data base)

    Energy Technology Data Exchange (ETDEWEB)

    Jette, S.J.; Lamar, D.A.; McLaughlin, T.J.; Sherwood, D.R.; Van Houten, N.C.; Stenner, R.D.; Cramer, K.H.; Higley, K.A.

    1988-10-01

    The purpose of this report is to formally document the assessment activities at the US Department of Energy (DOE) Hanford Site. These activities were carried out pursuant to the DOE orders that address the Comprehensive Environmental Response, Compensation, and Liability Act (CERCLA) Program for the cleanup of inactive waste sites. The DOE orders incorporate the US Environmental Protection Agency methodology, which is based on the Superfund Amendments and Reauthorization Act of 1986. This methodology includes: PA/SI, remedial investigation/feasibility study, record of decision, design and implementation of remedial action, operation and monitoring, and verification monitoring. Volume 1 of this report discusses the CERCLA inactive waste-site evaluation process, assumptions, and results of the Hazard Ranking System methodology employed. Volume 2 presents the data on the individual CERCLA engineered-facility sites at Hanford, as contained in the Hanford Inactive Site Surveillance (HISS) Data Base. Volume 3 presents the data on the individual CERCLA unplanned-release sites at Hanford, as contained in the HISS Data Base. 13 refs.

  1. Minkowski metrics in creating universal ranking algorithms

    Directory of Open Access Journals (Sweden)

    Andrzej Ameljańczyk

    2014-06-01

    Full Text Available The paper presents a general procedure for creating rankings of a set of objects in which the preference relation is based on an arbitrary ranking function. The analysis of admissible ranking functions begins by showing the fundamental drawbacks of the commonly used functions in the form of a weighted sum. As a special case of the ranking procedure in the space of the relation, a procedure based on the notion of an ideal element and a generalized Minkowski distance from that element is proposed. This procedure, presented as a universal ranking algorithm, eliminates most of the disadvantages of weighted-sum ranking functions. Keywords: ranking functions, preference relation, ranking clusters, categories, ideal point, universal ranking algorithm
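
    A minimal sketch of the ideal-point idea (not the paper's full procedure): rank alternatives by their generalized Minkowski distance to an ideal element for a chosen order p. The data below are hypothetical and assumed already normalized.

```python
# Ranking by generalized Minkowski distance to an ideal element.
# Hypothetical, already-normalized data; smaller distance = better rank.
import numpy as np

def minkowski_ranking(X, ideal, p=2.0, weights=None):
    w = np.ones(X.shape[1]) if weights is None else np.asarray(weights)
    d = (w * np.abs(X - ideal) ** p).sum(axis=1) ** (1.0 / p)
    return np.argsort(d), d

# Criteria normalized to [0, 1] with 1 the best attainable value.
X = np.array([[0.9, 0.4, 0.7],
              [0.6, 0.8, 0.5],
              [0.7, 0.7, 0.9]])
ideal = np.array([1.0, 1.0, 1.0])
for p in (1.0, 2.0, np.inf):
    if np.isinf(p):
        d = np.abs(X - ideal).max(axis=1)       # Chebyshev limit of Minkowski
        order = np.argsort(d)
    else:
        order, d = minkowski_ranking(X, ideal, p)
    print(f"p={p}: order {order}, distances {np.round(d, 3)}")
```

    Varying p changes how strongly the worst single criterion dominates the ranking, which is one reason the choice of metric matters in such procedures.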

  2. Does an expert-based evaluation allow us to go beyond the Impact Factor? Experiences from building a ranking of national journals in Poland.

    Science.gov (United States)

    Kulczycki, Emanuel; Rozkosz, Ewa A

    2017-01-01

    This article discusses the Polish Journal Ranking, which is used in the research evaluation system in Poland. In 2015, the ranking, which represents all disciplines, allocated 17,437 journals into three lists: A, B, and C. The B list constitutes a ranking of Polish journals that are indexed neither in the Web of Science nor the European Reference Index for the Humanities. This ranking was built by evaluating journals in three dimensions: formal, bibliometric, and expert-based. We have analysed data on 2035 Polish journals from the B list. Our study aims to determine how an expert-based evaluation influenced the results of final evaluation. In our study, we used structural equation modelling, which is regression based, and we designed three pairs of theoretical models for three fields of science: (1) humanities, (2) social sciences, and (3) engineering, natural sciences, and medical sciences. Each pair consisted of the full model and the reduced model (i.e., the model without the expert-based evaluation). Our analysis revealed that the multidimensional evaluation of local journals should not rely only on the bibliometric indicators, which are based on the Web of Science or Scopus. Moreover, we have shown that the expert-based evaluation plays a major role in all fields of science. We conclude with recommendations that the formal evaluation should be reduced to verifiable parameters and that the expert-based evaluation should be based on common guidelines for the experts.

  3. Quantifying aflatoxins in peanuts using fluorescence spectroscopy coupled with multi-way methods: Resurrecting second-order advantage in excitation-emission matrices with rank overlap problem

    Science.gov (United States)

    Sajjadi, S. Maryam; Abdollahi, Hamid; Rahmanian, Reza; Bagheri, Leila

    2016-03-01

    A rapid, simple and inexpensive method using fluorescence spectroscopy coupled with multi-way methods for the determination of aflatoxins B1 and B2 in peanuts has been developed. In this method, aflatoxins are extracted with a mixture of water and methanol (90:10), and then monitored by fluorescence spectroscopy producing EEMs. Although the combination of EEMs and multi-way methods is commonly used to determine analytes in complex chemical systems with unknown interference(s), the rank overlap problem in excitation and emission profiles may restrict the application of this strategy. If there is rank overlap in one mode, there are several three-way algorithms, such as PARAFAC under some constraints, that can resolve this kind of data successfully. However, the analysis of EEM data is impossible when some species have rank overlap in both modes, because the information of the data matrix is then equivalent to a zero-order datum for that species, which is the case in our study. Aflatoxins B1 and B2 have the same shape of spectral profiles in both excitation and emission modes, and we propose creating third-order data for each sample using the solvent as a new additional selectivity mode. This third-order data is, in turn, converted to second-order data by augmentation, which resurrects the second-order advantage of the original EEMs. The three-way data is constructed by stacking the augmented data in the third mode, and then analyzed by two powerful second-order calibration methods (BLLS-RBL and PARAFAC) to quantify the analytes in four kinds of peanut samples. The results of both methods are in good agreement and reasonable recoveries are obtained.

  4. Complete hazard ranking to analyze right-censored data: An ALS survival study.

    Science.gov (United States)

    Huang, Zhengnan; Zhang, Hongjiu; Boss, Jonathan; Goutman, Stephen A; Mukherjee, Bhramar; Dinov, Ivo D; Guan, Yuanfang

    2017-12-01

    Survival analysis represents an important outcome measure in clinical research and clinical trials; further, survival ranking may offer additional advantages in clinical trials. In this study, we developed GuanRank, a non-parametric ranking-based technique to transform patients' survival data into a linear space of hazard ranks. The transformation enables the utilization of machine learning base-learners including Gaussian process regression, Lasso, and random forest on survival data. The method was submitted to the DREAM Amyotrophic Lateral Sclerosis (ALS) Stratification Challenge. Ranked in first place, the model gave more accurate ranking predictions on the PRO-ACT ALS dataset in comparison to the Cox proportional hazards model. By utilizing right-censored data in its training process, the method demonstrated its state-of-the-art predictive power in ALS survival ranking. Its feature selection identified multiple important factors, some of which conflict with previous studies.

  5. Complete hazard ranking to analyze right-censored data: An ALS survival study.

    Directory of Open Access Journals (Sweden)

    Zhengnan Huang

    2017-12-01

    Full Text Available Survival analysis represents an important outcome measure in clinical research and clinical trials; further, survival ranking may offer additional advantages in clinical trials. In this study, we developed GuanRank, a non-parametric ranking-based technique to transform patients' survival data into a linear space of hazard ranks. The transformation enables the utilization of machine learning base-learners including Gaussian process regression, Lasso, and random forest on survival data. The method was submitted to the DREAM Amyotrophic Lateral Sclerosis (ALS) Stratification Challenge. Ranked in first place, the model gave more accurate ranking predictions on the PRO-ACT ALS dataset in comparison to the Cox proportional hazards model. By utilizing right-censored data in its training process, the method demonstrated its state-of-the-art predictive power in ALS survival ranking. Its feature selection identified multiple important factors, some of which conflict with previous studies.

  6. Selection and ranking of occupational safety indicators based on fuzzy AHP: A case study in road construction companies

    National Research Council Canada - National Science Library

    Janackovic, Goran Lj; Savic, Suzana M; Stankovic, Miomir S

    2013-01-01

    .... The key safety performance indicators for the road construction industry are identified and ranked according to the results of a survey that included experts who assessed occupational safety risks in these companies...

  7. Selection of suitable e-learning approach using TOPSIS technique with best ranked criteria weights

    Science.gov (United States)

    Mohammed, Husam Jasim; Kasim, Maznah Mat; Shaharanee, Izwan Nizal Mohd

    2017-11-01

    This paper compares the performance of four rank-based weighting techniques, Rank Sum (RS), Rank Reciprocal (RR), Rank Exponent (RE), and Rank Order Centroid (ROC), on five identified e-learning criteria in order to select the best weighting method. A total of 35 experts in a public university in Malaysia were asked to rank the criteria and to evaluate five e-learning approaches: blended learning, flipped classroom, ICT-supported face-to-face learning, synchronous learning, and asynchronous learning. The best-ranked criteria weights, defined as the weights with the least total absolute difference from the geometric mean of all weights, were then used to select the most suitable e-learning approach using the TOPSIS method. The results show that the RR weights are the best, while the flipped classroom is the most suitable approach. This paper develops a decision framework to aid decision makers (DMs) in choosing the most suitable weighting method for solving MCDM problems.
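
    For reference, the four rank-based weighting schemes compared in the paper have standard closed forms; a minimal sketch follows, where the rank-exponent parameter p is an assumption rather than a value from the paper.

```python
# Standard closed forms of the four rank-based weighting schemes (RS, RR,
# RE, ROC). Criterion ranks: 1 = most important of n criteria.
import numpy as np

def rank_sum(r):              # RS: w_i proportional to (n - r_i + 1)
    n = len(r)
    w = n - r + 1.0
    return w / w.sum()

def rank_reciprocal(r):       # RR: w_i proportional to 1 / r_i
    w = 1.0 / r
    return w / w.sum()

def rank_exponent(r, p=2.0):  # RE: w_i proportional to (n - r_i + 1)^p; p assumed
    n = len(r)
    w = (n - r + 1.0) ** p
    return w / w.sum()

def rank_order_centroid(r):   # ROC: w_i = (1/n) * sum_{k=r_i}^{n} 1/k
    n = len(r)
    return np.array([np.sum(1.0 / np.arange(ri, n + 1)) for ri in r]) / n

ranks = np.array([1, 2, 3, 4, 5])   # e.g. five e-learning criteria ranked by experts
for name, w in [("RS", rank_sum(ranks)), ("RR", rank_reciprocal(ranks)),
                ("RE", rank_exponent(ranks)), ("ROC", rank_order_centroid(ranks))]:
    print(name, np.round(w, 3))
```

    The resulting weight vector can then be fed into a TOPSIS evaluation of the alternatives, as the paper does.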

  8. A Review of Ranking Models in Data Envelopment Analysis

    Directory of Open Access Journals (Sweden)

    F. Hosseinzadeh Lotfi

    2013-01-01

    Full Text Available In the course of improving various abilities of data envelopment analysis (DEA) models, many investigations have been carried out for ranking decision-making units (DMUs). This is an important issue both in theory and in practice. A variety of papers apply different ranking methods to real data sets. Here the ranking methods are divided into seven groups; as each of the existing methods can be viewed from different aspects, these groups may overlap somewhat. The first group conducts the evaluation with a cross-efficiency matrix in which the units are self- and peer-evaluated. In the second, units are ranked based on the optimal weights obtained from the multiplier model of the DEA technique. The third group deals with super-efficiency methods, which are based on the idea of excluding the unit under evaluation and analyzing the changes of the frontier. The fourth group involves methods based on benchmarking, which adopts the idea of being a useful target for the inefficient units. The fifth group uses multivariate statistical techniques, usually applied after the DEA classification. The sixth ranks inefficient units through proportional measures of inefficiency. The seventh involves multiple-criteria decision methodologies combined with the DEA technique. Finally, some other methods of ranking units are also mentioned.

  9. Screening and ranking of POPs for global half-life: QSAR approaches for prioritization based on molecular structure.

    Science.gov (United States)

    Gramatica, Paola; Papa, Ester

    2007-04-15

    Persistence in the environment is an important criterion in prioritizing hazardous chemicals and in identifying new persistent organic pollutants (POPs). Degradation half-life in various compartments is among the more commonly used criteria for studying environmental persistence, but the limited availability of experimental data or reliable estimates is a serious problem. Available half-life data for degradation in air, water, sediment, and soil, for a set of 250 organic POP-type chemicals, were combined in a multivariate approach by principal component analysis to obtain a ranking of the studied organic pollutants according to their relative overall half-life. A global half-life index (GHLI) applicable for POP screening purposes is proposed. The reliability of this index was verified in comparison with multimedia model results. This global index was then modeled as a cumulative end-point using a QSAR approach based on few theoretical molecular descriptors, and a simple and robust regression model externally validated for its predictive ability was derived. The application of this model could allow a fast preliminary identification and prioritization of not yet known POPs, just from the knowledge of their molecular structure. This model can be applied a priori also in the chemical design of safer and alternative non-POP compounds.
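
    A hedged sketch of the general aggregation idea (not the authors' exact GHLI): log-transformed compartment half-lives are combined by principal component analysis and the first component score is used as a relative persistence index. The half-life values below are invented for illustration.

```python
# Illustrative only: PCA-based composite persistence index from hypothetical
# half-lives (days) in air, water, sediment and soil.
import numpy as np

half_lives = np.array([[5.0,  60.0,  300.0, 180.0],
                       [0.5,  15.0,  100.0,  60.0],
                       [30.0, 360.0, 1000.0, 720.0],
                       [2.0,  40.0,  200.0, 150.0]])  # rows = chemicals

X = np.log10(half_lives)
X = (X - X.mean(0)) / X.std(0)            # standardise each compartment
_, _, Vt = np.linalg.svd(X, full_matrices=False)
index = X @ Vt[0]                         # first principal component score
if np.corrcoef(index, X.sum(1))[0, 1] < 0:
    index = -index                        # orient so larger = more persistent
print("relative persistence ranking (most persistent first):", np.argsort(-index))
```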

  10. Chronic dietary risk characterization for pesticide residues: a ranking and scoring method integrating agricultural uses and food contamination data.

    Science.gov (United States)

    Nougadère, Alexandre; Reninger, Jean-Cédric; Volatier, Jean-Luc; Leblanc, Jean-Charles

    2011-07-01

    A method has been developed to identify pesticide residues and foodstuffs for inclusion in national monitoring programs with different priority levels. It combines two chronic dietary intake indicators: ATMDI based on maximum residue levels and agricultural uses, and EDI on food contamination data. The mean and 95th percentile of exposure were calculated for 490 substances using individual and national consumption data. The results show that mean ATMDI exceeds the acceptable daily intake (ADI) for 10% of the pesticides, and the mean upper-bound EDI is above the ADI for 1.8% of substances. A seven-level risk scale is presented for substances already analyzed in food in France and substances not currently sought. Of 336 substances analyzed, 70 pesticides of concern (levels 2-5) should be particularly monitored, 22 of which are priority pesticides (levels 4 and 5). Of 154 substances not sought, 36 pesticides of concern (levels 2-4) should be included in monitoring programs, including 8 priority pesticides (level 4). In order to refine exposure assessment, analytical improvements and developments are needed to lower the analytical limits for priority pesticide/commodity combinations. Developed nationally, this method could be applied at different geographic scales. Copyright © 2011 Elsevier Ltd. All rights reserved.

  11. A computer-aided diagnosis system of nuclear cataract via ranking.

    Science.gov (United States)

    Huang, Wei; Li, Huiqi; Chan, Kap Luk; Lim, Joo Hwee; Liu, Jiang; Wong, Tien Yin

    2009-01-01

    A novel computer-aided diagnosis system of nuclear cataract via ranking is proposed for the first time in this paper. The grade of nuclear cataract in a slit-lamp image is predicted based on its neighboring labeled images in a ranked image list, which is obtained using an optimal ranking function. A new ranking evaluation measure is proposed for learning the optimal ranking function via direct optimization. Our system has been tested on a large dataset composed of 1000 slit-lamp images from 1000 different cases. Both the experimental results and a comparison with several state-of-the-art methods indicate the superiority of our system.

  12. Kriging for Simulation Metamodeling: Experimental Design, Reduced Rank Kriging, and Omni-Rank Kriging

    Science.gov (United States)

    Hosking, Michael Robert

    This dissertation improves an analyst's use of simulation through better utilization of kriging metamodels. There are three main contributions. First, an analysis is performed of what comprises good experimental designs for practical (non-toy) problems when using a kriging metamodel. Second is an explanation and demonstration of how reduced rank decompositions can improve the performance of kriging, now referred to as reduced rank kriging. Third is the development of an extension of reduced rank kriging which solves an open question regarding the usage of reduced rank kriging in practice. This extension is called omni-rank kriging. Finally, these results are demonstrated on two case studies. The first contribution focuses on experimental design. Sequential designs are generally known to be more efficient than "one shot" designs. However, sequential designs require some sort of pilot design on which the sequential stage can be based. We seek to find good initial designs for these pilot studies, as well as designs which will be effective if there is no following sequential stage. We test a wide variety of designs over a small set of test-bed problems. Our findings indicate that analysts should take advantage of any prior information they have about their problem's shape and/or their goals in metamodeling. In the event of a total lack of information we find that Latin hypercube designs are robust default choices. Our work is most distinguished by its attention to the higher levels of dimensionality. The second contribution introduces and explains an alternative method for kriging when there is noise in the data, which we call reduced rank kriging. Reduced rank kriging is based on using a reduced rank decomposition which artificially smoothes the kriging weights in a way similar to a nugget effect. Our primary focus will be showing how the reduced rank decomposition propagates through kriging empirically. In addition, we show further evidence for our

  13. PageRank tracker: from ranking to tracking.

    Science.gov (United States)

    Gong, Chen; Fu, Keren; Loza, Artur; Wu, Qiang; Liu, Jia; Yang, Jie

    2014-06-01

    Video object tracking is widely used in many real-world applications, and it has been extensively studied for over two decades. However, tracking robustness is still an issue in most existing methods, due to the difficulties with adaptation to environmental or target changes. In order to improve adaptability, this paper formulates the tracking process as a ranking problem, and the PageRank algorithm, which is a well-known webpage ranking algorithm used by Google, is applied. Labeled and unlabeled samples in tracking application are analogous to query webpages and the webpages to be ranked, respectively. Therefore, determining the target is equivalent to finding the unlabeled sample that is the most associated with existing labeled set. We modify the conventional PageRank algorithm in three aspects for tracking application, including graph construction, PageRank vector acquisition and target filtering. Our simulations with the use of various challenging public-domain video sequences reveal that the proposed PageRank tracker outperforms mean-shift tracker, co-tracker, semiboosting and beyond semiboosting trackers in terms of accuracy, robustness and stability.
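
    For background, a minimal sketch of the standard PageRank power iteration that the tracker builds on; the paper's graph construction, PageRank vector acquisition and target filtering modifications are not reproduced here, and the small adjacency matrix is invented.

```python
# Standard PageRank via power iteration (background sketch only).
import numpy as np

def pagerank(A, damping=0.85, tol=1e-10, max_iter=200):
    """A: adjacency matrix with A[i, j] = 1 if node i links to node j."""
    n = A.shape[0]
    out_deg = A.sum(axis=1, keepdims=True)
    # Column-stochastic transition matrix; dangling nodes jump uniformly.
    P = np.where(out_deg > 0, A / np.maximum(out_deg, 1), 1.0 / n).T
    r = np.full(n, 1.0 / n)
    for _ in range(max_iter):
        r_new = damping * P @ r + (1 - damping) / n
        if np.abs(r_new - r).sum() < tol:
            break
        r = r_new
    return r

A = np.array([[0, 1, 1, 0],
              [0, 0, 1, 0],
              [1, 0, 0, 1],
              [0, 0, 1, 0]], dtype=float)
print(pagerank(A).round(3))
```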

  14. A classification and ranking system on the H2 gas sensing capabilities of nanomaterials based on proposed coefficients of sensor performance and sensor efficiency equations

    CSIR Research Space (South Africa)

    Mwakikunga, BW

    2013-07-01

    Full Text Available Published in Sensors and Actuators B 184 (2013) 170–178.

  15. Reduced Rank Regression

    DEFF Research Database (Denmark)

    Johansen, Søren

    2008-01-01

    The reduced rank regression model is a multivariate regression model with a coefficient matrix of reduced rank. The reduced rank regression algorithm is an estimation procedure which estimates the reduced rank regression model. It is related to canonical correlations and involves calculating eigenvalues and eigenvectors. We give a number of different applications to regression and time series analysis, and show how the reduced rank regression estimator can be derived as a Gaussian maximum likelihood estimator. We briefly mention asymptotic results.
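
    A compact numerical sketch of one classical reduced rank regression estimator (identity weight matrix; the Gaussian maximum likelihood and canonical correlation refinements mentioned above are not reproduced): fit OLS, then project the coefficient matrix onto the leading right singular vectors of the fitted values.

```python
# Reduced rank regression with identity weighting (illustrative sketch).
import numpy as np

def reduced_rank_regression(X, Y, rank):
    B_ols = np.linalg.lstsq(X, Y, rcond=None)[0]   # p x q OLS coefficients
    Y_hat = X @ B_ols
    # Leading right singular vectors of the fitted values
    _, _, Vt = np.linalg.svd(Y_hat, full_matrices=False)
    V_r = Vt[:rank].T                              # q x rank
    return B_ols @ V_r @ V_r.T                     # rank-constrained coefficients

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 5))
B_true = rng.normal(size=(5, 1)) @ rng.normal(size=(1, 4))   # rank-1 truth
Y = X @ B_true + 0.1 * rng.normal(size=(100, 4))
B_hat = reduced_rank_regression(X, Y, rank=1)
print("rank of estimate:", np.linalg.matrix_rank(B_hat, tol=1e-8))
```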

  16. A method for the design and development of medical or health care information websites to optimize search engine results page rankings on Google.

    Science.gov (United States)

    Dunne, Suzanne; Cummins, Niamh Maria; Hannigan, Ailish; Shannon, Bill; Dunne, Colum; Cullen, Walter

    2013-08-27

    The Internet is a widely used source of information for patients searching for medical/health care information. While many studies have assessed existing medical/health care information on the Internet, relatively few have examined methods for design and delivery of such websites, particularly those aimed at the general public. This study describes a method of evaluating material for new medical/health care websites, or for assessing those already in existence, which is correlated with higher rankings on Google's Search Engine Results Pages (SERPs). A website quality assessment (WQA) tool was developed using criteria related to the quality of the information to be contained in the website in addition to an assessment of the readability of the text. This was retrospectively applied to assess existing websites that provide information about generic medicines. The reproducibility of the WQA tool and its predictive validity were assessed in this study. The WQA tool demonstrated very high reproducibility (intraclass correlation coefficient=0.95) between 2 independent users. A moderate to strong correlation was found between WQA scores and rankings on Google SERPs. Analogous correlations were seen between rankings and readability of websites as determined by Flesch Reading Ease and Flesch-Kincaid Grade Level scores. The use of the WQA tool developed in this study is recommended as part of the design phase of a medical or health care information provision website, along with assessment of readability of the material to be used. This may ensure that the website performs better on Google searches. The tool can also be used retrospectively to make improvements to existing websites, thus, potentially enabling better Google search result positions without incurring the costs associated with Search Engine Optimization (SEO) professionals or paid promotion.
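
    The WQA criteria themselves are not reproduced here; the sketch below only illustrates the readability half of such an assessment, using the standard Flesch Reading Ease and Flesch-Kincaid Grade Level formulas with a rough vowel-group syllable heuristic. The heuristic and the sample text are assumptions, not part of the study.

```python
# Readability sketch: Flesch Reading Ease and Flesch-Kincaid Grade Level.
import re

def count_syllables(word):
    # Crude heuristic: count groups of consecutive vowels.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def readability(text):
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    w, s = len(words), sentences
    fre = 206.835 - 1.015 * (w / s) - 84.6 * (syllables / w)
    fkgl = 0.39 * (w / s) + 11.8 * (syllables / w) - 15.59
    return fre, fkgl

sample = ("Generic medicines contain the same active ingredient as the "
          "branded product. They are assessed for quality, safety and efficacy.")
print("Flesch Reading Ease: %.1f, Flesch-Kincaid Grade: %.1f" % readability(sample))
```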

  17. PhyloPythiaS+: a self-training method for the rapid reconstruction of low-ranking taxonomic bins from metagenomes

    Directory of Open Access Journals (Sweden)

    Ivan Gregor

    2016-02-01

    Full Text Available Background. Metagenomics is an approach for characterizing environmental microbial communities in situ; it allows their functional and taxonomic characterization and the recovery of sequences from uncultured taxa. This is often achieved by a combination of sequence assembly and binning, where sequences are grouped into ‘bins’ representing taxa of the underlying microbial community. Assignment to low-ranking taxonomic bins is an important challenge for binning methods, as is scalability to Gb-sized datasets generated with deep sequencing techniques. One of the best available methods for species-level bin recovery from deep-branching phyla is the expert-trained PhyloPythiaS package, where a human expert decides on the taxa to incorporate in the model and identifies ‘training’ sequences based on marker genes directly from the sample. Due to the manual effort involved, this approach does not scale to multiple metagenome samples and requires substantial expertise, which researchers who are new to the area do not have. Results. We have developed PhyloPythiaS+, a successor to our PhyloPythia(S) software. The new (+) component performs the work previously done by the human expert. PhyloPythiaS+ also includes a new k-mer counting algorithm, which accelerated the simultaneous counting of the 4–6-mers used for taxonomic binning 100-fold and reduced the overall execution time of the software by a factor of three. Our software allows the analysis of Gb-sized metagenomes with inexpensive hardware and the recovery of species- or genus-level bins with low error rates in a fully automated fashion. PhyloPythiaS+ was compared to MEGAN, taxator-tk, Kraken and the generic PhyloPythiaS model. The results showed that PhyloPythiaS+ performs especially well for samples originating from novel environments in comparison to the other methods. Availability. PhyloPythiaS+ in a virtual machine is available for installation under Windows, Unix systems or OS X on: https://github.com/algbioi/ppsp/wiki.
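
    For readers unfamiliar with the feature side of taxonomic binning, the toy sketch below counts 4–6-mers in a nucleotide sequence; the optimized counter shipped with PhyloPythiaS+ is far more elaborate than this, and the example contig is invented.

```python
# Toy 4-6-mer counting for a nucleotide sequence (illustration only).
from collections import Counter

def kmer_counts(seq, k_min=4, k_max=6):
    seq = seq.upper()
    counts = Counter()
    for k in range(k_min, k_max + 1):
        for i in range(len(seq) - k + 1):
            kmer = seq[i:i + k]
            if set(kmer) <= set("ACGT"):      # skip ambiguous bases
                counts[kmer] += 1
    return counts

contig = "ACGTACGTGGCCTTAACGTAGGCTA"
print(kmer_counts(contig).most_common(5))
```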

  18. Universal scaling in sports ranking

    Science.gov (United States)

    Deng, Weibing; Li, Wei; Cai, Xu; Bulou, Alain; Wang, Qiuping A.

    2012-09-01

    Ranking is a ubiquitous phenomenon in human society. On the web pages of Forbes, one may find all kinds of rankings, such as the world's most powerful people, the world's richest people, the highest-earning tennis players, and so on and so forth. Herewith, we study a specific kind—sports ranking systems in which players' scores and/or prize money are accrued based on their performances in different matches. By investigating 40 data samples which span 12 different sports, we find that the distributions of scores and/or prize money follow universal power laws, with exponents nearly identical for most sports. In order to understand the origin of this universal scaling we focus on the tennis ranking systems. By checking the data we find that, for any pair of players, the probability that the higher-ranked player tops the lower-ranked opponent is proportional to the rank difference between the pair. Such a dependence can be well fitted to a sigmoidal function. By using this feature, we propose a simple toy model which can simulate the competition of players in different matches. The simulations yield results consistent with the empirical findings. Extensive simulation studies indicate that the model is quite robust with respect to the modifications of some parameters.
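
    A hedged toy simulation in the spirit of the model described above: the probability that the higher-ranked player wins a match grows with the rank difference through a sigmoid, and scores are accrued match by match. The exact parameterisation and the match-scheduling scheme are assumptions, not the authors' model.

```python
# Illustrative toy model only; the sigmoid slope and scheduling are assumptions.
import numpy as np

rng = np.random.default_rng(2)
n_players, n_matches, a = 100, 5000, 0.05
scores = np.zeros(n_players)

def p_higher_wins(rank_diff):
    return 1.0 / (1.0 + np.exp(-a * rank_diff))     # sigmoidal in rank difference

for _ in range(n_matches):
    ranks = np.argsort(np.argsort(-scores))          # current rank, 0 = best
    i, j = rng.choice(n_players, size=2, replace=False)
    hi, lo = (i, j) if ranks[i] < ranks[j] else (j, i)
    winner = hi if rng.random() < p_higher_wins(ranks[lo] - ranks[hi]) else lo
    scores[winner] += 1                              # winner accrues points

print("top-10 scores:", np.sort(scores)[::-1][:10])  # inspect the heavy tail
```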

  19. Universal scaling in sports ranking

    CERN Document Server

    Deng, Weibing; Cai, Xu; Bulou, Alain; Wang, Qiuping A

    2011-01-01

    Ranking is a ubiquitous phenomenon in human society. By browsing the web pages of Forbes, you may find all kinds of rankings, such as the world's most powerful people, the world's richest people, top-paid tennis stars, and so on and so forth. Herewith, we study a specific kind, sports ranking systems in which players' scores and prize money are calculated based on their performances in various tournaments. A typical example is tennis. It is found that the distributions of both scores and prize money follow universal power laws, with exponents nearly identical for most sports fields. In order to understand the origin of this universal scaling we focus on the tennis ranking systems. By checking the data we find that, for any pair of players, the probability that the higher-ranked player will top the lower-ranked opponent is proportional to the rank difference between the pair. Such a dependence can be well fitted to a sigmoidal function. By using this feature, we propose a simple toy model which can simul...

  20. Ranking Fuzzy Numbers and Its Application to Products Attributes Preferences

    OpenAIRE

    Abdullah, Lazim; Fauzee, Nor Nashrah Ahmad

    2011-01-01

    Ranking is one of the widely used methods in a fuzzy decision-making environment. The recent fuzzy number ranking method proposed by Wang and Li is claimed to be an improved version of ranking. However, the method had never been simplified and tested in a real-life application. This paper presents a four-step computation for ranking fuzzy numbers and its application in ranking attributes of selected chocolate products. The four-step algorithm was formulated to rank fuzzy numbers and followed by a tes...

  1. A Review of Outcomes of Seven World University Ranking Systems

    National Research Council Canada - National Science Library

    Mahmood Khosrowjerdi; Neda Zeraatkar

    2012-01-01

    There are many national and international ranking systems that rank the universities and higher education institutions of the world based on the same or different criteria...

  2. Ranking Baltic States Researchers

    Directory of Open Access Journals (Sweden)

    Gyula Mester

    2017-10-01

    Full Text Available In this article, using the h-index and the total number of citations, the best 10 Lithuanian, Latvian and Estonian researchers from several disciplines are ranked. The list may be formed based on the h-index and the total number of citations given in the Web of Science, Scopus, the Publish or Perish program and the Google Scholar database. Data for the top 10 researchers are presented. Google Scholar is the most complete; therefore, to define a single indicator, the h-index calculated from Google Scholar may be a good and simple one. The author chooses the Google Scholar database as it is the broadest one.
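
    For reference, a minimal sketch of the h-index computation used for such rankings, applied to an invented citation list (not data from the article):

```python
# h-index: the largest h such that h of the author's papers have >= h citations.
def h_index(citations):
    citations = sorted(citations, reverse=True)
    return sum(1 for i, c in enumerate(citations, start=1) if c >= i)

print(h_index([45, 33, 30, 12, 9, 7, 4, 2, 1]))   # -> 6
```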

  3. A new DEA approach to rank alternatives in MCDA

    Directory of Open Access Journals (Sweden)

    Majid Darehmiraki

    2013-04-01

    Full Text Available One of the principal subjects in multiple criteria decision analysis is ranking alternatives. Here, we present a new method to rank alternatives by using data envelopment analysis. In this paper, a ranking method is proposed based on applying an artificial alternative called the aggregate alternative. The method is based on the fact that an efficient alternative with a better performance has stronger effects on the group of other alternatives; that is, its deletion forces the remaining alternatives to obtain smaller efficiencies. The idea described in this paper is inspired by Lotfi et al. (2011). One feature of the proposed method is that it does not require a priori determination of weights. Two examples are used to illustrate how the proposed method works in actual practice, and the results are compared with those obtained from the TOPSIS method.

  4. Unique Nanoparticle Optical Properties Confound Fluorescent Based Assays Widely Employed in Their In Vitro Toxicity Screening and Ranking

    Science.gov (United States)

    Nanoparticles (NPs) are novel materials having at least one dimension less than 100 nm and display unique physicochemical properties due to their nanoscale size. An emphasis has been placed on developing high throughput screening (HTS) assays to characterize and rank the toxiciti...

  5. Social ranking effects on tooth-brushing behaviour.

    Science.gov (United States)

    Maltby, John; Paterson, Kevin; Day, Liz; Jones, Ceri; Kinnear, Hayley; Buchanan, Heather

    2016-05-01

    A tooth-brushing social rank hypothesis is tested, suggesting that tooth-brushing duration is influenced by where individuals rank their behaviour relative to that of other individuals. Study 1 used a correlation design, Study 2 used a semi-experimental design, and Study 3 used a randomized intervention design to examine the tooth-brushing social rank hypothesis in terms of self-reported attitudes, cognitions, and behaviour towards tooth-brushing duration. Study 1 surveyed participants to examine whether the perceived health benefits of tooth-brushing duration could be predicted from the ranking of each person's tooth-brushing duration. Study 2 tested whether manipulating the rank position of the tooth-brushing duration influenced participant-perceived health benefits of tooth-brushing duration. Study 3 used a longitudinal intervention method to examine whether messages relating to the rank positions of tooth-brushing durations causally influenced self-reported tooth-brushing duration. Study 1 demonstrates that perceptions of the health benefits from tooth-brushing duration are predicted by the perceptions of how that behaviour ranks in comparison to other people's behaviour. Study 2 demonstrates that the perceptions of the health benefits of tooth-brushing duration can be manipulated experimentally by changing the ranked position of a person's tooth-brushing duration. Study 3 experimentally demonstrates the possibility of increasing the length of time for which individuals clean their teeth by focusing on how they rank among their peers in terms of tooth-brushing duration. The effectiveness of interventions using social-ranking methods relative to those that emphasize comparisons against group averages or normative guidelines is discussed. What is already known on this subject? Individuals make judgements based on social rank information. Social rank information has been shown to influence positive health behaviours such as exercise

  6. Ranking structures and Rank-Rank Correlations of Countries. The FIFA and UEFA cases

    CERN Document Server

    Ausloos, Marcel; Gadomski, Adam; Vitanov, Nikolay K

    2014-01-01

    Ranking of agents competing with each other in complex systems may lead to paradoxes according to the pre-chosen different measures. A discussion is presented on such rank-rank, similar or not, correlations based on the case of European countries ranked by UEFA and FIFA from different soccer competitions. The first question to be answered is whether an empirical and simple law is obtained for such (self-) organizations of complex sociological systems with such different measuring schemes. It is found that the power law form is not the best description contrary to many modern expectations. The stretched exponential is much more adequate. Moreover, it is found that the measuring rules lead to some inner structures, in both cases.

  7. Ranking structures and rank-rank correlations of countries: The FIFA and UEFA cases

    Science.gov (United States)

    Ausloos, Marcel; Cloots, Rudi; Gadomski, Adam; Vitanov, Nikolay K.

    2014-04-01

    Ranking of agents competing with each other in complex systems may lead to paradoxes according to the pre-chosen different measures. A discussion is presented on such rank-rank, similar or not, correlations based on the case of European countries ranked by UEFA and FIFA from different soccer competitions. The first question to be answered is whether an empirical and simple law is obtained for such (self-) organizations of complex sociological systems with such different measuring schemes. It is found that the power law form is not the best description contrary to many modern expectations. The stretched exponential is much more adequate. Moreover, it is found that the measuring rules lead to some inner structures in both cases.

  8. Second-order differential equations for bosons with spin j ≥ 1 and in the bases of general tensor-spinors of rank 2j

    Science.gov (United States)

    Banda Guzmán, V. M.; Kirchbach, M.

    2016-09-01

    A boson of spin j ≥ 1 can be described, in one of the possibilities within the Bargmann-Wigner framework, by means of one sole differential equation of order twice the spin, which however is known to be inconsistent as it allows for non-local, ghost and acausally propagating solutions, all problems which are difficult to tackle. The other possibility is provided by the Fierz-Pauli framework, which is based on the second-order Klein-Gordon equation, more comfortable to deal with, but it needs to be supplemented by an auxiliary condition. Although the latter formalism avoids some of the pathologies of the high-order equations, it still remains plagued by some inconsistencies such as the acausal propagation of the wave fronts of the (classical) solutions within an electromagnetic environment. We here suggest a method alternative to the above two that combines their advantages while avoiding the related difficulties. Namely, we suggest one sole, strictly D^{(j,0)⊕(0,j)} representation-specific, second-order differential equation, which is derivable from a Lagrangian and whose solutions do not violate causality. The equation under discussion presents itself as the product of the Klein-Gordon operator with a momentum-independent projector on Lorentz irreducible representation spaces constructed from one of the Casimir invariants of the spin-Lorentz group. The basis used is that of general tensor-spinors of rank 2j.

  9. Sparse structure regularized ranking

    KAUST Repository

    Wang, Jim Jing-Yan

    2014-04-17

    Learning ranking scores is critical for the multimedia database retrieval problem. In this paper, we propose a novel ranking score learning algorithm by exploring the sparse structure and using it to regularize ranking scores. To explore the sparse structure, we assume that each multimedia object could be represented as a sparse linear combination of all other objects, and the combination coefficients are regarded as a similarity measure between objects and used to regularize their ranking scores. Moreover, we propose to learn the sparse combination coefficients and the ranking scores simultaneously. A unified objective function is constructed with regard to both the combination coefficients and the ranking scores, and is optimized by an iterative algorithm. Experiments on two multimedia database retrieval data sets demonstrate the significant improvements of the proposed algorithm over state-of-the-art ranking score learning algorithms.

  10. Methods in Logic Based Control

    DEFF Research Database (Denmark)

    Christensen, Georg Kronborg

    1999-01-01

    Design and theory of Logic Based Control systems. Boolean algebra, Karnaugh maps, the Quine-McCluskey algorithm. Sequential control design. Logic Based Control Method, Cascade Control Method. Implementation techniques: relay, pneumatic, TTL/CMOS, PAL, and PLC- and Soft_PLC implementation. PLC...

  11. Performance evaluation and ranking of direct sales stores using BSC approach and fuzzy multiple attribute decision-making methods

    Directory of Open Access Journals (Sweden)

    Mojtaba Soltannezhad Dizaji

    2017-07-01

    Full Text Available In an environment where markets go through a volatile process, and rapid fundamental changes occur due to technological advances, it is important to ensure and maintain good performance measurement. Organizations, in their performance evaluation, should consider different types of financial and non-financial indicators. In systems like direct sales stores, in which decision units have multiple inputs and outputs, all criteria influencing performance must be combined and examined simultaneously in one system. The purpose of this study is to evaluate the performance of different products sold through direct sales by a firm named Shirin Asal with a combination of the Balanced Scorecard, fuzzy AHP and TOPSIS, so that the weaknesses of subjectivity and selective consideration by evaluators in assessing the performance indicators are reduced and evaluation integration is achieved by considering the contribution of each indicator and each indicator group of the balanced scorecard. This case study follows an applied research approach. Data were collected through a questionnaire drawn from previous studies, expert opinions, and a review of documents within the organization. MATLAB and SPSS were used to analyze the data. In this study, the customer and financial perspectives are of the utmost importance for assessing the company branches. Among the sub-criteria, the rate of new customer acquisition in the customer dimension and the net income to sales ratio in the financial dimension are of the utmost importance.

  12. Activity based costing (ABC Method

    Directory of Open Access Journals (Sweden)

    Prof. Ph.D. Saveta Tudorache

    2008-05-01

    Full Text Available In the present paper, the need for and advantages of using the Activity Based Costing method are presented, a need arising from the problem of information pertinence. This issue has occurred due to the limitations of classic methods in this field, limitations also reflected in the disadvantages of such classic methods in establishing complete costs.

  13. Activity based costing (ABC Method)

    OpenAIRE

    Prof. Ph.D. Saveta Tudorache; University Lecturer Anca Bratu

    2008-01-01

    In the present paper, the need for and advantages of using the Activity Based Costing method are presented, a need arising from the problem of information pertinence. This issue has occurred due to the limitations of classic methods in this field, limitations also reflected in the disadvantages of such classic methods in establishing complete costs.

  14. UNIVERSITY RANKINGS BY COST OF LIVING ADJUSTED FACULTY COMPENSATION

    OpenAIRE

    Terrance Jalbert; Mercedes Jalbert; Karla Hayashi

    2010-01-01

    In this paper we rank 574 universities based on compensation paid to their faculty. The analysis examines universities both on a raw basis and on a cost of living adjusted basis. Rankings based on salary data and benefit data are presented. In addition rankings based on total compensation are presented. Separate rankings are provided for universities offering different degrees. The results indicate that rankings of universities based on raw and cost of living adjusted data are markedly differ...

  15. A Ranking Approach on Large-Scale Graph With Multidimensional Heterogeneous Information.

    Science.gov (United States)

    Wei, Wei; Gao, Bin; Liu, Tie-Yan; Wang, Taifeng; Li, Guohui; Li, Hang

    2016-04-01

    Graph-based ranking has been extensively studied and frequently applied in many applications, such as webpage ranking. It aims at mining potentially valuable information from the raw graph-structured data. Recently, with the proliferation of rich heterogeneous information (e.g., node/edge features and prior knowledge) available in many real-world graphs, how to effectively and efficiently leverage all information to improve the ranking performance becomes a new challenging problem. Previous methods only utilize part of such information and attempt to rank graph nodes according to link-based methods, of which the ranking performances are severely affected by several well-known issues, e.g., over-fitting or high computational complexity, especially when the scale of graph is very large. In this paper, we address the large-scale graph-based ranking problem and focus on how to effectively exploit rich heterogeneous information of the graph to improve the ranking performance. Specifically, we propose an innovative and effective semi-supervised PageRank (SSP) approach to parameterize the derived information within a unified semi-supervised learning framework (SSLF-GR), then simultaneously optimize the parameters and the ranking scores of graph nodes. Experiments on the real-world large-scale graphs demonstrate that our method significantly outperforms the algorithms that consider such graph information only partially.

  16. Estimation of rank correlation for clustered data.

    Science.gov (United States)

    Rosner, Bernard; Glynn, Robert J

    2017-06-30

    It is well known that the sample correlation coefficient (Rxy) is the maximum likelihood estimator of the Pearson correlation (ρxy) for independent and identically distributed (i.i.d.) bivariate normal data. However, this is not true for ophthalmologic data where X (e.g., visual acuity) and Y (e.g., visual field) are available for each eye and there is positive intraclass correlation for both X and Y in fellow eyes. In this paper, we provide a regression-based approach for obtaining the maximum likelihood estimator of ρxy for clustered data, which can be implemented using standard mixed effects model software. This method is also extended to allow for estimation of partial correlation by controlling both X and Y for a vector U of other covariates. In addition, these methods can be extended to allow for estimation of rank correlation for clustered data by (i) converting ranks of both X and Y to the probit scale, (ii) estimating the Pearson correlation between probit scores for X and Y, and (iii) using the relationship between Pearson and rank correlation for bivariate normally distributed data. The validity of the methods in finite-sized samples is supported by simulation studies. Finally, two examples from ophthalmology and analgesic abuse are used to illustrate the methods. Copyright © 2017 John Wiley & Sons, Ltd.
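
    The sketch below illustrates steps (i)-(iii) only, ignoring the clustering/mixed-model machinery of the paper: ranks are mapped to the probit scale, a Pearson correlation is computed on the probit scores, and the bivariate-normal relation rho_rank = (6/pi) * arcsin(rho/2) converts it back to a rank correlation. The simulated data are i.i.d., an assumption made purely for illustration.

```python
# Rank correlation via probit scores (steps (i)-(iii) of the entry above).
import numpy as np
from scipy.stats import norm, rankdata

rng = np.random.default_rng(3)
x = rng.normal(size=500)
y = 0.6 * x + 0.8 * rng.normal(size=500)

def probit_scores(v):
    return norm.ppf(rankdata(v) / (len(v) + 1))    # mid-ranks mapped to N(0,1) scale

rho_probit = np.corrcoef(probit_scores(x), probit_scores(y))[0, 1]
rho_rank = (6 / np.pi) * np.arcsin(rho_probit / 2)  # bivariate-normal relation
print("estimated rank correlation: %.3f" % rho_rank)
```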

  17. Ranking species in mutualistic networks.

    Science.gov (United States)

    Domínguez-García, Virginia; Muñoz, Miguel A

    2015-02-02

    Understanding the architectural subtleties of ecological networks, believed to confer them enhanced stability and robustness, is a subject of utmost relevance. Mutualistic interactions have been profusely studied and their corresponding bipartite networks, such as plant-pollinator networks, have been reported to exhibit a characteristic "nested" structure. Assessing the importance of any given species in mutualistic networks is a key task when evaluating extinction risks and possible cascade effects. Inspired by a recently introduced algorithm--similar in spirit to Google's PageRank but with a built-in non-linearity--here we propose a method which--by exploiting their nested architecture--allows us to derive a sound ranking of species importance in mutualistic networks. This method clearly outperforms other existing ranking schemes and can become very useful for ecosystem management and biodiversity preservation, where decisions on what aspects of ecosystems to explicitly protect need to be made.

  18. Power System Event Ranking Using a New Linear Parameter-Varying Modeling with a Wide Area Measurement System-Based Approach

    Directory of Open Access Journals (Sweden)

    Mohammad Bagher Abolhasani Jabali

    2017-07-01

    Full Text Available Detecting critical power system events for Dynamic Security Assessment (DSA) is required for reliability improvement. The approach proposed in this paper investigates the effects of events on dynamic behavior during the nonlinear system response, while common approaches use steady-state conditions after events. This paper presents some new and enhanced indices for event ranking based on time-domain simulation and polytopic linear parameter-varying (LPV) modeling of a power system. In the proposed approach, a polytopic LPV representation is generated via linearization about some points of the nonlinear dynamic behavior of the power system using wide-area measurement system (WAMS) concepts, and then event ranking is carried out based on the frequency response of the system models at the vertices. Therefore, the nonlinear behavior of the system at the time of fault occurrence is considered for event ranking. The proposed algorithm is applied to a power system using nonlinear simulation. The comparison of the results, especially under different fault conditions, shows the advantages of the proposed approach and indices.

  19. Characterizing groundwater quality ranks for drinking purposes in Sylhet district, Bangladesh, using entropy method, spatial autocorrelation index, and geostatistics.

    Science.gov (United States)

    Islam, Abu Reza Md Towfiqul; Ahmed, Nasir; Bodrud-Doza, Md; Chu, Ronghao

    2017-12-01

    Contaminated drinking water of poor quality affects human health. Thus, it is essential to investigate the factors affecting groundwater quality and its suitability for drinking uses. In this paper, the entropy theory, multivariate statistics, spatial autocorrelation index, and geostatistics are applied to characterize groundwater quality and its spatial variability in the Sylhet district of Bangladesh. A total of 91 samples have been collected from wells (e.g., shallow, intermediate, and deep tube wells at 15-300-m depth) in the study area. The results show that NO₃⁻, followed by SO₄²⁻ and As, are the parameters contributing most to groundwater quality according to the entropy theory. The principal component analysis (PCA) and correlation coefficients also confirm the results of the entropy theory. However, Na⁺ has the highest spatial autocorrelation and the most entropy, thus affecting the groundwater quality. Based on the entropy-weighted water quality index (EWQI) and groundwater quality index (GWQI) classifications, it is observed that 60.45 and 53.86% of water samples are classified as having excellent to good quality, while the remaining samples vary from medium to extremely poor quality domains for drinking purposes. Furthermore, the EWQI classification provides more reasonable results than the GWQI due to its simplicity, accuracy, and avoidance of artificial weighting. A Gaussian semivariogram model has been chosen as the best-fit model, and the groundwater quality indices have a weak spatial dependence, suggesting that both geogenic and anthropogenic factors play a pivotal role in the spatial heterogeneity of groundwater quality oscillations.
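
    A generic sketch of the entropy-weighting step behind an entropy-weighted water quality index. The concentrations, permissible limits and the simple rating scheme below are invented for illustration; they are not the paper's data or its exact index definition.

```python
# Entropy weight method sketch: more variable parameters receive more weight.
import numpy as np

# rows = samples, columns = parameters (e.g. Na+, NO3-, SO4^2-, As)
C = np.array([[120.0, 10.0,  40.0, 0.002],
              [300.0, 45.0, 180.0, 0.010],
              [ 90.0,  5.0,  25.0, 0.001],
              [200.0, 30.0,  90.0, 0.006]])
limits = np.array([200.0, 50.0, 250.0, 0.01])    # hypothetical standards

P = C / C.sum(axis=0)                            # share of each sample per parameter
m = C.shape[0]
entropy = -(P * np.log(P)).sum(axis=0) / np.log(m)
weights = (1 - entropy) / (1 - entropy).sum()

quality_rating = 100 * C / limits                # simple percentage rating (assumed)
ewqi = quality_rating @ weights
print("entropy weights:", weights.round(3))
print("EWQI per sample:", ewqi.round(1))
```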

  20. Ranking stability and super-stable nodes in complex networks.

    Science.gov (United States)

    Ghoshal, Gourab; Barabási, Albert-László

    2011-07-19

    Pagerank, a network-based diffusion algorithm, has emerged as the leading method to rank web content, ecological species and even scientists. Despite its wide use, it remains unknown how the structure of the network on which it operates affects its performance. Here we show that for random networks the ranking provided by pagerank is sensitive to perturbations in the network topology, making it unreliable for incomplete or noisy systems. In contrast, in scale-free networks we predict analytically the emergence of super-stable nodes whose ranking is exceptionally stable to perturbations. We calculate the dependence of the number of super-stable nodes on network characteristics and demonstrate their presence in real networks, in agreement with the analytical predictions. These results not only deepen our understanding of the interplay between network topology and dynamical processes but also have implications in all areas where ranking has a role, from science to marketing.

  1. A comparison of hierarchical cluster analysis and league table rankings as methods for analysis and presentation of district health system performance data in Uganda.

    Science.gov (United States)

    Tashobya, Christine K; Dubourg, Dominique; Ssengooba, Freddie; Speybroeck, Niko; Macq, Jean; Criel, Bart

    2016-03-01

    In 2003, the Uganda Ministry of Health introduced the district league table for district health system performance assessment. The league table presents district performance against a number of input, process and output indicators and a composite index to rank districts. This study explores the use of hierarchical cluster analysis for analysing and presenting district health systems performance data and compares this approach with the use of the league table in Uganda. Ministry of Health and district plans and reports, and published documents were used to provide information on the development and utilization of the Uganda district league table. Quantitative data were accessed from the Ministry of Health databases. Statistical analysis was carried out using SPSS version 20, and hierarchical cluster analysis was performed using Ward's method. The hierarchical cluster analysis was conducted on the basis of seven clusters determined for each year from 2003 to 2010, ranging from a cluster of good performers through moderate to poor performers. The characteristics and membership of clusters varied from year to year and were determined by the identity and magnitude of performance of the individual variables. Criticisms of the league table include: perceived unfairness, as it did not take into consideration district peculiarities; and being oversummarized and not adequately informative. Clustering organizes the many data points into clusters of similar entities according to an agreed set of indicators and can provide the beginning point for identifying factors behind the observed performance of districts. Although league table rankings emphasize summation and external control, clustering has the potential to encourage a formative, learning approach. More research is required to shed more light on factors behind observed performance of the different clusters. Other countries, especially low-income countries that share many similarities with Uganda, can learn from these experiences. © The Author 2015

  2. Maximum Waring ranks of monomials

    OpenAIRE

    Holmes, Erik; Plummer, Paul; Siegert, Jeremy; Teitler, Zach

    2013-01-01

    We show that monomials and sums of pairwise coprime monomials in four or more variables have Waring rank less than the generic rank, with a short list of exceptions. We asymptotically compare their ranks with the generic rank.

  3. University ranking methodologies. An interview with Ben Sowter about the Quacquarelli Symonds World University Ranking

    OpenAIRE

    Alberto Baccini; Antono Banfi; Giuseppe De Nicolao; Paola Galimberti

    2015-01-01

    University rankings represent a controversial issue in the debate about higher education policy. One of the best known university ranking is the Quacquarelli Symonds World University Rankings (QS), published annually since 2004 by Quacquarelli Symonds ltd, a company founded in 1990 and headquartered in London. QS provides a ranking based on a score calculated by weighting six different indicators. The 2015 edition, published in October 2015, introduced major methodological innovations and, as...

  4. Population Set based Optimization Method

    Science.gov (United States)

    Manekar, Y.; Verma, H. K.

    2013-09-01

    In this paper, a population set based optimization method is proposed for solving some benchmark functions and also for solving an optimal power flow problem, namely the 'combined economic and emission dispatch' (CEED) problem with multiple objective functions. This algorithm takes into consideration all the equality and inequality constraints. The improvement in system performance is based on a reduction in the cost of power generation and in active power loss. The proposed algorithm has been compared with other methods such as GA and PSO reported in the literature. The results are impressive and encouraging. The study results show that the proposed method yields better solutions for CEED problems.

  5. Rankings Methodology Hurts Public Institutions

    Science.gov (United States)

    Van Der Werf, Martin

    2007-01-01

    In the 1980s, when the "U.S. News & World Report" rankings of colleges were based solely on reputation, the nation's public universities were well represented at the top. However, as soon as the magazine began including its "measures of excellence," statistics intended to define quality, public universities nearly disappeared from the top. As the…

  6. Risk-Based Prioritization Method for the Classification of Groundwater Pollution from Hazardous Waste Landfills.

    Science.gov (United States)

    Yang, Yu; Jiang, Yong-Hai; Lian, Xin-Ying; Xi, Bei-Dou; Ma, Zhi-Fei; Xu, Xiang-Jian; An, Da

    2016-12-01

    Hazardous waste landfill sites are a significant source of groundwater pollution. To ensure that these landfills with a significantly high risk of groundwater contamination are properly managed, a risk-based ranking method related to groundwater contamination is needed. In this research, a risk-based prioritization method for the classification of groundwater pollution from hazardous waste landfills was established. The method encompasses five phases, including risk pre-screening, indicator selection, characterization, classification and, lastly, validation. In the risk ranking index system employed here, 14 indicators involving hazardous waste landfills and migration in the vadose zone as well as aquifer were selected. The boundary of each indicator was determined by K-means cluster analysis and the weight of each indicator was calculated by principal component analysis. These methods were applied to 37 hazardous waste landfills in China. The result showed that the risk for groundwater contamination from hazardous waste landfills could be ranked into three classes from low to high risk. In all, 62.2 % of the hazardous waste landfill sites were classified in the low and medium risk classes. The process simulation method and standardized anomalies were used to validate the result of risk ranking; the results were consistent with the simulated results related to the characteristics of contamination. The risk ranking method was feasible, valid and can provide reference data related to risk management for groundwater contamination at hazardous waste landfill sites.

  7. Using centrality to rank web snippets

    NARCIS (Netherlands)

    Jijkoun, V.; de Rijke, M.; Peters, C.; Jijkoun, V.; Mandl, T.; Müller, H.; Oard, D.W.; Peñas, A.; Petras, V.; Santos, D.

    2008-01-01

    We describe our participation in the WebCLEF 2007 task, targeted at snippet retrieval from web data. Our system ranks snippets based on a simple similarity-based centrality, inspired by the web page ranking algorithms. We experimented with retrieval units (sentences and paragraphs) and with the

  8. Fuzzy Critical Path Method Based on Lexicographic Ordering

    Directory of Open Access Journals (Sweden)

    Phani Bushan Rao P

    2012-01-01

    Full Text Available The Critical Path Method (CPM) is useful for the planning and control of complex projects. The CPM identifies the critical activities in the critical path of an activity network. The successful implementation of CPM requires the availability of clearly determined time durations for each activity. However, in practical situations this requirement is usually hard to fulfil, since many of the activities will be executed for the first time. Hence, there is always uncertainty about the time durations of activities in network planning. This has led to the development of fuzzy CPM. In this paper, we apply a lexicographic ordering method for ranking fuzzy numbers to the critical path method in a fuzzy project network, where the duration time of each activity is represented by a trapezoidal fuzzy number. The proposed method is compared with fuzzy CPM based on different ranking methods for fuzzy numbers. The comparison reveals that the method proposed in this paper is more effective in determining the activity criticalities and finding the critical path. This new method is simpler for calculating the fuzzy critical path than many methods proposed so far in the literature.
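
    A toy sketch of the fuzzy-CPM idea: trapezoidal fuzzy durations are added component-wise along each path and paths are compared with a simple lexicographic key (mean of the four parameters, then spread). The key, the activities and the network are illustrative assumptions, not the paper's exact lexicographic ordering.

```python
# Toy fuzzy critical path with trapezoidal durations (a, b, c, d).
from itertools import chain

def add(*traps):                       # component-wise addition of trapezoids
    return tuple(sum(t[i] for t in traps) for i in range(4))

def lex_key(t):                        # (mean, -spread): an assumed ranking key
    a, b, c, d = t
    return ((a + b + c + d) / 4.0, -(d - a))

# Hypothetical activity network, expressed as its paths.
activities = {
    "A": (2, 3, 4, 5), "B": (1, 2, 2, 3), "C": (3, 4, 5, 7),
    "D": (2, 2, 3, 4), "E": (1, 1, 2, 2),
}
paths = {"A-B-E": ["A", "B", "E"], "A-C": ["A", "C"], "D-C": ["D", "C"]}

path_durations = {name: add(*(activities[a] for a in acts))
                  for name, acts in paths.items()}
critical = max(path_durations, key=lambda p: lex_key(path_durations[p]))
print(path_durations)
print("critical path:", critical)
```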

  9. A universal rank-size law

    CERN Document Server

    Ausloos, Marcel

    2016-01-01

    A mere hyperbolic law, like the Zipf power law, is often inadequate to describe rank-size relationships. An alternative theoretical distribution is proposed based on theoretical physics arguments, starting from the Yule-Simon distribution. A model is proposed leading to a universal form. A theoretical suggestion for the "best (or optimal) distribution" is provided through an entropy argument. The ranking of areas through the number of cities in various countries and some sport competition rankings serve as the present illustrations.

  10. Literature Based Discovery: Models, methods, and trends.

    Science.gov (United States)

    Henry, Sam; McInnes, Bridget T

    2017-10-01

    This paper provides an introduction and overview of literature based discovery (LBD) in the biomedical domain. It introduces the reader to modern and historical LBD models, key system components, evaluation methodologies, and current trends. After completion, the reader will be familiar with the challenges and methodologies of LBD. The reader will be capable of distinguishing between recent LBD systems and publications, and be capable of designing an LBD system for a specific application. The intended audience ranges from biomedical researchers curious about LBD, to someone looking to design an LBD system, to an LBD expert trying to catch up on trends in the field. The reader need not be familiar with LBD, but knowledge of biomedical text processing tools is helpful. This paper describes a unifying framework for LBD systems. Within this framework, different models and methods are presented to both distinguish and show overlap between systems. Topics include term and document representation, system components, and an overview of models including co-occurrence models, semantic models, and distributional models. Other topics include uninformative term filtering, term ranking, results display, system evaluation, an overview of the application areas of drug development, drug repurposing, and adverse drug event prediction, and challenges and future directions. A timeline showing contributions to LBD and a table summarizing the works of several authors are provided. Topics are presented from a high-level perspective. References are given if more detailed analysis is required. Copyright © 2017 Elsevier Inc. All rights reserved.

  11. Rank Modulation for Translocation Error Correction

    CERN Document Server

    Farnoud, Farzad; Milenkovic, Olgica

    2012-01-01

    We consider rank modulation codes for flash memories that allow for handling arbitrary charge drop errors. Unlike classical rank modulation codes used for correcting errors that manifest themselves as swaps of two adjacently ranked elements, the proposed translocation rank codes account for more general forms of errors that arise in storage systems. Translocations represent a natural extension of the notion of adjacent transpositions and as such may be analyzed using related concepts in combinatorics and rank modulation coding. Our results include tight bounds on the capacity of translocation rank codes, construction techniques for asymptotically good codes, as well as simple decoding methods for one class of structured codes. As part of our exposition, we also highlight the close connections between the new code family and permutations with short common subsequences, deletion and insertion error-correcting codes for permutations and permutation arrays.

  12. Auto insurance fraud detection using unsupervised spectral ranking for anomaly

    Directory of Open Access Journals (Sweden)

    Ke Nian

    2016-03-01

    Full Text Available For many data mining problems, obtaining labels is costly and time consuming, if not practically infeasible. In addition, unlabeled data often include categorical or ordinal features which, compared with numerical features, can present additional challenges. We propose a new unsupervised spectral ranking method for anomaly (SRA). We illustrate that the spectral optimization in SRA can be viewed as a relaxation of an unsupervised SVM problem. We demonstrate that the first non-principal eigenvector of a Laplacian matrix is linked to a bi-class classification strength measure which can be used to rank anomalies. Using the first non-principal eigenvector of the Laplacian matrix directly, the proposed SRA generates an anomaly ranking either with respect to the majority class or with respect to two main patterns. The choice of the ranking reference can be made based on whether the cardinality of the smaller class (positive or negative) is sufficiently large. Using an auto insurance claim data set but ignoring labels when generating the ranking, we show that our proposed SRA significantly surpasses existing outlier-based fraud detection methods. Finally, we demonstrate that, while the proposed SRA yields good performance for a few similarity measures on the auto insurance claim data, notably ones based on the Hamming distance, choosing appropriate similarity measures for a fraud detection problem remains crucial.
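
    An illustrative sketch only (not the authors' SRA or its SVM-relaxation view): build an RBF similarity graph, take the first non-principal eigenvector of the graph Laplacian, and rank points by how far their entries lie from the bulk. The data, kernel bandwidth and scoring rule are assumptions.

```python
# Spectral anomaly ranking sketch using the Fiedler-like eigenvector.
import numpy as np

rng = np.random.default_rng(4)
X = np.vstack([rng.normal(0, 1, size=(95, 2)),        # majority class
               rng.normal(6, 1, size=(5, 2))])        # a few anomalies

d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
W = np.exp(-d2 / (2 * np.median(d2)))                 # RBF similarities
L = np.diag(W.sum(1)) - W                             # unnormalised graph Laplacian

eigvals, eigvecs = np.linalg.eigh(L)
fiedler = eigvecs[:, 1]                               # first non-principal eigenvector
scores = np.abs(fiedler - np.median(fiedler))         # distance from the majority
print("top-5 anomaly indices:", np.argsort(-scores)[:5])
```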

  13. Method for gesture based modeling

    DEFF Research Database (Denmark)

    2006-01-01

    A computer program based method is described for creating models using gestures. On an input device, such as an electronic whiteboard, a user draws a gesture which is recognized by a computer program and interpreted relative to a predetermined meta-model. Based on the interpretation, an algorithm...... is assigned to the gesture drawn by the user. The executed algorithm may, for example, consist in creating a new model element, modifying an existing model element, or deleting an existing model element....

  14. Proteome scale census of major facilitator superfamily transporters in Trichoderma reesei using protein sequence and structure based classification enhanced ranking.

    Science.gov (United States)

    Chaudhary, Nitika; Kumari, Indu; Sandhu, Padmani; Ahmed, Mushtaq; Akhter, Yusuf

    2016-07-01

    Trichoderma spp. have been acknowledged as potent bio-control agents against microbial pathogens and also as plant growth promoters. Various secondary metabolites are credited with these beneficial activities. The major facilitator superfamily (MFS) includes a large proportion of the efflux pumps that are linked with membrane transport of these secondary metabolites. We have carried out a proteome-wide identification of MFS transporters in Trichoderma reesei using a protein sequence- and structure-based hierarchical method. Of 9115 proteins, 448 were detected to carry transmembrane helices. MFS-specific intragenic gene duplication and its relation to transport function are presented. Finally, using homology-based techniques, domains and motifs of MFS families have been identified and utilized to classify them. From the query dataset of 448 transmembrane proteins, 148 proteins are identified as potential MFS transporters. Sugar porter, drug: H(+) antiporter-1, monocarboxylate porter and anion: cation symporter emerged as major MFS families with 51, 35, 17 and 11 members respectively. Representative protein tertiary structures of these families are homology modeled for structure-function analysis. This study may help to understand the molecular basis of secretion and transport of agriculturally valuable secondary metabolites produced by these bio-control fungal agents, which may be exploited in the future for enhancing their biotechnological applications in eco-friendly sustainable development. Copyright © 2016 Elsevier B.V. All rights reserved.

  15. Error analysis of stochastic gradient descent ranking.

    Science.gov (United States)

    Chen, Hong; Tang, Yi; Li, Luoqing; Yuan, Yuan; Li, Xuelong; Tang, Yuanyan

    2013-06-01

    Ranking is always an important task in machine learning and information retrieval, e.g., collaborative filtering, recommender systems, drug discovery, etc. A kernel-based stochastic gradient descent algorithm with the least squares loss is proposed for ranking in this paper. The implementation of this algorithm is simple, and an expression of the solution is derived via a sampling operator and an integral operator. An explicit convergence rate for learning a ranking function is given in terms of suitable choices of the step size and the regularization parameter. The analysis technique used here is capacity independent and is novel in the error analysis of ranking learning. Experimental results on real-world data have shown the effectiveness of the proposed algorithm in ranking tasks, which verifies the theoretical analysis of the ranking error.
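
    A schematic sketch of the algorithm family described above, not the paper's exact update, step-size schedule or regularization: an RKHS function is learned by stochastic gradient steps on a pairwise least-squares ranking residual. The data, kernel and learning rate are assumptions.

```python
# Kernel SGD for pairwise least-squares ranking (generic, hedged sketch).
import numpy as np

rng = np.random.default_rng(5)
X = rng.uniform(-1, 1, size=(200, 3))
y = X @ np.array([1.5, -2.0, 0.5]) + 0.1 * rng.normal(size=200)   # relevance

def rbf(A, b, gamma=1.0):
    return np.exp(-gamma * ((A - b) ** 2).sum(-1))

alpha = np.zeros(len(X))          # expansion coefficients of f in the RKHS
eta = 0.05

def f(x):
    return alpha @ rbf(X, x)

for _ in range(5000):
    i, j = rng.integers(len(X), size=2)
    err = (f(X[i]) - f(X[j])) - (y[i] - y[j])     # pairwise least-squares residual
    alpha[i] -= eta * err                          # f <- f - eta*err*(K(.,xi) - K(.,xj))
    alpha[j] += eta * err

# Evaluate: fraction of pairs ordered consistently with the true relevance.
pred = np.array([f(x) for x in X])
agree = np.mean(np.sign(pred[:, None] - pred[None, :]) ==
                np.sign(y[:, None] - y[None, :]))
print("pairwise agreement: %.3f" % agree)
```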

  16. University ranking methodologies. An interview with Ben Sowter about the Quacquarelli Symonds World University Ranking

    Directory of Open Access Journals (Sweden)

    Alberto Baccini

    2015-10-01

    Full Text Available University rankings represent a controversial issue in the debate about higher education policy. One of the best known university rankings is the Quacquarelli Symonds World University Rankings (QS), published annually since 2004 by Quacquarelli Symonds Ltd, a company founded in 1990 and headquartered in London. QS provides a ranking based on a score calculated by weighting six different indicators. The 2015 edition, published in October 2015, introduced major methodological innovations and, as a consequence, many universities worldwide underwent major changes of their scores and ranks. Ben Sowter, head of the intelligence unit division of Quacquarelli Symonds, responds to 15 questions about the new QS methodology.

  17. LogDet Rank Minimization with Application to Subspace Clustering

    Directory of Open Access Journals (Sweden)

    Zhao Kang

    2015-01-01

    Full Text Available Low-rank matrices are desired in many machine learning and computer vision problems. Most recent studies use the nuclear norm as a convex surrogate of the rank operator. However, all singular values are simply added together by the nuclear norm, and thus the rank may not be well approximated in practical problems. In this paper, we propose using a log-determinant (LogDet) function as a smooth and closer, though nonconvex, approximation to rank for obtaining a low-rank representation in subspace clustering. An augmented Lagrange multiplier strategy is applied to iteratively optimize the LogDet-based nonconvex objective function on potentially large-scale data. By making use of the angular information of the principal directions of the resultant low-rank representation, an affinity graph matrix is constructed for spectral clustering. Experimental results on motion segmentation and face clustering data demonstrate that the proposed method often outperforms state-of-the-art subspace clustering algorithms.
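
    As a quick intuition for why a LogDet surrogate can track rank more faithfully than the nuclear norm, the sketch below evaluates both on the singular values of a matrix; the smoothing constant delta is an illustrative choice, and the actual subspace-clustering objective and its augmented Lagrangian solver are not reproduced.

```python
import numpy as np

def rank_surrogates(X, delta=1e-3):
    """Compare the nuclear norm with a LogDet-style smooth rank surrogate,
    sum_i log(1 + sigma_i^2 / delta), computed from the singular values of X."""
    s = np.linalg.svd(X, compute_uv=False)
    nuclear = s.sum()
    logdet = np.sum(np.log1p(s ** 2 / delta))
    return nuclear, logdet
```

    Because large singular values enter only logarithmically, the LogDet value saturates once the dominant directions are captured, whereas the nuclear norm keeps growing with their magnitude.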

  18. A higher rank Racah algebra and the {Z}_2^n Laplace–Dunkl operator

    Science.gov (United States)

    De Bie, Hendrik; Genest, Vincent X.; van de Vijver, Wouter; Vinet, Luc

    2018-01-01

    A higher rank generalization of the (rank one) Racah algebra is obtained as the symmetry algebra of the Laplace–Dunkl operator associated to the {Z}_2^n root system. This algebra is also the invariance algebra of the generic superintegrable model on the n-sphere. Bases of Dunkl harmonics are constructed explicitly using a Cauchy–Kovalevskaia theorem. These bases consist of joint eigenfunctions of labelling Abelian subalgebras of the higher rank Racah algebra. A method to obtain expressions for both the connection coefficients between these bases and the action of the symmetries on these bases is presented.

  19. Entropy-based benchmarking methods

    NARCIS (Netherlands)

    Temurshoev, Umed

    2012-01-01

    We argue that benchmarking sign-volatile series should be based on the principle of movement and sign preservation, which states that a bench-marked series should reproduce the movement and signs in the original series. We show that the widely used variants of Denton (1971) method and the growth

  20. A new physical method to assess handle properties of fabrics made from wood-based fibers

    Science.gov (United States)

    Abu-Rous, M.; Liftinger, E.; Innerlohinger, J.; Malengier, B.; Vasile, S.

    2017-10-01

    In this work, the handfeel of fabrics made of wood-based fibers such as viscose, modal and Lyocell was investigated in relation to cotton fabrics, applying the Tissue Softness Analyzer (TSA) method in comparison to other classical methods. Two different construction groups of textiles were investigated, and the validity of TSA in assessing the textile softness of these constructions was tested. TSA results were compared to human hand evaluation as well as to classical physical measurements such as drape coefficient, ring pull-through and Handle-o-meter, and to a newer device, the Fabric Touch Tester (FTT). Physical methods as well as human hand assessments mostly agreed on the softest and smoothest range, but showed different rankings for fabrics at the harder/rougher end. The TSA ranking of softness and smoothness corresponded to the rankings by the other physical methods as well as to human hand feel for the basic textile constructions.

  1. Using reduced rank regression methods to identify dietary patterns associated with obesity: a cross-country study among European and Australian adolescents.

    Science.gov (United States)

    Huybrechts, Inge; Lioret, Sandrine; Mouratidou, Theodora; Gunter, Marc J; Manios, Yannis; Kersting, Mathilde; Gottrand, Frederic; Kafatos, Anthony; De Henauw, Stefaan; Cuenca-García, Magdalena; Widhalm, Kurt; Gonzales-Gross, Marcela; Molnar, Denes; Moreno, Luis A; McNaughton, Sarah A

    2017-01-01

    This study aims to examine repeatability of reduced rank regression (RRR) methods in calculating dietary patterns (DP) and cross-sectional associations with overweight (OW)/obesity across European and Australian samples of adolescents. Data from two cross-sectional surveys in Europe (2006/2007 Healthy Lifestyle in Europe by Nutrition in Adolescence study, including 1954 adolescents, 12-17 years) and Australia (2007 National Children's Nutrition and Physical Activity Survey, including 1498 adolescents, 12-16 years) were used. Dietary intake was measured using two non-consecutive, 24-h recalls. RRR was used to identify DP using dietary energy density, fibre density and percentage of energy intake from fat as the intermediate variables. Associations between DP scores and body mass/fat were examined using multivariable linear and logistic regression as appropriate, stratified by sex. The first DP extracted (labelled 'energy dense, high fat, low fibre') explained 47 and 31 % of the response variation in Australian and European adolescents, respectively. It was similar for European and Australian adolescents and characterised by higher consumption of biscuits/cakes, chocolate/confectionery, crisps/savoury snacks, sugar-sweetened beverages, and lower consumption of yogurt, high-fibre bread, vegetables and fresh fruit. DP scores were inversely associated with BMI z-scores in Australian adolescent boys and borderline inversely associated in European adolescent boys (as were associations with percentage body fat). Similarly, a lower likelihood of OW in boys was observed with higher DP scores in both surveys. No such relationships were observed in adolescent girls. In conclusion, the DP identified in this cross-country study was comparable for European and Australian adolescents, demonstrating robustness of the RRR method in calculating DP among populations. However, longitudinal designs are more relevant when studying diet-obesity associations, to prevent reverse causality.

  2. Academic rankings: an approach to rank portuguese universities Rankings académicos: un abordaje para clasificar las universidades portuguesas Rankings acadêmicos: uma abordagem ao ranking das universidades portuguesas

    Directory of Open Access Journals (Sweden)

    Pedro Bernardino

    2010-03-01

    Full Text Available The academic rankings are a controversial subject in higher education. However, despite all the criticism, academic rankings are here to stay and more and more different stakeholders use rankings to obtain information about the institutions' performance. The two most well-known rankings, The Times and the Shanghai Jiao Tong University rankings, have different methodologies. The Times ranking is based on peer review, whereas the Shanghai ranking has only quantitative indicators and is mainly based on research outputs. In Germany, the CHE ranking uses a different methodology from the traditional rankings, allowing the users to choose criteria and weights. The Portuguese higher education institutions are performing below their European peers, and the Government believes that an academic ranking could improve both performance and competitiveness between institutions. The purpose of this paper is to analyse the advantages and problems of academic rankings and provide guidance for a new Portuguese ranking.

  3. Biplots in Reduced-Rank Regression

    NARCIS (Netherlands)

    Braak, ter C.J.F.; Looman, C.W.N.

    1994-01-01

    Regression problems with a number of related response variables are typically analyzed by separate multiple regressions. This paper shows how these regressions can be visualized jointly in a biplot based on reduced-rank regression. Reduced-rank regression combines multiple regression and principal component analysis.

  4. Asset ranking manager (ranking index of components)

    Energy Technology Data Exchange (ETDEWEB)

    Maloney, S.M.; Engle, A.M.; Morgan, T.A. [Applied Reliability, Maracor Software and Engineering (United States)

    2004-07-01

    The Ranking Index of Components (RIC) is generated by the Asset Reliability Manager (ARM), a web-enabled front end in which plant database information fields from several disparate databases are combined. That information is used to create a specific weighted number (Ranking Index) relating to a component's health and risk to the site. The higher the number, the higher the priority that any work associated with that component receives. ARM provides site Engineering, Maintenance and Work Control personnel with a composite, real-time (current condition) view of a component's 'risk of not working' for the plant. Information is extracted from the existing Computerized Maintenance Management System (CMMS) and specific site applications and processed nightly. ARM helps to ensure that the most important work is placed into the work weeks and that non-value-added work is either deferred, changed in frequency or deleted. This information is on the web, updated each night, and available for all employees to use. This effort assists the work management specialist when allocating limited resources to the most important work. The use of this tool has maximized resource usage, performing the most critical work with the available resources. The ARM numbers are valued inputs into work scoping for the workweek managers. System and Component Engineers are using ARM to identify the components that are at 'risk of failure' and therefore should be placed into the appropriate work week schedule.

  5. Using Bibliographic Knowledge for Ranking in Scientific Publication Databases

    CERN Document Server

    Vesely, Martin; Le Meur, Jean-Yves

    2008-01-01

    Document ranking for scientific publications involves a variety of specialized resources (e.g. author or citation indexes) that are usually difficult to use within standard general-purpose search engines, which typically operate on large-scale heterogeneous document collections for which the required specialized resources are not always available for all the documents present in the collections. Integrating such resources into specialized information retrieval engines is therefore important to cope with community-specific user expectations that strongly influence the perception of relevance within the considered community. In this perspective, this paper extends the notion of ranking with various methods exploiting different types of bibliographic knowledge that represent a crucial resource for measuring the relevance of scientific publications. In our work, we experimentally evaluated the adequacy of two such ranking methods (one based on freshness, i.e. the publication date, and the other on a novel index, the ...

  6. Survey of sampling-based methods for uncertainty and sensitivity analysis.

    Energy Technology Data Exchange (ETDEWEB)

    Johnson, Jay Dean; Helton, Jon Craig; Sallaberry, Cedric J. PhD. (.; .); Storlie, Curt B. (Colorado State University, Fort Collins, CO)

    2006-06-01

    Sampling-based methods for uncertainty and sensitivity analysis are reviewed. The following topics are considered: (1) Definition of probability distributions to characterize epistemic uncertainty in analysis inputs, (2) Generation of samples from uncertain analysis inputs, (3) Propagation of sampled inputs through an analysis, (4) Presentation of uncertainty analysis results, and (5) Determination of sensitivity analysis results. Special attention is given to the determination of sensitivity analysis results, with brief descriptions and illustrations given for the following procedures/techniques: examination of scatterplots, correlation analysis, regression analysis, partial correlation analysis, rank transformations, statistical tests for patterns based on gridding, entropy tests for patterns based on gridding, nonparametric regression analysis, squared rank differences/rank correlation coefficient test, two dimensional Kolmogorov-Smirnov test, tests for patterns based on distance measures, top down coefficient of concordance, and variance decomposition.
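
    To make the rank-transformation step concrete, here is a toy sensitivity screen that rank-correlates each sampled input with the model output; the analytic test model, sample size, and plain random sampling are invented for the illustration (a Latin hypercube design would be more typical in practice).

```python
import numpy as np

def rank_cc(x, y):
    """Spearman-type rank correlation: Pearson correlation of the rank-transformed data."""
    rx = np.argsort(np.argsort(x))
    ry = np.argsort(np.argsort(y))
    return np.corrcoef(rx, ry)[0, 1]

rng = np.random.default_rng(1)
X = rng.uniform(size=(1000, 3))                               # sampled uncertain inputs
y = 4.0 * X[:, 0] ** 2 + 0.5 * X[:, 1] + rng.normal(scale=0.05, size=1000)
rccs = [rank_cc(X[:, k], y) for k in range(3)]                 # input 0 should dominate, input 2 ~ 0
```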

  7. Fast Estimation of Approximate Matrix Ranks Using Spectral Densities.

    Science.gov (United States)

    Ubaru, Shashanka; Saad, Yousef; Seghouane, Abd-Krim

    2017-05-01

    Many machine learning and data-related applications require the knowledge of approximate ranks of large data matrices at hand. This letter presents two computationally inexpensive techniques to estimate the approximate ranks of such matrices. These techniques exploit approximate spectral densities, popular in physics, which are probability density distributions that measure the likelihood of finding eigenvalues of the matrix at a given point on the real line. Integrating the spectral density over an interval gives the eigenvalue count of the matrix in that interval. Therefore, the rank can be approximated by integrating the spectral density over a carefully selected interval. Two different approaches are discussed to estimate the approximate rank, one based on Chebyshev polynomials and the other based on the Lanczos algorithm. In order to obtain the appropriate interval, it is necessary to locate a gap between the eigenvalues that correspond to noise and the relevant eigenvalues that contribute to the matrix rank. A method for locating this gap and selecting the interval of integration is proposed based on the plot of the spectral density. Numerical experiments illustrate the performance of these techniques on matrices from typical applications.
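
    The counting idea can be illustrated with a small dense example: locate the largest gap in the spectrum and count the eigenvalues above it. The paper's contribution is to do this without a full eigendecomposition, via Chebyshev- or Lanczos-based spectral density estimates, which the toy below does not attempt.

```python
import numpy as np

def approximate_rank(A):
    """Count eigenvalues of A A^T lying above the largest spectral gap (dense toy version)."""
    eig = np.sort(np.linalg.eigvalsh(A @ A.T))[::-1]   # squared singular values, descending
    gaps = eig[:-1] - eig[1:]
    cut = int(np.argmax(gaps))                         # largest gap separates signal from noise
    return cut + 1
```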

  8. Tensor-based dynamic reconstruction method for electrical capacitance tomography

    Science.gov (United States)

    Lei, J.; Mu, H. P.; Liu, Q. B.; Li, Z. H.; Liu, S.; Wang, X. Y.

    2017-03-01

    Electrical capacitance tomography (ECT) is an attractive visualization measurement method, in which the acquisition of high-quality images is beneficial for the understanding of the underlying physical or chemical mechanisms of the dynamic behaviors of the measurement objects. In real-world measurement environments, imaging objects are often in a dynamic process, and the exploitation of the spatial-temporal correlations related to the dynamic nature will contribute to improving the imaging quality. Different from existing imaging methods that are often used in ECT measurements, in this paper a dynamic image sequence is stacked into a third-order tensor that consists of a low rank tensor and a sparse tensor within the framework of the multiple measurement vectors model and the multi-way data analysis method. The low rank tensor models the similar spatial distribution information among frames, which is slowly changing over time, and the sparse tensor captures the perturbations or differences introduced in each frame, which is rapidly changing over time. With the assistance of the Tikhonov regularization theory and the tensor-based multi-way data analysis method, a new cost function, with the considerations of the multi-frames measurement data, the dynamic evolution information of a time-varying imaging object and the characteristics of the low rank tensor and the sparse tensor, is proposed to convert the imaging task in the ECT measurement into a reconstruction problem of a third-order image tensor. An effective algorithm is developed to search for the optimal solution of the proposed cost function, and the images are reconstructed via a batching pattern. The feasibility and effectiveness of the developed reconstruction method are numerically validated.
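
    A toy version of the low-rank-plus-sparse idea, applied to a matrix whose columns are stacked frames, is sketched below using alternating singular-value and soft thresholding; the thresholds are ad hoc, and the paper's actual cost function, which couples these terms with the ECT measurement model and Tikhonov regularization, is not represented.

```python
import numpy as np

def low_rank_sparse_split(M, lam=None, mu=None, n_iter=200):
    """Heuristic split of M into a low-rank part L (slowly varying background)
    and a sparse part S (rapid frame-wise changes)."""
    m, n = M.shape
    if lam is None:
        lam = 1.0 / np.sqrt(max(m, n))       # ad hoc sparsity weight
    if mu is None:
        mu = 0.25 * np.abs(M).mean()         # ad hoc thresholding level
    L = np.zeros_like(M, dtype=float)
    S = np.zeros_like(M, dtype=float)
    for _ in range(n_iter):
        U, s, Vt = np.linalg.svd(M - S, full_matrices=False)
        L = (U * np.maximum(s - mu, 0.0)) @ Vt                              # singular-value thresholding
        S = np.sign(M - L) * np.maximum(np.abs(M - L) - lam * mu, 0.0)      # entrywise soft thresholding
    return L, S
```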

  9. An Improved Approach to the PageRank Problems

    Directory of Open Access Journals (Sweden)

    Yue Xie

    2013-01-01

    Full Text Available We introduce a partition of the web pages particularly suited to the PageRank problems in which the web link graph has a nested block structure. Based on this partition of the web pages into dangling nodes, common nodes, and general nodes, the hyperlink matrix can be reordered into a simpler block structure. Then, based on a parallel computation method, we propose an algorithm for the PageRank problems. In this algorithm, the dimension of the linear system becomes smaller, and the vector for the general nodes in each block can be calculated separately in every iteration. Numerical experiments show that this approach speeds up the computation of PageRank.
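
    For reference, a plain power-iteration PageRank with dangling-node handling is sketched below; the reordering into dangling, common, and general node blocks and the parallel per-block computation described in the abstract are not shown.

```python
import numpy as np

def pagerank(A, d=0.85, tol=1e-10, max_iter=200):
    """Power-iteration PageRank on an adjacency matrix A (A[i, j] = 1 if i links to j).
    Dangling-node mass is redistributed uniformly at every step."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    out = A.sum(axis=1)
    P = np.divide(A, out[:, None], out=np.zeros_like(A), where=out[:, None] > 0)
    r = np.full(n, 1.0 / n)
    for _ in range(max_iter):
        dangling = r[out == 0].sum() / n
        r_new = d * (P.T @ r + dangling) + (1 - d) / n
        if np.abs(r_new - r).sum() < tol:
            return r_new
        r = r_new
    return r
```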

  10. Fuzzy logic and its application in football team ranking.

    Science.gov (United States)

    Zeng, Wenyi; Li, Junhong

    2014-01-01

    Fuzzy set theory and fuzzy logic are a highly suitable and applicable basis for developing knowledge-based systems in physical education for tasks such as the selection of athletes, the evaluation of different training approaches, team ranking, and the real-time monitoring of sports data. In this paper, we use fuzzy set theory and apply fuzzy clustering analysis to football team ranking. Based on certain rules, we propose four parameters to calculate the fuzzy similarity matrix, obtain the fuzzy equivalence matrix and the ranking result for our numerical example, T7, T3, T1, T9, T10, T8, T11, T12, T2, T6, T5, T4, and investigate the sensitivity of the four parameters. The study shows that our fuzzy logic method is reliable and stable when the parameters change within a certain range.
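
    To make the clustering step concrete, the sketch below computes the max-min transitive closure that turns a fuzzy similarity matrix into a fuzzy equivalence matrix and then cuts it at a chosen lambda level; the four parameters the authors use to build the similarity matrix are not reproduced, so the input R is assumed to be given.

```python
import numpy as np

def transitive_closure(R, max_iter=100):
    """Max-min transitive closure of a fuzzy similarity matrix R."""
    for _ in range(max_iter):
        # max-min composition: (R o R)[i, j] = max_k min(R[i, k], R[k, j])
        R2 = np.max(np.minimum(R[:, :, None], R[None, :, :]), axis=1)
        if np.allclose(R2, R):
            return R2
        R = R2
    return R

def clusters_at(Req, level):
    """Crisp clusters obtained by cutting the fuzzy equivalence matrix at a lambda level."""
    n = len(Req)
    labels = -np.ones(n, dtype=int)
    c = 0
    for i in range(n):
        if labels[i] < 0:
            labels[Req[i] >= level] = c   # transitivity makes this a full equivalence class
            c += 1
    return labels
```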

  11. Implicit Block Diagonal Low-Rank Representation.

    Science.gov (United States)

    Xie, Xingyu; Guo, Xianglin; Liu, Guangcan; Wang, Jun

    2017-10-17

    While current block-diagonal constrained subspace clustering methods are performed explicitly on the original data space, in practice it is often more desirable to embed the block-diagonal prior into the reproducing kernel Hilbert feature space by kernelization techniques, as the underlying data structure in reality is usually nonlinear. However, it is still unknown how to carry out the embedding and kernelization in models with block-diagonal constraints. In this work, we take a step in this direction. First, we establish a novel model termed Implicit Block Diagonal Low-Rank Representation (IBDLR), by incorporating the implicit feature representation and block-diagonal prior into the prevalent Low-Rank Representation (LRR) method. Second, and most importantly, we show that the IBDLR model can be kernelized by making use of a smoothed dual representation and the specifics of a proximal-gradient-based optimization algorithm. Finally, we provide some theoretical analyses for the convergence of our optimization algorithm. Comprehensive experiments on synthetic and real-world datasets demonstrate the superiority of our IBDLR over state-of-the-art methods.

  12. Multiplex PageRank.

    Directory of Open Access Journals (Sweden)

    Arda Halu

    Full Text Available Many complex systems can be described as multiplex networks in which the same nodes can interact with one another in different layers, thus forming a set of interacting and co-evolving networks. Examples of such multiplex systems are social networks where people are involved in different types of relationships and interact through various forms of communication media. The ranking of nodes in multiplex networks is one of the most pressing and challenging tasks that research on complex networks is currently facing. When pairs of nodes can be connected through multiple links and in multiple layers, the ranking of nodes should necessarily reflect the importance of nodes in one layer as well as their importance in other interdependent layers. In this paper, we draw on the idea of biased random walks to define the Multiplex PageRank centrality measure in which the effects of the interplay between networks on the centrality of nodes are directly taken into account. In particular, depending on the intensity of the interaction between layers, we define the Additive, Multiplicative, Combined, and Neutral versions of Multiplex PageRank, and show how each version reflects the extent to which the importance of a node in one layer affects the importance the node can gain in another layer. We discuss these measures and apply them to an online multiplex social network. Findings indicate that taking the multiplex nature of the network into account helps uncover the emergence of rankings of nodes that differ from the rankings obtained from one single layer. Results provide support in favor of the salience of multiplex centrality measures, like Multiplex PageRank, for assessing the prominence of nodes embedded in multiple interacting networks, and for shedding a new light on structural properties that would otherwise remain undetected if each of the interacting networks were analyzed in isolation.

  13. Multiplex PageRank.

    Science.gov (United States)

    Halu, Arda; Mondragón, Raúl J; Panzarasa, Pietro; Bianconi, Ginestra

    2013-01-01

    Many complex systems can be described as multiplex networks in which the same nodes can interact with one another in different layers, thus forming a set of interacting and co-evolving networks. Examples of such multiplex systems are social networks where people are involved in different types of relationships and interact through various forms of communication media. The ranking of nodes in multiplex networks is one of the most pressing and challenging tasks that research on complex networks is currently facing. When pairs of nodes can be connected through multiple links and in multiple layers, the ranking of nodes should necessarily reflect the importance of nodes in one layer as well as their importance in other interdependent layers. In this paper, we draw on the idea of biased random walks to define the Multiplex PageRank centrality measure in which the effects of the interplay between networks on the centrality of nodes are directly taken into account. In particular, depending on the intensity of the interaction between layers, we define the Additive, Multiplicative, Combined, and Neutral versions of Multiplex PageRank, and show how each version reflects the extent to which the importance of a node in one layer affects the importance the node can gain in another layer. We discuss these measures and apply them to an online multiplex social network. Findings indicate that taking the multiplex nature of the network into account helps uncover the emergence of rankings of nodes that differ from the rankings obtained from one single layer. Results provide support in favor of the salience of multiplex centrality measures, like Multiplex PageRank, for assessing the prominence of nodes embedded in multiple interacting networks, and for shedding a new light on structural properties that would otherwise remain undetected if each of the interacting networks were analyzed in isolation.

  14. Ranking of Rankings: Benchmarking Twenty-Five Higher Education Ranking Systems in Europe

    Science.gov (United States)

    Stolz, Ingo; Hendel, Darwin D.; Horn, Aaron S.

    2010-01-01

    The purpose of this study is to evaluate the ranking practices of 25 European higher education ranking systems (HERSs). Ranking practices were assessed with 14 quantitative measures derived from the Berlin Principles on Ranking of Higher Education Institutions (BPs). HERSs were then ranked according to their degree of congruence with the BPs.…

  15. Metabolomics-based approach for ranking the candidate structures of unidentified peaks in capillary electrophoresis time-of-flight mass spectrometry.

    Science.gov (United States)

    Yamamoto, Hiroyuki; Sasaki, Kazunori

    2017-04-01

    One of the technical challenges encountered during metabolomics research is determining the chemical structures of unidentified peaks. We have developed a metabolomics-based chemoinformatics approach for ranking the candidate structures of unidentified peaks. Our approach uses information about the known metabolites detected in samples containing unidentified peaks and involves three discrete steps. The first step involves identifying "precursor/product metabolites" as potential reactants or products derived from the unidentified peaks. In the second step, candidate structures for the unidentified peak are searched against the PubChem database using a molecular formula. These structures are then ranked by structural similarity against precursor/product metabolites and candidate structures. In the third step, the migration time is predicted to refine the candidate structures. Two simulation studies were conducted to highlight the efficacy of our approach, including the use of 20 proteinogenic amino acids as pseudo-unidentified peaks, and leave-one-out experiments for all of the annotated metabolites with and without filtering against the Human Metabolome Database. We also applied our approach to two unidentified peaks in a urine sample, which were identified as glycocyamidine and N-acetylglycine. These results suggest that our approach could be used to identify unidentified peaks during metabolomics analysis. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  16. A full ranking for decision making units using ideal and anti-ideal points in DEA.

    Science.gov (United States)

    Barzegarinegad, A; Jahanshahloo, G; Rostamy-Malkhalifeh, M

    2014-01-01

    We propose a procedure for ranking decision making units in data envelopment analysis, based on ideal and anti-ideal points in the production possibility set. Moreover, a model has been introduced to compute the performance of a decision making unit at these two points using a common set of weights. One advantage of this method is that all decision making units can be ranked by solving only three programs, and the number of programs to solve does not depend on the number of decision making units. Another advantage of this procedure is that it ranks all the extreme and non-extreme efficient decision making units. In other words, the suggested ranking method seeks a set of common weights for all units to make them fully ranked. Finally, it was applied to different sets of real data and compared with other procedures.

  17. Comparing classical and quantum PageRanks

    Science.gov (United States)

    Loke, T.; Tang, J. W.; Rodriguez, J.; Small, M.; Wang, J. B.

    2017-01-01

    Following recent developments in quantum PageRanking, we present a comparative analysis of discrete-time and continuous-time quantum-walk-based PageRank algorithms. Relative to classical PageRank and to different extents, the quantum measures better highlight secondary hubs and resolve ranking degeneracy among peripheral nodes for all networks we studied in this paper. For the discrete-time case, we investigated the periodic nature of the walker's probability distribution for a wide range of networks and found that the dominant period does not grow with the size of these networks. Based on this observation, we introduce a new quantum measure using the maximum probabilities of the associated walker during the first couple of periods. This is particularly important, since it leads to a quantum PageRanking scheme that is scalable with respect to network size.

  18. Ranking Fuzzy Numbers and Its Application to Products Attributes Preferences

    OpenAIRE

    Lazim Abdullah; Nor Nashrah Ahmad Fauzee

    2011-01-01

    Ranking is one of the widely used methods in fuzzy decision making environments. The recent method for ranking fuzzy numbers proposed by Wang and Li is claimed to be an improved version of ranking. However, the method has never been simplified and tested in real-life applications. This paper presents a four-step computation of ranking fuzzy numbers and its application in ranking attributes of selected chocolate products. The four-step algorithm was formulated to rank fuzzy numbers and followed by a tes...

  19. Sparse Contextual Activation for Efficient Visual Re-Ranking.

    Science.gov (United States)

    Bai, Song; Bai, Xiang

    2016-03-01

    In this paper, we propose an extremely efficient algorithm for visual re-ranking. By considering the original pairwise distance in the contextual space, we develop a feature vector called sparse contextual activation (SCA) that encodes the local distribution of an image. Hence, the re-ranking task can be simply accomplished by vector comparison under the generalized Jaccard metric, which has its theoretical meaning in fuzzy set theory. In order to improve the time efficiency of the re-ranking procedure, an inverted index is introduced to speed up the computation of the generalized Jaccard metric. As a result, the average time cost of re-ranking for a certain query can be kept within 1 ms. Furthermore, inspired by query expansion, we also develop an additional method called local consistency enhancement on the proposed SCA to improve the retrieval performance in an unsupervised manner. On the other hand, the retrieval performance using a single feature may not be satisfactory enough, which inspires us to fuse multiple complementary features for accurate retrieval. Based on SCA, a robust feature fusion algorithm is exploited that also preserves the characteristic of high time efficiency. We assess our proposed method in various visual re-ranking tasks. Experimental results on the Princeton shape benchmark (3D object), WM-SRHEC07 (3D competition), YAEL data set B (face), MPEG-7 data set (shape), and Ukbench data set (image) manifest the effectiveness and efficiency of SCA.
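
    A minimal sketch of the vector-comparison step is given below, assuming the sparse contextual activation vectors have already been built; the construction of those vectors from the local neighborhood distribution, the inverted-index speedup, and the feature-fusion scheme are not shown.

```python
import numpy as np

def generalized_jaccard(a, b):
    """Generalized Jaccard similarity between two nonnegative activation vectors
    (assumed not both all-zero)."""
    return np.minimum(a, b).sum() / np.maximum(a, b).sum()

def rerank(query_sca, database_sca):
    """Re-rank database items by generalized Jaccard similarity to the query's SCA vector."""
    sims = np.array([generalized_jaccard(query_sca, v) for v in database_sca])
    return np.argsort(-sims)   # indices of database items, most similar first
```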

  20. Dynamic Matrix Rank

    DEFF Research Database (Denmark)

    Frandsen, Gudmund Skovbjerg; Frandsen, Peter Frands

    2009-01-01

    We consider maintaining information about the rank of a matrix under changes of the entries. For n×n matrices, we show an upper bound of O(n^1.575) arithmetic operations and a lower bound of Ω(n) arithmetic operations per element change. The upper bound is valid when changing up to O(n^0.575) entries in a single column of the matrix. We also give an algorithm that maintains the rank using O(n^2) arithmetic operations per rank one update. These bounds appear to be the first nontrivial bounds for the problem. The upper bounds are valid for arbitrary fields, whereas the lower bound is valid for algebraically closed fields. The upper bound for element updates uses fast rectangular matrix multiplication, and the lower bound involves further development of an earlier technique for proving lower bounds for dynamic computation of rational functions.

  1. Activity – based costing method

    Directory of Open Access Journals (Sweden)

    Èuchranová Katarína

    2001-06-01

    Full Text Available Activity-based costing is a method of identifying and tracking the operating costs directly associated with processing items. It is the practice of focusing on some unit of output, such as a purchase order or an assembled automobile, and attempting to determine its total cost as precisely as possible based on the fixed and variable costs of the inputs. You use ABC to identify, quantify and analyze the various cost drivers (such as labor, materials, administrative overhead, and rework) and to determine which ones are candidates for reduction. A process is any activity that accepts inputs, adds value to these inputs for customers and produces outputs for these customers. The customer may be either internal or external to the organization. Every activity within an organization comprises one or more processes. Inputs, controls and resources are all supplied to the process. A process owner is the person responsible for performing and/or controlling the activity. Tracing costs through their connection to individual activities and processes is a modern theme today. The introduction of this method is connected with very important changes in firm processes. The ABC method is an instrument that brings competitive advantages to the firm.

  2. ARWU vs. Alternative ARWU Ranking: What are the Consequences for Lower Ranked Universities?

    Directory of Open Access Journals (Sweden)

    Milica Maričić

    2017-05-01

    Full Text Available The ARWU ranking has been a source of academic debate since its development in 2003, but the same cannot be said of the Alternative ARWU ranking. Namely, the Alternative ARWU ranking attempts to reduce the influence of the prestigious indicators Alumni and Award, which are based on the number of Nobel Prizes and Fields Medals received by alumni or university staff. However, the consequences of reducing these two indicators have not been scrutinized in detail. Therefore, we propose a statistical approach to the comparison of the two rankings and an in-depth analysis of the Alternative ARWU groups. The obtained results, which are based on the official data, can provide new insights into the nature of the Alternative ARWU ranking. The presented approach might initiate further research on the Alternative ARWU ranking and on the impact of a university ranking's list length. JEL Classification: C10, C38, I23

  3. Block models and personalized PageRank

    National Research Council Canada - National Science Library

    Kloumann, Isabel M; Ugander, Johan; Kleinberg, Jon

    2017-01-01

    ...? We start from the observation that the most widely used techniques for this problem, personalized PageRank and heat kernel methods, operate in the space of "landing probabilities" of a random walk...

  4. Administrative Process and Criteria Ranking for Drug Entering Health Insurance List in Iran-TOPSIS-Based Consensus Model

    National Research Council Canada - National Science Library

    Viyanchi, Amir; Rajabzadeh Ghatari, Ali; Rasekh, Hamid Reza; SafiKhani, HamidReza

    2016-01-01

    The purposes of our study were to identify a drug entry process, collect, and prioritize criteria for selecting drugs for the list of basic health insurance commitments to prepare an "evidence based...

  5. Cointegration rank testing under conditional heteroskedasticity

    DEFF Research Database (Denmark)

    Cavaliere, Giuseppe; Rahbek, Anders Christian; Taylor, Robert M.

    2010-01-01

    (martingale difference) innovations. We first demonstrate that the limiting null distributions of the rank statistics coincide with those derived by previous authors who assume either independent and identically distributed (i.i.d.) or (strict and covariance) stationary martingale difference innovations. We then propose wild bootstrap implementations of the cointegrating rank tests and demonstrate that the associated bootstrap rank statistics replicate the first-order asymptotic null distributions of the rank statistics. We show that the same is also true of the corresponding rank tests based on the i.i.d. bootstrap of Swensen (2006, Econometrica 74, 1699-1714). The wild bootstrap, however, has the important property that, unlike the i.i.d. bootstrap, it preserves in the resampled data the pattern of heteroskedasticity present in the original shocks. Consistent with this, numerical evidence suggests that...

  6. Ranking Music Data by Relevance and Importance

    DEFF Research Database (Denmark)

    Ruxanda, Maria Magdalena; Nanopoulos, Alexandros; Jensen, Christian Søndergaard

    2008-01-01

    Due to the rapidly increasing availability of audio files on the Web, it is relevant to augment search engines with advanced audio search functionality. In this context, the ranking of the retrieved music is an important issue. This paper proposes a music ranking method capable of flexibly fusing the relevance and importance of music. The proposed method may support users with diverse needs when searching for music.

  7. Citation ranking versus peer evaluation of senior faculty research performance

    DEFF Research Database (Denmark)

    Meho, Lokman I.; Sonnenwald, Diane H.

    2000-01-01

    The purpose of this study is to analyze the relationship between citation ranking and peer evaluation in assessing senior faculty research performance. Other studies typically derive their peer evaluation data directly from referees, often in the form of ranking. This study uses two additional ... indicator of research performance of senior faculty members? Citation data, book reviews, and peer ranking were compiled and examined for faculty members specializing in Kurdish studies. Analysis shows that normalized citation ranking and citation content analysis data yield identical ranking results. Analysis also shows that normalized citation ranking and citation content analysis, book reviews, and peer ranking perform similarly (i.e., are highly correlated) for high-ranked and low-ranked senior scholars. Additional evaluation methods and measures that take into account the context and content ...

  8. Algebraic and computational aspects of real tensor ranks

    CERN Document Server

    Sakata, Toshio; Miyazaki, Mitsuhiro

    2016-01-01

    This book provides comprehensive summaries of theoretical (algebraic) and computational aspects of tensor ranks, maximal ranks, and typical ranks, over the real number field. Although tensor ranks have often been studied over the complex number field, it should be emphasized that this book treats real tensor ranks, which have direct applications in statistics. The book provides several interesting ideas, including determinant polynomials, determinantal ideals, absolutely nonsingular tensors, absolutely full column rank tensors, and their connection to bilinear maps and Hurwitz-Radon numbers. In addition to detailed reviews of methods to determine real tensor ranks, global theories such as the Jacobian method are also reviewed in detail. The book also includes an accessible and comprehensive introduction to the mathematical background, with basics of positive polynomials and calculations using the Groebner basis. Furthermore, this book provides insights into numerical methods of finding tensor ranks through...

  9. A Ranking Analysis of the Management Schools in Greater China (2000-2010): Evidence from the SSCI Database

    Science.gov (United States)

    Hou, Mingjun; Fan, Peihua; Liu, Heng

    2014-01-01

    The authors rank the management schools in Greater China (including Mainland China, Hong Kong, Taiwan, and Macau) based on their academic publications in the Social Sciences Citation Index management and business journals from 2000 to 2010. Following K. Ritzberger's (2008) and X. Yu and Z. Gao's (2010) ranking method, the authors develop six…

  10. Traffic Speed Data Imputation Method Based on Tensor Completion

    Directory of Open Access Journals (Sweden)

    Bin Ran

    2015-01-01

    Full Text Available Traffic speed data plays a key role in Intelligent Transportation Systems (ITS); however, missing traffic data would affect the performance of ITS as well as Advanced Traveler Information Systems (ATIS). In this paper, we handle this issue with a novel tensor-based imputation approach. Specifically, a tensor pattern is adopted for modeling traffic speed data, and then High accuracy Low Rank Tensor Completion (HaLRTC), an efficient tensor completion method, is employed to estimate the missing traffic speed data. The proposed method is able to recover missing entries from the given entries, which may be noisy, considering the severe fluctuation of traffic speed data compared with traffic volume. The proposed method is evaluated on the Performance Measurement System (PeMS) database, and the experimental results show the superiority of the proposed approach over state-of-the-art baseline approaches.
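
    As a much-simplified stand-in for tensor completion, the sketch below fills missing entries of a speed matrix (e.g., sensors by time slots) by iterated truncated SVD; HaLRTC itself works on a tensor with nuclear-norm terms on each unfolding, which this matrix toy does not capture.

```python
import numpy as np

def impute_low_rank(M, mask, rank=3, n_iter=100):
    """Fill missing entries of M (mask == True where observed) by alternating
    a rank-`rank` truncated SVD with re-imposing the observed entries."""
    X = np.where(mask, M, M[mask].mean())            # start missing entries at the observed mean
    for _ in range(n_iter):
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        L = (U[:, :rank] * s[:rank]) @ Vt[:rank]     # best rank-`rank` approximation
        X = np.where(mask, M, L)                     # keep observed entries, update missing ones
    return X
```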

  11. Motif discovery in ranked lists of sequences

    DEFF Research Database (Denmark)

    Nielsen, Morten Muhlig; Tataru, Paula; Madsen, Tobias

    2016-01-01

    . These features make Regmex well suited for a range of biological sequence analysis problems related to motif discovery, exemplified by microRNA seed enrichment, but also including enrichment problems involving complex motifs and combinations of motifs. We demonstrate a number of usage scenarios that take......Motif analysis has long been an important method to characterize biological functionality and the current growth of sequencing-based genomics experiments further extends its potential. These diverse experiments often generate sequence lists ranked by some functional property. There is therefore...... a growing need for motif analysis methods that can exploit this coupled data structure and be tailored for specific biological questions. Here, we present an exploratory motif analysis tool, Regmex (REGular expression Motif EXplorer), which offers several methods to evaluate the correlation of motifs...

  12. Improving Ranking Using Quantum Probability

    OpenAIRE

    Melucci, Massimo

    2011-01-01

    The paper shows that ranking information units by quantum probability differs from ranking them by classical probability, provided the same data are used for parameter estimation. As probability of detection (also known as recall or power) and probability of false alarm (also known as fallout or size) measure the quality of ranking, we point out and show that ranking by quantum probability yields higher probability of detection than ranking by classical probability provided a given probability of ...

  13. GeneRank: using search engine technology for the analysis of microarray experiments.

    Science.gov (United States)

    Morrison, Julie L; Breitling, Rainer; Higham, Desmond J; Gilbert, David R

    2005-09-21

    Interpretation of simple microarray experiments is usually based on the fold-change of gene expression between a reference and a "treated" sample where the treatment can be of many types from drug exposure to genetic variation. Interpretation of the results usually combines lists of differentially expressed genes with previous knowledge about their biological function. Here we evaluate a method--based on the PageRank algorithm employed by the popular search engine Google--that tries to automate some of this procedure to generate prioritized gene lists by exploiting biological background information. GeneRank is an intuitive modification of PageRank that maintains many of its mathematical properties. It combines gene expression information with a network structure derived from gene annotations (gene ontologies) or expression profile correlations. Using both simulated and real data we find that the algorithm offers an improved ranking of genes compared to pure expression change rankings. Our modification of the PageRank algorithm provides an alternative method of evaluating microarray experimental results which combines prior knowledge about the underlying network. GeneRank offers an improvement compared to assessing the importance of a gene based on its experimentally observed fold-change alone and may be used as a basis for further analytical developments.
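
    The linear system behind this kind of score can be written compactly, as in the hedged sketch below: a PageRank-style system whose teleport vector is the normalized absolute expression change. The damping value, the degree normalization guard, and the dense solve are illustrative simplifications rather than the authors' exact implementation.

```python
import numpy as np

def generank(W, expr, d=0.5):
    """Solve (I - d * W * D^{-1}) r = (1 - d) * ex, where W is a symmetric
    gene-gene connectivity matrix, D the degree diagonal, and ex the
    normalized absolute expression change."""
    deg = W.sum(axis=1)
    Dinv = np.diag(1.0 / np.maximum(deg, 1.0))       # guard against isolated genes
    ex = np.abs(expr) / np.abs(expr).sum()
    n = len(expr)
    return np.linalg.solve(np.eye(n) - d * (W @ Dinv), (1.0 - d) * ex)
```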

  14. GeneRank: Using search engine technology for the analysis of microarray experiments

    Directory of Open Access Journals (Sweden)

    Breitling Rainer

    2005-09-01

    Full Text Available Abstract Background Interpretation of simple microarray experiments is usually based on the fold-change of gene expression between a reference and a "treated" sample where the treatment can be of many types from drug exposure to genetic variation. Interpretation of the results usually combines lists of differentially expressed genes with previous knowledge about their biological function. Here we evaluate a method – based on the PageRank algorithm employed by the popular search engine Google – that tries to automate some of this procedure to generate prioritized gene lists by exploiting biological background information. Results GeneRank is an intuitive modification of PageRank that maintains many of its mathematical properties. It combines gene expression information with a network structure derived from gene annotations (gene ontologies) or expression profile correlations. Using both simulated and real data we find that the algorithm offers an improved ranking of genes compared to pure expression change rankings. Conclusion Our modification of the PageRank algorithm provides an alternative method of evaluating microarray experimental results which combines prior knowledge about the underlying network. GeneRank offers an improvement compared to assessing the importance of a gene based on its experimentally observed fold-change alone and may be used as a basis for further analytical developments.

  15. Evaluation of treatment effects by ranking

    DEFF Research Database (Denmark)

    Halekoh, U; Kristensen, K

    2008-01-01

    In crop experiments measurements are often made by a judge evaluating the crops' conditions after treatment. In the present paper an analysis is proposed for experiments where plots of crops treated differently are mutually ranked. In the experimental layout the crops are treated on consecutive plots usually placed side by side in one or more rows. In the proposed method a judge ranks several neighbouring plots, say three, by ranking them from best to worst. For the next observation the judge moves on by no more than two plots, such that up to two plots will be re-evaluated again...

  16. An intertwined method for making low-rank, sum-of-product basis functions that makes it possible to compute vibrational spectra of molecules with more than 10 atoms

    Science.gov (United States)

    Thomas, Phillip S.; Carrington, Tucker

    2017-05-01

    We propose a method for solving the vibrational Schrödinger equation with which one can compute spectra for molecules with more than ten atoms. It uses sum-of-product (SOP) basis functions stored in a canonical polyadic tensor format and generated by evaluating matrix-vector products. By doing a sequence of partial optimizations, in each of which the factors in a SOP basis function for a single coordinate are optimized, the rank of the basis functions is reduced as matrix-vector products are computed. This is better than using an alternating least squares method to reduce the rank, as is done in the reduced-rank block power method. Partial optimization is better because it speeds up the calculation by about an order of magnitude and allows one to significantly reduce the memory cost. We demonstrate the effectiveness of the new method by computing vibrational spectra of two molecules, ethylene oxide (C2H4O) and cyclopentadiene (C5H6), with 7 and 11 atoms, respectively.

  17. Block models and personalized PageRank.

    Science.gov (United States)

    Kloumann, Isabel M; Ugander, Johan; Kleinberg, Jon

    2017-01-03

    Methods for ranking the importance of nodes in a network have a rich history in machine learning and across domains that analyze structured data. Recent work has evaluated these methods through the "seed set expansion problem": given a subset [Formula: see text] of nodes from a community of interest in an underlying graph, can we reliably identify the rest of the community? We start from the observation that the most widely used techniques for this problem, personalized PageRank and heat kernel methods, operate in the space of "landing probabilities" of a random walk rooted at the seed set, ranking nodes according to weighted sums of landing probabilities of different length walks. Both schemes, however, lack an a priori relationship to the seed set objective. In this work, we develop a principled framework for evaluating ranking methods by studying seed set expansion applied to the stochastic block model. We derive the optimal gradient for separating the landing probabilities of two classes in a stochastic block model and find, surprisingly, that under reasonable assumptions the gradient is asymptotically equivalent to personalized PageRank for a specific choice of the PageRank parameter [Formula: see text] that depends on the block model parameters. This connection provides a formal motivation for the success of personalized PageRank in seed set expansion and node ranking generally. We use this connection to propose more advanced techniques incorporating higher moments of landing probabilities; our advanced methods exhibit greatly improved performance, despite being simple linear classification rules, and are even competitive with belief propagation.

  18. Fractional cointegration rank estimation

    DEFF Research Database (Denmark)

    Lasak, Katarzyna; Velasco, Carlos

    We consider cointegration rank estimation for a p-dimensional Fractional Vector Error Correction Model. We propose a new two-step procedure which allows testing for further long-run equilibrium relations with possibly different persistence levels. The first step consists in estimating the parameters ... to control for stochastic trend estimation effects from the first step. The critical values of the tests proposed depend only on the number of common trends under the null, p - r, and on the interval of the cointegration degrees b allowed, but not on the true cointegration degree b0. Hence, no additional...

  19. Integrated inventory ranking system for oilfield equipment industry

    Directory of Open Access Journals (Sweden)

    Jalel Ben Hmida

    2014-01-01

    Full Text Available Purpose: This case study is motivated by the subcontracting problem in an oilfield equipment and service company where the management needs to decide which parts to manufacture in-house when the capacity is not enough to make all required parts. Currently the company is making subcontracting decisions based on management’s experience. Design/methodology/approach: Working with the management, a decision support system (DSS is developed to rank parts by integrating three inventory classification methods considering both quantitative factors such as cost and demand, and qualitative factors such as functionality, efficiency, and quality. The proposed integrated inventory ranking procedure will make use of three classification methods: ABC, FSN, and VED. Findings: An integration mechanism using weights is developed to rank the parts based on the total priority scores. The ranked list generated by the system helps management to identify about 50 critical parts to manufacture in-house. Originality/value: The integration of all three inventory classification techniques into a single system is a unique feature of this research. This is important as it provides a more inclusive, big picture view of the DSS for management’s use in making business decisions.
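
    A minimal sketch of such a weighted integration is shown below; the category scores and weights are hypothetical placeholders, since the paper's actual scoring scheme is not given here.

```python
# Hypothetical category scores and weights; the paper's actual scoring scheme is not reproduced.
ABC_SCORE = {"A": 3, "B": 2, "C": 1}          # by annual usage value
FSN_SCORE = {"F": 3, "S": 2, "N": 1}          # by movement (fast/slow/non-moving)
VED_SCORE = {"V": 3, "E": 2, "D": 1}          # by criticality (vital/essential/desirable)

def total_priority(abc, fsn, ved, w=(0.4, 0.3, 0.3)):
    """Weighted sum of the three classification scores for one part."""
    return w[0] * ABC_SCORE[abc] + w[1] * FSN_SCORE[fsn] + w[2] * VED_SCORE[ved]

# Hypothetical parts with their (ABC, FSN, VED) classes, ranked by total priority score.
parts = {"pump-01": ("A", "F", "V"), "seal-17": ("C", "S", "E"), "gasket-09": ("B", "N", "D")}
ranked = sorted(parts, key=lambda p: total_priority(*parts[p]), reverse=True)
```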

  20. Leveraging Multiactions to Improve Medical Personalized Ranking for Collaborative Filtering.

    Science.gov (United States)

    Gao, Shan; Guo, Guibing; Li, Runzhi; Wang, Zongmin

    2017-01-01

    Nowadays, providing high-quality recommendation services to users is an essential component in web applications, including shopping, making friends, and healthcare. This can be regarded either as a problem of estimating users' preference by exploiting explicit feedbacks (numerical ratings), or as a problem of collaborative ranking with implicit feedback (e.g., purchases, views, and clicks). Previous works for solving this issue include pointwise regression methods and pairwise ranking methods. The emerging healthcare websites and online medical databases impose a new challenge for medical service recommendation. In this paper, we develop a model, MBPR (Medical Bayesian Personalized Ranking over multiple users' actions), based on the simple observation that users tend to assign higher ranks to some kind of healthcare services that are meanwhile preferred in users' other actions. Experimental results on the real-world datasets demonstrate that MBPR achieves more accurate recommendations than several state-of-the-art methods and shows its generality and scalability via experiments on the datasets from one mobile shopping app.
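
    The pairwise idea can be sketched as a single Bayesian Personalized Ranking update, shown below with latent factor matrices U and V; MBPR additionally draws the positive and negative items according to multiple action types, and that sampling logic (as well as the learning rate and regularization values here) is an illustrative placeholder.

```python
import numpy as np

def bpr_step(U, V, u, i, j, lr=0.05, reg=0.01):
    """One stochastic ascent step on ln sigma(x_uij), where x_uij = <U_u, V_i - V_j>
    and item i should be ranked above item j for user u."""
    x_uij = U[u] @ (V[i] - V[j])
    g = 1.0 / (1.0 + np.exp(x_uij))      # = sigma(-x_uij), the gradient scale
    uu = U[u].copy()                     # use the pre-update user factors for the item updates
    U[u] += lr * (g * (V[i] - V[j]) - reg * U[u])
    V[i] += lr * (g * uu - reg * V[i])
    V[j] += lr * (-g * uu - reg * V[j])
```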

  1. Leveraging Multiactions to Improve Medical Personalized Ranking for Collaborative Filtering

    Directory of Open Access Journals (Sweden)

    Shan Gao

    2017-01-01

    Full Text Available Nowadays, providing high-quality recommendation services to users is an essential component in web applications, including shopping, making friends, and healthcare. This can be regarded either as a problem of estimating users’ preference by exploiting explicit feedbacks (numerical ratings, or as a problem of collaborative ranking with implicit feedback (e.g., purchases, views, and clicks. Previous works for solving this issue include pointwise regression methods and pairwise ranking methods. The emerging healthcare websites and online medical databases impose a new challenge for medical service recommendation. In this paper, we develop a model, MBPR (Medical Bayesian Personalized Ranking over multiple users’ actions, based on the simple observation that users tend to assign higher ranks to some kind of healthcare services that are meanwhile preferred in users’ other actions. Experimental results on the real-world datasets demonstrate that MBPR achieves more accurate recommendations than several state-of-the-art methods and shows its generality and scalability via experiments on the datasets from one mobile shopping app.

  2. NUCLEAR POWER PLANTS SAFETY IMPROVEMENT PROJECTS RANKING

    OpenAIRE

    Григорян, Анна Сергеевна; Тигран Георгиевич ГРИГОРЯН; Квасневский, Евгений Анатольевич

    2013-01-01

    Ranking nuclear power plant safety improvement projects is the most important task for ensuring the efficiency of the NPP project management office's work. The total number of projects in an NPP portfolio may reach more than 400. The features of ranking nuclear power plant safety improvement projects in an NPP portfolio determine the choice of verbal decision analysis as the decision-making method, as it allows one to quickly compare a number of alternatives that are not available at the time of con...

  3. DrugE-Rank: improving drug-target interaction prediction of new candidate drugs or targets by ensemble learning to rank.

    Science.gov (United States)

    Yuan, Qingjun; Gao, Junning; Wu, Dongliang; Zhang, Shihua; Mamitsuka, Hiroshi; Zhu, Shanfeng

    2016-06-15

    Identifying drug-target interactions is an important task in drug discovery. To reduce the heavy time and financial cost of experimental work, many computational approaches have been proposed. Although these approaches have used many different principles, their performance is far from satisfactory, especially in predicting drug-target interactions of new candidate drugs or targets. Approaches based on machine learning for this problem can be divided into two types: feature-based and similarity-based methods. Learning to rank is the most powerful technique among the feature-based methods. Similarity-based methods are well accepted, due to their idea of connecting the chemical and genomic spaces, represented by drug and target similarities, respectively. We propose a new method, DrugE-Rank, to improve the prediction performance by combining the advantages of the two different types of methods. That is, DrugE-Rank uses learning to rank, for which multiple well-known similarity-based methods can be used as components of ensemble learning. The performance of DrugE-Rank is thoroughly examined by three main experiments using data from DrugBank: (i) cross-validation on FDA (US Food and Drug Administration) approved drugs before March 2014; (ii) independent test on FDA approved drugs after March 2014; and (iii) independent test on FDA experimental drugs. Experimental results show that DrugE-Rank outperforms competing methods significantly, especially achieving more than 30% improvement in Area under Prediction Recall curve for FDA approved new drugs and FDA experimental drugs. http://datamining-iip.fudan.edu.cn/service/DrugE-Rank zhusf@fudan.edu.cn Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press.

  4. An Integrated Multicriteria Decision-Making Approach for Evaluating Nuclear Fuel Cycle Systems for Long-term Sustainability on the Basis of an Equilibrium Model: Technique for Order of Preference by Similarity to Ideal Solution, Preference Ranking Organization Method for Enrichment Evaluation, and Multiattribute Utility Theory Combined with Analytic Hierarchy Process

    Directory of Open Access Journals (Sweden)

    Saerom Yoon

    2017-02-01

    Full Text Available The focus on the issues surrounding spent nuclear fuel and lifetime extension of old nuclear power plants continues to grow nowadays. A transparent decision-making process to identify the most suitable nuclear fuel cycle (NFC) is considered to be the key task in the current situation. Through this study, an attempt is made to develop an equilibrium model for the NFC to calculate the material flows based on 1 TWh of electricity production, and to perform integrated multicriteria decision-making analyses via the technique for order of preference by similarity to ideal solution, the preference ranking organization method for enrichment evaluation, and multiattribute utility theory, each combined with the analytic hierarchy process. This comparative study is aimed at screening and ranking the three selected NFC options against five aspects: sustainability, environmental friendliness, economics, proliferation resistance, and technical feasibility. The selected fuel cycle options include pressurized water reactor (PWR) once-through cycle, PWR mixed oxide cycle, or pyroprocessing sodium-cooled fast reactor cycle. A sensitivity analysis was performed to confirm the robustness of the results and explore the influence of criteria on the obtained ranking. As a result of the comparative analysis, the pyroprocessing sodium-cooled fast reactor cycle is determined to be the most competitive option among the NFC scenarios.
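
    As a point of reference for one of the methods named above, the following is a minimal TOPSIS sketch: normalize the decision matrix, weight it, measure distances to the ideal and anti-ideal solutions, and rank by closeness coefficient. The decision matrix, weights, and criterion directions are illustrative placeholders, not the paper's data.

```python
# Minimal TOPSIS sketch with illustrative data (not the paper's NFC figures).
import numpy as np

def topsis(X, w, benefit):
    """X: alternatives x criteria, w: weights, benefit: True where larger is better."""
    R = X / np.linalg.norm(X, axis=0)          # vector-normalize each criterion
    V = R * w                                  # weighted normalized matrix
    ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
    anti  = np.where(benefit, V.min(axis=0), V.max(axis=0))
    d_pos = np.linalg.norm(V - ideal, axis=1)
    d_neg = np.linalg.norm(V - anti, axis=1)
    return d_neg / (d_pos + d_neg)             # closeness coefficient, higher is better

X = np.array([[7.0, 3.0, 5.0],                 # e.g., three fuel cycle options
              [6.0, 4.0, 6.0],
              [8.0, 2.0, 7.0]])
w = np.array([0.5, 0.3, 0.2])                  # e.g., AHP-derived weights (illustrative)
scores = topsis(X, w, benefit=np.array([True, False, True]))
print(np.argsort(-scores))                     # ranking, best alternative first
```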

  5. An integrated multicriteria decision-making approach for evaluating nuclear fuel cycle systems for long-term sustainability on the basis of an equilibrium model: Technique for order of preference by similarity to ideal solution, preference ranking organization method for enrichment evaluation, and multiattribute utility theory combined with analytic hierarchy process

    Energy Technology Data Exchange (ETDEWEB)

    Yoon, Sae Rom [Dept of Quantum Energy Chemical Engineering, Korea University of Science and Technology (KUST), Daejeon (Korea, Republic of); Choi, Sung Yeol [Ulsan National Institute of Science and Technology, Ulju (Korea, Republic of); Ko, Wonil [Nonproliferation System Development Division, Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2017-02-15

    The focus on the issues surrounding spent nuclear fuel and lifetime extension of old nuclear power plants continues to grow nowadays. A transparent decision-making process to identify the most suitable nuclear fuel cycle (NFC) is considered to be the key task in the current situation. Through this study, an attempt is made to develop an equilibrium model for the NFC to calculate the material flows based on 1 TWh of electricity production, and to perform integrated multicriteria decision-making analyses via the technique for order of preference by similarity to ideal solution, the preference ranking organization method for enrichment evaluation, and multiattribute utility theory, each combined with the analytic hierarchy process. This comparative study is aimed at screening and ranking the three selected NFC options against five aspects: sustainability, environmental friendliness, economics, proliferation resistance, and technical feasibility. The selected fuel cycle options include pressurized water reactor (PWR) once-through cycle, PWR mixed oxide cycle, or pyroprocessing sodium-cooled fast reactor cycle. A sensitivity analysis was performed to confirm the robustness of the results and explore the influence of criteria on the obtained ranking. As a result of the comparative analysis, the pyroprocessing sodium-cooled fast reactor cycle is determined to be the most competitive option among the NFC scenarios.

  6. Discriminant Context Information Analysis for Post-Ranking Person Re-Identification.

    Science.gov (United States)

    Garcia, Jorge; Martinel, Niki; Gardel, Alfredo; Bravo, Ignacio; Foresti, Gian Luca; Micheloni, Christian

    2017-04-01

    Existing approaches for person re-identification are mainly based on creating distinctive representations or on learning optimal metrics. The achieved results are then provided in the form of a list of ranked matching persons. It often happens that the true match is not ranked first but appears among the first positions. This is mostly due to the visual ambiguities shared between the true match and other "similar" persons. To date, such visual ambiguities, which limit the re-identification performance within the first ranks, have not been studied. We believe that an analysis of the similar appearances of the first ranks can be helpful in detecting, hence removing, such visual ambiguities. We propose to achieve such a goal by introducing an unsupervised post-ranking framework. Once the initial ranking is available, content and context sets are extracted. Then, these are exploited to remove the visual ambiguities and to obtain the discriminant feature space which is finally exploited to compute the new ranking. An in-depth analysis of the performance achieved on three public benchmark data sets supports our claims. For every data set, the proposed method remarkably improves the first-rank results and outperforms the state-of-the-art approaches.

  7. Ranking Quality in Higher Education: Guiding or Misleading?

    Science.gov (United States)

    Bergseth, Brita; Petocz, Peter; Abrandt Dahlgren, Madeleine

    2014-01-01

    The study examines two different models of measuring, assessing and ranking quality in higher education. Do different systems of quality assessment lead to equivalent conclusions about the quality of education? This comparative study is based on the rankings of 24 Swedish higher education institutions. Two ranking actors have independently…

  8. Calibrating Canadian Universities: Rankings for Sale Once Again

    Science.gov (United States)

    Cramer, Kenneth M.; Page, Stewart

    2007-01-01

    A summary and update on recent research by the authors and others concerning rankings of Canadian universities is presented. Some specific data are reported in regard to the 2005 and 2006 ranking data published by "Maclean's" magazine. Some criticisms and difficulties with the use of rank-based data are outlined with regard to the issues…

  9. Ranking adverse drug reactions with crowdsourcing.

    Science.gov (United States)

    Gottlieb, Assaf; Hoehndorf, Robert; Dumontier, Michel; Altman, Russ B

    2015-03-23

    There is no publicly available resource that provides the relative severity of adverse drug reactions (ADRs). Such a resource would be useful for several applications, including assessment of the risks and benefits of drugs and improvement of patient-centered care. It could also be used to triage predictions of drug adverse events. The intent of the study was to rank ADRs according to severity; to do so, we used Internet-based crowdsourcing. We assigned 126,512 pairwise comparisons of ADRs to 2589 Amazon Mechanical Turk workers and used these comparisons to rank-order 2929 ADRs. There is good correlation (rho=.53) between the mortality rates associated with ADRs and their rank. Our ranking highlights severe drug-ADR predictions, such as cardiovascular ADRs for raloxifene and celecoxib. It also triages genes associated with severe ADRs such as epidermal growth-factor receptor (EGFR), associated with glioblastoma multiforme, and SCN1A, associated with epilepsy. ADR ranking lays a first stepping stone toward personalized drug risk assessment. Ranking of ADRs using crowdsourcing may have useful clinical and financial implications, and should be further investigated in the context of health care decision making.
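
    A common way to turn pairwise severity judgments of the kind collected here into a global ranking is a Bradley-Terry model fitted with the standard minorization-maximization iterations; the sketch below illustrates that generic approach on toy data and is not necessarily the procedure used by the authors.

```python
# Bradley-Terry fit from pairwise "more severe" judgments (toy data, generic method).
import numpy as np

def bradley_terry(wins, n_iter=200):
    """wins[i, j] = number of times item i was judged more severe than item j."""
    n = wins.shape[0]
    comparisons = wins + wins.T                # total comparisons per pair
    p = np.ones(n)
    for _ in range(n_iter):
        total_wins = wins.sum(axis=1)
        denom = np.array([
            sum(comparisons[i, j] / (p[i] + p[j]) for j in range(n) if j != i)
            for i in range(n)
        ])
        p = total_wins / denom                 # MM update
        p /= p.sum()                           # fix the arbitrary scale
    return p                                   # larger p = judged more severe

wins = np.array([[0, 8, 9],                    # toy data: 3 ADRs
                 [2, 0, 7],
                 [1, 3, 0]])
severity = bradley_terry(wins)
print(np.argsort(-severity))                   # most severe first
```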

  10. Ranking Silent Nodes in Information Networks: A Quantitative Approach and Applications

    Science.gov (United States)

    Interdonato, Roberto; Tagarelli, Andrea

    This paper overviews recent research findings concerning a new challenging problem in information networks, namely identifying and ranking silent nodes. We present three case studies which show how silent nodes' behavior maps to different situations in computer networks, online social networks, and online collaboration networks, and we discuss major benefits in identifying and ranking silent nodes in such networks. We also provide an overview of our proposed approach, which relies on a new eigenvector-centrality graph-based ranking method built on a silent-oriented network model.

  11. The exact probability distribution of the rank product statistics for replicated experiments.

    Science.gov (United States)

    Eisinga, Rob; Breitling, Rainer; Heskes, Tom

    2013-03-18

    The rank product method is a widely accepted technique for detecting differentially regulated genes in replicated microarray experiments. To approximate the sampling distribution of the rank product statistic, the original publication proposed a permutation approach, whereas recently an alternative approximation based on the continuous gamma distribution was suggested. However, both approximations are imperfect for estimating small tail probabilities. In this paper we relate the rank product statistic to number theory and provide a derivation of its exact probability distribution and the true tail probabilities. Copyright © 2013 Federation of European Biochemical Societies. Published by Elsevier B.V. All rights reserved.
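
    For orientation, the rank product statistic itself can be computed in a few lines: rank the genes within each replicate and take the geometric mean of the ranks. The sketch below uses random toy data and does not reproduce the paper's contribution, namely the exact null distribution of the statistic.

```python
# Rank product statistic over replicated experiments (toy data).
import numpy as np
from scipy.stats import rankdata

def rank_product(X):
    """X: genes x replicates. Rank 1 = smallest value in a replicate; pass -X or
    fold changes depending on whether up- or down-regulation is of interest."""
    ranks = np.apply_along_axis(rankdata, 0, X)      # rank within each replicate
    return np.exp(np.mean(np.log(ranks), axis=1))    # geometric mean of ranks

X = np.random.default_rng(1).normal(size=(1000, 4)) # toy data: 1000 genes, 4 replicates
rp = rank_product(X)
print(np.argsort(rp)[:10])                           # candidate top genes (smallest rank products)
```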

  12. A Truncated Nuclear Norm Regularization Method Based on Weighted Residual Error for Matrix Completion.

    Science.gov (United States)

    Qing Liu; Zhihui Lai; Zongwei Zhou; Fangjun Kuang; Zhong Jin

    2016-01-01

    Low-rank matrix completion aims to recover a matrix from a small subset of its entries and has received much attention in the field of computer vision. Most existing methods formulate the task as a low-rank matrix approximation problem. A truncated nuclear norm has recently been proposed as a better approximation to the rank of matrix than a nuclear norm. The corresponding optimization method, truncated nuclear norm regularization (TNNR), converges better than the nuclear norm minimization-based methods. However, it is not robust to the number of subtracted singular values and requires a large number of iterations to converge. In this paper, a TNNR method based on weighted residual error (TNNR-WRE) for matrix completion and its extension model (ETNNR-WRE) are proposed. TNNR-WRE assigns different weights to the rows of the residual error matrix in an augmented Lagrange function to accelerate the convergence of the TNNR method. The ETNNR-WRE is much more robust to the number of subtracted singular values than the TNNR-WRE, TNNR alternating direction method of multipliers, and TNNR accelerated proximal gradient with Line search methods. Experimental results using both synthetic and real visual data sets show that the proposed TNNR-WRE and ETNNR-WRE methods perform better than TNNR and Iteratively Reweighted Nuclear Norm (IRNN) methods.
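
    The regularizer at the heart of TNNR can be illustrated briefly: the truncated nuclear norm is the sum of all singular values except the r largest, and singular value thresholding is the basic step inside many nuclear-norm solvers. The sketch below shows only these two ingredients, not the weighted residual error scheme proposed in the paper.

```python
# Truncated nuclear norm and a singular value thresholding step (illustrative only).
import numpy as np

def truncated_nuclear_norm(X, r):
    """Sum of all singular values of X except the r largest."""
    s = np.linalg.svd(X, compute_uv=False)
    return s[r:].sum()

def svt(X, tau):
    """Singular value thresholding: shrink all singular values by tau."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

rng = np.random.default_rng(0)
A = rng.normal(size=(50, 5)) @ rng.normal(size=(5, 40))   # an exactly rank-5 matrix
print(truncated_nuclear_norm(A, r=5))                      # ~0 for a rank-5 matrix
print(np.linalg.matrix_rank(svt(A, tau=5.0)))              # thresholding lowers the rank
```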

  13. Quality indicators in the mobile industry rankings based on indicators of customer satisfaction with the hybrid approach DEMATEL and ANP appropriate strategy based on gray system

    Directory of Open Access Journals (Sweden)

    Ashouri Fatemeh

    2016-01-01

    Full Text Available The quality of services as a vital element in the strategic competitiveness and commercial success are various methods have been developed to evaluate it. Prioritizing qualitative indicators based on the quality of mobile phone services enables the company gives top priority due to the higher percentage of satisfied customers provide. This study tries to customer satisfaction according to criteria to prioritize mobile operators pay qualitative characteristics. A sample consisted of 450 individuals (46% women, 54% men from IRANCELL operator (Iran in 2015 which selected randomly. Results shows between four considered strategies maximum priorities belong to S1 which is denote to more services beyond customer expectations.

  14. Global Low-Rank Image Restoration With Gaussian Mixture Model.

    Science.gov (United States)

    Zhang, Sibo; Jiao, Licheng; Liu, Fang; Wang, Shuang

    2017-06-27

    Low-rank restoration has recently attracted a lot of attention in the research of computer vision. Empirical studies show that exploring the low-rank property of the patch groups can lead to superior restoration performance, however, there is limited achievement on the global low-rank restoration because the rank minimization at image level is too strong for the natural images which seldom match the low-rank condition. In this paper, we describe a flexible global low-rank restoration model which introduces the local statistical properties into the rank minimization. The proposed model can effectively recover the latent global low-rank structure via nuclear norm, as well as the fine details via Gaussian mixture model. An alternating scheme is developed to estimate the Gaussian parameters and the restored image, and it shows excellent convergence and stability. Besides, experiments on image and video sequence datasets show the effectiveness of the proposed method in image inpainting problems.

  15. Who's bigger? where historical figures really rank

    CERN Document Server

    Skiena, Steven

    2014-01-01

    Is Hitler bigger than Napoleon? Washington bigger than Lincoln? Picasso bigger than Einstein? Quantitative analysts are rapidly finding homes in social and cultural domains, from finance to politics. What about history? In this fascinating book, Steve Skiena and Charles Ward bring quantitative analysis to bear on ranking and comparing historical reputations. They evaluate each person by aggregating the traces of millions of opinions, just as Google ranks webpages. The book includes a technical discussion for readers interested in the details of the methods, but no mathematical or computational background is necessary to understand the rankings or conclusions. Along the way, the authors present the rankings of more than one thousand of history's most significant people in science, politics, entertainment, and all areas of human endeavor. Anyone interested in history or biography can see where their favorite figures place in the grand scheme of things.

  16. Can College Rankings Be Believed?

    Directory of Open Access Journals (Sweden)

    Meredith Davis

    Full Text Available The article summarizes literature on college and university rankings worldwide and the strategies used by various ranking organizations, including those of government and popular media. It traces the history of national and global rankings, indicators used by ranking systems, and the effect of rankings on academic programs and their institutions. Although ranking systems employ diverse criteria and most weight certain indicators over others, there is considerable skepticism that most actually measure educational quality. At the same time, students and their families increasingly consult these evaluations when making college decisions, and sponsors of faculty research consider reputation when forming academic partnerships. While there are serious concerns regarding the validity of ranking institutions when so little data can support differences between one institution and another, college rankings appear to be here to stay.

  17. Ranking competitors using degree-neutralized random walks.

    Science.gov (United States)

    Shin, Seungkyu; Ahnert, Sebastian E; Park, Juyong

    2014-01-01

    Competition is ubiquitous in many complex biological, social, and technological systems, playing an integral role in the evolutionary dynamics of the systems. It is often useful to determine the dominance hierarchy or the rankings of the components of the system that compete for survival and success based on the outcomes of the competitions between them. Here we propose a ranking method based on the random walk on the network representing the competitors as nodes and competitions as directed edges with asymmetric weights. We use the edge weights and node degrees to define the gradient on each edge that guides the random walker towards the weaker (or the stronger) node, which enables us to interpret the steady-state occupancy as the measure of the node's weakness (or strength) that is free of unwarranted degree-induced bias. We apply our method to two real-world competition networks and explore the issues of ranking stabilization and prediction accuracy, finding that our method outperforms other methods including the baseline win-loss differential method in sparse networks.
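
    The baseline idea of ranking by the stationary distribution of a random walk on a competition network can be sketched as below, with losses represented as directed edges pointing to winners. The damping constant and toy results matrix are assumptions, and the paper's specific degree-neutralized gradient is not reproduced.

```python
# Stationary-distribution ranking on a competition network (generic sketch).
import numpy as np

def random_walk_rank(W, damping=0.85, n_iter=200):
    """W[i, j] = weight of directed edge i -> j (e.g., times competitor i lost to j)."""
    n = W.shape[0]
    out = W.sum(axis=1, keepdims=True)
    P = np.where(out > 0, W / np.where(out == 0, 1, out), 1.0 / n)   # row-stochastic transitions
    pi = np.full(n, 1.0 / n)
    for _ in range(n_iter):                                          # power iteration
        pi = damping * pi @ P + (1 - damping) / n
    return pi                                                        # higher occupancy = stronger

W = np.array([[0, 2, 0],      # toy results: competitor 0 lost to competitor 1 twice, etc.
              [0, 0, 1],
              [3, 0, 0]], dtype=float)
print(np.argsort(-random_walk_rank(W)))
```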

  18. Ranking competitors using degree-neutralized random walks.

    Directory of Open Access Journals (Sweden)

    Seungkyu Shin

    Full Text Available Competition is ubiquitous in many complex biological, social, and technological systems, playing an integral role in the evolutionary dynamics of the systems. It is often useful to determine the dominance hierarchy or the rankings of the components of the system that compete for survival and success based on the outcomes of the competitions between them. Here we propose a ranking method based on the random walk on the network representing the competitors as nodes and competitions as directed edges with asymmetric weights. We use the edge weights and node degrees to define the gradient on each edge that guides the random walker towards the weaker (or the stronger) node, which enables us to interpret the steady-state occupancy as the measure of the node's weakness (or strength) that is free of unwarranted degree-induced bias. We apply our method to two real-world competition networks and explore the issues of ranking stabilization and prediction accuracy, finding that our method outperforms other methods including the baseline win-loss differential method in sparse networks.

  19. Low-rank sparse learning for robust visual tracking

    KAUST Repository

    Zhang, Tianzhu

    2012-01-01

    In this paper, we propose a new particle-filter based tracking algorithm that exploits the relationship between particles (candidate targets). By representing particles as sparse linear combinations of dictionary templates, this algorithm capitalizes on the inherent low-rank structure of particle representations that are learned jointly. As such, it casts the tracking problem as a low-rank matrix learning problem. This low-rank sparse tracker (LRST) has a number of attractive properties. (1) Since LRST adaptively updates dictionary templates, it can handle significant changes in appearance due to variations in illumination, pose, scale, etc. (2) The linear representation in LRST explicitly incorporates background templates in the dictionary and a sparse error term, which enables LRST to address the tracking drift problem and to be robust against occlusion respectively. (3) LRST is computationally attractive, since the low-rank learning problem can be efficiently solved as a sequence of closed form update operations, which yield a time complexity that is linear in the number of particles and the template size. We evaluate the performance of LRST by applying it to a set of challenging video sequences and comparing it to 6 popular tracking methods. Our experiments show that by representing particles jointly, LRST not only outperforms the state-of-the-art in tracking accuracy but also significantly improves the time complexity of methods that use a similar sparse linear representation model for particles [1]. © 2012 Springer-Verlag.

  20. Tool for Ranking Research Options

    Science.gov (United States)

    Ortiz, James N.; Scott, Kelly; Smith, Harold

    2005-01-01

    Tool for Research Enhancement Decision Support (TREDS) is a computer program developed to assist managers in ranking options for research aboard the International Space Station (ISS). It could likely also be adapted to perform similar decision-support functions in industrial and academic settings. TREDS provides a ranking of the options, based on a quantifiable assessment of all the relevant programmatic decision factors of benefit, cost, and risk. The computation of the benefit for each option is based on a figure of merit (FOM) for ISS research capacity that incorporates both quantitative and qualitative inputs. Qualitative inputs are gathered and partly quantified by use of the time-tested analytical hierarchical process and used to set weighting factors in the FOM corresponding to priorities determined by the cognizant decision maker(s). Then by use of algorithms developed specifically for this application, TREDS adjusts the projected benefit for each option on the basis of levels of technical implementation, cost, and schedule risk. Based partly on Excel spreadsheets, TREDS provides screens for entering cost, benefit, and risk information. Drop-down boxes are provided for entry of qualitative information. TREDS produces graphical output in multiple formats that can be tailored by users.

  1. The use of TCP based EUD to rank and compare lung radiotherapy plans: in-silico study to evaluate the correlation between TCP with physical quality indices.

    Science.gov (United States)

    Chaikh, Abdulhamid; Balosso, Jacques

    2017-06-01

    To apply the equivalent uniform dose (EUD) radiobiological model to estimate the tumor control probability (TCP) scores for treatment plans using different radiobiological parameter settings, and to evaluate the correlation between TCP and physical quality indices of the treatment plans. Ten radiotherapy treatment plans for lung cancer were generated. The dose distributions were calculated using the anisotropic analytical algorithm (AAA). Dose parameters and quality indices derived from dose volume histograms (DVH) for target volumes were evaluated. The predicted TCP was computed using the EUD model with a tissue-specific parameter (a=-10). The assumed radiobiological parameter settings for adjuvant therapy [tumor dose to control 50% of the tumor (TCD50) =36.5 Gy and γ50=0.72] and curative intent (TCD50=51.24 Gy and γ50=0.83) were used. The bootstrap method was used to estimate the 95% confidence interval (95% CI). The coefficients (ρ) from Spearman's rank test were calculated to assess the correlation between quality indices and TCP. The Wilcoxon paired test was used to calculate P values. The 95% CI of TCP were 70.6-81.5 and 46.6-64.7, respectively, for adjuvant radiotherapy and curative intent. The TCP outcome showed a positive and good correlation with the calculated dose to 95% of the target volume (D95%) and the minimum dose (Dmin). Consistently, TCP correlated negatively with heterogeneity indices. This study confirms that more relevant and robust radiobiological parameter settings should be integrated according to cancer type. The positive correlation with quality indices gives a chance to improve the clinical outcome by optimizing the treatment plans to maximize the Dmin and D95%. This attempt to increase the TCP should be carried out with respect to dose constraints for organs at risk. However, the negative correlation with heterogeneity indices shows that the optimization of beam arrangements could also be useful. Attention should be paid to obtain an appropriate...
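
    The quantities referred to in the abstract can be written out explicitly under the commonly used forms EUD = (Σ_i v_i D_i^a)^(1/a) and TCP = 1/(1 + (TCD50/EUD)^(4γ50)); treating these exact forms as an assumption about the authors' implementation, a small sketch is given below with illustrative DVH values and the curative-intent parameters quoted above.

```python
# EUD-based TCP sketch; the logistic TCP form and the DVH values are assumptions.
import numpy as np

def eud(dose_bins, vol_fractions, a=-10.0):
    """Generalized EUD from differential DVH data (volume fractions sum to 1)."""
    return np.sum(vol_fractions * dose_bins ** a) ** (1.0 / a)

def tcp(eud_gy, tcd50, gamma50):
    """Logistic TCP as a function of EUD."""
    return 1.0 / (1.0 + (tcd50 / eud_gy) ** (4.0 * gamma50))

dose = np.array([58.0, 60.0, 62.0, 64.0])        # Gy, illustrative DVH dose bins
vol = np.array([0.10, 0.40, 0.40, 0.10])         # fractional volumes per bin
e = eud(dose, vol, a=-10.0)
print(e, tcp(e, tcd50=51.24, gamma50=0.83))      # curative-intent parameters from the abstract
```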

  2. Sync-rank: Robust Ranking, Constrained Ranking and Rank Aggregation via Eigenvector and SDP Synchronization

    Science.gov (United States)

    2015-04-28

    Sync-Rank's spectral step: the ordering of the eigenvector of the associated Laplacian matrix (i.e., the Fiedler vector) matches that of the underlying variables. With D_ii = Σ_{j=1}^{n} G_{i,j} denoting the degree of node i in the measurement graph G, the method computes the Fiedler vector of S (the eigenvector corresponding to the smallest nonzero eigenvalue of L_S) and outputs the ranking induced by sorting the Fiedler vector of S, with the global ordering (increasing or decreasing) ...
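
    The spectral step described above can be sketched directly: build a Laplacian, take the Fiedler vector, and sort it. In the snippet below the matrix S is stubbed with a plain symmetric similarity matrix; constructing S from raw pairwise comparisons via synchronization, as in the paper, is not shown, and the global orientation of the resulting ordering remains ambiguous.

```python
# Fiedler-vector ranking sketch; S is a stand-in symmetric matrix, not Sync-Rank's construction.
import numpy as np

def fiedler_ranking(S):
    """S: symmetric nonnegative matrix of pairwise agreement/similarity."""
    D = np.diag(S.sum(axis=1))              # D_ii = sum_j S_ij
    L = D - S                               # graph Laplacian
    vals, vecs = np.linalg.eigh(L)          # eigenvalues in ascending order
    fiedler = vecs[:, 1]                    # eigenvector of the smallest nonzero eigenvalue
    return np.argsort(fiedler)              # ranking, up to a global sign/orientation

S = np.array([[0, 3, 1, 0],
              [3, 0, 2, 1],
              [1, 2, 0, 2],
              [0, 1, 2, 0]], dtype=float)
print(fiedler_ranking(S))
```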

  3. Fast algorithms for Bayesian uncertainty quantification in large-scale linear inverse problems based on low-rank partial Hessian approximations

    Energy Technology Data Exchange (ETDEWEB)

    Akcelik, Volkan [ORNL; Flath, Pearl [University of Texas, Austin; Ghattas, Omar [University of Texas, Austin; Hill, Judith C [ORNL; Van Bloemen Waanders, Bart [Sandia National Laboratories (SNL); Wilcox, Lucas [University of Texas, Austin

    2011-01-01

    We consider the problem of estimating the uncertainty in large-scale linear statistical inverse problems with high-dimensional parameter spaces within the framework of Bayesian inference. When the noise and prior probability densities are Gaussian, the solution to the inverse problem is also Gaussian, and is thus characterized by the mean and covariance matrix of the posterior probability density. Unfortunately, explicitly computing the posterior covariance matrix requires as many forward solutions as there are parameters, and is thus prohibitive when the forward problem is expensive and the parameter dimension is large. However, for many ill-posed inverse problems, the Hessian matrix of the data misfit term has a spectrum that collapses rapidly to zero. We present a fast method for computation of an approximation to the posterior covariance that exploits the low-rank structure of the preconditioned (by the prior covariance) Hessian of the data misfit. Analysis of an infinite-dimensional model convection-diffusion problem, and numerical experiments on large-scale 3D convection-diffusion inverse problems with up to 1.5 million parameters, demonstrate that the number of forward PDE solves required for an accurate low-rank approximation is independent of the problem dimension. This permits scalable estimation of the uncertainty in large-scale ill-posed linear inverse problems at a small multiple (independent of the problem dimension) of the cost of solving the forward problem.
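
    A small dense analogue of this construction can be written down for a linear Gaussian problem: with the prior-preconditioned data-misfit Hessian H̃ = Γ_pr^{1/2} H Γ_pr^{1/2} and its leading eigenpairs (λ_i, v_i), the posterior covariance is approximately Γ_pr − Γ_pr^{1/2} V diag(λ_i/(1+λ_i)) V^T Γ_pr^{1/2}. The toy matrices below are assumptions standing in for the PDE-based operators used in the paper.

```python
# Low-rank posterior covariance sketch for a toy linear Gaussian inverse problem.
import numpy as np

rng = np.random.default_rng(0)
n, r = 200, 25
G_half = np.diag(1.0 / np.sqrt(np.arange(1, n + 1)))   # prior covariance square root (toy)
J = rng.normal(size=(25, n))                            # forward operator (toy), unit noise
H = J.T @ J                                             # data-misfit Hessian

Ht = G_half @ H @ G_half                                # prior-preconditioned Hessian
lam, V = np.linalg.eigh(Ht)
lam, V = lam[::-1][:r], V[:, ::-1][:, :r]               # keep the r leading eigenpairs

G = G_half @ G_half                                     # prior covariance
post_cov = G - G_half @ V @ np.diag(lam / (1.0 + lam)) @ V.T @ G_half
exact = np.linalg.inv(H + np.linalg.inv(G))             # exact posterior covariance
print(np.linalg.norm(post_cov - exact) / np.linalg.norm(exact))   # small relative error
```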

  4. PageRank (II): Mathematics

    African Journals Online (AJOL)

    maths/stats

    INTRODUCTION. PageRank is Google's system for ranking web pages. A page with a higher PageRank is deemed more important and is more likely to be listed above a ...

  5. University Rankings and Social Science

    Science.gov (United States)

    Marginson, Simon

    2014-01-01

    University rankings widely affect the behaviours of prospective students and their families, university executive leaders, academic faculty, governments and investors in higher education. Yet the social science foundations of global rankings receive little scrutiny. Rankings that simply recycle reputation without any necessary connection to real…

  6. PageRank, HITS and a unified framework for link analysis

    Energy Technology Data Exchange (ETDEWEB)

    Ding, Chris; He, Xiaofeng; Husbands, Parry; Zha, Hongyuan; Simon, Horst

    2001-10-01

    Two popular webpage ranking algorithms are HITS and PageRank. HITS emphasizes mutual reinforcement between authority and hub webpages, while PageRank emphasizes hyperlink weight normalization and web surfing based on random walk models. We systematically generalize/combine these concepts into a unified framework. The ranking framework contains a large algorithm space; HITS and PageRank are two extreme ends in this space. We study several normalized ranking algorithms which are intermediate between HITS and PageRank, and obtain closed-form solutions. We show that, to first order approximation, all ranking algorithms in this framework, including PageRank and HITS, lead to the same ranking, which is highly correlated with ranking by indegree. These results support the notion that in web resource ranking indegree and outdegree are of fundamental importance. Rankings of webgraphs of different sizes and queries are presented to illustrate our analysis.
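
    For reference alongside the framework discussed above, a minimal PageRank power iteration is sketched below on a tiny illustrative link matrix; HITS and the intermediate normalized variants studied in the paper are not shown.

```python
# Minimal PageRank power iteration on a toy link matrix.
import numpy as np

def pagerank(A, d=0.85, n_iter=100):
    """A[i, j] = 1 if page i links to page j."""
    n = A.shape[0]
    out = A.sum(axis=1, keepdims=True)
    P = np.where(out > 0, A / np.where(out == 0, 1, out), 1.0 / n)   # dangling pages jump uniformly
    r = np.full(n, 1.0 / n)
    for _ in range(n_iter):
        r = d * r @ P + (1 - d) / n
    return r

A = np.array([[0, 1, 1],
              [1, 0, 0],
              [0, 1, 0]], dtype=float)
print(pagerank(A))
```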

  7. A Review of Ranking Models in Data Envelopment Analysis

    OpenAIRE

    Hosseinzadeh Lotfi, F.; Jahanshahloo, G.R.; M. Khodabakhshi; Rostamy-Malkhlifeh, M.; Moghaddas, Z.; Vaez-Ghasemi, M.

    2013-01-01

    In the course of improving various abilities of data envelopment analysis (DEA) models, many investigations have been carried out for ranking decision-making units (DMUs). This is an important issue both in theory and practice. There exists a variety of papers that apply different ranking methods to real data sets. Here the ranking methods are divided into seven groups. As each of the existing methods can be viewed from different aspects, it is possible that these groups somewhat ove...

  8. When sparse coding meets ranking: a joint framework for learning sparse codes and ranking scores

    KAUST Repository

    Wang, Jim Jing-Yan

    2017-06-28

    Sparse coding, which represents a data point as a sparse reconstruction code with regard to a dictionary, has been a popular data representation method. Meanwhile, in database retrieval problems, learning the ranking scores from data points plays an important role. Up to now, these two problems have always been considered separately, assuming that data coding and ranking are two independent and irrelevant problems. However, is there any internal relationship between sparse coding and ranking score learning? If yes, how to explore and make use of this internal relationship? In this paper, we try to answer these questions by developing the first joint sparse coding and ranking score learning algorithm. To explore the local distribution in the sparse code space, and also to bridge coding and ranking problems, we assume that in the neighborhood of each data point, the ranking scores can be approximated from the corresponding sparse codes by a local linear function. By considering the local approximation error of ranking scores, the reconstruction error and sparsity of sparse coding, and the query information provided by the user, we construct a unified objective function for learning of sparse codes, the dictionary and ranking scores. We further develop an iterative algorithm to solve this optimization problem.

  9. Issue Management Risk Ranking Systems

    Energy Technology Data Exchange (ETDEWEB)

    Novack, Steven David; Marshall, Frances Mc Clellan; Stromberg, Howard Merion; Grant, Gary Michael

    1999-06-01

    Thousands of safety issues have been collected on-line at the Idaho National Engineering and Environmental Laboratory (INEEL) as part of the Issue Management Plan. However, there has been no established approach to prioritize collected and future issues. The authors developed a methodology, based on hazards assessment, to identify and risk rank over 5000 safety issues collected at INEEL. This approach required that it was easily applied and understandable for site adaptation and commensurate with the Integrated Safety Plan. High-risk issues were investigated and mitigative/preventive measures were suggested and ranked based on a cost-benefit scheme to provide risk-informed safety measures. This methodology was consistent with other integrated safety management goals and tasks providing a site-wide risk informed decision tool to reduce hazardous conditions and focus resources on high-risk safety issues. As part of the issue management plan, this methodology was incorporated at the issue collection level and training was provided to management to better familiarize decision-makers with concepts of safety and risk. This prioritization methodology and issue dissemination procedure will be discussed. Results of issue prioritization and training efforts will be summarized. Difficulties and advantages of the process will be reported. Development and incorporation of this process into INEELs lessons learned reporting and the site-wide integrated safety management program will be shown with an emphasis on establishing self reliance and ownership of safety issues.

  10. Statistical Methods for Base Inflation Calculation

    Directory of Open Access Journals (Sweden)

    Ion Partachi

    2007-02-01

    Full Text Available The purposes of this research are to analyze four alternative methods of measuring base inflation and to choose the parameter that most precisely reflects the actual inflationary trend in the Republic of Moldova. Inflation in the Republic of Moldova is estimated using an approach based on statistical analysis, in particular the exclusion method, the trimmed means method, the standard deviation trimmed means method, and the percentile method.
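
    The trimmed means and percentile measures mentioned above can be illustrated with a short sketch; the price-change vector, the trimming share, and the use of an unweighted median are toy assumptions, not the study's specification.

```python
# Trimmed mean and percentile measures of core/base inflation (toy data).
import numpy as np
from scipy import stats

price_changes = np.array([0.2, 0.3, 0.4, 0.5, 0.6, 0.8, 1.0, 4.5, -3.0, 0.7])  # % m/m, toy CPI items

trimmed = stats.trim_mean(price_changes, proportiontocut=0.10)   # drop 10% of items from each tail
percentile_measure = np.percentile(price_changes, 50)            # median item (weighted in practice)
print(trimmed, percentile_measure)
```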

  11. Novel Direction Of Arrival Estimation Method Based on Coherent Accumulation Matrix Reconstruction

    Directory of Open Access Journals (Sweden)

    Li Lei

    2015-04-01

    Full Text Available Based on coherent accumulation matrix reconstruction, a novel decorrelation method for Direction Of Arrival (DOA) estimation of coherent signals using a small number of samples is proposed. First, the Signal to Noise Ratio (SNR) is improved by performing a coherent accumulation operation on the array of observed data. Then, according to the structural characteristics of the accumulated snapshot vector, an equivalent covariance matrix, whose rank is the same as the number of array elements, is constructed. The rank of this matrix is proved to be determined only by the number of incident signals, which realizes the decorrelation of coherent signals. Compared with the spatial smoothing method, the proposed method performs better by effectively avoiding aperture loss, with high-resolution characteristics and low computational complexity. Simulation results demonstrate the efficiency of the proposed method.

  12. Risk-based prioritization method for the classification of groundwater pesticide pollution from agricultural regions.

    Science.gov (United States)

    Yang, Yu; Lian, Xin-Ying; Jiang, Yong-Hai; Xi, Bei-Dou; He, Xiao-Song

    2017-11-01

    Agricultural regions are a significant source of groundwater pesticide pollution. To ensure that agricultural regions with a significantly high risk of groundwater pesticide contamination are properly managed, a risk-based ranking method related to groundwater pesticide contamination is needed. In the present paper, a risk-based prioritization method for the classification of groundwater pesticide pollution from agricultural regions was established. The method encompasses 3 phases, including indicator selection, characterization, and classification. In the risk ranking index system employed here, 17 indicators involving the physicochemical properties, environmental behavior characteristics, pesticide application methods, and inherent vulnerability of groundwater in the agricultural region were selected. The boundary of each indicator was determined using K-means cluster analysis based on a survey of a typical agricultural region and the physical and chemical properties of 300 typical pesticides. The total risk characterization was calculated by multiplying the risk value of each indicator, which could effectively avoid the subjectivity of index weight calculation and identify the main factors associated with the risk. The results indicated that the risk for groundwater pesticide contamination from agriculture in a region could be ranked into 4 classes from low to high risk. This method was applied to an agricultural region in Jiangsu Province, China, and it showed that this region had a relatively high risk for groundwater contamination from pesticides, and that the pesticide application method was the primary factor contributing to the relatively high risk. The risk ranking method was determined to be feasible, valid, and able to provide reference data related to the risk management of groundwater pesticide pollution from agricultural regions. Integr Environ Assess Manag 2017;13:1052-1059. © 2017 SETAC. © 2017 SETAC.

  13. Partial Kernelization for Rank Aggregation: Theory and Experiments

    Science.gov (United States)

    Betzler, Nadja; Bredereck, Robert; Niedermeier, Rolf

    Rank Aggregation is important in many areas ranging from web search over databases to bioinformatics. The underlying decision problem Kemeny Score is NP-complete even in case of four input rankings to be aggregated into a "median ranking". We study efficient polynomial-time data reduction rules that allow us to find optimal median rankings. On the theoretical side, we improve a result for a "partial problem kernel" from quadratic to linear size. On the practical side, we provide encouraging experimental results with data based on web search and sport competitions, e.g., computing optimal median rankings for real-world instances with more than 100 candidates within milliseconds.
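
    To make the underlying objective concrete, the Kemeny score of a candidate median ranking is the total number of pairwise disagreements with the input rankings; the brute-force sketch below works only for a handful of candidates, which is exactly why the data reduction rules studied in the paper matter.

```python
# Kemeny score and brute-force median ranking (only feasible for tiny candidate sets).
from itertools import combinations, permutations

def kendall_tau_distance(r1, r2):
    """Number of candidate pairs ordered differently by the two rankings."""
    pos1 = {c: i for i, c in enumerate(r1)}
    pos2 = {c: i for i, c in enumerate(r2)}
    return sum((pos1[a] - pos1[b]) * (pos2[a] - pos2[b]) < 0
               for a, b in combinations(r1, 2))

def kemeny_median(rankings):
    """Exhaustively search all permutations for the ranking with minimal Kemeny score."""
    candidates = rankings[0]
    return min(permutations(candidates),
               key=lambda r: sum(kendall_tau_distance(r, x) for x in rankings))

votes = [("a", "b", "c", "d"), ("b", "a", "c", "d"), ("a", "c", "b", "d")]
print(kemeny_median(votes))
```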

  14. Variable ranking based on the estimated degree of separation for two distributions of data by the length of the receiver operating characteristic curve.

    Science.gov (United States)

    Maswadeh, Waleed M; Snyder, A Peter

    2015-05-30

    Variable responses are fundamental for all experiments, and they can consist of information-rich, redundant, and low signal intensities. A dataset can consist of a collection of variable responses over multiple classes or groups. Usually, variables that contain very little information are removed from a dataset. Sometimes all the variables are used in the data analysis phase. It is common practice to discriminate between two distributions of data; however, there is no formal algorithm to arrive at a degree of separation (DS) between two distributions of data. The DS is defined herein as the average of the sum of the areas from the probability density functions (PDFs) of A and B that contain at least a given percentage of A and/or B. Thus, DS90 is the average of the sum of the PDF areas of A and B that contain ≥90% of A and/or B. To arrive at a DS value, two synthesized PDFs or very large experimental datasets are required. Experimentally it is common practice to generate relatively small datasets. Therefore, the challenge was to find a statistical parameter that can be used on small datasets to estimate and highly correlate with the DS90 parameter. Established statistical methods include the overlap area of the two data distribution profiles, Welch's t-test, the Kolmogorov-Smirnov (K-S) test, the Mann-Whitney-Wilcoxon test, and the area under the receiver operating characteristic (ROC) curve (AUC). The area between the ROC curve and diagonal (ACD) and the length of the ROC curve (LROC) are introduced. The established, ACD, and LROC methods were correlated with the DS90 when applied to many pairs of synthesized PDFs. The LROC method provided the best linear correlation with, and estimation of, the DS90. The estimated DS90 from the LROC (DS90-LROC) is applied, as an example, to a database of three Italian wines consisting of thirteen variable responses, for variable ranking consideration. An important highlight of the DS90-LROC method is utilizing the LROC curve methodology to...
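
    The LROC quantity itself is straightforward to compute: it is the arc length of the ROC curve, ranging from √2 for fully overlapping distributions to 2 for perfect separation. The sketch below uses synthetic normal data; the mapping from LROC to an estimated DS90 developed in the paper is not reproduced.

```python
# Length of the ROC curve (LROC) for two synthetic score distributions.
import numpy as np
from sklearn.metrics import roc_curve

rng = np.random.default_rng(0)
scores = np.concatenate([rng.normal(0.0, 1.0, 500), rng.normal(1.5, 1.0, 500)])  # class A, class B
labels = np.concatenate([np.zeros(500), np.ones(500)])

fpr, tpr, _ = roc_curve(labels, scores)
lroc = np.sum(np.hypot(np.diff(fpr), np.diff(tpr)))   # arc length of the ROC curve
print(lroc)                                           # between sqrt(2) and 2
```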

  15. COMPANY VALUATION METHODS BASED ON PATRIMONY

    Directory of Open Access Journals (Sweden)

    SUCIU GHEORGHE

    2013-02-01

    Full Text Available The methods used for company valuation can be divided into 3 main groups: methods based on patrimony, methods based on financial performance, and methods based both on patrimony and on performance. The company valuation methods based on patrimony are implemented taking into account the balance sheet or the financial statement. The financial statement refers to that type of balance in which the assets are arranged according to liquidity, and the liabilities according to their financial maturity date. The patrimonial methods are based on the principle that the value of the company equals that of the patrimony it owns. From a legal point of view, the patrimony refers to all the rights and obligations of a company. The valuation of companies based on their financial performance can be done in 3 ways: the return value, the yield value, and the present value of the cash flows. The mixed methods depend both on patrimony and on financial performance or can make use of other methods.

  16. Based on Penalty Function Method

    Directory of Open Access Journals (Sweden)

    Ishaq Baba

    2015-01-01

    Full Text Available The dual response surface approach of simultaneously optimizing the mean and variance models as separate functions suffers from some deficiencies in handling the tradeoff between the bias and variance components of the mean squared error (MSE). In this paper, the accuracy of the predicted response is given serious attention in the determination of the optimum setting conditions. We consider four different objective functions for the dual response surface optimization approach. The essence of the proposed method is to reduce the influence of the variance of the predicted response by minimizing the variability relative to the quality characteristics of interest while at the same time achieving the specified target output. The basic idea is to convert the constrained optimization problem into an unconstrained one by adding the constraint to the original objective function. Numerical examples and a simulation study are carried out to compare the performance of the proposed method with some existing procedures. Numerical results show that the performance of the proposed method is encouraging and exhibits a clear improvement over the existing approaches.
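
    The basic conversion described above can be sketched with a quadratic penalty: fold the target constraint on the mean response into the variance objective and solve the resulting unconstrained problem. The fitted response models, target, and penalty weight below are illustrative assumptions, not the paper's formulation or case study.

```python
# Quadratic penalty sketch for dual response surface optimization (illustrative models).
import numpy as np
from scipy.optimize import minimize

def mean_hat(x):      # fitted mean response (illustrative quadratic model)
    return 80 + 2.0 * x[0] - 1.5 * x[1] + 0.5 * x[0] * x[1]

def std_hat(x):       # fitted standard-deviation response (illustrative model)
    return 4.0 + 0.8 * x[0] ** 2 + 0.6 * x[1] ** 2

target = 85.0         # desired mean response
rho = 100.0           # penalty weight on missing the target

def penalized(x):
    # variance objective plus quadratic penalty on the mean-target constraint
    return std_hat(x) ** 2 + rho * (mean_hat(x) - target) ** 2

res = minimize(penalized, x0=np.zeros(2), method="Nelder-Mead")
print(res.x, mean_hat(res.x), std_hat(res.x))
```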

  17. Ranking Water Transparency of Dutch Stock-Listed Companies

    Directory of Open Access Journals (Sweden)

    Marissa H. Linneman

    2015-04-01

    Full Text Available A growing world population, changing consumption patterns and climate change are affecting water demands, water scarcity and water quality worldwide, while at present, few companies are incorporating good water stewardship. In order to create awareness on this issue and provide an incentive for companies to improve the water performance in their operations and supply chain, a method for ranking companies based on their water transparency has been developed. The method consists of a checklist that can be completed on the basis of information from annual reports, sustainability reports and websites of companies. This is the first time a ranking of companies regarding water transparency has been carried out. Results show that there are large differences in transparency between and within sectors and that companies are reporting more about their operations than their supply chain.

  18. A ranking efficiency unit by restrictions using DEA models

    Science.gov (United States)

    Arsad, Roslah; Abdullah, Mohammad Nasir; Alias, Suriana

    2014-12-01

    In this paper, a comparison of the efficiency of shares of listed companies in Bursa Malaysia was made through the application of Data Envelopment Analysis (DEA). In this study, DEA is used to measure the efficiency of shares of listed companies in Bursa Malaysia in terms of financial performance. It is believed that only a good financial performer will give a good return to investors in the long run. The main objectives were to compute the relative efficiency scores of the shares in Bursa Malaysia and to rank the shares based on the Balance Index with regard to relative efficiency. Alirezaee and Afsharian's model was employed in this study, in which the original Charnes, Cooper and Rhodes (CCR) model with the assumption of constant returns to scale (CRS) still holds. This method of ranking the relative efficiency of decision-making units (DMUs) was augmented with the Balance Index. From the results, the companies recommended for investors based on the ranking were NATWIDE, YTL and MUDA. These companies were the top three efficient companies with good performance in 2011, whereas in 2012 the top three companies were NATWIDE, MUDA and BERNAS.
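
    As background for the DEA machinery used here, the input-oriented CCR model in multiplier form can be solved per DMU as a small linear program; the sketch below uses illustrative input/output data and does not include the Balance Index refinement applied in the study.

```python
# Input-oriented CCR efficiency (multiplier form) via linear programming (toy data).
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, o):
    """X: DMUs x inputs, Y: DMUs x outputs; returns the efficiency score of DMU o."""
    n, m = X.shape
    s = Y.shape[1]
    c = np.concatenate([-Y[o], np.zeros(m)])                    # maximize u . Y_o
    A_eq = np.concatenate([np.zeros(s), X[o]]).reshape(1, -1)   # v . X_o = 1
    b_eq = [1.0]
    A_ub = np.hstack([Y, -X])                                   # u . Y_j - v . X_j <= 0 for all j
    b_ub = np.zeros(n)
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0, None)] * (s + m), method="highs")
    return -res.fun

X = np.array([[4.0, 140.0], [7.0, 90.0], [8.0, 120.0]])   # inputs of 3 DMUs (toy)
Y = np.array([[2.0, 28.0], [3.0, 22.5], [4.0, 30.0]])     # outputs of 3 DMUs (toy)
print([round(ccr_efficiency(X, Y, o), 3) for o in range(3)])
```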

  19. Constrained low-rank gamut completion for robust illumination estimation

    Science.gov (United States)

    Zhou, Jianshen; Yuan, Jiazheng; Liu, Hongzhe

    2017-02-01

    Illumination estimation is an important component of color constancy and automatic white balancing. According to recent survey and evaluation work, the supervised methods with a learning phase are competitive for illumination estimation. However, the robustness and performance of any supervised algorithm suffer from an incomplete gamut in training image sets because of limited reflectance surfaces in a scene. In order to address this problem, we present a constrained low-rank gamut completion algorithm, which can replenish gamut from limited surfaces in an image, for robust illumination estimation. In the proposed algorithm, we first discuss why the gamut completion is actually a low-rank matrix completion problem. Then a constrained low-rank matrix completion framework is proposed by adding illumination similarities among the training images as an additional constraint. An optimization algorithm is also given out by extending the augmented Lagrange multipliers. Finally, the completed gamut based on the proposed algorithm is fed into the support vector regression (SVR)-based illumination estimation method to evaluate the effect of gamut completion. The experimental results on both synthetic and real-world image sets show that the proposed gamut completion model not only can effectively improve the performance of the original SVR method but is also robust to the surface insufficiency in training samples.

  20. A Rank-Ordered Marginal Filter for Deinterlacing

    Directory of Open Access Journals (Sweden)

    Seok Hoon Kang

    2013-03-01

    Full Text Available This paper proposes a new interpolation filter for deinterlacing, which is achieved by enhancing the edge-preserving ability of the conventional edge-based line average methods. This filter consists of three steps: a pre-processing step, a fuzzy metric-based weight assignation step, and a rank-ordered marginal filter step. The proposed method is able to interpolate the missing lines without introducing annoying artifacts. Simulation results show that the images filtered with the proposed algorithm contain fewer annoying pixels than the ones acquired by other methods.

  1. Ranking and analysis administrative barriers of fundamental principles and rights at work from the perspective of employers by using parametrical analysis and the TOPSIS method

    Directory of Open Access Journals (Sweden)

    Gholam Reza Bordbar

    2014-10-01

    Full Text Available The ILO has adopted and communicated several types of documents, including conventions, recommendations, resolutions, and statements. Among them, conventions have a special place because member states accept duties and obligations under them. Among these conventions, those related to the fundamental rights of labor have special importance and prestige. Member countries of the Organization, such as Iran, are required to implement the relevant commitments. This study aimed to analyze the administrative obstacles to the Fundamental Principles and Rights at Work by weighting the obstacles and ranking them. The results of the study show that the prohibition of child labor is more important than the other aspects of the Fundamental Principles and Rights at Work. For child labor, the most important obstacle was red tape; for the other categories, the results were as follows: for the lack of collective bargaining, emotional decision making; for forced labor, distrust of other workers; for gender discrimination, the working environment; and for inequality, managers' subjective judgments.

  2. Enhancing collaborative filtering by user interest expansion via personalized ranking.

    Science.gov (United States)

    Liu, Qi; Chen, Enhong; Xiong, Hui; Ding, Chris H Q; Chen, Jian

    2012-02-01

    Recommender systems suggest a few items from many possible choices to the users by understanding their past behaviors. In these systems, the user behaviors are influenced by the hidden interests of the users. Learning to leverage the information about user interests is often critical for making better recommendations. However, existing collaborative-filtering-based recommender systems are usually focused on exploiting the information about the user's interaction with the systems; the information about latent user interests is largely underexplored. To that end, inspired by the topic models, in this paper, we propose a novel collaborative-filtering-based recommender system by user interest expansion via personalized ranking, named iExpand. The goal is to build an item-oriented model-based collaborative-filtering framework. The iExpand method introduces a three-layer, user-interests-item, representation scheme, which leads to more accurate ranking recommendation results with less computation cost and helps the understanding of the interactions among users, items, and user interests. Moreover, iExpand strategically deals with many issues that exist in traditional collaborative-filtering approaches, such as the overspecialization problem and the cold-start problem. Finally, we evaluate iExpand on three benchmark data sets, and experimental results show that iExpand can lead to better ranking performance than state-of-the-art methods with a significant margin.

  3. Identifying potential exposure reduction priorities using regional rankings based on emissions of known and suspected carcinogens to outdoor air in Canada.

    Science.gov (United States)

    Setton, Eleanor M; Veerman, Basil; Erickson, Anders; Deschenes, Steeve; Cheasley, Roz; Poplawski, Karla; Demers, Paul A; Keller, C Peter

    2015-08-22

    Emissions inventories aid in understanding the sources of hazardous air pollutants and how these vary regionally, supporting targeted reduction actions. Integrating information on the relative toxicity of emitted pollutants with respect to cancer in humans helps to further refine reduction actions or recommendations, but few national programs exist in North America that use emissions estimates in this way. The CAREX Canada Emissions Mapping Project provides key regional indicators of emissions (total annual and total annual toxic equivalent, circa 2011) of 21 selected known and suspected carcinogens. The indicators were calculated from industrial emissions reported to the National Pollutant Release Inventory (NPRI) and estimates of emissions from transportation (airports, trains, and car and truck traffic) and residential heating (oil, gas and wood), in conjunction with human toxicity potential factors. We also include substance-specific annual emissions in toxic equivalent kilograms and annual emissions in kilograms, to allow for ranking substances within any region. For provinces and territories in Canada, the indicators suggest the top five substances contributing to the total toxic equivalent emissions in any region could be prioritized for further investigation. Residents of Quebec and New Brunswick may be more at risk of exposure to industrial emissions than those in other regions, suggesting that a more detailed study of exposure to industrial emissions in these provinces is warranted. Residential wood smoke may be an important emission to control, particularly in the north and eastern regions of Canada. Residential oil and gas heating, along with rail emissions contribute little to regional emissions and therefore may not be an immediate regional priority. The developed indicators support the identification of pollutants and sources for additional investigation when planning exposure reduction actions among Canadian provinces and territories, but have

  4. Social class rank, essentialism, and punitive judgment.

    Science.gov (United States)

    Kraus, Michael W; Keltner, Dacher

    2013-08-01

    Recent evidence suggests that perceptions of social class rank influence a variety of social cognitive tendencies, from patterns of causal attribution to moral judgment. In the present studies we tested the hypotheses that upper-class rank individuals would be more likely to endorse essentialist lay theories of social class categories (i.e., that social class is founded in genetically based, biological differences) than would lower-class rank individuals and that these beliefs would decrease support for restorative justice--which seeks to rehabilitate offenders, rather than punish unlawful action. Across studies, higher social class rank was associated with increased essentialism of social class categories (Studies 1, 2, and 4) and decreased support for restorative justice (Study 4). Moreover, manipulated essentialist beliefs decreased preferences for restorative justice (Study 3), and the association between social class rank and class-based essentialist theories was explained by the tendency to endorse beliefs in a just world (Study 2). Implications for how class-based essentialist beliefs potentially constrain social opportunity and mobility are discussed.

  5. Point-source localization in blurred images by a frequency-domain eigenvector-based method.

    Science.gov (United States)

    Gunsay, M; Jeffs, B D

    1995-01-01

    We address the problem of resolving and localizing blurred point sources in intensity images. Telescopic star-field images blurred by atmospheric turbulence or optical aberrations are typical examples of this class of images. A new approach to image restoration is introduced, which is a generalization of 2-D sensor array processing techniques originating from the field of direction of arrival (DOA) estimation. It is shown that in the frequency domain, blurred point source images can be modeled with a structure analogous to the response of linear sensor arrays to coherent signal sources. Thus, the problem may be cast into the form of DOA estimation, and eigenvector-based subspace decomposition algorithms, such as MUSIC, may be adapted to search for these point sources. For deterministic point images the signal subspace is degenerate, with rank one, so rank enhancement techniques are required before MUSIC or related algorithms may be used. The presence of blur prohibits the use of existing rank enhancement methods. A generalized array smoothing method is introduced for rank enhancement in the presence of blur, and to regularize the ill-posed nature of the image restoration. The new algorithm achieves inter-pixel super-resolution and is computationally efficient. Examples of star image deblurring using the algorithm are presented.

  6. An interactive segmentation method based on superpixel

    DEFF Research Database (Denmark)

    Yang, Shu; Zhu, Yaping; Wu, Xiaoyu

    2015-01-01

    This paper proposes an interactive image-segmentation method based on superpixels. To achieve fast segmentation, the method establishes a Graphcut model using superpixels as nodes, and a new energy function is proposed. Experimental results demonstrate that the authors' method has excellent performance in terms of segmentation accuracy and computation efficiency compared with other segmentation algorithms based on pixels.

  7. Adaptive low-rank subspace learning with online optimization for robust visual tracking.

    Science.gov (United States)

    Liu, Risheng; Wang, Di; Han, Yuzhuo; Fan, Xin; Luo, Zhongxuan

    2017-04-01

    In recent years, sparse and low-rank models have been widely used to formulate appearance subspace for visual tracking. However, most existing methods only consider the sparsity or low-rankness of the coefficients, which is not sufficient enough for appearance subspace learning on complex video sequences. Moreover, as both the low-rank and the column sparse measures are tightly related to all the samples in the sequences, it is challenging to incrementally solve optimization problems with both nuclear norm and column sparse norm on sequentially obtained video data. To address above limitations, this paper develops a novel low-rank subspace learning with adaptive penalization (LSAP) framework for subspace based robust visual tracking. Different from previous work, which often simply decomposes observations as low-rank features and sparse errors, LSAP simultaneously learns the subspace basis, low-rank coefficients and column sparse errors to formulate appearance subspace. Within LSAP framework, we introduce a Hadamard production based regularization to incorporate rich generative/discriminative structure constraints to adaptively penalize the coefficients for subspace learning. It is shown that such adaptive penalization can significantly improve the robustness of LSAP on severely corrupted dataset. To utilize LSAP for online visual tracking, we also develop an efficient incremental optimization scheme for nuclear norm and column sparse norm minimizations. Experiments on 50 challenging video sequences demonstrate that our tracker outperforms other state-of-the-art methods. Copyright © 2017 Elsevier Ltd. All rights reserved.

  8. Ranking de universidades chilenas: un análisis multivariado

    Directory of Open Access Journals (Sweden)

    Firinguetti Limone, Luis

    2015-06-01

    In this work, a ranking of Chilean universities is developed on the basis of publicly available information. The ranking takes into account the multivariate character of these institutions. It is also noted that the results are consistent with those of a well-known international ranking that uses a different set of data, as well as with several multivariate analyses of the data considered in this study.
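    As a minimal sketch of one common multivariate approach, the code below scores institutions on the first principal component of standardized indicators; the indicator names, toy data, and orientation rule are assumptions, not the exact analysis used in the study.

```python
import numpy as np
import pandas as pd

def pca_composite_ranking(indicators: pd.DataFrame) -> pd.Series:
    """Rank institutions (rows) by their score on the first principal
    component of the standardized indicators (columns)."""
    X = (indicators - indicators.mean()) / indicators.std(ddof=0)
    U, s, Vt = np.linalg.svd(X.to_numpy(), full_matrices=False)
    pc1 = Vt[0]
    if pc1.sum() < 0:          # orient so that larger indicators raise the score
        pc1 = -pc1
    scores = X.to_numpy() @ pc1
    return pd.Series(scores, index=indicators.index).rank(ascending=False)

# Hypothetical indicators for three universities.
df = pd.DataFrame(
    {"publications": [1200, 300, 800],
     "accredited_years": [7, 4, 6],
     "retention_rate": [0.86, 0.71, 0.80]},
    index=["U. A", "U. B", "U. C"])
print(pca_composite_ranking(df).sort_values())  # rank 1.0 = best
```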

  9. Risk ranking of chemical hazards in food - A case study on antibiotics in the Netherlands

    NARCIS (Netherlands)

    Asselt, van E.D.; Spiegel, van der M.; Noordam, M.Y.; Pikkemaat, M.G.; Fels-Klerx, van der H.J.

    2013-01-01

    Part of risk-based control is the prioritization of hazard-food combinations for monitoring food safety. The aim of the current study was to develop a method for risk ranking of chemical food safety hazards using a structured and transparent approach. The method established is semi-quantitative and ...

  10. Wikipedia ranking of world universities

    Science.gov (United States)

    Lages, José; Patt, Antoine; Shepelyansky, Dima L.

    2016-03-01

    We use the directed networks between articles of 24 Wikipedia language editions to produce the Wikipedia Ranking of World Universities (WRWU) using the PageRank, 2DRank and CheiRank algorithms. This approach allows us to incorporate various cultural views on world universities using mathematical statistical analysis independent of cultural preferences. The Wikipedia ranking of the top 100 universities shows about 60% overlap with the Shanghai university ranking, demonstrating the reliability of this approach. At the same time, the WRWU incorporates the knowledge accumulated across all 24 Wikipedia editions, giving stronger weight to historically important universities and leading to a different estimation of the efficiency of world countries in university education. The historical development of university rankings is analyzed over ten centuries.
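    PageRank and CheiRank (PageRank computed on the graph with all edges reversed) are straightforward to compute with networkx; the toy graph below stands in for a Wikipedia hyperlink network, and the 2DRank combination of the two orderings is omitted.

```python
import networkx as nx

def pagerank_cheirank(G: nx.DiGraph, alpha: float = 0.85):
    """Return the PageRank and CheiRank vectors of a directed graph."""
    pr = nx.pagerank(G, alpha=alpha)
    chei = nx.pagerank(G.reverse(), alpha=alpha)  # CheiRank: reversed edges
    return pr, chei

# Toy citation-like network between article nodes.
G = nx.DiGraph([("A", "B"), ("B", "C"), ("C", "A"), ("A", "C")])
pr, chei = pagerank_cheirank(G)
print(sorted(pr, key=pr.get, reverse=True))    # nodes ordered by PageRank
print(sorted(chei, key=chei.get, reverse=True))  # nodes ordered by CheiRank
```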

  11. Low-rank coal research

    Energy Technology Data Exchange (ETDEWEB)

    Weber, G. F.; Laudal, D. L.

    1989-01-01

    This work is a compilation of reports on ongoing research at the University of North Dakota. Topics include: Control Technology and Coal Preparation Research (SOx/NOx control, waste management), Advanced Research and Technology Development (turbine combustion phenomena, combustion inorganic transformation, coal/char reactivity, liquefaction reactivity of low-rank coals, gasification ash and slag characterization, fine particulate emissions), Combustion Research (fluidized bed combustion, beneficiation of low-rank coals, combustion characterization of low-rank coal fuels, diesel utilization of low-rank coals), Liquefaction Research (low-rank coal direct liquefaction), and Gasification Research (hydrogen production from low-rank coals, advanced wastewater treatment, mild gasification, color and residual COD removal from Synfuel wastewaters, Great Plains Gasification Plant, gasifier optimization).

  12. Ranking the quality of protein structure models using sidechain-based network properties [v1; ref status: indexed, http://f1000r.es/2eu]

    Directory of Open Access Journals (Sweden)

    Soma Ghosh

    2014-01-01

    Determining the correct structure of a protein given its sequence still remains an arduous task, with many researchers working towards this goal. Most structure-prediction methodologies result in the generation of a large number of probable candidates, with the final challenge being to select the best amongst these. In this work, we have used Protein Structure Networks of native and modeled proteins in combination with Support Vector Machines to estimate the quality of a protein structure model and, finally, to provide ranks for these models. Model ranking is performed using regression analysis and helps in model selection from a group of many similar and good-quality structures. Our results show that structures with a rank greater than 16 exhibit native protein-like properties, while those below 10 are non-native-like. The tool is also made available as a web server (http://vishgraph.mbu.iisc.ernet.in/GraProStr/native_non_native_ranking.html), where five modelled structures can be evaluated at a given time.
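    A minimal sketch of the general idea, regressing a quality score on network-derived features with a support vector machine and ranking candidate models by the prediction; the feature names and random training data are placeholders, not the descriptors or datasets used by the authors.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Hypothetical network-derived features per model, e.g. clustering coefficient,
# mean edge weight, hub count of the sidechain-interaction network, etc.
rng = np.random.default_rng(0)
X_train = rng.random((200, 4))        # decoys with known quality (stand-in)
y_train = rng.random(200) * 20        # stand-in quality scores

ranker = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0))
ranker.fit(X_train, y_train)

X_models = rng.random((5, 4))         # five candidate models to be ranked
predicted = ranker.predict(X_models)
order = np.argsort(-predicted)        # indices of models, best-ranked first
print(order)
```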

  13. Estimation of Basis Line-Integrals in a Spectral Distortion-Modeled Photon Counting Detector Using Low-Rank Approximation-Based X-Ray Transmittance Modeling: K-Edge Imaging Application.

    Science.gov (United States)

    Lee, Okkyun; Kappler, Steffen; Polster, Christoph; Taguchi, Katsuyuki

    2017-11-01

    Photon counting detectors (PCDs) provide multiple energy-dependent measurements for estimating basis line-integrals. However, the measured spectrum is distorted from the spectral response effect (SRE) via charge sharing, K-fluorescence emission, and so on. Thus, in order to avoid bias and artifacts in images, the SRE needs to be compensated. For this purpose, we recently developed a computationally efficient three-step algorithm for PCD-CT without contrast agents by approximating smooth X-ray transmittance using low-order polynomial bases. It compensated the SRE by incorporating the SRE model in a linearized estimation process and achieved nearly the minimum variance and unbiased (MVU) estimator. In this paper, we extend the three-step algorithm to K-edge imaging applications by designing optimal bases using a low-rank approximation to model X-ray transmittances with arbitrary shapes (i.e., smooth without the K-edge or discontinuous with the K-edge). The bases can be used to approximate the X-ray transmittance and to linearize the PCD measurement modeling and then the three-step estimator can be derived as in the previous approach: estimating the x-ray transmittance in the first step, estimating basis line-integrals including that of the contrast agent in the second step, and correcting for a bias in the third step. We demonstrate that the proposed method is more accurate and stable than the low-order polynomial-based approaches with extensive simulation studies using gadolinium for the K-edge imaging application. We also demonstrate that the proposed method achieves nearly MVU estimator, and is more stable than the conventional maximum likelihood estimator in high attenuation cases with fewer photon counts.
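    A minimal sketch of the low-rank idea only: learn a small orthonormal basis for X-ray transmittance curves from training spectra via SVD and project measured curves onto it. The linearized PCD measurement model and the three-step, bias-corrected estimator described above are not reproduced, and the training data layout is an assumption.

```python
import numpy as np

def transmittance_basis(T_train, rank):
    """Learn a low-rank basis for X-ray transmittance curves.

    T_train: array of shape (n_training_curves, n_energy_bins); each row is a
    transmittance curve (smooth, or with a K-edge discontinuity).
    Returns an (n_energy_bins, rank) orthonormal basis."""
    U, s, Vt = np.linalg.svd(T_train, full_matrices=False)
    return Vt[:rank].T   # principal directions of the training curves

def approximate(curve, basis):
    """Approximate a measured transmittance curve in the learned basis."""
    coeffs = basis.T @ curve
    return basis @ coeffs, coeffs
```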

  14. Mining Functional Modules in Heterogeneous Biological Networks Using Multiplex PageRank Approach.

    Science.gov (United States)

    Li, Jun; Zhao, Patrick X

    2016-01-01

    Identification of functional modules/sub-networks in large-scale biological networks is one of the important research challenges in current bioinformatics and systems biology. Approaches have been developed to identify functional modules in single-class biological networks; however, methods for systematically and interactively mining multiple classes of heterogeneous biological networks are lacking. In this paper, we present a novel algorithm (called mPageRank) that utilizes the Multiplex PageRank approach to mine functional modules from two classes of biological networks. We demonstrate the capabilities of our approach by successfully mining functional biological modules through integrating expression-based gene-gene association networks and protein-protein interaction networks. We first compared the performance of our method with that of other methods using simulated data. We then applied our method to identify the cell division cycle related functional module and plant signaling defense-related functional module in the model plant Arabidopsis thaliana. Our results demonstrated that the mPageRank method is effective for mining sub-networks in both expression-based gene-gene association networks and protein-protein interaction networks, and has the potential to be adapted for the discovery of functional modules/sub-networks in other heterogeneous biological networks. The mPageRank executable program, source code, the datasets and results of the presented two case studies are publicly and freely available at http://plantgrn.noble.org/MPageRank/.
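    A simplified two-layer variant for illustration: the PageRank vector computed on one network layer is used as the personalization (teleport) vector for PageRank on the other layer. This is not the exact mPageRank formulation, and the layer roles (co-expression vs. protein-protein interaction) are illustrative.

```python
import networkx as nx

def simple_multiplex_pagerank(layer_a: nx.Graph, layer_b: nx.Graph, alpha=0.85):
    """Bias the random walk on layer_b (e.g. a protein-protein interaction
    network) by the PageRank of layer_a (e.g. a gene-gene association network).
    Nodes missing from layer_a receive a small default weight."""
    pr_a = nx.pagerank(layer_a, alpha=alpha)
    default = min(pr_a.values()) if pr_a else 1.0
    personalization = {n: pr_a.get(n, default) for n in layer_b.nodes}
    return nx.pagerank(layer_b, alpha=alpha, personalization=personalization)

# Toy example with two small layers sharing node labels.
A = nx.Graph([("g1", "g2"), ("g2", "g3"), ("g1", "g3")])
B = nx.Graph([("g1", "g2"), ("g3", "g4")])
print(simple_multiplex_pagerank(A, B))
```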

  15. An Automated Approach for Ranking Journals to Help in Clinician Decision Support

    Science.gov (United States)

    Jonnalagadda, Siddhartha R.; Moosavinasab, Soheil; Nath, Chinmoy; Li, Dingcheng; Chute, Christopher G.; Liu, Hongfang

    2014-01-01

    Point of care access to knowledge from full text journal articles supports decision-making and decreases medical errors. However, it is an overwhelming task to search through full text journal articles and find quality information needed by clinicians. We developed a method to rate journals for a given clinical topic, Congestive Heart Failure (CHF). Our method enables filtering of journals and ranking of journal articles based on source journal in relation to CHF. We also obtained a journal priority score, which automatically rates any journal based on its importance to CHF. Comparing our ranking with data gathered by surveying 169 cardiologists, who publish on CHF, our best Multiple Linear Regression model showed a correlation of 0.880, based on five-fold cross validation. Our ranking system can be extended to other clinical topics. PMID:25954382
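    A minimal sketch of learning a journal priority score with multiple linear regression and five-fold cross-validation, in the spirit of the evaluation described above; the features and target values below are random stand-ins for the topic-relevance features and survey-derived ratings, not the authors' data.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

# Hypothetical per-journal features for a clinical topic such as CHF:
# topic-relevant article counts, citation impact, MeSH overlap, recency, ...
rng = np.random.default_rng(0)
X = rng.random((120, 5))   # 120 journals, 5 illustrative features
y = rng.random(120)        # stand-in for survey-derived priority scores

model = LinearRegression()
cv_r2 = cross_val_score(model, X, y, cv=5, scoring="r2")  # five-fold CV
model.fit(X, y)
journal_priority = model.predict(X)       # higher = higher-priority journal
ranking = np.argsort(-journal_priority)   # journal indices, best first
print(cv_r2.mean(), ranking[:10])
```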

  16. Robust Visual Tracking Via Consistent Low-Rank Sparse Learning

    KAUST Repository

    Zhang, Tianzhu

    2014-06-19

    Object tracking is the process of determining the states of a target in consecutive video frames based on properties of motion and appearance consistency. In this paper, we propose a consistent low-rank sparse tracker (CLRST) that builds upon the particle filter framework for tracking. By exploiting temporal consistency, the proposed CLRST algorithm adaptively prunes and selects candidate particles. By using linear sparse combinations of dictionary templates, the proposed method learns the sparse representations of image regions corresponding to candidate particles jointly by exploiting the underlying low-rank constraints. In addition, the proposed CLRST algorithm is computationally attractive since the temporal consistency property helps prune particles and the low-rank minimization problem for learning joint sparse representations can be efficiently solved by a sequence of closed-form update operations. We evaluate the proposed CLRST algorithm against 14 state-of-the-art tracking methods on a set of 25 challenging image sequences. Experimental results show that the CLRST algorithm performs favorably against state-of-the-art tracking methods in terms of accuracy and execution time.

  17. Inhibition of osteoclastogenesis by RNA interference targeting RANK

    Directory of Open Access Journals (Sweden)

    Ma Ruofan

    2012-08-01

    Background: Osteoclasts and osteoblasts regulate bone resorption and formation to allow bone remodeling and homeostasis. The balance between bone resorption and formation is disturbed by abnormal recruitment of osteoclasts. Osteoclast differentiation is dependent on the receptor activator of nuclear factor kappa-B (RANK) ligand (RANKL) as well as the macrophage colony-stimulating factor (M-CSF). The RANKL/RANK system and RANK signaling induce osteoclast formation mediated by various cytokines. The RANK/RANKL pathway has been primarily implicated in metabolic, degenerative and neoplastic bone disorders or osteolysis. The central role of the RANK/RANKL interaction in osteoclastogenesis makes RANK an attractive target for potential therapies in the treatment of osteolysis. The purpose of this study was to assess the effect of inhibition of RANK expression in mouse bone marrow macrophages on osteoclast differentiation and bone resorption. Methods: Three pairs of short hairpin RNAs (shRNAs) targeting RANK were designed and synthesized. The optimal shRNA was selected among the three pairs based on RANK expression as analyzed by Western blot and real-time PCR. We investigated suppression of osteoclastogenesis of mouse bone marrow macrophages (BMMs) using the optimal shRNA targeting RANK. Results: Among the three shRNAs examined, shRANK-3 significantly suppressed RANK expression (by 88.3%; p ...). Conclusions: These findings suggest that retrovirus-mediated shRNA targeting RANK inhibits osteoclast differentiation and osteolysis, and may be an attractive target for preventing osteolysis in humans, with potential clinical application.

  18. A Cognitive Model for Aggregating People's Rankings

    Science.gov (United States)

    Lee, Michael D.; Steyvers, Mark; Miller, Brent

    2014-01-01

    We develop a cognitive modeling approach, motivated by classic theories of knowledge representation and judgment from psychology, for combining people's rankings of items. The model makes simple assumptions about how individual differences in knowledge lead to observed ranking data in behavioral tasks. We implement the cognitive model as a Bayesian graphical model, and use computational sampling to infer an aggregate ranking and measures of individual expertise. Applications of the model to 23 data sets, dealing with general knowledge and prediction tasks, show that the model performs well in producing an aggregate ranking that is often close to the ground truth and, as in the “wisdom of the crowd” effect, usually performs better than most of the individuals. We also present some evidence that the model outperforms the traditional statistical Borda count method, and that the model is able to infer people's relative expertise surprisingly well without knowing the ground truth. We discuss the advantages of the cognitive modeling approach to combining ranking data, and to wisdom-of-the-crowd research generally, as well as highlighting a number of potential directions for future model development. PMID:24816733
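    The Borda count baseline mentioned above is simple to state in code; the example rankings are illustrative.

```python
from collections import defaultdict

def borda_aggregate(rankings):
    """Aggregate individual rankings with the Borda count baseline.

    rankings: list of lists, each an ordering of the same items (best first).
    Returns the items ordered by total Borda score (best first)."""
    scores = defaultdict(int)
    for ranking in rankings:
        n = len(ranking)
        for position, item in enumerate(ranking):
            scores[item] += n - 1 - position  # top item earns n-1 points
    return sorted(scores, key=scores.get, reverse=True)

# Example: three people rank four items.
print(borda_aggregate([["a", "b", "c", "d"],
                       ["b", "a", "d", "c"],
                       ["a", "c", "b", "d"]]))
```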

  19. Resolution of ranking hierarchies in directed networks

    Science.gov (United States)

    Barucca, Paolo; Lillo, Fabrizio

    2018-01-01

    Identifying hierarchies and rankings of nodes in directed graphs is fundamental in many applications such as social network analysis, biology, economics, and finance. A recently proposed method identifies the hierarchy by finding the ordered partition of nodes which minimises a score function, termed agony. This function penalises the links violating the hierarchy in a way depending on the strength of the violation. To investigate the resolution of ranking hierarchies we introduce an ensemble of random graphs, the Ranked Stochastic Block Model. We find that agony may fail to identify hierarchies when the structure is not strong enough and the size of the classes is small with respect to the whole network. We analytically characterise the resolution threshold and we show that an iterated version of agony can partly overcome this resolution limit. PMID:29394278
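    A minimal sketch of the agony score of a candidate ranking, using the standard penalty of rank(u) - rank(v) + 1 for edges that violate the hierarchy; the hard part, minimizing agony over all ordered partitions, is not shown, and the toy graph is illustrative.

```python
import networkx as nx

def agony(G: nx.DiGraph, rank: dict) -> int:
    """Agony of a candidate ranking: an edge u -> v is expected to point from a
    lower-ranked to a higher-ranked node; each violating edge is penalised by
    the strength of the violation, rank[u] - rank[v] + 1."""
    return sum(max(rank[u] - rank[v] + 1, 0) for u, v in G.edges)

# Toy three-level hierarchy with one violating (back) edge a -> c.
G = nx.DiGraph([("c", "b"), ("b", "a"), ("c", "a"), ("a", "c")])
print(agony(G, {"c": 0, "b": 1, "a": 2}))  # only a -> c contributes (score 3)
```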

  20. Low Rank Approximation Algorithms, Implementation, Applications

    CERN Document Server

    Markovsky, Ivan

    2012-01-01

    Matrix low-rank approximation is intimately related to data modelling; a problem that arises frequently in many different fields. Low Rank Approximation: Algorithms, Implementation, Applications is a comprehensive exposition of the theory, algorithms, and applications of structured low-rank approximation. Local optimization methods and effective suboptimal convex relaxations for Toeplitz, Hankel, and Sylvester structured problems are presented. A major part of the text is devoted to application of the theory. Applications described include: system and control theory: approximate realization, model reduction, output error, and errors-in-variables identification; signal processing: harmonic retrieval, sum-of-damped exponentials, finite impulse response modeling, and array processing; machine learning: multidimensional scaling and recommender system; computer vision: algebraic curve fitting and fundamental matrix estimation; bioinformatics for microarray data analysis; chemometrics for multivariate calibration; ...
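    For the unstructured case, the best rank-r approximation is given by the truncated SVD (Eckart-Young); the sketch below illustrates this baseline, whereas the structured (Hankel, Toeplitz, Sylvester) problems treated in the book require the dedicated local-optimization or relaxation methods it describes. The toy matrix is illustrative.

```python
import numpy as np

def truncated_svd_approximation(A, r):
    """Best unstructured rank-r approximation of A in the Frobenius and
    spectral norms (Eckart-Young theorem)."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return (U[:, :r] * s[:r]) @ Vt[:r]

A = np.random.default_rng(0).random((8, 6))
A2 = truncated_svd_approximation(A, 2)
print(np.linalg.matrix_rank(A2))  # 2
```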