WorldWideScience

Sample records for rank based probabilistic

  1. Incorporating linguistic, probabilistic, and possibilistic information in a risk-based approach for ranking contaminated sites.

    Science.gov (United States)

    Zhang, Kejiang; Achari, Gopal; Pei, Yuansheng

    2010-10-01

    Different types of uncertain information (linguistic, probabilistic, and possibilistic) exist in site characterization. Their representation and propagation significantly influence the management of contaminated sites. In the absence of a framework with which to properly represent and integrate these quantitative and qualitative inputs, decision makers cannot take full advantage of the available and necessary information to identify all the plausible alternatives. A systematic methodology was developed in the present work to incorporate linguistic, probabilistic, and possibilistic information into the Preference Ranking Organization METHod for Enrichment Evaluation (PROMETHEE), a subgroup of Multi-Criteria Decision Analysis (MCDA) methods for ranking contaminated sites. The identification of criteria based on the paradigm of comparative risk assessment provides a rationale for risk-based prioritization. Uncertain linguistic, probabilistic, and possibilistic information identified in characterizing contaminated sites can be represented, according to its nature, as numerical values, intervals, probability distributions, fuzzy sets or possibility distributions, and linguistic variables. These different kinds of representation are first transformed into a 2-tuple linguistic representation domain, and the propagation of hybrid uncertainties is then carried out in the same domain. This methodology uses the original site information directly as much as possible. The case study shows that this systematic methodology provides more reasonable results. © 2010 SETAC.

  2. Probabilistic real-time contingency ranking method

    International Nuclear Information System (INIS)

    Mijuskovic, N.A.; Stojnic, D.

    2000-01-01

    This paper describes a real-time contingency ranking method based on a probabilistic index: expected energy not supplied. This makes it possible to take into account the stochastic nature of electric power system equipment outages. The approach enables a more comprehensive ranking of contingencies, and reliability cost values can be formed that serve as the basis for hourly spot price calculations. The electric power system of Serbia is used as an example for the proposed method. (author)
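
    The expected-energy-not-supplied (EENS) index described above can be sketched as a probability-weighted severity score; the contingency names, outage probabilities, and curtailment figures below are illustrative, not taken from the paper.

```python
# Illustrative sketch: rank contingencies by expected energy not supplied (EENS).
# EENS weights each contingency's estimated load curtailment by the probability
# that the outage actually occurs, capturing the stochastic nature of outages.

def eens_ranking(contingencies):
    """contingencies: (name, outage probability, curtailed energy in MWh).
    Returns (name, EENS) pairs sorted by EENS, most severe first."""
    scored = [(name, p * curtailment) for name, p, curtailment in contingencies]
    return sorted(scored, key=lambda item: item[1], reverse=True)

cases = [
    ("line_A", 0.02, 150.0),    # frequent but mild outage
    ("line_B", 0.001, 5000.0),  # rare but severe outage
    ("gen_C", 0.005, 800.0),
]

for name, eens in eens_ranking(cases):
    print(f"{name}: EENS = {eens:.1f} MWh")  # line_B first (EENS = 5.0)
```

    A rare-but-severe outage can thus outrank a frequent-but-mild one, which a purely deterministic ranking would miss.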

  3. Exploiting Tensor Rank-One Decomposition in Probabilistic Inference

    Czech Academy of Sciences Publication Activity Database

    Savický, Petr; Vomlel, Jiří

    2007-01-01

    Roč. 43, č. 5 (2007), s. 747-764 ISSN 0023-5954 R&D Projects: GA MŠk 1M0545; GA MŠk 1M0572; GA ČR GA201/04/0393 Institutional research plan: CEZ:AV0Z10300504; CEZ:AV0Z10750506 Keywords : graphical probabilistic models * probabilistic inference * tensor rank Subject RIV: BD - Theory of Information Impact factor: 0.552, year: 2007 http://dml.cz/handle/10338.dmlcz/135810

  4. Probabilistic relation between In-Degree and PageRank

    NARCIS (Netherlands)

    Litvak, Nelli; Scheinhardt, Willem R.W.; Volkovich, Y.

    2008-01-01

    This paper presents a novel stochastic model that explains the relation between power laws of In-Degree and PageRank. PageRank is a popularity measure designed by Google to rank Web pages. We model the relation between PageRank and In-Degree through a stochastic equation, which is inspired by the
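
    For readers unfamiliar with it, PageRank can be computed by simple power iteration; the toy graph, damping factor, and iteration count below are illustrative choices, not the paper's model.

```python
# Minimal PageRank by power iteration, illustrating why a page with high In-Degree
# (here node 2) tends to receive a high PageRank score.

def pagerank(edges, n, d=0.85, iters=100):
    """edges: list of (src, dst) pairs; returns the PageRank vector of length n."""
    out_deg = [0] * n
    for s, _ in edges:
        out_deg[s] += 1
    pr = [1.0 / n] * n
    for _ in range(iters):
        new = [(1.0 - d) / n] * n
        for s, t in edges:
            new[t] += d * pr[s] / out_deg[s]
        # redistribute the mass of dangling nodes (no out-links) uniformly
        dangling = sum(pr[i] for i in range(n) if out_deg[i] == 0)
        new = [x + d * dangling / n for x in new]
        pr = new
    return pr

edges = [(0, 2), (1, 2), (3, 2), (2, 0)]  # node 2 has In-Degree 3
pr = pagerank(edges, 4)
print(max(range(4), key=lambda i: pr[i]))  # 2: the high In-Degree node ranks first
```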

  5. Expert opinion on landslide susceptibility elicited by probabilistic inversion from scenario rankings

    Science.gov (United States)

    Lee, Katy; Dashwood, Claire; Lark, Murray

    2016-04-01

    For many natural hazards the opinion of experts, with experience in assessing susceptibility under different circumstances, is a valuable source of information on which to base risk assessments. This is particularly important where incomplete process understanding and limited data limit the scope to predict susceptibility by mechanistic or statistical modelling. The expert has a tacit model of a system, based on their understanding of processes and their field experience. This model may vary in quality, depending on the experience of the expert. There is considerable interest in how one may elicit expert understanding by a process which is transparent and robust, to provide a basis for decision support. One approach is to provide experts with a set of scenarios, and then to ask them to rank small overlapping subsets of these with respect to susceptibility. Methods of probabilistic inversion have been used to compute susceptibility scores for each scenario, implicit in the expert ranking. It is also possible to model these scores as functions of measurable properties of the scenarios. This approach has been used to assess the susceptibility of animal populations to invasive diseases, to assess risk to vulnerable marine environments, and to assess the risk in hypothetical novel technologies for food production. We will present the results of a study in which a group of geologists with varying degrees of expertise in assessing landslide hazards were asked to rank sets of hypothetical simplified scenarios with respect to landslide susceptibility. We examine the consistency of their rankings and the importance of different properties of the scenarios in the tacit susceptibility model that their rankings implied. Our results suggest that this is a promising approach to the problem of how experts can communicate their tacit model of uncertain systems to those who want to make use of their expertise.

  6. Constrained dictionary learning and probabilistic hypergraph ranking for person re-identification

    Science.gov (United States)

    He, You; Wu, Song; Pu, Nan; Qian, Li; Xiao, Guoqiang

    2018-04-01

    Person re-identification is a fundamental and inevitable task in public security. In this paper, we propose a novel framework to improve the performance of this task. First, two different types of descriptors are extracted to represent a pedestrian: (1) appearance-based superpixel features, constituted mainly by conventional color features and extracted from superpixels rather than the whole picture, and (2) deep features extracted by a feature-fusion network, used because of the limited discrimination of appearance features. Second, a view-invariant subspace is learned by dictionary learning constrained by the minimum negative sample (termed DL-cMN) to reduce the noise in the appearance-based superpixel feature domain. Then, we use deep features and sparse codes transformed from appearance-based features to establish the hyperedges respectively by k-nearest neighbors, rather than simply concatenating the different features. Finally, a final ranking is performed by a probabilistic hypergraph ranking algorithm. Extensive experiments on three challenging datasets (VIPeR, PRID450S and CUHK01) demonstrate the advantages and effectiveness of our proposed algorithm.

  7. Rank of tensors of l-out-of-k functions: an application in probabilistic inference

    Czech Academy of Sciences Publication Activity Database

    Vomlel, Jiří

    2011-01-01

    Roč. 47, č. 3 (2011), s. 317-336 ISSN 0023-5954 R&D Projects: GA MŠk 1M0572; GA ČR GA201/09/1891; GA ČR GEICC/08/E010 Grant - others:GA MŠk(CZ) 2C06019 Institutional research plan: CEZ:AV0Z10750506 Keywords : Bayesian network * probabilistic inference * tensor rank Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 0.454, year: 2011 http://library.utia.cas.cz/separaty/2011/MTR/vomlel-0361630.pdf

  8. Citation graph based ranking in Invenio

    CERN Document Server

    Marian, Ludmila; Rajman, Martin; Vesely, Martin

    2010-01-01

    Invenio is the web-based integrated digital library system developed at CERN. Within this framework, we present four types of ranking models based on the citation graph that complement the simple approach based on citation counts: time-dependent citation counts, a relevancy ranking which extends the PageRank model, a time-dependent ranking which combines the freshness of citations with PageRank and a ranking that takes into consideration the external citations. We present our analysis and results obtained on two main data sets: Inspire and CERN Document Server. Our main contributions are: (i) a study of the currently available ranking methods based on the citation graph; (ii) the development of new ranking methods that correct some of the identified limitations of the current methods such as treating all citations of equal importance, not taking time into account or considering the citation graph complete; (iii) a detailed study of the key parameters for these ranking methods. (The original publication is ava...

  9. SibRank: Signed bipartite network analysis for neighbor-based collaborative ranking

    Science.gov (United States)

    Shams, Bita; Haratizadeh, Saman

    2016-09-01

    Collaborative ranking is an emerging field of recommender systems that utilizes users' preference data rather than rating values. Unfortunately, neighbor-based collaborative ranking has gained little attention despite its greater flexibility and justifiability. This paper proposes a novel framework, called SibRank, that seeks to improve on state-of-the-art neighbor-based collaborative ranking methods. SibRank represents users' preferences as a signed bipartite network and finds similar users through a novel personalized ranking algorithm in signed networks.

  10. Iris Template Protection Based on Local Ranking

    Directory of Open Access Journals (Sweden)

    Dongdong Zhao

    2018-01-01

    Full Text Available Biometrics have been widely studied in recent years, and they are increasingly employed in real-world applications. Meanwhile, a number of potential threats to the privacy of biometric data arise. Iris template protection demands that the privacy of iris data be protected when performing iris recognition. According to the international standard ISO/IEC 24745, iris template protection should satisfy irreversibility, revocability, and unlinkability. However, existing work on iris template protection demonstrates that it is difficult to satisfy these three privacy requirements simultaneously while supporting effective iris recognition. In this paper, we propose an iris template protection method based on local ranking. Specifically, the iris data are first XORed (exclusive OR) with an application-specific string; next, we divide the results into blocks and then partition the blocks into groups. The blocks in each group are ranked according to their decimal values, and the original blocks are replaced by their rank values for storage. We also extend the basic method to support the shifting strategy and masking strategy, which are two important strategies for iris recognition. We demonstrate that the proposed method satisfies irreversibility, revocability, and unlinkability. Experimental results on typical iris datasets (i.e., CASIA-IrisV3-Interval, CASIA-IrisV4-Lamp, UBIRIS-V1-S1, and MMU-V1) show that the proposed method maintains recognition performance while protecting the privacy of iris data.
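
    The XOR-then-rank transform described above can be sketched as follows; the bit strings, block size, and group size are illustrative, not the parameters used in the paper.

```python
# Sketch of iris template protection by local ranking: XOR the iris code with an
# application-specific string, split the result into blocks, group the blocks,
# and store each block's rank within its group instead of its raw value.

def local_ranking(iris_bits, app_bits, block_size=4, group_size=4):
    xored = [a ^ b for a, b in zip(iris_bits, app_bits)]
    # interpret each block of bits as a decimal value
    blocks = [int("".join(map(str, xored[i:i + block_size])), 2)
              for i in range(0, len(xored), block_size)]
    template = []
    for g in range(0, len(blocks), group_size):
        group = blocks[g:g + group_size]
        order = sorted(range(len(group)), key=lambda i: (group[i], i))  # ties by position
        ranks = [0] * len(group)
        for rank, idx in enumerate(order):
            ranks[idx] = rank
        template.extend(ranks)  # only the ranks are stored, not the block values
    return template

iris = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 1, 1, 0, 0, 0, 1]
app  = [0, 1, 0, 1, 1, 0, 1, 0, 0, 0, 1, 1, 1, 1, 0, 0]
print(local_ranking(iris, app))  # [3, 0, 1, 2]
```

    Because only within-group ranks are stored, many distinct block values map to the same template, which is what makes inverting the template hard.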

  11. Fuzzy-set based contingency ranking

    International Nuclear Information System (INIS)

    Hsu, Y.Y.; Kuo, H.C.

    1992-01-01

    In this paper, a new approach based on fuzzy set theory is developed for contingency ranking of the Taiwan power system. To examine whether a power system can remain in a secure and reliable operating state under contingency conditions, those contingency cases that will result in loss-of-load, loss-of-generation, or islanding are first identified. Then a 1P-1Q iteration of the fast decoupled load flow is performed to estimate post-contingent quantities (line flows, bus voltages) for the other contingency cases. Based on system operators' past experience, each post-contingent quantity is assigned a degree of severity according to the potential damage that the quantity could impose on the power system, should the contingency occur. An approach based on fuzzy set theory is developed to deal with the imprecision of these linguistic terms.

  12. Rank-based Tests of the Cointegrating Rank in Semiparametric Error Correction Models

    NARCIS (Netherlands)

    Hallin, M.; van den Akker, R.; Werker, B.J.M.

    2012-01-01

    Abstract: This paper introduces rank-based tests for the cointegrating rank in an Error Correction Model with i.i.d. elliptical innovations. The tests are asymptotically distribution-free, and their validity does not depend on the actual distribution of the innovations. This result holds despite the

  13. Ranking the Online Documents Based on Relative Credibility Measures

    Directory of Open Access Journals (Sweden)

    Ahmad Dahlan

    2013-09-01

    Full Text Available Information searching is the most popular activity on the Internet. Usually the search engine provides the search results ranked by relevance. However, for purposes that concern information credibility, particularly citing information for scientific works, another approach to ranking the search engine results is required. This paper presents a study on developing a new ranking method based on the credibility of information. The method is built upon two well-known algorithms, PageRank and Citation Analysis. In the experiment, the Spearman Rank Correlation Coefficient was used to compare the proposed rank (generated by the method) with a standard rank (generated manually by a group of experts); on average, 0 < rS < critical value, meaning a positive correlation existed but was not statistically significant. Hence the proposed rank does not yet satisfy the standard, but its performance could be improved.
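
    The Spearman rank correlation coefficient used in this evaluation has a closed form for tie-free rankings; the two example rankings below are made up for illustration.

```python
# Spearman's rank correlation for two rankings of the same n items (no ties):
# r_s = 1 - 6 * sum(d_i^2) / (n * (n^2 - 1)), where d_i is the rank difference of item i.

def spearman(rank_a, rank_b):
    n = len(rank_a)
    d2 = sum((a - b) ** 2 for a, b in zip(rank_a, rank_b))
    return 1.0 - 6.0 * d2 / (n * (n ** 2 - 1))

proposed = [1, 2, 3, 4, 5]  # ranks from a proposed method
expert   = [2, 1, 4, 3, 5]  # ranks assigned by experts
print(spearman(proposed, expert))  # 0.8: positive, but significance still needs a critical-value test
```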

  15. Staged decision making based on probabilistic forecasting

    Science.gov (United States)

    Booister, Nikéh; Verkade, Jan; Werner, Micha; Cranston, Michael; Cumiskey, Lydia; Zevenbergen, Chris

    2016-04-01

    Flood forecasting systems reduce, but cannot eliminate, uncertainty about the future. Probabilistic forecasts explicitly show that uncertainty remains. However, compared to deterministic forecasts, a dimension is added ('probability' or 'likelihood'), and this added dimension makes decision making slightly more complicated. One decision-support technique is the cost-loss approach, which defines whether or not to issue a warning or implement mitigation measures (a risk-based method). With the cost-loss method a warning is issued when the ratio of the response costs to the damage reduction is less than or equal to the probability of the possible flood event. This cost-loss method is not widely used, because it motivates decisions on economic values alone and is relatively static (no reasoning, a yes/no decision). Nevertheless it has high potential to improve risk-based decision making based on probabilistic flood forecasting, because no other methods are known that deal with probabilities in decision making. The main aim of this research was to explore ways of making decision making based on probabilities with the cost-loss method better applicable in practice. The exploration began by identifying other situations in which decisions are taken based on uncertain forecasts or predictions. These cases spanned a range of degrees of uncertainty, from known uncertainty to deep uncertainty. Based on the types of uncertainties, concepts for dealing with such situations were analysed and applicable concepts were chosen. Out of this analysis, the concepts of flexibility and robustness appeared to fit the existing method. Instead of taking big decisions with bigger consequences at once, the idea is that actions and decisions are cut up into smaller pieces, and the decision to implement is made based on the economic costs of decisions and measures and the reduced effect of flooding. The more lead-time there is in
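
    The cost-loss rule at the core of the method fits in a few lines; the response cost and damage-reduction figures below are illustrative.

```python
# Cost-loss decision rule: issue a warning / implement a measure when the ratio of
# response cost C to damage reduction L is at most the forecast event probability p.

def should_act(p_flood, cost, loss_reduction):
    return cost / loss_reduction <= p_flood

# Hypothetical measure: costs 50 k but avoids 400 k of damage -> act when p >= 0.125
print(should_act(0.30, 50.0, 400.0))  # True
print(should_act(0.05, 50.0, 400.0))  # False
```

    Staging the decision, as proposed above, amounts to applying this rule repeatedly to smaller actions as the forecast probability sharpens with shrinking lead time.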

  16. On revision of partially specified convex probabilistic belief bases

    CSIR Research Space (South Africa)

    Rens, G

    2016-08-01

    Full Text Available We propose a method for an agent to revise its incomplete probabilistic beliefs when a new piece of propositional information is observed. In this work, an agent’s beliefs are represented by a set of probabilistic formulae – a belief base...

  17. Non-steroidal Anti-inflammatory Drugs Ranking by Nondeterministic Assessments of Probabilistic Type

    Directory of Open Access Journals (Sweden)

    Madalina Luiza MOLDOVEANU

    2012-09-01

    Full Text Available Non-steroidal anti-inflammatory drugs (NSAIDs) share common therapeutic prescriptions, common mechanisms, common pharmacological effects (analgesic, antipyretic and anti-inflammatory, acetaminophen excepted) and common side effects (platelet dysfunction, gastritis and peptic ulcers, renal insufficiency in susceptible patients, water and sodium retention, edemas, nephropathies), with only a few differing characteristics: chemical structures, pharmacokinetics, therapeutic possibilities, and selectivities for the cyclooxygenase pathways 1 and 2. Their similarities are thus more apparent than their differences. Since in a correct treatment benefits should exceed risks, the question "Which anti-inflammatory drug presents the lowest risks for a patient?" is a natural one. Using the Global Risk Method (GRM) and the Maximum Risk Method (MRM), we determined a ranking of fourteen NSAIDs according to the risks presented by each particular NSAID. Nimesulide, Etoricoxib and Celecoxib proved superior to the other NSAIDs in safety level, whereas Etodolac and Indomethacin present an increased side-effect risk.

  18. Quantum probability ranking principle for ligand-based virtual screening

    Science.gov (United States)

    Al-Dabbagh, Mohammed Mumtaz; Salim, Naomie; Himmat, Mubarak; Ahmed, Ali; Saeed, Faisal

    2017-04-01

    Chemical libraries contain thousands of compounds that need screening, which increases the need for computational methods that can rank or prioritize compounds. The tools of virtual screening are widely exploited to enhance the cost effectiveness of lead drug discovery programs by ranking databases of chemical compounds in decreasing probability of biological activity based upon the probability ranking principle (PRP). In this paper, we develop a novel ranking approach for molecular compounds inspired by quantum mechanics, called the quantum probability ranking principle (QPRP). The QPRP ranking criteria draw an analogy between a physical experiment and the molecular structure ranking process for 2D fingerprints in ligand-based virtual screening (LBVS). The development of the QPRP criteria in LBVS employs quantum concepts at three levels: first, at the representation level, the model develops a new framework of molecular representation by connecting molecular compounds with a mathematical quantum space; second, it estimates the similarity between chemical libraries and references with a quantum-based similarity searching method; finally, it ranks the molecules using the QPRP approach. Simulated virtual screening experiments with MDL Drug Data Report (MDDR) data sets showed that QPRP outperformed the classical probability ranking principle (PRP) for molecular chemical compounds.

  20. A Citation-Based Ranking of Strategic Management Journals

    OpenAIRE

    Azar, Ofer H.; Brock, David M.

    2007-01-01

    Rankings of strategy journals are important for authors, readers, and promotion and tenure committees. We present several rankings, based either on the number of articles that cited the journal or the per-article impact. Our analyses cover various periods between 1991 and 2006, for most of which the Strategic Management Journal was in first place and Journal of Economics & Management Strategy (JEMS) second, although JEMS ranked first in certain instances. Long Range Planning and Technology An...

  1. CNN-based ranking for biomedical entity normalization.

    Science.gov (United States)

    Li, Haodi; Chen, Qingcai; Tang, Buzhou; Wang, Xiaolong; Xu, Hua; Wang, Baohua; Huang, Dong

    2017-10-03

    Most state-of-the-art biomedical entity normalization systems, such as rule-based systems, merely rely on morphological information of entity mentions and rarely consider their semantic information. In this paper, we introduce a novel convolutional neural network (CNN) architecture that regards biomedical entity normalization as a ranking problem and benefits from the semantic information of biomedical entities. The CNN-based ranking method first generates candidates using handcrafted rules, and then ranks the candidates according to their semantic information modeled by the CNN as well as their morphological information. Experiments on two benchmark datasets for biomedical entity normalization show that our proposed CNN-based ranking method outperforms the traditional rule-based method, achieving state-of-the-art performance. We propose a CNN architecture that regards biomedical entity normalization as a ranking problem. Comparison results show that semantic information is beneficial to biomedical entity normalization and can be well combined with morphological information in our CNN architecture for further improvement.

  2. A Case-Based Reasoning Method with Rank Aggregation

    Science.gov (United States)

    Sun, Jinhua; Du, Jiao; Hu, Jian

    2018-03-01

    In order to improve the accuracy of case-based reasoning (CBR), this paper addresses a new CBR framework built on the basic principle of rank aggregation. First, ranking methods are applied in each attribute subspace of the cases, giving an ordering relation between cases on each attribute; these orderings form a ranking matrix. Second, the similar-case retrieval process over the ranking matrix is transformed into a rank aggregation optimization problem, using the Kemeny optimal aggregation. On this basis, a rank aggregation case-based reasoning algorithm, named RA-CBR, is designed. Experimental results on UCI data sets show that the case retrieval accuracy of RA-CBR is higher than that of Euclidean-distance and Mahalanobis-distance CBR, so we can conclude that RA-CBR increases both the performance and the efficiency of CBR.
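
    Kemeny-optimal aggregation, the core step of the framework above, can be sketched by brute force for small candidate sets; the per-attribute rankings below are illustrative.

```python
# Kemeny aggregation: among all permutations of the candidates, pick the one that
# minimizes the total Kendall-tau distance (number of discordant pairs) to the
# per-attribute rankings. Brute force is only feasible for small candidate sets.

from itertools import permutations

def kendall_tau(r1, r2):
    """Discordant pairs between two rankings given as item lists, best first."""
    pos2 = {item: i for i, item in enumerate(r2)}
    return sum(1
               for i in range(len(r1))
               for j in range(i + 1, len(r1))
               if pos2[r1[i]] > pos2[r1[j]])

def kemeny(rankings):
    return min(permutations(rankings[0]),
               key=lambda cand: sum(kendall_tau(list(cand), r) for r in rankings))

attr_rankings = [["a", "b", "c"], ["a", "c", "b"], ["b", "a", "c"]]
print(kemeny(attr_rankings))  # ('a', 'b', 'c'): the consensus ordering
```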

  3. A sampling-based approach to probabilistic pursuit evasion

    KAUST Repository

    Mahadevan, Aditya; Amato, Nancy M.

    2012-01-01

    Probabilistic roadmaps (PRMs) are a sampling-based approach to motion-planning that encodes feasible paths through the environment using a graph created from a subset of valid positions. Prior research has shown that PRMs can be augmented

  4. Probabilistic Decision Based Block Partitioning for Future Video Coding

    KAUST Repository

    Wang, Zhao; Wang, Shiqi; Zhang, Jian; Wang, Shanshe; Ma, Siwei

    2017-01-01

    , the mode decision problem is cast into a probabilistic framework to select the final partition based on the confidence interval decision strategy. Experimental results show that the proposed CIET algorithm can speed up QTBT block partitioning structure

  5. Probabilistic reasoning for assembly-based 3D modeling

    KAUST Repository

    Chaudhuri, Siddhartha

    2011-01-01

    Assembly-based modeling is a promising approach to broadening the accessibility of 3D modeling. In assembly-based modeling, new models are assembled from shape components extracted from a database. A key challenge in assembly-based modeling is the identification of relevant components to be presented to the user. In this paper, we introduce a probabilistic reasoning approach to this problem. Given a repository of shapes, our approach learns a probabilistic graphical model that encodes semantic and geometric relationships among shape components. The probabilistic model is used to present components that are semantically and stylistically compatible with the 3D model that is being assembled. Our experiments indicate that the probabilistic model increases the relevance of presented components. © 2011 ACM.

  6. INTEL: Intel based systems move up in supercomputing ranks

    CERN Multimedia

    2002-01-01

    "The TOP500 supercomputer rankings released today at the Supercomputing 2002 conference show a dramatic increase in the number of Intel-based systems being deployed in high-performance computing (HPC) or supercomputing areas" (1/2 page).

  7. Weighted Discriminative Dictionary Learning based on Low-rank Representation

    International Nuclear Information System (INIS)

    Chang, Heyou; Zheng, Hao

    2017-01-01

    Low-rank representation has been widely used in the field of pattern classification, especially when both training and testing images are corrupted with large noise. The dictionary plays an important role in low-rank representation. With respect to a semantic dictionary, the optimal representation matrix should be block-diagonal. However, traditional low-rank-representation-based dictionary learning methods cannot effectively exploit the discriminative information between the data and the dictionary. To address this problem, this paper proposes weighted discriminative dictionary learning based on low-rank representation, in which a weighted representation regularization term is constructed. The regularization term associates the label information of both training samples and dictionary atoms, and encourages a discriminative representation with a class-wise block-diagonal structure, which further improves classification performance when both training and testing images are corrupted with large noise. Experimental results demonstrate the advantages of the proposed method over state-of-the-art methods. (paper)

  8. A network-based dynamical ranking system for competitive sports

    Science.gov (United States)

    Motegi, Shun; Masuda, Naoki

    2012-12-01

    From the viewpoint of networks, a ranking system for players or teams in sports is equivalent to a centrality measure for sports networks, whereby a directed link represents the result of a single game. Previously proposed network-based ranking systems are derived from static networks, i.e., aggregations of the results of games over time. However, the score of a player (or team) fluctuates over time. Defeating a renowned player at their peak performance is intuitively more rewarding than defeating the same player in other periods. To account for this factor, we propose a dynamic variant of such a network-based ranking system and apply it to professional men's tennis data. We derive a set of linear online update equations for the score of each player. The proposed ranking system predicts the outcomes of future games with higher accuracy than its static counterparts.
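
    The effect of score-dependent online updates can be illustrated with a simplified linear rule (not the paper's exact equations); the players, initial scores, and update constant are made up.

```python
# Simplified dynamic ranking: after each game the winner absorbs a fraction k of
# the loser's current score, so beating a highly rated opponent at their peak is
# worth more than beating the same opponent when their score is low.

def play(scores, winner, loser, k=0.1):
    transfer = k * scores[loser]
    scores[winner] += transfer
    scores[loser] -= transfer

scores = {"alice": 1.0, "bob": 1.0, "carol": 1.0}
play(scores, "alice", "bob")    # alice beats bob
play(scores, "carol", "alice")  # carol beats alice while alice is at her peak
print(sorted(scores, key=scores.get, reverse=True))  # ['carol', 'alice', 'bob']
```

    Note that the total score is conserved, so each update is a redistribution of score mass along the directed game links.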

  9. Ranking of Developing Countries Based on the Economic Freedom Index

    OpenAIRE

    Zirak, Masoumeh; Mehrara, Mohsen

    2013-01-01

    In this paper we rank developing countries based on the Economic Freedom Index, analysing how this ranking can be done using a numerical taxonomic methodology. By estimating the effects of the determinants of FDI in 123 developing countries from 1997 to 2010, the results showed that the foreign direct investment each country attracts differs according to its degree of economic freedom or economic openness. In this study China, Ecuador, Liberia, Az...

  10. Probabilistic low-rank factorization accelerates tensor network simulations of critical quantum many-body ground states

    Science.gov (United States)

    Kohn, Lucas; Tschirsich, Ferdinand; Keck, Maximilian; Plenio, Martin B.; Tamascelli, Dario; Montangero, Simone

    2018-01-01

    We provide evidence that randomized low-rank factorization is a powerful tool for the determination of the ground-state properties of low-dimensional lattice Hamiltonians through tensor network techniques. In particular, we show that randomized matrix factorization outperforms truncated singular value decomposition based on state-of-the-art deterministic routines in time-evolving block decimation (TEBD)- and density matrix renormalization group (DMRG)-style simulations, even when the system under study gets close to a phase transition: We report linear speedups in the bond or local dimension of up to 24 times in quasi-two-dimensional cylindrical systems.
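
    The randomized factorization the authors exploit can be sketched with a Gaussian range finder followed by an SVD of a small projected matrix; the test matrix, target rank, and oversampling below are illustrative.

```python
# Randomized low-rank SVD (Halko-Martinsson-Tropp style): sample the range of A
# with a Gaussian test matrix, orthonormalize, then take the SVD of the small
# projected matrix Q^T A instead of the full matrix.

import numpy as np

def randomized_svd(A, rank, oversample=5, seed=0):
    rng = np.random.default_rng(seed)
    Y = A @ rng.standard_normal((A.shape[1], rank + oversample))  # range sampling
    Q, _ = np.linalg.qr(Y)                                        # orthonormal basis
    U_small, s, Vt = np.linalg.svd(Q.T @ A, full_matrices=False)  # small SVD
    return (Q @ U_small)[:, :rank], s[:rank], Vt[:rank]

rng = np.random.default_rng(1)
A = rng.standard_normal((200, 20)) @ rng.standard_normal((20, 200))  # exact rank 20
U, s, Vt = randomized_svd(A, rank=20)
err = np.linalg.norm(A - (U * s) @ Vt) / np.linalg.norm(A)
print(f"relative reconstruction error: {err:.2e}")  # near machine precision here
```

    For an exactly rank-20 matrix the sketch captures the range almost surely, so the error is tiny; in TEBD/DMRG-style use the matrix is only approximately low rank, and the speedup comes from never forming the full SVD.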

  11. Image Re-Ranking Based on Topic Diversity.

    Science.gov (United States)

    Qian, Xueming; Lu, Dan; Wang, Yaxiong; Zhu, Li; Tang, Yuan Yan; Wang, Meng

    2017-08-01

    Social media sharing websites allow users to annotate images with free tags, which significantly contribute to the development of web image retrieval. Tag-based image search is an important method for finding images shared by users in social networks. However, making the top-ranked results both relevant and diverse is challenging. In this paper, we propose a topic-diverse ranking approach for tag-based image retrieval with the aim of promoting topic coverage performance. First, we construct a tag graph based on the similarity between each pair of tags. Then, a community detection method is conducted to mine the topic community of each tag. After that, inter-community and intra-community ranking are introduced to obtain the final retrieved results. In the inter-community ranking process, an adaptive random walk model is employed to rank the communities based on the multi-information of each topic community. In addition, we build an inverted index structure for images to accelerate the searching process. Experimental results on the Flickr and NUS-WIDE data sets show the effectiveness of the proposed approach.

  12. International Conference on Robust Rank-Based and Nonparametric Methods

    CERN Document Server

    McKean, Joseph

    2016-01-01

    The contributors to this volume include many of the distinguished researchers in this area. Many of these scholars have collaborated with Joseph McKean to develop underlying theory for these methods, obtain small sample corrections, and develop efficient algorithms for their computation. The papers cover the scope of the area, including robust nonparametric rank-based procedures through Bayesian and big data rank-based analyses. Areas of application include biostatistics and spatial areas. Over the last 30 years, robust rank-based and nonparametric methods have developed considerably. These procedures generalize traditional Wilcoxon-type methods for one- and two-sample location problems. Research into these procedures has culminated in complete analyses for many of the models used in practice including linear, generalized linear, mixed, and nonlinear models. Settings are both multivariate and univariate. With the development of R packages in these areas, computation of these procedures is easily shared with r...

  13. Probabilistic fatigue life prediction methodology for notched components based on simple smooth fatigue tests

    Energy Technology Data Exchange (ETDEWEB)

    Wu, Z. R.; Li, Z. X. [Dept.of Engineering Mechanics, Jiangsu Key Laboratory of Engineering Mechanics, Southeast University, Nanjing (China); Hu, X. T.; Xin, P. P.; Song, Y. D. [State Key Laboratory of Mechanics and Control of Mechanical Structures, Nanjing University of Aeronautics and Astronautics, Nanjing (China)

    2017-01-15

    A methodology for probabilistic fatigue life prediction of notched components based on smooth specimens is presented. Weakest-link theory incorporating the Walker strain model is utilized in this approach, and the effects of stress ratio and stress gradient are considered. The Weibull distribution and the median rank estimator are used to describe fatigue statistics. Fatigue tests under different stress ratios were conducted on smooth and notched specimens of the titanium alloy TC-1-1, and the proposed procedure was checked against the test data of the notched specimens. Predictions at a 50% survival rate all fall within a factor-of-two scatter band of the test results.
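The median rank estimator mentioned above is commonly computed with Bernard's approximation, F_i ≈ (i − 0.3)/(n + 0.4), and combined with a linearized Weibull fit. The sketch below illustrates that generic procedure; the fatigue lives are hypothetical and the code is not the authors' implementation.

```python
import numpy as np

def median_ranks(n):
    """Bernard's approximation to the median rank of the i-th of n failures."""
    i = np.arange(1, n + 1)
    return (i - 0.3) / (n + 0.4)

def fit_weibull(lives):
    """Least-squares Weibull fit on the linearised CDF:
    ln(-ln(1-F)) = beta*ln(N) - beta*ln(eta)."""
    lives = np.sort(np.asarray(lives, dtype=float))
    F = median_ranks(len(lives))
    x = np.log(lives)
    y = np.log(-np.log(1.0 - F))
    beta, c = np.polyfit(x, y, 1)      # slope = shape, intercept gives scale
    eta = np.exp(-c / beta)
    return beta, eta                   # shape, characteristic life

# Hypothetical fatigue lives (cycles), for illustration only.
lives = [1.2e5, 1.8e5, 2.1e5, 2.9e5, 3.6e5]
beta, eta = fit_weibull(lives)
```

With the shape and scale in hand, a 50% survival life follows from the fitted Weibull quantile function.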

  14. Logic-based aggregation methods for ranking student applicants

    Directory of Open Access Journals (Sweden)

    Milošević Pavle

    2017-01-01

    In this paper, we present logic-based aggregation models used for ranking student applicants and compare them with a number of existing aggregation methods, each more complex than the previous one. The proposed models aim to capture dependencies in the data using logical aggregation (LA). LA is an aggregation method based on interpolative Boolean algebra (IBA), a consistent multi-valued realization of Boolean algebra. This technique is used for a Boolean-consistent aggregation of attributes that are logically dependent. The comparison is performed on the case of student applicants to master programs at the University of Belgrade. We show that LA has some advantages over the other presented aggregation methods. A software realization of all applied aggregation methods is also provided. This paper may be of interest not only for student ranking, but also for similar problems of ranking people, e.g., employees or team members.

  15. CAD Parts-Based Assembly Modeling by Probabilistic Reasoning

    KAUST Repository

    Zhang, Kai-Ke; Hu, Kai-Mo; Yin, Li-Cheng; Yan, Dongming; Wang, Bin

    2016-01-01

    Nowadays, an increasing number of parts and sub-assemblies are publicly available and can be used directly for product development instead of being created from scratch. In this paper, we propose an interactive design framework for efficient and smart assembly modeling based on probabilistic reasoning. Given a collection of industrial assemblies, we learn a probabilistic graphical model from the relationships between the parts of the assemblies. In the modeling stage, this probabilistic model is used to suggest the parts most likely to be compatible with the current assembly. Finally, the parts are assembled under certain geometric constraints. We demonstrate the effectiveness of our framework through a variety of assembly models produced by our prototype system. © 2015 IEEE.

  17. Ranking Scientific Publications Based on Their Citation Graph

    CERN Document Server

    Marian, L; Rajman, M

    2009-01-01

    CDS Invenio is the web-based integrated digital library system developed at CERN. It is a suite of applications which provides the framework and tools for building and managing an autonomous digital library server. Within this framework, the goal of this project is to implement new ranking methods based on the bibliographic citation graph extracted from the CDS Invenio database. As a first step, we implemented the Citation Count as a baseline ranking method. The major disadvantage of this method is that all citations are treated equally, disregarding their importance and their publication date. To overcome this drawback, we consider two different approaches: a link-based approach which extends the PageRank model to the bibliographic citation graph and a time-dependent approach which takes into account time in the citation counts. In addition, we also combined these two approaches in a hybrid model based on a time-dependent PageRank. In the present document, we describe the conceptual background behind our new...
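The link-based approach described above extends PageRank to the bibliographic citation graph. A minimal power-iteration sketch of generic PageRank (not the CDS Invenio implementation; the toy graph is illustrative) looks like this:

```python
import numpy as np

def pagerank(links, d=0.85, tol=1e-10):
    """Power iteration for PageRank on a citation graph.

    links[i] lists the papers that paper i cites; rank mass flows from
    citing papers to cited ones, so heavily cited work scores high.
    """
    n = len(links)
    r = np.full(n, 1.0 / n)
    while True:
        r_new = np.full(n, (1.0 - d) / n)
        for i, cited in enumerate(links):
            if cited:                       # distribute rank over references
                share = d * r[i] / len(cited)
                for j in cited:
                    r_new[j] += share
            else:                           # dangling node: spread uniformly
                r_new += d * r[i] / n
        if np.abs(r_new - r).sum() < tol:
            return r_new
        r = r_new

# Paper 0 is cited by everyone; it should rank first.
links = [[], [0], [0], [0, 1]]
scores = pagerank(links)
```

A time-dependent variant, as described in the abstract, would down-weight each citation's contribution by the age of the citing paper before distributing the rank mass.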

  18. Result diversification based on query-specific cluster ranking

    NARCIS (Netherlands)

    He, J.; Meij, E.; de Rijke, M.

    2011-01-01

    Result diversification is a retrieval strategy for dealing with ambiguous or multi-faceted queries by providing documents that cover as many facets of the query as possible. We propose a result diversification framework based on query-specific clustering and cluster ranking, in which diversification

  19. Result Diversification Based on Query-Specific Cluster Ranking

    NARCIS (Netherlands)

    J. He (Jiyin); E. Meij; M. de Rijke (Maarten)

    2011-01-01

    Result diversification is a retrieval strategy for dealing with ambiguous or multi-faceted queries by providing documents that cover as many facets of the query as possible. We propose a result diversification framework based on query-specific clustering and cluster ranking,

  20. Adaptive Game Level Creation through Rank-based Interactive Evolution

    DEFF Research Database (Denmark)

    Liapis, Antonios; Martínez, Héctor Pérez; Togelius, Julian

    2013-01-01

    as fitness functions for the optimization of the generated content. The preference models are built via ranking-based preference learning, while the content is generated via evolutionary search. The proposed method is evaluated on the creation of strategy game maps, and its performance is tested using...

  1. Ranking Institutional Settings Based on Publications in Community Psychology Journals

    Science.gov (United States)

    Jason, Leonard A.; Pokorny, Steven B.; Patka, Mazna; Adams, Monica; Morello, Taylor

    2007-01-01

    Two primary outlets for community psychology research, the "American Journal of Community Psychology" and the "Journal of Community Psychology", were assessed to rank institutions based on publication frequency and scientific influence of publications over a 32-year period. Three specific periods were assessed (1973-1983, 1984-1994, 1995-2004).…

  2. A sampling-based approach to probabilistic pursuit evasion

    KAUST Repository

    Mahadevan, Aditya

    2012-05-01

    Probabilistic roadmaps (PRMs) are a sampling-based approach to motion-planning that encodes feasible paths through the environment using a graph created from a subset of valid positions. Prior research has shown that PRMs can be augmented with useful information to model interesting scenarios related to multi-agent interaction and coordination. © 2012 IEEE.
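A probabilistic roadmap as described can be sketched in a few lines: sample collision-free configurations and connect each to its k nearest neighbours. The sketch below works in the unit square with a toy circular obstacle and omits the edge-validity check a real planner would need; all names are illustrative.

```python
import math
import random

def build_prm(n_samples, k, is_valid, seed=0):
    """Minimal PRM: sample valid 2-D positions, link k nearest neighbours."""
    rng = random.Random(seed)
    nodes = []
    while len(nodes) < n_samples:           # rejection-sample free space
        p = (rng.random(), rng.random())
        if is_valid(p):
            nodes.append(p)
    edges = {}
    for i, p in enumerate(nodes):
        # Index 0 of the sorted list is i itself (distance 0); skip it.
        nearest = sorted(range(n_samples),
                         key=lambda j: math.dist(p, nodes[j]))[1:k + 1]
        edges[i] = nearest
    return nodes, edges

def free(p):
    """Free space: everything outside a central circular obstacle."""
    return math.dist(p, (0.5, 0.5)) > 0.2

nodes, edges = build_prm(50, 5, free)
```

Queries are then answered by connecting start and goal to the roadmap and running a graph search; the multi-agent scenarios in the abstract annotate this graph with extra information.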

  3. A probabilistic graphical model based stochastic input model construction

    International Nuclear Information System (INIS)

    Wan, Jiang; Zabaras, Nicholas

    2014-01-01

    Model reduction techniques have been widely used in modeling of high-dimensional stochastic input in uncertainty quantification tasks. However, the probabilistic modeling of random variables projected into reduced-order spaces presents a number of computational challenges. Due to the curse of dimensionality, the underlying dependence relationships between these random variables are difficult to capture. In this work, a probabilistic graphical model based approach is employed to learn the dependence by running a number of conditional independence tests using observation data. Thus a probabilistic model of the joint PDF is obtained and the PDF is factorized into a set of conditional distributions based on the dependence structure of the variables. The estimation of the joint PDF from data is then transformed to estimating conditional distributions under reduced dimensions. To improve the computational efficiency, a polynomial chaos expansion is further applied to represent the random field in terms of a set of standard random variables. This technique is combined with both linear and nonlinear model reduction methods. Numerical examples are presented to demonstrate the accuracy and efficiency of the probabilistic graphical model based stochastic input models. - Highlights: • Data-driven stochastic input models without the assumption of independence of the reduced random variables. • The problem is transformed to a Bayesian network structure learning problem. • Examples are given in flows in random media

  4. Probabilistic and machine learning-based retrieval approaches for biomedical dataset retrieval

    Science.gov (United States)

    Karisani, Payam; Qin, Zhaohui S; Agichtein, Eugene

    2018-01-01

    The bioCADDIE dataset retrieval challenge brought together different approaches to retrieval of biomedical datasets relevant to a user’s query, expressed as a text description of a needed dataset. We describe experiments in applying a data-driven, machine learning-based approach to biomedical dataset retrieval as part of this challenge. We report on a series of experiments carried out to evaluate the performance of both probabilistic and machine learning-driven techniques from information retrieval, as applied to this challenge. Our experiments with probabilistic information retrieval methods, such as query term weight optimization, automatic query expansion and simulated user relevance feedback, demonstrate that automatically boosting the weights of important keywords in a verbose query is more effective than other methods. We also show that although there is a rich space of potential representations and features available in this domain, machine learning-based re-ranking models are not able to improve on probabilistic information retrieval techniques with the currently available training data. The models and algorithms presented in this paper can serve as a viable implementation of a search engine to provide access to biomedical datasets. The retrieval performance is expected to be further improved by using additional training data that is created by expert annotation, or gathered through usage logs, clicks and other processes during natural operation of the system. Database URL: https://github.com/emory-irlab/biocaddie

  5. Research on probabilistic assessment method based on the corroded pipeline assessment criteria

    International Nuclear Information System (INIS)

    Zhang Guangli; Luo, Jinheng; Zhao Xinwei; Zhang Hua; Zhang Liang; Zhang Yi

    2012-01-01

    Pipeline integrity assessments are performed using conventional deterministic approaches, even though many of the parameters in the assessment are uncertain. In this paper, a probabilistic assessment method is provided for gas pipelines with corrosion defects, based on current corroded-pipe evaluation criteria; the failure probability of corroded pipelines due to uncertainties in loading, material properties and measurement accuracy is estimated using a Monte Carlo technique. Furthermore, a sensitivity analysis approach is introduced to rank the influence of the various random variables on pipeline safety, and a method to determine the critical defect size based on an acceptable failure probability is proposed. Highlights: ► The Folias factor in pipeline corrosion assessment methods was analyzed. ► A probabilistic method was applied within the corrosion assessment methods. ► The influence of the assessment variables on pipeline reliability was ranked. ► An acceptable failure probability was used to determine the critical defect size.
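The Monte Carlo estimation of failure probability can be illustrated with a toy limit state g = R − S (resistance minus load), failure being g ≤ 0. The distributions and parameters below are hypothetical, chosen only to show the sampling pattern; they are not taken from the paper.

```python
import numpy as np

def mc_failure_probability(n=200_000, seed=0):
    """Monte Carlo estimate of P(failure) for a toy limit state g = R - S.

    R: burst resistance of the corroded pipe (lognormal, illustrative).
    S: operating load (normal, illustrative).
    """
    rng = np.random.default_rng(seed)
    R = rng.lognormal(mean=np.log(10.0), sigma=0.08, size=n)   # MPa
    S = rng.normal(loc=7.0, scale=0.7, size=n)                 # MPa
    g = R - S
    return np.mean(g <= 0.0)                                   # failure fraction

pf = mc_failure_probability()
```

Sensitivity ranking, as in the abstract, then follows by perturbing one input distribution at a time (or correlating inputs with the indicator g ≤ 0) and comparing the change in the estimated failure probability.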

  6. Probabilistic dual heuristic programming-based adaptive critic

    Science.gov (United States)

    Herzallah, Randa

    2010-02-01

    Adaptive critic (AC) methods have common roots as generalisations of dynamic programming for neural reinforcement learning approaches. Since they approximate the dynamic programming solutions, they are potentially suitable for learning in noisy, non-linear and non-stationary environments. In this study, a novel probabilistic dual heuristic programming (DHP)-based AC controller is proposed. Distinct from current approaches, the proposed probabilistic DHP AC method takes uncertainties of the forward model and inverse controller into consideration. It is therefore suitable for deterministic and stochastic control problems characterised by functional uncertainty. The theoretical development of the proposed method is validated by analytically evaluating the correct value of the cost function which satisfies the Bellman equation in a linear quadratic control problem. The target value of the probabilistic critic network is then calculated and shown to be equal to the analytically derived correct value. A full derivation of the Riccati solution for this non-standard stochastic linear quadratic control problem is also provided. Moreover, the performance of the proposed probabilistic controller is demonstrated on linear and non-linear control examples.

  7. Probabilistic Model-based Background Subtraction

    DEFF Research Database (Denmark)

    Krüger, Volker; Anderson, Jakob; Prehn, Thomas

    2005-01-01

    is the correlation between pixels. In this paper we introduce a model-based background subtraction approach which facilitates prior knowledge of pixel correlations for clearer and better results. Model knowledge is being learned from good training video data, the data is stored for fast access in a hierarchical...

  8. Probabilistic confidence for decisions based on uncertain reliability estimates

    Science.gov (United States)

    Reid, Stuart G.

    2013-05-01

    Reliability assessments are commonly carried out to provide a rational basis for risk-informed decisions concerning the design or maintenance of engineering systems and structures. However, calculated reliabilities and associated probabilities of failure often have significant uncertainties associated with the possible estimation errors relative to the 'true' failure probabilities. For uncertain probabilities of failure, a measure of 'probabilistic confidence' has been proposed to reflect the concern that uncertainty about the true probability of failure could result in a system or structure that is unsafe and could subsequently fail. The paper describes how the concept of probabilistic confidence can be applied to evaluate and appropriately limit the probabilities of failure attributable to particular uncertainties such as design errors that may critically affect the dependability of risk-acceptance decisions. This approach is illustrated with regard to the dependability of structural design processes based on prototype testing with uncertainties attributable to sampling variability.

  9. An Individual-based Probabilistic Model for Fish Stock Simulation

    Directory of Open Access Journals (Sweden)

    Federico Buti

    2010-08-01

    We define an individual-based probabilistic model of sole (Solea solea) behaviour. The individual model is given in terms of an Extended Probabilistic Discrete Timed Automaton (EPDTA), a new formalism that is introduced in the paper and shown to be interpretable as a Markov decision process. A given EPDTA model can be probabilistically model-checked via a suitable translation into syntax accepted by existing model-checkers. In order to simulate the dynamics of a given population of soles under different environmental scenarios, an agent-based simulation environment is defined in which each agent implements the behaviour of the given EPDTA model. By varying the probabilities and the characteristic functions embedded in the EPDTA model it is possible to represent different scenarios and to tune the model itself by comparing the results of the simulations with real data about the sole stock in the North Adriatic Sea, available from the recent project SoleMon. The simulator is presented and made available for adaptation to other species.

  10. Variable screening and ranking using sampling-based sensitivity measures

    International Nuclear Information System (INIS)

    Wu, Y-T.; Mohanty, Sitakanta

    2006-01-01

    This paper presents a methodology for screening out insignificant random variables and ranking significant random variables using sensitivity measures, including two cumulative distribution function (CDF)-based and two mean-response-based measures. The methodology features (1) using random samples to compute sensitivities and (2) using acceptance limits, derived from tests of hypothesis, to classify random variables as significant or insignificant. Because no approximation is needed in either the form of the performance functions or the type of continuous distribution functions representing input variables, the sampling-based approach can handle highly nonlinear functions with non-normal variables. The main characteristics and effectiveness of the sampling-based sensitivity measures are investigated using both simple and complex examples. Because the number of samples needed does not depend on the number of variables, the methodology appears particularly suitable for problems with large, complex models that have many random variables but relatively few significant ones.
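As a simple stand-in for the paper's CDF- and mean-response-based measures, the sketch below ranks inputs by the absolute sample correlation with the response and screens out those below an approximate 95% null acceptance limit of 1.96/√n. The data and the choice of correlation as the sensitivity measure are illustrative.

```python
import numpy as np

def screen_and_rank(X, y, z=1.96):
    """Rank input columns by |Pearson correlation| with the response;
    screen out columns below the ~95% acceptance limit z/sqrt(n)."""
    n = len(y)
    r = np.array([np.corrcoef(X[:, j], y)[0, 1] for j in range(X.shape[1])])
    limit = z / np.sqrt(n)                      # null hypothesis: no effect
    order = np.argsort(-np.abs(r))              # most influential first
    significant = [int(j) for j in order if abs(r[j]) > limit]
    return significant, r

rng = np.random.default_rng(1)
X = rng.standard_normal((500, 5))
# Response depends strongly on column 2, weakly on column 0.
y = 3.0 * X[:, 2] + 1.0 * X[:, 0] + 0.1 * rng.standard_normal(500)
sig, r = screen_and_rank(X, y)
```

Because the acceptance limit depends only on the sample size, the same screening rule applies no matter how many input variables the model has, which mirrors the scalability argument in the abstract.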

  11. An adaptive ES with a ranking based constraint handling strategy

    Directory of Open Access Journals (Sweden)

    Kusakci Ali Osman

    2014-01-01

    To solve a constrained optimization problem, equality constraints can be used to eliminate a problem variable. If that is not feasible, the relations imposed implicitly by the constraints can still be exploited. Most conventional constraint-handling methods in Evolutionary Algorithms (EAs) do not consider the correlations between problem variables imposed by the constraints. This paper relies on the idea that a proper search operator, which captures these implicit correlations, can improve the performance of evolutionary constrained optimization algorithms. To realize this, an Evolution Strategy (ES) with a simplified Covariance Matrix Adaptation (CMA)-based mutation operator is combined with a ranking-based constraint-handling method. The proposed algorithm is tested on 13 benchmark problems as well as on a real-life design problem, and significantly outperforms conventional ES-based methods.

  12. Optimisation of test and maintenance based on probabilistic methods

    International Nuclear Information System (INIS)

    Cepin, M.

    2001-01-01

    This paper presents a method which, based on the models and results of probabilistic safety assessment, minimises nuclear power plant risk by optimising the arrangement of safety equipment outages. Because test and maintenance activities of the safety equipment are arranged in time, the classical static fault tree models are extended with time requirements so that they can model real plant states. A house-event matrix is used, which enables modelling of the equipment arrangements through discrete points of time. The result of the method is the configuration of equipment outages that yields the minimal risk, where risk is represented by system unavailability. (authors)

  13. Visualizing Uncertainty for Probabilistic Weather Forecasting based on Reforecast Analogs

    Science.gov (United States)

    Pelorosso, Leandro; Diehl, Alexandra; Matković, Krešimir; Delrieux, Claudio; Ruiz, Juan; Gröeller, M. Eduard; Bruckner, Stefan

    2016-04-01

    Numerical weather forecasts are prone to uncertainty arising from inaccuracies in the initial and boundary conditions and from the limited precision of the numerical models. Ensembles of forecasts partially address these problems by considering several runs of the numerical model, each generated with different initial and boundary conditions and different model configurations [GR05]. Ensembles can be expressed as probabilistic forecasts, which have proven very effective in decision-making processes [DE06]. However, an ensemble represents only some of the possible future atmospheric states and usually underestimates the degree of uncertainty in the predictions [KAL03, PH06]. Hamill and Whitaker [HW06] introduced the "Reforecast Analog Regression" (RAR) technique to overcome these limitations of ensemble forecasting; it produces probabilistic predictions based on the analysis of historical forecasts and observations. Visual analytics provides tools for processing, visualizing, and exploring data to gain new insights and discover hidden information patterns in an interactive exchange between the user and the application [KMS08]. In this work, we introduce Albero, a visual analytics solution for probabilistic weather forecasting based on the RAR technique. Albero targets at least two different types of users: "forecasters", meteorologists working in operational weather forecasting, and "researchers", who work on the construction of numerical prediction models. Albero is an efficient tool for analyzing precipitation forecasts, allowing forecasters to make and communicate quick decisions. Our solution facilitates the analysis of a set of probabilistic forecasts, associated statistical data, observations and uncertainty. A dashboard with small multiples of probabilistic forecasts allows forecasters to analyze at a glance the distribution of probabilities as a function of time, space, and magnitude. It provides the user with a more

  14. Adaptive probabilistic collocation based Kalman filter for unsaturated flow problem

    Science.gov (United States)

    Man, J.; Li, W.; Zeng, L.; Wu, L.

    2015-12-01

    The ensemble Kalman filter (EnKF) has gained popularity in hydrological data assimilation problems. As a Monte Carlo based method, it usually requires a relatively large ensemble size to guarantee accuracy. As an alternative, the probabilistic collocation based Kalman filter (PCKF) employs polynomial chaos to approximate the original system, which reduces the sampling error. However, PCKF suffers from the so-called "curse of dimensionality": when the system nonlinearity is strong and the number of parameters is large, PCKF can be even more computationally expensive than EnKF. Motivated by recent developments in uncertainty quantification, we propose a restart adaptive probabilistic collocation based Kalman filter (RAPCKF) for data assimilation in unsaturated flow problems. During the implementation of RAPCKF, the important parameters are identified and active PCE basis functions are adaptively selected. The restart technique is used to alleviate the inconsistency between model parameters and states. The performance of RAPCKF is tested on numerical unsaturated flow cases. It is shown that RAPCKF is more efficient than EnKF with the same computational cost, and that, compared with the traditional PCKF, it is more applicable in strongly nonlinear, high-dimensional problems.

  15. Nanotechnology strength indicators: international rankings based on US patents

    Science.gov (United States)

    Marinova, Dora; McAleer, Michael

    2003-01-01

    Technological strength indicators (TSIs) based on patent statistics for 1975-2000 are used to analyse patenting of nanotechnology in the USA, and to compile international rankings for the top 12 foreign patenting countries (namely Australia, Canada, France, Germany, Great Britain, Italy, Japan, Korea, the Netherlands, Sweden, Switzerland and Taiwan). As the indicators are not directly observable, various proxy variables are used, namely the technological specialization index for national priorities, patent shares for international presence, citation rate for the contribution of patents to knowledge development and rate of assigned patents for potential commercial benefits. The best performing country is France, followed by Japan and Canada. It is shown that expertise and strength in nanotechnology are not evenly distributed among the technologically advanced countries, with the TSIs revealing different emphases in the development of nanotechnology.

  16. A model-based approach to operational event groups ranking

    Energy Technology Data Exchange (ETDEWEB)

    Simic, Zdenko [European Commission Joint Research Centre, Petten (Netherlands). Inst. for Energy and Transport; Maqua, Michael [Gesellschaft fuer Anlagen- und Reaktorsicherheit mbH (GRS), Koeln (Germany); Wattrelos, Didier [Institut de Radioprotection et de Surete Nucleaire (IRSN), Fontenay-aux-Roses (France)

    2014-04-15

    Operational experience (OE) feedback drives improvements in all industrial activities. Identifying the most important and valuable groups of events within the accumulated experience matters, in order to focus detailed investigation on those events. This paper describes a new ranking method and compares it with three others. The methods are described and applied to twenty years of OE events from nuclear power plants in France and Germany. The results show that the different ranking methods only roughly agree on which event groups are the most important. In the new method, the analytic hierarchy process is applied to assure consistent and comprehensive determination of weights for the ranking indexes. The proposed method allows transparent and flexible ranking of event groups and identification of the most important OE for further, more detailed investigation to complete the feedback. (orig.)

  17. On the ranking of chemicals based on their PBT characteristics: comparison of different ranking methodologies using selected POPs as an illustrative example.

    Science.gov (United States)

    Sailaukhanuly, Yerbolat; Zhakupbekova, Arai; Amutova, Farida; Carlsen, Lars

    2013-01-01

    Knowledge of the environmental behavior of chemicals is a fundamental part of the risk assessment process. The present paper discusses various methods of ranking a series of persistent organic pollutants (POPs) according to their persistence, bioaccumulation and toxicity (PBT) characteristics. Traditionally, ranking has been done as an absolute (total) ranking applying various multicriteria data analysis methods such as simple additive ranking (SAR) or rankings based on various utility functions (UFs). An attractive alternative to these methodologies is partial order ranking (POR). The present paper compares the SAR, UF and POR methods. Significant discrepancies between the rankings are noted, and it is concluded that partial order ranking, as a method free of pre-assumptions concerning possible relations between the single parameters, appears to be the most attractive ranking methodology. Beyond the initial ranking, partial order methodology offers a wide variety of analytical tools to elucidate the interplay between the objects to be ranked and the ranking parameters. The present study also includes an analysis of the relative importance of the single P, B and T parameters. Copyright © 2012 Elsevier Ltd. All rights reserved.
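Partial order ranking rests on a dominance relation: one chemical ranks above another only if it is at least as bad on every criterion and strictly worse on at least one; chemicals with trade-offs between P, B and T remain incomparable. A minimal sketch, using hypothetical PBT scores and a longest-chain level score as a stand-in for the full Hasse-diagram analysis:

```python
def dominates(a, b):
    """a >= b on every criterion and > on at least one (higher = worse)."""
    return (all(x >= y for x, y in zip(a, b))
            and any(x > y for x, y in zip(a, b)))

def partial_order_levels(profiles):
    """Level of each object = length of the longest dominance chain
    below it. Incomparable objects may share a level instead of being
    forced apart, unlike SAR or utility-function rankings."""
    names = list(profiles)
    memo = {}
    def height(n):
        if n not in memo:
            memo[n] = 1 + max((height(m) for m in names
                               if dominates(profiles[n], profiles[m])),
                              default=0)
        return memo[n]
    return {n: height(n) for n in names}

# Hypothetical (P, B, T) scores, higher = more hazardous.
pbt = {"A": (3, 3, 3), "B": (2, 3, 1), "C": (1, 1, 2), "D": (1, 1, 1)}
levels = partial_order_levels(pbt)
```

Here B and C are incomparable (B is worse on P and B, C is worse on T), so they share a level; an additive ranking would have to force an order between them.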

  18. Global Infrasound Association Based on Probabilistic Clutter Categorization

    Science.gov (United States)

    Arora, Nimar; Mialle, Pierrick

    2016-04-01

    The IDC advances its methods and continuously improves its automatic system for infrasound technology. The IDC focuses on enhancing the automatic system for the identification of valid signals and on optimizing the network detection threshold by identifying ways to refine the signal characterization methodology and association criteria. An objective of this study is to reduce the number of associated infrasound arrivals that are rejected from the automatic bulletins when generating the reviewed event bulletins. Indeed, a considerable number of signal detections are due to local clutter sources such as microbaroms, waterfalls, dams, gas flares, and surf (ocean breaking waves). These sources are either too diffuse or too local to form events. Worse still, the repetitive nature of this clutter leads to a large number of false event hypotheses due to the random matching of clutter at multiple stations. Previous studies, for example [1], have categorized clutter using long-term trends in detection azimuth, frequency, and amplitude at each station. In this work we continue the same line of reasoning to build a probabilistic model of clutter that is used as part of NETVISA [2], a Bayesian approach to network processing. The resulting model is a fusion of seismic, hydroacoustic and infrasound processing built on a unified probabilistic framework. References: [1] Infrasound categorization: Towards a statistics-based approach. J. Vergoz, P. Gaillard, A. Le Pichon, N. Brachet, and L. Ceranna. ITW 2011. [2] NETVISA: Network Processing Vertically Integrated Seismic Analysis. N. S. Arora, S. Russell, and E. Sudderth. BSSA 2013.

  19. Probabilistic atlas based labeling of the cerebral vessel tree

    Science.gov (United States)

    Van de Giessen, Martijn; Janssen, Jasper P.; Brouwer, Patrick A.; Reiber, Johan H. C.; Lelieveldt, Boudewijn P. F.; Dijkstra, Jouke

    2015-03-01

    Preoperative imaging of the cerebral vessel tree is essential for planning therapy for intracranial stenoses and aneurysms. Usually, a magnetic resonance angiography (MRA) or computed tomography angiography (CTA) is acquired, from which the cerebral vessel tree is segmented. Accurate analysis is aided by labeling of the cerebral vessels, but labeling is non-trivial owing to anatomical topological variability and to branches missing because of acquisition issues. In recent literature, labeling the cerebral vasculature around the Circle of Willis has mainly been approached as a graph-based problem. The most successful method, however, requires the definition of all possible permutations of missing vessels, which limits application to subsets of the tree and ignores spatial information about the vessel locations. This research aims to perform labeling using probabilistic atlases that model spatial vessel and label likelihoods. A cerebral vessel tree is aligned to a probabilistic atlas, and each vessel is then labeled by computing the maximum label likelihood per segment from label-specific atlases. The proposed method was validated on 25 segmented cerebral vessel trees. Labeling accuracies were close to 100% for large vessels, but dropped to 50-60% for small vessels that were present in less than 50% of the set. This work shows that, using solely spatial information about the vessel labels, segments of stable vessels (>50% presence) are reliably classified. This spatial information will form the basis for a future labeling strategy with a very loose topological model.
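The per-segment maximum-likelihood step can be sketched simply: score each vessel segment under each label-specific atlas and keep the argmax. The toy 1-D "atlases", label names and segment data below are purely illustrative, not the paper's atlases.

```python
def label_segments(segments, atlases):
    """Assign each vessel segment the label whose probabilistic atlas
    gives the highest mean likelihood over the segment's positions.

    segments: segment id -> list of (already atlas-aligned) positions.
    atlases:  label -> {position: likelihood of that label there}.
    """
    labels = {}
    for seg_id, voxels in segments.items():
        scores = {lab: sum(atlas[v] for v in voxels) / len(voxels)
                  for lab, atlas in atlases.items()}
        labels[seg_id] = max(scores, key=scores.get)   # argmax over labels
    return labels

# Toy 1-D atlases over positions 0..4 (hypothetical likelihoods).
atlases = {"ICA": {0: 0.9, 1: 0.8, 2: 0.2, 3: 0.1, 4: 0.0},
           "MCA": {0: 0.1, 1: 0.2, 2: 0.8, 3: 0.9, 4: 0.7}}
segments = {"s1": [0, 1], "s2": [3, 4]}
labels = label_segments(segments, atlases)
```

The hard part in practice is the alignment to the atlas, not this scoring step, which is why accuracy drops for small, rarely present vessels.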

  20. Content-based image retrieval with ontological ranking

    Science.gov (United States)

    Tsai, Shen-Fu; Tsai, Min-Hsuan; Huang, Thomas S.

    2010-02-01

    Images are a much more powerful medium of expression than text, as the adage says: "One picture is worth a thousand words." Compared with text consisting of an array of words, an image has more degrees of freedom and therefore a more complicated structure. However, the less constrained structure of images presents researchers in the computer vision community with the tough task of teaching machines to understand and organize images, especially when only a limited number of learning examples and little background knowledge are given. The advance of internet and web technology in the past decade has changed the way humans gain knowledge. People can exchange knowledge with others by discussing and contributing information on the web. As a result, the web pages on the internet have become a living and growing source of information. One is therefore tempted to wonder whether machines can learn from this web knowledge base as well. Indeed, it is possible to make computers learn from the internet and provide humans with more meaningful knowledge. In this work, we explore this novel possibility of image understanding applied to semantic image search. We exploit web resources to obtain links from images to keywords and a semantic ontology constituting humans' general knowledge. The former maps visual content to related text, in contrast to the traditional way of associating images with surrounding text; the latter provides relations between concepts so that machines can understand to what extent and in what sense an image is close to the image search query. With the aid of these two tools, the resulting image search system is content-based and, moreover, organized. The returned images are ranked and organized such that semantically similar images are grouped together and given a rank based on their semantic closeness to the input query. 
The novelty of the system is twofold: first, images are retrieved not only based on text cues but their actual contents as well; second, the grouping

  1. Distant Supervision for Relation Extraction with Ranking-Based Methods

    Directory of Open Access Journals (Sweden)

    Yang Xiang

    2016-05-01

    Full Text Available Relation extraction has benefited from distant supervision in recent years with the development of natural language processing techniques and data explosion. However, distant supervision is still greatly limited by the quality of training data, due to its natural motivation of greatly reducing the heavy cost of data annotation. In this paper, we construct an architecture called MIML-sort (Multi-Instance Multi-Label Learning with Sorting Strategies), which is built on the well-known MIML framework. Based on MIML-sort, we propose three ranking-based methods for sample selection, with which we identify relation extractors from a subset of the training data. Experiments are set up on the KBP (Knowledge Base Population) corpus, one of the benchmark datasets for distant supervision, which is large and noisy. Compared with previous work, the proposed methods produce considerably better results. Furthermore, the three methods together achieve the best F1 on the official testing set, with an optimal enhancement of F1 from 27.3% to 29.98%.
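
    Ranking-based sample selection in the spirit of the methods above can be sketched very simply: score each training instance, rank within its entity-pair bag, and keep only the top-scoring instances to reduce distant-supervision noise. The scores, bags, and cutoff below are illustrative inventions, not the paper's actual selection criteria.

```python
# Hedged sketch of ranking-based instance selection for distant supervision.
# Bags, scores, and the `keep` cutoff are invented for illustration.
def select_top_instances(bags, keep=2):
    """bags: dict bag_id -> list of (instance, score). Keeps the `keep`
    highest-scoring instances per bag."""
    return {
        bag: [inst for inst, _ in
              sorted(items, key=lambda x: x[1], reverse=True)[:keep]]
        for bag, items in bags.items()
    }

bags = {"(Obama, USA)": [("s1", 0.9), ("s2", 0.2), ("s3", 0.7)],
        "(Paris, France)": [("s4", 0.4), ("s5", 0.8)]}
print(select_top_instances(bags))
# {'(Obama, USA)': ['s1', 's3'], '(Paris, France)': ['s5', 's4']}
```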

  2. Manifold Based Low-rank Regularization for Image Restoration and Semi-supervised Learning

    OpenAIRE

    Lai, Rongjie; Li, Jia

    2017-01-01

    Low-rank structures play an important role in recent advances on many problems in image science and data science. As a natural extension of low-rank structures to data with nonlinear structures, the concept of a low-dimensional manifold structure has been considered in many data processing problems. Inspired by this concept, we consider a manifold based low-rank regularization as a linear approximation of manifold dimension. This regularization is less restricted than the global low-rank regu...

  3. A web-based tool for ranking landslide mitigation measures

    Science.gov (United States)

    Lacasse, S.; Vaciago, G.; Choi, Y. J.; Kalsnes, B.

    2012-04-01

    brief description, guidance on design, schematic details, practical examples and references for each mitigation measure. Each of the measures was given a score on its ability and applicability for different types of landslides and boundary conditions, and a decision support matrix was established. The web-based toolbox organizes the information in the compendium and provides an algorithm to rank the measures on the basis of the decision support matrix, and on the basis of the risk level estimated at the site. The toolbox includes a description of the case under study and offers a simplified option for estimating the hazard and risk levels of the slide at hand. The user selects the mitigation measures to be included in the assessment. The toolbox then ranks, with built-in assessment factors and weights and/or with user-defined ranking values and criteria, the mitigation measures included in the analysis. The toolbox includes data management, e.g. saving data half-way in an analysis, returning to an earlier case, looking up prepared examples or looking up information on mitigation measures. The toolbox also generates a report and has user-forum and help features. The presentation will give an overview of the mitigation measures considered and examples of the use of the toolbox, and will take the attendees through the application of the toolbox.
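
    The ranking step described above, combining a decision support matrix of per-criterion applicability scores with built-in or user-defined weights, can be sketched as a weighted sum. The measures, criteria, scores, and weights below are invented placeholders, not the toolbox's actual data.

```python
# Hedged sketch of decision-matrix ranking of mitigation measures.
# Scores (higher is better) and weights are illustrative assumptions.
def rank_measures(scores, weights):
    """scores: dict measure -> dict criterion -> score.
    weights: dict criterion -> weight. Returns measures sorted best-first."""
    totals = {
        m: sum(weights[c] * s for c, s in crit.items())
        for m, crit in scores.items()
    }
    return sorted(totals, key=totals.get, reverse=True)

scores = {
    "drainage":       {"applicability": 0.9, "cost": 0.6, "risk_reduction": 0.7},
    "retaining_wall": {"applicability": 0.5, "cost": 0.3, "risk_reduction": 0.9},
    "revegetation":   {"applicability": 0.8, "cost": 0.9, "risk_reduction": 0.4},
}
weights = {"applicability": 0.5, "cost": 0.2, "risk_reduction": 0.3}
print(rank_measures(scores, weights))
# ['drainage', 'revegetation', 'retaining_wall']
```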

  4. A stable systemic risk ranking in China's banking sector: Based on principal component analysis

    Science.gov (United States)

    Fang, Libing; Xiao, Binqing; Yu, Honghai; You, Qixing

    2018-02-01

    In this paper, we compare five popular systemic risk rankings and apply a principal component analysis (PCA) model to provide a stable systemic risk ranking for the Chinese banking sector. Our empirical results indicate that the five methods suggest vastly different systemic risk rankings for the same bank, while the combined systemic risk measure based on PCA provides a reliable ranking. Furthermore, according to the factor loadings of the first component, the PCA combined ranking is mainly based on fundamentals instead of market price data. We clearly find that price-based rankings are not as practical as fundamentals-based ones. The PCA combined ranking directly shows the systemic risk contribution of each bank for banking supervision purposes and reminds banks to prepare for and cope with financial crises in advance.
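
    The idea of a PCA-combined ranking can be illustrated in a few lines: standardize the individual risk measures and score each bank by its projection onto the first principal component. This is a generic sketch with synthetic numbers, not the paper's dataset or exact procedure.

```python
# Illustrative sketch of combining several risk measures with PCA.
# The data matrix below is synthetic.
import numpy as np

def pca_combined_score(measures):
    """measures: (n_banks, n_measures) array. Returns one score per bank,
    the projection onto the first principal component."""
    X = (measures - measures.mean(axis=0)) / measures.std(axis=0)
    _, _, vt = np.linalg.svd(X, full_matrices=False)
    pc1 = vt[0]
    if pc1.sum() < 0:          # fix sign so higher score = higher risk
        pc1 = -pc1
    return X @ pc1

# Three banks, three (strongly correlated) risk measures:
m = np.array([[9.0, 8.0, 9.5],
              [5.0, 5.5, 5.0],
              [1.0, 2.0, 1.5]])
scores = pca_combined_score(m)
ranking = np.argsort(-scores)   # riskiest first
print(ranking)                  # bank 0 riskiest, bank 2 safest
```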

  5. Hyper-local, directions-based ranking of places

    DEFF Research Database (Denmark)

    Venetis, Petros; Gonzalez, Hector; Jensen, Christian S.

    2011-01-01

    they are numerous and contain precise locations. Specifically, the paper proposes a framework that takes a user location and a collection of near-by places as arguments, producing a ranking of the places. The framework enables a range of aspects of directions queries to be exploited for the ranking of places......, including the frequency with which places have been referred to in directions queries. Next, the paper proposes an algorithm and accompanying data structures capable of ranking places in response to hyper-local web queries. Finally, an empirical study with very large directions query logs offers insight...... into the potential of directions queries for the ranking of places and suggests that the proposed algorithm is suitable for use in real web search engines....

  6. Personalized Profile Based Search Interface With Ranked and Clustered Display

    National Research Council Canada - National Science Library

    Kumar, Sachin; Oztekin, B. U; Ertoz, Levent; Singhal, Saurabh; Han, Euihong; Kumar, Vipin

    2001-01-01

    We have developed an experimental meta-search engine, which takes the snippets from traditional search engines and presents them to the user either in the form of clusters, indices or re-ranked list...

  7. Research on B Cell Algorithm for Learning to Rank Method Based on Parallel Strategy.

    Science.gov (United States)

    Tian, Yuling; Zhang, Hongxian

    2016-01-01

    For the purposes of information retrieval, users must find highly relevant documents from within a system (often quite a large one comprising many individual documents) based on an input query. Ranking the documents according to their relevance within the system to meet user needs is a challenging endeavor and a hot research topic; there already exist several rank-learning methods based on machine learning techniques which can generate ranking functions automatically. This paper proposes a parallel B cell algorithm, RankBCA, for rank learning which utilizes a clonal selection mechanism based on biological immunity. The novel algorithm is compared with traditional rank-learning algorithms through experimentation and shown to outperform the others with respect to accuracy, learning time, and convergence rate; taken together, the experimental results show that the proposed algorithm indeed effectively and rapidly identifies optimal ranking functions.

  8. A robust probabilistic collaborative representation based classification for multimodal biometrics

    Science.gov (United States)

    Zhang, Jing; Liu, Huanxi; Ding, Derui; Xiao, Jianli

    2018-04-01

    Most traditional biometric recognition systems perform recognition with a single biometric indicator. These systems suffer from noisy data, interclass variations, unacceptable error rates, forged identities, and so on. Because of these inherent problems, attempts to enhance the performance of unimodal biometric systems using a single feature are of limited value. Thus, multimodal biometrics is investigated to reduce some of these defects. This paper proposes a new multimodal biometric recognition approach that fuses faces and fingerprints. For more recognizable features, the proposed method extracts block local binary pattern features for all modalities and then combines them into a single framework. For better classification, it employs a robust probabilistic collaborative representation based classifier to recognize individuals. Experimental results indicate that the proposed method improves recognition accuracy compared to unimodal biometrics.

  9. A probabilistic model for component-based shape synthesis

    KAUST Repository

    Kalogerakis, Evangelos

    2012-07-01

    We present an approach to synthesizing shapes from complex domains, by identifying new plausible combinations of components from existing shapes. Our primary contribution is a new generative model of component-based shape structure. The model represents probabilistic relationships between properties of shape components, and relates them to learned underlying causes of structural variability within the domain. These causes are treated as latent variables, leading to a compact representation that can be effectively learned without supervision from a set of compatibly segmented shapes. We evaluate the model on a number of shape datasets with complex structural variability and demonstrate its application to amplification of shape databases and to interactive shape synthesis. © 2012 ACM 0730-0301/2012/08-ART55.

  10. Behavioral Modeling Based on Probabilistic Finite Automata: An Empirical Study.

    Science.gov (United States)

    Tîrnăucă, Cristina; Montaña, José L; Ontañón, Santiago; González, Avelino J; Pardo, Luis M

    2016-06-24

    Imagine an agent that performs tasks according to different strategies. The goal of Behavioral Recognition (BR) is to identify which of the available strategies is the one being used by the agent, by simply observing the agent's actions and the environmental conditions during a certain period of time. The goal of Behavioral Cloning (BC) is more ambitious. In this last case, the learner must be able to build a model of the behavior of the agent. In both settings, the only assumption is that the learner has access to a training set that contains instances of observed behavioral traces for each available strategy. This paper studies a machine learning approach based on Probabilistic Finite Automata (PFAs), capable of achieving both the recognition and cloning tasks. We evaluate the performance of PFAs in the context of a simulated learning environment (in this case, a virtual Roomba vacuum cleaner robot), and compare it with a collection of other machine learning approaches.
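
    The Behavioral Recognition setting above reduces to a likelihood comparison: one PFA per strategy, and a trace is assigned to the automaton under which it is most probable. The tiny hand-made automata below are illustrative assumptions, not learned models from the paper; termination probabilities are ignored for brevity.

```python
# Minimal sketch of PFA-based behavior recognition: assign a trace to the
# strategy whose automaton gives it the highest log-likelihood.
# The "patrol"/"clean" automata are toy assumptions.
import math

class PFA:
    def __init__(self, transitions, start=0):
        # transitions[state][symbol] = (next_state, probability)
        self.transitions, self.start = transitions, start

    def log_likelihood(self, trace):
        state, ll = self.start, 0.0
        for symbol in trace:
            if symbol not in self.transitions[state]:
                return float("-inf")
            state, p = self.transitions[state][symbol]
            ll += math.log(p)
        return ll

def recognize(trace, models):
    return max(models, key=lambda name: models[name].log_likelihood(trace))

# "patrol" mostly repeats 'move'; "clean" tends to alternate 'move'/'suck'.
patrol = PFA({0: {"move": (0, 0.9), "suck": (0, 0.1)}})
clean  = PFA({0: {"move": (1, 0.8), "suck": (0, 0.2)},
              1: {"suck": (0, 0.7), "move": (1, 0.3)}})
print(recognize(["move", "move", "move", "move"],
                {"patrol": patrol, "clean": clean}))  # patrol
```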

  11. A rank-based Prediction Algorithm of Learning User's Intention

    Science.gov (United States)

    Shen, Jie; Gao, Ying; Chen, Cang; Gong, HaiPing

    Internet search has become an important part of people's daily lives. People can find many types of information to meet different needs through search engines on the Internet. There are two issues with current search engines: first, users must predetermine the types of information they want and then switch to the appropriate search engine interfaces. Second, most search engines support multiple kinds of search functions, but each function has its own separate search interface, so users who need different types of information must switch between interfaces. In practice, most queries correspond to several types of information results, which could be retrieved from various search engines; for example, the query "Palace" matches websites introducing the National Palace Museum, blogs, Wikipedia entries, pictures and video information. This paper presents a new aggregation algorithm for all kinds of search results. It filters and sorts the search results by learning from three sources, the query words, the search results and the search history logs, in order to detect the user's intention. Experiments demonstrate that this rank-based method for multiple types of search results is effective. It meets users' search needs well, enhances user satisfaction, provides an effective and rational model for optimizing search engines and improves the user's search experience.

  12. Reliability-based decision making for selection of ready-mix concrete supply using stochastic superiority and inferiority ranking method

    International Nuclear Information System (INIS)

    Chou, Jui-Sheng; Ongkowijoyo, Citra Satria

    2015-01-01

    Corporate competitiveness is heavily influenced by the information acquired, processed, utilized and transferred by professional staff involved in the supply chain. This paper develops a decision aid for selecting on-site ready-mix concrete (RMC) unloading type in decision making situations involving multiple stakeholders and evaluation criteria. The uncertainty of criteria weights set by expert judgment can be transformed in random ways based on the probabilistic virtual-scale method within a prioritization matrix. The ranking is performed by grey relational grade systems considering stochastic criteria weight based on individual preference. Application of the decision aiding model in actual RMC case confirms that the method provides a robust and effective tool for facilitating decision making under uncertainty. - Highlights: • This study models decision aiding method to assess ready-mix concrete unloading type. • Applying Monte Carlo simulation to virtual-scale method achieves a reliable process. • Individual preference ranking method enhances the quality of global decision making. • Robust stochastic superiority and inferiority ranking obtains reasonable results

  13. A novel three-stage distance-based consensus ranking method

    Science.gov (United States)

    Aghayi, Nazila; Tavana, Madjid

    2018-05-01

    In this study, we propose a three-stage weighted sum method for identifying the group ranks of alternatives. In the first stage, a rank matrix, similar to the cross-efficiency matrix, is obtained by computing the individual rank position of each alternative based on importance weights. In the second stage, a secondary goal is defined to limit the vector of weights since the vector of weights obtained in the first stage is not unique. Finally, in the third stage, the group rank position of alternatives is obtained based on a distance of individual rank positions. The third stage determines a consensus solution for the group so that the ranks obtained have a minimum distance from the ranks acquired by each alternative in the previous stage. A numerical example is presented to demonstrate the applicability and exhibit the efficacy of the proposed method and algorithms.
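
    The third stage described above, finding a group ranking with minimum distance to the individual rank positions, can be sketched by brute force for small n. The rank vectors and the absolute-difference distance below are illustrative assumptions standing in for the paper's formulation.

```python
# Sketch of distance-based consensus ranking: choose the permutation of rank
# positions minimizing total distance to the individual rankings.
# Brute force is fine for tiny n; the input rankings are invented.
from itertools import permutations

def consensus_rank(individual_ranks):
    """individual_ranks: list of rank vectors, where vector[i] is the rank
    position (1 = best) of alternative i under one weight vector/DM."""
    n = len(individual_ranks[0])
    best, best_cost = None, float("inf")
    for perm in permutations(range(1, n + 1)):
        cost = sum(abs(perm[i] - r[i])
                   for r in individual_ranks for i in range(n))
        if cost < best_cost:
            best, best_cost = perm, cost
    return best   # best[i] is the consensus rank of alternative i

ranks = [
    (1, 2, 3),   # ranking under weight vector 1
    (1, 3, 2),   # weight vector 2
    (2, 1, 3),   # weight vector 3
]
print(consensus_rank(ranks))   # (1, 2, 3)
```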

  14. Rank-based model selection for multiple ions quantum tomography

    International Nuclear Information System (INIS)

    Guţă, Mădălin; Kypraios, Theodore; Dryden, Ian

    2012-01-01

    The statistical analysis of measurement data has become a key component of many quantum engineering experiments. As standard full state tomography becomes unfeasible for large dimensional quantum systems, one needs to exploit prior information and the 'sparsity' properties of the experimental state in order to reduce the dimensionality of the estimation problem. In this paper we propose model selection as a general principle for finding the simplest, or most parsimonious, explanation of the data, by fitting different models and choosing the estimator with the best trade-off between likelihood fit and model complexity. We apply two well-established model selection methods, the Akaike information criterion (AIC) and the Bayesian information criterion (BIC), to models consisting of states of fixed rank and to datasets such as those currently produced in multiple-ion experiments. We test the performance of AIC and BIC on randomly chosen low-rank states of four ions, and study the dependence of the selected rank on the number of measurement repetitions for one-ion states. We then apply the methods to real data from a four-ion experiment aimed at creating a Smolin state of rank 4. By applying the two methods together with the Pearson χ2 test we conclude that the data can be suitably described by a model whose rank is between 7 and 9. Additionally we find that the mean square error of the maximum likelihood estimator for pure states is close to that of the optimal estimator over all possible measurements. (paper)
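
    The selection rule used above is generic: fit models of increasing complexity and keep the one minimizing AIC or BIC. The sketch below uses invented log-likelihoods standing in for fits of fixed-rank state models; only the criteria themselves are standard.

```python
# Generic AIC/BIC model selection sketch. The candidate log-likelihoods and
# parameter counts are invented numbers for illustration.
import math

def aic(log_lik, k):
    return 2 * k - 2 * log_lik

def bic(log_lik, k, n):
    return k * math.log(n) - 2 * log_lik

# Model A: 2 parameters, slightly worse fit; model B: 10 parameters,
# marginally better fit, on n = 50 observations.
n = 50
candidates = {"A": (-120.0, 2), "B": (-118.5, 10)}
best_aic = min(candidates, key=lambda m: aic(*candidates[m]))
best_bic = min(candidates, key=lambda m: bic(*candidates[m], n))
print(best_aic, best_bic)   # both prefer the parsimonious model A
```

    The point of the trade-off is visible here: model B's small likelihood gain does not pay for its extra eight parameters under either criterion.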

  15. The analysis of probability task completion; Taxonomy of probabilistic thinking-based across gender in elementary school students

    Science.gov (United States)

    Sari, Dwi Ivayana; Budayasa, I. Ketut; Juniati, Dwi

    2017-08-01

    Formulation of mathematical learning goals is now oriented not only toward cognitive products but also toward cognitive processes, one of which is probabilistic thinking. Probabilistic thinking is needed by students to make decisions. Elementary school students are required to develop probabilistic thinking as a foundation for learning probability at higher levels. A framework of students' probabilistic thinking had been developed using the SOLO taxonomy, consisting of prestructural, unistructural, multistructural and relational probabilistic thinking. This study aimed to analyze probability task completion based on this taxonomy of probabilistic thinking. The subjects were two fifth-grade students, a boy and a girl, selected on the basis of a test of mathematical ability for their high ability in mathematics. The subjects were given probability tasks covering sample space, the probability of an event and probability comparison. The data analysis consisted of categorization, reduction, interpretation and conclusion. The credibility of the data was established by time triangulation. The results showed that the boy's probabilistic thinking in completing the probability tasks was at the multistructural level, while the girl's was at the unistructural level; that is, the boy's level of probabilistic thinking was higher than the girl's. These results could help curriculum developers formulate probability learning goals for elementary school students; indeed, teachers could teach probability with gender differences in mind.

  16. Aggregated wind power generation probabilistic forecasting based on particle filter

    International Nuclear Information System (INIS)

    Li, Pai; Guan, Xiaohong; Wu, Jiang

    2015-01-01

    Highlights: • A new method for probabilistic forecasting of aggregated wind power generation. • A dynamic system is established based on a numerical weather prediction model. • The new method handles the non-Gaussian and time-varying wind power uncertainties. • Particle filter is applied to forecast predictive densities of wind generation. - Abstract: The probability distribution of aggregated wind power generation in a region is an important issue for power system daily operation. This paper presents a novel method to forecast the predictive densities of the aggregated wind power generation from several geographically distributed wind farms, considering the non-Gaussian and non-stationary characteristics of wind power uncertainties. Based on a mesoscale numerical weather prediction model, a dynamic system is established to formulate the relationship between the atmospheric and near-surface wind fields of geographically distributed wind farms. A recursively backtracking framework based on the particle filter is applied to estimate the atmospheric state from the near-surface wind power generation measurements, and to forecast possible samples of the aggregated wind power generation. The predictive densities of the aggregated wind power generation are then estimated from these predicted samples by a kernel density estimator. In case studies, the new method is tested on a system of 9 wind farms in the Midwestern United States. The testing results show that the new method provides competitive interval forecasts for the aggregated wind power generation compared with conventional statistics-based models, which validates the effectiveness of the new method
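
    The filtering step described above can be sketched with a toy bootstrap particle filter on a scalar state: propagate particles through assumed dynamics, weight them by the observation likelihood, and resample. The dynamics, noise levels, and observations below are invented stand-ins for the paper's mesoscale NWP state model.

```python
# Schematic bootstrap particle filter on a toy scalar "wind" state.
# Dynamics coefficients, noise levels, and observations are assumptions.
import numpy as np

rng = np.random.default_rng(1)

def particle_filter(observations, n_particles=1000):
    particles = rng.normal(0.0, 1.0, n_particles)        # prior samples
    for y in observations:
        particles = 0.95 * particles + rng.normal(0.0, 0.3, n_particles)  # propagate
        w = np.exp(-0.5 * ((y - particles) / 0.5) ** 2)                   # likelihood
        w /= w.sum()
        idx = rng.choice(n_particles, n_particles, p=w)                   # resample
        particles = particles[idx]
    return particles   # samples approximating p(state | observations)

obs = [2.0, 2.1, 1.9, 2.05]
samples = particle_filter(obs)
print(abs(samples.mean() - 2.0) < 0.3)   # posterior concentrates near the data
```

    In the paper's setting, the returned samples would feed a kernel density estimator to produce the predictive density of aggregated generation.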

  17. Probabilistic Networks

    DEFF Research Database (Denmark)

    Jensen, Finn Verner; Lauritzen, Steffen Lilholt

    2001-01-01

    This article describes the basic ideas and algorithms behind specification and inference in probabilistic networks based on directed acyclic graphs, undirected graphs, and chain graphs.

  18. A Hybrid Distance-Based Ideal-Seeking Consensus Ranking Model

    Directory of Open Access Journals (Sweden)

    Madjid Tavana

    2007-01-01

    Full Text Available Ordinal consensus ranking problems have received much attention in the management science literature. A problem arises in situations where a group of k decision makers (DMs) is asked to rank order n alternatives. The question is how to combine the DM rankings into one consensus ranking. Several different approaches have been suggested to aggregate DM responses into a compromise or consensus ranking; however, the similarity of consensus rankings generated by the different algorithms is largely unknown. In this paper, we propose a new hybrid distance-based ideal-seeking consensus ranking model (DCM). The proposed hybrid model combines parts of the two commonly used consensus ranking techniques of Beck and Lin (1983) and Cook and Kress (1985) into an intuitive and computationally simple model. We illustrate our method and then run a Monte Carlo simulation across a range of k and n to compare the similarity of the consensus rankings generated by our method with the best-known method of Borda and Kendall (Kendall 1962) and the two methods proposed by Beck and Lin (1983) and Cook and Kress (1985). DCM and Beck and Lin's method yielded the most similar consensus rankings, whereas the Cook-Kress method and the Borda-Kendall method yielded the least similar consensus rankings.

  19. Probabilistic inference with noisy-threshold models based on a CP tensor decomposition

    Czech Academy of Sciences Publication Activity Database

    Vomlel, Jiří; Tichavský, Petr

    2014-01-01

    Roč. 55, č. 4 (2014), s. 1072-1092 ISSN 0888-613X R&D Projects: GA ČR GA13-20012S; GA ČR GA102/09/1278 Institutional support: RVO:67985556 Keywords : Bayesian networks * Probabilistic inference * Candecomp-Parafac tensor decomposition * Symmetric tensor rank Subject RIV: JD - Computer Applications, Robotics Impact factor: 2.451, year: 2014 http://library.utia.cas.cz/separaty/2014/MTR/vomlel-0427059.pdf

  20. A probabilistic model for component-based shape synthesis

    KAUST Repository

    Kalogerakis, Evangelos; Chaudhuri, Siddhartha; Koller, Daphne; Koltun, Vladlen

    2012-01-01

    represents probabilistic relationships between properties of shape components, and relates them to learned underlying causes of structural variability within the domain. These causes are treated as latent variables, leading to a compact representation

  1. Impairment of probabilistic reward-based learning in schizophrenia.

    Science.gov (United States)

    Weiler, Julia A; Bellebaum, Christian; Brüne, Martin; Juckel, Georg; Daum, Irene

    2009-09-01

    Recent models assume that some symptoms of schizophrenia originate from defective reward processing mechanisms. Understanding the precise nature of reward-based learning impairments might thus make an important contribution to the understanding of schizophrenia and the development of treatment strategies. The present study investigated several features of probabilistic reward-based stimulus association learning, namely the acquisition of initial contingencies, reversal learning, generalization abilities, and the effects of reward magnitude. Compared to healthy controls, individuals with schizophrenia exhibited attenuated overall performance during acquisition, whereas learning rates across blocks were similar to the rates of controls. On the group level, persons with schizophrenia were, however, unable to learn the reversal of the initial reward contingencies. Exploratory analysis of only the subgroup of individuals with schizophrenia who showed significant learning during acquisition yielded deficits in reversal learning with low reward magnitudes only. There was further evidence of a mild generalization impairment of the persons with schizophrenia in an acquired equivalence task. In summary, although there was evidence of intact basic processing of reward magnitudes, individuals with schizophrenia were impaired at using this feedback for the adaptive guidance of behavior.
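
    The task family studied above can be illustrated with a simple delta-rule learner on a two-option probabilistic reward task with a reversal phase. The learning rate, exploration rate, and reward probabilities are invented for illustration; this is a generic simulation, not the study's paradigm parameters.

```python
# Illustrative delta-rule learner on a probabilistic two-option task with
# acquisition and reversal. All task parameters are assumptions.
import random

random.seed(0)

def run(trials, p_reward_0, v=None, alpha=0.2, eps=0.1):
    """Option 0 pays with probability p_reward_0, option 1 with
    1 - p_reward_0. Epsilon-greedy choice, delta-rule value update."""
    v = v or [0.5, 0.5]
    for _ in range(trials):
        if random.random() < eps:
            choice = random.randrange(2)            # occasional exploration
        else:
            choice = 0 if v[0] >= v[1] else 1
        p = p_reward_0 if choice == 0 else 1.0 - p_reward_0
        reward = 1.0 if random.random() < p else 0.0
        v[choice] += alpha * (reward - v[choice])   # delta rule
    return v

v = run(500, p_reward_0=0.8)        # acquisition: option 0 is better
print(v[0] > v[1])                  # contingencies acquired
v = run(500, p_reward_0=0.2, v=v)   # reversal: option 1 becomes better
print(v[1] > v[0])                  # values adapt to the reversal
```

    Deficits of the kind reported above would show up here as a failure of the second comparison to flip after the contingencies reverse.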

  2. Probabilistic structural damage identification based on vibration data

    International Nuclear Information System (INIS)

    Hao, H.; Xia, Y.

    2001-01-01

    Vibration-based methods for detecting structural damage have been rapidly developed and applied in the civil, mechanical and aerospace engineering communities over the last two decades. But uncertainties existing in the structural model and measured vibration data might lead to unreliable results. This paper presents some recent research results that tackle the above-mentioned uncertainty problems. By assuming each of the FE model parameters and measured vibration data to be a normally distributed random variable, a probabilistic damage detection procedure is developed based on the perturbation method and validated by the Monte Carlo simulation technique. With this technique, the damage probability of each structural element can be determined. The method developed has been verified by applying it to identify the damage of laboratory-tested structures. It was shown that, compared to deterministic damage identification methods, the present method not only reduces the possibility of false identification, but also gives the identification results in terms of probability, which is deemed more realistic and practical for detecting possible damage in a structure. It has also been found that the modal data included in the damage identification analysis have a great influence on the identification results. With a sensitivity study, an optimal measurement set for damage detection is determined. This set includes the optimal measurement locations and the most appropriate modes that should be used in the damage identification analysis. Numerical results indicate that better results are achieved if the optimal set determined in a pre-analysis is used in the damage detection. (author)
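
    The Monte Carlo validation idea above can be sketched as follows: treating an element's identified stiffness as a Gaussian random variable, its damage probability is the fraction of samples falling below the undamaged value. The stiffness means, standard deviation, and thresholds are invented numbers.

```python
# Hedged Monte Carlo sketch of element-level damage probability.
# Means, standard deviations, and sample count are illustrative assumptions.
import random

random.seed(2)

def damage_probability(stiffness_mean, stiffness_sd, undamaged=1.0,
                       n_samples=20000):
    hits = sum(1 for _ in range(n_samples)
               if random.gauss(stiffness_mean, stiffness_sd) < undamaged)
    return hits / n_samples

p_damaged = damage_probability(0.8, 0.1)   # identified 20% stiffness loss
p_intact  = damage_probability(1.0, 0.1)   # no identified loss
print(p_damaged > 0.95, abs(p_intact - 0.5) < 0.02)
```

    Reporting the result as a probability, rather than a binary damaged/undamaged call, is exactly the advantage the abstract claims over deterministic identification.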

  3. Probabilistic Decision Based Block Partitioning for Future Video Coding

    KAUST Repository

    Wang, Zhao

    2017-11-29

    In the latest Joint Video Exploration Team development, the quadtree plus binary tree (QTBT) block partitioning structure has been proposed for future video coding. Compared to the traditional quadtree structure of the High Efficiency Video Coding (HEVC) standard, QTBT provides more flexible patterns for splitting the blocks, which results in dramatically increased combinations of block partitions and high computational complexity. In view of this, a confidence interval based early termination (CIET) scheme is proposed for QTBT to identify the unnecessary partition modes in the sense of rate-distortion (RD) optimization. In particular, an RD model is established to predict the RD cost of each partition pattern without the full encoding process. Subsequently, the mode decision problem is cast into a probabilistic framework to select the final partition based on the confidence interval decision strategy. Experimental results show that the proposed CIET algorithm can speed up the QTBT block partitioning structure, reducing encoding time by 54.7% with only a 1.12% increase in bit rate. Moreover, the proposed scheme performs consistently well for high resolution sequences, for which video coding efficiency is crucial in real applications.
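
    The early-termination idea can be sketched schematically: predict each partition mode's RD cost with a cheap model, then fully evaluate only the modes whose prediction falls within an interval around the best one. The costs, mode names, and relative margin below are invented; the paper's actual confidence interval is derived from its RD model, not a fixed percentage.

```python
# Schematic early-termination rule over predicted RD costs.
# Mode names, costs, and the 15% margin are illustrative assumptions.
def modes_to_evaluate(predicted_costs, rel_margin=0.15):
    """predicted_costs: dict mode -> predicted RD cost. Returns the modes
    whose prediction lies within (1 + rel_margin) of the best prediction."""
    best = min(predicted_costs.values())
    bound = best * (1.0 + rel_margin)
    return sorted(m for m, c in predicted_costs.items() if c <= bound)

pred = {"no_split": 100.0, "quad": 108.0,
        "bt_horizontal": 130.0, "bt_vertical": 125.0}
print(modes_to_evaluate(pred))   # ['no_split', 'quad']
```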

  4. Entropy-based Probabilistic Fatigue Damage Prognosis and Algorithmic Performance Comparison

    Data.gov (United States)

    National Aeronautics and Space Administration — In this paper, a maximum entropy-based general framework for probabilistic fatigue damage prognosis is investigated. The proposed methodology is based on an...

  5. Entropy-based probabilistic fatigue damage prognosis and algorithmic performance comparison

    Data.gov (United States)

    National Aeronautics and Space Administration — In this paper, a maximum entropy-based general framework for probabilistic fatigue damage prognosis is investigated. The proposed methodology is based on an...

  6. Bibliometric Rankings of Journals Based on the Thomson Reuters Citations Database

    NARCIS (Netherlands)

    C-L. Chang (Chia-Lin); M.J. McAleer (Michael)

    2015-01-01

    markdownabstract__Abstract__ Virtually all rankings of journals are based on citations, including self citations by journals and individual academics. The gold standard for bibliometric rankings based on citations data is the widely-used Thomson Reuters Web of Science (2014) citations database,

  7. Bibliometric Rankings of Journals based on the Thomson Reuters Citations Database

    NARCIS (Netherlands)

    C-L. Chang (Chia-Lin); M.J. McAleer (Michael)

    2015-01-01

    markdownabstract__Abstract__ Virtually all rankings of journals are based on citations, including self citations by journals and individual academics. The gold standard for bibliometric rankings based on citations data is the widely-used Thomson Reuters Web of Science (2014) citations database,

  8. Semiparametric Gaussian copula models : Geometry and efficient rank-based estimation

    NARCIS (Netherlands)

    Segers, J.; van den Akker, R.; Werker, B.J.M.

    2014-01-01

    We propose, for multivariate Gaussian copula models with unknown margins and structured correlation matrices, a rank-based, semiparametrically efficient estimator for the Euclidean copula parameter. This estimator is defined as a one-step update of a rank-based pilot estimator in the direction of

  9. Non-probabilistic defect assessment for structures with cracks based on interval model

    International Nuclear Information System (INIS)

    Dai, Qiao; Zhou, Changyu; Peng, Jian; Chen, Xiangwei; He, Xiaohua

    2013-01-01

    Highlights: • Non-probabilistic approach is introduced to defect assessment. • Definition and establishment of IFAC are put forward. • Determination of assessment rectangle is proposed. • Solution of non-probabilistic reliability index is presented. -- Abstract: Traditional defect assessment methods conservatively treat uncertainty of parameters as safety factors, while the probabilistic method is based on the clear understanding of detailed statistical information of parameters. In this paper, the non-probabilistic approach is introduced to the failure assessment diagram (FAD) to propose a non-probabilistic defect assessment method for structures with cracks. This novel defect assessment method contains three critical processes: establishment of the interval failure assessment curve (IFAC), determination of the assessment rectangle, and solution of the non-probabilistic reliability degree. Based on the interval theory, uncertain parameters such as crack sizes, material properties and loads are considered as interval variables. As a result, the failure assessment curve (FAC) will vary in a certain range, which is defined as IFAC. And the assessment point will vary within a rectangle zone which is defined as an assessment rectangle. Based on the interval model, the establishment of IFAC and the determination of the assessment rectangle are presented. Then according to the interval possibility degree method, the non-probabilistic reliability degree of IFAC can be determined. Meanwhile, in order to clearly introduce the non-probabilistic defect assessment method, a numerical example for the assessment of a pipe with crack is given. In addition, the assessment result of the proposed method is compared with that of the traditional probabilistic method, which confirms that this non-probabilistic defect assessment can reasonably resolve the practical problem with interval variables

  10. Non-probabilistic defect assessment for structures with cracks based on interval model

    Energy Technology Data Exchange (ETDEWEB)

    Dai, Qiao; Zhou, Changyu, E-mail: changyu_zhou@163.com; Peng, Jian; Chen, Xiangwei; He, Xiaohua

    2013-09-15

    Highlights: • Non-probabilistic approach is introduced to defect assessment. • Definition and establishment of IFAC are put forward. • Determination of assessment rectangle is proposed. • Solution of non-probabilistic reliability index is presented. -- Abstract: Traditional defect assessment methods conservatively treat uncertainty of parameters as safety factors, while the probabilistic method is based on the clear understanding of detailed statistical information of parameters. In this paper, the non-probabilistic approach is introduced to the failure assessment diagram (FAD) to propose a non-probabilistic defect assessment method for structures with cracks. This novel defect assessment method contains three critical processes: establishment of the interval failure assessment curve (IFAC), determination of the assessment rectangle, and solution of the non-probabilistic reliability degree. Based on the interval theory, uncertain parameters such as crack sizes, material properties and loads are considered as interval variables. As a result, the failure assessment curve (FAC) will vary in a certain range, which is defined as IFAC. And the assessment point will vary within a rectangle zone which is defined as an assessment rectangle. Based on the interval model, the establishment of IFAC and the determination of the assessment rectangle are presented. Then according to the interval possibility degree method, the non-probabilistic reliability degree of IFAC can be determined. Meanwhile, in order to clearly introduce the non-probabilistic defect assessment method, a numerical example for the assessment of a pipe with crack is given. In addition, the assessment result of the proposed method is compared with that of the traditional probabilistic method, which confirms that this non-probabilistic defect assessment can reasonably resolve the practical problem with interval variables.

  11. A perspective of PC-based probabilistic risk assessment

    International Nuclear Information System (INIS)

    Sattison, M.B.; Rasmuson, D.M.; Robinson, R.C.; Russell, K.D.; Van Siclen, V.S.

    1987-01-01

    Probabilistic risk assessment (PRA) information has been under-utilized in the past due to the large effort required to input PRA data and the large expense of the computers needed to run PRA codes. The microcomputer-based Integrated Reliability and Risk Analysis System (IRRAS) and the System Analysis and Risk Assessment (SARA) System, under development at the Idaho National Engineering Laboratory, have greatly enhanced the ability of managers to use PRA techniques in their decision-making. IRRAS is a tool that allows an analyst to create, modify, update, and reanalyze a plant PRA to keep the risk assessment current with the plant's configuration and operation. The SARA system is used to perform sensitivity studies on the results of a PRA; this type of analysis can be used to evaluate proposed changes to a plant or its operation. The success of these two software projects demonstrates that risk information can be made readily available to those who need it. This is the first step in the development of a true risk management capability

  12. Global optimization of maintenance and surveillance testing based on reliability and probabilistic safety assessment. Research project

    International Nuclear Information System (INIS)

    Martorell, S.; Serradell, V.; Munoz, A.; Sanchez, A.

    1997-01-01

    The background, objective, scope, detailed working plan, follow-up, and final product of the project ''Global optimization of maintenance and surveillance testing based on reliability and probabilistic safety assessment'' are described

  13. Dominance-based ranking functions for interval-valued intuitionistic fuzzy sets.

    Science.gov (United States)

    Chen, Liang-Hsuan; Tu, Chien-Cheng

    2014-08-01

    The ranking of interval-valued intuitionistic fuzzy sets (IvIFSs) is difficult since they include the interval values of membership and nonmembership. This paper proposes ranking functions for IvIFSs based on the dominance concept. The proposed ranking functions consider the degree to which an IvIFS dominates and is not dominated by other IvIFSs. Based on the bivariate framework and the dominance concept, the functions incorporate not only the boundary values of membership and nonmembership, but also the relative relations among IvIFSs in comparisons. The dominance-based ranking functions include bipolar evaluations with a parameter that allows the decision-maker to reflect his actual attitude in allocating the various kinds of dominance. The relationship for two IvIFSs that satisfy the dual couple is defined based on four proposed ranking functions. Importantly, the proposed ranking functions can achieve a full ranking for all IvIFSs. Two examples are used to demonstrate the applicability and distinctiveness of the proposed ranking functions.

  14. GeoSearcher: Location-Based Ranking of Search Engine Results.

    Science.gov (United States)

    Watters, Carolyn; Amoudi, Ghada

    2003-01-01

    Discussion of Web queries with geospatial dimensions focuses on an algorithm that assigns location coordinates dynamically to Web sites based on the URL. Describes a prototype search system that uses the algorithm to re-rank search engine results for queries with a geospatial dimension, thus providing an alternative ranking order for search engine…

  15. Integration of Evidence Base into a Probabilistic Risk Assessment

    Science.gov (United States)

    Saile, Lyn; Lopez, Vilma; Bickham, Grandin; Kerstman, Eric; FreiredeCarvalho, Mary; Byrne, Vicky; Butler, Douglas; Myers, Jerry; Walton, Marlei

    2011-01-01

    INTRODUCTION: A probabilistic decision support model such as the Integrated Medical Model (IMM) utilizes an immense amount of input data, which necessitates a systematic, integrated approach to data collection and management. As a result of this approach, IMM is able to forecast medical events, resource utilization, and crew health during space flight. METHODS: Inflight data is the most desirable input for the Integrated Medical Model. Non-attributable inflight data is collected from the Lifetime Surveillance of Astronaut Health study as well as from the engineers, flight surgeons, and astronauts themselves. When inflight data is unavailable, cohort studies, other models, and Bayesian analyses are used, in addition to occasional input from subject matter experts. To determine the quality of evidence for a medical condition, the data source is categorized and assigned a level of evidence from 1-5, with one being the highest level. The collected data reside and are managed in a relational SQL database with a web-based interface for data entry and review. The database is also capable of interfacing with outside applications, which expands the capabilities of the database itself. Via the public interface, customers can access a formatted Clinical Findings Form (CLiFF) that outlines the model input and evidence base for each medical condition. Changes to the database are tracked using a documented Configuration Management process. DISCUSSION: This strategic approach provides a comprehensive data management plan for IMM. The IMM Database's structure and architecture have proven to support additional usages, as seen in the analysis of resource utilization across medical conditions. In addition, the IMM Database's web-based interface provides a user-friendly format for customers to browse and download the clinical information for medical conditions. It is this type of functionality that will provide Exploratory Medicine Capabilities the evidence base for their medical condition list

  16. Identification of probabilistic approaches and map-based navigation ...

    Indian Academy of Sciences (India)

    B Madhevan

    2018-02-07

    The approach consists of three processes: map learning (ML), localization, and PP [73–76].

  17. Probabilistic Anomaly Detection Based On System Calls Analysis

    Directory of Open Access Journals (Sweden)

    Przemysław Maciołek

    2007-01-01

    Full Text Available We present an application of a probabilistic approach to anomaly detection (PAD). By analyzing selected system calls (and their arguments), the chosen applications are monitored in the Linux environment. This allows us to estimate the "(ab)normality" of their behavior by comparison to previously collected profiles. We also include results of threat detection in a typical computer environment.

  18. Population based ranking of frameless CT-MRI registration methods

    Energy Technology Data Exchange (ETDEWEB)

    Opposits, Gabor; Kis, Sandor A.; Tron, Lajos; Emri, Miklos [Debrecen Univ. (Hungary). Dept. of Nuclear Medicine; Berenyi, Ervin [Debrecen Univ. (Hungary). Dept. of Biomedical Laboratory and Imaging Science; Takacs, Endre [Rotating Gamma Ltd., Debrecen (Hungary); Dobai, Jozsef G.; Bognar, Laszlo [Debrecen Univ., Medical Center (Hungary). Dept. of Neurosurgery; Szuecs, Bernadett [ScanoMed Ltd., Debrecen (Hungary)

    2015-07-01

    Clinical practice often requires simultaneous information obtained by two different imaging modalities. Registration algorithms are commonly used for this purpose. Automated procedures are very helpful in cases when the same kind of registration has to be performed on images of a high number of subjects. Radiotherapists would prefer to use the best automated method to assist therapy planning; however, there are no accepted procedures for ranking the different registration algorithms. We were interested in developing a method to measure the population-level performance of CT-MRI registration algorithms by a parameter taking values in the [0,1] interval. Pairs of CT and MRI images were collected from 1051 subjects. Results of an automated registration were corrected manually until a radiologist and a neurosurgeon expert both accepted the result as good. In this way, 1051 registered MRI images were produced by the same pair of experts to be used as gold standards for the evaluation of the performance of other registration algorithms. Pearson correlation coefficient, mutual information, normalized mutual information, Kullback-Leibler divergence, L{sub 1} norm and square L{sub 2} norm (dis)similarity measures were tested for sensitivity to indicate the extent of (dis)similarity of a pair of individually mismatched images. The square Hellinger distance proved suitable to grade the performance of registration algorithms at the population level, providing developers with a valuable tool to rank algorithms. The developed procedure provides an objective method to find the registration algorithm performing best at the population level out of newly constructed or available preselected ones.
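
As a concrete illustration of the measure this abstract singles out, the squared Hellinger distance between two normalized histograms can be sketched in a few lines. The histogram values below are hypothetical; the paper applies the measure to registered image pairs:

```python
import math

def squared_hellinger(p, q):
    """Squared Hellinger distance between two discrete distributions.

    H^2(p, q) = 0.5 * sum_i (sqrt(p_i) - sqrt(q_i))^2, which is 0 for
    identical distributions and 1 for disjoint supports.
    """
    return 0.5 * sum((math.sqrt(a) - math.sqrt(b)) ** 2 for a, b in zip(p, q))

# Hypothetical normalized intensity histograms of registered image pairs.
hist_ref = [0.1, 0.4, 0.3, 0.2]
hist_good = [0.1, 0.4, 0.3, 0.2]    # well-registered: identical histogram
hist_bad = [0.4, 0.1, 0.2, 0.3]     # mismatched registration
d_good = squared_hellinger(hist_ref, hist_good)   # 0.0
d_bad = squared_hellinger(hist_ref, hist_bad)     # strictly between 0 and 1
```

Because the value is bounded in [0,1], it can directly serve as the population-level performance parameter the abstract describes.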

  19. Event-Based Media Enrichment Using an Adaptive Probabilistic Hypergraph Model.

    Science.gov (United States)

    Liu, Xueliang; Wang, Meng; Yin, Bao-Cai; Huet, Benoit; Li, Xuelong

    2015-11-01

    Nowadays, with the continual development of digital capture technologies and social media services, a vast number of media documents are captured and shared online to help attendees record their experience during events. In this paper, we present a method combining semantic inference and multimodal analysis for automatically finding media content to illustrate events using an adaptive probabilistic hypergraph model. In this model, media items are taken as vertices in the weighted hypergraph and the task of enriching media to illustrate events is formulated as a ranking problem. In our method, each hyperedge is constructed using the K-nearest neighbors of a given media document. We also employ a probabilistic representation, which assigns each vertex to a hyperedge in a probabilistic way, to further exploit the correlation among media data. Furthermore, we optimize the hypergraph weights in a regularization framework, which is solved as a second-order cone problem. The approach is initiated by seed media and then used to rank the media documents using a transductive inference process. The results obtained from validating the approach on an event dataset collected from EventMedia demonstrate the effectiveness of the proposed approach.

  20. RANWAR: rank-based weighted association rule mining from gene expression and methylation data.

    Science.gov (United States)

    Mallik, Saurav; Mukhopadhyay, Anirban; Maulik, Ujjwal

    2015-01-01

    Ranking of association rules is currently an interesting topic in data mining and bioinformatics. The huge number of rules over items (or genes) produced by association rule mining (ARM) algorithms confuses the decision maker. In this article, we propose a weighted rule-mining technique, RANWAR (rank-based weighted association rule mining), which ranks the rules using two novel rule-interestingness measures: rank-based weighted condensed support (wcs) and weighted condensed confidence (wcc). These measures depend on the rank of the items (genes); using the rank, we assign a weight to each item. RANWAR generates far fewer frequent itemsets than state-of-the-art association rule mining algorithms, and thus reduces execution time. We run RANWAR on gene expression and methylation datasets. The genes of the top rules are biologically validated by Gene Ontology (GO) and KEGG pathway analyses. Many top-ranked rules extracted by RANWAR that hold poor ranks in traditional Apriori are highly biologically significant to the related diseases. Finally, the top rules evolved from RANWAR that are not found by Apriori are reported.
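
The core idea of rank-based weighting can be sketched briefly. The weighting scheme below (rank 1 maps to weight 1.0, rank m to 1/m) and the gene names are illustrative assumptions, not the paper's exact wcs definition:

```python
def rank_weights(ranked_items):
    """Illustrative rank-to-weight map: rank 1 -> 1.0, rank m -> 1/m."""
    m = len(ranked_items)
    return {item: (m - i) / m for i, item in enumerate(ranked_items)}

def weighted_support(itemset, transactions, weights):
    """Plain support scaled by the mean rank weight of the itemset's items."""
    count = sum(1 for t in transactions if set(itemset) <= t)
    mean_w = sum(weights[i] for i in itemset) / len(itemset)
    return mean_w * count / len(transactions)

# Hypothetical gene ranking (g1 most interesting) and transaction database.
w = rank_weights(["g1", "g2", "g3", "g4"])
db = [{"g1", "g2"}, {"g1", "g2", "g3"}, {"g2", "g4"}, {"g1", "g3"}]
ws = weighted_support(("g1", "g2"), db, w)  # 0.875 * 0.5 = 0.4375
```

Scaling support by rank weight is what lets highly ranked genes surface rules that plain Apriori support would discard.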

  1. Ranking Based Locality Sensitive Hashing Enabled Cancelable Biometrics: Index-of-Max Hashing

    OpenAIRE

    Jin, Zhe; Lai, Yen-Lung; Hwang, Jung-Yeon; Kim, Soohyung; Teoh, Andrew Beng Jin

    2017-01-01

    In this paper, we propose a ranking based locality sensitive hashing inspired two-factor cancelable biometrics, dubbed "Index-of-Max" (IoM) hashing for biometric template protection. With externally generated random parameters, IoM hashing transforms a real-valued biometric feature vector into discrete index (max ranked) hashed code. We demonstrate two realizations from IoM hashing notion, namely Gaussian Random Projection based and Uniformly Random Permutation based hashing schemes. The disc...

  2. The effect of uncertainties in distance-based ranking methods for multi-criteria decision making

    Science.gov (United States)

    Jaini, Nor I.; Utyuzhnikov, Sergei V.

    2017-08-01

    Data in multi-criteria decision making are often imprecise and changeable, so it is important to carry out a sensitivity analysis for the multi-criteria decision making problem. This paper presents a sensitivity analysis for several ranking techniques based on distance measures in multi-criteria decision making. Two types of uncertainty are considered for the sensitivity analysis. The first is related to the input data, while the second concerns the Decision Maker's preferences (weights). The ranking techniques considered in this study are TOPSIS, the relative distance method, and the trade-off ranking method. TOPSIS and the relative distance method measure the distance from an alternative to the ideal and anti-ideal solutions. In turn, the trade-off ranking method calculates the distance of an alternative to the extreme solutions and to the other alternatives. Several test cases are considered to study the performance of each ranking technique under both types of uncertainty.
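
Of the three methods compared, TOPSIS is the most widely documented; a minimal sketch with made-up scores and weights shows how the distance-to-ideal ranking works:

```python
import math

def topsis(matrix, weights, benefit):
    """Rank alternatives by relative closeness to the ideal solution.

    matrix[i][j] is the score of alternative i on criterion j;
    benefit[j] is True when larger is better on criterion j.
    """
    n_crit = len(matrix[0])
    # Vector-normalize each criterion column, then apply the weights.
    norms = [math.sqrt(sum(row[j] ** 2 for row in matrix)) for j in range(n_crit)]
    v = [[weights[j] * row[j] / norms[j] for j in range(n_crit)] for row in matrix]
    # Ideal and anti-ideal points, criterion by criterion.
    ideal = [max(col) if benefit[j] else min(col) for j, col in enumerate(zip(*v))]
    anti = [min(col) if benefit[j] else max(col) for j, col in enumerate(zip(*v))]
    # Closeness coefficient: 1 at the ideal point, 0 at the anti-ideal point.
    return [math.dist(row, anti) / (math.dist(row, ideal) + math.dist(row, anti))
            for row in v]

# Hypothetical decision matrix: 3 alternatives scored on 3 benefit criteria.
scores = topsis([[9, 9, 9], [7, 6, 8], [5, 4, 3]],
                weights=[0.5, 0.3, 0.2], benefit=[True, True, True])
# Alternative 0 dominates on every criterion, so its closeness is exactly 1.0.
```

A sensitivity test of the kind the paper describes would perturb `matrix` or `weights` and check whether the ranking order of `scores` changes.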

  3. Spectral-based features ranking for gamelan instruments identification using filter techniques

    Directory of Open Access Journals (Sweden)

    Diah P Wulandari

    2013-03-01

    Full Text Available In this paper, we describe an approach of spectral-based features ranking for Javanese gamelan instruments identification using filter techniques. The model extracted the spectral-based feature set of the signal using the Short Time Fourier Transform (STFT). The rank of the features was determined using five algorithms, namely ReliefF, Chi-Squared, Information Gain, Gain Ratio, and Symmetric Uncertainty. Then, we tested the ranked features by cross validation using a Support Vector Machine (SVM). The experiment showed that the Gain Ratio algorithm gave the best result, yielding an accuracy of 98.93%.
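
Among the filter techniques named above, Information Gain is the simplest to sketch. The toy data below (one perfectly predictive feature, one uninformative one) is a hypothetical illustration, not the paper's gamelan feature set:

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy (bits) of a list of class labels."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(feature, labels):
    """Drop in class entropy after splitting on a discrete feature."""
    n = len(labels)
    groups = {}
    for v, y in zip(feature, labels):
        groups.setdefault(v, []).append(y)
    conditional = sum(len(ys) / n * entropy(ys) for ys in groups.values())
    return entropy(labels) - conditional

# Toy data: f1 separates the two instrument classes perfectly, f2 does not.
y  = ["saron", "saron", "gong", "gong"]
f1 = ["low", "low", "high", "high"]
f2 = ["a", "b", "a", "b"]
ranked = sorted(["f1", "f2"],
                key=lambda f: -information_gain({"f1": f1, "f2": f2}[f], y))
```

Filter methods like this rank features before any classifier is trained, which is why the paper can compare five of them against a single downstream SVM.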

  4. Application of probabilistic risk based optimization approaches in environmental restoration

    International Nuclear Information System (INIS)

    Goldammer, W.

    1995-01-01

    The paper presents a general approach to site-specific risk assessments and optimization procedures. In order to account for uncertainties in the assessment of the current situation and of future developments, optimization parameters are treated as probabilistic distributions. The assessments are performed within the framework of a cost-benefit analysis. Radiation hazards and conventional risks are treated within an integrated approach. Special consideration is given to the consequences of low-probability events such as earthquakes or major floods. Risks and financial costs are combined into an overall figure of detriment, allowing one to distinguish among the benefits of available reclamation options. The probabilistic analysis uses a Monte Carlo simulation technique. The paper demonstrates the applicability of this approach in aiding reclamation planning, using an example from the German reclamation program for uranium mining and milling sites

  5. Probabilistic hypergraph based hash codes for social image search

    Institute of Scientific and Technical Information of China (English)

    Yi XIE; Hui-min YU; Roland HU

    2014-01-01

    With the rapid development of the Internet, recent years have seen the explosive growth of social media. This brings great challenges in performing efficient and accurate image retrieval on a large scale. Recent work shows that using hashing methods to embed high-dimensional image features and tag information into Hamming space provides a powerful way to index large collections of social images. By learning hash codes through a spectral graph partitioning algorithm, spectral hashing (SH) has shown promising performance among various hashing approaches. However, it is incomplete to model the relations among images only by pairwise simple graphs which ignore the relationship in a higher order. In this paper, we utilize a probabilistic hypergraph model to learn hash codes for social image retrieval. A probabilistic hypergraph model offers a higher order representation among social images by connecting more than two images in one hyperedge. Unlike a normal hypergraph model, a probabilistic hypergraph model considers not only the grouping information, but also the similarities between vertices in hyperedges. Experiments on Flickr image datasets verify the performance of our proposed approach.

  6. Suppression of panel flutter of near-space aircraft based on non-probabilistic reliability theory

    Directory of Open Access Journals (Sweden)

    Ye-Wei Zhang

    2016-03-01

    Full Text Available The active vibration control of composite panels with uncertain parameters in hypersonic flow is studied using non-probabilistic reliability theory. Using piezoelectric patches as active control actuators, the dynamic equations of the panel are established by the finite element method and Hamilton's principle, and the control model of the panel with uncertain parameters is obtained. Based on H∞ robust control theory and non-probabilistic reliability theory, a non-probabilistic reliability performance function is given in terms of the non-probabilistic reliability index. Moreover, the relationships between the robust controller, the H∞ performance index, and reliability are established. Numerical results show that the control method, under the influence of reliability, the H∞ performance index, and approaching velocity, is effective for suppressing panel vibration over the whole interval of uncertain parameters.

  7. Photometric redshift estimation via deep learning. Generalized and pre-classification-less, image based, fully probabilistic redshifts

    Science.gov (United States)

    D'Isanto, A.; Polsterer, K. L.

    2018-01-01

    Context. The need to analyze the available large synoptic multi-band surveys drives the development of new data-analysis methods. Photometric redshift estimation is one field of application where such new methods have improved the results substantially. Up to now, the vast majority of applied redshift estimation methods have utilized photometric features. Aims: We aim to develop a method to derive probabilistic photometric redshifts directly from multi-band imaging data, rendering pre-classification of objects and feature extraction obsolete. Methods: A modified version of a deep convolutional network was combined with a mixture density network. The estimates are expressed as Gaussian mixture models representing the probability density functions (PDFs) in redshift space. In addition to the traditional scores, the continuous ranked probability score (CRPS) and the probability integral transform (PIT) were applied as performance criteria. We adopted a feature-based random forest and a plain mixture density network to compare performances in experiments with data from SDSS (DR9). Results: We show that the proposed method is able to predict redshift PDFs independently of the type of source, for example galaxies, quasars or stars. The prediction performance is better than that of both reference methods and is comparable to results from the literature. Conclusions: The presented method is extremely general and allows us to solve any kind of probabilistic regression problem based on imaging data, for example estimating the metallicity or star formation rate of galaxies. This kind of methodology is tremendously important for the next generation of surveys.
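
The CRPS used above as a performance criterion has a well-known closed form for a single Gaussian predictive density (the paper scores Gaussian mixtures, for which a closed form also exists; the single-component case below is a simplified sketch, and the redshift numbers are hypothetical):

```python
import math

def crps_gaussian(y, mu, sigma):
    """Closed-form CRPS of a Gaussian predictive density N(mu, sigma^2)
    evaluated at the observed value y. Lower is better."""
    z = (y - mu) / sigma
    pdf = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)
    cdf = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
    return sigma * (z * (2.0 * cdf - 1.0) + 2.0 * pdf - 1.0 / math.sqrt(math.pi))

# Hypothetical redshift prediction: a sharp PDF near the true value scores
# better (lower CRPS) than a broad PDF with the same mean.
sharp = crps_gaussian(0.50, mu=0.48, sigma=0.02)
broad = crps_gaussian(0.50, mu=0.48, sigma=0.20)
```

Unlike a point-estimate error, CRPS rewards both calibration and sharpness of the predicted PDF, which is why it is paired with the PIT in the paper's evaluation.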

  8. Multi-energy CT based on a prior rank, intensity and sparsity model (PRISM)

    International Nuclear Information System (INIS)

    Gao, Hao; Osher, Stanley; Yu, Hengyong; Wang, Ge

    2011-01-01

    We propose a compressive sensing approach for multi-energy computed tomography (CT), namely the prior rank, intensity and sparsity model (PRISM). To further compress the multi-energy image for allowing the reconstruction with fewer CT data and less radiation dose, the PRISM models a multi-energy image as the superposition of a low-rank matrix and a sparse matrix (with row dimension in space and column dimension in energy), where the low-rank matrix corresponds to the stationary background over energy that has a low matrix rank, and the sparse matrix represents the rest of distinct spectral features that are often sparse. Distinct from previous methods, the PRISM utilizes the generalized rank, e.g., the matrix rank of tight-frame transform of a multi-energy image, which offers a way to characterize the multi-level and multi-filtered image coherence across the energy spectrum. Besides, the energy-dependent intensity information can be incorporated into the PRISM in terms of the spectral curves for base materials, with which the restoration of the multi-energy image becomes the reconstruction of the energy-independent material composition matrix. In other words, the PRISM utilizes prior knowledge on the generalized rank and sparsity of a multi-energy image, and intensity/spectral characteristics of base materials. Furthermore, we develop an accurate and fast split Bregman method for the PRISM and demonstrate the superior performance of the PRISM relative to several competing methods in simulations.

  9. Ranking of input parameters importance for BWR stability based on Ringhals-1

    International Nuclear Information System (INIS)

    Gajev, Ivan; Kozlowski, Tomasz; Xu, Yunlin; Downar, Thomas

    2011-01-01

    Unstable behavior of Boiling Water Reactors (BWRs) is known to occur during operation at certain power and flow conditions. Uncertainty calculations for BWR stability, based on the Wilks' formula, have already been performed for the Ringhals-1 benchmark. In this work, those calculations are used to identify and rank the most important parameters affecting the stability of the Ringhals-1 plant. The ranking has been done in two different ways, and a comparison of the two methods is presented. Results show that the methods provide different but meaningful evaluations of the ranking.

  10. A Citation-Based Ranking of German-Speaking Researchers in Business Administration with Data of Google Scholar

    Science.gov (United States)

    Dilger, Alexander; Müller, Harry

    2013-01-01

    Rankings of academics can be constructed in two different ways, either based on journal rankings or based on citations. Although citation-based rankings promise some fundamental advantages they are still not common in German-speaking business administration. However, the choice of the underlying database is crucial. This article argues that for…

  11. Ranking of bank branches with undesirable and fuzzy data: A DEA-based approach

    Directory of Open Access Journals (Sweden)

    Sohrab Kordrostami

    2016-07-01

    Full Text Available Banks are among the most important financial sectors for the economic development of a country, and efficiency scores and ranks of banks are significant and effective inputs to future planning. Sometimes the performance of banks must be measured in the presence of undesirable and vague factors. For these reasons, the current paper introduces a procedure based on data envelopment analysis (DEA) for evaluating the efficiency and complete ranking of decision making units (DMUs) where undesirable and fuzzy measures exist. In the presence of undesirable and fuzzy measures, DMUs are evaluated using a fuzzy expected value approach, and DMUs with similar efficiency scores are ranked using constraints and the Maximal Balance Index based on the optimal shadow prices. Afterwards, the efficiency scores of 25 branches of an Iranian commercial bank are evaluated using the proposed method, and a complete ranking of the bank branches is presented to discriminate between branches.

  12. Optimization-Based Approaches to Control of Probabilistic Boolean Networks

    Directory of Open Access Journals (Sweden)

    Koichi Kobayashi

    2017-02-01

    Full Text Available Control of gene regulatory networks is one of the fundamental topics in systems biology. In the last decade, control theory of Boolean networks (BNs, which is well known as a model of gene regulatory networks, has been widely studied. In this review paper, our previously proposed methods on optimal control of probabilistic Boolean networks (PBNs are introduced. First, the outline of PBNs is explained. Next, an optimal control method using polynomial optimization is explained. The finite-time optimal control problem is reduced to a polynomial optimization problem. Furthermore, another finite-time optimal control problem, which can be reduced to an integer programming problem, is also explained.

  13. Discovering urban mobility patterns with PageRank based traffic modeling and prediction

    Science.gov (United States)

    Wang, Minjie; Yang, Su; Sun, Yi; Gao, Jun

    2017-11-01

    An urban transportation system can be viewed as a complex network in which time-varying traffic flows act as links connecting adjacent regions as networked nodes. By computing urban traffic evolution on such a temporal complex network with PageRank, it is found that, for most regions, there is a linear relation between the traffic congestion measure at the present time and the PageRank value at the previous time. Since the PageRank measure of a region results from the mutual interactions of the whole network, this implies that the traffic state of a local region does not evolve independently but is affected by the evolution of the whole network. As a result, PageRank values can act as signatures for predicting upcoming traffic congestion. We observe the aforementioned laws experimentally based on the trajectory data of 12,000 taxis in Beijing for one month.
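
Applying PageRank to a region-to-region flow network can be sketched with a plain power iteration; the region names and flow volumes below are made up for illustration:

```python
def pagerank(out_flows, damping=0.85, iters=100):
    """Power-iteration PageRank on a weighted directed graph.

    out_flows[u] maps each neighbouring region v to the traffic flow u -> v.
    """
    nodes = list(out_flows)
    n = len(nodes)
    rank = {u: 1.0 / n for u in nodes}
    for _ in range(iters):
        nxt = {u: (1.0 - damping) / n for u in nodes}
        for u, edges in out_flows.items():
            total = sum(edges.values())
            for v, w in edges.items():
                # Rank propagates along edges in proportion to traffic volume.
                nxt[v] += damping * rank[u] * w / total
        rank = nxt
    return rank

# Hypothetical flows between three adjacent city regions.
flows = {
    "A": {"B": 30, "C": 10},
    "B": {"A": 5, "C": 40},
    "C": {"A": 5, "B": 25},
}
pr = pagerank(flows)  # ranks sum to 1; regions with high inflow score higher
```

Recomputing `pr` for each time slice of the flow network yields the time series of PageRank values that the paper correlates with the next slice's congestion measure.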

  14. Development of probabilistic fatigue curve for asphalt concrete based on viscoelastic continuum damage mechanics

    Directory of Open Access Journals (Sweden)

    Himanshu Sharma

    2016-07-01

    Full Text Available Due to its roots in a fundamental thermodynamic framework, the continuum damage approach is popular for modeling asphalt concrete behavior. Currently used continuum damage models use mixture-averaged values for model parameters and assume a deterministic damage process. On the other hand, significant scatter is found in fatigue data generated even under extremely controlled laboratory testing conditions. Thus, currently used continuum damage models fail to account for the scatter observed in fatigue data. This paper illustrates a novel approach for probabilistic fatigue life prediction based on the viscoelastic continuum damage approach. Several specimens were tested for their viscoelastic and damage properties under a uniaxial mode of loading. The data thus generated were analyzed using viscoelastic continuum damage mechanics principles to predict fatigue life. Weibull (2-parameter and 3-parameter) and lognormal distributions were fit to the fatigue lives predicted using the viscoelastic continuum damage approach. It was observed that fatigue damage could be best described using the Weibull distribution when compared to the lognormal distribution. Due to its flexibility, the 3-parameter Weibull distribution was found to fit better than the 2-parameter one. Further, significant differences were found between the probabilistic fatigue curves developed in this research and the traditional deterministic fatigue curve. The proposed methodology combines the advantages of continuum damage mechanics and probabilistic approaches. These probabilistic fatigue curves can be conveniently used for reliability-based pavement design. Keywords: Probabilistic fatigue curve, Continuum damage mechanics, Weibull distribution, Lognormal distribution
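
A 2-parameter Weibull fit of the kind compared above can be sketched via median-rank regression, a standard hand method for fatigue-life data (the paper's actual fitting procedure is not specified here, and the specimen lives below are hypothetical):

```python
import math

def fit_weibull_median_rank(lives):
    """2-parameter Weibull fit via median-rank regression.

    Uses Bernard's approximation F_i = (i - 0.3) / (n + 0.4) and an
    ordinary least-squares line on the Weibull probability plot.
    """
    n = len(lives)
    xs, ys = [], []
    for i, t in enumerate(sorted(lives), start=1):
        f = (i - 0.3) / (n + 0.4)                 # median-rank estimate of F(t)
        xs.append(math.log(t))
        ys.append(math.log(-math.log(1.0 - f)))   # linearizes the Weibull CDF
    mx, my = sum(xs) / n, sum(ys) / n
    beta = (sum((x - mx) * (yv - my) for x, yv in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))     # slope = shape parameter
    eta = math.exp(mx - my / beta)                # scale (characteristic life)
    return beta, eta

# Hypothetical fatigue lives (cycles to failure) of six asphalt specimens.
cycles = [8.1e4, 1.1e5, 1.3e5, 1.6e5, 2.0e5, 2.6e5]
shape, scale = fit_weibull_median_rank(cycles)
```

The fitted shape and scale define the probabilistic fatigue curve: the life at any target reliability R is `scale * (-log(R)) ** (1 / shape)`.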

  15. MCDM based evaluation and ranking of commercial off-the-shelf using fuzzy based matrix method

    Directory of Open Access Journals (Sweden)

    Rakesh Garg

    2017-04-01

    Full Text Available In today’s scenario, software has become an essential component of all kinds of systems. The size and complexity of software increase with a corresponding increase in its functionality, which leads to the development of modular software systems. Software developers emphasize the concept of component-based software engineering (CBSE) for the development of modular software systems. The CBSE concept consists of dividing the software into a number of modules, selecting Commercial Off-the-Shelf (COTS) components for each module, and finally integrating the modules to develop the final software system. The selection of COTS components for any module plays a vital role in software development. To address this selection problem, this research paper proposes a framework for ranking and selecting COTS components for a software system based on expert opinion elicitation and a fuzzy-based matrix methodology. The selection problem is modeled as a multi-criteria decision making (MCDM) problem. The evaluation criteria are identified through an extensive literature study, and the COTS components are ranked on these identified criteria using the proposed methods according to the value of a permanent function of their criteria matrices. The methodology is explained through an example and is validated by comparison with an existing method.
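
The matrix-method step can be sketched as ranking alternatives by the permanent of a small criteria matrix. The two 3x3 matrices below are invented scores, not the paper's fuzzy expert data.

```python
import math
from itertools import permutations

def permanent(m):
    """Matrix permanent by direct expansion; fine for the small matrices used here."""
    n = len(m)
    return sum(math.prod(m[i][p[i]] for i in range(n)) for p in permutations(range(n)))

# Hypothetical criteria matrices for two COTS candidates.
alternatives = {
    "COTS-A": [[0.9, 0.2, 0.1], [0.3, 0.8, 0.2], [0.2, 0.3, 0.7]],
    "COTS-B": [[0.6, 0.4, 0.3], [0.4, 0.5, 0.4], [0.3, 0.4, 0.5]],
}
ranking = sorted(alternatives, key=lambda a: permanent(alternatives[a]), reverse=True)
print(ranking)  # ['COTS-A', 'COTS-B']
```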

  16. The ranking of scientists based on scientific publications assessment.

    Science.gov (United States)

    Zerem, Enver

    2017-11-01

    It is generally accepted that the scientific impact factor (Web of Science) and the total number of citations of the articles published in a journal are the most relevant parameters of the journal's significance. However, the significance of scientists is much more complicated to establish, and the value of their scientific production cannot be directly reflected by the importance of the journals in which their articles are published. Evaluating the significance of scientists' accomplishments involves more complicated metrics than just their publication records. Based on long-term academic experience, the author proposes objective criteria to estimate the scientific merit of an individual's publication record. This metric can serve as a pragmatic tool and the nidus for discussion within the readership of this journal. Copyright © 2017 Elsevier Inc. All rights reserved.

  17. Ranking environmental projects model based on multicriteria decision-making and the weight sensitivity analysis

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    With the fast growth of the Chinese economy, more and more capital will be invested in environmental projects. How to select environmental investment projects (alternatives) so as to obtain the best environmental quality and economic benefits is an important problem for decision makers. The purpose of this paper is to develop a decision-making model to rank a finite number of alternatives with several, and sometimes conflicting, criteria. A model for ranking projects of municipal sewage treatment plants is proposed using experts' information and data from real projects, and the ranking result is given based on the PROMETHEE method. Furthermore, by means of the concept of weight stability intervals (WSI), the sensitivity of the ranking results to the size of the criteria values and to changes in the criteria weights is discussed. The results show that some criteria, such as "proportion of benefit to project cost", influence the ranking of alternatives very strongly while others do not, and that the influence comes not only from the criterion values but also from changes in the criterion weights. Thus, criteria such as "proportion of benefit to project cost" are key criteria for ranking the projects, and decision makers must treat them with caution.
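
The PROMETHEE ranking step can be sketched as below with the simplest (usual/strict) preference function; the three alternatives, criteria values, and weights are invented, not the paper's sewage-treatment data.

```python
def promethee_net_flows(scores, weights):
    """scores[a][c]: value of alternative a on criterion c (higher is better)."""
    n = len(scores)
    pref = [[0.0] * n for _ in range(n)]
    for a in range(n):
        for b in range(n):
            if a != b:
                # usual preference function: a criterion contributes its full
                # weight when alternative a is strictly better than b on it
                pref[a][b] = sum(w for sa, sb, w in zip(scores[a], scores[b], weights) if sa > sb)
    phi_plus = [sum(pref[a]) / (n - 1) for a in range(n)]
    phi_minus = [sum(pref[b][a] for b in range(n)) / (n - 1) for a in range(n)]
    return [pp - pm for pp, pm in zip(phi_plus, phi_minus)]   # PROMETHEE II net flows

scores = [[0.9, 0.6, 0.7], [0.5, 0.8, 0.6], [0.4, 0.5, 0.9]]  # 3 alternatives x 3 criteria
weights = [0.5, 0.3, 0.2]                                     # benefit/cost weighted most
net = promethee_net_flows(scores, weights)
ranking = sorted(range(3), key=lambda a: net[a], reverse=True)
```

A sensitivity check in the WSI spirit would repeat this ranking while perturbing one criterion weight at a time and record when the order changes.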

  18. Probabilistic safety assessment model in consideration of human factors based on object-oriented bayesian networks

    International Nuclear Information System (INIS)

    Zhou Zhongbao; Zhou Jinglun; Sun Quan

    2007-01-01

    The effect of human factors on system safety is increasingly serious, yet it is often ignored in traditional probabilistic safety assessment methods. A new probabilistic safety assessment model based on object-oriented Bayesian networks is proposed in this paper. Human factors are integrated into the existing event sequence diagrams. Then the classes of the object-oriented Bayesian networks are constructed and converted to latent Bayesian networks for inference. Finally, the inference results are integrated into the event sequence diagrams for probabilistic safety assessment. The new method is applied to a loss-of-coolant accident in a nuclear power plant. The results show that the model is applicable not only to real-time situation assessment but also to assessment based on a limited amount of information. The modeling complexity is kept down, and the object-oriented approach makes the new method appropriate for large, complex systems. (authors)
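
The gain from modeling human factors explicitly can be shown with a one-node numeric sketch: marginalizing an operator-error variable into an event-sequence branch probability, as a Bayesian network engine would by enumeration. All numbers below are invented.

```python
# Hypothetical conditional probabilities for one branch of an event sequence.
P_HUMAN_ERROR = 0.1        # operator mishandles the initiating event
P_FAIL_GIVEN_ERROR = 0.30  # safety function fails if mishandled
P_FAIL_GIVEN_OK = 0.05     # safety function fails otherwise

# Marginalize over the human-factor node.
p_fail = (P_HUMAN_ERROR * P_FAIL_GIVEN_ERROR
          + (1 - P_HUMAN_ERROR) * P_FAIL_GIVEN_OK)
print(round(p_fail, 6))  # 0.075, versus 0.05 if the human factor were ignored
```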

  19. A comprehensive probabilistic analysis model of oil pipelines network based on Bayesian network

    Science.gov (United States)

    Zhang, C.; Qin, T. X.; Jiang, B.; Huang, C.

    2018-02-01

    An oil pipeline network is one of the most important energy transportation facilities, but accidents in such a network may result in serious disasters. Analysis models for these accidents have been established mainly with three methods: event trees, accident simulation, and Bayesian networks. Among these methods, the Bayesian network is suitable for probabilistic analysis, but not all the important influencing factors have been considered, and no deployment rule for the factors has been established. This paper proposes a probabilistic analysis model of oil pipeline networks based on a Bayesian network. Most of the important influencing factors, including the key environmental conditions and emergency response, are considered in this model. Moreover, the paper also introduces a deployment rule for these factors. The model can be used for probabilistic analysis and sensitivity analysis of oil pipeline network accidents.

  20. Social norms and rank-based nudging: Changing willingness to pay for healthy food.

    Science.gov (United States)

    Aldrovandi, Silvio; Brown, Gordon D A; Wood, Alex M

    2015-09-01

    People's evaluations in the domain of healthy eating are at least partly determined by the choice context. We systematically test reference-level and rank-based models of relative comparisons against each other and explore their application to social norms nudging, an intervention that aims at influencing consumers' behavior by addressing their inaccurate beliefs about their consumption relative to the consumption of others. Study 1 finds that the rank of a product or behavior among others in the immediate comparison context, rather than its objective attributes, influences its evaluation. Study 2 finds that when a comparator is presented in isolation, the same rank-based process occurs based on information retrieved from memory. Study 3 finds that telling people how their consumption ranks within a normative comparison sample increases willingness to pay for a healthy food by over 30% relative to the normal social norms intervention that tells them how they compare to the average. We conclude that social norms interventions should present rank information (e.g., "you are in the most unhealthy 10% of eaters") rather than information relative to the average (e.g., "you consume 500 calories more than the average person"). (c) 2015 APA, all rights reserved.
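
The rank-based message the studies favor is easy to compute: report where a person's intake sits within a normative sample rather than its distance from the mean. The calorie figures below are illustrative.

```python
def unhealthy_rank_pct(own_calories, sample):
    """Share of the normative sample eating at least as unhealthily (higher = worse)."""
    at_least_as_bad = sum(1 for s in sample if s >= own_calories)
    return 100.0 * at_least_as_bad / len(sample)

sample = [1500, 1600, 1700, 1800, 1900, 2000, 2100, 2200, 2300, 2400]
pct = unhealthy_rank_pct(2300, sample)
print(f"you are in the most unhealthy {pct:.0f}% of eaters")  # rank framing, not mean framing
```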

  1. Introducing trimming and function ranking to Solid Works based on function analysis

    NARCIS (Netherlands)

    Chechurin, Leonid S.; Wits, Wessel Willems; Bakker, Hans M.; Cascini, G.; Vaneker, Thomas H.J.

    2011-01-01

    TRIZ based Function Analysis models existing products based on functional interactions between product parts. Such a function model description is the ideal starting point for product innovation. Design engineers can apply (TRIZ) methods such as trimming and function ranking to this function model

  2. Introducing Trimming and Function Ranking to SolidWorks based on Function Analysis

    NARCIS (Netherlands)

    Chechurin, L.S.; Wits, Wessel Willems; Bakker, Hans M.; Vaneker, Thomas H.J.

    2015-01-01

    TRIZ based Function Analysis models existing products based on functional interactions between product parts. Such a function model description is the ideal starting point for product innovation. Design engineers can apply (TRIZ) methods such as trimming and function ranking to this function model

  3. New public management based on rankings: From planning to evaluation

    Directory of Open Access Journals (Sweden)

    Andrés Valdez Zepeda

    2017-11-01

    Full Text Available This article focuses on the emergence and development of a new trend in the management of public affairs and government worldwide, known as ranking-based management. This type of management is the result of performance measurement, usually conducted by an external agent or prestigious institution, which generally uses a methodology based on indicators and audits. It evaluates the results, achievements and progress in governance, and ranks governments on a list in which they are compared against other comparable governments. As a global trend, management supported by rankings is seen not as an option but as a real requirement for public agencies and governments; it not only helps them in the process of continuous improvement, but also creates important incentives such as prestige, social recognition, and the construction of a better brand.

  4. Probabilistic estimation of residential air exchange rates for population-based human exposure modeling

    Science.gov (United States)

    Residential air exchange rates (AERs) are a key determinant in the infiltration of ambient air pollution indoors. Population-based human exposure models using probabilistic approaches to estimate personal exposure to air pollutants have relied on input distributions from AER meas...

  5. A Practical Probabilistic Graphical Modeling Tool for Weighing Ecological Risk-Based Evidence

    Science.gov (United States)

    Past weight-of-evidence frameworks for adverse ecological effects have provided soft-scoring procedures for judgments based on the quality and measured attributes of evidence. Here, we provide a flexible probabilistic structure for weighing and integrating lines of evidence for e...

  6. A Probabilistic Feature Map-Based Localization System Using a Monocular Camera.

    Science.gov (United States)

    Kim, Hyungjin; Lee, Donghwa; Oh, Taekjun; Choi, Hyun-Taek; Myung, Hyun

    2015-08-31

    Image-based localization is one of the most widely researched localization techniques in the robotics and computer vision communities. As enormous image data sets are provided through the Internet, many studies on estimating a location with a pre-built image-based 3D map have been conducted. Most research groups use numerous image data sets that contain sufficient features. In contrast, this paper focuses on image-based localization in the case of insufficient images and features. A more accurate localization method is proposed based on a probabilistic map using 3D-to-2D matching correspondences between a map and a query image. The probabilistic feature map is generated in advance by probabilistic modeling of the sensor system as well as the uncertainties of camera poses. Using the conventional PnP algorithm, an initial camera pose is estimated on the probabilistic feature map. The proposed algorithm is optimized from the initial pose by minimizing Mahalanobis distance errors between features from the query image and the map to improve accuracy. To verify that the localization accuracy is improved, the proposed algorithm is compared with the conventional algorithm in simulation and real environments.
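
The refinement step minimizes Mahalanobis distances between query-image features and map features; in 2-D the distance can be written out by hand. The feature coordinates and covariances below are made up.

```python
def mahalanobis_2d(x, mu, cov):
    """sqrt((x - mu)^T * cov^-1 * (x - mu)) with a hand-inverted 2x2 covariance."""
    (a, b), (c, d) = cov
    det = a * d - b * c
    inv = [[d / det, -b / det], [-c / det, a / det]]
    dx = (x[0] - mu[0], x[1] - mu[1])
    q = (dx[0] * (inv[0][0] * dx[0] + inv[0][1] * dx[1])
         + dx[1] * (inv[1][0] * dx[0] + inv[1][1] * dx[1]))
    return q ** 0.5

# Identity covariance reduces to the Euclidean distance ...
d_iso = mahalanobis_2d((3.0, 4.0), (0.0, 0.0), [[1.0, 0.0], [0.0, 1.0]])    # 5.0
# ... while a large x-variance down-weights errors along x.
d_aniso = mahalanobis_2d((2.0, 0.0), (0.0, 0.0), [[4.0, 0.0], [0.0, 1.0]])  # 1.0
```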

  7. A Probabilistic Feature Map-Based Localization System Using a Monocular Camera

    Directory of Open Access Journals (Sweden)

    Hyungjin Kim

    2015-08-01

    Full Text Available Image-based localization is one of the most widely researched localization techniques in the robotics and computer vision communities. As enormous image data sets are provided through the Internet, many studies on estimating a location with a pre-built image-based 3D map have been conducted. Most research groups use numerous image data sets that contain sufficient features. In contrast, this paper focuses on image-based localization in the case of insufficient images and features. A more accurate localization method is proposed based on a probabilistic map using 3D-to-2D matching correspondences between a map and a query image. The probabilistic feature map is generated in advance by probabilistic modeling of the sensor system as well as the uncertainties of camera poses. Using the conventional PnP algorithm, an initial camera pose is estimated on the probabilistic feature map. The proposed algorithm is optimized from the initial pose by minimizing Mahalanobis distance errors between features from the query image and the map to improve accuracy. To verify that the localization accuracy is improved, the proposed algorithm is compared with the conventional algorithm in simulation and real environments.

  8. Rule-based versus probabilistic selection for active surveillance using three definitions of insignificant prostate cancer

    NARCIS (Netherlands)

    L.D.F. Venderbos (Lionne); M.J. Roobol-Bouts (Monique); C.H. Bangma (Chris); R.C.N. van den Bergh (Roderick); L.P. Bokhorst (Leonard); D. Nieboer (Daan); Godtman, R; J. Hugosson (Jonas); van der Kwast, T; E.W. Steyerberg (Ewout)

    2016-01-01

    textabstractTo study whether probabilistic selection by the use of a nomogram could improve patient selection for active surveillance (AS) compared to the various sets of rule-based AS inclusion criteria currently used. We studied Dutch and Swedish patients participating in the European Randomized

  9. Probabilistic Data Modeling and Querying for Location-Based Data Warehouses

    DEFF Research Database (Denmark)

    Timko, Igor; Dyreson, Curtis E.; Pedersen, Torben Bach

    Motivated by the increasing need to handle complex, dynamic, uncertain multidimensional data in location-based warehouses, this paper proposes a novel probabilistic data model that can address the complexities of such data. The model provides a foundation for handling complex hierarchical and unc...

  10. Probabilistic Data Modeling and Querying for Location-Based Data Warehouses

    DEFF Research Database (Denmark)

    Timko, Igor; Dyreson, Curtis E.; Pedersen, Torben Bach

    2005-01-01

    Motivated by the increasing need to handle complex, dynamic, uncertain multidimensional data in location-based warehouses, this paper proposes a novel probabilistic data model that can address the complexities of such data. The model provides a foundation for handling complex hierarchical and unc...

  11. Probabilistic full waveform inversion based on tectonic regionalization - development and application to the Australian upper mantle

    NARCIS (Netherlands)

    Käufl, P.; Fichtner, A.; Igel, H.

    2013-01-01

    We present a first study to investigate the feasibility of a probabilistic 3-D full waveform inversion based on spectral-element simulations of seismic wave propagation and Monte Carlo exploration of the model space. Through a tectonic regionalization we reduce the dimension of the model space to

  12. BridgeRank: A novel fast centrality measure based on local structure of the network

    Science.gov (United States)

    Salavati, Chiman; Abdollahpouri, Alireza; Manbari, Zhaleh

    2018-04-01

    Ranking nodes in complex networks has become an important task in many application domains. In a complex network, influential nodes are those that have the greatest spreading ability, so identifying influential nodes by their spreading ability is a fundamental task in applications such as viral marketing. One of the most important centrality measures for ranking nodes is closeness centrality, which is effective but suffers from a high computational complexity of O(n³). This paper improves on closeness centrality by utilizing the local structure of the network and presents a new ranking algorithm, called BridgeRank centrality. The proposed method computes a local centrality value for each node. For this purpose, communities are first detected, and the relationships between communities are completely ignored. Then, by applying a centrality measure within each community, one best critical node is extracted from each community. Finally, the nodes are ranked by the sum of their shortest path lengths to the obtained critical nodes. We have also modified the proposed method by weighting the original BridgeRank and selecting several nodes from each community based on the density of that community. Our method can find the best nodes with high spreading ability at low time complexity, which makes it applicable to large-scale networks. To evaluate the performance of the proposed method, we use the SIR diffusion model. Experiments on real and artificial networks show that our method identifies influential nodes efficiently and achieves better performance than other recent methods.
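
A toy run of the BridgeRank pipeline under stated assumptions: the two communities are taken as given (the paper detects them), the critical node of each community is picked by degree, and every node is scored by the sum of its shortest-path distances to the critical nodes (smaller = more influential).

```python
from collections import deque

def bfs_dist(adj, src):
    """Unweighted shortest-path distances from src by breadth-first search."""
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

# Two triangles bridged by the edge 2-3.
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2, 4, 5], 4: [3, 5], 5: [3, 4]}
communities = [[0, 1, 2], [3, 4, 5]]                                  # assumed, not detected
critical = [max(c, key=lambda u: len(adj[u])) for c in communities]   # [2, 3]
score = {u: sum(bfs_dist(adj, c)[u] for c in critical) for u in adj}
ranking = sorted(adj, key=lambda u: score[u])    # the bridge nodes 2 and 3 lead
```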

  13. An improved rank based disease prediction using web navigation patterns on bio-medical databases

    Directory of Open Access Journals (Sweden)

    P. Dhanalakshmi

    2017-12-01

    Full Text Available Applying machine learning techniques to online biomedical databases is a challenging task, as the data are collected from a large number of sources and are multi-dimensional. Retrieval of relevant documents from a large repository such as a gene document collection also takes more processing time and yields an increased false positive rate. Generally, the extraction of biomedical documents is based on a stream of prior observations of gene parameters taken at different time periods. Traditional web usage models such as Markov, Bayesian and clustering models have limitations in analyzing user navigation patterns and identifying sessions in an online biomedical database. Moreover, most document ranking models on biomedical databases are sensitive to sparsity and outliers. In this paper, a novel user recommendation system was implemented to predict the top-ranked biomedical documents using the disease type, gene entities and user navigation patterns. In this recommendation system, dynamic session identification, dynamic user identification and document ranking techniques were used to extract the most relevant disease documents from the online PubMed repository. To verify the performance of the proposed model, its true positive rate and runtime were compared with those of traditional static models such as Bayesian and fuzzy rank models. Experimental results show that the performance of the proposed ranking model is better than that of the traditional models.

  14. CT Image Sequence Restoration Based on Sparse and Low-Rank Decomposition

    Science.gov (United States)

    Gou, Shuiping; Wang, Yueyue; Wang, Zhilong; Peng, Yong; Zhang, Xiaopeng; Jiao, Licheng; Wu, Jianshe

    2013-01-01

    Blurry organ boundaries and soft tissue structures present a major challenge in biomedical image restoration. In this paper, we propose a low-rank decomposition-based method for computed tomography (CT) image sequence restoration, where the CT image sequence is decomposed into a sparse component and a low-rank component. A new point spread function for the Wiener filter is employed to efficiently remove blur in the sparse component, and Wiener filtering with a Gaussian PSF is used to recover the average image of the low-rank component. We then obtain the recovered CT image sequence by combining the recovered low-rank image with the recovered sparse image sequence. Our method achieves restoration results with higher contrast, sharper organ boundaries and richer soft tissue structure information compared with existing CT image restoration methods. The robustness of our method was assessed with numerical experiments using three different low-rank models: Robust Principal Component Analysis (RPCA), Linearized Alternating Direction Method with Adaptive Penalty (LADMAP) and Go Decomposition (GoDec). Experimental results demonstrated that the RPCA model was the most suitable for CT images with little noise, whereas the GoDec model was the best for heavily noisy CT images. PMID:24023764
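
At its core, the low-rank component is a truncated SVD; below is a dependency-free power-iteration sketch that recovers the dominant rank-1 part of a matrix. Real RPCA/LADMAP/GoDec solvers alternate this kind of low-rank step with a sparsity step; the matrix here is a toy, not CT data.

```python
def rank1_component(A, iters=100):
    """Best rank-1 approximation sigma * u * v^T via alternating power iteration."""
    n, m = len(A), len(A[0])
    v = [1.0] * m
    for _ in range(iters):
        u = [sum(A[i][j] * v[j] for j in range(m)) for i in range(n)]   # u <- A v
        nu = sum(x * x for x in u) ** 0.5
        u = [x / nu for x in u]
        v = [sum(A[i][j] * u[i] for i in range(n)) for j in range(m)]   # v <- A^T u
        nv = sum(x * x for x in v) ** 0.5
        v = [x / nv for x in v]
    sigma = sum(u[i] * A[i][j] * v[j] for i in range(n) for j in range(m))
    return [[sigma * u[i] * v[j] for j in range(m)] for i in range(n)]

# A rank-1 matrix is reconstructed (near-)exactly; the residual A - L1 would
# play the role of the sparse component in an RPCA-style split.
L = [[1.0, 2.0, 3.0], [2.0, 4.0, 6.0], [3.0, 6.0, 9.0]]
L1 = rank1_component(L)
```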

  15. A Dynamic Probabilistic Based Broadcasting Scheme for MANETs

    Science.gov (United States)

    Shanmugam, Kannan; Subburathinam, Karthik; Velayuthampalayam Palanisamy, Arunachalam

    2016-01-01

    A MANET, or Mobile Ad Hoc Network, is a cluster of mobile nodes that can communicate with each other without any fixed infrastructure. The basic characteristic of a MANET is its dynamic topology. Due to this dynamic behavior, the topology of the network changes very frequently, which leads to repeated failures of valid routes. Thus, the process of finding a valid route causes a notable drop in the throughput of the network. To identify a new valid path to a targeted mobile node, available proactive routing protocols use a broadcasting method known as simple flooding, which broadcasts the RREQ packet from the source to the rest of the nodes in the mobile network. The problem with this method is the disproportionate, repetitive retransmission of RREQ packets, which can result in high contention on the available channel and packet collisions due to extreme traffic in the network. A reasonable number of routing algorithms have been suggested for reducing the lethal impact of flooding RREQ packets. However, most of these algorithms introduce considerable complexity and reduce throughput by depending on special hardware components and by maintaining complex information that is used infrequently. Considering routing complexity with the goal of increasing network throughput, in this paper we introduce a new approach called Dynamic Probabilistic Route (DPR) discovery. The Node's Forwarding Probability (NFP) is dynamically calculated by DPR mobile nodes using a Probability Function (PF) that depends on the density of local neighbor nodes and the cumulative number of broadcast-covered neighbors. PMID:27019868
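
The abstract does not spell out the Probability Function, so the form below is an assumption with the stated behavior: the forwarding probability drops as local neighbor density and the number of already-covered neighbors grow. The function name and the density constant are hypothetical.

```python
def forwarding_probability(n_neighbors, n_covered, avg_density=6.0):
    """Assumed NFP: rebroadcast less in dense, already-covered neighborhoods."""
    base = min(1.0, avg_density / max(n_neighbors, 1))   # denser -> rebroadcast less
    coverage = 1.0 - n_covered / max(n_neighbors, 1)     # more covered -> rebroadcast less
    return max(0.1, base * coverage)                     # keep a small floor > 0

p_sparse = forwarding_probability(3, 0)    # sparse, uncovered neighborhood -> 1.0
p_dense = forwarding_probability(12, 6)    # dense, half-covered neighborhood -> 0.25
```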

  16. Finding differentially expressed genes in high dimensional data: Rank based test statistic via a distance measure.

    Science.gov (United States)

    Mathur, Sunil; Sadana, Ajit

    2015-12-01

    We present a rank-based test statistic for the identification of differentially expressed genes using a distance measure. The proposed test statistic is highly robust against extreme values and does not assume the distribution of the parent population. Simulation studies show that the proposed test is more powerful than some commonly used methods, such as the paired t-test, the Wilcoxon signed-rank test, and significance analysis of microarrays (SAM), under certain non-normal distributions. The asymptotic distribution of the test statistic and the p-value function are discussed. The application of the proposed method is shown using a real-life data set. © The Author(s) 2011.
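
For reference, the Wilcoxon signed-rank baseline the simulations compare against takes only a few lines (ties and zero differences are ignored here for simplicity).

```python
def signed_rank_w_plus(diffs):
    """W+ statistic: sum of the ranks of |d| over the positive differences."""
    ordered = sorted((abs(d), d) for d in diffs if d != 0)
    return sum(rank for rank, (_, d) in enumerate(ordered, start=1) if d > 0)

w = signed_rank_w_plus([1, -2, 3, -4, 5])   # positive diffs hold ranks 1, 3, 5
print(w)  # 9
```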

  17. Superstring motivated gauge models based on a rank six subgroup of E6

    International Nuclear Information System (INIS)

    Lazarides, G.; Panagiotakopoulos, C.; Shafi, Q.

    1987-01-01

    We discuss gauge models based on a superstring-motivated rank six subgroup of E6. Lepton number is an accidental unbroken symmetry of the models, which leads to an essentially stable proton. One of the neutral gauge bosons couples to B-L and may have a mass below a TeV. (orig.)

  18. Multi-Label Classification Based on Low Rank Representation for Image Annotation

    Directory of Open Access Journals (Sweden)

    Qiaoyu Tan

    2017-01-01

    Full Text Available Annotating remote sensing images is a challenging task due to its labor-demanding annotation process and requirement of expert knowledge, especially when images can be annotated with multiple semantic concepts (or labels). To automatically annotate these multi-label images, we introduce an approach called Multi-Label Classification based on Low Rank Representation (MLC-LRR). MLC-LRR first utilizes low rank representation in the feature space of images to compute the low rank constrained coefficient matrix; it then adapts the coefficient matrix to define a feature-based graph and to capture the global relationships between images. Next, it utilizes low rank representation in the label space of labeled images to construct a semantic graph. Finally, these two graphs are exploited to train a graph-based multi-label classifier. To validate the performance of MLC-LRR against other related graph-based multi-label methods in annotating images, we conduct experiments on a publicly available multi-label remote sensing image data set (Land Cover). We perform additional experiments on five real-world multi-label image datasets to further investigate the performance of MLC-LRR. The empirical study demonstrates that MLC-LRR achieves better performance in annotating images than the competing methods across various evaluation criteria; it can also effectively exploit the global structure and label correlations of multi-label images.

  19. A Probabilistic Analysis of the Sacco and Vanzetti Evidence

    CERN Document Server

    Kadane, Joseph B

    2011-01-01

    A Probabilistic Analysis of the Sacco and Vanzetti Evidence is a Bayesian analysis of the trial and post-trial evidence in the Sacco and Vanzetti case, based on subjectively determined probabilities and assumed relationships among evidential events. It applies the ideas of charting evidence and probabilistic assessment to this case, which is perhaps the ranking cause celebre in all of American legal history. Modern computation methods applied to inference networks are used to show how the inferential force of evidence in a complicated case can be graded. The authors employ probabilistic assess

  20. Accelerating parameter identification of proton exchange membrane fuel cell model with ranking-based differential evolution

    International Nuclear Information System (INIS)

    Gong, Wenyin; Cai, Zhihua

    2013-01-01

    Parameter identification of PEM (proton exchange membrane) fuel cell model is a very active area of research. Generally, it can be treated as a numerical optimization problem with complex nonlinear and multi-variable features. DE (differential evolution), which has been successfully used in various fields, is a simple yet efficient evolutionary algorithm for global numerical optimization. In this paper, with the objective of accelerating the process of parameter identification of PEM fuel cell models and reducing the necessary computational efforts, we firstly present a generic and simple ranking-based mutation operator for the DE algorithm. Then, the ranking-based mutation operator is incorporated into five highly-competitive DE variants to solve the PEM fuel cell model parameter identification problems. The main contributions of this work are the proposed ranking-based DE variants and their application to the parameter identification problems of PEM fuel cell models. Experiments have been conducted by using both the simulated voltage–current data and the data obtained from the literature to validate the performance of our approach. The results indicate that the ranking-based DE methods provide better results with respect to the solution quality, the convergence rate, and the success rate compared with their corresponding original DE methods. In addition, the voltage–current characteristics obtained by our approach are in good agreement with the original voltage–current curves in all cases. - Highlights: • A simple and generic ranking-based mutation operator is presented in this paper. • Several DE (differential evolution) variants are used to solve the parameter identification of PEMFC (proton exchange membrane fuel cells) model. • Results show that our method accelerates the process of parameter identification. • The V–I characteristics are in very good agreement with experimental data
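
The ranking-based mutation idea can be sketched on a toy sphere function rather than the PEMFC model: in a DE/rand/1 step, the parent indices are drawn with probability proportional to fitness rank, so better vectors take part in mutation more often. The population size, F, CR, and bounds are illustrative, and the distinct-index checks of textbook DE are omitted for brevity.

```python
import random

def rank_select(order, rng):
    """order: indices sorted best-first; rank-proportional weights n..1."""
    n = len(order)
    return rng.choices(order, weights=range(n, 0, -1), k=1)[0]

def ranking_de(f, dim=5, pop_size=20, gens=100, F=0.5, CR=0.9, seed=1):
    rng = random.Random(seed)
    pop = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(pop_size)]
    fit = [f(x) for x in pop]
    for _ in range(gens):
        order = sorted(range(pop_size), key=lambda i: fit[i])   # best first
        for i in range(pop_size):
            r1, r2, r3 = (rank_select(order, rng) for _ in range(3))
            trial = [pop[r1][j] + F * (pop[r2][j] - pop[r3][j])  # DE/rand/1 mutation
                     if rng.random() < CR else pop[i][j]         # binomial crossover
                     for j in range(dim)]
            tf = f(trial)
            if tf <= fit[i]:                                     # greedy selection
                pop[i], fit[i] = trial, tf
    return min(fit)

sphere = lambda x: sum(v * v for v in x)
best = ranking_de(sphere)   # converges toward 0
```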

  1. Scheduling for Multiuser MIMO Downlink Channels with Ranking-Based Feedback

    Science.gov (United States)

    Kountouris, Marios; Sälzer, Thomas; Gesbert, David

    2008-12-01

    We consider a multi-antenna broadcast channel with more single-antenna receivers than transmit antennas and partial channel state information at the transmitter (CSIT). We propose a novel type of CSIT representation for the purpose of user selection, coined ranking-based feedback. Each user calculates and feeds back the rank, an integer between 1 and W + 1, of its instantaneous channel quality information (CQI) among a set of W past CQI measurements. Apart from significantly reducing the required feedback load, ranking-based feedback enables the transmitter to select users that are on the highest peak (quantile) with respect to their own channel distribution, independently of the distributions of other users. It can also be shown that this feedback metric can restore temporal fairness in heterogeneous networks, in which users' channels are not identically distributed and mobile terminals experience different average signal-to-noise ratios (SNR). The performance of a system that performs user selection using ranking-based CSIT in the context of random opportunistic beamforming is analyzed, and we provide design guidelines on the number of required past CSIT samples and the impact of a finite W on average throughput. Simulation results show that a feedback reduction on the order of 40-50% can be achieved with a negligible decrease in system throughput.
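
The feedback metric itself is a single integer: the rank of the current CQI among the W most recent measurements. Under the convention assumed here, rank W + 1 means the current sample is the best seen, i.e. the user is at a peak of its own channel distribution.

```python
def cqi_rank(current, past):
    """Rank in 1..len(past)+1: one plus the number of past CQIs below current."""
    return 1 + sum(1 for p in past if p < current)

past = [3.1, 4.0, 2.5, 3.8]      # W = 4 stored CQI samples (illustrative values)
peak = cqi_rank(4.2, past)       # 5 -> user is on its own channel peak
trough = cqi_rank(2.0, past)     # 1 -> user is in its own channel trough
```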

  2. Probabilistic reasoning for assembly-based 3D modeling

    KAUST Repository

    Chaudhuri, Siddhartha; Kalogerakis, Evangelos; Guibas, Leonidas; Koltun, Vladlen

    2011-01-01

    Assembly-based modeling is a promising approach to broadening the accessibility of 3D modeling. In assembly-based modeling, new models are assembled from shape components extracted from a database. A key challenge in assembly-based modeling

  3. Strategic Entrepreneurship Based Model of Catch-up University in Global Rankings

    Directory of Open Access Journals (Sweden)

    Kozlov Mikhail

    2016-01-01

    Full Text Available This paper helps answer the question of why only a few universities have managed to advance significantly in the global rankings while most of their competitors fail. For this purpose, it introduces a new strategically entrepreneurial catch-up university framework based on a combination of the resource-based view, dynamic capabilities, strategic entrepreneurship, and latecomer organization concepts. The logic of the new framework explains the advantages, for ranking-oriented universities, of being ambidextrous and of pursuing new, potentially more favorable opportunities for research development. It proposes that a substantial increase in a university's dynamic capabilities and the accumulation of its resource base rest on a new combination of financial, human and social capital, together with the strategic management of these resources in the process of identifying and exploiting greater opportunities.

  4. Network-based ranking methods for prediction of novel disease associated microRNAs.

    Science.gov (United States)

    Le, Duc-Hau

    2015-10-01

    Many studies have shown roles of microRNAs in human disease, and a number of computational methods have been proposed to predict such associations by ranking candidate microRNAs according to their relevance to a disease. Among them, machine learning-based methods usually have a limitation in specifying non-disease microRNAs as negative training samples. Meanwhile, network-based methods are becoming dominant since they well exploit a "disease module" principle in microRNA functional similarity networks. Among these, the random walk with restart (RWR) algorithm-based method is currently the state of the art. The use of this algorithm was inspired by its success in predicting disease genes, because the "disease module" principle also exists in protein interaction networks. Besides, many algorithms designed for webpage ranking have been successfully applied in ranking disease candidate genes because web networks share topological properties with protein interaction networks. However, these algorithms have not yet been utilized for disease microRNA prediction. We constructed microRNA functional similarity networks based on shared targets of microRNAs, and then we integrated them with a microRNA functional synergistic network, which was recently identified. After analyzing topological properties of these networks, in addition to RWR, we assessed the performance of (i) PRINCE (PRIoritizatioN and Complex Elucidation), which was proposed for disease gene prediction; (ii) PageRank with Priors (PRP) and K-Step Markov (KSM), which were used for studying web networks; and (iii) a neighborhood-based algorithm. Analyses of topological properties showed that all microRNA functional similarity networks exhibit small-world and scale-free properties. The performance of each algorithm was assessed based on average AUC values on 35 disease phenotypes and average rankings of newly discovered disease microRNAs. As a result, the performance on the integrated network was better than that on the individual ones.
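The RWR procedure that this record identifies as the state of the art is short to state: a walker repeatedly steps to a random neighbour but, with probability r, teleports back to a known disease node, and candidates are ranked by the stationary probability. A toy sketch under assumed inputs (the network, seed set and restart value are hypothetical):

```python
def random_walk_with_restart(adj, seeds, restart=0.5, iters=100):
    """Rank nodes of a similarity network by RWR steady-state probability.

    adj: dict node -> list of neighbour nodes (undirected graph).
    seeds: known disease-associated nodes; all other nodes are candidates.
    """
    nodes = list(adj)
    p0 = {n: (1.0 / len(seeds) if n in seeds else 0.0) for n in nodes}
    p = dict(p0)
    for _ in range(iters):
        nxt = {}
        for n in nodes:
            # probability mass flowing into n from its neighbours
            spread = sum(p[m] / len(adj[m]) for m in adj[n])
            nxt[n] = (1 - restart) * spread + restart * p0[n]
        p = nxt
    return sorted(nodes, key=p.get, reverse=True)

# Toy microRNA functional similarity network (hypothetical):
net = {"a": ["b", "c"], "b": ["a", "c"], "c": ["a", "b", "d"], "d": ["c"]}
ranking = random_walk_with_restart(net, seeds={"a"})
```

Nodes close to the seed in the network topology ("disease module" principle) end up with higher steady-state probability and thus a better rank.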

  5. Environmental and health hazard ranking and assessment of plastic polymers based on chemical composition

    Energy Technology Data Exchange (ETDEWEB)

    Lithner, Delilah, E-mail: delilah.lithner@gmail.com; Larsson, Ake; Dave, Goeran

    2011-08-15

    Plastics constitute a large material group with a global annual production that has doubled in 15 years (245 million tonnes in 2008). Plastics are present everywhere in society and the environment, especially the marine environment, where large amounts of plastic waste accumulate. The knowledge of human and environmental hazards and risks from chemicals associated with the diversity of plastic products is very limited. Most chemicals used for producing plastic polymers are derived from non-renewable crude oil, and several are hazardous. These may be released during the production, use and disposal of the plastic product. In this study the environmental and health hazards of chemicals used in 55 thermoplastic and thermosetting polymers were identified and compiled. A hazard ranking model was developed for the hazard classes and categories in the EU classification and labelling (CLP) regulation, which is based on the UN Globally Harmonized System. The polymers were ranked based on monomer hazard classifications, and initial assessments were made. The polymers that ranked as most hazardous are made of monomers classified as mutagenic and/or carcinogenic (category 1A or 1B). These belong to the polymer families of polyurethanes, polyacrylonitriles, polyvinyl chloride, epoxy resins, and styrenic copolymers. All have a large global annual production (1-37 million tonnes). A considerable number of polymers (31 out of 55) are made of monomers that belong to the two worst of the ranking model's five hazard levels, i.e. levels IV-V. The polymers that are made of level IV monomers and have a large global annual production (1-5 million tonnes) are phenol formaldehyde resins, unsaturated polyesters, polycarbonate, polymethyl methacrylate, and urea-formaldehyde resins. This study has identified hazardous substances used in polymer production for which the risks should be evaluated for decisions on the need for risk reduction measures, substitution, or even phase-out.

  6. A passage retrieval method based on probabilistic information retrieval model and UMLS concepts in biomedical question answering.

    Science.gov (United States)

    Sarrouti, Mourad; Ouatik El Alaoui, Said

    2017-04-01

    Passage retrieval, the identification of top-ranked passages that may contain the answer for a given biomedical question, is a crucial component of any biomedical question answering (QA) system. Passage retrieval in open-domain QA is a longstanding challenge widely studied over the last decades. However, it still requires further efforts in biomedical QA. In this paper, we present a new biomedical passage retrieval method based on Stanford CoreNLP sentence/passage length, a probabilistic information retrieval (IR) model and UMLS concepts. In the proposed method, we first use our document retrieval system, based on the PubMed search engine and UMLS similarity, to retrieve documents relevant to a given biomedical question. We then take the abstracts from the retrieved documents and use the Stanford CoreNLP sentence splitter to produce a set of sentences, i.e., candidate passages. Using stemmed words and UMLS concepts as features for the BM25 model, we finally compute the similarity scores between the biomedical question and each of the candidate passages and keep the N top-ranked ones. Experimental evaluations performed on large standard datasets, provided by the BioASQ challenge, show that the proposed method achieves good performance compared with the current state-of-the-art methods. The proposed method significantly outperforms the current state-of-the-art methods by an average of 6.84% in terms of mean average precision (MAP). We have proposed an efficient passage retrieval method which can be used to retrieve relevant passages in biomedical QA systems with high mean average precision. Copyright © 2017 Elsevier Inc. All rights reserved.
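The BM25 scoring step at the core of this method can be sketched in a few lines. This toy version scores plain token lists; the paper's actual features are stemmed words plus UMLS concepts, and the passages and query below are hypothetical:

```python
import math

def bm25_scores(query_terms, passages, k1=1.2, b=0.75):
    """Score candidate passages (token lists) against a question with BM25."""
    N = len(passages)
    avgdl = sum(len(p) for p in passages) / N
    # document frequency of each term
    df = {}
    for p in passages:
        for t in set(p):
            df[t] = df.get(t, 0) + 1
    scores = []
    for p in passages:
        s = 0.0
        for t in query_terms:
            f = p.count(t)                       # term frequency in passage
            if f == 0:
                continue
            idf = math.log(1 + (N - df[t] + 0.5) / (df[t] + 0.5))
            s += idf * f * (k1 + 1) / (f + k1 * (1 - b + b * len(p) / avgdl))
        scores.append(s)
    return scores

docs = [["protein", "binding", "site"], ["gene", "expression"], ["protein", "gene"]]
scores = bm25_scores(["protein"], docs)
```

The length normalisation term (controlled by b) is why the shorter matching passage outranks the longer one here, which is consistent with the method's use of sentence/passage length.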

  7. A Multiobjective Programming Method for Ranking All Units Based on Compensatory DEA Model

    Directory of Open Access Journals (Sweden)

    Haifang Cheng

    2014-01-01

    Full Text Available In order to rank all decision making units (DMUs) on the same basis, this paper proposes a multiobjective programming (MOP) model based on a compensatory data envelopment analysis (DEA) model to derive a common set of weights that can be used for the full ranking of all DMUs. We first revisit a compensatory DEA model for ranking all units, point out the existing problem in solving the model, and present an improved algorithm by which an approximate global optimal solution of the model can be obtained by solving a sequence of linear programs. Then we apply the key idea of the compensatory DEA model to develop the MOP model, in which the objectives are to simultaneously maximize all common weights under the constraints that the sum of efficiency values of all DMUs is equal to unity and the sum of all common weights is also equal to unity. In order to solve the MOP model, we transform it into a single objective programming (SOP) model using a fuzzy programming method and solve the SOP model using the proposed approximation algorithm. To illustrate the proposed ranking method, two numerical examples are solved.

  8. A multimedia retrieval framework based on semi-supervised ranking and relevance feedback.

    Science.gov (United States)

    Yang, Yi; Nie, Feiping; Xu, Dong; Luo, Jiebo; Zhuang, Yueting; Pan, Yunhe

    2012-04-01

    We present a new framework for multimedia content analysis and retrieval which consists of two independent algorithms. First, we propose a new semi-supervised algorithm called ranking with Local Regression and Global Alignment (LRGA) to learn a robust Laplacian matrix for data ranking. In LRGA, for each data point, a local linear regression model is used to predict the ranking scores of its neighboring points. A unified objective function is then proposed to globally align the local models from all the data points so that an optimal ranking score can be assigned to each data point. Second, we propose a semi-supervised long-term Relevance Feedback (RF) algorithm to refine the multimedia data representation. The proposed long-term RF algorithm utilizes both the multimedia data distribution in multimedia feature space and the history RF information provided by users. A trace ratio optimization problem is then formulated and solved by an efficient algorithm. The algorithms have been applied to several content-based multimedia retrieval applications, including cross-media retrieval, image retrieval, and 3D motion/pose data retrieval. Comprehensive experiments on four data sets have demonstrated its advantages in precision, robustness, scalability, and computational efficiency.

  9. Virtual drug screen schema based on multiview similarity integration and ranking aggregation.

    Science.gov (United States)

    Kang, Hong; Sheng, Zhen; Zhu, Ruixin; Huang, Qi; Liu, Qi; Cao, Zhiwei

    2012-03-26

    The current drug virtual screening (VS) methods mainly fall into two categories, i.e., ligand/target structure-based virtual screening, and methods utilizing protein-ligand interaction fingerprint information based on the large number of available complex structures. Since the former focuses on one-sided information while the latter focuses on the whole complex structure, they are complementary and can be boosted by each other. However, a common problem faced here is how to present a comprehensive understanding and evaluation of the various virtual screening results derived from the various VS methods. Furthermore, there is still an urgent need for an efficient approach to fully integrate various VS methods from a comprehensive multiview perspective. In this study, our virtual screening schema based on multiview similarity integration and ranking aggregation was tested comprehensively with statistical evaluations, providing several novel and useful clues on how to perform drug VS from multiple heterogeneous data sources. (1) 18 complex structures of HIV-1 protease with ligands from the PDB were curated as a test data set and the VS was performed with five different drug representations. Ritonavir (1HXW) was selected as the query in VS and the weighted ranks of the query results were aggregated from multiple views through four similarity integration approaches. (2) Further, one of the ranking aggregation methods was used to integrate the similarity ranks calculated by gene ontology (GO) fingerprint and structural fingerprint on the data set from the Connectivity Map, and two typical HDAC and HSP90 inhibitors were chosen as the queries. The results show that rank aggregation can enhance the result of similarity searching in VS when two or more descriptions are involved and provide a more reasonable similarity rank result. Our study shows that integrated VS based on multiple data fusion can achieve remarkably better performance compared to that from individual ones.
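The abstract does not specify which aggregation schemes were used, but the general idea of combining weighted ranks from multiple views can be illustrated with a simple weighted Borda count (compound ids, views and weights below are hypothetical):

```python
def aggregate_ranks(rank_lists, weights=None):
    """Weighted Borda-count aggregation of several rankings.

    rank_lists: each element is a list of compound ids, best first, as
    produced by one VS view (e.g. one fingerprint or representation).
    Returns a single consensus ranking.
    """
    weights = weights or [1.0] * len(rank_lists)
    score = {}
    for w, ranking in zip(weights, rank_lists):
        n = len(ranking)
        for pos, item in enumerate(ranking):
            # a compound in position pos earns n - pos Borda points
            score[item] = score.get(item, 0.0) + w * (n - pos)
    return sorted(score, key=score.get, reverse=True)

# Three hypothetical views ranking four ligands:
views = [["c1", "c2", "c3", "c4"],
         ["c2", "c1", "c3", "c4"],
         ["c1", "c3", "c2", "c4"]]
consensus = aggregate_ranks(views)
```

A compound that ranks well across several heterogeneous views rises in the consensus even if no single view puts it first, which is the behaviour the study reports as improving over individual methods.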

  10. A gender-based comparison of academic rank and scholarly productivity in academic neurological surgery.

    Science.gov (United States)

    Tomei, Krystal L; Nahass, Meghan M; Husain, Qasim; Agarwal, Nitin; Patel, Smruti K; Svider, Peter F; Eloy, Jean Anderson; Liu, James K

    2014-07-01

    The number of women pursuing training opportunities in neurological surgery has increased, although they are still underrepresented at senior positions relative to junior academic ranks. Research productivity is an important component of the academic advancement process. We sought to use the h-index, a bibliometric indicator previously analyzed among neurological surgeons, to evaluate whether there are gender differences in academic rank and research productivity among academic neurological surgeons. The h-index was calculated for 1052 academic neurological surgeons from 84 institutions, and organized by gender and academic rank. Overall, men had significantly higher research productivity (mean 13.3) than their female colleagues (mean 9.5), as measured by the h-index, in the overall sample (p < 0.05). However, there were no significant differences (p > 0.05) in h-index at the assistant professor (mean 7.2 male, 6.3 female), associate professor (11.2 male, 10.8 female), and professor (20.0 male, 18.0 female) levels based on gender. There was insufficient data to determine significance at the chairperson rank, as there was only one female chairperson. Although overall gender differences in scholarly productivity were detected, these differences did not reach statistical significance upon controlling for academic rank. Women were grossly underrepresented at the level of chairpersons in this sample of 1052 academic neurological surgeons, likely a result of the low proportion of females in this specialty. Future studies may be needed to investigate gender-specific research trends for neurosurgical residents, a cohort that in recent years has seen increased representation by women. Copyright © 2013 Elsevier Ltd. All rights reserved.
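The h-index used throughout this study has a simple definition: the largest h such that the author has h papers with at least h citations each. A minimal implementation (the citation list is hypothetical):

```python
def h_index(citations):
    """Largest h such that at least h papers have >= h citations each."""
    cites = sorted(citations, reverse=True)
    h = 0
    for i, c in enumerate(cites, start=1):
        if c >= i:
            h = i          # the i-th most-cited paper still has >= i citations
        else:
            break
    return h

# Hypothetical author with five papers:
h = h_index([10, 8, 5, 4, 3])
```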

  11. A Comparative Approach for Ranking Contaminated Sites Based on the Risk Assessment Paradigm Using Fuzzy PROMETHEE

    Science.gov (United States)

    Zhang, Kejiang; Kluck, Cheryl; Achari, Gopal

    2009-11-01

    A ranking system for contaminated sites based on comparative risk methodology using fuzzy Preference Ranking Organization METHod for Enrichment Evaluation (PROMETHEE) was developed in this article. It combines the concepts of fuzzy sets to represent uncertain site information with the PROMETHEE, a subgroup of Multi-Criteria Decision Making (MCDM) methods. Criteria are identified based on a combination of the attributes (toxicity, exposure, and receptors) associated with the potential human health and ecological risks posed by contaminated sites, chemical properties, site geology and hydrogeology and contaminant transport phenomena. Original site data are directly used avoiding the subjective assignment of scores to site attributes. When the input data are numeric and crisp the PROMETHEE method can be used. The Fuzzy PROMETHEE method is preferred when substantial uncertainties and subjectivities exist in site information. The PROMETHEE and fuzzy PROMETHEE methods are both used in this research to compare the sites. The case study shows that this methodology provides reasonable results.
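For the crisp (non-fuzzy) case mentioned at the end of the abstract, the PROMETHEE II ranking step can be sketched as computing net outranking flows. The sites, criterion values, weights and the "usual" preference function below are hypothetical illustrations, not data from the study:

```python
def promethee_net_flows(sites, criteria_weights, prefer):
    """Net outranking flows for PROMETHEE II with crisp inputs.

    sites: dict site -> list of criterion values.
    prefer(x, y): preference degree of value x over value y on one
    criterion, in [0, 1]. Higher net flow = ranked higher.
    """
    names = list(sites)
    n = len(names)
    flow = {s: 0.0 for s in names}
    for a in names:
        for b in names:
            if a == b:
                continue
            # weighted aggregated preference index pi(a, b)
            pi = sum(w * prefer(x, y)
                     for w, x, y in zip(criteria_weights, sites[a], sites[b]))
            flow[a] += pi / (n - 1)   # leaving-flow contribution of a
            flow[b] -= pi / (n - 1)   # entering-flow contribution against b
    return flow

# "Usual" criterion: full preference whenever strictly larger (worse risk).
usual = lambda x, y: 1.0 if x > y else 0.0
sites = {"S1": [0.9, 0.4], "S2": [0.2, 0.8], "S3": [0.1, 0.1]}
flows = promethee_net_flows(sites, [0.6, 0.4], usual)
```

Here larger criterion values are taken to mean higher risk, so the site with the largest net flow would be prioritised; the fuzzy PROMETHEE variant replaces the crisp values with fuzzy numbers.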

  12. Discrepancies between multicriteria decision analysis-based ranking and intuitive ranking for pharmaceutical benefit-risk profiles in a hypothetical setting.

    Science.gov (United States)

    Hoshikawa, K; Ono, S

    2017-02-01

    Multicriteria decision analysis (MCDA) has been generally considered a promising decision-making methodology for the assessment of drug benefit-risk profiles. There have been many discussions in both public and private sectors on its feasibility and applicability, but it has not been employed in official decision-making. For the purpose of examining to what extent MCDA would reflect the first-hand, intuitive preference of evaluators in practical pharmaceutical assessments, we conducted a questionnaire survey involving the participation of employees of pharmaceutical companies. Shown profiles of the efficacy and safety of four hypothetical drugs, each respondent was asked to rank them following the standard MCDA process and then to rank them intuitively (i.e. without applying any analytical framework). These two approaches resulted in substantially different ranking patterns from the same individuals, and the concordance rate was surprisingly low (17%). Although many respondents intuitively showed a preference for mild, balanced risk-benefit profiles over profiles with a conspicuous advantage in either risk or benefit, the ranking orders based on MCDA scores did not reflect the intuitive preference. Observed discrepancies between the rankings seemed to be primarily attributable to the structural characteristics of MCDA, which assumes that the evaluation of each benefit and risk component has a monotonic impact on the final scores. It would be difficult for MCDA to reflect commonly observed non-monotonic preferences for risk and benefit profiles. Possible drawbacks of MCDA should be further investigated prior to the real-world application of its benefit-risk assessment. © 2016 John Wiley & Sons Ltd.

  13. A rank based social norms model of how people judge their levels of drunkenness whilst intoxicated

    Directory of Open Access Journals (Sweden)

    Simon C. Moore

    2016-09-01

    Full Text Available Background: A rank based social norms model predicts that drinkers' judgements about their drinking will be based on the rank of their breath alcohol level amongst that of others in the immediate environment, rather than on their actual breath alcohol level, with lower relative rank associated with greater feelings of safety. This study tested this hypothesis and examined how people judge their levels of drunkenness and the health consequences of their drinking whilst they are intoxicated in social drinking environments. Methods: Breath alcohol testing of 1,862 people (mean age = 26.96 years; 61.86% male) in drinking environments. A subset (N = 400) also answered four questions asking about their perceptions of their drunkenness and the health consequences of their drinking (plus background measures). Results: Perceptions of drunkenness and the health consequences of drinking were regressed on: (a) breath alcohol level, (b) the rank of the breath alcohol level amongst that of others in the same environment, and (c) covariates. Only rank of breath alcohol level predicted perceptions: how drunk they felt (b = 3.78, 95% CI 1.69 to 5.87), how extreme they regarded their drinking that night (b = 3.7, 95% CI 1.3 to 6.20), how at risk their long-term health was due to their current level of drinking (b = 4.1, 95% CI 0.2 to 8.0) and how likely they felt they would experience liver cirrhosis (b = 4.8, 95% CI 0.7 to 8.8). People were more influenced by more sober others than by more drunk others. Conclusion: Whilst intoxicated and in drinking environments, people base judgements regarding their drinking on how their level of intoxication ranks relative to that of others of the same gender around them, not on their actual levels of intoxication. Thus, when in the company of others who are intoxicated, drinkers were found to be more likely to underestimate their own level of drinking, drunkenness and associated risks.
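The rank predictor in this model is just the normalised position of a drinker's breath alcohol level among the levels measured around them. A minimal sketch (the breath alcohol values are hypothetical):

```python
def relative_rank(own_brac, others_brac):
    """Normalised rank in [0, 1] of one breath alcohol level among others
    in the same venue; the model predicts judgements track this value,
    not the absolute level."""
    below = sum(1 for b in others_brac if b < own_brac)
    return below / len(others_brac)

# Same absolute level, different company, different perceived drunkenness:
among_sober = relative_rank(0.5, [0.1, 0.2, 0.3, 0.4])
among_drunk = relative_rank(0.5, [0.6, 0.7, 0.8, 0.9])
```

The contrast between the two calls illustrates the study's conclusion: surrounded by more intoxicated others, the same drinker occupies a low rank and so underestimates their drunkenness and risk.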

  14. Fabric defect detection based on visual saliency using deep feature and low-rank recovery

    Science.gov (United States)

    Liu, Zhoufeng; Wang, Baorui; Li, Chunlei; Li, Bicao; Dong, Yan

    2018-04-01

    Fabric defect detection plays an important role in improving the quality of fabric products. In this paper, a novel fabric defect detection method based on visual saliency using deep features and low-rank recovery is proposed. First, unsupervised training is carried out to obtain the initial network parameters based on the large MNIST dataset. Supervised fine-tuning on a fabric image library based on Convolutional Neural Networks (CNNs) is then implemented, generating a more accurate deep neural network model. Second, the fabric images are uniformly divided into image blocks of the same size, and their multi-layer deep features are extracted using the trained deep network. Thereafter, all the extracted features are concatenated into a feature matrix. Third, low-rank matrix recovery is adopted to decompose the feature matrix into a low-rank matrix, which indicates the background, and a sparse matrix, which indicates the salient defect. In the end, an iterative optimal threshold segmentation algorithm is utilized to segment the saliency maps generated from the sparse matrix to locate the fabric defect areas. Experimental results demonstrate that the features extracted by the CNN are more suitable for characterizing fabric texture than traditional LBP, HOG and other hand-crafted feature extraction methods, and that the proposed method can accurately detect the defect regions of various fabric defects, even for images with complex texture.

  15. Novel Opportunistic Network Routing Based on Social Rank for Device-to-Device Communication

    Directory of Open Access Journals (Sweden)

    Tong Wang

    2017-01-01

    Full Text Available In recent years, there has been a dramatic proliferation of research concerned with fifth-generation (5G) mobile communication networks, among which device-to-device (D2D) communication is one of the key technologies. Due to the intermittent connection of nodes, the D2D network topology may be disconnected frequently, which will lead to failures in the transmission of large data files. In opportunistic networks, nodes that encounter nodes they have never met before may flood messages blindly, causing tremendous network overhead. To address this, a novel opportunistic network routing protocol based on social rank and intermeeting time (SRIT) is proposed in this paper. An improved utility approach, applied in utility replication and based on encounter durations and intermeeting time, is put forward to enhance the routing efficiency. Meanwhile, in order to select better candidate nodes in the network, a social graph among people is established from their social relations to each other in social rank replication. The simulation results show the advantage of the proposed opportunistic network routing based on social rank and intermeeting time (SRIT) over the compared algorithms in terms of delivery ratio, average delivery latency, and overhead ratio.

  16. A ligand predication tool based on modeling and reasoning with imprecise probabilistic knowledge.

    Science.gov (United States)

    Liu, Weiru; Yue, Anbu; Timson, David J

    2010-04-01

    Ligand prediction has been driven by a fundamental desire to understand more about how biomolecules recognize their ligands and by the commercial imperative to develop new drugs. Most of the current available software systems are very complex and time-consuming to use. Therefore, developing simple and efficient tools to perform initial screening of interesting compounds is an appealing idea. In this paper, we introduce our tool for very rapid screening for likely ligands (either substrates or inhibitors) based on reasoning with imprecise probabilistic knowledge elicited from past experiments. Probabilistic knowledge is input to the system via a user-friendly interface showing a base compound structure. A prediction of whether a particular compound is a substrate is queried against the acquired probabilistic knowledge base and a probability is returned as an indication of the prediction. This tool will be particularly useful in situations where a number of similar compounds have been screened experimentally, but information is not available for all possible members of that group of compounds. We use two case studies to demonstrate how to use the tool. 2009 Elsevier Ireland Ltd. All rights reserved.

  17. Probabilistic Wind Power Ramp Forecasting Based on a Scenario Generation Method

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Qin [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Florita, Anthony R [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Krishnan, Venkat K [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Hodge, Brian S [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Cui, Mingjian [University of Texas at Dallas; Feng, Cong [University of Texas at Dallas; Wang, Zhenke [University of Texas at Dallas; Zhang, Jie [University of Texas at Dallas

    2018-02-01

    Wind power ramps (WPRs) are particularly important in the management and dispatch of wind power and currently drawing the attention of balancing authorities. With the aim to reduce the impact of WPRs for power system operations, this paper develops a probabilistic ramp forecasting method based on a large number of simulated scenarios. An ensemble machine learning technique is first adopted to forecast the basic wind power forecasting scenario and calculate the historical forecasting errors. A continuous Gaussian mixture model (GMM) is used to fit the probability distribution function (PDF) of forecasting errors. The cumulative distribution function (CDF) is analytically deduced. The inverse transform method based on Monte Carlo sampling and the CDF is used to generate a massive number of forecasting error scenarios. An optimized swinging door algorithm is adopted to extract all the WPRs from the complete set of wind power forecasting scenarios. The probabilistic forecasting results of ramp duration and start-time are generated based on all scenarios. Numerical simulations on publicly available wind power data show that within a predefined tolerance level, the developed probabilistic wind power ramp forecasting method is able to predict WPRs with a high level of sharpness and accuracy.
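The scenario generation step described above draws many forecasting-error trajectories from the fitted Gaussian mixture. The sketch below samples a mixture component and then a Gaussian draw per time step, which is equivalent to the inverse-transform sampling of the GMM CDF described in the abstract; the mixture parameters, horizon and scenario count are hypothetical, and the GMM fitting and swinging door ramp extraction are omitted:

```python
import random

def sample_error_scenarios(weights, means, stds, n_scenarios, horizon, seed=0):
    """Draw forecasting-error scenarios from a Gaussian mixture model.

    weights/means/stds: per-component GMM parameters (assumed fitted to
    historical forecasting errors). Returns n_scenarios trajectories of
    `horizon` error values each.
    """
    rng = random.Random(seed)
    scenarios = []
    for _ in range(n_scenarios):
        traj = []
        for _ in range(horizon):
            # pick a mixture component, then sample from its Gaussian
            k = rng.choices(range(len(weights)), weights=weights)[0]
            traj.append(rng.gauss(means[k], stds[k]))
        scenarios.append(traj)
    return scenarios

# Hypothetical 2-component GMM of wind power forecast errors (MW):
scen = sample_error_scenarios([0.7, 0.3], [0.0, 5.0], [1.0, 3.0], 1000, 4)
```

Adding each error trajectory to the base forecast yields one wind power scenario; running the ramp-extraction algorithm over all scenarios then gives the empirical distributions of ramp duration and start time.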

  18. Probabilistic Wind Power Ramp Forecasting Based on a Scenario Generation Method: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Qin [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Florita, Anthony R [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Krishnan, Venkat K [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Hodge, Brian S [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Cui, Mingjian [Univ. of Texas-Dallas, Richardson, TX (United States); Feng, Cong [Univ. of Texas-Dallas, Richardson, TX (United States); Wang, Zhenke [Univ. of Texas-Dallas, Richardson, TX (United States); Zhang, Jie [Univ. of Texas-Dallas, Richardson, TX (United States)

    2017-08-31

    Wind power ramps (WPRs) are particularly important in the management and dispatch of wind power, and they are currently drawing the attention of balancing authorities. With the aim to reduce the impact of WPRs for power system operations, this paper develops a probabilistic ramp forecasting method based on a large number of simulated scenarios. An ensemble machine learning technique is first adopted to forecast the basic wind power forecasting scenario and calculate the historical forecasting errors. A continuous Gaussian mixture model (GMM) is used to fit the probability distribution function (PDF) of forecasting errors. The cumulative distribution function (CDF) is analytically deduced. The inverse transform method based on Monte Carlo sampling and the CDF is used to generate a massive number of forecasting error scenarios. An optimized swinging door algorithm is adopted to extract all the WPRs from the complete set of wind power forecasting scenarios. The probabilistic forecasting results of ramp duration and start time are generated based on all scenarios. Numerical simulations on publicly available wind power data show that within a predefined tolerance level, the developed probabilistic wind power ramp forecasting method is able to predict WPRs with a high level of sharpness and accuracy.

  19. A probabilistic fragment-based protein structure prediction algorithm.

    Directory of Open Access Journals (Sweden)

    David Simoncini

    Full Text Available Conformational sampling is one of the bottlenecks in fragment-based protein structure prediction approaches. They generally start with a coarse-grained optimization where mainchain atoms and centroids of side chains are considered, followed by a fine-grained optimization with an all-atom representation of proteins. It is during this coarse-grained phase that fragment-based methods sample intensely the conformational space. If the native-like region is sampled more, the accuracy of the final all-atom predictions may be improved accordingly. In this work we present EdaFold, a new method for fragment-based protein structure prediction based on an Estimation of Distribution Algorithm. Fragment-based approaches build protein models by assembling short fragments from known protein structures. Whereas the probability mass functions over the fragment libraries are uniform in the usual case, we propose an algorithm that learns from previously generated decoys and steers the search toward native-like regions. A comparison with Rosetta AbInitio protocol shows that EdaFold is able to generate models with lower energies and to enhance the percentage of near-native coarse-grained decoys on a benchmark of [Formula: see text] proteins. The best coarse-grained models produced by both methods were refined into all-atom models and used in molecular replacement. All atom decoys produced out of EdaFold's decoy set reach high enough accuracy to solve the crystallographic phase problem by molecular replacement for some test proteins. EdaFold showed a higher success rate in molecular replacement when compared to Rosetta. Our study suggests that improving low resolution coarse-grained decoys allows computational methods to avoid subsequent sampling issues during all-atom refinement and to produce better all-atom models. EdaFold can be downloaded from http://www.riken.jp/zhangiru/software.html [corrected].

  20. Probabilistic based design rules for intersystem LOCAS in ABWR piping

    International Nuclear Information System (INIS)

    Ware, A.G.; Wesley, D.A.

    1993-01-01

    A methodology has been developed for probability-based standards for low-pressure piping systems that are attached to the reactor coolant loops of advanced light water reactors (ALWRs) which could experience reactor coolant loop temperatures and pressures because of multiple isolation valve failures. This accident condition is called an intersystem loss-of-coolant accident (ISLOCA). The methodology was applied to various sizes of carbon and stainless steel piping designed to advanced boiling water reactor (ABWR) temperatures and pressures

  1. Probabilistic logics and probabilistic networks

    CERN Document Server

    Haenni, Rolf; Wheeler, Gregory; Williamson, Jon; Andrews, Jill

    2014-01-01

    Probabilistic Logic and Probabilistic Networks presents a groundbreaking framework within which various approaches to probabilistic logic naturally fit. Additionally, the text shows how to develop computationally feasible methods to mesh with this framework.

  2. An Improved Fuzzy Based Missing Value Estimation in DNA Microarray Validated by Gene Ranking

    Directory of Open Access Journals (Sweden)

    Sujay Saha

    2016-01-01

    Full Text Available Most gene expression data analysis algorithms require the entire gene expression matrix without any missing values. Hence, it is necessary to devise methods that impute missing data values accurately. A number of imputation algorithms exist to estimate those missing values. This work starts with a microarray dataset containing multiple missing values. We first apply a modified version of the existing fuzzy-theory-based method LRFDVImpute to impute multiple missing values in time series gene expression data, and then validate the result of imputation by a genetic algorithm (GA) based gene ranking methodology, along with some regular statistical validation techniques, like the RMSE method. Gene ranking, to the best of our knowledge, has not yet been used to validate the result of missing value estimation. Firstly, the proposed method was tested on the very popular Spellman dataset, and the results show that error margins have been drastically reduced compared to some previous works, which indirectly validates the statistical significance of the proposed method. Then it was applied to four other 2-class benchmark datasets, namely the Colorectal Cancer tumours dataset (GDS4382), the Breast Cancer dataset (GSE349-350), the Prostate Cancer dataset, and DLBCL-FL (Leukaemia), for both missing value estimation and ranking the genes, and the results show that the proposed method can reach 100% classification accuracy with very few dominant genes, which indirectly validates the biological significance of the proposed method.

  3. Adaptive predictors based on probabilistic SVM for real time disruption mitigation on JET

    Science.gov (United States)

    Murari, A.; Lungaroni, M.; Peluso, E.; Gaudio, P.; Vega, J.; Dormido-Canto, S.; Baruzzo, M.; Gelfusa, M.; Contributors, JET

    2018-05-01

    Detecting disruptions with sufficient anticipation time is essential to undertake any form of remedial strategy, mitigation or avoidance. Traditional predictors based on machine learning techniques can perform very well if properly optimised, but they do not provide a natural estimate of the quality of their outputs and they typically age very quickly. In this paper a new set of tools, based on probabilistic extensions of support vector machines (SVM), are introduced and applied for the first time to JET data. The probabilistic output constitutes a natural qualification of the prediction quality and provides additional flexibility. An adaptive training strategy 'from scratch' has also been devised, which preserves performance even when the experimental conditions change significantly. Large JET databases of disruptions, covering entire campaigns and thousands of discharges, have been analysed, both for the graphite wall and for the ITER-Like Wall. Performance significantly better than that of any previous predictor using adaptive training has been achieved, satisfying even the requirements of the next generation of devices. The adaptive approach to training has also provided unique information about the evolution of the operational space. The fact that the developed tools give the probability of disruption improves the interpretability of the results, provides an estimate of the predictor quality and gives new insights into the physics. Moreover, the probabilistic treatment makes it easier to integrate these classifiers into general decision support and control systems.
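    The abstract does not give the exact probabilistic formulation, but a common way to turn a raw SVM decision value into a class probability is Platt scaling; a minimal sketch with illustrative (not fitted) sigmoid coefficients:

```python
import math

def platt_probability(decision_value, a=-1.0, b=0.0):
    """Map a raw SVM decision value f to a disruption probability via a
    fitted sigmoid P(y=1|f) = 1 / (1 + exp(a*f + b)). In practice the
    coefficients a, b are fitted on held-out data (Platt scaling); the
    defaults here are purely illustrative."""
    return 1.0 / (1.0 + math.exp(a * decision_value + b))

# A strongly positive margin maps to a probability near 1,
# a strongly negative margin to a probability near 0.
print(platt_probability(3.0), platt_probability(-3.0))
```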

  4. PROBABILISTIC SEISMIC ASSESSMENT OF BASE-ISOLATED NPPS SUBJECTED TO STRONG GROUND MOTIONS OF TOHOKU EARTHQUAKE

    Directory of Open Access Journals (Sweden)

    AHMER ALI

    2014-10-01

    The probabilistic seismic performance of a standard Korean nuclear power plant (NPP) with an idealized isolation is investigated in the present work. A probabilistic seismic hazard analysis (PSHA) of the Wolsong site on the Korean peninsula is performed by considering peak ground acceleration (PGA) as an earthquake intensity measure. A procedure is reported on the categorization and selection of two sets of ground motions of the Tohoku earthquake, i.e., long-period (Set A) and common (Set B), for the nonlinear time history response analysis of the base-isolated NPP. Limit state values as multiples of the displacement responses of the NPP base isolation are considered for the fragility estimation. The seismic risk of the NPP is further assessed by incorporation of the rate of frequency exceedance and conditional failure probability curves. Furthermore, this framework attempts to show the unacceptable performance of the isolated NPP in terms of the probabilistic distribution and annual probability of limit states. The comparative results for long-period and common ground motions are discussed to contribute to the future safety of nuclear facilities against drastic events like Tohoku.

  5. Probabilistic seismic assessment of base-isolated NPPs subjected to strong ground motions of Tohoku earthquake

    Energy Technology Data Exchange (ETDEWEB)

    Ali, Ahmer; Hayah, Nadin Abu; Kim, Doo Kie [Dept. of Civil and Environmental Engineering, Kunsan National University, Kunsan (Korea, Republic of); Cho, Sung Gook [R and D Center, JACE KOREA Company, Gyeonggido (Korea, Republic of)

    2014-10-15

    The probabilistic seismic performance of a standard Korean nuclear power plant (NPP) with an idealized isolation is investigated in the present work. A probabilistic seismic hazard analysis (PSHA) of the Wolsong site on the Korean peninsula is performed by considering peak ground acceleration (PGA) as an earthquake intensity measure. A procedure is reported on the categorization and selection of two sets of ground motions of the Tohoku earthquake, i.e. long-period and common as Set A and Set B respectively, for the nonlinear time history response analysis of the base-isolated NPP. Limit state values as multiples of the displacement responses of the NPP base isolation are considered for the fragility estimation. The seismic risk of the NPP is further assessed by incorporation of the rate of frequency exceedance and conditional failure probability curves. Furthermore, this framework attempts to show the unacceptable performance of the isolated NPP in terms of the probabilistic distribution and annual probability of limit states. The comparative results for long and common ground motions are discussed to contribute to the future safety of nuclear facilities against drastic events like Tohoku.

  6. Ranking Operations Management conferences

    NARCIS (Netherlands)

    Steenhuis, H.J.; de Bruijn, E.J.; Gupta, Sushil; Laptaned, U

    2007-01-01

    Several publications have appeared in the field of Operations Management which rank Operations Management related journals. Several ranking systems exist for journals based on, for example, perceived relevance and quality, citation, and author affiliation. Many academics also publish at conferences

  7. A probabilistic approach to the drag-based model

    Science.gov (United States)

    Napoletano, Gianluca; Forte, Roberta; Moro, Dario Del; Pietropaolo, Ermanno; Giovannelli, Luca; Berrilli, Francesco

    2018-02-01

    The forecast of the time of arrival (ToA) of a coronal mass ejection (CME) at Earth is of critical importance for our high-technology society and for any future manned exploration of the Solar System. As critical as the forecast accuracy is the knowledge of its precision, i.e. the error associated with the estimate. We propose a statistical approach for the computation of the ToA using the drag-based model, introducing probability distributions, rather than exact values, as input parameters, thus allowing the evaluation of the uncertainty on the forecast. We test this approach using a set of CMEs whose transit times are known, and obtain extremely promising results: the average value of the absolute differences between measurement and forecast is 9.1 h, and half of these residuals are within the estimated errors. These results suggest that this approach deserves further investigation. We are working on a real-time implementation which ingests the outputs of automated CME tracking algorithms as inputs to create a database of events useful for further validation of the approach.
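    A minimal sketch of the idea, under the usual drag-based-model equation of motion dv/dt = -gamma*|v - w|*(v - w): instead of point values, the drag parameter gamma and the solar-wind speed w are sampled from distributions, yielding a ToA distribution. All numerical values below (initial speed, means, spreads) are illustrative, not the paper's.

```python
import math
import random

AU_KM = 1.496e8  # 1 AU in km

def dbm_toa_hours(v0_kms, gamma, w_kms, dt_s=600.0):
    """Integrate the drag-based model dv/dt = -gamma*|v-w|*(v-w) from the Sun
    to 1 AU and return the transit time in hours. Speeds in km/s, gamma in
    1/km; a crude explicit-Euler integration for illustration only."""
    r, v, t = 0.0, v0_kms, 0.0
    while r < AU_KM:
        v += -gamma * abs(v - w_kms) * (v - w_kms) * dt_s
        r += v * dt_s
        t += dt_s
    return t / 3600.0

# Probabilistic ToA: sample the uncertain inputs instead of fixing them.
random.seed(0)
samples = [dbm_toa_hours(v0_kms=1000.0,
                         gamma=random.gauss(2e-7, 5e-8),
                         w_kms=random.gauss(400.0, 50.0))
           for _ in range(200)]
mean_toa = sum(samples) / len(samples)
spread = math.sqrt(sum((s - mean_toa) ** 2 for s in samples) / len(samples))
print(f"ToA = {mean_toa:.1f} +/- {spread:.1f} h")
```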

  8. Feasibility study of component risk ranking for plant maintenance

    International Nuclear Information System (INIS)

    Ushijima, Koji; Yonebayashi, Kenji; Narumiya, Yoshiyuki; Sakata, Kaoru; Kumano, Tetsuji

    1999-01-01

    Nuclear power is the base load electricity source in Japan, and reducing operation and maintenance costs while maintaining or improving plant safety is one of the major issues. Recently, Risk Informed Management (RIM) has attracted attention as a solution. In this paper, the outline of a feasibility study of component risk ranking for plant maintenance at a typical Japanese PWR plant is described. A feasibility study of component risk ranking for plant maintenance optimization is performed on check valves and motor-operated valves. Risk ranking is performed in two steps, using probabilistic analysis (a quantitative method) for risk ranking of components and deterministic examination (a qualitative method) for component review. In this study, plant components are ranked from the viewpoint of plant safety/reliability, and the applicability for maintenance is assessed. As a result, distribution of maintenance resources using risk ranking is considered effective. (author)

  9. Reference Information Based Remote Sensing Image Reconstruction with Generalized Nonconvex Low-Rank Approximation

    Directory of Open Access Journals (Sweden)

    Hongyang Lu

    2016-06-01

    Because of the contradiction between the spatial and temporal resolution of remote sensing images (RSI) and the quality loss in the process of acquisition, it is of great significance to reconstruct RSI in remote sensing applications. Recent studies have demonstrated that reference image-based reconstruction methods have great potential for higher reconstruction performance, while lacking accuracy and quality of reconstruction. For this application, a new compressed sensing objective function incorporating a reference image as prior information is developed. We resort to the reference prior information inherent in interior and exterior data simultaneously to build a new generalized nonconvex low-rank approximation framework for RSI reconstruction. Specifically, the innovation of this paper consists of the following three respects: (1) we propose a nonconvex low-rank approximation for reconstructing RSI; (2) we inject reference prior information to overcome over-smoothed edges and texture detail losses; (3) on this basis, we combine conjugate gradient algorithms and singular value thresholding (SVT) to solve the proposed algorithm. The performance of the algorithm is evaluated both qualitatively and quantitatively. Experimental results demonstrate that the proposed algorithm improves peak signal to noise ratio (PSNR) by several dB and preserves image details significantly compared to most current approaches without reference images as priors. In addition, the generalized nonconvex low-rank approximation of our approach is naturally robust to noise, and therefore the proposed algorithm can handle low resolution with noisy inputs in a more unified framework.
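    The SVT step that the abstract mentions is, in its standard form, a soft-thresholding of the singular values (the proximal operator of the nuclear norm). A minimal numpy sketch of that building block, not the paper's full reconstruction pipeline:

```python
import numpy as np

def svt(matrix, tau):
    """Singular value thresholding: shrink every singular value by tau
    (soft-thresholding), which keeps only the dominant low-rank structure."""
    u, s, vt = np.linalg.svd(matrix, full_matrices=False)
    return u @ np.diag(np.maximum(s - tau, 0.0)) @ vt

# A rank-1 matrix plus small noise: the noise modes have tiny singular
# values, so thresholding removes them and recovers a low-rank estimate.
rng = np.random.default_rng(0)
clean = np.outer(np.ones(5), np.arange(5.0))        # rank 1
noisy = clean + 0.01 * rng.standard_normal((5, 5))
print(np.linalg.matrix_rank(svt(noisy, tau=0.5)))
```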

  10. Fuzzy ranking based non-dominated sorting genetic algorithm-II for network overload alleviation

    Directory of Open Access Journals (Sweden)

    Pandiarajan K.

    2014-09-01

    This paper presents an effective method of network overload management in power systems. The three competing objectives, (1) generation cost, (2) transmission line overload and (3) real power loss, are optimized to provide Pareto-optimal solutions. A fuzzy ranking based non-dominated sorting genetic algorithm-II (NSGA-II) is used to solve this complex nonlinear optimization problem. The minimization of the competing objectives is done by generation rescheduling. The fuzzy ranking method is employed to extract the best compromise solution out of the available non-dominated solutions, depending upon its highest rank. N-1 contingency analysis is carried out to identify the most severe lines, and those lines are selected for outage. The effectiveness of the proposed approach is demonstrated for different contingency cases in the IEEE 30 and IEEE 118 bus systems with smooth cost functions, and the results are compared with those of single objective evolutionary algorithms like particle swarm optimization (PSO) and differential evolution (DE). Simulation results show the effectiveness of the proposed approach in generating well distributed Pareto-optimal non-dominated solutions of the multi-objective problem.
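    Extracting the best compromise from a non-dominated set is commonly done with linear fuzzy membership functions; a sketch under that assumption (the three-objective values below are made up, not from the IEEE test systems):

```python
def best_compromise(front):
    """Pick the best compromise solution from a non-dominated front of
    minimized objective vectors. For each objective k, the membership is
    mu = (F_max - F) / (F_max - F_min); each solution's rank is its
    normalized membership sum, and the highest-ranked index is returned."""
    n_obj = len(front[0])
    f_min = [min(sol[k] for sol in front) for k in range(n_obj)]
    f_max = [max(sol[k] for sol in front) for k in range(n_obj)]
    mus = [sum((f_max[k] - sol[k]) / (f_max[k] - f_min[k])
               if f_max[k] > f_min[k] else 1.0
               for k in range(n_obj))
           for sol in front]
    total = sum(mus)
    return max(range(len(front)), key=lambda i: mus[i] / total)

# Three trade-off solutions of (cost, overload, loss): the extremes win one
# objective each, while the middle solution balances all three.
front = [(10.0, 5.0, 2.0), (6.0, 6.0, 2.5), (4.0, 9.0, 4.0)]
print(best_compromise(front))
```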

  11. Aging and a genetic KIBRA polymorphism interactively affect feedback- and observation-based probabilistic classification learning.

    Science.gov (United States)

    Schuck, Nicolas W; Petok, Jessica R; Meeter, Martijn; Schjeide, Brit-Maren M; Schröder, Julia; Bertram, Lars; Gluck, Mark A; Li, Shu-Chen

    2018-01-01

    Probabilistic category learning involves complex interactions between the hippocampus and striatum that may depend on whether acquisition occurs via feedback or observation. Little is known about how healthy aging affects these processes. We tested whether age-related behavioral differences in probabilistic category learning from feedback or observation depend on a genetic factor known to influence individual differences in hippocampal function, the KIBRA gene (single nucleotide polymorphism rs17070145). Results showed comparable age-related performance impairments in observational as well as feedback-based learning. Moreover, genetic analyses indicated an age-related interactive effect of KIBRA on learning: among older adults, the beneficial T-allele was positively associated with learning from feedback, but negatively with learning from observation. In younger adults, no effects of KIBRA were found. Our results add behavioral genetic evidence to emerging data showing age-related differences in how neural resources relate to memory functions, namely that hippocampal and striatal contributions to probabilistic category learning may vary with age. Our findings highlight the effects genetic factors can have on differential age-related decline of different memory functions. Copyright © 2017 Elsevier Inc. All rights reserved.

  12. Shear-wave velocity-based probabilistic and deterministic assessment of seismic soil liquefaction potential

    Science.gov (United States)

    Kayen, R.; Moss, R.E.S.; Thompson, E.M.; Seed, R.B.; Cetin, K.O.; Der Kiureghian, A.; Tanaka, Y.; Tokimatsu, K.

    2013-01-01

    Shear-wave velocity (Vs) offers a means to determine the seismic resistance of soil to liquefaction by a fundamental soil property. This paper presents the results of an 11-year international project to gather new Vs site data and develop probabilistic correlations for seismic soil liquefaction occurrence. Toward that objective, shear-wave velocity test sites were identified, and measurements made for 301 new liquefaction field case histories in China, Japan, Taiwan, Greece, and the United States over a decade. The majority of these new case histories reoccupy those previously investigated by penetration testing. These new data are combined with previously published case histories to build a global catalog of 422 case histories of Vs liquefaction performance. Bayesian regression and structural reliability methods facilitate a probabilistic treatment of the Vs catalog for performance-based engineering applications. Where possible, uncertainties of the variables comprising both the seismic demand and the soil capacity were estimated and included in the analysis, resulting in greatly reduced overall model uncertainty relative to previous studies. The presented data set and probabilistic analysis also help resolve the ancillary issues of adjustment for soil fines content and magnitude scaling factors.

  13. Using ELM-based weighted probabilistic model in the classification of synchronous EEG BCI.

    Science.gov (United States)

    Tan, Ping; Tan, Guan-Zheng; Cai, Zi-Xing; Sa, Wei-Ping; Zou, Yi-Qun

    2017-01-01

    Extreme learning machine (ELM) is an effective machine learning technique with simple theory and fast implementation, which has gained increasing interest from various research fields recently. A new method that combines ELM with a probabilistic model method is proposed in this paper to classify the electroencephalography (EEG) signals in a synchronous brain-computer interface (BCI) system. In the proposed method, the softmax function is used to convert the ELM output to a classification probability. The Chernoff error bound, deduced from the Bayesian probabilistic model in the training process, is adopted as the weight in the discriminant process. Since the proposed method makes use of the knowledge from all preceding training datasets, its discriminating performance improves accumulatively. In the test experiments based on the datasets from BCI competitions, the proposed method is compared with other classification methods, including the linear discriminant analysis, support vector machine, ELM and weighted probabilistic model methods. For comparison, the mutual information, classification accuracy and information transfer rate are considered as the evaluation indicators for these classifiers. The results demonstrate that our method shows competitive performance against other methods.
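    The softmax conversion of raw ELM output-node activations into class probabilities, as described above, can be sketched as follows (the activation values are hypothetical):

```python
import math

def softmax(outputs):
    """Convert raw output-node activations into class probabilities.
    Subtracting the maximum first keeps exp() numerically stable."""
    m = max(outputs)
    exps = [math.exp(o - m) for o in outputs]
    z = sum(exps)
    return [e / z for e in exps]

# Two-class EEG example (hypothetical activations): the larger activation
# gets the larger probability, and the probabilities sum to 1.
print(softmax([2.0, 0.5]))
```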

  14. Generalized outcome-based strategy classification: comparing deterministic and probabilistic choice models.

    Science.gov (United States)

    Hilbig, Benjamin E; Moshagen, Morten

    2014-12-01

    Model comparisons are a vital tool for disentangling which of several strategies a decision maker may have used--that is, which cognitive processes may have governed observable choice behavior. However, previous methodological approaches have been limited to models (i.e., decision strategies) with deterministic choice rules. As such, psychologically plausible choice models--such as evidence-accumulation and connectionist models--that entail probabilistic choice predictions could not be considered appropriately. To overcome this limitation, we propose a generalization of Bröder and Schiffer's (Journal of Behavioral Decision Making, 19, 361-380, 2003) choice-based classification method, relying on (1) parametric order constraints in the multinomial processing tree framework to implement probabilistic models and (2) minimum description length for model comparison. The advantages of the generalized approach are demonstrated through recovery simulations and an experiment. In explaining previous methods and our generalization, we maintain a nontechnical focus--so as to provide a practical guide for comparing both deterministic and probabilistic choice models.

  15. A probabilistic EAC management of Ni-base Alloy in PWR

    International Nuclear Information System (INIS)

    Lee, Tae Hyun; Hwang, Il Soon

    2009-01-01

    Material aging is a principal cause of the aging of engineering systems, and can lead to a reduction in reliability and continued safety and an increase in the cost of operation and maintenance. As nuclear power plants get older, aging becomes an issue, because aging degradation can affect the structural integrity of systems and components. To ensure the safe operation of nuclear power plants, it is essential to assess the effects of age-related degradation of plant structures, systems, and components. In this study, we propose a framework for the probabilistic assessment of primary pressure-boundary components, with particular attention to Environmentally Assisted Cracking (EAC) of piping and nozzles in Nuclear Power Plants (NPPs). The framework for EAC management targets degradation prediction using mechanistic models with probabilistic treatment, together with probabilistic assessment of defect detection and sizing. The effect of uncertainties in key parameters of the models for EAC growth, final fracture and inspection is also examined, based on a sensitivity study and updating using a Bayesian inference approach. (author)

  16. Probabilistic wind power forecasting based on logarithmic transformation and boundary kernel

    International Nuclear Information System (INIS)

    Zhang, Yao; Wang, Jianxue; Luo, Xu

    2015-01-01

    Highlights: • Quantitative information on the uncertainty of wind power generation. • Kernel density estimator provides non-Gaussian predictive distributions. • Logarithmic transformation reduces the skewness of wind power density. • Boundary kernel method eliminates the density leakage near the boundary. - Abstract: Probabilistic wind power forecasting not only produces the expectation of wind power output, but also gives quantitative information on the associated uncertainty, which is essential for making better decisions about power system and market operations with the increasing penetration of wind power generation. This paper presents a novel kernel density estimator for probabilistic wind power forecasting, addressing two characteristics of wind power which have adverse impacts on forecast accuracy, namely, the heavily skewed and double-bounded nature of wind power density. A logarithmic transformation is used to reduce the skewness of the wind power density, which improves the effectiveness of the kernel density estimator in the transformed scale. The transformation partially relieves the boundary effect problem of the kernel density estimator caused by the double-bounded nature of wind power density. However, the case study shows that serious density leakage remains after the transformation. In order to solve this problem in the transformed scale, a boundary kernel method is employed to eliminate the density leakage at the bounds of the wind power distribution. The improvement of the proposed method over the standard kernel density estimator is demonstrated by short-term probabilistic forecasting results based on data from an actual wind farm. A detailed comparison is then carried out between the proposed method and some existing probabilistic forecasting methods.
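    A minimal sketch of the transformation idea: estimate a Gaussian kernel density in the log scale, then map it back to the original scale with the Jacobian 1/x. The boundary-kernel correction and the actual wind-farm data are beyond this illustration, and the sample values and bandwidth below are made up.

```python
import math

def kde_log_transformed(samples, x, bandwidth=0.3):
    """Gaussian KDE of a positive-valued variable, computed in the log scale
    and mapped back with the change-of-variables factor 1/x. Density at point
    x > 0 given positive samples; a sketch of transformation KDE only."""
    ys = [math.log(s) for s in samples]
    t = math.log(x)
    n = len(ys)
    dens_y = sum(math.exp(-0.5 * ((t - y) / bandwidth) ** 2)
                 for y in ys) / (n * bandwidth * math.sqrt(2 * math.pi))
    return dens_y / x  # Jacobian of y = log(x)

# Skewed, positive "wind power" samples in per-unit terms (hypothetical).
samples = [0.05, 0.08, 0.1, 0.15, 0.3, 0.6]
print(kde_log_transformed(samples, 0.1))
```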

  17. A novel soft tissue prediction methodology for orthognathic surgery based on probabilistic finite element modelling.

    Science.gov (United States)

    Knoops, Paul G M; Borghi, Alessandro; Ruggiero, Federica; Badiali, Giovanni; Bianchi, Alberto; Marchetti, Claudio; Rodriguez-Florez, Naiara; Breakey, Richard W F; Jeelani, Owase; Dunaway, David J; Schievano, Silvia

    2018-01-01

    Repositioning of the maxilla in orthognathic surgery is carried out for functional and aesthetic purposes. Pre-surgical planning tools can predict the 3D facial appearance by computing the response of the soft tissue to changes in the underlying skeleton. The clinical use of commercial prediction software remains controversial, likely due to the deterministic nature of these computational predictions. A novel probabilistic finite element model (FEM) for the prediction of postoperative facial soft tissues is proposed in this paper. A probabilistic FEM was developed and validated on a cohort of eight patients who underwent maxillary repositioning and had pre- and postoperative cone beam computed tomography (CBCT) scans taken. Firstly, a correlation analysis assessed the various modelling parameters. Secondly, a design of experiments (DOE) provided a range of potential outcomes based on uniformly distributed input parameters, followed by an optimisation. Lastly, a second DOE iteration provided optimised predictions with a probability range. A range of 3D predictions was obtained using the probabilistic FEM and validated using reconstructed soft tissue surfaces from the postoperative CBCT data. The predictions in the nose and upper lip areas accurately include the true postoperative position, whereas the prediction underestimates the position of the cheeks and lower lip. A probabilistic FEM has been developed and validated for the prediction of facial appearance following orthognathic surgery. This method shows how inaccuracies in the modelling and uncertainties in executing the surgical plan influence the soft tissue prediction, and it provides a range of predictions including a minimum and maximum, which may be helpful for patients in understanding the impact of surgery on the face.

  18. Development of optimization-based probabilistic earthquake scenarios for the city of Tehran

    Science.gov (United States)

    Zolfaghari, M. R.; Peyghaleh, E.

    2016-01-01

    This paper presents the methodology and a practical example of applying an optimization process to select earthquake scenarios which best represent the probabilistic earthquake hazard in a given region. The method is based on the simulation of a large dataset of potential earthquakes representing the long-term seismotectonic characteristics of a given region. The simulation process uses Monte-Carlo simulation and regional seismogenic source parameters to generate a synthetic earthquake catalogue consisting of a large number of earthquakes, each characterized by magnitude, location, focal depth and fault characteristics. Such a catalogue provides full distributions of events in time, space and size; however, it demands large computational power when used for risk assessment, particularly when other sources of uncertainty are involved in the process. To reduce the number of selected earthquake scenarios, a mixed-integer linear program formulation is developed in this study. This approach yields a reduced set of optimization-based probabilistic earthquake scenarios, while maintaining the shape of the hazard curves and the full probabilistic picture, by minimizing the error between the hazard curves driven by the full and reduced sets of synthetic earthquake scenarios. To test the model, the regional seismotectonic and seismogenic characteristics of northern Iran are used to simulate a set of 10,000 years' worth of events consisting of some 84,000 earthquakes. The optimization model is then run multiple times with various input data, taking the probabilistic seismic hazard for the city of Tehran as the main constraint. The sensitivity of the selected scenarios to the user-specified site/return-period error weight is also assessed. The methodology could shorten the run time of full probabilistic earthquake studies such as seismic hazard and risk assessment. The reduced set is representative of the contributions of all possible earthquakes; however, it requires far less

  19. Unsupervised ensemble ranking of terms in electronic health record notes based on their importance to patients.

    Science.gov (United States)

    Chen, Jinying; Yu, Hong

    2017-04-01

    Allowing patients to access their own electronic health record (EHR) notes through online patient portals has the potential to improve patient-centered care. However, EHR notes contain abundant medical jargon that can be difficult for patients to comprehend. One way to help patients is to reduce information overload and help them focus on the medical terms that matter most to them. Targeted education can then be developed to improve patient EHR comprehension and the quality of care. The aim of this work was to develop FIT (Finding Important Terms for patients), an unsupervised natural language processing (NLP) system that ranks medical terms in EHR notes based on their importance to patients. We built FIT on a new unsupervised ensemble ranking model derived from the biased random walk algorithm to combine heterogeneous information resources for ranking candidate terms from each EHR note. Specifically, FIT integrates four single views (rankers) for term importance: patient use of medical concepts, document-level term salience, word co-occurrence based term relatedness, and topic coherence. It also incorporates partial information of term importance as conveyed by terms' unfamiliarity levels and semantic types. We evaluated FIT on 90 expert-annotated EHR notes and used the four single-view rankers as baselines. In addition, we implemented three benchmark unsupervised ensemble ranking methods as strong baselines. FIT achieved 0.885 AUC-ROC for ranking candidate terms from EHR notes to identify important terms. When including term identification, the performance of FIT for identifying important terms from EHR notes was 0.813 AUC-ROC. Both performance scores significantly exceeded the corresponding scores from the four single rankers (P<0.001). FIT also outperformed the three ensemble rankers for most metrics. Its performance is relatively insensitive to its parameter. FIT can automatically identify EHR terms important to patients. It may help develop future interventions
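    FIT's ensemble is built on a biased random walk; the general mechanism (follow a term-relatedness graph with probability d, otherwise restart according to one view's importance scores) can be sketched as follows, with a toy three-term graph rather than the paper's actual multi-view formulation:

```python
import numpy as np

def biased_random_walk(adjacency, bias, damping=0.85, iters=100):
    """Rank nodes by a biased random walk: with probability `damping` follow
    the row-normalized relatedness graph, otherwise jump according to the
    bias vector (e.g. one view's importance scores). The result is a
    stationary score distribution over the nodes (it sums to 1)."""
    p = adjacency / adjacency.sum(axis=1, keepdims=True)
    b = bias / bias.sum()
    r = np.full(len(b), 1.0 / len(b))
    for _ in range(iters):
        r = damping * (r @ p) + (1 - damping) * b
    return r

# Three terms connected uniformly; the bias view favors term 0,
# so term 0 ends up with the highest walk score.
adj = np.array([[0.0, 1.0, 1.0],
                [1.0, 0.0, 1.0],
                [1.0, 1.0, 0.0]])
bias = np.array([0.6, 0.2, 0.2])
scores = biased_random_walk(adj, bias)
print(scores.argmax())
```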

  20. Development of a health effects based priority ranking system for air emissions reductions from oil refineries in Canada

    International Nuclear Information System (INIS)

    McColl, S.; Gower, S.; Hicks, J.; Shortreed, J.; Craig, L.

    2004-01-01

    This paper presents the concept and methodologies behind the development of a health-effects-based priority ranking tool for the reduction of air emissions from oil refineries. The Health Effects Indicators Decision Index - Version 2 (Heidi II) was designed to assist policy makers in prioritizing air emissions reductions on the basis of estimated risk to human health. Inputs include facility-level rankings of potential health impacts associated with carcinogenic air toxics, non-carcinogenic air toxics and criteria air contaminants for each of the 20 refineries in Canada. Rankings of estimated health impacts are based on the predicted incidence of health effects. Heidi II considers site-specific annual pollutant emission data, ambient air concentrations associated with releases, and concentration-response functions for various types of health effects. Additional data include location-specific background air concentrations, site-specific population densities, and the baseline incidence of different health effect endpoints, such as cancer, non-cancer illnesses, and cardiorespiratory illnesses and death. Air pollutants include the 29 air toxics reported annually in Environment Canada's National Pollutant Release Inventory. Three health impact ranking outputs are provided for each facility: ranking of pollutants based on the predicted number of annual cases of health effects; ranking of pollutants based on simplified Disability Adjusted Life Years (DALYs); and ranking of pollutants based on more complex DALYs that consider types of cancer, systemic disease or types of cardiopulmonary health effects. Rankings rely on rough statistical estimates of predicted incidence rates for health endpoints. The models used to calculate rankings can provide useful guidance by comparing estimated health impacts. Heidi II has demonstrated that it is possible to develop a consistent and objective approach for ranking priority reductions of air emissions. Heidi II requires numerous types and

  1. A LDA-based approach to promoting ranking diversity for genomics information retrieval.

    Science.gov (United States)

    Chen, Yan; Yin, Xiaoshi; Li, Zhoujun; Hu, Xiaohua; Huang, Jimmy Xiangji

    2012-06-11

    In the biomedical domain, there are immense data and a tremendous increase in genomics and biomedical publications. This wealth of information has led to an increasing amount of interest in, and need for, applying information retrieval techniques to access the scientific literature in genomics and related biomedical disciplines. In many cases, the desired information for a query asked by biologists is a list of a certain type of entities covering different aspects related to the question, such as cells, genes, diseases, proteins, mutations, etc. Hence, it is important for a biomedical IR system to be able to provide relevant and diverse answers to fulfill biologists' information needs. However, traditional IR models are only concerned with the relevance between retrieved documents and the user query, and do not take redundancy between retrieved documents into account. This leads to high redundancy and low diversity in the retrieved ranked lists. In this paper, we propose an approach which employs a topic generative model called Latent Dirichlet Allocation (LDA) to promote ranking diversity for biomedical information retrieval. Different from other approaches or models which consider aspects at the word level, our approach assumes that aspects should be identified by the topics of retrieved documents. We use the LDA model to discover the topic distribution of retrieved passages and the word distribution of each topic dimension, and then re-rank the retrieval results using the topic distribution similarity between passages, based on an N-size sliding window. We evaluate our approach on the TREC 2007 Genomics collection and two distinctive IR baseline runs, achieving an 8% improvement over the highest Aspect MAP reported in the TREC 2007 Genomics track. The proposed method is the first study adopting a topic model for genomics information retrieval, and it demonstrates its effectiveness in promoting ranking diversity as well as in improving the relevance of ranked lists of genomics search

  2. Pathway Relevance Ranking for Tumor Samples through Network-Based Data Integration.

    Directory of Open Access Journals (Sweden)

    Lieven P C Verbeke

    Full Text Available The study of cancer, a highly heterogeneous disease with different causes and clinical outcomes, requires a multi-angle approach and the collection of large multi-omics datasets that, ideally, should be analyzed simultaneously. We present a new pathway relevance ranking method that is able to prioritize pathways according to the information contained in any combination of tumor-related omics datasets. Key to the method is the conversion of all available data into a single comprehensive network representation containing not only genes but also individual patient samples. Additionally, all data are linked through a network of previously identified molecular interactions. We demonstrate the performance of the new method by applying it to breast and ovarian cancer datasets from The Cancer Genome Atlas. By integrating gene expression, copy number, mutation and methylation data, the method's potential to identify key pathways involved in breast cancer development shared by different molecular subtypes is illustrated. Interestingly, certain pathways were ranked equally important for different subtypes, even when the underlying (epi)genetic disturbances were diverse. In addition to prioritizing universally high-scoring pathways, the pathway ranking method was able to identify subtype-specific pathways. Often the score of a pathway could not be motivated by a single mutation, copy number or methylation alteration, but rather by a combination of genetic and epigenetic disturbances, stressing the need for a network-based data integration approach. The analysis of ovarian tumors, as a function of survival-based subtypes, demonstrated the method's ability to correctly identify key pathways, irrespective of tumor subtype. A differential analysis of survival-based subtypes revealed several pathways with higher importance for the bad-outcome patient group than for the good-outcome patient group. Many of the pathways exhibiting higher importance for the bad

  3. Detecting determinism with improved sensitivity in time series: rank-based nonlinear predictability score.

    Science.gov (United States)

    Naro, Daniel; Rummel, Christian; Schindler, Kaspar; Andrzejak, Ralph G

    2014-09-01

    The rank-based nonlinear predictability score was recently introduced as a test for determinism in point processes. We here adapt this measure to time series sampled from time-continuous flows. We use noisy Lorenz signals to compare this approach against a classical amplitude-based nonlinear prediction error. Both measures show an almost identical robustness against Gaussian white noise. In contrast, when the amplitude distribution of the noise has a narrower central peak and heavier tails than the normal distribution, the rank-based nonlinear predictability score outperforms the amplitude-based nonlinear prediction error. For this type of noise, the nonlinear predictability score has a higher sensitivity for deterministic structure in noisy signals. It also yields a higher statistical power in a surrogate test of the null hypothesis of linear stochastic correlated signals. We show the high relevance of this improved performance in an application to electroencephalographic (EEG) recordings from epilepsy patients. Here the nonlinear predictability score again appears of higher sensitivity to nonrandomness. Importantly, it yields an improved contrast between signals recorded from brain areas where the first ictal EEG signal changes were detected (focal EEG signals) versus signals recorded from brain areas that were not involved at seizure onset (nonfocal EEG signals).
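
As a rough illustration of rank-based (rather than amplitude-based) prediction, the toy function below delay-embeds a time series, forecasts each point from its nearest neighbour in the embedding space, and scores agreement between predicted and true values on ranks only, via a Spearman-type score. This is a simplified sketch under those assumptions, not the authors' exact statistic:

```python
def ranks(xs):
    """Map each value to its rank (0 = smallest); ties broken by position."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0] * len(xs)
    for rank, i in enumerate(order):
        r[i] = rank
    return r

def rank_predictability(series, dim=3, lag=1):
    """Nearest-neighbour forecast in a delay embedding, scored on ranks."""
    n = len(series) - (dim - 1) * lag - 1
    vecs = [tuple(series[i + j * lag] for j in range(dim)) for i in range(n)]
    preds, truth = [], []
    for i, v in enumerate(vecs):
        # nearest neighbour in embedding space, excluding the point itself
        j = min((k for k in range(n) if k != i),
                key=lambda k: sum((a - b) ** 2 for a, b in zip(v, vecs[k])))
        preds.append(series[j + (dim - 1) * lag + 1])
        truth.append(series[i + (dim - 1) * lag + 1])
    rp, rt = ranks(preds), ranks(truth)
    m = len(rp)
    d2 = sum((a - b) ** 2 for a, b in zip(rp, rt))
    return 1 - 6 * d2 / (m * (m * m - 1))  # Spearman-type score in [-1, 1]
```

A deterministic signal (here a logistic-map orbit) should yield a score near 1, while uncorrelated noise should score near 0.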

  4. Ensemble Manifold Rank Preserving for Acceleration-Based Human Activity Recognition.

    Science.gov (United States)

    Tao, Dapeng; Jin, Lianwen; Yuan, Yuan; Xue, Yang

    2016-06-01

    With the rapid development of mobile devices and pervasive computing technologies, acceleration-based human activity recognition, a difficult yet essential problem in mobile apps, has received intensive attention recently. Different acceleration signals representing different activities, or even the same activity, have different attributes, which causes trouble in normalizing the signals. We thus cannot directly compare these signals with each other, because they are embedded in a nonmetric space. Therefore, we present a nonmetric scheme that retains discriminative and robust frequency-domain information by developing a novel ensemble manifold rank preserving (EMRP) algorithm. EMRP simultaneously considers three aspects: 1) it encodes the local geometry using the ranking order information of intraclass samples distributed on local patches; 2) it keeps the discriminative information by maximizing the margin between samples of different classes; and 3) it finds the optimal linear combination of the alignment matrices to approximate the intrinsic manifold embedded in the data. Experiments are conducted on the South China University of Technology naturalistic 3-D acceleration-based activity dataset and the naturalistic mobile-device-based human activity dataset to demonstrate the robustness and effectiveness of the new nonmetric scheme for acceleration-based human activity recognition.

  5. Ranking and selection of commercial off-the-shelf using fuzzy distance based approach

    Directory of Open Access Journals (Sweden)

    Rakesh Garg

    2015-06-01

    Full Text Available There has been tremendous growth in the use of the component-based software engineering (CBSE) approach for the development of software systems. The selection of the best-suited COTS components that fulfill the necessary requirements for software development has become a major challenge for software developers. The complexity of the optimal selection problem increases with the number of alternative potential COTS components and the corresponding selection criteria. In this research paper, the problem of ranking and selecting Data Base Management System (DBMS) components is modeled as a multi-criteria decision-making problem. A ‘Fuzzy Distance Based Approach (FDBA)’ method is proposed for the optimal ranking and selection of DBMS COTS components of an e-payment system based on 14 selection criteria grouped under three major categories, i.e., ‘Vendor Capabilities’, ‘Business Issues’ and ‘Cost’. The results of this method are compared with those of the Analytical Hierarchy Process (AHP), a typical multi-criteria decision-making approach. The proposed methodology is explained with an illustrative example.
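
A minimal sketch of a fuzzy distance based ranking in the spirit of FDBA is shown below: linguistic ratings are mapped to triangular fuzzy numbers and alternatives are ranked by their weighted vertex distance to a fuzzy ideal. The term-to-number mapping, the criteria weights, and the choice of ideal are illustrative assumptions, not the paper's calibration:

```python
import math

# Hypothetical mapping of linguistic ratings to triangular fuzzy numbers
TFN = {"VL": (0.0, 0.0, 0.25), "L": (0.0, 0.25, 0.5), "M": (0.25, 0.5, 0.75),
       "H": (0.5, 0.75, 1.0), "VH": (0.75, 1.0, 1.0)}

def fuzzy_distance(a, b):
    """Vertex distance between two triangular fuzzy numbers."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)) / 3)

def rank_alternatives(ratings, weights):
    """ratings: {alt: [linguistic term per criterion]}; rank by the
    weighted distance to the fuzzy ideal (VH on every criterion)."""
    ideal = TFN["VH"]
    scores = {alt: sum(w * fuzzy_distance(TFN[t], ideal)
                       for w, t in zip(weights, terms))
              for alt, terms in ratings.items()}
    return sorted(scores, key=scores.get)  # smaller distance = better rank
```

An alternative rated high on heavily weighted criteria ends up closest to the ideal and therefore first in the ranking.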

  6. Structure-Based Low-Rank Model With Graph Nuclear Norm Regularization for Noise Removal.

    Science.gov (United States)

    Ge, Qi; Jing, Xiao-Yuan; Wu, Fei; Wei, Zhi-Hui; Xiao, Liang; Shao, Wen-Ze; Yue, Dong; Li, Hai-Bo

    2017-07-01

    Nonlocal image representation methods, including group-based sparse coding and block-matching 3-D filtering, have shown great performance when applied to low-level tasks. The nonlocal prior is extracted from each group consisting of patches with similar intensities. Grouping patches based on intensity similarity, however, gives rise to disturbance and inaccuracy in the estimation of the true image. To address this problem, we propose a structure-based low-rank model with graph nuclear norm regularization. We exploit the local manifold structure inside a patch and group the patches by the distance metric of manifold structure. With the manifold structure information, a graph nuclear norm regularization is established and incorporated into a low-rank approximation model. We then prove that the graph-based regularization is equivalent to a weighted nuclear norm and that the proposed model can be solved by a weighted singular-value thresholding algorithm. Extensive experiments on additive white Gaussian noise removal and mixed noise removal demonstrate that the proposed method achieves better performance than several state-of-the-art algorithms.
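
The weighted singular-value thresholding step at the heart of such solvers can be sketched with NumPy. This is a generic proximal step for a weighted nuclear-norm penalty; the graph-derived weights themselves are not computed here and are treated as given:

```python
import numpy as np

def weighted_svt(M, weights):
    """Weighted singular-value thresholding: shrink each singular value
    by its own weight and reconstruct the low-rank estimate."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    s_shrunk = np.maximum(s - weights, 0.0)   # soft-threshold per singular value
    return (U * s_shrunk) @ Vt                # U diag(s_shrunk) V^T
```

With zero weights the input is reproduced exactly; with very large weights every singular value is annihilated and the zero matrix results, which brackets the behaviour of intermediate weight choices.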

  7. Pareto-Ranking Based Quantum-Behaved Particle Swarm Optimization for Multiobjective Optimization

    Directory of Open Access Journals (Sweden)

    Na Tian

    2015-01-01

    Full Text Available A study on Pareto-ranking based quantum-behaved particle swarm optimization (QPSO) for multiobjective optimization problems is presented in this paper. During the iteration, an external repository is maintained to remember the nondominated solutions, from which the global best position is chosen. The comparison between different elitist selection strategies (preference order, sigma value, and random selection) is performed on four benchmark functions and two metrics. The results demonstrate that QPSO with preference order performs comparably to QPSO with sigma value across different numbers of objectives. Finally, QPSO with sigma value is applied to solve multiobjective flexible job-shop scheduling problems.
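
The non-dominated (Pareto) filtering that underlies such an external repository can be sketched as follows for a minimization problem. This is a generic sketch of Pareto dominance, not the paper's QPSO implementation:

```python
def dominates(a, b):
    """a dominates b (minimization) if a is no worse everywhere
    and strictly better in at least one objective."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Keep only the nondominated objective vectors."""
    return [p for p in points if not any(dominates(q, p) for q in points if q != p)]
```

In a repository update, newly evaluated particles would be filtered this way before the global best is drawn from the surviving front.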

  8. A rapid place name locating algorithm based on ontology qualitative retrieval, ranking and recommendation

    Science.gov (United States)

    Fan, Hong; Zhu, Anfeng; Zhang, Weixia

    2015-12-01

    In order to support the rapid positioning of 12315 complaints, and aiming at the natural-language expression of telephone complaints, a semantic retrieval framework is proposed based on natural language parsing and geographical-name ontology reasoning. Within it, a search-result ranking and recommendation algorithm is proposed that considers both geo-name conceptual similarity and spatial geometric-relation similarity. The experiments show that this method can help the operator quickly locate 12315 complaints, increasing industry and commerce customer satisfaction.

  9. Environmental restoration risk-based prioritization work package planning and risk ranking methodology. Revision 2

    International Nuclear Information System (INIS)

    Dail, J.L.; Nanstad, L.D.; White, R.K.

    1995-06-01

    This document presents the risk-based prioritization methodology developed to evaluate and rank Environmental Restoration (ER) work packages at the five US Department of Energy, Oak Ridge Field Office (DOE-ORO) sites [i.e., Oak Ridge K-25 Site (K-25), Portsmouth Gaseous Diffusion Plant (PORTS), Paducah Gaseous Diffusion Plant (PGDP), Oak Ridge National Laboratory (ORNL), and the Oak Ridge Y-12 Plant (Y-12)], the ER Off-site Program, and Central ER. This prioritization methodology was developed to support the increased rigor and formality of work planning in the overall conduct of operations within the DOE-ORO ER Program. Prioritization is conducted as an integral component of the fiscal ER funding cycle to establish program budget priorities. The purpose of the ER risk-based prioritization methodology is to provide ER management with the tools and processes needed to evaluate, compare, prioritize, and justify fiscal budget decisions for a diverse set of remedial action, decontamination and decommissioning, and waste management activities. The methodology provides the ER Program with a framework for (1) organizing information about identified DOE-ORO environmental problems, (2) generating qualitative assessments of the long- and short-term risks posed by DOE-ORO environmental problems, and (3) evaluating the benefits associated with candidate work packages designed to reduce those risks. Prioritization is conducted to rank ER work packages on the basis of the overall value (e.g., risk reduction, stakeholder confidence) each package provides to the ER Program. Application of the methodology yields individual work package "scores" and rankings that are used to develop fiscal budget requests. This document presents the technical basis for the decision support tools and process

  10. SQERTSS: Dynamic rank based throttling of transition probabilities in kinetic Monte Carlo simulations

    International Nuclear Information System (INIS)

    Danielson, Thomas; Sutton, Jonathan E.; Hin, Céline; Virginia Polytechnic Institute and State University; Savara, Aditya

    2017-01-01

    Lattice based Kinetic Monte Carlo (KMC) simulations offer a powerful simulation technique for investigating large reaction networks while retaining spatial configuration information, unlike ordinary differential equations. However, large chemical reaction networks can contain reaction processes with rates spanning multiple orders of magnitude. This can lead to the problem of “KMC stiffness” (similar to stiffness in differential equations), where the computational expense has the potential to be overwhelmed by very short time-steps during KMC simulations, with the simulation spending an inordinate number of KMC steps (and CPU time) simulating fast frivolous processes (FFPs) without progressing the system (reaction network). In order to achieve simulation times that are experimentally relevant or desired for predictions, a dynamic throttling algorithm involving separation of the processes into speed ranks based on event frequencies has been designed and implemented, with the intent of decreasing the probability of FFP events and increasing the probability of slow-process events, allowing rate-limiting events to become more likely to be observed in KMC simulations. This Staggered Quasi-Equilibrium Rank-based Throttling for Steady-state (SQERTSS) algorithm is designed for achieving and simulating steady-state conditions in KMC simulations. Lastly, as shown in this work, the SQERTSS algorithm also works under transient conditions: the correct configuration space and final state will still be achieved if the required assumptions are not violated, with the caveat that the sizes of the time-steps may be distorted during the transient period.
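
The event-selection core of a KMC step, with a crude fixed down-weighting of fast processes standing in for the full staggered rank-based throttling scheme, might look like this (rates and throttle factors below are hypothetical):

```python
import math
import random

def kmc_step(rates, throttle, rng):
    """Select one event and a time increment, with optional down-weighting
    of fast frivolous processes (a simplified stand-in for SQERTSS)."""
    eff = {name: r * throttle.get(name, 1.0) for name, r in rates.items()}
    total = sum(eff.values())
    x = rng.random() * total
    acc = 0.0
    chosen = None
    for name, r in eff.items():
        acc += r
        if x <= acc:
            chosen = name
            break
    dt = -math.log(1.0 - rng.random()) / total  # advance the KMC clock
    return chosen, dt
```

Without throttling, a process six orders of magnitude faster is selected almost every step; scaling its rate down lets the rate-limiting slow event actually be observed, at the cost of distorted time-steps (as the abstract notes for the transient period).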

  11. An Automatic Web Service Composition Framework Using QoS-Based Web Service Ranking Algorithm.

    Science.gov (United States)

    Mallayya, Deivamani; Ramachandran, Baskaran; Viswanathan, Suganya

    2015-01-01

    Web service has become the technology of choice for service-oriented computing to meet the interoperability demands of web applications. In the Internet era, the exponential growth of web services makes "quality of service" an essential parameter for discriminating among web services. In this paper, a user preference based web service ranking (UPWSR) algorithm is proposed to rank web services based on user preferences and the QoS aspects of the web service. When the user's request cannot be fulfilled by a single atomic service, several existing services must be composed and delivered as a composition. The proposed framework allows the user to specify local and global constraints for composite web services, which improves flexibility. The UPWSR algorithm identifies the best-fit services for each task in the user request and, by limiting the number of candidate services for each task, reduces the time needed to generate composition plans. To tackle the problem of web service composition, a QoS-aware automatic web service composition (QAWSC) algorithm is proposed in this paper based on the QoS aspects of the web services and user preferences. The proposed framework allows the user to provide feedback about the composite service, which improves the reputation of the services.
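
A minimal QoS-based scoring of candidate services, far simpler than the UPWSR algorithm but in the same spirit, can normalize each attribute and rank by a weighted sum. The attribute names, weights, and the benefit/cost split below are illustrative assumptions:

```python
def rank_services(services, weights, benefit):
    """services: {name: {attr: value}}; weights: {attr: weight};
    benefit: attrs where larger is better (others: smaller is better)."""
    attrs = list(weights)
    lo = {a: min(s[a] for s in services.values()) for a in attrs}
    hi = {a: max(s[a] for s in services.values()) for a in attrs}
    def norm(a, v):
        if hi[a] == lo[a]:
            return 1.0
        z = (v - lo[a]) / (hi[a] - lo[a])      # min-max normalization
        return z if a in benefit else 1.0 - z  # invert cost attributes
    scores = {n: sum(weights[a] * norm(a, s[a]) for a in attrs)
              for n, s in services.items()}
    return sorted(scores, key=scores.get, reverse=True)
```

A composition planner would run such a ranking per task and keep only the top few candidates, which is how pruning candidate services shortens plan generation.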

  12. Probabilistic inversion in priority setting of emerging zoonoses.

    NARCIS (Netherlands)

    Kurowicka, D.; Bucura, C.; Cooke, R.; Havelaar, A.H.

    2010-01-01

    This article presents methodology of applying probabilistic inversion in combination with expert judgment in priority setting problem. Experts rank scenarios according to severity. A linear multi-criteria analysis model underlying the expert preferences is posited. Using probabilistic inversion, a

  13. Grid-based lattice summation of electrostatic potentials by assembled rank-structured tensor approximation

    Science.gov (United States)

    Khoromskaia, Venera; Khoromskij, Boris N.

    2014-12-01

    Our recent method for low-rank tensor representation of sums of arbitrarily positioned electrostatic potentials discretized on a 3D Cartesian grid reduces the 3D tensor summation to operations involving only 1D vectors, while retaining linear complexity scaling in the number of potentials. Here, we introduce and study a novel tensor approach for fast and accurate assembled summation of a large number of lattice-allocated potentials represented on a 3D N × N × N grid, with computational requirements only weakly dependent on the number of summed potentials. It is based on the assembled low-rank canonical tensor representation of the collected potentials using pointwise sums of shifted canonical vectors representing the single generating function, say the Newton kernel. For a sum of electrostatic potentials over an L × L × L lattice embedded in a box, the required storage scales linearly in the 1D grid size, O(N), while the numerical cost is estimated as O(NL). For periodic boundary conditions, the storage demand remains proportional to the 1D grid size of a unit cell, n = N/L, while the numerical cost reduces to O(N), which outperforms the FFT-based Ewald-type summation algorithms of complexity O(N^3 log N). The complexity in the grid parameter N can be reduced even to the logarithmic scale O(log N) by using a data-sparse representation of the canonical N-vectors via the quantics tensor approximation. For justification, we prove an upper bound on the quantics ranks of the canonical vectors in the overall lattice sum. The presented approach is beneficial in applications which require further functional calculus with the lattice potential, say, a scalar product with a function, integration or differentiation, which can be performed easily in tensor arithmetic on large 3D grids at 1D cost. Numerical tests illustrate the performance of the tensor summation method and confirm the estimated bounds on the tensor ranks.
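
The assembled-vector idea can be illustrated for the special case of a separable (rank-1) generating function, where the full lattice sum factorizes exactly into an outer product of per-dimension sums of shifted 1D vectors. The Gaussian below is a stand-in for the canonically decomposed Newton kernel, and `np.roll` imposes periodic shifts; the point is that the assembled representation needs O(N) storage per dimension instead of O(N^3):

```python
import numpy as np

N, L = 32, 3                      # 1D grid size and lattice edge (illustrative)
x = np.linspace(-6.0, 6.0, N)
g = np.exp(-x**2)                 # 1D factor of a separable rank-1 generator

# Assemble the shifted canonical vectors once per dimension: O(N) storage
shifts = [l * (N // (2 * L)) for l in range(L)]
assembled = sum(np.roll(g, s) for s in shifts)

# Rank-1 evaluation of the full L^3 lattice sum from the assembled vectors
lattice_sum = np.einsum('i,j,k->ijk', assembled, assembled, assembled)

# Direct summation over all L^3 shifted rank-1 potentials, for verification
direct = np.zeros((N, N, N))
for s1 in shifts:
    for s2 in shifts:
        for s3 in shifts:
            direct += np.einsum('i,j,k->ijk',
                                np.roll(g, s1), np.roll(g, s2), np.roll(g, s3))
```

For a genuinely low-rank (rather than rank-1) canonical decomposition, the same assembly is applied per canonical term, which is where the O(NL) cost estimate of the abstract comes from.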

  14. Research on the Fusion of Dependent Evidence Based on Rank Correlation Coefficient

    Directory of Open Access Journals (Sweden)

    Fengjian Shi

    2017-10-01

    Full Text Available In order to meet higher accuracy and system reliability requirements, information fusion for multi-sensor systems is of increasing concern. Dempster–Shafer evidence theory (D–S theory) has been investigated for many applications in multi-sensor information fusion due to its flexibility in uncertainty modeling. However, classical evidence theory assumes that the pieces of evidence are independent of each other, which is often unrealistic. Ignoring the relationship between pieces of evidence may lead to unreasonable fusion results, and even to wrong decisions. This assumption severely limits the practical application and further development of D–S evidence theory. In this paper, an innovative evidence fusion model that deals with dependent evidence based on the rank correlation coefficient is proposed. The model first uses the rank correlation coefficient to measure the degree of dependence between different pieces of evidence. Then, a total discount coefficient is obtained based on the dependence degree, which also considers the impact of the reliability of the evidence. Finally, the discounted evidence fusion model is presented. An example is given to show the use and effectiveness of the proposed method.
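
The ingredients of such a model (rank correlation, Shafer discounting, and Dempster's rule of combination) can be sketched as follows. The mapping from the measured correlation to the discount factor `alpha` is a hypothetical stand-in for the paper's total discount coefficient, and the frame and mass assignments are invented:

```python
from itertools import product

FRAME = frozenset({"A", "B", "C"})

def spearman_rho(x, y):
    """Spearman rank correlation for equal-length lists without ties."""
    n = len(x)
    rank = lambda v: {i: r for r, i in enumerate(sorted(range(n), key=lambda k: v[k]))}
    rx, ry = rank(x), rank(y)
    d2 = sum((rx[i] - ry[i]) ** 2 for i in range(n))
    return 1.0 - 6.0 * d2 / (n * (n * n - 1))

def discount(m, alpha):
    """Shafer discounting: scale masses by alpha, move the rest to the frame."""
    out = {A: alpha * v for A, v in m.items()}
    out[FRAME] = out.get(FRAME, 0.0) + (1.0 - alpha)
    return out

def dempster(m1, m2):
    """Dempster's rule of combination with conflict normalization."""
    joint, conflict = {}, 0.0
    for (A, v1), (B, v2) in product(m1.items(), m2.items()):
        C = A & B
        if C:
            joint[C] = joint.get(C, 0.0) + v1 * v2
        else:
            conflict += v1 * v2
    return {A: v / (1.0 - conflict) for A, v in joint.items()}

# Hypothetical dependent evidence over the singletons of FRAME
m1 = {frozenset({"A"}): 0.6, frozenset({"B"}): 0.3, frozenset({"C"}): 0.1}
m2 = {frozenset({"A"}): 0.5, frozenset({"B"}): 0.4, frozenset({"C"}): 0.1}
rho = spearman_rho([0.6, 0.3, 0.1], [0.5, 0.4, 0.1])
alpha = 1.0 - 0.5 * max(0.0, rho)   # illustrative discount, not the paper's formula
fused = dempster(discount(m1, alpha), discount(m2, alpha))
```

Strongly rank-correlated (i.e., dependent) evidence gets a smaller `alpha`, shifting mass to the frame before combination so the dependent information is not double-counted.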

  15. Fuzzy Group Decision Making Approach for Ranking Work Stations Based on Physical Pressure

    Directory of Open Access Journals (Sweden)

    Hamed Salmanzadeh

    2014-06-01

    Full Text Available This paper proposes a fuzzy group decision making approach for ranking work stations based on physical pressure. The fuzzy group decision making approach allows experts to evaluate different ergonomic factors using linguistic terms such as very high, high, medium, low, and very low, rather than precise numerical values. In this way, there is no need to measure parameters, and the evaluation can easily be made by a group. Since ergonomic work contents and situations are accompanied by multiple parameters and uncertainties, fuzzy group decision making is well suited to evaluating such elusive concepts. A case study was conducted to apply the approach and illustrate its use in ergonomic assessment and in ranking work stations based on work pressure; it found that the approach provides flexibility, practicality and efficiency in decision making in the ergonomics area. The normalized defuzzified numbers resulting from this method are compared with the results of a quantitative assessment using the Automotive Assembly Work Sheet; the proposed method's result is approximately 10% lower than that of the Automotive Assembly Work Sheet.

  16. Research on the Fusion of Dependent Evidence Based on Rank Correlation Coefficient.

    Science.gov (United States)

    Shi, Fengjian; Su, Xiaoyan; Qian, Hong; Yang, Ning; Han, Wenhua

    2017-10-16

    In order to meet higher accuracy and system reliability requirements, information fusion for multi-sensor systems is of increasing concern. Dempster-Shafer evidence theory (D-S theory) has been investigated for many applications in multi-sensor information fusion due to its flexibility in uncertainty modeling. However, classical evidence theory assumes that the pieces of evidence are independent of each other, which is often unrealistic. Ignoring the relationship between pieces of evidence may lead to unreasonable fusion results, and even to wrong decisions. This assumption severely limits the practical application and further development of D-S evidence theory. In this paper, an innovative evidence fusion model that deals with dependent evidence based on the rank correlation coefficient is proposed. The model first uses the rank correlation coefficient to measure the degree of dependence between different pieces of evidence. Then, a total discount coefficient is obtained based on the dependence degree, which also considers the impact of the reliability of the evidence. Finally, the discounted evidence fusion model is presented. An example is given to show the use and effectiveness of the proposed method.

  17. Quantified Risk Ranking Model for Condition-Based Risk and Reliability Centered Maintenance

    Science.gov (United States)

    Chattopadhyaya, Pradip Kumar; Basu, Sushil Kumar; Majumdar, Manik Chandra

    2017-06-01

    In the recent past, the risk and reliability centered maintenance (RRCM) framework was introduced with a shift in methodological focus from reliability and probabilities (expected values) to reliability, uncertainty and risk. In this paper the authors present a novel methodology for risk quantification and for ranking critical items to prioritize maintenance actions on the basis of condition-based risk and reliability centered maintenance (CBRRCM). The critical items are identified through criticality analysis of the RPN values of the items of a system, and the maintenance significant precipitating factors (MSPF) of the items are evaluated. The criticality of risk is assessed using three risk coefficients. The likelihood risk coefficient treats the probability as a fuzzy number. The abstract risk coefficient deduces the risk influenced by uncertainty and sensitivity, besides other factors. The third risk coefficient, called the hazardous risk coefficient, covers anticipated hazards which may occur in the future; here the risk is deduced from criteria of consequences on safety, environment, maintenance and economic risks, with corresponding costs for the consequences. The characteristic values of all three risk coefficients are obtained with a particular test. With a few more tests on the system, the values may change significantly within the controlling range of each coefficient; hence 'random number simulation' is used to obtain one distinctive value for each coefficient. The risk coefficients are statistically added to obtain the final risk coefficient of each critical item, and the final rankings of the critical items are then estimated. The prioritized ranking of critical items using the developed mathematical model for risk assessment should be useful in optimizing financial losses and the timing of maintenance actions.

  18. A probabilistic risk assessment for field radiography based on expert judgment and opinion

    International Nuclear Information System (INIS)

    Jang, Han-Ki; Ryu, Hyung-Joon; Kim, Ji-Young; Lee, Jai-Ki; Cho, Kun-Woo

    2011-01-01

    A probabilistic approach was applied to assess the radiation risk associated with field radiography using gamma sources. The Delphi method, based on expert judgments and opinions, was used in the process of characterizing parameters affecting risk, which are inevitably subject to large uncertainties. A mathematical approach applying Bayesian inference was employed for data processing to improve the Delphi results. This process consists of three phases: (1) setting prior distributions, (2) constructing the likelihood functions and (3) deriving the posterior distributions based on the likelihood functions. An approach for characterizing input parameters using Bayesian inference is provided to improve risk estimates without the intentional rejection of part of the data, demonstrating the utility of Bayesian updating of the distributions of uncertain input parameters in PRA (Probabilistic Risk Assessment). The data analysis portion for PRA in field radiography addresses estimates of the parameters used to determine the frequencies and consequences of the various events modeled. In this study, radiological risks for the worker and for members of the public in the vicinity of the workplace are estimated for the field radiography system in Korea based on two-dimensional Monte Carlo Analysis (2D MCA). (author)
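
The prior-likelihood-posterior update at the core of phase (1)-(3) can be sketched with a simple grid approximation. The prior shape, incident count, and exposure below are invented illustrative numbers, not values from the study:

```python
import math

def posterior_on_grid(prior, likelihood, grid):
    """Discrete Bayes update: posterior proportional to prior x likelihood."""
    un = [prior(t) * likelihood(t) for t in grid]
    z = sum(un)
    return [u / z for u in un]

# Hypothetical Delphi-elicited lognormal-shaped prior for an event rate
# (per source-year), updated with k = 2 incidents over T = 500 source-years
grid = [i / 1000.0 for i in range(1, 2001)]
prior = lambda t: math.exp(-0.5 * (math.log(t) + 3.0) ** 2)
k, T = 2, 500
likelihood = lambda t: (t * T) ** k * math.exp(-t * T)  # Poisson kernel
post = posterior_on_grid(prior, likelihood, grid)
post_mean = sum(t * p for t, p in zip(grid, post))
```

The posterior concentrates between the prior's mode and the observed rate k/T, which is exactly the "updating without rejecting data" behaviour the abstract describes.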

  19. Development of Nuclear Safety Culture evaluation method for an operation team based on the probabilistic approach

    International Nuclear Information System (INIS)

    Han, Sang Min; Lee, Seung Min; Yim, Ho Bin; Seong, Poong Hyun

    2018-01-01

    Highlights: •We proposed a Probabilistic Safety Culture Healthiness Evaluation Method. •A positive relationship between the ‘success’ states of NSC and performance was shown. •The state probability profile showed a unique ratio regardless of the scenarios. •Cutset analysis provided not only the root causes but also the latent causes of failures. •Pro-SCHEMe was found to be applicable to Korean NPPs. -- Abstract: The aim of this study is to propose a new quantitative evaluation method for Nuclear Safety Culture (NSC) in Nuclear Power Plant (NPP) operation teams based on the probabilistic approach. Various NSC evaluation methods have been developed, and the Korean NPP utility company has conducted NSC assessments according to international practice. However, most of these methods rely on interviews, observations, and self-assessment. Consequently, the results are often qualitative, subjective, and mainly dependent on the evaluator’s judgement, so the assessment results can be interpreted from different perspectives. To resolve the limitations of present evaluation methods, the concept of Safety Culture Healthiness was suggested to produce quantitative results and provide a faster evaluation process. This paper presents the Probabilistic Safety Culture Healthiness Evaluation Method (Pro-SCHEMe), which generates quantitative inputs for Human Reliability Assessment (HRA) in Probabilistic Safety Assessment (PSA). Evaluation items, each of which corresponds to a basic event in PSA, are derived in the first part of the paper through a literature survey, mostly from nuclear-related organizations such as the International Atomic Energy Agency (IAEA), the United States Nuclear Regulatory Commission (U.S.NRC), and the Institute of Nuclear Power Operations (INPO). Event trees (ETs) and fault trees (FTs) are devised to apply the evaluation items to PSA based on the relationships among the items. Modeling guidelines are also suggested to classify and calculate the NSC characteristics of
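
At its simplest, the ET/FT machinery mentioned above reduces to combining basic-event probabilities through AND/OR gates under an independence assumption. The event names and probabilities below are purely illustrative, not values from the Pro-SCHEMe model:

```python
def and_gate(ps):
    """Probability that all independent basic events occur."""
    out = 1.0
    for p in ps:
        out *= p
    return out

def or_gate(ps):
    """Probability that at least one independent basic event occurs."""
    out = 1.0
    for p in ps:
        out *= (1.0 - p)
    return 1.0 - out

# Hypothetical mini fault tree: top event requires a communication failure
# AND a failure of either procedure adherence or supervision
p_top = and_gate([0.1, or_gate([0.05, 0.02])])
```

Cutset analysis, as mentioned in the highlights, then asks which combinations of basic events are sufficient to raise `p_top`, pointing at both root and latent causes.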

  20. Rank Dynamics

    Science.gov (United States)

    Gershenson, Carlos

    Studies of rank distributions have been popular for decades, especially since the work of Zipf. For example, if we rank words of a given language by use frequency (most used word in English is 'the', rank 1; second most common word is 'of', rank 2), the distribution can be approximated roughly with a power law. The same applies for cities (most populated city in a country ranks first), earthquakes, metabolism, the Internet, and dozens of other phenomena. We recently proposed "rank diversity" to measure how ranks change in time, using the Google Books Ngram dataset. Studying six languages between 1800 and 2009, we found that the rank diversity curves of languages are universal, adjusted with a sigmoid on log-normal scale. We are studying several other datasets (sports, economies, social systems, urban systems, earthquakes, artificial life). Rank diversity seems to be universal, independently of the shape of the rank distribution. I will present our work in progress towards a general description of the features of rank change in time, along with simple models which reproduce it
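
The two basic operations involved, building a rank table from frequencies and measuring how many distinct elements occupy a given rank across time slices, can be sketched as follows (a simplified reading of rank diversity, not the authors' exact normalization):

```python
from collections import Counter

def rank_table(tokens):
    """Elements ordered by decreasing frequency (index 0 = rank 1)."""
    return [w for w, _ in Counter(tokens).most_common()]

def rank_diversity(tables, r):
    """Fraction of distinct elements occupying rank r across time slices."""
    holders = [t[r] for t in tables if r < len(t)]
    return len(set(holders)) / len(holders)
```

If 'the' holds rank 1 in every time slice, the diversity at that rank is minimal; deeper ranks, where words churn, approach a diversity of 1.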

  1. A comparison of average-based, percentile rank, and other citation impact indicators

    Energy Technology Data Exchange (ETDEWEB)

    Ruiz-Castillo, J.; Albarran, P.

    2016-07-01

    The main aim of this paper is to defend the view that, in spite of the broad agreement in favor of the MNCS and the percentile rank indicators, there are two other citation indicators with desirable properties that the above indicators do not possess: (i) a member of the family of high-impact indicators introduced in Albarrán et al. (2011), and (ii) a new indicator, based on the work of Herrero & Villar (2013), which measures the relative performance of the different research units in terms of a series of tournaments in which each research unit is confronted with all the others repeatedly. We compare the indicators from the point of view of their discriminatory power, measured by the range and the coefficient of variation. Using a large dataset indexed by Thomson Reuters, we consider 40 countries that published at least 10,000 articles across all sciences in 1998-2003. There are two main findings. First, the new indicator exhibits greater discriminatory power than the percentile rank indicators. Second, the high-impact indicator exhibits the greatest discriminatory power. (Author)
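
For reference, a basic percentile rank indicator can be computed with mid-rank handling of ties, as sketched below. This is a generic construction of the percentile rank of each paper within its reference set, not the specific indicators compared in the paper:

```python
def percentile_ranks(citations):
    """Percentile rank of each paper among the set; ties share the mean
    of their positions (mid-rank convention)."""
    n = len(citations)
    s = sorted(citations)
    return [100.0 * (sum(1 for x in s if x < c) + 0.5 * sum(1 for x in s if x == c)) / n
            for c in citations]
```

A percentile-rank indicator for a research unit would then aggregate these per-paper percentiles, e.g. by averaging them over the unit's publications.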

  2. Feature selection model based on clustering and ranking in pipeline for microarray data

    Directory of Open Access Journals (Sweden)

    Barnali Sahu

    2017-01-01

    Full Text Available Most of the available feature selection techniques in the literature are classifier-bound: a group of features is tied to the performance of a specific classifier, as in the wrapper and hybrid approaches. Our objective in this study is to select a set of generic features not tied to any classifier, based on the proposed framework. This framework uses attribute clustering and feature ranking techniques in a pipeline in order to remove redundant features. On each discovered cluster, signal-to-noise ratio, t-statistics and significance analysis of microarray are independently applied to select the top-ranked features. Both filter and evolutionary wrapper approaches have been considered for feature selection, and the dataset with the selected features is given to an ensemble of predefined, statistically different classifiers. The class labels of the test data are determined using the majority voting technique. Moreover, with the aforesaid objectives, this paper focuses on obtaining a stable result out of various classification models. Further, a comparative analysis has been performed to study the classification accuracy and computational time of the current approach and of evolutionary wrapper techniques. This gives better insight into the features and further enhances the classification accuracy with less computational time.
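
A toy version of the cluster-then-rank pipeline, using Pearson correlation for greedy attribute clustering and the signal-to-noise ratio as the ranking criterion, might look like this. The correlation threshold, greedy clustering rule, and data layout are illustrative assumptions, not the paper's exact pipeline:

```python
def snr(feature, labels):
    """Signal-to-noise ratio of one feature for a two-class problem."""
    g0 = [v for v, y in zip(feature, labels) if y == 0]
    g1 = [v for v, y in zip(feature, labels) if y == 1]
    mean = lambda g: sum(g) / len(g)
    sd = lambda g, m: (sum((v - m) ** 2 for v in g) / len(g)) ** 0.5
    m0, m1 = mean(g0), mean(g1)
    return abs(m0 - m1) / (sd(g0, m0) + sd(g1, m1) + 1e-12)

def pearson(u, v):
    """Pearson correlation between two feature vectors."""
    n = len(u)
    mu, mv = sum(u) / n, sum(v) / n
    su = sum((a - mu) ** 2 for a in u) ** 0.5
    sv = sum((b - mv) ** 2 for b in v) ** 0.5
    cov = sum((a - mu) * (b - mv) for a, b in zip(u, v))
    return cov / (su * sv) if su and sv else 0.0

def cluster_then_rank(X, y, r_thresh=0.9):
    """Greedily cluster correlated features, then keep the top-SNR
    feature per cluster, removing redundant features."""
    clusters = []                          # each cluster: list of feature indices
    for i in range(len(X)):
        for c in clusters:
            if abs(pearson(X[i], X[c[0]])) >= r_thresh:
                c.append(i)
                break
        else:
            clusters.append([i])
    return [max(c, key=lambda i: snr(X[i], y)) for c in clusters]
```

Duplicated or near-duplicated expression profiles collapse into one cluster, so only one representative of each redundant group survives into the classifier ensemble.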

  3. Entropy-based gene ranking without selection bias for the predictive classification of microarray data

    Directory of Open Access Journals (Sweden)

    Serafini Maria

    2003-11-01

    Full Text Available Abstract Background We describe the E-RFE method for gene ranking, which is useful for the identification of markers in the predictive classification of array data. The method supports a practical modeling scheme designed to avoid the construction of classification rules based on the selection of too small gene subsets (an effect known as the selection bias, in which the estimated predictive errors are too optimistic due to testing on samples already considered in the feature selection process). Results With E-RFE, we speed up recursive feature elimination (RFE) with SVM classifiers by eliminating chunks of uninteresting genes using an entropy measure of the SVM weight distribution. An optimal subset of genes is selected according to a two-strata model evaluation procedure: modeling is replicated by an external stratified-partition resampling scheme, and, within each run, an internal K-fold cross-validation is used for E-RFE ranking. Also, the optimal number of genes can be estimated according to the saturation of Zipf's law profiles. Conclusions Without a decrease of classification accuracy, E-RFE allows a speed-up factor of 100 with respect to standard RFE, while improving on alternative parametric RFE reduction strategies. Thus, a process for gene selection and error estimation is made practical, ensuring control of the selection bias, and providing additional diagnostic indicators of gene importance.
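    The entropy idea can be sketched without the SVM machinery (this is not the authors' code; the weight vectors and the 0.5 chunk fraction are illustrative): when the Shannon entropy of the normalized weight magnitudes is low, few genes carry the signal, so a large chunk of low-weight genes can be discarded at once.

```python
# Entropy-guided chunk elimination in the spirit of E-RFE.
import math

def weight_entropy(weights):
    """Normalized Shannon entropy of |w|, in [0, 1]."""
    mags = [abs(w) for w in weights]
    total = sum(mags)
    probs = [m / total for m in mags if m > 0]
    h = -sum(p * math.log(p) for p in probs)
    return h / math.log(len(weights))

def chunk_size(weights, max_frac=0.5):
    """Low entropy (few informative genes) -> eliminate a larger chunk."""
    return max(1, int(len(weights) * max_frac * (1 - weight_entropy(weights))))

uniform = [1.0] * 100               # no gene stands out
peaked = [10.0] * 5 + [0.01] * 95   # a few genes dominate

u, p = chunk_size(uniform), chunk_size(peaked)
print(u, p)
```

    Standard RFE removes one gene per retraining; removing entropy-sized chunks is what yields the reported speed-up.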

  4. Feature selection for splice site prediction: A new method using EDA-based feature ranking

    Directory of Open Access Journals (Sweden)

    Rouzé Pierre

    2004-05-01

    Full Text Available Abstract Background The identification of relevant biological features in large and complex datasets is an important step towards gaining insight in the processes underlying the data. Other advantages of feature selection include the ability of the classification system to attain good or even better solutions using a restricted subset of features, and a faster classification. Thus, robust methods for fast feature selection are of key importance in extracting knowledge from complex biological data. Results In this paper we present a novel method for feature subset selection applied to splice site prediction, based on estimation of distribution algorithms, a more general framework of genetic algorithms. From the estimated distribution of the algorithm, a feature ranking is derived. Afterwards this ranking is used to iteratively discard features. We apply this technique to the problem of splice site prediction, and show how it can be used to gain insight into the underlying biological process of splicing. Conclusion We show that this technique proves to be more robust than the traditional use of estimation of distribution algorithms for feature selection: instead of returning a single best subset of features (as they normally do), this method provides a dynamic view of the feature selection process, like the traditional sequential wrapper methods. However, the method is faster than the traditional techniques, and scales better to datasets described by a large number of features.

  5. A Hybrid Probabilistic Model for Unified Collaborative and Content-Based Image Tagging.

    Science.gov (United States)

    Zhou, Ning; Cheung, William K; Qiu, Guoping; Xue, Xiangyang

    2011-07-01

    The increasing availability of large quantities of user contributed images with labels has provided opportunities to develop automatic tools to tag images to facilitate image search and retrieval. In this paper, we present a novel hybrid probabilistic model (HPM) which integrates low-level image features and high-level user provided tags to automatically tag images. For images without any tags, HPM predicts new tags based solely on the low-level image features. For images with user provided tags, HPM jointly exploits both the image features and the tags in a unified probabilistic framework to recommend additional tags to label the images. The HPM framework makes use of the tag-image association matrix (TIAM). However, since the number of images is usually very large and user-provided tags are diverse, TIAM is very sparse, thus making it difficult to reliably estimate tag-to-tag co-occurrence probabilities. We developed a collaborative filtering method based on nonnegative matrix factorization (NMF) for tackling this data sparsity issue. Also, an L1 norm kernel method is used to estimate the correlations between image features and semantic concepts. The effectiveness of the proposed approach has been evaluated using three databases containing 5,000 images with 371 tags, 31,695 images with 5,587 tags, and 269,648 images with 5,018 tags, respectively.

  6. A new repair criterion for steam generator tubes with axial cracks based on probabilistic integrity assessment

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Hyun-Su; Oh, Chang-Kyun [KEPCO Engineering and Construction Company, Inc., 269, Hyeoksin-ro, Gimcheon, Gyeongsangbuk-do 39660 (Korea, Republic of); Chang, Yoon-Suk, E-mail: yschang@khu.ac.kr [Department of Nuclear Engineering, College of Engineering, Kyung Hee University, 1732 Deokyoungdaero, Giheung, Yongin, Gyeonggi 446-701 (Korea, Republic of)

    2017-03-15

    Highlights: • Probabilistic assessment was performed for axially cracked steam generator tubes. • The threshold crack sizes were determined based on burst pressures of the tubes. • A new repair criterion was suggested as a function of operation time. - Abstract: The steam generator is one of the major components in a nuclear power plant, and it consists of thousands of thin-walled tubes. The operating record of steam generators has indicated that axial cracks due to stress corrosion have been frequently detected in the tubes. Since the tubes are closely related to both the safety and the efficiency of a nuclear power plant, the establishment of an appropriate repair criterion for the defective tubes and its application are necessary. The objective of this paper is to develop an accurate repair criterion for tubes with axial cracks. To do this, a thorough review is performed on the key parameters affecting tube integrity, and then a probabilistic integrity assessment is carried out by considering the various uncertainties. In addition, the critical crack sizes are determined by comparing the burst pressure of the cracked tube with the required performance criterion. Based on this result, the new repair criterion for the axially cracked tubes is defined from a reasonably conservative value such that the required performance criterion in terms of the burst pressure can be met during the next operating period.
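    The probabilistic step can be sketched generically: sample the uncertain inputs, evaluate a burst-pressure model, and count violations of the performance criterion. The simple thin-shell burst model and all parameter values below are assumptions for illustration, not taken from the paper.

```python
# Monte Carlo estimate of the probability that burst pressure falls
# below a required performance criterion.
import random

random.seed(1)

def burst_pressure(flow_stress, thickness, radius, crack_depth_frac):
    """Assumed model: thin-shell burst pressure derated by crack depth."""
    return flow_stress * thickness / radius * (1.0 - crack_depth_frac)

P_REQUIRED = 20.0  # MPa, illustrative performance criterion

failures = 0
n = 100_000
for _ in range(n):
    sigma = random.gauss(600.0, 30.0)                 # flow stress, MPa
    t = random.gauss(1.07, 0.05)                      # wall thickness, mm
    r = 9.5                                           # tube radius, mm
    a = min(0.99, max(0.0, random.gauss(0.5, 0.1)))   # crack depth fraction
    if burst_pressure(sigma, t, r, a) < P_REQUIRED:
        failures += 1

print(failures / n)  # estimated probability of violating the criterion
```

    Repeating this for a grid of crack sizes is one way to locate the threshold crack size at which the failure probability becomes unacceptable.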

  7. Integrating statistical and process-based models to produce probabilistic landslide hazard at regional scale

    Science.gov (United States)

    Strauch, R. L.; Istanbulluoglu, E.

    2017-12-01

    We develop a landslide hazard modeling approach that integrates a data-driven statistical model and a probabilistic process-based shallow landslide model for mapping probability of landslide initiation, transport, and deposition at regional scales. The empirical model integrates the influence of seven site attribute (SA) classes: elevation, slope, curvature, aspect, land use-land cover, lithology, and topographic wetness index, on over 1,600 observed landslides using a frequency ratio (FR) approach. A susceptibility index is calculated by adding FRs for each SA on a grid-cell basis. Using landslide observations, we relate the susceptibility index to an empirically derived probability of landslide impact. This probability is combined with results from a physically-based model to produce an integrated probabilistic map. Slope was key in landslide initiation while deposition was linked to lithology and elevation. Vegetation transition from forest to alpine vegetation and barren land cover with lower root cohesion leads to higher frequency of initiation. Aspect effects are likely linked to differences in root cohesion and moisture controlled by solar insolation and snow. We demonstrate the model in the North Cascades of Washington, USA and identify locations of high and low probability of landslide impacts that can be used by land managers in their design, planning, and maintenance.
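    The frequency-ratio computation itself is a small calculation. A sketch with invented cell counts for one attribute (slope class): FR for a class is the fraction of landslide cells in that class divided by the fraction of all cells in that class, and the cell-level susceptibility index is the sum of FRs over attributes.

```python
# Frequency ratio (FR) per attribute class.
def frequency_ratio(landslide_counts, total_counts):
    """FR per class from landslide and total cell counts per class."""
    n_slide = sum(landslide_counts.values())
    n_total = sum(total_counts.values())
    return {c: (landslide_counts[c] / n_slide) / (total_counts[c] / n_total)
            for c in total_counts}

slope_classes = {"gentle": 500, "moderate": 300, "steep": 200}  # all cells
slides = {"gentle": 10, "moderate": 30, "steep": 60}            # landslide cells

fr = frequency_ratio(slides, slope_classes)
print(fr["steep"] > 1.0 > fr["gentle"])  # steep slopes over-represented
```

    FR > 1 means landslides are over-represented in a class relative to its areal share, which is why slope classes dominate the initiation signal in the abstract.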

  8. Probabilistic Routing Based on Two-Hop Information in Delay/Disruption Tolerant Networks

    Directory of Open Access Journals (Sweden)

    Xu Wang

    2015-01-01

    Full Text Available We investigate an opportunistic routing protocol in delay/disruption tolerant networks (DTNs), where the end-to-end path between source and destination nodes may not exist for most of the time. Probabilistic routing protocol using history of encounters and transitivity (PRoPHET) is an efficient history-based routing protocol specifically proposed for DTNs, which only utilizes the delivery predictability of one-hop neighbors to make a decision for message forwarding. In order to further improve the message delivery rate and to reduce the average overhead of PRoPHET, in this paper we propose an improved probabilistic routing algorithm (IPRA), where the history information of contacts for the immediate encounter and two-hop neighbors is jointly used to make an informed decision for message forwarding. Based on the Opportunistic Networking Environment (ONE) simulator, the performance of IPRA has been evaluated via extensive simulations. The results show that IPRA can significantly improve the average delivery rate while achieving a better or comparable performance with respect to average overhead, average delay, and total energy consumption compared with the existing algorithms.
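    The delivery-predictability bookkeeping that PRoPHET (and hence IPRA) builds on consists of three updates, reinforcement on encounter, aging over time, and transitivity through a common neighbor. The parameter values below are the commonly cited PRoPHET defaults, used here only for illustration:

```python
# PRoPHET delivery-predictability updates (Lindgren et al.).
P_INIT, GAMMA, BETA = 0.75, 0.98, 0.25

def on_encounter(p_old):
    """Direct encounter: P = P_old + (1 - P_old) * P_init."""
    return p_old + (1 - p_old) * P_INIT

def age(p_old, k):
    """No contact for k time units: P = P_old * gamma^k."""
    return p_old * GAMMA ** k

def transitive(p_ab, p_bc, p_ac_old):
    """Via node b: P(a,c) = max(P_old, P(a,b) * P(b,c) * beta)."""
    return max(p_ac_old, p_ab * p_bc * BETA)

p = on_encounter(0.0)   # 0.75 after the first meeting
p = on_encounter(p)     # reinforced on a second meeting
p = age(p, 10)          # decays while the nodes are apart
print(round(p, 4))
```

    IPRA's contribution is to feed two-hop predictabilities into the forwarding decision instead of only the one-hop values maintained above.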

  9. Probabilistic relevance ranking for collaborative filtering

    NARCIS (Netherlands)

    Wang, J.; Robertson, S.; De Vries, A.P.; Reinders, M.J.T.

    2008-01-01

    Collaborative filtering is concerned with making recommendations about items to users. Most formulations of the problem are specifically designed for predicting user ratings, assuming past data of explicit user ratings is available. However, in practice we may only have implicit evidence of user

  10. Rank-defective millimeter-wave channel estimation based on subspace-compressive sensing

    Directory of Open Access Journals (Sweden)

    Majid Shakhsi Dastgahian

    2016-11-01

    Full Text Available Millimeter-wave communication (mmWC) is considered one of the pioneer candidates for 5G indoor and outdoor systems in the E-band. To cope with the channel propagation characteristics of this band, high-dimensional antenna arrays need to be deployed at both the base station (BS) and mobile sets (MS). Unlike conventional MIMO systems, millimeter-wave (mmW) systems cannot afford power-hungry equipment such as an ADC or RF chain in each branch because of hardware constraints. Such systems instead resort to a hybrid precoding (combining) architecture for downlink deployment. Because there is a large array at the transceiver, it is impossible to estimate the channel by conventional methods. This paper develops a new algorithm to estimate the mmW channel by exploiting its sparse nature. The main contribution is the representation of a sparse channel model and the exploitation of a modified approach based on the Multiple Measurement Vector (MMV) greedy sparse framework and the subspace method of Multiple Signal Classification (MUSIC), which work together to recover the indices of the non-zero elements of an unknown channel matrix when the rank of the channel matrix is deficient. In practical rank-deficient channels, plain MUSIC fails, so new extended MUSIC approaches based on subspace enhancement are proposed to compensate for this limitation. Simulation results indicate that the proposed extended MUSIC algorithms perform well at moderate computational cost, and that they are even able to work in channels with an unknown sparsity level.

  11. ENSO-based probabilistic forecasts of March-May U.S. tornado and hail activity

    Science.gov (United States)

    Lepore, Chiara; Tippett, Michael K.; Allen, John T.

    2017-09-01

    Extended logistic regression is used to predict March-May severe convective storm (SCS) activity based on the preceding December-February (DJF) El Niño-Southern Oscillation (ENSO) state. The spatially resolved probabilistic forecasts are verified against U.S. tornado counts, hail events, and two environmental indices for severe convection. The cross-validated skill is positive for roughly a quarter of the U.S. Overall, indices are predicted with more skill than are storm reports, and hail events are predicted with more skill than tornado counts. Skill is higher in the cool phase of ENSO (La Niña like) when overall SCS activity is higher. SCS forecasts based on the predicted DJF ENSO state from coupled dynamical models initialized in October of the previous year extend the lead time with only a modest reduction in skill compared to forecasts based on the observed DJF ENSO state.
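    The regression idea can be illustrated in miniature with plain logistic regression (not the paper's extended variant): fit P(active season) = sigmoid(a + b · Nino3.4) by stochastic gradient ascent. The (DJF index, active/inactive) pairs below are invented, arranged so that cool (La Niña-like) winters are more active, as the abstract reports.

```python
# Toy logistic model linking a DJF ENSO index to SCS-season activity.
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fit_logistic(x, y, lr=0.1, steps=2000):
    a, b = 0.0, 0.0
    for _ in range(steps):
        for xi, yi in zip(x, y):
            err = yi - sigmoid(a + b * xi)   # gradient of log-likelihood
            a += lr * err
            b += lr * err * xi
    return a, b

nino34 = [-2.0, -1.5, -1.0, -0.5, 0.0, 0.5, 1.0, 1.5, 2.0]
active = [1, 1, 1, 1, 0, 1, 0, 0, 0]   # synthetic outcomes

a, b = fit_logistic(nino34, active)
print(b < 0)  # negative slope: cooler ENSO -> higher activity probability
```

    The extended version in the paper predicts exceedance probabilities for several count thresholds at once; this sketch captures only the single-threshold case.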

  12. Feasibility of developing risk-based rankings of pressure boundary systems for inservice inspection

    Energy Technology Data Exchange (ETDEWEB)

    Vo, T.V.; Smith, B.W.; Simonen, F.A.; Gore, B.F.

    1994-08-01

    The goals of the Evaluation and Improvement of Non-destructive Examination Reliability for the In-service Inspection of Light Water Reactors Program sponsored by the Nuclear Regulatory Commission at Pacific Northwest Laboratory (PNL) are to (1) assess current ISI techniques and requirements for all pressure boundary systems and components, (2) determine if improvements to the requirements are needed, and (3) if necessary, develop recommendations for revising the applicable ASME Codes and regulatory requirements. In evaluating approaches that could be used to provide a technical basis for improved inservice inspection plans, PNL has developed and applied a method that uses results of probabilistic risk assessment (PRA) to establish piping system ISI requirements. In the PNL program, the feasibility of generic ISI requirements is being addressed in two phases. Phase I involves identifying and prioritizing the systems most relevant to plant safety. The results of these evaluations will be later consolidated into requirements for comprehensive inservice inspection of nuclear power plant components that will be developed in Phase II. This report presents Phase I evaluations for eight selected plants and attempts to compare these PRA-based inspection priorities with current ASME Section XI requirements for Class 1, 2 and 3 systems. These results show that there are generic insights that can be extrapolated from the selected plants to specific classes of light water reactors.

  13. Feasibility of developing risk-based rankings of pressure boundary systems for inservice inspection

    International Nuclear Information System (INIS)

    Vo, T.V.; Smith, B.W.; Simonen, F.A.; Gore, B.F.

    1994-08-01

    The goals of the Evaluation and Improvement of Non-destructive Examination Reliability for the In-service Inspection of Light Water Reactors Program sponsored by the Nuclear Regulatory Commission at Pacific Northwest Laboratory (PNL) are to (1) assess current ISI techniques and requirements for all pressure boundary systems and components, (2) determine if improvements to the requirements are needed, and (3) if necessary, develop recommendations for revising the applicable ASME Codes and regulatory requirements. In evaluating approaches that could be used to provide a technical basis for improved inservice inspection plans, PNL has developed and applied a method that uses results of probabilistic risk assessment (PRA) to establish piping system ISI requirements. In the PNL program, the feasibility of generic ISI requirements is being addressed in two phases. Phase I involves identifying and prioritizing the systems most relevant to plant safety. The results of these evaluations will be later consolidated into requirements for comprehensive inservice inspection of nuclear power plant components that will be developed in Phase II. This report presents Phase I evaluations for eight selected plants and attempts to compare these PRA-based inspection priorities with current ASME Section XI requirements for Class 1, 2 and 3 systems. These results show that there are generic insights that can be extrapolated from the selected plants to specific classes of light water reactors

  14. Universal Generating Function Based Probabilistic Production Simulation Approach Considering Wind Speed Correlation

    Directory of Open Access Journals (Sweden)

    Yan Li

    2017-11-01

    Full Text Available Due to the volatile and correlated nature of wind speed, a high share of wind power penetration poses challenges to power system production simulation. Existing power system probabilistic production simulation approaches fall short of considering the time-varying characteristics of wind power and load, as well as the correlation between wind speeds at the same time, which brings about some problems in planning and analysis for power systems with high wind power penetration. Based on the universal generating function (UGF), this paper proposes a novel probabilistic production simulation approach considering wind speed correlation. UGF is utilized to develop chronological models of wind power that characterize wind speed correlation simultaneously, as well as chronological models of conventional generation sources and load. The supply and demand are matched chronologically to obtain not only generation schedules but also reliability indices, both at each simulation interval and over the whole period. The proposed approach has been tested on the improved IEEE-RTS 79 test system and is compared with the Monte Carlo approach and the sequence operation theory approach. The results verified the proposed approach with the merits of computational simplicity and accuracy.
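    The UGF formalism itself is compact: each unit's capacity distribution is a "polynomial" u(z) = Σ pᵢ·z^{gᵢ}, and combining independent units multiplies the polynomials (capacities add, probabilities multiply). A minimal sketch with invented unit data, representing each UGF as a `{capacity: probability}` dict:

```python
# Universal generating function (UGF) composition for capacity states.
def ugf_combine(u1, u2):
    """Product of two UGFs: capacities add, probabilities multiply."""
    out = {}
    for g1, p1 in u1.items():
        for g2, p2 in u2.items():
            out[g1 + g2] = out.get(g1 + g2, 0.0) + p1 * p2
    return out

def loss_of_load_prob(u, load):
    """Probability that combined capacity falls short of the load."""
    return sum(p for g, p in u.items() if g < load)

gen1 = {0: 0.05, 50: 0.95}          # 50 MW unit, 5% forced-outage rate
wind = {0: 0.3, 20: 0.5, 40: 0.2}   # discretized wind-power states

system = ugf_combine(gen1, wind)
print(loss_of_load_prob(system, 50))
```

    The paper's contribution is to build such UGFs chronologically per interval, with correlated wind states; the composition step shown here is the same.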

  15. A Max-Flow Based Algorithm for Connected Target Coverage with Probabilistic Sensors

    Directory of Open Access Journals (Sweden)

    Anxing Shan

    2017-05-01

    Full Text Available Coverage is a fundamental issue in the research field of wireless sensor networks (WSNs). Connected target coverage concerns sensor placement that guarantees the needs of both coverage and connectivity. Existing works largely leverage the Boolean disk model, which is only a coarse approximation to the practical sensing model. In this paper, we focus on the connected target coverage issue based on the probabilistic sensing model, which can characterize the quality of coverage more accurately. In the probabilistic sensing model, sensors are only able to detect a target with a certain probability. We study the collaborative detection probability of a target under multiple sensors. Armed with the analysis of collaborative detection probability, we further formulate the minimum ϵ-connected target coverage problem, aiming to minimize the number of sensors while satisfying the requirements of both coverage and connectivity. We map it into a flow graph and present an approximation algorithm called the minimum vertices maximum flow algorithm (MVMFA) with provable time complexity and approximation ratio. To evaluate our design, we analyze the performance of MVMFA theoretically and also conduct extensive simulation studies to demonstrate the effectiveness of our proposed algorithm.
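    Under the usual independence assumption, the collaborative detection probability analyzed above has a closed form: a target is missed only if every sensor misses it, so P_detect = 1 − Π(1 − pᵢ). The per-sensor probabilities below are illustrative.

```python
# Joint detection probability of independent probabilistic sensors.
def collaborative_detection(probs):
    """P(at least one of the sensors detects the target)."""
    miss = 1.0
    for p in probs:
        miss *= (1.0 - p)
    return 1.0 - miss

single = collaborative_detection([0.6])
pair = collaborative_detection([0.6, 0.6])
trio = collaborative_detection([0.6, 0.6, 0.6])
print(single, pair, trio)  # coverage quality improves with each sensor
```

    The ϵ-coverage requirement then asks that this joint probability reach at least ϵ for every target, which is what drives how many sensors the max-flow formulation must select.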

  16. Speech Denoising in White Noise Based on Signal Subspace Low-rank Plus Sparse Decomposition

    Directory of Open Access Journals (Sweden)

    yuan Shuai

    2017-01-01

    Full Text Available In this paper, a new subspace speech enhancement method using low-rank and sparse decomposition is presented. In the proposed method, we first structure the corrupted data as a Toeplitz matrix and estimate its effective rank for the underlying human speech signal. Then the low-rank and sparse decomposition is performed with the guidance of the speech rank value to remove the noise. Extensive experiments have been carried out under white Gaussian noise conditions, and the experimental results show that the proposed method performs better than conventional speech enhancement methods, in terms of yielding less residual noise and lower speech distortion.
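    The first step, arranging a 1-D frame into a Toeplitz data matrix whose diagonals are constant, can be sketched directly (the decomposition itself needs an SVD and is omitted here; the signal values are synthetic):

```python
# Build a Toeplitz data matrix from a 1-D signal frame: row i holds the
# reversed window x[i : i + n_cols], so entry (i, j) = x[i + n_cols-1 - j]
# and every diagonal is constant.
def toeplitz_from_signal(x, n_rows):
    n_cols = len(x) - n_rows + 1
    return [x[i:i + n_cols][::-1] for i in range(n_rows)]

signal = [0.0, 0.5, 1.0, 0.5, 0.0, -0.5, -1.0, -0.5]
T = toeplitz_from_signal(signal, 3)
print(len(T), len(T[0]))  # 3 x 6
```

    Clean voiced speech makes this matrix approximately low-rank, while broadband noise spreads energy over all singular values, which is what the estimated "speech rank" separates.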

  17. A Direct Elliptic Solver Based on Hierarchically Low-Rank Schur Complements

    KAUST Repository

    Chávez, Gustavo

    2017-03-17

    A parallel fast direct solver for rank-compressible block tridiagonal linear systems is presented. Algorithmic synergies between Cyclic Reduction and Hierarchical matrix arithmetic operations result in a solver with O(N log² N) arithmetic complexity and O(N log N) memory footprint. We provide a baseline for performance and applicability by comparing with well-known implementations of the ℋ-LU factorization and algebraic multigrid within a shared-memory parallel environment that leverages the concurrency features of the method. Numerical experiments reveal that this method is comparable with other fast direct solvers based on Hierarchical Matrices such as ℋ-LU and that it can tackle problems where algebraic multigrid fails to converge.

  18. Wilcoxon signed-rank-based technique for the pulse-shape analysis of HPGe detectors

    Science.gov (United States)

    Martín, S.; Quintana, B.; Barrientos, D.

    2016-07-01

    The characterization of the electric response of segmented-contact high-purity germanium detectors requires scanning systems capable of accurately associating each pulse with the position of the interaction that generated it. This process requires an algorithm sensitive to changes above the electronic noise in the pulse shapes produced at different positions, depending on the resolution of the Ge crystal. In this work, a pulse-shape comparison technique based on the Wilcoxon signed-rank test has been developed. It provides a method to distinguish pulses coming from different interaction points in the germanium crystal. Therefore, this technique is a necessary step for building a reliable pulse-shape database that can be used later for the determination of the position of interaction for γ-ray tracking spectrometry devices such as AGATA, GRETA or GERDA. The method was validated by comparison with a χ2 test using simulated and experimental pulses corresponding to a Broad Energy germanium detector (BEGe).
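    The core statistic is easy to sketch. A bare-bones signed-rank computation for comparing two pulses sample-by-sample (the published technique adds a noise-aware decision criterion, and proper tie handling averages the ranks of equal |d|, which is skipped here for brevity; the pulses are short synthetic traces):

```python
# Wilcoxon-style signed-rank sums for a pair of sampled pulses.
def signed_rank_sums(pulse_a, pulse_b):
    """(W+, W-): rank sums of positive/negative differences a - b.
    Tied |d| values keep their order of appearance (no tie averaging)."""
    diffs = [a - b for a, b in zip(pulse_a, pulse_b) if a != b]
    ranked = sorted(diffs, key=abs)
    w_pos = sum(r + 1 for r, d in enumerate(ranked) if d > 0)
    w_neg = sum(r + 1 for r, d in enumerate(ranked) if d < 0)
    return w_pos, w_neg

pulse = [0.0, 0.2, 0.8, 1.0, 0.7, 0.3]
shifted = [v + 0.1 for v in pulse]  # systematic offset, as might arise
                                    # from a different interaction point

print(signed_rank_sums(pulse, pulse))    # (0, 0): indistinguishable
print(signed_rank_sums(pulse, shifted))  # one-sided: all ranks on one side
```

    A strongly one-sided (W+, W−) split signals a systematic pulse-shape difference above the noise, which is the property the scanning system uses to separate interaction positions.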

  19. Wilcoxon signed-rank-based technique for the pulse-shape analysis of HPGe detectors

    International Nuclear Information System (INIS)

    Martín, S.; Quintana, B.; Barrientos, D.

    2016-01-01

    The characterization of the electric response of segmented-contact high-purity germanium detectors requires scanning systems capable of accurately associating each pulse with the position of the interaction that generated it. This process requires an algorithm sensitive to changes above the electronic noise in the pulse shapes produced at different positions, depending on the resolution of the Ge crystal. In this work, a pulse-shape comparison technique based on the Wilcoxon signed-rank test has been developed. It provides a method to distinguish pulses coming from different interaction points in the germanium crystal. Therefore, this technique is a necessary step for building a reliable pulse-shape database that can be used later for the determination of the position of interaction for γ-ray tracking spectrometry devices such as AGATA, GRETA or GERDA. The method was validated by comparison with a χ² test using simulated and experimental pulses corresponding to a Broad Energy germanium detector (BEGe).

  20. Critical review of methods for risk ranking of food related hazards, based on risks for human health

    DEFF Research Database (Denmark)

    van der Fels-Klerx, H. J.; van Asselt, E. D.; Raley, M.

    2018-01-01

    This study aimed to critically review methods for ranking risks related to food safety and dietary hazards on the basis of their anticipated human health impacts. A literature review was performed to identify and characterize methods for risk ranking from the fields of food, environmental science …, and the risk ranking method characterized. The methods were then clustered - based on their characteristics - into eleven method categories. These categories included: risk assessment, comparative risk assessment, risk ratio method, scoring method, cost of illness, health adjusted life years, multi-criteria decision analysis, risk matrix, flow charts/decision trees, stated preference techniques and expert synthesis. Method categories were described by their characteristics, weaknesses and strengths, data resources, and fields of applications. It was concluded there is no single best method for risk ranking …

  1. A rank-based sequence aligner with applications in phylogenetic analysis.

    Directory of Open Access Journals (Sweden)

    Liviu P Dinu

    Full Text Available Recent tools for aligning short DNA reads have been designed to optimize the trade-off between correctness and speed. This paper introduces a method for assigning a set of short DNA reads to a reference genome, under Local Rank Distance (LRD). The rank-based aligner proposed in this work aims to improve correctness over speed. However, some indexing strategies to speed up the aligner are also investigated. The LRD aligner is improved in terms of speed by storing [Formula: see text]-mer positions in a hash table for each read. Another improvement, that produces an approximate LRD aligner, is to consider only the positions in the reference that are likely to represent a good positional match of the read. The proposed aligner is evaluated and compared to other state of the art alignment tools in several experiments. A set of experiments are conducted to determine the precision and the recall of the proposed aligner, in the presence of contaminated reads. In another set of experiments, the proposed aligner is used to find the order, the family, or the species of a new (or unknown) organism, given only a set of short Next-Generation Sequencing DNA reads. The empirical results show that the aligner proposed in this work is highly accurate from a biological point of view. Compared to the other evaluated tools, the LRD aligner has the important advantage of being very accurate even for a very low base coverage. Thus, the LRD aligner can be considered as a good alternative to standard alignment tools, especially when the accuracy of the aligner is of high importance. Source code and UNIX binaries of the aligner are freely available for future development and use at http://lrd.herokuapp.com/aligners. The software is implemented in C++ and Java, being supported on UNIX and MS Windows.

  2. Ranking of healthcare programmes based on health outcome, health costs and safe delivery of care in hospital pharmacy practice.

    Science.gov (United States)

    Brisseau, Lionel; Bussières, Jean-François; Bois, Denis; Vallée, Marc; Racine, Marie-Claude; Bonnici, André

    2013-02-01

    To establish a consensual and coherent ranking of healthcare programmes that involve the presence of ward-based and clinic-based clinical pharmacists, based on health outcome, health costs and safe delivery of care. This descriptive study was derived from a structured dialogue (Delphi technique) among directors of pharmacy departments. We established a quantitative profile of healthcare programmes at five sites that involved the provision of ward-based and clinic-based pharmaceutical care. A summary table of evidence established a unique quality rating per inpatient (ward-based) or outpatient (clinic-based) healthcare programme. Each director rated the perceived impact of pharmaceutical care per inpatient or outpatient healthcare programme on three fields: health outcome, health costs and safe delivery of care. They agreed by consensus on the final ranking of healthcare programmes. A ranking was assigned for each of the 18 healthcare programmes for outpatient care and the 17 healthcare programmes for inpatient care involving the presence of pharmacists, based on health outcome, health costs and safe delivery of care. There was a good correlation between the ranking based on data from a 2007-2008 Canadian report on hospital pharmacy practice and the ranking proposed by directors of pharmacy departments. Given the often limited human and financial resources, managers should consider the best evidence available on a profession's impact to plan healthcare services within an organization. Data are few on ranking healthcare programmes in order to prioritize which healthcare programme would most benefit from the delivery of pharmaceutical care by ward-based and clinic-based pharmacists. © 2012 The Authors. IJPP © 2012 Royal Pharmaceutical Society.

  3. Economic-based design of engineering systems with degrading components using probabilistic loss of quality

    International Nuclear Information System (INIS)

    Son, Young Kap; Savage, Gordon J.; Chang, Seog Weon

    2007-01-01

    The allocation of means and tolerances to provide quality, functional reliability and performance reliability in engineering systems is a challenging problem. Traditional measures to help select the best means and tolerances include mean time to failure and its variance; however, they have some shortcomings. In this paper, a monetary measure based on present worth is invoked as a more inclusive metric. We consider the sum of the production cost and the expected loss of quality cost over a planned horizon at the customer's discount rates. Key to the approach is a probabilistic loss of quality cost that incorporates the cumulative distribution function that arises from time-variant distributions of system performance measures due to degrading components. The proposed design approach investigates both degradation and uncertainty in components. Moreover, it tries to obviate the problems of current Taguchi loss-function-based design approaches. Case studies show the practicality and promise of the approach.

  4. A Model-Based Probabilistic Inversion Framework for Wire Fault Detection Using TDR

    Science.gov (United States)

    Schuet, Stefan R.; Timucin, Dogan A.; Wheeler, Kevin R.

    2010-01-01

    Time-domain reflectometry (TDR) is one of the standard methods for diagnosing faults in electrical wiring and interconnect systems, with a long-standing history focused mainly on hardware development of both high-fidelity systems for laboratory use and portable hand-held devices for field deployment. While these devices can easily assess distance to hard faults such as sustained opens or shorts, their ability to assess subtle but important degradation such as chafing remains an open question. This paper presents a unified framework for TDR-based chafing fault detection in lossy coaxial cables by combining an S-parameter based forward modeling approach with a probabilistic (Bayesian) inference algorithm. Results are presented for the estimation of nominal and faulty cable parameters from laboratory data.

  5. Risk-Based Predictive Maintenance for Safety-Critical Systems by Using Probabilistic Inference

    Directory of Open Access Journals (Sweden)

    Tianhua Xu

    2013-01-01

    Full Text Available Risk-based maintenance (RBM) aims to improve maintenance planning and decision making by reducing the probability and consequences of equipment failure. A new predictive maintenance strategy that integrates a dynamic evolution model and risk assessment is proposed, which can be used to calculate the optimal maintenance time with minimal cost and safety constraints. The dynamic evolution model provides quantified risks by using probabilistic inference with bucket elimination and gives the prospective degradation trend of a complex system. Based on the degradation trend, an optimal maintenance time can be determined by minimizing the expected maintenance cost per time unit. The effectiveness of the proposed method is validated and demonstrated by a collision accident of high-speed trains with obstacles in the presence of safety and cost constraints.
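    The final optimization step can be sketched generically: given a failure probability F(t) from the degradation model, choose the preventive maintenance time that minimizes expected cost per time unit subject to a safety constraint F(t) ≤ F_max. The Weibull degradation model, the costs, and F_max below are all assumptions for illustration, not the paper's model.

```python
# Optimal maintenance time: minimize expected cost per time unit under
# a safety constraint on the failure probability.
import math

C_PREVENTIVE, C_FAILURE, F_MAX = 1.0, 10.0, 0.2

def failure_prob(t, scale=100.0, shape=2.0):
    """Assumed Weibull degradation model: F(t) = 1 - exp(-(t/scale)^shape)."""
    return 1.0 - math.exp(-((t / scale) ** shape))

def cost_rate(t):
    """Expected maintenance cost per time unit if maintained at time t."""
    f = failure_prob(t)
    return (C_PREVENTIVE * (1 - f) + C_FAILURE * f) / t

candidates = [t for t in range(1, 201) if failure_prob(t) <= F_MAX]
t_opt = min(candidates, key=cost_rate)
print(t_opt)
```

    Maintaining too early wastes the preventive cost over a short interval; too late risks the much larger failure cost, so the cost rate has an interior minimum inside the safe region.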

  6. Probabilistic Multi-Sensor Fusion Based Indoor Positioning System on a Mobile Device

    Directory of Open Access Journals (Sweden)

    Xiang He

    2015-12-01

    Full Text Available Nowadays, smart mobile devices include more and more sensors on board, such as motion sensors (accelerometer, gyroscope, magnetometer), wireless signal strength indicators (WiFi, Bluetooth, Zigbee), and visual sensors (LiDAR, camera). People have developed various indoor positioning techniques based on these sensors. In this paper, the probabilistic fusion of multiple sensors is investigated in a hidden Markov model (HMM) framework for mobile-device user positioning. We propose a graph structure to store the model constructed by multiple sensors during the offline training phase, and a multimodal particle filter to seamlessly fuse the information during the online tracking phase. Based on our algorithm, we develop an indoor positioning system on the iOS platform. The experiments carried out in a typical indoor environment have shown promising results for our proposed algorithm and system design.
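
    The online tracking phase described above can be sketched as a one-dimensional particle filter that fuses a motion model with a noisy range-like measurement. The scenario, noise levels, and anchor position are illustrative assumptions, not the paper's multimodal setup:

```python
import math
import random

random.seed(1)

# 1-D toy: a pedestrian walks at ~1 m/s; a WiFi-like sensor reports a noisy
# distance to an anchor at x = 0. All names and noise levels are illustrative.
N, steps = 500, 30
particles = [random.uniform(0, 50) for _ in range(N)]
true_x = 10.0

def resample(parts, weights):
    total = sum(weights)
    cum, acc = [], 0.0
    for w in weights:
        acc += w / total
        cum.append(acc)
    cum[-1] = 1.0                                   # guard against float drift
    out = []
    for _ in range(len(parts)):
        u = random.random()
        for p, c in zip(parts, cum):                # simple multinomial resampling
            if u <= c:
                out.append(p)
                break
    return out

for _ in range(steps):
    true_x += 1.0                                   # true motion
    # predict: motion model with process noise
    particles = [p + 1.0 + random.gauss(0, 0.3) for p in particles]
    # update: weight particles by the likelihood of the noisy range measurement
    z = abs(true_x) + random.gauss(0, 1.0)
    weights = [math.exp(-((z - abs(p)) ** 2) / 2.0) for p in particles]
    particles = resample(particles, weights)

estimate = sum(particles) / N
print(f"estimated position {estimate:.1f} m, true {true_x:.1f} m")
```

    The paper's filter is multimodal and fuses several sensor likelihoods per step, but each follows the same predict-weight-resample cycle shown here.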

  7. Combinatoric Models of Information Retrieval Ranking Methods and Performance Measures for Weakly-Ordered Document Collections

    Science.gov (United States)

    Church, Lewis

    2010-01-01

    This dissertation answers three research questions: (1) What are the characteristics of a combinatoric measure, based on the Average Search Length (ASL), that performs the same as a probabilistic version of the ASL?; (2) Does the combinatoric ASL measure produce the same performance result as the one that is obtained by ranking a collection of…

  8. Probabilistic Location-based Routing Protocol for Mobile Wireless Sensor Networks with Intermittent Communication

    Directory of Open Access Journals (Sweden)

    Sho KUMAGAI

    2015-02-01

    Full Text Available In a sensor network, sensor data messages reach the nearest stationary sink node connected to the Internet by wireless multihop transmissions. Recently, various mobile sensors have become available due to advances in robotics and communication technologies. A location-based message-by-message routing protocol, such as Geographic Distance Routing (GEDIR), is suitable for such mobile wireless networks; however, each mobile wireless sensor node is required to know the current locations of all its neighbor nodes. On the other hand, various intermittent communication methods have been proposed to meet the low power consumption requirements of wireless sensor networks. Intermittent Receiver-driven Data Transmission (IRDT) is one of the most efficient methods; however, it is difficult to combine location-based routing with intermittent communication. In order to solve this problem, this paper proposes a probabilistic approach, IRDT-GEDIR, with the help of one of the solutions of the secretary problem. Here, each time a neighbor sensor node wakes up from its sleep mode, an intermediate sensor node determines whether or not to forward its buffered sensor data messages to it, based on an estimation of the achieved pseudo speed of the messages. Simulation experiments show that IRDT-GEDIR achieves higher pseudo speed of sensor data message transmissions and shorter transmission delay than the two naive combinations of IRDT and GEDIR in sensor networks with mobile sensor nodes and a stationary sink node. In addition, a guideline for the estimated number of neighbor nodes of each intermediate sensor node is provided based on the results of the simulation experiments, to apply the probabilistic approach IRDT-GEDIR.
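
    The secretary-problem rule underlying IRDT-GEDIR's forwarding decision can be illustrated with the classic 1/e stopping rule, here applied to hypothetical per-wake-up candidate scores (the paper's actual criterion is a pseudo-speed estimate, not uniform random scores):

```python
import math
import random

random.seed(42)

def secretary_pick(scores):
    """Observe candidates one by one; skip the first n/e, then accept the
    first candidate better than everything seen so far."""
    n = len(scores)
    cutoff = int(n / math.e)
    best_seen = max(scores[:cutoff]) if cutoff else float("-inf")
    for s in scores[cutoff:]:
        if s > best_seen:
            return s
    return scores[-1]          # forced to take the last candidate

trials, wins = 20000, 0
for _ in range(trials):
    scores = [random.random() for _ in range(20)]   # e.g. pseudo-speed estimates
    if secretary_pick(scores) == max(scores):
        wins += 1

print(f"picked the best neighbour in {wins / trials:.1%} of trials")
```

    The success rate hovers near the well-known 1/e bound (about 37%), which is the sense in which an intermediate node can commit to a "good enough" waking neighbour without seeing all of them.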

  9. Probabilistic Modeling of the Fatigue Crack Growth Rate for Ni-base Alloy X-750

    International Nuclear Information System (INIS)

    Yoon, J.Y.; Nam, H.O.; Hwang, I.S.; Lee, T.H.

    2012-01-01

    Extending the operating life of existing nuclear power plants (NPPs) beyond 60 years raises many aging problems in passive components, such as PWSCC, IASCC, FAC, and corrosion fatigue. Safety analysis combines deterministic and probabilistic analyses, but general probabilistic analyses such as probabilistic safety assessment (PSA) involve many uncertainties in parameters and relationships. Bayesian inference decreases these uncertainties by updating unknown parameters. This work aims at ensuring the reliability of passive components (e.g., pipes) as well as active components (e.g., valves, pumps) in NPPs by developing a probabilistic model for failures and updating the fatigue crack growth rate (FCGR).

  10. A Third-Rank Tensor Field Based on a U(1) Gauge Theory in Loop Space

    OpenAIRE

    Deguchi, Shinichi; Nakajima, Tadahito (Department of Physics and Atomic Energy Research Institute, College of Science and Technology)

    1995-01-01

    We derive the Stueckelberg formalism extended to a third-rank tensor field from a U(1) gauge theory in loop space, the space of all loops in space-time. The third-rank tensor field is regarded as a constrained U(1) gauge field on the loop space.

  11. DEA ranking of municipalities of the Republic of Serbia based on efficiency of SMEs in agribusiness

    Directory of Open Access Journals (Sweden)

    Maletić Radojka

    2015-01-01

    Full Text Available The most important aspect of any business is efficiency. The goal is to achieve greater output using fewer inputs, i.e., to maximize the use of available inputs. Numerous mathematical and statistical procedures, such as the DEA technique (Data Envelopment Analysis), take an important place in the effective management of a company and its business activities. This paper illustrates the application of the DEA technique in assessing the business efficiency of SMEs in agribusiness in Vojvodina. Measuring the efficiency of business operations of SMEs is based on the values of the following indicators: fixed assets, working capital, number of companies, number of employees, total income, profit, and loss. The data used to calculate the values of the business efficiency indicators were obtained from the Statistical Office of the Republic of Serbia, based on the annual accounts of SMEs in agribusiness for a four-year average (2008-2011). The aim of this paper is a statistical assessment of the business efficiency of SMEs in agribusiness using the DEA technique; then, based on the results obtained, to rank the Vojvodina municipalities in which the observed SMEs are located; and finally, based on four models, to show the sensitivity of the DEA technique to different combinations of input/output indicators, which calls for caution when the method is used. The better the combination of parameters in the model, the more realistic the results, since if a key parameter is omitted, wrong decisions could be made.
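
    A minimal sketch of the DEA ratio-form (CCR) efficiency score: each unit seeks the input weights most favorable to itself, subject to no unit exceeding efficiency 1. A coarse grid over the weight simplex stands in for the usual linear-programming solver, and the two-input, one-output data are invented for illustration:

```python
# Toy DEA (CCR, ratio form) with 2 inputs and 1 output per unit.
units = {                       # name: (fixed_assets, employees, total_income)
    "A": (100.0, 50.0, 900.0),
    "B": (120.0, 40.0, 1000.0),
    "C": (200.0, 80.0, 1100.0),
    "D": (90.0, 70.0, 850.0),
}

def efficiency(name, grid=1000):
    # Search input-weight mixes (v1, 1 - v1); for each mix, the unit's
    # efficiency is its output/input ratio relative to the best unit.
    best = 0.0
    for k in range(1, grid):
        v1 = k / grid
        v2 = 1.0 - v1
        ratios = {u: y / (v1 * x1 + v2 * x2) for u, (x1, x2, y) in units.items()}
        best = max(best, ratios[name] / max(ratios.values()))
    return best

scores = {u: efficiency(u) for u in units}
ranking = sorted(scores, key=scores.get, reverse=True)
print({u: round(s, 3) for u, s in scores.items()})
print("ranking:", ranking)
```

    Units that are best under some weight mix score 1.0 (efficient); dominated units score below 1, which is what drives the municipality ranking. This also shows the sensitivity the abstract warns about: the scores depend entirely on which inputs and outputs enter the model.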

  12. Confluence reduction for probabilistic systems

    NARCIS (Netherlands)

    Timmer, Mark; van de Pol, Jan Cornelis; Stoelinga, Mariëlle Ida Antoinette

    In this presentation we introduce a novel technique for state space reduction of probabilistic specifications, based on a newly developed notion of confluence for probabilistic automata. We proved that this reduction preserves branching probabilistic bisimulation and can be applied on-the-fly. To

  13. Research Ranking of Iranian Universities of Medical Sciences Based on International Indicators: An Experience From I.R. of Iran.

    Science.gov (United States)

    Baradaran Eftekhari, Monir; Sobhani, Zahra; Eltemasi, Masoumeh; Ghalenoee, Elham; Falahat, Katayoun; Habibi, Elham; Djalalinia, Shirin; Paykari, Niloofar; Ebadifar, Asghar; Akhondzadeh, Shahin

    2017-11-01

    In recent years, international ranking systems have been used by diverse users for various purposes. In most of these rankings, different aspects of the performance of universities and research institutes, especially scientific performance, are evaluated and ranked. In this article, we report the results of a research ranking of Iranian universities of medical sciences (UMSs) based on international indicators in 2015. In this study, after reviewing the research indicators of the majority of international ranking systems, and with the participation of key stakeholders, we selected eight research indicators, namely research output, high-quality publications, leadership, total citations, citations per paper in 2015, papers per faculty member, and h-index. The main sources for data gathering were Scopus, PubMed, and ISI Web of Science. Data were extracted and normalized for Iranian governmental UMSs for 2015. A total of 18023 articles with UMS affiliations were indexed in Scopus in 2015. Almost 17% of all articles were published in top journals and 15% were published with international collaboration. The maximum h-index (h-index = 110) belonged to Tehran University of Medical Sciences. The average number of papers per faculty member was 1.14 (max = 2.5, min = 0.13). The mean citation count per article published in Scopus was 0.33. Research ranking of Iranian UMSs can create favorable competition among them towards knowledge production.

  14. Feedback-based probabilistic category learning is selectively impaired in attention/hyperactivity deficit disorder.

    Science.gov (United States)

    Gabay, Yafit; Goldfarb, Liat

    2017-07-01

    Although Attention-Deficit Hyperactivity Disorder (ADHD) is closely linked to executive function deficits, it has recently been attributed to procedural learning impairments that are quite distinct from the former. These observations challenge the ability of the executive function framework alone to account for the diverse range of symptoms observed in ADHD. A recent neurocomputational model emphasizes the role of striatal dopamine (DA) in explaining ADHD's broad range of deficits, but the link between this model and procedural learning impairments remains unclear. Significantly, feedback-based procedural learning is hypothesized to be disrupted in ADHD because of the involvement of striatal DA in this type of learning. In order to test this assumption, we employed two variants of a probabilistic category learning task known from the neuropsychological literature. Feedback-based (FB) and paired-associate (PA) probabilistic category learning were employed in a non-medicated sample of ADHD participants and neurotypical participants. In the FB task, participants learned associations between cues and outcomes initially by guessing and subsequently through feedback indicating the correctness of the response. In the PA learning task, participants viewed the cue and its associated outcome simultaneously without making an overt response or receiving corrective feedback. In both tasks, participants were trained across 150 trials. Learning was assessed in a subsequent test without presentation of the outcome or corrective feedback. Results revealed an interesting dissociation in which ADHD participants performed as well as control participants in the PA task, but were impaired compared with the controls in the FB task. The learning curve during FB training differed between the two groups. Taken together, these results suggest that the ability to learn incrementally from feedback is selectively disrupted in ADHD participants. These results are discussed in relation to both…
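
    The feedback-based learning mechanism discussed above can be sketched with a Rescorla-Wagner delta rule on a probabilistic cue-outcome task. The cue probabilities and learning rate are illustrative assumptions in the style of "weather prediction" tasks, not the study's exact design:

```python
import random

random.seed(5)

# Four binary cues predict a binary outcome with assumed probabilities.
cue_probs = [0.8, 0.6, 0.4, 0.2]     # P(outcome = 1 | cue i present)
w = [0.0] * 4                        # association weights
alpha = 0.05                         # learning rate

def trial():
    cues = [1 if random.random() < 0.5 else 0 for _ in range(4)]
    if sum(cues) == 0:
        cues[random.randrange(4)] = 1
    # outcome driven by the average predictiveness of the present cues
    p = sum(c * q for c, q in zip(cues, cue_probs)) / sum(cues)
    outcome = 1 if random.random() < p else 0
    return cues, outcome

def accuracy(n=2000):
    correct = 0
    for _ in range(n):
        cues, outcome = trial()
        pred = 1 if sum(c * wi for c, wi in zip(cues, w)) > 0.5 else 0
        correct += (pred == outcome)
    return correct / n

before = accuracy()
for _ in range(5000):                # feedback phase: delta-rule weight update
    cues, outcome = trial()
    err = outcome - sum(c * wi for c, wi in zip(cues, w))
    for i, c in enumerate(cues):
        w[i] += alpha * err * c
after = accuracy()
print(f"accuracy before training {before:.2f}, after feedback training {after:.2f}")
```

    Trial-by-trial error correction of this kind is the incremental, feedback-driven learning the study finds selectively impaired in ADHD; the PA condition removes exactly the error signal this update rule depends on.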

  15. A probabilistic model-based soft sensor to monitor lactic acid bacteria fermentations

    DEFF Research Database (Denmark)

    Spann, Robert; Roca, Christophe; Kold, David

    2018-01-01

    A probabilistic soft sensor based on a mechanistic model was designed to monitor S. thermophilus fermentations, and validated with experimental lab-scale data. It considered uncertainties in the initial conditions, on-line measurements, and model parameters by performing Monte Carlo simulations within the monitoring system. It predicted, therefore, the probability distributions of the unmeasured states, such as biomass, lactose, and lactic acid concentrations. To this end, a mechanistic model was developed first, and a statistical parameter estimation was performed in order to assess parameter … the model parameters that were then used as input to the mechanistic model. The soft sensor predicted both the current state variables, as well as the future course of the fermentation, e.g. with a relative mean error of the biomass concentration of 8 %. This successful implementation of a process…

  16. Constant Jacobian Matrix-Based Stochastic Galerkin Method for Probabilistic Load Flow

    Directory of Open Access Journals (Sweden)

    Yingyun Sun

    2016-03-01

    Full Text Available An intrusive spectral method for probabilistic load flow (PLF) is proposed in this paper, which can handle the uncertainties arising from renewable energy integration. Generalized polynomial chaos (gPC) expansions of dependent random variables are utilized to build a spectral stochastic representation of the PLF model. Instead of solving the coupled PLF model with a traditional, cumbersome method, a modified stochastic Galerkin (SG) method is proposed based on the P-Q decoupling properties of load flow in power systems. By introducing two pre-calculated constant sparse Jacobian matrices, the computational burden of the SG method is significantly reduced. Two cases, the IEEE 14-bus and IEEE 118-bus systems, are used to verify the computation speed and efficiency of the proposed method.
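
    The gPC expansion at the heart of such methods can be illustrated on a scalar toy problem: projecting a response u(xi) of a standard normal input onto probabilists' Hermite polynomials, whose coefficients then give the mean and variance directly. This shows only the expansion step, not the paper's Galerkin load-flow system:

```python
import math

# Probabilists' Hermite polynomials He_0..He_3 (orthogonal under N(0,1)).
def He(k, x):
    if k == 0:
        return 1.0
    if k == 1:
        return x
    if k == 2:
        return x * x - 1.0
    return x ** 3 - 3.0 * x

def gauss_pdf(x):
    return math.exp(-x * x / 2.0) / math.sqrt(2.0 * math.pi)

def coeff(f, k, lo=-8.0, hi=8.0, n=4000):
    # u_k = E[f(xi) He_k(xi)] / k!  via a dense Riemann sum
    h = (hi - lo) / n
    s = sum(f(lo + i * h) * He(k, lo + i * h) * gauss_pdf(lo + i * h)
            for i in range(n + 1))
    return s * h / math.factorial(k)

f = lambda x: x * x                       # toy "load-flow response" u(xi)
u = [coeff(f, k) for k in range(4)]
print("gPC coefficients:", [round(c, 4) for c in u])
# x^2 = He_2(x) + 1, so u = [1, 0, 1, 0]; mean = u_0, variance = sum k! u_k^2 (k >= 1)
variance = sum(math.factorial(k) * u[k] ** 2 for k in range(1, 4))
print("mean:", round(u[0], 4), "variance:", round(variance, 4))
```

    In an intrusive SG scheme these coefficients are unknowns of a coupled deterministic system rather than post-processed integrals, which is exactly where the paper's constant sparse Jacobians cut the cost.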

  17. Biasing transition rate method based on direct MC simulation for probabilistic safety assessment

    Institute of Scientific and Technical Information of China (English)

    Xiao-Lei Pan; Jia-Qun Wang; Run Yuan; Fang Wang; Han-Qing Lin; Li-Qin Hu; Jin Wang

    2017-01-01

    Direct Monte Carlo (MC) simulation is a powerful probabilistic safety assessment method for accounting for the dynamics of a system, but it is not efficient at simulating rare events. A biasing transition rate method based on direct MC simulation is proposed in this paper to solve the problem. This method biases the transition rates of the components by adding virtual components to them in series, increasing the occurrence probability of the rare event and hence decreasing the variance of the MC estimator. Several cases are used to benchmark this method. The results show that the method is effective at modeling system failure and is more efficient at collecting evidence of rare events than direct MC simulation. The performance is greatly improved by the biasing transition rate method.
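
    The effect of biasing a transition rate can be sketched with a single exponentially failing component: sampling from an inflated rate and reweighting each sample by the likelihood ratio keeps the estimator unbiased while producing far more rare-event hits. The rates and mission time below are illustrative, and the likelihood-ratio form is the generic importance-sampling one, not the paper's virtual-component construction:

```python
import math
import random

random.seed(7)

# A component fails at exponential rate lam; the "rare event" is failure
# before the mission time t0.
lam, lam_b, t0, n = 1e-4, 1e-2, 10.0, 20000
p_true = 1.0 - math.exp(-lam * t0)            # analytic reference, ~1e-3

hits_direct, est_biased = 0, 0.0
for _ in range(n):
    # direct simulation: rare event almost never observed
    if random.expovariate(lam) < t0:
        hits_direct += 1
    # biased simulation: sample the inflated rate, reweight by likelihood ratio
    t = random.expovariate(lam_b)
    if t < t0:
        w = (lam / lam_b) * math.exp(-(lam - lam_b) * t)
        est_biased += w

p_direct = hits_direct / n
p_biased = est_biased / n
print(f"true {p_true:.2e}  direct {p_direct:.2e}  biased {p_biased:.2e}")
```

    With the same sample budget, the direct run sees only a handful of failures while the biased run sees thousands of weighted ones, which is the variance reduction the abstract describes.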

  18. Probabilistic Fatigue Life Prediction of Bridge Cables Based on Multiscaling and Mesoscopic Fracture Mechanics

    Directory of Open Access Journals (Sweden)

    Zhongxiang Liu

    2016-04-01

    Full Text Available Fatigue fracture of bridge stay-cables is usually a multiscale process, as the crack grows from the micro-scale to the macro-scale. Such a process, however, is highly uncertain. In order to make a rational prediction of the residual life of bridge cables, a probabilistic fatigue approach is proposed, based on a comprehensive vehicle load model, finite element analysis, and multiscaling and mesoscopic fracture mechanics. Uncertainties in both material properties and external loads are considered. The proposed method is demonstrated through the fatigue life prediction of cables of the Runyang Cable-Stayed Bridge in China. It is found that cables along the bridge spans may have significantly different fatigue lives and, due to this variability, some of them may have shorter lives than expected from the design.

  19. EVALUATION OF MILITARY ACTIVITY IMPACT ON HUMANS THROUGH A PROBABILISTIC ECOLOGICAL RISK ASSESSMENT. EXAMPLE OF A FORMER MISSILE BASE.

    Directory of Open Access Journals (Sweden)

    Sergiy OREL

    2015-10-01

    Full Text Available The current article provides a methodology for the assessment of environmental factors after the termination of military activity, using a former missile base as an example. The assessment of environmental conditions is performed through an evaluation of the risks posed to human health by the hazardous chemicals contained in underground and surface water sources and in soil. Moreover, by conducting both deterministic and probabilistic risk assessments, the article determines that the probabilistic assessment provides more accurate and qualitative information for decision-making on the use of environmental protection measures, which often saves the financial and material resources needed for their implementation.
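
    The deterministic-versus-probabilistic contrast described above can be sketched with a toy hazard-quotient model: a worst-case point estimate against Monte Carlo percentiles. All parameter ranges are invented for illustration:

```python
import random

random.seed(3)

# Toy hazard-quotient model: HQ = (concentration * intake) / reference_dose.
# The ranges below are illustrative stand-ins for site-characterization data.
n = 50000
hqs = sorted(
    random.uniform(0.1, 2.0)        # concentration (mg/L)
    * random.uniform(0.5, 1.5)      # intake factor
    / random.uniform(1.0, 4.0)      # reference dose (mg/kg-day)
    for _ in range(n)
)
p50, p95 = hqs[n // 2], hqs[int(n * 0.95)]

# Deterministic worst case: max concentration and intake, min reference dose.
worst_case = 2.0 * 1.5 / 1.0

print(f"median HQ {p50:.2f}, 95th percentile HQ {p95:.2f}, worst case {worst_case:.2f}")
```

    The worst-case product stacks every conservative assumption at once, so it exceeds even the 95th percentile of the simulated distribution; basing remediation decisions on the percentile rather than the stacked worst case is the cost saving the abstract points to.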

  20. Probabilistic Fatigue Life Updating for Railway Bridges Based on Local Inspection and Repair.

    Science.gov (United States)

    Lee, Young-Joo; Kim, Robin E; Suh, Wonho; Park, Kiwon

    2017-04-24

    Railway bridges are exposed to repeated train loads, which may cause fatigue failure. As critical links in a transportation network, railway bridges are expected to survive for a target period of time, but sometimes they fail earlier than expected. To guarantee the target bridge life, bridge maintenance activities such as local inspection and repair should be undertaken properly. However, this is a challenging task because there are various sources of uncertainty associated with aging bridges, train loads, environmental conditions, and maintenance work. Therefore, to perform optimal risk-based maintenance of railway bridges, it is essential to estimate the probabilistic fatigue life of a railway bridge and update the life information based on the results of local inspections and repair. Recently, a system reliability approach was proposed to evaluate the fatigue failure risk of structural systems and update the prior risk information in various inspection scenarios. However, this approach can handle only a constant-amplitude load and has limitations in considering a cyclic load with varying amplitude levels, which is the major loading pattern generated by train traffic. In addition, it is not feasible to update the prior risk information after bridges are repaired. In this research, the system reliability approach is further developed so that it can handle a varying-amplitude load and update the system-level risk of fatigue failure for railway bridges after inspection and repair. The proposed method is applied to a numerical example of an in-service railway bridge, and the effects of inspection and repair on the probabilistic fatigue life are discussed.
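
    The updating step, revising a probabilistic life estimate after a clean inspection, can be sketched with a discrete Bayesian filter over a crack-growth parameter. The exponential growth law, probability-of-detection curve, and all numbers are illustrative assumptions, not the paper's system-reliability formulation:

```python
import math

# Toy growth law: crack size a(t) = a0 * exp(C * t), failure when a(t) = a_f.
a0, a_f, t_insp, a_det = 1.0, 50.0, 10.0, 5.0

# Discrete uniform prior over the growth parameter C (illustrative values).
Cs = [0.05 + 0.01 * k for k in range(30)]          # 0.05 .. 0.34
prior = [1.0 / len(Cs)] * len(Cs)

def life(C):
    return math.log(a_f / a0) / C                  # time until a(t) reaches a_f

def pod(a):
    # probability-of-detection curve: bigger cracks are easier to find
    return 1.0 - math.exp(-a / a_det)

# Inspection at t_insp found nothing -> likelihood = 1 - POD(a(t_insp; C))
like = [1.0 - pod(a0 * math.exp(C * t_insp)) for C in Cs]
post_unnorm = [p * l for p, l in zip(prior, like)]
z = sum(post_unnorm)
post = [p / z for p in post_unnorm]

prior_life = sum(p * life(C) for p, C in zip(prior, Cs))
post_life = sum(p * life(C) for p, C in zip(post, Cs))
print(f"expected life: prior {prior_life:.1f}, after clean inspection {post_life:.1f}")
```

    A no-detection result is more likely under slow growth, so the posterior shifts toward small C and the predicted fatigue life increases; a detection or a repair would enter the same machinery through a different likelihood, as in the paper's inspection and repair scenarios.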

  1. Efficient Multi-keyword Ranked Search over Outsourced Cloud Data based on Homomorphic Encryption

    Directory of Open Access Journals (Sweden)

    Nie Mengxi

    2016-01-01

    Full Text Available With the development of cloud computing, more and more data owners are motivated to outsource their data to the cloud server for greater flexibility and lower expenditure. Because the security of outsourced data must be guaranteed, encryption methods must be used, which obsolete traditional data utilization based on plaintext, e.g., keyword search. Some schemes have been proposed to enable search over encrypted data, e.g., top-k single- or multiple-keyword retrieval. However, the efficiency of these schemes is not high enough to be practical in cloud computing. In this paper, we propose a new scheme based on homomorphic encryption to solve the challenging problem of privacy-preserving, efficient multi-keyword ranked search over outsourced cloud data. In our scheme, the inner product is adopted to measure relevance scores, and the technique of relevance feedback is used to reflect the search preferences of the data users. Security analysis shows that the proposed scheme can meet strict privacy requirements for such a secure cloud data utilization system. Performance evaluation demonstrates that the proposed scheme achieves low overhead in both computation and communication.
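
    The relevance-scoring step of such a scheme is an inner product between query and document index vectors. The sketch below shows that ranking step in plaintext, with the homomorphic-encryption layer omitted and all term weights invented:

```python
# Plaintext sketch of the ranking step only: each document has a term-weight
# vector, the query is a weighted keyword vector, and relevance is their
# inner product. (In the scheme itself both sides are encrypted, and the
# inner product is evaluated homomorphically.)
docs = {
    "doc1": {"cloud": 0.9, "search": 0.3},
    "doc2": {"cloud": 0.2, "encryption": 0.8, "search": 0.5},
    "doc3": {"ranking": 0.7, "search": 0.9},
}

def score(doc_vec, query_vec):
    return sum(doc_vec.get(t, 0.0) * w for t, w in query_vec.items())

def top_k(query_vec, k=2):
    ranked = sorted(docs, key=lambda d: score(docs[d], query_vec), reverse=True)
    return ranked[:k]

query = {"encryption": 1.0, "search": 0.5}
print(top_k(query))   # doc2 scores 0.8 + 0.25 = 1.05, the highest
```

    Relevance feedback, as used in the paper, would reweight the query vector based on which results the user marks relevant before re-running the same inner-product ranking.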

  2. A heuristic ranking approach on capacity benefit margin determination using Pareto-based evolutionary programming technique.

    Science.gov (United States)

    Othman, Muhammad Murtadha; Abd Rahman, Nurulazmi; Musirin, Ismail; Fotuhi-Firuzabad, Mahmud; Rajabi-Ghahnavieh, Abbas

    2015-01-01

    This paper introduces a novel multiobjective approach for capacity benefit margin (CBM) assessment taking into account tie-line reliability of interconnected systems. CBM is the imperative information utilized as a reference by the load-serving entities (LSE) to estimate a certain margin of transfer capability so that a reliable access to generation through interconnected system could be attained. A new Pareto-based evolutionary programming (EP) technique is used to perform a simultaneous determination of CBM for all areas of the interconnected system. The selection of CBM at the Pareto optimal front is proposed to be performed by referring to a heuristic ranking index that takes into account system loss of load expectation (LOLE) in various conditions. Eventually, the power transfer based available transfer capability (ATC) is determined by considering the firm and nonfirm transfers of CBM. A comprehensive set of numerical studies are conducted on the modified IEEE-RTS79 and the performance of the proposed method is numerically investigated in detail. The main advantage of the proposed technique is in terms of flexibility offered to an independent system operator in selecting an appropriate solution of CBM simultaneously for all areas.

  3. A Heuristic Ranking Approach on Capacity Benefit Margin Determination Using Pareto-Based Evolutionary Programming Technique

    Directory of Open Access Journals (Sweden)

    Muhammad Murtadha Othman

    2015-01-01

    Full Text Available This paper introduces a novel multiobjective approach for capacity benefit margin (CBM) assessment taking into account tie-line reliability of interconnected systems. CBM is the imperative information utilized as a reference by the load-serving entities (LSE) to estimate a certain margin of transfer capability so that reliable access to generation through the interconnected system can be attained. A new Pareto-based evolutionary programming (EP) technique is used to perform a simultaneous determination of CBM for all areas of the interconnected system. The selection of CBM at the Pareto optimal front is proposed to be performed by referring to a heuristic ranking index that takes into account the system loss of load expectation (LOLE) in various conditions. Eventually, the power transfer based available transfer capability (ATC) is determined by considering the firm and nonfirm transfers of CBM. A comprehensive set of numerical studies is conducted on the modified IEEE-RTS79, and the performance of the proposed method is numerically investigated in detail. The main advantage of the proposed technique is the flexibility offered to an independent system operator in selecting an appropriate solution of CBM simultaneously for all areas.

  4. A comparison of sequential and information-based methods for determining the co-integration rank in heteroskedastic VAR models

    DEFF Research Database (Denmark)

    Cavaliere, Giuseppe; Angelis, Luca De; Rahbek, Anders

    2015-01-01

    In this article, we investigate the behaviour of a number of methods for estimating the co-integration rank in VAR systems characterized by heteroskedastic innovation processes. In particular, we compare the efficacy of the most widely used information criteria, such as the Akaike Information Criterion… The relative finite-sample properties of the different methods are investigated by means of a Monte Carlo simulation study. For the simulation DGPs considered in the analysis, we find that the BIC-based procedure and the bootstrap sequential test procedure deliver the best overall performance in terms… -based method to over-estimate the co-integration rank in relatively small sample sizes.

  5. Probabilistic commodity-flow-based focusing of monitoring activities to facilitate early detection of Phytophthora ramorum outbreaks

    Science.gov (United States)

    Steven C. McKelvey; William D. Smith; Frank Koch

    2012-01-01

    This project summary describes a probabilistic model developed with funding support from the Forest Health Monitoring Program of the Forest Service, U.S. Department of Agriculture (BaseEM Project SO-R-08-01). The model has been implemented in SODBuster, a standalone software package developed using the Java software development kit from Sun Microsystems.

  6. Learning Probabilistic Logic Models from Probabilistic Examples.

    Science.gov (United States)

    Chen, Jianzhong; Muggleton, Stephen; Santos, José

    2008-10-01

    We revisit an application developed originally using abductive Inductive Logic Programming (ILP) for modeling inhibition in metabolic networks. The example data were derived from studies of the effects of toxins on rats using Nuclear Magnetic Resonance (NMR) time-trace analysis of their biofluids, together with background knowledge representing a subset of the Kyoto Encyclopedia of Genes and Genomes (KEGG). We now apply two Probabilistic ILP (PILP) approaches, abductive Stochastic Logic Programs (SLPs) and PRogramming In Statistical modeling (PRISM), to the application. Both approaches support abductive learning and probability predictions. Abductive SLPs are a PILP framework that provides possible-worlds semantics to SLPs through abduction. Instead of learning logic models from non-probabilistic examples as done in ILP, the PILP approach applied in this paper is based on a general technique for introducing probability labels within a standard scientific experimental setting involving control and treated data. Our results demonstrate that the PILP approach provides a way of learning probabilistic logic models from probabilistic examples, and the PILP models learned from probabilistic examples lead to a significant decrease in error accompanied by improved insight from the learned results compared with the PILP models learned from non-probabilistic examples.

  7. Probabilistic insurance

    OpenAIRE

    Wakker, P.P.; Thaler, R.H.; Tversky, A.

    1997-01-01

    Probabilistic insurance is an insurance policy involving a small probability that the consumer will not be reimbursed. Survey data suggest that people dislike probabilistic insurance and demand more than a 20% reduction in the premium to compensate for a 1% default risk. While these preferences are intuitively appealing, they are difficult to reconcile with expected utility theory. Under highly plausible assumptions about the utility function, willingness to pay for probabilistic i...

  8. Ranking multiple docking solutions based on the conservation of inter-residue contacts

    KAUST Repository

    Oliva, Romina M.; Vangone, Anna; Cavallo, Luigi

    2013-01-01

    … conformations in the top positions is still an open problem. Herein we present CONSRANK, a simple and effective tool to rank multiple docking solutions, which relies on the conservation of inter-residue contacts in the analyzed decoys ensemble. First…

  9. Ranking alternatives based on imprecise multi-criteria data and pairwise overlap dominance relations

    DEFF Research Database (Denmark)

    Franco de los Rios, Camilo Andres; Hougaard, Jens Leth; Nielsen, Kurt

    …illustrative example is given for comparison with standard methods like PROMETHEE. The proposed methodology takes into account the risk attitudes of decision makers, organizing the alternatives and ranking them according to their relevance. The whole interactive decision support allows understanding…

  10. Face Recognition for Access Control Systems Combining Image-Difference Features Based on a Probabilistic Model

    Science.gov (United States)

    Miwa, Shotaro; Kage, Hiroshi; Hirai, Takashi; Sumi, Kazuhiko

    We propose a probabilistic face recognition algorithm for Access Control Systems (ACSs). Compared with existing ACSs using low-cost IC cards, face recognition has advantages in usability and security, in that it does not require people to hold cards over scanners and does not accept impostors with authorized cards. Therefore, face recognition attracts more interest in security markets than IC cards. But in security markets where low-cost ACSs exist, price competition is important, and there is a limitation on the quality of available cameras and image control. Therefore, ACSs using face recognition are required to handle much lower-quality images, such as defocused and poorly gain-controlled images, than high-security systems such as immigration control. To tackle such image quality problems, we developed a face recognition algorithm based on a probabilistic model which combines a variety of image-difference features trained by Real AdaBoost with their prior probability distributions. It evaluates and utilizes only reliable features among the trained ones during each authentication, and achieves high recognition performance rates. A field evaluation using a pseudo access control system installed in our office shows that the proposed system achieves a constant, high recognition performance rate independent of face image quality, with about four times lower EER (Equal Error Rate) under a variety of image conditions than one without any prior probability distributions. On the other hand, using image-difference features without any prior probabilities is sensitive to image quality. We also evaluated PCA, which has worse, but constant, performance rates because of its general optimization over the whole data. Compared with PCA, Real AdaBoost without any prior distribution performs twice as well under good image conditions, but degrades to a performance comparable to PCA under poor image conditions.

  11. Alignment and prediction of cis-regulatory modules based on a probabilistic model of evolution.

    Directory of Open Access Journals (Sweden)

    Xin He

    2009-03-01

    Full Text Available Cross-species comparison has emerged as a powerful paradigm for predicting cis-regulatory modules (CRMs) and understanding their evolution. The comparison requires reliable sequence alignment, which remains a challenging task for less conserved noncoding sequences. Furthermore, the existing models of DNA sequence evolution generally do not explicitly treat the special properties of CRM sequences. To address these limitations, we propose a model of CRM evolution that captures different modes of evolution of functional transcription factor binding sites (TFBSs) and the background sequences. A particularly novel aspect of our work is a probabilistic model of gains and losses of TFBSs, a process recognized as an important part of regulatory sequence evolution. We present a computational framework that uses this model to solve the problems of CRM alignment and prediction. Our alignment method is similar to existing methods of statistical alignment but uses the conserved binding sites to improve alignment. Our CRM prediction method deals with the inherent uncertainties of binding site annotations and sequence alignment in a probabilistic framework. In simulated as well as real data, we demonstrate that our program is able to improve both alignment and prediction of CRM sequences over several state-of-the-art methods. Finally, we used alignments produced by our program to study binding site conservation in genome-wide binding data of key transcription factors in the Drosophila blastoderm, with two intriguing results: (i) the factor-bound sequences are under strong evolutionary constraint even if their neighboring genes are not expressed in the blastoderm, and (ii) binding sites in distal bound sequences (relative to transcription start sites) tend to be more conserved than those in proximal regions. Our approach is implemented as software, EMMA (Evolutionary Model-based cis-regulatory Module Analysis), ready to be applied in a broad biological context.

  12. Modifications of Probabilistic Safety Assessment-1 Nuclear Power Plant Dukovany based upon new version of Emergency Operating Procedures

    International Nuclear Information System (INIS)

    Aldorf, R.

    1997-01-01

    In the frame of the 'Living Probabilistic Safety Assessment-1 Nuclear Power Plant Dukovany Project', performed by the Nuclear Research Institute Rez during 1997, it is planned to assess on a Probabilistic Safety Assessment-1 basis the impact of the Emergency Response Guidelines (as one particular item from the list of other modifications) on plant safety. The following highlights orient the reader in the main general aspects, findings and issues of the work currently underway. Earlier results of Probabilistic Safety Assessment-1 for Nuclear Power Plant Dukovany revealed that human behaviour during accident progression scenarios is one of the most important aspects of plant safety. The current effort of the Dukovany (Czech Republic) and Bohunice (Slovak Republic) nuclear power plants is focused on the development of qualitatively new, symptom-based Emergency Operating Procedures called Emergency Response Guidelines. The supplier, Westinghouse Energy Systems Europe, Brussels, works in cooperation with teams of specialists from both nuclear power plants. In the frame of the project it is planned to demonstrate, on a Probabilistic Safety Assessment-1 basis, the expected positive impact of the Emergency Response Guidelines on plant safety. Since this contract is still in progress, only preliminary conclusions and observations can be released. Compared to the original Emergency Operating Procedures, the Emergency Response Guidelines substantially reduce the uncertainty of general human behaviour during the plant response to an accident. From the point of view of the current scope of the Probabilistic Safety Assessment for Dukovany (up to core damage), it can be concluded that the Emergency Response Guidelines represent an adequately broad basis for mitigating any initiating event

  13. Variable importance analysis based on rank aggregation with applications in metabolomics for biomarker discovery.

    Science.gov (United States)

    Yun, Yong-Huan; Deng, Bai-Chuan; Cao, Dong-Sheng; Wang, Wei-Ting; Liang, Yi-Zeng

    2016-03-10

    Biomarker discovery is one important goal in metabolomics; it is typically modeled as selecting the most discriminating metabolites for classification and is often referred to as variable importance analysis or variable selection. A number of variable importance analysis methods for discovering biomarkers in metabolomics studies have been proposed. However, different methods are most likely to generate different variable ranking results due to their different principles. Each method generates a variable ranking list just as an expert presents an opinion, and the inconsistency between different variable ranking methods is often ignored. To address this problem, a simple and ideal solution is that every ranking should be taken into account. In this study, a strategy called rank aggregation was employed. It is an indispensable tool for merging individual ranking lists into a single "super"-list reflective of the overall preference or importance within the population. This "super"-list is regarded as the final ranking for biomarker discovery and was used to select the best variable subset with the highest predictive classification accuracy. Nine methods were used, including three univariate filtering methods and six multivariate methods. When applied to two metabolic datasets (Childhood overweight dataset and Tubulointerstitial lesions dataset), the results show that rank aggregation performs well, with higher prediction accuracy than using all variables. It is also better than the penalized method, the least absolute shrinkage and selection operator (LASSO), achieving higher prediction accuracy or fewer selected variables, which are more interpretable. Copyright © 2016 Elsevier B.V. All rights reserved.
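
    The rank-aggregation idea described above can be illustrated with a minimal mean-rank (Borda-style) merge. This is a generic sketch, not the paper's specific aggregation algorithm, and the metabolite names are invented.

```python
# Hypothetical sketch of rank aggregation by mean rank (Borda-style);
# the paper's aggregation may optimize a different objective.
def aggregate_ranks(ranking_lists):
    """Merge several ranking lists (best-first) into one 'super'-list.

    Each list orders the same variables; a variable's aggregate score is
    its mean position across lists (lower = more important).
    """
    items = set(ranking_lists[0])
    mean_pos = {
        v: sum(r.index(v) for r in ranking_lists) / len(ranking_lists)
        for v in items
    }
    # Sort by mean position, breaking ties alphabetically.
    return sorted(items, key=lambda v: (mean_pos[v], v))

# Three "experts" (ranking methods) mostly agree that metabolite "m1"
# is the most discriminating:
super_list = aggregate_ranks([
    ["m1", "m2", "m3"],
    ["m1", "m3", "m2"],
    ["m2", "m1", "m3"],
])
# super_list is ["m1", "m2", "m3"]
```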

  14. A Rank-Constrained Matrix Representation for Hypergraph-Based Subspace Clustering

    Directory of Open Access Journals (Sweden)

    Yubao Sun

    2015-01-01

    Full Text Available This paper presents a novel, rank-constrained matrix representation combined with hypergraph spectral analysis to enable the recovery of the original subspace structures of corrupted data. Real-world data are frequently corrupted with both sparse error and noise. Our matrix decomposition model separates the low-rank, sparse error, and noise components from the data in order to enhance robustness to the corruption. In order to obtain the desired rank representation of the data within a dictionary, our model directly utilizes rank constraints by restricting the upper bound of the rank range. An alternative projection algorithm is proposed to estimate the low-rank representation and separate the sparse error from the data matrix. To further capture the complex relationship between data distributed in multiple subspaces, we use hypergraph to represent the data by encapsulating multiple related samples into one hyperedge. The final clustering result is obtained by spectral decomposition of the hypergraph Laplacian matrix. Validation experiments on the Extended Yale Face Database B, AR, and Hopkins 155 datasets show that the proposed method is a promising tool for subspace clustering.

  15. PageRank-based identification of signaling crosstalk from transcriptomics data: the case of Arabidopsis thaliana.

    Science.gov (United States)

    Omranian, Nooshin; Mueller-Roeber, Bernd; Nikoloski, Zoran

    2012-04-01

    The levels of cellular organization, from gene transcription to translation to protein-protein interaction and metabolism, operate via tightly regulated mutual interactions, facilitating organismal adaptability and various stress responses. Characterizing the mutual interactions between genes, transcription factors, and proteins involved in signaling, termed crosstalk, is therefore crucial for understanding and controlling cells' functionality. We aim at using high-throughput transcriptomics data to discover previously unknown links between signaling networks. We propose and analyze a novel method for crosstalk identification which relies on transcriptomics data and overcomes the lack of complete information for signaling pathways in Arabidopsis thaliana. Our method first employs a network-based transformation of the results from the statistical analysis of differential gene expression in given groups of experiments under different signal-inducing conditions. The stationary distribution of a random walk (similar to the PageRank algorithm) on the constructed network is then used to determine the putative transcripts interrelating different signaling pathways. With the help of the proposed method, we analyze a transcriptomics data set including experiments from four different stresses/signals: nitrate, sulfur, iron, and hormones. We identified promising gene candidates, downstream of the transcription factors (TFs), associated to signaling crosstalk, which were validated through literature mining. In addition, we conduct a comparative analysis with the only other available method in this field which used a biclustering-based approach. Surprisingly, the biclustering-based approach fails to robustly identify any candidate genes involved in the crosstalk of the analyzed signals. We demonstrate that our proposed method is more robust in identifying gene candidates involved downstream of the signaling crosstalk for species for which large transcriptomics data sets
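
    The stationary distribution of a random walk with teleportation (the PageRank-style score used above) can be sketched by plain power iteration. The toy graph and the standard 0.85 damping factor are illustrative, not the paper's transcript network.

```python
# Minimal PageRank sketch: power iteration for the stationary
# distribution of a random walk with teleportation on a directed graph
# given as {node: [targets]}. Graph below is a made-up example.
def pagerank(adj, damping=0.85, iters=100):
    nodes = list(adj)
    n = len(nodes)
    rank = {v: 1.0 / n for v in nodes}
    for _ in range(iters):
        new = {v: (1.0 - damping) / n for v in nodes}
        for v, outs in adj.items():
            if outs:
                share = damping * rank[v] / len(outs)
                for w in outs:
                    new[w] += share
            else:  # dangling node: spread its mass uniformly
                for w in nodes:
                    new[w] += damping * rank[v] / n
        rank = new
    return rank

scores = pagerank({"a": ["b"], "b": ["a", "c"], "c": ["a"]})
```

    Nodes that receive many (direct and indirect) links accumulate higher stationary probability, which is the rationale for ranking transcripts this way.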

  16. A Probabilistic Short-Term Water Demand Forecasting Model Based on the Markov Chain

    Directory of Open Access Journals (Sweden)

    Francesca Gagliardi

    2017-07-01

    Full Text Available This paper proposes a short-term water demand forecasting method based on the use of the Markov chain. This method provides estimates of future demands by calculating probabilities that the future demand value will fall within pre-assigned intervals covering the expected total variability. More specifically, two models based on homogeneous and non-homogeneous Markov chains were developed and presented. These models, together with two benchmark models (based on artificial neural network and naïve methods), were applied to three real-life case studies for the purpose of forecasting the respective water demands from 1 to 24 h ahead. The results obtained show that the model based on a homogeneous Markov chain provides more accurate short-term forecasts than the one based on a non-homogeneous Markov chain, which is in line with the artificial neural network model. Both Markov chain models enable probabilistic information regarding the stochastic demand forecast to be easily obtained.
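
    A homogeneous Markov-chain forecaster of the kind described can be sketched by estimating an empirical transition matrix over demand intervals. The two-state binning and the demand series below are invented for illustration.

```python
# Sketch of a homogeneous Markov-chain demand forecaster: demands are
# discretized into intervals (states), and one-step-ahead probabilities
# come from the empirical transition matrix.
from collections import Counter, defaultdict

def fit_transitions(states):
    counts = defaultdict(Counter)
    for s, t in zip(states, states[1:]):
        counts[s][t] += 1
    # Normalize each row of counts into probabilities.
    return {
        s: {t: c / sum(row.values()) for t, c in row.items()}
        for s, row in counts.items()
    }

# Hourly demand classified into "low"/"high" intervals:
series = ["low", "low", "high", "low", "low", "high", "high", "low"]
P = fit_transitions(series)
# Probability the next demand falls in each interval, given state "low":
next_probs = P["low"]
```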

  17. Ranking system for national regulatory jurisdictions based on pesticide standard values in major exposures

    Directory of Open Access Journals (Sweden)

    Zijian Li

    2017-07-01

    Full Text Available To control the risk of human exposure to pesticides, about 50 nations have promulgated pesticide soil regulatory guidance values (RGVs), and 104 nations have provided pesticide drinking water maximum concentration levels (MCLs). In addition, 90 nations have regulated pesticide agricultural commodity maximum residue limits (MRLs). Pesticide standard values (PSVs) for a single pesticide can vary over a range of six, seven, or even eight orders of magnitude. Some PSVs are too large to prevent the impact of pesticides on human health, and many nations have not yet provided PSVs for some commonly used pesticides. This research introduces several completeness-value and numerical-value methods to evaluate each national jurisdiction's performance on PSVs on a national basis. The national jurisdiction ranking system developed with these methods will be beneficial to environmental regulators in the management of PSVs. Results also indicate that European countries perform better in the regulation of pesticide soil RGVs, drinking water MCLs, and agricultural commodity MRLs.

  18. Data depth and rank-based tests for covariance and spectral density matrices

    KAUST Repository

    Chau, Joris

    2017-06-26

    In multivariate time series analysis, objects of primary interest to study cross-dependences in the time series are the autocovariance or spectral density matrices. Non-degenerate covariance and spectral density matrices are necessarily Hermitian and positive definite, and our primary goal is to develop new methods to analyze samples of such matrices. The main contribution of this paper is the generalization of the concept of statistical data depth for collections of covariance or spectral density matrices by exploiting the geometric properties of the space of Hermitian positive definite matrices as a Riemannian manifold. This allows one to naturally characterize most central or outlying matrices, but also provides a practical framework for rank-based hypothesis testing in the context of samples of covariance or spectral density matrices. First, the desired properties of a data depth function acting on the space of Hermitian positive definite matrices are presented. Second, we propose two computationally efficient pointwise and integrated data depth functions that satisfy each of these requirements. Several applications of the developed methodology are illustrated by the analysis of collections of spectral matrices in multivariate brain signal time series datasets.

  19. Rank-based permutation approaches for non-parametric factorial designs.

    Science.gov (United States)

    Umlauft, Maria; Konietschke, Frank; Pauly, Markus

    2017-11-01

    Inference methods for null hypotheses formulated in terms of distribution functions in general non-parametric factorial designs are studied. The methods can be applied to continuous, ordinal or even ordered categorical data in a unified way, and are based only on ranks. In this set-up Wald-type statistics and ANOVA-type statistics are the current state of the art. The first method is asymptotically exact but a rather liberal statistical testing procedure for small to moderate sample size, while the latter is only an approximation which does not possess the correct asymptotic α level under the null. To bridge these gaps, a novel permutation approach is proposed which can be seen as a flexible generalization of the Kruskal-Wallis test to all kinds of factorial designs with independent observations. It is proven that the permutation principle is asymptotically correct while keeping its finite exactness property when data are exchangeable. The results of extensive simulation studies foster these theoretical findings. A real data set exemplifies its applicability. © 2017 The British Psychological Society.
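
    The flavor of such a rank-based permutation approach can be sketched with a Kruskal-Wallis-type statistic whose null distribution is obtained by permuting group labels. This simplified version assumes no ties and independent observations, and is not the paper's full factorial procedure; the data are invented.

```python
# Rank-based permutation test sketch: compute the Kruskal-Wallis H
# statistic on pooled ranks, then compare against its permutation
# distribution under random reshuffling of group labels.
import random

def kruskal_stat(groups):
    pooled = sorted((x, gi) for gi, xs in enumerate(groups) for x in xs)
    rank_sums = [0.0] * len(groups)
    for r, (_, gi) in enumerate(pooled, start=1):  # no ties assumed
        rank_sums[gi] += r
    n = len(pooled)
    return 12.0 / (n * (n + 1)) * sum(
        rs * rs / len(g) for rs, g in zip(rank_sums, groups)
    ) - 3 * (n + 1)

def permutation_pvalue(groups, n_perm=2000, seed=0):
    rng = random.Random(seed)
    obs = kruskal_stat(groups)
    pooled = [x for g in groups for x in g]
    sizes = [len(g) for g in groups]
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        resampled, i = [], 0
        for s in sizes:
            resampled.append(pooled[i:i + s])
            i += s
        if kruskal_stat(resampled) >= obs:
            hits += 1
    return (hits + 1) / (n_perm + 1)  # add-one to avoid p = 0

# Three well-separated groups should yield a small p-value:
p = permutation_pvalue([[1.2, 1.9, 1.7], [3.1, 3.5, 2.8], [5.0, 4.2, 4.8]])
```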

  20. Web-based tool for subjective observer ranking of compressed medical images

    Science.gov (United States)

    Langer, Steven G.; Stewart, Brent K.; Andrew, Rex K.

    1999-05-01

    In the course of evaluating various compression schemes for ultrasound teleradiology applications, it became obvious that paper-based methods of data collection were time consuming and error prone. A method was sought that allowed participating radiologists to view the ultrasound video clips (compressed to varying degrees) at their desks. Furthermore, the method should allow observers to enter their evaluations and, when finished, automatically submit the data to our statistical analysis engine. We found the World Wide Web offered a ready solution. A web page was constructed that contains 18 embedded AVI video clips. The 18 clips represent 6 distinct anatomical areas, compressed by various methods and amounts, and then randomly distributed through the web page. To the right of each video, a series of questions asks the observer to rank (1-5) his/her ability to answer diagnostically relevant questions. When completed, the observer presses 'Submit' and a file of tab-delimited text is created, which can then be imported into an Excel workbook. Kappa analysis is then performed, and the resulting plots demonstrate observer preferences.
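
    The kappa analysis mentioned at the end can be sketched for a pair of observers with Cohen's kappa; the abstract does not specify which kappa variant was used, and the 1-5 ratings below are invented.

```python
# Cohen's kappa sketch for two observers' ratings: chance-corrected
# agreement (1 = perfect agreement, 0 = agreement at chance level).
def cohens_kappa(r1, r2):
    assert len(r1) == len(r2)
    n = len(r1)
    cats = sorted(set(r1) | set(r2))
    p_obs = sum(a == b for a, b in zip(r1, r2)) / n
    # Expected agreement from each observer's marginal rating frequencies.
    p_exp = sum((r1.count(c) / n) * (r2.count(c) / n) for c in cats)
    return (p_obs - p_exp) / (1 - p_exp)

kappa = cohens_kappa([5, 4, 4, 3, 5, 2], [5, 4, 3, 3, 5, 2])
```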

  1. Rank-Based miRNA Signatures for Early Cancer Detection

    Directory of Open Access Journals (Sweden)

    Mario Lauria

    2014-01-01

    Full Text Available We describe a new signature definition and analysis method to be used as a biomarker for early cancer detection. Our approach is based on the construction of a reference map of transcriptional signatures of both healthy and cancer-affected individuals using circulating miRNA from a large number of subjects. Once such a map is available, the diagnosis for a new patient can be performed by observing the relative position of his/her transcriptional signature on the map. To demonstrate its efficacy for this specific application, we report the results of applying our method to published datasets of circulating miRNA and quantify its performance against current state-of-the-art methods. A number of additional features make this method an ideal candidate for large-scale use, for example as a mass screening tool for early cancer detection or for at-home diagnostics. Specifically, our method is minimally invasive (it works well with circulating miRNA), it is robust with respect to lab-to-lab protocol variability and batch effects (it requires only that the relative ranking of miRNA expression values in a profile be accurate, not their absolute values), and it is scalable to a large number of subjects. Finally, we discuss the need for HPC capability in a widespread application of our or similar methods.
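
    The core robustness claim, that only the relative ranking of miRNA expression matters, can be sketched by classifying a new profile via Spearman rank correlation against reference signatures. The two-class "map" and the profiles below are toy stand-ins for the paper's reference map.

```python
# Rank-signature classification sketch: convert profiles to ranks and
# assign the label of the most rank-correlated reference signature.
def to_ranks(profile):
    order = sorted(range(len(profile)), key=lambda i: profile[i])
    ranks = [0] * len(profile)
    for r, i in enumerate(order):
        ranks[i] = r
    return ranks

def spearman(x, y):
    """Spearman correlation (no ties assumed)."""
    rx, ry = to_ranks(x), to_ranks(y)
    n = len(x)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

references = {
    "healthy": [1.0, 2.0, 3.0, 4.0],
    "cancer": [4.0, 3.0, 2.0, 1.0],
}
# Absolute values differ (e.g. batch effects), but ranks match "healthy":
patient = [1.1, 2.5, 2.9, 4.2]
label = max(references, key=lambda k: spearman(references[k], patient))
```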

  2. Wilcoxon signed-rank-based technique for the pulse-shape analysis of HPGe detectors

    Energy Technology Data Exchange (ETDEWEB)

    Martín, S., E-mail: sergiomr@usal.es; Quintana, B.; Barrientos, D.

    2016-07-01

    The characterization of the electric response of segmented-contact high-purity germanium detectors requires scanning systems capable of accurately associating each pulse with the position of the interaction that generated it. This process requires an algorithm sensitive to changes above the electronic noise in the pulse shapes produced at different positions, depending on the resolution of the Ge crystal. In this work, a pulse-shape comparison technique based on the Wilcoxon signed-rank test has been developed. It provides a method to distinguish pulses coming from different interaction points in the germanium crystal. Therefore, this technique is a necessary step for building a reliable pulse-shape database that can be used later for the determination of the position of interaction for γ-ray tracking spectrometry devices such as AGATA, GRETA or GERDA. The method was validated by comparison with a χ² test using simulated and experimental pulses corresponding to a Broad Energy germanium detector (BEGe).
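
    A bare-bones version of the Wilcoxon signed-rank comparison between two digitized pulses might look as follows. Zero differences are dropped and tied absolute differences are not averaged, a simplification of the full test; the pulse samples are illustrative, not calibrated detector data.

```python
# Wilcoxon signed-rank sketch: rank paired sample differences by
# magnitude; the statistic is the smaller of the positive/negative
# rank sums. A small value on one side flags a systematic shape shift.
def wilcoxon_w(x, y):
    diffs = [b - a for a, b in zip(x, y) if b != a]  # drop zeros
    ranked = sorted(diffs, key=abs)                  # ties not averaged
    w_pos = sum(r for r, d in enumerate(ranked, start=1) if d > 0)
    w_neg = sum(r for r, d in enumerate(ranked, start=1) if d < 0)
    return min(w_pos, w_neg)

pulse_a = [0.0, 0.1, 0.4, 0.8, 1.0, 0.9, 0.5]
pulse_b = [0.0, 0.2, 0.5, 0.9, 1.1, 1.0, 0.6]  # systematically higher
w = wilcoxon_w(pulse_a, pulse_b)  # 0: all differences on one side
```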

  3. Data depth and rank-based tests for covariance and spectral density matrices

    KAUST Repository

    Chau, Joris; Ombao, Hernando; Sachs, Rainer von

    2017-01-01

    In multivariate time series analysis, objects of primary interest to study cross-dependences in the time series are the autocovariance or spectral density matrices. Non-degenerate covariance and spectral density matrices are necessarily Hermitian and positive definite, and our primary goal is to develop new methods to analyze samples of such matrices. The main contribution of this paper is the generalization of the concept of statistical data depth for collections of covariance or spectral density matrices by exploiting the geometric properties of the space of Hermitian positive definite matrices as a Riemannian manifold. This allows one to naturally characterize most central or outlying matrices, but also provides a practical framework for rank-based hypothesis testing in the context of samples of covariance or spectral density matrices. First, the desired properties of a data depth function acting on the space of Hermitian positive definite matrices are presented. Second, we propose two computationally efficient pointwise and integrated data depth functions that satisfy each of these requirements. Several applications of the developed methodology are illustrated by the analysis of collections of spectral matrices in multivariate brain signal time series datasets.

  4. Diagnosis of students' ability in a statistical course based on Rasch probabilistic outcome

    Science.gov (United States)

    Mahmud, Zamalia; Ramli, Wan Syahira Wan; Sapri, Shamsiah; Ahmad, Sanizah

    2017-06-01

    Measuring students' ability and performance is important in assessing how well students have learned and mastered statistical courses. Any improvement in learning will depend on the students' approaches to learning, which are relevant to certain factors of learning, namely assessment methods based on tasks consisting of quizzes, tests, assignments and a final examination. This study attempts an alternative approach to measuring students' ability in an undergraduate statistics course based on the Rasch probabilistic model. Firstly, this study aims to explore the learning outcome patterns of students in a statistics course (Applied Probability and Statistics) based on an Entrance-Exit survey. This is followed by investigating students' perceived learning ability based on four Course Learning Outcomes (CLOs) and students' actual learning ability based on their final examination scores. Rasch analysis revealed that students perceived themselves as lacking the ability to understand about 95% of the statistics concepts at the beginning of the class, but they eventually had a good understanding by the end of the 14-week class. In terms of performance in the final examination, students' ability to understand the topics varies with the probability values given the ability of the students and the difficulty of the questions. The majority found the probability and counting rules topic the most difficult to learn.
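
    The dichotomous Rasch model underlying this kind of analysis gives the probability of a correct response as a logistic function of the gap between student ability θ and item difficulty b; a one-line sketch with illustrative parameter values:

```python
# Rasch model sketch: P(correct) = exp(theta - b) / (1 + exp(theta - b)).
import math

def rasch_probability(theta, b):
    """Probability a student of ability theta answers an item of
    difficulty b correctly."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

# Ability matching item difficulty gives a 50% success chance; a harder
# item (larger b) pushes the probability down:
p_equal = rasch_probability(0.0, 0.0)  # 0.5
p_hard = rasch_probability(0.0, 2.0)
```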

  5. Microseismic Event Relocation and Focal Mechanism Estimation Based on PageRank Linkage

    Science.gov (United States)

    Aguiar, A. C.; Myers, S. C.

    2017-12-01

    Microseismicity associated with enhanced geothermal systems (EGS) is key to understanding how subsurface stimulation can modify stress, fracture rock, and increase permeability. Large numbers of microseismic events are commonly associated with hydroshearing an EGS, making data mining methods useful in their analysis. We focus on PageRank, originally developed as Google's search engine, and subsequently adapted for use in seismology to detect low-frequency earthquakes by linking events directly and indirectly through cross-correlation (Aguiar and Beroza, 2014). We expand on this application by using PageRank to define signal-correlation topology for micro-earthquakes from the Newberry Volcano EGS in Central Oregon, which has been stimulated twice using high-pressure fluid injection. We create PageRank signal families from both data sets and compare these to the spatial and temporal proximity of associated earthquakes. PageRank families are relocated using differential travel times measured by waveform cross-correlation (CC) and the Bayesloc approach (Myers et al., 2007). Prior to relocation, events are loosely clustered, with some events at a distance from the cluster; after relocation, event families are found to be tightly clustered. Indirect linkage of signals using PageRank is a reliable way to increase the number of events confidently determined to be similar, suggesting an efficient and effective grouping of earthquakes with similar physical characteristics (i.e., location, focal mechanism, stress drop). We further explore the possibility of using PageRank families to identify events with similar relative phase polarities and estimate focal mechanisms following the method of Shelly et al. (2016), where CC measurements are used to determine individual polarities within event clusters. Given a positive result, PageRank might be a useful tool in adaptive approaches to enhance production at well-instrumented geothermal sites.
Prepared by LLNL under Contract DE-AC52-07NA27344

  6. The optimized expansion based low-rank method for wavefield extrapolation

    KAUST Repository

    Wu, Zedong

    2014-03-01

    Spectral methods are fast becoming an indispensable tool for wavefield extrapolation, especially in anisotropic media, because they tend to be dispersion- and artifact-free as well as highly accurate when solving the wave equation. However, for inhomogeneous media, we face difficulties in dealing with the mixed space-wavenumber domain extrapolation operator efficiently. To solve this problem, we evaluated an optimized expansion method that can approximate this operator with a low-rank variable separation representation. The rank defines the number of inverse Fourier transforms for each time extrapolation step, and thus, the lower the rank, the faster the extrapolation. The method uses optimization instead of matrix decomposition to find the optimal wavenumbers and velocities needed to approximate the full operator with its explicit low-rank representation. As a result, we obtain lower rank representations compared with the standard low-rank method within reasonable accuracy and thus cheaper extrapolations. Additional bounds set on the range of propagated wavenumbers to adhere to the physical wave limits yield unconditionally stable extrapolations regardless of the time step. An application on the BP model provided superior results compared to those obtained using the decomposition approach. For transversely isotropic media, because we used the pure P-wave dispersion relation, we obtained solutions that were free of the shear wave artifacts, and the algorithm does not require that n > 0. In addition, the required rank for the optimization approach to obtain high accuracy in anisotropic media was lower than that obtained by the decomposition approach, and thus, it was more efficient. A reverse time migration result for the BP tilted transverse isotropy model using this method as a wave propagator demonstrated the ability of the algorithm.

  7. RANKING OF COMPANIES ACCORDING TO THE INDICATORS OF CORPORATE SOCIAL RESPONSIBILITY BASED ON SWARA AND ARAS METHODS

    Directory of Open Access Journals (Sweden)

    Darjan Karabasevic

    2016-05-01

    Full Text Available The corporate sector and companies have recognized the importance of implementing a strategy of corporate social responsibility in order to enhance their image and responsibility towards society and the communities where they operate. In their everyday activities and operations, multinational companies pay increasing attention to sustainable models of corporate social responsibility. The focus of this paper is to identify indicators of corporate social responsibility and to rank companies according to those indicators. The proposed framework for evaluation and ranking is based on the SWARA and ARAS methods. The usability and efficiency of the proposed framework is shown on an illustrative example.
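
    A simplified additive-ratio (ARAS-style) ranking can be sketched as below. In the paper the criterion weights would come from SWARA; here both the weights and the company-by-indicator scores are invented for illustration.

```python
# ARAS-style sketch: append an ideal alternative, normalize each
# criterion column, compute weighted sums, and express each company's
# utility relative to the ideal.
def aras_utilities(matrix, weights):
    ideal = [max(col) for col in zip(*matrix)]  # best score per criterion
    rows = [ideal] + matrix
    col_sums = [sum(col) for col in zip(*rows)]
    scores = [
        sum(w * x / s for w, x, s in zip(weights, row, col_sums))
        for row in rows
    ]
    return [s / scores[0] for s in scores[1:]]  # utility vs. ideal

utilities = aras_utilities(
    [[7, 9, 6], [8, 6, 7], [5, 7, 9]],  # companies x CSR indicators
    [0.5, 0.3, 0.2],                     # SWARA-like criterion weights
)
best = utilities.index(max(utilities))   # index of top-ranked company
```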

  8. Selection and ranking of occupational safety indicators based on fuzzy AHP: A case study in road construction companies

    Directory of Open Access Journals (Sweden)

    Janackovic, Goran Lj.

    2013-11-01

    Full Text Available This paper presents the factors, performance, and indicators of occupational safety, as well as a method to select and rank occupational safety indicators based on the expert evaluation method and the fuzzy analytic hierarchy process (fuzzy AHP. A case study is done on road construction companies in Serbia. The key safety performance indicators for the road construction industry are identified and ranked according to the results of a survey that included experts who assessed occupational safety risks in these companies. The case study confirmed that organisational factors have a dominant effect on the quality of the occupational health and safety management system in Serbian road construction companies.

  9. A Novel Riemannian Metric Based on Riemannian Structure and Scaling Information for Fixed Low-Rank Matrix Completion.

    Science.gov (United States)

    Mao, Shasha; Xiong, Lin; Jiao, Licheng; Feng, Tian; Yeung, Sai-Kit

    2017-05-01

    Riemannian optimization has been widely used to deal with the fixed low-rank matrix completion problem, and the Riemannian metric is a crucial factor in obtaining the search direction in Riemannian optimization. This paper proposes a new Riemannian metric via simultaneously considering the Riemannian geometry structure and the scaling information, which is smoothly varying and invariant along the equivalence class. The proposed metric can make a tradeoff between the Riemannian geometry structure and the scaling information effectively. Essentially, it can be viewed as a generalization of some existing metrics. Based on the proposed Riemannian metric, we also design a Riemannian nonlinear conjugate gradient algorithm, which can efficiently solve the fixed low-rank matrix completion problem. Experiments on fixed low-rank matrix completion, collaborative filtering, and image and video recovery illustrate that the proposed method is superior to the state-of-the-art methods in convergence efficiency and numerical performance.

  10. Dose-volume based ranking of incident beam direction and its utility in facilitating IMRT beam placement

    International Nuclear Information System (INIS)

    Schreibmann, Eduard; Xing Lei

    2005-01-01

    Purpose: Beam orientation optimization in intensity-modulated radiation therapy (IMRT) is computationally intensive, and various single beam ranking techniques have been proposed to reduce the search space. Up to this point, none of the existing ranking techniques considers the clinically important dose-volume effects of the involved structures, which may lead to clinically irrelevant angular ranking. The purpose of this work is to develop a clinically sensible angular ranking model with incorporation of dose-volume effects and to show its utility for IMRT beam placement. Methods and Materials: The general consideration in constructing this angular ranking function is that a beamlet/beam is preferable if it can deliver a higher dose to the target without exceeding the tolerance of the sensitive structures located on the path of the beamlet/beam. In the previously proposed dose-based approach, the beamlets are treated independently and, to compute the maximally deliverable dose to the target volume, the intensity of each beamlet is pushed to its maximum intensity without considering the values of other beamlets. When volumetric structures are involved, the complication arises from the fact that there are numerous dose distributions corresponding to the same dose-volume tolerance. In this situation, the beamlets are not independent and an optimization algorithm is required to find the intensity profile that delivers the maximum target dose while satisfying the volumetric constraints. In this study, the behavior of a volumetric organ was modeled by using the equivalent uniform dose (EUD). A constrained sequential quadratic programming algorithm (CFSQP) was used to find the beam profile that delivers the maximum dose to the target volume without violating the EUD constraint or constraints. To assess the utility of the proposed technique, we planned a head-and-neck and abdominal case with and without the guidance of the angular ranking information. 
The qualities of the
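
    The equivalent uniform dose used above to model volumetric organs can be sketched with the generalized-EUD formula EUD = ((1/N) Σ d_i^a)^(1/a) for equal-volume voxels; the voxel doses and exponent values below are illustrative.

```python
# Generalized EUD sketch: a = 1 reduces to the mean dose (parallel
# organ behaviour), while a large exponent approaches the maximum dose,
# as appropriate for serial organs.
def eud(doses, a):
    """Generalized equivalent uniform dose for equal-volume voxels."""
    n = len(doses)
    return (sum(d ** a for d in doses) / n) ** (1.0 / a)

mean_like = eud([10.0, 20.0, 30.0], 1)    # mean dose: 20.0
serial_like = eud([10.0, 20.0, 30.0], 20)  # approaches the 30.0 maximum
```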

  11. CONSRANK: a server for the analysis, comparison and ranking of docking models based on inter-residue contacts

    KAUST Repository

    Chermak, Edrisse; Petta, A.; Serra, L.; Vangone, A.; Scarano, V.; Cavallo, Luigi; Oliva, R.

    2014-01-01

    Summary: Herein, we present CONSRANK, a web tool for analyzing, comparing and ranking protein–protein and protein–nucleic acid docking models, based on the conservation of inter-residue contacts and its visualization in 2D and 3D interactive contact maps.

  12. CONSRANK: a server for the analysis, comparison and ranking of docking models based on inter-residue contacts

    KAUST Repository

    Chermak, Edrisse

    2014-12-21

    Summary: Herein, we present CONSRANK, a web tool for analyzing, comparing and ranking protein–protein and protein–nucleic acid docking models, based on the conservation of inter-residue contacts and its visualization in 2D and 3D interactive contact maps.

  13. Probabilistic Insurance

    NARCIS (Netherlands)

    Wakker, P.P.; Thaler, R.H.; Tversky, A.

    1997-01-01

    Probabilistic insurance is an insurance policy involving a small probability that the consumer will not be reimbursed. Survey data suggest that people dislike probabilistic insurance and demand more than a 20% reduction in premium to compensate for a 1% default risk. These observations cannot be

  14. Probabilistic Insurance

    NARCIS (Netherlands)

    P.P. Wakker (Peter); R.H. Thaler (Richard); A. Tversky (Amos)

    1997-01-01

    textabstractProbabilistic insurance is an insurance policy involving a small probability that the consumer will not be reimbursed. Survey data suggest that people dislike probabilistic insurance and demand more than a 20% reduction in the premium to compensate for a 1% default risk. While these

  15. Sign rank versus Vapnik-Chervonenkis dimension

    Science.gov (United States)

    Alon, N.; Moran, Sh; Yehudayoff, A.

    2017-12-01

    This work studies the maximum possible sign rank of sign (N × N)-matrices with a given Vapnik-Chervonenkis dimension d. For d = 1, this maximum is three. For d = 2, this maximum is Θ̃(N^{1/2}). For d > 2, similar but slightly less accurate statements hold. The lower bounds improve on previous ones by Ben-David et al., and the upper bounds are novel. The lower bounds are obtained by probabilistic constructions, using a theorem of Warren in real algebraic topology. The upper bounds are obtained using a result of Welzl about spanning trees with low stabbing number, and using the moment curve. The upper bound technique is also used to: (i) provide estimates on the number of classes of a given Vapnik-Chervonenkis dimension, and the number of maximum classes of a given Vapnik-Chervonenkis dimension, answering a question of Frankl from 1989, and (ii) design an efficient algorithm that provides an O(N/log(N)) multiplicative approximation for the sign rank. We also observe a general connection between sign rank and spectral gaps which is based on Forster's argument. Consider the adjacency (N × N)-matrix of a Δ-regular graph with a second eigenvalue of absolute value λ and Δ ≤ N/2. We show that the sign rank of the signed version of this matrix is at least Δ/λ. We use this connection to prove the existence of a maximum class C ⊆ {±1}^N with Vapnik-Chervonenkis dimension 2 and sign rank Θ̃(N^{1/2}). This answers a question of Ben-David et al. regarding the sign rank of large Vapnik-Chervonenkis classes. We also describe limitations of this approach, in the spirit of the Alon-Boppana theorem. We further describe connections to communication complexity, geometry, learning theory, and combinatorics. Bibliography: 69 titles.

  16. Building a high-resolution T2-weighted MR-based probabilistic model of tumor occurrence in the prostate.

    Science.gov (United States)

    Nagarajan, Mahesh B; Raman, Steven S; Lo, Pechin; Lin, Wei-Chan; Khoshnoodi, Pooria; Sayre, James W; Ramakrishna, Bharath; Ahuja, Preeti; Huang, Jiaoti; Margolis, Daniel J A; Lu, David S K; Reiter, Robert E; Goldin, Jonathan G; Brown, Matthew S; Enzmann, Dieter R

    2018-02-19

    We present a method for generating a T2 MR-based probabilistic model of tumor occurrence in the prostate to guide the selection of anatomical sites for targeted biopsies and serve as a diagnostic tool to aid radiological evaluation of prostate cancer. In our study, the prostate and any radiological findings within were segmented retrospectively on 3D T2-weighted MR images of 266 subjects who underwent radical prostatectomy. Subsequent histopathological analysis determined both the ground truth and the Gleason grade of the tumors. A randomly chosen subset of 19 subjects was used to generate a multi-subject-derived prostate template. Subsequently, a cascading registration algorithm involving both affine and non-rigid B-spline transforms was used to register the prostate of every subject to the template. Corresponding transformation of radiological findings yielded a population-based probabilistic model of tumor occurrence. The quality of our probabilistic model building approach was statistically evaluated by measuring the proportion of correct placements of tumors in the prostate template, i.e., the number of tumors that maintained their anatomical location within the prostate after their transformation into the prostate template space. Probabilistic model built with tumors deemed clinically significant demonstrated a heterogeneous distribution of tumors, with higher likelihood of tumor occurrence at the mid-gland anterior transition zone and the base-to-mid-gland posterior peripheral zones. Of 250 MR lesions analyzed, 248 maintained their original anatomical location with respect to the prostate zones after transformation to the prostate. We present a robust method for generating a probabilistic model of tumor occurrence in the prostate that could aid clinical decision making, such as selection of anatomical sites for MR-guided prostate biopsies.
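    Once every subject is registered to the template, the core of the population model is a simple accumulation: average the registered binary tumor masks voxel by voxel. A minimal 2D sketch with toy 3 × 3 masks (illustrative only; the study works with 3D template-space volumes):

    ```python
    def occurrence_model(registered_masks):
        """Voxel-wise tumor-occurrence probability: the mean of the binary
        tumor masks after registration to the common template space."""
        n = len(registered_masks)
        rows, cols = len(registered_masks[0]), len(registered_masks[0][0])
        return [[sum(m[r][c] for m in registered_masks) / n for c in range(cols)]
                for r in range(rows)]

    # three toy "registered" masks (1 = tumor voxel, 0 = background)
    masks = [
        [[0, 1, 0], [0, 1, 0], [0, 0, 0]],
        [[0, 1, 0], [0, 0, 0], [0, 0, 0]],
        [[0, 0, 0], [0, 1, 0], [0, 0, 1]],
    ]
    prob = occurrence_model(masks)   # e.g. prob[0][1] = 2/3
    ```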

  17. Methodological Bases for Ranking the European Union Countries in Terms of Macroeconomic Security

    Directory of Open Access Journals (Sweden)

    Tymoshenko Olena V.

    2015-11-01

    Full Text Available The fundamental contradictions of existing methodological approaches to assessing the level of state economic security have been substantiated, and proposals have been developed on the introduction of a unified assessment methodology that would be acceptable at the international level or for a specific cluster of countries. The research found that there are no unified criteria for such a classification of countries. To determine the most significant coefficients and critical values of the indicators of economic security, it is appropriate to group countries by level of economic development, as proposed by the UN Commission and the IMF. An analysis of the economic security level has been conducted for the member countries of the European Union as a separate cluster, using macroeconomic security indicators as an example. Based on this evaluation, the proposed list of indicators and their critical values was found to be economically sound and built on the principles of adequacy, representativeness and comprehensiveness. In 2004 the most secure countries of the EU, corresponding to the macroeconomic security standards, were Austria, Denmark, Sweden and Finland; by 2014 the percentage of absolutely secure countries had decreased from 14.3 to 7.1%, with only Denmark and Sweden remaining in the ranking. During the analyzed period Bulgaria and Croatia moved into the risk zone, while Estonia, Lithuania, Latvia and Romania were in the danger zone. In 2014 Ukraine, in terms of its macroeconomic security, was in a critical state, which testified to serious structural and systemic imbalances in its development.

  18. Enhancement of dynamic myocardial perfusion PET images based on low-rank plus sparse decomposition.

    Science.gov (United States)

    Lu, Lijun; Ma, Xiaomian; Mohy-Ud-Din, Hassan; Ma, Jianhua; Feng, Qianjin; Rahmim, Arman; Chen, Wufan

    2018-02-01

    The absolute quantification of dynamic myocardial perfusion (MP) PET imaging is challenged by the limited spatial resolution of individual frame images due to division of the data into shorter frames. This study aims to develop a method for restoration and enhancement of dynamic PET images. We propose that the image restoration model should be based on multiple constraints rather than a single constraint, given the fact that the image characteristic is hardly described by a single constraint alone. At the same time, it may be possible, but not optimal, to regularize the image with multiple constraints simultaneously. Fortunately, MP PET images can be decomposed into a superposition of background vs. dynamic components via low-rank plus sparse (L + S) decomposition. Thus, we propose an L + S decomposition based MP PET image restoration model and express it as a convex optimization problem. An iterative soft thresholding algorithm was developed to solve the problem. Using realistic dynamic 82Rb MP PET scan data, we optimized and compared its performance with other restoration methods. The proposed method resulted in substantial visual as well as quantitative accuracy improvements in terms of noise versus bias performance, as demonstrated in extensive 82Rb MP PET simulations. In particular, the myocardium defect in the MP PET images had improved visual as well as contrast versus noise tradeoff. The proposed algorithm was also applied on an 8-min clinical cardiac 82Rb MP PET study performed on the GE Discovery PET/CT, and demonstrated improved quantitative accuracy (CNR and SNR) compared to other algorithms. The proposed method is effective for restoration and enhancement of dynamic PET images. Copyright © 2017 Elsevier B.V. All rights reserved.
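    The L + S idea can be sketched in miniature. The paper's algorithm iterates singular-value thresholding for L with soft thresholding for S; the toy below replaces the low-rank step with its simplest special case, a static per-pixel background (rank one in time), and soft-thresholds the residual to isolate the sparse dynamic component. All numbers are made up for illustration.

    ```python
    def soft_threshold(x, tau):
        """Proximal operator of the l1 norm: shrink toward zero by tau."""
        if x > tau:
            return x - tau
        if x < -tau:
            return x + tau
        return 0.0

    def l_plus_s_single_pass(M, tau):
        """One L + S split of a pixels-by-frames matrix M.
        Simplification: the low-rank background L is the per-pixel temporal
        mean (the rank-one, static-background special case); the sparse
        dynamic component S is the soft-thresholded residual."""
        n_frm = len(M[0])
        L = [[sum(row) / n_frm] * n_frm for row in M]
        S = [[soft_threshold(m - l, tau) for m, l in zip(mr, lr)]
             for mr, lr in zip(M, L)]
        return L, S

    # toy "dynamic series": static background per pixel + one transient event
    background = [2.0, 1.0, 3.0, 4.0]
    M = [[b] * 6 for b in background]
    M[1][2] += 5.0                      # transient uptake in pixel 1, frame 2
    L, S = l_plus_s_single_pass(M, tau=1.0)
    ```

    The transient survives in S (shrunk by tau), while the small residue it leaves in the background row falls below the threshold and is zeroed.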

  19. Probabilistic safety assessment based expert systems in support of dynamic risk assessment

    International Nuclear Information System (INIS)

    Varde, P.V.; Sharma, U.L.; Marik, S.K.; Raina, V.K.; Tikku, A.C.

    2006-01-01

    Probabilistic Safety Assessment (PSA) studies are being performed worldwide as part of integrated risk assessment for nuclear power plants, and in many cases PSA insights are utilized in support of decision making. Though modern plants are built with inherent safety provisions, particularly to reduce the supervisory requirements during the initial period of an accident, it is always desirable to develop an efficient, user-friendly, real-time operator advisory system for the handling of plant transients/emergencies, which would be of immense benefit for the enhancement of operational safety of the plant. This paper discusses an integrated approach for the development of an operator support system. In this approach, PSA methodology and the insights obtained from PSA have been utilized for the development of knowledge-based or rule-based expert systems. While an Artificial Neural Network (ANN) approach has been employed for transient identification, a rule-based expert system shell environment was used for the development of the diagnostic module in this system. An attempt has been made to demonstrate that this approach offers an efficient framework for addressing requirements related to the handling of real-time/dynamic scenarios. (author)

  20. A Hypergraph and Arithmetic Residue-based Probabilistic Neural Network for classification in Intrusion Detection Systems.

    Science.gov (United States)

    Raman, M R Gauthama; Somu, Nivethitha; Kirthivasan, Kannan; Sriram, V S Shankar

    2017-08-01

    Over the past few decades, the design of an intelligent Intrusion Detection System (IDS) has remained an open challenge to the research community. Continuous efforts by researchers have resulted in the development of several learning models based on Artificial Neural Networks (ANN) to improve the performance of IDSs. However, there exists a tradeoff between the stability of the ANN architecture and the detection rate for less frequent attacks. This paper presents a novel approach based on the Helly property of hypergraphs and an Arithmetic Residue-based Probabilistic Neural Network (HG AR-PNN) to address the classification problem in IDS. The Helly property of hypergraphs was exploited for the identification of the optimal feature subset, and the arithmetic residue of the optimal feature subset was used to train the PNN. The performance of HG AR-PNN was evaluated using the KDD CUP 1999 intrusion dataset. Experimental results prove the dominance of the HG AR-PNN classifier over existing classifiers with respect to stability and an improved detection rate for less frequent attacks. Copyright © 2017 Elsevier Ltd. All rights reserved.

  1. Comparison of acrylamide intake from Western and guideline based diets using probabilistic techniques and linear programming.

    Science.gov (United States)

    Katz, Josh M; Winter, Carl K; Buttrey, Samuel E; Fadel, James G

    2012-03-01

    Western and guideline based diets were compared to determine if dietary improvements resulting from following dietary guidelines reduce acrylamide intake. Acrylamide forms in heat treated foods and is a human neurotoxin and animal carcinogen. Acrylamide intake from the Western diet was estimated with probabilistic techniques using teenage (13-19 years) National Health and Nutrition Examination Survey (NHANES) food consumption estimates combined with FDA data on the levels of acrylamide in a large number of foods. Guideline based diets were derived from NHANES data using linear programming techniques to comport to recommendations from the Dietary Guidelines for Americans, 2005. Whereas the guideline based diets were more properly balanced and rich in consumption of fruits, vegetables, and other dietary components than the Western diets, acrylamide intake (mean ± SE) was significantly greater from the guideline based diets than from the Western diets, despite the former being derived by linear programming, and results demonstrate that linear programming techniques can be used to model specific diets for the assessment of toxicological and nutritional dietary components. Copyright © 2011 Elsevier Ltd. All rights reserved.

  2. An ontology-based nurse call management system (oNCS) with probabilistic priority assessment

    Science.gov (United States)

    2011-01-01

    Background The current, place-oriented nurse call systems are very static. A patient can only make calls with a button which is fixed to a wall of a room. Moreover, the system does not take into account various factors specific to a situation. In the future, there will be an evolution to a mobile button for each patient so that they can walk around freely and still make calls. The system would become person-oriented and the available context information should be taken into account to assign the correct nurse to a call. The aim of this research is (1) the design of a software platform that supports the transition to mobile and wireless nurse call buttons in hospitals and residential care and (2) the design of a sophisticated nurse call algorithm. This algorithm dynamically adapts to the situation at hand by taking the profile information of staff members and patients into account. Additionally, the priority of a call probabilistically depends on the risk factors, assigned to a patient. Methods The ontology-based Nurse Call System (oNCS) was developed as an extension of a Context-Aware Service Platform. An ontology is used to manage the profile information. Rules implement the novel nurse call algorithm that takes all this information into account. Probabilistic reasoning algorithms are designed to determine the priority of a call based on the risk factors of the patient. Results The oNCS system is evaluated through a prototype implementation and simulations, based on a detailed dataset obtained from Ghent University Hospital. The arrival times of nurses at the location of a call, the workload distribution of calls amongst nurses and the assignment of priorities to calls are compared for the oNCS system and the current, place-oriented nurse call system. Additionally, the performance of the system is discussed. Conclusions The execution time of the nurse call algorithm is on average 50.333 ms. 
Moreover, the oNCS system significantly improves the assignment of nurses

  3. An ontology-based nurse call management system (oNCS with probabilistic priority assessment

    Directory of Open Access Journals (Sweden)

    Verhoeve Piet

    2011-02-01

    Full Text Available Abstract Background The current, place-oriented nurse call systems are very static. A patient can only make calls with a button which is fixed to a wall of a room. Moreover, the system does not take into account various factors specific to a situation. In the future, there will be an evolution to a mobile button for each patient so that they can walk around freely and still make calls. The system would become person-oriented and the available context information should be taken into account to assign the correct nurse to a call. The aim of this research is (1 the design of a software platform that supports the transition to mobile and wireless nurse call buttons in hospitals and residential care and (2 the design of a sophisticated nurse call algorithm. This algorithm dynamically adapts to the situation at hand by taking the profile information of staff members and patients into account. Additionally, the priority of a call probabilistically depends on the risk factors, assigned to a patient. Methods The ontology-based Nurse Call System (oNCS was developed as an extension of a Context-Aware Service Platform. An ontology is used to manage the profile information. Rules implement the novel nurse call algorithm that takes all this information into account. Probabilistic reasoning algorithms are designed to determine the priority of a call based on the risk factors of the patient. Results The oNCS system is evaluated through a prototype implementation and simulations, based on a detailed dataset obtained from Ghent University Hospital. The arrival times of nurses at the location of a call, the workload distribution of calls amongst nurses and the assignment of priorities to calls are compared for the oNCS system and the current, place-oriented nurse call system. Additionally, the performance of the system is discussed. Conclusions The execution time of the nurse call algorithm is on average 50.333 ms. 
Moreover, the oNCS system significantly improves

  4. Automatic segmentation of coronary angiograms based on fuzzy inferring and probabilistic tracking

    Directory of Open Access Journals (Sweden)

    Shoujun Zhou

    2010-08-01

    Full Text Available Abstract Background Segmentation of the coronary angiogram is important in computer-assisted artery motion analysis or reconstruction of 3D vascular structures from a single-plane or biplane angiographic system. Developing fully automated and accurate vessel segmentation algorithms is highly challenging, especially when extracting vascular structures with large variations in image intensities and noise, as well as with variable cross-sections or vascular lesions. Methods This paper presents a novel tracking method for automatic segmentation of the coronary artery tree in X-ray angiographic images, based on probabilistic vessel tracking and fuzzy structure pattern inferring. The method is composed of two main steps: preprocessing and tracking. In preprocessing, multiscale Gabor filtering and Hessian matrix analysis were used to enhance and extract vessel features from the original angiographic image, leading to a vessel feature map as well as a vessel direction map. In tracking, a seed point was first automatically detected by analyzing the vessel feature map. Subsequently, two operators [i.e., a probabilistic tracking operator (PTO) and a vessel structure pattern detector (SPD)] worked together based on the detected seed point to extract vessel segments or branches one at a time. The local structure pattern was inferred by a multi-feature based fuzzy inferring function employed in the SPD. The identified structure pattern, such as crossing or bifurcation, was used to control the tracking process, for example, to keep tracking the current segment or start tracking a new one, depending on the detected pattern. Results By appropriate integration of these advanced preprocessing and tracking steps, our tracking algorithm is able to extract both vessel axis lines and edge points, as well as measure the arterial diameters in various complicated cases. For example, it can walk across gaps along the longitudinal vessel direction, manage varying vessel

  5. A state-based probabilistic model for tumor respiratory motion prediction

    International Nuclear Information System (INIS)

    Kalet, Alan; Sandison, George; Schmitz, Ruth; Wu Huanmei

    2010-01-01

    This work proposes a new probabilistic mathematical model for predicting tumor motion and position based on a finite state representation using the natural breathing states of exhale, inhale and end of exhale. Tumor motion was broken down into linear breathing states and sequences of states. Breathing state sequences and the observables representing those sequences were analyzed using a hidden Markov model (HMM) to predict the future sequences and new observables. Velocities and other parameters were clustered using a k-means clustering algorithm to associate each state with a set of observables such that a prediction of state also enables a prediction of tumor velocity. A time average model with predictions based on average past state lengths was also computed. State sequences which are known a priori to fit the data were fed into the HMM algorithm to set a theoretical limit of the predictive power of the model. The effectiveness of the presented probabilistic model has been evaluated for gated radiation therapy based on previously tracked tumor motion in four lung cancer patients. Positional prediction accuracy is compared with actual position in terms of the overall RMS errors. Various system delays, ranging from 33 to 1000 ms, were tested. Previous studies have shown duty cycles for latencies of 33 and 200 ms at around 90% and 80%, respectively, for linear, no prediction, Kalman filter and ANN methods as averaged over multiple patients. At 1000 ms, the previously reported duty cycles range from approximately 62% (ANN) down to 34% (no prediction). Average duty cycle for the HMM method was found to be 100% and 91 ± 3% for 33 and 200 ms latency and around 40% for 1000 ms latency in three out of four breathing motion traces. RMS errors were found to be lower than linear and no prediction methods at latencies of 1000 ms. The results show that for system latencies longer than 400 ms, the time average HMM prediction outperforms linear, no prediction, and the more
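    A toy version of the state-based idea: encode the breathing states, a transition matrix, and a per-state velocity cluster, then predict forward. All numbers are invented for illustration; the paper learns the transition statistics and velocity clusters from tracked motion and uses a full HMM rather than this greedy argmax.

    ```python
    # Toy transition matrix P(next | current) over the three breathing states;
    # probabilities here are illustrative stand-ins, not learned values.
    TRANS = {
        "inhale":        {"inhale": 0.7, "exhale": 0.3, "end_of_exhale": 0.0},
        "exhale":        {"inhale": 0.0, "exhale": 0.4, "end_of_exhale": 0.6},
        "end_of_exhale": {"inhale": 0.6, "exhale": 0.0, "end_of_exhale": 0.4},
    }
    # one representative velocity (mm/s) per state, as if from k-means clusters
    VELOCITY = {"inhale": 8.0, "exhale": -6.0, "end_of_exhale": 0.0}

    def predict(state, horizon):
        """Most-likely state and velocity over `horizon` steps, chosen
        greedily from each transition row (not a full Viterbi decode)."""
        seq = []
        for _ in range(horizon):
            state = max(TRANS[state], key=TRANS[state].get)
            seq.append((state, VELOCITY[state]))
        return seq

    path = predict("exhale", 3)
    ```

    Predicting a state thus also predicts a velocity, which is what lets the gating system compensate for system latency.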

  6. How to Rank Journals.

    Science.gov (United States)

    Bradshaw, Corey J A; Brook, Barry W

    2016-01-01

    There are now many methods available to assess the relative citation performance of peer-reviewed journals. Regardless of their individual faults and advantages, citation-based metrics are used by researchers to maximize the citation potential of their articles, and by employers to rank academic track records. The absolute value of any particular index is arguably meaningless unless compared to other journals, and different metrics result in divergent rankings. To provide a simple yet more objective way to rank journals within and among disciplines, we developed a κ-resampled composite journal rank incorporating five popular citation indices: Impact Factor, Immediacy Index, Source-Normalized Impact Per Paper, SCImago Journal Rank and Google 5-year h-index; this approach provides an index of relative rank uncertainty. We applied the approach to six sample sets of scientific journals from Ecology (n = 100 journals), Medicine (n = 100), Multidisciplinary (n = 50), Ecology + Multidisciplinary (n = 25), Obstetrics & Gynaecology (n = 25) and Marine Biology & Fisheries (n = 25). We then cross-compared the κ-resampled ranking for the Ecology + Multidisciplinary journal set to the results of a survey of 188 publishing ecologists who were asked to rank the same journals, and found a 0.68-0.84 Spearman's ρ correlation between the two ranking datasets. Our composite index approach therefore approximates relative journal reputation, at least for that discipline. Agglomerative and divisive clustering and multi-dimensional scaling techniques applied to the Ecology + Multidisciplinary journal set identified specific clusters of similarly ranked journals, with only Nature & Science separating out from the others. When comparing a selection of journals within or among disciplines, we recommend collecting multiple citation-based metrics for a sample of relevant and realistic journals to calculate the composite rankings and their relative uncertainty windows.
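    The resampled composite rank can be sketched as follows: rank journals on each metric, average the ranks, and resample over metrics to attach an uncertainty window to each journal's rank. The metric values below are invented, and a plain bootstrap stands in for the paper's κ-resampling of the five named indices.

    ```python
    import random
    import statistics

    # Hypothetical values of five metrics (higher = better) for four journals.
    metrics = {
        "J1": [4.2, 1.1, 2.0, 1.8, 35],
        "J2": [3.0, 0.9, 2.4, 2.1, 40],
        "J3": [1.5, 0.4, 1.0, 0.9, 20],
        "J4": [0.8, 0.2, 0.7, 0.5, 12],
    }

    def ranks_for_metric(vals, m):
        """Rank journals (1 = best) on metric index m."""
        order = sorted(vals, key=lambda j: -vals[j][m])
        return {j: r + 1 for r, j in enumerate(order)}

    def composite_rank(vals, metric_idx):
        """Mean rank across the chosen subset of metrics."""
        per_metric = [ranks_for_metric(vals, m) for m in metric_idx]
        return {j: statistics.mean(r[j] for r in per_metric) for j in vals}

    def resampled_ranks(vals, n_boot=200, seed=1):
        """Bootstrap over metrics to get a (min, mean, max) rank window."""
        rng = random.Random(seed)
        n_m = len(next(iter(vals.values())))
        out = {j: [] for j in vals}
        for _ in range(n_boot):
            idx = [rng.randrange(n_m) for _ in range(n_m)]
            for j, r in composite_rank(vals, idx).items():
                out[j].append(r)
        return {j: (min(rs), statistics.mean(rs), max(rs)) for j, rs in out.items()}

    summary = resampled_ranks(metrics)
    ```

    With these numbers J1 and J2 trade places depending on which metrics are drawn (overlapping windows), while J3 and J4 are stably third and fourth.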

  7. Ranking of Prokaryotic Genomes Based on Maximization of Sortedness of Gene Lengths.

    Science.gov (United States)

    Bolshoy, A; Salih, B; Cohen, I; Tatarinova, T

    How variations of gene lengths (some genes become longer than their predecessors, while other genes become shorter, and the sizes of these fractions differ randomly from organism to organism) depend on organismal evolution and adaptation is still an open question. We propose to rank the genomes according to the lengths of their genes, and then find associations between the genome rank and various properties, such as growth temperature, nucleotide composition, and pathogenicity. This approach reveals evolutionary driving factors. The main purpose of this study is to test the effectiveness and robustness of several ranking methods. The selected method of evaluation is measurement of the overall sortedness of the data. We have demonstrated that all considered methods give consistent results, and that Bubble Sort and Simulated Annealing achieve the highest sortedness. Also, Bubble Sort is considerably faster than the Simulated Annealing method.
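    A minimal sketch of the ranking-by-sortedness idea, using mean gene length as a crude stand-in for the paper's comparison of whole gene-length distributions: order genomes with Bubble Sort and score an ordering by its fraction of concordant pairs. Genome data are invented.

    ```python
    # Toy genomes with gene-length lists; the study uses full prokaryotic genomes.
    genomes = {
        "G1": [300, 310, 290, 305],
        "G2": [450, 470, 460, 440],
        "G3": [360, 380, 370, 350],
    }

    def mean(xs):
        return sum(xs) / len(xs)

    def sortedness(order, lengths):
        """Fraction of genome pairs (i before j in `order`) whose mean gene
        lengths are concordant with the ordering (non-decreasing)."""
        pairs = concordant = 0
        for i in range(len(order)):
            for j in range(i + 1, len(order)):
                pairs += 1
                if mean(lengths[order[i]]) <= mean(lengths[order[j]]):
                    concordant += 1
        return concordant / pairs

    def bubble_rank(lengths):
        """Bubble Sort on mean gene length, standing in for one of the
        ranking methods compared in the study."""
        order = list(lengths)
        n = len(order)
        for _ in range(n):
            for k in range(n - 1):
                if mean(lengths[order[k]]) > mean(lengths[order[k + 1]]):
                    order[k], order[k + 1] = order[k + 1], order[k]
        return order

    ranking = bubble_rank(genomes)
    ```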

  8. Ranking provinces based on development scale in agriculture sector using taxonomy technique

    Directory of Open Access Journals (Sweden)

    Shahram Rostampour

    2012-08-01

    Full Text Available The purpose of this paper is to determine a comparative ranking of agricultural development in different provinces of Iran using the taxonomy technique. The independent variables are the amount of annual rainfall, the number of permanent rivers, the area of pastures and forests, the cultivated area of agricultural and garden crops, the number of beehives, the number of fish farming ranches, the number of tractors and combines, the number of cooperative production societies, and the number of industrial cattle breeding and aviculture units. The results indicate that the maximum development coefficient value is associated with Razavi Khorasan province, followed by Mazandaran and East Azarbayjan, while the minimum ranking value belongs to Bushehr province.
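    The taxonomy technique referred to here is typically the Hellwig-style development-pattern method: standardize the indicators, measure each province's distance to the ideal (best observed) pattern, and scale by c0 = mean + 2·std of the distances. A sketch with invented data and indicator names (real values are in the paper):

    ```python
    import math

    # Hypothetical provinces x indicators (e.g. rainfall, pasture area, tractors).
    data = {
        "A": [600.0, 1.2, 30.0],
        "B": [350.0, 0.8, 55.0],
        "C": [200.0, 0.5, 10.0],
    }

    def taxonomy_rank(data):
        """Hellwig-style development coefficient: smaller = more developed."""
        names = list(data)
        k = len(next(iter(data.values())))
        # standardize each indicator to zero mean, unit variance
        cols = [[data[n][j] for n in names] for j in range(k)]
        mu = [sum(c) / len(c) for c in cols]
        sd = [math.sqrt(sum((x - m) ** 2 for x in c) / len(c))
              for c, m in zip(cols, mu)]
        z = {n: [(data[n][j] - mu[j]) / sd[j] for j in range(k)] for n in names}
        # ideal pattern = best observed value of each standardized indicator
        ideal = [max(z[n][j] for n in names) for j in range(k)]
        dist = {n: math.sqrt(sum((z[n][j] - ideal[j]) ** 2 for j in range(k)))
                for n in names}
        d_mean = sum(dist.values()) / len(dist)
        d_sd = math.sqrt(sum((d - d_mean) ** 2 for d in dist.values()) / len(dist))
        c0 = d_mean + 2 * d_sd          # normalizing constant
        return {n: dist[n] / c0 for n in names}

    coeff = taxonomy_rank(data)
    best = min(coeff, key=coeff.get)
    ```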

  9. Satellite Based Probabilistic Snow Cover Extent Mapping (SCE) at Hydro-Québec

    Science.gov (United States)

    Teasdale, Mylène; De Sève, Danielle; Angers, Jean-François; Perreault, Luc

    2016-04-01

    Over 40% of Canada's water resources are in Quebec, and Hydro-Québec has the potential to become one of the largest producers of hydroelectricity in the world, with a total installed capacity of 36,643 MW. The Hydro-Québec fleet includes 27 large reservoirs with a combined storage capacity of 176 TWh, along with 668 dams and 98 controls. Thus, over 98% of all electricity used to supply the domestic market comes from water resources, and the excess output is sold on the wholesale markets. In this context, efficient management of water resources is essential, and it is based primarily on good river flow estimation supported by appropriate hydrological data. Snow on the ground is one of the significant variables, representing 30% to 40% of the annual energy reserve. More specifically, information on snow cover extent (SCE) and snow water equivalent (SWE) is crucial for hydrological forecasting, particularly in northern regions, since the snowmelt provides the water that fills the reservoirs and is subsequently used for hydropower generation. For several years, Hydro-Québec's research institute (IREQ) has developed several algorithms to map SCE and SWE. So far, all of these methods have been deterministic. However, given the need to maximize the efficient use of all resources while ensuring reliability, electrical systems must now be managed taking all risks into account. Since snow cover estimation is based on limited spatial information, it is important to quantify and handle its uncertainty in the hydrological forecasting system. This paper presents the first results of a probabilistic algorithm for mapping SCE by combining a Bayesian mixture of probability distributions and multiple logistic regression models applied to passive microwave data. This approach assigns, for each grid point, probabilities to the mutually exclusive discrete outcomes "snow" and "no snow". Its performance was evaluated using the Brier score, since it is particularly appropriate to
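    The probabilistic output can be illustrated with a single logistic model and the Brier score used to evaluate it. The coefficients and brightness temperatures below are invented stand-ins; the operational algorithm combines Bayesian mixtures with multiple logistic regressions over real passive-microwave channels.

    ```python
    import math

    def p_snow(tb37, w0=40.0, w1=-0.2):
        """Logistic model for P(snow) from a passive-microwave brightness
        temperature (K); colder pixels get higher snow probability.
        The coefficients here are made up for illustration."""
        return 1.0 / (1.0 + math.exp(-(w0 + w1 * tb37)))

    def brier(probs, outcomes):
        """Brier score: mean squared gap between forecast probability and
        the 0/1 outcome; lower is better, 0 is a perfect sharp forecast."""
        return sum((p - o) ** 2 for p, o in zip(probs, outcomes)) / len(probs)

    obs_tb = [180.0, 190.0, 210.0, 240.0, 260.0]   # brightness temperatures, K
    truth = [1, 1, 0, 0, 0]                        # 1 = snow, 0 = no snow
    probs = [p_snow(t) for t in obs_tb]
    score = brier(probs, truth)
    ```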

  10. Improving predictive power of physically based rainfall-induced shallow landslide models: a probabilistic approach

    Directory of Open Access Journals (Sweden)

    S. Raia

    2014-03-01

    Full Text Available Distributed models to forecast the spatial and temporal occurrence of rainfall-induced shallow landslides are based on deterministic laws. These models extend spatially the static stability models adopted in geotechnical engineering, and adopt an infinite-slope geometry to balance the resisting and the driving forces acting on the sliding mass. An infiltration model is used to determine how rainfall changes pore-water conditions, modulating the local stability/instability conditions. A problem with the operation of the existing models lies in the difficulty in obtaining accurate values for the several variables that describe the material properties of the slopes. The problem is particularly severe when the models are applied over large areas, for which sufficient information on the geotechnical and hydrological conditions of the slopes is not generally available. To help solve the problem, we propose a probabilistic Monte Carlo approach to the distributed modeling of rainfall-induced shallow landslides. For this purpose, we have modified the transient rainfall infiltration and grid-based regional slope-stability analysis (TRIGRS) code. The new code (TRIGRS-P) adopts a probabilistic approach to compute, on a cell-by-cell basis, transient pore-pressure changes and related changes in the factor of safety due to rainfall infiltration. Infiltration is modeled using analytical solutions of partial differential equations describing one-dimensional vertical flow in isotropic, homogeneous materials. Both saturated and unsaturated soil conditions can be considered. TRIGRS-P copes with the natural variability inherent to the mechanical and hydrological properties of the slope materials by allowing values of the TRIGRS model input parameters to be sampled randomly from a given probability distribution. The range of variation and the mean value of the parameters can be determined by the usual methods used for preparing the TRIGRS input parameters.
The outputs
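    The probabilistic extension can be sketched as a Monte Carlo loop around the infinite-slope factor of safety: sample the uncertain soil parameters per cell from given distributions and report the fraction of samples with FS < 1. The distributions and values below are illustrative, not TRIGRS-P's calibrated inputs.

    ```python
    import math
    import random

    def factor_of_safety(c, phi_deg, slope_deg, m,
                         gamma_s=19.0, gamma_w=9.81, z=2.0):
        """Infinite-slope factor of safety; m is the relative water-table
        height (0 = dry, 1 = fully saturated). Units: kPa, kN/m^3, m."""
        beta = math.radians(slope_deg)
        phi = math.radians(phi_deg)
        eff_normal = (gamma_s - gamma_w * m) * z * math.cos(beta) ** 2
        num = c + eff_normal * math.tan(phi)
        den = gamma_s * z * math.sin(beta) * math.cos(beta)
        return num / den

    def failure_probability(n=20000, seed=42):
        """TRIGRS-P-style loop for one cell: sample uncertain cohesion and
        friction angle, count samples with FS < 1."""
        rng = random.Random(seed)
        fails = 0
        for _ in range(n):
            c = max(0.1, rng.gauss(5.0, 2.0))    # cohesion, kPa
            phi = rng.uniform(28.0, 36.0)        # friction angle, degrees
            if factor_of_safety(c, phi, slope_deg=35.0, m=0.8) < 1.0:
                fails += 1
        return fails / n

    p_fail = failure_probability()
    ```

    The output is a probability of failure per cell instead of a single deterministic FS value.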

  11. A Probabilistic and Observation Based Methodology to Estimate Small Craft Harbor Vulnerability to Tsunami Events

    Science.gov (United States)

    Keen, A. S.; Lynett, P. J.; Ayca, A.

    2016-12-01

    Because of the damage resulting from the 2010 Chile and 2011 Japanese tele-tsunamis, the tsunami risk to the small craft marinas in California has become an important concern. The talk will outline an assessment tool which can be used to assess the tsunami hazard to small craft harbors. The methodology is based on the demand and structural capacity of the floating dock system, composed of floating docks/fingers and moored vessels. The structural demand is determined using a Monte Carlo methodology. Monte Carlo methodology is a probabilistic computational tool where the governing equations might be well known, but the independent variables of the input (demand) as well as the resisting structural components (capacity) may not be completely known. The Monte Carlo approach uses a distribution of each variable, and then uses that random variable within the described parameters, to generate a single computation. The process then repeats hundreds or thousands of times. The numerical model "Method of Splitting Tsunamis" (MOST) has been used to determine the inputs for the small craft harbors within California. Hydrodynamic model results of current speed, direction and surface elevation were incorporated via the drag equations to provide the basis of the demand term. To determine the capacities, an inspection program was developed to identify common features of structural components. A total of six harbors have been inspected, ranging from Crescent City in Northern California to Oceanside Harbor in Southern California. Results from the inspection program were used to develop component capacity tables which incorporated the basic specifications of each component (e.g. bolt size and configuration) and a reduction factor (which accounts for the component's reduction in capacity with age) to estimate in situ capacities. Like the demand term, these capacities are added probabilistically into the model. To date the model has been applied to Santa Cruz Harbor as well as Noyo River. Once
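    The demand-versus-capacity Monte Carlo can be sketched as follows. In the real tool the current speeds come from MOST output and the capacities from the inspection-derived tables, so every number below is an illustrative stand-in.

    ```python
    import random

    RHO = 1025.0       # seawater density, kg/m^3

    def drag_force(v, cd=1.0, area=8.0):
        """Drag-equation demand (N) on a moored vessel / dock section
        exposed to a current of speed v (m/s)."""
        return 0.5 * RHO * cd * area * v * v

    def p_failure(n=20000, seed=7):
        """Monte Carlo over uncertain current speed (demand) and component
        capacity, including an age-reduction factor; all distributions are
        illustrative, not inspection-calibrated values."""
        rng = random.Random(seed)
        fails = 0
        for _ in range(n):
            v = rng.gauss(2.0, 0.5)                 # tsunami current speed, m/s
            demand = drag_force(max(v, 0.0))
            nominal = rng.gauss(25_000.0, 3_000.0)  # new-component capacity, N
            age_factor = rng.uniform(0.6, 1.0)      # in-situ reduction with age
            if demand > nominal * age_factor:
                fails += 1
        return fails / n

    p = p_failure()
    ```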

  12. Study on quantification method based on Monte Carlo sampling for multiunit probabilistic safety assessment models

    Energy Technology Data Exchange (ETDEWEB)

    Oh, Kye Min [KHNP Central Research Institute, Daejeon (Korea, Republic of); Han, Sang Hoon; Park, Jin Hee; Lim, Ho Gon; Yang, Joon Yang [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of); Heo, Gyun Young [Kyung Hee University, Yongin (Korea, Republic of)

    2017-06-15

    In Korea, many nuclear power plants operate at a single site based on geographical characteristics, but the population density near the sites is higher than that in other countries. Thus, multiunit accidents are a more important consideration than in other countries and should be addressed appropriately. Currently, there are many issues related to a multiunit probabilistic safety assessment (PSA). One of them is the quantification of a multiunit PSA model. A traditional PSA uses a Boolean manipulation of the fault tree in terms of the minimal cut set. However, such methods have some limitations when rare event approximations cannot be used effectively or a very small truncation limit should be applied to identify accident sequence combinations for a multiunit site. In particular, it is well known that seismic risk in terms of core damage frequency can be overestimated because there are many events that have a high failure probability. In this study, we propose a quantification method based on a Monte Carlo approach for a multiunit PSA model. This method can consider all possible accident sequence combinations in a multiunit site and calculate a more exact value for events that have a high failure probability. An example model for six identical units at a site was also developed and quantified to confirm the applicability of the proposed method.
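    The overestimation mentioned above is easy to reproduce on a toy fault tree: with high-probability basic events, summing minimal-cut-set probabilities (the rare-event approximation) exceeds the exact top-event probability, while joint Monte Carlo sampling counts overlapping cut sets only once. A sketch with toy numbers, not a real PSA model:

    ```python
    import random

    # Toy model: top event = (A and B) or (A and C), with deliberately high
    # basic-event probabilities, as in seismic PSA.
    P = {"A": 0.5, "B": 0.4, "C": 0.3}
    CUT_SETS = [("A", "B"), ("A", "C")]

    def rare_event_approx():
        """Sum of minimal-cut-set probabilities (no truncation here)."""
        total = 0.0
        for cs in CUT_SETS:
            p = 1.0
            for e in cs:
                p *= P[e]
            total += p
        return total

    def monte_carlo(n=100_000, seed=3):
        """Sample every basic event jointly, so overlapping cut sets are
        never double counted; exact in the limit of many samples."""
        rng = random.Random(seed)
        hits = 0
        for _ in range(n):
            state = {e: rng.random() < p for e, p in P.items()}
            if any(all(state[e] for e in cs) for cs in CUT_SETS):
                hits += 1
        return hits / n

    approx = rare_event_approx()            # 0.5*0.4 + 0.5*0.3 = 0.35
    exact = 0.5 * (0.4 + 0.3 - 0.4 * 0.3)   # inclusion-exclusion: 0.29
    mc = monte_carlo()                      # close to 0.29
    ```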

  13. A Recurrent Probabilistic Neural Network with Dimensionality Reduction Based on Time-series Discriminant Component Analysis.

    Science.gov (United States)

    Hayashi, Hideaki; Shibanoki, Taro; Shima, Keisuke; Kurita, Yuichi; Tsuji, Toshio

    2015-12-01

    This paper proposes a probabilistic neural network (NN) developed on the basis of time-series discriminant component analysis (TSDCA) that can be used to classify high-dimensional time-series patterns. TSDCA involves the compression of high-dimensional time series into a lower dimensional space using a set of orthogonal transformations and the calculation of posterior probabilities based on a continuous-density hidden Markov model with a Gaussian mixture model expressed in the reduced-dimensional space. The analysis can be incorporated into an NN, which is named a time-series discriminant component network (TSDCN), so that parameters of dimensionality reduction and classification can be obtained simultaneously as network coefficients according to a backpropagation through time-based learning algorithm with the Lagrange multiplier method. The TSDCN is considered to enable high-accuracy classification of high-dimensional time-series patterns and to reduce the computation time taken for network training. The validity of the TSDCN is demonstrated for high-dimensional artificial data and electroencephalogram signals in the experiments conducted during the study.
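The posterior-probability step above, class-conditional Gaussian mixture likelihoods normalized across classes in the reduced-dimensional space, can be sketched in one dimension. The class definitions below are hypothetical, and this is only the posterior calculation, not the TSDCN itself:

```python
import math

def gaussian_pdf(x, mean, var):
    """Density of a 1-D Gaussian at x."""
    return math.exp(-(x - mean) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

def class_posteriors(x, classes):
    """Posterior P(class | x) when each class likelihood is a Gaussian
    mixture. classes: {name: [(weight, mean, var), ...]}; equal priors
    assumed. Toy 1-D illustration of the mixture-posterior step."""
    likelihoods = {
        name: sum(w * gaussian_pdf(x, m, v) for w, m, v in comps)
        for name, comps in classes.items()
    }
    z = sum(likelihoods.values())
    return {name: like / z for name, like in likelihoods.items()}
```

In the paper this calculation is expressed as network layers so that the mixture parameters can be trained by backpropagation through time.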

  14. Semi-supervised learning based probabilistic latent semantic analysis for automatic image annotation

    Institute of Scientific and Technical Information of China (English)

    Tian Dongping

    2017-01-01

    In recent years, multimedia annotation problem has been attracting significant research attention in multimedia and computer vision areas, especially for automatic image annotation, whose purpose is to provide an efficient and effective searching environment for users to query their images more easily.In this paper, a semi-supervised learning based probabilistic latent semantic analysis ( PL-SA) model for automatic image annotation is presenred.Since it' s often hard to obtain or create la-beled images in large quantities while unlabeled ones are easier to collect, a transductive support vector machine ( TSVM) is exploited to enhance the quality of the training image data.Then, differ-ent image features with different magnitudes will result in different performance for automatic image annotation.To this end, a Gaussian normalization method is utilized to normalize different features extracted from effective image regions segmented by the normalized cuts algorithm so as to reserve the intrinsic content of images as complete as possible.Finally, a PLSA model with asymmetric mo-dalities is constructed based on the expectation maximization( EM) algorithm to predict a candidate set of annotations with confidence scores.Extensive experiments on the general-purpose Corel5k dataset demonstrate that the proposed model can significantly improve performance of traditional PL-SA for the task of automatic image annotation.

  15. Use of the LUS in sequence allele designations to facilitate probabilistic genotyping of NGS-based STR typing results.

    Science.gov (United States)

    Just, Rebecca S; Irwin, Jodi A

    2018-05-01

    Some of the expected advantages of next generation sequencing (NGS) for short tandem repeat (STR) typing include enhanced mixture detection and genotype resolution via sequence variation among non-homologous alleles of the same length. However, at the same time that NGS methods for forensic DNA typing have advanced in recent years, many caseworking laboratories have implemented or are transitioning to probabilistic genotyping to assist the interpretation of complex autosomal STR typing results. Current probabilistic software programs are designed for length-based data, and were not intended to accommodate sequence strings as the product input. Yet to leverage the benefits of NGS for enhanced genotyping and mixture deconvolution, the sequence variation among same-length products must be utilized in some form. Here, we propose use of the longest uninterrupted stretch (LUS) in allele designations as a simple method to represent sequence variation within the STR repeat regions and facilitate, in the near term, probabilistic interpretation of NGS-based typing results. An examination of published population data indicated that a reference LUS region is straightforward to define for most autosomal STR loci, and that using repeat unit plus LUS length as the allele designator can represent greater than 80% of the alleles detected by sequencing. A proof of concept study performed using a freely available probabilistic software program demonstrated that the LUS length can be used in allele designations when a program does not require alleles to be integers, and that utilizing sequence information improves interpretation of both single-source and mixed-contributor STR typing results as compared to using repeat unit information alone. The LUS concept for allele designation maintains the repeat-based allele nomenclature that will permit backward compatibility to extant STR databases, and the LUS lengths themselves will be concordant regardless of the NGS assay or analysis tools used.
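A minimal sketch of the LUS idea: count the longest run of the reference repeat motif in the sequenced repeat region and append it to the length-based allele designator. The motif and sequence below are illustrative; the paper's locus-specific reference LUS definitions are not reproduced here:

```python
def lus_length(sequence, repeat_unit):
    """Length (in repeat units) of the longest uninterrupted stretch
    of `repeat_unit` within an STR repeat-region sequence.
    Illustrative helper; which motif is counted is locus-specific."""
    k = len(repeat_unit)
    best = run = 0
    i = 0
    while i + k <= len(sequence):
        if sequence[i:i + k] == repeat_unit:
            run += 1
            best = max(best, run)
            i += k
        else:
            run = 0
            i += 1
    return best

def allele_designation(total_repeats, sequence, repeat_unit):
    """Repeat-unit count plus LUS length, e.g. '11_8' style designator."""
    return f"{total_repeats}_{lus_length(sequence, repeat_unit)}"
```

Two same-length alleles with different internal sequence structure thus receive distinct designators while remaining backward compatible with repeat-based nomenclature.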

  16. Low-rank extremal positive-partial-transpose states and unextendible product bases

    International Nuclear Information System (INIS)

    Leinaas, Jon Magne; Sollid, Per Oyvind; Myrheim, Jan

    2010-01-01

    It is known how to construct, in a bipartite quantum system, a unique low-rank entangled mixed state with positive partial transpose (a PPT state) from an unextendible product basis (UPB), defined as an unextendible set of orthogonal product vectors. We point out that a state constructed in this way belongs to a continuous family of entangled PPT states of the same rank, all related by nonsingular unitary or nonunitary product transformations. The characteristic property of a state ρ in such a family is that its kernel Ker ρ has a generalized UPB, a basis of product vectors, not necessarily orthogonal, with no product vector in Im ρ, the orthogonal complement of Ker ρ. The generalized UPB in Ker ρ has the special property that it can be transformed to orthogonal form by a product transformation. In the case of a system of dimension 3×3, we give a complete parametrization of orthogonal UPBs. This is then a parametrization of families of rank 4 entangled (and extremal) PPT states, and we present strong numerical evidence that it is a complete classification of such states. We speculate that the lowest rank entangled and extremal PPT states also in higher dimensions are related to generalized, nonorthogonal UPBs in similar ways.

  17. A Distributed Taxation Based Rank Adaptation Scheme for 5G Small Cells

    DEFF Research Database (Denmark)

    Catania, Davide; Cattoni, Andrea Fabio; Mahmood, Nurul Huda

    2015-01-01

    The further densification of small cells imposes high and undesirable levels of inter-cell interference. Multiple Input Multiple Output (MIMO) systems along with advanced receiver techniques provide us with extra degrees of freedom to combat such a problem. With such tools, rank adaptation...

  18. Stakeholder Perspectives on Citation and Peer-Based Rankings of Higher Education Journals

    Science.gov (United States)

    Wilkins, Stephen; Huisman, Jeroen

    2015-01-01

    The purpose of this article is to identify and discuss the possible uses of higher education journal rankings, and the associated advantages and disadvantages of using them. The research involved 40 individuals--lecturers, university managers, journal editors and publishers--who represented a range of stakeholders involved with research into…

  19. Man-Made Object Extraction from Remote Sensing Imagery by Graph-Based Manifold Ranking

    Science.gov (United States)

    He, Y.; Wang, X.; Hu, X. Y.; Liu, S. H.

    2018-04-01

    The automatic extraction of man-made objects from remote sensing imagery is useful in many applications. This paper proposes an algorithm for extracting man-made objects automatically by integrating a graph model with the manifold ranking algorithm. Initially, we estimate an a priori value for the man-made objects with the use of symmetric and contrast features. The graph model is established to represent the spatial relationships among pre-segmented superpixels, which are used as the graph nodes. Multiple characteristics, namely colour, texture and main direction, are used to compute the weights of the adjacent nodes. Manifold ranking effectively explores the relationships among all the nodes in the feature space as well as the initial query assignment; thus, it is applied to generate a ranking map, which indicates the scores of the man-made objects. The man-made objects are then segmented on the basis of the ranking map. Two typical segmentation algorithms are compared with the proposed algorithm. Experimental results show that the proposed algorithm can extract man-made objects with a high recognition rate and a low omission rate.
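The manifold-ranking step can be sketched with the standard iterative diffusion f ← αSf + (1−α)y on a small graph, where S is the symmetrically normalized affinity matrix and y the initial query assignment. The toy chain graph and weights below stand in for the paper's superpixel graph with colour/texture/direction weights:

```python
def manifold_ranking(weights, query, alpha=0.9, iters=200):
    """Manifold ranking (Zhou et al. style iteration):
    f <- alpha * S f + (1 - alpha) * y, with S the symmetrically
    normalized affinity matrix. `weights` is a symmetric n x n
    affinity matrix (lists of lists); `query` is the indicator vector
    of a priori query nodes. Pure-Python sketch of the ranking step,
    not the full superpixel pipeline."""
    n = len(weights)
    deg = [sum(row) for row in weights]
    s = [[weights[i][j] / ((deg[i] * deg[j]) ** 0.5) if deg[i] and deg[j] else 0.0
          for j in range(n)] for i in range(n)]
    f = list(query)
    for _ in range(iters):
        f = [alpha * sum(s[i][j] * f[j] for j in range(n)) + (1 - alpha) * query[i]
             for i in range(n)]
    return f
```

On a four-node chain with the query at one end, the ranking scores decay monotonically with graph distance from the query, which is the behaviour the algorithm exploits to score man-made-object superpixels.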

  20. A probabilistic atlas of human brainstem pathways based on connectome imaging data.

    Science.gov (United States)

    Tang, Yuchun; Sun, Wei; Toga, Arthur W; Ringman, John M; Shi, Yonggang

    2018-04-01

    for each subject, which were then used to calculate the probabilistic atlases in the MNI152 space for public release. In our experimental results, we demonstrate that our method yielded anatomically faithful reconstruction of the brainstem pathways and achieved improved performance in comparison with an existing atlas of cerebellar peduncles based on HCP data. These atlases have been publicly released on NITRC (https://www.nitrc.org/projects/brainstem_atlas/) and can be readily used by brain imaging researchers interested in studying brainstem pathways. Copyright © 2017 Elsevier Inc. All rights reserved.

  1. Ranking multiple docking solutions based on the conservation of inter-residue contacts

    KAUST Repository

    Oliva, Romina M.

    2013-06-17

    Molecular docking is the method of choice for investigating the molecular basis of recognition in a large number of functional protein complexes. However, correctly scoring the obtained docking solutions (decoys) to rank native-like (NL) conformations in the top positions is still an open problem. Herein we present CONSRANK, a simple and effective tool to rank multiple docking solutions, which relies on the conservation of inter-residue contacts in the analyzed decoys ensemble. First it calculates a conservation rate for each inter-residue contact, then it ranks decoys according to their ability to match the more frequently observed contacts. We applied CONSRANK to 102 targets from three different benchmarks, RosettaDock, DOCKGROUND, and Critical Assessment of PRedicted Interactions (CAPRI). The method performs consistently well, both in terms of NL solutions ranked in the top positions and of values of the area under the receiver operating characteristic curve. Its ideal application is to solutions coming from different docking programs and procedures, as in the case of CAPRI targets. For all the analyzed CAPRI targets where a comparison is feasible, CONSRANK outperforms the CAPRI scorers. The fraction of NL solutions in the top ten positions in the RosettaDock, DOCKGROUND, and CAPRI benchmarks is enriched on average by a factor of 3.0, 1.9, and 9.9, respectively. Interestingly, CONSRANK is also able to specifically single out the high/medium quality (HMQ) solutions from the docking decoys ensemble: it ranks 46.2 and 70.8% of the total HMQ solutions available for the RosettaDock and CAPRI targets, respectively, within the top 20 positions. © 2013 Wiley Periodicals, Inc.
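The core CONSRANK idea, per-contact conservation rates turned into per-decoy scores, can be sketched as follows. The contact representation is deliberately simplified (string pair labels instead of residue pairs), and this is a sketch of the scoring scheme, not the published tool:

```python
from collections import Counter

def consrank_scores(decoys):
    """Rank docking decoys by conservation of inter-residue contacts,
    in the spirit of CONSRANK. Each decoy is a set of contacts (here,
    hashable labels). A contact's conservation rate is the fraction of
    decoys containing it; a decoy's score is the mean conservation
    rate over its own contacts. Returns (ranking, scores), where
    `ranking` lists decoy indices from best to worst."""
    n = len(decoys)
    counts = Counter(c for decoy in decoys for c in decoy)
    rate = {c: k / n for c, k in counts.items()}
    scores = [sum(rate[c] for c in d) / len(d) if d else 0.0 for d in decoys]
    ranking = sorted(range(n), key=lambda i: scores[i], reverse=True)
    return ranking, scores
```

Decoys sharing the most frequently observed contacts rise to the top; an outlier decoy with idiosyncratic contacts drops to the bottom.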

  2. A Probabilistic Alternative Approach to Optimal Project Profitability Based on the Value-at-Risk

    Directory of Open Access Journals (Sweden)

    Yonggu Kim

    2018-03-01

    Full Text Available This paper focuses on an investment decision-making process for sustainable development based on the profitability impact factors for overseas projects. Investors prefer to use the discounted cash-flow method. Although this method is simple and straightforward, its critical weakness is its inability to reflect the factor volatility associated with the project evaluation. To overcome this weakness, the Value-at-Risk method is used to apply the volatility of the profitability impact factors, thereby reflecting the risks and establishing decision-making criteria for risk-averse investors. Risk-averse investors can lose relatively acceptable investment opportunities to risk-neutral or risk-amenable investors due to strict investment decision-making criteria. To overcome this problem, critical factors are selected through a Monte Carlo simulation and a sensitivity analysis, and solutions to the critical-factor problems are then found by using the Theory of Inventive Problem Solving and a business version of the Project Definition Rating Index. This study examines the process of recovering investment opportunities with projects that are investment feasible but that have been rejected under the Value-at-Risk criterion. To do this, a probabilistic alternative approach is taken. To validate this methodology, the proposed framework for an improved decision-making process is demonstrated using two actual overseas projects of a Korean steel-making company.
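A hedged sketch of how Value-at-Risk enters project appraisal: simulate the NPV distribution under factor volatility and read off a low percentile as the risk-averse decision criterion, something a point-estimate discounted cash flow cannot provide. All cash-flow and discount-rate parameters below are hypothetical:

```python
import random
import statistics

def project_npv_var(n=20_000, confidence=0.95, seed=7):
    """Monte Carlo Value-at-Risk for a project NPV. Samples volatile
    profitability factors (discount rate, yearly cash flows), builds
    the NPV distribution, and returns (expected NPV, VaR), where VaR
    is the (1 - confidence) percentile of NPV. Parameters are
    illustrative placeholders, not values from the study."""
    rng = random.Random(seed)
    npvs = []
    for _ in range(n):
        rate = max(0.01, rng.gauss(0.08, 0.02))   # volatile discount rate
        npv = -100.0                              # initial outlay
        for year in range(1, 6):
            cash = rng.gauss(30.0, 8.0)           # volatile yearly cash flow
            npv += cash / (1 + rate) ** year
        npvs.append(npv)
    npvs.sort()
    var = npvs[int((1 - confidence) * n)]         # e.g. 5th-percentile NPV
    return statistics.mean(npvs), var
```

A project can have a comfortably positive expected NPV while its VaR is negative; the paper's point is precisely how to treat such cases rather than reject them outright.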

  3. Business Value of Information Technology Service Quality Based on Probabilistic Business-Driven Model

    Directory of Open Access Journals (Sweden)

    Jaka Sembiring

    2015-08-01

    Full Text Available The business value of information technology (IT) services is often difficult to assess, especially from the point of view of a non-IT manager. This condition could severely impact organizational IT strategic decisions. Various approaches have been proposed to quantify the business value, but some are trapped in technical complexity while others misguide managers into directly and subjectively judging some technical entities outside their domain of expertise. This paper describes a method for properly capturing both perspectives based on a probabilistic business-driven model. The proposed model presents a procedure to calculate the business value of IT services. The model also covers IT security services and their business value as an important aspect of IT services that is not covered in previously published research. The impact of changes in the quality of IT services on business value will also be discussed. A simulation and a case illustration are provided to show the possible application of the proposed model for a simple business process in an enterprise.

  4. Meditation effects within the hippocampal complex revealed by voxel-based morphometry and cytoarchitectonic probabilistic mapping

    Directory of Open Access Journals (Sweden)

    Eileen eLuders

    2013-07-01

    Full Text Available Scientific studies addressing anatomical variations in meditators' brains have emerged rapidly over the last few years, where significant links are most frequently reported with respect to gray matter (GM). To advance prior work, this study examined GM characteristics in a large sample of 100 subjects (50 meditators, 50 controls), where meditators have been practicing close to twenty years, on average. A standard, whole-brain voxel-based morphometry approach was applied and revealed significant meditation effects in the vicinity of the hippocampus, showing more GM in meditators than in controls as well as positive correlations with the number of years practiced. However, the hippocampal complex is regionally segregated by architecture, connectivity, and functional relevance. Thus, to establish differential effects within the hippocampal formation (cornu ammonis, fascia dentata, entorhinal cortex, subiculum) as well as the hippocampal-amygdaloid transition area, we utilized refined cytoarchitectonic probabilistic maps of (peri-)hippocampal subsections. Significant meditation effects were observed within the subiculum specifically. Since the subiculum is known to play a key role in stress regulation and meditation is an established form of stress reduction, these GM findings may reflect neuronal preservation in long-term meditators - perhaps due to an attenuated release of stress hormones and decreased neurotoxicity.

  5. Meditation effects within the hippocampal complex revealed by voxel-based morphometry and cytoarchitectonic probabilistic mapping

    Science.gov (United States)

    Luders, Eileen; Kurth, Florian; Toga, Arthur W.; Narr, Katherine L.; Gaser, Christian

    2013-01-01

    Scientific studies addressing anatomical variations in meditators' brains have emerged rapidly over the last few years, where significant links are most frequently reported with respect to gray matter (GM). To advance prior work, this study examined GM characteristics in a large sample of 100 subjects (50 meditators, 50 controls), where meditators have been practicing close to 20 years, on average. A standard, whole-brain voxel-based morphometry approach was applied and revealed significant meditation effects in the vicinity of the hippocampus, showing more GM in meditators than in controls as well as positive correlations with the number of years practiced. However, the hippocampal complex is regionally segregated by architecture, connectivity, and functional relevance. Thus, to establish differential effects within the hippocampal formation (cornu ammonis, fascia dentata, entorhinal cortex, subiculum) as well as the hippocampal-amygdaloid transition area, we utilized refined cytoarchitectonic probabilistic maps of (peri-) hippocampal subsections. Significant meditation effects were observed within the subiculum specifically. Since the subiculum is known to play a key role in stress regulation and meditation is an established form of stress reduction, these GM findings may reflect neuronal preservation in long-term meditators—perhaps due to an attenuated release of stress hormones and decreased neurotoxicity. PMID:23847572

  6. New Fault Recognition Method for Rotary Machinery Based on Information Entropy and a Probabilistic Neural Network.

    Science.gov (United States)

    Jiang, Quansheng; Shen, Yehu; Li, Hua; Xu, Fengyu

    2018-01-24

    Feature recognition and fault diagnosis play an important role in equipment safety and the stable operation of rotating machinery. In order to cope with the complexity of the vibration signal of rotating machinery, a feature fusion model based on information entropy and a probabilistic neural network is proposed in this paper. The new method first uses information entropy theory to extract three kinds of characteristic entropy from vibration signals, namely, singular spectrum entropy, power spectrum entropy, and approximate entropy. Then the feature fusion model is constructed to classify and diagnose the fault signals. The proposed approach can combine comprehensive information from different aspects and is more sensitive to the fault features. The experimental results on simulated fault signals verified the better performance of our proposed approach. On real two-span rotor data, the fault detection accuracy of the new method is more than 10% higher than that of methods using the three kinds of information entropy separately. The new approach is proved to be an effective fault recognition method for rotating machinery.
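One of the three entropy features named above, approximate entropy, can be computed from a raw signal as follows (standard Pincus definition; the tolerance and embedding parameters are conventional defaults, not necessarily the paper's settings):

```python
import math

def approximate_entropy(signal, m=2, r=None):
    """Approximate entropy ApEn(m, r) of a 1-D signal: low for
    regular/predictable signals, higher for irregular ones. If r is
    not given, the conventional 0.2 * (signal std) tolerance is used.
    Pure-Python sketch of one of the fused entropy features."""
    if r is None:
        mean = sum(signal) / len(signal)
        sd = (sum((x - mean) ** 2 for x in signal) / len(signal)) ** 0.5
        r = 0.2 * sd
    def phi(mm):
        n = len(signal) - mm + 1
        templates = [signal[i:i + mm] for i in range(n)]
        total = 0.0
        for t in templates:
            # Chebyshev-distance matches, including the self-match
            matches = sum(1 for u in templates
                          if max(abs(a - b) for a, b in zip(t, u)) <= r)
            total += math.log(matches / n)
        return total / n
    return phi(m) - phi(m + 1)
```

A perfectly alternating signal yields an ApEn near zero, while an irregular signal yields a clearly positive value, which is what makes the feature useful for distinguishing fault signatures.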

  7. Application of probabilistically weighted graphs to image-based diagnosis of Alzheimer's disease using diffusion MRI

    Science.gov (United States)

    Maryam, Syeda; McCrackin, Laura; Crowley, Mark; Rathi, Yogesh; Michailovich, Oleg

    2017-03-01

    The world's aging population has given rise to an increasing awareness towards neurodegenerative disorders, including Alzheimer's disease (AD). Treatment options for AD are currently limited, but it is believed that future success depends on our ability to detect the onset of the disease in its early stages. The most frequently used tools for this include neuropsychological assessments, along with genetic, proteomic, and image-based diagnosis. Recently, the applicability of Diffusion Magnetic Resonance Imaging (dMRI) analysis for early diagnosis of AD has also been reported. The sensitivity of dMRI to the microstructural organization of cerebral tissue makes it particularly well-suited to detecting changes which are known to occur in the early stages of AD. Existing dMRI approaches can be divided into two broad categories: region-based and tract-based. In this work, we propose a new approach, which extends region-based approaches to the simultaneous characterization of multiple brain regions. Given a predefined set of features derived from dMRI data, we compute the probabilistic distances between different brain regions and treat the resulting connectivity pattern as an undirected, fully-connected graph. The characteristics of this graph are then used as markers to discriminate between AD subjects and normal controls (NC). Although in this preliminary work we omit subjects in the prodromal stage of AD, mild cognitive impairment (MCI), our method demonstrates perfect separability between AD and NC subject groups with a substantial margin, and thus holds promise for fine-grained stratification of NC, MCI and AD populations.

  8. Probabilistic multiobjective wind-thermal economic emission dispatch based on point estimated method

    International Nuclear Information System (INIS)

    Azizipanah-Abarghooee, Rasoul; Niknam, Taher; Roosta, Alireza; Malekpour, Ahmad Reza; Zare, Mohsen

    2012-01-01

    In this paper, wind power generators are incorporated into the multiobjective economic emission dispatch problem, which simultaneously minimizes wind-thermal electrical energy cost and the emissions produced by fossil-fueled power plants. Large integration of wind energy sources necessitates an efficient model to cope with the uncertainty arising from random wind variation. Hence, a multiobjective stochastic search algorithm based on the 2m point estimate method is implemented to analyze the probabilistic wind-thermal economic emission dispatch problem considering both overestimation and underestimation of available wind power. The 2m point estimate method handles the system uncertainties and renders the probability density function of the desired variables efficiently. Moreover, a new population-based optimization algorithm called the modified teaching-learning-based optimization (MTLBO) algorithm is proposed to determine the set of non-dominated optimal solutions. During the simulation, the set of non-dominated solutions is kept in an external memory (repository). Also, a fuzzy-based clustering technique is implemented to control the size of the repository. In order to select the best compromise solution from the repository, a niching mechanism is utilized such that the population moves toward a smaller search space in the Pareto-optimal front. In order to show the efficiency and feasibility of the proposed framework, three different test systems are presented as case studies. -- Highlights: ► WPGs are incorporated in the multiobjective economic emission dispatch problem. ► The 2m PEM handles the system uncertainties. ► An MTLBO is proposed to determine the set of non-dominated (Pareto) optimal solutions. ► A fuzzy-based clustering technique is implemented to control the size of the repository.
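The 2m point estimate scheme can be sketched in its symmetric special case: assuming zero skewness of each input (an assumption made here for illustration; the general scheme also uses the inputs' skewness coefficients), each of the m uncertain inputs is evaluated at its mean plus or minus sqrt(m) standard deviations with the others held at their means, and each of the 2m evaluations gets weight 1/(2m):

```python
def pem_2m(func, means, sds):
    """Hong's 2m point estimate method, symmetric (zero-skewness)
    special case: evaluate `func` at 2m deterministic points and
    combine with equal weights 1/(2m) to estimate the mean and
    standard deviation of the output. Sketch of the uncertainty-
    propagation step only, not the full dispatch algorithm."""
    m = len(means)
    ey = ey2 = 0.0
    for k in range(m):
        for sign in (+1.0, -1.0):
            x = list(means)
            # Standard locations for zero skewness: mean +/- sqrt(m)*sd
            x[k] = means[k] + sign * (m ** 0.5) * sds[k]
            y = func(x)
            ey += y / (2 * m)
            ey2 += y * y / (2 * m)
    var = max(0.0, ey2 - ey * ey)
    return ey, var ** 0.5
```

For a linear model the scheme reproduces the exact output mean and standard deviation, which is a quick sanity check on the weights and locations.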

  9. Probabilistic Analysis for Comparing Fatigue Data Based on Johnson-Weibull Parameters

    Science.gov (United States)

    Vlcek, Brian L.; Hendricks, Robert C.; Zaretsky, Erwin V.

    2013-01-01

    Leonard Johnson published a methodology for establishing the confidence that two populations of data are different. Johnson's methodology is dependent on limited combinations of test parameters (Weibull slope, mean life ratio, and degrees of freedom) and a set of complex mathematical equations. In this report, a simplified algebraic equation for confidence numbers is derived based on the original work of Johnson. The confidence numbers calculated with this equation are compared to those obtained graphically by Johnson. Using the ratios of mean life, the resultant values of confidence numbers at the 99 percent level deviate less than 1 percent from those of Johnson. At a 90 percent confidence level, the calculated values differ between +2 and 4 percent. The simplified equation is used to rank the experimental lives of three aluminum alloys (AL 2024, AL 6061, and AL 7075), each tested at three stress levels in rotating beam fatigue, analyzed using the Johnson-Weibull method, and compared to the ASTM Standard (E739-91) method of comparison. The ASTM Standard did not statistically distinguish between AL 6061 and AL 7075. However, it is possible to rank the fatigue lives of different materials with a reasonable degree of statistical certainty based on combined confidence numbers using the Johnson-Weibull analysis. AL 2024 was found to have the longest fatigue life, followed by AL 7075, and then AL 6061. The ASTM Standard and the Johnson-Weibull analysis result in the same stress-life exponent p for each of the three aluminum alloys at the median, or L50, lives.

  10. Critical review of methods for risk ranking of food-related hazards, based on risks for human health.

    Science.gov (United States)

    Van der Fels-Klerx, H J; Van Asselt, E D; Raley, M; Poulsen, M; Korsgaard, H; Bredsdorff, L; Nauta, M; D'agostino, M; Coles, D; Marvin, H J P; Frewer, L J

    2018-01-22

    This study aimed to critically review methods for ranking risks related to food safety and dietary hazards on the basis of their anticipated human health impacts. A literature review was performed to identify and characterize methods for risk ranking from the fields of food, environmental science and socio-economic sciences. The review used a predefined search protocol, and covered the bibliographic databases Scopus, CAB Abstracts, Web of Sciences, and PubMed over the period 1993-2013. All references deemed relevant, on the basis of predefined evaluation criteria, were included in the review, and the risk ranking method characterized. The methods were then clustered, based on their characteristics, into eleven method categories. These categories included: risk assessment, comparative risk assessment, risk ratio method, scoring method, cost of illness, health adjusted life years (HALY), multi-criteria decision analysis, risk matrix, flow charts/decision trees, stated preference techniques and expert synthesis. Method categories were described by their characteristics, weaknesses and strengths, data resources, and fields of applications. It was concluded that there is no single best method for risk ranking. The method to be used should be selected on the basis of risk manager/assessor requirements, data availability, and the characteristics of the method. Recommendations for future use and application are provided.

  11. Probabilistic Physics of Failure-based framework for fatigue life prediction of aircraft gas turbine discs under uncertainty

    International Nuclear Information System (INIS)

    Zhu, Shun-Peng; Huang, Hong-Zhong; Peng, Weiwen; Wang, Hai-Kun; Mahadevan, Sankaran

    2016-01-01

    A probabilistic Physics of Failure-based framework for fatigue life prediction of aircraft gas turbine discs operating under uncertainty is developed. The framework incorporates the overall uncertainties appearing in a structural integrity assessment. A comprehensive uncertainty quantification (UQ) procedure is presented to quantify multiple types of uncertainty using multiplicative and additive UQ methods. In addition, the factors that contribute the most to the resulting output uncertainty are investigated and identified for uncertainty reduction in decision-making. A high prediction accuracy of the proposed framework is validated through a comparison of model predictions to the experimental results of GH4133 superalloy and full-scale tests of aero engine high-pressure turbine discs. - Highlights: • A probabilistic PoF-based framework for fatigue life prediction is proposed. • A comprehensive procedure for quantifying multiple types of uncertainty is presented. • The factors that contribute most to the resulting output uncertainty are identified. • The proposed framework demonstrates high prediction accuracy by full-scale tests.

  12. Microbial quality of reclaimed water for urban reuses: Probabilistic risk-based investigation and recommendations.

    Science.gov (United States)

    Chhipi-Shrestha, Gyan; Hewage, Kasun; Sadiq, Rehan

    2017-01-15

    Although Canada has abundant freshwater resources, many cities still experience seasonal water shortage. Supply-side and demand-side management is a core strategy to address this water shortage. Under this strategy, reclaimed water, which the Canadian public is willing to use for non-potable purposes, is an option. However, no universal guidelines exist for reclaimed water use. Despite the federal government's long-term goal to develop guidelines for many water reuse applications, guidelines have only been prescribed for reclaimed water use in toilet and urinal flushing in Canada. At the provincial level, British Columbia (BC) has promulgated guidelines for wide applications of reclaimed water but only at broad class levels. This research has investigated and proposed probabilistic risk-based recommended values for microbial quality of reclaimed water in various non-potable urban reuses. The health risk was estimated by using quantitative microbial risk assessment. Two-dimensional Monte Carlo simulations were used in the analysis to include variability and uncertainty in input data. The proposed recommended values are based on the indicator organism E. coli. The required treatment levels for reuse were also estimated. In addition, the recommended values were successfully applied to three wastewater treatment effluents in the Okanagan Valley, BC, Canada. The health risks associated with other bacterial pathogens (Campylobacter jejuni and Salmonella spp.), viruses (adenovirus, norovirus, and rotavirus), and protozoa (Cryptosporidium parvum and Giardia spp.) were also estimated. The estimated risks indicate the effectiveness of the E. coli-based water quality recommended values. Sensitivity analysis shows the pathogenic E. coli ratio and morbidity are the most sensitive input parameters for all water reuses. The proposed recommended values could be further improved by using national or regional data on water exposures, disease burden per case, and the susceptibility
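The quantitative microbial risk assessment step can be sketched as a Monte Carlo loop over uncertain parameters feeding an exponential dose-response model, with per-event infection probabilities combined into an annual risk. Every numeric input below (concentration, dose-response parameter, ingestion volume, exposure frequency) is an illustrative placeholder, not a calibrated value from the study:

```python
import math
import random

def annual_infection_risk(n=10_000, seed=3):
    """Simplified QMRA for a non-potable reuse (e.g. accidental
    ingestion during irrigation): dose = concentration x ingested
    volume; exponential dose-response P(inf) = 1 - exp(-r * dose);
    per-event risks combined over yearly exposure events. Returns
    the (median, 95th-percentile) annual infection risk across the
    sampled parameter sets. All parameters are placeholders."""
    rng = random.Random(seed)
    risks = []
    for _ in range(n):
        conc = 10 ** rng.gauss(0, 1)        # pathogen count per litre
        r = 10 ** rng.uniform(-4, -3)       # dose-response parameter
        p_no_infection = 1.0
        for _ in range(50):                 # exposure events per year
            volume = rng.uniform(0.0005, 0.002)   # litres ingested
            p_event = 1 - math.exp(-r * conc * volume)
            p_no_infection *= (1 - p_event)
        risks.append(1 - p_no_infection)
    risks.sort()
    return risks[int(0.5 * n)], risks[int(0.95 * n)]
```

A full two-dimensional analysis, as in the study, would separate variability (inner loop) from parameter uncertainty (outer loop); this sketch collapses them into one loop for brevity.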

  13. Deriving a probabilistic syntacto-semantic grammar for biomedicine based on domain-specific terminologies.

    Science.gov (United States)

    Fan, Jung-Wei; Friedman, Carol

    2011-10-01

    Biomedical natural language processing (BioNLP) is a useful technique that unlocks valuable information stored in textual data for practice and/or research. Syntactic parsing is a critical component of BioNLP applications that rely on correctly determining the sentence and phrase structure of free text. In addition to dealing with the vast amount of domain-specific terms, a robust biomedical parser needs to model the semantic grammar to obtain viable syntactic structures. With either a rule-based or corpus-based approach, the grammar engineering process requires substantial time and knowledge from experts, and does not always yield a semantically transferable grammar. To reduce the human effort and to promote semantic transferability, we propose an automated method for deriving a probabilistic grammar based on a training corpus consisting of concept strings and semantic classes from the Unified Medical Language System (UMLS), a comprehensive terminology resource widely used by the community. The grammar is designed to specify noun phrases only due to the nominal nature of the majority of biomedical terminological concepts. Evaluated on manually parsed clinical notes, the derived grammar achieved a recall of 0.644, precision of 0.737, and average cross-bracketing of 0.61, which demonstrated better performance than a control grammar with the semantic information removed. Error analysis revealed shortcomings that could be addressed to improve performance. The results indicated the feasibility of an approach which automatically incorporates terminology semantics in the building of an operational grammar. Although the current performance of the unsupervised solution does not adequately replace manual engineering, we believe once the performance issues are addressed, it could serve as an aide in a semi-supervised solution. Copyright © 2011 Elsevier Inc. All rights reserved.

  14. Optimal design of cluster-based ad-hoc networks using probabilistic solution discovery

    International Nuclear Information System (INIS)

    Cook, Jason L.; Ramirez-Marquez, Jose Emmanuel

    2009-01-01

The reliability of ad-hoc networks is gaining popularity in two areas: as a topic of academic interest and as a key performance parameter for defense systems employing this type of network. The ad-hoc network is dynamic and scalable and these descriptions are what attract its users. However, these descriptions are also synonymous with undefined and unpredictable when considering the impacts to the reliability of the system. The configuration of an ad-hoc network changes continuously and this fact implies that no single mathematical expression or graphical depiction can describe the system reliability-wise. Previous research has used mobility and stochastic models to address this challenge successfully. In this paper, the authors leverage the stochastic approach and build upon it a probabilistic solution discovery (PSD) algorithm to optimize the topology for a cluster-based mobile ad-hoc wireless network (MAWN). Specifically, the membership of nodes within the back-bone network or networks will be assigned in such a way as to maximize reliability subject to a constraint on cost. The constraint may also be considered as a non-monetary cost, such as weight, volume, power, or the like. When a cost is assigned to each component, a maximum cost threshold is assigned to the network, and the method is run; the result is an optimized allocation of the radios enabling back-bone network(s) to provide the most reliable network possible without exceeding the allowable cost. The method is intended for use directly as part of the architectural design process of a cluster-based MAWN to efficiently determine an optimal or near-optimal design solution. It is capable of optimizing the topology based upon all-terminal reliability (ATR), all-operating terminal reliability (AoTR), or two-terminal reliability (2TR)

  15. MEDRank: using graph-based concept ranking to index biomedical texts.

    Science.gov (United States)

    Herskovic, Jorge R; Cohen, Trevor; Subramanian, Devika; Iyengar, M Sriram; Smith, Jack W; Bernstam, Elmer V

    2011-06-01

As the volume of biomedical text increases exponentially, automatic indexing becomes increasingly important. However, existing approaches do not distinguish central (or core) concepts from concepts that were mentioned in passing. We focus on the problem of indexing MEDLINE records, a process that is currently performed by highly trained humans at the National Library of Medicine (NLM). NLM indexers are assisted by a system called the Medical Text Indexer (MTI) that suggests candidate indexing terms. To improve the ability of MTI to select the core terms in MEDLINE abstracts. These core concepts are deemed to be most important and are designated as "major headings" by MEDLINE indexers. We introduce and evaluate a graph-based indexing methodology called MEDRank that generates concept graphs from biomedical text and then ranks the concepts within these graphs to identify the most important ones. We insert a MEDRank step into the MTI and compare MTI's output with and without MEDRank to the MEDLINE indexers' selected terms for a sample of 11,803 PubMed Central articles. We also tested whether human raters prefer terms generated by the MEDLINE indexers, MTI without MEDRank, and MTI with MEDRank for a sample of 36 PubMed Central articles. MEDRank improved recall of the major headings designated by MEDLINE indexers by 30% over MTI without MEDRank (0.489 vs. 0.376). Overall recall was only slightly (6.5%) higher (0.490 vs. 0.460) as was F(2) (3%, 0.408 vs. 0.396). However, overall precision was 3.9% lower (0.268 vs. 0.279). Human raters preferred terms generated by MTI with MEDRank over terms generated by MTI without MEDRank (by an average of 1.00 more terms per article), and preferred terms generated by MTI with MEDRank and the MEDLINE indexers at the same rate. The addition of MEDRank to MTI significantly improved the retrieval of core concepts in MEDLINE abstracts and more closely matched human expectations compared to MTI without MEDRank. In addition, MEDRank slightly improved overall recall

  16. Dynamic PET reconstruction using temporal patch-based low rank penalty for ROI-based brain kinetic analysis

    International Nuclear Information System (INIS)

    Kim, Kyungsang; Ye, Jong Chul; Son, Young Don; Cho, Zang Hee; Bresler, Yoram; Ra, Jong Beom

    2015-01-01

Dynamic positron emission tomography (PET) is widely used to measure changes in the bio-distribution of radiopharmaceuticals within particular organs of interest over time. However, to retain sufficient temporal resolution, the number of photon counts in each time frame must be limited. Therefore, conventional reconstruction algorithms such as the ordered subset expectation maximization (OSEM) produce noisy reconstruction images, thus degrading the quality of the extracted time activity curves (TACs). To address this issue, many advanced reconstruction algorithms have been developed using various spatio-temporal regularizations. In this paper, we extend earlier results and develop a novel temporal regularization, which exploits the self-similarity of patches that are collected in dynamic images. The main contribution of this paper is to demonstrate that the correlation of patches can be exploited using a low-rank constraint that is insensitive to global intensity variations. The resulting optimization framework is, however, non-Lipschitz and non-convex due to the Poisson log-likelihood and low-rank penalty terms. Direct application of the conventional Poisson image deconvolution by an augmented Lagrangian (PIDAL) algorithm is, however, problematic due to its large memory requirements, which prevents its parallelization. Thus, we propose a novel optimization framework using the concave-convex procedure (CCCP) by exploiting the Legendre–Fenchel transform, which is computationally efficient and parallelizable. In computer simulation and a real in vivo experiment using a high-resolution research tomograph (HRRT) scanner, we confirm that the proposed algorithm can improve image quality while also extracting more accurate region-of-interest (ROI) based kinetic parameters. Furthermore, we show that the total reconstruction time for HRRT PET is significantly accelerated using our GPU implementation, which makes the algorithm very practical in clinical environments

  17. Rockfall hazard assessment integrating probabilistic physically based rockfall source detection (Norddal municipality, Norway).

    Science.gov (United States)

    Yugsi Molina, F. X.; Oppikofer, T.; Fischer, L.; Hermanns, R. L.; Taurisano, A.

    2012-04-01

Traditional techniques to assess rockfall hazard are partially based on probabilistic analysis. Stochastic methods have been used for run-out analysis of rock blocks to estimate the trajectories that a detached block will follow during its fall until it stops due to kinetic energy loss. However, the selection of rockfall source areas is usually defined either by multivariate analysis or by field observations. In either case, a physically based approach is not used for the source area detection. We present an example of rockfall hazard assessment that integrates a probabilistic rockfall run-out analysis with a stochastic assessment of the rockfall source areas using kinematic stability analysis in a GIS environment. The method has been tested for a steep, more than 200 m high rock wall located in the municipality of Norddal (Møre og Romsdal county, Norway), where a large number of people are exposed to snow avalanches, rockfalls, or debris flows. The area was selected following the recently published hazard mapping plan of Norway. The cliff is formed by medium- to coarse-grained quartz-dioritic to granitic gneisses of Proterozoic age. Scree deposits, the product of recent rockfall activity, are found at the bottom of the rock wall. Large blocks can be found several tens of meters away from the cliff in Sylte, the main locality in the Norddal municipality. Structural characterization of the rock wall was done using terrestrial laser scanning (TLS) point clouds in the software Coltop3D (www.terranum.ch), and results were validated with field data. Orientation data sets from the structural characterization were analyzed separately to assess best-fit probability density functions (PDF) for both the dip angle and dip direction angle of each discontinuity set. A GIS-based stochastic kinematic analysis was then carried out using the discontinuity set orientations and the friction angle as random variables. 
An airborne laser scanning digital elevation model (ALS-DEM) with 1 m

  18. A collaborative filtering recommendation algorithm based on weighted SimRank and social trust

    Science.gov (United States)

    Su, Chang; Zhang, Butao

    2017-05-01

Collaborative filtering is one of the most widely used recommendation technologies, but the data sparsity and cold start problems of collaborative filtering algorithms are difficult to solve effectively. In order to alleviate the problem of data sparsity in collaborative filtering algorithms, firstly, a weighted improved SimRank algorithm is proposed to compute the rating similarity between users in the rating data set. The improved SimRank can find more nearest neighbors for target users according to the transmissibility of rating similarity. Then, we build a trust network and introduce the calculation of trust degree in the trust relationship data set. Finally, we combine rating similarity and trust to build a comprehensive similarity in order to find more appropriate nearest neighbors for the target user. Experimental results show that the algorithm proposed in this paper improves the recommendation precision of the collaborative filtering algorithm effectively.
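The comprehensive similarity described in this abstract can be sketched as a weighted blend of rating similarity and trust degree. The cosine-based rating similarity and the `alpha` weight below are illustrative stand-ins, not the paper's exact weighted-SimRank formulation:

```python
import math

def rating_similarity(ratings_u, ratings_v):
    # Cosine similarity over co-rated items; a simple stand-in for the
    # paper's weighted SimRank score. Inputs map item id -> rating.
    common = set(ratings_u) & set(ratings_v)
    if not common:
        return 0.0
    num = sum(ratings_u[i] * ratings_v[i] for i in common)
    den = (math.sqrt(sum(ratings_u[i] ** 2 for i in common))
           * math.sqrt(sum(ratings_v[i] ** 2 for i in common)))
    return num / den if den else 0.0

def comprehensive_similarity(rating_sim, trust_degree, alpha=0.5):
    # Linear blend of rating similarity and trust degree; alpha is an
    # assumed weighting, not a value from the paper.
    return alpha * rating_sim + (1 - alpha) * trust_degree
```

With a blend like this, a neighbor with sparse co-ratings but high trust can still rank as a nearest neighbor, which is the mechanism the abstract credits for alleviating data sparsity.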

  19. Microseismic Event Grouping Based on PageRank Linkage at the Newberry Volcano Geothermal Site

    Science.gov (United States)

    Aguiar, A. C.; Myers, S. C.

    2016-12-01

    The Newberry Volcano DOE FORGE site in Central Oregon has been stimulated two times using high-pressure fluid injection to study the Enhanced Geothermal Systems (EGS) technology. Several hundred microseismic events were generated during the first stimulation in the fall of 2012. Initial locations of this microseismicity do not show well defined subsurface structure in part because event location uncertainties are large (Foulger and Julian, 2013). We focus on this stimulation to explore the spatial and temporal development of microseismicity, which is key to understanding how subsurface stimulation modifies stress, fractures rock, and increases permeability. We use PageRank, Google's initial search algorithm, to determine connectivity within the events (Aguiar and Beroza, 2014) and assess signal-correlation topology for the micro-earthquakes. We then use this information to create signal families and compare these to the spatial and temporal proximity of associated earthquakes. We relocate events within families (identified by PageRank linkage) using the Bayesloc approach (Myers et al., 2007). Preliminary relocations show tight spatial clustering of event families as well as evidence of events relocating to a different cluster than originally reported. We also find that signal similarity (linkage) at several stations, not just one or two, is needed in order to determine that events are in close proximity to one another. We show that indirect linkage of signals using PageRank is a reliable way to increase the number of events that are confidently determined to be similar to one another, which may lead to efficient and effective grouping of earthquakes with similar physical characteristics, such as focal mechanisms and stress drop. 
Our ultimate goal is to determine whether changes in the state of stress and/or changes in the generation of subsurface fracture networks can be detected using PageRank topology as well as aid in the event relocation to obtain more accurate
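The PageRank-linkage idea above can be illustrated with a minimal power-iteration sketch over a signal-correlation graph. The correlation matrix and the 0.7 linkage threshold below are hypothetical, not values from the study:

```python
import numpy as np

def pagerank(adj, damping=0.85, tol=1e-10, max_iter=200):
    # Power-iteration PageRank over a (possibly weighted) adjacency matrix.
    n = adj.shape[0]
    out = adj.sum(axis=0)
    out[out == 0] = 1.0          # avoid division by zero for isolated nodes
    M = adj / out                # column-normalize: rank is split among neighbors
    r = np.full(n, 1.0 / n)
    for _ in range(max_iter):
        r_new = (1 - damping) / n + damping * M @ r
        if np.abs(r_new - r).sum() < tol:
            break
        r = r_new
    return r

# Hypothetical waveform-correlation matrix for 4 events; link events whose
# correlation exceeds an assumed threshold of 0.7.
corr = np.array([[1.0, 0.9, 0.8, 0.1],
                 [0.9, 1.0, 0.75, 0.2],
                 [0.8, 0.75, 1.0, 0.15],
                 [0.1, 0.2, 0.15, 1.0]])
adj = (corr > 0.7).astype(float)
np.fill_diagonal(adj, 0.0)
scores = pagerank(adj)  # events 0-2 form a family; event 3 stays isolated
```

Events that are only indirectly linked (0 and 2 via 1, say) still end up in the same high-scoring family, which is the "indirect linkage" benefit the abstract describes.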

  20. AN EXCEL-BASED DECISION SUPPORT SYSTEM FOR SCORING AND RANKING PROPOSED R&D PROJECTS

    OpenAIRE

    ANNE DE PIANTE HENRIKSEN; SUSAN W. PALOCSAY

    2008-01-01

    One of the most challenging aspects of technology management is the selection of research and development (R&D) projects from among a group of proposals. This paper introduces an interactive, user-friendly decision support system for evaluating and ranking R&D projects and demonstrates its application on an example R&D program. It employs the scoring methodology developed by Henriksen and Traynor to provide a practical technique that considers both project merit and project cost in the evalua...

  1. Study on the feature of the Fortune 500 and the enlightenment basing on the 2014 ranking

    OpenAIRE

    Xue, Mingyue; Zhang, Zhifeng

    2015-01-01

The world’s 500 biggest companies have drawn increasing attention since the journal Fortune started the ranking. Summarizing the laws of development of the world’s top 500 enterprises provides an important reference and source of enlightenment for developing China’s enterprises. This paper carries out an overall analysis of the world’s 500 biggest companies, summarizes the shortcomings of China’s largest companies, and offers suggestions on the process and strategy towards the world’s 500 for Chinese enter...

  2. Color correction with blind image restoration based on multiple images using a low-rank model

    Science.gov (United States)

    Li, Dong; Xie, Xudong; Lam, Kin-Man

    2014-03-01

We present a method that can handle the color correction of multiple photographs with blind image restoration simultaneously and automatically. We prove that the local colors of a set of images of the same scene exhibit the low-rank property locally both before and after a color-correction operation. This property allows us to correct all kinds of errors in an image under a low-rank matrix model without particular priors or assumptions. The possible errors may be caused by changes of viewpoint, large illumination variations, gross pixel corruptions, partial occlusions, etc. Furthermore, a new iterative soft-segmentation method is proposed for local color transfer using color influence maps. Due to the fact that the correct color information and the spatial information of images can be recovered using the low-rank model, more precise color correction and many other image-restoration tasks, including image denoising, image deblurring, and gray-scale image colorizing, can be performed simultaneously. Experiments have verified that our method can achieve consistent and promising results on uncontrolled real photographs acquired from the Internet and that it outperforms current state-of-the-art methods.
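The low-rank property that such methods rely on can be illustrated with a truncated singular value decomposition, which gives the best rank-r approximation of a matrix (Eckart-Young). This is a generic sketch, not the authors' correction algorithm:

```python
import numpy as np

def low_rank_approx(M, rank):
    # Best rank-r approximation in the least-squares sense, via truncated SVD.
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return (U[:, :rank] * s[:rank]) @ Vt[:rank]

# A rank-2 matrix is reproduced exactly by its rank-2 approximation, while a
# rank-1 approximation keeps only the dominant structure (analogous to
# separating shared scene color from per-image corruption).
M = np.arange(12, dtype=float).reshape(3, 4)  # this matrix has rank 2
M2 = low_rank_approx(M, 2)
```

In robust low-rank models of the kind the abstract describes, the residual between the data and its low-rank approximation is where corruptions (occlusions, gross pixel errors) are expected to concentrate.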

  3. Ranked solutions to a class of combinatorial optimizations - with applications in mass spectrometry based peptide sequencing

    Science.gov (United States)

    Doerr, Timothy; Alves, Gelio; Yu, Yi-Kuo

    2006-03-01

Typical combinatorial optimizations are NP-hard; however, for a particular class of cost functions the corresponding combinatorial optimizations can be solved in polynomial time. This suggests a way to efficiently find approximate solutions: find a transformation that makes the cost function as similar as possible to that of the solvable class. After keeping many high-ranking solutions using the approximate cost function, one may then re-assess these solutions with the full cost function to find the best approximate solution. Under this approach, it is important to be able to assess the quality of the solutions obtained, e.g., by finding the true ranking of the kth best approximate solution when all possible solutions are considered exhaustively. To tackle this statistical issue, we provide a systematic method starting with a scaling function generated from the finite number of high-ranking solutions, followed by a convergent iterative mapping. This method, useful in a variant of the directed paths in random media problem proposed here, can also provide a statistical significance assessment for one of the most important proteomic tasks: peptide sequencing using tandem mass spectrometry data.

  4. Ranking REACH registered neutral, ionizable and ionic organic chemicals based on their aquatic persistency and mobility.

    Science.gov (United States)

    Arp, H P H; Brown, T N; Berger, U; Hale, S E

    2017-07-19

The contaminants that have the greatest chances of appearing in drinking water are those that are mobile enough in the aquatic environment to enter drinking water sources and persistent enough to survive treatment processes. Herein a screening procedure to rank neutral, ionizable and ionic organic compounds for being persistent and mobile organic compounds (PMOCs) is presented and applied to the list of industrial substances registered under the EU REACH legislation as of December 2014. This comprised 5155 identifiable, unique organic structures. The minimum cut-off criteria considered for PMOC classification herein are a freshwater half-life >40 days, which is consistent with the REACH definition of freshwater persistency, and a cut-off on log Doc (Doc: the organic carbon-water distribution coefficient). Experimental data were given the highest priority, followed by data from an array of available quantitative structure-activity relationships (QSARs), and as a third resort, an original Iterative Fragment Selection (IFS) QSAR. In total, 52% of the unique REACH structures met the minimum criteria to be considered a PMOC, and 21% achieved the highest PMOC ranking (half-life > 40 days and log Doc below the cut-off). The parameter that the QSARs performed the most poorly at predicting was freshwater persistency. Several prioritized drinking water contaminants in the EU and USA, and other contaminants of concern, were identified as PMOCs. This identification and ranking procedure for PMOCs can be part of a strategy to better identify contaminants that pose a threat to drinking water sources.
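The two-criterion screen can be sketched as a toy classifier. Only the 40-day persistency cut-off is stated explicitly in the abstract; the mobility cut-off used below is an assumed placeholder parameter, not the paper's value:

```python
def pmoc_screen(half_life_days, log_doc,
                persistence_cutoff=40.0, mobility_cutoff=4.0):
    # Persistent: freshwater half-life above the REACH cut-off (40 d).
    # Mobile: log Doc below an assumed cut-off (placeholder value, not
    # taken from the paper).
    persistent = half_life_days > persistence_cutoff
    mobile = log_doc < mobility_cutoff
    return "PMOC" if (persistent and mobile) else "not PMOC"
```

A real screen would additionally encode the data-priority hierarchy the abstract describes (experimental data, then established QSARs, then the IFS QSAR) when choosing which half-life and log Doc estimate to feed in.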

  5. University Rankings: The Web Ranking

    Science.gov (United States)

    Aguillo, Isidro F.

    2012-01-01

    The publication in 2003 of the Ranking of Universities by Jiao Tong University of Shanghai has revolutionized not only academic studies on Higher Education, but has also had an important impact on the national policies and the individual strategies of the sector. The work gathers the main characteristics of this and other global university…

  6. A physics-based probabilistic forecasting model for rainfall-induced shallow landslides at regional scale

    Directory of Open Access Journals (Sweden)

    S. Zhang

    2018-03-01

    Full Text Available Conventional outputs of physics-based landslide forecasting models are presented as deterministic warnings by calculating the safety factor (Fs of potentially dangerous slopes. However, these models are highly dependent on variables such as cohesion force and internal friction angle which are affected by a high degree of uncertainty especially at a regional scale, resulting in unacceptable uncertainties of Fs. Under such circumstances, the outputs of physical models are more suitable if presented in the form of landslide probability values. In order to develop such models, a method to link the uncertainty of soil parameter values with landslide probability is devised. This paper proposes the use of Monte Carlo methods to quantitatively express uncertainty by assigning random values to physical variables inside a defined interval. The inequality Fs < 1 is tested for each pixel in n simulations which are integrated in a unique parameter. This parameter links the landslide probability to the uncertainties of soil mechanical parameters and is used to create a physics-based probabilistic forecasting model for rainfall-induced shallow landslides. The prediction ability of this model was tested in a case study, in which simulated forecasting of landslide disasters associated with heavy rainfalls on 9 July 2013 in the Wenchuan earthquake region of Sichuan province, China, was performed. The proposed model successfully forecasted landslides in 159 of the 176 disaster points registered by the geo-environmental monitoring station of Sichuan province. Such testing results indicate that the new model can be operated in a highly efficient way and show more reliable results, attributable to its high prediction accuracy. Accordingly, the new model can be potentially packaged into a forecasting system for shallow landslides providing technological support for the mitigation of these disasters at regional scale.
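The Monte Carlo step described above (drawing uncertain soil parameters from defined intervals and counting Fs < 1 outcomes) can be sketched as follows. The infinite-slope safety-factor formula is a standard choice for shallow landslides, and all parameter values and ranges here are illustrative assumptions, not the paper's calibrated inputs:

```python
import math
import random

def failure_probability(trials=20_000, seed=42):
    # Monte Carlo estimate of P(Fs < 1) for one slope cell, using the
    # infinite-slope safety factor (no pore pressure term).
    rng = random.Random(seed)
    gamma = 19.0               # unit weight of soil (kN/m^3), assumed
    depth = 1.5                # failure-surface depth (m), assumed
    beta = math.radians(35.0)  # slope angle, assumed
    failures = 0
    for _ in range(trials):
        # Uncertain soil parameters drawn uniformly from assumed intervals
        cohesion = rng.uniform(2.0, 10.0)            # kPa
        phi = math.radians(rng.uniform(20.0, 35.0))  # internal friction angle
        fs = (cohesion + gamma * depth * math.cos(beta) ** 2 * math.tan(phi)) / (
            gamma * depth * math.sin(beta) * math.cos(beta))
        if fs < 1.0:
            failures += 1
    return failures / trials
```

Run per pixel, the returned fraction is the single probability parameter the abstract describes, replacing a deterministic Fs warning with a landslide probability.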

  7. A physics-based probabilistic forecasting model for rainfall-induced shallow landslides at regional scale

    Science.gov (United States)

    Zhang, Shaojie; Zhao, Luqiang; Delgado-Tellez, Ricardo; Bao, Hongjun

    2018-03-01

Conventional outputs of physics-based landslide forecasting models are presented as deterministic warnings by calculating the safety factor (Fs) of potentially dangerous slopes. However, these models are highly dependent on variables such as cohesion force and internal friction angle which are affected by a high degree of uncertainty especially at a regional scale, resulting in unacceptable uncertainties of Fs. Under such circumstances, the outputs of physical models are more suitable if presented in the form of landslide probability values. In order to develop such models, a method to link the uncertainty of soil parameter values with landslide probability is devised. This paper proposes the use of Monte Carlo methods to quantitatively express uncertainty by assigning random values to physical variables inside a defined interval. The inequality Fs < 1 is tested for each pixel in n simulations which are integrated in a unique parameter. This parameter links the landslide probability to the uncertainties of soil mechanical parameters and is used to create a physics-based probabilistic forecasting model for rainfall-induced shallow landslides. The prediction ability of this model was tested in a case study, in which simulated forecasting of landslide disasters associated with heavy rainfalls on 9 July 2013 in the Wenchuan earthquake region of Sichuan province, China, was performed. The proposed model successfully forecasted landslides in 159 of the 176 disaster points registered by the geo-environmental monitoring station of Sichuan province. Such testing results indicate that the new model can be operated in a highly efficient way and show more reliable results, attributable to its high prediction accuracy. Accordingly, the new model can be potentially packaged into a forecasting system for shallow landslides providing technological support for the mitigation of these disasters at regional scale.

  8. Efficient File Sharing by Multicast - P2P Protocol Using Network Coding and Rank Based Peer Selection

    Science.gov (United States)

    Stoenescu, Tudor M.; Woo, Simon S.

    2009-01-01

In this work, we consider information dissemination and sharing in a distributed peer-to-peer (P2P) highly dynamic communication network. In particular, we explore a network coding technique for transmission and a rank-based peer selection method for network formation. The combined approach has been shown to improve information sharing and delivery to all users when considering the challenges imposed by space network environments.

  9. Utilization of a risk matrix based on Probabilistic Safety Analysis to improve nuclear safety in NPP

    International Nuclear Information System (INIS)

    Stubbe, Gerald

    2010-01-01

The Probabilistic Safety Analysis (PSA) is a systematic and comprehensive methodology to evaluate risks associated with a complex engineered technological entity. Risk in a PSA is defined as a feasible detrimental outcome of an initiator. Those initiators can be 'classical' transients, such as the loss of main feedwater or loss of the secondary heat sink, or accidents (LOCA - Loss Of Coolant Accident, SGTR - Steam Generator Tube Rupture, LOOP - Loss Of Offsite Power, etc.). In a PSA, risk is characterized by two quantities: the magnitude (severity) of the possible adverse consequence, and the likelihood (probability) of occurrence of each consequence. Consequences are expressed numerically (for this purpose: core damage) and their likelihoods of occurrence are expressed as probabilities or frequencies (i.e., the number of occurrences or the probability of occurrence per unit time). The total risk is the expected loss: the sum of the products of the consequences multiplied by their probabilities. This leads to the parameter CDF, the Core Damage Frequency, which is expressed per unit of time. The main advantage of this risk calculation is to provide a global, integrated overview of the plant and its systems. This allows an objective and quantitative point of view on the importance of the equipment, human actions, or common cause failures that can challenge the plant's safety. A complete PSA model is divided into three levels: level one, which considers core damage; level two, which considers the robustness of the containment; and level three, which considers the impact of the radiological release on the public. For the purpose of the risk matrix, a level one PSA is needed. The scope of a PSA model is important for a good characterization of the plant's risk. The matrix makes more sense with a full-scope level one model, containing not only the internal events but also fire and flooding, as well as seismic events (if relevant). 
Asymmetries are also classical in the
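The expected-loss definition of risk quoted above can be sketched in a few lines. The accident sequences, initiator frequencies, and conditional probabilities below are purely hypothetical illustrations:

```python
def total_risk(sequences):
    # Expected loss: sum over accident sequences of
    # (initiator frequency per year) x (conditional core-damage probability).
    return sum(freq * p_cd for freq, p_cd in sequences)

# Hypothetical level-1 PSA sequences: (frequency / year, P(core damage | initiator))
sequences = [
    (1e-2, 1e-4),  # loss of main feedwater
    (1e-3, 1e-3),  # steam generator tube rupture (SGTR)
    (1e-4, 1e-2),  # loss of offsite power with safety-system failures
]
cdf = total_risk(sequences)  # a crude core damage frequency, per year
```

Importance measures of the kind the abstract mentions (for equipment or human actions) are typically obtained by perturbing one failure probability in such a sum and observing the change in the total CDF.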

  10. SpikeTemp: An Enhanced Rank-Order-Based Learning Approach for Spiking Neural Networks With Adaptive Structure.

    Science.gov (United States)

    Wang, Jinling; Belatreche, Ammar; Maguire, Liam P; McGinnity, Thomas Martin

    2017-01-01

    This paper presents an enhanced rank-order-based learning algorithm, called SpikeTemp, for spiking neural networks (SNNs) with a dynamically adaptive structure. The trained feed-forward SNN consists of two layers of spiking neurons: 1) an encoding layer which temporally encodes real-valued features into spatio-temporal spike patterns and 2) an output layer of dynamically grown neurons which perform spatio-temporal classification. Both Gaussian receptive fields and square cosine population encoding schemes are employed to encode real-valued features into spatio-temporal spike patterns. Unlike the rank-order-based learning approach, SpikeTemp uses the precise times of the incoming spikes for adjusting the synaptic weights such that early spikes result in a large weight change and late spikes lead to a smaller weight change. This removes the need to rank all the incoming spikes and, thus, reduces the computational cost of SpikeTemp. The proposed SpikeTemp algorithm is demonstrated on several benchmark data sets and on an image recognition task. The results show that SpikeTemp can achieve better classification performance and is much faster than the existing rank-order-based learning approach. In addition, the number of output neurons is much smaller when the square cosine encoding scheme is employed. Furthermore, SpikeTemp is benchmarked against a selection of existing machine learning algorithms, and the results demonstrate the ability of SpikeTemp to classify different data sets after just one presentation of the training samples with comparable classification performance.
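The contrast between classic rank-order learning and SpikeTemp's time-based update can be sketched as follows. The modulation base and decay constant are illustrative assumptions, not values from the paper:

```python
import math

def rank_order_update(spike_times, base=0.8):
    # Classic rank-order rule: the weight change depends only on the rank
    # of each incoming spike, so all spikes must be sorted first.
    order = sorted(range(len(spike_times)), key=lambda i: spike_times[i])
    dw = [0.0] * len(spike_times)
    for rank, i in enumerate(order):
        dw[i] = base ** rank
    return dw

def spiketemp_update(spike_times, tau=10.0):
    # SpikeTemp-style rule: the weight change decays with the precise spike
    # time itself, so no explicit ranking of the inputs is needed.
    t0 = min(spike_times)
    return [math.exp(-(t - t0) / tau) for t in spike_times]
```

Both rules give early spikes large weight changes and late spikes small ones, but the time-based rule avoids the sort, which is the computational saving the abstract describes.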

  11. New Aspects of Probabilistic Forecast Verification Using Information Theory

    Science.gov (United States)

    Tödter, Julian; Ahrens, Bodo

    2013-04-01

This work deals with information-theoretical methods in probabilistic forecast verification, particularly concerning ensemble forecasts. Recent findings concerning the "Ignorance Score" are briefly reviewed, and then a consistent generalization to continuous forecasts is motivated. For ensemble-generated forecasts, the presented measures can be calculated exactly. The Brier Score (BS) and its generalizations to the multi-categorical Ranked Probability Score (RPS) and to the Continuous Ranked Probability Score (CRPS) are prominent verification measures for probabilistic forecasts. Particularly, their decompositions into measures quantifying the reliability, resolution and uncertainty of the forecasts are attractive. Information theory sets up a natural framework for forecast verification. Recently, it has been shown that the BS is a second-order approximation of the information-based Ignorance Score (IGN), which also contains easily interpretable components and can also be generalized to a ranked version (RIGN). Here, the IGN, its generalizations and decompositions are systematically discussed in analogy to the variants of the BS. Additionally, a Continuous Ranked IGN (CRIGN) is introduced in analogy to the CRPS. The useful properties of the conceptually appealing CRIGN are illustrated, together with an algorithm to evaluate its components (reliability, resolution, and uncertainty) for ensemble-generated forecasts. This algorithm can also be used to calculate the decomposition of the more traditional CRPS exactly. The applicability of the "new" measures is demonstrated in a small evaluation study of ensemble-based precipitation forecasts.
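The claim that these measures can be calculated exactly for ensembles can be illustrated with the standard closed-form CRPS of an ensemble forecast against a scalar observation (a generic textbook formula, not this paper's decomposition algorithm):

```python
def ensemble_crps(members, obs):
    # Exact CRPS of an ensemble forecast {x_i} against observation y:
    # CRPS = mean_i |x_i - y| - 0.5 * mean_{i,j} |x_i - x_j|
    m = len(members)
    term1 = sum(abs(x - obs) for x in members) / m
    term2 = sum(abs(x - y) for x in members for y in members) / (2 * m * m)
    return term1 - term2
```

The second term rewards ensemble spread, so a sharp but slightly off ensemble can outscore a wide one; decompositions of the kind discussed in the abstract separate this into reliability, resolution, and uncertainty contributions.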

  12. University Rankings and Social Science

    OpenAIRE

    Marginson, S.

    2014-01-01

    University rankings widely affect the behaviours of prospective students and their families, university executive leaders, academic faculty, governments and investors in higher education. Yet the social science foundations of global rankings receive little scrutiny. Rankings that simply recycle reputation without any necessary connection to real outputs are of no common value. It is necessary that rankings be soundly based in scientific terms if a virtuous relationship between performance and...

  13. Elements of the safety case for the Morsleben repository based on probabilistic modelling

    International Nuclear Information System (INIS)

    Wollrath, J.; Niemeyer, M.; Resele, G.; Becker, D.A.; Hirsekorn, P.

    2008-01-01

The Morsleben nuclear waste repository (ERAM) for low- and intermediate-level, mainly short-lived waste is located in a former salt mine. The closure concept was developed in parallel and interacting with the safety assessment. The safety concept is based on extensive backfilling with salt concrete complemented with seals between the main disposal areas and the rest of the mine building. Thus, the entire system exhibits a barrier effect through a partially redundant combination of several processes. However, in the formal safety assessment no credit is taken for the barrier effect of the extensive backfill. In the safety assessments, the different possibilities of system development, the resulting array of potential fluid movement and a large number of potential radionuclide migration pathways are mapped in the bandwidth of derived parameters. The calculated response of the system to parameter variations is non-linear. Different processes may compete and compensate each other. Hence, the common practice to choose a conservative parameter set for the safety assessment is a priori impossible. The safety assessment has been performed independently by two groups with different computer models, for the same closure concept and the same basic parameters but independent conceptual approaches. Both groups have performed deterministic and probabilistic dose calculations. The results match well; the differences can be explained on the basis of the model approaches. Although a large bandwidth is considered for a number of parameters the maximum radiation exposure remains clearly below the applicable dose limit for nearly all calculations, demonstrating the robustness of the system. These aspects significantly contribute to confidence building in the Safety Case for ERAM. (authors)

  14. Incorporating probabilistic seasonal climate forecasts into river management using a risk-based framework

    Science.gov (United States)

    Sojda, Richard S.; Towler, Erin; Roberts, Mike; Rajagopalan, Balaji

    2013-01-01

    Despite the influence of hydroclimate on river ecosystems, most efforts to date have focused on using climate information to predict streamflow for water supply. However, as water demands intensify and river systems are increasingly stressed, research is needed to explicitly integrate climate into streamflow forecasts that are relevant to river ecosystem management. To this end, we present a five-step risk-based framework: (1) define risk tolerance, (2) develop a streamflow forecast model, (3) generate climate forecast ensembles, (4) estimate streamflow ensembles and associated risk, and (5) manage for climate risk. The framework is successfully demonstrated for an unregulated watershed in southwest Montana, where the combination of recent drought and water withdrawals has made it challenging to maintain flows needed for healthy fisheries. We put forth a generalized linear modeling (GLM) approach to develop a suite of tools that skillfully model decision-relevant low flow characteristics in terms of climate predictors. Probabilistic precipitation forecasts are used in conjunction with the GLMs, resulting in season-ahead prediction ensembles that provide the full risk profile. These tools are embedded in an end-to-end risk management framework that directly supports proactive fish conservation efforts. Results show that the use of forecasts can be beneficial to planning, especially in wet years, but historical precipitation forecasts are quite conservative (i.e., not very “sharp”). Synthetic forecasts show that a modest “sharpening” can strongly impact risk and improve skill. We emphasize that use in management depends on defining relevant environmental flows and risk tolerance, requiring local stakeholder involvement.
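
    Step (4) of the framework can be illustrated with a minimal numerical sketch; all numbers below are invented assumptions, not values from the study. The risk estimate is simply the fraction of a season-ahead forecast ensemble that falls below a stakeholder-defined environmental flow threshold.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical season-ahead ensemble of low-flow forecasts (m^3/s),
# e.g. obtained by driving a streamflow GLM with climate-forecast members.
ensemble = rng.normal(loc=1.8, scale=0.5, size=1000)

# Risk tolerance defined with stakeholders: an environmental flow threshold.
threshold = 1.2  # m^3/s, illustrative value

# Step (4): risk is the probability that the season's low flow violates
# the threshold, estimated from the ensemble.
risk = float(np.mean(ensemble < threshold))
print(f"Estimated risk of violating the environmental flow: {risk:.1%}")
```

    A sharper forecast (smaller ensemble spread around the same mean) would shift this probability, which is exactly the sensitivity the synthetic-forecast experiments in the abstract exploit.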

  15. Implementation of a risk assessment tool based on a probabilistic safety assessment developed for radiotherapy practices

    International Nuclear Information System (INIS)

    Paz, A.; Godinez, V.; Lopez, R.

    2010-10-01

    The present work describes the implementation process and main results of the risk assessment of radiotherapy practices with linear accelerators (Linac), with cobalt-60, and with brachytherapy. These evaluations were made using the risk assessment tool for radiotherapy practices SEVRRA (risk evaluation system for radiotherapy), developed at the Mexican National Commission on Nuclear Safety and Safeguards from the outcome of the Probabilistic Safety Analysis developed at the Ibero-American Regulators Forum for these radiotherapy facilities. The methodology is supported by the risk matrices method, a mathematical tool that estimates the risk to patients, radiation workers and the public from mechanical failures, miscalibration of the devices, human mistakes, and so on. The initiating events are defined as those undesirable events that, together with other failures, can produce delivery of an over-dose or an under-dose of the medically prescribed dose to the planned target volume, or a significant dose to non-prescribed human organs. The frequency of initiating events and the reducers of that frequency (actions intended to avoid the accident) are estimated, as well as the robustness of barriers, such as mechanical switches, which detect and prevent the accident from occurring. The spectrum of consequences is parameterized, and the actions performed to reduce the consequences are identified. Based on this analysis, a software tool was developed to simplify the evaluation of radiotherapy installations, and it has been applied to some Mexican installations as a first step in a national implementation process; the final goal is the evaluation of all Mexican facilities in the near future. The main targets and benefits of the SEVRRA implementation are presented in this paper. (Author)
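
    The risk-matrix logic described above can be sketched roughly as follows. The levels, matrix entries and barrier-crediting rule are hypothetical simplifications for illustration, not the actual SEVRRA implementation.

```python
# Hypothetical simplification of the risk-matrices method: the frequency
# level of an initiating event is reduced by each robust barrier, then
# combined with the consequence level through a lookup matrix. Levels,
# matrix entries and the barrier-crediting rule are all invented here.

RISK_MATRIX = [  # rows: effective frequency level, cols: consequence level
    ["low",    "low",    "medium"],
    ["low",    "medium", "high"],
    ["medium", "high",   "high"],
]

def risk_level(freq_level: int, n_barriers: int, consequence_level: int) -> str:
    """Each independent barrier credited reduces the frequency by one level."""
    eff_freq = max(0, min(freq_level - n_barriers, 2))
    return RISK_MATRIX[eff_freq][max(0, min(consequence_level, 2))]

print(risk_level(freq_level=2, n_barriers=0, consequence_level=2))  # high
print(risk_level(freq_level=2, n_barriers=2, consequence_level=2))  # medium
```

    The second call shows the core idea of the method: adding robust barriers moves an installation down the matrix without changing the consequence spectrum.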

  16. Implementation of a risk assessment tool based on a probabilistic safety assessment developed for radiotherapy practices

    Energy Technology Data Exchange (ETDEWEB)

    Paz, A.; Godinez, V.; Lopez, R., E-mail: abpaz@cnsns.gob.m [Comision Nacional de Seguridad Nuclear y Salvaguardias, Dr. Barragan No. 779, Col. Narvarte, 03020 Mexico D. F. (Mexico)

    2010-10-15

    The present work describes the implementation process and main results of the risk assessment of radiotherapy practices with linear accelerators (Linac), with cobalt-60, and with brachytherapy. These evaluations were made using the risk assessment tool for radiotherapy practices SEVRRA (risk evaluation system for radiotherapy), developed at the Mexican National Commission on Nuclear Safety and Safeguards from the outcome of the Probabilistic Safety Analysis developed at the Ibero-American Regulators Forum for these radiotherapy facilities. The methodology is supported by the risk matrices method, a mathematical tool that estimates the risk to patients, radiation workers and the public from mechanical failures, miscalibration of the devices, human mistakes, and so on. The initiating events are defined as those undesirable events that, together with other failures, can produce delivery of an over-dose or an under-dose of the medically prescribed dose to the planned target volume, or a significant dose to non-prescribed human organs. The frequency of initiating events and the reducers of that frequency (actions intended to avoid the accident) are estimated, as well as the robustness of barriers, such as mechanical switches, which detect and prevent the accident from occurring. The spectrum of consequences is parameterized, and the actions performed to reduce the consequences are identified. Based on this analysis, a software tool was developed to simplify the evaluation of radiotherapy installations, and it has been applied to some Mexican installations as a first step in a national implementation process; the final goal is the evaluation of all Mexican facilities in the near future. The main targets and benefits of the SEVRRA implementation are presented in this paper. (Author)

  17. An individual-based probabilistic model for simulating fisheries population dynamics

    Directory of Open Access Journals (Sweden)

    Jie Cao

    2016-12-01

    The purpose of stock assessment is to support managers in making informed decisions regarding removals from fish populations. Errors in assessment models may have devastating impacts on population fitness and negative impacts on the economy of the resource users. Thus, accurate estimates of population size and growth rates are critical for success. Evaluating and testing the behavior and performance of stock assessment models, and assessing the consequences of model mis-specification and the impact of management strategies, requires an operating model that accurately describes the dynamics of the target species and can resolve spatial and seasonal changes. In addition, the most thorough evaluations of assessment models use an operating model that takes a different form than the assessment model. This paper presents an individual-based probabilistic model used to simulate the complex dynamics of populations and their associated fisheries. Various components of population dynamics are expressed as random Bernoulli trials in the model, and the detailed life and fishery histories of each individual are tracked over its life span. The simulation model is designed to be flexible so it can be used for different species and fisheries. It can simulate mixing among multiple stocks and link stock-recruit relationships to environmental factors. Furthermore, the model allows for flexibility in sub-models (e.g., growth and recruitment) and model assumptions (e.g., age- or size-dependent selectivity). This model enables the user to conduct various simulation studies, including testing the performance of assessment models under different assumptions, assessing the impacts of model mis-specification and evaluating management strategies.
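
    The "random Bernoulli trials" idea can be sketched in a few lines; the cohort size, time horizon and probabilities below are invented, and the real model tracks far richer per-individual histories.

```python
import random

random.seed(7)

def simulate_cohort(n_fish=10_000, years=5, p_capture=0.1, p_natural=0.2):
    """Follow each individual; capture and natural survival are yearly Bernoulli trials."""
    catch_by_year = []
    alive = n_fish
    for _ in range(years):
        caught = survivors = 0
        for _ in range(alive):
            if random.random() < p_capture:      # individual taken by the fishery
                caught += 1
            elif random.random() >= p_natural:   # escaped capture and survived the year
                survivors += 1
        catch_by_year.append(caught)
        alive = survivors
    return catch_by_year

catches = simulate_cohort()
print(catches)  # annual catches decline as the cohort is depleted
```

    Because every individual is simulated separately, age-, size- or stock-dependent probabilities can be substituted per fish, which is what makes this operating-model style flexible.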

  18. Landslide susceptibility mapping along PLUS expressways in Malaysia using probabilistic based model in GIS

    Science.gov (United States)

    Yusof, Norbazlan M.; Pradhan, Biswajeet

    2014-06-01

    PLUS Berhad holds the concession for a total of 987 km of toll expressways in Malaysia, the longest of which is the North-South Expressway or NSE. Acting as the 'backbone' of the west coast of the peninsula, the NSE stretches from the Malaysian-Thai border in the north to the border with neighbouring Singapore in the south, linking several major cities and towns along the way. The North-South Expressway contributes to the country's economic development through the trade, social and tourism sectors. Presently, the highway is in good condition and well connected to every state, but some locations need urgent attention. The stability of slopes at these locations is of most concern, as any instability can endanger motorists. In this paper, two study locations have been analysed: Gua Tempurung (soil slope) and Jelapang (rock slope), which have clearly different characteristics. These locations pass through undulating terrain with steep slopes, where landslides are common and the probability of slope instability due to human activities in surrounding areas is high. A database of twelve (12) landslide conditioning factors was compiled: slope degree and slope aspect were extracted from IFSAR (interferometric synthetic aperture radar) data, while landuse, lithology and structural geology were constructed from interpretation of high-resolution satellite data from WorldView-2, QuickBird and Ikonos. All this information was analysed in a geographic information system (GIS) environment for landslide susceptibility mapping using a probabilistic frequency ratio model. In addition, information on the slopes such as inventories, condition assessments and maintenance records was assessed through the total expressway maintenance management system, better known as TEMAN.
    The above-mentioned system is used by PLUS as an asset management and decision support tool for maintenance activities along the highways as well as for data
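
    The frequency ratio model referred to above has a simple core: for each class of a conditioning factor, FR is the class's share of landslide cells divided by its share of all cells, and the susceptibility index of a cell sums the FRs of its classes. A toy sketch with invented raster data:

```python
from collections import Counter

# Toy raster: one conditioning-factor class per cell; 1 marks a landslide cell.
slope_class = ["steep", "steep", "gentle", "steep", "gentle", "gentle", "steep", "gentle"]
landslide   = [1,       1,       0,        1,       0,        1,        0,       0      ]

def frequency_ratio(classes, slides):
    """FR(class) = (class share of landslide cells) / (class share of all cells)."""
    total, slide_total = len(classes), sum(slides)
    area = Counter(classes)
    slide = Counter(c for c, s in zip(classes, slides) if s)
    return {c: (slide.get(c, 0) / slide_total) / (area[c] / total) for c in area}

fr = frequency_ratio(slope_class, landslide)
print(fr)  # FR > 1 flags classes over-represented among landslides
```

    In a full analysis this calculation is repeated for all twelve conditioning factors, and the per-cell susceptibility index is the sum of the relevant FR values.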

  19. Landslide susceptibility mapping along PLUS expressways in Malaysia using probabilistic based model in GIS

    International Nuclear Information System (INIS)

    Yusof, Norbazlan M; Pradhan, Biswajeet

    2014-01-01

    PLUS Berhad holds the concession for a total of 987 km of toll expressways in Malaysia, the longest of which is the North-South Expressway or NSE. Acting as the 'backbone' of the west coast of the peninsula, the NSE stretches from the Malaysian-Thai border in the north to the border with neighbouring Singapore in the south, linking several major cities and towns along the way. The North-South Expressway contributes to the country's economic development through the trade, social and tourism sectors. Presently, the highway is in good condition and well connected to every state, but some locations need urgent attention. The stability of slopes at these locations is of most concern, as any instability can endanger motorists. In this paper, two study locations have been analysed: Gua Tempurung (soil slope) and Jelapang (rock slope), which have clearly different characteristics. These locations pass through undulating terrain with steep slopes, where landslides are common and the probability of slope instability due to human activities in surrounding areas is high. A database of twelve (12) landslide conditioning factors was compiled: slope degree and slope aspect were extracted from IFSAR (interferometric synthetic aperture radar) data, while landuse, lithology and structural geology were constructed from interpretation of high-resolution satellite data from WorldView-2, QuickBird and Ikonos. All this information was analysed in a geographic information system (GIS) environment for landslide susceptibility mapping using a probabilistic frequency ratio model. In addition, information on the slopes such as inventories, condition assessments and maintenance records was assessed through the total expressway maintenance management system, better known as TEMAN. The above-mentioned system is used by PLUS as an asset management and decision support tool for maintenance activities along the highways as well as for

  20. The Stag Hunt Game: An Example of an Excel-Based Probabilistic Game

    Science.gov (United States)

    Bridge, Dave

    2016-01-01

    With so many role-playing simulations already in the political science education literature, the recent repeated calls for new games are both timely and appropriate. This article answers and extends those calls by advocating the creation of probabilistic games using Microsoft Excel. I introduce the example of the Stag Hunt Game--a short, effective,…

  1. Preliminary probabilistic prediction of ice/snow accretion on stay cables based on meteorological variables

    DEFF Research Database (Denmark)

    Roldsgaard, Joan Hee; Kiremidjian, A.; Georgakis, Christos T.

    The scope of the present paper is to present a framework for assessment of the probability of occurrence of ice/snow accretion on bridge cables. The framework utilizes Bayesian Probabilistic Networks and the methodology is illustrated with an example of the cable-stayed Øresund Bridge. The case...

  2. Dual channel rank-based intensity weighting for quantitative co-localization of microscopy images

    LENUS (Irish Health Repository)

    Singan, Vasanth R

    2011-10-21

    Background: Accurate quantitative co-localization is a key parameter in the context of understanding the spatial co-ordination of molecules, and therefore their function in cells. Existing co-localization algorithms consider either the presence of co-occurring pixels or correlations of intensity in regions of interest. Depending on the image source and the algorithm selected, the co-localization coefficients determined can be highly variable, and often inaccurate. Furthermore, the choice of whether co-occurrence or correlation is the best approach for quantifying co-localization remains controversial. Results: We have developed a novel algorithm to quantify co-localization that improves on and addresses the major shortcomings of existing co-localization measures. This algorithm uses a non-parametric ranking of pixel intensities in each channel, and the difference in ranks of co-localizing pixel positions in the two channels is used to weight the coefficient. This weighting is applied to co-occurring pixels, thereby efficiently combining both co-occurrence and correlation. Tests with synthetic data sets show that the algorithm is sensitive to both co-occurrence and correlation at varying levels of intensity. Analysis of biological data sets demonstrates that this new algorithm offers high sensitivity, and that it is capable of detecting subtle changes in co-localization, exemplified by studies on a well-characterized cargo protein that moves through the secretory pathway of cells. Conclusions: This algorithm provides a novel way to efficiently combine co-occurrence and correlation components in biological images, thereby generating an accurate measure of co-localization. This approach of rank weighting of intensities also eliminates the need for manual thresholding of the image, which is often a cause of error in co-localization quantification. We envisage that this tool will facilitate the quantitative analysis of a wide range of biological data sets.
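
    A rough sketch of the rank-weighting idea, not the authors' exact coefficient: pixels are ranked per channel, co-occurring pixels are selected, and each is weighted by how close its ranks are in the two channels.

```python
import numpy as np

def rank_weighted_colocalization(ch1, ch2):
    """Weight co-occurring pixels by how similar their intensity ranks are in
    the two channels (1 = identical ranks, 0 = maximally different ranks)."""
    r1 = np.argsort(np.argsort(ch1))  # non-parametric rank of each pixel
    r2 = np.argsort(np.argsort(ch2))
    weights = 1.0 - np.abs(r1 - r2) / (len(ch1) - 1)
    cooccur = (ch1 > 0) & (ch2 > 0)   # signal present in both channels
    return float(weights[cooccur].sum() / cooccur.sum())

ch_green = np.array([10.0, 20.0, 30.0, 40.0])
ch_red   = np.array([12.0, 18.0, 33.0, 41.0])  # same ordering of intensities
print(rank_weighted_colocalization(ch_green, ch_red))  # 1.0
```

    Because only rank differences enter the weight, no intensity threshold has to be chosen by hand, which mirrors the thresholding-free claim in the abstract.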

  3. A least square support vector machine-based approach for contingency classification and ranking in a large power system

    Directory of Open Access Journals (Sweden)

    Bhanu Pratap Soni

    2016-12-01

    This paper proposes an effective supervised learning approach for static security assessment of a large power system. The approach employs a least squares support vector machine (LS-SVM) to rank contingencies and predict the system severity level. The severity of a contingency is measured by two scalar performance indices (PIs): the line MVA performance index (PI_MVA) and the voltage-reactive power performance index (PI_VQ). The SVM works in two steps: in step I, both standard indices (PI_MVA and PI_VQ) are estimated under different operating scenarios; in step II, contingency ranking is carried out based on the values of the PIs. The effectiveness of the proposed methodology is demonstrated on the IEEE 39-bus (New England) system. The approach can be a beneficial, less time-consuming tool for accurate security assessment and contingency analysis at the energy management center.
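
    The line MVA performance index used for ranking is conventionally defined as PI_MVA = Σ_l (S_l / S_l^max)^(2n), summed over lines. A small sketch with invented post-contingency flows:

```python
def pi_mva(line_flows_mva, line_limits_mva, n=1):
    """Line MVA performance index: sum over lines of (S_l / S_l_max)^(2n).
    Larger values indicate a more heavily loaded post-contingency state."""
    return sum((s / s_max) ** (2 * n)
               for s, s_max in zip(line_flows_mva, line_limits_mva))

# Two hypothetical post-contingency states of a 3-line system (all limits 100 MVA):
pi_base   = pi_mva([80, 50, 60], [100, 100, 100])
pi_outage = pi_mva([120, 90, 60], [100, 100, 100])
print(pi_base, pi_outage)  # the outage case ranks as the more severe contingency
```

    The even exponent 2n penalizes overloads sharply (here the 120 MVA flow alone contributes 1.44), which is why the index separates severe contingencies from benign ones.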

  4. Ranking Theory and Conditional Reasoning.

    Science.gov (United States)

    Skovgaard-Olsen, Niels

    2016-05-01

    Ranking theory is a formal epistemology that has been developed in over 600 pages in Spohn's recent book The Laws of Belief, which aims to provide a normative account of the dynamics of beliefs that presents an alternative to current probabilistic approaches. It has long been received in the AI community, but it has not yet found application in experimental psychology. The purpose of this paper is to derive clear, quantitative predictions by exploiting a parallel between ranking theory and a statistical model called logistic regression. This approach is illustrated by the development of a model for the conditional inference task using Spohn's (2013) ranking theoretic approach to conditionals. Copyright © 2015 Cognitive Science Society, Inc.

  5. Hierarchical partial order ranking

    International Nuclear Information System (INIS)

    Carlsen, Lars

    2008-01-01

    Assessing the potential impact on environmental and human health from the production and use of chemicals, or from polluted sites, involves a multi-criteria evaluation scheme. A priori, several parameters must be addressed, e.g., production tonnage, specific release scenarios, and geographical and site-specific factors, in addition to various substance-dependent parameters. Further socio-economic factors may be taken into consideration. The number of parameters to be included may well appear prohibitive for developing a sensible model. The study introduces hierarchical partial order ranking (HPOR), which remedies this problem. In HPOR the original parameters are initially grouped based on their mutual connection, and a set of meta-descriptors is derived representing the rankings corresponding to the single groups of descriptors, respectively. A second partial order ranking is carried out based on the meta-descriptors, the final ranking being disclosed through average ranks. An illustrative example on the prioritisation of polluted sites is given. - Hierarchical partial order ranking of polluted sites has been developed for prioritization based on a large number of parameters
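
    The "average ranks" that disclose the final ranking can be sketched by brute force for a handful of sites: build the partial order from criterion-wise dominance, enumerate all linear extensions, and average each site's position. The sites and scores below are invented.

```python
from itertools import permutations

# Hypothetical sites scored on two criteria (higher = worse impact).
sites = {"A": (3, 2), "B": (1, 1), "C": (2, 3), "D": (1, 2)}

def dominates(x, y):
    """x is above y in the partial order if x >= y on every criterion."""
    return all(a >= b for a, b in zip(x, y)) and x != y

def consistent(order):
    """An order (worst first) is a linear extension if no later site dominates an earlier one."""
    return all(not dominates(sites[order[j]], sites[order[i]])
               for i in range(len(order)) for j in range(i + 1, len(order)))

# Average rank over all linear extensions (brute force; fine for a handful of sites).
extensions = [p for p in permutations(sites) if consistent(p)]
avg_rank = {s: sum(p.index(s) + 1 for p in extensions) / len(extensions) for s in sites}
print(avg_rank)  # rank 1 = highest-priority (worst) site
```

    Enumerating linear extensions is exponential, which is one motivation for the hierarchical grouping into meta-descriptors: each partial order stays small enough to handle.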

  6. An Efficient Normalized Rank Based SVM for Room Level Indoor WiFi Localization with Diverse Devices

    Directory of Open Access Journals (Sweden)

    Yasmine Rezgui

    2017-01-01

    This paper proposes an efficient and effective WiFi fingerprinting-based indoor localization algorithm, which uses the Received Signal Strength Indicator (RSSI) of WiFi signals. In practical harsh indoor environments, RSSI variation and hardware variance can significantly degrade the performance of fingerprinting-based localization methods. To address the problem of hardware variance and signal fluctuation in WiFi fingerprinting-based localization, we propose a novel normalized-rank-based Support Vector Machine classifier (NR-SVM). Moving from RSSI-value-based analysis to normalized-rank-transformation-based analysis, the principal features are prioritized and the dimensionalities of the signature vectors are taken into account. The proposed method has been tested using sixteen different devices in a shopping mall with 88 shops. The experimental results demonstrate its robustness, with no less than 98.75% correct estimation in 93.75% of the tested cases and a 100% correct rate in 56.25% of cases. In the experiments, the new method shows better performance than the KNN, Naïve Bayes, Random Forest, and Neural Network algorithms. Furthermore, we have compared the proposed approach with three popular calibration-free transformation-based methods: the difference method (DIFF), Signal Strength Difference (SSD), and the Hyperbolic Location Fingerprinting (HLF) based SVM. The results show that the NR-SVM outperforms these popular methods.
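
    The key trick, replacing raw RSSI values by normalized ranks so that device-dependent offsets cancel, can be sketched as follows (a simplified assumption about the transformation, not the paper's exact formulation):

```python
def normalized_rank(rssi):
    """Replace RSSI readings by normalized ranks (strongest AP -> 1.0, weakest -> 0.0),
    discarding the absolute dBm values that vary between devices."""
    order = sorted(range(len(rssi)), key=lambda i: rssi[i], reverse=True)
    ranks = [0.0] * len(rssi)
    for position, i in enumerate(order):
        ranks[i] = 1.0 - position / (len(rssi) - 1)
    return ranks

# Two devices at the same spot, one reporting a constant -10 dBm hardware offset:
device_a = [-40, -70, -55, -80]
device_b = [-50, -80, -65, -90]
print(normalized_rank(device_a) == normalized_rank(device_b))  # True
```

    Because any monotone, device-specific distortion of the RSSI scale leaves the ordering of access points unchanged, the rank signature is identical across hardware, which is what makes the subsequent SVM classification calibration-free.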

  7. A Fast, Open EEG Classification Framework Based on Feature Compression and Channel Ranking

    Directory of Open Access Journals (Sweden)

    Jiuqi Han

    2018-04-01

    Superior feature extraction, channel selection and classification methods are essential for designing electroencephalography (EEG) classification frameworks. However, the performance of most frameworks is limited by improper channel selection methods and overly specialized designs, leading to high computational complexity, non-convergent procedures and narrow expansibility. In this paper, to remedy these drawbacks, we propose a fast, open EEG classification framework centered on EEG feature compression, low-dimensional representation, and convergent iterative channel ranking. First, to reduce complexity, we use data clustering to compress the EEG features channel-wise, packing the high-dimensional EEG signal and endowing it with numerical signatures. Second, to provide easy access to alternative superior methods, we structurally represent each EEG trial as a feature vector with its corresponding numerical signature. Thus, the recorded signals of many trials shrink to a low-dimensional structural matrix compatible with most pattern recognition methods. Third, a series of effective iterative feature selection approaches with theoretical convergence is introduced to rank the EEG channels and remove redundant ones, further accelerating the EEG classification process and ensuring its stability. Finally, a classical linear discriminant analysis (LDA) model is employed to classify a single EEG trial with the selected channels. Experimental results on two real-world brain-computer interface (BCI) competition datasets demonstrate the promising performance of the proposed framework over state-of-the-art methods.

  8. Validation of a model for ranking aquaculture facilities for risk-based disease surveillance.

    Science.gov (United States)

    Diserens, Nicolas; Falzon, Laura Cristina; von Siebenthal, Beat; Schüpbach-Regula, Gertraud; Wahli, Thomas

    2017-09-15

    A semi-quantitative model for risk ranking of aquaculture facilities in Switzerland with regard to the introduction and spread of Viral Haemorrhagic Septicaemia (VHS) and Infectious Haematopoietic Necrosis (IHN) was developed in a previous study (Diserens et al., 2013). The objective of the present study was to validate this model using data collected during field visits to aquaculture sites in four Swiss cantons, compared with data collected through a questionnaire in the previous study. A discrepancy between the values obtained with the two different methods was found in 32.8% of the parameters, resulting in a statistically significant difference between the two data collection methods. Nevertheless, the model appeared suitable for the risk ranking of Swiss aquaculture facilities according to their risk of becoming infected with or spreading VHS and IHN, as the five facilities that tested positive for these diseases in the last ten years were ranked as medium or high risk. Moreover, because the seven fish farms that were infected with Infectious Pancreatic Necrosis (IPN) during the same period also belonged to the risk categories medium and high, the classification appeared to correlate with the occurrence of this third viral fish disease. Copyright © 2017 Elsevier B.V. All rights reserved.

  9. Tackling Information Asymmetry in Networks: A New Entropy-Based Ranking Index

    Science.gov (United States)

    Barucca, Paolo; Caldarelli, Guido; Squartini, Tiziano

    2018-06-01

    Information is a valuable asset in socio-economic systems, a significant part of which is entailed into the network of connections between agents. The different interlinkages patterns that agents establish may, in fact, lead to asymmetries in the knowledge of the network structure; since this entails a different ability of quantifying relevant, systemic properties (e.g. the risk of contagion in a network of liabilities), agents capable of providing a better estimation of (otherwise) inaccessible network properties ultimately have a competitive advantage. In this paper, we address the issue of quantifying the information asymmetry of nodes: to this aim, we define a novel index—InfoRank—intended to rank nodes according to their information content. In order to do so, each node's ego-network is enforced as a constraint of an entropy-maximization problem and the subsequent uncertainty reduction is used to quantify the node-specific accessible information. We then test the performance of our ranking procedure in terms of reconstruction accuracy and show that it outperforms other centrality measures in identifying the "most informative" nodes. Finally, we discuss the socio-economic implications of network information asymmetry.

  10. Probabilistic evaluation method for axial capacity of single pile based on pile test information. Saika shiken kekka wo koryoshita kuienchoku shijiryoku no kakuritsuronteki hyokaho

    Energy Technology Data Exchange (ETDEWEB)

    Ishii, K.; Suzuki, M. (Shimizu Construction Co. Ltd., Tokyo (Japan)); Nakatani, S. (Ministry of Construction Tokyo (Japan)); Matsui, K. (CTI Engineering Co. Ltd., Tokyo (Japan))

    1991-12-20

    To consider safety and economics in the design of piles, a rational evaluation of the estimation accuracy of the pile-capacity equation, together with a probabilistic evaluation method, is necessary. Therefore, data analysis based on the collection and summary of results from pile load tests is a powerful approach. In this study, the selection of parameters that cannot be obtained from the probabilistic model or from load tests, and the combination of statistical and experimental data using Bayesian probability theory, were examined. The features of this study are the use of the design pile-capacity equation based on the model for evaluating pile capacity, consideration of the intrinsic difference between statistical data and load test results using Bayesian probability theory, and quantitative examination of the applicability of the proposed method to the results of load tests. 24 refs., 5 figs., 7 tabs.
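
    A textbook conjugate-normal update illustrates the kind of Bayesian combination described, a prior from the statistical database refined by site load tests. All numbers are invented, variances are assumed known, and the paper's actual formulation differs.

```python
# Textbook conjugate-normal sketch (invented numbers): the bias factor of the
# design pile-capacity equation (measured/predicted) has a prior estimated
# from the statistical database and is updated with site load-test results.
# Observation and prior variances are assumed known for simplicity.

def update_normal(prior_mean, prior_var, observations, obs_var):
    """Posterior of a normal mean under a normal prior with known variances."""
    post_var = 1.0 / (1.0 / prior_var + len(observations) / obs_var)
    post_mean = post_var * (prior_mean / prior_var + sum(observations) / obs_var)
    return post_mean, post_var

prior_mean, prior_var = 1.0, 0.04   # database: unbiased on average, some scatter
load_tests = [1.15, 1.20]           # site tests: equation under-predicts here
post_mean, post_var = update_normal(prior_mean, prior_var, load_tests, obs_var=0.02)
print(round(post_mean, 3), round(post_var, 4))  # posterior pulled towards the tests
```

    The posterior variance is smaller than both the prior and the test scatter, which is the quantitative sense in which combining the two information sources improves the capacity estimate.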

  11. A Simple Model to Rank Shellfish Farming Areas Based on the Risk of Disease Introduction and Spread.

    Science.gov (United States)

    Thrush, M A; Pearce, F M; Gubbins, M J; Oidtmann, B C; Peeler, E J

    2017-08-01

    The European Union Council Directive 2006/88/EC requires that risk-based surveillance (RBS) for listed aquatic animal diseases is applied to all aquaculture production businesses. The principle behind this is the efficient use of resources directed towards high-risk farm categories, animal types and geographic areas. To achieve this requirement, fish and shellfish farms must be ranked according to their risk of disease introduction and spread. We present a method to risk rank shellfish farming areas based on the risk of disease introduction and spread and demonstrate how the approach was applied in 45 shellfish farming areas in England and Wales. Ten parameters were used to inform the risk model, which were grouped into four risk themes based on related pathways for transmission of pathogens: (i) live animal movement, (ii) transmission via water, (iii) short distance mechanical spread (birds) and (iv) long distance mechanical spread (vessels). Weights (informed by expert knowledge) were applied both to individual parameters and to risk themes for introduction and spread to reflect their relative importance. A spreadsheet model was developed to determine quantitative scores for the risk of pathogen introduction and risk of pathogen spread for each shellfish farming area. These scores were used to independently rank areas for risk of introduction and for risk of spread. Thresholds were set to establish risk categories (low, medium and high) for introduction and spread based on risk scores. Risk categories for introduction and spread for each area were combined to provide overall risk categories to inform a risk-based surveillance programme directed at the area level. Applying the combined risk category designation framework for risk of introduction and spread suggested by European Commission guidance for risk-based surveillance, 4, 10 and 31 areas were classified as high, medium and low risk, respectively. © 2016 Crown copyright.
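
    The weighted-score mechanics can be sketched with the four transmission themes from the abstract; the weights, scores and category thresholds below are invented, not the expert-elicited values of the study.

```python
# Hypothetical weighted-score sketch using the four transmission themes from
# the abstract; all weights, scores and thresholds below are invented.

THEME_WEIGHTS = {"live_movements": 0.4, "water": 0.3, "birds": 0.1, "vessels": 0.2}

def risk_score(theme_scores):
    """Weighted sum of per-theme scores, each scaled to 0-1."""
    return sum(THEME_WEIGHTS[t] * s for t, s in theme_scores.items())

def risk_category(score, low=0.3, high=0.6):
    return "low" if score < low else "medium" if score < high else "high"

area = {"live_movements": 0.9, "water": 0.5, "birds": 0.2, "vessels": 0.4}
score = risk_score(area)
print(round(score, 2), risk_category(score))
```

    In the study this is done twice per area, once for introduction and once for spread, and the two categories are combined to set the surveillance priority.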

  12. A Topology Evolution Model Based on Revised PageRank Algorithm and Node Importance for Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Xiaogang Qi

    2015-01-01

    Wireless sensor networks (WSNs) are classical self-organizing communication networks, and their topology evolution has become one of the attractive issues in this research field. Accordingly, the problem is divided into two subproblems: one is to design a new preferential attachment method, and the other is to analyze the dynamics of the network topology evolution. To solve the first subproblem, a revised PageRank algorithm, called Con-rank, is proposed to evaluate node importance based on node contraction, and then a novel preferential attachment is designed based on the node importance calculated by the proposed Con-rank algorithm. To solve the second one, we first analyze the network topology evolution dynamics theoretically and then simulate the evolution process. Theoretical analysis proves that the network topology evolution of our model follows a power-law distribution, and simulation results are consistent with the conclusions obtained from the theoretical analysis, simultaneously showing that our topology evolution model is superior to the classic BA model in average path length and clustering coefficient, and that the network topology is more robust and can tolerate random attacks.
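
    For reference, the baseline PageRank that Con-rank revises can be written as a short power iteration; the node-contraction revision itself is omitted here, and the toy graph is invented.

```python
def pagerank(adj, d=0.85, iters=100):
    """Plain power-iteration PageRank on an adjacency list."""
    n = len(adj)
    rank = [1.0 / n] * n
    for _ in range(iters):
        new = [(1.0 - d) / n] * n
        for u, neighbours in enumerate(adj):
            for v in neighbours:
                new[v] += d * rank[u] / len(neighbours)
        rank = new
    return rank

# Toy sensor graph: node 0 links to all others, which link only back to node 0,
# so node 0 is the most important node and would attract new attachments.
adj = [[1, 2, 3], [0], [0], [0]]
rank = pagerank(adj)
print([round(r, 3) for r in rank])
```

    In the evolution model, a newly added sensor would attach preferentially to nodes with high importance scores of this kind, producing the power-law degree distribution the abstract proves.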

  13. A rank-based algorithm of differential expression analysis for small cell line data with statistical control.

    Science.gov (United States)

    Li, Xiangyu; Cai, Hao; Wang, Xianlong; Ao, Lu; Guo, You; He, Jun; Gu, Yunyan; Qi, Lishuang; Guan, Qingzhou; Lin, Xu; Guo, Zheng

    2017-10-13

    To detect differentially expressed genes (DEGs) in small-scale cell line experiments, usually with only two or three technical replicates for each state, the commonly used statistical methods such as significance analysis of microarrays (SAM), limma and RankProd (RP) lack statistical power, while the fold change method lacks any statistical control. In this study, we demonstrated that the within-sample relative expression orderings (REOs) of gene pairs were highly stable among technical replicates of a cell line but often widely disrupted after certain treatments such as gene knockdown, gene transfection and drug treatment. Based on this finding, we customized the RankComp algorithm, previously designed for individualized differential expression analysis through REO comparison, to identify DEGs with certain statistical control for small-scale cell line data. In both simulated and real data, the new algorithm, named CellComp, exhibited high precision with much higher sensitivity than the original RankComp, SAM, limma and RP methods. Therefore, CellComp provides an efficient tool for analyzing small-scale cell line data. © The Author 2017. Published by Oxford University Press.
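
    The REO idea reduces to comparing the ordering of every gene pair before and after treatment; a pair whose ordering flips is evidence of differential expression. A toy sketch with invented expression values (the real algorithm adds the statistical control):

```python
from itertools import combinations

def disrupted_pairs(control, treated):
    """Count gene pairs whose within-sample relative ordering (REO) flips."""
    flips = total = 0
    for a, b in combinations(control, 2):
        total += 1
        if (control[a] > control[b]) != (treated[a] > treated[b]):
            flips += 1
    return flips, total

control = {"g1": 5.0, "g2": 3.0, "g3": 8.0, "g4": 1.0}
treated = {"g1": 5.0, "g2": 9.0, "g3": 8.0, "g4": 1.0}  # only g2 is perturbed
print(disrupted_pairs(control, treated))  # (2, 6)
```

    Because only orderings within a sample are compared, the approach needs no between-replicate normalization, which is what makes it viable with just two or three technical replicates.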

  14. Using a consensus approach based on the conservation of inter-residue contacts to rank CAPRI models

    KAUST Repository

    Vangone, Anna

    2013-10-17

    Herein we propose the use of a consensus approach, CONSRANK, for ranking CAPRI models. CONSRANK relies on the conservation of inter-residue contacts in the analyzed decoys ensemble. Models are ranked according to their ability to match the most frequently observed contacts. We applied CONSRANK to 19 CAPRI protein-protein targets, covering a wide range of prediction difficulty and involved in a variety of biological functions. CONSRANK results are consistently good, both in terms of native-like (NL) solutions ranked in the top positions and of values of the Area Under the receiver operating characteristic Curve (AUC). For targets having a percentage of NL solutions above 3%, an excellent performance is found, with AUC values approaching 1. For the difficult target T46, having only 3.4% NL solutions, the number of NL solutions in the top 5 and 10 ranked positions is enriched by a factor of 30, and the AUC value is as high as 0.997. AUC values below 0.8 are only found for targets featuring a percentage of NL solutions within 1.1%. Remarkably, a false consensus emerges only in one case, T42, which happens to be an artificial protein, whose assembly details remain uncertain, based on controversial experimental data. We also show that CONSRANK still performs very well on a limited number of models, provided that more than 1 NL solution is included in the ensemble, thus extending its applicability to cases where few dozens of models are available. © 2013 Wiley Periodicals, Inc.
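    The consensus idea can be sketched in a few lines: count how often each inter-residue contact occurs across the decoy ensemble, then score each model by the average ensemble frequency of its own contacts. The contact identifiers and models below are hypothetical, and real CONSRANK operates on much larger ensembles.

    ```python
    from collections import Counter

    def consrank(models):
        """Rank models by the average frequency, across the whole ensemble,
        of the inter-residue contacts each model contains."""
        freq = Counter(c for contacts in models.values() for c in set(contacts))
        n = len(models)
        score = {
            name: sum(freq[c] for c in set(contacts)) / (n * max(len(set(contacts)), 1))
            for name, contacts in models.items()
        }
        return sorted(score, key=score.get, reverse=True)

    # Toy decoys: contacts shared by many models push a model up the ranking.
    models = {
        "m1": {("A10", "B33"), ("A11", "B35")},
        "m2": {("A10", "B33"), ("A11", "B35"), ("A99", "B1")},
        "m3": {("A99", "B99")},
    }
    print(consrank(models))
    ```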

  15. Ranking of sabotage/tampering avoidance technology alternatives

    International Nuclear Information System (INIS)

    Andrews, W.B.; Tabatabai, A.S.; Powers, T.B.

    1986-01-01

    Pacific Northwest Laboratory conducted a study to evaluate alternatives to the design and operation of nuclear power plants, emphasizing a reduction of their vulnerability to sabotage. Estimates of core melt accident frequency during normal operations and from sabotage/tampering events were used to rank the alternatives. Core melt frequency for normal operations was estimated using sensitivity analysis of results of probabilistic risk assessments. Core melt frequency for sabotage/tampering was estimated by developing a model based on probabilistic risk analyses, historic data, engineering judgment, and safeguards analyses of plant locations where core melt events could be initiated. Results indicate the most effective alternatives focus on large areas of the plant, increase safety system redundancy, and reduce reliance on single locations for mitigation of transients. Less effective options focus on specific areas of the plant, reduce reliance on some plant areas for safe shutdown, and focus on less vulnerable targets.

  16. Ranking of sabotage/tampering avoidance technology alternatives

    Energy Technology Data Exchange (ETDEWEB)

    Andrews, W.B.; Tabatabai, A.S.; Powers, T.B.; Daling, P.M.; Fecht, B.A.; Gore, B.F.; Overcast, T.D.; Rankin, W.R.; Schreiber, R.E.; Tawil, J.J.

    1986-01-01

    Pacific Northwest Laboratory conducted a study to evaluate alternatives to the design and operation of nuclear power plants, emphasizing a reduction of their vulnerability to sabotage. Estimates of core melt accident frequency during normal operations and from sabotage/tampering events were used to rank the alternatives. Core melt frequency for normal operations was estimated using sensitivity analysis of results of probabilistic risk assessments. Core melt frequency for sabotage/tampering was estimated by developing a model based on probabilistic risk analyses, historic data, engineering judgment, and safeguards analyses of plant locations where core melt events could be initiated. Results indicate the most effective alternatives focus on large areas of the plant, increase safety system redundancy, and reduce reliance on single locations for mitigation of transients. Less effective options focus on specific areas of the plant, reduce reliance on some plant areas for safe shutdown, and focus on less vulnerable targets.

  17. Predictive control for stochastic systems based on multi-layer probabilistic sets

    Directory of Open Access Journals (Sweden)

    Huaqing LIANG

    2016-04-01

    Full Text Available For a class of discrete-time stochastic systems with Markov jump features, the state-feedback predictive control problem under probabilistic constraints on the input variables is investigated. On the basis of the concept and method of multi-layer probabilistic sets, a predictive controller design algorithm with soft constraints of different probabilities is presented. Under the control of the multi-step feedback laws, the system state moves to different ellipses with specified probabilities. The stability of the system is guaranteed, the feasible region of the control problem is enlarged, and the system performance is improved. Finally, a simulation example is given to prove the effectiveness of the proposed method.

  18. Combination of Evidence with Different Weighting Factors: A Novel Probabilistic-Based Dissimilarity Measure Approach

    Directory of Open Access Journals (Sweden)

    Mengmeng Ma

    2015-01-01

    Full Text Available To solve the invalidation problem of Dempster-Shafer theory of evidence (DS) under high conflict in multisensor data fusion, this paper presents a novel combination approach for conflicting evidence with different weighting factors, using a new probabilistic dissimilarity measure. Firstly, an improved probabilistic transformation function is proposed to map basic belief assignments (BBAs) to probabilities. Then, a new dissimilarity measure integrating fuzzy nearness and an introduced correlation coefficient is proposed to characterize not only the difference between BBAs but also the divergence degree of the hypotheses that two BBAs support. Finally, the weighting factors used to reassign conflicts on BBAs are developed, and Dempster's rule is chosen to combine the discounted sources. Simple numerical examples are employed to demonstrate the merit of the proposed method. Analysis and comparison of the results show that the new combination approach can effectively solve the problem of conflict management with better convergence performance and robustness.
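    The final combination step, Dempster's rule applied to discounted sources, can be sketched as follows. The paper's dissimilarity-based weighting factors are not reproduced here; a fixed Shafer discount factor stands in for them, and the BBAs are invented.

    ```python
    from itertools import product

    def discount(bba, alpha, frame):
        """Shafer discounting: scale masses by alpha and move the remainder
        to the whole frame of discernment."""
        out = {k: alpha * v for k, v in bba.items()}
        out[frame] = out.get(frame, 0.0) + (1.0 - alpha)
        return out

    def dempster(m1, m2):
        """Dempster's rule of combination for BBAs keyed by frozenset hypotheses.
        Returns the normalized combined BBA and the conflict mass k."""
        combined, conflict = {}, 0.0
        for (a, va), (b, vb) in product(m1.items(), m2.items()):
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + va * vb
            else:
                conflict += va * vb
        return {k: v / (1.0 - conflict) for k, v in combined.items()}, conflict

    FRAME = frozenset({"A", "B", "C"})
    # Two highly conflicting sources (hypothetical masses).
    m1 = {frozenset({"A"}): 0.9, frozenset({"B"}): 0.1}
    m2 = {frozenset({"B"}): 0.9, frozenset({"A"}): 0.1}
    fused, k = dempster(discount(m1, 0.8, FRAME), discount(m2, 0.8, FRAME))
    print(round(k, 3), {tuple(sorted(s)): round(v, 3) for s, v in fused.items()})
    ```

    Discounting before combination is what keeps the highly conflicting sources from producing a degenerate result; the paper replaces the fixed discount with weights derived from its dissimilarity measure.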

  19. Probabilistic Modeling of Seismic Risk Based Design for a Dual System Structure

    OpenAIRE

    Sidi, Indra Djati

    2017-01-01

    The dual system structure concept has gained popularity in the construction of high-rise buildings over the last decades. Meanwhile, earthquake engineering design provisions for buildings have moved from the uniform hazard concept to the uniform risk concept upon recognizing the uncertainties involved in the earthquake resistance of concrete structures. In this study, a probabilistic model for the evaluation of such risk is proposed for a dual system structure consisting of shear walls or cor...

  20. PRIMAL: Page Rank-Based Indoor Mapping and Localization Using Gene-Sequenced Unlabeled WLAN Received Signal Strength

    Directory of Open Access Journals (Sweden)

    Mu Zhou

    2015-09-01

    Full Text Available Due to the wide deployment of wireless local area networks (WLAN), received signal strength (RSS)-based indoor WLAN localization has attracted considerable attention in both academia and industry. In this paper, we propose a novel page rank-based indoor mapping and localization (PRIMAL) by using the gene-sequenced unlabeled WLAN RSS for simultaneous localization and mapping (SLAM). Specifically, first of all, based on the observation of the motion patterns of the people in the target environment, we use the Allen logic to construct the mobility graph to characterize the connectivity among different areas of interest. Second, the concept of gene sequencing is utilized to assemble the sporadically-collected RSS sequences into a signal graph based on the transition relations among different RSS sequences. Third, we apply the graph drawing approach to exhibit both the mobility graph and signal graph in a more readable manner. Finally, the page rank (PR) algorithm is proposed to construct the mapping from the signal graph into the mobility graph. The experimental results show that the proposed approach achieves satisfactory localization accuracy and meanwhile avoids the intensive time and labor cost involved in the conventional location fingerprinting-based indoor WLAN localization.

  1. PRIMAL: Page Rank-Based Indoor Mapping and Localization Using Gene-Sequenced Unlabeled WLAN Received Signal Strength.

    Science.gov (United States)

    Zhou, Mu; Zhang, Qiao; Xu, Kunjie; Tian, Zengshan; Wang, Yanmeng; He, Wei

    2015-09-25

    Due to the wide deployment of wireless local area networks (WLAN), received signal strength (RSS)-based indoor WLAN localization has attracted considerable attention in both academia and industry. In this paper, we propose a novel page rank-based indoor mapping and localization (PRIMAL) by using the gene-sequenced unlabeled WLAN RSS for simultaneous localization and mapping (SLAM). Specifically, first of all, based on the observation of the motion patterns of the people in the target environment, we use the Allen logic to construct the mobility graph to characterize the connectivity among different areas of interest. Second, the concept of gene sequencing is utilized to assemble the sporadically-collected RSS sequences into a signal graph based on the transition relations among different RSS sequences. Third, we apply the graph drawing approach to exhibit both the mobility graph and signal graph in a more readable manner. Finally, the page rank (PR) algorithm is proposed to construct the mapping from the signal graph into the mobility graph. The experimental results show that the proposed approach achieves satisfactory localization accuracy and meanwhile avoids the intensive time and labor cost involved in the conventional location fingerprinting-based indoor WLAN localization.

  2. Biomechanical rupture risk assessment of abdominal aortic aneurysms based on a novel probabilistic rupture risk index.

    Science.gov (United States)

    Polzer, Stanislav; Gasser, T Christian

    2015-12-06

    A rupture risk assessment is critical to the clinical treatment of abdominal aortic aneurysm (AAA) patients. The biomechanical AAA rupture risk assessment quantitatively integrates many known AAA rupture risk factors but the variability of risk predictions due to model input uncertainties remains a challenging limitation. This study derives a probabilistic rupture risk index (PRRI). Specifically, the uncertainties in AAA wall thickness and wall strength were considered, and wall stress was predicted with a state-of-the-art deterministic biomechanical model. The discriminative power of PRRI was tested in a diameter-matched cohort of ruptured (n = 7) and intact (n = 7) AAAs and compared to alternative risk assessment methods. Computed PRRI at 1.5 mean arterial pressure was significantly (p = 0.041) higher in ruptured AAAs (20.21 (s.d. 14.15)%) than in intact AAAs (3.71 (s.d. 5.77)%). PRRI showed a high sensitivity and specificity (discriminative power of 0.837) to discriminate between ruptured and intact AAA cases. The underlying statistical representation of stochastic data of wall thickness, wall strength and peak wall stress had only negligible effects on PRRI computations. Uncertainties in AAA wall stress predictions, the wide range of reported wall strength and the stochastic nature of failure motivate a probabilistic rupture risk assessment. Advanced AAA biomechanical modelling paired with a probabilistic rupture index definition as known from engineering risk assessment seems to be superior to a purely deterministic approach. © 2015 The Author(s).
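    In its simplest form, a probabilistic rupture risk index of this kind reduces to the probability that sampled wall stress exceeds sampled wall strength. The Monte Carlo sketch below assumes normal distributions for both; all numbers are hypothetical and not taken from the study.

    ```python
    import random

    def rupture_risk_index(stress_mu, stress_sd, strength_mu, strength_sd,
                           n=100_000, seed=1):
        """Monte Carlo sketch of a probabilistic rupture risk index:
        the probability that sampled wall stress exceeds sampled wall strength."""
        rng = random.Random(seed)
        hits = sum(
            rng.gauss(stress_mu, stress_sd) > rng.gauss(strength_mu, strength_sd)
            for _ in range(n)
        )
        return hits / n

    # Hypothetical stress/strength statistics (kPa), not from the study.
    risky = rupture_risk_index(500, 80, 600, 60)   # high stress, wide scatter
    safe = rupture_risk_index(300, 50, 600, 60)    # comfortable safety margin
    print(round(risky, 3), round(safe, 3))
    ```

    The point of the probabilistic index is visible even in this toy: two aneurysms with the same mean safety margin can have very different rupture probabilities once the scatter in stress and strength is sampled.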

  3. Probabilistic linguistics

    NARCIS (Netherlands)

    Bod, R.; Heine, B.; Narrog, H.

    2010-01-01

    Probabilistic linguistics takes all linguistic evidence as positive evidence and lets statistics decide. It allows for accurate modelling of gradient phenomena in production and perception, and suggests that rule-like behaviour is no more than a side effect of maximizing probability. This chapter

  4. Probabilistic Design

    DEFF Research Database (Denmark)

    Sørensen, John Dalsgaard; Burcharth, H. F.

    This chapter describes how partial safety factors can be used in design of vertical wall breakwaters and an example of a code format is presented. The partial safety factors are calibrated on a probabilistic basis. The code calibration process used to calibrate some of the partial safety factors...

  5. Rank-based testing of equal survivorship based on cross-sectional survival data with or without prospective follow-up.

    Science.gov (United States)

    Chan, Kwun Chuen Gary; Qin, Jing

    2015-10-01

    Existing linear rank statistics cannot be applied to cross-sectional survival data without follow-up since all subjects are essentially censored. However, partial survival information is available from backward recurrence times and is frequently collected in health surveys without prospective follow-up. Under length-biased sampling, a class of linear rank statistics is proposed based only on backward recurrence times without any prospective follow-up. When follow-up data are available, the proposed rank statistic and a conventional rank statistic that utilizes follow-up information from the same sample are shown to be asymptotically independent. We discuss four ways to combine these two statistics when follow-up is present. Simulations show that all combined statistics have substantially improved power compared with conventional rank statistics, and a Mantel-Haenszel test performed the best among the proposed statistics. The method is applied to a cross-sectional health survey without follow-up and a study of Alzheimer's disease with prospective follow-up. © The Author 2015. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  6. Probabilistic evaluation of design S-N curve and reliability assessment of ASME code-based evaluation

    International Nuclear Information System (INIS)

    Zhao Yongxiang

    1999-01-01

    A probabilistic evaluation approach for the design S-N curve and a reliability assessment approach for the ASME code-based evaluation are presented on the basis of Langer S-N model-based P-S-N curves. The P-S-N curves are estimated by a so-called general maximum likelihood method, which can deal with virtual stress amplitude versus crack-initiation life data characterized by two random variables. Investigation of a set of virtual stress amplitude versus crack-initiation life (S-N) data for a 1Cr18Ni9Ti austenitic stainless steel welded joint reveals that the P-S-N curves give a good prediction of the scatter regularity of the S-N data. The probabilistic evaluation of the design S-N curve with 0.9999 survival probability accounts for various uncertainties, beyond the scatter of the S-N data, to an appropriate extent. The ASME code-based evaluation with a reduction factor of 20 on the mean life is much more conservative than that with a reduction factor of 2 on the stress amplitude. At a virtual stress amplitude of 666.61 MPa the latter evaluation is equivalent to a survival probability of 0.999522, and at 2092.18 MPa to a survival probability of 0.9999999995. This means that the evaluation may be non-conservative at low loading levels and, in contrast, too conservative at high loading levels. The cause is that the reduction factors are constants and cannot take into account the general observation that the scatter of the life data increases as the loading level decreases. This indicates that it is necessary to apply the probabilistic approach to the evaluation of the design S-N curve.
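    The contrast between a survival-probability-based design life and a fixed reduction factor can be illustrated with a small sketch. Log-normal life scatter is assumed in place of the paper's Langer-model P-S-N fitting, and the fatigue-life data are invented.

    ```python
    from statistics import NormalDist, mean, stdev
    from math import log, exp

    def design_life(lives, survival):
        """Design life at the given survival probability, assuming log-normal
        scatter of fatigue life (a simplification of P-S-N curve fitting)."""
        logs = [log(x) for x in lives]
        z = NormalDist().inv_cdf(1.0 - survival)   # lower-tail normal quantile
        return exp(mean(logs) + z * stdev(logs))

    # Hypothetical cycles to crack initiation at one stress level.
    lives = [2.0e4, 3.1e4, 2.6e4, 4.0e4, 2.2e4]
    n_9999 = design_life(lives, 0.9999)      # probabilistic 0.9999-survival life
    n_factor20 = mean(lives) / 20.0          # fixed factor of 20 on the mean life
    print(round(n_9999), round(n_factor20))
    ```

    With this (modest) scatter the fixed factor of 20 is far below the 0.9999-survival life, echoing the abstract's point that constant reduction factors can be over- or under-conservative depending on how the scatter varies with loading level.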

  7. On the evaluation of the efficacy of a smart damper: a new equivalent energy-based probabilistic approach

    International Nuclear Information System (INIS)

    Aly, A M; Christenson, R E

    2008-01-01

    Smart damping technology has been proposed to protect civil structures from dynamic loads. Each application of smart damping control provides varying levels of performance relative to active and passive control strategies. Currently, researchers compare the relative efficacy of smart damping control to active and passive strategies by running numerous simulations. These simulations can require significant computation time and resources. Because of this, it is desirable to develop an approach to assess the applicability of smart damping technology which requires less computation time. This paper discusses and verifies a probabilistic approach to determine the efficacy of smart damping technology based on clipped optimal state feedback control theory

  8. Using a consensus approach based on the conservation of inter-residue contacts to rank CAPRI models

    KAUST Repository

    Vangone, Anna; Cavallo, Luigi; Oliva, Romina M.

    2013-01-01

    Herein we propose the use of a consensus approach, CONSRANK, for ranking CAPRI models. CONSRANK relies on the conservation of inter-residue contacts in the analyzed decoys ensemble. Models are ranked according to their ability to match the most

  9. Personalized PageRank Clustering: A graph clustering algorithm based on random walks

    Science.gov (United States)

    A. Tabrizi, Shayan; Shakery, Azadeh; Asadpour, Masoud; Abbasi, Maziar; Tavallaie, Mohammad Ali

    2013-11-01

    Graph clustering has been an essential part in many methods and thus its accuracy has a significant effect on many applications. In addition, exponential growth of real-world graphs such as social networks, biological networks and electrical circuits demands clustering algorithms with nearly-linear time and space complexity. In this paper we propose Personalized PageRank Clustering (PPC) that employs the inherent cluster exploratory property of random walks to reveal the clusters of a given graph. We combine random walks and modularity to precisely and efficiently reveal the clusters of a graph. PPC is a top-down algorithm so it can reveal inherent clusters of a graph more accurately than other nearly-linear approaches that are mainly bottom-up. It also gives a hierarchy of clusters that is useful in many applications. PPC has a linear time and space complexity and has been superior to most of the available clustering algorithms on many datasets. Furthermore, its top-down approach makes it a flexible solution for clustering problems with different requirements.
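    Modularity, the quantity PPC combines with random walks, can be computed for a candidate partition as follows. This is a standard Newman modularity sketch, not the PPC algorithm itself; the toy graph is invented.

    ```python
    def modularity(adj, communities):
        """Newman modularity Q = sum_c [ e_c/m - (d_c/(2m))^2 ] for an
        undirected graph given as adjacency lists, where e_c is the number of
        intra-community edges and d_c the total degree of community c."""
        m = sum(len(nbrs) for nbrs in adj) / 2.0
        q = 0.0
        for members in communities:
            members = set(members)
            # each intra-community edge is seen from both endpoints, hence / 2
            intra = sum(1 for u in members for v in adj[u] if v in members) / 2.0
            degree = sum(len(adj[u]) for u in members)
            q += intra / m - (degree / (2.0 * m)) ** 2
        return q

    # Two triangles (nodes 0-2 and 3-5) joined by a single bridge edge.
    adj = [[1, 2], [0, 2], [0, 1, 3], [2, 4, 5], [3, 5], [3, 4]]
    good = modularity(adj, [[0, 1, 2], [3, 4, 5]])   # the natural partition
    bad = modularity(adj, [[0, 3], [1, 4], [2, 5]])  # a cross-cutting partition
    print(round(good, 3), round(bad, 3))
    ```

    A top-down algorithm like PPC can use such a score to decide whether splitting a cluster further improves the partition.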

  10. Compression and Combining Based on Channel Shortening and Rank Reduction Technique for Cooperative Wireless Sensor Networks

    KAUST Repository

    Ahmed, Qasim Zeeshan

    2013-12-18

    This paper investigates and compares the performance of wireless sensor networks where sensors operate on the principles of cooperative communications. We consider a scenario where the source transmits signals to the destination with the help of L sensors. As the destination has the capacity of processing only U out of these L signals, the strongest U signals are selected while the remaining (L − U) signals are suppressed. A preprocessing block similar to channel-shortening is proposed in this contribution. However, this preprocessing block employs a rank-reduction technique instead of channel-shortening. By employing this preprocessing, we are able to decrease the computational complexity of the system without affecting the bit error rate (BER) performance. From our simulations, it can be shown that these schemes outperform the channel-shortening schemes in terms of computational complexity. In addition, the proposed schemes have a superior BER performance as compared to channel-shortening schemes when sensors employ fixed gain amplification. However, for sensors which employ variable gain amplification, a tradeoff exists in terms of BER performance between the channel-shortening schemes and the proposed schemes: the proposed schemes outperform the channel-shortening scheme at lower signal-to-noise ratios.
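    The select-strongest-U step can be illustrated at the SNR level: under maximum-ratio combining, the effective SNR is the sum of the selected branch SNRs. This is a simplified sketch with invented branch SNRs; the paper's rank-reduction preprocessing is not reproduced.

    ```python
    def selection_mrc_snr(branch_snrs, U):
        """Effective SNR when the destination selects the U strongest of L
        relayed branches and maximum-ratio combines them (MRC adds branch SNRs)."""
        return sum(sorted(branch_snrs, reverse=True)[:U])

    # Hypothetical linear-scale branch SNRs for L = 5 relayed copies.
    snrs = [4.0, 0.5, 2.5, 1.0, 3.0]
    print(selection_mrc_snr(snrs, 3))   # combine only the 3 strongest branches
    ```

    Selecting U < L branches sacrifices a little combined SNR in exchange for processing only U signals, which is the complexity/performance trade the abstract describes.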

  11. Reservoir characterization based on tracer response and rank analysis of production and injection rates

    Energy Technology Data Exchange (ETDEWEB)

    Refunjol, B.T. [Lagoven, S.A., Pdvsa (Venezuela); Lake, L.W. [Univ. of Texas, Austin, TX (United States)

    1997-08-01

    Quantification of the spatial distribution of properties is important for many reservoir-engineering applications. But, before applying any reservoir-characterization technique, the type of problem to be tackled and the information available should be analyzed. This is important because difficulties arise in reservoirs where production records are the only information for analysis. This paper presents the results of a practical technique to determine preferential flow trends in a reservoir. The technique is a combination of reservoir geology, tracer data, and Spearman rank correlation coefficient analysis. The Spearman analysis, in particular, will prove to be important because it appears to be insightful and uses injection/production data that are prevalent in circumstances where other data are nonexistent. The technique is applied to the North Buck Draw field, Campbell County, Wyoming. This work provides guidelines to assess information about reservoir continuity in interwell regions from widely available measurements of production and injection rates at existing wells. The information gained from the application of this technique can contribute to both the daily reservoir management and the future design, control, and interpretation of subsequent projects in the reservoir, without the need for additional data.
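    The Spearman rank correlation at the core of the technique is simply the Pearson correlation of the rank vectors. A self-contained sketch with invented injector/producer rate-change data:

    ```python
    def ranks(xs):
        """Average ranks (tied values share the mean of their rank positions)."""
        order = sorted(range(len(xs)), key=lambda i: xs[i])
        r = [0.0] * len(xs)
        i = 0
        while i < len(order):
            j = i
            while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
                j += 1
            avg = (i + j) / 2.0 + 1.0
            for k in range(i, j + 1):
                r[order[k]] = avg
            i = j + 1
        return r

    def spearman(x, y):
        """Spearman rank correlation: Pearson correlation of the rank vectors."""
        rx, ry = ranks(x), ranks(y)
        n = len(x)
        mx, my = sum(rx) / n, sum(ry) / n
        cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
        vx = sum((a - mx) ** 2 for a in rx)
        vy = sum((b - my) ** 2 for b in ry)
        return cov / (vx * vy) ** 0.5

    # Hypothetical rate changes: injector slugs vs. a producer's responses.
    injection = [10, 40, 30, 55, 25]
    production = [12, 48, 29, 60, 22]
    print(round(spearman(injection, production), 3))
    ```

    A coefficient near +1 for an injector/producer pair suggests a preferential flow path between the two wells, which is the interwell-continuity signal the technique extracts from routinely available rate data.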

  12. Neophilia Ranking of Scientific Journals.

    Science.gov (United States)

    Packalen, Mikko; Bhattacharya, Jay

    2017-01-01

    The ranking of scientific journals is important because of the signal it sends to scientists about what is considered most vital for scientific progress. Existing ranking systems focus on measuring the influence of a scientific paper (citations); these rankings do not reward journals for publishing innovative work that builds on new ideas. We propose an alternative ranking based on the proclivity of journals to publish papers that build on new ideas, and we implement this ranking via a text-based analysis of all published biomedical papers dating back to 1946. In addition, we compare our neophilia ranking to citation-based (impact factor) rankings; this comparison shows that the two ranking approaches are distinct. Prior theoretical work suggests an active role for our neophilia index in science policy. Absent an explicit incentive to pursue novel science, scientists underinvest in innovative work because of a coordination problem: for work on a new idea to flourish, many scientists must decide to adopt it in their work. Rankings that are based purely on influence thus do not provide sufficient incentives for publishing innovative work. By contrast, adoption of the neophilia index as part of journal-ranking procedures by funding agencies and university administrators would provide an explicit incentive for journals to publish innovative work and thus help solve the coordination problem by increasing scientists' incentives to pursue innovative work.

  13. Probabilistic Logic and Probabilistic Networks

    NARCIS (Netherlands)

    Haenni, R.; Romeijn, J.-W.; Wheeler, G.; Williamson, J.

    2009-01-01

    While in principle probabilistic logics might be applied to solve a range of problems, in practice they are rarely applied at present. This is perhaps because they seem disparate, complicated, and computationally intractable. However, we shall argue in this programmatic paper that several approaches

  14. Confluence Reduction for Probabilistic Systems (extended version)

    NARCIS (Netherlands)

    Timmer, Mark; Stoelinga, Mariëlle Ida Antoinette; van de Pol, Jan Cornelis

    2010-01-01

    This paper presents a novel technique for state space reduction of probabilistic specifications, based on a newly developed notion of confluence for probabilistic automata. We prove that this reduction preserves branching probabilistic bisimulation and can be applied on-the-fly. To support the

  15. Characteristics of the evolution of cooperation by the probabilistic peer-punishment based on the difference of payoff

    International Nuclear Information System (INIS)

    Ohdaira, Tetsushi

    2017-01-01

    Highlights: • The probabilistic peer-punishment based on the difference of payoff is introduced. • The characteristics of the evolution of cooperation are studied. • Those characteristics make a significant contribution to knowledge. - Abstract: Of the two types of costly punishment, peer-punishment in particular is considered to decrease the average payoff of all players, as pool-punishment does, and to facilitate antisocial punishment as a result of natural selection. To solve those problems, the author has proposed probabilistic peer-punishment based on the difference of payoff. Under limited conditions, the proposed peer-punishment has shown positive effects on the evolution of cooperation and has increased the average payoff of all players. Building on those findings, this study exhibits the characteristics of the evolution of cooperation under the proposed peer-punishment. Those characteristics make a significant contribution to knowledge: for the evolution of cooperation, a limited number of players should cause severe damage to defectors at a large expense of their payoff when connections between them are sparse, whereas a greater number of players should share the responsibility of punishing defectors at a relatively small expense of their payoff when connections between them are dense.
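    One plausible shape for a payoff-difference-based punishment probability is sketched below. The exact functional form used in the paper is not reproduced here, so both the formula and the numbers are assumptions for illustration only.

    ```python
    def punish_probability(punisher_payoff, target_payoff, max_diff):
        """Hypothetical probabilistic peer-punishment rule: punish with a
        probability that grows with how far the target's payoff exceeds the
        punisher's, clipped to [0, 1]."""
        diff = target_payoff - punisher_payoff
        return max(0.0, min(1.0, diff / max_diff))

    # A defector out-earning its neighbour is punished often;
    # one earning less is never punished.
    print(punish_probability(1.0, 3.0, 4.0), punish_probability(3.0, 1.0, 4.0))
    ```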

  16. A probabilistic Poisson-based model accounts for an extensive set of absolute auditory threshold measurements.

    Science.gov (United States)

    Heil, Peter; Matysiak, Artur; Neubauer, Heinrich

    2017-09-01

    Thresholds for detecting sounds in quiet decrease with increasing sound duration in every species studied. The neural mechanisms underlying this trade-off, often referred to as temporal integration, are not fully understood. Here, we probe the human auditory system with a large set of tone stimuli differing in duration, shape of the temporal amplitude envelope, duration of silent gaps between bursts, and frequency. Duration was varied by varying the plateau duration of plateau-burst (PB) stimuli, the duration of the onsets and offsets of onset-offset (OO) stimuli, and the number of identical bursts of multiple-burst (MB) stimuli. Absolute thresholds for a large number of ears (>230) were measured using a 3-interval-3-alternative forced choice (3I-3AFC) procedure. Thresholds decreased with increasing sound duration in a manner that depended on the temporal envelope. Most commonly, thresholds for MB stimuli were highest followed by thresholds for OO and PB stimuli of corresponding durations. Differences in the thresholds for MB and OO stimuli and in the thresholds for MB and PB stimuli, however, varied widely across ears, were negative in some ears, and were tightly correlated. We show that the variation and correlation of MB-OO and MB-PB threshold differences are linked to threshold microstructure, which affects the relative detectability of the sidebands of the MB stimuli and affects estimates of the bandwidth of auditory filters. We also found that thresholds for MB stimuli increased with increasing duration of the silent gaps between bursts. We propose a new model and show that it accurately accounts for our results and does so considerably better than a leaky-integrator-of-intensity model and a probabilistic model proposed by others. Our model is based on the assumption that sensory events are generated by a Poisson point process with a low rate in the absence of stimulation and higher, time-varying rates in the presence of stimulation. A subject in a 3I-3AFC
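    The Poisson assumption at the end of the abstract implies the basic temporal-integration trend: at a fixed driven event rate, the probability of observing at least one sensory event grows with stimulus duration. A minimal sketch (the rate and durations are invented, and the model's time-varying rates and 3I-3AFC decision stage are not reproduced):

    ```python
    from math import exp

    def p_at_least_one(rate, duration):
        """Homogeneous Poisson point process: probability of at least one
        sensory event within a stimulus of the given duration
        (rate in events/s, duration in s)."""
        return 1.0 - exp(-rate * duration)

    # Longer sounds at the same driven rate yield more detections,
    # the duration/threshold trade-off described above.
    probs = [round(p_at_least_one(20.0, d), 3) for d in (0.01, 0.05, 0.2)]
    print(probs)
    ```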

  17. Stochastic Simulation and Forecast of Hydrologic Time Series Based on Probabilistic Chaos Expansion

    Science.gov (United States)

    Li, Z.; Ghaith, M.

    2017-12-01

    Hydrological processes are characterized by many complex features, such as nonlinearity, dynamics and uncertainty. How to quantify and address such complexities and uncertainties has been a challenging task for water engineers and managers for decades. To support robust uncertainty analysis, an innovative approach for the stochastic simulation and forecast of hydrologic time series is developed in this study. Probabilistic Chaos Expansions (PCEs) are established through probabilistic collocation to tackle uncertainties associated with the parameters of traditional hydrological models. The uncertainties are quantified in model outputs as Hermite polynomials with regard to standard normal random variables. Sequentially, multivariate analysis techniques are used to analyze the complex nonlinear relationships between meteorological inputs (e.g., temperature, precipitation, evapotranspiration, etc.) and the coefficients of the Hermite polynomials. With the established relationships between model inputs and PCE coefficients, forecasts of hydrologic time series can be generated and the uncertainties in the future time series can be further tackled. The proposed approach is demonstrated using a case study in China and is compared to a traditional stochastic simulation technique, the Markov-Chain Monte-Carlo (MCMC) method. Results show that the proposed approach can serve as a reliable proxy to complicated hydrological models. It can provide probabilistic forecasting in a more computationally efficient manner, compared to the traditional MCMC method. This work provides technical support for addressing uncertainties associated with hydrological modeling and for enhancing the reliability of hydrological modeling results. Applications of the developed approach can be extended to many other complicated geophysical and environmental modeling systems to support the associated uncertainty quantification and risk analysis.
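    A one-dimensional expansion of the kind described evaluates as a sum of coefficients times probabilists' Hermite polynomials in a standard-normal variable. A minimal sketch with invented coefficients (real PCEs are multivariate, with coefficients fitted by probabilistic collocation):

    ```python
    def hermite_he(k, x):
        """Probabilists' Hermite polynomial He_k(x) via the recurrence
        He_{n+1}(x) = x*He_n(x) - n*He_{n-1}(x)."""
        h0, h1 = 1.0, x
        if k == 0:
            return h0
        for n in range(1, k):
            h0, h1 = h1, x * h1 - n * h0
        return h1

    def pce_eval(coeffs, xi):
        """Evaluate a one-dimensional polynomial chaos expansion
        y(xi) = sum_k c_k He_k(xi) for a standard-normal variable xi."""
        return sum(c * hermite_he(k, xi) for k, c in enumerate(coeffs))

    coeffs = [2.0, 0.5, 0.1]        # hypothetical PCE coefficients
    print(pce_eval(coeffs, 0.0))    # He0 = 1, He1 = 0, He2 = -1
    ```

    Because the higher-order Hermite terms have zero mean under the standard normal, the leading coefficient is the output mean, which is one reason the expansion makes uncertainty statistics cheap to extract.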

  18. Abstract probabilistic CNOT gate model based on double encoding: study of the errors and physical realizability

    Science.gov (United States)

    Gueddana, Amor; Attia, Moez; Chatta, Rihab

    2015-03-01

    In this work, we study the error sources standing behind the non-perfect linear optical quantum components composing a non-deterministic quantum CNOT gate model, which performs the CNOT function with a success probability of 4/27 and uses a double encoding technique to represent photonic qubits at the control and the target. We generalize this model to an abstract probabilistic CNOT version and determine the realizability limits depending on a realistic range of the errors. Finally, we discuss physical constraints allowing the implementation of the Asymmetric Partially Polarizing Beam Splitter (APPBS), which is at the heart of correctly realizing the CNOT function.

  19. Binding mode prediction and MD/MMPBSA-based free energy ranking for agonists of REV-ERBα/NCoR.

    Science.gov (United States)

    Westermaier, Yvonne; Ruiz-Carmona, Sergio; Theret, Isabelle; Perron-Sierra, Françoise; Poissonnet, Guillaume; Dacquet, Catherine; Boutin, Jean A; Ducrot, Pierre; Barril, Xavier

    2017-08-01

    The knowledge of the free energy of binding of small molecules to a macromolecular target is crucial in drug design as is the ability to predict the functional consequences of binding. We highlight how a molecular dynamics (MD)-based approach can be used to predict the free energy of small molecules, and to provide priorities for the synthesis and the validation via in vitro tests. Here, we study the dynamics and energetics of the nuclear receptor REV-ERBα with its co-repressor NCoR and 35 novel agonists. Our in silico approach combines molecular docking, molecular dynamics (MD), solvent-accessible surface area (SASA) and molecular mechanics Poisson-Boltzmann surface area (MMPBSA) calculations. While docking yielded initial hints on the binding modes, their stability was assessed by MD. The SASA calculations revealed that the presence of the ligand led to a higher exposure of hydrophobic REV-ERB residues for NCoR recruitment. MMPBSA was very successful in ranking ligands by potency in a retrospective and prospective manner. Particularly, the prospective MMPBSA ranking-based validations for four compounds, three predicted to be active and one weakly active, were confirmed experimentally.

  20. A rank-based approach for correcting systematic biases in spatial disaggregation of coarse-scale climate simulations

    Science.gov (United States)

    Nahar, Jannatun; Johnson, Fiona; Sharma, Ashish

    2017-07-01

    Use of General Circulation Model (GCM) precipitation and evapotranspiration sequences for hydrologic modelling can result in unrealistic simulations due to the coarse scales at which GCMs operate and the systematic biases they contain. The Bias Correction Spatial Disaggregation (BCSD) method is a popular statistical downscaling and bias correction method developed to address this issue. The advantage of BCSD is its ability to reduce biases in the distribution of precipitation totals at the GCM scale and then introduce more realistic variability at finer scales than simpler spatial interpolation schemes. Although BCSD corrects biases at the GCM scale before disaggregation, biases are re-introduced at finer spatial scales by the assumptions made in the spatial disaggregation process. Our study focuses on this limitation of BCSD and proposes a rank-based approach that aims to reduce the spatial disaggregation bias, especially for low and high precipitation extremes. BCSD requires the specification of a multiplicative bias correction anomaly field that represents the ratio of the fine-scale precipitation to the disaggregated precipitation. It is shown that there is significant temporal variation in the anomalies, which is masked when a mean anomaly field is used. This can be improved by modelling the anomalies in rank space. Results from the application of the rank-BCSD procedure improve the match between the distributions of observed and downscaled precipitation at the fine scale compared to the original BCSD approach. Further improvements in the distribution are identified when a scaling correction to preserve mass in the disaggregation process is implemented. An assessment of the approach using a single GCM over Australia shows clear advantages, especially in the simulation of particularly low and high downscaled precipitation amounts.
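
    The rank-space idea described above can be sketched in a few lines: instead of scaling every coarse value by one mean multiplicative anomaly, each coarse value is scaled by the fine/coarse ratio at its own rank. The precipitation values and helper functions below are illustrative only, not the paper's code.

```python
# Illustrative sketch (not the authors' code): contrast a single mean
# multiplicative anomaly with rank-wise anomalies, as in the rank-BCSD idea
# of modelling the fine/coarse precipitation ratio in rank space.

def mean_anomaly_correction(coarse, fine):
    """Scale every coarse value by one mean fine/coarse ratio."""
    ratio = sum(f / c for f, c in zip(fine, coarse)) / len(coarse)
    return [c * ratio for c in coarse]

def rank_anomaly_correction(coarse, fine):
    """Scale each coarse value by the fine/coarse ratio at its own rank."""
    coarse_sorted = sorted(coarse)
    fine_sorted = sorted(fine)
    ratio_by_rank = [f / c for f, c in zip(fine_sorted, coarse_sorted)]
    # rank of each coarse value within its own series (assumes distinct values)
    rank_of = {v: r for r, v in enumerate(coarse_sorted)}
    return [c * ratio_by_rank[rank_of[c]] for c in coarse]

# Hypothetical monthly precipitation (mm): the fine scale has stronger extremes.
coarse = [10.0, 40.0, 80.0, 160.0]
fine = [5.0, 40.0, 90.0, 220.0]

print(mean_anomaly_correction(coarse, fine))  # mean anomaly misses the extremes
print(rank_anomaly_correction(coarse, fine))  # rank-wise anomalies recover them
```

    In this toy case the mean fine/coarse ratio happens to be 1, so the mean-anomaly correction leaves the coarse extremes untouched, while the rank-wise correction reproduces the fine-scale extremes exactly.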

  1. Development of a Risk-Based Probabilistic Performance-Assessment Method for Long-Term Cover Systems - 2nd Edition

    International Nuclear Information System (INIS)

    HO, CLIFFORD K.; ARNOLD, BILL W.; COCHRAN, JOHN R.; TAIRA, RANDAL Y.

    2002-01-01

    A probabilistic, risk-based performance-assessment methodology has been developed to assist designers, regulators, and stakeholders in the selection, design, and monitoring of long-term covers for contaminated subsurface sites. This report describes the method, the software tools that were developed, and an example that illustrates the probabilistic performance-assessment method using a repository site in Monticello, Utah. At the Monticello site, a long-term cover system is being used to isolate long-lived uranium mill tailings from the biosphere. Computer models were developed to simulate relevant features, events, and processes that include water flux through the cover, source-term release, vadose-zone transport, saturated-zone transport, gas transport, and exposure pathways. The component models were then integrated into a total-system performance-assessment model, and uncertainty distributions of important input parameters were constructed and sampled in a stochastic Monte Carlo analysis. Multiple realizations were simulated using the integrated model to produce cumulative distribution functions of the performance metrics, which were used to assess cover performance for both present- and long-term future conditions. Performance metrics for this study included the water percolation reaching the uranium mill tailings, radon gas flux at the surface, groundwater concentrations, and dose. Results from uncertainty analyses, sensitivity analyses, and alternative design comparisons are presented for each of the performance metrics. The benefits from this methodology include a quantification of uncertainty, the identification of parameters most important to performance (to prioritize site characterization and monitoring activities), and the ability to compare alternative designs using probabilistic evaluations of performance (for cost savings)
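
    The stochastic Monte Carlo step described above can be illustrated with a toy model: sample uncertain inputs, push each realization through a simple performance function, and read risk off the resulting empirical cumulative distribution function. The flux model and input distributions below are hypothetical stand-ins, not the Monticello site models.

```python
# Minimal sketch of a Monte Carlo performance assessment: a toy percolation
# model with sampled parameter uncertainty, summarized as an empirical CDF.
import bisect
import random

random.seed(42)

def percolation_flux(infiltration, cover_conductivity):
    # toy model: flux limited by the smaller of infiltration and conductivity
    return min(infiltration, cover_conductivity)

# Sample uncertain inputs (hypothetical lognormal distributions, mm/yr).
n = 10_000
metrics = sorted(
    percolation_flux(random.lognormvariate(1.0, 0.5),
                     random.lognormvariate(0.5, 0.8))
    for _ in range(n)
)

def cdf(x):
    """Empirical P(metric <= x) over the sorted realizations."""
    return bisect.bisect_right(metrics, x) / n

print(f"P(flux <= 3 mm/yr) ~ {cdf(3.0):.2f}")
```

    A real total-system model would chain several such component models (source term, vadose-zone and saturated-zone transport, exposure) before computing the CDFs of each performance metric.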

  2. Ranking nodes in growing networks: When PageRank fails.

    Science.gov (United States)

    Mariani, Manuel Sebastian; Medo, Matúš; Zhang, Yi-Cheng

    2015-11-10

    PageRank is arguably the most popular ranking algorithm, applied in real systems ranging from information networks to biological and infrastructure networks. Despite its outstanding popularity and broad use in different areas of science, the relation between the algorithm's efficacy and the properties of the network on which it acts has not yet been fully understood. We study here PageRank's performance on a network model supported by real data, and show that realistic temporal effects make PageRank fail to identify the most valuable nodes for a broad range of model parameters. Results on real data are in qualitative agreement with our model-based findings. This failure of PageRank reveals that the static approach to information filtering is inappropriate for a broad class of growing systems, and suggests that time-dependent algorithms based on the temporal linking patterns of these systems are needed to better rank the nodes.
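
    For concreteness, here is a standard PageRank power iteration (the generic algorithm, not the paper's temporal growth model) on a toy directed graph:

```python
# Generic PageRank by power iteration; graph and parameters are illustrative.

def pagerank(out_links, d=0.85, tol=1e-10):
    """out_links: {node: [nodes it links to]}; returns the PageRank vector."""
    nodes = list(out_links)
    n = len(nodes)
    pr = {v: 1.0 / n for v in nodes}
    while True:
        # dangling nodes (no out-links) spread their mass uniformly
        dangling = sum(pr[v] for v in nodes if not out_links[v])
        new = {v: (1 - d) / n + d * dangling / n for v in nodes}
        for v in nodes:
            for w in out_links[v]:
                new[w] += d * pr[v] / len(out_links[v])
        if max(abs(new[v] - pr[v]) for v in nodes) < tol:
            return new
        pr = new

# Node A receives links from every other node; D only links out.
graph = {"A": ["B"], "B": ["A"], "C": ["A"], "D": ["A"]}
pr = pagerank(graph)
print(sorted(pr, key=pr.get, reverse=True))  # "A" ranks first
```

    The paper's point is that when such a graph grows over time, the static scores computed this way systematically favor old nodes regardless of intrinsic quality.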

  3. SRMDAP: SimRank and Density-Based Clustering Recommender Model for miRNA-Disease Association Prediction

    Directory of Open Access Journals (Sweden)

    Xiaoying Li

    2018-01-01

    Full Text Available Aberrant expression of microRNAs (miRNAs) can be exploited for the diagnosis, prognosis, and treatment of human diseases. Identifying the relationships between miRNAs and human diseases is important for further investigating the pathogenesis of human diseases. However, experimental identification of the associations between diseases and miRNAs is time-consuming and expensive, and computational methods are efficient approaches to determine such potential associations. This paper presents a new computational method based on a SimRank and density-based clustering recommender model for miRNA-disease association prediction (SRMDAP). An AUC of 0.8838 under leave-one-out cross-validation, together with case studies, suggests the excellent performance of SRMDAP in predicting miRNA-disease associations. SRMDAP can also predict diseases without any known related miRNAs and miRNAs without any known related diseases.
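
    The SimRank measure underlying SRMDAP follows the Jeh-Widom recurrence: two nodes are similar if their in-neighbors are similar. A minimal iterative sketch on a made-up toy graph (the constant C, iteration count, and graph are illustrative, not SRMDAP's configuration):

```python
# Basic SimRank iteration (Jeh & Widom recurrence) on a toy graph.

def simrank(in_nbrs, C=0.8, iters=20):
    """in_nbrs: {node: [in-neighbors]}; returns pairwise similarity scores."""
    nodes = list(in_nbrs)
    sim = {(a, b): 1.0 if a == b else 0.0 for a in nodes for b in nodes}
    for _ in range(iters):
        new = {}
        for a in nodes:
            for b in nodes:
                if a == b:
                    new[(a, b)] = 1.0
                elif in_nbrs[a] and in_nbrs[b]:
                    s = sum(sim[(i, j)] for i in in_nbrs[a] for j in in_nbrs[b])
                    new[(a, b)] = C * s / (len(in_nbrs[a]) * len(in_nbrs[b]))
                else:
                    new[(a, b)] = 0.0
        sim = new
    return sim

# Toy graph: miRNAs m1 and m2 share their only in-neighbor, disease d.
in_nbrs = {"d": [], "m1": ["d"], "m2": ["d"]}
sim = simrank(in_nbrs)
print(round(sim[("m1", "m2")], 3))  # equals C: they share their sole in-neighbor
```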

  4. Transitive probabilistic CLIR models.

    NARCIS (Netherlands)

    Kraaij, W.; de Jong, Franciska M.G.

    2004-01-01

    Transitive translation could be a useful technique to enlarge the number of supported language pairs for a cross-language information retrieval (CLIR) system in a cost-effective manner. The paper describes several setups for transitive translation based on probabilistic translation models. The

  5. Comparative Probabilistic Assessment of Occupational Pesticide Exposures Based on Regulatory Assessments

    Science.gov (United States)

    Pouzou, Jane G.; Cullen, Alison C.; Yost, Michael G.; Kissel, John C.; Fenske, Richard A.

    2018-01-01

    Implementation of probabilistic analyses in exposure assessment can provide valuable insight into the risks of those at the extremes of population distributions, including more vulnerable or sensitive subgroups. Incorporation of these analyses into current regulatory methods for occupational pesticide exposure is enabled by the exposure data sets and associated data currently used in the risk assessment approach of the Environmental Protection Agency (EPA). Monte Carlo simulations were performed on exposure measurements from the Agricultural Handler Exposure Database and the Pesticide Handler Exposure Database along with data from the Exposure Factors Handbook and other sources to calculate exposure rates for three different neurotoxic compounds (azinphos methyl, acetamiprid, emamectin benzoate) across four pesticide-handling scenarios. Probabilistic estimates of doses were compared with the no observable effect levels used in the EPA occupational risk assessments. Some percentage of workers were predicted to exceed the level of concern for all three compounds: 54% for azinphos methyl, 5% for acetamiprid, and 20% for emamectin benzoate. This finding has implications for pesticide risk assessment and offers an alternative procedure that may be more protective of those at the extremes of exposure than the current approach. PMID:29105804

  6. Comparative Probabilistic Assessment of Occupational Pesticide Exposures Based on Regulatory Assessments.

    Science.gov (United States)

    Pouzou, Jane G; Cullen, Alison C; Yost, Michael G; Kissel, John C; Fenske, Richard A

    2017-11-06

    Implementation of probabilistic analyses in exposure assessment can provide valuable insight into the risks of those at the extremes of population distributions, including more vulnerable or sensitive subgroups. Incorporation of these analyses into current regulatory methods for occupational pesticide exposure is enabled by the exposure data sets and associated data currently used in the risk assessment approach of the Environmental Protection Agency (EPA). Monte Carlo simulations were performed on exposure measurements from the Agricultural Handler Exposure Database and the Pesticide Handler Exposure Database along with data from the Exposure Factors Handbook and other sources to calculate exposure rates for three different neurotoxic compounds (azinphos methyl, acetamiprid, emamectin benzoate) across four pesticide-handling scenarios. Probabilistic estimates of doses were compared with the no observable effect levels used in the EPA occupational risk assessments. Some percentage of workers were predicted to exceed the level of concern for all three compounds: 54% for azinphos methyl, 5% for acetamiprid, and 20% for emamectin benzoate. This finding has implications for pesticide risk assessment and offers an alternative procedure that may be more protective of those at the extremes of exposure than the current approach. © 2017 Society for Risk Analysis.
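
    The exceedance calculation described in these two records can be sketched schematically: sample a dose model many times and count the fraction of simulated workers above a no-observable-effect level. All distributions and the threshold below are invented for illustration; they are not the AHED/PHED data or the EPA values.

```python
# Schematic Monte Carlo exceedance fraction with hypothetical distributions.
import random

random.seed(7)

NOEL = 1.0          # hypothetical no-observable-effect level (mg/kg/day)
n = 20_000

def simulated_dose():
    # dose = unit exposure x amount handled / body weight (all hypothetical)
    unit_exposure = random.lognormvariate(-1.0, 1.0)   # mg per kg handled
    amount_handled = random.uniform(50, 200)           # kg handled per day
    body_weight = random.gauss(80, 10)                 # kg
    return unit_exposure * amount_handled / body_weight

doses = [simulated_dose() for _ in range(n)]
frac_exceeding = sum(d > NOEL for d in doses) / n
print(f"{100 * frac_exceeding:.1f}% of simulated workers exceed the NOEL")
```

    The study's percentages (54%, 5%, 20%) are exactly this kind of tail statistic, computed from measured exposure databases rather than assumed distributions.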

  7. Groundwater contaminant plume ranking

    International Nuclear Information System (INIS)

    1988-08-01

    Contaminant plumes at Uranium Mill Tailings Remedial Action (UMTRA) Project sites were ranked to assist in Subpart B (i.e., restoration requirements of 40 CFR Part 192) compliance strategies for each site, to prioritize aquifer restoration, and to budget future requests and allocations. The rankings roughly estimate hazards to the environment and human health, and thus assist in determining for which sites cleanup, if appropriate, will provide the greatest benefits for funds available. The rankings are based on scores obtained using the US Department of Energy's (DOE) Modified Hazard Ranking System (MHRS). The MHRS and HRS consider and score three hazard modes for a site: migration, fire and explosion, and direct contact. The migration hazard mode score reflects the potential for harm to humans or the environment from migration of a hazardous substance off a site by groundwater, surface water, and air; it is a composite of separate scores for each of these routes. For ranking the contaminant plumes at UMTRA Project sites, it was assumed that each site had been remediated in compliance with the EPA standards and that relict contaminant plumes were present. Therefore, only the groundwater route was scored, and the surface water and air routes were not considered. Section 2.0 of this document describes the assumptions and procedures used to score the groundwater route, and Section 3.0 provides the resulting scores for each site. 40 tabs

  8. Quantitative Analysis of Mixtures of Monoprotic Acids Applying Modified Model-Based Rank Annihilation Factor Analysis on Variation Matrices of Spectrophotometric Acid-Base Titrations

    Directory of Open Access Journals (Sweden)

    Ebrahim Ghorbani-Kalhor

    2015-04-01

    Full Text Available In the current work, a new version of rank annihilation factor analysis was developed to circumvent the rank deficiency problem in multivariate data measurements. Simultaneous determination of the dissociation constant and concentration of monoprotic acids was performed by applying model-based rank annihilation factor analysis to variation matrices of spectrophotometric acid-base titration data. A variation matrix is obtained by subtracting the first row of the data matrix from all rows of the main data matrix. The method uses variation matrices instead of the multivariate spectrophotometric acid-base titration matrices to circumvent the rank deficiency problem in the rank quantitation step. The applicability of this approach was first evaluated on simulated data; then binary mixtures of ascorbic and sorbic acids were investigated as model compounds. Finally, the proposed method was successfully applied to resolving ascorbic and sorbic acid in a real orange juice sample. Unique results were thus achieved by applying rank annihilation factor analysis to the variation matrix, taking advantage of a hard-soft model combination without any difficulty in rank determination.
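
    The variation-matrix trick can be demonstrated on synthetic data: in a closed two-component system the data matrix has rank 2, and subtracting the first row from every row leaves a rank-1 variation matrix, which simplifies the rank quantitation step. This sketch illustrates only that rank-reduction mechanism, on invented "spectra" rather than titration data:

```python
# Synthetic demonstration of the variation matrix reducing matrix rank.
import numpy as np

rng = np.random.default_rng(0)
s1, s2 = rng.random(6), rng.random(6)    # pure-component "spectra"
c1 = np.linspace(0.2, 0.8, 5)            # concentration profiles that
c2 = 1.0 - c1                            # sum to 1 (closed system)
D = np.outer(c1, s1) + np.outer(c2, s2)  # measured data matrix (Beer's law)

V = D - D[0]                             # variation matrix: subtract first row

def numerical_rank(M, tol=1e-10):
    """Count singular values above a tolerance."""
    return int(np.sum(np.linalg.svd(M, compute_uv=False) > tol))

print(numerical_rank(D), numerical_rank(V))  # closure: D is rank 2, V is rank 1
```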

  9. Ranking economic history journals

    DEFF Research Database (Denmark)

    Di Vaio, Gianfranco; Weisdorf, Jacob Louis

    2010-01-01

    This study ranks - for the first time - 12 international academic journals that have economic history as their main topic. The ranking is based on data collected for the year 2007. Journals are ranked using standard citation analysis, adjusting for the age, size and self-citation of journals. We also compare the leading economic history journals with the leading journals in economics in order to measure the influence of economic history on economics, and vice versa. With a few exceptions, our results confirm the general view of which economic history journals are the most influential for economic history, and show that, although economic history is quite independent from economics as a whole, knowledge exchange between the two fields is indeed going on.

  10. Ranking Economic History Journals

    DEFF Research Database (Denmark)

    Di Vaio, Gianfranco; Weisdorf, Jacob Louis

    This study ranks - for the first time - 12 international academic journals that have economic history as their main topic. The ranking is based on data collected for the year 2007. Journals are ranked using standard citation analysis, adjusting for the age, size and self-citation of journals. We also compare the leading economic history journals with the leading journals in economics in order to measure the influence of economic history on economics, and vice versa. With a few exceptions, our results confirm the general view of which economic history journals are the most influential for economic history, and show that, although economic history is quite independent from economics as a whole, knowledge exchange between the two fields is indeed going on.

  11. Super-resolution reconstruction of 4D-CT lung data via patch-based low-rank matrix reconstruction

    Science.gov (United States)

    Fang, Shiting; Wang, Huafeng; Liu, Yueliang; Zhang, Minghui; Yang, Wei; Feng, Qianjin; Chen, Wufan; Zhang, Yu

    2017-10-01

    Lung 4D computed tomography (4D-CT), a time-resolved CT data acquisition, plays an important role in explicitly including respiratory motion in treatment planning and delivery. However, the radiation dose is usually reduced at the expense of inter-slice spatial resolution to minimize the radiation-related health risk. Therefore, resolution enhancement along the superior-inferior direction is necessary. In this paper, a super-resolution (SR) reconstruction method based on patch low-rank matrix reconstruction is proposed to improve the resolution of lung 4D-CT images. Specifically, a low-rank matrix related to every patch is constructed by using a patch searching strategy. Thereafter, singular value shrinkage is employed to recover the high-resolution patch under the constraints of the image degradation model. The output high-resolution patches are finally assembled into the entire image. The method is extensively evaluated using two public data sets. Quantitative analysis shows that the proposed algorithm decreases the root mean square error by 9.7%-33.4% and the edge width by 11.4%-24.3%, relative to linear interpolation, back projection (BP) and Zhang et al's algorithm. In all experiments, the proposed method outperforms various interpolation methods, as well as BP and Zhang et al's method, indicating the effectiveness and competitiveness of the proposed algorithm.
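
    The singular value shrinkage step at the core of the recovery can be sketched generically: soft-threshold the SVD spectrum of a noisy low-rank matrix. This is the standard shrinkage operator on a synthetic "patch stack", not the authors' full SR pipeline:

```python
# Generic singular-value soft-thresholding on a synthetic noisy rank-1 matrix.
import numpy as np

def sv_shrink(M, tau):
    """Soft-threshold the singular values of M by tau."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

rng = np.random.default_rng(1)
low_rank = np.outer(rng.random(20), rng.random(15))   # rank-1 ground truth
noisy = low_rank + 0.01 * rng.standard_normal((20, 15))

denoised = sv_shrink(noisy, tau=0.05)
err_before = np.linalg.norm(noisy - low_rank)
err_after = np.linalg.norm(denoised - low_rank)
print(err_after < err_before)  # shrinkage moves the estimate toward low rank
```

    Shrinkage suppresses the small singular values contributed by noise while keeping most of the dominant low-rank signal, which is why the shrunk estimate is closer to the ground truth.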

  12. 14 CFR 1214.1105 - Final ranking.

    Science.gov (United States)

    2010-01-01

    ... 14 Aeronautics and Space 5 2010-01-01 2010-01-01 false Final ranking. 1214.1105 Section 1214.1105... Recruitment and Selection Program § 1214.1105 Final ranking. Final rankings will be based on a combination of... preference will be included in this final ranking in accordance with applicable regulations. ...

  13. Modelling software failures of digital I and C in probabilistic safety analyses based on the TELEPERM registered XS operating experience

    International Nuclear Information System (INIS)

    Jockenhoevel-Barttfeld, Mariana; Taurines Andre; Baeckstroem, Ola; Holmberg, Jan-Erik; Porthin, Markus; Tyrvaeinen, Tero

    2015-01-01

    Digital instrumentation and control (I and C) systems appear as upgrades in existing nuclear power plants (NPPs) and in new plant designs. In order to assess the impact of digital system failures, quantifiable reliability models are needed, along with data for digital systems that are compatible with existing probabilistic safety assessments (PSA). The paper focuses on the modelling of software failures of digital I and C systems in probabilistic assessments. An analysis of software faults, failures and effects is presented to derive relevant failure modes of system and application software for the PSA. The estimation of software failure probabilities is based on an analysis of the operating experience of TELEPERM registered XS (TXS). For the assessment of application software failures, the analysis combines the use of TXS operating experience at the application-function level with conservative engineering judgements. Probabilities of failure to actuate on demand and of spurious actuation are estimated for a typical reactor protection application. Moreover, the paper gives guidelines for the modelling of software failures in the PSA. The strategy presented in this paper is generic and can be applied to different software platforms and their applications.

  14. Probabilistic Electricity Price Forecasting Models by Aggregation of Competitive Predictors

    Directory of Open Access Journals (Sweden)

    Claudio Monteiro

    2018-04-01

    Full Text Available This article presents original probabilistic price forecasting meta-models (PPFMCP models), built by aggregation of competitive predictors, for day-ahead hourly probabilistic price forecasting. The best twenty predictors of the EEM2016 EPF competition are used to create ensembles of hourly spot price forecasts. For each hour, the parameter values of the probability density function (PDF) of a Beta distribution for the output variable (hourly price) can be directly obtained from the expected value and variance associated with the ensemble for that hour, using three aggregation strategies of predictor forecasts corresponding to three PPFMCP models. A Reliability Indicator (RI) and a Loss function Indicator (LI) are also introduced to give a measure of the uncertainty of probabilistic price forecasts. The three PPFMCP models were satisfactorily applied to the real-world case study of the Iberian Electricity Market (MIBEL). Results showed that PPFMCP model 2, which weights predictors according to their daily ranks, was the best probabilistic meta-model in terms of mean absolute error as well as of RI and LI. PPFMCP model 1, which averages the predictor forecasts, was the second best meta-model. PPFMCP models allow risk decisions based on the price to be evaluated.
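
    Obtaining Beta PDF parameters from an ensemble's expected value and variance is a standard method-of-moments calculation; a sketch with a hypothetical ensemble of prices rescaled to [0, 1]:

```python
# Method-of-moments fit of a Beta distribution to an ensemble mean and variance.

def beta_from_moments(mean, var):
    """Alpha, beta of a Beta distribution on [0, 1] matching mean and variance."""
    assert 0 < mean < 1 and 0 < var < mean * (1 - mean)
    common = mean * (1 - mean) / var - 1
    return mean * common, (1 - mean) * common

# Hypothetical ensemble of normalized hourly price forecasts.
ensemble = [0.42, 0.45, 0.40, 0.48, 0.44, 0.41]
m = sum(ensemble) / len(ensemble)
v = sum((x - m) ** 2 for x in ensemble) / len(ensemble)
alpha, beta = beta_from_moments(m, v)
print(alpha, beta)  # the fitted Beta mean alpha/(alpha+beta) equals m
```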

  15. Memristive Probabilistic Computing

    KAUST Repository

    Alahmadi, Hamzah

    2017-10-01

    In the era of the Internet of Things and Big Data, unconventional techniques are rising to accommodate the large size of data and the resource constraints. New computing structures are advancing based on non-volatile memory technologies and different processing paradigms. Additionally, the intrinsic resiliency of current applications leads to the development of creative techniques in computation. In those applications, approximate computing provides a perfect fit to optimize energy efficiency while compromising on accuracy. In this work, we build probabilistic adders based on stochastic memristors. The probabilistic adders are analyzed with respect to the stochastic behavior of the underlying memristors. Multiple adder implementations are investigated and compared. The memristive probabilistic adder provides a different approach from the typical approximate CMOS adders. Furthermore, it allows for large area savings and design flexibility in trading off performance against power. While reaching a performance level similar to approximate CMOS adders, the memristive adder achieves 60% power savings. An image-compression application is investigated using the memristive probabilistic adders, exploring the performance and energy trade-off.

  16. Ranking procedure based on mechanical, durability and thermal behavior of mortars with incorporation of phase change materials

    Directory of Open Access Journals (Sweden)

    Cunha, S.

    2015-12-01

    Full Text Available Nowadays, considering the high variety of construction products, adequate material selection, based on their properties and function, becomes increasingly important. In this research, a ranking procedure developed by Czarnecki and Lukowski is applied to mortars with incorporation of phase change materials (PCM). The ranking procedure transforms experimental results for several properties into one numerical value, so products can be classified according to their individual properties or an optimized combination of different properties. The main purpose of this study was the ranking of mortars incorporating different contents of PCM and based on different binders: aerial lime, hydraulic lime, gypsum and cement. For each binder, three mortars were developed and tested: a reference mortar, a mortar with incorporation of 40% of PCM, and a mortar with incorporation of 40% of PCM and 1% of fibers. Results show that the incorporation of PCM in mortars changes their global performance.

  17. Ranking coastal flood protection designs from engineered to nature-based

    NARCIS (Netherlands)

    Nat, van der A.; Vellinga, P.; Leemans, R.; Slobbe, van E.

    2016-01-01

    Compared to traditional hard engineering, nature-based flood protection can be more cost effective, use up less raw materials, increase system adaptability and present opportunities to improve ecosystem functioning. However, high flood safety standards cause the need to combine nature-based

  18. A GIS based screening tool for locating and ranking of suitable stormwater harvesting sites in urban areas.

    Science.gov (United States)

    Inamdar, P M; Cook, S; Sharma, A K; Corby, N; O'Connor, J; Perera, B J C

    2013-10-15

    There is a need to reconfigure current urban water systems to achieve the objective of sustainable, water-sensitive cities. Stormwater represents a valuable alternative urban water source to reduce pressure on fresh water resources and to mitigate the environmental impact of urban stormwater runoff. The selection of suitable urban stormwater harvesting sites is generally based on the judgement of water planners, who face the challenge of considering multiple technical and socio-economic factors that influence site suitability. To address this challenge, the present study developed a robust GIS-based screening methodology for identifying potentially suitable stormwater harvesting sites in urban areas as a first pass before more detailed investigation. The study initially evaluated suitability based on the match between harvestable runoff and demand through a concept of accumulated catchments. Drainage outlets of these accumulated catchments were considered as potential stormwater harvesting sites. These sites were screened and ranked using three screening parameters: demand, the ratio of runoff to demand, and weighted demand distance. The methodology described in this paper was successfully applied to a case study in Melbourne, Australia, in collaboration with the local water utility. The methodology was found to be effective in supporting the selection of priority sites for stormwater harvesting schemes, as it provided the basis to identify, short-list and rank sites for further detailed investigation. The rapid identification of suitable sites for stormwater harvesting can assist planners in prioritising schemes in areas that will have the most impact on reducing potable water demand. Copyright © 2013 Elsevier Ltd. All rights reserved.

  19. In Silico target fishing: addressing a "Big Data" problem by ligand-based similarity rankings with data fusion.

    Science.gov (United States)

    Liu, Xian; Xu, Yuan; Li, Shanshan; Wang, Yulan; Peng, Jianlong; Luo, Cheng; Luo, Xiaomin; Zheng, Mingyue; Chen, Kaixian; Jiang, Hualiang

    2014-01-01

    Ligand-based in silico target fishing can be used to identify the potential interacting targets of bioactive ligands, which is useful for understanding the polypharmacology and safety profile of existing drugs. The underlying principle of the approach is that known bioactive ligands can be used as references to predict the targets of a new compound. We tested a pipeline enabling large-scale target fishing and drug repositioning, based on simple fingerprint similarity rankings with data fusion. A large library containing 533 drug-relevant targets with 179,807 active ligands was compiled, where each target was defined by its ligand set. For a given query molecule, a target profile is generated by similarity searching against the ligand sets assigned to each target; the individual searches utilizing multiple reference structures are then fused into a single ranking list representing the potential target interaction profile of the query compound. The proposed approach was validated by 10-fold cross-validation and two external tests using data from DrugBank and the Therapeutic Target Database (TTD). The use of the approach was further demonstrated with examples concerning drug repositioning and drug side-effect prediction. The promising results suggest that the proposed method is useful not only for finding new uses for promiscuous drugs, but also for predicting some important toxic liabilities. With the rapidly increasing volume and diversity of data concerning drug-related targets and their ligands, this simple ligand-based target fishing approach should play an important role in assisting future drug design and discovery.
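
    The similarity-ranking-with-fusion principle can be sketched with set-based fingerprints and Tanimoto similarity, fusing per-ligand scores into one score per target by taking the maximum (one common fusion rule). The fingerprints and target names below are made up for illustration:

```python
# Toy ligand-based target fishing: Tanimoto similarity against each target's
# ligand set, fused by the maximum score (MAX fusion).

def tanimoto(a, b):
    """Tanimoto similarity of two bit-sets (sets of on-bit indices)."""
    return len(a & b) / len(a | b) if a | b else 0.0

def target_profile(query, target_ligands):
    """Fuse per-ligand similarities into one score per target (max fusion)."""
    return {t: max(tanimoto(query, fp) for fp in fps)
            for t, fps in target_ligands.items()}

targets = {
    "kinase_X":   [{1, 2, 3, 4}, {2, 3, 5}],
    "receptor_Y": [{7, 8, 9}, {8, 9, 10, 11}],
}
query = {2, 3, 4}
scores = target_profile(query, targets)
ranking = sorted(scores, key=scores.get, reverse=True)
print(ranking, scores)  # kinase_X ranks first for this query
```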

  20. Ranking Specific Sets of Objects.

    Science.gov (United States)

    Maly, Jan; Woltran, Stefan

    2017-01-01

    Ranking sets of objects based on an order between the single elements has been thoroughly studied in the literature. In particular, it has been shown that it is in general impossible to find a total ranking - jointly satisfying properties such as dominance and independence - on the whole power set of objects. However, in many applications certain elements from the entire power set might not be required and can be neglected in the ranking process. For instance, certain sets might be ruled out due to hard constraints or might not satisfy some background theory. In this paper, we treat the computational problem of whether an order satisfying different variants of dominance and independence can be found on a given subset of the power set of elements, given a ranking on the elements. We show that this problem is tractable for partial rankings and NP-complete for total rankings.

  1. Rank-based characterization of pollen assemblages collected by honey bees using a multi-locus metabarcoding approach.

    Science.gov (United States)

    Richardson, Rodney T; Lin, Chia-Hua; Quijia, Juan O; Riusech, Natalia S; Goodell, Karen; Johnson, Reed M

    2015-11-01

    Difficulties inherent in microscopic pollen identification have resulted in limited implementation for large-scale studies. Metabarcoding, a relatively novel approach, could make pollen analysis less onerous; however, improved understanding of the quantitative capacity of various plant metabarcode regions and primer sets is needed to ensure that such applications are accurate and precise. We applied metabarcoding, targeting the ITS2, matK, and rbcL loci, to characterize six samples of pollen collected by honey bees, Apis mellifera. Additionally, samples were analyzed by light microscopy. We found significant rank-based associations between the relative abundance of pollen types within our samples as inferred by the two methods. Our findings suggest metabarcoding data from plastid loci, as opposed to the ribosomal locus, are more reliable for quantitative characterization of pollen assemblages. Furthermore, multilocus metabarcoding of pollen may be more reliable than single-locus analyses, underscoring the need for discovering novel barcodes and barcode combinations optimized for molecular palynology.
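
    The rank-based association between the two methods can be quantified with a Spearman correlation; a self-contained sketch with invented abundance vectors (no tie handling):

```python
# Spearman rank correlation between two abundance estimates of the same taxa.

def rank_vector(xs):
    """1-based ranks of the values in xs (assumes no ties)."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    for rank, i in enumerate(order):
        r[i] = float(rank + 1)
    return r

def spearman(x, y):
    """Pearson correlation of the rank vectors of x and y."""
    rx, ry = rank_vector(x), rank_vector(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

microscopy = [120, 30, 55, 8, 210]             # pollen grains counted per taxon
metabarcode = [0.35, 0.10, 0.20, 0.05, 0.30]   # read fractions per taxon

print(round(spearman(microscopy, metabarcode), 3))  # 0.9 for these vectors
```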

  2. A multicriteria model for ranking of improvement approaches in construction companies based on the PROMETHÉE II method

    Directory of Open Access Journals (Sweden)

    Renata Maciel de Melo

    2015-03-01

    Full Text Available The quality of the construction production process may be improved using several different methods, such as Lean Construction, ISO 9001, ISO 14001 or ISO 18001. Construction companies need a preliminary study and systematic implementation of changes to become more competitive and efficient. This paper presents a multicriteria decision model, based on the PROMETHEE II method, for selecting and ranking such improvement approaches with regard to quality, sustainability and safety. The adoption of this model provides more confidence and visibility for decision makers. One of the differentiators of this model is the use of a fragmented set of improvement alternatives, which were combined under some restrictions to create a global set of alternatives. An application to three scenarios, considering realistic data, was developed. The results of the application show that the model should be incorporated into the strategic planning process of organizations.
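
    A compact sketch of the PROMETHEE II net-flow computation with the "usual" preference function (P = 1 if an alternative is strictly better on a criterion, else 0). The alternatives, criterion scores, and weights below are illustrative, not the paper's data:

```python
# PROMETHEE II with the "usual" preference function: rank by net outranking flow.

def promethee2(scores, weights):
    """scores: {alt: [criterion values, higher is better]}; returns net flows."""
    alts = list(scores)
    n = len(alts)

    def pi(a, b):  # aggregated preference of a over b
        return sum(w for w, ga, gb in zip(weights, scores[a], scores[b]) if ga > gb)

    phi = {}
    for a in alts:
        plus = sum(pi(a, b) for b in alts if b != a) / (n - 1)   # leaving flow
        minus = sum(pi(b, a) for b in alts if b != a) / (n - 1)  # entering flow
        phi[a] = plus - minus                                    # net flow
    return phi

# quality, sustainability, safety scores (hypothetical, 0-10) and weights
scores = {"Lean": [8, 6, 7], "ISO9001": [7, 5, 6], "ISO14001": [6, 9, 6]}
weights = [0.5, 0.3, 0.2]
phi = promethee2(scores, weights)
print(sorted(phi, key=phi.get, reverse=True))  # complete ranking by net flow
```

    The net flows induce the complete preorder that PROMETHEE II is known for; richer preference functions (linear, level, Gaussian) only change how `pi` maps score differences to preferences.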

  3. A convergence theory for probabilistic metric spaces | Jäger ...

    African Journals Online (AJOL)

    We develop a theory of probabilistic convergence spaces based on Tardiff's neighbourhood systems for probabilistic metric spaces. We show that the resulting category is a topological universe and we characterize a subcategory that is isomorphic to the category of probabilistic metric spaces. Keywords: Probabilistic metric ...

  4. Analysis of temporal-longitudinal-latitudinal characteristics in the global ionosphere based on tensor rank-1 decomposition

    Science.gov (United States)

    Lu, Shikun; Zhang, Hao; Li, Xihai; Li, Yihong; Niu, Chao; Yang, Xiaoyun; Liu, Daizhi

    2018-03-01

Combining analyses of spatial and temporal characteristics of the ionosphere is of great significance for scientific research and engineering applications. Tensor decomposition is performed to explore the temporal-longitudinal-latitudinal characteristics in the ionosphere. Three-dimensional tensors are established based on the time series of ionospheric vertical total electron content (TEC) maps obtained from the Centre for Orbit Determination in Europe. To obtain large-scale characteristics of the ionosphere, rank-1 decomposition is used to obtain U^{(1)}, U^{(2)}, and U^{(3)}, which are the resulting vectors for the time, longitude, and latitude modes, respectively. Our initial finding is that the correspondence between the frequency spectrum of U^{(1)} and solar variation indicates that rank-1 decomposition primarily describes large-scale temporal variations in the global ionosphere caused by the Sun. Furthermore, the time lags between the maxima of the ionospheric U^{(2)} and solar irradiation range from 1 to 3.7 h without seasonal dependence. The differences in time lags may indicate different interactions between processes in the magnetosphere-ionosphere-thermosphere system. Based on the dataset displayed in geomagnetic coordinates, the position of the barycenter of U^{(3)} provides evidence for north-south asymmetry (NSA) in the large-scale ionospheric variations. The daily variation in such asymmetry indicates the influence of solar ionization. The diurnal geomagnetic coordinate variations in U^{(3)} show that the large-scale EIA (equatorial ionization anomaly) variations during the day and night have similar characteristics. Considering the influence of geomagnetic disturbances on ionospheric behavior, we select geomagnetically quiet GIMs (global ionospheric maps) to construct the ionospheric tensor. The results indicate that geomagnetic disturbances have little effect on large-scale ionospheric characteristics.
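A rank-1 CP decomposition of a three-way tensor can be computed by alternating higher-order power iterations: each mode vector is updated by contracting the tensor with the current estimates of the other two, then normalized. A self-contained sketch on a tiny synthetic, exactly rank-1 tensor (illustrative mode sizes and values, not real TEC data):

```python
import math

# Synthetic rank-1 tensor built from made-up "time", "longitude", "latitude" vectors.
u_true = [1.0, 2.0, 3.0]
v_true = [0.5, 1.0]
w_true = [2.0, 1.0, 0.5, 0.25]
T = [[[u * v * w for w in w_true] for v in v_true] for u in u_true]

def normalize(x):
    nrm = math.sqrt(sum(c * c for c in x))
    return [c / nrm for c in x], nrm

# Alternating power iterations: each update contracts T with the other two modes.
u = [1.0] * len(u_true)
v = [1.0] * len(v_true)
w = [1.0] * len(w_true)
for _ in range(50):
    u = [sum(T[i][j][k] * v[j] * w[k] for j in range(len(v)) for k in range(len(w)))
         for i in range(len(u))]
    u, _ = normalize(u)
    v = [sum(T[i][j][k] * u[i] * w[k] for i in range(len(u)) for k in range(len(w)))
         for j in range(len(v))]
    v, _ = normalize(v)
    w = [sum(T[i][j][k] * u[i] * v[j] for i in range(len(u)) for j in range(len(v)))
         for k in range(len(w))]
    w, lam = normalize(w)

# u, v, w are the unit-norm mode vectors (the analogues of U^(1), U^(2), U^(3));
# lam is the leading weight, so lam * u_i * v_j * w_k reconstructs T.
```

On real TEC tensors the data are not exactly rank 1, and the same iteration converges to the best rank-1 approximation instead of an exact factorization.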

  5. A Survey on PageRank Computing

    OpenAIRE

    Berkhin, Pavel

    2005-01-01

This survey reviews the research related to PageRank computing. Components of a PageRank vector serve as authority weights for web pages independent of their textual content, solely based on the hyperlink structure of the web. PageRank is typically used as a web search ranking component. This defines the importance of the model and the data structures that underlie PageRank processing. Computing even a single PageRank is a difficult computational task. Computing many PageRanks is a much mor...
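The standard way to compute a single PageRank vector is power iteration with a damping factor. A minimal sketch on a hypothetical four-page link graph (no dangling nodes, so no special handling is needed):

```python
# Adjacency list: page -> pages it links to (made-up graph).
links = {0: [1, 2], 1: [2], 2: [0], 3: [2]}
n = len(links)
damping = 0.85

# Power iteration: each page splits its damped score among its out-links,
# and every page receives an equal share of the teleportation mass.
pr = [1.0 / n] * n
for _ in range(100):
    new = [(1 - damping) / n] * n
    for page, outs in links.items():
        share = damping * pr[page] / len(outs)
        for target in outs:
            new[target] += share
    pr = new

# pr approximates the stationary authority weights; page 2, which collects
# the most in-links, ranks highest, while unlinked page 3 gets only teleport mass.
```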

  6. Knowledge based ranking algorithm for comparative assessment of post-closure care needs of closed landfills

    International Nuclear Information System (INIS)

    Sizirici, Banu; Tansel, Berrin; Kumar, Vivek

    2011-01-01

    Post-closure care (PCC) activities at landfills include cap maintenance; water quality monitoring; maintenance and monitoring of the gas collection/control system, leachate collection system, groundwater monitoring wells, and surface water management system; and general site maintenance. The objective of this study was to develop an integrated data and knowledge based decision making tool for preliminary estimation of PCC needs at closed landfills. To develop the decision making tool, 11 categories of parameters were identified as critical areas which could affect future PCC needs. Each category was further analyzed by detailed questions which could be answered with limited data and knowledge about the site, its history, location, and site specific characteristics. Depending on the existing knowledge base, a score was assigned to each question (on a scale 1-10, as 1 being the best and 10 being the worst). Each category was also assigned a weight based on its relative importance on the site conditions and PCC needs. The overall landfill score was obtained from the total weighted sum attained. Based on the overall score, landfill conditions could be categorized as critical, acceptable, or good. Critical condition indicates that the landfill may be a threat to the human health and the environment and necessary steps should be taken. Acceptable condition indicates that the landfill is currently stable and the monitoring should be continued. Good condition indicates that the landfill is stable and the monitoring activities can be reduced in the future. The knowledge base algorithm was applied to two case study landfills for preliminary assessment of PCC performance.
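The scoring scheme described above reduces to a weighted sum of category scores compared against condition thresholds. A sketch with hypothetical categories, weights, and cutoffs (the paper uses 11 categories; the four categories and the thresholds below are illustrative, not the authors'):

```python
# Hypothetical category scores (1 = best, 10 = worst) and relative weights.
category_scores  = {"cap": 3, "leachate": 7, "gas": 5, "groundwater": 8}
category_weights = {"cap": 0.3, "leachate": 0.3, "gas": 0.2, "groundwater": 0.2}

# Overall landfill score: weighted sum over categories (weights sum to 1,
# so the result stays on the 1-10 scale).
overall = sum(category_scores[c] * category_weights[c] for c in category_scores)

# Illustrative condition thresholds on that scale.
if overall >= 7:
    condition = "critical"    # possible threat; action needed
elif overall >= 4:
    condition = "acceptable"  # stable; continue monitoring
else:
    condition = "good"        # stable; monitoring may be reduced
```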

  7. Knowledge based ranking algorithm for comparative assessment of post-closure care needs of closed landfills.

    Science.gov (United States)

    Sizirici, Banu; Tansel, Berrin; Kumar, Vivek

    2011-06-01

    Post-closure care (PCC) activities at landfills include cap maintenance; water quality monitoring; maintenance and monitoring of the gas collection/control system, leachate collection system, groundwater monitoring wells, and surface water management system; and general site maintenance. The objective of this study was to develop an integrated data and knowledge based decision making tool for preliminary estimation of PCC needs at closed landfills. To develop the decision making tool, 11 categories of parameters were identified as critical areas which could affect future PCC needs. Each category was further analyzed by detailed questions which could be answered with limited data and knowledge about the site, its history, location, and site specific characteristics. Depending on the existing knowledge base, a score was assigned to each question (on a scale 1-10, as 1 being the best and 10 being the worst). Each category was also assigned a weight based on its relative importance on the site conditions and PCC needs. The overall landfill score was obtained from the total weighted sum attained. Based on the overall score, landfill conditions could be categorized as critical, acceptable, or good. Critical condition indicates that the landfill may be a threat to the human health and the environment and necessary steps should be taken. Acceptable condition indicates that the landfill is currently stable and the monitoring should be continued. Good condition indicates that the landfill is stable and the monitoring activities can be reduced in the future. The knowledge base algorithm was applied to two case study landfills for preliminary assessment of PCC performance. Copyright © 2011 Elsevier Ltd. All rights reserved.

  8. Ranking nodes in growing networks: When PageRank fails

    Science.gov (United States)

    Mariani, Manuel Sebastian; Medo, Matúš; Zhang, Yi-Cheng

    2015-11-01

PageRank is arguably the most popular ranking algorithm, applied in real systems ranging from information to biological and infrastructure networks. Despite its outstanding popularity and broad use in different areas of science, the relation between the algorithm's efficacy and the properties of the network on which it acts has not yet been fully understood. We study here PageRank's performance on a network model supported by real data, and show that realistic temporal effects make PageRank fail in individuating the most valuable nodes for a broad range of model parameters. Results on real data are in qualitative agreement with our model-based findings. This failure of PageRank reveals that the static approach to information filtering is inappropriate for a broad class of growing systems, and suggests that time-dependent algorithms based on the temporal linking patterns of these systems are needed to better rank the nodes.

  9. Design and analysis of a ranking approach to private location-based services

    DEFF Research Database (Denmark)

    Yiu, Ma Lung; Jensen, Christian S.; Møller, Jesper

    2011-01-01

The proposed framework, SpaceTwist, aims to offer location privacy for k nearest neighbor (kNN) queries at low communication cost without requiring a trusted anonymizer. The solution can be used with a conventional DBMS as well as with a server optimized for location-based services. In particular, we believe that this is the first solution that expresses the server-side functionality in a single SQL statement. In its basic form, SpaceTwist utilizes well-known incremental NN query processing on the server. When augmented with a server-side granular search technique, SpaceTwist is capable of exploiting relaxed query accuracy.

  10. Google goes cancer: improving outcome prediction for cancer patients by network-based ranking of marker genes.

    Directory of Open Access Journals (Sweden)

    Christof Winter

    Full Text Available Predicting the clinical outcome of cancer patients based on the expression of marker genes in their tumors has received increasing interest in the past decade. Accurate predictors of outcome and response to therapy could be used to personalize and thereby improve therapy. However, state of the art methods used so far often found marker genes with limited prediction accuracy, limited reproducibility, and unclear biological relevance. To address this problem, we developed a novel computational approach to identify genes prognostic for outcome that couples gene expression measurements from primary tumor samples with a network of known relationships between the genes. Our approach ranks genes according to their prognostic relevance using both expression and network information in a manner similar to Google's PageRank. We applied this method to gene expression profiles which we obtained from 30 patients with pancreatic cancer, and identified seven candidate marker genes prognostic for outcome. Compared to genes found with state of the art methods, such as Pearson correlation of gene expression with survival time, we improve the prediction accuracy by up to 7%. Accuracies were assessed using support vector machine classifiers and Monte Carlo cross-validation. We then validated the prognostic value of our seven candidate markers using immunohistochemistry on an independent set of 412 pancreatic cancer samples. Notably, signatures derived from our candidate markers were independently predictive of outcome and superior to established clinical prognostic factors such as grade, tumor size, and nodal status. As the amount of genomic data of individual tumors grows rapidly, our algorithm meets the need for powerful computational approaches that are key to exploit these data for personalized cancer therapies in clinical practice.

  11. Application of fuzzy-MOORA method: Ranking of components for reliability estimation of component-based software systems

    Directory of Open Access Journals (Sweden)

    Zeeshan Ali Siddiqui

    2016-01-01

Full Text Available Component-based software system (CBSS) development is an emerging discipline that promises to take software development into a new era. Just as hardware systems are presently constructed from kits of parts, software systems may also be assembled from components. It is more reliable to reuse software than to create it anew, and it is the glue code and the reliability of the individual components that contribute to the reliability of the overall system. Every component contributes to overall system reliability according to the number of times it is used, known as the usage frequency of the component; some components are of critical usage. The usage frequency determines the weight of each component, and according to these weights each component contributes to the overall reliability of the system. Therefore, a ranking of components may be obtained by analyzing their reliability impacts on the overall application. In this paper, we propose the application of fuzzy multi-objective optimization on the basis of ratio analysis (Fuzzy-MOORA). The method helps find the most suitable alternative (software component) from a set of available feasible alternatives. It is an accurate and easy-to-understand tool for solving multi-criteria decision-making problems that have imprecise and vague evaluation data. By the use of ratio analysis, the proposed method determines the most suitable alternative among all possible alternatives, and its dimensionless measures rank the components for estimating CBSS reliability in a non-subjective way. Finally, three case studies illustrate the use of the proposed technique.
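The crisp (non-fuzzy) core of MOORA is vector normalization of the decision matrix followed by a ratio-analysis assessment: sum of normalized beneficial-criterion values minus sum of normalized cost-criterion values. A sketch with made-up components and criteria; the paper's fuzzy extension for vague evaluation data is deliberately omitted:

```python
import math

# Three hypothetical software components scored on two beneficial criteria
# (reliability, usage frequency) and one cost criterion (complexity).
components = ["compA", "compB", "compC"]
matrix = [  # rows: components; columns: criteria
    [0.95, 120, 30],
    [0.90, 300, 45],
    [0.99,  80, 20],
]
beneficial = [True, True, False]

# Vector normalization: divide each column by its Euclidean norm.
norms = [math.sqrt(sum(row[j] ** 2 for row in matrix)) for j in range(3)]
normalized = [[row[j] / norms[j] for j in range(3)] for row in matrix]

# MOORA assessment value: beneficial ratios added, cost ratios subtracted.
y = [sum(r[j] if beneficial[j] else -r[j] for j in range(3)) for r in normalized]
ranking = sorted(components, key=lambda c: -y[components.index(c)])
```

With these illustrative numbers, the heavily used component outranks the others despite its higher complexity cost; a fuzzy variant would replace each crisp score with a fuzzy number before the same normalization and ratio steps.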

  12. Diversifying customer review rankings.

    Science.gov (United States)

    Krestel, Ralf; Dokoohaki, Nima

    2015-06-01

E-commerce Web sites owe much of their popularity to consumer reviews accompanying product descriptions. On-line customers spend hours going through heaps of textual reviews to decide which products to buy, yet each popular product has thousands of user-generated reviews, making it impossible for a buyer to read everything. Current approaches to displaying reviews to users or recommending an individual review for a product are based on the recency or helpfulness of each review. In this paper, we present a framework to rank product reviews by optimizing the coverage of the ranking with respect to sentiment or aspects, or by summarizing all reviews with the top-K reviews in the ranking. To accomplish this, we make use of the assigned star rating for a product as an indicator of a review's sentiment polarity, and compare bag-of-words (language model) with topic models (latent Dirichlet allocation) as a means of representing aspects. Our evaluation on manually annotated review data from a commercial review Web site demonstrates the effectiveness of our approach, outperforming plain recency ranking by 30% and obtaining the best results by combining language and topic model representations. Copyright © 2015 Elsevier Ltd. All rights reserved.
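Coverage-based ranking can be approximated greedily: repeatedly pick the review that covers the most not-yet-covered aspects. A sketch with hypothetical reviews and aspect sets; the paper's sentiment and topic-model representations are abstracted away into plain aspect labels:

```python
# Made-up reviews, each reduced to the set of product aspects it discusses.
reviews = {
    "r1": {"battery", "screen"},
    "r2": {"battery"},
    "r3": {"price", "shipping"},
    "r4": {"screen", "price", "camera"},
}

def top_k_by_coverage(reviews, k):
    """Greedily rank k reviews, maximizing newly covered aspects at each step."""
    covered, ranking = set(), []
    remaining = dict(reviews)
    for _ in range(min(k, len(remaining))):
        best = max(remaining, key=lambda r: len(remaining[r] - covered))
        ranking.append(best)
        covered |= remaining.pop(best)
    return ranking, covered

top2, covered = top_k_by_coverage(reviews, 2)
```

Greedy selection gives the usual (1 - 1/e) approximation guarantee for this kind of submodular coverage objective, which is why it is a common stand-in for the exact optimization.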

  13. Identifying and ranking implicit leadership strategies to promote evidence-based practice implementation in addiction health services.

    Science.gov (United States)

    Guerrero, Erick G; Padwa, Howard; Fenwick, Karissa; Harris, Lesley M; Aarons, Gregory A

    2016-05-14

    Despite a solid research base supporting evidence-based practices (EBPs) for addiction treatment such as contingency management and medication-assisted treatment, these services are rarely implemented and delivered in community-based addiction treatment programs in the USA. As a result, many clients do not benefit from the most current and efficacious treatments, resulting in reduced quality of care and compromised treatment outcomes. Previous research indicates that addiction program leaders play a key role in supporting EBP adoption and use. The present study expanded on this previous work to identify strategies that addiction treatment program leaders report using to implement new practices. We relied on a staged and iterative mixed-methods approach to achieve the following four goals: (a) collect data using focus groups and semistructured interviews and conduct analyses to identify implicit managerial strategies for implementation, (b) use surveys to quantitatively rank strategy effectiveness, (c) determine how strategies fit with existing theories of organizational management and change, and (d) use a consensus group to corroborate and expand on the results of the previous three stages. Each goal corresponded to a methodological phase, which included data collection and analytic approaches to identify and evaluate leadership interventions that facilitate EBP implementation in community-based addiction treatment programs. Findings show that the top-ranked strategies involved the recruitment and selection of staff members receptive to change, offering support and requesting feedback during the implementation process, and offering in vivo and hands-on training. Most strategies corresponded to emergent implementation leadership approaches that also utilize principles of transformational and transactional leadership styles. Leadership behaviors represented orientations such as being proactive to respond to implementation needs, supportive to assist staff members

  14. Multiple graph regularized protein domain ranking

    KAUST Repository

    Wang, Jim Jing-Yan

    2012-11-19

    Background: Protein domain ranking is a fundamental task in structural biology. Most protein domain ranking methods rely on the pairwise comparison of protein domains while neglecting the global manifold structure of the protein domain database. Recently, graph regularized ranking that exploits the global structure of the graph defined by the pairwise similarities has been proposed. However, the existing graph regularized ranking methods are very sensitive to the choice of the graph model and parameters, and this remains a difficult problem for most of the protein domain ranking methods.Results: To tackle this problem, we have developed the Multiple Graph regularized Ranking algorithm, MultiG-Rank. Instead of using a single graph to regularize the ranking scores, MultiG-Rank approximates the intrinsic manifold of protein domain distribution by combining multiple initial graphs for the regularization. Graph weights are learned with ranking scores jointly and automatically, by alternately minimizing an objective function in an iterative algorithm. Experimental results on a subset of the ASTRAL SCOP protein domain database demonstrate that MultiG-Rank achieves a better ranking performance than single graph regularized ranking methods and pairwise similarity based ranking methods.Conclusion: The problem of graph model and parameter selection in graph regularized protein domain ranking can be solved effectively by combining multiple graphs. This aspect of generalization introduces a new frontier in applying multiple graphs to solving protein domain ranking applications. 2012 Wang et al; licensee BioMed Central Ltd.

  15. Multiple graph regularized protein domain ranking

    KAUST Repository

    Wang, Jim Jing-Yan; Bensmail, Halima; Gao, Xin

    2012-01-01

    Background: Protein domain ranking is a fundamental task in structural biology. Most protein domain ranking methods rely on the pairwise comparison of protein domains while neglecting the global manifold structure of the protein domain database. Recently, graph regularized ranking that exploits the global structure of the graph defined by the pairwise similarities has been proposed. However, the existing graph regularized ranking methods are very sensitive to the choice of the graph model and parameters, and this remains a difficult problem for most of the protein domain ranking methods.Results: To tackle this problem, we have developed the Multiple Graph regularized Ranking algorithm, MultiG-Rank. Instead of using a single graph to regularize the ranking scores, MultiG-Rank approximates the intrinsic manifold of protein domain distribution by combining multiple initial graphs for the regularization. Graph weights are learned with ranking scores jointly and automatically, by alternately minimizing an objective function in an iterative algorithm. Experimental results on a subset of the ASTRAL SCOP protein domain database demonstrate that MultiG-Rank achieves a better ranking performance than single graph regularized ranking methods and pairwise similarity based ranking methods.Conclusion: The problem of graph model and parameter selection in graph regularized protein domain ranking can be solved effectively by combining multiple graphs. This aspect of generalization introduces a new frontier in applying multiple graphs to solving protein domain ranking applications. 2012 Wang et al; licensee BioMed Central Ltd.

  16. Multiple graph regularized protein domain ranking.

    Science.gov (United States)

    Wang, Jim Jing-Yan; Bensmail, Halima; Gao, Xin

    2012-11-19

    Protein domain ranking is a fundamental task in structural biology. Most protein domain ranking methods rely on the pairwise comparison of protein domains while neglecting the global manifold structure of the protein domain database. Recently, graph regularized ranking that exploits the global structure of the graph defined by the pairwise similarities has been proposed. However, the existing graph regularized ranking methods are very sensitive to the choice of the graph model and parameters, and this remains a difficult problem for most of the protein domain ranking methods. To tackle this problem, we have developed the Multiple Graph regularized Ranking algorithm, MultiG-Rank. Instead of using a single graph to regularize the ranking scores, MultiG-Rank approximates the intrinsic manifold of protein domain distribution by combining multiple initial graphs for the regularization. Graph weights are learned with ranking scores jointly and automatically, by alternately minimizing an objective function in an iterative algorithm. Experimental results on a subset of the ASTRAL SCOP protein domain database demonstrate that MultiG-Rank achieves a better ranking performance than single graph regularized ranking methods and pairwise similarity based ranking methods. The problem of graph model and parameter selection in graph regularized protein domain ranking can be solved effectively by combining multiple graphs. This aspect of generalization introduces a new frontier in applying multiple graphs to solving protein domain ranking applications.

  17. Multiple graph regularized protein domain ranking

    Directory of Open Access Journals (Sweden)

    Wang Jim

    2012-11-01

    Full Text Available Abstract Background Protein domain ranking is a fundamental task in structural biology. Most protein domain ranking methods rely on the pairwise comparison of protein domains while neglecting the global manifold structure of the protein domain database. Recently, graph regularized ranking that exploits the global structure of the graph defined by the pairwise similarities has been proposed. However, the existing graph regularized ranking methods are very sensitive to the choice of the graph model and parameters, and this remains a difficult problem for most of the protein domain ranking methods. Results To tackle this problem, we have developed the Multiple Graph regularized Ranking algorithm, MultiG-Rank. Instead of using a single graph to regularize the ranking scores, MultiG-Rank approximates the intrinsic manifold of protein domain distribution by combining multiple initial graphs for the regularization. Graph weights are learned with ranking scores jointly and automatically, by alternately minimizing an objective function in an iterative algorithm. Experimental results on a subset of the ASTRAL SCOP protein domain database demonstrate that MultiG-Rank achieves a better ranking performance than single graph regularized ranking methods and pairwise similarity based ranking methods. Conclusion The problem of graph model and parameter selection in graph regularized protein domain ranking can be solved effectively by combining multiple graphs. This aspect of generalization introduces a new frontier in applying multiple graphs to solving protein domain ranking applications.

  18. Risk-based ranking of dominant contributors to maritime pollution events

    International Nuclear Information System (INIS)

    Wheeler, T.A.

    1993-01-01

    This report describes a conceptual approach for identifying dominant contributors to risk from maritime shipping of hazardous materials. Maritime transportation accidents are relatively common occurrences compared to more frequently analyzed contributors to public risk. Yet research on maritime safety and pollution incidents has not been guided by a systematic, risk-based approach. Maritime shipping accidents can be analyzed using event trees to group the accidents into 'bins,' or groups, of similar characteristics such as type of cargo, location of accident (e.g., harbor, inland waterway), type of accident (e.g., fire, collision, grounding), and size of release. The importance of specific types of events to each accident bin can be quantified. Then the overall importance of accident events to risk can be estimated by weighting the events' individual bin importance measures by the risk associated with each accident bin. 4 refs., 3 figs., 6 tabs
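The risk weighting described above is a simple sum over accident bins: an event's overall importance is its per-bin importance weighted by each bin's share of the total risk. A sketch with made-up bins, events, and numbers:

```python
# Hypothetical accident bins with their share of total risk (sums to 1).
bin_risk = {"harbor_fire": 0.6, "waterway_collision": 0.3, "grounding": 0.1}

# Per-bin importance of two contributing events (made-up measures).
event_importance = {
    "human_error":   {"harbor_fire": 0.5, "waterway_collision": 0.8, "grounding": 0.4},
    "equip_failure": {"harbor_fire": 0.4, "waterway_collision": 0.1, "grounding": 0.6},
}

# Overall importance: risk-weighted sum of the per-bin importances.
overall = {
    event: sum(bin_risk[b] * imp[b] for b in bin_risk)
    for event, imp in event_importance.items()
}
```

Even though equipment failure dominates the grounding bin in this example, its overall importance stays low because that bin carries little of the total risk.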

  19. Sparse structure regularized ranking

    KAUST Repository

    Wang, Jim Jing-Yan; Sun, Yijun; Gao, Xin

    2014-01-01

    Learning ranking scores is critical for the multimedia database retrieval problem. In this paper, we propose a novel ranking score learning algorithm by exploring the sparse structure and using it to regularize ranking scores. To explore the sparse

  20. Ranking of tree-ring based temperature reconstructions of the past millennium

    Science.gov (United States)

    Esper, Jan; Krusic, Paul J.; Ljungqvist, Fredrik C.; Luterbacher, Jürg; Carrer, Marco; Cook, Ed; Davi, Nicole K.; Hartl-Meier, Claudia; Kirdyanov, Alexander; Konter, Oliver; Myglan, Vladimir; Timonen, Mauri; Treydte, Kerstin; Trouet, Valerie; Villalba, Ricardo; Yang, Bao; Büntgen, Ulf

    2016-08-01

Tree-ring chronologies are widely used to reconstruct high- to low-frequency variations in growing season temperatures over centuries to millennia. The relevance of these time series in large-scale climate reconstructions is often determined by the strength of their correlation against instrumental temperature data. However, this single criterion ignores several important quantitative and qualitative characteristics of tree-ring chronologies: (i) data homogeneity, (ii) sample replication, (iii) growth coherence, (iv) chronology development, and (v) climate signal, including the correlation with instrumental data. Based on these five characteristics, a reconstruction-scoring scheme is proposed and applied to 39 published, millennial-length temperature reconstructions from Asia, Europe, North America, and the Southern Hemisphere. Results reveal that no reconstruction scores highest in every category; each has its own strengths and weaknesses. Reconstructions that perform better overall include N-Scan and Finland from Europe, E-Canada from North America, and Yamal and Dzhelo from Asia. Reconstructions performing less well include W-Himalaya and Karakorum from Asia, Tatra and S-Finland from Europe, and Great Basin from North America. By providing a comprehensive set of criteria to evaluate tree-ring chronologies, we hope to improve the development of large-scale temperature reconstructions spanning the past millennium. All reconstructions and their corresponding scores are provided at http://www.blogs.uni-mainz.de/fb09climatology.

  1. Ranking candidate disease genes from gene expression and protein interaction: a Katz-centrality based approach.

    Directory of Open Access Journals (Sweden)

    Jing Zhao

Full Text Available Many diseases have complex genetic causes, where a set of alleles can affect the propensity of getting the disease. The identification of such disease genes is important to understand the mechanistic and evolutionary aspects of pathogenesis, improve diagnosis and treatment of the disease, and aid in drug discovery. Current genetic studies typically identify chromosomal regions associated with specific diseases, but picking out an unknown disease gene from hundreds of candidates located on the same genomic interval is still challenging. In this study, we propose an approach to prioritize candidate genes by integrating data on gene expression level, protein-protein interaction strength, and known disease genes. Our method is based on only two simple, biologically motivated assumptions: that a gene is a good disease-gene candidate if it is differentially expressed in cases and controls, or if it is close to other disease-gene candidates in its protein interaction network. We tested our method on 40 diseases in 58 gene expression datasets of the NCBI Gene Expression Omnibus database. On these datasets our method is able to predict unknown disease genes as well as to identify pleiotropic genes involved in the physiological cellular processes of many diseases. Our study not only provides an effective algorithm for prioritizing candidate disease genes but also offers a way to discover phenotypic interdependency, co-occurrence, and shared pathophysiology between different disorders.
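Katz centrality, named in the title, scores a node by all walks reaching it, with walks of length k attenuated by a factor alpha^k; it converges when alpha is below the reciprocal of the adjacency matrix's largest eigenvalue. A sketch on a toy interaction graph (the adjacency matrix is hypothetical, not from the study):

```python
# Toy undirected protein-interaction graph as an adjacency matrix.
A = [
    [0, 1, 1, 0],
    [1, 0, 1, 0],
    [1, 1, 0, 1],
    [0, 0, 1, 0],
]
alpha = 0.1  # attenuation; must be < 1/lambda_max for the series to converge
n = len(A)

# Katz score x = sum over k >= 1 of alpha^k * (A^k applied to the ones vector),
# accumulated term by term.
x = [0.0] * n
contrib = [1.0] * n  # starts as the all-ones vector
for _ in range(100):
    contrib = [alpha * sum(A[i][j] * contrib[j] for j in range(n)) for i in range(n)]
    x = [xi + c for xi, c in zip(x, contrib)]

# Node 2, with the most interactions, receives the highest Katz score.
```

In the disease-gene setting, the attenuated-walk idea is what lets a candidate accumulate evidence from disease genes several interaction hops away, not just direct neighbors.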

  2. U.S. Natural Gas Storage Risk-Based Ranking Methodology and Results

    Energy Technology Data Exchange (ETDEWEB)

    Folga, Steve [Argonne National Lab. (ANL), Argonne, IL (United States); Portante, Edgar [Argonne National Lab. (ANL), Argonne, IL (United States); Shamsuddin, Shabbir [Argonne National Lab. (ANL), Argonne, IL (United States); Tompkins, Angeli [Argonne National Lab. (ANL), Argonne, IL (United States); Talaber, Leah [Argonne National Lab. (ANL), Argonne, IL (United States); McLamore, Mike [Argonne National Lab. (ANL), Argonne, IL (United States); Kavicky, Jim [Argonne National Lab. (ANL), Argonne, IL (United States); Conzelmann, Guenter [Argonne National Lab. (ANL), Argonne, IL (United States); Levin, Todd [Argonne National Lab. (ANL), Argonne, IL (United States)

    2016-10-01

    This report summarizes the methodology and models developed to assess the risk to energy delivery from the potential loss of underground gas storage (UGS) facilities located within the United States. The U.S. has a total of 418 existing storage fields, of which 390 are currently active. The models estimate the impacts of a disruption of each of the active UGS facilities on their owners/operators, including (1) local distribution companies (LDCs), (2) directly connected transporting pipelines and thus on the customers in downstream States, and (3) third-party entities and thus on contracted customers expecting the gas shipment. Impacts are measured across all natural gas customer classes. For the electric sector, impacts are quantified in terms of natural gas-fired electric generation capacity potentially affected from the loss of a UGS facility. For the purpose of calculating the overall supply risk, the overall consequence of the disruption of an UGS facility across all customer classes is expressed in terms of the number of expected equivalent residential customer outages per year, which combines the unit business interruption cost per customer class and the estimated number of affected natural gas customers with estimated probabilities of UGS disruptions. All models and analyses are based on publicly available data. The report presents a set of findings and recommendations in terms of data, further analyses, regulatory requirements and standards, and needs to improve gas/electric industry coordination for electric reliability.

  3. Probabilistic Unawareness

    Directory of Open Access Journals (Sweden)

    Mikaël Cozic

    2016-11-01

Full Text Available The modeling of awareness and unawareness is a significant topic in the doxastic logic literature, where it is usually tackled in terms of full belief operators. The present paper aims at a treatment in terms of partial belief operators. It draws upon the modal probabilistic logic that was introduced by Aumann (1999) at the semantic level, and then axiomatized by Heifetz and Mongin (2001). The paper embodies in this framework those properties of unawareness that have been highlighted in the seminal paper by Modica and Rustichini (1999). Their paper deals with full belief, but we argue that the properties in question also apply to partial belief. Our main result is a (soundness and) completeness theorem that reunites the two strands—modal and probabilistic—of doxastic logic.

  4. Evaluation of Term Ranking Algorithms for Pseudo-Relevance Feedback in MEDLINE Retrieval.

    Science.gov (United States)

    Yoo, Sooyoung; Choi, Jinwook

    2011-06-01

    The purpose of this study was to investigate the effects of query expansion algorithms for MEDLINE retrieval within a pseudo-relevance feedback framework. A number of query expansion algorithms were tested using various term ranking formulas, focusing on query expansion based on pseudo-relevance feedback. The OHSUMED test collection, which is a subset of the MEDLINE database, was used as a test corpus. Various ranking algorithms were tested in combination with different term re-weighting algorithms. Our comprehensive evaluation showed that the local context analysis ranking algorithm, when used in combination with one of the re-weighting algorithms - Rocchio, the probabilistic model, and our variants - significantly outperformed other algorithm combinations by up to 12% (paired t-test), at least in the context of the OHSUMED corpus. Comparative experiments on term ranking algorithms were performed in the context of a subset of MEDLINE documents. With medical documents, local context analysis, which uses co-occurrence with all query terms, significantly outperformed various term ranking methods based on both frequency and distribution analyses. Furthermore, the results of the experiments demonstrated that the term rank-based re-weighting method contributed to a remarkable improvement in mean average precision.
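The Rocchio re-weighting scheme referred to above is the standard positive-feedback step in pseudo-relevance feedback: the query vector is moved toward the centroid of the top-ranked documents. A minimal sketch over a toy term space (the vectors, weights, and `top_k` cutoff are illustrative, not OHSUMED data):

```python
import numpy as np

def rocchio_expand(query_vec, feedback_docs, alpha=1.0, beta=0.75, top_k=5):
    """Expand a query vector with the centroid of pseudo-relevant documents
    (Rocchio with positive feedback only)."""
    centroid = np.mean(feedback_docs, axis=0)
    expanded = alpha * query_vec + beta * centroid
    # keep only the top_k highest-weighted terms in the expanded query
    cutoff = np.sort(expanded)[-top_k]
    return np.where(expanded >= cutoff, expanded, 0.0)

# toy term space: 6 terms, the query mentions terms 0 and 1
query = np.array([1.0, 1.0, 0.0, 0.0, 0.0, 0.0])
docs = np.array([[0.9, 0.1, 0.8, 0.0, 0.0, 0.0],
                 [0.7, 0.2, 0.9, 0.1, 0.0, 0.0]])
print(rocchio_expand(query, docs, top_k=3))
```

Term 2, frequent in the feedback documents but absent from the query, receives a nonzero weight in the expanded query.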

  5. 1991 Acceptance priority ranking

    International Nuclear Information System (INIS)

    1991-12-01

    The Standard Contract for Disposal of Spent Nuclear Fuel and/or High-Level Radioactive Waste (10 CFR Part 961) that the Department of Energy (DOE) has executed with the owners and generators of civilian spent nuclear fuel requires annual publication of the Acceptance Priority Ranking (APR). The 1991 APR details the order in which DOE will allocate Federal waste acceptance capacity. As required by the Standard Contract, the ranking is based on the age of permanently discharged spent nuclear fuel (SNF), with the owners of the oldest SNF, on an industry-wide basis, given the highest priority. The 1991 APR will be the basis for the annual allocation of waste acceptance capacity to the Purchasers in the 1991 Annual Capacity Report (ACR), to be issued later this year. This document is based on SNF discharges as of December 31, 1990, and reflects Purchaser comments and corrections, as appropriate, to the draft APR issued on May 15, 1991.

  6. Minkowski metrics in creating universal ranking algorithms

    Directory of Open Access Journals (Sweden)

    Andrzej Ameljańczyk

    2014-06-01

    Full Text Available The paper presents a general procedure for creating rankings of a set of objects in which the preference relation is based on an arbitrary ranking function. The analysis of admissible ranking functions begins by showing the fundamental drawbacks of the commonly used functions of the weighted-sum form. As a special case of the ranking procedure in the space of the preference relation, a procedure based on the notion of an ideal element and the generalized Minkowski distance from that element is proposed. This procedure, presented as a universal ranking algorithm, eliminates most of the disadvantages of weighted-sum ranking functions. Keywords: ranking functions, preference relation, ranking clusters, categories, ideal point, universal ranking algorithm
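Ranking by generalized Minkowski distance to an ideal point can be sketched in a few lines. The ideal point here is taken as the per-criterion best value, all criteria are assumed benefit-type, and the scores, weights, and the choice p = 2 are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def rank_by_ideal_point(scores, weights=None, p=2):
    """Rank objects by weighted generalized Minkowski distance to the ideal
    point (per-criterion maximum). Smaller distance means a better rank."""
    scores = np.asarray(scores, dtype=float)
    ideal = scores.max(axis=0)      # assumes all criteria are benefit-type
    w = np.ones(scores.shape[1]) if weights is None else np.asarray(weights)
    dist = (w * np.abs(scores - ideal) ** p).sum(axis=1) ** (1.0 / p)
    return np.argsort(dist), dist

# three objects scored on two criteria
objs = [[0.9, 0.4], [0.6, 0.8], [0.5, 0.5]]
order, d = rank_by_ideal_point(objs, p=2)
print(order)   # indices of objects, best first
```

Note that object 1 wins here even though object 0 has the single best score on criterion 1; distance to the ideal point rewards balanced performance, which a weighted sum can fail to do.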

  7. Centrality based Document Ranking

    Science.gov (United States)

    2014-11-01

    support task. As it turned out, the results were very poor, which suggests that using a general-purpose IR system in this way is not a good idea... Management on the Semantic Web, 2005. 15. P. Mika, T. Tudorache, A. Bernstein, C. Welty, C. A. Knoblock, D. Vrandecic, P. T. Groth, N. F. Noy, K. Janowicz...question-focused sentence retrieval. In HLT '05: Proceedings of the conference on Human Language Technology and Empirical Methods in Natural

  8. A study on the impact of parameter uncertainty on the emission-based ranking of transportation projects.

    Science.gov (United States)

    2014-01-01

    With the growing concern with air quality levels and, hence, the livability of urban regions in the nation, it has become increasingly common to incorporate vehicular emission considerations in the ranking of transportation projects. Network assignme...

  9. 14th International Probabilistic Workshop

    CERN Document Server

    Taerwe, Luc; Proske, Dirk

    2017-01-01

    This book presents the proceedings of the 14th International Probabilistic Workshop that was held in Ghent, Belgium in December 2016. Probabilistic methods are currently of crucial importance for research and developments in the field of engineering, which face challenges presented by new materials and technologies and rapidly changing societal needs and values. Contemporary needs related to, for example, performance-based design, service-life design, life-cycle analysis, product optimization, assessment of existing structures and structural robustness give rise to new developments as well as accurate and practically applicable probabilistic and statistical engineering methods to support these developments. These proceedings are a valuable resource for anyone interested in contemporary developments in the field of probabilistic engineering applications.

  10. Probabilistic modeling of timber structures

    DEFF Research Database (Denmark)

    Köhler, Jochen; Sørensen, John Dalsgaard; Faber, Michael Havbro

    2007-01-01

    The present paper contains a proposal for the probabilistic modeling of timber material properties. It is produced in the context of the Probabilistic Model Code (PMC) of the Joint Committee on Structural Safety (JCSS) [Joint Committee of Structural Safety. Probabilistic Model Code, Internet Publication: www.jcss.ethz.ch; 2001] and of the COST action E24 ‘Reliability of Timber Structures' [COST Action E 24, Reliability of timber structures. Several meetings and Publications, Internet Publication: http://www.km.fgg.uni-lj.si/coste24/coste24.htm; 2005]. The present proposal is based on discussions and comments from participants of the COST E24 action and the members of the JCSS. The paper contains a description of the basic reference properties for timber strength parameters and ultimate limit state equations for timber components. The recommended probabilistic model for these basic properties...

  11. A procedure for the determination of scenario earthquakes for seismic design based on probabilistic seismic hazard analysis

    International Nuclear Information System (INIS)

    Hirose, Jiro; Muramatsu, Ken

    2002-03-01

    This report presents a study on the procedures for the determination of scenario earthquakes for seismic design of nuclear power plants (NPPs) based on probabilistic seismic hazard analysis (PSHA). In recent years, the use of PSHA, which is a part of seismic probabilistic safety assessment (PSA), to determine the design basis earthquake motions for NPPs has been proposed. The identified earthquakes are called probability-based scenario earthquakes (PBSEs). The concept of PBSEs originates both from the study of US NRC and from Ishikawa and Kameda. The assessment of PBSEs is composed of seismic hazard analysis and identification of dominant earthquakes. The objectives of this study are to formulate the concept of PBSEs and to examine the procedures for determining the PBSEs for a domestic NPP site. This report consists of three parts, namely, procedures to compile analytical conditions for PBSEs, an assessment to identify PBSEs for a model site using Ishikawa's concept, and the examination of uncertainties involved in analytical conditions. The results obtained from the examination of PBSEs using Ishikawa's concept are as follows. (a) Since PBSEs are expressed by hazard-consistent magnitude and distance in terms of a prescribed reference probability, it is easy to obtain a concrete image of the earthquakes that determine the ground response spectrum to be considered in the design of NPPs. (b) Source contribution factors provide information on the importance of the earthquake source regions and/or active faults, and allow the selection of a couple of PBSEs based on their importance to the site. (c) Since analytical conditions involve uncertainty, sensitivity analyses on uncertainties that would affect seismic hazard curves and identification of PBSEs were performed on various aspects and provided useful insights for assessment of PBSEs. A result from this sensitivity analysis was that, although the difference in selection of attenuation equations led to a

  12. Reduced Rank Regression

    DEFF Research Database (Denmark)

    Johansen, Søren

    2008-01-01

    The reduced rank regression model is a multivariate regression model with a coefficient matrix with reduced rank. The reduced rank regression algorithm is an estimation procedure, which estimates the reduced rank regression model. It is related to canonical correlations and involves calculating...
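The reduced rank regression estimator mentioned above can be computed from the ordinary least squares fit followed by an SVD of the fitted values; the construction below follows the standard textbook solution (project the OLS coefficients onto the leading right singular directions of the fitted values) and uses synthetic data:

```python
import numpy as np

def reduced_rank_regression(X, Y, rank):
    """Rank-constrained least squares: argmin ||Y - X B||_F s.t. rank(B) <= rank.
    Solution: project the OLS fit onto the top singular directions of X @ B_ols."""
    B_ols, *_ = np.linalg.lstsq(X, Y, rcond=None)
    _, _, Vt = np.linalg.svd(X @ B_ols, full_matrices=False)
    V = Vt[:rank].T                     # leading right singular vectors
    return B_ols @ V @ V.T

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
B_true = rng.normal(size=(5, 1)) @ rng.normal(size=(1, 4))   # rank-1 coefficients
Y = X @ B_true + 0.01 * rng.normal(size=(200, 4))
B_hat = reduced_rank_regression(X, Y, rank=1)
print(np.linalg.matrix_rank(B_hat))   # 1
```

With low noise, the rank-1 estimate recovers the true coefficient matrix closely while using far fewer effective parameters than unconstrained OLS.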

  13. Rank-based characterization of pollen assemblages collected by honey bees using a multi-locus metabarcoding approach1

    Science.gov (United States)

    Richardson, Rodney T.; Lin, Chia-Hua; Quijia, Juan O.; Riusech, Natalia S.; Goodell, Karen; Johnson, Reed M.

    2015-01-01

    Premise of the study: Difficulties inherent in microscopic pollen identification have resulted in limited implementation for large-scale studies. Metabarcoding, a relatively novel approach, could make pollen analysis less onerous; however, improved understanding of the quantitative capacity of various plant metabarcode regions and primer sets is needed to ensure that such applications are accurate and precise. Methods and Results: We applied metabarcoding, targeting the ITS2, matK, and rbcL loci, to characterize six samples of pollen collected by honey bees, Apis mellifera. Additionally, samples were analyzed by light microscopy. We found significant rank-based associations between the relative abundance of pollen types within our samples as inferred by the two methods. Conclusions: Our findings suggest metabarcoding data from plastid loci, as opposed to the ribosomal locus, are more reliable for quantitative characterization of pollen assemblages. Furthermore, multilocus metabarcoding of pollen may be more reliable than single-locus analyses, underscoring the need for discovering novel barcodes and barcode combinations optimized for molecular palynology. PMID:26649264
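The "significant rank-based associations" between the two characterization methods above are typically measured with Spearman's rank correlation, i.e. the Pearson correlation of the rank vectors. A sketch with hypothetical pollen-type abundances (not the study's data):

```python
import numpy as np

def spearman_rho(a, b):
    """Spearman rank correlation: Pearson correlation of the rank vectors.
    (Assumes no ties; ties would need average ranks.)"""
    ra = np.argsort(np.argsort(a)).astype(float)
    rb = np.argsort(np.argsort(b)).astype(float)
    return float(np.corrcoef(ra, rb)[0, 1])

# hypothetical relative abundances of six pollen types under two methods
microscopy  = [0.40, 0.25, 0.15, 0.10, 0.07, 0.03]
metabarcode = [0.35, 0.30, 0.12, 0.13, 0.06, 0.04]
print(spearman_rho(microscopy, metabarcode))
```

One adjacent pair of pollen types swaps order between the methods, so the correlation is high but below 1.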

  14. A new computational account of cognitive control over reinforcement-based decision-making: Modeling of a probabilistic learning task.

    Science.gov (United States)

    Zendehrouh, Sareh

    2015-11-01

    Recent work on decision-making field offers an account of dual-system theory for decision-making process. This theory holds that this process is conducted by two main controllers: a goal-directed system and a habitual system. In the reinforcement learning (RL) domain, the habitual behaviors are connected with model-free methods, in which appropriate actions are learned through trial-and-error experiences. However, goal-directed behaviors are associated with model-based methods of RL, in which actions are selected using a model of the environment. Studies on cognitive control also suggest that during processes like decision-making, some cortical and subcortical structures work in concert to monitor the consequences of decisions and to adjust control according to current task demands. Here a computational model is presented based on dual system theory and cognitive control perspective of decision-making. The proposed model is used to simulate human performance on a variant of probabilistic learning task. The basic proposal is that the brain implements a dual controller, while an accompanying monitoring system detects some kinds of conflict including a hypothetical cost-conflict one. The simulation results address existing theories about two event-related potentials, namely error related negativity (ERN) and feedback related negativity (FRN), and explore the best account of them. Based on the results, some testable predictions are also presented. Copyright © 2015 Elsevier Ltd. All rights reserved.

  15. Is Probabilistic Evidence a Source of Knowledge?

    Science.gov (United States)

    Friedman, Ori; Turri, John

    2015-01-01

    We report a series of experiments examining whether people ascribe knowledge for true beliefs based on probabilistic evidence. Participants were less likely to ascribe knowledge for beliefs based on probabilistic evidence than for beliefs based on perceptual evidence (Experiments 1 and 2A) or testimony providing causal information (Experiment 2B).…

  16. Complex step-based low-rank extended Kalman filtering for state-parameter estimation in subsurface transport models

    KAUST Repository

    El Gharamti, Mohamad; Hoteit, Ibrahim

    2014-01-01

    The accuracy of groundwater flow and transport model predictions highly depends on our knowledge of subsurface physical parameters. Assimilation of contaminant concentration data from shallow dug wells could help improve model behavior, eventually resulting in better forecasts. In this paper, we propose a joint state-parameter estimation scheme which efficiently integrates a low-rank extended Kalman filtering technique, namely the Singular Evolutive Extended Kalman (SEEK) filter, with the prominent complex-step method (CSM). The SEEK filter avoids the prohibitive computational burden of the Extended Kalman filter by updating the forecast along the directions of error growth only, called filter correction directions. CSM is used within the SEEK filter to efficiently compute model derivatives with respect to the state and parameters along the filter correction directions. CSM is derived using complex Taylor expansion and is second order accurate. It is proven to guarantee accurate gradient computations with zero numerical round-off errors, but requires complexifying the numerical code. We perform twin experiments to test the performance of the CSM-based SEEK for estimating the state and parameters of a subsurface contaminant transport model. We compare the efficiency and the accuracy of the proposed scheme with two standard finite difference-based SEEK filters as well as with the ensemble Kalman filter (EnKF). Assimilation results suggest that the use of the CSM in the context of the SEEK filter may provide up to 80% more accurate solutions when compared to standard finite difference schemes and is competitive with the EnKF, even providing more accurate results in certain situations. We analyze the results based on two different observation strategies. We also discuss the complexification of the numerical code and show that this could be efficiently implemented in the context of subsurface flow models. © 2013 Elsevier B.V.
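The core trick of the complex-step method (CSM) described above is that Im f(x + ih)/h approximates f'(x) with no subtractive cancellation, so the step h can be taken far below machine precision. A scalar illustration (the test function is the classic Squire-Trapp example, not the transport model):

```python
import numpy as np

def complex_step_derivative(f, x, h=1e-20):
    """First derivative via the complex-step method: f'(x) ~ Im f(x+ih)/h.
    No subtraction occurs, so there is no round-off cancellation even for
    step sizes far below machine epsilon."""
    return np.imag(f(x + 1j * h)) / h

# classic test function from the complex-step literature
f = lambda x: np.exp(x) / np.sqrt(np.sin(x) ** 3 + np.cos(x) ** 3)
x0 = 1.5
cs = complex_step_derivative(f, x0)
fd = (f(x0 + 1e-8) - f(x0 - 1e-8)) / 2e-8   # central finite difference
print(cs, fd)
```

The two estimates agree to many digits, but the finite difference degrades if its step is pushed smaller, while the complex step does not; this is why CSM gradients along the filter correction directions remain accurate.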

  18. A probabilistic approach of the Flash Flood Early Warning System (FF-EWS) in Catalonia based on radar ensemble generation

    Science.gov (United States)

    Velasco, David; Sempere-Torres, Daniel; Corral, Carles; Llort, Xavier; Velasco, Enrique

    2010-05-01

    probabilistic component to the FF-EWS. As a first step, we have incorporated the uncertainty in rainfall estimates and forecasts based on an ensemble of equiprobable rainfall scenarios. The presented study has focused on a number of rainfall events, and the performance of the FF-EWS was evaluated in terms of its ability to produce probabilistic hazard warnings for decision-making support.

  19. Probabilistic Seismic Hazard Assessment Method for Nonlinear Soil Sites based on the Hazard Spectrum of Bedrock Sites

    International Nuclear Information System (INIS)

    Hahm, Dae Gi; Seo, Jeong Moon; Choi, In Kil

    2011-01-01

    For the probabilistic safety assessment of nuclear power plants (NPPs) under seismic events, a rational probabilistic seismic hazard estimation should be performed. Generally, the probabilistic seismic hazard of an NPP site is represented by the uniform hazard spectrum (UHS) for a specific annual frequency. In most cases, since the attenuation equations are defined for bedrock sites, the standard attenuation laws cannot be applied to general soft-soil sites. Hence, for the probabilistic estimation of the seismic hazard of soft-soil sites, a methodology of probabilistic seismic hazard analysis (PSHA) coupled with nonlinear dynamic analyses of the soil column is required. Two methods are commonly used for site response analysis considering the nonlinearity of sites: one is deterministic and the other probabilistic. In the analysis of site response, there exist many uncertainty factors, such as the variation of the magnitude and frequency contents of the input ground motion, and the material properties of soil deposits. Hence, the adoption of the probabilistic method, which accounts for such uncertainty factors, is now recommended for the PSHA of soft soil deposits. In this study, we estimated the amplification factor of the surface of the soft soil deposits, considering the uncertainties of the input ground motions and the soil material properties. Then, we proposed a probabilistic methodology to evaluate the UHS of the soft soil site by multiplying the amplification factor by that of the bedrock site. The proposed method was applied to four typical target sites of KNGR and APR1400 NPP site categories
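The final step described above, multiplying a bedrock UHS by a probabilistically characterized amplification factor, can be sketched with Monte Carlo sampling. The lognormal amplification-factor parameters and spectral values below are hypothetical placeholders, not the KNGR/APR1400 results:

```python
import numpy as np

def soil_uhs(bedrock_uhs, af_median, af_sigma_ln, n_samples=10000, seed=0):
    """Soil-site uniform hazard spectrum as the bedrock UHS times a lognormally
    distributed amplification factor (hypothetical parameter values).
    Returns the mean and 84th-percentile soil spectra."""
    rng = np.random.default_rng(seed)
    # one lognormal AF sample per spectral point per Monte Carlo trial
    af = rng.lognormal(np.log(af_median), af_sigma_ln,
                       size=(n_samples, len(bedrock_uhs)))
    samples = af * np.asarray(bedrock_uhs)
    return samples.mean(axis=0), np.percentile(samples, 84, axis=0)

bedrock = [0.20, 0.35, 0.30, 0.15]    # spectral accelerations (g), 4 points
mean_soil, p84_soil = soil_uhs(bedrock, af_median=1.8, af_sigma_ln=0.3)
print(mean_soil)
```

The 84th percentile exceeds the mean at every spectral point, reflecting the right skew of the lognormal amplification factor; a design application would pick the fractile consistent with its target confidence level.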

  20. Ranking Support Vector Machine with Kernel Approximation

    Directory of Open Access Journals (Sweden)

    Kai Chen

    2017-01-01

    Full Text Available Learning to rank algorithm has become important in recent years due to its successful application in information retrieval, recommender system, and computational biology, and so forth. Ranking support vector machine (RankSVM) is one of the state-of-art ranking models and has been favorably used. Nonlinear RankSVM (RankSVM with nonlinear kernels) can give higher accuracy than linear RankSVM (RankSVM with a linear kernel) for complex nonlinear ranking problem. However, the learning methods for nonlinear RankSVM are still time-consuming because of the calculation of kernel matrix. In this paper, we propose a fast ranking algorithm based on kernel approximation to avoid computing the kernel matrix. We explore two types of kernel approximation methods, namely, the Nyström method and random Fourier features. Primal truncated Newton method is used to optimize the pairwise L2-loss (squared Hinge-loss) objective function of the ranking model after the nonlinear kernel approximation. Experimental results demonstrate that our proposed method gets a much faster training speed than kernel RankSVM and achieves comparable or better performance over state-of-the-art ranking algorithms.
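One of the two kernel approximations named above, random Fourier features, can be sketched in a few lines: an explicit feature map whose inner products approximate the RBF kernel, so a linear ranker on the mapped data behaves like a kernel ranker without ever forming the kernel matrix. The feature count and kernel width below are illustrative (this is the standard Rahimi-Recht construction, not the paper's exact implementation):

```python
import numpy as np

def random_fourier_features(X, n_features=500, gamma=1.0, seed=0):
    """Map X so that z(x) . z(y) approximates the RBF kernel
    exp(-gamma * ||x - y||^2) (random Fourier features)."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    W = rng.normal(scale=np.sqrt(2 * gamma), size=(d, n_features))
    b = rng.uniform(0, 2 * np.pi, size=n_features)
    return np.sqrt(2.0 / n_features) * np.cos(X @ W + b)

rng = np.random.default_rng(1)
X = rng.normal(size=(50, 3))
Z = random_fourier_features(X, n_features=2000, gamma=0.5)
K_exact = np.exp(-0.5 * np.sum((X[:, None] - X[None]) ** 2, axis=-1))
K_approx = Z @ Z.T
print(np.abs(K_exact - K_approx).max())
```

Training a linear RankSVM on `Z` costs O(n · n_features) per pass instead of the O(n²) kernel matrix, which is the speedup the abstract reports.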

  2. Management of the Area 5 Radioactive Waste Management Site using Decision-based, Probabilistic Performance Assessment Modeling

    International Nuclear Information System (INIS)

    Carilli, J.; Crowe, B.; Black, P.; Tauxe, J.; Stockton, T.; Catlett, K.; Yucel, V.

    2003-01-01

    Low-level radioactive waste from cleanup activities at the Nevada Test Site and from multiple sites across the U.S. Department of Energy (DOE) complex is disposed at two active Radioactive Waste Management Sites (RWMS) on the Nevada Test Site. These facilities, which are managed by the DOE National Nuclear Security Administration Nevada Site Office, were recently designated as one of two regional disposal centers and yearly volumes of disposed waste now exceed 50,000 m³ (>2 million ft³). To safely and cost-effectively manage the disposal facilities, the Waste Management Division of Environmental Management has implemented decision-based management practices using flexible and problem-oriented probabilistic performance assessment modeling. Deterministic performance assessments and composite analyses were completed originally for the Area 5 and Area 3 RWMSs located in, respectively, Frenchman Flat and Yucca Flat on the Nevada Test Site. These documents provide the technical bases for issuance of disposal authorization statements for continuing operation of the disposal facilities. Both facilities are now in a maintenance phase that requires testing of conceptual models, reduction of uncertainty, and site monitoring, all leading to eventual closure of the facilities and transition to long-term stewardship

  3. Abstract ID: 240 A probabilistic-based nuclear reaction model for Monte Carlo ion transport in particle therapy.

    Science.gov (United States)

    Maria Jose, Gonzalez Torres; Jürgen, Henniger

    2018-01-01

    In order to expand the Monte Carlo transport program AMOS to particle therapy applications, the ion module is being developed in the radiation physics group (ASP) at the TU Dresden. This module simulates the three main interactions of ions in matter for the therapy energy range: elastic scattering, inelastic collisions and nuclear reactions. The simulation of the elastic scattering is based on the Binary Collision Approximation and the inelastic collisions on the Bethe-Bloch theory. The nuclear reactions, which are the focus of the module, are implemented according to a probabilistic-based model developed in the group. The developed model uses probability density functions to sample the occurrence of a nuclear reaction given the initial energy of the projectile particle as well as the energy at which this reaction will take place. The particle is transported until the reaction energy is reached and then the nuclear reaction is simulated. This approach allows a fast evaluation of the nuclear reactions. The theory and application of the proposed model will be addressed in this presentation. The results of the simulation of a proton beam colliding with tissue will also be presented. Copyright © 2017.
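The sampling step described above, drawing the energy at which a nuclear reaction occurs from a probability density, can be illustrated with inverse-transform sampling from a tabulated CDF. The energy grid and the uniform toy CDF below are hypothetical placeholders, not actual cross-section data:

```python
import numpy as np

def sample_reaction_energy(cdf_grid, energy_grid, rng):
    """Inverse-transform sampling: draw the energy at which a nuclear reaction
    occurs from a tabulated CDF (toy data, not real cross sections). The
    particle would then be transported until this energy is reached."""
    u = rng.uniform()
    return np.interp(u, cdf_grid, energy_grid)

rng = np.random.default_rng(3)
energy_grid = np.linspace(100.0, 0.0, 101)   # MeV, projectile slowing down
cdf_grid = np.linspace(0.0, 1.0, 101)        # toy CDF: uniform in energy
samples = [sample_reaction_energy(cdf_grid, energy_grid, rng)
           for _ in range(1000)]
print(min(samples), max(samples))
```

A table lookup per history is what makes this kind of pre-tabulated probabilistic model fast compared with evaluating reaction physics at every transport step.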

  4. Generalized Fragility Relationships with Local Site Conditions for Probabilistic Performance-based Seismic Risk Assessment of Bridge Inventories

    Directory of Open Access Journals (Sweden)

    Sivathayalan S.

    2012-01-01

    Full Text Available The current practice of detailed seismic risk assessment cannot be easily applied to all the bridges in large transportation networks due to limited resources. This paper presents a new approach for seismic risk assessment of large bridge inventories in a city or national bridge network based on the framework of probabilistic performance-based seismic risk assessment. To account for the influences of local site effects, a procedure to generate site-specific hazard curves that includes seismic hazard microzonation information has been developed for seismic risk assessment of bridge inventories. Simulated ground motions compatible with the site-specific seismic hazard are used as input excitations in nonlinear time history analysis of representative bridges for calibration. A normalizing procedure to obtain generalized fragility relationships in terms of structural characteristic parameters of bridge span and size and longitudinal and transverse reinforcement ratios is presented. The seismic risk of bridges in a large inventory can then be easily evaluated using the normalized fragility relationships without the requirement of carrying out detailed nonlinear time history analysis.

  5. A Cutting Pattern Recognition Method for Shearers Based on Improved Ensemble Empirical Mode Decomposition and a Probabilistic Neural Network

    Directory of Open Access Journals (Sweden)

    Jing Xu

    2015-10-01

    Full Text Available In order to guarantee the stable operation of shearers and promote construction of an automatic coal mining working face, an online cutting pattern recognition method with high accuracy and speed based on Improved Ensemble Empirical Mode Decomposition (IEEMD) and Probabilistic Neural Network (PNN) is proposed. An industrial microphone is installed on the shearer and the cutting sound is collected as the recognition criterion to overcome the disadvantages of giant size, contact measurement and low identification rate of traditional detectors. To avoid end-point effects and get rid of undesirable intrinsic mode function (IMF) components in the initial signal, IEEMD is conducted on the sound. The end-point continuation based on the practical storage data is performed first to overcome the end-point effect. Next the average correlation coefficient, which is calculated by the correlation of the first IMF with others, is introduced to select essential IMFs. Then the energy and standard deviation of the remainder IMFs are extracted as features and PNN is applied to classify the cutting patterns. Finally, a simulation example, with an accuracy of 92.67%, and an industrial application prove the efficiency and correctness of the proposed method.
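The PNN classification stage above can be sketched as a Parzen-window classifier: each class's density is estimated with Gaussian kernels centered on its training samples, and the class with the largest estimated density wins. The two-dimensional "cutting sound" features below are synthetic placeholders, not real shearer data:

```python
import numpy as np

def pnn_classify(x, train_X, train_y, sigma=0.3):
    """Probabilistic neural network: Parzen-window class-conditional densities
    with a Gaussian kernel; predict the class of largest estimated density."""
    classes = np.unique(train_y)
    dens = [np.exp(-np.sum((train_X[train_y == c] - x) ** 2, axis=1)
                   / (2 * sigma ** 2)).mean() for c in classes]
    return classes[int(np.argmax(dens))]

# toy feature vectors for two cutting patterns: two well-separated clusters
rng = np.random.default_rng(2)
X0 = rng.normal(loc=[0, 0], scale=0.2, size=(20, 2))
X1 = rng.normal(loc=[1, 1], scale=0.2, size=(20, 2))
train_X = np.vstack([X0, X1])
train_y = np.array([0] * 20 + [1] * 20)
print(pnn_classify(np.array([0.9, 1.1]), train_X, train_y))
```

PNN has no iterative training phase, which is part of why it suits the online recognition setting the abstract targets; the smoothing width `sigma` is the one tunable parameter.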

  6. Phenomena identification ranking table and knowledge base gaps and needs for the modular high-temperature gas-cooled reactor

    International Nuclear Information System (INIS)

    Tokuhiro, Akira; Potirniche, Gabriel; Rink, Karl

    2009-01-01

    The U.S. is developing a modular high-temperature gas-cooled reactor (MHTGR) under the Next Generation Nuclear Plant (NGNP), also known as the Very High Temperature Reactor (VHTR). The generic MHTGR is a graphite-moderated, gas-cooled reactor (GCR) with either a prismatic modular (block-type, PMR) or pebble-bed (PBR) core configuration. The pebble-bed design requires new attention with respect to neutronics, materials, thermal hydraulics, safety and licensing relative to the set of phenomena and engineering analyses associated with the current fleet of legacy LWRs. In fact, the knowledge and experience base on gas reactors is small in comparison to that for LWRs. There is a dated body of knowledge on GCRs from some 25+ years ago; recently there has been renewed interest. Thus, in the present design and development phase of the NGNP/VHTR, there are relevant thermohydraulic safety issues surrounding the MHTGR, with issues foremost impacting the design review process. A common phenomenon for both PMR and PBR core designs is that of 'graphite dust' and its interaction and transport with potential fission products (FP) that may be present within the graphite and subsequently in the primary system. The nature of the graphite and FPs, when circulated or transported in the primary system, and possibly beyond, is of concern as a potentially relevant 'source term' (radionuclide inventory) of the MHTGR. Based on NUREG/CR-6944, Volumes 1-5, we briefly describe the state-of-the-art knowledge base on graphite dust and FP transport with respect to the anticipated design of the MHTGR. In addition, from the Phenomena Identification and Ranking Tables (PIRTs) developed in these reports, we concurrently identify and describe 'gaps and needs' of the knowledge base. That is, we also present the knowledge base gaps and needs with respect to the following: 1) R and D needs relative to PIRTs, 2) (experimental) database needs relative to PIRTs, and 3) simulation and modeling

  7. Probabilistic numerical discrimination in mice.

    Science.gov (United States)

    Berkay, Dilara; Çavdaroğlu, Bilgehan; Balcı, Fuat

    2016-03-01

    Previous studies showed that both human and non-human animals can discriminate between different quantities (i.e., time intervals, numerosities) with a limited level of precision due to their endogenous/representational uncertainty. In addition, other studies have shown that subjects can modulate their temporal categorization responses adaptively by incorporating information gathered regarding probabilistic contingencies into their time-based decisions. Despite the psychophysical similarities between the interval timing and nonverbal counting functions, the sensitivity of count-based decisions to probabilistic information remains an unanswered question. In the current study, we investigated whether exogenous probabilistic information can be integrated into numerosity-based judgments by mice. In the task employed in this study, reward was presented either after a few (i.e., 10) or many (i.e., 20) lever presses, the last of which had to be emitted on the lever associated with the corresponding trial type. In order to investigate the effect of probabilistic information on performance in this task, we manipulated the relative frequency of different trial types across different experimental conditions. We evaluated the behavioral performance of the animals under models that differed in terms of their assumptions regarding the cost of responding (e.g., logarithmically increasing vs. no response cost). Our results showed for the first time that mice could adaptively modulate their count-based decisions based on the experienced probabilistic contingencies in directions predicted by optimality.

  8. Ranking of concern, based on environmental indexes, for pharmaceutical and personal care products: an application to the Spanish case.

    Science.gov (United States)

    Ortiz de García, Sheyla; Pinto, Gilberto Pinto; García-Encina, Pedro A; Irusta Mata, Rubén I

    2013-11-15

    A wide range of Pharmaceuticals and Personal Care Products (PPCPs) are present in the environment, and many of their adverse effects are unknown. The emergence of new compounds and changes in regulations have led to dynamic studies of occurrence, impact and treatment, which consider geographical areas and trends in consumption and innovation in the pharmaceutical industry. A Quantitative Structure-Activity Relationship ((Q)SAR) study was performed to assess the possible adverse effects of ninety-six PPCPs and metabolites with negligible experimental data and to establish a ranking of concern, supported by the EPA EPI Suite™ interface. The environmental and toxicological indexes evaluated were persistence (P), bioaccumulation (B) and toxicity (T) (extensive), and occurrence in Spanish aquatic environments (O) (intensive). The most hazardous characteristics in the largest number of compounds were generated by the P index, followed by the T and B indexes. A high number of metabolites have a concern score equal to or greater than that of their parent compounds. Three PBT and OPBT rankings of concern were proposed using the total and partial ranking methods (supported by a Hasse diagram) of the Decision Analysis by Ranking Techniques (DART) tool, which was recently recommended by the European Commission. A sensitivity analysis of the relative weights of these indexes was conducted. Hormones, antidepressants (and their metabolites), blood lipid regulators and all of the personal care products considered in this study were at the highest levels of risk according to the PBT and OPBT total rankings. Furthermore, when the OPBT partial ranking was performed, X-ray contrast media, H2 blockers and some antibiotics were included at the highest level of concern. It is important to improve and incorporate useful indexes for the predicted environmental impact of PPCPs and metabolites and thus focus experimental analysis on the compounds that require

  9. Probabilistic Tsunami Hazard Analysis

    Science.gov (United States)

    Thio, H. K.; Ichinose, G. A.; Somerville, P. G.; Polet, J.

    2006-12-01

    The recent tsunami disaster caused by the 2004 Sumatra-Andaman earthquake has focused our attention on the hazard posed by large earthquakes that occur under water, in particular subduction zone earthquakes, and the tsunamis that they generate. Even though these kinds of events are rare, the very large loss of life and material destruction caused by this earthquake warrant a significant effort towards the mitigation of the tsunami hazard. For ground motion hazard, Probabilistic Seismic Hazard Analysis (PSHA) has become standard practice in the evaluation and mitigation of seismic hazard to populations, in particular with respect to structures, infrastructure and lifelines. Its ability to condense the complexities and variability of seismic activity into a manageable set of parameters greatly facilitates the design of effective seismic-resistant buildings as well as the planning of infrastructure projects. Probabilistic Tsunami Hazard Analysis (PTHA) achieves the same goal for hazards posed by tsunami. There are great advantages to implementing such a method to evaluate the total risk (seismic and tsunami) to coastal communities. The method that we have developed is based on the traditional PSHA and is therefore completely consistent with standard seismic practice. Because of the strong dependence of tsunami wave heights on bathymetry, we use a full tsunami waveform computation in lieu of the attenuation relations that are common in PSHA. By pre-computing and storing the tsunami waveforms at points along the coast generated for sets of subfaults that comprise larger earthquake faults, we can efficiently synthesize tsunami waveforms for any slip distribution on those faults by summing the individual subfault tsunami waveforms (weighted by their slip). This efficiency makes it feasible to use Green's function summation in lieu of attenuation relations to provide very accurate estimates of tsunami height for probabilistic calculations, where one typically computes
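    The Green's-function summation described above is a plain linear superposition and can be sketched in a few lines; the waveform arrays and slip values below are purely hypothetical:

```python
import numpy as np

def synthesize_tsunami(subfault_waveforms, slips):
    """Sum precomputed unit-slip subfault waveforms (Green's functions),
    weighted by slip, to get the waveform for an arbitrary slip distribution."""
    G = np.asarray(subfault_waveforms, dtype=float)  # (n_subfaults, n_samples)
    s = np.asarray(slips, dtype=float)               # slip on each subfault (m)
    return s @ G                                     # linear superposition

# Two hypothetical unit-slip waveforms recorded at one coastal point:
G = [[0.0, 0.1, 0.3, 0.2],
     [0.0, 0.0, 0.2, 0.4]]
wave = synthesize_tsunami(G, [2.0, 1.0])  # 2 m of slip on one subfault, 1 m on the other
```

    Linearity is what makes the precomputation pay off: the expensive wave-propagation run is done once per subfault, after which each candidate rupture costs only a weighted sum.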

  10. Universal scaling in sports ranking

    International Nuclear Information System (INIS)

    Deng Weibing; Li Wei; Cai Xu; Bulou, Alain; Wang Qiuping A

    2012-01-01

    Ranking is a ubiquitous phenomenon in human society. On the web pages of Forbes, one may find all kinds of rankings, such as the world's most powerful people, the world's richest people, the highest-earning tennis players, and so on. Here we study a specific kind: sports ranking systems in which players' scores and/or prize money are accrued based on their performances in different matches. By investigating 40 data samples which span 12 different sports, we find that the distributions of scores and/or prize money follow universal power laws, with exponents nearly identical for most sports. In order to understand the origin of this universal scaling we focus on the tennis ranking systems. By checking the data we find that, for any pair of players, the probability that the higher-ranked player tops the lower-ranked opponent depends on the rank difference between the pair, and this dependence is well fitted by a sigmoidal function. Using this feature, we propose a simple toy model which can simulate the competition of players in different matches. The simulations yield results consistent with the empirical findings. Extensive simulation studies indicate that the model is quite robust with respect to modifications of some parameters. (paper)
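    A toy model of this kind, in which the better-ranked player wins with a sigmoidal probability of the rank difference, can be sketched as follows. The steepness parameter k, the match count and the scoring rule here are invented for illustration, not taken from the paper:

```python
import math
import random

def win_prob(rank_hi, rank_lo, k=0.05):
    """Sigmoidal probability that the higher-ranked (smaller rank number)
    player beats the lower-ranked one; k is a hypothetical steepness."""
    return 1.0 / (1.0 + math.exp(-k * (rank_lo - rank_hi)))

def simulate_season(n_players=100, n_matches=5000, seed=42):
    """Accrue one scoring point per win over a season of random pairings."""
    rng = random.Random(seed)
    scores = [0] * n_players  # player i holds rank i + 1
    for _ in range(n_matches):
        a, b = rng.sample(range(n_players), 2)
        hi, lo = min(a, b), max(a, b)
        winner = hi if rng.random() < win_prob(hi + 1, lo + 1) else lo
        scores[winner] += 1
    return scores
```

    Repeated over many seasons, accumulated scores in such models concentrate on the best-ranked players, which is the qualitative mechanism the abstract invokes for the heavy-tailed score distributions.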

  11. Probabilistic short-term forecasting of eruption rate at Kīlauea Volcano using a physics-based model

    Science.gov (United States)

    Anderson, K. R.

    2016-12-01

    Deterministic models of volcanic eruptions yield predictions of future activity conditioned on uncertainty in the current state of the system. Physics-based eruption models are well-suited for deterministic forecasting as they can relate magma physics with a wide range of observations. Yet, physics-based eruption forecasting is strongly limited by an inadequate understanding of volcanic systems, and the need for eruption models to be computationally tractable. At Kīlauea Volcano, Hawaii, episodic depressurization-pressurization cycles of the magma system generate correlated, quasi-exponential variations in ground deformation and surface height of the active summit lava lake. Deflations are associated with reductions in eruption rate, or even brief eruptive pauses, and thus partly control lava flow advance rates and associated hazard. Because of the relatively well-understood nature of Kīlauea's shallow magma plumbing system, and because more than 600 of these events have been recorded to date, they offer a unique opportunity to refine a physics-based effusive eruption forecasting approach and apply it to lava eruption rates over short (hours to days) time periods. A simple physical model of the volcano ascribes observed data to temporary reductions in magma supply to an elastic reservoir filled with compressible magma. This model can be used to predict the evolution of an ongoing event, but because the mechanism that triggers events is unknown, event durations are modeled stochastically from previous observations. A Bayesian approach incorporates diverse data sets and prior information to simultaneously estimate uncertain model parameters and future states of the system. Forecasts take the form of probability distributions for eruption rate or cumulative erupted volume at some future time. 
Results demonstrate the significant uncertainties that still remain even for short-term eruption forecasting at a well-monitored volcano - but also the value of a physics-based

  12. A new method for class prediction based on signed-rank algorithms applied to Affymetrix® microarray experiments

    Directory of Open Access Journals (Sweden)

    Vassal Aurélien

    2008-01-01

    Background: The huge amount of data generated by DNA chips is a powerful basis for classifying various pathologies. However, the constant evolution of microarray technology makes it difficult to mix data from different chip types for class prediction of limited sample populations. Affymetrix® technology provides both a quantitative fluorescence signal and a decision (detection call: absent or present) based on signed-rank algorithms applied to several hybridization repeats of each gene, with a per-chip normalization. We developed a new prediction method for class belonging based on the detection call only from recent Affymetrix chip types. Biological data were obtained by hybridization on U133A, U133B and U133Plus 2.0 microarrays of purified normal B cells and cells from three independent groups of multiple myeloma (MM) patients. Results: After a call-based data reduction step to filter out non-class-discriminative probe sets, the gene list obtained was reduced to a predictor with correction for multiple testing by iterative deletion of probe sets that sequentially improve inter-class comparisons and their significance. The error rate of the method was determined using leave-one-out and 5-fold cross-validation. It was successfully applied to (i) determine a sex predictor with the normal donor group, classifying gender with no error in all patient groups except for male MM samples with a Y chromosome deletion, (ii) predict the immunoglobulin light and heavy chains expressed by the malignant myeloma clones of the validation group and (iii) predict sex, light and heavy chain nature for every new patient. Finally, this method was shown to be powerful when compared to the popular classification method Prediction Analysis of Microarray (PAM). Conclusion: This normalization-free method is routinely used for quality control and correction of collection errors in patient reports to clinicians. It can be easily extended to multiple class prediction suitable with

  13. Local constructions of gender-based violence amongst IDPs in northern Uganda: analysis of archival data collected using a gender- and age-segmented participatory ranking methodology.

    Science.gov (United States)

    Ager, Alastair; Bancroft, Carolyn; Berger, Elizabeth; Stark, Lindsay

    2018-01-01

    Gender-based violence (GBV) is a significant problem in conflict-affected settings. Understanding local constructions of such violence is crucial to developing preventive and responsive interventions to address this issue. This study reports on a secondary analysis of archived data collected as part of formative qualitative work - using a group participatory ranking methodology (PRM) - informing research on the prevalence of GBV amongst IDPs in northern Uganda in 2006. Sixty-four PRM group discussions were held with women, with men, with girls (aged 14 to 18 years), and with boys (aged 14 to 18 years) selected on a randomized basis across four internally displaced persons (IDP) camps in Lira District. Discussions elicited problems facing women in the camps, and - through structured participatory methods - consensus ranking of their importance and narrative accounts explaining these judgments. Amongst forms of GBV faced by women, rape was ranked as the greatest concern amongst participants (with a mean problem rank of 3.4), followed by marital rape (mean problem rank of 4.5) and intimate partner violence (mean problem rank of 4.9). Girls ranked all forms of GBV as higher priority concerns than other participants. Discussions indicated that these forms of GBV were generally considered normalized within the camp. Gender roles and power, economic deprivation, and physical and social characteristics of the camp setting emerged as key explanatory factors in accounts of GBV prevalence, although these played out in different ways with respect to differing forms of violence. All groups acknowledged GBV to represent a significant threat - among other major concerns such as transportation, water, shelter, food and security - for women residing in the camps. Given evidence of the significantly higher risk in the camp of intimate partner violence and marital rape, the relative prominence of the issue of rape in all rankings suggests normalization of violence within the home

  14. Ranking Baltic States Researchers

    Directory of Open Access Journals (Sweden)

    Gyula Mester

    2017-10-01

    In this article, the best 10 Lithuanian, Latvian and Estonian researchers from several disciplines are ranked using the h-index and the total number of citations. Such a list may be formed from the h-index and total citation counts given in the Web of Science, Scopus, the Publish or Perish program and the Google Scholar database. Data for the first 10 researchers are presented. Google Scholar is the most complete; therefore, as a single indicator, the h-index calculated from Google Scholar may be a good and simple choice. The author chose the Google Scholar database as it is the broadest one.
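    The h-index underlying this ranking is straightforward to compute from a researcher's citation counts; a minimal sketch follows (the citation numbers are invented):

```python
def h_index(citations):
    """Largest h such that the researcher has h papers cited at least h times each."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for i, c in enumerate(ranked, start=1):
        if c >= i:  # the i-th most-cited paper still has >= i citations
            h = i
        else:
            break
    return h

h_index([10, 8, 5, 4, 3])  # four papers have at least 4 citations each
```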

  15. Unique Nanoparticle Optical Properties Confound Fluorescent Based Assays Widely Employed in Their In Vitro Toxicity Screening and Ranking

    Science.gov (United States)

    Nanoparticles (NPs) are novel materials having at least one dimension less than 100 nm and display unique physicochemical properties due to their nanoscale size. An emphasis has been placed on developing high throughput screening (HTS) assays to characterize and rank the toxiciti...

  16. MATILDA: A Military Laser Range Safety Tool Based on Probabilistic Risk Assessment (PRA) Techniques

    Science.gov (United States)

    2014-08-01

    [Front-matter extract only: the record reproduces the report's table of contents and figure list, including sections "UK Need for a PRA-Based Approach" and "A Risk-Based Approach", and figures "MATILDA Coordinate Transformations", "Geocentric and MICS Coordinates", "Star-Shaped Condition" and "Points of Closest Approach".]

  17. A comprehensive physiologically based pharmacokinetic knowledgebase and web-based interface for rapid model ranking and querying

    Science.gov (United States)

    Published physiologically based pharmacokinetic (PBPK) models from peer-reviewed articles are often well-parameterized, thoroughly-vetted, and can be utilized as excellent resources for the construction of models pertaining to related chemicals. Specifically, chemical-specific pa...

  18. Probabilistic metric spaces

    CERN Document Server

    Schweizer, B

    2005-01-01

    Topics include special classes of probabilistic metric spaces, topologies, and several related structures, such as probabilistic normed and inner-product spaces. 1983 edition, updated with 3 new appendixes. Includes 17 illustrations.

  19. Risk-based probabilistic approach to assess the impact of false mussel invasions on farmed hard clams.

    Science.gov (United States)

    Liao, Chung-Min; Ju, Yun-Ru; Chio, Chia-Pin; Chen, Wei-Yu

    2010-02-01

    The purpose of this article is to provide a risk-based predictive model to assess the impact of false mussel Mytilopsis sallei invasions on hard clam Meretrix lusoria farms in the southwestern region of Taiwan. The actual spread of the invasive false mussel was predicted by using analytical models based on advection-diffusion and gravity models. The proportion of hard clams colonized and the intensity of infestation by false mussel were used to characterize risk estimates. A mortality model was parameterized to assess hard clam mortality risk characterized by false mussel density and infestation intensity. The published data were reanalyzed to parameterize a predictive threshold model, described by a cumulative Weibull distribution function, that can be used to estimate the thresholds at which the proportion of hard clams colonized and the infestation intensity are exceeded. Results indicated that the infestation thresholds were 2-17 ind clam⁻¹ for adult hard clams and 4 ind clam⁻¹ for nursery hard clams. The average colonization thresholds were estimated to be 81-89% for cultivated and nursery hard clam farms, respectively. Our results indicated that the false mussel density and infestation intensity causing 50% hard clam mortality were estimated to be 2,812 ind m⁻² and 31 ind clam⁻¹, respectively. This study further indicated that hard clam farms close to the coastal area have at least a 50% probability of 43% mortality caused by infestation. This study highlighted that a probabilistic risk-based framework characterized by probability distributions and risk curves is an effective representation of scientific assessments for farmed hard clam in response to the nonnative false mussel invasion.
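    A cumulative Weibull threshold model of the kind fitted in the study can be sketched as below; the scale and shape values in the test are placeholders, not the paper's fitted parameters:

```python
import math

def weibull_cdf(x, scale, shape):
    """Cumulative Weibull distribution: fraction of the population whose
    response threshold is exceeded at intensity x."""
    if x <= 0:
        return 0.0
    return 1.0 - math.exp(-((x / scale) ** shape))

def intensity_for_fraction(p, scale, shape):
    """Invert the CDF: intensity at which a fraction p exceeds the threshold."""
    return scale * (-math.log(1.0 - p)) ** (1.0 / shape)
```

    The inverse function is how a statement like "the intensity causing 50% mortality" is read off a fitted curve.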

  20. NetGen: a novel network-based probabilistic generative model for gene set functional enrichment analysis.

    Science.gov (United States)

    Sun, Duanchen; Liu, Yinliang; Zhang, Xiang-Sun; Wu, Ling-Yun

    2017-09-21

    High-throughput experimental techniques have been dramatically improved and widely applied in the past decades. However, biological interpretation of high-throughput experimental results, such as differential expression gene sets derived from microarray or RNA-seq experiments, is still a challenging task. Gene Ontology (GO) is commonly used in functional enrichment studies. The GO terms identified via current functional enrichment analysis tools often contain direct parent or descendant terms in the GO hierarchical structure. Such highly redundant terms make it difficult for users to analyze the underlying biological processes. In this paper, a novel network-based probabilistic generative model, NetGen, was proposed to perform the functional enrichment analysis. An additional protein-protein interaction (PPI) network was explicitly used to assist the identification of significantly enriched GO terms. NetGen achieved superior performance over the existing methods in simulation studies. The effectiveness of NetGen was explored further on four real datasets. Notably, several GO terms which were not directly linked with the active gene list for each disease were identified. These terms were closely related to the corresponding diseases when checked against the curated literature. NetGen has been implemented in the R package CopTea, publicly available at GitHub ( http://github.com/wulingyun/CopTea/ ). Our procedure leads to a more reasonable and interpretable result of the functional enrichment analysis. As a novel term-combination-based functional enrichment analysis method, NetGen is complementary to current individual-term-based methods, and can help to explore the underlying pathogenesis of complex diseases.
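    For contrast with NetGen's term-combination approach, the classical individual-term test it is compared against is a one-sided hypergeometric test computed per GO term, sketched here (the gene counts are invented; this is the baseline method, not NetGen itself):

```python
from math import comb

def hypergeom_enrichment_p(n_universe, n_term, n_list, n_overlap):
    """One-sided hypergeometric p-value: the probability of seeing at least
    n_overlap term-annotated genes in a list of n_list genes drawn from a
    universe of n_universe genes, of which n_term carry the annotation."""
    total = comb(n_universe, n_list)
    return sum(comb(n_term, k) * comb(n_universe - n_term, n_list - k)
               for k in range(n_overlap, min(n_term, n_list) + 1)) / total
```

    A small p-value for a term means the overlap with the gene list is unlikely under random sampling, which is the usual criterion for calling the term enriched.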

  1. Use of raster-based data layers to model spatial variation of seismotectonic data in probabilistic seismic hazard assessment

    Science.gov (United States)

    Zolfaghari, Mohammad R.

    2009-07-01

    Recent achievements in computer and information technology have provided the necessary tools to extend the application of probabilistic seismic hazard mapping from its traditional engineering use to many other applications. Examples of such applications are risk mitigation, disaster management, post-disaster recovery planning, and catastrophe loss estimation and risk management. Due to the lack of proper knowledge of the factors controlling seismic hazards, there are always uncertainties associated with all steps involved in developing and using seismic hazard models. While some of these uncertainties can be controlled by more accurate and reliable input data, the majority of the data and assumptions used in seismic hazard studies remain highly uncertain and contribute to the uncertainty of the final results. In this paper a new methodology for the assessment of seismic hazard is described. The proposed approach provides a practical means of better capturing spatial variations in seismological and tectonic characteristics, which allows better treatment of their uncertainties. In the proposed approach, GIS raster-based data models are used to represent geographical features in a cell-based system. The cell-based source model proposed in this paper provides a framework for implementing many geographically referenced seismotectonic factors into seismic hazard modelling. Examples of such components are seismic source boundaries, rupture geometry, seismic activity rate, focal depth and the choice of attenuation functions. The proposed methodology provides improvements in several aspects of the standard analytical tools currently being used for assessment and mapping of regional seismic hazard. It makes the best use of recent advances in both software and hardware, and is well structured to be implemented using conventional GIS tools.

  2. Probabilistic active recognition of multiple objects using Hough-based geometric matching features

    CSIR Research Space (South Africa)

    Govender, N

    2015-01-01

    be recognized simultaneously, and occlusion and clutter (through distracter objects) are common. We propose a representation for object viewpoints using Hough transform based geometric matching features, which are robust in such circumstances. We show how...

  3. Dataset for Probabilistic estimation of residential air exchange rates for population-based exposure modeling

    Data.gov (United States)

    U.S. Environmental Protection Agency — This dataset provides the city-specific air exchange rate measurements, modeled, literature-based as well as housing characteristics. This dataset is associated with...

  4. Delineating probabilistic species pools in ecology and biogeography

    OpenAIRE

    Karger, Dirk Nikolaus; Cord, Anna F; Kessler, Michael; Kreft, Holger; Kühn, Ingolf; Pompe, Sven; Sandel, Brody; Sarmento Cabral, Juliano; Smith, Adam B; Svenning, Jens-Christian; Tuomisto, Hanna; Weigelt, Patrick; Wesche, Karsten

    2016-01-01

    Aim: To provide a mechanistic and probabilistic framework for defining the species pool based on species-specific probabilities of dispersal, environmental suitability and biotic interactions within a specific temporal extent, and to show how probabilistic species pools can help disentangle the geographical structure of different community assembly processes. Innovation: Probabilistic species pools provide an improved species pool definition based on probabilities in conjuncti...
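    Under the simplifying assumption that the three assembly filters act independently, a species' pool-membership probability is the product of its dispersal, environmental-suitability and biotic-interaction probabilities, and the expected pool size is the sum of those products. A minimal sketch (species names and probabilities are invented):

```python
def membership_probability(p_dispersal, p_suitability, p_biotic):
    """Joint probability that a species belongs to the site's pool,
    assuming the three assembly filters are independent."""
    return p_dispersal * p_suitability * p_biotic

def probabilistic_pool(filters_by_species):
    """Per-species membership probabilities and the expected pool size."""
    probs = {sp: membership_probability(*ps)
             for sp, ps in filters_by_species.items()}
    return probs, sum(probs.values())

probs, expected_size = probabilistic_pool({
    "sp_a": (1.0, 0.5, 1.0),   # reaches the site; habitat half-suitable
    "sp_b": (0.2, 0.5, 0.5),   # dispersal-limited and biotically filtered
})
```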

  5. From exemplar to grammar: a probabilistic analogy-based model of language learning.

    Science.gov (United States)

    Bod, Rens

    2009-07-01

    While rules and exemplars are usually viewed as opposites, this paper argues that they form end points of the same distribution. By representing both rules and exemplars as (partial) trees, we can take into account the fluid middle ground between the two extremes. This insight is the starting point for a new theory of language learning that is based on the following idea: If a language learner does not know which phrase-structure trees should be assigned to initial sentences, s/he allows (implicitly) for all possible trees and lets linguistic experience decide which is the "best" tree for each sentence. The best tree is obtained by maximizing "structural analogy" between a sentence and previous sentences, which is formalized by the most probable shortest combination of subtrees from all trees of previous sentences. Corpus-based experiments with this model on the Penn Treebank and the CHILDES database indicate that it can learn both exemplar-based and rule-based aspects of language, ranging from phrasal verbs to auxiliary fronting. By having learned the syntactic structures of sentences, we have also learned the grammar implicit in these structures, which can in turn be used to produce new sentences. We show that our model mimics children's language development from item-based constructions to abstract constructions, and that the model can simulate some of the errors made by children in producing complex questions. Copyright © 2009 Cognitive Science Society, Inc.

  6. A probabilistic-based approach to monitoring tool wear state and assessing its effect on workpiece quality in nickel-based alloys

    Science.gov (United States)

    Akhavan Niaki, Farbod

    The objective of this research is first to investigate the applicability and advantage of statistical state estimation methods for predicting tool wear in machining nickel-based superalloys over deterministic methods, and second to study the effects of cutting tool wear on the quality of the part. Nickel-based superalloys are among those classes of materials that are known as hard-to-machine alloys. These materials exhibit a unique combination of maintaining their strength at high temperature and have high resistance to corrosion and creep. These unique characteristics make them an ideal candidate for harsh environments like combustion chambers of gas turbines. However, the same characteristics that make nickel-based alloys suitable for aggressive conditions introduce difficulties when machining them. High strength and low thermal conductivity accelerate the cutting tool wear and increase the possibility of the in-process tool breakage. A blunt tool nominally deteriorates the surface integrity and damages quality of the machined part by inducing high tensile residual stresses, generating micro-cracks, altering the microstructure or leaving a poor roughness profile behind. As a consequence in this case, the expensive superalloy would have to be scrapped. The current dominant solution for industry is to sacrifice the productivity rate by replacing the tool in the early stages of its life or to choose conservative cutting conditions in order to lower the wear rate and preserve workpiece quality. Thus, monitoring the state of the cutting tool and estimating its effects on part quality is a critical task for increasing productivity and profitability in machining superalloys. This work aims to first introduce a probabilistic-based framework for estimating tool wear in milling and turning of superalloys and second to study the detrimental effects of functional state of the cutting tool in terms of wear and wear rate on part quality. In the milling operation, the

  7. A probabilistic model for robust localization based on a binaural auditory front-end

    NARCIS (Netherlands)

    May, T.; Par, van de S.L.J.D.E.; Kohlrausch, A.G.

    2011-01-01

    Although extensive research has been done in the field of machine-based localization, the degrading effect of reverberation and the presence of multiple sources on localization performance has remained a major problem. Motivated by the ability of the human auditory system to robustly analyze complex

  8. Students’ difficulties in probabilistic problem-solving

    Science.gov (United States)

    Arum, D. P.; Kusmayadi, T. A.; Pramudya, I.

    2018-03-01

    There are many errors that can be identified when students solve mathematics problems, particularly probabilistic problems. The present study aims to investigate students’ difficulties in solving probabilistic problems, focusing on analyzing and describing students’ errors during problem solving. This research used the qualitative method with a case study strategy. The subjects in this research were ten students of 9th grade selected by purposive sampling. Data in this research comprise students’ probabilistic problem-solving results and recorded interviews regarding students’ difficulties in solving the problems. Those data were analyzed descriptively using Miles and Huberman’s steps. The results show that students’ difficulties in solving probabilistic problems can be divided into three categories. The first relates to difficulties in understanding the probabilistic problem. The second concerns difficulties in choosing and using appropriate strategies for solving the problem. The third involves difficulties with the computational process in solving the problem. Based on these results, it seems that students still have difficulties in solving probabilistic problems, which means that they have not yet been able to use their knowledge and abilities to respond to probabilistic problems. Therefore, it is important for mathematics teachers to plan probabilistic learning which could optimize students’ probabilistic thinking ability.

  9. On the security of a novel probabilistic signature based on bilinear square Diffie-Hellman problem and its extension.

    Science.gov (United States)

    Zhao, Zhenguo; Shi, Wenbo

    2014-01-01

    Probabilistic signature schemes have been widely used in modern electronic commerce since they provide integrity, authenticity, and nonrepudiation. Recently, Wu and Lin proposed a novel probabilistic signature (PS) scheme using the bilinear square Diffie-Hellman (BSDH) problem. They also extended it to a universal designated verifier signature (UDVS) scheme. In this paper, we analyze the security of Wu et al.'s PS scheme and UDVS scheme. Through concrete attacks, we demonstrate that neither of their schemes is unforgeable. The security analysis shows that their schemes are not suitable for practical applications.

  10. Linear Subspace Ranking Hashing for Cross-Modal Retrieval.

    Science.gov (United States)

    Li, Kai; Qi, Guo-Jun; Ye, Jun; Hua, Kien A

    2017-09-01

    Hashing has attracted a great deal of research in recent years due to its effectiveness for the retrieval and indexing of large-scale high-dimensional multimedia data. In this paper, we propose a novel ranking-based hashing framework that maps data from different modalities into a common Hamming space where the cross-modal similarity can be measured using Hamming distance. Unlike existing cross-modal hashing algorithms where the learned hash functions are binary space partitioning functions, such as the sign and threshold function, the proposed hashing scheme takes advantage of a new class of hash functions closely related to rank correlation measures, which are known to be scale-invariant, numerically stable, and highly nonlinear. Specifically, we jointly learn two groups of linear subspaces, one for each modality, so that features' ranking orders in different linear subspaces maximally preserve the cross-modal similarities. We show that the ranking-based hash function has a natural probabilistic approximation which transforms the original highly discontinuous optimization problem into one that can be efficiently solved using simple gradient descent algorithms. The proposed hashing framework is also flexible in the sense that the optimization procedures are not tied to any specific form of loss function (as is typical for existing cross-modal hashing methods); rather, different loss functions can be accommodated with minimal changes to the learning steps. We demonstrate through extensive experiments on four widely-used real-world multimodal datasets that the proposed cross-modal hashing method can achieve competitive performance against several state-of-the-art methods with only moderate training and testing time.
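    The flavor of a rank-based hash function can be illustrated with a winner-take-all-style sketch: each linear subspace contributes one code symbol, the index of the largest projection, so the code depends only on ranking orders and is invariant to positive scaling. The subspace matrices below are toy placeholders, not the learned subspaces of the paper:

```python
import numpy as np

def ranking_hash(x, subspaces):
    """One code symbol per linear subspace: the index of the largest
    projection of x onto that subspace's basis vectors."""
    x = np.asarray(x, dtype=float)
    return tuple(int(np.argmax(W @ x)) for W in subspaces)

def hamming_distance(code_a, code_b):
    """Number of positions at which two codes disagree."""
    return sum(a != b for a, b in zip(code_a, code_b))

# Two toy 2-D "subspaces", each given as a matrix of basis vectors:
subspaces = [np.array([[1.0, 0.0], [0.0, 1.0]]),
             np.array([[1.0, 1.0], [1.0, -1.0]])]
```

    Because only the argmax of the projections matters, multiplying a feature vector by any positive constant leaves its code unchanged, which is the scale-invariance property the abstract highlights.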

  11. Probabilistic risk assessment of diuron and prometryn in the Gwydir River catchment, Australia, with the input of a novel bioassay based on algal growth.

    Science.gov (United States)

    Shi, Yajuan; Burns, Mitchell; Ritchie, Raymond J; Crossan, Angus; Kennedy, Ivan R

    2014-08-01

    A probabilistic risk assessment of the selected herbicides (diuron and prometryn) in the Gwydir River catchment was conducted, with input of the EC₅₀ values derived from both the literature and a novel bioassay. A laboratory test based on the growth of algae exposed to herbicides, assayed with a microplate reader, was used to examine the toxicity of diuron and prometryn to the growth of Chlorella vulgaris. Both herbicides showed concentration-dependent toxicity in inhibiting the growth of Chlorella during the exposure period of 18-72 h. Diuron was more toxic than prometryn, as judged by growth rates. Thalaba Creek at Merrywinebone was identified as the 'hotspot' for diuron and prometryn risk in the Gwydir catchment. The use of microplate assays coupled with probabilistic risk assessment is recommended for rapid assessment of ecotoxicity to indigenous species, allowing identification of locations in river catchments requiring environmental management. Copyright © 2014 Elsevier Inc. All rights reserved.
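    The growth-rate endpoint and an EC₅₀ read-off from a concentration-response series can be sketched as follows. The readings are invented, and a real analysis would fit a dose-response model rather than interpolate linearly:

```python
import math

def specific_growth_rate(od_start, od_end, hours):
    """Algal specific growth rate (per hour) from optical-density readings,
    assuming exponential growth over the exposure interval."""
    return math.log(od_end / od_start) / hours

def ec50_linear(concentrations, inhibitions):
    """Concentration giving 50% growth-rate inhibition, by linear
    interpolation between the two bracketing observations (sorted input)."""
    pairs = list(zip(concentrations, inhibitions))
    for (c0, i0), (c1, i1) in zip(pairs, pairs[1:]):
        if i0 <= 50.0 <= i1:
            return c0 + (50.0 - i0) * (c1 - c0) / (i1 - i0)
    raise ValueError("50% inhibition not bracketed by the data")
```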

  12. Probabilistic runoff volume forecasting in risk-based optimization for RTC of urban drainage systems

    DEFF Research Database (Denmark)

    Löwe, Roland; Vezzaro, Luca; Mikkelsen, Peter Steen

    2016-01-01

    overflow risk. The stochastic control framework and the performance of the runoff forecasting models are tested in a case study in Copenhagen (76 km2 with 6 sub-catchments and 7 control points) using 2-h radar rainfall forecasts and inlet flows to control points computed from a variety of noisy...... smoothing. Simulations demonstrate notable improvements of the control efficiency when considering forecast information and additionally when considering forecast uncertainty, compared with optimization based on current basin fillings only....

  13. Analyzing Dynamic Probabilistic Risk Assessment Data through Topology-Based Clustering

    Energy Technology Data Exchange (ETDEWEB)

Diego Mandelli; Dan Maljovec; Bei Wang; Valerio Pascucci; Peer-Timo Bremer

    2013-09-01

We investigate the use of a topology-based clustering technique on the data generated by dynamic event tree methodologies. The clustering technique we utilize focuses on a domain-partitioning algorithm based on topological structures known as the Morse-Smale complex, which partitions the data points into clusters based on their uniform gradient flow behavior. We perform both end-state analysis and transient analysis to classify the set of nuclear scenarios. We demonstrate our methodology on a dataset generated for a sodium-cooled fast reactor during an aircraft crash scenario. The simulation tracks the temperature of the reactor as well as the time for a recovery team to fix the passive cooling system. Combined with clustering results obtained previously through mean shift methodology, we present the user with complementary views of the data that help illuminate key features that might otherwise be hidden when using a single methodology. By clustering the data, the number of relevant test cases to be selected for further analysis can be drastically reduced by selecting a representative from each cluster. Identifying the similarities of simulations within a cluster can also aid in drawing important conclusions with respect to safety analysis.

  14. Application of probabilistic precipitation forecasts from a ...

    African Journals Online (AJOL)

    2014-02-14

Application of probabilistic precipitation forecasts from a deterministic model ... aim of this paper is to investigate the increase in the lead-time of flash flood warnings of the SAFFG using probabilistic precipitation forecasts ... The procedure is applied to a real flash flood event and the ensemble-based.

  15. Bisimulations meet PCTL equivalences for probabilistic automata

    DEFF Research Database (Denmark)

    Song, Lei; Zhang, Lijun; Godskesen, Jens Chr.

    2013-01-01

    Probabilistic automata (PAs) have been successfully applied in formal verification of concurrent and stochastic systems. Efficient model checking algorithms have been studied, where the most often used logics for expressing properties are based on probabilistic computation tree logic (PCTL) and its...

  16. Ranking structures and rank-rank correlations of countries: The FIFA and UEFA cases

    Science.gov (United States)

    Ausloos, Marcel; Cloots, Rudi; Gadomski, Adam; Vitanov, Nikolay K.

    2014-04-01

Ranking of agents competing with each other in complex systems may lead to paradoxes, depending on the measures chosen. A discussion is presented of such rank-rank correlations, similar or not, based on the case of European countries ranked by UEFA and FIFA from different soccer competitions. The first question to be answered is whether a simple empirical law is obtained for such (self-)organizations of complex sociological systems with such different measuring schemes. It is found that, contrary to many modern expectations, the power-law form is not the best description; the stretched exponential is much more adequate. Moreover, it is found that the measuring rules lead to some inner structures in both cases.

  17. Equal opportunity for low-degree network nodes: a PageRank-based method for protein target identification in metabolic graphs.

    Directory of Open Access Journals (Sweden)

    Dániel Bánky

Full Text Available Biological network data, such as metabolic-, signaling- or physical interaction graphs of proteins are increasingly available in public repositories for important species. Tools for the quantitative analysis of these networks are being developed today. Protein network-based drug target identification methods usually return protein hubs with large degrees in the networks as potentially important targets. Some known, important protein targets, however, are not hubs at all, and perturbing protein hubs in these networks may have several unwanted physiological effects, due to their interaction with numerous partners. Here, we show a novel method applicable in networks with directed edges (such as metabolic networks) that compensates for the low degree (non-hub) vertices in the network, and identifies important nodes, regardless of their hub properties. Our method computes the PageRank for the nodes of the network, and divides the PageRank by the in-degree (i.e., the number of incoming edges) of the node. This quotient is the same in all nodes in an undirected graph (even for large- and low-degree nodes, that is, for hubs and non-hubs as well), but may differ significantly from node to node in directed graphs. We suggest assigning importance to non-hub nodes with a large PageRank/in-degree quotient. Consequently, our method gives high scores to nodes with large PageRank, relative to their degrees: therefore non-hub important nodes can easily be identified in large networks. We demonstrate that these relatively high PageRank scores have biological relevance: the method correctly finds numerous already validated drug targets in distinct organisms (Mycobacterium tuberculosis, Plasmodium falciparum and MRSA Staphylococcus aureus), and consequently, it may suggest new possible protein targets as well. Additionally, our scoring method was not chosen arbitrarily: its value for all nodes of all undirected graphs is constant; therefore its high value captures

  18. Equal opportunity for low-degree network nodes: a PageRank-based method for protein target identification in metabolic graphs.

    Science.gov (United States)

    Bánky, Dániel; Iván, Gábor; Grolmusz, Vince

    2013-01-01

Biological network data, such as metabolic-, signaling- or physical interaction graphs of proteins are increasingly available in public repositories for important species. Tools for the quantitative analysis of these networks are being developed today. Protein network-based drug target identification methods usually return protein hubs with large degrees in the networks as potentially important targets. Some known, important protein targets, however, are not hubs at all, and perturbing protein hubs in these networks may have several unwanted physiological effects, due to their interaction with numerous partners. Here, we show a novel method applicable in networks with directed edges (such as metabolic networks) that compensates for the low degree (non-hub) vertices in the network, and identifies important nodes, regardless of their hub properties. Our method computes the PageRank for the nodes of the network, and divides the PageRank by the in-degree (i.e., the number of incoming edges) of the node. This quotient is the same in all nodes in an undirected graph (even for large- and low-degree nodes, that is, for hubs and non-hubs as well), but may differ significantly from node to node in directed graphs. We suggest assigning importance to non-hub nodes with a large PageRank/in-degree quotient. Consequently, our method gives high scores to nodes with large PageRank, relative to their degrees: therefore non-hub important nodes can easily be identified in large networks. We demonstrate that these relatively high PageRank scores have biological relevance: the method correctly finds numerous already validated drug targets in distinct organisms (Mycobacterium tuberculosis, Plasmodium falciparum and MRSA Staphylococcus aureus), and consequently, it may suggest new possible protein targets as well. Additionally, our scoring method was not chosen arbitrarily: its value for all nodes of all undirected graphs is constant; therefore its high value captures importance in the
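The PageRank/in-degree quotient described above is easy to reproduce on a toy directed graph. The sketch below uses plain power iteration with the conventional damping factor 0.85 and a hypothetical 4-node network in which the low-in-degree node 3 is fed by a hub, so it receives the top quotient despite not being a hub itself:

```python
import numpy as np

def pagerank(adj, d=0.85, tol=1e-12):
    """Power-iteration PageRank on a directed adjacency matrix with
    adj[i, j] = 1 for an edge i -> j. Dangling rows spread uniformly."""
    n = adj.shape[0]
    out = adj.sum(axis=1, keepdims=True)
    P = np.where(out > 0, adj / np.maximum(out, 1), 1.0 / n)
    r = np.full(n, 1.0 / n)
    while True:
        r_new = (1 - d) / n + d * r @ P
        if np.abs(r_new - r).sum() < tol:
            return r_new
        r = r_new

# Toy directed network: node 2 is a hub; node 3 has a single incoming
# edge, but that edge comes from the hub.
adj = np.array([
    [0, 1, 1, 0],
    [1, 0, 1, 0],
    [1, 1, 0, 1],
    [0, 0, 1, 0],
], float)

pr = pagerank(adj)
in_deg = adj.sum(axis=0)          # incoming edges per node
score = pr / in_deg               # the PageRank/in-degree quotient
# Node 3 (in-degree 1, fed by the hub) gets the largest quotient
```

In an undirected graph every node's quotient would be identical (PageRank is proportional to degree there), which is exactly the baseline property the abstract uses to justify the score.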

  19. Sparse structure regularized ranking

    KAUST Repository

    Wang, Jim Jing-Yan

    2014-04-17

Learning ranking scores is critical for the multimedia database retrieval problem. In this paper, we propose a novel ranking score learning algorithm that explores the sparse structure and uses it to regularize ranking scores. To explore the sparse structure, we assume that each multimedia object can be represented as a sparse linear combination of all other objects; the combination coefficients are regarded as a similarity measure between objects and are used to regularize their ranking scores. Moreover, we propose to learn the sparse combination coefficients and the ranking scores simultaneously. A unified objective function is constructed with regard to both the combination coefficients and the ranking scores, and is optimized by an iterative algorithm. Experiments on two multimedia database retrieval data sets demonstrate significant improvements of the proposed algorithm over state-of-the-art ranking score learning algorithms.

  20. Microscopic and probabilistic approach to thermal steady state based on a dice and coin toy model

    International Nuclear Information System (INIS)

    Onorato, Pasquale; Moggio, Lorenzo; Oss, Stefano; Malgieri, Massimiliano

    2017-01-01

    In this article we present an educational approach to thermal equilibrium which was tested on a group of 13 undergraduate students at the University of Trento. The approach is based on a stochastic toy model, in which bodies in thermal contact are represented by rows of squares on a cardboard table, which exchange coins placed on the squares based on the roll of two dice. The discussion of several physical principles, such as the exponential approach to equilibrium, the determination of the equilibrium temperature, and the interpretation of the equilibrium state as the most probable macrostate, proceeds through a continual comparison between the outcomes obtained with the toy model and the results of a real experiment on the thermal contact of two masses of water at different temperatures. At the end of the sequence, a re-analysis of the experimental results in view of both the Boltzmann and Clausius definitions of entropy reveals some limits of the toy model, but also allows for a critical discussion of the concepts of temperature and entropy. In order to provide the reader with a feeling of how the sequence was received by students, and how it helped them understand the topics introduced, we discuss some excerpts from their answers to a conceptual item given at the end of the sequence. (paper)
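A minimal simulation in the spirit of the toy model can reproduce the exponential approach to equilibrium: two "bodies" are rows of squares holding coins (energy quanta), and random squares exchange coins, standing in for the dice-driven exchanges of the cardboard table. The body sizes, coin counts and step count below are arbitrary illustrative choices:

```python
import random

def simulate(n_a=20, n_b=20, coins_a=100, coins_b=0, steps=5000, seed=1):
    """Toy thermal contact: each step a randomly chosen square donates
    one coin to another randomly chosen square; the mean coins per
    square of body A plays the role of its 'temperature'."""
    random.seed(seed)
    table = [coins_a // n_a] * n_a + [coins_b // n_b] * n_b
    history = []
    for _ in range(steps):
        donor = random.randrange(len(table))
        if table[donor] > 0:
            receiver = random.randrange(len(table))
            table[donor] -= 1
            table[receiver] += 1
        history.append(sum(table[:n_a]) / n_a)
    return history

h = simulate()
# Body A relaxes from 5 coins/square toward the joint mean of 2.5,
# with fluctuations around equilibrium as in the classroom experiment
```

Plotting `h` shows the roughly exponential decay toward the joint mean, and the residual fluctuations illustrate why the equilibrium state is best read as the most probable macrostate rather than a fixed configuration.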

  1. Evidence-based pathology in its second decade: toward probabilistic cognitive computing.

    Science.gov (United States)

    Marchevsky, Alberto M; Walts, Ann E; Wick, Mark R

    2017-03-01

    Evidence-based pathology advocates using a combination of best available data ("evidence") from the literature and personal experience for the diagnosis, estimation of prognosis, and assessment of other variables that impact individual patient care. Evidence-based pathology relies on systematic reviews of the literature, evaluation of the quality of evidence as categorized by evidence levels and statistical tools such as meta-analyses, estimates of probabilities and odds, and others. However, it is well known that previously "statistically significant" information usually does not accurately forecast the future for individual patients. There is great interest in "cognitive computing" in which "data mining" is combined with "predictive analytics" designed to forecast future events and estimate the strength of those predictions. This study demonstrates the use of IBM Watson Analytics software to evaluate and predict the prognosis of 101 patients with typical and atypical pulmonary carcinoid tumors in which Ki-67 indices have been determined. The results obtained with this system are compared with those previously reported using "routine" statistical software and the help of a professional statistician. IBM Watson Analytics interactively provides statistical results that are comparable to those obtained with routine statistical tools but much more rapidly, with considerably less effort and with interactive graphics that are intuitively easy to apply. It also enables analysis of natural language variables and yields detailed survival predictions for patient subgroups selected by the user. Potential applications of this tool and basic concepts of cognitive computing are discussed. Copyright © 2016 Elsevier Inc. All rights reserved.

  2. Quantum probabilistic logic programming

    Science.gov (United States)

    Balu, Radhakrishnan

    2015-05-01

We describe a quantum mechanics based logic programming language that supports Horn clauses, random variables, and covariance matrices to express and solve problems in probabilistic logic. The Horn clauses of the language wrap random variables, including infinite-valued ones, to express probability distributions and statistical correlations, a powerful feature to capture relationships between distributions that are not independent. The expressive power of the language is based on a mechanism to implement statistical ensembles and to solve the underlying SAT instances using quantum mechanical machinery. We exploit the fact that classical random variables have quantum decompositions to build the Horn clauses. We establish the semantics of the language in a rigorous fashion by considering an existing probabilistic logic language called PRISM, with classical probability measures defined on the Herbrand base, and extending it to the quantum context. In the classical case, H-interpretations form the sample space, and probability measures defined on them lead to a consistent definition of probabilities for well-formed formulae. In the quantum counterpart, we define probability amplitudes on H-interpretations, facilitating model generation and verification via quantum mechanical superpositions and entanglements. We cast the well-formed formulae of the language as quantum mechanical observables, thus providing an elegant interpretation for their probabilities. We discuss several examples that combine statistical ensembles and predicates of first order logic to reason with situations involving uncertainty.

  3. Probabilistic safety analysis and risk-based inspection of nuclear research reactors: state-of-the-art and implementation proposal

    Energy Technology Data Exchange (ETDEWEB)

    Marques, Raíssa O.; Vasceoncelos, Vanderley de; Soares, Wellington A.; Silva Júnior, Silvério F.; Raso, Amanda L.; Mesquita, Amir Z., E-mail: raissaomarques@gmail.com, E-mail: vasconv@cdtn.br, E-mail: soaresw@cdtn.br, E-mail: silvasf@cdtn.br, E-mail: amandaraso@hotmail.com, E-mail: amir@cdtn.br [Centro de Desenvolvimento da Tecnologia Nuclear (CDTN/CNEN-MG), Belo Horizonte, MG (Brazil)

    2017-07-01

Industrial facility systems deteriorate over time during operation, thus increasing the possibility of accidents. Risk-Based Inspection (RBI) classifies such systems by their risk information with the purpose of prioritizing inspection efforts. RBI can reduce inspection activities, resulting in lower risk levels while maintaining reliability and safety at acceptable levels. Risk-Informed In-Service Inspection (RI-ISI) is an RBI approach used in the nuclear industry. RI-ISI uses outcomes from the Probabilistic Safety Analysis (PSA) of Nuclear Power Plants (NPPs) to plan In-Service Inspections (ISI). Although nuclear research reactors are simpler and pose lower risks than power reactors, applying PSA to them may be useful for safety improvements, since they are more flexible, provide easier access to the core, and allow changes in fuel configuration for experimental tests. Ageing management of structures, systems and components important to the safety of a nuclear research reactor throughout its lifetime is also required to assure continued adequacy of safety levels, reliable operation, and compliance with operational limits and conditions. This includes periodic review of ISI programs in which monitoring of material deterioration and ageing effects is considered, and which can be supported by the RBI approach. A review of the state of the art of PSA and RBI applications to nuclear reactors is presented in this work, and the advantages of applying these methodologies are analyzed. A proposal for implementing PSA and RBI at nuclear research reactors is also presented, as well as its application to a TRIGA research reactor using computer codes developed by ReliaSoft® Corporation. (author)

  4. Probabilistic safety analysis and risk-based inspection of nuclear research reactors: state-of-the-art and implementation proposal

    International Nuclear Information System (INIS)

    Marques, Raíssa O.; Vasceoncelos, Vanderley de; Soares, Wellington A.; Silva Júnior, Silvério F.; Raso, Amanda L.; Mesquita, Amir Z.

    2017-01-01

Industrial facility systems deteriorate over time during operation, thus increasing the possibility of accidents. Risk-Based Inspection (RBI) classifies such systems by their risk information with the purpose of prioritizing inspection efforts. RBI can reduce inspection activities, resulting in lower risk levels while maintaining reliability and safety at acceptable levels. Risk-Informed In-Service Inspection (RI-ISI) is an RBI approach used in the nuclear industry. RI-ISI uses outcomes from the Probabilistic Safety Analysis (PSA) of Nuclear Power Plants (NPPs) to plan In-Service Inspections (ISI). Although nuclear research reactors are simpler and pose lower risks than power reactors, applying PSA to them may be useful for safety improvements, since they are more flexible, provide easier access to the core, and allow changes in fuel configuration for experimental tests. Ageing management of structures, systems and components important to the safety of a nuclear research reactor throughout its lifetime is also required to assure continued adequacy of safety levels, reliable operation, and compliance with operational limits and conditions. This includes periodic review of ISI programs in which monitoring of material deterioration and ageing effects is considered, and which can be supported by the RBI approach. A review of the state of the art of PSA and RBI applications to nuclear reactors is presented in this work, and the advantages of applying these methodologies are analyzed. A proposal for implementing PSA and RBI at nuclear research reactors is also presented, as well as its application to a TRIGA research reactor using computer codes developed by ReliaSoft® Corporation. (author)

  5. Toward Probabilistic Diagnosis and Understanding of Depression Based on Functional MRI Data Analysis with Logistic Group LASSO.

    Directory of Open Access Journals (Sweden)

    Yu Shimizu

Full Text Available Diagnosis of psychiatric disorders based on brain imaging data is highly desirable in clinical applications. However, a common problem in applying machine learning algorithms is that the number of imaging data dimensions often greatly exceeds the number of available training samples. Furthermore, interpretability of the learned classifier with respect to brain function and anatomy is an important, but non-trivial issue. We propose the use of logistic regression with a least absolute shrinkage and selection operator (LASSO) to capture the most critical input features. In particular, we consider application of group LASSO to select brain areas relevant to diagnosis. An additional advantage of LASSO is its probabilistic output, which allows evaluation of diagnosis certainty. To verify our approach, we obtained semantic and phonological verbal fluency fMRI data from 31 depression patients and 31 control subjects, and compared the performances of group LASSO (gLASSO) and sparse group LASSO (sgLASSO) to those of standard LASSO (sLASSO), Support Vector Machine (SVM), and Random Forest. Over 90% classification accuracy was achieved with gLASSO, sgLASSO, as well as SVM; however, in contrast to SVM, LASSO approaches allow for identification of the most discriminative weights and estimation of prediction reliability. Semantic task data revealed contributions to the classification from left precuneus, left precentral gyrus, left inferior frontal cortex (pars triangularis), and left cerebellum (crus 1). Weights for the phonological task indicated contributions from left inferior frontal operculum, left postcentral gyrus, left insula, left middle frontal cortex, bilateral middle temporal cortices, bilateral precuneus, left inferior frontal cortex (pars triangularis), and left precentral gyrus. The distribution of normalized odds ratios further showed that predictions with absolute odds ratios higher than 0.2 could be regarded as certain.
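Group LASSO for logistic regression can be sketched with proximal gradient descent, where the proximal step soft-thresholds each group of weights as a block so that irrelevant "brain areas" are driven toward zero together. The synthetic data, group layout and hyperparameters below are illustrative assumptions, not the study's fMRI setup:

```python
import numpy as np

def group_soft_threshold(w, groups, t):
    """Proximal operator of the group-LASSO penalty: shrink each
    group of weights toward zero as a block."""
    w = w.copy()
    for g in groups:
        norm = np.linalg.norm(w[g])
        w[g] = 0.0 if norm <= t else w[g] * (1 - t / norm)
    return w

def fit_group_lasso_logistic(X, y, groups, lam=0.15, lr=0.1, iters=2000):
    """Logistic regression with a group-LASSO penalty, trained by
    proximal gradient descent (ISTA). Returns the weights and a
    probability function, so prediction certainty can be read off."""
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(iters):
        p = 1 / (1 + np.exp(-X @ w))
        grad = X.T @ (p - y) / n
        w = group_soft_threshold(w - lr * grad, groups, lr * lam)
    predict_proba = lambda Xn: 1 / (1 + np.exp(-Xn @ w))
    return w, predict_proba

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 12))
# Only the first hypothetical 'brain area' (features 0-3) carries signal
y = (X[:, :4].sum(axis=1) + 0.3 * rng.normal(size=200) > 0).astype(float)
groups = [np.arange(0, 4), np.arange(4, 8), np.arange(8, 12)]
w, proba = fit_group_lasso_logistic(X, y, groups)
# Irrelevant groups are shrunk toward zero as blocks, while proba()
# gives the probabilistic output used to judge diagnosis certainty
```

The block structure of the penalty is what makes the selected features interpretable at the level of whole feature groups rather than isolated voxels.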

  6. Neutron distribution modeling based on integro-probabilistic approach of discrete ordinates method

    International Nuclear Information System (INIS)

    Khromov, V.V.; Kryuchkov, E.F.; Tikhomirov, G.V.

    1992-01-01

In this paper the universal nodal method for calculating the neutron distribution in reactor and shielding problems is described, based on the use of influence functions and factors of locally integrated volume and surface neutron sources in phase subregions. This method avoids the limited capabilities of the collision-probability method concerning the detailed calculation of the angular neutron flux dependence, scattering anisotropy and empty channels. The proposed method may be considered a modification of the S_n method with the advantage of eliminating ray effects. The theory and algorithm of the method are described, followed by examples of its application to the calculation of the neutron distribution in a three-dimensional model of a fusion reactor blanket and in a highly heterogeneous reactor with an empty channel

  7. A Probabilistic Framework for Content-Based Diagnosis of Retinal Disease

    Energy Technology Data Exchange (ETDEWEB)

    Tobin Jr, Kenneth William [ORNL; Abdelrahman, Mohamed A [ORNL; Chaum, Edward [ORNL; Muthusamy Govindasamy, Vijaya Priya [ORNL; Karnowski, Thomas Paul [ORNL

    2007-01-01

    Diabetic retinopathy is the leading cause of blindness in the working age population around the world. Computer assisted analysis has the potential to assist in the early detection of diabetes by regular screening of large populations. The widespread availability of digital fundus cameras today is resulting in the accumulation of large image archives of diagnosed patient data that captures historical knowledge of retinal pathology. Through this research we are developing a content-based image retrieval method to verify our hypothesis that retinal pathology can be identified and quantified from visually similar retinal images in an image archive. We will present diagnostic results for specificity and sensitivity on a population of 395 fundus images representing the normal fundus and 14 stratified disease states.

  8. Possibility/Necessity-Based Probabilistic Expectation Models for Linear Programming Problems with Discrete Fuzzy Random Variables

    Directory of Open Access Journals (Sweden)

    Hideki Katagiri

    2017-10-01

Full Text Available This paper considers linear programming problems (LPPs) where the objective functions involve discrete fuzzy random variables (fuzzy set-valued discrete random variables). New decision making models, which are useful in fuzzy stochastic environments, are proposed based on both possibility theory and probability theory. In multi-objective cases, Pareto optimal solutions of the proposed models are newly defined. Computational algorithms for obtaining the Pareto optimal solutions of the proposed models are provided. It is shown that problems involving discrete fuzzy random variables can be transformed into deterministic nonlinear mathematical programming problems which can be solved through a conventional mathematical programming solver under practically reasonable assumptions. A numerical example of agriculture production problems is given to demonstrate the applicability of the proposed models to real-world problems in fuzzy stochastic environments.

  9. 4D Flexible Atom-Pairs: An efficient probabilistic conformational space comparison for ligand-based virtual screening

    Science.gov (United States)

    2011-01-01

    Background The performance of 3D-based virtual screening similarity functions is affected by the applied conformations of compounds. Therefore, the results of 3D approaches are often less robust than 2D approaches. The application of 3D methods on multiple conformer data sets normally reduces this weakness, but entails a significant computational overhead. Therefore, we developed a special conformational space encoding by means of Gaussian mixture models and a similarity function that operates on these models. The application of a model-based encoding allows an efficient comparison of the conformational space of compounds. Results Comparisons of our 4D flexible atom-pair approach with over 15 state-of-the-art 2D- and 3D-based virtual screening similarity functions on the 40 data sets of the Directory of Useful Decoys show a robust performance of our approach. Even 3D-based approaches that operate on multiple conformers yield inferior results. The 4D flexible atom-pair method achieves an averaged AUC value of 0.78 on the filtered Directory of Useful Decoys data sets. The best 2D- and 3D-based approaches of this study yield an AUC value of 0.74 and 0.72, respectively. As a result, the 4D flexible atom-pair approach achieves an average rank of 1.25 with respect to 15 other state-of-the-art similarity functions and four different evaluation metrics. Conclusions Our 4D method yields a robust performance on 40 pharmaceutically relevant targets. The conformational space encoding enables an efficient comparison of the conformational space. Therefore, the weakness of the 3D-based approaches on single conformations is circumvented. With over 100,000 similarity calculations on a single desktop CPU, the utilization of the 4D flexible atom-pair in real-world applications is feasible. PMID:21733172

  10. Face Recognition Performance Improvement using a Similarity Score of Feature Vectors based on Probabilistic Histograms

    Directory of Open Access Journals (Sweden)

    SRIKOTE, G.

    2016-08-01

Full Text Available This paper proposes an improved face recognition algorithm to identify mismatched face pairs in cases of incorrect decisions. The primary feature of this method is to deploy the similarity score with respect to Gaussian components between two previously unseen faces. Unlike conventional classical vector distance measurement, our algorithms also consider the plot of the summation of the similarity index versus face feature vector distance. A mixture of Gaussian models of labeled faces is also widely applicable to different biometric system parameters. By comparative evaluations, it has been shown that the efficiency of the proposed algorithm is superior to that of the conventional algorithm by an average accuracy of up to 1.15% and 16.87% when compared with 3x3 Multi-Region Histogram (MRH) direct-bag-of-features and Principal Component Analysis (PCA)-based face recognition systems, respectively. The experimental results show that similarity score consideration is more discriminative for face recognition than feature distance. Experimental results on the Labeled Faces in the Wild (LFW) data set demonstrate that our algorithms are suitable for real-world probe-to-gallery identification in face recognition systems. Moreover, this proposed method can also be applied to other recognition systems and therefore additionally improves recognition scores.

  11. Dengue fever spreading based on probabilistic cellular automata with two lattices

    Science.gov (United States)

    Pereira, F. M. M.; Schimit, P. H. T.

    2018-06-01

Modeling and simulation of mosquito-borne diseases have gained attention due to a growing incidence in tropical countries in the past few years. Here, we study the dengue spreading in a population modeled by cellular automata, where there are two lattices to model the human-mosquito interaction: one lattice for human individuals, and one lattice for mosquitoes in order to enable different dynamics in populations. The disease considered is the dengue fever with one, two or three different serotypes coexisting in population. Although many regions exhibit the incidence of only one serotype, here we set a complete framework to also study the occurrence of two and three serotypes at the same time in a population. Furthermore, the flexibility of the model allows its use to other mosquito-borne diseases, like chikungunya, yellow fever and malaria. An approximation of the cellular automata is proposed in terms of ordinary differential equations; the spreading of mosquitoes is studied and the influence of some model parameters are analyzed with numerical simulations. Finally, a method to combat dengue spreading is simulated based on a reduction of mosquito birth and mosquito bites in population.

  12. Probabilistic endowment appraisal system based upon the formalization of geologic decisions. General description

    International Nuclear Information System (INIS)

    Harris, D.P.; Carrigan, F.J.

    1980-04-01

    The objectives of this study include the design of an appraisal system which has the following features: estimates uranium endowment, not resources; formalizes the geologist's geoscience and assists the geologist in the exercise of his geoscience; describes the probability distribution for uranium endowment; diminishes or at least does not contribute to psychometric biases; provides for anonymous exchange among multiple experts of tenets of geoscience, but not the exchange of endowment estimates; provides an endowment estimate based upon geoscience only; is not easily gamed or manipulated; and provides for a quick and easy review of geoscience and resource information. This report is reflective of its title, a general description. The appraisal system resulting from this research is complex in the detail of its design and use. However, the major concepts which are reflected by the system are simple. The purpose of this report is to establish clearly these major concepts and the manner in which the system applies these concepts. Many details, refinements, and caveats are purposefully suppressed in order to provide this general description. While this suppression is a loss to some readers, it is a benefit to a wider spectrum of readers. Those interested in the nuts and bolts of the system will also want to read the user's manual which accompanies this general description

  13. Probabilistic Modeling of Timber Structures

    DEFF Research Database (Denmark)

    Köhler, J.D.; Sørensen, John Dalsgaard; Faber, Michael Havbro

    2005-01-01

    The present paper contains a proposal for the probabilistic modeling of timber material properties. It is produced in the context of the Probabilistic Model Code (PMC) of the Joint Committee on Structural Safety (JCSS) and of the COST action E24 'Reliability of Timber Structures'. The present...... proposal is based on discussions and comments from participants of the COST E24 action and the members of the JCSS. The paper contains a description of the basic reference properties for timber strength parameters and ultimate limit state equations for components and connections. The recommended...

  14. Probabilistic cellular automata.

    Science.gov (United States)

    Agapie, Alexandru; Andreica, Anca; Giuclea, Marius

    2014-09-01

    Cellular automata are binary lattices used for modeling complex dynamical systems. The automaton evolves iteratively from one configuration to another, using some local transition rule based on the number of ones in the neighborhood of each cell. With respect to the number of cells allowed to change per iteration, we speak of either synchronous or asynchronous automata. If randomness is involved to some degree in the transition rule, we speak of probabilistic automata; otherwise they are called deterministic. Whichever type of cellular automaton we are dealing with, the main theoretical challenge remains the same: starting from an arbitrary initial configuration, predict (with the highest accuracy) the end configuration. If the automaton is deterministic, the outcome simplifies to one of two configurations, all zeros or all ones. If the automaton is probabilistic, the whole process is modeled by a finite homogeneous Markov chain, and the outcome is the corresponding stationary distribution. Based on our previous results for the asynchronous case (connecting the probability of a configuration in the stationary distribution to its number of zero-one borders), the article offers both numerical and theoretical insight into the long-term behavior of synchronous cellular automata.
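    The dynamics described above can be sketched in a few lines. The following is a minimal illustration, not the authors' model: a one-dimensional binary ring updated synchronously with a majority rule on each cell's (left, self, right) neighborhood, where the probabilistic element is a hypothetical flip probability `p_flip`; the `borders` helper counts the zero-one borders that the abstract connects to the stationary distribution.

```python
import random

def step_synchronous(config, p_flip=0.1):
    """One synchronous update of a circular binary lattice: every cell
    follows the majority of its (left, self, right) neighborhood, but with
    probability p_flip it takes the opposite state (the probabilistic part)."""
    n = len(config)
    new = []
    for i in range(n):
        ones = config[i - 1] + config[i] + config[(i + 1) % n]
        state = 1 if ones >= 2 else 0
        if random.random() < p_flip:
            state = 1 - state
        new.append(state)
    return new

def borders(config):
    """Count zero-one borders in the circular configuration."""
    return sum(1 for i in range(len(config)) if config[i] != config[i - 1])

# Demo: iterate from a random initial configuration.
random.seed(0)
config = [random.randint(0, 1) for _ in range(64)]
for _ in range(200):
    config = step_synchronous(config)
```

    A quick sanity check on the `borders` statistic: on a ring, the number of zero-one borders is always even, since each block of ones contributes exactly two.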

  15. Arbitrage and Hedging in a non probabilistic framework

    OpenAIRE

    Alvarez, Alexander; Ferrando, Sebastian; Olivares, Pablo

    2011-01-01

    The paper studies the concepts of hedging and arbitrage in a non probabilistic framework. It provides conditions for non probabilistic arbitrage based on the topological structure of the trajectory space and makes connections with the usual notion of arbitrage. Several examples illustrate non probabilistic arbitrage as well as perfect replication of options under continuous and discontinuous trajectories; the results can then be applied in probabilistic models path by path. The approach is r...

  16. A common fixed point for operators in probabilistic normed spaces

    International Nuclear Information System (INIS)

    Ghaemi, M.B.; Lafuerza-Guillen, Bernardo; Razani, A.

    2009-01-01

    Probabilistic metric spaces were introduced by Karl Menger. Alsina, Schweizer and Sklar gave a general definition of probabilistic normed space based on the definition of Menger [Alsina C, Schweizer B, Sklar A. On the definition of a probabilistic normed space. Aequationes Math 1993;46:91-8]. Here, we consider the equicontinuity of a class of linear operators in probabilistic normed spaces and, finally, a common fixed point theorem is proved. An application to quantum mechanics is considered.

  17. Multi-stage classification method oriented to aerial image based on low-rank recovery and multi-feature fusion sparse representation.

    Science.gov (United States)

    Ma, Xu; Cheng, Yongmei; Hao, Shuai

    2016-12-10

    Automatic classification of terrain surfaces from an aerial image is essential for an autonomous unmanned aerial vehicle (UAV) landing at an unprepared site by using vision. Diverse terrain surfaces may show similar spectral properties due to illumination and noise, which easily causes poor classification performance. To address this issue, a multi-stage classification algorithm based on low-rank recovery and multi-feature fusion sparse representation is proposed. First, color moments and Gabor texture features are extracted from the training data and stacked as column vectors of a dictionary. Then we perform low-rank matrix recovery on the dictionary by using augmented Lagrange multipliers and construct a multi-stage terrain classifier. Experimental results on an aerial map database that we prepared verify the classification accuracy and robustness of the proposed method.

  18. Power System Event Ranking Using a New Linear Parameter-Varying Modeling with a Wide Area Measurement System-Based Approach

    Directory of Open Access Journals (Sweden)

    Mohammad Bagher Abolhasani Jabali

    2017-07-01

    Detecting critical power system events for Dynamic Security Assessment (DSA) is required for reliability improvement. The approach proposed in this paper investigates the effects of events on dynamic behavior during the nonlinear system response, whereas common approaches use steady-state conditions after events. This paper presents some new and enhanced indices for event ranking based on time-domain simulation and polytopic linear parameter-varying (LPV) modeling of a power system. In the proposed approach, a polytopic LPV representation is generated via linearization about some points of the nonlinear dynamic behavior of the power system using wide-area measurement system (WAMS) concepts, and then event ranking is done based on the frequency response of the system models on the vertices. Therefore, the nonlinear behavior of the system at the time of fault occurrence is considered for event ranking. The proposed algorithm is applied to a power system using nonlinear simulation. The comparison of the results, especially under different fault conditions, shows the advantages of the proposed approach and indices.
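    A toy version of the vertex-based ranking can be sketched as follows. We assume each event yields a few linearized state-space models (A, B, C) at the vertices of its polytope; events are then ranked by the worst-case peak frequency-response gain over those vertices, loosely mirroring the frequency-domain indices in the paper. The matrices, event names, and frequency grid here are hypothetical, not taken from the study.

```python
import numpy as np

def peak_gain(A, B, C, freqs):
    """Largest singular value of C (jwI - A)^(-1) B over a frequency grid:
    a crude H-infinity-style index for one vertex model."""
    n = A.shape[0]
    g = 0.0
    for w in freqs:
        G = C @ np.linalg.solve(1j * w * np.eye(n) - A, B)
        g = max(g, np.linalg.svd(G, compute_uv=False)[0])
    return g

def rank_events(event_vertex_models, freqs):
    """Rank events (worst first) by worst-case peak gain over their vertices."""
    scores = {ev: max(peak_gain(A, B, C, freqs) for A, B, C in models)
              for ev, models in event_vertex_models.items()}
    return sorted(scores, key=scores.get, reverse=True)
```

    With this index, a lightly damped vertex model (a sharp resonance peak) pushes its event toward the top of the ranking, which matches the intuition that poorly damped post-fault dynamics are the critical ones.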

  19. Making Probabilistic Relational Categories Learnable

    Science.gov (United States)

    Jung, Wookyoung; Hummel, John E.

    2015-01-01

    Theories of relational concept acquisition (e.g., schema induction) based on structured intersection discovery predict that relational concepts with a probabilistic (i.e., family resemblance) structure ought to be extremely difficult to learn. We report four experiments testing this prediction by investigating conditions hypothesized to facilitate…

  20. Ranking Hospitals Based on Colon Surgery and Abdominal Hysterectomy Surgical Site Infection Outcomes: Impact of Limiting Surveillance to the Operative Hospital.

    Science.gov (United States)

    Yokoe, Deborah S; Avery, Taliser R; Platt, Richard; Kleinman, Ken; Huang, Susan S

    2018-03-16

    Hospital-specific surgical site infection (SSI) performance following colon surgery and abdominal hysterectomies can impact hospitals' relative rankings around quality metrics used to determine financial penalties. Current SSI surveillance largely focuses on SSI detected at the operative hospital. We performed a retrospective cohort study to assess the impact on hospitals' relative SSI performance rankings when SSI detected at non-operative hospitals are included. We utilized data from a California statewide hospital registry to assess for evidence of SSI following colon surgeries or abdominal hysterectomies performed 3/1/2011-11/30/2013, using previously validated claims-based SSI surveillance methods. Risk-adjusted hospital-specific rankings based on SSI detected at operative hospitals versus at any California hospital were generated. Among 60,059 colon surgeries at 285 hospitals and 64,918 abdominal hysterectomies at 270 hospitals, 5,921 (9.9%) colon surgeries and 1,481 (2.3%) abdominal hysterectomies received a diagnosis code for SSI within the 30 days following surgery. 7.2% of colon surgery and 13.4% of abdominal hysterectomy SSI would have been missed by operative hospital surveillance alone. The proportion of an individual hospital's SSI detected during hospitalizations at other hospitals varied widely. Including non-operative hospital SSI improved the relative ranking of 11 (3.9%) colon surgery and 13 (4.8%) hysterectomy hospitals so that they were no longer in the worst-performing quartile, mainly among hospitals with relatively high surgical volumes. Standard SSI surveillance that focuses mainly on infections detected at the operative hospital causes varying degrees of SSI underestimation, leading to inaccurate assignment or avoidance of financial penalties for approximately one in eleven to sixteen hospitals.
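    The effect of widening surveillance beyond the operative hospital can be illustrated with a toy calculation. The data, hospital names, and unadjusted rates below are hypothetical, and the study itself used risk-adjusted rankings; the sketch only shows how worst-quartile membership can shift when SSI detected at other hospitals are counted.

```python
from collections import defaultdict

def ssi_rates(surgeries, include_nonoperative):
    """Crude (unadjusted) SSI rate per operative hospital.
    Each surgery is (operative_hospital, ssi_detected_at), where
    ssi_detected_at is None, 'operative', or 'other'."""
    counts = defaultdict(lambda: [0, 0])  # hospital -> [ssi, total]
    for hosp, detected in surgeries:
        counts[hosp][1] += 1
        if detected == "operative" or (include_nonoperative and detected == "other"):
            counts[hosp][0] += 1
    return {h: s / t for h, (s, t) in counts.items()}

def worst_quartile(rates):
    """Hospitals in the worst-performing (highest-rate) quartile."""
    ranked = sorted(rates, key=rates.get, reverse=True)
    return set(ranked[: max(1, len(ranked) // 4)])
```

    A hospital whose SSI are mostly detected during admissions elsewhere looks like a good performer under operative-only surveillance and can displace another hospital into the penalized quartile; counting non-operative detections reverses that.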