Sample records for metric based rules

  1. Extremal Kähler metrics on blow-ups of parabolic ruled surfaces

    Tipler, Carl


    New examples of extremal Kähler metrics on blow-ups of parabolic ruled surfaces are constructed. The method is based on the gluing construction of Arezzo, Pacard and Singer. This makes it possible to endow ruled surfaces of the form $\mathbb{P}(\mathcal{O}\oplus L)$ with special parabolic structures such that the associated iterated blow-up admits an extremal metric of non-constant scalar curvature.

  2. Consensus reaching in swarms ruled by a hybrid metric-topological distance

    Shang, Yilun


    Recent empirical observations of three-dimensional bird flocks and human crowds have challenged the long-prevailing assumption that a metric interaction distance rules swarming behaviors. In some cases, individual agents are found to be engaged in local information exchanges with a fixed number of neighbors, i.e. a topological interaction. However, complex system dynamics based on pure metric or pure topological distances both face physical inconsistencies in low and high density situations. Here, we propose a hybrid metric-topological interaction distance overcoming these issues and enabling a real-life implementation in artificial robotic swarms. We use network- and graph-theoretic approaches combined with a dynamical model of locally interacting self-propelled particles to study the consensus reaching process for a swarm ruled by this hybrid interaction distance. Specifically, we establish exactly the probability of reaching consensus in the absence of noise. In addition, simulations of swarms of self-pr...
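
    A rough sketch of how such a hybrid metric-topological neighbourhood could be implemented in a simulated swarm is given below: each agent first takes every agent within a metric radius and, if that yields fewer than a fixed topological count, falls back to its k nearest agents. The radius, the count k and the fallback rule are illustrative assumptions, not the authors' exact definition.

        import numpy as np

        def hybrid_neighbors(positions, metric_radius=1.0, k_topological=7):
            """For each agent, return neighbour indices under a hybrid rule:
            all agents within `metric_radius`, extended to the k nearest agents
            whenever the metric neighbourhood is too sparse (illustrative)."""
            n = len(positions)
            diffs = positions[:, None, :] - positions[None, :, :]
            dists = np.linalg.norm(diffs, axis=-1)
            np.fill_diagonal(dists, np.inf)          # exclude self
            neighbors = []
            for i in range(n):
                within = np.where(dists[i] <= metric_radius)[0]
                if len(within) < k_topological:
                    within = np.argsort(dists[i])[:k_topological]
                neighbors.append(within)
            return neighbors

        if __name__ == "__main__":
            rng = np.random.default_rng(0)
            pos = rng.uniform(0, 5, size=(20, 2))    # 20 agents in 2D
            nbrs = hybrid_neighbors(pos)
            print([len(x) for x in nbrs])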

  3. Fuzzy Rule Base System for Software Classification

    Adnan Shaout


    Given the central role that software development plays in the delivery and application of information technology, managers have been focusing on process improvement in the software development area. This improvement has increased the demand for software measures, or metrics, to manage the process. These metrics provide a quantitative basis for the development and validation of models during the software development process. In this paper a fuzzy rule-based system will be developed to classify Java applications using object oriented metrics. The system will contain the following features: an automated method to extract the OO metrics from the source code; a default/base set of rules that can be easily configured via an XML file, so that companies, developers, team leaders, etc. can modify the set of rules according to their needs; a framework so that new metrics, fuzzy sets and fuzzy rules can be added or removed depending on the needs of the end user; general classification of the software application and fine-grained classification of the Java classes based on OO metrics; and two interfaces to the system, a GUI and a command line.
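
    As a sketch of how fuzzy rules over OO metrics can drive such a classification, the snippet below fuzzifies two common metrics (Weighted Methods per Class and Coupling Between Objects) and fires two illustrative rules. The membership breakpoints, the rules and the class labels are assumptions for illustration, not the paper's XML-configured rule base.

        def tri(x, a, b, c):
            """Triangular membership function (a large c approximates a right shoulder)."""
            if x <= a or x >= c:
                return 0.0
            return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

        def fuzzify_wmc(wmc):
            # Hypothetical breakpoints for Weighted Methods per Class.
            return {"low": tri(wmc, -1, 0, 15), "high": tri(wmc, 10, 30, 1e9)}

        def fuzzify_cbo(cbo):
            # Hypothetical breakpoints for Coupling Between Objects.
            return {"low": tri(cbo, -1, 0, 6), "high": tri(cbo, 4, 12, 1e9)}

        def classify(wmc, cbo):
            w, c = fuzzify_wmc(wmc), fuzzify_cbo(cbo)
            # Two illustrative rules (AND = min, aggregation = max).
            degree_complex = min(w["high"], c["high"])
            degree_simple = max(w["low"], c["low"])
            return "complex" if degree_complex > degree_simple else "simple"

        print(classify(wmc=25, cbo=9))   # -> complex
        print(classify(wmc=5, cbo=2))    # -> simple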

  4. Cluster-based adaptive metric classification

    Giotis, Ioannis; Petkov, Nicolai


    Introducing adaptive metric has been shown to improve the results of distance-based classification algorithms. Existing methods are often computationally intensive, either in the training or in the classification phase. We present a novel algorithm that we call Cluster-Based Adaptive Metric (CLAM)

  5. Cluster-based adaptive metric classification

    Giotis, Ioannis; Petkov, Nicolai


    Introducing adaptive metric has been shown to improve the results of distance-based classification algorithms. Existing methods are often computationally intensive, either in the training or in the classification phase. We present a novel algorithm that we call Cluster-Based Adaptive Metric (CLAM)

  6. Rule-Based Network Service Provisioning

    Rudy Deca


    Due to the unprecedented development of networks, manual network service provisioning is becoming increasingly risky, error-prone, expensive, and time-consuming. To solve this problem, rule-based methods can provide adequate leverage for automating various network management tasks. This paper presents a rule-based solution for automated network service provisioning. The proposed approach captures configuration data interdependencies using high-level, service-specific, user-configurable rules. We focus on the service validation task, which is illustrated by means of a case study. Based on numerical results, we analyse the influence of the network-level complexity factors and rule descriptive features on the rule efficiency. This analysis shows the operators how to increase rule efficiency while keeping the rules simple and the rule set compact. We present a technique that allows operators to increase the error coverage, and we show that high error coverage scales well when the complexity of networks and services increases. We reassess the correlation function between specific rule efficiency and rule complexity metrics found in previous work, and show that this correlation function holds for various sizes, types, and complexities of networks and services.

  7. Software Metrics Evaluation Based on Entropy

    Selvarani, R; Ramachandran, Muthu; Prasad, Kamakshi


    Software engineering activities in industry have come a long way, with various improvements brought into the various stages of the software development life cycle. The complexity of modern software, the commercial constraints and the expectation for high quality products demand accurate fault prediction based on OO design metrics at the class level in the early stages of software development. The object oriented class metrics are used as quality predictors in the entire OO software development life cycle, even when a highly iterative, incremental model or agile software process is employed. Recent research has shown that some of the OO design metrics are useful for predicting fault-proneness of classes. In this paper the empirical validation of a set of metrics proposed by Chidamber and Kemerer is performed to assess their ability in predicting the software quality in terms of fault proneness and degradation. We have also proposed the design complexity of object-oriented software with Weighted Methods per Class m...

  8. TANF Rules Data Base

    U.S. Department of Health & Human Services — Single source providing information on Temporary Assistance for Needy Families (TANF) program rules among States and across years (currently 1996-2010), including...

  9. Rule-Based Semantic Sensing

    Woznowski, Przemyslaw


    Rule-Based Systems have been in use for decades to solve a variety of problems but not in the sensor informatics domain. Rules aid the aggregation of low-level sensor readings to form a more complete picture of the real world and help to address 10 identified challenges for sensor network middleware. This paper presents the reader with an overview of a system architecture and a pilot application to demonstrate the usefulness of a system integrating rules with sensor middleware.

  10. LAN Modeling in Rural Areas Based on Variable Metrics Using Fuzzy Logic

    Ak. Ashakumar Singh


    In the present global scenario, communication between urban and rural areas is highly needed. To motivate a new system for rural broadband access, the integration of LAN and IEEE 802.11 WLAN technologies is needed. Variable metrics such as access protocol, user traffic profile, buffer size, and data collision and retransmission are involved in the modeling of such a LAN. In this paper, a fuzzy logic based LAN modeling technique is designed for which the variable metrics are imprecise. The technique involves the fuzzification of the input variable metrics, rule evaluation, and aggregation of the rule outputs. The implementation is done using a Fuzzy Inference System (FIS) based on the Mamdani style in MatLab 7.6 for the representation of the reasoning and effective analysis. Four LAN systems are tested to analyze potential variable metrics to bring smooth communication to rural societies.

  11. Decomposition-based transfer distance metric learning for image classification.

    Luo, Yong; Liu, Tongliang; Tao, Dacheng; Xu, Chao


    Distance metric learning (DML) is a critical factor for image analysis and pattern recognition. To learn a robust distance metric for a target task, we need abundant side information (i.e., the similarity/dissimilarity pairwise constraints over the labeled data), which is usually unavailable in practice due to the high labeling cost. This paper considers the transfer learning setting by exploiting the large quantity of side information from certain related, but different source tasks to help with target metric learning (with only a little side information). The state-of-the-art metric learning algorithms usually fail in this setting because the data distributions of the source task and target task are often quite different. We address this problem by assuming that the target distance metric lies in the space spanned by the eigenvectors of the source metrics (or other randomly generated bases). The target metric is represented as a combination of the base metrics, which are computed using the decomposed components of the source metrics (or simply a set of random bases); we call the proposed method, decomposition-based transfer DML (DTDML). In particular, DTDML learns a sparse combination of the base metrics to construct the target metric by forcing the target metric to be close to an integration of the source metrics. The main advantage of the proposed method compared with existing transfer metric learning approaches is that we directly learn the base metric coefficients instead of the target metric. To this end, far fewer variables need to be learned. We therefore obtain more reliable solutions given the limited side information and the optimization tends to be faster. Experiments on the popular handwritten image (digit, letter) classification and challenge natural image annotation tasks demonstrate the effectiveness of the proposed method.
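
    The core representation step described above can be sketched as follows: each source Mahalanobis metric is decomposed into rank-one bases built from its eigenvectors, and the target metric is a non-negative combination of those bases. In this sketch the combination weights are fixed and uniform; learning sparse weights that keep the target close to an integration of the source metrics, as DTDML does, is omitted.

        import numpy as np

        def base_metrics_from_sources(source_metrics):
            """Decompose each source metric M into rank-one bases u_i u_i^T from its eigenvectors."""
            bases = []
            for M in source_metrics:
                _, vecs = np.linalg.eigh(M)
                bases.extend(np.outer(v, v) for v in vecs.T)
            return bases

        def combine(bases, weights):
            """Target metric as a non-negative combination of the base metrics
            (DTDML would learn sparse weights; here they are fixed and uniform)."""
            return sum(w * B for w, B in zip(weights, bases))

        def mahalanobis_sq(x, y, M):
            d = x - y
            return float(d @ M @ d)

        if __name__ == "__main__":
            rng = np.random.default_rng(1)
            A1, A2 = rng.standard_normal((2, 4, 4))
            sources = [A1 @ A1.T, A2 @ A2.T]                 # two PSD source metrics
            bases = base_metrics_from_sources(sources)
            weights = np.full(len(bases), 1.0 / len(bases))  # illustrative, not learned
            M_target = combine(bases, weights)
            x, y = rng.standard_normal((2, 4))
            print(mahalanobis_sq(x, y, M_target))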

  12. Value-based metrics and Internet-based enterprises

    Gupta, Krishan M.


    Within the last few years, a host of value-based metrics like EVA, MVA, TBR, CFROI, and TSR have evolved. This paper attempts to analyze the validity and applicability of EVA and the Balanced Scorecard for Internet based organizations. Despite the collapse of the dot-com model, the firms engaged in e-commerce continue to struggle to find new ways to account for customer base, technology, employees, knowledge, etc., as part of the value of the firm. While some metrics, like the Balanced Scorecard, are geared towards internal use, others like EVA are for external use. Value-based metrics are used for performing internal audits as well as comparing firms against one another; and can also be effectively utilized by individuals outside the firm looking to determine if the firm is creating value for its stakeholders.
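
    For reference, the textbook definition of EVA mentioned above is net operating profit after tax minus a charge for the capital employed; the figures in the short example below are hypothetical.

        def economic_value_added(nopat, wacc, invested_capital):
            """Textbook EVA: after-tax operating profit minus a charge for capital employed."""
            return nopat - wacc * invested_capital

        # Hypothetical figures, in millions: NOPAT = 12, WACC = 9%, invested capital = 100.
        print(economic_value_added(12.0, 0.09, 100.0))   # -> 3.0 (value created)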

  13. Goal-Driven Definition of Product Metrics Based on Properties

    Briand, Lionel; Morasca, Sandro; Basili, Victor R.


    Defining product metrics requires a rigorous and disciplined approach, because useful metrics depend, to a very large extent, on one's goals and assumptions about the studied software process. Unlike in more mature scientific fields, it appears difficult to devise a "universal" set of metrics in software engineering that can be used across application environments. We propose an approach for the definition of product metrics which is driven by the experimental goals of measurement, expressed via the Goal/Question/Metric (GQM) paradigm, and is based on the mathematical properties of the metrics. This approach integrates several research contributions from the literature into a consistent, practical and rigorous approach. The approach we outline should not be considered as a complete and definitive solution, but as a starting point for discussion about a product metric definition approach widely accepted in the software engineering community. At this point, we intend to provide an intellectual process that we think is necessary to define sound software product metrics. A precise and complete documentation of such an approach will provide the information needed to make the assessment and reuse of a new metric possible. Thus, product metrics are supported by a solid theory which facilitates their review and refinement. Moreover, their definition is made less exploratory and, as a consequence, one is less likely to identify spurious correlations between process and product metrics.

  14. Evaluating hydrological model performance using information theory-based metrics

    Accuracy-based model performance metrics do not necessarily reflect the qualitative correspondence between simulated and measured streamflow time series. The objective of this work was to use information theory-based metrics to see whether they can be used as a complementary tool for hydrologic m...

  15. US Rocket Propulsion Industrial Base Health Metrics

    Doreswamy, Rajiv


    The number of active liquid rocket engine and solid rocket motor development programs has severely declined since the "space race" of the 1950s and 1960s. This downward trend has been exacerbated by the retirement of the Space Shuttle, the transition from the Constellation Program to the Space Launch System (SLS), and similar activity in DoD programs. In addition, with consolidation in the industry, the rocket propulsion industrial base (RPIB) is under stress. To improve the "health" of the RPIB, we need to understand the current condition of the RPIB, how this compares to past history, and the trend of RPIB health. This drives the need for a concise set of "metrics", analogous to the basic data a physician uses to determine the state of health of his patients: easy to measure and collect, with the trend often more useful than the actual data point, and usable to focus on problem areas and develop preventative measures. The RPIB is the nation's capability to conceive, design, develop, manufacture, test, and support missions using liquid rocket engines and solid rocket motors that are critical to its national security, economic health and growth, and future scientific needs. The RPIB encompasses US government, academic, and commercial (including industry primes and their supplier base) research, development, test, evaluation, and manufacturing capabilities and facilities. It includes the skilled workforce, related intellectual property, engineering and support services, and supply chain operations and management. This definition touches the five main segments of the U.S. RPIB as categorized by the USG: defense, intelligence community, civil government, academia, and commercial sector.

  16. Gamut Volume Index: a color preference metric based on meta-analysis and optimized colour samples.

    Liu, Qiang; Huang, Zheng; Xiao, Kaida; Pointer, Michael R; Westland, Stephen; Luo, M Ronnier


    A novel metric named Gamut Volume Index (GVI) is proposed for evaluating the colour preference of lighting. This metric is based on the absolute gamut volume of optimized colour samples. The optimal colour set of the proposed metric was obtained by optimizing the weighted average correlation between the metric predictions and the subjective ratings for 8 psychophysical studies. The performance of 20 typical colour metrics was also investigated, which included colour difference based metrics, gamut based metrics, memory based metrics as well as combined metrics. It was found that the proposed GVI outperformed the existing counterparts, especially for the conditions where correlated colour temperatures differed.
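
    A minimal sketch of the gamut-volume computation underlying such a metric is shown below: the volume of the convex hull spanned by the colour coordinates of a sample set rendered under the test source. The particular optimized colour set and colour space used by the authors are not given in the abstract, so random CIELAB-like coordinates stand in.

        import numpy as np
        from scipy.spatial import ConvexHull

        def gamut_volume(sample_coords):
            """Volume of the convex hull spanned by colour-sample coordinates
            (e.g. CIELAB or CAM02-UCS triples) rendered under a test source."""
            return ConvexHull(np.asarray(sample_coords)).volume

        if __name__ == "__main__":
            rng = np.random.default_rng(2)
            # Placeholder for the optimized colour set rendered under a light source.
            lab = np.column_stack([rng.uniform(20, 80, 50),     # L*
                                   rng.uniform(-60, 60, 50),    # a*
                                   rng.uniform(-60, 60, 50)])   # b*
            print(gamut_volume(lab))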

  17. Energy-Based Metrics for Arthroscopic Skills Assessment

    Behnaz Poursartip


    Minimally invasive skills assessment methods are essential in developing efficient surgical simulators and implementing consistent skills evaluation. Although numerous methods have been investigated in the literature, there is still a need to further improve the accuracy of surgical skills assessment. Energy expenditure can be an indication of motor skills proficiency. The goals of this study are to develop objective metrics based on energy expenditure, normalize these metrics, and investigate classifying trainees using these metrics. To this end, different forms of energy consisting of mechanical energy and work were considered and their values were divided by the related value of an ideal performance to develop normalized metrics. These metrics were used as inputs for various machine learning algorithms including support vector machines (SVM) and neural networks (NNs) for classification. The accuracy of the combination of the normalized energy-based metrics with these classifiers was evaluated through a leave-one-subject-out cross-validation. The proposed method was validated using 26 subjects at two experience levels (novices and experts) in three arthroscopic tasks. The results showed that there are statistically significant differences between novices and experts for almost all of the normalized energy-based metrics. The accuracy of classification using SVM and NN methods was between 70% and 95% for the various tasks. The results show that the normalized energy-based metrics and their combination with SVM and NN classifiers are capable of providing accurate classification of trainees. The assessment method proposed in this study can enhance surgical training by providing appropriate feedback to trainees about their level of expertise and can be used in the evaluation of proficiency.

  18. Energy-Based Metrics for Arthroscopic Skills Assessment.

    Poursartip, Behnaz; LeBel, Marie-Eve; McCracken, Laura C; Escoto, Abelardo; Patel, Rajni V; Naish, Michael D; Trejos, Ana Luisa


    Minimally invasive skills assessment methods are essential in developing efficient surgical simulators and implementing consistent skills evaluation. Although numerous methods have been investigated in the literature, there is still a need to further improve the accuracy of surgical skills assessment. Energy expenditure can be an indication of motor skills proficiency. The goals of this study are to develop objective metrics based on energy expenditure, normalize these metrics, and investigate classifying trainees using these metrics. To this end, different forms of energy consisting of mechanical energy and work were considered and their values were divided by the related value of an ideal performance to develop normalized metrics. These metrics were used as inputs for various machine learning algorithms including support vector machines (SVM) and neural networks (NNs) for classification. The accuracy of the combination of the normalized energy-based metrics with these classifiers was evaluated through a leave-one-subject-out cross-validation. The proposed method was validated using 26 subjects at two experience levels (novices and experts) in three arthroscopic tasks. The results showed that there are statistically significant differences between novices and experts for almost all of the normalized energy-based metrics. The accuracy of classification using SVM and NN methods was between 70% and 95% for the various tasks. The results show that the normalized energy-based metrics and their combination with SVM and NN classifiers are capable of providing accurate classification of trainees. The assessment method proposed in this study can enhance surgical training by providing appropriate feedback to trainees about their level of expertise and can be used in the evaluation of proficiency.
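
    A compact sketch of the evaluation pipeline described above, with synthetic data standing in for the recorded trials: each energy measure is divided by an assumed "ideal performance" value and the normalized features are classified with an SVM under leave-one-subject-out cross-validation. The feature names, ideal values and the novice/expert split are placeholders.

        import numpy as np
        from sklearn.model_selection import LeaveOneGroupOut, cross_val_score
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.svm import SVC

        rng = np.random.default_rng(3)
        n_subjects, trials = 26, 5
        subjects = np.repeat(np.arange(n_subjects), trials)
        labels = (subjects < 13).astype(int)   # arbitrary split: first 13 subjects labeled 1

        # Placeholder raw energy measures per trial (mechanical energy, work, ...).
        raw = rng.gamma(shape=2.0, scale=1.0 + labels[:, None], size=(len(subjects), 3))
        ideal = np.array([1.0, 1.2, 0.8])      # hypothetical "ideal performance" values
        features = raw / ideal                  # normalized energy-based metrics

        clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
        scores = cross_val_score(clf, features, labels, groups=subjects,
                                 cv=LeaveOneGroupOut())
        print("leave-one-subject-out accuracy: %.2f" % scores.mean())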

  19. Standardized reporting of functioning information on ICF-based common metrics.

    Prodinger, Birgit; Tennant, Alan; Stucki, Gerold


    In clinical practice, research, and national health information systems, a variety of clinical data collection tools are used to collect information on people's functioning. Reporting on ICF-based common metrics enables standardized documentation of functioning information in national health information systems. The objective of this methodological note on applying the ICF in rehabilitation is to demonstrate how to report functioning information collected with a data collection tool on ICF-based common metrics. We first specify the requirements for the standardized reporting of functioning information. Secondly, we introduce the methods needed for transforming functioning data to ICF-based common metrics. Finally, we provide an example. The requirements for standardized reporting are as follows: 1) having a common conceptual framework to enable content comparability between any health information; and 2) a measurement framework so that scores between two or more clinical data collection tools can be directly compared. The methods needed to achieve these requirements are the ICF Linking Rules and the Rasch Measurement Model. Using data collected incorporating the 36-item Short Form Health Survey (SF-36), the World Health Organization Disability Assessment Schedule 2.0 (WHODAS 2.0), and the Stroke Impact Scale 3.0 (SIS 3.0), the application of the standardized reporting based on common metrics is demonstrated. A subset of items from the three tools linked to common chapters of the ICF (d4 Mobility, d5 Self-care and d6 Domestic life) were entered as 'super items' into the Rasch model. Good fit was achieved with no residual local dependency and a unidimensional metric. A transformation table allows for comparison between scales, and between a scale and the reporting common metric. Being able to report functioning information collected with commonly used clinical data collection tools with ICF-based common metrics enables clinicians

  20. Advanced Space Propulsion Based on Vacuum (Spacetime Metric) Engineering

    Puthoff, Harold E


    A theme that has come to the fore in advanced planning for long-range space exploration is the concept that empty space itself (the quantum vacuum, or spacetime metric) might be engineered so as to provide energy/thrust for future space vehicles. Although far-reaching, such a proposal is solidly grounded in modern physical theory, and therefore the possibility that matter/vacuum interactions might be engineered for space-flight applications is not a priori ruled out. As examples, the current development of theoretical physics addresses such topics as warp drives, traversable wormholes and time machines that provide for such vacuum engineering possibilities. We provide here from a broad perspective the physics and correlates/consequences of the engineering of the spacetime metric.

  1. Rule-based Modelling and Tunable Resolution

    Russ Harmer


    We investigate the use of an extension of rule-based modelling for cellular signalling to create a structured space of model variants. This enables the incremental development of rule sets that start from simple mechanisms and which, by a gradual increase in agent and rule resolution, evolve into more detailed descriptions.

  2. Rule-based Modelling and Tunable Resolution

    Harmer, Russ


    We investigate the use of an extension of rule-based modelling for cellular signalling to create a structured space of model variants. This enables the incremental development of rule sets that start from simple mechanisms and which, by a gradual increase in agent and rule resolution, evolve into more detailed descriptions.

  3. Quadratic Error Metric Mesh Simplification Algorithm Based on Discrete Curvature

    Li Yao


    Complex and highly detailed polygon meshes have been adopted for model representation in many areas of computer graphics. Existing works mainly focused on quadric error metric based approximation of complex models, which has not taken the retention of important model details into account. This may lead to visual degeneration. In this paper, we improve Garland and Heckbert's quadric error metric based algorithm by using the discrete curvature to reserve more features for mesh simplification. Our experiments on various models show that the geometry and topology structure as well as the features of the original models are precisely retained by employing discrete curvature.
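
    The Garland-Heckbert quadric error that such methods build on can be sketched as follows: each vertex accumulates the plane quadrics of its incident faces, and the cost of collapsing an edge is the quadric form evaluated at the surviving position. The curvature weighting shown here (an angle-deficit factor multiplied into the cost) is an assumed stand-in for the paper's discrete-curvature modification.

        import numpy as np

        def face_quadric(p0, p1, p2):
            """Quadric K = pp^T of the plane through a triangle (Garland-Heckbert)."""
            n = np.cross(p1 - p0, p2 - p0)
            n = n / np.linalg.norm(n)
            p = np.append(n, -np.dot(n, p0))        # plane [a, b, c, d]
            return np.outer(p, p)

        def vertex_quadrics(vertices, faces):
            Q = np.zeros((len(vertices), 4, 4))
            for f in faces:
                K = face_quadric(*vertices[list(f)])
                for v in f:
                    Q[v] += K
            return Q

        def angle_deficit(vertices, faces, v):
            """Simple discrete-curvature proxy: 2*pi minus the sum of incident face angles."""
            total = 0.0
            for f in faces:
                if v in f:
                    i = f.index(v)
                    a = vertices[f[(i + 1) % 3]] - vertices[v]
                    b = vertices[f[(i + 2) % 3]] - vertices[v]
                    cosang = a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
                    total += np.arccos(np.clip(cosang, -1.0, 1.0))
            return abs(2 * np.pi - total)

        def collapse_cost(vertices, faces, Q, u, v):
            """QEM cost of collapsing edge (u, v) to v's position, scaled by an
            assumed curvature weight so high-curvature detail is removed last."""
            vbar = np.append(vertices[v], 1.0)
            qem = float(vbar @ (Q[u] + Q[v]) @ vbar)
            return qem * (1.0 + angle_deficit(vertices, faces, u))

        if __name__ == "__main__":
            verts = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]], dtype=float)
            faces = [(0, 1, 2), (0, 1, 3), (0, 2, 3), (1, 2, 3)]
            Q = vertex_quadrics(verts, faces)
            print(collapse_cost(verts, faces, Q, u=0, v=1))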

  4. A Constructivist Approach to Rule Bases

    Sileno, G.; Boer, A.; van Engers, T.; Loiseau, S.; Filipe, J.; Duval, B.; van den Herik, J.


    The paper presents a set of algorithms for the conversion of rule bases between priority-based and constraint-based representations. Inspired by research in precedential reasoning in law, such algorithms can be used for the analysis of a rule base, and for the study of the impact of the introduction

  5. Ranking streamflow model performance based on Information theory metrics

    Martinez, Gonzalo; Pachepsky, Yakov; Pan, Feng; Wagener, Thorsten; Nicholson, Thomas


    Accuracy-based model performance metrics do not necessarily reflect the qualitative correspondence between simulated and measured streamflow time series. The objective of this work was to determine whether information theory-based metrics can be used as a complementary tool for hydrologic model evaluation and selection. We simulated 10-year streamflow time series in five watersheds located in Texas, North Carolina, Mississippi, and West Virginia. Eight models of different complexity were applied. The information theory-based metrics were obtained after representing the time series as strings of symbols from a finite alphabet, where different symbols corresponded to different quantiles of the probability distribution of streamflow. Three metrics were computed for those strings: mean information gain, which measures the randomness of the signal; effective measure complexity, which characterizes predictability; and fluctuation complexity, which characterizes the presence of a pattern in the signal. The observed streamflow time series had smaller information content and larger complexity metrics than the precipitation time series; streamflow was less random and more complex than precipitation, reflecting the fact that the watershed acts as an information filter in the hydrologic conversion from precipitation to streamflow. The Nash-Sutcliffe efficiency increased as model complexity increased, but in many cases several models had efficiency values that were not statistically distinguishable from each other. In such cases, ranking models by the closeness of the information theory-based parameters in simulated and measured streamflow time series can provide an additional criterion for the evaluation of hydrologic model performance.
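
    A minimal sketch of the symbolization step and one of the three metrics is given below: the series is mapped to quantile symbols and the mean information gain is computed as the block-entropy increment H(L+1) - H(L). The quantile count and word length are placeholders, and the effective measure complexity and fluctuation complexity are omitted.

        import numpy as np
        from collections import Counter

        def symbolize(series, n_symbols=4):
            """Map a time series to symbols 0..n_symbols-1 by its own quantiles."""
            qs = np.quantile(series, np.linspace(0, 1, n_symbols + 1)[1:-1])
            return np.searchsorted(qs, series)

        def block_entropy(symbols, L):
            words = [tuple(symbols[i:i + L]) for i in range(len(symbols) - L + 1)]
            counts = np.array(list(Counter(words).values()), dtype=float)
            p = counts / counts.sum()
            return -(p * np.log2(p)).sum()

        def mean_information_gain(symbols, L=3):
            """Entropy of the next symbol given the preceding L-symbol word."""
            return block_entropy(symbols, L + 1) - block_entropy(symbols, L)

        if __name__ == "__main__":
            rng = np.random.default_rng(4)
            streamflow = np.cumsum(rng.standard_normal(2000)) + 100.0   # toy series
            s = symbolize(streamflow)
            print("mean information gain:", mean_information_gain(s))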

  6. Metric-Based Cooperative Routing in Multihop Ad Hoc Networks

    Xin He


    Cooperative communication fully leverages the broadcast nature of wireless channels and exploits time/spatial diversity in a distributed manner, thereby achieving significant improvements in system capacity and transmission reliability. Cooperative diversity has been well studied from the physical layer perspective. Thereafter, cooperative MAC design has also drawn much attention recently. However, very little work has addressed cooperation at the routing layer. In this paper, we propose a simple yet efficient scheme for cooperative routing by using cooperative metrics including packet delivery ratio, throughput, and energy consumption efficiency. To make a routing decision based on our scheme, a node needs to first determine whether cooperation on each link is necessary or not, and if necessary, select the optimal cooperative scheme as well as the optimal relay. To do so, we calculate and compare cooperative routing metric values for each potential relay for each different cooperative MAC scheme (C-ARQ and CoopMAC in this study), and further choose the best value and compare it with the noncooperative link metric. Using the final optimal metric value instead of the traditional metric value at the routing layer, new optimal paths are set up in multihop ad hoc networks, by taking into account the cooperative benefits from the MAC layer. The network performance of the cooperative routing solution is demonstrated using a simple network topology.

  7. Network Tomography Based on Additive Metrics

    Ni, Jian


    Inference of the network structure (e.g., routing topology) and dynamics (e.g., link performance) is an essential component in many network design and management tasks. In this paper we propose a new, general framework for analyzing and designing routing topology and link performance inference algorithms using ideas and tools from phylogenetic inference in evolutionary biology. The framework is applicable to a variety of measurement techniques. Based on the framework we introduce and develop several polynomial-time distance-based inference algorithms with provable performance. We provide sufficient conditions for the correctness of the algorithms. We show that the algorithms are consistent (return correct topology and link performance with an increasing sample size) and robust (can tolerate a certain level of measurement errors). In addition, we establish certain optimality properties of the algorithms (i.e., they achieve the optimal $l_\infty$-radius) and demonstrate their effectiveness via model simulation.

  8. Operator-based metric for nuclear operations automation assessment

    Zacharias, G.L.; Miao, A.X.; Kalkan, A. [Charles River Analytics Inc., Cambridge, MA (United States)] [and others]


    Continuing advances in real-time computational capabilities will support enhanced levels of smart automation and AI-based decision-aiding systems in the nuclear power plant (NPP) control room of the future. To support development of these aids, we describe in this paper a research tool, and more specifically, a quantitative metric, to assess the impact of proposed automation/aiding concepts in a manner that can account for a number of interlinked factors in the control room environment. In particular, we describe a cognitive operator/plant model that serves as a framework for integrating the operator's information-processing capabilities with his procedural knowledge, to provide insight as to how situations are assessed by the operator, decisions made, procedures executed, and communications conducted. Our focus is on the situation assessment (SA) behavior of the operator, the development of a quantitative metric reflecting overall operator awareness, and the use of this metric in evaluating automation/aiding options. We describe the results of a model-based simulation of a selected emergency scenario, and metric-based evaluation of a range of contemplated NPP control room automation/aiding options. The results demonstrate the feasibility of model-based analysis of contemplated control room enhancements, and highlight the need for empirical validation.


    BOLLEN, JOHAN [Los Alamos National Laboratory; RODRIGUEZ, MARKO A. [Los Alamos National Laboratory; VAN DE SOMPEL, HERBERT [Los Alamos National Laboratory


    The evaluation of scholarly communication items is now largely a matter of expert opinion or metrics derived from citation data. Both approaches can fail to take into account the myriad of factors that shape scholarly impact. Usage data has emerged as a promising complement to existing methods of assessment, but the formal groundwork to reliably and validly apply usage-based metrics of scholarly impact is lacking. The Andrew W. Mellon Foundation funded MESUR project constitutes a systematic effort to define, validate and cross-validate a range of usage-based metrics of scholarly impact by creating a semantic model of the scholarly communication process. The constructed model will serve as the basis for creating a large-scale semantic network that seamlessly relates citation, bibliographic and usage data from a variety of sources. A subsequent program that uses the established semantic network as a reference data set will determine the characteristics and semantics of a variety of usage-based metrics of scholarly impact. This paper outlines the architecture and methodology adopted by the MESUR project and its future direction.

  10. Algebraic Error Based Triangulation and Metric of Lines.

    Wu, Fuchao; Zhang, Ming; Wang, Guanghui; Hu, Zhanyi


    Line triangulation, a classical geometric problem in computer vision, is to determine the 3D coordinates of a line based on its 2D image projections from more than two views of cameras with known projection matrices. Compared to point features, line segments are more robust to matching errors, occlusions, and image uncertainties. In addition to line triangulation, a better metric is needed to evaluate 3D errors of line triangulation. In this paper, the line triangulation problem is investigated by using the Lagrange multipliers theory. The main contributions include: (i) Based on the Lagrange multipliers theory, a formula to compute the Plücker correction is provided, and from the formula, a new linear algorithm, LINa, is proposed for line triangulation; (ii) two optimal algorithms, OPTa-I and OPTa-II, are proposed by minimizing the algebraic error; and (iii) two metrics on 3D line space, the orthogonal metric and the quasi-Riemannian metric, are introduced for the evaluation of line triangulations. Extensive experiments on synthetic data and real images are carried out to validate and demonstrate the effectiveness of the proposed algorithms.

  11. Forged seal detection based on the seal overlay metric.

    Lee, Joong; Kong, Seong G; Lee, Young-Soo; Moon, Ki-Woong; Jeon, Oc-Yeub; Han, Jong Hyun; Lee, Bong-Woo; Seo, Joong-Suk


    This paper describes a method for verifying the authenticity of a seal impression imprinted on a document based on the seal overlay metric, which refers to the ratio of an effective seal impression pattern and the noise in the neighborhood of the reference impression region. A reference seal pattern is obtained by taking the average of a number of high-quality impressions of a genuine seal. A target seal impression to be examined, often on paper with some background texts and lines, is segmented out from the background by an adaptive threshold applied to the histogram of color components. The segmented target seal impression is then spatially aligned with the reference by maximizing the count of matching pixels. Then the seal overlay metric is computed for the reference and the target. If the overlay metric of a target seal is below a predetermined limit for the similarity to the genuine, then the target is classified as a forged seal. To further reduce the misclassification rate, the seal overlay metric is adjusted by the filling rate, which reflects the quality of inked pattern of the target seal. Experiment results demonstrate that the proposed method can detect elaborate seal impressions created by advanced forgery techniques such as lithography and computer-aided manufacturing. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.
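
    A toy sketch of the overlay idea is given below: binarized reference and target impressions are aligned by searching integer shifts for the maximum count of matching pixels, and the metric is taken as overlapping (effective) pixels divided by target pixels falling outside the reference region (noise). The pre-binarized masks, the shift search range and this exact ratio are simplifying assumptions rather than the paper's precise formulation.

        import numpy as np

        def best_shift(reference, target, max_shift=10):
            """Align binary masks by maximizing the count of overlapping foreground pixels."""
            best_score, best_dxy = -1, (0, 0)
            for dy in range(-max_shift, max_shift + 1):
                for dx in range(-max_shift, max_shift + 1):
                    shifted = np.roll(np.roll(target, dy, axis=0), dx, axis=1)
                    score = int(np.logical_and(reference, shifted).sum())
                    if score > best_score:
                        best_score, best_dxy = score, (dy, dx)
            return best_dxy

        def overlay_metric(reference, target):
            """Assumed form of the metric: effective (overlapping) pattern divided by
            target pixels that fall outside the reference impression region (noise)."""
            dy, dx = best_shift(reference, target)
            aligned = np.roll(np.roll(target, dy, axis=0), dx, axis=1)
            effective = np.logical_and(reference, aligned).sum()
            noise = np.logical_and(~reference, aligned).sum()
            return effective / (noise + 1e-9)

        if __name__ == "__main__":
            rng = np.random.default_rng(5)
            ref = np.zeros((64, 64), dtype=bool)
            ref[20:44, 20:44] = True                              # reference impression
            genuine = np.roll(ref, 3, axis=1) ^ (rng.random((64, 64)) < 0.01)
            forged = rng.random((64, 64)) < 0.15                  # unrelated pattern
            print("genuine:", overlay_metric(ref, genuine))       # large ratio
            print("forged: ", overlay_metric(ref, forged))        # small ratio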

  12. Spread spectrum image watermarking based on perceptual quality metric.

    Zhang, Fan; Liu, Wenyu; Lin, Weisi; Ngan, King Ngi


    Efficient image watermarking calls for full exploitation of the perceptual distortion constraint. Second-order statistics of visual stimuli are regarded as critical features for perception. This paper proposes a second-order statistics (SOS)-based image quality metric, which considers the texture masking effect and the contrast sensitivity in Karhunen-Loève transform domain. Compared with the state-of-the-art metrics, the quality prediction by SOS better correlates with several subjectively rated image databases, in which the images are impaired by the typical coding and watermarking artifacts. With the explicit metric definition, spread spectrum watermarking is posed as an optimization problem: we search for a watermark to minimize the distortion of the watermarked image and to maximize the correlation between the watermark pattern and the spread spectrum carrier. The simple metric guarantees the optimal watermark a closed-form solution and a fast implementation. The experiments show that the proposed watermarking scheme can take full advantage of the distortion constraint and improve the robustness in return.

  13. Classifiers based on optimal decision rules

    Amin, Talha


    Based on a dynamic programming approach, we design algorithms for the sequential optimization of exact and approximate decision rules relative to length and coverage [3, 4]. In this paper, we use optimal rules to construct classifiers, and study two questions: (i) which rules are better from the point of view of classification: exact or approximate; and (ii) which order of optimization gives better classifier performance: length, length+coverage, coverage, or coverage+length. Experimental results show that, on average, classifiers based on exact rules are better than classifiers based on approximate rules, and sequential optimization (length+coverage or coverage+length) is better than ordinary optimization (length or coverage).
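
    To make the two rule measures concrete, the toy sketch below represents a rule as a list of attribute-value conditions, takes its length as the number of conditions and its coverage as the number of training rows it matches, and classifies by coverage-weighted voting. The rules and data are made up; the dynamic-programming optimization itself is not shown.

        from collections import defaultdict

        # A rule is (conditions, label); a condition is (attribute_index, value).
        rules = [
            ([(0, "sunny"), (1, "high")], "stay_in"),   # length 2
            ([(0, "rain")],               "stay_in"),   # length 1
            ([(1, "normal")],             "go_out"),    # length 1
        ]

        train = [(("sunny", "high"), "stay_in"), (("rain", "normal"), "stay_in"),
                 (("overcast", "normal"), "go_out"), (("sunny", "normal"), "go_out")]

        def matches(conditions, row):
            return all(row[a] == v for a, v in conditions)

        def coverage(conditions, data):
            return sum(matches(conditions, row) for row, _ in data)

        def classify(row):
            votes = defaultdict(float)
            for conditions, label in rules:
                if matches(conditions, row):
                    votes[label] += coverage(conditions, train)   # coverage-weighted vote
            return max(votes, key=votes.get) if votes else "unknown"

        print(classify(("rain", "normal")))   # two rules fire; higher total coverage wins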

  14. Case-Based Behavior Adaptation Using an Inverse Trust Metric


    Floyd, Michael W.; Drinkwater, Michael [Knexus Research Corporation, Springfield...; ...Laboratory (Code 5514), Washington, DC, USA]


    Robots are added to human teams to increase the team's skills or... could result in the humans under-utilizing it, unnecessarily monitoring the robot's actions, or possibly not using it at all. A robot could be

  15. Exploration of SWRL Rule Bases through Visualization, Paraphrasing, and Categorization of Rules

    Hassanpour, Saeed; O'Connor, Martin J.; Das, Amar K.

    Rule bases are increasingly being used as repositories of knowledge content on the Semantic Web. As the size and complexity of these rule bases increases, developers and end users need methods of rule abstraction to facilitate rule management. In this paper, we describe a rule abstraction method for Semantic Web Rule Language (SWRL) rules that is based on lexical analysis and a set of heuristics. Our method results in a tree data structure that we exploit in creating techniques to visualize, paraphrase, and categorize SWRL rules. We evaluate our approach by applying it to several biomedical ontologies that contain SWRL rules, and show how the results reveal rule patterns within the rule base. We have implemented our method as a plug-in tool for Protégé-OWL, the most widely used ontology modeling software for the Semantic Web. Our tool can allow users to rapidly explore content and patterns in SWRL rule bases, enabling their acquisition and management.

  16. Towards Video Quality Metrics Based on Colour Fractal Geometry

    Richard Noël


    Vision is a complex process that integrates multiple aspects of an image: spatial frequencies, topology and colour. Unfortunately, so far all these elements have been considered independently in the development of image and video quality metrics; we therefore propose an approach that blends them together. Our approach allows for the analysis of the complexity of colour images in the RGB colour space, based on the probabilistic algorithm for calculating the fractal dimension and lacunarity. Given that all the existing fractal approaches are defined only for gray-scale images, we extend them to the colour domain. We show how these two colour fractal features capture the multiple aspects that characterize the degradation of the video signal, based on the hypothesis that the quality degradation perceived by the user is directly proportional to the modification of the fractal complexity. We claim that the two colour fractal measures can objectively assess the quality of the video signal and that they can be used as metrics for the user-perceived video quality degradation; we validated them through experimental results obtained for an MPEG-4 video streaming application. Finally, the results are compared against those given by widely accepted metrics and by subjective tests.

  17. Fuzzy metric based on the distance function of plane and its application in optimal scheduling problems

    刘民; 李法朝; 吴澄


    Measuring the difference between fuzzy numbers is often needed in many fuzzy optimization problems, such as manufacturing system production line scheduling in uncertain environments. In this paper, based on the distance function of the plane R² and the level importance function, we establish the UID-metric and LPID-metric for measuring the difference between fuzzy numbers, discuss the basic properties of the UID-metric and LPID-metric, and prove that fuzzy number spaces are metric spaces under the UID-metric and LPID-metric if and only if the level importance function I(λ) ≠ 0 almost everywhere on [0,1]. Further, we discuss the convergence, separability and completeness of the UID-metric and LPID-metric based on the norms of the plane R². Finally, we analyze the characteristics of the UID-metric and LPID-metric through some application examples.

  18. Trust Metric based Soft Security in Mobile Pervasive Environment

    Madhu Sharma Gaur


    In a decentralized and highly dynamic environment like a Mobile Pervasive Environment (MPE), trust and security measurement are two major challenges for the research community. Many architectural frameworks and models have been developed and are in use. In the vision of pervasive computing, mobile applications are growing immensely, with the potential for low-cost, high-performance, and user-centric solutions. This paradigm is highly dynamic and heterogeneous and brings along trust and security challenges regarding vulnerabilities and threats due to its inherent open connectivity. Despite advances in the technology, there is still a lack of methods to measure security and the level of trust, and of frameworks for assessing and calculating the degree of trustworthiness. In this paper, we explore security and trust metric concerns, requirements and challenges in order to decide the trust computation metric parameters for self-adaptive, self-monitoring trust-based security assurance in a mobile pervasive environment. The objective is to identify the trust parameters used while routing and to determine node behavior for a soft-security trust metric. Finally, we set out a security assurance model to deal with the attack and vulnerability requirements of the system under exploration.

  19. Polar Metric-Weighted Norm-Based Scan Matching for Robot Pose Estimation

    Guanglei Huo


    A novel point-to-point scan matching approach is proposed to address the pose estimation and map building issues of mobile robots. Polar Scan Matching (PSM) and Metric-Based Iterative Closest Point (Mb-ICP) are usually employed for point-to-point scan matching tasks. However, because PSM considers the distribution similarity of polar radii in irrelevant regions of the reference and current scans, and Mb-ICP assumes a constant weight in the norm for the rotation angle, they may lead to a mismatching of the reference and current scan in real-world scenarios. In order to obtain better match results and accurate estimation of the robot pose, we introduce a new metric rule, the Polar Metric-Weighted Norm (PMWN), which takes both rotation and translation into account to match the reference and current scan. For robot pose estimation, the heading rotation angle is estimated from the results of correspondence establishment and further corrected by an absolute-value function, and then the geometric property of PMWN called the projected circle is used to estimate the robot translation. Extensive experiments are conducted to evaluate the performance of the PMWN-based approach. The results show that the proposed approach outperforms PSM and Mb-ICP in terms of accuracy, efficiency, and loop closure error of mapping.

  20. Towards Performance Measurement And Metrics Based Analysis of PLA Applications

    Ahmed, Zeeshan


    This article presents a measurement and analysis based approach to help software practitioners manage the additional complexity and variability in software product line applications. The architecture of the proposed approach, ZAC, is designed and implemented to perform preprocessed source code analysis, calculate traditional and product line metrics, and visualize results in two- and three-dimensional diagrams. Experiments using real-time data sets were performed, and they concluded that ZAC can be very helpful for software practitioners in understanding the overall structure and complexity of product line applications. Moreover, the obtained results demonstrate a strong positive correlation between the calculated traditional and product line measures.

  1. Term Based Comparison Metrics for Controlled and Uncontrolled Indexing Languages

    Good, B. M.; Tennis, J. T.


    Introduction: We define a collection of metrics for describing and comparing sets of terms in controlled and uncontrolled indexing languages and then show how these metrics can be used to characterize a set of languages spanning folksonomies, ontologies and thesauri. Method: Metrics for term set characterization and comparison were identified and…

  2. Graphlet Based Metrics for the Comparison of Gene Regulatory Networks

    Martin, Alberto J. M.; Dominguez, Calixto; Contreras-Riquelme, Sebastián; Holmes, David S.; Perez-Acle, Tomas


    Understanding the control of gene expression remains one of the main challenges in the post-genomic era. Accordingly, a plethora of methods exists to identify variations in gene expression levels. These variations underlie almost all relevant biological phenomena, including disease and adaptation to environmental conditions. However, computational tools to identify how regulation changes are scarce. Regulation of gene expression is usually depicted in the form of a gene regulatory network (GRN). Structural changes in a GRN over time and conditions represent variations in the regulation of gene expression. Like other biological networks, GRNs are composed of basic building blocks called graphlets. As a consequence, two new metrics based on graphlets are proposed in this work: REConstruction Rate (REC) and REC Graphlet Degree (RGD). REC determines the rate of graphlet similarity between different states of a network and RGD identifies the subset of nodes with the highest topological variation. In other words, RGD discerns how the GRN was rewired. REC and RGD were used to compare the local structure of nodes in condition-specific GRNs obtained from gene expression data of Escherichia coli, forming biofilms and cultured in suspension. According to our results, most of the network local structure remains unaltered in the two compared conditions. Nevertheless, changes reported by RGD necessarily imply that a different cohort of regulators (i.e. transcription factors (TFs)) appear on the scene, shedding light on how the regulation of gene expression occurs when E. coli transits from suspension to biofilm. Consequently, we propose that both metrics REC and RGD should be adopted as a quantitative approach to conduct differential analyses of GRNs. A tool that implements both metrics is available as an on-line web server (PMID: 27695050).

  3. Inheritance Hierarchy Based Reuse & Reusability Metrics in OOSD

    Nasib S. Gill


    Reuse and reusability are two major aspects of object oriented software which can be measured from an inheritance hierarchy. Reusability is a prerequisite of reuse, but both may or may not be measured using the same metric. This paper characterizes metrics of reuse and reusability in Object Oriented Software Development (OOSD). Reuse metrics compute the extent to which classes have been reused, and reusability metrics compute the extent to which classes can be reused. In this paper five new metrics, namely Breadth of Inheritance Tree (BIT), Method Reuse Per Inheritance Relation (MRPIR), Attribute Reuse Per Inheritance Relation (ARPIR), Generality of Class (GC) and Reuse Probability (RP), have been proposed. These metrics help to evaluate the reuse and reusability of object oriented software. Four extensively validated existing object oriented metrics, namely Depth of Inheritance Tree (DIT), Number of Children (NOC), Method Inheritance Factor (MIF) and Attribute Inheritance Factor (AIF), have been selected and investigated for comparison with the proposed metrics. All metrics can be computed from inheritance hierarchies and classified according to their characteristics. Further, the metrics are evaluated against a case study. These metrics are helpful in comparing alternative inheritance hierarchies at design time to select the best alternative, so that development time and cost can be reduced.
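
    The new metrics (BIT, MRPIR, ARPIR, GC, RP) are not defined in the abstract, so the sketch below computes only the two standard comparison metrics named there, DIT and NOC, from a simple child-to-parent map; the example hierarchy is made up.

        # Inheritance hierarchy as a child -> parent map (None marks a root class).
        parents = {
            "Object": None,
            "Shape": "Object",
            "Circle": "Shape",
            "Square": "Shape",
            "Widget": "Object",
        }

        def depth_of_inheritance_tree(cls):
            """DIT: number of ancestors on the path to the root."""
            depth = 0
            while parents[cls] is not None:
                cls = parents[cls]
                depth += 1
            return depth

        def number_of_children(cls):
            """NOC: count of direct subclasses."""
            return sum(1 for c, p in parents.items() if p == cls)

        for c in parents:
            print(c, "DIT =", depth_of_inheritance_tree(c), "NOC =", number_of_children(c))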

  4. Generalized Rule Induction Based on Immune Algorithm

    郑建国; 刘芳; 焦李成


    A generalized rule induction mechanism for knowledge bases, the immune algorithm, builds an inheritance hierarchy of classes based on the content of their knowledge objects. This hierarchy facilitates group-related processing tasks such as answering set queries, discriminating between objects, finding similarities among objects, etc. Building this hierarchy is a difficult task for knowledge engineers. Conceptual induction may be used to automate or assist engineers in the creation of such a classification structure. This paper introduces a new conceptual rule induction method, which addresses the problem of clustering large amounts of structured objects. The conditions under which the method is applicable are discussed.

  5. Knowledge base rule partitioning design for CLIPS

    Mainardi, Joseph D.; Szatkowski, G. P.


    This paper describes a knowledge base (KB) partitioning approach to solve the problem of real-time performance when using the CLIPS AI shell with large numbers of rules and facts. This work is funded under the joint USAF/NASA Advanced Launch System (ALS) Program as applied research in expert systems to perform vehicle checkout for real-time controller and diagnostic monitoring tasks. The main objective of the Expert System advanced development project (ADP-2302) is to provide robust systems responding to new data frames at 0.1 to 1.0 second intervals. The intelligent system control must be performed within the specified real-time window in order to meet the demands of the given application. Partitioning the KB reduces the complexity of the inferencing Rete net at any given time. This reduced complexity improves performance without undue impact during load and unload cycles. The second objective is to produce highly reliable intelligent systems. This requires simple and automated approaches to the KB verification & validation task. Partitioning the KB reduces rule interaction complexity overall. Reduced interaction simplifies the V&V testing necessary by focusing attention only on individual areas of interest. Many systems require a robustness that involves a large number of rules, most of which are mutually exclusive under different phases or conditions. The ideal solution is to control the knowledge base by loading rules that directly apply for that condition, while stripping out all rules and facts that are not used during that cycle. The practical approach is to cluster rules and facts into associated 'blocks'. A simple approach has been designed to control the addition and deletion of 'blocks' of rules and facts, while allowing real-time operations to run freely. Timing tests for real-time performance for specific machines under R/T operating systems have not been completed but are planned as part of the analysis process to validate the design.

  6. Performance assessment of geospatial simulation models of land-use change--a landscape metric-based approach.

    Sakieh, Yousef; Salmanmahiny, Abdolrassoul


    Performance evaluation is a critical step when developing land-use and cover change (LUCC) models. The present study proposes a spatially explicit model performance evaluation method, adopting a landscape metric-based approach. To quantify GEOMOD model performance, a set of composition- and configuration-based landscape metrics including number of patches, edge density, mean Euclidean nearest neighbor distance, largest patch index, class area, landscape shape index, and splitting index were employed. The model takes advantage of three decision rules including neighborhood effect, persistence of change direction, and urbanization suitability values. According to the results, while class area, largest patch index, and splitting indices demonstrated insignificant differences between spatial pattern of ground truth and simulated layers, there was a considerable inconsistency between simulation results and real dataset in terms of the remaining metrics. Specifically, simulation outputs were simplistic and the model tended to underestimate number of developed patches by producing a more compact landscape. Landscape-metric-based performance evaluation produces more detailed information (compared to conventional indices such as the Kappa index and overall accuracy) on the model's behavior in replicating spatial heterogeneity features of a landscape such as frequency, fragmentation, isolation, and density. Finally, as the main characteristic of the proposed method, landscape metrics employ the maximum potential of observed and simulated layers for a performance evaluation procedure, provide a basis for more robust interpretation of a calibration process, and also deepen modeler insight into the main strengths and pitfalls of a specific land-use change model when simulating a spatiotemporal phenomenon.

  7. Local community detection based on modularity metric G

    Xia, Zhengyou; Gao, Xiangying; Zhang, Xia


    In complex network analysis, the local community detection problem is receiving more and more attention. Because it is often difficult to obtain complete information about a network, such as the World Wide Web, local community detection has been proposed: a community is detected from a given source vertex with only limited knowledge of the entire graph. Previous methods of local community detection are more or less inadequate in some respects. In this paper, we propose a new local modularity metric G and, based on it, a two-phase algorithm. Our method is a greedy addition algorithm that adds vertices into the community until G no longer increases. Compared with previous methods, our method considers a wider range of vertices when calculating the modularity metric, which can improve the quality of community detection. The results of experiments show that, whether on computer-generated random graphs or on real networks, our method can effectively solve the local community detection problem.
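
    The greedy-addition scheme described above can be sketched as follows. Since the metric G itself is not defined in the abstract, a simple internal-versus-boundary edge ratio stands in for it; the stopping rule (stop when no candidate vertex increases the metric) follows the description.

        import networkx as nx

        def local_metric(G, community):
            """Stand-in local modularity: internal edges / (internal + boundary edges)."""
            internal = boundary = 0
            for u in community:
                for v in G.neighbors(u):
                    if v in community:
                        internal += 1     # each internal edge is seen from both endpoints
                    else:
                        boundary += 1
            internal //= 2
            total = internal + boundary
            return internal / total if total else 0.0

        def local_community(G, seed):
            """Greedy addition: add the best neighbouring vertex while the metric increases."""
            community = {seed}
            while True:
                candidates = {v for u in community for v in G.neighbors(u)} - community
                best, best_score = None, local_metric(G, community)
                for v in candidates:
                    score = local_metric(G, community | {v})
                    if score > best_score:
                        best, best_score = v, score
                if best is None:
                    return community
                community.add(best)

        if __name__ == "__main__":
            G = nx.karate_club_graph()
            print(sorted(local_community(G, seed=0)))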

  8. Evaluating Search Engine Relevance with Click-Based Metrics

    Radlinski, Filip; Kurup, Madhu; Joachims, Thorsten

    Automatically judging the quality of retrieval functions based on observable user behavior holds promise for making retrieval evaluation faster, cheaper, and more user centered. However, the relationship between observable user behavior and retrieval quality is not yet fully understood. In this chapter, we expand upon, Radlinski et al. (How does clickthrough data reflect retrieval quality, In Proceedings of the ACM Conference on Information and Knowledge Management (CIKM), 43-52, 2008), presenting a sequence of studies investigating this relationship for an operational search engine on the e-print archive. We find that none of the eight absolute usage metrics we explore (including the number of clicks observed, the frequency with which users reformulate their queries, and how often result sets are abandoned) reliably reflect retrieval quality for the sample sizes we consider. However, we find that paired experiment designs adapted from sensory analysis produce accurate and reliable statements about the relative quality of two retrieval functions. In particular, we investigate two paired comparison tests that analyze clickthrough data from an interleaved presentation of ranking pairs, and find that both give accurate and consistent results. We conclude that both paired comparison tests give substantially more accurate and sensitive evaluation results than the absolute usage metrics in our domain.
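
    For concreteness, the sketch below shows team-draft interleaving, one standard way of producing an interleaved presentation of a ranking pair and crediting clicks to the contributing ranker; it is consistent with the paired tests the chapter describes but is not claimed to be the exact variant used there.

        import random

        def team_draft_interleave(ranking_a, ranking_b, rng=None):
            """Team-draft interleaving: per round, teams (in random order) contribute
            their highest-ranked document not already in the combined list."""
            rng = rng or random.Random(0)
            combined, team_of = [], {}
            ia = ib = 0
            while ia < len(ranking_a) or ib < len(ranking_b):
                order = ["A", "B"] if rng.random() < 0.5 else ["B", "A"]
                for team in order:
                    ranking, idx = (ranking_a, ia) if team == "A" else (ranking_b, ib)
                    while idx < len(ranking) and ranking[idx] in team_of:
                        idx += 1                      # skip documents already placed
                    if idx < len(ranking):
                        combined.append(ranking[idx])
                        team_of[ranking[idx]] = team
                    if team == "A":
                        ia = idx + 1
                    else:
                        ib = idx + 1
            return combined, team_of

        def winner(clicked_docs, team_of):
            """Credit each click to the team that contributed the clicked document."""
            a = sum(1 for d in clicked_docs if team_of.get(d) == "A")
            b = sum(1 for d in clicked_docs if team_of.get(d) == "B")
            return "A" if a > b else "B" if b > a else "tie"

        combined, team_of = team_draft_interleave(["d1", "d2", "d3", "d4"],
                                                  ["d3", "d1", "d5", "d2"])
        print(combined)                       # interleaved list shown to the user
        print(winner(["d3", "d5"], team_of))  # -> "B" for this click pattern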

  9. A task-based quality control metric for digital mammography

    Maki Bloomquist, A. K.; Mainprize, J. G.; Mawdsley, G. E.; Yaffe, M. J.


    A reader study was conducted to tune the parameters of an observer model used to predict the detectability index (dʹ) of test objects as a task-based quality control (QC) metric for digital mammography. A simple test phantom was imaged to measure the model parameters, namely the noise power spectrum, modulation transfer function, and test-object contrast. These are then used in a non-prewhitening observer model, incorporating an eye filter and internal noise, to predict dʹ. The model was tuned by measuring dʹ of discs in a four-alternative forced choice reader study. For each disc diameter, dʹ was used to estimate the threshold thickness for detectability. Data were obtained for six types of digital mammography systems using varying detector technologies and x-ray spectra. A strong correlation was found between measured and modeled values of dʹ, with a Pearson correlation coefficient of 0.96. Repeated measurements from separate images of the test phantom show an average coefficient of variation in dʹ for different systems between 0.07 and 0.10. Standard deviations in the threshold thickness ranged between 0.001 and 0.017 mm. The model is robust and the results are relatively system independent, suggesting that the observer model dʹ shows promise as a cross-platform QC metric for digital mammography.
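
    For reference, a commonly used form of the non-prewhitening-with-eye-filter detectability index assembled from the quantities named above (test-object signal spectrum S, system MTF, eye filter E, noise power spectrum NPS, and an internal-noise term N_int); the exact parameterization tuned in the study may differ:

```latex
d'^{2} =
\frac{\left[\iint |S(u,v)|^{2}\,\mathrm{MTF}^{2}(u,v)\,E^{2}(u,v)\,du\,dv\right]^{2}}
     {\iint |S(u,v)|^{2}\,\mathrm{MTF}^{2}(u,v)\,E^{4}(u,v)\left[\mathrm{NPS}(u,v)+N_{\mathrm{int}}(u,v)\right]du\,dv}
```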

  10. Rule-Based Optimization of Reversible Circuits

    Arabzadeh, Mona; Zamani, Morteza Saheb


    Reversible logic has applications in various research areas including low-power design and quantum computation. In this paper, a rule-based optimization approach for reversible circuits is proposed which uses both negative- and positive-control Toffoli gates during the optimization. To this end, a set of rules for removing NOT gates and optimizing sub-circuits with common-target gates is proposed. To evaluate the proposed approach, the best reported synthesized circuits and the results of a recent synthesis algorithm which uses both negative and positive controls are used. Our experiments reveal the potential of the proposed approach in optimizing synthesized circuits.
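
    The specific rule set is not given in the abstract; as a minimal illustration of rule-based post-processing on a reversible circuit, the sketch below applies one well-known simplification, cancelling adjacent identical gates (Toffoli and NOT gates are self-inverse), to a circuit represented as a list of (target, controls) tuples. This is only an illustrative rule, not the rule set proposed in the paper.

```python
def cancel_adjacent_duplicates(circuit):
    """Remove pairs of adjacent identical gates from a reversible circuit.

    Each gate is a tuple (target, frozenset(controls)); because Toffoli/NOT
    gates are self-inverse, two identical adjacent gates act as the identity.
    A stack is used so that cancellations can cascade.
    """
    simplified = []
    for gate in circuit:
        if simplified and simplified[-1] == gate:
            simplified.pop()          # the pair cancels out
        else:
            simplified.append(gate)
    return simplified

# Example: NOT on wire 0, CNOT(1->2) twice, NOT on wire 0 again.
# The two CNOTs cancel first, then the two NOTs, leaving the identity circuit.
circuit = [(0, frozenset()), (2, frozenset({1})), (2, frozenset({1})), (0, frozenset())]
print(cancel_adjacent_duplicates(circuit))   # []
```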

  11. Rule Based Shallow Parser for Arabic Language

    Mona A. Mohammed


    Problem statement: Shallow syntactic parsing is a language processing approach that computes a basic analysis of sentence structure rather than attempting full syntactic analysis. It is an analysis of a sentence which identifies the constituents (noun groups, verb groups, prepositional groups), but does not specify their internal structure, nor their role in the main sentence. The only technique previously used for Arabic shallow parsing is a Support Vector Machine (SVM) based approach. The problem faced by shallow parser developers is boundary identification, which must be applied to ensure high system accuracy. Approach: The specific objective of the research was to identify the boundaries of all Noun Phrases (NPs), Verb Phrases (VPs) and Prepositional Phrases (PPs) in the Arabic language. This study discussed various idiosyncrasies of Arabic sentences to derive more accurate rules for detecting the start and end boundaries of each clause in an Arabic sentence. New rules were added to the shallow parser features, up to the generation of two levels of the full parse tree. We described an implementation and evaluated the rule-based shallow parser that handles chunking of Arabic sentences. The research was based on a critical analysis of the architecture of Arabic sentences. Results: The system was tested manually on 70 Arabic sentences composed of 1776 words, with sentence lengths between 4 and 50 words. The results obtained were significantly better than the published state-of-the-art Arabic results, which achieved F-scores of 97%. Conclusion: The main achievement is the development of an Arabic shallow parser based on rule-based approaches. Chunking, which constitutes the main contribution, is achieved in two successive stages that include grouped sequences of

  12. A fingerprint based metric for measuring similarities of crystalline structures.

    Zhu, Li; Amsler, Maximilian; Fuhrer, Tobias; Schaefer, Bastian; Faraji, Somayeh; Rostami, Samare; Ghasemi, S Alireza; Sadeghi, Ali; Grauzinyte, Migle; Wolverton, Chris; Goedecker, Stefan


    Measuring similarities/dissimilarities between atomic structures is important for the exploration of potential energy landscapes. However, the cell vectors together with the coordinates of the atoms, which are generally used to describe periodic systems, are quantities not directly suitable as fingerprints to distinguish structures. Based on a characterization of the local environment of all atoms in a cell, we introduce crystal fingerprints that can be calculated easily and define configurational distances between crystalline structures that satisfy the mathematical properties of a metric. This distance between two configurations is a measure of their similarity/dissimilarity and it allows in particular to distinguish structures. The new method can be a useful tool within various energy landscape exploration schemes, such as minima hopping, random search, swarm intelligence algorithms, and high-throughput screenings.

  13. Osteoporosis Recognition Based on Similarity Metric with SVM

    Ke Zhou


    Purpose: To apply different classification techniques to the texture analysis of osteoporotic bone tissue and explore the recognition rates of the different classification methods. Methods: Gray-level co-occurrence matrix (GLCM) and run-length matrix texture analysis were used to extract characteristic parameters from bone tissue slice images, and to classify 4x and 10x microscope images from two groups: the sham (SHAM) group and the ovariectomized (OVX) group. Results: The recognition rate of the similarity-metric support vector machine (SVM) classification algorithm, based on SVM learning, was higher than that of the stand-alone measure, and the classification results were stable. Conclusion: The similarity-metric SVM classification algorithm applied to texture analysis of osteoporotic bone slices yielded a high recognition rate.
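
    A minimal sketch of the GLCM-plus-SVM pipeline described above, assuming scikit-image (graycomatrix/graycoprops, spelled greycomatrix/greycoprops in older releases) and scikit-learn; the distances, angles, and texture properties chosen here are illustrative rather than the study's settings.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.svm import SVC

def glcm_features(image, levels=256):
    """Extract a few GLCM texture features from an 8-bit grayscale image."""
    glcm = graycomatrix(image, distances=[1], angles=[0, np.pi / 2],
                        levels=levels, symmetric=True, normed=True)
    props = ["contrast", "homogeneity", "energy", "correlation"]
    return np.hstack([graycoprops(glcm, p).ravel() for p in props])

def train_classifier(images, labels):
    """Train an SVM on GLCM features; labels mark SHAM vs OVX slices."""
    X = np.array([glcm_features(img) for img in images])
    return SVC(kernel="rbf").fit(X, labels)

# predictions = train_classifier(train_images, train_labels).predict(
#     np.array([glcm_features(img) for img in test_images]))
```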

  14. A fingerprint based metric for measuring similarities of crystalline structures

    Zhu, Li; Fuhrer, Tobias; Schaefer, Bastian; Grauzinyte, Migle; Goedecker, Stefan, E-mail: [Department of Physics, Universität Basel, Klingelbergstr. 82, 4056 Basel (Switzerland); Amsler, Maximilian [Department of Physics, Universität Basel, Klingelbergstr. 82, 4056 Basel (Switzerland); Department of Materials Science and Engineering, Northwestern University, Evanston, Illinois 60208 (United States); Faraji, Somayeh; Rostami, Samare; Ghasemi, S. Alireza [Institute for Advanced Studies in Basic Sciences, P.O. Box 45195-1159, Zanjan (Iran, Islamic Republic of); Sadeghi, Ali [Physics Department, Shahid Beheshti University, G. C., Evin, 19839 Tehran (Iran, Islamic Republic of); Wolverton, Chris [Department of Materials Science and Engineering, Northwestern University, Evanston, Illinois 60208 (United States)


    Measuring similarities/dissimilarities between atomic structures is important for the exploration of potential energy landscapes. However, the cell vectors together with the coordinates of the atoms, which are generally used to describe periodic systems, are quantities not directly suitable as fingerprints to distinguish structures. Based on a characterization of the local environment of all atoms in a cell, we introduce crystal fingerprints that can be calculated easily and define configurational distances between crystalline structures that satisfy the mathematical properties of a metric. This distance between two configurations is a measure of their similarity/dissimilarity and it allows in particular to distinguish structures. The new method can be a useful tool within various energy landscape exploration schemes, such as minima hopping, random search, swarm intelligence algorithms, and high-throughput screenings.

  15. A fingerprint based metric for measuring similarities of crystalline structures

    Zhu, Li; Fuhrer, Tobias; Schaefer, Bastian; Faraji, Somayeh; Rostami, Samara; Ghasemi, S Alireza; Sadeghi, Ali; Grauzinyte, Migle; Wolverton, Christopher; Goedecker, Stefan


    Measuring similarities/dissimilarities between atomic structures is important for the exploration of potential energy landscapes. However, the cell vectors together with the coordinates of the atoms, which are generally used to describe periodic systems, are quantities not suitable as fingerprints to distinguish structures. Based on a characterization of the local environment of all atoms in a cell, we introduce crystal fingerprints that can be calculated easily and make it possible to define configurational distances between crystalline structures that satisfy the mathematical properties of a metric. This distance between two configurations is a measure of their similarity/dissimilarity and, in particular, allows structures to be distinguished. The new method is a useful tool within various energy landscape exploration schemes, such as minima hopping, random search, swarm intelligence algorithms and high-throughput screenings.

  16. Metrics-based assessments of research: incentives for 'institutional plagiarism'?

    Berry, Colin


    The issue of plagiarism, claiming credit for work that is not one's own, rightly continues to cause concern in the academic community. An analysis is presented that shows the effects that may arise from metrics-based assessments of research, when credit for an author's outputs (chiefly publications) is given to an institution that did not support the research but which subsequently employs the author. The incentives for what is termed here "institutional plagiarism" are demonstrated with reference to the UK Research Assessment Exercise, in which submitting units of assessment are shown in some instances to derive around twice the credit for papers produced elsewhere by new recruits, compared to papers produced 'in-house'.

  17. The Algorithm for Rule-base Refinement on Fuzzy Set

    LI Feng; WU Cui-hong; DING Xiang-wu


    In the course of running an artificial intelligence system, many redundant rules are often produced. Refining the knowledge base, i.e., removing the redundant rules, can accelerate reasoning and shrink the rule base. The purpose of the paper is to present this line of thinking and to design an algorithm to remove the redundant rules from the rule base. The "abstraction" of "state variables", redundant rules, and the least rule base are discussed in the paper. The algorithm for refining the knowledge base is also presented.


    Ahmed Jumaa Alsaket


    Arabic machine translation has featured in machine translation projects in recent years. This study concentrates on the translation of Arabic text to its equivalent in the Malay language. The problem addressed by this research is the syntactic and morphological differences between Arabic and Malay adjective sentences. The main aim of this study is to design and develop an Arabic-Malay machine translation model. First, we analyze the role of the adjective in the Arabic and Malay languages. Based on this analysis, we identify the bilingual transfer rules from the source language to the target language so that the translation from source to target language can be performed successfully by computers. Then, we build and implement a machine translation prototype called AMTS to translate from Arabic to Malay based on a rule-based approach. The system is evaluated on a set of simple Arabic sentences. The techniques used to evaluate the correctness of the system's translations are the BLEU metric and human judgment. The results of the BLEU algorithm show that the AMTS system performs better than Google in the translation of Arabic sentences into Malay. In addition, the average accuracy given by human judges is 92.3% for our system and 75.3% for Google.
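
    For reference, the BLEU score used in the evaluation combines modified n-gram precisions $p_n$ (typically up to $N = 4$ with uniform weights $w_n$) with a brevity penalty based on the candidate length $c$ and reference length $r$:

```latex
\mathrm{BLEU} = \mathrm{BP} \cdot \exp\!\left(\sum_{n=1}^{N} w_n \log p_n\right),
\qquad
\mathrm{BP} =
\begin{cases}
1 & \text{if } c > r\\
e^{\,1 - r/c} & \text{if } c \le r
\end{cases}
```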

  19. Texture-based Visualization of Metrics on Software Architectures

    Byelas, Heorhiy; Telea, Alexandru; Spencer, SN


    We present a method that combines textures, blending, and scattered-data interpolation to visualize several metrics defined on overlapping areas-of-interest on UML class diagrams. We aim to simplify the task of visually correlating the distribution and outlier values of a multivariate metric dataset

  20. A methodology for evaluation and selection of nanoparticle manufacturing processes based on sustainability metrics.

    Naidu, Sasikumar; Sawhney, Rapinder; Li, Xueping


    A set of sustainability metrics, covering the economic, environmental and sociological dimensions of sustainability, is developed for the evaluation of nanomanufacturing processes. The metrics are divided into two categories, namely industrial engineering metrics (process and safety metrics) and green chemistry metrics (environmental impact). The waste reduction algorithm (WAR) is used to determine the environmental impact of the processes, and NAIADE (Novel Approach to Imprecise Assessment and Decision Environments) software is used for evaluation and decision analysis. The methodology is applied to three processes used for silica nanoparticle synthesis based on sol-gel and flame methods.

  1. Choosing goals, not rules: deciding among rule-based action plans.

    Klaes, Christian; Westendorff, Stephanie; Chakrabarti, Shubhodeep; Gail, Alexander


    In natural situations, movements are often directed toward locations different from that of the evoking sensory stimulus. Movement goals must then be inferred from the sensory cue based on rules. When there is uncertainty about the rule that applies for a given cue, planning a movement involves both choosing the relevant rule and computing the movement goal based on that rule. Under these conditions, it is not clear whether primates compute multiple movement goals based on all possible rules before choosing an action, or whether they first choose a rule and then only represent the movement goal associated with that rule. Supporting the former hypothesis, we show that neurons in the frontoparietal reach areas of monkeys simultaneously represent two different rule-based movement goals, which are biased by the monkeys' choice preferences. Apparently, primates choose between multiple behavioral options by weighing against each other the movement goals associated with each option.


    Decombas, Marc; Dufaux, Frederic; Renan, Erwann; Pesquet-Popescu, Beatrice; Capman, Francois


    ICIP 2012. We propose a full-reference visual quality metric to evaluate a semantic coding system which may not preserve exactly the position and/or the shape of objects. The metric is based on Scale-Invariant Feature Transform (SIFT) points. More specifically, Structural SIMilarity (SSIM) on windows around the SIFT points measures the compression artifacts (SSIM_SIFT). Conversely, the standard deviation of the matching distance between the SIFT points measures the geometric distortion (GEOMET...

  3. Association Rule Pruning based on Interestingness Measures with Clustering

    R. Bhaskaran


    Association rule mining plays a vital part in knowledge mining. The difficult task is discovering knowledge, or useful rules, from the large number of rules generated at reduced support. For pruning or grouping rules, several techniques are used, such as rule structure cover methods, informative cover methods, rule clustering, etc. Another way of selecting association rules is based on interestingness measures such as support, confidence, correlation, and so on. In this paper, we study how rule clusters of the pattern Xi -> Y are distributed over different interestingness measures.
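
    For a rule $X \rightarrow Y$ over a transaction database $D$, the most common of the interestingness measures mentioned above are defined as follows (lift is one standard way of capturing correlation):

```latex
\mathrm{supp}(X \rightarrow Y) = \frac{|\{t \in D : X \cup Y \subseteq t\}|}{|D|},
\qquad
\mathrm{conf}(X \rightarrow Y) = \frac{\mathrm{supp}(X \cup Y)}{\mathrm{supp}(X)},
\qquad
\mathrm{lift}(X \rightarrow Y) = \frac{\mathrm{conf}(X \rightarrow Y)}{\mathrm{supp}(Y)}
```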

  4. NOC: QOS Metrics Modelling And Analysis Based On Dynamic Routing

    Abdelkader SAADAOUI


    Increasing numbers of heterogeneous software and hardware blocks constitute complex ICs known as Systems on Chip (SoC). These blocks are conceived as intellectual property (IP) cores. Designers develop SoCs by reusing IP cores, which include the interconnection architecture and the interfaces to peripheral devices. Because of the growing complexity of SoCs, some researchers tend to concentrate more on the communication rather than the computation aspect. This area of research has led to the Network on Chip (NoC) concept. The NoC research domain has many applications needing high communication performance, and NoC offers attractive solutions to these applications. One of the goals of NoC technology is to maintain a required Quality of Service (QoS), defined in terms of acceptable parameter values. This paper presents a QoS metrics model based on QoS parameters such as End-to-End Delay (EED) and throughput (Thp), for different applications. The study is based on dynamic routing simulation of the behaviour of a 4x4 mesh NoC under three communication processes, namely TCP, VBR and CBR.

  5. Metric-based Hamiltonians, null boundaries, and isolated horizons

    Booth, I S


    We extend the quasilocal (metric-based) Hamiltonian formulation of general relativity so that it may be used to study regions of spacetime with null boundaries. In particular we use this generalized Brown-York formalism to study the physics of isolated horizons. We show that the first law of isolated horizon mechanics follows directly from the first variation of the Hamiltonian. This variation is not restricted to the phase space of solutions to the equations of motion but is instead through the space of all (off-shell) spacetimes that contain isolated horizons. We find two-surface integrals evaluated on the horizons that are consistent with the Hamiltonian and which define the energy and angular momentum of these objects. These are closely related to the corresponding Komar integrals and for Kerr-Newman spacetime are equal to the corresponding ADM/Bondi quantities. Thus, the energy of an isolated horizon calculated by this method is in agreement with that recently calculated by Ashtekar and collaborators but...

  6. Moral empiricism and the bias for act-based rules.

    Ayars, Alisabeth; Nichols, Shaun


    Previous studies on rule learning show a bias in favor of act-based rules, which prohibit intentionally producing an outcome but not merely allowing the outcome. Nichols, Kumar, Lopez, Ayars, and Chan (2016) found that exposure to a single sample violation in which an agent intentionally causes the outcome was sufficient for participants to infer that the rule was act-based. One explanation is that people have an innate bias to think rules are act-based. We suggest an alternative empiricist account: since most rules that people learn are act-based, people form an overhypothesis (Goodman, 1955) that rules are typically act-based. We report three studies that indicate that people can use information about violations to form overhypotheses about rules. In study 1, participants learned either three "consequence-based" rules that prohibited allowing an outcome or three "act-based" rules that prohibited producing the outcome; in a subsequent learning task, we found that participants who had learned three consequence-based rules were more likely to think that the new rule prohibited allowing an outcome. In study 2, we presented participants with either 1 consequence-based rule or 3 consequence-based rules, and we found that those exposed to 3 such rules were more likely to think that a new rule was also consequence-based. Thus, in both studies, it seems that learning 3 consequence-based rules generates an overhypothesis to expect new rules to be consequence-based. In a final study, we used a more subtle manipulation. We exposed participants to examples of act-based or accident-based (strict liability) laws and then had them learn a novel rule. We found that participants who were exposed to the accident-based laws were more likely to think a new rule was accident-based. The fact that participants' bias for act-based rules can be shaped by evidence from other rules supports the idea that the bias for act-based rules might be acquired as an overhypothesis from the

  7. Sediment transport-based metrics of wetland stability

    Ganju, Neil K.; Kirwan, Matthew L.; Dickhudt, Patrick J.; Guntenspergen, Glenn R.; Cahoon, Donald R.; Kroeger, Kevin D.


    Despite the importance of sediment availability on wetland stability, vulnerability assessments seldom consider spatiotemporal variability of sediment transport. Models predict that the maximum rate of sea level rise a marsh can survive is proportional to suspended sediment concentration (SSC) and accretion. In contrast, we find that SSC and accretion are higher in an unstable marsh than in an adjacent stable marsh, suggesting that these metrics cannot describe wetland vulnerability. Therefore, we propose the flood/ebb SSC differential and organic-inorganic suspended sediment ratio as better vulnerability metrics. The unstable marsh favors sediment export (18 mg L−1 higher on ebb tides), while the stable marsh imports sediment (12 mg L−1 higher on flood tides). The organic-inorganic SSC ratio is 84% higher in the unstable marsh, and stable isotopes indicate a source consistent with marsh-derived material. These simple metrics scale with sediment fluxes, integrate spatiotemporal variability, and indicate sediment sources.
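
    Both proposed metrics reduce to simple statistics over tidal-phase-resolved SSC measurements. A minimal sketch, assuming samples are already separated into flood/ebb and organic/inorganic series in mg/L; the data layout is an assumption, not the paper's:

```python
import numpy as np

def wetland_vulnerability_metrics(ssc_flood, ssc_ebb, ssc_organic, ssc_inorganic):
    """Compute the flood/ebb SSC differential and the organic/inorganic SSC ratio.

    ssc_flood, ssc_ebb         : SSC samples (mg/L) taken on flood and ebb tides
    ssc_organic, ssc_inorganic : organic and inorganic SSC fractions (mg/L)
    A negative differential indicates net export (ebb > flood), as in the
    unstable marsh described above.
    """
    flood_ebb_differential = np.mean(ssc_flood) - np.mean(ssc_ebb)
    organic_inorganic_ratio = np.mean(ssc_organic) / np.mean(ssc_inorganic)
    return flood_ebb_differential, organic_inorganic_ratio
```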

  8. A Trustability Metric for Code Search based on Developer Karma

    Gysin, Florian S


    The promise of search-driven development is that developers will save time and resources by reusing external code in their local projects. To efficiently integrate this code, users must be able to trust it; thus the trustability of code search results is just as important as their relevance. In this paper, we introduce a trustability metric to help users assess the quality of code search results and therefore ease the cost-benefit analysis they undertake when trying to find suitable integration candidates. The proposed trustability metric incorporates both user votes and the cross-project activity of developers to calculate a "karma" value for each developer. Through the karma values of all its developers, a project is ranked on a trustability scale. We present JBender, a proof-of-concept code search engine which implements our trustability metric, and we discuss preliminary results from an evaluation of the prototype.

  9. Comparison of surface-based and image-based quality metrics for the analysis of dimensional computed tomography data

    Francisco A. Arenhart


    This paper presents a comparison of surface-based and image-based quality metrics for dimensional X-ray computed tomography (CT) data. The chosen metrics are used to characterize two key aspects of acquiring signals with CT systems: the loss of information (blurring) and the addition of unwanted information (noise). A set of structured experiments was designed to test the response of the metrics to different influencing factors. It is demonstrated that, under certain circumstances, the results of the two types of metrics become conflicting, emphasizing the importance of using surface information for evaluating the quality of dimensional CT data. Specific findings using both types of metrics are also discussed.

  10. Fuzzification of ASAT's rule based aimpoint selection

    Weight, Thomas H.


    The aimpoint algorithms being developed at Dr. Weight and Associates are based on the concept of fuzzy logic. This approach does not require a particular type of sensor data or algorithm type, but allows the user to develop a fuzzy logic algorithm based on existing aimpoint algorithms and models. This provides an opportunity for the user to upgrade an existing system design to achieve higher performance at minimal cost. Many projects have aimpoint algorithms which are based on 'crisp' rule-based logic. These algorithms are sensitive to glint, corner reflectors, or intermittent thruster firings, and to uncertainties in the a priori estimates of angle of attack. If these projects are continued through to a demonstration involving a launch to hit a target, it is quite possible that the crisp logic approaches will need to be upgraded to handle these important error sources.

  11. A No-Reference Sharpness Metric Based on Structured Ringing for JPEG2000 Images

    Zhipeng Cao


    This work presents a no-reference image sharpness metric based on human blur perception for JPEG2000 compressed images. The metric mainly uses a ringing measure; a blurring measure is used for compensation when the blur is so severe that ringing artifacts are concealed. We used anisotropic diffusion for the preliminary ringing map and refined it by considering the properties of ringing structure. The ringing detection of the proposed metric does not depend on edge detection, which makes it suitable for highly degraded images. The characteristics of the ringing and blurring measures are analyzed and validated theoretically and experimentally. The performance of the proposed metric is tested and compared with that of some existing JPEG2000 sharpness metrics on three widely used databases. The experimental results show that the proposed metric is accurate and reliable in predicting the sharpness of JPEG2000 images.

  12. Thematic investigations in France based on metric camera imagery

    Lecordix, P. Y.


    Spacelab metric camera images were used to study geological features, land use, and forestry, and were compared with other data sources used in cartography. For geological surveys, the metric camera is comparable to SPOT satellite, and better than LANDSAT. For land use, Spacelab images are unsatisfactory in urban areas; woodland and scrub is over-represented due to shadow effects and inclusion of water covered with aquatic plants; forest distribution is well reproduced; sandy features are well identified. For forest inventories, results are surprisingly good, e.g., only 4% error in distinguishing resinous and leafy trees.

  13. Evidence Based Cataloguing: Moving Beyond the Rules

    Kathy Carter


    Cataloguing is sometimes regarded as a rule-bound, production-based activity that offers little scope for professional judgement and decision-making. In reality, cataloguing involves challenging decisions that can have significant service and financial impacts. The current environment for cataloguing is a maelstrom of changing demands and competing visions for the future. With information-seekers turning en masse to Google and their behaviour receiving greater attention, library vendors are offering "discovery layer" products to replace traditional OPACs, and cataloguers are examining and debating a transformed version of their descriptive cataloguing rules (Resource Description and Access, or RDA). In his "Perceptions of the future of cataloging: Is the sky really falling?" (2009), Ivey provides a good summary of this environment. At the same time, myriad new metadata formats and schemas are being developed and applied for digital collections in libraries and other institutions. In today's libraries, cataloguing is no longer limited to the management of traditional AACR and MARC-based metadata for traditional library collections. And like their parent institutions, libraries cannot ignore growing pressures to demonstrate accountability and the tangible value provided by their services. More than ever, research and an evidence based approach can help guide cataloguing decision-making.

  14. A Model-Based Approach to Object-Oriented Software Metrics

    梅宏; 谢涛; 杨芙清


    The need to improve software productivity and software quality has put forward the research on software metrics technology and the development of software metrics tools to support related activities. To support object-oriented software metrics practice effectively, a model-based approach to object-oriented software metrics is proposed in this paper. This approach guides the metrics users to adopt the quality metrics model to measure the object-oriented software products. The development of the model can be achieved by using a top-down approach. This approach explicitly proposes the conception of absolute normalization computation and relative normalization computation for a metrics model. Moreover, a generic software metrics tool - Jade Bird Object-Oriented Metrics Tool (JBOOMT) - is designed to implement this approach. The parser-based approach adopted by the tool makes the information of the source program accurate and complete for measurement. It supports various customizable hierarchical metrics models and provides a flexible user interface for users to manipulate the models. It also supports absolute and relative normalization mechanisms in different situations.

  15. Mining association rule efficiently based on data warehouse

    陈晓红; 赖邦传; 罗铤


    The conventional complete association rule set was replaced by the least association rule set in the data warehouse association rule mining process. The least association rule set should comply with two requirements: 1) it should be the minimal and simplest association rule set; 2) its predictive power should in no way be weaker than that of the complete association rule set, so that the precision of the association rule set analysis can be guaranteed. By adopting the least association rule set, the pruning of weak rules can be carried out effectively so as to greatly reduce the number of frequent itemsets and therefore improve mining efficiency. Finally, based on the classical Apriori algorithm, the upward closure property of weak rules is utilized to develop a corresponding efficient algorithm.

  16. Rule based systems for big data a machine learning approach

    Liu, Han; Cocea, Mihaela


    The ideas introduced in this book explore the relationships among rule based systems, machine learning and big data. Rule based systems are seen as a special type of expert systems, which can be built by using expert knowledge or learning from real data. The book focuses on the development and evaluation of rule based systems in terms of accuracy, efficiency and interpretability. In particular, a unified framework for building rule based systems, which consists of the operations of rule generation, rule simplification and rule representation, is presented. Each of these operations is detailed using specific methods or techniques. In addition, this book also presents some ensemble learning frameworks for building ensemble rule based systems.

  17. A Program Complexity Metric Based on Variable Usage for Algorithmic Thinking Education of Novice Learners

    Fuwa, Minori; Kayama, Mizue; Kunimune, Hisayoshi; Hashimoto, Masami; Asano, David K.


    We have explored educational methods for algorithmic thinking for novices and implemented a block programming editor and a simple learning management system. In this paper, we propose a program/algorithm complexity metric specified for novice learners. This metric is based on the variable usage in arithmetic and relational formulas in learner's…

  18. Idioms-based Business Rule Extraction

    Smit, R


    This thesis studies the extraction of embedded business rules, using the idioms of the framework in use to identify them. Embedded business rules exist as source code in the software system, and knowledge about them may get lost. Extraction of those business rules could make them accessible and manageable

  19. Will Rule based BPM obliterate Process Models?

    Joosten, S.; Joosten, H.J.M.


    Business rules can be used directly for controlling business processes, without reference to a business process model. In this paper we propose to use business rules to specify both business processes and the software that supports them. Business rules expressed in smart mathematical notations bring

  20. Sheaves of metric structures

    Daza, Maicol A Ochoa


    We introduce and develop the theory of metric sheaves. A metric sheaf $\\A$ is defined on a topological space $X$ such that each fiber is a metric model. We describe the construction of the generic model as the quotient space of the sheaf through an appropriate filter. Semantics in this model is completely controlled and understood by the forcing rules in the sheaf.

  1. Target-Based Maintenance of Privacy Preserving Association Rules

    Ahluwalia, Madhu V.


    In the context of association rule mining, the state-of-the-art in privacy preserving data mining provides solutions for categorical and Boolean association rules but not for quantitative association rules. This research fills this gap by describing a method based on discrete wavelet transform (DWT) to protect input data privacy while preserving…

  2. Metrics for Radiologists in the Era of Value-based Health Care Delivery.

    Sarwar, Ammar; Boland, Giles; Monks, Annamarie; Kruskal, Jonathan B


    Accelerated by the Patient Protection and Affordable Care Act of 2010, health care delivery in the United States is poised to move from a model that rewards the volume of services provided to one that rewards the value provided by such services. Radiology department operations are currently managed by an array of metrics that assess various departmental missions, but many of these metrics do not measure value. Regulators and other stakeholders also influence what metrics are used to assess medical imaging. Metrics such as the Physician Quality Reporting System are increasingly being linked to financial penalties. In addition, metrics assessing radiology's contribution to cost or outcomes are currently lacking. In fact, radiology is widely viewed as a contributor to health care costs without an adequate understanding of its contribution to downstream cost savings or improvement in patient outcomes. The new value-based system of health care delivery and reimbursement will measure a provider's contribution to reducing costs and improving patient outcomes with the intention of making reimbursement commensurate with adherence to these metrics. The authors describe existing metrics and their application to the practice of radiology, discuss the so-called value equation, and suggest possible metrics that will be useful for demonstrating the value of radiologists' services to their patients.

  3. Designing Fuzzy Rule Based Expert System for Cyber Security

    Goztepe, Kerim


    The state of cyber security has begun to attract more attention and interest outside the community of computer security experts. Cyber security is not a single problem, but rather a group of highly different problems involving different sets of threats. A fuzzy rule based system for cyber security is a system that consists of a rule depository and a mechanism for accessing and running the rules. The depository is usually constructed from a collection of related rule sets. The aim of this study is to...

  4. Alternatives to accuracy and bias metrics based on percentage errors for radiation belt modeling applications

    Morley, Steven Karl [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)]


    This report reviews existing literature describing forecast accuracy metrics, concentrating on those based on relative errors and percentage errors. We then review how the most common of these metrics, the mean absolute percentage error (MAPE), has been applied in recent radiation belt modeling literature. Finally, we describe metrics based on the ratios of predicted to observed values (the accuracy ratio) that address the drawbacks inherent in using MAPE. Specifically, we define and recommend the median log accuracy ratio as a measure of bias and the median symmetric accuracy as a measure of accuracy.
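
    Written out for predictions $y_i$ and observations $x_i$, the metrics discussed in the report are commonly defined as below (MAPE, the accuracy ratio $Q_i$, a median-log-accuracy-ratio bias measure, and the median symmetric accuracy); the report's exact notation may differ:

```latex
\mathrm{MAPE} = \frac{100}{n}\sum_{i=1}^{n}\left|\frac{y_i - x_i}{x_i}\right|,
\qquad
Q_i = \frac{y_i}{x_i},
\qquad
\mathrm{MdLQ} = \operatorname{median}\big(\ln Q_i\big),
\qquad
\mathrm{MSA} = 100\left(\exp\!\big(\operatorname{median}\,|\ln Q_i|\big) - 1\right)
```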

  5. Metric-based method of software requirements correctness improvement

    Yaremchuk Svitlana


    The work highlights the most important principles of software reliability management (SRM). The SRM concept forms a basis for developing a method of requirements correctness improvement. The method assumes that complicated requirements contain more actual and potential design faults/defects. The method applies a new metric to evaluate requirements complexity and a double sorting technique to evaluate the priority and complexity of a particular requirement. The method improves requirements correctness by enabling the identification of a higher number of defects with restricted resources. Practical application of the proposed method in the course of requirements review yielded a tangible technical and economic effect.



    A method for quality mesh generation of parametric curved surfaces is proposed. The main difference between the proposed method and previous ones is that the meshing process is done completely in the parametric domains with a guarantee of mesh quality. To achieve this aim, the Delaunay method is extended to the anisotropic context of 2D domains, and a Riemannian metric map is introduced to remedy the mapping distortion from object space to the parametric domain. Compared with previous algorithms, the approach is much simpler, more robust and faster. The algorithm is implemented, and examples for several geometries are presented to demonstrate the efficiency and validity of the method.

  7. A C++ Class for Rule-Base Objects

    William J. Grenney


    A C++ class, called Tripod, was created as a tool to assist with the development of rule-base decision support systems. The Tripod class contains data structures for the rule-base and member functions for operating on the data. The rule-base is defined by three ASCII files. These files are translated by a preprocessor into a single file that is loaded when a rule-base object is instantiated. The Tripod class was tested as part of a prototype decision support system (DSS) for winter highway maintenance in the Intermountain West. The DSS is composed of two principal modules: the main program, called the wrapper, and a Tripod rule-base object. The wrapper is a procedural module that interfaces with remote sensors and an external meteorological database. The rule-base contains the logic for advising an inexperienced user and for assisting with the decision making process.

  8. Automatic Induction of Rule Based Text Categorization

    D.Maghesh Kumar


    The automated categorization of texts into predefined categories has witnessed booming interest in the last 10 years, due to the increased availability of documents in digital form and the ensuing need to organize them. In the research community the dominant approach to this problem is based on machine learning techniques: a general inductive process automatically builds a classifier by learning, from a set of preclassified documents, the characteristics of the categories. This paper describes a novel method for the automatic induction of rule-based text classifiers. This method supports a hypothesis language of the form "if T1, … or Tn occurs in document d, and none of Tn+1, … Tn+m occurs in d, then classify d under category c," where each Ti is a conjunction of terms. The survey discusses the main approaches to text categorization that fall within the machine learning paradigm. Issues pertaining to three different problems, namely document representation, classifier construction, and classifier evaluation, are discussed in detail.
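
    A minimal sketch of a classifier in the hypothesis language quoted above: a rule fires for a document d when at least one of its positive term conjunctions occurs in d and none of its negative ones do. The rule contents and the whitespace tokenization are invented for illustration.

```python
def conjunction_occurs(term_conjunction, document_terms):
    """True if every term of the conjunction appears in the document."""
    return all(term in document_terms for term in term_conjunction)

def rule_fires(positive, negative, document_terms):
    """Rule: if any Ti (i <= n) occurs and no Tj (n < j <= n+m) occurs, assign the category."""
    return (any(conjunction_occurs(t, document_terms) for t in positive)
            and not any(conjunction_occurs(t, document_terms) for t in negative))

def classify(document, rules):
    """Return the categories of all rules that fire on the document.
    `rules` is a list of (positive_conjunctions, negative_conjunctions, category) triples."""
    terms = set(document.lower().split())
    return [cat for pos, neg, cat in rules if rule_fires(pos, neg, terms)]

# Illustrative rule: if ("interest", "rate") or ("stock",) occurs and ("sport",) does not,
# classify the document under "finance".
rules = [([("interest", "rate"), ("stock",)], [("sport",)], "finance")]
print(classify("The central bank raised the interest rate today", rules))  # ['finance']
```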

  9. Development of access-based metrics for site location of ground segment in LEO missions

    Hossein Bonyan Khamseh


    The classical metrics for ground segment site location do not take account of the pattern of ground segment access to the satellite. In this paper, based on the pattern of access between the ground segment and the satellite, two metrics for site location of ground segments in Low Earth Orbit (LEO) missions are developed. The two developed access-based metrics are the total accessibility duration and the longest accessibility gap in a given period of time. It is shown that the repeatability cycle is the minimum time interval necessary to study the steady behavior of the two proposed metrics. The system and subsystem characteristics of the satellite represented by each of the metrics are discussed. Incorporating the two proposed metrics, along with the classical ones, in the ground segment site location process results in financial savings in the satellite development phase and reduces the minimum required level of in-orbit autonomy of the satellite. To show the effectiveness of the proposed metrics, simulation results are included for illustration.
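
    Both metrics can be computed directly from the list of access windows over one repeatability cycle. A minimal sketch, assuming non-overlapping (start, end) access intervals in seconds; the data format is an assumption, not the paper's:

```python
def access_based_metrics(access_windows, cycle_start, cycle_end):
    """Compute total accessibility duration and longest accessibility gap.

    access_windows         : list of (start, end) access intervals within one cycle
    cycle_start, cycle_end : bounds of the evaluation period (e.g., one repeatability cycle)
    """
    windows = sorted(access_windows)
    total_duration = sum(end - start for start, end in windows)

    # Gaps before the first access, between accesses, and after the last access.
    gaps = []
    previous_end = cycle_start
    for start, end in windows:
        gaps.append(start - previous_end)
        previous_end = end
    gaps.append(cycle_end - previous_end)
    longest_gap = max(gaps) if gaps else cycle_end - cycle_start

    return total_duration, longest_gap

# Example: three passes over a 6000 s period -> (1800, 2100)
print(access_based_metrics([(100, 700), (2500, 3100), (5200, 5800)], 0, 6000))
```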

  10. Efficient mining of association rules based on gravitational search algorithm

    Fariba Khademolghorani


    Association rule mining is one of the most widely used tools to discover relationships among attributes in a database. Many algorithms have been introduced for discovering these rules. These algorithms have to mine association rules in two separate stages. Most of them mine occurrence rules which are easily predictable by the users. Therefore, this paper discusses the application of a gravitational search algorithm for discovering interesting association rules. This evolutionary algorithm is based on Newtonian gravity and the laws of motion. Furthermore, contrary to previous methods, the method proposed in this study is able to mine the best association rules without generating frequent itemsets and is independent of the minimum support and confidence values. The results of applying this method, in comparison with a method of mining association rules based upon particle swarm optimization, show that our method is successful.

  11. Reducing Inconsistent Rules Based on Irregular Decision Table

    兰轶东; 张霖; 刘连臣


    In this paper, we study the problem of rule extraction from data sets using the rough set method. For inconsistent rules due to improper selection of split-points during discretization, and/or to lack of information, we propose two methods to remove their inconsistency based on irregular decision tables. By using these methods, inconsistent rules are eliminated as far as possible, without affecting the remaining consistent rules. Experimental tests indicate that use of the new method leads to an improvement in the mean accuracy of the extracted rules.

  12. Similarity and rules United: similarity- and rule-based processing in a single neural network.

    Verguts, Tom; Fias, Wim


    A central controversy in cognitive science concerns the roles of rules versus similarity. To gain some leverage on this problem, we propose that rule- versus similarity-based processes can be characterized as extremes in a multidimensional space that is composed of at least two dimensions: the number of features (Pothos, 2005) and the physical presence of features. The transition of similarity- to rule-based processing is conceptualized as a transition in this space. To illustrate this, we show how a neural network model uses input features (and in this sense produces similarity-based responses) when it has a low learning rate or in the early phases of training, but it switches to using self-generated, more abstract features (and in this sense produces rule-based responses) when it has a higher learning rate or is in the later phases of training. Relations with categorization and the psychology of learning are pointed out.

  13. Observationally-Based Data/Model Metrics from the Southern Ocean Climate Model Atlas

    Abell, J.; Russell, J. L.; Goodman, P. J.


    The Southern Ocean Climate Model Atlas makes available observationally-based standardized data/model metrics of the latest simulations of climate and projections of climate change from available climate models. Global climate model simulations differ greatly in the Southern Ocean, so the development of consistent, observationally-based metrics, by which to assess the fidelity of model simulations is essential. We will present metrics showing and quantifying the results of the modern day climate simulations over the Southern Ocean from models submitted as part of the CMIP5/IPCC-AR5 process. Our analysis will focus on the simulations of the temperature, salinity and carbon at various depths and along significant hydrographic sections. The models exhibit different skill levels with various metrics between models and also within individual models.

  14. H-Metric: Characterizing Image Datasets via Homogenization Based on KNN-Queries

    Welington M da Silva


    Precision-Recall is one of the main metrics for evaluating content-based image retrieval techniques. However, it does not provide an ample perception of the properties of an image dataset immersed in a metric space. In this work, we describe an alternative metric named H-Metric, which is determined along a sequence of controlled modifications of the image dataset. The process is named homogenization and works by altering the homogeneity characteristics of the classes of images. The result is a process that measures how hard it is to deal with a set of images with respect to content-based retrieval, offering support in the task of analyzing configurations of distance functions and feature extractors.

  15. Link prediction based on temporal similarity metrics using continuous action set learning automata

    Moradabadi, Behnaz; Meybodi, Mohammad Reza


    Link prediction is a social network research area that tries to predict future links using network structure. The main approaches in this area predict future links using the network structure at a specific period, without considering link behavior across different periods. For example, a common traditional approach in link prediction calculates a chosen similarity metric for each non-connected pair and outputs the pairs with higher similarity scores as the prediction result. In this paper, we propose a new link prediction method based on temporal similarity metrics and Continuous Action-set Learning Automata (CALA). The proposed method takes advantage of using different similarity metrics as well as different time periods. In the proposed algorithm, we model the link prediction problem as a noisy optimization problem and use a team of CALAs to solve it. A CALA is a reinforcement-based optimization tool which tries to learn the optimal behavior from environment feedback. To determine the importance of the different periods and similarity metrics for the prediction result, we define a coefficient for each period and each similarity metric and use one CALA per coefficient. Each CALA tries to learn the true value of the corresponding coefficient. The final link prediction is obtained from a combination of the different similarity metrics at different times based on the learned coefficients. The link prediction results reported here show the satisfactory performance of the proposed method on several social network data sets.
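
    As a minimal illustration of combining a similarity metric across time periods (leaving out the CALA learning loop itself), the sketch below scores a candidate link by a weighted sum of common-neighbor counts over several snapshots; the fixed weights stand in for the coefficients that the CALAs would learn, and networkx is assumed.

```python
import networkx as nx

def temporal_similarity(snapshots, weights, u, v):
    """Weighted common-neighbors score of the candidate link (u, v) across snapshots.

    snapshots : list of networkx graphs, one per time period (oldest first)
    weights   : one coefficient per snapshot; in the paper these are learned by CALAs,
                here they are simply supplied by the caller.
    """
    score = 0.0
    for graph, w in zip(snapshots, weights):
        if u in graph and v in graph:
            score += w * len(list(nx.common_neighbors(graph, u, v)))
    return score

def predict_links(snapshots, weights, top_k=10):
    """Rank non-connected pairs of the latest snapshot by the temporal similarity score."""
    latest = snapshots[-1]
    scored = [((u, v), temporal_similarity(snapshots, weights, u, v))
              for u, v in nx.non_edges(latest)]
    return sorted(scored, key=lambda item: item[1], reverse=True)[:top_k]
```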

  16. TerrorCat: a translation error categorization-based MT quality metric


    We present TerrorCat, a submission to the WMT’12 metrics shared task. TerrorCat uses the frequencies of automatically obtained translation error categories as the basis for pairwise comparison of translation hypotheses, which is in turn used to generate a score for every translation. The metric shows high overall correlation with human judgements on the system level and more modest results on the level of individual sentences.

  17. Incremental Learning of Context Free Grammars by Parsing-Based Rule Generation and Rule Set Search

    Nakamura, Katsuhiko; Hoshina, Akemi

    This paper discusses recent improvements and extensions to the Synapse system for inductive inference of context free grammars (CFGs) from sample strings. Synapse uses incremental learning, rule generation based on bottom-up parsing, and a search for rule sets. The form of production rules in the previous system is extended from Revised Chomsky Normal Form A→βγ to Extended Chomsky Normal Form, which also includes A→B, where each of β and γ is either a terminal or a nonterminal symbol. From the result of bottom-up parsing, a rule generation mechanism synthesizes the minimum production rules required for parsing positive samples. Instead of the inductive CYK algorithm in the previous version of Synapse, the improved version uses a novel rule generation method, called ``bridging,'' which bridges the missing part of the derivation tree for a positive string. The improved version also employs a novel search strategy, called serial search, in addition to minimum rule set search. The synthesis of grammars by the serial search is faster than by the minimum set search in most cases. On the other hand, the size of the generated CFGs is generally larger than that obtained by the minimum set search, and the system can find no appropriate grammar for some CFLs by the serial search. The paper shows experimental results of incremental learning of several fundamental CFGs and compares the methods of rule generation and the search strategies.

  18. Green Net Value Added as a Sustainability Metric Based on ...

    Sustainability measurement in economics involves evaluation of environmental and economic impact in an integrated manner. In this study, system level economic data are combined with environmental impact from a life cycle assessment (LCA) of a common product. We are exploring a costing approach that captures traditional costs but also incorporates externality costs to provide a convenient, easily interpretable metric. Green Net Value Added (GNVA) is a type of full cost accounting that incorporates total revenue, the cost of materials and services, depreciation, and environmental externalities. Two, but not all, of the potential environmental impacts calculated by the standard LCIA method (TRACI) could be converted to externality cost values. We compute externality costs disaggregated by upstream sectors, full cost, and GNVA to evaluate the relative sustainability of Bounty® paper towels manufactured at two production facilities. We found that the longer running, more established line had a higher GNVA than the newer line. The dominant factors contributing to externality costs are calculated to come from the stationary sources in the supply chain: electricity generation (27-35%), refineries (20-21%), pulp and paper making (15-23%). Health related externalities from Particulate Matter (PM2.5) and Carbon Dioxide equivalent (CO2e) emissions appear largely driven by electricity usage and emissions by the facilities, followed by pulp processing and transport. Supply

  19. Rule Generation Based On Dominance Matrices and Functions

    安利平; 陈增强; 袁著祉; 仝凌云


    Rough set theory has proved to be a useful tool for rule induction. However, the theory based on the indiscernibility relation or the similarity relation cannot induce rules from decision tables with criteria. Greco et al. have proposed a new rough set approach based on the dominance relation to handle this problem. In this paper, the concept of the dominance matrix is put forward and the dominance function is constructed to compute minimal decision rules that are more general and applicable than the ones induced by classical rough set theory. In addition, a simplification methodology is presented to eliminate the redundancy in the rule set.

  20. Optical Generation of Fuzzy-Based Rules

    Gur, Eran; Mendlovic, David; Zalevsky, Zeev


    In the last third of the 20th century, fuzzy logic has risen from a mathematical concept to an applicable approach in soft computing. Today, fuzzy logic is used in control systems for various applications, such as washing machines, train-brake systems, automobile automatic gear, and so forth. The approach of optical implementation of fuzzy inferencing was given by the authors in previous papers, giving an extra emphasis to applications with two dominant inputs. In this paper the authors introduce a real-time optical rule generator for the dual-input fuzzy-inference engine. The paper briefly goes over the dual-input optical implementation of fuzzy-logic inferencing. Then, the concept of constructing a set of rules from given data is discussed. Next, the authors show ways to implement this procedure optically. The discussion is accompanied by an example that illustrates the transformation from raw data into fuzzy set rules.

  1. Client Device Based Content Adaptation Using Rule Base



    Problem statement: Content adaptation has been playing an important role for mobile devices, whose content display differs from desktop computers in many aspects, such as display screens, processing power and network connection bandwidth. In order to display web contents appropriately on mobile devices and on other types of devices such as handheld computers, PDAs and smart phones, it is important to adapt or transcode them to fit the characteristics of these devices. Approach: Existing content adaptation systems deploy various techniques which have been developed for specific purposes and goals. By exploiting various possible combinations of available resources, an appropriate adaptation process can be carried out on the actual data, so that the information can be assimilated on an end system other than the intended one. In this study, we present a content adaptation system based on rules created for mobile devices. Rules are invoked based on the individual client device information. Results: The adaptation is performed according to the delivery device, which is formalized through the profiler system. A profile holds information about the hardware and software specifications of the device, thereby enabling the adaptation of web content based on device characteristics and allowing the user to access the web easily on various devices. Conclusion/Recommendation: This study enhances the viability of the information being presented to the user, which is independent of the end system being used for accessing the information. With the help of configurable rules, effective content adaptation can be achieved to provide optimal results.

  2. A finite element-based injury metric for pulmonary contusion: investigation of candidate metrics through correlation with computed tomography.

    Gayzik, F Scott; Hoth, J Jason; Daly, Melissa; Meredith, J Wayne; Stitzel, Joel D


    Pulmonary contusion (PC) is the most common thoracic soft tissue injury following non-penetrating blunt trauma and has been associated with mortality rates as high as 25%. This study is part of an ongoing effort to develop a finite element based injury criterion for PC. The aims of this study are twofold. The first is to investigate the use of computed tomography (CT) to quantify the volume of pathologic lung tissue in a prospective study of PC. The second is to use a finite element model (FEM) of the lung to investigate several mathematical predictors of contusion to determine the injury metric that best matches the spatial distribution of contusion obtained from the CT analysis. PC is induced in-situ utilizing male Sprague Dawley rats (n = 24) through direct impact to the right lung at 5.0 ms(-1). Force vs. deflection data are collected and used for model validation and optimization. CT scans are taken at 24 hours, 48 hours, 1 week, and 1 month post contusion. A numerical simulation is performed using an FEM of the rat lung and surrounding structures. Injury predictors investigated include maximum first principal strain, maximum shear strain, triaxial mean strain, octahedral shear stress, and maximum shear stress. Strain rate and the product of strain and strain rate are evaluated for all listed strains. At each post-impact time point, the volume of contused lung is used to determine the specific elements representing pathologic lung. Through this method, a threshold is determined for all listed metrics. The spatial distribution of the elements exceeding this threshold is compared to the spatial distribution of high-radiopacity lung tissue in the CT through a three-dimensional registration technique to determine the predictor with the best correlation to the outcome. Impacts resulted in a mean energy input to the lung of 8.74 +/- 2.5 mJ. Segmentation of the imaging data yielded a mean unilateral high-radiopacity tissue estimate of 14.5% by volume at 24 hours with

  3. Gradient Analysis of Urbanization Pattern Based on Class-level Landscape Metrics for Shanghai Region

    HUANG Zhen; ZHANG Li-quan; CHEN Liang


    The spatial pattern of urbanization in the Shanghai metropolitan area is quantified with a GIS-based land use data set and gradient analysis of landscape metrics. A number of landscape metrics were computed along a 64 km long and 6 km wide west-east transect and another 66 km long and 6 km wide south-north transect. The results of transect analysis with class-level metrics showed that the spatial pattern of urbanization could be reliably quantified using landscape metrics with a gradient analysis approach, and the location of the urbanization center could be identified precisely and consistently with multiple indices of the landscape metrics used in this study. Different land use types exhibited distinctive, but not necessarily unique, spatial signatures that were dependent on specific landscape metrics. These results seemed to characterize the urban core of the Shanghai metropolitan area rather accurately and precisely: agriculture patches were abundant and less fragmented; the urban land use types were extensive, having many small patches and being highly fragmented.

  4. Study on Overburden’s Destructive Rules Based on Similar Material Simulation

    Ruyu Zheng


    Full Text Available Building on an analysis of the current research status, this article studies the dynamic subsidence principles of overburden rock strata during coal mining based on a similar material simulation test. A close-range industrial photogrammetric system was introduced to collect data. After coordinate transformation, matching and model amendment, dynamic subsidence curves were obtained, which can be used to analyze the continuity characteristics of overburden subsidence, changes of the boundary angle and displacement angle, the volume transfer law from rock to surface, etc. The result is useful in further study of the dynamic rule of overburden strata and enriches mining subsidence principles.

  5. A no-reference video quality assessment metric based on ROI

    Jia, Lixiu; Zhong, Xuefei; Tu, Yan; Niu, Wenjuan


    A no-reference video quality assessment metric based on the region of interest (ROI) was proposed in this paper. In the metric, objective video quality was evaluated by integrating the quality of two compression artifacts, i.e. blurring distortion and blocking distortion. The Gaussian kernel function was used to extract the human density maps of the H.264 coded videos from the subjective eye tracking data. An objective bottom-up ROI extraction model was built based on the magnitude discrepancy of the discrete wavelet transform between two consecutive frames, a center-weighted color opponent model, a luminance contrast model and a frequency saliency model based on spectral residual. Then only the objective saliency maps were used to compute the objective blurring and blocking quality. The results indicate that the objective ROI extraction metric has a higher area under the curve (AUC) value. Compared with conventional video quality assessment metrics, which measure all the video frames, the metric proposed in this paper not only decreased the computational complexity, but also improved the correlation between the subjective mean opinion score (MOS) and objective scores.

  6. Target recognition based on modified combination rule

    Chen Tianlu; Que Peiwen


    Evidence theory is widely used in the field of target recognition. The invalidation problem of this theory when dealing with highly conflicting evidence is a research hotspot. Several alternatives to the combination rule are analyzed and compared, and a new combination approach is proposed. The reliabilities of the evidence sources are calculated from the existing evidence, reliability judgement matrixes are constructed to obtain the weight of each evidence source, all input evidence is weight-averaged, and the processed evidence is then combined repeatedly with the D-S combination rule to identify a target. The application to multi-sensor target recognition, as well as the comparison with typical alternatives, validates that this approach can handle highly conflicting evidence efficiently and obtain reasonable recognition results rapidly.
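
    As a rough illustration of the weighted-averaging idea described above (not the authors' exact algorithm), the sketch below represents each piece of evidence as a mass function over a two-hypothesis frame, weight-averages the evidence with assumed reliability weights, and then combines the averaged mass function with itself n-1 times using Dempster's rule. All numbers, including the weights, are illustrative.

      # Illustrative sketch of weighted evidence averaging followed by repeated
      # Dempster combination; mass functions are {frozenset: mass} dicts.
      from itertools import product

      def dempster(m1, m2):
          """Dempster's rule of combination for two mass functions."""
          combined, conflict = {}, 0.0
          for (a, pa), (b, pb) in product(m1.items(), m2.items()):
              inter = a & b
              if inter:
                  combined[inter] = combined.get(inter, 0.0) + pa * pb
              else:
                  conflict += pa * pb
          return {s: v / (1.0 - conflict) for s, v in combined.items()}

      def weighted_average(masses, weights):
          """Weight-average a list of mass functions (weights assumed normalised)."""
          avg = {}
          for m, w in zip(masses, weights):
              for s, v in m.items():
                  avg[s] = avg.get(s, 0.0) + w * v
          return avg

      A, B = frozenset({"A"}), frozenset({"B"})
      evidences = [{A: 0.9, B: 0.1}, {A: 0.0, B: 1.0}, {A: 0.8, B: 0.2}]  # highly conflicting
      weights = [0.45, 0.10, 0.45]          # e.g. derived from a reliability judge matrix
      avg = weighted_average(evidences, weights)
      result = avg
      for _ in range(len(evidences) - 1):   # combine averaged evidence n-1 times
          result = dempster(result, avg)
      print(result)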

  7. Comparison of the sustainability metrics of the petrochemical and biomass-based routes to methionine

    Sanders, J.P.M.; Sheldon, R.A.


    Sustainability metrics, based on material efficiency, energy input, land use and costs, of three processes for the manufacture of methionine are compared. The petrochemical process affords dl-methionine while the two biomass-based routes afford the l-enantiomer. From the point of view of the major

  8. Accelerating Time-Varying Hardware Volume Rendering Using TSP Trees and Color-Based Error Metrics

    Ellsworth, David; Chiang, Ling-Jen; Shen, Han-Wei; Kwak, Dochan (Technical Monitor)


    This paper describes a new hardware volume rendering algorithm for time-varying data. The algorithm uses the Time-Space Partitioning (TSP) tree data structure to identify regions within the data that have spatial or temporal coherence. By using this coherence, the rendering algorithm can improve performance when the volume data is larger than the texture memory capacity by decreasing the amount of textures required. This coherence can also allow improved speed by appropriately rendering flat-shaded polygons instead of textured polygons, and by not rendering transparent regions. To reduce the polygonization overhead caused by the use of the hierarchical data structure, we introduce an optimization method using polygon templates. The paper also introduces new color-based error metrics, which more accurately identify coherent regions compared to the earlier scalar-based metrics. By showing experimental results from runs using different data sets and error metrics, we demonstrate that the new methods give substantial improvements in volume rendering performance.

  9. Fuzzy rule-based support vector regression system

    Ling WANG; Zhichun MU; Hui GUO


    In this paper, we design a fuzzy rule-based support vector regression system. The proposed system utilizes the advantages of the fuzzy model and support vector regression to extract support vectors and generate fuzzy if-then rules from the training data set. Based on the first-order linear Takagi-Sugeno (TS) model, the structure of the rules is identified by the support vector regression and then the consequent parameters of the rules are tuned by the global least squares method. Our model is applied to a real-world regression task. The simulation results give promising performance in terms of a set of fuzzy rules, which can be easily interpreted by humans.

  10. Moving from gamma passing rates to patient DVH-based QA metrics in pretreatment dose QA

    Zhen, Heming; Nelms, Benjamin E.; Tome, Wolfgang A. [Department of Medical Physics, University of Wisconsin, Madison, Wisconsin 53705 (United States); Department of Human Oncology, University of Wisconsin, Madison, Wisconsin 53792 and Canis Lupus LLC, Merrimac, Wisconsin 53561 (United States); Department of Medical Physics, University of Wisconsin, Madison, Wisconsin 53705 and Department of Human Oncology, University of Wisconsin, Madison, Wisconsin 53792 (United States)


    Purpose: The purpose of this work is to explore the usefulness of the gamma passing rate metric for per-patient, pretreatment dose QA and to validate a novel patient-dose/DVH-based method and its accuracy and correlation. Specifically, the following are analyzed: (1) correlations between gamma passing rates for three 3D dosimeter detector geometries and clinically relevant patient DVH-based metrics; (2) correlations between gamma passing rates of whole patient dose grids and DVH-based metrics; (3) correlations between gamma passing rates filtered by region of interest (ROI) and DVH-based metrics; and (4) the capability of a novel software algorithm that estimates corrected patient dose/DVH based on conventional phantom QA data. Methods: Ninety-six unique "imperfect" step-and-shoot IMRT plans were generated by applying four different types of errors to 24 clinical head/neck patients. The 3D patient doses as well as the dose to a cylindrical QA phantom were then recalculated using an error-free beam model to serve as a simulated measurement for comparison. Resulting deviations of the planned vs simulated measured DVH-based metrics were generated, as were gamma passing rates for a variety of difference/distance criteria covering dose-in-phantom comparisons and dose-in-patient comparisons, with the in-patient results calculated both over the whole grid and per-ROI volume. Finally, patient dose and DVH were predicted using the conventional per-beam planar data as input into a commercial "planned dose perturbation" (PDP) algorithm, and the results of these predicted DVH-based metrics were compared to the known values. Results: A range of weak to moderate correlations were found between clinically relevant patient DVH metrics (CTV-D95, parotid D_mean, spinal cord D1cc, and larynx D_mean) and both 3D detector and 3D patient gamma passing rates (3%/3 mm, 2%/2 mm) for dose-in-phantom along with dose-in-patient for both the whole patient volume and filtered per-ROI. There was
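
    For readers unfamiliar with the gamma passing rate referred to throughout this abstract, the following is a minimal, brute-force sketch of a global gamma evaluation (dose difference plus distance-to-agreement) on two aligned 3D dose grids. It is not the study's analysis code or the PDP algorithm; the grid spacing, criteria values and low-dose cutoff are illustrative assumptions.

      # Brute-force global gamma passing rate on aligned dose grids (sketch).
      import numpy as np

      def gamma_passing_rate(ref, eval_, spacing_mm, dd=0.03, dta_mm=3.0, cutoff=0.1):
          dmax = ref.max()
          shape = ref.shape
          r = int(np.ceil(dta_mm / min(spacing_mm)))            # search radius in voxels
          offsets = [(i, j, k) for i in range(-r, r + 1)
                                for j in range(-r, r + 1)
                                for k in range(-r, r + 1)]
          passed = total = 0
          for idx in np.ndindex(shape):
              if ref[idx] < cutoff * dmax:                       # ignore low-dose region
                  continue
              total += 1
              best = np.inf
              for off in offsets:
                  nb = tuple(i + o for i, o in zip(idx, off))
                  if any(n < 0 or n >= s for n, s in zip(nb, shape)):
                      continue
                  dist2 = sum((o * s) ** 2 for o, s in zip(off, spacing_mm))
                  dose2 = (eval_[nb] - ref[idx]) ** 2
                  best = min(best, dose2 / (dd * dmax) ** 2 + dist2 / dta_mm ** 2)
              passed += best <= 1.0
          return passed / total

      ref = np.random.default_rng(1).random((10, 10, 10)) * 2.0   # toy dose grids (Gy)
      evl = ref * (1.0 + 0.02 * np.random.default_rng(2).standard_normal(ref.shape))
      print(gamma_passing_rate(ref, evl, spacing_mm=(2.0, 2.0, 2.0)))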

  11. Rank-Based Methods for Selection of Landscape Metrics for Land Cover Pattern Change Detection

    Priyakant Sinha


    Full Text Available Often landscape metrics are not thoroughly evaluated with respect to remote sensing data characteristics, such as their behavior in relation to variation in spatial and temporal resolution, number of land cover classes or dominant land cover categories. In such circumstances, it may be difficult to ascertain whether a change in a metric is due to landscape pattern change or due to the inherent variability in multi-temporal data. This study builds on this important consideration and proposes a rank-based metric selection process through computation of four difference-based indices (β, γ, ξ and θ) using a Max–Min/Max normalization approach. Land cover classification was carried out for two contrasting provinces, the Liverpool Range (LR) and Liverpool Plains (LP), of the Brigalow Belt South Bioregion (BBSB) of NSW, Australia. Landsat images, Multi Spectral Scanner (MSS) of 1972–1973 and TM of 1987–1988, 1993–1994, 1999–2000 and 2009–2010, were classified using object-based image analysis methods. A total of 30 landscape metrics were computed and their sensitivities towards variation in spatial and temporal resolutions, number of land cover classes and dominant land cover categories were evaluated by computing a score based on Max–Min/Max normalization. The landscape metrics selected on the basis of the proposed methods (Diversity index (MSIDI), Area weighted mean patch fractal dimension (SHAPE_AM), Mean core area (CORE_MN), Total edge (TE), No. of patches (NP), Contagion index (CONTAG), Mean nearest neighbor index (ENN_MN) and Mean patch fractal dimension (FRAC_MN)) were successful and effective in identifying changes over five different change periods. Major changes in land cover pattern after 1993 were observed, and though the trends were similar in both cases, the LP region became more fragmented than the LR. The proposed method was straightforward to apply, and can deal with multiple metrics when selection of an appropriate set can become
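
    The Max–Min/Max normalization used for scoring metric sensitivity can be sketched in a few lines. The snippet below is only one plausible reading of that step (the paper's four indices β, γ, ξ and θ are defined more elaborately), and the metric values are toy numbers.

      # Score how sensitive each landscape metric is across data configurations.
      import numpy as np

      # rows: landscape metrics; columns: the same landscape computed under
      # different spatial resolutions / class aggregations (toy numbers)
      values = {
          "NP":     [120.0, 95.0, 180.0],
          "CONTAG": [55.2, 54.8, 56.1],
          "TE":     [4.1e5, 3.9e5, 4.4e5],
      }

      def max_min_max(v):
          v = np.asarray(v, dtype=float)
          return (v.max() - v.min()) / v.max()   # 0 = insensitive, 1 = highly sensitive

      scores = {m: max_min_max(v) for m, v in values.items()}
      ranking = sorted(scores, key=scores.get)    # most stable metrics first
      print(scores, ranking)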

  12. Detection of image quality metamers based on the metric for unified image quality

    Miyata, Kimiyoshi; Tsumura, Norimichi


    In this paper, we introduce the concept of image quality metamerism as an expanded version of the metamerism defined in color science. The concept is used to unify different image quality attributes, and applied to introduce a metric showing the degree of image quality metamerism in the analysis of a cultural property. Our global goal is to build a metric to evaluate the total quality of images acquired by different imaging systems and observed under different viewing conditions. As a basic step towards the global goal, the metric in this research consists of color, spectral and texture information, and is applied to detect image quality metamers to investigate the cultural property. The property investigated is the oldest extant version of folding screen paintings that depict the thriving city of Kyoto, designated as a nationally important cultural property in Japan. Gold colored areas, painted using colorants of higher granularity than other color areas in the property, are evaluated based on the metric, and the metric is then visualized as a map showing the possibility of an image quality metamer to the reference pixel.

  13. SU-E-T-572: A Plan Quality Metric for Evaluating Knowledge-Based Treatment Plans.

    Chanyavanich, V; Lo, J; Das, S


    In prostate IMRT treatment planning, the variation in patient anatomy makes it difficult to estimate a priori the potentially achievable extent of dose reduction possible to the rectum and bladder. We developed a mutual information-based framework to estimate the achievable plan quality for a new patient, prior to any treatment planning or optimization. The knowledge base consists of 250 retrospective prostate IMRT plans. Using these prior plans, twenty query cases were each matched with five cases from the database. We propose a simple DVH plan quality metric (PQ) based on the weighted sum of the areas under the curve (AUC) of the PTV, rectum and bladder. We evaluate the plan quality of knowledge-based generated plans, and establish a correlation between plan quality and case similarity. The introduced plan quality metric correlates well (r2 = 0.8) with the mutual similarity between cases. A matched case with high anatomical similarity can be used to produce a new high quality plan. Not surprisingly, a poorly matched case with a low degree of anatomical similarity tends to produce a low quality plan, since the adapted fluences from a dissimilar case cannot be modified sufficiently to yield acceptable PTV coverage. The plan quality metric is well correlated to the degree of anatomical similarity between a new query case and matched cases. Further work will investigate how to apply this metric to further stratify and select cases for knowledge-based planning. © 2012 American Association of Physicists in Medicine.
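
    A weighted-sum-of-AUC plan quality score of the kind described above can be sketched as follows. The weights, sign convention and synthetic DVH curves are illustrative assumptions, not the values used in the study.

      # Weighted sum of DVH areas under the curve as a plan quality score (sketch).
      import numpy as np

      def dvh_auc(dose_bins, fractional_volume):
          """Area under a cumulative DVH curve (dose in Gy, volume as fraction)."""
          return np.trapz(fractional_volume, dose_bins)

      def plan_quality(dvhs, weights):
          """dvhs: {structure: (dose_bins, volume)}, weights: {structure: float}."""
          return sum(w * dvh_auc(*dvhs[s]) for s, w in weights.items())

      dose = np.linspace(0.0, 80.0, 81)
      dvhs = {
          "PTV":     (dose, np.clip(1.2 - dose / 75.0, 0.0, 1.0)),
          "rectum":  (dose, np.exp(-dose / 25.0)),
          "bladder": (dose, np.exp(-dose / 30.0)),
      }
      weights = {"PTV": 1.0, "rectum": -0.5, "bladder": -0.5}  # reward coverage, penalise OAR dose
      print(plan_quality(dvhs, weights))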

  14. Risk-based rules for crane safety systems

    Ruud, Stian [Section for Control Systems, DNV Maritime, 1322 Hovik (Norway)]; Mikkelsen, Age [Section for Lifting Appliances, DNV Maritime, 1322 Hovik (Norway)]


    The International Maritime Organisation (IMO) has recommended a method called formal safety assessment (FSA) for future development of rules and regulations. The FSA method has been applied in a pilot research project for development of risk-based rules and functional requirements for systems and components for offshore crane systems. This paper reports some developments in the project. A method for estimating target reliability for the risk-control options (safety functions) by means of the cost/benefit decision criterion has been developed in the project and is presented in this paper. Finally, a structure for risk-based rules is proposed and presented.

  15. ICU-based rehabilitation and its appropriate metrics.

    Gosselink, Rik; Needham, Dale; Hermans, Greet


    Survival of critically ill patients is frequently associated with significant functional impairment and reduced health-related quality of life. Early rehabilitation of ICU patients has recently been identified as an important focus for interdisciplinary ICU teams. However, the amount of rehabilitation performed in ICUs is often inadequate. The scope of the review is to discuss recent developments in application of assessment tools and rehabilitation in critically ill patients within an interdisciplinary approach. ICU-based rehabilitation has become an important evidence-based component in the management of patients with critical illness. The assessment and evidence-based treatment of these patients should include a focus on prevention and treatment of deconditioning (muscle weakness, joint stiffness, impaired functional performance) and weaning failure (respiratory muscle weakness) to identify targets for rehabilitation. A variety of modalities for assessment and early ICU rehabilitation are supported by emerging clinical research and must be implemented according to the stage of critical illness, comorbidities, and consciousness and cooperation of the patient. Daily evaluation of every critically ill patient should include evaluation of the need for bedrest and immobility, and assessment of the potential for early rehabilitation interventions. Early ICU rehabilitation is an interdisciplinary team responsibility, involving physical therapists, occupational therapists, nurses and medical staff.

  16. An interpolated periodogram-based metric for comparison of time series with unequal lengths

    Caiado, Jorge; Crato, Nuno; Peña, Daniel


    We propose a periodogram-based metric for the classification and clustering of time series with different sample sizes. For such cases, we know that the Euclidean distance between the periodogram ordinates cannot be used. One possible way to deal with this problem is to interpolate linearly one of the periodograms in order to estimate ordinates at the same frequencies.
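
    A minimal sketch of this idea, assuming simple FFT-based periodogram estimates: compute the periodograms of two series of unequal length, linearly interpolate the shorter one onto the Fourier frequencies of the longer, and take the Euclidean distance between the ordinates.

      # Interpolated-periodogram distance between series of unequal length (sketch).
      import numpy as np

      def periodogram(x):
          x = np.asarray(x, dtype=float)
          n = len(x)
          freqs = np.arange(1, n // 2 + 1) / n                    # Fourier frequencies
          I = np.abs(np.fft.rfft(x - x.mean())[1:n // 2 + 1]) ** 2 / n
          return freqs, I

      def interpolated_periodogram_distance(x, y):
          fx, Ix = periodogram(x)
          fy, Iy = periodogram(y)
          if len(fx) < len(fy):                                   # interpolate the shorter one
              Ix = np.interp(fy, fx, Ix)
          else:
              Iy = np.interp(fx, fy, Iy)
          return np.sqrt(np.sum((Ix - Iy) ** 2))

      rng = np.random.default_rng(3)
      print(interpolated_periodogram_distance(rng.standard_normal(120),
                                              rng.standard_normal(90)))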

  17. An optimization based sampling approach for multiple metrics uncertainty analysis using generalized likelihood uncertainty estimation

    Zhou, Rurui; Li, Yu; Lu, Di; Liu, Haixing; Zhou, Huicheng


    This paper investigates the use of an epsilon-dominance non-dominated sorted genetic algorithm II (ɛ-NSGAII) as a sampling approach with the aim of improving sampling efficiency for multiple metrics uncertainty analysis using Generalized Likelihood Uncertainty Estimation (GLUE). The effectiveness of ɛ-NSGAII based sampling is demonstrated in comparison with Latin hypercube sampling (LHS) by analyzing sampling efficiency, multiple metrics performance, parameter uncertainty and flood forecasting uncertainty, with a case study of flood forecasting uncertainty evaluation based on the Xinanjiang model (XAJ) for Qing River reservoir, China. The results demonstrate the following advantages of the ɛ-NSGAII based sampling approach in comparison to LHS: (1) The former performs more effectively and efficiently than LHS; for example, the simulation time required to generate 1000 behavioral parameter sets is about nine times shorter. (2) The Pareto tradeoffs between metrics are demonstrated clearly by the solutions from ɛ-NSGAII based sampling, and their Pareto optimal values are better than those of LHS, which means better forecasting accuracy of the ɛ-NSGAII parameter sets. (3) The parameter posterior distributions from ɛ-NSGAII based sampling are concentrated in the appropriate ranges rather than uniform, which accords with their physical significance, and parameter uncertainties are reduced significantly. (4) The forecasted floods are close to the observations as evaluated by three measures: the normalized total flow outside the uncertainty intervals (FOUI), average relative band-width (RB) and average deviation amplitude (D). The flood forecasting uncertainty is also reduced considerably with ɛ-NSGAII based sampling. This study provides a new sampling approach to improve multiple metrics uncertainty analysis under the framework of GLUE, and could be used to reveal the underlying mechanisms of parameter sets under multiple conflicting metrics in the uncertainty analysis process.
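
    The GLUE step that both sampling schemes feed into can be sketched as follows: keep parameter sets whose likelihood exceeds a behavioural threshold and derive prediction uncertainty bounds from the likelihood-weighted ensemble. The Nash-Sutcliffe likelihood, the threshold value and the synthetic data are illustrative choices, not the paper's setup.

      # Behavioural selection and likelihood-weighted uncertainty bounds (GLUE sketch).
      import numpy as np

      def nash_sutcliffe(sim, obs):
          obs = np.asarray(obs); sim = np.asarray(sim)
          return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

      def glue_bounds(simulations, obs, threshold=0.6, quantiles=(0.05, 0.95)):
          likes = np.array([nash_sutcliffe(s, obs) for s in simulations])
          behavioural = likes >= threshold
          if not behavioural.any():
              raise ValueError("no behavioural parameter sets at this threshold")
          sims = np.asarray(simulations)[behavioural]
          w = likes[behavioural]
          w = w / w.sum()
          bounds = []                              # likelihood-weighted quantiles per time step
          for t in range(sims.shape[1]):
              order = np.argsort(sims[:, t])
              cdf = np.cumsum(w[order])
              bounds.append([sims[order, t][np.searchsorted(cdf, q)] for q in quantiles])
          return behavioural.sum(), np.array(bounds)

      rng = np.random.default_rng(4)
      obs = np.sin(np.linspace(0, 6, 50)) + 1.5
      sims = [obs + rng.normal(0, s, 50) for s in rng.uniform(0.05, 0.8, 200)]
      n_behavioural, bounds = glue_bounds(sims, obs)
      print(n_behavioural, bounds[:3])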

  18. Fusion set selection with surrogate metric in multi-atlas based image segmentation

    Zhao, Tingting; Ruan, Dan


    Multi-atlas based image segmentation sees unprecedented opportunities but also demanding challenges in the big data era. Relevant atlas selection before label fusion plays a crucial role in reducing potential performance loss from heterogeneous data quality and high computation cost from extensive data. This paper starts with investigating the image similarity metric (termed ‘surrogate’), an alternative to the inaccessible geometric agreement metric (termed ‘oracle’) in atlas relevance assessment, and probes into the problem of how to select the ‘most-relevant’ atlases and how many such atlases to incorporate. We propose an inference model to relate the surrogates and the oracle geometric agreement metrics. Based on this model, we quantify the behavior of the surrogates in mimicking oracle metrics for atlas relevance ordering. Finally, analytical insights on the choice of fusion set size are presented from a probabilistic perspective, with the integrated goal of including the most relevant atlases and excluding the irrelevant ones. Empirical evidence and performance assessment are provided based on prostate and corpus callosum segmentation.

  19. Simulation of large-scale rule-based models

    Hlavacek, William S [Los Alamos National Laboratory; Monnie, Michael I [Los Alamos National Laboratory; Colvin, Joshua [NON LANL; Faseder, James [NON LANL


    Interactions of molecules, such as signaling proteins, with multiple binding sites and/or multiple sites of post-translational covalent modification can be modeled using reaction rules. Rules comprehensively, but implicitly, define the individual chemical species and reactions that molecular interactions can potentially generate. Although rules can be automatically processed to define a biochemical reaction network, the network implied by a set of rules is often too large to generate completely or to simulate using conventional procedures. To address this problem, we present DYNSTOC, a general-purpose tool for simulating rule-based models. DYNSTOC implements a null-event algorithm for simulating chemical reactions in a homogeneous reaction compartment. The simulation method does not require that a reaction network be specified explicitly in advance, but rather takes advantage of the availability of the reaction rules in a rule-based specification of a network to determine if a randomly selected set of molecular components participates in a reaction during a time step. DYNSTOC reads reaction rules written in the BioNetGen language which is useful for modeling protein-protein interactions involved in signal transduction. The method of DYNSTOC is closely related to that of STOCHSIM. DYNSTOC differs from STOCHSIM by allowing for model specification in terms of BNGL, which extends the range of protein complexes that can be considered in a model. DYNSTOC enables the simulation of rule-based models that cannot be simulated by conventional methods. We demonstrate the ability of DYNSTOC to simulate models accounting for multisite phosphorylation and multivalent binding processes that are characterized by large numbers of reactions. DYNSTOC is free for non-commercial use. The C source code, supporting documentation and example input files are available at .

  20. Contribution to SER Prediction: A New Metric Based on RC Transient Simulations

    Micolau, G.; Castellani-Coulie, K.; Aziza, H.; Portal, J.-M.


    This work focuses on reducing the simulation time of systematic SEU detection in a 90 nm SRAM cell. Simulations were run in order to validate a simplified approach based on the injection of a noise source current at the sensitive node of an analytical RC circuit. Moreover, a new SEU reliability metric, mandatory for reliability studies, is introduced. It is based on transient I-V simulations.

  1. Optimizing Mining Association Rules for Artificial Immune System based Classification



    Full Text Available The primary function of a biological immune system is to protect the body from foreign molecules known as antigens. It has great pattern recognition capability that may be used to distinguish between foreign cells entering the body (non-self or antigen) and the body cells (self). Immune systems have many characteristics such as uniqueness, autonomy, recognition of foreigners, distributed detection, and noise tolerance. Inspired by biological immune systems, Artificial Immune Systems have emerged during the last decade and have prompted many researchers to design and build immune-based models for a variety of application domains. Artificial immune systems can be defined as a computational paradigm that is inspired by theoretical immunology, observed immune functions, principles and mechanisms. Association rule mining is one of the most important and well researched techniques of data mining. The goal of association rules is to extract interesting correlations, frequent patterns, associations or causal structures among sets of items in transaction databases or other data repositories. Association rules are widely used in various areas such as inventory control, telecommunication networks, intelligent decision making, market analysis and risk management. Apriori is the most widely used algorithm for mining association rules. Other popular association rule mining algorithms are frequent pattern (FP) growth, Eclat, and dynamic itemset counting (DIC). Associative classification uses association rule mining in the rule discovery process to predict the class labels of the data. This technique has shown great promise over many other classification techniques. Associative classification also integrates the process of rule discovery and classification to build the classifier for the purpose of prediction. The main problem with the associative classification approach is the discovery of high-quality association rules in a very large space of
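
    As a reminder of the rule-discovery step mentioned above, here is a minimal Apriori-style frequent itemset miner (support counting only). It is not the paper's immune-system-optimised variant; the basket data and support threshold are toy values.

      # Minimal Apriori-style frequent itemset mining by support counting (sketch).
      def apriori(transactions, min_support):
          transactions = [frozenset(t) for t in transactions]
          n = len(transactions)
          support = lambda items: sum(items <= t for t in transactions) / n
          # frequent 1-itemsets
          items = {i for t in transactions for i in t}
          frequent = {frozenset([i]): support(frozenset([i])) for i in items
                      if support(frozenset([i])) >= min_support}
          result, k = dict(frequent), 2
          while frequent:
              candidates = {a | b for a in frequent for b in frequent if len(a | b) == k}
              frequent = {c: support(c) for c in candidates if support(c) >= min_support}
              result.update(frequent)
              k += 1
          return result

      baskets = [{"milk", "bread"}, {"milk", "bread", "butter"},
                 {"bread", "butter"}, {"milk", "butter"}]
      print(apriori(baskets, min_support=0.5))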

  2. A PEG Construction of LDPC Codes Based on the Betweenness Centrality Metric



    Full Text Available Progressive Edge Growth (PEG) constructions are usually based on optimizing the distance metric by using various methods. In this work, however, the distance metric is replaced by a different one, namely the betweenness centrality metric, which was shown to enhance routing performance in wireless mesh networks. A new type of PEG construction for Low-Density Parity-Check (LDPC) codes is introduced based on the betweenness centrality metric borrowed from social networks terminology, given that the bipartite graph describing the LDPC code is analogous to a network of nodes. The algorithm is very efficient in filling edges on the bipartite graph by adding its connections in an edge-by-edge manner. The smallest graph size the new code could construct surpasses those obtained from a modified PEG algorithm - the RandPEG algorithm. To the best of the authors' knowledge, this paper produces the best regular LDPC column-weight-two graphs. In addition, the technique proves to be competitive in terms of error-correcting performance. When compared to MacKay, PEG and other recent modified-PEG codes, the algorithm gives better performance at high SNR due to its particular edge and local graph properties.
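
    A heavily simplified sketch of a PEG-style, edge-by-edge construction guided by betweenness centrality is shown below. It assumes the networkx library and simply ranks candidate check nodes by their current centrality; tie-breaking, local subgraph restriction and other details will differ from the paper's algorithm.

      # Edge-by-edge Tanner graph growth ranked by betweenness centrality (sketch).
      import networkx as nx

      def peg_betweenness(n_var, n_chk, var_degree=2):
          G = nx.Graph()
          vs = [f"v{i}" for i in range(n_var)]
          cs = [f"c{j}" for j in range(n_chk)]
          G.add_nodes_from(vs + cs)
          for v in vs:
              for _ in range(var_degree):
                  bc = nx.betweenness_centrality(G)
                  # candidate check nodes not already attached to v, least central first
                  candidates = sorted((c for c in cs if not G.has_edge(v, c)),
                                      key=lambda c: (bc[c], G.degree(c)))
                  G.add_edge(v, candidates[0])
          return G

      tanner = peg_betweenness(n_var=8, n_chk=4)   # column-weight-two toy graph
      print(sorted(tanner.edges()))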

  3. Rule-based spatial modeling with diffusing, geometrically constrained molecules

    Lohel Maiko; Lenser Thorsten; Ibrahim Bashar; Gruenert Gerd; Hinze Thomas; Dittrich Peter


    Abstract Background We suggest a new type of modeling approach for the coarse grained, particle-based spatial simulation of combinatorially complex chemical reaction systems. In our approach molecules possess a location in the reactor as well as an orientation and geometry, while the reactions are carried out according to a list of implicitly specified reaction rules. Because the reaction rules can contain patterns for molecules, a combinatorially complex or even infinitely sized reaction net...

  4. Rule-based semantic web services matching strategy

    Fan, Hong; Wang, Zhihua


    With the development of Web services technology, the number of services increases rapidly, and it becomes a challenging task to efficiently discover the services that exactly match a user's requirements from a large-scale service library. Many semantic Web service discovery technologies proposed in the recent literature focus only on keyword-based or primary semantics-based service matching. This paper studies a rule and rule-reasoning based service matching algorithm against the background of a large-scale service library. Firstly, formal descriptions of semantic Web services and service matching are presented. Service matching is divided into four levels: Exact, Plugin, Subsume and Fail, and their formal descriptions are also presented. Then, service matching is regarded as a rule-based reasoning issue. A set of match rules is first given, the related service set is retrieved from the service ontology base through rule-based reasoning, and the matching levels are determined by distinguishing the relationships between the services' I/O and the user's requested I/O. Finally, experiments based on two service sets show that the proposed service matching strategy can easily implement smart service discovery and obtains high service discovery efficiency in comparison with the traditional global traversal strategy.
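
    The four matching levels named above can be illustrated with a toy function that uses set relations over concept labels as a stand-in for ontology reasoning; the paper's rule-based reasoner is of course richer than this.

      # Toy Exact/Plugin/Subsume/Fail matching over concept-label sets (sketch).
      def match_level(advertised_outputs, requested_outputs):
          adv, req = set(advertised_outputs), set(requested_outputs)
          if adv == req:
              return "Exact"
          if req < adv:          # service offers more than requested
              return "Plugin"
          if adv < req:          # service offers only part of the request
              return "Subsume"
          return "Fail"

      print(match_level({"Price", "Availability"}, {"Price"}))   # Plugin
      print(match_level({"Price"}, {"Price", "Availability"}))   # Subsume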

  5. Antipodally Invariant Metrics for Fast Regression-Based Super-Resolution.

    Perez-Pellitero, Eduardo; Salvador, Jordi; Ruiz-Hidalgo, Javier; Rosenhahn, Bodo


    Dictionary-based super-resolution (SR) algorithms usually select dictionary atoms based on distance or similarity metrics. Although the optimal selection of nearest neighbors is of central importance for such methods, the impact of using proper metrics for SR has been overlooked in the literature, mainly due to the vast usage of the Euclidean distance. In this paper, we present a very fast regression-based algorithm, which builds on densely populated anchored neighborhoods and sublinear search structures. We perform a study of the nature of the features commonly used for SR, observing that those features usually lie on the unitary hypersphere, where every point has a diametrically opposite one, i.e., its antipode, with the same module and angle but the opposite direction. Even though we validate the benefits of using antipodally invariant metrics, most of the binary splits use the Euclidean distance, which does not handle antipodes optimally. In order to benefit from both worlds, we propose a simple yet effective antipodally invariant transform that can be easily included in the Euclidean distance calculation. We modify the original spherical hashing algorithm with this metric in our antipodally invariant spherical hashing scheme, obtaining the same performance as a pure antipodally invariant metric. We round up our contributions with a novel feature transform that obtains a better coarse approximation of the input image thanks to iterative backprojection. The performance of our method, which we named antipodally invariant SR, improves quality (Peak Signal to Noise Ratio) and is faster than any other state-of-the-art method.
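
    The core of an antipodally invariant similarity can be stated in a couple of lines: a feature and its antipode (same module and angle, opposite direction) are treated as equivalent by taking the absolute value of the cosine similarity. The snippet below is a generic illustration, not the paper's hashing scheme.

      # Antipodally invariant similarity via absolute cosine similarity (sketch).
      import numpy as np

      def antipodal_similarity(a, b):
          a = a / np.linalg.norm(a)
          b = b / np.linalg.norm(b)
          return abs(np.dot(a, b))            # invariant to b -> -b

      x = np.array([1.0, 2.0, -0.5])
      print(antipodal_similarity(x, x), antipodal_similarity(x, -x))   # both 1.0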

  6. Information Entropy-Based Metrics for Measuring Emergences in Artificial Societies

    Mingsheng Tang


    Full Text Available Emergence is a common phenomenon, and it is also a general and important concept in complex dynamic systems like artificial societies. Usually, artificial societies are used to assist in resolving several complex social issues (e.g., emergency management, intelligent transportation systems) with the aid of computer science. The level of an emergence may have an effect on decision making, and the occurrence and degree of an emergence are generally perceived by human observers. However, due to the ambiguity and inaccuracy of human observers, proposing a quantitative method to measure emergences in artificial societies is a meaningful and challenging task. This article mainly concentrates on three kinds of emergences in artificial societies: emergence of attribution, emergence of behavior, and emergence of structure. Based on information entropy, three metrics have been proposed to measure emergences in a quantitative way. Meanwhile, the correctness of these metrics has been verified through three case studies (the spread of an infectious influenza, a dynamic microblog network, and a flock of birds) with several experimental simulations on the NetLogo platform. These experimental results confirm that these metrics increase with the rising degree of emergence. In addition, this article also discusses the limitations and extended applications of these metrics.
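
    The entropy core of such metrics is easy to sketch: compute the Shannon entropy of the distribution of an agent attribute and watch it change as the population becomes ordered. The example below only illustrates this core; the paper's three metrics for attribution, behaviour and structure are more elaborate.

      # Shannon entropy over an agent attribute distribution (sketch).
      import math
      from collections import Counter

      def shannon_entropy(states):
          counts = Counter(states)
          n = len(states)
          return -sum((c / n) * math.log2(c / n) for c in counts.values())

      disordered = ["S", "I", "R", "S", "I", "R", "S", "I"]   # e.g. infection states
      ordered    = ["I"] * 8                                   # everyone infected
      print(shannon_entropy(disordered), shannon_entropy(ordered))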

  7. The Applicability of the Density Rule of Pathwardhan and Kumer and the Rule Based on Linear Isopiestic Relation



    The applicability of the density rule of Pathwardhan and Kumer and the rule based on the linear isopiestic relation is studied by comparison with experimental density data in the literature. Predicted and measured values for 18 electrolyte mixtures are compared. The two rules are good for mixtures with and without common ions, including those containing associating ions. The deviations of the rule based on the linear isopiestic relation are slightly higher for the mixtures involving very strong ion complexes, but the predictions are still quite satisfactory. The density rule of Pathwardhan and Kumer is more accurate for these mixtures. However, it is not applicable to mixtures containing non-electrolytes. The rule based on the linear isopiestic relation is extended to mixtures involving non-electrolytes. The predictions for the mixtures containing both electrolytes and non-electrolytes and for the non-electrolyte mixtures are accurate. All these results indicate that this rule is a widely applicable approach.

  9. Classification approach based on association rules mining for unbalanced data

    Ndour, Cheikh


    This paper deals with supervised classification when the response variable is binary and its class distribution is unbalanced. In such a situation, it is not possible to build a powerful classifier using standard methods such as logistic regression, classification trees, discriminant analysis, etc. To overcome this shortcoming of these methods, which provide classifiers with low sensitivity, we tackle the classification problem through an approach based on association rules learning, because this approach has the advantage of allowing the identification of the patterns that are well correlated with the target class. Association rules learning is a well known method in the area of data mining. It is used when dealing with large databases for the unsupervised discovery of local patterns that express hidden relationships between variables. In considering association rules from a supervised learning point of view, a relevant set of weak classifiers is obtained from which one derives a classification rule...

  10. Rule-based transformations for geometric modelling

    Thomas Bellet


    Full Text Available The context of this paper is the use of formal methods for topology-based geometric modelling. Topology-based geometric modelling deals with objects of various dimensions and shapes. Usually, objects are defined by a graph-based topological data structure and by an embedding that associates each topological element (vertex, edge, face, etc.) with relevant data such as its geometric shape (position, curve, surface, etc.) or application dedicated data (e.g. molecule concentration level in a biological context). We propose to define topology-based geometric objects as labelled graphs. The arc labelling defines the topological structure of the object, whose topological consistency is then ensured by labelling constraints. Nodes have as many labels as there are different data kinds in the embedding. Labelling constraints then ensure that the embedding is consistent with the topological structure. Thus, topology-based geometric objects constitute a particular subclass of a category of labelled graphs in which nodes have multiple labels.

  12. A Belief Rule-Based Expert System to Diagnose Influenza

    Hossain, Mohammad Shahadat; Khalid, Md. Saifuddin; Akter, Shamima


    This entry concerns the development and application of an expert system to diagnose influenza under uncertainty. The recently developed generic belief rule-based inference methodology using the evidential reasoning (RIMER) approach is employed to develop this expert system, termed the Belief Rule Based Expert System (BRBES). The RIMER approach can handle different types of uncertainties, both in knowledge representation and in inference procedures. The knowledge base of this system was constructed using records of real patient data along with consultation with the influenza specialists of Bangladesh. Practical case studies were used to validate the BRBES. The system-generated results are more effective and reliable than those of a manual system in terms of accuracy.


    Mohammed Aboaoga


    Full Text Available Named Entity Recognition is a very important task in many natural language processing applications such as Machine Translation, Question Answering, Information Extraction, Text Summarization, Semantic Applications and Word Sense Disambiguation. The rule-based approach is one of the techniques used for named entity recognition to identify named entities such as person names, location names and organization names. Recent rule-based methods have been applied to recognize person names in the political domain, but ignored the recognition of other named entity types such as locations and organizations. We have used the rule-based approach for recognizing the named entity type of person names for Arabic. We have developed four rules for identifying person names depending on the position of the name, and used an in-house Arabic corpus collected from newspaper archives. An evaluation method that compares the results of the system with the manually annotated text has been applied in order to compute precision, recall and f-measure. In the experiment of this study, the average f-measures for recognizing person names are 92.66, 92.04 and 90.43% in the sport, economic and political domains respectively. The experimental results showed that our rule-based method achieved the highest f-measure values in the sport domain compared with the political and economic domains.


    Nisha Mariam Varughese


    Full Text Available Security is one of the major challenges in open networks. There are many types of attacks, which either follow fixed patterns or frequently change their patterns, and it is difficult to detect malicious attacks that do not have any fixed pattern. Distributed Denial of Service (DDoS) attacks such as botnets are used to slow down system performance. To address such problems, a Collaborative Network Security Management System (CNSMS) is proposed along with association rule mining. The CNSMS consists of a collaborative Unified Threat Management (UTM), a cloud based security centre and a traffic prober. The traffic prober captures the internet traffic and passes it to the collaborative UTM, where the traffic is analysed to determine whether it contains any malicious attack. If any security event occurs, it is reported to the cloud based security centre. The security centre generates security rules based on association rule mining and distributes them to the network. The cloud based security centre is used to store the huge amount of traffic, its logs and the security rules generated. The feedback is evaluated and invalid rules are eliminated to improve the system efficiency.

  15. Monitor-Based Statistical Model Checking for Weighted Metric Temporal Logic

    Bulychev, Petr; David, Alexandre; Larsen, Kim Guldstrand


    We present a novel approach and implementation for analysing weighted timed automata (WTA) with respect to the weighted metric temporal logic (WMTL≤). Based on a stochastic semantics of WTAs, we apply statistical model checking (SMC) to estimate and test probabilities of satisfaction with a desired level of confidence; the resulting estimates are often exact and experimentally tight. The technique is implemented in the new tool Casaal, which we seamlessly connect to Uppaal-smc in a tool chain. We demonstrate the applicability of our technique and the efficiency of our implementation through a number of case studies.

  16. Rule-Based Storytelling Text-to-Speech (TTS) Synthesis

    Ramli Izzad


    Full Text Available In recent years, various real-life applications such as talking books, gadgets and humanoid robots have drawn attention to research in the area of expressive speech synthesis. Speech synthesis is widely used in various applications; however, there is a growing need for expressive speech synthesis, especially for communication and robotics. In this paper, global and local rules are developed to convert neutral speech to storytelling-style speech for the Malay language. In order to generate the rules, modification of prosodic parameters such as pitch, intensity, duration, tempo and pauses is considered. The modification of prosodic parameters is examined by performing prosodic analysis on a story collected from an experienced female and a male storyteller. The global and local rules are applied at the sentence level and synthesized using HNM. Subjective tests are conducted to evaluate the quality of the synthesized storytelling speech for both rules based on naturalness, intelligibility, and similarity to the original storytelling speech. The results showed that the global rule gives better results than the local rule.

  17. An Embedded Rule-Based Diagnostic Expert System in Ada

    Jones, Robert E.; Liberman, Eugene M.


    Ada is becoming an increasingly popular programming language for large Government-funded software projects. Ada, with its portability, transportability, and maintainability, lends itself well to today's complex programming environment. In addition, expert systems have also assumed a growing role in providing human-like reasoning capability for computer systems. The integration of expert system technology with the Ada programming language is discussed, especially a rule-based expert system using an ART-Ada (Automated Reasoning Tool for Ada) system shell. NASA Lewis was chosen as a beta test site for ART-Ada. The test was conducted by implementing the existing Autonomous Power EXpert System (APEX), a Lisp-based power expert system, in ART-Ada. Three components, the rule-based expert system, a graphics user interface, and communications software, make up SMART-Ada (Systems fault Management with ART-Ada). The rules were written in the ART-Ada development environment and converted to Ada source code. The graphics interface was developed with the Transportable Application Environment (TAE) Plus, which generates Ada source code to control graphics images. SMART-Ada communicates with a remote host to obtain either simulated or real data. The Ada source code generated with ART-Ada, TAE Plus, and communications code was incorporated into an Ada expert system that reads the data from a power distribution test bed, applies the rules to determine a fault, if one exists, and graphically displays it on the screen. The main objective, to conduct a beta test of the ART-Ada rule-based expert system shell, was achieved. The system is operational. New Ada tools will assist in future successful projects. ART-Ada is one such tool and is a viable alternative to straight Ada code when an application requires a rule-based or knowledge-based approach.


    Slaven Smojver


    Full Text Available The ever-increasing scope and complexity of regulations and other rules that govern human society emphasise the importance of inspecting compliance with those rules. Often-used approaches to the inspection of compliance suffer from drawbacks such as overly idealistic assumptions and narrowness of application. Specifically, inspection models are frequently limited to situations where the inspected entity has to comply with only one rule. Furthermore, inspection strategies regularly overlook some useful and available information, such as the varying costs of compliance with different rules. This article presents an agent-based model for the inspection of compliance with many rules, which addresses the abovementioned drawbacks. In the article, crime-economic, game-theoretic and agent-based modelling approaches to inspection are briefly described, as well as their impact on the model. The model is described and a simulation of a simplified version of the model is presented. The obtained results demonstrate that inspection strategies which take into account the rules' compliance costs perform significantly better than random strategies and better than cycle-based strategies. Additionally, the results encourage further, wider testing and validation of the model.

  19. Autonomous Rule Based Robot Navigation In Orchards

    Andersen, Jens Christian; Ravn, Ole; Andersen, Nils Axel


    Orchard navigation using sensor-based localization and flexible mission management facilitates successful missions independent of the Global Positioning System (GPS). This is especially important while driving between tight tree rows where GPS coverage is poor. This paper suggests localization ..., obstacle avoidance, path planning and drive control. The system is tested successfully using a Hako 20 kW tractor during autonomous missions in both cherry and apple orchards with mission lengths of up to 2.3 km including the headland turns.

  20. Metric Madness

    Kroon, Cindy D.


    Created for a Metric Day activity, Metric Madness is a board game for two to four players. Students review and practice metric vocabulary, measurement, and calculations by playing the game. Playing time is approximately twenty to thirty minutes.

  1. Impact of distance-based metric learning on classification and visualization model performance and structure-activity landscapes

    Kireeva, Natalia V.; Ovchinnikova, Svetlana I.; Kuznetsov, Sergey L.; Kazennov, Andrey M.; Tsivadze, Aslan Yu.


    This study concerns the large margin nearest neighbors classifier and its multi-metric extension as efficient approaches for metric learning, which aims to learn an appropriate distance/similarity function for the considered case studies. In recent years, many studies in data mining and pattern recognition have demonstrated that a learned metric can significantly improve performance in classification, clustering and retrieval tasks. The paper describes the application of the metric learning approach to in silico assessment of chemical liabilities. Chemical liabilities, such as adverse effects and toxicity, play a significant role in the drug discovery process; their in silico assessment is an important step aimed at reducing costs and animal testing by complementing or replacing in vitro and in vivo experiments. Here, to our knowledge for the first time, distance-based metric learning procedures have been applied to in silico assessment of chemical liabilities, the impact of metric learning on structure-activity landscapes and the predictive performance of the developed models has been analyzed, and the learned metric was used in support vector machines. The metric learning results have been illustrated using linear and non-linear data visualization techniques in order to indicate how the change of metric affected nearest neighbor relations and the descriptor space.

  2. A rule-based Afan Oromo Grammar Checker

    Debela Tesfaye


    Full Text Available Natural language processing (NLP) is a subfield of computer science, with strong connections to artificial intelligence. One area of NLP is concerned with creating proofing systems, such as grammar checkers. A grammar checker determines the syntactical correctness of a sentence and is mostly used in word processors and compilers. For languages such as Afan Oromo, advanced tools have been lacking and are still in the early stages. In this paper a rule-based grammar checker is presented. The rule base is entirely developed from, and dependent on, the morphology of the language. The checker was evaluated and showed promising results.

  3. Ensemble-based approximation of observation impact using an observation-based verification metric

    Matthias Sommer


    Full Text Available Knowledge on the contribution of observations to forecast accuracy is crucial for the refinement of observing and data assimilation systems. Several recent publications highlighted the benefits of efficiently approximating this observation impact using adjoint methods or ensembles. This study proposes a modification of an existing method for computing observation impact in an ensemble-based data assimilation and forecasting system and applies the method to a pre-operational, convective-scale regional modelling environment. Instead of the analysis, the modified approach uses observation-based verification metrics to mitigate the effect of correlation between the forecast and its verification norm. Furthermore, a peculiar property in the distribution of individual observation impact values is used to define a reliability indicator for the accuracy of the impact approximation. Applying this method to a 3-day test period shows that a well-defined observation impact value can be approximated for most observation types and the reliability indicator successfully depicts where results are not significant.


    T. Nadana Ravishankar


    Full Text Available Though Information Retrieval (IR) in big data has been an active field of research for the past few years, the popularity of native languages presents a unique challenge in big data information retrieval systems. There is a need to retrieve information which is present in English and display it in the native language for users. This aim of cross-language information retrieval is complicated by unique features of the native languages, such as morphology, compound word formation, spelling variations, ambiguity, synonymy, the influence of other languages, etc. To overcome some of these issues, the native language is modeled using a grammar rule based approach in this work. The advantage of this approach is that the native language is modeled and its unique features are encoded using a set of inference rules. This rule base, coupled with the customized ontological system, shows considerable potential and is found to give better precision and recall.

  5. Face recognition based on subset selection via metric learning on manifold

    Hong SHAO; Shuang CHEN; Jie-yi ZHAO; Wen-cheng CUI; Tian-shu YU


    With the development of face recognition using sparse representation based classification (SRC), many relevant methods have been proposed and investigated. However, when the dictionary is large and the representation is sparse, only a small proportion of the elements contributes to the l1-minimization. Under this observation, several approaches have been developed to carry out an efficient element selection procedure before SRC. In this paper, we employ a metric learning approach which helps find the active elements correctly by taking into account the interclass/intraclass relationship and manifold structure of face images. After the metric has been learned, a neighborhood graph is constructed in the projected space. A fast marching algorithm is used to rapidly select the subset from the graph, and SRC is implemented for classification. Experimental results show that our method achieves promising performance and significant efficiency enhancement.

  6. Block-based test data adequacy measurement criteria and test complexity metrics

    陈卫东; 杨建军; 叶澄清; 潘云鹤


    On the basis of software testing tools we developed for programming languages, we firstly present a new control flowgraph model based on blocks. In view of the notion of a block, we extend the traditional program-based software test data adequacy measurement criteria, and empirically analyze the subsume relation between these measurement criteria. Then, we define four test complexity metrics based on blocks: J-complexity 0, J-complexity 1, J-complexity 1+, and J-complexity 2. Finally, we show the Kiviat diagram that makes software quality visible.

  8. Generating Fuzzy Rule-based Systems from Examples Based on Robust Support Vector Machine

    JIA Jiong; ZHANG Hao-ran


    This paper firstly proposes a new support vector machine regression (SVR) with a robust loss function and designs a gradient based algorithm for its implementation, then uses the SVR to extract fuzzy rules and design a fuzzy rule-based system. Simulations show that the fuzzy rule-based system technique based on robust SVR achieves superior performance to the conventional fuzzy inference method; the proposed method provides satisfactory performance with better approximation and generalization properties than the existing algorithm.

  9. Ruled-based control of off-grid electrolysis

    Serna, A.; Tadeo, F.; Normey-Rico, J. E.


    This work deals with a rule-based control strategy to produce hydrogen from wind and wave energy on an offshore platform. These renewable energies feed a set of alkaline electrolyzers that produce H2. The proposed control system allows regulating the operation of the electrolyzers, taking into a...

  10. A Rule-Based System for Test Quality Improvement

    Costagliola, Gennaro; Fuccella, Vittorio


    To correctly evaluate learners' knowledge, it is important to administer tests composed of good quality question items. By the term "quality" we intend the potential of an item in effectively discriminating between skilled and untrained students and in obtaining tutor's desired difficulty level. This article presents a rule-based e-testing system…


    Mariana Romanyshyn


    Full Text Available Last decade witnessed a lot of research in the field of sentiment analysis. Understanding the attitude and the emotions that people express in written text proved to be really important and helpful in sociology, political science, psychology, market research, and, of course, artificial intelligence. This paper demonstrates a rule-based approach to clause-level sentiment analysis of reviews in Ukrainian. The general architecture of the implemented sentiment analysis system is presented, the current stage of research is described and further work is explained. The main emphasis is made on the design of rules for computing sentiments.

  12. Rules-based object-relational databases ontology construction

    Chen Jia; Wu Yue


    To solve the problems of sharing and reusing information in the information system, a rules-based ontology constructing approach from object-relational databases is proposed. A 3-tuple ontology constructing model is proposed first. Then, four types of ontology constructing rules, including class, property, property characteristics, and property restrictions, are formalized according to the model. Experiment results described in Web ontology language prove that our proposed approach is feasible for application in the semantic objects project of the semantic computing laboratory at UC Irvine. Our approach reduces constructing time by about twenty percent compared with ontology construction from relational databases.

  13. An Incremental Rule Acquisition Algorithm Based on Rough Set

    YU Hong; YANG Da-chun


    Rough Set is a valid mathematical theory developed in recent years, which has the ability to deal with imprecise, uncertain, and vague information. This paper presents a new incremental rule acquisition algorithm based on rough set theory. First, the relation of new instances to the original rule set is discussed. Then the change law of attribute reduction and value reduction when a new instance is added is studied. Next, a new incremental learning algorithm for decision tables is presented within the framework of rough set theory. Finally, the new algorithm and the classical algorithm are analyzed and compared by theory and experiments.

  14. Rule based fuzzy logic approach for classification of fibromyalgia syndrome.

    Arslan, Evren; Yildiz, Sedat; Albayrak, Yalcin; Koklukaya, Etem


    Fibromyalgia syndrome (FMS) is a chronic muscle and skeletal system disease observed generally in women, manifesting itself with a widespread pain and impairing the individual's quality of life. FMS diagnosis is made based on the American College of Rheumatology (ACR) criteria. However, recently the employability and sufficiency of the ACR criteria have been under debate. In this context, several evaluation methods, including clinical evaluation methods, were proposed by researchers. Accordingly, ACR had to update their criteria announced back in 1990, 2010 and 2011. The proposed rule based fuzzy logic method aims to evaluate FMS from a different angle as well. This method contains a rule base derived from the 1990 ACR criteria and the individual experiences of specialists. The study was conducted using data collected from 60 inpatients and 30 healthy volunteers. Several tests and physical examinations were administered to the participants. The fuzzy logic rule base was structured using the parameters of tender point count, chronic widespread pain period, pain severity, fatigue severity and sleep disturbance level, which were deemed important in FMS diagnosis. It has been observed that the fuzzy predictor was generally 95.56% consistent with at least one of the specialists who were not creators of the fuzzy rule base. Thus, in diagnosis classification, where the severity of FMS was classified as well, consistent findings were obtained from the comparison of interpretations and experiences of specialists and the fuzzy logic approach. The study proposes a rule base which could eliminate the shortcomings of the 1990 ACR criteria during the FMS evaluation process. Furthermore, the proposed method presents a classification of the severity of the disease, which was not available with the ACR criteria. The study was not limited to only disease classification but at the same time the probability of occurrence and severity was classified. In addition, those who were not suffering from FMS were

  15. Design Transformations for Rule-based Procedural Modeling

    Lienhard, Stefan


    We introduce design transformations for rule-based procedural models, e.g., for buildings and plants. Given two or more procedural designs, each specified by a grammar, a design transformation combines elements of the existing designs to generate new designs. We introduce two technical components to enable design transformations. First, we extend the concept of discrete rule switching to rule merging, leading to a very large shape space for combining procedural models. Second, we propose an algorithm to jointly derive two or more grammars, called grammar co-derivation. We demonstrate two applications of our work: we show that our framework leads to a larger variety of models than previous work, and we show fine-grained transformation sequences between two procedural models.

  16. Compensation of skull motion and breathing motion in CT using data-based and image-based metrics, respectively

    Bruder, H.; Rohkohl, C.; Stierstorfer, K.; Flohr, T.


    We present a novel reconstruction for motion correction of non-cardiac organs. With non-cooperative patients or in emergency cases, breathing motion or motion of the skull may compromise image quality. Our algorithm is based on the optimization of either motion artefact metrics or data-driven metrics. This approach was successfully applied in cardiac CTA [1]. While motion correction of the coronary vessels requires a local motion model, global motion models are sufficient for organs like the lung or the skull. The parameter vector for the global affine motion is estimated iteratively, using the open source optimization library NLOPT. The image is updated using motion compensated reconstruction in each of the iterations. Evaluation of the metric value, e.g. the image entropy, provides information for the next iteration loop. After reaching the fixed point of the iteration, the final motion parameters are used for a motion-compensated full quality reconstruction. In head imaging the motion model is based on translation and rotation; in thoracic imaging the rotation is replaced by non-isotropic scaling in all three dimensions. We demonstrate the efficiency of the method in thoracic imaging by evaluating PET-CT data from free-breathing patients. In neuro imaging, data from stroke patients showing skull tremor were analyzed. It was shown that motion artefacts can be largely reduced and spatial resolution was restored. In head imaging, similar results can be obtained using motion artefact metrics or data-driven metrics. In the case of image-based metrics, the entropy of the image proved to be superior. Breathing motion could also be significantly reduced using the entropy metric. However, in this case data driven metrics cannot be applied because the line integrals associated with the ROI of the lung have to be computed using the local ROI mechanism [2]. It was shown that the lung signal is corrupted by signals originating from the complement of the lung. Thus a meaningful
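    A minimal, toy version of the metric-driven loop described above, assuming a translation-only motion model and using scipy's general-purpose optimizer in place of the NLOPT library mentioned in the abstract; `reconstruct` is a hypothetical stand-in for the motion-compensated reconstruction step.

```python
import numpy as np
from scipy.ndimage import shift
from scipy.optimize import minimize

def image_entropy(img, bins=64):
    """Shannon entropy of the grey-level histogram (an image-based artefact metric)."""
    hist, _ = np.histogram(img, bins=bins, density=True)
    p = hist[hist > 0]
    p = p / p.sum()
    return -np.sum(p * np.log(p))

def estimate_global_shift(corrupted, reconstruct):
    """Search the (toy, translation-only) motion parameters that minimise the
    entropy of the motion-compensated reconstruction."""
    def cost(params):
        return image_entropy(reconstruct(corrupted, params))
    result = minimize(cost, x0=np.zeros(2), method="Nelder-Mead")
    return result.x

# hypothetical 'reconstruct': here it simply undoes a rigid 2D shift of the image
reconstruct = lambda img, p: shift(img, p, order=1, mode="nearest")
```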

  17. Complexity metric as a complement to measurement based IMRT/VMAT patient-specific QA

    Götstedt, J.; Karlsson Hauer, A.; Bäck, A.


    IMRT/VMAT treatment plans contain treatment fields with MLC openings of various sizes and shapes. Clinical dose calculation algorithms show limitations in calculating the correct dose in small and irregular parts of an MLC opening, which leads to differences between the planned and delivered dose distributions. The patient-specific IMRT QA is often designed to compare planned and measured dose distributions and is therefore heavily dependent on the measurement equipment and the evaluation method. The purpose of this study is to develop a complexity metric based on the shape and size of MLC openings that correlates with the dose differences between planned and delivered 3D dose distributions. Different MLC openings are measured and evaluated and used to determine a penalty function to steer the complexity metric and make the complexity scores correlate with dose difference pass rates. Results of this initial study show that a correlation was found between complexity scores and dose difference pass rates for static fields with varied complexity. Preliminary results also show that the complexity metric can distinguish clinical IMRT fields with higher complexity.

  18. Speech based transmission index for all: An intelligibility metric for variable hearing ability.

    Mechergui, Nader; Djaziri-Larbi, Sonia; Jaïdane, Mériem


    A method to measure the speech intelligibility in public address systems for normal hearing and hearing impaired persons is presented. The proposed metric is an extension of the speech based Speech Transmission Index to account for accurate perceptual masking and variable hearing ability: The sound excitation pattern generated at the ear is accurately computed using an auditory filter model, and its shapes depend on frequency, sound level, and hearing impairment. This extension yields a better prediction of the intensity of auditory masking which is used to rectify the modulation transfer function and thus to objectively assess the speech intelligibility experienced by hearing impaired as well as by normal hearing persons in public spaces. The proposed metric was developed within the framework of the European Active and Assisted Living research program, and was labeled "SB-STI for All." Extensive subjective in-Lab and in vivo tests have been conducted and the proposed metric proved to have a good correlation with subjective intelligibility scores.

  19. AVNM: A Voting based Novel Mathematical Rule for Image Classification.

    Vidyarthi, Ankit; Mittal, Namita


    In machine learning, the accuracy of a system depends upon its classification results. Classification accuracy plays an imperative role in various domains. A non-parametric classifier like K-Nearest Neighbor (KNN) is the most widely used classifier for pattern analysis. Despite its simplicity and effectiveness, the main problem associated with the KNN classifier is the selection of the number of nearest neighbors, i.e. "k", used for computation. At present, it is hard to find the optimal value of "k" using any statistical algorithm that gives perfect accuracy in terms of a low misclassification error rate. Motivated by this problem, a new sample space reduction weighted voting mathematical rule (AVNM) is proposed for classification in machine learning. The proposed AVNM rule is also non-parametric in nature, like KNN. AVNM uses a weighted voting mechanism with sample space reduction to learn and examine the predicted class label for an unidentified sample. AVNM is free from the initial selection of a predefined variable and the neighbor selection found in the KNN algorithm. The proposed classifier also reduces the effect of outliers. To verify the performance of the proposed AVNM classifier, experiments were made on 10 standard datasets taken from the UCI database and one manually created dataset. The experimental results show that the proposed AVNM rule outperforms the KNN classifier and its variants. Experimental results based on the confusion-matrix accuracy measure show higher accuracy for the AVNM rule. The proposed AVNM rule is based on a sample space reduction mechanism for identification of an optimal number of nearest neighbors. AVNM results in better classification accuracy and a minimum error rate compared with the state-of-the-art KNN algorithm and its variants. The proposed rule automates the selection of nearest neighbors and improves the classification rate for the UCI datasets and the manually created dataset.
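    The sketch below shows a generic distance-weighted voting classifier in the spirit of the parameter-free idea above; it is not the exact AVNM rule (the sample space reduction step is omitted) and all names are illustrative.

```python
import numpy as np

def weighted_vote_predict(X_train, y_train, x, eps=1e-9):
    """Generic distance-weighted voting: every training sample votes for its class
    with weight 1/distance, so no 'k' has to be chosen and far-away outliers
    contribute very little to the final decision."""
    d = np.linalg.norm(X_train - x, axis=1)
    w = 1.0 / (d + eps)
    scores = {c: w[y_train == c].sum() for c in np.unique(y_train)}
    return max(scores, key=scores.get)
```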

  20. A Novel Riemannian Metric Based on Riemannian Structure and Scaling Information for Fixed Low-Rank Matrix Completion.

    Mao, Shasha; Xiong, Lin; Jiao, Licheng; Feng, Tian; Yeung, Sai-Kit


    Riemannian optimization has been widely used to deal with the fixed low-rank matrix completion problem, and the Riemannian metric is a crucial factor in obtaining the search direction in Riemannian optimization. This paper proposes a new Riemannian metric that simultaneously considers the Riemannian geometry structure and the scaling information, and that is smoothly varying and invariant along the equivalence class. The proposed metric can make an effective tradeoff between the Riemannian geometry structure and the scaling information. Essentially, it can be viewed as a generalization of some existing metrics. Based on the proposed Riemannian metric, we also design a Riemannian nonlinear conjugate gradient algorithm, which can efficiently solve the fixed low-rank matrix completion problem. Experiments on fixed low-rank matrix completion, collaborative filtering, and image and video recovery illustrate that the proposed method is superior to the state-of-the-art methods in convergence efficiency and numerical performance.

  1. A Code Level Based Programmer Assessment and Selection Criterion Using Metric Tools

    Ezekiel U. Okike


    Full Text Available This study presents a code level measurement of computer programs developed by computer programmers using a Chidamber and Kemerer Java metrics (CKJM) tool and the Myers Briggs Type Indicator (MBTI) tool. The identification of potential computer programmers using personality trait factors does not seem to be the best approach without a code level measurement of the quality of programs. Hence the need to evolve a metric tool which measures both the personality traits of programmers and the code level quality of programs developed by programmers. This is the focus of this study. In this experiment, a set of Java based programming tasks was given to 33 student programmers who could confidently use the Java programming language. The codes developed by these students were analyzed for quality using the CKJM tool. Cohesion, coupling and number of public methods (NPM) metrics were used in the study. These three metrics were chosen from the CKJM suite because they are useful in measuring well designed code. By examining the cohesion values of classes, high cohesion in the range [0,1] and low coupling imply well designed code. Also, the number of public methods (NPM) in a well-designed class is always less than 5 when cohesion is in the range [0,1]. Results from this study show that 19 of the 33 programmers developed good and cohesive programs while 14 did not. Further analysis revealed the personality traits of programmers and the number of good programs written by them. Programmers with Introverted Sensing Thinking Judging (ISTJ) traits produced the highest number of good programs, followed by Introverted iNtuitive Thinking Perceiving (INTP), Introverted iNtuitive Feeling Perceiving (INFP), and Extroverted Sensing Thinking Judging (ESTJ)
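    A small rule following the thresholds quoted in this record (cohesion in [0,1], low coupling, NPM below 5) might look like the sketch below; the numeric coupling threshold is an added assumption, not part of the study.

```python
def looks_well_designed(cohesion, coupling, npm, max_coupling=5, max_npm=5):
    """Toy classification rule: flag a class as well designed when cohesion lies in
    [0, 1], coupling is low (threshold assumed here), and the number of public
    methods (NPM) stays below 5, as quoted in the record above."""
    return 0.0 <= cohesion <= 1.0 and coupling <= max_coupling and npm < max_npm

# e.g. looks_well_designed(cohesion=0.8, coupling=3, npm=4) -> True
```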

  2. Analysis and Evaluating Security of Component-Based Software Development: A Security Metrics Framework

    Irshad Ahmad Mir


    Full Text Available Evaluating the security of software systems is a complex problem for the research communities due to the multifaceted and complex operational environment of the system involved. Many efforts towards secure system development methodologies like secSDLC by Microsoft have been made, but the measurement scales on which security can be measured have had little success. With the shift in the nature of software development from standalone applications to distributed environments, where a number of potential adversaries and threats are present, security has to be outlined and incorporated at the architectural level of the system, and so arises the need to evaluate and measure the level of security achieved. In this paper we present a framework for security evaluation at the design and architectural phase of system development. We have outlined the security objectives based on the security requirements of the system and analyzed the behavior of various software architecture styles. Component-based development (CBD) is an important and widely used model for developing new large-scale software due to various benefits like increased reuse and reduced time to market and cost. Our emphasis is on CBD, and we have proposed a framework for the security evaluation of component-based software design and derived security metrics for the three main pillars of security, confidentiality, integrity and availability, based on component composition, dependency and inter-component data/information flow. The proposed framework and derived metrics are flexible, in that the system developer can modify the metrics according to the situation, and are applicable both during the development phases and after development.

  3. Canopy Height Estimation by Characterizing Waveform LiDAR Geometry Based on Shape-Distance Metric

    Eric Ariel L. Salas


    Full Text Available There have been few approaches developed for the estimation of height using waveform LiDAR data. Unlike existing methods, we illustrate how the new Moment Distance (MD) framework can characterize the canopy height based on the geometry and return power of the LiDAR waveform without having to go through curve modeling processes. Our approach offers the possibility of using the raw waveform data to capture vital information from the variety of complex waveform shapes in LiDAR. We assess the relationship of the MD metrics to the key waveform landmarks—such as locations of peaks, power of returns, canopy heights, and height metrics—using synthetic data and real Laser Vegetation Imaging Sensor (LVIS) data. In order to verify the utility of the new approach, we use field measurements obtained through the DESDynI (Deformation, Ecosystem Structure and Dynamics of Ice) campaign. Our results reveal that the MDI can capture temporal dynamics of the canopy and segregate generations of stands based on curve shapes. The method satisfactorily estimates the canopy height using the synthetic (r2 = 0.40) and the LVIS (r2 = 0.74) datasets. The MDI is also comparable with the existing RH75 (relative height at 75%) and RH50 (relative height at 50%) height metrics. Furthermore, the MDI shows better correlations with ground-based measurements than the relative height metrics. The MDI performs well for any type of waveform shape. This opens the possibility of looking more closely at single-peaked waveforms that usually carry complex extremes.

  4. Affine Invariant, Model-Based Object Recognition Using Robust Metrics and Bayesian Statistics

    Zografos, Vasileios; 10.1007/11559573_51


    We revisit the problem of model-based object recognition for intensity images and attempt to address some of the shortcomings of existing Bayesian methods, such as unsuitable priors and the treatment of residuals with a non-robust error norm. We do so by using a reformulation of the Huber metric and carefully chosen prior distributions. Our proposed method is invariant to 2-dimensional affine transformations and, because it is relatively easy to train and use, it is suited for general object matching problems.
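    For reference, the standard Huber penalty (quadratic near zero, linear in the tails) can be written as below; the specific reformulation used by the authors is not reproduced here.

```python
import numpy as np

def huber(r, delta=1.0):
    """Standard Huber penalty: quadratic for small residuals, linear for large ones,
    which is what makes the error norm robust to outlying residuals."""
    r = np.abs(r)
    return np.where(r <= delta, 0.5 * r**2, delta * (r - 0.5 * delta))
```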

  5. Analysis on the Metrics used in Optimizing Electronic Business based on Learning Techniques

    Irina-Steliana STAN


    Full Text Available The present paper proposes a methodology for analyzing the metrics related to electronic business. The drafts of the optimizing models include KPIs that can highlight business specifics, provided they are integrated using learning-based techniques. Having identified the most important and high-impact elements of the business, the models should ultimately capture the links between them by automating business flows. Human resources will increasingly collaborate with the optimizing models, which will translate into higher quality decisions followed by profitability increases.

  6. Reliable Coverage Area Based Link Expiration Time (LET) Routing Metric for Mobile Ad Hoc Networks

    Ahmed, Izhar; Tepe, K. E.; Singh, B. K.

    This paper presents a new routing metric for mobile ad hoc networks. It considers both coverage area as well as link expiration information, which in turn requires position, speed and direction information of nodes in the network. With this new metric, a routing protocol obtains routes that last longer with as few hops as possible. The proposed routing metric is implemented with Ad Hoc On-Demand Distance Vector Routing (AODV) protocol. Thus, the performance of the proposed routing metric is tested against the minimum hop metric of AODV. Simulation results show that the AODV protocol with the new routing metric significantly improves delivery ratio and reduces routing overhead. The delay performance of AODV with the new metric is comparable to its minimum hop metric implementation.
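    A commonly used closed form for the link expiration time of two nodes moving with constant speed and heading is sketched below; it takes the position, speed, direction and radio-range inputs the abstract mentions, but whether the paper uses exactly this form is an assumption.

```python
import math

def link_expiration_time(xi, yi, vi, ti, xj, yj, vj, tj, r):
    """Classical LET estimate for two nodes with constant speed and heading
    (ti, tj are headings in radians, r is the radio range)."""
    a = vi * math.cos(ti) - vj * math.cos(tj)
    b = xi - xj
    c = vi * math.sin(ti) - vj * math.sin(tj)
    d = yi - yj
    if a == 0 and c == 0:            # identical velocity vectors: link never expires
        return math.inf
    disc = (a * a + c * c) * r * r - (a * d - b * c) ** 2
    if disc < 0:                     # nodes are already out of range
        return 0.0
    return (-(a * b + c * d) + math.sqrt(disc)) / (a * a + c * c)
```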

  7. Development and evaluation of aperture-based complexity metrics using film and EPID measurements of static MLC openings

    Götstedt, Julia [Department of Radiation Physics, University of Gothenburg, Göteborg 413 45 (Sweden); Karlsson Hauer, Anna; Bäck, Anna, E-mail: [Department of Therapeutic Radiation Physics, Sahlgrenska University Hospital, Göteborg 413 45 (Sweden)


    Purpose: Complexity metrics have been suggested as a complement to measurement-based quality assurance for intensity modulated radiation therapy (IMRT) and volumetric modulated arc therapy (VMAT). However, these metrics have not yet been sufficiently validated. This study develops and evaluates new aperture-based complexity metrics in the context of static multileaf collimator (MLC) openings and compares them to previously published metrics. Methods: This study develops the converted aperture metric and the edge area metric. The converted aperture metric is based on small and irregular parts within the MLC opening that are quantified as measured distances between MLC leaves. The edge area metric is based on the relative size of the region around the edges defined by the MLC. Another metric suggested in this study is the circumference/area ratio. Earlier defined aperture-based complexity metrics—the modulation complexity score, the edge metric, the ratio monitor units (MU)/Gy, the aperture area, and the aperture irregularity—are compared to the newly proposed metrics. A set of small and irregular static MLC openings are created which simulate individual IMRT/VMAT control points of various complexities. These are measured with both an amorphous silicon electronic portal imaging device and EBT3 film. The differences between calculated and measured dose distributions are evaluated using a pixel-by-pixel comparison with two global dose difference criteria of 3% and 5%. The extent of the dose differences, expressed in terms of pass rate, is used as a measure of the complexity of the MLC openings and used for the evaluation of the metrics compared in this study. The different complexity scores are calculated for each created static MLC opening. The correlation between the calculated complexity scores and the extent of the dose differences (pass rate) are analyzed in scatter plots and using Pearson’s r-values. Results: The complexity scores calculated by the edge
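    One of the aperture-based scores named above, the circumference/area ratio, can be approximated from per-leaf-pair positions as sketched below; the leaf width and the single-connected-opening assumption are illustrative choices, not the authors' implementation.

```python
import numpy as np

def circumference_area_ratio(left, right, leaf_width=5.0):
    """Approximate circumference/area ratio of an MLC opening given per-leaf-pair
    left/right positions (mm); larger values indicate smaller or more irregular,
    i.e. more complex, apertures. Assumes a single connected opening."""
    left = np.asarray(left, dtype=float)
    right = np.asarray(right, dtype=float)
    gaps = np.clip(right - left, 0.0, None)        # opening of each leaf pair
    area = gaps.sum() * leaf_width
    if area == 0:
        return np.inf
    # vertical leaf-end edges, top/bottom caps, and steps between adjacent pairs
    perimeter = 2.0 * leaf_width * np.count_nonzero(gaps)
    perimeter += gaps[0] + gaps[-1]
    perimeter += np.abs(np.diff(left)).sum() + np.abs(np.diff(right)).sum()
    return perimeter / area
```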

  8. Metrication in the United States. Citations from the NTIS data base

    Habercom, G. E., Jr.


    Materials are cited on the metric system of measurements and conversion of U.S. measurements to the metric system. The metrication process, industrial and economic impacts, problems involved, the benefits anticipated, and public reactions and relations are discussed. The metric system is discussed, along with its application to various fields. This updated bibliography contains 192 abstracts, 13 of which are new entries to the previous edition.

  9. Statistical rice yield modeling using blended MODIS-Landsat based crop phenology metrics in Taiwan

    Chen, C. R.; Chen, C. F.; Nguyen, S. T.; Lau, K. V.


    Taiwan is a populated island with a majority of residents settled in the western plains where soils are suitable for rice cultivation. Rice is not only the most important commodity, but also plays a critical role for agricultural and food marketing. Information on rice production is thus important for policymakers to devise timely plans for ensuring sustainable socioeconomic development. Because rice fields in Taiwan are generally small and crop monitoring requires information on crop phenology matching the spatiotemporal resolution of satellite data, this study used Landsat-MODIS fusion data for rice yield modeling in Taiwan. We processed the data for the first crop (Feb-Mar to Jun-Jul) and the second (Aug-Sep to Nov-Dec) in 2014 through five main steps: (1) data pre-processing to account for geometric and radiometric errors of Landsat data, (2) Landsat-MODIS data fusion using the spatial-temporal adaptive reflectance fusion model, (3) construction of the smooth time-series enhanced vegetation index 2 (EVI2), (4) rice yield modeling using EVI2-based crop phenology metrics, and (5) error verification. A comparison between EVI2 derived from the fusion image and that from the reference Landsat image indicated close agreement between the two datasets (R2 > 0.8). We analysed the smooth EVI2 curves to extract phenology metrics or phenological variables for establishment of rice yield models. The results indicated that the established yield models significantly explained more than 70% of the variability in the data in both cases. The root mean square error (RMSE) and mean absolute error (MAE) used to measure the model accuracy revealed consistency between the estimated yields and the government's yield statistics. This study demonstrates the advantages of using EVI2-based phenology metrics (derived from Landsat-MODIS fusion data) for rice yield estimation in Taiwan prior to the harvest period.

  10. Fuzzy Sets-based Control Rules for Terminating Algorithms

    Jose L. VERDEGAY


    Full Text Available In this paper some problems arising in the interface between two different areas, Decision Support Systems and Fuzzy Sets and Systems, are considered. The Model-Base Management System of a Decision Support System which involves some fuzziness is considered, and in that context the questions of managing the fuzziness in some optimisation models, and then of using fuzzy rules for terminating conventional algorithms, are presented, discussed and analyzed. Finally, for the concrete case of the Travelling Salesman Problem, and as an illustration of the determination, management and use of the fuzzy rules, a new algorithm that is easy to implement in the Model-Base Management System of any oriented Decision Support System is shown.

  11. A Rule Based System for Speech Language Context Understanding

    Imran Sarwar Bajwa; Muhammad Abbas Choudhary


    Speech or natural language contents are major tools of communication. This research paper presents a natural language processing based automated system for understanding speech language text. A new rule based model has been presented for analyzing natural languages and extracting the relative meanings from the given text. The user writes the natural language text in simple English in a few paragraphs, and the designed system has a sound ability to analyze the script given by the user. After composite analysis and extraction of associated information, the designed system gives particular meanings to an assortment of speech language text on the basis of its context. The designed system uses standard speech language rules that are clearly defined for all speech languages such as English, Urdu, Chinese, Arabic, French, etc. The designed system provides a quick and reliable way to comprehend speech language context and generate respective meanings.

  12. Transfer of Rule-Based Expertise through a Tutorial Dialogue


    Only fragments of this record survive: it concerns (1) a study of the epistemology of MYCIN's knowledge and (2) management of the tutorial dialogue so as to achieve economical, systematic transfer of rule-based expertise.

  13. Comparison of some classification algorithms based on deterministic and nondeterministic decision rules

    Delimata, Paweł


    We discuss two, in a sense extreme, kinds of nondeterministic rules in decision tables. The first kind of rules, called inhibitory rules, block only one decision value (i.e., they have all but one of the possible decisions on their right-hand sides). Contrary to this, any rule of the second kind, called a bounded nondeterministic rule, can have only a few decisions on its right-hand side. We show that both kinds of rules can be used for improving the quality of classification. In the paper, two lazy classification algorithms of polynomial time complexity are considered. These algorithms are based on deterministic and inhibitory decision rules, but the direct generation of rules is not required. Instead, for any new object the considered algorithms efficiently extract from a given decision table some information about the set of rules. Next, this information is used by a decision-making procedure. The reported results of experiments show that the algorithms based on inhibitory decision rules are often better than those based on deterministic decision rules. We also present an application of bounded nondeterministic rules in the construction of rule based classifiers. We include the results of experiments showing that by combining rule based classifiers based on minimal decision rules with bounded nondeterministic rules having confidence close to 1 and sufficiently large support, it is possible to improve the classification quality. © 2010 Springer-Verlag.

  14. Fast rule-based bioactivity prediction using associative classification mining

    Yu Pulan


    Full Text Available Abstract Relating chemical features to bioactivities is critical in molecular design and is used extensively in the lead discovery and optimization process. A variety of techniques from statistics, data mining and machine learning have been applied to this process. In this study, we utilize a collection of methods, called associative classification mining (ACM), which are popular in the data mining community but so far have not been applied widely in cheminformatics. More specifically, classification based on predictive association rules (CPAR), classification based on multiple association rules (CMAR) and classification based on association rules (CBA) are employed on three datasets using various descriptor sets. Experimental evaluations on anti-tuberculosis (antiTB), mutagenicity and hERG (the human Ether-a-go-go-Related Gene) blocker datasets show that these three methods are computationally scalable and appropriate for high speed mining. Additionally, they provide comparable accuracy and efficiency to the commonly used Bayesian and support vector machine (SVM) methods, and produce highly interpretable models.
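    A bare-bones stand-in for class-association-rule generation (not CPAR/CMAR/CBA themselves) that enumerates (itemset -> class) rules and keeps those meeting minimum support and confidence; all names and defaults are illustrative.

```python
from itertools import combinations
from collections import defaultdict

def class_association_rules(transactions, labels, min_sup=0.1, min_conf=0.8, max_len=2):
    """Enumerate (feature-itemset -> class) rules over string-valued features and
    filter them by minimum support and confidence."""
    n = len(transactions)
    itemset_counts, rule_counts = defaultdict(int), defaultdict(int)
    for items, c in zip(transactions, labels):
        for k in range(1, max_len + 1):
            for combo in combinations(sorted(items), k):
                itemset_counts[combo] += 1
                rule_counts[(combo, c)] += 1
    rules = []
    for (combo, c), cc in rule_counts.items():
        sup, conf = cc / n, cc / itemset_counts[combo]
        if sup >= min_sup and conf >= min_conf:
            rules.append((combo, c, sup, conf))
    # strongest rules first: high confidence, then high support
    return sorted(rules, key=lambda r: (-r[3], -r[2]))
```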

  15. Improving Intrusion Detection System Based on Snort Rules for Network Probe Attacks Detection with Association Rules Technique of Data Mining

    Nattawat Khamphakdee


    Full Text Available The intrusion detection system (IDS is an important network security tool for securing computer and network systems. It is able to detect and monitor network traffic data. Snort IDS is an open-source network security tool. It can search and match rules with network traffic data in order to detect attacks, and generate an alert. However, the Snort IDS  can detect only known attacks. Therefore, we have proposed a procedure for improving Snort IDS rules, based on the association rules data mining technique for detection of network probe attacks.  We employed the MIT-DARPA 1999 data set for the experimental evaluation. Since behavior pattern traffic data are both normal and abnormal, the abnormal behavior data is detected by way of the Snort IDS. The experimental results showed that the proposed Snort IDS rules, based on data mining detection of network probe attacks, proved more efficient than the original Snort IDS rules, as well as icmp.rules and icmp-info.rules of Snort IDS.  The suitable parameters for the proposed Snort IDS rules are defined as follows: Min_sup set to 10%, and Min_conf set to 100%, and through the application of eight variable attributes. As more suitable parameters are applied, higher accuracy is achieved.

  16. Symmetric Kullback-Leibler Metric Based Tracking Behaviors for Bioinspired Robotic Eyes.

    Liu, Hengli; Luo, Jun; Wu, Peng; Xie, Shaorong; Li, Hengyu


    A symmetric Kullback-Leibler metric based tracking system, capable of tracking moving targets, is presented for a bionic spherical parallel mechanism to minimize a tracking error function and simulate the smooth pursuit of human eyes. More specifically, we propose a real-time moving target tracking algorithm which utilizes spatial histograms taking into account the symmetric Kullback-Leibler metric. In the proposed algorithm, the key spatial histograms are extracted and taken into a particle filtering framework. Once the target is identified, an image-based control scheme is implemented to drive the bionic spherical parallel mechanism such that the identified target is tracked at the center of the captured images. Meanwhile, the robot motion information is fed forward to develop an adaptive smooth tracking controller inspired by the vestibulo-ocular reflex mechanism. The proposed tracking system is designed to make the robot track dynamic objects when the robot travels through transmittable terrains, especially bumpy environments. Experimental results under conditions of violent attitude variation, as when the robot works in the bumpy environment mentioned, demonstrate the effectiveness and robustness of our bioinspired tracking system using the bionic spherical parallel mechanism inspired by head-eye coordination.
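    The similarity score at the heart of the tracker above is the symmetric Kullback-Leibler divergence between histograms; a minimal version (the spatial-histogram extraction and the particle filter are omitted) is:

```python
import numpy as np

def symmetric_kl(p, q, eps=1e-12):
    """Symmetric Kullback-Leibler divergence between two (spatial) histograms,
    the score a tracker can use to compare a candidate region with the target model."""
    p = np.asarray(p, dtype=float) + eps
    q = np.asarray(q, dtype=float) + eps
    p, q = p / p.sum(), q / q.sum()
    return 0.5 * (np.sum(p * np.log(p / q)) + np.sum(q * np.log(q / p)))
```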

  17. Rough set and rule-based multicriteria decision aiding

    Roman Slowinski


    Full Text Available The aim of multicriteria decision aiding is to give the decision maker a recommendation concerning a set of objects evaluated from multiple points of view called criteria. Since a rational decision maker acts with respect to his/her value system, in order to recommend the most-preferred decision, one must identify the decision maker's preferences. In this paper, we focus on preference discovery from data concerning some past decisions of the decision maker. We consider the preference model in the form of a set of "if..., then..." decision rules discovered from the data by inductive learning. To structure the data prior to induction of rules, we use the Dominance-based Rough Set Approach (DRSA). DRSA is a methodology for reasoning about data, which handles ordinal evaluations of objects on considered criteria and monotonic relationships between these evaluations and the decision. We review applications of DRSA to a large variety of multicriteria decision problems.

  18. Developing a green metric mechanism versus LEED for tall buildings in Qatar: evaluation-based case study

    Galal A Ibrahim, Hatem [Department of Architecture and Urban Planning, College of Engineering, Qatar University (Qatar)], E-mail:


    Qatar, with its large and growing economy, is one of the busiest construction sites in the world. In Doha, numerous tall buildings have been constructed to provide office space and meet lifestyle property demand. The aim of this paper is to develop a new green metrics system for Doha tall buildings. This green metrics system distributes credits based on indoor thermal comfort, energy consumption, water management and innovation in design. The system was applied to the Tornado Tower, a 52-storey office building situated in Doha's West Bay area, and compared with the LEED system. It was found that the new metrics system developed herein is better suited to Doha's tall buildings than the LEED system. This paper presented a new green metrics system which will be helpful in determining the environmental performance of tall buildings in Qatar.

  19. Acceleration of association‐rule based markov decision processes

    Ma. de G. García‐Hernández


    Full Text Available In this paper, we present a new approach for the estimation of Markov decision processes based on efficient association rule mining techniques such as Apriori. For the fastest solution of the resulting association-rule based Markov decision process, several accelerating procedures such as asynchronous updates and prioritization using a static ordering have been applied. A new criterion for state reordering in decreasing order of maximum reward is also compared with a modified topological reordering algorithm. Experimental results obtained on a finite state and action-space stochastic shortest path problem demonstrate the feasibility of the new approach.
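    A toy version of the acceleration idea above: asynchronous (Gauss-Seidel) value iteration that sweeps states in a static order of decreasing maximum immediate reward. The tabular MDP interface and all parameter names are assumptions, not the paper's implementation.

```python
import numpy as np

def prioritized_value_iteration(P, R, gamma=0.95, tol=1e-6, max_sweeps=1000):
    """Asynchronous value iteration with a static priority ordering.
    P[a] is an |S|x|S| transition matrix for action a; R is an |S|x|A| reward table."""
    S, A = R.shape
    V = np.zeros(S)
    order = np.argsort(-R.max(axis=1))          # states sorted by max immediate reward
    for _ in range(max_sweeps):
        delta = 0.0
        for s in order:
            v_new = max(R[s, a] + gamma * P[a][s] @ V for a in range(A))
            delta = max(delta, abs(v_new - V[s]))
            V[s] = v_new                        # in-place (asynchronous) update
        if delta < tol:
            break
    return V
```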

  20. ALC: automated reduction of rule-based models

    Gilles Ernst


    Full Text Available Abstract Background Combinatorial complexity is a challenging problem for the modeling of cellular signal transduction since the association of a few proteins can give rise to an enormous amount of feasible protein complexes. The layer-based approach is an approximate but accurate method for the mathematical modeling of signaling systems with inherent combinatorial complexity. The number of variables in the simulation equations is highly reduced and the resulting dynamic models show a pronounced modularity. Layer-based modeling allows for the modeling of systems not accessible previously. Results ALC (Automated Layer Construction) is a computer program that highly simplifies the building of reduced modular models, according to the layer-based approach. The model is defined using a simple but powerful rule-based syntax that supports the concepts of modularity and macrostates. ALC performs consistency checks on the model definition and provides the model output in different formats (C MEX, MATLAB, Mathematica and SBML) as ready-to-run simulation files. ALC also provides additional documentation files that simplify the publication or presentation of the models. The tool can be used offline or via a form on the ALC website. Conclusion ALC allows for a simple rule-based generation of layer-based reduced models. The model files are given in different formats as ready-to-run simulation files.

  1. Registering multiple range images based on distance metric of surface-to-surface

    ZHANG Hongbin; XIE Feng


    In most conventional algorithms of registering multiple range images, the pose parameters are estimated by using the distance sum between closest point pairs as the objective function. These algorithms have the problems of inexact point correspondence, registration accuracy, and sensitivity to initial registration parameters. Due to the scanner settings, scanner distance, and surface slopes, two or more 3D data sets are unlikely to be acquired such that the 3D data points exactly correspond, and also each point in the data set may represent different surface areas. This paper proposes a novel registration algorithm based on a distance metric of surface-to-surface. The algorithm uses triangle meshes to represent the surfaces. Based on surface sampling and the point-to-surface distances, the integration calculation of the mean distance between surfaces is derived and reduced to a simple formula. The method was tested on synthetic and real range images.

  2. A rule-based neural controller for inverted pendulum system.

    Hao, J; Vandewalle, J; Tan, S


    This paper tries to demonstrate how a heuristic neural control approach can be used to solve a complex nonlinear control problem. The control task is to swing up a pendulum mounted on a cart from its stable position (vertically down) to the zero state (up right) and keep it there by applying a sequence of two opposing constant forces of equal magnitude to the mass center of the cart. In addition, the displacement of the cart itself is confined to within a preset limit during the swinging up action and it will eventually be brought to the origin of the track. This is truly a nontrivial nonlinear regulation problem and is considerably difficult compared to the pendulum balancing problem (and its variations) widely adopted as a benchmarking test system for neural controllers. Through the solution of this specific control problem, we try to illustrate a heuristic neural control approach with task decomposition, control rule extraction and neural net rule implementation as its basic elements. Specializing to the pendulum problem, the global control task is decomposed into subtasks namely pendulum positioning and cart positioning. Accordingly, three separate neural subcontrollers are designed to cater to the subtasks and their coordination, i.e., pendulum subcontroller (PSC), cart subcontroller (CSC) and the switching subcontroller (SSC). Each of the subcontrollers is designed based on the rules and guidelines obtained from the experiences of a human operator. The simulation result is included to show the actual performance of the controller.

  3. A rule-based stemmer for Arabic Gulf dialect

    Belal Abuata


    Full Text Available Arabic dialects have been widely used instead of the Modern Standard Arabic language in many fields for many years. The presence of dialects in any language is a big challenge. Dialects add a new set of variational dimensions in fields like natural language processing, information retrieval and even in Arabic chatting between different Arab nationals. Spoken dialects have no standard morphology, phonology or lexicon like Modern Standard Arabic. Hence, the objective of this paper is to describe a procedure or algorithm by which a stem for the Arabian Gulf dialect can be defined. The algorithm is rule based. Special rules are created to remove the suffixes and prefixes of the dialect words. Also, the algorithm applies rules related to the word size and the relation between adjacent letters. The algorithm was tested on a number of words and gave a good correct-stem ratio. The algorithm is also compared with two Modern Standard Arabic algorithms. The results showed that Modern Standard Arabic stemmers performed poorly with the Arabic Gulf dialect and our algorithm performed poorly when applied to Modern Standard Arabic words.
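    A minimal sketch of rule-based affix stripping with a stem-length guard is shown below; the prefix and suffix lists are illustrative placeholders, not the Gulf-dialect rules defined in the paper.

```python
def simple_rule_stemmer(word,
                        prefixes=("ال", "و", "ب"),
                        suffixes=("ها", "ين", "ون", "ي"),
                        min_stem_len=3):
    """Sketch of rule-based stemming: strip at most one known prefix and one known
    suffix, and only when the remaining stem stays long enough. The affix lists are
    placeholders, not the paper's dialect-specific rules."""
    stem = word
    for p in prefixes:
        if stem.startswith(p) and len(stem) - len(p) >= min_stem_len:
            stem = stem[len(p):]
            break
    for s in suffixes:
        if stem.endswith(s) and len(stem) - len(s) >= min_stem_len:
            stem = stem[:-len(s)]
            break
    return stem
```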

  4. Critical thinking skills in nursing students: comparison of simulation-based performance with metrics.

    Fero, Laura J; O'Donnell, John M; Zullo, Thomas G; Dabbs, Annette DeVito; Kitutu, Julius; Samosky, Joseph T; Hoffman, Leslie A


    This paper is a report of an examination of the relationship between metrics of critical thinking skills and performance in simulated clinical scenarios. Paper and pencil assessments are commonly used to assess critical thinking but may not reflect simulated performance. In 2007, a convenience sample of 36 nursing students participated in measurement of critical thinking skills and simulation-based performance using videotaped vignettes, high-fidelity human simulation, the California Critical Thinking Disposition Inventory and California Critical Thinking Skills Test. Simulation-based performance was rated as 'meeting' or 'not meeting' overall expectations. Test scores were categorized as strong, average, or weak. Most (75.0%) students did not meet overall performance expectations using videotaped vignettes or high-fidelity human simulation; most difficulty related to problem recognition and reporting findings to the physician. There was no difference between overall performance based on method of assessment (P = 0.277). More students met subcategory expectations for initiating nursing interventions (P ≤ 0.001) using high-fidelity human simulation. The relationship between videotaped vignette performance and critical thinking disposition or skills scores was not statistically significant, except for problem recognition and overall critical thinking skills scores (Cramer's V = 0.444, P = 0.029). There was a statistically significant relationship between overall high-fidelity human simulation performance and overall critical thinking disposition scores (Cramer's V = 0.413, P = 0.047). Students' performance reflected difficulty meeting expectations in simulated clinical scenarios. High-fidelity human simulation performance appeared to approximate scores on metrics of critical thinking best. Further research is needed to determine if simulation-based performance correlates with critical thinking skills in the clinical setting. © 2010 The Authors. Journal of Advanced

  5. New QCD sum rules based on canonical commutation relations

    Hayata, Tomoya


    A new derivation of QCD sum rules based on canonical commutators is developed. It is a simple and straightforward generalization of the Thomas-Reiche-Kuhn sum rule on the basis of the Kugo-Ojima operator formalism of a non-abelian gauge theory and a suitable subtraction of UV divergences. By applying the method to the vector and axial-vector currents in QCD, the exact Weinberg sum rules are examined. Vector current sum rules and new fractional power sum rules are also discussed.

  6. Network on Chip: a New Approach of QoS Metric Modeling Based on Calculus Theory

    Salem NASRI


    Full Text Available According to the ITRS, in 2018 ICs will be able to integrate billions of transistors, with feature sizes around 18 nm and clock frequencies near 10 GHz. In this context, the Network on Chip (NoC) appears as an attractive solution to implement future high performance networks with more QoS management. A NoC is composed of IP cores (Intellectual Property) and switches connected among themselves by communication channels. End-to-End Delay (EED) communication is accomplished by the exchange of data among IP cores. Often, the structure of particular messages is not adequate for the communication purposes. This leads to the concept of packet switching. In the context of NoCs, packets are composed of a header, payload, and trailer. Packets are divided into small pieces called flits. To meet the required performance, the NoC hardware resources should be specified in an early step of the system design. The main attention should be given to the choice of network parameters such as the physical buffer size in the node. The EED and packet loss are some of the critical QoS metrics. Some real-time and multimedia applications bound these parameters and require specific hardware resources and particular management approaches in the NoC switch. A traffic contract (SLA, Service Level Agreement) specifies the ability of a network or protocol to give guaranteed performance, throughput or latency bounds based on mutually agreed measures, usually by prioritizing traffic. A defined Quality of Service (QoS) may be required for some types of network real time traffic or multimedia applications. The main goal of this paper is, using the Network on Chip modeling architecture, to define a QoS metric. We focus on the network delay bound and packet losses. This approach is based on the Network Calculus theory, a mathematical model to represent the data flow behavior between IPs interconnected over the NoC. We propose an approach of QoS-metric based on Qo
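    As an example of the kind of bound Network Calculus provides, the classical worst-case delay for a token-bucket flow crossing a rate-latency server is shown below; this is the textbook result, not the paper's NoC-specific model, and the numbers in the usage comment are purely illustrative.

```python
def delay_bound(sigma, rho, R, T):
    """Classical network-calculus worst-case delay for a flow with arrival curve
    alpha(t) = sigma + rho*t crossing a rate-latency server beta(t) = R*(t - T)+:
    D <= T + sigma / R, provided the stability condition rho <= R holds."""
    if rho > R:
        raise ValueError("unstable: arrival rate exceeds service rate")
    return T + sigma / R

# e.g. a 64-flit burst at 0.2 flit/cycle through a switch serving 1 flit/cycle
# with 4 cycles of latency is bounded by 4 + 64/1 = 68 cycles
print(delay_bound(sigma=64, rho=0.2, R=1.0, T=4))
```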

  7. Design based Object-Oriented Metrics to Measure Coupling and Cohesion



    Full Text Available The object oriented design and object oriented development environment are currently popular in software organizations due to object oriented programming languages. As object oriented technology enters software organizations, it has created new challenges for companies which used only product metrics as a tool for monitoring, controlling and maintaining the software product. This paper presents new object oriented metrics, namely a coupling metric for a class, obtained by counting the number of associated classes within the class and the total associated classes, and a cohesion metric at the method and function level, to estimate object oriented software. To this end, we discuss object oriented issues and measures, analyzing the proposed object oriented metrics through coupling and cohesion to check complexity with a weight count method. We also discuss the estimation process after analysis of the proposed object oriented metrics, measuring and checking their performance in comparison to other object oriented metrics.

  8. Building distributed rule-based systems using the AI Bus

    Schultz, Roger D.; Stobie, Iain C.


    The AI Bus software architecture was designed to support the construction of large-scale, production-quality applications in areas of high technology flux, running heterogeneous distributed environments, utilizing a mix of knowledge-based and conventional components. These goals led to its current development as a layered, object-oriented library for cooperative systems. This paper describes the concepts and design of the AI Bus and its implementation status as a library of reusable and customizable objects, structured by layers from operating system interfaces up to high-level knowledge-based agents. Each agent is a semi-autonomous process with specialized expertise, and consists of a number of knowledge sources (a knowledge base and inference engine). Inter-agent communication mechanisms are based on blackboards and Actors-style acquaintances. As a conservative first implementation, we used C++ on top of Unix, and wrapped an embedded Clips with methods for the knowledge source class. This involved designing standard protocols for communication and functions which use these protocols in rules. Embedding several CLIPS objects within a single process was an unexpected problem because of global variables, whose solution involved constructing and recompiling a C++ version of CLIPS. We are currently working on a more radical approach to incorporating CLIPS, by separating out its pattern matcher, rule and fact representations and other components as true object oriented modules.




    Full Text Available Metrics are used to help a software engineer in quantitative analysis to assess the quality of the design before a system is built. The focus of Object-Oriented metrics is on the class, which is the fundamental building block of the Object-Oriented architecture. These metrics are focused on internal object structure and external object structure. Internal object structure reflects the complexity of each individual entity, such as methods and classes. External complexity measures the interaction among entities, such as Coupling and Inheritance. This paper mainly focuses on a set of object oriented metrics that can be used to measure the quality of an object oriented design. Two types of complexity metrics in the Object-Oriented paradigm are considered, namely Mood metrics and Lorenz & Kidd metrics. Mood metrics consist of Method inheritance factor (MIF), Coupling factor (CF), Attribute inheritance factor (AIF), Method hiding factor (MHF), Attribute hiding factor (AHF), and Polymorphism factor (PF). Lorenz & Kidd metrics consist of Number of operations overridden (NOO), Number of operations added (NOA), and Specialization index (SI). Mood metrics and Lorenz & Kidd metrics measurements are used mainly by designers and testers. Designers use these metrics to assess the software early in the process, making changes that will reduce complexity and improve the continuing capability of the design. Testers use them to test the software, examining the complexity, performance of the system and quality of the software. This paper reviews how Mood metrics and Lorenz & Kidd metrics are validated theoretically and empirically. In this paper, work has been done to explore the quality of design of software components using the object oriented paradigm. A number of object oriented metrics have been proposed in the literature for measuring design attributes such as inheritance, coupling, polymorphism etc. In this paper, metrics have been used to analyze various features of software components. Complexity of methods
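    Two of the MOOD metrics listed above have simple system-level formulas (MIF = inherited methods / available methods, MHF = hidden methods / defined methods); a sketch using assumed per-class counts is given below, and the dictionary keys are illustrative.

```python
def mood_mif_mhf(classes):
    """Compute MIF and MHF from per-class method counts.
    `classes` is a list of dicts with keys 'inherited', 'defined', 'hidden'
    (hidden = non-public methods among those defined in the class)."""
    inherited = sum(c["inherited"] for c in classes)
    defined = sum(c["defined"] for c in classes)
    hidden = sum(c["hidden"] for c in classes)
    available = inherited + defined            # methods available in each class
    mif = inherited / available if available else 0.0
    mhf = hidden / defined if defined else 0.0
    return mif, mhf

# e.g. two classes: one base class, one subclass inheriting 4 methods
print(mood_mif_mhf([{"inherited": 0, "defined": 5, "hidden": 2},
                    {"inherited": 4, "defined": 3, "hidden": 1}]))
```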

  10. Medicaid program: rescission of School-Based Administration/Transportation final rule, Outpatient Hospital Services final rule, and partial rescission of Case Management Interim final rule. Final rule.


    This rule finalizes our proposal to rescind the December 28, 2007 final rule entitled, "Elimination of Reimbursement under Medicaid for School Administration Expenditures and Costs Related to Transportation of School-Age Children Between Home and School;" the November 7, 2008 final rule entitled, "Clarification of Outpatient Hospital Facility (Including Outpatient Hospital Clinic) Services Definition;" and certain provisions of the December 4, 2007 interim final rule entitled, "Optional State Plan Case Management Services." These regulations have been the subject of Congressional moratoria and have not yet been implemented (or, with respect to the case management interim final rule, have only been partially implemented) by CMS. In light of concerns raised about the adverse effects that could result from these regulations, in particular, the potential restrictions on services available to beneficiaries and the lack of clear evidence demonstrating that the approaches taken in the regulations are warranted, CMS is rescinding the two final rules in full, and partially rescinding the interim final rule. Rescinding these provisions will permit further opportunity to determine the best approach to further the objectives of the Medicaid program in providing necessary health benefits coverage to needy individuals.

  11. The Adaptation of Mobile Learning System Based on Business Rules


    In a mobile learning system, it is important to adapt to mobile devices. Most mobile learning systems are not readily adaptable to mobile devices. In order to provide adaptive mobile services, an approach for adaptation is proposed in this paper. Firstly, the context of mobile devices and its influence on the mobile learning system are analyzed, and business rules based on this analysis are presented. Then, using the approach, the mobile learning system is constructed. The example implies that this approach can adapt the mobile service to mobile devices flexibly.

  12. Zeta diversity as a concept and metric that unifies incidence-based biodiversity patterns.

    Hui, Cang; McGeoch, Melodie A


    Patterns in species incidence and compositional turnover are central to understanding what drives biodiversity. Here we propose zeta (ζ) diversity, the number of species shared by multiple assemblages, as a concept and metric that unifies incidence-based diversity measures, patterns, and relationships. Unlike other measures of species compositional turnover, zeta diversity partitioning quantifies the complete set of diversity components for multiple assemblages, comprehensively representing the spatial structure of multispecies distributions. To illustrate the application and ecological value of zeta diversity, we show how it scales with sample number, grain, and distance. Zeta diversity reconciles several different biodiversity patterns, including the species accumulation curve, the species-area relationship, multispecies occupancy patterns, and scaling of species endemism. Exponential and power-law forms of zeta diversity are associated with stochastic versus niche assembly processes. Zeta diversity may provide new insights on biodiversity patterns, the processes driving them, and their response to environmental change.
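    Zeta diversity of order i is simply the average number of species shared by every combination of i assemblages; a direct computation (with an illustrative three-site example) is:

```python
from itertools import combinations

def zeta_diversity(assemblages, order):
    """Average number of species shared by every combination of `order` assemblages,
    where each assemblage is a set of species identifiers."""
    combos = list(combinations(assemblages, order))
    if not combos:
        return 0.0
    return sum(len(set.intersection(*c)) for c in combos) / len(combos)

# zeta_1 is mean richness, zeta_2 the mean pairwise shared species, and so on
sites = [{"a", "b", "c"}, {"b", "c", "d"}, {"c", "d", "e"}]
print([zeta_diversity(sites, i) for i in (1, 2, 3)])   # -> [3.0, ~1.67, 1.0]
```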

  13. Quantum Algorithm for K-Nearest Neighbors Classification Based on the Metric of Hamming Distance

    Ruan, Yue; Xue, Xiling; Liu, Heng; Tan, Jianing; Li, Xi


    The K-nearest neighbors (KNN) algorithm is a common algorithm used for classification, and also a sub-routine in various complicated machine learning tasks. In this paper, we present a quantum algorithm (QKNN) for implementing this algorithm based on the metric of Hamming distance. We put forward a quantum circuit for computing the Hamming distance between a testing sample and each feature vector in the training set. Taking advantage of this method, we realize a good analog of the classical KNN algorithm by setting a distance threshold value t to select the k nearest neighbors. As a result, QKNN achieves O(n^3) performance, which depends only on the dimension of the feature vectors, and high classification accuracy, outperforming Lloyd's algorithm (Lloyd et al. 2013) and Wiebe's algorithm (Wiebe et al. 2014).
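    The classical analogue of the threshold-based selection step described above (Hamming distance plus majority vote) is straightforward; the quantum circuit itself is not reproduced here.

```python
import numpy as np
from collections import Counter

def knn_hamming(train_bits, train_labels, query_bits, t):
    """Classical analogue of the selection step: every training bit-vector whose
    Hamming distance to the query is at most t casts a vote for its label."""
    d = np.count_nonzero(train_bits != query_bits, axis=1)   # Hamming distances
    votes = [label for label, dist in zip(train_labels, d) if dist <= t]
    return Counter(votes).most_common(1)[0][0] if votes else None
```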

  14. Speckle-metric-optimization-based adaptive optics for laser beam projection and coherent beam combining.

    Vorontsov, Mikhail; Weyrauch, Thomas; Lachinova, Svetlana; Gatz, Micah; Carhart, Gary


    Maximization of a projected laser beam's power density at a remotely located extended object (speckle target) can be achieved by using an adaptive optics (AO) technique based on sensing and optimization of the target-return speckle field's statistical characteristics, referred to here as speckle metrics (SM). SM AO was demonstrated in a target-in-the-loop coherent beam combining experiment using a bistatic laser beam projection system composed of a coherent fiber-array transmitter and a power-in-the-bucket receiver. SM sensing utilized a 50 MHz rate dithering of the projected beam that provided a stair-mode approximation of the outgoing combined beam's wavefront tip and tilt with subaperture piston phases. Fiber-integrated phase shifters were used for both the dithering and SM optimization with stochastic parallel gradient descent control.
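
    Stochastic parallel gradient descent (SPGD), the optimization engine commonly used in such metric-optimization AO loops, admits a compact sketch: all channels are dithered simultaneously, the resulting change in the metric is measured, and the phases are nudged in the direction that improved it. The power-in-the-bucket metric, channel count, gains, and static aberration below are illustrative assumptions, not the parameters of the experiment.

```python
import numpy as np

def spgd_maximize(metric, n_channels, iters=2000, gain=10.0, dither=0.1, seed=0):
    """Stochastic parallel gradient descent on a vector of piston phases.

    `metric` maps a phase vector to the scalar quality metric being maximized,
    standing in for the speckle metric measured at the receiver.
    """
    rng = np.random.default_rng(seed)
    phases = np.zeros(n_channels)
    for _ in range(iters):
        delta = dither * rng.choice([-1.0, 1.0], size=n_channels)   # parallel dither
        dj = metric(phases + delta) - metric(phases - delta)        # measured metric change
        phases += gain * dj * delta                                 # SPGD ascent update
    return phases

# Hypothetical metric: normalized power in the bucket of a 7-element coherent
# combiner, maximal (equal to 1) when all piston phases align modulo 2*pi.
def power_in_bucket(phases):
    return np.abs(np.sum(np.exp(1j * phases))) ** 2 / len(phases) ** 2

static_aberration = np.linspace(0.0, 2.0, 7)            # unknown piston errors (rad)
correction = spgd_maximize(lambda p: power_in_bucket(p + static_aberration), 7)
print("metric after correction:", power_in_bucket(correction + static_aberration))
```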

  15. Image retrieval method based on metric learning for convolutional neural network

    Wang, Jieyuan; Qian, Ying; Ye, Qingqing; Wang, Biao


    At present, research on content-based image retrieval (CBIR) focuses on learning effective features for representing images and on similarity measures. Retrieval accuracy and efficiency are crucial to a CBIR system. With the rise of deep learning, convolutional networks have been applied to image retrieval and have achieved remarkable results, but the visual features extracted by a convolutional neural network are high-dimensional, which makes retrieval slow and ineffective. This paper applies metric learning to the visual features extracted from the convolutional neural network, decreasing feature redundancy and improving retrieval performance. The work in this paper is also a necessary step toward applying feature hashing in the approximate-nearest-neighbor (ANN) retrieval method.

  16. Detect-and-forward in two-hop relay channels: a metrics-based analysis

    Benjillali, Mustapha


    In this paper, we analyze the coded performance of a cooperative system with multiple parallel relays using "Detect-and-Forward" (DetF) strategy where each relay demodulates the overheard signal and forwards the detected binary words. The proposed method is based on the probabilistic characterization of the reliability metrics given under the form of L-values. First, we derive analytical expressions of the probability density functions (PDFs) of the L-values in the elementary two-hop DetF relay channel with different source-relay channel state information assumptions. Then, we apply the obtained expressions to calculate the theoretically achievable rates and compare them with the practical throughput of a simulated turbo-coded transmission. Next, we derive tight approximations for the end-to-end coded bit error rate (BER) of a general cooperative scheme with multiple parallel relays. Simulation results demonstrate the accuracy of our derivations for different cooperation configurations and conditions. © 2010 IEEE.

  17. WellnessRules: A Web 3.0 Case Study in RuleML-Based Prolog-N3 Profile Interoperation

    Boley, Harold; Osmun, Taylor Michael; Craig, Benjamin Larry

    An interoperation study, WellnessRules, is described, where rules about wellness opportunities are created by participants in rule languages such as Prolog and N3, and translated within a wellness community using RuleML/XML. The wellness rules are centered around participants, as profiles, encoding knowledge about their activities conditional on the season, the time-of-day, the weather, etc. This distributed knowledge base extends FOAF profiles with a vocabulary and rules about wellness group networking. The communication between participants is organized through Rule Responder, permitting wellness-profile translation and distributed querying across engines. WellnessRules interoperates between rules and queries in the relational (Datalog) paradigm of the pure-Prolog subset of POSL and in the frame (F-logic) paradigm of N3. An evaluation of Rule Responder instantiated for WellnessRules revealed acceptable Web response times.

  18. Rule - based Fault Diagnosis Expert System for Wind Turbine

    Deng Xiao-Wen


    Full Text Available With the increasing installed capacity of wind power, intelligent fault diagnosis of wind turbines is of great significance to the safe and efficient operation of wind farms. Based on knowledge of wind turbine fault diagnosis, this paper builds the expert system's diagnostic knowledge base using confidence production rules and an expert system self-learning method. On the Visual Studio 2013 platform, the C# language is selected and ADO.NET technology is used to access the database in developing the fault diagnosis expert system for wind turbines. The purpose of this paper is to realize on-line diagnosis of wind turbine faults through human-computer interaction, and to improve the diagnostic capability of the system through continuous improvement of the knowledge base.
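
    As a rough illustration of how confidence production rules can drive a diagnosis, the sketch below forward-chains over a tiny hypothetical rule base and combines certainty factors MYCIN-style. The symptom and fault names, rule confidences, and the combination formula are assumptions for illustration; they are not taken from the paper's knowledge base.

```python
# Each rule: IF all symptoms are present THEN fault, with a rule confidence.
# All names and confidence values below are hypothetical.
RULES = [
    ({"gearbox_temp_high", "gearbox_vibration_high"}, "gearbox_bearing_wear", 0.8),
    ({"gearbox_oil_particles"}, "gearbox_bearing_wear", 0.6),
    ({"generator_temp_high", "output_power_low"}, "generator_winding_fault", 0.7),
]

def combine(cf_old, cf_new):
    """Combine two positive certainty factors for the same conclusion (MYCIN-style)."""
    return cf_old + cf_new * (1.0 - cf_old)

def diagnose(observed, rules=RULES):
    """Forward-chain over confidence production rules and rank candidate faults."""
    conclusions = {}
    for antecedents, fault, cf_rule in rules:
        if antecedents <= observed:                      # all antecedents observed
            conclusions[fault] = combine(conclusions.get(fault, 0.0), cf_rule)
    return sorted(conclusions.items(), key=lambda kv: -kv[1])

print(diagnose({"gearbox_temp_high", "gearbox_vibration_high", "gearbox_oil_particles"}))
# -> [('gearbox_bearing_wear', 0.92)]
```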

  19. A Rule-Based Industrial Boiler Selection System

    Tan, C. F.; Khalil, S. N.; Karjanto, J.; Tee, B. T.; Wahidin, L. S.; Chen, W.; Rauterberg, G. W. M.; Sivarao, S.; Lim, T. L.


    A boiler is a device used for generating steam for power generation, process use, or heating, and hot water for heating purposes. A steam boiler consists of the containing vessel and convection heating surfaces only, whereas a steam generator covers the whole unit, encompassing water wall tubes, superheaters, air heaters, and economizers. The selection of the boiler is very important for the industry to conduct its operations successfully. The selection criteria are based on a rule-based expert system and the multi-criteria weighted average method. The developed system consists of a Knowledge Acquisition Module, Boiler Selection Module, User Interface Module, and Help Module. The system is capable of selecting the suitable boiler based on the weighted criteria. The main benefit of using the system is to reduce the complexity of the decision making in selecting the most appropriate boiler for a palm oil process plant.
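
    The multi-criteria weighted average method mentioned above reduces to scoring each candidate boiler on every criterion and ranking by the weighted sum. A minimal sketch, with invented criteria, weights, and candidate scores standing in for the knowledge acquired by the system:

```python
# Candidate boilers scored (0-10) on each criterion; weights reflect plant priorities.
# All names and numbers are illustrative, not taken from the paper.
CRITERIA_WEIGHTS = {"capacity": 0.35, "efficiency": 0.25, "fuel_cost": 0.20, "maintenance": 0.20}

CANDIDATES = {
    "water_tube_boiler": {"capacity": 9, "efficiency": 8, "fuel_cost": 6, "maintenance": 5},
    "fire_tube_boiler":  {"capacity": 5, "efficiency": 6, "fuel_cost": 8, "maintenance": 8},
    "biomass_boiler":    {"capacity": 7, "efficiency": 7, "fuel_cost": 9, "maintenance": 6},
}

def weighted_score(scores, weights=CRITERIA_WEIGHTS):
    """Multi-criteria weighted-average score of one candidate."""
    return sum(weights[c] * scores[c] for c in weights)

# Rank candidates from best to worst weighted score.
ranked = sorted(CANDIDATES.items(), key=lambda kv: -weighted_score(kv[1]))
for name, scores in ranked:
    print(f"{name}: {weighted_score(scores):.2f}")
```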

  20. Comparison of Highly Resolved Model-Based Exposure Metrics for Traffic-Related Air Pollutants to Support Environmental Health Studies

    Shih Ying Chang


    Full Text Available Human exposure to air pollution in many studies is represented by ambient concentrations from space-time kriging of observed values. Space-time kriging techniques based on a limited number of ambient monitors may fail to capture the concentration from local sources. Further, because people spend more time indoors, using ambient concentration to represent exposure may cause error. To quantify the associated exposure error, we computed a series of six different hourly-based exposure metrics at 16,095 census blocks of three counties in North Carolina for CO, NOx, PM2.5, and elemental carbon (EC) during 2012. These metrics include ambient background concentration from space-time ordinary kriging (STOK), ambient on-road concentration from the Research LINE source dispersion model (R-LINE), a hybrid concentration combining STOK and R-LINE, and their associated indoor concentrations from an indoor infiltration mass balance model. Using the hybrid-based indoor concentration as the standard, the comparison showed that outdoor STOK metrics yielded large error at both the population level (67% to 93%) and the individual level (average bias between −10% and 95%). For pollutants with significant contribution from on-road emission (EC and NOx), the on-road-based indoor metric performs the best at the population level (error less than 52%). At the individual level, however, the STOK-based indoor concentration performs the best (average bias below 30%). For PM2.5, due to the relatively low contribution from on-road emission (7%), the STOK-based indoor metric performs the best at both the population level (error below 40%) and the individual level (error below 25%). The results of the study will help future epidemiology studies to select appropriate exposure metrics and reduce potential bias in exposure characterization.

  1. Repeated Rule Acquisition using Rule Ontology from Similar Web Sites Based on Genetic Algorithm

    Shanmugapriya. D


    Full Text Available The Semantic Web, a key component of Web 2.0 and Web 3.0, is an evolving development of the World Wide Web in which the semantics of information and services on the Web are being defined. Knowledge is an essential part of most Semantic Web applications, and ontology, a formal explicit description of concepts or classes in a domain of discussion, is the most important part of that knowledge. As a model for knowledge description and formalization, ontologies are widely used to represent user profiles in personalized web information gathering. An ontology can decrease the amount of information and reduce the work of utilizing that information in rule acquisition, because it is generalized and specifically rearranged for rule acquisition. Moreover, the ontology can be accumulated and reused throughout repeated rule acquisition. The main contribution of existing work is a complete and detailed rule composition process with examples and its evaluation. The enhancement proposed here combines the existing system with a method for selecting the exact parts of Web pages that contain rules, to increase accuracy, and uses genetic techniques to extract the rules optimally.

  2. Uncertain rule-based fuzzy systems introduction and new directions

    Mendel, Jerry M


    The second edition of this textbook provides a fully updated approach to fuzzy sets and systems that can model uncertainty — i.e., “type-2” fuzzy sets and systems. The author demonstrates how to overcome the limitations of classical fuzzy sets and systems, enabling a wide range of applications from time-series forecasting to knowledge mining to control. In this new edition, a bottom-up approach is presented that begins by introducing classical (type-1) fuzzy sets and systems, and then explains how they can be modified to handle uncertainty. The author covers fuzzy rule-based systems – from type-1 to interval type-2 to general type-2 – in one volume. For hands-on experience, the book provides information on accessing MatLab and Java software to complement the content. The book features a full suite of classroom material. Presents fully updated material on new breakthroughs in human-inspired rule-based techniques for handling real-world uncertainties; Allows those already familiar with type-1 fuzzy se...

  3. Rainfall events prediction using rule-based fuzzy inference system

    Asklany, Somia A.; Elhelow, Khaled; Youssef, I. K.; Abd El-wahab, M.


    We are interested in predicting rainfall events by applying rule-based reasoning and fuzzy logic. Five parameters (relative humidity, total cloud cover, wind direction, temperature, and surface pressure) are the input variables of our model, each with three membership functions. The data used are twenty years of METAR data for Cairo airport station (HECA) [1972-1992], 30° 3' 29″ N, 31° 13' 44″ E, and five years of METAR data for Mersa Matruh station (HEMM), 31° 20' 0″ N, 27° 13' 0″ E. Different models were constructed for each station depending on the available data sets. Among the overall 243 possibilities, we based our models on one hundred eighteen fuzzy IF-THEN rules and fuzzy reasoning. The output variable, which has four membership functions, takes values from zero to one hundred corresponding to the percentage chance of a rainfall event for each hourly record. We used two skill scores to verify our results, the Brier score and the Friction score. The results are in high agreement with the recorded data for the stations, with output values increasing toward the actual rain events. All implementations are done in MATLAB 7.9.

  4. Using Geometry-Based Metrics as Part of Fitness-for-Purpose Evaluations of 3D City Models

    Wong, K.; Ellul, C.


    Three-dimensional geospatial information is being increasingly used in a range of tasks beyond visualisation. 3D datasets, however, are often being produced without exact specifications and at mixed levels of geometric complexity. This leads to variations within the models' geometric and semantic complexity as well as the degree of deviation from the corresponding real world objects. Existing descriptors and measures of 3D data such as CityGML's level of detail are perhaps only partially sufficient in communicating data quality and fitness-for-purpose. This study investigates whether alternative, automated, geometry-based metrics describing the variation of complexity within 3D datasets could provide additional relevant information as part of a process of fitness-for-purpose evaluation. The metrics include: mean vertex/edge/face counts per building; vertex/face ratio; minimum 2D footprint area; and minimum feature length. Each metric was tested on six 3D city models from international locations. The results show that geometry-based metrics can provide additional information on 3D city models as part of fitness-for-purpose evaluations. The metrics, while they cannot be used in isolation, may provide a complement to enhance existing data descriptors if backed up with local knowledge, where possible.
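
    Most of the listed metrics are simple aggregates over per-building geometry, so they can be sketched in a few lines once vertex, edge, face, and footprint figures are available. The data layout below is an assumption for illustration; a real implementation would extract these counts from CityGML or mesh geometry.

```python
import numpy as np

def geometry_metrics(buildings):
    """Simple geometry-based complexity metrics for a 3D city model.

    `buildings` is a list of dicts with per-building vertex, edge, and face
    counts plus 2D footprint area; the field names are illustrative.
    """
    v = np.array([b["vertices"] for b in buildings], dtype=float)
    e = np.array([b["edges"] for b in buildings], dtype=float)
    f = np.array([b["faces"] for b in buildings], dtype=float)
    footprints = np.array([b["footprint_area_m2"] for b in buildings], dtype=float)
    return {
        "mean_vertices_per_building": v.mean(),
        "mean_edges_per_building": e.mean(),
        "mean_faces_per_building": f.mean(),
        "vertex_face_ratio": v.sum() / f.sum(),
        "min_footprint_area_m2": footprints.min(),
    }

model = [
    {"vertices": 8, "edges": 12, "faces": 6, "footprint_area_m2": 120.0},   # simple block
    {"vertices": 24, "edges": 36, "faces": 14, "footprint_area_m2": 85.5},  # roof detail
]
print(geometry_metrics(model))
```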

  5. Reinforcement-Based Fuzzy Neural Network Control with Automatic Rule Generation


    A reinforcement-based fuzzy neural network control with automatic rule generation (RBFNNC) is proposed. A set of optimized fuzzy control rules can be automatically generated through reinforcement learning based on the state variables of the controlled system. RBFNNC was applied to a cart-pole balancing system, and simulation results show significant improvements in rule generation.

  6. The Structure of a Knowledge Base for Cataloging Rules.

    Jeng, Ling Hwey


    Describes a study which investigated the general patterns and level of applicability of Anglo American Cataloging Rules (AACR2). It is suggested that designers and revisers of AACR2 should reduce the number of rules of very limited applicability levels and enhance the rule contents for application domains to expand the usefulness of AACR2. (SD)

  7. Associations between rule-based parenting practices and child screen viewing: A cross-sectional study

    Joanna M. Kesten


    Conclusions: Limit setting is associated with greater SV. Collaborative rule setting may be effective for managing boys' game-console use. More research is needed to understand rule-based parenting practices.

  8. Finite element-based injury metrics for pulmonary contusion via concurrent model optimization.

    Gayzik, F Scott; Hoth, J Jason; Stitzel, Joel D


    This study explores the relationship between impact severity and resulting pulmonary contusion (PC) for four impact conditions using a rat model of the injury. The force-deflection response from a Finite Element (FE) model of the lung was simultaneously matched to experimental data from distinct impacts via a genetic algorithm optimization. Sprague-Dawley rats underwent right-side thoracotomy prior to impact. Insults were applied directly to the lung via an instrumented piston. Five cohorts were tested: a sham group and four groups experiencing lung insults of varying degrees of severity. The values for impact velocity (V) and penetration depth (D) of the cohorts were Group 1, (V = 6.0 m · s(-1), D = 5.0 mm), Group 2, (V = 1.5 m · s(-1), D = 5.0 mm), Group 3, (V = 6 m · s(-1), D = 2.0 mm), and Group 4, (V = 1.5 m · s(-1), D = 2.0 mm). CT scans were acquired at 24 h, 48 h, and 1 week post-insult. Contusion volume was determined through segmentation. FE-based injury metrics for PC were determined at 24 h and 1 week post-impact, based on the observed volume of contusion and first principal strain. At 24 h post-impact, the volume of high radiopacity lung (HRL) was greatest for the severe impact group (mean HRL = 9.21 ± 4.89) and was significantly greater than all other cohorts but Group 3. The concurrent optimization matched simulated and observed impact energy within one standard deviation for Group 1 (energy = 3.88 ± 0.883 mJ, observed vs. 4.47 mJ, simulated) and Group 2 (energy = 1.46 ± 0.403 mJ, observed vs. 1.50 mJ, simulated) impacts. Statistically significant relationships between HRL and impact energy are presented. The FEA-based injury metrics at 24 h post-contusion are ε̇(max) · ε(max) exceeding 94.5 s(-1), ε(max) exceeding 0.284 and ε̇(max) exceeding 470 s(-1). Thresholds for injury to the lung still present at 1 week post-impact were also determined. They are ε̇(max) · ε(max) exceeding 149 s(-1), ε(max) exceeding 0.343 and ε̇(max) exceeding

  9. Vortices as degenerate metrics

    Baptista, J M


    We note that the Bogomolny equation for abelian vortices is precisely the condition for invariance of the Hermitian-Einstein equation under a degenerate conformal transformation. This leads to a natural interpretation of vortices as degenerate hermitian metrics that satisfy a certain curvature equation. Using this viewpoint, we rephrase standard results about vortices and make some new observations. We note the existence of a conceptually simple, non-linear rule for superposing vortex solutions, and we describe the natural behaviour of the L^2-metric on the moduli space upon certain restrictions.

  10. Metric learning for automatic sleep stage classification.

    Phan, Huy; Do, Quan; Do, The-Luan; Vu, Duc-Lung


    We introduce in this paper a metric learning approach for automatic sleep stage classification based on single-channel EEG data. We show that, by learning a global metric from training data instead of using the default Euclidean metric, the k-nearest neighbor classification rule outperforms state-of-the-art methods on the Sleep-EDF dataset with various classification settings. The overall accuracies for the Awake/Sleep and 4-class classification settings are 98.32% and 94.49%, respectively. Furthermore, the superior accuracy is achieved by performing classification on a low-dimensional feature space derived from the time and frequency domains and without the need for artifact removal as a preprocessing step.
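
    To make the idea of "learning a global metric and then applying the k-nearest-neighbor rule" concrete, the sketch below learns a simple Mahalanobis metric from the within-class covariance and uses it for kNN voting. This is a generic stand-in for the paper's metric-learning step, not the authors' algorithm, and the toy two-feature data are invented.

```python
import numpy as np

def fit_within_class_metric(X, y, reg=1e-3):
    """Learn a global Mahalanobis metric M from the within-class covariance.

    A simple stand-in for the metric-learning step described in the abstract,
    not the authors' exact algorithm.
    """
    d = X.shape[1]
    Sw = np.zeros((d, d))
    for c in np.unique(y):
        Xc = X[y == c] - X[y == c].mean(axis=0)
        Sw += Xc.T @ Xc
    Sw /= len(X)
    return np.linalg.inv(Sw + reg * np.eye(d))

def knn_predict(X_train, y_train, x, M, k=5):
    """k-nearest-neighbour vote under the learned Mahalanobis distance."""
    diff = X_train - x
    dists = np.einsum("ij,jk,ik->i", diff, M, diff)   # squared Mahalanobis distances
    nearest = np.argsort(dists)[:k]
    labels, counts = np.unique(y_train[nearest], return_counts=True)
    return labels[np.argmax(counts)]

# Toy 2-D features (e.g. two EEG band powers) for two sleep stages: the first
# feature is noisy and uninformative, the second separates the classes.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal([0.0, 0.0], [2.0, 0.1], (50, 2)),
               rng.normal([0.0, 0.5], [2.0, 0.1], (50, 2))])
y = np.array([0] * 50 + [1] * 50)
M = fit_within_class_metric(X, y)
print(knn_predict(X, y, np.array([0.0, 0.45]), M))   # expected: 1 (noisy feature downweighted)
```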

  11. Towards a Generic Trace for Rule Based Constraint Reasoning

    Junior, Armando Gonçalves Da Silva; Menezes, Luis-Carlos; Da Silva, Marcos-Aurélio Almeida; Robin, Jacques


    CHR is a very versatile programming language that allows programmers to declaratively specify constraint solvers. An important part of the development of such solvers is in their testing and debugging phases. Current CHR implementations support those phases by offering tracing facilities with limited information. In this report, we propose a new trace for CHR which contains enough information to analyze any aspect of CHR∨ execution at some useful abstract level, common to several implementations. This approach is based on the idea of a generic trace. Such a trace is formally defined as an extension of the $\omega_r^\lor$ semantics of CHR. We show that it can be derived from the SWI-Prolog CHR trace.

  12. SPARQL Query Re-writing Using Partonomy Based Transformation Rules

    Jain, Prateek; Yeh, Peter Z.; Verma, Kunal; Henson, Cory A.; Sheth, Amit P.

    Often the information present in a spatial knowledge base is represented at a different level of granularity and abstraction than the query constraints. When querying ontologies containing spatial information, the precise relationships between spatial entities have to be specified in the basic graph pattern of a SPARQL query, which can result in long and complex queries. We present a novel approach to help users intuitively write SPARQL queries to query spatial data, rather than relying on knowledge of the ontology structure. Our framework re-writes queries, using transformation rules to exploit part-whole relations between geographical entities and address the mismatches between query constraints and the knowledge base. Our experiments were performed on completely third-party datasets and queries. Evaluations were performed on the Geonames dataset using questions from the National Geographic Bee serialized into SPARQL, and on the British Administrative Geography Ontology using questions from a popular trivia website. These experiments demonstrate high precision in retrieval of results and ease in writing queries.

  13. Geometrical Interpretation of Shannon's Entropy Based on the Born Rule

    Jankovic, Marko V


    In this paper we analyze discrete probability distributions in which the probabilities of particular outcomes of some experiment (microstates) can be represented by ratios of natural numbers (in other words, probabilities are represented by digital numbers of finite representation length). We introduce several results based on the recently proposed JoyStick Probability Selector, which represents a geometrical interpretation of probability based on the Born rule. The terms generic space and generic dimension of a discrete distribution, as well as effective dimension, are introduced. It is shown how this simple geometric representation can lead to optimal code-length coding of a sequence of signals. Then, we give a new, geometrical interpretation of the Shannon entropy of the discrete distribution. We suggest that the Shannon entropy represents the logarithm of the effective dimension of the distribution. Proposed geometrical interpretation of the Shannon ...
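
    Reading the central claim literally, the relationship between the Shannon entropy and the proposed effective dimension can be written out as follows; the exponential form of the effective dimension is an inference from the stated claim rather than a formula quoted from the paper.

```latex
% Shannon entropy of a discrete distribution p = (p_1, ..., p_n), and the
% effective dimension it would correspond to if H(p) is read as the
% logarithm of that dimension.
\[
  H(p) = -\sum_{i=1}^{n} p_i \log p_i ,
  \qquad
  D_{\mathrm{eff}}(p) = e^{H(p)} .
\]
% Sanity check: a uniform distribution over n outcomes gives H = \log n,
% hence D_eff = n; a point mass gives H = 0 and D_eff = 1.
```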

  14. Computational visual distinctness metric

    Martínez-Baena, J.; Toet, A.; Fdez-Vidal, X.R.; Garrido, A.; Rodríguez-Sánchez, R.


    A new computational visual distinctness metric based on principles of the early human visual system is presented. The metric is applied to quantify (1) the visual distinctness of targets in complex natural scenes and (2) the perceptual differences between compressed and uncompressed images. The new

  15. An Association Rule Mining Algorithm Based on a Boolean Matrix

    Hanbing Liu


    Full Text Available Association rule mining is a very important research topic in the field of data mining. Discovering frequent itemsets is the key process in association rule mining. Traditional association rule algorithms adopt an iterative method of discovery, which requires very large calculations and a complicated transaction process. Because of this, a new association rule algorithm called ABBM is proposed in this paper. This new algorithm adopts a Boolean vector "relational calculus" method to discover frequent itemsets. Experimental results show that this algorithm can quickly discover frequent itemsets and effectively mine potential association rules.
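
    The core of a Boolean-matrix approach is that the support of an itemset is simply the fraction of rows where the AND of the corresponding column vectors is true. The sketch below illustrates that idea on a toy transaction matrix; it is a simplified reading of the Boolean-vector idea restricted to 1- and 2-itemsets, not the published ABBM algorithm, and the item names and thresholds are invented.

```python
import numpy as np
from itertools import combinations

# Transaction-by-item Boolean matrix (rows = transactions, columns = items).
items = ["bread", "milk", "butter", "beer"]
B = np.array([[1, 1, 1, 0],
              [1, 1, 0, 0],
              [0, 1, 1, 0],
              [1, 0, 0, 1],
              [1, 1, 1, 0]], dtype=bool)

def support(cols):
    """Support of an itemset: fraction of rows where all its columns are 1,
    obtained by AND-ing the Boolean column vectors."""
    return np.logical_and.reduce(B[:, list(cols)], axis=1).mean()

min_support, min_confidence = 0.4, 0.7
frequent = [c for r in (1, 2) for c in combinations(range(len(items)), r)
            if support(c) >= min_support]

# Rules X -> Y from frequent pairs: confidence = support(X, Y) / support(X).
for a, b in (c for c in frequent if len(c) == 2):
    for x, y in ((a, b), (b, a)):
        conf = support((a, b)) / support((x,))
        if conf >= min_confidence:
            print(f"{items[x]} -> {items[y]}  (support={support((a, b)):.2f}, confidence={conf:.2f})")
```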

  16. A fuzzy rule based framework for noise annoyance modeling.

    Botteldooren, Dick; Verkeyn, Andy; Lercher, Peter


    Predicting the effect of noise on individual people and small groups is an extremely difficult task due to the influence of a multitude of factors that vary from person to person and from context to context. Moreover, noise annoyance is inherently a vague concept. That is why, in this paper, it is argued that noise annoyance models should identify a fuzzy set of possible effects rather than seek a very accurate crisp prediction. Fuzzy rule based models seem ideal candidates for this task. This paper provides the theoretical background for building these models. Existing empirical knowledge is used to extract a few typical rules that allow making the model more specific for small groups of individuals. The resulting model is tested on two large-scale social surveys augmented with exposure simulations. The testing demonstrates how this new way of thinking about noise effect modeling can be used in practice both in management support as a "noise annoyance adviser" and in social science for testing hypotheses such as the effect of noise sensitivity or the degree of urbanization.

  17. Improved nonlinear fault detection strategy based on the Hellinger distance metric: Plug flow reactor monitoring

    Harrou, Fouzi


    Fault detection has a vital role in the process industry to enhance productivity, efficiency, and safety, and to avoid expensive maintenance. This paper proposes an innovative multivariate fault detection method that can be used for monitoring nonlinear processes. The proposed method merges the advantages of nonlinear projection to latent structures (NLPLS) modeling and of the Hellinger distance (HD) metric to identify abnormal changes in highly correlated multivariate data. Specifically, the HD is used to quantify the dissimilarity between the current NLPLS-based residual distribution and a reference probability distribution obtained using fault-free data. Furthermore, to further enhance the robustness of the method to measurement noise and reduce false alarms due to modeling errors, wavelet-based multiscale filtering of the residuals is applied before the HD-based monitoring scheme. The performance of the developed NLPLS-HD fault detection technique is illustrated using simulated plug flow reactor data. The results show that the proposed method provides favorable performance for detection of faults compared to the conventional NLPLS method.
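
    The Hellinger distance at the heart of the monitoring scheme is straightforward to compute between two histograms of residuals. A minimal sketch, assuming discrete (binned) residual distributions; the NLPLS modeling and wavelet filtering steps described above are omitted, and the residual data are simulated for illustration.

```python
import numpy as np

def hellinger(p, q):
    """Hellinger distance between two discrete probability distributions.

    H(p, q) = (1/sqrt(2)) * || sqrt(p) - sqrt(q) ||_2, bounded in [0, 1].
    """
    p = np.asarray(p, dtype=float) / np.sum(p)
    q = np.asarray(q, dtype=float) / np.sum(q)
    return np.linalg.norm(np.sqrt(p) - np.sqrt(q)) / np.sqrt(2.0)

# Hypothetical monitoring use: compare the histogram of current residuals
# against a reference histogram built from fault-free data.
bins = np.linspace(-3, 3, 21)
rng = np.random.default_rng(0)
reference, _ = np.histogram(rng.normal(0.0, 1.0, 5000), bins=bins)
healthy, _ = np.histogram(rng.normal(0.0, 1.0, 1000), bins=bins)
faulty, _ = np.histogram(rng.normal(0.8, 1.3, 1000), bins=bins)   # shifted residuals

print("healthy vs reference:", round(hellinger(healthy, reference), 3))  # small
print("faulty  vs reference:", round(hellinger(faulty, reference), 3))   # noticeably larger
```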

  18. Design-based metrology: beyond CD/EPE metrics to evaluate printability performance

    Halder, Sandip; Mailfert, Julien; Leray, Philippe; Rio, David; Peng, Yi-Hsing; Laenens, Bart


    Process-window (PW) evaluation is critical to assess lithography process quality and limitations. The usual CD-based PW gives only a partial answer. Simulations such as Tachyon LMC (Lithography Manufacturability Check) can efficiently overcome this limitation by analyzing the entire predicted resist contours, but so far experimental measurements did not allow such flexibility. This paper shows an innovative experimental flow that allows the user to directly validate LMC results across the PW for a select group of reference patterns, thereby overcoming the limitations of traditional CD-based PW analysis. To evaluate the process window on wafer more accurately, we take advantage of design-based metrology and extract experimental contours from CD-SEM measurements. We then implement an area metric to quantify the area coverage of the experimental contours with respect to the intended ones, using a defined "sectorization" of the logic structures. Rather than considering the area coverage of an entire feature, this sectorization differentiates specific areas of the structures being analyzed, such as corners, line-ends, and short and long lines, so that a complete evaluation of the information contained in each CD-SEM picture is performed without discarding any information. An assessment of resist model, OPC quality, and process quality at sub-nm-level accuracy is rendered possible.

  19. Wavelet-based denoising of the Fourier metric in real-time wavefront correction for single molecule localization microscopy

    Tehrani, Kayvan Forouhesh; Mortensen, Luke J.; Kner, Peter


    Wavefront sensorless schemes for correction of aberrations induced by biological specimens require a time invariant property of an image as a measure of fitness. Image intensity cannot be used as a metric for Single Molecule Localization (SML) microscopy because the intensity of blinking fluorophores follows exponential statistics. Therefore a robust intensity-independent metric is required. We previously reported a Fourier Metric (FM) that is relatively intensity independent. The Fourier metric has been successfully tested on two machine learning algorithms, a Genetic Algorithm and Particle Swarm Optimization, for wavefront correction about 50 μm deep inside the Central Nervous System (CNS) of Drosophila. However, since the spatial frequencies that need to be optimized fall into regions of the Optical Transfer Function (OTF) that are more susceptible to noise, adding a level of denoising can improve performance. Here we present wavelet-based approaches to lower the noise level and produce a more consistent metric. We compare performance of different wavelets such as Daubechies, Bi-Orthogonal, and reverse Bi-orthogonal of different degrees and orders for pre-processing of images.

  20. Metrical Quantization

    Klauder, J R


    Canonical quantization may be approached from several different starting points. The usual approaches involve promotion of c-numbers to q-numbers, or path integral constructs, each of which generally succeeds only in Cartesian coordinates. All quantization schemes that lead to Hilbert space vectors and Weyl operators---even those that eschew Cartesian coordinates---implicitly contain a metric on a flat phase space. This feature is demonstrated by studying the classical and quantum ``aggregations'', namely, the set of all facts and properties resident in all classical and quantum theories, respectively. Metrical quantization is an approach that elevates the flat phase space metric inherent in any canonical quantization to the level of a postulate. Far from being an unwanted structure, the flat phase space metric carries essential physical information. It is shown how the metric, when employed within a continuous-time regularization scheme, gives rise to an unambiguous quantization procedure that automatically ...

  1. The Death of Socrates: Managerialism, Metrics and Bureaucratisation in Universities

    Orr, Yancey; Orr, Raymond


    Neoliberalism exalts the ability of unregulated markets to optimise human relations. Yet, as David Graeber has recently illustrated, it is paradoxically built on rigorous systems of rules, metrics and managers. The potential transition to a market-based tuition and research-funding model for higher education in Australia has, not surprisingly,…

  2. A Novel Risk Metric for Staff Turnover in a Software Project Based on Information Entropy

    Rong Jiang


    Full Text Available Staff turnover in a software project is a significant risk that can result in project failure. Despite the urgency of this issue, however, relevant studies are limited and are mostly qualitative; quantitative studies are extremely rare. This paper proposes a novel risk metric for staff turnover in a software project based on information entropy theory. To address the gaps in existing studies, five aspects are considered, namely, staff turnover probability, turnover type, staff level, software project complexity, and staff order degree. This paper develops a method of calculating staff turnover risk probability in a software project based on the field, equity, and goal congruence theories. The proposed method avoids subjective estimation of the probability, making it more objective and comprehensive than existing research. This paper not only presents a detailed, operable model, but also theoretically demonstrates the soundness and rationality of the research. The case study performed in this study indicates that the approach is reasonable, effective, and feasible.

  3. Optimal Rate Control in H.264 Video Coding Based on Video Quality Metric

    R. Karthikeyan


    Full Text Available The aim of this research is to find a method for providing better visual quality across the complete video sequence in the H.264 video coding standard. The H.264 video coding standard, with its significantly improved coding efficiency, finds important applications in digital video streaming, storage, and broadcast. To achieve comparable quality across the complete video sequence under constraints on bandwidth availability and buffer fullness, it is important to allocate more bits to frames with high complexity or a scene change and fewer bits to other, less complex frames. A frame-layer bit allocation scheme is proposed based on a perceptual quality metric as the indicator of frame complexity. The proposed model computes the Quality Index ratio (QIr) of the predicted quality index of the current frame to the average quality index of all previous frames in the group of pictures, which is used for bit allocation to the current frame along with bits computed based on buffer availability. The standard deviation of the perceptual quality indicator MOS computed for the proposed model is significantly lower, which means the quality is consistent throughout the full video sequence. The experimental results show that the proposed model effectively handles scene changes and scenes with high motion for better visual quality.

  4. Prediction of Breast Cancer using Rule Based Classification

    Nagendra Kumar SINGH


    Full Text Available The current work proposes a model for prediction of breast cancer using the classification approach in data mining. The proposed model is based on various parameters, including symptoms of breast cancer, gene mutation, and other risk factors causing breast cancer. Mutations have been predicted in breast cancer causing genes with the help of alignment of normal and abnormal gene sequences; the class label of breast cancer (risky or safe) is then predicted on the basis of IF-THEN rules, using a Genetic Algorithm (GA). In this work, the GA uses variable gene encoding mechanisms for chromosome encoding and uniform population generation, and selects two chromosomes by the Roulette-Wheel selection technique for two-point crossover, which gives better solutions. The performance of the model is evaluated using the F-score measure, Matthews Correlation Coefficient (MCC), and Receiver Operating Characteristic (ROC) by plotting points (Sensitivity vs. 1 − Specificity).

  5. Fuzzy-Rule-Based Object Identification Methodology for NAVI System

    Rosalyn R. Porle


    Full Text Available We present an object identification methodology applied in a navigation assistance for the visually impaired (NAVI) system. The NAVI has a single board processing system (SBPS), a headgear-mounted digital video camera, and a pair of stereo earphones. The image captured by the camera is processed by the SBPS to generate a specially structured stereo sound suitable for vision-impaired people to understand the presence of objects or obstacles in front of them. The image processing stage is designed to identify the objects in the captured image. Edge detection and edge-linking procedures are applied in the image processing. A concept of object preference is included in the image processing scheme, and this concept is realized using a fuzzy rule base. Blind users are trained with the stereo sound produced by NAVI to achieve collision-free autonomous navigation.

  6. Distributed intrusion detection system based on fuzzy rules

    Qiao, Peili; Su, Jie; Liu, Yahui


    Computational intelligence is the theory and method of solving problems by simulating human intelligence with computers, and it is a development of artificial intelligence. The fuzzy technique is one of the most important theories of computational intelligence. Genetic fuzzy and neuro-fuzzy techniques combine the fuzzy technique with other novel techniques. This paper presents a distributed intrusion detection system based on fuzzy rules that has the characteristics of distributed parallel processing, self-organization, self-learning, and self-adaptation through the use of neuro-fuzzy and genetic fuzzy techniques. In particular, fuzzy decision techniques can be used to reduce false detections. The results of the simulation experiment show that this intrusion detection system model has the characteristics of distribution, error tolerance, dynamic learning, and adaptation. It addresses the problem of low detection rates for new and hidden attacks, and the false detection rate is low. This approach is efficient for distributed intrusion detection.

  7. Grapheme-color synaesthesia benefits rule-based Category learning.

    Watson, Marcus R; Blair, Mark R; Kozik, Pavel; Akins, Kathleen A; Enns, James T


    Researchers have long suspected that grapheme-color synaesthesia is useful, but research on its utility has so far focused primarily on episodic memory and perceptual discrimination. Here we ask whether it can be harnessed during rule-based Category learning. Participants learned through trial and error to classify grapheme pairs that were organized into categories on the basis of their associated synaesthetic colors. The performance of synaesthetes was similar to non-synaesthetes viewing graphemes that were physically colored in the same way. Specifically, synaesthetes learned to categorize stimuli effectively, they were able to transfer this learning to novel stimuli, and they falsely recognized grapheme-pair foils, all like non-synaesthetes viewing colored graphemes. These findings demonstrate that synaesthesia can be exploited when learning the kind of material taught in many classroom settings.


    Ms. Sanober Shaikh


    Full Text Available In this paper a new mining algorithm based on frequent itemsets is defined. The Apriori algorithm scans the database every time it searches for frequent itemsets, which is very time consuming, and it generates candidate itemsets at each step, so for large databases it takes a lot of space to store the candidate itemsets. The proposed algorithm scans the database only once at the start and then builds an undirected itemset graph. From this graph, it finds the frequent itemsets by considering the minimum support, and it generates the association rules by considering the minimum confidence. If the database or the minimum support changes, the new algorithm finds the new frequent itemsets by scanning the undirected itemset graph. As a result, its execution efficiency is improved distinctly compared to the traditional algorithm.
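
    The idea of scanning the database once and then answering support and confidence queries from an undirected itemset graph can be sketched as follows: nodes carry item counts and edges carry pair co-occurrence counts, so frequent pairs and their rules are read directly off the graph without rescanning. This simplified sketch stops at 2-itemsets and uses invented transactions; it illustrates the idea rather than the paper's full algorithm.

```python
from collections import Counter
from itertools import combinations

def build_itemset_graph(transactions):
    """Single scan of the database: count items (nodes) and co-occurring pairs (edges)."""
    nodes, edges = Counter(), Counter()
    for t in transactions:
        items = sorted(set(t))
        nodes.update(items)
        edges.update(combinations(items, 2))
    return nodes, edges

def rules_from_graph(nodes, edges, n_transactions, min_sup, min_conf):
    """Frequent pairs and association rules read directly off the graph."""
    rules = []
    for (a, b), count in edges.items():
        sup = count / n_transactions
        if sup < min_sup:
            continue
        for x, y in ((a, b), (b, a)):
            conf = count / nodes[x]
            if conf >= min_conf:
                rules.append((x, y, sup, conf))
    return rules

transactions = [["bread", "milk"], ["bread", "butter"], ["bread", "milk", "butter"],
                ["milk", "butter"], ["bread", "milk"]]
nodes, edges = build_itemset_graph(transactions)
for x, y, sup, conf in rules_from_graph(nodes, edges, len(transactions), 0.4, 0.7):
    print(f"{x} -> {y}  support={sup:.2f} confidence={conf:.2f}")
```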

  9. A Mixed Approach to Similarity Metric Selection in Affinity Propagation-Based WiFi Fingerprinting Indoor Positioning.

    Caso, Giuseppe; de Nardis, Luca; di Benedetto, Maria-Gabriella


    The weighted k-nearest neighbors (WkNN) algorithm is by far the most popular choice in the design of fingerprinting indoor positioning systems based on WiFi received signal strength (RSS). WkNN estimates the position of a target device by selecting k reference points (RPs) based on the similarity of their fingerprints with the measured RSS values. The position of the target device is then obtained as a weighted sum of the positions of the k RPs. Two-step WkNN positioning algorithms were recently proposed, in which RPs are divided into clusters using the affinity propagation clustering algorithm, and one representative for each cluster is selected. Only cluster representatives are then considered during the position estimation, leading to a significant computational complexity reduction compared to traditional, flat WkNN. Flat and two-step WkNN share the issue of properly selecting the similarity metric so as to guarantee good positioning accuracy: in two-step WkNN, in particular, the metric impacts three different steps in the position estimation, that is cluster formation, cluster selection and RP selection and weighting. So far, however, the only similarity metric considered in the literature was the one proposed in the original formulation of the affinity propagation algorithm. This paper fills this gap by comparing different metrics and, based on this comparison, proposes a novel mixed approach in which different metrics are adopted in the different steps of the position estimation procedure. The analysis is supported by an extensive experimental campaign carried out in a multi-floor 3D indoor positioning testbed. The impact of similarity metrics and their combinations on the structure and size of the resulting clusters, 3D positioning accuracy and computational complexity are investigated. Results show that the adoption of metrics different from the one proposed in the original affinity propagation algorithm and, in particular, the combination of different
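
    The flat WkNN estimator that both variants build on is compact: rank reference points by the similarity of their stored fingerprints to the measured RSS vector, then average their coordinates with weights that favor the closest fingerprints. The sketch below uses Euclidean RSS distance and inverse-distance weights as one possible similarity metric; the radio map, coordinates, and parameters are invented for illustration.

```python
import numpy as np

def wknn_position(rss_measured, fingerprints, positions, k=3, eps=1e-6):
    """Weighted k-nearest-neighbours position estimate from WiFi RSS fingerprints.

    `fingerprints` is an (n_RP, n_AP) matrix of reference RSS vectors and
    `positions` the (n_RP, 2) coordinates of the reference points. Similarity
    here is Euclidean distance in RSS space; other metrics can be swapped in.
    """
    d = np.linalg.norm(fingerprints - rss_measured, axis=1)
    nearest = np.argsort(d)[:k]
    w = 1.0 / (d[nearest] + eps)                 # closer fingerprints weigh more
    return (w[:, None] * positions[nearest]).sum(axis=0) / w.sum()

# Toy radio map: 4 reference points, 3 access points (RSS in dBm).
fingerprints = np.array([[-40., -70., -80.],
                         [-45., -60., -75.],
                         [-70., -45., -60.],
                         [-80., -50., -40.]])
positions = np.array([[0., 0.], [0., 5.], [5., 5.], [5., 0.]])
print(wknn_position(np.array([-44., -62., -76.]), fingerprints, positions, k=2))
```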

  10. Rule-based spatial modeling with diffusing, geometrically constrained molecules

    Lohel Maiko


    Full Text Available Abstract Background We suggest a new type of modeling approach for the coarse grained, particle-based spatial simulation of combinatorially complex chemical reaction systems. In our approach molecules possess a location in the reactor as well as an orientation and geometry, while the reactions are carried out according to a list of implicitly specified reaction rules. Because the reaction rules can contain patterns for molecules, a combinatorially complex or even infinitely sized reaction network can be defined. For our implementation (based on LAMMPS), we have chosen an already existing formalism (BioNetGen) for the implicit specification of the reaction network. This compatibility allows existing models to be imported easily, i.e., only additional geometry data files have to be provided. Results Our simulations show that the obtained dynamics can be fundamentally different from those simulations that use classical reaction-diffusion approaches like Partial Differential Equations or Gillespie-type spatial stochastic simulation. We show, for example, that the combination of combinatorial complexity and geometric effects leads to the emergence of complex self-assemblies and transportation phenomena happening faster than diffusion (using a model of molecular walkers on microtubules). When the mentioned classical simulation approaches are applied, these aspects of modeled systems cannot be observed without very special treatment. Furthermore, we show that the geometric information can even change the organizational structure of the reaction system. That is, a set of chemical species that can in principle form a stationary state in a Differential Equation formalism is potentially unstable when geometry is considered, and vice versa. Conclusions We conclude that our approach provides a new general framework filling a gap in between approaches with no or rigid spatial representation like Partial Differential Equations and specialized coarse-grained spatial

  11. Rule-based model of vein graft remodeling.

    Minki Hwang

    Full Text Available When vein segments are implanted into the arterial system for use in arterial bypass grafting, adaptation to the higher pressure and flow of the arterial system is accomplished through wall thickening and expansion. These early remodeling events have been found to be closely coupled to the local hemodynamic forces, such as shear stress and wall tension, and are believed to be the foundation for later vein graft failure. To further our mechanistic understanding of the cellular and extracellular interactions that lead to global changes in tissue architecture, a rule-based modeling method is developed through the application of basic rules of behavior for these molecular and cellular activities. In the current method, smooth muscle cells (SMC), extracellular matrix (ECM), and monocytes are selected as the three components that occupy the elements of a grid system that comprises the developing vein graft intima. The probabilities of the cellular behaviors are developed based on data extracted from in vivo experiments. At each time step, the various probabilities are computed and applied to the SMC and ECM elements to determine their next physical state and behavior. One- and two-dimensional models are developed to test and validate the computational approach. The importance of monocyte infiltration, and the associated effect in augmenting extracellular matrix deposition, was evaluated and found to be an important component in model development. Final model validation is performed using an independent set of experiments, where model predictions of intimal growth are evaluated against experimental data obtained from the complex geometry and shear stress patterns offered by a mid-graft focal stenosis; the simulation results show good agreement with the experimental data.

  12. Research on Algorithm for Mining Negative Association Rules Based on Frequent Pattern Tree


    Typical association rules consider only items enumerated in transactions. Such rules are referred to as positive association rules. Negative association rules also consider the same items, but in addition consider negated items (i.e. absent from transactions). Negative association rules are useful in market-basket analysis to identify products that conflict with each other or products that complement each other. They are also very convenient for associative classifiers, classifiers that build their classification model based on association rules. Indeed, mining for such rules necessitates the examination of an exponentially large search space. Despite their usefulness, very few algorithms to mine them have been proposed to date. In this paper, an algorithm based on FP-tree is presented to discover negative association rules.

  13. Design based Object-Oriented Metrics to Measure Coupling and Cohesion



    The object oriented design and object oriented development environment are currently popular in software organizations due to the object oriented programming languages. As object oriented technology enters software organizations, it has created new challenges for companies which used only product metrics as a tool for monitoring, controlling and maintaining the software product. This paper presents the new object oriented metrics namely for coupling of class by counting the number...

  14. Resilience Metrics for the Electric Power System: A Performance-Based Approach.

    Vugrin, Eric D. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Castillo, Andrea R [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Silva-Monroy, Cesar Augusto [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)


    Grid resilience is a concept related to a power system's ability to continue operating and delivering power even in the event that low probability, high-consequence disruptions such as hurricanes, earthquakes, and cyber-attacks occur. Grid resilience objectives focus on managing and, ideally, minimizing potential consequences that occur as a result of these disruptions. Currently, no formal grid resilience definitions, metrics, or analysis methods have been universally accepted. This document describes an effort to develop and describe grid resilience metrics and analysis methods. The metrics and methods described herein extend upon the Resilience Analysis Process (RAP) developed by Watson et al. for the 2015 Quadrennial Energy Review. The extension allows for both outputs from system models and for historical data to serve as the basis for creating grid resilience metrics and informing grid resilience planning and response decision-making. This document describes the grid resilience metrics and analysis methods. Demonstration of the metrics and methods is shown through a set of illustrative use cases.

  15. Fuzzy rule-based seizure prediction based on correlation dimension changes in intracranial EEG.

    Rabbi, Ahmed F; Aarabi, Ardalan; Fazel-Rezai, Reza


    In this paper, we present a method for epileptic seizure prediction from intracranial EEG recordings. We applied correlation dimension, a nonlinear dynamics based univariate characteristic measure for extracting features from EEG segments. Finally, we designed a fuzzy rule-based system for seizure prediction. The system is primarily designed based on expert's knowledge and reasoning. A spatial-temporal filtering method was used in accordance with the fuzzy rule-based inference system for issuing forecasting alarms. The system was evaluated on EEG data from 10 patients having 15 seizures.

  16. Evaluation of cassette-based digital radiography detectors using standardized image quality metrics: AAPM TG-150 Draft Image Detector Tests.

    Li, Guang; Greene, Travis C; Nishino, Thomas K; Willis, Charles E


    The purpose of this study was to evaluate several of the standardized image quality metrics proposed by the American Association of Physicists in Medicine (AAPM) Task Group 150. The task group suggested region-of-interest (ROI)-based techniques to measure nonuniformity, minimum signal-to-noise ratio (SNR), number of anomalous pixels, and modulation transfer function (MTF). This study evaluated the effects of ROI size and layout on the image metrics by using four different ROI sets, assessed result uncertainty by repeating measurements, and compared results with two commercially available quality control tools, namely the Carestream DIRECTVIEW Total Quality Tool (TQT) and the GE Healthcare Quality Assurance Process (QAP). Seven Carestream DRX-1C (CsI) detectors on mobile DR systems and four GE FlashPad detectors in radiographic rooms were tested. Images were analyzed using MATLAB software that had been previously validated and reported. Our values for signal and SNR nonuniformity and MTF agree with values published by other investigators. Our results show that ROI size affects nonuniformity and minimum SNR measurements, but not detection of anomalous pixels. Exposure geometry affects all tested image metrics except for the MTF. TG-150 metrics in general agree with the TQT, but agree with the QAP only for local and global signal nonuniformity. The difference in SNR nonuniformity and MTF values between the TG-150 and QAP may be explained by differences in the calculation of noise and acquisition beam quality, respectively. TG-150's SNR nonuniformity metrics are also more sensitive to detector nonuniformity compared to the QAP. Our results suggest that fixed ROI size should be used for consistency because nonuniformity metrics depend on ROI size. Ideally, detector tests should be performed at the exact calibration position. If not feasible, a baseline should be established from the mean of several repeated measurements. Our study indicates that the TG-150 tests can be

  17. SVD-based quality metric for image and video using machine learning.

    Narwaria, Manish; Lin, Weisi


    We study the use of machine learning for visual quality evaluation with comprehensive singular value decomposition (SVD)-based visual features. In this paper, the two-stage process and the relevant work in the existing visual quality metrics are first introduced followed by an in-depth analysis of SVD for visual quality assessment. Singular values and vectors form the selected features for visual quality assessment. Machine learning is then used for the feature pooling process and demonstrated to be effective. This is to address the limitations of the existing pooling techniques, like simple summation, averaging, Minkowski summation, etc., which tend to be ad hoc. We advocate machine learning for feature pooling because it is more systematic and data driven. The experiments show that the proposed method outperforms the eight existing relevant schemes. Extensive analysis and cross validation are performed with ten publicly available databases (eight for images with a total of 4042 test images and two for video with a total of 228 videos). We use all publicly accessible software and databases in this study, as well as making our own software public, to facilitate comparison in future research.
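
    A simplified version of the SVD feature-extraction stage (before the machine-learning pooling step) can be sketched by comparing the singular-value spectra of corresponding blocks of the reference and distorted images. The block size, the use of singular values only, and the synthetic images below are illustrative assumptions, not the paper's exact feature set.

```python
import numpy as np

def svd_features(reference, distorted, block=8):
    """Block-wise singular-value features comparing a distorted image to its reference.

    For each block the feature is the L2 distance between the singular-value
    spectra of the two blocks; singular vectors could be added similarly. The
    pooling of these features into a single quality score is what the paper
    delegates to machine learning.
    """
    h, w = reference.shape
    feats = []
    for i in range(0, h - block + 1, block):
        for j in range(0, w - block + 1, block):
            s_ref = np.linalg.svd(reference[i:i + block, j:j + block], compute_uv=False)
            s_dis = np.linalg.svd(distorted[i:i + block, j:j + block], compute_uv=False)
            feats.append(np.linalg.norm(s_ref - s_dis))
    return np.array(feats)

rng = np.random.default_rng(0)
img = rng.random((64, 64))
noisy = np.clip(img + rng.normal(0, 0.1, img.shape), 0, 1)
f = svd_features(img, noisy)
print(f.shape, f.mean())    # 64 block features; the mean grows with distortion strength
```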

  18. Optimal stellar photometry for multi-conjugate adaptive optics systems using science-based metrics

    Turri, P; Stetson, P B; Fiorentino, G; Andersen, D R; Bono, G; Massari, D; Veran, J -P


    We present a detailed discussion of how to obtain precise stellar photometry in crowded fields using images obtained with multi-conjugate adaptive optics (MCAO), with the intent of informing the scientific development of this key technology for the Extremely Large Telescopes. We use deep J and K$_\\mathrm{s}$ exposures of NGC 1851 obtained using the Gemini Multi-Conjugate Adaptive Optics System (GeMS) on Gemini South to quantify the performance of the system and to develop an optimal strategy for extracting precise stellar photometry from the images using well-known PSF-fitting techniques. We judge the success of the various techniques we employ by using science-based metrics, particularly the width of the main sequence turn-off region. We also compare the GeMS photometry with the exquisite HST data of the same target in the visible. We show that the PSF produced by GeMS possesses significant spatial and temporal variability that must be accounted for during the photometric analysis by allowing the PSF model a...

  19. A novel rules based approach for estimating software birthmark.

    Nazir, Shah; Shahzad, Sara; Khan, Sher Afzal; Alias, Norma Binti; Anwar, Sajid


    Software birthmark is a unique quality of software to detect software theft. Comparing birthmarks of software can tell us whether a program or software is a copy of another. Software theft and piracy are rapidly increasing problems of copying, stealing, and misusing the software without proper permission, as mentioned in the desired license agreement. The estimation of birthmark can play a key role in understanding the effectiveness of a birthmark. In this paper, a new technique is presented to evaluate and estimate software birthmark based on the two most sought-after properties of birthmarks, that is, credibility and resilience. For this purpose, the concept of soft computing such as probabilistic and fuzzy computing has been taken into account and fuzzy logic is used to estimate properties of birthmark. The proposed fuzzy rule based technique is validated through a case study and the results show that the technique is successful in assessing the specified properties of the birthmark, its resilience and credibility. This, in turn, shows how much effort will be required to detect the originality of the software based on its birthmark.

  20. A Novel Rules Based Approach for Estimating Software Birthmark

    Shah Nazir


    Full Text Available Software birthmark is a unique quality of software to detect software theft. Comparing birthmarks of software can tell us whether a program or software is a copy of another. Software theft and piracy are rapidly increasing problems of copying, stealing, and misusing the software without proper permission, as mentioned in the desired license agreement. The estimation of birthmark can play a key role in understanding the effectiveness of a birthmark. In this paper, a new technique is presented to evaluate and estimate software birthmark based on the two most sought-after properties of birthmarks, that is, credibility and resilience. For this purpose, the concept of soft computing such as probabilistic and fuzzy computing has been taken into account and fuzzy logic is used to estimate properties of birthmark. The proposed fuzzy rule based technique is validated through a case study and the results show that the technique is successful in assessing the specified properties of the birthmark, its resilience and credibility. This, in turn, shows how much effort will be required to detect the originality of the software based on its birthmark.

  1. Alternative metrics


    As the old 'publish or perish' adage is brought into question, additional research-impact indices, known as altmetrics, are offering new evaluation alternatives. But such metrics may need to adjust to the evolution of science publishing.

  2. Species-Level Differences in Hyperspectral Metrics among Tropical Rainforest Trees as Determined by a Tree-Based Classifier

    Dar A. Roberts


    Full Text Available This study explores a method to classify seven tropical rainforest tree species from full-range (400–2,500 nm) hyperspectral data acquired at tissue (leaf and bark), pixel, and crown scales using laboratory and airborne sensors. Metrics that respond to vegetation chemistry and structure were derived using narrowband indices, derivative- and absorption-based techniques, and spectral mixture analysis. We then used the Random Forests tree-based classifier to discriminate species with minimally-correlated, importance-ranked metrics. At all scales, best overall accuracies were achieved with metrics derived from all four techniques and that targeted chemical and structural properties across the visible to shortwave infrared spectrum (400–2500 nm). For tissue spectra, overall accuracies were 86.8% for leaves, 74.2% for bark, and 84.9% for leaves plus bark. Variation in tissue metrics was best explained by an axis of red absorption related to photosynthetic leaves and an axis distinguishing bark water and other chemical absorption features. Overall accuracies for individual tree crowns were 71.5% for pixel spectra, 70.6% for crown-mean spectra, and 87.4% for a pixel-majority technique. At pixel and crown scales, tree structure and phenology at the time of image acquisition were important factors that determined species spectral separability.
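
    The classification stage described above (importance-ranked spectral metrics fed to a Random Forests classifier) can be mimicked with scikit-learn, here on synthetic stand-in features rather than the hyperspectral metrics derived in the study:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Synthetic stand-in for per-crown spectral metrics (e.g. narrowband indices,
# absorption depths, spectral-mixture fractions); real metrics would replace this.
rng = np.random.default_rng(42)
n_per_species, n_metrics, n_species = 60, 12, 3
X = np.vstack([rng.normal(loc=s, scale=1.0, size=(n_per_species, n_metrics))
               for s in range(n_species)])
y = np.repeat(np.arange(n_species), n_per_species)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

print("overall accuracy:", accuracy_score(y_te, clf.predict(X_te)))
print("top metrics by importance:", np.argsort(clf.feature_importances_)[::-1][:3])
```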

  3. Operating Rule Classification System of Water Supply Reservoir Based on Learning Classifier System

    ZHANG Xian-feng; WANG Xiao-lin; YIN Zheng-jie; LI Hui-qiang


    An operating rule classification system based on learning classifier system (LCS), which learns through credit assignment (bucket brigade algorithm, BBA) and rule discovery (genetic algorithm, GA), is established to extract water-supply reservoir operating rules. The proposed system acquires an online identification rate of 95% for training samples and an offline rate of 85% for testing samples in a case study. The performance of the rule classification system is discussed in terms of the rationality of the obtained rules, the impact of training samples on rule extraction, and a comparison between the rule classification system and an artificial neural network (ANN). The results indicate that the LCS is feasible and effective for obtaining reservoir water-supply operating rules.


    S. Prakash


    Full Text Available Mutual funds are becoming an effective way for investors to participate in financial markets. An investor must learn to analyze and measure the risk and return of a portfolio. The performance of funds is mainly affected by characteristics such as asset size, turnover and fee structure, so investors' highest priority lies in understanding the relation between fund performance and these properties. Currently, investors depend upon advisors for their financial planning, and no customized tools are available to support investment decisions. In this work, a fund planner tool called Techno-Portfolio Advisor is proposed which helps investors understand these critical relations and supports mutual fund selection across the Asset Management Companies (AMCs) in India. The Techno-Portfolio Advisor is designed based on fuzzy inference rules that consider investor preferences such as investment amount, age, future goal and return rate. Further, the optimal funds for achieving the investor's goal are evaluated based on quantitative data from historical NAV records published by SEBI/AMFI/AMCs. Thus the Techno-Portfolio Advisor creates awareness among the investor community in choosing the optimal mutual fund scheme and helps investors achieve their investment goals.

  5. Rule-based deduplication of article records from bibliographic databases.

    Jiang, Yu; Lin, Can; Meng, Weiyi; Yu, Clement; Cohen, Aaron M; Smalheiser, Neil R


    We recently designed and deployed a metasearch engine, Metta, that sends queries and retrieves search results from five leading biomedical databases: PubMed, EMBASE, CINAHL, PsycINFO and the Cochrane Central Register of Controlled Trials. Because many articles are indexed in more than one of these databases, it is desirable to deduplicate the retrieved article records. This is not a trivial problem because data fields contain a lot of missing and erroneous entries, and because certain types of information are recorded differently (and inconsistently) in the different databases. The present report describes our rule-based method for deduplicating article records across databases and includes an open-source script module that can be deployed freely. Metta was designed to satisfy the particular needs of people who are writing systematic reviews in evidence-based medicine. These users want the highest possible recall in retrieval, so it is important to err on the side of not deduplicating any records that refer to distinct articles, and it is important to perform deduplication online in real time. Our deduplication module is designed with these constraints in mind. Articles that share the same publication year are compared sequentially on parameters including PubMed ID number, digital object identifier, journal name, article title and author list, using text approximation techniques. In a review of Metta searches carried out by public users, we found that the deduplication module was more effective at identifying duplicates than EndNote without making any erroneous assignments.
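    The exact field ordering and thresholds used in Metta's module are not given in this record, so the sketch below only illustrates the general strategy described: compare records that share a publication year on strong identifiers first, then fall back to approximate title/author matching, erring on the side of not merging. The Record fields and the 0.9 similarity threshold are assumptions.

    from dataclasses import dataclass
    from difflib import SequenceMatcher
    from typing import Optional

    @dataclass
    class Record:
        year: int
        pmid: Optional[str]
        doi: Optional[str]
        journal: str
        title: str
        authors: str

    def same_article(a: Record, b: Record, threshold: float = 0.9) -> bool:
        """Conservative rule-based check: when in doubt, do NOT merge."""
        if a.year != b.year:
            return False
        # Strong identifiers decide immediately when present in both records.
        if a.pmid and b.pmid:
            return a.pmid == b.pmid
        if a.doi and b.doi:
            return a.doi.lower() == b.doi.lower()
        # Otherwise fall back on approximate text matching of title and authors.
        title_sim = SequenceMatcher(None, a.title.lower(), b.title.lower()).ratio()
        author_sim = SequenceMatcher(None, a.authors.lower(), b.authors.lower()).ratio()
        return title_sim >= threshold and author_sim >= threshold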

  6. Effectiveness of vegetation-based biodiversity offset metrics as surrogates for ants.

    Hanford, Jayne K; Crowther, Mathew S; Hochuli, Dieter F


    Biodiversity offset schemes are globally popular policy tools for balancing the competing demands of conservation and development. Trading currencies for losses and gains in biodiversity value at development and credit sites are usually based on several vegetation attributes combined to yield a simple score (multimetric), but the score is rarely validated prior to implementation. Inaccurate biodiversity trading currencies are likely to accelerate global biodiversity loss through unrepresentative trades of losses and gains. We tested a model vegetation multimetric (i.e., vegetation structural and compositional attributes) typical of offset trading currencies to determine whether it represented measurable components of compositional and functional biodiversity. Study sites were located in remnant patches of a critically endangered ecological community in western Sydney, Australia, an area representative of global conflicts between conservation and expanding urban development. We sampled ant fauna composition with pitfall traps and enumerated removal by ants of native plant seeds from artificial seed containers (seed depots). Ants are an excellent model taxon because they are strongly associated with habitat complexity, respond rapidly to environmental change, and are functionally important at many trophic levels. The vegetation multimetric did not predict differences in ant community composition or seed removal, despite underlying assumptions that biodiversity trading currencies used in offset schemes represent all components of a site's biodiversity value. This suggests that vegetation multimetrics are inadequate surrogates for total biodiversity value. These findings highlight the urgent need to refine existing offsetting multimetrics to ensure they meet underlying assumptions of surrogacy. Despite the best intentions, offset schemes will never achieve their goal of no net loss of biodiversity values if trades are based on metrics unrepresentative of total

  7. Dynamic Eye Tracking Based Metrics for Infant Gaze Patterns in the Face-Distractor Competition Paradigm

    Ahtola, Eero; Stjerna, Susanna; Yrttiaho, Santeri; Nelson, Charles A.; Leppänen, Jukka M.; Vanhatalo, Sampsa


    Objective To develop new standardized eye tracking based measures and metrics for infants’ gaze dynamics in the face-distractor competition paradigm. Method Eye tracking data were collected from two samples of healthy 7-month-old infants (total n = 45), as well as one sample of 5-month-old infants (n = 22) in a paradigm with a picture of a face or a non-face pattern as a central stimulus, and a geometric shape as a lateral stimulus. The data were analyzed by using conventional measures of infants’ initial disengagement from the central to the lateral stimulus (i.e., saccadic reaction time and probability) and, additionally, novel measures reflecting infants’ gaze dynamics after the initial disengagement (i.e., cumulative allocation of attention to the central vs. peripheral stimulus). Results The results showed that the initial saccade away from the centrally presented stimulus is followed by a rapid re-engagement of attention with the central stimulus, leading to cumulative preference for the central stimulus over the lateral stimulus over time. This pattern tended to be stronger for salient facial expressions as compared to non-face patterns, was replicable across two independent samples of 7-month-old infants, and differentiated between 7- and 5-month-old infants. Conclusion The results suggest that eye tracking based assessments of infants’ cumulative preference for faces over time can be readily parameterized and standardized, and may provide valuable techniques for future studies examining normative developmental changes in preference for social signals. Significance Standardized measures of early developing face preferences may have potential to become surrogate biomarkers of neurocognitive and social development. PMID:24845102

  8. Dynamic eye tracking based metrics for infant gaze patterns in the face-distractor competition paradigm.

    Eero Ahtola

    Full Text Available OBJECTIVE: To develop new standardized eye tracking based measures and metrics for infants' gaze dynamics in the face-distractor competition paradigm. METHOD: Eye tracking data were collected from two samples of healthy 7-month-old infants (total n = 45), as well as one sample of 5-month-old infants (n = 22) in a paradigm with a picture of a face or a non-face pattern as a central stimulus, and a geometric shape as a lateral stimulus. The data were analyzed by using conventional measures of infants' initial disengagement from the central to the lateral stimulus (i.e., saccadic reaction time and probability) and, additionally, novel measures reflecting infants' gaze dynamics after the initial disengagement (i.e., cumulative allocation of attention to the central vs. peripheral stimulus). RESULTS: The results showed that the initial saccade away from the centrally presented stimulus is followed by a rapid re-engagement of attention with the central stimulus, leading to cumulative preference for the central stimulus over the lateral stimulus over time. This pattern tended to be stronger for salient facial expressions as compared to non-face patterns, was replicable across two independent samples of 7-month-old infants, and differentiated between 7- and 5-month-old infants. CONCLUSION: The results suggest that eye tracking based assessments of infants' cumulative preference for faces over time can be readily parameterized and standardized, and may provide valuable techniques for future studies examining normative developmental changes in preference for social signals. SIGNIFICANCE: Standardized measures of early developing face preferences may have potential to become surrogate biomarkers of neurocognitive and social development.

  9. Metrics for vector quantization-based parametric speech enhancement and separation

    Christensen, Mads Græsbøll


    Speech enhancement and separation algorithms sometimes employ a two-stage processing scheme, wherein the signal is first mapped to an intermediate low-dimensional parametric description after which the parameters are mapped to vectors in codebooks trained on, for example, individual noise-free sources using a vector quantizer. To obtain accurate parameters, one must employ a good estimator in finding the parameters of the intermediate representation, like a maximum likelihood estimator. This leaves some unanswered questions, however, like what metrics to use in the subsequent vector quantization process and how to systematically derive them. This paper aims at answering these questions. Metrics for this are presented and derived, and their use is exemplified on a number of different signal models by deriving closed-form expressions. The metrics essentially take into account in the vector...

  10. Evaluation of Landsat-Based METRIC Modeling to Provide High-Spatial Resolution Evapotranspiration Estimates for Amazonian Forests

    Izaya Numata


    Full Text Available While forest evapotranspiration (ET) dynamics in the Amazon have been studied both as point estimates using flux towers, as well as spatially coarse surfaces using satellite data, higher resolution (e.g., 30 m resolution) ET estimates are necessary to address finer spatial variability associated with forest biophysical characteristics and their changes by natural and human impacts. The objective of this study is to evaluate the potential of the Landsat-based METRIC (Mapping Evapotranspiration at high Resolution with Internalized Calibration) model to estimate high-resolution (30 m) forest ET by comparing to flux tower ET (FT ET) data collected over seasonally dry tropical forests in Rondônia, the southwestern region of the Amazon. Analyses were conducted at daily, monthly and seasonal scales for the dry seasons (June–September for Rondônia) of 2000–2002. Overall daily ET comparison between FT ET and METRIC ET across the study site showed r2 = 0.67 with RMSE = 0.81 mm. For seasonal ET comparison, METRIC-derived ET estimates showed an agreement with FT ET measurements during the dry season of r2 > 0.70 and %MAE < 15%. We also discuss some challenges and potential applications of METRIC for Amazonian forests.
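    The agreement statistics quoted above (RMSE, r2, %MAE) are straightforward to reproduce; the short Python sketch below assumes %MAE means mean absolute error expressed as a percentage of the mean observed ET, which may differ from the authors' exact definition, and the sample values are made up.

    import numpy as np

    def agreement_stats(observed, predicted):
        """RMSE, r^2 and %MAE between flux-tower and model ET (same units, e.g. mm/day)."""
        obs, pred = np.asarray(observed, float), np.asarray(predicted, float)
        rmse = np.sqrt(np.mean((pred - obs) ** 2))
        r = np.corrcoef(obs, pred)[0, 1]
        pct_mae = 100.0 * np.mean(np.abs(pred - obs)) / np.mean(obs)  # assumed definition
        return rmse, r ** 2, pct_mae

    obs = [3.1, 2.8, 3.5, 4.0, 2.9]
    pred = [3.4, 2.5, 3.3, 4.4, 3.1]
    print(agreement_stats(obs, pred))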

  11. Classification based on pruning and double covered rule sets for the internet of things applications.

    Li, Shasha; Zhou, Zhongmei; Wang, Weiping


    The Internet of things (IOT) has been a hot issue in recent years. IOT users accumulate large amounts of data, which makes mining useful knowledge from IOT a great challenge. Classification is an effective strategy which can predict the needs of users in IOT. However, many traditional rule-based classifiers cannot guarantee that all instances are covered by at least two classification rules, so these algorithms cannot achieve high accuracy on some datasets. In this paper, we propose a new rule-based classification, CDCR-P (Classification based on the Pruning and Double Covered Rule sets). CDCR-P can induce two different rule sets A and B. Every instance in the training set can be covered by at least one rule not only in rule set A, but also in rule set B. In order to improve the quality of rule set B, we take measures to prune the length of rules in rule set B. Our experimental results indicate that CDCR-P is not only feasible, but can also achieve high accuracy.

  12. Closed-set-based Discovery of Representative Association Rules Revisited

    Balcázar, José L


    The output of an association rule miner is often huge in practice. This is why several concise lossless representations have been proposed, such as the "essential" or "representative" rules. We revisit the algorithm given by Kryszkiewicz (Int. Symp. Intelligent Data Analysis 2001, Springer-Verlag LNCS 2189, 350-359) for mining representative rules. We show that its output is sometimes incomplete, due to an oversight in its mathematical validation, and we propose an alternative complete generator that works within only slightly larger running times.

  13. Multi-Metric Based Face Identification with Multi Configuration LBP Descriptor

    Djeddou Mustapha


    Full Text Available This paper deals with the performance improvement of mono-modal face identification. A statistical study of various structures of LBP (Local Binary Patterns) features associated with two metrics is performed to identify those that commit errors on different subjects. Then, during the identification stage, these optimal variants are used, and a simple score-level fusion is adopted. The score fusion is done after min-max normalization. The main contribution of this paper consists in the association of multiple LBP schemes with different metrics using a simple fusion operation. An overall identification rate of up to 99% is achieved on the AT&T database.
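    A minimal sketch of the min-max normalization and sum-rule score fusion step described above; the two distance lists and their values are made up, and the paper's actual LBP configurations and metrics are not reproduced here.

    def min_max(scores):
        """Min-max normalize a list of matching scores to [0, 1]."""
        lo, hi = min(scores), max(scores)
        return [(s - lo) / (hi - lo) if hi > lo else 0.0 for s in scores]

    def fuse(score_lists):
        """Simple sum-rule fusion of per-metric score lists (one score per gallery subject)."""
        normalized = [min_max(s) for s in score_lists]
        return [sum(col) for col in zip(*normalized)]

    # Distances from two hypothetical LBP/metric configurations to 4 gallery subjects
    # (lower = more similar), so the best match is the minimum fused score.
    chi_square = [12.0, 4.5, 9.8, 7.1]
    euclidean  = [30.0, 11.0, 25.0, 14.0]
    fused = fuse([chi_square, euclidean])
    print(min(range(len(fused)), key=fused.__getitem__))  # index of the identified subject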

  14. Rule-based query answering method for a knowledge base of economic crimes

    Bak, Jaroslaw


    We present a description of a PhD thesis which aims to propose a rule-based query answering method for relational data. In this approach we use additional knowledge, represented as a set of rules, that describes the source data at the concept (ontological) level. Queries are posed in terms of this abstract level. We present two methods: the first uses hybrid reasoning and the second exploits only forward chaining. These two methods are demonstrated by a prototypical implementation of the system coupled with the Jess engine. Tests are performed on a knowledge base of selected economic crimes: fraudulent disbursement and money laundering.

  15. CRIS: A Rule-Based Approach for Customized Academic Advising

    Chung-Wei Yeh


    Full Text Available This study presents a customized academic e-advising service that uses rule-based technology to recommend courses to individual college students in Taiwan. Since academic advising for taking courses is mostly carried out by advisors to help students achieve educational, career, and personal goals, it plays an important role in the higher education system. To enhance the counseling effectiveness of advisors in helping students fit their professional field and improve their learning experience, we propose an application system called CRIS (course recommendation intelligent system). The CRIS consists of six functions: academic profile review, academic interest analysis, career and curriculum matchmaking, recommended courses analysis, department recommendation analysis and record assessment. This work provides the solution in three layers (data layer, processing layer and solution layer) via four steps: (1) database design and data transfer, (2) student profile analysis, (3) customized academic advising generation and (4) solution analysis. A comparison of academic scores, combined with a survey of individual students' interest in learning and satisfaction with academic achievement, is conducted to test the effectiveness. The experimental results show that the participating college students considered the CRIS helpful in their adjustment to the university and that it increased their success at the university.

  16. Segmentation-based and rule-based spectral mixture analysis for estimating urban imperviousness

    Li, Miao; Zang, Shuying; Wu, Changshan; Deng, Yingbin


    For detailed estimation of urban imperviousness, numerous image processing methods have been developed, and applied to different urban areas with some success. Most of these methods, however, are global techniques. That is, they have been applied to the entire study area without considering spatial and contextual variations. To address this problem, this paper explores whether two spatio-contextual analysis techniques, namely segmentation-based and rule-based analysis, can improve urban imperviousness estimation. These two spatio-contextual techniques were incorporated into a classic urban imperviousness estimation technique, the fully-constrained linear spectral mixture analysis (FCLSMA) method. In particular, image segmentation was applied to divide the image into homogeneous segments, and spatially varying endmembers were chosen for each segment. Then an FCLSMA was applied for each segment to estimate the pixel-wise fractional coverage of high-albedo material, low-albedo material, vegetation, and soil. Finally, a rule-based analysis was carried out to estimate the percent impervious surface area (%ISA). The developed technique was applied to a Landsat TM image acquired over the Milwaukee River Watershed, an urbanized watershed in Wisconsin, United States. Results indicate that the developed segmentation-based and rule-based LSMA (S-R-LSMA) outperforms traditional SMA techniques, with a mean average error (MAE) of 5.44% and R2 of 0.88. Further, a comparative analysis shows that, when compared to segmentation, rule-based analysis plays a more essential role in improving the estimation accuracy.

  17. New Performance Metrics for Quantitative Polymerase Chain Reaction-Based Microbial Source Tracking Methods

    Binary sensitivity and specificity metrics are not adequate to describe the performance of quantitative microbial source tracking methods because the estimates depend on the amount of material tested and limit of detection. We introduce a new framework to compare the performance ...

  18. Linking customer and financial metrics to shareholder value : The leverage effect in customer-based valuation

    Schulze, C.; Skiera, B.; Wiesel, T.


    Customers are the most important assets of most companies, such that customer equity has been used as a proxy for shareholder value. However, linking customer metrics to shareholder value without considering debt and non-operating assets ignores their effects on relative changes in customer equity a

  19. Linking customer and financial metrics to shareholder value : The leverage effect in customer-based valuation

    Schulze, C.; Skiera, B.; Wiesel, T.

    Customers are the most important assets of most companies, such that customer equity has been used as a proxy for shareholder value. However, linking customer metrics to shareholder value without considering debt and non-operating assets ignores their effects on relative changes in customer equity

  20. A General Attribute and Rule Based Role-Based Access Control Model


    Growing numbers of users and many access control policies which involve many different resource attributes in service-oriented environments bring various problems in protecting resources. This paper analyzes the relationships of resource attributes to user attributes in all policies, and proposes a general attribute and rule based role-based access control (GAR-RBAC) model to meet these security needs. The model can dynamically assign users to roles via rules to cope with growing numbers of users. These rules use different attribute expressions and permissions as part of the authorization constraints, and are defined by analyzing the relations of resource attributes to user attributes in the many access policies defined by the enterprise. The model is a general access control model that can support many access control policies and can also be applied more widely to services. The paper also describes how to use the GAR-RBAC model in Web service environments.
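    The paper's rule syntax is not reproduced in this record; the sketch below only illustrates the general idea of rules that map attribute expressions over user attributes to roles, from which permissions then follow. The attribute names, roles and permissions are hypothetical.

    # Hypothetical rules: each maps an attribute expression (a predicate over user
    # attributes) to a role; a user is dynamically assigned every role whose
    # expression evaluates to True, and permissions then follow from the roles.
    RULES = [
        (lambda u: u.get("department") == "finance" and u.get("level", 0) >= 3, "auditor"),
        (lambda u: u.get("department") == "hr", "hr_staff"),
        (lambda u: u.get("contract") == "external", "guest"),
    ]

    ROLE_PERMISSIONS = {
        "auditor": {"read_ledger", "export_report"},
        "hr_staff": {"read_employee_record"},
        "guest": {"read_public_docs"},
    }

    def roles_for(user: dict) -> set:
        return {role for predicate, role in RULES if predicate(user)}

    def permissions_for(user: dict) -> set:
        perms = set()
        for role in roles_for(user):
            perms |= ROLE_PERMISSIONS[role]
        return perms

    print(permissions_for({"department": "finance", "level": 4}))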

  1. A-Train Based Observational Metrics for Model Evaluation in Extratropical Cyclones

    Naud, Catherine M.; Booth, James F.; Del Genio, Anthony D.; van den Heever, Susan C.; Posselt, Derek J.


    Extratropical cyclones contribute most of the precipitation in the midlatitudes, i.e., up to 70% during winter in the northern hemisphere, and can generate flooding, extreme winds, blizzards and have large socio-economic impacts. As such, it is important that general circulation models (GCMs) accurately represent these systems so their evolution in a warming climate can be understood. However, there are still uncertainties on whether warming will increase their frequency of occurrence, their intensity and how much rain or snow they bring. Part of the issue is that models have trouble representing their strength, but models also have biases in the amount of clouds and precipitation they produce. This is caused by potential issues in various aspects of the models: convection, boundary layer, and cloud scheme to only mention a few. In order to pinpoint which aspects of the models need improvement for a better representation of extratropical cyclone precipitation and cloudiness, we will present A-train based observational metrics: cyclone-centered, warm and cold frontal composites of cloud amount and type, precipitation rate and frequency of occurrence. Using the same method to extract similar fields from the model, we will present an evaluation of the GISS-ModelE2 and the IPSL-LMDZ-5B models, based on their AR5 and more recent versions. The AR5 version of the GISS model underestimates cloud cover in extratropical cyclones while the IPSL AR5 version overestimates it. In addition, we will show how the observed CloudSat-CALIPSO cloud vertical distribution across cold fronts changes with moisture amount and cyclone strength, and test if the two models successfully represent these changes. We will also show how CloudSat-CALIPSO derived cloud type (i.e. convective vs. stratiform) evolves across warm fronts as cyclones age, and again how this is represented in the models. Our third process-based analysis concerns cumulus clouds in the post-cold frontal region and how their

  2. Metric learning

    Bellet, Aurelien; Sebban, Marc


    Similarity between objects plays an important role in both human cognitive processes and artificial systems for recognition and categorization. How to appropriately measure such similarities for a given task is crucial to the performance of many machine learning, pattern recognition and data mining methods. This book is devoted to metric learning, a set of techniques to automatically learn similarity and distance functions from data that has attracted a lot of interest in machine learning and related fields in the past ten years. In this book, we provide a thorough review of the metric learnin

  3. SU-E-I-71: Quality Assessment of Surrogate Metrics in Multi-Atlas-Based Image Segmentation

    Zhao, T; Ruan, D [UCLA School of Medicine, Los Angeles, CA (United States)


    Purpose: With the ever-growing data of heterogeneous quality, relevance assessment of atlases becomes increasingly critical for multi-atlas-based image segmentation. However, there is no universally recognized best relevance metric and even a standard to compare amongst candidates remains elusive. This study, for the first time, designs a quantification to assess relevance metrics’ quality, based on a novel perspective of the metric as surrogate for inferring the inaccessible oracle geometric agreement. Methods: We first develop an inference model to relate surrogate metrics in image space to the underlying oracle relevance metric in segmentation label space, with a monotonically non-decreasing function subject to random perturbations. Subsequently, we investigate model parameters to reveal key contributing factors to surrogates’ ability in prognosticating the oracle relevance value, for the specific task of atlas selection. Finally, we design an effective contrast-to-noise ratio (eCNR) to quantify surrogates’ quality based on insights from these analyses and empirical observations. Results: The inference model was specialized to a linear function with normally distributed perturbations, with surrogate metric exemplified by several widely-used image similarity metrics, i.e., MSD/NCC/(N)MI. Surrogates’ behaviors in selecting the most relevant atlases were assessed under varying eCNR, showing that surrogates with high eCNR dominated those with low eCNR in retaining the most relevant atlases. In an end-to-end validation, NCC/(N)MI with eCNR of 0.12 resulted in statistically better segmentation (mean DSC of about 0.85, first and third quartiles of 0.83 and 0.89) than MSD with eCNR of 0.10 (mean DSC of 0.84, first and third quartiles of 0.81 and 0.89). Conclusion: The designed eCNR is capable of characterizing surrogate metrics’ quality in prognosticating the oracle relevance value. It has been demonstrated to be

  4. Reduced rule base self-tuning fuzzy PI controller for TCSC

    Hameed, Salman; Das, Biswarup; Pant, Vinay [Department of Electrical Engineering, Indian Institute of Technology, Roorkee, Roorkee - 247 667, Uttarakhand (India)


    In this paper, a reduced rule base self-tuning fuzzy PI controller (STFPIC) for thyristor controlled series capacitor (TCSC) is proposed. Essentially, a STFPIC consists of two fuzzy logic controllers (FLC). In this work, for each FLC, 49 rules have been used and as a result, the overall complexity of the STFPIC increases substantially. To reduce this complexity, application of singular value decomposition (SVD) based rule reduction technique is also proposed in this paper. By applying this methodology, the number of rules in each FLC has been reduced from 49 to 9. Therefore, the proposed rule base reduction technique reduces the total number of rules in the STFPIC by almost 80% (from 49 x 2 = 98 to 9 x 2 = 18), thereby reducing the complexity of the STFPIC significantly. The feasibility of the proposed algorithm has been tested on 2-area 4-machine power system and 10-machine 39-bus system through detailed digital simulation using MATLAB/SIMULINK. (author)
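    The abstract reports reducing each 49-rule FLC to 9 rules via SVD. One common way to realize this (not necessarily the authors' exact procedure) is to treat the 7x7 table of rule consequents as a matrix and keep a rank-3 approximation, so that three reduced membership functions per input give 3 x 3 = 9 rules; in practice the factors are further conditioned to behave like membership functions. A numpy sketch of that idea, with a random consequent table standing in for the controller's:

    import numpy as np

    rng = np.random.default_rng(0)
    consequents = rng.uniform(-1.0, 1.0, size=(7, 7))  # stand-in for the 7x7 rule consequent table

    # Keep the 3 dominant singular values: U[:, :3] and Vt[:3, :] blend the original
    # 7 membership functions of each input into 3 reduced ones, and the 3x3 core
    # matrix plays the role of the reduced 9-rule table.
    U, s, Vt = np.linalg.svd(consequents, full_matrices=False)
    k = 3
    reduced_table = np.diag(s[:k])
    A, B = U[:, :k], Vt[:k, :].T

    approx = A @ reduced_table @ B.T  # rule surface reconstructed from the 9 reduced rules
    print("max reconstruction error:", np.abs(approx - consequents).max())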

  5. Knowledge-based prediction of plan quality metrics in intracranial stereotactic radiosurgery.

    Shiraishi, Satomi; Tan, Jun; Olsen, Lindsey A; Moore, Kevin L


    The objective of this work was to develop a comprehensive knowledge-based methodology for predicting achievable dose-volume histograms (DVHs) and highly precise DVH-based quality metrics (QMs) in stereotactic radiosurgery/radiotherapy (SRS/SRT) plans. Accurate QM estimation can identify suboptimal treatment plans and provide target optimization objectives to standardize and improve treatment planning. Correlating observed dose as it relates to the geometric relationship of organs-at-risk (OARs) to planning target volumes (PTVs) yields mathematical models to predict achievable DVHs. In SRS, DVH-based QMs such as brain V10Gy (volume receiving 10 Gy or more), gradient measure (GM), and conformity index (CI) are used to evaluate plan quality. This study encompasses 223 linear accelerator-based SRS/SRT treatment plans (SRS plans) using volumetric-modulated arc therapy (VMAT), representing 95% of the institution's VMAT radiosurgery load from the past four and a half years. Unfiltered models that use all available plans for the model training were built for each category with a stratification scheme based on target and OAR characteristics determined emergently through initial modeling process. Model predictive accuracy is measured by the mean and standard deviation of the difference between clinical and predicted QMs, δQM = QMclin - QMpred, and a coefficient of determination, R(2). For categories with a large number of plans, refined models are constructed by automatic elimination of suspected suboptimal plans from the training set. Using the refined model as a presumed achievable standard, potentially suboptimal plans are identified. Predictions of QM improvement are validated via standardized replanning of 20 suspected suboptimal plans based on dosimetric predictions. The significance of the QM improvement is evaluated using the Wilcoxon signed rank test. The most accurate predictions are obtained when plans are stratified based on proximity to OARs and their PTV
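    For reference, the plan-quality metrics named above can be computed from plan volumes alone; the sketch below assumes common definitions (an RTOG-style conformity index as prescription isodose volume over target volume, and gradient measure as the difference of equivalent-sphere radii of the 50% and 100% prescription isodose volumes), which may differ in detail from the study's. All volumes in the example are hypothetical.

    import math

    def eq_sphere_radius(volume_cc: float) -> float:
        """Radius (cm) of a sphere with the given volume (cc)."""
        return (3.0 * volume_cc / (4.0 * math.pi)) ** (1.0 / 3.0)

    def conformity_index(piv_cc: float, ptv_cc: float) -> float:
        """RTOG-style CI: prescription isodose volume / planning target volume."""
        return piv_cc / ptv_cc

    def gradient_measure(piv_cc: float, half_piv_cc: float) -> float:
        """GM (cm): equivalent-sphere radius of the 50% isodose volume minus that of the 100% volume."""
        return eq_sphere_radius(half_piv_cc) - eq_sphere_radius(piv_cc)

    # Hypothetical plan: PTV 2.0 cc, 100% isodose volume 2.4 cc, 50% isodose volume 9.5 cc.
    print(conformity_index(2.4, 2.0), gradient_measure(2.4, 9.5))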

  6. Sensor-based navigation of a mobile robot using automatically constructed fuzzy rules

    Watanabe, Y.; Pin, F.G.


    A system for automatic generation of fuzzy rules is proposed which is based on a new approach, called "Fuzzy Behaviorist," and on its associated formalism for rule base development in behavior-based robot control systems. The automated generator of fuzzy rules automatically constructs the set of rules and the associated membership functions that implement reasoning schemes that have been expressed in qualitative terms. The system also checks for completeness of the rule base and independence and/or redundancy of the rules to ensure that the requirements of the formalism are satisfied. Examples of the automatic generation of fuzzy rules for cases involving suppression and/or inhibition of fuzzy behaviors are given and discussed. Experimental results obtained with the automated fuzzy rule generator applied to the domain of sensor-based navigation in a priori unknown environments using one of our autonomous test-bed robots are then presented and discussed to illustrate the feasibility of large-scale automatic fuzzy rule generation using our proposed "Fuzzy Behaviorist" approach.

  7. Automatic generation of fuzzy rules for the sensor-based navigation of a mobile robot

    Pin, F.G.; Watanabe, Y.


    A system for automatic generation of fuzzy rules is proposed which is based on a new approach, called "Fuzzy Behaviorist," and on its associated formalism for rule base development in behavior-based robot control systems. The automated generator of fuzzy rules automatically constructs the set of rules and the associated membership functions that implement reasoning schemes that have been expressed in qualitative terms. The system also checks for completeness of the rule base and independence and/or redundancy of the rules to ensure that the requirements of the formalism are satisfied. Examples of the automatic generation of fuzzy rules for cases involving suppression and/or inhibition of fuzzy behaviors are given and discussed. Experimental results obtained with the automated fuzzy rule generator applied to the domain of sensor-based navigation in a priori unknown environments using one of our autonomous test-bed robots are then presented and discussed to illustrate the feasibility of large-scale automatic fuzzy rule generation using our proposed "Fuzzy Behaviorist" approach.

  8. Rule-Based and Information-Integration Category Learning in Normal Aging

    Maddox, W. Todd; Pacheco, Jennifer; Reeves, Maia; Zhu, Bo; Schnyer, David M.


    The basal ganglia and prefrontal cortex play critical roles in category learning. Both regions evidence age-related structural and functional declines. The current study examined rule-based and information-integration category learning in a group of older and younger adults. Rule-based learning is thought to involve explicit, frontally mediated…

  9. An algebraic approach to revising propositional rule-based knowledge bases

    LUAN ShangMin; DAI GuoZhong


    One of the important topics in knowledge base revision is to introduce an efficient implementation algorithm. Algebraic approaches have good characteristics and implementation method; they may be a choice to solve the problem. An algebraic approach is presented to revise propositional rule-based knowledge bases in this paper. A way is firstly introduced to transform a propositional rule-based knowledge base into a Petri net. A knowledge base is represented by a Petri net, and facts are represented by the initial marking. Thus, the consistency check of a knowledge base is equivalent to the reachability problem of Petri nets. The reachability of Petri nets can be decided by whether the state equation has a solution; hence the consistency check can also be implemented by algebraic approach. Furthermore, algorithms are introduced to revise a propositional rule-based knowledge base, as well as extended logic programming. Compared with related works, the algorithms presented in the paper are efficient, and the time complexities of these algorithms are polynomial.

  10. Anisotropic interaction rules in circular motions of pigeon flocks: An empirical study based on sparse Bayesian learning

    Chen, Duxin; Xu, Bowen; Zhu, Tao; Zhou, Tao; Zhang, Hai-Tao


    Coordination can be deemed the result of interindividual interaction in natural gregarious animal groups. However, revealing the underlying interaction rules and decision-making strategies governing highly coordinated motion in bird flocks is still a long-standing challenge. Based on analysis of high spatial-temporal resolution GPS data of three pigeon flocks, we extract the hidden interaction principle by using a newly emerging machine learning method, namely sparse Bayesian learning. It is observed that the interaction probability has an inflection point at a pairwise distance of 3-4 m closer than the average maximum interindividual distance, after which it decays strictly with rising pairwise metric distances. Significantly, the density of spatial neighbor distribution is strongly anisotropic, with an evident lack of interactions along individual velocity. Thus, it is found that in small-sized bird flocks, individuals reciprocally cooperate with a varying number of neighbors in metric space and tend to interact with closer time-varying neighbors, rather than interacting with a fixed number of topological ones. Finally, extensive numerical investigation is conducted to verify both the revealed interaction and decision-making principle during circular flights of pigeon flocks.

  11. Optimization of decision rules based on dynamic programming approach

    Zielosko, Beata


    This chapter is devoted to the study of an extension of the dynamic programming approach which allows optimization of approximate decision rules relative to length and coverage. We introduce an uncertainty measure that is the difference between the number of rows in a given decision table and the number of rows labeled with the most common decision for this table, divided by the number of rows in the decision table. We fix a threshold γ, such that 0 ≤ γ < 1, and study so-called γ-decision rules (approximate decision rules) that localize rows in subtables whose uncertainty is at most γ. The presented algorithm constructs a directed acyclic graph Δγ(T) whose nodes are subtables of the decision table T given by pairs "attribute = value". The algorithm finishes the partitioning of a subtable when its uncertainty is at most γ. The chapter also contains results of experiments with decision tables from the UCI Machine Learning Repository. © 2014 Springer International Publishing Switzerland.
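    The uncertainty measure described above translates directly into code; a small Python sketch, assuming a decision table is represented as a list of (condition-attribute dict, decision) rows, with the subtable helper standing in for the "attribute = value" partitioning:

    from collections import Counter

    def uncertainty(rows):
        """G(T) = (N(T) - N_mcd(T)) / N(T): share of rows NOT labeled with the most common decision."""
        if not rows:
            return 0.0
        decisions = Counter(decision for _conditions, decision in rows)
        n_mcd = decisions.most_common(1)[0][1]
        return (len(rows) - n_mcd) / len(rows)

    def subtable(rows, attribute, value):
        """Rows of T for which the given attribute takes the given value ("attribute = value")."""
        return [(c, d) for c, d in rows if c[attribute] == value]

    # Partitioning of a subtable may stop once uncertainty(subtable) <= γ.
    T = [({"a": 1, "b": 0}, "yes"), ({"a": 1, "b": 1}, "yes"),
         ({"a": 0, "b": 1}, "no"), ({"a": 0, "b": 0}, "yes")]
    print(uncertainty(T), uncertainty(subtable(T, "a", 1)))   # 0.25 and 0.0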

  12. Changing from a Rules-based to a Principles-based Accounting Logic: A Review

    Marta Silva Guerreiro


    Full Text Available We explore influences on unlisted companies when Portugal moved from a code law, rules-based accounting system, to a principles-based accounting system of adapted International Financial Reporting Standards (IFRS). Institutionalisation of the new principles-based system was generally facilitated by a socio-economic and political context that increasingly supported IFRS logic. This helped central actors gain political opportunity, mobilise important allies, and accommodate major protagonists. The preparedness of unlisted companies to adopt the new IFRS-based accounting system voluntarily was explained by their desire to maintain social legitimacy. However, it was affected negatively by the embeddedness of rule-based practices in the ‘old’ prevailing institutional logic.

  13. Woody Plant Cover Dynamics in Sahelian Drylands from Earth Observation Based Seasonal Metrics

    Brandt, M.; Hiernaux, P.; Fensholt, R.; Tagesson, T.; Rasmussen, K.; Mbow, C.


    Woody plants play an important role in dryland primary productivity and people's livelihoods; however, due to their scattered appearance, quantifying and monitoring their abundance over a large area is challenging. From in situ measured woody cover we develop a phenology driven model to estimate the canopy cover of woody species in the Sahelian drylands. Annual maps are applied to monitor dynamics of woody populations in relation to climate and anthropogenic interference. The model estimates the total canopy cover of all woody phanerophytes and the concept is based on the significant difference in phenophases of dryland trees, shrubs and bushes as compared to that of the herbaceous plants. Whereas annual herbaceous plants are only green during the rainy season, with senescence occurring shortly after flowering towards the last rains, most woody plants remain photosynthetically active over large parts of the year. We use Moderate Resolution Imaging Spectroradiometer (MODIS) and SPOT VEGETATION (VGT) seasonal metrics representing the dry season to reproduce in situ woody cover at 77 field sites (178 observations in 3x3 km plots between 2000 and 2014) in Niger, Mali and Senegal. The extrapolation to Sahel scale shows agreement between VGT and MODIS at an almost nine times higher woody cover than in the global tree cover product MOD44B, which only captures trees of a certain minimum size. Trends over 15 years show that the pattern is closely related to population density and land cover/use. A negative woody cover change can be observed in densely populated areas, but a positive change is seen in sparsely populated regions. Whereas woody cover in cropland is generally stable, it is strongly positive in savannas and woodland. Discrepancies between the countries are large, and deforestation can also be observed at a more local scale. The method is applicable and derived woody cover maps of the Sahel are freely available. They represent an improvement of existing products and a

  14. Rough Set Based K-Exception Approach to Approximate Rule Reduction

    ZHANG Feng; ZHANG Xianfeng; QIN Zhiguang; LIU Jinde


    Rules referring to infrequent instances remain after attribute reduction and value reduction with traditional methods. A rough set (RS) based k-exception approach (RSKEA) to rule reduction is presented. Its main idea lies in a two-phase RS based rule reduction. An ordinary decision table is attained through a general method of RS knowledge reduction in the first phase. Then a k-exception candidate set is nominated according to the decision table. RS rule reduction is employed on the reformed source data set, which removes all the instances included in the k-exception set. We apply the approach to the automobile database. Results show that it can reduce the number and complexity of rules with an adjustable conflict rate, which contributes to approximate rule reduction.

  15. Association Rule-based Predictive Model for Machine Failure in Industrial Internet of Things

    Kwon, Jung-Hyok; Lee, Sol-Bee; Park, Jaehoon; Kim, Eui-Jik


    This paper proposes an association rule-based predictive model for machine failure in industrial Internet of things (IIoT), which can accurately predict the machine failure in real manufacturing environment by investigating the relationship between the cause and type of machine failure. To develop the predictive model, we consider three major steps: 1) binarization, 2) rule creation, 3) visualization. The binarization step translates item values in a dataset into one or zero, then the rule creation step creates association rules as IF-THEN structures using the Lattice model and Apriori algorithm. Finally, the created rules are visualized in various ways for users’ understanding. An experimental implementation was conducted using R Studio version 3.3.2. The results show that the proposed predictive model realistically predicts machine failure based on association rules.
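    The Lattice/Apriori machinery itself is not shown in this record; the sketch below only illustrates the underlying support and confidence computation for IF-THEN rules over binarized failure records, restricted to two-item rules for brevity. The item names and thresholds are made up.

    from itertools import combinations

    # Binarized machine-failure records: each set holds the items (causes/failure types) present.
    records = [
        {"overheat", "vibration", "bearing_failure"},
        {"overheat", "bearing_failure"},
        {"vibration", "belt_failure"},
        {"overheat", "vibration", "bearing_failure"},
        {"low_oil", "bearing_failure"},
    ]

    def support(itemset):
        return sum(itemset <= r for r in records) / len(records)

    def rules(min_support=0.4, min_confidence=0.7):
        """Emit IF antecedent THEN consequent rules from frequent 2-item sets (Apriori idea, no pruning)."""
        items = sorted(set().union(*records))
        out = []
        for a, b in combinations(items, 2):
            pair = {a, b}
            if support(pair) < min_support:
                continue
            for ante, cons in ((a, b), (b, a)):
                conf = support(pair) / support({ante})
                if conf >= min_confidence:
                    out.append((ante, cons, support(pair), conf))
        return out

    for ante, cons, sup, conf in rules():
        print(f"IF {ante} THEN {cons}  (support={sup:.2f}, confidence={conf:.2f})")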

  16. Diffeomorphic Metric Mapping of High Angular Resolution Diffusion Imaging based on Riemannian Structure of Orientation Distribution Functions

    Du, Jia; Qiu, Anqi


    In this paper, we propose a novel large deformation diffeomorphic registration algorithm to align high angular resolution diffusion images (HARDI) characterized by orientation distribution functions (ODFs). Our proposed algorithm seeks an optimal diffeomorphism of large deformation between two ODF fields in a spatial volume domain and at the same time, locally reorients an ODF in a manner such that it remains consistent with the surrounding anatomical structure. To this end, we first review the Riemannian manifold of ODFs. We then define the reorientation of an ODF when an affine transformation is applied and subsequently, define the diffeomorphic group action to be applied on the ODF based on this reorientation. We incorporate the Riemannian metric of ODFs for quantifying the similarity of two HARDI images into a variational problem defined under the large deformation diffeomorphic metric mapping (LDDMM) framework. We finally derive the gradient of the cost function in both Riemannian spaces of diffeomorphis...

  17. Database Reverse Engineering based on Association Rule Mining

    Nattapon Pannurat


    Full Text Available Maintaining a legacy database is a difficult task, especially when system documentation is poorly written or even missing. Database reverse engineering is an attempt to recover high-level conceptual design from the existing database instances. In this paper, we propose a technique to discover the conceptual schema using the association mining technique. The discovered schema corresponds to normalization at the third normal form, which is a common practice in many business organizations. Our algorithm also includes a rule filtering heuristic to address the exponential growth of discovered rules inherent in the association mining technique.

  18. Compression-based classification of biological sequences and structures via the Universal Similarity Metric: experimental assessment

    Manzini Giovanni


    Full Text Available Abstract Background Similarity of sequences is a key mathematical notion for Classification and Phylogenetic studies in Biology. It is currently primarily handled using alignments. However, the alignment methods seem inadequate for post-genomic studies since they do not scale well with data set size and they seem to be confined only to genomic and proteomic sequences. Therefore, alignment-free similarity measures are actively pursued. Among those, USM (Universal Similarity Metric) has gained prominence. It is based on the deep theory of Kolmogorov Complexity and universality is its most novel striking feature. Since it can only be approximated via data compression, USM is a methodology rather than a formula quantifying the similarity of two strings. Three approximations of USM are available, namely UCD (Universal Compression Dissimilarity), NCD (Normalized Compression Dissimilarity) and CD (Compression Dissimilarity). Their applicability and robustness are tested on various data sets yielding a first massive quantitative estimate that the USM methodology and its approximations are of value. Despite the rich theory developed around USM, its experimental assessment has limitations: only a few data compressors have been tested in conjunction with USM and mostly at a qualitative level, no comparison among UCD, NCD and CD is available and no comparison of USM with existing methods, both based on alignments and not, seems to be available. Results We experimentally test the USM methodology by using 25 compressors, all three of its known approximations and six data sets of relevance to Molecular Biology. This offers the first systematic and quantitative experimental assessment of this methodology, that naturally complements the many theoretical and the preliminary experimental results available. Moreover, we compare the USM methodology both with methods based on alignments and not. We may group our experiments into two sets. The first one, performed via ROC
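    The compression-based dissimilarities can be approximated with any off-the-shelf compressor; below is a minimal NCD sketch using zlib as a stand-in compressor (UCD and CD differ only in how the compressed sizes are normalized and are not reproduced here). The example sequences are made up.

    import zlib

    def c(data: bytes) -> int:
        """Compressed size in bytes (zlib at maximum compression as the stand-in compressor)."""
        return len(zlib.compress(data, 9))

    def ncd(x: bytes, y: bytes) -> float:
        """Normalized Compression Dissimilarity: near 0 for similar strings, near 1 for unrelated ones."""
        cx, cy, cxy = c(x), c(y), c(x + y)
        return (cxy - min(cx, cy)) / max(cx, cy)

    seq_a = b"ACGTACGTACGTACGTTGCA" * 20
    seq_b = b"ACGTACGTACGTACGTTGCA" * 19 + b"ACGTACGAACGTACGTTGCA"   # nearly identical
    seq_c = bytes((i * 97 + 13) % 251 for i in range(400))             # unrelated, incompressible
    print(ncd(seq_a, seq_b), ncd(seq_a, seq_c))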

  19. Will higher traffic flow lead to more traffic conflicts? A crash surrogate metric based analysis.

    Kuang, Yan; Qu, Xiaobo; Yan, Yadan


    In this paper, we aim to examine the relationship between traffic flow and potential conflict risks by using crash surrogate metrics. It has been widely recognized that one traffic flow corresponds to two distinct traffic states with different speeds and densities. In view of this, instead of simply aggregating traffic conditions with the same traffic volume, we represent potential conflict risks at a traffic flow fundamental diagram. Two crash surrogate metrics, namely, Aggregated Crash Index and Time to Collision, are used in this study to represent the potential conflict risks with respect to different traffic conditions. Furthermore, Beijing North Ring III and Next Generation SIMulation Interstate 80 datasets are utilized to carry out case studies. By using the proposed procedure, both datasets generate similar trends, which demonstrate the applicability of the proposed methodology and the transferability of our conclusions.
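    Time to Collision is a standard surrogate here; the sketch below uses the usual car-following definition (spacing divided by the speed difference when the follower is faster), with made-up values and a commonly used 4 s threshold that may differ from the study's settings.

    def time_to_collision(spacing_m: float, v_follower: float, v_leader: float) -> float:
        """TTC in seconds for a car-following pair; infinite when the gap is not closing."""
        closing_speed = v_follower - v_leader          # m/s
        if closing_speed <= 0.0:
            return float("inf")
        return spacing_m / closing_speed

    # A trajectory sample: 18 m gap, follower at 22 m/s, leader at 17 m/s -> TTC = 3.6 s.
    ttc = time_to_collision(18.0, 22.0, 17.0)
    print(ttc, "potential conflict" if ttc < 4.0 else "safe")  # 4 s is a commonly used threshold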

  20. Empirical Support for Concept Drifting Approaches: Results Based on New Performance Metrics

    Parneeta Sidhu


    Full Text Available Various types of online learning algorithms have been developed so far to handle concept drift in data streams. We perform more detailed evaluation of these algorithms through new performance metrics - prequential accuracy, kappa statistic, CPU evaluation time, model cost, and memory usage. Experimental evaluation using various artificial and real-world datasets prove that the various concept drifting algorithms provide highly accurate results in classifying new data instances even in a resource constrained environment, irrespective of size of dataset, type of drift or presence of noise in the dataset. We also present empirically the impact of various features- size of ensemble, period value, threshold value, multiplicative factor and the presence of noise on all the key performance metrics.
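    Two of the metrics named above can be computed incrementally; the sketch below assumes the standard definitions of fading-factor prequential accuracy and Cohen's kappa, which may differ in detail from the authors' exact variants. The toy label streams are made up.

    from collections import Counter

    def prequential_accuracy(predictions, labels, fading=0.99):
        """Fading-factor prequential accuracy: test on each instance before learning from it."""
        s = n = 0.0
        for p, y in zip(predictions, labels):
            s = fading * s + (1.0 if p == y else 0.0)
            n = fading * n + 1.0
        return s / n

    def kappa(predictions, labels):
        """Cohen's kappa: agreement above what chance (marginal class frequencies) would give."""
        total = len(labels)
        p0 = sum(p == y for p, y in zip(predictions, labels)) / total
        pred_freq, label_freq = Counter(predictions), Counter(labels)
        pe = sum(pred_freq[c] * label_freq[c] for c in label_freq) / total ** 2
        return (p0 - pe) / (1.0 - pe) if pe < 1.0 else 0.0

    y    = ["a", "a", "b", "a", "b", "b", "a", "b"]
    yhat = ["a", "b", "b", "a", "b", "a", "a", "b"]
    print(prequential_accuracy(yhat, y), kappa(yhat, y))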

  1. What is the position of Clinical and Experimental Reproductive Medicine in its scholarly journal network based on journal metrics?

    Huh, Sun


    Clinical and Experimental Reproductive Medicine (CERM) converted its language to English only beginning with the first issue of 2011. From that point in time, one of the goals of the journal has been to become a truly international journal. This paper aims to identify the position of CERM in its scholarly journal network based on the journal's metrics. The journal's metrics, including citations, countries of author affiliation, and countries of citing authors, Hirsch index, and proportion of funded articles, were gathered from Web of Science and analyzed. The two-year impact factor of 2013 was calculated at 0.971 excluding self-citation, which corresponds to a Journal Citation Reports ranking of 85.9% in the category of obstetrics and gynecology. In 2012, 2013, and 2014, the total citations were 17, 68, and 85, respectively. Authors from nine countries contributed to CERM. Researchers from 25 countries cited CERM in their articles. The Hirsch index was six. Out of 88 original articles, 35 studies received funds (39.8%). Based on the journal metrics, changing the journal language to English was found to be successful in promoting CERM to international journal status.

  2. A metric for success

    Carver, Gary P.


    The federal agencies are working with industry to ease adoption of the metric system. The goal is to help U.S. industry compete more successfully in the global marketplace, increase exports, and create new jobs. The strategy is to use federal procurement, financial assistance, and other business-related activities to encourage voluntary conversion. Based upon the positive experiences of firms and industries that have converted, federal agencies have concluded that metric use will yield long-term benefits that are beyond any one-time costs or inconveniences. It may be time for additional steps to move the Nation out of its dual-system comfort zone and continue to progress toward metrication. This report includes 'Metric Highlights in U.S. History'.

  3. An Integrated Metric Based Hierarchical Routing Algorithm in Broadband Communication System

    SHI Chengge; HU Jiajun; Milton Chang


    In this paper we give an integrated metric based hierarchical routing algorithm - the FMRSF (Function Fi(.) minimum routing selected first) algorithm - for broadband communication systems. With the authors' analysis strategy, the paper gives a routing solution for hierarchical communication systems, and the solution is suited to both ATM and IP networks. Due to the high-level logic network mapping in a hierarchical communication system, a large communication network can be described as a simpler logic network at a high level. However, it is difficult to evaluate the QoS parameters of the relevant factors of a logic network (for example, the time delay and the bandwidth of logic nodes or logic links). We develop our strategy with the FMRSF algorithm for different routing paths, and select a reasonable path for a communication session. After designing an integrated metric function describing the QoS metrics of the relevant factors of a logic network at the high levels of a broadband communication system, we show that the new routing algorithm - the FMRSF algorithm - is simpler and more applicable than the global optimum algorithm.

  4. Selection Metric for Photovoltaic Materials Screening Based on Detailed-Balance Analysis

    Blank, Beatrix; Kirchartz, Thomas; Lany, Stephan; Rau, Uwe


    The success of recently discovered absorber materials for photovoltaic applications has been generating increasing interest in systematic materials screening over the last years. However, the key for a successful materials screening is a suitable selection metric that goes beyond the Shockley-Queisser theory that determines the thermodynamic efficiency limit of an absorber material solely by its band-gap energy. In this work, we develop a selection metric to quantify the potential photovoltaic efficiency of a material. Our approach is compatible with detailed balance and applicable in computational and experimental materials screening. We use the complex refractive index to calculate radiative and nonradiative efficiency limits and the respective optimal thickness in the high mobility limit. We compare our model to the widely applied selection metric by Yu and Zunger [Phys. Rev. Lett. 108, 068701 (2012), 10.1103/PhysRevLett.108.068701] with respect to their dependence on thickness, internal luminescence quantum efficiency, and refractive index. Finally, the model is applied to complex refractive indices calculated via electronic structure theory.

  5. Selection Metric for Photovoltaic Materials Screening Based on Detailed-Balance Analysis

    Lany, Stephan [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Blank, Beatrix [IEK5-Photovoltaics; Kirchartz, Thomas [IEK5-Photovoltaics; University of Duisburg-Essen; Rau, Uwe [IEK5-Photovoltaics


    The success of recently discovered absorber materials for photovoltaic applications has been generating increasing interest in systematic materials screening over the last years. However, the key for a successful materials screening is a suitable selection metric that goes beyond the Shockley-Queisser theory that determines the thermodynamic efficiency limit of an absorber material solely by its band-gap energy. In this work, we develop a selection metric to quantify the potential photovoltaic efficiency of a material. Our approach is compatible with detailed balance and applicable in computational and experimental materials screening. We use the complex refractive index to calculate radiative and nonradiative efficiency limits and the respective optimal thickness in the high mobility limit. We compare our model to the widely applied selection metric by Yu and Zunger [Phys. Rev. Lett. 108, 068701 (2012)] with respect to their dependence on thickness, internal luminescence quantum efficiency, and refractive index. Finally, the model is applied to complex refractive indices calculated via electronic structure theory.

  6. On Inference Rules of Logic-Based Information Retrieval Systems.

    Chen, Patrick Shicheng


    Discussion of relevance and the needs of the users in information retrieval focuses on a deductive object-oriented approach and suggests eight inference rules for the deduction. Highlights include characteristics of a deductive object-oriented system, database and data modeling language, implementation, and user interface. (Contains 24…

  7. Rule-Based vs. Behavior-Based Self-Deployment for Mobile Wireless Sensor Networks

    Urdiales, Cristina; Aguilera, Francisco; González-Parada, Eva; Cano-García, Jose; Sandoval, Francisco


    In mobile wireless sensor networks (MWSN), nodes are allowed to move autonomously for deployment. This process is meant: (i) to achieve good coverage; and (ii) to distribute the communication load as homogeneously as possible. Rather than optimizing deployment, reactive algorithms are based on a set of rules or behaviors, so nodes can determine when to move. This paper presents an experimental evaluation of both reactive deployment approaches: rule-based and behavior-based ones. Specifically, we compare a backbone dispersion algorithm with a social potential fields algorithm. Most tests are done under simulation for a large number of nodes in environments with and without obstacles. Results are validated using a small robot network in the real world. Our results show that behavior-based deployment tends to provide better coverage and communication balance, especially for a large number of nodes in areas with obstacles. PMID:27399709

  8. Domain-based Teaching Strategy for Intelligent Tutoring System Based on Generic Rules

    Kseibat, Dawod; Mansour, Ali; Adjei, Osei; Phillips, Paul

    In this paper we present a framework for selecting the proper instructional strategy for a given teaching material based on its attributes. The new approach is based on a flexible design by means of generic rules. The framework was adapted in an Intelligent Tutoring System to teach the Modern Standard Arabic language to adult English-speaking learners with no prior knowledge of Arabic required.

  9. Evaluation of a rule base for decision making in general practice.

    Essex, B; Healy, M


    BACKGROUND. Decision making in general practice relies heavily on judgmental expertise. It should be possible to codify this expertise into rules and principles. AIM. A study was undertaken to evaluate the effectiveness of rules from a rule base designed to improve students' and trainees' management decisions relating to patients seen in general practice. METHOD. The rule base was developed after studying decisions about and management of thousands of patients seen in one general practice over an eight year period. Vignettes were presented to 93 fourth year medical students and 179 general practitioner trainees. They recorded their perception and management of each case before and after being presented with a selection of relevant rules. Participants also commented on their level of agreement with each of the rules provided with the vignettes. A panel of five independent assessors then rated the participants' perception and management of each case, before and after seeing the rules, as good, acceptable or poor. RESULTS. Exposure to a few selected rules of thumb improved the problem perception and management decisions of both undergraduates and trainees. The degree of improvement was not related to previous experience or to the stated level of agreement with the proposed rules. The assessors identified difficulties students and trainees experienced in changing their perceptions and management decisions when the rules suggested options they had not considered. CONCLUSION. The rules developed to improve decision making skills in general practice are effective when used with vignettes. The next phase is to transform the rule base into an expert system to train students and doctors to acquire decision making skills. It could also be used to provide decision support when confronted with difficult management decisions in general practice. PMID:8204334

  10. Quadrupolar metrics

    Quevedo, Hernando


    We review the problem of describing the gravitational field of compact stars in general relativity. We focus on the deviations from spherical symmetry which are expected to be due to rotation and to the natural deformations of mass distributions. We assume that the relativistic quadrupole moment takes into account these deviations, and consider the class of axisymmetric static and stationary quadrupolar metrics which satisfy Einstein's equations in empty space and in the presence of matter represented by a perfect fluid. We formulate the physical conditions that must be satisfied for a particular spacetime metric to describe the gravitational field of compact stars. We present a brief review of the main static and axisymmetric exact solutions of Einstein's vacuum equations, satisfying all the physical conditions. We discuss how to derive particular stationary and axisymmetric solutions with quadrupolar properties by using the solution generating techniques which correspond either to Lie symmetries and B\\"acku...
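
    For orientation, one standard form of the static axisymmetric vacuum (Weyl) line element that such quadrupolar solutions generalize can be written as follows (conventions for signature and potentials vary across the literature; this is only a reference form, not a result of the review):

      ds^2 = -e^{2\psi(\rho,z)}\,dt^2
             + e^{-2\psi(\rho,z)}\left[ e^{2\gamma(\rho,z)}\bigl(d\rho^2 + dz^2\bigr) + \rho^2\,d\varphi^2 \right],
      \qquad
      \psi_{,\rho\rho} + \frac{1}{\rho}\,\psi_{,\rho} + \psi_{,zz} = 0,

    where, in vacuum, \psi satisfies the flat-space Laplace equation shown on the right and \gamma follows from \psi by quadrature; quadrupolar deformations are typically obtained by keeping the quadrupole term in the multipole expansion of \psi.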

  11. Rough set based decision rule generation to find behavioural patterns of customers



    Rough sets help in finding significant attributes of large data sets and generating decision rules for classifying new instances. Though multiple regression analysis, discriminant analysis, logit analysis and several other techniques can be used for predicting results, they also consider insignificant information during processing, which may lead to false positives and false negatives. In this study, we propose a rough set based decision rule generation framework to find reducts and to generate decision rules for predicting the decision class. We conducted experiments on data from a Portuguese banking institution. With the proposed method, the dimensionality of the data is reduced and decision rules are generated which predict the deposit behaviour of customers with 90% accuracy.
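
    As a toy illustration of the rule-generation step, the sketch below groups objects into condition-attribute equivalence classes and emits a certain rule for every consistent class (the attribute names are invented and this is not the reduct computation or the banking data used in the study):

      # Generic sketch of rough-set "certain" decision rule generation from a
      # small decision table.
      from collections import defaultdict

      table = [
          {"job": "admin",   "housing": "yes", "deposit": "no"},
          {"job": "admin",   "housing": "yes", "deposit": "no"},
          {"job": "student", "housing": "no",  "deposit": "yes"},
          {"job": "retired", "housing": "no",  "deposit": "yes"},
          {"job": "retired", "housing": "no",  "deposit": "no"},
      ]
      conditions, decision = ["job", "housing"], "deposit"

      # group objects into equivalence classes by their condition-attribute values
      blocks = defaultdict(list)
      for row in table:
          blocks[tuple(row[a] for a in conditions)].append(row[decision])

      # a consistent block lies in a lower approximation and yields a certain rule
      for values, outcomes in blocks.items():
          if len(set(outcomes)) == 1:
              lhs = " AND ".join(f"{a}={v}" for a, v in zip(conditions, values))
              print(f"IF {lhs} THEN {decision}={outcomes[0]}")
          else:
              print(f"# {values}: inconsistent block, only possible rules can be formed")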

  12. Combining Hand-crafted Rules and Unsupervised Learning in Constraint-based Morphological Disambiguation

    Oflazer, K; Oflazer, Kemal; Tur, Gokhan


    This paper presents a constraint-based morphological disambiguation approach that is applicable to languages with complex morphology--specifically agglutinative languages with productive inflectional and derivational morphological phenomena. In certain respects, our approach has been motivated by Brill's recent work, but with the observation that his transformational approach is not directly applicable to languages like Turkish. Our system combines corpus-independent hand-crafted constraint rules, constraint rules that are learned via unsupervised learning from a training corpus, and additional statistical information from the corpus to be morphologically disambiguated. The hand-crafted rules are linguistically motivated and tuned to improve precision without sacrificing recall. The unsupervised learning process produces two sets of rules: (i) choose rules, which select morphological parses of a lexical item satisfying a constraint, effectively discarding other parses, and (ii) delete rules, which delete parses sati...

  13. Recommendation System Based On Association Rules For Distributed E-Learning Management Systems

    Mihai, Gabroveanu


    Traditional Learning Management Systems are installed on a single server where learning materials and user data are kept. To increase performance, the Learning Management System can be installed on multiple servers; learning materials and user data can be distributed across these servers, obtaining a Distributed Learning Management System. In this paper, the prototype of a recommendation system based on association rules for a Distributed Learning Management System is proposed. Information from LMS databases is analyzed using distributed data mining algorithms in order to extract the association rules. The extracted rules are then used as inference rules to provide personalized recommendations. The quality of the provided recommendations is improved because the rules used to make the inferences are more accurate, since these rules aggregate knowledge from all e-Learning systems included in the Distributed Learning Management System.
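
    A minimal sketch of the inference step, assuming the distributed mining phase has already produced rules of the form antecedent -> consequent with a confidence value (the rule set and item names below are invented):

      # Use mined association rules as inference rules for recommendations.
      rules = [
          # (antecedent item set, recommended item, confidence)
          ({"intro_java", "oop_basics"}, "design_patterns", 0.82),
          ({"intro_java"},               "collections",     0.75),
          ({"sql_basics"},               "normalization",   0.64),
      ]

      def recommend(completed, rules, top_n=3):
          """Return consequents of rules whose antecedents the learner already covers."""
          hits = [(conf, rhs) for lhs, rhs, conf in rules
                  if lhs <= completed and rhs not in completed]
          return [rhs for conf, rhs in sorted(hits, reverse=True)[:top_n]]

      print(recommend({"intro_java", "oop_basics"}, rules))
      # -> ['design_patterns', 'collections']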

  14. A Web-Based Graphical Food Frequency Assessment System: Design, Development and Usability Metrics.

    Franco, Rodrigo Zenun; Alawadhi, Balqees; Fallaize, Rosalind; Lovegrove, Julie A; Hwang, Faustina


    Food frequency questionnaires (FFQs) are well established in the nutrition field, but there remain important questions around how to develop online tools in a way that can facilitate wider uptake. Also, FFQ user acceptance and evaluation have not been investigated extensively. This paper presents a Web-based graphical food frequency assessment system that addresses challenges of reproducibility, scalability, mobile friendliness, security, and usability and also presents the utilization metrics and user feedback from a deployment study. The application design employs a single-page application Web architecture with back-end services (database, authentication, and authorization) provided by Google Firebase's free plan. Its design and responsiveness take advantage of the Bootstrap framework. The FFQ was deployed in Kuwait as part of the EatWellQ8 study during 2016. The EatWellQ8 FFQ contains 146 food items (including drinks). Participants were recruited in Kuwait without financial incentive. Completion time was based on browser timestamps and usability was measured using the System Usability Scale (SUS), scoring between 0 and 100. Products with a SUS higher than 70 are considered to be good. A total of 235 participants created accounts in the system, and 163 completed the FFQ. Of those 163 participants, 142 reported their gender (93 female, 49 male) and 144 reported their date of birth (mean age of 35 years, range from 18-65 years). The mean completion time for all FFQs (n=163), excluding periods of interruption, was 14.2 minutes (95% CI 13.3-15.1 minutes). Female participants (n=93) completed in 14.1 minutes (95% CI 12.9-15.3 minutes) and male participants (n=49) completed in 14.3 minutes (95% CI 12.6-15.9 minutes). Participants using laptops or desktops (n=69) completed the FFQ in an average of 13.9 minutes (95% CI 12.6-15.1 minutes) and participants using smartphones or tablets (n=91) completed in an average of 14.5 minutes (95% CI 13.2-15.8 minutes). The median SUS
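
    Since the record reports usability via the System Usability Scale, here is a short sketch of the standard SUS scoring (ten 1-5 Likert items; odd items contribute the response minus 1, even items contribute 5 minus the response, and the sum is scaled by 2.5); the response vector below is invented:

      def sus_score(responses):
          """Standard SUS scoring for ten 1-5 Likert responses; returns 0-100."""
          assert len(responses) == 10 and all(1 <= r <= 5 for r in responses)
          total = sum((r - 1) if i % 2 == 0 else (5 - r)   # i=0 is item 1 (an odd item)
                      for i, r in enumerate(responses))
          return total * 2.5

      print(sus_score([4, 2, 5, 1, 4, 2, 5, 2, 4, 1]))  # 85.0 for this made-up answer set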

  15. Mining Interesting Positive and Negative Association Rule Based on Improved Genetic Algorithm (MIPNAR_GA

    Nikky Suryawanshi Rai


    Association rule mining is a very efficient technique for finding strong relations between correlated data. The correlation of data enables a meaningful extraction process. For the mining of positive and negative rules, a variety of algorithms are used, such as the Apriori algorithm and tree-based algorithms. A number of these algorithms perform well but produce a large number of negative association rules and also suffer from the multi-scan problem. The idea of this paper is to eliminate these problems and reduce the large number of negative rules. Hence we propose an improved approach to mine interesting positive and negative rules based on genetic and MLMS algorithms. In this method we use a multi-level multiple support of the data table as 0 and 1. The divided process reduces the scanning time of the database. The proposed algorithm is a combination of the MLMS and genetic algorithms. This paper proposes a new algorithm (MIPNAR_GA) for mining interesting positive and negative rules from frequent and infrequent pattern sets. The algorithm is accomplished in three phases: (a) extract frequent and infrequent pattern sets by using the Apriori method; (b) efficiently generate positive and negative rules; (c) prune redundant rules by applying interestingness measures. The process of rule optimization is performed by a genetic algorithm, and for evaluation the algorithm was run on real-world datasets such as heart disease data and some standard datasets from the UCI machine learning repository.

  16. 3D Air Quality and the Clean Air Interstate Rule: Lagrangian Sampling of CMAQ Model Results to Aid Regional Accountability Metrics

    Fairlie, T. D.; Szykman, Jim; Pierce, Robert B.; Gilliland, A. B.; Engel-Cox, Jill; Weber, Stephanie; Kittaka, Chieko; Al-Saadi, Jassim A.; Scheffe, Rich; Dimmick, Fred


    The Clean Air Interstate Rule (CAIR) is expected to reduce transport of air pollutants (e.g. fine sulfate particles) in nonattainment areas in the Eastern United States. CAIR highlights the need for an integrated air quality observational and modeling system to understand sulfate as it moves in multiple dimensions, both spatially and temporally. Here, we demonstrate how results from an air quality model can be combined with a 3D monitoring network to provide decision makers with a tool to help quantify the impact of CAIR reductions in SO2 emissions on regional transport contributions to sulfate concentrations at surface monitors in the Baltimore, MD area, and help improve decision making for strategic implementation plans (SIPs). We sample results from the Community Multiscale Air Quality (CMAQ) model using ensemble back trajectories computed with the NASA Langley Research Center trajectory model to provide Lagrangian time series and vertical profile information that can be compared with NASA satellite (MODIS), EPA surface, and lidar measurements. Results are used to assess the regional transport contribution to surface SO4 measurements in the Baltimore MSA, and to characterize the dominant source regions for low, medium, and high SO4 episodes.

  17. Likelihood-based scoring rules for comparing density forecasts in tails

    Diks, C.; Panchenko, V.; van Dijk, D.


    We propose new scoring rules based on conditional and censored likelihood for assessing the predictive accuracy of competing density forecasts over a specific region of interest, such as the left tail in financial risk management. These scoring rules can be interpreted in terms of Kullback-Leibler d
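
    A hedged sketch of the general idea for a left-tail region of interest A = (-inf, r], using a normal density forecast; the precise definitions and threshold weighting used in the cited paper may differ in detail:

      import numpy as np
      from scipy.stats import norm

      def tail_scores(y, r, mu=0.0, sigma=1.0):
          """Conditional and censored likelihood scores of a N(mu, sigma) forecast at y."""
          f = norm(loc=mu, scale=sigma)
          in_tail = float(y <= r)
          p_tail = f.cdf(r)                     # forecast probability of the region A
          # conditional likelihood: density renormalized on A, scored only if y falls in A
          s_cl = in_tail * (f.logpdf(y) - np.log(p_tail))
          # censored likelihood: full density inside A, censored probability mass outside A
          s_csl = in_tail * f.logpdf(y) + (1.0 - in_tail) * np.log(1.0 - p_tail)
          return s_cl, s_csl

      print(tail_scores(y=-2.3, r=-1.64))       # observation falling in the 5% left tail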

  18. Applications of fuzzy sets to rule-based expert system development

    Lea, Robert N.


    Problems of implementing rule-based expert systems using fuzzy sets are considered. A fuzzy logic software development shell is used that allows inclusion of both crisp and fuzzy rules in decision making and process control problems. Results are given that compare this type of expert system to a human expert in some specific applications. Advantages and disadvantages of such systems are discussed.

  19. Causal association rule mining methods based on fuzzy state description

    Liang Kaijian; Liang Quan; Yang Bingru


    Aiming at research that uses new knowledge to develop a knowledge system with dynamic accordance, and against the background of using a fuzzy language field and fuzzy language value structure as the description framework, a generalized cell automaton that can comprehensively process fuzzy indeterminacy and random indeterminacy, together with a generalized inductive logic causal model, is put forward. On this basis, a new method for discovering causal association rules is provided. According to the causal information of the standard sample space and the common sample space, and by constructing its state (abnormality) relation matrix, causal association rules can be obtained using an inductive reasoning mechanism. An estimate of the algorithm's complexity is given, and its validity is proved through a case study.

  20. Model-based inflation forecasts and monetary policy rules

    Wouters, Raf; Dombrecht, Michel


    In this paper, the interaction between inflation and monetary policy rules is analysed within the framework of a dynamic general equilibrium model derived from optimising behaviour and rational expectations. Using model simulations, it is illustrated that the control of monetary policy over the inflation process is strongly dependent on the role of forward looking expectations in the price and wage setting process and on the credibility of monetary policy in the expectation formation process ...
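
    As a concrete textbook example of the kind of monetary policy rule analysed in such models, the classic Taylor rule sets the nominal policy rate from inflation and the output gap; the coefficients below are the original illustrative ones, not parameters estimated in this paper:

      # Classic Taylor rule: i = r* + pi + 0.5*(pi - pi*) + 0.5*output_gap  (all in percent)
      def taylor_rate(inflation, output_gap, real_neutral_rate=2.0, inflation_target=2.0):
          return (real_neutral_rate + inflation
                  + 0.5 * (inflation - inflation_target)
                  + 0.5 * output_gap)

      print(taylor_rate(inflation=3.0, output_gap=-1.0))  # 5.0 percent per year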

  1. New Metric Based Algorithm for Test Vector Generation in VLSI Testing

    M. V. Atre


    A new algorithm for test vector generation (TVG) for combinational circuits is presented for testing VLSI chips. This is done by defining a suitable metric, or distance, in the space of all input vectors, between a vector and a set of vectors. The test vectors are generated by suitably maximising this distance. Two different methods of maximising the distance are suggested. The performance of the two methods for different circuits is presented and compared with the random method of TVG. It was observed that method B is superior to the other two methods. Also, method A is slightly better than method R.
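
    The abstract does not spell out the metric or the two maximisation methods, so the following is only a generic sketch of the idea: grow the test set by greedily adding the candidate input vector that maximises its minimum Hamming distance to the vectors already chosen.

      import random

      def hamming(u, v):
          return sum(a != b for a, b in zip(u, v))

      def distance_to_set(v, vectors):
          return min(hamming(v, w) for w in vectors) if vectors else len(v)

      def generate_tests(n_inputs, n_vectors, n_candidates=200, seed=0):
          rng = random.Random(seed)
          tests = []
          for _ in range(n_vectors):
              candidates = [[rng.randint(0, 1) for _ in range(n_inputs)]
                            for _ in range(n_candidates)]
              # keep the candidate farthest (in min Hamming distance) from the chosen set
              tests.append(max(candidates, key=lambda v: distance_to_set(v, tests)))
          return tests

      for t in generate_tests(n_inputs=8, n_vectors=5):
          print(t)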

  2. $C^3$-index: A PageRank based multi-faceted metric for authors' performance measurement

    Pradhan, Dinesh; Paul, Partha Sarathi; Maheswari, Umesh; Nandi, Subrata; Chakraborty, Tanmoy


    Ranking scientific authors is an important but challenging task, mostly due to the dynamic nature of the evolving scientific publications. The basic indicators of an author's productivity and impact are still the number of publications and the citation count (leading to the popular metrics such as h-index, g-index etc.). H-index and its popular variants are mostly effective in ranking highly-cited authors, thus fail to resolve ties while ranking medium-cited and low-cited authors who are majo...
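
    For reference, a small sketch of the PageRank ingredient by power iteration on a toy directed citation graph; this only illustrates the PageRank component, not the full multi-faceted $C^3$ construction:

      import numpy as np

      def pagerank(adj, d=0.85, tol=1e-10):
          """adj[i, j] = 1 if node i cites node j; returns the PageRank vector."""
          n = adj.shape[0]
          out_deg = adj.sum(axis=1, keepdims=True)
          # dangling nodes distribute their weight uniformly
          trans = np.where(out_deg > 0, adj / np.maximum(out_deg, 1), 1.0 / n)
          r = np.full(n, 1.0 / n)
          while True:
              r_new = (1 - d) / n + d * trans.T @ r
              if np.abs(r_new - r).sum() < tol:
                  return r_new
              r = r_new

      A = np.array([[0, 1, 1, 0],
                    [0, 0, 1, 0],
                    [1, 0, 0, 0],
                    [0, 0, 1, 0]], dtype=float)
      print(pagerank(A).round(3))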

  3. Measuring distance “as the horse runs”: Cross-scale comparison of terrain-based metrics

    Buttenfield, Barbara P.; Ghandehari, M; Leyk, S; Stanislawski, Larry V.; Brantley, M E; Qiang, Yi


    Distance metrics play significant roles in spatial modeling tasks, such as flood inundation (Tucker and Hancock 2010), stream extraction (Stanislawski et al. 2015), power line routing (Kiessling et al. 2003) and analysis of surface pollutants such as nitrogen (Harms et al. 2009). Avalanche risk is based on slope, aspect, and curvature, all directly computed from distance metrics (Gutiérrez 2012). Distance metrics anchor variogram analysis, kernel estimation, and spatial interpolation (Cressie 1993). Several approaches are employed to measure distance. Planar metrics measure straight-line distance between two points (“as the crow flies”) and are simple and intuitive, but suffer from uncertainties. Planar metrics assume that Digital Elevation Model (DEM) pixels are rigid and flat, as tiny facets of ceramic tile approximating a continuous terrain surface. In truth, terrain can bend, twist and undulate within each pixel. Work with Light Detection and Ranging (lidar) data or High Resolution Topography to achieve precise measurements presents challenges, as filtering can eliminate or distort significant features (Passalacqua et al. 2015). The current availability of lidar data is far from comprehensive in developed nations, and non-existent in many rural and undeveloped regions. Notwithstanding computational advances, distance estimation on DEMs has never been systematically assessed, due to assumptions that improvements are so small that surface adjustment is unwarranted. For individual pixels inaccuracies may be small, but additive effects can propagate dramatically, especially in regional models (e.g., disaster evacuation) or global models (e.g., sea level rise) where pixels span dozens to hundreds of kilometers (Usery et al 2003). Such models are increasingly common, lending compelling reasons to understand shortcomings in the use of planar distance metrics. Researchers have studied curvature-based terrain modeling. Jenny et al. (2011) use curvature to generate
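
    A toy comparison of planar ("as the crow flies") and surface-adjusted ("as the horse runs") path length along a terrain profile; the coordinates and elevations below are invented solely to show the computation:

      import math

      profile = [  # (x, y, elevation) in metres
          (0, 0, 300), (30, 0, 340), (60, 0, 310), (90, 0, 390), (120, 0, 360),
      ]

      planar = surface = 0.0
      for (x0, y0, z0), (x1, y1, z1) in zip(profile, profile[1:]):
          dxy = math.hypot(x1 - x0, y1 - y0)
          planar += dxy                            # "as the crow flies"
          surface += math.hypot(dxy, z1 - z0)      # "as the horse runs"

      print(f"planar  = {planar:.1f} m")
      print(f"surface = {surface:.1f} m  (+{100 * (surface / planar - 1):.1f}%)")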

  4. Feedback for reinforcement learning based brain-machine interfaces using confidence metrics.

    Prins, Noeline W; Sanchez, Justin C; Prasad, Abhishek


    For brain-machine interfaces (BMI) to be used in activities of daily living by paralyzed individuals, the BMI should be as autonomous as possible. One of the challenges is how the feedback is extracted and utilized in the BMI. Our long-term goal is to create autonomous BMIs that can utilize an evaluative feedback from the brain to update the decoding algorithm and use it intelligently in order to adapt the decoder. In this study, we show how to extract the necessary evaluative feedback from a biologically realistic (synthetic) source, use both the quantity and the quality of the feedback, and how that feedback information can be incorporated into a reinforcement learning (RL) controller architecture to maximize its performance. Motivated by the perception-action-reward cycle (PARC) in the brain which links reward for cognitive decision making and goal-directed behavior, we used a reward-based RL architecture named Actor-Critic RL as the model. Instead of using an error signal towards building an autonomous BMI, we envision to use a reward signal from the nucleus accumbens (NAcc) which plays a key role in the linking of reward to motor behaviors. To deal with the complexity and non-stationarity of biological reward signals, we used a confidence metric which was used to indicate the degree of feedback accuracy. This confidence was added to the Actor's weight update equation in the RL controller architecture. If the confidence was high (>0.2), the BMI decoder used this feedback to update its parameters. However, when the confidence was low, the BMI decoder ignored the feedback and did not update its parameters. The range between high confidence and low confidence was termed as the 'ambiguous' region. When the feedback was within this region, the BMI decoder updated its weight at a lower rate than when fully confident, which was decided by the confidence. We used two biologically realistic models to generate synthetic data for MI (Izhikevich model) and NAcc (Humphries
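
    A schematic sketch of the confidence-gated weight update described above: full update when confident, a confidence-scaled update in the ambiguous region, and no update when confidence is low (the thresholds and the update rule itself are simplified stand-ins, not the authors' Actor-Critic implementation):

      import numpy as np

      def actor_update(weights, grad, reward_feedback, confidence,
                       lr=0.01, low=0.2, high=0.8):
          if confidence < low:
              return weights                                  # ignore unreliable feedback
          scale = 1.0 if confidence >= high else confidence   # "ambiguous" region
          return weights + lr * scale * reward_feedback * grad

      w = np.zeros(4)
      w = actor_update(w, grad=np.array([0.5, -0.2, 0.1, 0.0]),
                       reward_feedback=+1.0, confidence=0.45)
      print(w)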

  5. Feedback for reinforcement learning based brain-machine interfaces using confidence metrics

    Prins, Noeline W.; Sanchez, Justin C.; Prasad, Abhishek


    Objective. For brain-machine interfaces (BMI) to be used in activities of daily living by paralyzed individuals, the BMI should be as autonomous as possible. One of the challenges is how the feedback is extracted and utilized in the BMI. Our long-term goal is to create autonomous BMIs that can utilize an evaluative feedback from the brain to update the decoding algorithm and use it intelligently in order to adapt the decoder. In this study, we show how to extract the necessary evaluative feedback from a biologically realistic (synthetic) source, use both the quantity and the quality of the feedback, and how that feedback information can be incorporated into a reinforcement learning (RL) controller architecture to maximize its performance. Approach. Motivated by the perception-action-reward cycle (PARC) in the brain which links reward for cognitive decision making and goal-directed behavior, we used a reward-based RL architecture named Actor-Critic RL as the model. Instead of using an error signal towards building an autonomous BMI, we envision to use a reward signal from the nucleus accumbens (NAcc) which plays a key role in the linking of reward to motor behaviors. To deal with the complexity and non-stationarity of biological reward signals, we used a confidence metric which was used to indicate the degree of feedback accuracy. This confidence was added to the Actor’s weight update equation in the RL controller architecture. If the confidence was high (>0.2), the BMI decoder used this feedback to update its parameters. However, when the confidence was low, the BMI decoder ignored the feedback and did not update its parameters. The range between high confidence and low confidence was termed as the ‘ambiguous’ region. When the feedback was within this region, the BMI decoder updated its weight at a lower rate than when fully confident, which was decided by the confidence. We used two biologically realistic models to generate synthetic data for MI (Izhikevich

  6. Concurrence of rule- and similarity-based mechanisms in artificial grammar learning.

    Opitz, Bertram; Hofmann, Juliane


    A current theoretical debate regards whether rule-based or similarity-based learning prevails during artificial grammar learning (AGL). Although the majority of findings are consistent with a similarity-based account of AGL it has been argued that these results were obtained only after limited exposure to study exemplars, and performance on subsequent grammaticality judgment tests has often been barely above chance level. In three experiments the conditions were investigated under which rule- and similarity-based learning could be applied. Participants were exposed to exemplars of an artificial grammar under different (implicit and explicit) learning instructions. The analysis of receiver operating characteristics (ROC) during a final grammaticality judgment test revealed that explicit but not implicit learning led to rule knowledge. It also demonstrated that this knowledge base is built up gradually while similarity knowledge governed the initial state of learning. Together these results indicate that rule- and similarity-based mechanisms concur during AGL. Moreover, it could be speculated that two different rule processes might operate in parallel; bottom-up learning via gradual rule extraction and top-down learning via rule testing. Crucially, the latter is facilitated by performance feedback that encourages explicit hypothesis testing.

  7. A structure-based distance metric for high-dimensional space exploration with multidimensional scaling.

    Lee, Jenny Hyunjung; McDonnell, Kevin T; Zelenyuk, Alla; Imre, Dan; Mueller, Klaus


    Although the euclidean distance does well in measuring data distances within high-dimensional clusters, it does poorly when it comes to gauging intercluster distances. This significantly impacts the quality of global, low-dimensional space embedding procedures such as the popular multidimensional scaling (MDS) where one can often observe nonintuitive layouts. We were inspired by the perceptual processes evoked in the method of parallel coordinates which enables users to visually aggregate the data by the patterns the polylines exhibit across the dimension axes. We call the path of such a polyline its structure and suggest a metric that captures this structure directly in high-dimensional space. This allows us to better gauge the distances of spatially distant data constellations and so achieve data aggregations in MDS plots that are more cognizant of existing high-dimensional structure similarities. Our biscale framework distinguishes far-distances from near-distances. The coarser scale uses the structural similarity metric to separate data aggregates obtained by prior classification or clustering, while the finer scale employs the appropriate euclidean distance.

  8. Comparative Analysis and Survey of Ant Colony Optimization based Rule Miners

    Zulfiqar Ali


    In this research study, we analyze the performance of bio-inspired classification approaches by selecting Ant-Miners (Ant-Miner, cAnt_Miner, cAnt_Miner2 and cAnt_MinerPB) for the discovery of classification rules, in terms of accuracy, terms per rule, number of rules, running time and model size discovered by the corresponding rule mining algorithm. Classification rule discovery is still a challenging and emerging research problem in the field of data mining and knowledge discovery. Rule-based classification has become a cutting-edge research area due to its importance and popular application areas such as banking, market basket analysis, credit card fraud detection, customer behaviour, stock market prediction and protein sequence analysis. There are various approaches proposed for the discovery of classification rules, such as Artificial Neural Networks, Genetic Algorithms, Evolutionary Programming, SVM and Swarm Intelligence. This research study is focused on classification rule discovery by Ant Colony Optimization. For the performance analysis, the Myra tool is used for experiments on 18 public datasets (available in the UCI repository). Datasets are selected with varying numbers of instances, attributes and classes. This research paper also provides a focused survey of Ant-Miners for the discovery of classification rules.

  9. A Rule-Based Language for Integrating Business Processes and Business Rules

    Pham, Tuan Anh; Le Thanh, Nhan


    Business process modeling has become a popular method for improving organizational efficiency and quality. Automatic validation of process models is one of the most valuable features of modeling tools, in the face of the increasing complexity of enterprise business processes and the richness of modeling languages. This paper proposes a formal language, Event-Condition-Action-Event (ECAE), for integrating Colored Petri Nets (CPN)-based business process with a set of busines...

  10. A stochastic wind turbine wake model based on new metrics for wake characterization

    Doubrawa, Paula [Sibley School of Mechanical and Aerospace Engineering, Cornell University, Upson Hall Ithaca 14850 New York USA]; Barthelmie, Rebecca J. [Sibley School of Mechanical and Aerospace Engineering, Cornell University, Upson Hall Ithaca 14850 New York USA]; Wang, Hui [Sibley School of Mechanical and Aerospace Engineering, Cornell University, Upson Hall Ithaca 14850 New York USA]; Churchfield, Matthew J. [National Renewable Energy Laboratory, Golden 80401 Colorado USA]


    Understanding the detailed dynamics of wind turbine wakes is critical to predicting the performance and maximizing the efficiency of wind farms. This knowledge requires atmospheric data at a high spatial and temporal resolution, which are not easily obtained from direct measurements. Therefore, research is often based on numerical models, which vary in fidelity and computational cost. The simplest models produce axisymmetric wakes and are only valid beyond the near wake. Higher-fidelity results can be obtained by solving the filtered Navier-Stokes equations at a resolution that is sufficient to resolve the relevant turbulence scales. This work addresses the gap between these two extremes by proposing a stochastic model that produces an unsteady asymmetric wake. The model is developed based on a large-eddy simulation (LES) of an offshore wind farm. Because there are several ways of characterizing wakes, the first part of this work explores different approaches to defining global wake characteristics. From these, a model is developed that captures essential features of a LES-generated wake at a small fraction of the cost. The synthetic wake successfully reproduces the mean characteristics of the original LES wake, including its area and stretching patterns, and statistics of the mean azimuthal radius. The mean and standard deviation of the wake width and height are also reproduced. This preliminary study focuses on reproducing the wake shape, while future work will incorporate velocity deficit and meandering, as well as different stability scenarios.

  11. Evolving Rule-Based Systems in two Medical Domains using Genetic Programming

    Tsakonas, A.; Dounias, G.; Jantzen, Jan


    We demonstrate, compare and discuss the application of two genetic programming methodologies for the construction of rule-based systems in two medical domains: the diagnosis of Aphasia's subtypes and the classification of Pap-Smear Test examinations. The first approach consists of a scheme that combines genetic programming and heuristic hierarchical crisp rule-base construction. The second model is composed of a grammar-driven genetic programming system for the generation of fuzzy rule-based systems. Results are also compared, for their efficiency, accuracy and comprehensibility, to those of a standard entropy based machine learning approach and to those of a standard genetic programming symbolic expression approach. In the diagnosis of subtypes of Aphasia, two models for crisp rule-bases are presented. The first one discriminates between four major types and the second attempts...

  12. Techniques and Methods to Improve the Audit Process of the Distributed Informatics Systems Based on Metric System

    Marius POPA


    The paper presents how an assessment system is implemented to evaluate the quality of the IT&C audit process. Theoretical and practical issues are presented together with a brief presentation of the metrics and indicators developed in previous research. The implementation process of an indicator system is highlighted and linked to the specifications stated in international standards regarding the measurement process. Also, the effects of an assessment system on the quality of the IT&C audit process are emphasized to demonstrate the importance of such an assessment system. Improving audit process quality is an iterative process consisting of repetitive improvements based on objective measures established on analytical models of the indicators.

  13. Dry Season Evapotranspiration Dynamics over Human-Impacted Landscapes in the Southern Amazon Using the Landsat-Based METRIC Model

    Kul Khand


    Although seasonal and temporal variations in evapotranspiration (ET) in Amazonia have been studied based upon flux-tower data and coarse resolution satellite-based models, ET dynamics over human-impacted landscapes are highly uncertain in this region. In this study, we estimate ET rates from critical land cover types over highly fragmented landscapes in the southern Amazon and characterize the ET dynamics during the dry season using the METRIC (Mapping Evapotranspiration at high Resolution with Internalized Calibration) model. METRIC, a Landsat-based ET model that generates spatially continuous ET estimates at a 30 m spatial resolution and is widely used for agricultural applications, was adapted to the southern Amazon by using the NDVI indexed reference ET fraction (ETrF) approach. Compared to flux tower-based ET rates, this approach showed an improved performance on the forest ET estimation over the standard METRIC approach, with R2 improved to 0.73 from 0.70 and RMSE reduced from 0.77 mm/day to 0.35 mm/day. We used this approach integrated into the METRIC procedure to estimate ET rates from primary, regenerated, and degraded forests and pasture in Acre, Rondônia, and Mato Grosso, all located in the southern Amazon, during the dry season in 2009. The lowest ET rates occurred in Mato Grosso, the driest region. Acre and Rondônia, both located in the southwestern Amazon, had similar ET rates for all land cover types. Dry season ET rates between primary forest and regenerated forest were similar (p > 0.05) in all sites, ranging between 2.5 and 3.4 mm/day for both forest cover types in the three sites. ET rates from degraded forest in Mato Grosso were significantly lower (p < 0.05) compared to the other forest cover types, with a value of 2.03 mm/day on average. Pasture showed the lowest ET rates during the dry season at all study sites, with the dry season average ET varying from 1.7 mm/day in Mato Grosso to 2.8 mm/day in Acre.

  14. A Linguistic Evaluation of Rule-Based, Phrase-Based, and Neural MT Engines

    Burchardt Aljoscha


    In this paper, we report an analysis of the strengths and weaknesses of several Machine Translation (MT) engines implementing the three most widely used paradigms. The analysis is based on a manually built test suite that comprises a large range of linguistic phenomena. Two main observations are, on the one hand, the striking improvement of a commercial online system when turning from a phrase-based to a neural engine and, on the other hand, that the successful translations of neural MT systems sometimes bear a resemblance to the translations of a rule-based MT system.

  15. Design rules for lossy mode resonance based sensors.

    Del Villar, Ignacio; Hernaez, Miguel; Zamarreño, Carlos R; Sánchez, Pedro; Fernández-Valdivielso, Carlos; Arregui, Francisco J; Matias, Ignacio R


    Lossy mode resonances can be obtained in the transmission spectrum of cladding removed multimode optical fiber coated with a thin-film. The sensitivity of these devices to changes in the properties of the coating or the surrounding medium can be optimized by means of the adequate parameterization of the coating refractive index, the coating thickness, and the surrounding medium refractive index. Some basic rules of design, which enable the selection of the best parameters for each specific sensing application, are indicated in this work.

  16. The diagnostic rules of peripheral lung cancer preliminary study based on data mining technique

    Yongqian Qiang; Youmin Guo; Xue Li; Qiuping Wang; Hao Chen; Duwu Cui


    Objective: To discuss the clinical and imaging diagnostic rules of peripheral lung cancer using data mining techniques, to explore new ideas in the diagnosis of peripheral lung cancer, and to obtain early-stage technology and knowledge support for computer-aided detection (CAD). Methods: 58 cases of peripheral lung cancer confirmed by clinical pathology were collected. The data were imported into the database after the clinical and CT findings attributes were standardized and identified. The data were studied comparatively based on Association Rules (AR) from the knowledge discovery process and on the Rough Set (RS) reduction algorithm and Genetic Algorithm (GA) of the generic data analysis tool ROSETTA, respectively. Results: The genetic classification algorithm of ROSETTA generates about 5 000 diagnosis rules. The RS reduction algorithm (Johnson's algorithm) generates 51 diagnosis rules and the AR algorithm generates 123 diagnosis rules. The three data mining methods basically consider gender, age, cough, location, lobulation sign, shape and ground-glass density attributes as the main basis for the diagnosis of peripheral lung cancer. Conclusion: The diagnosis rules for peripheral lung cancer obtained with the three data mining technologies are the same as clinical diagnostic rules, and these rules can also be used to build the knowledge base of an expert system. This study demonstrates the potential value of data mining technology in clinical imaging diagnosis and differential diagnosis.

  17. Speech enhancement based on multitaper spectrum and psychoacoustical weighting rule

    WU Hongwei; WU Zhenyang; ZHAO Li


    The multitaper spectrum has lower variance than the traditional periodogram. The noise spectrum and the noise to noisy signal spectrum ratio (NNSR) were estimated from the multitaper spectrum of the noisy signal; the pre-enhanced speech for calculating the noise masking threshold was obtained by the spectral amplitude subtraction method, whose gain is a function of the NNSR; the final enhanced speech was obtained by suppressing the Fourier spectrum of the noisy speech with a psychoacoustical weighting rule incorporating the noise masking threshold. Because of the low-variance property of the multitaper spectrum, a modified offset formula was proposed to calculate the noise masking threshold; thus the reconstructed speech with this modification shows an improvement in MBSD (Modified Bark Spectral Distortion). When a maximum limit of less than one is further imposed on the psychoacoustical weighting rule, the higher the input SNR (> 0 dB), the greater the improvement in segmental SNR and overall SNR. Informal listening tests show that there is little speech distortion in the enhanced speech processed by the proposed method, and the background noise is greatly reduced and free of musical noise.

  18. An Adaptive Steganographic Method in Frequency Domain Based on Statistical Metrics of Image

    Seyyed Amin Seyyedi


    Steganography is a branch of information hiding. The tradeoff between hiding payload and image quality is a major challenge for digital image steganographic schemes. An adaptive steganographic method for embedding a secret message into gray-scale images is proposed. Before embedding the secret message, the cover image is transformed into the frequency domain by an integer wavelet transform. The middle frequency band of the cover image is partitioned into 4×4 non-overlapping blocks. The blocks are classified by deviation and entropy metrics into three categories: smooth, edge, and texture regions. The number of bits which can be embedded in a block is defined by the block features. Moreover, the RC4 encryption method is used to increase secrecy protection. Experimental results demonstrate the feasibility of the proposed method. Statistical tests were conducted to collect related data and verify the security of the method.
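
    A sketch of the block-classification step: 4x4 blocks of a transformed band are labelled smooth, edge or texture from their standard deviation and Shannon entropy (the thresholds and the embedding capacities implied in the comments are illustrative, not the calibrated values of the paper):

      import numpy as np

      def shannon_entropy(block):
          values, counts = np.unique(block, return_counts=True)
          p = counts / counts.sum()
          return float(-(p * np.log2(p)).sum())

      def classify_blocks(band, t_dev=4.0, t_ent=2.0):
          h, w = band.shape
          labels = {}
          for i in range(0, h - h % 4, 4):
              for j in range(0, w - w % 4, 4):
                  block = band[i:i + 4, j:j + 4]
                  dev, ent = block.std(), shannon_entropy(block)
                  if dev < t_dev and ent < t_ent:
                      labels[(i, j)] = "smooth"     # embed the fewest bits here
                  elif dev >= t_dev and ent >= t_ent:
                      labels[(i, j)] = "texture"    # embed the most bits here
                  else:
                      labels[(i, j)] = "edge"
          return labels

      band = np.random.randint(-8, 9, size=(16, 16))
      print(classify_blocks(band))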

  19. Rule-Based Analytic Asset Management for Space Exploration Systems (RAMSES) Project

    National Aeronautics and Space Administration — Payload Systems Inc. (PSI) and the Massachusetts Institute of Technology (MIT) were selected to jointly develop the Rule-based Analytic Asset Management for Space...

  20. A Rule-Based Modeling for the Description of Flexible and Self-healing Business Processes

    Boukhebouze, Mohamed; Amghar, Youssef; Benharkat, Aïcha-Nabila; Maamar, Zakaria

    In this paper we discuss the importance of ensuring that business processes are robust and agile at the same time. To this end, we consider reviewing the way business processes are managed. For instance, we consider offering a flexible way to model processes so that changes in regulations are handled through self-healing mechanisms. These changes may raise exceptions at run-time if not properly reflected in these processes. To this end we propose a new rule-based model that adopts ECA rules and is built upon formal tools. The business logic of a process can be summarized by a set of rules that implement an organization’s policies. Each business rule is formalized using our ECAPE formalism (Event-Condition-Action-Post condition-Post Event). This formalism allows translating a process into a graph of rules that is analyzed in terms of reliability and flexibility.

  1. Modeling for (physical) biologists: an introduction to the rule-based approach.

    Chylek, Lily A; Harris, Leonard A; Faeder, James R; Hlavacek, William S


    Models that capture the chemical kinetics of cellular regulatory networks can be specified in terms of rules for biomolecular interactions. A rule defines a generalized reaction, meaning a reaction that permits multiple reactants, each capable of participating in a characteristic transformation and each possessing certain, specified properties, which may be local, such as the state of a particular site or domain of a protein. In other words, a rule defines a transformation and the properties that reactants must possess to participate in the transformation. A rule also provides a rate law. A rule-based approach to modeling enables consideration of mechanistic details at the level of functional sites of biomolecules and provides a facile and visual means for constructing computational models, which can be analyzed to study how system-level behaviors emerge from component interactions.

  2. Modeling for (physical) biologists: an introduction to the rule-based approach

    Chylek, Lily A.; Harris, Leonard A.; Faeder, James R.; Hlavacek, William S.


    Models that capture the chemical kinetics of cellular regulatory networks can be specified in terms of rules for biomolecular interactions. A rule defines a generalized reaction, meaning a reaction that permits multiple reactants, each capable of participating in a characteristic transformation and each possessing certain, specified properties, which may be local, such as the state of a particular site or domain of a protein. In other words, a rule defines a transformation and the properties that reactants must possess to participate in the transformation. A rule also provides a rate law. A rule-based approach to modeling enables consideration of mechanistic details at the level of functional sites of biomolecules and provides a facile and visual means for constructing computational models, which can be analyzed to study how system-level behaviors emerge from component interactions.

  3. A Fuzzy Rule-Based Expert System for Evaluating Intellectual Capital

    Mohammad Hossein Fazel Zarandi


    A fuzzy rule-based expert system is developed for evaluating intellectual capital. A fuzzy linguistic approach assists managers to understand and evaluate the level of each intellectual capital item. The proposed fuzzy rule-based expert system applies fuzzy linguistic variables to express the level of qualitative evaluation and criteria of experts. Feasibility of the proposed model is demonstrated by the result of intellectual capital performance evaluation for a sample company.

  4. Genealogical Information Search by Using Parent Bidirectional Breadth Algorithm and Rule Based Relationship

    Nuanmeesri, Sumitra; Meesad, Payung


    Genealogical information is one of the best historical resources for cultural study and cultural heritage. Genealogical research generally presents family information and depicts it as a tree diagram. This paper presents the Parent Bidirectional Breadth Algorithm (PBBA) to find the consanguine relationship between two persons. In addition, the paper utilizes a rule-based system in order to identify consanguine relationships. The study reveals that PBBA solves the genealogical information search problem quickly and that the rule-based relationship identification provides further benefits in blood relationship identification.
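
    A generic sketch of a bidirectional breadth-first search over a family graph that finds a connecting path between two persons; the PBBA specifics (parent-directed expansion) and the rule base that names the resulting relationship are not reproduced here, and the family data is invented:

      from collections import deque

      def bidirectional_bfs(graph, start, goal):
          """Return a path from start to goal, expanding one BFS level per side."""
          if start == goal:
              return [start]
          frontiers = ({start: None}, {goal: None})     # visited node -> predecessor
          queues = (deque([start]), deque([goal]))
          while queues[0] and queues[1]:
              for side in (0, 1):
                  for _ in range(len(queues[side])):    # expand one level on this side
                      node = queues[side].popleft()
                      for nxt in graph.get(node, []):
                          if nxt in frontiers[side]:
                              continue
                          frontiers[side][nxt] = node
                          if nxt in frontiers[1 - side]:        # the two searches met
                              return _join(frontiers, nxt)
                          queues[side].append(nxt)
          return None

      def _join(frontiers, meet):
          left, right, n = [], [], meet
          while n is not None:
              left.append(n); n = frontiers[0][n]
          n = frontiers[1][meet]
          while n is not None:
              right.append(n); n = frontiers[1][n]
          return left[::-1] + right

      family = {"Ann": ["Carl"], "Bob": ["Carl"], "Carl": ["Ann", "Bob", "Dina"],
                "Dina": ["Carl", "Eve"], "Eve": ["Dina"]}
      print(bidirectional_bfs(family, "Ann", "Eve"))    # ['Ann', 'Carl', 'Dina', 'Eve']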

  5. Extract Rules by Using Rough Set and Knowledge-Based NN

    王士同; E.Scott; et al.


    In this paper, rough set theory is used to extract roughly-correct inference rules from information systems. Based on this idea, the learning algorithm ERCR is presented. In order to refine the learned roughly-correct inference rules, a knowledge-based neural network is used. The method presented here effectively combines the advantages of rough set theory and neural networks.

  6. RSPOP: rough set-based pseudo outer-product fuzzy rule identification algorithm.

    Ang, Kai Keng; Quek, Chai


    System modeling with neuro-fuzzy systems involves two contradictory requirements: interpretability versus accuracy. The pseudo outer-product (POP) rule identification algorithm used in the family of pseudo outer-product-based fuzzy neural networks (POPFNN) suffered from an exponential increase in the number of identified fuzzy rules and computational complexity arising from high-dimensional data. This decreases the interpretability of the POPFNN in linguistic fuzzy modeling. This article proposes a novel rough set-based pseudo outer-product (RSPOP) algorithm that integrates the sound concept of knowledge reduction from rough set theory with the POP algorithm. The proposed algorithm not only performs feature selection through the reduction of attributes but also extends the reduction to rules without redundant attributes. As many possible reducts exist in a given rule set, an objective measure is developed for POPFNN to correctly identify the reducts that improve the inferred consequence. Experimental results are presented using published data sets and real-world application involving highway traffic flow prediction to evaluate the effectiveness of using the proposed algorithm to identify fuzzy rules in the POPFNN using compositional rule of inference and singleton fuzzifier (POPFNN-CRI(S)) architecture. Results showed that the proposed rough set-based pseudo outer-product algorithm reduces computational complexity, improves the interpretability of neuro-fuzzy systems by identifying significantly fewer fuzzy rules, and improves the accuracy of the POPFNN.

  7. Testing the performance of technical trading rules in the Chinese markets based on superior predictive test

    Wang, Shan; Jiang, Zhi-Qiang; Li, Sai-Ping; Zhou, Wei-Xing


    Technical trading rules have a long history of being used by practitioners in financial markets. The profitability and efficiency of technical trading rules remain controversial. In this paper, we test the performance of more than seven thousand traditional technical trading rules on the Shanghai Securities Composite Index (SSCI) from May 21, 1992 through June 30, 2013 and China Securities Index 300 (CSI 300) from April 8, 2005 through June 30, 2013 to check whether an effective trading strategy could be found by using the performance measurements based on the return and Sharpe ratio. To correct for the influence of the data-snooping effect, we adopt the Superior Predictive Ability test to evaluate if there exists a trading rule that can significantly outperform the benchmark. The result shows that for SSCI, technical trading rules offer significant profitability, while for CSI 300, this ability is lost. We further partition the SSCI into two sub-series and find that the efficiency of technical trading in sub-series, which have exactly the same spanning period as that of CSI 300, is severely weakened. By testing the trading rules on both indexes with a five-year moving window, we find that during the financial bubble from 2005 to 2007, the effectiveness of technical trading rules is greatly improved. This is consistent with the predictive ability of technical trading rules which appears when the market is less efficient.
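
    For context, one representative rule from the kind of universe tested here is a simple moving-average crossover; the sketch below computes its in-sample annualised Sharpe ratio on synthetic prices (it does not implement the Superior Predictive Ability test itself, and the parameters are arbitrary):

      import numpy as np

      def ma_crossover_sharpe(prices, fast=5, slow=20, periods_per_year=252):
          prices = np.asarray(prices, dtype=float)
          sma = lambda x, w: np.convolve(x, np.ones(w) / w, mode="valid")
          fast_ma = sma(prices, fast)[slow - fast:]       # align both averages
          slow_ma = sma(prices, slow)
          signal = (fast_ma > slow_ma).astype(float)      # 1 = long, 0 = out of the market
          returns = np.diff(np.log(prices[slow - 1:]))
          strat = signal[:-1] * returns                   # position taken at the prior close
          if strat.std() == 0:
              return 0.0
          return strat.mean() / strat.std() * np.sqrt(periods_per_year)

      rng = np.random.default_rng(0)
      prices = 100 * np.exp(np.cumsum(rng.normal(0.0003, 0.01, 750)))
      print(round(ma_crossover_sharpe(prices), 2))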

  8. Rule based Part of speech Tagger for Homoeopathy Clinical realm

    Dwivedi, Sanjay K


    A tagger is a mandatory component of most text analysis systems, as it assigns a syntactic class (e.g., noun, verb, adjective, or adverb) to every word in a sentence. In this paper, we present a simple part-of-speech tagger for the homoeopathy clinical language. The tagger exploits standard patterns for evaluating sentences; an untagged clinical corpus of 20085 words is used, from which we selected 125 sentences (2322 tokens). The problem of tagging in natural language processing is to find a way to tag every word in a text as a particular part of speech. The basic idea is to apply a set of rules to clinical sentences and to each word. Accuracy is the leading factor in evaluating any POS tagger, so the accuracy of the proposed tagger is also discussed.

  9. Haldane's Rule: Genetic Bases and Their Empirical Support.

    Delph, Lynda F; Demuth, Jeffery P


    There are few patterns in evolution that are as rigidly held as Haldane's rule (HR), which states, "When in the first generation of hybrids between 2 species, 1 sex is absent, rare, or sterile, that sex is always the heterogametic sex." Yet despite considerable attention for almost a century, questions persist as to how many independent examples exist and what is (are) the underlying genetic cause(s). Here, we review recent evidence extending HR to plants, where previously it has only been documented in animals. We also discuss recent comparative analyses that show much more variation in sex-chromosome composition than previously recognized, thus increasing the number of potential independent origins of HR dramatically. Finally, we review the standing of genetic theories proposed to explain HR in light of the new examples and new molecular understanding.

  10. Design High Efficiency-Minimum Rule Base PID Like Fuzzy Computed Torque Controller

    Alireza Khalilian


    A minimum rule base Proportional Integral Derivative (PID) Fuzzy Computed Torque Controller is presented in this research. The popularity of the PID Fuzzy Computed Torque Controller can be attributed to its robust performance in a wide range of operating conditions and partly to its functional simplicity. The tuning of the PID Fuzzy Computed Torque Controller can be formulated as an optimization task. Over the years, the use of intelligent strategies for tuning these controllers has been growing. The PID methodology has three inputs; if each input is described with seven linguistic values and each rule has three conditions, 343 rules are needed, which is too many to write by hand. In this research the PID-like fuzzy controller is constructed as a parallel structure of a PD-like fuzzy controller and a PI controller in order to obtain a minimum rule base. However, the computed torque controller works by cancelling the decoupling and nonlinear terms of the dynamic parameters of each link; it relies on the manipulator dynamic model, and this technique is highly sensitive to knowledge of all parameters of the nonlinear robot manipulator’s dynamic equation. This research aims to reduce or eliminate this problem of the computed torque controller by using minimum rule base fuzzy logic to control a flexible robot manipulator system, and the quality of process control is tested in the MATLAB/Simulink simulation environment.

  11. Strategies for adding adaptive learning mechanisms to rule-based diagnostic expert systems

    Stclair, D. C.; Sabharwal, C. L.; Bond, W. E.; Hacke, Keith


    Rule-based diagnostic expert systems can be used to perform many of the diagnostic chores necessary in today's complex space systems. These expert systems typically take a set of symptoms as input and produce diagnostic advice as output. The primary objective of such expert systems is to provide accurate and comprehensive advice which can be used to help return the space system in question to nominal operation. The development and maintenance of diagnostic expert systems is time and labor intensive since the services of both knowledge engineer(s) and domain expert(s) are required. The use of adaptive learning mechanisms to incrementally evaluate and refine rules promises to reduce both the time and labor costs associated with such systems. This paper describes the basic adaptive learning mechanisms of strengthening, weakening, generalization, discrimination, and discovery. Next, basic strategies are discussed for adding these learning mechanisms to rule-based diagnostic expert systems. These strategies support the incremental evaluation and refinement of rules in the knowledge base by comparing the set of advice given by the expert system (A) with the correct diagnosis (C). Techniques are described for selecting those rules in the knowledge base which should participate in adaptive learning. The strategies presented may be used with a wide variety of learning algorithms. Further, these strategies are applicable to a large number of rule-based diagnostic expert systems. They may be used to provide either immediate or deferred updating of the knowledge base.
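
    A minimal sketch of the strengthening/weakening idea under these strategies: each fired rule carries a numeric strength that is raised when its advice appears in the correct diagnosis set C and lowered otherwise (the rule identifiers, advice strings and update constants are invented):

      def update_strengths(strengths, fired_rules, correct, step=0.1):
          """
          strengths:   dict rule_id -> strength in [0, 1]
          fired_rules: dict rule_id -> advice produced by that rule (the set A)
          correct:     set of correct diagnoses (the set C)
          """
          for rule_id, advice in fired_rules.items():
              if advice in correct:
                  strengths[rule_id] = min(1.0, strengths[rule_id] + step)   # strengthen
              else:
                  strengths[rule_id] = max(0.0, strengths[rule_id] - step)   # weaken
          return strengths

      strengths = {"r1": 0.5, "r2": 0.5}
      fired = {"r1": "sensor fault", "r2": "software fault"}
      print(update_strengths(strengths, fired, correct={"sensor fault"}))
      # -> {'r1': 0.6, 'r2': 0.4}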

  12. A decade of robot-assisted radical prostatectomy training: Time-based metrics and qualitative grading for fellows and residents.

    Altok, Muammer; Achim, Mary F; Matin, Surena F; Pettaway, Curtis A; Chapin, Brian F; Davis, John W


    As modern urology residency and fellowship training in robot-assisted surgery evolves toward standardized curricula (didactics, dry/wet-laboratory exercises, and surgical assistance), additional tools are needed to evaluate on-console performance. At the start of our robotics program in 2006, we set up a time- and quality-based evaluation program and aim to consolidate these data into a simple set of metrics for self-evaluation. Using our index procedure of robot-assisted radical prostatectomy (RARP), we prospectively collected data on 2,215 cases over 10 years from 6 faculty surgeons and 94 trainees (43 urologic oncology fellows and 51 urology residents). The operation was divided into 11 consistent steps, and the metrics included time to completion and quality using a 6-level grading system. Time metrics were consolidated into quartiles for benchmarking. The median times for trainees to complete each step were 15% to 120% higher than those of the staff. Steps performed by trainees were carefully chosen for a high success rate, and on our Likert-like scale were graded 4 to 5 in more than 95% of cases. There were no grade 0 (very poor) cases, and grades 1 (multiple technical errors) and 2 (could not be completed but without safety issues) were rare. As a trainee progresses through a rotation, these benchmarks can assist in prioritizing the need for more attention to a basic step vs. progression to more advanced steps. Copyright © 2017 Elsevier Inc. All rights reserved.

  13. Asthma exacerbations and traffic: examining relationships using link-based traffic metrics and a comprehensive patient database.

    Lindgren, Paula; Johnson, Jean; Williams, Allan; Yawn, Barbara; Pratt, Gregory C


    The Rochester Epidemiology Project (REP) is a unique community-based medical record data linkage system that provides individual patient address, diagnosis and visit information for all hospitalizations, as well as emergency department, urgent care and outpatient clinic visits for asthma. Proximity to traffic is known to be associated with asthma exacerbations and severity. Our null hypothesis was that there is no association between residential proximity to traffic and asthma exacerbations over eleven years of REP data. Spatial coordinates of the homes of 19,915 individuals diagnosed with asthma were extracted from the REP database. Three metrics of traffic exposure at residences were calculated from link-based traffic count data. We used exploratory statistics as well as logistic and Poisson regression to examine associations between three traffic metrics at the home address and asthma exacerbations. Asthma exacerbations increased as traffic levels near the home increased. Proximity to traffic was a significant predictor of asthma exacerbations in logistic and Poisson regressions controlling for age, gender and block group poverty. Over eleven years in a comprehensive county-wide data set of asthma patients, and after controlling for demographic effects, we found evidence that living in proximity to traffic increased the risk of asthma exacerbations.

  14. A Rule-Based System Implementing a Method for Translating FOL Formulas into NL Sentences

    Mpagouli, Aikaterini; Hatzilygeroudis, Ioannis

    In this paper, we mainly present the implementation of a system that translates first order logic (FOL) formulas into natural language (NL) sentences. The motivation comes from an intelligent tutoring system teaching logic as a knowledge representation language, where it is used as a means for feedback to the students-users. FOL to NL conversion is achieved by using a rule-based approach, where we exploit the pattern matching capabilities of rules. So, the system consists of rule-based modules corresponding to the phases of our translation methodology. Facts are used in a lexicon providing lexical and grammatical information that helps in producing the NL sentences. The whole system is implemented in Jess, a java-implemented rule-based programming tool. Experimental results confirm the success of our choices.

  15. Arabic Rule-Based Named Entity Recognition Systems Progress and Challenges

    Ramzi Esmail Salah


    Rule-based approaches use human-made rules to extract Named Entities (NEs); alongside Machine Learning, this is one of the most widely used ways to extract NEs. The term Named Entity Recognition (NER) refers to the task of identifying personal names, locations, organizations and many other entities. In the Arabic language, Big Data challenges make Arabic NER develop rapidly and extract useful information from texts. The current paper sheds some light on research progress in rule-based approaches via a diagnostic comparison of linguistic resources, entity types, domains, and performance. We also highlight the challenges of processing Arabic NEs through rule-based systems. It is expected that good NER performance will benefit other modern fields such as semantic web search, question answering, machine translation, information retrieval, and abstracting systems.

  16. Interval Type-II Fuzzy Rule-Based STATCOM for Voltage Regulation in the Power System

    Ying-Yi Hong


    The static synchronous compensator (STATCOM) has recently received much attention owing to its ability to stabilize power systems and mitigate voltage variations. This paper investigates a novel interval type-II fuzzy rule-based PID (proportional-integral-derivative) controller for the STATCOM to mitigate bus voltage variations caused by large changes in load and the intermittent generation of photovoltaic (PV) arrays. The proposed interval type-II fuzzy rule base utilizes the output of the PID controller to tune the signal applied to the STATCOM. The rules involve upper and lower membership functions that ensure stable responses of the controlled system. The proposed method is implemented using the NEPLAN software package and MATLAB/Simulink with co-simulation. A six-bus system is used to show the effectiveness of the proposed method. Comparative studies show that the proposed method is superior to traditional PID and type-I fuzzy rule-based methods.

  17. Quality based enhancing the user data protection via fuzzy rule based systems in cloud environment

  17. Quality Based Enhancing the User Data Protection via Fuzzy Rule Based Systems in Cloud Environment

    R Poorva Devi


    So far in cloud computing, individual customers have accessed and consumed an enormous amount of services through the web, offered by the cloud service provider (CSP). Although the cloud offers security-as-a-service to its clients, people are still reluctant to use services from cloud vendors. Many solutions, security components and measurements have been proposed for the cloud security issue, yet only a 79.2% security outcome has been obtained by scientists, researchers and the cloud-based academic community. To overcome the problem of cloud security, the proposed model, "Quality based Enhancing the user data protection via fuzzy rule based systems in cloud environment", helps cloud clients access cloud resources through remote monitoring management (RMMM); the services currently requested and consumed by cloud users can be better analyzed with a managed service provider (MSP) rather than a traditional CSP. Normally, people try to secure their own private data by applying key management and cryptographic computations, which again leads to security problems. The proposed approach aims to provide a good-quality security target by making use of fuzzy rule-based systems (constraint and conclusion segments) in the cloud environment. Using this technique, users may obtain an efficient security outcome through the Apache CloudStack cloud simulation tool.

  18. Differential Impact of Visuospatial Working Memory on Rule-based and Information-integration Category Learning.

    Xing, Qiang; Sun, Hailong


    Previous studies have indicated that the category learning system is a mechanism with multiple processing systems, and that working memory has different effects on category learning. But how does visuospatial working memory affect perceptual category learning? As there is no definite answer to this question, we conducted three experiments. In Experiment 1, the dual-task paradigm with sequential presentation was adopted to investigate the influence of visuospatial working memory on rule-based and information-integration category learning. The results showed that visuospatial working memory interferes with rule-based but not information-integration category learning. In Experiment 2, the dual-task paradigm with simultaneous presentation was used, in which the categorization task was integrated into the visuospatial working memory task. The results indicated that visuospatial working memory affects information-integration category learning but not rule-based category learning. In Experiment 3, the dual-task paradigm with simultaneous presentation was employed, in which visuospatial working memory was integrated into the category learning task. The results revealed that visuospatial working memory interferes with both rule-based and information-integration category learning. Through these three experiments, we found that, regarding rule-based category learning, working memory load is the main mechanism by which visuospatial working memory influences the discovery of the category rules. In addition, regarding information-integration category learning, visual resources mainly operate on the category representation.

  19. Hedging Rules for Water Supply Reservoir Based on the Model of Simulation and Optimization

    Yi Ji


    This study proposes a hedging rule model composed of a two-period reservoir operation model considering damage depth and a hedging rule parameter optimization model. The former solves hedging rules based on a given period's water supply weighting factor and carryover storage target, while the latter optimization model is used to optimize the weighting factor and carryover storage target based on the hedging rules. The coupled model gives the optimal period's water supply weighting factor and carryover storage target to guide releases. The conclusions of this study are as follows: (1) the water supply weighting factor and carryover storage target have a direct impact on the three elements of the hedging rule; (2) the parameters can guide reservoirs to supply water reasonably after optimization with the simulation and optimization model; and (3) to verify the utility of the hedging rule, the Heiquan reservoir is used as a case study and a particle swarm optimization algorithm with a simulation model is adopted to optimize the parameters. The results show that the proposed hedging rule can improve the operating performance of the water supply reservoir.
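
    As an illustration of the kind of decision such a rule encodes, the following is a minimal sketch of a single-period hedging release, assuming a simple linear rationing scheme; the weighting factor w and the carryover storage target are invented illustrative parameters, not values or equations from the cited model.

```python
# Minimal sketch of a single-period hedging rule for a water-supply reservoir.
# `w` and `carryover_frac` are illustrative, not values from the cited study.

def hedged_release(storage, inflow, demand, capacity,
                   w=0.6, carryover_frac=0.5):
    """Decide the current-period release under a simple hedging rule.

    w              : weight on meeting current demand vs. protecting carryover
    carryover_frac : desired end-of-period storage as a fraction of capacity
    """
    available = storage + inflow
    target_storage = carryover_frac * capacity
    surplus = available - target_storage          # water left after reserving carryover

    if surplus >= demand:
        release = demand                          # full supply, no hedging needed
    else:
        shortfall = demand - max(surplus, 0.0)
        release = demand - (1.0 - w) * shortfall  # ration part of the shortfall

    release = max(0.0, min(release, available))   # physical limits
    end_storage = min(available - release, capacity)
    return release, end_storage


print(hedged_release(storage=30.0, inflow=10.0, demand=30.0, capacity=100.0))
# -> (18.0, 22.0): demand is only partly met in order to protect carryover storage
```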

  20. Network on Chip: a New Approach of QoS Metric Modeling Based on Calculus Theory

    Nasri, Salem


    A NoC is composed of IP (Intellectual Property) cores and switches connected to each other by communication channels. End-to-end delay (EED) communication is accomplished by the exchange of data among IP cores. Often, the structure of particular messages is not adequate for communication purposes, which leads to the concept of packet switching. In the context of NoCs, packets are composed of a header, payload, and trailer, and are divided into small pieces called flits. It is important to meet the required performance with the NoC hardware resources, and this should be specified at an early step of the system design. The main attention should be given to the choice of some network parameters such as the physical buffer size in the node. The EED and packet loss are some of the critical QoS metrics. Some real-time and multimedia applications bound these parameters and require specific hardware resources and particular management approaches in the NoC switch. A traffic contract (SLA, Service Level Ag...

  1. Performance metric optimization advocates CPFR in supply chains: A system dynamics model based study

    Balaji Janamanchi


    Background: Supply chain partners often find themselves in rather helpless positions, unable to improve their firm's performance and profitability, because their partners, although willing to share production information, do not fully collaborate in tackling customer order variations, as they do not seem to appreciate the benefits of such collaboration. Methods: We use a two-player (supplier-manufacturer) system dynamics model to assess the impact and usefulness of supply chain partner collaboration on supply chain performance measures. Results: Simulation results of supply chain metrics under varied customer order patterns, viz. base case, random normal, random uniform, random upward trend, and random downward trend, under (a) the base case, (b) independent optimization by the manufacturer, and (c) collaborative optimization by manufacturer and supplier, are obtained and contrasted to develop useful insights. Conclusions: A focus on obtaining improved inventory turns with optimization techniques provides some viable options to managers and makes a strong case for increased collaborative planning, forecasting and replenishment (CPFR) in supply chains. Despite the differences in the inventory management practices that it was contrasted with, CPFR has proven to be beneficial in a supply chain environment for all SC partners.

  2. Assessing woody vegetation trends in Sahelian drylands using MODIS based seasonal metrics

    Brandt, Martin Stefan; Hiernaux, Pierre; Rasmussen, Kjeld


    Woody plants play a major role for the resilience of drylands and in peoples' livelihoods. However, due to their scattered distribution, quantifying and monitoring woody cover over space and time is challenging. We develop a phenology driven model and train/validate MODIS (MCD43A4, 500 m) derived metrics with 178 ground observations from Niger, Senegal and Mali to estimate woody cover trends from 2000 to 2014 over the entire Sahel. The annual woody cover estimation at 500 m scale is fairly accurate with an RMSE of 4.3 (woody cover %) and r2 = 0.74. Over the 15 year period we observed an average ... management of parkland trees by the farmers. Positive changes are observed in savannas (2.5 ± 5.4) and woodland areas (3.9 ± 7.3). The major pattern of woody cover change reveals strong increases in the sparsely populated Sahel zones of eastern Senegal, western Mali and central Chad, but a decreasing trend ...

  3. Community-Based Reasoning in Games: Salience, Rule-Following, and Counterfactuals

    Cyril Hédoin


    This paper develops a game-theoretic and epistemic account of a peculiar mode of practical reasoning that sustains focal points but also more general forms of rule-following behavior which I call community-based reasoning (CBR). It emphasizes the importance of counterfactuals in strategic interactions. In particular, the existence of rules does not reduce to observable behavioral patterns but also encompasses a range of counterfactual beliefs and behaviors. This feature was already at the core of Wittgenstein's philosophical account of rule-following. On this basis, I consider the possibility that CBR may provide a rational basis for cooperation in the prisoner's dilemma.

  4. Ozone stomatal flux and O3 concentration-based metrics for Astronium graveolens Jacq., a Brazilian native forest tree species.

    Cassimiro, Jéssica C; Moura, Bárbara B; Alonso, Rocio; Meirelles, Sérgio T; Moraes, Regina M


    The current levels of surface ozone (O3) are high enough to negatively affect trees in large regions of São Paulo State, southeastern Brazil, where standards for the protection of vegetation against the adverse effects of O3 do not exist. We evaluated three O3 metrics - phytotoxic ozone dose (POD), accumulated ozone exposure over the threshold of 40 ppb h (AOT40), and the sum of all hourly average concentrations (SUM00) - for the Brazilian native tropical tree species Astronium graveolens Jacq. We used the DO3SE (Deposition of Ozone for Stomatal Exchange) model and calculated PODY for different thresholds (from 0 to 6 mmol O3 m(-2) PLA s(-1)), evaluating the model's performance through the relationship between measured and modelled conductance. The response parameters were: visible foliar injury, considered as incidence (% injured plants), severity (% injured leaves in relation to the number of leaves on injured plants), and leaf abscission. The model performance was suitable and significant (R(2) = 0.58; p < 0.001). POD0 was better correlated to incidence and leaf abscission, and SUM00 was better correlated to severity. The highest values of O3 concentration-based metrics (AOT40 and SUM00) did not coincide with those of POD0. Further investigation may improve the model and contribute to the proposition of a national standard for the protection of native species.

  5. Content-Based High-Resolution Remote Sensing Image Retrieval via Unsupervised Feature Learning and Collaborative Affinity Metric Fusion

    Yansheng Li


    With the urgent demand for automatic management of large numbers of high-resolution remote sensing images, content-based high-resolution remote sensing image retrieval (CB-HRRS-IR) has attracted much research interest. Accordingly, this paper proposes a novel high-resolution remote sensing image retrieval approach via multiple feature representation and collaborative affinity metric fusion (IRMFRCAMF). In IRMFRCAMF, we design four unsupervised convolutional neural networks with different layers to generate four types of unsupervised features from the fine level to the coarse level. In addition to these four types of unsupervised features, we also implement four traditional feature descriptors, including local binary pattern (LBP), gray level co-occurrence matrix (GLCM), maximal response 8 (MR8), and scale-invariant feature transform (SIFT). In order to fully incorporate the complementary information among multiple features of one image and the mutual information across auxiliary images in the image dataset, this paper advocates collaborative affinity metric fusion to measure the similarity between images. The performance evaluation of high-resolution remote sensing image retrieval is implemented on two public datasets, the UC Merced (UCM) dataset and the Wuhan University (WH) dataset. Large numbers of experiments show that our proposed IRMFRCAMF can significantly outperform the state-of-the-art approaches.

  6. A Web-Based Rice Plant Expert System Using Rule-Based Reasoning

    Anton Setiawan Honggowibowo


    Rice plants can be attacked by various kinds of diseases, which can often be determined from their symptoms. However, to find out the exact type of disease, an agricultural expert's opinion is needed, while the number of agricultural experts is limited and there are too many problems to be solved at the same time. This makes a system with expert-like capability necessary. Such a system must contain the knowledge of rice plant diseases and symptoms that an agricultural expert has. This research designs a web-based expert system using rule-based reasoning. The rules combine forward chaining and backward chaining inference in order to help farmers diagnose rice plant diseases. The web-based rice plant disease diagnosis expert system has the advantage of being easy to access and use. With its web-based features, it is expected that farmers can access the expert system anywhere to diagnose rice diseases.
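
    The following is a minimal sketch of the forward-chaining style of inference described above, assuming a toy knowledge base; the symptoms, intermediate facts and diagnoses are invented placeholders, not the rules of the cited system.

```python
# Toy forward-chaining diagnosis engine. The symptoms, diseases and rules
# are invented placeholders, not the knowledge base of the cited system.

RULES = [
    ({"brown_spots_on_leaves", "yellow_halo"}, "brown_spot"),
    ({"white_powder_on_leaves"}, "blast_suspected"),
    ({"blast_suspected", "neck_rot"}, "rice_blast"),
]

def forward_chain(observed_facts, rules=RULES):
    """Fire rules repeatedly until no new fact (diagnosis) can be derived."""
    facts = set(observed_facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)    # rule fires, conclusion becomes a fact
                changed = True
    return facts - set(observed_facts)   # return only the derived conclusions

print(forward_chain({"white_powder_on_leaves", "neck_rot"}))
# -> {'blast_suspected', 'rice_blast'} (set order may vary)
```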

  7. Gain ratio based fuzzy weighted association rule mining classifier for medical diagnostic interface

    N S Nithya; K Duraiswamy


    The health care environment still needs knowledge-based discovery for handling its wealth of data. Extraction of the potential causes of diseases is the most important factor in medical data mining. Fuzzy association rule mining performs better than traditional classifiers but suffers from the exponential growth of the rules produced. In the past, we proposed an information gain based fuzzy association rule mining algorithm for extracting both association rules and membership functions from medical data to reduce the number of rules. It used a ranking-based weight value to identify the potential attributes. When an attribute takes a large number of distinct values, however, the computation of the information gain value is not feasible. In this paper, an enhanced approach, called gain ratio based fuzzy weighted association rule mining, is thus proposed to handle distinct diseases and also to improve the learning time of the previous approach. Experimental results show a marginal improvement in the attribute selection process and also an improvement in classifier accuracy. The system has been implemented on the Java platform and verified using benchmark data from the UCI machine learning repository.
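
    As a reference for the attribute-ranking step, here is a minimal sketch of the gain ratio computation on a tiny invented dataset; the attribute and class values are purely illustrative.

```python
# Minimal sketch of the gain-ratio computation used to rank attributes.
# The tiny dataset below is invented for illustration only.
from collections import Counter
from math import log2

def entropy(labels):
    n = len(labels)
    return -sum((c / n) * log2(c / n) for c in Counter(labels).values())

def gain_ratio(attribute_values, labels):
    """Information gain of an attribute divided by its split information."""
    n = len(labels)
    base = entropy(labels)
    groups = {}
    for v, y in zip(attribute_values, labels):
        groups.setdefault(v, []).append(y)
    info = sum(len(g) / n * entropy(g) for g in groups.values())
    split_info = entropy(attribute_values)     # entropy of the attribute itself
    if split_info == 0:
        return 0.0
    return (base - info) / split_info

# Example: does the (made-up) attribute "chest_pain" separate the class well?
chest_pain = ["yes", "yes", "no", "no", "yes", "no"]
diagnosis  = ["sick", "sick", "healthy", "healthy", "sick", "healthy"]
print(round(gain_ratio(chest_pain, diagnosis), 3))   # 1.0: a perfect split
```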

  8. Opposition-Based Discrete PSO Using Natural Encoding for Classification Rule Discovery

    Naveed Kazim Khan


    In this paper we present a new discrete particle swarm optimization approach to induce rules from discrete data. The proposed algorithm, called Opposition-based Natural Discrete PSO (ONDPSO), initializes its population by taking into account the discrete nature of the data. Particles are encoded using a natural encoding scheme. Each member of the population updates its position iteratively on the basis of a newly designed position update rule. Opposition-based learning is implemented in the optimization process. The encoding scheme and position update rule used by the algorithm allow individual terms corresponding to different attributes within the rule's antecedent to be a disjunction of the values of those attributes. The performance of the proposed algorithm is evaluated against seven different datasets using a tenfold testing scheme. The achieved median accuracy is compared against various evolutionary and non-evolutionary classification techniques. The algorithm produces promising results by creating highly accurate and precise rules for each dataset.
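
    A minimal sketch of the opposition-based initialization idea used in such algorithms is given below, assuming a generic discrete encoding and a toy fitness function; the domain sizes and fitness are illustrative, not the paper's natural encoding or rule-quality measure.

```python
# Minimal sketch of opposition-based population initialization for a discrete
# search space. Attribute domains and the fitness function are illustrative.
import random

def opposite(individual, domain_sizes):
    """Opposite of a discrete individual: value v becomes (max_value - v)."""
    return [size - 1 - v for v, size in zip(individual, domain_sizes)]

def obl_init(pop_size, domain_sizes, fitness):
    """Create a random population, add the opposite of each member, keep the best half."""
    pop = [[random.randrange(size) for size in domain_sizes]
           for _ in range(pop_size)]
    candidates = pop + [opposite(ind, domain_sizes) for ind in pop]
    candidates.sort(key=fitness, reverse=True)
    return candidates[:pop_size]

# Toy fitness: prefer individuals whose encoded attribute values sum high.
domain_sizes = [4, 3, 5, 2]      # number of discrete values per attribute
population = obl_init(pop_size=6, domain_sizes=domain_sizes, fitness=sum)
print(population)
```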

  9. A Frequent Closed Itemsets Lattice-based Approach for Mining Minimal Non-Redundant Association Rules

    Vo, Bay


    Many algorithms have been developed to improve the time of mining frequent itemsets (FI) or frequent closed itemsets (FCI). However, algorithms that deal with the time of generating association rules have not been studied in depth. In reality, when a database contains many FI/FCI (from tens of thousands up to millions), the time for generating association rules is much larger than that for mining FI/FCI. Therefore, this paper presents an application of a frequent closed itemsets lattice (FCIL) for mining minimal non-redundant association rules (MNAR) to greatly reduce the time for generating rules. Firstly, we use CHARM-L to build the FCIL. After that, based on the FCIL, an algorithm for fast generation of MNAR is proposed. Experimental results show that the proposed algorithm is much faster than the frequent-itemsets-lattice-based algorithm in mining time.

  10. Rule-based design of synthetic transcription factors in eukaryotes.

    Purcell, Oliver; Peccoud, Jean; Lu, Timothy K


    To design and build living systems, synthetic biologists have at their disposal an increasingly large library of naturally derived and synthetic parts. These parts must be combined together in particular orders, orientations, and spacings to achieve desired functionalities. These structural constraints can be viewed as grammatical rules describing how to assemble parts together into larger functional units. Here, we develop a grammar for the design of synthetic transcription factors (sTFs) in eukaryotic cells and implement it within GenoCAD, a Computer-Aided Design (CAD) software for synthetic biology. Knowledge derived from experimental evidence was captured in this grammar to guide the user to create designer transcription factors that should operate as intended. The grammar can be easily updated and refined as our experience with using sTFs in different contexts increases. In combination with grammars that define other synthetic systems, we anticipate that this work will enable the more reliable, efficient, and automated design of synthetic cells with rich functionalities.

  11. Comparing the Effectiveness of GPS-Enhanced Voice Guidance for Pedestrians with Metric- and Landmark-Based Instruction Sets

    Rehrl, Karl; Häusler, Elisabeth; Leitinger, Sven

    This paper reports on a field experiment comparing two different kinds of verbal turn instructions in the context of GPS-based pedestrian navigation. The experiment was conducted in the city of Salzburg with 20 participants. Both instruction sets were based on qualitative turn direction concepts. The first one was enhanced with metric distance information and the second one was enhanced with landmark-anchored directions gathered from participants of a previous field experiment. The results show that in context of GPS-enhanced pedestrian navigation both kinds of instruction sets lead to similar navigation performance. Results also demonstrate that effective voice-only guidance of pedestrians in unfamiliar environments at a minimal error rate and without stopping the walk is feasible. Although both kinds of instructions lead to similar navigation performance, participants clearly preferred landmark-enhanced instructions.

  12. Stability of Switched Feedback Time-Varying Dynamic Systems Based on the Properties of the Gap Metric for Operators

    M. De la Sen


    The stabilization of dynamic switched control systems is focused on and based on an operator-based formulation. It is assumed that the controlled object and the controller are described by sequences of closed operator pairs (L,C) on a Hilbert space H of the input and output spaces, and stabilization is related to the existence of the inverse of the resulting input-output operator being admissible and bounded. The technical mechanism used to obtain the results is the appropriate use of the fact that closed operators that are sufficiently close to bounded operators, in terms of the gap metric, are also bounded. That philosophy is followed for the operators describing the input-output relations in switched feedback control systems so as to guarantee the closed-loop stabilization.

  13. Numerical Calabi-Yau metrics

    Douglas, M R; Lukic, S; Reinbacher, R; Douglas, Michael R.; Karp, Robert L.; Lukic, Sergio; Reinbacher, Rene


    We develop numerical methods for approximating Ricci flat metrics on Calabi-Yau hypersurfaces in projective spaces. Our approach is based on finding balanced metrics, and builds on recent theoretical work by Donaldson. We illustrate our methods in detail for a one parameter family of quintics. We also suggest several ways to extend our results.

  14. Area metric gravity and accelerating cosmology

    Punzi, R; Wohlfarth, M N R; Punzi, Raffaele; Schuller, Frederic P.; Wohlfarth, Mattias N.R.


    Area metric manifolds emerge as effective classical backgrounds in quantum string theory and quantum gauge theory, and present a true generalization of metric geometry. Here, we consider area metric manifolds in their own right, and develop in detail the foundations of area metric differential geometry. Based on the construction of an area metric curvature scalar, which reduces in the metric-induced case to the Ricci scalar, we re-interpret the Einstein-Hilbert action as dynamics for an area metric spacetime. In contrast to modifications of general relativity based on metric geometry, no continuous deformation scale needs to be introduced; the extension to area geometry is purely structural and thus rigid. We present an intriguing prediction of area metric gravity: without dark energy or fine-tuning, the late universe exhibits a small acceleration.

  15. Exact computation of emergy based on a mathematical reinterpretation of the rules of emergy algebra


    The emergy algebra is based on four rules, the use of which is sometimes confusing or reserved only to experts of the domain. Emergy computation does not obey conservation logic (i.e. emergy computation does not obey a Kirchhoff-like circuit law). In this paper the authors propose to reformulate the emergy rules into three axioms which provide (i) a rigorous mathematical framework for emergy computation and (ii) an exact recursive algorith...

  16. Fusion of Thresholding Rules During Wavelet-Based Noisy Image Compression

    Bekhtin Yury


    A new method for combining semisoft thresholding rules during wavelet-based compression of images with multiplicative noise is suggested. The method chooses the best thresholding rule and the threshold value using the proposed criteria, which provide the best nonlinear approximations and take into consideration errors of quantization. The results of computer modeling have shown that the suggested method provides relatively good image quality after restoration in the sense of criteria such as PSNR, SSIM, etc.

  17. SQL Based Association Rule Mining


    Data mining is becoming increasingly important as the size of databases grows ever larger and the need to explore hidden rules in the data becomes widely recognized. Database systems are currently dominated by relational databases, and the ability to perform data mining using standard SQL queries would definitely ease the implementation of data mining. In this paper, we introduce an association rule mining algorithm based on Apriori and its implementation using SQL, and we close with a summary of the paper.
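
    A minimal sketch of how Apriori-style support counting can be pushed into standard SQL is shown below, using Python's built-in sqlite3 module and an invented transaction table; it illustrates the general idea rather than the specific algorithm of the paper.

```python
# Minimal sketch of Apriori-style support counting expressed in standard SQL,
# run here through Python's built-in sqlite3. The transaction data is invented.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE transactions (tid INTEGER, item TEXT)")
conn.executemany(
    "INSERT INTO transactions VALUES (?, ?)",
    [(1, "bread"), (1, "milk"), (2, "bread"), (2, "butter"),
     (3, "bread"), (3, "milk"), (3, "butter"), (4, "milk")],
)

MINSUP = 2  # absolute minimum support

# Frequent 1-itemsets: a plain GROUP BY with a HAVING clause.
freq1 = conn.execute(
    "SELECT item, COUNT(*) AS support FROM transactions "
    "GROUP BY item HAVING COUNT(*) >= ?", (MINSUP,)
).fetchall()

# Frequent 2-itemsets: self-join on the transaction id.
freq2 = conn.execute(
    "SELECT a.item, b.item, COUNT(*) AS support "
    "FROM transactions a JOIN transactions b "
    "  ON a.tid = b.tid AND a.item < b.item "
    "GROUP BY a.item, b.item HAVING COUNT(*) >= ?", (MINSUP,)
).fetchall()

print(freq1)   # e.g. [('bread', 3), ('butter', 2), ('milk', 3)]
print(freq2)   # e.g. [('bread', 'butter', 2), ('bread', 'milk', 2)]
```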

  18. On Equivalence of FIS and ELM for Interpretable Rule-Based Knowledge Representation.

    Wong, Shen Yuong; Yap, Keem Siah; Yap, Hwa Jen; Tan, Shing Chiang; Chang, Siow Wee


    This paper presents a fuzzy extreme learning machine (F-ELM) that embeds fuzzy membership functions and rules into the hidden layer of extreme learning machine (ELM). Similar to the concept of ELM that employed the random initialization technique, three parameters of F-ELM are randomly assigned. They are the standard deviation of the membership functions, matrix-C (rule-combination matrix), and matrix-D [don't care (DC) matrix]. Fuzzy if-then rules are formulated by the rule-combination Matrix of F-ELM, and a DC approach is adopted to minimize the number of input attributes in the rules. Furthermore, F-ELM utilizes the output weights of the ELM to form the target class and confidence factor for each of the rules. This is to indicate that the corresponding consequent parameters are determined analytically. The operations of F-ELM are equivalent to a fuzzy inference system. Several benchmark data sets and a real world fault detection and diagnosis problem have been used to empirically evaluate the efficacy of the proposed F-ELM in handling pattern classification tasks. The results show that the accuracy rates of F-ELM are comparable (if not superior) to ELM with distinctive ability of providing explicit knowledge in the form of interpretable rule base.

  19. A Rule-Based Approach To Prepositional Phrase Attachment Disambiguation

    Brill, E; Brill, Eric; Resnik, Philip


    In this paper, we describe a new corpus-based approach to prepositional phrase attachment disambiguation, and present results comparing performance of this algorithm with other corpus-based approaches to this problem.

  20. The environmental hypersensitivity symptom inventory: metric properties and normative data from a population-based study.

    Nordin, Steven; Palmquist, Eva; Claeson, Anna-Sara; Stenberg, Berndt


    High concomitant intolerance attributed to odorous/pungent chemicals, certain buildings, electromagnetic fields (EMF), and everyday sounds calls for a questionnaire instrument that can assess symptom prevalence in various environmental intolerances. The Environmental Hypersensitivity Symptom Inventory (EHSI) was therefore developed and metrically evaluated, and normative data were established. The EHSI consists of 34 symptom items, requires limited time to respond to, and provides a detailed and broad description of the individual's symptomology. Data from 3406 individuals who took part in the Västerbotten Environmental Health Study were used. The participants constitute a random sample of inhabitants in the county of Västerbotten in Sweden, aged 18 to 79 years, stratified for age and gender. Exploratory factor analysis identified five significant factors: airway symptoms (9 items; Kuder-Richardson Formula 20 coefficient, KR-20, of internal consistency = 0.74), skin and eye symptoms (6 items; KR-20 = 0.60), cardiac, dizziness and nausea symptoms (4 items; KR-20 = 0.55), head-related and gastrointestinal symptoms (5 items; KR-20 = 0.55), and cognitive and affective symptoms (10 items; KR-20 = 0.80). The KR-20 was 0.85 for the entire 34-item EHSI. Symptom prevalence rates in percentage for having the specific symptoms every week over the preceding three months constitute normative data. The EHSI can be recommended for assessment of symptom prevalence in various types of environmental hypersensitivity, and with the advantage of comparing prevalence rates with normality.

  1. CT-based compartmental quantification of adipose tissue versus body metrics in colorectal cancer patients

    Nattenmueller, Johanna; Hoegenauer, Hanna; Grenacher, Lars; Kauczor, Hans-Ulrich [University Hospital, Department of Diagnostic and Interventional Radiology, Heidelberg (Germany); Boehm, Juergen; Ulrich, Cornelia [Huntsman Cancer Institute, Department of Population Health Sciences, Salt Lake City, UT (United States); Scherer, Dominique; Paskow, Michael; Gigic, Biljana; Schrotz-King, Petra [National Center for Tumor Diseases and German Cancer Research Center, Division of Preventive Oncology, Heidelberg (Germany)


    While obesity is considered a prognostic factor in colorectal cancer (CRC), there is increasing evidence that not simply body mass index (BMI) alone but specifically abdominal fat distribution is what matters. As part of the ColoCare study, this study measured the distribution of adipose tissue compartments in CRC patients and aimed to identify the body metric that best correlates with these measurements as a useful proxy for adipose tissue distribution. In 120 newly-diagnosed CRC patients who underwent multidetector computed tomography (CT), densitometric quantification of total (TFA), visceral (VFA), intraperitoneal (IFA), retroperitoneal (RFA), and subcutaneous fat area (SFA), as well as the M. erector spinae and psoas, was performed to test the association with gender, age, tumor stage, metabolic equivalents, BMI, waist-to-height (WHtR) and waist-to-hip ratio (WHR). VFA was 28.8 % higher in men (p_VFA < 0.0001) and 30.5 % higher in patients older than 61 years (p_VFA < 0.0001). WHtR correlated best with all adipose tissue compartments (r_VFA = 0.69, r_TFA = 0.84, p < 0.0001) and with the visceral-to-subcutaneous fat ratio (VFR, r_VFR = 0.22, p < 0.05). Patients with tumor stages III/IV showed significantly lower overall adipose tissue than I/II. Increased M. erector spinae mass was inversely correlated with all compartments. Densitometric quantification on CT is a highly reproducible and reliable method to show fat distribution across adipose tissue compartments. This distribution might be best reflected by WHtR, rather than by BMI or WHR.

  2. Using an improved association rules mining optimization algorithm in web-based mobile-learning system

    Huang, Yin; Chen, Jianhua; Xiong, Shaojun


    Mobile learning (M-learning) gives many learners the advantages of both traditional learning and e-learning. Currently, web-based mobile-learning systems have created many new ways of learning and defined new relationships between educators and learners. Association rule mining is one of the most important fields in data mining and knowledge discovery in databases. Rule explosion is a serious problem that causes great concern, as conventional mining algorithms often produce too many rules for decision makers to digest. Since a web-based mobile-learning system collects vast amounts of student profile data, data mining and knowledge discovery techniques can be applied to find interesting relationships between attributes of learners, assessments, the solution strategies adopted by learners, and so on. Therefore, this paper focuses on a new data-mining algorithm, combining the advantages of the genetic algorithm and the simulated annealing algorithm, called ARGSA (Association Rules based on an improved Genetic Simulated Annealing Algorithm), to mine association rules. The paper first takes advantage of a parallel genetic algorithm and simulated annealing algorithm designed specifically for discovering association rules. Moreover, analysis and experiments are presented to show that the proposed method is superior to the Apriori algorithm in this mobile-learning system.

  3. Effective Rule Based Classifier using Multivariate Filter and Genetic Miner for Mammographic Image Classification

    Nirase Fathima Abubacker


    Mammography is an important examination in the early detection of breast abnormalities. Automatic classification of mammogram images into normal, benign or malignant would help radiologists in the diagnosis of breast cancer cases. This study investigates the effectiveness of using rule-based classifiers with a multivariate filter and genetic miner to classify mammogram images. The method discovers association rules with the classes as the consequent and classifies the images based on the highest average confidence of the association rules (HAvC) matched for the classes. In the association rule mining stage, correlation based feature selection (CFS), which is of enormous significance in reducing the complexity of the image mining process, is used as the feature selection method, and a modified genetic association rule mining technique, GARM, is used to discover the rules. The method is evaluated on a mammogram image dataset with 240 images taken from DDSM. The performance of the method is compared against other classifiers such as SMO, Naïve Bayes and J48. The performance of the proposed method is promising, with 88% accuracy, and outperforms other classifiers in the context of mammogram image classification.
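
    The classification step can be illustrated with a minimal sketch of a highest-average-confidence decision over matched association rules; the rules, features and confidences below are invented, not those mined by GARM.

```python
# Minimal sketch of a "highest average confidence" decision: each class is
# scored by the mean confidence of the association rules whose antecedents
# match the image's features. Rules and feature names are invented.

RULES = [
    # (antecedent feature set, class, confidence)
    ({"high_density", "spiculated_mass"}, "malignant", 0.92),
    ({"spiculated_mass"},                 "malignant", 0.85),
    ({"round_mass", "smooth_margin"},     "benign",    0.90),
    ({"no_mass"},                         "normal",    0.95),
]

def classify(features, rules=RULES):
    scores = {}
    for antecedent, label, conf in rules:
        if antecedent <= features:                 # the rule matches the image
            scores.setdefault(label, []).append(conf)
    if not scores:
        return "unclassified"
    # Pick the class with the highest average confidence of its matched rules.
    return max(scores, key=lambda c: sum(scores[c]) / len(scores[c]))

print(classify({"high_density", "spiculated_mass", "smooth_margin"}))  # malignant
```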

  4. Fuzzy rule-based macroinvertebrate habitat suitability models for running waters

    Broekhoven, Van E.; Adriaenssens, V.; Baets, De B.; Verdonschot, P.F.M.


    A fuzzy rule-based approach was applied to a macroinvertebrate habitat suitability modelling problem. The model design was based on a knowledge base summarising the preferences and tolerances of 86 macroinvertebrate species for four variables describing river sites in springs up to small rivers in t

  5. Use and Cost of Electronic Resources in Central Library of Ferdowsi University Based on E-metrics

    Mohammad Reza Davarpanah


    The purpose of this study was to investigate the usage of electronic journals at Ferdowsi University, Iran, based on e-metrics. The paper also aimed to emphasize the cost-benefit analysis and the correlation between journal impact factors and usage data. In this study the experiences of the Ferdowsi University library with licensing and usage of electronic resources were evaluated by providing a cost-benefit analysis based on the cost and usage statistics of electronic resources. Vendor-provided data were also compared with local usage data. The usage data were collected by tracking web-based access locally, and by collecting vendor-provided usage data. The data sources were one year of vendor-supplied e-resource usage data from Ebsco, Elsevier, Proquest, Emerald, Oxford and Springer, and local usage data collected from the Ferdowsi University web server. The study found that actual usage values differ between vendor-provided data and local usage data. Elsevier had the highest usage in searches, sessions and downloads. Statistics also showed that a small number of journals account for a significant amount of use, while the majority of journals were used less frequently and some were never used at all. The users preferred the PDF format rather than HTML. The data in the subject profile suggested that the provided e-resources were best suited to certain subjects. There was no correlation between IF and electronic journal use. Monitoring the usage of e-resources has gained increasing importance for acquisition policy and budget decisions. The article provides information about local metrics for the six surveyed vendors/publishers, e.g. usage trends, requests per package, and cost per use as related to the scientific specialty of the university.

  6. Privacy Metric for User's Trajectory in Location-Based Services

    王彩梅; 郭亚军; 郭艳华


    This paper proposes a trajectory privacy measure for Silent Cascade, a prevalent trajectory privacy preserving method in location-based services (LBS). In this measure, the user's trajectory is modeled as a weighted undirected graph, and the user's trajectory privacy level is calculated through the use of information entropy. It has been pointed out in the literature that any privacy preserving method is subject to privacy threats once the attacker has new background knowledge. Therefore, adversarial background knowledge is hierarchically integrated into this measure, and the privacy metric result is composed of the assumptive background knowledge and the corresponding trajectory privacy level. (KUL(Ki+Ki-), KL(Ki+Ki-)) association rules are also proposed to describe the assumptive background knowledge. Simulation results show that this metric is an effective and valuable tool for mobile users and the designers of trajectory privacy preserving methods to measure the user's trajectory privacy level correctly, even when the attacker has variable background knowledge.

  7. A Metrical Theory of Stress and Destressing in English and Dutch

    Kager, R.W.J.


    The topic of this study is word stress, more specifically the relation between rules of stress and destressing within the framework of metrical phonology. Our claims will be largely based on in-depth analyses of two word stress systems: those of English and Dutch. We intend to offer a contribution t

  8. Robust Design Rule with Definite Purpose Character Based on Fuzzy Probability and Study of its Characteristics

    ZHANG Long-ting; HE Zhe-ming; GUO Hui-xin


    The design target with definite purpose character of product quality was described by a real fuzzy number (named the fuzzy target for short in this paper), and its commonly used membership functions were given. According to fuzzy probability theory and the robust design principle, a robust design rule based on fuzzy probability (named the fuzzy robust design rule for short) was put forward, and its validity and practicability were analyzed and tested with a design example. The theoretical analysis and the design examples make clear that, when the fuzzy robust design rule is used, a fine design effect can be obtained and the fuzzy robust design rule is well suited to the choice of the membership function of the fuzzy target; so it has a particular advantage.

  9. Stellar spectra association rule mining method based on the weighted frequent pattern tree

    Jiang-Hui Cai; Xu-Jun Zhao; Shi-Wei Sun; Ji-Fu Zhang; Hai-Feng Yang


    Effective extraction of data association rules can provide a reliable basis for the classification of stellar spectra. The concepts of stellar spectrum weighted itemsets and stellar spectrum weighted association rules are introduced, and the weight of a single property in the stellar spectrum is determined by information entropy. On that basis, a method is presented to mine the association rules of a stellar spectrum based on the weighted frequent pattern tree. Important properties of the spectral lines are highlighted using this method. At the same time, the waveform of the whole spectrum is taken into account. The experimental results show that the data association rules of a stellar spectrum mined with this method are consistent with the main features of stellar spectral types.
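
    A minimal sketch of entropy-based attribute weighting of this general kind (the entropy weight method) is shown below; the matrix of spectral property values is invented and the exact weighting formula of the paper may differ.

```python
# Minimal sketch of entropy-based attribute weighting (entropy weight method).
# The spectral property values below are invented for illustration.
import numpy as np

def entropy_weights(X):
    """X: (n_samples, n_attributes) non-negative matrix of property values.

    Attributes whose values vary strongly across samples have low entropy
    and therefore receive high weight."""
    P = X / X.sum(axis=0, keepdims=True)              # column-wise proportions
    with np.errstate(divide="ignore", invalid="ignore"):
        logP = np.where(P > 0, np.log(P), 0.0)
    k = 1.0 / np.log(X.shape[0])
    H = -k * (P * logP).sum(axis=0)                   # entropy per attribute in [0, 1]
    d = 1.0 - H                                       # degree of diversification
    return d / d.sum()

# Rows: spectra, columns: measured line properties (e.g. line strengths).
X = np.array([[1.0, 5.0, 2.0],
              [1.1, 0.5, 2.0],
              [0.9, 9.0, 2.1]])
print(entropy_weights(X).round(3))   # almost all weight goes to the strongly varying second column
```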

  10. An Estimation of Distribution Algorithm with Intelligent Local Search for Rule-based Nurse Rostering

    Uwe, Aickelin; Jingpeng, Li


    This paper proposes a new memetic evolutionary algorithm to achieve explicit learning in rule-based nurse rostering, which involves applying a set of heuristic rules for each nurse's assignment. The main framework of the algorithm is an estimation of distribution algorithm, in which an ant-miner methodology improves the individual solutions produced in each generation. Unlike our previous work (where learning is implicit), the learning in the memetic estimation of distribution algorithm is explicit, i.e. we are able to identify building blocks directly. The overall approach learns by building a probabilistic model, i.e. an estimation of the probability distribution of individual nurse-rule pairs that are used to construct schedules. The local search processor (i.e. the ant-miner) reinforces nurse-rule pairs that receive higher rewards. A challenging real world nurse rostering problem is used as the test problem. Computational results show that the proposed approach outperforms most existing approaches. It is ...

  11. A Rule-Based Model for Bankruptcy Prediction Based on an Improved Genetic Ant Colony Algorithm

    Yudong Zhang


    In this paper, we propose a hybrid system to predict corporate bankruptcy. The whole procedure consists of the following four stages: first, sequential forward selection was used to extract the most important features; second, a rule-based model was chosen to fit the given dataset since it can present physical meaning; third, a genetic ant colony algorithm (GACA) was introduced, and the fitness scaling strategy and the chaotic operator were incorporated with GACA, forming a new algorithm, fitness-scaling chaotic GACA (FSCGACA), which was used to seek the optimal parameters of the rule-based model; and finally, the stratified K-fold cross-validation technique was used to enhance the generalization of the model. Simulation experiments on 1000 corporations' data collected from 2006 to 2009 demonstrated that the proposed model was effective. It selected the 5 most important factors as "net income to stock broker's equality," "quick ratio," "retained earnings to total assets," "stockholders' equity to total assets," and "financial expenses to sales." The total misclassification error of the proposed FSCGACA was only 7.9%, exceeding the results of the genetic algorithm (GA), ant colony algorithm (ACA), and GACA. The average computation time of the model is 2.02 s.

  12. Using rule-based shot dose assignment in model-based MPC applications

    Bork, Ingo; Buck, Peter; Wang, Lin; Müller, Uwe


    Shrinking feature sizes and the need for tighter CD (Critical Dimension) control require the introduction of new technologies in mask making processes. One of those methods is the dose assignment of individual shots on VSB (Variable Shaped Beam) mask writers to compensate CD non-linearity effects and improve dose edge slope. Using increased dose levels only for most critical features, generally only for the smallest CDs on a mask, the change in mask write time is minimal while the increase in image quality can be significant. This paper describes a method combining rule-based shot dose assignment with model-based shot size correction. This combination proves to be very efficient in correcting mask linearity errors while also improving dose edge slope of small features. Shot dose assignment is based on tables assigning certain dose levels to a range of feature sizes. The dose to feature size assignment is derived from mask measurements in such a way that shape corrections are kept to a minimum. For example, if a 50nm drawn line on mask results in a 45nm chrome line using nominal dose, a dose level is chosen which is closest to getting the line back on target. Since CD non-linearity is different for lines, line-ends and contacts, different tables are generated for the different shape categories. The actual dose assignment is done via DRC rules in a pre-processing step before executing the shape correction in the MPC engine. Dose assignment to line ends can be restricted to critical line/space dimensions since it might not be required for all line ends. In addition, adding dose assignment to a wide range of line ends might increase shot count which is undesirable. The dose assignment algorithm is very flexible and can be adjusted based on the type of layer and the best balance between accuracy and shot count. These methods can be optimized for the number of dose levels available for specific mask writers. The MPC engine now needs to be able to handle different dose
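
    A minimal sketch of such a table-driven dose assignment is shown below; the shape categories, size thresholds and dose levels are invented placeholders, not calibrated values derived from mask measurements.

```python
# Minimal sketch of table-driven shot dose assignment: feature sizes are binned
# into dose classes, per shape category, before model-based shape correction.
# The size thresholds and dose levels below are invented, not values from the paper.

# (max feature size in nm, dose level relative to nominal), per shape category
DOSE_TABLES = {
    "line":     [(45, 1.15), (60, 1.08), (float("inf"), 1.00)],
    "line_end": [(50, 1.20), (70, 1.10), (float("inf"), 1.00)],
    "contact":  [(55, 1.25), (80, 1.12), (float("inf"), 1.00)],
}

def assign_dose(shape_category, feature_size_nm):
    """Return the relative dose level for a shot, based on its category and size."""
    for max_size, dose in DOSE_TABLES[shape_category]:
        if feature_size_nm <= max_size:
            return dose
    return 1.00  # unreachable with the sentinel row, kept for clarity

print(assign_dose("line", 42))       # 1.15: the smallest lines get the highest dose
print(assign_dose("contact", 90))    # 1.00: large contacts stay at nominal dose
```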

  13. Inter-regional metric disadvantages when comparing countries’ happiness on a global scale. A Rasch based consequential validity analysis

    Diego Fernando Rojas-Gualdrón


    Measurement confounding due to socioeconomic differences between world regions may bias the estimations of countries' happiness and global inequality. The potential implications of this bias have not been researched. In this study, the consequential validity of the Happy Planet Index 2012 as an indicator of global inequality is evaluated from the Rasch measurement perspective. Differential item functioning (DIF) by world region and bias in the estimated magnitude of inequalities were analyzed. The recalculated measure showed a good fit to Rasch model assumptions. The original index underestimated relative inequalities between world regions by 20%. DIF had no effect on relative measures but affected absolute measures by overestimating world average happiness and underestimating its variance. These findings suggest measurement confounding by unmeasured characteristics. Metric disadvantages must be adjusted to make fair comparisons. Public policy decisions based on biased estimations could have relevant negative consequences on people's health and well-being by not focusing efforts on truly vulnerable populations.

  14. Physiological Signals based Day-Dependence Analysis with Metric Multidimensional Scaling for Sentiment Classification in Wearable Sensors

    Wei Wang


    Affective interaction has emerged in implicit human-computer interaction. When physiological signals are used to recognize affect, the different positions at which the physiological signal sensors are attached to the body, along with the daily habits and moods of human beings, influence the affective physiological signals. In this study, the scalar product matrix was calculated based on metric multidimensional scaling of a dissimilarity matrix. Subsequently, the matrix of individual attribute reconstructions was obtained using principal component factors. The method proposed in this study eliminates day dependence, reduces the effect of time in the physiological signals of affect, and improves the accuracy of affect classification.
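
    For reference, a minimal sketch of classical metric multidimensional scaling, which recovers the scalar product (Gram) matrix from a dissimilarity matrix by double centring, is given below; the dissimilarity values are invented and the paper's subsequent factor-reconstruction step is not reproduced.

```python
# Minimal sketch of classical metric MDS: the scalar product matrix is recovered
# from a dissimilarity matrix by double centring, then factored by its leading
# eigenvectors. The dissimilarity values below are invented.
import numpy as np

def classical_mds(D, n_components=2):
    """D: (n, n) symmetric matrix of pairwise dissimilarities."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n          # centring matrix
    B = -0.5 * J @ (D ** 2) @ J                  # scalar product (Gram) matrix
    eigvals, eigvecs = np.linalg.eigh(B)
    order = np.argsort(eigvals)[::-1][:n_components]
    L = np.sqrt(np.clip(eigvals[order], 0, None))
    return eigvecs[:, order] * L                 # low-dimensional coordinates

# Pairwise dissimilarities between four (hypothetical) recording days.
D = np.array([[0.0, 1.0, 4.0, 4.2],
              [1.0, 0.0, 3.8, 4.0],
              [4.0, 3.8, 0.0, 1.1],
              [4.2, 4.0, 1.1, 0.0]])
print(classical_mds(D, 2).round(2))   # two clearly separated pairs of days
```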

  15. Automated implementation of rule-based expert systems with neural networks for time-critical applications

    Ramamoorthy, P. A.; Huang, Song; Govind, Girish


    In fault diagnosis, control and real-time monitoring, both timing and accuracy are critical for operators or machines to reach proper solutions or appropriate actions. Expert systems are becoming more popular in the manufacturing community for dealing with such problems. In recent years, neural networks have revived and their applications have spread to many areas of science and engineering. A method of using neural networks to implement rule-based expert systems for time-critical applications is discussed here. This method can convert a given rule-based system into a neural network with fixed weights and thresholds. The rules governing the translation are presented along with some examples. We also present the results of automated machine implementation of such networks from the given rule-base. This significantly simplifies the translation process to neural network expert systems from conventional rule-based systems. Results comparing the performance of the proposed approach based on neural networks vs. the classical approach are given. The possibility of very large scale integration (VLSI) realization of such neural network expert systems is also discussed.
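
    The core translation idea can be illustrated with a minimal sketch that maps a single IF-THEN rule onto a threshold unit with fixed weights; the rule itself is invented and the full network construction of the paper is not reproduced.

```python
# Minimal sketch of turning a simple IF-THEN rule into a neuron with fixed
# weights and a threshold. The rule ("IF a AND b AND NOT c THEN fault") is invented.

def threshold_unit(inputs, weights, threshold):
    """Binary neuron: fires (returns 1) when the weighted sum reaches the threshold."""
    s = sum(w * x for w, x in zip(weights, inputs))
    return 1 if s >= threshold else 0

# IF a AND b AND NOT c THEN fault
# Positive literals get weight +1, negated literals weight -1; the threshold
# equals the number of positive literals, so the unit fires only when the
# whole antecedent is satisfied.
weights, threshold = [1, 1, -1], 2

for a in (0, 1):
    for b in (0, 1):
        for c in (0, 1):
            print((a, b, c), "->", threshold_unit((a, b, c), weights, threshold))
```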

  16. Rule-based category learning in children: the role of age and executive functioning.

    Rabi, Rahel; Minda, John Paul


    Rule-based category learning was examined in 4-11 year-olds and adults. Participants were asked to learn a set of novel perceptual categories in a classification learning task. Categorization performance improved with age, with younger children showing the strongest rule-based deficit relative to older children and adults. Model-based analyses provided insight regarding the type of strategy being used to solve the categorization task, demonstrating that the use of the task appropriate strategy increased with age. When children and adults who identified the correct categorization rule were compared, the performance deficit was no longer evident. Executive functions were also measured. While both working memory and inhibitory control were related to rule-based categorization and improved with age, working memory specifically was found to marginally mediate the age-related improvements in categorization. When analyses focused only on the sample of children, results showed that working memory ability and inhibitory control were associated with categorization performance and strategy use. The current findings track changes in categorization performance across childhood, demonstrating at which points performance begins to mature and resemble that of adults. Additionally, findings highlight the potential role that working memory and inhibitory control may play in rule-based category learning.

  17. A rule based comprehensive approach for reconfiguration of electrical distribution network

    Zhu, Jizhong [College of Electrical Engineering, Chongqing University (China)]|[AREVA T and D Inc., 10865 Willows Road NE, Redmond, WA 98052 (United States); Xiong, Xiaofu; Zhang, Jun [College of Electrical Engineering, Chongqing University (China); Shen, Guanquan; Xu, Qiuping; Xue, Yi [Guiyang South Power Supply Bureau, China Southern Power Grid (China)


    This paper proposes a rule based comprehensive approach to study distribution network reconfiguration (DNRC). A DNRC model with line power constraints is set up, in which the objective is to minimize the system power loss. In order to obtain precise branch currents and the system power loss, a power summation based radial distribution network load flow (PSRDNLF) method is applied in the study. The rules that are used to select the optimal reconfiguration of the distribution network are formed based on system operation experience. The proposed rule based comprehensive approach is implemented in the distribution network of Guiyang South Power Supply Bureau. For the purpose of illustrating the proposed approach, two distribution network systems are tested and analyzed in the paper.

  18. SU-E-T-789: Validation of 3DVH Accuracy On Quantifying Delivery Errors Based On Clinical Relevant DVH Metrics

    Ma, T; Kumaraswamy, L [Roswell Park Cancer Institute, Buffalo, NY (United States)


    Purpose: Detection of treatment delivery errors is important in radiation therapy. However, accurate quantification of delivery errors is also of great importance. This study aims to evaluate the 3DVH software’s ability to accurately quantify delivery errors. Methods: Three VMAT plans (prostate, H&N and brain) were randomly chosen for this study. First, we evaluated whether delivery errors could be detected by gamma evaluation. Conventional per-beam IMRT QA was performed with the ArcCHECK diode detector for the original plans and for the following modified plans: (1) induced dose difference error up to ±4.0% and (2) control point (CP) deletion (3 to 10 CPs were deleted) (3) gantry angle shift error (3 degree uniformly shift). 2D and 3D gamma evaluation were performed for all plans through SNC Patient and 3DVH, respectively. Subsequently, we investigated the accuracy of 3DVH analysis for all cases. This part evaluated, using the Eclipse TPS plans as standard, whether 3DVH accurately can model the changes in clinically relevant metrics caused by the delivery errors. Results: 2D evaluation seemed to be more sensitive to delivery errors. The average differences between ECLIPSE predicted and 3DVH results for each pair of specific DVH constraints were within 2% for all three types of error-induced treatment plans, illustrating the fact that 3DVH is fairly accurate in quantifying the delivery errors. Another interesting observation was that even though the gamma pass rates for the error plans are high, the DVHs showed significant differences between original plan and error-induced plans in both Eclipse and 3DVH analysis. Conclusion: The 3DVH software is shown to accurately quantify the error in delivered dose based on clinically relevant DVH metrics, where a conventional gamma based pre-treatment QA might not necessarily detect.

  19. Ruled-based control of off-grid desalination powered by renewable energies

    Alvaro Serna


    A rule-based control is presented for desalination plants operating under variable, renewable power availability. The control algorithm is based on two sets of rules: first, a list that prioritizes the reverse osmosis (RO) units of the plant is created, based on the current state and the expected water demand; secondly, the available energy is dispatched to these units following this prioritized list. The selected strategy is tested on a specific case study: a reverse osmosis plant designed for the production of desalinated water powered by wind and wave energy. Simulation results illustrate the correct performance of the plant under this control.
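
    A minimal sketch of the two-step rule set (prioritize, then dispatch) is given below; the unit data and the ranking criterion are illustrative assumptions, not the plant model or rules of the cited paper.

```python
# Minimal sketch of a two-step rule-based dispatch: rank the reverse-osmosis
# (RO) units, then hand out the available renewable power down the ranked list.
# Unit data and the ranking rule are illustrative assumptions.

def prioritize(units, expected_demand):
    """Rank units: running units first, then larger units if demand is high."""
    return sorted(
        units,
        key=lambda u: (not u["running"],
                       -u["power_kw"] if expected_demand > 0.5 else u["power_kw"]),
    )

def dispatch(units, available_kw, expected_demand):
    """Switch units on down the priority list while power remains."""
    schedule = []
    for unit in prioritize(units, expected_demand):
        on = available_kw >= unit["power_kw"]
        if on:
            available_kw -= unit["power_kw"]
        schedule.append((unit["name"], on))
    return schedule, available_kw

units = [
    {"name": "RO-1", "power_kw": 50, "running": True},
    {"name": "RO-2", "power_kw": 80, "running": False},
    {"name": "RO-3", "power_kw": 30, "running": False},
]
print(dispatch(units, available_kw=100, expected_demand=0.8))
# -> ([('RO-1', True), ('RO-2', False), ('RO-3', True)], 20)
```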

  20. Automatic detection of esophageal pressure events. Is there an alternative to rule-based criteria?

    Kruse-Andersen, S; Rütz, K; Kolberg, Jens Godsk


    ... curves generated by muscular contractions, rule-based criteria do not always select the pressure events most relevant for further analysis. We have therefore been searching for a new concept for automatic event recognition. The present study describes a new system, based on the method of neurocomputing. ... 0.79-0.99 and accuracies of 0.89-0.98, depending on the recording level within the esophageal lumen. The neural networks often recognized peaks that clearly represented true contractions but that had been rejected by a rule-based system. We conclude that neural networks have potential for automatic detection ...

  1. Metrics for Finite Markov Decision Processes

    Ferns, Norman; Panangaden, Prakash; Precup, Doina


    We present metrics for measuring the similarity of states in a finite Markov decision process (MDP). The formulation of our metrics is based on the notion of bisimulation for MDPs, with an aim towards solving discounted infinite horizon reinforcement learning tasks. Such metrics can be used to aggregate states, as well as to better structure other value function approximators (e.g., memory-based or nearest-neighbor approximators). We provide bounds that relate our metric distances to the opti...

  2. Estimation of Tree Cover in an Agricultural Parkland of Senegal Using Rule-Based Regression Tree Modeling

    Stefanie M. Herrmann


    Field trees are an integral part of the farmed parkland landscape in West Africa and provide multiple benefits to the local environment and livelihoods. While field trees have received increasing interest in the context of strengthening resilience to climate variability and change, the actual extent of farmed parkland and spatial patterns of tree cover are largely unknown. We used the rule-based predictive modeling tool Cubist® to estimate field tree cover in the west-central agricultural region of Senegal. A collection of rules and associated multiple linear regression models was constructed from (1) a reference dataset of percent tree cover derived from very high spatial resolution data (2 m Orbview) as the dependent variable, and (2) ten years of 10-day 250 m Moderate Resolution Imaging Spectrometer (MODIS) Normalized Difference Vegetation Index (NDVI) composites and derived phenological metrics as independent variables. Correlation coefficients between modeled and reference percent tree cover of 0.88 and 0.77 were achieved for training and validation data respectively, with absolute mean errors of 1.07 and 1.03 percent tree cover. The resulting map shows a west-east gradient from high tree cover in the peri-urban areas of horticulture and arboriculture to low tree cover in the more sparsely populated eastern part of the study area. A comparison of current (2000s) tree cover along this gradient with historic cover as seen on Corona images reveals dynamics of change but also areas of remarkable stability of field tree cover since 1968. The proposed modeling approach can help to identify locations of high and low tree cover in dryland environments and guide ground studies and management interventions aimed at promoting the integration of field trees in agricultural systems.

  3. Studying the Post-Fire Response of Vegetation in California Protected Areas with NDVI-based Pheno-Metrics

    Jia, S.; Gillespie, T. W.


    Post-fire response from vegetation is determined by the intensity and timing of fires as well as the nature of local biomes. Though the field-based studies focusing on selected study sites helped to understand the mechanisms of post-fire response, there is a need to extend the analysis to a broader spatial extent with the assistance of remotely sensed imagery of fires and vegetation. Pheno-metrics, a series of variables on the growing cycle extracted from basic satellite measurements of vegetation coverage, translate the basic remote sensing measurements such as NDVI to the language of phenology and fire ecology in a quantitative form. In this study, we analyzed the rate of biomass removal after ignition and the speed of post-fire recovery in California protected areas from 2000 to 2014 with USGS MTBS fire data and USGS eMODIS pheno-metrics. NDVI drop caused by fire showed the aboveground biomass of evergreen forest was removed much slower than shrubland because of higher moisture level and greater density of fuel. In addition, the above two major land cover types experienced a greatly weakened immediate post-fire growing season, featuring a later start and peak of season, a shorter length of season, and a lower start and peak of NDVI. Such weakening was highly correlated with burn severity, and also influenced by the season of fire and the land cover type, according to our modeling between the anomalies of pheno-metrics and the difference of normalized burn ratio (dNBR). The influence generally decayed over time, but can remain high within the first 5 years after fire, mostly because of the introduction of exotic species when the native species were missing. Local-specific variables are necessary to better address the variance within the same fire and improve the outcomes of models. This study can help ecologists in validating the theories of post-fire vegetation response mechanisms and assist local fire managers in post-fire vegetation recovery.

  4. Primary motor cortex contributes to the implementation of implicit value-based rules during motor decisions.

    Derosiere, Gerard; Zénon, Alexandre; Alamia, Andrea; Duque, Julie


    In the present study, we investigated the functional contribution of the human primary motor cortex (M1) to motor decisions. Continuous theta burst stimulation (cTBS) was used to alter M1 activity while participants performed a decision-making task in which the reward associated with the subjects' responses (right hand finger movements) depended on explicit and implicit value-based rules. Subjects performed the task over two consecutive days and cTBS occurred in the middle of Day 2, once the subjects were just about to implement implicit rules, in addition to the explicit instructions, to choose their responses, as evident in the control group (cTBS over the right somatosensory cortex). Interestingly, cTBS over the left M1 prevented subjects from implementing the implicit value-based rule while its implementation was enhanced in the group receiving cTBS over the right M1. Hence, cTBS had opposite effects depending on whether it was applied on the contralateral or ipsilateral M1. The use of the explicit value-based rule was unaffected by cTBS in the three groups of subjects. Overall, the present study provides evidence for a functional contribution of M1 to the implementation of freshly acquired implicit rules, possibly through its involvement in a cortico-subcortical network controlling value-based motor decisions.

  5. Time-incremental creep–fatigue damage rule for single crystal Ni-base superalloys

    Tinga, T.; Brekelmans, W.A.M.; Geers, M.G.D.


    In the present paper a damage model for single crystal Ni-base superalloys is proposed that integrates time-dependent and cyclic damage into a generally applicable time-incremental damage rule. A criterion based on the Orowan stress is introduced to detect slip reversal on the microscopic level and

  6. Effects of Multimedia on Cognitive Load, Self-Efficacy, and Multiple Rule-Based Problem Solving

    Zheng, Robert; McAlack, Matthew; Wilmes, Barbara; Kohler-Evans, Patty; Williamson, Jacquee


    This study investigates effects of multimedia on cognitive load, self-efficacy and learners' ability to solve multiple rule-based problems. Two hundred twenty-two college students were randomly assigned to interactive and non-interactive multimedia groups. Based on Engelkamp's multimodal theory, the present study investigates the role of…

  7. CT Image Sequence Analysis for Object Recognition - A Rule-Based 3-D Computer Vision System

    Dongping Zhu; Richard W. Conners; Daniel L. Schmoldt; Philip A. Araman


    Research is now underway to create a vision system for hardwood log inspection using a knowledge-based approach. In this paper, we present a rule-based, 3-D vision system for locating and identifying wood defects using topological, geometric, and statistical attributes. A number of different features can be derived from the 3-D input scenes. These features and evidence...

  8. DEVELOP-FPS: a First Person Shooter Development Tool for Rule-based Scripts

    Bruno Correia


    Full Text Available We present DEVELOP-FPS, a software tool specially designed for the development of First Person Shooter (FPS) players controlled by rule-based scripts. DEVELOP-FPS may be used by FPS developers to create, debug, maintain and compare rule-based player behaviours, providing a set of useful functionalities: (i) easy preparation of the right scenarios for game debugging and testing; (ii) control of the game execution: users can stop and resume the game execution at any instant, monitoring and controlling every player in the game, monitoring the state of each player and their rule-base activation, and issuing commands to control their behaviour; and (iii) automatic execution of a certain number of game runs to collect data in order to evaluate and compare the players' performance across a sufficient number of similar experiments.

  9. Generalization-based discovery of spatial association rules with linguistic cloud models

    杨斌; 田永青; 朱仲英


    Extraction of interesting and general spatial association rules from large spatial databases is an important task in the development of spatial database systems. In this paper, we investigate a generalization-based knowledge discovery mechanism that integrates attribute-oriented induction on nonspatial data and spatial merging and generalization on spatial data. Furthermore, we present linguistic cloud models for knowledge representation and uncertainty handling to enhance the current generalization-based method. With these models, spatial and nonspatial attribute values are well generalized at higher concept levels, allowing discovery of strong spatial association rules. Combining the cloud-model-based generalization method with the Apriori algorithm for mining association rules from a spatial database shows benefits in effectiveness and flexibility.

  10. Evolving Rule-Based Systems in two Medical Domains using Genetic Programming

    Tsakonas, A.; Dounias, G.; Jantzen, Jan;


    We demonstrate, compare and discuss the application of two genetic programming methodologies for the construction of rule-based systems in two medical domains: the diagnosis of Aphasia's subtypes and the classification of Pap-Smear Test examinations. The first approach consists of a scheme...... the classification between all common types. A third model consisting of a GP-generated fuzzy rule-based system is tested on the same field. In the classification of Pap-Smear Test examinations, a crisp rule-based system is constructed. Results denote the effectiveness of the proposed systems. Comments...... and comparisons are made between the proposed methods and previous attempts on the selected fields of application....

  11. Self-adaptive Transfer for Decision Trees Based on Similarity Metric

    王雪松; 潘杰; 程玉虎; 曹戈


    How to avoid negative transfer in transfer learning and how to choose the right timing and method for transfer are key issues limiting its wide application. To address this problem, a self-adaptive transfer method for decision trees based on a similarity metric (STDT) is proposed. First, depending on whether the source-task datasets are accessible, either component prediction probabilities or path prediction probabilities are adaptively used to judge the similarity between decision trees, and the resulting affinity coefficient quantifies how closely the related tasks resemble each other. Then, a multi-source criterion determines whether multi-source ensemble transfer should be applied, and the normalized similarities are assigned in turn to the source decision trees awaiting transfer as transfer weights. Finally, the source decision trees are transferred as an ensemble to assist decision making in the target task. Simulation results on the UCI machine learning repository show that, compared with the weighted sum rule (WSR) algorithm for multi-source transfer and MS-TrAdaBoost, STDT achieves faster transfer while maintaining decision accuracy.
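
    A minimal sketch of the ensemble-transfer step described above follows, assuming sklearn-style source classifiers; the similarity (affinity) values are placeholders rather than the paper's component- or path-prediction probabilities.

```python
# Sketch: normalize tree-to-tree similarities into transfer weights and combine the
# source trees in a weighted vote for the target task. Similarities are placeholders.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
sources = []
for shift in (0.0, 0.2, 1.0):                        # three related source tasks (synthetic)
    X = rng.normal(shift, 1.0, (200, 3))
    y = (X[:, 0] + X[:, 1] > 2 * shift).astype(int)
    sources.append(DecisionTreeClassifier(max_depth=3).fit(X, y))

similarities = np.array([0.8, 0.6, 0.2])             # placeholder affinity coefficients
weights = similarities / similarities.sum()          # normalized -> transfer weights

def weighted_vote(x):
    votes = np.zeros(2)
    for tree, w in zip(sources, weights):
        votes[int(tree.predict(x.reshape(1, -1))[0])] += w
    return int(np.argmax(votes))

print(weighted_vote(np.array([0.5, 0.6, -0.1])))
```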

  12. Enforcement of Mask Rule Compliance in Model-Based OPC'ed Layouts during Data Preparation

    Meyer, Dirk H.; Vuletic, Radovan; Seidl, Alexander


    Currently available commercial model-based OPC tools do not always generate layouts which are mask rule compliant. Additional processing is required to remove mask rule violations, which are often too numerous for manual patching. Although physical verification tools can be used to remove simple mask rule violations, the results are often unsatisfactory for more complicated geometrical configurations. The subject of this paper is the development and application of a geometrical processing engine that automatically enforces mask rule compliance of the OPC'ed layout. It is designed as an add-on to a physical verification tool. The engine constructs patches, which remove mask rule violations such as notches or width violations. By employing a Mixed Integer Programming (MIP) optimization method, the edges of each patch are placed in a way that avoids secondary violations while modifying the OPC'ed layout as little as possible. A sequence of enforcement steps is applied to the layout to remove all types of mask rule violations. This approach of locally confined minimal layout modifications retains OPC corrections to a maximum amount. This method has been used successfully in production on a variety of DRAM designs for the non-array regions.

  13. Design a Weight Based sorting distortion algorithm using Association rule Hiding for Privacy Preserving Data mining



    Full Text Available The security of large databases that contain crucial information becomes a serious issue when data are shared over a network and must be protected against unauthorized access. Privacy-preserving data mining is a new research trend in protecting private data for data mining and statistical databases. Association analysis is a powerful tool for discovering relationships that are hidden in large databases. Association rule hiding algorithms provide strong and efficient protection for confidential and crucial data. Data modification and rule hiding are among the most important approaches for securing data. The objective of the proposed Weight Based Sorting Distortion (WBSD) algorithm is to distort certain data which satisfy a particular sensitive rule. It then hides those transactions which support a sensitive rule, assigns each a priority, and sorts them in ascending order according to the priority value of each rule. It then uses these weights to compute the priority value for each transaction according to how weak the rule is that the transaction supports. Data distortion is one of the important methods for avoiding such scalability issues.
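
    The sorting idea can be pictured roughly as follows; this is an illustrative sketch, not the published WBSD specification, and the priority function and item weights are invented for demonstration.

```python
# Illustrative sketch: transactions supporting a sensitive rule are given a priority
# weight and sorted in ascending order so the "weakest" supporters are distorted first.
def priority(transaction, item_weights):
    # Invented priority: total weight of the items the transaction carries.
    return sum(item_weights.get(i, 1.0) for i in transaction)

def supporters_sorted(transactions, sensitive_rule, item_weights):
    rule_items = sensitive_rule["antecedent"] | sensitive_rule["consequent"]
    supporting = [t for t in transactions if rule_items <= t]
    return sorted(supporting, key=lambda t: priority(t, item_weights))

rule = {"antecedent": {"bread"}, "consequent": {"butter"}}
txns = [{"bread", "butter", "milk"}, {"bread", "butter"}, {"bread", "butter", "milk", "jam"}]
print(supporters_sorted(txns, rule, {"milk": 2.0}))
```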

  14. Agile Service Development: A Rule-Based Method Engineering Approach

    Hoppenbrouwers, Stijn; Zoet, Martijn; Versendaal, Johan; Weerd, Inge van de


    Agile software development has evolved into an increasingly mature software development approach and has been applied successfully in many software vendors’ development departments. In this position paper, we address the broader agile service development. Based on method engineering principles we de

  15. RB-ARD: A proof of concept rule-based abort

    Smith, Richard; Marinuzzi, John


    The Abort Region Determinator (ARD) is a console program in the space shuttle mission control center. During shuttle ascent, the Flight Dynamics Officer (FDO) uses the ARD to determine the possible abort modes and make abort calls for the crew. The goal of the Rule-Based Abort Region Determinator (RB-ARD) project was to test the concept of providing an onboard ARD for the shuttle or an automated ARD for the mission control center (MCC). A proof-of-concept rule-based system was developed on a LMI Lambda computer using PICON, a knowledge-based system shell. Knowledge derived from documented flight rules and ARD operation procedures was coded in PICON rules. These rules, in conjunction with modules of conventional code, enable the RB-ARD to carry out key parts of the ARD task. Current capabilities of the RB-ARD include: continuous updating of the available abort mode, recognition of a limited number of main engine faults and recommendation of safing actions. Safing actions recommended by the RB-ARD concern the Space Shuttle Main Engine (SSME) limit shutdown system and powerdown of the SSME AC buses.

  16. Haunted by a doppelgänger: irrelevant facial similarity affects rule-based judgments.

    von Helversen, Bettina; Herzog, Stefan M; Rieskamp, Jörg


    Judging other people is a common and important task. Every day professionals make decisions that affect the lives of other people when they diagnose medical conditions, grant parole, or hire new employees. To prevent discrimination, professional standards require that decision makers render accurate and unbiased judgments solely based on relevant information. Facial similarity to previously encountered persons can be a potential source of bias. Psychological research suggests that people only rely on similarity-based judgment strategies if the provided information does not allow them to make accurate rule-based judgments. Our study shows, however, that facial similarity to previously encountered persons influences judgment even in situations in which relevant information is available for making accurate rule-based judgments and where similarity is irrelevant for the task and relying on similarity is detrimental. In two experiments in an employment context we show that applicants who looked similar to high-performing former employees were judged as more suitable than applicants who looked similar to low-performing former employees. This similarity effect was found despite the fact that the participants used the relevant résumé information about the applicants by following a rule-based judgment strategy. These findings suggest that similarity-based and rule-based processes simultaneously underlie human judgment.

  17. 2008 GEM Modeling Challenge: Metrics Study of the Dst Index in Physics-Based Magnetosphere and Ring Current Models and in Statistical and Analytic Specifications

    Rastaetter, L.; Kuznetsova, M.; Hesse, M.; Pulkkinen, A.; Glocer, A.; Yu, Y.; Meng, X.; Raeder, J.; Wiltberger, M.; Welling, D.; Jordanova, V.; Zaharia, S.; Sazykin, S.; Weigel, R.; Boynton, R.; Eccles, V.; Gannon, J.


    In this paper the metrics-based results of the Dst part of the 2008-2009 GEM Metrics Challenge are reported. The Metrics Challenge asked modelers to submit results for 4 geomagnetic storm events and 5 different types of observations that can be modeled by statistical, climatological or physics-based (e.g. MHD) models of the magnetosphere-ionosphere system. We present the results of over 25 model settings that were run at the Community Coordinated Modeling Center (CCMC) and at the institutions of various modelers for these events. To measure the performance of each of the models against the observations we use comparisons of one-hour averaged model data with the Dst index issued by the World Data Center for Geomagnetism, Kyoto, Japan, and direct comparison of one-minute model data with the one-minute Dst index calculated by the United States Geological Survey (USGS).
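
    As background on the kind of comparison described, the sketch below computes an hourly-averaged RMSE and a prediction-efficiency skill score between a synthetic observed Dst series and a hypothetical model run; the metric choices here are illustrative, not the challenge's exact scoring.

```python
# Sketch: hourly-averaged model Dst compared against an observed index via RMSE and a
# prediction-efficiency skill score. All series are synthetic placeholders.
import numpy as np

def hourly_average(one_minute_series):
    n = len(one_minute_series) // 60 * 60
    return np.asarray(one_minute_series[:n]).reshape(-1, 60).mean(axis=1)

def prediction_efficiency(obs, model):
    # Skill relative to a constant (mean) prediction; 1 is perfect, <= 0 is no skill.
    return 1.0 - np.mean((obs - model) ** 2) / np.var(obs)

minutes = 4 * 24 * 60                                    # a 4-day storm interval (synthetic)
obs_1min = -50 + 30 * np.sin(np.arange(minutes) / 1440 * 2 * np.pi) + np.random.normal(0, 5, minutes)
mod_1min = obs_1min + np.random.normal(0, 15, minutes)   # a hypothetical model run

obs_h, mod_h = hourly_average(obs_1min), hourly_average(mod_1min)
rmse = np.sqrt(np.mean((obs_h - mod_h) ** 2))
print(rmse, prediction_efficiency(obs_h, mod_h))
```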

  18. Introducing the new GRASS module g.infer for data-driven rule-based applications

    Peter Löwe


    Full Text Available This paper introduces the new GRASS GIS add-on module g.infer. The module enables rule-based analysis and workflow management in GRASS GIS, via data-driven inference processes based on the expert system shell CLIPS. The paper discusses the theoretical and developmental background that will help prepare the reader to use the module for Knowledge Engineering applications. In addition, potential application scenarios are sketched out, ranging from the rule-driven formulation of nontrivial GIS-classification tasks and GIS workflows to ontology management and intelligent software agents.

  19. The challenge of defining risk-based metrics to improve food safety: inputs from the BASELINE project.

    Manfreda, Gerardo; De Cesare, Alessandra


    In 2002, Regulation (EC) No 178 of the European Parliament and of the Council stated that, in order to achieve the general objective of a high level of protection of human health and life, food law shall be based on risk analysis. However, Commission Regulation No 2073/2005 on microbiological criteria for foodstuffs requires that food business operators ensure that foodstuffs comply with the relevant microbiological criteria. Such criteria define the acceptability of a product, a batch of foodstuffs or a process, based on the absence, presence or number of micro-organisms, and/or on the quantity of their toxins/metabolites, per unit(s) of mass, volume, area or batch. The same Regulation describes a food safety criterion as a means of defining the acceptability of a product or a batch of foodstuffs applicable to products placed on the market; moreover, it defines a process hygiene criterion as a means of indicating the acceptable functioning of the production process. Neither food safety criteria nor process hygiene criteria are based on risk analysis. On the contrary, the metrics formulated by the Codex Alimentarius Commission in 2004, named Food Safety Objective (FSO) and Performance Objective (PO), are risk-based and fit the indications of Regulation 178/2002. The main aims of this review are to illustrate the key differences between microbiological criteria and the risk-based metrics defined by the Codex Alimentarius Commission and to explore the opportunity and the possibility of implementing future European Regulations that include PO and FSO as supporting parameters to microbiological criteria. This review also clarifies the implications of defining an appropriate level of human protection, how to establish FSO and PO and how to implement them in practice, linked to each other through quantitative risk assessment models. The contents of this review should clarify the context for application of the results collected during the EU-funded project named BASELINE (www
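
    For orientation, the quantitative relation usually cited alongside FSO and PO (attributed to the ICMSF) is reproduced below; this is background from the risk-metrics literature rather than a formula stated in the abstract above.

```latex
% Commonly quoted ICMSF relation behind the FSO/PO metrics (log10 units): the initial
% hazard level, minus the total reduction, plus the total increase along the food
% chain must not exceed the Food Safety Objective.
\[
  H_0 \;-\; \Sigma R \;+\; \Sigma I \;\le\; \mathrm{FSO}
\]
% H_0: initial level of the hazard; \Sigma R: cumulative reduction (e.g. processing);
% \Sigma I: cumulative increase (growth, recontamination). Performance Objectives play
% the same role at intermediate steps of the chain.
```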

  20. Analysis of Subjects' Vulnerability in a Touch Screen Game Using Behavioral Metrics.

    Parsinejad, Payam; Sipahi, Rifat


    In this article, we report results on an experimental study conducted with volunteer subjects playing a touch-screen game with two unique difficulty levels. Subjects have knowledge about the rules of both game levels, but only sufficient playing experience with the easy level of the game, making them vulnerable with the difficult level. Several behavioral metrics associated with subjects' playing the game are studied in order to assess subjects' mental-workload changes induced by their vulnerability. Specifically, these metrics are calculated based on subjects' finger kinematics and decision making times, which are then compared with baseline metrics, namely, performance metrics pertaining to how well the game is played and a physiological metric called pnn50 extracted from heart rate measurements. In balanced experiments and supported by comparisons with baseline metrics, it is found that some of the studied behavioral metrics have the potential to be used to infer subjects' mental workload changes through different levels of the game. These metrics, which are decoupled from task specifics, relate to subjects' ability to develop strategies to play the game, and hence have the advantage of offering insight into subjects' task-load and vulnerability assessment across various experimental settings.
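
    The physiological baseline metric mentioned above, pNN50, is the percentage of successive RR-interval differences exceeding 50 ms; a minimal sketch with synthetic RR intervals follows.

```python
# Hedged sketch of the pnn50 baseline metric: percentage of successive RR-interval
# differences greater than 50 ms. The RR intervals below are synthetic.
import numpy as np

def pnn50(rr_intervals_ms):
    diffs = np.abs(np.diff(np.asarray(rr_intervals_ms, dtype=float)))
    return 100.0 * np.mean(diffs > 50.0)

print(pnn50([812, 790, 845, 860, 801, 799, 870]))  # synthetic RR intervals in ms
```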

  1. Rule acquisition in formal decision contexts based on formal, object-oriented and property-oriented concept lattices.

    Ren, Yue; Li, Jinhai; Aswani Kumar, Cherukuri; Liu, Wenqi


    Rule acquisition is one of the main purposes in the analysis of formal decision contexts. Up to now, there have been several types of rules in formal decision contexts such as decision rules, decision implications, and granular rules, which can be viewed as ∧-rules since all of them have the following form: "if conditions 1,2,…, and m hold, then decisions hold." In order to enrich the existing rule acquisition theory in formal decision contexts, this study puts forward two new types of rules which are called ∨-rules and ∨-∧ mixed rules based on formal, object-oriented, and property-oriented concept lattices. Moreover, a comparison of ∨-rules, ∨-∧ mixed rules, and ∧-rules is made from the perspectives of inclusion and inference relationships. Finally, some real examples and numerical experiments are conducted to compare the proposed rule acquisition algorithms with the existing one in terms of the running efficiency.
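
    A schematic reading of the two rule forms, evaluated against an object's attribute set, is sketched below; this is an informal illustration, not the formal concept-lattice construction used in the paper.

```python
# Schematic reading of the rule forms: an "and"-rule needs all of its conditions to
# hold, an "or"-rule needs at least one. Attribute names are arbitrary placeholders.
def and_rule_fires(conditions, obj_attributes):
    return conditions <= obj_attributes          # "if conditions 1..m all hold"

def or_rule_fires(conditions, obj_attributes):
    return bool(conditions & obj_attributes)     # "if any one condition holds"

obj = {"a1", "a3"}
print(and_rule_fires({"a1", "a2"}, obj))  # False
print(or_rule_fires({"a1", "a2"}, obj))   # True
```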

  2. Architecture For The Optimization Of A Machining Process In Real Time Through Rule-Based Expert System

    Serrano, Rafael; González, Luis Carlos; Martín, Francisco Jesús


    Under the project SENSOR-IA, which received financial funding from the Order of Incentives to the Regional Technology Centers of the Council of Innovation, Science and Enterprise of Andalusia, an architecture for the optimization of a machining process in real time through a rule-based expert system has been developed. The architecture consists of a sensor data acquisition and processing engine (SATD) and a rule-based expert system (SE) which communicates with the SATD. The SE has been designed as an inference engine with an algorithm for effective action, using a modus ponens rule model of goal-oriented rules. The pilot test demonstrated that it is possible to govern the machining process in real time based on rules contained in the SE. The tests have been done with approximate rules. Future work includes an exhaustive collection of data with different tool materials and geometries in a database to extract more precise rules.

  3. HIV-GRADE: a publicly available, rules-based drug resistance interpretation algorithm integrating bioinformatic knowledge.

    Obermeier, Martin; Pironti, Alejandro; Berg, Thomas; Braun, Patrick; Däumer, Martin; Eberle, Josef; Ehret, Robert; Kaiser, Rolf; Kleinkauf, Niels; Korn, Klaus; Kücherer, Claudia; Müller, Harm; Noah, Christian; Stürmer, Martin; Thielen, Alexander; Wolf, Eva; Walter, Hauke


    Genotypic drug resistance testing provides essential information for guiding treatment in HIV-infected patients. It may either be used for identifying patients with transmitted drug resistance or to clarify reasons for treatment failure and to check for remaining treatment options. While different approaches for the interpretation of HIV sequence information are already available, no other available rules-based system has specifically looked into the effects of combinations of drugs. HIV-GRADE (Genotypischer Resistenz Algorithmus Deutschland) was planned as a countrywide approach to establish standardized drug resistance interpretation in Germany and also to introduce rules for estimating the influence of mutations on drug combinations. The rules for HIV-GRADE are taken from the literature, clinical follow-up data and from a bioinformatics-driven interpretation system (geno2pheno[resistance]). HIV-GRADE presents the option of seeing the rules and results of other drug resistance algorithms for a given sequence simultaneously. The HIV-GRADE rules-based interpretation system was developed by the members of the HIV-GRADE registered society. For continuous updates, this expert committee meets twice a year to analyze data from various sources. Besides data from clinical studies and the centers involved, published correlations of mutations with drug resistance and genotype-phenotype correlation data from the bioinformatic models of geno2pheno are used to generate the rules for the HIV-GRADE interpretation system. A freely available online tool was developed on the basis of the Stanford HIVdb rules interpretation tool using the algorithm specification interface. Clinical validation of the interpretation system was performed on the data of treatment episodes consisting of sequence information, antiretroviral treatment and viral load, before and 3 months after treatment change. Data were analyzed using multiple linear regression. As the developed online

  4. Rule-based image interpretation with models of expected structure

    Shemlon, Stephen; Dunn, Stanley M.


    Consider the problem of localizing and identifying cell organelles in a transmission electron micrograph. Operations on regions constitute the key tasks in segmenting such images. The construction of meaningful entities from an initial fine partitioning of such an image poses problems that are generally linked to the type of objects to be identified. Restricting region manipulation algorithms to a particular class of images may simplify the process. However, the loss of retargetability for the segmentation process is a serious handicap of such a solution. Segmentation must be based on a formal mechanism for reasoning about scenes (cells), their images (transmission electron micrographs), objects in the scene (organelles) and their representation (image regions) if the system is to be suitable for a wide variety of domains. Mathematical morphology offers such a mechanism, but the drawbacks of point set topology limit its success to low-level vision tasks. A modification based on expected morphological properties of point sets, and not their fixed structure, is useful for intermediate vision. This modification makes it possible to develop a theory for region manipulation. The main goal of growing and shrinking is to obtain regions with a given expected structure. The operations involve a search in the region space for candidates satisfying specified conditions and capable of producing final regions that fit the desired goal subject to given constraints. In this paper we describe

  5. Rule Based Ensembles Using Pair Wise Neural Network Classifiers

    Moslem Mohammadi Jenghara


    Full Text Available In value estimation, the average of estimates from inexperienced people is a good approximation to the true value, provided that the answers of these individuals are independent. Classifier ensembles are the implementation of this principle in classification tasks and are investigated from two aspects. In the first, the feature space is divided into several local regions and each region is assigned a highly competent classifier; in the second, the base classifiers are applied in parallel and treated equally in some way to achieve a group consensus. In this paper a combination of the two methods is used. An important consideration in classifier combination is that much better results can be achieved if diverse classifiers, rather than similar classifiers, are combined. To achieve diversity in the classifiers' outputs, a symmetric pairwise weighted feature space is used and the outputs of the classifiers trained over the weighted feature space are combined to infer the final result. In this paper MLP classifiers are used as the base classifiers. The experimental results show that the applied method is promising.

  6. Canopy Height Estimation by Characterizing Waveform LiDAR Geometry Based on Shape-Distance Metric

    Ariel L. Salas, Eric; M. Henebry, Geoffrey


    .... Unlike any existing methods, we illustrate how the new Moment Distance (MD) framework can characterize the canopy height based on the geometry and return power of the LiDAR waveform without having to go through curve modeling processes...

  7. Real-time fault detection method based on belief rule base for aircraft navigation system

    Zhao Xin; Wang Shicheng; Zhang Jinsheng; Fan Zhiliang; Min Haibo


    Real-time and accurate fault detection is essential to enhance the reliability and safety of an aircraft navigation system. Existing detection methods based on analytical models fall short when gradual and sudden faults must be detected simultaneously. For this reason, we propose an online detection solution based on a non-analytical model. In this article, the navigation system fault detection model is established based on a belief rule base (BRB), where the system measuring residual and its rate of change are used as the inputs of the BRB model and the fault detection function as the output. To overcome the drawbacks of current parameter optimization algorithms for BRB and to achieve online updating, a recursive parameter estimation algorithm based on the expectation maximization (EM) algorithm is presented for the online BRB detection model. Furthermore, the proposed method is verified by a navigation experiment. Experimental results show that the proposed method is able to effectively realize online parameter estimation in the navigation system fault detection model. The output of the detection model tracks the fault state very well, and faults can be diagnosed accurately and in real time. In addition, the detection ability, especially the probability of false detection, is superior to that of the offline optimization method, and thus the system reliability is greatly improved.

  8. Prediction on carbon dioxide emissions based on fuzzy rules

    Pauzi, Herrini; Abdullah, Lazim


    There are several ways to predict air quality, varying from simple regression to models based on artificial intelligence. Most of the conventional methods are not able to provide sufficiently good forecasting performance due to problems with the non-linearity, uncertainty and complexity of the data. Artificial intelligence techniques are successfully used to model air quality in order to cope with these problems. This paper describes a fuzzy inference system (FIS) to predict CO2 emissions in Malaysia. Furthermore, an adaptive neuro-fuzzy inference system (ANFIS) is used to compare prediction performance. Data on five variables: energy use, gross domestic product per capita, population density, combustible renewables and waste, and CO2 intensity are employed in this comparative study. The results from the two proposed models are compared, and it is clearly shown that ANFIS outperforms FIS in CO2 prediction.

  9. Online elicitation of Mamdani-type fuzzy rules via TSK-based generalized predictive control.

    Mahfouf, M; Abbod, M F; Linkens, D A


    Many synergies have been proposed between soft-computing techniques, such as neural networks (NNs), fuzzy logic (FL), and genetic algorithms (GAs), which have shown that such hybrid structures can work well and also add more robustness to the control system design. In this paper, a new control architecture is proposed whereby the on-line generated fuzzy rules relating to the self-organizing fuzzy logic controller (SOFLC) are obtained via integration with the popular generalized predictive control (GPC) algorithm using a Takagi-Sugeno-Kang (TSK)-based controlled autoregressive integrated moving average (CARIMA) model structure. In this approach, GPC replaces the performance index (PI) table which, as an incremental model, is traditionally used to discover, amend, and delete the rules. Because the GPC sequence is computed using predicted future outputs, the new hybrid approach rewards the time-delay very well. The new generic approach, named generalized predictive self-organizing fuzzy logic control (GPSOFLC), is simulated on a well-known nonlinear chemical process, the distillation column, and is shown to produce an effective fuzzy rule-base in both qualitative (minimum number of generated rules) and quantitative (good rules) terms.

  10. A new algorithm to extract hidden rules of gastric cancer data based on ontology.

    Mahmoodi, Seyed Abbas; Mirzaie, Kamal; Mahmoudi, Seyed Mostafa


    Cancer is the leading cause of death in economically developed countries and the second leading cause of death in developing countries. Gastric cancers are among the most devastating and incurable forms of cancer, and their treatment may be excessively complex and costly. Data mining, a technology that is used to produce analytically useful information, has been employed successfully with medical data. Although the use of traditional data mining techniques such as association rules helps to extract knowledge from large data sets, sometimes the set of results obtained from a data set is so large that it becomes a major problem. In fact, one of the disadvantages of this technique is the large number of meaningless and redundant rules due to the lack of attention to the concept and meaning of the items or the samples. This paper presents a new method to discover association rules using ontology to solve these problems. This paper reports an ontology-based data mining study on a medical database containing clinical data on patients referred to the Imam Reza Hospital at Tabriz. The data set used in this paper was gathered from 490 random visitors to the Imam Reza Hospital at Tabriz who had been suspected of having gastric cancer. The proposed ontology-based data mining algorithm makes rules more intuitive, appealing and understandable, eliminates useless rules, and, as a minor result, significantly reduces the Apriori algorithm running time. The experimental results confirm the efficiency and advantages of this algorithm.

  11. A multilayer perceptron solution to the match phase problem in rule-based artificial intelligence systems

    Sartori, Michael A.; Passino, Kevin M.; Antsaklis, Panos J.


    In rule-based AI planning, expert, and learning systems, it is often the case that the left-hand-sides of the rules must be repeatedly compared to the contents of some 'working memory'. The traditional approach to solve such a 'match phase problem' for production systems is to use the Rete Match Algorithm. Here, a new technique using a multilayer perceptron, a particular artificial neural network model, is presented to solve the match phase problem for rule-based AI systems. A syntax for premise formulas (i.e., the left-hand-sides of the rules) is defined, and working memory is specified. From this, it is shown how to construct a multilayer perceptron that finds all of the rules which can be executed for the current situation in working memory. The complexity of the constructed multilayer perceptron is derived in terms of the maximum number of nodes and the required number of layers. A method for reducing the number of layers to at most three is also presented.
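
    The construction can be pictured with the small sketch below: each rule's left-hand side becomes one unit whose weights select its premises and whose threshold equals the premise count, so the unit fires only when working memory contains every premise. The facts and rules are invented examples, not the paper's encoding.

```python
# Sketch: a single perceptron-like layer that finds all rules whose premises are
# satisfied by the current working memory. Facts and rules are invented examples.
import numpy as np

facts = ["A", "B", "C", "D"]                            # possible working-memory elements
rules = {"r1": {"A", "B"}, "r2": {"C"}, "r3": {"B", "D"}}

W = np.array([[1.0 if f in prem else 0.0 for f in facts] for prem in rules.values()])
theta = np.array([len(p) for p in rules.values()])      # one threshold per rule

def matching_rules(working_memory):
    x = np.array([1.0 if f in working_memory else 0.0 for f in facts])
    fired = (W @ x) >= theta                            # conjunction via thresholding
    return [name for name, f in zip(rules, fired) if f]

print(matching_rules({"A", "B", "C"}))  # ['r1', 'r2']
```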

  12. Vehicular Networking Enhancement And Multi-Channel Routing Optimization, Based on Multi-Objective Metric and Minimum Spanning Tree

    Peppino Fazio


    Full Text Available Vehicular Ad hoc NETworks (VANETs) represent a particular mobile technology that permits communication among vehicles, offering security and comfort. Nowadays, distributed mobile wireless computing is becoming a very important communications paradigm, due to its flexibility to adapt to different mobile applications. VANETs are a practical example of data exchange among real mobile nodes. To enable communications within an ad-hoc network, characterized by continuous node movements, routing protocols are needed to react to frequent changes in network topology. In this paper, the attention is focused mainly on the network layer of VANETs, proposing a novel approach to reduce the interference level during mobile transmission, based on the multi-channel nature of the IEEE 802.11p (1609.4) standard. In this work a new routing protocol based on the Distance Vector algorithm is presented to reduce the end-to-end delay and to increase the packet delivery ratio (PDR) and throughput in VANETs. A new metric is also proposed, based on the maximization of the average Signal-to-Interference Ratio (SIR) level and the link duration probability between two VANET nodes. In order to relieve the effects of the co-channel interference perceived by mobile nodes, transmission channels are switched on the basis of a periodic SIR evaluation. A network simulator has been used for implementing and testing the proposed idea.

  13. Rule-Based Classification of Chemical Structures by Scaffold.

    Schuffenhauer, Ansgar; Varin, Thibault


    Databases of small organic chemical molecules usually contain millions of structures. The screening decks of pharmaceutical companies contain more than a million structures. Nevertheless, chemical substructure searching in these databases can be performed interactively in seconds. Because of this, nobody has really missed structural classification of these databases for the purpose of finding data for individual chemical substructures. However, a full-deck high-throughput screen also produces activity data for more than a million substances. How can this amount of data be analyzed? Which are the active scaffolds identified by an assay? To answer such questions, systematic classifications of molecules by scaffolds are needed. In this review it is described how molecules can be hierarchically classified by their scaffolds. It is explained how such classifications can be used to identify active scaffolds in an HTS data set. Once active classes are identified, they need to be visualized in the context of related scaffolds in order to understand SAR. Consequently, such visualizations are another topic of this review. In addition, scaffold-based diversity measures are discussed and an outlook is given on the potential impact of structural classifications on a chemically aware semantic web.

  14. Sound quality prediction based on systematic metric selection and shrinkage: Comparison of stepwise, lasso, and elastic-net algorithms and clustering preprocessing

    Gauthier, Philippe-Aubert; Scullion, William; Berry, Alain


    Sound quality is the impression of quality that is transmitted by the sound of a device. Its importance in sound and acoustical design of consumer products no longer needs to be demonstrated. One of the challenges is the creation of a prediction model that is able to predict the results of a listening test while using metrics derived from the sound stimuli. Often, these models are either derived using linear regression on a limited set of experimenter-selected metrics, or using more complex algorithms such as neural networks. In the former case, the user-selected metrics can bias the model and reflect the engineer pre-conceived idea of sound quality while missing potential features. In the latter case, although prediction might be efficient, the model is often in the form of a black-box which is difficult to use as a sound design guideline for engineers. In this paper, preprocessing by participants clustering and three different algorithms are compared in order to construct a sound quality prediction model that does not suffer from these limitations. The lasso, elastic-net and stepwise algorithms are tested for listening tests of consumer product for which 91 metrics are used as potential predictors. Based on the reported results, it is shown that the most promising algorithm is the lasso which is able to (1) efficiently limit the number of metrics, (2) most accurately predict the results of listening tests, and (3) provide a meaningful model that can be used as understandable design guidelines.
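
    A hedged sketch of the lasso step follows: many candidate metrics enter, the L1 penalty drives most coefficients to zero, and the surviving metrics form a small interpretable model. The data are synthetic; in the study the predictors are the 91 computed metrics.

```python
# Sketch: L1-penalized selection of sound-quality metrics with cross-validated lasso.
# Listening-test scores and metric values below are synthetic placeholders.
import numpy as np
from sklearn.linear_model import LassoCV
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
X = rng.normal(size=(60, 91))                               # 60 stimuli x 91 metrics (synthetic)
y = 2.0 * X[:, 0] - 1.5 * X[:, 4] + rng.normal(0, 0.5, 60)  # listening-test scores (synthetic)

Xs = StandardScaler().fit_transform(X)                      # put metrics on a common scale
lasso = LassoCV(cv=5).fit(Xs, y)
selected = np.flatnonzero(lasso.coef_)
print(len(selected), selected[:10])                         # how few metrics survive, and which
```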

  15. Prediction of speech intelligibility based on a correlation metric in the envelope power spectrum domain

    Relano-Iborra, Helia; May, Tobias; Zaar, Johannes

    -based Envelope Power Spectrum Model (mr-sEPSM) [2], combined with the correlation back-end of the Short-Time Objective Intelligibility measure (STOI) [3]. The sEPSMcorr can accurately predict NH data for a broad range of listening conditions, e.g., additive noise, phase jitter and ideal binary mask processing....

  16. Similarity-Based Restoration of Metrical Information: Different Listening Experiences Result in Different Perceptual Inferences

    Creel, Sarah C.


    How do perceivers apply knowledge to instances they have never experienced before? On one hand, listeners might use idealized representations that do not contain specific details. On the other, they might recognize and process information based on more detailed memory representations. The current study examined the latter possibility with respect…

  17. Designing Industrial Networks Using Ecological Food Web Metrics.

    Layton, Astrid; Bras, Bert; Weissburg, Marc


    Biologically Inspired Design (biomimicry) and Industrial Ecology both look to natural systems to enhance the sustainability and performance of engineered products, systems and industries. Bioinspired design (BID) traditionally has focused on a unit operation and single product level. In contrast, this paper describes how principles of network organization derived from analysis of ecosystem properties can be applied to industrial system networks. Specifically, this paper examines the applicability of particular food web matrix properties as design rules for economically and biologically sustainable industrial networks, using an optimization model developed for a carpet recycling network. Carpet recycling network designs based on traditional cost and emissions based optimization are compared to designs obtained using optimizations based solely on ecological food web metrics. The analysis suggests that networks optimized using food web metrics also were superior from a traditional cost and emissions perspective; correlations between optimization using ecological metrics and traditional optimization ranged generally from 0.70 to 0.96, with flow-based metrics being superior to structural parameters. Four structural food parameters provided correlations nearly the same as that obtained using all structural parameters, but individual structural parameters provided much less satisfactory correlations. The analysis indicates that bioinspired design principles from ecosystems can lead to both environmentally and economically sustainable industrial resource networks, and represent guidelines for designing sustainable industry networks.

  18. Comparing concentration-based (AOT40) and stomatal uptake (PODY) metrics for ozone risk assessment to European forests.

    Anav, Alessandro; De Marco, Alessandra; Proietti, Chiara; Alessandri, Andrea; Dell'Aquila, Alessandro; Cionni, Irene; Friedlingstein, Pierre; Khvorostyanov, Dmitry; Menut, Laurent; Paoletti, Elena; Sicard, Pierre; Sitch, Stephen; Vitale, Marcello


    Tropospheric ozone (O3) produces harmful effects to forests and crops, leading to a reduction of land carbon assimilation that, consequently, influences the land sink and the crop yield production. To assess the potential negative O3 impacts to vegetation, the European Union uses the Accumulated Ozone over Threshold of 40 ppb (AOT40). This index has been chosen for its simplicity and flexibility in handling different ecosystems as well as for its linear relationships with yield or biomass loss. However, AOT40 does not give any information on the physiological O3 uptake into the leaves since it does not include any environmental constraints to O3 uptake through stomata. Therefore, an index based on stomatal O3 uptake (i.e. PODY), which describes the amount of O3 entering into the leaves, would be more appropriate. Specifically, the PODY metric considers the effects of multiple climatic factors, vegetation characteristics and local and phenological inputs rather than the only atmospheric O3 concentration. For this reason, the use of PODY in the O3 risk assessment for vegetation is becoming recommended. We compare different potential O3 risk assessments based on two methodologies (i.e. AOT40 and stomatal O3 uptake) using a framework of mesoscale models that produces hourly meteorological and O3 data at high spatial resolution (12 km) over Europe for the time period 2000-2005. Results indicate a remarkable spatial and temporal inconsistency between the two indices, suggesting that a new definition of European legislative standard is needed in the near future. Besides, our risk assessment based on AOT40 shows a good consistency compared to both in-situ data and other model-based datasets. Conversely, risk assessment based on stomatal O3 uptake shows different spatial patterns compared to other model-based datasets. This strong inconsistency can be likely related to a different vegetation cover and its associated parameterizations.
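
    For reference, AOT40 accumulates hourly ozone excesses over 40 ppb during daylight hours of the growing season; the sketch below shows the calculation on synthetic data (the daylight window and units are assumptions, as definitions vary between regulations).

```python
# Sketch of the concentration-based index: sum of hourly exceedances of 40 ppb during
# daylight hours, giving a result in ppb*h here. The 08-20 h window is an assumption.
import numpy as np

def aot40(hourly_o3_ppb, hours_of_day, daylight=(8, 20)):
    o3 = np.asarray(hourly_o3_ppb, dtype=float)
    h = np.asarray(hours_of_day)
    day_mask = (h >= daylight[0]) & (h < daylight[1])
    return np.sum(np.clip(o3[day_mask] - 40.0, 0.0, None))

hours = np.tile(np.arange(24), 30)                       # one synthetic month of hourly data
o3 = 35 + 25 * np.sin((hours - 6) / 24 * 2 * np.pi)      # simple diurnal cycle, ppb
print(aot40(o3, hours))
```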

  19. A metric-based assessment of flood risk and vulnerability of rural communities in the Lower Shire Valley, Malawi

    Adeloye, A. J.; Mwale, F. D.; Dulanya, Z.


    In response to the increasing frequency and economic damages of natural disasters globally, disaster risk management has evolved to incorporate risk assessments that are multi-dimensional, integrated and metric-based. This is to support knowledge-based decision making and hence sustainable risk reduction. In Malawi and most of Sub-Saharan Africa (SSA), however, flood risk studies remain focussed on understanding causation, impacts, perceptions and coping and adaptation measures. Using the IPCC Framework, this study has quantified and profiled risk to flooding of rural, subsistent communities in the Lower Shire Valley, Malawi. Flood risk was obtained by integrating hazard and vulnerability. Flood hazard was characterised in terms of flood depth and inundation area obtained through hydraulic modelling in the valley with Lisflood-FP, while the vulnerability was indexed through analysis of exposure, susceptibility and capacity that were linked to social, economic, environmental and physical perspectives. Data on these were collected through structured interviews of the communities. The implementation of the entire analysis within GIS enabled the visualisation of spatial variability in flood risk in the valley. The results show predominantly medium levels in hazardousness, vulnerability and risk. The vulnerability is dominated by a high to very high susceptibility. Economic and physical capacities tend to be predominantly low but social capacity is significantly high, resulting in overall medium levels of capacity-induced vulnerability. Exposure manifests as medium. The vulnerability and risk showed marginal spatial variability. The paper concludes with recommendations on how these outcomes could inform policy interventions in the Valley.

  20. Realization of the English Assisted Learning System Based on Rule Mining

    Li Kun


    Full Text Available This paper first gives a brief introduction to the research progress of artificial intelligence, then introduces the basic structure of the English assisted learning system from the angle of system functional requirements, and finally discusses the realization of the functions of the English assisted learning system with the support of rule-based data mining, aiming to attract more attention.

  1. Improved Personalized Recommendation Based on Causal Association Rule and Collaborative Filtering

    Lei, Wu; Qing, Fang; Zhou, Jin


    There are usually limited user evaluations of resources on a recommender system, which results in an extremely sparse user rating matrix, and this greatly reduces the accuracy of personalized recommendation, especially for new users or new items. This paper presents a recommendation method based on rating prediction using causal association rules.…

  2. The Books Recommend Service System Based on Improved Algorithm for Mining Association Rules



    The Apriori algorithm is a classical method of association rule mining. Based on an analysis of this theory, the paper provides an improved Apriori algorithm. The improved algorithm combines a hash table technique with the reduction of candidate item sets to enhance the usage efficiency of resources as well as the individualized service of the data library.
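
    The hash-table idea can be sketched as follows: while scanning for frequent 1-itemsets, candidate 2-itemsets are hashed into buckets, and pairs whose bucket count falls below the support threshold are pruned before the second scan (in the spirit of DHP-style refinements; the details here are illustrative, not the paper's exact algorithm).

```python
# Illustrative sketch of hash-based candidate pruning for Apriori's second pass.
from itertools import combinations
from collections import Counter

def hashed_candidate_pairs(transactions, min_support, n_buckets=101):
    item_counts, buckets = Counter(), Counter()
    for t in transactions:
        item_counts.update(t)                            # first scan: 1-itemset counts
        for pair in combinations(sorted(t), 2):
            buckets[hash(pair) % n_buckets] += 1         # hash 2-itemsets into buckets
    frequent_items = {i for i, c in item_counts.items() if c >= min_support}
    # Keep only candidate pairs of frequent items whose bucket is frequent enough.
    return [p for p in combinations(sorted(frequent_items), 2)
            if buckets[hash(p) % n_buckets] >= min_support]

txns = [{"a", "b", "c"}, {"a", "b"}, {"a", "c"}, {"b", "c"}, {"a", "b", "c"}]
print(hashed_candidate_pairs(txns, min_support=3))
```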

  3. A Rule-based Track Anomaly Detection Algorithm for Maritime Force Protection


    likely to perform better with AIS data than with primary radar data. Rule-based algorithms are transparent, easy to use, and use less computation...

  4. Attempts to Dodge Drowning in Data : Rule- and Risk-Based Anti Money Laundering Policies Compared

    Unger, B.; van Waarden, F.

    Both in the US and in Europe anti money laundering policy switched from a rule-to a risk-based reporting system in order to avoid over-reporting by the private sector. However, reporting increased in most countries, while the quality of information decreased. Governments drowned in data because

  5. Rule-based versus probabilistic selection for active surveillance using three definitions of insignificant prostate cancer

    L.D.F. Venderbos (Lionne); M.J. Roobol-Bouts (Monique); C.H. Bangma (Chris); R.C.N. van den Bergh (Roderick); L.P. Bokhorst (Leonard); D. Nieboer (Daan); Godtman, R; J. Hugosson (Jonas); van der Kwast, T; E.W. Steyerberg (Ewout)


    To study whether probabilistic selection by the use of a nomogram could improve patient selection for active surveillance (AS) compared to the various sets of rule-based AS inclusion criteria currently used. We studied Dutch and Swedish patients participating in the European Randomized s

  6. Attempts to Dodge Drowning in Data : Rule- and Risk-Based Anti Money Laundering Policies Compared

    Unger, B.; van Waarden, F.


    Both in the US and in Europe anti money laundering policy switched from a rule-to a risk-based reporting system in order to avoid over-reporting by the private sector. However, reporting increased in most countries, while the quality of information decreased. Governments drowned in data because priv

  7. Microtubule-based transport -basic mechanisms, traffic rules and role in neurological pathogenesis

    M.A.M. Franker (Mariella); C.C. Hoogenraad (Casper)


    Microtubule-based transport is essential for neuronal function because of the large distances that must be traveled by various building blocks and cellular materials. Recent studies in various model systems have unraveled several regulatory mechanisms and traffic rules that control the s

  8. Re-Evaluation of Acid-Base Prediction Rules in Patients with Chronic Respiratory Acidosis

    Tereza Martinu


    Full Text Available RATIONALE: The prediction rules for the evaluation of the acid-base status in patients with chronic respiratory acidosis, derived primarily from an experimental canine model, suggest that complete compensation should not occur. This appears to contradict frequent observations of normal or near-normal pH levels in patients with chronic hypercapnia.

  9. Development and Validation of a Rule-Based Strength Scaling Method for Musculoskeletal Modelling

    Oomen, Pieter; Annegarn, Janneke; Rasmussen, John


    Rule-based strength scaling is an easy, cheap and relatively accurate technique to personalize musculoskeletal (MS) models. This paper presents a new strength scaling approach for MS models and validates it using maximal voluntary contractions (MVCs). A heterogeneous group of 63 healthy subjects...

  10. Rule Based Approach for Arabic Part of Speech Tagging and Name Entity Recognition

    Mohammad Hjouj Btoush


    Full Text Available The aim of this study is to build a tool for part-of-speech (POS) tagging and named-entity recognition for the Arabic language; the approach used to build this tool is a rule-based technique. The POS tagger contains two phases: the first phase passes each word through a lexicon phase, the second is the morphological phase, and the tagset is (Noun, Verb, Determiner). The named-entity detector applies rules to the text and gives the correct label for each word; the labels are Person (PERS), Location (LOC) and Organization (ORG).

  11. Ant-based extraction of rules in simple decision systems over ontological graphs

    Pancerz Krzysztof


    Full Text Available In the paper, the problem of extraction of complex decision rules in simple decision systems over ontological graphs is considered. The extracted rules are consistent with the dominance principle similar to that applied in the dominance-based rough set approach (DRSA). In our study, we propose to use a heuristic algorithm, utilizing the ant-based clustering approach, which searches the semantic spaces of concepts presented by means of ontological graphs. Concepts included in the semantic spaces are values of attributes describing objects in simple decision systems.

  12. Modeling Languages: metrics and assessing tools

    Fonte, Daniela; Boas, Ismael Vilas; Azevedo, José; Peixoto, José João; Faria, Pedro; Silva, Pedro; Sá, Tiago de, 1990-; Costa, Ulisses; da Cruz, Daniela; Henriques, Pedro Rangel


    Any traditional engineering field has metrics to rigorously assess the quality of their products. Engineers know that the output must satisfy the requirements, must comply with the production and market rules, and must be competitive. Professionals in the new field of software engineering started a few years ago to define metrics to appraise their product: individual programs and software systems. This concern motivates the need to assess not only the outcome but also the process and tools em...

  13. Multilevel Association Rule Mining for Bridge Resource Management Based on Immune Genetic Algorithm

    Yang Ou


    Full Text Available This paper is concerned with the problem of multilevel association rule mining for bridge resource management (BRM), which was announced by the IMO in 2010. The goal of this paper is to mine the association rules between the items of BRM and vessel accidents. However, because only indirect data can be collected, which seem useless for analyzing the relationship between the items of BRM and the accidents, cross-level association rules need to be studied to build the relation between the indirect data and the items of BRM. In this paper, firstly, a cross-level coding scheme for mining the multilevel association rules is proposed. Secondly, we execute an immune genetic algorithm with the coding scheme for analyzing BRM. Thirdly, based on basic maritime investigation reports, some important association rules among the items of BRM are mined and studied. Finally, according to the results of the analysis, we provide suggestions for the work of seafarer training, assessment, and management.

  14. Optimizing Fuzzy Rule Base for Illumination Compensation in Face Recognition using Genetic Algorithms

    Bima Sena Bayu Dewantara


    Full Text Available Fuzzy rule optimization is a challenging step in the development of a fuzzy model. A simple two-input fuzzy model may have thousands of combinations of fuzzy rules when it deals with a large number of input variations. Intuitive, trial-and-error determination of fuzzy rules is very difficult. This paper addresses the problem of optimizing a fuzzy rule base using a genetic algorithm to compensate for illumination effects in face recognition. Since uneven illumination contributes negative effects to the performance of face recognition, those effects must be compensated. We have developed a novel algorithm based on a reflectance model to compensate for the effect of illumination on human face recognition. We build a pair of models from a single image and reason over those models using fuzzy logic. The fuzzy rules are then optimized using a genetic algorithm. This approach spends less computation cost while still keeping high performance. Based on the experimental results, we show that our algorithm is feasible for recognizing the desired person under variable lighting conditions with faster computation time. Keywords: Face recognition, harsh illumination, reflectance model, fuzzy, genetic algorithm

  15. Rule-bases construction through self-learning for a table-based Sugeno-Takagi fuzzy logic control system

    C. Boldisor


    Full Text Available A self-learning-based methodology for building the rule base of a fuzzy logic controller (FLC) is presented and verified, aiming to add intelligent characteristics to fuzzy logic control systems. The methodology is a simplified version of those presented in today's literature. Some aspects are intentionally ignored, since they rarely appear in control system engineering, and a SISO process is considered here. The fuzzy inference system obtained is a table-based Sugeno-Takagi type. The system's desired performance is defined by a reference model, and rules are extracted from recorded data after the correct control actions are learned. The presented algorithm is tested in constructing the rule base of a fuzzy controller for a DC drive application. The system's performance and the method's viability are analyzed.

  16. Integration of Rule Based Expert Systems and Case Based Reasoning in an Acute Bacterial Meningitis Clinical Decision Support System

    Cabrera, Mariana Maceiras


    This article presents the results of research carried out on the development of a medical diagnostic system applied to Acute Bacterial Meningitis, using the Case-Based Reasoning methodology. The research was focused on the implementation of the adaptation stage, through the integration of Case-Based Reasoning and Rule-Based Expert Systems. In this adaptation stage we use a higher-level CBR that stores and allows reutilizing change experiences, combined with a classic rule-based inference engine. In order to take into account the most evident clinical situation, a pre-diagnosis stage is implemented using a rule engine that, given an evident situation, emits the corresponding diagnosis and avoids the complete process.

  17. Adaptive ant-based routing in wireless sensor networks using Energy Delay metrics

    Yao-feng WEN; Yu-quan CHEN; Min PAN


    To find the optimal routing is always an important topic in wireless sensor networks (WSNs). Considering a WSN where the nodes have limited energy, we propose a novel Energy*Delay model based on ant algorithms ("E&D ANTS" for short) to minimize the time delay in transferring a fixed number of data packets in an energy-constrained manner in one round. Our goal is not only to maximize the lifetime of the network but also to provide real-time data transmission services. However, because of the tradeoff between energy and delay in wireless network systems, the reinforcement learning (RL) algorithm is introduced to train the model. In this paper, the paradigm of E&D ANTS is explicated and compared to other ant-based routing algorithms such as AntNet and AntChain with respect to routing information, routing overhead and adaptation. Simulation results show that our method performs about seven times better than AntNet and also outperforms AntChain by more than 150% in terms of energy cost and delay per round.

  18. Adding A Spending Metric To Medicare's Value-Based Purchasing Program Rewarded Low-Quality Hospitals.

    Das, Anup; Norton, Edward C; Miller, David C; Ryan, Andrew M; Birkmeyer, John D; Chen, Lena M


    In fiscal year 2015 the Centers for Medicare and Medicaid Services expanded its Hospital Value-Based Purchasing program by rewarding or penalizing hospitals for their performance on both spending and quality. This represented a sharp departure from the program's original efforts to incentivize hospitals for quality alone. How this change redistributed hospital bonuses and penalties was unknown. Using data from 2,679 US hospitals that participated in the program in fiscal years 2014 and 2015, we found that the new emphasis on spending rewarded not only low-spending hospitals but some low-quality hospitals as well. Thirty-eight percent of low-spending hospitals received bonuses in fiscal year 2014, compared to 100 percent in fiscal year 2015. However, low-quality hospitals also began to receive bonuses (0 percent in fiscal year 2014 compared to 17 percent in 2015). All high-quality hospitals received bonuses in both years. The Centers for Medicare and Medicaid Services should consider incorporating a minimum quality threshold into the Hospital Value-Based Purchasing program to avoid rewarding low-quality, low-spending hospitals.

  19. A rule-based expert system for chemical prioritization using effects-based chemical categories.

    Schmieder, P K; Kolanczyk, R C; Hornung, M W; Tapper, M A; Denny, J S; Sheedy, B R; Aladjov, H


    A rule-based expert system (ES) was developed to predict chemical binding to the estrogen receptor (ER), patterned on the research approaches championed by Gilman Veith, to whom this article and journal issue are dedicated. The ERES was built to be mechanistically transparent and to meet the needs of a specific application, i.e. to predict for all chemicals within two well-defined inventories (industrial chemicals used as pesticide inerts and antimicrobial pesticides). These chemicals all lack structural features associated with high affinity binders, and thus any binding should be low affinity. Similar to the high-quality fathead minnow database upon which Veith QSARs were built, the ERES was derived from what has been termed gold standard data, systematically collected in assays optimized to detect even low affinity binding and to maximize confidence in the negative determinations. The resultant logic-based decision tree ERES, determined to be a robust model, contains seven major nodes with multiple effects-based chemical categories within each. Predicted results are presented in the context of empirical data within local chemical structural groups, facilitating informed decision-making. Even using optimized detection assays, the ERES applied to two inventories of >600 chemicals resulted in only ~5% of the chemicals predicted to bind ER.


    ZHANG Zhiying; LI Zhen; JIANG Zhibin


    Computer-aided block assembly process planning based on rule reasoning is developed in order to improve assembly efficiency and automate the generation of block assembly process plans in shipbuilding. First, a weighted directed liaison graph (WDLG) is proposed to represent the model of the block assembly process according to the characteristics of assembly relations, and an edge list (EL) is used to describe assembly sequences. Shapes and assembly attributes of block parts are analyzed to determine the assembly positions and matching parts of frequently used parts. Then, a series of assembly rules is generalized, and assembly sequences for the block are obtained by means of rule reasoning. Finally, a prototype system for computer-aided block assembly process planning is built. The system has been tested on an actual block, and the results were found to be quite efficient. Meanwhile, the foundation for the automation of block assembly process generation and integration with other systems is established.

  1. ChemTok: A New Rule Based Tokenizer for Chemical Named Entity Recognition

    Abbas Akkasi


    Full Text Available Named Entity Recognition (NER from text constitutes the first step in many text mining applications. The most important preliminary step for NER systems using machine learning approaches is tokenization, where raw text is segmented into tokens. This study proposes an enhanced rule based tokenizer, ChemTok, which utilizes rules extracted mainly from the training data set. The main novelty of ChemTok is the use of the extracted rules to merge the tokens split in the previous steps, thus producing longer and more discriminative tokens. ChemTok is compared to the tokenization methods utilized by ChemSpot and tmChem. Support Vector Machines and Conditional Random Fields are employed as the learning algorithms. The experimental results show that the classifiers trained on the output of ChemTok outperform all classifiers trained on the output of the other two tokenizers in terms of classification performance and the number of incorrectly segmented entities.
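
    The merge idea described above can be sketched as follows: the text is first split aggressively, then adjacent tokens are re-joined whenever a rule fires. The two rules below (digit followed by a letter, or a token ending in a hyphen) are illustrative assumptions, not the rules ChemTok extracts from its training corpus.

```python
import re

def base_tokenize(text):
    # aggressive first-pass split into word and punctuation tokens
    return re.findall(r"\w+|[^\w\s]", text)

def should_merge(left, right):
    # illustrative merge rules only
    return (left[-1].isdigit() and right[0].isalpha()) or left.endswith("-")

def chem_tokenize(text):
    merged = []
    for tok in base_tokenize(text):
        if merged and should_merge(merged[-1], tok):
            merged[-1] = merged[-1] + tok
        else:
            merged.append(tok)
    return merged

print(chem_tokenize("1,2-dimethylbenzene reacts with 2 mol of HCl"))
```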

  2. Long-Term Homeostatic Properties Complementary to Hebbian Rules in CuPc-Based Multifunctional Memristor

    Wang, Laiyuan; Wang, Zhiyong; Lin, Jinyi; Yang, Jie; Xie, Linghai; Yi, Mingdong; Li, Wen; Ling, Haifeng; Ou, Changjin; Huang, Wei


    Most simulations of neuroplasticity in memristors, which are potentially used to develop artificial synapses, are confined to the basic biological Hebbian rules. However, the simplex rules can potentially induce excessive excitation/inhibition, even collapse of neural activities, because they neglect the properties of long-term homeostasis involved in the frameworks of realistic neural networks. Here, we develop organic CuPc-based memristors whose excitatory and inhibitory conductivities can implement both Hebbian rules and homeostatic plasticity, complementary to Hebbian patterns and conducive to long-term homeostasis. In another adaptive situation for homeostasis, in thicker samples, the overall excitation under periodic moderate stimuli tends to decrease and is recovered under intense inputs. Interestingly, the prototypes can be equipped with bio-inspired habituation and sensitization functions outperforming the conventional simplified algorithms. They mutually regulate each other to obtain homeostasis. Therefore, we develop a novel versatile memristor with advanced synaptic homeostasis for comprehensive neural functions.

  3. Predicting speech intelligibility based on a correlation metric in the envelope power spectrum domain

    Relaño-Iborra, Helia; May, Tobias; Zaar, Johannes;


    by the short-time objective intelligibility measure [STOI; Taal, Hendriks, Heusdens, and Jensen (2011). IEEE Trans. Audio Speech Lang. Process. 19(7), 2125–2136]. This “hybrid” model, named sEPSMcorr, is shown to account for the effects of stationary and fluctuating additive interferers as well...... of fluctuating interferers), albeit with lower accuracy than the source models in some individual conditions. Similar to other models that employ a short-term correlation-based back end, including STOI, the proposed model fails to account for the effects of room reverberation on speech intelligibility. Overall......, the model might be valuable for evaluating the effects of a large range of interferers and distortions on speech intelligibility, including consequences of hearing impairment and hearing-instrument signal processing....

  4. Hierarchical graphs for better annotations of rule-based models of biochemical systems

    Hu, Bin [Los Alamos National Laboratory]; Hlavacek, William [Los Alamos National Laboratory]


    In the graph-based formalism of the BioNetGen language (BNGL), graphs are used to represent molecules, with a colored vertex representing a component of a molecule, a vertex label representing the internal state of a component, and an edge representing a bond between components. Components of a molecule share the same color. Furthermore, graph-rewriting rules are used to represent molecular interactions, with a rule that specifies addition (removal) of an edge representing a class of association (dissociation) reactions and with a rule that specifies a change of vertex label representing a class of reactions that affect the internal state of a molecular component. A set of rules comprises a mathematical/computational model that can be used to determine, through various means, the system-level dynamics of molecular interactions in a biochemical system. Here, for purposes of model annotation, we propose an extension of BNGL that involves the use of hierarchical graphs to represent (1) relationships among components and subcomponents of molecules and (2) relationships among classes of reactions defined by rules. We illustrate how hierarchical graphs can be used to naturally document the structural organization of the functional components and subcomponents of two proteins: the protein tyrosine kinase Lck and the T cell receptor (TCR)/CD3 complex. Likewise, we illustrate how hierarchical graphs can be used to document the similarity of two related rules for kinase-catalyzed phosphorylation of a protein substrate. We also demonstrate how a hierarchical graph representing a protein can be encoded in an XML-based format.

  5. The effects of age on associative and rule-based causal learning and generalization.

    Mutter, Sharon A; Plumlee, Leslie F


    We assessed how age influences associative and rule-based processes in causal learning using the Shanks and Darby (1998) concurrent patterning discrimination task. In Experiment 1, participants were divided into groups based on their learning performance after 6 blocks of training trials. High discrimination mastery young adults learned the patterning discrimination more rapidly and accurately than moderate mastery young adults. They were also more likely to induce the patterning rule and use this rule to generate predictions for novel cues, whereas moderate mastery young adults were more likely to use cue similarity as the basis for their predictions. Like moderate mastery young adults, older adults used similarity-based generalization for novel cues, but they did not achieve the same level of patterning discrimination. In Experiment 2, young and older adults were trained to the same learning criterion. Older adults again showed deficits in patterning discrimination and, in contrast to young adults, even when they reported awareness of the patterning rule, they used only similarity-based generalization in their predictions for novel cues. These findings suggest that it is important to consider how the ability to code or use cue representations interacts with the requirements of the causal learning task. In particular, age differences in causal learning seem to be greatest for tasks that require rapid coding of configural representations to control associative interference between similar cues. Configural coding may also be related to the success of rule-based processes in these types of learning tasks. PsycINFO Database Record (c) 2014 APA, all rights reserved.

  6. On the effects of adaptive reservoir operating rules in hydrological physically-based models

    Giudici, Federico; Anghileri, Daniela; Castelletti, Andrea; Burlando, Paolo


    Recent years have seen a significant increase of the human influence on natural systems both at the global and local scale. Accurately modeling the human component and its interaction with the natural environment is key to characterize the real system dynamics and anticipate future potential changes to the hydrological regimes. Modern distributed, physically-based hydrological models are able to describe hydrological processes with a high level of detail and high spatiotemporal resolution. Yet, they lack sophistication for the behavioral component, and human decisions are usually described by very simplistic rules, which might underperform in reproducing the catchment dynamics. In the case of water reservoir operators, these simplistic rules usually consist of target-level rule curves, which represent the average historical level trajectory. Whilst these rules can reasonably reproduce the average seasonal water volume shifts due to the reservoirs' operation, they cannot properly represent peculiar conditions which influence the actual reservoirs' operation, e.g., variations in energy price or water demand, dry or wet meteorological conditions. Moreover, target-level rule curves are not suitable to explore the water system response to climate and socio-economic changing contexts, because they assume a business-as-usual operation. In this work, we quantitatively assess how the inclusion of adaptive reservoirs' operating rules into physically-based hydrological models contributes to the proper representation of the hydrological regime at the catchment scale. In particular, we contrast target-level rule curves and detailed optimization-based behavioral models. We first perform the comparison on past observational records, showing that target-level rule curves underperform in representing the hydrological regime over multiple time scales (e.g., weekly, seasonal, inter-annual). Then, we compare how future hydrological changes are affected by the two modeling
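
    To make the contrasted baseline concrete, the sketch below implements a target-level rule curve of the kind described above: the release is whatever brings storage toward the average historical level for that day of the year, capped by a maximum release. The monthly targets and reservoir figures are illustrative assumptions, not from the study.

```python
import numpy as np

# target storage as a fraction of capacity, one value per month (assumed values)
month_targets = np.array([0.55, 0.50, 0.45, 0.40, 0.45, 0.55,
                          0.70, 0.80, 0.85, 0.80, 0.70, 0.60])

def rule_curve_release(day_of_year, storage, inflow, capacity, max_release):
    # interpolate the target level for this day from the monthly curve
    month_days = np.linspace(15, 350, 12)
    target = np.interp(day_of_year, month_days, month_targets) * capacity
    desired_release = max(storage + inflow - target, 0.0)
    return min(desired_release, max_release)

print(rule_curve_release(day_of_year=120, storage=60.0, inflow=2.5,
                         capacity=100.0, max_release=8.0))
```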

  7. Random Kaehler metrics

    Ferrari, Frank [Service de Physique Theorique et Mathematique, Universite Libre de Bruxelles and International Solvay Institutes, Campus de la Plaine, CP 231, 1050 Bruxelles (Belgium)]; Klevtsov, Semyon [Service de Physique Theorique et Mathematique, Universite Libre de Bruxelles and International Solvay Institutes, Campus de la Plaine, CP 231, 1050 Bruxelles (Belgium); ITEP, B. Cheremushkinskaya 25, Moscow 117218 (Russian Federation)]; Zelditch, Steve [Department of Mathematics, Northwestern University, Evanston, IL 60208 (United States)]


    The purpose of this article is to propose a new method to define and calculate path integrals over metrics on a Kaehler manifold. The main idea is to use finite dimensional spaces of Bergman metrics, as an approximation to the full space of Kaehler metrics. We use the theory of large deviations to decide when a sequence of probability measures on the spaces of Bergman metrics tends to a limit measure on the space of all Kaehler metrics. Several examples are considered.

  8. Exact hybrid particle/population simulation of rule-based models of biochemical systems.

    Justin S Hogg


    Full Text Available Detailed modeling and simulation of biochemical systems is complicated by the problem of combinatorial complexity, an explosion in the number of species and reactions due to myriad protein-protein interactions and post-translational modifications. Rule-based modeling overcomes this problem by representing molecules as structured objects and encoding their interactions as pattern-based rules. This greatly simplifies the process of model specification, avoiding the tedious and error prone task of manually enumerating all species and reactions that can potentially exist in a system. From a simulation perspective, rule-based models can be expanded algorithmically into fully-enumerated reaction networks and simulated using a variety of network-based simulation methods, such as ordinary differential equations or Gillespie's algorithm, provided that the network is not exceedingly large. Alternatively, rule-based models can be simulated directly using particle-based kinetic Monte Carlo methods. This "network-free" approach produces exact stochastic trajectories with a computational cost that is independent of network size. However, memory and run time costs increase with the number of particles, limiting the size of system that can be feasibly simulated. Here, we present a hybrid particle/population simulation method that combines the best attributes of both the network-based and network-free approaches. The method takes as input a rule-based model and a user-specified subset of species to treat as population variables rather than as particles. The model is then transformed by a process of "partial network expansion" into a dynamically equivalent form that can be simulated using a population-adapted network-free simulator. The transformation method has been implemented within the open-source rule-based modeling platform BioNetGen, and resulting hybrid models can be simulated using the particle-based simulator NFsim. Performance tests show that

  9. The relevance of a rules-based maize marketing policy: an experimental case study of Zambia.

    Abbink, Klaus; Jayne, Thomas S; Moller, Lars C


    Strategic interaction between public and private actors is increasingly recognised as an important determinant of agricultural market performance in Africa and elsewhere. Trust and consultation tend to positively affect private activity, while uncertainty of government behaviour impedes it. This paper reports on a laboratory experiment based on a stylised model of the Zambian maize market. The experiment facilitates a comparison between discretionary interventionism and a rules-based policy in which the government pre-commits itself to a future course of action. A simple precommitment rule can, in theory, overcome the prevailing strategic dilemma by encouraging private sector participation. Although this result is also borne out in the economic experiment, the improvement in private sector activity is surprisingly small and not statistically significant due to irrationally cautious choices by experimental governments. Encouragingly, a rules-based policy promotes a much more stable market outcome, thereby substantially reducing the risk of severe food shortages. These results underscore the importance of predictable and transparent rules for the state's involvement in agricultural markets.

  10. An Associate Rules Mining Algorithm Based on Artificial Immune Network for SAR Image Segmentation

    Mengling Zhao


    Full Text Available As a computational intelligence method, the artificial immune network (AIN algorithm has been widely applied to pattern recognition and data classification. In existing artificial immune network algorithms, the affinity calculation used for classification is based on a certain distance measure, which may lead to unsatisfactory results when dealing with data with nominal attributes. To overcome this shortcoming, association rules are introduced into the AIN algorithm, and we propose a new classification algorithm, an association rules mining algorithm based on artificial immune network (ARM-AIN. The new method uses association rules to represent immune cells and mines the best association rules rather than searching for optimal clustering centers. The proposed algorithm has been extensively compared with the artificial immune network classification (AINC algorithm, the artificial immune network classification algorithm based on self-adaptive PSO (SPSO-AINC, and PSO-AINC over several large-scale data sets, target recognition of remote sensing imagery, and segmentation of three different SAR images. The experimental results indicate the superiority of ARM-AIN in classification accuracy and running time.

  11. Microcomputer-based tests for repeated-measures: Metric properties and predictive validities

    Kennedy, Robert S.; Baltzley, Dennis R.; Dunlap, William P.; Wilkes, Robert L.; Kuntz, Lois-Ann


    A menu of psychomotor and mental acuity tests was refined. Field applications of such a battery are, for example, a study of the effects of toxic agents or exotic environments on performance readiness, or the determination of fitness for duty. The key requirement of these tasks is that they be suitable for repeated-measures applications, and so questions of stability and reliability are a continuing, central focus of this work. After the initial (practice) session, seven replications of 14 microcomputer-based performance tests (32 measures) were completed by 37 subjects. Each test in the battery had previously been shown to stabilize in less than five 90-second administrations and to possess retest reliabilities greater than r = 0.707 for three minutes of testing. However, the tests had never been administered together as a battery, and they had never been self-administered. In order to provide predictive validity for intelligence measurement, the Wechsler Adult Intelligence Scale-Revised and the Wonderlic Personnel Test were obtained on the same subjects.

  12. Key Performance Indicators in Irish Hospital Libraries: Developing Outcome-Based Metrics

    Michelle Dalton


    Full Text Available Objective – To develop a set of generic outcome-based performance measures for Irish hospital libraries. Methods – Various models and frameworks of performance measurement were used as a theoretical paradigm to link the impact of library services directly with measurable healthcare objectives and outcomes. Strategic objectives were identified, mapped to performance indicators, and finally translated into response choices to a single-question online survey for distribution via email. Results – The set of performance indicators represents an impact assessment tool which is easy to administer across a variety of healthcare settings. In using a model directly aligned with the mission and goals of the organization, and linked to core activities and operations in an accountable way, the indicators can also be used as a channel through which to implement action, change, and improvement. Conclusion – The indicators can be adopted at a local and potentially a national level, as both a tool for advocacy and to assess and improve service delivery at a macro level. To overcome the constraints posed by necessary simplifications, substantial further research is needed by hospital libraries to develop more sophisticated and meaningful measures of impact to further aid decision making at a micro level.

  13. Development of economic and environmental metrics for forest-based biomass harvesting

    Zhang, F. L.; Wang, J. J.; Liu, S. H.; Zhang, S. M.


    An assessment of the economic, energy consumption, and greenhouse gas (GHG) emission dimensions of the forest-based biomass harvest stage in the state of Michigan, U.S., was performed by gathering data from the literature, databases, and other relevant sources. The assessment differentiates harvesting systems (cut-to-length harvesting, whole tree harvesting, and motor-manual harvesting), harvest types (30%, 70%, and 100% cut) and forest types (hardwoods, softwoods, mixed hardwood/softwood, and softwood plantations) that characterize Michigan's logging industry. Machine rate methods were employed to determine unit harvesting cost. A life cycle inventory was applied to calculate energy demand and GHG emissions of different harvesting scenarios, considering energy and material inputs (diesel, machinery, etc.) and outputs (emissions) for each process (cutting, forwarding/skidding, etc.). A sensitivity analysis was performed for selected input variables of the harvesting operation in order to explore their relative importance. The results indicated that productivity had the largest impact on harvesting cost, followed by machinery purchase price, yearly scheduled hours, and expected utilization. Productivity and fuel use, as well as fuel factors, are the most influential factors in the environmental impacts of harvesting operations.

  14. Determination of a Screening Metric for High Diversity DNA Libraries.

    Guido, Nicholas J; Handerson, Steven; Joseph, Elaine M; Leake, Devin; Kung, Li A


    The fields of antibody engineering, enzyme optimization and pathway construction rely increasingly on screening complex variant DNA libraries. These highly diverse libraries allow researchers to sample a maximized sequence space and, therefore, more rapidly identify proteins with significantly improved activity. The current state of the art in synthetic biology allows for libraries with billions of variants, pushing the limits of researchers' ability to qualify libraries for screening by measuring the traditional quality metrics of fidelity and diversity of variants. Instead, when screening variant libraries, researchers typically use a generic, and often insufficient, oversampling rate based on a common rule-of-thumb. We have developed methods to calculate a library-specific oversampling metric, based on fidelity, diversity, and representation of variants, which informs researchers, prior to screening the library, of the amount of oversampling required to ensure that the desired fraction of variant molecules will be sampled. To derive this oversampling metric, we developed a novel alignment tool to efficiently measure frequency counts of individual nucleotide variant positions using next-generation sequencing data. Next, we apply a method based on the "coupon collector" probability theory to construct a curve of upper bound estimates of the sampling size required for any desired variant coverage. The calculated oversampling metric will guide researchers to maximize their efficiency in using highly variant libraries.
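
    For intuition on the coupon-collector step, the toy calculation below gives the expected number of clones to screen for an idealized library of n equally likely variants: full coverage requires about n * H_n draws, and covering a given fraction correspondingly fewer. The paper's metric additionally folds in fidelity and unequal representation, which this hedged sketch ignores.

```python
def expected_screens_full_coverage(n_variants):
    # classical coupon-collector expectation: n * H_n
    harmonic = sum(1.0 / k for k in range(1, n_variants + 1))
    return n_variants * harmonic

def screens_for_fraction(n_variants, fraction):
    # expected draws until only (1 - fraction) of the variants remain unseen
    missing = int(round(n_variants * (1.0 - fraction)))
    return sum(n_variants / k for k in range(missing + 1, n_variants + 1))

n = 10_000
print(f"full coverage: ~{expected_screens_full_coverage(n):,.0f} clones")
print(f"95% coverage:  ~{screens_for_fraction(n, 0.95):,.0f} clones")
```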

  15. Estimating animal biodiversity across taxa in tropical forests using shape-based waveform lidar metrics and Landsat image time series

    Muss, J. D.; Aguilar-Amuchastegui, N.; Henebry, G. M.


    Studies have shown that forest structural heterogeneity is a key variable for estimating the diversity, richness, and community structure of forest species such as birds, butterflies, and dung beetles. These relationships are especially relevant in tropical forests when assessing the impacts of forest management plans on indicator groups and species. Typically, forest structure and biodiversity are evaluated using field surveys, which are expensive and spatially limited. An alternative is to use the growing archive of imagery to assess the impacts that disturbances (such as those caused by selective logging) have on habitats and biodiversity. But it can be difficult to capture subtle differences in the three-dimensional (3D) forest structure at the landscape scale that are important for modeling these relationships. We use a unique confluence of active and passive optical sensor data, field surveys of biodiversity, and stand management data to link metrics of spatial and spatio-temporal heterogeneity with key indicators of sustainable forest management. Field sites were selected from tropical forest stands along the Atlantic Slope of Costa Rica for which the management history was known and in which biodiversity surveys were conducted. The vertical dimension of forest structure was assessed by applying two shape-based metrics, the centroid (C) and radius of gyration (RG), to full waveform lidar data collected by the LVIS platform over central Costa Rica in 2005. We developed a map of the vertical structure of the forest by implementing a recursive function that used C and RG to identify major segments of each waveform. Differences in 3D structure were related to estimates of animal biodiversity, size and type of disturbance, and time since disturbance—critical measurements for achieving verifiable sustainable management and conservation of biodiversity in tropical forests. Moreover, the relationships found between 3D structure and biodiversity suggest that it
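
    One plausible reading of the two shape metrics named above is sketched below for a synthetic waveform: the centroid C as the intensity-weighted mean return height and the radius of gyration RG as the intensity-weighted spread about that centroid. The waveform itself is made up for illustration; the study applies these metrics to LVIS full-waveform returns.

```python
import numpy as np

def centroid_and_rg(heights, intensities):
    w = intensities / intensities.sum()
    c = np.sum(w * heights)                       # intensity-weighted mean height
    rg = np.sqrt(np.sum(w * (heights - c) ** 2))  # spread about the centroid
    return c, rg

heights = np.linspace(0, 30, 61)                  # metres above ground
intensities = (np.exp(-0.5 * ((heights - 22) / 3.0) ** 2)       # canopy return
               + 0.4 * np.exp(-0.5 * ((heights - 2) / 1.5) ** 2))  # ground return
print(centroid_and_rg(heights, intensities))
```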

  16. Metric Entropy of Nonautonomous Dynamical Systems

    Kawan Christoph


    Full Text Available We introduce the notion of metric entropy for a nonautonomous dynamical system given by a sequence (Xn, μn) of probability spaces and a sequence of measurable maps fn : Xn → Xn+1 with fnμn = μn+1. This notion generalizes the classical concept of metric entropy established by Kolmogorov and Sinai, and is related via a variational inequality to the topological entropy of nonautonomous systems as defined by Kolyada, Misiurewicz, and Snoha. Moreover, it shares several properties with the classical notion of metric entropy. In particular, invariance with respect to appropriately defined isomorphisms, a power rule, and a Rokhlin-type inequality are proved.
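
    The setting stated in the abstract can be written out compactly as follows; the sigma-algebras are added for completeness and are an assumption not spelled out above.

```latex
% Nonautonomous measure-theoretic setting as described in the abstract:
% a sequence of probability spaces and measure-preserving maps between
% consecutive spaces (pushforward condition).
\[
  (X_n, \mathcal{A}_n, \mu_n)_{n \ge 0}, \qquad
  f_n \colon X_n \to X_{n+1}, \qquad
  (f_n)_{*}\mu_n = \mu_{n+1}.
\]
```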

  17. Applying Sigma Metrics to Reduce Outliers.

    Litten, Joseph


    Sigma metrics can be used to predict assay quality, allowing easy comparison of instrument quality and predicting which tests will require minimal quality control (QC) rules to monitor the performance of the method. A Six Sigma QC program can result in fewer controls and fewer QC failures for methods with a sigma metric of 5 or better. The higher the number of methods with a sigma metric of 5 or better, the lower the costs for reagents, supplies, and control material required to monitor the performance of the methods.
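
    For reference, the sigma metric discussed above is conventionally computed from the allowable total error, the bias and the imprecision of the assay, all in percent; the formula below is the standard laboratory-QC definition and is assumed here, since the article itself only discusses how to act on the resulting values.

```python
def sigma_metric(tea_percent, bias_percent, cv_percent):
    # sigma = (allowable total error - |bias|) / coefficient of variation
    return (tea_percent - abs(bias_percent)) / cv_percent

# e.g. an assay with 10% allowable total error, 1% bias and 1.5% CV
print(round(sigma_metric(10.0, 1.0, 1.5), 2))   # -> 6.0, "Six Sigma" territory
```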

  18. Selection of Equipment Software Reliability Metrics Based on GQM

    韩坤; 吴纬; 陈守华; 帅勇


    Aiming at the problem that equipment software has no quantitative reliability requirements and that its development process lacks supervision, a method for selecting equipment software reliability metrics based on GQM (Goal-Question-Metric) is proposed. Firstly, a universal set of software reliability metrics and a specific set of equipment software reliability metrics are set up separately. Secondly, following the GQM framework, goals of equipment software reliability measurement are established from different points of view, and a series of questions is listed according to the goals. Software reliability metrics suitable for each specific situation are then selected by answering the questions, and a system of equipment software reliability metrics is finally established.
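
    A minimal sketch of the Goal-Question-Metric structure described above is given below. The goal, questions and metrics are illustrative placeholders, not the parameter system defined in the paper.

```python
gqm = {
    "goal": "Characterize the reliability of the equipment software",
    "questions": {
        "How often does the software fail in operation?": [
            "mean time between failures",
            "failure rate per mission hour",
        ],
        "How quickly are detected faults removed?": [
            "fault correction time",
            "residual fault density",
        ],
    },
}

def metrics_for(goal_tree):
    # walk the goal tree and collect the metrics that answer each question
    return sorted({m for ms in goal_tree["questions"].values() for m in ms})

print(metrics_for(gqm))
```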

  19. Knowledge representation and rule-based solution system for dynamic programming model

    胡祥培; 王旭茵


    A knowledge representation has been proposed using the state-space theory of Artificial Intelligence for the Dynamic Programming Model, in which a model can be defined as a six-tuple M = (I, G, O, T, D, S). A building block modeling method uses the modules of the six-tuple to form a rule-based solution model. Moreover, a rule-based system has been designed and set up to solve the Dynamic Programming Model. This knowledge-based representation can easily express symbolic knowledge and the dynamic characteristics of the Dynamic Programming Model, and the inference based on this knowledge in the process of solving the Dynamic Programming Model can also be conveniently realized in a computer.

  20. A Belief Rule Based Expert System to Assess Tuberculosis under Uncertainty.

    Hossain, Mohammad Shahadat; Ahmed, Faisal; Fatema-Tuj-Johora; Andersson, Karl


    The primary diagnosis of Tuberculosis (TB) is usually carried out by looking at the various signs and symptoms of a patient. However, these signs and symptoms cannot be measured with 100% certainty since they are associated with various types of uncertainties such as vagueness, imprecision, randomness, ignorance and incompleteness. Consequently, traditional primary diagnosis based on these signs and symptoms, as carried out by physicians, cannot deliver reliable results. Therefore, this article presents the design, development and application of a Belief Rule Based Expert System (BRBES) with the ability to handle various types of uncertainties to diagnose TB. The knowledge base of this system is constructed by taking experts' suggestions and by analyzing historical data of TB patients. The experiments, carried out on the data of 100 patients, demonstrate that the BRBES's generated results are more reliable than those of a human expert as well as a fuzzy rule based expert system.

  1. Clustering and rule-based classifications of chemical structures evaluated in the biological activity space.

    Schuffenhauer, Ansgar; Brown, Nathan; Ertl, Peter; Jenkins, Jeremy L; Selzer, Paul; Hamon, Jacques


    Classification methods for data sets of molecules according to their chemical structure were evaluated for their biological relevance, including rule-based, scaffold-oriented classification methods and clustering based on molecular descriptors. Three data sets resulting from uniformly determined in vitro biological profiling experiments were classified according to their chemical structures, and the results were compared in a Pareto analysis with the number of classes and their average spread in the profile space as two concurrent objectives which were to be minimized. It has been found that no classification method is overall superior to all other studied methods, but there is a general trend that rule-based, scaffold-oriented methods are the better choice if classes with homogeneous biological activity are required, but a large number of clusters can be tolerated. On the other hand, clustering based on chemical fingerprints is superior if fewer and larger classes are required, and some loss of homogeneity in biological activity can be accepted.

  2. A Rule-Based Data Transfer Protocol for On-Demand Data Exchange in Vehicular Environment

    Liao Hsien-Chou


    Full Text Available The purpose of an Intelligent Transport System (ITS is mainly to increase driving safety and efficiency. Data exchange is an important way to achieve this purpose. On-demand data exchange is especially useful to assist a driver in avoiding emergent events. In order to handle data exchange under dynamic situations, a rule-based data transfer protocol is proposed in this paper. A set of rules is designed according to the principle of request-forward-reply (RFR. That is, they are used to determine the timing of data broadcasting, forwarding, and replying automatically. Two typical situations are used to demonstrate the operation of the rules. One is the front view of a driver occluded by other vehicles. The other is a traffic jam. The proposed protocol is flexible and extensible for unforeseen situations. Three simulation tools were also implemented to demonstrate the feasibility of the protocol and measure the network transmission under high density of vehicles. The simulation results show that the rule-based protocol is efficient at data exchange and increases driving safety.
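
    A hedged sketch of the request-forward-reply rule idea: each vehicle evaluates a small set of IF-THEN rules against an incoming message and its own state to decide whether to reply, forward, or drop. The conditions and thresholds below are illustrative assumptions, not the paper's rule set.

```python
def decide(msg, me):
    if msg["type"] == "request":
        if me["has_answer"]:
            return "reply"                      # rule: answer if we can
        if msg["hops"] < 3 and me["distance_to_source"] < 300:
            return "forward"                    # rule: keep the request moving
        return "drop"
    if msg["type"] == "reply":
        return "forward" if msg["dest"] != me["id"] else "consume"
    return "drop"

print(decide({"type": "request", "hops": 1},
             {"id": 7, "has_answer": False, "distance_to_source": 120}))
```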

  3. Domain XML semantic integration based on extraction rules and ontology mapping

    Huayu LI


    Full Text Available Plenty of XML documents exist in the petroleum engineering field, but traditional XML integration solutions cannot provide semantic queries, which leads to low data use efficiency. In light of the semantic integration and query requirements of WeXML (oil & gas well XML data), this paper proposes a semantic integration method based on extraction rules and ontology mapping. The method firstly defines a series of extraction rules with which elements and properties of the WeXML Schema are mapped to classes and properties in the WeOWL ontology, respectively; secondly, an algorithm is used to transform WeXML documents into WeOWL instances. Because WeOWL provides limited semantics, ontology mappings between the two ontologies are then built to explain classes and properties of the global ontology with terms of WeOWL, and semantic query based on the global domain concept model is provided. By constructing a WeXML data semantic integration prototype system, the proposed transformation rules, the transfer algorithm and the mapping rules are tested.

  4. Interference-aware Routing Metrics Based on the Wireless Mesh Network

    方华建; 吕光宏


    In the study of routing metrics for wireless mesh networks, metrics such as minimum hop count, ETX and ETT do not take the interference problems of wireless networks into consideration, and the paths selected based on them are generally not optimal. Therefore, routing protocols based on these metrics will considerably affect the performance of the whole mesh network, such as delay, packet loss rate and throughput. The introduction of interference-aware routing metrics plays a role in improving the performance of wireless mesh networks.
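
    For reference, the two classical link metrics mentioned above are written out below using their standard definitions (not taken from this article): ETX is the expected number of transmissions given the forward and reverse delivery ratios, and ETT scales ETX by the time needed to send a packet of size S over a link of bandwidth B.

```python
def etx(d_forward, d_reverse):
    # expected transmission count of a link
    return 1.0 / (d_forward * d_reverse)

def ett(d_forward, d_reverse, packet_bits, bandwidth_bps):
    # expected transmission time of a link
    return etx(d_forward, d_reverse) * packet_bits / bandwidth_bps

print(etx(0.9, 0.8))                    # ~1.39 expected transmissions
print(ett(0.9, 0.8, 8 * 1024, 6e6))     # seconds per packet on a 6 Mbit/s link
```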

  5. NASA metric transition plan

    NASA science publications have used the metric system of measurement since 1970. Although NASA has maintained a metric use policy since 1979, practical constraints have restricted actual use of metric units. In 1988, an amendment to the Metric Conversion Act of 1975 required the Federal Government to adopt the metric system except where impractical. In response to Public Law 100-418 and Executive Order 12770, NASA revised its metric use policy and developed this Metric Transition Plan. NASA's goal is to use the metric system for program development and functional support activities to the greatest practical extent by the end of 1995. The introduction of the metric system into new flight programs will determine the pace of the metric transition. Transition of institutional capabilities and support functions will be phased to enable use of the metric system in flight program development and operations. Externally oriented elements of this plan will introduce and actively support use of the metric system in education, public information, and small business programs. The plan also establishes a procedure for evaluating and approving waivers and exceptions to the required use of the metric system for new programs. Coordination with other Federal agencies and departments (through the Interagency Council on Metric Policy) and industry (directly and through professional societies and interest groups) will identify sources of external support and minimize duplication of effort.

  6. Assessment of Matrix Multiplication Learning with a Rule-Based Analytical Model--"A Bayesian Network Representation"

    Zhang, Zhidong


    This study explored an alternative assessment procedure to examine learning trajectories of matrix multiplication. It took rule-based analytical and cognitive task analysis methods specifically to break down operation rules for a given matrix multiplication. Based on the analysis results, a hierarchical Bayesian network, an assessment model,…

  7. Genetic Programming for the Generation of Crisp and Fuzzy Rule Bases in Classification and Diagnosis of Medical Data

    Dounias, George; Tsakonas, Athanasios; Jantzen, Jan


    This paper demonstrates two methodologies for the construction of rule-based systems in medical decision making. The first approach consists of a method combining genetic programming and heuristic hierarchical rule-base construction. The second model is composed by a strongly-typed genetic progra...

  8. Spatial Queries Entity Recognition and Disambiguation Using Rule-Based Approach

    Hamzei, E.; Hakimpour, F.; Forati, A.


    In the digital world, search engines are one of the most challenging research areas. One of the main issues in search engine studies is query processing, whose aim is to understand the user's needs. If an unsuitable spatial query processing approach is employed, the results will be associated with a high degree of ambiguity. To avoid such ambiguity, in this paper we present a new algorithm which depends on rule-based systems to process queries. Our algorithm is implemented in three basic steps: deductively and iteratively splitting the query; finding candidates for location names, location types and spatial relationships; and finally checking the relationships logically and conceptually using a rule-based system. As we show in the paper, our proposed method has two major advantages: the search engine can provide the capability of spatial analysis based on this specific process, and secondly, because of its disambiguation technique, the user reaches more desirable results.

  9. Syntactic Analysis Of Natural Language Using Linguistic Rules And Corpus-based Patterns

    Tapanainen, P; Tapanainen, Pasi


    We are concerned with the syntactic annotation of unrestricted text. We combine a rule-based analysis with subsequent exploitation of empirical data. The rule-based surface syntactic analyser leaves some amount of ambiguity in the output that is resolved using empirical patterns. We have implemented a system for generating and applying corpus-based patterns. Some patterns describe the main constituents in the sentence and some the local context of each syntactic function. There are several (partly) redundant patterns, and the "pattern" parser selects the analysis of the sentence that matches the strictest possible pattern(s). The system is applied to an experimental corpus. We present the results and discuss possible refinements of the method from a linguistic point of view.

  10. Collision avoidance planning in multi-robot system based on improved artificial potential field and rules

    YUAN Xin; ZHU Qi-dan; YAN Yong-jie


    For the real-time and distributed features of multi-robot systems, a strategy combining the improved artificial potential field method and priority-based rules is proposed to study collision avoidance planning in multi-robot systems. The improved artificial potential field based on a simulated annealing algorithm satisfactorily overcomes the drawbacks of the traditional artificial potential field method, so that robots can find a local collision-free path in a complex environment. According to the movement vector trails of robots, collisions between robots can be detected, and thereby the collision avoidance rules can be obtained. Coordination between robots through the priority-based rules improves the real-time property of the multi-robot system. The combination of these two methods can help a robot find a collision-free path from a starting point to the goal quickly in an environment with many obstacles. The feasibility of the proposed method is validated in a VC-based simulated environment.
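
    For orientation, the sketch below writes out the classical artificial potential field forces referred to above (the standard formulation, not the paper's improved version): an attractive force pulls the robot toward the goal and a repulsive force pushes it away from obstacles inside an influence radius d0. Escaping the local minima of this field is what the simulated-annealing step in the paper addresses.

```python
import numpy as np

def attractive_force(q, q_goal, k_att=1.0):
    return -k_att * (q - q_goal)

def repulsive_force(q, obstacles, k_rep=1.0, d0=2.0):
    f = np.zeros(2)
    for obs in obstacles:
        diff = q - obs
        d = np.linalg.norm(diff)
        if 0 < d < d0:
            # classical repulsive gradient, active only inside the influence radius
            f += k_rep * (1.0 / d - 1.0 / d0) * (1.0 / d**2) * (diff / d)
    return f

q, q_goal = np.array([0.0, 0.0]), np.array([5.0, 5.0])
obstacles = [np.array([1.0, 1.2])]
print(attractive_force(q, q_goal) + repulsive_force(q, obstacles))
```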

  11. Rough set method for rule reduction in belief rule base

    王应明; 杨隆浩; 常雷雷; 傅仰耿


    The number of rules in a belief rule base (BRB) may induce the problem of combinatorial explosion. However, most previous works on rule reduction are based on feature extraction, whose effectiveness depends on expert knowledge. Therefore, an objective method based on rough set theory is proposed, which does not depend on any knowledge beyond the belief rule base itself. The rule reduction method analyzes each belief rule according to the idea of equivalence class division, and then eliminates redundant referential values. Finally, a numerical case study on evaluating an armored system is analyzed and compared with a typical subjective method in terms of the number of reduced rules and decision-making accuracy. The results show that the proposed method is feasible and effective, and superior to the existing subjective rule reduction methods.
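
    A hedged sketch of the equivalence-class idea: rules are grouped by the values of a chosen subset of antecedent attributes, and an attribute whose removal leaves the partition consistent with the decisions is redundant. The toy decision table below is illustrative, not the belief rule base or the rough-set machinery of the paper.

```python
from collections import defaultdict

rules = [
    # (antecedent attribute values), decision
    (("high", "low"),  "A"),
    (("high", "low"),  "A"),
    (("low",  "low"),  "B"),
    (("low",  "high"), "B"),
]

def partition(rows, attr_indices):
    classes = defaultdict(list)
    for i, (attrs, _) in enumerate(rows):
        classes[tuple(attrs[j] for j in attr_indices)].append(i)
    return sorted(tuple(v) for v in classes.values())

def decision_consistent(rows, attr_indices):
    # every equivalence class must map to a single decision value
    for block in partition(rows, attr_indices):
        if len({rows[i][1] for i in block}) > 1:
            return False
    return True

full = (0, 1)
for drop in full:
    reduced = tuple(j for j in full if j != drop)
    if decision_consistent(rules, reduced):
        print(f"attribute {drop} is redundant; reduct candidate: {reduced}")
```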

  12. Forecasting Peak Load Electricity Demand Using Statistics and Rule Based Approach

    Z. Ismail


    Full Text Available Problem statement: Forecasting of electricity load demand is an essential activity and an important function in power system planning and development. It is a prerequisite to power system expansion planning, as the world of electricity is dominated by substantial lead times between decision making and its implementation. The importance of demand forecasting needs to be emphasized at all levels, as the consequences of under- or over-forecasting the demand are serious and will affect all stakeholders in the electricity supply industry. Approach: If demand is underestimated, the consequences are serious, since plant installation cannot easily be advanced; this affects the economy and business and results in loss of time and image. If demand is overestimated, there is a financial penalty for excess capacity and a waste of resources. Therefore, this study aimed to develop a new forecasting model for electricity load demand which minimizes forecasting error. In this study, we explored the development of a rule-based method for forecasting electricity peak load demand. The rule-based system combines the human reasoning style of fuzzy systems, through a set of IF-THEN rules, with a learning, connectionist structure. Prior to the implementation of the rule-based models, a SARIMA model and regression time series were used. Results: Modifying the basic regression model and modeling it with a Box-Jenkins autoregressive error produced a satisfactory and adequate model with a 2.41% forecasting error. With rule-based forecasting, one can apply forecaster expertise and domain knowledge appropriate to the conditions of the time series. Conclusion: This study showed a significant improvement in forecast accuracy when compared with the traditional time series model. Good domain knowledge from the experts contributed to the increase in forecast accuracy. In general, the improvement will depend on the conditions of the data
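
    A minimal sketch of an IF-THEN adjustment layer on top of a statistical base forecast, in the spirit of the hybrid approach described above. The rules, thresholds and percentages are illustrative assumptions, not those of the study.

```python
def rule_based_peak_forecast(base_forecast_mw, day_type, temperature_c):
    adjusted = base_forecast_mw
    if day_type == "public_holiday":
        adjusted *= 0.90          # rule: holidays depress peak demand
    if temperature_c >= 33:
        adjusted *= 1.08          # rule: hot days raise cooling load
    elif temperature_c <= 5:
        adjusted *= 1.05          # rule: cold days raise heating load
    return adjusted

print(rule_based_peak_forecast(15_000, "weekday", 35))   # 16200.0
```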

  13. A unifying process capability metric

    John Jay Flaig


    Full Text Available A new economic approach to process capability assessment is presented, which differs from the commonly used engineering metrics. The proposed metric consists of two economic capability measures – the expected profit and the variation in profit of the process. This dual economic metric offers a number of significant advantages over other engineering or economic metrics used in process capability analysis. First, it is easy to understand and communicate. Second, it is based on a measure of total system performance. Third, it unifies the fraction nonconforming approach and the expected loss approach. Fourth, it reflects the underlying interest of management in knowing the expected financial performance of a process and its potential variation.
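
    A hedged sketch of the dual economic metric described above: sample the per-unit profit of a process, where out-of-specification units incur a loss, and report both the expected profit and its variation. The prices, specification limits and process parameters below are illustrative assumptions, and the simple scrap-cost loss model is one of several ways the fraction-nonconforming and expected-loss views could be combined.

```python
import random
import statistics

def unit_profit(x, lsl=9.0, usl=11.0, price=5.0, scrap_cost=3.0):
    # in-spec units earn the sale price, out-of-spec units cost their scrap value
    return price if lsl <= x <= usl else -scrap_cost

random.seed(0)
measurements = [random.gauss(10.0, 0.5) for _ in range(10_000)]
profits = [unit_profit(x) for x in measurements]

print("expected profit per unit:", round(statistics.mean(profits), 3))
print("std. dev. of profit:     ", round(statistics.stdev(profits), 3))
```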

  14. A Rules-Based Service for Suggesting Visualizations to Analyze Earth Science Phenomena.

    Prabhu, A.; Zednik, S.; Fox, P. A.; Ramachandran, R.; Maskey, M.; Shie, C. L.; Shen, S.


    Current Earth Science Information Systems lack support for new or interdisciplinary researchers, who may be unfamiliar with the domain vocabulary or the breadth of relevant data available. We need to evolve the current information systems to reduce the time required for data preparation, processing and analysis. This can be done by effectively salvaging the "dark" resources in Earth Science. We assert that Earth science metadata assets are dark resources: information resources that organizations collect, process, and store for regular business or operational activities but fail to utilize for other purposes. In order to effectively use these dark resources, especially for data processing and visualization, we need a combination of domain, data product and processing knowledge, i.e. a knowledge base from which specific data operations can be performed. In this presentation, we describe a semantic, rules-based approach that provides a service to visualize Earth Science phenomena, based on the data variables extracted using the "dark" metadata resources. We use Jena rules to make assertions about compatibility between a phenomenon and various visualizations based on multiple factors. We created separate orthogonal rulesets to map each of these factors to the various phenomena. Some of the factors we have considered include measurements, spatial resolution and time intervals. This approach enables easy additions and deletions based on newly obtained domain knowledge or phenomena-related information, thus improving the accuracy of the rules service overall.

  15. A comprehensive revisit of the ρ meson with improved Monte-Carlo based QCD sum rules

    Wang, Qi-Nan; Zhang, Zhu-Feng; Steele, T. G.; Jin, Hong-Ying; Huang, Zhuo-Ran


    We improve the Monte-Carlo based QCD sum rules by introducing the rigorous Hölder-inequality-determined sum rule window and a Breit-Wigner type parametrization for the phenomenological spectral function. In this improved sum rule analysis methodology, the sum rule analysis window can be determined without any assumptions on OPE convergence or the QCD continuum. Therefore, an unbiased prediction can be obtained for the phenomenological parameters (the hadronic mass and width etc.). We test the new approach in the ρ meson channel with re-examination and inclusion of αs corrections to dimension-4 condensates in the OPE. We obtain results highly consistent with experimental values. We also discuss the possible extension of this method to some other channels. Supported by NSFC (11175153, 11205093, 11347020), the Open Foundation of the Most Important Subjects of Zhejiang Province, and the K. C. Wong Magna Fund of Ningbo University. TGS is supported by the Natural Sciences and Engineering Research Council of Canada (NSERC). Z. F. Zhang and Z. R. Huang are grateful to the University of Saskatchewan for its warm hospitality.

  16. Two projects in theoretical neuroscience: A convolution-based metric for neural membrane potentials and a combinatorial connectionist semantic network method

    Evans, Garrett Nolan

    In this work, I present two projects that both contribute to the aim of discovering how intelligence manifests in the brain. The first project is a method for analyzing recorded neural signals, which takes the form of a convolution-based metric on neural membrane potential recordings. Relying only on integral and algebraic operations, the metric compares the timing and number of spikes within recordings as well as the recordings' subthreshold features: summarizing differences in these with a single "distance" between the recordings. Like van Rossum's (2001) metric for spike trains, the metric is based on a convolution operation that it performs on the input data. The kernel used for the convolution is carefully chosen such that it produces a desirable frequency space response and, unlike van Rossum's kernel, causes the metric to be first order both in differences between nearby spike times and in differences between same-time membrane potential values: an important trait. The second project is a combinatorial syntax method for connectionist semantic network encoding. Combinatorial syntax has been a point on which those who support a symbol-processing view of intelligent processing and those who favor a connectionist view have had difficulty seeing eye-to-eye. Symbol-processing theorists have persuasively argued that combinatorial syntax is necessary for certain intelligent mental operations, such as reasoning by analogy. Connectionists have focused on the versatility and adaptability offered by self-organizing networks of simple processing units. With this project, I show that there is a way to reconcile the two perspectives and to ascribe a combinatorial syntax to a connectionist network. The critical principle is to interpret nodes, or units, in the connectionist network as bound integrations of the interpretations for nodes that they share links with. Nodes need not correspond exactly to neurons and may correspond instead to distributed sets, or assemblies, of
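
    The sketch below illustrates the general shape of a convolution-based distance between two membrane potential recordings, in the spirit of van Rossum's spike-train metric: each trace is convolved with a smoothing kernel and the L2 norm of the difference is taken. The Gaussian kernel used here is an illustrative stand-in for the carefully designed kernel described in the thesis, which is chosen for its frequency response and first-order behaviour.

```python
import numpy as np

def convolution_metric(v1, v2, dt=1e-4, tau=5e-3):
    t = np.arange(-5 * tau, 5 * tau, dt)
    kernel = np.exp(-0.5 * (t / tau) ** 2)   # assumed kernel, not the thesis's
    kernel /= kernel.sum()
    s1 = np.convolve(v1, kernel, mode="same")
    s2 = np.convolve(v2, kernel, mode="same")
    return np.sqrt(np.sum((s1 - s2) ** 2) * dt)

rng = np.random.default_rng(0)
trace_a = rng.normal(-65.0, 1.0, 2000)                               # synthetic recording, mV
trace_b = trace_a + np.where(np.arange(2000) == 1000, 40.0, 0.0)     # add one "spike"
print(convolution_metric(trace_a, trace_b))
```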

  17. A knowledge representation meta-model for rule-based modelling of signalling networks

    Adrien Basso-Blandin


    Full Text Available The study of cellular signalling pathways and their deregulation in disease states, such as cancer, is a large and extremely complex task. Indeed, these systems involve many parts and processes but are studied piecewise and their literatures and data are consequently fragmented, distributed and sometimes—at least apparently—inconsistent. This makes it extremely difficult to build significant explanatory models with the result that effects in these systems that are brought about by many interacting factors are poorly understood. The rule-based approach to modelling has shown some promise for the representation of the highly combinatorial systems typically found in signalling where many of the proteins are composed of multiple binding domains, capable of simultaneous interactions, and/or peptide motifs controlled by post-translational modifications. However, the rule-based approach requires highly detailed information about the precise conditions for each and every interaction which is rarely available from any one single source. Rather, these conditions must be painstakingly inferred and curated, by hand, from information contained in many papers—each of which contains only part of the story. In this paper, we introduce a graph-based meta-model, attuned to the representation of cellular signalling networks, which aims to ease this massive cognitive burden on the rule-based curation process. This meta-model is a generalization of that used by Kappa and BNGL which allows for the flexible representation of knowledge at various levels of granularity. In particular, it allows us to deal with information which has either too little, or too much, detail with respect to the strict rule-based meta-model. Our approach provides a basis for the gradual aggregation of fragmented biological knowledge extracted from the literature into an instance of the meta-model from which we can define an automated translation into executable Kappa programs.

  18. A step-indexed Kripke model of hidden state via recursive properties on recursively defined metric spaces

    Birkedal, Lars; Schwinghammer, Jan; Støvring, Kristian


    for Charguéraud and Pottier’s type and capability system including frame and anti-frame rules, based on the operational semantics and step-indexed heap relations. The worlds are constructed as a recursively defined predicate on a recursively defined metric space, which provides a considerably simpler...

  19. A Belief Rule Based Expert System to Assess Mental Disorder under Uncertainty

    Hossain, Mohammad Shahadat; Afif Monrat, Ahmed; Hasan, Mamun;


    Mental disorder is a change of mental or behavioral pattern that causes sufferings and impairs the ability to function in ordinary life. In psychopathology, the assessment methods of mental disorder contain various types of uncertainties associated with signs and symptoms. This study identifies...... a method that addresses the issue of uncertainty in assessing mental disorder. The fuzzy logic knowledge representation schema can address uncertainty associated with linguistic terms including ambiguity, imprecision, and vagueness. However, fuzzy logic is incapable of addressing uncertainty due...... to ignorance, incompleteness, and randomness. So, a belief rule-based expert system (BRBES) has been designed and developed with the capability of handling the uncertainties mentioned. Evidential reasoning works as the inference engine and the belief rule base as the knowledge representation schema...

  20. Fifty years of computer analysis in chest imaging: rule-based, machine learning, deep learning.

    van Ginneken, Bram


    Half a century ago, the term "computer-aided diagnosis" (CAD) was introduced in the scientific literature. Pulmonary imaging, with chest radiography and computed tomography, has always been one of the focus areas in this field. In this study, I describe how machine learning became the dominant technology for tackling CAD in the lungs, generally producing better results than do classical rule-based approaches, and how the field is now rapidly changing: in the last few years, we have seen how even better results can be obtained with deep learning. The key differences among rule-based processing, machine learning, and deep learning are summarized and illustrated for various applications of CAD in the chest.

  1. Design And Efficient Deployment Of Honeypot And Dynamic Rule Based Live Network Intrusion Collaborative System

    Renuka Prasad.B


    Full Text Available The continuously emerging, operationally and managerially independent, geographically distributed computer networks deployable in an evolutionary manner have created greater challenges in securing them. Several research works and experiments have convinced security experts that Network Intrusion Detection Systems (NIDS or Network Intrusion Prevention Systems (NIPS alone are not capable of completely securing computer networks from internal and external threats. In this paper we present the design of an Intrusion Collaborative System which is a combination of NIDS, NIPS, honeypots, and software tools like nmap, iptables, etc. Our design is tested against existing attacks based on Snort rules and several customized DDOS, remote and guest attacks. Dynamic rules are generated during every unusual behavior, which helps the Intrusion Collaborative System continuously learn about new attacks. Also, a formal approach to deploy Live Intrusion Collaboration Systems based on the System of Systems concept is proposed.

  2. Fuzzy rule-based models for decision support in ecosystem management.

    Adriaenssens, Veronique; De Baets, Bernard; Goethals, Peter L M; De Pauw, Niels


    To facilitate decision support in ecosystem management, ecological expertise and site-specific data need to be integrated. Fuzzy logic can deal with highly variable, linguistic, vague and uncertain data or knowledge and, therefore, has the ability to allow for a logical, reliable and transparent information stream from data collection down to data usage in decision-making. Several environmental applications already make use of fuzzy logic. Most of these applications have been set up by trial and error and are mainly limited to the domain of environmental assessment. In this article, applications of fuzzy logic for decision support in ecosystem management are reviewed and assessed, with an emphasis on rule-based models. In particular, the identification, optimisation, validation, interpretability and uncertainty aspects of fuzzy rule-based models for decision support in ecosystem management are discussed.
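
    Since the review above stays at the conceptual level, a minimal zeroth-order (Sugeno-style) fuzzy rule system may help fix ideas. The variable (dissolved oxygen driving a habitat-suitability score), the membership functions and the rules below are hypothetical examples, not taken from the review.

```python
"""Minimal fuzzy rule-based inference sketch (zeroth-order Sugeno style).

The input variable, membership functions and rules are hypothetical
illustrations, not taken from the cited review.
"""


def tri(x: float, a: float, b: float, c: float) -> float:
    """Triangular membership function with feet a, c and peak b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)


# Membership functions for the input "dissolved oxygen" (mg/l).
# DO_LOW acts as a left shoulder: full membership at 0, zero at 4 mg/l.
DO_LOW = lambda x: tri(x, -4.0, 0.0, 4.0)
DO_HIGH = lambda x: tri(x, 3.0, 8.0, 12.0)

# Rules map a fuzzy antecedent to a crisp suitability score in [0, 1];
# outputs are combined by a weighted average (simple defuzzification).
RULES = [
    (DO_LOW, 0.1),   # IF dissolved oxygen is low  THEN suitability is poor
    (DO_HIGH, 0.9),  # IF dissolved oxygen is high THEN suitability is good
]


def suitability(do_mg_l: float) -> float:
    num = den = 0.0
    for membership, score in RULES:
        weight = membership(do_mg_l)
        num += weight * score
        den += weight
    return num / den if den else 0.0


if __name__ == "__main__":
    for do in (1.0, 4.0, 7.0):
        print(do, "mg/l ->", round(suitability(do), 3))
```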

  3. A New Approach To The Rule-Based Systems Design And Implementation Process

    Grzegorz J. Nalepa


    Full Text Available The paper discusses selected problems encountered in practical rule-based systems (RBS) design and implementation. To solve them, XTT, a new visual knowledge representation, is introduced. Then a complete, integrated RBS design, implementation and analysis methodology is presented. This methodology is supported by a visual CASE tool called Mirella. The main goal is to move the design procedure to a more abstract, logical level, where knowledge specification is based on the use of an abstract rule representation. The design specification is automatically translated into Prolog code, so the designer can focus on the logical specification of safety and reliability. On the other hand, the system's formal aspects are automatically verified on-line during the design, so that its verifiable characteristics are preserved.

  4. A New Approach to the Rule-Based Systems Design and Implementation Process

    Grzegorz J. Nalepa


    Full Text Available The paper discusses selected problems encountered in practical rule-based systems (RBS) design and implementation. To solve them, XTT, a new visual knowledge representation, is introduced. Then a complete, integrated RBS design, implementation and analysis methodology is presented. This methodology is supported by a visual CASE tool called Mirella. The main goal is to move the design procedure to a more abstract, logical level, where knowledge specification is based on the use of an abstract rule representation. The design specification is automatically translated into Prolog code, so the designer can focus on the logical specification of safety and reliability. On the other hand, the system's formal aspects are automatically verified on-line during the design, so that its verifiable characteristics are preserved.

  5. Towards a framework for threaded inference in rule-based systems

    Luis Casillas Santillan


    Full Text Available Information and communication technologies have advanced significantly and at a fast pace in both performance and pervasiveness. Knowledge has become a significant asset for organizations, which need to deal with large amounts of data and information to produce valuable knowledge. Dealing with knowledge is becoming a central concern for organizations in the new economy. One way to reach the goal of knowledge management is the use of rule-based systems. This kind of approach offers a new opportunity for expert-system technology. Modern languages and cheap computing allow the implementation of concurrent systems for dealing with huge volumes of information in organizations. The present work proposes the use of contemporary programming elements, such as easily exploited threading, when implementing rule-based processing over huge data volumes.
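
    As a concrete, if simplified, illustration of the threading idea described above, the sketch below partitions a record set and evaluates a small rule base on each partition with concurrent.futures. The rules and records are invented; note also that for pure-Python predicates the CPython GIL limits the speed-up, so threads pay off mainly when rule evaluation involves I/O or releases the GIL.

```python
"""Sketch of threaded rule evaluation over partitioned data (illustrative).

The rules and records below are hypothetical; the point is only to show
how a rule base can be applied concurrently with concurrent.futures.
"""

from concurrent.futures import ThreadPoolExecutor

# A rule is a (name, predicate) pair over a record dictionary.
RULES = [
    ("high_value_order", lambda rec: rec["amount"] > 1000),
    ("new_customer", lambda rec: rec["orders"] == 1),
]


def apply_rules(chunk):
    """Return, for each record in the chunk, the names of the rules that fire."""
    return [[name for name, pred in RULES if pred(rec)] for rec in chunk]


def threaded_inference(records, workers=4, chunk_size=2):
    chunks = [records[i:i + chunk_size]
              for i in range(0, len(records), chunk_size)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        results = list(pool.map(apply_rules, chunks))
    # Flatten the per-chunk results back into one list per record.
    return [fired for chunk_result in results for fired in chunk_result]


if __name__ == "__main__":
    data = [{"amount": 1500, "orders": 1}, {"amount": 200, "orders": 5},
            {"amount": 50, "orders": 1}, {"amount": 3000, "orders": 9}]
    print(threaded_inference(data))
```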

  6. Rule Extraction from Trained Artificial Neural Network Based on Genetic Algorithm

    WANG Wen-jian; ZHANG Li-xia


    This paper discusses how to extract symbolic rules from a trained artificial neural network (ANN) in domains involving classification, using genetic algorithms (GA). Previous methods based on an exhaustive analysis of network connections and output values have already been shown to be intractable, because the scale-up factor increases with the number of nodes and connections in the network. Experiments illustrating the effectiveness of the presented method are given as well.
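
    The abstract above does not describe the extraction procedure itself, so the following toy sketch only illustrates the general idea of GA-based rule extraction: a chromosome encodes the thresholds of a single conjunctive rule, and fitness is the rule's agreement with a stand-in black-box classifier. Everything here (the surrogate network, the rule form, the GA parameters) is an assumption, not the paper's method.

```python
"""Toy sketch of extracting a threshold rule from a trained classifier
with a genetic algorithm (illustrative; not the cited paper's method).

The "network" is a stand-in black-box function; a chromosome encodes a
rule "IF x0 > t0 AND x1 > t1 THEN class 1"; fitness is the agreement
between the rule and the network on sampled inputs.
"""

import random

random.seed(0)


def network(x):
    """Stand-in for a trained ANN: a fixed black-box decision."""
    return 1 if 0.6 * x[0] + 0.4 * x[1] > 0.5 else 0


def rule_predict(thresholds, x):
    return 1 if all(xi > ti for xi, ti in zip(x, thresholds)) else 0


SAMPLES = [(random.random(), random.random()) for _ in range(200)]


def fitness(thresholds):
    return sum(rule_predict(thresholds, x) == network(x) for x in SAMPLES)


def evolve(pop_size=30, generations=40, mutation=0.1):
    pop = [(random.random(), random.random()) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]            # elitist selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            child = tuple(random.choice(pair) for pair in zip(a, b))  # crossover
            child = tuple(min(1.0, max(0.0, t + random.gauss(0, mutation)))
                          for t in child)                              # mutation
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)


if __name__ == "__main__":
    best = evolve()
    print("best thresholds:", [round(t, 2) for t in best],
          "agreement:", fitness(best), "/", len(SAMPLES))
```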

  7. Depfix, a Tool for Automatic Rule-based Post-editing of SMT

    Rudolf Rosa


    Full Text Available We present Depfix, an open-source system for automatic post-editing of phrase-based machine translation outputs. Depfix employs a range of natural language processing tools to obtain analyses of the input sentences, and uses a set of rules to correct common or serious errors in machine translation outputs. Depfix is currently implemented only for English-to-Czech translation direction, but extending it to other languages is planned.

  8. Ant-cycle based on Metropolis rules for the traveling salesman problem

    Gong Qu; He Xian-yang


    In this paper, recent developments of some heuristic algorithms were discussed. The focus was on improvements to the ant-cycle (AC) algorithm, based on an analysis of the performance of simulated annealing (SA) and AC on the traveling salesman problem (TSP). The Metropolis rules of SA were applied to AC, yielding an improved AC. The computational results obtained from the case study indicate that the improved AC algorithm has advantages over pure SA or the original AC.
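
    The record above applies the Metropolis rule of simulated annealing inside the ant-cycle algorithm but gives no formula. The sketch below shows the Metropolis acceptance criterion deciding whether a candidate tour replaces the incumbent one: a shorter tour is always accepted, a longer one only with probability exp(-delta/T). The distance matrix, the cooling schedule, and the use of random permutations as stand-ins for ant-constructed tours are all invented for illustration.

```python
"""Sketch of the Metropolis acceptance rule inside an ant-cycle-style
update step (illustrative; parameters and structure are not from the paper).
"""

import math
import random

random.seed(1)


def tour_length(tour, dist):
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]]
               for i in range(len(tour)))


def metropolis_accept(current_len, candidate_len, temperature):
    """Accept a better tour always; a worse tour with prob exp(-delta/T)."""
    if candidate_len <= current_len:
        return True
    return random.random() < math.exp((current_len - candidate_len) / temperature)


if __name__ == "__main__":
    # Tiny symmetric distance matrix for 4 cities (hypothetical data).
    dist = [[0, 2, 9, 10],
            [2, 0, 6, 4],
            [9, 6, 0, 8],
            [10, 4, 8, 0]]
    current = [0, 1, 2, 3]
    temperature = 10.0
    for step in range(20):
        candidate = random.sample(range(4), 4)   # stand-in for an ant's tour
        if metropolis_accept(tour_length(current, dist),
                             tour_length(candidate, dist), temperature):
            current = candidate
        temperature *= 0.9                        # cooling schedule
    print(current, tour_length(current, dist))
```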

  9. An Associate Rules Mining Algorithm Based on Artificial Immune Network for SAR Image Segmentation

    Mengling Zhao; Hongwei Liu


    As a computational intelligence method, the artificial immune network (AIN) algorithm has been widely applied to pattern recognition and data classification. In existing artificial immune network algorithms, the affinity used for classification is based on calculating a certain distance, which may lead to unsatisfactory results when dealing with data with nominal attributes. To overcome this shortcoming, association rules are introduced into the AIN algorithm, and we propose a new class...

  10. Auto-control of pumping operations in sewerage systems by rule-based fuzzy neural networks

    Chiang, Y.-M.; Chang, L.-C.; Tsai, M.-J.; Wang, Y. -F.; Chang, F.-J.


    Pumping stations play an important role in flood mitigation in metropolitan areas. The existing sewerage systems, however, are facing a great challenge of fast rising peak flow resulting from urbanization and climate change. It is imperative to construct an efficient and accurate operating prediction model for pumping stations to simulate the drainage mechanism for discharging the rainwater in advance. In this study, we propose two rule-based fuzzy neural networks, adaptive neuro-fuzzy infere...

  11. Design Rules and Issues with Respect to Rocket Based Combined Cycles


    As we seek the accelerator, the inlet design is quite an art of compromise. To gain benefits from air-breathing propulsion, the ... An embedded rocket chamber ... can cause thrust augmentation due to the ejector effects, which in turn can reduce the requirement for the rocket engine output. In the speed regime with ...

  12. Analyzing large gene expression and methylation data profiles using StatBicRM: statistical biclustering-based rule mining.

    Ujjwal Maulik

    Full Text Available Microarray and beadchip are two of the most efficient techniques for measuring gene expression and methylation data in bioinformatics. Biclustering deals with the simultaneous clustering of genes and samples. In this article, we propose a computational rule mining framework, StatBicRM (i.e., statistical biclustering-based rule mining), to identify special types of rules and potential biomarkers using integrated approaches of statistical and binary inclusion-maximal biclustering techniques on biological datasets. At first, a novel statistical strategy has been utilized to eliminate the insignificant/low-significant/redundant genes in such a way that the significance level satisfies the data distribution property (viz., either normal distribution or non-normal distribution). The data is then discretized and post-discretized, consecutively. Thereafter, the biclustering technique is applied to identify maximal frequent closed homogeneous itemsets. Corresponding special types of rules are then extracted from the selected itemsets. Our proposed rule mining method performs better than other rule mining algorithms, as it generates maximal frequent closed homogeneous itemsets instead of frequent itemsets. Thus, it saves elapsed time and can work on big datasets. Pathway and Gene Ontology analyses are conducted on the genes of the evolved rules using the DAVID database. Frequency analysis of the genes appearing in the evolved rules is performed to determine potential biomarkers. Furthermore, we also classify the data to know how accurately the evolved rules are able to describe the remaining test (unknown) data. Subsequently, we also compare the average classification accuracy, and other related factors, with those of other rule-based classifiers. Statistical significance tests are also performed to verify the statistical relevance of the comparative results. Here, each of the other rule mining methods or rule-based classifiers also starts with the same post-discretized data.

  13. Analyzing large gene expression and methylation data profiles using StatBicRM: statistical biclustering-based rule mining.

    Maulik, Ujjwal; Mallik, Saurav; Mukhopadhyay, Anirban; Bandyopadhyay, Sanghamitra


    Microarray and beadchip are two of the most efficient techniques for measuring gene expression and methylation data in bioinformatics. Biclustering deals with the simultaneous clustering of genes and samples. In this article, we propose a computational rule mining framework, StatBicRM (i.e., statistical biclustering-based rule mining), to identify special types of rules and potential biomarkers using integrated approaches of statistical and binary inclusion-maximal biclustering techniques on biological datasets. At first, a novel statistical strategy has been utilized to eliminate the insignificant/low-significant/redundant genes in such a way that the significance level satisfies the data distribution property (viz., either normal distribution or non-normal distribution). The data is then discretized and post-discretized, consecutively. Thereafter, the biclustering technique is applied to identify maximal frequent closed homogeneous itemsets. Corresponding special types of rules are then extracted from the selected itemsets. Our proposed rule mining method performs better than other rule mining algorithms, as it generates maximal frequent closed homogeneous itemsets instead of frequent itemsets. Thus, it saves elapsed time and can work on big datasets. Pathway and Gene Ontology analyses are conducted on the genes of the evolved rules using the DAVID database. Frequency analysis of the genes appearing in the evolved rules is performed to determine potential biomarkers. Furthermore, we also classify the data to know how accurately the evolved rules are able to describe the remaining test (unknown) data. Subsequently, we also compare the average classification accuracy, and other related factors, with those of other rule-based classifiers. Statistical significance tests are also performed to verify the statistical relevance of the comparative results. Here, each of the other rule mining methods or rule-based classifiers also starts with the same post-discretized data.
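
    The central object in the pipeline above is the maximal frequent closed homogeneous itemset. The brute-force toy below only illustrates the notion of a frequent closed itemset on a tiny hypothetical binary matrix; the actual method uses a biclustering algorithm rather than enumeration, and the data, gene labels and support threshold here are invented.

```python
"""Toy illustration of frequent *closed* itemsets, the building block that
StatBicRM extracts rules from (brute force on a tiny hypothetical binary
matrix; the real method uses biclustering rather than enumeration).
"""

from itertools import combinations

# Rows = samples, columns = discretized gene states (1 = "up", 0 = otherwise).
MATRIX = [
    [1, 1, 0, 1],
    [1, 1, 0, 0],
    [1, 1, 1, 1],
    [0, 1, 1, 1],
]
GENES = ["g1", "g2", "g3", "g4"]
MIN_SUPPORT = 2


def support(itemset):
    """Set of rows in which every gene of the itemset equals 1."""
    return {i for i, row in enumerate(MATRIX)
            if all(row[g] == 1 for g in itemset)}


def closed_frequent_itemsets():
    frequent = {}
    for k in range(1, len(GENES) + 1):
        for items in combinations(range(len(GENES)), k):
            rows = support(items)
            if len(rows) >= MIN_SUPPORT:
                frequent[items] = rows
    # An itemset is closed if no strict superset has the same supporting rows.
    return [items for items, rows in frequent.items()
            if not any(set(items) < set(other) and rows == other_rows
                       for other, other_rows in frequent.items())]


if __name__ == "__main__":
    for items in closed_frequent_itemsets():
        print({GENES[i] for i in items}, "support =", len(support(items)))
```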

  14. Spatial Rule-Based Modeling: A Method and Its Application to the Human Mitotic Kinetochore

    Jan Huwald


    Full Text Available A common problem in the analysis of biological systems is the combinatorial explosion that emerges from the complexity of multi-protein assemblies. Conventional formalisms, like differential equations, Boolean networks and Bayesian networks, are unsuitable for dealing with the combinatorial explosion, because they are designed for a restricted state space with fixed dimensionality. To overcome this problem, the rule-based modeling language, BioNetGen, and the spatial extension, SRSim, have been developed. Here, we describe how to apply rule-based modeling to integrate experimental data from different sources into a single spatial simulation model and how to analyze the output of that model. The starting point for this approach can be a combination of molecular interaction data, reaction network data, proximities, binding and diffusion kinetics and molecular geometries at different levels of detail. We describe the technique and then use it to construct a model of the human mitotic inner and outer kinetochore, including the spindle assembly checkpoint signaling pathway. This allows us to demonstrate the utility of the procedure, show how a novel perspective for understanding such complex systems becomes accessible and elaborate on challenges that arise in the formulation, simulation and analysis of spatial rule-based models.

  15. Using reduced rule base with Expert System for the diagnosis of disease in hypertension.

    Başçiftçi, Fatih; Eldem, Ayşe


    Hypertension, also called the "Silent Killer", is a dangerous and widespread disease that seriously threatens the health of individuals and communities worldwide, often leading to fatal outcomes such as heart attack, stroke, and renal failure. It affects approximately one billion people worldwide with increasing incidence. In Turkey, over 15 million people have hypertension. In this study, a new Medical Expert System (MES) procedure with a reduced rule base was developed to determine hypertension. The aim was to determine the disease by taking all symptoms of hypertension into account in the Medical Expert System (7 symptoms, 2^7 = 128 different conditions). In this new MES procedure, instead of checking all the symptoms, reduced rule bases were used. In order to obtain the reduced rule bases, the method of two-level simplification of Boolean functions was used. Through the use of this method, instead of assessing 2^7 = 128 individual conditions by taking the 7 symptoms of hypertension into account, reduced cases were evaluated. The average rate of success was 97.6% with the new MES procedure.
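
    The reduction step described above (two-level simplification of Boolean functions) can be reproduced in miniature with SymPy, whose SOPform performs Quine-McCluskey minimisation and returns a minimal sum-of-products expression. The example below uses a hypothetical 3-symptom truth table rather than the paper's 7-symptom rule base.

```python
"""Sketch of rule-base reduction by two-level Boolean minimisation
(hypothetical 3-symptom example, not the paper's 7-symptom rule base).
"""

from sympy import symbols
from sympy.logic import SOPform

# Three symptoms; 2**3 = 8 possible combinations instead of 2**7 = 128.
s1, s2, s3 = symbols("s1 s2 s3")

# Hypothetical expert table: symptom combinations classified as "at risk".
# Each minterm is a (s1, s2, s3) truth assignment.
risk_minterms = [
    [1, 1, 0],
    [1, 1, 1],
    [1, 0, 1],
    [0, 1, 1],
]

# Quine-McCluskey minimisation: four minterms collapse into three short rules.
reduced_rule = SOPform([s1, s2, s3], risk_minterms)
print(reduced_rule)   # e.g. (s1 & s2) | (s1 & s3) | (s2 & s3)
```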

  16. Executable specifications for hypothesis-based reasoning with Prolog and Constraint Handling Rules

    Christiansen, Henning


    Constraint Handling Rules (CHR) is an extension to Prolog which opens up a spectrum of hypotheses-based reasoning in logic programs without additional interpretation overhead. Abduction with integrity constraints is one example of hypotheses-based reasoning which can be implemented directly...... in Prolog and CHR with a straightforward use of available and efficiently implemented facilities. The present paper clarifies the semantic foundations for this way of doing abduction in CHR and Prolog as well as other examples of hypotheses-based reasoning that are possible, including assumptive logic...

  17. Study of Three rules of Skopostheorie in Translating Scientific Texts---Based on the translation of Pictures of the Future



    This thesis is based on the translation of scientific texts taken from Pictures of the Future. The characteristics and understanding of the scientific texts cause many difficulties for the translation. Guided by the skopos rule, the coherence rule and the fidelity rule, the author studies the application of the three rules of Skopostheorie in translating scientific texts by taking typical examples from his own translation project. This translational experience is expected to shed light on the translation of similar text types in the future.

  18. HOPC: A Novel Similarity Metric Based on Geometric Structural Properties for Multi-Modal Remote Sensing Image Matching

    Ye, Yuanxin; Shen, Li


    Automatic matching of multi-modal remote sensing images (e.g., optical, LiDAR, SAR and maps) remains a challenging task in remote sensing image analysis due to significant non-linear radiometric differences between these images. This paper addresses this problem and proposes a novel similarity metric for multi-modal matching using geometric structural properties of images. We first extend the phase congruency model with illumination and contrast invariance, and then use the extended model to build a dense descriptor called the Histogram of Orientated Phase Congruency (HOPC) that captures geometric structure or shape features of images. Finally, HOPC is integrated as the similarity metric to detect tie-points between images by designing a fast template matching scheme. This novel metric aims to represent geometric structural similarities between multi-modal remote sensing datasets and is robust against significant non-linear radiometric changes. HOPC has been evaluated with a variety of multi-modal images including optical, LiDAR, SAR and map data. Experimental results show its superiority to the recent state-of-the-art similarity metrics (e.g., NCC, MI, etc.), and demonstrate its improved matching performance.
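
    As a rough, simplified stand-in for the matching scheme described above, the sketch below builds an orientation-histogram descriptor per window and ranks candidate windows by the normalised cross-correlation of the descriptors. It substitutes gradient orientations for the phase congruency model and brute-force search for the paper's fast template matching, so it should be read as an illustration of descriptor-based similarity matching, not as HOPC itself.

```python
"""Simplified stand-in for descriptor-based template matching in the spirit
of HOPC (illustrative; gradient-orientation histograms replace the phase
congruency model, and brute-force search replaces the fast matching scheme).
"""

import numpy as np


def cell_descriptor(patch, bins=8):
    """Orientation histogram of gradients in a patch, L2-normalised."""
    gy, gx = np.gradient(patch.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.arctan2(gy, gx) % np.pi              # orientation in [0, pi)
    hist, _ = np.histogram(ang, bins=bins, range=(0, np.pi), weights=mag)
    norm = np.linalg.norm(hist)
    return hist / norm if norm > 0 else hist


def ncc(a, b):
    """Normalised cross-correlation of two descriptor vectors."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom > 0 else 0.0


def match(template, image):
    """Return the top-left offset in `image` whose window best matches."""
    th, tw = template.shape
    t_desc = cell_descriptor(template)
    best, best_score = (0, 0), -np.inf
    for y in range(image.shape[0] - th + 1):
        for x in range(image.shape[1] - tw + 1):
            score = ncc(t_desc, cell_descriptor(image[y:y + th, x:x + tw]))
            if score > best_score:
                best, best_score = (y, x), score
    return best, best_score


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    image = rng.random((40, 40))
    template = image[10:26, 15:31].copy()         # 16x16 patch cut from image
    print(match(template, image))                 # expected near (10, 15)
```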

  19. Does a research group increase impact on the scientific community or general public discussion? Alternative metric-based evaluation.

    De Gregori, Manuela; Scotti, Valeria; De Silvestri, Annalisa; Curti, Moreno; Fanelli, Guido; Allegri, Massimo; Schatman, Michael E


    In this study, we investigated the impact of scientific publications of the Italian SIMPAR (Study In Multidisciplinary PAin Research) group by using altmetrics, defined as nontraditional metrics constituting an alternative to more traditional citation-impact metrics, such as impact factor and H-index. By correlating traditional and alternative metrics, we attempted to verify whether publications by the SIMPAR group collectively had more impact than those performed by its individual members, either in solo publications or in publications coauthored by non-SIMPAR group investigators (which for the purpose of this study we will refer to as "individual publications"). For all 12 members of the group analyzed (pain therapists, biologists, and pharmacologists), we created Open Researcher and Contributor ID (ORCID) and ImpactStory accounts, and synchronized these data. Manually, we calculated the level metrics for each article by dividing the data obtained from the research community by those obtained from the public community. We analyzed 759 articles, 18 of which were published by the SIMPAR group. Altmetrics demonstrated that SIMPAR group publications were more likely to be saved (77.8% vs 45.9%) and discussed (61.1% vs 1.1%) than individual publications ... supporting the usefulness of altmetrics in estimating the value of the research products of a group.

  20. Comparing Single Case Design Overlap-Based Effect Size Metrics from Studies Examining Speech Generating Device Interventions

    Chen, Mo; Hyppa-Martin, Jolene K.; Reichle, Joe E.; Symons, Frank J.


    Meaningfully synthesizing single case experimental data from intervention studies comprised of individuals with low incidence conditions and generating effect size estimates remains challenging. Seven effect size metrics were compared for single case design (SCD) data focused on teaching speech generating device use to individuals with…