WorldWideScience

Sample records for expert algorithms approach

  1. Nuclear expert web search and crawler algorithm

    International Nuclear Information System (INIS)

    Reis, Thiago; Barroso, Antonio C.O.; Baptista, Benedito Filho D.

    2013-01-01

    In this paper we present preliminary research on a web search and crawling algorithm applied specifically to nuclear-related web information. We designed a web-based, nuclear-oriented expert system guided by a web crawler algorithm and a neural network, able to search and retrieve nuclear-related hypertextual web information in an autonomous and massive fashion. Preliminary experimental results show a retrieval precision of 80% for web pages related to any nuclear theme and of 72% for web pages related only to the nuclear power theme. (author)

  2. Nuclear expert web search and crawler algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Reis, Thiago; Barroso, Antonio C.O.; Baptista, Benedito Filho D., E-mail: thiagoreis@usp.br, E-mail: barroso@ipen.br, E-mail: bdbfilho@ipen.br [Instituto de Pesquisas Energeticas e Nucleares (IPEN/CNEN-SP), Sao Paulo, SP (Brazil)

    2013-07-01

    In this paper we present preliminary research on a web search and crawling algorithm applied specifically to nuclear-related web information. We designed a web-based, nuclear-oriented expert system guided by a web crawler algorithm and a neural network, able to search and retrieve nuclear-related hypertextual web information in an autonomous and massive fashion. Preliminary experimental results show a retrieval precision of 80% for web pages related to any nuclear theme and of 72% for web pages related only to the nuclear power theme. (author)

  3. Using hybrid expert system approaches for engineering applications

    Science.gov (United States)

    Allen, R. H.; Boarnet, M. G.; Culbert, C. J.; Savely, R. T.

    1987-01-01

    In this paper, the use of hybrid expert system shells and hybrid (i.e., algorithmic and heuristic) approaches for solving engineering problems is reported. Aspects of various engineering problem domains are reviewed for a number of examples, with specific applications made to recently developed prototype expert systems. Based on this prototyping experience, commercially available tools and some research tools in the United States and Australia, together with their underlying problem-solving paradigms, are critically evaluated and compared. Characteristics of the implementation tool and the engineering domain are compared, and practical software engineering issues are discussed with respect to hybrid tools and approaches. Finally, guidelines are offered with the hope that expert system development will be less time consuming, more effective, and more cost-effective than it has been in the past.

  4. Algorithmic approach to diagram techniques

    International Nuclear Information System (INIS)

    Ponticopoulos, L.

    1980-10-01

    An algorithmic approach to diagram techniques of elementary particles is proposed. The definition and axiomatics of the theory of algorithms are presented, followed by the list of instructions of an algorithm formalizing the construction of graphs and the assignment of mathematical objects to them. (T.A.)

  5. Expert System and Heuristics Algorithm for Cloud Resource Scheduling

    Directory of Open Access Journals (Sweden)

    Mamatha E

    2017-03-01

    Rule-based scheduling algorithms have been widely used in cloud computing systems, and there is still plenty of room to improve their performance. This paper proposes to develop an expert system that allocates cloud resources with a rule-based algorithm and measures the performance of the system by letting it adopt new rules based on feedback. The performance of each action helps the system make better resource allocations, improving quality of service, scalability, and flexibility. The performance measure is based on how the allocation of the resources is dynamically optimized and how well the resources are utilized, with the aim of maximizing resource utilization. The data and resources are given to the algorithm, which allocates the data to resources, and an output is obtained based on the action that occurred. Once the action is completed, the performance of every action is measured, including how the resources were allocated and how efficiently they worked. In addition to performance, resource allocation in the cloud environment is also considered.
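
    The record above describes the system only at a high level. As a rough illustration of the idea of rule-based allocation with feedback-driven rule weights, the following Python sketch is offered; all names (Rule, allocate, update_rules) and the scoring scheme are assumptions of this illustration, not the authors' implementation.

    ```python
    # Hypothetical sketch: rule-based resource allocation with feedback-driven rule adaptation.
    from dataclasses import dataclass

    @dataclass
    class Rule:
        name: str
        condition: callable  # task -> bool
        weight: float        # adjusted from feedback

    def allocate(task, resources, rules):
        """Score each feasible resource with the rules that fire for this task and pick the best."""
        def score(res):
            return sum(r.weight for r in rules if r.condition(task))
        candidates = [r for r in resources if r["free_cpu"] >= task["cpu"]]
        return max(candidates, key=score) if candidates else None

    def update_rules(rules, rule_name, success):
        """Feedback loop: reinforce rules that led to good allocations, weaken the others."""
        for r in rules:
            if r.name == rule_name:
                r.weight *= 1.1 if success else 0.9

    # Example usage with invented resources and a trivial rule.
    rules = [Rule("fits-cpu", lambda t: True, 1.0)]
    resources = [{"id": "vm1", "free_cpu": 4}, {"id": "vm2", "free_cpu": 8}]
    print(allocate({"cpu": 2}, resources, rules))
    update_rules(rules, "fits-cpu", success=True)
    ```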

  6. Efficient fuzzy Bayesian inference algorithms for incorporating expert knowledge in parameter estimation

    Science.gov (United States)

    Rajabi, Mohammad Mahdi; Ataie-Ashtiani, Behzad

    2016-05-01

    Bayesian inference has traditionally been conceived as the proper framework for the formal incorporation of expert knowledge in the parameter estimation of groundwater models. However, conventional Bayesian inference is incapable of taking into account the imprecision essentially embedded in expert-provided information. To address this problem, a number of extensions to conventional Bayesian inference have been introduced in recent years. One of these extensions is 'fuzzy Bayesian inference', the result of integrating fuzzy techniques into Bayesian statistics. Fuzzy Bayesian inference has a number of desirable features that make it an attractive approach for incorporating expert knowledge in the parameter estimation process of groundwater models: (1) it is well adapted to the nature of expert-provided information, (2) it allows uncertainty and imprecision to be modeled distinctly, and (3) it presents a framework for fusing expert-provided information regarding the various inputs of the Bayesian inference algorithm. An important obstacle to employing fuzzy Bayesian inference in groundwater numerical modeling applications, however, is the computational burden, as the required number of numerical model simulations often becomes extremely large and computationally infeasible. In this paper, a novel approach to accelerating the fuzzy Bayesian inference algorithm is proposed, based on using approximate posterior distributions derived from surrogate modeling as a screening tool in the computations. The proposed approach is first applied to a synthetic test case of seawater intrusion (SWI) in a coastal aquifer. It is shown that for this synthetic test case, the proposed approach decreases the number of required numerical simulations by an order of magnitude. Then the proposed approach is applied to a real-world test case involving three-dimensional numerical modeling of SWI in Kish Island, located in the Persian Gulf. An expert

  7. THE APPROACHING TRAIN DETECTION ALGORITHM

    OpenAIRE

    S. V. Bibikov

    2015-01-01

    The paper deals with a detection algorithm for rail vibroacoustic waves caused by an approaching train against a background of increased noise. The need for such a train detection algorithm in conditions of increased rail noise, when railway lines are close to roads or road intersections, is justified. The algorithm is based on a method for detecting weak signals in a noisy environment. The final expression of the information statistic is adjusted. We present the results of algorithm research and t...

  8. Network approaches for expert decisions in sports.

    Science.gov (United States)

    Glöckner, Andreas; Heinen, Thomas; Johnson, Joseph G; Raab, Markus

    2012-04-01

    This paper focuses on a model comparison to explain choices based on gaze behavior via simulation procedures. We tested two classes of models, a parallel constraint satisfaction (PCS) artificial neuronal network model and an accumulator model in a handball decision-making task from a lab experiment. Both models predict action in an option-generation task in which options can be chosen from the perspective of a playmaker in handball (i.e., passing to another player or shooting at the goal). Model simulations are based on a dataset of generated options together with gaze behavior measurements from 74 expert handball players for 22 pieces of video footage. We implemented both classes of models as deterministic vs. probabilistic models including and excluding fitted parameters. Results indicated that both classes of models can fit and predict participants' initially generated options based on gaze behavior data, and that overall, the classes of models performed about equally well. Early fixations were thereby particularly predictive for choices. We conclude that the analyses of complex environments via network approaches can be successfully applied to the field of experts' decision making in sports and provide perspectives for further theoretical developments. Copyright © 2011 Elsevier B.V. All rights reserved.

  9. An expert system approach for safety diagnosis

    International Nuclear Information System (INIS)

    Erdmann, R.C.; Sun, B.K.H.

    1988-01-01

    An expert system was developed with the intent to provide real-time information about an accident to an operator who is in the process of diagnosing and bringing that accident under control. Explicit use was made of probabilistic risk analysis techniques and plant accident response information in constructing this system. The expert system developed contains 70 logic rules and provides contextual messages during simulated accident sequences and logic sequence information on the entire sequence in graphical form for accident diagnosis. The present analysis focuses on integrated control system-related transients with Babcock and Wilcox-type reactors. While the system developed here is limited in extent and was built for a composite reactor, it demonstrates that an expert system may enhance the operator's capability in the control room

  10. Development of Type 2 Diabetes Mellitus Phenotyping Framework Using Expert Knowledge and Machine Learning Approach.

    Science.gov (United States)

    Kagawa, Rina; Kawazoe, Yoshimasa; Ida, Yusuke; Shinohara, Emiko; Tanaka, Katsuya; Imai, Takeshi; Ohe, Kazuhiko

    2017-07-01

    Phenotyping is an automated technique that can be used to distinguish patients based on electronic health records. To improve the quality of medical care and advance type 2 diabetes mellitus (T2DM) research, the demand for T2DM phenotyping has been increasing. Some existing phenotyping algorithms are not sufficiently accurate for screening or identifying clinical research subjects. We propose a practical phenotyping framework using both expert knowledge and a machine learning approach to develop 2 phenotyping algorithms: one is for screening; the other is for identifying research subjects. We employ expert knowledge as rules to exclude obvious control patients and machine learning to increase accuracy for complicated patients. We developed phenotyping algorithms on the basis of our framework and performed binary classification to determine whether a patient has T2DM. To facilitate development of practical phenotyping algorithms, this study introduces new evaluation metrics: area under the precision-sensitivity curve (AUPS) with a high sensitivity and AUPS with a high positive predictive value. The proposed phenotyping algorithms based on our framework show higher performance than baseline algorithms. Our proposed framework can be used to develop 2 types of phenotyping algorithms depending on the tuning approach: one for screening, the other for identifying research subjects. We develop a novel phenotyping framework that can be easily implemented on the basis of proper evaluation metrics, which are in accordance with users' objectives. The phenotyping algorithms based on our framework are useful for extraction of T2DM patients in retrospective studies.
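
    The abstract outlines a two-stage design (expert rules to exclude obvious controls, then machine learning for the remaining patients) without giving code. The sketch below illustrates that idea under assumed, hypothetical feature names and an arbitrary classifier choice; it is not the authors' pipeline.

    ```python
    # Sketch of a two-stage phenotyping pipeline: expert exclusion rules, then ML classification.
    # Feature names and the rule threshold are hypothetical, not the authors' definitions.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def expert_rule_excludes(patient):
        # Example rule: no diabetes-related codes and HbA1c never measured -> obvious control.
        return patient["n_dm_codes"] == 0 and np.isnan(patient["max_hba1c"])

    def phenotype(patients, features, model):
        """Label each patient as T2DM (1) or control (0), applying rules before the classifier."""
        labels = np.zeros(len(patients), dtype=int)
        keep = [i for i, p in enumerate(patients) if not expert_rule_excludes(p)]
        if keep:
            labels[keep] = model.predict(features[keep])
        return labels

    # Training on labelled data would precede use, e.g. (X_train, y_train assumed available):
    # model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    ```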

  11. Advanced language modeling approaches, case study: Expert search

    NARCIS (Netherlands)

    Hiemstra, Djoerd

    2008-01-01

    This tutorial gives a clear and detailed overview of advanced language modeling approaches and tools, including the use of document priors, translation models, relevance models, parsimonious models and expectation maximization training. Expert search will be used as a case study to explain the

  12. Expert-guided evolutionary algorithm for layout design of complex space stations

    Science.gov (United States)

    Qian, Zhiqin; Bi, Zhuming; Cao, Qun; Ju, Weiguo; Teng, Hongfei; Zheng, Yang; Zheng, Siyu

    2017-08-01

    The layout of a space station should be designed in such a way that different equipment and instruments are placed for the station as a whole to achieve the best overall performance. The station layout design is a typical nondeterministic polynomial problem. In particular, how to manage the design complexity to achieve an acceptable solution within a reasonable timeframe poses a great challenge. In this article, a new evolutionary algorithm is proposed to meet this challenge, called the expert-guided evolutionary algorithm with a tree-like structure decomposition (EGEA-TSD). Two innovations in EGEA-TSD are: (i) to deal with the design complexity, the entire design space is divided into subspaces with a tree-like structure, which reduces the computation and facilitates experts' involvement in the solving process; and (ii) a human-intervention interface is developed to allow experts to help avoid local optima and accelerate convergence. To validate the proposed algorithm, the layout design of a space station is formulated as a multi-disciplinary design problem, the developed algorithm is programmed and executed, and the result is compared with those from two other algorithms, illustrating the superior performance of the proposed EGEA-TSD.

  13. COMPETENCE APPROACH TO TRAINING OF EXPERTS IN RADIATION HYGIENE

    Directory of Open Access Journals (Sweden)

    T. B. Baltrukova

    2015-01-01

    A change in the attitude to labor in society, in professional communities, and among individuals is necessary for the further development of society and the national economy. This goal may be achieved if the system of professional training is modified and switched to a competence approach, which should include training of experts, including those in radiation hygiene, in a set of general cultural and professional competences. The training of future experts should be based on the traditions of domestic and international education; it should use modern forms of active and interactive education (computer simulations, business games and role-playing, analysis of concrete situations, portfolios, psychological and other trainings, remote education, etc.). It should keep knowledge and skills current and develop independence and responsibility, enabling the young expert to be competitive in the modern labor market and to meet employers' expectations. Under the new federal educational standard on radiation hygiene adopted in 2014, primary specialization in radiation hygiene currently takes place in internship. For the training of experts, the new standard provides for greater use of on-the-job training, independent work, and scientific and practical work. Employers should play an important role in the training of experts.

  14. A STUDENT MODEL AND LEARNING ALGORITHM FOR THE EXPERT TUTORING SYSTEM OF POLISH GRAMMAR

    Directory of Open Access Journals (Sweden)

    Kostikov Mykola

    2014-11-01

    When creating computer-assisted language learning software, it is necessary to fully use the potential of information technology in controlling the learning process. Modern intelligent tutoring systems help to make this process adaptive and personalized by modeling the domain and students' knowledge. The aim of the paper is to investigate possibilities for applying these methods to teaching Polish grammar in Ukraine, taking into account its specifics. The article is concerned with the approaches of using student models in modern intelligent tutoring systems in order to provide personalized learning. A structure of the student model and a general working algorithm of the expert tutoring system of Polish grammar have been developed. The modeling of knowing and forgetting particular learning elements within the probabilistic (stochastic) model has been studied, as well as the prediction of future probabilities of students' knowledge, taking into account their individual forgetting rates. The objective function of instruction quality, with allowance for the frequency of grammar rules within a certain amount of words being learned and their connections to other rules, has been formulated. The problem of generating the next learning step, taking into account the need to master previous, connected rules, has been studied, as well as determining the optimal time period between lessons depending on the current knowledge level.
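
    The record mentions modeling the forgetting of learning elements with individual forgetting rates but gives no formulas. A minimal sketch of one common way to express this, an exponential forgetting curve, is shown below; the parameter names and threshold are illustrative assumptions, not the paper's model.

    ```python
    # Sketch: probability that a learner still knows a grammar rule after t days,
    # assuming an exponential forgetting curve with an individual forgetting rate.
    import math

    def retention_probability(p_learned, days_since_review, forgetting_rate):
        """p_learned: probability the rule was known right after the last review.
        forgetting_rate: per-day decay constant estimated per student (assumption)."""
        return p_learned * math.exp(-forgetting_rate * days_since_review)

    def next_review_due(p_learned, forgetting_rate, threshold=0.8):
        """Days until the retention probability drops below a chosen threshold."""
        return math.log(p_learned / threshold) / forgetting_rate

    print(retention_probability(0.95, 7, 0.05))  # ~0.67
    print(next_review_due(0.95, 0.05))           # ~3.4 days
    ```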

  15. Theoretical and expert system approach to photoionization theories

    Directory of Open Access Journals (Sweden)

    Petrović Ivan D.

    2016-01-01

    The influence of the ponderomotive and Stark shifts on the tunneling transition rate was studied for a non-relativistic, linearly polarized laser field and alkali atoms, using three different theoretical models: the Keldysh theory; the Perelomov, Popov, Terent'ev (PPT) theory; and the Ammosov, Delone, Krainov (ADK) theory. We showed that the aforementioned shifts affect the transition rate differently for the different approaches. Finally, we present a simple expert system for the analysis of photoionization theories.

  16. Guidance, navigation, and control subsystem equipment selection algorithm using expert system methods

    Science.gov (United States)

    Allen, Cheryl L.

    1991-01-01

    Enhanced engineering tools can be obtained through the integration of expert system methodologies and existing design software. The application of these methodologies to the spacecraft design and cost model (SDCM) software provides an improved technique for the selection of hardware for unmanned spacecraft subsystem design. The knowledge engineering system (KES) expert system development tool was used to implement a smarter equipment selection algorithm than is currently achievable with a standard database system. The guidance, navigation, and control subsystem of the SDCM software was chosen as the initial subsystem for implementation. The portions of the SDCM code which compute the selection criteria and constraints remain intact, and the expert system equipment selection algorithm is embedded within this existing code. The architecture of this new methodology is described and its implementation is reported. The project background and a brief overview of the expert system are described, and once the details of the design are characterized, an example of its implementation is demonstrated.

  17. Validating hierarchical verbal autopsy expert algorithms in a large data set with known causes of death.

    Science.gov (United States)

    Kalter, Henry D; Perin, Jamie; Black, Robert E

    2016-06-01

    Physician assessment historically has been the most common method of analyzing verbal autopsy (VA) data. Recently, the World Health Organization endorsed two automated methods, Tariff 2.0 and InterVA-4, which promise greater objectivity and lower cost. A disadvantage of the Tariff method is that it requires a training data set from a prior validation study, while InterVA relies on clinically specified conditional probabilities. We undertook to validate the hierarchical expert algorithm analysis of VA data, an automated, intuitive, deterministic method that does not require a training data set. Using Population Health Metrics Research Consortium study hospital source data, we compared the primary causes of 1629 neonatal and 1456 1-59 month-old child deaths from VA expert algorithms arranged in a hierarchy to their reference standard causes. The expert algorithms were held constant, while five prior and one new "compromise" neonatal hierarchy, and three former child hierarchies were tested. For each comparison, the reference standard data were resampled 1000 times within the range of cause-specific mortality fractions (CSMF) for one of three approximated community scenarios in the 2013 WHO global causes of death, plus one random mortality cause proportions scenario. We utilized CSMF accuracy to assess overall population-level validity, and the absolute difference between VA and reference standard CSMFs to examine particular causes. Chance-corrected concordance (CCC) and Cohen's kappa were used to evaluate individual-level cause assignment. Overall CSMF accuracy for the best-performing expert algorithm hierarchy was 0.80 (range 0.57-0.96) for neonatal deaths and 0.76 (0.50-0.97) for child deaths. Performance for particular causes of death varied, with fairly flat estimated CSMF over a range of reference values for several causes. Performance at the individual diagnosis level was also less favorable than that for overall CSMF (neonatal: best CCC = 0.23, range 0
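
    CSMF accuracy is the population-level metric used above. A short sketch of its standard definition follows; this is a generic illustration, not code from the study.

    ```python
    # Sketch: cause-specific mortality fraction (CSMF) accuracy, the population-level
    # validity metric referred to above (standard definition, not the study's code).
    import numpy as np

    def csmf_accuracy(true_csmf, predicted_csmf):
        true_csmf = np.asarray(true_csmf, dtype=float)
        predicted_csmf = np.asarray(predicted_csmf, dtype=float)
        # One minus the total absolute error, normalised by its maximum possible value.
        return 1.0 - np.abs(true_csmf - predicted_csmf).sum() / (2.0 * (1.0 - true_csmf.min()))

    print(csmf_accuracy([0.5, 0.3, 0.2], [0.4, 0.35, 0.25]))  # 0.875
    ```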

  18. Dynamic traffic assignment : genetic algorithms approach

    Science.gov (United States)

    1997-01-01

    Real-time route guidance is a promising approach to alleviating congestion on the nation's highways. A dynamic traffic assignment model is central to the development of guidance strategies. The artificial intelligence technique of genetic algorithm...

  19. The validation index: a new metric for validation of segmentation algorithms using two or more expert outlines with application to radiotherapy planning.

    Science.gov (United States)

    Juneja, Prabhjot; Evans, Philip M; Harris, Emma J

    2013-08-01

    Validation is required to ensure automated segmentation algorithms are suitable for radiotherapy target definition. In the absence of true segmentation, algorithmic segmentation is validated against expert outlining of the region of interest. Multiple experts are used to overcome inter-expert variability. Several approaches have been studied in the literature, but the most appropriate approach for combining the information from multiple expert outlines into a single validation metric is unclear. None considers a metric that can be tailored to case-specific requirements in radiotherapy planning. The validation index (VI), a new validation metric which uses the experts' level of agreement, was developed. A control parameter was introduced for the validation of segmentations required for different radiotherapy scenarios: for targets close to organs-at-risk and for difficult-to-discern targets, where large variation between experts is expected. The VI was evaluated using two simulated idealized cases and data from two clinical studies. The VI was compared with the commonly used pairwise Dice similarity coefficient (DSC) and found to be more sensitive than the pairwise DSC to changes in agreement between experts. The VI was shown to be adaptable to specific radiotherapy planning scenarios.
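
    For reference, the pairwise Dice similarity coefficient that the VI is compared against can be computed as in the following sketch; the masks are toy examples and the averaging over experts is an assumption of this illustration.

    ```python
    # Sketch: pairwise Dice similarity coefficient (DSC) between a segmentation and one
    # expert outline, the baseline metric the validation index is compared against.
    import numpy as np

    def dice(mask_a, mask_b):
        a = np.asarray(mask_a, dtype=bool)
        b = np.asarray(mask_b, dtype=bool)
        overlap = np.logical_and(a, b).sum()
        return 2.0 * overlap / (a.sum() + b.sum())

    def mean_pairwise_dice(segmentation, expert_masks):
        """Average DSC of an automated segmentation against several expert outlines."""
        return float(np.mean([dice(segmentation, m) for m in expert_masks]))

    seg = np.array([[0, 1, 1], [0, 1, 0]])
    experts = [np.array([[0, 1, 1], [0, 0, 0]]), np.array([[0, 1, 1], [0, 1, 1]])]
    print(mean_pairwise_dice(seg, experts))  # ~0.83
    ```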

  20. Fuzzy Expert System based on a Novel Hybrid Stem Cell (HSC) Algorithm for Classification of Micro Array Data.

    Science.gov (United States)

    Vijay, S Arul Antran; GaneshKumar, P

    2018-02-21

    Microarray data are extensively used since they provide a more comprehensive understanding of genetic variants among diseases. As gene expression samples have high dimensionality, it becomes tedious to analyze them manually; hence an automated system is needed. A fuzzy expert system offers a clearer classification when compared to machine learning and statistical methodologies. In fuzzy classification, knowledge acquisition is a major concern, and despite several existing approaches for knowledge acquisition, much effort is necessary to enhance the learning process. This paper proposes an innovative Hybrid Stem Cell (HSC) algorithm that utilizes Ant Colony Optimization and a Stem Cell algorithm for designing a fuzzy classification system, extracting informative rules to form the membership functions from the microarray dataset. The HSC algorithm uses a novel Adaptive Stem Cell Optimization (ASCO) to improve the points of the membership functions and Ant Colony Optimization to produce the near-optimum rule set. In order to extract the most informative genes from the large microarray dataset, a method called mutual information is used. The performance of the proposed technique is evaluated using five microarray datasets. The results show that the proposed Hybrid Stem Cell (HSC) algorithm produces a more precise fuzzy system than the existing methodologies.
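
    The abstract states that mutual information is used to pre-select informative genes. A minimal sketch of that step using scikit-learn's mutual_info_classif is given below on synthetic data; it illustrates the general technique, not the authors' code.

    ```python
    # Sketch: selecting the most informative genes with mutual information before
    # building a classifier (generic illustration on synthetic expression data).
    import numpy as np
    from sklearn.feature_selection import mutual_info_classif

    def top_genes(X, y, k=50):
        """Return the indices of the k genes with highest mutual information with the class label."""
        mi = mutual_info_classif(X, y, random_state=0)
        return np.argsort(mi)[::-1][:k]

    # Synthetic expression matrix: 40 samples x 200 genes, binary class labels.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(40, 200))
    y = rng.integers(0, 2, size=40)
    print(top_genes(X, y, k=10))
    ```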

  1. A semi-linguistic approach based on fuzzy set theory: application to expert judgments aggregation

    International Nuclear Information System (INIS)

    Ghyym, Seong Ho

    1998-01-01

    In the present work, a semi-linguistic fuzzy algorithm is proposed to obtain the fuzzy weighting values for a multi-criterion, multi-alternative performance evaluation problem, with application to the aggregated estimate in the aggregation process of multi-expert judgments. The proposed algorithm framework is composed of the hierarchical structure, the semi-linguistic approach, the fuzzy R-L type integral value, and the total risk attitude index. In this work, extending the Chang/Chen method for triangular fuzzy numbers, the total risk attitude index is devised for a trapezoidal fuzzy number system. To illustrate the application of the proposed algorithm, a case problem from the literature is studied in connection with the weighting value evaluation of three alternatives (i.e., the aggregation of three experts' judgments) under seven criteria. The evaluation results, such as the overall utility value, aggregation weighting value, and aggregated estimate, obtained using the present fuzzy model are compared with those for other fuzzy models based on the Kim/Park method, the Liou/Wang method, and the Chang/Chen method

  2. A semi-linguistic approach based on fuzzy set theory: application to expert judgments aggregation

    Energy Technology Data Exchange (ETDEWEB)

    Ghyym, Seong Ho [KEPRI, Taejon (Korea, Republic of)

    1998-10-01

    In the present work, a semi-linguistic fuzzy algorithm is proposed to obtain the fuzzy weighting values for a multi-criterion, multi-alternative performance evaluation problem, with application to the aggregated estimate in the aggregation process of multi-expert judgments. The proposed algorithm framework is composed of the hierarchical structure, the semi-linguistic approach, the fuzzy R-L type integral value, and the total risk attitude index. In this work, extending the Chang/Chen method for triangular fuzzy numbers, the total risk attitude index is devised for a trapezoidal fuzzy number system. To illustrate the application of the proposed algorithm, a case problem from the literature is studied in connection with the weighting value evaluation of three alternatives (i.e., the aggregation of three experts' judgments) under seven criteria. The evaluation results, such as the overall utility value, aggregation weighting value, and aggregated estimate, obtained using the present fuzzy model are compared with those for other fuzzy models based on the Kim/Park method, the Liou/Wang method, and the Chang/Chen method.

  3. System engineering approach to GPM retrieval algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Rose, C. R. (Chris R.); Chandrasekar, V.

    2004-01-01

    System engineering principles and methods are very useful in large-scale complex systems for developing the engineering requirements from end-user needs. Integrating research into system engineering is a challenging task. The proposed Global Precipitation Mission (GPM) satellite will use a dual-wavelength precipitation radar to measure and map global precipitation with unprecedented accuracy, resolution and areal coverage. The satellite vehicle, precipitation radars, retrieval algorithms, and ground validation (GV) functions are all critical subsystems of the overall GPM system, and each contributes to the success of the mission. Errors in the radar measurements and models can adversely affect the retrieved output values. Ground validation (GV) systems are intended to provide timely feedback to the satellite and retrieval algorithms based on measured data. These GV sites will consist of radars and drop-size distribution (DSD) measurement systems and also have intrinsic constraints. One of the retrieval algorithms being studied for use with GPM is the dual-wavelength DSD algorithm that does not use the surface reference technique (SRT). The underlying microphysics of precipitation structures and drop-size distributions (DSDs) dictate the types of models and retrieval algorithms that can be used to estimate precipitation. Many types of dual-wavelength algorithms have been studied. Meneghini (2002) analyzed the performance of single-pass dual-wavelength surface-reference-technique (SRT) based algorithms. Mardiana (2003) demonstrated that a dual-wavelength retrieval algorithm could be successfully used without the SRT; it uses an iterative approach based on measured reflectivities at both wavelengths and complex microphysical models to estimate both N0 and D0 at each range bin. More recently, Liao (2004) proposed a solution to the D0 ambiguity problem in rain within the dual-wavelength algorithm and showed a possible melting layer model based on stratified spheres. With the N0 and D0

  4. An hierarchical approach to performance evaluation of expert systems

    Science.gov (United States)

    Dominick, Wayne D. (Editor); Kavi, Srinu

    1985-01-01

    The number and size of expert systems are growing rapidly. Formal evaluation of these systems, which is not performed for many systems, increases their acceptability by the user community and hence their success. Hierarchical evaluation, which had been conducted for computer systems, is applied to expert system performance evaluation. Expert systems are also evaluated by treating them as software systems (or programs). This paper reports many of the basic concepts and ideas in the Performance Evaluation of Expert Systems Study being conducted at the University of Southwestern Louisiana.

  5. Spectral information interpretation of X-ray analysis based on expert system approach

    International Nuclear Information System (INIS)

    Drakunov, Yu.M.; Lezin, A.N.; Pukha, N.P.; Silachev, I.Yu.

    2000-01-01

    An expert subprogram for the automated identification of the element composition of samples of various natures, based on the results of energy-dispersive X-ray fluorescence analysis, has been elaborated. The flowchart of the subprogram is presented, and a brief description of the expert system structure and its algorithm is given. (author)

  6. Essential algorithms a practical approach to computer algorithms

    CERN Document Server

    Stephens, Rod

    2013-01-01

    A friendly and accessible introduction to the most useful algorithms Computer algorithms are the basic recipes for programming. Professional programmers need to know how to use algorithms to solve difficult programming problems. Written in simple, intuitive English, this book describes how and when to use the most practical classic algorithms, and even how to create new algorithms to meet future needs. The book also includes a collection of questions that can help readers prepare for a programming job interview. Reveals methods for manipulating common data structures s

  7. Expert System Approach For Generating And Evaluating Engine Design Alternatives

    Science.gov (United States)

    Shen, Stewart N. T.; Chew, Meng-Sang; Issa, Ghassan F.

    1989-03-01

    Artificial intelligence is becoming an increasingly important subject of study for computer scientists, engineering designers, as well as professionals in other fields. Even though AI technology is a relatively new discipline, many of its concepts have already found practical applications. Expert systems, in particular, have made significant contributions to technologies in such fields as business, medicine, engineering design, chemistry, and particle physics. This paper describes an expert system developed to aid the mechanical designer with the preliminary design of variable-stroke internal-combustion engines. The expert system accomplished its task by generating and evaluating a large number of design alternatives represented in the form of graphs. Through the application of structural and design rules directly to the graphs, optimal and near optimal preliminary design configurations of engines are deduced.

  8. A statistical modeling approach to build expert credit risk rating systems

    DEFF Research Database (Denmark)

    Waagepetersen, Rasmus

    2010-01-01

    This paper presents an efficient method for extracting expert knowledge when building a credit risk rating system. Experts are asked to rate a sample of counterparty cases according to creditworthiness. Next, a statistical model is used to capture the relation between the characteristics ... of a counterparty and the expert rating. For any counterparty the model can identify the rating which would be agreed upon by the majority of experts. Furthermore, the model can quantify the concurrence among experts. The approach is illustrated by a case study regarding the construction of an application score...

  9. An Expert System Approach to Online Catalog Subject Searching.

    Science.gov (United States)

    Khoo, Christopher S. G.; Poo, Danny C. C.

    1994-01-01

    Reviews methods to improve online catalogs for subject searching and describes the design of an expert system front-end to improve subject access in online public access catalogs that focuses on search strategies. Implementation of a prototype system at the National University of Singapore is described, and reformulation strategies are discussed.…

  10. A new algorithm for reducing the workload of experts in performing systematic reviews.

    Science.gov (United States)

    Matwin, Stan; Kouznetsov, Alexandre; Inkpen, Diana; Frunza, Oana; O'Blenis, Peter

    2010-01-01

    To determine whether a factorized version of the complement naïve Bayes (FCNB) classifier can reduce the time spent by experts reviewing journal articles for inclusion in systematic reviews of drug class efficacy for disease treatment. The proposed classifier was evaluated on a test collection built from 15 systematic drug class reviews used in previous work. The FCNB classifier was constructed to classify each article as containing high-quality, drug class-specific evidence or not. Weight engineering (WE) techniques were added to reduce underestimation for Medical Subject Headings (MeSH)-based and Publication Type (PubType)-based features. Cross-validation experiments were performed to evaluate the classifier's parameters and performance. Work saved over sampling (WSS) at no less than a 95% recall was used as the main measure of performance. The minimum workload reduction for a systematic review for one topic, achieved with a FCNB/WE classifier, was 8.5%; the maximum was 62.2% and the average over the 15 topics was 33.5%. This is 15.0% higher than the average workload reduction obtained using a voting perceptron-based automated citation classification system. The FCNB/WE classifier is simple, easy to implement, and produces significantly better results in reducing the workload than previously achieved. The results support it being a useful algorithm for machine-learning-based automation of systematic reviews of drug class efficacy for disease treatment.
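
    Work saved over sampling (WSS) at 95% recall is the headline measure above. Its standard definition can be computed as in this short sketch; the counts in the example are invented.

    ```python
    # Sketch: "work saved over sampling" (WSS) at 95% recall, the evaluation measure
    # referred to above (standard definition, not code from the study).
    def wss_at_recall(tn, fn, total, recall=0.95):
        """Fraction of articles the reviewers did not have to read, minus the
        expected saving of random sampling at the same recall level."""
        return (tn + fn) / total - (1.0 - recall)

    # Example: 1000 candidate articles; the classifier screens out 400 true negatives
    # and misses 5 relevant ones while keeping recall >= 95%.
    print(wss_at_recall(tn=400, fn=5, total=1000))  # 0.355
    ```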

  11. Land-cover classification with an expert classification algorithm using digital aerial photographs

    Directory of Open Access Journals (Sweden)

    José L. de la Cruz

    2010-05-01

    The purpose of this study was to evaluate the usefulness of the spectral information of digital aerial sensors in determining land-cover classification using new digital techniques. The land covers that were evaluated are the following: (1) bare soil; (2) cereals, including maize (Zea mays L.), oats (Avena sativa L.), rye (Secale cereale L.), wheat (Triticum aestivum L.) and barley (Hordeum vulgare L.); (3) high-protein crops, such as peas (Pisum sativum L.) and beans (Vicia faba L.); (4) alfalfa (Medicago sativa L.); (5) woodlands and scrublands, including holly oak (Quercus ilex L.) and common retama (Retama sphaerocarpa L.); (6) urban soil; (7) olive groves (Olea europaea L.); and (8) burnt crop stubble. The best result was obtained using an expert classification algorithm, achieving a reliability rate of 95%. This result shows that the images of digital airborne sensors hold considerable promise for the future in the field of digital classification, because these images contain valuable information that takes advantage of the geometric viewpoint. Moreover, new classification techniques reduce the problems encountered using high-resolution images, while reliabilities are achieved that are better than those achieved with traditional methods.

  12. Information systems development of analysis company financial state based on the expert-statistical approach

    Directory of Open Access Journals (Sweden)

    M. N. Ivliev

    2016-01-01

    The work is devoted to methods for analyzing a company's financial condition, including aggregated ratings. It is proposed to use a generalized solvency and liquidity indicator and a capital structure composite index. Mathematically, the generalized index is a sum of characteristic variables and weighting factors characterizing the relative importance of the individual characteristics. It is proposed to select the significant features from a set of standard financial ratios calculated from enterprises' balance sheets. To obtain the weighting factor values, it is proposed to use one of the expert-statistical approaches, the analytic hierarchy process. The method is as follows: the most important characteristic is chosen, and the experts then determine the degree of preference for the main feature based on a linguistic scale. Further, a matrix of pairwise comparisons is compiled based on the assigned ranks, which characterizes the relative importance of the attributes. The required coefficients are determined as elements of a priority vector, the principal eigenvector of the matrix of pairwise comparisons. The paper proposes a mechanism for finding the fields for rating number analysis. In addition, the paper proposes a method for the statistical evaluation of the balance sheets of various companies by calculating mutual correlation matrices. Based on the considered mathematical methods for determining the quantitative characteristics of financial and economic activities, algorithms, information and software were developed, allowing different systems of economic analysis to be realized.
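
    The analytic hierarchy process step described above (pairwise comparisons, then a priority vector) can be sketched as follows; the comparison matrix is an invented example, not one of the paper's financial-ratio matrices.

    ```python
    # Sketch: deriving weighting factors from a pairwise comparison matrix with the
    # analytic hierarchy process (principal-eigenvector method); illustrative values only.
    import numpy as np

    def ahp_weights(pairwise):
        """Return the normalised principal eigenvector of a reciprocal comparison matrix."""
        values, vectors = np.linalg.eig(np.asarray(pairwise, dtype=float))
        principal = np.abs(np.real(vectors[:, np.argmax(values.real)]))
        return principal / principal.sum()

    # Three financial ratios compared on Saaty's 1-9 scale (hypothetical judgments).
    A = [[1,   3,   5],
         [1/3, 1,   2],
         [1/5, 1/2, 1]]
    print(ahp_weights(A))  # roughly [0.65, 0.23, 0.12]
    ```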

  13. An Expert System-Based Approach to Hospitality Company Diagnosis

    OpenAIRE

    Balfe, Andrew; O'Connor, Peter; McDonnell, Ciaran

    1994-01-01

    This paper describes the development of a prototype Expert System-based Analysis and Diagnostic (ESAD) package for the Hotel and Catering Industry. This computerised tool aids the hospitality manager in methodically scrutinising the hotel unit and environment, combining key information with systematic reasoning. The system searches through its extensive knowledge base, investigating complicated relationships. The number of possibilities considered is increased which will broaden the depth and...

  14. Knowledge-based radiation therapy (KBRT) treatment planning versus planning by experts: validation of a KBRT algorithm for prostate cancer treatment planning

    International Nuclear Information System (INIS)

    Nwankwo, Obioma; Mekdash, Hana; Sihono, Dwi Seno Kuncoro; Wenz, Frederik; Glatting, Gerhard

    2015-01-01

    A knowledge-based radiation therapy (KBRT) treatment planning algorithm was recently developed. The purpose of this work is to investigate how plans that are generated with the objective KBRT approach compare to those that rely on the judgment of the experienced planner. Thirty volumetric modulated arc therapy plans were randomly selected from a database of prostate plans that were generated by experienced planners (expert plans). The anatomical data (CT scan and delineation of organs) of these patients and the KBRT algorithm were given to a novice with no prior treatment planning experience. The inexperienced planner used the knowledge-based algorithm to predict the doses that the OARs receive based on their proximity to the treated volume. The population-based OAR constraints were changed to the predicted doses, and a KBRT plan was subsequently generated. The KBRT and expert plans were compared for the achieved target coverage and OAR sparing. The target coverages were compared using the uniformity index (UI), while five dose-volume points (D10, D30, D50, D70 and D90) were used to compare the OAR (bladder and rectum) doses. The Wilcoxon matched-pairs signed-rank test was used to check for significant differences (p < 0.05) between both datasets. The KBRT and expert plans achieved mean UI values of 1.10 ± 0.03 and 1.10 ± 0.04, respectively. The Wilcoxon test showed no statistically significant difference between both results. The D90, D70, D50, D30 and D10 values of the two planning strategies and the Wilcoxon test results suggest that the KBRT plans achieved a statistically significantly lower bladder dose (at D30), while the expert plans achieved a statistically significantly lower rectal dose (at D10 and D30). The results of this study show that the KBRT treatment planning approach is a promising method to objectively incorporate patient anatomical variations in radiotherapy treatment planning
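
    The paired comparison of dose-volume points described above uses the Wilcoxon matched-pairs signed-rank test, which is available in SciPy; the sketch below uses randomly generated, purely illustrative dose values rather than the study's data.

    ```python
    # Sketch: comparing paired dose-volume metrics from KBRT and expert plans with the
    # Wilcoxon matched-pairs signed-rank test; the dose values are invented placeholders.
    import numpy as np
    from scipy.stats import wilcoxon

    rng = np.random.default_rng(1)
    d30_expert = rng.normal(45.0, 3.0, size=30)        # hypothetical bladder D30 values (Gy)
    d30_kbrt = d30_expert - rng.normal(1.0, 0.5, 30)   # hypothetical paired KBRT values

    stat, p_value = wilcoxon(d30_kbrt, d30_expert)
    print(f"Wilcoxon statistic={stat:.1f}, p={p_value:.4f}")
    if p_value < 0.05:
        print("Difference between the paired plans is statistically significant.")
    ```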

  15. An approach for evaluating expert performance in emergency situations

    International Nuclear Information System (INIS)

    Ujita, Hiroshi; Kawano, Ryutaro; Yoshimura, Sandanori

    1995-01-01

    To understand expert behavior and define what constitutes good performance in emergency situations in huge and complex plants, human performance evaluation should be made from the viewpoints of not only error, but also various cognitive, psychological, and behavioral characteristics. Quantitative and qualitative measures of human performance are proposed for both individual operators and crews, based on an operator performance analysis experiment, among which the cognitive and behavioral aspects are the most important. Operator performance should be further analyzed experimentally from the cognitive and behavioral viewpoints, using an evaluation based on various gross indexes that consider the tasks operators should perform in response to plant situations

  16. Perceptual-cognitive expertise in sport: some considerations when applying the expert performance approach.

    Science.gov (United States)

    Williams, A Mark; Ericsson, K Anders

    2005-06-01

    The number of researchers studying perceptual-cognitive expertise in sport is increasing. The intention in this paper is to review the currently accepted framework for studying expert performance and to consider implications for undertaking research work in the area of perceptual-cognitive expertise in sport. The expert performance approach presents a descriptive and inductive approach for the systematic study of expert performance. The nature of expert performance is initially captured in the laboratory using representative tasks that identify reliably superior performance. Process-tracing measures are employed to determine the mechanisms that mediate expert performance on the task. Finally, the specific types of activities that lead to the acquisition and development of these mediating mechanisms are identified. General principles and mechanisms may be discovered and then validated by more traditional experimental designs. The relevance of this approach to the study of perceptual-cognitive expertise in sport is discussed and suggestions for future work highlighted.

  17. Knowledge based expert system approach to instrumentation selection (INSEL)

    Directory of Open Access Journals (Sweden)

    S. Barai

    2004-08-01

    The selection of appropriate instrumentation for any structural measurement of a civil engineering structure is a complex task. Recent developments in Artificial Intelligence (AI) can help in the organized use of the experiential knowledge available on instrumentation for laboratory and in-situ measurement. Usually, the instrumentation decision is based on the experience and judgment of experimentalists. The heuristic knowledge available for different types of measurement is domain dependent, and the information is scattered across varied knowledge sources. Knowledge engineering techniques can help in capturing this experiential knowledge. This paper demonstrates a prototype knowledge-based INstrument SELection (INSEL) assistant, in which the experiential knowledge for various structural domains can be captured and utilized for making instrumentation decisions. In particular, this knowledge-based expert system (KBES) encodes the heuristics on measurement and demonstrates the instrument selection process with reference to steel bridges. INSEL runs on a microcomputer and uses an INSIGHT 2+ environment.

  18. Photolithography diagnostic expert systems: a systematic approach to problem solving in a wafer fabrication facility

    Science.gov (United States)

    Weatherwax Scott, Caroline; Tsareff, Christopher R.

    1990-06-01

    One of the main goals of process engineering in the semiconductor industry is to improve wafer fabrication productivity and throughput. Engineers must work continuously toward this goal in addition to performing sustaining and development tasks. To accomplish these objectives, managers must make efficient use of engineering resources. One of the tools being used to improve efficiency is the diagnostic expert system. Expert systems are knowledge based computer programs designed to lead the user through the analysis and solution of a problem. Several photolithography diagnostic expert systems have been implemented at the Hughes Technology Center to provide a systematic approach to process problem solving. This systematic approach was achieved by documenting cause and effect analyses for a wide variety of processing problems. This knowledge was organized in the form of IF-THEN rules, a common structure for knowledge representation in expert system technology. These rules form the knowledge base of the expert system which is stored in the computer. The systems also include the problem solving methodology used by the expert when addressing a problem in his area of expertise. Operators now use the expert systems to solve many process problems without engineering assistance. The systems also facilitate the collection of appropriate data to assist engineering in solving unanticipated problems. Currently, several expert systems have been implemented to cover all aspects of the photolithography process. The systems, which have been in use for over a year, include wafer surface preparation (HMDS), photoresist coat and softbake, align and expose on a wafer stepper, and develop inspection. These systems are part of a plan to implement an expert system diagnostic environment throughout the wafer fabrication facility. In this paper, the systems' construction is described, including knowledge acquisition, rule construction, knowledge refinement, testing, and evaluation. The roles

  19. Improving Polyp Detection Algorithms for CT Colonography: Pareto Front Approach.

    Science.gov (United States)

    Huang, Adam; Li, Jiang; Summers, Ronald M; Petrick, Nicholas; Hara, Amy K

    2010-03-21

    We investigated a Pareto front approach to improving polyp detection algorithms for CT colonography (CTC). A dataset of 56 CTC colon surfaces with 87 proven positive detections of 53 polyps sized 4 to 60 mm was used to evaluate the performance of a one-step and a two-step curvature-based region growing algorithm. The algorithmic performance was statistically evaluated and compared based on the Pareto optimal solutions from 20 experiments by evolutionary algorithms. The false positive rate was lower (p < 0.05), and the Pareto optimization process can effectively help in fine-tuning and redesigning polyp detection algorithms.
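
    As a generic illustration of the Pareto front idea used above, the following sketch extracts the non-dominated operating points from a set of (false-positive rate, sensitivity) pairs; the points are invented and the code is not the authors' evaluation pipeline.

    ```python
    # Sketch: extracting the Pareto-optimal (non-dominated) points from a set of
    # (false-positive rate, sensitivity) operating points; purely illustrative.
    def pareto_front(points):
        """points: list of (fp_rate, sensitivity); lower fp_rate and higher sensitivity are better."""
        front = []
        for fp, sens in points:
            dominated = any(ofp <= fp and osens >= sens and (ofp, osens) != (fp, sens)
                            for ofp, osens in points)
            if not dominated:
                front.append((fp, sens))
        return sorted(front)

    operating_points = [(0.10, 0.80), (0.15, 0.92), (0.20, 0.90), (0.08, 0.70)]
    print(pareto_front(operating_points))  # [(0.08, 0.7), (0.1, 0.8), (0.15, 0.92)]
    ```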

  20. A Study on Satellite Diagnostic Expert Systems Using Case-Based Approach

    Directory of Open Access Journals (Sweden)

    Young-Tack Park

    1997-06-01

    Much research is ongoing to monitor and diagnose the diverse malfunctions of satellite systems as the complexity and number of satellites increase. Currently, most monitoring and diagnosis is carried out by human experts, but there is a need to automate much of their routine work. Hence, it is necessary to study expert systems that can carry out this routine work automatically, thereby allowing human experts to devote their expertise to the more critical and important areas of monitoring and diagnosis. In this paper, we employ artificial intelligence techniques to model human experts' knowledge and to perform inference over the constructed knowledge. In particular, case-based approaches are used to construct a knowledge base that models human expert capabilities using previous typical exemplars. We have designed and implemented a prototype case-based system for diagnosing satellite malfunctions using cases. Our system remembers typical failure cases and diagnoses a current malfunction by indexing the case base. Diverse methods are used to build a more user-friendly interface that allows human experts to build a knowledge base easily.

  1. Expert Opinion on the Management of Lennox–Gastaut Syndrome: Treatment Algorithms and Practical Considerations

    Directory of Open Access Journals (Sweden)

    J. Helen Cross

    2017-09-01

    Lennox–Gastaut syndrome (LGS) is a severe epileptic and developmental encephalopathy that is associated with a high rate of morbidity and mortality. It is characterized by multiple seizure types, abnormal electroencephalographic features, and intellectual disability. Although intellectual disability and associated behavioral problems are characteristic of LGS, they are not necessarily present at its outset and are therefore not part of its diagnostic criteria. LGS is typically treated with a variety of pharmacological and non-pharmacological therapies, often in combination. Management and treatment decisions can be challenging, due to the multiple seizure types and comorbidities associated with the condition. A panel of five epileptologists met to discuss consensus recommendations for LGS management, based on the latest available evidence from literature review and clinical experience, and treatment algorithms were formulated. Current evidence favors the continued use of sodium valproate (VPA) as the first-line treatment for patients with newly diagnosed de novo LGS. If VPA is ineffective alone, evidence supports lamotrigine, or subsequently rufinamide, as adjunctive therapy. If seizure control remains inadequate, the choice of the next adjunctive antiepileptic drug (AED) should be discussed with the patient/parent/caregiver/clinical team, as current evidence is limited. Non-pharmacological therapies, including resective surgery, the ketogenic diet, vagus nerve stimulation, and callosotomy, should be considered for use alongside AED therapy from the outset of treatment. For patients with LGS that has evolved from another type of epilepsy who are already being treated with an AED other than VPA, VPA therapy should be considered if not trialed previously. Thereafter, the approach for a de novo patient should be followed. Where possible, no more than two AEDs should be used concomitantly. Patients with established LGS should undergo review by a neurologist

  2. An expert panel approach to support risk-informed decision making

    International Nuclear Information System (INIS)

    Pulkkinen, U.; Simola, K.

    2000-01-01

    The report describes the expert panel methodology developed for supporting risk-informed decision making. The aim of an expert panel is to achieve a balanced utilisation of information and expertise from several disciplines in decision-making including probabilistic safety assessment as one decision criterion. We also summarise the application of the methodology in the STUK's RI-ISI (Risk-Informed In-Service Inspection) pilot study, where the expert panel approach was used to combine the deterministic information on degradation mechanisms and probabilistic information on pipe break consequences. The expert panel served both as a critical review of the preliminary results and as a decision support for the final definition of risk categories of piping. (orig.)

  3. Land Degradation Monitoring in the Ordos Plateau of China Using an Expert Knowledge and BP-ANN-Based Approach

    Directory of Open Access Journals (Sweden)

    Yaojie Yue

    2016-11-01

    Land degradation monitoring is of vital importance for providing scientific information to promote sustainable land utilization. This paper presents an expert knowledge and BP-ANN-based approach to detect and monitor land degradation, in an effort to overcome the deficiencies of image classification and vegetation index-based approaches. The proposed approach consists of three generic steps: (1) extraction of knowledge on the relationship between land degradation degree and its predisposing factors, NDVI and albedo, from domain experts; (2) establishment of a land degradation detection model based on the BP-ANN algorithm; and (3) land degradation dynamics analysis. A comprehensive analysis was conducted on the development of land degradation in the Ordos Plateau of China in 1990, 2000 and 2010. The results indicate that the proposed approach is reliable for monitoring land degradation, with an overall accuracy of 91.2%. From 1990-2010, a reversal of land degradation is observed in the Ordos Plateau. Regions with relatively high land degradation dynamics were mostly located in the northeast of the Ordos Plateau, and most of the regions have changed from hot spots of land degradation to less changed areas. It is suggested that land utilization optimization plays a key role in effective land degradation control. However, it should be highlighted that the goals of such strategies should target the main negative factors causing land degradation, and the land use type and its quantity must meet the demands of the population and be reconciled with natural conditions. Results from this case study suggest that the expert knowledge and BP-ANN-based approach is effective in mapping land degradation.
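
    Step (2) above trains a BP-ANN on (NDVI, albedo) pairs labelled by experts. A minimal sketch of such a network using scikit-learn's MLPClassifier is shown below; the training samples and class labels are invented placeholders, not the expert knowledge from the study.

    ```python
    # Sketch: a small back-propagation neural network mapping (NDVI, albedo) to a
    # land-degradation class; training pairs are invented placeholders.
    import numpy as np
    from sklearn.neural_network import MLPClassifier

    # Hypothetical expert-labelled samples: columns are (NDVI, albedo).
    X = np.array([[0.65, 0.15], [0.55, 0.18], [0.35, 0.25], [0.20, 0.32], [0.10, 0.40]])
    y = np.array([0, 0, 1, 2, 3])  # 0 = none ... 3 = severe degradation

    model = MLPClassifier(hidden_layer_sizes=(8,), max_iter=5000, random_state=0).fit(X, y)
    print(model.predict([[0.30, 0.28]]))  # degradation class for a new pixel
    ```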

  4. Who Owns Educational Theory? Big Data, Algorithms and the Expert Power of Education Data Science

    Science.gov (United States)

    Williamson, Ben

    2017-01-01

    "Education data science" is an emerging methodological field which possesses the algorithm-driven technologies required to generate insights and knowledge from educational big data. This article consists of an analysis of the Lytics Lab, Stanford University's laboratory for research and development in learning analytics, and the Center…

  5. Optimal Solutions of Multiproduct Batch Chemical Process Using Multiobjective Genetic Algorithm with Expert Decision System

    Science.gov (United States)

    Mokeddem, Diab; Khellaf, Abdelhafid

    2009-01-01

    Optimal design problems are widely known for their multiple performance measures, which often compete with each other. In this paper, an optimal multiproduct batch chemical plant design is presented. The design is first formulated as a multiobjective optimization problem, to be solved using the well-suited non-dominated sorting genetic algorithm (NSGA-II). NSGA-II is able to achieve fine-tuning of variables in determining a set of non-dominated solutions distributed along the Pareto front in a single run of the algorithm. Its ability to identify a set of optimal solutions provides the decision-maker (DM) with a complete picture of the optimal solution space and thus better and more appropriate choices. An outranking with PROMETHEE II then helps the decision-maker finalize the selection of the best compromise. The effectiveness of the NSGA-II method on multiobjective optimization problems is illustrated through two carefully referenced examples. PMID:19543537

  6. Algorithmic crystal chemistry: A cellular automata approach

    International Nuclear Information System (INIS)

    Krivovichev, S. V.

    2012-01-01

    Atomic-molecular mechanisms of crystal growth can be modeled based on crystallochemical information using cellular automata (a particular case of finite deterministic automata). In particular, the formation of heteropolyhedral layered complexes in uranyl selenates can be modeled applying a one-dimensional three-colored cellular automaton. The use of the theory of calculations (in particular, the theory of automata) in crystallography allows one to interpret crystal growth as a computational process (the realization of an algorithm or program with a finite number of steps).
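
    A generic flavour of the model class described can be sketched as follows; the three-colour update rule used here (neighbourhood sum modulo 3) is an arbitrary placeholder, since the abstract does not specify the crystal-chemical rule for uranyl selenates.

    def step(row):
        # Each cell takes a new colour from its left neighbour, itself and its right
        # neighbour (periodic boundary), reduced modulo the three colours.
        n = len(row)
        return [(row[(i - 1) % n] + row[i] + row[(i + 1) % n]) % 3 for i in range(n)]

    row = [0] * 10 + [1] + [0] * 10   # a single seed cell of colour 1
    for _ in range(8):
        print("".join(str(c) for c in row))
        row = step(row)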

  7. A new algorithmic approach for fingers detection and identification

    Science.gov (United States)

    Mubashar Khan, Arslan; Umar, Waqas; Choudhary, Taimoor; Hussain, Fawad; Haroon Yousaf, Muhammad

    2013-03-01

    Gesture recognition is concerned with the goal of interpreting human gestures through mathematical algorithms. Gestures can originate from any bodily motion or state but commonly originate from the face or hand. Hand gesture detection in a real-time environment, where time and memory are important constraints, is a critical operation. Hand gesture recognition largely depends on the accurate detection of the fingers. This paper presents a new algorithmic approach to detect and identify the fingers of the human hand. The proposed algorithm does not depend upon prior knowledge of the scene. It detects the active fingers and the metacarpophalangeal (MCP) joints of the inactive fingers from an already detected hand. A dynamic thresholding technique and a connected component labeling scheme are employed for background elimination and hand detection, respectively. The proposed algorithm offers a new approach to finger identification in a real-time environment while keeping memory and time costs as low as possible.
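
    The two pre-processing stages named in the abstract, dynamic thresholding and connected component labelling, might look roughly like the OpenCV sketch below; the input file name is hypothetical and the paper's actual finger and MCP identification logic is not reproduced.

    import cv2
    import numpy as np

    frame = cv2.imread("hand.png", cv2.IMREAD_GRAYSCALE)  # hypothetical input frame

    # Dynamic threshold chosen per frame by Otsu's method (background elimination).
    _, binary = cv2.threshold(frame, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

    # Connected-component labelling; keep the largest foreground component as the hand.
    num, labels, stats, _ = cv2.connectedComponentsWithStats(binary)
    areas = stats[1:, cv2.CC_STAT_AREA]                    # skip label 0 (background)
    hand_label = 1 + int(np.argmax(areas))
    hand_mask = np.where(labels == hand_label, 255, 0).astype(np.uint8)
    cv2.imwrite("hand_mask.png", hand_mask)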

  8. A Hybrid Genetic Algorithm Approach for Optimal Power Flow

    Directory of Open Access Journals (Sweden)

    Sydulu Maheswarapu

    2011-08-01

    Full Text Available This paper puts forward a reformed hybrid genetic algorithm (GA)-based approach to the optimal power flow. In the approach followed here, continuous variables are designed using a real-coded GA and discrete variables are processed as binary strings. The outcomes are compared with many other methods such as the simple genetic algorithm (GA), adaptive genetic algorithm (AGA), differential evolution (DE), particle swarm optimization (PSO) and music based harmony search (MBHS) on an IEEE 30-bus test bed, with a total load of 283.4 MW. The proposed algorithm is found to offer the lowest fuel cost. It is also found to be computationally faster, robust and promising from its convergence characteristics.

  9. An Airborne Conflict Resolution Approach Using a Genetic Algorithm

    Science.gov (United States)

    Mondoloni, Stephane; Conway, Sheila

    2001-01-01

    An airborne conflict resolution approach is presented that is capable of providing flight plans forecast to be conflict-free with both area and traffic hazards. This approach is capable of meeting constraints on the flight plan such as required times of arrival (RTA) at a fix. The conflict resolution algorithm is based upon a genetic algorithm, and can thus seek conflict-free flight plans meeting broader flight planning objectives such as minimum time, fuel or total cost. The method has been applied to conflicts occurring 6 to 25 minutes in the future in climb, cruise and descent phases of flight. The conflict resolution approach separates the detection, trajectory generation and flight rules function from the resolution algorithm. The method is capable of supporting pilot-constructed resolutions, cooperative and non-cooperative maneuvers, and also providing conflict resolution on trajectories forecast by an onboard FMC.

  10. Artificial Intelligence and Expert Systems in Medicine and Their Ability to Prediction as Therapy Planning Systems by CADIAG-2 Algorithm

    OpenAIRE

    Mohammad Madadpour Inallou; Zeinab Ajurlou; Bahman Mehri

    2012-01-01

    Expert systems in medicine involve the collection, storage, retrieval, communication and processing of medical data for the purposes of interpretation, inference, decision support, research and other purposes in medicine. An expert system is an interactive computer-based decision tool that uses both facts and heuristics to solve difficult decision problems based on knowledge acquired from an expert. Expert systems provide expert advice and guidance in a wide variety of activities, from computer diag...

  11. DNA Microarray Data Analysis: A Novel Biclustering Algorithm Approach

    Directory of Open Access Journals (Sweden)

    Tewfik Ahmed H

    2006-01-01

    Full Text Available Biclustering algorithms refer to a distinct class of clustering algorithms that perform simultaneous row-column clustering. Biclustering problems arise in DNA microarray data analysis, collaborative filtering, market research, information retrieval, text mining, electoral trends, exchange analysis, and so forth. When dealing with DNA microarray experimental data for example, the goal of biclustering algorithms is to find submatrices, that is, subgroups of genes and subgroups of conditions, where the genes exhibit highly correlated activities for every condition. In this study, we develop novel biclustering algorithms using basic linear algebra and arithmetic tools. The proposed biclustering algorithms can be used to search for all biclusters with constant values, biclusters with constant values on rows, biclusters with constant values on columns, and biclusters with coherent values from a set of data in a timely manner and without solving any optimization problem. We also show how one of the proposed biclustering algorithms can be adapted to identify biclusters with coherent evolution. The algorithms developed in this study discover all valid biclusters of each type, while almost all previous biclustering approaches will miss some.
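
    For the simplest bicluster types mentioned (constant values, constant rows, constant columns), a direct check on a candidate submatrix can be written without any optimization, as in the sketch below; the expression matrix is a toy placeholder and the paper's full search procedure is not reproduced.

    import numpy as np

    def bicluster_type(data, rows, cols, tol=1e-9):
        # Classify the submatrix spanned by the given row and column subsets.
        sub = data[np.ix_(rows, cols)]
        if np.ptp(sub) <= tol:
            return "constant values"
        if np.all(np.ptp(sub, axis=1) <= tol):
            return "constant rows"
        if np.all(np.ptp(sub, axis=0) <= tol):
            return "constant columns"
        return "not a simple bicluster"

    expr = np.array([[2.0, 2.0, 5.0],
                     [2.0, 2.0, 7.0],
                     [1.0, 9.0, 3.0]])                     # hypothetical expression matrix
    print(bicluster_type(expr, rows=[0, 1], cols=[0, 1]))  # -> constant values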

  12. An approach to build a knowledge base for reactor accident diagnostic expert system

    International Nuclear Information System (INIS)

    Yoshida, K.; Fujii, M.; Fujiki, K.; Yokobayashi, M.; Kohsaka, A.; Aoyagi, T.; Hirota, Y.

    1987-01-01

    In the development of a rule-based expert system, one of the key issues is how to acquire knowledge and build the knowledge base (KB). In building the KB of DISKET, an expert system for nuclear reactor accident diagnosis developed at JAERI, several problems have been experienced. Writing rules is a time-consuming task, and it is difficult to maintain the objectivity and consistency of rules as the number of rules increases. Further, certainty factors (CFs) must often be determined according to engineering judgment, i.e., empirically or intuitively. A systematic approach was attempted to handle these difficulties and to build an objective KB efficiently. The approach described in this paper is based on the concept that a prototype KB, colloquially speaking an initial guess, should first be generated in a systematic way and then be modified and/or improved by human experts for practical use. Statistical methods, principally Factor Analysis, were used as the systematic way to build a prototype KB for DISKET using PWR plant simulator data. The source information comprises data obtained from the simulation of transients, such as the status of components and annunciators, together with major process parameters such as pressures and temperatures.

  13. A Heuristics Approach for Classroom Scheduling Using Genetic Algorithm Technique

    Science.gov (United States)

    Ahmad, Izah R.; Sufahani, Suliadi; Ali, Maselan; Razali, Siti N. A. M.

    2018-04-01

    Reshuffling and arranging classrooms based on audience capacity, available facilities, lecturing time and other factors adds to the complexity of classroom scheduling. To enhance productivity in classroom planning, this paper proposes a heuristic approach to timetabling optimization. A new algorithm was produced to address the timetabling problem in a university. The proposed heuristic approach leads to better utilization of the available classroom space for a given timetable of courses at the university. A genetic algorithm implemented in the Java programming language was used in this study, aiming to reduce conflicts and optimize fitness. The algorithm considers the number of students in each class, class time, class size, time availability of each classroom, and the lecturer in charge of each class.

  14. Multicontroller: an object programming approach to introduce advanced control algorithms for the GCS large scale project

    CERN Document Server

    Cabaret, S; Coppier, H; Rachid, A; Barillère, R; CERN. Geneva. IT Department

    2007-01-01

    The GCS (Gas Control System) project team at CERN uses a Model Driven Approach with a framework - UNICOS (UNified Industrial COntrol System) - based on PLC (Programmable Logic Controller) and SCADA (Supervisory Control And Data Acquisition) technologies. The first UNICOS versions were able to provide a PID (Proportional Integral Derivative) controller, whereas the gas systems required more advanced control strategies. The MultiController is a new UNICOS object which provides the following advanced control algorithms: Smith Predictor, PFC (Predictive Function Control), RST* and GPC (Global Predictive Control). Its design is based on a monolithic entity with a global structure definition which is able to capture the desired set of parameters of any specific control algorithm supported by the object. The SCADA system - PVSS - supervises the MultiController operation. The PVSS interface provides users with a supervision faceplate; in particular it links any MultiController with recipes: the GCS experts are ab...

  15. Branch-pipe-routing approach for ships using improved genetic algorithm

    Science.gov (United States)

    Sui, Haiteng; Niu, Wentie

    2016-09-01

    Branch-pipe routing plays fundamental and critical roles in ship-pipe design. The branch-pipe-routing problem is a complex combinatorial optimization problem and is thus difficult to solve when depending only on human experts. A modified genetic-algorithm-based approach is proposed in this paper to solve this problem. The simplified layout space is first divided into three-dimensional (3D) grids to build its mathematical model. Branch pipes in the layout space are regarded as a combination of several two-point pipes, and the pipe route between two connection points is generated using an improved maze algorithm. The coding of branch pipes is then defined, and the genetic operators are devised, especially the complete crossover strategy that greatly accelerates the convergence speed. Finally, simulation tests demonstrate the performance of the proposed method.
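
    The two-point route generation step can be illustrated with a plain breadth-first maze search on a grid, as sketched below; a 2D grid and hypothetical obstacles are used for brevity, whereas the paper divides the layout space into 3D grids and uses an improved maze algorithm.

    from collections import deque

    def route(grid, start, goal):
        # Breadth-first search for a cell path from start to goal, avoiding obstacles (1s).
        rows, cols = len(grid), len(grid[0])
        prev = {start: None}
        queue = deque([start])
        while queue:
            cell = queue.popleft()
            if cell == goal:
                path = []
                while cell is not None:
                    path.append(cell)
                    cell = prev[cell]
                return path[::-1]
            r, c = cell
            for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
                nr, nc = nxt
                if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0 and nxt not in prev:
                    prev[nxt] = cell
                    queue.append(nxt)
        return None  # no feasible route between the two connection points

    layout = [[0, 0, 0, 0],
              [1, 1, 0, 1],
              [0, 0, 0, 0]]           # hypothetical obstacle map
    print(route(layout, (0, 0), (2, 3)))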

  16. A hardware-algorithm co-design approach to optimize seizure detection algorithms for implantable applications.

    Science.gov (United States)

    Raghunathan, Shriram; Gupta, Sumeet K; Markandeya, Himanshu S; Roy, Kaushik; Irazoqui, Pedro P

    2010-10-30

    Implantable neural prostheses that deliver focal electrical stimulation upon demand are rapidly emerging as an alternate therapy for roughly a third of the epileptic patient population that is medically refractory. Seizure detection algorithms enable feedback mechanisms to provide focally and temporally specific intervention. Real-time feasibility and computational complexity often limit most reported detection algorithms to implementations using computers for bedside monitoring or external devices communicating with the implanted electrodes. A comparison of algorithms based on detection efficacy does not present a complete picture of the feasibility of the algorithm with limited computational power, as is the case with most battery-powered applications. We present a two-dimensional design optimization approach that takes into account both detection efficacy and hardware cost in evaluating algorithms for their feasibility in an implantable application. Detection features are first compared for their ability to detect electrographic seizures from micro-electrode data recorded from kainate-treated rats. Circuit models are then used to estimate the dynamic and leakage power consumption of the compared features. A score is assigned based on detection efficacy and the hardware cost for each of the features, then plotted on a two-dimensional design space. An optimal combination of compared features is used to construct an algorithm that provides maximal detection efficacy per unit hardware cost. The methods presented in this paper would facilitate the development of a common platform to benchmark seizure detection algorithms for comparison and feasibility analysis in the next generation of implantable neuroprosthetic devices to treat epilepsy. Copyright © 2010 Elsevier B.V. All rights reserved.
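
    The scoring idea, detection efficacy per unit hardware cost, can be illustrated with a toy calculation such as the one below; the feature names and numbers are hypothetical placeholders rather than the paper's measured data.

    # name: (detection efficacy score, estimated power cost in microwatts) -- hypothetical
    features = {
        "line_length":   (0.82, 4.0),
        "signal_energy": (0.78, 2.5),
        "spectral_band": (0.90, 12.0),
    }

    # Rank candidate features by detection efficacy per unit hardware (power) cost.
    scores = {name: efficacy / power for name, (efficacy, power) in features.items()}
    for name, score in sorted(scores.items(), key=lambda kv: kv[1], reverse=True):
        print(f"{name}: {score:.3f} efficacy per microwatt")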

  17. Stall Recovery Guidance Algorithms Based on Constrained Control Approaches

    Science.gov (United States)

    Stepanyan, Vahram; Krishnakumar, Kalmanje; Kaneshige, John; Acosta, Diana

    2016-01-01

    Aircraft loss-of-control, in particular approach to stall or fully developed stall, is a major factor contributing to aircraft safety risks, which emphasizes the need to develop algorithms that are capable of assisting the pilots to identify the problem and providing guidance to recover the aircraft. In this paper we present several stall recovery guidance algorithms, which are implemented in the background without interfering with the flight control system or altering the pilot's actions. They use input- and state-constrained control methods to generate guidance signals, which are provided to the pilot in the form of visual cues. It is the pilot's decision to follow these signals. The algorithms are validated in a pilot-in-the-loop medium fidelity simulation experiment.

  18. Differences between the CME fronts tracked by an expert, an automated algorithm, and the Solar Stormwatch project

    Science.gov (United States)

    Barnard, L.; Scott, C. J.; Owens, M.; Lockwood, M.; Crothers, S. R.; Davies, J. A.; Harrison, R. A.

    2015-10-01

    Observations from the Heliospheric Imager (HI) instruments aboard the twin STEREO spacecraft have enabled the compilation of several catalogues of coronal mass ejections (CMEs), each characterizing the propagation of CMEs through the inner heliosphere. Three such catalogues are the Rutherford Appleton Laboratory (RAL)-HI event list, the Solar Stormwatch CME catalogue, and, presented here, the J-tracker catalogue. Each catalogue uses a different method to characterize the location of CME fronts in the HI images: manual identification by an expert, the statistical reduction of the manual identifications of many citizen scientists, and an automated algorithm. We provide a quantitative comparison of the differences between these catalogues and techniques, using 51 CMEs common to each catalogue. The time-elongation profiles of these CME fronts are compared, as are the estimates of the CME kinematics derived from application of three widely used single-spacecraft-fitting techniques. The J-tracker and RAL-HI profiles are most similar, while the Solar Stormwatch profiles display a small systematic offset. Evidence is presented that these differences arise because the RAL-HI and J-tracker profiles follow the sunward edge of CME density enhancements, while Solar Stormwatch profiles track closer to the antisunward (leading) edge. We demonstrate that the method used to produce the time-elongation profile typically introduces more variability into the kinematic estimates than differences between the various single-spacecraft-fitting techniques. This has implications for the repeatability and robustness of these types of analyses, arguably especially so in the context of space weather forecasting, where it could make the results strongly dependent on the methods used by the forecaster.

  19. A Web Based Sweet Orange Crop Expert System using Rule Based System and Artificial Bee Colony Optimization Algorithm

    OpenAIRE

    Prof. M. S. Prasad Babu; Mrs. J. Anitha; K. Hari Krishna

    2010-01-01

    Citrus fruits have a prominent place among popular and exclusively grown tropical and sub-tropical fruits. Their nature and multifold nutritional and medicinal values have made them very important. The Sweet Orange Crop expert advisory system is a collaborative venture between eminent agricultural scientists and experts in the area of sweet orange plantation and an excellent team of computer engineers, programmers and designers. This expert system contains two main parts, one being the Sweet Orange Infor...

  20. A diagnostic expert system for NPP based on hybrid knowledge approach

    International Nuclear Information System (INIS)

    Yang, Joon On; Chang, Soon Heung

    1989-01-01

    This paper describes a diagnostic expert system, HYPOSS (Hybrid Knowledge Based Plant Operation Supporting System), which has been developed to support operators' decision making during transients of a nuclear power plant. HYPOSS adopts the hybrid knowledge approach, which combines shallow and deep knowledge to couple the merits of both approaches. In HYPOSS, four types of knowledge are used according to the steps of the diagnosis procedure: structural, functional, behavioral and heuristic knowledge. The structural and functional knowledge is represented by three fundamental primitives and five types of functions, respectively. The behavioral knowledge is represented using constraints. The inference procedure is based on the human problem-solving behavior modeled in HYPOSS. For the validation of HYPOSS, several tests have been performed based on data produced by a plant simulator. The results of the validation studies showed good applicability of HYPOSS to the anomaly diagnosis of nuclear power plants.

  1. Multiobjective genetic algorithm approaches to project scheduling under risk

    OpenAIRE

    Kılıç, Murat; Kilic, Murat

    2003-01-01

    In this thesis, project scheduling under risk is chosen as the topic of research. Project scheduling under risk is defined as a biobjective decision problem and is formulated as a 0-1 integer mathematical programming model. In this biobjective formulation, one of the objectives is taken as expected makespan minimization and the other as expected cost minimization. As the solution approach to this biobjective formulation, a genetic algorithm (GA) is chosen. After carefully invest...

  2. Naïve Bayes Approach for Expert System Design of Children Skin Identification Based on Android

    Science.gov (United States)

    Hartatik; Purnomo, A.; Hartono, R.; Munawaroh, H.

    2018-03-01

    Technological development brings benefits to everyone when it is used properly and correctly. Technology has helped humans in many ways, for example by easing the workload of an expert in providing information or answers to a problem. A problem that often occurs is skin disease affecting children, because children's skin is still vulnerable to the environment. The application was developed using the naïve Bayes algorithm. Through this application, users can consult with the system as with an expert to identify the symptoms occurring in the child and find the correct treatment to solve the problem.
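
    A naïve Bayes consultation of this kind might be sketched as follows; the diseases, symptoms and probabilities are hypothetical placeholders, since the application's actual knowledge base is not given in the abstract.

    # Hypothetical prior probabilities of each skin condition.
    priors = {"dermatitis": 0.5, "scabies": 0.3, "measles": 0.2}
    # Hypothetical P(symptom present | disease).
    likelihood = {
        "dermatitis": {"itching": 0.90, "rash": 0.80, "fever": 0.10},
        "scabies":    {"itching": 0.95, "rash": 0.60, "fever": 0.05},
        "measles":    {"itching": 0.30, "rash": 0.90, "fever": 0.90},
    }

    def diagnose(observed):
        # observed maps each symptom name to True (present) or False (absent).
        posterior = {}
        for disease, prior in priors.items():
            p = prior
            for symptom, present in observed.items():
                ps = likelihood[disease][symptom]
                p *= ps if present else (1.0 - ps)
            posterior[disease] = p
        total = sum(posterior.values())
        return {d: p / total for d, p in posterior.items()}

    print(diagnose({"itching": True, "rash": True, "fever": False}))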

  3. METHODOLOGICAL APPROACHES TO EXPERT EVALUATION OF PRECLINICAL AND CLINICAL TRIALS OF HUMAN IMMUNOGLOBULIN PRODUCTS

    Directory of Open Access Journals (Sweden)

    V. B. Ivanov

    2017-01-01

    Full Text Available The article considers the experience of Russian and leading foreign regulatory agencies in the organisation and conduct of preclinical and clinical trials of human immunoglobulin products. The authors suggest a classification of human immunoglobulins and provide updated information on the authorization of these products in Russia. The article summarizes methodological approaches, basic scientific principles and criteria relating to the expert evaluation of preclinical and clinical trials of blood products. The authors further define the expert body’s requirements for data on preclinical and clinical trials of human normal immunoglobulins and human specific immunoglobulins for the prevention and/or treatment of infectious and non-infectious diseases, which are submitted as part of applications for marketing authorization or marketing authorization variation. The article suggests programs of preclinical and clinical trials for human normal immunoglobulins and human specific immunoglobulins for the prevention and/or treatment of infectious and non-infectious diseases that are aligned with Russian legislation and the Eurasian Economic Union’s regulations on medicines circulation, and have been elaborated with respect to the guidelines of the European Medicines Agency.

  4. A standardized approach to verification and validation to assist in expert system development

    International Nuclear Information System (INIS)

    Hines, J.W.; Hajek, B.K.; Miller, D.W.; Haas, M.A.

    1992-01-01

    For the past six years, the Nuclear Engineering Program's Artificial Intelligence (AI) Group at The Ohio State University has been developing an integration of expert systems to act as an aid to nuclear power plant operators. This Operator Advisor consists of four modules that monitor plant parameters, detect deviations from normality, diagnose the root cause of the abnormality, manage procedures to effectively respond to the abnormality, and mitigate its consequences. To aid in the development of this new system, a standardized Verification and Validation (V and V) approach is being implemented. The primary functions are to guide the development of the expert system and to ensure that the end product fulfills the initial objectives. The development process has been divided into eight life-cycle V and V phases from concept to operation and maintenance. Each phase has specific V and V tasks to be performed to ensure a quality end product. Four documents are being used to guide development. The Software Verification and Validation Plan (SVVP) outlines the V and V tasks necessary to verify the product at the end of each software development phase, and to validate that the end product complies with the established software and system requirements and meets the needs of the user. The Software Requirements Specification (SRS) documents the essential requirements of the system. The Software Design Description (SDD) represents these requirements with a specific design. And lastly, the Software Test Document establishes a testing methodology to be used throughout the development life-cycle

  5. A hierarchical approach to reducing communication in parallel graph algorithms

    KAUST Repository

    Harshvardhan,

    2015-01-01

    Large-scale graph computing has become critical due to the ever-increasing size of data. However, distributed graph computations are limited in their scalability and performance due to the heavy communication inherent in such computations. This is exacerbated in scale-free networks, such as social and web graphs, which contain hub vertices that have large degrees and therefore send a large number of messages over the network. Furthermore, many graph algorithms and computations send the same data to each of the neighbors of a vertex. Our proposed approach recognizes this, and reduces communication performed by the algorithm without change to user code, through a hierarchical machine model imposed upon the input graph. The hierarchical model takes advantage of locale information of the neighboring vertices to reduce communication, both in message volume and total number of bytes sent. It is also able to better exploit the machine hierarchy to further reduce the communication costs, by aggregating traffic between different levels of the machine hierarchy. Results of an implementation in the STAPL GL show improved scalability and performance over the traditional level-synchronous approach, with 2.5x-8x improvement for a variety of graph algorithms at 12,000+ cores.

  6. Comparison of HMM experts with MLP experts in the Full Combination Multi-Band Approach to Robust ASR

    OpenAIRE

    Hagen, Astrid; Morris, Andrew

    2000-01-01

    In this paper we apply the Full Combination (FC) multi-band approach, which was originally introduced in the framework of posterior-based HMM/ANN (Hidden Markov Model/Artificial Neural Network) hybrid systems, to systems in which the ANN (or Multilayer Perceptron (MLP)) is itself replaced by a Multi Gaussian HMM (MGM). Both systems represent the most widely used statistical models for robust ASR (automatic speech recognition). It is shown how the FC formula for the likelihood-based MGMs...

  7. A Bayesian Approach to Model Selection in Hierarchical Mixtures-of-Experts Architectures.

    Science.gov (United States)

    Tanner, Martin A.; Peng, Fengchun; Jacobs, Robert A.

    1997-03-01

    There does not exist a statistical model that shows good performance on all tasks. Consequently, the model selection problem is unavoidable; investigators must decide which model is best at summarizing the data for each task of interest. This article presents an approach to the model selection problem in hierarchical mixtures-of-experts architectures. These architectures combine aspects of generalized linear models with those of finite mixture models in order to perform tasks via a recursive "divide-and-conquer" strategy. Markov chain Monte Carlo methodology is used to estimate the distribution of the architectures' parameters. One part of our approach to model selection attempts to estimate the worth of each component of an architecture so that relatively unused components can be pruned from the architecture's structure. A second part of this approach uses a Bayesian hypothesis testing procedure in order to differentiate inputs that carry useful information from nuisance inputs. Simulation results suggest that the approach presented here adheres to the dictum of Occam's razor; simple architectures that are adequate for summarizing the data are favored over more complex structures. Copyright 1997 Elsevier Science Ltd. All Rights Reserved.
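
    For orientation, a single-level mixture-of-experts prediction with softmax gating can be sketched as below; the parameters are hypothetical and the article's hierarchical architecture, MCMC estimation and pruning procedure are not reproduced.

    import numpy as np

    def moe_predict(x, expert_weights, gate_weights):
        # Each row of expert_weights is one linear expert; gate_weights feeds a softmax gate.
        expert_outputs = expert_weights @ x
        gate_scores = gate_weights @ x
        gate = np.exp(gate_scores - gate_scores.max())
        gate /= gate.sum()                      # softmax mixing proportions
        return float(gate @ expert_outputs)

    x = np.array([1.0, 0.5])                    # hypothetical input (bias term, feature)
    experts = np.array([[2.0, 1.0],
                        [-1.0, 3.0]])
    gates = np.array([[0.5, -1.0],
                      [-0.5, 1.0]])
    print(moe_predict(x, experts, gates))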

  8. Designing synthetic networks in silico: a generalised evolutionary algorithm approach.

    Science.gov (United States)

    Smith, Robert W; van Sluijs, Bob; Fleck, Christian

    2017-12-02

    Evolution has led to the development of biological networks that are shaped by environmental signals. Elucidating, understanding and then reconstructing important network motifs is one of the principal aims of Systems & Synthetic Biology. Consequently, previous research has focused on finding optimal network structures and reaction rates that respond to pulses or produce stable oscillations. In this work we present a generalised in silico evolutionary algorithm that simultaneously finds network structures and reaction rates (genotypes) that can satisfy multiple defined objectives (phenotypes). The key step in our approach is to translate a schema/binary-based description of biological networks into systems of ordinary differential equations (ODEs). The ODEs can then be solved numerically to provide dynamic information about an evolved network's functionality. Initially we benchmark algorithm performance by finding optimal networks that can recapitulate concentration time-series data and perform parameter optimisation on oscillatory dynamics of the Repressilator. We go on to show the utility of our algorithm by finding new designs for robust synthetic oscillators, and by performing multi-objective optimisation to find a set of oscillators and feed-forward loops that are optimal at balancing different system properties. In sum, our results not only confirm and build on previous observations but also provide new designs of synthetic oscillators for experimental construction. In this work we have presented and tested an evolutionary algorithm that can design a biological network to produce a desired output. Given that previous designs of synthetic networks have been limited to subregions of network and parameter space, the use of our evolutionary optimisation algorithm will enable Synthetic Biologists to construct new systems with the potential to display a wider range of complex responses.

  9. A new distributed systems scheduling algorithm: a swarm intelligence approach

    Science.gov (United States)

    Haghi Kashani, Mostafa; Sarvizadeh, Raheleh; Jameii, Mahdi

    2011-12-01

    The scheduling problem in distributed systems is known to be an NP-complete problem, and methods based on heuristic or metaheuristic search have been proposed to obtain optimal and suboptimal solutions. Task scheduling is a key factor for distributed systems to gain better performance. In this paper, an efficient method based on a memetic algorithm is developed to solve the problem of distributed systems scheduling. With regard to balancing load efficiently, the Artificial Bee Colony (ABC) has been applied as the local search in the proposed memetic algorithm. The proposed method has been compared to an existing memetic-based approach in which the Learning Automata method is used as the local search. The results demonstrate that the proposed method outperforms the above-mentioned method in terms of communication cost.

  10. A Hybrid Harmony Search Algorithm Approach for Optimal Power Flow

    Directory of Open Access Journals (Sweden)

    Mimoun YOUNES

    2012-08-01

    Full Text Available Optimal Power Flow (OPF) is one of the main functions of power system operation. It determines the optimal settings of generating units, bus voltages, transformer taps and shunt elements in the power system with the objective of minimizing total production costs or losses while the system is operating within its security limits. The aim of this paper is to propose a novel methodology (BCGAs-HSA) that solves the OPF including both active and reactive power dispatch. It is based on combining the binary-coded genetic algorithm (BCGAs) and the harmony search algorithm (HSA) to determine the optimal global solution. This method was tested on the modified IEEE 30-bus test system. The results obtained by this method are compared with those obtained with BCGAs or HSA separately. The results show that the BCGAs-HSA approach can converge to the optimum solution with good accuracy compared to results reported recently in the literature.
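
    The harmony search component on its own can be sketched as a simple function minimizer, as below; this is a generic HSA illustration on a sphere function, not the paper's OPF objective, constraints or BCGAs hybridisation.

    import random

    def harmony_search(f, bounds, hms=10, hmcr=0.9, par=0.3, iters=2000):
        # hms: harmony memory size; hmcr: memory consideration rate; par: pitch adjustment rate.
        memory = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(hms)]
        for _ in range(iters):
            new = []
            for d, (lo, hi) in enumerate(bounds):
                if random.random() < hmcr:              # memory consideration
                    value = random.choice(memory)[d]
                    if random.random() < par:           # pitch adjustment
                        value += random.uniform(-0.05, 0.05) * (hi - lo)
                else:                                   # random selection
                    value = random.uniform(lo, hi)
                new.append(min(max(value, lo), hi))
            worst = max(range(hms), key=lambda i: f(memory[i]))
            if f(new) < f(memory[worst]):
                memory[worst] = new                     # replace the worst harmony
        return min(memory, key=f)

    sphere = lambda x: sum(v * v for v in x)            # simple test objective
    print(harmony_search(sphere, bounds=[(-5, 5)] * 3))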

  11. Selection of engineering materials for heat exchangers (An expert system approach)

    International Nuclear Information System (INIS)

    Ahmed, K.; Abou-Ali, M.; Bassuni, M.

    1997-01-01

    Materials selection, as part of the design process of heat exchangers, is one of the most important steps in the whole industry. Clear recognition of the service requirements of the different types of heat exchangers is very important in order to select adequate and economic materials to meet such requirements. Of course, the manufacturer should ensure that failure does not occur in service, especially since the heat exchanger is one of the main and most critical components of a pressurized-water nuclear reactor (PWR). It is necessary to know the possible mechanisms of failure. The achievement of materials selection using the expert system approach within the process sequence of heat exchanger manufacturing is also introduced. The different parameters and requirements controlling each process and the linkage between these parameters and the final product are shown. 2 figs., 3 tabs.

  12. Eigenvalues calculation algorithms for λ-modes determination. Parallelization approach

    Energy Technology Data Exchange (ETDEWEB)

    Vidal, V. [Universidad Politecnica de Valencia (Spain). Departamento de Sistemas Informaticos y Computacion]; Verdu, G.; Munoz-Cobo, J.L. [Universidad Politecnica de Valencia (Spain). Departamento de Ingenieria Quimica y Nuclear]; Ginestart, D. [Universidad Politecnica de Valencia (Spain). Departamento de Matematica Aplicada]

    1997-03-01

    In this paper, we review two methods to obtain the λ-modes of a nuclear reactor, the Subspace Iteration method and Arnoldi's method, which are popular methods to solve the partial eigenvalue problem for a given matrix. In the application developed for the neutron diffusion equation we include improved acceleration techniques for both methods. Also, we propose two parallelization approaches for these methods, a coarse-grain parallelization and a fine-grain one. We have tested the developed algorithms with two realistic problems, focusing on the efficiency of the methods according to the CPU times. (author).
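
    In practice, Arnoldi-type partial eigenvalue solvers are available off the shelf; the sketch below uses SciPy's ARPACK wrapper on a stand-in sparse matrix, assuming SciPy is available, and does not construct the actual λ-mode operator of the neutron diffusion equation.

    import numpy as np
    from scipy.sparse import diags
    from scipy.sparse.linalg import eigs

    n = 200
    # Hypothetical sparse matrix standing in for the discretised operator.
    A = diags([-1.0, 2.0, -1.0], offsets=[-1, 0, 1], shape=(n, n), format="csr")

    # A few largest-magnitude eigenpairs via the implicitly restarted Arnoldi method.
    values, vectors = eigs(A, k=4, which="LM")
    print(np.real(values))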

  13. A standardized approach to verification and validation to assist in expert system development

    International Nuclear Information System (INIS)

    Hines, J.W.; Hajek, B.K.; Miller, D.W.; Haas, M.A.

    1992-01-01

    For the past six years, the Nuclear Engineering Program's Artificial Intelligence (AI) Group at The Ohio State University has been developing an integration of expert systems to act as an aid to nuclear power plant operators. This Operator Advisor consists of four modules that monitor plant parameters, detect deviations from normality, diagnose the root cause of the abnormality, manage procedures to effectively respond to the abnormality, and mitigate its consequences. Recently Ohio State University received a grant from the Department of Energy's Special Research Grant Program to utilize the methodologies developed for the Operator Advisor for Heavy Water Reactor (HWR) malfunction root cause diagnosis. To aid in the development of this new system, a standardized Verification and Validation (V&V) approach is being implemented. Its primary functions are to guide the development of the expert system and to ensure that the end product fulfills the initial objectives. The development process has been divided into eight life-cycle V&V phases from concept to operation and maintenance. Each phase has specific V&V tasks to be performed to ensure a quality end product. Four documents are being used to guide development. The Software Verification and Validation Plan (SVVP) outlines the V&V tasks necessary to verify the product at the end of each software development phase, and to validate that the end product complies with the established software and system requirements and meets the needs of the user. The Software Requirements Specification (SRS) documents the essential requirements of the system. The Software Design Description (SDD) represents these requirements with a specific design. And lastly, the Software Test Document establishes a testing methodology to be used throughout the development life-cycle. 10 refs., 1 fig.

  14. Nuclear power plant maintenance scheduling dilemma: a genetic algorithm approach

    International Nuclear Information System (INIS)

    Mahdavi, M.H.; Modarres, M.

    2004-01-01

    There are huge numbers of components scheduled for maintenance when a nuclear power plant is shut down. Among these components, a number are safety related, and their operability and reliability once the plant is back up are the main concerns. Not performing proper maintenance on this class of components/systems would impose substantial risk on operating the NPP. In this paper a new approach based on genetic algorithms is presented to optimize the NPP maintenance schedule during shutdown. Following this approach, the cost incurred by maintenance activities for each schedule is balanced against the risk imposed by the maintenance scheduling plan on the plant's operational status when it is up. The risk model implemented in the GA scheduler as its evaluation function is developed on the basis of the probabilistic risk assessment methodology. The GA optimizer itself is shown to be superior to other optimization methods such as the Monte Carlo technique.

  15. A heuristic approach to possibilistic clustering algorithms and applications

    CERN Document Server

    Viattchenin, Dmitri A

    2013-01-01

    The present book outlines a new approach to possibilistic clustering in which the sought clustering structure of the set of objects is based directly on the formal definition of a fuzzy cluster and the possibilistic memberships are determined directly from the values of the pairwise similarity of objects. The proposed approach can be used for solving different classification problems. Here, some techniques that might be useful for this purpose are outlined, including a methodology for constructing a set of labeled objects for a semi-supervised clustering algorithm, a methodology for reducing the dimensionality of the analyzed attribute space, and methods for asymmetric data processing. Moreover, a technique for constructing a subset of the most appropriate alternatives for a set of weak fuzzy preference relations, which are defined on a universe of alternatives, is described in detail, and a method for rapidly prototyping Mamdani’s fuzzy inference systems is introduced. This book addresses engineers, scientist...

  16. A diagnostic expert system for the nuclear power plant based on the hybrid knowledge approach

    International Nuclear Information System (INIS)

    Yang, J.O.; Chang, S.H.

    1989-01-01

    A diagnostic expert system, the hybrid knowledge based plant operation supporting system (HYPOSS), which has been developed to support operators' decision-making during transients of the nuclear power plant, is described. HYPOSS adopts the hybrid knowledge approach, which combines both shallow and deep knowledge to take advantage of the merits of both approaches. In HYPOSS, four types of knowledge are used according to the steps of the diagnosis procedure: structural, functional, behavioral, and heuristic knowledge. The structural and functional knowledge is represented by three fundamental primitives and five types of functions, respectively. The behavioral knowledge is represented using constraints. The inference procedure is based on the human problem-solving behavior modeled in HYPOSS. Event-based operational guidelines are provided to the operator according to the diagnosed results. If the exact anomalies cannot be identified while some of the critical safety functions are challenged, function-based operational guidelines are provided to the operator. For the validation of HYPOSS, several tests have been performed based on data produced by a plant simulator. The results of the validation studies show good applicability of HYPOSS to the anomaly diagnosis of nuclear power plants.

  17. Can masses of non-experts train highly accurate image classifiers? A crowdsourcing approach to instrument segmentation in laparoscopic images.

    Science.gov (United States)

    Maier-Hein, Lena; Mersmann, Sven; Kondermann, Daniel; Bodenstedt, Sebastian; Sanchez, Alexandro; Stock, Christian; Kenngott, Hannes Gotz; Eisenmann, Mathias; Speidel, Stefanie

    2014-01-01

    Machine learning algorithms are gaining increasing interest in the context of computer-assisted interventions. One of the bottlenecks so far, however, has been the availability of training data, typically generated by medical experts with very limited resources. Crowdsourcing is a new trend that is based on outsourcing cognitive tasks to many anonymous untrained individuals from an online community. In this work, we investigate the potential of crowdsourcing for segmenting medical instruments in endoscopic image data. Our study suggests that (1) segmentations computed from annotations of multiple anonymous non-experts are comparable to those made by medical experts and (2) training data generated by the crowd is of the same quality as that annotated by medical experts. Given the speed of annotation, scalability and low costs, this implies that the scientific community might no longer need to rely on experts to generate reference or training data for certain applications. To trigger further research in endoscopic image processing, the data used in this study will be made publicly available.

  18. A genetic algorithm approach to recognition and data mining

    Energy Technology Data Exchange (ETDEWEB)

    Punch, W.F.; Goodman, E.D.; Min, Pei [Michigan State Univ., East Lansing, MI (United States)] [and others]

    1996-12-31

    We review here our use of genetic algorithm (GA) and genetic programming (GP) techniques to perform "data mining," the discovery of particular/important data within large datasets, by finding optimal data classifications using known examples. Our first experiments concentrated on the use of a k-nearest neighbor (kNN) algorithm in combination with a GA. The GA selected weights for each feature so as to optimize kNN classification based on a linear combination of features. This combined GA-kNN approach was successfully applied to both generated and real-world data. We later extended this work by substituting a GP for the GA. The GP-kNN could not only optimize data classification via linear combinations of features but also determine functional relationships among the features. This allowed for improved performance and new information on important relationships among features. We review the effectiveness of the overall approach on examples from biology and compare the effectiveness of the GA and GP.
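
    The GA-kNN idea can be sketched by evolving one weight per feature so that weighted kNN classification improves; the snippet below uses the Iris dataset and scikit-learn as placeholders, not the study's data or exact GA settings.

    import random
    import numpy as np
    from sklearn.datasets import load_iris
    from sklearn.model_selection import cross_val_score
    from sklearn.neighbors import KNeighborsClassifier

    X, y = load_iris(return_X_y=True)

    def fitness(weights):
        # Scale each feature by its evolved weight, then score a 3-NN classifier.
        Xw = X * np.asarray(weights)
        return cross_val_score(KNeighborsClassifier(n_neighbors=3), Xw, y, cv=5).mean()

    pop = [[random.random() for _ in range(X.shape[1])] for _ in range(16)]
    for _ in range(10):                                  # a few GA generations
        pop.sort(key=fitness, reverse=True)
        parents = pop[:8]
        children = []
        for _ in range(8):
            a, b = random.sample(parents, 2)
            child = [random.choice(pair) for pair in zip(a, b)]       # uniform crossover
            i = random.randrange(len(child))
            child[i] = min(max(child[i] + random.gauss(0, 0.1), 0.0), 1.0)  # mutation
            children.append(child)
        pop = parents + children

    best = max(pop, key=fitness)
    print("best weights:", [round(w, 2) for w in best], "accuracy:", round(fitness(best), 3))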

  19. Experts as facilitators for the implementation of social media in the library?: a social network approach

    OpenAIRE

    Vanwynsberghe, Hadewijch; Boudry, Elke; Vanderlinde, Ruben; Verdegem, Pieter

    2014-01-01

    Purpose – Based on the social capital theory, we assume that personal and professional experts are both relevant to people’s competence development. However, to date, there is little empirical evidence of how professional experts can support, or impede, people in learning how to deal with social media. The goal of this study is to examine the role and position of social media experts in the distribution of information on social media within the library as organization. Design/ methodology/...

  20. Politics or law: what is more in the approaches of public expert monopoly?

    Directory of Open Access Journals (Sweden)

    Оксана Михайлівна Калужна

    2018-03-01

    It is concluded that the model of judicial expert support of legal proceedings in Ukraine established by the «judicial reform» (Law No. 2147-VIII, in the wording that comes into force on March 18, 2018) is a milestone in its historical development, which certainly should be modified depending on its effectiveness and the demands of society and of public, professional and state institutions. A public forensic expert monopoly is not an ideal model of forensic expert support of justice because of corporate and political interests, the corruption component, abuse by forensic experts, etc. Therefore, it will undergo review and transformation.

  1. Direct estimates of national neonatal and child cause–specific mortality proportions in Niger by expert algorithm and physician–coded analysis of verbal autopsy interviews

    Directory of Open Access Journals (Sweden)

    Henry D. Kalter

    2015-06-01

    Full Text Available Background This study was one of a set of verbal autopsy investigations undertaken by the WHO/UNICEF–supported Child Health Epidemiology Reference Group (CHERG) to derive direct estimates of the causes of neonatal and child deaths in high priority countries of sub–Saharan Africa. The objective of the study was to determine the cause distributions of neonatal (0–27 days) and child (1–59 months) mortality in Niger. Methods Verbal autopsy interviews were conducted on random samples of 453 neonatal deaths and 620 child deaths from 2007 to 2010 identified by the 2011 Niger National Mortality Survey. The cause of each death was assigned using two methods: computerized expert algorithms arranged in a hierarchy and physician completion of a death certificate for each child. The findings of the two methods were compared to each other, and plausibility checks were conducted to assess which is the preferred method. A comparison of some direct measures from this study with CHERG modeled cause of death estimates is discussed. Findings The cause distributions of neonatal deaths as determined by expert algorithms and the physician were similar, with the same top three causes by both methods and all but two other causes within one rank of each other. Although child causes of death differed more, the reasons often could be discerned by analyzing algorithmic criteria alongside the physician's application of required minimal diagnostic criteria. Including all algorithmic (primary and co–morbid) and physician (direct, underlying and contributing) diagnoses in the comparison minimized the differences, with kappa coefficients greater than 0.40 for five of 11 neonatal diagnoses and nine of 13 child diagnoses. By algorithmic diagnosis, early onset neonatal infection was significantly associated (χ2 = 13.2, P < 0.001) with maternal infection, and the geographic distribution of child meningitis deaths closely corresponded with that for meningitis surveillance

  2. Mechanisms and risk of cumulative impacts to coastal ecosystem services: An expert elicitation approach

    KAUST Repository

    Singh, Gerald G.

    2017-05-23

    Coastal environments are some of the most populated on Earth, with greater pressures projected in the future. Managing coastal systems requires the consideration of multiple uses, which both benefit from and threaten multiple ecosystem services. Thus understanding the cumulative impacts of human activities on coastal ecosystem services would seem fundamental to management, yet there is no widely accepted approach for assessing these. This study trials an approach for understanding the cumulative impacts of anthropogenic change, focusing on Tasman and Golden Bays, New Zealand. Using an expert elicitation procedure, we collected information on three aspects of cumulative impacts: the importance and magnitude of impacts by various activities and stressors on ecosystem services, and the causal processes of impact on ecosystem services. We assessed impacts to four ecosystem service benefits — fisheries, shellfish aquaculture, marine recreation and existence value of biodiversity—addressing three main research questions: (1) how severe are cumulative impacts on ecosystem services (correspondingly, what potential is there for restoration)?; (2) are threats evenly distributed across activities and stressors, or do a few threats dominate?; (3) do prominent activities mainly operate through direct stressors, or do they often exacerbate other impacts? We found (1) that despite high uncertainty in the threat posed by individual stressors and impacts, total cumulative impact is consistently severe for all four ecosystem services. (2) A subset of drivers and stressors pose important threats across the ecosystem services explored, including climate change, commercial fishing, sedimentation and pollution. (3) Climate change and commercial fishing contribute to prominent indirect impacts across ecosystem services by exacerbating regional impacts, namely sedimentation and pollution. The prevalence and magnitude of these indirect, networked impacts highlights the need for

  3. Mechanisms and risk of cumulative impacts to coastal ecosystem services: An expert elicitation approach.

    Science.gov (United States)

    Singh, Gerald G; Sinner, Jim; Ellis, Joanne; Kandlikar, Milind; Halpern, Benjamin S; Satterfield, Terre; Chan, Kai M A

    2017-09-01

    Coastal environments are some of the most populated on Earth, with greater pressures projected in the future. Managing coastal systems requires the consideration of multiple uses, which both benefit from and threaten multiple ecosystem services. Thus understanding the cumulative impacts of human activities on coastal ecosystem services would seem fundamental to management, yet there is no widely accepted approach for assessing these. This study trials an approach for understanding the cumulative impacts of anthropogenic change, focusing on Tasman and Golden Bays, New Zealand. Using an expert elicitation procedure, we collected information on three aspects of cumulative impacts: the importance and magnitude of impacts by various activities and stressors on ecosystem services, and the causal processes of impact on ecosystem services. We assessed impacts to four ecosystem service benefits - fisheries, shellfish aquaculture, marine recreation and existence value of biodiversity-addressing three main research questions: (1) how severe are cumulative impacts on ecosystem services (correspondingly, what potential is there for restoration)?; (2) are threats evenly distributed across activities and stressors, or do a few threats dominate?; (3) do prominent activities mainly operate through direct stressors, or do they often exacerbate other impacts? We found (1) that despite high uncertainty in the threat posed by individual stressors and impacts, total cumulative impact is consistently severe for all four ecosystem services. (2) A subset of drivers and stressors pose important threats across the ecosystem services explored, including climate change, commercial fishing, sedimentation and pollution. (3) Climate change and commercial fishing contribute to prominent indirect impacts across ecosystem services by exacerbating regional impacts, namely sedimentation and pollution. The prevalence and magnitude of these indirect, networked impacts highlights the need for approaches

  4. Nuclear fuel cycle. Which way forward for multilateral approaches? An international expert group examines options

    International Nuclear Information System (INIS)

    Pellaud, Bruno

    2005-01-01

    For several years now, the debate on the proliferation of nuclear weapons has been dominated by individuals and countries that violate rules of good behaviour - as sellers or acquirers of clandestine nuclear technology. As a result, the 1968 Treaty on the Non-Proliferation of Nuclear Weapons (NPT) has been declared to be 'inadequate' by some, 'full of loopholes' by others. Two basic approaches have been put forward to tighten up the NPT; both seek to ensure that the nuclear non-proliferation regime maintains its authority and credibility in the face of these very real challenges. One calls for non-nuclear weapon States to accept a partial denial of technology through a reinterpretation of the NPT's provisions governing the rights of access to nuclear technologies. The unwillingness of most non-nuclear-weapon States to accept additional restrictions under the NPT makes this approach difficult. The other approach would apply multinational alternatives to the national operation of uranium-enrichment and plutonium-separation technologies, and to the disposal of spent nuclear fuel. In this perspective, IAEA Director General Mohamed ElBaradei proposed in 2003 to revisit the concept of multilateral nuclear approaches (MNA) that was intensively discussed several decades ago. Several such approaches were adopted at that time in Europe, which became the true homeland of MNAs. Nonetheless, MNAs have failed so far to materialise outside Europe due to different political and economic perceptions. In June 2004, the Director General appointed an international group of experts to consider possible multilateral approaches to the nuclear fuel cycle. The overall purpose was to assess MNAs in the framework of a double objective: strengthening the international nuclear non-proliferation regime and making the peaceful uses of nuclear energy more economical and attractive. In the report submitted to the Director General in February 2005, the Group identified a number of options - options

  5. Algorithms

    Indian Academy of Sciences (India)

    polynomial) division have been found in Vedic Mathematics, which are dated much before Euclid's algorithm. A programming language is used to describe an algorithm for execution on a computer. An algorithm expressed using a programming.

  6. Evaluation of a practical expert defined approach to patient population segmentation: a case study in Singapore

    Directory of Open Access Journals (Sweden)

    Lian Leng Low

    2017-11-01

    Full Text Available Abstract Background Segmenting the population into groups that are relatively homogeneous in healthcare characteristics or needs is crucial to facilitate integrated care and resource planning. We aimed to evaluate the feasibility of segmenting the population into discrete, non-overlapping groups using a practical expert and literature driven approach. We hypothesized that this approach is feasible utilizing the electronic health record (EHR) in SingHealth. Methods In addition to the well-defined segments of “Mostly healthy”, “Serious acute illness but curable” and “End of life” that are also present in the Ministry of Health Singapore framework, patients with chronic diseases were segmented into “Stable chronic disease”, “Complex chronic diseases without frequent hospital admissions”, and “Complex chronic diseases with frequent hospital admissions”. Using the electronic health record (EHR), we applied this framework to all adult patients who had a healthcare encounter in the Singapore Health Services Regional Health System in 2012. ICD-9, ICD-10 and polyclinic codes were used to define chronic diseases, with a comprehensive look-back period of 5 years. Outcomes (hospital admissions, emergency attendances, specialist outpatient clinic attendances and mortality) were analyzed for the years 2012 to 2015. Results A total of 825,874 patients were included in this study, with the majority being healthy without chronic diseases. The most common chronic disease was hypertension. Patients in the “Complex chronic diseases with frequent hospital admissions” segment represented 0.6% of the eligible population, but accounted for the highest number of hospital admissions (4.33 ± 2.12 admissions; p < 0.001) and emergency department (ED) attendances (3.21 ± 3.16 ED visits; p < 0.001) per patient, and a high mortality rate (16%). Patients with metastatic disease accounted for the highest specialist outpatient

  7. Induced seismicity hazard and risk by enhanced geothermal systems: an expert elicitation approach

    Science.gov (United States)

    Trutnevyte, Evelina; Azevedo, Inês L.

    2018-03-01

    Induced seismicity is a concern for multiple geoenergy applications, including low-carbon enhanced geothermal systems (EGS). We present the results of an international expert elicitation (n = 14) on EGS induced seismicity hazard and risk. Using a hypothetical scenario of an EGS plant and its geological context, we show that expert best-guess estimates of annualized exceedance probabilities of an M ≥ 3 event range from 0.2% to 95% during reservoir stimulation and 0.2% to 100% during operation. Best-guess annualized exceedance probabilities of an M ≥ 5 event span from 0.002% to 2% during stimulation and 0.003% to 3% during operation. Assuming that tectonic M7 events could occur, some experts do not exclude induced (triggered) events of up to M7 either. If an induced M = 3 event happens at 5 km depth beneath a town with 10 000 inhabitants, most experts estimate a 50% probability that the loss is contained within 500 000 USD without any injuries or fatalities. In the case of an induced M = 5 event, there is a 50% chance that the loss is below 50 million USD, with the most likely outcome being 50 injuries and one fatality or none. As we observe a vast diversity in quantitative expert judgements and underlying mental models, we conclude with implications for induced seismicity risk governance. That is, we suggest documenting individual expert judgements in induced seismicity elicitations before proceeding to consensual judgements, convening larger expert panels in order not to cherry-pick the experts, and aiming for multi-organization, multi-model assessments of EGS induced seismicity hazard and risk.

  8. Expert consensus on an in vitro approach to assess pulmonary fibrogenic potential of aerosolized nanomaterials.

    Science.gov (United States)

    Clippinger, Amy J; Ahluwalia, Arti; Allen, David; Bonner, James C; Casey, Warren; Castranova, Vincent; David, Raymond M; Halappanavar, Sabina; Hotchkiss, Jon A; Jarabek, Annie M; Maier, Monika; Polk, William; Rothen-Rutishauser, Barbara; Sayes, Christie M; Sayre, Phil; Sharma, Monita; Stone, Vicki

    2016-07-01

    The increasing use of multi-walled carbon nanotubes (MWCNTs) in consumer products and their potential to induce adverse lung effects following inhalation has led to much interest in better understanding the hazard associated with these nanomaterials (NMs). While the current regulatory requirement for substances of concern, such as MWCNTs, in many jurisdictions is a 90-day rodent inhalation test, the monetary, ethical, and scientific concerns associated with this test led an international expert group to convene in Washington, DC, USA, to discuss alternative approaches to evaluate the inhalation toxicity of MWCNTs. Pulmonary fibrosis was identified as a key adverse outcome linked to MWCNT exposure, and recommendations were made on the design of an in vitro assay that is predictive of the fibrotic potential of MWCNTs. While fibrosis takes weeks or months to develop in vivo, an in vitro test system may more rapidly predict fibrogenic potential by monitoring pro-fibrotic mediators (e.g., cytokines and growth factors). Therefore, the workshop discussions focused on the necessary specifications related to the development and evaluation of such an in vitro system. Recommendations were made for designing a system using lung-relevant cells co-cultured at the air-liquid interface to assess the pro-fibrogenic potential of aerosolized MWCNTs, while considering human-relevant dosimetry and NM life cycle transformations. The workshop discussions provided the fundamental design components of an air-liquid interface in vitro test system that will be subsequently expanded to the development of an alternative testing strategy to predict pulmonary toxicity and to generate data that will enable effective risk assessment of NMs.

  9. A Clustering Approach Using Cooperative Artificial Bee Colony Algorithm

    Directory of Open Access Journals (Sweden)

    Wenping Zou

    2010-01-01

    Full Text Available Artificial Bee Colony (ABC) is one of the most recently introduced algorithms based on the intelligent foraging behavior of a honey bee swarm. This paper presents an extended ABC algorithm, namely, the Cooperative Artificial Bee Colony (CABC), which significantly improves the original ABC in solving complex optimization problems. Clustering is a popular data analysis and data mining technique; therefore, the CABC could be used for solving clustering problems. In this work, first the CABC algorithm is used for optimizing six widely used benchmark functions and the comparative results produced by ABC, Particle Swarm Optimization (PSO), and its cooperative version (CPSO) are studied. Second, the CABC algorithm is used for data clustering on several benchmark data sets. The performance of the CABC algorithm is compared with the PSO, CPSO, and ABC algorithms on clustering problems. The simulation results show that the proposed CABC outperforms the other three algorithms in terms of accuracy, robustness, and convergence speed.
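
    As a rough illustration of the idea above, the following Python sketch casts clustering as the continuous optimisation problem that an ABC-style search can solve. It is a bare-bones, single-swarm ABC, not the cooperative CABC variant of the paper, and all parameter values are illustrative.

      import numpy as np

      # A candidate solution ("food source") is a flat vector of K centroids;
      # its cost is the sum of squared distances of each point to its nearest centroid.
      def clustering_cost(flat_centroids, data, k):
          centroids = flat_centroids.reshape(k, data.shape[1])
          d2 = ((data[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
          return d2.min(axis=1).sum()

      def abc_cluster(data, k, n_sources=20, iters=200, limit=20, seed=0):
          rng = np.random.default_rng(seed)
          dim = k * data.shape[1]
          lo, hi = np.tile(data.min(axis=0), k), np.tile(data.max(axis=0), k)
          sources = rng.uniform(lo, hi, size=(n_sources, dim))
          cost = np.array([clustering_cost(s, data, k) for s in sources])
          trials = np.zeros(n_sources, dtype=int)
          for _ in range(iters):
              # employed/onlooker phases collapsed into one neighbourhood search step
              for i in range(n_sources):
                  j = rng.integers(n_sources - 1)
                  j = j if j < i else j + 1                 # pick a different food source
                  phi = rng.uniform(-1, 1, size=dim)
                  candidate = sources[i] + phi * (sources[i] - sources[j])
                  c = clustering_cost(candidate, data, k)
                  if c < cost[i]:
                      sources[i], cost[i], trials[i] = candidate, c, 0
                  else:
                      trials[i] += 1
              # scout phase: abandon food sources that stopped improving
              for i in np.where(trials > limit)[0]:
                  sources[i] = rng.uniform(lo, hi)
                  cost[i] = clustering_cost(sources[i], data, k)
                  trials[i] = 0
          return sources[cost.argmin()].reshape(k, data.shape[1])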

  10. The physician expert witness and the U.S. Supreme court--an epidemiologic approach.

    Science.gov (United States)

    Norton, Martin L

    2002-01-01

    It is a fact of life that the physician is occasionally called upon to provide Expert Witness evidence. This is clearly distinct from evidence of a participatory nature where the physician is a party to the act by virtue of the doctor-patient relationship. The purpose of this presentation is to alert the physician to new criteria, imposed by the court, for acceptance of Expert Testimony. Prior to March 23, 1999, expert witness testimony fell into three categories, Scientific, technical, and other specialized knowledge. Scientific knowledge included the conclusions that could be subjected to analysis of a statistical nature, or could be validated by methodology such as epidemiologic criteria. Technical knowledge was based on factors such as mechanical or stress analysis utilized in engineering. Other "specialized knowledge" could be based on experiential data and information not necessarily subject to epidemiologic or other scientific analysis. Therefore, the physician presented his reasoning often based on years of professional practice and publication in journals of clinical practice. On March 23rd 1999, the Supreme Court of the United States changed the criteria for all categories stating that there is "no relevant distinction between 'scientific' knowledge' and 'technical' or 'other specialized knowledge' in Federal Rule of Evidence 702. This momentous decision [Kumho Tire Co. v. Carmichael, (97-1709), 131 F.3d 1433) reversed.] referred back to a previous case [Daubert v. Merrell Dow Pharmaceuticals Inc., 509 US. 579,589], which established four criteria based on methods of analysis for t he courts, and was now extended for all expert evidence. Thus the area of expert witness evidence was changed by this momentous act placing the judge as arbiter of all expert evidence, including that of the physician. This paper will offer a brief review and an analysis of the significance of this for the professional involved in the legal system as an expert witness.

  11. Classification of neuropathic pain in cancer patients: A Delphi expert survey report and EAPC/IASP proposal of an algorithm for diagnostic criteria.

    Science.gov (United States)

    Brunelli, Cinzia; Bennett, Michael I; Kaasa, Stein; Fainsinger, Robin; Sjøgren, Per; Mercadante, Sebastiano; Løhre, Erik T; Caraceni, Augusto

    2014-12-01

    Neuropathic pain (NP) in cancer patients lacks standards for diagnosis. This study is aimed at reaching consensus on the application of the International Association for the Study of Pain (IASP) special interest group for neuropathic pain (NeuPSIG) criteria to the diagnosis of NP in cancer patients and on the relevance of patient-reported outcome (PRO) descriptors for the screening of NP in this population. An international group of 42 experts was invited to participate in a consensus process through a modified 2-round Internet-based Delphi survey. Relevant topics investigated were: peculiarities of NP in patients with cancer, IASP NeuPSIG diagnostic criteria adaptation and assessment, and standardized PRO assessment for NP screening. Median consensus scores (MED) and interquartile ranges (IQR) were calculated to measure expert consensus after both rounds. Twenty-nine experts answered, and good agreement was found on the statement "the pathophysiology of NP due to cancer can be different from non-cancer NP" (MED=9, IQR=2). Satisfactory consensus was reached for the first 3 NeuPSIG criteria (pain distribution, history, and sensory findings; MEDs⩾8, IQRs⩽3), but not for the fourth one (diagnostic test/imaging; MED=6, IQR=3). Agreement was also reached on clinical examination by soft brush or pin stimulation (MEDs⩾7 and IQRs⩽3) and on the use of PRO descriptors for NP screening (MED=8, IQR=3). Based on the study results, a clinical algorithm for NP diagnostic criteria in cancer patients with pain was proposed. Clinical research on PRO in the screening phase and on the application of the algorithm will be needed to examine their effectiveness in classifying NP in cancer patients. Copyright © 2014 International Association for the Study of Pain. Published by Elsevier B.V. All rights reserved.
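
    For illustration only, the consensus statistics named above (MED and IQR) can be computed from one round of expert ratings as follows; the ratings are invented.

      import numpy as np

      ratings = np.array([9, 8, 9, 7, 9, 8, 6, 9, 8, 9])   # hypothetical panel answers on an agreement scale
      med = np.median(ratings)
      q1, q3 = np.percentile(ratings, [25, 75])
      print(f"MED = {med}, IQR = {q3 - q1}")               # consensus if MED is high and IQR is small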

  12. A nuclear reload optimization approach using a real coded genetic algorithm with random keys

    International Nuclear Information System (INIS)

    Lima, Alan M.M. de; Schirru, Roberto; Medeiros, Jose A.C.C.

    2009-01-01

    The fuel reload of a Pressurized Water Reactor is carried out whenever the burn-up of the fuel assemblies in the reactor core reaches a value such that it is no longer possible to maintain a critical reactor producing energy at nominal power. The fuel reload optimization problem consists of determining the positioning of the fuel assemblies within the reactor core in an optimized way, minimizing the cost-benefit relationship of fuel assembly cost per maximum burn-up while also satisfying symmetry and safety restrictions. The difficulty of the fuel reload optimization problem grows exponentially with the number of fuel assemblies in the core. For decades the fuel reload optimization problem was solved manually by experts who used their knowledge and experience to build configurations of the reactor core and tested them to verify whether the safety restrictions of the plant were satisfied. To reduce this burden, several optimization techniques have been used, including the binary-coded genetic algorithm. In this work we show the use of a real-valued coded approach of the genetic algorithm, with different recombination methods, together with a transformation mechanism called random keys, to transform the real values of the genes of each chromosome into a combination of discrete fuel assemblies for evaluation of the reload optimization. Four different recombination methods were tested: discrete recombination, intermediate recombination, linear recombination and extended linear recombination. For each of the 4 recombination methods, 10 tests using different seeds for the random number generator were conducted, totaling 40 tests. The results of the application of the genetic algorithm with the real-number formulation are shown for the nuclear reload problem of the Angra 1 PWR plant. Since the best results in the literature for this problem were found by the parallel PSO, we use it for comparison
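
    A hedged sketch of the random-keys decoding step described above: ranking the real-valued genes of a chromosome yields a permutation that maps discrete fuel assemblies to core positions. The assembly identifiers and the chromosome below are invented for the example.

      import numpy as np

      def decode_random_keys(chromosome, assemblies):
          order = np.argsort(chromosome)            # rank of each real-valued gene
          return [assemblies[i] for i in order]     # loading pattern as a permutation

      assemblies = [f"FA-{i:02d}" for i in range(8)]       # hypothetical assembly IDs
      chromosome = np.random.default_rng(1).random(8)      # real-coded genes in [0, 1)
      print(decode_random_keys(chromosome, assemblies))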

  13. A conversational case-based reasoning approach to assisting experts in solving professional problems

    Directory of Open Access Journals (Sweden)

    Negar Armaghan

    2018-03-01

    Full Text Available Nowadays, organizations attempt to retrieve, collect, preserve and manage the knowledge and experience of experts in order to reuse them later and to promote innovation. In this sense, Experience Management is one of the important organizational issues. This article discusses the main ideas of a future Conversational Case-Based Reasoning (CCBR) system intended to assist the experts of the after-sales service in a French industrial company. The aim of this research is to formalize the experience of experts in after-sales service in order to better reuse it for similar problems in the future. The research opts for an action research method which consists of two main parts: the description of failures and the proposition of a decision protocol. The data were complemented by questionnaires, documentary analysis (including technical reports and other technical documents), observation and many interviews with experts. The findings include several aspects: the formalization of Problem-solving Cards, a proposed structure for the case base, and the framework of the proposed system. These formalizations permit after-sales service experts to provide effective diagnosis and problem-solving.
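
    The following minimal sketch (not the system described in the paper) shows one way a case base of Problem-solving Cards could be queried: each case pairs a set of observed failure symptoms with a decision protocol, and retrieval returns the most similar stored case. All card contents are placeholders.

      cases = [
          {"symptoms": {"no_power", "fuse_blown"}, "protocol": "Replace fuse, check power supply"},
          {"symptoms": {"overheating", "fan_stopped"}, "protocol": "Replace fan, clean vents"},
          {"symptoms": {"no_power", "display_dark"}, "protocol": "Check mains cable and switch"},
      ]

      def retrieve(query_symptoms):
          def score(case):                                  # Jaccard similarity of symptom sets
              union = case["symptoms"] | query_symptoms
              return len(case["symptoms"] & query_symptoms) / len(union) if union else 0.0
          return max(cases, key=score)

      print(retrieve({"no_power", "fuse_blown", "burnt_smell"})["protocol"])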

  14. Expert Opinion Is Necessary: Delphi Panel Methodology Facilitates a Scientific Approach to Consensus.

    Science.gov (United States)

    Hohmann, Erik; Brand, Jefferson C; Rossi, Michael J; Lubowitz, James H

    2018-02-01

    The current trend and focus on evidence-based medicine is biased in favor of randomized controlled trials, which are ranked highest in the hierarchy of evidence, while expert opinion is devalued and ranked lowest in the hierarchy. However, randomized controlled trials have weaknesses as well as strengths, and no research method is flawless. Moreover, stringent application of scientific research techniques, such as the Delphi Panel methodology, allows experts to be surveyed in a high-quality and scientific manner. Level V evidence (expert opinion) remains a necessary component in the armamentarium used to determine the answer to a clinical question. Copyright © 2017 Arthroscopy Association of North America. Published by Elsevier Inc. All rights reserved.

  15. Expert systems: A new approach to radon mitigation training and quality assurance

    Energy Technology Data Exchange (ETDEWEB)

    Brambley, M.R.; Hanlon, R.L.; Parker, G.B.

    1990-07-01

    Training radon mitigators and ensuring that they provide high-quality work on the scale necessary to reduce radon to acceptable levels in the large number of homes and schools requiring some mitigation is a challenging problem. The US Environmental Protection Agency and several states have made commendable efforts to train mitigators and ensure that they provide quality services to the public. Expert systems could be used to extend and improve the effectiveness of these efforts. The purpose of this paper is to introduce the radon community to this promising new technology. The paper includes a description of a prototype system developed by Pacific Northwest Laboratory that illustrates several of the capabilities that expert systems can provide, a brief explanation of how the prototype works, and a discussion of the potential roles and benefits of fully-developed expert systems for radon mitigation. 4 refs., 3 figs.

  16. Expert systems: A new approach to radon mitigation training and quality assurance

    International Nuclear Information System (INIS)

    Brambley, M.R.; Hanlon, R.L.; Parker, G.B.

    1990-07-01

    Training radon mitigators and ensuring that they provide high-quality work on the scale necessary to reduce radon to acceptable levels in the large number of homes and schools requiring some mitigation is a challenging problem. The US Environmental Protection Agency and several states have made commendable efforts to train mitigators and ensure that they provide quality services to the public. Expert systems could be used to extend and improve the effectiveness of these efforts. The purpose of this paper is to introduce the radon community to this promising new technology. The paper includes a description of a prototype system developed by Pacific Northwest Laboratory that illustrates several of the capabilities that expert systems can provide, a brief explanation of how the prototype works, and a discussion of the potential roles and benefits of fully-developed expert systems for radon mitigation. 4 refs., 3 figs

  17. Hierarchical mixture of experts and diagnostic modeling approach to reduce hydrologic model structural uncertainty: STRUCTURAL UNCERTAINTY DIAGNOSTICS

    Energy Technology Data Exchange (ETDEWEB)

    Moges, Edom [Civil and Environmental Engineering Department, Washington State University, Richland Washington USA; Demissie, Yonas [Civil and Environmental Engineering Department, Washington State University, Richland Washington USA; Li, Hong-Yi [Hydrology Group, Pacific Northwest National Laboratory, Richland Washington USA

    2016-04-01

    In most water resources applications, a single model structure might be inadequate to capture the dynamic multi-scale interactions among different hydrological processes. Calibrating single models for dynamic catchments, where multiple dominant processes exist, can result in displacement of errors from structure to parameters, which in turn leads to over-correction and biased predictions. An alternative to a single model structure is to develop local expert structures that are effective in representing the dominant components of the hydrologic process and to adaptively integrate them based on an indicator variable. In this study, the Hierarchical Mixture of Experts (HME) framework is applied to integrate expert model structures representing the different components of the hydrologic process. Various signature diagnostic analyses are used to assess the presence of multiple dominant processes and the adequacy of a single model, as well as to identify the structures of the expert models. The approaches are applied to two distinct catchments, the Guadalupe River (Texas) and the French Broad River (North Carolina) from the Model Parameter Estimation Experiment (MOPEX), using different structures of the HBV model. The results show that the HME approach performs better than the single model for the Guadalupe catchment, where multiple dominant processes are witnessed through diagnostic measures. In contrast, the diagnostics and aggregated performance measures show that the French Broad catchment has a homogeneous response, making the single model adequate to capture it.
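
    A schematic sketch of the mixture-of-experts idea, under the assumption of two expert model structures and a single indicator variable; the logistic gate, the streamflow values and the wetness indicator below are invented for illustration.

      import numpy as np

      def gate(indicator, midpoint=0.5, steepness=10.0):
          """Weight given to expert 1 (logistic gate); expert 2 gets the complement."""
          return 1.0 / (1.0 + np.exp(-steepness * (indicator - midpoint)))

      def hme_predict(q_expert1, q_expert2, indicator):
          w = gate(indicator)
          return w * q_expert1 + (1.0 - w) * q_expert2      # gated blend of the experts

      q1 = np.array([1.2, 3.4, 5.0])       # toy streamflow from expert structure 1
      q2 = np.array([0.8, 2.9, 6.1])       # toy streamflow from expert structure 2
      wetness = np.array([0.2, 0.5, 0.9])  # toy indicator variable
      print(hme_predict(q1, q2, wetness))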

  18. GSM Channel Equalization Algorithm - Modern DSP Coprocessor Approach

    Directory of Open Access Journals (Sweden)

    M. Drutarovsky

    1999-12-01

    Full Text Available The paper presents the basic equations of an efficient GSM Viterbi equalizer algorithm based on the approximation of GMSK modulation by a linear superposition of amplitude-modulated pulses. This approximation allows the Ungerboeck form of the channel equalizer to be used with significantly reduced arithmetic complexity. The proposed algorithm can be effectively implemented on the Viterbi and Filter coprocessors of the new Motorola DSP56305 digital signal processor. A short overview of the coprocessor features related to the proposed algorithm is included.
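
    As background, the sketch below is a textbook-style Viterbi maximum-likelihood sequence estimator for a simple 2-tap intersymbol-interference channel with BPSK symbols; it is not the reduced-complexity Ungerboeck/coprocessor formulation of the paper, and the channel taps and data are invented.

      import numpy as np

      def viterbi_equalize(r, h):                   # r: received samples, h = (h0, h1)
          symbols = (+1.0, -1.0)
          cost = {s: 0.0 for s in range(2)}         # path cost per state (state = previous symbol)
          paths = {s: [] for s in range(2)}
          for rk in r:
              new_cost, new_paths = {}, {}
              for cur in range(2):
                  best = None
                  for prev in range(2):
                      y = h[0] * symbols[cur] + h[1] * symbols[prev]
                      c = cost[prev] + (rk - y) ** 2
                      if best is None or c < best[0]:
                          best = (c, prev)
                  new_cost[cur] = best[0]
                  new_paths[cur] = paths[best[1]] + [symbols[cur]]
              cost, paths = new_cost, new_paths
          return paths[min(cost, key=cost.get)]

      h = (1.0, 0.5)                                          # hypothetical channel taps
      tx = np.array([1, -1, -1, 1, 1, -1], dtype=float)
      rx = np.convolve(tx, h)[: len(tx)] + 0.05 * np.random.default_rng(0).standard_normal(len(tx))
      print(viterbi_equalize(rx, h))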

  19. The mathematical approach to EQPS - an expert system for oil quality prediction

    Energy Technology Data Exchange (ETDEWEB)

    Hartman, J. [Israel Institute for Biological Research, Ness Ziona (Israel)

    1995-05-01

    EQPS is an expert system for prediction of ageing processes in long term storage of oil products. EQPS contains a data base with detailed information on the user's stored stocks, and a diagnostic Expert System which is used for analysis, evaluation and quality prediction of a given storage site. An extensive body of knowledge and information concerning oil products is included in the program. Petrochemical and petrobiological laboratory test results, source and product processing data, storage conditions, and environmental and climatic factors are all considered in the evaluation.

  20. An algorithmic approach for the treatment of severe uncontrolled asthma

    Science.gov (United States)

    Zervas, Eleftherios; Samitas, Konstantinos; Papaioannou, Andriana I.; Bakakos, Petros; Loukides, Stelios; Gaga, Mina

    2018-01-01

    A small subgroup of patients with asthma suffers from severe disease that is either partially controlled or uncontrolled despite intensive, guideline-based treatment. These patients have significantly impaired quality of life and, although they constitute only a small proportion of all asthma patients, they are responsible for more than half of asthma-related healthcare costs. Here, we review a definition for severe asthma and present all therapeutic options currently available for these severe asthma patients. Moreover, we suggest a specific algorithmic treatment approach for the management of severe, difficult-to-treat asthma based on specific phenotype characteristics and biomarkers. The diagnosis and management of severe asthma requires specialised experience, time and effort to comprehend the needs and expectations of each individual patient and to incorporate those as well as his/her specific phenotype characteristics into the management planning. Although some new treatment options are currently available for these patients, there is still a need for further research into severe asthma and yet more treatment options. PMID:29531957

  1. Comparative evaluation of community detection algorithms: a topological approach

    International Nuclear Information System (INIS)

    Orman, Günce Keziban; Labatut, Vincent; Cherifi, Hocine

    2012-01-01

    Community detection is one of the most active fields in complex network analysis, due to its potential value in practical applications. Many works inspired by different paradigms are devoted to the development of algorithmic solutions allowing the network structure in such cohesive subgroups to be revealed. Comparative studies reported in the literature usually rely on a performance measure considering the community structure as a partition (Rand index, normalized mutual information, etc). However, this type of comparison neglects the topological properties of the communities. In this paper, we present a comprehensive comparative study of a representative set of community detection methods, in which we adopt both types of evaluation. Community-oriented topological measures are used to qualify the communities and evaluate their deviation from the reference structure. In order to mimic real-world systems, we use artificially generated realistic networks. It turns out there is no equivalence between the two approaches: a high performance does not necessarily correspond to correct topological properties, and vice versa. They can therefore be considered as complementary, and we recommend applying both of them in order to perform a complete and accurate assessment. (paper)
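
    For reference, the partition-oriented measures mentioned above can be computed directly; the sketch below compares two hypothetical community assignments of ten nodes using scikit-learn. As the paper argues, such scores say nothing about the topological properties of the detected communities.

      from sklearn.metrics import adjusted_rand_score, normalized_mutual_info_score

      reference = [0, 0, 0, 1, 1, 1, 2, 2, 2, 2]   # reference community labels (invented)
      detected  = [0, 0, 1, 1, 1, 1, 2, 2, 2, 0]   # labels produced by some detection method

      print("adjusted Rand index:", adjusted_rand_score(reference, detected))
      print("normalized mutual information:", normalized_mutual_info_score(reference, detected))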

  2. Digital image processing an algorithmic approach with Matlab

    CERN Document Server

    Qidwai, Uvais

    2009-01-01

    Introduction to Image Processing and the MATLAB Environment: Introduction; Digital Image Definitions: Theoretical Account; Image Properties; MATLAB; Algorithmic Account; MATLAB Code. Image Acquisition, Types, and File I/O: Image Acquisition; Image Types and File I/O; Basics of Color Images; Other Color Spaces; Algorithmic Account; MATLAB Code. Image Arithmetic: Introduction; Operator Basics; Theoretical Treatment; Algorithmic Treatment; Coding Examples. Affine and Logical Operations, Distortions, and Noise in Images: Introduction; Affine Operations; Logical Operators; Noise in Images; Distortions in Images; Algorithmic Account

  3. Algorithms

    Indian Academy of Sciences (India)

    to as 'divide-and-conquer'. Although there has been a large effort in realizing efficient algorithms, there are not many universally accepted algorithm design paradigms. In this article, we illustrate algorithm design techniques such as balancing, greedy strategy, dynamic programming strategy, and backtracking or traversal of ...

  4. Cost estimation: An expert-opinion approach. [cost analysis of research projects using the Delphi method (forecasting)

    Science.gov (United States)

    Buffalano, C.; Fogleman, S.; Gielecki, M.

    1976-01-01

    A methodology is outlined which can be used to estimate the costs of research and development projects. The approach uses the Delphi technique, a method developed by the Rand Corporation for systematically eliciting and evaluating group judgments in an objective manner. The use of the Delphi allows for the integration of expert opinion into the cost-estimating process in a consistent and rigorous fashion. This approach can also signal potential cost-problem areas, a result that can be a useful tool in planning additional cost analysis or in estimating contingency funds. A Monte Carlo approach is also examined.
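
    A toy sketch of the Monte Carlo element mentioned above, under the assumption that expert-elicited low/most-likely/high cost judgements are modelled as triangular distributions; all figures are invented.

      import numpy as np

      rng = np.random.default_rng(42)
      work_packages = {                        # (low, mode, high) cost judgements in k$, hypothetical
          "design":      (100, 150, 250),
          "fabrication": (300, 400, 700),
          "testing":     (80, 120, 200),
      }

      n = 100_000
      total = sum(rng.triangular(lo, mode, hi, n) for lo, mode, hi in work_packages.values())

      p50, p80 = np.percentile(total, [50, 80])
      print(f"median cost ~ {p50:.0f} k$, 80th percentile ~ {p80:.0f} k$")
      print(f"suggested contingency ~ {p80 - p50:.0f} k$")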

  5. Expert Meeting. Recommended Approaches to Humidity Control in High Performance Homes

    Energy Technology Data Exchange (ETDEWEB)

    Rudd, Armin [Building Science Corporation (BSC), Somerville, MA (United States)

    2013-07-01

    This meeting was held on October 16, 2012, in Westford, MA, and brought together experts in the field of residential humidity control to address modeling issues for dehumidification. The presentations and discussions centered on computer simulation and field experience with these systems, with the goal of developing foundational information to support the development of a Building America Measure Guideline on this topic.

  6. A Functional Programming Approach to AI Search Algorithms

    Science.gov (United States)

    Panovics, Janos

    2012-01-01

    The theory and practice of search algorithms related to state-space represented problems form the major part of the introductory course of Artificial Intelligence at most of the universities and colleges offering a degree in the area of computer science. Students usually meet these algorithms only in some imperative or object-oriented language…

  7. Method of immersion of a problem of comparison financial conditions of the enterprises in an expert cover in a class algorithms of artificial intelligence

    Directory of Open Access Journals (Sweden)

    S. V. Bukharin

    2016-01-01

    Full Text Available The financial condition of an enterprise can be estimated from a set of characteristics (solvency and liquidity, capital structure, profitability, etc.). Some of the financial coefficients are low-informative, while others are mutually interconnected. To eliminate this ambiguity we therefore pass to generalized indicators, rating numbers, and the theory of expert systems is proposed as the main research tool. A characteristic of the modern theory of expert systems is the application of intelligent data-processing methods, i.e. data mining. A method is proposed for embedding the problem of comparing the financial condition of economic objects in an expert shell within the class of artificial intelligence systems (algorithms of the analytic hierarchy process, contiguity learning of a neural network, a training algorithm with the softmax activation function). A generalized indicator of capital structure in the form of a rating number is introduced, and the feature (factor) space for seven concrete enterprises is created. Quantitative features (financial coefficients of capital structure) are selected and normalized according to the rules of the theory of expert systems. The analytic hierarchy process is then applied to the resulting set of generalized indicators: on the basis of the linguistic scale of T. Saaty, ranks of features reflecting the relative importance of the various financial coefficients are defined and a matrix of pairwise comparisons is constructed. The vector of feature priorities is calculated by solving the eigenvalue and eigenvector equation for this matrix. The results are then visualized, which eliminates the difficulty of interpreting small and negative values of the generalized indicator. The neural network with contiguity learning and
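
    A small numerical sketch of the analytic hierarchy process step described above: a made-up Saaty-scale pairwise comparison matrix for three financial coefficients is built, and the priority vector is taken as its normalised principal eigenvector.

      import numpy as np

      A = np.array([
          [1.0, 3.0, 5.0],       # coefficient 1 compared with coefficients 1, 2, 3
          [1/3, 1.0, 2.0],
          [1/5, 1/2, 1.0],
      ])

      eigvals, eigvecs = np.linalg.eig(A)
      principal = np.argmax(eigvals.real)
      weights = np.abs(eigvecs[:, principal].real)
      weights /= weights.sum()
      print("priority vector:", np.round(weights, 3))

      # Saaty consistency index: CI = (lambda_max - n) / (n - 1)
      lam_max, n = eigvals.real.max(), A.shape[0]
      print("consistency index:", round((lam_max - n) / (n - 1), 4))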

  8. Expert and Novice Approaches to Using Graphs: Evidence from Eye-Track Experiments

    Science.gov (United States)

    Wirth, K. R.; Lindgren, J. M.

    2015-12-01

    Professionals and students in geology use an array of graphs to study the earth, but relatively little detail is known about how users interact with these graphs. Comprehension of graphical information in the earth sciences is further complicated by the common use of non-traditional formats (e.g., inverted axes, logarithmic scales, normalized plots, ternary diagrams). Many educators consider graph-reading skills an important outcome of general education science curricula, so it is critical that we understand both the development of graph-reading skills and the instructional practices that are most efficacious. Eye-tracking instruments provide quantitative information about eye movements and offer important insights into the development of expertise in graph use. We measured the graph reading skills and eye movements of novices (students with a variety of majors and educational attainment) and experts (faculty and staff from a variety of disciplines) while observing traditional and non-traditional graph formats. Individuals in the expert group consistently demonstrated significantly greater accuracy in responding to questions (e.g., retrieval, interpretation, prediction) about graphs. Among novices, only the number of college math and science courses correlated with response accuracy. Interestingly, novices and experts exhibited similar eye-tracks when they first encountered a new graph; they typically scanned through the title, x and y-axes, and data regions in the first 5-15 seconds. However, experts are readily distinguished from novices by a greater number of eye movements (20-35%) between the data and other graph elements (e.g., title, x-axis, y-axis) both during and after the initial orientation phase. We attribute the greater eye movements between the different graph elements an outcome of the generally better-developed self-regulation skills (goal-setting, monitoring, self-evaluation) that likely characterize individuals in our expert group.

  9. Multilateral approaches to the nuclear fuel cycle. Expert group report to the Director General of the IAEA

    International Nuclear Information System (INIS)

    2005-04-01

    An international expert group has been appointed to consider options for possible multilateral approaches to the nuclear fuel cycle. The terms of reference for the Expert Group were to: 1) Identify and provide an analysis of issues and options relevant to multilateral approaches to the front and back ends of the nuclear fuel cycle; 2) Provide an overview of the policy, legal, security, economic and technological incentives and disincentives for cooperation in multilateral arrangements for the front and back ends of the nuclear fuel cycle; and 3) Provide a brief review of the historical and current experience in this area, and of the various analyses relating to multilateral fuel cycle arrangements relevant to the work of the Expert Group. The Group examined the nuclear fuel cycle and multinational approaches at meetings convened over a seven-month period. Their report, presented in the paper, was released on 22 February 2005, and circulated for discussion among the IAEA Member States, as well as others, as an IAEA Information Circular (INFCIRC/640)

  10. Computationally efficient model predictive control algorithms a neural network approach

    CERN Document Server

    Ławryńczuk, Maciej

    2014-01-01

    This book thoroughly discusses computationally efficient (suboptimal) Model Predictive Control (MPC) techniques based on neural models. The subjects treated include: a few types of suboptimal MPC algorithms in which a linear approximation of the model or of the predicted trajectory is successively calculated on-line and used for prediction; implementation details of the MPC algorithms for feedforward perceptron neural models, neural Hammerstein models, neural Wiener models and state-space neural models; the MPC algorithms based on neural multi-models (inspired by the idea of predictive control); the MPC algorithms with neural approximation with no on-line linearization; the MPC algorithms with guaranteed stability and robustness; and cooperation between the MPC algorithms and set-point optimization. Thanks to linearization (or neural approximation), the presented suboptimal algorithms do not require d...

  11. Expert consensus v. evidence-based approaches in the revision of the DSM.

    Science.gov (United States)

    Kendler, K S; Solomon, M

    2016-08-01

    The development of DSM-III through DSM-5 has relied heavily on expert consensus. In this essay, we provide an historical and critical perspective on this process. Over the last 40 years, medicine has struggled to find appropriate methods for summarizing research results and making clinical recommendations. When such recommendations are issued by authorized organizations, they can have widespread influence (i.e. DSM-III and its successors). In the 1970s, expert consensus conferences, led by the NIH, reviewed research about controversial medical issues and successfully disseminated results. However, these consensus conferences struggled with aggregating the complex available evidence. In the 1990s, the rise of evidence-based medicine cast doubt on the reliability of expert consensus. Since then, medicine has increasingly relied on systematic reviews, as developed by the evidence-based medicine movement, and advocated for their early incorporation in expert consensus efforts. With the partial exception of DSM-IV, such systematic evidence-based reviews have not been consistently integrated into the development of the DSMs, leaving their development out of step with the larger medical field. Like the recommendations made for the NIH consensus conferences, we argue that the DSM process should be modified to require systematic evidence-based reviews before Work Groups make their assessments. Our suggestions - which would require leadership and additional resources to set standards for appropriate evidence hierarchies, carry out systematic reviews, and upgrade the group process - should improve the objectivity of the DSM, increase the validity of its results, and improve the reception of any changes in nosology.

  12. Breaking Bad News: Different Approaches in Different Countries of Iran and Germany- an Expert Panel

    Directory of Open Access Journals (Sweden)

    Carl Eduard Scheidt

    2017-10-01

    Full Text Available This report covers an expert panel held in Isfahan, Iran; the participants were Carl Eduard Scheidt, Alexander Wunsch, Hamid Afshar, Farzad Goli, Azadeh Malekian, Mohammad Reza Sharbafchi, Masoud Ferdosi, Farzad Taslimi, and Mitra Molaeinezhad. Professor Scheidt was the facilitator and coordinator of the discussion; he opened it with a brief introduction and closed it with a conclusion.

  13. Experts and Machines against Bullies: A Hybrid Approach to Detect Cyberbullies

    OpenAIRE

    Dadvar, M.; Trieschnigg, Rudolf Berend; de Jong, Franciska M.G.

    2014-01-01

    Cyberbullying is becoming a major concern in online environments with troubling consequences. However, most of the technical studies have focused on the detection of cyberbullying through identifying harassing comments rather than preventing the incidents by detecting the bullies. In this work we study the automatic detection of bully users on YouTube. We compare three types of automatic detection: an expert system, supervised machine learning models, and a hybrid type combining the two. All ...

  14. Evolutionary Algorithms Approach to the Solution of Damage Detection Problems

    Science.gov (United States)

    Salazar Pinto, Pedro Yoajim; Begambre, Oscar

    2010-09-01

    In this work a new Self-Configured Hybrid Algorithm is proposed, combining Particle Swarm Optimization (PSO) and a Genetic Algorithm (GA). The aim of the proposed strategy is to increase the stability and accuracy of the search. The central idea is the concept of the Guide Particle: this particle (the best PSO global in each generation) transmits its information to a particle of the following PSO generation, which is controlled by the GA. Thus, the proposed hybrid has an elitism feature that improves its performance and guarantees the convergence of the procedure. In different tests carried out on benchmark functions reported in the international literature, better stability and accuracy were observed; the new algorithm was therefore used to identify damage in a simply supported beam using modal data. Finally, it is worth noting that the algorithm is independent of the initial definition of the heuristic parameters.

  15. A hybrid multi-objective evolutionary algorithm approach for ...

    Indian Academy of Sciences (India)

    V K MANUPATI

    for handling sequence- and machine-dependent set-up times ... algorithm has been compared to that of multi-objective particle swarm optimization (MOPSO) and conventional ..... position and cognitive learning factor are considered for.

  16. A novel approach in recognizing magnetic material with simplified algorithm

    KAUST Repository

    Talukdar, Abdul Hafiz Ibne; Sultana, Mahbuba Q.; Useinov, Arthur

    2011-01-01

    This signal was further analyzed (recognized) in the frequency domain, creating the Fourier frequency spectrum, which is easily used to detect the response of the magnetic sample. The novel algorithm for detecting the magnetic field is presented here with both simulation

  17. A novel approach in recognizing magnetic material with simplified algorithm

    KAUST Repository

    Talukdar, Abdul Hafiz Ibne

    2011-04-01

    In this article a cost-effective and simple system (circuit and algorithm) which allows recognizing different kinds of films by their magneto-field conductive properties is demonstrated. The studied signals are generated by a proposed circuit. This signal was further analyzed (recognized) in the frequency domain, creating the Fourier frequency spectrum, which is easily used to detect the response of the magnetic sample. The novel algorithm for detecting the magnetic field is presented here with both simulation and experimental results. © 2011 IEEE.

  18. An Expert System And Simulation Approach For Sensor Management & Control In A Distributed Surveillance Network

    Science.gov (United States)

    Leon, Barbara D.; Heller, Paul R.

    1987-05-01

    A surveillance network is a group of multiplatform sensors cooperating to improve network performance. Network control is distributed as a measure to decrease vulnerability to enemy threat. The network may contain diverse sensor types such as radar, ESM (Electronic Support Measures), IRST (Infrared Search and Track) and E-O (Electro-Optical). Each platform may contain a single sensor or a suite of sensors. In a surveillance network it is desirable to control sensors to make the overall system more effective. This problem has come to be known as sensor management and control (SM&C). Two major facets of network performance are surveillance and survivability. In a netted environment, surveillance can be enhanced if information from all sensors is combined and sensor operating conditions are controlled to provide a synergistic effect. In contrast, when survivability is the main concern for the network, the best operating status for all sensors would be passive or off. Of course, improving survivability tends to degrade surveillance. Hence, the objective of SM&C is to optimize surveillance and survivability of the network. The large volume of data in various formats and the quick response time required are two characteristics of this problem which make it an ideal application for Artificial Intelligence. A solution to the SM&C problem, in the form of a computer simulation, is presented in this paper. The simulation is a hybrid production written in LISP and FORTRAN. It combines the latest conventional computer programming methods with Artificial Intelligence techniques to produce a flexible state-of-the-art tool to evaluate network performance. The event-driven simulation contains environment models coupled with an expert system. These environment models include sensor (track-while-scan and agile beam) and target models, local tracking, and system tracking. These models are used to generate the environment for the sensor management and control expert system. The expert system

  19. Motion Cueing Algorithm Development: Human-Centered Linear and Nonlinear Approaches

    Science.gov (United States)

    Houck, Jacob A. (Technical Monitor); Telban, Robert J.; Cardullo, Frank M.

    2005-01-01

    While the performance of flight simulator motion system hardware has advanced substantially, the development of the motion cueing algorithm, the software that transforms simulated aircraft dynamics into realizable motion commands, has not kept pace. Prior research identified viable features from two algorithms: the nonlinear "adaptive algorithm", and the "optimal algorithm" that incorporates human vestibular models. A novel approach to motion cueing, the "nonlinear algorithm" is introduced that combines features from both approaches. This algorithm is formulated by optimal control, and incorporates a new integrated perception model that includes both visual and vestibular sensation and the interaction between the stimuli. Using a time-varying control law, the matrix Riccati equation is updated in real time by a neurocomputing approach. Preliminary pilot testing resulted in the optimal algorithm incorporating a new otolith model, producing improved motion cues. The nonlinear algorithm vertical mode produced a motion cue with a time-varying washout, sustaining small cues for longer durations and washing out large cues more quickly compared to the optimal algorithm. The inclusion of the integrated perception model improved the responses to longitudinal and lateral cues. False cues observed with the NASA adaptive algorithm were absent. The neurocomputing approach was crucial in that the number of presentations of an input vector could be reduced to meet the real time requirement without degrading the quality of the motion cues.

  20. Desirability and feasibility of sustainable urban transport systems. An expert-based strategic scenario approach

    Energy Technology Data Exchange (ETDEWEB)

    Nijkamp, P.; Ouwersloot, H.; Rienstra, S.A. [Department of Spatial Economics, Faculty of Economics and Econometrics, Vrije Universiteit, Amsterdam (Netherlands)

    1995-09-01

    Current trends in transport indicate that the system is moving away from sustainability and that major changes are necessary to make the transport system more compatible with environmental sustainability. The main problems may occur in urban transport, where not many promising solutions are expected, while the problems are severe. In view of the great number of uncertainties, we resort in this paper to scenarios. In the paper, expert scenarios that lead to a sustainable transport system are constructed by applying the recently developed 'Spider model'. Based on a set of distinct characteristics, leading to eight axes in the spatial, institutional, economic and social-psychological fields, an evaluation framework is constructed which visualizes the driving forces that largely influence the future of the transport system. Next, expected and desired scenarios are constructed by means of the opinions of Dutch transport experts - both average scenarios and scenarios of segments of the respondents - which have been investigated by means of a survey. The expected scenarios indicate that many current trends will continue, while the transport system remains largely the same as the current one. The desired scenarios, on the other hand, suggest the emergence of and the need for a more collective system, in which many new modes are also operating. In the paper the resulting urban transport systems are also discussed. By calculating the CO2 emissions in the average expected and desired scenarios, it appears that the expected scenario does not lead to a large-scale reduction of those emissions; the desired scenario, however, may lead to a large-scale reduction of the emissions. The conclusion is that the differences in expert opinion are small and that the road towards a sustainable (urban) transport system is still long, although the compact city concept may perhaps offer some solution. 6 figs., 2 tabs., 18 refs.

  1. A Harmony Search Algorithm approach for optimizing traffic signal timings

    Directory of Open Access Journals (Sweden)

    Mauro Dell'Orco

    2013-07-01

    Full Text Available In this study, a bi-level formulation is presented for solving the Equilibrium Network Design Problem (ENDP). The optimisation of the signal timing has been carried out at the upper level using the Harmony Search Algorithm (HSA), whilst the traffic assignment has been carried out through the Path Flow Estimator (PFE) at the lower level. The results of the HSA have been first compared with those obtained using the Genetic Algorithm and Hill Climbing on a two-junction network for a fixed set of link flows. Secondly, the HSA with the PFE has been applied to a medium-sized network to show the applicability of the proposed algorithm in solving the ENDP. Additionally, in order to test the sensitivity to perceived travel time error, we have used the HSA with the PFE with various levels of perceived travel time. The results showed that the proposed method is quite simple and efficient in solving the ENDP.
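
    A bare-bones Harmony Search sketch for a generic minimisation problem is given below. The study itself optimises signal timings with delays computed by the Path Flow Estimator; a simple quadratic stands in for that objective here, and all parameter values (HMS, HMCR, PAR, bandwidth) are illustrative.

      import numpy as np

      def harmony_search(f, lower, upper, hms=10, hmcr=0.9, par=0.3, bw=0.1, iters=2000, seed=0):
          rng = np.random.default_rng(seed)
          dim = len(lower)
          memory = rng.uniform(lower, upper, size=(hms, dim))   # harmony memory
          costs = np.apply_along_axis(f, 1, memory)
          for _ in range(iters):
              new = np.empty(dim)
              for d in range(dim):
                  if rng.random() < hmcr:                       # memory consideration
                      new[d] = memory[rng.integers(hms), d]
                      if rng.random() < par:                    # pitch adjustment
                          new[d] += bw * rng.uniform(-1, 1)
                  else:                                         # random selection
                      new[d] = rng.uniform(lower[d], upper[d])
              new = np.clip(new, lower, upper)
              worst = costs.argmax()
              c = f(new)
              if c < costs[worst]:                              # replace the worst harmony
                  memory[worst], costs[worst] = new, c
          return memory[costs.argmin()], costs.min()

      ideal = np.array([30.0, 45.0, 25.0])                      # stand-in "ideal" green splits (s)
      best, cost = harmony_search(lambda g: ((g - ideal) ** 2).sum(),
                                  lower=np.array([10.0, 10.0, 10.0]),
                                  upper=np.array([60.0, 60.0, 60.0]))
      print(best, cost)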

  2. Genetic algorithm approach to thin film optical parameters determination

    International Nuclear Information System (INIS)

    Jurecka, S.; Jureckova, M.; Muellerova, J.

    2003-01-01

    Optical parameters of thin films are important for several optical and optoelectronic applications. In this work a genetic algorithm is proposed to determine the values of the optical parameters of a thin film. The experimental reflectance is modelled by the Forouhi-Bloomer dispersion relations. The refractive index, the extinction coefficient and the film thickness are the unknown parameters in this model. The genetic algorithm uses probabilistic examination of promising areas of the parameter space. It creates a population of solutions based on the reflectance model and then operates on the population to evolve the best solution by using selection, crossover and mutation operators on the population individuals. The implementation of the genetic algorithm method and the experimental results are also described (Authors)
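
    A toy sketch of the fitting idea: a population-based evolutionary search adjusts model parameters so that the modelled reflectance matches a measured spectrum in the least-squares sense. The "model" below is a deliberately simple placeholder, not the Forouhi-Bloomer dispersion relations used in the paper, and the measured data are synthetic.

      import numpy as np

      rng = np.random.default_rng(3)
      wl = np.linspace(400, 800, 81)                            # wavelength grid, nm

      def model_reflectance(params, wl):
          a, b, c = params                                      # hypothetical parameters
          return a + b * np.sin(2 * np.pi * wl / c) ** 2

      true_params = np.array([0.2, 0.1, 210.0])
      measured = model_reflectance(true_params, wl) + 0.002 * rng.standard_normal(wl.size)

      def cost(params):
          return ((model_reflectance(params, wl) - measured) ** 2).sum()

      lower, upper = np.array([0.0, 0.0, 100.0]), np.array([0.5, 0.5, 400.0])
      pop = rng.uniform(lower, upper, size=(40, 3))
      for _ in range(200):
          fitness = np.array([cost(p) for p in pop])
          parents = pop[np.argsort(fitness)[:20]]               # truncation selection
          mates = parents[rng.permutation(20)]
          children = 0.5 * (parents + mates)                    # intermediate recombination
          children += rng.normal(0.0, 0.02, children.shape) * (upper - lower)   # mutation
          pop = np.clip(np.vstack([parents, children]), lower, upper)

      best = pop[np.argmin([cost(p) for p in pop])]
      print("best-fit parameters:", np.round(best, 3))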

  3. Algorithmic Approach to Abstracting Linear Systems by Timed Automata

    DEFF Research Database (Denmark)

    Sloth, Christoffer; Wisniewski, Rafael

    2011-01-01

    This paper proposes an LMI-based algorithm for abstracting dynamical systems by timed automata, which enables automatic formal verification of linear systems. The proposed abstraction is based on partitioning the state space of the system using positive invariant sets, generated by Lyapunov...... functions. This partitioning ensures that the vector field of the dynamical system is transversal to all facets of the cells, which induces some desirable properties of the abstraction. The algorithm is based on identifying intersections of level sets of quadratic Lyapunov functions, and determining...

  4. Development of an expert system for tsunami warning: a unit source approach

    International Nuclear Information System (INIS)

    Roshan, A.D.; Pisharady, Ajai S.; Bishnoi, L.R.; Shah, Meet

    2015-01-01

    The coastal region of India has been experiencing tsunamis since historical times. Many nuclear facilities, including nuclear power plants (NPPs), located along the coast are thus exposed to tsunami hazards. For the safety of these facilities as well as the safety of the citizens, it is necessary to predict the possibility of occurrence of a tsunami for a recorded earthquake event and to evaluate the tsunami hazard posed by the earthquake. To address these concerns, this work aims to design an expert system for tsunami warning for the Indian coast, with emphasis on the evaluation of tsunami heights and arrival times at various nuclear facility sites. The expert system identifies whether or not an event is tsunamigenic based on earthquake data inputs. Rupture parameters are worked out for the event, and unit tsunami source estimations, which are available as a precomputed database, are combined appropriately to estimate the wave heights and times of arrival at desired locations along the coast. The system also predicts tsunami wave heights at pre-defined locations such as nuclear power plant and other nuclear facility sites. The time of arrival of the first wave along the Indian coast is also evaluated
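
    A schematic of the unit-source combination step described above: precomputed wave-height time series at a coastal site, one per unit source, are scaled by weights derived from the estimated rupture and summed linearly. All waveforms and weights below are synthetic.

      import numpy as np

      t = np.linspace(0, 3 * 3600, 500)                     # 3 hours after the earthquake, seconds

      def unit_waveform(arrival_s, amp_m):
          """Synthetic stand-in for a precomputed unit-source response at the site."""
          return amp_m * np.exp(-((t - arrival_s) / 600.0) ** 2)

      database = [unit_waveform(3600, 0.30),                # unit source 1
                  unit_waveform(4200, 0.20),                # unit source 2
                  unit_waveform(4800, 0.10)]                # unit source 3

      slip_weights = np.array([2.5, 1.0, 0.0])              # from the event's rupture parameters
      eta = sum(w * uw for w, uw in zip(slip_weights, database))

      print(f"max wave height ~ {eta.max():.2f} m, "
            f"arriving ~ {t[eta.argmax()] / 60:.0f} min after the earthquake")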

  5. Alara and countermeasures: the approach proposed by the article 31 group of experts

    International Nuclear Information System (INIS)

    Kaul, A.

    1989-01-01

    Based upon ICRP Publication 40 of 1984 and the Radiological Protection Criteria for Controlling Doses to the Public in the Event of Accidental Releases of Radioactive Material of the Commission of the European Communities of 1982, the Group of Experts according to Article 31 has derived Reference Levels for the activity of foodstuffs. Three categories of radionuclides have been established - radioisotopes of iodine and strontium, alpha-emitting radioisotopes of plutonium and transplutonium elements, and radionuclides with half-lives longer than 10 days - as well as food groups such as dairy produce, other major foodstuffs, drinking water and beverages (liquid foodstuffs), and baby foods ready for consumption. The values of the Derived Reference Level of activity have been calculated on the basis of a lower limit of 5 mSv committed effective dose equivalent or 50 mSv committed organ dose equivalent, age-dependent yearly food consumption rates and dose factors. The relative contamination of a foodstuff was assumed to be 10% of the full value of the Derived Reference Level of the activity for the whole of one year. The results of the calculations of the Group of Experts are compared to the maximum permitted levels according to the Council Regulation (EEC) of Dec. 22, 1987
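
    A worked example of the kind of calculation described above, with invented numbers: a derived reference level of activity concentration follows from the dose limit divided by the product of the annual intake and the ingestion dose coefficient (the assumed 10% contaminated fraction of the diet is omitted here for simplicity).

      dose_limit_sv = 5e-3            # 5 mSv committed effective dose equivalent
      consumption_kg_per_year = 150   # hypothetical yearly intake of the food group
      dose_coeff_sv_per_bq = 1.3e-8   # hypothetical ingestion dose coefficient

      drl_bq_per_kg = dose_limit_sv / (consumption_kg_per_year * dose_coeff_sv_per_bq)
      print(f"derived reference level ~ {drl_bq_per_kg:.0f} Bq/kg")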

  6. DG Allocation Based on Reliability, Losses and Voltage Sag Considerations: an expert system approach

    Directory of Open Access Journals (Sweden)

    Sahar Abdel Moneim Moussa

    2017-03-01

    Full Text Available Expert Systems (ES), as a branch of Artificial Intelligence (AI) methodology, can potentially help in solving complicated power system problems. This may be a more appropriate methodology than conventional optimization techniques when a contradiction between objectives appears in reaching the optimum solution. When such a contradiction hinders reaching the required system operation through the application of traditional methods, ES can help. In this paper, a knowledge-based ES technique is proposed to reach a near-optimum solution, which is further directed to the optimum solution through a particle swarm optimization (PSO) technique. This idea is known as a Hybrid Expert System (HES). The proposed idea is used to obtain the optimum allocation of a number of distributed generation (DG) units on Distribution System (DS) busbars, taking into consideration three issues: reliability, voltage sag, and line losses. Optimality is assessed on an economic basis by calculating the monetary benefits (or losses) resulting from DG addition considering the three aforementioned issues. The effectiveness of the proposed technique is ascertained through an example.

  7. Experts' Perspectives Toward a Population Health Approach for Children With Medical Complexity.

    Science.gov (United States)

    Barnert, Elizabeth S; Coller, Ryan J; Nelson, Bergen B; Thompson, Lindsey R; Chan, Vincent; Padilla, Cesar; Klitzner, Thomas S; Szilagyi, Moira; Chung, Paul J

    2017-08-01

    Because children with medical complexity (CMC) display very different health trajectories, needs, and resource utilization than other children, it is unclear how well traditional conceptions of population health apply to CMC. We sought to identify key health outcome domains for CMC as a step toward determining core health metrics for this distinct population of children. We conducted and analyzed interviews with 23 diverse national experts on CMC to better understand population health for CMC. Interviewees included child and family advocates, health and social service providers, and research, health systems, and policy leaders. We performed thematic content analyses to identify emergent themes regarding population health for CMC. Overall, interviewees conveyed that defining and measuring population health for CMC is an achievable, worthwhile goal. Qualitative themes from interviews included: 1) CMC share unifying characteristics that could serve as the basis for population health outcomes; 2) optimal health for CMC is child specific and dynamic; 3) health of CMC is intertwined with health of families; 4) social determinants of health are especially important for CMC; and 5) measuring population health for CMC faces serious conceptual and logistical challenges. Experts have taken initial steps in defining the population health of CMC. Population health for CMC involves a dynamic concept of health that is attuned to individual, health-related goals for each child. We propose a framework that can guide the identification and development of population health metrics for CMC. Copyright © 2017 Academic Pediatric Association. Published by Elsevier Inc. All rights reserved.

  8. Algorithms

    Indian Academy of Sciences (India)

    ticians but also forms the foundation of computer science. Two ... with methods of developing algorithms for solving a variety of problems but ... applications of computers in science and engineer- ... numerical calculus are as important. We will ...

  9. A hierarchical approach to reducing communication in parallel graph algorithms

    KAUST Repository

    Harshvardhan,; Amato, Nancy M.; Rauchwerger, Lawrence

    2015-01-01

    This is exacerbated in scale-free networks, such as social and web graphs, which contain hub vertices that have large degrees and therefore send a large number of messages over the network. Furthermore, many graph algorithms and computations send the same data to each

  10. Iterative reconstruction of transcriptional regulatory networks: an algorithmic approach.

    Directory of Open Access Journals (Sweden)

    Christian L Barrett

    2006-05-01

    Full Text Available The number of complete, publicly available genome sequences is now greater than 200, and this number is expected to rapidly grow in the near future as metagenomic and environmental sequencing efforts escalate and the cost of sequencing drops. In order to make use of this data for understanding particular organisms and for discerning general principles about how organisms function, it will be necessary to reconstruct their various biochemical reaction networks. Principal among these will be transcriptional regulatory networks. Given the physical and logical complexity of these networks, the various sources of (often noisy) data that can be utilized for their elucidation, the monetary costs involved, and the huge number of potential experiments (approximately 10^12) that can be performed, experiment design algorithms will be necessary for synthesizing the various computational and experimental data to maximize the efficiency of regulatory network reconstruction. This paper presents an algorithm for experimental design to systematically and efficiently reconstruct transcriptional regulatory networks. It is meant to be applied iteratively in conjunction with an experimental laboratory component. The algorithm is presented here in the context of reconstructing transcriptional regulation for metabolism in Escherichia coli, and, through a retrospective analysis with previously performed experiments, we show that the produced experiment designs conform to how a human would design experiments. The algorithm is able to utilize probability estimates based on a wide range of computational and experimental sources to suggest experiments with the highest potential of discovering the greatest amount of new regulatory knowledge.

  11. An algorithmic approach to construct crystallizations of 3-manifolds ...

    Indian Academy of Sciences (India)

    Gagliardi introduced an algorithm to find a presentation of the fundamental group of a closed connected ...

  12. Modified multiblock partial least squares path modeling algorithm with backpropagation neural networks approach

    Science.gov (United States)

    Yuniarto, Budi; Kurniawan, Robert

    2017-03-01

    PLS Path Modeling (PLS-PM) is different from covariance-based SEM in that PLS-PM uses an approach based on variance or components; therefore, PLS-PM is also known as component-based SEM. Multiblock Partial Least Squares (MBPLS) is a method in PLS regression which can be used in PLS Path Modeling, where it is known as Multiblock PLS Path Modeling (MBPLS-PM). This method uses an iterative procedure in its algorithm. This research aims to modify MBPLS-PM with a Back Propagation Neural Network approach. The result is that the MBPLS-PM algorithm can be modified using the Back Propagation Neural Network approach to replace the iterative process in the backward and forward steps used to obtain the matrix t and the matrix u in the algorithm. By modifying the MBPLS-PM algorithm using the Back Propagation Neural Network approach, the model parameters obtained are not significantly different from the model parameters obtained by the original MBPLS-PM algorithm.

  13. A comparison of CLIPS- and LISP-based approaches to the development of a real-time expert system

    Science.gov (United States)

    Frainier, R.; Groleau, N.; Bhatnagar, R.; Lam, C.; Compton, M.; Colombano, S.; Lai, S.; Szolovits, P.; Manahan, M.; Statler, I.

    1990-01-01

    This paper describes an ongoing expert system development effort started in 1988 which is evaluating both CLIPS- and LISP-based approaches. The expert system is being developed to a project schedule and is planned for flight on Space Shuttle Mission SLS-2 in 1992. The expert system will help astronauts do the best possible science for a vestibular physiology experiment already scheduled for that mission. The system gathers and reduces data from the experiment, flags 'interesting' results, and proposes changes in the experiment both to exploit the in-flight observations and to stay within the time allowed by Mission Control for the experiment. These tasks must all be performed in real time. Two Apple Macintosh computers are used. The CLIPS- and LISP-based environments are layered above the Macintosh computer Operating System. The 'CLIPS-based' environment includes CLIPS and HyperCard. The LISP-based environment includes Common LISP, Parmenides (a frame system), and FRuleKit (a rule system). Important evaluation factors include ease of programming, performance against real-time requirements, usability by an astronaut, robustness, and ease of maintenance. Current results on the factors of ease of programming, performance against real-time requirements, and ease of maintenance are discussed.

  14. Genetic algorithm and neural network hybrid approach for job-shop scheduling

    OpenAIRE

    Zhao, Kai; Yang, Shengxiang; Wang, Dingwei

    1998-01-01

    Copyright @ 1998 ACTA Press This paper proposes a genetic algorithm (GA) and constraint satisfaction adaptive neural network (CSANN) hybrid approach for job-shop scheduling problems. In the hybrid approach, the GA is used to iteratively search for optimal solutions, while CSANN is used to obtain feasible solutions during the GA iterations. Simulations have shown the valid performance of the proposed hybrid approach for job-shop scheduling with respect to the quality of solutions and ...

  15. Retention or deletion of personality disorder diagnoses for DSM-5: an expert consensus approach.

    Science.gov (United States)

    Mullins-Sweatt, Stephanie N; Bernstein, David P; Widiger, Thomas A

    2012-10-01

    One of the official proposals for the fifth edition of the American Psychiatric Association's (APA) diagnostic manual (DSM-5) is to delete half of the existing personality disorders (i.e., dependent, histrionic, narcissistic, paranoid, and schizoid). Within the APA guidelines for DSM-5 decisions, it is stated that there should be expert consensus agreement for the deletion of a diagnostic category. Additionally, categories to be deleted should have low clinical utility and/or minimal evidence for validity. The current study surveyed members of two personality disorder associations (n = 146) with respect to the utility, validity, and status of each DSM-IV-TR personality disorder diagnosis. Findings indicated that the proposal to delete five of the personality disorders lacks consensus support within the personality disorder community.

  16. The Appropriateness of Renal Angioplasty. The ANPARIA Software: A Multidisciplinary Expert Panel Approach

    International Nuclear Information System (INIS)

    Gerbaud, Laurent; Manhes, Geraud; Debourse, Juliette; Gouby, Gerald; Glanddier, Phyllis-Yvonne; Vader, John-Paul; Boyer, Louis; Deteix, Patrice

    2008-01-01

    Percutaneous transluminal renal angioplasty (PTRA) is an invasive technique that is costly and involves the risk of complications and renal failure. The ability of PTRA to reduce the administration of antihypertensive drugs has been demonstrated. A potentially greater benefit, which nevertheless remains to be proven, is the deferral of the need for chronic dialysis. The aim of the study (ANPARIA) was to assess the appropriateness of PTRA in terms of its impact on the evolution of renal function. A standardized expert panel method was used to assess the appropriateness of medical treatment alone or medical treatment with revascularization in various clinical situations. The choice of revascularization by either PTRA or surgery was examined for each clinical situation. Analysis was based on a detailed literature review and on systematically elicited expert opinion, which were obtained during a two-round modified Delphi process. The study provides detailed responses on the appropriateness of PTRA for 1848 distinct clinical scenarios. Depending on the major clinical presentation, appropriateness of revascularization varied from 32% to 75% for individual scenarios (overall 48%). Uncertainty as to revascularization was 41% overall. When revascularization was appropriate, PTRA was favored over surgery in 94% of the scenarios, except in certain cases of aortic atheroma where surgery was the preferred choice. Kidney size >7 cm, absence of coexisting disease, acute renal failure, a high degree of stenosis (≥70%), and absence of multiple arteries were identified as predictive variables of favorable appropriateness ratings. Situations such as cardiac failure with pulmonary edema or acute thrombosis of the renal artery were defined as indications for PTRA. This study identified clinical situations in which PTRA or surgery are appropriate for renal artery disease. We built a decision tree which can be used via the Internet: the ANPARIA software (http://www

  17. A deep learning approach for predicting the quality of online health expert question-answering services.

    Science.gov (United States)

    Hu, Ze; Zhang, Zhan; Yang, Haiqin; Chen, Qing; Zuo, Decheng

    2017-07-01

    Recently, online health expert question-answering (HQA) services (systems) have attracted more and more health consumers to ask health-related questions anywhere at any time due to their convenience and effectiveness. However, the quality of answers in existing HQA systems varies in different situations. It is therefore important to provide effective tools to automatically determine the quality of the answers. Two main characteristics of HQA systems raise the difficulties of classification: (1) physicians' answers in an HQA system are usually written in short text, which yields the data sparsity issue; (2) HQA systems apply a quality control mechanism, which limits the wisdom of the crowd; important information, such as the best answer and the number of users' votes, is missing. To tackle these issues, we prepare the first HQA research data set, labeled by three medical experts over 90 days, and formulate the problem of predicting the quality of answers in the system as a classification task. We not only incorporate standard textual features of answers, but also introduce a set of unique non-textual features, i.e., the popularly used surface linguistic features and novel social features, from other modalities. A multimodal deep belief network (DBN)-based learning framework is then proposed to learn the high-level hidden semantic representations of answers from both textual features and non-textual features, while the learned joint representation is fed into popular classifiers to determine the quality of answers. Finally, we conduct extensive experiments to demonstrate the effectiveness of including the non-textual features and the proposed multimodal deep learning framework. Copyright © 2017 Elsevier Inc. All rights reserved.

  18. An approach of traffic signal control based on NLRSQP algorithm

    Science.gov (United States)

    Zou, Yuan-Yang; Hu, Yu

    2017-11-01

    This paper presents a linear program model with linear complementarity constraints (LPLCC) to solve the traffic signal optimization problem. The objective of the model is to minimize the weighted total queue length at the end of each cycle. Then, a combination algorithm based on nonlinear least-squares regression and sequential quadratic programming (NLRSQP) is proposed, by which a local optimal solution can be obtained. Furthermore, four numerical experiments are presented to study how to set the initial solution of the algorithm so that a better local optimal solution can be obtained more quickly. In particular, the results of the numerical experiments show that the model is effective for different arrival rates and weight factors, and that the lower the initial solution, the better the optimal solution that can be obtained.

  19. Partially Adaptive STAP Algorithm Approaches to functional MRI

    OpenAIRE

    Huang, Lejian; Thompson, Elizabeth A.; Schmithorst, Vincent; Holland, Scott K.; Talavage, Thomas M.

    2008-01-01

    In this work, the architectures of three partially adaptive STAP algorithms are introduced, one of which is explored in detail, that reduce dimensionality and improve tractability over fully adaptive STAP when used in construction of brain activation maps in fMRI. Computer simulations incorporating actual MRI noise and human data analysis indicate that element space partially adaptive STAP can attain close to the performance of fully adaptive STAP while significantly decreasing processing time and maximum memory requirements, and thus demonstrates potential in fMRI analysis.

  20. A Graph-Algorithmic Approach for the Study of Metastability in Markov Chains

    Science.gov (United States)

    Gan, Tingyue; Cameron, Maria

    2017-06-01

    Large continuous-time Markov chains with exponentially small transition rates arise in modeling complex systems in physics, chemistry, and biology. We propose a constructive graph-algorithmic approach to determine the sequence of critical timescales at which the qualitative behavior of a given Markov chain changes, and give an effective description of the dynamics on each of them. This approach is valid for both time-reversible and time-irreversible Markov processes, with or without symmetry. Central to this approach are two graph algorithms, Algorithm 1 and Algorithm 2, for obtaining the sequences of the critical timescales and the hierarchies of Typical Transition Graphs or T-graphs indicating the most likely transitions in the system without and with symmetry, respectively. The sequence of critical timescales includes the subsequence of the reciprocals of the real parts of eigenvalues. Under a certain assumption, we prove sharp asymptotic estimates for eigenvalues (including pre-factors) and show how one can extract them from the output of Algorithm 1. We discuss the relationship between Algorithms 1 and 2 and explain how one needs to interpret the output of Algorithm 1 if it is applied in the case with symmetry instead of Algorithm 2. Finally, we analyze an example motivated by R. D. Astumian's model of the dynamics of kinesin, a molecular motor, by means of Algorithm 2.

  1. Text mining and natural language processing approaches for automatic categorization of lay requests to web-based expert forums.

    Science.gov (United States)

    Himmel, Wolfgang; Reincke, Ulrich; Michelmann, Hans Wilhelm

    2009-07-22

    Both healthy and sick people increasingly use electronic media to obtain medical information and advice. For example, Internet users may send requests to Web-based expert forums, or so-called "ask the doctor" services. To automatically classify lay requests to an Internet medical expert forum using a combination of different text-mining strategies. We first manually classified a sample of 988 requests directed to an involuntary childlessness forum on the German website "Rund ums Baby" ("Everything about Babies") into one or more of 38 categories belonging to two dimensions ("subject matter" and "expectations"). After creating start and synonym lists, we calculated the average Cramer's V statistic for the association of each word with each category. We also used principal component analysis and singular value decomposition as further text-mining strategies. With these measures we trained regression models and determined, on the basis of the best regression models, for any request the probability of belonging to each of the 38 different categories, with a cutoff of 50%. Recall and precision of a test sample were calculated as a measure of quality for the automatic classification. According to the manual classification of 988 documents, 102 (10%) documents fell into the category "in vitro fertilization (IVF)," 81 (8%) into the category "ovulation," 79 (8%) into "cycle," and 57 (6%) into "semen analysis." These were the four most frequent categories in the subject matter dimension (consisting of 32 categories). The expectation dimension comprised six categories; we classified 533 documents (54%) as "general information" and 351 (36%) as a wish for "treatment recommendations." The generation of indicator variables based on the chi-square analysis and Cramer's V proved to be the best approach for automatic classification in about half of the categories. In combination with the two other approaches, 100% precision and 100% recall were realized in 18 (47%) out of the 38 categories.
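
    A minimal sketch of the word-category association step described above is given below. It is not the authors' code: the ten request/category indicators are invented, and scipy's chi-square test stands in for whatever statistics package was actually used. Cramér's V is computed from a 2x2 contingency table of word presence versus category membership.

```python
# Illustrative sketch: Cramer's V for the association between one word and one category.
import numpy as np
from scipy.stats import chi2_contingency

def cramers_v(word_present, in_category):
    """Cramer's V for two equal-length binary indicator sequences."""
    table = np.zeros((2, 2))
    for w, c in zip(word_present, in_category):
        table[int(w), int(c)] += 1
    chi2, _, _, _ = chi2_contingency(table, correction=False)
    n = table.sum()
    k = min(table.shape) - 1          # = 1 for a 2x2 table
    return float(np.sqrt(chi2 / (n * k)))

# Hypothetical indicators: does request i contain the word, and is it labeled "IVF"?
word_present = [1, 1, 0, 0, 1, 0, 1, 0, 0, 1]
in_category  = [1, 1, 0, 0, 1, 0, 0, 0, 0, 1]
print(f"Cramer's V = {cramers_v(word_present, in_category):.3f}")
```

    For a 2x2 table, Cramér's V coincides with the absolute phi coefficient; values near 1 indicate a word that is highly predictive of the category, which is why averaging V over all words and categories is a plausible way to build indicator variables.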

  2. Multilateral approaches to the Nuclear Fuel Cycle: Expert Group Report submitted to the Director General of the International Atomic Energy Agency

    International Nuclear Information System (INIS)

    2005-01-01

    The text of the report of the independent Expert Group on Multilateral Approaches to the Nuclear Fuel Cycle, commissioned by the Director General, is reproduced in this document for the information of Member States

  3. An Approach for State Observation in Dynamical Systems Based on the Twisting Algorithm

    DEFF Research Database (Denmark)

    Schmidt, Lasse; Andersen, Torben Ole; Pedersen, Henrik C.

    2013-01-01

    This paper discusses a novel approach for state estimation in dynamical systems, with special focus on hydraulic valve-cylinder drives. The proposed observer structure is based on the framework of the so-called twisting algorithm. This algorithm utilizes the sign of the state being the target...

  4. The Primordial Soup Algorithm : a systematic approach to the specification of parallel parsers

    NARCIS (Netherlands)

    Janssen, Wil; Janssen, W.P.M.; Poel, Mannes; Sikkel, Nicolaas; Zwiers, Jakob

    1992-01-01

    A general framework for parallel parsing is presented, which allows for a unified, systematic approach to parallel parsing. The Primordial Soup Algorithm creates trees by allowing partial parse trees to combine arbitrarily. By adding constraints to the general algorithm, a large class of parallel

  5. Appropriate Urban Livability Indicators for Metropolitan Johor, Malaysia via Expert-Stakeholder Approach: a Delphi technique

    Directory of Open Access Journals (Sweden)

    Dario Gallares Pampanga

    2015-11-01

    Full Text Available Metro Johor is one of the fast-emerging metropolitan urban centers whose current progress and spatial transformation have made it a key player in the economic growth of Malaysia. The recent creation of Iskandar Malaysia, an economic strategy which aims to make the region a global player and a potential destination for high-value investments, has certainly added social, environmental and economic stress to its urban citizens. This paper intends to develop urban livability indicators for Metropolitan Johor anchored on the changing urban complexion in the face of climate change and economic, governance, social and cultural dynamics, among others. The urban livability conundrum of Metro Johor illustrates that indicators are imperative, especially policy-based indicators, which would help scale up the desired progress according to urban livability metrics. The study involves an iterative three-round blind Delphi survey using a Likert scale's degree of agreement, and finally assigns weightings to each sub-indicator. Thus, with the involvement of expert-stakeholders constituting broad-sectoral community representation, a robust and appropriate urban livability index for Metro Johor was generated - a comprehensive framework and prospective benchmark for making timely policy decisions that would redound to the benefit of urban citizens and ensure a livable Metropolitan Johor.

  6. Algorithms

    Indian Academy of Sciences (India)

    algorithm design technique called 'divide-and-conquer'. One of ... Turtle graphics, September. 1996. 5. ... whole list named 'PO' is a pointer to the first element of the list; ..... Program for computing matrices X and Y and placing the result in C *).

  7. Algorithms

    Indian Academy of Sciences (India)

    algorithm that it is implicitly understood that we know how to generate the next natural ..... Explicit comparisons are made in line (1) where maximum and minimum is ... It can be shown that the function T(n) = 3/2n -2 is the solution to the above ...

  8. Partially Adaptive STAP Algorithm Approaches to functional MRI

    Science.gov (United States)

    Huang, Lejian; Thompson, Elizabeth A.; Schmithorst, Vincent; Holland, Scott K.; Talavage, Thomas M.

    2010-01-01

    In this work, the architectures of three partially adaptive STAP algorithms are introduced, one of which is explored in detail, that reduce dimensionality and improve tractability over fully adaptive STAP when used in construction of brain activation maps in fMRI. Computer simulations incorporating actual MRI noise and human data analysis indicate that element space partially adaptive STAP can attain close to the performance of fully adaptive STAP while significantly decreasing processing time and maximum memory requirements, and thus demonstrates potential in fMRI analysis. PMID:19272913

  9. A New Approach to Tuning Heuristic Parameters of Genetic Algorithms

    Czech Academy of Sciences Publication Activity Database

    Holeňa, Martin

    2006-01-01

    Vol. 3, No. 3 (2006), pp. 562-569. ISSN 1790-0832. [AIKED'06. WSEAS International Conference on Artificial Intelligence, Knowledge Engineering and Data Bases. Madrid, 15.02.2006-17.02.2006] R&D Projects: GA ČR(CZ) GA201/05/0325; GA ČR(CZ) GA201/05/0557 Institutional research plan: CEZ:AV0Z10300504 Keywords: evolutionary optimization * genetic algorithms * heuristic parameters * parameter tuning * artificial neural networks * convergence speed * population diversity Subject RIV: IN - Informatics, Computer Science

  10. An approach to combining heuristic and qualitative reasoning in an expert system

    Science.gov (United States)

    Jiang, Wei-Si; Han, Chia Yung; Tsai, Lian Cheng; Wee, William G.

    1988-01-01

    An approach to combining the heuristic reasoning from shallow knowledge and the qualitative reasoning from deep knowledge is described. The shallow knowledge is represented in production rules and is under the direct control of the inference engine. The deep knowledge is represented in frames, which may be put in a relational database management system. This approach takes advantage of both reasoning schemes and results in improved efficiency as well as expanded problem-solving ability.

  11. A Comparative Study on Sugeno Integral Approach to Aggregation of Experts' Opinion

    International Nuclear Information System (INIS)

    Kim, Seong Ho; Kim, Kil Yoo; Kim, Tae Woon

    2007-01-01

    A multicriteria decision-making (MCDM) problem of preference ranking of various alternatives is common in science and engineering fields. Usually, the MCDM problem is characterized in terms of two factors: the relative importance of each evaluation criterion and the appropriateness of each alternative. The ranking is determined by a relative degree of appropriateness of decision alternatives. In reality, there are different grades of interaction among decision criteria. One of the well-known approaches to aggregation of those two factors is the weighted arithmetic mean (WAM) approach. Here, importance weights for criteria are viewed as probability measures. The weights are linearly aggregated with appropriateness values. In the present work, the main objective is to study an aggregation model with various grades of interactions among the decision elements. The successful applications of fuzzy integral aggregation operators to subjective MCDM problems have been motivating this work. On the basis of λ-fuzzy measures and the Sugeno integral (SI), the SI aggregation approach is proposed. Here, interaction among criteria is dealt with via λ-fuzzy measures. Aggregation of these measures and appropriateness values is implemented, in particular, by the Sugeno integral, one of the fuzzy integrals. Aggregated values obtained by the SI approach are viewed as reflecting a decision maker's pessimistic (or conservative) attitude towards information aggregation, compared to the WAM approach. Firstly, the concepts of the λ-fuzzy measure and the Sugeno fuzzy integral are introduced. Then, as an application of the SI approach, an illustrative example is given
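
    As a rough illustration of the aggregation scheme just described (not the paper's implementation), the sketch below builds a λ-fuzzy measure from invented criterion importance densities, solves for λ numerically, and aggregates one alternative's appropriateness scores with the Sugeno integral.

```python
# Illustrative sketch: Sugeno integral over a lambda-fuzzy measure (made-up numbers).
import numpy as np
from scipy.optimize import brentq

def solve_lambda(densities):
    """Solve prod(1 + lam*g_i) = 1 + lam for the lambda-fuzzy measure parameter."""
    f = lambda lam: np.prod([1.0 + lam * g for g in densities]) - (1.0 + lam)
    s = sum(densities)
    if abs(s - 1.0) < 1e-12:
        return 0.0                                  # densities already additive
    return brentq(f, -0.999999, -1e-9) if s > 1 else brentq(f, 1e-9, 1e6)

def lambda_measure(subset, densities, lam):
    """g_lambda of a subset of criterion indices, built up from the densities."""
    g = 0.0
    for i in subset:
        g = g + densities[i] + lam * g * densities[i]
    return g

def sugeno_integral(scores, densities):
    lam = solve_lambda(densities)
    order = np.argsort(scores)                      # ascending appropriateness scores
    best = 0.0
    for k, i in enumerate(order):
        upper = order[k:]                           # criteria whose score >= scores[i]
        best = max(best, min(scores[i], lambda_measure(upper, densities, lam)))
    return best

scores    = [0.7, 0.4, 0.9]    # appropriateness of one alternative under each criterion
densities = [0.3, 0.4, 0.2]    # importance densities g_i (sum != 1, so lambda != 0)
print(f"Sugeno integral = {sugeno_integral(scores, densities):.3f}")
```

    Because the Sugeno integral combines min and max operations rather than sums, the aggregated value never exceeds the largest single score, which is one way to read the "pessimistic" behaviour mentioned in the abstract.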

  12. Algorithms

    Indian Academy of Sciences (India)

    will become clear in the next article when we discuss a simple logo like programming language. ... Rod B may be used as an auxiliary store. The problem is to find an algorithm which performs this task. ... No disks are moved from A to Busing C as auxiliary rod. • move _disk (A, C);. (No + l)th disk is moved from A to C directly ...

  13. An Expert System-based Context-Aware Ubiquitous Learning Approach for Conducting Science Learning Activities

    Science.gov (United States)

    Wu, Po-Han; Hwang, Gwo-Jen; Tsai, Wen-Hung

    2013-01-01

    Context-aware ubiquitous learning has been recognized as being a promising approach that enables students to interact with real-world learning targets with supports from the digital world. Several researchers have indicated the importance of providing learning guidance or hints to individual students during the context-aware ubiquitous learning…

  14. A New Optimization Approach for Maximizing the Photovoltaic Panel Power Based on Genetic Algorithm and Lagrange Multiplier Algorithm

    Directory of Open Access Journals (Sweden)

    Mahdi M. M. El-Arini

    2013-01-01

    Full Text Available In recent years, solar energy has become one of the most important alternative sources of electric energy, so it is important to operate a photovoltaic (PV) panel at its optimal point to obtain the maximum possible efficiency. This paper presents a new optimization approach to maximize the electrical power of a PV panel. The technique is based on an objective function that represents the output power of the PV panel, together with equality and inequality constraints. First, the dummy variables that affect the output power are classified into two categories: dependent and independent. The proposed approach is a multistage one: the genetic algorithm (GA) is used to obtain the best initial population near the optimal solution, this initial population is fed to the Lagrange multiplier algorithm (LM), and a comparison between the two algorithms, GA and LM, is then performed. The proposed technique is applied to solar radiation measured at Helwan city, Egypt, at latitude 29.87°. The results showed that the proposed technique is applicable.

  15. Building dialogue POMDPs from expert dialogues an end-to-end approach

    CERN Document Server

    Chinaei, Hamidreza

    2016-01-01

    This book discusses the Partially Observable Markov Decision Process (POMDP) framework applied in dialogue systems. It presents POMDP as a formal framework to represent uncertainty explicitly while supporting automated policy solving. The authors propose and implement an end-to-end learning approach for dialogue POMDP model components. Starting from scratch, they present the state, the transition model, the observation model and then finally the reward model from unannotated and noisy dialogues. These altogether form a significant set of contributions that can potentially inspire substantial further work. This concise manuscript is written in a simple language, full of illustrative examples, figures, and tables. It provides insights on building dialogue systems to be applied in real domains, illustrates learning dialogue POMDP model components from unannotated dialogues in a concise format, and introduces an end-to-end approach that makes use of unannotated and noisy dialogue for learning each component of dialogue POMDPs.

  16. A Computerised Business Ethics Expert System -A new approach to improving the ethical quality of business decision-making

    Directory of Open Access Journals (Sweden)

    Bernie Brenner

    2007-06-01

    Full Text Available Abstract Where unethical business decision-making arises from failures of ethical perception, there is an important role for ethical training and decision-making tools. These may help business people to consider all relevant issues when assessing the ethical status of potential decisions. Ethical training programmes give business people a basic understanding of the principles which underlie ethical judgements and equip them with many of the necessary skills for dealing with the ethical dilemmas which they face in their jobs. Similarly, ethical decision-making tools may guide managers through the various ethical considerations which are relevant to business decision-making and help them to develop their ethical-perceptual skills. Furthermore, by establishing and reinforcing good ethical decision-making practices, training programmes and decision-making tools may also reduce the incidence of self-consciously unethical decision-making. A new approach to improving the ethical quality of business decision-making by the use of computerized business ethics expert systems is proposed. These systems have the potential to guide business people through a process of ethical evaluation while simultaneously fulfilling an educational role, thus providing many of the benefits of both training programmes and decision-making tools. While the prospect of a computer system which could simply make ethical judgements for business people is both unrealistic and undesirable, a system which leads human decision-makers through a structured assessment process has the potential for genuine benefits. Keywords: Expert Systems, Ethical Decision Making

  17. A Genetic-Algorithms-Based Approach for Programming Linear and Quadratic Optimization Problems with Uncertainty

    Directory of Open Access Journals (Sweden)

    Weihua Jin

    2013-01-01

    Full Text Available This paper proposes a genetic-algorithms-based approach as an all-purpose problem-solving method for operation programming problems under uncertainty. The proposed method was applied for management of a municipal solid waste treatment system. Compared to the traditional interactive binary analysis, this approach has fewer limitations and is able to reduce the complexity in solving the inexact linear programming problems and inexact quadratic programming problems. The implementation of this approach was performed using the Genetic Algorithm Solver of MATLAB (trademark of MathWorks). The paper explains the genetic-algorithms-based method and presents details on the computation procedures for each type of inexact operation programming problems. A comparison of the results generated by the proposed method based on genetic algorithms with those produced by the traditional interactive binary analysis method is also presented.

  18. Translating U-500R Randomized Clinical Trial Evidence to the Practice Setting: A Diabetes Educator/Expert Prescriber Team Approach.

    Science.gov (United States)

    Bergen, Paula M; Kruger, Davida F; Taylor, April D; Eid, Wael E; Bhan, Arti; Jackson, Jeffrey A

    2017-06-01

    Purpose The purpose of this article is to provide recommendations to the diabetes educator/expert prescriber team for the use of human regular U-500 insulin (U-500R) in patients with severely insulin-resistant type 2 diabetes, including its initiation and titration, by utilizing dosing charts and teaching materials translated from a recent U-500R clinical trial. Conclusions Clinically relevant recommendations and teaching materials for the optimal use and management of U-500R in clinical practice are provided based on the efficacy and safety results of and lessons learned from the U-500R clinical trial by Hood et al, current standards of practice, and the authors' clinical expertise. This trial was the first robustly powered, randomized, titration-to-target trial to compare twice-daily and three-times-daily U-500R dosing regimens. Modifications were made to the initiation and titration dosing algorithms used in this trial to simplify dosing strategies for the clinical setting and align with current glycemic targets recommended by the American Diabetes Association. Leveraging the expertise, resources, and patient interactions of the diabetes educator who can provide diabetes self-management education and support in collaboration with the multidisciplinary diabetes team is strongly recommended to ensure patients treated with U-500R receive the timely and comprehensive care required to safely and effectively use this highly concentrated insulin.

  19. Speed Bump Detection Using Accelerometric Features: A Genetic Algorithm Approach.

    Science.gov (United States)

    Celaya-Padilla, Jose M; Galván-Tejada, Carlos E; López-Monteagudo, F E; Alonso-González, O; Moreno-Báez, Arturo; Martínez-Torteya, Antonio; Galván-Tejada, Jorge I; Arceo-Olague, Jose G; Luna-García, Huizilopoztli; Gamboa-Rosales, Hamurabi

    2018-02-03

    Among the current challenges of the Smart City, traffic management and maintenance are of utmost importance. Road surface monitoring is currently performed by humans, but the road surface condition is one of the main indicators of road quality, and it may drastically affect fuel consumption and the safety of both drivers and pedestrians. Abnormalities in the road, such as manholes and potholes, can cause accidents when not identified by the drivers. Furthermore, human-induced abnormalities, such as speed bumps, could also cause accidents. In addition, while said obstacles ought to be signalized according to specific road regulation, they are not always correctly labeled. Therefore, we developed a novel method for the detection of road abnormalities (i.e., speed bumps). This method makes use of a gyro, an accelerometer, and a GPS sensor mounted in a car. After having the vehicle cruise through several streets, data is retrieved from the sensors. Then, using a cross-validation strategy, a genetic algorithm is used to find a logistic model that accurately detects road abnormalities. The proposed model had an accuracy of 0.9714 in a blind evaluation, with a false positive rate smaller than 0.018, and an area under the receiver operating characteristic curve of 0.9784. This methodology has the potential to detect speed bumps in quasi real-time conditions, and can be used to construct a real-time surface monitoring system.

  20. Effective and efficient optics inspection approach using machine learning algorithms

    International Nuclear Information System (INIS)

    Abdulla, G.; Kegelmeyer, L.; Liao, Z.; Carr, W.

    2010-01-01

    The Final Optics Damage Inspection (FODI) system automatically acquires images of the final optics at the National Ignition Facility (NIF), which are then analyzed by the Optics Inspection (OI) system. During each inspection cycle up to 1000 images acquired by FODI are examined by OI to identify and track damage sites on the optics. The process of tracking growing damage sites on the surface of an optic can be made more effective by identifying and removing signals associated with debris or reflections. The manual process to filter these false sites is daunting and time consuming. In this paper we discuss the use of machine learning tools and data mining techniques to help with this task. We describe the process to prepare a data set that can be used for training and identifying hardware reflections in the image data. In order to collect training data, the images are first automatically acquired and analyzed with existing software and then relevant features such as spatial, physical and luminosity measures are extracted for each site. A subset of these sites is 'truthed' or manually assigned a class to create training data. A supervised classification algorithm is used to test if the features can predict the class membership of new sites. A suite of self-configuring machine learning tools called 'Avatar Tools' is applied to classify all sites. To verify, we used 10-fold cross-validation and found the accuracy was above 99%. This substantially reduces the number of false alarms that would otherwise be sent for more extensive investigation.
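
    The verification step can be sketched roughly as follows. This is an assumption-laden illustration, not the facility's code: a synthetic feature table and scikit-learn's random forest stand in for the real FODI/OI features and the 'Avatar Tools' suite, and 10-fold cross-validation is used as described.

```python
# Illustrative sketch: supervised classification of inspection sites with 10-fold CV.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
# Hypothetical per-site features: [size_px, mean_luminosity, eccentricity]
X = rng.normal(size=(500, 3))
y = (X[:, 1] + 0.5 * X[:, 2] > 0).astype(int)   # 1 = hardware reflection, 0 = damage site

clf = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(clf, X, y, cv=10)       # 10-fold cross-validation
print(f"mean CV accuracy: {scores.mean():.3f}")
```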

  1. CLASSIFICATION ALGORITHMS FOR BIG DATA ANALYSIS, A MAP REDUCE APPROACH

    Directory of Open Access Journals (Sweden)

    V. A. Ayma

    2015-03-01

    Full Text Available For many years, the scientific community has been concerned with how to increase the accuracy of different classification methods, and major achievements have been made so far. Besides this issue, the increasing amount of data generated every day by remote sensors raises more challenges to be overcome. In this work, a tool within the scope of the InterIMAGE Cloud Platform (ICP), which is an open-source, distributed framework for automatic image interpretation, is presented. The tool, named ICP: Data Mining Package, is able to perform supervised classification procedures on huge amounts of data, usually referred to as big data, on a distributed infrastructure using Hadoop MapReduce. The tool has four classification algorithms implemented, taken from WEKA's machine learning library, namely: Decision Trees, Naïve Bayes, Random Forest and Support Vector Machines (SVM). The results of an experimental analysis using an SVM classifier on data sets of different sizes for different cluster configurations demonstrate the potential of the tool, as well as aspects that affect its performance.

  2. A New Reversible Database Watermarking Approach with Firefly Optimization Algorithm

    Directory of Open Access Journals (Sweden)

    Mustafa Bilgehan Imamoglu

    2017-01-01

    Full Text Available Up-to-date information is crucial in many fields such as medicine, science, and the stock market, where data should be distributed to clients from a centralized database. Shared databases are usually stored in data centers where they are distributed over an insecure public access network, the Internet. Sharing may result in a number of problems such as unauthorized copies, alteration of data, and distribution to unauthorized people for reuse. Researchers have proposed using watermarking to prevent these problems and to claim digital rights. Many methods have been proposed recently to watermark databases to protect the digital rights of owners. In particular, optimization-based watermarking techniques draw attention, as they result in lower distortion and improved watermark capacity. Difference expansion watermarking (DEW) with the Firefly Algorithm (FFA), a bioinspired optimization technique, is proposed in this work to embed a watermark into relational databases. The best attribute values to yield lower distortion and increased watermark capacity are selected efficiently by the FFA. Experimental results indicate that FFA has reduced complexity and results in less distortion and improved watermark capacity compared to similar works reported in the literature.
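
    The reversible primitive behind DEW is difference expansion on a pair of integer values, sketched below. The firefly-based choice of which attribute values to expand (the part the FFA optimizes) is not reproduced; in practice that selection also has to avoid pairs whose expanded difference would overflow the attribute's range.

```python
# Illustrative sketch: plain difference expansion (DE) embedding on one integer pair.
def de_embed(x, y, bit):
    """Embed one bit into an integer pair (x, y) by expanding their difference."""
    l = (x + y) // 2            # integer average, kept invariant by the embedding
    h = x - y                   # difference
    h2 = 2 * h + bit            # expanded difference carries the bit in its LSB
    return l + (h2 + 1) // 2, l - h2 // 2

def de_extract(x2, y2):
    """Recover the embedded bit and the original pair from a watermarked pair."""
    l = (x2 + y2) // 2
    h2 = x2 - y2
    bit = h2 & 1                # LSB of the expanded difference
    h = h2 // 2                 # floor division undoes the expansion (h2 = 2*h + bit)
    return bit, l + (h + 1) // 2, l - h // 2

x, y = 117, 112
x2, y2 = de_embed(x, y, 1)
bit, xr, yr = de_extract(x2, y2)
assert (bit, xr, yr) == (1, x, y)   # the original values are recovered exactly
print("watermarked pair:", (x2, y2), "| recovered:", (bit, xr, yr))
```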

  3. Speed Bump Detection Using Accelerometric Features: A Genetic Algorithm Approach

    Directory of Open Access Journals (Sweden)

    Jose M. Celaya-Padilla

    2018-02-01

    Full Text Available Among the current challenges of the Smart City, traffic management and maintenance are of utmost importance. Road surface monitoring is currently performed by humans, but the road surface condition is one of the main indicators of road quality, and it may drastically affect fuel consumption and the safety of both drivers and pedestrians. Abnormalities in the road, such as manholes and potholes, can cause accidents when not identified by the drivers. Furthermore, human-induced abnormalities, such as speed bumps, could also cause accidents. In addition, while said obstacles ought to be signalized according to specific road regulation, they are not always correctly labeled. Therefore, we developed a novel method for the detection of road abnormalities (i.e., speed bumps). This method makes use of a gyro, an accelerometer, and a GPS sensor mounted in a car. After having the vehicle cruise through several streets, data is retrieved from the sensors. Then, using a cross-validation strategy, a genetic algorithm is used to find a logistic model that accurately detects road abnormalities. The proposed model had an accuracy of 0.9714 in a blind evaluation, with a false positive rate smaller than 0.018, and an area under the receiver operating characteristic curve of 0.9784. This methodology has the potential to detect speed bumps in quasi real-time conditions, and can be used to construct a real-time surface monitoring system.

  4. The "expert patient" approach for non-communicable disease management in low and middle income settings: When the reality confronts the rhetoric.

    Science.gov (United States)

    Xiao, Yue

    2015-09-01

    This paper seeks to explore the relevance of the Western "expert patient" rhetoric to the reality of non-communicable disease (NCD) control and management in low and middle income settings from a health sociological perspective. It first sets up a conceptual framework of the "expert patient", or patient self-management, approach, showing the rhetoric of the initiative in the developed countries. Then, by examining the situation of NCD control and management in low income settings, the paper tries to evaluate the possibilities of implementing the "expert patient" approach in these countries. Kober and Van Damme's study on the relevance of the "expert patient" for an HIV/AIDS program in low income settings is critically examined to show the relevance of the developed countries' rhetoric of the "expert patient" approach for the reality of developing countries. In addition, the MoPoTsyo diabetes peer educator program is analyzed to show the challenges faced by low income countries in implementing patient self-management programs. Finally, applications of the expert patient approach in China are discussed as well, to remind us of the possible difficulties in introducing it into rural settings.

  5. On the balanced blending of formally structured and simplified approaches for utilizing judgments of experts in the assessment of uncertain issues

    International Nuclear Information System (INIS)

    Ahn, Kwang Il; Yang, Joon Eon; Ha, Jae Joo

    2003-01-01

    Expert judgment is frequently employed in the search for solutions to various engineering and decision-making problems where relevant data are not sufficient or where there is little consensus as to the correct models to apply. When expert judgments are required to solve the underlying problem, our main concern is how to formally derive the experts' technical expertise and their personal degree of familiarity with the related questions. Formal methods for gathering judgments from experts and assessing the effects of the judgments on the results of the analysis have been developed in a variety of ways. The most important interest of such methods is to establish the robustness of the expert knowledge upon which the elicitation of judgments is based and to keep as effective a trace of the elicitation process as possible. While the resultant expert judgments can remain to a large extent substantiated with formal elicitation methods, their applicability is often limited by restrictions on available resources (e.g., time, budget, and number of qualified experts) as well as by the scope of the analysis. For this reason, many engineering and decision-making analyses have not always been performed in a formal/structured manner, but have rather relied on a pertinent transition from the formal process to a simplified approach. The purpose of this paper is (a) to address some insights into the balanced use of formally structured and simplified approaches for the explicit use of expert judgments under resource constraints and (b) to discuss related decision-theoretic issues

  6. Expert Programmer versus Parallelizing Compiler: A Comparative Study of Two Approaches for Distributed Shared Memory

    Directory of Open Access Journals (Sweden)

    M. F. P. O'Boyle

    1996-01-01

    Full Text Available This article critically examines current parallel programming practice and optimizing compiler development. The general strategies employed by compiler and programmer to optimize a Fortran program are described, and then illustrated for a specific case by applying them to a well-known scientific program, TRED2, using the KSR-1 as the target architecture. Extensive measurement is applied to the resulting versions of the program, which are compared with a version produced by a commercial optimizing compiler, KAP. The compiler strategy significantly outperforms KAP and does not fall far short of the performance achieved by the programmer. Following the experimental section each approach is critiqued by the other. Perceived flaws, advantages, and common ground are outlined, with an eye to improving both schemes.

  7. Automatic Algorithm Selection for Complex Simulation Problems

    CERN Document Server

    Ewald, Roland

    2012-01-01

    To select the most suitable simulation algorithm for a given task is often difficult. This is due to intricate interactions between model features, implementation details, and runtime environment, which may strongly affect the overall performance. An automated selection of simulation algorithms supports users in setting up simulation experiments without demanding expert knowledge on simulation. Roland Ewald analyzes and discusses existing approaches to solve the algorithm selection problem in the context of simulation. He introduces a framework for automatic simulation algorithm selection and

  8. Expert systems

    International Nuclear Information System (INIS)

    Haldy, P.A.

    1988-01-01

    The definitions of the terms 'artificial intelligence' and 'expert systems', the methodology, areas of employment and limits of expert systems are discussed. The operation of an expert system is described, especially the presentation and organization of knowledge as well as interference and control. Methods and tools for expert system development are presented and their application in nuclear energy are briefly addressed. 7 figs., 2 tabs., 6 refs

  9. Expert Systems

    OpenAIRE

    Lucas, P.J.F.

    2005-01-01

    Expert systems mimic the problem-solving activity of human experts in specialized domains by capturing and representing expert knowledge. Expert systems include a knowledge base, an inference engine that derives conclusions from the knowledge, and a user interface. Knowledge may be stored as if-then rules, or using other formalisms such as frames and predicate logic. Uncertain knowledge may be represented using certainty factors, Bayesian networks, Dempster-Shafer belief functions, or fuzzy sets.

  10. A Balancing Algorithm in Wireless Sensor Network Based on the Assistance of Approaching Nodes

    Directory of Open Access Journals (Sweden)

    Chengpei Tang

    2013-03-01

    Full Text Available A sensor node in a wireless sensor network is a micro-embedded system with limited memory, energy and communication capabilities. Some nodes will run out of energy and exit the network earlier than other nodes because of uneven energy consumption. This will lead to partial or complete paralysis of the whole wireless sensor network. A balancing algorithm based on the assistance of approaching nodes is proposed. Via set theory, nodes are divided into a neighbor node set and an approaching node set. Approaching nodes will help weaker nodes forward part of their messages to balance energy consumption. Simulation results have verified the rationality and feasibility of the balancing algorithm.

  11. Scheduling language and algorithm development study. Appendix: Study approach and activity summary

    Science.gov (United States)

    1974-01-01

    The approach and organization of the study to develop a high level computer programming language and a program library are presented. The algorithm and problem modeling analyses are summarized. The approach used to identify and specify the capabilities required in the basic language is described. Results of the analyses used to define specifications for the scheduling module library are presented.

  12. A Genetic Algorithm-based Antenna Selection Approach for Large-but-Finite MIMO Networks

    KAUST Repository

    Makki, Behrooz

    2016-12-29

    We study the performance of antenna selection-based multiple-input-multiple-output (MIMO) networks with a large but finite number of transmit antennas and receivers. Considering the continuous and bursty communication scenarios with different users’ data request probabilities, we develop an efficient antenna selection scheme using genetic algorithms (GA). As demonstrated, the proposed algorithm is generic in the sense that it can be used in the cases with different objective functions, precoding methods, levels of available channel state information and channel models. Our results show that the proposed GA-based algorithm reaches (almost) the same throughput as the exhaustive search-based optimal approach, with substantially less implementation complexity.
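
    A toy sketch of GA-based antenna selection follows. Everything in it is an assumption made for illustration rather than the authors' scheme: a Rayleigh channel matrix, a log-det capacity objective for the selected submatrix, and a plain GA with tournament selection, one-point crossover, bit-flip mutation and a repair step that keeps exactly L antennas selected.

```python
# Illustrative sketch: select L of N_TX transmit antennas with a simple GA.
import numpy as np

rng = np.random.default_rng(1)
N_TX, N_RX, L, SNR = 16, 4, 4, 10.0
H = (rng.normal(size=(N_RX, N_TX)) + 1j * rng.normal(size=(N_RX, N_TX))) / np.sqrt(2)

def capacity(mask):
    """MIMO capacity (bit/s/Hz) of the submatrix given by the selected antennas."""
    Hs = H[:, mask.astype(bool)]
    G = Hs @ Hs.conj().T
    return float(np.log2(np.linalg.det(np.eye(N_RX) + (SNR / L) * G).real))

def repair(mask):
    """Force exactly L selected antennas by randomly dropping or adding entries."""
    while mask.sum() > L:
        mask[rng.choice(np.flatnonzero(mask == 1))] = 0
    while mask.sum() < L:
        mask[rng.choice(np.flatnonzero(mask == 0))] = 1
    return mask

def ga_select(pop_size=40, gens=60, p_mut=0.05):
    pop = np.array([repair(rng.integers(0, 2, N_TX)) for _ in range(pop_size)])
    for _ in range(gens):
        fit = np.array([capacity(m) for m in pop])
        # binary tournament selection
        idx = [max(rng.integers(0, pop_size, 2), key=lambda i: fit[i]) for _ in range(pop_size)]
        parents = pop[idx]
        children = []
        # one-point crossover, bit-flip mutation, repair back to L antennas
        for a, b in zip(parents[0::2], parents[1::2]):
            cut = int(rng.integers(1, N_TX))
            for child in (np.r_[a[:cut], b[cut:]], np.r_[b[:cut], a[cut:]]):
                flip = rng.random(N_TX) < p_mut
                children.append(repair(np.where(flip, 1 - child, child)))
        pop = np.array(children)
    fit = np.array([capacity(m) for m in pop])
    return pop[fit.argmax()], float(fit.max())

mask, best = ga_select()
print("selected antennas:", np.flatnonzero(mask), f"capacity ~ {best:.2f} bit/s/Hz")
```

    At this problem size an exhaustive search over the C(16, 4) = 1820 subsets is still feasible, so the GA result can be checked directly against the optimum, mirroring the comparison reported in the abstract.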

  13. A Genetic Algorithm-based Antenna Selection Approach for Large-but-Finite MIMO Networks

    KAUST Repository

    Makki, Behrooz; Ide, Anatole; Svensson, Tommy; Eriksson, Thomas; Alouini, Mohamed-Slim

    2016-01-01

    We study the performance of antenna selection-based multiple-input-multiple-output (MIMO) networks with a large but finite number of transmit antennas and receivers. Considering the continuous and bursty communication scenarios with different users’ data request probabilities, we develop an efficient antenna selection scheme using genetic algorithms (GA). As demonstrated, the proposed algorithm is generic in the sense that it can be used in the cases with different objective functions, precoding methods, levels of available channel state information and channel models. Our results show that the proposed GA-based algorithm reaches (almost) the same throughput as the exhaustive search-based optimal approach, with substantially less implementation complexity.

  14. Stochastic approach for round-off error analysis in computing: application to signal processing algorithms

    International Nuclear Information System (INIS)

    Vignes, J.

    1986-01-01

    Any result of an algorithm provided by a computer always contains an error resulting from floating-point arithmetic round-off error propagation. Furthermore, signal processing algorithms are also generally performed with data containing errors. The permutation-perturbation method, also known under the name CESTAC (controle et estimation stochastique d'arrondi de calcul), is a very efficient practical method for evaluating these errors and consequently for estimating the exact significant decimal figures of any result of algorithms performed on a computer. The stochastic approach of this method, its probabilistic proof, and the perfect agreement between the theoretical and practical aspects are described in this paper [fr]
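
    The idea can be illustrated roughly as follows (this is not the CESTAC implementation, which perturbs the rounding of every elementary operation): the same ill-conditioned summation is repeated with a randomly permuted operation order and last-bit perturbations of the data, and the spread of the results gives a crude estimate of how many decimal digits of the result are significant.

```python
# Illustrative sketch: permutation + last-bit perturbation to estimate significant digits.
import math
import numpy as np

rng = np.random.default_rng(0)

def perturb_ulp(x):
    """Randomly move each value up or down by one unit in the last place (ulp)."""
    direction = rng.choice([-1.0, 1.0], size=x.shape)
    return np.nextafter(x, direction * np.inf)

def significant_digits(samples):
    """Crude estimate of the number of significant decimal digits shared by the samples."""
    m, s = np.mean(samples), np.std(samples, ddof=1)
    if s == 0.0:
        return 15.0                     # all runs agree to full double precision
    return max(0.0, math.log10(abs(m) / s))

# Ill-conditioned toy computation: summing values of very different magnitudes.
data = np.array([1e16, 1.0, -1e16, 1.0, 1e-8] * 1000)
samples = [float(np.sum(rng.permutation(perturb_ulp(data)))) for _ in range(8)]
print("result ~", np.mean(samples), "| estimated significant digits:",
      round(significant_digits(samples), 1))
```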

  15. A high-performance spatial database based approach for pathology imaging algorithm evaluation

    Directory of Open Access Journals (Sweden)

    Fusheng Wang

    2013-01-01

    Full Text Available Background: Algorithm evaluation provides a means to characterize variability across image analysis algorithms, validate algorithms by comparison with human annotations, combine results from multiple algorithms for performance improvement, and facilitate algorithm sensitivity studies. The sizes of images and image analysis results in pathology image analysis pose significant challenges in algorithm evaluation. We present an efficient parallel spatial database approach to model, normalize, manage, and query large volumes of analytical image result data. This provides an efficient platform for algorithm evaluation. Our experiments with a set of brain tumor images demonstrate the application, scalability, and effectiveness of the platform. Context: The paper describes an approach and platform for evaluation of pathology image analysis algorithms. The platform facilitates algorithm evaluation through a high-performance database built on the Pathology Analytic Imaging Standards (PAIS) data model. Aims: (1) Develop a framework to support algorithm evaluation by modeling and managing analytical results and human annotations from pathology images; (2) Create a robust data normalization tool for converting, validating, and fixing spatial data from algorithm or human annotations; (3) Develop a set of queries to support data sampling and result comparisons; (4) Achieve high performance computation capacity via a parallel data management infrastructure, parallel data loading and spatial indexing optimizations in this infrastructure. Materials and Methods: We have considered two scenarios for algorithm evaluation: (1) algorithm comparison where multiple result sets from different methods are compared and consolidated; and (2) algorithm validation where algorithm results are compared with human annotations. We have developed a spatial normalization toolkit to validate and normalize spatial boundaries produced by image analysis algorithms or human annotations. The

  16. A high-performance spatial database based approach for pathology imaging algorithm evaluation.

    Science.gov (United States)

    Wang, Fusheng; Kong, Jun; Gao, Jingjing; Cooper, Lee A D; Kurc, Tahsin; Zhou, Zhengwen; Adler, David; Vergara-Niedermayr, Cristobal; Katigbak, Bryan; Brat, Daniel J; Saltz, Joel H

    2013-01-01

    Algorithm evaluation provides a means to characterize variability across image analysis algorithms, validate algorithms by comparison with human annotations, combine results from multiple algorithms for performance improvement, and facilitate algorithm sensitivity studies. The sizes of images and image analysis results in pathology image analysis pose significant challenges in algorithm evaluation. We present an efficient parallel spatial database approach to model, normalize, manage, and query large volumes of analytical image result data. This provides an efficient platform for algorithm evaluation. Our experiments with a set of brain tumor images demonstrate the application, scalability, and effectiveness of the platform. The paper describes an approach and platform for evaluation of pathology image analysis algorithms. The platform facilitates algorithm evaluation through a high-performance database built on the Pathology Analytic Imaging Standards (PAIS) data model. (1) Develop a framework to support algorithm evaluation by modeling and managing analytical results and human annotations from pathology images; (2) Create a robust data normalization tool for converting, validating, and fixing spatial data from algorithm or human annotations; (3) Develop a set of queries to support data sampling and result comparisons; (4) Achieve high performance computation capacity via a parallel data management infrastructure, parallel data loading and spatial indexing optimizations in this infrastructure. We have considered two scenarios for algorithm evaluation: (1) algorithm comparison where multiple result sets from different methods are compared and consolidated; and (2) algorithm validation where algorithm results are compared with human annotations. We have developed a spatial normalization toolkit to validate and normalize spatial boundaries produced by image analysis algorithms or human annotations. The validated data were formatted based on the PAIS data model and

  17. Evolutionary algorithms approach for integrated bioenergy supply chains optimization

    International Nuclear Information System (INIS)

    Ayoub, Nasser; Elmoshi, Elsayed; Seki, Hiroya; Naka, Yuji

    2009-01-01

    In this paper, we propose an optimization model and solution approach for designing and evaluating an integrated system of bioenergy production supply chains (SC) at the local level. Designing SCs that simultaneously utilize a set of bio-resources together is the complicated task considered here. The complication arises from the different natures and sources of the bio-resources used in bioenergy production, i.e., wet or dry, agricultural or industrial, etc. Moreover, the different concerns that decision makers should take into account to overcome the trade-off anxieties of society and investors, i.e., social, environmental and economic factors, were considered through the options of multi-criteria optimization. A first part of this research was introduced in earlier work explaining the general Bioenergy Decision System gBEDS [Ayoub N, Martins R, Wang K, Seki H, Naka Y. Two levels decision system for efficient planning and implementation of bioenergy production. Energy Convers Manage 2007;48:709-23]. In this paper, a brief introduction to and emphasis on gBEDS are given; the optimization model is presented and followed by a case study on designing a supply chain of nine bio-resources at Iida city in the middle part of Japan.

  18. A risk-informed approach of quantification of epistemic uncertainty for the long-term radioactive waste disposal. Improving reliability of expert judgements with an advanced elicitation procedure

    International Nuclear Information System (INIS)

    Sugiyama, Daisuke; Chida, Taiji; Fujita, Tomonari; Tsukamoto, Masaki

    2011-01-01

    A methodology for quantifying epistemic uncertainty through expert judgement, based on the risk-informed approach, is developed to assess the inevitable uncertainty in the long-term safety assessment of radioactive waste disposal. The proposed method employs logic-tree techniques, by which options for models and/or scenarios are identified, and Evidential Support Logic (ESL), by which the possibility of each option is quantified. In this report, the effect of a feedback process of discussion between experts and the input of state-of-the-art knowledge is discussed in order to estimate the alteration of the distribution of expert judgements, which is one of the factors causing uncertainty. In a preliminary experiment quantifying the uncertainty of degradation of the engineering barrier materials in a tentative sub-surface disposal using the proposed methodology, the experts themselves modified questions appropriately to facilitate sound judgements and to correlate them clearly with scientific evidence. The result suggests that the method effectively improves the confidence of expert judgements. Also, the degree of consensus of expert judgement was somewhat improved in some cases, since scientific knowledge and expert judgement information from other fields became common understanding. It is suggested that the proposed method could facilitate consensus on uncertainty between interested persons. (author)

  19. A literature review of expert problem solving using analogy

    OpenAIRE

    Mair, C; Martincova, M; Shepperd, MJ

    2009-01-01

    We consider software project cost estimation from a problem solving perspective. Taking a cognitive psychological approach, we argue that the algorithmic basis for CBR tools is not representative of human problem solving and this mismatch could account for inconsistent results. We describe the fundamentals of problem solving, focusing on experts solving ill-defined problems. This is supplemented by a systematic literature review of empirical studies of expert problem solving of non-trivial problems.

  20. A new automatic algorithm for quantification of myocardial infarction imaged by late gadolinium enhancement cardiovascular magnetic resonance: experimental validation and comparison to expert delineations in multi-center, multi-vendor patient data.

    Science.gov (United States)

    Engblom, Henrik; Tufvesson, Jane; Jablonowski, Robert; Carlsson, Marcus; Aletras, Anthony H; Hoffmann, Pavel; Jacquier, Alexis; Kober, Frank; Metzler, Bernhard; Erlinge, David; Atar, Dan; Arheden, Håkan; Heiberg, Einar

    2016-05-04

    Late gadolinium enhancement (LGE) cardiovascular magnetic resonance (CMR) using magnitude inversion recovery (IR) or phase sensitive inversion recovery (PSIR) has become clinical standard for assessment of myocardial infarction (MI). However, there is no clinical standard for quantification of MI even though multiple methods have been proposed. Simple thresholds have yielded varying results and advanced algorithms have only been validated in single center studies. Therefore, the aim of this study was to develop an automatic algorithm for MI quantification in IR and PSIR LGE images and to validate the new algorithm experimentally and compare it to expert delineations in multi-center, multi-vendor patient data. The new automatic algorithm, EWA (Expectation Maximization, weighted intensity, a priori information), was implemented using an intensity threshold by Expectation Maximization (EM) and a weighted summation to account for partial volume effects. The EWA algorithm was validated in-vivo against triphenyltetrazolium-chloride (TTC) staining (n = 7 pigs with paired IR and PSIR images) and against ex-vivo high resolution T1-weighted images (n = 23 IR and n = 13 PSIR images). The EWA algorithm was also compared to expert delineation in 124 patients from multi-center, multi-vendor clinical trials 2-6 days following first time ST-elevation myocardial infarction (STEMI) treated with percutaneous coronary intervention (PCI) (n = 124 IR and n = 49 PSIR images). Infarct size by the EWA algorithm in vivo in pigs showed a bias to ex-vivo TTC of -1 ± 4%LVM (R = 0.84) in IR and -2 ± 3%LVM (R = 0.92) in PSIR images and a bias to ex-vivo T1-weighted images of 0 ± 4%LVM (R = 0.94) in IR and 0 ± 5%LVM (R = 0.79) in PSIR images. In multi-center patient studies, infarct size by the EWA algorithm showed a bias to expert delineation of -2 ± 6 %LVM (R = 0.81) in IR images (n = 124) and 0 ± 5%LVM (R = 0.89) in
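
    The two ingredients named above, an intensity threshold found by Expectation Maximization and a weighted summation for partial volume, can be sketched schematically as follows. The pixel intensities, the two-component 1-D Gaussian mixture, and the linear partial-volume weighting between the class means are all illustrative assumptions, not the published EWA implementation (which also uses a priori information not modeled here).

```python
# Schematic sketch: EM threshold on myocardial intensities + weighted infarct summation.
import numpy as np

def em_two_gaussians(x, iters=100):
    """Simple 1-D EM for a two-component Gaussian mixture; returns (means, sds, weights)."""
    mu = np.percentile(x, [10, 90]).astype(float)   # rough remote / infarct initialization
    sd = np.array([x.std(), x.std()])
    w = np.array([0.5, 0.5])
    for _ in range(iters):
        # E-step: responsibilities of each component for each sample
        pdf = np.exp(-0.5 * ((x[:, None] - mu) / sd) ** 2) / (sd * np.sqrt(2 * np.pi))
        r = w * pdf
        r /= r.sum(axis=1, keepdims=True)
        # M-step: update mixture weights, means and standard deviations
        nk = r.sum(axis=0)
        w = nk / len(x)
        mu = (r * x[:, None]).sum(axis=0) / nk
        sd = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / nk)
    return mu, sd, w

rng = np.random.default_rng(0)
# Hypothetical myocardial pixel intensities: remote (dark) and infarcted (bright) tissue.
intensities = np.concatenate([rng.normal(100, 15, 1500), rng.normal(300, 30, 400)])

mu, sd, w = em_two_gaussians(intensities)
remote, infarct = np.sort(mu)
# Weighted summation: fractional membership between the two class means (partial volume).
weights = np.clip((intensities - remote) / (infarct - remote), 0.0, 1.0)
print(f"infarct size ~ {100 * weights.sum() / len(intensities):.1f} % of myocardium")
```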

  1. A decision-algorithm defining the rehabilitation approach: 'Facial oral tract therapy'

    DEFF Research Database (Denmark)

    Hansen, Trine S; Jakobsen, Daniela

    2010-01-01

    was developed by a research occupational therapist and an F.O.T.T. senior instructor. We used an inductive approach combining existing knowledge from: F.O.T.T. instructors, therapists trained in using the F.O.T.T. approach, and existing literature. A group of F.O.T.T. instructors and the originator...... and eating; oral hygiene; breathing, voice, and speech articulation; facial expression, giving guidance on interventions. The algorithm outlines all important components in the treatment that the therapist should decide to use or not to use in the intervention. The algorithm is supported by a manual...

  2. Expert Approaches to Analysis

    Science.gov (United States)

    1999-03-01

    analysis that takes place in anatomy or circuit diagrams. The goal is to break an entity down into a set of non- overlapping parts, and to specify the...components. For example, one subject in predicting the fate of different species, broke them into three types: animals that humans would save (e.g., gorillas

  3. Routing and scheduling of hazardous materials shipments: algorithmic approaches to managing spent nuclear fuel transport

    International Nuclear Information System (INIS)

    Cox, R.G.

    1984-01-01

    Much controversy surrounds government regulation of routing and scheduling of Hazardous Materials Transportation (HMT). Increases in operating costs must be balanced against expected benefits from local HMT bans and curfews when promulgating or preempting HMT regulations. Algorithmic approaches for evaluating HMT routing and scheduling regulatory policy are described. A review of current US HMT regulatory policy is presented to provide a context for the analysis. Next, a multiobjective shortest path algorithm to find the set of efficient routes under conflicting objectives is presented. This algorithm generates all efficient routes under any partial ordering in a single pass through the network. Also, scheduling algorithms are presented to estimate the travel time delay due to HMT curfews along a route. Algorithms are presented assuming either deterministic or stochastic travel times between curfew cities and also possible rerouting to avoid such cities. These algorithms are applied to the case study of US highway transport of spent nuclear fuel from reactors to permanent repositories. Two data sets were used. One data set included the US Interstate Highway System (IHS) network with reactor locations, possible repository sites, and 150 heavily populated areas (HPAs). The other data set contained estimates of the population residing with 0.5 miles of the IHS and the Eastern US. Curfew delay is dramatically reduced by optimally scheduling departure times unless inter-HPA travel times are highly uncertain. Rerouting shipments to avoid HPAs is a less efficient approach to reducing delay

  4. A strategic flight conflict avoidance approach based on a memetic algorithm

    Directory of Open Access Journals (Sweden)

    Guan Xiangmin

    2014-02-01

    Full Text Available Conflict avoidance (CA plays a crucial role in guaranteeing the airspace safety. The current approaches, mostly focusing on a short-term situation which eliminates conflicts via local adjustment, cannot provide a global solution. Recently, long-term conflict avoidance approaches, which are proposed to provide solutions via strategically planning traffic flow from a global view, have attracted more attentions. With consideration of the situation in China, there are thousands of flights per day and the air route network is large and complex, which makes the long-term problem to be a large-scale combinatorial optimization problem with complex constraints. To minimize the risk of premature convergence being faced by current approaches and obtain higher quality solutions, in this work, we present an effective strategic framework based on a memetic algorithm (MA, which can markedly improve search capability via a combination of population-based global search and local improvements made by individuals. In addition, a specially designed local search operator and an adaptive local search frequency strategy are proposed to improve the solution quality. Furthermore, a fast genetic algorithm (GA is presented as the global optimization method. Empirical studies using real traffic data of the Chinese air route network and daily flight plans show that our approach outperformed the existing approaches including the GA based approach and the cooperative coevolution based approach as well as some well-known memetic algorithm based approaches.

  5. SLIM-MAUD: an approach to assessing human error probabilities using structured expert judgment. Volume II. Detailed analysis of the technical issues

    International Nuclear Information System (INIS)

    Embrey, D.E.; Humphreys, P.; Rosa, E.A.; Kirwan, B.; Rea, K.

    1984-07-01

    This two-volume report presents the procedures and analyses performed in developing an approach for structuring expert judgments to estimate human error probabilities. Volume I presents an overview of work performed in developing the approach: SLIM-MAUD (Success Likelihood Index Methodology, implemented through the use of an interactive computer program called MAUD-Multi-Attribute Utility Decomposition). Volume II provides a more detailed analysis of the technical issues underlying the approach

  6. Inverse Kinematics of a Humanoid Robot with Non-Spherical Hip: A Hybrid Algorithm Approach

    Directory of Open Access Journals (Sweden)

    Rafael Cisneros Limón

    2013-04-01

    Full Text Available This paper describes an approach to solve the inverse kinematics problem of humanoid robots whose construction shows a small but non negligible offset at the hip which prevents any purely analytical solution to be developed. Knowing that a purely numerical solution is not feasible due to variable efficiency problems, the proposed one first neglects the offset presence in order to obtain an approximate “solution” by means of an analytical algorithm based on screw theory, and then uses it as the initial condition of a numerical refining procedure based on the Levenberg-Marquardt algorithm. In this way, few iterations are needed for any specified attitude, making it possible to implement the algorithm for real-time applications. As a way to show the algorithm's implementation, one case of study is considered throughout the paper, represented by the SILO2 humanoid robot.

  7. A Formal Approach for RT-DVS Algorithms Evaluation Based on Statistical Model Checking

    Directory of Open Access Journals (Sweden)

    Shengxin Dai

    2015-01-01

    Full Text Available Energy saving is a crucial concern in embedded real time systems. Many RT-DVS algorithms have been proposed to save energy while preserving deadline guarantees. This paper presents a novel approach to evaluate RT-DVS algorithms using statistical model checking. A scalable framework is proposed for RT-DVS algorithms evaluation, in which the relevant components are modeled as stochastic timed automata, and the evaluation metrics including utilization bound, energy efficiency, battery awareness, and temperature awareness are expressed as statistical queries. Evaluation of these metrics is performed by verifying the corresponding queries using UPPAAL-SMC and analyzing the statistical information provided by the tool. We demonstrate the applicability of our framework via a case study of five classical RT-DVS algorithms.

  8. EXPERT SYSTEMS

    OpenAIRE

    Georgiana Marin; Mihai Catalin Andrei

    2011-01-01

    In recent decades IT and computer systems have evolved rapidly in economic informatics field. The goal is to create user friendly information systems that respond promptly and accurately to requests. Informatics systems evolved into decision assisted systems, and such systems are converted, based on gained experience, in expert systems for creative problem solving that an organization is facing. Expert systems are aimed at rebuilding human reasoning on the expertise obtained from experts, sto...

  9. Expert System

    DEFF Research Database (Denmark)

    Hildebrandt, Thomas Troels; Cattani, Gian Luca

    2016-01-01

    An expert system is a computer system for inferring knowledge from a knowledge base, typically by using a set of inference rules. When the concept of expert systems was introduced at Stanford University in the early 1970s, the knowledge base was an unstructured set of facts. Today the knowledge b...... for the application of expert systems, but also raises issues regarding privacy and legal liability....

  10. HYBRID APPROACHES TO THE FORMALISATION OF EXPERT KNOWLEDGE CONCERNING TEMPORAL REGULARITIES IN THE TIME SERIES GROUP OF A SYSTEM MONITORING DATABASE

    Directory of Open Access Journals (Sweden)

    E. S. Staricov

    2016-01-01

    Full Text Available Objectives. The presented research problem concerns data regularities for an unspecified time series based on an approach to the expert formalisation of knowledge integrated into a decision-making mechanism. Method. A context-free grammar, consisting of a modification of universal temporal grammar, is used to describe regularities. Using the rules of the developed grammar, an expert can describe patterns in the group of time series. A multi-dimensional matrix pattern of the behaviour of a group of time series is used in a real-time decision-making regime in the expert system to implements a universal approach to the description of the dynamics of these changes in the expert system. The multidimensional matrix pattern is specifically intended for decision-making in an expert system; the modified temporal grammar is used to identify patterns in the data. Results. It is proposed to use the temporal relations of the series and fix observation values in the time interval as ―From-To‖, ―Before‖, ―After‖, ―Simultaneously‖ and ―Duration‖. A syntactically oriented converter of descriptions is developed. A schema for the creation and application of matrix patterns in expert systems is drawn up. Conclusion. The advantage of the implementation of the proposed hybrid approaches consists in a reduction of the time taken for identifying temporal patterns and an automation of the matrix pattern of the decision-making system based on expert descriptions verified using live data derived from relationships in the monitoring data. 

  11. Method and Excel VBA Algorithm for Modeling Master Recession Curve Using Trigonometry Approach.

    Science.gov (United States)

    Posavec, Kristijan; Giacopetti, Marco; Materazzi, Marco; Birk, Steffen

    2017-11-01

    A new method was developed and implemented into an Excel Visual Basic for Applications (VBAs) algorithm utilizing trigonometry laws in an innovative way to overlap recession segments of time series and create master recession curves (MRCs). Based on a trigonometry approach, the algorithm horizontally translates succeeding recession segments of time series, placing their vertex, that is, the highest recorded value of each recession segment, directly onto the appropriate connection line defined by measurement points of a preceding recession segment. The new method and algorithm continues the development of methods and algorithms for the generation of MRC, where the first published method was based on a multiple linear/nonlinear regression model approach (Posavec et al. 2006). The newly developed trigonometry-based method was tested on real case study examples and compared with the previously published multiple linear/nonlinear regression model-based method. The results show that in some cases, that is, for some time series, the trigonometry-based method creates narrower overlaps of the recession segments, resulting in higher coefficients of determination R 2 , while in other cases the multiple linear/nonlinear regression model-based method remains superior. The Excel VBA algorithm for modeling MRC using the trigonometry approach is implemented into a spreadsheet tool (MRCTools v3.0 written by and available from Kristijan Posavec, Zagreb, Croatia) containing the previously published VBA algorithms for MRC generation and separation. All algorithms within the MRCTools v3.0 are open access and available free of charge, supporting the idea of running science on available, open, and free of charge software. © 2017, National Ground Water Association.

  12. The Method of Immersion the Problem of Comparing Technical Objects in an Expert Shell in the Class of Artificial Intelligence Algorithms

    Science.gov (United States)

    Sergey Vasilievich, Buharin; Aleksandr Vladimirovich, Melnikov; Svetlana Nikolaevna, Chernyaeva; Lyudmila Anatolievna, Korobova

    2017-08-01

    The method of dip of the underlying computational problem of comparing technical object in an expert shell in the class of data mining methods is examined. An example of using the proposed method is given.

  13. Assessment of occupational risks to extremely low frequency magnetic fields: Validation of an empirical non-expert approach

    Directory of Open Access Journals (Sweden)

    Mariam El-Zein

    2016-12-01

    Full Text Available The expert method of exposure assignment involves relying on chemists or hygienists to estimate occupational exposures using information collected on study subjects. Once the estimation method for a particular contaminant has been made available in the literature, it is not known whether a non-expert, briefly trained by an expert remaining available to answer ad hoc questions, can provide reliable exposure estimates. We explored this issue by comparing estimates of exposure to extremely low frequency magnetic fields (ELF-MF obtained by an expert to those from a non-expert. Using a published exposure matrix, both the expert and non-expert independently calculated a weekly time-weighted average exposure for 208 maternal jobs by considering three main determinants: the work environment, magnetic field sources, and duration of use or exposure to given sources. Agreement between assessors was tested using the Bland-Altman 95% limits of agreement. The overall mean difference in estimates between the expert and non-expert was 0.004 μT (standard deviation 0.104. The 95% limits of agreement were −0.20 μT and +0.21 μT. The work environments and exposure sources were almost always similarly identified but there were differences in estimating exposure duration. This occurred mainly when information collected from study subjects was not sufficiently detailed. Our results suggest that following a short training period and the availability of a clearly described method for estimating exposures, a non-expert can cost-efficiently and reliably assign exposure, at least to ELF-MF.

  14. A new collaborative recommendation approach based on users clustering using artificial bee colony algorithm.

    Science.gov (United States)

    Ju, Chunhua; Xu, Chonghuan

    2013-01-01

    Although there are many good collaborative recommendation methods, it is still a challenge to increase the accuracy and diversity of these methods to fulfill users' preferences. In this paper, we propose a novel collaborative filtering recommendation approach based on K-means clustering algorithm. In the process of clustering, we use artificial bee colony (ABC) algorithm to overcome the local optimal problem caused by K-means. After that we adopt the modified cosine similarity to compute the similarity between users in the same clusters. Finally, we generate recommendation results for the corresponding target users. Detailed numerical analysis on a benchmark dataset MovieLens and a real-world dataset indicates that our new collaborative filtering approach based on users clustering algorithm outperforms many other recommendation methods.

  15. A New Collaborative Recommendation Approach Based on Users Clustering Using Artificial Bee Colony Algorithm

    Directory of Open Access Journals (Sweden)

    Chunhua Ju

    2013-01-01

    Full Text Available Although there are many good collaborative recommendation methods, it is still a challenge to increase the accuracy and diversity of these methods to fulfill users’ preferences. In this paper, we propose a novel collaborative filtering recommendation approach based on K-means clustering algorithm. In the process of clustering, we use artificial bee colony (ABC algorithm to overcome the local optimal problem caused by K-means. After that we adopt the modified cosine similarity to compute the similarity between users in the same clusters. Finally, we generate recommendation results for the corresponding target users. Detailed numerical analysis on a benchmark dataset MovieLens and a real-world dataset indicates that our new collaborative filtering approach based on users clustering algorithm outperforms many other recommendation methods.

  16. Physiotherapy movement based classification approaches to low back pain: comparison of subgroups through review and developer/expert survey

    Directory of Open Access Journals (Sweden)

    Karayannis Nicholas V

    2012-02-01

    Full Text Available Abstract Background Several classification schemes, each with its own philosophy and categorizing method, subgroup low back pain (LBP patients with the intent to guide treatment. Physiotherapy derived schemes usually have a movement impairment focus, but the extent to which other biological, psychological, and social factors of pain are encompassed requires exploration. Furthermore, within the prevailing 'biological' domain, the overlap of subgrouping strategies within the orthopaedic examination remains unexplored. The aim of this study was "to review and clarify through developer/expert survey, the theoretical basis and content of physical movement classification schemes, determine their relative reliability and similarities/differences, and to consider the extent of incorporation of the bio-psycho-social framework within the schemes". Methods A database search for relevant articles related to LBP and subgrouping or classification was conducted. Five dominant movement-based schemes were identified: Mechanical Diagnosis and Treatment (MDT, Treatment Based Classification (TBC, Pathoanatomic Based Classification (PBC, Movement System Impairment Classification (MSI, and O'Sullivan Classification System (OCS schemes. Data were extracted and a survey sent to the classification scheme developers/experts to clarify operational criteria, reliability, decision-making, and converging/diverging elements between schemes. Survey results were integrated into the review and approval obtained for accuracy. Results Considerable diversity exists between schemes in how movement informs subgrouping and in the consideration of broader neurosensory, cognitive, emotional, and behavioural dimensions of LBP. Despite differences in assessment philosophy, a common element lies in their objective to identify a movement pattern related to a pain reduction strategy. Two dominant movement paradigms emerge: (i loading strategies (MDT, TBC, PBC aimed at eliciting a phenomenon

  17. An efficient identification approach for stable and unstable nonlinear systems using Colliding Bodies Optimization algorithm.

    Science.gov (United States)

    Pal, Partha S; Kar, R; Mandal, D; Ghoshal, S P

    2015-11-01

    This paper presents an efficient approach to identify different stable and practically useful Hammerstein models as well as unstable nonlinear process along with its stable closed loop counterpart with the help of an evolutionary algorithm as Colliding Bodies Optimization (CBO) optimization algorithm. The performance measures of the CBO based optimization approach such as precision, accuracy are justified with the minimum output mean square value (MSE) which signifies that the amount of bias and variance in the output domain are also the least. It is also observed that the optimization of output MSE in the presence of outliers has resulted in a very close estimation of the output parameters consistently, which also justifies the effective general applicability of the CBO algorithm towards the system identification problem and also establishes the practical usefulness of the applied approach. Optimum values of the MSEs, computational times and statistical information of the MSEs are all found to be the superior as compared with those of the other existing similar types of stochastic algorithms based approaches reported in different recent literature, which establish the robustness and efficiency of the applied CBO based identification scheme. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.

  18. Finding Important Terms for Patients in Their Electronic Health Records: A Learning-to-Rank Approach Using Expert Annotations

    Science.gov (United States)

    Zheng, Jiaping; Yu, Hong

    2016-01-01

    Background Many health organizations allow patients to access their own electronic health record (EHR) notes through online patient portals as a way to enhance patient-centered care. However, EHR notes are typically long and contain abundant medical jargon that can be difficult for patients to understand. In addition, many medical terms in patients’ notes are not directly related to their health care needs. One way to help patients better comprehend their own notes is to reduce information overload and help them focus on medical terms that matter most to them. Interventions can then be developed by giving them targeted education to improve their EHR comprehension and the quality of care. Objective We aimed to develop a supervised natural language processing (NLP) system called Finding impOrtant medical Concepts most Useful to patientS (FOCUS) that automatically identifies and ranks medical terms in EHR notes based on their importance to the patients. Methods First, we built an expert-annotated corpus. For each EHR note, 2 physicians independently identified medical terms important to the patient. Using the physicians’ agreement as the gold standard, we developed and evaluated FOCUS. FOCUS first identifies candidate terms from each EHR note using MetaMap and then ranks the terms using a support vector machine-based learn-to-rank algorithm. We explored rich learning features, including distributed word representation, Unified Medical Language System semantic type, topic features, and features derived from consumer health vocabulary. We compared FOCUS with 2 strong baseline NLP systems. Results Physicians annotated 90 EHR notes and identified a mean of 9 (SD 5) important terms per note. The Cohen’s kappa annotation agreement was .51. The 10-fold cross-validation results show that FOCUS achieved an area under the receiver operating characteristic curve (AUC-ROC) of 0.940 for ranking candidate terms from EHR notes to identify important terms. When including term

  19. Expert ease

    Energy Technology Data Exchange (ETDEWEB)

    1984-04-01

    Expert-ease allows the most inexperienced of computer users to build an expert system in a matter of hours. It is nothing more or less than a computer based problem-solving system. It allows the expert to preserve his or her knowledge in the form of rules, which can be applied to problems put to the system by the non-expert. The crucial piece of software at the heart of Expert-Ease extracts rules from data, and is called the analogue concept learning system. It was developed by Intelligent Terminals Ltd. and supplied to Export Software International to be incorporated into a commercially attractive package for business users. The resulting product runs on the Act Sirius and the IBM PC and compatibles. It is a well conceived and polished product with a popular appeal that should ensure widespread acceptance even at a cost of >1500 plus vat.

  20. Generalizable Aspects of the Development of Expertise in Ballet across Countries and Cultures: A Perspective from the Expert-Performance Approach

    Science.gov (United States)

    Hutchinson, Carla U.; Sachs-Ericsson, Natalie J.; Ericsson, K. Anders

    2013-01-01

    The expert-performance approach guided the collection of survey data on the developmental history of elite professional ballet dancers from three different countries/cultures (USA, Mexico, and Russia). The level of ballet expertise attained by age 18 was found to be uniquely predicted by only two factors, namely the total number of accumulated…

  1. Opposition-Based Memetic Algorithm and Hybrid Approach for Sorting Permutations by Reversals.

    Science.gov (United States)

    Soncco-Álvarez, José Luis; Muñoz, Daniel M; Ayala-Rincón, Mauricio

    2018-02-21

    Sorting unsigned permutations by reversals is a difficult problem; indeed, it was proved to be NP-hard by Caprara (1997). Because of its high complexity, many approximation algorithms to compute the minimal reversal distance were proposed until reaching the nowadays best-known theoretical ratio of 1.375. In this article, two memetic algorithms to compute the reversal distance are proposed. The first one uses the technique of opposition-based learning leading to an opposition-based memetic algorithm; the second one improves the previous algorithm by applying the heuristic of two breakpoint elimination leading to a hybrid approach. Several experiments were performed with one-hundred randomly generated permutations, single benchmark permutations, and biological permutations. Results of the experiments showed that the proposed OBMA and Hybrid-OBMA algorithms achieve the best results for practical cases, that is, for permutations of length up to 120. Also, Hybrid-OBMA showed to improve the results of OBMA for permutations greater than or equal to 60. The applicability of our proposed algorithms was checked processing permutations based on biological data, in which case OBMA gave the best average results for all instances.

  2. DC Voltage Droop Control Implementation in the AC/DC Power Flow Algorithm: Combinational Approach

    DEFF Research Database (Denmark)

    Akhter, F.; Macpherson, D.E.; Harrison, G.P.

    2015-01-01

    of operational flexibility, as more than one VSC station controls the DC link voltage of the MTDC system. This model enables the study of the effects of DC droop control on the power flows of the combined AC/DC system for steady state studies after VSC station outages or transient conditions without needing...... to use its complete dynamic model. Further, the proposed approach can be extended to include multiple AC and DC grids for combined AC/DC power flow analysis. The algorithm is implemented by modifying the MATPOWER based MATACDC program and the results shows that the algorithm works efficiently....

  3. A “Tuned” Mask Learnt Approach Based on Gravitational Search Algorithm

    Directory of Open Access Journals (Sweden)

    Youchuan Wan

    2016-01-01

    Full Text Available Texture image classification is an important topic in many applications in machine vision and image analysis. Texture feature extracted from the original texture image by using “Tuned” mask is one of the simplest and most effective methods. However, hill climbing based training methods could not acquire the satisfying mask at a time; on the other hand, some commonly used evolutionary algorithms like genetic algorithm (GA and particle swarm optimization (PSO easily fall into the local optimum. A novel approach for texture image classification exemplified with recognition of residential area is detailed in the paper. In the proposed approach, “Tuned” mask is viewed as a constrained optimization problem and the optimal “Tuned” mask is acquired by maximizing the texture energy via a newly proposed gravitational search algorithm (GSA. The optimal “Tuned” mask is achieved through the convergence of GSA. The proposed approach has been, respectively, tested on some public texture and remote sensing images. The results are then compared with that of GA, PSO, honey-bee mating optimization (HBMO, and artificial immune algorithm (AIA. Moreover, feature extracted by Gabor wavelet is also utilized to make a further comparison. Experimental results show that the proposed method is robust and adaptive and exhibits better performance than other methods involved in the paper in terms of fitness value and classification accuracy.

  4. Improving the Fine-Tuning of Metaheuristics: An Approach Combining Design of Experiments and Racing Algorithms

    Directory of Open Access Journals (Sweden)

    Eduardo Batista de Moraes Barbosa

    2017-01-01

    Full Text Available Usually, metaheuristic algorithms are adapted to a large set of problems by applying few modifications on parameters for each specific case. However, this flexibility demands a huge effort to correctly tune such parameters. Therefore, the tuning of metaheuristics arises as one of the most important challenges in the context of research of these algorithms. Thus, this paper aims to present a methodology combining Statistical and Artificial Intelligence methods in the fine-tuning of metaheuristics. The key idea is a heuristic method, called Heuristic Oriented Racing Algorithm (HORA, which explores a search space of parameters looking for candidate configurations close to a promising alternative. To confirm the validity of this approach, we present a case study for fine-tuning two distinct metaheuristics: Simulated Annealing (SA and Genetic Algorithm (GA, in order to solve the classical traveling salesman problem. The results are compared considering the same metaheuristics tuned through a racing method. Broadly, the proposed approach proved to be effective in terms of the overall time of the tuning process. Our results reveal that metaheuristics tuned by means of HORA achieve, with much less computational effort, similar results compared to the case when they are tuned by the other fine-tuning approach.

  5. Computation-aware algorithm selection approach for interlaced-to-progressive conversion

    Science.gov (United States)

    Park, Sang-Jun; Jeon, Gwanggil; Jeong, Jechang

    2010-05-01

    We discuss deinterlacing results in a computationally constrained and varied environment. The proposed computation-aware algorithm selection approach (CASA) for fast interlaced to progressive conversion algorithm consists of three methods: the line-averaging (LA) method for plain regions, the modified edge-based line-averaging (MELA) method for medium regions, and the proposed covariance-based adaptive deinterlacing (CAD) method for complex regions. The proposed CASA uses two criteria, mean-squared error (MSE) and CPU time, for assigning the method. We proposed a CAD method. The principle idea of CAD is based on the correspondence between the high and low-resolution covariances. We estimated the local covariance coefficients from an interlaced image using Wiener filtering theory and then used these optimal minimum MSE interpolation coefficients to obtain a deinterlaced image. The CAD method, though more robust than most known methods, was not found to be very fast compared to the others. To alleviate this issue, we proposed an adaptive selection approach using a fast deinterlacing algorithm rather than using only one CAD algorithm. The proposed hybrid approach of switching between the conventional schemes (LA and MELA) and our CAD was proposed to reduce the overall computational load. A reliable condition to be used for switching the schemes was presented after a wide set of initial training processes. The results of computer simulations showed that the proposed methods outperformed a number of methods presented in the literature.

  6. Optimization of Electrical System for Offshore Wind Farms via a Genetic Algorithm Approach

    DEFF Research Database (Denmark)

    Zhao, Menghua

    , and the LTC limitation of transformers, the power generation limits and the voltage operation range are considered as the constraints. The optimization method combined with probabilistic analysis is used to obtain the capacity of a given wind farm site. The OES-OWF is approached by Genetic Algorithm (GA...... to very different costs, system reliability, power quality, and power losses etc. Therefore, the optimization of electrical system design for offshore wind farms becomes more and more necessary. There are two tasks in this project: 1) the first one is to construct an algorithm for finding the capacity......). This platform is based on a knowledge database, and composed of several functional modules such as cost calculation, reliability evaluation, losses calculation, AC-DC integrated load flow algorithm etc. All these modules are based on a spreadsheet database which provides an interface for users to input...

  7. Expert Witness

    African Journals Online (AJOL)

    Adele

    formal rules of evidence apply) to help it understand the issues of a case and ... statements on medical expert witness by professional representative bodies in .... determining the size of the financial settlement that may have to be made to the.

  8. Analysis of stock investment selection based on CAPM using covariance and genetic algorithm approach

    Science.gov (United States)

    Sukono; Susanti, D.; Najmia, M.; Lesmana, E.; Napitupulu, H.; Supian, S.; Putra, A. S.

    2018-03-01

    Investment is one of the economic growth factors of countries, especially in Indonesia. Stocks is a form of investment, which is liquid. In determining the stock investment decisions which need to be considered by investors is to choose stocks that can generate maximum returns with a minimum risk level. Therefore, we need to know how to allocate the capital which may give the optimal benefit. This study discusses the issue of stock investment based on CAPM which is estimated using covariance and Genetic Algorithm approach. It is assumed that the stocks analyzed follow the CAPM model. To do the estimation of beta parameter on CAPM equation is done by two approach, first is to be represented by covariance approach, and second with genetic algorithm optimization. As a numerical illustration, in this paper analyzed ten stocks traded on the capital market in Indonesia. The results of the analysis show that estimation of beta parameters using covariance and genetic algorithm approach, give the same decision, that is, six underpriced stocks with buying decision, and four overpriced stocks with a sales decision. Based on the analysis, it can be concluded that the results can be used as a consideration for investors buying six under-priced stocks, and selling four overpriced stocks.

  9. Pre-emptive resource-constrained multimode project scheduling using genetic algorithm: A dynamic forward approach

    Directory of Open Access Journals (Sweden)

    Aidin Delgoshaei

    2016-09-01

    Full Text Available Purpose: The issue resource over-allocating is a big concern for project engineers in the process of scheduling project activities. Resource over-allocating drawback is frequently seen after scheduling of a project in practice which causes a schedule to be useless. Modifying an over-allocated schedule is very complicated and needs a lot of efforts and time. In this paper, a new and fast tracking method is proposed to schedule large scale projects which can help project engineers to schedule the project rapidly and with more confidence. Design/methodology/approach: In this article, a forward approach for maximizing net present value (NPV in multi-mode resource constrained project scheduling problem while assuming discounted positive cash flows (MRCPSP-DCF is proposed. The progress payment method is used and all resources are considered as pre-emptible. The proposed approach maximizes NPV using unscheduled resources through resource calendar in forward mode. For this purpose, a Genetic Algorithm is applied to solve. Findings: The findings show that the proposed method is an effective way to maximize NPV in MRCPSP-DCF problems while activity splitting is allowed. The proposed algorithm is very fast and can schedule experimental cases with 1000 variables and 100 resources in few seconds. The results are then compared with branch and bound method and simulated annealing algorithm and it is found the proposed genetic algorithm can provide results with better quality. Then algorithm is then applied for scheduling a hospital in practice. Originality/value: The method can be used alone or as a macro in Microsoft Office Project® Software to schedule MRCPSP-DCF problems or to modify resource over-allocated activities after scheduling a project. This can help project engineers to schedule project activities rapidly with more accuracy in practice.

  10. INTEGRATING CASE-BASED REASONING, KNOWLEDGE-BASED APPROACH AND TSP ALGORITHM FOR MINIMUM TOUR FINDING

    Directory of Open Access Journals (Sweden)

    Hossein Erfani

    2009-07-01

    Full Text Available Imagine you have traveled to an unfamiliar city. Before you start your daily tour around the city, you need to know a good route. In Network Theory (NT, this is the traveling salesman problem (TSP. A dynamic programming algorithm is often used for solving this problem. However, when the road network of the city is very complicated and dense, which is usually the case, it will take too long for the algorithm to find the shortest path. Furthermore, in reality, things are not as simple as those stated in AT. For instance, the cost of travel for the same part of the city at different times may not be the same. In this project, we have integrated TSP algorithm with AI knowledge-based approach and case-based reasoning in solving the problem. With this integration, knowledge about the geographical information and past cases are used to help TSP algorithm in finding a solution. This approach dramatically reduces the computation time required for minimum tour finding.

  11. Earthquake—explosion discrimination using genetic algorithm-based boosting approach

    Science.gov (United States)

    Orlic, Niksa; Loncaric, Sven

    2010-02-01

    An important and challenging problem in seismic data processing is to discriminate between natural seismic events such as earthquakes and artificial seismic events such as explosions. Many automatic techniques for seismogram classification have been proposed in the literature. Most of these methods have a similar approach to seismogram classification: a predefined set of features based on ad-hoc feature selection criteria is extracted from the seismogram waveform or spectral data and these features are used for signal classification. In this paper we propose a novel approach for seismogram classification. A specially formulated genetic algorithm has been employed to automatically search for a near-optimal seismogram feature set, instead of using ad-hoc feature selection criteria. A boosting method is added to the genetic algorithm when searching for multiple features in order to improve classification performance. A learning set of seismogram data is used by the genetic algorithm to discover a near-optimal feature set. The feature set identified by the genetic algorithm is then used for seismogram classification. The described method is developed to classify seismograms in two groups, whereas a brief overview of method extension for multiple group classification is given. For method verification, a learning set consisting of 40 local earthquake seismograms and 40 explosion seismograms was used. The method was validated on seismogram set consisting of 60 local earthquake seismograms and 60 explosion seismograms, with correct classification of 85%.

  12. Defining landscape resistance values in least-cost connectivity models for the invasive grey squirrel: a comparison of approaches using expert-opinion and habitat suitability modelling.

    Directory of Open Access Journals (Sweden)

    Claire D Stevenson-Holt

    Full Text Available Least-cost models are widely used to study the functional connectivity of habitat within a varied landscape matrix. A critical step in the process is identifying resistance values for each land cover based upon the facilitating or impeding impact on species movement. Ideally resistance values would be parameterised with empirical data, but due to a shortage of such information, expert-opinion is often used. However, the use of expert-opinion is seen as subjective, human-centric and unreliable. This study derived resistance values from grey squirrel habitat suitability models (HSM in order to compare the utility and validity of this approach with more traditional, expert-led methods. Models were built and tested with MaxEnt, using squirrel presence records and a categorical land cover map for Cumbria, UK. Predictions on the likelihood of squirrel occurrence within each land cover type were inverted, providing resistance values which were used to parameterise a least-cost model. The resulting habitat networks were measured and compared to those derived from a least-cost model built with previously collated information from experts. The expert-derived and HSM-inferred least-cost networks differ in precision. The HSM-informed networks were smaller and more fragmented because of the higher resistance values attributed to most habitats. These results are discussed in relation to the applicability of both approaches for conservation and management objectives, providing guidance to researchers and practitioners attempting to apply and interpret a least-cost approach to mapping ecological networks.

  13. Defining landscape resistance values in least-cost connectivity models for the invasive grey squirrel: a comparison of approaches using expert-opinion and habitat suitability modelling.

    Science.gov (United States)

    Stevenson-Holt, Claire D; Watts, Kevin; Bellamy, Chloe C; Nevin, Owen T; Ramsey, Andrew D

    2014-01-01

    Least-cost models are widely used to study the functional connectivity of habitat within a varied landscape matrix. A critical step in the process is identifying resistance values for each land cover based upon the facilitating or impeding impact on species movement. Ideally resistance values would be parameterised with empirical data, but due to a shortage of such information, expert-opinion is often used. However, the use of expert-opinion is seen as subjective, human-centric and unreliable. This study derived resistance values from grey squirrel habitat suitability models (HSM) in order to compare the utility and validity of this approach with more traditional, expert-led methods. Models were built and tested with MaxEnt, using squirrel presence records and a categorical land cover map for Cumbria, UK. Predictions on the likelihood of squirrel occurrence within each land cover type were inverted, providing resistance values which were used to parameterise a least-cost model. The resulting habitat networks were measured and compared to those derived from a least-cost model built with previously collated information from experts. The expert-derived and HSM-inferred least-cost networks differ in precision. The HSM-informed networks were smaller and more fragmented because of the higher resistance values attributed to most habitats. These results are discussed in relation to the applicability of both approaches for conservation and management objectives, providing guidance to researchers and practitioners attempting to apply and interpret a least-cost approach to mapping ecological networks.

  14. Evaluation of expert systems - An approach and case study. [of determining software functional requirements for command management of satellites

    Science.gov (United States)

    Liebowitz, J.

    1985-01-01

    Techniques that were applied in defining an expert system prototype for first-cut evaluations of the software functional requirements of NASA satellite command management activities are described. The prototype was developed using the Knowledge Engineering System. Criteria were selected for evaluating the satellite software before defining the expert system prototype. Application of the prototype system is illustrated in terms of the evaluation procedures used with the COBE satellite to be launched in 1988. The limited number of options which can be considered by the program mandates that biases in the system output must be well understood by the users.

  15. Group prioritisation with unknown expert weights in incomplete linguistic context

    Science.gov (United States)

    Cheng, Dong; Cheng, Faxin; Zhou, Zhili; Wang, Juan

    2017-09-01

    In this paper, we study a group prioritisation problem in situations when the expert weights are completely unknown and their judgement preferences are linguistic and incomplete. Starting from the theory of relative entropy (RE) and multiplicative consistency, an optimisation model is provided for deriving an individual priority vector without estimating the missing value(s) of an incomplete linguistic preference relation. In order to address the unknown expert weights in the group aggregating process, we define two new kinds of expert weight indicators based on RE: proximity entropy weight and similarity entropy weight. Furthermore, a dynamic-adjusting algorithm (DAA) is proposed to obtain an objective expert weight vector and capture the dynamic properties involved in it. Unlike the extant literature of group prioritisation, the proposed RE approach does not require pre-allocation of expert weights and can solve incomplete preference relations. An interesting finding is that once all the experts express their preference relations, the final expert weight vector derived from the DAA is fixed irrespective of the initial settings of expert weights. Finally, an application example is conducted to validate the effectiveness and robustness of the RE approach.

  16. Optimal planning approaches with multiple impulses for rendezvous based on hybrid genetic algorithm and control method

    Directory of Open Access Journals (Sweden)

    JingRui Zhang

    2015-03-01

    Full Text Available In this article, we focus on safe and effective completion of a rendezvous and docking task by looking at planning approaches and control with fuel-optimal rendezvous for a target spacecraft running on a near-circular reference orbit. A variety of existent practical path constraints are considered, including the constraints of field of view, impulses, and passive safety. A rendezvous approach is calculated by using a hybrid genetic algorithm with those constraints. Furthermore, a control method of trajectory tracking is adopted to overcome the external disturbances. Based on Clohessy–Wiltshire equations, we first construct the mathematical model of optimal planning approaches of multiple impulses with path constraints. Second, we introduce the principle of hybrid genetic algorithm with both stronger global searching ability and local searching ability. We additionally explain the application of this algorithm in the problem of trajectory planning. Then, we give three-impulse simulation examples to acquire an optimal rendezvous trajectory with the path constraints presented in this article. The effectiveness and applicability of the tracking control method are verified with the optimal trajectory above as control objective through the numerical simulation.

  17. Method of transient identification based on a possibilistic approach, optimized by genetic algorithm

    International Nuclear Information System (INIS)

    Almeida, Jose Carlos Soares de

    2001-02-01

    This work develops a method for transient identification based on a possible approach, optimized by Genetic Algorithm to optimize the number of the centroids of the classes that represent the transients. The basic idea of the proposed method is to optimize the partition of the search space, generating subsets in the classes within a partition, defined as subclasses, whose centroids are able to distinguish the classes with the maximum correct classifications. The interpretation of the subclasses as fuzzy sets and the possible approach provided a heuristic to establish influence zones of the centroids, allowing to achieve the 'don't know' answer for unknown transients, that is, outside the training set. (author)

  18. A Hierarchical and Distributed Approach for Mapping Large Applications to Heterogeneous Grids using Genetic Algorithms

    Science.gov (United States)

    Sanyal, Soumya; Jain, Amit; Das, Sajal K.; Biswas, Rupak

    2003-01-01

    In this paper, we propose a distributed approach for mapping a single large application to a heterogeneous grid environment. To minimize the execution time of the parallel application, we distribute the mapping overhead to the available nodes of the grid. This approach not only provides a fast mapping of tasks to resources but is also scalable. We adopt a hierarchical grid model and accomplish the job of mapping tasks to this topology using a scheduler tree. Results show that our three-phase algorithm provides high quality mappings, and is fast and scalable.

  19. A probabilistic multi objective CLSC model with Genetic algorithm-ε_Constraint approach

    Directory of Open Access Journals (Sweden)

    Alireza TaheriMoghadam

    2014-05-01

    Full Text Available In this paper an uncertain multi objective closed-loop supply chain is developed. The first objective function is maximizing the total profit. The second objective function is minimizing the use of row materials. In the other word, the second objective function is maximizing the amount of remanufacturing and recycling. Genetic algorithm is used for optimization and for finding the pareto optimal line, Epsilon-constraint method is used. Finally a numerical example is solved with proposed approach and performance of the model is evaluated in different sizes. The results show that this approach is effective and useful for managerial decisions.

  20. An Approach for Externalization of Expert Tacit Knowledge Using a Query Management System in an E-Learning Environment

    Science.gov (United States)

    Khan, Abdul Azeez; Khader, Sheik Abdul

    2014-01-01

    E-learning or electronic learning platforms facilitate delivery of the knowledge spectrum to the learning community through information and communication technologies. The transfer of knowledge takes place from experts to learners, and externalization of the knowledge transfer is significant. In the e-learning environment, the learners seek…

  1. Evolution of transoral approaches, endoscopic endonasal approaches, and reduction strategies for treatment of craniovertebral junction pathology: a treatment algorithm update.

    Science.gov (United States)

    Dlouhy, Brian J; Dahdaleh, Nader S; Menezes, Arnold H

    2015-04-01

    The craniovertebral junction (CVJ), or the craniocervical junction (CCJ) as it is otherwise known, houses the crossroads of the CNS and is composed of the occipital bone that surrounds the foramen magnum, the atlas vertebrae, the axis vertebrae, and their associated ligaments and musculature. The musculoskeletal organization of the CVJ is unique and complex, resulting in a wide range of congenital, developmental, and acquired pathology. The refinements of the transoral approach to the CVJ by the senior author (A.H.M.) in the late 1970s revolutionized the treatment of CVJ pathology. At the same time, a physiological approach to CVJ management was adopted at the University of Iowa Hospitals and Clinics in 1977 based on the stability and motion dynamics of the CVJ and the site of encroachment, incorporating the transoral approach for irreducible ventral CVJ pathology. Since then, approaches and techniques to treat ventral CVJ lesions have evolved. In the last 40 years at University of Iowa Hospitals and Clinics, multiple approaches to the CVJ have evolved and a better understanding of CVJ pathology has been established. In addition, new reduction strategies that have diminished the need to perform ventral decompressive approaches have been developed and implemented. In this era of surgical subspecialization, to properly treat complex CVJ pathology, the CVJ specialist must be trained in skull base transoral and endoscopic endonasal approaches, pediatric and adult CVJ spine surgery, and must understand and be able to treat the complex CSF dynamics present in CVJ pathology to provide the appropriate, optimal, and tailored treatment strategy for each individual patient, both child and adult. This is a comprehensive review of the history and evolution of the transoral approaches, extended transoral approaches, endoscopie assisted transoral approaches, endoscopie endonasal approaches, and CVJ reduction strategies. Incorporating these advancements, the authors update the

  2. An Approach to a Comprehensive Test Framework for Analysis and Evaluation of Text Line Segmentation Algorithms

    Directory of Open Access Journals (Sweden)

    Zoran N. Milivojevic

    2011-09-01

    Full Text Available The paper introduces a testing framework for the evaluation and validation of text line segmentation algorithms. Text line segmentation represents the key action for correct optical character recognition. Many of the tests for the evaluation of text line segmentation algorithms deal with text databases as reference templates. Because of the mismatch, the reliable testing framework is required. Hence, a new approach to a comprehensive experimental framework for the evaluation of text line segmentation algorithms is proposed. It consists of synthetic multi-like text samples and real handwritten text as well. Although the tests are mutually independent, the results are cross-linked. The proposed method can be used for different types of scripts and languages. Furthermore, two different procedures for the evaluation of algorithm efficiency based on the obtained error type classification are proposed. The first is based on the segmentation line error description, while the second one incorporates well-known signal detection theory. Each of them has different capabilities and convenience, but they can be used as supplements to make the evaluation process efficient. Overall the proposed procedure based on the segmentation line error description has some advantages, characterized by five measures that describe measurement procedures.

  3. Optimization approaches to mpi and area merging-based parallel buffer algorithm

    Directory of Open Access Journals (Sweden)

    Junfu Fan

    Full Text Available On buffer zone construction, the rasterization-based dilation method inevitably introduces errors, and the double-sided parallel line method involves a series of complex operations. In this paper, we proposed a parallel buffer algorithm based on area merging and MPI (Message Passing Interface to improve the performances of buffer analyses on processing large datasets. Experimental results reveal that there are three major performance bottlenecks which significantly impact the serial and parallel buffer construction efficiencies, including the area merging strategy, the task load balance method and the MPI inter-process results merging strategy. Corresponding optimization approaches involving tree-like area merging strategy, the vertex number oriented parallel task partition method and the inter-process results merging strategy were suggested to overcome these bottlenecks. Experiments were carried out to examine the performance efficiency of the optimized parallel algorithm. The estimation results suggested that the optimization approaches could provide high performance and processing ability for buffer construction in a cluster parallel environment. Our method could provide insights into the parallelization of spatial analysis algorithm.

  4. MRPack: Multi-Algorithm Execution Using Compute-Intensive Approach in MapReduce

    Science.gov (United States)

    2015-01-01

    Large quantities of data have been generated from multiple sources at exponential rates in the last few years. These data are generated at high velocity as real time and streaming data in variety of formats. These characteristics give rise to challenges in its modeling, computation, and processing. Hadoop MapReduce (MR) is a well known data-intensive distributed processing framework using the distributed file system (DFS) for Big Data. Current implementations of MR only support execution of a single algorithm in the entire Hadoop cluster. In this paper, we propose MapReducePack (MRPack), a variation of MR that supports execution of a set of related algorithms in a single MR job. We exploit the computational capability of a cluster by increasing the compute-intensiveness of MapReduce while maintaining its data-intensive approach. It uses the available computing resources by dynamically managing the task assignment and intermediate data. Intermediate data from multiple algorithms are managed using multi-key and skew mitigation strategies. The performance study of the proposed system shows that it is time, I/O, and memory efficient compared to the default MapReduce. The proposed approach reduces the execution time by 200% with an approximate 50% decrease in I/O cost. Complexity and qualitative results analysis shows significant performance improvement. PMID:26305223

  5. On the multivariate total least-squares approach to empirical coordinate transformations. Three algorithms

    Science.gov (United States)

    Schaffrin, Burkhard; Felus, Yaron A.

    2008-06-01

    The multivariate total least-squares (MTLS) approach aims at estimating a matrix of parameters, Ξ, from a linear model (Y - E_Y = (X - E_X) · Ξ) that includes an observation matrix, Y, another observation matrix, X, and matrices of randomly distributed errors, E_Y and E_X. Two special cases of the MTLS approach include the standard multivariate least-squares approach, where only the observation matrix, Y, is perturbed by random errors, and, on the other hand, the data least-squares approach, where only the coefficient matrix X is affected by random errors. In a previous contribution, the authors derived an iterative algorithm to solve the MTLS problem by using the nonlinear Euler-Lagrange conditions. In this contribution, new lemmas are developed to analyze the iterative algorithm, modify it, and compare it with a new ‘closed form’ solution that is based on the singular-value decomposition. For an application, the total least-squares approach is used to estimate the affine transformation parameters that convert cadastral data from the old to the new Israeli datum. Technical aspects of this approach, such as scaling the data and fixing the columns in the coefficient matrix, are investigated. This case study illuminates the issue of “symmetry” in the treatment of two sets of coordinates for identical point fields, a topic that had already been emphasized by Teunissen (1989, Festschrift to Torben Krarup, Geodetic Institute Bull no. 58, Copenhagen, Denmark, pp 335-342). The differences between the standard least-squares and the TLS approach are analyzed in terms of the estimated variance component and a first-order approximation of the dispersion matrix of the estimated parameters.
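
    For the ‘closed form’ solution mentioned above, the generic SVD-based total least-squares estimate can be sketched in a few lines; this is the textbook construction for the model Y - E_Y = (X - E_X) · Ξ, shown only as an illustration of the idea, and it omits the scaling and column-fixing issues that the paper investigates.

```python
# Generic SVD-based total least-squares estimate of Xi in Y - E_Y = (X - E_X) Xi.
# Textbook construction; the paper's MTLS algorithm handles further details.
import numpy as np

def tls_estimate(X, Y):
    n, d = X.shape[1], Y.shape[1]
    Z = np.hstack([X, Y])                       # augmented data matrix [X | Y]
    _, _, Vt = np.linalg.svd(Z, full_matrices=False)
    V = Vt.T
    V12 = V[:n, n:n + d]                        # upper-right block of V
    V22 = V[n:n + d, n:n + d]                   # lower-right block of V
    return -V12 @ np.linalg.inv(V22)

rng = np.random.default_rng(0)
X_clean = rng.normal(size=(200, 3))
Xi_true = np.array([[1.0, -2.0], [0.5, 0.0], [2.0, 1.0]])
X_obs = X_clean + 0.01 * rng.normal(size=X_clean.shape)       # errors E_X
Y_obs = X_clean @ Xi_true + 0.01 * rng.normal(size=(200, 2))  # errors E_Y
print(tls_estimate(X_obs, Y_obs))   # close to Xi_true
```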

  6. Time series segmentation: a new approach based on Genetic Algorithm and Hidden Markov Model

    Science.gov (United States)

    Toreti, A.; Kuglitsch, F. G.; Xoplaki, E.; Luterbacher, J.

    2009-04-01

    The subdivision of a time series into homogeneous segments has been performed using various methods applied to different disciplines. In climatology, for example, it is accompanied by the well-known homogenization problem and the detection of artificial change points. In this context, we present a new method (GAMM) based on a Hidden Markov Model (HMM) and a Genetic Algorithm (GA), applicable to series of independent observations (and easily adaptable to autoregressive processes). A left-to-right hidden Markov model was applied, estimating the parameters and the best-state sequence with the Baum-Welch and Viterbi algorithms, respectively. In order to avoid the well-known dependence of the Baum-Welch algorithm on the initial condition, a Genetic Algorithm was developed. This algorithm is characterized by mutation, elitism and a crossover procedure implemented with some restrictive rules. Moreover, the function to be minimized was derived following the approach of Kehagias (2004), i.e. the so-called complete log-likelihood. The number of states was determined by applying a two-fold cross-validation procedure (Celeux and Durand, 2008). Because this last issue is complex and influences the whole analysis, a Multi-Response Permutation Procedure (MRPP; Mielke et al., 1981) was added. It tests the model with K+1 states (where K is the number of states of the best model) when its likelihood is close to that of the K-state model. Finally, an evaluation of GAMM performance, applied as a break-detection method in the field of climate time series homogenization, is shown. 1. G. Celeux and J.B. Durand, Comput Stat 2008. 2. A. Kehagias, Stoch Envir Res 2004. 3. P.W. Mielke, K.J. Berry, G.W. Brier, Monthly Wea Rev 1981.
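
    As a point of reference for the best-state-sequence step mentioned above, the standard Viterbi recursion for a discrete-emission HMM fits in a few lines of numpy; this is the textbook algorithm, not the GAMM code, and the tiny transition and emission matrices are invented.

```python
# Standard Viterbi decoding for a discrete-emission HMM, computed in the log domain.
import numpy as np

def viterbi(obs, log_pi, log_A, log_B):
    """obs: observation indices; log_pi: (K,); log_A: (K, K); log_B: (K, M)."""
    K, T = log_A.shape[0], len(obs)
    delta = np.empty((T, K))
    psi = np.zeros((T, K), dtype=int)
    delta[0] = log_pi + log_B[:, obs[0]]
    for t in range(1, T):
        scores = delta[t - 1][:, None] + log_A          # scores[i, j]: from i to j
        psi[t] = np.argmax(scores, axis=0)
        delta[t] = scores[psi[t], np.arange(K)] + log_B[:, obs[t]]
    path = np.empty(T, dtype=int)
    path[-1] = int(np.argmax(delta[-1]))
    for t in range(T - 2, -1, -1):                      # backtracking
        path[t] = psi[t + 1, path[t + 1]]
    return path

# Tiny example: two hidden states, three observation symbols.
pi = np.log(np.array([0.6, 0.4]))
A = np.log(np.array([[0.9, 0.1], [0.2, 0.8]]))
B = np.log(np.array([[0.7, 0.2, 0.1], [0.1, 0.3, 0.6]]))
print(viterbi([0, 0, 2, 2, 1], pi, A, B))
```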

  7. On Directed Edge-Disjoint Spanning Trees in Product Networks, An Algorithmic Approach

    Directory of Open Access Journals (Sweden)

    A.R. Touzene

    2014-12-01

    Full Text Available In (Ku et al. 2003), the authors proposed a construction of edge-disjoint spanning trees (EDSTs) in undirected product networks. Their construction method focuses more on showing the existence of a maximum number (n1+n2-1) of EDSTs in the product network of two graphs whose factor graphs have n1 and n2 EDSTs, respectively. In this paper, we propose a new systematic and algorithmic approach to construct (n1+n2) directed routed EDSTs in the product networks. The direction of an edge is added to support bidirectional links in interconnection networks. Our EDSTs can be used straightforwardly to develop efficient collective communication algorithms for both the store-and-forward and wormhole models.

  8. A multiresolution approach to iterative reconstruction algorithms in X-ray computed tomography.

    Science.gov (United States)

    De Witte, Yoni; Vlassenbroeck, Jelle; Van Hoorebeke, Luc

    2010-09-01

    In computed tomography, the application of iterative reconstruction methods in practical situations is impeded by their high computational demands. Especially in high resolution X-ray computed tomography, where reconstruction volumes contain a high number of volume elements (several giga voxels), this computational burden prevents their actual breakthrough. Besides the large amount of calculations, iterative algorithms require the entire volume to be kept in memory during reconstruction, which quickly becomes cumbersome for large data sets. To overcome this obstacle, we present a novel multiresolution reconstruction, which greatly reduces the required amount of memory without significantly affecting the reconstructed image quality. It is shown that, combined with an efficient implementation on a graphical processing unit, the multiresolution approach enables the application of iterative algorithms in the reconstruction of large volumes at an acceptable speed using only limited resources.

  9. Systems approach to modeling the Token Bucket algorithm in computer networks

    Directory of Open Access Journals (Sweden)

    Ahmed N. U.

    2002-01-01

    Full Text Available In this paper, we construct a new dynamic model for the Token Bucket (TB) algorithm used in computer networks and use a systems approach for its analysis. This model is then augmented by adding a dynamic model for a multiplexor at an access node where the TB exercises a policing function. In the model, traffic policing, multiplexing and network utilization are formally defined. Based on the model, we study such issues as quality of service (QoS), traffic sizing and network dimensioning. We also propose an algorithm using feedback control to improve QoS and network utilization. Applying MPEG video traces as the input traffic to the model, we verify the usefulness and effectiveness of our model.
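
    For readers unfamiliar with the underlying policing rule, the classical token bucket can be written down in a few lines; the sketch below shows only the textbook conformance test, not the dynamic systems model or the feedback controller developed in the paper, and the rates and packet sizes are invented.

```python
# Classical token bucket policer: tokens accumulate at `rate` up to `capacity`;
# a packet conforms if enough tokens are available on arrival.
class TokenBucket:
    def __init__(self, rate, capacity):
        self.rate = float(rate)          # tokens (bytes) added per second
        self.capacity = float(capacity)  # bucket depth, i.e. allowed burst size
        self.tokens = float(capacity)
        self.last = 0.0

    def conforms(self, packet_size, now):
        # Refill according to the elapsed time, capped at the bucket capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if packet_size <= self.tokens:
            self.tokens -= packet_size
            return True
        return False

tb = TokenBucket(rate=1000.0, capacity=1500.0)       # 1000 B/s, 1500 B burst
for t, size in [(0.0, 1200), (0.2, 600), (2.0, 1500)]:
    print(t, size, "conform" if tb.conforms(size, t) else "non-conform")
```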

  10. An Algorithm-Based Approach for Behavior and Disease Management in Children.

    Science.gov (United States)

    Meyer, Beau D; Lee, Jessica Y; Thikkurissy, S; Casamassimo, Paul S; Vann, William F

    2018-03-15

    Pharmacologic behavior management for dental treatment is an approach to provide invasive yet compassionate care for young children; it can facilitate the treatment of children who otherwise may not cooperate for traditional in-office care. Some recent highly publicized procedural sedation-related tragedies have drawn attention to risks associated with pharmacologic management. However, it remains widely accepted that, by adhering to proper guidelines, procedural sedation can assist in the provision of high-quality dental care while minimizing morbidity and mortality from the procedure. The purpose of this paper was to propose an algorithm for clinicians to consider when selecting a behavior and disease management strategy for early childhood caries. This algorithm will not ensure a positive outcome but can assist clinicians when counseling caregivers about risks, benefits, and alternatives. It also emphasizes and underscores best-safety practices.

  11. A Genetic Algorithms-based Approach for Optimized Self-protection in a Pervasive Service Middleware

    DEFF Research Database (Denmark)

    Zhang, Weishan; Ingstrup, Mads; Hansen, Klaus Marius

    2009-01-01

    With increasingly complex and heterogeneous systems in pervasive service computing, it becomes more and more important to provide self-protected services to end users. In order to achieve self-protection, the corresponding security should be provided in an optimized manner considering...... the constraints of heterogeneous devices and networks. In this paper, we present a Genetic Algorithms-based approach for obtaining optimized security configurations at run time, supported by a set of security OWL ontologies and an event-driven framework. This approach has been realized as a prototype for self-protection...... in the Hydra middleware, and is integrated with a framework for enforcing the computed solution at run time using security obligations. The experiments with the prototype on configuring security strategies for a pervasive service middleware show that this approach has acceptable performance, and could be used...

  12. Expert Systems: What Is an Expert System?

    Science.gov (United States)

    Duval, Beverly K.; Main, Linda

    1994-01-01

    Describes expert systems and discusses their use in libraries. Highlights include parts of an expert system; expert system shells; an example of how to build an expert system; a bibliography of 34 sources of information on expert systems in libraries; and a list of 10 expert system shells used in libraries. (Contains five references.) (LRW)

  13. Overview of the structured assessment approach and documentation of algorithms to compute the probability of adversary detection

    International Nuclear Information System (INIS)

    Rice, T.R.; Derby, S.L.

    1978-01-01

    The Structured Assessment Approach was applied to material control and accounting systems at facilities that process Special Nuclear Material. Four groups of analytical techniques were developed for four general adversary types. Probabilistic algorithms were developed and compared with existing algorithms. 20 figures

  14. A Combined Approach to Cartographic Displacement for Buildings Based on Skeleton and Improved Elastic Beam Algorithm

    Science.gov (United States)

    Liu, Yuangang; Guo, Qingsheng; Sun, Yageng; Ma, Xiaoya

    2014-01-01

    Scale reduction from source to target maps inevitably leads to conflicts of map symbols in cartography and geographic information systems (GIS). Displacement is one of the most important map generalization operators and it can be used to resolve the problems that arise from conflict among two or more map objects. In this paper, we propose a combined approach based on a constrained Delaunay triangulation (CDT) skeleton and an improved elastic beam algorithm for automated building displacement. In this approach, map data sets are first partitioned. Then the displacement operation is conducted in each partition as a cyclic and iterative process of conflict detection and resolution. In the iteration, the skeleton of the gap spaces is extracted using CDT. It then serves as an enhanced data model to detect conflicts and construct the proximity graph. Then, the proximity graph is adjusted using local grouping information. Under the action of forces derived from the detected conflicts, the proximity graph is deformed using the improved elastic beam algorithm. In this way, buildings are displaced to find an optimal compromise between related cartographic constraints. To validate this approach, two topographic map data sets (i.e., urban and suburban areas) were tested. The results were reasonable with respect to each constraint when the density of the map was not extremely high. In summary, the improvements include (1) an automated parameter-setting method for elastic beams, (2) explicit enforcement of the positional accuracy constraint, added by introducing drag forces, (3) preservation of local building groups through displacement over an adjusted proximity graph, and (4) an iterative strategy that is more likely to resolve the proximity conflicts than the one used in the existing elastic beam algorithm. PMID:25470727

  15. DESIGNING ALGORITHMS FOR SERVICE ROBOTS ON THE BASIS OF MIVAR APPROACH

    Directory of Open Access Journals (Sweden)

    Alexey Andreevich Panferov

    2017-05-01

    Full Text Available The opportunities of the mivar-based approach for robots have been analyzed. The mivar-based method of rapid logical inference for computing arbitrary algorithms of service-robot functioning has been tested successfully. A logical model of the functioning of an office robot-guide, applying the mivar-based method of rapid logical inference in the software environment “KESMI” (Wi!Mi 1.1), has been developed. A formalized map of the office for the service robot has been described in a mivar matrix (63 objects, 100 rules). Simulation of the robot's functioning has been performed in the software environment V-REP.

  16. Gravitation and Special Relativity from Compton Wave Interactions at the Planck Scale: An Algorithmic Approach

    Science.gov (United States)

    Blackwell, William C., Jr.

    2004-01-01

    In this paper space is modeled as a lattice of Compton wave oscillators (CWOs) of near-Planck size. It is shown that gravitation and special relativity emerge from the interaction between particles' Compton waves. To develop this CWO model an algorithmic approach was taken, incorporating simple rules of interaction at the Planck scale developed using well-known physical laws. This technique naturally leads to Newton's law of gravitation and a new form of doubly special relativity. The model is in apparent agreement with the holographic principle, and it predicts a cutoff energy for ultrahigh-energy cosmic rays that is consistent with observational data.

  17. Application of algorithms and artificial-intelligence approach for locating multiple harmonics in distribution systems

    Energy Technology Data Exchange (ETDEWEB)

    Hong, Y.-Y.; Chen, Y.-C. [Chung Yuan University (China). Dept. of Electrical Engineering

    1999-05-01

    A new method is proposed for locating multiple harmonic sources in distribution systems. The proposed method first determines the proper locations for metering measurement using fuzzy clustering. Next, an artificial neural network based on the back-propagation approach is used to identify the most likely location for multiple harmonic sources. A set of systematic algorithmic steps is developed until all harmonic locations are identified. The simulation results for an 18-busbar system show that the proposed method is very efficient in locating the multiple harmonics in a distribution system. (author)

  18. Lumbar Spondylolysis and Spondylolytic Spondylolisthesis: Who Should Have Surgery? An Algorithmic Approach

    Science.gov (United States)

    Ebrahimzadeh, Mohamad Hossein; Salari, Saman

    2014-01-01

    Lumbar spondylolysis and spondylolisthesis are common spinal disorders that most of the time are incidental findings or respond favorably to conservative treatment. In a small percentage of patients, surgical intervention becomes necessary. Because too much attention has been paid to novel surgical techniques and modern spinal implants, some fundamental concepts have been forgotten. Identifying the small but important number of patients with lumbar spondylolysis or spondylolisthesis who would really benefit from lumbar surgery is one of those forgotten concepts. In this paper, we have developed an algorithmic approach to determine who is a good candidate for surgery for lumbar spondylolysis or spondylolisthesis. PMID:25558333

  19. Approach to estimation of level of information security at enterprise based on genetic algorithm

    Science.gov (United States)

    Stepanov, L. V.; Parinov, A. V.; Korotkikh, L. P.; Koltsov, A. S.

    2018-05-01

    The article considers ways of formalizing the different types of information security threats and the vulnerabilities of the information system of an enterprise or institution. Given the complexity of ensuring the information security of any newly organized system, well-founded concepts and decisions in the sphere of information security are needed; one such approach is the genetic algorithm method. For enterprises in any field of activity, a comprehensive estimation of the level of security of information systems, taking into account the quantitative and qualitative factors that characterize the components of information security, is a relevant problem.

  20. An Accurate and Impartial Expert Assignment Method for Scientific Project Review

    Directory of Open Access Journals (Sweden)

    Mingliang Yue

    2017-12-01

    Full Text Available Purpose: This paper proposes an expert assignment method for scientific project review that considers both accuracy and impartiality. As impartial and accurate peer review is extremely important to ensure the quality and feasibility of scientific projects, enhanced methods for managing the process are needed. Design/methodology/approach: To ensure both accuracy and impartiality, we design four criteria (the reviewers’ fitness degree, research intensity, academic association, and potential conflict of interest) to express the characteristics of an appropriate peer review expert. We first formalize the expert assignment problem as an optimization problem based on the designed criteria, and then propose a randomized algorithm to solve the expert assignment problem of identifying reviewer adequacy. Findings: Simulation results show that the proposed method is quite accurate and impartial during expert assignment. Research limitations: Although the criteria used in this paper can properly show the characteristics of a good and appropriate peer review expert, more criteria/conditions can be included in the proposed scheme to further enhance the accuracy and impartiality of the expert assignment. Practical implications: The proposed method can help project funding agencies (e.g., the National Natural Science Foundation of China) find better experts for project peer review. Originality/value: To the authors’ knowledge, this is the first publication that proposes an algorithm that applies an impartial approach to the project review expert assignment process. The simulation results show the effectiveness of the proposed method.

  1. ELICITED EXPERT PERCEPTIONS FOR CLIMATE CHANGE RISKS AND ADAPTATION IN AGRICULTURE AND FOOD PRODUCTION THROUGH MENTAL MODELS APPROACH

    Science.gov (United States)

    Suda, Eiko; Kubota, Hiromi; Baba, Kenshi; Hijioka, Yasuaki; Takahashi, Kiyoshi; Hanasaki, Naota

    Impacts of climate change have become obvious in agriculture and food production in Japan in recent years, and research on adapting to these risks has been conducted as a key effort to cope with climate change. Numerous scientific findings on climate change impacts have been presented so far; however, the prospective risks to be adapted to, and their management in the context of individual on-site situations, have not been investigated in detail. The structure of climate change risks and their management varies depending on the geographical and social features of the regions where the adaptation options are to be applied; therefore, a practical adaptation strategy should consider actual on-site situations. This study intended to clarify the climate change risks to be adapted to in the Japanese agricultural sector, and the factors to be considered in adaptation options, in order to encourage decision-making on adaptation implementation in the field. Semi-structured individual interviews were conducted with 9 multidisciplinary experts engaged in climate change impacts research in agricultural production, economics, engineering, policy, and so on. Based on the results of the interviews, and the latest literature available on risk assessment and adaptation, an expert mental model has been developed that includes the experts' perceptions covering the process from climate change impact assessment to adaptation. The prospective risks, adaptation options, and issues to be examined to advance the development of practical and effective adaptation options and to support individual or social decision-making are shown on the developed expert mental model. It provides basic information for developing social communication and stakeholder cooperation in climate change adaptation strategies in agriculture and food production in Japan.

  2. Enhanced gene ranking approaches using modified trace ratio algorithm for gene expression data

    Directory of Open Access Journals (Sweden)

    Shruti Mishra

    Full Text Available Microarray technology enables the understanding and investigation of gene expression levels by analyzing high-dimensional datasets that contain few samples. Over time, microarray expression data have been collected for studying the underlying biological mechanisms of disease. One such application for understanding these mechanisms is the construction of a gene regulatory network (GRN). One of the foremost criteria for GRN discovery is gene selection, and choosing a generous set of genes for the structure of the network is highly desirable. For this role, two suitable methods are proposed for the selection of appropriate genes. The first approach comprises a gene selection method called Information gain, in which the dataset is reformed and fused with another distinct algorithm called Trace Ratio (TR). The second method is the implementation of our proposed modified TR algorithm, in which the scoring base for finding weight matrices has been redesigned. The efficiency of both methods was shown with different classifiers, including variants of the Artificial Neural Network classifier, such as Resilient Propagation, Quick Propagation, Back Propagation, Manhattan Propagation and the Radial Basis Function Neural Network, as well as the Support Vector Machine (SVM) classifier. The study confirmed that both of the proposed methods worked well and offered high accuracy with fewer iterations compared to the original Trace Ratio algorithm. Keywords: Gene regulatory network, Gene selection, Information gain, Trace ratio, Canonical correlation analysis, Classification
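
    The Information gain score used for gene ranking above is simply the entropy of the class labels minus the class entropy conditioned on a discretized expression value; a toy, self-contained sketch is shown below (the binning scheme and the synthetic data are assumptions, not the authors' pipeline).

```python
# Information gain IG(class; gene) = H(class) - H(class | binned expression value).
import numpy as np

def entropy(labels):
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def information_gain(expr, labels, bins=3):
    # Discretize one gene's expression values into equal-width bins.
    edges = np.histogram_bin_edges(expr, bins=bins)[1:-1]
    binned = np.digitize(expr, edges)
    h_cond = sum((binned == b).mean() * entropy(labels[binned == b])
                 for b in np.unique(binned))
    return entropy(labels) - h_cond

rng = np.random.default_rng(1)
labels = np.array([0] * 20 + [1] * 20)
informative_gene = labels + 0.3 * rng.normal(size=40)   # correlated with the class
noise_gene = rng.normal(size=40)                         # unrelated to the class
print(information_gain(informative_gene, labels))        # relatively high
print(information_gain(noise_gene, labels))              # close to zero
```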

  3. A systematic approach to robust preconditioning for gradient-based inverse scattering algorithms

    International Nuclear Information System (INIS)

    Nordebo, Sven; Fhager, Andreas; Persson, Mikael; Gustafsson, Mats

    2008-01-01

    This paper presents a systematic approach to robust preconditioning for gradient-based nonlinear inverse scattering algorithms. In particular, one- and two-dimensional inverse problems are considered where the permittivity and conductivity profiles are unknown and the input data consist of the scattered field over a certain bandwidth. A time-domain least-squares formulation is employed and the inversion algorithm is based on a conjugate gradient or quasi-Newton algorithm together with an FDTD electromagnetic solver. A Fisher information analysis is used to estimate the Hessian of the error functional. A robust preconditioner is then obtained by incorporating a parameter scaling such that the scaled Fisher information has a unit diagonal. By improving the conditioning of the Hessian, the convergence rate of the conjugate gradient or quasi-Newton methods is improved. The preconditioner is robust in the sense that the scaling, i.e. the diagonal Fisher information, is virtually invariant to the numerical resolution and the discretization model that is employed. Numerical examples of image reconstruction are included to illustrate the efficiency of the proposed technique.
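
    The parameter scaling described above amounts to preconditioning the gradient with the inverse diagonal of the Fisher information, so that the rescaled problem has a unit-diagonal Fisher matrix. The toy sketch below illustrates only that scaling idea on an artificially ill-conditioned quadratic; it is not the paper's FDTD-based inversion.

```python
# Schematic preconditioned gradient descent: precondition with diag(F)^-1 so that
# the rescaled Fisher information has a unit diagonal.
import numpy as np

def preconditioned_descent(grad_fn, fisher_diag, theta0, step=0.5, iters=50):
    precond = 1.0 / fisher_diag
    theta = theta0.copy()
    for _ in range(iters):
        theta -= step * precond * grad_fn(theta)
    return theta

# Badly scaled quadratic misfit; its Hessian plays the role of the Fisher matrix.
fisher_diag = np.array([1.0, 1.0e2, 1.0e4])
target = np.array([2.0, -1.0, 0.5])
grad = lambda th: fisher_diag * (th - target)    # gradient of 0.5 (th - t)^T F (th - t)

print(preconditioned_descent(grad, fisher_diag, np.zeros(3)))  # converges to target
print(np.zeros(3) - 0.5 * grad(np.zeros(3)))                   # plain step overshoots badly
```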

  4. Design Approach and Implementation of Application Specific Instruction Set Processor for SHA-3 BLAKE Algorithm

    Science.gov (United States)

    Zhang, Yuli; Han, Jun; Weng, Xinqian; He, Zhongzhu; Zeng, Xiaoyang

    This paper presents an Application Specific Instruction-set Processor (ASIP) for the SHA-3 BLAKE algorithm family, built through instruction set extensions (ISE) of a RISC (reduced instruction set computer) processor. With a design space exploration for this ASIP to increase performance and reduce area cost, we accomplish an efficient hardware and software implementation of the BLAKE algorithm. The special instructions and their well-matched hardware function unit improve the calculation of the key section of the algorithm, namely the G-functions. Also, relaxing the time constraint of the special function unit can decrease its hardware cost while keeping the high data throughput of the processor. Evaluation results reveal that the ASIP achieves 335 Mbps and 176 Mbps for BLAKE-256 and BLAKE-512, respectively. The extra area cost is only 8.06k equivalent gates. The proposed ASIP outperforms several software approaches on various platforms in cycles per byte. In fact, both the high throughput and low hardware cost achieved by this programmable processor are comparable to those of ASIC implementations.

  5. Personalized therapy algorithms for type 2 diabetes: a phenotype-based approach

    Directory of Open Access Journals (Sweden)

    Ceriello A

    2014-06-01

    treatment is related to the identified phenotype. With one exception, these algorithms contain a stepwise approach for patients with type 2 diabetes who are metformin-intolerant. The glycemic targets (HbA1c, fasting/preprandial and postprandial glycemia) are also personalized. This accessible and easy-to-use algorithm may help physicians to choose a personalized treatment plan for each patient and to optimize it in a timely manner, thereby lessening clinical inertia. Keywords: type 2 diabetes, treatment guidelines, personalized treatment, Italian Association of Medical Diabetologists, Italian algorithm

  6. Combined mixed approach algorithm for in-line phase-contrast x-ray imaging

    International Nuclear Information System (INIS)

    De Caro, Liberato; Scattarella, Francesco; Giannini, Cinzia; Tangaro, Sabina; Rigon, Luigi; Longo, Renata; Bellotti, Roberto

    2010-01-01

    Purpose: In the past decade, phase-contrast imaging (PCI) has been applied to study different kinds of tissues and human body parts, with an increased improvement of the image quality with respect to simple absorption radiography. A technique closely related to PCI is phase-retrieval imaging (PRI). Indeed, PCI is an imaging modality thought to enhance the total contrast of the images through the phase shift introduced by the object (human body part); PRI is a mathematical technique to extract the quantitative phase-shift map from PCI. A new phase-retrieval algorithm for the in-line phase-contrast x-ray imaging is here proposed. Methods: The proposed algorithm is based on a mixed transfer-function and transport-of-intensity approach (MA) and it requires, at most, an initial approximate estimate of the average phase shift introduced by the object as prior knowledge. The accuracy in the initial estimate determines the convergence speed of the algorithm. The proposed algorithm retrieves both the object phase and its complex conjugate in a combined MA (CMA). Results: Although slightly less computationally effective with respect to other mixed-approach algorithms, as two phases have to be retrieved, the results obtained by the CMA on simulated data have shown that the obtained reconstructed phase maps are characterized by particularly low normalized mean square errors. The authors have also tested the CMA on noisy experimental phase-contrast data obtained by a suitable weakly absorbing sample consisting of a grid of submillimetric nylon fibers as well as on a strongly absorbing object made of a 0.03 mm thick lead x-ray resolution star pattern. The CMA has shown a good efficiency in recovering phase information, also in presence of noisy data, characterized by peak-to-peak signal-to-noise ratios down to a few dBs, showing the possibility to enhance with phase radiography the signal-to-noise ratio for features in the submillimetric scale with respect to the attenuation

  7. A genetic algorithm approach to optimization for the radiological worker allocation problem

    International Nuclear Information System (INIS)

    Yan Chen; Masakuni Narita; Masashi Tsuji; Sangduk Sa

    1996-01-01

    The worker allocation optimization problem in radiological facilities inevitably involves various types of requirements and constraints relevant to radiological protection and labor management. Some of these goals and constraints are not amenable to a rigorous mathematical formulation. Conventional methods for this problem rely heavily on sophisticated algebraic or numerical algorithms, which cause difficulties in the search for optimal solutions in the search space of worker allocation optimization problems. Genetic algorithms (GAs) are stochastic search algorithms introduced by J. Holland in the 1970s based on ideas and techniques from genetic and evolutionary theories. The most striking characteristic of GAs is the large flexibility allowed in the formulation of the optimization problem and the process of the search for the optimal solution. In the formulation, it is not necessary to define the optimization problem in rigorous mathematical terms, as required in the conventional methods. Furthermore, by designing a model of evolution for the optimal search problem, the optimal solution can be sought efficiently with computationally simple manipulations without highly complex mathematical algorithms. We reported a GA approach to the worker allocation problem in radiological facilities in a previous study. In that study, two types of hard constraints were employed to reduce the huge search space, where the optimal solution is sought in such a way as to satisfy as many of the soft constraints as possible. It was demonstrated that the proposed evolutionary method could provide the optimal solution efficiently compared with conventional methods. However, although the employed hard constraints could localize the search space into a very small region, they introduced some complexity into the designed genetic operators and demanded an additional computational burden. In this paper, we propose a simplified evolutionary model with less restrictive hard constraints and make comparisons between

  8. Applying Aspects of the Expert Performance Approach to Better Understand the Structure of Skill and Mechanisms of Skill Acquisition in Video Games.

    Science.gov (United States)

    Boot, Walter R; Sumner, Anna; Towne, Tyler J; Rodriguez, Paola; Anders Ericsson, K

    2017-04-01

    Video games are ideal platforms for the study of skill acquisition for a variety of reasons. However, our understanding of the development of skill and the cognitive representations that support skilled performance can be limited by a focus on game scores. We present an alternative approach to the study of skill acquisition in video games based on the tools of the Expert Performance Approach. Our investigation was motivated by a detailed analysis of the behaviors responsible for the superior performance of one of the highest scoring players of the video game Space Fortress (Towne, Boot, & Ericsson, ). This analysis revealed how certain behaviors contributed to his exceptional performance. In this study, we recruited a participant for a similar training regimen, but we collected concurrent and retrospective verbal protocol data throughout training. Protocol analysis revealed insights into strategies, errors, mental representations, and shifting game priorities. We argue that these insights into the developing representations that guided skilled performance could only easily have been derived from the tools of the Expert Performance Approach. We propose that the described approach could be applied to understand performance and skill acquisition in many different video games (and other short- to medium-term skill acquisition paradigms) and help reveal mechanisms of transfer from gameplay to other measures of laboratory and real-world performance. Copyright © 2016 Cognitive Science Society, Inc.

  9. A Novel Approach for Bi-Level Segmentation of Tuberculosis Bacilli Based on Meta-Heuristic Algorithms

    Directory of Open Access Journals (Sweden)

    AYAS, S.

    2018-02-01

    Full Text Available Image thresholding is the most crucial step in microscopic image analysis to distinguish the bacilli objects that cause tuberculosis disease. Therefore, several bi-level thresholding algorithms are widely used to increase bacilli segmentation accuracy. However, the bi-level microscopic image thresholding problem has not been solved using optimization algorithms. This paper introduces a novel approach to the segmentation problem using heuristic algorithms and presents visual and quantitative comparisons of heuristic and state-of-the-art thresholding algorithms. In this study, well-known heuristic algorithms such as the Firefly Algorithm, Particle Swarm Optimization, Cuckoo Search and Flower Pollination are used to solve the bi-level microscopic image thresholding problem, and the results are compared with state-of-the-art thresholding algorithms such as K-Means, Fuzzy C-Means and Fast Marching. Kapur's entropy is chosen as the entropy measure to be maximized. Experiments are performed to make comparisons in terms of evaluation metrics and execution time. The quantitative results are calculated based on ground-truth segmentation. According to the visual results, the heuristic algorithms have better performance, and the quantitative results are in accord with the visual results. Furthermore, execution time comparisons show the superiority and effectiveness of the heuristic algorithms over the traditional thresholding algorithms.
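
    For reference, the objective that the heuristics above maximize, Kapur's entropy, can also be evaluated by brute force for a single threshold; the sketch below defines the criterion and scans every grey level (the synthetic bimodal histogram is invented, and a metaheuristic would simply replace the exhaustive scan).

```python
# Kapur's entropy for bi-level thresholding: pick t maximizing the sum of the
# entropies of the two classes of the normalized grey-level histogram.
import numpy as np

def kapur_objective(hist, t):
    p = hist / hist.sum()
    w0, w1 = p[:t + 1].sum(), p[t + 1:].sum()
    if w0 <= 0 or w1 <= 0:
        return -np.inf
    p0, p1 = p[:t + 1] / w0, p[t + 1:] / w1
    h0 = -np.sum(p0[p0 > 0] * np.log(p0[p0 > 0]))
    h1 = -np.sum(p1[p1 > 0] * np.log(p1[p1 > 0]))
    return h0 + h1

rng = np.random.default_rng(2)
# Toy bimodal "image": dark background plus a brighter bacilli-like foreground.
image = np.concatenate([rng.normal(60, 10, 9000), rng.normal(170, 15, 1000)])
image = np.clip(image, 0, 255).astype(np.uint8)
hist = np.bincount(image.ravel(), minlength=256)

best_t = max(range(255), key=lambda t: kapur_objective(hist, t))
print("Kapur threshold:", best_t)   # expected to fall between the two modes
```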

  10. A Fuzzy Approach Using Generalized Dinkelbach’s Algorithm for Multiobjective Linear Fractional Transportation Problem

    Directory of Open Access Journals (Sweden)

    Nurdan Cetin

    2014-01-01

    Full Text Available We consider a multiobjective linear fractional transportation problem (MLFTP) with several fractional criteria, such as the maximization of transport profitability (e.g., profit/cost or profit/time), defined over sources and destinations. Our aim is to introduce MLFTP, which has not been studied in the literature before, and to provide a fuzzy approach that obtains a compromise Pareto-optimal solution for this problem. To do this, we first present a theorem which shows that MLFTP is always solvable. Then, reducing MLFTP to Zimmermann’s “min” operator model, which is a max-min problem, we construct the Generalized Dinkelbach’s Algorithm for solving the obtained problem. Furthermore, we provide an illustrative numerical example to explain this fuzzy approach.
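
    Dinkelbach's parametric idea referenced above replaces the ratio objective with a sequence of ordinary linear programs; the single-ratio sketch below (using scipy.optimize.linprog on an invented toy instance) conveys the iteration, whereas the paper applies a generalized, multi-ratio variant inside the fuzzy max-min model.

```python
# Single-ratio Dinkelbach iteration for max (c@x + a)/(d@x + b) s.t. A_ub@x <= b_ub, x >= 0,
# assuming the denominator is positive on the feasible set. Illustrative sketch only.
import numpy as np
from scipy.optimize import linprog

def dinkelbach(c, a, d, b, A_ub, b_ub, tol=1e-8, max_iter=50):
    lam = 0.0
    for _ in range(max_iter):
        # Solve max (c - lam*d)@x, i.e. min -(c - lam*d)@x, as a plain LP.
        res = linprog(-(c - lam * d), A_ub=A_ub, b_ub=b_ub, bounds=(0, None))
        x = res.x
        if (c @ x + a) - lam * (d @ x + b) < tol:   # parametric objective near 0: done
            return x, lam
        lam = (c @ x + a) / (d @ x + b)
    return x, lam

# Toy instance: maximize (2*x1 + 3*x2 + 1) / (x1 + x2 + 2) subject to x1 + x2 <= 4.
x_opt, ratio = dinkelbach(np.array([2.0, 3.0]), 1.0, np.array([1.0, 1.0]), 2.0,
                          np.array([[1.0, 1.0]]), np.array([4.0]))
print(x_opt, ratio)   # expected: x = (0, 4), ratio = 13/6
```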

  11. Thermogram breast cancer prediction approach based on Neutrosophic sets and fuzzy c-means algorithm.

    Science.gov (United States)

    Gaber, Tarek; Ismail, Gehad; Anter, Ahmed; Soliman, Mona; Ali, Mona; Semary, Noura; Hassanien, Aboul Ella; Snasel, Vaclav

    2015-08-01

    The early detection of breast cancer helps many women survive. In this paper, a CAD system classifying breast cancer thermograms into normal and abnormal is proposed. This approach consists of two main phases: automatic segmentation and classification. For the former phase, an improved segmentation approach based on both Neutrosophic sets (NS) and an optimized Fast Fuzzy c-means (F-FCM) algorithm is proposed. Also, a post-segmentation process is suggested to segment the breast parenchyma (i.e., the ROI) from the thermogram images. For the classification, different kernel functions of the Support Vector Machine (SVM) were used to classify the breast parenchyma into normal or abnormal cases. Using a benchmark database, the proposed CAD system was evaluated based on precision, recall, and accuracy, as well as a comparison with related work. The experimental results showed that our system would be a very promising step toward automatic diagnosis of breast cancer using thermograms, as the accuracy reached 100%.
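
    The fuzzy c-means step relied upon above follows the standard alternating update of memberships and centroids; the generic sketch below (plain FCM on invented 2-D data, without the Neutrosophic preprocessing or the paper's F-FCM optimizations) shows that update rule.

```python
# Standard fuzzy c-means: alternate the membership and centroid updates.
import numpy as np

def fuzzy_c_means(X, c=2, m=2.0, iters=100, seed=0):
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)           # fuzzy memberships, rows sum to 1
    for _ in range(iters):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
        dist = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        U = dist ** (-2.0 / (m - 1.0))
        U /= U.sum(axis=1, keepdims=True)
    return centers, U

rng = np.random.default_rng(3)
X = np.vstack([rng.normal(0, 0.5, (50, 2)), rng.normal(4, 0.5, (50, 2))])
centers, U = fuzzy_c_means(X, c=2)
print(centers.round(2))           # roughly (0, 0) and (4, 4), in some order
print(U[:3].round(2))             # soft memberships of the first few points
```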

  12. Medical imaging in clinical applications algorithmic and computer-based approaches

    CERN Document Server

    Bhateja, Vikrant; Hassanien, Aboul

    2016-01-01

    This volume comprises 21 selected chapters, including two overview chapters devoted to abdominal imaging in clinical applications supported by computer-aided diagnosis approaches, as well as different techniques for solving the pectoral muscle extraction problem in the preprocessing part of CAD systems for detecting breast cancer in its early stage using digital mammograms. The aim of this book is to stimulate further research in medical imaging applications based on algorithmic and computer-based approaches and to utilize them in real-world clinical applications. The book is divided into four parts: Part-I: Clinical Applications of Medical Imaging; Part-II: Classification and Clustering; Part-III: Computer Aided Diagnosis (CAD) Tools and Case Studies; and Part-IV: Bio-inspired Computer Aided Diagnosis Techniques.

  13. Process planning optimization on turning machine tool using a hybrid genetic algorithm with local search approach

    Directory of Open Access Journals (Sweden)

    Yuliang Su

    2015-04-01

    Full Text Available A turning machine tool is a new type of machine tool equipped with more than one spindle and turret. The distinctive simultaneous and parallel processing abilities of a turning machine tool increase the complexity of process planning. The operations must not only be sequenced to satisfy precedence constraints, but also be scheduled with multiple objectives, such as minimizing machining cost, maximizing the utilization of the turning machine tool, and so on. To solve this problem, a hybrid genetic algorithm is proposed to generate optimal process plans based on a mixed 0-1 integer programming model. An operation precedence graph is used to represent precedence constraints and to help generate a feasible initial population for the hybrid genetic algorithm. An encoding strategy based on data structures was developed to represent process plans digitally and form the solution space. In addition, a local search approach for optimizing the assignment of available turrets is added to incorporate scheduling into process planning. A real-world case is used to show that the proposed approach can avoid infeasible solutions and effectively generate a globally optimal process plan.

  14. Proposed prediction algorithms based on hybrid approach to deal with anomalies of RFID data in healthcare

    Directory of Open Access Journals (Sweden)

    A. Anny Leema

    2013-07-01

    Full Text Available RFID technology has penetrated the healthcare sector due to its increased functionality, low cost, high reliability, and easy-to-use capabilities. It is being deployed for various applications, and the data captured by RFID readers grow with each timestamp, resulting in an enormous volume of duplication, false positives, and false negatives. The dirty data stream generated by RFID readers is one of the main factors limiting the widespread adoption of RFID technology. In order to provide reliable data to RFID applications, it is necessary to clean the collected data, and this should be done effectively before they are subjected to warehousing. The existing approaches to dealing with anomalies are the physical, middleware, and deferred approaches. The shortcomings of the existing approaches are analyzed, and it is found that a robust RFID system can be built by integrating the middleware and deferred approaches. Our proposed algorithms based on this hybrid approach are tested in a healthcare environment and predict false positives, false negatives, and redundant data. In this paper, a healthcare environment is simulated using RFID, and the data observed by the RFID readers contain false-positive, false-negative, and duplicate anomalies. Experimental evaluation shows that our cleansing methods remove errors in RFID data more accurately and efficiently. Thus, with the aid of the proposed data cleaning technique, we can bring down healthcare costs, optimize business processes, streamline patient identification processes, and improve patient safety.

  15. A possibilistic approach to rotorcraft design through a multi-objective evolutionary algorithm

    Science.gov (United States)

    Chae, Han Gil

    constraints, and there has been no clear explanation for constraint handling in SPEA so far. In this thesis work, it is proposed that through a slight modification of the notion of dominance, it is possible to make SPEA manage constraints successfully. In light of the notion of possibility, a concept of solution that ensures a certain confidence level is proposed and implemented in a new evolutionary algorithm with a newly defined fuzzified version of the multi-objective optimization problem statement. In the new problem statement, function values and constraints are softened by possibility distributions that reflect the intuitive assessment of the expert. Multiple alternative solutions to the problem are found by the modified SPEA. Furthermore, the new method is applied to the sizing problem of a gyrodyne configuration which employs a tip-jet-driven rotor on top of a fixed-wing aircraft. The sizing environment includes a 6-DOF rotor trim model, a tip-jet model, a blade duct model and engine models for various concepts of air compression. However, the design problem of the gyrodyne is ill-defined, and there are only a few data available. Therefore, a large portion of the analysis involves intuitive information. The intuitive information is quantified, and sizing is performed through the possibilistic MOEA, investigating the influences of the various factors. The trade-offs include discrete variables for engine type and an optional tip burner, as well as continuous variables for rotor parameters and engine parameters.

  16. Institutional and Actor-Oriented Factors Constraining Expert-Based Forest Information Exchange in Europe: A Policy Analysis from an Actor-Centred Institutionalist Approach

    Directory of Open Access Journals (Sweden)

    Tanya Baycheva-Merger

    2018-03-01

    Full Text Available Adequate and accessible expert-based forest information has become increasingly in demand for effective decisions and informed policies in the forest and forest-related sectors in Europe. Such accessibility requires a collaborative environment and constant information exchange between various actors at different levels and across sectors. However, information exchange in complex policy environments is challenging, and is often constrained by various institutional, actor-oriented, and technical factors. In forest policy research, no study has yet attempted to simultaneously account for these multiple factors influencing expert-based forest information exchange. By employing a policy analysis from an actor-centred institutionalist perspective, this paper aims to provide an overview of the most salient institutional and actor-oriented factors that are perceived as constraining forest information exchange at the national level across European countries. We employ an exploratory research approach, and utilise both qualitative and quantitative methods to analyse our data. The data was collected through a semi-structured survey targeted at forest and forest-related composite actors in 21 European countries. The results revealed that expert-based forest information exchange is constrained by a number of compound and closely interlinked institutional and actor-oriented factors, reflecting the complex interplay of institutions and actors at the national level. The most salient institutional factors that stand out include restrictive or ambiguous data protection policies, inter-organisational information arrangements, different organisational cultures, and a lack of incentives. Forest information exchange becomes even more complex when actors are confronted with actor-oriented factors such as issues of distrust, diverging preferences and perceptions, intellectual property rights, and technical capabilities. We conclude that expert-based forest information

  17. The International Cancer Expert Corps: a unique approach for sustainable cancer care in low and lower-middle income countries

    Directory of Open Access Journals (Sweden)

    C. Norman Coleman

    2014-11-01

    Full Text Available The growing burden of non-communicable diseases including cancer in low- and lower-middle income countries (LMICs and in geographic-access limited settings within resource-rich countries requires effective and sustainable solutions. The International Cancer Expert Corps is pioneering a novel global mentorship-partnership model to address workforce capability and capacity within cancer disparities regions built on the requirement for local investment in personnel and infrastructure. Radiation oncology will be a key component given its efficacy for cure even for the advanced stages of disease often encountered and for palliation. The goal for an ICEC Center within these health disparities settings is to develop and retain a high quality sustainable workforce who can provide the best possible cancer care, conduct research and become a regional center of excellence. The ICEC Center can also serve as a focal point for economic, social and healthcare system improvement. ICEC is establishing teams of Experts with expertise to mentor in the broad range of subjects required to establish and sustain cancer care programs. The Hubs are cancer centers or other groups and professional societies in resource-rich settings that will comprise the global infrastructure coordinated by ICEC Central. A transformational tenet of ICEC is that altruistic, human-service activity should be an integral part of a healthcare career. To achieve a critical mass of mentors ICEC is working with three groups: academia, private practice and senior mentors/retirees. While in-kind support will be important, ICEC seeks support for the career time dedicated to this activity through grants, government support, industry and philanthropy. Providing care for people with cancer in LMICs has been a recalcitrant problem. The alarming increase in the global burden of cancer in LMICs underscores the urgency and makes this an opportune time for novel and sustainable solutions to transform

  18. The international cancer expert corps: a unique approach for sustainable cancer care in low and lower-middle income countries.

    Science.gov (United States)

    Coleman, C Norman; Formenti, Silvia C; Williams, Tim R; Petereit, Daniel G; Soo, Khee C; Wong, John; Chao, Nelson; Shulman, Lawrence N; Grover, Surbhi; Magrath, Ian; Hahn, Stephen; Liu, Fei-Fei; DeWeese, Theodore; Khleif, Samir N; Steinberg, Michael; Roth, Lawrence; Pistenmaa, David A; Love, Richard R; Mohiuddin, Majid; Vikram, Bhadrasain

    2014-01-01

    The growing burden of non-communicable diseases including cancer in low- and lower-middle income countries (LMICs) and in geographic-access limited settings within resource-rich countries requires effective and sustainable solutions. The International Cancer Expert Corps (ICEC) is pioneering a novel global mentorship-partnership model to address workforce capability and capacity within cancer disparities regions built on the requirement for local investment in personnel and infrastructure. Radiation oncology will be a key component given its efficacy for cure even for the advanced stages of disease often encountered and for palliation. The goal for an ICEC Center within these health disparities settings is to develop and retain a high-quality sustainable workforce who can provide the best possible cancer care, conduct research, and become a regional center of excellence. The ICEC Center can also serve as a focal point for economic, social, and healthcare system improvement. ICEC is establishing teams of Experts with expertise to mentor in the broad range of subjects required to establish and sustain cancer care programs. The Hubs are cancer centers or other groups and professional societies in resource-rich settings that will comprise the global infrastructure coordinated by ICEC Central. A transformational tenet of ICEC is that altruistic, human-service activity should be an integral part of a healthcare career. To achieve a critical mass of mentors, ICEC is working with three groups: academia, private practice, and senior mentors/retirees. While in-kind support will be important, ICEC seeks support for the career time dedicated to this activity through grants, government support, industry, and philanthropy. Providing care for people with cancer in LMICs has been a recalcitrant problem. The alarming increase in the global burden of cancer in LMICs underscores the urgency and makes this an opportune time for novel and sustainable solutions to transform cancer care

  19. Selection Input Output by Restriction Using DEA Models Based on a Fuzzy Delphi Approach and Expert Information

    Science.gov (United States)

    Arsad, Roslah; Nasir Abdullah, Mohammad; Alias, Suriana; Isa, Zaidi

    2017-09-01

    Stock evaluation has always been an interesting problem for investors. In this paper, a comparison of the efficiency of stocks of listed companies in Bursa Malaysia was made through the application of the estimation method of Data Envelopment Analysis (DEA). One of the interesting research subjects in DEA is the selection of appropriate input and output parameters. In this study, DEA was used to measure the efficiency of stocks of listed companies in Bursa Malaysia in terms of financial ratios, in order to evaluate stock performance. Based on previous studies and the Fuzzy Delphi Method (FDM), the most important financial ratios were selected. The results indicated that return on equity, return on assets, net profit margin, operating profit margin, earnings per share, price to earnings and debt to equity were the most important ratios. Using expert information, all the parameters were classified as inputs and outputs. The main objectives were to identify the most critical financial ratios, classify them based on expert information, and compute the relative efficiency scores of stocks as well as rank them completely within the construction and materials industry. The method of analysis using Alirezaee and Afsharian’s model was employed in this study, where the originality of Charnes, Cooper and Rhodes (CCR) with the assumption of Constant Returns to Scale (CRS) still holds. This method of ranking the relative efficiency of decision-making units (DMUs) is value-added by the Balance Index. The data of interest were for the year 2015, and the research population includes the listed companies in the construction and materials sector of the stock market (63 companies). According to the ranking, the proposed model can completely rank the 63 companies using the selected financial ratios.

  20. A potential theory approach to an algorithm of conceptual space partitioning

    Directory of Open Access Journals (Sweden)

    Roman Urban

    2017-12-01

    Full Text Available A potential theory approach to an algorithm of conceptual space partitioning This paper proposes a new classification algorithm for the partitioning of a conceptual space. All the algorithms which have been used until now have mostly been based on the theory of Voronoi diagrams. This paper proposes an approach based on potential theory, with the criteria for measuring similarities between objects in the conceptual space being based on the Newtonian potential function. The notion of a fuzzy prototype, which generalizes the previous definition of a prototype, is introduced. Furthermore, the necessary conditions that a natural concept must meet are discussed. Instead of convexity, as proposed by Gärdenfors, the notion of geodesically convex sets is used. Thus, if a concept corresponds to a set which is geodesically convex, it is a natural concept. This definition applies, for example, if the conceptual space is an Euclidean space. As a by-product of the construction of the algorithm, an extension of the conceptual space to d-dimensional Riemannian manifolds is obtained.
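
    The Newtonian-potential similarity criterion described above can be made concrete with a tiny sketch: a point of the conceptual space is assigned to the fuzzy prototype whose exemplars exert the largest total potential on it. This is one illustrative reading of the criterion, with invented exemplars and coordinates, not the paper's formal construction.

```python
# Assign a query point to the prototype whose exemplars exert the largest total
# Newtonian-style potential sum(1 / ||x - p||) on it.
import numpy as np

def potential(point, exemplars, eps=1e-9):
    d = np.linalg.norm(exemplars - point, axis=1)
    return float(np.sum(1.0 / (d + eps)))

# Two fuzzy prototypes, each represented by a handful of exemplar points.
prototypes = {
    "warm-colour": np.array([[0.9, 0.2], [0.8, 0.3], [0.95, 0.1]]),
    "cool-colour": np.array([[0.1, 0.8], [0.2, 0.9], [0.05, 0.7]]),
}

query = np.array([0.7, 0.35])
scores = {name: potential(query, pts) for name, pts in prototypes.items()}
print(scores)
print("assigned to:", max(scores, key=scores.get))   # expected: warm-colour
```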

  1. A New Spectral Shape-Based Record Selection Approach Using Np and Genetic Algorithms

    Directory of Open Access Journals (Sweden)

    Edén Bojórquez

    2013-01-01

    Full Text Available With the aim of improving code-based real-record selection criteria, an approach inspired by a proxy parameter of spectral shape, named Np, is analyzed. The procedure is based on several objectives aimed at minimizing the record-to-record variability of the ground motions selected for seismic structural assessment. In order to select the best set of ground motion records to be used as input for nonlinear dynamic analysis, an optimization approach is applied using genetic algorithms focused on finding the set of records most compatible with a target spectrum and target Np values. The results of the new Np-based approach suggest that the real accelerograms obtained with this procedure reduce the scatter of the response spectra compared with the traditional approach; furthermore, the mean spectrum of the set of records is very similar to the target seismic design spectrum in the range of periods of interest, and at the same time, similar Np values are obtained for the selected records and the target spectrum.

  2. The Application of Machine Learning Algorithms for Text Mining based on Sentiment Analysis Approach

    Directory of Open Access Journals (Sweden)

    Reza Samizade

    2018-06-01

    Full Text Available Classification of online texts and comments into two categories of positive and negative sentiment among social media users is of high importance in research related to text mining. In this research, we applied supervised classification methods to classify Persian texts based on sentiment in cyberspace. The result of this research is a system that can decide whether a comment published in cyberspace, such as on social networks, is considered positive or negative. The comments published on Persian movie and movie-review websites from 1392 to 1395 are considered as the dataset for this research. Part of these data is used for training and the rest for testing. Prior to implementing the algorithms, pre-processing activities such as tokenizing, removing stop words, and n-gram extraction were applied to the texts. Naïve Bayes, Neural Networks and Support Vector Machines were used for text classification in this study. Out-of-sample tests showed that there is no evidence indicating that the accuracy of the SVM approach is statistically higher than that of Naïve Bayes, or that the accuracy of Naïve Bayes is not statistically higher than that of the NN approach. However, the researchers can conclude that the accuracy of classification using the SVM approach is statistically higher than the accuracy of the NN approach at the 5% significance level.
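
    A generic supervised pipeline of the kind compared above (bag-of-words or TF-IDF features feeding Naïve Bayes or a linear SVM) can be sketched with scikit-learn; the English placeholder comments and labels below are invented stand-ins for the Persian movie-review corpus used in the study.

```python
# Generic sentiment-classification pipeline: TF-IDF features -> linear classifier.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

train_texts = ["great film, loved the acting", "terrible plot and boring scenes",
               "wonderful soundtrack and pacing", "awful, a waste of time"]
train_labels = ["positive", "negative", "positive", "negative"]

for clf in (MultinomialNB(), LinearSVC()):
    model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), clf)
    model.fit(train_texts, train_labels)
    print(type(clf).__name__, model.predict(["boring film", "loved it"]))
```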

  3. A new approach for visual identification of orange varieties using neural networks and metaheuristic algorithms

    Directory of Open Access Journals (Sweden)

    Sajad Sabzi

    2018-03-01

    Full Text Available Accurate classification of fruit varieties in processing factories and during post-harvesting applications is a challenge that has been widely studied. This paper presents a novel approach to automatic fruit identification applied to three common varieties of oranges (Citrus sinensis L., namely Bam, Payvandi and Thomson. A total of 300 color images were used for the experiments, 100 samples for each orange variety, which are publicly available. After segmentation, 263 parameters, including texture, color and shape features, were extracted from each sample using image processing. Among them, the 6 most effective features were automatically selected by using a hybrid approach consisting of an artificial neural network and particle swarm optimization algorithm (ANN-PSO. Then, three different classifiers were applied and compared: hybrid artificial neural network – artificial bee colony (ANN-ABC; hybrid artificial neural network – harmony search (ANN-HS; and k-nearest neighbors (kNN. The experimental results show that the hybrid approaches outperform the results of kNN. The average correct classification rate of ANN-HS was 94.28%, while ANN-ABS achieved 96.70% accuracy with the available data, contrasting with the 70.9% baseline accuracy of kNN. Thus, this new proposed methodology provides a fast and accurate way to classify multiple fruits varieties, which can be easily implemented in processing factories. The main contribution of this work is that the method can be directly adapted to other use cases, since the selection of the optimal features and the configuration of the neural network are performed automatically using metaheuristic algorithms.

  4. The First Expert CAI System

    Science.gov (United States)

    Feurzeig, Wallace

    1984-01-01

    The first expert instructional system, the Socratic System, was developed in 1964. One of the earliest applications of this system was in the area of differential diagnosis in clinical medicine. The power of the underlying instructional paradigm was demonstrated and the potential of the approach for valuably supplementing medical instruction was recognized. Twenty years later, despite further educationally significant advances in expert systems technology and enormous reductions in the cost of computers, expert instructional methods have found very little application in medical schools.

  5. FORMATION ALGORITHM OF DYNAMIC TURN FOR UNMANNED AERIAL VEHICLES ON APPROACH

    Directory of Open Access Journals (Sweden)

    Igor A. Chekhov

    2017-01-01

    Full Text Available Great interest in the use of unmanned aerial vehicles (UAVs) has recently been shown, both by economic entities and by national security, defense and law enforcement agencies. However, civil use of UAVs currently faces a number of problems connected with the use of airspace, and without solving them it is impossible to use UAVs fully. It should be noted that the level of flight safety, both for regular aircraft and for UAVs, is of primary importance. It is necessary to use modern methods of data processing and to be able to control the current flight safety level quickly and effectively. For this purpose, the fullest possible information on the current movement of aircraft and unmanned aerial vehicles, as well as on the structure of the airspace in use, has to be employed. The development of procedures and maneuvers that resolve potential traffic conflicts involving UAVs is extremely important for air traffic safety, especially in the vicinity of the destination or landing aerodrome. This article considers the possibility of creating an algorithm for dynamic turn formation and trajectory selection for unmanned aerial vehicles on approach. Automatic dependent surveillance-broadcast technology was used to collect statistical data. The landing algorithm is implemented based on the criteria of ensuring efficiency and flight safety. The developed software uses only open data on aircraft movement in terminal airspace. The suggested algorithm can be adapted for air traffic management of UAVs in any selected airspace.

  6. A New Parallel Approach for Accelerating the GPU-Based Execution of Edge Detection Algorithms.

    Science.gov (United States)

    Emrani, Zahra; Bateni, Soroosh; Rabbani, Hossein

    2017-01-01

    Real-time image processing is used in a wide variety of applications like those in medical care and industrial processes. This technique in medical care has the ability to display important patient information graphically, which can supplement and help the treatment process. Medical decisions made based on real-time images are more accurate and reliable. According to recent research, graphic processing unit (GPU) programming is a useful method for improving the speed and quality of medical image processing and is one of the ways of real-time image processing. Edge detection is an early stage in most of the image processing methods for the extraction of features and object segments from a raw image. The Canny method, Sobel and Prewitt filters, and the Roberts' Cross technique are some examples of edge detection algorithms that are widely used in image processing and machine vision. In this work, these algorithms are implemented using the Compute Unified Device Architecture (CUDA), Open Source Computer Vision (OpenCV), and Matrix Laboratory (MATLAB) platforms. An existing parallel method for the Canny approach has been modified further to run in a fully parallel manner. This has been achieved by replacing the breadth-first search procedure with a parallel method. These algorithms have been compared by testing them on a database of optical coherence tomography images. The comparison of results shows that the proposed implementation of the Canny method on GPU using the CUDA platform improves the speed of execution by 2-100× compared to the central processing unit-based implementation using the OpenCV and MATLAB platforms.
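
    For reference, a CPU-side Sobel gradient-magnitude filter in NumPy/SciPy is sketched below; it shows the per-pixel computation that the GPU implementations discussed above parallelize, and is not the paper's CUDA code.

    ```python
    import numpy as np
    from scipy.ndimage import convolve

    def sobel_edges(image):
        """Gradient-magnitude edge map of a 2-D grayscale image (CPU reference)."""
        kx = np.array([[-1, 0, 1],
                       [-2, 0, 2],
                       [-1, 0, 1]], dtype=float)   # horizontal gradient kernel
        ky = kx.T                                    # vertical gradient kernel
        gx = convolve(image.astype(float), kx, mode="nearest")
        gy = convolve(image.astype(float), ky, mode="nearest")
        return np.hypot(gx, gy)
    ```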

  7. System Experts and Decision Making Experts in Transdisciplinary Projects

    Science.gov (United States)

    Mieg, Harald A.

    2006-01-01

    Purpose: This paper aims at a better understanding of expert roles in transdisciplinary projects. Thus, the main purpose is the analysis of the roles of experts in transdisciplinary projects. Design/methodology/approach: The analysis of the ETH-UNS case studies from the point of view of the psychology of expertise and the sociology of professions…

  8. New MPPT algorithm for PV applications based on hybrid dynamical approach

    KAUST Repository

    Elmetennani, Shahrazed

    2016-10-24

    This paper proposes a new Maximum Power Point Tracking (MPPT) algorithm for photovoltaic applications using the multicellular converter as a stage of power adaptation. The proposed MPPT technique has been designed using a hybrid dynamical approach to model the photovoltaic generator. The hybrid dynamical theory has been applied taking advantage of the particular topology of the multicellular converter. Then, a hybrid automaton has been established to optimize the power production. The maximization of the produced solar energy is achieved by switching between the different operative modes of the hybrid automaton, which is conditioned by invariance and transition conditions. These conditions have been validated by simulation tests under different conditions of temperature and irradiance. Moreover, the performance of the proposed algorithm has then been evaluated against standard MPPT techniques, both numerically and through experimental tests under varying external working conditions. The results show the interesting features that the hybrid MPPT technique offers in terms of performance and simplicity for real-time implementation.
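
    The hybrid-automaton technique itself is tied to the multicellular converter topology, but a standard perturb-and-observe controller of the kind used as a comparison baseline can be sketched in a few lines; the step size and variable names below are illustrative assumptions, not the paper's method.

    ```python
    def perturb_and_observe(v, p, v_prev, p_prev, step=0.01):
        """One iteration of a standard P&O MPPT baseline.

        v, p           : present PV voltage and power measurements
        v_prev, p_prev : measurements from the previous iteration
        Returns the new voltage reference for the power converter.
        """
        dv, dp = v - v_prev, p - p_prev
        if dp == 0:
            return v                  # at (or oscillating around) the maximum power point
        # Keep moving the operating point in the direction that increased power.
        if (dp > 0) == (dv > 0):
            return v + step           # same direction as the last perturbation
        return v - step               # reverse the perturbation direction
    ```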

  9. Brake fault diagnosis using Clonal Selection Classification Algorithm (CSCA – A statistical learning approach

    Directory of Open Access Journals (Sweden)

    R. Jegadeeshwaran

    2015-03-01

    Full Text Available In an automobile, the brake system is an essential part responsible for control of the vehicle. Any failure in the brake system affects the vehicle's motion and can have catastrophic effects on vehicle and passenger safety. Thus the brake system plays a vital role in an automobile, and hence condition monitoring of the brake system is essential. Vibration-based condition monitoring using machine learning techniques is gaining momentum. This study is one such attempt to perform condition monitoring of a hydraulic brake system through vibration analysis. In this research, the performance of a Clonal Selection Classification Algorithm (CSCA) for brake fault diagnosis is reported. A hydraulic brake system test rig was fabricated. Under good and faulty conditions of the brake system, vibration signals were acquired using a piezoelectric transducer. Statistical parameters were extracted from the vibration signal, and the best feature set for classification was identified using an attribute evaluator. The selected features were then classified using CSCA. The classification accuracy of this artificial intelligence technique has been compared with other machine learning approaches and discussed. The Clonal Selection Classification Algorithm performs better and gives the maximum classification accuracy (96%) for the fault diagnosis of a hydraulic brake system.

  10. Numerical algorithm for rigid body position estimation using the quaternion approach

    Science.gov (United States)

    Zigic, Miodrag; Grahovac, Nenad

    2017-11-01

    This paper deals with rigid body attitude estimation on the basis of the data obtained from an inertial measurement unit mounted on the body. The aim of this work is to present the numerical algorithm, which can be easily applied to the wide class of problems concerning rigid body positioning, arising in aerospace and marine engineering, or in increasingly popular robotic systems and unmanned aerial vehicles. Following the considerations of kinematics of rigid bodies, the relations between accelerations of different points of the body are given. A rotation matrix is formed using the quaternion approach to avoid singularities. We present numerical procedures for determination of the absolute accelerations of the center of mass and of an arbitrary point of the body expressed in the inertial reference frame, as well as its attitude. An application of the algorithm to the example of a heavy symmetrical gyroscope is presented, where input data for the numerical procedure are obtained from the solution of differential equations of motion, instead of using sensor measurements.
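
    A minimal sketch of the quaternion kinematics at the core of such an algorithm is given below: body angular rates are integrated through dq/dt = 0.5·q⊗[0, ω], with renormalization to limit numerical drift. It is a simplified illustration, not the authors' full procedure for estimating accelerations of body points.

    ```python
    import numpy as np

    def quat_multiply(q, r):
        """Hamilton product of quaternions q = [w, x, y, z] and r."""
        w0, x0, y0, z0 = q
        w1, x1, y1, z1 = r
        return np.array([
            w0*w1 - x0*x1 - y0*y1 - z0*z1,
            w0*x1 + x0*w1 + y0*z1 - z0*y1,
            w0*y1 - x0*z1 + y0*w1 + z0*x1,
            w0*z1 + x0*y1 - y0*x1 + z0*w1,
        ])

    def propagate_attitude(q, omega_body, dt):
        """Integrate dq/dt = 0.5 * q ⊗ [0, omega] over one time step (explicit Euler)."""
        q_dot = 0.5 * quat_multiply(q, np.concatenate(([0.0], omega_body)))
        q_new = q + q_dot * dt
        return q_new / np.linalg.norm(q_new)   # renormalize to keep a unit quaternion
    ```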

  11. New MPPT algorithm for PV applications based on hybrid dynamical approach

    KAUST Repository

    Elmetennani, Shahrazed; Laleg-Kirati, Taous-Meriem; Djemai, M.; Tadjine, M.

    2016-01-01

    This paper proposes a new Maximum Power Point Tracking (MPPT) algorithm for photovoltaic applications using the multicellular converter as a stage of power adaptation. The proposed MPPT technique has been designed using a hybrid dynamical approach to model the photovoltaic generator. The hybrid dynamical theory has been applied taking advantage of the particular topology of the multicellular converter. Then, a hybrid automaton has been established to optimize the power production. The maximization of the produced solar energy is achieved by switching between the different operative modes of the hybrid automaton, which is conditioned by invariance and transition conditions. These conditions have been validated by simulation tests under different conditions of temperature and irradiance. Moreover, the performance of the proposed algorithm has then been evaluated against standard MPPT techniques, both numerically and through experimental tests under varying external working conditions. The results show the interesting features that the hybrid MPPT technique offers in terms of performance and simplicity for real-time implementation.

  12. Relativistic algorithm for time transfer in Mars missions under IAU Resolutions: an analytic approach

    International Nuclear Information System (INIS)

    Pan Jun-Yang; Xie Yi

    2015-01-01

    With tremendous advances in modern techniques, Einstein's general relativity has become an inevitable part of deep space missions. We investigate the relativistic algorithm for time transfer between the proper time τ of the onboard clock and the Geocentric Coordinate Time, which extends some previous works by including the effects of propagation of electromagnetic signals. In order to evaluate the implicit algebraic equations and integrals in the model, we take an analytic approach to work out their approximate values. This analytic model might be used in an onboard computer because of its limited capability to perform calculations. Taking an orbiter like Yinghuo-1 as an example, we find that the contributions of the Sun, the ground station and the spacecraft dominate the outcomes of the relativistic corrections to the model. (research papers)

  13. A Sensor Based Navigation Algorithm for a Mobile Robot using the DVFF Approach

    Directory of Open Access Journals (Sweden)

    A. OUALID DJEKOUNE

    2009-06-01

    Full Text Available Autonomous mobile robots often operate in environments for which prior maps are incomplete or inaccurate, and they require the safe execution of collision-free motion to a goal position. This paper addresses a complete navigation method for a mobile robot moving in an unknown environment. A novel method called DVFF is proposed, combining the Virtual Force Field (VFF) obstacle avoidance approach with global path planning based on the D* algorithm. While D* generates global path information towards a goal position, the VFF local controller generates the admissible trajectories that ensure safe robot motion. Results and analysis from a battery of experiments with this new method implemented on an ATRV2 mobile robot are shown.
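
    A minimal sketch of the virtual force field idea (attraction toward the goal plus repulsion from nearby obstacle cells) is given below; the gains and the inverse-cube repulsion profile are illustrative assumptions, and the coupling with D* path information used in DVFF is omitted.

    ```python
    import numpy as np

    def vff_direction(robot, goal, obstacles, k_att=1.0, k_rep=100.0, influence=2.0):
        """Resultant virtual force direction for one control step.

        robot, goal : 2-D positions (numpy arrays)
        obstacles   : list of 2-D obstacle-cell positions reported by the sensors
        """
        force = k_att * (goal - robot)                 # attractive pull toward the goal
        for obs in obstacles:
            diff = robot - obs
            dist = np.linalg.norm(diff)
            if 1e-9 < dist < influence:                # only nearby cells repel
                force += k_rep * diff / dist**3        # repulsion grows as distance shrinks
        norm = np.linalg.norm(force)
        return force / norm if norm > 0 else force     # unit steering direction
    ```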

  14. Quantitative segmentation of fluorescence microscopy images of heterogeneous tissue: Approach for tuning algorithm parameters

    Science.gov (United States)

    Mueller, Jenna L.; Harmany, Zachary T.; Mito, Jeffrey K.; Kennedy, Stephanie A.; Kim, Yongbaek; Dodd, Leslie; Geradts, Joseph; Kirsch, David G.; Willett, Rebecca M.; Brown, J. Quincy; Ramanujam, Nimmi

    2013-02-01

    The combination of fluorescent contrast agents with microscopy is a powerful technique to obtain real time images of tissue histology without the need for fixing, sectioning, and staining. The potential of this technology lies in the identification of robust methods for image segmentation and quantitation, particularly in heterogeneous tissues. Our solution is to apply sparse decomposition (SD) to monochrome images of fluorescently-stained microanatomy to segment and quantify distinct tissue types. The clinical utility of our approach is demonstrated by imaging excised margins in a cohort of mice after surgical resection of a sarcoma. Representative images of excised margins were used to optimize the formulation of SD and tune parameters associated with the algorithm. Our results demonstrate that SD is a robust solution that can advance vital fluorescence microscopy as a clinically significant technology.

  15. DESIGNING ALGORITHMS FOR SOLVING PHYSICS PROBLEMS ON THE BASIS OF MIVAR APPROACH

    Directory of Open Access Journals (Sweden)

    Dmitry Alekseevich Chuvikov

    2017-05-01

    Full Text Available The paper considers the process of designing algorithms for solving physics problems on the basis of the mivar approach. The work also describes general principles of mivar theory. The concepts of parameter, relation and class in mivar space are considered, and the properties that every object in a Wi!Mi model should have are described. An experiment testing the capabilities of the Wi!Mi software has been carried out, in which a model was designed that solves physics problems from the year 8 school course in Russia. The new version of the Wi!Mi 2.1 software was used to conduct the experiment. The physics model deals with the following areas: thermal phenomena, electric and electromagnetic phenomena, and optical phenomena.

  16. Innovations in ILC detector design using a particle flow algorithm approach

    International Nuclear Information System (INIS)

    Magill, S.; High Energy Physics

    2007-01-01

    The International Linear Collider (ILC) is a future e+e− collider that will produce particles with masses up to the design center-of-mass (CM) energy of 500 GeV. The ILC complements the Large Hadron Collider (LHC) which, although colliding protons at 14 TeV in the CM, will be luminosity-limited to particle production with masses up to ∼1-2 TeV. At the ILC, interesting cross-sections are small, but there are no backgrounds from underlying events, so masses should be measurable from hadronic decays to dijets (∼80% BR) as well as in leptonic decay modes. The precise measurement of jets will require major detector innovations, in particular to the calorimeter, which will be optimized to reconstruct final state particle 4-vectors, the so-called particle flow algorithm approach to jet reconstruction.

  17. A NAÏVE APPROACH TO SPEED UP PORTFOLIO OPTIMIZATION PROBLEM USING A MULTIOBJECTIVE GENETIC ALGORITHM

    Directory of Open Access Journals (Sweden)

    Baixauli-Soler, J. Samuel

    2012-05-01

    Full Text Available Genetic algorithms (GAs) are appropriate when investors have the objective of obtaining a mean-VaR efficient frontier, as minimising VaR leads to non-convex and non-differentiable risk-return optimisation problems. However, GAs are a time-consuming optimisation technique. In this paper, we propose a naïve approach consisting of using samples split by quartile of risk to obtain complete efficient frontiers in a reasonable computation time. Our results show that using reduced problems which only consider a quartile of the assets allows us to explore the efficient frontier for a large range of risk values. In particular, the third quartile allows us to obtain efficient frontiers from the 1.8% to 2.5% level of VaR quickly, while the first quartile of assets covers the 1% to 1.3% level of VaR.
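
    To illustrate the quartile-splitting idea, the sketch below evaluates the historical VaR of a portfolio restricted to assets in one risk quartile; the returns are synthetic placeholders, asset risk is proxied here by standard deviation, and the GA that would evolve the weight vectors is omitted.

    ```python
    import numpy as np

    def portfolio_var(weights, returns, alpha=0.05):
        """Historical Value-at-Risk (as a positive number) of a long-only portfolio."""
        port_returns = returns @ weights
        return -np.quantile(port_returns, alpha)

    # Placeholder historical returns: 500 days x 100 assets.
    rng = np.random.default_rng(1)
    returns = rng.normal(0.0005, 0.01, size=(500, 100))

    # Naive quartile split by individual asset risk (standard deviation as a proxy).
    risk = returns.std(axis=0)
    third_quartile = np.argsort(risk)[50:75]      # assets ranked 51-75 by risk

    # A GA would evolve weight vectors over this reduced asset subset only.
    w = rng.dirichlet(np.ones(third_quartile.size))
    print("VaR of a candidate portfolio:", portfolio_var(w, returns[:, third_quartile]))
    ```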

  18. A Systematic Approach to Modified BCJR MAP Algorithms for Convolutional Codes

    Directory of Open Access Journals (Sweden)

    Patenaude François

    2006-01-01

    Full Text Available Since Berrou, Glavieux and Thitimajshima published their landmark paper in 1993, different modified BCJR MAP algorithms have appeared in the literature. The existence of a relatively large number of similar but different modified BCJR MAP algorithms, derived using the Markov chain properties of convolutional codes, naturally leads to the following questions. What is the relationship among the different modified BCJR MAP algorithms? What are their relative performance, computational complexities, and memory requirements? In this paper, we answer these questions. We derive systematically four major modified BCJR MAP algorithms from the BCJR MAP algorithm using simple mathematical transformations. The connections between the original and the four modified BCJR MAP algorithms are established. A detailed analysis of the different modified BCJR MAP algorithms shows that they have identical computational complexities and memory requirements. Computer simulations demonstrate that the four modified BCJR MAP algorithms all have identical performance to the BCJR MAP algorithm.

  19. A Dynamic Health Assessment Approach for Shearer Based on Artificial Immune Algorithm

    Directory of Open Access Journals (Sweden)

    Zhongbin Wang

    2016-01-01

    Full Text Available In order to accurately identify the dynamic health of a shearer, reduce its operating faults and production accidents, and further improve coal production efficiency, a dynamic health assessment approach for shearers based on an artificial immune algorithm was proposed. The key technologies, such as the system framework, the selection of indicators for shearer dynamic health assessment, and the health assessment model, were provided, and the flowchart of the proposed approach was designed. A simulation example based on data collected from an industrial production scene was provided, achieving an accuracy of 96%. Furthermore, the comparison demonstrated that the proposed method exhibited higher classification accuracy than classifiers based on back propagation neural network (BP-NN) and support vector machine (SVM) methods. Finally, the proposed approach was applied to an engineering problem of shearer dynamic health assessment. The industrial application results showed that the research achievements of this paper could be used in combination with the shearer automation control system in a fully mechanized coal face. The simulation and application results indicated that the proposed method is feasible and outperforms the others.

  20. An Interval-Valued Approach to Business Process Simulation Based on Genetic Algorithms and the BPMN

    Directory of Open Access Journals (Sweden)

    Mario G.C.A. Cimino

    2014-05-01

    Full Text Available Simulating organizational processes characterized by interacting human activities, resources, business rules and constraints, is a challenging task, because of the inherent uncertainty, inaccuracy, variability and dynamicity. With regard to this problem, currently available business process simulation (BPS methods and tools are unable to efficiently capture the process behavior along its lifecycle. In this paper, a novel approach of BPS is presented. To build and manage simulation models according to the proposed approach, a simulation system is designed, developed and tested on pilot scenarios, as well as on real-world processes. The proposed approach exploits interval-valued data to represent model parameters, in place of conventional single-valued or probability-valued parameters. Indeed, an interval-valued parameter is comprehensive; it is the easiest to understand and express and the simplest to process, among multi-valued representations. In order to compute the interval-valued output of the system, a genetic algorithm is used. The resulting process model allows forming mappings at different levels of detail and, therefore, at different model resolutions. The system has been developed as an extension of a publicly available simulation engine, based on the Business Process Model and Notation (BPMN standard.

  1. A HYBRID GENETIC ALGORITHM-NEURAL NETWORK APPROACH FOR PRICING CORES AND REMANUFACTURED CORES

    Directory of Open Access Journals (Sweden)

    M. Seidi

    2012-01-01

    Full Text Available

    ENGLISH ABSTRACT: Sustainability has become a major issue in most economies, causing many leading companies to focus on product recovery and reverse logistics. Remanufacturing is an industrial process that makes used products reusable. One of the important aspects in both reverse logistics and remanufacturing is the pricing of returned and remanufactured products (called cores). In this paper, we focus on pricing the cores and remanufactured cores. First we present a mathematical model for this purpose. Since this model does not satisfy our requirements, we propose a simulation optimisation approach. This approach consists of a hybrid genetic algorithm based on a neural network employed as the fitness function. We use automata learning theory to obtain the learning rate required for training the neural network. Numerical results demonstrate that the optimal value of the acquisition price of cores and the price of remanufactured cores is obtained by this approach.

    AFRIKAANSE OPSOMMING (translated): Sustainability has become an important issue in most economies, which has compelled several companies to consider product recovery and reverse logistics. Remanufacturing is an industrial process that makes used products usable again. One of the important aspects in both reverse logistics and remanufacturing is the pricing of recovered and remanufactured products. This article focuses on the pricing aspects by means of a mathematical model.

  2. An Adaptive Agent-Based Model of Homing Pigeons: A Genetic Algorithm Approach

    Directory of Open Access Journals (Sweden)

    Francis Oloo

    2017-01-01

    Full Text Available Conventionally, agent-based modelling approaches start from a conceptual model capturing the theoretical understanding of the systems of interest. Simulation outcomes are then used “at the end” to validate the conceptual understanding. In today’s data-rich era, there are suggestions that models should be data-driven. Data-driven workflows are common in mathematical models. However, their application to agent-based models is still in its infancy. Integration of real-time sensor data into modelling workflows opens up the possibility of comparing simulations against real data during the model run. Calibration and validation procedures thus become automated processes that are iteratively executed during the simulation. We hypothesize that incorporation of real-time sensor data into agent-based models improves the predictive ability of such models; in particular, that such integration results in increasingly well-calibrated model parameters and rule sets. In this contribution, we explore this question by implementing a flocking model that evolves in real time. Specifically, we use a genetic algorithm approach to simulate representative parameters describing the flight routes of homing pigeons. The navigation parameters of pigeons are simulated and dynamically evaluated against emulated GPS sensor data streams and optimised based on the fitness of candidate parameters. As a result, the model was able to accurately simulate the relative-turn angles and step-distance of homing pigeons. Further, the optimised parameters could replicate loops, which are common patterns in flight tracks of homing pigeons. Finally, the use of genetic algorithms in this study allowed for a simultaneous data-driven optimization and sensitivity analysis.

  3. THE EUROPEAN SOCIETY FOR CLINICAL AND ECONOMIC ASPECTS OF OSTEOPOROSIS AND OSTEOARTHRITIS (ESCEO ALGORITHM FOR THE MANAGEMENT OF KNEE OSTEOARTHRITIS IS APPLICABLE TO RUSSIAN CLINICAL PRACTICE: A CONSENSUS STATEMENT OF LEADING RUSSIAN AND ESCEO OSTEOARTHRITIS EXPERTS

    Directory of Open Access Journals (Sweden)

    L. N. Denisov

    2016-01-01

    Full Text Available The European Society for Clinical and Economic Aspects of Osteoporosis and Osteoarthritis (ESCEO) treatment algorithm for the management of knee osteoarthritis (OA), published in December 2014, provides practical guidance for the prioritization of interventions. This current paper represents an assessment and endorsement of the algorithm by Russian experts in OA for use in Russian clinical practice, with the aim of providing easy-to-follow advice on how to establish a treatment flow in patients with knee OA, in support of the clinicians’ individualized assessment of the patient. Medications recommended by the ESCEO algorithm are available in Russia. In step 1, background maintenance therapy with symptomatic slow-acting drugs for osteoarthritis (SYSADOA) is advised, for which high-quality evidence is provided only for the formulations of patented crystalline glucosamine sulphate (pCGS) (Rottapharm/Meda) and prescription chondroitin sulfate. Paracetamol may be added for rescue analgesia only, due to limited efficacy and increasing safety signals. Topical non-steroidal anti-inflammatory drugs (NSAIDs) may provide additional symptomatic treatment with the same degree of efficacy as oral NSAIDs but without the systemic safety concerns. To be effective, topical NSAIDs must have high bioavailability, and among NSAIDs, molecules like etofenamate have high absorption and bioavailability alongside evidence for accumulation in synovial tissues. Oral NSAIDs maintain a central role in step 2 advanced management of persistent symptoms. However, oral NSAIDs are highly heterogeneous in terms of gastrointestinal and cardiovascular safety profile, and patient stratification with careful treatment selection is advocated to maximize the risk:benefit ratio. Intra-articular hyaluronic acid as a next step provides sustained clinical benefit with effects lasting up to 6 months after a short course of weekly injections. As a last step before surgery, the slow

  4. The tradition algorithm approach underestimates the prevalence of serodiagnosis of syphilis in HIV-infected individuals.

    Directory of Open Access Journals (Sweden)

    Bin Chen

    2017-07-01

    Full Text Available Currently, there are three algorithms for the screening of syphilis: the traditional algorithm, the reverse algorithm and the European Centre for Disease Prevention and Control (ECDC) algorithm. To date, there is no generally recognized diagnostic algorithm. When syphilis meets HIV, the situation is even more complex. To evaluate their screening performance and impact on the seroprevalence of syphilis in HIV-infected individuals, we conducted a cross-sectional study that included 865 serum samples from HIV-infected patients in a tertiary hospital. Every sample (one per patient) was tested with the toluidine red unheated serum test (TRUST), T. pallidum particle agglutination assay (TPPA), and Treponema pallidum enzyme immunoassay (TP-EIA) according to the manufacturer's instructions. The results of syphilis serological testing were interpreted following the different algorithms respectively. We directly compared the traditional syphilis screening algorithm with the reverse syphilis screening algorithm in this unique population. The reverse algorithm achieved remarkably higher seroprevalence of syphilis than the traditional algorithm (24.9% vs. 14.2%, p < 0.0001). Compared to the reverse algorithm, the traditional algorithm also had a missed serodiagnosis rate of 42.8%. The total percentages of agreement and corresponding kappa values of the traditional and ECDC algorithms compared with those of the reverse algorithm were as follows: 89.4%, 0.668; 99.8%, 0.994. There was a very good strength of agreement between the reverse and the ECDC algorithm. Our results support the reverse (or ECDC) algorithm for the screening of syphilis in HIV-infected populations. In addition, our study demonstrated that screening of HIV-infected populations using different algorithms may result in a statistically different seroprevalence of syphilis.

  5. The tradition algorithm approach underestimates the prevalence of serodiagnosis of syphilis in HIV-infected individuals.

    Science.gov (United States)

    Chen, Bin; Peng, Xiuming; Xie, Tiansheng; Jin, Changzhong; Liu, Fumin; Wu, Nanping

    2017-07-01

    Currently, there are three algorithms for the screening of syphilis: the traditional algorithm, the reverse algorithm and the European Centre for Disease Prevention and Control (ECDC) algorithm. To date, there is no generally recognized diagnostic algorithm. When syphilis meets HIV, the situation is even more complex. To evaluate their screening performance and impact on the seroprevalence of syphilis in HIV-infected individuals, we conducted a cross-sectional study that included 865 serum samples from HIV-infected patients in a tertiary hospital. Every sample (one per patient) was tested with toluidine red unheated serum test (TRUST), T. pallidum particle agglutination assay (TPPA), and Treponema pallidum enzyme immunoassay (TP-EIA) according to the manufacturer's instructions. The results of syphilis serological testing were interpreted following different algorithms respectively. We directly compared the traditional syphilis screening algorithm with the reverse syphilis screening algorithm in this unique population. The reverse algorithm achieved remarkably higher seroprevalence of syphilis than the traditional algorithm (24.9% vs. 14.2%, p < 0.0001). Compared to the reverse algorithm, the traditional algorithm also had a missed serodiagnosis rate of 42.8%. The total percentages of agreement and corresponding kappa values of the traditional and ECDC algorithms compared with those of the reverse algorithm were as follows: 89.4%, 0.668; 99.8%, 0.994. There was a very good strength of agreement between the reverse and the ECDC algorithm. Our results supported the reverse (or ECDC) algorithm in screening of syphilis in HIV-infected populations. In addition, our study demonstrated that screening of HIV-infected populations using different algorithms may result in a statistically different seroprevalence of syphilis.

  6. Planning additional drilling campaign using two-space genetic algorithm: A game theoretical approach

    Science.gov (United States)

    Kumral, Mustafa; Ozer, Umit

    2013-03-01

    Grade and tonnage are the most important technical uncertainties in mining ventures because of the use of estimations/simulations, which are mostly generated from drill data. Open pit mines are planned and designed on the basis of the blocks representing the entire orebody. Each block has a different estimation/simulation variance, reflecting uncertainty to some extent. The estimation/simulation realizations are submitted to the mine production scheduling process. However, the use of a block model with varying estimation/simulation variances will lead to serious risk in the scheduling. With multiple simulations, the dispersion variances of blocks can be thought of as accounting for technical uncertainties. However, the dispersion variance cannot handle the uncertainty associated with varying estimation/simulation variances of blocks. This paper proposes an approach that generates the configuration of the best additional drilling campaign so as to produce more homogeneous estimation/simulation variances of blocks. In other words, the objective is to find the best drilling configuration in such a way as to minimize grade uncertainty under a budget constraint. The uncertainty measure of the optimization process in this paper is the interpolation variance, which considers data locations and grades. The problem is expressed as a minmax problem, which focuses on finding the best worst-case performance, i.e., minimizing the interpolation variance of the block generating the maximum interpolation variance. Since the optimization model requires computing the interpolation variances of the blocks being simulated/estimated in each iteration, the problem cannot be solved by standard optimization tools. This motivates the use of a two-space genetic algorithm (GA) approach to solve the problem. The technique has two spaces: feasible drill hole configurations with minimization of interpolation variance, and drill hole simulations with maximization of interpolation variance. The two spaces interact to find a minmax solution.

  7. Action and familiarity effects on self and other expert musicians’ Laban effort-shape analyses of expressive bodily behaviors in instrumental music performance: a case study approach

    Science.gov (United States)

    Broughton, Mary C.; Davidson, Jane W.

    2014-01-01

    Self-reflective performance review and expert evaluation are features of Western music performance practice. While music is usually the focus, visual information provided by performing musicians’ expressive bodily behaviors communicates expressiveness to musically trained and untrained observers. Yet, within a seemingly homogenous group, such as one of musically trained individuals, diversity of experience exists. Individual differences potentially affect perception of the subtleties of expressive performance, and performers’ effective communication of their expressive intentions. This study aimed to compare self- and other expert musicians’ perception of expressive bodily behaviors observed in marimba performance. We hypothesized that analyses of expressive bodily behaviors differ between expert musicians according to their specialist motor expertise and familiarity with the music. Two professional percussionists and experienced marimba players, and one professional classical singer took part in the study. Participants independently conducted Laban effort-shape analysis – proposing that intentions manifest in bodily activity are understood through shared embodied processes – of a marimbist’s expressive bodily behaviors in an audio-visual performance recording. For one percussionist, this was a self-reflective analysis. The work was unfamiliar to the other percussionist and singer. Perception of the performer’s expressive bodily behaviors appeared to differ according to participants’ individual instrumental or vocal motor expertise, and familiarity with the music. Furthermore, individual type of motor experience appeared to direct participants’ attention in approaching the analyses. Findings support forward and inverse perception–action models, and embodied cognitive theory. Implications offer scientific rigor and artistic interest for how performance practitioners can reflectively analyze performance to improve expressive communication.

  8. Action and familiarity effects on self and other expert musicians’ Laban effort-shape analyses of expressive bodily behaviors in instrumental music performance: A case study approach

    Directory of Open Access Journals (Sweden)

    Mary C Broughton

    2014-10-01

    Full Text Available Self-reflective performance review and expert evaluation are features of Western music performance practice. While music is usually the focus, visual information provided by performing musicians’ expressive bodily behaviors communicates expressiveness to musically trained and untrained observers. Yet, within a seemingly homogenous group such as one of musically trained individuals, diversity of experience exists. Individual differences potentially affect perception of the subtleties of expressive performance, and performers’ effective communication of their expressive intentions. This study aimed to compare self- and other expert musicians’ perception of expressive bodily behaviors observed in marimba performance. We hypothesised that analyses of expressive bodily behaviors differ between expert musicians according to their specialist motor expertise and familiarity with the music. Two professional percussionists and experienced marimba players, and one professional classical singer took part in the study. Participants independently conducted Laban effort-shape analysis – proposing that intentions manifest in bodily activity are understood through shared embodied processes – of a marimbist’s expressive bodily behaviors in an audio-visual performance recording. For one percussionist, this was a self-reflective analysis. The work was unfamiliar to the other percussionist and singer. Perception of the performer’s expressive bodily behaviors differed according to participants’ individual instrumental or vocal motor expertise, and familiarity with the music. Furthermore, individual type of motor experience appeared to direct participants’ attention in approaching the analyses. Findings support forward and inverse perception-action models, and embodied cognitive theory. Implications offer scientific rigour and artistic interest for how performance practitioners can reflectively analyze performance to improve expressive

  9. Prognostic and health management for engineering systems: a review of the data-driven approach and algorithms

    Directory of Open Access Journals (Sweden)

    Thamo Sutharssan

    2015-07-01

    Full Text Available Prognostics and health management (PHM) has become an important component of many engineering systems and products, where algorithms are used to detect anomalies, diagnose faults and predict remaining useful lifetime (RUL). PHM can provide many advantages to users and maintainers. Although the primary goals are to ensure safety, provide the state of health and estimate the RUL of components and systems, there are also financial benefits such as operational and maintenance cost reductions and extended lifetime. This study aims at reviewing the current status of algorithms and methods used to underpin different existing PHM approaches. The focus is on providing a structured and comprehensive classification of the existing state-of-the-art PHM approaches, data-driven approaches and algorithms.

  10. Quantitative Trait Loci Mapping Problem: An Extinction-Based Multi-Objective Evolutionary Algorithm Approach

    Directory of Open Access Journals (Sweden)

    Nicholas S. Flann

    2013-09-01

    Full Text Available The Quantitative Trait Loci (QTL) mapping problem aims to identify regions in the genome that are linked to phenotypic features of the developed organism that vary in degree. It is a principal step in determining targets for further genetic analysis and is key in decoding the role of specific genes that control quantitative traits within species. Applications include identifying genetic causes of disease, optimization of cross-breeding for desired traits and understanding trait diversity in populations. In this paper a new multi-objective evolutionary algorithm (MOEA) method is introduced and is shown to increase the accuracy of QTL mapping identification for both independent and epistatic loci interactions. The MOEA method optimizes over the space of possible partial least squares (PLS) regression QTL models and considers the conflicting objectives of model simplicity versus model accuracy. By optimizing for minimal model complexity, MOEA has the advantage of solving the over-fitting problem of conventional PLS models. The effectiveness of the method is confirmed by comparing the new method with Bayesian Interval Mapping approaches over a series of test cases where the optimal solutions are known. This approach can be applied to many problems that arise in analysis of genomic data sets where the number of features far exceeds the number of observations and where features can be highly correlated.

  11. A Parallel Approach To Optimum Actuator Selection With a Genetic Algorithm

    Science.gov (United States)

    Rogers, James L.

    2000-01-01

    Recent discoveries in smart technologies have created a variety of aerodynamic actuators which have great potential to enable entirely new approaches to aerospace vehicle flight control. For a revolutionary concept such as a seamless aircraft with no moving control surfaces, there is a large set of candidate locations for placing actuators, resulting in a substantially larger number of combinations to examine in order to find an optimum placement satisfying the mission requirements. The placement of actuators on a wing determines the control effectiveness of the airplane. One approach to placement maximizes the moments about the pitch, roll, and yaw axes, while minimizing the coupling. Genetic algorithms have been instrumental in achieving good solutions to discrete optimization problems, such as the actuator placement problem. As a proof of concept, a genetic algorithm has been developed to find the minimum number of actuators required to provide uncoupled pitch, roll, and yaw control for a simplified, untapered, unswept wing model. Finding the optimum placement by searching all possible combinations would require 1,100 hours. Formulating the problem as a multi-objective problem and modifying it to take advantage of the parallel processing capabilities of a multi-processor computer reduces the optimization time to 22 hours.

  12. A data mining approach to optimize pellets manufacturing process based on a decision tree algorithm.

    Science.gov (United States)

    Ronowicz, Joanna; Thommes, Markus; Kleinebudde, Peter; Krysiński, Jerzy

    2015-06-20

    The present study focuses on a thorough analysis of the cause-effect relationships between pellet formulation characteristics (pellet composition as well as process parameters) and the selected quality attribute of the final product. The quality of the pellets was expressed by their shape, using the aspect ratio value. A data matrix for chemometric analysis consisted of 224 pellet formulations prepared using eight different active pharmaceutical ingredients and several excipients, under different extrusion/spheronization process conditions. The data set contained 14 input variables (both formulation and process variables) and one output variable (pellet aspect ratio). A tree regression algorithm consistent with the Quality by Design concept was applied to obtain deeper understanding and knowledge of the formulation and process parameters affecting final pellet sphericity. A clear, interpretable set of decision rules was generated. The spheronization speed, spheronization time, number of holes and water content of the extrudate were recognized as the key factors influencing pellet aspect ratio. The most spherical pellets were achieved by using a large number of holes during extrusion, a high spheronizer speed and a longer spheronization time. The described data mining approach enhances knowledge about the pelletization process and simultaneously facilitates the search for the optimal process conditions necessary to achieve ideally spherical pellets, resulting in good flow characteristics. This data mining approach can be taken into consideration by industrial formulation scientists to support rational decision making in the field of pellet technology. Copyright © 2015 Elsevier B.V. All rights reserved.
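
    A minimal scikit-learn sketch of tree regression on process variables is shown below; the data are synthetic placeholders (the study's 224 formulations are not reproduced), but it illustrates how interpretable decision rules relating process parameters to aspect ratio can be extracted.

    ```python
    import numpy as np
    from sklearn.tree import DecisionTreeRegressor, export_text

    # Placeholder data: 4 of the 14 formulation/process variables named in the study.
    feature_names = ["spheronization_speed", "spheronization_time", "n_holes", "water_content"]
    rng = np.random.default_rng(0)
    X = rng.uniform(size=(224, 4))
    y = 1.1 + 0.3 * (1 - X[:, 0]) + 0.1 * rng.normal(size=224)   # synthetic aspect ratio

    tree = DecisionTreeRegressor(max_depth=3, min_samples_leaf=10).fit(X, y)
    print(export_text(tree, feature_names=feature_names))         # human-readable decision rules
    ```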

  13. Embedding SAS approach into conjugate gradient algorithms for asymmetric 3D elasticity problems

    Energy Technology Data Exchange (ETDEWEB)

    Chen, Hsin-Chu; Warsi, N.A. [Clark Atlanta Univ., GA (United States); Sameh, A. [Univ. of Minnesota, Minneapolis, MN (United States)

    1996-12-31

    In this paper, we present two strategies to embed the SAS (symmetric-and-antisymmetric) scheme into conjugate gradient (CG) algorithms to make solving 3D elasticity problems, with or without global reflexive symmetry, more efficient. The SAS approach is physically a domain decomposition scheme that takes advantage of reflexive symmetry of discretized physical problems, and algebraically a matrix transformation method that exploits special reflexivity properties of the matrix resulting from discretization. In addition to offering large-grain parallelism, which is valuable in a multiprocessing environment, the SAS scheme also has the potential for reducing arithmetic operations in the numerical solution of a reasonably wide class of scientific and engineering problems. This approach can be applied directly to problems that have global reflexive symmetry, yielding smaller and independent subproblems to solve, or indirectly to problems with partial symmetry, resulting in loosely coupled subproblems. The decomposition is achieved by separating the reflexive subspace from the antireflexive one, possessed by a special class of matrices A ∈ C^(n×n) that satisfy the relation A = PAP, where P is a reflection matrix (a symmetric signed permutation matrix).

  14. On e-business strategy planning and performance evaluation: An adaptive algorithmic managerial approach

    Directory of Open Access Journals (Sweden)

    Alexandra Lipitakis

    2017-07-01

    Full Text Available A new e-business strategy planning and performance evaluation scheme based on adaptive algorithmic modelling techniques is presented. The effect of the financial and non-financial performance of organizations on e-business strategy planning is investigated. The relationships between the four strategic planning parameters are examined, the directions of these relationships are given, and six additional basic components are also considered. A new conceptual model has been constructed for e-business strategic planning and performance evaluation, and an adaptive algorithmic modelling approach is presented. The new adaptive algorithmic modelling scheme, including eleven dynamic modules, can be optimized and used effectively in e-business strategic planning and strategic planning evaluation of various e-services in very large organizations and businesses. A synoptic statistical analysis and comparative numerical results for the cases of the UK and Greece are given. The proposed e-business models indicate how e-business strategic planning may affect financial and non-financial performance in businesses and organizations by exploring whether models which are used for strategy planning can be applied to e-business planning and whether these models would be valid in different environments. A conceptual model has been constructed and qualitative research methods have been used for testing a predetermined number of considered hypotheses. The proposed models have been tested in the UK and Greece, and the conclusions, including numerical results and statistical analyses, indicated existing relationships between the considered dependent and independent variables. The proposed e-business models are expected to contribute to the e-business strategy planning of businesses and organizations, and managers should consider applying these models to their e-business strategy planning to improve their companies’ performances. This research study brings together elements of e

  15. Surgical experts: born or made?

    Science.gov (United States)

    Sadideen, Hazim; Alvand, Abtin; Saadeddin, Munir; Kneebone, Roger

    2013-01-01

    The concept of surgical expertise and the processes involved in its development are topical, and there is a constant drive to identify reliable measures of expert performance in surgery. This review explores the notion of whether surgical experts are "born" or "made", with reference to educational theory and pertinent literature. Peer-reviewed publications, books, and online resources on surgical education, expertise and training were reviewed. Important themes and aspects of expertise acquisition were identified in order to better understand the concept of a surgical expert. The definition of surgical expertise and several important aspects of its development are highlighted. Innate talent plays an important role, but is insufficient on its own to produce a surgical expert. Multiple theories that explore motor skill acquisition and memory are relevant, and Ericsson's theory of the development of competence followed by deliberate self-practice has been especially influential. Psychomotor and non-technical skills are necessary for progression in the current climate in light of our training curricula; surgical experts are adaptive experts who excel in these. The literature suggests that surgical expertise is reached through practice; surgical experts are made, not born. A deeper understanding of the nature of expert performance and its development will ensure that surgical education training programmes are of the highest possible quality. Surgical educators should aim to develop an expertise-based approach, with expert performance as the benchmark. Copyright © 2013 Surgical Associates Ltd. Published by Elsevier Ltd. All rights reserved.

  16. Iterative Expert-Functional Approach to the SWOT-Analysis in the Context of Strategic Marketing at the Japanese Cuisine Market

    Directory of Open Access Journals (Sweden)

    Igor Dmitrievich Kim

    2015-12-01

    Full Text Available SWOT analysis is one of the most widely used strategic planning methods in the world, applied at the intra-firm, corporate, branch and territorial levels. The popularity of this approach is due to the relatively simple, intuitive mechanics of its implementation, as well as the minimal cost in financial resources and time. The designed iterative expert-functional approach to SWOT analysis and to the development of strategic initiatives suggests analyzing the company at an early stage in the context of its key areas of work, determining the components of the internal and external environment from the perspective of core business functions, and taking into account the degree of development of the organization, the complexity of its business processes and its corporate culture. Extremely important criteria for the SWOT analysis and the development of proposals are: comprehensiveness; the most objective possible approach to studying the internal and external components of interest to management in the analysis and implementation of strategic initiatives; understanding of the specifics of the business by the project manager, including its business processes and the opportunities and threats of the surrounding environment; and an institutional approach to studying the factors that may affect the economic, financial and marketing results of the company. In practice, criticism of SWOT analysis is often encountered in the literature because of its subjectivity and descriptive results. The proposed procedure does not eliminate these disadvantages, but it maximizes the comprehensiveness of the assessment of the company’s operations, taking into account the interaction of its structural units and efficiently organized business processes, and evaluating the degree of rationality and flexibility in addressing the opportunities and threats of the external environment. The outcome of the expert-functional approach should be strategic initiatives that take into account the interests and competence of the

  17. An improved approach to exchange non-rectangular departments in CRAFT algorithm

    OpenAIRE

    Esmaeili Aliabadi, Danial; Pourghannad, Behrooz

    2012-01-01

    In this paper, an algorithm which improves the CRAFT algorithm's efficacy is developed. CRAFT is an algorithm widely used to solve facility layout problems. Our proposed method, named Plasma, can be used to improve CRAFT results. In this note, the Plasma algorithm is tested on several sample problems. The comparison between Plasma and both classic CRAFT and Micro-CRAFT indicates that Plasma is successful in reducing cost relative to CRAFT and Micro-CRAFT.

  18. Algorithm for detecting violations of traffic rules based on computer vision approaches

    Directory of Open Access Journals (Sweden)

    Ibadov Samir

    2017-01-01

    Full Text Available We propose a new algorithm to automatically detect violations of traffic rules, in order to improve pedestrian safety at unregulated pedestrian crossings. The algorithm proceeds in several steps: zebra crossing detection, car detection, and pedestrian detection. For car detection, we use the Faster R-CNN deep learning tool. The algorithm shows promising results in detecting violations of traffic rules.
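
    The detection stages rely on trained models (e.g., Faster R-CNN for cars), but the final rule step can be sketched with plain bounding-box geometry; the overlap threshold and the exact violation condition below are illustrative assumptions, not the authors' published criteria.

    ```python
    def iou(a, b):
        """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
        x1, y1 = max(a[0], b[0]), max(a[1], b[1])
        x2, y2 = min(a[2], b[2]), min(a[3], b[3])
        inter = max(0, x2 - x1) * max(0, y2 - y1)
        area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
        union = area(a) + area(b) - inter
        return inter / union if union > 0 else 0.0

    def flag_violations(zebra_box, car_boxes, pedestrian_boxes, overlap=0.1):
        """Indices of cars on the crossing while a pedestrian is also on it."""
        pedestrian_on_crossing = any(iou(zebra_box, p) > 0 for p in pedestrian_boxes)
        if not pedestrian_on_crossing:
            return []
        return [i for i, c in enumerate(car_boxes) if iou(zebra_box, c) > overlap]
    ```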

  19. An Algorithmic Approach to Total Breast Reconstruction with Free Tissue Transfer

    Directory of Open Access Journals (Sweden)

    Seong Cheol Yu

    2013-05-01

    Full Text Available As microvascular techniques continue to improve, perforator flap free tissue transfer is now the gold standard for autologous breast reconstruction. Various options are available for breast reconstruction with autologous tissue. These include the free transverse rectus abdominis myocutaneous (TRAM flap, deep inferior epigastric perforator flap, superficial inferior epigastric artery flap, superior gluteal artery perforator flap, and transverse/vertical upper gracilis flap. In addition, pedicled flaps can be very successful in the right hands and the right patient, such as the pedicled TRAM flap, latissimus dorsi flap, and thoracodorsal artery perforator. Each flap comes with its own advantages and disadvantages related to tissue properties and donor-site morbidity. Currently, the problem is how to determine the most appropriate flap for a particular patient among those potential candidates. Based on a thorough review of the literature and accumulated experiences in the author’s institution, this article provides a logical approach to autologous breast reconstruction. The algorithms presented here can be helpful to customize breast reconstruction to individual patient needs.

  20. Intelligent control a hybrid approach based on fuzzy logic, neural networks and genetic algorithms

    CERN Document Server

    Siddique, Nazmul

    2014-01-01

    Intelligent Control considers non-traditional modelling and control approaches to nonlinear systems. Fuzzy logic, neural networks and evolutionary computing techniques are the main tools used. The book presents a modular switching fuzzy logic controller where a PD-type fuzzy controller is executed first, followed by a PI-type fuzzy controller, thus improving the performance of the controller compared with a PID-type fuzzy controller. The advantage of the switching-type fuzzy controller is that it uses one rule-base, thus minimising the rule-base during execution. A single rule-base is developed by merging the membership functions for change of error of the PD-type controller and sum of error of the PI-type controller. Membership functions are then optimized using evolutionary algorithms. Since the two fuzzy controllers were executed in series, necessary further tuning of the differential and integral scaling factors of the controller is then performed. Neural-network-based tuning for the scaling parameters of t...

  1. BP neural network optimized by genetic algorithm approach for titanium and iron content prediction in EDXRF

    International Nuclear Information System (INIS)

    Wang Jun; Liu Mingzhe; Li Zhe; Li Lei; Shi Rui; Tuo Xianguo

    2015-01-01

    Quantitative elemental content analysis is difficult when using the energy dispersive X-ray fluorescence (EDXRF) technique, due to the uniform effect, particle effect, element matrix effect, etc. In this paper, a hybrid approach of a genetic algorithm (GA) and a back propagation (BP) neural network is proposed that does not require modelling the complex relationship between concentration and intensity. The aim of the GA-optimized BP was to obtain better initial network weights and thresholds. The basic idea was that the reciprocal of the mean square error of the initialized BP neural network was set as the fitness value of an individual in the GA; the initial weights and thresholds were encoded as individuals, the optimal individual was sought by selection, crossover and mutation operations, and finally a new BP neural network model was created with the optimal initial weights and thresholds. The calculation results of the quantitative analysis of titanium and iron contents for five types of ore bodies in the Panzhihua Mine show that the results of classification prediction are far better than those of overall forecasting, and the relative errors of 76.7% of samples are less than 2% compared with chemical analysis values, which demonstrates the effectiveness of the proposed method. (authors)
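
    A compact sketch of the central idea, GA-evolved initial weights scored by the reciprocal of the network's mean squared error, is given below with synthetic placeholder data; the actual EDXRF features, network topology and the subsequent BP gradient training are not reproduced.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 3))                  # placeholder spectral intensity features
    y = X @ np.array([0.5, -0.2, 0.1]) + 0.05 * rng.normal(size=100)   # placeholder contents

    N_HIDDEN = 5
    N_GENES = 3 * N_HIDDEN + N_HIDDEN              # input->hidden plus hidden->output weights

    def forward(genes, X):
        w1 = genes[:3 * N_HIDDEN].reshape(3, N_HIDDEN)
        w2 = genes[3 * N_HIDDEN:]
        return np.tanh(X @ w1) @ w2

    def fitness(genes):
        """Reciprocal of the MSE of the network initialized with these weights."""
        mse = np.mean((forward(genes, X) - y) ** 2)
        return 1.0 / (mse + 1e-12)

    # Minimal evolutionary loop: truncation selection plus Gaussian mutation.
    pop = rng.normal(size=(40, N_GENES))
    for _ in range(50):
        scores = np.array([fitness(ind) for ind in pop])
        parents = pop[np.argsort(scores)[-10:]]                     # keep the fittest 10
        children = parents[rng.integers(0, 10, size=30)] + 0.1 * rng.normal(size=(30, N_GENES))
        pop = np.vstack([parents, children])

    best = pop[np.argmax([fitness(ind) for ind in pop])]
    # `best` would then seed the initial weights of the BP network before gradient training.
    ```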

  2. Combined Simulated Annealing and Genetic Algorithm Approach to Bus Network Design

    Science.gov (United States)

    Liu, Li; Olszewski, Piotr; Goh, Pong-Chai

    A new method - combined simulated annealing (SA) and genetic algorithm (GA) approach is proposed to solve the problem of bus route design and frequency setting for a given road network with fixed bus stop locations and fixed travel demand. The method involves two steps: a set of candidate routes is generated first and then the best subset of these routes is selected by the combined SA and GA procedure. SA is the main process to search for a better solution to minimize the total system cost, comprising user and operator costs. GA is used as a sub-process to generate new solutions. Bus demand assignment on two alternative paths is performed at the solution evaluation stage. The method was implemented on four theoretical grid networks of different size and a benchmark network. Several GA operators (crossover and mutation) were utilized and tested for their effectiveness. The results show that the proposed method can efficiently converge to the optimal solution on a small network but computation time increases significantly with network size. The method can also be used for other transport operation management problems.
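
    A stripped-down sketch of the two-step idea follows: a pre-generated pool of candidate routes, an SA main loop that accepts or rejects subsets by total cost, and a GA sub-process (crossover plus mutation) that proposes new subsets. The cost model, pool size, and cooling schedule are placeholder assumptions, not the paper's formulation.

```python
import random
import math

random.seed(1)

# Step 1: a pre-generated pool of candidate routes (placeholder data).
N_CANDIDATES = 40          # size of the candidate route set
SUBSET_SIZE = 10           # how many routes the final network keeps
route_cost = [random.uniform(1.0, 5.0) for _ in range(N_CANDIDATES)]
route_benefit = [random.uniform(2.0, 8.0) for _ in range(N_CANDIDATES)]

def total_cost(subset):
    """Stand-in for the user + operator cost of a route subset."""
    return sum(route_cost[i] for i in subset) - 0.8 * sum(route_benefit[i] for i in subset)

def ga_neighbour(subset, pool):
    """GA sub-process: crossover with a random subset, then mutate one route."""
    other = random.sample(pool, SUBSET_SIZE)
    cut = random.randint(1, SUBSET_SIZE - 1)
    child = list(dict.fromkeys(subset[:cut] + other[cut:]))   # crossover, drop duplicates
    while len(child) < SUBSET_SIZE:                           # repair to fixed size
        child.append(random.choice([r for r in pool if r not in child]))
    i = random.randrange(SUBSET_SIZE)                         # mutation
    child[i] = random.choice([r for r in pool if r not in child])
    return child

# Step 2: SA main process over subsets of candidate routes.
pool = list(range(N_CANDIDATES))
current = random.sample(pool, SUBSET_SIZE)
best, T = current, 10.0
while T > 0.01:
    cand = ga_neighbour(current, pool)
    delta = total_cost(cand) - total_cost(current)
    if delta < 0 or random.random() < math.exp(-delta / T):   # Metropolis acceptance
        current = cand
        if total_cost(current) < total_cost(best):
            best = current
    T *= 0.95                                                 # geometric cooling

print("selected routes:", sorted(best), "cost:", round(total_cost(best), 2))
```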

  3. Assessing the clinical effectiveness of an algorithmic approach for mucosal lichen planus (MLP): A retrospective review.

    Science.gov (United States)

    Ashack, Kurt A; Haley, Laura L; Luther, Chelsea A; Riemer, Christie A; Ashack, Richard J

    2016-06-01

    Mucosal lichen planus (MLP) is a therapeutic challenge in need of a new treatment approach because of its debilitating effect on patient's quality of life. We sought to evaluate a standardized treatment plan for patients with MLP. A second objective was to describe the effect of mycophenolate mofetil in this patient population. The study retrospectively analyzed 53 patients with MLP treated using a standardized algorithm. The number of MLP lesions, disease activity, and pain at the last visit were compared with baseline scores determined at the initial visit. Results were analyzed using the paired samples t test and confirmed with the Wilcoxon matched pairs signed rank test. The average number of lesions was reduced from 3.77 to 1.67 (P < .001). The average disease activity was reduced from 2.73 to 0.90 (P < .001). Average pain reported decreased from 2.03 to 1.03 (P < .001). This study was a retrospective analysis of a small patient population. There was no universal symptom severity scale used at the time of treatment for some patients. The standardized treatment plan reduced symptoms for patients with MLP. Mycophenolate mofetil appears to be a reasonable treatment option for these patients. Copyright © 2016 American Academy of Dermatology, Inc. Published by Elsevier Inc. All rights reserved.

  4. Morphologically occult systemic mastocytosis in bone marrow: clinicopathologic features and an algorithmic approach to diagnosis.

    Science.gov (United States)

    Reichard, Kaaren K; Chen, Dong; Pardanani, Animesh; McClure, Rebecca F; Howard, Matthew T; Kurtin, Paul J; Wood, Adam J; Ketterling, Rhett P; King, Rebecca L; He, Rong; Morice, William G; Hanson, Curtis A

    2015-09-01

    Bone marrow (BM) biopsy specimens involved by systemic mastocytosis (SM) typically show multifocal, compact, dense aggregates of spindled mast cells (MCs). However, some cases lack aggregate formation and fulfill the World Health Organization 2008 criteria for SM, based on minor criteria. We identified 26 BM cases of KIT D816V-mutated, morphologically occult SM in the BM. All patients had some combination of allergic/MC activating symptoms. Peripheral blood counts were generally normal. BM aspirates showed 5% or less MCs, which were only occasionally spindled. BM biopsy specimens showed no morphologic classic MC lesions. Tryptase immunohistochemistry (IHC) demonstrated interstitial, individually distributed MCs (up to 5%) with prominent spindling, lacking aggregate formation. MCs coexpressed CD25 by IHC and/or flow cytometry. Spindled MCs constituted more than 25% of total MCs in all cases and more than 50% in 20 of 26 cases. Morphologically occult involvement of normal-appearing BM by SM will be missed without appropriate clinical suspicion and pathologic evaluation by tryptase and CD25 IHC and KIT D816V mutation analysis. On the basis of these findings, we propose a cost-effective, data-driven, evidence-based algorithmic approach to the workup of these cases. Copyright© by the American Society for Clinical Pathology.

  5. REMAINING LIFE TIME PREDICTION OF BEARINGS USING K-STAR ALGORITHM – A STATISTICAL APPROACH

    Directory of Open Access Journals (Sweden)

    R. SATISHKUMAR

    2017-01-01

    Full Text Available The role of bearings is significant in reducing the downtime of all rotating machinery. The increasing trend of bearing failures in recent times has triggered the need for and importance of deploying condition monitoring. Multiple factors are associated with a bearing failure while it is in operation, hence a predictive strategy is required to evaluate the current state of bearings in operation. In the past, predictive models with regression techniques were widely used for bearing lifetime estimation. The objective of this paper is to estimate the remaining useful life of bearings through a machine learning approach, with the ultimate aim of strengthening predictive maintenance. The present study used a classification approach following the concepts of machine learning, and a predictive model was built to calculate the residual lifetime of bearings in operation. Vibration signals were acquired on a continuous basis from an experiment in which new bearings were run at pre-defined load and speed conditions until they failed naturally. In the present work, statistical features were used, feature selection was carried out using a J48 decision tree, and the selected features were used to develop the prognostic model. The K-Star classification algorithm, a supervised machine learning technique, was used to build a predictive model to estimate the lifetime of bearings. The performance of the classifier was cross-validated with distinct data. The results show that the K-Star classification model gives 98.56% classification accuracy with the selected features.
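
    The workflow can be illustrated roughly as follows: statistical features are extracted from vibration windows and fed to an instance-based classifier that predicts a coarse remaining-life class. Note the stand-ins: the signal is synthetic, and a k-nearest-neighbour classifier replaces Weka's K-Star (which uses an entropic distance), so this is only a sketch of the pipeline, not the study's model.

```python
import numpy as np
from scipy.stats import kurtosis, skew
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)

def features(window):
    """Statistical features of one vibration window (a subset of those used in such studies)."""
    return [window.mean(), window.std(), np.sqrt(np.mean(window ** 2)),  # RMS
            kurtosis(window), skew(window), window.max() - window.min()]

# Synthetic run-to-failure signal: noise whose amplitude grows as the bearing degrades.
n_windows, window_len = 200, 1024
X, y = [], []
for i in range(n_windows):
    amp = 1.0 + 3.0 * i / n_windows                      # degradation proxy
    X.append(features(amp * rng.standard_normal(window_len)))
    y.append(int(5 * i / n_windows))                     # 5 coarse remaining-life classes

# Instance-based classifier as a stand-in for Weka's K-Star entropic learner.
clf = KNeighborsClassifier(n_neighbors=5)
scores = cross_val_score(clf, np.array(X), np.array(y), cv=5)
print("cross-validated accuracy: %.2f" % scores.mean())
```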

  6. Inductive acquisition of expert knowledge

    Energy Technology Data Exchange (ETDEWEB)

    Muggleton, S.H.

    1986-01-01

    Expert systems divide neatly into two categories: those in which (1) the expert decisions result in changes to some external environment (control systems), and (2) the expert decisions merely seek to describe the environment (classification systems). Both the explanation of computer-based reasoning and the bottleneck (Feigenbaum, 1979) of knowledge acquisition are major issues in expert-systems research. The author contributed to these areas of research in two ways: 1. He implemented an expert-system shell, the Mugol environment, which facilitates knowledge acquisition by inductive inference and provides automatic explanation of run-time reasoning on demand. RuleMaster, a commercial version of this environment, was used to advantage industrially in the construction and testing of two large classification systems. 2. He investigated a new technique called 'sequence induction' that can be used in the construction of control systems. Sequence induction is based on theoretical work in grammatical learning. He improved existing grammatical learning algorithms as well as suggesting and theoretically characterizing new ones. These algorithms were successfully applied to the acquisition of knowledge for a diverse set of control systems, including the inductive construction of robot plans and chess end-game strategies.

  7. Algorithms for finding Chomsky and Greibach normal forms for a fuzzy context-free grammar using an algebraic approach

    Energy Technology Data Exchange (ETDEWEB)

    Lee, E.T.

    1983-01-01

    Algorithms for the construction of the Chomsky and Greibach normal forms for a fuzzy context-free grammar using the algebraic approach are presented and illustrated by examples. The results obtained in this paper may have useful applications in fuzzy languages, pattern recognition, information storage and retrieval, artificial intelligence, database and pictorial information systems. 16 references.

  8. Application of expert system technology to nondestructive waste assay - initial prototype model

    Energy Technology Data Exchange (ETDEWEB)

    Becker, G.K.; Determan, J.C. [Idaho National Engineering and Environmental Lab., Idaho Falls, ID (United States)

    1997-11-01

    Expert system technology has been identified as a technique useful for filling certain types of technology/capability gaps in existing waste nondestructive assay (NDA) applications. In particular, expert system techniques are being investigated with the intent of providing on-line evaluation of acquired data and/or directed acquisition of data in a manner that mimics the logic and decision-making process a waste NDA expert would employ. The space from which information and data are drawn in this process is much expanded with respect to the algorithmic approach typically utilized in waste NDA, and expert system technology provides a mechanism to manage and reason with this expanded information/data set. The material presented in this paper concerns initial studies and a resultant prototype expert system that incorporates pertinent information, and evaluation logic and decision processes, for the purpose of validating acquired waste NDA measurement assays. 6 refs., 6 figs.

  9. Application of expert system technology to nondestructive waste assay - initial prototype model

    International Nuclear Information System (INIS)

    Becker, G.K.; Determan, J.C.

    1997-01-01

    Expert system technology has been identified as a technique useful for filling certain types of technology/capability gaps in existing waste nondestructive assay (NDA) applications. In particular, expert system techniques are being investigated with the intent of providing on-line evaluation of acquired data and/or directed acquisition of data in a manner that mimics the logic and decision-making process a waste NDA expert would employ. The space from which information and data are drawn in this process is much expanded with respect to the algorithmic approach typically utilized in waste NDA, and expert system technology provides a mechanism to manage and reason with this expanded information/data set. The material presented in this paper concerns initial studies and a resultant prototype expert system that incorporates pertinent information, and evaluation logic and decision processes, for the purpose of validating acquired waste NDA measurement assays. 6 refs., 6 figs

  10. Expert opinion

    DEFF Research Database (Denmark)

    Ferrer, Marta; Boccon-Gibod, Isabelle; Gonçalo, Margarida

    2017-01-01

    Omalizumab (a recombinant, humanized anti-immunoglobulin-E antibody) has been shown in three pivotal Phase III trials (ASTERIA I, II and GLACIAL) and real-world studies to be effective and well-tolerated for the treatment of chronic spontaneous urticaria (CSU), and is the only licensed third-line treatment for CSU. However, the definition of response to omalizumab treatment often differs between clinical trials, real-world studies, and daily practice of individual physicians globally. As such, a consensus definition of "complete", "partial" and "non-response" to omalizumab is required in order ... into a patient's disease burden and its changes during treatment. A potential omalizumab treatment approach based on speed and pattern of response at 1-3 and 3-6 months is suggested. In cases where there is no response during the first 1-3 months, physicians should consider reassessing the original CSU diagnosis...

  11. Reflection group on 'Expert Culture'

    International Nuclear Information System (INIS)

    Eggermont, G.

    2000-01-01

    As part of SCK-CEN's social sciences and humanities programme, a reflection group on 'Expert Culture' was established. The objectives of the reflection group are: (1) to clarify the role of SCK-CEN experts; (2) to clarify the new role of expertise in the evolving context of risk society; (3) to confront external views and internal SCK-CEN experiences on expert culture; (4) to improve trust building of experts and credibility of SCK-CEN as a nuclear actor in society; (5) to develop a draft for a deontological code; (6) to integrate the approach in training on assertivity and communication; (7) to create an output for a topical day on the subject of expert culture. The programme, achievements and perspectives of the reflection group are summarised

  12. A New RTL Design Approach for a DCT/IDCT-Based Image Compression Architecture using the mCBE Algorithm

    Directory of Open Access Journals (Sweden)

    Rachmad Vidya Wicaksana Putra

    2012-09-01

    Full Text Available In the literature, several approaches to designing a DCT/IDCT-based image compression system have been proposed. In this paper, we present a new RTL design approach whose main focus is developing a DCT/IDCT-based image compression architecture using a self-created algorithm. This algorithm can efficiently minimize the number of shifter-adders needed to substitute multipliers. We call this new algorithm the multiplication from Common Binary Expression (mCBE) algorithm. Besides this algorithm, we propose alternative quantization numbers, which can be implemented simply as shifters in digital hardware. Mostly, these numbers can retain a good compressed-image quality compared to JPEG recommendations. These ideas lead to a design that is small in circuit area, multiplierless, and low in complexity. The proposed 8-point 1D-DCT design has only six stages, while the 8-point 1D-IDCT design has only seven stages (one stage being defined as equal to the delay of one shifter or 2-input adder. By using the pipelining method, we can achieve a high-speed architecture with latency as a trade-off consideration. The design has been synthesized and can reach a critical path delay of 1.41 ns (709.22 MHz.

  13. A Teaching Approach from the Exhaustive Search Method to the Needleman-Wunsch Algorithm

    Science.gov (United States)

    Xu, Zhongneng; Yang, Yayun; Huang, Beibei

    2017-01-01

    The Needleman-Wunsch algorithm has become one of the core algorithms in bioinformatics; however, this algorithm requires more suitable explanations for students from different major backgrounds. By supposing sample sequences and using a simple storage system, the connection between the exhaustive search method and the Needleman-Wunsch algorithm…
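
    For reference, the Needleman-Wunsch recurrence and traceback that the teaching approach builds up to can be written compactly as below (a textbook implementation with illustrative match/mismatch/gap scores, not code from the article).

```python
def needleman_wunsch(a, b, match=1, mismatch=-1, gap=-2):
    """Global alignment score and aligned strings via the classic DP recurrence."""
    n, m = len(a), len(b)
    # score matrix, initialized with gap penalties along the first row/column
    F = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        F[i][0] = i * gap
    for j in range(1, m + 1):
        F[0][j] = j * gap
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            s = match if a[i - 1] == b[j - 1] else mismatch
            F[i][j] = max(F[i - 1][j - 1] + s,   # (mis)match
                          F[i - 1][j] + gap,     # gap in b
                          F[i][j - 1] + gap)     # gap in a
    # traceback from the bottom-right corner
    ai, bi, i, j = [], [], n, m
    while i > 0 or j > 0:
        s = match if i > 0 and j > 0 and a[i - 1] == b[j - 1] else mismatch
        if i > 0 and j > 0 and F[i][j] == F[i - 1][j - 1] + s:
            ai.append(a[i - 1]); bi.append(b[j - 1]); i -= 1; j -= 1
        elif i > 0 and F[i][j] == F[i - 1][j] + gap:
            ai.append(a[i - 1]); bi.append("-"); i -= 1
        else:
            ai.append("-"); bi.append(b[j - 1]); j -= 1
    return F[n][m], "".join(reversed(ai)), "".join(reversed(bi))


print(needleman_wunsch("GATTACA", "GCATGCU"))
```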

  14. Phase-unwrapping algorithm by a rounding-least-squares approach

    Science.gov (United States)

    Juarez-Salazar, Rigoberto; Robledo-Sanchez, Carlos; Guerrero-Sanchez, Fermin

    2014-02-01

    A simple and efficient phase-unwrapping algorithm based on a rounding procedure and a global least-squares minimization is proposed. Instead of processing the gradient of the wrapped phase, this algorithm operates over the gradient of the phase jumps by a robust and noniterative scheme. Thus, the residue-spreading and over-smoothing effects are reduced. The algorithm's performance is compared with four well-known phase-unwrapping methods: minimum cost network flow (MCNF), fast Fourier transform (FFT), quality-guided, and branch-cut. A computer simulation and experimental results show that the proposed algorithm reaches a higher accuracy level than the MCNF method with a computing time similar to that of the FFT phase-unwrapping method. Moreover, since the proposed algorithm is simple, fast, and requires no user intervention, it could be used in metrological interferometric and fringe-projection automatic real-time applications.

  15. An improved algorithm for information hiding based on features of Arabic text: A Unicode approach

    Directory of Open Access Journals (Sweden)

    A.A. Mohamed

    2014-07-01

    Full Text Available Steganography means hiding secret information in a cover medium so that other individuals fail to realize its existence. Due to the lack of data redundancy in text files in comparison with other carrier files, text steganography is a difficult problem to solve. In this paper, we propose a new, promising steganographic algorithm for Arabic text based on features of Arabic text. The focus is on a more secure algorithm and a higher capacity of the carrier. Our extensive experiments using the proposed algorithm resulted in a high capacity of the carrier media and a high embedding capacity ratio. In addition, our algorithm can resist traditional attacking methods since it keeps the changes in the carrier text as small as possible.

  16. A new approach to nuclear reactor design optimization using genetic algorithms and regression analysis

    International Nuclear Information System (INIS)

    Kumar, Akansha; Tsvetkov, Pavel V.

    2015-01-01

    Highlights: • This paper presents a new method useful for the optimization of complex dynamic systems. • The method uses the strengths of genetic algorithms (GA) and regression splines. • The method is applied to the design of a gas cooled fast breeder reactor. • Tools like Java and R, and codes like MCNP and Matlab, are used in this research. - Abstract: A module-based optimization method using genetic algorithms (GA) and multivariate regression analysis has been developed to optimize a set of parameters in the design of a nuclear reactor. A GA simulates natural evolution to perform optimization and is widely used by the scientific community; it evolves a population of random solutions toward the optimized solution of a specific problem. In this work, we have developed a genetic algorithm to determine the values of a set of nuclear reactor parameters in the design of a gas cooled fast breeder reactor core, including a basic thermal–hydraulics analysis and energy transfer. Multivariate regression is implemented using regression splines (RS). Reactor designs are usually complex and a simulation needs a significantly large amount of time to execute, hence the direct use of GA or other global optimization techniques is not feasible; therefore we present a new method that uses RS in conjunction with GA. Because of the RS, we do not need to run the neutronics simulation for all the inputs generated by the GA module; rather, we run the simulations for a predefined set of inputs, build a multivariate regression fit between the input and the output parameters, and then use this fit to predict the output parameters for the inputs generated by GA. The reactor parameters are the radius of a fuel pin cell, the isotopic enrichment of the fissile material in the fuel, the mass flow rate of the coolant, and the temperature of the coolant at the core inlet. The optimization objectives for the reactor core are high breeding of U-233 and Pu-239 in
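
    The surrogate-assisted loop described above might be sketched as follows: the expensive simulation is run only on a predefined design set, a cheap regression fit is built, and the GA queries that fit instead of the simulation. Everything here is a placeholder: a toy analytic function stands in for the MCNP/thermal-hydraulics model and ordinary polynomial regression stands in for the regression splines.

```python
import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(7)

def expensive_simulation(x):
    """Placeholder for the reactor physics simulation (pin radius, enrichment, flow, T_inlet)."""
    r, e, m, t = x
    return 2.0 * e - (r - 0.5) ** 2 + 0.3 * m - 0.01 * (t - 560) ** 2

# 1. Run the real simulation only on a predefined design-of-experiments set.
bounds = np.array([[0.3, 0.7], [0.05, 0.20], [0.5, 2.0], [520, 620]])
X_doe = bounds[:, 0] + rng.random((60, 4)) * (bounds[:, 1] - bounds[:, 0])
y_doe = np.array([expensive_simulation(x) for x in X_doe])

# 2. Fit a cheap regression surrogate (stand-in for the regression splines).
surrogate = make_pipeline(PolynomialFeatures(2), LinearRegression()).fit(X_doe, y_doe)

# 3. GA searches the design space but only ever queries the surrogate.
pop = bounds[:, 0] + rng.random((30, 4)) * (bounds[:, 1] - bounds[:, 0])
for _ in range(200):
    fit = surrogate.predict(pop)
    order = np.argsort(fit)[::-1]
    elite = pop[order[:10]]                                           # keep the best designs
    parents = elite[rng.integers(0, 10, (20, 2))]
    alpha = rng.random((20, 1))
    children = alpha * parents[:, 0] + (1 - alpha) * parents[:, 1]    # blend crossover
    children += rng.normal(0, 0.01, children.shape) * (bounds[:, 1] - bounds[:, 0])
    children = np.clip(children, bounds[:, 0], bounds[:, 1])
    pop = np.vstack([elite, children])

best = pop[np.argmax(surrogate.predict(pop))]
print("surrogate-optimal design:", best.round(3))
```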

  17. The SF3M approach to 3-D photo-reconstruction for non-expert users: application to a gully network

    Science.gov (United States)

    Castillo, C.; James, M. R.; Redel-Macías, M. D.; Pérez, R.; Gómez, J. A.

    2015-04-01

    3-D photo-reconstruction (PR) techniques have been successfully used to produce high resolution elevation models for different applications and over different spatial scales. However, innovative approaches are required to overcome some limitations that this technique may present in challenging scenarios. Here, we evaluate SF3M, a new graphical user interface for implementing a complete PR workflow based on freely available software (including external calls to VisualSFM and CloudCompare), in combination with a low-cost survey design for the reconstruction of a several-hundred-meters-long gully network. SF3M provided a semi-automated workflow for 3-D reconstruction requiring ~ 49 h (of which only 17% required operator assistance) for obtaining a final gully network model of > 17 million points over a gully plan area of 4230 m2. We show that a walking itinerary along the gully perimeter using two light-weight automatic cameras (1 s time-lapse mode) and a 6 m-long pole is an efficient method for 3-D monitoring of gullies, at a low cost (about EUR 1000 budget for the field equipment) and time requirements (~ 90 min for image collection). A mean error of 6.9 cm at the ground control points was found, mainly due to model deformations derived from the linear geometry of the gully and residual errors in camera calibration. The straightforward image collection and processing approach can be of great benefit for non-expert users working on gully erosion assessment.

  18. A feature-based approach for best arm identification in the case of the Monte Carlo search algorithm discovery for one-player games

    OpenAIRE

    Taralla, David

    2013-01-01

    The field of reinforcement learning recently received the contribution by Ernst et al. (2013) "Monte carlo search algorithm discovery for one player games" who introduced a new way to conceive completely new algorithms. Moreover, it brought an automatic method to find the best algorithm to use in a particular situation using a multi-arm bandit approach. We address here the problem of best arm identification. The main problem is that the generated algorithm space (ie. the arm space) can be qui...

  19. An intelligent approach to optimize the EDM process parameters using utility concept and QPSO algorithm

    Directory of Open Access Journals (Sweden)

    Chinmaya P. Mohanty

    2017-04-01

    Full Text Available Although significant research has gone into the field of electrical discharge machining (EDM), analysis of the machining efficiency of the process with different electrodes has not been adequately performed. Copper and brass are frequently used as electrode materials, but graphite can be used as a potential electrode material due to its high melting point and good electrical conductivity. In view of this, the present work attempts to compare the machinability of copper, graphite and brass electrodes while machining Inconel 718 super alloy. Taguchi’s L27 orthogonal array has been employed to collect data for the study and to analyze the effect of machining parameters on performance measures. The important performance measures selected for this study are material removal rate, tool wear rate, surface roughness and radial overcut. Machining parameters considered for analysis are open circuit voltage, discharge current, pulse-on-time, duty factor, flushing pressure and electrode material. From the experimental analysis, it is observed that electrode material, discharge current and pulse-on-time are the important parameters for all the performance measures. The utility concept has been implemented to transform the multiple performance characteristics into an equivalent single performance characteristic. Non-linear regression analysis is carried out to develop a model relating the process parameters and the overall utility index. Finally, the quantum-behaved particle swarm optimization (QPSO) and particle swarm optimization (PSO) algorithms have been used to compare the optimal levels of cutting parameters. Results demonstrate the elegance of QPSO in terms of convergence and computational effort. The optimal parametric setting obtained through both approaches is validated by conducting confirmation experiments.

  20. A Numerical Approach to Solving an Inverse Heat Conduction Problem Using the Levenberg-Marquardt Algorithm

    Directory of Open Access Journals (Sweden)

    Tao Min

    2014-01-01

    Full Text Available This paper is intended to provide a numerical algorithm involving the combined use of the Levenberg-Marquardt algorithm and the Galerkin finite element method for estimating the diffusion coefficient in an inverse heat conduction problem (IHCP. In the present study, the functional form of the diffusion coefficient is unknown a priori. The unknown diffusion coefficient is approximated by the polynomial form and the present numerical algorithm is employed to find the solution. Numerical experiments are presented to show the efficiency of the proposed method.
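
    A minimal sketch of the estimation idea, under stated assumptions: a simple finite-difference solver replaces the Galerkin FEM forward model, the diffusion coefficient is parameterized as a polynomial, and SciPy's Levenberg-Marquardt implementation fits its coefficients to synthetic noisy measurements.

```python
import numpy as np
from scipy.optimize import least_squares

# 1-D transient heat conduction on [0,1] with diffusion k(x) = c0 + c1*x + c2*x**2.
nx, nt, dt = 41, 1000, 1e-4
x = np.linspace(0.0, 1.0, nx)

def forward(coeffs):
    """Explicit finite-difference forward solver (stand-in for the Galerkin FEM model)."""
    # clipping k keeps the explicit scheme stable during LM trial steps (illustrative safeguard)
    k = np.clip(np.polyval(np.asarray(coeffs)[::-1], x), 0.05, 3.0)
    u = np.sin(np.pi * x)                    # initial temperature, u = 0 at both ends
    dx = x[1] - x[0]
    for _ in range(nt):
        flux = k[:-1] * np.diff(u) / dx
        u[1:-1] += dt * np.diff(flux) / dx   # interior update only
    return u

# Synthetic "measurements" generated with the true coefficients plus noise.
true_coeffs = np.array([0.8, 0.4, -0.2])
rng = np.random.default_rng(3)
measured = forward(true_coeffs) + 1e-3 * rng.standard_normal(nx)

def residuals(coeffs):
    return forward(coeffs) - measured

fit = least_squares(residuals, x0=[1.0, 0.0, 0.0], method="lm")
print("estimated polynomial coefficients:", fit.x.round(3))
```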

  1. Algorithming the Algorithm

    DEFF Research Database (Denmark)

    Mahnke, Martina; Uprichard, Emma

    2014-01-01

    Imagine sailing across the ocean. The sun is shining, vastness all around you. And suddenly [BOOM] you’ve hit an invisible wall. Welcome to the Truman Show! Ever since Eli Pariser published his thoughts on a potential filter bubble, this movie scenario seems to have become reality, just with slight changes: it’s not the ocean, it’s the internet we’re talking about, and it’s not a TV show producer, but algorithms that constitute a sort of invisible wall. Building on this assumption, most research is trying to ‘tame the algorithmic tiger’. While this is a valuable and often inspiring approach, we...

  2. Algorithms and programs of dynamic mixture estimation unified approach to different types of components

    CERN Document Server

    Nagy, Ivan

    2017-01-01

    This book provides a general theoretical background for constructing the recursive Bayesian estimation algorithms for mixture models. It collects the recursive algorithms for estimating dynamic mixtures of various distributions and brings them in the unified form, providing a scheme for constructing the estimation algorithm for a mixture of components modeled by distributions with reproducible statistics. It offers the recursive estimation of dynamic mixtures, which are free of iterative processes and close to analytical solutions as much as possible. In addition, these methods can be used online and simultaneously perform learning, which improves their efficiency during estimation. The book includes detailed program codes for solving the presented theoretical tasks. Codes are implemented in the open source platform for engineering computations. The program codes given serve to illustrate the theory and demonstrate the work of the included algorithms.

  3. Multi-Level Sensor Fusion Algorithm Approach for BMD Interceptor Applications

    National Research Council Canada - National Science Library

    Allen, Doug

    1998-01-01

    ... through fabrication and testing of advanced sensor hardware concepts and advanced sensor fusion algorithms. Advanced sensor concepts include onboard LADAR in conjunction with a multi-color passive IR sensor...

  4. Bioinformatics algorithm based on a parallel implementation of a machine learning approach using transducers

    International Nuclear Information System (INIS)

    Roche-Lima, Abiel; Thulasiram, Ruppa K

    2012-01-01

    Finite automata in which each transition is augmented with an output label in addition to the familiar input label are considered finite-state transducers. Transducers have been used to analyze some fundamental issues in bioinformatics. Weighted finite-state transducers have been proposed for pairwise alignments of DNA and protein sequences, as well as for developing kernels for computational biology. Machine learning algorithms for conditional transducers have been implemented and used for DNA sequence analysis. Transducer learning algorithms are based on conditional probability computation, which is calculated using techniques such as pair-database creation, normalization (with Maximum-Likelihood normalization) and parameter optimization (with Expectation-Maximization - EM). These techniques are intrinsically computationally costly, even more so when applied to bioinformatics, because the database sizes are large. In this work, we describe a parallel implementation of an algorithm to learn conditional transducers using these techniques. The algorithm is oriented to bioinformatics applications, such as alignments, phylogenetic trees, and other genome evolution studies. Several experiments were performed with the parallel and sequential algorithms on Westgrid (specifically, on the Breeze cluster). The results show that our parallel algorithm is scalable, because execution times are reduced considerably when the data size parameter is increased. Another experiment varied the precision parameter; in this case, we obtained smaller execution times using the parallel algorithm. Finally, the number of threads used to execute the parallel algorithm on the Breeze cluster was varied. In this last experiment, speedup increased considerably when more threads were used; however, it converges for numbers of threads equal to or greater than 16.

  5. A new approach to optic disc detection in human retinal images using the firefly algorithm.

    Science.gov (United States)

    Rahebi, Javad; Hardalaç, Fırat

    2016-03-01

    There are various methods and algorithms to detect the optic disc in retinal images. In recent years, much attention has been given to the utilization of intelligent algorithms. In this paper, we present a new automated method of optic disc detection in human retinal images using the firefly algorithm. The firefly algorithm is an emerging intelligent algorithm that was inspired by the social behavior of fireflies. The population in this algorithm consists of fireflies, each of which has a specific rate of lighting or fitness. In this method, the insects are compared two by two, and the less attractive insects move toward the more attractive insects. Finally, one of the insects is selected as the most attractive, and this insect presents the optimum response to the problem in question. Here, we used the light intensity of the retinal image pixels instead of firefly lighting. The movement of these insects due to local fluctuations produces different light intensity values in the images. Because the optic disc is the brightest area in retinal images, all of the insects move toward the brightest area and thus specify the location of the optic disc in the image. The results of implementation show that the proposed algorithm achieved an accuracy rate of 100% on the DRIVE dataset, 95% on the STARE dataset, and 94.38% on the DiaRetDB1 dataset. These results reveal the high capability and accuracy of the proposed algorithm in detecting the optic disc in retinal images. The recorded time required for detection of the optic disc is 2.13 s for the DRIVE dataset, 2.81 s for the STARE dataset, and 3.52 s for the DiaRetDB1 dataset; these are average values.
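
    The core firefly update used in this kind of method, with pixel intensity playing the role of firefly brightness, can be sketched as follows; the synthetic image, parameter values, and absence of any pre-processing are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic "retina": dark background with one bright blob acting as the optic disc.
H = W = 200
yy, xx = np.mgrid[0:H, 0:W]
disc_center = np.array([140.0, 60.0])
image = np.exp(-(((yy - disc_center[0]) ** 2 + (xx - disc_center[1]) ** 2) / (2 * 15 ** 2)))

def brightness(p):
    r, c = int(np.clip(p[0], 0, H - 1)), int(np.clip(p[1], 0, W - 1))
    return image[r, c]                      # pixel intensity plays the role of firefly light

n, beta0, gamma, alpha = 25, 1.0, 0.001, 2.0
flies = rng.random((n, 2)) * [H, W]
for _ in range(60):
    light = np.array([brightness(p) for p in flies])
    for i in range(n):
        for j in range(n):
            if light[j] > light[i]:         # firefly i moves toward the brighter firefly j
                d2 = np.sum((flies[j] - flies[i]) ** 2)
                beta = beta0 * np.exp(-gamma * d2)
                flies[i] += beta * (flies[j] - flies[i]) + alpha * (rng.random(2) - 0.5)
    alpha *= 0.97                           # slowly damp the random walk

best = flies[np.argmax([brightness(p) for p in flies])]
print("estimated optic-disc location (row, col):", best.round(1), "true:", disc_center)
```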

  6. New approaches of the potential field for QPSO algorithm applied to nuclear reactor reload problem

    International Nuclear Information System (INIS)

    Nicolau, Andressa dos Santos; Schirru, Roberto

    2015-01-01

    Recently a quantum-inspired version of the Particle Swarm Optimization (PSO) algorithm, Quantum Particle Swarm Optimization (QPSO), was proposed. The QPSO algorithm permits all particles to have a quantum behavior, where some sort of 'quantum motion' is imposed in the search process. When the QPSO is tested against a set of benchmark functions, it shows superior performance compared to classical PSO. The QPSO outperforms the classical version most of the time in convergence speed and achieves better values for the fitness functions. The great advantage of the QPSO algorithm is that it uses only one control parameter. The critical step of the QPSO algorithm is the choice of a suitable attractive potential field that can guarantee bound states for the particles moving in the quantum environment. In this article, one version of the QPSO algorithm was tested with two types of potential well: the delta-potential well and the harmonic oscillator. The main goal of this study is to show which of the potential fields is the most suitable for use in QPSO in the solution of the Nuclear Reactor Reload Optimization Problem, specifically for cycle 7 of a Brazilian Nuclear Power Plant. All results were compared with the performance of its classical counterpart from the literature and show that the QPSO algorithm is well situated among the best alternatives for dealing with hard optimization problems, such as the NRROP. (author)

  7. New approaches of the potential field for QPSO algorithm applied to nuclear reactor reload problem

    Energy Technology Data Exchange (ETDEWEB)

    Nicolau, Andressa dos Santos; Schirru, Roberto, E-mail: andressa@lmp.ufrj.br [Coordenacao dos Programas de Pos-Graduacao em Engenharia (COPPE/UFRJ), Rio de Janeiro, RJ (Brazil). Programa de Engenharia Nuclear

    2015-07-01

    Recently a quantum-inspired version of the Particle Swarm Optimization (PSO) algorithm, Quantum Particle Swarm Optimization (QPSO), was proposed. The QPSO algorithm permits all particles to have a quantum behavior, where some sort of 'quantum motion' is imposed in the search process. When the QPSO is tested against a set of benchmark functions, it shows superior performance compared to classical PSO. The QPSO outperforms the classical version most of the time in convergence speed and achieves better values for the fitness functions. The great advantage of the QPSO algorithm is that it uses only one control parameter. The critical step of the QPSO algorithm is the choice of a suitable attractive potential field that can guarantee bound states for the particles moving in the quantum environment. In this article, one version of the QPSO algorithm was tested with two types of potential well: the delta-potential well and the harmonic oscillator. The main goal of this study is to show which of the potential fields is the most suitable for use in QPSO in the solution of the Nuclear Reactor Reload Optimization Problem, specifically for cycle 7 of a Brazilian Nuclear Power Plant. All results were compared with the performance of its classical counterpart from the literature and show that the QPSO algorithm is well situated among the best alternatives for dealing with hard optimization problems, such as the NRROP. (author)
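
    The delta-potential-well form of the QPSO update can be sketched as follows; since the reload problem itself is combinatorial and plant-specific, a simple continuous test function stands in for the fitness, and the contraction-expansion schedule is a typical choice rather than the one used in the paper.

```python
import numpy as np

rng = np.random.default_rng(11)

def fitness(x):
    """Continuous stand-in for the reload objective (minimization)."""
    return np.sum(x ** 2)                       # sphere function

n, dim, iters = 30, 10, 300
X = rng.uniform(-10, 10, (n, dim))
pbest = X.copy()
pbest_f = np.array([fitness(x) for x in X])
gbest = pbest[np.argmin(pbest_f)].copy()

for t in range(iters):
    beta = 1.0 - 0.5 * t / iters                # contraction-expansion coefficient (single control parameter)
    mbest = pbest.mean(axis=0)                  # mean of the personal best positions
    for i in range(n):
        phi = rng.random(dim)
        p = phi * pbest[i] + (1 - phi) * gbest  # local attractor
        u = rng.random(dim) + 1e-12             # avoid log(1/0)
        sign = np.where(rng.random(dim) < 0.5, 1.0, -1.0)
        # delta-potential-well sampling around the attractor
        X[i] = p + sign * beta * np.abs(mbest - X[i]) * np.log(1.0 / u)
        f = fitness(X[i])
        if f < pbest_f[i]:
            pbest[i], pbest_f[i] = X[i].copy(), f
    gbest = pbest[np.argmin(pbest_f)].copy()

print("best fitness found:", pbest_f.min())
```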

  8. High-precision approach to localization scheme of visible light communication based on artificial neural networks and modified genetic algorithms

    Science.gov (United States)

    Guan, Weipeng; Wu, Yuxiang; Xie, Canyu; Chen, Hao; Cai, Ye; Chen, Yingcong

    2017-10-01

    An indoor positioning algorithm based on visible light communication (VLC) is presented. This algorithm is used to calculate a three-dimensional (3-D) coordinate in an indoor optical wireless environment, which includes sufficient orders of multipath reflections from the reflecting surfaces of the room. Leveraging the global optimization ability of the genetic algorithm (GA), an innovative framework for 3-D position estimation based on a modified genetic algorithm is proposed. Unlike other techniques using VLC for positioning, the proposed system can achieve indoor 3-D localization without making assumptions about the height or acquiring the orientation angle of the mobile terminal. Simulation results show that an average localization error of less than 1.02 cm can be achieved. In addition, in most VLC positioning systems the effect of reflection is neglected, and performance is limited by reflection, which makes the results less accurate for a real scenario; the positioning errors at the corners are relatively larger than at other places. We therefore take the first-order reflection into consideration and use an artificial neural network to match the model of the nonlinear channel. The studies show that, under the nonlinear matching of direct and reflected channels, the average positioning errors of the four corners decrease from 11.94 to 0.95 cm. The employed algorithm emerges as an effective and practical method for indoor localization and outperforms other existing indoor wireless localization approaches.

  9. Robust Fault-Tolerant Control for Satellite Attitude Stabilization Based on Active Disturbance Rejection Approach with Artificial Bee Colony Algorithm

    Directory of Open Access Journals (Sweden)

    Fei Song

    2014-01-01

    Full Text Available This paper proposes a robust fault-tolerant control algorithm for satellite stabilization based on an active disturbance rejection approach with the artificial bee colony algorithm. The actuating mechanism of the attitude control system consists of three working reaction flywheels and one spare reaction flywheel. The speed measurement of the reaction flywheels is adopted for fault detection. If any reaction flywheel fault is detected, the corresponding faulty flywheel is isolated and the spare reaction flywheel is activated to counteract the fault effect and ensure that the satellite works safely and reliably. The active disturbance rejection approach is employed to design the controller, which handles input information with a tracking differentiator, estimates system uncertainties with an extended state observer, and generates control variables by state feedback and compensation. The designed active disturbance rejection controller is robust to both internal dynamics and external disturbances. The bandwidth parameter of the extended state observer is optimized by the artificial bee colony algorithm so as to improve the performance of the attitude control system. A series of simulation results demonstrates the performance superiority of the proposed robust fault-tolerant control algorithm.

  10. A method of knowledge base verification and validation for nuclear power plants expert systems

    International Nuclear Information System (INIS)

    Kwon, Il Won

    1996-02-01

    The adoption of expert systems, mainly as operator support systems, is becoming increasingly popular as the control algorithms of systems become more and more sophisticated and complicated. As a result of this popularity, a large number of expert systems are being developed. The nature of expert systems, however, requires that they be verified and validated carefully and that detailed methodologies for their development be devised. Therefore, it is widely noted that assuring the reliability of expert systems is very important, especially in the nuclear industry, and it is also recognized that the process of verification and validation is an essential part of reliability assurance for these systems. Research and practice have produced numerous methods for expert system verification and validation (V and V) that suggest traditional software and system approaches to V and V. However, many approaches and methods for expert system V and V are partial, unreliable, and not uniform. The purpose of this paper is to present a new approach to expert system V and V, based on Petri nets, providing a uniform model. We devise and suggest an automated tool, called COKEP (Checker Of Knowledge base using Extended Petri net), for checking incorrectness, inconsistency, and incompleteness in a knowledge base. We also suggest heuristic analysis for the validation process to show that the reasoning path is correct

  11. A novel gene network inference algorithm using predictive minimum description length approach.

    Science.gov (United States)

    Chaitankar, Vijender; Ghosh, Preetam; Perkins, Edward J; Gong, Ping; Deng, Youping; Zhang, Chaoyang

    2010-05-28

    Reverse engineering of gene regulatory networks using information theory models has received much attention due to its simplicity, low computational cost, and capability of inferring large networks. One of the major problems with information theory models is to determine the threshold which defines the regulatory relationships between genes. The minimum description length (MDL) principle has been implemented to overcome this problem. The description length of the MDL principle is the sum of the model length and the data encoding length. A user-specified fine-tuning parameter is used as a control mechanism between model and data encoding, but it is difficult to find the optimal parameter. In this work, we proposed a new inference algorithm which incorporates mutual information (MI), conditional mutual information (CMI) and the predictive minimum description length (PMDL) principle to infer gene regulatory networks from DNA microarray data. In this algorithm, the information-theoretic quantities MI and CMI determine the regulatory relationships between genes and the PMDL principle method attempts to determine the best MI threshold without the need for a user-specified fine-tuning parameter. The performance of the proposed algorithm was evaluated using both synthetic time series data sets and a biological time series data set for the yeast Saccharomyces cerevisiae. The benchmark quantities precision and recall were used as performance measures. The results show that the proposed algorithm produced fewer false edges and significantly improved the precision, as compared to the existing algorithm. For further analysis the performance of the algorithms was observed over different sizes of data. We have proposed a new algorithm that implements the PMDL principle for inferring gene regulatory networks from time series DNA microarray data that eliminates the need of a fine tuning parameter. The evaluation results obtained from both synthetic and actual biological data sets show that the
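
    The flavour of MI-threshold network inference can be sketched as below: expression profiles are discretized, pairwise mutual information is computed, and edges above a threshold are kept. The fixed threshold here is a placeholder for the PMDL-selected one, and the CMI pruning of indirect edges is omitted.

```python
import numpy as np
from sklearn.metrics import mutual_info_score

rng = np.random.default_rng(2)

# Synthetic expression matrix: genes x time points, with gene 1 driven by gene 0.
n_genes, n_times = 6, 80
expr = rng.standard_normal((n_genes, n_times))
expr[1] = 0.9 * expr[0] + 0.1 * rng.standard_normal(n_times)   # planted regulation 0 -> 1

def discretize(x, bins=5):
    return np.digitize(x, np.histogram_bin_edges(x, bins)[1:-1])

disc = np.array([discretize(g) for g in expr])

# Pairwise mutual information; an edge is kept when MI exceeds the threshold.
THRESH = 0.3   # placeholder; the paper selects this automatically via the PMDL principle
edges = []
for i in range(n_genes):
    for j in range(i + 1, n_genes):
        mi = mutual_info_score(disc[i], disc[j])
        if mi > THRESH:
            edges.append((i, j, round(mi, 2)))

print("inferred edges (i, j, MI):", edges)
```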

  12. New approach for measuring 3D space by using Advanced SURF Algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Youm, Minkyo; Min, Byungil; Suh, Kyungsuk [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of); Lee, Backgeun [Sungkyunkwan Univ., Suwon (Korea, Republic of)

    2013-05-15

    Nuclear disasters, compared to natural disasters, create more extreme conditions for analysis and evaluation. In this paper, measuring 3-D space and modeling were studied using simple pictures of a small sand dune. The suggested method can be used for the acquisition of spatial information by a robot at a disaster area. As a result, these data are helpful for identifying the damaged parts, the degree of damage, and the determination of recovery sequences. In this study we improve a computer vision algorithm for 3-D geospatial information measurement and confirm it by test. First, we obtain a noticeable improvement of the 3-D geospatial information result with the SURF algorithm and photogrammetric surveying. Second, we confirm not only a decrease in algorithm running time, but also an increase in matching points through epipolar line filtering. In this study, we extract a 3-D model with an open-source algorithm and delete mismatched points with a filtering method. However, owing to a characteristic of the SURF algorithm, it cannot find matching points if the structure does not have strong features, so further study is needed on finding feature points for structures without strong features.

  13. Design of Linear Accelerator (LINAC) tanks for proton therapy via Particle Swarm Optimization (PSO) and Genetic Algorithm (GA) approaches

    Energy Technology Data Exchange (ETDEWEB)

    Castellano, T.; De Palma, L.; Laneve, D.; Strippoli, V.; Cuccovilllo, A.; Prudenzano, F. [Electrical and Information Engineering Department (DEI), Polytechnic Institute of Bari, 4 Orabona Street, CAP 70125, Bari, (Italy); Dimiccoli, V.; Losito, O.; Prisco, R. [ITEL Telecomunicazioni, 39 Labriola Street, CAP 70037, Ruvo di Puglia, Bari, (Italy)

    2015-07-01

    A homemade computer code for designing a Side-Coupled Linear Accelerator (SCL) is written. It integrates a simplified model of SCL tanks with the Particle Swarm Optimization (PSO) algorithm. The computer code's main aim is to obtain useful guidelines for the design of Linear Accelerator (LINAC) resonant cavities. The design procedure, assisted via the aforesaid approach, seems very promising, allowing future improvements towards the optimization of actual accelerating geometries. (authors)

  14. Design of Linear Accelerator (LINAC) tanks for proton therapy via Particle Swarm Optimization (PSO) and Genetic Algorithm (GA) approaches

    International Nuclear Information System (INIS)

    Castellano, T.; De Palma, L.; Laneve, D.; Strippoli, V.; Cuccovilllo, A.; Prudenzano, F.; Dimiccoli, V.; Losito, O.; Prisco, R.

    2015-01-01

    A homemade computer code for designing a Side-Coupled Linear Accelerator (SCL) is written. It integrates a simplified model of SCL tanks with the Particle Swarm Optimization (PSO) algorithm. The computer code's main aim is to obtain useful guidelines for the design of Linear Accelerator (LINAC) resonant cavities. The design procedure, assisted via the aforesaid approach, seems very promising, allowing future improvements towards the optimization of actual accelerating geometries. (authors)

  15. The Genetic-Algorithm-Based Normal Boundary Intersection (GANBI) Method; An Efficient Approach to Pareto Multiobjective Optimization for Engineering Design

    Science.gov (United States)

    2006-05-15

    Surveys of different evolutionary approaches to multiobjective optimal design are given by Van Veldhuizen, Van Veldhuizen and Lamont, and Zitzler and Thiele.

  16. A New Approach of Parallelism and Load Balance for the Apriori Algorithm

    Directory of Open Access Journals (Sweden)

    BOLINA, A. C.

    2013-06-01

    Full Text Available The main goal of data mining is to discover relevant information in digital content. The Apriori algorithm is widely used for this objective, but its sequential version has low performance when executed over large volumes of data. Among the solutions for this problem is the parallel implementation of the algorithm, and among the parallel implementations based on Apriori presented in the literature, the DPA (Distributed Parallel Apriori) [10] stands out. This paper presents the DMTA (Distributed Multithread Apriori) algorithm, which is based on DPA and exploits thread-level parallelism in order to increase performance. Besides, DMTA can be executed over heterogeneous hardware platforms, using different numbers of cores. The results show that DMTA outperforms DPA, presents load balance among processes and threads, and is effective in current multicore architectures.
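
    The essence of level-wise Apriori with support counting farmed out to worker threads (a rough analogue of DMTA's thread-level parallelism) might look like the sketch below; the transactions and support threshold are placeholders, and CPython's GIL limits the actual speed-up of a pure-Python thread pool.

```python
from concurrent.futures import ThreadPoolExecutor
from itertools import combinations

transactions = [                         # placeholder market-basket data
    {"a", "b", "c"}, {"a", "c"}, {"a", "d"}, {"b", "c", "e"},
    {"a", "b", "c", "e"}, {"b", "e"}, {"a", "c", "e"}, {"a", "b", "c", "d"},
]
MIN_SUPPORT = 3

def support(itemset):
    """Count transactions containing the itemset (the work handed to worker threads)."""
    return sum(1 for t in transactions if itemset <= t)

def apriori():
    items = sorted({i for t in transactions for i in t})
    frequent = [frozenset([i]) for i in items if support(frozenset([i])) >= MIN_SUPPORT]
    result, k = list(frequent), 2
    with ThreadPoolExecutor(max_workers=4) as pool:
        while frequent:
            # candidate generation: join frequent (k-1)-itemsets into k-itemsets
            candidates = list({a | b for a, b in combinations(frequent, 2) if len(a | b) == k})
            # support counting in parallel (thread-level, as in DMTA; CPython's GIL
            # limits the speed-up here, which a C or multi-process version avoids)
            counts = list(pool.map(support, candidates))
            frequent = [c for c, n in zip(candidates, counts) if n >= MIN_SUPPORT]
            result += frequent
            k += 1
    return result

print([set(s) for s in apriori()])
```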

  17. A Genetic Algorithms Based Approach for Identification of Escherichia coli Fed-batch Fermentation

    Directory of Open Access Journals (Sweden)

    Olympia Roeva

    2004-10-01

    Full Text Available This paper presents the use of genetic algorithms for identification of the Escherichia coli fed-batch fermentation process. Genetic algorithms are a directed random search technique, based on the mechanics of natural selection and natural genetics, which can find the global optimal solution in a complex multidimensional search space. The dynamic behavior of the considered process has a known nonlinear structure, described by a system of deterministic nonlinear differential equations according to the mass balance. The parameters of the model are estimated using genetic algorithms. Simulation examples demonstrating the effectiveness and robustness of the proposed identification scheme are included. As a result, the model accurately predicts the process of cultivation of E. coli.
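
    A toy version of GA-based parameter identification for a fermentation-type model is sketched below: a simple Monod batch model with three parameters is fitted to synthetic biomass measurements by minimizing the squared error. The model structure, bounds, and GA operators are illustrative assumptions, much simpler than the fed-batch model identified in the paper.

```python
import numpy as np

rng = np.random.default_rng(8)

def simulate(mu_max, ks, yxs, t_end=10.0, dt=0.01):
    """Simple Monod batch model: biomass X and substrate S mass balances."""
    X, S, xs = 0.1, 5.0, []
    for _ in range(int(t_end / dt)):
        mu = mu_max * S / (ks + S)
        X += dt * mu * X
        S = max(S - dt * mu * X / yxs, 0.0)
        xs.append(X)
    return np.array(xs[::100])                   # "sampled" biomass measurements

true = (0.5, 0.2, 0.45)
data = simulate(*true) + 0.01 * rng.standard_normal(10)

def fitness(p):
    return -np.sum((simulate(*p) - data) ** 2)   # GA maximizes; minimize squared error

lo, hi = np.array([0.1, 0.01, 0.1]), np.array([1.0, 1.0, 1.0])
pop = lo + rng.random((40, 3)) * (hi - lo)
for _ in range(80):
    f = np.array([fitness(p) for p in pop])
    idx = np.argsort(f)[::-1]
    elite = pop[idx[:8]]
    parents = elite[rng.integers(0, 8, (32, 2))]
    w = rng.random((32, 1))
    children = w * parents[:, 0] + (1 - w) * parents[:, 1]   # arithmetic crossover
    children += rng.normal(0, 0.02, children.shape)          # mutation
    pop = np.clip(np.vstack([elite, children]), lo, hi)

best = pop[np.argmax([fitness(p) for p in pop])]
print("estimated (mu_max, Ks, Y_xs):", best.round(3), "true:", true)
```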

  18. A Genetic Algorithm Approach to the Optimization of a Radioactive Waste Treatment System

    International Nuclear Information System (INIS)

    Yang, Yeongjin; Lee, Kunjai; Koh, Y.; Mun, J.H.; Kim, H.S.

    1998-01-01

    This study is concerned with the application of goal programming and genetic algorithm techniques to the analysis of management and operational problems in a radioactive waste treatment system (RWTS). A typical RWTS is modeled and solved by goal programming and a genetic algorithm to study and resolve the effects of conflicting objectives such as cost, limitation of released radioactivity to the environment, equipment utilization, and total treatable radioactive waste volume before discharge and disposal. The developed model is validated and verified using actual data obtained from the RWTS at Kyoto University in Japan. The solution by goal programming and the genetic algorithm shows the optimal operating point, which maximizes the total treatable radioactive waste volume and minimizes the released radioactivity of liquid waste even under restricted resources. The comparison of the two methods shows very similar results. (author)

  19. The M-OLAP Cube Selection Problem: A Hyper-polymorphic Algorithm Approach

    Science.gov (United States)

    Loureiro, Jorge; Belo, Orlando

    OLAP systems depend heavily on the materialization of multidimensional structures to speed up queries, and the appropriate selection of these structures constitutes the cube selection problem. However, the recently proposed distribution of OLAP structures emerges to answer new globalization requirements, capturing the known advantages of distributed databases. But this makes the search for solutions harder, especially due to the inherent heterogeneity, imposing an extra characteristic on the algorithm that must be used: adaptability. Here the emerging concept known as hyper-heuristic can be a solution. In fact, an algorithm where several (meta-)heuristics may be selected under the control of a heuristic has intrinsically adaptive behavior. This paper presents a hyper-heuristic polymorphic algorithm used to solve the extended cube selection and allocation problem generated in M-OLAP architectures.

  20. Expert system technology for nondestructive waste assay

    International Nuclear Information System (INIS)

    Becker, G.K.; Determan, J.C.

    1998-01-01

    Nondestructive assay waste characterization data generated for use in the National TRU Program must be of known and demonstrable quality. Each measurement is required to receive an independent technical review by a qualified expert. An expert system prototype has been developed to automate waste NDA data review of a passive/active neutron drum counter system. The expert system is designed to yield a confidence rating regarding measurement validity. Expert system rules are derived from data in a process involving data clustering, fuzzy logic, and genetic algorithms. Expert system performance is assessed against confidence assignments elicited from waste NDA domain experts. Performance levels varied for the active, passive shielded, and passive system assay modes of the drum counter system, ranging from 78% to 94% correct classifications

  1. A possibilistic approach for transient identification with 'don't know' response capability optimized by genetic algorithm

    International Nuclear Information System (INIS)

    Almeida, Jose Carlos S. de; Schirru, Roberto; Pereira, Claudio M.N.A.; Universidade Federal, Rio de Janeiro, RJ

    2002-01-01

    This work describes a possibilistic approach for transient identification based on the minimum centroids set method, proposed in previous work, optimized by a genetic algorithm. The idea behind this method is to split the complex classification problem into small and simple ones, so that classification performance can be increased. In order to accomplish that, a genetic algorithm is used to learn, from realistic simulated data, the optimized time partitions for which robustness and correctness in the classification are maximized. The use of a possibilistic classification approach provides natural and consistent classification rules, leading naturally to a good heuristic to handle the 'don't know' response in the case of an unrecognized transient, which is highly desirable in transient classification systems where safety is critical. Application of the proposed approach to a nuclear transient identification problem reveals the good capability of the genetic algorithm in learning optimized possibilistic classification rules for efficient diagnosis, including the 'don't know' response. Obtained results are shown and commented. (author)

  2. Hybrid Approach To Steganography System Based On Quantum Encryption And Chaos Algorithms

    Directory of Open Access Journals (Sweden)

    ZAID A. ABOD

    2018-01-01

    Full Text Available A hybrid scheme for secretly embedding an image into a dithered multilevel image is presented. This work takes both a cover image and a secret image, which are scrambled and divided into groups to be embedded together based on multiple chaos algorithms (the Lorenz map, the Henon map and the Logistic map, respectively). Finally, the embedded images are encrypted using one of the quantum cryptography mechanisms, the quantum one-time pad. The experimental results show that the proposed hybrid system successfully embeds images, combines them with the quantum cryptography algorithms, and gives high efficiency for secure communication.
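
    One building block of such schemes, pixel scrambling driven by a chaotic sequence, is sketched below with a logistic map only; the Lorenz and Henon stages, the group-wise embedding, and the quantum one-time pad encryption are omitted, and the key values are arbitrary.

```python
import numpy as np

def logistic_permutation(n, x0=0.3567, r=3.99):
    """Generate a pixel permutation from a logistic-map chaotic sequence."""
    x, seq = x0, []
    for _ in range(n):
        x = r * x * (1.0 - x)       # logistic map iteration
        seq.append(x)
    return np.argsort(seq)          # ranking the chaotic values yields the permutation

rng = np.random.default_rng(0)
secret = rng.integers(0, 256, (32, 32), dtype=np.uint8)    # placeholder secret image

perm = logistic_permutation(secret.size)
scrambled = secret.flatten()[perm].reshape(secret.shape)   # scramble with the chaotic key

# A receiver with the same (x0, r) key inverts the permutation to recover the image.
inverse = np.empty_like(perm)
inverse[perm] = np.arange(perm.size)
recovered = scrambled.flatten()[inverse].reshape(secret.shape)
print("recovered exactly:", np.array_equal(recovered, secret))
```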

  3. Approaches to drug therapy for COPD in Russia: a proposed therapeutic algorithm

    Directory of Open Access Journals (Sweden)

    Zykov KA

    2017-04-01

    Full Text Available Kirill A Zykov (1), Svetlana I Ovcharenko (2) — (1) Laboratory of Pulmonology, Moscow State University of Medicine and Dentistry named after A.I. Evdokimov; (2) I.M. Sechenov First Moscow State Medical University, Moscow, Russia. Abstract: Until recently, there have been few clinical algorithms for the management of patients with COPD. Current evidence-based clinical management guidelines can appear to be complex, and they lack clear step-by-step instructions. For these reasons, we chose to create a simple and practical clinical algorithm for the management of patients with COPD, which would be applicable to real-world clinical practice, and which was based on clinical symptoms and spirometric parameters that would take into account the pathophysiological heterogeneity of COPD. This optimized algorithm has two main fields, one for nonspecialist treatment by primary care and general physicians and the other for treatment by specialized pulmonologists. Patients with COPD are treated with long-acting bronchodilators and short-acting drugs on a demand basis. If the forced expiratory volume in one second (FEV1 is ≥50% of predicted and symptoms are mild, treatment with a single long-acting muscarinic antagonist or long-acting beta-agonist is proposed. When FEV1 is <50% of predicted and/or the COPD assessment test score is ≥10, the use of combined bronchodilators is advised. If there is no response to treatment after three months, referral to a pulmonary specialist is recommended for pathophysiological endotyping: (1) eosinophilic endotype with peripheral blood or sputum eosinophilia >3%; (2) neutrophilic endotype with peripheral blood neutrophilia >60% or green sputum; or (3) pauci-granulocytic endotype. It is hoped that this simple, optimized, step-by-step algorithm will help to individualize the treatment of COPD in real-world clinical practice. This algorithm has yet to be evaluated prospectively or by comparison with other COPD management algorithms, including

  4. Chlorophyll-a Algorithms for Oligotrophic Oceans: A Novel Approach Based on Three-Band Reflectance Difference

    Science.gov (United States)

    Hu, Chuanmin; Lee, Zhongping; Franz, Bryan

    2011-01-01

    A new empirical algorithm is proposed to estimate surface chlorophyll-a concentrations (Chl) in the global ocean for Chl less than or equal to 0.25 milligrams per cubic meter (approximately 77% of the global ocean area). The algorithm is based on a color index (CI), defined as the difference between the remote sensing reflectance (R(sub rs), in sr(sup -1)) in the green and a reference formed linearly between R(sub rs) in the blue and red. For low Chl waters, in situ data showed a tighter (and therefore better) relationship between CI and Chl than between traditional band-ratios and Chl, which was further validated using global data collected concurrently by ship-borne and SeaWiFS satellite instruments. Model simulations showed that for low Chl waters, compared with the band-ratio algorithm, the CI-based algorithm (CIA) was more tolerant to changes in the chlorophyll-specific backscattering coefficient, and performed similarly for different relative contributions of non-phytoplankton absorption. Simulations using existing atmospheric correction approaches further demonstrated that the CIA was much less sensitive than band-ratio algorithms to various errors induced by instrument noise and imperfect atmospheric correction (including sun glint and whitecap corrections). Image and time-series analyses of SeaWiFS and MODIS/Aqua data also showed improved performance in terms of reduced image noise, more coherent spatial and temporal patterns, and consistency between the two sensors. The reduction in noise and other errors is particularly useful to improve the detection of various ocean features such as eddies. Preliminary tests over MERIS and CZCS data indicate that the new approach should be generally applicable to all existing and future ocean color instruments.
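
    The band arithmetic described above can be sketched directly from the abstract: CI is the green reflectance minus a linear baseline between the blue and red bands, and Chl then follows from a CI-based relationship. The band centres and the coefficients a0, a1 below are placeholders, not the published values.

```python
import numpy as np

# Nominal SeaWiFS-like band centres (nm); treated here as assumptions.
BLUE, GREEN, RED = 443.0, 555.0, 670.0

def color_index(rrs_blue, rrs_green, rrs_red):
    """CI = Rrs(green) minus a linear baseline between Rrs(blue) and Rrs(red)."""
    baseline = rrs_blue + (GREEN - BLUE) / (RED - BLUE) * (rrs_red - rrs_blue)
    return rrs_green - baseline

def chl_from_ci(ci, a0=-0.5, a1=200.0):
    """Illustrative relationship Chl = 10**(a0 + a1*CI); a0 and a1 are placeholder coefficients."""
    return 10.0 ** (a0 + a1 * ci)

# Example: clear oligotrophic water with the green band below the blue-red baseline.
rrs = dict(blue=0.0080, green=0.0035, red=0.0008)      # sr^-1, placeholder values
ci = color_index(rrs["blue"], rrs["green"], rrs["red"])
print("CI = %.5f sr^-1, Chl ~ %.3f mg m^-3" % (ci, chl_from_ci(ci)))
```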

  5. Medical Expert Systems Survey

    OpenAIRE

    Abu-Nasser, Bassem S.

    2017-01-01

    International audience; There is an increased interest in the area of Artificial Intelligence in general and expert systems in particular. Expert systems are rapidly growing technology. Expert systems are a branch of Artificial Intelligence which is having a great impact on many fields of human life. Expert systems use human expert knowledge to solve complex problems in many fields such as Health, science, engineering, business, and weather forecasting. Organizations employing the technology ...

  6. A New Hybrid Approach for Wind Speed Prediction Using Fast Block Least Mean Square Algorithm and Artificial Neural Network

    Directory of Open Access Journals (Sweden)

    Ummuhan Basaran Filik

    2016-01-01

    Full Text Available A new hybrid wind speed prediction approach, which uses the fast block least mean square (FBLMS) algorithm and the artificial neural network (ANN) method, is proposed. FBLMS is an adaptive algorithm with reduced complexity and a very fast convergence rate. The hybrid approach combines these two powerful methods. In order to show the efficiency and accuracy of the proposed approach, seven years of real hourly wind speed data collected by the Turkish State Meteorological Service for the Bozcaada and Eskisehir regions are used. Two different ANN structures are used for comparison with this approach. The first six years of data are used as the training set; the remaining one year of hourly data is used as the test set. Mean absolute error (MAE) and root mean square error (RMSE) are used for performance evaluation. It is shown for various cases that the new hybrid approach gives better results than the conventional ANN structures.
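
    Purely as a sketch (not the authors' code), the plain block variant of LMS on which the fast, frequency-domain FBLMS is built: weights are updated once per block from the gradient accumulated over that block. The tap count, block size and step size below are placeholder values.

        import numpy as np

        def block_lms(x, d, num_taps=8, block_size=8, mu=0.01):
            """Plain block LMS: one weight update per block of samples."""
            w = np.zeros(num_taps)
            y = np.zeros(len(x))
            for start in range(num_taps, len(x) - block_size, block_size):
                grad = np.zeros(num_taps)
                for n in range(start, start + block_size):
                    u = x[n - num_taps:n][::-1]        # most recent samples first
                    y[n] = w @ u
                    grad += (d[n] - y[n]) * u
                w += mu * grad / block_size            # averaged gradient step
            return w, y

        # usage idea (hypothetical series): one-step-ahead prediction of wind speed
        # w, y_hat = block_lms(np.asarray(wind[:-1]), np.asarray(wind[1:]))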

  7. A Low-Tech, Hands-On Approach To Teaching Sorting Algorithms to Working Students.

    Science.gov (United States)

    Dios, R.; Geller, J.

    1998-01-01

    Focuses on identifying the educational effects of "activity oriented" instructional techniques. Examines which instructional methods produce enhanced learning and comprehension. Discusses the problem of learning "sorting algorithms," a major topic in every Computer Science curriculum. Presents a low-tech, hands-on teaching method for sorting…

  8. A firefly algorithm approach for determining the parameters characteristics of solar cell

    Directory of Open Access Journals (Sweden)

    Mohamed LOUZAZNI

    2017-12-01

    Full Text Available A metaheuristic algorithm is proposed to describe the characteristics of a solar cell. The I-V characteristic of a solar cell is doubly nonlinear: it contains an exponential term and five unknown parameters. Since these parameters are unknown, they must be estimated for accurate modelling of the I-V and P-V curves of the solar cell. The firefly algorithm, inspired by the flashing patterns and swarm behaviour of fireflies, has attracted attention for optimizing nonlinear and complex systems. The proposed constrained objective function is derived from the current-voltage curve, and the experimental current and voltage of a commercial RTC France mono-crystalline silicon solar cell (single-diode model, at 33°C and 1000 W/m²) are used to predict the unknown parameters. The statistical errors are calculated to verify the accuracy of the results. The obtained results are compared with experimental data and other reported meta-heuristic optimization algorithms. In the end, the theoretical results confirm the validity and reliability of the firefly algorithm in estimating the optimal parameters of the solar cell.
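
    A generic firefly-style search loop, offered only as a sketch of the optimizer family named in the record; the single-diode objective (for example, the sum of squared current residuals over the measured I-V points), the parameter bounds and the coefficients beta0, gamma and alpha are all assumptions, not taken from the paper.

        import numpy as np

        rng = np.random.default_rng(0)

        def firefly_minimize(objective, bounds, n_fireflies=20, n_iter=100,
                             beta0=1.0, gamma=1.0, alpha=0.2):
            """Dimmer fireflies move toward brighter (lower-cost) ones with an
            attractiveness that decays with squared distance, plus a random walk."""
            lo, hi = np.array(bounds, dtype=float).T
            dim = len(lo)
            pop = rng.uniform(lo, hi, size=(n_fireflies, dim))
            cost = np.array([objective(p) for p in pop])
            for _ in range(n_iter):
                for i in range(n_fireflies):
                    for j in range(n_fireflies):
                        if cost[j] < cost[i]:              # firefly j is brighter
                            beta = beta0 * np.exp(-gamma * np.sum((pop[i] - pop[j]) ** 2))
                            pop[i] = np.clip(pop[i] + beta * (pop[j] - pop[i])
                                             + alpha * (rng.random(dim) - 0.5), lo, hi)
                            cost[i] = objective(pop[i])
            best = int(np.argmin(cost))
            return pop[best], cost[best]

        # toy check on an invented quadratic; a real run would pass the I-V residual
        best_p, best_c = firefly_minimize(lambda p: float(np.sum((p - 0.3) ** 2)),
                                          bounds=[(0.0, 1.0)] * 5)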

  9. Novel Approaches for Diagnosing Melanoma Skin Lesions Through Supervised and Deep Learning Algorithms.

    Science.gov (United States)

    Premaladha, J; Ravichandran, K S

    2016-04-01

    Dermoscopy is a technique used to capture images of the skin, and these images are useful for analyzing different types of skin diseases. Malignant melanoma is a kind of skin cancer whose severity can even lead to death. Earlier detection of melanoma prevents death, and clinicians can treat patients to increase their chances of survival. Only a few machine learning algorithms have been developed to detect melanoma using its features. This paper proposes a Computer Aided Diagnosis (CAD) system which equips efficient algorithms to classify and predict melanoma. Enhancement of the images is done using the Contrast Limited Adaptive Histogram Equalization (CLAHE) technique and a median filter. A new segmentation algorithm called Normalized Otsu's Segmentation (NOS) is implemented to segment the affected skin lesion from the normal skin, which overcomes the problem of variable illumination. Fifteen features derived and extracted from the segmented images are fed into the proposed classification techniques, namely Deep Learning based Neural Networks and a hybrid Adaboost-Support Vector Machine (SVM) algorithm. The proposed system is tested and validated with nearly 992 images (malignant & benign lesions) and it provides a high classification accuracy of 93%. The proposed CAD system can assist dermatologists to confirm the diagnosis and to avoid excisional biopsies.
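
    A minimal OpenCV sketch of the enhancement and segmentation steps named above (CLAHE, median filtering, Otsu thresholding on a normalized image). The record does not give the exact normalization used in Normalized Otsu's Segmentation, so the min-max step here, as well as the clip limit, tile size and kernel size, are assumptions.

        import cv2

        def preprocess_and_segment(gray_lesion_image):
            """Enhance with CLAHE and a median filter, then Otsu-threshold an
            intensity-normalized image to obtain a lesion mask."""
            clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
            enhanced = clahe.apply(gray_lesion_image)
            smoothed = cv2.medianBlur(enhanced, 5)
            normalized = cv2.normalize(smoothed, None, 0, 255, cv2.NORM_MINMAX)
            _, mask = cv2.threshold(normalized, 0, 255,
                                    cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
            return mask

        # usage with a hypothetical file name:
        # mask = preprocess_and_segment(cv2.imread("lesion.png", cv2.IMREAD_GRAYSCALE))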

  10. Systematic approach for deriving feasible mappings of parallel algorithms to parallel computing platforms

    NARCIS (Netherlands)

    Arkin, Ethem; Tekinerdogan, Bedir; Imre, Kayhan M.

    2017-01-01

    The need for high-performance computing together with the increasing trend from single processor to parallel computer architectures has leveraged the adoption of parallel computing. To benefit from parallel computing power, usually parallel algorithms are defined that can be mapped and executed

  11. Artificial Neural Network Approach in Laboratory Test Reporting: Learning Algorithms.

    Science.gov (United States)

    Demirci, Ferhat; Akan, Pinar; Kume, Tuncay; Sisman, Ali Riza; Erbayraktar, Zubeyde; Sevinc, Suleyman

    2016-08-01

    In the field of laboratory medicine, minimizing errors and establishing standardization is only possible by predefined processes. The aim of this study was to build an experimental decision algorithm model open to improvement that would efficiently and rapidly evaluate the results of biochemical tests with critical values by evaluating multiple factors concurrently. The experimental model was built by Weka software (Weka, Waikato, New Zealand) based on the artificial neural network method. Data were received from Dokuz Eylül University Central Laboratory. "Training sets" were developed for our experimental model to teach the evaluation criteria. After training the system, "test sets" developed for different conditions were used to statistically assess the validity of the model. After developing the decision algorithm with three iterations of training, no result was verified that was refused by the laboratory specialist. The sensitivity of the model was 91% and specificity was 100%. The estimated κ score was 0.950. This is the first study based on an artificial neural network to build an experimental assessment and decision algorithm model. By integrating our trained algorithm model into a laboratory information system, it may be possible to reduce employees' workload without compromising patient safety. © American Society for Clinical Pathology, 2016. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  12. A robust algorithm to solve the signal setting problem considering different traffic assignment approaches

    Directory of Open Access Journals (Sweden)

    Adacher Ludovica

    2017-12-01

    Full Text Available In this paper we extend a stochastic discrete optimization algorithm so as to tackle the signal setting problem. Signalized junctions represent critical points of an urban transportation network, and the efficiency of their traffic signal setting influences the overall network performance. Since road congestion usually takes place at or close to junction areas, an improvement in signal settings contributes to improving travel times, drivers’ comfort, fuel consumption efficiency, pollution and safety. In a traffic network, the signal control strategy affects the travel time on the roads and influences drivers’ route choice behavior. The paper presents an algorithm for signal setting optimization of signalized junctions in a congested road network. The objective function used in this work is a weighted sum of delays caused by the signalized intersections. We propose an iterative procedure to solve the problem by alternately updating signal settings based on fixed flows and traffic assignment based on fixed signal settings. To show the robustness of our method, we consider two different assignment methods: one based on user equilibrium assignment, well established in the literature as well as in practice, and the other based on a platoon simulation model with vehicular flow propagation and spill-back. Our optimization algorithm is also compared with others well known in the literature for this problem. The surrogate method (SM), particle swarm optimization (PSO) and the genetic algorithm (GA) are compared for a combined problem of global optimization of signal settings and traffic assignment (GOSSTA). Numerical experiments on a real test network are reported.

  13. Forecasting spot electricity prices : Deep learning approaches and empirical comparison of traditional algorithms

    NARCIS (Netherlands)

    Lago Garcia, J.; De Ridder, Fjo; De Schutter, B.H.K.

    2018-01-01

    In this paper, a novel modeling framework for forecasting electricity prices is proposed. While many predictive models have been already proposed to perform this task, the area of deep learning algorithms remains yet unexplored. To fill this scientific gap, we propose four different deep learning

  14. The CARPEDIEM Algorithm: A Rule-Based System for Identifying Heart Failure Phenotype with a Precision Public Health Approach

    Directory of Open Access Journals (Sweden)

    Michela Franchini

    2018-01-01

    Full Text Available Modern medicine remains dependent on the accurate evaluation of a patient’s health state, recognizing that disease is a process that evolves over time and interacts with many factors unique to that patient. The CARPEDIEM project represents a concrete attempt to address these issues by developing reproducible algorithms to support accuracy in the detection of complex diseases. This study aims to establish and validate the CARPEDIEM approach and algorithm for identifying those patients presenting with or at risk of heart failure (HF) by studying 153,393 subjects in Italy; it is based on patient information flow databases and is not reliant on the electronic health record to accomplish its goals. The resulting algorithm has been validated in a two-stage process, comparing predicted results with (1) HF diagnosis as identified by general practitioners (GPs) among the reference cohort and (2) HF diagnosis as identified by cardiologists within a randomly sampled subpopulation of 389 patients. The sources of data used to detect HF cases are numerous and were standardized for this study. The accuracy and the predictive values of the algorithm with respect to the GPs and the clinical standards are highly consistent with those from previous studies. In particular, the algorithm is more efficient in detecting the more severe cases of HF according to the GPs’ validation (specificity increases according to the number of comorbidities) and external validation (NYHA: II–IV; HF severity index: 2, 3). Positive and negative predictive values reveal that the CARPEDIEM algorithm is most consistent with clinical evaluation performed in the specialist setting, while it presents a greater ability to rule out false-negative HF cases within the GP practice, probably as a consequence of the different HF prevalence in the two different care settings. Further development includes analyzing the clinical features of false-positive and -negative predictions, to explore the natural

  15. GPU Based N-Gram String Matching Algorithm with Score Table Approach for String Searching in Many Documents

    Science.gov (United States)

    Srinivasa, K. G.; Shree Devi, B. N.

    2017-10-01

    String searching in documents has become a tedious task with the evolution of Big Data. Generation of large data sets demands a high performance search algorithm in areas such as text mining, information retrieval and many others. The popularity of GPUs for general purpose computing has been increasing for various applications. Therefore it is of great interest to exploit the thread feature of a GPU to provide a high performance search algorithm. This paper proposes an optimized new approach to the N-gram model for string search in a number of lengthy documents and its GPU implementation. The algorithm exploits GPGPUs for searching strings in many documents, employing character level N-gram matching with a parallel Score Table approach and search using the CUDA API. The new approach of the Score Table, used for frequency storage of N-grams in a document, makes the search independent of the document's length and allows faster access to the frequency values, thus decreasing the search complexity. The extensive thread feature in a GPU has been exploited to enable parallel pre-processing of trigrams in a document for Score Table creation and parallel search in a huge number of documents, thus speeding up the whole search process even for a large pattern size. Experiments were carried out for many documents of varied length and search strings from the standard Lorem Ipsum text on NVIDIA's GeForce GT 540M GPU with 96 cores. Results prove that the parallel approach for Score Table creation and searching gives a good speedup over the same approach executed serially.
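
    A serial Python rendition of the Score Table idea, purely illustrative: the CUDA kernels and GPU-side parallelism from the paper are not shown, and the document texts below are placeholders.

        from collections import Counter

        def build_score_table(document, n=3):
            """Frequency table of character n-grams; its size depends on the
            distinct n-grams seen, not on the document length."""
            return Counter(document[i:i + n] for i in range(len(document) - n + 1))

        def score(pattern, table, n=3):
            """Sum of stored frequencies for the pattern's n-grams; a score of
            zero means the pattern cannot occur in the document."""
            grams = [pattern[i:i + n] for i in range(len(pattern) - n + 1)]
            return sum(table[g] for g in grams)

        documents = {"doc1": "lorem ipsum dolor sit amet",
                     "doc2": "consectetur adipiscing elit"}
        tables = {name: build_score_table(text) for name, text in documents.items()}
        hits = [name for name, t in tables.items() if score("lorem ipsum", t) > 0]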

  16. CAF: Cluster algorithm and a-star with fuzzy approach for lifetime enhancement in wireless sensor networks

    KAUST Repository

    Yuan, Y.; Li, C.; Yang, Y.; Zhang, Xiangliang; Li, L.

    2014-01-01

    Energy is a major factor in designing wireless sensor networks (WSNs). In particular, in the real world, battery energy is limited; thus the effective use of the available energy becomes the key issue for routing protocols. Besides, the sensor nodes are often deployed far away from the base station, and the transmission energy consumption grows as a power of the distance. This paper proposes a new routing method for WSNs to extend the network lifetime using a combination of a clustering algorithm, a fuzzy approach, and an A-star method. The proposal is divided into two steps. Firstly, WSNs are separated into clusters using the Stable Election Protocol (SEP) method. Secondly, the combined methods of fuzzy inference and the A-star algorithm are adopted, taking into account factors such as the remaining power, the minimum hops, and the traffic numbers of nodes. Simulation results demonstrate that the proposed method is significantly effective in balancing energy consumption as well as maximizing the network lifetime, comparing its performance with the A-star and fuzzy (AF) approach, the cluster and fuzzy (CF) method, the cluster and A-star (CA) method, the A-star method, and the SEP algorithm under the same routing criteria. 2014 Yali Yuan et al.

  17. CAF: Cluster algorithm and a-star with fuzzy approach for lifetime enhancement in wireless sensor networks

    KAUST Repository

    Yuan, Y.

    2014-04-28

    Energy is a major factor in designing wireless sensor networks (WSNs). In particular, in the real world, battery energy is limited; thus the effective use of the available energy becomes the key issue for routing protocols. Besides, the sensor nodes are often deployed far away from the base station, and the transmission energy consumption grows as a power of the distance. This paper proposes a new routing method for WSNs to extend the network lifetime using a combination of a clustering algorithm, a fuzzy approach, and an A-star method. The proposal is divided into two steps. Firstly, WSNs are separated into clusters using the Stable Election Protocol (SEP) method. Secondly, the combined methods of fuzzy inference and the A-star algorithm are adopted, taking into account factors such as the remaining power, the minimum hops, and the traffic numbers of nodes. Simulation results demonstrate that the proposed method is significantly effective in balancing energy consumption as well as maximizing the network lifetime, comparing its performance with the A-star and fuzzy (AF) approach, the cluster and fuzzy (CF) method, the cluster and A-star (CA) method, the A-star method, and the SEP algorithm under the same routing criteria. 2014 Yali Yuan et al.

  18. State-of-the-art report on systematic approaches to safety management - Special Expert Group on Human and Organisational Factors (SEGHOF)

    International Nuclear Information System (INIS)

    Van den Berghe, Yves; Frischknecht, Albert; Gil, Benito; Martin, Anibal; McRobbie, Helen; Reiersen, Craig; Tasset, Daniel; Aastrand, Kaisa; Dahlgren-Persson, Kerstin; Pyy, Pekka; Mauny, Elisabeth

    2006-02-01

    There is a growing awareness of the significant contribution which human and organisational factors (HOF) make to nuclear safety. Within the HOF area, attention is increasingly focused on addressing management and organisational issues. This reflects an evolving recognition that the members of a nuclear licensee form part of a socio-technological system, and that their performance is influenced by the organisation and the culture within that organisation. A series of events across the nuclear industry and other sectors has reinforced the appreciation of the importance of robust safety management. Also, the management and organisation of nuclear installations is impacted by a number of current challenges such as deregulation, change in institutional ownership of the industry, contractorization and an ageing plant and workforce. It is in this context that the CSNI (Committee on Safety of Nuclear Installations) Special Experts' Group on Human and Organisational Factors (SEGHOF) was requested by the CNRA (Committee on Nuclear Regulatory Actions) to examine the role and influence of safety management in nuclear plant operations in 2000. A workshop on 'systematic approaches to safety management' was held in spring 2002 and this was followed by a survey in 2003-4 of relevant practices and developments across licensees and regulators. This report provides a brief explanation of the relationship between safety management and safety culture. It reinforces the need for nuclear licensees and regulators to take positive steps to ensure that licensees develop and sustain a robust safety management system as a part of their management systems as a whole. The report draws out the main findings of the workshop and presents the results of the survey in more detail. It seeks to identify current issues and areas warranting further consideration. The workshop explored the development of current organisational theories and their application to nuclear plant safety management. It

  19. Semi-Infinite Geology Modeling Algorithm (SIGMA): a Modular Approach to 3D Gravity

    Science.gov (United States)

    Chang, J. C.; Crain, K.

    2015-12-01

    Conventional 3D gravity computations can take up to days, weeks, and even months, depending on the size and resolution of the data being modeled. Additional modeling runs, due to technical malfunctions or additional data modifications, only compound computation times even further. We propose a new modeling algorithm that utilizes vertical line elements to approximate mass, and non-gridded (point) gravity observations. This algorithm is (1) orders of magnitude faster than conventional methods, (2) accurate to less than 0.1% error, and (3) modular. The modularity of this methodology means that researchers can modify their geology/terrain or gravity data, and only the modified component needs to be re-run. Additionally, land-, sea-, and air-based platforms can be modeled at their observation point, without having to filter data into a synthesized grid.
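
    The closed-form vertical attraction of a vertical line element is the standard building block an approach like this would rely on. The sketch below uses that textbook formula; the record's actual discretization of the geology model and the constants shown are assumptions.

        import numpy as np

        G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

        def vertical_line_gz(r, z_top, z_bottom, linear_density):
            """Vertical attraction (m/s^2) at a station a horizontal distance r
            from a vertical line mass running from depth z_top to z_bottom
            (both measured positive downward from the station elevation)."""
            return G * linear_density * (1.0 / np.hypot(r, z_top)
                                         - 1.0 / np.hypot(r, z_bottom))

        # hypothetical prism collapsed to a line: 100 m x 100 m column of rock
        # with density 2670 kg/m^3, observed 250 m away
        gz = vertical_line_gz(r=250.0, z_top=10.0, z_bottom=500.0,
                              linear_density=2670.0 * 100.0 * 100.0)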

  20. Development of Human-level Decision Making Algorithm for NPPs through Deep Neural Networks : Conceptual Approach

    International Nuclear Information System (INIS)

    Kim, Seung Geun; Seong, Poong Hyun

    2017-01-01

    Development of operation support systems and automation systems is closely related to the machine learning field. However, since it is hard to achieve human-level delicacy and flexibility for complex tasks with conventional machine learning technologies, only operation support systems with simple purposes have been developed, and high-level automation related studies have not been actively conducted. As one of the efforts to reduce human error in NPPs and to advance toward automation, the ultimate goal of this research is to develop a human-level decision making algorithm for NPPs during emergency situations. The concepts of SL, RL, policy network, value network, and MCTS, which have been applied to decision making algorithms in other fields, are introduced and combined with nuclear field specifications. Since the research is currently at the conceptual stage, more research is warranted.

  1. An algorithmic approach to solving polynomial equations associated with quantum circuits

    International Nuclear Information System (INIS)

    Gerdt, V.P.; Zinin, M.V.

    2009-01-01

    In this paper we present two algorithms for reducing systems of multivariate polynomial equations over the finite field F_2 to the canonical triangular form called lexicographical Groebner basis. This triangular form is the most appropriate for finding solutions of the system. On the other hand, the system of polynomials over F_2 whose variables also take values in F_2 (Boolean polynomials) completely describes the unitary matrix generated by a quantum circuit. In particular, the matrix itself can be computed by counting the number of solutions (roots) of the associated polynomial system. Thereby, efficient construction of the lexicographical Groebner bases over F_2 associated with quantum circuits gives a method for computing their circuit matrices that is alternative to the direct numerical method based on linear algebra. We compare our implementation of both algorithms with some other software packages available for computing Groebner bases over F_2.
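
    A brute-force sketch of the root-counting idea only: it enumerates all assignments over F_2, which is exactly what a lexicographical Groebner basis lets one avoid for realistically sized circuits. The toy system is invented for illustration.

        from itertools import product

        def count_roots(polynomials, n_vars):
            """Count common roots over F_2 of Boolean polynomials, each given
            as a function of an n_vars-long 0/1 tuple (arithmetic taken mod 2)."""
            return sum(all(p(x) % 2 == 0 for p in polynomials)
                       for x in product((0, 1), repeat=n_vars))

        # toy system: x0*x1 + x2 = 0 and x0 + x1 + x2 = 0 over F_2
        polys = [lambda x: x[0] * x[1] + x[2], lambda x: x[0] + x[1] + x[2]]
        n_solutions = count_roots(polys, 3)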

  2. Genetic Algorithm Based PID Controller Tuning Approach for Continuous Stirred Tank Reactor

    OpenAIRE

    A. Jayachitra; R. Vinodha

    2014-01-01

    Genetic algorithm (GA) based PID (proportional integral derivative) controller has been proposed for tuning optimized PID parameters in a continuous stirred tank reactor (CSTR) process using a weighted combination of objective functions, namely, integral square error (ISE), integral absolute error (IAE), and integrated time absolute error (ITAE). Optimization of PID controller parameters is the key goal in chemical and biochemical industries. PID controllers have narrowed down the operating r...
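
    A sketch of the weighted ISE/IAE/ITAE cost such a GA would minimize, using a toy first-order plant, explicit Euler integration and placeholder weights; the CSTR model, weightings and horizon in the paper are not reproduced here. A GA would then search (kp, ki, kd) to minimize this cost.

        def pid_cost(kp, ki, kd, w=(0.4, 0.3, 0.3), t_end=20.0, dt=0.01):
            """Weighted ISE/IAE/ITAE cost of a PID loop around a toy plant
            dy/dt = (-y + u)/tau, driven by a unit step setpoint."""
            tau, y, integ, prev_err = 2.0, 0.0, 0.0, 1.0
            ise = iae = itae = 0.0
            for k in range(int(t_end / dt)):
                t = k * dt
                err = 1.0 - y
                integ += err * dt
                deriv = (err - prev_err) / dt
                u = kp * err + ki * integ + kd * deriv
                y += dt * (-y + u) / tau
                prev_err = err
                ise += err ** 2 * dt
                iae += abs(err) * dt
                itae += t * abs(err) * dt
            return w[0] * ise + w[1] * iae + w[2] * itae

        # a GA individual is a (kp, ki, kd) triple; its fitness is -pid_cost(*individual)
        baseline = pid_cost(kp=2.0, ki=0.5, kd=0.1)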

  3. Systems Engineering Approach to Develop Guidance, Navigation and Control Algorithms for Unmanned Ground Vehicle

    Science.gov (United States)

    2016-09-01

    (The record text reproduces part of the report's list of abbreviations rather than an abstract: Global Positioning System; HNA, hybrid navigation algorithm; HRI, human-robot interface; IED, Improvised Explosive Device; IMU, inertial measurement unit; ... Potential Field Method; R&D, research and development; RDT&E, research, development, test and evaluation; RF, radiofrequency; RGB, red, green and blue; ROE, ... It then continues mid-sentence: "...were radiofrequency (RF) controlled and pneumatically actuated upon receiving the wireless commands from the radio operator. The pairing of such an")

  4. An Algorithmic Approach for the Reconstruction of Nasal Skin Defects: Retrospective Analysis of 130 Cases

    Directory of Open Access Journals (Sweden)

    Berrak Akşam

    2016-06-01

    Full Text Available Objective: Most of the malignant cutaneous carcinomas are seen in the nasal region. Reconstruction of nasal defects is challenging because of the unique anatomic properties and complex structure of this region. In this study, we present our algorithm for nasal skin defects that occur after malignant skin tumor excisions. Material and Methods: Patients whose nasal skin was reconstructed after malignant skin tumor excision were included in the study. These patients were evaluated by their age, gender, comorbidities, tumor location, tumor size, reconstruction type, histopathological diagnosis, and tumor recurrence. Results: A total of 130 patients (70 female, 60 male) were evaluated. The average age of the patients was 67.8 years. Tumors were located mostly at the dorsum, alar region, and tip of the nose. When reconstruction methods were evaluated, primary closure was preferred in 14.6% of patients, full thickness skin grafts were used in 25.3% of patients, and reconstruction with flaps was the choice in 60% of patients. Different flaps were used according to the subunits. Mostly, dorsal nasal flaps, bilobed flaps, nasolabial flaps, and forehead flaps were used. Conclusion: The defect-only reconstruction principle was accepted in this study. Previously described subunits of the nose, such as the dorsum, tip, alar region, lateral wall, columella, and soft triangles, were further divided into subregions by their anatomical relations. An algorithm was planned with these subregions. In nasal skin reconstruction, this algorithm helps in selecting the methods that give the best results and minimize complications.

  5. Genetic algorithm based approach to investigate doped metal oxide materials: Application to lanthanide-doped ceria

    Science.gov (United States)

    Hooper, James; Ismail, Arif; Giorgi, Javier B.; Woo, Tom K.

    2010-06-01

    A genetic algorithm (GA)-inspired method to effectively map out low-energy configurations of doped metal oxide materials is presented. Specialized mating and mutation operations that do not alter the identity of the parent metal oxide have been incorporated to efficiently sample the metal dopant and oxygen vacancy sites. The search algorithms have been tested on lanthanide-doped ceria (L=Sm,Gd,Lu) with various dopant concentrations. Using both classical and first-principles density-functional-theory (DFT) potentials, we have shown the methodology reproduces the results of recent systematic searches of doped ceria at low concentrations (3.2% L2O3) and identifies low-energy structures of concentrated samarium-doped ceria (3.8% and 6.6% L2O3) which relate to the experimental and theoretical findings published thus far. We introduce a tandem classical/DFT GA algorithm in which an inexpensive classical potential is first used to generate a fit gene pool of structures to enhance the overall efficiency of the computationally demanding DFT-based GA search.

  6. An Event-Triggered Online Energy Management Algorithm of Smart Home: Lyapunov Optimization Approach

    Directory of Open Access Journals (Sweden)

    Wei Fan

    2016-05-01

    Full Text Available As an important component of the smart grid on the user side, a home energy management system is the core of optimal operation for a smart home. In this paper, the energy scheduling problem for a household equipped with photovoltaic devices was investigated. An online energy management algorithm based on event triggering was proposed. The Lyapunov optimization method was adopted to schedule controllable load in the household. Without forecasting related variables, real-time decisions were made based only on the current information. Energy could be rapidly regulated under the fluctuation of distributed generation, electricity demand and market price. The event-triggering mechanism was adopted to trigger the execution of the online algorithm, so as to cut down the execution frequency and unnecessary calculation. A comprehensive result obtained from simulation shows that the proposed algorithm could effectively decrease the electricity bills of users. Moreover, the required computational resource is small, which contributes to the low-cost energy management of a smart home.
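
    An illustrative drift-plus-penalty decision rule of the kind Lyapunov-optimization schedulers reduce to; the backlog/price threshold form and the parameter V are generic textbook elements, not the paper's actual controller.

        def schedule_load(queue_backlog, price, p_max, V=50.0):
            """Drift-plus-penalty rule: at each triggering event, choose the power
            p in [0, p_max] that minimizes V*price*p - queue_backlog*p, i.e. run
            the deferrable load flat out when the backlog outweighs the scaled price."""
            return p_max if queue_backlog > V * price else 0.0

        # hypothetical event: 3 kWh of deferred demand, spot price 0.12 per kWh
        p = schedule_load(queue_backlog=3.0, price=0.12, p_max=2.0)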

  7. Chaotic logic gate: A new approach in set and design by genetic algorithm

    International Nuclear Information System (INIS)

    Beyki, Mahmood; Yaghoobi, Mahdi

    2015-01-01

    How to reconfigure a logic gate is an attractive subject for different applications. Chaotic systems can yield a wide variety of patterns, and here we use this feature to produce a logic gate. This feature forms the basis for designing a dynamical computing device that can be rapidly reconfigured to become any wanted logical operator. A logic gate that can be reconfigured to any logical operator when placed in its chaotic state is called a chaotic logic gate. The reconfiguration is realized by setting the parameter values of the chaotic logic gate. In this paper we present mechanisms for producing a logic gate based on the logistic map in its chaotic state, and a genetic algorithm is used to set the parameter values. We use three well-known selection methods from genetic algorithms: tournament selection, Roulette wheel selection and random selection. The results show the tournament selection method is the best method for setting the parameter values. Further, the genetic algorithm proves to be a powerful tool for setting the parameter values of the chaotic logic gate.
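
    As a hedged illustration of the mechanism (one common threshold-encoding formulation, not necessarily the authors'): the two logic inputs shift the initial state of the chaotic logistic map, one iterate is thresholded to give the output, and a plain random search over (x0, delta, threshold) stands in for the GA with tournament, Roulette wheel or random selection described in the record.

        import random

        def gate_output(x0, delta, threshold, i1, i2, r=4.0):
            """Encode the two logic inputs into the initial state, iterate the
            logistic map once, and threshold the result."""
            x = x0 + delta * (i1 + i2)
            x = r * x * (1.0 - x)
            return int(x > threshold)

        def matches(params, truth_table):
            x0, delta, threshold = params
            return all(gate_output(x0, delta, threshold, a, b) == out
                       for (a, b), out in truth_table.items())

        # find parameters realizing AND (a GA would do this search more cleverly)
        target_and = {(0, 0): 0, (0, 1): 0, (1, 0): 0, (1, 1): 1}
        rng = random.Random(1)
        params = None
        for _ in range(100000):
            cand = (rng.random() * 0.25, rng.random() * 0.25, rng.random())
            if matches(cand, target_and):
                params = cand
                break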

  8. A Finite State Machine Approach to Algorithmic Lateral Inhibition for Real-Time Motion Detection †

    Directory of Open Access Journals (Sweden)

    María T. López

    2018-05-01

    Full Text Available Many researchers have explored the relationship between recurrent neural networks and finite state machines. Finite state machines constitute the best-characterized computational model, whereas artificial neural networks have become a very successful tool for modeling and problem solving. The neurally-inspired lateral inhibition method, and its application to motion detection tasks, have been successfully implemented in recent years. In this paper, control knowledge of the algorithmic lateral inhibition (ALI) method is described and applied by means of finite state machines, in which the state space is constituted from the set of distinguishable cases of accumulated charge in a local memory. The article describes an ALI implementation for a motion detection task. For the implementation, we have chosen to use one of the members of the 16-nm Kintex UltraScale+ family of Xilinx FPGAs. FPGAs provide the necessary accuracy, resolution, and precision to run neural algorithms alongside current sensor technologies. The results offered in this paper demonstrate that this implementation provides accurate object tracking performance on several datasets, obtaining a high F-score value (0.86) for the most complex sequence used. Moreover, it outperforms implementations of a complete ALI algorithm and a simplified version of the ALI algorithm—named “accumulative computation”—which was run about ten years ago, now reaching real-time processing times that were simply not achievable at that time for ALI.

  9. Designing an 'expert knowledge' based approach for the quantification of historical floods - the case study of the Kinzig catchment in Southwest Germany

    Science.gov (United States)

    Bösmeier, Annette; Glaser, Rüdiger; Stahl, Kerstin; Himmelsbach, Iso; Schönbein, Johannes

    2017-04-01

    Future estimations of flood hazard and risk for developing optimal coping and adaptation strategies inevitably include considerations of the frequency and magnitude of past events. Methods of historical climatology represent one way of assessing flood occurrences beyond the period of instrumental measurements and can thereby substantially help to extend the view into the past and to improve modern risk analysis. Such historical information can be of additional value and has been used in statistical approaches like Bayesian flood frequency analyses during recent years. However, the derivation of quantitative values from vague descriptive information of historical sources remains a crucial challenge. We explored possibilities of parametrization of descriptive flood related data specifically for the assessment of historical floods in a framework that combines a hermeneutical approach with mathematical and statistical methods. This study forms part of the transnational, Franco-German research project TRANSRISK2 (2014 - 2017), funded by ANR and DFG, with the focus on exploring the flood history of the last 300 years for the regions of the Upper and Middle Rhine. A broad database of flood events had been compiled, dating back to AD 1500. The events had been classified based on hermeneutical methods, depending on intensity, spatial dimension, temporal structure, damages and mitigation measures associated with the specific events. This indexed database allowed the exploration of a link between descriptive data and quantitative information for the overlapping time period of classified floods and instrumental measurements since the end of the 19th century. Thereby, flood peak discharges as a quantitative measure of the severity of a flood were used to assess the discharge intervals for flood classes (upper and lower thresholds) within different time intervals for validating the flood classification, as well as examining the trend in the perception threshold over time

  10. Hybrid expert system

    International Nuclear Information System (INIS)

    Tsoukalas, L.; Ikonomopoulos, A.; Uhrig, R.E.

    1991-01-01

    This paper presents a methodology that couples rule-based expert systems using fuzzy logic to pre-trained artificial neural networks (ANN) for the purpose of transient identification in Nuclear Power Plants (NPP). In order to provide timely, concise, and task-specific information about the many aspects of the transient, and to determine the state of the system based on the interpretation of potentially noisy data, a model-referenced approach is utilized. In it, the expert system performs the basic interpretation and processing of the model data, and pre-trained ANNs provide the model. Having access to a set of neural networks that typify general categories of transients, the rule-based system is able to perform identification functions. Membership functions - which condense information characterizing a transient into a form convenient for a rule-based identification system - are the output of the neural computations. This allows the identification function to be performed with a speed comparable to or faster than that of the temporal evolution of the system. Simulator data from a major secondary-system pipe rupture is used to demonstrate the methodology. The results indicate excellent noise-tolerance for ANNs and suggest a new method for transient identification within the framework of fuzzy logic.

  11. Sleep-spindle detection: crowdsourcing and evaluating performance of experts, non-experts and automated methods

    DEFF Research Database (Denmark)

    Warby, Simon C.; Wendt, Sabrina Lyngbye; Welinder, Peter

    2014-01-01

    to crowdsource spindle identification by human experts and non-experts, and we compared their performance with that of automated detection algorithms in data from middle- to older-aged subjects from the general population. We also refined methods for forming group consensus and evaluating the performance...... of event detectors in physiological data such as electroencephalographic recordings from polysomnography. Compared to the expert group consensus gold standard, the highest performance was by individual experts and the non-expert group consensus, followed by automated spindle detectors. This analysis showed...... that crowdsourcing the scoring of sleep data is an efficient method to collect large data sets, even for difficult tasks such as spindle identification. Further refinements to spindle detection algorithms are needed for middle- to older-aged subjects....

  12. A Hybrid Maximum Power Point Tracking Approach for Photovoltaic Systems under Partial Shading Conditions Using a Modified Genetic Algorithm and the Firefly Algorithm

    Directory of Open Access Journals (Sweden)

    Yu-Pei Huang

    2018-01-01

    Full Text Available This paper proposes a modified maximum power point tracking (MPPT) algorithm for photovoltaic systems under rapidly changing partial shading conditions (PSCs). The proposed algorithm integrates a genetic algorithm (GA) and the firefly algorithm (FA) and further improves its calculation process via a differential evolution (DE) algorithm. The conventional GA is not advisable for MPPT because of its complicated calculations and low accuracy under PSCs. In this study, we simplified the GA calculations with the integration of the DE mutation process and FA attractive process. Results from both the simulation and evaluation verify that the proposed algorithm provides rapid response time and high accuracy due to the simplified processing. For instance, evaluation results demonstrate that when compared to the conventional GA, the execution time and tracking accuracy of the proposed algorithm can be, respectively, improved around 69.4% and 4.16%. In addition, in comparison to FA, the tracking speed and tracking accuracy of the proposed algorithm can be improved around 42.9% and 1.85%, respectively. Consequently, the major improvement of the proposed method when evaluated against the conventional GA and FA is tracking speed. Moreover, this research provides a framework to integrate multiple nature-inspired algorithms for MPPT. Furthermore, the proposed method is adaptable to different types of solar panels and different system formats with specifically designed equations, the advantages of which are rapid tracking speed with high accuracy under PSCs.

  13. Expert auditors’ services classification

    OpenAIRE

    Jolanta Wisniewska

    2013-01-01

    The profession of an expert auditor is a public trust occupation, a distinctive feature of which is taking responsibility for actions in the public interest. The main responsibility of expert auditors is performing financial auditing; however, expert auditors are prepared to carry out different tasks which encompass a wide range of financial and auditing services for different kinds of institutions and companies. The aim of the article is, first of all, to describe expert auditors’ service...

  14. A Regularized Approach for Solving Magnetic Differential Equations and a Revised Iterative Equilibrium Algorithm

    International Nuclear Information System (INIS)

    Hudson, S.R.

    2010-01-01

    A method for approximately solving magnetic differential equations is described. The approach is to include a small diffusion term to the equation, which regularizes the linear operator to be inverted. The extra term allows a 'source-correction' term to be defined, which is generally required in order to satisfy the solvability conditions. The approach is described in the context of computing the pressure and parallel currents in the iterative approach for computing magnetohydrodynamic equilibria.

  15. Order Batching in Warehouses by Minimizing Total Tardiness: A Hybrid Approach of Weighted Association Rule Mining and Genetic Algorithms

    Directory of Open Access Journals (Sweden)

    Amir Hossein Azadnia

    2013-01-01

    Full Text Available One of the cost-intensive issues in managing warehouses is the order picking problem, which deals with the retrieval of items from their storage locations in order to meet customer requests. Many solution approaches have been proposed in order to minimize traveling distance in the process of order picking. However, in practice, customer orders have to be completed by certain due dates in order to avoid tardiness, which is neglected in most of the related scientific papers. Consequently, we propose a novel solution approach for minimizing tardiness which consists of four phases. First of all, weighted association rule mining is used to calculate associations between orders with respect to their due date. Next, a batching model based on binary integer programming is formulated to maximize the associations between orders within each batch. Subsequently, the order picking phase uses a Genetic Algorithm integrated with the Traveling Salesman Problem in order to identify the most suitable travel path. Finally, the Genetic Algorithm is applied to sequence the constructed batches in order to minimize tardiness. Illustrative examples and comparisons are presented to demonstrate the proficiency and solution quality of the proposed approach.

  16. An Unsupervised Opinion Mining Approach for Japanese Weblog Reputation Information Using an Improved SO-PMI Algorithm

    Science.gov (United States)

    Wang, Guangwei; Araki, Kenji

    In this paper, we propose an improved SO-PMI (Semantic Orientation Using Pointwise Mutual Information) algorithm, for use in Japanese Weblog Opinion Mining. SO-PMI is an unsupervised approach proposed by Turney that has been shown to work well for English. When this algorithm was translated into Japanese naively, most phrases, whether positive or negative in meaning, received a negative SO. For dealing with this slanting phenomenon, we propose three improvements: to expand the reference words to sets of words, to introduce a balancing factor and to detect neutral expressions. In our experiments, the proposed improvements obtained a well-balanced result: both positive and negative accuracy exceeded 62%, when evaluated on 1,200 opinion sentences sampled from three different domains (reviews of Electronic Products, Cars and Travels from Kakaku.com). In a comparative experiment on the same corpus, a supervised approach (SA-Demo) achieved a very similar accuracy to our method. This shows that our proposed approach effectively adapted SO-PMI for Japanese, and it also shows the generality of SO-PMI.
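
    A sketch of the underlying SO-PMI score computed from search-engine hit counts (Turney's formulation); the paper's three improvements (reference word sets, the balancing factor and neutral-expression detection) are not shown, and the counts below are hypothetical.

        import math

        def so_pmi(hits_near_pos, hits_near_neg, hits_pos, hits_neg, smoothing=0.01):
            """Semantic orientation of a phrase: PMI with the positive reference
            words minus PMI with the negative ones. The corpus-size terms cancel,
            leaving a log-ratio of co-occurrence counts."""
            return math.log2(((hits_near_pos + smoothing) * (hits_neg + smoothing)) /
                             ((hits_near_neg + smoothing) * (hits_pos + smoothing)))

        # hypothetical counts from a web or blog search engine
        orientation = so_pmi(hits_near_pos=120, hits_near_neg=45,
                             hits_pos=1e6, hits_neg=9e5)
        is_positive = orientation > 0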

  17. Solution Approach to Automatic Generation Control Problem Using Hybridized Gravitational Search Algorithm Optimized PID and FOPID Controllers

    Directory of Open Access Journals (Sweden)

    DAHIYA, P.

    2015-05-01

    Full Text Available This paper presents the application of hybrid opposition based disruption operator in gravitational search algorithm (DOGSA) to solve automatic generation control (AGC) problem of four area hydro-thermal-gas interconnected power system. The proposed DOGSA approach combines the advantages of opposition based learning which enhances the speed of convergence and disruption operator which has the ability to further explore and exploit the search space of standard gravitational search algorithm (GSA). The addition of these two concepts to GSA increases its flexibility for solving the complex optimization problems. This paper addresses the design and performance analysis of DOGSA based proportional integral derivative (PID) and fractional order proportional integral derivative (FOPID) controllers for automatic generation control problem. The proposed approaches are demonstrated by comparing the results with the standard GSA, opposition learning based GSA (OGSA) and disruption based GSA (DGSA). The sensitivity analysis is also carried out to study the robustness of DOGSA tuned controllers in order to accommodate variations in operating load conditions, tie-line synchronizing coefficient, time constants of governor and turbine. Further, the approaches are extended to a more realistic power system model by considering the physical constraints such as thermal turbine generation rate constraint, speed governor dead band and time delay.

  18. Combinatorial theory of the semiclassical evaluation of transport moments II: Algorithmic approach for moment generating functions

    Energy Technology Data Exchange (ETDEWEB)

    Berkolaiko, G. [Department of Mathematics, Texas A and M University, College Station, Texas 77843-3368 (United States); Kuipers, J. [Institut für Theoretische Physik, Universität Regensburg, D-93040 Regensburg (Germany)

    2013-12-15

    Electronic transport through chaotic quantum dots exhibits universal behaviour which can be understood through the semiclassical approximation. Within the approximation, calculation of transport moments reduces to codifying classical correlations between scattering trajectories. These can be represented as ribbon graphs and we develop an algorithmic combinatorial method to generate all such graphs with a given genus. This provides an expansion of the linear transport moments for systems both with and without time reversal symmetry. The computational implementation is then able to progress several orders further than previous semiclassical formulae as well as those derived from an asymptotic expansion of random matrix results. The patterns observed also suggest a general form for the higher orders.

  19. Prediction of breast cancer risk using a machine learning approach embedded with a locality preserving projection algorithm

    Science.gov (United States)

    Heidari, Morteza; Zargari Khuzani, Abolfazl; Hollingsworth, Alan B.; Danala, Gopichandh; Mirniaharikandehei, Seyedehnafiseh; Qiu, Yuchen; Liu, Hong; Zheng, Bin

    2018-02-01

    In order to automatically identify a set of effective mammographic image features and build an optimal breast cancer risk stratification model, this study aims to investigate advantages of applying a machine learning approach embedded with a locality preserving projection (LPP) based feature combination and regeneration algorithm to predict short-term breast cancer risk. A dataset involving negative mammograms acquired from 500 women was assembled. This dataset was divided into two age-matched classes of 250 high risk cases in which cancer was detected in the next subsequent mammography screening and 250 low risk cases, which remained negative. First, a computer-aided image processing scheme was applied to segment fibro-glandular tissue depicted on mammograms and initially compute 44 features related to the bilateral asymmetry of mammographic tissue density distribution between left and right breasts. Next, a multi-feature fusion based machine learning classifier was built to predict the risk of cancer detection in the next mammography screening. A leave-one-case-out (LOCO) cross-validation method was applied to train and test the machine learning classifier embedded with an LPP algorithm, which generated a new operational vector with 4 features using a maximal variance approach in each LOCO process. Results showed a 9.7% increase in risk prediction accuracy when using this LPP-embedded machine learning approach. An increased trend of adjusted odds ratios was also detected in which odds ratios increased from 1.0 to 11.2. This study demonstrated that applying the LPP algorithm effectively reduced feature dimensionality, and yielded higher and potentially more robust performance in predicting short-term breast cancer risk.
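
    A standard LPP recipe, given only as a sketch of the dimensionality-reduction step named in the record: the neighbourhood size, heat-kernel width and regularization below are assumptions, and the record's own feature-regeneration details (the 4-feature operational vector chosen by a maximal-variance criterion) are not reproduced.

        import numpy as np
        from scipy.linalg import eigh
        from scipy.spatial.distance import cdist

        def lpp(X, n_components=4, k=5, t=1.0):
            """Locality preserving projection: build a heat-kernel k-NN affinity W,
            form the graph Laplacian L = D - W, and solve the generalized
            eigenproblem X^T L X a = lambda X^T D X a for the smallest eigenvalues."""
            n = X.shape[0]
            d2 = cdist(X, X, "sqeuclidean")
            W = np.zeros((n, n))
            for i in range(n):
                nbrs = np.argsort(d2[i])[1:k + 1]
                W[i, nbrs] = np.exp(-d2[i, nbrs] / t)
            W = np.maximum(W, W.T)                         # symmetrize
            D = np.diag(W.sum(axis=1))
            L = D - W
            A, B = X.T @ L @ X, X.T @ D @ X
            vals, vecs = eigh(A, B + 1e-9 * np.eye(B.shape[0]))   # regularized B
            return vecs[:, :n_components]                  # (features x n_components)

        # usage: Z = X @ lpp(X), where X is (cases x features), e.g. 500 x 44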

  20. Delegating Decisions to Experts

    Science.gov (United States)

    Li, Hao; Suen, Wing

    2004-01-01

    We present a model of delegation with self-interested and privately informed experts. A team of experts with extreme but opposite biases is acceptable to a wide range of decision makers with diverse preferences, but the value of expertise from such a team is low. A decision maker wants to appoint experts who are less partisan than he is in order…

  1. Analytical and Algorithmic Approaches to Determine the Number of Sensor Nodes for Minimum Power Consumption in LWSNs

    Directory of Open Access Journals (Sweden)

    Ali Soner Kilinc

    2017-08-01

    Full Text Available A Linear Wireless Sensor Network (LWSN) is a kind of wireless sensor network where the nodes are deployed in a line. Since the sensor nodes are energy restricted, energy efficiency becomes one of the most significant design issues for LWSNs as well as wireless sensor networks. With the proper deployment, the power consumption could be minimized by adjusting the distance between the sensor nodes, which is known as the hop length. In this paper, analytical and algorithmic approaches are presented to determine the number of hops and sensor nodes for minimum power consumption in a linear wireless sensor network including equidistantly placed sensor nodes.
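
    A sketch of the kind of energy model behind such an analysis, using a generic first-order radio model with textbook-style constants; the record's actual model, constants and path-loss exponent are not given here, so these values are assumptions.

        def chain_energy_per_bit(m, D, e_elec=50e-9, e_amp=100e-12, alpha=2.0):
            """Energy (J/bit) to relay one bit end-to-end over m equal hops of a
            line of length D: per-hop electronics cost for transmit and receive
            plus a d^alpha amplifier term."""
            d = D / m
            return m * (2 * e_elec + e_amp * d ** alpha)

        def best_hop_count(D, max_hops=200, **kw):
            """Hop count minimizing the end-to-end energy over 1..max_hops."""
            return min(range(1, max_hops + 1),
                       key=lambda m: chain_energy_per_bit(m, D, **kw))

        # e.g. a hypothetical 1 km line monitored by equidistant nodes
        m_opt = best_hop_count(1000.0)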

  2. Optimal Management Of Renewable-Based Mgs An Intelligent Approach Through The Evolutionary Algorithm

    Directory of Open Access Journals (Sweden)

    Mehdi Nafar

    2015-08-01

    Full Text Available Abstract- This article proposes a probabilistic framework built on scenario generation to account for the uncertainties in the optimal operation management of Micro Grids (MGs). The MG contains different renewable energy resources such as a Wind Turbine (WT), a Micro Turbine (MT), Photovoltaics (PV), a Fuel Cell (FC) and one battery as the storage device. The proposed framework is based on scenario generation and a Roulette wheel mechanism to produce different scenarios for handling the uncertainties of the relevant factors, and it uses the normal distribution as the probability distribution function of the random variables. The uncertainties considered in this paper are grid bid variations, load demand forecasting error, and PV and WT output power productions. It is worth noting that solving the MG problem for 24 hours of a day, while considering diverse uncertainties and different constraints, requires a powerful optimization method that converges fast without falling into local optima. To this end, the Group Search Optimization (GSO) method is presented to explore the total search space globally. The GSO algorithm is inspired by the group behaviour of animals, and one modification of the GSO procedure is also proposed for this algorithm. The proposed framework and method are applied to one test grid-connected MG as a typical grid.

  3. An Algorithm Approach for the Analysis of Urban Land-Use/Cover: Logic Filters

    Directory of Open Access Journals (Sweden)

    Şinasi Kaya

    2014-11-01

    Full Text Available Accurate classification of land-use/cover based on remotely sensed data is important for interpreters who analyze time or event-based change in certain areas. Any method that gives the user flexibility in area selection provides great simplicity during analysis, since the analyst may need to work on a specific area of interest instead of dealing with the entire remotely sensed data set. The objectives of the paper are to develop an automation algorithm using Matlab & Simulink for user-selected areas, to filter V-I-S (Vegetation, Impervious, Soil) components using the algorithm, to analyze the components according to upper and lower threshold values based on each band histogram, and finally to obtain a land-use/cover map combining the V-I-S components. LANDSAT 5 TM satellite data covering the Istanbul and Izmit regions are utilized, and the 4, 3, 2 (RGB) band combination is selected to fulfill the aims of the study. These bands are normalized, and the V-I-S components of each band are determined. This methodology, which uses the Matlab & Simulink program, is as successful as the unsupervised and supervised methods. Practices with these methods that lead to qualitative and quantitative assessments of selected urban areas will further provide important spatial information and data, especially to urban planners and decision-makers.

  4. FCJ-185 An Algorithmic Agartha: Post-App Approaches to Synarchic Regulation

    Directory of Open Access Journals (Sweden)

    Dan Mellamphy

    2015-08-01

    Full Text Available This rather suggestive and altogether speculative essay began as an attempt on our part to use a model of bio-chemical signal-transduction (Howard Rasmussen’s schema for ‘synarchic regulation’) to explain, beyond the boundaries of cell-transduction in molecular chemistry, transduction in cell-phone applications: the ‘synarchic regulation’ — and rather remarkable reticulation — of ‘cellular transmission’ in the techno-communicational rather than bio-chemical field. It was to be a complement and/or an alternate perspective to our conference-paper and subsequent book-chapter on the ‘app-alliance’, both of which had been written in and for the event of the Apps and Affect conference in October 2013. It became something slightly different, unmoored from mere cellular transmission as such and suggestive of a much more general and more comprehensive techno-scientific, market-economic and politico-military — or ‘synarchic’ — network, operating as the regulative engine for an emerging and overarching planetary system of algorithmic governance. In what follows, we offer an ‘app’lication of the principles of ‘synarchic regulation’ to the field of ‘algorithmic governance’.

  5. A genetic algorithm approach for open-pit mine production scheduling

    Directory of Open Access Journals (Sweden)

    Aref Alipour

    2017-06-01

    Full Text Available In an Open-Pit Production Scheduling (OPPS) problem, the goal is to determine the mining sequence of an orebody as a block model. In this article, a linear programming formulation is used for this purpose. The OPPS problem is known to be NP-hard, so an exact mathematical model cannot be applied to solve real-sized instances. The Genetic Algorithm (GA) is a well-known member of the evolutionary algorithms that are widely utilized to solve NP-hard problems. Herein, GA is implemented for a hypothetical two-dimensional (2D) copper orebody model. The orebody is represented as a 2D array of blocks. Likewise, a counterpart 2D GA array was used to represent the OPPS problem’s solution space. Thereupon, the fitness function is defined according to the OPPS problem’s objective function to assess the solution domain. Also, a new normalization method was used to handle the block sequencing constraint. A numerical study is performed to compare the solutions of the exact and GA-based methods. It is shown that the gap between GA and the optimal solution obtained by the exact method is less than 5%; hence, GA is found to be efficient in solving the OPPS problem.

  6. Algorithmic Approach With Clinical Pathology Consultation Improves Access to Specialty Care for Patients With Systemic Lupus Erythematosus.

    Science.gov (United States)

    Chen, Lei; Welsh, Kerry J; Chang, Brian; Kidd, Laura; Kott, Marylee; Zare, Mohammad; Carroll, Kelley; Nguyen, Andy; Wahed, Amer; Tholpady, Ashok; Pung, Norin; McKee, Donna; Risin, Semyon A; Hunter, Robert L

    2016-09-01

    Harris Health System (HHS) is a safety net system providing health care to the underserved of Harris County, Texas. There was a 6-month waiting period for a rheumatologist consult for patients with suspected systemic lupus erythematosus (SLE). The objective of the intervention was to improve access to specialty care. An algorithmic approach to testing for SLE was implemented initially through the HHS referral center. The algorithm was further offered as a "one-click" order for physicians, with automated reflex testing, interpretation, and case triaging by clinical pathology. Data review revealed that prior to the intervention, 80% of patients did not have complete laboratory workups available at the first rheumatology visit. Implementation of algorithmic testing and triaging of referrals by pathologists resulted in decreasing the waiting time for a rheumatologist by 50%. Clinical pathology intervention and case triaging can improve access to care in a county health care system. © American Society for Clinical Pathology, 2016. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  7. A New RTL Design Approach for a DCT/IDCT-Based Image Compression Architecture using the mCBE Algorithm

    Directory of Open Access Journals (Sweden)

    Rachmad Vidya Wicaksana Putra

    2013-09-01

    Full Text Available In the literature, several approaches to designing a DCT/IDCT-based image compression system have been proposed. In this paper, we present a new RTL design approach whose main focus is developing a DCT/IDCT-based image compression architecture using a self-created algorithm. This algorithm can efficiently minimize the amount of shifter-adders needed to substitute multipliers. We call this new algorithm the multiplication from Common Binary Expression (mCBE) algorithm. Besides this algorithm, we propose alternative quantization numbers, which can be implemented simply as shifters in digital hardware. Mostly, these numbers can retain a good compressed-image quality compared to JPEG recommendations. These ideas lead to our design being small in circuit area, multiplierless, and low in complexity. The proposed 8-point 1D-DCT design has only six stages, while the 8-point 1D-IDCT design has only seven stages (one stage being defined as equal to the delay of one shifter or 2-input adder). By using the pipelining method, we can achieve a high-speed architecture with latency as a trade-off consideration. The design has been synthesized and can reach a speed of up to 1.41 ns critical path delay (709.22 MHz).
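
    An illustrative example (not the authors' expressions) of replacing a constant multiplier with shifter-adders, which is the operation the mCBE algorithm seeks to minimize; the decompositions shown are generic textbook ones.

        def mul_const_shift_add(x, shifts_add, shifts_sub=()):
            """Assemble a constant multiple of an integer x from shifted copies
            that are added or subtracted, e.g. 7*x = (x << 3) - x."""
            return (sum(x << s for s in shifts_add)
                    - sum(x << s for s in shifts_sub))

        # 7*x realized with one shifter and one subtractor: (x << 3) - (x << 0)
        y = mul_const_shift_add(5, shifts_add=(3,), shifts_sub=(0,))   # 5 * 7 = 35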

  8. Evaluating progressive-rendering algorithms in appearance design tasks.

    Science.gov (United States)

    Jiawei Ou; Karlik, Ondrej; Křivánek, Jaroslav; Pellacini, Fabio

    2013-01-01

    Progressive rendering is becoming a popular alternative to precomputational approaches to appearance design. However, progressive algorithms create images exhibiting visual artifacts at early stages. A user study investigated these artifacts' effects on user performance in appearance design tasks. Novice and expert subjects performed lighting and material editing tasks with four algorithms: random path tracing, quasirandom path tracing, progressive photon mapping, and virtual-point-light rendering. Both the novices and experts strongly preferred path tracing to progressive photon mapping and virtual-point-light rendering. None of the participants preferred random path tracing to quasirandom path tracing or vice versa; the same situation held between progressive photon mapping and virtual-point-light rendering. The user workflow didn’t differ significantly with the four algorithms. The Web Extras include a video showing how four progressive-rendering algorithms converged (at http://youtu.be/ck-Gevl1e9s), the source code used, and other supplementary materials.

  9. Expert status and performance.

    Directory of Open Access Journals (Sweden)

    Mark A Burgman

    Full Text Available Expert judgements are essential when time and resources are stretched or we face novel dilemmas requiring fast solutions. Good advice can save lives and large sums of money. Typically, experts are defined by their qualifications, track record and experience. The social expectation hypothesis argues that more highly regarded and more experienced experts will give better advice. We asked experts to predict how they will perform, and how their peers will perform, on sets of questions. The results indicate that the way experts regard each other is consistent, but unfortunately, ranks are a poor guide to actual performance. Expert advice will be more accurate if technical decisions routinely use broadly-defined expert groups, structured question protocols and feedback.

  10. An expert panel approach to assessing potential effects of bull trout reintroduction on federally listed salmonids in the Clackamas River, Oregon

    Science.gov (United States)

    Bruce G. Marcot; Chris S. Allen; Steve Morey; Dan Shively; Rollie. White

    2012-01-01

    The bull trout Salvelinus confluentus is an apex predator in native fish communities in the western USA and is listed as threatened under the U.S. Endangered Species Act (ESA). Restoration of this species has raised concerns over its potential predatory impacts on native fish fauna. We held a five-person expert panel to help determine potential...

  11. Algorithms for Design of Continuum Robots Using the Concentric Tubes Approach: A Neurosurgical Example.

    Science.gov (United States)

    Anor, Tomer; Madsen, Joseph R; Dupont, Pierre

    2011-05-09

    We propose a novel systematic approach to optimizing the design of concentric tube robots for neurosurgical procedures. These procedures require that the robot approach specified target sites while navigating and operating within an anatomically constrained work space. The availability of preoperative imaging makes our approach particularly suited for neurosurgery, and we illustrate the method with the example of endoscopic choroid plexus ablation. A novel parameterization of the robot characteristics is used in conjunction with a global pattern search optimization method. The formulation returns the design of the least-complex robot capable of reaching single or multiple target points in a confined space with constrained optimization metrics. A particular advantage of this approach is that it identifies the need for either fixed-curvature versus variable-curvature sections. We demonstrate the performance of the method in four clinically relevant examples.

  12. Soft-tissue injuries of the fingertip: methods of evaluation and treatment. An algorithmic approach.

    Science.gov (United States)

    Lemmon, Joshua A; Janis, Jeffrey E; Rohrich, Rod J

    2008-09-01

    After studying this article, the participant should be able to: 1. Understand the anatomy of the fingertip. 2. Describe the methods of evaluating fingertip injuries. 3. Discuss reconstructive options for various tip injuries. The fingertip is the most commonly injured part of the hand, and therefore fingertip injuries are among the most frequent injuries that plastic surgeons are asked to treat. Although microsurgical techniques have enabled replantation of even very distal tip amputations, it is relatively uncommon that a distal tip injury will be appropriate for replantation. In the event that replantation is not pursued, options for distal tip soft-tissue reconstruction must be considered. This review presents a straightforward method for evaluating fingertip injuries and provides an algorithm for fingertip reconstruction.

  13. Residential-commercial energy input estimation based on genetic algorithm (GA) approaches: an application of Turkey

    International Nuclear Information System (INIS)

    Ozturk, H.K.; Canyurt, O.E.; Hepbasli, A.; Utlu, Z.

    2004-01-01

    The main objective of the present study is to develop energy input estimation equations for the residential-commercial sector (RCS) in order to estimate future projections based on the genetic algorithm (GA) notion and to examine the effect of the design parameters on the energy input of the sector. For this purpose, the Turkish RCS is given as an example. The GA Energy Input Estimation Model (GAEIEM) is used to estimate Turkey's future residential-commercial energy input demand based on gross domestic product (GDP), population, import, export, house production, cement production and basic house appliances consumption figures. It may be concluded that the three model forms proposed here can be used as alternative solution and estimation techniques to those currently available. It is also expected that this study will be helpful in developing highly applicable and productive planning for energy policies. (author)

  14. Vehicle recognition and tracking using a generic multi-sensor and multi-algorithm fusion approach

    OpenAIRE

    Nashashibi , Fawzi; Khammari , Ayoub; Laurgeau , Claude

    2008-01-01

    International audience; This paper tackles the problem of improving the robustness of vehicle detection for Adaptive Cruise Control (ACC) applications. Our approach is based on multi-sensor and multi-algorithm data fusion for vehicle detection and recognition. Our architecture combines two sensors: a frontal camera and a laser scanner. The improvement of the robustness stems from two aspects. First, we addressed the vision-based detection by developing an original approach based on fine gr...

  15. A non-algorithmic approach to the In-core-fuel management problem of a PWR core

    International Nuclear Information System (INIS)

    Kimhy, Y.

    1992-03-01

    The primary objective of commercial nuclear power plant operation is to produce electricity at low cost while satisfying the safety constraints imposed on the operating conditions. Design of a fuel reload cycle for the current generation of nuclear power plants represents a multistage process with a series of design decisions taken at various time points. Of these stages, reload core design is an important stage, due to its impact on safety and economic plant performance parameters. Overall performance of the plant during the power production cycle depends on the chosen fresh fuel parameters, as well as the specific fuel configuration of the reactor core. The motivation to computerize the generation and optimization of fuel reload configurations follows from several reasons: first, reload is performed periodically and requires manipulation of a large amount of data; second, in recent years, more complicated fuel loading patterns were developed and implemented following changes in fuel design and/or operational requirements, such as longer cycles, advanced burnable poison designs, low leakage loading patterns and reduction of irradiation-induced damage of the pressure vessel. An algorithmic approach to the problem was generally adopted. The nature of the reload design process is a 'heuristic' search performed manually by a fuel manager. The knowledge used by the fuel manager is mostly accumulated experience in reactor physics and core calculations. These features of the problem and the inherent disadvantages of the algorithmic method are the main reasons to explore a non-algorithmic approach for solving the reload configuration problem. Several features of the 'solution space' (a collection of acceptable final configurations) are emphasized in this work: 1) the space contains a very large number of entities (> 25) that are distributed inhomogeneously, 2) the lack of a monotonic objective function decreases the probability of finding an isolated optimum configuration by depth first search or

  16. A Phenomenology of Expert Musicianship

    DEFF Research Database (Denmark)

    Høffding, Simon

    This dissertation develops a phenomenology of expert musicianship through an interdisciplinary approach that integrates qualitative interviews with the Danish String Quartet with philosophical analyses drawing on ideas and theses found in phenomenology, philosophy of mind, cognitive science and psychology of music. The dissertation is structured through the asking, analyzing and answering of three primary questions, namely: 1) What is it like to be an expert? 2) What is the general phenomenology of expert musicianship? 3) What happens to the self in deep musical absorption? The first question targets a central debate in philosophy and psychology on whether reflection is conducive for, or detrimental to, skillful performance. My analyses show that the concepts assumed in the literature on this question are poorly defined and gloss over more important features of expertise. The second question...

  17. Expert system application education project

    Science.gov (United States)

    Gonzelez, Avelino J.; Ragusa, James M.

    1988-01-01

    Artificial intelligence (AI) technology, and in particular expert systems, has shown potential applicability in many areas of operation at the Kennedy Space Center (KSC). In an era of limited resources, the early identification of good expert system applications, and their segregation from inappropriate ones, can result in a more efficient use of available NASA resources. On the other hand, the education of students in a highly technical area such as AI requires an extensive hands-on effort. The nature of expert systems is such that proper sample applications for the educational process are difficult to find. A pilot project between NASA-KSC and the University of Central Florida was designed to simultaneously address the needs of both institutions at a minimum cost. This project, referred to as the Expert Systems Prototype Training Project (ESPTP), provided NASA with relatively inexpensive development of initial prototype versions of certain applications. University students likewise benefit by having expertise on a non-trivial problem accessible to them at no cost. Such expertise is indispensable in a hands-on training approach to developing expert systems.

  18. Expert systems: an alternative paradigm

    Energy Technology Data Exchange (ETDEWEB)

    Coombs, M.; Alty, J.

    1984-01-01

    There has recently been a significant effort by the AI community to interest industry in the potential of expert systems. However, this has resulted in far fewer substantial applications projects than might be expected. This article argues that this is because human experts are rarely required to perform the role that computer-based experts are programmed to adopt. Instead of being called in to answer well-defined problems, they are more often asked to assist other experts to extend and refine their understanding of a problem area at the junction of their two domains of knowledge. This more properly involves educational rather than problem-solving skills. An alternative approach to expert system design is proposed based upon guided discovery learning. The user is provided with a supportive environment for a particular class of problem, the system predominantly acting as an adviser rather than directing the interaction. The environment includes a database of domain knowledge, a set of procedures for its application to a concrete problem, and an intelligent machine-based adviser to judge the user's effectiveness and advise on strategy. The procedures focus upon the use of user generated explanations both to promote the application of domain knowledge and to expose understanding difficulties. Simple database PROLOG is being used as the subject material for the prototype system which is known as MINDPAD. 30 references.

  19. Evolving attractive faces using morphing technology and a genetic algorithm: a new approach to determining ideal facial aesthetics.

    Science.gov (United States)

    Wong, Brian J F; Karimi, Koohyar; Devcic, Zlatko; McLaren, Christine E; Chen, Wen-Pin

    2008-06-01

    The objectives of this study were to: 1) determine if a genetic algorithm in combination with morphing software can be used to evolve more attractive faces; and 2) evaluate whether this approach can be used as a tool to define or identify the attributes of the ideal attractive face. Basic research study incorporating focus group evaluations. Digital images were acquired of 250 female volunteers (18-25 y). Randomly selected images were used to produce a parent generation (P) of 30 synthetic faces using morphing software. Then, a focus group of 17 trained volunteers (18-25 y) scored each face on an attractiveness scale ranging from 1 (unattractive) to 10 (attractive). A genetic algorithm was used to select 30 new pairs from the parent generation, and these were morphed using software to produce a new first generation (F1) of faces. The F1 faces were scored by the focus group, and the process was repeated for a total of four iterations of the algorithm. The algorithm mimics natural selection by using the attractiveness score as the selection pressure; the more attractive faces are more likely to morph. All five generations (P-F4) were then scored by three focus groups: a) surgeons (n = 12), b) cosmetology students (n = 44), and c) undergraduate students (n = 44). Morphometric measurements were made of 33 specific features on each of the 150 synthetic faces, and correlated with attractiveness scores using univariate and multivariate analysis. The average facial attractiveness scores increased with each generation and were 3.66 (+/-0.60), 4.59 (+/-0.73), 5.50 (+/-0.62), 6.23 (+/-0.31), and 6.39 (+/-0.24) for the P and F1-F4 generations, respectively. Histograms of attractiveness score distributions show a significant shift in the skew of each curve toward more attractive faces with each generation. Univariate analysis identified nasal width, eyebrow arch height, and lip thickness as being significantly correlated with attractiveness scores. Multivariate analysis identified a

  20. How Can Big Data Complement Expert Analysis? A Value Chain Case Study

    Directory of Open Access Journals (Sweden)

    Kyungtae Kim

    2018-03-01

    Full Text Available In the world of big data, there is a need to investigate how data-driven approaches can support expert-based analyses during a technology planning process. To meet this goal, we examined opportunities and challenges for big data analytics in the social sciences, particularly with respect to value chain analysis. To accomplish this, we designed a value chain mapping experiment that aimed to compare the results of expert-based and data-based mappings. In the expert-based approach, we asked an industry expert to visually depict an industry value chain based on insights and collected data. We also reviewed a previously published value chain developed by a panel of industry experts during a national technology planning process. In the data-driven analysis, we used a massive number of business transaction records between companies under the assumption that the data would be useful in identifying relationships between items in a value chain. The case study results demonstrated that data-driven analysis can help researchers understand the current status of industry structures, enabling them to develop more realistic, although less flexible value chain maps. This approach is expected to provide more value when used in combination with other databases. It is important to note that significant effort is required to develop an elaborate analysis algorithm, and data preprocessing is essential for obtaining meaningful results, both of which make this approach challenging. Experts’ insights are still helpful for validating the analytic results in value chain mapping.

  1. Morphology combined with ancillary techniques: An algorithm approach for thyroid nodules.

    Science.gov (United States)

    Rossi, E D; Martini, M; Capodimonti, S; Cenci, T; Bilotta, M; Pierconti, F; Pontecorvi, A; Lombardi, C P; Fadda, G; Larocca, L M

    2018-04-23

    Several authors have underlined the limits of morphological analysis, mostly in the diagnosis of follicular neoplasms (FN). The application of ancillary techniques, including immunocytochemistry (ICC) and molecular testing, contributes to a better definition of the risk of malignancy (ROM) and management of FN. According to the literature, the application of models including the evaluation of ICC, somatic mutations (i.e., BRAF V600E) and microRNA analysis is proposed for FNs. This study discusses the validation of a diagnostic algorithm in FN, with a special focus on the role of morphology followed by ancillary techniques. From June 2014 to January 2016, we enrolled 37 FNs with histological follow-up. In the same reference period, 20 benign nodules and 20 positive for malignancy were selected as controls. ICC, BRAF V600E mutation and miR-375 analyses were carried out on LBC. The 37 FNs included 14 atypia of undetermined significance/follicular lesion of undetermined significance and 23 FN. Specifically, atypia of undetermined significance/follicular lesion of undetermined significance resulted in three goitres, 10 follicular adenomas and one NIFTP, whereas FN/suspicious for FN comprised seven follicular adenomas and 16 malignancies (nine non-invasive follicular thyroid neoplasms with papillary-like nuclear features, two invasive follicular variant of papillary thyroid carcinoma [PTC] and five PTC). The 20 positive for malignancy samples included two invasive follicular variant of PTC, 16 PTCs and two medullary carcinomas. The morphological features of BRAF V600E mutation (nuclear features of PTC and moderate/abundant eosinophilic cytoplasms) were associated with 100% ROM. In the wild type cases, ROM was 83.3% in presence of a concordant positive ICC panel whilst significantly lower (10.5%) in a negative concordant ICC. High expression values of miR-375 provided 100% ROM. The adoption of an algorithm might represent the best choice for the correct diagnosis of FNs. The morphological

  2. A novel multi-model probability battery state of charge estimation approach for electric vehicles using H-infinity algorithm

    International Nuclear Information System (INIS)

    Lin, Cheng; Mu, Hao; Xiong, Rui; Shen, Weixiang

    2016-01-01

    Highlights: • A novel multi-model probability battery SOC fusion estimation approach was proposed. • The linear matrix inequality-based H∞ technique is employed to estimate the SOC. • The Bayes theorem has been employed to realize the optimal weight for the fusion. • The robustness of the proposed approach is verified by different batteries. • The results show that the proposed method can promote global estimation accuracy. - Abstract: Due to the strong nonlinearity and complex time-variant property of batteries, the existing state of charge (SOC) estimation approaches based on a single equivalent circuit model (ECM) cannot provide the accurate SOC for the entire discharging period. This paper aims to present a novel SOC estimation approach based on a multiple ECMs fusion method for improving the practical application performance. In the proposed approach, three battery ECMs, namely the Thevenin model, the double polarization model and the 3rd order RC model, are selected to describe the dynamic voltage of lithium-ion batteries and the genetic algorithm is then used to determine the model parameters. The linear matrix inequality-based H-infinity technique is employed to estimate the SOC from the three models and the Bayes theorem-based probability method is employed to determine the optimal weights for synthesizing the SOCs estimated from the three models. Two types of lithium-ion batteries are used to verify the feasibility and robustness of the proposed approach. The results indicate that the proposed approach can improve the accuracy and reliability of the SOC estimation against uncertain battery materials and inaccurate initial states.
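
    No equations are reproduced in the record; the following is a minimal sketch of the fusion step only — combining per-model SOC estimates with Bayes-style weights derived from how well each model predicts the measured terminal voltage. The Gaussian likelihood, the noise standard deviation, and the model outputs are assumptions for illustration, not the paper's H-infinity formulation.

```python
import math

def fuse_soc(estimates, predicted_v, measured_v, prior, sigma=0.01):
    """estimates:   {model_name: SOC estimate from that model's observer}
       predicted_v: {model_name: terminal voltage predicted by that model}
       prior:       {model_name: prior model probability}
       Returns (fused SOC, posterior model probabilities)."""
    # Gaussian likelihood of the voltage measurement under each model (assumption)
    like = {m: math.exp(-0.5 * ((measured_v - predicted_v[m]) / sigma) ** 2)
            for m in estimates}
    evidence = sum(prior[m] * like[m] for m in estimates)
    post = {m: prior[m] * like[m] / evidence for m in estimates}       # Bayes theorem
    fused = sum(post[m] * estimates[m] for m in estimates)             # probability-weighted SOC
    return fused, post

# illustrative call with three hypothetical equivalent circuit models
estimates   = {"thevenin": 0.62, "double_polarization": 0.60, "rc3": 0.61}
predicted_v = {"thevenin": 3.71, "double_polarization": 3.69, "rc3": 3.70}
prior       = {m: 1 / 3 for m in estimates}
soc, post = fuse_soc(estimates, predicted_v, measured_v=3.695, prior=prior)
print(round(soc, 4), {m: round(p, 3) for m, p in post.items()})
```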

  3. An efficient algorithmic approach for mass spectrometry-based disulfide connectivity determination using multi-ion analysis

    Directory of Open Access Journals (Sweden)

    Yen Ten-Yang

    2011-02-01

    Full Text Available Abstract Background Determining the disulfide (S-S) bond pattern in a protein is often crucial for understanding its structure and function. In recent research, mass spectrometry (MS)-based analysis has been applied to this problem following protein digestion under both partial reduction and non-reduction conditions. However, this paradigm still awaits solutions to certain algorithmic problems, fundamental amongst which is the efficient matching of an exponentially growing set of putative S-S bonded structural alternatives to the large amounts of experimental spectrometric data. Current methods circumvent this challenge primarily through simplifications, such as by assuming only the occurrence of certain ion types (b-ions and y-ions) that predominate in the more popular dissociation methods, such as collision-induced dissociation (CID). Unfortunately, this can adversely impact the quality of results. Method We present an algorithmic approach to this problem that can, with high computational efficiency, analyze multiple ion types (a, b, bo, b*, c, x, y, yo, y*, and z) and deal with complex bonding topologies, such as inter/intra bonding involving more than two peptides. The proposed approach combines an approximation algorithm-based search formulation with data driven parameter estimation. This formulation considers only those regions of the search space where the correct solution resides with a high likelihood. Putative disulfide bonds thus obtained are finally combined in a globally consistent pattern to yield the overall disulfide bonding topology of the molecule. Additionally, each bond is associated with a confidence score, which aids in interpretation and assimilation of the results. Results The method was tested on nine different eukaryotic Glycosyltransferases possessing disulfide bonding topologies of varying complexity. Its performance was found to be characterized by high efficiency (in terms of time and the fraction of search space

  4. A Parametric Genetic Algorithm Approach to Assess Complementary Options of Large Scale Wind-solar Coupling

    Institute of Scientific and Technical Information of China (English)

    Tim Mareda; Ludovic Gaudard; Franco Romerio

    2017-01-01

    The transitional path towards a highly renewable power system based on wind and solar energy sources is investigated considering their intermittent and spatially distributed characteristics. Using an extensive weather-driven simulation of hourly power mismatches between generation and load, we explore the interplay between geographical resource complementarity and energy storage strategies. Solar and wind resources are considered at variable spatial scales across Europe and related to the Swiss load curve, which serves as a typical demand side reference. The optimal spatial distribution of renewable units is further assessed through a parameterized optimization method based on a genetic algorithm. It allows us to explore systematically the effective potential of combined integration strategies depending on the sizing of the system, with a focus on how overall performance is affected by the definition of network boundaries. Upper bounds on integration schemes are provided considering both renewable penetration and needed reserve power capacity. The quantitative trade-off between grid extension, storage and optimal wind-solar mix is highlighted. This paper also brings insights on how the optimal geographical distribution of renewable units evolves as a function of renewable penetration and grid extent.

  5. A novel approach of battery pack state of health estimation using artificial intelligence optimization algorithm

    Science.gov (United States)

    Zhang, Xu; Wang, Yujie; Liu, Chang; Chen, Zonghai

    2018-02-01

    An accurate battery pack state of health (SOH) estimation is important to characterize the dynamic responses of a battery pack and ensure that the battery works with safety and reliability. However, the differences in discharge/charge characteristics and working conditions among the cells of a battery pack make battery pack SOH estimation difficult. In this paper, the battery pack SOH is defined as the change of battery pack maximum energy storage. It contains all the cells' information, including battery capacity, the relationship between state of charge (SOC) and open circuit voltage (OCV), and battery inconsistency. To predict the battery pack SOH, a particle swarm optimization-genetic algorithm method is applied to battery pack model parameter identification. Based on the results, a particle filter is employed in battery SOC and OCV estimation to mitigate the influence of noise in battery terminal voltage measurement and current drift. Moreover, a recursive least squares method is used to update the cells' capacity. Finally, the proposed method is verified with the profiles of the New European Driving Cycle and dynamic test profiles. The experimental results indicate that the proposed method can estimate the battery states with high accuracy for actual operation. In addition, the factors affecting the change of SOH are analyzed.
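
    The record gives no implementation details; below is a minimal particle swarm optimization sketch for fitting model parameters by minimizing a voltage-error cost. The cost function, parameter bounds, and PSO constants are placeholders, not the paper's hybrid PSO-GA procedure.

```python
import numpy as np

def pso_minimize(cost, bounds, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5):
    """Generic PSO: returns the best parameter vector found for the given cost."""
    dim = len(bounds)
    lo = np.array([b[0] for b in bounds]); hi = np.array([b[1] for b in bounds])
    x = lo + np.random.rand(n_particles, dim) * (hi - lo)      # particle positions
    v = np.zeros_like(x)                                        # particle velocities
    pbest = x.copy(); pbest_cost = np.array([cost(p) for p in x])
    gbest = pbest[pbest_cost.argmin()].copy()
    for _ in range(iters):
        r1, r2 = np.random.rand(n_particles, dim), np.random.rand(n_particles, dim)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        c = np.array([cost(p) for p in x])
        improved = c < pbest_cost
        pbest[improved], pbest_cost[improved] = x[improved], c[improved]
        gbest = pbest[pbest_cost.argmin()].copy()
    return gbest

# illustrative cost: fit R0 and OCV of a trivial model V = OCV - R0*I to fake data
I = np.array([1.0, 2.0, 5.0]); V_meas = np.array([3.60, 3.55, 3.40])
cost = lambda p: float(np.sum((V_meas - (p[1] - p[0] * I)) ** 2))
print(pso_minimize(cost, bounds=[(0.0, 0.2), (3.0, 4.2)]))
```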

  6. Field tests and machine learning approaches for refining algorithms and correlations of driver's model parameters.

    Science.gov (United States)

    Tango, Fabio; Minin, Luca; Tesauri, Francesco; Montanari, Roberto

    2010-03-01

    This paper describes the field tests on a driving simulator carried out to validate the algorithms and the correlations of dynamic parameters, specifically driving task demand and drivers' distraction, able to predict drivers' intentions. These parameters belong to the driver's model developed by AIDE (Adaptive Integrated Driver-vehicle InterfacE) European Integrated Project. Drivers' behavioural data have been collected from the simulator tests to model and validate these parameters using machine learning techniques, specifically the adaptive neuro fuzzy inference systems (ANFIS) and the artificial neural network (ANN). Two models of task demand and distraction have been developed, one for each adopted technique. The paper provides an overview of the driver's model, the description of the task demand and distraction modelling and the tests conducted for the validation of these parameters. A test comparing predicted and expected outcomes of the modelled parameters for each machine learning technique has been carried out: for distraction, in particular, promising results (low prediction errors) have been obtained by adopting an artificial neural network.

  7. DESIGN AND OPTIMIZATION OF VALVELESS MICROPUMPS BY USING GENETIC ALGORITHMS APPROACH

    Directory of Open Access Journals (Sweden)

    AIDA F. M. SHUKUR

    2015-10-01

    Full Text Available This paper presents a design optimization of a valveless micropump using Genetic Algorithms (GA). The micropump is designed with a diaphragm, a pumping chamber and diffuser/nozzle elements that function as the inlet and outlet of the micropump, with an outer dimension of 5 × 1.75 × 5 mm³. The main objectives of this research are to determine the optimum pressure to be applied at the micropump's diaphragm and to find the optimum coupling parameters of the micropump to achieve high flow rate with low power consumption. In order to determine the micropump design performance, the total deformation, strain energy density, equivalent stress for the diaphragm, velocity and net flow rate of the micropump are investigated. An optimal resonant frequency range for the diaphragm of the valveless micropump is obtained through the result assessment. With the development of the GA-ANSYS model, a maximum total displacement of the diaphragm of 5.3635 µm with 12 kPa actuation pressure and an optimum net flow rate of 7.467 mL/min are achieved.

  8. The application of adaptive Luenberger observer concept in chemical process control: An algorithmic approach

    Science.gov (United States)

    Doko, Marthen Luther

    2017-05-01

    A wide class of on-line parameter estimation schemes can be developed for estimating the unknown parameter vector that appears in certain general linear and bilinear parametric models; these models are parametrizations of LTI processes or plants as well as of some special classes of nonlinear processes or plants. The results are used to design one of the important tools in control, the adaptive observer, for stable LTI processes or plants. In this paper we consider the design of schemes that simultaneously estimate the plant state variables and parameters by processing the plant I/O measurements on-line; such schemes are referred to as adaptive observers. The design of an adaptive observer is based on the combination of a state observer, which could be used to estimate the state variables of a particular plant state-space representation, with an on-line estimation scheme. The choice of the plant state-space representation is crucial for the design and stability analysis of the adaptive observer. The paper discusses a class of observers called the adaptive Luenberger observer and its application. Beginning with the observable canonical form, one can find an observability matrix of n linearly independent rows. Using this fact, or a linear combination of these rows chosen as a basis, various canonical forms, also known as Luenberger canonical forms, can be obtained. This formulation also leads to various algorithms for computation, including computation of the observable canonical form, the observable Hessenberg form and reduced-order state observer design.
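
    As a generic illustration of the observer structure the record describes (not the paper's adaptive scheme), here is a discrete-time Luenberger observer sketch: the state estimate is propagated with the plant model and corrected by the output error through a gain L. The system matrices and the gain below are arbitrary illustrative values; in practice L would be designed, e.g. by pole placement.

```python
import numpy as np

# illustrative discrete-time plant  x[k+1] = A x[k] + B u[k],  y[k] = C x[k]
A = np.array([[1.0, 0.1],
              [0.0, 0.9]])
B = np.array([[0.0],
              [0.1]])
C = np.array([[1.0, 0.0]])
L = np.array([[0.5],
              [0.8]])          # observer gain (illustrative; makes A - L*C stable here)

def observer_step(x_hat, u, y):
    """One Luenberger update: predict with the model, correct with the output error."""
    y_hat = C @ x_hat
    return A @ x_hat + B * u + L @ (y - y_hat)

# simulate plant and observer from different initial states
x = np.array([[1.0], [0.0]])
x_hat = np.zeros((2, 1))
for k in range(50):
    u = 1.0
    y = C @ x                      # measurement from the true plant
    x_hat = observer_step(x_hat, u, y)
    x = A @ x + B * u
print("final estimation error:", np.linalg.norm(x - x_hat))
```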

  9. Nature-inspired optimization algorithms

    CERN Document Server

    Yang, Xin-She

    2014-01-01

    Nature-Inspired Optimization Algorithms provides a systematic introduction to all major nature-inspired algorithms for optimization. The book's unified approach, balancing algorithm introduction, theoretical background and practical implementation, complements extensive literature with well-chosen case studies to illustrate how these algorithms work. Topics include particle swarm optimization, ant and bee algorithms, simulated annealing, cuckoo search, firefly algorithm, bat algorithm, flower algorithm, harmony search, algorithm analysis, constraint handling, hybrid methods, parameter tuning

  10. Simulation of dose deposition in stereotactic synchrotron radiation therapy: a fast approach combining Monte Carlo and deterministic algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Smekens, F; Freud, N; Letang, J M; Babot, D [CNDRI (Nondestructive Testing using Ionizing Radiations) Laboratory, INSA-Lyon, 69621 Villeurbanne Cedex (France); Adam, J-F; Elleaume, H; Esteve, F [INSERM U-836, Equipe 6 ' Rayonnement Synchrotron et Recherche Medicale' , Institut des Neurosciences de Grenoble (France); Ferrero, C; Bravin, A [European Synchrotron Radiation Facility, Grenoble (France)], E-mail: francois.smekens@insa-lyon.fr

    2009-08-07

    A hybrid approach, combining deterministic and Monte Carlo (MC) calculations, is proposed to compute the distribution of dose deposited during stereotactic synchrotron radiation therapy treatment. The proposed approach divides the computation into two parts: (i) the dose deposited by primary radiation (coming directly from the incident x-ray beam) is calculated in a deterministic way using ray casting techniques and energy-absorption coefficient tables and (ii) the dose deposited by secondary radiation (Rayleigh and Compton scattering, fluorescence) is computed using a hybrid algorithm combining MC and deterministic calculations. In the MC part, a small number of particle histories are simulated. Every time a scattering or fluorescence event takes place, a splitting mechanism is applied, so that multiple secondary photons are generated with a reduced weight. The secondary events are further processed in a deterministic way, using ray casting techniques. The whole simulation, carried out within the framework of the Monte Carlo code Geant4, is shown to converge towards the same results as the full MC simulation. The speed of convergence is found to depend notably on the splitting multiplicity, which can easily be optimized. To assess the performance of the proposed algorithm, we compare it to state-of-the-art MC simulations, accelerated by the track length estimator technique (TLE), considering a clinically realistic test case. It is found that the hybrid approach is significantly faster than the MC/TLE method. The gain in speed in a test case was about 25 for a constant precision. Therefore, this method appears to be suitable for treatment planning applications.

  11. A GENETIC ALGORITHM USING THE LOCAL SEARCH HEURISTIC IN FACILITIES LAYOUT PROBLEM: A MEMETIC ALGORITHM APPROACH

    Directory of Open Access Journals (Sweden)

    Orhan TÜRKBEY

    2002-02-01

    Full Text Available Memetic algorithms, which use local search techniques, are hybrid-structured algorithms, like genetic algorithms, within the family of evolutionary algorithms. In this study, a memetic-structured algorithm using a local search heuristic such as 2-opt is developed for the Quadratic Assignment Problem (QAP). In the algorithm, a crossover operator that has not been used before for the QAP is applied, and the Eshelman procedure is used in order to increase the solution variability. The developed memetic algorithm is applied to test problems taken from QAP-LIB, and the results are compared with existing techniques in the literature.
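
    For illustration only, here is a sketch of the local-search component of a memetic scheme for the QAP: the QAP cost of a permutation and a pairwise-swap improvement pass (often loosely called 2-opt in this context). The flow/distance matrices are random placeholders, and the GA part (crossover, Eshelman-style restarts) is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8
F = rng.integers(0, 10, (n, n))   # flow between facilities (placeholder data)
D = rng.integers(0, 10, (n, n))   # distance between locations (placeholder data)

def qap_cost(perm):
    """perm[i] = location assigned to facility i."""
    return int(sum(F[i, j] * D[perm[i], perm[j]] for i in range(n) for j in range(n)))

def swap_local_search(perm):
    """Repeatedly apply any improving pairwise swap until none remains (first-improvement)."""
    perm = list(perm)
    best = qap_cost(perm)
    improved = True
    while improved:
        improved = False
        for i in range(n - 1):
            for j in range(i + 1, n):
                perm[i], perm[j] = perm[j], perm[i]
                c = qap_cost(perm)
                if c < best:
                    best, improved = c, True
                else:
                    perm[i], perm[j] = perm[j], perm[i]   # undo non-improving swap
    return perm, best

start = list(rng.permutation(n))
print(swap_local_search(start))
```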

  12. Adaptive MANET Multipath Routing Algorithm Based on the Simulated Annealing Approach

    Directory of Open Access Journals (Sweden)

    Sungwook Kim

    2014-01-01

    Full Text Available A mobile ad hoc network represents a system of wireless mobile nodes that can freely and dynamically self-organize network topologies without any preexisting communication infrastructure. Due to characteristics like temporary topology and absence of centralized authority, routing is one of the major issues in ad hoc networks. In this paper, a new multipath routing scheme is proposed by employing a simulated annealing approach. The proposed metaheuristic approach can achieve greater and reciprocal advantages in a hostile, dynamic, real-world network situation. Therefore, the proposed routing scheme is a powerful method for finding an effective solution to the mobile ad hoc network routing problem. Simulation results indicate that the proposed paradigm adapts best to the variation of dynamic network situations. The average remaining energy, network throughput, packet loss probability, and traffic load distribution are improved by about 10%, 10%, 5%, and 10%, respectively, compared with the existing schemes.
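
    The record does not give the annealing schedule; as a generic sketch of the simulated annealing core used in such schemes, here is a minimal loop with the Metropolis acceptance rule. The route cost, the neighbour move, and the cooling constants are placeholders, not the authors' routing metric.

```python
import math
import random

def simulated_annealing(initial, cost, neighbour, t0=1.0, alpha=0.995, iters=5000):
    """Generic SA: accept worse solutions with probability exp(-delta/T) (Metropolis rule)."""
    current, current_cost = initial, cost(initial)
    best, best_cost = current, current_cost
    t = t0
    for _ in range(iters):
        cand = neighbour(current)
        delta = cost(cand) - current_cost
        if delta < 0 or random.random() < math.exp(-delta / t):
            current, current_cost = cand, cost(cand)
            if current_cost < best_cost:
                best, best_cost = current, current_cost
        t *= alpha                      # geometric cooling schedule
    return best, best_cost

# toy usage: reorder a "route" of node ids to minimize a placeholder cost
nodes = list(range(10))
cost = lambda r: sum(abs(r[i] - r[i + 1]) for i in range(len(r) - 1))  # placeholder route cost
def neighbour(r):
    r = r[:]; i, j = random.sample(range(len(r)), 2)
    r[i], r[j] = r[j], r[i]
    return r
print(simulated_annealing(nodes, cost, neighbour))
```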

  13. Faults detection approach using PCA and SOM algorithm in PMSG-WT system

    Directory of Open Access Journals (Sweden)

    Mohamed Lamine FADDA

    2016-07-01

    Full Text Available In this paper, a new approach for fault detection in an observable-data system, the wind turbine - permanent magnet synchronous generator (WT-PMSG), is studied. The objective is to illustrate how the combination of SOM and PCA can be used to build multi-local-PCA models for fault detection in the WT-PMSG system. The performance of the suggested fault detection method on the system data shows good results in the simulation experiment.

  14. Application of expert systems

    Energy Technology Data Exchange (ETDEWEB)

    Basden, A

    1983-11-01

    This article seeks to bring together a number of issues relevant to the application of expert systems by discussing their advantages and limitations, their roles and benefits, and the influence that real-life applications might have on the design of expert systems software. Part of the expert systems strategy of one major chemical company is outlined. Because it was in constructing one particular expert system that many of these issues became important this system is described briefly at the start of the paper and used to illustrate much of the later discussion. It is of the plausible-inference type and has application in the field of materials engineering. 22 references.

  15. Being an expert

    International Nuclear Information System (INIS)

    Brechet, Y.; Musseau, O.; Bruna, G.; Sperandio, M.; Roulleaux-Dugage, M.; Andrieux, S.; Metteau, L.

    2014-01-01

    This series of short articles is dedicated to the role of the expert in the enterprise. There is an important difference between a scientific counsellor and an expert: the expert, recognized by his peers, can speak publicly in his field of expertise but has a duty of transparency, while the job of a scientific counsellor requires confidentiality. The making and the use of an expert in an enterprise require a dedicated organization. The organization of expertise in five enterprises of the nuclear industry is considered: CEA (French Alternative Energies and Atomic Energy Commission), IRSN (Institute of Radioprotection and Nuclear Safety), AREVA, ANDRA (National Radioactive Waste Management Agency) and EDF (Electricity of France)

  16. From the SAIN,LIM system to the SENS algorithm: a review of a French approach of nutrient profiling.

    Science.gov (United States)

    Tharrey, Marion; Maillot, Matthieu; Azaïs-Braesco, Véronique; Darmon, Nicole

    2017-08-01

    Nutrient profiling aims to classify or rank foods according to their nutritional composition to assist policies aimed at improving the nutritional quality of foods and diets. The present paper reviews a French approach to nutrient profiling by describing the SAIN,LIM system and its evolution from its early draft to the simplified nutrition labelling system (SENS) algorithm. Considered in 2010 by WHO as the 'French model' of nutrient profiling, SAIN,LIM classifies foods into four classes based on two scores, a nutrient density score (NDS) called SAIN and a score of nutrients to limit called LIM, and one threshold on each score. The system was first developed by the French Food Standard Agency in 2008 in response to the European regulation on nutrition and health claims (European Commission (EC) 1924/2006) to determine foods that may be eligible for bearing claims. Recently, the European regulation (EC 1169/2011) on the provision of food information to consumers allowed simplified nutrition labelling to facilitate consumer information and help them make fully informed choices. In that context, the SAIN,LIM was adapted to obtain the SENS algorithm, a system able to rank foods for simplified nutrition labelling. The implementation of the algorithm followed a step-by-step, systematic, transparent and logical process where shortcomings of the SAIN,LIM were addressed by integrating specificities of food categories in the SENS, reducing the number of nutrients, ordering the four classes and introducing European reference intakes. Through the French example, this review shows how an existing nutrient profiling system can be specifically adapted to support public health nutrition policies.
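
    As an illustration of the two-score, two-threshold classification the record describes, here is a sketch that assigns a food to one of four classes from precomputed SAIN and LIM scores. The threshold values (SAIN >= 5, LIM <= 7.5) and the class ordering are commonly cited defaults of the system, but they should be treated as assumptions here rather than as the reviewed paper's exact parameters.

```python
def sain_lim_class(sain, lim, sain_threshold=5.0, lim_threshold=7.5):
    """Assign one of the four SAIN,LIM classes from the two scores.
    Thresholds and class labels are assumptions taken from commonly cited
    descriptions of the system, not reproduced from the reviewed paper."""
    high_sain = sain >= sain_threshold      # favourable nutrient density
    low_lim = lim <= lim_threshold          # few nutrients to limit
    if high_sain and low_lim:
        return 1                            # most favourable profile
    if high_sain and not low_lim:
        return 2
    if not high_sain and low_lim:
        return 3
    return 4                                # least favourable profile

print(sain_lim_class(sain=7.2, lim=3.1))    # -> 1 (illustrative values)
```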

  17. Algorithmic approach to patients presenting with heartburn and epigastric pain refractory to empiric proton pump inhibitor therapy.

    Science.gov (United States)

    Roorda, Andrew K; Marcus, Samuel N; Triadafilopoulos, George

    2011-10-01

    Reflux-like dyspepsia (RLD), where predominant epigastric pain is associated with heartburn and/or regurgitation, is a common clinical syndrome in both primary and specialty care. Because symptom frequency and severity vary, overlap among gastroesophageal reflux disease (GERD), non-erosive reflux disease (NERD), and RLD, is quite common. The chronic and recurrent nature of RLD and its variable response to proton pump inhibitor (PPI) therapy remain problematic. To examine the prevalence of GERD, NERD, and RLD in a community setting using an algorithmic approach and to assess the potential, reproducibility, and validity of a multi-factorial scoring system in discriminating patients with RLD from those with GERD or NERD. Using a novel algorithmic approach, we evaluated an outpatient, community-based cohort referred to a gastroenterologist because of epigastric pain and heartburn that were only partially relieved by PPI. After an initial symptom evaluation (for epigastric pain, heartburn, regurgitation, dysphagia), an endoscopy and distal esophageal biopsies were performed, followed by esophageal motility and 24-h ambulatory pH monitoring to assess esophageal function and pathological acid exposure. A scoring system based on presence of symptoms and severity of findings was devised. Data was collected in two stages: subjects in the first stage were designated as the derivation cohort; subjects in the second stage were labeled the validation cohort. The total cohort comprised 159 patients (59 males, 100 females; mean age 52). On endoscopy, 30 patients (19%) had complicated esophagitis (CE) and 11 (7%) had Barrett's esophagus (BE) and were classified collectively as patients with GERD. One-hundred and eighteen (74%) patients had normal esophagus. Of these, 94 (59%) had one or more of the following: hiatal hernia, positive biopsy, abnormal pH, and/or abnormal motility studies and were classified as patients with NERD. The remaining 24 patients (15%) had normal functional

  18. Expert system validation in prolog

    Science.gov (United States)

    Stock, Todd; Stachowitz, Rolf; Chang, Chin-Liang; Combs, Jacqueline

    1988-01-01

    An overview is given of the Expert System Validation Assistant (EVA), which is being implemented in Prolog at the Lockheed AI Center. Prolog was chosen to facilitate rapid prototyping of the structure and logic checkers, and since February 1987 we have implemented code to check for irrelevance, subsumption, duplication, dead ends, unreachability, and cycles. The architecture chosen is extremely flexible and expansible, yet concise and complementary with the normal interactive style of Prolog. The foundation of the system is the connection graph representation. Rules and facts are modeled as nodes in the graph and arcs indicate common patterns between rules. The basic activity of the validation system is then a traversal of the connection graph, searching for various patterns the system recognizes as erroneous. To aid in specifying these patterns, a metalanguage is developed, providing the user with the basic facilities required to reason about the expert system. Using the metalanguage, the user can, for example, give the Prolog inference engine the goal of finding inconsistent conclusions among the rules, and Prolog will search the graph for instantiations which match the definition of inconsistency. Examples of code for some of the checkers are provided and the algorithms explained. Technical highlights include automatic construction of a connection graph, demonstration of the use of the metalanguage, the A* algorithm modified to detect all unique cycles, general-purpose stacks in Prolog, and a general-purpose database browser with pattern completion.
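
    The checkers themselves are written in Prolog and are not reproduced in the record; as a language-neutral illustration of one of the structural checks mentioned (cycle detection over a rule connection graph), here is a small depth-first-search sketch. The graph encoding and rule names are assumptions, and this is not the A*-based procedure the abstract refers to.

```python
def find_cycles(graph):
    """graph: {rule: [rules whose conditions match this rule's conclusion]}.
    Returns every cycle found by DFS from each node (rotations of the same
    cycle are reported separately; adequate for flagging, not for counting)."""
    cycles = []
    def dfs(start, node, path, visited):
        for nxt in graph.get(node, []):
            if nxt == start:
                cycles.append(path + [start])
            elif nxt not in visited:
                dfs(start, nxt, path + [nxt], visited | {nxt})
    for rule in graph:
        dfs(rule, rule, [rule], {rule})
    return cycles

rules = {"r1": ["r2"], "r2": ["r3"], "r3": ["r1", "r4"], "r4": []}
print(find_cycles(rules))   # flags the r1 -> r2 -> r3 -> r1 loop (three rotations)
```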

  19. The mass-action law based algorithm for cost-effective approach for cancer drug discovery and development.

    Science.gov (United States)

    Chou, Ting-Chao

    2011-01-01

    The mass-action law based system analysis via mathematical induction and deduction leads to a generalized theory and algorithm that allows computerized simulation of dose-effect dynamics with small-size experiments using a small number of data points in vitro, in animals, and in humans. The median-effect equation of the mass-action law, deduced from over 300 mechanism-specific equations, has been shown to be the unified theory that serves as the common link for complicated biomedical systems. After using the median-effect principle as the common denominator, its applications are mechanism-independent, drug unit-independent, and dynamic order-independent, and can be used generally for single drug analysis or for multiple drug combinations in constant or non-constant ratios. Since the "median" is the common link and universal reference point in biological systems, these general enabling features lead to computerized quantitative bio-informatics for econo-green bio-research in broad disciplines. Specific applications of the theory, especially relevant to drug discovery, drug combination, and clinical trials, have been cited or illustrated in terms of algorithms, experimental design and computerized simulation for data analysis. Lessons learned from cancer research during the past fifty years provide a valuable opportunity to reflect, to improve the conventional divergent approach and to introduce a new convergent avenue, based on the mass-action law principle, for efficient cancer drug discovery and low-cost drug development.

  20. A novel hybrid approach based on Particle Swarm Optimization and Ant Colony Algorithm to forecast energy demand of Turkey

    International Nuclear Information System (INIS)

    Kıran, Mustafa Servet; Özceylan, Eren; Gündüz, Mesut; Paksoy, Turan

    2012-01-01

    Highlights: ► PSO and ACO algorithms are hybridized for forecasting the energy demand of Turkey. ► Linear and quadratic forms are developed to meet the fluctuations of indicators. ► GDP, population, export and import have significant impacts on energy demand. ► The quadratic form provides a better fit than the linear form. ► The proposed approach gives lower estimation error than ACO and PSO separately. - Abstract: This paper proposes a new hybrid method (HAP) for estimating the energy demand of Turkey using Particle Swarm Optimization (PSO) and Ant Colony Optimization (ACO). The proposed energy demand model (HAPE) is the first model which integrates the two mentioned meta-heuristic techniques. While PSO, developed for solving continuous optimization problems, is a population-based stochastic technique, ACO, which simulates the behavior of real ants between nest and food source, is generally used for discrete optimization. The hybrid method based on PSO and ACO is developed to estimate energy demand using gross domestic product (GDP), population, import and export. HAPE is developed in two forms, linear (HAPEL) and quadratic (HAPEQ). The future energy demand is estimated under different scenarios. In order to show the accuracy of the algorithm, a comparison is made with ACO and PSO, which are developed for the same problem. According to the obtained results, the relative estimation errors of the HAPE model are the lowest of them and the quadratic form (HAPEQ) provides better-fit solutions due to fluctuations of the socio-economic indicators.
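
    The record names the model inputs but not the exact equations; the following sketch shows generic linear and quadratic demand forms of the kind described, with the coefficient vectors as the quantities a metaheuristic such as the hybrid PSO-ACO would tune. All symbols, data values and the coefficient layout are placeholders, not the HAPE model itself.

```python
import numpy as np

def demand_linear(x, w):
    """x = [GDP, population, import, export]; w = [w0, w1..w4] coefficients."""
    return w[0] + np.dot(w[1:], x)

def demand_quadratic(x, w):
    """Adds pairwise interaction terms; the coefficient layout is illustrative."""
    x = np.asarray(x)
    pairs = np.array([x[i] * x[j] for i in range(len(x)) for j in range(i, len(x))])
    return w[0] + np.dot(w[1:1 + len(x)], x) + np.dot(w[1 + len(x):], pairs)

def sse(w, form, X, y):
    """Sum of squared errors between observed demand y and the chosen form --
    this is the quantity the metaheuristic would minimize over w."""
    return float(sum((y_i - form(x_i, w)) ** 2 for x_i, y_i in zip(X, y)))

# illustrative data: 3 observations of 4 indicators and the observed energy demand
X = [[1.0, 2.0, 0.5, 0.4], [1.2, 2.1, 0.6, 0.5], [1.5, 2.2, 0.8, 0.6]]
y = [55.0, 60.0, 68.0]
w_lin = np.ones(5)
print(sse(w_lin, demand_linear, X, y))
```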

  1. Contextual Factors for Finding Similar Experts

    DEFF Research Database (Denmark)

    Hofmann, Katja; Balog, Krisztian; Bogers, Toine

    2010-01-01

    -seeking models, are rarely taken into account. In this article, we extend content-based expert-finding approaches with contextual factors that have been found to influence human expert finding. We focus on a task of science communicators in a knowledge-intensive environment, the task of finding similar experts, given an example expert. Our approach combines expertise-seeking and retrieval research. First, we conduct a user study to identify contextual factors that may play a role in the studied task and environment. Then, we design expert retrieval models to capture these factors. We combine these with content-based retrieval models and evaluate them in a retrieval experiment. Our main finding is that while content-based features are the most important, human participants also take contextual factors into account, such as media experience and organizational structure. We develop two principled ways of modeling...

  2. Performance Evaluation of the Approaches and Algorithms Using Hamburg Airport Operations

    Science.gov (United States)

    Zhu, Zhifan; Okuniek, Nikolai; Gerdes, Ingrid; Schier, Sebastian; Lee, Hanbong; Jung, Yoon

    2016-01-01

    The German Aerospace Center (DLR) and the National Aeronautics and Space Administration (NASA) have been independently developing and testing their own concepts and tools for airport surface traffic management. Although these concepts and tools have been tested individually for European and US airports, they have never been compared or analyzed side-by-side. This paper presents the collaborative research devoted to the evaluation and analysis of two different surface management concepts. Hamburg Airport was used as a common test bed airport for the study. First, two independent simulations using the same traffic scenario were conducted; one by the DLR team using the Controller Assistance for Departure Optimization (CADEO) and the Taxi Routing for Aircraft: Creation and Controlling (TRACC) in a real-time simulation environment, and one by the NASA team based on the Spot and Runway Departure Advisor (SARDA) in a fast-time simulation environment. A set of common performance metrics was defined. The simulation results showed that both approaches produced operational benefits in efficiency, such as reducing taxi times, while maintaining runway throughput. Both approaches generated the gate pushback schedule to meet the runway schedule, such that the runway utilization was maximized. The conflict-free taxi guidance by TRACC helped avoid taxi conflicts and reduced taxiing stops, but the taxi benefit needed be assessed together with runway throughput to analyze the overall performance objective.

  3. Performance Evaluation of the Approaches and Algorithms for Hamburg Airport Operations

    Science.gov (United States)

    Zhu, Zhifan; Okuniek, Nikolai; Gerdes, Ingrid; Schier, Sebastian; Lee, Hanbong; Jung, Yoon

    2016-01-01

    The German Aerospace Center (DLR) and the National Aeronautics and Space Administration (NASA) have been independently developing and testing their own concepts and tools for airport surface traffic management. Although these concepts and tools have been tested individually for European and US airports, they have never been compared or analyzed side-by-side. This paper presents the collaborative research devoted to the evaluation and analysis of two different surface management concepts. Hamburg Airport was used as a common test bed airport for the study. First, two independent simulations using the same traffic scenario were conducted: one by the DLR team using the Controller Assistance for Departure Optimization (CADEO) and the Taxi Routing for Aircraft: Creation and Controlling (TRACC) in a real-time simulation environment, and one by the NASA team based on the Spot and Runway Departure Advisor (SARDA) in a fast-time simulation environment. A set of common performance metrics was defined. The simulation results showed that both approaches produced operational benefits in efficiency, such as reducing taxi times, while maintaining runway throughput. Both approaches generated the gate pushback schedule to meet the runway schedule, such that the runway utilization was maximized. The conflict-free taxi guidance by TRACC helped avoid taxi conflicts and reduced taxiing stops, but the taxi benefit needed be assessed together with runway throughput to analyze the overall performance objective.

  4. Improved approach for electric vehicle rapid charging station placement and sizing using Google maps and binary lightning search algorithm

    Science.gov (United States)

    Shareef, Hussain; Mohamed, Azah

    2017-01-01

    The electric vehicle (EV) is considered a premium solution to global warming and various types of pollution. Nonetheless, a key concern is the recharging of EV batteries. Therefore, this study proposes a novel approach that considers the costs of transportation loss, buildup, and substation energy loss and that incorporates harmonic power loss into optimal rapid charging station (RCS) planning. A novel optimization technique, called binary lightning search algorithm (BLSA), is proposed to solve the optimization problem. BLSA is also applied to a conventional RCS planning method. A comprehensive analysis is conducted to assess the performance of the two RCS planning methods by using the IEEE 34-bus test system as the power grid. The comparative studies show that the proposed BLSA is better than other optimization techniques. The daily total cost in RCS planning of the proposed method, including harmonic power loss, decreases by 10% compared with that of the conventional method. PMID:29220396

  5. Contrast improvement of continuous wave diffuse optical tomography reconstruction by hybrid approach using least square and genetic algorithm

    Science.gov (United States)

    Patra, Rusha; Dutta, Pranab K.

    2015-07-01

    Reconstruction of the absorption coefficient of tissue with good contrast is of key importance in functional diffuse optical imaging. A hybrid approach using model-based iterative image reconstruction and a genetic algorithm is proposed to enhance the contrast of the reconstructed image. The proposed method yields an observed contrast of 98.4%, a mean square error of 0.638 × 10⁻³, and an object centroid error of (0.001 to 0.22) mm. Experimental validation of the proposed method has also been provided with tissue-like phantoms, which shows a significant improvement in image quality and thus establishes the potential of the method for functional diffuse optical tomography reconstruction with a continuous wave setup. A case study of finger joint imaging is illustrated as well to show the prospect of the proposed method in clinical diagnosis. The method can also be applied to the concentration measurement of a region of interest in a turbid medium.

  6. Improved approach for electric vehicle rapid charging station placement and sizing using Google maps and binary lightning search algorithm.

    Directory of Open Access Journals (Sweden)

    Md Mainul Islam

    Full Text Available The electric vehicle (EV) is considered a premium solution to global warming and various types of pollution. Nonetheless, a key concern is the recharging of EV batteries. Therefore, this study proposes a novel approach that considers the costs of transportation loss, buildup, and substation energy loss and that incorporates harmonic power loss into optimal rapid charging station (RCS) planning. A novel optimization technique, called the binary lightning search algorithm (BLSA), is proposed to solve the optimization problem. BLSA is also applied to a conventional RCS planning method. A comprehensive analysis is conducted to assess the performance of the two RCS planning methods by using the IEEE 34-bus test system as the power grid. The comparative studies show that the proposed BLSA is better than other optimization techniques. The daily total cost in RCS planning of the proposed method, including harmonic power loss, decreases by 10% compared with that of the conventional method.

  7. Reliable computation of roots in analytical waveguide modeling using an interval-Newton approach and algorithmic differentiation.

    Science.gov (United States)

    Bause, Fabian; Walther, Andrea; Rautenberg, Jens; Henning, Bernd

    2013-12-01

    For the modeling and simulation of wave propagation in geometrically simple waveguides such as plates or rods, one may employ the analytical global matrix method. That is, a certain (global) matrix depending on the two parameters wavenumber and frequency is built. Subsequently, one must calculate all parameter pairs within the domain of interest where the global matrix becomes singular. For this purpose, one could compute all roots of the determinant of the global matrix when the two parameters vary in the given intervals. This requirement to calculate all roots is actually the method's most concerning restriction. Previous approaches are based on so-called mode-tracers, which use the physical phenomenon that solutions, i.e., roots of the determinant of the global matrix, appear in a certain pattern, the waveguide modes, to limit the root-finding algorithm's search space with respect to consecutive solutions. In some cases, these reductions of the search space yield only an incomplete set of solutions, because some roots may be missed as a result of uncertain predictions. Therefore, we propose replacement of the mode-tracer approach with a suitable version of an interval- Newton method. To apply this interval-based method, we extended the interval and derivative computation provided by a numerical computing environment such that corresponding information is also available for Bessel functions used in circular models of acoustic waveguides. We present numerical results for two different scenarios. First, a polymeric cylindrical waveguide is simulated, and second, we show simulation results of a one-sided fluid-loaded plate. For both scenarios, we compare results obtained with the proposed interval-Newton algorithm and commercial software.

  8. Potential shallow aquifers characterization through an integrated geophysical method: multivariate approach by means of k-means algorithms

    Directory of Open Access Journals (Sweden)

    Stefano Bernardinetti

    2017-06-01

    Full Text Available The need for a detailed hydrogeological characterization of the subsurface and its interpretation for groundwater resources management often requires applying several complementary geophysical methods. The goal of the approach in this paper is to provide a unique model of the aquifer by synthesizing and optimizing the information provided by several geophysical methods. This approach greatly reduces the degree of uncertainty and subjectivity of the interpretation by exploiting the different physical and mechanical characteristics of the aquifer. The studied area, in the municipality of Laterina (Arezzo, Italy), is a shallow basin filled by lacustrine and alluvial deposits (Pleistocene and Holocene epochs, Quaternary period), with alternating silt and sand with a variable content of gravel and clay, where the bottom is represented by arenaceous-pelitic rocks (Mt. Cervarola Unit, Tuscan Domain, Miocene epoch). This shallow basin constitutes the unconfined superficial aquifer to be exploited in the near future. To improve the geological model obtained from a detailed geological survey, we performed electrical resistivity and P-wave refraction tomographies along the same line in order to obtain different, independent and integrable data sets. For the seismic data, the reflected events were also processed, a remarkable contribution to delineating the geological setting. Through the k-means algorithm, we perform a cluster analysis of the bivariate data set to identify relationships between the two sets of variables. The algorithm assigns points to clusters so as to minimize the dissimilarity within each cluster and maximize it among different clusters of the bivariate data set. The optimal number of clusters "K", corresponding to the identified geophysical facies, depends on the distribution of the multivariate data set and in this work is estimated with the silhouette method. The result is an integrated tomography that shows a finite
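
    The clustering step described above can be sketched in a few lines. The following Python example, using scikit-learn, runs k-means on a synthetic bivariate data set (stand-ins for resistivity and P-wave velocity values per model cell) and picks the number of clusters by the silhouette score; the feature names, value ranges and cluster count are invented for the illustration and are not the survey data.

      import numpy as np
      from sklearn.cluster import KMeans
      from sklearn.metrics import silhouette_score
      from sklearn.preprocessing import StandardScaler

      rng = np.random.default_rng(0)
      # Synthetic "geophysical facies": three groups in (log resistivity, Vp) space.
      data = np.vstack([
          rng.normal([1.5, 800.0],  [0.2, 80.0],  size=(200, 2)),
          rng.normal([2.5, 1500.0], [0.2, 120.0], size=(200, 2)),
          rng.normal([3.2, 2500.0], [0.2, 150.0], size=(200, 2)),
      ])
      X = StandardScaler().fit_transform(data)  # put both variables on a common scale

      best_k, best_score = None, -1.0
      for k in range(2, 7):
          labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
          score = silhouette_score(X, labels)   # high score = compact, well-separated
          print(f"k={k}: silhouette={score:.3f}")
          if score > best_score:
              best_k, best_score = k, score

      print("selected number of facies:", best_k)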

  9. Computer Based Expert Systems.

    Science.gov (United States)

    Parry, James D.; Ferrara, Joseph M.

    1985-01-01

    Claims knowledge-based expert computer systems can meet needs of rural schools for affordable expert advice and support and will play an important role in the future of rural education. Describes potential applications in prediction, interpretation, diagnosis, remediation, planning, monitoring, and instruction. (NEC)

  10. Real time expert systems

    International Nuclear Information System (INIS)

    Asami, Tohru; Hashimoto, Kazuo; Yamamoto, Seiichi

    1992-01-01

    Recently, research on and practical use of expert systems suited to real-time processing have become prominent, aimed at applications such as plant control for nuclear reactors and traffic and communication control. This report presents the functional requirements for controlling an object that changes dynamically within a limited time, and explains the technical differences between real-time expert systems developed to satisfy these requirements and conventional expert systems, both through actual examples and from a theoretical viewpoint. Conventional expert systems have their technical basis in problem-solving machinery originating in STRIPS. Real-time expert systems are applied to fields involving surveillance and control, to which conventional expert systems are difficult to apply. The report explains the requirements for real-time expert systems, examples of such systems, and the techniques for realizing real-time processing: interrupt handling, distributed processing, and mechanisms for maintaining the consistency of knowledge. (K.I.)

  11. Expert systems: An overview

    International Nuclear Information System (INIS)

    Verdejo, F.

    1985-01-01

    The purpose of this article is to introduce readers to the basic principles of rule-based expert systems. Four topics are discussed in subsequent sections: (1) Definition; (2) Structure of an expert system; (3) State of the art and (4) Impact and future research. (orig.)

  12. Trendwatch combining expert opinion

    NARCIS (Netherlands)

    Hendrix, E.M.T.; Kornelis, M.; Pegge, S.M.; Galen, van M.A.

    2006-01-01

    In this study, the focus is on a systematic way to detect future changes in trends that may affect the dynamics in the agro-food sector, and on the combination of opinions of experts. For the combination of expert opinions, the usefulness of multilevel models is investigated. Bayesian data analysis is

  13. A Constrained Least Squares Approach to Mobile Positioning: Algorithms and Optimality

    Science.gov (United States)

    Cheung, KW; So, HC; Ma, W.-K.; Chan, YT

    2006-12-01

    The problem of locating a mobile terminal has received significant attention in the field of wireless communications. Time-of-arrival (TOA), received signal strength (RSS), time-difference-of-arrival (TDOA), and angle-of-arrival (AOA) are commonly used measurements for estimating the position of the mobile station. In this paper, we present a constrained weighted least squares (CWLS) mobile positioning approach that encompasses all the above described measurement cases. The advantages of CWLS include performance optimality and capability of extension to hybrid measurement cases (e.g., mobile positioning using TDOA and AOA measurements jointly). Assuming zero-mean uncorrelated measurement errors, we show by mean and variance analysis that all the developed CWLS location estimators achieve zero bias and the Cramér-Rao lower bound approximately when measurement error variances are small. The asymptotic optimum performance is also confirmed by simulation results.
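
    A minimal sketch of the simplest measurement case above, positioning from time-of-arrival (TOA) ranges, is given below. Squaring the range equations makes the problem linear in (x, y, R) with R = x^2 + y^2; the paper's CWLS estimator additionally enforces that relation as a constraint and weights the equations by the error statistics, both of which this unconstrained sketch omits. Anchor positions and noise levels are invented for the example.

      import numpy as np

      def toa_ls_position(anchors, ranges):
          """anchors: (N, 2) known positions; ranges: (N,) measured distances."""
          anchors = np.asarray(anchors, dtype=float)
          ranges = np.asarray(ranges, dtype=float)
          # ||p - a_i||^2 = d_i^2  =>  2*a_i . p - R = a_i . a_i - d_i^2, R = p . p
          A = np.column_stack([2.0 * anchors, -np.ones(len(anchors))])
          b = np.sum(anchors**2, axis=1) - ranges**2
          sol, *_ = np.linalg.lstsq(A, b, rcond=None)   # sol = [x, y, R]
          return sol[:2]

      if __name__ == "__main__":
          rng = np.random.default_rng(1)
          anchors = np.array([[0.0, 0.0], [100.0, 0.0], [0.0, 100.0], [100.0, 100.0]])
          true_pos = np.array([37.0, 62.0])
          d = np.linalg.norm(anchors - true_pos, axis=1) + rng.normal(0, 0.5, 4)
          print("estimate:", toa_ls_position(anchors, d), "true:", true_pos)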

  14. A Precise Lane Detection Algorithm Based on Top View Image Transformation and Least-Square Approaches

    Directory of Open Access Journals (Sweden)

    Byambaa Dorj

    2016-01-01

    Full Text Available The next promising key issue of automobile development is self-driving technology. One of the challenges for intelligent self-driving is a lane-detecting and lane-keeping capability for advanced driver assistance systems. This paper introduces an efficient lane detection method based on a top view image transformation that converts an image from a front view to a top view space. After the top view image transformation, a Hough transformation technique is integrated, using a parabolic model of a curved lane, in order to estimate a parametric model of the lane in the top view space. The parameters of the parabolic model are estimated by utilizing a least-square approach. The experimental results show that the newly proposed lane detection method with the top view transformation is very effective in estimating sharp and curved lanes, leading to a precise self-driving capability.
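
    The least-squares step of the lane model can be illustrated directly: after the top-view transformation, candidate lane pixels (synthetic here) are fitted with a parabola x = a*y^2 + b*y + c. The coefficients and noise level below are invented; the Hough-based candidate extraction and the image warping are not reproduced.

      import numpy as np

      rng = np.random.default_rng(2)
      y = np.linspace(0, 200, 120)                       # rows in the top-view image
      x_true = 2e-3 * y**2 - 0.3 * y + 80.0              # a gently curving lane
      x_obs = x_true + rng.normal(0, 1.5, y.size)        # noisy detected lane pixels

      a, b, c = np.polyfit(y, x_obs, deg=2)              # least-squares parabola fit
      print(f"fitted lane model: x = {a:.4e}*y^2 + {b:.3f}*y + {c:.1f}")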

  15. The ontological model and the hybrid expert system for products and processes quality identification involving the approach based on system analysis and quality function deployment

    Directory of Open Access Journals (Sweden)

    Dmitriev Aleksandr

    2016-01-01

    Full Text Available The discussed model of quality identification has improved mathematical tools and allows a variety of additional information to be used. The proposed robust method, a matrix MTQFD (Matrix Technique Quality Function Deployment), allows one to determine not only the priorities but also the assessment of the target values of the product characteristics and process parameters, with the possible use of information on negative relationships. The designed ontological model, method and expert system model are versatile and can also be used to identify the quality of services.

  16. An Approach for Predicting Essential Genes Using Multiple Homology Mapping and Machine Learning Algorithms.

    Science.gov (United States)

    Hua, Hong-Li; Zhang, Fa-Zhan; Labena, Abraham Alemayehu; Dong, Chuan; Jin, Yan-Ting; Guo, Feng-Biao

    Investigation of essential genes is important for comprehending the minimal gene sets of a cell and discovering potential drug targets. In this study, a novel approach based on multiple homology mapping and a machine learning method was introduced to predict essential genes. We focused on 25 bacteria which have characterized essential genes. The predictions yielded the highest area under the receiver operating characteristic (ROC) curve (AUC) of 0.9716 in a tenfold cross-validation test. Proper features were utilized to construct models to make predictions in distantly related bacteria. The accuracy of predictions was evaluated via the consistency between predictions and the known essential genes of the target species. The highest AUC of 0.9552 and an average AUC of 0.8314 were achieved when making predictions across organisms. An independent dataset from Synechococcus elongatus, which was released recently, was obtained for further assessment of the performance of our model. The AUC score of these predictions is 0.7855, which is higher than that of other methods. This research shows that features obtained by homology mapping alone can achieve results as good as or even better than integrated features. Meanwhile, the work indicates that a machine learning-based method can assign more effective weight coefficients than an empirical formula based on biological knowledge.
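
    The evaluation protocol described above (a classifier over homology-derived features scored by AUC under tenfold cross-validation) can be sketched as follows with scikit-learn. The features, labels and the random-forest classifier are stand-ins chosen for the illustration; they are not the paper's features or model.

      import numpy as np
      from sklearn.ensemble import RandomForestClassifier
      from sklearn.model_selection import cross_val_score, StratifiedKFold

      rng = np.random.default_rng(3)
      n_genes, n_features = 1000, 8      # e.g., one homology score per reference organism
      X = rng.normal(size=(n_genes, n_features))
      # Make essentiality weakly dependent on the features so the AUC is informative.
      logits = X @ rng.normal(size=n_features) - 1.0
      y = (rng.random(n_genes) < 1 / (1 + np.exp(-logits))).astype(int)

      cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
      aucs = cross_val_score(RandomForestClassifier(n_estimators=200, random_state=0),
                             X, y, cv=cv, scoring="roc_auc")
      print(f"tenfold AUC: {aucs.mean():.3f} +/- {aucs.std():.3f}")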

  17. Genetic Algorithms Principles Towards Hidden Markov Model

    Directory of Open Access Journals (Sweden)

    Nabil M. Hewahi

    2011-10-01

    Full Text Available In this paper we propose a general approach based on Genetic Algorithms (GAs) to evolve Hidden Markov Models (HMMs). The problem arises when experts assign probability values for an HMM: they use only some limited inputs, and the assigned probability values might not be accurate for other cases in the same domain. We introduce an approach based on GAs to find suitable probability values for the HMM so that it is correct in more cases than those used to assign the original probability values.
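
    A toy version of this idea is sketched below: a genetic algorithm searches for the transition probabilities of a two-state HMM that maximize the likelihood (computed with the forward algorithm) of observation sequences, instead of keeping expert-assigned values fixed. The two-state model, the fixed emission matrix and the GA settings are all assumptions made for the illustration.

      import numpy as np

      rng = np.random.default_rng(4)
      EMIT = np.array([[0.9, 0.1],    # state 0 mostly emits symbol 0
                       [0.2, 0.8]])   # state 1 mostly emits symbol 1

      def forward_loglik(trans, seqs, start=(0.5, 0.5)):
          """Log-likelihood of observation sequences under a 2-state HMM."""
          total = 0.0
          for seq in seqs:
              alpha = np.asarray(start) * EMIT[:, seq[0]]
              s = alpha.sum(); ll = np.log(s); alpha = alpha / s
              for o in seq[1:]:
                  alpha = (alpha @ trans) * EMIT[:, o]   # forward recursion, scaled
                  s = alpha.sum(); ll += np.log(s); alpha = alpha / s
              total += ll
          return total

      def simulate(trans, length):
          state, seq = 0, []
          for _ in range(length):
              seq.append(rng.choice(2, p=EMIT[state]))
              state = rng.choice(2, p=trans[state])
          return seq

      TRUE_TRANS = np.array([[0.85, 0.15], [0.30, 0.70]])
      SEQS = [simulate(TRUE_TRANS, 100) for _ in range(10)]

      def decode(genome):
          p00, p11 = np.clip(genome, 1e-3, 1 - 1e-3)     # self-transition probabilities
          return np.array([[p00, 1 - p00], [1 - p11, p11]])

      def ga(pop_size=20, gens=30):
          pop = rng.random((pop_size, 2))
          for _ in range(gens):
              fit = np.array([forward_loglik(decode(g), SEQS) for g in pop])
              new = [pop[np.argmax(fit)].copy()]         # elitism
              while len(new) < pop_size:
                  parents = []
                  for _ in range(2):                     # binary tournament selection
                      i, j = rng.integers(pop_size, size=2)
                      parents.append(pop[i] if fit[i] > fit[j] else pop[j])
                  w = rng.random(2)
                  child = w * parents[0] + (1 - w) * parents[1]   # blend crossover
                  child = child + rng.normal(0.0, 0.05, 2)        # Gaussian mutation
                  new.append(np.clip(child, 1e-3, 1 - 1e-3))
              pop = np.array(new)
          fit = np.array([forward_loglik(decode(g), SEQS) for g in pop])
          return decode(pop[np.argmax(fit)])

      print("GA estimate of transition matrix:\n", ga())
      print("matrix used to generate the data:\n", TRUE_TRANS)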

  18. Algorithmic design of a noise-resistant and efficient closed-loop deep brain stimulation system: A computational approach.

    Directory of Open Access Journals (Sweden)

    Sofia D Karamintziou

    Full Text Available Advances in the field of closed-loop neuromodulation call for analysis and modeling approaches capable of confronting challenges related to the complex neuronal response to stimulation and the presence of strong internal and measurement noise in neural recordings. Here we elaborate on the algorithmic aspects of a noise-resistant closed-loop subthalamic nucleus deep brain stimulation system for advanced Parkinson's disease and treatment-refractory obsessive-compulsive disorder, ensuring remarkable performance in terms of both efficiency and selectivity of stimulation, as well as in terms of computational speed. First, we propose an efficient method drawn from dynamical systems theory, for the reliable assessment of significant nonlinear coupling between beta and high-frequency subthalamic neuronal activity, as a biomarker for feedback control. Further, we present a model-based strategy through which optimal parameters of stimulation for minimum energy desynchronizing control of neuronal activity are being identified. The strategy integrates stochastic modeling and derivative-free optimization of neural dynamics based on quadratic modeling. On the basis of numerical simulations, we demonstrate the potential of the presented modeling approach to identify, at a relatively low computational cost, stimulation settings potentially associated with a significantly higher degree of efficiency and selectivity compared with stimulation settings determined post-operatively. Our data reinforce the hypothesis that model-based control strategies are crucial for the design of novel stimulation protocols at the backstage of clinical applications.

  19. Algorithmic design of a noise-resistant and efficient closed-loop deep brain stimulation system: A computational approach.

    Science.gov (United States)

    Karamintziou, Sofia D; Custódio, Ana Luísa; Piallat, Brigitte; Polosan, Mircea; Chabardès, Stéphan; Stathis, Pantelis G; Tagaris, George A; Sakas, Damianos E; Polychronaki, Georgia E; Tsirogiannis, George L; David, Olivier; Nikita, Konstantina S

    2017-01-01

    Advances in the field of closed-loop neuromodulation call for analysis and modeling approaches capable of confronting challenges related to the complex neuronal response to stimulation and the presence of strong internal and measurement noise in neural recordings. Here we elaborate on the algorithmic aspects of a noise-resistant closed-loop subthalamic nucleus deep brain stimulation system for advanced Parkinson's disease and treatment-refractory obsessive-compulsive disorder, ensuring remarkable performance in terms of both efficiency and selectivity of stimulation, as well as in terms of computational speed. First, we propose an efficient method drawn from dynamical systems theory, for the reliable assessment of significant nonlinear coupling between beta and high-frequency subthalamic neuronal activity, as a biomarker for feedback control. Further, we present a model-based strategy through which optimal parameters of stimulation for minimum energy desynchronizing control of neuronal activity are being identified. The strategy integrates stochastic modeling and derivative-free optimization of neural dynamics based on quadratic modeling. On the basis of numerical simulations, we demonstrate the potential of the presented modeling approach to identify, at a relatively low computational cost, stimulation settings potentially associated with a significantly higher degree of efficiency and selectivity compared with stimulation settings determined post-operatively. Our data reinforce the hypothesis that model-based control strategies are crucial for the design of novel stimulation protocols at the backstage of clinical applications.

  20. A Novel Approach for Blast-Induced Flyrock Prediction Based on Imperialist Competitive Algorithm and Artificial Neural Network

    Science.gov (United States)

    Marto, Aminaton; Jahed Armaghani, Danial; Tonnizam Mohamad, Edy; Makhtar, Ahmad Mahir

    2014-01-01

    Flyrock is one of the major disturbances induced by blasting which may cause severe damage to nearby structures. This phenomenon has to be precisely predicted and subsequently controlled through changes in the blast design to minimize the potential risk of blasting. The scope of this study is to predict flyrock induced by blasting through a novel approach based on the combination of an imperialist competitive algorithm (ICA) and an artificial neural network (ANN). For this purpose, the parameters of 113 blasting operations were accurately recorded and flyrock distances were measured for each operation. By applying a sensitivity analysis, maximum charge per delay and powder factor were determined as the most influential parameters on flyrock. In the light of this analysis, two new empirical predictors were developed to predict flyrock distance. For comparison purposes, a back-propagation (BP) ANN model was also developed and the results were compared with those of the proposed ICA-ANN model and empirical predictors. The results clearly showed the superiority of the proposed ICA-ANN model in comparison with the proposed BP-ANN model and empirical approaches. PMID:25147856

  1. Identification of novel adhesins of M. tuberculosis H37Rv using integrated approach of multiple computational algorithms and experimental analysis.

    Directory of Open Access Journals (Sweden)

    Sanjiv Kumar

    Full Text Available Pathogenic bacteria interacting with a eukaryotic host express adhesins on their surface. These adhesins aid in bacterial attachment to the host cell receptors during colonization. A few adhesins of M. tuberculosis, such as Heparin binding hemagglutinin adhesin (HBHA), Apa, and Malate Synthase, have been identified using specific experimental interaction models based on the biological knowledge of the pathogen. In the present work, we carried out computational screening for adhesins of M. tuberculosis. We used an integrated computational approach using SPAAN for predicting adhesins, PSORTb, SubLoc and LocTree for extracellular localization, and BLAST for verifying non-similarity to human proteins. These steps are among the first of reverse vaccinology. Multiple claims and attacks from different algorithms were processed through an argumentative approach. Additional filtration criteria included selection for proteins with low molecular weights and absence of literature reports. We examined the binding potential of the selected proteins using an image-based ELISA. The protein Rv2599 (membrane protein) binds to human fibronectin, laminin and collagen. Rv3717 (N-acetylmuramoyl-L-alanine amidase) and Rv0309 (L,D-transpeptidase) bind to fibronectin and laminin. We report Rv2599 (membrane protein), Rv0309 and Rv3717 as novel adhesins of M. tuberculosis H37Rv. Our results expand the number of known adhesins of M. tuberculosis and suggest their regulated expression in different stages.

  2. A Novel Approach for Blast-Induced Flyrock Prediction Based on Imperialist Competitive Algorithm and Artificial Neural Network

    Directory of Open Access Journals (Sweden)

    Aminaton Marto

    2014-01-01

    Full Text Available Flyrock is one of the major disturbances induced by blasting which may cause severe damage to nearby structures. This phenomenon has to be precisely predicted and subsequently controlled through changes in the blast design to minimize the potential risk of blasting. The scope of this study is to predict flyrock induced by blasting through a novel approach based on the combination of an imperialist competitive algorithm (ICA) and an artificial neural network (ANN). For this purpose, the parameters of 113 blasting operations were accurately recorded and flyrock distances were measured for each operation. By applying a sensitivity analysis, maximum charge per delay and powder factor were determined as the most influential parameters on flyrock. In the light of this analysis, two new empirical predictors were developed to predict flyrock distance. For comparison purposes, a back-propagation (BP) ANN model was also developed and the results were compared with those of the proposed ICA-ANN model and empirical predictors. The results clearly showed the superiority of the proposed ICA-ANN model in comparison with the proposed BP-ANN model and empirical approaches.

  3. Expert Panel Elicitation

    Energy Technology Data Exchange (ETDEWEB)

    Jensen, M. [Swedish Radiation Protection Authority, Stockholm (Sweden). Dept. of Waste Management and Environmental Protection; Hora, S.C. [Univ. of Hawaii, Hilo, HI (United States)

    2005-09-15

    Scientists are now frequently in a situation where data cannot be easily assessed, since the data may come from conflicting or uncertain sources. While expert judgment reflects private choices, it is possible both to reduce the personal aspect and to increase confidence in the judgments by using formal protocols for the choice and elicitation of experts. A full-scale elicitation on seismicity following glaciation, now in its late phase and presented here in a preliminary form, illustrates the value of the technique and some essential issues connected with the decision to launch such a project. The results show an unusually low variation between the experts.

  4. Experts on public trial

    DEFF Research Database (Denmark)

    Blok, Anders

    2007-01-01

    Based on a case study of the May 2003 Danish consensus conference on environmental economics as a policy tool, the article reflects on the politics of expert authority permeating practices of public participation. Adopting concepts from the sociology of scientific knowledge (SSK), the conference is seen ... -than-successful defense in the citizen perspective. Further, consensus conferences are viewed alternatively as "expert dissent conferences," serving to disclose a multiplicity of expert commitments. From this perspective, some challenges for democratizing expertise through future exercises in public participation...

  5. Expert judgement models in quantitative risk assessment

    Energy Technology Data Exchange (ETDEWEB)

    Rosqvist, T. [VTT Automation, Helsinki (Finland); Tuominen, R. [VTT Automation, Tampere (Finland)

    1999-12-01

    Expert judgement is a valuable source of information in risk management. Especially, risk-based decision making relies significantly on quantitative risk assessment, which requires numerical data describing the initiator event frequencies and conditional probabilities in the risk model. This data is seldom found in databases and has to be elicited from qualified experts. In this report, we discuss some modelling approaches to expert judgement in risk modelling. A classical and a Bayesian expert model is presented and applied to real case expert judgement data. The cornerstone in the models is the log-normal distribution, which is argued to be a satisfactory choice for modelling degree-of-belief type probability distributions with respect to the unknown parameters in a risk model. Expert judgements are qualified according to bias, dispersion, and dependency, which are treated differently in the classical and Bayesian approaches. The differences are pointed out and related to the application task. Differences in the results obtained from the different approaches, as applied to real case expert judgement data, are discussed. Also, the role of a degree-of-belief type probability in risk decision making is discussed.
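
    As a small, generic illustration of the kind of aggregation such models support (not the report's classical or Bayesian estimators), the sketch below treats each expert's judgement of an event frequency as a log-normal degree-of-belief distribution specified by a median and an error factor, and pools the experts with equal weights in log space; all numbers are invented.

      import numpy as np

      # Each expert states a median frequency (per year) and an error factor EF,
      # interpreted as a log-normal whose 5th/95th percentiles are median/EF and median*EF.
      experts = [
          {"median": 2e-4, "error_factor": 3.0},
          {"median": 5e-4, "error_factor": 10.0},
          {"median": 1e-4, "error_factor": 5.0},
      ]

      z95 = 1.645                                  # 95th percentile of the standard normal
      mus = np.array([np.log(e["median"]) for e in experts])
      sigmas = np.array([np.log(e["error_factor"]) / z95 for e in experts])

      # Equal-weight pooling in log space: the pooled mean is the average of the
      # experts' means, and the pooled variance is the moment-matched variance of
      # the equal-weight mixture (within-expert variance plus between-expert spread).
      mu_pool = mus.mean()
      sigma_pool = np.sqrt(np.mean(sigmas**2) + mus.var())

      print("pooled median frequency:", np.exp(mu_pool))
      print("pooled 90% interval:",
            np.exp(mu_pool - z95 * sigma_pool), "-", np.exp(mu_pool + z95 * sigma_pool))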

  6. Expert judgement models in quantitative risk assessment

    International Nuclear Information System (INIS)

    Rosqvist, T.; Tuominen, R.

    1999-01-01

    Expert judgement is a valuable source of information in risk management. Especially, risk-based decision making relies significantly on quantitative risk assessment, which requires numerical data describing the initiator event frequencies and conditional probabilities in the risk model. This data is seldom found in databases and has to be elicited from qualified experts. In this report, we discuss some modelling approaches to expert judgement in risk modelling. A classical and a Bayesian expert model is presented and applied to real case expert judgement data. The cornerstone in the models is the log-normal distribution, which is argued to be a satisfactory choice for modelling degree-of-belief type probability distributions with respect to the unknown parameters in a risk model. Expert judgements are qualified according to bias, dispersion, and dependency, which are treated differently in the classical and Bayesian approaches. The differences are pointed out and related to the application task. Differences in the results obtained from the different approaches, as applied to real case expert judgement data, are discussed. Also, the role of a degree-of-belief type probability in risk decision making is discussed

  7. Experts' meeting: Maintenance '83

    International Nuclear Information System (INIS)

    1983-01-01

    The brochure presents, in full wording, 20 papers read at the experts' meeting "Maintenance '83" in Wiesbaden. Most of the papers discuss reliability data (acquisition, evaluation, processing) from nearly all fields of industry. (RW) [de]

  8. The Pandora multi-algorithm approach to automated pattern recognition of cosmic-ray muon and neutrino events in the MicroBooNE detector

    Energy Technology Data Exchange (ETDEWEB)

    Acciarri, R.; Bagby, L.; Baller, B.; Carls, B.; Castillo Fernandez, R.; Cavanna, F.; Greenlee, H.; James, C.; Jostlein, H.; Ketchum, W.; Kirby, M.; Kobilarcik, T.; Lockwitz, S.; Lundberg, B.; Marchionni, A.; Moore, C.D.; Palamara, O.; Pavlovic, Z.; Raaf, J.L.; Schukraft, A.; Snider, E.L.; Spentzouris, P.; Strauss, T.; Toups, M.; Wolbers, S.; Yang, T.; Zeller, G.P. [Fermi National Accelerator Laboratory (FNAL), Batavia, IL (United States); Adams, C. [Harvard University, Cambridge, MA (United States); Yale University, New Haven, CT (United States); An, R.; Littlejohn, B.R.; Martinez Caicedo, D.A. [Illinois Institute of Technology (IIT), Chicago, IL (United States); Anthony, J.; Escudero Sanchez, L.; De Vries, J.J.; Marshall, J.; Smith, A.; Thomson, M. [University of Cambridge, Cambridge (United Kingdom); Asaadi, J. [University of Texas, Arlington, TX (United States); Auger, M.; Ereditato, A.; Goeldi, D.; Kreslo, I.; Lorca, D.; Luethi, M.; Rudolf von Rohr, C.; Sinclair, J.; Weber, M. [Universitaet Bern, Bern (Switzerland); Balasubramanian, S.; Fleming, B.T.; Gramellini, E.; Hackenburg, A.; Luo, X.; Russell, B.; Tufanli, S. [Yale University, New Haven, CT (United States); Barnes, C.; Mousseau, J.; Spitz, J. [University of Michigan, Ann Arbor, MI (United States); Barr, G.; Bass, M.; Del Tutto, M.; Laube, A.; Soleti, S.R.; De Pontseele, W.V. [University of Oxford, Oxford (United Kingdom); Bay, F. [TUBITAK Space Technologies Research Institute, Ankara (Turkey); Bishai, M.; Chen, H.; Joshi, J.; Kirby, B.; Li, Y.; Mooney, M.; Qian, X.; Viren, B.; Zhang, C. [Brookhaven National Laboratory (BNL), Upton, NY (United States); Blake, A.; Devitt, D.; Lister, A.; Nowak, J. [Lancaster University, Lancaster (United Kingdom); Bolton, T.; Horton-Smith, G.; Meddage, V.; Rafique, A. [Kansas State University (KSU), Manhattan, KS (United States); Camilleri, L.; Caratelli, D.; Crespo-Anadon, J.I.; Fadeeva, A.A.; Genty, V.; Kaleko, D.; Seligman, W.; Shaevitz, M.H. [Columbia University, New York, NY (United States); Church, E. [Pacific Northwest National Laboratory (PNNL), Richland, WA (United States); Cianci, D.; Karagiorgi, G. [Columbia University, New York, NY (United States); The University of Manchester (United Kingdom); Cohen, E.; Piasetzky, E. [Tel Aviv University, Tel Aviv (Israel); Collin, G.H.; Conrad, J.M.; Hen, O.; Hourlier, A.; Moon, J.; Wongjirad, T.; Yates, L. [Massachusetts Institute of Technology (MIT), Cambridge, MA (United States); Convery, M.; Eberly, B.; Rochester, L.; Tsai, Y.T.; Usher, T. [SLAC National Accelerator Laboratory, Menlo Park, CA (United States); Dytman, S.; Graf, N.; Jiang, L.; Naples, D.; Paolone, V.; Wickremasinghe, D.A. [University of Pittsburgh, Pittsburgh, PA (United States); Esquivel, J.; Hamilton, P.; Pulliam, G.; Soderberg, M. [Syracuse University, Syracuse, NY (United States); Foreman, W.; Ho, J.; Schmitz, D.W.; Zennamo, J. [University of Chicago, IL (United States); Furmanski, A.P.; Garcia-Gamez, D.; Hewes, J.; Hill, C.; Murrells, R.; Porzio, D.; Soeldner-Rembold, S.; Szelc, A.M. [The University of Manchester (United Kingdom); Garvey, G.T.; Huang, E.C.; Louis, W.C.; Mills, G.B.; De Water, R.G.V. [Los Alamos National Laboratory (LANL), Los Alamos, NM (United States); Gollapinni, S. [Kansas State University (KSU), Manhattan, KS (United States); University of Tennessee, Knoxville, TN (United States); and others

    2018-01-15

    The development and operation of liquid-argon time-projection chambers for neutrino physics has created a need for new approaches to pattern recognition in order to fully exploit the imaging capabilities offered by this technology. Whereas the human brain can excel at identifying features in the recorded events, it is a significant challenge to develop an automated, algorithmic solution. The Pandora Software Development Kit provides functionality to aid the design and implementation of pattern-recognition algorithms. It promotes the use of a multi-algorithm approach to pattern recognition, in which individual algorithms each address a specific task in a particular topology. Many tens of algorithms then carefully build up a picture of the event and, together, provide a robust automated pattern-recognition solution. This paper describes details of the chain of over one hundred Pandora algorithms and tools used to reconstruct cosmic-ray muon and neutrino events in the MicroBooNE detector. Metrics that assess the current pattern-recognition performance are presented for simulated MicroBooNE events, using a selection of final-state event topologies. (orig.)

  9. The Pandora multi-algorithm approach to automated pattern recognition of cosmic-ray muon and neutrino events in the MicroBooNE detector

    CERN Document Server

    Acciarri, R.; An, R.; Anthony, J.; Asaadi, J.; Auger, M.; Bagby, L.; Balasubramanian, S.; Baller, B.; Barnes, C.; Barr, G.; Bass, M.; Bay, F.; Bishai, M.; Blake, A.; Bolton, T.; Camilleri, L.; Caratelli, D.; Carls, B.; Castillo Fernandez, R.; Cavanna, F.; Chen, H.; Church, E.; Cianci, D.; Cohen, E.; Collin, G. H.; Conrad, J. M.; Convery, M.; Crespo-Anadón, J. I.; Del Tutto, M.; Devitt, D.; Dytman, S.; Eberly, B.; Ereditato, A.; Escudero Sanchez, L.; Esquivel, J.; Fadeeva, A. A.; Fleming, B. T.; Foreman, W.; Furmanski, A. P.; Garcia-Gamez, D.; Garvey, G. T.; Genty, V.; Goeldi, D.; Gollapinni, S.; Graf, N.; Gramellini, E.; Greenlee, H.; Grosso, R.; Guenette, R.; Hackenburg, A.; Hamilton, P.; Hen, O.; Hewes, J.; Hill, C.; Ho, J.; Horton-Smith, G.; Hourlier, A.; Huang, E.-C.; James, C.; Jan de Vries, J.; Jen, C.-M.; Jiang, L.; Johnson, R. A.; Joshi, J.; Jostlein, H.; Kaleko, D.; Karagiorgi, G.; Ketchum, W.; Kirby, B.; Kirby, M.; Kobilarcik, T.; Kreslo, I.; Laube, A.; Li, Y.; Lister, A.; Littlejohn, B. R.; Lockwitz, S.; Lorca, D.; Louis, W. C.; Luethi, M.; Lundberg, B.; Luo, X.; Marchionni, A.; Mariani, C.; Marshall, J.; Martinez Caicedo, D. A.; Meddage, V.; Miceli, T.; Mills, G. B.; Moon, J.; Mooney, M.; Moore, C. D.; Mousseau, J.; Murrells, R.; Naples, D.; Nienaber, P.; Nowak, J.; Palamara, O.; Paolone, V.; Papavassiliou, V.; Pate, S. F.; Pavlovic, Z.; Piasetzky, E.; Porzio, D.; Pulliam, G.; Qian, X.; Raaf, J. L.; Rafique, A.; Rochester, L.; Rudolf von Rohr, C.; Russell, B.; Schmitz, D. W.; Schukraft, A.; Seligman, W.; Shaevitz, M. H.; Sinclair, J.; Smith, A.; Snider, E. L.; Soderberg, M.; Söldner-Rembold, S.; Soleti, S. R.; Spentzouris, P.; Spitz, J.; St. John, J.; Strauss, T.; Szelc, A. M.; Tagg, N.; Terao, K.; Thomson, M.; Toups, M.; Tsai, Y.-T.; Tufanli, S.; Usher, T.; Van De Pontseele, W.; Van de Water, R. G.; Viren, B.; Weber, M.; Wickremasinghe, D. A.; Wolbers, S.; Wongjirad, T.; Woodruff, K.; Yang, T.; Yates, L.; Zeller, G. P.; Zennamo, J.; Zhang, C.

    2017-01-01

    The development and operation of Liquid-Argon Time-Projection Chambers for neutrino physics has created a need for new approaches to pattern recognition in order to fully exploit the imaging capabilities offered by this technology. Whereas the human brain can excel at identifying features in the recorded events, it is a significant challenge to develop an automated, algorithmic solution. The Pandora Software Development Kit provides functionality to aid the design and implementation of pattern-recognition algorithms. It promotes the use of a multi-algorithm approach to pattern recognition, in which individual algorithms each address a specific task in a particular topology. Many tens of algorithms then carefully build up a picture of the event and, together, provide a robust automated pattern-recognition solution. This paper describes details of the chain of over one hundred Pandora algorithms and tools used to reconstruct cosmic-ray muon and neutrino events in the MicroBooNE detector. Metrics that assess the...

  10. The Pandora multi-algorithm approach to automated pattern recognition of cosmic-ray muon and neutrino events in the MicroBooNE detector

    Science.gov (United States)

    Acciarri, R.; Adams, C.; An, R.; Anthony, J.; Asaadi, J.; Auger, M.; Bagby, L.; Balasubramanian, S.; Baller, B.; Barnes, C.; Barr, G.; Bass, M.; Bay, F.; Bishai, M.; Blake, A.; Bolton, T.; Camilleri, L.; Caratelli, D.; Carls, B.; Castillo Fernandez, R.; Cavanna, F.; Chen, H.; Church, E.; Cianci, D.; Cohen, E.; Collin, G. H.; Conrad, J. M.; Convery, M.; Crespo-Anadón, J. I.; Del Tutto, M.; Devitt, D.; Dytman, S.; Eberly, B.; Ereditato, A.; Escudero Sanchez, L.; Esquivel, J.; Fadeeva, A. A.; Fleming, B. T.; Foreman, W.; Furmanski, A. P.; Garcia-Gamez, D.; Garvey, G. T.; Genty, V.; Goeldi, D.; Gollapinni, S.; Graf, N.; Gramellini, E.; Greenlee, H.; Grosso, R.; Guenette, R.; Hackenburg, A.; Hamilton, P.; Hen, O.; Hewes, J.; Hill, C.; Ho, J.; Horton-Smith, G.; Hourlier, A.; Huang, E.-C.; James, C.; Jan de Vries, J.; Jen, C.-M.; Jiang, L.; Johnson, R. A.; Joshi, J.; Jostlein, H.; Kaleko, D.; Karagiorgi, G.; Ketchum, W.; Kirby, B.; Kirby, M.; Kobilarcik, T.; Kreslo, I.; Laube, A.; Li, Y.; Lister, A.; Littlejohn, B. R.; Lockwitz, S.; Lorca, D.; Louis, W. C.; Luethi, M.; Lundberg, B.; Luo, X.; Marchionni, A.; Mariani, C.; Marshall, J.; Martinez Caicedo, D. A.; Meddage, V.; Miceli, T.; Mills, G. B.; Moon, J.; Mooney, M.; Moore, C. D.; Mousseau, J.; Murrells, R.; Naples, D.; Nienaber, P.; Nowak, J.; Palamara, O.; Paolone, V.; Papavassiliou, V.; Pate, S. F.; Pavlovic, Z.; Piasetzky, E.; Porzio, D.; Pulliam, G.; Qian, X.; Raaf, J. L.; Rafique, A.; Rochester, L.; Rudolf von Rohr, C.; Russell, B.; Schmitz, D. W.; Schukraft, A.; Seligman, W.; Shaevitz, M. H.; Sinclair, J.; Smith, A.; Snider, E. L.; Soderberg, M.; Söldner-Rembold, S.; Soleti, S. R.; Spentzouris, P.; Spitz, J.; St. John, J.; Strauss, T.; Szelc, A. M.; Tagg, N.; Terao, K.; Thomson, M.; Toups, M.; Tsai, Y.-T.; Tufanli, S.; Usher, T.; Van De Pontseele, W.; Van de Water, R. G.; Viren, B.; Weber, M.; Wickremasinghe, D. A.; Wolbers, S.; Wongjirad, T.; Woodruff, K.; Yang, T.; Yates, L.; Zeller, G. P.; Zennamo, J.; Zhang, C.

    2018-01-01

    The development and operation of liquid-argon time-projection chambers for neutrino physics has created a need for new approaches to pattern recognition in order to fully exploit the imaging capabilities offered by this technology. Whereas the human brain can excel at identifying features in the recorded events, it is a significant challenge to develop an automated, algorithmic solution. The Pandora Software Development Kit provides functionality to aid the design and implementation of pattern-recognition algorithms. It promotes the use of a multi-algorithm approach to pattern recognition, in which individual algorithms each address a specific task in a particular topology. Many tens of algorithms then carefully build up a picture of the event and, together, provide a robust automated pattern-recognition solution. This paper describes details of the chain of over one hundred Pandora algorithms and tools used to reconstruct cosmic-ray muon and neutrino events in the MicroBooNE detector. Metrics that assess the current pattern-recognition performance are presented for simulated MicroBooNE events, using a selection of final-state event topologies.

  11. Expert judgement in performance assessment

    International Nuclear Information System (INIS)

    Wilmot, R.D.; Galson, D.A.

    2000-01-01

    This report is a pilot study that systematically describes the various types of expert judgement that are made throughout the development of a PA, and summarizes existing tools and practices for dealing with expert judgements. The report also includes recommendations for further work in the area of expert judgement. Expert judgements can be classified in a number of ways, including classification according to why the judgements are made and according to how the judgements are made. In terms of why judgements are made, there is a broad distinction between: Judgements concerning data that are made because alternatives are not feasible; and Judgements about the conduct of a PA that are made because there are no alternative approaches for making the decision. In the case of how judgements are made, the report distinguishes between non-elicited judgements made by individuals, non-elicited judgements made by groups, and elicited judgements made by individuals or groups. These types of judgement can generally be distinguished by the extent of the associated documentation, and hence their traceability. Tools for assessing judgements vary depending on the type of judgements being examined. Key tools are peer review, an appropriate QA regime, documentation, and elicitation. Dialogue with stakeholders is also identified as important in establishing whether judgements are justified in the context in which they are used. The PA process comprises a number of stages, from establishing the assessment context, through site selection and repository design, to scenario and model development and parametrisation. The report discusses how judgements are used in each of these stages, and identifies which of the tools and procedures for assessing judgements are most appropriate at each stage. Recommendations for further work include the conduct of a trial expert elicitation to gain experience in the advantages and disadvantages of this technique, the development of guidance for peer

  12. Expert judgement in performance assessment

    Energy Technology Data Exchange (ETDEWEB)

    Wilmot, R.D.; Galson, D.A. [Galson Sciences Ltd, Oakham (United Kingdom)

    2000-01-01

    This report is a pilot study that systematically describes the various types of expert judgement that are made throughout the development of a PA, and summarizes existing tools and practices for dealing with expert judgements. The report also includes recommendations for further work in the area of expert judgement. Expert judgements can be classified in a number of ways, including classification according to why the judgements are made and according to how the judgements are made. In terms of why judgements are made, there is a broad distinction between: Judgements concerning data that are made because alternatives are not feasible; and Judgements about the conduct of a PA that are made because there are no alternative approaches for making the decision. In the case of how judgements are made, the report distinguishes between non-elicited judgements made by individuals, non-elicited judgements made by groups, and elicited judgements made by individuals or groups. These types of judgement can generally be distinguished by the extent of the associated documentation, and hence their traceability. Tools for assessing judgements vary depending on the type of judgements being examined. Key tools are peer review, an appropriate QA regime, documentation, and elicitation. Dialogue with stakeholders is also identified as important in establishing whether judgements are justified in the context in which they are used. The PA process comprises a number of stages, from establishing the assessment context, through site selection and repository design, to scenario and model development and parametrisation. The report discusses how judgements are used in each of these stages, and identifies which of the tools and procedures for assessing judgements are most appropriate at each stage. Recommendations for further work include the conduct of a trial expert elicitation to gain experience in the advantages and disadvantages of this technique, the development of guidance for peer

  13. Algorithmic alternatives

    International Nuclear Information System (INIS)

    Creutz, M.

    1987-11-01

    A large variety of Monte Carlo algorithms are being used for lattice gauge simulations. For purely bosonic theories, present approaches are generally adequate; nevertheless, overrelaxation techniques promise savings by a factor of about three in computer time. For fermionic fields the situation is more difficult and less clear. Algorithms which involve an extrapolation to a vanishing step size are all quite closely related. Methods which do not require such an approximation tend to require computer time which grows as the square of the volume of the system. Recent developments combining global accept/reject stages with Langevin or microcanonical updates promise to reduce this growth to V^(4/3).

  14. Optimizing Thermal-Elastic Properties of C/C–SiC Composites Using a Hybrid Approach and PSO Algorithm

    Science.gov (United States)

    Xu, Yingjie; Gao, Tian

    2016-01-01

    Carbon fiber-reinforced multi-layered pyrocarbon–silicon carbide matrix (C/C–SiC) composites are widely used in aerospace structures. The complicated spatial architecture and material heterogeneity of C/C–SiC composites make tailoring their properties challenging. Thus, discovering the intrinsic relations between the properties and the microstructures, and subsequently optimizing the microstructures to obtain composites with the best performance, is the key for practical applications. The objective of this work is to optimize the thermal-elastic properties of unidirectional C/C–SiC composites by controlling the multi-layered matrix thicknesses. A hybrid approach based on micromechanical modeling and back propagation (BP) neural network is proposed to predict the thermal-elastic properties of composites. Then, a particle swarm optimization (PSO) algorithm is interfaced with this hybrid model to achieve the optimal design for minimizing the coefficient of thermal expansion (CTE) of composites with the constraint of elastic modulus. Numerical examples demonstrate the effectiveness of the proposed hybrid model and optimization method. PMID:28773343
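
    The optimization layer can be illustrated with a bare-bones particle swarm optimizer. In the sketch below the objective is a stand-in for the surrogate model's predicted coefficient of thermal expansion, and the bounds, swarm size and PSO coefficients are illustrative assumptions; the elastic-modulus constraint used in the paper is not included.

      import numpy as np

      def objective(x):
          # Placeholder for the surrogate model evaluation (e.g., predicted CTE).
          return np.sum((x - 0.3)**2) + 0.1 * np.sin(10 * x).sum()

      def pso(dim=3, n_particles=25, iters=100, lo=0.0, hi=1.0,
              w=0.7, c1=1.5, c2=1.5, seed=0):
          rng = np.random.default_rng(seed)
          x = rng.uniform(lo, hi, (n_particles, dim))        # positions (candidate designs)
          v = np.zeros_like(x)                               # velocities
          pbest, pbest_f = x.copy(), np.array([objective(p) for p in x])
          gbest = pbest[np.argmin(pbest_f)].copy()
          for _ in range(iters):
              r1, r2 = rng.random((2, n_particles, dim))
              v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
              x = np.clip(x + v, lo, hi)
              f = np.array([objective(p) for p in x])
              improved = f < pbest_f
              pbest[improved], pbest_f[improved] = x[improved], f[improved]
              gbest = pbest[np.argmin(pbest_f)].copy()
          return gbest, pbest_f.min()

      best_x, best_f = pso()
      print("best design:", best_x, "objective:", best_f)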

  15. Management of High-energy Avulsive Ballistic Facial Injury: A Review of the Literature and Algorithmic Approach.

    Science.gov (United States)

    Vaca, Elbert E; Bellamy, Justin L; Sinno, Sammy; Rodriguez, Eduardo D

    2018-03-01

    High-energy avulsive ballistic facial injuries pose one of the most significant reconstructive challenges. We conducted a systematic review of the literature to evaluate management trends and outcomes for the treatment of devastating ballistic facial trauma. Furthermore, we describe the senior author's early and definitive staged reconstructive approach to these challenging patients. A Medline search was conducted to include studies that described timing of treatment, interventions, complications, and/or aesthetic outcomes. Initial query revealed 41 articles, of which 17 articles met inclusion criteria. A single comparative study revealed that early versus delayed management resulted in a decreased incidence of soft-tissue contracture, required fewer total procedures, and resulted in shorter hospitalizations (level 3 evidence). Seven of the 9 studies (78%) that advocated delayed reconstruction were from the Middle East, whereas 5 of the 6 studies (83%) advocating immediate or early definitive reconstruction were from the United States. No study compared debridement timing directly in a head-to-head fashion, nor described flap selection based on defect characteristics. Existing literature suggests that early and aggressive intervention improves outcomes following avulsive ballistic injuries. Further comparative studies are needed; however, although evidence is limited, the senior author presents a 3-stage reconstructive algorithm advocating early and definitive reconstruction with aesthetic free tissue transfer in an attempt to optimize reconstructive outcomes of these complex injuries.

  16. DEFLATE Compression Algorithm Corrects for Overestimation of Phylogenetic Diversity by Grantham Approach to Single-Nucleotide Polymorphism Classification

    Directory of Open Access Journals (Sweden)

    Arran Schlosberg

    2014-05-01

    Full Text Available Improvements in speed and cost of genome sequencing are resulting in increasing numbers of novel non-synonymous single nucleotide polymorphisms (nsSNPs in genes known to be associated with disease. The large number of nsSNPs makes laboratory-based classification infeasible and familial co-segregation with disease is not always possible. In-silico methods for classification or triage are thus utilised. A popular tool based on multiple-species sequence alignments (MSAs and work by Grantham, Align-GVGD, has been shown to underestimate deleterious effects, particularly as sequence numbers increase. We utilised the DEFLATE compression algorithm to account for expected variation across a number of species. With the adjusted Grantham measure we derived a means of quantitatively clustering known neutral and deleterious nsSNPs from the same gene; this was then used to assign novel variants to the most appropriate cluster as a means of binary classification. Scaling of clusters allows for inter-gene comparison of variants through a single pathogenicity score. The approach improves upon the classification accuracy of Align-GVGD while correcting for sensitivity to large MSAs. Open-source code and a web server are made available at https://github.com/aschlosberg/CompressGV.
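
    The core idea can be illustrated with Python's zlib bindings to DEFLATE: the compressed size of a multiple-sequence-alignment column reflects how much the position naturally varies across species, and the extra bytes needed to encode a substituted residue give a crude, variation-adjusted cost. The columns and the variant below are invented toy data, the byte differences on such short strings are small, and this is not the published CompressGV implementation.

      import zlib

      def compressed_len(residues):
          """DEFLATE-compressed size (bytes) of an alignment column."""
          return len(zlib.compress("".join(residues).encode("ascii"), 9))

      def substitution_cost(column, variant_residue):
          """Extra compressed bytes introduced by appending the variant residue."""
          return compressed_len(list(column) + [variant_residue]) - compressed_len(column)

      conserved = ["L"] * 30                             # fully conserved position
      variable = list("LIVMAFYWTSLIVCMATFYWSLIVMAFTYW")  # naturally variable position

      print("conserved column compressed size:", compressed_len(conserved))
      print("variable column compressed size: ", compressed_len(variable))
      print("cost of 'P' at the conserved position:", substitution_cost(conserved, "P"))
      print("cost of 'P' at the variable position: ", substitution_cost(variable, "P"))

    With realistic alignment depths the conserved column typically compresses much better than the variable one, which is the intuition behind correcting a Grantham-style deviation score for the variation that the species alignment already exhibits.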

  17. Voxel-based morphometric analysis in hypothyroidism using diffeomorphic anatomic registration via an exponentiated lie algebra algorithm approach.

    Science.gov (United States)

    Singh, S; Modi, S; Bagga, D; Kaur, P; Shankar, L R; Khushu, S

    2013-03-01

    The present study aimed to investigate whether brain morphological differences exist between adult hypothyroid subjects and age-matched controls using voxel-based morphometry (VBM) with diffeomorphic anatomic registration via an exponentiated lie algebra algorithm (DARTEL) approach. High-resolution structural magnetic resonance images were taken in ten healthy controls and ten hypothyroid subjects. The analysis was conducted using statistical parametric mapping. The VBM study revealed a reduction in grey matter volume in the left postcentral gyrus and cerebellum of hypothyroid subjects compared to controls. A significant reduction in white matter volume was also found in the cerebellum, right inferior and middle frontal gyrus, right precentral gyrus, right inferior occipital gyrus and right temporal gyrus of hypothyroid patients compared to healthy controls. Moreover, no meaningful cluster for greater grey or white matter volume was obtained in hypothyroid subjects compared to controls. Our study is the first VBM study of hypothyroidism in an adult population and suggests that, compared to controls, this disorder is associated with differences in brain morphology in areas corresponding to known functional deficits in attention, language, motor speed, visuospatial processing and memory in hypothyroidism. © 2012 British Society for Neuroendocrinology.

  18. A genetic algorithm approach for evaluation of optical functions of very thin tantalum pentoxide films on Si substrate

    International Nuclear Information System (INIS)

    Sharlandjiev, P S; Nazarova, D I

    2013-01-01

    The optical characteristics of tantalum pentoxide films, deposited on a Si(100) substrate by reactive sputtering, are studied. These films are investigated as high-kappa materials for the needs of nano-electronics, i.e. the design of dynamic random access memories, etc. One problem in their implementation is that metal oxides are thermodynamically unstable with Si and an interfacial layer is formed between the oxide film and the silicon substrate during the deposition process. Herein, the focus is on the optical properties of that interfacial layer, which are studied by spectral photometric measurements. The evaluation of the optical parameters of the structure is carried out with a genetic algorithm approach. The spectral range of the evaluation covers deep UV to NIR. The equivalent physical thickness (2.5 nm) and the equivalent refractive index of the interfacial layer are estimated from 236 to 750 nm, as well as the thickness of the tantalum pentoxide film (9.5 nm). (paper)

  19. Management of High-energy Avulsive Ballistic Facial Injury: A Review of the Literature and Algorithmic Approach

    Science.gov (United States)

    Vaca, Elbert E.; Bellamy, Justin L.; Sinno, Sammy

    2018-01-01

    Background: High-energy avulsive ballistic facial injuries pose one of the most significant reconstructive challenges. We conducted a systematic review of the literature to evaluate management trends and outcomes for the treatment of devastating ballistic facial trauma. Furthermore, we describe the senior author’s early and definitive staged reconstructive approach to these challenging patients. Methods: A Medline search was conducted to include studies that described timing of treatment, interventions, complications, and/or aesthetic outcomes. Results: Initial query revealed 41 articles, of which 17 articles met inclusion criteria. A single comparative study revealed that early versus delayed management resulted in a decreased incidence of soft-tissue contracture, required fewer total procedures, and resulted in shorter hospitalizations (level 3 evidence). Seven of the 9 studies (78%) that advocated delayed reconstruction were from the Middle East, whereas 5 of the 6 studies (83%) advocating immediate or early definitive reconstruction were from the United States. No study compared debridement timing directly in a head-to-head fashion, nor described flap selection based on defect characteristics. Conclusions: Existing literature suggests that early and aggressive intervention improves outcomes following avulsive ballistic injuries. Further comparative studies are needed; however, although evidence is limited, the senior author presents a 3-stage reconstructive algorithm advocating early and definitive reconstruction with aesthetic free tissue transfer in an attempt to optimize reconstructive outcomes of these complex injuries. PMID:29707453

  20. From Nonlinear Optimization to Convex Optimization through Firefly Algorithm and Indirect Approach with Applications to CAD/CAM

    Directory of Open Access Journals (Sweden)

    Akemi Gálvez

    2013-01-01

    Full Text Available Fitting spline curves to data points is a very important issue in many applied fields. It is also challenging, because these curves typically depend on many continuous variables in a highly interrelated nonlinear way. In general, it is not possible to compute these parameters analytically, so the problem is formulated as a continuous nonlinear optimization problem, for which traditional optimization techniques usually fail. This paper presents a new bioinspired method to tackle this issue. In this method, optimization is performed through a combination of two techniques. Firstly, we apply the indirect approach to the knots, in which they are not initially the subject of optimization but precomputed with a coarse approximation scheme. Secondly, a powerful bioinspired metaheuristic technique, the firefly algorithm, is applied to optimization of data parameterization; then, the knot vector is refined by using De Boor’s method, thus yielding a better approximation to the optimal knot vector. This scheme converts the original nonlinear continuous optimization problem into a convex optimization problem, solved by singular value decomposition. Our method is applied to some illustrative real-world examples from the CAD/CAM field. Our experimental results show that the proposed scheme can solve the original continuous nonlinear optimization problem very efficiently.
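
    A compact firefly algorithm for continuous minimization is sketched below to make the metaheuristic step concrete; the objective is a generic quadratic bowl rather than the spline-fitting error, and the swarm size, attractiveness and randomness parameters are conventional illustrative choices.

      import numpy as np

      def firefly_minimize(f, dim, n=20, iters=100, lo=0.0, hi=1.0,
                           alpha=0.2, beta0=1.0, gamma=1.0, seed=0):
          rng = np.random.default_rng(seed)
          x = rng.uniform(lo, hi, (n, dim))
          intensity = np.array([f(p) for p in x])          # lower objective = brighter
          for _ in range(iters):
              for i in range(n):
                  for j in range(n):
                      if intensity[j] < intensity[i]:      # move firefly i toward brighter j
                          r2 = np.sum((x[i] - x[j])**2)
                          beta = beta0 * np.exp(-gamma * r2)
                          x[i] = np.clip(
                              x[i] + beta * (x[j] - x[i])
                                   + alpha * (rng.random(dim) - 0.5),
                              lo, hi)
                          intensity[i] = f(x[i])
              alpha *= 0.97                                # slowly reduce randomness
          best = np.argmin(intensity)
          return x[best], intensity[best]

      # Example: minimize a simple quadratic bowl in [0, 1]^4.
      best_x, best_f = firefly_minimize(lambda p: np.sum((p - 0.42)**2), dim=4)
      print("best point:", best_x, "objective:", best_f)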

  1. An Approach to Diagnosis and Endovascular Treatment of Refractory Ascites in Liver Transplant: A Pictorial Essay and Clinical Practice Algorithm.

    Science.gov (United States)

    Pereira, Keith; Salsamendi, Jason; Fan, Ji

    2015-10-01

    Recipients of liver transplant are surviving longer as both the surgical procedure and postsurgical care have improved. Despite improvements, serious complications from the procedure remain that significantly affect patient outcome and may result in retransplant. Refractory ascites is one complication, occurring in about 5.6% of transplant recipients. Management of refractory ascites after liver transplant presents a challenge to the multidisciplinary team caring for these patients. We discuss approaches to the diagnosis and treatment of refractory ascites after liver transplant, based on a literature review, with a primary focus on vascular causes. These approaches are illustrated by case examples highlighting our experiences at an academic tertiary medical center. We propose a clinical practice algorithm for optimal endovascular treatment of refractory ascites after liver transplant. The cornerstone of refractory ascites care is diagnosis and treatment of the cause. Vascular causes are not infrequently encountered and, if not treated early, are associated with graft loss and high morbidity and mortality and are major indications for retransplant. For patients with recurrent disease or graft rejection needing large volume paracentesis, the use of a transjugular intrahepatic portosystemic shunt may serve as a bridge to more definitive treatment (retransplant), although it may not be as effective for managing ascites as splenic artery embolization, arguably underused, which is emerging as a potential alternative treatment option. A multidisciplinary strategy for the diagnosis and care of patients with refractory ascites after liver transplant is crucial, with endovascular treatment playing an important role. The aim is for this document to serve as a concise and informative reference to be used by those who may care for patients with this rare yet serious diagnosis.

  2. Adaptive capture of expert knowledge

    Energy Technology Data Exchange (ETDEWEB)

    Barrett, C.L.; Jones, R.D. [Los Alamos National Lab., NM (United States); Hand, Un Kyong [Los Alamos National Lab., NM (United States)]|[US Navy (United States)

    1995-05-01

    A method is introduced that can directly acquire knowledge-engineered, rule-based logic in an adaptive network. This adaptive representation of the rule system can then replace the rule system in simulated intelligent agents and thereby permit further performance-based adaptation of the rule system. The approach described provides both weight-fitting network adaptation and potentially powerful rule mutation and selection mechanisms. Nonlinear terms are generated implicitly in the mutation process through the emergent interaction of multiple linear terms. By this method it is possible to acquire nonlinear relations that exist in the training data without the addition of hidden layers or the imposition of explicit nonlinear terms in the network. We smoothed and captured a set of expert rules with an adaptive network. The motivation for this was to (1) realize a speed advantage over traditional rule-based simulations; (2) have variability in the intelligent objects not possible with rule-based systems but provided by adaptive systems; and (3) maintain the understandability of rule-based simulations. A set of binary rules was smoothed and converted into a simple set of arithmetic statements, where continuous, non-binary rules are permitted. A neural network, called the expert network, was developed to capture this rule set, which it was able to do with zero error. The expert network is also capable of learning a nonmonotonic term without a hidden layer. The trained network in feedforward operation is fast running, compact, and traceable to the rule base.
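
    A toy version of the general idea (not the report's "expert network" architecture) is easy to state: enumerate the truth table of a binary expert rule, train a small adaptive network to reproduce it, and query the network in place of the rule interpreter. The rule, inputs and network below are invented for the illustration.

      import itertools
      import numpy as np
      from sklearn.neural_network import MLPClassifier

      def expert_rule(temp_high, pressure_low, sensor_fault):
          # Example rule: alert on (high temperature AND low pressure) OR a sensor fault.
          return int((temp_high and pressure_low) or sensor_fault)

      X = np.array(list(itertools.product([0, 1], repeat=3)))   # full truth table
      y = np.array([expert_rule(*row) for row in X])

      net = MLPClassifier(hidden_layer_sizes=(4,), solver="lbfgs",
                          max_iter=5000, random_state=0)
      net.fit(X, y)
      print("rule reproduced exactly:", bool((net.predict(X) == y).all()))
      print("network output for (temp_high=1, pressure_low=0, sensor_fault=1):",
            int(net.predict([[1, 0, 1]])[0]))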

  3. Expert witness and Jungian archetypes.

    Science.gov (United States)

    Lallave, Juan Antonio; Gutheil, Thomas Gordon

    2012-01-01

    Jung's theories of archetype, shadow, and the personal and collective unconscious provide a postmodern framework in which to consider the role of the expert witness in judicial proceedings. Archetypal themes, motifs, and influences help to illuminate the shadow of the judicial system and projections and behaviors among the cast of the court in pursuing justice. This article speaks to archetypal influences and dialectical tensions encountered by the expert witness in this judicial drama. The archetype of Justice is born from the human need for order and relational fairness in a world of chaos. The persona of justice is the promise of truth in the drama. The shadow of justice is untruth, the need to win by any means. The dynamics of the trickster archetype serve and promote injustice. These influences are examined by means of a case example. This approach will deepen understanding of court proceedings and the role of the expert witness in the heroic quest for justice. Copyright © 2012 Elsevier Ltd. All rights reserved.

  4. Expert System for ASIC Imaging

    Science.gov (United States)

    Gupta, Shri N.; Arshak, Khalil I.; McDonnell, Pearse; Boyce, Conor; Duggan, Andrew

    1989-07-01

    With the developments in the techniques of artificial intelligence over the last few years, building advisory, scheduling and similar classes of applications has become very convenient using tools such as PROLOG. In this paper an expert system is described which helps lithographers and process engineers in several ways. The methodology used is to model each work station according to its input, output and control parameters, combine these work stations in a logical sequence based on past experience, and work out a process schedule for a job. In addition, all the requirements relating to a particular job's parameters are converted into decision rules. For example, the exposure and develop times would differ for wafers with different feature sizes. This expert system has been written in Turbo Prolog. By building up a large number of rules, one can tune the program to any facility and use it for applications as diverse as advisory help, troubleshooting, etc. Leitner (1) has described an advisory expert system that is being used at National Semiconductor. That system is quite different from the one reported in the present paper: for one, the approach is quite different; for another, the present system stresses job flow and process.

  5. A modified gravitational search algorithm based on a non-dominated sorting genetic approach for hydro-thermal-wind economic emission dispatching

    International Nuclear Information System (INIS)

    Chen, Fang; Zhou, Jianzhong; Wang, Chao; Li, Chunlong; Lu, Peng

    2017-01-01

    Wind power is a type of clean and renewable energy, and reasonable utilization of wind power is beneficial to environmental protection and economic development. Therefore, a short-term hydro-thermal-wind economic emission dispatching (SHTW-EED) problem is presented in this paper. The proposed problem aims to distribute the load among hydro, thermal and wind power units to simultaneously minimize economic cost and pollutant emission. To solve the SHTW-EED problem with complex constraints, a modified gravitational search algorithm based on the non-dominated sorting genetic algorithm-III (MGSA-NSGA-III) is proposed. In the proposed MGSA-NSGA-III, a non-dominated sorting approach, a reference-point-based selection mechanism and a chaotic mutation strategy are applied to improve the evolutionary process of the original gravitational search algorithm (GSA) and maintain the distribution diversity of Pareto optimal solutions. Moreover, a parallel computing strategy is introduced to improve the computational efficiency. Finally, the proposed MGSA-NSGA-III is applied to a typical hydro-thermal-wind system to verify its feasibility and effectiveness. The simulation results indicate that the proposed algorithm can obtain low economic cost and small pollutant emission when dealing with the SHTW-EED problem. - Highlights: • A hybrid algorithm is proposed to handle hydro-thermal-wind power dispatching. • Several improvement strategies are applied to the algorithm. • A parallel computing strategy is applied to improve computational efficiency. • Two cases are analyzed to verify the efficiency of the optimization model.
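
    As an illustration only, the sketch below shows two of the ingredients named in the abstract in simplified form: the Pareto dominance test underlying non-dominated sorting, and a logistic-map chaotic mutation. The exact operators and parameter settings of MGSA-NSGA-III are not reproduced here.

        # Hypothetical sketch: Pareto dominance (minimization) and a
        # logistic-map chaotic mutation of a bounded decision vector.
        def dominates(f_a, f_b):
            """True if objective vector f_a Pareto-dominates f_b."""
            return all(a <= b for a, b in zip(f_a, f_b)) and \
                   any(a < b for a, b in zip(f_a, f_b))

        def chaotic_mutation(x, lower, upper, chaos=0.7, strength=0.1):
            """Perturb each variable with a logistic-map chaotic sequence."""
            out = []
            for xi, lo, hi in zip(x, lower, upper):
                chaos = 4.0 * chaos * (1.0 - chaos)       # logistic map stays in (0, 1)
                xi = xi + strength * (hi - lo) * (2.0 * chaos - 1.0)
                out.append(min(max(xi, lo), hi))          # clip to the bounds
            return out

        print(dominates((1.0, 2.0), (1.5, 2.0)))          # True
        print(chaotic_mutation([0.5, 0.5], [0.0, 0.0], [1.0, 1.0]))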

  6. A comparison of genetic algorithm and artificial bee colony approaches in solving blocking hybrid flowshop scheduling problem with sequence dependent setup/changeover times

    Directory of Open Access Journals (Sweden)

    Pongpan Nakkaew

    2016-06-01

    Full Text Available In manufacturing processes where efficiency is crucial to remaining competitive, the flowshop is a common configuration in which machines are arranged in series and products pass through the stages one by one. In certain production processes, the machines are configured so that each production stage may contain multiple processing units in parallel (a hybrid flowshop). Moreover, along with precedence conditions, sequence-dependent setup times may exist. Finally, when there is no buffer, a machine is said to be blocked if the next stage that is to handle its output is occupied. For such an NP-hard problem, referred to as the Blocking Hybrid Flowshop Scheduling Problem with Sequence-Dependent Setup/Changeover Times, it is usually not possible to find the exact optimal solution for objectives such as minimizing the overall production time. Thus, it is usually solved by approximate algorithms such as metaheuristics. In this paper, we comparatively investigate the effectiveness of two approaches: a Genetic Algorithm (GA) and an Artificial Bee Colony (ABC) algorithm. GA is inspired by the process of natural selection. ABC, in a similar manner, resembles the way different types of bees perform specific functions and work collectively to find food through division of labor. Additionally, we apply an algorithm that improves the GA and ABC so that they can exploit the parallel processing resources of modern multi-core processors while eliminating the need to screen for the optimal parameters of both algorithms in advance.
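
    For orientation, a compact, hypothetical sketch of the GA side of such a comparison is shown below: a permutation-coded genetic algorithm with order crossover and swap mutation. The makespan routine is a plain flowshop stand-in; a blocking hybrid flowshop evaluation with sequence-dependent setups would replace it, and all parameter values are illustrative.

        # Hypothetical sketch: permutation GA for flowshop-type sequencing.
        import random

        def makespan(perm, proc_times):
            # Stand-in evaluation: simple permutation flowshop, no blocking/setups.
            m = len(proc_times[0])
            finish = [0.0] * m
            for job in perm:
                for k in range(m):
                    start = max(finish[k], finish[k - 1] if k else 0.0)
                    finish[k] = start + proc_times[job][k]
            return finish[-1]

        def order_crossover(p1, p2):
            n = len(p1)
            i, j = sorted(random.sample(range(n), 2))
            child = [None] * n
            child[i:j] = p1[i:j]
            rest = [g for g in p2 if g not in child]
            for k in list(range(j, n)) + list(range(i)):
                child[k] = rest.pop(0)
            return child

        def ga(proc_times, pop_size=30, generations=200):
            n = len(proc_times)
            pop = [random.sample(range(n), n) for _ in range(pop_size)]
            for _ in range(generations):
                pop.sort(key=lambda p: makespan(p, proc_times))
                elite = pop[: pop_size // 2]
                children = []
                while len(elite) + len(children) < pop_size:
                    a, b = random.sample(elite, 2)
                    c = order_crossover(a, b)
                    if random.random() < 0.2:             # swap mutation
                        x, y = random.sample(range(n), 2)
                        c[x], c[y] = c[y], c[x]
                    children.append(c)
                pop = elite + children
            return min(pop, key=lambda p: makespan(p, proc_times))

        jobs = [[3, 2, 4], [1, 5, 2], [4, 1, 3], [2, 3, 1]]   # 4 jobs x 3 machines
        best = ga(jobs)
        print(best, makespan(best, jobs))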

  7. Expert Judgement Assessment & SCENT Ontological Analysis

    Directory of Open Access Journals (Sweden)

    NICHERSU Iulian

    2018-05-01

    Full Text Available This study aims to provide insights into the starting point of the Horizon 2020 EC-funded project SCENT (Smart Toolbox for Engaging Citizens into a People-Centric Observation Web) Citizen Observatory (CO) in terms of existing infrastructure, existing monitoring systems, and the existing legal and administrative framework related to flood monitoring and management in the area of the Danube Delta. The methodology is based on expert judgement and ontological analysis, using information collected from the identified end-users of the SCENT toolbox. In this type of analysis, the stages of flood monitoring and management in which the experts are involved are detailed through an Expert Judgement Assessment analysis. The latter is complemented by a set of Key Performance Indicators that the stakeholders have assessed and/or proposed for the evaluation of the SCENT demonstrations, for the impact of the project, and for SCENT toolbox performance and usefulness. The second part of the study presents an analysis that attempts to map the interactions between the different organizations and components of the existing monitoring systems in the Danube Delta case study. Expert Judgement (EJ) allows information to be gained from specialists in a specific field through a consultation process with one or more experts who have experience in similar and complementary topics. Expert judgement, expert estimates, and expert opinion are all terms that refer to the content of the problem; estimates, outcomes, predictions, uncertainties, and their corresponding assumptions and conditions are all examples of expert judgement. Expert Judgement is affected by the process used to gather it. The ontological analysis complements this study by organizing and presenting the connections behind the flood management and land use systems in the three phases of a flood event.

  8. Design-order, non-conformal low-Mach fluid algorithms using a hybrid CVFEM/DG approach

    Science.gov (United States)

    Domino, Stefan P.

    2018-04-01

    A hybrid, design-order sliding mesh algorithm, which uses a control volume finite element method (CVFEM), in conjunction with a discontinuous Galerkin (DG) approach at non-conformal interfaces, is outlined in the context of a low-Mach fluid dynamics equation set. This novel hybrid DG approach is also demonstrated to be compatible with a classic edge-based vertex centered (EBVC) scheme. For the CVFEM, element polynomial, P, promotion is used to extend the low-order P = 1 CVFEM method to higher-order, i.e., P = 2. An equal-order low-Mach pressure-stabilized methodology, with emphasis on the non-conformal interface boundary condition, is presented. A fully implicit matrix solver approach that accounts for the full stencil connectivity across the non-conformal interface is employed. A complete suite of formal verification studies using the method of manufactured solutions (MMS) is performed to verify the order of accuracy of the underlying methodology. The chosen suite of analytical verification cases range from a simple steady diffusion system to a traveling viscous vortex across mixed-order non-conformal interfaces. Results from all verification studies demonstrate either second- or third-order spatial accuracy and, for transient solutions, second-order temporal accuracy. Significant accuracy gains in manufactured solution error norms are noted even with modest promotion of the underlying polynomial order. The paper also demonstrates the CVFEM/DG methodology on two production-like simulation cases that include an inner block subjected to solid rotation, i.e., each of the simulations include a sliding mesh, non-conformal interface. The first production case presented is a turbulent flow past a high-rate-of-rotation cube (Re, 4000; RPM, 3600) on like and mixed-order polynomial interfaces. The final simulation case is a full-scale Vestas V27 225 kW wind turbine (tower and nacelle omitted) in which a hybrid topology, low-order mesh is used. Both production simulations

  9. Waste Load Allocation Based on Total Maximum Daily Load Approach Using the Charged System Search (CSS Algorithm

    Directory of Open Access Journals (Sweden)

    Elham Faraji

    2016-03-01

    Full Text Available In this research, the capability of a charged system search algorithm (CSS) in handling water management optimization problems is investigated. First, two complex mathematical problems are solved by CSS and the results are compared with those obtained from other metaheuristic algorithms. In the last step, the optimization model developed by the CSS algorithm is applied to the waste load allocation in rivers based on the total maximum daily load (TMDL) concept. The results are presented in tables and figures for easy comparison. The study indicates the superiority of the CSS algorithm in terms of its speed and performance over the other metaheuristic algorithms, while its precision in water management optimization problems is verified.
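
    As background, a simplified, hypothetical sketch of one CSS move is shown below: agents carry charges derived from their fitness and are attracted by better agents with a Coulomb-like force. The exact force law, attraction probabilities, and parameters of the published CSS are simplified here, and the test function is a toy example.

        # Hypothetical, simplified sketch of one Charged System Search (CSS) move.
        import numpy as np

        def css_step(positions, fitness, step=0.5, eps=1e-9):
            """positions: (n, d) array; fitness: length-n array (minimization)."""
            worst, best = fitness.max(), fitness.min()
            charges = (worst - fitness) / (worst - best + eps)   # better => larger charge
            new_positions = positions.copy()
            for j in range(len(positions)):
                force = np.zeros(positions.shape[1])
                for i in range(len(positions)):
                    if i == j:
                        continue
                    diff = positions[i] - positions[j]
                    r = np.linalg.norm(diff) + eps
                    if fitness[i] < fitness[j]:                  # attracted by better agents
                        force += charges[i] / r**2 * diff / r
                new_positions[j] = positions[j] + step * np.random.rand() * force
            return new_positions

        # Toy use on a 2-D sphere function
        pos = np.random.uniform(-5, 5, size=(6, 2))
        fit = (pos**2).sum(axis=1)
        pos = css_step(pos, fit)
        print(pos)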

  10. Waste disposal experts meet

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1959-01-15

    Problems connected with the disposal into the sea of radioactive wastes from peaceful uses of atomic energy are being examined by a panel of experts, convened by the International Atomic Energy Agency. These experts from eight different countries held a first meeting at IAEA headquarters in Vienna from 4-9 December 1958, under the chairmanship of Dr. Harry Brynielsson, Director General of the Swedish Atomic Energy Company. The countries represented are: Canada, Czechoslovakia, France, Japan, Netherlands, United Kingdom and United States. The group will meet again in 1959. (author)

  11. Developing Expert Tools for the LHC

    CERN Document Server

    AUTHOR|(CDS)2160780; Timkó, Helga

    2017-10-12

    This thesis describes software tools developed for automated, precision setting-up of the low-level radio frequency (LLRF) loops, which will help expert users to have better control and faster setting-up of the radio-frequency (RF) system of the Large Hadron Collider (LHC). The aim was to completely redesign the software architecture, to add new features, to improve certain algorithms, and to increase the automation.

  12. Price competition between an expert and a non-expert

    OpenAIRE

    Bouckaert, J.M.C.; Degryse, H.A.

    1998-01-01

    This paper characterizes price competition between an expert and a non-expert. In contrast with the expert, the non-expert’s repair technology is not always successful. Consumers visit the expert after experiencing an unsuccessful match at the non-expert. This re-entry affects the behaviour of both sellers. For low enough probability of successful repair at the non-expert, all consumers first visit the non-expert, and a ‘timid-pricing’ equilibrium results. If the non-expert’s repair technolog...

  13. GOLD predictivity mapping in French Guiana using an expert-guided data-driven approach based on a regional-scale GIS

    Science.gov (United States)

    Cassard, Daniel; Billa, Mario; Lambert, Alain; Picot, Jean-Claude; Husson, Yves

    2008-05-01

    A realistic estimation of gold mining in French Guiana requires including the numerous illegal gold-washing activities in predictivity mapping. Combining a classical approach, based on the algebraic method of Knox-Robinson and Groves, with innovative processing of grid-type geochemical and radiometric data, as well as cluster analysis techniques, provides a better understanding of the structure of the studied mineralized areas.

  14. An international consensus algorithm for management of chronic postoperative inguinal pain.

    Science.gov (United States)

    Lange, J F M; Kaufmann, R; Wijsmuller, A R; Pierie, J P E N; Ploeg, R J; Chen, D C; Amid, P K

    2015-02-01

    Tension-free mesh repair of inguinal hernia has led to uniformly low recurrence rates. Morbidity associated with this operation is mainly related to chronic pain. No consensus guidelines exist for the management of this condition. The goal of this study is to design an expert-based algorithm for diagnostic and therapeutic management of chronic inguinal postoperative pain (CPIP). A group of surgeons considered experts on inguinal hernia surgery was solicited to develop the algorithm. Consensus regarding each step of an algorithm proposed by the authors was sought by means of the Delphi method leading to a revised expert-based algorithm. With the input of 28 international experts, an algorithm for a stepwise approach for management of CPIP was created. 26 participants accepted the final algorithm as a consensus model. One participant could not agree with the final concept. One expert did not respond during the final phase. There is a need for guidelines with regard to management of CPIP. This algorithm can serve as a guide with regard to the diagnosis, management, and treatment of these patients and improve clinical outcomes. If an expectative phase of a few months has passed without any amelioration of CPIP, a multidisciplinary approach is indicated and a pain management team should be consulted. Pharmacologic, behavioral, and interventional modalities including nerve blocks are essential. If conservative measures fail and surgery is considered, triple neurectomy, correction for recurrence with or without neurectomy, and meshoma removal if indicated should be performed. Surgeons less experienced with remedial operations for CPIP should not hesitate to refer their patients to dedicated hernia surgeons.

  15. Expert Systems Research.

    Science.gov (United States)

    Duda, Richard O.; Shortliffe, Edward H.

    1983-01-01

    Discusses a class of artificial intelligence computer programs (often called "expert systems" because they address problems normally thought to require human specialists for their solution) intended to serve as consultants for decision making. Also discusses accomplishments (including information systematization in medical diagnosis and…

  16. Computers Simulate Human Experts.

    Science.gov (United States)

    Roberts, Steven K.

    1983-01-01

    Discusses recent progress in artificial intelligence in such narrowly defined areas as medical and electronic diagnosis. Also discusses use of expert systems, man-machine communication problems, novel programing environments (including comments on LISP and LISP machines), and types of knowledge used (factual, heuristic, and meta-knowledge). (JN)

  17. Expert Cold Structure Development

    Science.gov (United States)

    Atkins, T.; Demuysere, P.

    2011-05-01

    The EXPERT Program is funded by ESA. The objective of the EXPERT mission is to perform a sub-orbital flight during which measurements of critical aero-thermodynamic phenomena will be obtained using state-of-the-art instrumentation. As part of the EXPERT Flight Segment, responsibility for the Cold Structure design, manufacturing and validation was entrusted to the Belgian industrial team SONACA/SABCA. The EXPERT Cold Structure includes the Launcher Adapter, the Bottom Panel, the Upper Panel, two Cross Panels and the Parachute Bay. An additional Launcher Adapter was manufactured for the separation tests. The selected assembly definition and manufacturing technologies (machined parts and sandwich panels) were dictated classically by mass and stiffness, but also by the CoG location and the sensitive separation interface. Used as a support for the various on-board equipment, the Cold Structure is fixed to, but thermally uncoupled from, the PM 1000 thermal shield. It is protected on its bottom panel by a thermal blanket. As it is a protoflight, analysis was the main verification tool. Low-level stiffness and modal analysis tests have also been performed on the Cold Structure equipped with its ballast. These allowed its qualification to be completed and prepared SONACA/SABCA support for the system dynamic tests foreseen in 2011. The structure was finally coated with a thermal-control black paint and delivered on time to Thales Alenia Space-Italy at the end of March 2011.

  18. Comparing the ISO-recommended and the cumulative data-reduction algorithms in S-on-1 laser damage test by a reverse approach method

    Science.gov (United States)

    Zorila, Alexandru; Stratan, Aurel; Nemes, George

    2018-01-01

    We compare the ISO-recommended (the standard) data-reduction algorithm used to determine the surface laser-induced damage threshold of optical materials by the S-on-1 test with two newly suggested algorithms, both named "cumulative" algorithms/methods, a regular one and a limit-case one, intended to perform in some respects better than the standard one. To avoid additional errors due to real experiments, a simulated test is performed, named the reverse approach. This approach simulates the real damage experiments, by generating artificial test-data of damaged and non-damaged sites, based on an assumed, known damage threshold fluence of the target and on a given probability distribution function to induce the damage. In this work, a database of 12 sets of test-data containing both damaged and non-damaged sites was generated by using four different reverse techniques and by assuming three specific damage probability distribution functions. The same value for the threshold fluence was assumed, and a Gaussian fluence distribution on each irradiated site was considered, as usual for the S-on-1 test. Each of the test-data was independently processed by the standard and by the two cumulative data-reduction algorithms, the resulting fitted probability distributions were compared with the initially assumed probability distribution functions, and the quantities used to compare these algorithms were determined. These quantities characterize the accuracy and the precision in determining the damage threshold and the goodness of fit of the damage probability curves. The results indicate that the accuracy in determining the absolute damage threshold is best for the ISO-recommended method, the precision is best for the limit-case of the cumulative method, and the goodness of fit estimator (adjusted R-squared) is almost the same for all three algorithms.
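
    By way of illustration, a minimal Python sketch of the reverse approach is given below: damage outcomes are simulated from an assumed threshold and a logistic damage-probability law, and a probability curve is fitted back to the synthetic data. The distribution function, fluence range, and site count are illustrative, not those of the study.

        # Hypothetical sketch of the "reverse approach": simulate damage data from
        # an assumed threshold, then recover the threshold by curve fitting.
        import numpy as np
        from scipy.optimize import curve_fit

        rng = np.random.default_rng(1)
        true_threshold, width = 4.0, 0.6                      # assumed values (J/cm^2)

        def damage_probability(fluence, threshold, width):
            return 1.0 / (1.0 + np.exp(-(fluence - threshold) / width))

        fluences = rng.uniform(2.0, 6.0, 400)                 # irradiated sites
        damaged = rng.random(400) < damage_probability(fluences, true_threshold, width)

        params, _ = curve_fit(damage_probability, fluences, damaged.astype(float),
                              p0=[3.0, 1.0])
        print("recovered threshold %.2f vs assumed %.2f" % (params[0], true_threshold))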

  19. Basis Expansion Approaches for Regularized Sequential Dictionary Learning Algorithms With Enforced Sparsity for fMRI Data Analysis.

    Science.gov (United States)

    Seghouane, Abd-Krim; Iqbal, Asif

    2017-09-01

    Sequential dictionary learning algorithms have been successfully applied to functional magnetic resonance imaging (fMRI) data analysis. fMRI data sets are, however, structured data matrices with a notion of temporal smoothness in the column direction. This prior information, which can be converted into a smoothness constraint on the learned dictionary atoms, has seldom been included in classical dictionary learning algorithms when applied to fMRI data analysis. In this paper, we tackle this problem by proposing two new sequential dictionary learning algorithms dedicated to fMRI data analysis that account for this prior information. These algorithms differ from existing ones in their dictionary update stage, whose steps are derived as a variant of the power method for computing the SVD. The proposed algorithms generate regularized dictionary atoms by solving a left regularized rank-one matrix approximation problem in which temporal smoothness is enforced via regularization through basis expansion and sparse basis expansion. Applications to synthetic data experiments and real fMRI data sets illustrating the performance of the proposed algorithms are provided.
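
    A minimal sketch of the flavor of dictionary update described above is given below: a power-method-style rank-one update of a single atom on a residual matrix, with the atom constrained to a smooth temporal basis (basis expansion). It is illustrative only; the regularized formulations and the sparse basis expansion variant in the paper differ in detail.

        # Hypothetical sketch: rank-one atom/code update by projected power iterations,
        # with the atom restricted to the span of a smooth (DCT-like) basis.
        import numpy as np

        def smooth_basis(n, k):
            """First k DCT-II-like cosine basis vectors of length n (orthonormal columns)."""
            t = np.arange(n)
            B = np.column_stack([np.cos(np.pi * (t + 0.5) * f / n) for f in range(k)])
            return B / np.linalg.norm(B, axis=0)

        def rank_one_update(E, B, iters=20):
            """Approximate residual E (n x m) by d s^T with the atom d in span(B)."""
            s = np.random.randn(E.shape[1])
            for _ in range(iters):
                c = B.T @ (E @ s)                 # project the power-method direction
                d = B @ c                         # smooth atom (basis expansion)
                d /= np.linalg.norm(d) + 1e-12
                s = E.T @ d                       # code row given the smooth atom
            return d, s

        E = np.random.randn(100, 40)              # toy residual matrix
        d, s = rank_one_update(E, smooth_basis(100, 8))
        print(d.shape, s.shape)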

  20. SIMULATION OF LANDMARK APPROACH FOR WALL FOLLOWING ALGORITHM ON FIRE-FIGHTING ROBOT USING V-REP

    Directory of Open Access Journals (Sweden)

    Sumarsih Condroayu Purbarani

    2015-08-01

    Full Text Available Autonomous mobile robots have been implemented to assist humans in their daily activities and have also contributed significantly to human safety. An example of an autonomous robot in the human safety sector is the fire-fighting robot, which is the main topic of this paper. As an autonomous robot, the fire-fighting robot needs robust navigation ability to execute a given task in the shortest time interval. The wall-following algorithm is one of several navigation algorithms that simplify this autonomous navigation problem. As a contribution, we propose two methods that can be combined to make the existing wall-following algorithm more robust. The combined wall-following algorithm is compared with the original wall-following algorithm; by doing so, we can determine which method has more impact on the robot's navigation robustness. Our goal is to see which method is more effective when combined with the wall-following algorithm.
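
    For illustration, a minimal sketch of a basic proportional wall-following controller of the kind such work builds on is shown below; the proposed landmark methods and the V-REP simulation interface are not represented, and all parameter values are assumptions.

        # Hypothetical sketch: keep a wall on the robot's right at a target distance.
        def wall_follow_step(side_distance, front_distance,
                             target=0.5, k_p=2.0, cruise_speed=0.3):
            """Return (forward_speed, turn_rate); positive turn_rate turns left."""
            if front_distance < target:          # wall ahead: turn left in place
                return 0.0, 1.0
            error = target - side_distance       # positive when too close to the wall
            turn_rate = k_p * error              # steer away from / toward the wall
            return cruise_speed, turn_rate

        print(wall_follow_step(side_distance=0.35, front_distance=2.0))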

  1. Multisensor satellite data for water quality analysis and water pollution risk assessment: decision making under deep uncertainty with fuzzy algorithm in framework of multimodel approach

    Science.gov (United States)

    Kostyuchenko, Yuriy V.; Sztoyka, Yulia; Kopachevsky, Ivan; Artemenko, Igor; Yuschenko, Maxim

    2017-10-01

    A multi-model approach to remote sensing data processing and interpretation is described. The problem of satellite data utilization in a multi-modeling approach for socio-ecological risk assessment is formally defined, and a method for utilizing observation, measurement and modeling data within the multi-model framework is described. The methodology and models for risk assessment within a decision support framework are defined and described. A method for water quality assessment using satellite observation data is presented, based on analysis of the spectral reflectance of water areas. Spectral signatures of freshwater bodies and offshore waters are analyzed, and correlations between spectral reflectance, pollutants and selected water quality parameters are quantified. Data from the MODIS, MISR, AIRS and Landsat sensors acquired in 2002-2014 have been utilized and verified by in-field spectrometry and laboratory measurements. A fuzzy-logic-based approach to decision support on water quality degradation risk is discussed: the decision on the water quality category is made by a fuzzy algorithm using a limited set of uncertain parameters, drawing on data from satellite observations, field measurements and modeling. It is shown that this algorithm allows estimation of the water quality degradation rate and pollution risks. Problems in constructing the spatial and temporal distributions of the calculated parameters, as well as the problem of data regularization, are discussed. Using the proposed approach, maps of surface water pollution risk from point and diffuse sources are calculated and discussed.
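
    As an illustration of the kind of fuzzy classification described, the sketch below combines triangular memberships over two illustrative indicators into a water-quality category by max-min rule evaluation. The indicators, thresholds, and rule base are assumptions, not those of the study.

        # Hypothetical sketch: fuzzy water-quality categorization from uncertain inputs.
        def tri(x, a, b, c):
            """Triangular membership function with support [a, c] and peak at b."""
            if x <= a or x >= c:
                return 0.0
            return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

        def water_quality(turbidity, chlorophyll):
            low_turb, high_turb = tri(turbidity, 0, 5, 15), tri(turbidity, 10, 30, 60)
            low_chl, high_chl = tri(chlorophyll, 0, 2, 8), tri(chlorophyll, 5, 20, 50)
            # Max-min rule evaluation: category degree is the strongest firing rule.
            good = min(low_turb, low_chl)
            degraded = max(min(high_turb, low_chl),
                           min(low_turb, high_chl),
                           min(high_turb, high_chl))
            return ("degraded" if degraded > good else "good",
                    {"good": good, "degraded": degraded})

        print(water_quality(turbidity=12.0, chlorophyll=6.0))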

  2. Do maize models capture the impacts of heat and drought stresses on yield? Using algorithm ensembles to identify successful approaches.

    Science.gov (United States)

    Jin, Zhenong; Zhuang, Qianlai; Tan, Zeli; Dukes, Jeffrey S; Zheng, Bangyou; Melillo, Jerry M

    2016-09-01

    Stresses from heat and drought are expected to increasingly suppress crop yields, but the degree to which current models can represent these effects is uncertain. Here we evaluate the algorithms that determine impacts of heat and drought stress on maize in 16 major maize models by incorporating these algorithms into a standard model, the Agricultural Production Systems sIMulator (APSIM), and running an ensemble of simulations. Although both daily mean temperature and daylight temperature are common choices for forcing heat stress algorithms, current parameterizations in most models favor the use of daylight temperature even though the algorithm was designed for daily mean temperature. Different drought algorithms (i.e., a function of soil water content, of soil water supply to demand ratio, and of actual to potential transpiration ratio) simulated considerably different patterns of water shortage over the growing season, but nonetheless predicted similar decreases in annual yield. Using the selected combination of algorithms, our simulations show that maize yield reduction was more sensitive to drought stress than to heat stress for the US Midwest since the 1980s, and this pattern will continue under future scenarios; the influence of excessive heat will become increasingly prominent by the late 21st century. Our review of algorithms in 16 crop models suggests that the impacts of heat and drought stress on plant yield can be best described by crop models that: (i) incorporate event-based descriptions of heat and drought stress, (ii) consider the effects of nighttime warming, and (iii) coordinate the interactions among multiple stresses. Our study identifies the proficiency with which different model formulations capture the impacts of heat and drought stress on maize biomass and yield production. The framework presented here can be applied to other modeled processes and used to improve yield predictions of other crops with a wide variety of crop models. © 2016 John
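
    The three classes of drought-stress factor compared in the ensemble can be illustrated with the hypothetical sketch below, in which each maps a daily water status onto a 0-1 stress multiplier; the thresholds are illustrative, not the calibrated values of any of the 16 models.

        # Hypothetical sketch of three alternative drought-stress factors (1 = no stress).
        def stress_soil_water(theta, wilting_point=0.10, field_capacity=0.30):
            """Stress factor from volumetric soil water content."""
            return min(1.0, max(0.0, (theta - wilting_point) / (field_capacity - wilting_point)))

        def stress_supply_demand(supply_mm, demand_mm):
            """Stress factor from the soil water supply to crop demand ratio."""
            return min(1.0, supply_mm / demand_mm) if demand_mm > 0 else 1.0

        def stress_transpiration(actual_mm, potential_mm):
            """Stress factor from the actual to potential transpiration ratio."""
            return min(1.0, actual_mm / potential_mm) if potential_mm > 0 else 1.0

        print(stress_soil_water(0.18),
              stress_supply_demand(3.0, 5.0),
              stress_transpiration(2.5, 4.0))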

  3. Expert PLSQL Practices

    CERN Document Server

    Beresniewicz, John

    2011-01-01

    Expert PL/SQL Practices is a book of collected wisdom on PL/SQL programming from some of the best and the brightest in the field. Each chapter is a deep-dive into a specific problem, technology, or feature set that you'll face as a PL/SQL programmer. Each author has chosen their topic out of the strong belief that what they share can make a positive difference in the quality and scalability of code that you write. The path to mastery begins with syntax and the mechanics of writing statements to make things happen. If you've reached that point with PL/SQL, then let the authors of Expert PL/SQL

  4. Bioethics for Technical Experts

    Science.gov (United States)

    Asano, Shigetaka

    Along with the rapidly expanding applications of life science and technology, technical experts are confronted with ethical, social, and legal problems more often than before. Behind this lie elements of scientific and social uncertainty that are inevitable as life science progresses, in addition to a historically established social distrust of scientists and engineers. To solve these problems, therefore, we should establish social governance built on 'relief' and 'reliance' that enables both citizens and engineers to share awareness of the issues, to design social orders and criteria based on provisional values for bioethics, to manage the practical use of each subject carefully, and to refine those values from provisional to universal. In pursuing these measures, technical experts can learn much from current practice in the medical field.

  5. Expert tool use

    DEFF Research Database (Denmark)

    Thorndahl, Kathrine Liedtke; Ravn, Susanne

    2017-01-01

    According to some phenomenologists, a tool can be experienced as incorporated when, as a result of habitual use or deliberate practice, someone is able to manipulate it without conscious effort. In this article, we specifically focus on the experience of expert tool use in elite sport. Based on a case study of elite rope skipping, we argue that the phenomenological concept of incorporation does not suffice to adequately describe how expert tool users feel when interacting with their tools. By analyzing a combination of insights gained from participant observation of 11 elite rope skippers and autoethnographic material from one former elite skipper, we take some initial steps toward the development of a more nuanced understanding of the concept of incorporation; one that is able to accommodate the experiences of expert tool users. In sum, our analyses indicate that the possibility for experiencing...

  6. ALICE Expert System

    CERN Document Server

    Ionita, C

    2014-01-01

    The ALICE experiment at CERN employs a number of human operators (shifters), who have to make sure that the experiment is always in a state compatible with taking Physics data. Given the complexity of the system and the myriad of errors that can arise, this is not always a trivial task. The aim of this paper is to describe an expert system that is capable of assisting human shifters in the ALICE control room. The system diagnoses potential issues and attempts to make smart recommendations for troubleshooting. At its core, a Prolog engine infers whether a Physics or a technical run can be started based on the current state of the underlying sub-systems. A separate C++ component queries certain SMI objects and stores their state as facts in a Prolog knowledge base. By mining the data stored in different system logs, the expert system can also diagnose errors arising during a run. Currently the system is used by the on-call experts for faster response times, but we expect it to be adopted as a standard tool by reg...

  7. ALICE Expert System

    International Nuclear Information System (INIS)

    Ionita, C; Carena, F

    2014-01-01

    The ALICE experiment at CERN employs a number of human operators (shifters), who have to make sure that the experiment is always in a state compatible with taking Physics data. Given the complexity of the system and the myriad of errors that can arise, this is not always a trivial task. The aim of this paper is to describe an expert system that is capable of assisting human shifters in the ALICE control room. The system diagnoses potential issues and attempts to make smart recommendations for troubleshooting. At its core, a Prolog engine infers whether a Physics or a technical run can be started based on the current state of the underlying sub-systems. A separate C++ component queries certain SMI objects and stores their state as facts in a Prolog knowledge base. By mining the data stored in different system logs, the expert system can also diagnose errors arising during a run. Currently the system is used by the on-call experts for faster response times, but we expect it to be adopted as a standard tool by regular shifters during the next data taking period

  8. Algorithmic Self

    DEFF Research Database (Denmark)

    Markham, Annette

    This paper takes an actor network theory approach to explore some of the ways that algorithms co-construct identity and relational meaning in contemporary use of social media. Based on intensive interviews with participants as well as activity logging and data tracking, the author presents a richly...... layered set of accounts to help build our understanding of how individuals relate to their devices, search systems, and social network sites. This work extends critical analyses of the power of algorithms in implicating the social self by offering narrative accounts from multiple perspectives. It also...... contributes an innovative method for blending actor network theory with symbolic interaction to grapple with the complexity of everyday sensemaking practices within networked global information flows....

  9. Spot quantification in two dimensional gel electrophoresis image analysis: comparison of different approaches and presentation of a novel compound fitting algorithm

    Science.gov (United States)

    2014-01-01

    Background Various computer-based methods exist for the detection and quantification of protein spots in two dimensional gel electrophoresis images. Area-based methods are commonly used for spot quantification: an area is assigned to each spot and the sum of the pixel intensities in that area, the so-called volume, is used as a measure of spot signal. Other methods use the optical density, i.e., the intensity of the most intense pixel of a spot, or calculate the volume from the parameters of a fitted function. Results In this study we compare the performance of different spot quantification methods using synthetic and real data. We propose a ready-to-use algorithm for spot detection and quantification that uses fitting of two dimensional Gaussian function curves for the extraction of data from two dimensional gel electrophoresis (2-DE) images. The algorithm implements fitting using logical compounds and is computationally efficient. The applicability of the compound fitting algorithm was evaluated for various simulated data and compared with other quantification approaches. We provide evidence that even if an incorrect bell-shaped function is used, the fitting method is superior to other approaches, especially when spots overlap. Finally, we validated the method with experimental data of urea-based 2-DE of Aβ peptides and re-analyzed published data sets. Our methods showed higher precision and accuracy than other approaches when applied to exposure time series and standard gels. Conclusion Compound fitting as a quantification method for 2-DE spots shows several advantages over other approaches and could be combined with various spot detection methods. The algorithm was scripted in MATLAB (Mathworks) and is available as a supplemental file. PMID:24915860
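
    A minimal Python sketch of the single-spot case is given below; the published algorithm is a MATLAB implementation with compound fitting for overlapping spots, which is not reproduced here. A two-dimensional Gaussian is fitted to a synthetic spot and its analytic volume is reported; image size, noise level, and starting values are illustrative.

        # Hypothetical sketch: quantify one spot by fitting a 2-D Gaussian.
        import numpy as np
        from scipy.optimize import curve_fit

        def gauss2d(coords, amplitude, x0, y0, sx, sy, offset):
            x, y = coords
            return (amplitude * np.exp(-((x - x0) ** 2 / (2 * sx ** 2)
                                         + (y - y0) ** 2 / (2 * sy ** 2))) + offset).ravel()

        # Synthetic spot image with noise
        x, y = np.meshgrid(np.arange(40), np.arange(40))
        image = gauss2d((x, y), 200.0, 18.0, 22.0, 3.0, 4.0, 10.0).reshape(40, 40)
        image = image + np.random.normal(0, 2.0, image.shape)

        p0 = [image.max() - image.min(), 20, 20, 5, 5, image.min()]
        params, _ = curve_fit(gauss2d, (x, y), image.ravel(), p0=p0)
        amplitude, x0, y0, sx, sy, offset = params
        volume = 2 * np.pi * amplitude * sx * sy        # analytic volume above the offset
        print(f"fitted centre ({x0:.1f}, {y0:.1f}), volume {volume:.0f}")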

  10. Integration of Artificial Neural Network Modeling and Genetic Algorithm Approach for Enrichment of Laccase Production in Solid State Fermentation by Pleurotus ostreatus

    OpenAIRE

    Potu Venkata Chiranjeevi; Moses Rajasekara Pandian; Sathish Thadikamala

    2014-01-01

    Black gram husk was used as a solid substrate for laccase production by Pleurotus ostreatus, and various fermentation conditions were optimized based on an artificial intelligence method. A total of six parameters, i.e., temperature, inoculum concentration, moisture content, CuSO4, glucose, and peptone concentrations, were optimized. A total of 50 experiments were conducted, and the obtained data were modeled by a hybrid of artificial neural network (ANN) and genetic algorithm (GA) approaches...

  11. Preemie abandonment? Multidisciplinary experts consider how to best meet preemies needs at "preterm infants: a collaborative approach to specialized care" roundtable.

    Science.gov (United States)

    Als, Heidelise; Behrman, Richard; Checchia, Paul; Denne, Scott; Dennery, Phyllis; Hall, Caroline B; Martin, Richard; Panitch, Howard; Schmidt, Barbara; Stevenson, David K; Vila, Linda

    2007-06-04

    In June 2006, the Institute of Medicine (IOM) released a comprehensive study, Preterm Birth: Causes, Consequences, and Prevention. The report was a result of the IOM's efforts, in particular those of the Committee on Understanding Premature Birth and Assuring Healthy Outcomes, to better understand and prevent preterm birth and improve care for babies born prematurely. After its publication, a group of health care professionals came together in a roundtable session, "Preterm Infants: A Collaborative Approach to Specialized Care," to discuss the implications of the report. The following article captures the group's April 2007 discussion about the clinical and societal problems of preterm birth. It should be of interest to hospital administrators, pediatricians, third-party payers, policy makers, public health officials, academic researchers, funding agencies, allied health professionals, and others with a vested interest in curbing healthcare costs as well as in what needs to be understood and done to safeguard the short- and long-term health of a most vulnerable population.

  12. Crystal engineering using a "turtlebug" algorithm: A de novo approach to the design of binodal metal-organic frameworks

    KAUST Repository

    McColm, Gregory L.

    2011-09-07

    A new series of computer programs that enumerate three-dimensional periodic embedded nets (i.e., representing crystals) is based on an algorithm that can theoretically enumerate all possible structures for all possible periodic topologies. Unlike extant programs, this algorithm employs algebraic and combinatorial machinery developed during the 1980s in combinatorial and geometric group theory and ancillary fields. This algorithm was validated by a demonstration program that found all strictly binodal periodic edge-transitive 3,4-, 3,6-, 4,4-, and 4,6-coordinated nets listed in the RCSR database. These programs could be used in two ways: to suggest new ways for targeting known nets, and to provide blueprints for new chemically feasible nets. They rely on a discrete version of "turtle geometry" adapted for these nets. © 2011 American Chemical Society.

  13. Crystal engineering using a "turtlebug" algorithm: A de novo approach to the design of binodal metal-organic frameworks

    KAUST Repository

    McColm, Gregory L.; Clark, W. Edwin; Eddaoudi, Mohamed; Wojtas, Łukasz; Zaworotko, Michael J.

    2011-01-01

    A new series of computer programs that enumerate three-dimensional periodic embedded nets (i.e., representing crystals) is based on an algorithm that can theoretically enumerate all possible structures for all possible periodic topologies. Unlike extant programs, this algorithm employs algebraic and combinatorial machinery developed during the 1980s in combinatorial and geometric group theory and ancillary fields. This algorithm was validated by a demonstration program that found all strictly binodal periodic edge-transitive 3,4-, 3,6-, 4,4-, and 4,6-coordinated nets listed in the RCSR database. These programs could be used in two ways: to suggest new ways for targeting known nets, and to provide blueprints for new chemically feasible nets. They rely on a discrete version of "turtle geometry" adapted for these nets. © 2011 American Chemical Society.

  14. The fuzzy clearing approach for a niching genetic algorithm applied to a nuclear reactor core design optimization problem

    International Nuclear Information System (INIS)

    Sacco, Wagner F.; Machado, Marcelo D.; Pereira, Claudio M.N.A.; Schirru, Roberto

    2004-01-01

    This article extends previous efforts on genetic algorithms (GAs) applied to a core design optimization problem. We introduce the application of a new Niching Genetic Algorithm (NGA) to this problem and compare its performance to these previous works. The optimization problem consists in adjusting several reactor cell parameters, such as dimensions, enrichment and materials, in order to minimize the average peak-factor in a three-enrichment zone reactor, considering restrictions on the average thermal flux, criticality and sub-moderation. After exhaustive experiments we observed that our new niching method performs better than the conventional GA due to a greater exploration of the search space
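
    For orientation, the sketch below shows a crisp clearing-style niching step; the paper uses a fuzzy clearing formulation, which is not reproduced. Within each niche radius only the best individual keeps its fitness and the rest are cleared; the radius and capacity values are illustrative.

        # Hypothetical sketch: clearing-style niching over a real-coded population.
        import numpy as np

        def clearing(population, fitness, radius=0.5, capacity=1):
            """population: (n, d) array; fitness: length-n array (larger is better)."""
            order = np.argsort(-fitness)                 # best individuals first
            cleared = fitness.copy()
            winners = []
            for idx in order:
                near = [w for w in winners
                        if np.linalg.norm(population[idx] - population[w]) < radius]
                if len(near) < capacity:
                    winners.append(idx)                  # niche leader keeps its fitness
                else:
                    cleared[idx] = -np.inf               # cleared within an occupied niche
            return cleared

        pop = np.random.rand(10, 3)
        fit = np.random.rand(10)
        print(clearing(pop, fit))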

  15. Management of High-energy Avulsive Ballistic Facial Injury: A Review of the Literature and Algorithmic Approach

    Directory of Open Access Journals (Sweden)

    Elbert E. Vaca, MD

    2018-03-01

    Conclusions: Existing literature suggests that early and aggressive intervention improves outcomes following avulsive ballistic injuries. Further comparative studies are needed; however, although evidence is limited, the senior author presents a 3-stage reconstructive algorithm advocating early and definitive reconstruction with aesthetic free tissue transfer in an attempt to optimize reconstructive outcomes of these complex injuries.

  16. Expert Oracle Exadata

    CERN Document Server

    Johnson, Randy

    2011-01-01

    Throughout history, advances in technology have come in spurts. A single great idea can often spur rapid change as the idea takes hold and is propagated, often in totally unexpected directions. Exadata embodies such a change in how we think about and manage relational databases. The key change lies in the concept of offloading SQL processing to the storage layer. That concept is a huge win, and its implementation in the form of Exadata is truly a game changer. Expert Oracle Exadata will give you a look under the covers at how the combination of hardware and software that comprise Exadata actua

  17. The naked experts

    International Nuclear Information System (INIS)

    Martin, B.

    1982-01-01

    In an article critical of experts, the cases argued for and against nuclear power are discussed under the headings: environmental hazards arising from the nuclear fuel cycle; proliferation of nuclear weapons capabilities via expansion of the nuclear power industry; political and social threats and restraints of a nuclear society (terrorism, reduction in civil liberties, centralised political and economic power); economic and employment disadvantages of nuclear power; impact of uranium mining on (Australian) aboriginal culture; inadequacy of nuclear power as a solution to energy problems; advantages of a 'soft energy path' based around conservation and renewable energy technologies. (U.K.)

  18. A simulation-based expert system for nuclear power plant diagnostics

    International Nuclear Information System (INIS)

    Hassberger, J.A.; Lee, J.C.

    1989-01-01

    An expert system for diagnosing operational transients in a nuclear power plant is discussed. Hypothesis-and-test is used as the problem-solving strategy, with hypotheses generated by an expert system that monitors the plant for patterns of data symptomatic of known failure modes. Fuzzy logic is employed as the inferencing mechanism, with two complementary implication schemes to handle scenarios involving competing failures. Hypothesis testing is performed. An artificial intelligence framework based on a critical functions approach is used to deal with the complexity of a nuclear plant. A prototype system for diagnosing transients in the reactor coolant system of a pressurized water reactor has been developed to test the algorithms described here. Results are presented for the diagnosis of data from the Three Mile Island Unit 2 loss-of-feedwater/small-break loss-of-coolant accident.

  19. Fuzzy Expert System to Characterize Students

    Science.gov (United States)

    Van Hecke, T.

    2011-01-01

    Students wanting to succeed in higher education are required to adopt an adequate learning approach. By analyzing individual learning characteristics, teachers can give personal advice to help students identify their learning success factors. An expert system based on fuzzy logic can provide economically viable solutions to help students identify…

  20. Visual Cues for an Adaptive Expert System.

    Science.gov (United States)

    Miller, Helen B.

    NCR (National Cash Register) Corporation is pursuing opportunities to make their point of sale (POS) terminals easy to use and easy to learn. To approach the goal of making the technology invisible to the user, NCR has developed an adaptive expert prototype system for a department store POS operation. The structure for the adaptive system, the…