WorldWideScience

Sample records for gis based decision-tree

  1. Spatial soil zinc content distribution from terrain parameters: A GIS-based decision-tree model in Lebanon

    International Nuclear Information System (INIS)

    Bou Kheir, Rania; Greve, Mogens H.; Abdallah, Chadi; Dalgaard, Tommy

    2010-01-01

    Heavy metal contamination has been and continues to be a worldwide phenomenon that has attracted a great deal of attention from governments and regulatory bodies. In this context, our study proposes a regression-tree model to predict the concentration level of zinc in the soils of northern Lebanon (as a case study of Mediterranean landscapes) under a GIS environment. The developed tree model explained 88% of the variance in zinc concentration using pH (100% in relative importance), surroundings of waste areas (90%), proximity to roads (80%), nearness to cities (50%), distance to drainage line (25%), lithology (24%), land cover/use (14%), slope gradient (10%), conductivity (7%), soil type (7%), organic matter (5%), and soil depth (5%). The overall accuracy of the quantitative zinc map produced (at 1:50,000 scale) was estimated to be 78%. The proposed tree model is relatively simple and may also be applied to other areas. - GIS regression-tree analysis explained 88% of the variability in field/laboratory zinc concentrations.
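
    The record above describes a GIS regression-tree model relating zinc concentrations to terrain and environmental predictors. As a rough illustration of that idea (not the authors' code; the file name, column names and tree settings below are assumptions), a scikit-learn regression tree can be fitted to field samples and its relative feature importances inspected:

        # Sketch only: hypothetical soil-sample table with terrain/environmental predictors.
        import pandas as pd
        from sklearn.model_selection import train_test_split
        from sklearn.tree import DecisionTreeRegressor

        df = pd.read_csv("soil_samples.csv")          # hypothetical file of field/laboratory samples
        features = ["ph", "dist_waste", "dist_road", "dist_city",
                    "dist_drainage", "slope", "conductivity", "organic_matter"]
        X, y = df[features], df["zinc_ppm"]

        X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
        tree = DecisionTreeRegressor(max_depth=6, min_samples_leaf=10).fit(X_train, y_train)

        print("R^2 on held-out samples:", tree.score(X_test, y_test))
        # Relative importances play the role of the percentages quoted in the abstract.
        for name, imp in sorted(zip(features, tree.feature_importances_), key=lambda t: -t[1]):
            print(f"{name}: {imp:.2f}")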

  2. Spatial soil zinc content distribution from terrain parameters: A GIS-based decision-tree model in Lebanon

    Energy Technology Data Exchange (ETDEWEB)

    Bou Kheir, Rania, E-mail: rania.boukheir@agrsci.d [Lebanese University, Faculty of Letters and Human Sciences, Department of Geography, GIS Research Laboratory, P.O. Box 90-1065, Fanar (Lebanon); Department of Agroecology and Environment, Faculty of Agricultural Sciences (DJF), Aarhus University, Blichers Alle 20, P.O. Box 50, DK-8830 Tjele (Denmark); Greve, Mogens H. [Department of Agroecology and Environment, Faculty of Agricultural Sciences (DJF), Aarhus University, Blichers Alle 20, P.O. Box 50, DK-8830 Tjele (Denmark); Abdallah, Chadi [National Council for Scientific Research, Remote Sensing Center, P.O. Box 11-8281, Beirut (Lebanon); Dalgaard, Tommy [Department of Agroecology and Environment, Faculty of Agricultural Sciences (DJF), Aarhus University, Blichers Alle 20, P.O. Box 50, DK-8830 Tjele (Denmark)

    2010-02-15

    Heavy metal contamination has been and continues to be a worldwide phenomenon that has attracted a great deal of attention from governments and regulatory bodies. In this context, our study proposes a regression-tree model to predict the concentration level of zinc in the soils of northern Lebanon (as a case study of Mediterranean landscapes) under a GIS environment. The developed tree model explained 88% of the variance in zinc concentration using pH (100% in relative importance), surroundings of waste areas (90%), proximity to roads (80%), nearness to cities (50%), distance to drainage line (25%), lithology (24%), land cover/use (14%), slope gradient (10%), conductivity (7%), soil type (7%), organic matter (5%), and soil depth (5%). The overall accuracy of the quantitative zinc map produced (at 1:50,000 scale) was estimated to be 78%. The proposed tree model is relatively simple and may also be applied to other areas. - GIS regression-tree analysis explained 88% of the variability in field/laboratory zinc concentrations.

  3. The application of GIS based decision-tree models for generating the spatial distribution of hydromorphic organic landscapes in relation to digital terrain data

    DEFF Research Database (Denmark)

    Kheir, Rania Bou; Bøcher, Peder Klith; Greve, Mette Balslev

    2010-01-01

    Accurate information about organic/mineral soil occurrence is a prerequisite for many land resources management applications (including climate change mitigation). This paper aims at investigating the potential of using geomorphometrical analysis and decision tree modeling to predict the geographic......, realistic and practical, and it can be applied to other areas, thereby providing a tool to facilitate the implementation of pedological/hydrological plans for conservation and sustainable management. It is particularly useful when information about soil properties from conventional field surveys is limited....

  4. CUDT: A CUDA Based Decision Tree Algorithm

    Directory of Open Access Journals (Sweden)

    Win-Tsung Lo

    2014-01-01

    Decision tree is one of the most widely used classification methods in data mining, and many studies have been devoted to improving its performance. However, those algorithms were developed to run on traditional distributed systems, so the latency of processing the huge data generated by ubiquitous sensing nodes cannot be improved without the help of new technology. In order to improve data processing latency in huge data mining, in this paper we design and implement a new parallelized decision tree algorithm on CUDA (compute unified device architecture), a GPGPU solution provided by NVIDIA. In the proposed system, the CPU is responsible for flow control while the GPU is responsible for computation. We have conducted many experiments to evaluate the system performance of CUDT and made a comparison with the traditional CPU version. The results show that CUDT is 5∼55 times faster than Weka-j48 and 18 times faster than SPRINT for large data sets.

  5. EEG feature selection method based on decision tree.

    Science.gov (United States)

    Duan, Lijuan; Ge, Hui; Ma, Wei; Miao, Jun

    2015-01-01

    This paper aims to solve the automated feature selection problem in brain computer interfaces (BCI). In order to automate the feature selection process, we proposed a novel EEG feature selection method based on a decision tree (DT). During the electroencephalogram (EEG) signal processing, a feature extraction method based on principal component analysis (PCA) was used, and the selection process based on the decision tree was performed by searching the feature space and automatically selecting optimal features. Considering that EEG signals are a series of non-linear signals, a generalized linear classifier, the support vector machine (SVM), was chosen. In order to test the validity of the proposed method, we applied the EEG feature selection method based on the decision tree to BCI Competition II dataset Ia, and the experiment showed encouraging results.
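
    As a rough sketch of the pipeline described above (PCA feature extraction, decision-tree-driven feature selection, SVM classification), assuming scikit-learn and placeholder data in place of the BCI Competition II recordings:

        # Sketch only: X stands in for preprocessed EEG trials, y for trial labels.
        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.feature_selection import SelectFromModel
        from sklearn.tree import DecisionTreeClassifier
        from sklearn.svm import SVC
        from sklearn.pipeline import make_pipeline
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(0)
        X = rng.normal(size=(200, 896))     # placeholder EEG feature matrix
        y = rng.integers(0, 2, size=200)    # placeholder labels

        pipeline = make_pipeline(
            PCA(n_components=30),                                  # feature extraction
            SelectFromModel(DecisionTreeClassifier(max_depth=5)),  # tree-based feature selection
            SVC(kernel="linear"),                                  # final classifier
        )
        print("Cross-validated accuracy:", cross_val_score(pipeline, X, y, cv=5).mean())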

  6. An Isometric Mapping Based Co-Location Decision Tree Algorithm

    Science.gov (United States)

    Zhou, G.; Wei, J.; Zhou, X.; Zhang, R.; Huang, W.; Sha, H.; Chen, J.

    2018-05-01

    Decision tree (DT) induction has been widely used in pattern classification. However, most traditional DTs have the disadvantage that they consider only non-spatial attributes (i.e., spectral information) when classifying pixels, which can result in objects being misclassified. Therefore, some researchers have proposed a co-location decision tree (Cl-DT) method, which combines co-location mining and decision trees to solve the above-mentioned problems of traditional decision trees. Cl-DT overcomes the shortcoming of existing DT algorithms, which create a node for each value of a given attribute, and it achieves higher accuracy than the existing decision tree approach. However, for non-linearly distributed data instances, the Euclidean distance between instances does not reflect the true positional relationship between them. In order to overcome this shortcoming, this paper proposes an isometric mapping method based on Cl-DT (Isomap-based Cl-DT), which combines isometric mapping and Cl-DT. Because isometric mapping uses geodesic distances instead of Euclidean distances between non-linearly distributed instances, the true distance between instances can be reflected. The experimental results and several comparative analyses show that: (1) the extraction of exposed carbonate rocks is highly accurate, and (2) the proposed method has clear advantages, because the total number of nodes and the number of leaf nodes are greatly reduced compared with Cl-DT. Therefore, the Isomap-based Cl-DT algorithm can construct a more accurate and faster decision tree.
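
    The full co-location logic of Cl-DT is not reproduced here, but the core Isomap-then-tree step can be sketched with scikit-learn on toy non-linearly distributed data (all parameter values are assumptions):

        # Sketch only: embed instances with geodesic (Isomap) distances, then train a tree.
        from sklearn.datasets import make_swiss_roll
        from sklearn.manifold import Isomap
        from sklearn.model_selection import train_test_split
        from sklearn.tree import DecisionTreeClassifier

        X, t = make_swiss_roll(n_samples=1000, random_state=0)   # non-linearly distributed instances
        y = (t > t.mean()).astype(int)                           # toy binary labels

        X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
        embed = Isomap(n_neighbors=10, n_components=2).fit(X_train)
        tree = DecisionTreeClassifier(max_depth=5).fit(embed.transform(X_train), y_train)
        print("Accuracy in the Isomap space:", tree.score(embed.transform(X_test), y_test))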

  7. AN ISOMETRIC MAPPING BASED CO-LOCATION DECISION TREE ALGORITHM

    Directory of Open Access Journals (Sweden)

    G. Zhou

    2018-05-01

    Decision tree (DT) induction has been widely used in pattern classification. However, most traditional DTs have the disadvantage that they consider only non-spatial attributes (i.e., spectral information) when classifying pixels, which can result in objects being misclassified. Therefore, some researchers have proposed a co-location decision tree (Cl-DT) method, which combines co-location mining and decision trees to solve the above-mentioned problems of traditional decision trees. Cl-DT overcomes the shortcoming of existing DT algorithms, which create a node for each value of a given attribute, and it achieves higher accuracy than the existing decision tree approach. However, for non-linearly distributed data instances, the Euclidean distance between instances does not reflect the true positional relationship between them. In order to overcome this shortcoming, this paper proposes an isometric mapping method based on Cl-DT (Isomap-based Cl-DT), which combines isometric mapping and Cl-DT. Because isometric mapping uses geodesic distances instead of Euclidean distances between non-linearly distributed instances, the true distance between instances can be reflected. The experimental results and several comparative analyses show that: (1) the extraction of exposed carbonate rocks is highly accurate, and (2) the proposed method has clear advantages, because the total number of nodes and the number of leaf nodes are greatly reduced compared with Cl-DT. Therefore, the Isomap-based Cl-DT algorithm can construct a more accurate and faster decision tree.

  8. Spatial distribution of block falls using volumetric GIS-decision-tree models

    Science.gov (United States)

    Abdallah, C.

    2010-10-01

    Block falls are considered a significant aspect of surficial instability, contributing to losses in land and socio-economic terms through their damaging effects on natural and human environments. This paper predicts and maps the geographic distribution and volumes of block falls in central Lebanon using remote sensing, geographic information systems (GIS) and decision-tree modeling (un-pruned and pruned trees). Eleven terrain parameters (lithology, proximity to fault line, karst type, soil type, distance to drainage line, elevation, slope gradient, slope aspect, slope curvature, land cover/use, and proximity to roads) were generated to statistically explain the occurrence of block falls. The latter were discriminated using SPOT4 satellite imagery, and their dimensions were determined during field surveys. The un-pruned tree model based on all considered parameters explained 86% of the variability in field block-fall measurements. Once pruned, it explained 50% of the variability in block-fall volumes by selecting just four parameters (lithology, slope gradient, soil type, and land cover/use). Both tree models (un-pruned and pruned) were converted to quantitative 1:50,000 block-fall maps with different classes, ranging from nil (no block falls) to more than 4000 m³. The two maps match fairly well, with a coincidence value of 45%; both can be used to prioritize the choice of specific zones for further measurement and modeling, as well as for land-use management. The proposed tree models are relatively simple, and may also be applied to other areas (the choice of un-pruned or pruned model depends on the availability of terrain parameters in a given area).
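
    The un-pruned versus pruned comparison in this record can be imitated with scikit-learn's cost-complexity pruning, used here as a stand-in for the authors' pruning procedure (data and parameter values are synthetic assumptions):

        # Sketch only: compare an un-pruned and a pruned regression tree on synthetic "terrain" data.
        import numpy as np
        from sklearn.model_selection import train_test_split
        from sklearn.tree import DecisionTreeRegressor

        rng = np.random.default_rng(1)
        X = rng.normal(size=(500, 11))                                      # eleven terrain parameters
        y = 3 * X[:, 0] + X[:, 1] ** 2 + rng.normal(scale=0.5, size=500)    # stand-in block-fall volume

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=1)
        unpruned = DecisionTreeRegressor(random_state=1).fit(X_tr, y_tr)
        pruned = DecisionTreeRegressor(ccp_alpha=0.05, random_state=1).fit(X_tr, y_tr)

        for name, model in [("un-pruned", unpruned), ("pruned", pruned)]:
            print(name, "leaves:", model.get_n_leaves(), "R^2:", round(model.score(X_te, y_te), 2))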

  9. Improved Frame Mode Selection for AMR-WB+ Based on Decision Tree

    Science.gov (United States)

    Kim, Jong Kyu; Kim, Nam Soo

    In this letter, we propose a coding mode selection method for the AMR-WB+ audio coder based on a decision tree. In order to reduce computation while maintaining good performance, a decision tree classifier is adopted, with the closed-loop mode selection results as the target classification labels. The size of the decision tree is controlled by pruning, so the proposed method does not increase the memory requirement significantly. Through an evaluation test on a database covering both speech and music material, the proposed method is found to achieve much better mode selection accuracy than the open-loop mode selection module in AMR-WB+.

  10. Detecting Structural Metadata with Decision Trees and Transformation-Based Learning

    National Research Council Canada - National Science Library

    Kim, Joungbum; Schwarm, Sarah E; Ostendorf, Mari

    2004-01-01

    ... Specifically, combinations of decision trees and language models are used to predict sentence ends and interruption points, and given these events, transformation-based learning is used to detect edit...

  11. Decision Tree Repository and Rule Set Based Mingjiang River Estuarine Wetlands Classification

    Science.gov (United States)

    Zhang, W.; Li, X.; Xiao, W.

    2018-05-01

    Increasing urbanization and industrialization have led to wetland losses in the estuarine area of the Mingjiang River over the past three decades, and increasing attention has been given to producing wetland inventories using remote sensing and GIS technology. Due to inconsistent training sites and training samples, traditional pixel-based image classification methods cannot achieve comparable results across different organizations, whereas object-oriented image classification techniques show great potential to solve this problem, and Landsat moderate-resolution remote sensing images are widely used to fulfill this requirement. Firstly, standardized atmospheric correction and spectrally high-fidelity texture feature enhancement were conducted before implementing the object-oriented wetland classification method in eCognition. Secondly, we performed the multi-scale segmentation procedure, taking the scale, hue, shape, compactness and smoothness of the image into account to obtain the appropriate parameters; using the top-down region merging algorithm from the single-pixel level, the optimal texture segmentation scale for different types of features was confirmed. Then, the segmented objects were used as classification units to calculate spectral information such as the mean, maximum, minimum, brightness and normalized values. The area, length, tightness and shape rule of the image objects' spatial features, and texture features such as the mean, variance and entropy of image objects, were used as classification features of the training samples. Based on the reference images and the sampling points from on-the-spot investigation, typical training samples were selected uniformly and randomly for each type of ground object. The spectral, texture and spatial characteristics of each feature type in each feature layer, together with the corresponding ranges of values, were used to create the decision tree repository. Finally, with the help of high-resolution reference images, the

  12. MRI-based decision tree model for diagnosis of biliary atresia.

    Science.gov (United States)

    Kim, Yong Hee; Kim, Myung-Joon; Shin, Hyun Joo; Yoon, Haesung; Han, Seok Joo; Koh, Hong; Roh, Yun Ho; Lee, Mi-Jung

    2018-02-23

    To evaluate MRI findings and to generate a decision tree model for the diagnosis of biliary atresia (BA) in infants with jaundice. We retrospectively reviewed features of MRI and ultrasonography (US) performed in infants with jaundice between January 2009 and June 2016, with institutional review board approval. Features included the maximum diameter of periportal signal change on MRI (MR triangular cord thickness, MR-TCT) or US (US-TCT), visibility of the common bile duct (CBD) and abnormality of the gallbladder (GB). Hepatic subcapsular flow was reviewed on Doppler US. We performed conditional inference tree analysis using the MRI findings to generate a decision tree model. A total of 208 infants were included, 112 in the BA group and 96 in the non-BA group. Mean age at the time of MRI was 58.7 ± 36.6 days. Visibility of the CBD, abnormality of the GB and MR-TCT were good discriminators for the diagnosis of BA, and the MRI-based decision tree using these findings with an MR-TCT cut-off of 5.1 mm showed 97.3% sensitivity, 94.8% specificity and 96.2% accuracy. The MRI-based decision tree model reliably differentiates BA in infants with jaundice. MRI can be an objective imaging modality for the diagnosis of BA. • MRI-based decision tree model reliably differentiates biliary atresia in neonatal cholestasis. • Common bile duct, gallbladder and periportal signal changes are the discriminators. • MRI has comparable performance to ultrasonography for diagnosis of biliary atresia.
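
    Only the three discriminators and the 5.1 mm MR-TCT cut-off come from the record; the split order and combination logic below are assumptions, shown purely to illustrate how such a decision tree could be applied to a new case:

        # Illustrative rule-based check; not the authors' validated tree.
        def suspect_biliary_atresia(cbd_visible: bool, gb_abnormal: bool, mr_tct_mm: float) -> bool:
            if not cbd_visible:                      # common bile duct not seen on MRI
                return True
            if gb_abnormal and mr_tct_mm > 5.1:      # abnormal gallbladder plus thick triangular cord
                return True
            return False

        print(suspect_biliary_atresia(cbd_visible=False, gb_abnormal=True, mr_tct_mm=6.0))  # True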

  13. PCA based feature reduction to improve the accuracy of decision tree c4.5 classification

    Science.gov (United States)

    Nasution, M. Z. F.; Sitompul, O. S.; Ramli, M.

    2018-03-01

    Attribute splitting is a major process in Decision Tree C4.5 classification. However, this process does not remove irrelevant features when the decision tree is established, which leads to a major problem in decision tree classification known as over-fitting, resulting from noisy data and irrelevant features. In turn, over-fitting creates misclassification and data imbalance. Many algorithms have been proposed to overcome misclassification and over-fitting in Decision Tree C4.5 classification. Feature reduction is one of the important issues in classification modeling; it is intended to remove irrelevant data in order to improve accuracy. The feature reduction framework is used to simplify high-dimensional data to low-dimensional data with non-correlated attributes. In this research, we propose a framework for selecting relevant and non-correlated feature subsets. We consider principal component analysis (PCA) for feature reduction to perform non-correlated feature selection and the Decision Tree C4.5 algorithm for classification. From experiments conducted on the UCI cervical cancer data set with 858 instances and 36 attributes, we evaluated the performance of our framework based on accuracy, specificity and precision. Experimental results show that our proposed framework robustly enhances classification accuracy, achieving a 90.70% accuracy rate.
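
    A compact sketch of the PCA-then-decision-tree idea, with scikit-learn's entropy-criterion tree standing in for C4.5 and a built-in dataset standing in for the UCI cervical cancer data (both substitutions are assumptions):

        # Sketch only: PCA feature reduction followed by an entropy-based decision tree.
        from sklearn.datasets import load_breast_cancer
        from sklearn.decomposition import PCA
        from sklearn.pipeline import make_pipeline
        from sklearn.tree import DecisionTreeClassifier
        from sklearn.model_selection import cross_val_score

        X, y = load_breast_cancer(return_X_y=True)
        model = make_pipeline(PCA(n_components=10), DecisionTreeClassifier(criterion="entropy"))
        print("Cross-validated accuracy:", cross_val_score(model, X, y, cv=10).mean())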

  14. [Prediction of regional soil quality based on mutual information theory integrated with decision tree algorithm].

    Science.gov (United States)

    Lin, Fen-Fang; Wang, Ke; Yang, Ning; Yan, Shi-Guang; Zheng, Xin-Yu

    2012-02-01

    In this paper, to precisely obtain the spatial distribution characteristics of regional soil quality, the main factors affecting soil quality, such as soil type, land use pattern, lithology type, topography, road, and industry type, were considered; mutual information theory was adopted to select the main environmental factors, and the decision tree algorithm See5.0 was applied to predict the grade of regional soil quality. The main factors affecting regional soil quality were soil type, land use, lithology type, distance to town, distance to water area, altitude, distance to road, and distance to industrial land. The prediction accuracy of the decision tree model with the variables selected by mutual information was obviously higher than that of the model with all variables, and for the former model, whether expressed as a decision tree or as decision rules, the prediction accuracy was above 80%. Based on continuous and categorical data, the method of mutual information theory integrated with a decision tree can not only reduce the number of input parameters for the decision tree algorithm, but also predict and assess regional soil quality effectively.
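
    A minimal sketch of mutual-information-based factor selection feeding a decision tree; See5.0 itself is commercial, so an entropy-criterion scikit-learn tree and a built-in dataset are used as stand-ins (assumptions, not the study's data or software):

        # Sketch only: keep the most informative factors, then classify with a decision tree.
        from sklearn.datasets import load_wine
        from sklearn.feature_selection import SelectKBest, mutual_info_classif
        from sklearn.pipeline import make_pipeline
        from sklearn.tree import DecisionTreeClassifier
        from sklearn.model_selection import cross_val_score

        X, y = load_wine(return_X_y=True)
        model = make_pipeline(
            SelectKBest(mutual_info_classif, k=8),                             # factor selection
            DecisionTreeClassifier(criterion="entropy", min_samples_leaf=5),   # grade classification
        )
        print("Cross-validated accuracy:", cross_val_score(model, X, y, cv=5).mean())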

  15. A symptom based decision tree approach to boiling water reactor emergency operating procedures

    International Nuclear Information System (INIS)

    Knobel, R.C.

    1984-01-01

    This paper describes a decision tree approach to the development of BWR Emergency Operating Procedures for use by operators during emergencies. This approach utilizes the symptom-based Emergency Procedure Guidelines approved for implementation by the USNRC. Included in the paper is a discussion of the relative merits of the event-based Emergency Operating Procedures currently in use at US BWR plants. The body of the paper is devoted to a discussion of the decision tree approach to Emergency Operating Procedures soon to be implemented at two United States boiling water reactor plants, why this approach solves many of the problems with procedures identified in the post-accident reviews of Three Mile Island procedures, and why only now is this approach both desirable and feasible. The paper discusses how nuclear plant simulators were involved in the development of the Emergency Operating Procedure decision trees, and in the verification and validation of these procedures. (orig./HP)

  16. Diagnosis of Constant Faults in Read-Once Contact Networks over Finite Bases using Decision Trees

    KAUST Repository

    Busbait, Monther I.

    2014-05-01

    We study the depth of decision trees for diagnosis of constant faults in read-once contact networks over finite bases. This includes diagnosis of 0-1 faults, 0 faults and 1 faults. For any finite basis, we prove a linear upper bound on the minimum depth of decision tree for diagnosis of constant faults depending on the number of edges in a contact network over that basis. Also, we obtain asymptotic bounds on the depth of decision trees for diagnosis of each type of constant faults depending on the number of edges in contact networks in the worst case per basis. We study the set of indecomposable contact networks with up to 10 edges and obtain sharp coefficients for the linear upper bound for diagnosis of constant faults in contact networks over bases of these indecomposable contact networks. We use a set of algorithms, including one that we create, to obtain the sharp coefficients.

  17. Decision tree based knowledge acquisition and failure diagnosis using a PWR loop vibration model

    International Nuclear Information System (INIS)

    Bauernfeind, V.; Ding, Y.

    1993-01-01

    An analytical vibration model of the primary system of a 1300 MW PWR was used for simulating mechanical faults. Deviations in the calculated power density spectra and coherence functions are determined and classified. The decision tree technique is then used for personal-computer-supported knowledge presentation and for optimizing the logical relationships between the simulated faults and the observed symptoms. The optimized decision tree forms the knowledge base and can be used to diagnose known cases as well as to include new data in the knowledge base if new faults occur. (author)

  18. Dynamic Security Assessment of Danish Power System Based on Decision Trees: Today and Tomorrow

    DEFF Research Database (Denmark)

    Rather, Zakir Hussain; Liu, Leo; Chen, Zhe

    2013-01-01

    The research work presented in this paper analyzes the impact of wind energy, phasing out of central power plants and cross border power exchange on dynamic security of Danish Power System. Contingency based decision tree (DT) approach is used to assess the dynamic security of present and future...

  19. Diagnosis of Constant Faults in Read-Once Contact Networks over Finite Bases using Decision Trees

    KAUST Repository

    Busbait, Monther I.

    2014-01-01

    We study the depth of decision trees for diagnosis of constant faults in read-once contact networks over finite bases. This includes diagnosis of 0-1 faults, 0 faults and 1 faults. For any finite basis, we prove a linear upper bound on the minimum

  20. Development of a diagnostic decision tree for obstructive pulmonary diseases based on real-life data

    Directory of Open Access Journals (Sweden)

    Esther I. Metting

    2016-01-01

    The aim of this study was to develop and explore the diagnostic accuracy of a decision tree derived from a large real-life primary care population. Data from 9297 primary care patients (45% male, mean age 53±17 years) with suspicion of an obstructive pulmonary disease were derived from an asthma/chronic obstructive pulmonary disease (COPD) service where patients were assessed using spirometry, the Asthma Control Questionnaire, the Clinical COPD Questionnaire, history data and medication use. All patients were diagnosed through the Internet by a pulmonologist. The Chi-squared Automatic Interaction Detection method was used to build the decision tree. The tree was externally validated in another real-life primary care population (n=3215). Our tree correctly diagnosed 79% of the asthma patients, 85% of the COPD patients and 32% of the asthma–COPD overlap syndrome (ACOS) patients. External validation showed a comparable pattern (correct: asthma 78%, COPD 83%, ACOS 24%). Our decision tree is considered to be promising because it was based on real-life primary care patients with a specialist's diagnosis. In most patients the diagnosis could be correctly predicted. Predicting ACOS, however, remained a challenge. The total decision tree can be implemented in computer-assisted diagnostic systems for individual patients. A simplified version of this tree can be used in daily clinical practice as a desk tool.

  1. Development of a diagnostic decision tree for obstructive pulmonary diseases based on real-life data

    Science.gov (United States)

    in ’t Veen, Johannes C.C.M.; Dekhuijzen, P.N. Richard; van Heijst, Ellen; Kocks, Janwillem W.H.; Muilwijk-Kroes, Jacqueline B.; Chavannes, Niels H.; van der Molen, Thys

    2016-01-01

    The aim of this study was to develop and explore the diagnostic accuracy of a decision tree derived from a large real-life primary care population. Data from 9297 primary care patients (45% male, mean age 53±17 years) with suspicion of an obstructive pulmonary disease was derived from an asthma/chronic obstructive pulmonary disease (COPD) service where patients were assessed using spirometry, the Asthma Control Questionnaire, the Clinical COPD Questionnaire, history data and medication use. All patients were diagnosed through the Internet by a pulmonologist. The Chi-squared Automatic Interaction Detection method was used to build the decision tree. The tree was externally validated in another real-life primary care population (n=3215). Our tree correctly diagnosed 79% of the asthma patients, 85% of the COPD patients and 32% of the asthma–COPD overlap syndrome (ACOS) patients. External validation showed a comparable pattern (correct: asthma 78%, COPD 83%, ACOS 24%). Our decision tree is considered to be promising because it was based on real-life primary care patients with a specialist's diagnosis. In most patients the diagnosis could be correctly predicted. Predicting ACOS, however, remained a challenge. The total decision tree can be implemented in computer-assisted diagnostic systems for individual patients. A simplified version of this tree can be used in daily clinical practice as a desk tool. PMID:27730177

  2. Data Clustering and Evolving Fuzzy Decision Tree for Data Base Classification Problems

    Science.gov (United States)

    Chang, Pei-Chann; Fan, Chin-Yuan; Wang, Yen-Wen

    Database classification suffers from two well-known difficulties, i.e., the high dimensionality and the non-stationary variations within large historical data. This paper presents a hybrid classification model that integrates a case-based reasoning technique, a Fuzzy Decision Tree (FDT), and Genetic Algorithms (GA) to construct a decision-making system for data classification in various database applications. The model is mainly based on the idea that the historical database can be transformed into a smaller case base together with a group of fuzzy decision rules. As a result, the model can respond more accurately to the current data being classified, using the inductions from these smaller case-base fuzzy decision trees. Hit rate is applied as a performance measure, and the effectiveness of our proposed model is demonstrated by experimental comparison with other approaches on different database classification applications. The average hit rate of our proposed model is the highest among all.

  3. Applied Swarm-based medicine: collecting decision trees for patterns of algorithms analysis.

    Science.gov (United States)

    Panje, Cédric M; Glatzer, Markus; von Rappard, Joscha; Rothermundt, Christian; Hundsberger, Thomas; Zumstein, Valentin; Plasswilm, Ludwig; Putora, Paul Martin

    2017-08-16

    The objective consensus methodology has recently been applied in consensus finding in several studies on medical decision-making among clinical experts or guidelines. The main advantages of this method are an automated analysis and comparison of treatment algorithms of the participating centers which can be performed anonymously. Based on the experience from completed consensus analyses, the main steps for the successful implementation of the objective consensus methodology were identified and discussed among the main investigators. The following steps for the successful collection and conversion of decision trees were identified and defined in detail: problem definition, population selection, draft input collection, tree conversion, criteria adaptation, problem re-evaluation, results distribution and refinement, tree finalisation, and analysis. This manuscript provides information on the main steps for successful collection of decision trees and summarizes important aspects at each point of the analysis.

  4. A P2P Botnet detection scheme based on decision tree and adaptive multilayer neural networks.

    Science.gov (United States)

    Alauthaman, Mohammad; Aslam, Nauman; Zhang, Li; Alasem, Rafe; Hossain, M A

    2018-01-01

    In recent years, Botnets have been adopted as a popular method to spread malicious code on the Internet. This malicious code paves the way for many fraudulent activities, including spam mail, distributed denial-of-service attacks and click fraud. While many Botnets are set up using a centralized communication architecture, peer-to-peer (P2P) Botnets can adopt a decentralized architecture using an overlay network for exchanging command and control data, making their detection even more difficult. This work presents a method of P2P Bot detection based on an adaptive multilayer feed-forward neural network in cooperation with decision trees. A classification and regression tree is applied as a feature selection technique to select relevant features. With these features, a multilayer feed-forward neural network training model is created using a resilient back-propagation learning algorithm. A comparison of feature set selection based on the decision tree, principal component analysis and the ReliefF algorithm indicated that the neural network model with feature selection based on the decision tree has better identification accuracy along with lower rates of false positives. The usefulness of the proposed approach is demonstrated by conducting experiments on real network traffic datasets. In these experiments, an average detection rate of 99.08% with a false positive rate of 0.75% was observed.
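
    The pipeline described above (a classification-and-regression tree ranking features, the top-ranked features feeding a multilayer feed-forward network) can be sketched as follows; the traffic features are synthetic and the network settings are assumptions (the paper's resilient back-propagation is not available in scikit-learn):

        # Sketch only: tree-based feature selection followed by a feed-forward neural network.
        import numpy as np
        from sklearn.feature_selection import SelectFromModel
        from sklearn.model_selection import train_test_split
        from sklearn.neural_network import MLPClassifier
        from sklearn.pipeline import make_pipeline
        from sklearn.tree import DecisionTreeClassifier

        rng = np.random.default_rng(0)
        X = rng.normal(size=(2000, 29))            # placeholder connection-level traffic features
        y = (X[:, 0] + X[:, 3] > 0).astype(int)    # placeholder bot / normal labels

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
        model = make_pipeline(
            SelectFromModel(DecisionTreeClassifier(max_depth=6), threshold=-np.inf, max_features=10),
            MLPClassifier(hidden_layer_sizes=(20, 10), max_iter=500, random_state=0),
        )
        model.fit(X_tr, y_tr)
        print("Detection accuracy:", model.score(X_te, y_te))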

  5. Reconciliation of Decision-Making Heuristics Based on Decision Trees Topologies and Incomplete Fuzzy Probabilities Sets.

    Science.gov (United States)

    Doubravsky, Karel; Dohnal, Mirko

    2015-01-01

    Complex decision-making tasks of different natures, e.g. in economics, safety engineering, ecology and biology, are based on vague, sparse, partially inconsistent and subjective knowledge. Moreover, decision-making economists and engineers are usually not willing to invest too much time into the study of complex formal theories. They require decisions that can be (re)checked by human-like common-sense reasoning. One important problem related to realistic decision-making tasks is the incomplete data sets required by the chosen decision-making algorithm. This paper presents a relatively simple algorithm by which some missing III (input information items) can be generated, using mainly decision tree topologies, and integrated into incomplete data sets. The algorithm is based on an easy-to-understand heuristic, e.g. a longer decision tree sub-path is less probable. This heuristic can solve decision problems under total ignorance, i.e. when the decision tree topology is the only information available. In practice, however, isolated information items, e.g. some vaguely known probabilities (e.g. fuzzy probabilities), are usually available. This means that a realistic problem is analysed under partial ignorance. The proposed algorithm reconciles topology-related heuristics and additional fuzzy sets using fuzzy linear programming. The case study, represented by a tree with six lotteries and one fuzzy probability, is presented in detail.

  6. Reconciliation of Decision-Making Heuristics Based on Decision Trees Topologies and Incomplete Fuzzy Probabilities Sets.

    Directory of Open Access Journals (Sweden)

    Karel Doubravsky

    Complex decision-making tasks of different natures, e.g. in economics, safety engineering, ecology and biology, are based on vague, sparse, partially inconsistent and subjective knowledge. Moreover, decision-making economists and engineers are usually not willing to invest too much time into the study of complex formal theories. They require decisions that can be (re)checked by human-like common-sense reasoning. One important problem related to realistic decision-making tasks is the incomplete data sets required by the chosen decision-making algorithm. This paper presents a relatively simple algorithm by which some missing III (input information items) can be generated, using mainly decision tree topologies, and integrated into incomplete data sets. The algorithm is based on an easy-to-understand heuristic, e.g. a longer decision tree sub-path is less probable. This heuristic can solve decision problems under total ignorance, i.e. when the decision tree topology is the only information available. In practice, however, isolated information items, e.g. some vaguely known probabilities (e.g. fuzzy probabilities), are usually available. This means that a realistic problem is analysed under partial ignorance. The proposed algorithm reconciles topology-related heuristics and additional fuzzy sets using fuzzy linear programming. The case study, represented by a tree with six lotteries and one fuzzy probability, is presented in detail.

  7. Classifying dysmorphic syndromes by using artificial neural network based hierarchical decision tree.

    Science.gov (United States)

    Özdemir, Merve Erkınay; Telatar, Ziya; Eroğul, Osman; Tunca, Yusuf

    2018-05-01

    Dysmorphic syndromes have different facial malformations. These malformations are significant for an early diagnosis of dysmorphic syndromes and contain distinctive information for face recognition. In this study we define certain features of each syndrome by considering facial malformations and automatically classify Fragile X, Hurler, Prader Willi, Down and Wolf Hirschhorn syndromes and healthy groups. The reference points are marked on the face images and ratios between the points' distances are taken as features. We suggest a neural network based hierarchical decision tree structure in order to classify the syndrome types. We also implement k-nearest neighbor (k-NN) and artificial neural network (ANN) classifiers to compare classification accuracy with our hierarchical decision tree. The classification accuracy is 50%, 73% and 86.7% with the k-NN, ANN and hierarchical decision tree methods, respectively. The same images were then shown to a clinical expert, who achieved a recognition rate of 46.7%. We develop an efficient system to recognize different syndrome types automatically from simple, non-invasive imaging data, independent of the patient's age, sex and race, at high accuracy. The promising results indicate that our method can be used for pre-diagnosis of dysmorphic syndromes by clinical experts.

  8. Predicting volume of distribution with decision tree-based regression methods using predicted tissue:plasma partition coefficients.

    Science.gov (United States)

    Freitas, Alex A; Limbu, Kriti; Ghafourian, Taravat

    2015-01-01

    Volume of distribution is an important pharmacokinetic property that indicates the extent of a drug's distribution in the body tissues. This paper addresses the problem of how to estimate the apparent volume of distribution at steady state (Vss) of chemical compounds in the human body using decision tree-based regression methods from the area of data mining (or machine learning). Hence, the pros and cons of several different types of decision tree-based regression methods are discussed. The regression methods predict Vss using, as predictive features, both the compounds' molecular descriptors and the compounds' tissue:plasma partition coefficients (Kt:p), often used in physiologically based pharmacokinetics. Therefore, this work has assessed whether the data mining-based prediction of Vss can be made more accurate by using as input not only the compounds' molecular descriptors but also (a subset of) their predicted Kt:p values. Comparison of the models that used only molecular descriptors, in particular the Bagging decision tree (mean fold error of 2.33), with those employing predicted Kt:p values in addition to the molecular descriptors, such as the Bagging decision tree using adipose Kt:p (mean fold error of 2.29), indicated that the use of predicted Kt:p values as descriptors may be beneficial for accurate prediction of Vss using decision trees if prior feature selection is applied. The decision tree-based models presented in this work have an accuracy that is reasonable and similar to the accuracy of reported Vss inter-species extrapolations in the literature. The estimation of Vss for new compounds in drug discovery will benefit from methods that are able to integrate large and varied sources of data and flexible non-linear data mining methods such as decision trees, which can produce interpretable models. Graphical Abstract: Decision trees for the prediction of tissue partition coefficient and volume of distribution of drugs.
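
    A bare-bones sketch of a bagged regression tree predicting log Vss and reporting a mean fold error, the accuracy measure quoted above (descriptors and data are synthetic placeholders, not the study's compounds):

        # Sketch only: bagging of regression trees plus a fold-error calculation.
        import numpy as np
        from sklearn.ensemble import BaggingRegressor
        from sklearn.model_selection import train_test_split
        from sklearn.tree import DecisionTreeRegressor

        rng = np.random.default_rng(0)
        X = rng.normal(size=(300, 40))               # placeholder molecular descriptors and Kt:p values
        log_vss = 0.5 * X[:, 0] + rng.normal(scale=0.3, size=300)

        X_tr, X_te, y_tr, y_te = train_test_split(X, log_vss, random_state=0)
        model = BaggingRegressor(DecisionTreeRegressor(), n_estimators=50, random_state=0).fit(X_tr, y_tr)

        fold_errors = 10 ** np.abs(model.predict(X_te) - y_te)   # fold error on the original scale
        print("Mean fold error:", fold_errors.mean())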

  9. A New Architecture for Making Moral Agents Based on C4.5 Decision Tree Algorithm

    OpenAIRE

    Meisam Azad-Manjiri

    2014-01-01

    Given the influence of robots in various fields of life, the issue of trusting them is important, especially when a robot deals with people directly. One possible way to gain this confidence is to add a moral dimension to the robots. Therefore, we present a new architecture for building moral agents that learn from demonstrations. This agent is based on Beauchamp and Childress's principles of biomedical ethics (a type of deontological theory) and uses decision tree a...

  10. SITUATIONAL CONTROL OF HOT BLAST STOVES GROUP BASED ON DECISION TREE

    Directory of Open Access Journals (Sweden)

    E. I. Kobysh

    2016-09-01

    In this paper, a control system for a group of hot blast stoves was developed; it operates on the basis of the packing heating control subsystem and the subsystem for forecasting mode durations in the hot blast stoves within the APCS of iron smelting in a blast furnace. Using multi-criteria optimization methods, the behaviour of the control system is adjusted to take into account the current production situation that has arisen during the heating of the packing of each hot blast stove in the group. A situation recognition algorithm and the selection of control scenarios based on a decision tree were developed.

  11. Decision tree and PCA-based fault diagnosis of rotating machinery

    Science.gov (United States)

    Sun, Weixiang; Chen, Jin; Li, Jiaqing

    2007-04-01

    After analysing the flaws of conventional fault diagnosis methods, data mining technology is introduced to the fault diagnosis field, and a new method based on the C4.5 decision tree and principal component analysis (PCA) is proposed. In this method, PCA is used to reduce features after data collection, preprocessing and feature extraction. Then, C4.5 is trained using the samples to generate a decision tree model with diagnosis knowledge. Finally, the tree model is used to make the diagnosis analysis. To validate the proposed method, six kinds of running states (normal or without any defect, unbalance, rotor radial rub, oil whirl, shaft crack, and a simultaneous state of unbalance and radial rub) are simulated on a Bently Rotor Kit RK4 to test the C4.5- and PCA-based method and a back-propagation neural network (BPNN). The results show that the C4.5- and PCA-based diagnosis method has higher accuracy and needs less training time than BPNN.

  12. Data Fusion Research of Triaxial Human Body Motion Gesture based on Decision Tree

    Directory of Open Access Journals (Sweden)

    Feihong Zhou

    2014-05-01

    The state of development of human body motion gesture data fusion at home and abroad is analyzed. A triaxial accelerometer is adopted to develop a wearable human body motion gesture monitoring system aimed at healthcare for the elderly. After a brief introduction to the decision tree algorithm, the WEKA workbench is adopted to generate a human body motion gesture decision tree. Finally, the classification quality of the decision tree is validated through experiments. The experimental results show that the decision tree algorithm can reach an average prediction accuracy of 97.5% with a lower time cost.

  13. Decision tree-based learning to predict patient controlled analgesia consumption and readjustment

    Directory of Open Access Journals (Sweden)

    Hu Yuh-Jyh

    2012-11-01

    Background: Appropriate postoperative pain management contributes to earlier mobilization, shorter hospitalization, and reduced cost. The undertreatment of pain may impede short-term recovery and have a detrimental long-term effect on health. This study focuses on Patient Controlled Analgesia (PCA), which is a delivery system for pain medication. This study proposes and demonstrates how to use machine learning and data mining techniques to predict analgesic requirements and PCA readjustment. Methods: The sample in this study included 1099 patients. Every patient was described by 280 attributes, including the class attribute. In addition to commonly studied demographic and physiological factors, this study emphasizes attributes related to PCA. We used decision tree-based learning algorithms to predict analgesic consumption and PCA control readjustment based on the first few hours of PCA medications. We also developed a nearest neighbor-based data cleaning method to alleviate the class-imbalance problem in PCA setting readjustment prediction. Results: The prediction accuracies of total analgesic consumption (continuous dose and PCA dose) and PCA analgesic requirement (PCA dose only) by an ensemble of decision trees were 80.9% and 73.1%, respectively. Decision tree-based learning outperformed Artificial Neural Network, Support Vector Machine, Random Forest, Rotation Forest, and Naïve Bayesian classifiers in analgesic consumption prediction. The proposed data cleaning method improved the performance of every learning method in this study of PCA setting readjustment prediction. Comparative analysis identified the informative attributes from the data mining models and compared them with the correlates of analgesic requirement reported in previous works. Conclusion: This study presents a real-world application of data mining to anesthesiology. Unlike previous research, this study considers a wider variety of predictive factors, including PCA...
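
    The paper's own nearest neighbor-based cleaning method is not reproduced here; as a stand-in, imbalanced-learn's EditedNearestNeighbours can illustrate how such cleaning pairs with an ensemble of decision trees (synthetic data and assumed settings):

        # Sketch only: nearest-neighbour cleaning of an imbalanced set, then bagged decision trees.
        import numpy as np
        from imblearn.under_sampling import EditedNearestNeighbours
        from sklearn.ensemble import BaggingClassifier
        from sklearn.model_selection import train_test_split
        from sklearn.tree import DecisionTreeClassifier

        rng = np.random.default_rng(0)
        X = rng.normal(size=(1099, 30))                  # placeholder patient/PCA attributes
        y = (rng.random(1099) < 0.15).astype(int)        # imbalanced "readjustment needed" labels

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
        X_clean, y_clean = EditedNearestNeighbours().fit_resample(X_tr, y_tr)

        clf = BaggingClassifier(DecisionTreeClassifier(), n_estimators=50, random_state=0)
        clf.fit(X_clean, y_clean)
        print("Accuracy after cleaning:", clf.score(X_te, y_te))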

  14. A comparative study on the predictive ability of the decision tree, support vector machine and neuro-fuzzy models in landslide susceptibility mapping using GIS

    Science.gov (United States)

    Pradhan, Biswajeet

    2013-02-01

    The purpose of the present study is to compare the prediction performances of three different approaches, namely decision tree (DT), support vector machine (SVM) and adaptive neuro-fuzzy inference system (ANFIS), for landslide susceptibility mapping in the Penang Hill area, Malaysia. The necessary input parameters for the landslide susceptibility assessments were obtained from various sources. At first, landslide locations were identified by aerial photographs and field surveys, and a total of 113 landslide locations were constructed. The study area contains 340,608 pixels, while a total of 8403 pixels contain landslides. The landslide inventory was randomly partitioned into two subsets: (1) part 1, containing 50% (4000 landslide grid cells), was used in the training phase of the models; (2) part 2 is a validation dataset of 50% (4000 landslide grid cells) used to validate the three models and confirm their accuracy. The digitally processed images of the input parameters were combined in GIS. Finally, landslide susceptibility maps were produced, and the performances were assessed and discussed. A total of fifteen landslide susceptibility maps were produced using the DT, SVM and ANFIS based models, and the resultant maps were validated using the landslide locations. The prediction performances of these maps were checked by receiver operating characteristics (ROC) using both the success rate curve and the prediction rate curve. The validation results showed that the area under the ROC curve for the fifteen models produced using DT, SVM and ANFIS varied from 0.8204 to 0.9421 for the success rate curve and from 0.7580 to 0.8307 for the prediction rate curve, respectively. Moreover, the prediction curves revealed that model 5 of DT has slightly higher prediction performance (83.07), whereas the success rate showed that model 5 of ANFIS has better prediction capability (94.21) among all models. The results of this study showed that landslide susceptibility mapping in the Penang Hill area using the three approaches (e

  15. Preventing KPI Violations in Business Processes based on Decision Tree Learning and Proactive Runtime Adaptation

    Directory of Open Access Journals (Sweden)

    Dimka Karastoyanova

    2012-01-01

    The performance of business processes is measured and monitored in terms of Key Performance Indicators (KPIs). If the monitoring results show that the KPI targets are violated, the underlying reasons have to be identified and the process should be adapted accordingly to address the violations. In this paper we propose an integrated monitoring, prediction and adaptation approach for preventing KPI violations of business process instances. KPIs are monitored continuously while the process is executed. Additionally, based on KPI measurements of historical process instances, we use decision tree learning to construct classification models, which are then used to predict the KPI value of an instance while it is still running. If a KPI violation is predicted, we identify adaptation requirements and adaptation strategies in order to prevent the violation.

  16. Intrusion Detection System Based on Decision Tree over Big Data in Fog Environment

    Directory of Open Access Journals (Sweden)

    Kai Peng

    2018-01-01

    Fog computing, as the supplement of cloud computing, can provide low-latency services between mobile users and the cloud. However, fog devices may encounter security challenges as a result of the fog nodes being close to the end users and having limited computing ability. Traditional network attacks may destroy the system of fog nodes. An intrusion detection system (IDS) is a proactive security protection technology and can be used in the fog environment. Although IDSs in traditional networks have been well investigated, directly using them in the fog environment may be inappropriate. Fog nodes produce massive amounts of data at all times, and thus enabling an IDS over big data in the fog environment is of paramount importance. In this study, we propose an IDS based on a decision tree. Firstly, we propose a preprocessing algorithm to digitize the strings in the given dataset and then normalize the whole data, to ensure the quality of the input data and improve the efficiency of detection. Secondly, we use the decision tree method for our IDS, and then we compare this method with the Naïve Bayesian method as well as the KNN method. Both the 10% dataset and the full dataset are tested. Our proposed method not only completely detects four kinds of attacks but also enables the detection of twenty-two kinds of attacks. The experimental results show that our IDS is effective and precise. Above all, our IDS can be used in a fog computing environment over big data.
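
    The preprocessing and model comparison described above can be sketched with scikit-learn; the KDD-style column names and the CSV file are assumptions standing in for the study's dataset:

        # Sketch only: digitize categorical strings, normalize, and compare DT / Naive Bayes / KNN.
        import pandas as pd
        from sklearn.compose import ColumnTransformer
        from sklearn.model_selection import cross_val_score
        from sklearn.naive_bayes import GaussianNB
        from sklearn.neighbors import KNeighborsClassifier
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import MinMaxScaler, OrdinalEncoder
        from sklearn.tree import DecisionTreeClassifier

        df = pd.read_csv("network_connections.csv")      # hypothetical labelled traffic records
        categorical = ["protocol_type", "service", "flag"]
        numeric = [c for c in df.columns if c not in categorical + ["label"]]

        preprocess = ColumnTransformer([
            ("digitize", OrdinalEncoder(handle_unknown="use_encoded_value", unknown_value=-1), categorical),
            ("normalize", MinMaxScaler(), numeric),       # scale numeric features to [0, 1]
        ])

        X, y = df.drop(columns="label"), df["label"]
        for name, clf in [("decision tree", DecisionTreeClassifier()),
                          ("naive Bayes", GaussianNB()),
                          ("KNN", KNeighborsClassifier())]:
            acc = cross_val_score(make_pipeline(preprocess, clf), X, y, cv=5).mean()
            print(name, "accuracy:", round(acc, 4))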

  17. Traditional Chinese medicine pharmacovigilance in signal detection: decision tree-based data classification.

    Science.gov (United States)

    Wei, Jian-Xiang; Wang, Jing; Zhu, Yun-Xia; Sun, Jun; Xu, Hou-Ming; Li, Ming

    2018-03-09

    Traditional Chinese Medicine (TCM) is a style of traditional medicine informed by modern medicine but built on a foundation of more than 2500 years of Chinese medical practice. According to statistics, TCM accounts for approximately 14% of the total adverse drug reaction (ADR) spontaneous reporting data in China. Because of the complexity of the components in TCM formulae, which makes them essentially different from Western medicine, it is critical to determine whether ADR reports of TCM should be analyzed independently. Reports in the Chinese spontaneous reporting database between 2010 and 2011 were selected. The dataset was processed and divided into the total sample (all data) and the subsample (TCM data only). Four different ADR signal detection methods currently widely used in China (PRR, ROR, MHRA and IC) were applied for signal detection on the two samples. By comparison of the experimental results, three of them (PRR, MHRA and IC) were chosen for the experiment. We designed several indicators for performance evaluation, such as R (recall ratio), P (precision ratio), and D (discrepancy ratio), based on the reference database, and then constructed a decision tree for data classification based on these indicators. For PRR: R1 - R2 = 0.72%, P1 - P2 = 0.16% and D = 0.92%; for MHRA: R1 - R2 = 0.97%, P1 - P2 = 0.20% and D = 1.18%; for IC: R1 - R2 = 1.44%, P2 - P1 = 4.06% and D = 4.72%. The thresholds of R, P and D are set as 2%, 2% and 3%, respectively. Based on the decision tree, the results are "separation" for PRR, MHRA and IC. In order to improve the efficiency and accuracy of signal detection, we suggest that TCM data should be separated from the total sample when conducting analyses.

  18. Decision Tree-Based Contextual Location Prediction from Mobile Device Logs

    Directory of Open Access Journals (Sweden)

    Linyuan Xia

    2018-01-01

    Contextual location prediction is an important topic in the field of personalized location recommendation in LBS (location-based services). With the advancement of mobile positioning techniques and the various sensors embedded in smartphones, it is convenient to obtain massive human mobile trajectories and to derive a large amount of valuable information from geospatial big data. Extracting and recognizing personally interesting places and predicting the next semantic location have become research hot spots in LBS. In this paper, we propose an approach to predict the next personally semantic place using historical visiting patterns derived from mobile device logs. To address the problems of location imprecision and lack of semantic information, a modified trip-identification method is employed to extract key visit points from GPS trajectories more accurately, while semantic information is added through stay point detection and semantic place recognition. Finally, a decision tree model is adopted to explore the spatial, temporal, and sequential features in contextual location prediction. To validate the effectiveness of our approach, experiments were conducted on a trajectory collection in the Guangzhou downtown area. The results verified the feasibility of our approach for contextual location prediction from continuous mobile device logs.

  19. Landslide susceptibility mapping using decision-tree based CHi-squared automatic interaction detection (CHAID) and Logistic regression (LR) integration

    International Nuclear Information System (INIS)

    Althuwaynee, Omar F; Pradhan, Biswajeet; Ahmad, Noordin

    2014-01-01

    This article uses a methodology based on chi-squared automatic interaction detection (CHAID), a multivariate method with an automatic classification capacity, to analyse large numbers of landslide conditioning factors. This algorithm was developed to overcome the subjectivity of the manual categorization of scale data of landslide conditioning factors, and to predict the rainfall-induced landslide susceptibility map in Kuala Lumpur city and surrounding areas using a geographic information system (GIS). The main objective of this article is to use the CHAID method to perform the best classification fit for each conditioning factor and then to combine it with logistic regression (LR). The LR model was used to find the corresponding coefficients of the best-fitting function that assesses the optimal terminal nodes. A cluster pattern of landslide locations was extracted in a previous study using the nearest neighbour index (NNI), which was then used to identify the range of clustered landslide locations. The clustered locations were used as model training data with 14 landslide conditioning factors such as topography-derived parameters, lithology, NDVI, and land use and land cover maps. The Pearson chi-squared value was used to find the best classification fit between the dependent variable and the conditioning factors. Finally, the relationships between the conditioning factors were assessed and the landslide susceptibility map (LSM) was produced. The area under the curve (AUC) was used to test the model reliability and prediction capability with the training and validation landslide locations, respectively. This study proved the efficiency and reliability of the decision tree (DT) model in landslide susceptibility mapping. It also provided a valuable scientific basis for spatial decision making in planning and urban management studies.

  20. Landslide susceptibility mapping using decision-tree based CHi-squared automatic interaction detection (CHAID) and Logistic regression (LR) integration

    Science.gov (United States)

    Althuwaynee, Omar F.; Pradhan, Biswajeet; Ahmad, Noordin

    2014-06-01

    This article uses a methodology based on chi-squared automatic interaction detection (CHAID), a multivariate method with an automatic classification capacity, to analyse large numbers of landslide conditioning factors. This algorithm was developed to overcome the subjectivity of the manual categorization of scale data of landslide conditioning factors, and to predict the rainfall-induced landslide susceptibility map in Kuala Lumpur city and surrounding areas using a geographic information system (GIS). The main objective of this article is to use the CHAID method to perform the best classification fit for each conditioning factor and then to combine it with logistic regression (LR). The LR model was used to find the corresponding coefficients of the best-fitting function that assesses the optimal terminal nodes. A cluster pattern of landslide locations was extracted in a previous study using the nearest neighbour index (NNI), which was then used to identify the range of clustered landslide locations. The clustered locations were used as model training data with 14 landslide conditioning factors such as topography-derived parameters, lithology, NDVI, and land use and land cover maps. The Pearson chi-squared value was used to find the best classification fit between the dependent variable and the conditioning factors. Finally, the relationships between the conditioning factors were assessed and the landslide susceptibility map (LSM) was produced. The area under the curve (AUC) was used to test the model reliability and prediction capability with the training and validation landslide locations, respectively. This study proved the efficiency and reliability of the decision tree (DT) model in landslide susceptibility mapping. It also provided a valuable scientific basis for spatial decision making in planning and urban management studies.

  1. Change Analysis and Decision Tree Based Detection Model for Residential Objects across Multiple Scales

    Directory of Open Access Journals (Sweden)

    CHEN Liyan

    2018-03-01

    Full Text Available Change analysis and detection play an important role in updating multi-scale databases. When an updated larger-scale dataset is overlaid on a to-be-updated smaller-scale dataset, attention usually focuses on temporal changes caused by the evolution of spatial entities, while little attention is paid to representation changes introduced by map generalization. Using polygonal building data as an example, this study examines the changes from different perspectives, such as the reasons for their occurrence and the forms in which they appear. Based on this knowledge, we employ a decision tree from the field of machine learning to establish a change detection model. The aim of the proposed model is to distinguish temporal changes that need to be applied as updates to the smaller-scale dataset from representation changes. The proposed method is validated through tests using real-world building data from Guangzhou city. The experimental results show that the overall precision of change detection is more than 90%, which indicates that our method is effective at identifying changed objects.

  2. Beef Quality Identification Using Thresholding Method and Decision Tree Classification Based on Android Smartphone

    Directory of Open Access Journals (Sweden)

    Kusworo Adi

    2017-01-01

    Full Text Available Beef is an animal food product with high nutritional value because it contains carbohydrates, proteins, fats, vitamins, and minerals. The quality of beef should therefore be maintained so that consumers get good beef. Determination of beef quality is commonly conducted visually by comparing the actual beef with reference pictures of each beef class. This process has weaknesses: it is subjective in nature and takes a considerable amount of time. An automated system based on image processing that is capable of determining beef quality is therefore required. This research aims to develop an image segmentation method by processing digital images. The designed system consists of image acquisition with varied distance, resolution, and angle. Image segmentation separates the fat and meat regions using the Otsu thresholding method. Classification was carried out using the decision tree algorithm, and the best accuracies obtained were 90% for training and 84% for testing. Once developed, the system was embedded into an Android application. The results show that the image processing technique is capable of proper marbling score identification.
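
    A rough sketch of this pipeline, Otsu segmentation of fat versus meat followed by decision-tree classification of marbling features, is given below. The images, feature definitions and quality labels are invented for the example and do not reproduce the authors' dataset; Otsu's method is implemented directly in NumPy so the snippet stays self-contained.

        # Sketch of the beef-quality pipeline: an Otsu threshold separates bright fat
        # pixels from darker meat, a fat-ratio feature is derived, and a decision tree
        # classifies the marbling score. Synthetic images stand in for real acquisitions.
        import numpy as np
        from sklearn.tree import DecisionTreeClassifier
        from sklearn.model_selection import train_test_split

        def otsu_threshold(gray):
            """Return the Otsu threshold of an 8-bit greyscale image."""
            hist = np.bincount(gray.ravel(), minlength=256).astype(float)
            prob = hist / hist.sum()
            best_t, best_var = 0, -1.0
            for t in range(1, 256):
                w0, w1 = prob[:t].sum(), prob[t:].sum()
                if w0 == 0 or w1 == 0:
                    continue
                mu0 = (np.arange(t) * prob[:t]).sum() / w0
                mu1 = (np.arange(t, 256) * prob[t:]).sum() / w1
                between_var = w0 * w1 * (mu0 - mu1) ** 2   # between-class variance
                if between_var > best_var:
                    best_t, best_var = t, between_var
            return best_t

        def marbling_features(gray):
            t = otsu_threshold(gray)
            fat = gray >= t                      # bright pixels treated as fat
            return [fat.mean(),
                    gray[fat].mean() if fat.any() else 0.0,
                    gray[~fat].mean()]

        rng = np.random.default_rng(1)
        images, labels = [], []
        for quality in (0, 1, 2):                # hypothetical marbling classes
            for _ in range(40):
                img = rng.normal(90, 15, (64, 64))
                n_fat = 50 * (quality + 1)       # more fat flecks for higher marbling
                idx = rng.integers(0, 64, (n_fat, 2))
                img[idx[:, 0], idx[:, 1]] = rng.normal(200, 10, n_fat)
                images.append(np.clip(img, 0, 255).astype(np.uint8))
                labels.append(quality)

        X = np.array([marbling_features(im) for im in images])
        y = np.array(labels)
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
        tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_tr, y_tr)
        print("test accuracy:", round(tree.score(X_te, y_te), 2))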

  3. Classification and Progression Based on CFS-GA and C5.0 Boost Decision Tree of TCM Zheng in Chronic Hepatitis B.

    Science.gov (United States)

    Chen, Xiao Yu; Ma, Li Zhuang; Chu, Na; Zhou, Min; Hu, Yiyang

    2013-01-01

    Chronic hepatitis B (CHB) is a serious public health problem, and Traditional Chinese Medicine (TCM) plays an important role in the control and treatment of CHB. In TCM treatment, zheng discrimination is the most important step. In this paper, an approach based on CFS-GA (Correlation-based Feature Selection and Genetic Algorithm) and a C5.0 boost decision tree is used for zheng classification and progression in the TCM treatment of CHB. CFS-GA performs better than the typical CFS method. The attribute subset acquired by CFS-GA is classified by the C5.0 boost decision tree for TCM zheng classification of CHB, and the C5.0 decision tree outperforms the two typical decision trees NBTree and REPTree on CFS-GA, CFS, and no selection in comparison. Based on the critical indicators from the C5.0 decision tree, important laboratory indicators of zheng progression are obtained by stepwise discriminant analysis for expressing TCM zhengs in CHB, and alterations of the important indicators are also analyzed over zheng progression. In conclusion, all three decision trees perform better on CFS-GA than on CFS and no selection, and the C5.0 decision tree outperforms the two typical decision trees both with attribute selection and without it.

  4. Objective consensus from decision trees

    International Nuclear Information System (INIS)

    Putora, Paul Martin; Panje, Cedric M; Papachristofilou, Alexandros; Pra, Alan Dal; Hundsberger, Thomas; Plasswilm, Ludwig

    2014-01-01

    Consensus-based approaches provide an alternative to evidence-based decision making, especially in situations where high-level evidence is limited. Our aim was to demonstrate a novel source of information: objective consensus based on recommendations in decision tree format from multiple sources. Based on nine sample recommendations in decision tree format, a representative analysis was performed. The most common (mode) recommendations for each eventuality (each permutation of parameters) were determined. The same procedure was applied to real clinical recommendations for primary radiotherapy for prostate cancer. Data were collected from 16 radiation oncology centres, converted into decision tree format and analyzed in order to determine the objective consensus. Based on information from multiple sources in decision tree format, treatment recommendations can be assessed for every parameter combination. An objective consensus can be determined by means of mode recommendations without compromise or confrontation among the parties. In the clinical example involving prostate cancer therapy, three parameters were used with two cut-off values each (Gleason score, PSA, T-stage), resulting in a total of 27 possible combinations per decision tree. Despite significant variations among the recommendations, a mode recommendation could be found for specific combinations of parameters. Recommendations represented as decision trees can serve as a basis for objective consensus among multiple parties.

  5. Objective consensus from decision trees.

    Science.gov (United States)

    Putora, Paul Martin; Panje, Cedric M; Papachristofilou, Alexandros; Dal Pra, Alan; Hundsberger, Thomas; Plasswilm, Ludwig

    2014-12-05

    Consensus-based approaches provide an alternative to evidence-based decision making, especially in situations where high-level evidence is limited. Our aim was to demonstrate a novel source of information: objective consensus based on recommendations in decision tree format from multiple sources. Based on nine sample recommendations in decision tree format, a representative analysis was performed. The most common (mode) recommendations for each eventuality (each permutation of parameters) were determined. The same procedure was applied to real clinical recommendations for primary radiotherapy for prostate cancer. Data were collected from 16 radiation oncology centres, converted into decision tree format and analyzed in order to determine the objective consensus. Based on information from multiple sources in decision tree format, treatment recommendations can be assessed for every parameter combination. An objective consensus can be determined by means of mode recommendations without compromise or confrontation among the parties. In the clinical example involving prostate cancer therapy, three parameters were used with two cut-off values each (Gleason score, PSA, T-stage), resulting in a total of 27 possible combinations per decision tree. Despite significant variations among the recommendations, a mode recommendation could be found for specific combinations of parameters. Recommendations represented as decision trees can serve as a basis for objective consensus among multiple parties.
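
    The mode-based consensus procedure described in the two records above reduces to a few lines: enumerate every parameter combination, collect each centre's recommendation for it, and keep the most frequent one. The per-centre rules and treatment labels below are invented stand-ins for the 16 centres' real decision trees, and the example uses two bins per parameter (8 combinations) rather than the study's three.

        # Sketch of "objective consensus": for every permutation of the parameters,
        # take the mode of the recommendations returned by each centre's decision tree.
        # The per-centre rules are hypothetical placeholders, not the study's trees.
        from collections import Counter
        from itertools import product

        def centre_a(gleason_high, psa_high, t_high):
            return "RT + long ADT" if gleason_high or (psa_high and t_high) else "RT alone"

        def centre_b(gleason_high, psa_high, t_high):
            return "RT + long ADT" if gleason_high and psa_high else "RT alone"

        def centre_c(gleason_high, psa_high, t_high):
            return "RT + short ADT" if psa_high else "RT alone"

        centres = [centre_a, centre_b, centre_c]

        for combo in product([False, True], repeat=3):          # every eventuality
            votes = Counter(tree(*combo) for tree in centres)
            recommendation, count = votes.most_common(1)[0]
            print(combo, "->", recommendation, f"({count}/{len(centres)} centres)")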

  6. Development of a diagnostic decision tree for obstructive pulmonary diseases based on real-life data

    NARCIS (Netherlands)

    Metting, Esther I; In 't Veen, Johannes C C M; Dekhuijzen, P N Richard; van Heijst, Ellen; Kocks, Janwillem W H; Muilwijk-Kroes, Jacqueline B; Chavannes, Niels H; van der Molen, Thys

    2016-01-01

    The aim of this study was to develop and explore the diagnostic accuracy of a decision tree derived from a large real-life primary care population. Data from 9297 primary care patients (45% male, mean age 53±17 years) with suspicion of an obstructive pulmonary disease was derived from an

  7. A classification model of Hyperion image base on SAM combined decision tree

    Science.gov (United States)

    Wang, Zhenghai; Hu, Guangdao; Zhou, YongZhang; Liu, Xin

    2009-10-01

    Monitoring the Earth using imaging spectrometers has necessitated more accurate analyses and new applications of remote sensing. A very high dimensional input space requires an exponentially large amount of data to adequately and reliably represent the classes in that space. On the other hand, as the input dimensionality increases, the hypothesis space grows exponentially, which makes classification performance highly unreliable; classification of hyperspectral images with traditional algorithms is therefore challenging, and new algorithms have to be developed for hyperspectral data. The Spectral Angle Mapper (SAM) is a physically based spectral classification that uses an n-dimensional angle to match pixels to reference spectra. The algorithm determines the spectral similarity between two spectra by calculating the angle between them, treating them as vectors in a space with dimensionality equal to the number of bands. The key difficulty is that the SAM threshold has to be defined manually, and the classification precision depends on how reasonable that threshold is. In order to resolve this problem, this paper proposes a new automatic classification model for remote sensing images using SAM combined with a decision tree. It can automatically choose an appropriate SAM threshold and improve the classification precision of SAM based on the analysis of field spectra. The test area, located in Heqing, Yunnan, was imaged by the EO-1 Hyperion imaging spectrometer using 224 bands in the visible and near infrared. The area included limestone areas, rock fields, soil and forests, and was classified into four different vegetation and soil types. The results show that this method chooses an appropriate SAM threshold and effectively eliminates the disturbance and influence of unwanted objects, so as to improve the classification precision. Compared with the likelihood classification by field survey data, the classification precision of this model
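
    The spectral angle at the core of SAM is straightforward to compute; what the abstract automates is the choice of the angle threshold. The sketch below shows the angle itself and a simple data-driven threshold per reference class, which is only a rough analogue of the tree-assisted procedure the authors describe; the spectra are synthetic.

        # Spectral Angle Mapper (SAM) core: the angle between a pixel spectrum and a
        # reference spectrum, both treated as vectors in band space. The threshold is
        # derived here from training spectra instead of being fixed by hand, as a loose
        # stand-in for the automatic selection the paper proposes. Spectra are synthetic.
        import numpy as np

        def spectral_angle(pixel, reference):
            cos = np.dot(pixel, reference) / (np.linalg.norm(pixel) * np.linalg.norm(reference))
            return np.arccos(np.clip(cos, -1.0, 1.0))           # radians

        rng = np.random.default_rng(2)
        bands = 224                                              # Hyperion band count
        reference = np.abs(rng.normal(0.3, 0.1, bands))          # hypothetical field spectrum

        # Training spectra of the same class: perturbed copies of the reference.
        train = reference + rng.normal(0, 0.02, (50, bands))
        angles = np.array([spectral_angle(s, reference) for s in train])

        # Data-driven threshold, e.g. the 95th percentile of within-class angles.
        threshold = np.percentile(angles, 95)

        candidate = reference + rng.normal(0, 0.05, bands)       # pixel to classify
        angle = spectral_angle(candidate, reference)
        print(f"angle = {angle:.4f} rad, threshold = {threshold:.4f} rad,",
              "match" if angle <= threshold else "no match")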

  8. The risk of disabling, surgery and reoperation in Crohn's disease - A decision tree-based approach to prognosis.

    Science.gov (United States)

    Dias, Cláudia Camila; Pereira Rodrigues, Pedro; Fernandes, Samuel; Portela, Francisco; Ministro, Paula; Martins, Diana; Sousa, Paula; Lago, Paula; Rosa, Isadora; Correia, Luis; Moura Santos, Paula; Magro, Fernando

    2017-01-01

    Crohn's disease (CD) is a chronic inflammatory bowel disease known to carry a high risk of disabling and often requiring surgical interventions. This article describes a decision-tree based approach that defines CD patients' risk of undergoing disabling events, surgical interventions and reoperations, based on clinical and demographic variables. This multicentric study involved 1547 CD patients retrospectively enrolled and divided into two cohorts: a derivation cohort (80%) and a validation cohort (20%). Decision trees were built by applying the CHAID algorithm for the selection of variables. Three-level decision trees were built for the risk of disabling and reoperation, whereas the risk of surgery was described in a two-level tree. A receiver operating characteristic (ROC) analysis was performed, and the area under the curve (AUC) was higher than 70% for all outcomes. The defined risk cut-off values show usefulness for the assessed outcomes: risk levels above 75% for disabling had an odds of test positivity of 4.06 [3.50-4.71], whereas risk levels below 34% and 19% excluded surgery and reoperation with an odds of test negativity of 0.15 [0.09-0.25] and 0.50 [0.24-1.01], respectively. Overall, patients with a B2 or B3 phenotype had a higher proportion of disabling disease and surgery, while patients with later introduction of pharmacological therapy (1 month after initial surgery) had a higher proportion of reoperation. The decision-tree based approach used in this study, with demographic and clinical variables, has been shown to be a valid and useful approach to depict the risks of disabling, surgery and reoperation.

  9. ATLAAS: an automatic decision tree-based learning algorithm for advanced image segmentation in positron emission tomography.

    Science.gov (United States)

    Berthon, Beatrice; Marshall, Christopher; Evans, Mererid; Spezi, Emiliano

    2016-07-07

    Accurate and reliable tumour delineation on positron emission tomography (PET) is crucial for radiotherapy treatment planning. PET automatic segmentation (PET-AS) eliminates intra- and interobserver variability, but there is currently no consensus on the optimal method to use, as different algorithms appear to perform better for different types of tumours. This work aimed to develop a predictive segmentation model, trained to automatically select and apply the best PET-AS method according to the tumour characteristics. ATLAAS, the automatic decision tree-based learning algorithm for advanced segmentation, is based on supervised machine learning using decision trees. The model includes nine PET-AS methods and was trained on 100 PET scans with known true contour. A decision tree was built for each PET-AS algorithm to predict its accuracy, quantified using the Dice similarity coefficient (DSC), according to the tumour volume, tumour peak-to-background SUV ratio and a regional texture metric. The performance of ATLAAS was evaluated for 85 PET scans obtained from fillable and printed subresolution sandwich phantoms. ATLAAS showed excellent accuracy across a wide range of phantom data and predicted the best or near-best segmentation algorithm in 93% of cases. ATLAAS outperformed all single PET-AS methods on fillable phantom data with a DSC of 0.881, while the DSC for H&N phantom data was 0.819. DSCs higher than 0.650 were achieved in all cases. ATLAAS is an advanced automatic image segmentation algorithm based on decision tree predictive modelling, which can be trained on images with known true contour to predict the best PET-AS method when the true contour is unknown. ATLAAS provides robust and accurate image segmentation with potential applications to radiation oncology.
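
    The ATLAAS idea, one regression tree per segmentation method predicting that method's Dice score from tumour descriptors, with the best-predicted method selected at run time, can be sketched as below. The three descriptors match those named in the abstract, but the training data and the candidate method names are placeholders, not the authors' nine PET-AS algorithms.

        # Sketch of ATLAAS-style method selection: for each PET auto-segmentation
        # (PET-AS) method, a decision tree regressor predicts the Dice similarity
        # coefficient (DSC) from tumour volume, peak-to-background SUV ratio and a
        # texture metric; the method with the highest predicted DSC is chosen.
        # Training data and method names are synthetic placeholders.
        import numpy as np
        from sklearn.tree import DecisionTreeRegressor

        rng = np.random.default_rng(3)
        n = 100
        X = np.column_stack([
            rng.uniform(1, 50, n),        # tumour volume (mL)
            rng.uniform(2, 12, n),        # peak-to-background SUV ratio
            rng.uniform(0, 1, n),         # regional texture metric
        ])

        # Hypothetical "true" DSC of three candidate methods on each training scan.
        dsc = {
            "fixed_threshold": 0.60 + 0.03 * X[:, 1] / 12 + rng.normal(0, 0.05, n),
            "region_growing":  0.70 + 0.05 * X[:, 0] / 50 + rng.normal(0, 0.05, n),
            "clustering":      0.65 + 0.10 * X[:, 2] + rng.normal(0, 0.05, n),
        }

        models = {name: DecisionTreeRegressor(max_depth=4, random_state=0).fit(X, y)
                  for name, y in dsc.items()}

        new_case = np.array([[20.0, 8.0, 0.4]])   # descriptors of an unseen tumour
        predicted = {name: float(m.predict(new_case)[0]) for name, m in models.items()}
        best = max(predicted, key=predicted.get)
        print("predicted DSC per method:", {k: round(v, 3) for k, v in predicted.items()})
        print("selected PET-AS method:", best)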

  10. A ROUGH SET DECISION TREE BASED MLP-CNN FOR VERY HIGH RESOLUTION REMOTELY SENSED IMAGE CLASSIFICATION

    Directory of Open Access Journals (Sweden)

    C. Zhang

    2017-09-01

    Full Text Available Recent advances in remote sensing have produced a great number of very high resolution (VHR) images acquired at sub-metre spatial resolution. These VHR remotely sensed data pose enormous challenges for effective processing, analysis and classification because of their high spatial complexity and heterogeneity. Although many computer-aided classification methods based on machine learning approaches have been developed over the past decades, most of them are oriented toward pixel-level spectral differentiation, e.g. the Multi-Layer Perceptron (MLP), and are unable to exploit the abundant spatial detail within VHR images. This paper introduces a rough set model as a general framework to objectively characterize the uncertainty in CNN classification results and to partition them into correctly and incorrectly classified areas on the map. The correctly classified regions of the CNN were trusted and maintained, whereas the misclassified areas were reclassified using a decision tree with both CNN and MLP. The effectiveness of the proposed rough set decision tree based MLP-CNN was tested using an urban area at Bournemouth, United Kingdom. The MLP-CNN, which captures the complementarity between CNN and MLP through the rough set based decision tree, achieved the best classification performance both visually and numerically. This research therefore paves the way to fully automatic and effective VHR image classification.

  11. Identification of pests and diseases of Dalbergia hainanensis based on EVI time series and classification of decision tree

    Science.gov (United States)

    Luo, Qiu; Xin, Wu; Qiming, Xiong

    2017-06-01

    In vegetation remote sensing information extraction, phenological features and the low performance of remote sensing analysis algorithms are often not considered. To solve this problem, a method for extracting remote sensing vegetation information based on EVI time series and a decision-tree classification using multi-source branch similarity is proposed. First, to improve the stability of recognition accuracy over the time series, the seasonal features of vegetation are extracted based on the fitted span of the time series. Second, decision-tree similarity is assessed through adaptive selection of paths or probability parameters of component prediction; used as an index, it evaluates the degree of task association, decides whether to migrate the multi-source decision tree, and ensures the speed of migration. Finally, the accuracy of classification and recognition of pests and diseases reaches 87%-98% for commercial Dalbergia hainanensis forest, which is significantly better than the MODIS coverage accuracy of 80%-96% in this area. The validity of the proposed method is thereby verified.

  12. How to differentiate acute pelvic inflammatory disease from acute appendicitis? A decision tree based on CT findings

    Energy Technology Data Exchange (ETDEWEB)

    El Hentour, Kim; Millet, Ingrid; Pages-Bouic, Emmanuelle; Curros-Doyon, Fernanda; Taourel, Patrice [Lapeyronie Hospital, Department of Medical Imaging, Montpellier (France); Molinari, Nicolas [UMR 5149 IMAG, CHU, Department of Medical Information and Statistics, Montpellier (France)

    2018-02-15

    To construct a decision tree based on CT findings to differentiate acute pelvic inflammatory disease (PID) from acute appendicitis (AA) in women with lower abdominal pain and inflammatory syndrome. This retrospective study was approved by our institutional review board and informed consent was waived. Contrast-enhanced CT studies of 109 women with acute PID and 218 age-matched women with AA were retrospectively and independently reviewed by two radiologists to identify CT findings predictive of PID or AA. Surgical and laboratory data were used for the PID and AA reference standard. Appropriate tests were performed to compare PID and AA and a CT decision tree using the classification and regression tree (CART) algorithm was generated. The median patient age was 28 years (interquartile range, 22-39 years). According to the decision tree, an appendiceal diameter ≥ 7 mm was the most discriminating criterion for differentiating acute PID and AA, followed by a left tubal diameter ≥ 10 mm, with a global accuracy of 98.2 % (95 % CI: 96-99.4). Appendiceal diameter and left tubal thickening are the most discriminating CT criteria for differentiating acute PID from AA. (orig.)

  13. How to differentiate acute pelvic inflammatory disease from acute appendicitis? A decision tree based on CT findings

    International Nuclear Information System (INIS)

    El Hentour, Kim; Millet, Ingrid; Pages-Bouic, Emmanuelle; Curros-Doyon, Fernanda; Taourel, Patrice; Molinari, Nicolas

    2018-01-01

    To construct a decision tree based on CT findings to differentiate acute pelvic inflammatory disease (PID) from acute appendicitis (AA) in women with lower abdominal pain and inflammatory syndrome. This retrospective study was approved by our institutional review board and informed consent was waived. Contrast-enhanced CT studies of 109 women with acute PID and 218 age-matched women with AA were retrospectively and independently reviewed by two radiologists to identify CT findings predictive of PID or AA. Surgical and laboratory data were used for the PID and AA reference standard. Appropriate tests were performed to compare PID and AA and a CT decision tree using the classification and regression tree (CART) algorithm was generated. The median patient age was 28 years (interquartile range, 22-39 years). According to the decision tree, an appendiceal diameter ≥ 7 mm was the most discriminating criterion for differentiating acute PID and AA, followed by a left tubal diameter ≥ 10 mm, with a global accuracy of 98.2 % (95 % CI: 96-99.4). Appendiceal diameter and left tubal thickening are the most discriminating CT criteria for differentiating acute PID from AA. (orig.)
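
    The final two-level tree reported in the two preceding records reduces to two measurements. A literal transcription of that published rule (appendiceal diameter first, then left tubal diameter) is shown below as a plain function; how missing measurements and cases below both cut-offs should be handled is not stated in the abstract, so those branches are our own assumption.

        # Direct transcription of the two-level CT decision tree reported above:
        # an appendiceal diameter >= 7 mm points to acute appendicitis (AA), otherwise a
        # left tubal diameter >= 10 mm points to acute pelvic inflammatory disease (PID).
        # Handling of missing values and of cases below both cut-offs is assumed here.
        from typing import Optional

        def classify_ct(appendix_mm: Optional[float], left_tube_mm: Optional[float]) -> str:
            if appendix_mm is None or left_tube_mm is None:
                return "indeterminate (measurement missing)"
            if appendix_mm >= 7.0:
                return "acute appendicitis"
            if left_tube_mm >= 10.0:
                return "acute pelvic inflammatory disease"
            return "indeterminate (both measurements below the cut-offs)"

        print(classify_ct(9.0, 6.0))    # -> acute appendicitis
        print(classify_ct(5.0, 12.0))   # -> acute pelvic inflammatory disease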

  14. Skin autofluorescence based decision tree in detection of impaired glucose tolerance and diabetes.

    Directory of Open Access Journals (Sweden)

    Andries J Smit

    Full Text Available Diabetes (DM) and impaired glucose tolerance (IGT) detection are conventionally based on glycemic criteria. Skin autofluorescence (SAF) is a noninvasive proxy of tissue accumulation of advanced glycation endproducts (AGEs), which are considered to be a carrier of glycometabolic memory. We compared SAF and a SAF-based decision tree (SAF-DM) with fasting plasma glucose (FPG) and HbA1c, and additionally with the Finnish Diabetes Risk Score (FINDRISC) questionnaire ± FPG, for detection of oral glucose tolerance test (OGTT)- or HbA1c-defined IGT and diabetes in intermediate-risk persons. Participants had ≥1 metabolic syndrome criteria. They underwent an OGTT, HbA1c, SAF and FINDRISC, in addition to SAF-DM, which includes SAF, age, BMI, and conditional questions on DM family history, antihypertensives, and renal or cardiovascular disease events (CVE). 218 persons, age 56 yr, 128M/90F, 97 with previous CVE, participated. With the OGTT, 28 had DM, 46 IGT, 41 impaired fasting glucose, and 103 normal glucose tolerance. SAF alone yielded 23 false positives (FP) and 34 false negatives (FN) (sensitivity (S) 68%; specificity (SP) 86%). With SAF-DM, FP were reduced to 18 and FN to 16 (5 with DM) (S 82%; SP 89%). HbA1c scored 48 FP, 18 FN (S 80%; SP 75%). Using HbA1c-defined DM-IGT/suspicion ≥6%/42 mmol/mol, SAF-DM scored 33 FP, 24 FN (4 DM) (S 76%; SP 72%), and FPG 29 FP, 41 FN (S 71%; SP 80%). FINDRISC ≥10 points as detection of HbA1c-based diabetes/suspicion scored 79 FP, 23 FN (S 69%; SP 45%). SAF-DM is superior to FPG and non-inferior to HbA1c for detecting diabetes/IGT in intermediate-risk persons. SAF-DM's value for diabetes/IGT screening is further supported by its established performance in predicting diabetic complications.

  15. Decision trees in epidemiological research

    Directory of Open Access Journals (Sweden)

    Ashwini Venkatasubramaniam

    2017-09-01

    Full Text Available Abstract Background In many studies, it is of interest to identify population subgroups that are relatively homogeneous with respect to an outcome. The nature of these subgroups can provide insight into effect mechanisms and suggest targets for tailored interventions. However, identifying relevant subgroups can be challenging with standard statistical methods. Main text We review the literature on decision trees, a family of techniques for partitioning the population, on the basis of covariates, into distinct subgroups who share similar values of an outcome variable. We compare two decision tree methods, the popular Classification and Regression tree (CART) technique and the newer Conditional Inference tree (CTree) technique, assessing their performance in a simulation study and using data from the Box Lunch Study, a randomized controlled trial of a portion size intervention. Both CART and CTree identify homogeneous population subgroups and offer improved prediction accuracy relative to regression-based approaches when subgroups are truly present in the data. An important distinction between CART and CTree is that the latter uses a formal statistical hypothesis testing framework in building decision trees, which simplifies the process of identifying and interpreting the final tree model. We also introduce a novel way to visualize the subgroups defined by decision trees. Our novel graphical visualization provides a more scientifically meaningful characterization of the subgroups identified by decision trees. Conclusions Decision trees are a useful tool for identifying homogeneous subgroups defined by combinations of individual characteristics. While all decision tree techniques generate subgroups, we advocate the use of the newer CTree technique due to its simplicity and ease of interpretation.

  16. Decision trees in epidemiological research.

    Science.gov (United States)

    Venkatasubramaniam, Ashwini; Wolfson, Julian; Mitchell, Nathan; Barnes, Timothy; JaKa, Meghan; French, Simone

    2017-01-01

    In many studies, it is of interest to identify population subgroups that are relatively homogeneous with respect to an outcome. The nature of these subgroups can provide insight into effect mechanisms and suggest targets for tailored interventions. However, identifying relevant subgroups can be challenging with standard statistical methods. We review the literature on decision trees, a family of techniques for partitioning the population, on the basis of covariates, into distinct subgroups who share similar values of an outcome variable. We compare two decision tree methods, the popular Classification and Regression tree (CART) technique and the newer Conditional Inference tree (CTree) technique, assessing their performance in a simulation study and using data from the Box Lunch Study, a randomized controlled trial of a portion size intervention. Both CART and CTree identify homogeneous population subgroups and offer improved prediction accuracy relative to regression-based approaches when subgroups are truly present in the data. An important distinction between CART and CTree is that the latter uses a formal statistical hypothesis testing framework in building decision trees, which simplifies the process of identifying and interpreting the final tree model. We also introduce a novel way to visualize the subgroups defined by decision trees. Our novel graphical visualization provides a more scientifically meaningful characterization of the subgroups identified by decision trees. Decision trees are a useful tool for identifying homogeneous subgroups defined by combinations of individual characteristics. While all decision tree techniques generate subgroups, we advocate the use of the newer CTree technique due to its simplicity and ease of interpretation.

  17. Dynamic Security Assessment of Western Danish Power System Based on Ensemble Decision Trees

    DEFF Research Database (Denmark)

    Liu, Leo; Bak, Claus Leth; Chen, Zhe

    2014-01-01

    With the increasing penetration of renewable energy resources and other forms of dispersed generation, more and more uncertainties will be brought to the dynamic security assessment (DSA) of power systems. This paper proposes an approach that uses ensemble decision trees (EDT) for online DSA. Fed with online wide-area measurement data, it is capable of not only predicting the security states of current operating conditions (OC) with high accuracy, but also indicating the confidence of the security states 1 minute ahead of the real time by an outlier identification method. The results of EDT together

  18. Combined prediction model for supply risk in nuclear power equipment manufacturing industry based on support vector machine and decision tree

    International Nuclear Information System (INIS)

    Shi Chunsheng; Meng Dapeng

    2011-01-01

    A prediction index for supply risk is developed based on factor identification in the nuclear power equipment manufacturing industry. The supply risk prediction model is established with support vector machine and decision tree methods, based on an investigation of 3 important nuclear power equipment manufacturing enterprises and 60 suppliers. A final case study demonstrates that the combined model is better than a single prediction model and confirms the feasibility and reliability of this model, which provides a method to evaluate suppliers and measure supply risk. (authors)

  19. Interacting with mobile devices by fusion eye and hand gestures recognition systems based on decision tree approach

    Science.gov (United States)

    Elleuch, Hanene; Wali, Ali; Samet, Anis; Alimi, Adel M.

    2017-03-01

    Two systems of eye and hand gesture recognition are used to control mobile devices. Based on real-time video streaming captured from the device's camera, the first system recognizes the motion of the user's eyes and the second detects static hand gestures. To avoid any confusion between natural and intentional movements, we developed a system to fuse the decisions coming from the eye and hand gesture recognition systems. The fusion phase is based on a decision tree approach. We conducted a study with 5 volunteers, and the results show that our system is robust and competitive.

  20. Creating ensembles of decision trees through sampling

    Science.gov (United States)

    Kamath, Chandrika; Cantu-Paz, Erick

    2005-08-30

    A system for decision tree ensembles that includes a module to read the data, a module to sort the data, a module to evaluate a potential split of the data according to some criterion using a random sample of the data, a module to split the data, and a module to combine multiple decision trees in ensembles. The decision tree method is based on statistical sampling techniques and includes the steps of reading the data; sorting the data; evaluating a potential split according to some criterion using a random sample of the data, splitting the data, and combining multiple decision trees in ensembles.
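
    A minimal, generic rendition of the workflow claimed here (evaluate candidate splits on a random sample of the data, split, and combine trees into an ensemble) is sketched below. It is not the patented implementation; the one-level trees, the Gini criterion on a subsample and the synthetic data are all illustrative choices.

        # Minimal sketch of the sampled-split ensemble idea: each one-level tree (stump)
        # chooses its split by evaluating Gini impurity on a random subsample of the data
        # rather than on the full set, and the stumps are combined by majority vote.
        import numpy as np

        def gini(y):
            _, counts = np.unique(y, return_counts=True)
            p = counts / counts.sum()
            return 1.0 - np.sum(p ** 2)

        def best_split_on_sample(X, y, rng, sample_frac=0.3):
            """Pick (feature, threshold) minimising weighted Gini on a random subsample."""
            idx = rng.choice(len(y), size=max(2, int(sample_frac * len(y))), replace=False)
            Xs, ys = X[idx], y[idx]
            best = (0, Xs[0, 0], np.inf)
            for f in range(X.shape[1]):
                for t in np.unique(Xs[:, f]):
                    left, right = ys[Xs[:, f] <= t], ys[Xs[:, f] > t]
                    if len(left) == 0 or len(right) == 0:
                        continue
                    score = (len(left) * gini(left) + len(right) * gini(right)) / len(ys)
                    if score < best[2]:
                        best = (f, t, score)
            return best[0], best[1]

        def fit_stump(X, y, rng):
            f, t = best_split_on_sample(X, y, rng)
            left_label = np.bincount(y[X[:, f] <= t]).argmax()
            right_label = np.bincount(y[X[:, f] > t]).argmax()
            return f, t, left_label, right_label

        def predict_ensemble(stumps, X):
            votes = np.array([[ll if x[f] <= t else rl for (f, t, ll, rl) in stumps] for x in X])
            return np.array([np.bincount(row).argmax() for row in votes])

        rng = np.random.default_rng(4)
        X = rng.normal(size=(300, 4))
        y = (X[:, 0] + 0.5 * X[:, 2] > 0).astype(int)      # synthetic two-class problem

        stumps = [fit_stump(X, y, rng) for _ in range(25)]  # ensemble of sampled-split stumps
        print("training accuracy:", round((predict_ensemble(stumps, X) == y).mean(), 2))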

  1. Importance Sampling Based Decision Trees for Security Assessment and the Corresponding Preventive Control Schemes: the Danish Case Study

    DEFF Research Database (Denmark)

    Liu, Leo; Rather, Zakir Hussain; Chen, Zhe

    2013-01-01

    Decision Trees (DT) based security assessment helps Power System Operators (PSO) by providing them with the most significant system attributes and guiding them in implementing the corresponding emergency control actions to prevent system insecurity and blackouts. DT is obtained offline from time...... and adopts a methodology of importance sampling to maximize the information contained in the database so as to increase the accuracy of DT. Further, this paper also studies the effectiveness of DT by implementing its corresponding preventive control schemes. These approaches are tested on the detailed model...

  2. Algorithms for Decision Tree Construction

    KAUST Repository

    Chikalov, Igor

    2011-01-01

    The study of algorithms for decision tree construction was initiated in 1960s. The first algorithms are based on the separation heuristic [13, 31] that at each step tries dividing the set of objects as evenly as possible. Later Garey and Graham [28

  3. Totally optimal decision trees for Boolean functions

    KAUST Repository

    Chikalov, Igor

    2016-07-28

    We study decision trees which are totally optimal relative to different sets of complexity parameters for Boolean functions. A totally optimal tree is an optimal tree relative to each parameter from the set simultaneously. We consider the parameters characterizing both time (in the worst- and average-case) and space complexity of decision trees, i.e., depth, total path length (average depth), and number of nodes. We have created tools based on extensions of dynamic programming to study totally optimal trees. These tools are applicable to both exact and approximate decision trees, and allow us to make multi-stage optimization of decision trees relative to different parameters and to count the number of optimal trees. Based on the experimental results we have formulated the following hypotheses (and subsequently proved): for almost all Boolean functions there exist totally optimal decision trees (i) relative to the depth and number of nodes, and (ii) relative to the depth and average depth.

  4. A decision-tree-based model for evaluating the thermal comfort of horses

    Directory of Open Access Journals (Sweden)

    Ana Paula de Assis Maia

    2013-12-01

    Full Text Available Thermal comfort is of great importance in preserving body temperature homeostasis during thermal stress conditions. Although the thermal comfort of horses has been widely studied, there is no report of its relationship with surface temperature (TS). This study aimed to assess the potential of data mining techniques as a tool to associate surface temperature with the thermal comfort of horses. TS was obtained using infrared thermography image processing. Physiological and environmental variables were used to define the predicted class, which classified thermal comfort as "comfort" and "discomfort". The armpit, croup, breast and groin TS of the horses and the predicted classes were then subjected to a machine learning process. All variables in the dataset were considered relevant for the classification problem, and the decision-tree model yielded an accuracy rate of 74%. The feature selection methods used to reduce computational cost and simplify predictive learning decreased model accuracy to 70%; however, the model became simpler, with easily interpretable rules. For both these selection methods and for classification using all attributes, armpit and breast TS had the higher predictive power for thermal comfort. Data mining techniques show promise in the discovery of new variables associated with the thermal comfort of horses.

  5. Voxel-based plaque classification in coronary intravascular optical coherence tomography images using decision trees

    Science.gov (United States)

    Kolluru, Chaitanya; Prabhu, David; Gharaibeh, Yazan; Wu, Hao; Wilson, David L.

    2018-02-01

    Intravascular Optical Coherence Tomography (IVOCT) is a high contrast, 3D microscopic imaging technique that can be used to assess atherosclerosis and guide stent interventions. Despite its advantages, IVOCT image interpretation is challenging and time consuming, with over 500 image frames generated in a single pullback volume. We have developed a method to classify voxel plaque types in IVOCT images using machine learning. To train and test the classifier, we have used our unique database of labeled cadaver vessel IVOCT images accurately registered to gold standard cryoimages. This database currently contains 300 images and is growing. Each voxel is labeled as fibrotic, lipid-rich, calcified or other. Optical attenuation, intensity and texture features were extracted for each voxel and were used to build a decision tree classifier for multi-class classification. Five-fold cross-validation across images gave accuracies of 96% ± 0.01%, 90% ± 0.02% and 90% ± 0.01% for the fibrotic, lipid-rich and calcified classes, respectively. To rectify the performance degradation seen in left-out vessel specimens as opposed to left-out images, we are adding data and reducing features to limit overfitting. Following spatial noise cleaning, important vascular regions were unambiguous in display. We developed displays that enable physicians to make rapid determinations of calcified and lipid regions. This will inform treatment decisions such as the need for devices (e.g., atherectomy or scoring balloon in the case of calcifications) or extended stent lengths to ensure coverage of lipid regions prone to injury at the edge of a stent.
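
    A scaled-down version of the evaluation described above, per-voxel features, a multi-class decision tree and five-fold cross-validation, might look like the following; the feature values and the four plaque labels are simulated, not drawn from the authors' cryoimage-registered database.

        # Sketch of per-voxel plaque classification: optical attenuation, intensity and a
        # texture feature feed a multi-class decision tree, evaluated with 5-fold
        # cross-validation. Voxel features and labels are simulated placeholders.
        import numpy as np
        from sklearn.tree import DecisionTreeClassifier
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(5)
        classes = ["fibrotic", "lipid-rich", "calcified", "other"]
        means = {            # hypothetical (attenuation, intensity, texture) per class
            "fibrotic":   (2.0, 0.8, 0.3),
            "lipid-rich": (5.0, 0.5, 0.6),
            "calcified":  (3.5, 0.2, 0.4),
            "other":      (1.0, 0.4, 0.5),
        }

        X, y = [], []
        for label, mu in means.items():
            X.append(rng.normal(mu, 0.3, size=(500, 3)))
            y += [classes.index(label)] * 500
        X, y = np.vstack(X), np.array(y)

        clf = DecisionTreeClassifier(max_depth=6, random_state=0)
        scores = cross_val_score(clf, X, y, cv=5)              # 5-fold cross-validation
        print("per-fold accuracy:", np.round(scores, 3))
        print("mean +/- std:", round(scores.mean(), 3), "+/-", round(scores.std(), 3))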

  6. A Systematic Approach for Dynamic Security Assessment and the Corresponding Preventive Control Scheme Based on Decision Trees

    DEFF Research Database (Denmark)

    Liu, Leo; Sun, Kai; Rather, Zakir Hussain

    2014-01-01

    This paper proposes a decision tree (DT)-based systematic approach for cooperative online power system dynamic security assessment (DSA) and preventive control. This approach adopts a new methodology that trains two contingency-oriented DTs on a daily basis from databases generated by power system simulations. Fed with real-time wide-area measurements, one DT of measurable variables is employed for online DSA to identify potential security issues, and the other DT of controllable variables provides online decision support on preventive control strategies against those issues. A cost-effective algorithm is adopted in this proposed approach to optimize the trajectory of preventive control. The paper also proposes an importance sampling algorithm for database preparation to enable efficient DT training for power systems with high penetration of wind power and distributed generation. The performance

  7. Prediction of axillary lymph node metastasis in primary breast cancer patients using a decision tree-based model

    Directory of Open Access Journals (Sweden)

    Takada Masahiro

    2012-06-01

    Full Text Available Abstract Background The aim of this study was to develop a new data-mining model to predict axillary lymph node (AxLN) metastasis in primary breast cancer. To achieve this, we used a decision tree-based prediction method, the alternating decision tree (ADTree). Methods Clinical datasets for primary breast cancer patients who underwent sentinel lymph node biopsy or AxLN dissection without prior treatment were collected from three institutes (institute A, n = 148; institute B, n = 143; institute C, n = 174) and were used for variable selection, model training and external validation, respectively. The models were evaluated using area under the receiver operating characteristic (ROC) curve analysis to discriminate node-positive patients from node-negative patients. Results The ADTree model selected 15 of 24 clinicopathological variables in the variable selection dataset. The resulting area under the ROC curve values were 0.770 [95% confidence interval (CI): 0.689–0.850] for the model training dataset and 0.772 (95% CI: 0.689–0.856) for the validation dataset, demonstrating the high accuracy and generalization ability of the model. The bootstrap value of the validation dataset was 0.768 (95% CI: 0.763–0.774). Conclusions Our prediction model showed high accuracy for predicting nodal metastasis in patients with breast cancer using commonly recorded clinical variables. Therefore, our model might help oncologists in the decision-making process for primary breast cancer patients before starting treatment.

  8. Decision Tree Technique for Particle Identification

    International Nuclear Information System (INIS)

    Quiller, Ryan

    2003-01-01

    Particle identification based on measurements such as the Cerenkov angle, momentum, and the rate of energy loss per unit distance (-dE/dx) is fundamental to the BaBar detector for particle physics experiments. It is particularly important to separate the charged forms of kaons and pions. Currently, the Neural Net, an algorithm based on mapping input variables to an output variable using hidden variables as intermediaries, is one of the primary tools used for identification. In this study, a decision tree classification technique implemented in the computer program, CART, was investigated and compared to the Neural Net over the range of momenta, 0.25 GeV/c to 5.0 GeV/c. For a given subinterval of momentum, three decision trees were made using different sets of input variables. The sensitivity and specificity were calculated for varying kaon acceptance thresholds. This data was used to plot Receiver Operating Characteristic curves (ROC curves) to compare the performance of the classification methods. Also, input variables used in constructing the decision trees were analyzed. It was found that the Neural Net was a significant contributor to decision trees using dE/dx and the Cerenkov angle as inputs. Furthermore, the Neural Net had poorer performance than the decision tree technique, but tended to improve decision tree performance when used as an input variable. These results suggest that the decision tree technique using Neural Net input may possibly increase accuracy of particle identification in BaBar

  9. Algorithms for Decision Tree Construction

    KAUST Repository

    Chikalov, Igor

    2011-01-01

    The study of algorithms for decision tree construction was initiated in the 1960s. The first algorithms are based on the separation heuristic [13, 31] that at each step tries to divide the set of objects as evenly as possible. Later Garey and Graham [28] showed that such an algorithm may construct decision trees whose average depth is arbitrarily far from the minimum. Hyafil and Rivest in [35] proved NP-hardness of the DT problem, that is, constructing a tree with the minimum average depth for a diagnostic problem over a 2-valued information system and uniform probability distribution. Cox et al. in [22] showed that for a two-class problem over an information system, even finding the root node attribute for an optimal tree is an NP-hard problem. © Springer-Verlag Berlin Heidelberg 2011.

  10. Safety validation of decision trees for hepatocellular carcinoma.

    Science.gov (United States)

    Wang, Xian-Qiang; Liu, Zhe; Lv, Wen-Ping; Luo, Ying; Yang, Guang-Yun; Li, Chong-Hui; Meng, Xiang-Fei; Liu, Yang; Xu, Ke-Sen; Dong, Jia-Hong

    2015-08-21

    To evaluate a different decision tree for safe liver resection and verify its efficiency. A total of 2457 patients underwent hepatic resection between January 2004 and December 2010 at the Chinese PLA General Hospital, and 634 hepatocellular carcinoma (HCC) patients were eligible for the final analyses. Post-hepatectomy liver failure (PHLF) was identified by the association of prothrombin time < 50% and serum bilirubin > 50 μmol/L (the "50-50" criteria), assessed at day 5 postoperatively or later. The Swiss-Clavien decision tree, Tokyo University-Makuuchi decision tree, and Chinese consensus decision tree were adopted to divide patients into two groups based on those decision trees in sequence, and the PHLF rates were recorded. The overall mortality and PHLF rate were 0.16% and 3.0%. A total of 19 patients experienced PHLF. The numbers of patients to whom the Swiss-Clavien, Tokyo University-Makuuchi, and Chinese consensus decision trees were applied were 581, 573, and 622, and the PHLF rates were 2.75%, 2.62%, and 2.73%, respectively. Significantly more cases satisfied the Chinese consensus decision tree than the Swiss-Clavien and Tokyo University-Makuuchi decision trees. The Chinese consensus decision tree expands the indications for hepatic resection for HCC patients and does not increase the PHLF rate compared to the Swiss-Clavien and Tokyo University-Makuuchi decision trees. It would be a safe and effective algorithm for hepatectomy in patients with hepatocellular carcinoma.

  11. Parallel object-oriented decision tree system

    Science.gov (United States)

    Kamath, Chandrika [Dublin, CA]; Cantu-Paz, Erick [Oakland, CA]

    2006-02-28

    A data mining decision tree system that uncovers patterns, associations, anomalies, and other statistically significant structures in data by reading and displaying data files, extracting relevant features for each of the objects, and using a method of recognizing patterns among the objects based upon object features through a decision tree that reads the data, sorts the data if necessary, determines the best manner to split the data into subsets according to some criterion, and splits the data.

  12. Method for Walking Gait Identification in a Lower Extremity Exoskeleton Based on C4.5 Decision Tree Algorithm

    Directory of Open Access Journals (Sweden)

    Qing Guo

    2015-04-01

    Full Text Available A gait identification method for a lower extremity exoskeleton is presented in order to identify the gait sub-phases in human-machine coordinated motion. First, a sensor layout for the exoskeleton is introduced. Taking the difference between human lower limb motion and human-machine coordinated motion into account, the walking gait is divided into five sub-phases, which are ‘double standing’, ‘right leg swing and left leg stance’, ‘double stance with right leg front and left leg back’, ‘right leg stance and left leg swing’, and ‘double stance with left leg front and right leg back’. The sensors include shoe pressure sensors, knee encoders, and thigh and calf gyroscopes, and are used to measure the contact force of the foot, and the knee joint angle and its angular velocity. Then, five sub-phases of walking gait are identified by a C4.5 decision tree algorithm according to the data fusion of the sensors' information. Based on the simulation results for the gait division, identification accuracy can be guaranteed by the proposed algorithm. Through the exoskeleton control experiment, a division of five sub-phases for the human-machine coordinated walk is proposed. The experimental results verify this gait division and identification method. They can make hydraulic cylinders retract ahead of time and improve the maximal walking velocity when the exoskeleton follows the person's motion.
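
    scikit-learn does not implement C4.5 itself, but an entropy-based CART tree gives a comparable sketch of the five-sub-phase identification from fused sensor features; the feature construction and sensor readings below are stand-ins, not the exoskeleton's real measurements.

        # Sketch of gait sub-phase identification: features fused from shoe pressure
        # sensors, knee encoders and gyroscopes feed an entropy-based decision tree
        # (an approximation of C4.5, which scikit-learn does not provide). The five
        # classes follow the sub-phase division in the abstract; the data are synthetic.
        import numpy as np
        from sklearn.tree import DecisionTreeClassifier
        from sklearn.model_selection import train_test_split

        phases = [
            "double standing",
            "right swing / left stance",
            "double stance, right front",
            "right stance / left swing",
            "double stance, left front",
        ]

        rng = np.random.default_rng(6)
        X, y = [], []
        for k, _ in enumerate(phases):
            # hypothetical feature means per phase: [left/right foot force,
            # left/right knee angle, thigh/calf angular velocity]
            mu = rng.uniform(0, 1, 6) + k
            X.append(rng.normal(mu, 0.4, size=(200, 6)))
            y += [k] * 200
        X, y = np.vstack(X), np.array(y)

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
        clf = DecisionTreeClassifier(criterion="entropy", max_depth=5, random_state=0)
        clf.fit(X_tr, y_tr)
        print("gait sub-phase identification accuracy:", round(clf.score(X_te, y_te), 2))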

  13. Skin Autofluorescence Based Decision Tree in Detection of Impaired Glucose Tolerance and Diabetes

    NARCIS (Netherlands)

    Smit, Andries J.; Smit, Jitske M.; Botterblom, Gijs J.; Mulder, Douwe

    2013-01-01

    Aim: Diabetes (DM) and impaired glucose tolerance (IGT) detection are conventionally based on glycemic criteria. Skin autofluorescence (SAF) is a noninvasive proxy of tissue accumulation of advanced glycation endproducts (AGE) which are considered to be a carrier of glycometabolic memory. We

  14. Predictability of the future development of aggressive behavior of cranial dural arteriovenous fistulas based on decision tree analysis.

    Science.gov (United States)

    Satomi, Junichiro; Ghaibeh, A Ammar; Moriguchi, Hiroki; Nagahiro, Shinji

    2015-07-01

    The severity of clinical signs and symptoms of cranial dural arteriovenous fistulas (DAVFs) is well correlated with their pattern of venous drainage. Although the presence of cortical venous drainage can be considered a potential predictor of aggressive DAVF behaviors, such as intracranial hemorrhage or progressive neurological deficits due to venous congestion, accurate statistical analyses are currently not available. Using a decision tree data mining method, the authors aimed at clarifying the predictability of the future development of aggressive DAVF behavior and at identifying the main causative factors. Of 266 DAVF patients, 89 were eligible for analysis. Under observational management, 51 patients presented with intracranial hemorrhage/infarction during the follow-up period. The authors created a decision tree able to assess the risk of developing aggressive DAVF behavior. Evaluated by 10-fold cross-validation, the decision tree's accuracy, sensitivity, and specificity were 85.28%, 88.33%, and 80.83%, respectively. The tree shows that the main factor in symptomatic patients was the presence of cortical venous drainage. In its absence, the lesion location determined the risk of a DAVF developing aggressive behavior. Decision tree analysis accurately predicts the future development of aggressive DAVF behavior.

  15. A Decision-Tree-Based Algorithm for Speech/Music Classification and Segmentation

    Directory of Open Access Journals (Sweden)

    Lavner Yizhar

    2009-01-01

    Full Text Available We present an efficient algorithm for segmentation of audio signals into speech or music. The central motivation for our study is consumer audio applications, where various real-time enhancements are often applied. The algorithm consists of a learning phase and a classification phase. In the learning phase, predefined training data is used for computing various time-domain and frequency-domain features, for speech and music signals separately, and for estimating the optimal speech/music thresholds based on the probability density functions of the features. An automatic procedure is employed to select the best features for separation. In the test phase, initial classification is performed for each segment of the audio signal using a three-stage sieve-like approach, applying both Bayesian and rule-based methods. To avoid erroneous rapid alternations in the classification, a smoothing technique is applied, averaging the decision on each segment with past segment decisions. Extensive evaluation of the algorithm on a database of more than 12 hours of speech and more than 22 hours of music showed correct identification rates of 99.4% and 97.8%, respectively, and quick adjustment to alternating speech/music sections. In addition to its accuracy and robustness, the algorithm can be easily adapted to different audio types and is suitable for real-time operation.
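
    The post-classification smoothing step mentioned above, averaging each segment's decision with past decisions to suppress spurious rapid alternations, is simple to illustrate; the raw per-segment scores below are invented and the window length is an arbitrary choice.

        # Sketch of the decision-smoothing step: each segment's raw speech/music score is
        # averaged with the scores of the previous segments before thresholding, which
        # suppresses isolated misclassifications. The raw scores are synthetic.
        import numpy as np

        def smooth_decisions(raw_scores, window=5, threshold=0.5):
            """Average each score with up to `window - 1` past scores, then threshold."""
            smoothed, labels = [], []
            for i in range(len(raw_scores)):
                avg = np.mean(raw_scores[max(0, i - window + 1): i + 1])
                smoothed.append(avg)
                labels.append("speech" if avg >= threshold else "music")
            return np.array(smoothed), labels

        rng = np.random.default_rng(7)
        # Mostly speech (score near 1) with a few spurious music-like segments.
        raw = np.clip(0.9 + rng.normal(0, 0.05, 30), 0, 1)
        raw[[7, 15]] = 0.1                       # isolated outliers

        _, raw_labels = smooth_decisions(raw, window=1)       # no smoothing
        _, smoothed_labels = smooth_decisions(raw, window=5)  # 5-segment averaging
        print("raw misclassified segments:     ", raw_labels.count("music"))
        print("smoothed misclassified segments:", smoothed_labels.count("music"))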

  16. Accident diagnosis system based on real-time decision tree expert system

    Science.gov (United States)

    Nicolau, Andressa dos S.; Augusto, João P. da S. C.; Schirru, Roberto

    2017-06-01

    Safety is one of the most studied topics for power stations, and sensors and alarms therefore play an important role in environmental and human protection. When an abnormal event happens, it triggers a chain of alarms that must somehow be checked by the control room operators. In this case, a diagnosis support system can help operators accurately identify the possible root cause of the problem in a short time. In this article, we present a computational model of a generic diagnosis support system based on artificial intelligence, which was applied to the datasets of two real power stations: the Angra 1 Nuclear Power Plant and the Santo Antônio Hydroelectric Plant. The proposed system processes all the information logged in the sequence of events before a shutdown signal, using expert knowledge entered into an expert system, and indicates the chain of events from the shutdown signal to its root cause. The results of both applications showed that the support system is a potential tool to help control room operators identify abnormal events such as accidents and consequently increase safety.
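
    A toy version of the alarm-chain idea, an expert-provided mapping traversed backwards from the shutdown signal to a candidate root cause, can be written as a simple graph walk. The alarm names, causal links and the "take the first logged cause" rule below are invented simplifications, not the system described in the record.

        # Toy sketch of the diagnosis idea: expert knowledge is stored as
        # "alarm -> possible causes" links, and the logged sequence of events is walked
        # backwards from the shutdown signal to propose a root cause.
        causal_links = {
            "reactor trip": ["low coolant flow", "high neutron flux"],
            "low coolant flow": ["pump A trip"],
            "pump A trip": ["loss of offsite power"],
            "high neutron flux": ["control rod withdrawal"],
        }

        def trace_root_cause(shutdown_signal, logged_events):
            """Follow causal links from the shutdown signal, keeping only causes that
            actually appear in the event log, until no deeper cause is logged."""
            current, chain = shutdown_signal, [shutdown_signal]
            while True:
                candidates = [c for c in causal_links.get(current, []) if c in logged_events]
                if not candidates:
                    return chain
                current = candidates[0]          # simplification: take the first logged cause
                chain.append(current)

        log = {"control rod position alarm", "pump A trip", "low coolant flow",
               "loss of offsite power", "reactor trip"}
        print(" -> ".join(trace_root_cause("reactor trip", log)))
        # reactor trip -> low coolant flow -> pump A trip -> loss of offsite power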

  17. Estimating Surface Downward Shortwave Radiation over China Based on the Gradient Boosting Decision Tree Method

    Directory of Open Access Journals (Sweden)

    Lu Yang

    2018-01-01

    Full Text Available Downward shortwave radiation (DSR) is an essential parameter in the terrestrial radiation budget and a necessary input for models of land-surface processes. Although several radiation products using satellite observations have been released, coarse spatial resolution and low accuracy limit their application. It is important to develop robust and accurate retrieval methods with higher spatial resolution. Machine learning methods may be powerful candidates for estimating DSR from remotely sensed data because of their ability to perform adaptive, nonlinear data fitting. In this study, the gradient boosting regression tree (GBRT) was employed to retrieve DSR using ground observation data in China collected from the China Meteorological Administration (CMA) Meteorological Information Center and satellite observations from the Advanced Very High Resolution Radiometer (AVHRR) at a spatial resolution of 5 km. The validation results of the DSR estimates based on the GBRT method in China at a daily time scale for clear sky conditions show an R2 value of 0.82 and a root mean square error (RMSE) value of 27.71 W·m−2 (38.38%). These values are 0.64 and 42.97 W·m−2 (34.57%), respectively, for cloudy sky conditions. The monthly DSR estimates were also evaluated using ground measurements and have an overall R2 value of 0.92 and an RMSE of 15.40 W·m−2 (12.93%). Comparison of the DSR estimates with reanalyzed and retrieved DSR measurements from satellite observations showed that the estimated DSR is reasonably accurate but has a higher spatial resolution. Moreover, the proposed GBRT method has good scalability and is easy to apply to other parameter inversion problems by changing the parameters and training data.
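
    A compact illustration of this retrieval setup, gradient-boosted regression trees mapping satellite-derived predictors to station-measured DSR and scored with R² and RMSE, is given below; the predictor list and data are synthetic and far smaller than the AVHRR/CMA dataset used in the study.

        # Sketch of DSR retrieval with gradient-boosted regression trees: satellite-style
        # predictors map to ground-measured downward shortwave radiation (W m^-2), and
        # the fit is scored with R^2 and RMSE. Predictors and data are placeholders.
        import numpy as np
        from sklearn.ensemble import GradientBoostingRegressor
        from sklearn.model_selection import train_test_split
        from sklearn.metrics import r2_score, mean_squared_error

        rng = np.random.default_rng(8)
        n = 2000
        toa_reflectance = rng.uniform(0.05, 0.8, n)     # hypothetical AVHRR-style predictor
        solar_zenith = rng.uniform(10, 80, n)           # degrees
        cloud_fraction = rng.uniform(0, 1, n)

        dsr = (1000 * np.cos(np.radians(solar_zenith)) * (1 - 0.6 * cloud_fraction)
               * (1 - 0.3 * toa_reflectance) + rng.normal(0, 20, n))

        X = np.column_stack([toa_reflectance, solar_zenith, cloud_fraction])
        X_tr, X_te, y_tr, y_te = train_test_split(X, dsr, test_size=0.3, random_state=0)

        gbrt = GradientBoostingRegressor(n_estimators=300, learning_rate=0.05,
                                         max_depth=3, random_state=0).fit(X_tr, y_tr)
        pred = gbrt.predict(X_te)
        print("R^2 :", round(r2_score(y_te, pred), 3))
        print("RMSE:", round(np.sqrt(mean_squared_error(y_te, pred)), 1), "W m^-2")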

  18. Review of cause-based decision tree approach for the development of domestic standard human reliability analysis procedure in low power/shutdown operation probabilistic safety assessment

    International Nuclear Information System (INIS)

    Kang, D. I.; Jung, W. D.

    2003-01-01

    We review the Cause-Based Decision Tree (CBDT) approach to decide whether or not to incorporate it into the development of a domestic standard Human Reliability Analysis (HRA) procedure for low power/shutdown operation Probabilistic Safety Assessment (PSA). In this paper, we introduce the cause-based decision tree approach, quantify human errors using it, and identify its merits and demerits in comparison with the previously used THERP. The review results show that it is difficult to incorporate the CBDT method into the development of a domestic standard HRA procedure for low power/shutdown PSA because the CBDT method requires the subjective judgment of the HRA analyst, just as THERP does. However, it is expected that incorporating the CBDT method into the development of the domestic standard HRA procedure, if only for the comparison of quantitative HRA results, will relieve the burden of developing a detailed HRA procedure and will help maintain consistent quantitative HRA results.

  19. Cloud GIS Based Watershed Management

    Science.gov (United States)

    Bediroğlu, G.; Colak, H. E.

    2017-11-01

    In this study, we built a cloud GIS based watershed management system using a cloud computing architecture. Cloud GIS is used as SaaS (Software as a Service) and DaaS (Data as a Service). We ran GIS analyses in the cloud to test SaaS and deployed GIS datasets to the cloud to test DaaS. We used a hybrid cloud computing model, relying on ready-made web-based mapping services hosted in the cloud (world topology, satellite imagery). We created and uploaded geodatabases covering hydrology (rivers, lakes), soil maps, climate maps, rain maps, geology and land use. The watershed of the study area was delineated in the cloud using the hosted topology maps. After uploading all the datasets, we applied various GIS analyses and queries. The results show that cloud GIS technology brings speed and efficiency to watershed management studies. Besides this, the system can easily be adapted to similar land analysis and management studies.

  20. Boosted decision trees and applications

    International Nuclear Information System (INIS)

    Coadou, Y.

    2013-01-01

    Decision trees are a machine learning technique increasingly used in high energy physics, and one that has long been widely used in the social sciences. After introducing the concepts of decision trees, this article focuses on their application in particle physics. (authors)
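
    As a generic illustration of boosting in the sense used above, many shallow trees trained sequentially with higher weight on previously misclassified events, as in a typical signal/background separation task, the sketch below uses AdaBoost on synthetic data; the variables and class structure are invented.

        # Generic boosted-decision-tree illustration: AdaBoost combines many shallow
        # trees (its default base learner is a depth-1 decision stump), each trained
        # with higher weight on the events the previous trees got wrong. Data synthetic.
        import numpy as np
        from sklearn.ensemble import AdaBoostClassifier
        from sklearn.model_selection import train_test_split
        from sklearn.metrics import roc_auc_score

        rng = np.random.default_rng(9)
        n = 4000
        signal = rng.normal(loc=[1.0, 0.5, 0.0], scale=1.0, size=(n // 2, 3))
        background = rng.normal(loc=[0.0, 0.0, 0.5], scale=1.2, size=(n // 2, 3))
        X = np.vstack([signal, background])
        y = np.array([1] * (n // 2) + [0] * (n // 2))

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
        bdt = AdaBoostClassifier(n_estimators=200, learning_rate=0.5, random_state=0)
        bdt.fit(X_tr, y_tr)
        scores = bdt.predict_proba(X_te)[:, 1]          # signal-likeness score per event
        print("ROC AUC:", round(roc_auc_score(y_te, scores), 3))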

  1. Decision trees with minimum average depth for sorting eight elements

    KAUST Repository

    AbouEisha, Hassan M.

    2015-11-19

    We prove that the minimum average depth of a decision tree for sorting 8 pairwise different elements is equal to 620160/8!. We show also that each decision tree for sorting 8 elements, which has minimum average depth (the number of such trees is approximately equal to 8.548×10^326365), has also minimum depth. Both problems were considered by Knuth (1998). To obtain these results, we use tools based on extensions of dynamic programming which allow us to make sequential optimization of decision trees relative to depth and average depth, and to count the number of decision trees with minimum average depth.

  2. An On-Line Decision Tree-Based Predictive System Architecture

    Directory of Open Access Journals (Sweden)

    Fang-Tz Ma, Woo-Tsong Lin

    2003-10-01

    ="font-size: small;">This paper presents an on-line decision tree-based predictive system architecture. The architecture contains nine components, including a database of the examples, a learning system of the decision trees, a knowledge base, a historical knowledge base, a maintaining interface of the decision trees, an interface to upload training and testing examples, a PMML (Predictive Model Markup Language translator, an on-line predictive system, and a merging optional decision trees system. There are three channels to import knowledge in the architecture; the developers can upload the examples to the learning system to induce the decision tree, directly input the information of decision trees through the user interface, or import the decision trees in PMML format. In order to integrate the knowledge of the decision trees, we added the merging optional decision trees system into this architecture. The merging optional decision trees system can combine multiple decision trees into a single decision tree to integrate the knowledge of the trees. In the future research, we will implement this architecture as a real system in the web-based platform to do some empirical analyses. And in order to improve the performance of the merging decision trees, we will also develop some pruning strategies in the merging optional decision trees system.

  3. A novel decision tree approach based on transcranial Doppler sonography to screen for blunt cervical vascular injuries.

    Science.gov (United States)

    Purvis, Dianna; Aldaghlas, Tayseer; Trickey, Amber W; Rizzo, Anne; Sikdar, Siddhartha

    2013-06-01

    Early detection and treatment of blunt cervical vascular injuries prevent adverse neurologic sequelae. Current screening criteria can miss up to 22% of these injuries. The study objective was to investigate bedside transcranial Doppler sonography for detecting blunt cervical vascular injuries in trauma patients using a novel decision tree approach. This prospective pilot study was conducted at a level I trauma center. Patients undergoing computed tomographic angiography for suspected blunt cervical vascular injuries were studied with transcranial Doppler sonography. Extracranial and intracranial vasculatures were examined with a portable power M-mode transcranial Doppler unit. The middle cerebral artery mean flow velocity, pulsatility index, and their asymmetries were used to quantify flow patterns and develop an injury decision tree screening protocol. Student t tests validated associations between injuries and transcranial Doppler predictive measures. We evaluated 27 trauma patients with 13 injuries. Single vertebral artery injuries were most common (38.5%), followed by single internal carotid artery injuries (30%). Compared to patients without injuries, mean flow velocity asymmetry was higher for single internal carotid artery (P = .003) and single vertebral artery (P = .004) injuries. Similarly, pulsatility index asymmetry was higher in single internal carotid artery (P = .015) and single vertebral artery (P = .042) injuries, whereas the lowest pulsatility index was elevated for bilateral vertebral artery injuries (P = .006). The decision tree yielded 92% specificity, 93% sensitivity, and 93% correct classifications. In this pilot feasibility study, transcranial Doppler measures were significantly associated with the blunt cervical vascular injury status, suggesting that transcranial Doppler sonography might be a viable bedside screening tool for trauma. Patient-specific hemodynamic information from transcranial Doppler assessment has the potential to alter

  4. Decision tree modeling using R.

    Science.gov (United States)

    Zhang, Zhongheng

    2016-08-01

    In the machine learning field, the decision tree learner is powerful and easy to interpret. It employs a recursive binary partitioning algorithm that splits the sample on the partitioning variable with the strongest association with the response variable; the process continues until some stopping criteria are met. In the example I focus on the conditional inference tree, which incorporates tree-structured regression models into conditional inference procedures. Because a single tree is sensitive to small changes in the training data, the random forests procedure is introduced to address this problem; its sources of diversity are random sampling and the restricted set of input variables considered at each split. Finally, I introduce R functions to perform model-based recursive partitioning, which incorporates recursive partitioning into conventional parametric model building.
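
    As a hedged illustration of the workflow this abstract describes (the article itself uses R functions; the sketch below substitutes Python's scikit-learn, and the dataset, parameters and variable names are placeholder assumptions, not the paper's setup):

```python
# Minimal sketch: fit a single decision tree and a random forest on a toy
# dataset, mirroring the single-tree vs. ensemble comparison in the abstract.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# A single tree: recursive binary partitioning until a stopping rule is met.
tree = DecisionTreeClassifier(min_samples_leaf=10, random_state=0).fit(X_train, y_train)

# A random forest: many trees grown on bootstrap samples with a restricted
# set of candidate variables at each split (the two sources of diversity).
forest = RandomForestClassifier(n_estimators=200, max_features="sqrt",
                                random_state=0).fit(X_train, y_train)

print("single tree accuracy:", tree.score(X_test, y_test))
print("random forest accuracy:", forest.score(X_test, y_test))
```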

  5. Unified framework for triaxial accelerometer-based fall event detection and classification using cumulants and hierarchical decision tree classifier.

    Science.gov (United States)

    Kambhampati, Satya Samyukta; Singh, Vishal; Manikandan, M Sabarimalai; Ramkumar, Barathram

    2015-08-01

    In this Letter, the authors present a unified framework for fall event detection and classification using cumulants extracted from the acceleration (ACC) signals acquired with a single waist-mounted triaxial accelerometer. The main objective of this Letter is to find suitable representative cumulants and classifiers for effectively detecting and classifying different types of fall and non-fall events. The first level of the proposed hierarchical decision tree algorithm performs fall detection using fifth-order cumulants and a support vector machine (SVM) classifier. In the second level, the fall event classification algorithm uses the fifth-order cumulants and an SVM. Finally, human activity classification is performed using the second-order cumulants and an SVM. The detection and classification results are compared with those of decision tree, naive Bayes, multilayer perceptron and SVM classifiers with different types of time-domain features, including the second-, third-, fourth- and fifth-order cumulants and the signal magnitude vector and signal magnitude area. The experimental results demonstrate that the second- and fifth-order cumulant features with the SVM classifier can achieve optimal detection and classification rates above 95%, as well as the lowest false alarm rate of 1.03%.

  6. Fast Image Texture Classification Using Decision Trees

    Science.gov (United States)

    Thompson, David R.

    2011-01-01

    Texture analysis would permit improved autonomous, onboard science data interpretation for adaptive navigation, sampling, and downlink decisions. These analyses would assist with terrain analysis and instrument placement in both macroscopic and microscopic image data products. Unfortunately, most state-of-the-art texture analysis demands computationally expensive convolutions of filters involving many floating-point operations. This makes them infeasible for radiation-hardened computers and spaceflight hardware. A new method approximates traditional texture classification of each image pixel with a fast decision-tree classifier. The classifier uses image features derived from simple filtering operations involving integer arithmetic. The texture analysis method is therefore amenable to implementation on FPGA (field-programmable gate array) hardware. Image features based on the "integral image" transform produce descriptive and efficient texture descriptors. Training the decision tree on a set of training data yields a classification scheme that produces reasonable approximations of optimal "texton" analysis at a fraction of the computational cost. A decision-tree learning algorithm employing the traditional k-means criterion of inter-cluster variance is used to learn tree structure from training data. The result is an efficient and accurate summary of surface morphology in images. This work is an evolutionary advance that unites several previous algorithms (k-means clustering, integral images, decision trees) and applies them to a new problem domain (morphology analysis for autonomous science during remote exploration). Advantages include order-of-magnitude improvements in runtime, feasibility for FPGA hardware, and significant improvements in texture classification accuracy.
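
    A minimal sketch of the core idea, box-sum features computed from an integral image and classified per pixel by a decision tree, is given below; the image, window sizes and labels are placeholder assumptions, and this is not the flight implementation described in the abstract:

```python
# Sketch: integer-only box-sum texture features via an integral image,
# classified per pixel by a decision tree.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def integral_image(img):
    """Cumulative sums over rows and columns, padded with a zero border."""
    ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1), dtype=np.int64)
    ii[1:, 1:] = np.cumsum(np.cumsum(img, axis=0), axis=1)
    return ii

def box_sum(ii, r, c, h, w):
    """Sum of img[r:r+h, c:c+w] in O(1) using the integral image."""
    return ii[r + h, c + w] - ii[r, c + w] - ii[r + h, c] + ii[r, c]

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(64, 64))              # placeholder image
labels = (img[8:-8, 8:-8] > 128).astype(int).ravel()   # placeholder per-pixel labels

ii = integral_image(img)
feats = []
for r in range(8, 56):
    for c in range(8, 56):
        # Two simple box features around the pixel (fine and coarse window).
        feats.append([box_sum(ii, r - 2, c - 2, 5, 5),
                      box_sum(ii, r - 8, c - 8, 17, 17)])
feats = np.array(feats)

clf = DecisionTreeClassifier(max_depth=6, random_state=0).fit(feats, labels)
print("training accuracy:", clf.score(feats, labels))
```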

  7. Gobi information extraction based on decision tree classification method

    Institute of Scientific and Technical Information of China (English)

    冯益明; 智长贵; 姚爱冬

    2013-01-01

    The gobi is one of the main landscape types of the earth's surface in the arid region of northwestern China, with a total area of 458,000-757,000 km2, accounting for 4.8%-7.9% of China's total land area. The gobi holds abundant natural resources such as minerals, wind energy and solar power; many modern cities and towns and some important traffic routes have also been constructed in gobi regions, so the gobi plays an important role in the development of the western economy. It is therefore important to carry out gobi research under current social and economic conditions, and accurately revealing the distribution and area of gobi is the basis and premise of such research. At present, fieldwork is difficult because of the harsh natural conditions and sparse population of the gobi region, which has led to a scarcity of research documents on the situation, distribution, type classification, transformation and utilization of gobi. The study region of this paper is a typical gobi distribution region located in Ejina County, Inner Mongolia, China; its climate is characterized by scarce rain, high evaporation, abundant sunshine, large temperature differences and frequent windy, sandy weather. Landsat TM5 and TM7 remote sensing imagery from the plant growth seasons of 2005-2010, a DEM with 30 m spatial resolution, an administrative map, a present land use map, field investigation data and related documents were used as the basic data sources. Firstly, the non-gobi distribution regions were extracted in GIS software by analyzing the DEM. Then, based on the analysis of the spectral characteristics of different typical ground objects, a knowledge-based decision tree information extraction model was constructed to classify the remote sensing imagery, and eroded gobi and accumulated gobi were relatively accurately separated. The overall accuracy of the extracted gobi information reached 91.57%. There were few materials in China on using

  8. Decision Tree Identified Risk Groups with High Suicidal Ideation in South Korea: A Population-Based Study.

    Science.gov (United States)

    Kim, Hyun Kyung; Kim, Ji Young; Kim, Jong Hyen; Hyoung, Hee Kyoung

    2016-01-01

    The aim of this study was to identify risk groups with high suicidal ideation among South Korean adults. A descriptive cross-sectional design was adopted using secondary data from the 2011 Korea National Health and Nutrition Examination Survey (KNHANES). A total of 5,963 adults aged 19 years and older who participated in the 2011 KNHANES served as participants. The prevalence of suicidal ideation and its related factors, including physical, psychological, health behavioral, and sociodemographic characteristics, were examined. Descriptive statistics and a decision tree were used for data analysis. Nine groups with high suicidal ideation were identified. The coexistence of depression and high levels of stress increased the prevalence of suicidal ideation. The highest risk group was widowed or divorced adults with depression and high levels of stress, and 82.5% of these participants had suicidal ideation (the prevalence rate of this group was 5.7 times higher than the mean suicidal ideation prevalence rate in this study). Public health nurses and community mental health professionals should recognize risk groups with high suicidal ideation, and target these groups when implementing preventive interventions. © 2015 Wiley Periodicals, Inc.

  9. Interpreting CNNs via Decision Trees

    OpenAIRE

    Zhang, Quanshi; Yang, Yu; Wu, Ying Nian; Zhu, Song-Chun

    2018-01-01

    This paper presents a method to learn a decision tree that quantitatively explains the logic of each prediction of a pre-trained convolutional neural network (CNN). Our method boosts the following two aspects of network interpretability. 1) In the CNN, each filter in a high conv-layer must represent a specific object part, instead of describing mixed patterns without clear meanings. 2) People can explain each specific prediction made by the CNN at the semantic level using a decision tree, i.e....

  10. Design of a new hybrid artificial neural network method based on decision trees for calculating the Froude number in rigid rectangular channels

    Directory of Open Access Journals (Sweden)

    Ebtehaj Isa

    2016-09-01

    A vital topic in the optimal and economical design of rigid-boundary open channels such as sewers and drainage systems is determining the incipient motion of sediment particles. In this study, the incipient motion of sediment is estimated using three datasets from the literature covering a wide range of hydraulic parameters. Because existing equations do not consider the effect of sediment bed thickness on incipient motion estimation, this parameter is included here, along with the multilayer perceptron (MLP) and a hybrid method based on decision trees (MLP-DT), to estimate incipient motion. Compared with the observed experimental outcomes, the proposed method performs well (MARE = 0.048, RMSE = 0.134, SI = 0.06, BIAS = -0.036). The performance of MLP and MLP-DT is compared with that of existing regression-based equations, and significantly higher performance over existing models is observed. Finally, an explicit expression for practical engineering use is also provided.

  11. Comparison of greedy algorithms for α-decision tree construction

    KAUST Repository

    Alkhalid, Abdulaziz; Chikalov, Igor; Moshkov, Mikhail

    2011-01-01

    A comparison of different heuristics used by greedy algorithms that construct approximate decision trees (α-decision trees) is presented. The comparison is conducted using decision tables based on 24 data sets from the UCI Machine Learning Repository [2]. The complexity of decision trees is estimated relative to several cost functions: depth, average depth, number of nodes, number of nonterminal nodes, and number of terminal nodes. Costs of trees built by greedy algorithms are compared with minimum costs calculated by an algorithm based on dynamic programming. The results of the experiments assign to each cost function a set of potentially good heuristics that minimize it. © 2011 Springer-Verlag.
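
    To make the notion of a greedy heuristic for (α-)decision tree construction concrete, the hedged sketch below grows a tree over a tiny decision table, at each step choosing the attribute that leaves the least residual uncertainty (number of row pairs with different decisions) and stopping once the remaining uncertainty is at most a fraction α of the initial one; the table, the single heuristic and the stopping rule are illustrative assumptions rather than the paper's exact algorithms:

```python
# Hedged sketch of a greedy alpha-decision-tree builder over a decision table.
# Uncertainty of a table = number of unordered pairs of rows with different decisions.
from collections import Counter

def uncertainty(rows):
    counts = Counter(d for *_, d in rows)
    n = len(rows)
    return (n * (n - 1) - sum(c * (c - 1) for c in counts.values())) // 2

def build(rows, attrs, alpha, initial):
    if uncertainty(rows) <= alpha * initial or not attrs:
        # Leaf: most common decision among the remaining rows.
        return Counter(d for *_, d in rows).most_common(1)[0][0]
    # Greedy step: pick the attribute whose split leaves the least total uncertainty.
    def split_cost(a):
        groups = {}
        for row in rows:
            groups.setdefault(row[a], []).append(row)
        return sum(uncertainty(g) for g in groups.values())
    best = min(attrs, key=split_cost)
    node = {"attr": best, "children": {}}
    groups = {}
    for row in rows:
        groups.setdefault(row[best], []).append(row)
    for value, g in groups.items():
        node["children"][value] = build(g, [a for a in attrs if a != best], alpha, initial)
    return node

# Tiny decision table: three binary attributes, last entry is the decision.
table = [(0, 0, 1, "no"), (0, 1, 1, "no"), (1, 0, 0, "yes"),
         (1, 1, 0, "yes"), (1, 1, 1, "no"), (0, 0, 0, "no")]
tree = build(table, attrs=[0, 1, 2], alpha=0.0, initial=uncertainty(table))
print(tree)
```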

  12. Induction of Ordinal Decision Trees

    NARCIS (Netherlands)

    J.C. Bioch (Cor); V. Popova (Viara)

    2003-01-01

    This paper focuses on the problem of monotone decision trees from the point of view of the multicriteria decision aid methodology (MCDA). By taking into account the preferences of the decision maker, an attempt is made to bring closer similar research within machine learning and MCDA.

  13. Representing Boolean Functions by Decision Trees

    KAUST Repository

    Chikalov, Igor

    2011-01-01

    A Boolean or discrete function can be represented by a decision tree. A compact form of decision tree named binary decision diagram or branching program is widely known in logic design [2, 40]. This representation is equivalent to other forms

  14. Predicting membrane protein types using various decision tree classifiers based on various modes of general PseAAC for imbalanced datasets.

    Science.gov (United States)

    Sankari, E Siva; Manimegalai, D

    2017-12-21

    Predicting membrane protein types is an important and challenging research area in bioinformatics and proteomics. Traditional biophysical methods are used to classify membrane protein types, but given the large number of uncharacterized protein sequences in databases, these methods are very time consuming, expensive and susceptible to errors. Hence, it is highly desirable to develop a robust, reliable, and efficient method to predict membrane protein types. Imbalanced and large datasets are often handled well by decision tree classifiers. Because the datasets are imbalanced, the performance of various decision tree classifiers, such as Decision Tree (DT), Classification And Regression Tree (CART), C4.5, Random Tree and REP (Reduced Error Pruning) tree, and of ensemble methods such as Adaboost, RUS (Random Under Sampling) boost, Rotation Forest and Random Forest, is analysed. Among the various decision tree classifiers, Random Forest performs well in less time, with a good accuracy of 96.35%. Another finding is that the RUSBoost decision tree classifier is able to classify one or two samples in classes with very few samples, while the other classifiers, such as DT, Adaboost, Rotation Forest and Random Forest, are not sensitive to classes with fewer samples. The performance of the decision tree classifiers is also compared with that of the SVM (Support Vector Machine) and naive Bayes classifiers. Copyright © 2017 Elsevier Ltd. All rights reserved.
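
    A hedged sketch of the kind of comparison described above, using a synthetic imbalanced dataset and a few tree-based classifiers from scikit-learn as stand-ins for the full set of classifiers evaluated in the paper:

```python
# Hedged sketch: comparing tree-based classifiers on an imbalanced toy dataset.
# Synthetic data stands in for the protein-sequence features used in the paper.
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier, RandomForestClassifier
from sklearn.metrics import balanced_accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Three classes with a 90/9/1 split to mimic class imbalance.
X, y = make_classification(n_samples=3000, n_features=20, n_informative=8,
                           n_classes=3, weights=[0.90, 0.09, 0.01],
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

models = {
    "decision tree": DecisionTreeClassifier(random_state=0),
    "random forest": RandomForestClassifier(n_estimators=300, random_state=0),
    "adaboost": AdaBoostClassifier(random_state=0),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    # Balanced accuracy weights each class equally, which matters when classes
    # with very few samples would otherwise be ignored.
    print(name, balanced_accuracy_score(y_te, model.predict(X_te)))
```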

  15. Finding small equivalent decision trees is hard

    NARCIS (Netherlands)

    Zantema, H.; Bodlaender, H.L.

    2000-01-01

    Two decision trees are called decision equivalent if they represent the same function, i.e., they yield the same result for every possible input. We prove that, given a decision tree and a number, deciding whether there is a decision equivalent decision tree of size at most that number is NP-complete. As

  16. Relationships for Cost and Uncertainty of Decision Trees

    KAUST Repository

    Chikalov, Igor; Hussain, Shahid; Moshkov, Mikhail

    2013-01-01

    This chapter is devoted to the design of new tools for the study of decision trees. These tools are based on the dynamic programming approach and require the consideration of subtables of the initial decision table, so the approach is applicable only to relatively small decision tables. The considered tools allow us to compute: 1. The minimum cost of an approximate decision tree for a given uncertainty value and a cost function. 2. The minimum number of nodes in an exact decision tree whose depth is at most a given value. For the first tool we considered various cost functions, such as the depth and average depth of a decision tree and the number of nodes (and the number of terminal and nonterminal nodes) of a decision tree. The uncertainty of a decision table is equal to the number of unordered pairs of rows with different decisions. The uncertainty of an approximate decision tree is equal to the maximum uncertainty of a subtable corresponding to a terminal node of the tree. In addition to the algorithms for these tools, we also present experimental results for various datasets acquired from the UCI ML Repository [4]. © Springer-Verlag Berlin Heidelberg 2013.

  17. Meta-learning in decision tree induction

    CERN Document Server

    Grąbczewski, Krzysztof

    2014-01-01

    The book focuses on different variants of decision tree induction but also describes the meta-learning approach in general, which is applicable to other types of machine learning algorithms. The book discusses different variants of decision tree induction and represents a useful source of information for readers wishing to review some of the techniques used in decision tree learning, as well as different ensemble methods that involve decision trees. It is shown that knowledge of the different components used within decision tree learning needs to be systematized to enable the system to generate and evaluate different variants of machine learning algorithms, with the aim of identifying the top performers or potentially the best one. A unified view of decision tree learning makes it possible to emulate different decision tree algorithms simply by setting certain parameters. As meta-learning requires running many different processes with the aim of obtaining performance results, a detailed description of the experimen...

  18. A decision-tree model to detect post-calving diseases based on rumination, activity, milk yield, BW and voluntary visits to the milking robot.

    Science.gov (United States)

    Steensels, M; Antler, A; Bahr, C; Berckmans, D; Maltz, E; Halachmi, I

    2016-09-01

    Early detection of post-calving health problems is critical for dairy operations. Separating sick cows from the herd is important, especially in robotic-milking dairy farms, where searching for a sick cow can disturb the other cows' routine. The objectives of this study were to develop and apply a behaviour- and performance-based health-detection model to post-calving cows in a robotic-milking dairy farm, with the aim of detecting sick cows based on available commercial sensors. The study was conducted in an Israeli robotic-milking dairy farm with 250 Israeli-Holstein cows. All cows were equipped with rumination- and neck-activity sensors. Milk yield, visits to the milking robot and BW were recorded in the milking robot. A decision-tree model was developed on a calibration data set (historical data of the 10 months before the study) and was validated on the new data set. The decision model generated a probability of being sick for each cow. The model was applied once a week just before the veterinarian performed the weekly routine post-calving health check. The veterinarian's diagnosis served as a binary reference for the model (healthy-sick). The overall accuracy of the model was 78%, with a specificity of 87% and a sensitivity of 69%, suggesting its practical value.
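
    A hedged sketch of the idea of a decision tree that outputs a per-animal probability of being sick from sensor-derived features; the data below are synthetic, and the column meanings only mirror the sensors named in the abstract:

```python
# Hedged sketch: a decision tree that outputs a per-cow probability of being
# sick from behaviour/performance features (synthetic data; column names only
# mirror the sensors mentioned in the abstract).
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(1)
n = 500
X = np.column_stack([
    rng.normal(450, 80, n),   # daily rumination time (min)
    rng.normal(300, 60, n),   # neck activity index
    rng.normal(35, 8, n),     # milk yield (kg/day)
    rng.normal(2.8, 0.7, n),  # voluntary robot visits per day
    rng.normal(620, 70, n),   # body weight (kg)
])
# Synthetic reference diagnosis: low rumination plus low yield -> labelled sick.
y = ((X[:, 0] < 380) & (X[:, 2] < 30)).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=1)
model = DecisionTreeClassifier(max_depth=4, class_weight="balanced",
                               random_state=1).fit(X_tr, y_tr)

# Probability of being sick for each cow in the weekly check.
p_sick = model.predict_proba(X_te)[:, 1]
print("flagged cows:", int((p_sick > 0.5).sum()), "of", len(p_sick))
```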

  19. Decision-Tree, Rule-Based, and Random Forest Classification of High-Resolution Multispectral Imagery for Wetland Mapping and Inventory

    Science.gov (United States)

    Efforts are increasingly being made to classify the world’s wetland resources, an important ecosystem and habitat that is diminishing in abundance. There are multiple remote sensing classification methods, including a suite of nonparametric classifiers such as decision-tree...

  20. CorRECTreatment: a web-based decision support tool for rectal cancer treatment that uses the analytic hierarchy process and decision tree.

    Science.gov (United States)

    Suner, A; Karakülah, G; Dicle, O; Sökmen, S; Çelikoğlu, C C

    2015-01-01

    The selection of appropriate rectal cancer treatment is a complex multi-criteria decision making process, in which clinical decision support systems might be used to assist and enrich physicians' decision making. The objective of the study was to develop a web-based clinical decision support tool to help physicians select potentially beneficial treatment options for patients with rectal cancer. The updated decision model contained 8 and 10 criteria in the first and second steps, respectively. The decision support model, developed in our previous study by combining the Analytic Hierarchy Process (AHP) method, which determines the priority of criteria, with a decision tree formed using these priorities, was updated and applied to data from 388 patients collected retrospectively. A web-based decision support tool named corRECTreatment was then developed, and the compatibility of its treatment recommendations with expert opinion was examined for consistency. Two surgeons were asked to recommend a treatment and an overall survival value for 20 cases that we selected and turned into scenarios, covering the most common and the rarest treatment options in the patient data set. In the AHP analyses of the criteria, the matrices generated for both decision steps were found to be consistent. When compared with the decisions of the experts, the consistency value for the most frequent cases was 80% for the first decision step and 100% for the second decision step; similarly, for rare cases, consistency was 50% for the first decision step and 80% for the second decision step. The decision model and corRECTreatment, developed by applying it to real patient data, are expected to provide potential users with decision support in rectal cancer treatment processes and to facilitate projections about treatment options.

  1. Prediction of Mortality in Patients with Isolated Traumatic Subarachnoid Hemorrhage Using a Decision Tree Classifier: A Retrospective Analysis Based on a Trauma Registry System.

    Science.gov (United States)

    Rau, Cheng-Shyuan; Wu, Shao-Chun; Chien, Peng-Chen; Kuo, Pao-Jen; Chen, Yi-Chun; Hsieh, Hsiao-Yun; Hsieh, Ching-Hua

    2017-11-22

    Background: In contrast to patients with traumatic subarachnoid hemorrhage (tSAH) in the presence of other types of intracranial hemorrhage, the prognosis of patients with isolated tSAH is good; the incidence of mortality in these patients ranges from 0-2.5%. However, few data or predictive models are available for the identification of patients with a high mortality risk. In this study, we aimed to construct a model for mortality prediction using a decision tree (DT) algorithm, along with data obtained from a population-based trauma registry, in a Level 1 trauma center. Methods: Five hundred and forty-five patients with isolated tSAH, including 533 patients who survived and 12 who died, between January 2009 and December 2016, were allocated to training (n = 377) or test (n = 168) sets. Using the demographics, injury characteristics, and laboratory data of the patients, classification and regression tree (CART) analysis was performed based on the Gini impurity index, using the rpart function in the rpart package in R. Results: The established DT model contained three nodes, based on the head Abbreviated Injury Scale (AIS) score, creatinine (Cr), and age. Patients with a head AIS score >4 had a high mortality, as did 57% of those with an AIS score ≤4 but Cr ≥1.4 and age ≥76 years; all patients who did not meet the above-mentioned criteria survived. With all the variables in the model, the DT achieved an accuracy of 97.9% (sensitivity of 90.9% and specificity of 98.1%) on the training set and 97.7% (sensitivity of 100% and specificity of 97.7%) on the test set. Conclusions: The study established a DT model with three nodes (head AIS score, Cr, and age); this decision-making algorithm may help identify patients with a high risk of mortality.
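
    The modelling pipeline, a CART-style tree grown with the Gini criterion and evaluated by sensitivity and specificity on a held-out test set, can be sketched as follows (the study used rpart in R on registry data; the synthetic data, the scikit-learn substitution, and all parameter choices below are assumptions):

```python
# Hedged sketch: CART with the Gini criterion, evaluated by sensitivity and
# specificity on a held-out test set (synthetic stand-in data).
import numpy as np
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(7)
n = 545
age = rng.integers(18, 95, n)
creatinine = rng.gamma(2.0, 0.6, n)
head_ais = rng.integers(1, 6, n)
# Synthetic mortality label loosely following the abstract's risk pattern.
died = ((head_ais > 4) | ((creatinine >= 1.4) & (age >= 76))) & (rng.random(n) < 0.5)

X = np.column_stack([age, creatinine, head_ais])
X_tr, X_te, y_tr, y_te = train_test_split(X, died.astype(int), stratify=died,
                                          test_size=168, random_state=7)

cart = DecisionTreeClassifier(criterion="gini", max_depth=3,
                              class_weight="balanced", random_state=7)
cart.fit(X_tr, y_tr)

tn, fp, fn, tp = confusion_matrix(y_te, cart.predict(X_te)).ravel()
print("sensitivity:", tp / (tp + fn), "specificity:", tn / (tn + fp))
```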

  2. Teaching Thousands with Cloud-based GIS

    Science.gov (United States)

    Gould, Michael; DiBiase, David; Beale, Linda

    2016-04-01

    Teaching Thousands with Cloud-based GIS Educators often draw a distinction between "teaching about GIS" and "teaching with GIS." Teaching about GIS involves helping students learn what GIS is, what it does, and how it works. On the other hand, teaching with GIS involves using the technology as a means to achieve education objectives in the sciences, social sciences, professional disciplines like engineering and planning, and even the humanities. The same distinction applies to CyberGIS. Understandably, early efforts to develop CyberGIS curricula and educational resources tend to be concerned primarily with CyberGIS itself. However, if CyberGIS becomes as functional, usable and scalable as it aspires to be, teaching with CyberGIS has the potential to enable large and diverse global audiences to perform spatial analysis using hosted data, mapping and analysis services all running in the cloud. Early examples of teaching tens of thousands of students across the globe with cloud-based GIS include the massive open online courses (MOOCs) offered by Penn State University and others, as well as the series of MOOCs more recently developed and offered by Esri. In each case, ArcGIS Online was used to help students achieve educational objectives in subjects like business, geodesign, geospatial intelligence, and spatial analysis, as well as mapping. Feedback from the more than 100,000 total student participants to date, as well as from the educators and staff who supported these offerings, suggest that online education with cloud-based GIS is scalable to very large audiences. Lessons learned from the course design, development, and delivery of these early examples may be useful in informing the continuing development of CyberGIS education. While MOOCs may have passed the peak of their "hype cycle" in higher education, the phenomenon they revealed persists: namely, a global mass market of educated young adults who turn to free online education to expand their horizons. The

  3. A method of building of decision trees based on data from wearable device during a rehabilitation of patients with tibia fractures

    International Nuclear Information System (INIS)

    Kupriyanov, M. S.; Shukeilo, E. Y.; Shichkina, J. A.

    2015-01-01

    Technologies used in traumatology today combine mechanical, electronic, computing, and software tools. The relevance of developing mobile applications for the rapid processing of data received from medical devices (in particular, wearable devices) and for formulating management decisions is increasing. This article considers the use of a mathematical method for building decision trees to assess a patient's health condition using data from a wearable device.

  4. A method of building of decision trees based on data from wearable device during a rehabilitation of patients with tibia fractures

    Science.gov (United States)

    Kupriyanov, M. S.; Shukeilo, E. Y.; Shichkina, J. A.

    2015-11-01

    Technologies used in traumatology today combine mechanical, electronic, computing, and software tools. The relevance of developing mobile applications for the rapid processing of data received from medical devices (in particular, wearable devices) and for formulating management decisions is increasing. This article considers the use of a mathematical method for building decision trees to assess a patient's health condition using data from a wearable device.

  5. A method of building of decision trees based on data from wearable device during a rehabilitation of patients with tibia fractures

    Energy Technology Data Exchange (ETDEWEB)

    Kupriyanov, M. S., E-mail: mikhail.kupriyanov@gmail.com; Shukeilo, E. Y., E-mail: eyshukeylo@gmail.com; Shichkina, J. A., E-mail: strange.y@mail.ru [Saint Petersburg Electrotechnical University “LETI” (Russian Federation)

    2015-11-17

    Technologies used in traumatology today combine mechanical, electronic, computing, and software tools. The relevance of developing mobile applications for the rapid processing of data received from medical devices (in particular, wearable devices) and for formulating management decisions is increasing. This article considers the use of a mathematical method for building decision trees to assess a patient's health condition using data from a wearable device.

  6. Representing Boolean Functions by Decision Trees

    KAUST Repository

    Chikalov, Igor

    2011-01-01

    A Boolean or discrete function can be represented by a decision tree. A compact form of decision tree, named binary decision diagram or branching program, is widely known in logic design [2, 40]. This representation is equivalent to other forms, and in some cases it is more compact than a values table or even the formula [44]. Representing a function in the form of a decision tree allows graph algorithms to be applied for various transformations [10]. Decision trees and branching programs are used for effective hardware [15] and software [5] implementation of functions. For the implementation to be effective, the function representation should have minimal time and space complexity. The average depth of a decision tree characterizes the expected computing time, and the number of nodes in a branching program characterizes the number of functional elements required for implementation. Often these two criteria are incompatible, i.e., there is no solution that is optimal with respect to both time and space complexity. © Springer-Verlag Berlin Heidelberg 2011.

  7. Comparison of Greedy Algorithms for Decision Tree Optimization

    KAUST Repository

    Alkhalid, Abdulaziz

    2013-01-01

    This chapter is devoted to the study of 16 types of greedy algorithms for decision tree construction. The dynamic programming approach is used for construction of optimal decision trees. Optimization is performed relative to minimal values of average depth, depth, number of nodes, number of terminal nodes, and number of nonterminal nodes of decision trees. We compare average depth, depth, number of nodes, number of terminal nodes and number of nonterminal nodes of constructed trees with minimum values of the considered parameters obtained based on a dynamic programming approach. We report experiments performed on data sets from UCI ML Repository and randomly generated binary decision tables. As a result, for depth, average depth, and number of nodes we propose a number of good heuristics. © Springer-Verlag Berlin Heidelberg 2013.

  8. Decision tree methods: applications for classification and prediction.

    Science.gov (United States)

    Song, Yan-Yan; Lu, Ying

    2015-04-25

    Decision tree methodology is a commonly used data mining method for establishing classification systems based on multiple covariates or for developing prediction algorithms for a target variable. This method classifies a population into branch-like segments that form an inverted tree with a root node, internal nodes, and leaf nodes. The algorithm is non-parametric and can efficiently deal with large, complicated datasets without imposing a complicated parametric structure. When the sample size is large enough, the study data can be divided into training and validation datasets: the training dataset is used to build a decision tree model, and the validation dataset is used to decide on the appropriate tree size needed to achieve the optimal final model. This paper introduces frequently used algorithms for developing decision trees (including CART, C4.5, CHAID, and QUEST) and describes the SPSS and SAS programs that can be used to visualize tree structure.
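
    A hedged sketch of the training/validation split described above, using scikit-learn's CART-style tree and its cost-complexity pruning path as a stand-in for the SPSS/SAS workflows the paper covers; the dataset and parameters are placeholder assumptions:

```python
# Hedged sketch: grow a CART-style tree on a training set, then use a
# validation set to pick the tree size (via the cost-complexity pruning path).
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)
X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.3,
                                                  random_state=0)

# Candidate alphas correspond to progressively smaller (more pruned) trees.
path = DecisionTreeClassifier(random_state=0).cost_complexity_pruning_path(
    X_train, y_train)

best_alpha, best_score = None, -1.0
for alpha in path.ccp_alphas:
    tree = DecisionTreeClassifier(ccp_alpha=alpha, random_state=0)
    tree.fit(X_train, y_train)
    score = tree.score(X_val, y_val)      # validation accuracy decides the size
    if score > best_score:
        best_alpha, best_score = alpha, score

final = DecisionTreeClassifier(ccp_alpha=best_alpha, random_state=0)
final.fit(X_train, y_train)
print("chosen alpha:", best_alpha, "leaves:", final.get_n_leaves(),
      "validation accuracy:", best_score)
```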

  9. Optimization and analysis of decision trees and rules: Dynamic programming approach

    KAUST Repository

    Alkhalid, Abdulaziz

    2013-08-01

    This paper is devoted to the software system Dagger created at KAUST. The system is based on extensions of dynamic programming. It allows sequential optimization of decision trees and rules relative to different cost functions, derivation of relationships between two cost functions (in particular, between the number of misclassifications and the depth of decision trees), and between the cost and uncertainty of decision trees. We describe the features of Dagger and consider examples of this system's work on decision tables from the UCI Machine Learning Repository. We also use Dagger to compare 16 different greedy algorithms for decision tree construction. © 2013 Taylor and Francis Group, LLC.

  10. Optimization and analysis of decision trees and rules: Dynamic programming approach

    KAUST Repository

    Alkhalid, Abdulaziz; Amin, Talha M.; Chikalov, Igor; Hussain, Shahid; Moshkov, Mikhail; Zielosko, Beata

    2013-01-01

    This paper is devoted to the software system Dagger created at KAUST. The system is based on extensions of dynamic programming. It allows sequential optimization of decision trees and rules relative to different cost functions, derivation of relationships between two cost functions (in particular, between the number of misclassifications and the depth of decision trees), and between the cost and uncertainty of decision trees. We describe the features of Dagger and consider examples of this system's work on decision tables from the UCI Machine Learning Repository. We also use Dagger to compare 16 different greedy algorithms for decision tree construction. © 2013 Taylor and Francis Group, LLC.

  11. The decision tree approach to classification

    Science.gov (United States)

    Wu, C.; Landgrebe, D. A.; Swain, P. H.

    1975-01-01

    A class of multistage decision tree classifiers is proposed and studied relative to the classification of multispectral remotely sensed data. The decision tree classifiers are shown to have the potential for improving both the classification accuracy and the computation efficiency. Dimensionality in pattern recognition is discussed and two theorems on the lower bound of logic computation for multiclass classification are derived. The automatic or optimization approach is emphasized. Experimental results on real data are reported, which clearly demonstrate the usefulness of decision tree classifiers.

  12. New Splitting Criteria for Decision Trees in Stationary Data Streams.

    Science.gov (United States)

    Jaworski, Maciej; Duda, Piotr; Rutkowski, Leszek

    2018-06-01

    The most popular tools for stream data mining are based on decision trees. In the previous 15 years, all designed methods, headed by the very fast decision tree algorithm, relied on Hoeffding's inequality, and hundreds of researchers followed this scheme. Recently, we have demonstrated that although Hoeffding decision trees are an effective tool for dealing with stream data, they are a purely heuristic procedure; for example, classical decision trees such as ID3 or CART cannot be adapted to data stream mining using Hoeffding's inequality. Therefore, there is an urgent need to develop new algorithms that are both mathematically justified and characterized by good performance. In this paper, we address this problem by developing a family of new splitting criteria for classification in stationary data streams and investigating their probabilistic properties. The new criteria, derived using appropriate statistical tools, are based on the misclassification error and the Gini index impurity measures. A general division of splitting criteria into two types is proposed. Attributes chosen based on type-I splitting criteria guarantee, with high probability, the highest expected value of the split measure. Type-II criteria ensure that the chosen attribute is, with high probability, the same as would be chosen based on the whole infinite data stream. Moreover, two hybrid splitting criteria are proposed, which are combinations of single criteria based on the misclassification error and the Gini index.
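
    The two impurity measures the new criteria build on, misclassification error and the Gini index, can be illustrated with a small hedged sketch that evaluates the gain of a candidate binary split (the labels and the split are toy assumptions; this is not the stream-mining algorithm itself):

```python
# Hedged sketch: the two impurity measures underlying the splitting criteria,
# misclassification error and the Gini index, evaluated for a candidate split.
from collections import Counter

def gini(labels):
    n = len(labels)
    return 1.0 - sum((c / n) ** 2 for c in Counter(labels).values())

def misclassification(labels):
    n = len(labels)
    return 1.0 - max(Counter(labels).values()) / n

def split_gain(parent, left, right, impurity):
    # Impurity of the parent minus the size-weighted impurity of the children.
    n = len(parent)
    weighted = (len(left) * impurity(left) + len(right) * impurity(right)) / n
    return impurity(parent) - weighted

parent = ["a"] * 6 + ["b"] * 4
left, right = ["a"] * 5 + ["b"] * 1, ["a"] * 1 + ["b"] * 3

print("Gini gain:", split_gain(parent, left, right, gini))
print("misclassification gain:", split_gain(parent, left, right, misclassification))
```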

  13. Evaluation of Decision Trees for Cloud Detection from AVHRR Data

    Science.gov (United States)

    Shiffman, Smadar; Nemani, Ramakrishna

    2005-01-01

    Automated cloud detection and tracking is an important step in assessing changes in radiation budgets associated with global climate change via remote sensing. Data products based on satellite imagery are available to the scientific community for studying trends in the Earth's atmosphere. The data products include pixel-based cloud masks that assign cloud-cover classifications to pixels. Many cloud-mask algorithms have the form of decision trees, which employ sequential tests that scientists designed based on empirical astrophysics studies and simulations. Limitations of existing cloud masks restrict our ability to accurately track changes in cloud patterns over time. In a previous study we compared automatically learned decision trees to cloud masks included in Advanced Very High Resolution Radiometer (AVHRR) data products from the year 2000. In this paper we report the replication of that study for five years of data, and for a gold standard based on surface observations performed by scientists at weather stations in the British Isles. For our sample data, the accuracy of the automatically learned decision trees was greater than the accuracy of the cloud masks (p < 0.001).

  14. RE-Powering’s Electronic Decision Tree

    Science.gov (United States)

    Developed by US EPA's RE-Powering America's Land Initiative, the RE-Powering Decision Trees tool guides interested parties through a process to screen sites for their suitability for solar photovoltaic or wind installations.

  15. VC-dimension of univariate decision trees.

    Science.gov (United States)

    Yildiz, Olcay Taner

    2015-02-01

    In this paper, we give and prove lower bounds on the Vapnik-Chervonenkis (VC)-dimension of the univariate decision tree hypothesis class. The VC-dimension of a univariate decision tree depends on the VC-dimension values of its subtrees and the number of inputs. Via a search algorithm that calculates the VC-dimension of univariate decision trees exhaustively, we show that our VC-dimension bounds are tight for simple trees. To verify that the VC-dimension bounds are useful, we also use them to get VC-generalization bounds for complexity control using structural risk minimization in decision trees, i.e., pruning. Our simulation results show that structural risk minimization pruning using the VC-dimension bounds finds trees that are as accurate as those pruned using cross-validation.

  16. Speech Recognition Using Randomized Relational Decision Trees

    National Research Council Canada - National Science Library

    Amit, Yali

    1999-01-01

    .... This implies that we recognize words as units, without recognizing their subcomponents. Multiple randomized decision trees are used to access the large pool of acoustic events in a systematic manner and are aggregated to produce the classifier.

  17. PRIA 3 Fee Determination Decision Tree

    Science.gov (United States)

    The PRIA 3 decision tree will help applicants requesting a pesticide registration or certain tolerance action to accurately identify the category of their application and the amount of the required fee before they submit the application.

  18. Totally optimal decision trees for Boolean functions

    KAUST Repository

    Chikalov, Igor; Hussain, Shahid; Moshkov, Mikhail

    2016-01-01

    We study decision trees which are totally optimal relative to different sets of complexity parameters for Boolean functions. A totally optimal tree is an optimal tree relative to each parameter from the set simultaneously. We consider the parameters

  19. Solar and Wind Site Screening Decision Trees

    Science.gov (United States)

    EPA and NREL created a decision tree to guide state and local governments and other stakeholders through a process for screening sites for their suitability for future redevelopment with solar photovoltaic (PV) energy and wind energy.

  20. Comprehensive decision tree models in bioinformatics.

    Directory of Open Access Journals (Sweden)

    Gregor Stiglic

    PURPOSE: Classification is an important and widely used machine learning technique in bioinformatics. Researchers and other end-users of machine learning software often prefer to work with comprehensible models where knowledge extraction and explanation of the reasoning behind the classification model are possible. METHODS: This paper presents an extension to an existing machine learning environment and a study on visual tuning of decision tree classifiers. The motivation for this research comes from the need to build effective and easily interpretable decision tree models by a so-called one-button data mining approach where no parameter tuning is needed. To avoid bias in classification, no classification performance measure is used during the tuning of the model, which is constrained exclusively by the dimensions of the produced decision tree. RESULTS: The proposed visual tuning of decision trees was evaluated on 40 datasets containing classical machine learning problems and 31 datasets from the field of bioinformatics. Although we did not expect significant differences in classification performance, the results demonstrate a significant increase of accuracy in less complex, visually tuned decision trees. In contrast to classical machine learning benchmarking datasets, we observe higher accuracy gains in bioinformatics datasets. Additionally, a user study was carried out to confirm the assumption that tree tuning times are significantly lower for the proposed method in comparison to manual tuning of the decision tree. CONCLUSIONS: The empirical results demonstrate that by building simple models constrained by predefined visual boundaries, one not only achieves good comprehensibility, but also very good classification performance that does not differ from usually more complex models built using default settings of the classical decision tree algorithm. In addition, our study demonstrates the suitability of visually tuned decision trees for datasets
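
    A hedged sketch of the underlying idea, constraining the tree's dimensions rather than tuning against a performance measure, using depth and leaf-count limits in scikit-learn as a rough stand-in for the visual boundaries described in the paper:

```python
# Hedged sketch: constrain the tree's dimensions (depth and number of leaves)
# instead of tuning on a performance measure; the limits below are placeholders.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)

default_tree = DecisionTreeClassifier(random_state=0)
compact_tree = DecisionTreeClassifier(max_depth=3, max_leaf_nodes=8,
                                      random_state=0)

for name, tree in [("default", default_tree), ("size-constrained", compact_tree)]:
    scores = cross_val_score(tree, X, y, cv=10)
    print(name, "mean CV accuracy:", scores.mean())
```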

  1. Comprehensive decision tree models in bioinformatics.

    Science.gov (United States)

    Stiglic, Gregor; Kocbek, Simon; Pernek, Igor; Kokol, Peter

    2012-01-01

    Classification is an important and widely used machine learning technique in bioinformatics. Researchers and other end-users of machine learning software often prefer to work with comprehensible models where knowledge extraction and explanation of the reasoning behind the classification model are possible. This paper presents an extension to an existing machine learning environment and a study on visual tuning of decision tree classifiers. The motivation for this research comes from the need to build effective and easily interpretable decision tree models by a so-called one-button data mining approach where no parameter tuning is needed. To avoid bias in classification, no classification performance measure is used during the tuning of the model, which is constrained exclusively by the dimensions of the produced decision tree. The proposed visual tuning of decision trees was evaluated on 40 datasets containing classical machine learning problems and 31 datasets from the field of bioinformatics. Although we did not expect significant differences in classification performance, the results demonstrate a significant increase of accuracy in less complex, visually tuned decision trees. In contrast to classical machine learning benchmarking datasets, we observe higher accuracy gains in bioinformatics datasets. Additionally, a user study was carried out to confirm the assumption that tree tuning times are significantly lower for the proposed method in comparison to manual tuning of the decision tree. The empirical results demonstrate that by building simple models constrained by predefined visual boundaries, one not only achieves good comprehensibility, but also very good classification performance that does not differ from usually more complex models built using default settings of the classical decision tree algorithm. In addition, our study demonstrates the suitability of visually tuned decision trees for datasets with binary class attributes and a high number of possibly

  2. Multi-pruning of decision trees for knowledge representation and classification

    KAUST Repository

    Azad, Mohammad

    2016-06-09

    We consider two important questions related to decision trees: first, how to construct a decision tree with a reasonable number of nodes and a reasonable number of misclassifications, and second, how to improve the prediction accuracy of decision trees when they are used as classifiers. We have created a dynamic programming based approach for bi-criteria optimization of decision trees relative to the number of nodes and the number of misclassifications. This approach allows us to construct the set of all Pareto optimal points and to derive, for each such point, decision trees with parameters corresponding to that point. Experiments on datasets from the UCI ML Repository show that, very often, we can find a suitable Pareto optimal point and derive a decision tree with a small number of nodes at the expense of a small increase in the number of misclassifications. Based on the created approach, we have proposed a multi-pruning procedure which constructs decision trees that, as classifiers, often outperform decision trees constructed by CART. © 2015 IEEE.
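
    A hedged sketch of the bi-criteria view: trace trees of different sizes and keep the Pareto optimal (number of nodes, misclassifications) pairs. A cost-complexity pruning sweep in scikit-learn stands in for the paper's exact dynamic-programming construction, and the dataset is a placeholder assumption:

```python
# Hedged sketch: enumerate trees of different sizes and keep the Pareto optimal
# (number of nodes, misclassifications) pairs.
from sklearn.datasets import load_breast_cancer
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)
path = DecisionTreeClassifier(random_state=0).cost_complexity_pruning_path(X, y)

points = []
for alpha in path.ccp_alphas:
    tree = DecisionTreeClassifier(ccp_alpha=alpha, random_state=0).fit(X, y)
    errors = int((tree.predict(X) != y).sum())   # misclassifications on the table
    points.append((tree.tree_.node_count, errors))

# Keep only non-dominated points: no other tree has both fewer nodes and fewer errors.
pareto = [p for p in points
          if not any(q[0] <= p[0] and q[1] <= p[1] and q != p for q in points)]
print(sorted(set(pareto)))
```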

  3. Multi-pruning of decision trees for knowledge representation and classification

    KAUST Repository

    Azad, Mohammad; Chikalov, Igor; Hussain, Shahid; Moshkov, Mikhail

    2016-01-01

    We consider two important questions related to decision trees: first, how to construct a decision tree with a reasonable number of nodes and a reasonable number of misclassifications, and second, how to improve the prediction accuracy of decision trees when they are used as classifiers. We have created a dynamic programming based approach for bi-criteria optimization of decision trees relative to the number of nodes and the number of misclassifications. This approach allows us to construct the set of all Pareto optimal points and to derive, for each such point, decision trees with parameters corresponding to that point. Experiments on datasets from the UCI ML Repository show that, very often, we can find a suitable Pareto optimal point and derive a decision tree with a small number of nodes at the expense of a small increase in the number of misclassifications. Based on the created approach, we have proposed a multi-pruning procedure which constructs decision trees that, as classifiers, often outperform decision trees constructed by CART. © 2015 IEEE.

  4. Using Web-Based GIS in Introductory Human Geography

    Science.gov (United States)

    Songer, Lynn C.

    2010-01-01

    Advocates for using a geographic information system (GIS) in education assert that GIS improves student learning. However, studies to clarify the relationship between learning and using GIS are still needed. This study examines the effects of using Web-based GIS maps in place of paper maps on students' geography content knowledge and motivation…

  5. Automated Decision Tree Classification of Corneal Shape

    Science.gov (United States)

    Twa, Michael D.; Parthasarathy, Srinivasan; Roberts, Cynthia; Mahmoud, Ashraf M.; Raasch, Thomas W.; Bullimore, Mark A.

    2011-01-01

    Purpose The volume and complexity of data produced during videokeratography examinations present a challenge of interpretation. As a consequence, results are often analyzed qualitatively by subjective pattern recognition or reduced to comparisons of summary indices. We describe the application of decision tree induction, an automated machine learning classification method, to discriminate between normal and keratoconic corneal shapes in an objective and quantitative way. We then compared this method with other known classification methods. Methods The corneal surface was modeled with a seventh-order Zernike polynomial for 132 normal eyes of 92 subjects and 112 eyes of 71 subjects diagnosed with keratoconus. A decision tree classifier was induced using the C4.5 algorithm, and its classification performance was compared with the modified Rabinowitz–McDonnell index, Schwiegerling’s Z3 index (Z3), Keratoconus Prediction Index (KPI), KISA%, and Cone Location and Magnitude Index using recommended classification thresholds for each method. We also evaluated the area under the receiver operator characteristic (ROC) curve for each classification method. Results Our decision tree classifier performed equal to or better than the other classifiers tested: accuracy was 92% and the area under the ROC curve was 0.97. Our decision tree classifier reduced the information needed to distinguish between normal and keratoconus eyes using four of 36 Zernike polynomial coefficients. The four surface features selected as classification attributes by the decision tree method were inferior elevation, greater sagittal depth, oblique toricity, and trefoil. Conclusions Automated decision tree classification of corneal shape through Zernike polynomials is an accurate quantitative method of classification that is interpretable and can be generated from any instrument platform capable of raw elevation data output. This method of pattern classification is extendable to other classification
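
    A hedged sketch of the classification step, a decision tree over (synthetic) Zernike coefficients evaluated by the area under the ROC curve; scikit-learn's CART implementation stands in for the C4.5 algorithm used in the study, and all data and parameters below are assumptions:

```python
# Hedged sketch: classify corneal shapes from synthetic Zernike coefficients
# with a decision tree and report the area under the ROC curve.
import numpy as np
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(3)
n_normal, n_kc = 132, 112                     # group sizes from the abstract
coeffs_normal = rng.normal(0.0, 1.0, size=(n_normal, 36))
coeffs_kc = rng.normal(0.0, 1.0, size=(n_kc, 36))
coeffs_kc[:, :4] += 1.5                       # shift a few coefficients so the classes separate

X = np.vstack([coeffs_normal, coeffs_kc])
y = np.array([0] * n_normal + [1] * n_kc)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=3)

tree = DecisionTreeClassifier(max_depth=4, random_state=3).fit(X_tr, y_tr)
auc = roc_auc_score(y_te, tree.predict_proba(X_te)[:, 1])
print("ROC AUC:", auc, "coefficients used:", int((tree.feature_importances_ > 0).sum()))
```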

  6. A decision tree-based combination of ezrin-interacting proteins to estimate the prognostic risk of patients with esophageal squamous cell carcinoma.

    Science.gov (United States)

    He, Jian-Zhong; Wu, Zhi-Yong; Wang, Shao-Hong; Ji, Xia; Yang, Cui-Xia; Xu, Xiu-E; Liao, Lian-Di; Wu, Jian-Yi; Li, En-Min; Zhang, Kai; Xu, Li-Yan

    2017-08-01

    Our previous studies have highlighted the importance of ezrin in esophageal squamous cell carcinoma (ESCC). Here our objective was to explore the clinical significance of ezrin-interacting proteins, which would provide a theoretical basis for understanding the function of ezrin and potential therapeutic targets for ESCC. We used affinity purification and mass spectrometry to identify PDIA3, CNPY2, and STMN1 as potential ezrin-interacting proteins. Confocal microscopy and coimmunoprecipitation analysis further confirmed the colocalization and interaction of ezrin with PDIA3, CNPY2, and STMN1. Tissue microarray data of ESCC samples (n=263) showed that the 5-year overall survival (OS) and disease-free survival (DFS) were significantly lower for the CNPY2 (OS, P=.003; DFS, P=.011) and STMN1 (OS, P=.010; DFS, P=.002) high-expression groups compared with the low-expression groups. By contrast, overexpression of PDIA3 was significantly correlated with favorable survival (OS, P<.001; DFS, P=.001). Cox regression demonstrated the prognostic value of PDIA3, CNPY2, and STMN1 in ESCC. Furthermore, decision tree analysis revealed that the resulting classifier of both ezrin and its interacting proteins could be used to better predict OS and DFS of patients with ESCC. In conclusion, a signature of ezrin-interacting proteins accurately predicts ESCC patient survival or tumor recurrence. Copyright © 2017 Elsevier Inc. All rights reserved.

  7. Knowledge-based decision tree approach for mapping spatial distribution of rice crop using C-band synthetic aperture radar-derived information

    Science.gov (United States)

    Mishra, Varun Narayan; Prasad, Rajendra; Kumar, Pradeep; Srivastava, Prashant K.; Rai, Praveen Kumar

    2017-10-01

    Updated and accurate information on rice-growing areas is vital for food security and for investigating the environmental impact of rice ecosystems. The intent of this work is to explore the feasibility of dual-polarimetric C-band Radar Imaging Satellite-1 (RISAT-1) data for delineating rice crop fields from other land cover features. Two polarization combinations of RISAT-1 backscatter, namely the ratio (HH/HV) and the difference (HH-HV), significantly enhanced the backscatter difference between rice and non-rice categories. With these inputs, a QUEST decision tree (DT) classifier was successfully employed to extract the spatial distribution of rice crop areas. The results showed the optimal polarization combination to be HH along with HH/HV and HH-HV for rice crop mapping, with an accuracy of 88.57%. The results were further compared with a rice crop map derived from the Landsat-8 Operational Land Imager (OLI) optical sensor; spatial agreement of almost 90% was achieved between the outputs produced from Landsat-8 OLI and RISAT-1 data. The simplicity of the approach used in this work may make it an effective tool for rice crop mapping.

  8. The Performance Analysis of the Map-Aided Fuzzy Decision Tree Based on the Pedestrian Dead Reckoning Algorithm in an Indoor Environment

    Directory of Open Access Journals (Sweden)

    Kai-Wei Chiang

    2015-12-01

    Hardware sensors embedded in a smartphone allow the device to become an excellent mobile navigator. A smartphone is ideal for this task because its great international popularity has led to increased phone power and because most of the necessary infrastructure is already in place. However, using a smartphone for indoor pedestrian navigation can be problematic due to the low accuracy of its sensors, the imprecise predictability of pedestrian motion, and the inaccessibility of the Global Navigation Satellite System (GNSS) in some indoor environments. Pedestrian Dead Reckoning (PDR) is one of the most common technologies used for pedestrian navigation, but in its basic form various errors tend to accumulate. This study introduces a fuzzy decision tree (FDT) aided by map information to improve the accuracy and stability of PDR with less dependency on infrastructure. First, the map is quickly surveyed by the Indoor Mobile Mapping System (IMMS). Next, Bluetooth beacons are implemented to enable initialization from any position. Finally, the map-aided FDT estimates navigation solutions in real time. The experiments were conducted in different fields using a variety of smartphones and users in order to verify stability. The baseline PDR system demonstrates low stability in each case without pre-calibration and post-processing, whereas the proposed low-complexity FDT algorithm shows good stability and accuracy under the same conditions.

  9. Stock Picking via Nonsymmetrically Pruned Binary Decision Trees

    OpenAIRE

    Anton Andriyashin

    2008-01-01

    Stock picking is the field of financial analysis that is of particular interest for many professional investors and researchers. In this study stock picking is implemented via binary classification trees. Optimal tree size is believed to be the crucial factor in forecasting performance of the trees. While there exists a standard method of tree pruning, which is based on the cost-complexity tradeoff and used in the majority of studies employing binary decision trees, this paper introduces a no...

  10. A decision tree for differentiating multiple system atrophy from Parkinson's disease using 3-T MR imaging.

    Science.gov (United States)

    Nair, Shalini Rajandran; Tan, Li Kuo; Mohd Ramli, Norlisah; Lim, Shen Yang; Rahmat, Kartini; Mohd Nor, Hazman

    2013-06-01

    To develop a decision tree based on standard magnetic resonance imaging (MRI) and diffusion tensor imaging (DTI) to differentiate multiple system atrophy (MSA) from Parkinson's disease (PD). 3-T brain MRI and DTI were performed on 26 PD and 13 MSA patients. Regions of interest (ROIs) were the putamen, substantia nigra, pons, middle cerebellar peduncles (MCP) and cerebellum. Linear, volumetric and DTI (fractional anisotropy and mean diffusivity) measurements were obtained. A three-node decision tree was formulated, with the design goals being 100% specificity at node 1, 100% sensitivity at node 2 and the highest combined sensitivity and specificity at node 3. Nine parameters (mean width, fractional anisotropy (FA) and mean diffusivity (MD) of the MCP; anteroposterior diameter of the pons; cerebellar FA and volume; pons and mean putamen volume; mean FA of the rostral substantia nigra compacta) showed statistically significant differences between groups; three of these were used to form the decision tree, with threshold values of 14.6 mm, 21.8 mm and 0.55 at the three nodes. The overall performance of the decision tree was 92% sensitivity, 96% specificity, 92% PPV and 96% NPV, and twelve out of 13 MSA patients were accurately classified. The decision tree formed with these parameters was both descriptive and predictive in differentiating between MSA and PD. • Parkinson's disease and multiple system atrophy can be distinguished on MR imaging. • Combined conventional MRI and diffusion tensor imaging improves the accuracy of diagnosis. • A decision tree is descriptive and predictive in differentiating between clinical entities. • A decision tree can reliably differentiate Parkinson's disease from multiple system atrophy.

  11. Application of decision tree model for the ground subsidence hazard mapping near abandoned underground coal mines.

    Science.gov (United States)

    Lee, Saro; Park, Inhye

    2013-09-30

    Subsidence of ground caused by underground mines poses hazards to human life and property. This study analyzed the ground subsidence hazard using factors that can affect ground subsidence and a decision tree approach in a geographic information system (GIS). The study area was Taebaek, Gangwon-do, Korea, where many abandoned underground coal mines exist. Spatial data, topography, geology, and various ground-engineering data for the subsidence area were collected and compiled in a database for mapping ground-subsidence hazard (GSH). The subsidence area was randomly split 50/50 for training and validation of the models. A data-mining classification technique was applied to the GSH mapping, and decision trees were constructed using the chi-squared automatic interaction detector (CHAID) and the quick, unbiased, and efficient statistical tree (QUEST) algorithms. The frequency ratio model was also applied to the GSH mapping for comparison with a probabilistic model. The resulting GSH maps were validated using area-under-the-curve (AUC) analysis with the subsidence area data that had not been used for training the model. The highest accuracy was achieved by the decision tree model using the CHAID algorithm (94.01%), compared with the QUEST algorithm (90.37%) and the frequency ratio model (86.70%). These accuracies are higher than previously reported results for decision tree models. Decision tree methods can therefore be used efficiently for GSH analysis and might be widely used for the prediction of various spatial events. Copyright © 2013. Published by Elsevier Ltd.
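    The validation logic described above (random 50/50 split followed by AUC scoring of a tree-based hazard model) can be sketched as follows. The file name and the factor columns are hypothetical, and scikit-learn's CART stands in for the CHAID/QUEST algorithms used in the original GIS workflow.

```python
# Hedged sketch of the validation step only: split subsidence data 50/50,
# fit a tree-based model, and score it with AUC.  Feature names are
# hypothetical; the original study used CHAID/QUEST, not scikit-learn's CART.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import roc_auc_score

df = pd.read_csv("subsidence_factors.csv")                 # hypothetical file
X = df[["depth_of_mine", "slope", "groundwater", "rmr"]]   # hypothetical factors
y = df["subsidence"]                                       # 1 = subsided cell, 0 = stable

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)
model = DecisionTreeClassifier(max_depth=5, random_state=0).fit(X_tr, y_tr)
auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
print(f"validation AUC: {auc:.3f}")
```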

  12. IMPLEMENTATION OF GIS-BASED MULTICRITERIA DECISION ANALYSIS WITH VB IN ArcGIS

    OpenAIRE

    DERYA OZTURK; FATMAGUL BATUK

    2011-01-01

    This article focuses on the integration of multicriteria decision analysis (MCDA) and geographical information systems (GIS) and introduces a tool, GIS–MCDA, written in Visual Basic in ArcGIS for GIS-based MCDA. The GIS–MCDA tool deals with raster-based data sets and includes standardization, weighting and decision analysis methods, and sensitivity analysis. Simple additive weighting, weighted product method, technique for order preference by similarity to ideal solution, compromise programming, a...

  13. Comparison of Greedy Algorithms for Decision Tree Optimization

    KAUST Repository

    Alkhalid, Abdulaziz; Chikalov, Igor; Moshkov, Mikhail

    2013-01-01

    This chapter is devoted to the study of 16 types of greedy algorithms for decision tree construction. The dynamic programming approach is used for construction of optimal decision trees. Optimization is performed relative to minimal values

  14. On algorithm for building of optimal α-decision trees

    KAUST Repository

    Alkhalid, Abdulaziz; Chikalov, Igor; Moshkov, Mikhail

    2010-01-01

    The paper describes an algorithm that constructs approximate decision trees (α-decision trees), which are optimal relatively to one of the following complexity measures: depth, total path length or number of nodes. The algorithm uses dynamic

  15. Using decision trees and their ensembles for analysis of NIR spectroscopic data

    DEFF Research Database (Denmark)

    Kucheryavskiy, Sergey V.

    Advanced machine learning methods, like convolutional neural networks and decision trees, became extremely popular in the last decade. This is directly related to the current boom in Big Data analysis, where traditional statistical methods are not efficient. According to kaggle.com, the most popular online resource for Big Data problems and solutions, methods based on decision trees and their ensembles are the most widely used for solving such problems. It can be noted that decision trees and convolutional neural networks are not very popular in Chemometrics; one of the reasons may lie in the evaluation and interpretation of the models. In this presentation, we discuss the applicability of decision tree-based methods (including gradient boosting) for solving classification and regression tasks with NIR spectra as predictors, covering such aspects as evaluation, optimization and validation.

  16. Decision tree-based modelling for identification of potential interactions between type 2 diabetes risk factors: a decade follow-up in a Middle East prospective cohort study.

    Science.gov (United States)

    Ramezankhani, Azra; Hadavandi, Esmaeil; Pournik, Omid; Shahrabi, Jamal; Azizi, Fereidoun; Hadaegh, Farzad

    2016-12-01

    The current study was undertaken to use the decision tree (DT) method for developing different prediction models for the incidence of type 2 diabetes (T2D) and for exploring interactions between predictor variables in those models. Prospective cohort study. Tehran Lipid and Glucose Study (TLGS). A total of 6647 participants (43.4% men) aged >20 years, without T2D at baseline (1999-2001 and 2002-2005), were followed until 2012. Two series of models (with and without 2-hour postchallenge plasma glucose (2h-PCPG)) were developed using 3 types of DT algorithms. The performances of the models were assessed using sensitivity, specificity, area under the ROC curve (AUC), geometric mean (G-Mean) and F-Measure. T2D was the primary outcome, defined as a fasting plasma glucose (FPG) ≥7 mmol/L, a 2h-PCPG ≥11.1 mmol/L, or use of antidiabetic medication. During a median follow-up of 9.5 years, 729 new cases of T2D were identified. The Quick Unbiased Efficient Statistical Tree (QUEST) algorithm had the highest sensitivity and G-Mean among all the models for men and women. The models that included 2h-PCPG had a sensitivity and G-Mean of 78% and 0.75 for men and of 78% and 0.78 for women, respectively. Both models achieved good discrimination power with AUC above 0.78. FPG, 2h-PCPG, waist-to-height ratio (WHtR) and mean arterial blood pressure (MAP) were the most important factors for the incidence of T2D in both genders. Among men, those with an FPG≤4.9 mmol/L and 2h-PCPG≤7.7 mmol/L had the lowest risk, and those with an FPG>5.3 mmol/L and 2h-PCPG>4.4 mmol/L had the highest risk for T2D incidence. In women, those with an FPG≤5.2 mmol/L and WHtR≤0.55 had the lowest risk, and those with an FPG>5.2 mmol/L and WHtR>0.56 had the highest risk for T2D incidence. Our study emphasises the utility of DT for exploring interactions between predictor variables. Published by the BMJ Publishing Group Limited.
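    The performance measures named in this abstract (sensitivity, specificity, G-Mean and F-Measure) are all simple functions of a confusion matrix. The sketch below shows the standard formulas; the counts used are made up purely for illustration.

```python
# Sketch of the evaluation metrics named in the abstract, computed from a
# confusion matrix.  The counts below are made up for illustration.
from math import sqrt

tp, fn, fp, tn = 570, 159, 1300, 4618   # hypothetical counts

sensitivity = tp / (tp + fn)            # recall for the T2D class
specificity = tn / (tn + fp)
precision   = tp / (tp + fp)
g_mean      = sqrt(sensitivity * specificity)
f_measure   = 2 * precision * sensitivity / (precision + sensitivity)

print(f"sensitivity={sensitivity:.2f}, G-Mean={g_mean:.2f}, F={f_measure:.2f}")
```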

  17. GIS In-Service Teacher Training Based on TPACK

    Science.gov (United States)

    Hong, Jung Eun; Stonier, Francis

    2015-01-01

    This article introduces the geographic information systems (GIS) in-service teacher training, focusing on the intersection of technological, pedagogical, and content knowledge (TPACK) for successful implementation of GIS in the classroom. Eleven social studies teachers in Georgia learned GIS technologies, inquiry-based learning, and social studies…

  18. Application of preprocessing filtering on Decision Tree C4.5 and rough set theory

    Science.gov (United States)

    Chan, Joseph C. C.; Lin, Tsau Y.

    2001-03-01

    This paper compares two artificial intelligence methods, the Decision Tree C4.5 and Rough Set Theory, on stock market data. The Decision Tree C4.5 is reviewed alongside Rough Set Theory. An enhanced window application is developed to facilitate pre-processing filtering by introducing feature (attribute) transformations, which allow users to input formulas and create new attributes. The application also produces three varieties of data set with delaying, averaging, and summation. The results demonstrate the improvement gained from pre-processing by applying feature (attribute) transformations with Decision Tree C4.5. Moreover, the comparison between Decision Tree C4.5 and Rough Set Theory is based on clarity, automation, accuracy, dimensionality, raw data, and speed, and is supported by the rule sets generated by both algorithms on three different sets of data.

  19. Decision Trees for Helpdesk Advisor Graphs

    OpenAIRE

    Gkezerlis, Spyros; Kalles, Dimitris

    2017-01-01

    We use decision trees to build a helpdesk agent reference network to facilitate the on-the-job advising of junior or less experienced staff on how to better address telecommunication customer fault reports. Such reports generate field measurements and remote measurements which, when coupled with location data and client attributes, and fused with organization-level statistics, can produce models of how support should be provided. Beyond decision support, these models can help identify staff w...

  20. Highway traffic noise prediction based on GIS

    Science.gov (United States)

    Zhao, Jianghua; Qin, Qiming

    2014-05-01

    Before building a new road, we need to predict the traffic noise generated by vehicles. Traditional traffic noise prediction methods are based on specific locations; they are not only time-consuming and costly, but their results also cannot be visualized. A Geographical Information System (GIS) can not only solve the problem of manual data processing, but can also provide noise values at any point. The paper selected a road segment from Wenxi to Heyang. According to the geographical overview of the study area and a comparison of several models, we combine the JTG B03-2006 model and the HJ2.4-2009 model to predict the traffic noise depending on the circumstances. Finally, we interpolate the noise values at each prediction point and then generate noise contours. By overlaying the village data on the noise contour layer, we can produce thematic maps. The use of GIS for road traffic noise prediction greatly assists decision-makers because of the spatial analysis and visualization capabilities of GIS. We can clearly see the districts where noise is excessive, and thus it becomes convenient to optimize the road alignment and take noise reduction measures such as installing sound barriers and relocating villages.

  1. Development of a New Decision Tree to Rapidly Screen Chemical Estrogenic Activities of Xenopus laevis.

    Science.gov (United States)

    Wang, Ting; Li, Weiying; Zheng, Xiaofeng; Lin, Zhifen; Kong, Deyang

    2014-02-01

    During the past decades, there has been an increasing number of studies on the estrogenic activities of environmental pollutants in amphibians, and many determination methods have been proposed. However, these determination methods are time-consuming and expensive, and a rapid and simple method to screen and test chemicals for estrogenic activities in amphibians is therefore imperative. Herein is proposed a new decision tree, formulated not only with physicochemical parameters but also with a biological parameter, that was successfully used to screen the estrogenic activities of chemicals on amphibians. The biological parameter, the CDOCKER interaction energy (Ebinding) between chemicals and the target proteins, was calculated using molecular docking, and it was used to revise the decision tree formulated by Hong with physicochemical parameters only for screening the estrogenic activity of chemicals in rat. According to the correlation between the Ebinding of rat and of Xenopus laevis, a new decision tree for estrogenic activities in Xenopus laevis is finally proposed. It was then validated using 8 randomly selected chemicals to which Xenopus laevis can frequently be exposed, and the agreement between the results from the new decision tree and those from experiments is generally satisfactory. Consequently, the new decision tree can be used to screen the estrogenic activities of chemicals, and the combined use of Ebinding and classical physicochemical parameters greatly improves Hong's decision tree. Copyright © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  2. [RS estimation of inventory parameters and carbon storage of moso bamboo forest based on synergistic use of object-based image analysis and decision tree].

    Science.gov (United States)

    Du, Hua Qiang; Sun, Xiao Yan; Han, Ning; Mao, Fang Jie

    2017-10-01

    By synergistically using object-based image analysis (OBIA) and classification and regression tree (CART) methods, the distribution, the inventory indexes (including diameter at breast height, tree height, and crown closure), and the aboveground carbon storage (AGC) of moso bamboo forest in Shanchuan Town, Anji County, Zhejiang Province were investigated. The results showed that the moso bamboo forest could be accurately delineated by integrating the multi-scale image segmentation of the OBIA technique with CART, which connected the image objects at various scales, with a good producer's accuracy of 89.1%. The indexes estimated by the regression tree model constructed from features extracted from the image objects reached moderate or better accuracy, among which the crown closure model achieved the best estimation accuracy of 67.9%. The estimation accuracy of diameter at breast height and tree height was relatively low, which is consistent with the conclusion that estimating diameter at breast height and tree height from optical remote sensing cannot achieve satisfactory results. Estimation of AGC reached relatively high accuracy, and the accuracy for the high-value region was above 80%.

  3. A survey of decision tree classifier methodology

    Science.gov (United States)

    Safavian, S. R.; Landgrebe, David

    1991-01-01

    Decision tree classifiers (DTCs) are used successfully in many diverse areas such as radar signal classification, character recognition, remote sensing, medical diagnosis, expert systems, and speech recognition. Perhaps the most important feature of DTCs is their capability to break down a complex decision-making process into a collection of simpler decisions, thus providing a solution which is often easier to interpret. A survey of current methods is presented for DTC designs and the various existing issues. After considering potential advantages of DTCs over single-state classifiers, subjects of tree structure design, feature selection at each internal node, and decision and search strategies are discussed.

  4. Algorithms for optimal dyadic decision trees

    Energy Technology Data Exchange (ETDEWEB)

    Hush, Don [Los Alamos National Laboratory; Porter, Reid [Los Alamos National Laboratory

    2009-01-01

    A new algorithm for constructing optimal dyadic decision trees was recently introduced, analyzed, and shown to be very effective for low dimensional data sets. This paper enhances and extends this algorithm by: introducing an adaptive grid search for the regularization parameter that guarantees optimal solutions for all relevant tree sizes, revising the core tree-building algorithm so that its run time is substantially smaller for most regularization parameter values on the grid, and incorporating new data structures and data pre-processing steps that provide significant run time enhancement in practice.

  5. Constructing an optimal decision tree for FAST corner point detection

    KAUST Repository

    Alkhalid, Abdulaziz; Chikalov, Igor; Moshkov, Mikhail

    2011-01-01

    In this paper, we consider a problem that is originated in computer vision: determining an optimal testing strategy for the corner point detection problem that is a part of FAST algorithm [11,12]. The problem can be formulated as building a decision tree with the minimum average depth for a decision table with all discrete attributes. We experimentally compare performance of an exact algorithm based on dynamic programming and several greedy algorithms that differ in the attribute selection criterion. © 2011 Springer-Verlag.

  6. Language Adaptive LVCSR Through Polyphone Decision Tree Specialization

    Science.gov (United States)

    2000-08-01

    ...crossing rate. After cepstral mean subtraction, a linear discriminant analysis is applied. Mandarin results are given in character-based error rate, Japanese in hiragana-based error rate. For this purpose we introduce a polyphone decision tree specialization method. For our experiments we developed monolingual LVCSR systems for several languages, including German and Japanese (per-language corpus statistics and error-rate tables omitted).

  7. Computer-based instrumentation for partial discharge detection in GIS

    International Nuclear Information System (INIS)

    Md Enamul Haque; Ahmad Darus; Yaacob, M.M.; Halil Hussain; Feroz Ahmed

    2000-01-01

    Partial discharge is one of the prominent indicators of defects and insulation degradation in a Gas Insulated Switchgear (GIS). Partial discharges (PD) have a harmful effect on the life of the insulation of high voltage equipment. PD detection using the acoustic technique and subsequent analysis is currently an efficient method of performing non-destructive testing of GIS apparatus. A low cost PC-based acoustic PD detection instrument has been developed for the non-destructive diagnosis of GIS. This paper describes the development of a PC-based instrumentation system for partial discharge detection in GIS, and some experimental results are also presented. (Author)

  8. A GIS based hydrogeomorphic approach for identification of site ...

    Indian Academy of Sciences (India)

    a Geographical Information System (GIS) based hydrogeomorphic approach in the Bhatsa and Kalu river basins of Thane district, in western DVP. The criteria adopted for the GIS analysis were based .... segments of the rivers. The majority of the lineaments correspond to either dyke ridges or stream channels which are of ...

  9. On algorithm for building of optimal α-decision trees

    KAUST Repository

    Alkhalid, Abdulaziz

    2010-01-01

    The paper describes an algorithm that constructs approximate decision trees (α-decision trees), which are optimal relatively to one of the following complexity measures: depth, total path length or number of nodes. The algorithm uses dynamic programming and extends methods described in [4] to constructing approximate decision trees. Adjustable approximation rate allows controlling algorithm complexity. The algorithm is applied to build optimal α-decision trees for two data sets from UCI Machine Learning Repository [1]. © 2010 Springer-Verlag Berlin Heidelberg.

  10. Automatic design of decision-tree induction algorithms

    CERN Document Server

    Barros, Rodrigo C; Freitas, Alex A

    2015-01-01

    Presents a detailed study of the major design components that constitute a top-down decision-tree induction algorithm, including aspects such as split criteria, stopping criteria, pruning, and the approaches for dealing with missing values. Whereas the strategy still employed nowadays is to use a 'generic' decision-tree induction algorithm regardless of the data, the authors argue on the benefits that a bias-fitting strategy could bring to decision-tree induction, in which the ultimate goal is the automatic generation of a decision-tree induction algorithm tailored to the application domain o

  11. Shopping intention prediction using decision trees

    Directory of Open Access Journals (Sweden)

    Dario Šebalj

    2017-09-01

    Full Text Available Introduction: Price is considered to be a neglected marketing mix element due to the complexity of price management and the sensitivity of customers to price changes, which pull the fastest customer reactions. Accordingly, the process of making shopping decisions can be very challenging for the customer. Objective: The aim of this paper is to create a model that is able to predict shopping intention and classify respondents into one of two categories, depending on whether they intend to shop or not. Methods: The data sample consists of 305 respondents, persons older than 18 years involved in buying groceries for their household. The research was conducted in February 2017. In order to create a model, the decision tree method was used with several of its classification algorithms. Results: All models, except the one that used the RandomTree algorithm, achieved a relatively high classification rate (over 80%). The highest classification accuracy of 84.75% was given by the J48 and RandomForest algorithms. Since there is no statistically significant difference between those two algorithms, the authors decided to choose the J48 algorithm and build a decision tree. Conclusions: Value for money and the price level in the store were the most significant variables for the classification of shopping intention. A future study plans to compare this model with other data mining techniques, such as neural networks or support vector machines, since these techniques have achieved very good accuracy in previous research in this field.
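    The comparison described above was carried out in WEKA with J48 and RandomForest; as a hedged stand-in, the same pattern can be sketched with scikit-learn, where CART plays the role of the pruned tree. The survey file and feature names are assumptions for the example.

```python
# Hedged stand-in for the WEKA workflow described above, using scikit-learn.
# The file name and feature columns ("value_for_money", "price_level", ...)
# are assumptions; J48 is approximated here by a depth-limited CART tree.
import pandas as pd
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier

survey = pd.read_csv("shopping_survey.csv")        # hypothetical 305-row sample
X = survey[["value_for_money", "price_level", "income", "household_size"]]
y = survey["intends_to_shop"]

for name, clf in [("decision tree", DecisionTreeClassifier(max_depth=4)),
                  ("random forest", RandomForestClassifier(n_estimators=200))]:
    acc = cross_val_score(clf, X, y, cv=10).mean()
    print(f"{name}: {acc:.2%} classification accuracy")
```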

  12. Format conversion between CAD data and GIS data based on ArcGIS

    Science.gov (United States)

    Xie, Qingqing; Wei, Bo; Zhang, Kailin; Wang, Zhichao

    2015-12-01

    To make full use of data resources and realize sharing of different types of data across industries, a method of format conversion between CAD data and GIS data based on ArcGIS was proposed. To keep the integrity of the converted data, some key processing steps were carried out on the CAD data in AutoCAD before conversion. For example, deleting unnecessary elements such as the title, border and legend avoided the appearance of unnecessary elements after conversion, and re-layering the data according to a national standard prevented different types of elements from appearing in the same layer after conversion. In ArcGIS, the conversion of CAD data to GIS data was carried out according to the correspondence of graphic element classification between AutoCAD and ArcGIS. In addition, an empty geographic database and feature set had to be created in ArcGIS for storing the text data of the CAD data. The experimental results show that the proposed method avoids a large amount of editing work in data conversion and maintains the integrity of spatial data and attribute data before and after conversion.

  13. Decision tree analysis in subarachnoid hemorrhage: prediction of outcome parameters during the course of aneurysmal subarachnoid hemorrhage using decision tree analysis.

    Science.gov (United States)

    Hostettler, Isabel Charlotte; Muroi, Carl; Richter, Johannes Konstantin; Schmid, Josef; Neidert, Marian Christoph; Seule, Martin; Boss, Oliver; Pangalu, Athina; Germans, Menno Robbert; Keller, Emanuela

    2018-01-19

    OBJECTIVE The aim of this study was to create prediction models for outcome parameters by decision tree analysis based on clinical and laboratory data in patients with aneurysmal subarachnoid hemorrhage (aSAH). METHODS The database consisted of clinical and laboratory parameters of 548 patients with aSAH who were admitted to the Neurocritical Care Unit, University Hospital Zurich. To examine the model performance, the cohort was randomly divided into a derivation cohort (60% [n = 329]; training data set) and a validation cohort (40% [n = 219]; test data set). The classification and regression tree prediction algorithm was applied to predict death, functional outcome, and ventriculoperitoneal (VP) shunt dependency. Chi-square automatic interaction detection was applied to predict delayed cerebral infarction on days 1, 3, and 7. RESULTS The overall mortality was 18.4%. The accuracy of the decision tree models was good for survival on day 1 and favorable functional outcome at all time points, with only a small difference between the training and test data sets. The use of decision trees enables exploration of dependent variables in the context of multiple changing influences over the course of an illness. The decision tree currently generated increases awareness of the early systemic stress response, which is seemingly pertinent for prognostication.

  14. A tool for study of optimal decision trees

    KAUST Repository

    Alkhalid, Abdulaziz

    2010-01-01

    The paper describes a tool which allows us for relatively small decision tables to make consecutive optimization of decision trees relative to various complexity measures such as number of nodes, average depth, and depth, and to find parameters and the number of optimal decision trees. © 2010 Springer-Verlag Berlin Heidelberg.

  15. The Decision Tree: A Tool for Achieving Behavioral Change.

    Science.gov (United States)

    Saren, Dru

    1999-01-01

    Presents a "Decision Tree" process for structuring team decision making and problem solving about specific student behavioral goals. The Decision Tree involves a sequence of questions/decisions that can be answered in "yes/no" terms. Questions address reasonableness of the goal, time factors, importance of the goal, responsibilities, safety,…

  16. 15 CFR Supplement 1 to Part 732 - Decision Tree

    Science.gov (United States)

    2010-01-01

    ... 15 Commerce and Foreign Trade 2 2010-01-01 2010-01-01 false Decision Tree 1 Supplement 1 to Part 732 Commerce and Foreign Trade Regulations Relating to Commerce and Foreign Trade (Continued) BUREAU... THE EAR Pt. 732, Supp. 1 Supplement 1 to Part 732—Decision Tree ER06FE04.000 [69 FR 5687, Feb. 6, 2004] ...

  17. Greedy algorithm with weights for decision tree construction

    KAUST Repository

    Moshkov, Mikhail

    2010-12-01

    An approximate algorithm for minimization of weighted depth of decision trees is considered. A bound on accuracy of this algorithm is obtained which is unimprovable in general case. Under some natural assumptions on the class NP, the considered algorithm is close (from the point of view of accuracy) to best polynomial approximate algorithms for minimization of weighted depth of decision trees.

  18. Greedy algorithm with weights for decision tree construction

    KAUST Repository

    Moshkov, Mikhail

    2010-01-01

    An approximate algorithm for minimization of weighted depth of decision trees is considered. A bound on accuracy of this algorithm is obtained which is unimprovable in general case. Under some natural assumptions on the class NP, the considered algorithm is close (from the point of view of accuracy) to best polynomial approximate algorithms for minimization of weighted depth of decision trees.

  19. Decision trees with minimum average depth for sorting eight elements

    KAUST Repository

    AbouEisha, Hassan M.; Chikalov, Igor; Moshkov, Mikhail

    2015-01-01

    We prove that the minimum average depth of a decision tree for sorting 8 pairwise different elements is equal to 620160/8!. We show also that each decision tree for sorting 8 elements, which has minimum average depth (the number of such trees

  20. In silico prediction of toxicity of phenols to Tetrahymena pyriformis by using genetic algorithm and decision tree-based modeling approach.

    Science.gov (United States)

    Abbasitabar, Fatemeh; Zare-Shahabadi, Vahid

    2017-04-01

    Risk assessment of chemicals is an important issue in environmental protection; however, there is a huge lack of experimental data for a large number of end-points. The experimental determination of the toxicity of chemicals involves high costs and a time-consuming process. In silico tools such as quantitative structure-toxicity relationship (QSTR) models, which are constructed on the basis of computational molecular descriptors, can predict missing data for toxic end-points for existing or even not yet synthesized chemicals. Phenol derivatives are known to be aquatic pollutants. With this background, we aimed to develop an accurate and reliable QSTR model for the prediction of the toxicity of 206 phenols to Tetrahymena pyriformis. A multiple linear regression (MLR)-based QSTR was obtained using a powerful descriptor selection tool named the Memorized_ACO algorithm. Statistical parameters of the model were R² = 0.72 for the training set and R² = 0.68 for the test set. To develop a higher-quality QSTR model, classification and regression trees (CART) were employed. Two approaches were considered: (1) the phenols were classified into different modes of action using CART, and (2) the phenols in the training set were partitioned into several subsets by a tree in such a manner that in each subset a high-quality MLR could be developed. For the first approach, the statistical parameters of the resulting QSTR model improved to R² = 0.83 (training) and R² = 0.75 (test). A genetic algorithm was employed in the second approach to obtain an optimal tree, and it was shown that the final QSTR model provided excellent prediction accuracy for the training and test sets (R² of 0.91 and 0.93, respectively). The mean absolute error for the test set was computed as 0.1615. Copyright © 2016 Elsevier Ltd. All rights reserved.
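    The baseline step of such a QSTR workflow, fitting a multiple linear regression on selected descriptors and reporting R² for training and test sets, can be sketched as below. The descriptor table, column names and train/test split are assumptions; the ant-colony and genetic-algorithm selection steps of the paper are not reproduced.

```python
# Sketch of the baseline QSTR step only: fit a multiple linear regression on
# selected molecular descriptors and report R2 for training and test sets.
# File and descriptor names are placeholders; descriptor selection is omitted.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression

data = pd.read_csv("phenols_descriptors.csv")      # hypothetical descriptor table
X = data[["logP", "pKa", "Elumo", "Ehomo"]]         # placeholder descriptors
y = data["toxicity_endpoint"]                       # placeholder end-point column

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=1)
mlr = LinearRegression().fit(X_tr, y_tr)
print(f"R2 training = {mlr.score(X_tr, y_tr):.2f}, "
      f"R2 test = {mlr.score(X_te, y_te):.2f}")
```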

  1. Open Source GIS based integrated watershed management

    Science.gov (United States)

    Byrne, J. M.; Lindsay, J.; Berg, A. A.

    2013-12-01

    Optimal land and water management to address future and current resource stresses and allocation challenges requires the development of state-of-the-art geomatics and hydrological modelling tools. Future hydrological modelling tools should be high resolution and process based, with real-time capability to assess changing resource issues critical to short-, medium- and long-term environmental management. The objective here is to merge two renowned, well published resource modeling programs to create an open-source toolbox for integrated land and water management applications. This work will facilitate a much increased efficiency in land and water resource security, management and planning. Following an 'open-source' philosophy, the tools will be computer platform independent with source code freely available, maximizing knowledge transfer and the global value of the proposed research. The envisioned set of water resource management tools will be housed within 'Whitebox Geospatial Analysis Tools'. Whitebox is an open-source geographical information system (GIS) developed by Dr. John Lindsay at the University of Guelph. The emphasis of the Whitebox project has been to develop a user-friendly interface for advanced spatial analysis in environmental applications. The plugin architecture of the software is ideal for the tight integration of spatially distributed models and spatial analysis algorithms such as those contained within the GENESYS suite. Open-source development extends knowledge and technology transfer to a broad range of end-users and builds Canadian capability to address complex resource management problems with better tools and expertise for managers in Canada and around the world. GENESYS (Generate Earth Systems Science input) is an innovative, efficient, high-resolution hydro- and agro-meteorological model for complex terrain watersheds developed under the direction of Dr. James Byrne. GENESYS is an outstanding research and applications tool to address

  2. An Efficient Method of Vibration Diagnostics For Rotating Machinery Using a Decision Tree

    Directory of Open Access Journals (Sweden)

    Bo Suk Yang

    2000-01-01

    Full Text Available This paper describes an efficient method to automate vibration diagnosis for rotating machinery using a decision tree, which is applicable to a vibration diagnosis expert system. The decision tree is a widely known formalism for expressing classification knowledge and has been used successfully in many diverse areas such as character recognition, medical diagnosis, and expert systems. In order to build a decision tree for vibration diagnosis, we have to define classes and attributes. A set of cases based on past experience is also needed. This training set is induced using a result-cause matrix newly developed in the present work, instead of the conventionally implemented cause-result matrix. The method was applied to diagnostics for various cases taken from published work. It is found that the present method predicts the causes of abnormal vibration for test cases with high reliability.

  3. Boundary expansion algorithm of a decision tree induction for an imbalanced dataset

    Directory of Open Access Journals (Sweden)

    Kesinee Boonchuay

    2017-10-01

    Full Text Available A decision tree is one of the well-known classifiers based on a recursive partitioning algorithm. This paper introduces the Boundary Expansion Algorithm (BEA) to improve decision tree induction on an imbalanced dataset. BEA utilizes all attributes to define non-splittable ranges. The computed means of all attributes for the minority instances are used to find the nearest minority instance, which is then expanded along all attributes to cover a minority region. As a result, BEA can successfully cope with an imbalanced dataset compared with C4.5, Gini, asymmetric entropy, top-down tree, and Hellinger distance decision trees on 25 imbalanced datasets from the UCI Repository.
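    BEA itself is not reproduced here. For context, a common baseline against which such imbalance-aware tree inductions are judged is an ordinary CART tree with balanced class weights; the sketch below shows only that baseline, on synthetic imbalanced data standing in for a UCI table.

```python
# Not the BEA algorithm itself: a common baseline for an imbalanced dataset,
# a CART tree with balanced class weights, shown only for comparison.
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

# synthetic imbalanced data (about 5 % minority class) standing in for a UCI table
X, y = make_classification(n_samples=2000, weights=[0.95], random_state=0)

plain    = DecisionTreeClassifier(random_state=0)
weighted = DecisionTreeClassifier(class_weight="balanced", random_state=0)
for name, clf in [("plain CART", plain), ("class-weighted CART", weighted)]:
    f1 = cross_val_score(clf, X, y, cv=5, scoring="f1").mean()
    print(f"{name}: minority-class F1 = {f1:.2f}")
```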

  4. IND - THE IND DECISION TREE PACKAGE

    Science.gov (United States)

    Buntine, W.

    1994-01-01

    A common approach to supervised classification and prediction in artificial intelligence and statistical pattern recognition is the use of decision trees. A tree is "grown" from data using a recursive partitioning algorithm to create a tree which has good prediction of classes on new data. Standard algorithms are CART (by Breiman Friedman, Olshen and Stone) and ID3 and its successor C4 (by Quinlan). As well as reimplementing parts of these algorithms and offering experimental control suites, IND also introduces Bayesian and MML methods and more sophisticated search in growing trees. These produce more accurate class probability estimates that are important in applications like diagnosis. IND is applicable to most data sets consisting of independent instances, each described by a fixed length vector of attribute values. An attribute value may be a number, one of a set of attribute specific symbols, or it may be omitted. One of the attributes is delegated the "target" and IND grows trees to predict the target. Prediction can then be done on new data or the decision tree printed out for inspection. IND provides a range of features and styles with convenience for the casual user as well as fine-tuning for the advanced user or those interested in research. IND can be operated in a CART-like mode (but without regression trees, surrogate splits or multivariate splits), and in a mode like the early version of C4. Advanced features allow more extensive search, interactive control and display of tree growing, and Bayesian and MML algorithms for tree pruning and smoothing. These often produce more accurate class probability estimates at the leaves. IND also comes with a comprehensive experimental control suite. IND consists of four basic kinds of routines: data manipulation routines, tree generation routines, tree testing routines, and tree display routines. The data manipulation routines are used to partition a single large data set into smaller training and test sets. The

  5. Influence diagrams and decision trees for severe accident management

    Energy Technology Data Exchange (ETDEWEB)

    Goetz, W.W.J.

    1996-09-01

    A review of relevant methodologies based on Influence Diagrams (IDs), Decision Trees (DTs), and Containment Event Trees (CETs) was conducted to assess the practicality of these methods for the selection of effective strategies for Severe Accident Management (SAM). The review included an evaluation of some software packages for these methods. The emphasis was on possible pitfalls of using IDs and on practical aspects, the latter by performance of a case study that was based on an existing Level 2 Probabilistic Safety Assessment (PSA). The study showed that the use of a combined ID/DT model has advantages over CET models, in particular when conservatisms in the Level 2 PSA have been identified and replaced by fair assessments of the uncertainties involved. It is recommended to use ID/DT models complementary to CET models. (orig.).

  6. Influence diagrams and decision trees for severe accident management

    International Nuclear Information System (INIS)

    Goetz, W.W.J.; Seebregts, A.J.; Bedford, T.J.

    1996-08-01

    A review of relevant methodologies based on Influence Diagrams (IDs), Decision Trees (DTs), and Containment Event Trees (CETs) was conducted to assess the practicality of these methods for the selection of effective strategies for Severe Accident Management (SAM). The review included an evaluation of some software packages for these methods. The emphasis was on possible pitfalls of using IDs and on practical aspects, the latter by performance of a case study that was based on an existing Level 2 Probabilistic Safety Assessment (PSA). The study showed that the use of a combined ID/DT model has advantages over CET models, in particular when conservatisms in the Level 2 PSA have been identified and replaced by fair assessments of the uncertainties involved. It is recommended to use ID/DT models as complementary to CET models. (orig.)

  7. Influence diagrams and decision trees for severe accident management

    International Nuclear Information System (INIS)

    Goetz, W.W.J.

    1996-09-01

    A review of relevant methodologies based on Influence Diagrams (IDs), Decision Trees (DTs), and Containment Event Trees (CETs) was conducted to assess the practicality of these methods for the selection of effective strategies for Severe Accident Management (SAM). The review included an evaluation of some software packages for these methods. The emphasis was on possible pitfalls of using IDs and on practical aspects, the latter by performance of a case study that was based on an existing Level 2 Probabilistic Safety Assessment (PSA). The study showed that the use of a combined ID/DT model has advantages over CET models, in particular when conservatisms in the Level 2 PSA have been identified and replaced by fair assessments of the uncertainties involved. It is recommended to use ID/DT models complementary to CET models. (orig.)

  8. Evaluation of AirGIS: a GIS-based air pollution and human exposure modelling system

    DEFF Research Database (Denmark)

    Ketzel, Matthias; Berkowicz, Ruwim; Hvidberg, Martin

    2011-01-01

    This study describes in brief the latest extensions of the Danish Geographic Information System (GIS)-based air pollution and human exposure modelling system (AirGIS), which has been developed in Denmark since 2001, and gives results of an evaluation against measured air pollution data. The system shows, in general, a good performance for long-term averages (annual and monthly), short-term averages (hourly and daily), and for reproducing the spatial variation in air pollution concentrations. Some shortcomings and future perspectives of the system are also discussed.

  9. Oblique decision trees using embedded support vector machines in classifier ensembles

    NARCIS (Netherlands)

    Menkovski, V.; Christou, I.; Efremidis, S.

    2008-01-01

    Classifier ensembles have emerged in recent years as a promising research area for boosting pattern recognition systems' performance. We present a new base classifier that utilizes oblique decision tree technology based on support vector machines for the construction of oblique (non-axis parallel)

  10. Ship Engine Room Casualty Analysis by Using Decision Tree Method

    Directory of Open Access Journals (Sweden)

    Ömür Yaşar SAATÇİOĞLU

    2017-03-01

    Full Text Available Ships may encounter undesirable conditions during operations. In consequence of a casualty, fire, explosion, flooding, grounding, injury or even death may occur. However, these outcomes can be avoided with precautions and preventive operating processes. In maritime transportation, casualties depend on various factors, listed as misuse of engine equipment and tools, defective machinery or equipment, inadequate operational procedures and safety measures, and force majeure effects. Casualty reports published in Australia, New Zealand, the United Kingdom, Canada and the United States until 2015 were examined, and the probable causes and consequences of casualties were determined with their occurrence percentages. In this study, 89 marine investigation reports regarding engine room casualties were analyzed. Casualty factors were analyzed with their frequency percentages, and their main causes were also identified. This study aims to investigate engine room based casualties, the frequency of each casualty type and the main causes by using the decision tree method.

  11. Relationships among various parameters for decision tree optimization

    KAUST Repository

    Hussain, Shahid

    2014-01-14

    In this chapter, we study, in detail, the relationships between various pairs of cost functions and between uncertainty measures and cost functions for decision tree optimization. We provide new tools (algorithms) to compute relationship functions, as well as experimental results on decision tables acquired from the UCI ML Repository. The algorithms presented in this paper have already been implemented and are now a part of Dagger, which is a software system for construction/optimization of decision trees and decision rules. The main results presented in this chapter deal with two types of algorithms for computing relationships: first, we discuss the case where we construct approximate decision trees and are interested in relationships between a certain cost function, such as depth or number of nodes of a decision tree, and an uncertainty measure, such as the misclassification error (accuracy) of the decision tree. Secondly, relationships between two different cost functions are discussed, for example, the number of misclassifications of a decision tree versus the number of nodes in the decision tree. The results of the experiments presented in the chapter provide further insight. © 2014 Springer International Publishing Switzerland.

  12. Relationships among various parameters for decision tree optimization

    KAUST Repository

    Hussain, Shahid

    2014-01-01

    In this chapter, we study, in detail, the relationships between various pairs of cost functions and between uncertainty measures and cost functions for decision tree optimization. We provide new tools (algorithms) to compute relationship functions, as well as experimental results on decision tables acquired from the UCI ML Repository. The algorithms presented in this paper have already been implemented and are now a part of Dagger, which is a software system for construction/optimization of decision trees and decision rules. The main results presented in this chapter deal with two types of algorithms for computing relationships: first, we discuss the case where we construct approximate decision trees and are interested in relationships between a certain cost function, such as depth or number of nodes of a decision tree, and an uncertainty measure, such as the misclassification error (accuracy) of the decision tree. Secondly, relationships between two different cost functions are discussed, for example, the number of misclassifications of a decision tree versus the number of nodes in the decision tree. The results of the experiments presented in the chapter provide further insight. © 2014 Springer International Publishing Switzerland.

  13. Data acquisition in modeling using neural networks and decision trees

    Directory of Open Access Journals (Sweden)

    R. Sika

    2011-04-01

    Full Text Available The paper presents a comparison of selected models from the areas of artificial neural networks and decision trees in relation to the actual conditions of foundry processes. The work contains short descriptions of the algorithms used, their purpose and the method of data preparation, which is the domain of Data Mining systems. The first part concerns data acquisition carried out in a selected iron foundry, indicating the problems to be solved in the context of casting process modeling. The second part is a comparison of selected algorithms: a decision tree and an artificial neural network, that is, the CART (Classification And Regression Trees) and BP (Backpropagation) in MLP (Multilayer Perceptron) network algorithms. The aim of the paper is to show the aspects of selecting data for modeling, cleaning it and reducing it, for example due to too strong a correlation between some of the recorded process parameters. It has also been shown what results can be obtained using two different approaches: first, modeling using available commercial software such as Statistica; second, modeling step by step in an Excel spreadsheet based on the same algorithm, BP-MLP. The discrepancy between the results obtained from these two approaches originates from assumptions made a priori. The Statistica universal software package mentioned earlier, when used without awareness of the relations between technological parameters, i.e. without the user having experience in foundry practice and without ranking particular parameters based on the acquisition, cannot give a credible basis to predict the quality of the castings. Also, a decisive influence of the data acquisition method has been clearly indicated; the acquisition should be conducted according to repeatable measurement and control procedures. This paper is based on about 250 records of actual data, for one assortment over a 6-month period, of which only 12 data sets were complete (including two that were used for validation of the neural network) and useful for creating a model. It is definitely too

  14. TVA GIS-based biomass resource assessment

    Energy Technology Data Exchange (ETDEWEB)

    Noon, C.E. [Univ. of Tennessee, Knoxville, TN (United States)

    1993-12-31

    The focus of this paper is a computer-based system for estimating the costs of supplying wood fuel. The system is being developed for the Tennessee Valley Authority and is referred to as the Biomass Resource Assessment Version One (BRAVO) system. The main objective in developing the BRAVO system is to assist TVA in estimating the costs for supplying wood fuel to any one of its twelve coal-fired plants. The BRAVO system is developed within a Geographic Information System (GIS) platform and is designed to allow a user to perform "what if" analyses related to the costs of wood fuel supply. Three types of wood fuel are considered in the BRAVO system: mill residues, logging residues and short-rotation woody crops (SRWC). Each type of wood fuel has unique economic and supply characteristics. The input data for the system include the specific locations, amounts, and prices of the various types of wood fuel throughout the TVA region. The system input is completed by data on political boundaries, power plant locations, road networks and a model for estimating transportation costs as a function of distance. The result is a comprehensive system which includes information on all possible wood fuel supply points, demand points and product movement costs. In addition, the BRAVO system has been designed to allow a user to perform sensitivity analysis on a variety of supply system parameters. This will enable TVA to thoroughly investigate the financial impacts of issues such as increased competition for wood fuel, environmental policies, fuel taxes, and regional economic cycles.

  15. TVA GIS-based biomass resource assessment

    International Nuclear Information System (INIS)

    Noon, C.E.

    1993-01-01

    The focus of this paper is a computer-based system for estimating the costs of supplying wood fuel. The system is being developed for the Tennessee Valley Authority and is referred to as the Biomass Resource Assessment Version One (BRAVO) system. The main objective in developing the BRAVO system is to assist TVA in estimating the cost for supplying wood fuel to any one of its twelve coal-fired power plants. The BRAVO system is developed within a Geographic Information System (GIS) platform and is designed to allow a user to perform "what if" analyses related to the costs of wood fuel supply. Three types of wood fuel are considered in the BRAVO system: mill residues, logging residues and short-rotation woody crops (SRWC). Each type of wood fuel has unique economic and supply characteristics. The input data for the system include the specific locations, amounts, and prices of the various types of wood fuel throughout the TVA region. The system input is completed by data on political boundaries, power plant locations, road networks and a model for estimating transportation costs as a function of distance. The result is a comprehensive system which includes information on all possible wood fuel supply points, demand points and product movement costs. In addition, the BRAVO system has been designed to allow a user to perform sensitivity analysis on a variety of supply system parameters. This will enable TVA to thoroughly investigate the financial impacts of issues such as increased competition for wood fuel, environmental policies, fuel taxes, and regional economic cycles.

  16. Which Types of Leadership Styles Do Followers Prefer? A Decision Tree Approach

    Science.gov (United States)

    Salehzadeh, Reza

    2017-01-01

    Purpose: The purpose of this paper is to propose a new method to find the appropriate leadership styles based on the followers' preferences using the decision tree technique. Design/methodology/approach: Statistical population includes the students of the University of Isfahan. In total, 750 questionnaires were distributed; out of which, 680…

  17. A decision tree approach using silvics to guide planning for forest restoration

    Science.gov (United States)

    Sharon M. Hermann; John S. Kush; John C. Gilbert

    2013-01-01

    We created a decision tree based on silvics of longleaf pine (Pinus palustris) and historical descriptions to develop approaches for restoration management at Horseshoe Bend National Military Park located in central Alabama. A National Park Service goal is to promote structure and composition of a forest that likely surrounded the 1814 battlefield....

  18. Assessing School Readiness for a Practice Arrangement Using Decision Tree Methodology.

    Science.gov (United States)

    Barger, Sara E.

    1998-01-01

    Questions in a decision-tree address mission, faculty interest, administrative support, and practice plan as a way of assessing arrangements for nursing faculty's clinical practice. Decisions should be based on congruence between the human resource allocation and the reward systems. (SK)

  19. Cloud Detection from Satellite Imagery: A Comparison of Expert-Generated and Automatically-Generated Decision Trees

    Science.gov (United States)

    Shiffman, Smadar

    2004-01-01

    Automated cloud detection and tracking is an important step in assessing global climate change via remote sensing. Cloud masks, which indicate whether individual pixels depict clouds, are included in many of the data products that are based on data acquired on board earth satellites. Many cloud-mask algorithms have the form of decision trees, which employ sequential tests that scientists designed based on empirical astrophysics studies and astrophysics simulations. Limitations of existing cloud masks restrict our ability to accurately track changes in cloud patterns over time. In this study we explored the potential benefits of automatically-learned decision trees for detecting clouds from images acquired using the Advanced Very High Resolution Radiometer (AVHRR) instrument on board the NOAA-14 weather satellite of the National Oceanic and Atmospheric Administration. We constructed three decision trees for a sample of 8km-daily AVHRR data from 2000 using a decision-tree learning procedure provided within MATLAB(R), and compared the accuracy of the decision trees to the accuracy of the cloud mask. We used ground observations collected by the National Aeronautics and Space Administration Clouds and the Earth's Radiant Energy System S'COOL project as the gold standard. For the sample data, the accuracy of automatically learned decision trees was greater than the accuracy of the cloud masks included in the AVHRR data product.

  20. Building of fuzzy decision trees using ID3 algorithm

    Science.gov (United States)

    Begenova, S. B.; Avdeenko, T. V.

    2018-05-01

    Decision trees are widely used in the fields of machine learning and artificial intelligence. Their popularity is due to the fact that decision trees yield graphic models and text rules that are easily understood by the final user. Because of the inaccuracy of observations and other uncertainties, data collected in the environment often take an unclear form. Therefore, fuzzy decision trees are becoming popular in the field of machine learning. This article presents a method that combines the features of the two above-mentioned approaches: a graphical representation of the rule system in the form of a tree and a fuzzy representation of the data. The approach exploits advantages such as the high comprehensibility of decision trees and the ability to cope with inaccurate and uncertain information in the fuzzy representation. The resulting learning method is suitable for classification problems with both numerical and symbolic features. Solution illustrations and numerical results are given in the article.
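    The crisp ID3 core on which such fuzzy extensions build is the entropy-based information-gain split criterion. The sketch below shows that computation on a tiny made-up table; the fuzzification of memberships discussed in the article is not included.

```python
# Crisp ID3 ingredient that the fuzzy variant builds on: choosing the split
# attribute by information gain.  Data and attribute names are made up.
from collections import Counter
from math import log2

def entropy(labels):
    n = len(labels)
    return -sum((c / n) * log2(c / n) for c in Counter(labels).values())

def information_gain(rows, labels, attr):
    total = entropy(labels)
    n = len(rows)
    for value in set(r[attr] for r in rows):
        idx = [i for i, r in enumerate(rows) if r[attr] == value]
        total -= len(idx) / n * entropy([labels[i] for i in idx])
    return total

rows = [{"outlook": "sunny"}, {"outlook": "rain"}, {"outlook": "sunny"}]
labels = ["no", "yes", "no"]
print(information_gain(rows, labels, "outlook"))   # about 0.918 for this toy data
```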

  1. Automated Sleep Stage Scoring by Decision Tree Learning

    National Research Council Canada - National Science Library

    Hanaoka, Masaaki

    2001-01-01

    In this paper we describe a waveform recognition method that extracts characteristic parameters from wave- forms and a method of automated sleep stage scoring using decision tree learning that is in...

  2. Decision tree approach for classification of remotely sensed satellite ...

    Indian Academy of Sciences (India)

    sensed satellite data using open source support. Richa Sharma .... Decision tree classification techniques have been .... the USGS Earth Resource Observation Systems (EROS) ... for shallow water, 11% were for sparse and dense built-up ...

  3. Decision tree approach for classification of remotely sensed satellite

    Indian Academy of Sciences (India)

    ... (DTC) algorithm for classification of remotely sensed satellite data (Landsat TM) using open source support. The decision tree is constructed by recursively partitioning the spectral distribution of the training dataset using WEKA, open source ...

  4. An automated approach to the design of decision tree classifiers

    Science.gov (United States)

    Argentiero, P.; Chin, R.; Beaudet, P.

    1982-01-01

    An automated technique is presented for designing effective decision tree classifiers predicated only on a priori class statistics. The procedure relies on linear feature extractions and Bayes table look-up decision rules. Associated error matrices are computed and utilized to provide an optimal design of the decision tree at each so-called 'node'. A by-product of this procedure is a simple algorithm for computing the global probability of correct classification assuming the statistical independence of the decision rules. Attention is given to a more precise definition of decision tree classification, the mathematical details on the technique for automated decision tree design, and an example of a simple application of the procedure using class statistics acquired from an actual Landsat scene.
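    The by-product mentioned above, a global probability of correct classification under the assumption that the node decision rules are statistically independent, amounts to multiplying the per-node probabilities of a correct decision along each root-to-leaf path and weighting by the class priors. The sketch below shows one simple reading of that computation; all class names and probabilities are made up for illustration.

```python
# One simple reading of the "global probability of correct classification
# assuming independence of the node decision rules": for each class, multiply
# the per-node probabilities of taking the correct branch along its path and
# weight by the class prior.  All numbers below are made up.
priors = {"water": 0.2, "forest": 0.5, "urban": 0.3}

# per-node probability of a correct decision on the path leading to each class
path_node_accuracies = {
    "water":  [0.98],              # separated at the first node
    "forest": [0.98, 0.90],        # two nodes on its path
    "urban":  [0.98, 0.90, 0.85],  # three nodes on its path
}

p_correct = 0.0
for cls, prior in priors.items():
    p_path = 1.0
    for p_node in path_node_accuracies[cls]:
        p_path *= p_node
    p_correct += prior * p_path
print(f"global probability of correct classification ~ {p_correct:.3f}")
```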

  5. A tool for study of optimal decision trees

    KAUST Repository

    Alkhalid, Abdulaziz; Chikalov, Igor; Moshkov, Mikhail

    2010-01-01

    The paper describes a tool which allows us for relatively small decision tables to make consecutive optimization of decision trees relative to various complexity measures such as number of nodes, average depth, and depth, and to find parameters

  6. Decision-Tree Formulation With Order-1 Lateral Execution

    Science.gov (United States)

    James, Mark

    2007-01-01

    A compact symbolic formulation enables mapping of an arbitrarily complex decision tree of a certain type into a highly computationally efficient multidimensional software object. The type of decision trees to which this formulation applies is that known in the art as the Boolean class of balanced decision trees. Parallel lateral slices of an object created by means of this formulation can be executed in constant time, considerably less time than would otherwise be required. Decision trees of various forms are incorporated into almost all large software systems. A decision tree is a way of hierarchically solving a problem, proceeding through a set of true/false responses to a conclusion. By definition, a decision tree has a tree-like structure, wherein each internal node denotes a test on an attribute, each branch from an internal node represents an outcome of a test, and leaf nodes represent classes or class distributions that, in turn, represent possible conclusions. The drawback of decision trees is that executing them can be computationally expensive (and, hence, time-consuming) because each non-leaf node must be examined to determine whether to progress deeper into a tree structure or to examine an alternative. The present formulation was conceived as an efficient means of representing a decision tree and executing it in as little time as possible. The formulation involves the use of a set of symbolic algorithms to transform a decision tree into a multi-dimensional object, the rank of which equals the number of lateral non-leaf nodes. The tree can then be executed in constant time by means of an order-one table lookup. The sequence of operations performed by the algorithms is summarized as follows: 1. Determination of whether the tree under consideration can be encoded by means of this formulation. 2. Extraction of decision variables. 3. Symbolic optimization of the decision tree to minimize its form. 4. Expansion and transformation of all nested conjunctive
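    The core idea, evaluating all boolean tests up front and using the resulting bit pattern as an index into a precomputed table so that execution no longer traverses the tree, can be illustrated as below. This is only a sketch of the concept with hypothetical tests and labels, not the cited symbolic formulation.

```python
# Illustration of the idea only (not the cited symbolic formulation): evaluate
# every boolean test once, pack the results into an index, and look up the
# conclusion in a table of size 2**n_tests, giving order-one execution.
from itertools import product

tests = [
    lambda x: x["temp"] > 30,        # hypothetical tests over an input record
    lambda x: x["humidity"] > 0.7,
    lambda x: x["wind"] > 10,
]

def tree_conclusion(bits):
    # the original (slow) balanced decision tree, used once to fill the table
    t, h, w = bits
    if t:
        return "storm" if h else "hot"
    return "rain" if (h and w) else "mild"

# precompute the lookup table over all 2**3 outcomes of the tests
table = {bits: tree_conclusion(bits) for bits in product((False, True), repeat=3)}

def classify(x):
    bits = tuple(test(x) for test in tests)   # no tree traversal at run time
    return table[bits]

print(classify({"temp": 33, "humidity": 0.8, "wind": 5}))   # -> "storm"
```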

  7. TreePOD: Sensitivity-Aware Selection of Pareto-Optimal Decision Trees.

    Science.gov (United States)

    Muhlbacher, Thomas; Linhardt, Lorenz; Moller, Torsten; Piringer, Harald

    2018-01-01

    Balancing accuracy gains with other objectives such as interpretability is a key challenge when building decision trees. However, this process is difficult to automate because it involves know-how about the domain as well as the purpose of the model. This paper presents TreePOD, a new approach for sensitivity-aware model selection along trade-offs. TreePOD is based on exploring a large set of candidate trees generated by sampling the parameters of tree construction algorithms. Based on this set, visualizations of quantitative and qualitative tree aspects provide a comprehensive overview of possible tree characteristics. Along trade-offs between two objectives, TreePOD provides efficient selection guidance by focusing on Pareto-optimal tree candidates. TreePOD also conveys the sensitivities of tree characteristics on variations of selected parameters by extending the tree generation process with a full-factorial sampling. We demonstrate how TreePOD supports a variety of tasks involved in decision tree selection and describe its integration in a holistic workflow for building and selecting decision trees. For evaluation, we illustrate a case study for predicting critical power grid states, and we report qualitative feedback from domain experts in the energy sector. This feedback suggests that TreePOD enables users with and without statistical background a confident and efficient identification of suitable decision trees.

  8. Identifying Bank Frauds Using CRISP-DM and Decision Trees

    OpenAIRE

    Bruno Carneiro da Rocha; Rafael Timóteo de Sousa Júnior

    2010-01-01

    This article aims to evaluate the use of techniques of decision trees, in conjunction with the management model CRISP-DM, to help in the prevention of bank fraud. This article offers a study on decision trees, an important concept in the field of artificial intelligence. The study is focused on discussing how these trees are able to assist in the decision making process of identifying frauds by the analysis of information regarding bank transactions. This information is captured with the use of t...

  9. Univariate decision tree induction using maximum margin classification

    OpenAIRE

    Yıldız, Olcay Taner

    2012-01-01

    In many pattern recognition applications, decision trees are used first due to their simplicity and easily interpretable nature. In this paper, we propose a new decision tree learning algorithm called the univariate margin tree where, for each continuous attribute, the best split is found using convex optimization. Our simulation results on 47 data sets show that the novel margin tree classifier performs at least as well as C4.5 and linear discriminant tree (LDT) with a similar time complexity. F...

  10. Classification of premalignant pancreatic cancer mass-spectrometry data using decision tree ensembles

    Directory of Open Access Journals (Sweden)

    Wong G William

    2008-06-01

    Full Text Available Abstract Background Pancreatic cancer is the fourth leading cause of cancer death in the United States. Consequently, identification of clinically relevant biomarkers for the early detection of this cancer type is urgently needed. In recent years, proteomics profiling techniques combined with various data analysis methods have been successfully used to gain critical insights into processes and mechanisms underlying pathologic conditions, particularly as they relate to cancer. However, the high dimensionality of proteomics data combined with their relatively small sample sizes poses a significant challenge to current data mining methodology where many of the standard methods cannot be applied directly. Here, we propose a novel machine learning framework in which decision tree based classifier ensembles, coupled with feature selection methods, are applied to proteomics data generated from premalignant pancreatic cancer. Results This study explores the utility of three different feature selection schemas (Student t test, Wilcoxon rank sum test and genetic algorithm) to reduce the high dimensionality of a pancreatic cancer proteomic dataset. Using the top features selected from each method, we compared the prediction performances of a single decision tree algorithm C4.5 with six different decision-tree based classifier ensembles (Random forest, Stacked generalization, Bagging, Adaboost, Logitboost and Multiboost). We show that ensemble classifiers always outperform a single decision tree classifier in having greater accuracies and smaller prediction errors when applied to a pancreatic cancer proteomics dataset. Conclusion In our cross validation framework, classifier ensembles generally have better classification accuracies compared to that of a single decision tree when applied to a pancreatic cancer proteomic dataset, thus suggesting their utility in future proteomics data analysis. Additionally, the use of feature selection
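
    The workflow described above, univariate feature selection followed by a comparison of a single tree against a tree ensemble, can be sketched with scikit-learn as follows; the synthetic data and all parameter values are placeholders, not those of the original study.

```python
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for a "many features, few samples" proteomics matrix.
X, y = make_classification(n_samples=120, n_features=2000,
                           n_informative=20, random_state=0)

for name, clf in [("single tree", DecisionTreeClassifier(random_state=0)),
                  ("random forest", RandomForestClassifier(n_estimators=200,
                                                           random_state=0))]:
    # Keep the 50 highest-scoring features (analogous to a t-test filter),
    # then fit the classifier; 5-fold cross-validation estimates accuracy.
    pipe = make_pipeline(SelectKBest(f_classif, k=50), clf)
    scores = cross_val_score(pipe, X, y, cv=5)
    print(f"{name}: mean accuracy {scores.mean():.3f}")
```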

  11. Research on Livable Community Evaluation Based on GIS

    Science.gov (United States)

    Yin, Zhangcai; Wu, Yang; Jin, Zhanghaonan; Zhang, Xu

    2018-01-01

    Community is the basic unit of the city. Research on livable communities could provide a bottom-up research path for the realization of the livable city. Livability is the total set of factors affecting the quality of community life. In this paper, livable community evaluation indexes are assessed based on GIS and the fuzzy comprehensive evaluation method. The sum-index and sub-indexes of community livability are then calculated, and a community livability evaluation index system is constructed on the GIS platform. This study provides theoretical support for the construction and management of livable communities, so as to guide the development and optimization of the city.

  12. Bounds on Average Time Complexity of Decision Trees

    KAUST Repository

    Chikalov, Igor

    2011-01-01

    In this chapter, bounds on the average depth and the average weighted depth of decision trees are considered. Similar problems are studied in search theory [1], coding theory [77], and design and analysis of algorithms (e.g., sorting) [38]. For any diagnostic problem, the minimum average depth of a decision tree is bounded from below by the entropy of the probability distribution (with a multiplier 1/log2 k for a problem over a k-valued information system). Among diagnostic problems, the problems with a complete set of attributes have the lowest minimum average depth of decision trees (e.g., the problem of building an optimal prefix code [1] and a blood test study under the assumption that exactly one patient is ill [23]). For such problems, the minimum average depth of a decision tree exceeds the lower bound by at most one. The minimum average depth reaches its maximum on the problems in which each attribute is "indispensable" [44] (e.g., a diagnostic problem with n attributes and k^n pairwise different rows in the decision table, and the problem of implementing the modulo 2 summation function). These problems have the minimum average depth of decision tree equal to the number of attributes in the problem description. © Springer-Verlag Berlin Heidelberg 2011.

  13. Bayesian averaging over Decision Tree models for trauma severity scoring.

    Science.gov (United States)

    Schetinin, V; Jakaite, L; Krzanowski, W

    2018-01-01

    Health care practitioners analyse possible risks of misleading decisions and need to estimate and quantify uncertainty in predictions. We have examined the "gold" standard of screening a patient's conditions for predicting survival probability, based on logistic regression modelling, which is used in trauma care for clinical purposes and quality audit. This methodology is based on theoretical assumptions about data and uncertainties. Models induced within such an approach have exposed a number of problems, providing unexplained fluctuation of predicted survival and low accuracy of estimating uncertainty intervals within which predictions are made. Bayesian method, which in theory is capable of providing accurate predictions and uncertainty estimates, has been adopted in our study using Decision Tree models. Our approach has been tested on a large set of patients registered in the US National Trauma Data Bank and has outperformed the standard method in terms of prediction accuracy, thereby providing practitioners with accurate estimates of the predictive posterior densities of interest that are required for making risk-aware decisions. Copyright © 2017 Elsevier B.V. All rights reserved.

  14. Application of alternating decision trees in selecting sparse linear solvers

    KAUST Repository

    Bhowmick, Sanjukta; Eijkhout, Victor; Freund, Yoav; Fuentes, Erika; Keyes, David E.

    2010-01-01

    The solution of sparse linear systems, a fundamental and resource-intensive task in scientific computing, can be approached through multiple algorithms. Using an algorithm well adapted to characteristics of the task can significantly enhance the performance, such as reducing the time required for the operation, without compromising the quality of the result. However, the best solution method can vary even across linear systems generated in course of the same PDE-based simulation, thereby making solver selection a very challenging problem. In this paper, we use a machine learning technique, Alternating Decision Trees (ADT), to select efficient solvers based on the properties of sparse linear systems and runtime-dependent features, such as the stages of simulation. We demonstrate the effectiveness of this method through empirical results over linear systems drawn from computational fluid dynamics and magnetohydrodynamics applications. The results also demonstrate that using ADT can resolve the problem of over-fitting, which occurs when limited amount of data is available. © 2010 Springer Science+Business Media LLC.

  15. Intracranial hypertension prediction using extremely randomized decision trees.

    Science.gov (United States)

    Scalzo, Fabien; Hamilton, Robert; Asgari, Shadnaz; Kim, Sunghan; Hu, Xiao

    2012-10-01

    Intracranial pressure (ICP) elevation (intracranial hypertension, IH) in neurocritical care is typically treated in a reactive fashion; it is only delivered after bedside clinicians notice prolonged ICP elevation. A proactive solution is desirable to improve the treatment of intracranial hypertension. Several studies have shown that the waveform morphology of the intracranial pressure pulse holds predictors about future intracranial hypertension and could therefore be used to alert the bedside clinician of a likely occurrence of the elevation in the immediate future. In this paper, a computational framework is proposed to predict prolonged intracranial hypertension based on morphological waveform features computed from the ICP. A key contribution of this work is to exploit an ensemble classifier method based on extremely randomized decision trees (Extra-Trees). Experiments on a representative set of 30 patients admitted for various intracranial pressure related conditions demonstrate the effectiveness of the predicting framework on ICP pulses acquired under clinical conditions and the superior results of the proposed approach in comparison to linear and AdaBoost classifiers. Copyright © 2011 IPEM. Published by Elsevier Ltd. All rights reserved.
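
    Extremely randomized trees are available off the shelf in scikit-learn; the sketch below is a generic illustration of training such an ensemble on waveform-style features, using synthetic data rather than ICP recordings.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import ExtraTreesClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

# Synthetic stand-in for per-pulse morphological features and a binary
# "hypertension episode follows" label with class imbalance.
X, y = make_classification(n_samples=1000, n_features=24,
                           weights=[0.8, 0.2], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# Extra-Trees draws split thresholds at random, which decorrelates the
# trees further than bagging alone.
clf = ExtraTreesClassifier(n_estimators=300, random_state=0)
clf.fit(X_tr, y_tr)
print("AUC:", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))
```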

  16. Multivariate analysis of flow cytometric data using decision trees.

    Science.gov (United States)

    Simon, Svenja; Guthke, Reinhard; Kamradt, Thomas; Frey, Oliver

    2012-01-01

    Characterization of the response of the host immune system is important in understanding the bidirectional interactions between the host and microbial pathogens. For research on the host side, flow cytometry has become one of the major tools in immunology. Advances in technology and reagents now allow the simultaneous assessment of multiple markers on a single cell level, generating multidimensional data sets that require multivariate statistical analysis. We explored the explanatory power of the supervised machine learning method called "induction of decision trees" in flow cytometric data. In order to examine whether the production of a certain cytokine is dependent on other cytokines, datasets from intracellular staining for six cytokines with complex patterns of co-expression were analyzed by induction of decision trees. After weighting the data according to their class probabilities, we created a total of 13,392 different decision trees for each given cytokine with different parameter settings. For a more realistic estimation of the decision trees' quality, we used stratified fivefold cross validation and chose the "best" tree according to a combination of different quality criteria. While some of the decision trees reflected previously known co-expression patterns, we found that the expression of some cytokines was not only dependent on the co-expression of others per se, but was also dependent on the intensity of expression. Thus, for the first time we successfully used induction of decision trees for the analysis of high dimensional flow cytometric data and demonstrated the feasibility of this method to reveal structural patterns in such data sets.

  17. ArborZ: PHOTOMETRIC REDSHIFTS USING BOOSTED DECISION TREES

    International Nuclear Information System (INIS)

    Gerdes, David W.; Sypniewski, Adam J.; McKay, Timothy A.; Hao, Jiangang; Weis, Matthew R.; Wechsler, Risa H.; Busha, Michael T.

    2010-01-01

    Precision photometric redshifts will be essential for extracting cosmological parameters from the next generation of wide-area imaging surveys. In this paper, we introduce a photometric redshift algorithm, ArborZ, based on the machine-learning technique of boosted decision trees. We study the algorithm using galaxies from the Sloan Digital Sky Survey (SDSS) and from mock catalogs intended to simulate both the SDSS and the upcoming Dark Energy Survey. We show that it improves upon the performance of existing algorithms. Moreover, the method naturally leads to the reconstruction of a full probability density function (PDF) for the photometric redshift of each galaxy, not merely a single 'best estimate' and error, and also provides a photo-z quality figure of merit for each galaxy that can be used to reject outliers. We show that the stacked PDFs yield a more accurate reconstruction of the redshift distribution N(z). We discuss limitations of the current algorithm and ideas for future work.
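
    A generic boosted-tree regression of redshift on broadband magnitudes can be sketched as follows; the synthetic data and scikit-learn gradient boosting are used here purely for illustration (ArborZ additionally reconstructs a full PDF per galaxy rather than a point estimate).

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Synthetic "galaxies": five band magnitudes loosely correlated with redshift.
z = rng.uniform(0.0, 1.5, size=5000)
mags = np.column_stack([20 + 2.5 * z + rng.normal(0, 0.3, z.size)
                        for _ in range(5)])

X_tr, X_te, z_tr, z_te = train_test_split(mags, z, random_state=0)
model = GradientBoostingRegressor(n_estimators=300, max_depth=3,
                                  learning_rate=0.05, random_state=0)
model.fit(X_tr, z_tr)
z_pred = model.predict(X_te)
print("photo-z scatter sigma(z_pred - z):", np.std(z_pred - z_te))
```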

  18. USING PRECEDENTS FOR REDUCTION OF DECISION TREE BY GRAPH SEARCH

    Directory of Open Access Journals (Sweden)

    I. A. Bessmertny

    2015-01-01

    Full Text Available The paper considers the problem of organizing mutual payments between business entities by means of clearing, which is solved by searching for paths in a graph. To reduce the complexity of the decision tree, a method of precedents is proposed that consists in saving intermediate solutions while moving along the decision tree. An algorithm and an example are presented, demonstrating solution complexity close to linear. Tests carried out in a civil aviation settlement system demonstrate an approximately 30 percent reduction in real money transfers. The proposed algorithm is also planned to be implemented in other clearing organizations of the Russian Federation.

  19. WATGIS: A GIS-Based Lumped Parameter Water Quality Model

    Science.gov (United States)

    Glenn P. Fernandez; George M. Chescheir; R. Wayne Skaggs; Devendra M. Amatya

    2002-01-01

    A Geographic Information System (GIS)-based, lumped parameter water quality model was developed to estimate the spatial and temporal nitrogen-loading patterns for lower coastal plain watersheds in eastern North Carolina. The model uses a spatially distributed delivery ratio (DR) parameter to account for nitrogen retention or loss along a drainage network. Delivery...

  20. GIS-based bivariate statistical techniques for groundwater potential ...

    Indian Academy of Sciences (India)

    Ali Haghizadeh

    2017-11-23

    Nov 23, 2017 ... This study shows the potency of two GIS-based data-driven bivariate techniques for groundwater potential mapping.

  1. GIS Teacher Training: Empirically-Based Indicators of Effectiveness

    Science.gov (United States)

    Höhnle, Steffen; Fögele, Janis; Mehren, Rainer; Schubert, Jan Christoph

    2016-01-01

    In spite of various actions, the implementation of GIS (geographic information systems) in German schools is still very low. In the presented research, teaching experts as well as teaching novices were presented with empirically based constraints for implementation stemming from an earlier survey. In the process of various group discussions, the…

  2. GIS-Based bivariate statistical techniques for groundwater potential ...

    Indian Academy of Sciences (India)

    24

    This study shows the potency of two GIS-based data-driven bivariate techniques, namely ... In view of these weaknesses, there is a strong requirement for reassessment of ... West Bengal (India) using remote sensing, geographical information system and multi-...

  3. Predicting metabolic syndrome using decision tree and support vector machine methods

    Directory of Open Access Journals (Sweden)

    Farzaneh Karimi-Alavijeh

    2016-06-01

    Full Text Available BACKGROUND: Metabolic syndrome, which underlies the increased prevalence of cardiovascular disease and Type 2 diabetes, is considered as a group of metabolic abnormalities including central obesity, hypertriglyceridemia, glucose intolerance, hypertension, and dyslipidemia. Recently, artificial intelligence based health-care systems are highly regarded because of their success in diagnosis, prediction, and choice of treatment. This study employs machine learning techniques to predict metabolic syndrome. METHODS: This study aims to employ decision tree and support vector machine (SVM) methods to predict the 7-year incidence of metabolic syndrome. This research is a practical one in which data from 2107 participants of the Isfahan Cohort Study have been utilized. The subjects without metabolic syndrome according to the ATPIII criteria were selected. The features that have been used in this data set include: gender, age, weight, body mass index, waist circumference, waist-to-hip ratio, hip circumference, physical activity, smoking, hypertension, antihypertensive medication use, systolic blood pressure (BP), diastolic BP, fasting blood sugar, 2-hour blood glucose, triglycerides (TGs), total cholesterol, low-density lipoprotein, high density lipoprotein-cholesterol, mean corpuscular volume, and mean corpuscular hemoglobin. Metabolic syndrome was diagnosed based on ATPIII criteria and the two methods of decision tree and SVM were selected to predict metabolic syndrome. The criteria of sensitivity, specificity and accuracy were used for validation. RESULTS: SVM and decision tree methods were examined according to the criteria of sensitivity, specificity and accuracy. Sensitivity, specificity and accuracy were 0.774 (0.758), 0.74 (0.72) and 0.757 (0.739) in the SVM (decision tree) method. CONCLUSION: The results show that the SVM method is more efficient than the decision tree in terms of sensitivity, specificity and accuracy. The results of the decision tree method show that the TG is the most

  4. Predicting metabolic syndrome using decision tree and support vector machine methods.

    Science.gov (United States)

    Karimi-Alavijeh, Farzaneh; Jalili, Saeed; Sadeghi, Masoumeh

    2016-05-01

    Metabolic syndrome, which underlies the increased prevalence of cardiovascular disease and Type 2 diabetes, is considered as a group of metabolic abnormalities including central obesity, hypertriglyceridemia, glucose intolerance, hypertension, and dyslipidemia. Recently, artificial intelligence based health-care systems are highly regarded because of their success in diagnosis, prediction, and choice of treatment. This study employs machine learning techniques to predict metabolic syndrome. This study aims to employ decision tree and support vector machine (SVM) methods to predict the 7-year incidence of metabolic syndrome. This research is a practical one in which data from 2107 participants of the Isfahan Cohort Study have been utilized. The subjects without metabolic syndrome according to the ATPIII criteria were selected. The features that have been used in this data set include: gender, age, weight, body mass index, waist circumference, waist-to-hip ratio, hip circumference, physical activity, smoking, hypertension, antihypertensive medication use, systolic blood pressure (BP), diastolic BP, fasting blood sugar, 2-hour blood glucose, triglycerides (TGs), total cholesterol, low-density lipoprotein, high density lipoprotein-cholesterol, mean corpuscular volume, and mean corpuscular hemoglobin. Metabolic syndrome was diagnosed based on ATPIII criteria and the two methods of decision tree and SVM were selected to predict metabolic syndrome. The criteria of sensitivity, specificity and accuracy were used for validation. SVM and decision tree methods were examined according to the criteria of sensitivity, specificity and accuracy. Sensitivity, specificity and accuracy were 0.774 (0.758), 0.74 (0.72) and 0.757 (0.739) in the SVM (decision tree) method. The results show that the SVM method is more efficient than the decision tree in terms of sensitivity, specificity and accuracy. The results of the decision tree method show that the TG is the most important feature in predicting metabolic syndrome. According
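
    The evaluation procedure reported in these two records (sensitivity, specificity and accuracy for an SVM versus a decision tree) can be reproduced in outline with scikit-learn; the data below are synthetic and the parameter choices illustrative, so this is only a sketch of the comparison, not the study's model.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC
from sklearn.metrics import confusion_matrix, accuracy_score

# Synthetic stand-in for 21 routine clinical features and a binary outcome.
X, y = make_classification(n_samples=2000, n_features=21, random_state=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=1)

for name, clf in [("decision tree", DecisionTreeClassifier(max_depth=5, random_state=1)),
                  ("SVM", SVC(kernel="rbf", C=1.0))]:
    y_hat = clf.fit(X_tr, y_tr).predict(X_te)
    tn, fp, fn, tp = confusion_matrix(y_te, y_hat).ravel()
    sensitivity = tp / (tp + fn)   # true positive rate
    specificity = tn / (tn + fp)   # true negative rate
    print(name, round(sensitivity, 3), round(specificity, 3),
          round(accuracy_score(y_te, y_hat), 3))
```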

  5. External validation of a decision tree early warning score using only laboratory data

    DEFF Research Database (Denmark)

    Holm Atkins, Tara E; Öhman, Malin C; Brabrand, Mikkel

    2018-01-01

    INTRODUCTION: Early warning scores (EWS) have been developed to identify the degree of illness severity among acutely ill patients. One system, The Laboratory Decision Tree Early Warning Score (LDT-EWS) is wholly laboratory data based. Laboratory data was used in the development of a rare...... computerized method, developing a decision tree analysis. This article externally validates LDT-EWS, which is obligatory for an EWS before clinical use. METHOD: We conducted a retrospective review of prospectively collected data based on a time limited sample of all patients admitted through the medical......) and calibration (precision) as Hosmer-Lemeshow Goodness of fit test. RESULTS: A total of 5858 patients were admitted and 4902 included (83.7%). In-hospital mortality in our final dataset (n=4902) was 3.5%. Discriminatory power (95% CI), identifying in-hospital death was 0.809 (0.777-0.842). Calibration was good...

  6. Minimizing the cost of translocation failure with decision-tree models that predict species' behavioral response in translocation sites.

    Science.gov (United States)

    Ebrahimi, Mehregan; Ebrahimie, Esmaeil; Bull, C Michael

    2015-08-01

    The high number of failures is one reason why translocation is often not recommended. Considering how behavior changes during translocations may improve translocation success. To derive decision-tree models for species' translocation, we used data on the short-term responses of an endangered Australian skink in 5 simulated translocations with different release conditions. We used 4 different decision-tree algorithms (decision tree, decision-tree parallel, decision stump, and random forest) with 4 different criteria (gain ratio, information gain, gini index, and accuracy) to investigate how environmental and behavioral parameters may affect the success of a translocation. We assumed behavioral changes that increased dispersal away from a release site would reduce translocation success. The trees became more complex when we included all behavioral parameters as attributes, but these trees yielded more detailed information about why and how dispersal occurred. According to these complex trees, there were positive associations between some behavioral parameters, such as fight and dispersal, that showed there was a higher chance, for example, of dispersal among lizards that fought than among those that did not fight. Decision trees based on parameters related to release conditions were easier to understand and could be used by managers to make translocation decisions under different circumstances. © 2015 Society for Conservation Biology.

  7. Efficient, reliable and fast high-level triggering using a bonsai boosted decision tree

    International Nuclear Information System (INIS)

    Gligorov, V V; Williams, M

    2013-01-01

    High-level triggering is a vital component of many modern particle physics experiments. This paper describes a modification to the standard boosted decision tree (BDT) classifier, the so-called bonsai BDT, that has the following important properties: it is more efficient than traditional cut-based approaches; it is robust against detector instabilities, and it is very fast. Thus, it is fit-for-purpose for the online running conditions faced by any large-scale data acquisition system.

  8. Hyper-parameter tuning of a decision tree induction algorithm

    NARCIS (Netherlands)

    Mantovani, R.G.; Horváth, T.; Cerri, R.; Vanschoren, J.; de Carvalho, A.C.P.L.F.

    2017-01-01

    Supervised classification is the most studied task in Machine Learning. Among the many algorithms used in such task, Decision Tree algorithms are a popular choice, since they are robust and efficient to construct. Moreover, they have the advantage of producing comprehensible models and satisfactory
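
    In the spirit of the study above, a standard way to tune decision tree induction hyper-parameters is a cross-validated grid search; the grid values below are illustrative assumptions.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import GridSearchCV
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)

# Illustrative grid over common DecisionTreeClassifier hyper-parameters.
param_grid = {
    "criterion": ["gini", "entropy"],
    "max_depth": [3, 5, 10, None],
    "min_samples_leaf": [1, 5, 20],
}
search = GridSearchCV(DecisionTreeClassifier(random_state=0),
                      param_grid, cv=5, scoring="accuracy")
search.fit(X, y)
print(search.best_params_, round(search.best_score_, 3))
```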

  9. Transferability of decision trees for land cover classification in a ...

    African Journals Online (AJOL)

    This paper attempts to derive classification rules from training data of four Landsat-8 scenes by using the classification and regression tree (CART) implementation of the decision tree algorithm. The transferability of the ruleset was evaluated by classifying two adjacent scenes. The classification of the four mosaicked scenes ...

  10. Practical secure decision tree learning in a teletreatment application

    NARCIS (Netherlands)

    de Hoogh, Sebastiaan; Schoenmakers, Berry; Chen, Ping; op den Akker, Harm

    In this paper we develop a range of practical cryptographic protocols for secure decision tree learning, a primary problem in privacy preserving data mining. We focus on particular variants of the well-known ID3 algorithm allowing a high level of security and performance at the same time. Our

  11. Practical secure decision tree learning in a teletreatment application

    NARCIS (Netherlands)

    Hoogh, de S.J.A.; Schoenmakers, B.; Chen, Ping; Op den Akker, H.; Christin, N.; Safavi-Naini, R.

    2014-01-01

    In this paper we develop a range of practical cryptographic protocols for secure decision tree learning, a primary problem in privacy preserving data mining. We focus on particular variants of the well-known ID3 algorithm allowing a high level of security and performance at the same time. Our

  12. The Decision Tree for Teaching Management of Uncertainty

    Science.gov (United States)

    Knaggs, Sara J.; And Others

    1974-01-01

    A 'decision tree' consists of an outline of the patient's symptoms and a logic for decision and action. It is felt that this approach to the decisionmaking process better facilitates each learner's application of his own level of knowledge and skills. (Author)

  13. Ethnographic Decision Tree Modeling: A Research Method for Counseling Psychology.

    Science.gov (United States)

    Beck, Kirk A.

    2005-01-01

    This article describes ethnographic decision tree modeling (EDTM; C. H. Gladwin, 1989) as a mixed method design appropriate for counseling psychology research. EDTM is introduced and located within a postpositivist research paradigm. Decision theory that informs EDTM is reviewed, and the 2 phases of EDTM are highlighted. The 1st phase, model…

  14. A Decision Tree for Nonmetric Sex Assessment from the Skull.

    Science.gov (United States)

    Langley, Natalie R; Dudzik, Beatrix; Cloutier, Alesia

    2018-01-01

    This study uses five well-documented cranial nonmetric traits (glabella, mastoid process, mental eminence, supraorbital margin, and nuchal crest) and one additional trait (zygomatic extension) to develop a validated decision tree for sex assessment. The decision tree was built and cross-validated on a sample of 293 U.S. White individuals from the William M. Bass Donated Skeletal Collection. Ordinal scores from the six traits were analyzed using the partition modeling option in JMP Pro 12. A holdout sample of 50 skulls was used to test the model. The most accurate decision tree includes three variables: glabella, zygomatic extension, and mastoid process. This decision tree yielded 93.5% accuracy on the training sample, 94% on the cross-validated sample, and 96% on a holdout validation sample. Linear weighted kappa statistics indicate acceptable agreement among observers for these variables. Mental eminence should be avoided, and definitions and figures should be referenced carefully to score nonmetric traits. © 2017 American Academy of Forensic Sciences.
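
    The kind of shallow partition model described above, a few splits on ordinal trait scores, can be approximated with a depth-limited CART tree; the simulated scores and trait effects below are invented for illustration and do not reproduce the published JMP model.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)
n = 300
sex = rng.integers(0, 2, n)          # 0 = female, 1 = male (simulated labels)
# Ordinal scores 1-5 for three cranial traits; males tend to score higher.
traits = np.column_stack([
    np.clip(rng.normal(2 + 1.5 * sex, 0.8), 1, 5).round()
    for _ in range(3)                # glabella, zygomatic extension, mastoid
])

tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(traits, sex)
print(export_text(tree, feature_names=["glabella", "zygomatic", "mastoid"]))
```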

  15. Alternative measures of risk of extreme events in decision trees

    International Nuclear Information System (INIS)

    Frohwein, H.I.; Lambert, J.H.; Haimes, Y.Y.

    1999-01-01

    A need for a methodology to control the extreme events, defined as low-probability, high-consequence incidents, in sequential decisions is identified. A variety of alternative and complementary measures of the risk of extreme events are examined for their usability as objective functions in sequential decisions, represented as single- or multiple-objective decision trees. Earlier work had addressed difficulties, related to non-separability, with the minimization of some measures of the risk of extreme events in sequential decisions. In an extension of these results, it is shown how some non-separable measures of the risk of extreme events can be interpreted in terms of separable constituents of risk, thereby enabling a wider class of measures of the risk of extreme events to be handled in a straightforward manner in a decision tree. Also for extreme events, results are given to enable minimax- and Hurwicz-criterion analyses in decision trees. An example demonstrates the incorporation of different measures of the risk of extreme events in a multi-objective decision tree. Conceptual formulations for optimizing non-separable measures of the risk of extreme events are identified as an important area for future investigation
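
    For a single decision node, the difference between the expected-value, minimax and Hurwicz criteria can be shown in a few lines; the cost figures and probabilities below are invented for illustration.

```python
# Costs of two candidate actions under three scenarios, with scenario
# probabilities; purely illustrative numbers.
probs = [0.7, 0.25, 0.05]
costs = {"action A": [10, 40, 500],    # cheap usually, extreme worst case
         "action B": [40, 60, 120]}    # dearer on average, bounded worst case

alpha = 0.3   # Hurwicz optimism weight placed on the best (cheapest) outcome

for name, c in costs.items():
    expected = sum(p * x for p, x in zip(probs, c))
    minimax = max(c)                               # worst-case cost
    hurwicz = alpha * min(c) + (1 - alpha) * max(c)
    print(name, expected, minimax, round(hurwicz, 1))
# Expected cost favors action A, while minimax and Hurwicz (alpha = 0.3)
# favor action B, reflecting control of the extreme event.
```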

  16. Decision tree ensembles for online operation of large smart grids

    International Nuclear Information System (INIS)

    Steer, Kent C.B.; Wirth, Andrew; Halgamuge, Saman K.

    2012-01-01

    Highlights: ► We present a new technique for the online control of large smart grids. ► We use a Decision Tree Ensemble in a Receding Horizon Controller. ► Decision Trees can approximate online optimisation approaches. ► Decision Trees can make adjustments to their output in real time. ► The new technique outperforms heuristic online optimisation approaches. - Abstract: Smart grids utilise omnidirectional data transfer to operate a network of energy resources. Associated technologies present operators with greater control over system elements and more detailed information on the system state. While these features may improve the theoretical optimal operating performance, determining the optimal operating strategy becomes more difficult. In this paper, we show how a decision tree ensemble or ‘forest’ can produce a near-optimal control strategy in real time. The approach substitutes the decision forest for the simulation–optimisation sub-routine commonly employed in receding horizon controllers. The method is demonstrated on a small and a large network, and compared to controllers employing particle swarm optimisation and evolutionary strategies. For the smaller network the proposed method performs comparably in terms of total energy usage, but delivers a greater demand deficit. On the larger network the proposed method is superior with respect to all measures. We conclude that the method is useful when the time required to evaluate possible strategies via simulation is high.

  17. Bounds on Average Time Complexity of Decision Trees

    KAUST Repository

    Chikalov, Igor

    2011-01-01

    In this chapter, bounds on the average depth and the average weighted depth of decision trees are considered. Similar problems are studied in search theory [1], coding theory [77], design and analysis of algorithms (e.g., sorting) [38]. For any

  18. Relationships between depth and number of misclassifications for decision trees

    KAUST Repository

    Chikalov, Igor; Hussain, Shahid; Moshkov, Mikhail

    2011-01-01

    This paper describes a new tool for the study of relationships between depth and number of misclassifications for decision trees. In addition to the algorithm the paper also presents the results of experiments with three datasets from UCI Machine Learning Repository [3]. © 2011 Springer-Verlag.

  19. Circum-Arctic petroleum systems identified using decision-tree chemometrics

    Science.gov (United States)

    Peters, K.E.; Ramos, L.S.; Zumberge, J.E.; Valin, Z.C.; Scotese, C.R.; Gautier, D.L.

    2007-01-01

    Source- and age-related biomarker and isotopic data were measured for more than 1000 crude oil samples from wells and seeps collected above approximately 55°N latitude. A unique, multitiered chemometric (multivariate statistical) decision tree was created that allowed automated classification of 31 genetically distinct circum-Arctic oil families based on a training set of 622 oil samples. The method, which we call decision-tree chemometrics, uses principal components analysis and multiple tiers of K-nearest neighbor and SIMCA (soft independent modeling of class analogy) models to classify and assign confidence limits for newly acquired oil samples and source rock extracts. Geochemical data for each oil sample were also used to infer the age, lithology, organic matter input, depositional environment, and identity of its source rock. These results demonstrate the value of large petroleum databases where all samples were analyzed using the same procedures and instrumentation. Copyright © 2007. The American Association of Petroleum Geologists. All rights reserved.

  20. Classification and Optimization of Decision Trees for Inconsistent Decision Tables Represented as MVD Tables

    KAUST Repository

    Azad, Mohammad

    2015-10-11

    Decision tree is a widely used technique to discover patterns from a consistent data set. But if the data set is inconsistent, where there are groups of examples (objects) with equal values of conditional attributes but different decisions (values of the decision attribute), then discovering the essential patterns or knowledge from the data set is challenging. We consider three approaches (generalized, most common and many-valued decision) to handle such inconsistency. We created different greedy algorithms using various types of impurity and uncertainty measures to construct decision trees. We compared the three approaches based on the decision tree properties of the depth, average depth and number of nodes. Based on the result of the comparison, we chose to work with the many-valued decision approach. To determine which greedy algorithms are efficient, we compared them based on the optimization and classification results. It was found that some greedy algorithms, Mult_ws_entSort and Mult_ws_entML, are good for both optimization and classification.

  1. Classification and Optimization of Decision Trees for Inconsistent Decision Tables Represented as MVD Tables

    KAUST Repository

    Azad, Mohammad; Moshkov, Mikhail

    2015-01-01

    Decision tree is a widely used technique to discover patterns from a consistent data set. But if the data set is inconsistent, where there are groups of examples (objects) with equal values of conditional attributes but different decisions (values of the decision attribute), then discovering the essential patterns or knowledge from the data set is challenging. We consider three approaches (generalized, most common and many-valued decision) to handle such inconsistency. We created different greedy algorithms using various types of impurity and uncertainty measures to construct decision trees. We compared the three approaches based on the decision tree properties of the depth, average depth and number of nodes. Based on the result of the comparison, we chose to work with the many-valued decision approach. To determine which greedy algorithms are efficient, we compared them based on the optimization and classification results. It was found that some greedy algorithms, Mult_ws_entSort and Mult_ws_entML, are good for both optimization and classification.

  2. SOFTWARE ARCHITECTURE DESIGN OF GIS WEB SERVICE AGGREGATION BASED ON SERVICE GROUP

    Directory of Open Access Journals (Sweden)

    J.-C. Liu

    2012-08-01

    Full Text Available Based on an analysis of the state of research on GIS web service aggregation in China and abroad and the development trends of public GIS web service platforms, this paper designs a software architecture for GIS web service aggregation based on GIS web service groups. Firstly, using a heterogeneous GIS services model, the architecture converts a variety of heterogeneous services to a unified GIS service interface and divides different types of GIS services into service groups according to their descriptions. Secondly, a service aggregation process model was designed. This model completes a specific service aggregation instance by automatically selecting member GIS web services within the same service group, achieving dynamic capability and automatic adaptation of the aggregation process. Thirdly, the paper designs a service evaluation model for GIS web service aggregation based on service groups, covering three aspects: the GIS web service itself, networking conditions and the service consumer. This model implements effective quality evaluation and performance monitoring of GIS web service aggregation and can be used to guide execution, monitoring and service selection during the aggregation process, thereby improving the robustness of the aggregated GIS web service. Finally, the software architecture has been widely used in a public GIS web service platform and in a number of digital city geospatial framework projects in Sichuan Province, aggregating various GIS web services such as World Map (the National Public Platform of Geo-spatial Service), ArcGIS, SuperMap, MapGIS, NewMap, etc. These applications showed that the software architecture is practical.

  3. Development and acceptability testing of decision trees for self-management of prosthetic socket fit in adults with lower limb amputation.

    Science.gov (United States)

    Lee, Daniel Joseph; Veneri, Diana A

    2018-05-01

    The most common complaint lower limb prosthesis users report is inadequacy of a proper socket fit. Adjustments to the residual limb-socket interface can be made by the prosthesis user without consultation of a clinician in many scenarios through skilled self-management. Decision trees guide prosthesis wearers through the self-management process, empowering them to rectify fit issues, or referring them to a clinician when necessary. This study examines the development and acceptability testing of patient-centered decision trees for lower limb prosthesis users. Decision trees underwent a four-stage process: literature review and expert consultation, designing, two-rounds of expert panel review and revisions, and target audience testing. Fifteen lower limb prosthesis users (average age 61 years) reviewed the decision trees and completed an acceptability questionnaire. Participants reported agreement of 80% or above in five of the eight questions related to acceptability of the decision trees. Disagreement was related to the level of experience of the respondent. Decision trees were found to be easy to use, illustrate correct solutions to common issues, and have terminology consistent with that of a new prosthesis user. Some users with greater than 1.5 years of experience would not use the decision trees based on their own self-management skills. Implications for Rehabilitation Discomfort of the residual limb-prosthetic socket interface is the most common reason for clinician visits. Prosthesis users can use decision trees to guide them through the process of obtaining a proper socket fit independently. Newer users may benefit from using the decision trees more than experienced users.

  4. Predicting gene function using hierarchical multi-label decision tree ensembles

    Directory of Open Access Journals (Sweden)

    Kocev Dragi

    2010-01-01

    Full Text Available Abstract Background S. cerevisiae, A. thaliana and M. musculus are well-studied organisms in biology and the sequencing of their genomes was completed many years ago. It is still a challenge, however, to develop methods that assign biological functions to the ORFs in these genomes automatically. Different machine learning methods have been proposed to this end, but it remains unclear which method is to be preferred in terms of predictive performance, efficiency and usability. Results We study the use of decision tree based models for predicting the multiple functions of ORFs. First, we describe an algorithm for learning hierarchical multi-label decision trees. These can simultaneously predict all the functions of an ORF, while respecting a given hierarchy of gene functions (such as FunCat or GO. We present new results obtained with this algorithm, showing that the trees found by it exhibit clearly better predictive performance than the trees found by previously described methods. Nevertheless, the predictive performance of individual trees is lower than that of some recently proposed statistical learning methods. We show that ensembles of such trees are more accurate than single trees and are competitive with state-of-the-art statistical learning and functional linkage methods. Moreover, the ensemble method is computationally efficient and easy to use. Conclusions Our results suggest that decision tree based methods are a state-of-the-art, efficient and easy-to-use approach to ORF function prediction.
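
    The hierarchical constraint of such trees is not part of standard libraries, but the flat multi-label setting they generalize can be sketched with a random forest, which in scikit-learn accepts a binary label matrix directly; the data below are synthetic stand-ins for ORFs and their function labels.

```python
from sklearn.datasets import make_multilabel_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score

# Synthetic ORFs: each sample may carry several function labels at once.
X, Y = make_multilabel_classification(n_samples=500, n_features=40,
                                      n_classes=12, n_labels=3, random_state=0)
X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, random_state=0)

forest = RandomForestClassifier(n_estimators=200, random_state=0)
forest.fit(X_tr, Y_tr)                # multi-label fit: Y is an n x 12 matrix
pred = forest.predict(X_te)
print("micro-F1:", round(f1_score(Y_te, pred, average="micro"), 3))
```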

  5. Prognostic Factors and Decision Tree for Long-term Survival in Metastatic Uveal Melanoma.

    Science.gov (United States)

    Lorenzo, Daniel; Ochoa, María; Piulats, Josep Maria; Gutiérrez, Cristina; Arias, Luis; Català, Jaum; Grau, María; Peñafiel, Judith; Cobos, Estefanía; Garcia-Bru, Pere; Rubio, Marcos Javier; Padrón-Pérez, Noel; Dias, Bruno; Pera, Joan; Caminal, Josep Maria

    2017-12-04

    The purpose of this study was to demonstrate the existence of a bimodal survival pattern in metastatic uveal melanoma. Secondary aims were to identify the characteristics and prognostic factors associated with long-term survival and to develop a clinical decision tree. The medical records of 99 metastatic uveal melanoma patients were retrospectively reviewed. Patients were classified as either short (≤ 12 months) or long-term survivors (> 12 months) based on a graphical interpretation of the survival curve after diagnosis of the first metastatic lesion. Ophthalmic and oncological characteristics were assessed in both groups. Of the 99 patients, 62 (62.6%) were classified as short-term survivors, and 37 (37.4%) as long-term survivors. The multivariate analysis identified the following predictors of long-term survival: age ≤ 65 years (p=0.012) and unaltered serum lactate dehydrogenase levels (p=0.018); additionally, the size (smaller vs. larger) of the largest liver metastasis showed a trend towards significance (p=0.063). Based on the variables significantly associated with long-term survival, we developed a decision tree to facilitate clinical decision-making. The findings of this study demonstrate the existence of a bimodal survival pattern in patients with metastatic uveal melanoma. The presence of certain clinical characteristics at diagnosis of distant disease is associated with long-term survival. A decision tree was developed to facilitate clinical decision-making and to counsel patients about the expected course of disease.

  6. Binary Decision Trees for Preoperative Periapical Cyst Screening Using Cone-beam Computed Tomography.

    Science.gov (United States)

    Pitcher, Brandon; Alaqla, Ali; Noujeim, Marcel; Wealleans, James A; Kotsakis, Georgios; Chrepa, Vanessa

    2017-03-01

    Cone-beam computed tomographic (CBCT) analysis allows for 3-dimensional assessment of periradicular lesions and may facilitate preoperative periapical cyst screening. The purpose of this study was to develop and assess the predictive validity of a cyst screening method based on CBCT volumetric analysis alone or combined with designated radiologic criteria. Three independent examiners evaluated 118 presurgical CBCT scans from cases that underwent apicoectomies and had an accompanying gold standard histopathological diagnosis of either a cyst or granuloma. Lesion volume, density, and specific radiologic characteristics were assessed using specialized software. Logistic regression models with histopathological diagnosis as the dependent variable were constructed for cyst prediction, and receiver operating characteristic curves were used to assess the predictive validity of the models. A conditional inference binary decision tree based on a recursive partitioning algorithm was constructed to facilitate preoperative screening. Interobserver agreement was excellent for volume and density, but it varied from poor to good for the radiologic criteria. Volume and root displacement were strong predictors for cyst screening in all analyses. The binary decision tree classifier determined that if the volume of the lesion was >247 mm 3 , there was 80% probability of a cyst. If volume was cyst probability was 60% (78% accuracy). The good accuracy and high specificity of the decision tree classifier renders it a useful preoperative cyst screening tool that can aid in clinical decision making but not a substitute for definitive histopathological diagnosis after biopsy. Confirmatory studies are required to validate the present findings. Published by Elsevier Inc.

  7. Role of decision-tree analysis in the performance of a complex feasibility study

    International Nuclear Information System (INIS)

    Dworkin, D.; Sarkar, A.; Motwani, J.

    1991-01-01

    This report presents the results of the Feasibility Study (FS) of a National Priorities List (NPL) site in New Jersey and the decision tree that made this FS possible. The development of the decision tree and the remedial action alternatives that address the hazards at the site are presented. The FS efforts were performed in accordance with U.S. EPA guidance under the authority of the Comprehensive Environmental Response, Compensation, and Liability Act of 1980 (CERCLA) as amended by the Superfund Amendments and Reauthorization Act of 1986 (SARA). A Record of Decision (ROD) is expected in mid-1990. The subject site, the Myers Property site in Franklin Township, New Jersey, has been owned by several individuals and companies since 1811. Uses have included the production of DDT. A Remedial Investigation (RI) was performed by WESTON, the lead technical firm for this feasibility study, which identified soils/sediments, groundwater and buildings on the site as areas of concern. The major chemicals of concern are DDT and its metabolites, polynuclear aromatic compounds, various chlorinated benzenes, dioxin/furan homologues and heavy metals such as arsenic, cadmium, copper, lead, chromium and nickel. While this FS was developed in accordance with current CERCLA FS guidance and procedures, it was expanded to accommodate several outstanding technical and policy issues. Outstanding technical issues focused on uncertainties with respect to hydrogeologic conditions, and policy issues centered upon the development of the site-specific remedial action goals. A decision tree was established to facilitate the development of remedial strategies. The decision tree formed the basis for the FS and allowed remedial alternatives to be identified and evaluated based on key policy decisions. Site media were addressed as contaminated soils/sediments, groundwater and the on-site buildings

  8. Simple Prediction of Type 2 Diabetes Mellitus via Decision Tree Modeling

    Directory of Open Access Journals (Sweden)

    Mehrab Sayadi

    2017-06-01

    Full Text Available Background: Type 2 Diabetes Mellitus (T2DM) is one of the most important risk factors in cardiovascular disorders and is considered a common clinical and public health problem. Early diagnosis can reduce the burden of the disease. The decision tree, as an advanced data mining method, can be used as a reliable tool to predict T2DM. Objectives: This study aimed to present a simple model for predicting T2DM using decision tree modeling. Materials and Methods: This analytical model-based study used a part of the cohort data obtained from a database in the Healthy Heart House of Shiraz, Iran. The data included routine information, such as age, gender, Body Mass Index (BMI), family history of diabetes, and systolic and diastolic blood pressure, which were obtained from the individuals referred for gathering baseline data in the Shiraz cohort study from 2014 to 2015. Diabetes diagnosis was used as a binary datum. The decision tree technique and the J48 algorithm were applied using the WEKA software (version 3.7.5, New Zealand). Additionally, the Receiver Operator Characteristic (ROC) curve and Area Under Curve (AUC) were used for checking the goodness of fit. Results: The age of the 11302 cases obtained after data preparation ranged from 18 to 89 years with a mean age of 48.1 ± 11.4 years. Additionally, 51.1% of the cases were male. In the tree structure, blood pressure and age were placed where most information was gained. In our model, however, gender was not important and was placed on the final branch of the tree. Total precision and AUC were 87% and 89%, respectively. This indicated that the model had good accuracy for distinguishing patients from normal individuals. Conclusions: The results showed that T2DM could be predicted via a decision tree model without laboratory tests. Thus, this model can be used in pre-clinical and public health screening programs.
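
    J48 is WEKA's implementation of C4.5; a rough scikit-learn analogue trains an entropy-based tree on the same kind of routine, non-laboratory fields. The column names and data below are placeholders rather than the Shiraz cohort, and the entropy criterion only approximates J48's gain-ratio splitting.

```python
import numpy as np
import pandas as pd
from sklearn.tree import DecisionTreeClassifier, export_text
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 1000
# Placeholder columns mimicking routine, non-laboratory predictors.
df = pd.DataFrame({
    "age": rng.integers(18, 90, n),
    "bmi": rng.normal(27, 5, n),
    "systolic_bp": rng.normal(125, 18, n),
    "family_history": rng.integers(0, 2, n),
})
# Synthetic outcome loosely driven by blood pressure and age.
y = ((df.systolic_bp > 135) & (df.age > 45)).astype(int)

# Entropy-based splitting as a stand-in for J48's information-gain family.
tree = DecisionTreeClassifier(criterion="entropy", max_depth=4, random_state=0)
print("CV accuracy:", cross_val_score(tree, df, y, cv=5).mean().round(3))
print(export_text(tree.fit(df, y), feature_names=list(df.columns)))
```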

  9. Decision-tree approach to evaluating inactive uranium-processing sites for liner requirements

    International Nuclear Information System (INIS)

    Relyea, J.F.

    1983-03-01

    Recently, concern has been expressed about potential toxic effects of both radon emission and release of toxic elements in leachate from inactive uranium mill tailings piles. Remedial action may be required to meet disposal standards set by the states and the US Environmental Protection Agency (EPA). In some cases, a possible disposal option is the exhumation and reburial (either on site or at a new location) of tailings and reliance on engineered barriers to satisfy the objectives established for remedial actions. Liners under disposal pits are the major engineered barrier for preventing contaminant release to ground and surface water. The purpose of this report is to provide a logical sequence of action, in the form of a decision tree, which could be followed to show whether a selected tailings disposal design meets the objectives for subsurface contaminant release without a liner. This information can be used to determine the need and type of liner for sites exhibiting a potential groundwater problem. The decision tree is based on the capability of hydrologic and mass transport models to predict the movement of water and contaminants with time. The types of modeling capabilities and data needed for those models are described, and the steps required to predict water and contaminant movement are discussed. A demonstration of the decision tree procedure is given to aid the reader in evaluating the need for the adequacy of a liner

  10. Profit Driven Decision Trees for Churn Prediction

    OpenAIRE

    Höppner, Sebastiaan; Stripling, Eugen; Baesens, Bart; Broucke, Seppe vanden; Verdonck, Tim

    2017-01-01

    Customer retention campaigns increasingly rely on predictive models to detect potential churners in a vast customer base. From the perspective of machine learning, the task of predicting customer churn can be presented as a binary classification problem. Using data on historic behavior, classification algorithms are built with the purpose of accurately predicting the probability of a customer defecting. The predictive churn models are then commonly selected based on accuracy related performan...

  11. gis-based hydrological model upstream

    African Journals Online (AJOL)

    eobe

    (GIS) environment to simulate various parameters attributed to ... water, sediment and agricultural chemicals ... large complex ... The 90 m resolution topography data used for this ... results of the calibration and validation exercise are ...

  12. A GIS-based Power Transmission Management Information System

    Institute of Scientific and Technical Information of China (English)

    2010-01-01

    Based on an analysis of the shortcomings of electric power enterprises under the traditional operation pattern, this paper combines component GIS with a management information system application and uses a three-layer structure that unifies C/S and B/S architectures. It also proposes an intelligent GPS-based patrol. This may help guarantee safe and stable operation of the transmission line.

  13. GIS Based Measurement and Regulatory Zoning of Urban Ecological Vulnerability

    Directory of Open Access Journals (Sweden)

    Xiaorui Zhang

    2015-07-01

    Full Text Available Urban ecological vulnerability is measured on the basis of ecological sensitivity and resilience, based on the concept analysis of vulnerability. GIS-based multicriteria decision analysis (GIS-MCDA) methods are used, supported by the spatial analysis tools of GIS, to define different levels of vulnerability for areas of the urban ecology. These areas are further classified into different types of regulatory zones. Taking the city of Hefei in China as the empirical research site, this study uses GIS-MCDA, including the index system, index weights and overlay rules, to measure the degree of its ecological vulnerability on the GIS platform. There are eight indices in the system. Ranking and analytical hierarchy process (AHP) methods are used to calculate index weights according to the characteristics of the index system. The integrated overlay rule, including selection of the maximum value, and weighted linear combination (WLC) are applied as the overlay rules. In this way, five types of vulnerability areas have been classified as follows: very low vulnerability, low vulnerability, medium vulnerability, high vulnerability and very high vulnerability. They can be further grouped into three types of regulatory zone: ecological green line, ecological grey line and ecological red line. The study demonstrates that ecological green line areas are the largest (53.61% of the total study area) and can be intensively developed; ecological grey line areas (19.59% of the total area) can serve as the ecological buffer zone, and ecological red line areas (26.80%) cannot be developed and must be protected. The results indicate that ecological green line areas may provide sufficient room for future urban development in Hefei city. Finally, the respective regulatory countermeasures are put forward. This research provides a scientific basis for decision-making around urban ecological protection, construction and sustainable development. It also provides theoretical method
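
    The weighted linear combination step of GIS-MCDA amounts to a per-cell weighted sum of normalized criterion rasters followed by reclassification into vulnerability levels; the sketch below uses NumPy arrays as stand-in raster layers with illustrative weights, not the Hefei index system.

```python
import numpy as np

rng = np.random.default_rng(0)
rows, cols = 100, 100

# Stand-in criterion rasters (e.g., slope, vegetation cover, land use score),
# already scaled to 0-1 where 1 means most ecologically sensitive.
layers = {name: rng.random((rows, cols))
          for name in ["slope", "vegetation", "water_proximity", "land_use"]}
# AHP-style weights (illustrative); they should sum to 1 for a WLC overlay.
weights = {"slope": 0.30, "vegetation": 0.25,
           "water_proximity": 0.25, "land_use": 0.20}

# Weighted linear combination: per-cell weighted sum of the criteria.
vulnerability = sum(weights[k] * layers[k] for k in layers)

# Reclassify into five vulnerability levels (equal-interval breaks here).
levels = np.digitize(vulnerability, bins=[0.2, 0.4, 0.6, 0.8]) + 1
print(np.bincount(levels.ravel())[1:])   # cell counts per level
```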

  14. Bi-Criteria Optimization of Decision Trees with Applications to Data Analysis

    KAUST Repository

    Chikalov, Igor; Hussain, Shahid; Moshkov, Mikhail

    2017-01-01

    Applications include the study of relationships among depth, average depth and number of nodes for decision trees for corner point detection (such trees are used in computer vision for object tracking), and the study of systems of decision rules derived from decision trees.

  15. Total Path Length and Number of Terminal Nodes for Decision Trees

    KAUST Repository

    Hussain, Shahid

    2014-01-01

    This paper presents a new tool for study of relationships between total path length (average depth) and number of terminal nodes for decision trees. These relationships are important from the point of view of optimization of decision trees

  16. Application of GIS Technology - cadastral base of the municipality of Gostivar

    OpenAIRE

    Serafimoski, Toni

    2011-01-01

    The thesis presents modern GIS technology and its application in the field of cadastre. GIS technology is essentially based on the display and processing of information related to given geographical coordinates. The paper applies the GIS software package Microstation, widely used in the field of cadastre, and develops a model using data from the cadastre of the municipality of Gostivar. The applied GIS technology opens up wide possibilities for processing cadastral data,...

  17. Hydropower systems planning in distribution networks based on GIS

    Energy Technology Data Exchange (ETDEWEB)

    Ramirez-Rosado, I.J.; Zorzano-Santamaria, P.; Fernandez-Jiminez, L.A.; Garcia-Garrido, E.; Zorzano-Alba, E.; Lara-Santillan, P.M.; Mendoza-Villena, M. [La Rioja Univ., La Rioja (Spain). Dept. of Electrical Engineering

    2005-07-01

    Planning models for hydraulic energy systems developed with a new computational geographic information system (GIS) tool were presented. The GIS tool was used to identify sections of rivers that are suitable for small-scale run-of-river hydropower generation in the La Rioja region of Spain. The tool was also used to evaluate economic data related to financing grants, technological costs, installation costs, and forecast demands for various consumption scenarios. Case studies of 2 hydropower systems were used to test the tool. The planning models were based on levelized electric costs (LEC) of run-of-river hydropower systems, and enabled the systems to be analyzed in isolation as well as when connected to an existing electricity network. Results of the modelling studies showed that the tool is capable of analyzing the LEC of potential hydropower systems in a range of different scenarios while maintaining the geographic characteristics and requirements of individual regions. 7 refs., 4 figs.

  18. Online decision trees to support the control of gastrointestinal worms in ruminants

    OpenAIRE

    Koopmann, Regine; Dämmrich, Michaela; Ploeger, Harm

    2014-01-01

    Control of gastrointestinal worms is crucial to any pasture system for ruminants. To support the farmer's foresighted planning of pasturage and to avoid excessive deworming in Germany, we created four decision trees and put them online. They are freely accessible at www.weide-parasiten.de. There is one decision tree for young first-season cattle in intensive dairy husbandry, one decision tree for young cattle in suckling-cow management, and one decision tree each for sheep and goats.

  19. Extensions of dynamic programming as a new tool for decision tree optimization

    KAUST Repository

    Alkhalid, Abdulaziz; Chikalov, Igor; Hussain, Shahid; Moshkov, Mikhail

    2013-01-01

    The chapter is devoted to the consideration of two types of decision trees for a given decision table: α-decision trees (the parameter α controls the accuracy of tree) and decision trees (which allow arbitrary level of accuracy). We study possibilities of sequential optimization of α-decision trees relative to different cost functions such as depth, average depth, and number of nodes.

  20. The Utility of Decision Trees in Oncofertility Care in Japan.

    Science.gov (United States)

    Ito, Yuki; Shiraishi, Eriko; Kato, Atsuko; Haino, Takayuki; Sugimoto, Kouhei; Okamoto, Aikou; Suzuki, Nao

    2017-03-01

    To identify the utility and issues associated with the use of decision trees in oncofertility patient care in Japan. A total of 35 women who had been diagnosed with cancer, but had not begun anticancer treatment, were enrolled. We applied the oncofertility decision tree for women published by Gardino et al. to counsel a consecutive series of women on fertility preservation (FP) options following cancer diagnosis. Percentage of women who decided to undergo oocyte retrieval for embryo cryopreservation and the expected live-birth rate for these patients were calculated using the following equation: expected live-birth rate = pregnancy rate at each age per embryo transfer × (1 - miscarriage rate) × No. of cryopreserved embryos. Oocyte retrieval was performed for 17 patients (48.6%; mean ± standard deviation [SD] age, 36.35 ± 3.82 years). The mean ± SD number of cryopreserved embryos was 5.29 ± 4.63. The expected live-birth rate was 0.66. The expected live-birth rate with FP indicated that one in three oncofertility patients would not expect to have a live birth following oocyte retrieval and embryo cryopreservation. While the decision trees were useful as decision-making tools for women contemplating FP, in the context of the current restrictions on oocyte donation and the extremely small number of adoptions in Japan, the remaining options for fertility after cancer are limited. In order for cancer survivors to feel secure in their decisions, the decision tree may need to be adapted simultaneously with improvements to the social environment, such as greater support for adoption.
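    The expected live-birth rate used in the study is a simple product of three quantities, so it can be computed directly. The snippet below restates the equation from the abstract in code; the numeric inputs are illustrative placeholders, not the cohort's actual rates.

        def expected_live_birth_rate(pregnancy_rate_per_transfer, miscarriage_rate, n_embryos):
            """Expected live-birth rate = pregnancy rate per embryo transfer
            x (1 - miscarriage rate) x number of cryopreserved embryos."""
            return pregnancy_rate_per_transfer * (1 - miscarriage_rate) * n_embryos

        # Illustrative values only: 20% pregnancy rate per transfer,
        # 25% miscarriage rate, 4 embryos cryopreserved.
        print(expected_live_birth_rate(0.20, 0.25, 4))  # -> 0.6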

  1. Comparing wavefront-optimized, wavefront-guided and topography-guided laser vision correction: clinical outcomes using an objective decision tree.

    Science.gov (United States)

    Stonecipher, Karl; Parrish, Joseph; Stonecipher, Megan

    2018-05-18

    This review is intended to update and educate the reader on the currently available options for laser vision correction, more specifically, laser-assisted in-situ keratomileusis (LASIK). In addition, some related clinical outcomes data from over 1000 cases performed over a 1-year period are presented to highlight some differences between the various treatment profiles currently available, including the rapidity of visual recovery. The cases in question were performed using a decision tree to segregate patients on the basis of anatomical, topographic and aberrometry findings; the decision tree was formulated based on the data available in some of the reviewed articles. Numerous recent studies reported in the literature provide data related to the risks and benefits of LASIK; alternatives to a laser refractive procedure are also discussed. The results from these studies have been used to prepare a decision tree to assist the surgeon in choosing the best option for the patient based on the data from several standard preoperative diagnostic tests. The data presented here should aid surgeons in understanding the effects of currently available LASIK treatment profiles. Surgeons should also be able to appreciate how the findings were used to create a decision tree to help choose the most appropriate treatment profile for patients. Finally, the retrospective evaluation of clinical outcomes based on the decision tree should provide surgeons with a realistic expectation for their own outcomes should they adopt such a decision tree in their own practice.

  2. Discovering Decision Knowledge from Web Log Portfolio for Managing Classroom Processes by Applying Decision Tree and Data Cube Technology.

    Science.gov (United States)

    Chen, Gwo-Dong; Liu, Chen-Chung; Ou, Kuo-Liang; Liu, Baw-Jhiune

    2000-01-01

    Discusses the use of Web logs to record student behavior that can assist teachers in assessing performance and making curriculum decisions for distance learning students who are using Web-based learning systems. Adopts decision tree and data cube information processing methodologies for developing more effective pedagogical strategies. (LRW)

  3. The use of GIS-based support of recreational trail planning by local governments

    DEFF Research Database (Denmark)

    Olafsson, Anton Stahl; Skov-Petersen, Hans

    2014-01-01

    In the last decade, multiple GIS-based planning support systems have been developed in order to improve the basis for spatial planning. Recent research has focused on problems regarding a lack of utilisation or barriers to the use of advanced GIS and GIS-based planning support systems in planning applications. This paper discusses these research findings in the context of outdoor recreational planning by local governments in Denmark. According to a national survey of municipal planners, GIS-based planning support is widely used in Denmark, but more GIS is needed and is being requested for local outdoor recreation planning. However, considerable differences exist between the ways municipalities assess their need for and use of GIS planning support. These differences are explored in more detail using a factor analysis of planning variables and uses of GIS. Three situations are described: 1) extensive network...

  4. Extensions of dynamic programming as a new tool for decision tree optimization

    KAUST Repository

    Alkhalid, Abdulaziz

    2013-01-01

    The chapter is devoted to the consideration of two types of decision trees for a given decision table: α-decision trees (the parameter α controls the accuracy of tree) and decision trees (which allow arbitrary level of accuracy). We study possibilities of sequential optimization of α-decision trees relative to different cost functions such as depth, average depth, and number of nodes. For decision trees, we analyze relationships between depth and number of misclassifications. We also discuss results of computer experiments with some datasets from UCI ML Repository. ©Springer-Verlag Berlin Heidelberg 2013.

  5. Classification of Parkinsonian syndromes from FDG-PET brain data using decision trees with SSM/PCA features.

    Science.gov (United States)

    Mudali, D; Teune, L K; Renken, R J; Leenders, K L; Roerdink, J B T M

    2015-01-01

    Medical imaging techniques like fluorodeoxyglucose positron emission tomography (FDG-PET) have been used to aid in the differential diagnosis of neurodegenerative brain diseases. In this study, the objective is to classify FDG-PET brain scans of subjects with Parkinsonian syndromes (Parkinson's disease, multiple system atrophy, and progressive supranuclear palsy) compared to healthy controls. The scaled subprofile model/principal component analysis (SSM/PCA) method was applied to FDG-PET brain image data to obtain covariance patterns and corresponding subject scores. The latter were used as features for supervised classification by the C4.5 decision tree method. Leave-one-out cross validation was applied to determine classifier performance. We carried out a comparison with other types of classifiers. The big advantage of decision tree classification is that the results are easy for humans to understand. A visual representation of decision trees strongly supports the interpretation process, which is very important in the context of medical diagnosis. Further improvements are suggested based on enlarging the training data set, enhancing the decision tree method by bagging, and adding additional features based on (f)MRI data.
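    As a rough illustration of the workflow (dimensionality reduction to subject scores, tree induction, leave-one-out evaluation), the sketch below uses plain PCA scores and scikit-learn's CART implementation; it is not the authors' SSM/PCA pipeline, scikit-learn has no C4.5 classifier, and the data are synthetic stand-ins for FDG-PET scans.

        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.model_selection import LeaveOneOut, cross_val_score
        from sklearn.pipeline import make_pipeline
        from sklearn.tree import DecisionTreeClassifier

        # Synthetic stand-in: 40 subjects x 500 voxel features, binary labels
        # (patient vs. healthy control).
        rng = np.random.default_rng(0)
        X = rng.normal(size=(40, 500))
        y = rng.integers(0, 2, size=40)

        # PCA subject scores as features (loosely analogous to SSM/PCA scores),
        # followed by a shallow decision tree.
        model = make_pipeline(PCA(n_components=5),
                              DecisionTreeClassifier(max_depth=3, random_state=0))

        # Leave-one-out cross-validation, as in the study design.
        scores = cross_val_score(model, X, y, cv=LeaveOneOut())
        print("LOO accuracy:", scores.mean())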

  6. GIS-based assessment of landslide susceptibility using certainty ...

    Indian Academy of Sciences (India)

    Landslide susceptibility has been assessed in geographic information systems (GIS) using different models, including spatial multicriteria decision analysis (MCDA) and support vector approaches; the case study concerns landslide susceptibility assessment using mathematical methods in GIS for Qianyang, China.

  7. Measuring performance in health care: case-mix adjustment by boosted decision trees.

    Science.gov (United States)

    Neumann, Anke; Holstein, Josiane; Le Gall, Jean-Roger; Lepage, Eric

    2004-10-01

    The purpose of this paper is to investigate the suitability of boosted decision trees for the case-mix adjustment involved in comparing the performance of various health care entities. First, we present logistic regression, decision trees, and boosted decision trees in a unified framework. Second, we study in detail their application for two common performance indicators, the mortality rate in intensive care and the rate of potentially avoidable hospital readmissions. For both examples the technique of boosting decision trees outperformed standard prognostic models, in particular linear logistic regression models, with regard to predictive power. On the other hand, boosting decision trees was computationally demanding and the resulting models were rather complex and needed additional tools for interpretation. Boosting decision trees represents a powerful tool for case-mix adjustment in health care performance measurement. Depending on the specific priorities set in each context, the gain in predictive power might compensate for the inconvenience in the use of boosted decision trees.
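    A hedged sketch of the kind of comparison described, using scikit-learn's gradient-boosted trees against a linear logistic regression on synthetic data; the covariates, sample size and scoring choice (AUC) are placeholders rather than the study's case-mix variables.

        from sklearn.datasets import make_classification
        from sklearn.ensemble import GradientBoostingClassifier
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import cross_val_score

        # Synthetic stand-in for patient-level risk factors and a binary outcome
        # (e.g., ICU mortality or readmission).
        X, y = make_classification(n_samples=2000, n_features=20,
                                   n_informative=8, random_state=0)

        models = [("logistic regression", LogisticRegression(max_iter=1000)),
                  ("boosted decision trees", GradientBoostingClassifier(random_state=0))]
        for name, clf in models:
            auc = cross_val_score(clf, X, y, cv=5, scoring="roc_auc").mean()
            print(f"{name}: mean AUC = {auc:.3f}")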

  8. Evaluation with Decision Trees of Efficacy and Safety of Semirigid Ureteroscopy in the Treatment of Proximal Ureteral Calculi.

    Science.gov (United States)

    Sancak, Eyup Burak; Kılınç, Muhammet Fatih; Yücebaş, Sait Can

    2017-01-01

    The decision on the choice of proximal ureteral stone therapy depends on many factors, and sometimes urologists have difficulty in choosing the treatment option. This study is aimed at evaluating the factors affecting the success of semirigid ureterorenoscopy (URS) using the "decision tree" method. From January 2005 to November 2015, the data of consecutive patients treated for proximal ureteral stone were retrospectively analyzed. A total of 920 patients with proximal ureteral stone treated with semirigid URS were included in the study. All statistically significant attributes were tested using the decision tree method. The model created using decision tree had a sensitivity of 0.993 and an accuracy of 0.857. While URS treatment was successful in 752 patients (81.7%), it was unsuccessful in 168 patients (18.3%). According to the decision tree method, the most important factor affecting the success of URS is whether the stone is impacted to the ureteral wall. The second most important factor affecting treatment was intramural stricture requiring dilatation if the stone is impacted, and the size of the stone if not impacted. Our study suggests that the impacted stone, intramural stricture requiring dilatation and stone size may have a significant effect on the success rate of semirigid URS for proximal ureteral stone. Further studies with population-based and longitudinal design should be conducted to confirm this finding. © 2017 S. Karger AG, Basel.

  9. Decision Tree and Texture Analysis for Mapping Debris-Covered Glaciers in the Kangchenjunga Area, Eastern Himalaya

    Directory of Open Access Journals (Sweden)

    Adina Racoviteanu

    2012-10-01

    Full Text Available In this study we use visible, short-wave infrared and thermal Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) data validated with high-resolution Quickbird (QB) and Worldview2 (WV2) for mapping debris cover in the eastern Himalaya using two independent approaches: (a) a decision tree algorithm, and (b) texture analysis. The decision tree algorithm was based on multi-spectral and topographic variables, such as band ratios, surface reflectance, kinetic temperature from ASTER bands 10 and 12, slope angle, and elevation. The decision tree algorithm resulted in 64 km2 classified as debris-covered ice, which represents 11% of the glacierized area. Overall, for ten glacier tongues in the Kangchenjunga area, there was an area difference of 16.2 km2 (25%) between the ASTER and the QB areas, with mapping errors mainly due to clouds and shadows. Texture analysis techniques included co-occurrence measures, geostatistics and filtering in the spatial/frequency domain. Debris cover had the highest variance of all terrain classes, the highest entropy and the lowest homogeneity compared to the other classes, for example a mean variance of 15.27 compared to 0 for clouds and 0.06 for clean ice. Results of the texture image for debris-covered areas were comparable with those from the decision tree algorithm, with an 8% area difference between the two techniques.
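    The decision tree algorithm in (a) is essentially a cascade of thresholds applied to multispectral and topographic rasters. The numpy sketch below shows the general form of such a rule cascade; the layers, thresholds and class codes are hypothetical and not the rules calibrated for the Kangchenjunga area.

        import numpy as np

        rng = np.random.default_rng(0)
        band_ratio = rng.random((200, 200))               # e.g., a NIR/SWIR band ratio
        kinetic_temp = rng.uniform(250, 290, (200, 200))  # surface temperature [K]
        slope = rng.uniform(0, 60, (200, 200))            # slope angle [degrees]

        # Hand-built illustrative rules: low band ratio, relatively warm surface and
        # gentle slope -> debris-covered ice; high band ratio -> clean ice; else other.
        debris = (band_ratio < 0.4) & (kinetic_temp > 268) & (slope < 24)
        clean_ice = band_ratio >= 0.75

        classified = np.where(debris, 1, np.where(clean_ice, 2, 0))  # 1 debris, 2 ice, 0 other
        print("debris-covered pixels:", int(debris.sum()))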

  10. A GIS-BASED ENVIRONMENTAL HEALTH INFORMATION SOURCE FOR MALAYSIAN CONTEXT

    OpenAIRE

    Lau Tiu Chung; Lau Bee Theng; Henry Lee Seldon

    2013-01-01

    In this paper, we propose a GIS-based system for collection and targeted distribution of latest alerts and real-time environmental factors to the Malaysian population. We call it the Environmental Health Management System (EHMS). This GIS-based system is designed to facilitate and encourage research into environmental health quality issues by providing a comprehensive tracking and monitoring tool. This GIS-based system is embedded with Google Maps API and Geocoding API services to visualize t...

  11. GIS-Based Population Model Applied to Nevada Transportation Routes

    International Nuclear Information System (INIS)

    Mills, G.S.; Neuhauser, K.S.

    1999-01-01

    Recently, a model based on geographic information system (GIS) processing of US Census Block data has made high-resolution population analysis for transportation risk analysis technically and economically feasible. Population density bordering each kilometer of a route may be tabulated with specific route sections falling into each of three categories (Rural, Suburban or Urban) identified for separate risk analysis. In addition to the improvement in resolution of Urban areas along a route, the model provides a statistically-based correction to population densities in Rural and Suburban areas where Census Block dimensions may greatly exceed the 800-meter scale of interest. A semi-automated application of the GIS model to a subset of routes in Nevada (related to the Yucca Mountain project) is presented, and the results are compared to previous models, including a model based on published Census and other data. These comparisons demonstrate that meaningful improvement in the accuracy and specificity of transportation risk analyses is dependent on correspondingly accurate and geographically-specific population density data.

  12. Entropy lower bounds of quantum decision tree complexity

    OpenAIRE

    Shi, Yaoyun

    2000-01-01

    We prove a general lower bound of quantum decision tree complexity in terms of some entropy notion. We regard the computation as a communication process in which the oracle and the computer exchange several rounds of messages, each round consisting of O(log(n)) bits. Let E(f) be the Shannon entropy of the random variable f(X), where X is uniformly random in f's domain. Our main result is that it takes Ω(E(f)) queries to compute any total function f. It is interesting to contrast t...

  13. Boosted decision trees as an alternative to artificial neural networks for particle identification

    International Nuclear Information System (INIS)

    Roe, Byron P.; Yang Haijun; Zhu Ji; Liu Yong; Stancu, Ion; McGregor, Gordon

    2005-01-01

    The efficacy of particle identification is compared using artificial neural networks and boosted decision trees. The comparison is performed in the context of MiniBooNE, an experiment at Fermilab searching for neutrino oscillations. Based on studies of Monte Carlo samples of simulated data, particle identification with boosting algorithms has better performance than that with artificial neural networks for the MiniBooNE experiment. Although the tests in this paper were for one experiment, it is expected that boosting algorithms will find wide application in physics.

  14. Studies of stability and robustness for artificial neural networks and boosted decision trees

    International Nuclear Information System (INIS)

    Yang, H.-J.; Roe, Byron P.; Zhu Ji

    2007-01-01

    In this paper, we compare the performance, stability and robustness of Artificial Neural Networks (ANN) and Boosted Decision Trees (BDT) using MiniBooNE Monte Carlo samples. These methods attempt to classify events given a number of identification variables. The BDT algorithm has been discussed by us in previous publications. Testing is done in this paper by smearing and shifting the input variables of testing samples. Based on these studies, BDT has better particle identification performance than ANN. The degradation of the classifications obtained by shifting or smearing variables of testing results is smaller for BDT than for ANN

  15. GENERATION OF 2D LAND COVER MAPS FOR URBAN AREAS USING DECISION TREE CLASSIFICATION

    DEFF Research Database (Denmark)

    Höhle, Joachim

    2014-01-01

    A 2D land cover map can automatically and efficiently be generated from high-resolution multispectral aerial images. First, a digital surface model is produced and each cell of the elevation model is then supplemented with attributes. A decision tree classification is applied to extract map objects ... of stereo-observations of false-colour stereopairs. The stratified statistical assessment of the produced land cover map with six classes and based on 91 points per class reveals a high thematic accuracy for the classes ‘building’ (99%, 95% CI: 95%-100%) and ‘road and parking lot’ (90%, 95% CI: 83%-95%). Some...

  16. PROMAB-GIS: A GIS based Tool for Estimating Runoff and Sediment Yield in running Waters

    Science.gov (United States)

    Jenewein, S.; Rinderer, M.; Ploner, A.; Sönser, T.

    2003-04-01

    In recent times settlements have expanded, and traffic and tourist activities have increased in most alpine regions. As a consequence, on the one hand humans and goods are affected by natural hazard processes more often, while on the other hand the demand for protection by both technical constructions and planning measures carried out by public authorities is growing. This situation results in an ever stronger need for reproducibility, comparability and transparency of all methods applied in modern natural hazard management. As a contribution to a new way of coping with this situation, Promab-GIS Version 1.0 has been developed. Promab-GIS has been designed as a model for the time- and space-dependent determination of both runoff and bedload transport in rivers of small alpine catchment areas. The estimation of the unit hydrograph relies upon the "rational formula" and the time-area curves of the watershed. The time-area diagram is a graph of cumulative drainage area contributing to discharge at the watershed outlet within a specified time of travel. The sediment yield is estimated for each cell of the channel network by determining the actual process type (erosion, transport or accumulation). Two types of transport processes are considered, sediment transport and debris flows. All functions of Promab-GIS are integrated in the graphical user interface of ArcView as pull-up menus and tool buttons. Hence the application of Promab-GIS does not rely on sophisticated knowledge of GIS in general or of the ArcView software in particular. However, despite the use of computer assistance, Promab-GIS is still an expert support system. In order to obtain plausible results, the users must be familiar with all the relevant processes controlling runoff and sediment yield in torrent catchments.
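    The runoff component described above (rational formula plus time-area curve) amounts to convolving effective rainfall with the incremental contributing area per time step. The sketch below shows that computation under invented parameter values; it is not the Promab-GIS implementation.

        import numpy as np

        runoff_coeff = 0.35                              # rational-formula C [-]
        rain_mm_h = np.array([0, 4, 12, 20, 8, 2, 0])    # rainfall intensity i [mm/h]
        area_km2 = np.array([0.2, 0.5, 0.8, 0.6, 0.3])   # incremental time-area curve [km^2]

        # Rational-formula style conversion: Q [m^3/s] = C * i [mm/h] * A [km^2] / 3.6
        effective = runoff_coeff * rain_mm_h
        q_outlet = np.convolve(effective, area_km2) / 3.6

        print(np.round(q_outlet, 2))  # outlet hydrograph ordinates [m^3/s]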

  17. A Javascript GIS Platform Based on Invocable Geospatial Web Services

    Directory of Open Access Journals (Sweden)

    Konstantinos Evangelidis

    2018-04-01

    Full Text Available Semantic Web technologies have been increasingly adopted by the geospatial community during the last decade through the utilization of open standards for expressing and serving geospatial data. This has also been dramatically assisted by the ever-increasing access to and usage of geographic mapping and location-based services via smart devices in people’s daily activities. In this paper, we explore the developmental framework of a pure JavaScript client-side GIS platform exclusively based on invocable geospatial Web services. We also extend JavaScript utilization on the server side by deploying a node server acting as a bridge between open source WPS libraries and popular geoprocessing engines. The vehicle for such an exploration is a cross-platform Web browser capable of interpreting JavaScript commands to achieve interaction with geospatial providers. The tool is a generic Web interface providing capabilities for acquiring spatial datasets, composing layouts and applying geospatial processes. In its ideal form, the end-user has only to identify the services which satisfy a geo-related need and put them in the appropriate row. The final output may act as a potential collector of freely available geospatial web services. Its server-side components may exploit geospatial processing suppliers, composing in that way a lightweight, fully transparent open Web GIS platform.

  18. GIS-BASED PREDICTION OF HURRICANE FLOOD INUNDATION

    Energy Technology Data Exchange (ETDEWEB)

    JUDI, DAVID [Los Alamos National Laboratory; KALYANAPU, ALFRED [Los Alamos National Laboratory; MCPHERSON, TIMOTHY [Los Alamos National Laboratory; BERSCHEID, ALAN [Los Alamos National Laboratory

    2007-01-17

    A simulation environment is being developed for the prediction and analysis of the inundation consequences for infrastructure systems from extreme flood events. This decision support architecture includes a GIS-based environment for model input development, simulation integration tools for meteorological, hydrologic, and infrastructure system models and damage assessment tools for infrastructure systems. The GIS-based environment processes digital elevation models (30-m from the USGS), land use/cover (30-m NLCD), stream networks from the National Hydrography Dataset (NHD) and soils data from the NRCS (STATSGO) to create stream network, subbasins, and cross-section shapefiles for drainage basins selected for analysis. Rainfall predictions are made by a numerical weather model and ingested in gridded format into the simulation environment. Runoff hydrographs are estimated using Green-Ampt infiltration excess runoff prediction and a 1D diffusive wave overland flow routing approach. The hydrographs are fed into the stream network and integrated in a dynamic wave routing module using the EPA's Storm Water Management Model (SWMM) to predict flood depth. The flood depths are then transformed into inundation maps and exported for damage assessment. Hydrologic/hydraulic results are presented for Tropical Storm Allison.
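    For the infiltration-excess step, a minimal explicit Green-Ampt sketch is given below: infiltration capacity is evaluated from cumulative infiltration at the start of each step, and any rainfall above capacity becomes runoff. Soil parameters, time step and rainfall series are illustrative, not the values used in the simulation environment described above.

        import numpy as np

        K = 1.0          # saturated hydraulic conductivity [cm/h]
        psi = 11.0       # wetting-front suction head [cm]
        d_theta = 0.3    # soil moisture deficit [-]
        dt = 0.25        # time step [h]
        rain = np.array([0.5, 2.0, 5.0, 5.0, 1.0, 0.2])  # rainfall intensity [cm/h]

        F = 1e-6         # cumulative infiltration [cm], small seed to avoid divide-by-zero
        runoff = []
        for i in rain:
            f_cap = K * (1.0 + psi * d_theta / F)    # Green-Ampt infiltration capacity
            f_act = min(i, f_cap)                    # actual infiltration rate
            F += f_act * dt
            runoff.append(max(i - f_cap, 0.0) * dt)  # infiltration-excess depth this step

        print("runoff depths [cm]:", np.round(runoff, 3))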

  19. 3D GIS spatial operation based on extended Euler operators

    Science.gov (United States)

    Xu, Hongbo; Lu, Guonian; Sheng, Yehua; Zhou, Liangchen; Guo, Fei; Shang, Zuoyan; Wang, Jing

    2008-10-01

    At present, implementations of 3-dimensional spatial operations based on a particular data structure lack universality and cannot handle non-manifold cases. The ISO/DIS 19107 standard only presents definitions of Boolean operators and set operators for topological relationship queries, and OGC GeoXACML gives formal definitions for several set functions without implementation detail. Aiming at these problems, with a mathematical foundation in cell complex theory, supported by a non-manifold data structure and drawing on relevant research in the field of non-manifold geometric modeling, this paper firstly constructs, according to the non-manifold Euler-Poincaré formula, 6 extended Euler operators and their inverse operators for creating, updating and deleting 3D spatial elements, as well as several pairs of supplementary Euler operators to make the implementation of advanced functions more convenient. Secondly, we convert the topological element operation sequences of Boolean operations and set operations, as well as the set functions defined in GeoXACML, into combinations of extended Euler operators, which separates the upper-level functions from the lower-level data structure. Lastly, we develop an underground 3D GIS prototype system, in which the practicability and credibility of the extended Euler operators for 3D GIS presented in this paper are validated.

  20. Applying of Decision Tree Analysis to Risk Factors Associated with Pressure Ulcers in Long-Term Care Facilities.

    Science.gov (United States)

    Moon, Mikyung; Lee, Soo-Kyoung

    2017-01-01

    The purpose of this study was to use decision tree analysis to explore the factors associated with pressure ulcers (PUs) among elderly people admitted to Korean long-term care facilities. The data were extracted from the 2014 National Inpatient Sample (NIS) data of the Health Insurance Review and Assessment Service (HIRA). A MapReduce-based program was implemented to join and filter 5 tables of the NIS. The outcome predicted by the decision tree model was the prevalence of PUs as defined by the Korean Standard Classification of Disease-7 (KCD-7; code L89*). Using R 3.3.1, a decision tree was generated with the finalized 15,856 cases and 830 variables. The decision tree displayed 15 subgroups with 8 variables, showing 0.804 accuracy, 0.820 sensitivity, and 0.787 specificity. The most significant primary predictor of PUs was length of stay less than 0.5 day. Other predictors were the presence of an infectious wound dressing, followed by having diagnoses numbering less than 3.5 and the presence of a simple dressing. Among diagnoses, "injuries to the hip and thigh" was the top predictor, ranking 5th overall. Total hospital cost exceeding 2,200,000 Korean won (US $2,000) rounded out the top 7. These results support previous studies that showed length of stay, comorbidity, and total hospital cost were associated with PUs. Moreover, wound dressings were commonly used to treat PUs. They also show that machine learning, such as a decision tree, could effectively predict PUs using big data.
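    The reported accuracy, sensitivity and specificity are all derived from the confusion matrix of predicted versus observed pressure-ulcer status; the toy example below shows those definitions on made-up labels (not the NIS data, and in Python rather than the R used in the study).

        from sklearn.metrics import confusion_matrix

        # Illustrative predictions vs. actual pressure-ulcer labels (1 = PU present).
        y_true = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]
        y_pred = [1, 1, 1, 0, 0, 0, 0, 1, 0, 0]

        tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
        accuracy = (tp + tn) / (tp + tn + fp + fn)
        sensitivity = tp / (tp + fn)   # recall on the positive (PU) class
        specificity = tn / (tn + fp)
        print(accuracy, sensitivity, specificity)  # 0.8, 0.75, ~0.83 here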

  1. The risk factors of laryngeal pathology in Korean adults using a decision tree model.

    Science.gov (United States)

    Byeon, Haewon

    2015-01-01

    The purpose of this study was to identify risk factors affecting laryngeal pathology in the Korean population and to evaluate the derived prediction model. Cross-sectional study. Data were drawn from the 2008 Korea National Health and Nutritional Examination Survey. The subjects were 3135 persons (1508 male and 2114 female) aged 19 years and older living in the community. The independent variables were age, sex, occupation, smoking, alcohol drinking, and self-reported voice problems. A decision tree analysis was done to identify risk factors for a prediction model of laryngeal pathology. The significant risk factors for laryngeal pathology in the decision tree model were age, gender, occupation, smoking, and self-reported voice problem. Four significant paths were identified in the decision tree model for the prediction of laryngeal pathology. High-risk groups for laryngeal pathology included those who self-reported a voice problem, males in their 50s who did not recognize a voice problem, males in their 40s who were not economically active, and male workers aged 19 and over and under 50 or 60 and over who currently smoked. The results of this study suggest that individual risk factors, such as age, sex, occupation, health behavior, and self-reported voice problem, affect the onset of laryngeal pathology in a complex manner. Based on the results of this study, early management of the high-risk groups is needed for the prevention of laryngeal pathology. Copyright © 2015 The Voice Foundation. Published by Elsevier Inc. All rights reserved.

  2. Peripheral Exophytic Oral Lesions: A Clinical Decision Tree

    Directory of Open Access Journals (Sweden)

    Hamed Mortazavi

    2017-01-01

    Full Text Available Diagnosis of peripheral oral exophytic lesions might be quite challenging. This review article aimed to introduce a decision tree for oral exophytic lesions according to their clinical features. General search engines and specialized databases including PubMed, PubMed Central, Medline Plus, EBSCO, Science Direct, Scopus, Embase, and authenticated textbooks were used to find relevant topics by means of keywords such as “oral soft tissue lesion,” “oral tumor like lesion,” “oral mucosal enlargement,” and “oral exophytic lesion.” Related English-language articles published from 1988 to 2016 in both medical and dental journals were appraised. Upon compilation of data, peripheral oral exophytic lesions were categorized into two major groups according to their surface texture: smooth (mesenchymal or nonsquamous epithelium-originated) and rough (squamous epithelium-originated). Lesions with a smooth surface were also categorized into three subgroups according to their general frequency: reactive hyperplastic lesions/inflammatory hyperplasia, salivary gland lesions (nonneoplastic and neoplastic), and mesenchymal lesions (benign and malignant neoplasms). In addition, lesions with a rough surface were summarized in six more common lesions. In total, 29 entities were organized in the form of a decision tree in order to help clinicians establish a logical diagnosis by a stepwise progression method.

  3. A framework for sensitivity analysis of decision trees.

    Science.gov (United States)

    Kamiński, Bogumił; Jakubczyk, Michał; Szufel, Przemysław

    2018-01-01

    In the paper, we consider sequential decision problems with uncertainty, represented as decision trees. Sensitivity analysis is always a crucial element of decision making and in decision trees it often focuses on probabilities. In the stochastic model considered, the user often has only limited information about the true values of probabilities. We develop a framework for performing sensitivity analysis of optimal strategies accounting for this distributional uncertainty. We design this robust optimization approach in an intuitive and not overly technical way, to make it simple to apply in daily managerial practice. The proposed framework allows for (1) analysis of the stability of the expected-value-maximizing strategy and (2) identification of strategies which are robust with respect to pessimistic/optimistic/mode-favoring perturbations of probabilities. We verify the properties of our approach in two cases: (a) probabilities in a tree are the primitives of the model and can be modified independently; (b) probabilities in a tree reflect some underlying, structural probabilities, and are interrelated. We provide a free software tool implementing the methods described.
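    The core of such a sensitivity analysis can be illustrated with a toy decision tree: compute the expected value of each strategy, then re-optimize under perturbed probabilities to see whether the recommended strategy changes. The payoffs, probabilities and perturbation sizes below are made up, and the example ignores the interrelated-probability case treated in the paper.

        def expected_value(p_success, payoff_success, payoff_failure):
            return p_success * payoff_success + (1 - p_success) * payoff_failure

        strategies = {
            "aggressive":   dict(p=0.60, win=100.0, lose=-60.0),
            "conservative": dict(p=1.00, win=35.0, lose=35.0),   # effectively a sure payoff
        }

        for eps in (-0.1, 0.0, 0.1):   # pessimistic / nominal / optimistic perturbation
            best = max(strategies, key=lambda s: expected_value(
                min(max(strategies[s]["p"] + eps, 0.0), 1.0),
                strategies[s]["win"], strategies[s]["lose"]))
            print(f"perturbation {eps:+.1f}: best strategy = {best}")

    In this toy setup the pessimistic perturbation flips the recommendation from the aggressive to the conservative strategy, which is exactly the kind of instability the framework is designed to expose.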

  4. Modeling and Testing Landslide Hazard Using Decision Tree

    Directory of Open Access Journals (Sweden)

    Mutasem Sh. Alkhasawneh

    2014-01-01

    Full Text Available This paper proposes a decision tree model for specifying the importance of 21 factors causing landslides in a wide area of Penang Island, Malaysia. These factors are vegetation cover, distance from the fault line, slope angle, cross curvature, slope aspect, distance from road, geology, diagonal length, longitude curvature, rugosity, plan curvature, elevation, rain perception, soil texture, surface area, distance from drainage, roughness, land cover, general curvature, tangent curvature, and profile curvature. Decision tree models are used for prediction, classification, and factor importance and are usually represented by an easy to interpret tree-like structure. Four models were created using the Chi-square Automatic Interaction Detector (CHAID), Exhaustive CHAID, Classification and Regression Tree (CRT), and Quick-Unbiased-Efficient Statistical Tree (QUEST). Twenty-one factors were extracted using digital elevation models (DEMs) and then used as input variables for the models. A data set of 137570 samples was selected for each variable in the analysis, where 68786 samples represent landslides and 68786 samples represent no landslides. 10-fold cross-validation was employed for testing the models. The highest accuracy was achieved using the Exhaustive CHAID (82.0%) model, compared to the CHAID (81.9%), CRT (75.6%), and QUEST (74.0%) models. Across the four models, five factors were identified as the most important: slope angle, distance from drainage, surface area, slope aspect, and cross curvature.
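    For readers who want to reproduce the general shape of the analysis (tree induction, 10-fold cross-validation, factor importance ranking), the sketch below uses scikit-learn's CART trees on synthetic data; scikit-learn does not implement CHAID or QUEST, and the factor names, sample size and importances here are placeholders rather than the Penang Island results.

        import numpy as np
        from sklearn.datasets import make_classification
        from sklearn.model_selection import cross_val_score
        from sklearn.tree import DecisionTreeClassifier

        # Stand-in for the landslide dataset: rows are terrain cells, columns are
        # causative factors, target is landslide / no landslide.
        X, y = make_classification(n_samples=5000, n_features=21,
                                   n_informative=6, random_state=1)
        factor_names = [f"factor_{i}" for i in range(21)]   # hypothetical names

        tree = DecisionTreeClassifier(max_depth=6, random_state=1)
        print("10-fold CV accuracy:", cross_val_score(tree, X, y, cv=10).mean())

        tree.fit(X, y)
        ranking = sorted(zip(factor_names, tree.feature_importances_),
                         key=lambda t: -t[1])
        for name, importance in ranking[:5]:
            print(f"{name}: importance {importance:.3f}")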

  5. Decision Tree Approach to Discovering Fraud in Leasing Agreements

    Directory of Open Access Journals (Sweden)

    Horvat Ivan

    2014-09-01

    Full Text Available Background: Fraud attempts create large losses for financing subjects in modern economies. At the same time, leasing agreements have become more and more popular as a means of financing objects such as machinery and vehicles, but are more vulnerable to fraud attempts. Objectives: The goal of the paper is to estimate the usability of the data mining approach in discovering fraud in leasing agreements. Methods/Approach: Real-world data from one Croatian leasing firm were used for creating two models for fraud detection in leasing. The decision tree method was used for creating a classification model, and the CHAID algorithm was deployed. Results: The decision tree model has indicated that the object of the leasing agreement had the strongest impact on the probability of fraud. Conclusions: In order to enhance the probability of the developed model, it would be necessary to develop software that would enable automated, quick and transparent retrieval of data from the system, processing according to the rules and displaying the results in multiple categories.

  6. Geospatial analysis based on GIS integrated with LADAR.

    Science.gov (United States)

    Fetterman, Matt R; Freking, Robert; Fernandez-Cull, Christy; Hinkle, Christopher W; Myne, Anu; Relyea, Steven; Winslow, Jim

    2013-10-07

    In this work, we describe multi-layered analyses of a high-resolution broad-area LADAR data set in support of expeditionary activities. High-level features are extracted from the LADAR data, such as the presence and location of buildings and cars, and then these features are used to populate a GIS (geographic information system) tool. We also apply line-of-sight (LOS) analysis to develop a path-planning module. Finally, visualization is addressed and enhanced with a gesture-based control system that allows the user to navigate through the enhanced data set in a virtual immersive experience. This work has operational applications including military, security, disaster relief, and task-based robotic path planning.

  7. Evaluating the Implementation and Effectiveness of GIS-Based Application in Secondary School Geography Lessons

    OpenAIRE

    Ali Ali

    2008-01-01

    The purpose of the study was to investigate the barriers preventing the use of Geographic Information Systems (GIS) in secondary school geography lessons and to determine its effectiveness on students' success. A workshop focusing on ways to implement a GIS-based application in the classroom for 14 teachers from nine high schools was conducted in 2006. The teachers were given GIS software, digital data for an application, and the necessary written documents describing the application. Due to var...

  8. Smart caching based on mobile agent of power WebGIS platform.

    Science.gov (United States)

    Wang, Xiaohui; Wu, Kehe; Chen, Fei

    2013-01-01

    Power information systems are developing in an intensive, platform-based, distributed direction with the expansion of the power grid and the improvement of information technology. To meet this trend, a power WebGIS was designed and developed. In this paper, we first discuss the architecture and functionality of the power WebGIS, and then we study its caching technology in detail, which covers the dynamic display cache model, a caching structure based on mobile agents, and the cache data model. We designed experiments with different data volumes to compare the performance of the WebGIS with the proposed caching model against traditional WebGIS. The experimental results showed that, in the same hardware environment, the response time of the WebGIS both with and without the caching model increased as the data volume grew, while the larger the data, the greater the performance improvement of the WebGIS with the proposed caching model.

  9. Totally Optimal Decision Trees for Monotone Boolean Functions with at Most Five Variables

    KAUST Repository

    Chikalov, Igor

    2013-01-01

    In this paper, we present the empirical results for relationships between time (depth) and space (number of nodes) complexity of decision trees computing monotone Boolean functions, with at most five variables. We use Dagger (a tool for optimization of decision trees and decision rules) to conduct experiments. We show that, for each monotone Boolean function with at most five variables, there exists a totally optimal decision tree which is optimal with respect to both depth and number of nodes.

  10. Classification of soil respiration in areas of sugarcane renewal using decision tree

    Directory of Open Access Journals (Sweden)

    Camila Viana Vieira Farhate

    Full Text Available The use of data mining is a promising alternative to predict soil respiration from correlated variables. Our objective was to build a model using variable selection and decision tree induction to predict different levels of soil respiration, taking into account physical, chemical and microbiological variables of soil as well as precipitation in areas of sugarcane renewal. The original dataset was composed of 19 variables (18 independent variables and one dependent, or response, variable). The target variable refers to soil respiration as the target classification. Due to the large number of variables, a procedure for variable selection was conducted to remove those with low correlation with the target variable. For that purpose, four approaches to variable selection were evaluated: no variable selection, correlation-based feature selection (CFS), the chi-square method (χ2), and Wrapper. To classify soil respiration, we used the decision tree induction technique available in the Weka software package. Our results showed that data mining techniques allow the development of a model for soil respiration classification with an accuracy of 81%, resulting in a knowledge base composed of 27 rules for the prediction of soil respiration. In particular, the Wrapper method for variable selection identified a subset of only five variables out of the 18 available in the original dataset, and they had the following order of influence in determining soil respiration: soil temperature > precipitation > macroporosity > soil moisture > potential acidity.
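    The chi-square branch of the variable-selection step can be sketched as a small scikit-learn pipeline: rescale to non-negative values, keep the highest-scoring predictors, then induce a decision tree. Everything below (data, number of predictors kept, tree depth) is an invented stand-in for the sugarcane dataset, and the original study used Weka rather than scikit-learn.

        import numpy as np
        from sklearn.feature_selection import SelectKBest, chi2
        from sklearn.model_selection import cross_val_score
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import MinMaxScaler
        from sklearn.tree import DecisionTreeClassifier

        # Stand-in data: 18 soil/weather predictors and a soil-respiration class
        # (e.g., low / medium / high).
        rng = np.random.default_rng(42)
        X = rng.normal(size=(300, 18))
        y = rng.integers(0, 3, size=300)

        # Chi-square scoring requires non-negative inputs, hence the scaling step;
        # keep the 5 highest-scoring variables, then build the tree.
        model = make_pipeline(MinMaxScaler(),
                              SelectKBest(chi2, k=5),
                              DecisionTreeClassifier(max_depth=4, random_state=0))
        print("CV accuracy:", cross_val_score(model, X, y, cv=5).mean())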

  11. Quantifying human and organizational factors in accident management using decision trees: the HORAAM method

    International Nuclear Information System (INIS)

    Baumont, G.; Menage, F.; Schneiter, J.R.; Spurgin, A.; Vogel, A.

    2000-01-01

    In the framework of the level 2 Probabilistic Safety Study (PSA 2) project, the Institute for Nuclear Safety and Protection (IPSN) has developed a method for taking into account Human and Organizational Reliability Aspects during accident management. Actions are taken during very degraded installation operations by teams of experts in the French framework of Crisis Organization (ONC). After describing the background of the framework of the Level 2 PSA, the French specific Crisis Organization and the characteristics of human actions in the Accident Progression Event Tree, this paper describes the method developed to introduce in PSA the Human and Organizational Reliability Analysis in Accident Management (HORAAM). This method is based on the Decision Tree method and has gone through a number of steps in its development. The first one was the observation of crisis center exercises, in order to identify the main influence factors (IFs) which affect human and organizational reliability. These IFs were used as headings in the Decision Tree method. Expert judgment was used in order to verify the IFs, to rank them, and to estimate the value of the aggregated factors to simplify the quantification of the tree. A tool based on Mathematica was developed to increase the flexibility and the efficiency of the study

  12. GIS-based poverty and population distribution analysis in China

    Science.gov (United States)

    Cui, Jing; Wang, Yingjie; Yan, Hong

    2009-07-01

    Geographically, poverty status is not only related to socio-economic factors but is also strongly affected by the geographical environment. In this paper, a GIS-based poverty and population distribution analysis method is introduced for revealing their regional differences. More than 100000 poor villages and 592 national key poor counties are chosen for the analysis. The results show that poverty distribution tends to concentrate in most of west China and the mountainous rural areas of mid China. Furthermore, the fifth census data are overlaid on those poor areas in order to capture the internal diversity of their socio-economic characteristics. By overlaying poverty-related socio-economic parameters, such as sex ratio, illiteracy, education level, percentage of ethnic minorities, and family composition, the findings show that poverty distribution is strongly correlated with high illiteracy rates, a high percentage of ethnic minorities, and larger family size.

  13. SAGA GIS based processing of spatial high resolution temperature data

    International Nuclear Information System (INIS)

    Gerlitz, Lars; Bechtel, Benjamin; Kawohl, Tobias; Boehner, Juergen; Zaksek, Klemen

    2013-01-01

    Many climate change impact studies require surface and near-surface temperature data with high spatial and temporal resolution. The resolution of state-of-the-art climate models and remote sensing data is often far too coarse to represent the meso- and microscale distinctions of temperatures. This is particularly the case for regions with a huge variability of topoclimates, such as mountainous or urban areas. Statistical downscaling techniques are promising methods to refine gridded temperature data with limited spatial resolution, particularly due to their low demand for computer capacity. This paper presents two downscaling approaches - one for climate model output and one for remote sensing data. Both are methodically based on the FOSS-GIS platform SAGA. (orig.)

  14. Public participatory GIS in community-based disaster risk reduction

    Directory of Open Access Journals (Sweden)

    Randall B. Kemp

    2008-12-01

    Introducing PPGIS tools into community-based DRR is not a neutral effort. The information and communication technologies (ICT) embedded in GIS can both aid the DRR efforts and impact the community in unintended ways. ICTs may be common in communities engaged in DRR efforts, so the introduction of PPGIS may have minimal impact. What are the societal ramifications, however, of PPGIS methods in DRR efforts when ICTs are a relatively new aspect of a given community? What are the communication methods pertinent to PPGIS in the DRR context? How does the ICT literature address PPGIS methods? The paper addresses these and other influences of ICT on societies prone to natural hazards.

  15. [Quality Suitability of Magnolia officinalis in China Based on GIS].

    Science.gov (United States)

    Shi, Lei; Zhang, Cheng-cheng; Ming, Meng-die; Guo, Lan-ping; Zhu, Shou-dong; Yang, Hong-bing

    2015-04-01

    To study the division of quality suitability ranks for Magnolia officinalis on the basis of an investigation of the correlation between the ratio of magnolol to honokiol in Magnoliae Officinalis Cortex and ecological factors, in order to provide a scientific basis for selecting planting areas for high-quality medicinal material. Based on the quality analysis of samples from 43 sampling points of Magnolia officinalis, the relationship between the ratio of magnolol to honokiol in Magnoliae Officinalis Cortex and ecological factors was analyzed statistically. A geographic information system (GIS) was applied to assess and divide the quality suitability ranks of Magnolia officinalis in China. There were 12 ecological factors mainly affecting the quality of Magnoliae Officinalis Cortex; the ratio of magnolol to honokiol showed obvious regional characteristics. Conclusion: Magnoliae Officinalis Cortex produced in Hubei and Chongqing is a dao-di (geo-authentic) herb.

  16. Improved GIS-based Methods for Traffic Noise Impact Assessment

    DEFF Research Database (Denmark)

    Nielsen, Otto Anker; Bloch, Karsten Sand

    1996-01-01

    When vector-based GIS packages are used for traffic noise impact assessments, the buffer technique is usually employed for the study: 1. for each road segment, buffer zones representing different noise intervals are generated; 2. the buffers from all road segments are smoothed together; and 3. the number of buildings within the buffers is enumerated. This technique provides an inaccurate assessment of the noise diffusion since it does not correct for the barrier and reflection effects of buildings on noise. The paper presents the results from a research project where the traditional noise buffer technique was compared with a new method which includes these corrections. Both methods follow the Common Nordic Noise Calculation Model, although the traditional buffer technique ignores parts of the model. The basis for the work was a digital map of roads and building polygons, combined with a traffic- and road...

  17. GIS Based Application of Advanced Traveler Information System in India

    Science.gov (United States)

    Kumar, P.; Singh, V.

    2012-02-01

    Developed countries like USA, Canada, Japan, UK, Australia and Germany have adopted advanced traveler information technologies expeditiously in comparison to developing countries. But, unlike developed countries, developing countries face considerable financial and framework constraints. Moreover local traffic, roadway, signalization, demographic, topological and social conditions in developing countries are quite different from those in developed countries. In this paper, a comprehensive framework comprising of system architecture, development methodology and salient features of a developed Advanced Traveler Information System (ATIS) for metropolitan cities in developing countries has been discussed. Development of proposed system is based on integration of two well known information technologies viz. Geographic Information Systems (GIS) and World Wide Web (WWW). Combination of these technologies can be utilized to develop an integrated ATIS that targets different types of travelers like private vehicle owners, transit users and casual outside visitors.

  18. The Architecture of a Complex GIS & Spreadsheet Based DSS

    Directory of Open Access Journals (Sweden)

    Dinu Airinei

    2010-01-01

    Full Text Available The decision support applications available on today's market usually combine the decision analysis of historical data, based on On-Line Analytical Processing (OLAP) products or spreadsheet pivot tables, with new reporting facilities such as alerts or key performance indicators available in portal dashboards or in complex spreadsheet-like reports, both corresponding to a new approach to the field called Business Intelligence. Moreover, the geographical features that GIS adds to DSS applications are more and more required by many kinds of businesses; in fact, they are more useful this way than as distinct parts. The paper tries to present a DSS architecture based on the association between such approaches and technologies. The particular examples are meant to support the theoretical arguments and to complete the understanding of the available interaction schemas.

  19. Using histograms to introduce randomization in the generation of ensembles of decision trees

    Science.gov (United States)

    Kamath, Chandrika; Cantu-Paz, Erick; Littau, David

    2005-02-22

    A system for decision tree ensembles that includes a module to read the data, a module to create a histogram, a module to evaluate a potential split according to some criterion using the histogram, a module to select a split point randomly in an interval around the best split, a module to split the data, and a module to combine multiple decision trees in ensembles. The decision tree method includes the steps of reading the data; creating a histogram; evaluating a potential split according to some criterion using the histogram, selecting a split point randomly in an interval around the best split, splitting the data, and combining multiple decision trees in ensembles.
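    The split-selection idea in this record (evaluate candidate thresholds on a histogram, then pick the final threshold at random near the best one) can be sketched for a single feature as follows. This is only an interpretation of the abstract, not the patented implementation; the bin count, the impurity criterion and the jitter interval are all assumptions.

        import numpy as np

        def _gini(labels):
            """Gini impurity of an integer label array."""
            p = np.bincount(labels) / len(labels)
            return 1.0 - (p ** 2).sum()

        def random_histogram_split(x, y, n_bins=32, jitter_bins=1, rng=None):
            """Score candidate thresholds at histogram bin edges, then draw the
            final threshold uniformly from a small interval around the best edge."""
            rng = np.random.default_rng() if rng is None else rng
            edges = np.histogram_bin_edges(x, bins=n_bins)
            best_i, best_score = None, np.inf
            for i in range(1, len(edges) - 1):
                left, right = y[x <= edges[i]], y[x > edges[i]]
                if len(left) == 0 or len(right) == 0:
                    continue
                score = (len(left) * _gini(left) + len(right) * _gini(right)) / len(y)
                if score < best_score:
                    best_i, best_score = i, score
            if best_i is None:                        # degenerate case: no valid split
                return float(edges[len(edges) // 2])
            lo = edges[max(best_i - jitter_bins, 0)]
            hi = edges[min(best_i + jitter_bins, len(edges) - 1)]
            return float(rng.uniform(lo, hi))         # randomized threshold near the best split

        x = np.random.default_rng(0).normal(size=500)
        y = (x > 0.3).astype(int)
        print("chosen threshold:", round(random_histogram_split(x, y, rng=np.random.default_rng(1)), 3))

    Randomizing the threshold around the best split, rather than always taking the best split itself, is what injects diversity when many such trees are combined into an ensemble.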

  20. GIS-based Integration of Interdisciplinary Ecological Data to Detect ...

    African Journals Online (AJOL)

    Key words: mangrove, satellite imagery, environment, aerial photography, GIS, Kenya. Historic environmental, faunal, floral and socioeconomic data of Gazi Bay in coastal Kenya were collated and integrated into a GIS environment, and data on impacts due to various factors were then related to remotely sensed data.

  1. Identification of Biomarkers for Esophageal Squamous Cell Carcinoma Using Feature Selection and Decision Tree Methods

    Directory of Open Access Journals (Sweden)

    Chun-Wei Tung

    2013-01-01

    Full Text Available Esophageal squamous cell cancer (ESCC) is one of the most common fatal human cancers. The identification of biomarkers for early detection could be a promising strategy to decrease mortality. Previous studies utilized microarray techniques to identify more than one hundred genes; however, it is desirable to identify a small set of biomarkers for clinical use. This study proposes a sequential forward feature selection algorithm to design decision tree models for discriminating ESCC from normal tissues. Two potential biomarkers, RUVBL1 and CNIH, were identified and validated based on two publicly available microarray datasets. To test the discrimination ability of the two biomarkers, 17 pairs of expression profiles of ESCC and normal tissues from Taiwanese male patients were measured by using microarray techniques. The classification accuracies of the two biomarkers in all three datasets were higher than 90%. Interpretable decision tree models were constructed to analyze the expression patterns of the two biomarkers. RUVBL1 was consistently overexpressed in all three datasets, although we found inconsistent CNIH expression, possibly affected by the diverse major risk factors for ESCC across different areas.

  2. Bayesian additive decision trees of biomarker by treatment interactions for predictive biomarker detection and subgroup identification.

    Science.gov (United States)

    Zhao, Yang; Zheng, Wei; Zhuo, Daisy Y; Lu, Yuefeng; Ma, Xiwen; Liu, Hengchang; Zeng, Zhen; Laird, Glen

    2017-10-11

    Personalized medicine, or tailored therapy, has been an active and important topic in recent medical research. Many methods have been proposed in the literature for predictive biomarker detection and subgroup identification. In this article, we propose a novel decision tree-based approach applicable in randomized clinical trials. We model the prognostic effects of the biomarkers using additive regression trees and the biomarker-by-treatment effect using a single regression tree. A Bayesian approach is utilized to periodically revise the split variables and the split rules of the decision trees, which provides a better overall fit. A Gibbs sampler is implemented in the MCMC procedure, which updates the prognostic trees and the interaction tree separately. We use the posterior distribution of the interaction tree to construct the predictive scores of the biomarkers and to identify the subgroup where the treatment is superior to the control. Numerical simulations show that our proposed method performs well under various settings compared to existing methods. We also demonstrate an application of our method in a real clinical trial.

  3. Computational Prediction of Blood-Brain Barrier Permeability Using Decision Tree Induction

    Directory of Open Access Journals (Sweden)

    Jörg Huwyler

    2012-08-01

    Full Text Available Predicting blood-brain barrier (BBB) permeability is essential to drug development, as a molecule cannot exhibit pharmacological activity within the brain parenchyma without first transiting this barrier. Understanding the process of permeation, however, is complicated by a combination of both limited passive diffusion and active transport. Our aim here was to establish predictive models for BBB drug permeation that include both active and passive transport. A database of 153 compounds was compiled using in vivo surface permeability product (logPS) values in rats as a quantitative parameter for BBB permeability. The open source Chemical Development Kit (CDK) was used to calculate physico-chemical properties and descriptors. Predictive computational models were implemented by machine learning paradigms (decision tree induction) on both descriptor sets. Models with a corrected classification rate (CCR) of 90% were established. Mechanistic insight into BBB transport was provided by an Ant Colony Optimization (ACO)-based binary classifier analysis to identify the most predictive chemical substructures. Decision trees revealed descriptors of lipophilicity (aLogP) and charge (polar surface area), which were also previously described in models of passive diffusion. However, measures of molecular geometry and connectivity were found to be related to an active drug transport component.

  4. Data mining usage in health care management: literature survey and decision tree application

    Directory of Open Access Journals (Sweden)

    Dijana Ćosić

    2008-02-01

    Full Text Available Aim: To show the benefits of data mining in health care management. In this example, we are going to show a way to raise awareness of women in terms of contraceptive methods they use (do not use). Methods: The goal of the data mining analysis was to determine if there are common characteristics of the women according to their choice of contraception (a typical classification problem). Therefore, we decided to use decision trees. We have generated a CHAID model in "Statistica", based on the database that was formed as a result of an Indonesian research that was conducted in 1987. The sample contains married women who were either not pregnant or did not know if they were pregnant at the time of the interview. The database consists of 1473 cases. Also, an extensive internet search was conducted in order to detect a number of articles cited in scientific databases published on the subject of data mining in health care management. Results: It has shown that the most important variable in case of women's choice of contraceptive methods is a husband's profession. Also we retrieved 221 articles published on the application of data mining in health care. Conclusion: The goal of the paper is achieved in two ways: first, retrieving 221 articles published on the subject we have proved the benefits of data mining in the health care management. Second, the decision tree method is successfully applied in explanation of women's choice of contraceptive methods.

  5. Decision trees to characterise the roles of permeability and solubility on the prediction of oral absorption.

    Science.gov (United States)

    Newby, Danielle; Freitas, Alex A; Ghafourian, Taravat

    2015-01-27

    Oral absorption of compounds depends on many physiological, physicochemical and formulation factors. Two important properties that govern oral absorption are in vitro permeability and solubility, which are commonly used as indicators of human intestinal absorption. Despite this, the nature and exact characteristics of the relationship between these parameters are not well understood. In this study a large dataset of human intestinal absorption was collated along with in vitro permeability, aqueous solubility, melting point, and maximum dose for the same compounds. The dataset allowed a permeability threshold to be established objectively to predict high or low intestinal absorption. Using this permeability threshold, classification decision trees incorporating a solubility-related parameter such as experimental or predicted solubility, or the melting point based absorption potential (MPbAP), along with structural molecular descriptors were developed and validated to predict oral absorption class. The decision trees were able to determine the individual roles of permeability and solubility in the oral absorption process. Poorly permeable compounds with high solubility show low intestinal absorption, whereas poorly water-soluble compounds with high or low permeability may have high intestinal absorption provided that they have certain molecular characteristics such as a small polar surface or specific topology. Copyright © 2015 Elsevier Masson SAS. All rights reserved.

  6. Electronic Nose Odor Classification with Advanced Decision Tree Structures

    Directory of Open Access Journals (Sweden)

    S. Guney

    2013-09-01

    Full Text Available Electronic nose (e-nose) is an electronic device which can measure chemical compounds in air and consequently classify different odors. In this paper, an e-nose device consisting of 8 different gas sensors was designed and constructed. Using this device, 104 different experiments involving 11 different odor classes (moth, angelica root, rose, mint, polis, lemon, rotten egg, egg, garlic, grass, and acetone) were performed. The main contribution of this paper is the finding that using the chemical domain knowledge it is possible to train an accurate odor classification system. The domain knowledge about chemical compounds is represented by a decision tree whose nodes are composed of classifiers such as Support Vector Machines and k-Nearest Neighbor. The overall accuracy achieved with the proposed algorithm and the constructed e-nose device was 97.18%. Training and testing data sets used in this paper are published online.
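
    The "classifier in each tree node" idea can be illustrated with a hand-built two-level tree whose root is an SVM separating two coarse odour groups and whose leaves are k-NN classifiers for the fine classes; the sensor readings, group assignment and class subset below are hypothetical placeholders, not the published e-nose data.

        # Sketch: a two-level decision tree whose nodes are themselves classifiers
        # (SVM at the root, k-NN at the leaves). All data and groupings are synthetic.
        import numpy as np
        from sklearn.svm import SVC
        from sklearn.neighbors import KNeighborsClassifier

        rng = np.random.default_rng(2)
        n_per_class, n_sensors = 20, 8
        classes = ["rose", "mint", "lemon", "acetone"]          # hypothetical subset
        X = np.vstack([rng.normal(loc=i, scale=0.5, size=(n_per_class, n_sensors))
                       for i in range(len(classes))])
        y = np.repeat(classes, n_per_class)
        group = np.where(np.isin(y, ["rose", "mint"]), "floral", "volatile")  # assumed grouping

        root = SVC(kernel="rbf").fit(X, group)                  # root node: coarse split
        leaves = {g: KNeighborsClassifier(n_neighbors=3).fit(X[group == g], y[group == g])
                  for g in ["floral", "volatile"]}

        def predict(x):
            g = root.predict(x.reshape(1, -1))[0]               # route sample down the tree
            return leaves[g].predict(x.reshape(1, -1))[0]

        print(predict(X[0]), predict(X[-1]))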

  7. An overview of decision tree applied to power systems

    DEFF Research Database (Denmark)

    Liu, Leo; Rather, Zakir Hussain; Chen, Zhe

    2013-01-01

    The rapidly growing volume of available data in electric power systems motivates the adoption of data mining techniques in the emerging field of power system data analytics. The mainstream data mining algorithm applied to power systems, the Decision Tree (DT), also known as the Classification And Regression Tree (CART), has gained increasing interest because of its high performance in terms of computational efficiency, uncertainty manageability, and interpretability. This paper presents an overview of a variety of DT applications to power systems for better interfacing of power systems with data analytics. The fundamental knowledge of the CART algorithm is also introduced, followed by examples of both classification trees and regression trees with the help of a case study on security assessment of the Danish power system.

  8. Career Path Suggestion using String Matching and Decision Trees

    Science.gov (United States)

    Nagpal, Akshay; P. Panda, Supriya

    2015-05-01

    High school and college graduates often struggle to decide which courses to major in so as to reach their target career. In this paper, we worked on suggesting a career path to a graduate to reach his/her dream career given the current educational status. Firstly, we collected the career data of professionals and academicians from various career fields and compiled the data set by using the necessary information from the data. Further, this was used as the basis to suggest the most appropriate career path for the person given his/her current educational status. Decision trees and string matching algorithms were employed to suggest the appropriate career path for a person. Finally, the results were analysed, pointing to further improvements in the model.

  9. Toward the Decision Tree for Inferring Requirements Maturation Types

    Science.gov (United States)

    Nakatani, Takako; Kondo, Narihito; Shirogane, Junko; Kaiya, Haruhiko; Hori, Shozo; Katamine, Keiichi

    Requirements are elicited step by step during the requirements engineering (RE) process. However, some types of requirements are elicited completely only after the scheduled requirements elicitation process is finished. Such a situation is regarded as problematic. In our study, the difficulties of eliciting various kinds of requirements are observed by components. We refer to the components as observation targets (OTs) and introduce the term “requirements maturation,” which means when and how requirements are elicited completely in the project. Requirements maturation is discussed for physical and logical OTs. OTs viewed from a logical viewpoint are called logical OTs, e.g. quality requirements. The requirements of physical OTs, e.g., modules, components, subsystems, etc., include functional and non-functional requirements. They are influenced by their requesters' environmental changes, as well as developers' technical changes. In order to infer the requirements maturation period of each OT, we need to know how much these factors influence the OTs' requirements maturation. Based on the observation of actual past projects, we defined the PRINCE (Pre Requirements Intelligence Net Consideration and Evaluation) model. It aims to guide developers in their observation of the requirements maturation of OTs. We quantitatively analyzed actual cases with their requirements elicitation process and extracted essential factors that influence requirements maturation. The results of interviews with project managers were analyzed with WEKA, a data mining system, from which the decision tree was derived. This paper introduces the PRINCE model and the category of logical OTs to be observed. The decision tree that helps developers infer the maturation type of an OT is also described. We evaluate the tree through real projects and discuss its ability to infer requirements maturation types.

  10. Coronal Mass Ejection Data Clustering and Visualization of Decision Trees

    Science.gov (United States)

    Ma, Ruizhe; Angryk, Rafal A.; Riley, Pete; Filali Boubrahimi, Soukaina

    2018-05-01

    Coronal mass ejections (CMEs) can be categorized as either “magnetic clouds” (MCs) or non-MCs. Features such as a large magnetic field, low plasma-beta, and low proton temperature suggest that a CME event is also an MC event; however, so far there is neither a definitive method nor an automatic process to distinguish the two. Human labeling is time-consuming, and results can fluctuate owing to the imprecise definition of such events. In this study, we approach the problem of MC and non-MC distinction from a time series data analysis perspective and show how clustering can shed some light on this problem. Although many algorithms exist for traditional data clustering in the Euclidean space, they are not well suited for time series data. Problems such as inadequate distance measure, inaccurate cluster center description, and lack of intuitive cluster representations need to be addressed for effective time series clustering. Our data analysis in this work is twofold: clustering and visualization. For clustering we compared the results from the popular hierarchical agglomerative clustering technique to a distance density clustering heuristic we developed previously for time series data clustering. In both cases, dynamic time warping will be used for similarity measure. For classification as well as visualization, we use decision trees to aggregate single-dimensional clustering results to form a multidimensional time series decision tree, with averaged time series to present each decision. In this study, we achieved modest accuracy and, more importantly, an intuitive interpretation of how different parameters contribute to an MC event.
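
    A minimal sketch of the baseline combination mentioned above, dynamic time warping as the similarity measure feeding hierarchical agglomerative clustering, is shown below; the one-dimensional synthetic series stand in for in situ CME parameters and the cluster count is an assumption.

        # Sketch: DTW distances between short series, then agglomerative clustering.
        # Series are synthetic stand-ins; the target of two clusters is assumed.
        import numpy as np
        from scipy.cluster.hierarchy import linkage, fcluster
        from scipy.spatial.distance import squareform

        def dtw(a, b):
            """Classic O(len(a)*len(b)) dynamic time warping distance."""
            n, m = len(a), len(b)
            D = np.full((n + 1, m + 1), np.inf)
            D[0, 0] = 0.0
            for i in range(1, n + 1):
                for j in range(1, m + 1):
                    cost = abs(a[i - 1] - b[j - 1])
                    D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
            return D[n, m]

        rng = np.random.default_rng(3)
        series = [np.sin(np.linspace(0, 3, 50)) + rng.normal(0, 0.1, 50) for _ in range(5)]
        series += [rng.normal(0, 0.3, 50) for _ in range(5)]     # "smooth" vs "noisy" profiles

        n = len(series)
        dist = np.zeros((n, n))
        for i in range(n):
            for j in range(i + 1, n):
                dist[i, j] = dist[j, i] = dtw(series[i], series[j])

        labels = fcluster(linkage(squareform(dist), method="average"), t=2, criterion="maxclust")
        print(labels)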

  11. A fuzzy decision tree method for fault classification in the steam generator of a pressurized water reactor

    International Nuclear Information System (INIS)

    Zio, Enrico; Baraldi, Piero; Popescu, Irina Crenguta

    2009-01-01

    This paper extends a method previously introduced by the authors for building a transparent fault classification algorithm by combining fuzzy clustering, fuzzy logic and decision tree techniques. The baseline method transforms an opaque, fuzzy clustering-based classification model into a fuzzy logic inference model based on linguistic rules which can be represented by a decision tree formalism. The classification model thereby obtained is transparent in that it allows direct interpretation and inspection of the model. An extension in the procedure for the development of the fuzzy logic inference model is introduced to allow the treatment of more complicated cases, e.g. split and overlapping clusters. The corresponding computational tool developed relies on a number of parameters which can be tuned by the user to optimally compromise the level of transparency of the classification process and its efficiency. A numerical application is presented with regard to fault classification in the Steam Generator of a Pressurized Water Reactor.

  12. Development of geophysical and geochemical data processing software based on component GIS

    International Nuclear Information System (INIS)

    Ke Dan; Yu Xiang; Wu Qubo; Han Shaoyang; Li Xi

    2013-01-01

    Based on component GIS and mixed programming techniques, software that combines basic GIS functions with conventional and unconventional processing methods for regional geophysical and geochemical data is designed and developed. The software has many advantages, such as a friendly interface, ease of use and practical functions, and provides a useful platform for regional geophysical and geochemical data processing. (authors)

  13. CLOUD BASED WEB 3D GIS TAIWAN PLATFORM

    Directory of Open Access Journals (Sweden)

    W.-F. Tsai

    2012-09-01

    Full Text Available This article presents the status of the web 3D GIS platform, which has been developed in the National Applied Research Laboratories. The purpose is to develop a global earth observation 3D GIS platform for applications to disaster monitoring and assessment in Taiwan. For quick response to preliminary and detailed assessment after a natural disaster occurs, the web 3D GIS platform is useful to access, transfer, integrate, display and analyze the multi-scale huge data following the international OGC standard. The framework of cloud service for data warehousing management and efficiency enhancement using VMWare is illustrated in this article.

  14. Identification of Potential Sources of Mercury (Hg) in Farmland Soil Using a Decision Tree Method in China.

    Science.gov (United States)

    Zhong, Taiyang; Chen, Dongmei; Zhang, Xiuying

    2016-11-09

    Identification of the sources of soil mercury (Hg) on the provincial scale is helpful for enacting effective policies to prevent further contamination and to take reclamation measures. The natural and anthropogenic sources of Hg in Chinese farmland soil, and their contributions, were identified based on a decision tree method. The results showed that the concentrations of Hg in parent materials were most strongly associated with the general spatial distribution pattern of Hg concentration on a provincial scale. The decision tree analysis achieved an 89.70% total accuracy in simulating the influence of human activities on the additions of Hg in farmland soil. Human activities, for example the production of coke, application of fertilizers, discharge of wastewater, discharge of solid waste, and the production of non-ferrous metals, were the main external sources of a large amount of Hg in the farmland soil.

  15. A greedy algorithm for construction of decision trees for tables with many-valued decisions - A comparative study

    KAUST Repository

    Azad, Mohammad

    2013-11-25

    In the paper, we study a greedy algorithm for construction of decision trees. This algorithm is applicable to decision tables with many-valued decisions where each row is labeled with a set of decisions. For a given row, we should find a decision from the set attached to this row. Experimental results for data sets from the UCI Machine Learning Repository and randomly generated tables are presented. We make a comparative study of the depth and average depth of the constructed decision trees for the proposed approach and the approach based on generalized decisions. The obtained results show that the proposed approach can be useful from the point of view of knowledge representation and algorithm construction.
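
    One way to picture the setting is a tiny greedy builder over a table whose rows carry sets of admissible decisions: a leaf is emitted as soon as some decision is common to all remaining rows, otherwise an attribute is chosen greedily. The attribute-selection heuristic below is illustrative and not necessarily the impurity measure used in the paper.

        # Sketch: greedy decision tree construction for a table with many-valued
        # decisions (each row labeled with a *set* of admissible decisions).
        from collections import Counter

        def common_decision(rows):
            sets = [d for _, d in rows]
            shared = set.intersection(*sets) if sets else set()
            return next(iter(shared)) if shared else None

        def build(rows, attrs):
            dec = common_decision(rows)
            if dec is not None or not attrs:
                return ("leaf", dec)
            # greedy heuristic: pick the attribute whose most frequent value covers the most rows
            best = max(attrs, key=lambda a: Counter(x[a] for x, _ in rows).most_common(1)[0][1])
            node = {}
            for v in {x[best] for x, _ in rows}:
                subset = [(x, d) for x, d in rows if x[best] == v]
                node[v] = build(subset, [a for a in attrs if a != best])
            return ("split", best, node)

        # toy table: attribute vectors plus admissible decision sets
        rows = [((0, 1), {1, 2}), ((0, 0), {1}), ((1, 1), {2, 3}), ((1, 0), {3})]
        print(build(rows, attrs=[0, 1]))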

  16. Calculation of the number of branches of multi-valued decision trees in computer aided importance rank of parameters

    Directory of Open Access Journals (Sweden)

    Tiszbierek Agnieszka

    2017-01-01

    Full Text Available A digital computer programme supporting the time-consuming process of selecting the importance rank of construction and operation parameters by stating optimum sets is based on the Quine-McCluskey algorithm for minimizing individual partial multi-valued logic functions. An example with real data, calculated by means of the programme, showed that among the obtained optimum sets there were some that had different numbers of real branches when presented on the multi-valued logic decision tree. This motivated the idea of adding another functionality to the programme: a module calculating the number of branches of the real multi-valued logic decision trees representing the optimum sets chosen by the programme. This paper presents the idea and the method for developing such a module, the number of real branches for each of the optimum sets indicated by the programme, as well as the calculation process.

  17. A greedy algorithm for construction of decision trees for tables with many-valued decisions - A comparative study

    KAUST Repository

    Azad, Mohammad; Chikalov, Igor; Moshkov, Mikhail; Zielosko, Beata

    2013-01-01

    In the paper, we study a greedy algorithm for construction of decision trees. This algorithm is applicable to decision tables with many-valued decisions where each row is labeled with a set of decisions. For a given row, we should find a decision from the set attached to this row. Experimental results for data sets from the UCI Machine Learning Repository and randomly generated tables are presented. We make a comparative study of the depth and average depth of the constructed decision trees for the proposed approach and the approach based on generalized decisions. The obtained results show that the proposed approach can be useful from the point of view of knowledge representation and algorithm construction.

  18. Decision Tree Algorithm-Generated Single-Nucleotide Polymorphism Barcodes of rbcL Genes for 38 Brassicaceae Species Tagging.

    Science.gov (United States)

    Yang, Cheng-Hong; Wu, Kuo-Chuan; Chuang, Li-Yeh; Chang, Hsueh-Wei

    2018-01-01

    DNA barcode sequences are accumulating in large data sets. A barcode is generally a sequence longer than 1000 base pairs and generates a computational burden. Although the DNA barcode was originally envisioned as a straightforward species tag, the identification usage of barcode sequences is rarely emphasized currently. Single-nucleotide polymorphism (SNP) association studies provide the idea that SNPs may be the ideal target of feature selection to discriminate between different species. We hypothesize that SNP-based barcodes may be more effective than the full length of DNA barcode sequences for species discrimination. To address this issue, we tested a ribulose diphosphate carboxylase (rbcL) SNP barcoding (RSB) strategy using a decision tree algorithm. After alignment and trimming, 31 SNPs were discovered in the rbcL sequences from 38 Brassicaceae plant species. In the decision tree construction, these SNPs were computed to set up the decision rule to assign the sequences into 2 groups level by level. After algorithm processing, 37 nodes and 31 loci were required for discriminating 38 species. Finally, the sequence tags consisting of 31 rbcL SNP barcodes were identified for discriminating 38 Brassicaceae species based on the decision tree-selected SNP pattern using the RSB method. Taken together, this study provides the rationale that the SNP aspect of the DNA barcode for the rbcL gene is a useful and effective sequence for tagging 38 Brassicaceae species.

  19. Development of a web based GIS for health facilities mapping ...

    African Journals Online (AJOL)

    Hilary Mushonga

    Key Words: Spatial Decision Support System, Web GIS, Mapping, Health geography. Introduction: Health geography is an area of medical research that incorporates geographic techniques into the study of ...

  20. The new decision tree for the evaluation of pesticide leaching from soils

    NARCIS (Netherlands)

    Linden AMA van der; Boesten JJTI; Cornelese AA; Kruijne R; Leistra M; Linders JBHJ; Pol JW; Tiktak A; Verschoor AJ; Alterra; CTB; LDL; SEC; LER; Alterra

    2004-01-01

    The Dutch decision tree on leaching from soil has been re-designed to be more in line with EU guidelines on the assessment of the leaching potential of substances. The new decision tree explicitly defines reasonable worst-case conditions as the 90th percentile of the area to which a substance is

  1. An Improved Decision Tree for Predicting a Major Product in Competing Reactions

    Science.gov (United States)

    Graham, Kate J.

    2014-01-01

    When organic chemistry students encounter competing reactions, they are often overwhelmed by the task of evaluating multiple factors that affect the outcome of a reaction. The use of a decision tree is a useful tool to teach students to evaluate a complex situation and propose a likely outcome. Specifically, a decision tree can help students…

  2. A new approach to enhance the performance of decision tree for classifying gene expression data.

    Science.gov (United States)

    Hassan, Md; Kotagiri, Ramamohanarao

    2013-12-20

    Gene expression data classification is a challenging task due to the large dimensionality and very small number of samples. The decision tree is one of the popular machine learning approaches to address such classification problems. However, existing decision tree algorithms use a single gene feature at each node to split the data into its child nodes and hence might suffer from poor performance, especially when classifying gene expression datasets. By using a new decision tree algorithm where each node of the tree consists of more than one gene, we enhance the classification performance of traditional decision tree classifiers. Our method selects suitable genes that are combined using a linear function to form a derived composite feature. To determine the structure of the tree we use the area under the Receiver Operating Characteristic curve (AUC). Experimental analysis demonstrates higher classification accuracy using the new decision tree compared to other existing decision trees in the literature. We experimentally compare the effect of our scheme against other well-known decision tree techniques. Experiments show that our algorithm can substantially boost the classification performance of the decision tree.

  3. Constructing multi-labelled decision trees for junction design using the predicted probabilities

    NARCIS (Netherlands)

    Bezembinder, Erwin M.; Wismans, Luc J. J.; Van Berkum, Eric C.

    2017-01-01

    In this paper, we evaluate the use of traditional decision tree algorithms CRT, CHAID and QUEST to determine a decision tree which can be used to predict a set of (Pareto optimal) junction design alternatives (e.g. signal or roundabout) for a given traffic demand pattern and available space. This is

  4. Decision-Tree Models of Categorization Response Times, Choice Proportions, and Typicality Judgments

    Science.gov (United States)

    Lafond, Daniel; Lacouture, Yves; Cohen, Andrew L.

    2009-01-01

    The authors present 3 decision-tree models of categorization adapted from T. Trabasso, H. Rollins, and E. Shaughnessy (1971) and use them to provide a quantitative account of categorization response times, choice proportions, and typicality judgments at the individual-participant level. In Experiment 1, the decision-tree models were fit to…

  5. A Decision Tree for Psychology Majors: Supplying Questions as Well as Answers.

    Science.gov (United States)

    Poe, Retta E.

    1988-01-01

    Outlines the development of a psychology careers decision tree to help faculty advise students plan their program. States that students using the decision tree may benefit by learning more about their career options and by acquiring better question-asking skills. (GEA)

  6. A Branch-and-Price approach to find optimal decision trees

    NARCIS (Netherlands)

    Firat, M.; Crognier, Guillaume; Gabor, Adriana; Zhang, Y.

    2018-01-01

    In the Artificial Intelligence (AI) field, decision trees have gained importance due to their effectiveness in solving classification and regression problems. Recently, the literature has formulated the problem of finding optimal decision trees as Mixed Integer Linear Programming (MILP) models. This

  7. Environmental applications based on GIS and GRID technologies

    Science.gov (United States)

    Demontis, R.; Lorrai, E.; Marrone, V. A.; Muscas, L.; Spanu, V.; Vacca, A.; Valera, P.

    2009-04-01

    In the last decades, the collection and use of environmental data has enormously increased in a wide range of applications. Simultaneously, the explosive development of information technology and its ever wider data accessibility have made it possible to store and manipulate huge quantities of data. In this context, the GRID approach is emerging worldwide as a tool allowing to provision a computational task with administratively-distant resources. The aim of this paper is to present three environmental applications (Land Suitability, Desertification Risk Assessment, Georesources and Environmental Geochemistry) foreseen within the AGISGRID (Access and query of a distributed GIS/Database within the GRID infrastructure, http://grida3.crs4.it/enginframe/agisgrid/index.xml) activities of the GRIDA3 (Administrator of sharing resources for data analysis and environmental applications, http://grida3.crs4.it) project. This project, co-funded by the Italian Ministry of research, is based on the use of shared environmental data through GRID technologies and accessible by a WEB interface, aimed at public and private users in the field of environmental management and land use planning. The technologies used for AGISGRID include: - the client-server-middleware iRODS™ (Integrated Rule-Oriented Data System) (https://irods.org); - the EnginFrame system (http://www.nice-italy.com/main/index.php?id=32), the grid portal that supplies a frame to make available, via Intranet/Internet, the developed GRID applications; - the software GIS GRASS (Geographic Resources Analysis Support System) (http://grass.itc.it); - the relational database PostgreSQL (http://www.posgresql.org) and the spatial database extension PostGis; - the open source multiplatform Mapserver (http://mapserver.gis.umn.edu), used to represent the geospatial data through typical WEB GIS functionalities. Three GRID nodes are directly involved in the applications: the application workflow is implemented at the CRS4 (Pula

  8. “GEOHeritage” - GIS Based Application for Movable Heritage

    Directory of Open Access Journals (Sweden)

    Albina Moscicka

    2011-12-01

    Full Text Available The paper presents the results of a research project, "A methodology for mapping movable heritage". This project, financed by the Polish Ministry of Science and Higher Education in 2008-2010, was realized by the Institute of Geodesy and Cartography in cooperation with the Research and Academic Computer Network (portal Polska.pl), the Central Archives of Historical Records and the Department of Art History of the Wroclaw University. The idea of the project was to simplify access to digital movable cultural heritage by the use of spatial information. The main aspect of the project was to use a Geographic Information System (GIS), as a technology and as a tool, to integrate different digital collections, present their content in one space and provide online access to them from one common level: an online map. The essence of the research was to present movable monuments on the online map as multi-spatial objects. The basis of this assumption is that most monuments, especially movable ones, can have several places in geographical space connected with them (several different spatial relations). As a rule, archival documents were created in one place, describe another, and today can be kept in places far away from the place where they were prepared; what is more, parts of the same collection can be kept in different archives. Moreover, a single document can be connected by relations (typological, thematic, temporal, spatial) with the same or another one. The reason for this is that documents concerning various places are housed in the same archive, various documents can present the same place, or the place where a particular document was created can be the place where another is housed. In the project the basic source material was digital collections of original records. Their metadata, defined in the international standards of monuments' description, were used for connecting digital monuments with the geographic space

  9. Landslides along Highways: GIS-based Inventory and Planning Issues

    Science.gov (United States)

    Jaeger, Ann-Kathrin; Klose, Martin; Damm, Bodo

    2015-04-01

    Highways rank as critical transportation infrastructures that are at risk of landslides in many areas worldwide (e.g., Hungr et al., 1999; Bhandary et al., 2013). Safe and affordable operations of traffic routes constitute the two main criteria for transportation planning in landslide-prone terrain. A right balancing of these often conflicting priorities requires profound knowledge of landslide locations across highway networks and the costs caused by landslides in the past (e.g., Saha et al., 2005). Much of the direct costs affecting transportation departments relate to capital investments for landslide repair or mitigation and operational expenditures in connection with maintenance works. A systematic collection and inventory of such data sets combined with an acquisition of hazard information on vulnerable road sections is still rarely the case in engineering practice. This is despite significant cost impacts and budgetary burdens, especially in peripheral mountain areas where financial resources are naturally limited (e.g., Klose et al., 2014). The present contribution introduces a regional inventory of landslides along highways in the Harz Mountains, NW Germany. As subset of a landslide database for the entire country, this focused GIS-based inventory has been compiled in close collaboration with the Lower Saxony Department of Transportation. The inventory includes data sets gathered by archive studies and relies on high-quality information sources such as maintenance protocols, geotechnical reports, and documents from tendering, controlling, and accounting. A mapping tool in ArcGIS format is used to specify and visualize road sections affected by landslides. This spatial information on hazard exposure is complemented by narrative risk profiles for landslide sites showing a long history of damage events. By summarizing the occurrence dates of landslides, the associated damages, and the types and costs of repair or prevention, such risk profiles are useful to

  10. Gis-based procedures for hydropower potential spotting

    Energy Technology Data Exchange (ETDEWEB)

    Larentis, Dante G.; Collischonn, Walter; Tucci, Carlos E.M. [Instituto de Pesquisas Hidraulicas da UFRGS, Av. Bento Goncalves, 9500, CEP 91501-970, Caixa Postal 15029, Porto Alegre, RS (Brazil); Olivera, Francisco (Texas A and M University, Zachry Department of Civil Engineering 3136 TAMU, College Station, TX 77843-3136, US)

    2010-10-15

    The increasing demand for energy, especially from renewable and sustainable sources, spurs the development of small hydropower plants and encourages investment in new survey studies. Preliminary hydropower survey studies usually carry huge uncertainties about the technical, economic and environmental feasibility of the undeveloped potential. This paper presents a methodology for large-scale survey of hydropower potential sites to be applied in the inception phase of hydroelectric development planning. The sequence of procedures to identify hydropower sites is based on remote sensing and regional streamflow data and was automated within a GIS-based computational program: Hydrospot. The program allows spotting more potential sites along the drainage network than it would be possible in a traditional survey study, providing different types of dam-powerhouse layouts and two types (operating modes) of projects: run-of-the-river and storage projects. Preliminary results from its applications in a hydropower-developed basin in Brazil have shown Hydrospot's limitations and potentialities in giving support to the mid-to-long-term planning of the electricity sector. (author)

  11. Development and challenges of using web-based GIS for health applications

    DEFF Research Database (Denmark)

    Gao, Sheng; Mioc, Darka; Boley, Harold

    2011-01-01

    Web-based GIS is increasingly used in health applications. It has the potential to provide critical information in a timely manner, support health care policy development, and educate decision makers and the general public. This paper describes the trends and recent development of health applications using a Web-based GIS. Recent progress on the database storage and geospatial Web Services has advanced the use of Web-based GIS for health applications, with various proprietary software, open source software, and Application Programming Interfaces (APIs) available. Current challenges in applying ... care planning, and public health participation ...

  12. GIS-based hydrogeological databases and groundwater modelling

    Science.gov (United States)

    Gogu, Radu Constantin; Carabin, Guy; Hallet, Vincent; Peters, Valerie; Dassargues, Alain

    2001-12-01

    Reliability and validity of groundwater analysis strongly depend on the availability of large volumes of high-quality data. Putting all data into a coherent and logical structure supported by a computing environment helps ensure validity and availability and provides a powerful tool for hydrogeological studies. A hydrogeological geographic information system (GIS) database that offers facilities for groundwater-vulnerability analysis and hydrogeological modelling has been designed in Belgium for the Walloon region. Data from five river basins, chosen for their contrasting hydrogeological characteristics, have been included in the database, and a set of applications that have been developed now allow further advances. Interest is growing in the potential for integrating GIS technology and groundwater simulation models. A "loose-coupling" tool was created between the spatial-database scheme and the groundwater numerical model interface GMS (Groundwater Modelling System). Following time and spatial queries, the hydrogeological data stored in the database can be easily used within different groundwater numerical models. Résumé: The validity and reproducibility of the analysis of an aquifer depend closely on the availability of large quantities of very good quality data. Putting all the data into a coherent and logical structure supported by the necessary software helps to ensure validity and availability and provides a powerful tool for hydrogeological studies. A database for a hydrogeological geographic information system (GIS) offering all the facilities for groundwater vulnerability analysis and hydrogeological modelling has been established in Belgium for the Walloon region. The data from five river basins, chosen for their differing hydrogeological characteristics, have been introduced into the database, and a set of

  13. Chi-squared Automatic Interaction Detection Decision Tree Analysis of Risk Factors for Infant Anemia in Beijing, China.

    Science.gov (United States)

    Ye, Fang; Chen, Zhi-Hua; Chen, Jie; Liu, Fang; Zhang, Yong; Fan, Qin-Ying; Wang, Lin

    2016-05-20

    In the past decades, studies on infant anemia have mainly focused on rural areas of China. With the increasing heterogeneity of population in recent years, available information on infant anemia is inconclusive in large cities of China, especially with comparison between native residents and floating population. This population-based cross-sectional study was implemented to determine the anemic status of infants as well as the risk factors in a representative downtown area of Beijing. As useful methods to build a predictive model, Chi-squared automatic interaction detection (CHAID) decision tree analysis and logistic regression analysis were introduced to explore risk factors of infant anemia. A total of 1091 infants aged 6-12 months together with their parents/caregivers living at Heping Avenue Subdistrict of Beijing were surveyed from January 1, 2013 to December 31, 2014. The prevalence of anemia was 12.60% with a range of 3.47%-40.00% in different subgroup characteristics. The CHAID decision tree model has demonstrated multilevel interaction among risk factors through stepwise pathways to detect anemia. Besides the three predictors identified by logistic regression model including maternal anemia during pregnancy, exclusive breastfeeding in the first 6 months, and floating population, CHAID decision tree analysis also identified the fourth risk factor, the maternal educational level, with higher overall classification accuracy and larger area below the receiver operating characteristic curve. The infant anemic status in metropolis is complex and should be carefully considered by the basic health care practitioners. CHAID decision tree analysis has demonstrated a better performance in hierarchical analysis of population with great heterogeneity. Risk factors identified by this study might be meaningful in the early detection and prompt treatment of infant anemia in large cities.
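
    The core CHAID step, choosing the categorical predictor whose cross-tabulation with the outcome is most significant by a chi-squared test, can be sketched as follows; the survey variables and outcome below are synthetic stand-ins for the study's data.

        # Sketch: CHAID-style split selection via chi-squared tests on cross-tabulations.
        # Variables and outcome are synthetic stand-ins for the survey data above.
        import numpy as np
        import pandas as pd
        from scipy.stats import chi2_contingency

        rng = np.random.default_rng(4)
        n = 1091
        df = pd.DataFrame({
            "maternal_anemia":     rng.integers(0, 2, n),
            "excl_breastfeeding":  rng.integers(0, 2, n),
            "floating_pop":        rng.integers(0, 2, n),
            "maternal_edu":        rng.integers(0, 3, n),
        })
        # synthetic outcome loosely driven by maternal anemia
        p = 0.08 + 0.10 * df["maternal_anemia"]
        df["infant_anemia"] = rng.random(n) < p

        def best_chaid_split(data, outcome, predictors):
            pvals = {}
            for col in predictors:
                table = pd.crosstab(data[col], data[outcome])
                _, pval, _, _ = chi2_contingency(table)
                pvals[col] = pval
            return min(pvals, key=pvals.get), pvals

        split_var, pvals = best_chaid_split(df, "infant_anemia",
                                            ["maternal_anemia", "excl_breastfeeding",
                                             "floating_pop", "maternal_edu"])
        print(split_var, {k: round(v, 4) for k, v in pvals.items()})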

  14. Prevalence and Determinants of Preterm Birth in Tehran, Iran: A Comparison between Logistic Regression and Decision Tree Methods.

    Science.gov (United States)

    Amini, Payam; Maroufizadeh, Saman; Samani, Reza Omani; Hamidi, Omid; Sepidarkish, Mahdi

    2017-06-01

    Preterm birth (PTB) is a leading cause of neonatal death and the second biggest cause of death in children under five years of age. The objective of this study was to determine the prevalence of PTB and its associated factors using logistic regression and decision tree classification methods. This cross-sectional study was conducted on 4,415 pregnant women in Tehran, Iran, from July 6-21, 2015. Data were collected by a researcher-developed questionnaire through interviews with mothers and review of their medical records. To evaluate the accuracy of the logistic regression and decision tree methods, several indices such as sensitivity, specificity, and the area under the curve were used. The PTB rate was 5.5% in this study. The logistic regression outperformed the decision tree for the classification of PTB based on risk factors. Logistic regression showed that multiple pregnancies, mothers with preeclampsia, and those who conceived with assisted reproductive technology had an increased risk for PTB ( p logistic regression model for the classification of risk groups for PTB.

  15. Decision tree for accurate infection timing in individuals newly diagnosed with HIV-1 infection.

    Science.gov (United States)

    Verhofstede, Chris; Fransen, Katrien; Van Den Heuvel, Annelies; Van Laethem, Kristel; Ruelle, Jean; Vancutsem, Ellen; Stoffels, Karolien; Van den Wijngaert, Sigi; Delforge, Marie-Luce; Vaira, Dolores; Hebberecht, Laura; Schauvliege, Marlies; Mortier, Virginie; Dauwe, Kenny; Callens, Steven

    2017-11-29

    There is today no gold standard method to accurately define the time passed since infection at HIV diagnosis. Infection timing and incidence measurement is however essential to better monitor the dynamics of local epidemics and the effect of prevention initiatives. Three methods for infection timing were evaluated using 237 serial samples from documented seroconversions and 566 cross-sectional samples from newly diagnosed patients: identification of antibodies against the HIV p31 protein in INNO-LIA, SediaTM BED CEIA and SediaTM LAg-Avidity EIA. A multi-assay decision tree for infection timing was developed. Clear differences in recency window between BED CEIA, LAg-Avidity EIA and p31 antibody presence were observed, with a switch from recent to long-term infection a median of 169.5, 108.0 and 64.5 days after collection of the pre-seroconversion sample, respectively. BED showed high reliability for identification of long-term infections while LAg-Avidity is highly accurate for identification of recent infections. Using BED as the initial assay to identify long-term infections, and LAg-Avidity as a confirmatory assay for those classified as recent infections by BED, exploits the strengths of both while reducing the workload. The short recency window of p31 antibodies allows very early infections to be discriminated from early infections based on this marker. BED recent infection results not confirmed by LAg-Avidity are considered to reflect a period more distant from the infection time. False recency predictions in this group can be minimized by elimination of patients with a CD4 count of less than 100 cells/mm3 or without p31 antibodies. For the 566 cross-sectional samples, the outcome of the decision tree confirmed the infection timing based on the results of all 3 markers but reduced the overall cost from 13.2 USD to 5.2 USD per sample. A step-wise multi-assay decision tree allows accurate timing of the HIV infection at diagnosis at affordable effort and cost and can be an important

  16. Comparison of rule induction, decision trees and formal concept analysis approaches for classification

    Science.gov (United States)

    Kotelnikov, E. V.; Milov, V. R.

    2018-05-01

    Rule-based learning algorithms have higher transparency and are easier to interpret in comparison with neural networks and deep learning algorithms. These properties make it possible to effectively use such algorithms to solve descriptive data mining tasks. The choice of an algorithm also depends on its ability to solve predictive tasks. The article compares the quality of solutions to binary and multiclass classification problems based on experiments with six datasets from the UCI Machine Learning Repository. The authors investigate three algorithms: Ripper (rule induction), C4.5 (decision trees), and In-Close (formal concept analysis). The results of the experiments show that In-Close demonstrates the best classification quality in comparison with Ripper and C4.5; however, the latter two generate more compact rule sets.

  17. Sistem Pakar Untuk Diagnosa Penyakit Kehamilan Menggunakan Metode Dempster-Shafer Dan Decision Tree

    Directory of Open Access Journals (Sweden)

    joko popo minardi

    2016-01-01

    Full Text Available Dempster-Shafer theory is a mathematical theory of evidence based on belief functions and plausible reasoning, which is used to combine separate pieces of information. Dempster-Shafer theory is an alternative to traditional probability theory for the mathematical representation of uncertainty. In the diagnosis of diseases of pregnancy, the information obtained from the patient is sometimes incomplete; with the Dempster-Shafer method and expert system rules, an incomplete combination of symptoms can still yield an appropriate diagnosis, while the decision tree is used as a decision support tool for tracking disease symptoms. This research aims to develop an expert system that can perform a diagnosis of diseases of pregnancy using the Dempster-Shafer method, which can produce a trust value for a disease diagnosis. Based on diagnostic testing of the Dempster-Shafer method and the expert system, the resulting accuracy was 76%. Keywords: Expert system; Diseases of pregnancy; Dempster Shafer
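
    The combination mechanism referred to above is Dempster's rule; a small sketch for two pieces of symptom evidence follows, with a frame of discernment and mass values that are purely illustrative and not taken from the cited system.

        # Sketch: Dempster's rule of combination for two mass functions.
        # Diagnoses and mass values are hypothetical placeholders.
        from itertools import product

        def dempster_combine(m1, m2):
            """Combine two mass functions given as {frozenset: mass} dictionaries."""
            combined, conflict = {}, 0.0
            for (a, ma), (b, mb) in product(m1.items(), m2.items()):
                inter = a & b
                if inter:
                    combined[inter] = combined.get(inter, 0.0) + ma * mb
                else:
                    conflict += ma * mb
            k = 1.0 - conflict                       # normalisation constant
            return {s: v / k for s, v in combined.items()}

        theta = frozenset({"preeclampsia", "anemia"})        # frame of discernment (assumed)
        m_symptom1 = {frozenset({"preeclampsia"}): 0.6, theta: 0.4}
        m_symptom2 = {frozenset({"preeclampsia"}): 0.5, frozenset({"anemia"}): 0.3, theta: 0.2}
        print(dempster_combine(m_symptom1, m_symptom2))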

  18. Integrated environmental decision support tool based on GIS technology

    International Nuclear Information System (INIS)

    Doctor, P.G.; O'Neil, T.K.; Sackschewsky, M.R.; Becker, J.M.; Rykiel, E.J.; Walters, T.B.; Brandt, C.A.; Hall, J.A.

    1995-01-01

    Environmental restoration and management decisions facing the US Department of Energy require balancing trade-offs between diverse land uses and impacts over multiple spatial and temporal scales. Many types of environmental data have been collected for the Hanford Site and the Columbia River in Washington State over the past fifty years. Pacific Northwest National Laboratory (PNNL) is integrating these data into a Geographic Information System (GIS) based computer decision support tool. This tool provides a comprehensive and concise description of the current environmental landscape that can be used to evaluate the ecological and monetary trade-offs between future land use, restoration and remediation options before action is taken. Ecological impacts evaluated include effects to individual species of concern and habitat loss and fragmentation. Monetary impacts include those associated with habitat mitigation. The tool is organized as both a browsing tool for educational purposes, and as a framework that leads a project manager through the steps needed to be in compliance with environmental requirements

  19. Cell Based GIS as Cellular Automata for Disaster Spreading Predictions and Required Data Systems

    Directory of Open Access Journals (Sweden)

    Kohei Arai

    2013-03-01

    Full Text Available A method for prediction and simulation based on a Cell Based Geographic Information System (GIS) as Cellular Automata (CA) is proposed together with the required data systems, in particular metasearch engine usage, in a unified way. It is confirmed that the proposed cell based GIS as CA makes flexible use of the attribute information attached to each cell, in concert with location information, and works for disaster spreading simulation and prediction.
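
    A minimal sketch of a cell-based spreading simulation in this spirit is shown below: each raster cell carries an attribute layer and the disturbance spreads to neighbours with a probability scaled by it. Grid size, the spread rule and the wrap-around boundary are simplifying assumptions.

        # Sketch: cellular-automata style spreading over a raster grid whose cells carry
        # an attribute layer (here a flammability weight). All parameters are assumptions.
        import numpy as np

        rng = np.random.default_rng(5)
        size, steps = 50, 20
        flammability = rng.random((size, size))          # per-cell attribute layer
        state = np.zeros((size, size), dtype=bool)       # False = intact, True = affected
        state[size // 2, size // 2] = True               # ignition cell

        for _ in range(steps):
            # a cell has an affected neighbour if any 4-neighbour shifted state overlaps it
            # (np.roll wraps around, i.e. a toroidal boundary, kept for simplicity)
            neighbours = (np.roll(state, 1, 0) | np.roll(state, -1, 0) |
                          np.roll(state, 1, 1) | np.roll(state, -1, 1))
            catch = neighbours & (rng.random((size, size)) < 0.4 * flammability)
            state |= catch

        print("cells affected after", steps, "steps:", int(state.sum()))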

  20. Depth data research of GIS based on clustering analysis algorithm

    Science.gov (United States)

    Xiong, Yan; Xu, Wenli

    2018-03-01

    GIS data have a spatial distribution. Geographic data have both spatial characteristics and attribute characteristics, and also change with time. Therefore, the amount of data is very large. Nowadays, many industries and departments in society are using GIS. However, without a proper data analysis and mining scheme, GIS will not exert its maximum effectiveness and a lot of data will be wasted. In this paper, we use the geographic information demand of a national security department as the experimental object, combining the characteristics of GIS data, taking into account the characteristics of time, space, attributes and so on, and using a clustering analysis algorithm. We further study the mining scheme for depth data and derive the algorithm model. This algorithm can automatically classify sample data and then carry out exploratory analysis. The research shows that the algorithm model and the information mining scheme can quickly find hidden depth information from the surface data of GIS, thus improving the efficiency of the security department. This algorithm can also be extended to other fields.
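
    One concrete reading of the clustering step is density-based clustering of GIS point records by location; the sketch below uses synthetic coordinates and assumed DBSCAN parameters rather than the department's actual data.

        # Sketch: density-based clustering of GIS point records by longitude/latitude.
        # Hotspot centres, noise and eps/min_samples are illustrative assumptions.
        import numpy as np
        from sklearn.cluster import DBSCAN

        rng = np.random.default_rng(6)
        hotspots = np.array([[116.3, 39.9], [121.5, 31.2], [113.3, 23.1]])  # lon, lat centres
        points = np.vstack([c + rng.normal(0, 0.05, size=(100, 2)) for c in hotspots])
        noise = np.column_stack([rng.uniform(110, 125, 30), rng.uniform(20, 42, 30)])
        X = np.vstack([points, noise])

        labels = DBSCAN(eps=0.2, min_samples=10).fit_predict(X)
        print("clusters found:", len(set(labels)) - (1 if -1 in labels else 0),
              "| noise points:", int((labels == -1).sum()))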

  1. Prediction of adverse drug reactions using decision tree modeling.

    Science.gov (United States)

    Hammann, F; Gutmann, H; Vogt, N; Helma, C; Drewe, J

    2010-07-01

    Drug safety is of great importance to public health. The detrimental effects of drugs not only limit their application but also cause suffering in individual patients and evoke distrust of pharmacotherapy. For the purpose of identifying drugs that could be suspected of causing adverse reactions, we present a structure-activity relationship analysis of adverse drug reactions (ADRs) in the central nervous system (CNS), liver, and kidney, and also of allergic reactions, for a broad variety of drugs (n = 507) from the Swiss drug registry. Using decision tree induction, a machine learning method, we determined the chemical, physical, and structural properties of compounds that predispose them to causing ADRs. The models had high predictive accuracies (78.9-90.2%) for allergic, renal, CNS, and hepatic ADRs. We show the feasibility of predicting complex end-organ effects using simple models that involve no expensive computations and that can be used (i) in the selection of the compound during the drug discovery stage, (ii) to understand how drugs interact with the target organ systems, and (iii) for generating alerts in postmarketing drug surveillance and pharmacovigilance.

  2. Discovering Patterns in Brain Signals Using Decision Trees

    Directory of Open Access Journals (Sweden)

    Narusci S. Bastos

    2016-01-01

    Full Text Available Even with emerging technologies, such as Brain-Computer Interface (BCI) systems, understanding how our brains work is a very difficult challenge, so we propose to use a data mining technique to help us in this task. As a case study, we analyzed the brain behaviour of blind people and sighted people during a spatial activity. There is a common belief that blind people compensate for their lack of vision using the other senses. If an object is given to sighted people and we ask them to identify it, the sense of vision will probably be the most decisive one. If the same experiment were repeated with blind people, they would have to use other senses to identify the object. In this work, we propose a methodology that uses decision trees (DT) to investigate the difference in how the brains of blind people and sighted people react to a spatial problem. We chose the DT algorithm because it can discover patterns in the brain signal, and its output is human-interpretable. Our results show that using DT to analyze brain signals can help us to understand the brain's behaviour.

  3. Efficient OCR using simple features and decision trees with backtracking

    International Nuclear Information System (INIS)

    Abuhaiba, Ibrahim S.I.

    2006-01-01

    In this paper, it is shown that it is adequate to use simple and easy-to-compute features, such as those we call sliced horizontal and vertical projections, to solve the OCR problem for machine-printed documents. Recognition is achieved using a decision tree supported with backtracking, smoothing, row and column cropping, and other additions to increase the success rate. Symbols from the Times New Roman typeface are used to train our system. Activating backtracking, smoothing and cropping achieved more than a 98% success rate for a recognition time below 30 ms per character. The recognition algorithm was exposed to a hard test by polluting the original dataset with additional artificial noise and could maintain a high success rate and low error rate for highly polluted images, which is a result of backtracking, smoothing, and row and column cropping. Results indicate that we can depend on simple features and hints to reliably recognize characters. The error rate can be decreased by increasing the size of the training dataset. The recognition time can be reduced by using some programming optimization techniques and more powerful computers. (author)
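
    The feature idea, "sliced" horizontal and vertical projections (row and column ink counts pooled into a few slices) feeding a decision tree, can be sketched as follows on tiny synthetic glyphs; the backtracking, smoothing and cropping steps of the real system are not reproduced.

        # Sketch: sliced horizontal/vertical projection features plus a decision tree,
        # on two toy synthetic glyphs. Slice count and glyph shapes are assumptions.
        import numpy as np
        from sklearn.tree import DecisionTreeClassifier

        def sliced_projections(glyph, slices=4):
            h = glyph.sum(axis=1).astype(float)          # horizontal projection (per row)
            v = glyph.sum(axis=0).astype(float)          # vertical projection (per column)
            pool = lambda p: [chunk.sum() for chunk in np.array_split(p, slices)]
            return np.array(pool(h) + pool(v))

        # two toy 8x8 "characters": a vertical bar and a horizontal bar
        bar_v = np.zeros((8, 8), int); bar_v[:, 3:5] = 1
        bar_h = np.zeros((8, 8), int); bar_h[3:5, :] = 1
        X = np.array([sliced_projections(g) for g in (bar_v, bar_h, bar_v, bar_h)])
        y = ["I", "-", "I", "-"]

        clf = DecisionTreeClassifier().fit(X, y)
        print(clf.predict([sliced_projections(bar_v)]))   # -> ['I']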

  4. Modeling flash floods in ungauged mountain catchments of China: A decision tree learning approach for parameter regionalization

    Science.gov (United States)

    Ragettli, S.; Zhou, J.; Wang, H.; Liu, C.

    2017-12-01

    Flash floods in small mountain catchments are one of the most frequent causes of loss of life and property from natural hazards in China. Hydrological models can be a useful tool for the anticipation of these events and the issuing of timely warnings. Since sub-daily streamflow information is unavailable for most small basins in China, one of the main challenges is finding appropriate parameter values for simulating flash floods in ungauged catchments. In this study, we use decision tree learning to explore parameter set transferability between different catchments. For this purpose, the physically-based, semi-distributed rainfall-runoff model PRMS-OMS is set up for 35 catchments in ten Chinese provinces. Hourly data from more than 800 storm runoff events are used to calibrate the model and evaluate the performance of parameter set transfers between catchments. For each catchment, 58 catchment attributes are extracted from several data sets available for whole China. We then use a data mining technique (decision tree learning) to identify catchment similarities that can be related to good transfer performance. Finally, we use the splitting rules of decision trees for finding suitable donor catchments for ungauged target catchments. We show that decision tree learning allows to optimally utilize the information content of available catchment descriptors and outperforms regionalization based on a conventional measure of physiographic-climatic similarity by 15%-20%. Similar performance can be achieved with a regionalization method based on spatial proximity, but decision trees offer flexible rules for selecting suitable donor catchments, not relying on the vicinity of gauged catchments. This flexibility makes the method particularly suitable for implementation in sparsely gauged environments. We evaluate the probability to detect flood events exceeding a given return period, considering measured discharge and PRMS-OMS simulated flows with regionalized parameters

  5. Small Core, Big Network: A Comprehensive Approach to GIS Teaching Practice Based on Digital Three-Dimensional Campus Reconstruction

    Science.gov (United States)

    Cheng, Liang; Zhang, Wen; Wang, Jiechen; Li, Manchun; Zhong, Lishan

    2014-01-01

    Geographic information science (GIS) features a wide range of disciplines and has broad applicability. Challenges associated with rapidly developing GIS technology and the currently limited teaching and practice materials hinder universities from cultivating highly skilled GIS graduates. Based on the idea of "small core, big network," a…

  6. Georeferenced Population Datasets of Mexico (GEO-MEX): Raster Based GIS Coverage of Mexican Population

    Data.gov (United States)

    National Aeronautics and Space Administration — The Raster Based GIS Coverage of Mexican Population is a gridded coverage (1 x 1 km) of Mexican population. The data were converted from vector into raster. The...

  7. A hybrid model using decision tree and neural network for credit scoring problem

    Directory of Open Access Journals (Sweden)

    Amir Arzy Soltan

    2012-08-01

    Full Text Available Nowadays credit scoring is an important issue for financial and monetary organizations that has a substantial impact on the reduction of customer attraction risks. Identification of high-risk customers can reduce incurred costs. Accurate classification of customers and low type 1 and type 2 errors have been investigated in many studies. The primary objective of this paper is to develop a new method, which chooses the best neural network architecture from a single-hidden-layer MLP, a multiple-hidden-layer MLP, an RBFN and decision trees, and ensembles them with voting methods. The proposed method is run on an Australian credit dataset and data from a private bank in Iran called the Export Development Bank of Iran, and the results are used to support decisions that lower customer attraction risks.
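
    The hybrid idea can be sketched as a soft-voting ensemble over a decision tree, an MLP and an RBF-kernel model (standing in for the paper's RBFN), trained on synthetic data of roughly the Australian credit dataset's shape; architectures and parameters below are assumptions.

        # Sketch: soft-voting ensemble of a decision tree, an MLP and an RBF-kernel SVM
        # (used here as a stand-in for an RBF network). Data are synthetic.
        from sklearn.datasets import make_classification
        from sklearn.ensemble import VotingClassifier
        from sklearn.model_selection import cross_val_score
        from sklearn.neural_network import MLPClassifier
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.svm import SVC
        from sklearn.tree import DecisionTreeClassifier

        X, y = make_classification(n_samples=690, n_features=14, random_state=0)
        ensemble = VotingClassifier(
            estimators=[
                ("tree", DecisionTreeClassifier(max_depth=5, random_state=0)),
                ("mlp", make_pipeline(StandardScaler(),
                                      MLPClassifier(hidden_layer_sizes=(16,),
                                                    max_iter=2000, random_state=0))),
                ("rbf", make_pipeline(StandardScaler(), SVC(kernel="rbf", probability=True))),
            ],
            voting="soft",
        )
        print("CV accuracy:", cross_val_score(ensemble, X, y, cv=5).mean().round(3))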

  8. Integrating individual trip planning in energy efficiency – Building decision tree models for Danish fisheries

    DEFF Research Database (Denmark)

    Bastardie, Francois; Nielsen, J. Rasmus; Andersen, Bo Sølgaard

    2013-01-01

    efficiency for the value of catch per unit of fuel consumed is analysed by merging the questionnaire, logbook and VMS (vessel monitoring system) information. Logic decision trees and conditional behaviour probabilities are established from the responses of fishermen regarding a range of sequential......-intensive but efficient vessels conducting pelagic or industrial fishing are more inclined to base their decision on fish price only, while numerous smaller and less efficient vessels conducting demersal mixed or crustacean fishery usually consider other flexible factors, e.g., the potential for a large catch, weather...... the adaptations of individual fishermen to resource availability dynamics, increasing fuel prices, changes in regulations, and the consequences of socioeconomic external pressures on harvested stocks. A new methodology is described here to obtain quantitative information on the fishermen’s micro-scale decisions...

  9. Multi-test decision tree and its application to microarray data classification.

    Science.gov (United States)

    Czajkowski, Marcin; Grześ, Marek; Kretowski, Marek

    2014-05-01

    A desirable property of tools used to investigate biological data is easy-to-understand models and predictive decisions. Decision trees are particularly promising in this regard due to their comprehensible nature, which resembles the hierarchical process of human decision making. However, existing algorithms for learning decision trees have a tendency to underfit gene expression data. The main aim of this work is to improve the performance and stability of decision trees with only a small increase in their complexity. We propose a multi-test decision tree (MTDT); our main contribution is the application of several univariate tests in each non-terminal node of the decision tree. We also search for alternative, lower-ranked features in order to obtain more stable and reliable predictions. Experimental validation was performed on several real-life gene expression datasets. Comparison results with eight classifiers show that MTDT has a statistically significantly higher accuracy than popular decision tree classifiers, and it was highly competitive with ensemble learning algorithms. The proposed solution managed to outperform its baseline algorithm on 14 datasets by an average of 6%. A study performed on one of the datasets showed that the discovered genes used in the MTDT classification model are supported by biological evidence in the literature. This paper introduces a new type of decision tree which is more suitable for solving biological problems. MTDTs are relatively easy to analyze and much more powerful in modeling high-dimensional microarray data than their popular counterparts. Copyright © 2014 Elsevier B.V. All rights reserved.

  10. Extensions of Dynamic Programming: Decision Trees, Combinatorial Optimization, and Data Mining

    KAUST Repository

    Hussain, Shahid

    2016-01-01

    This thesis is devoted to the development of extensions of dynamic programming to the study of decision trees. The considered extensions allow us to make multi-stage optimization of decision trees relative to a sequence of cost functions, to count the number of optimal trees, and to study relationships: cost vs cost and cost vs uncertainty for decision trees by construction of the set of Pareto-optimal points for the corresponding bi-criteria optimization problem. The applications include study of totally optimal (simultaneously optimal relative to a number of cost functions) decision trees for Boolean functions, improvement of bounds on complexity of decision trees for diagnosis of circuits, study of time and memory trade-off for corner point detection, study of decision rules derived from decision trees, creation of new procedure (multi-pruning) for construction of classifiers, and comparison of heuristics for decision tree construction. Part of these extensions (multi-stage optimization) was generalized to well-known combinatorial optimization problems: matrix chain multiplication, binary search trees, global sequence alignment, and optimal paths in directed graphs.

  11. Extensions of Dynamic Programming: Decision Trees, Combinatorial Optimization, and Data Mining

    KAUST Repository

    Hussain, Shahid

    2016-07-10

    This thesis is devoted to the development of extensions of dynamic programming to the study of decision trees. The considered extensions allow us to make multi-stage optimization of decision trees relative to a sequence of cost functions, to count the number of optimal trees, and to study relationships: cost vs cost and cost vs uncertainty for decision trees by construction of the set of Pareto-optimal points for the corresponding bi-criteria optimization problem. The applications include study of totally optimal (simultaneously optimal relative to a number of cost functions) decision trees for Boolean functions, improvement of bounds on complexity of decision trees for diagnosis of circuits, study of time and memory trade-off for corner point detection, study of decision rules derived from decision trees, creation of new procedure (multi-pruning) for construction of classifiers, and comparison of heuristics for decision tree construction. Part of these extensions (multi-stage optimization) was generalized to well-known combinatorial optimization problems: matrix chain multiplication, binary search trees, global sequence alignment, and optimal paths in directed graphs.

  12. Identification of Water Bodies in a Landsat 8 OLI Image Using a J48 Decision Tree.

    Science.gov (United States)

    Acharya, Tri Dev; Lee, Dong Ha; Yang, In Tae; Lee, Jae Kang

    2016-07-12

    Water bodies are essential to humans and other forms of life. Identification of water bodies can be useful in various ways, including estimation of water availability, demarcation of flooded regions, change detection, and so on. In past decades, Landsat satellite sensors have been used for land use classification and water body identification. Due to the introduction of a New Operational Land Imager (OLI) sensor on Landsat 8 with a high spectral resolution and improved signal-to-noise ratio, the quality of imagery sensed by Landsat 8 has improved, enabling better characterization of land cover and increased data size. Therefore, it is necessary to explore the most appropriate and practical water identification methods that take advantage of the improved image quality and use the fewest inputs based on the original OLI bands. The objective of the study is to explore the potential of a J48 decision tree (JDT) in identifying water bodies using reflectance bands from Landsat 8 OLI imagery. J48 is an open-source decision tree. The test site for the study is in the Northern Han River Basin, which is located in Gangwon province, Korea. Training data with individual bands were used to develop the JDT model and later applied to the whole study area. The performance of the model was statistically analysed using the kappa statistic and area under the curve (AUC). The results were compared with five other known water identification methods using a confusion matrix and related statistics. Almost all the methods showed high accuracy, and the JDT was successfully applied to the OLI image using only four bands, where the new additional deep blue band of OLI was found to have the third highest information gain. Thus, the JDT can be a good method for water body identification based on images with improved resolution and increased size.
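
    The record applies Weka's J48 (an open-source C4.5 implementation) to four OLI reflectance bands and reports kappa and AUC. As a rough illustration of that workflow, and not the authors' code, the sketch below trains a CART-style decision tree on synthetic two-class pixel reflectances and scores it with the same two statistics; the band means, noise levels and tree depth are invented for demonstration only.

```python
# Illustrative sketch: water vs. non-water classification from four band reflectances,
# scored with the kappa statistic and AUC. All numbers below are invented stand-ins.
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import cohen_kappa_score, roc_auc_score

rng = np.random.default_rng(0)
n = 1000
# Four OLI-like reflectance features per pixel (e.g., deep blue, green, NIR, SWIR1).
water = rng.normal([0.10, 0.08, 0.03, 0.02], 0.02, size=(n, 4))
land = rng.normal([0.12, 0.12, 0.25, 0.20], 0.04, size=(n, 4))
X = np.vstack([water, land])
y = np.array([1] * n + [0] * n)          # 1 = water, 0 = non-water

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
tree = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X_tr, y_tr)

pred = tree.predict(X_te)
proba = tree.predict_proba(X_te)[:, 1]
print("kappa:", cohen_kappa_score(y_te, pred))
print("AUC:  ", roc_auc_score(y_te, proba))
```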

  13. Using Decision Trees to Detect and Isolate Simulated Leaks in the J-2X Rocket Engine

    Science.gov (United States)

    Schwabacher, Mark A.; Aguilar, Robert; Figueroa, Fernando F.

    2009-01-01

    The goal of this work was to use data-driven methods to automatically detect and isolate faults in the J-2X rocket engine. It was decided to use decision trees, since they tend to be easier to interpret than other data-driven methods. The decision tree algorithm automatically "learns" a decision tree by performing a search through the space of possible decision trees to find one that fits the training data. The particular decision tree algorithm used is known as C4.5. Simulated J-2X data from a high-fidelity simulator developed at Pratt & Whitney Rocketdyne and known as the Detailed Real-Time Model (DRTM) was used to "train" and test the decision tree. Fifty-six DRTM simulations were performed for this purpose, with different leak sizes, different leak locations, and different times of leak onset. To make the simulations as realistic as possible, they included simulated sensor noise, and included a gradual degradation in both fuel and oxidizer turbine efficiency. A decision tree was trained using 11 of these simulations, and tested using the remaining 45 simulations. In the training phase, the C4.5 algorithm was provided with labeled examples of data from nominal operation and data including leaks in each leak location. From the data, it "learned" a decision tree that can classify unseen data as having no leak or having a leak in one of the five leak locations. In the test phase, the decision tree produced very low false alarm rates and low missed detection rates on the unseen data. It had very good fault isolation rates for three of the five simulated leak locations, but it tended to confuse the remaining two locations, perhaps because a large leak at one of these two locations can look very similar to a small leak at the other location.
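
    A minimal stand-in for the workflow described above, under the assumption that scikit-learn's CART tree can play the role of C4.5 and that random sensor vectors can play the role of DRTM simulations: each example is either nominal or a leak at one of five locations, and the confusion matrix exposes false alarms, missed detections and isolation confusion. All sensor channels, signatures and sample counts are invented.

```python
# Hedged sketch, not the authors' code: classify "no leak" vs. leak location
# from simulated sensor channels with a decision tree.
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import confusion_matrix

rng = np.random.default_rng(1)
classes = ["nominal", "leak1", "leak2", "leak3", "leak4", "leak5"]
# Each leak class shifts a different subset of the 8 simulated sensor channels.
shifts = np.zeros((6, 8))
for k in range(1, 6):
    shifts[k, k - 1] = 1.5        # leak k mostly perturbs channel k-1
    shifts[k, k] = 0.5            # and, more weakly, channel k

def simulate(label, m):
    """Draw m noisy sensor vectors for the given class."""
    return rng.normal(shifts[label], 1.0, size=(m, 8))

X = np.vstack([simulate(k, 300) for k in range(6)])
y = np.repeat(np.arange(6), 300)

X_train, y_train = X[::2], y[::2]          # half the runs for training
X_test, y_test = X[1::2], y[1::2]          # the rest held out

tree = DecisionTreeClassifier(min_samples_leaf=20, random_state=0).fit(X_train, y_train)
# Rows = true class, columns = predicted class; off-diagonal entries in the first row
# are false alarms, in the first column missed detections, elsewhere isolation confusion.
print(confusion_matrix(y_test, tree.predict(X_test)))
```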

  14. GIS-based integrated groundwater potential assessment of Osun ...

    African Journals Online (AJOL)

    Topographic parameters including surface curvatures, slope and elevation were derived from SPOT DEM and used to generate landform and relief maps for the study area. Spatial Analysis Module of ArcGIS 10.3 software was used to generate lineament proximity, hydrolineament density and drainage density maps for the ...

  15. GIS based procedure of cumulative environmental impact assessment.

    Science.gov (United States)

    Balakrishna Reddy, M; Blah, Baiantimon

    2009-07-01

    The scale and spatial limits of an impact assessment study on a GIS platform are two very important factors that can have a bearing on the validity and quality of the assessment. While the effect of scale has been documented and is well understood, no significant study has been carried out on spatial considerations in impact assessment studies employing GIS techniques. A novel impact assessment technique demonstrated through a GIS approach, termed here the 'spatial data integrated GIS impact assessment method' (SGIAM), is described in this paper. The technique makes the fundamental presumption that the importance of environmental impacts depends, among other things, on the spatial distribution of the effects of the proposed action and of the affected receptors in a study area. For each environmental component considered (e.g., air quality), impact indices are calculated through aggregation of impact indicators, which are measures of the severity of the impact. The presence and spread of environmental descriptors are quantified through modeling techniques and mapped. The environmental impact index is calculated from data exported from ArcINFO, thus giving significant weight to spatial data in the impact assessment exercise.

  16. Development of a web based GIS for health facilities mapping ...

    African Journals Online (AJOL)

    Around the world health professionals and authorities, in many cases, do not have the ability to visualize health related spatial information to make timely decisions. The high cost of deploying a desktop Geographical Information System (GIS) for Public Health management coupled with the need for specialised training in ...

  17. Creating ensembles of oblique decision trees with evolutionary algorithms and sampling

    Science.gov (United States)

    Cantu-Paz, Erick [Oakland, CA; Kamath, Chandrika [Tracy, CA

    2006-06-13

    A decision tree system that is part of a parallel object-oriented pattern recognition system, which in turn is part of an object oriented data mining system. A decision tree process includes the step of reading the data. If necessary, the data is sorted. A potential split of the data is evaluated according to some criterion. An initial split of the data is determined. The final split of the data is determined using evolutionary algorithms and statistical sampling techniques. The data is split. Multiple decision trees are combined in ensembles.
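
    The patented system builds ensembles of oblique decision trees whose splits are found with evolutionary algorithms; neither oblique splits nor the evolutionary search are reproduced here. The sketch below only illustrates the surrounding recipe of statistical sampling plus tree ensembling with voting, using scikit-learn's bagging of ordinary axis-parallel trees on a synthetic dataset.

```python
# Hedged sketch of the "sample, fit a tree per sample, vote" part of the recipe.
# BaggingClassifier's default base learner is a decision tree.
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=2000, n_features=20, n_informative=8, random_state=0)

ensemble = BaggingClassifier(
    n_estimators=25,       # number of trees combined by voting
    max_samples=0.6,       # statistical sampling: each tree sees 60% of the data
    bootstrap=True,
    random_state=0,
)
print("ensemble accuracy:", cross_val_score(ensemble, X, y, cv=5).mean())
```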

  18. The value of decision tree analysis in planning anaesthetic care in obstetrics.

    Science.gov (United States)

    Bamber, J H; Evans, S A

    2016-08-01

    The use of decision tree analysis is discussed in the context of the anaesthetic and obstetric management of a young pregnant woman with joint hypermobility syndrome with a history of insensitivity to local anaesthesia and a previous difficult intubation due to a tongue tumour. The multidisciplinary clinical decision process resulted in the woman being delivered without complication by elective caesarean section under general anaesthesia after an awake fibreoptic intubation. The decision process used is reviewed and compared retrospectively to a decision tree analytical approach. The benefits and limitations of using decision tree analysis are reviewed and its application in obstetric anaesthesia is discussed. Copyright © 2016 Elsevier Ltd. All rights reserved.

  19. Minimization of decision tree depth for multi-label decision tables

    KAUST Repository

    Azad, Mohammad

    2014-10-01

    In this paper, we consider multi-label decision tables that have a set of decisions attached to each row. Our goal is to find one decision from the set of decisions for each row by using a decision tree as our tool. With the target of minimizing the depth of the decision tree, we devised various kinds of greedy algorithms as well as a dynamic programming algorithm. When compared with the optimal result obtained from the dynamic programming algorithm, some greedy algorithms produce results that are close to optimal for the minimization of the depth of decision trees.
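
    One toy interpretation of a greedy heuristic for this setting, not any of the paper's specific algorithms: rows of a decision table carry a set of admissible decisions, a subtable becomes a leaf as soon as its rows share a common decision, and the split attribute is chosen to keep the largest branch as small as possible, which tends to keep the tree shallow without any optimality guarantee. The table at the bottom is invented.

```python
# Toy sketch of a greedy depth-oriented heuristic for multi-label decision tables.
from collections import defaultdict

def common_decision(rows):
    """Return a decision shared by every row, or None."""
    shared = set.intersection(*(dec for _, dec in rows))
    return min(shared) if shared else None

def greedy_tree(rows, attrs):
    d = common_decision(rows)
    if d is not None or not attrs:
        return ("leaf", d)            # d is None only if attributes run out
    # Greedy choice: attribute minimizing the size of its largest branch.
    def worst_branch(a):
        groups = defaultdict(list)
        for vals, dec in rows:
            groups[vals[a]].append((vals, dec))
        return max(len(g) for g in groups.values())
    a = min(attrs, key=worst_branch)
    branches = defaultdict(list)
    for vals, dec in rows:
        branches[vals[a]].append((vals, dec))
    rest = [b for b in attrs if b != a]
    return ("node", a, {v: greedy_tree(g, rest) for v, g in branches.items()})

def depth(tree):
    return 0 if tree[0] == "leaf" else 1 + max(depth(t) for t in tree[2].values())

# Rows: (attribute values, set of admissible decisions); all values invented.
table = [((0, 0, 1), {1, 2}), ((0, 1, 1), {2}), ((1, 0, 0), {1}),
         ((1, 1, 0), {1, 3}), ((1, 1, 1), {3})]
tree = greedy_tree(table, attrs=[0, 1, 2])
print("depth of greedy tree:", depth(tree))
```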

  20. Construction of α-decision trees for tables with many-valued decisions

    KAUST Repository

    Moshkov, Mikhail; Zielosko, Beata

    2011-01-01

    The paper is devoted to the study of a greedy algorithm for the construction of approximate decision trees (α-decision trees). This algorithm is applicable to decision tables with many-valued decisions, where each row is labeled with a set of decisions. For a given row, we should find a decision from the set attached to this row. We consider a bound on the number of algorithm steps, and a bound on the algorithm accuracy relative to the depth of the decision trees. © 2011 Springer-Verlag.

  1. Minimization of decision tree depth for multi-label decision tables

    KAUST Repository

    Azad, Mohammad; Moshkov, Mikhail

    2014-01-01

    In this paper, we consider multi-label decision tables that have a set of decisions attached to each row. Our goal is to find one decision from the set of decisions for each row by using a decision tree as our tool. With the target of minimizing the depth of the decision tree, we devised various kinds of greedy algorithms as well as a dynamic programming algorithm. When compared with the optimal result obtained from the dynamic programming algorithm, some greedy algorithms produce results that are close to optimal for the minimization of the depth of decision trees.

  2. Diagnostic assessment of intraoperative cytology for papillary thyroid carcinoma: using a decision tree analysis.

    Science.gov (United States)

    Pyo, J-S; Sohn, J H; Kang, G

    2017-03-01

    The aim of this study was to elucidate the cytological characteristics and the diagnostic usefulness of intraoperative cytology (IOC) for papillary thyroid carcinoma (PTC). In addition, using decision tree analysis, effective features for accurate cytological diagnosis were sought. We investigated cellularity, cytological features and diagnosis based on the Bethesda System for Reporting Thyroid Cytopathology in IOC of 240 conventional PTCs. The cytological features were evaluated in terms of nuclear score with nuclear features, and additional figures such as presence of swirling sheets, psammoma bodies, and multinucleated giant cells. The nuclear score (range 0-7) was made via seven nuclear features, including (1) enlarged, (2) oval or irregularly shaped nuclei, (3) longitudinal nuclear grooves, (4) intranuclear cytoplasmic pseudoinclusion, (5) pale nuclei with powdery chromatin, (6) nuclear membrane thickening, and (7) marginally placed micronucleoli. Nuclear scores in PTC, suspicious for malignancy, and atypia of undetermined significance cases were 6.18 ± 0.80, 4.48 ± 0.82, and 3.15 ± 0.67, respectively. Additional figures more frequent in PTC than in other diagnostic categories were identified. Cellularity of IOC significantly correlated with tumor size, nuclear score, and presence of additional figures. Also, IOCs with higher nuclear scores (4-7) significantly correlated with larger tumor size and presence of additional figures. In decision tree analysis, IOCs with nuclear score >5 and swirling sheets could be considered diagnostic for PTCs. Our study suggests that IOCs using nuclear features and additional figures could be useful with decreasing the likelihood of inconclusive results.

  3. DECISION TREE CLASSIFIERS FOR STAR/GALAXY SEPARATION

    International Nuclear Information System (INIS)

    Vasconcellos, E. C.; Ruiz, R. S. R.; De Carvalho, R. R.; Capelato, H. V.; Gal, R. R.; LaBarbera, F. L.; Frago Campos Velho, H.; Trevisan, M.

    2011-01-01

    We study the star/galaxy classification efficiency of 13 different decision tree algorithms applied to photometric objects in the Sloan Digital Sky Survey Data Release Seven (SDSS-DR7). Each algorithm is defined by a set of parameters which, when varied, produce different final classification trees. We extensively explore the parameter space of each algorithm, using the set of 884,126 SDSS objects with spectroscopic data as the training set. The efficiency of star-galaxy separation is measured using the completeness function. We find that the Functional Tree algorithm (FT) yields the best results as measured by the mean completeness in two magnitude intervals: 14 ≤ r ≤ 21 (85.2%) and r ≥ 19 (82.1%). We compare the performance of the tree generated with the optimal FT configuration to the classifications provided by the SDSS parametric classifier, 2DPHOT, and Ball et al. We find that our FT classifier is comparable to or better in completeness over the full magnitude range 15 ≤ r ≤ 21, with much lower contamination than all but the Ball et al. classifier. At the faintest magnitudes (r > 19), our classifier is the only one that maintains high completeness (>80%) while simultaneously achieving low contamination (∼2.5%). We also examine the SDSS parametric classifier (psfMag - modelMag) to see if the dividing line between stars and galaxies can be adjusted to improve the classifier. We find that currently stars in close pairs are often misclassified as galaxies, and suggest a new cut to improve the classifier. Finally, we apply our FT classifier to separate stars from galaxies in the full set of 69,545,326 SDSS photometric objects in the magnitude range 14 ≤ r ≤ 21.

  4. GIS-Based Smart Cartography Using 3D Modeling

    Science.gov (United States)

    Malinverni, E. S.; Tassetti, A. N.

    2013-08-01

    3D city models have evolved into important tools for urban decision processes and information systems, especially in planning, simulation, analysis, documentation and heritage management. On the other hand, existing numerical cartography currently in use is often not suitable for GIS because it is not geometrically and topologically correctly structured. The aim of this research is to structure and organize numerical cartography in 3D for GIS and turn it into CityGML standardized features. The work is framed around a first phase of methodological analysis aimed at identifying which existing standards (such as ISO and OGC rules) can be used to improve the quality requirements of a cartographic structure. Subsequently, starting from these technical specifications, the translation into formal content was investigated, using proprietary interchange software (SketchUp), to support guideline implementations for generating a GIS3D structured in GML3. A test three-dimensional numerical cartography (scale 1:500, generated from range data captured by a 3D laser scanner) was therefore prepared, tested for quality against the above standards, and edited where necessary. CAD files and shapefiles are converted into a final 3D model (Google SketchUp model) and then exported into a 3D city model (CityGML LoD1/LoD2). The GIS3D structure has been managed in a GIS environment to run further spatial analyses and energy performance estimates that are not achievable in a 2D environment. In particular, geometric building parameters (footprint, volume, etc.) are computed and building envelope thermal characteristics are derived from them. Lastly, a simulation is carried out to address asbestos and home renovation charges and to show how the built 3D city model can support municipal managers with risk diagnosis of the present situation and the development of strategies for sustainable redevelopment.

  5. A GIS-based disaggregate spatial watershed analysis using RADAR data

    International Nuclear Information System (INIS)

    Al-Hamdan, M.

    2002-01-01

    Hydrology is the study of water in all its forms, origins, and destinations on the earth. This paper develops a novel modeling technique using a geographic information system (GIS) to facilitate watershed hydrological routing using RADAR data. The RADAR rainfall data, segmented into 4 km by 4 km blocks, divides the watershed into several sub-basins which are modeled independently. A case study for the GIS-based disaggregate spatial watershed analysis using RADAR data is provided for South Fork Cowikee Creek near Batesville, Alabama. All the data necessary to complete the analysis is maintained in the ArcView GIS software. This paper concludes that the GIS-based disaggregate spatial watershed analysis using RADAR data is a viable method to calculate hydrological routing for large watersheds. (author)

  6. WebGIS based on semantic grid model and web services

    Science.gov (United States)

    Zhang, WangFei; Yue, CaiRong; Gao, JianGuo

    2009-10-01

    As the meeting point of network technology and GIS technology, WebGIS has developed rapidly in recent years. Constrained by the Web and by the characteristics of GIS, traditional WebGIS has some prominent problems in development. For example, it cannot achieve interoperability of heterogeneous spatial databases, and it cannot provide cross-platform data access. With the appearance of Web Services and Grid technology, great changes have occurred in the field of WebGIS. Web Services provide an interface that gives information at different sites the ability to share data and intercommunicate. The goal of Grid technology is to turn the internet into one large super computer with which computing resources, storage resources, data resources, information resources, knowledge resources and expert resources can all be shared efficiently. But for WebGIS, this only achieves the physical connection of data and information, and that is far from enough. Because of different understandings of the world, different professional regulations, different policies and different habits, experts in different fields will reach different conclusions when they observe the same geographic phenomenon, and semantic heterogeneity is produced. Hence there are large differences in the same concept across different fields. If we use WebGIS without considering this semantic heterogeneity, we will answer the questions users pose wrongly, or we will not be able to answer them at all. To solve this problem, this paper puts forward and tests an effective method of combining the semantic grid and Web Services technology to develop WebGIS. In this paper, we studied the method to construct the ontology and the method to combine Grid technology and Web Services, and with a detailed analysis of the computing characteristics and application model of data distribution, we designed the WebGIS query system driven by

  7. GIS-based Watershed Management Modeling for Surface Runoff Calculation in Tatara River Basin, Japan

    OpenAIRE

    Ix, Hour; Mori, Makito; Hiramatsu, Kazuaki; Harada, Masayoshi

    2007-01-01

    In the past few decades when Geographical Information System (GIS) technology was not fully developed in practical use, watershed delineation work used to be conducted manually by hydrologists based on topographic maps. The work was a tedious operation, since it had to be done repeatedly in similar manner for each basin or sub-basin of interest, and its process always left some unpredicted errors. Nowadays, GIS software is being upgraded regularly with powerful tools responding to the needs o...

  8. Analysis Of Educational Services Distribution-Based Geographic Information System GIS

    OpenAIRE

    Waleed Lagrab; Noura AKNIN

    2015-01-01

    Abstract This study analyzes the spatial distribution of kindergarten facilities in the study area based on the Geographic Information Systems GIS in order to test an efficiency of GIS technology to redistribute the existing kindergarten and choose the best location in the future and applying the standard criteria for selecting the suitable locations for kindergarten. To achieve this goal the data and information are collected via interviews and comprehensive statistics on the education facil...

  9. Evaluation of the improvement of the double-contrast radiographic image quality in the prone position brought about by the use of a decision tree in the screening examination

    International Nuclear Information System (INIS)

    Yamada, Yasuji; Nakamura, Syunichi; Ohno, Ryuichi; Azuma, Hiroshi; Fujinaga, Akira; Nagai, Makoto

    2009-01-01

    We designed a decision tree in order to improve the double-contrast radiographic image quality in the prone position and compensate for the disparity of technique among radiological technologists. We evaluated 391 consecutive individuals who underwent medical checkups at our hospital. Three decision trees, i.e., Tree 1, Tree 2 and Tree 3, were constructed based on the axis and contortion of the stomach with the use of a prone filling image, and the insertion site of the compression pillow was then altered accordingly. The image quality at the gastric angulus, the gastric body and the antrum was evaluated based on our original numeric scale, and was compared between the previous method and the present method, which employs a decision tree. The image quality improved significantly more with the present method employing a decision tree than with the previous method, for each decision tree: from 90 points to 100 points for Tree 1, from 70 points to 95 points for Tree 2, and from 39.5 points to 85.7 points for Tree 3. These results indicate that our original procedure employing a decision tree improves the radiographic image quality in the prone position and compensates for the disparity of technique among radiological technologists. Therefore, the present method may be expected to serve as the standard procedure of double-contrast radiography in the prone position. (author)

  10. Using Decision Trees to Detect and Isolate Leaks in the J-2X

    Data.gov (United States)

    National Aeronautics and Space Administration — Full title: Using Decision Trees to Detect and Isolate Simulated Leaks in the J-2X Rocket Engine Mark Schwabacher, NASA Ames Research Center Robert Aguilar, Pratt...

  11. Comparison of the use of binary decision trees and neural networks in top-quark detection

    International Nuclear Information System (INIS)

    Bowser-Chao, D.; Dzialo, D.L.

    1993-01-01

    The use of neural networks for signal versus background discrimination in high-energy physics experiments has been investigated and has compared favorably with the efficiency of traditional kinematic cuts. Recent work in top-quark identification produced a neural network that, for a given top-quark mass, yielded a higher signal-to-background ratio in Monte Carlo simulation than a corresponding set of conventional cuts. In this article we discuss another pattern-recognition algorithm, the binary decision tree. We apply a binary decision tree to top-quark identification at the Fermilab Tevatron and find it to be comparable in performance to the neural network. Furthermore, reservations about the ''black box'' nature of neural network discriminators do not apply to binary decision trees; a binary decision tree may be reduced to a set of kinematic cuts subject to conventional error analysis.
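
    A hedged, synthetic illustration of the comparison the article describes: a shallow binary decision tree and a small neural network are trained on the same invented signal/background "kinematic" features, and the fitted tree is printed as explicit cuts, which is the transparency argument made for trees over networks. The feature names, distributions and architectures are assumptions, not the article's setup.

```python
# Illustrative comparison only, on synthetic "kinematics", not Tevatron data.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 4000
feats = ["missing_ET", "lead_jet_pT", "HT", "sphericity"]     # invented feature names
bkg = rng.normal([25.0, 60.0, 150.0, 0.25], [10.0, 25.0, 60.0, 0.10], size=(n, 4))
sig = rng.normal([45.0, 95.0, 260.0, 0.40], [15.0, 30.0, 80.0, 0.12], size=(n, 4))
X = np.vstack([bkg, sig])
y = np.array([0] * n + [1] * n)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_tr, y_tr)
net = make_pipeline(StandardScaler(),
                    MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000,
                                  random_state=0)).fit(X_tr, y_tr)

print("tree accuracy:", round(tree.score(X_te, y_te), 3))
print("net accuracy: ", round(net.score(X_te, y_te), 3))
print(export_text(tree, feature_names=feats))   # the tree reads as explicit kinematic cuts
```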

  12. Statistical Sensitive Data Protection and Inference Prevention with Decision Tree Methods

    National Research Council Canada - National Science Library

    Chang, LiWu

    2003-01-01

    .... We consider inference as correct classification and approach it with decision tree methods. As in our previous work, sensitive data are viewed as classes of those test data and non-sensitive data are the rest attribute values...

  13. Totally Optimal Decision Trees for Monotone Boolean Functions with at Most Five Variables

    KAUST Repository

    Chikalov, Igor; Hussain, Shahid; Moshkov, Mikhail

    2013-01-01

    In this paper, we present the empirical results for relationships between time (depth) and space (number of nodes) complexity of decision trees computing monotone Boolean functions, with at most five variables. We use Dagger (a tool for optimization

  14. Bi-Criteria Optimization of Decision Trees with Applications to Data Analysis

    KAUST Repository

    Chikalov, Igor

    2017-10-19

    This paper is devoted to the study of bi-criteria optimization problems for decision trees. We consider different cost functions such as depth, average depth, and number of nodes. We design algorithms that allow us to construct the set of Pareto optimal points (POPs) for a given decision table and the corresponding bi-criteria optimization problem. These algorithms are suitable for investigation of medium-sized decision tables. We discuss three examples of applications of the created tools: the study of relationships among depth, average depth and number of nodes for decision trees for corner point detection (such trees are used in computer vision for object tracking), study of systems of decision rules derived from decision trees, and comparison of different greedy algorithms for decision tree construction as single- and bi-criteria optimization algorithms.

  15. The research and implementation of coalfield spontaneous combustion of carbon emission WebGIS based on Silverlight and ArcGIS server

    International Nuclear Information System (INIS)

    Zhu, Z; Bi, J; Wang, X; Zhu, W

    2014-01-01

    As an important sub-topic of the construction of a public information platform for natural-process carbon emission data, a WebGIS system for carbon emissions from coalfield spontaneous combustion has become an important object of study. Given the characteristics of coalfield spontaneous combustion carbon emission data (a wide range of data that is rich and complex) and its geospatial nature, the data are divided into attribute data and spatial data. Based on a full analysis of the data, a detailed design of the Oracle database was completed and the data were stored in it. Using Silverlight rich client technology and extended WCF services, dynamic web query, retrieval, statistics, analysis and other functions were implemented for the attribute data. For spatial data, we take advantage of ArcGIS Server and the Silverlight-based API to invoke map services, GP services, image services and other services published by the GIS server in the background, implementing the display, analysis and thematic mapping of coalfield spontaneous combustion remote sensing imagery and web map data. The study found that Silverlight rich client technology, together with an object-oriented framework of WCF services, can be used to efficiently construct a WebGIS system. Combined with the ArcGIS Silverlight API to achieve interactive querying of the attribute and spatial data of coalfield spontaneous combustion, it can greatly improve the performance of the WebGIS system. At the same time, it provides a strong guarantee for the construction of public information on China's carbon emission data.

  16. Minimization of Decision Tree Average Depth for Decision Tables with Many-valued Decisions

    KAUST Repository

    Azad, Mohammad

    2014-09-13

    The paper is devoted to the analysis of greedy algorithms for the minimization of the average depth of decision trees for decision tables in which each row is labeled with a set of decisions. The goal is to find one decision from the set of decisions. When compared with the optimal result obtained from a dynamic programming algorithm, some greedy algorithms produce results that are close to optimal for the minimization of the average depth of decision trees.

  17. Greedy heuristics for minimization of number of terminal nodes in decision trees

    KAUST Repository

    Hussain, Shahid

    2014-10-01

    This paper describes, in detail, several greedy heuristics for construction of decision trees. We study the number of terminal nodes of decision trees, which is closely related with the cardinality of the set of rules corresponding to the tree. We compare these heuristics empirically for two different types of datasets (datasets acquired from UCI ML Repository and randomly generated data) as well as compare with the optimal results obtained using dynamic programming method.

  18. Greedy heuristics for minimization of number of terminal nodes in decision trees

    KAUST Repository

    Hussain, Shahid

    2014-01-01

    This paper describes, in detail, several greedy heuristics for construction of decision trees. We study the number of terminal nodes of decision trees, which is closely related with the cardinality of the set of rules corresponding to the tree. We compare these heuristics empirically for two different types of datasets (datasets acquired from UCI ML Repository and randomly generated data) as well as compare with the optimal results obtained using dynamic programming method.

  19. Minimization of Decision Tree Average Depth for Decision Tables with Many-valued Decisions

    KAUST Repository

    Azad, Mohammad; Moshkov, Mikhail

    2014-01-01

    The paper is devoted to the analysis of greedy algorithms for the minimization of the average depth of decision trees for decision tables in which each row is labeled with a set of decisions. The goal is to find one decision from the set of decisions. When compared with the optimal result obtained from a dynamic programming algorithm, some greedy algorithms produce results that are close to optimal for the minimization of the average depth of decision trees.

  20. Enabling Participatory Decision Making Through Web-Based GIS

    International Nuclear Information System (INIS)

    Sheng, Grant; Yam, Kevin; Hassas, Aranak

    2001-01-01

    obligations to fellow human beings, animals, and the environment. People perceive that information travels essentially one way in these processes and that the voices of the community and its members are not heard. Subsequently, they feel excluded from the actual decision making process and even from being able to participate meaningfully in the process. Recent advances in informatics and geomatics technology, such as the Internet, web-based software and geographic information systems (GIS), have made it possible to address these issues more effectively. We believe that the combined features of two software packages developed at the York Centre for Applied Sustainability can facilitate access to information, provide a virtual forum for discussion and debate, make it possible for individuals to participate in the decision making process, and make it possible to infer peoples' values from their choice criteria selection

  1. Metric Sex Determination of the Human Coxal Bone on a Virtual Sample using Decision Trees.

    Science.gov (United States)

    Savall, Frédéric; Faruch-Bilfeld, Marie; Dedouit, Fabrice; Sans, Nicolas; Rousseau, Hervé; Rougé, Daniel; Telmon, Norbert

    2015-11-01

    Decision trees provide an alternative to multivariate discriminant analysis, which is still the most commonly used in anthropometric studies. Our study analyzed the metric characterization of a recent virtual sample of 113 coxal bones using decision trees for sex determination. From 17 osteometric type I landmarks, a dataset was built with five classic distances traditionally reported in the literature and six new distances selected using the two-step ratio method. A ten-fold cross-validation was performed, and a decision tree was established on two subsamples (training and test sets). The decision tree established on the training set included three nodes and its application to the test set correctly classified 92% of individuals. This percentage was similar to the data of the literature. The usefulness of decision trees has been demonstrated in numerous fields. They have been already used in sex determination, body mass prediction, and ancestry estimation. This study shows another use of decision trees enabling simple and accurate sex determination. © 2015 American Academy of Forensic Sciences.

  2. Implementation of Data Mining to Analyze Drug Cases Using C4.5 Decision Tree

    Science.gov (United States)

    Wahyuni, Sri

    2018-03-01

    Data mining is the process of finding useful information in large databases. One of the existing techniques in data mining is classification. The method used here is the decision tree method, and the algorithm used is the C4.5 algorithm. The decision tree method transforms a very large set of facts into a decision tree that presents the underlying rules. It is useful for exploring data, as well as for finding hidden relationships between a number of potential input variables and a target variable. The decision tree of the C4.5 algorithm is constructed in several stages: selecting an attribute as the root, creating a branch for each of its values, and dividing the cases among the branches. These stages are repeated for each branch until all the cases on the branch have the same class. From the resulting decision tree, a set of rules can be extracted. In this study the researcher classified data on prisoners at Labuhan Deli prison to identify the factors behind detainees committing drug offences. By applying the C4.5 algorithm, knowledge was obtained that can serve as information for minimizing drug offences. The findings show that the most influential factor behind a detainee committing a drug offence was the address variable.
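
    The attribute-selection stage described above can be made concrete with C4.5's gain-ratio criterion. The sketch below computes the gain ratio of each categorical attribute over a handful of invented records (stand-ins, not the Labuhan Deli data) and would pick the attribute with the highest value as the root; on this toy table the address attribute happens to score highest, echoing the abstract's finding.

```python
# Toy illustration of C4.5's gain-ratio attribute selection over invented records.
import math
from collections import Counter

def entropy(labels):
    total = len(labels)
    return -sum(c / total * math.log2(c / total) for c in Counter(labels).values())

def gain_ratio(rows, attr, target):
    base = entropy([r[target] for r in rows])
    groups = Counter(r[attr] for r in rows)
    rem = split_info = 0.0
    for value, count in groups.items():
        subset = [r[target] for r in rows if r[attr] == value]
        rem += count / len(rows) * entropy(subset)
        split_info -= count / len(rows) * math.log2(count / len(rows))
    gain = base - rem
    return gain / split_info if split_info > 0 else 0.0

rows = [  # invented stand-in records, not real prisoner data
    {"age": "young", "address": "urban", "education": "low",  "drug_case": "yes"},
    {"age": "young", "address": "urban", "education": "high", "drug_case": "yes"},
    {"age": "adult", "address": "rural", "education": "low",  "drug_case": "no"},
    {"age": "adult", "address": "urban", "education": "low",  "drug_case": "yes"},
    {"age": "young", "address": "rural", "education": "high", "drug_case": "no"},
    {"age": "adult", "address": "rural", "education": "high", "drug_case": "no"},
]
for attr in ("age", "address", "education"):
    print(attr, round(gain_ratio(rows, attr, "drug_case"), 3))
```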

  3. Improving Land Use/Land Cover Classification by Integrating Pixel Unmixing and Decision Tree Methods

    Directory of Open Access Journals (Sweden)

    Chao Yang

    2017-11-01

    Full Text Available Decision tree classification is one of the most efficient methods for obtaining land use/land cover (LULC information from remotely sensed imageries. However, traditional decision tree classification methods cannot effectively eliminate the influence of mixed pixels. This study aimed to integrate pixel unmixing and decision tree to improve LULC classification by removing mixed pixel influence. The abundance and minimum noise fraction (MNF results that were obtained from mixed pixel decomposition were added to decision tree multi-features using a three-dimensional (3D Terrain model, which was created using an image fusion digital elevation model (DEM, to select training samples (ROIs, and improve ROI separability. A Landsat-8 OLI image of the Yunlong Reservoir Basin in Kunming was used to test this proposed method. Study results showed that the Kappa coefficient and the overall accuracy of integrated pixel unmixing and decision tree method increased by 0.093% and 10%, respectively, as compared with the original decision tree method. This proposed method could effectively eliminate the influence of mixed pixels and improve the accuracy in complex LULC classifications.

  4. SVM-based Partial Discharge Pattern Classification for GIS

    Science.gov (United States)

    Ling, Yin; Bai, Demeng; Wang, Menglin; Gong, Xiaojin; Gu, Chao

    2018-01-01

    Partial discharges (PD) occur when there are localized dielectric breakdowns in small regions of gas insulated substations (GIS). It is of high importance to recognize the PD patterns, through which we can diagnose the defects caused by different sources so that predictive maintenance can be conducted to prevent unplanned power outages. In this paper, we propose an approach to perform partial discharge pattern classification. It first recovers the PRPD matrices from the PRPD2D images; then statistical features are extracted from the recovered PRPD matrix and fed into an SVM for classification. Experiments conducted on a dataset containing thousands of images demonstrate the high effectiveness of the method.
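
    A hedged sketch of the pipeline the abstract outlines: a recovered PRPD matrix is collapsed into a few statistical features and fed to an SVM. The particular features (phase-weighted mean and spread, skewness and kurtosis of the per-phase counts), the synthetic "PRPD matrices" and the three defect classes are all assumptions made for illustration, not the paper's feature set.

```python
# Hedged sketch: statistical features from a phase x amplitude PRPD matrix into an SVM.
import numpy as np
from scipy.stats import skew, kurtosis
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

def prpd_features(matrix):
    """Collapse a PRPD matrix into a small statistical feature vector."""
    counts = matrix.sum(axis=1).astype(float)          # discharge counts per phase bin
    phases = np.arange(len(counts))
    mean_phase = np.average(phases, weights=counts)
    std_phase = np.sqrt(np.average((phases - mean_phase) ** 2, weights=counts))
    return np.array([mean_phase, std_phase, skew(counts), kurtosis(counts)])

def fake_prpd(defect_class, n_phase=64, n_amp=32):
    """Random stand-in for a recovered PRPD matrix; each class peaks at a different phase."""
    phase_peak = 10 + 15 * defect_class
    base = np.exp(-0.5 * ((np.arange(n_phase) - phase_peak) / 6.0) ** 2)
    return rng.poisson(5.0 * base[:, None] * np.ones(n_amp))

X = np.array([prpd_features(fake_prpd(c)) for c in range(3) for _ in range(100)])
y = np.repeat(np.arange(3), 100)
print("CV accuracy:", cross_val_score(SVC(kernel="rbf", gamma="scale"), X, y, cv=5).mean())
```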

  5. Construction and validation of a decision tree for treating metabolic acidosis in calves with neonatal diarrhea

    Directory of Open Access Journals (Sweden)

    Trefz Florian M

    2012-12-01

    Full Text Available Abstract Background The aim of the present prospective study was to investigate whether a decision tree based on basic clinical signs could be used to determine the treatment of metabolic acidosis in calves successfully without expensive laboratory equipment. A total of 121 calves with a diagnosis of neonatal diarrhea admitted to a veterinary teaching hospital were included in the study. The dosages of sodium bicarbonate administered followed simple guidelines based on the results of a previous retrospective analysis. Calves that were neither dehydrated nor assumed to be acidemic received an oral electrolyte solution. In cases in which intravenous correction of acidosis and/or dehydration was deemed necessary, the provided amount of sodium bicarbonate ranged from 250 to 750 mmol (depending on alterations in posture) and infusion volumes from 1 to 6.25 liters (depending on the degree of dehydration). Individual body weights of calves were disregarded. During the 24 hour study period the investigator was blinded to all laboratory findings. Results After being lifted, many calves were able to stand despite base excess levels below −20 mmol/l. Especially in those calves, metabolic acidosis was undercorrected with the provided amount of 500 mmol sodium bicarbonate, which was intended for calves standing insecurely. In 13 calves metabolic acidosis was not treated successfully, as defined by an expected treatment failure or a measured base excess value below −5 mmol/l. By contrast, 24 hours after the initiation of therapy, a metabolic alkalosis was present in 55 calves (base excess levels above +5 mmol/l). However, the clinical status was not affected significantly by the metabolic alkalosis. Conclusions Assuming re-evaluation of the calf after 24 hours, the tested decision tree can be recommended for use in field practice with minor modifications. Calves that stand insecurely and are not able to correct their position if pushed

  6. Assembly and concept of a web-based GIS within the paleolimnological project CONTINENT (Lake Baikal, Russia)

    OpenAIRE

    B. Heim; Jens Klump; N. Fagel; Hedi Oberhänsli

    2008-01-01

    Web-based Geographical Information Systems (GIS) are excellent tools within interdisciplinary and multi-national geoscience projects to exchange and visualize project data. The web-based GIS presented in this paper was designed for the paleolimnological project 'High-resolution CONTINENTal paleoclimate record in Lake Baikal' (CONTINENT) (Lake Baikal, Siberia, Russia) to allow the interactive handling of spatial data. The GIS database combines project data (core positions, sample positions, th...

  7. Enabling Participatory Decision Making Through Web-Based GIS

    Energy Technology Data Exchange (ETDEWEB)

    Sheng, Grant; Yam, Kevin; Hassas, Aranak [York Univ., Toronto, ON (Canada). Faculty of Environmental Studies

    2001-07-01

    , morals, ethical behaviour, our relationship with, and obligations to fellow human beings, animals, and the environment. People perceive that information travels essentially one way in these processes and that the voices of the community and its members are not heard. Subsequently, they feel excluded from the actual decision making process and even from being able to participate meaningfully in the process. Recent advances in informatics and geomatics technology, such as the Internet, web-based software and geographic information systems (GIS), have made it possible to address these issues more effectively. We believe that the combined features of two software packages developed at the York Centre for Applied Sustainability can facilitate access to information, provide a virtual forum for discussion and debate, make it possible for individuals to participate in the decision making process, and make it possible to infer peoples' values from their choice criteria selection.

  8. Blood oxygen level dependent magnetic resonance imaging for detecting pathological patterns in lupus nephritis patients: a preliminary study using a decision tree model.

    Science.gov (United States)

    Shi, Huilan; Jia, Junya; Li, Dong; Wei, Li; Shang, Wenya; Zheng, Zhenfeng

    2018-02-09

    Precise renal histopathological diagnosis will guide the therapy strategy in patients with lupus nephritis. Blood oxygen level dependent (BOLD) magnetic resonance imaging (MRI) has become an applicable noninvasive technique in renal disease. The current study was performed to explore whether BOLD MRI could contribute to diagnosing the renal pathological pattern. Adult patients with a renal pathological diagnosis of lupus nephritis were recruited for this study. Renal biopsy tissues were assessed based on the lupus nephritis ISN/RPS 2003 classification. BOLD-MRI was used to obtain the functional magnetic resonance parameter R2*. Several functions of the R2* values were calculated and used to construct algorithmic models for renal pathological patterns, and the models were compared as to their diagnostic capability. Both histopathology and BOLD MRI were used to examine a total of twelve patients. Renal pathological patterns included five class III cases (including 3 as class III + V) and seven class IV cases (including 4 as class IV + V). Three algorithmic models, including decision tree, line discriminant, and logistic regression, were constructed to distinguish the renal pathological patterns of class III and class IV. The sensitivity of the decision tree model was better than that of the line discriminant model (71.87% vs 59.48%). The decision tree model was equivalent to the line discriminant model on a further measure (63.87% vs 63.73%, P = 0.939) and higher than the logistic regression model (63.87% vs 38.0%), and its performance exceeded that of the line discriminant model (0.765 vs 0.629). Decision tree models constructed using functions of R2* values may facilitate the prediction of renal pathological patterns.

  9. Creation of a Web-Based GIS Server and Custom Geoprocessing Tools for Enhanced Hydrologic Applications

    Science.gov (United States)

    Welton, B.; Chouinard, K.; Sultan, M.; Becker, D.; Milewski, A.; Becker, R.

    2010-12-01

    Rising populations in the arid and semi arid parts of the World are increasing the demand for fresh water supplies worldwide. Many data sets needed for assessment of hydrologic applications across vast regions of the world are expensive, unpublished, difficult to obtain, or at varying scales which complicates their use. Fortunately, this situation is changing with the development of global remote sensing datasets and web-based platforms such as GIS Server. GIS provides a cost effective vehicle for comparing, analyzing, and querying a variety of spatial datasets as geographically referenced layers. We have recently constructed a web-based GIS, that incorporates all relevant geological, geochemical, geophysical, and remote sensing data sets that were readily used to identify reservoir types and potential well locations on local and regional scales in various tectonic settings including: (1) extensional environment (Red Sea rift), (2) transcurrent fault system (Najd Fault in the Arabian-Nubian Shield), and (3) compressional environments (Himalayas). The web-based GIS could also be used to detect spatial and temporal trends in precipitation, recharge, and runoff in large watersheds on local, regional, and continental scales. These applications were enabled through the construction of a web-based ArcGIS Server with Google Map’s interface and the development of customized geoprocessing tools. ArcGIS Server provides out-of-the-box setups that are generic in nature. This platform includes all of the standard web based GIS tools (e.g. pan, zoom, identify, search, data querying, and measurement). In addition to the standard suite of tools provided by ArcGIS Server an additional set of advanced data manipulation and display tools was also developed to allow for a more complete and customizable view of the area of interest. The most notable addition to the standard GIS Server tools is the custom on-demand geoprocessing tools (e.g., graph, statistical functions, custom raster

  10. Security attack detection algorithm for electric power gis system based on mobile application

    Science.gov (United States)

    Zhou, Chao; Feng, Renjun; Wang, Liming; Huang, Wei; Guo, Yajuan

    2017-05-01

    Electric power GIS is one of the key information technologies supporting power grid construction in China, and it is widely used in power grid construction planning, weather monitoring, and power distribution management. The introduction of electric power GIS based on mobile applications is an effective extension of the geographic information systems that have been widely used in the electric power industry, and it provides a reliable, cheap and sustainable power service for the country. Accurate state estimation is an important condition for maintaining the normal operation of the electric power GIS. Recent research has shown that attackers can inject complex false data into the power system. This new type of false data injection attack (the load integrity attack, LIA) can successfully bypass routine detection to achieve its purpose, so that the control center makes a series of wrong decisions, eventually leading to an uneven distribution of power in the grid. In order to ensure the safety of the mobile-application-based electric power GIS system, it is very important to analyze the attack mechanism, propose this new type of attack, and study the corresponding detection method and prevention strategy in the environment of an electric power GIS system based on mobile applications.

  11. Automatic design of decision-tree induction algorithms tailored to flexible-receptor docking data.

    Science.gov (United States)

    Barros, Rodrigo C; Winck, Ana T; Machado, Karina S; Basgalupp, Márcio P; de Carvalho, André C P L F; Ruiz, Duncan D; de Souza, Osmar Norberto

    2012-11-21

    This paper addresses the prediction of the free energy of binding of a drug candidate with enzyme InhA associated with Mycobacterium tuberculosis. This problem is found within rational drug design, where interactions between drug candidates and target proteins are verified through molecular docking simulations. In this application, it is important not only to correctly predict the free energy of binding, but also to provide a comprehensible model that could be validated by a domain specialist. Decision-tree induction algorithms have been successfully used in drug-design related applications, specially considering that decision trees are simple to understand, interpret, and validate. There are several decision-tree induction algorithms available for general-use, but each one has a bias that makes it more suitable for a particular data distribution. In this article, we propose and investigate the automatic design of decision-tree induction algorithms tailored to particular drug-enzyme binding data sets. We investigate the performance of our new method for evaluating binding conformations of different drug candidates to InhA, and we analyze our findings with respect to decision tree accuracy, comprehensibility, and biological relevance. The empirical analysis indicates that our method is capable of automatically generating decision-tree induction algorithms that significantly outperform the traditional C4.5 algorithm with respect to both accuracy and comprehensibility. In addition, we provide the biological interpretation of the rules generated by our approach, reinforcing the importance of comprehensible predictive models in this particular bioinformatics application. We conclude that automatically designing a decision-tree algorithm tailored to molecular docking data is a promising alternative for the prediction of the free energy from the binding of a drug candidate with a flexible-receptor.

  12. Hybrid artificial intelligence approach based on neural fuzzy inference model and metaheuristic optimization for flood susceptibility modeling in a high-frequency tropical cyclone area using GIS

    Science.gov (United States)

    Tien Bui, Dieu; Pradhan, Biswajeet; Nampak, Haleh; Bui, Quang-Thanh; Tran, Quynh-An; Nguyen, Quoc-Phi

    2016-09-01

    This paper proposes a new artificial intelligence approach based on neural fuzzy inference system and metaheuristic optimization for flood susceptibility modeling, namely MONF. In the new approach, the neural fuzzy inference system was used to create an initial flood susceptibility model and then the model was optimized using two metaheuristic algorithms, Evolutionary Genetic and Particle Swarm Optimization. A high-frequency tropical cyclone area of the Tuong Duong district in Central Vietnam was used as a case study. First, a GIS database for the study area was constructed. The database that includes 76 historical flood inundated areas and ten flood influencing factors was used to develop and validate the proposed model. Root Mean Square Error (RMSE), Mean Absolute Error (MAE), Receiver Operating Characteristic (ROC) curve, and area under the ROC curve (AUC) were used to assess the model performance and its prediction capability. Experimental results showed that the proposed model has high performance on both the training (RMSE = 0.306, MAE = 0.094, AUC = 0.962) and validation dataset (RMSE = 0.362, MAE = 0.130, AUC = 0.911). The usability of the proposed model was evaluated by comparing with those obtained from state-of-the art benchmark soft computing techniques such as J48 Decision Tree, Random Forest, Multi-layer Perceptron Neural Network, Support Vector Machine, and Adaptive Neuro Fuzzy Inference System. The results show that the proposed MONF model outperforms the above benchmark models; we conclude that the MONF model is a new alternative tool that should be used in flood susceptibility mapping. The result in this study is useful for planners and decision makers for sustainable management of flood-prone areas.

  13. A GIS-BASED ENVIRONMENTAL HEALTH INFORMATION SOURCE FOR MALAYSIAN CONTEXT

    Directory of Open Access Journals (Sweden)

    Lau Tiu Chung

    2013-04-01

    Full Text Available In this paper, we propose a GIS-based system for collection and targeted distribution of latest alerts and real-time environmental factors to the Malaysian population. We call it the Environmental Health Management System (EHMS. This GIS-based system is designed to facilitate and encourage research into environmental health quality issues by providing a comprehensive tracking and monitoring tool. This GIS-based system is embedded with Google Maps API and Geocoding API services to visualize the location and environmental health reports from the aggregated online newspaper and social media news feeds. We introduce the design and implementation of EHMS, including the web frontend, backend, ontology, database, data acquisition, classification engine, and the standard news feeds.

  14. Study of Earthquake Disaster Prediction System of Langfang city Based on GIS

    Science.gov (United States)

    Huang, Meng; Zhang, Dian; Li, Pan; Zhang, YunHui; Zhang, RuoFei

    2017-07-01

    In view of China's need to improve its earthquake disaster prevention capability, this paper puts forward an implementation plan for a GIS-based earthquake disaster prediction system for Langfang city. Based on a GIS spatial database, coordinate transformation technology, GIS spatial analysis technology and PHP development technology, a seismic damage factor algorithm is used to predict the damage to the city under earthquake disaster conditions of different intensities. The earthquake disaster prediction system of Langfang city is built on a B/S (browser/server) architecture and provides two-dimensional visualization of damage degree and spatial distribution, comprehensive query and analysis, and efficient auxiliary decision-making functions to identify seismically weak parts of the city and support rapid warning. The system has transformed the city's earthquake disaster reduction work from static planning to dynamic management, and improved the city's earthquake and disaster prevention capability.

  15. Defining Primary Care Shortage Areas: Do GIS-based Measures Yield Different Results?

    Science.gov (United States)

    Daly, Michael R; Mellor, Jennifer M; Millones, Marco

    2018-02-12

    To examine whether geographic information systems (GIS)-based physician-to-population ratios (PPRs) yield determinations of geographic primary care shortage areas that differ from those based on bounded-area PPRs like those used in the Health Professional Shortage Area (HPSA) designation process. We used geocoded data on primary care physician (PCP) locations and census block population counts from 1 US state to construct 2 shortage area indicators. The first is a bounded-area shortage indicator defined without GIS methods; the second is a GIS-based measure that measures the populations' spatial proximity to PCP locations. We examined agreement and disagreement between bounded shortage areas and GIS-based shortage areas. Bounded shortage area indicators and GIS-based shortage area indicators agree for the census blocks where the vast majority of our study populations reside. Specifically, 95% and 98% of the populations in our full and urban samples, respectively, reside in census blocks where the 2 indicators agree. Although agreement is generally high in rural areas (ie, 87% of the rural population reside in census blocks where the 2 indicators agree), agreement is significantly lower compared to urban areas. One source of disagreement suggests that bounded-area measures may "overlook" some shortages in rural areas; however, other aspects of the HPSA designation process likely mitigate this concern. Another source of disagreement arises from the border-crossing problem, and it is more prevalent. The GIS-based PPRs we employed would yield shortage area determinations that are similar to those based on bounded-area PPRs defined for Primary Care Service Areas. Disagreement rates were lower than previous studies have found. © 2018 National Rural Health Association.
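
    A minimal sketch of what a GIS-based, proximity-driven physician-to-population ratio can look like, as opposed to a bounded-area ratio: for each census block, population and primary care physicians within a fixed radius of the block centroid are accumulated and the ratio is compared with a shortage threshold. The coordinates, the 10 km radius and the 3500:1 cut-off are illustrative assumptions, not the study's parameters.

```python
# Toy proximity-based population-to-PCP ratio for a few invented census blocks.
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two lat/lon points in kilometres."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = p2 - p1
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * 6371.0 * math.asin(math.sqrt(a))

blocks = [  # (block id, lat, lon, population); values invented
    ("B1", 37.30, -76.75, 1200), ("B2", 37.35, -76.60, 800), ("B3", 37.10, -76.90, 450),
]
pcp_sites = [  # (lat, lon, number of PCPs at the site); values invented
    (37.31, -76.74, 2), (37.36, -76.58, 1),
]

RADIUS_KM, THRESHOLD = 10.0, 3500.0
for bid, blat, blon, pop in blocks:
    near_pop = sum(p for _, lat, lon, p in blocks
                   if haversine_km(blat, blon, lat, lon) <= RADIUS_KM)
    near_pcp = sum(n for lat, lon, n in pcp_sites
                   if haversine_km(blat, blon, lat, lon) <= RADIUS_KM)
    ratio = near_pop / near_pcp if near_pcp else float("inf")
    print(bid, "population:PCP =", round(ratio, 1),
          "-> shortage" if ratio > THRESHOLD else "-> adequate")
```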

  16. Predicting Lung Radiotherapy-Induced Pneumonitis Using a Model Combining Parametric Lyman Probit With Nonparametric Decision Trees

    International Nuclear Information System (INIS)

    Das, Shiva K.; Zhou Sumin; Zhang, Junan; Yin, F.-F.; Dewhirst, Mark W.; Marks, Lawrence B.

    2007-01-01

    Purpose: To develop and test a model to predict for lung radiation-induced Grade 2+ pneumonitis. Methods and Materials: The model was built from a database of 234 lung cancer patients treated with radiotherapy (RT), of whom 43 were diagnosed with pneumonitis. The model augmented the predictive capability of the parametric dose-based Lyman normal tissue complication probability (LNTCP) metric by combining it with weighted nonparametric decision trees that use dose and nondose inputs. The decision trees were sequentially added to the model using a 'boosting' process that enhances the accuracy of prediction. The model's predictive capability was estimated by 10-fold cross-validation. To facilitate dissemination, the cross-validation result was used to extract a simplified approximation to the complicated model architecture created by boosting. Application of the simplified model is demonstrated in two example cases. Results: The area under the model receiver operating characteristics curve for cross-validation was 0.72, a significant improvement over the LNTCP area of 0.63 (p = 0.005). The simplified model used the following variables to output a measure of injury: LNTCP, gender, histologic type, chemotherapy schedule, and treatment schedule. For a given patient RT plan, injury prediction was highest for the combination of pre-RT chemotherapy, once-daily treatment, female gender and lowest for the combination of no pre-RT chemotherapy and nonsquamous cell histologic type. Application of the simplified model to the example cases revealed that injury prediction for a given treatment plan can range from very low to very high, depending on the settings of the nondose variables. Conclusions: Radiation pneumonitis prediction was significantly enhanced by decision trees that added the influence of nondose factors to the LNTCP formulation
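
    A minimal sketch of the modeling idea, combining a parametric Lyman probit feature with boosted decision trees over dose and non-dose inputs, is given below. The patient data, the Lyman parameters (TD50, m) and the covariate names are illustrative assumptions rather than the values fitted in the study, and scikit-learn's gradient boosting stands in for the original boosting procedure.

```python
import numpy as np
from scipy.stats import norm
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n = 234  # cohort size as in the study; everything else is synthetic

# Hypothetical dose metric (mean lung dose, Gy) and non-dose covariates.
mld = rng.uniform(5, 25, n)
female = rng.integers(0, 2, n)
pre_rt_chemo = rng.integers(0, 2, n)
twice_daily = rng.integers(0, 2, n)

# Lyman probit NTCP: NTCP = Phi((D - TD50) / (m * TD50)); parameters assumed.
TD50, m = 30.0, 0.35
lntcp = norm.cdf((mld - TD50) / (m * TD50))

# Synthetic outcome loosely driven by LNTCP and the non-dose factors.
logit = -2.5 + 4 * lntcp + 0.8 * pre_rt_chemo + 0.5 * female + 0.4 * twice_daily
pneumonitis = rng.random(n) < 1 / (1 + np.exp(-logit))

X = np.column_stack([lntcp, female, pre_rt_chemo, twice_daily])

# Boosting sequentially adds shallow decision trees that combine the dose-based
# LNTCP feature with the non-dose inputs; AUC is estimated by cross-validation.
model = GradientBoostingClassifier(n_estimators=200, max_depth=2,
                                   learning_rate=0.05, random_state=0)
auc = cross_val_score(model, X, pneumonitis, cv=10, scoring="roc_auc").mean()
print(f"cross-validated AUC: {auc:.2f}")
```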

  17. Detection of clinical mastitis with sensor data from automatic milking systems is improved by using decision-tree induction.

    Science.gov (United States)

    Kamphuis, C; Mollenhorst, H; Heesterbeek, J A P; Hogeveen, H

    2010-08-01

    %. This indicates the inability to detect all CM cases based on sensor data alone. Sensitivity levels varied largely when the decision tree was validated per herd. This trend was confirmed when decision trees were trained using data from 8 herds and tested on data from the ninth herd. This indicates that when using the decision tree as a generic CM detection model in practice, some herds will continue having difficulties in detecting CM using mastitis alert lists, whereas others will perform well. Copyright (c) 2010 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.

  18. A GIS-based atmospheric dispersion model for pollutants emitted by complex source areas.

    Science.gov (United States)

    Teggi, Sergio; Costanzini, Sofia; Ghermandi, Grazia; Malagoli, Carlotta; Vinceti, Marco

    2018-01-01

    Gaussian dispersion models are widely used to simulate the concentrations and deposition fluxes of pollutants emitted by source areas. Very often, the calculation time limits the number of sources and receptors, and the geometry of the sources must be simple and without holes. This paper presents CAREA, a new GIS-based Gaussian model for complex source areas. CAREA was coded in the Python language, and is largely based on a simplified formulation of the very popular and recognized AERMOD model. The model allows users to define in a GIS environment thousands of gridded or scattered receptors and thousands of complex sources with hundreds of vertices and holes. CAREA computes ground level, or near ground level, concentrations and dry deposition fluxes of pollutants. The input/output and the runs of the model can be completely managed in a GIS environment (e.g. inside a GIS project). The paper presents the CAREA formulation and its applications to very complex test cases. The tests show that the processing times are satisfactory and that the definition of sources and receptors and the output retrieval are quite easy in a GIS environment. CAREA and AERMOD are compared using simple and reproducible test cases. The comparison shows that CAREA satisfactorily reproduces AERMOD simulations and is considerably faster than AERMOD. Copyright © 2017 Elsevier B.V. All rights reserved.
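
    CAREA's simplified formulation is not reproduced in the abstract, but the Gaussian kernel that such models evaluate at every receptor can be sketched. The snippet below computes a standard ground-level Gaussian plume concentration for a single point source over a grid of receptors; the emission rate, wind speed, release height and dispersion-coefficient growth rates are assumed, illustrative values.

```python
import numpy as np

def gaussian_plume_ground(x, y, q=1.0, u=3.0, h=10.0, ay=0.08, az=0.06):
    """Ground-level concentration (g/m^3) downwind of a point source.

    x: downwind distance (m), y: crosswind distance (m), q: emission rate (g/s),
    u: wind speed (m/s), h: effective release height (m),
    ay/az: crude sigma-y / sigma-z growth coefficients (assumed values).
    """
    x = np.maximum(x, 1.0)                       # avoid the singularity at x = 0
    sigma_y, sigma_z = ay * x, az * x
    return (q / (np.pi * u * sigma_y * sigma_z)
            * np.exp(-y**2 / (2 * sigma_y**2))
            * np.exp(-h**2 / (2 * sigma_z**2)))

# Concentrations on a regular receptor grid (the GIS raster of receptors).
xs, ys = np.meshgrid(np.linspace(0, 2000, 101), np.linspace(-500, 500, 51))
conc = gaussian_plume_ground(xs, ys)
print("peak ground-level concentration (g/m^3):", conc.max())
```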

  19. The Studies of Decision Tree in Estimation of Breast Cancer Risk by Using Polymorphism Nucleotide

    Directory of Open Access Journals (Sweden)

    Frida Seyedmir

    2017-07-01

    Full Text Available Abstract Introduction: The decision tree is a data mining tool for collecting, accurately predicting, and sifting information from massive amounts of data, and it is used widely in the fields of computational biology and bioinformatics, where it can be applied to predict diseases, including breast cancer. The use of genomic data, including single nucleotide polymorphisms (SNPs), is a very important factor in predicting disease risk. Seven important SNPs among hundreds of thousands of genetic markers have been identified as factors associated with breast cancer. The objective of this study is to evaluate the effect of the training data on the prediction error of a decision tree estimating breast cancer risk from single nucleotide polymorphism genotypes. Methods: Breast cancer risk associated with the SNPs was calculated using a risk formula, and a decision tree was used to predict the probability of disease from the single nucleotide polymorphisms. Seven SNPs with different odds ratios associated with breast cancer were considered, and a C4.5 decision tree model was coded and designed in the C# (2013) programming language; the decision tree created with this code used the four most strongly associated SNPs. The decision tree error was assessed both for the coded implementation and in WEKA, and the percentage accuracy of the decision tree in predicting breast cancer was calculated. The training samples were obtained by systematic sampling. Two scenarios were evaluated with the coded implementation, and three scenarios with WEKA, using different data sets and different numbers of training and test records. Results: In both coding scenarios, increasing the training percentage from 66.66 to 86.42 reduced the error from 55.56 to 9.09. Likewise, running WEKA on three scenarios with different data sets and different numbers of training and test records, increasing the number of records from 81 to 2187 decreased the error rate from 48.15 to 13
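
    The reported experiment, training a decision tree on SNP genotypes and watching the error fall as the training fraction grows, can be approximated with the sketch below. scikit-learn's CART implementation stands in for C4.5, and the genotypes and per-SNP effect sizes are synthetic placeholders rather than the study's seven breast-cancer SNPs.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(7)
n_samples, n_snps = 2187, 7

# Genotypes coded as 0/1/2 copies of the risk allele; assumed per-SNP log odds.
X = rng.integers(0, 3, size=(n_samples, n_snps))
log_or = np.array([0.4, 0.3, 0.25, 0.2, 0.15, 0.1, 0.1])
risk = 1 / (1 + np.exp(-(X @ log_or - 1.5)))
y = rng.random(n_samples) < risk

# Test error as a function of the training fraction, mirroring the reported trend.
for train_frac in (0.67, 0.86):
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, train_size=train_frac, random_state=0, stratify=y)
    tree = DecisionTreeClassifier(criterion="entropy", min_samples_leaf=5,
                                  random_state=0)
    tree.fit(X_tr, y_tr)
    err = 1 - tree.score(X_te, y_te)
    print(f"training fraction {train_frac:.2f}: test error {100 * err:.1f}%")
```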

  20. GIS based application tool -- history of East India Company

    Science.gov (United States)

    Phophaliya, Sudhir

    The emphasis of the thesis is to build an intuitive and robust GIS (Geographic Information Systems) tool which gives in-depth information on the history of the East India Company. The GIS tool also incorporates various achievements of the East India Company which helped it establish its business all over the world, especially in India. The user has the option to select these movements and acts by clicking on any of the marked states on the world map. The world map also incorporates key features for the East India Company, such as the landing of the East India Company in India, the Darjeeling Tea Establishment, the East India Company Stock Redemption Act, etc. The user can learn more about these features simply by clicking on each of them. The primary focus of the tool is to give the user a unique insight into the East India Company; for this the tool has several HTML (Hypertext Markup Language) pages which the user can select. These HTML pages give information on various topics like the first voyage, trade with China, the 1857 revolt, etc. The tool has been developed in Java. For the Indian map MOJO (Map Objects Java Objects) is used. MOJO is developed by ESRI. The major features shown on the world map were designed using MOJO. MOJO made it easy to incorporate the statistical data with these features. The user interface was intentionally kept simple and easy to use. To keep the user engaged, key aspects are explained using HTML pages. The idea is that pictures will help the user garner interest in the history of the East India Company.

  1. Computerized Adaptive Test vs. decision trees: Development of a support decision system to identify suicidal behavior.

    Science.gov (United States)

    Delgado-Gomez, D; Baca-Garcia, E; Aguado, D; Courtet, P; Lopez-Castroman, J

    2016-12-01

    Several Computerized Adaptive Tests (CATs) have been proposed to facilitate assessments in mental health. These tests are built in a standard way, disregarding useful and usually available information not included in the assessment scales, such as the history of suicide attempts, that could increase the precision and utility of CATs. Using the items of a previously developed scale for suicidal risk, we compared the performance of a standard CAT and a decision tree in a support decision system to identify suicidal behavior. We included the history of past suicide attempts as a class for the separation of patients in the decision tree. The decision tree needed an average of four items to achieve accuracy similar to that of a standard CAT with nine items. The accuracy of the decision tree, obtained after 25 cross-validations, was 81.4%. A shortened test adapted for the separation of suicidal and non-suicidal patients was developed. CATs can be very useful tools for the assessment of suicidal risk. However, standard CATs do not use all the information that is available. A decision tree can improve the precision of the assessment since it is constructed using a priori information. Copyright © 2016 Elsevier B.V. All rights reserved.

  2. Using Decision Trees to Characterize Verbal Communication During Change and Stuck Episodes in the Therapeutic Process

    Directory of Open Access Journals (Sweden)

    Víctor Hugo Masías

    2015-04-01

    Full Text Available Methods are needed for creating models to characterize verbal communication between therapists and their patients that are suitable for teaching purposes without losing analytical potential. A technique meeting these twin requirements is proposed that uses decision trees to identify both change and stuck episodes in therapist-patient communication. Three decision tree algorithms (C4.5, NBtree, and REPtree are applied to the problem of characterizing verbal responses into change and stuck episodes in the therapeutic process. The data for the problem is derived from a corpus of 8 successful individual therapy sessions with 1,760 speaking turns in a psychodynamic context. The decision tree model that performed best was generated by the C4.5 algorithm. It delivered 15 rules characterizing the verbal communication in the two types of episodes. Decision trees are a promising technique for analyzing verbal communication during significant therapy events and have much potential for use in teaching practice on changes in therapeutic communication. The development of pedagogical methods using decision trees can support the transmission of academic knowledge to therapeutic practice.

  3. [Comparison of Discriminant Analysis and Decision Trees for the Detection of Subclinical Keratoconus].

    Science.gov (United States)

    Kleinhans, Sonja; Herrmann, Eva; Kohnen, Thomas; Bühren, Jens

    2017-08-15

    Background Iatrogenic keratectasia is one of the most dreaded complications of refractive surgery. In most cases, keratectasia develops after refractive surgery of eyes suffering from subclinical stages of keratoconus with few or no signs. Unfortunately, there has been no reliable procedure for the early detection of keratoconus. In this study, we used binary decision trees (recursive partitioning) to assess their suitability for discrimination between normal eyes and eyes with subclinical keratoconus. Patients and Methods The method of decision tree analysis was compared with discriminant analysis which has shown good results in previous studies. Input data were 32 eyes of 32 patients with newly diagnosed keratoconus in the contralateral eye and preoperative data of 10 eyes of 5 patients with keratectasia after laser in-situ keratomileusis (LASIK). The control group was made up of 245 normal eyes after LASIK and 12-month follow-up without any signs of iatrogenic keratectasia. Results Decision trees gave better accuracy and specificity than did discriminant analysis. The sensitivity of decision trees was lower than the sensitivity of discriminant analysis. Conclusion On the basis of the patient population of this study, decision trees did not prove to be superior to linear discriminant analysis for the detection of subclinical keratoconus. Georg Thieme Verlag KG Stuttgart · New York.

  4. Using decision trees to characterize verbal communication during change and stuck episodes in the therapeutic process.

    Science.gov (United States)

    Masías, Víctor H; Krause, Mariane; Valdés, Nelson; Pérez, J C; Laengle, Sigifredo

    2015-01-01

    Methods are needed for creating models to characterize verbal communication between therapists and their patients that are suitable for teaching purposes without losing analytical potential. A technique meeting these twin requirements is proposed that uses decision trees to identify both change and stuck episodes in therapist-patient communication. Three decision tree algorithms (C4.5, NBTree, and REPTree) are applied to the problem of characterizing verbal responses into change and stuck episodes in the therapeutic process. The data for the problem is derived from a corpus of 8 successful individual therapy sessions with 1760 speaking turns in a psychodynamic context. The decision tree model that performed best was generated by the C4.5 algorithm. It delivered 15 rules characterizing the verbal communication in the two types of episodes. Decision trees are a promising technique for analyzing verbal communication during significant therapy events and have much potential for use in teaching practice on changes in therapeutic communication. The development of pedagogical methods using decision trees can support the transmission of academic knowledge to therapeutic practice.

  5. Methodology for a GIS-based damage assessment for researchers following large scale disasters

    Science.gov (United States)

    Crawford, Patrick Shane

    The 1990s were designated the International Decade for Natural Disaster Reduction by the United Nations General Assembly. This push to decrease loss of life, property destruction, and social and economic disruption brought advancements in disaster management, including damage assessment. Damage assessment in the wake of natural and manmade disasters is a useful tool for government agencies, insurance companies, and researchers. As technologies evolve, damage assessment processes constantly evolve as well. Alongside the advances in Geographic Information Systems (GIS), remote sensing, and Global Positioning System (GPS) technology, as well as the growing awareness of the need for a standard operating procedure for GIS-based damage assessment and the need to make the damage assessment process as quick and accurate as possible, damage assessment procedures are becoming easier to execute and the results are becoming more accurate and robust. With these technological breakthroughs, multi-disciplinary damage assessment reconnaissance teams have become more efficient in their assessment methods through better organization and more robust through the addition of new datasets. Damage assessment personnel are aided by software tools that offer high-level analysis and increasingly rapid damage assessment methods. GIS software has advanced the damage assessment methods of these teams by combining remotely sensed aerial imagery, GPS, and other technologies to expand the uses of the data. GIS allows researchers to use aerial imagery to show field-collected data in the geographic location where it was collected so that information can be revisited, measurements can be taken, and data can be disseminated to other researchers and the public. The GIS-based data available to the reconnaissance team includes photographs of damage, worksheets, calculations, voice messages collected while studying the affected area, and many other datasets which are based on the type of disaster and the

  6. A positioning system for forest diseases and pests based on GIS and PTZ camera

    International Nuclear Information System (INIS)

    Wang, Z B; Zhao, F F; Wang, C B; Wang, L L

    2014-01-01

    Forest diseases and pests cause enormous economic losses and ecological damage every year in China. To prevent and control forest diseases and pests, the key is to obtain accurate information in a timely manner. In order to improve the monitoring coverage rate and economize on manpower, a cooperative investigation model for forest diseases and pests is put forward. It is composed of a video positioning system and manual reconnaissance with mobile GIS embedded in a PDA. The video system is used to scan the disaster area, and is particularly effective where trees are withered. Forest disease prevention and control workers can check the disaster area with the PDA system. To support this investigation model, we developed a positioning algorithm and a positioning system. The positioning algorithm is based on a DEM and a PTZ camera, and its accuracy has been validated. The software consists of a 3D GIS subsystem, a 2D GIS subsystem, a video control subsystem and a disaster positioning subsystem. The 3D GIS subsystem makes positioning visual and easy to operate in practice. The 2D GIS subsystem can output disaster thematic maps. The video control subsystem can change the Pan/Tilt/Zoom of a digital camera remotely to focus on a suspected area. The disaster positioning subsystem implements the positioning algorithm. Practical application in forest departments has shown that the positioning system can be used to observe forest diseases and pests.

  7. Two Trees: Migrating Fault Trees to Decision Trees for Real Time Fault Detection on International Space Station

    Science.gov (United States)

    Lee, Charles; Alena, Richard L.; Robinson, Peter

    2004-01-01

    Starting from an ISS fault tree example, we present a method to convert fault trees to decision trees. The method shows that visualization of the root cause of a fault becomes easier and that manipulating the tree becomes more programmatic via available decision tree programs. The visualization of decision trees for diagnostics is straightforward and easy to understand. For ISS real-time fault diagnosis, the status of the systems can be shown by running the signals through the trees and seeing where they stop. Another advantage of using decision trees is that the trees can learn fault patterns and predict future faults from historical data. The learning is not limited to static data sets: by accumulating real-time data, the decision trees can gain and store fault patterns and recognize them when they recur.

  8. GIS-Based Planning and Modeling for Renewable Energy: Challenges and Future Research Avenues

    Directory of Open Access Journals (Sweden)

    Bernd Resch

    2014-05-01

    Full Text Available In the face of the broad political call for an “energy turnaround”, we are currently witnessing three essential trends with regard to energy infrastructure planning, energy generation and storage: from planned production towards fluctuating production on the basis of renewable energy sources, from centralized generation towards decentralized generation and from expensive energy carriers towards cost-free energy carriers. These changes necessitate considerable modifications of the energy infrastructure. Even though most of these modifications are inherently motivated by geospatial questions and challenges, the integration of energy system models and Geographic Information Systems (GIS) is still in its infancy. This paper analyzes the shortcomings of previous approaches in using GIS in renewable energy-related projects, extracts distinct challenges from these previous efforts and, finally, defines a set of core future research avenues for GIS-based energy infrastructure planning with a focus on the use of renewable energy. These future research avenues comprise the availability of base data and their “geospatial awareness”, the development of a generic and unified data model, the usage of volunteered geographic information (VGI) and crowdsourced data in analysis processes, the integration of 3D building models and 3D data analysis, the incorporation of network topologies into GIS, the harmonization of the heterogeneous views on aggregation issues in the fields of energy and GIS, fine-grained energy demand estimation from freely-available data sources, decentralized storage facility planning, the investigation of GIS-based public participation mechanisms, the transition from purely structural to operational planning, data privacy aspects and, finally, the development of a new dynamic power market design.

  9. Development of integrated evaluation information system for sandstone-hosted uranium deposits based on ComGIS

    International Nuclear Information System (INIS)

    Ke Dan; Han Shaoyang; Hou Huiqun; Hu Shuiqing

    2006-01-01

    Component GIS has many merits, such as flexible functionality and ease of use, which make it very suitable for the development of professional GIS application systems in the geoscience field. In the process of exploration and evaluation of uranium resources, a computer application system has become a powerful tool owing to the following functions: professional processing, management and storage of data, spatial analysis, visualization and mapping, etc. In this paper a technological routine is presented for the development of an integrated evaluation information system for sandstone-hosted uranium deposits based on ArcGIS Engine. General GIS functions can be realized with ArcGIS Engine easily and quickly, so the developers can concentrate their attention on developing professional functions, which shortens the development period and makes the system more efficient. It is shown that the system developed on the ArcGIS Engine platform is stable in operation, flexible in functions and professional in application. (authors)

  10. Comparative study of biodegradability prediction of chemicals using decision trees, functional trees, and logistic regression.

    Science.gov (United States)

    Chen, Guangchao; Li, Xuehua; Chen, Jingwen; Zhang, Ya-Nan; Peijnenburg, Willie J G M

    2014-12-01

    Biodegradation is the principal environmental dissipation process of chemicals. As such, it is a dominant factor determining the persistence and fate of organic chemicals in the environment, and is therefore of critical importance to chemical management and regulation. In the present study, the authors developed in silico methods assessing biodegradability based on a large heterogeneous set of 825 organic compounds, using the techniques of the C4.5 decision tree, the functional inner regression tree, and logistic regression. External validation was subsequently carried out by 2 independent test sets of 777 and 27 chemicals. As a result, the functional inner regression tree exhibited the best predictability with predictive accuracies of 81.5% and 81.0%, respectively, on the training set (825 chemicals) and test set I (777 chemicals). Performance of the developed models on the 2 test sets was subsequently compared with that of the Estimation Program Interface (EPI) Suite Biowin 5 and Biowin 6 models, which also showed a better predictability of the functional inner regression tree model. The model built in the present study exhibits a reasonable predictability compared with existing models while possessing a transparent algorithm. Interpretation of the mechanisms of biodegradation was also carried out based on the models developed. © 2014 SETAC.
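
    As an illustration of the comparison set-up (a decision tree against logistic regression, judged on an external test set), the sketch below uses synthetic molecular descriptors; the 825-compound training set, the 777-compound external test set and the functional inner regression tree itself are not reproduced, and the descriptor generator is invented.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(3)

def make_set(n):
    """Synthetic stand-in for molecular descriptors and a biodegradability label."""
    X = rng.normal(size=(n, 6))
    y = (X[:, 0] - 0.5 * X[:, 1] + 0.3 * X[:, 2] + rng.normal(0, 0.8, n)) > 0
    return X, y

X_train, y_train = make_set(825)   # training-set size as in the study
X_test, y_test = make_set(777)     # external test set I

models = {
    "C4.5-like decision tree": DecisionTreeClassifier(criterion="entropy",
                                                       min_samples_leaf=10,
                                                       random_state=0),
    "logistic regression": LogisticRegression(max_iter=1000),
}
for name, model in models.items():
    model.fit(X_train, y_train)
    acc = accuracy_score(y_test, model.predict(X_test))
    print(f"{name}: external predictive accuracy {100 * acc:.1f}%")
```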

  11. Comparison between Decision Tree and Genetic Programming to distinguish healthy from stroke postural sway patterns.

    Science.gov (United States)

    Marrega, Luiz H G; Silva, Simone M; Manffra, Elisangela F; Nievola, Julio C

    2015-01-01

    Maintaining balance is a motor task of crucial importance for humans to perform their daily activities safely and independently. Studies in the field of Artificial Intelligence have considered different classification methods in order to distinguish healthy subjects from patients with certain motor disorders based on their postural strategies during the balance control. The main purpose of this paper is to compare the performance between Decision Tree (DT) and Genetic Programming (GP) - both classification methods of easy interpretation by health professionals - to distinguish postural sway patterns produced by healthy and stroke individuals based on 16 widely used posturographic variables. For this purpose, we used a posturographic dataset of time-series of center-of-pressure displacements derived from 19 stroke patients and 19 healthy matched subjects in three quiet standing tasks of balance control. Then, DT and GP models were trained and tested under two different experiments where accuracy, sensitivity and specificity were adopted as performance metrics. The DT method has performed statistically significant (P < 0.05) better in both cases, showing for example an accuracy of 72.8% against 69.2% from GP in the second experiment of this paper.

  12. Evaluating terrain based criteria for snow avalanche exposure ratings using GIS

    Science.gov (United States)

    Delparte, Donna; Jamieson, Bruce; Waters, Nigel

    2010-05-01

    Snow avalanche terrain in backcountry regions of Canada is increasingly being assessed based upon the Avalanche Terrain Exposure Scale (ATES). ATES is a terrain based classification introduced in 2004 by Parks Canada to identify "simple", "challenging" and "complex" backcountry areas. The ATES rating system has been applied to well over 200 backcountry routes, has been used in guidebooks, trailhead signs and maps and is part of the trip planning component of the AVALUATOR™, a simple decision-support tool for backcountry users. Geographic Information Systems (GIS) offers a means to model and visualize terrain based criteria through the use of digital elevation model (DEM) and land cover data. Primary topographic variables such as slope, aspect and curvature are easily derived from a DEM and are compatible with the equivalent evaluation criteria in ATES. Other components of the ATES classification are difficult to extract from a DEM as they are not strictly terrain based. An overview is provided of the terrain variables that can be generated from DEM and land cover data; criteria from ATES which are not clearly terrain based are identified for further study or revision. The second component of this investigation was the development of an algorithm for inputting suitable ATES criteria into a GIS, thereby mimicking the process avalanche experts use when applying the ATES classification to snow avalanche terrain. GIS based classifications were compared to existing expert assessments for validity. The advantage of automating the ATES classification process through GIS is to assist avalanche experts with categorizing and mapping remote backcountry terrain.
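
    The DEM-derived part of such a classification is straightforward to prototype. The sketch below computes slope and aspect from a synthetic elevation grid and bins cells into ATES-like classes using assumed slope thresholds only; the published scale also weighs forest density, terrain traps and other criteria, so this is a terrain-variable illustration rather than the ATES algorithm.

```python
import numpy as np

# Synthetic DEM: elevations (m) on a 10 m grid with a ridge-like bump.
cell = 10.0
x, y = np.meshgrid(np.linspace(0, 1000, 101), np.linspace(0, 1000, 101))
dem = 1500 + 0.4 * x + 300 * np.exp(-((x - 500) ** 2 + (y - 500) ** 2) / 1e5)

# Primary topographic variables derived from the DEM.
dzdy, dzdx = np.gradient(dem, cell)                    # axis 0 = y, axis 1 = x
slope_deg = np.degrees(np.arctan(np.hypot(dzdx, dzdy)))
aspect_deg = np.degrees(np.arctan2(-dzdx, dzdy)) % 360

# Illustrative slope-based binning into ATES-like classes (assumed thresholds).
ates = np.full(dem.shape, "simple", dtype=object)
ates[slope_deg >= 25] = "challenging"
ates[slope_deg >= 35] = "complex"

for cls in ("simple", "challenging", "complex"):
    print(f"{cls:>11}: {100 * (ates == cls).mean():5.1f}% of cells")
```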

  13. GIS-based landscape design research: Stourhead landscape garden as a case study

    Directory of Open Access Journals (Sweden)

    Steffen Nijhuis

    2017-11-01

    Full Text Available Landscape design research is important for cultivating spatial intelligence in landscape architecture. This study explores GIS (geographic information systems) as a tool for landscape design research - investigating landscape designs to understand them as architectonic compositions (architectonic plan analysis). The concept 'composition' refers to a conceivable arrangement, an architectural expression of a mental construct that is legible and open to interpretation. Landscape architectonic compositions and their representations embody a great wealth of design knowledge as objects of our material culture and reflect the possible treatment of the ground, space, image and program as a characteristic coherence. By exploring landscape architectonic compositions with GIS, design researchers can acquire design knowledge that can be used in the creation and refinement of a design. The research aims to identify and illustrate the potential role of GIS as a tool in landscape design research, so as to provide insight into the possibilities and limitations of using GIS in this capacity. The critical, information-oriented case of Stourhead landscape garden (Wiltshire, UK), an example of a designed landscape that covers the scope and remit of landscape architecture design, forms the heart of the study. The exploration of Stourhead by means of GIS can be understood as a plausibility probe. Here the case study is considered a form of 'quasi-experiment', testing the hypothesis and generating a learning process that constitutes a prerequisite for advanced understanding, while using an adjusted version of the framework for landscape design analysis by Steenbergen and Reh (2003). This is a theoretically informed analytical method based on the formal interpretation of the landscape architectonic composition addressing four landscape architectonic categories: the basic, the spatial, the symbolic and the programmatic form. This study includes new aspects to be

  14. Geomorphodiversity derived by a GIS-based geomorphological map: case study the Czech Republic

    Czech Academy of Sciences Publication Activity Database

    Demek, J.; Kirchner, Karel; Mackovčin, P.; Slavík, P.

    2011-01-01

    Roč. 55, č. 4 (2011), s. 415-436 ISSN 0372-8854 Institutional research plan: CEZ:AV0Z30860518 Keywords : geomorphodiversity * GIS geomorphological data base * GIS-geomorphological map * Czech Republic Subject RIV: DE - Earth Magnetism, Geodesy, Geography Impact factor: 0.642, year: 2011 http://www.ingentaconnect.com/content/schweiz/zfg/2011/00000055/00000004/art00001?token=004d10697e442f2067217d763b442c233e46533f34687627504541676249266d656cbe1a808f2

  15. Design and realization of tourism spatial decision support system based on GIS

    Science.gov (United States)

    Ma, Zhangbao; Qi, Qingwen; Xu, Li

    2008-10-01

    In this paper, the existing problems of current tourism management information systems are analyzed. GIS, tourism and spatial decision support systems are introduced, and the application of geographic information system technology and spatial decision support to tourism management, together with the establishment of a GIS-based tourism spatial decision support system, is proposed. The overall system structure, the hardware and software environment, the database design and the module structure design of this system are introduced. Finally, the methods used to realize the system's core functions are elaborated.

  16. Total Path Length and Number of Terminal Nodes for Decision Trees

    KAUST Repository

    Hussain, Shahid

    2014-09-13

    This paper presents a new tool for the study of relationships between total path length (average depth) and number of terminal nodes for decision trees. These relationships are important from the point of view of decision tree optimization. In this particular case of total path length and number of terminal nodes, the relationship between the two cost functions is closely related to a space-time trade-off. In addition to an algorithm to compute the relationships, the paper also presents results of experiments with datasets from the UCI ML Repository. These experiments show how the two cost functions behave for a given decision table, and the resulting plots show the Pareto frontier, or Pareto set, of optimal points. Furthermore, in some cases this Pareto frontier is a singleton, showing the total optimality of decision trees for the given decision table.
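
    The trade-off studied here, average depth against the number of terminal nodes, can also be probed empirically with any tree learner by sweeping the permitted number of leaves and recording both costs. The sketch below does this with a CART tree on a synthetic decision table; it only traces candidate cost pairs and is not the dynamic-programming tool described in the paper.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=500, n_features=8, random_state=0)

def average_depth(tree, X):
    """Mean root-to-leaf path length over the rows of the decision table."""
    # decision_path returns one node-indicator row per sample; its row sum is
    # the number of nodes visited, i.e. path length + 1.
    node_counts = tree.decision_path(X).sum(axis=1)
    return float(np.asarray(node_counts).mean() - 1)

for max_leaves in (2, 4, 8, 16, 32, 64):
    tree = DecisionTreeClassifier(max_leaf_nodes=max_leaves, random_state=0)
    tree.fit(X, y)
    print(f"{tree.get_n_leaves():3d} terminal nodes -> "
          f"average depth {average_depth(tree, X):.2f}")
```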

  17. Minimizing size of decision trees for multi-label decision tables

    KAUST Repository

    Azad, Mohammad

    2014-09-29

    We used decision trees as a model to discover knowledge from multi-label decision tables, where each row has a set of decisions attached to it and our goal is to find one arbitrary decision from that set for each row. The size of the decision tree can be small as well as very large. We study here different greedy as well as dynamic programming algorithms to minimize the size of the decision trees. Comparing against the optimal result from the dynamic programming algorithm, we found that some greedy algorithms produce results which are close to optimal for the minimization of the number of nodes (at most 18.92% difference), the number of nonterminal nodes (at most 20.76% difference), and the number of terminal nodes (at most 18.71% difference).

  18. Comparison of Taxi Time Prediction Performance Using Different Taxi Speed Decision Trees

    Science.gov (United States)

    Lee, Hanbong

    2017-01-01

    In the STBO modeler and tactical surface scheduler for ATD-2 project, taxi speed decision trees are used to calculate the unimpeded taxi times of flights taxiing on the airport surface. The initial taxi speed values in these decision trees did not show good prediction accuracy of taxi times. Using the more recent, reliable surveillance data, new taxi speed values in ramp area and movement area were computed. Before integrating these values into the STBO system, we performed test runs using live data from Charlotte airport, with different taxi speed settings: 1) initial taxi speed values and 2) new ones. Taxi time prediction performance was evaluated by comparing various metrics. The results show that the new taxi speed decision trees can calculate the unimpeded taxi-out times more accurately.

  19. Minimizing size of decision trees for multi-label decision tables

    KAUST Repository

    Azad, Mohammad; Moshkov, Mikhail

    2014-01-01

    We used decision trees as a model to discover knowledge from multi-label decision tables, where each row has a set of decisions attached to it and our goal is to find one arbitrary decision from that set for each row. The size of the decision tree can be small as well as very large. We study here different greedy as well as dynamic programming algorithms to minimize the size of the decision trees. Comparing against the optimal result from the dynamic programming algorithm, we found that some greedy algorithms produce results which are close to optimal for the minimization of the number of nodes (at most 18.92% difference), the number of nonterminal nodes (at most 20.76% difference), and the number of terminal nodes (at most 18.71% difference).

  20. A web GIS based integrated flood assessment modeling tool for coastal urban watersheds

    Science.gov (United States)

    Kulkarni, A. T.; Mohanty, J.; Eldho, T. I.; Rao, E. P.; Mohan, B. K.

    2014-03-01

    Urban flooding has become an increasingly important issue in many parts of the world. In this study, an integrated flood assessment model (IFAM) is presented for the coastal urban flood simulation. A web based GIS framework has been adopted to organize the spatial datasets for the study area considered and to run the model within this framework. The integrated flood model consists of a mass balance based 1-D overland flow model, 1-D finite element based channel flow model based on diffusion wave approximation and a quasi 2-D raster flood inundation model based on the continuity equation. The model code is written in MATLAB and the application is integrated within a web GIS server product viz: Web Gram Server™ (WGS), developed at IIT Bombay, using Java, JSP and JQuery technologies. Its user interface is developed using open layers and the attribute data are stored in MySQL open source DBMS. The model is integrated within WGS and is called via Java script. The application has been demonstrated for two coastal urban watersheds of Navi Mumbai, India. Simulated flood extents for extreme rainfall event of 26 July, 2005 in the two urban watersheds of Navi Mumbai city are presented and discussed. The study demonstrates the effectiveness of the flood simulation tool in a web GIS environment to facilitate data access and visualization of GIS datasets and simulation results.

  1. Multivariate analysis of flow cytometric data using decision trees

    Directory of Open Access Journals (Sweden)

    Svenja Simon

    2012-04-01

    Full Text Available Characterization of the response of the host immune system is important in understanding the bidirectional interactions between the host and microbial pathogens. For research on the host side, flow cytometry has become one of the major tools in immunology. Advances in technology and reagents now allow the simultaneous assessment of multiple markers at the single-cell level, generating multidimensional data sets that require multivariate statistical analysis. We explored the explanatory power of the supervised machine learning method called 'induction of decision trees' in flow cytometric data. In order to examine whether the production of a certain cytokine is dependent on other cytokines, datasets from intracellular staining for six cytokines with complex patterns of co-expression were analyzed by induction of decision trees. After weighting the data according to their class probabilities, we created a total of 13,392 different decision trees for each given cytokine with different parameter settings. For a more realistic estimation of the decision trees' quality, we used stratified 5-fold cross-validation and chose the 'best' tree according to a combination of different quality criteria. While some of the decision trees reflected previously known co-expression patterns, we found that the expression of some cytokines was not only dependent on the co-expression of others per se, but was also dependent on the intensity of expression. Thus, for the first time we successfully used induction of decision trees for the analysis of high-dimensional flow cytometric data and demonstrated the feasibility of this method to reveal structural patterns in such data sets.
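
    The essential machinery, class-probability weighting, decision-tree induction and stratified 5-fold cross-validation over cytokine co-expression data, can be reproduced in miniature as follows. The marker panel and fluorescence intensities are synthetic stand-ins, and a CART tree with balanced class weights substitutes for the original induction and weighting scheme.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import StratifiedKFold, cross_val_score

rng = np.random.default_rng(5)
markers = ["IFNg", "TNFa", "IL2", "IL4", "IL10"]   # hypothetical cytokine panel
n_cells = 5000

# Per-cell intensities; the target is production of a sixth cytokine, made
# (synthetically) dependent on both co-expression and expression intensity.
X = rng.lognormal(mean=1.0, sigma=0.8, size=(n_cells, len(markers)))
y = ((X[:, 0] > 4.0) & (X[:, 1] > 2.5)) | (X[:, 2] > 8.0)

# class_weight="balanced" mirrors weighting the data by class probabilities.
tree = DecisionTreeClassifier(class_weight="balanced", max_depth=5,
                              min_samples_leaf=50, random_state=0)
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
scores = cross_val_score(tree, X, y, cv=cv, scoring="balanced_accuracy")
print("stratified 5-fold balanced accuracy per fold:", scores.round(3))
```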

  2. Decision Trees Predicting Tumor Shrinkage for Head and Neck Cancer: Implications for Adaptive Radiotherapy.

    Science.gov (United States)

    Surucu, Murat; Shah, Karan K; Mescioglu, Ibrahim; Roeske, John C; Small, William; Choi, Mehee; Emami, Bahman

    2016-02-01

    To develop decision trees predicting for tumor volume reduction in patients with head and neck (H&N) cancer using pretreatment clinical and pathological parameters. Forty-eight patients treated with definitive concurrent chemoradiotherapy for squamous cell carcinoma of the nasopharynx, oropharynx, oral cavity, or hypopharynx were retrospectively analyzed. These patients were rescanned at a median dose of 37.8 Gy and replanned to account for anatomical changes. The percentages of gross tumor volume (GTV) change from initial to rescan computed tomography (CT; %GTVΔ) were calculated. Two decision trees were generated to correlate %GTVΔ in primary and nodal volumes with 14 characteristics including age, gender, Karnofsky performance status (KPS), site, human papilloma virus (HPV) status, tumor grade, primary tumor growth pattern (endophytic/exophytic), tumor/nodal/group stages, chemotherapy regimen, and primary, nodal, and total GTV volumes in the initial CT scan. The C4.5 Decision Tree induction algorithm was implemented. The median %GTVΔ for primary, nodal, and total GTVs was 26.8%, 43.0%, and 31.2%, respectively. Type of chemotherapy, age, primary tumor growth pattern, site, KPS, and HPV status were the most predictive parameters for primary %GTVΔ decision tree, whereas for nodal %GTVΔ, KPS, site, age, primary tumor growth pattern, initial primary GTV, and total GTV volumes were predictive. Both decision trees had an accuracy of 88%. There can be significant changes in primary and nodal tumor volumes during the course of H&N chemoradiotherapy. Considering the proposed decision trees, radiation oncologists can select patients predicted to have high %GTVΔ, who would theoretically gain the most benefit from adaptive radiotherapy, in order to better use limited clinical resources. © The Author(s) 2015.

  3. ARABIC TEXT CLASSIFICATION USING NEW STEMMER FOR FEATURE SELECTION AND DECISION TREES

    Directory of Open Access Journals (Sweden)

    SAID BAHASSINE

    2017-06-01

    Full Text Available Text classification is the process of assigning unclassified text to appropriate classes based on its content. The most prevalent representation for text classification is the bag-of-words vector. In this representation, the words that appear in documents often have multiple morphological structures and grammatical forms. In most cases, these morphological variants of a word belong to the same category. In the first part of this paper, a new stemming algorithm was developed in which each term of a given document is represented by its root. In the second part, a comparative study is conducted of the impact of two stemming algorithms, namely Khoja's stemmer and our new stemmer (referred to hereafter as the origin-stemmer), on Arabic text classification. This investigation was carried out using chi-square as a feature selection method to reduce the dimensionality of the feature space and a decision tree classifier. In order to evaluate the performance of the classifier, this study used a corpus that consists of 5070 documents independently classified into six categories: sport, entertainment, business, Middle East, switch and world, on the WEKA toolkit. The recall, f-measure and precision measures are used to compare the performance of the obtained models. The experimental results show that text classification using the origin-stemmer outperforms classification using Khoja's stemmer. The f-measure was 92.9% in the sport category and 89.1% in the business category.
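
    The pipeline described, stem the terms, select features by chi-square, then classify with a decision tree, maps directly onto standard tooling. The sketch below uses scikit-learn rather than WEKA, and toy English documents stand in for the Arabic corpus and the stemming step, so it only illustrates the structure of the experiment.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.tree import DecisionTreeClassifier
from sklearn.pipeline import Pipeline

# Toy corpus standing in for the 5070-document Arabic collection.
docs = ["the team won the match", "the striker scored a goal",
        "shares fell on the stock market", "the company reported profits",
        "the film premiered last night", "the singer released an album"]
labels = ["sport", "sport", "business", "business",
          "entertainment", "entertainment"]

pipeline = Pipeline([
    ("bow", CountVectorizer()),           # bag-of-words (stemming omitted here)
    ("chi2", SelectKBest(chi2, k=10)),    # chi-square feature selection
    ("tree", DecisionTreeClassifier(criterion="entropy", random_state=0)),
])
pipeline.fit(docs, labels)
print(pipeline.predict(["the goalkeeper saved the match"]))
```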

  4. OmniGA: Optimized Omnivariate Decision Trees for Generalizable Classification Models

    KAUST Repository

    Magana-Mora, Arturo

    2017-06-14

    Classification problems from different domains vary in complexity, size, and imbalance of the number of samples from different classes. Although several classification models have been proposed, selecting the right model and parameters for a given classification task to achieve good performance is not trivial. Therefore, there is a constant interest in developing novel robust and efficient models suitable for a great variety of data. Here, we propose OmniGA, a framework for the optimization of omnivariate decision trees based on a parallel genetic algorithm, coupled with deep learning structure and ensemble learning methods. The performance of the OmniGA framework is evaluated on 12 different datasets taken mainly from biomedical problems and compared with the results obtained by several robust and commonly used machine-learning models with optimized parameters. The results show that OmniGA systematically outperformed these models for all the considered datasets, reducing the F score error in the range from 100% to 2.25%, compared to the best performing model. This demonstrates that OmniGA produces robust models with improved performance. OmniGA code and datasets are available at www.cbrc.kaust.edu.sa/omniga/.

  5. Using boosted decision trees for tau identification in the ATLAS experiment

    CERN Document Server

    Godfrey, Jennifer

    The ATLAS detector will begin taking data from p-p collisions in 2009. This experiment will allow for many different physics measurements and searches. The production of tau leptons at the LHC is a key signature of the decay of both the Standard Model Higgs (via H → ττ) and SUSY particles. Taus have a short lifetime (cτ = 87 μm) and decay hadronically 65% of the time. Many QCD interactions produce similar hadronic showers and have cross-sections about 1 billion times larger than tau production. Multivariate techniques are therefore often used to distinguish taus from this background. Boosted Decision Trees (BDTs) are a machine-learning technique for developing cut-based discriminants which can significantly aid in extracting small signal samples from overwhelming backgrounds. In this study, BDTs are used for tau identification for the ATLAS experiment. They are a fast, flexible alternative to existing discriminants with comparable or better performance.

  6. Plant MicroRNA Prediction by Supervised Machine Learning Using C5.0 Decision Trees

    Directory of Open Access Journals (Sweden)

    Philip H. Williams

    2012-01-01

    Full Text Available MicroRNAs (miRNAs) are nonprotein coding RNAs between 20 and 22 nucleotides long that attenuate protein production. Different types of sequence data are being investigated for novel miRNAs, including genomic and transcriptomic sequences. A variety of machine learning methods have successfully predicted miRNA precursors, mature miRNAs, and other nonprotein coding sequences. MirTools, mirDeep2, and miRanalyzer require “read count” to be included with the input sequences, which restricts their use to deep-sequencing data. Our aim was to train a predictor using a cross-section of different species to accurately predict miRNAs outside the training set. We wanted a system that did not require read-count for prediction and could therefore be applied to short sequences extracted from genomic, EST, or RNA-seq sources. A miRNA-predictive decision-tree model has been developed by supervised machine learning. It only requires that the corresponding genome or transcriptome is available within a sequence window that includes the precursor candidate so that the required sequence features can be collected. Some of the most critical features for training the predictor are the miRNA:miRNA∗ duplex energy and the number of mismatches in the duplex. We present a cross-species plant miRNA predictor with 84.08% sensitivity and 98.53% specificity based on rigorous testing by leave-one-out validation.

  7. Predicting Short-Term Subway Ridership and Prioritizing Its Influential Factors Using Gradient Boosting Decision Trees

    Directory of Open Access Journals (Sweden)

    Chuan Ding

    2016-10-01

    Full Text Available Understanding the relationship between short-term subway ridership and its influential factors is crucial to improving the accuracy of short-term subway ridership prediction. Although there has been a growing body of studies on short-term ridership prediction approaches, limited effort has been made to investigate short-term subway ridership prediction considering bus transfer activities and temporal features. To fill this gap, a relatively recent data mining approach called gradient boosting decision trees (GBDT) is applied to short-term subway ridership prediction and used to capture the associations with the independent variables. Taking three subway stations in Beijing as cases, the short-term subway ridership and the passengers alighting at adjacent bus stops are obtained from transit smart card data. To optimize the model performance with different combinations of regularization parameters, a series of GBDT models are built with various learning rates and tree complexities by fitting a maximum number of trees. The optimal model performance confirms that the gradient boosting approach can incorporate different types of predictors, fit complex nonlinear relationships, and automatically handle the multicollinearity effect with high accuracy. In contrast to other machine learning methods, or “black-box” procedures, the GBDT model can identify and rank the relative influences of bus transfer activities and temporal features on short-term subway ridership. These findings suggest that the GBDT model has considerable advantages in improving short-term subway ridership prediction in a multimodal public transportation system.
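
    A compressed version of the GBDT workflow, sweep the learning rate and tree complexity while fitting a fixed maximum number of trees, then rank the relative influence of the predictors, is sketched below with synthetic ridership data; the station data, variable names and parameter grid are illustrative assumptions.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import GridSearchCV

rng = np.random.default_rng(11)
n = 3000  # e.g. 15-minute intervals of synthetic data

hour = rng.integers(5, 24, n)
weekday = rng.integers(0, 7, n)
bus_alightings = rng.poisson(60, n)      # transfers from adjacent bus stops
prev_interval = rng.poisson(400, n)      # temporal lag feature
ridership = (0.6 * prev_interval + 1.5 * bus_alightings
             + 80 * np.isin(hour, [8, 9, 18, 19]) + rng.normal(0, 30, n))

X = np.column_stack([hour, weekday, bus_alightings, prev_interval])
names = ["hour", "weekday", "bus_alightings", "prev_interval"]

grid = GridSearchCV(
    GradientBoostingRegressor(n_estimators=500, random_state=0),
    param_grid={"learning_rate": [0.01, 0.05, 0.1], "max_depth": [2, 3, 4]},
    cv=5, scoring="neg_mean_absolute_error")
grid.fit(X, ridership)

# Rank the relative influences of the predictors in the best model.
best = grid.best_estimator_
for name, imp in sorted(zip(names, best.feature_importances_),
                        key=lambda t: -t[1]):
    print(f"{name:>15}: relative influence {imp:.2f}")
```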

  8. Plant MicroRNA Prediction by Supervised Machine Learning Using C5.0 Decision Trees.

    Science.gov (United States)

    Williams, Philip H; Eyles, Rod; Weiller, Georg

    2012-01-01

    MicroRNAs (miRNAs) are nonprotein coding RNAs between 20 and 22 nucleotides long that attenuate protein production. Different types of sequence data are being investigated for novel miRNAs, including genomic and transcriptomic sequences. A variety of machine learning methods have successfully predicted miRNA precursors, mature miRNAs, and other nonprotein coding sequences. MirTools, mirDeep2, and miRanalyzer require "read count" to be included with the input sequences, which restricts their use to deep-sequencing data. Our aim was to train a predictor using a cross-section of different species to accurately predict miRNAs outside the training set. We wanted a system that did not require read-count for prediction and could therefore be applied to short sequences extracted from genomic, EST, or RNA-seq sources. A miRNA-predictive decision-tree model has been developed by supervised machine learning. It only requires that the corresponding genome or transcriptome is available within a sequence window that includes the precursor candidate so that the required sequence features can be collected. Some of the most critical features for training the predictor are the miRNA:miRNA(∗) duplex energy and the number of mismatches in the duplex. We present a cross-species plant miRNA predictor with 84.08% sensitivity and 98.53% specificity based on rigorous testing by leave-one-out validation.

  9. OmniGA: Optimized Omnivariate Decision Trees for Generalizable Classification Models

    KAUST Repository

    Magana-Mora, Arturo; Bajic, Vladimir B.

    2017-01-01

    Classification problems from different domains vary in complexity, size, and imbalance of the number of samples from different classes. Although several classification models have been proposed, selecting the right model and parameters for a given classification task to achieve good performance is not trivial. Therefore, there is a constant interest in developing novel robust and efficient models suitable for a great variety of data. Here, we propose OmniGA, a framework for the optimization of omnivariate decision trees based on a parallel genetic algorithm, coupled with deep learning structure and ensemble learning methods. The performance of the OmniGA framework is evaluated on 12 different datasets taken mainly from biomedical problems and compared with the results obtained by several robust and commonly used machine-learning models with optimized parameters. The results show that OmniGA systematically outperformed these models for all the considered datasets, reducing the F score error in the range from 100% to 2.25%, compared to the best performing model. This demonstrates that OmniGA produces robust models with improved performance. OmniGA code and datasets are available at www.cbrc.kaust.edu.sa/omniga/.

  10. Use of fault and decision tree analyses to protect against industrial sabotage

    International Nuclear Information System (INIS)

    Fullwood, R.R.; Erdmann, R.C.

    1975-01-01

    Fault tree and decision tree analyses provide systematic bases for the evaluation of safety systems and procedures. Heuristically, this paper shows applications of these methods to industrial sabotage analysis at a reprocessing plant. Fault trees are constructed by 'leak path' analysis, with completeness ensured through a path inventory. The escape fault tree is readily developed by this method, and, using the reciprocal character of the trees, the attack fault tree is constructed. After construction, the events on the fault tree are corrected for their nonreciprocal character. The fault trees are algebraically solved, and the protection that is afforded is ranked by the number of barriers that must be penetrated. No attempt is made to assess the barrier penetration probabilities or penetration time durations. Event trees are useful for dynamic plant protection analysis through their time-sequencing character. To illustrate their usefulness, a simple attack scenario is devised and analyzed with an event tree. Two saboteur success paths and 21 failure paths are found. This example clearly shows the usefulness of event trees for concisely presenting the time sequencing of key decision points. However, event trees have the disadvantage of being scenario dependent, therefore requiring a separate event tree for each scenario.

  11. Boosted decision trees in the CMS Level-1 endcap muon trigger

    CERN Document Server

    Low, Jia Fu; Busch, Elena Laura; Carnes, Andrew Mathew; Furic, Ivan-Kresimir; Gleyzer, Sergei; Kotov, Khristian; Madorsky, Alexander; Rorie, Jamal Tildon; Scurlock, Bobby; Shi, Wei; Acosta, Darin Edward

    2017-01-01

    The first implementation of Boosted Decision Trees (BDTs) inside a Level-1 trigger system at the LHC is presented. The Endcap Muon Track Finder (EMTF) at CMS uses BDTs to infer the momentum of muons in the forward region of the detector, based on 25 different variables. Combinations of these variables are evaluated offline using regression BDTs, whose output is stored in 1.2 GB look-up tables (LUTs) in the EMTF hardware. These BDTs take advantage of complex correlations between variables, the inhomogeneous magnetic field, and non-linear effects such as inelastic scattering to distinguish high-momentum signal muons from the overwhelming low-momentum background. The LUTs are used to turn the complex BDT evaluation into a simple look-up operation in fixed low latency. The new momentum assignment algorithm has reduced the trigger rate by a factor of 3 at the 25 GeV trigger threshold with respect to the legacy system, with further improvements foreseen in the coming year.
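
    The implementation trick described, evaluating the regression BDT offline over a discretized input space and storing its outputs so that the online step is a single indexed read, can be illustrated in miniature. The two-variable discretization and the synthetic momentum target below are stand-ins for the 25-variable EMTF address space and its 1.2 GB tables.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(2)

# Offline: train a regression BDT on simulated (bend angle, eta) -> 1/pT pairs.
dphi = rng.uniform(0, 1, 20000)            # normalized bending angle (synthetic)
eta = rng.uniform(1.2, 2.4, 20000)
inv_pt = 0.8 * dphi * (1 + 0.3 * (eta - 1.2)) + rng.normal(0, 0.02, 20000)
bdt = GradientBoostingRegressor(n_estimators=200, max_depth=3, random_state=0)
bdt.fit(np.column_stack([dphi, eta]), inv_pt)

# Offline: fill a look-up table indexed by the quantized inputs (256 x 64 bins).
dphi_bins = np.linspace(0, 1, 256)
eta_bins = np.linspace(1.2, 2.4, 64)
grid = np.array([[d, e] for d in dphi_bins for e in eta_bins])
lut = bdt.predict(grid).reshape(len(dphi_bins), len(eta_bins))

# Online (firmware analogue): momentum assignment is just an indexed read.
def assign_pt(dphi_val, eta_val):
    i = np.clip(np.searchsorted(dphi_bins, dphi_val) - 1, 0, len(dphi_bins) - 1)
    j = np.clip(np.searchsorted(eta_bins, eta_val) - 1, 0, len(eta_bins) - 1)
    return 1.0 / max(lut[i, j], 1e-3)      # convert 1/pT back to pT

print(f"assigned pT: {assign_pt(0.1, 1.6):.1f} (arbitrary GeV-like units)")
```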

  12. Detection of Spam Email by Combining Harmony Search Algorithm and Decision Tree

    Directory of Open Access Journals (Sweden)

    M. Z. Gashti

    2017-06-01

    Full Text Available Spam email is probably the main problem faced by most e-mail users. There are many features in spam email detection, and some of these features have little effect on detection, skewing the detection and classification of spam email. Thus, feature selection (FS) is one of the key topics in spam email detection systems. By choosing the important and effective features for classification, detection performance can be optimized. Feature selection has the task of finding a subset of features that improves the accuracy of the predictions. In this paper, a hybrid of the Harmony Search Algorithm (HSA) and a decision tree is used for selecting the best features and for classification. The results obtained on the Spambase dataset show that the recognition accuracy of the proposed model is 95.25%, which is high in comparison with models such as SVM, NB, J48 and MLP. Also, the accuracy of the proposed model on the Ling-spam and PU1 datasets is high in comparison with models such as NB, SVM and LR.

  13. A GIS-based framework for evaluating investments in fire management: Spatial allocation of recreation values

    Science.gov (United States)

    Kenneth A. Baerenklau; Armando González-Cabán; Catrina I. Páez; Edgard Chávez

    2009-01-01

    The U.S. Forest Service is responsible for developing tools to facilitate effective and efficient fire management on wildlands and urban-wildland interfaces. Existing GIS-based fire modeling software only permits estimation of the costs of fire prevention and mitigation efforts as well as the effects of those efforts on fire behavior. This research demonstrates how the...

  14. Map-IT! A Web-Based GIS Tool for Watershed Science Education.

    Science.gov (United States)

    Curtis, David H.; Hewes, Christopher M.; Lossau, Matthew J.

    This paper describes the development of a prototypic, Web-accessible GIS solution for K-12 science education and citizen-based watershed monitoring. The server side consists of ArcView IMS running on an NT workstation. The client is built around MapCafe. The client interface, which runs through a standard Web browser, supports standard MapCafe…

  15. Health risks zonation in megacities vis-à-vis PM using GIS-based ...

    African Journals Online (AJOL)

    Since air quality, in terms of PM2.5, varies over space and time, in this paper the RBF method in a GIS-based model was utilized to zone air quality and its health risks according to the dispersion of PM2.5 concentrations over Tehran during one year, from 21 March 2013 to 20 March 2014. The RBF method was applied to obtain the ...
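
    Radial basis function interpolation of station measurements onto a city-wide grid, the core of the zonation step described, can be sketched with SciPy; the station coordinates, PM2.5 values and risk cut-offs below are invented rather than Tehran's monitoring data.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(4)

# Hypothetical monitoring stations: planar coordinates (km) and annual-mean PM2.5.
stations = rng.uniform(0, 40, size=(20, 2))
pm25 = (25 + 15 * np.exp(-np.linalg.norm(stations - [20, 20], axis=1) / 10)
        + rng.normal(0, 2, 20))

# Fit the RBF surface and evaluate it on a regular grid (the zonation raster).
rbf = RBFInterpolator(stations, pm25, kernel="thin_plate_spline")
gx, gy = np.meshgrid(np.linspace(0, 40, 200), np.linspace(0, 40, 200))
surface = rbf(np.column_stack([gx.ravel(), gy.ravel()])).reshape(gx.shape)

# Simple health-risk zones from illustrative PM2.5 cut-offs (ug/m^3).
zones = np.digitize(surface, bins=[15, 25, 35, 50])
print("share of area in the highest-risk zone:",
      round(float((zones == zones.max()).mean()), 3))
```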

  16. The Design and Implement of Tourism Information System Based on GIS

    Science.gov (United States)

    Chunchang, Fu; Nan, Zhang

    Starting from the concept of the geographic information system, this paper discusses the main contents of geographic information systems and the key technological measures of current GIS-based tourism information systems, examines the specific requirements and goals for applying a tourism information system, and analyzes the methods of realizing a relational database model for a tourism information system within a GIS application.

  17. GIS-Based Railway Route Selection for the Proposed Kenya-Sudan ...

    African Journals Online (AJOL)

    This paper describes a procedure developed for automated railway feasibility analysis based on GIS and Multi-Criteria analysis. Good engineering railway alignment requirements are assessed and suitable factors and constraints for feasibility analysis derived. Relevant digital map layers are then prepared and reclassified ...

  18. Automatic generation of water distribution systems based on GIS data.

    Science.gov (United States)

    Sitzenfrei, Robert; Möderl, Michael; Rauch, Wolfgang

    2013-09-01

    In the field of water distribution system (WDS) analysis, case study research is needed for testing or benchmarking optimisation strategies and newly developed software. However, data availability for the investigation of real cases is limited due to time and cost needed for data collection and model setup. We present a new algorithm that addresses this problem by generating WDSs from GIS using population density, housing density and elevation as input data. We show that the resulting WDSs are comparable to actual systems in terms of network properties and hydraulic performance. For example, comparing the pressure heads for an actual and a generated WDS results in pressure head differences of ±4 m or less for 75% of the supply area. Although elements like valves and pumps are not included, the new methodology can provide water distribution systems of varying levels of complexity (e.g., network layouts, connectivity, etc.) to allow testing design/optimisation algorithms on a large number of networks. The new approach can be used to estimate the construction costs of planned WDSs aimed at addressing population growth or at comparisons of different expansion strategies in growth corridors.

  19. Identification of radon anomalies in soil gas using decision trees and neural networks

    International Nuclear Information System (INIS)

    Zmazek, B.; Dzeroski, S.; Torkar, D.; Vaupotic, J.; Kobal, I.

    2010-01-01

    The time series of radon (²²²Rn) concentration in soil gas at a fault, together with the environmental parameters, have been analysed by applying two machine learning techniques: (I) decision trees and (II) neural networks, with the aim of identifying radon anomalies caused by seismic events rather than simply ascribed to the effect of the environmental parameters. By applying neural networks, 10 radon anomalies were observed for 12 earthquakes, while with decision trees an anomaly was found for every earthquake, but, undesirably, some anomalies also appeared during periods without earthquakes. (authors)

  20. The decision tree classifier - Design and potential. [for Landsat-1 data

    Science.gov (United States)

    Hauska, H.; Swain, P. H.

    1975-01-01

    A new classifier has been developed for the computerized analysis of remote sensor data. The decision tree classifier is essentially a maximum likelihood classifier using multistage decision logic. It is characterized by the fact that an unknown sample can be classified into a class using one or several decision functions in a successive manner. The classifier is applied to the analysis of data sensed by Landsat-1 over Kenosha Pass, Colorado. The classifier is illustrated by a tree diagram which for processing purposes is encoded as a string of symbols such that there is a unique one-to-one relationship between string and decision tree.

  1. Vlsi implementation of flexible architecture for decision tree classification in data mining

    Science.gov (United States)

    Sharma, K. Venkatesh; Shewandagn, Behailu; Bhukya, Shankar Nayak

    2017-07-01

    Data mining algorithms have become vital to researchers in science, engineering, medicine, business, search and security domains. In recent years, there has been a tremendous rise in the size of the data being collected and analyzed. Classification is a central problem in data mining. Among the solutions developed for this problem, the most widely accepted is Decision Tree Classification (DTC), which gives high precision while handling very large amounts of data. This paper presents a VLSI implementation of a flexible architecture for decision tree classification in data mining using the C4.5 algorithm.

  2. A comparative assessment of decision trees algorithms for flash flood susceptibility modeling at Haraz watershed, northern Iran.

    Science.gov (United States)

    Khosravi, Khabat; Pham, Binh Thai; Chapi, Kamran; Shirzadi, Ataollah; Shahabi, Himan; Revhaug, Inge; Prakash, Indra; Tien Bui, Dieu

    2018-06-15

    Floods are one of the most damaging natural hazards, causing huge losses of property, infrastructure and lives. Predicting the locations of flash floods is very difficult due to sudden changes in climatic conditions and man-made factors. However, flood-susceptible areas can be identified in advance with the help of machine learning techniques for proper and timely management of flood hazards. In this study, we tested four decision-tree-based machine learning models, namely Logistic Model Trees (LMT), Reduced Error Pruning Trees (REPT), Naïve Bayes Trees (NBT), and Alternating Decision Trees (ADT), for flash flood susceptibility mapping at the Haraz Watershed in the northern part of Iran. For this, a spatial database was constructed with 201 present and past flood locations and eleven flood-influencing factors, namely ground slope, altitude, curvature, Stream Power Index (SPI), Topographic Wetness Index (TWI), land use, rainfall, river density, distance from river, lithology, and Normalized Difference Vegetation Index (NDVI). Statistical evaluation measures, the Receiver Operating Characteristic (ROC) curve, and Friedman and Wilcoxon signed-rank tests were used to validate and compare the prediction capability of the models. Results show that the ADT model has the highest prediction capability for flash flood susceptibility assessment, followed by the NBT, the LMT, and the REPT, respectively. These techniques have proven successful in quickly determining flood-susceptible areas. Copyright © 2018 Elsevier B.V. All rights reserved.
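
    The validation step described above, comparing tree-based classifiers by the area under the ROC curve on held-out data, can be sketched as follows. The data are synthetic stand-ins for the flood/non-flood samples and the eleven conditioning factors, and the two scikit-learn models only approximate the LMT/REPT/NBT/ADT family used in the paper.

```python
# Compare tree-based classifiers by ROC AUC on a held-out set (synthetic data).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import AdaBoostClassifier
from sklearn.metrics import roc_auc_score

X, y = make_classification(n_samples=402, n_features=11, n_informative=6, random_state=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=1)

# Rough stand-ins for the pruned-tree and boosted-tree families compared in the paper
models = {
    "pruned tree (REPT-like)": DecisionTreeClassifier(max_depth=4, random_state=1),
    "boosted stumps (ADT-like)": AdaBoostClassifier(n_estimators=100, random_state=1),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
    print(f"{name}: AUC = {auc:.3f}")
```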

  3. Mapping potential carbon and timber losses from hurricanes using a decision tree and ecosystem services driver model.

    Science.gov (United States)

    Delphin, S; Escobedo, F J; Abd-Elrahman, A; Cropper, W

    2013-11-15

    Information on the effect of direct drivers such as hurricanes on ecosystem services is relevant to landowners and policy makers due to predicted effects from climate change. We identified forest damage risk zones due to hurricanes and estimated the potential loss of 2 key ecosystem services: aboveground carbon storage and timber volume. Using land cover, plot-level forest inventory data, the Integrated Valuation of Ecosystem Services and Tradeoffs (InVEST) model, and a decision tree-based framework, we determined potential damage to subtropical forests from hurricanes in the Lower Suwannee River (LS) and Pensacola Bay (PB) watersheds in Florida, US. We used biophysical factors identified in previous studies as being influential in forest damage in our decision tree and hurricane wind risk maps. Results show that 31% and 0.5% of the total aboveground carbon storage in the LS and PB, respectively, was located in high forest damage risk (HR) zones. Overall, 15% and 0.7% of the total timber net volume in the LS and PB, respectively, was in HR zones. This model can also be used for identifying timber salvage areas, developing ecosystem service provision and management scenarios, and assessing the effect of other drivers on ecosystem services and goods. Copyright © 2013 Elsevier Ltd. All rights reserved.

  4. Behaviour change in overweight and obese pregnancy: a decision tree to support the development of antenatal lifestyle interventions.

    Science.gov (United States)

    Ainscough, Kate M; Lindsay, Karen L; O'Sullivan, Elizabeth J; Gibney, Eileen R; McAuliffe, Fionnuala M

    2017-10-01

    Antenatal healthy lifestyle interventions are frequently implemented in overweight and obese pregnancy, yet there is inconsistent reporting of the behaviour-change methods and behavioural outcomes. This limits our understanding of how and why such interventions were successful or not. The current paper discusses the application of behaviour-change theories and techniques within complex lifestyle interventions in overweight and obese pregnancy. The authors propose a decision tree to help guide researchers through intervention design, implementation and evaluation. The implications for adopting behaviour-change theories and techniques, and using appropriate guidance when constructing and evaluating interventions in research and clinical practice are also discussed. To enhance the evidence base for successful behaviour-change interventions during pregnancy, adoption of behaviour-change theories and techniques, and use of published guidelines when designing lifestyle interventions are necessary. The proposed decision tree may be a useful guide for researchers working to develop effective behaviour-change interventions in clinical settings. This guide directs researchers towards key literature sources that will be important in each stage of study development.

  5. Virtual Cultural Landscape Laboratory Based on Internet GIS Technology

    Science.gov (United States)

    Bill, R.

    2012-07-01

    In recent years the transfer of old documents (books, paintings, maps etc.) from analogue to digital form has gained enormous importance. Numerous interventions are concentrated on the digitalisation of library collections, but commercial companies like Microsoft or Google also try to convert large analogue stocks such as books, paintings, etc. into digital form. Data in digital form can be made accessible much more easily to a large user community, especially to the interested scientific community. The aim of the described research project is to set up a virtual research environment for interdisciplinary research focusing on the landscape of the historical Mecklenburg in the north-east of Germany. Georeferenced old maps from 1786 and 1890 covering the whole of Mecklenburg are to be combined with current geo-information, satellite and aerial imagery to support spatio-temporal research at different scales in space (regional 1:200,000 to local 1:25,000) and time (nearly 250 years in three time steps, the last 30 years also in three time slices). The Virtual Laboratory for Cultural Landscape Research (VKLandLab) is designed and developed by the Chair of Geodesy and Geoinformatics, hosted at the Computing Centre (ITMZ) and linked to the Digital Library (UB) at Rostock University. VKLandLab includes new developments such as wikis, blogs, data tagging, etc. and proven components already integrated in various data-related infrastructures such as InternetGIS, data repositories and authentication structures. The focus is to build a data-related infrastructure and a work platform that supports students as well as researchers from different disciplines in their research in space and time.

  6. VIRTUAL CULTURAL LANDSCAPE LABORATORY BASED ON INTERNET GIS TECHNOLOGY

    Directory of Open Access Journals (Sweden)

    R. Bill

    2012-07-01

    Full Text Available In recent years the transfer of old documents (books, paintings, maps etc.) from analogue to digital form has gained enormous importance. Numerous interventions are concentrated on the digitalisation of library collections, but commercial companies like Microsoft or Google also try to convert large analogue stocks such as books, paintings, etc. into digital form. Data in digital form can be made accessible much more easily to a large user community, especially to the interested scientific community. The aim of the described research project is to set up a virtual research environment for interdisciplinary research focusing on the landscape of the historical Mecklenburg in the north-east of Germany. Georeferenced old maps from 1786 and 1890 covering the whole of Mecklenburg are to be combined with current geo-information, satellite and aerial imagery to support spatio-temporal research at different scales in space (regional 1:200,000 to local 1:25,000) and time (nearly 250 years in three time steps, the last 30 years also in three time slices). The Virtual Laboratory for Cultural Landscape Research (VKLandLab) is designed and developed by the Chair of Geodesy and Geoinformatics, hosted at the Computing Centre (ITMZ) and linked to the Digital Library (UB) at Rostock University. VKLandLab includes new developments such as wikis, blogs, data tagging, etc. and proven components already integrated in various data-related infrastructures such as InternetGIS, data repositories and authentication structures. The focus is to build a data-related infrastructure and a work platform that supports students as well as researchers from different disciplines in their research in space and time.

  7. A WebGIS-based system for analyzing and visualizing air quality data for Shanghai Municipality

    Science.gov (United States)

    Wang, Manyi; Liu, Chaoshun; Gao, Wei

    2014-10-01

    An online visual analytical system based on Java Web and WebGIS for air quality data for Shanghai Municipality was designed and implemented to quantitatively analyze and qualitatively visualize air quality data. By analyzing the architecture of WebGIS and Java Web, we first designed the overall system architecture, then specified the software and hardware environment and determined the main function modules of the system. The visual system was ultimately built with the DIV + CSS layout method combined with JSP, JavaScript, and other programming languages in the Java programming environment. Moreover, the Struts, Spring, and Hibernate frameworks (SSH) were integrated into the system for easy maintenance and expansion. To provide mapping services and spatial analysis functions, we selected ArcGIS for Server as the GIS server. We also used an Oracle database and an ESRI file geodatabase to store spatial and non-spatial data in order to ensure data security. In addition, the response data from the Web server are resampled to enable rapid visualization in the browser. Experimental results indicate that the system can quickly respond to users' requests and efficiently return accurate processing results.

  8. A Framework for Sharing and Integrating Remote Sensing and GIS Models Based on Web Service

    Science.gov (United States)

    Chen, Zeqiang; Lin, Hui; Chen, Min; Liu, Deer; Bao, Ying; Ding, Yulin

    2014-01-01

    Sharing and integrating Remote Sensing (RS) and Geographic Information System/Science (GIS) models are critical for developing practical application systems. Facilitating model sharing and model integration is a problem for model publishers and model users, respectively. To address this problem, a framework based on a Web service for sharing and integrating RS and GIS models is proposed in this paper. The fundamental idea of the framework is to publish heterogeneous RS and GIS models into standard Web services for sharing and interoperation and then to integrate the RS and GIS models using Web services. For the former, a “black box” and a visual method are employed to facilitate the publishing of the models as Web services. For the latter, model integration based on the geospatial workflow and semantic supported marching method is introduced. Under this framework, model sharing and integration is applied for developing the Pearl River Delta water environment monitoring system. The results show that the framework can facilitate model sharing and model integration for model publishers and model users. PMID:24901016

  9. WATERSHED BASED WEB GIS: CASE STUDY OF PALOPO WATERSHED AREA SOUTH SULAWESI, INDONESIA

    Directory of Open Access Journals (Sweden)

    Jalaluddin Rumi PRASAD

    2017-09-01

    Full Text Available Complete, accurate and current data and land resource information are an input to watershed management planning, evaluation and monitoring. This research was conducted with optimum use of secondary data, supported by direct field measurement data, digitization of the associated maps, Geographic Information System modeling, and model calibration. The research resulted in a Web-based Geographic Information System for managing watershed potential (abbreviated WEB GIS MPPDAS), using the Palopo watershed area, South Sulawesi, as a case study site for the development of a prototype consisting of three main web applications: a Web Portal, a Web GIS, and a Web Tutorial. The system is built to show online (and offline) watershed maps in the administrative area of Palopo, along with the location of its potential, organized into four (4) groups of layers: a group of main layers (2 layers), a group of base layers (14 layers), a group of thematic layers (12 layers), and a group of policy layers (8 layers). In addition to displaying maps, users of the WEB GIS MPPDAS application can use tools or controls in the application to perform analyses for monitoring and evaluation, including geocoding, adding layers, digitizing, selection, measurement, graphing, filtering, geolocation, cartographic overlay, etc.

  10. A framework for sharing and integrating remote sensing and GIS models based on Web service.

    Science.gov (United States)

    Chen, Zeqiang; Lin, Hui; Chen, Min; Liu, Deer; Bao, Ying; Ding, Yulin

    2014-01-01

    Sharing and integrating Remote Sensing (RS) and Geographic Information System/Science (GIS) models are critical for developing practical application systems. Facilitating model sharing and model integration is a problem for model publishers and model users, respectively. To address this problem, a framework based on a Web service for sharing and integrating RS and GIS models is proposed in this paper. The fundamental idea of the framework is to publish heterogeneous RS and GIS models into standard Web services for sharing and interoperation and then to integrate the RS and GIS models using Web services. For the former, a "black box" and a visual method are employed to facilitate the publishing of the models as Web services. For the latter, model integration based on the geospatial workflow and semantic supported marching method is introduced. Under this framework, model sharing and integration is applied for developing the Pearl River Delta water environment monitoring system. The results show that the framework can facilitate model sharing and model integration for model publishers and model users.

  11. Decision tree analysis to evaluate dry cow strategies under UK conditions

    NARCIS (Netherlands)

    Berry, E.A.; Hogeveen, H.; Hillerton, J.E.

    2004-01-01

    Economic decisions on animal health strategies address the cost-benefit aspect along with animal welfare and public health concerns. Decision tree analysis at an individual cow level highlighted that there is little economic difference between the use of either dry cow antibiotic or an internal teat

  12. Construction and application of hierarchical decision tree for classification of ultrasonographic prostate images

    NARCIS (Netherlands)

    Giesen, R. J.; Huynen, A. L.; Aarnink, R. G.; de la Rosette, J. J.; Debruyne, F. M.; Wijkstra, H.

    1996-01-01

    A non-parametric algorithm is described for the construction of a binary decision tree classifier. This tree is used to correlate textural features, computed from ultrasonographic prostate images, with the histopathology of the imaged tissue. The algorithm consists of two parts: growing and pruning.

  13. Visualization of Decision Tree State for the Classification of Parkinson's Disease

    NARCIS (Netherlands)

    Valentijn, E

    2016-01-01

    Decision trees have been shown to be effective at classifying subjects with Parkinson’s disease when provided with features (subject scores) derived from FDG-PET data. Such subject scores have strong discriminative power but are not intuitive to understand. We therefore augment each decision node

  14. Exploratory Use of Decision Tree Analysis in Classification of Outcome in Hypoxic-Ischemic Brain Injury.

    Science.gov (United States)

    Phan, Thanh G; Chen, Jian; Singhal, Shaloo; Ma, Henry; Clissold, Benjamin B; Ly, John; Beare, Richard

    2018-01-01

    Prognostication following hypoxic ischemic encephalopathy (brain injury) is important for clinical management. The aim of this exploratory study is to use a decision tree model to find clinical and MRI associates of severe disability and death in this condition. We evaluate a clinical model and then the added value of MRI data. The inclusion criteria were as follows: age ≥17 years, cardio-respiratory arrest, and coma on admission (2003-2011). Decision tree analysis was used to find clinical [Glasgow Coma Score (GCS), features of the cardiac arrest, therapeutic hypothermia, age, and sex] and MRI (infarct volume) associates of severe disability and death. We used the area under the ROC curve (auROC) to determine the accuracy of the model. There were 41 patients (63.7% male) with MRI imaging, with an average age of 51.5 ± 18.9 years. The decision trees showed that infarct volume and age were important factors for discriminating between mild to moderate disability and severe disability and death at day 0 and day 2. The auROC for this model was 0.94 (95% CI 0.82-1.00). At day 7, the GCS value was the only predictor; the auROC was 0.96 (95% CI 0.86-1.00). Our findings provide proof of concept for further exploration of the role of MR imaging and decision tree analysis in the early prognostication of hypoxic ischemic brain injury.

  15. Test Reviews: Euler, B. L. (2007). "Emotional Disturbance Decision Tree". Lutz, FL: Psychological Assessment Resources

    Science.gov (United States)

    Tansy, Michael

    2009-01-01

    The Emotional Disturbance Decision Tree (EDDT) is a teacher-completed norm-referenced rating scale published by Psychological Assessment Resources, Inc., in Lutz, Florida. The 156-item EDDT was developed for use as part of a broader assessment process to screen and assist in the identification of 5- to 18-year-old children for the special…

  16. A multivariate decision tree analysis of biophysical factors in tropical forest fire occurrence

    Science.gov (United States)

    Rey S. Ofren; Edward Harvey

    2000-01-01

    A multivariate decision tree model was used to quantify the relative importance of complex hierarchical relationships between biophysical variables and the occurrence of tropical forest fires. The study site is the Huai Kha Khaeng wildlife sanctuary, a World Heritage Site in northwestern Thailand where annual fires are common and particularly destructive. Thematic...

  17. Using decision tree induction systems for modeling space-time behavior

    NARCIS (Netherlands)

    Arentze, T.A.; Hofman, F.; Mourik, van H.; Timmermans, H.J.P.; Wets, G.

    2000-01-01

    Discrete choice models are commonly used to predict individuals' activity and travel choices either separately or simultaneously in activity scheduling models. This paper investigates the possibilities of decision tree induction systems as an alternative approach. The ability of decision trees to

  18. What Satisfies Students?: Mining Student-Opinion Data with Regression and Decision Tree Analysis

    Science.gov (United States)

    Thomas, Emily H.; Galambos, Nora

    2004-01-01

    To investigate how students' characteristics and experiences affect satisfaction, this study uses regression and decision tree analysis with the CHAID algorithm to analyze student-opinion data. A data mining approach identifies the specific aspects of students' university experience that most influence three measures of general satisfaction. The…

  19. The Americans with Disabilities Act: A Decision Tree for Social Services Administrators

    Science.gov (United States)

    O'Brien, Gerald V.; Ellegood, Christina

    2005-01-01

    The 1990 Americans with Disabilities Act has had a profound influence on social workers and social services administrators in virtually all work settings. Because of the multiple elements of the act, however, assessing the validity of claims can be a somewhat arduous and complicated task. This article provides a "decision tree" for…

  20. [Analysis of the characteristics of the older adults with depression using data mining decision tree analysis].

    Science.gov (United States)

    Park, Myonghwa; Choi, Sora; Shin, A Mi; Koo, Chul Hoi

    2013-02-01

    The purpose of this study was to develop a prediction model for the characteristics of older adults with depression using the decision tree method. A large dataset from the 2008 Korean Elderly Survey was used and data of 14,970 elderly people were analyzed. Target variable was depression and 53 input variables were general characteristics, family & social relationship, economic status, health status, health behavior, functional status, leisure & social activity, quality of life, and living environment. Data were analyzed by decision tree analysis, a data mining technique using SPSS Window 19.0 and Clementine 12.0 programs. The decision trees were classified into five different rules to define the characteristics of older adults with depression. Classification & Regression Tree (C&RT) showed the best prediction with an accuracy of 80.81% among data mining models. Factors in the rules were life satisfaction, nutritional status, daily activity difficulty due to pain, functional limitation for basic or instrumental daily activities, number of chronic diseases and daily activity difficulty due to disease. The different rules classified by the decision tree model in this study should contribute as baseline data for discovering informative knowledge and developing interventions tailored to these individual characteristics.

  1. Learning decision trees with flexible constraints and objectives using integer optimization

    NARCIS (Netherlands)

    Verwer, S.; Zhang, Y.

    2017-01-01

    We encode the problem of learning the optimal decision tree of a given depth as an integer optimization problem. We show experimentally that our method (DTIP) can be used to learn good trees up to depth 5 from data sets of size up to 1000. In addition to being efficient, our new formulation allows

  2. Relationships between average depth and number of misclassifications for decision trees

    KAUST Repository

    Chikalov, Igor

    2014-02-14

    This paper presents a new tool for the study of relationships between the total path length or the average depth and the number of misclassifications for decision trees. In addition to the algorithm, the paper also presents the results of experiments with datasets from UCI ML Repository [9] and datasets representing Boolean functions with 10 variables.

  3. Relationships Between Average Depth and Number of Nodes for Decision Trees

    KAUST Repository

    Chikalov, Igor

    2013-07-24

    This paper presents a new tool for the study of relationships between total path length or average depth and number of nodes of decision trees. In addition to the algorithm, the paper also presents the results of experiments with datasets from UCI ML Repository [1]. © Springer-Verlag Berlin Heidelberg 2014.

  4. Relationships Between Average Depth and Number of Nodes for Decision Trees

    KAUST Repository

    Chikalov, Igor; Hussain, Shahid; Moshkov, Mikhail

    2013-01-01

    This paper presents a new tool for the study of relationships between total path length or average depth and number of nodes of decision trees. In addition to the algorithm, the paper also presents the results of experiments with datasets from UCI ML

  5. Relationships between average depth and number of misclassifications for decision trees

    KAUST Repository

    Chikalov, Igor; Hussain, Shahid; Moshkov, Mikhail

    2014-01-01

    This paper presents a new tool for the study of relationships between the total path length or the average depth and the number of misclassifications for decision trees. In addition to the algorithm, the paper also presents the results of experiments with datasets from UCI ML Repository [9] and datasets representing Boolean functions with 10 variables.

  6. Web-based GIS for spatial pattern detection: application to malaria incidence in Vietnam.

    Science.gov (United States)

    Bui, Thanh Quang; Pham, Hai Minh

    2016-01-01

    There is great concern about how to build an interoperable health information system spanning public health and health information technology within the development of public information and health surveillance programmes. Technically, some major issues remain regarding health data visualization, spatial processing of health data, health information dissemination, data sharing and the access of local communities to health information. In combination with GIS, we propose a technical framework for web-based health data visualization and spatial analysis. Data were collected from open map servers and geocoded with the Open Data Kit package and data geocoding tools. The web-based system is designed on open-source frameworks and libraries. The system provides web-based analysis tools for pattern detection through three spatial tests: nearest neighbour, K function, and spatial autocorrelation. The result is a web-based GIS through which end users can detect disease patterns by selecting an area and spatial test parameters, and can communicate results to managers and decision makers. The end users can be health practitioners, educators, local communities, health sector authorities and decision makers. This web-based system allows for the improvement of health-related services to public sector users as well as citizens in a secure manner. The combination of spatial statistics and web-based GIS can be a solution that helps empower health practitioners in direct and specific intersectoral actions, thus providing for better analysis, control and decision-making.
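
    One of the spatial tests listed above, global spatial autocorrelation, is commonly summarized by Moran's I; a minimal NumPy sketch is shown below, with invented incidence values and a hypothetical binary contiguity matrix standing in for the system's real inputs.

```python
# Global Moran's I for district-level incidence with a binary contiguity matrix.
import numpy as np

incidence = np.array([12.0, 15.0, 9.0, 30.0, 28.0, 5.0])   # cases per district (invented)
W = np.array([                                              # neighbour matrix (1 = adjacent)
    [0, 1, 1, 0, 0, 0],
    [1, 0, 1, 1, 0, 0],
    [1, 1, 0, 0, 0, 1],
    [0, 1, 0, 0, 1, 0],
    [0, 0, 0, 1, 0, 0],
    [0, 0, 1, 0, 0, 0],
], dtype=float)

n = incidence.size
z = incidence - incidence.mean()
morans_i = (n / W.sum()) * (z @ W @ z) / (z @ z)
print(f"Moran's I = {morans_i:.3f}")   # values above -1/(n-1) suggest positive clustering
```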

  7. The application of a decision tree to establish the parameters associated with hypertension.

    Science.gov (United States)

    Tayefi, Maryam; Esmaeili, Habibollah; Saberi Karimian, Maryam; Amirabadi Zadeh, Alireza; Ebrahimi, Mahmoud; Safarian, Mohammad; Nematy, Mohsen; Parizadeh, Seyed Mohammad Reza; Ferns, Gordon A; Ghayour-Mobarhan, Majid

    2017-02-01

    Hypertension is an important risk factor for cardiovascular disease (CVD). The goal of this study was to establish the factors associated with hypertension by using a decision-tree algorithm as a supervised classification method of data mining. Data from a cross-sectional study were used in this study. A total of 9078 subjects who met the inclusion criteria were recruited. 70% of these subjects (6358 cases) were randomly allocated to the training dataset for constructing the decision tree. The remaining 30% (2720 cases) were used as the testing dataset to evaluate the performance of the decision tree. Two models were evaluated in this study. In model I, age, gender, body mass index, marital status, level of education, occupation status, depression and anxiety status, physical activity level, smoking status, LDL, TG, TC, FBG, uric acid and hs-CRP were considered as input variables, and in model II, age, gender, WBC, RBC, HGB, HCT, MCV, MCH, PLT, RDW and PDW were considered as input variables. The validation of the model was assessed by constructing a receiver operating characteristic (ROC) curve. The prevalence rate of hypertension was 32% in our population. For decision-tree model I, the accuracy, sensitivity, specificity and area under the ROC curve (AUC) value for identifying the related risk factors of hypertension were 73%, 63%, 77% and 0.72, respectively. The corresponding values for model II were 70%, 61%, 74% and 0.68, respectively. We have developed a decision tree model to identify the risk factors associated with hypertension that may be used to develop programs for hypertension management. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
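
    The workflow described, a 70/30 split, a decision-tree classifier, and evaluation by sensitivity, specificity and ROC AUC, can be sketched as follows; the synthetic data and tree settings are placeholders, not the study's variables or tuning.

```python
# 70/30 split, decision-tree classifier, and evaluation by sensitivity,
# specificity and ROC AUC on synthetic stand-in data.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import confusion_matrix, roc_auc_score

X, y = make_classification(n_samples=9078, n_features=16, weights=[0.68, 0.32], random_state=7)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, train_size=0.7, random_state=7)

tree = DecisionTreeClassifier(max_depth=5, min_samples_leaf=50, random_state=7).fit(X_tr, y_tr)

tn, fp, fn, tp = confusion_matrix(y_te, tree.predict(X_te)).ravel()
sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
auc = roc_auc_score(y_te, tree.predict_proba(X_te)[:, 1])
print(f"sensitivity={sensitivity:.2f}  specificity={specificity:.2f}  AUC={auc:.2f}")
```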

  8. Predicting the probability of mortality of gastric cancer patients using decision tree.

    Science.gov (United States)

    Mohammadzadeh, F; Noorkojuri, H; Pourhoseingholi, M A; Saadat, S; Baghestani, A R

    2015-06-01

    Gastric cancer is the fourth most common cancer worldwide. This motivated us to investigate gastric cancer risk factors utilizing statistical methods. The aim of this study was to identify the most important factors influencing the mortality of patients who suffer from gastric cancer and to introduce a classification approach based on a decision tree model for predicting the probability of mortality from this disease. Data on 216 patients with gastric cancer, who were registered in Taleghani hospital in Tehran, Iran, were analyzed. At first, patients were divided into two groups: dead and alive. Then, to fit the decision tree model to our data, we randomly selected 20% of the dataset as the test sample and used the remaining data as the training sample. Finally, the validity of the model was examined with sensitivity, specificity, diagnostic accuracy and the area under the receiver operating characteristic curve. CART version 6.0 and SPSS version 19.0 software were used for the analysis of the data. Diabetes, ethnicity, tobacco use, tumor size, surgery, pathologic stage, age at diagnosis, exposure to chemical weapons and alcohol consumption were determined to be factors affecting mortality from gastric cancer. The sensitivity, specificity and accuracy of the decision tree were 0.72, 0.75 and 0.74, respectively. These indices show that the decision tree model has acceptable accuracy for predicting the probability of mortality in gastric cancer patients. Thus, a simple decision tree consisting of factors affecting gastric cancer mortality may help clinicians as a reliable and practical tool to predict the probability of mortality in these patients.

  9. Application Of Decision Tree Approach To Student Selection Model- A Case Study

    Science.gov (United States)

    Harwati; Sudiya, Amby

    2016-01-01

    The main purpose of the institution is to provide quality education to the students and to improve the quality of managerial decisions. One of the ways to improve the quality of students is to make the selection of new students more selective. This research takes as a case the selection of new students at the Islamic University of Indonesia, Yogyakarta, Indonesia. One of the university's selection routes is administrative filtering based on the records of prospective students at high school, without a written test. Currently, that kind of selection does not yet have a standard model or criteria. Selection is done only by comparing candidates' application files, so subjective assessment is very likely because of the lack of standard criteria to differentiate the quality of students from one another. By applying data mining classification techniques, a selection model for new students can be built that includes criteria with defined standards, such as region of origin, school status, average grade, and so on. These criteria are determined using rules derived from classifying the academic achievement (GPA) of students in previous years who entered the university through the same route. The decision tree method with the C4.5 algorithm is used here. The results show that students given priority for admission are those who meet the following criteria: they come from the island of Java, from a public school, majored in science, have an average grade above 75, and have at least one achievement during their study in high school.

  10. A prediction rule for the development of delirium among patients in medical wards: Chi-Square Automatic Interaction Detector (CHAID) decision tree analysis model.

    Science.gov (United States)

    Kobayashi, Daiki; Takahashi, Osamu; Arioka, Hiroko; Koga, Shinichiro; Fukui, Tsuguya

    2013-10-01

    To predict development of delirium among patients in medical wards by a Chi-Square Automatic Interaction Detector (CHAID) decision tree model. This was a retrospective cohort study of all adult patients admitted to medical wards at a large community hospital. The subject patients were randomly assigned to either a derivation or validation group (2:1) by computed random number generation. Baseline data and clinically relevant factors were collected from the electronic chart. Primary outcome was the development of delirium during hospitalization. All potential predictors were included in a forward stepwise logistic regression model. CHAID decision tree analysis was also performed to make another prediction model with the same group of patients. Receiver operating characteristic curves were drawn, and the area under the curves (AUCs) were calculated for both models. In the validation group, these receiver operating characteristic curves and AUCs were calculated based on the rules from derivation. A total of 3,570 patients were admitted: 2,400 patients assigned to the derivation group and 1,170 to the validation group. A total of 91 and 51 patients, respectively, developed delirium. Statistically significant predictors were delirium history, age, underlying malignancy, and activities of daily living impairment in CHAID decision tree model, resulting in six distinctive groups by the level of risk. AUC was 0.82 in derivation and 0.82 in validation with CHAID model and 0.78 in derivation and 0.79 in validation with logistic model. We propose a validated CHAID decision tree prediction model to predict the development of delirium among medical patients. Copyright © 2013 American Association for Geriatric Psychiatry. Published by Elsevier Inc. All rights reserved.

  11. On the relationship between the prices of oil and the precious metals: Revisiting with a multivariate regime-switching decision tree

    International Nuclear Information System (INIS)

    Charlot, Philippe; Marimoutou, Vêlayoudom

    2014-01-01

    This study examines the volatility and correlation, and their relationships, among the euro/US dollar exchange rates, the S&P 500 equity indices, and the prices of WTI crude oil and the precious metals (gold, silver, and platinum) over the period 2005 to 2012. Our model links the univariate volatilities with the correlations via a hidden stochastic decision tree. The ensuing Hidden Markov Decision Tree (HMDT) model is in fact an extension of the Hidden Markov Model (HMM) introduced by Jordan et al. (1997). The architecture of this model is the opposite of that of the classical deterministic approach based on a binary decision tree, and it allows a probabilistic view of the relationship between univariate volatility and correlation. Our results are categorized into three groups, namely (1) exchange rates and oil, (2) S&P 500 indices, and (3) precious metals. A switching dynamics is seen to characterize the volatilities, while, in the case of the correlations, the series switch from one regime to another, with this movement reaching a peak during the subprime crisis in the US and again during the days following the Tohoku earthquake in Japan. Our findings show that the relationships between volatility and correlation depend upon the nature of the series considered, sometimes corresponding to those found in econometric studies, according to which correlation increases in bear markets, and at other times differing from them. - Highlights: • This study examines the volatility and correlation, and their relationships, for precious metals and crude oil. • Our model links the univariate volatilities with the correlations via a hidden stochastic decision tree. • This model allows a probabilistic view of the relationship between univariate volatility and correlation. • Results show that the relationships between volatility and correlation depend upon the nature of the series considered

  12. An uncertainty and sensitivity analysis approach for GIS-based multicriteria landslide susceptibility mapping

    Science.gov (United States)

    Feizizadeh, Bakhtiar; Blaschke, Thomas

    2014-01-01

    GIS-based multicriteria decision analysis (MCDA) methods are increasingly being used in landslide susceptibility mapping. However, the uncertainties that are associated with MCDA techniques may significantly impact the results. This may sometimes lead to inaccurate outcomes and undesirable consequences. This article introduces a new GIS-based MCDA approach. We illustrate the consequences of applying different MCDA methods within a decision-making process through uncertainty analysis. Three GIS-MCDA methods in conjunction with Monte Carlo simulation (MCS) and Dempster–Shafer theory are analyzed for landslide susceptibility mapping (LSM) in the Urmia lake basin in Iran, which is highly susceptible to landslide hazards. The methodology comprises three stages. First, the LSM criteria are ranked and a sensitivity analysis is implemented to simulate error propagation based on the MCS. The resulting weights are expressed through probability density functions. Accordingly, within the second stage, three MCDA methods, namely analytical hierarchy process (AHP), weighted linear combination (WLC) and ordered weighted average (OWA), are used to produce the landslide susceptibility maps. In the third stage, accuracy assessments are carried out and the uncertainties of the different results are measured. We compare the accuracies of the three MCDA methods based on (1) the Dempster–Shafer theory and (2) a validation of the results using an inventory of known landslides and their respective coverage based on object-based image analysis of IRS-ID satellite images. The results of this study reveal that through the integration of GIS and MCDA models, it is possible to identify strategies for choosing an appropriate method for LSM. Furthermore, our findings indicate that the integration of MCDA and MCS can significantly improve the accuracy of the results. In LSM, the AHP method performed best, while the OWA reveals better performance in the reliability assessment. The WLC
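
    The kind of analysis described, a weighted linear combination (WLC) overlay whose criterion weights are perturbed by Monte Carlo simulation to gauge uncertainty, can be sketched as follows; the raster layers, base weights and perturbation size are invented for illustration.

```python
# WLC susceptibility overlay with Monte Carlo perturbation of criterion weights.
import numpy as np

rng = np.random.default_rng(3)

# Three standardized criterion rasters (e.g. slope, lithology, distance to fault), values in [0, 1]
criteria = rng.random((3, 100, 100))
base_weights = np.array([0.5, 0.3, 0.2])

# Monte Carlo simulation: perturb weights, renormalize, recompute the WLC surface
runs = []
for _ in range(500):
    w = np.clip(base_weights + rng.normal(0, 0.05, size=3), 0, None)
    w /= w.sum()
    runs.append(np.tensordot(w, criteria, axes=1))   # weighted sum over layers

runs = np.stack(runs)
susceptibility_mean = runs.mean(axis=0)              # expected susceptibility per cell
susceptibility_std = runs.std(axis=0)                # per-cell uncertainty from weight variation
print(susceptibility_mean.shape, susceptibility_std.max().round(3))
```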

  13. Integration agent-based models and GIS as a virtual urban dynamic laboratory

    Science.gov (United States)

    Chen, Peng; Liu, Miaolong

    2007-06-01

    Based on agent-based models and a spatial data model, a tightly coupled method of integrating GIS and agent-based models (ABM) is discussed in this paper. The use of object orientation for both spatial data and spatial process models facilitates their integration, which can allow exploration and explanation of spatial-temporal phenomena such as urban dynamics. In order to better understand how tight coupling might proceed and to evaluate the possible functional and efficiency gains from such a tight coupling, the agent-based model and spatial data model are discussed, followed by the relationships governing the interaction between spatial data models and agent-based process models. After that, a realistic crowd flow simulation experiment is presented. Using tools provided by general GIS systems and a few specific programming languages, a new software system integrating GIS and MAS as a virtual laboratory for simulating pedestrian flows in a crowd activity centre has been developed successfully. In the environment supported by this software system, as an application case, the dynamic evolution of pedestrian flows (the dispersal of spectators) in a crowd activity centre, the Shanghai Stadium, has been simulated successfully. At the end of the paper, some new research problems are pointed out for future work.

  14. Decision tree sensitivity analysis for cost-effectiveness of chest FDG-PET in patients with a pulmonary tumor (non-small cell carcinoma)

    International Nuclear Information System (INIS)

    Kosuda, Shigeru; Watanabe, Masumi; Kobayashi, Hideo; Kusano, Shoichi; Ichihara, Kiyoshi

    1998-01-01

    Decision tree analysis was used to assess the cost-effectiveness of chest FDG-PET in patients with a pulmonary tumor (non-small cell carcinoma, ≤Stage IIIB), based on the data of the current decision tree. Decision tree models were constructed for two competing strategies (CT alone and CT plus chest FDG-PET) in a 1,000-patient population with 71.4% prevalence. Baseline values for FDG-PET sensitivity and specificity in detecting lung cancer and lymph node metastasis, and for mortality and life expectancy, were taken from the literature. The chest CT plus chest FDG-PET strategy increased the total cost by 10.5% when a chest FDG-PET study costs 0.1 million yen, since it increased the number of mediastinoscopies and curative thoracotomies despite halving the number of bronchofiberscopies. However, the strategy resulted in a remarkable increase of 115 patients undergoing curative thoracotomy and a decrease of 51 patients undergoing non-curative thoracotomy. In addition, average life expectancy increased by 0.607 year/patient, which means the increase in medical cost is approximately 218,080 yen/year/patient when a chest FDG-PET study costs 0.1 million yen. In conclusion, the chest CT plus chest FDG-PET strategy might not be cost-effective in Japan, but we are convinced that the strategy is useful in cost-benefit analysis. (author)
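
    The underlying calculation in such an analysis, folding a decision tree back into expected cost and expected life expectancy per strategy, can be illustrated with a toy example; the probabilities and payoffs below are placeholders, not the study's baseline values.

```python
# Fold back a two-strategy decision tree into expected cost and life expectancy.
strategies = {
    "CT alone": [
        # (probability, cost in yen, life expectancy in years)  -- invented values
        (0.60, 1_500_000, 4.0),   # curative thoracotomy
        (0.25, 1_800_000, 1.5),   # non-curative thoracotomy
        (0.15,   600_000, 1.0),   # no surgery
    ],
    "CT + FDG-PET": [
        (0.72, 1_600_000, 4.0),
        (0.15, 1_900_000, 1.5),
        (0.13,   700_000, 1.0),
    ],
}

for name, branches in strategies.items():
    assert abs(sum(p for p, _, _ in branches) - 1.0) < 1e-9   # branch probabilities sum to 1
    exp_cost = sum(p * c for p, c, _ in branches)
    exp_life = sum(p * l for p, _, l in branches)
    print(f"{name}: expected cost = {exp_cost:,.0f} yen, expected life expectancy = {exp_life:.2f} y")
```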

  15. [Application of electronic fence technology based on GIS in Oncomelania hupensis snail monitoring].

    Science.gov (United States)

    Zhi-Hua, Chen; Yi-Sheng, Zhu; Zhi-Qiang, Xue; Xue-Bing, Li; Yi-Min, Ding; Li-Jun, Bi; Kai-Min, Gao; You, Zhang

    2017-07-27

    To study the application of the Geographic Information System (GIS) electronic fence technique in Oncomelania hupensis snail monitoring. Electronic fences were set around the historical and existing snail environments on the electronic map, the information about snail monitoring and control was linked to the electronic fences, and the snail monitoring information system was established on this basis. The monitoring information was input through computers and smart phones. Electronic fences around the historical and existing snail environments were set on the electronic map (Baidu map), and the snail monitoring information system and a smart phone APP were established. The monitoring information was input and uploaded in real time, and the snail monitoring information was displayed in real time on the Baidu map. By using electronic fence technology based on GIS, a unique "environment electronic archive" for each snail monitoring environment can be established on the electronic map, and real-time, dynamic monitoring and visual management can be realized.

  16. GIS-based Approaches to Catchment Area Analyses of Mass Transit

    DEFF Research Database (Denmark)

    Andersen, Jonas Lohmann Elkjær; Landex, Alex

    2009-01-01

    Catchment area analyses of stops or stations are used to investigate the potential number of travelers to public transportation. These analyses are considered a strong decision tool in the planning process of mass transit, especially railroads. Catchment area analyses are GIS-based buffer and overlay analyses with different approaches depending on the desired level of detail. A simple but straightforward approach to implement is the Circular Buffer Approach, where catchment areas are circular. A more detailed approach is the Service Area Approach, where catchment areas are determined by a street network search to simulate the actual walking distances. A refinement of the Service Area Approach is to implement additional time resistance in the network search to simulate obstacles in the walking environment. This paper reviews and compares the different GIS-based catchment area approaches, their level...
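
    A minimal sketch of the Circular Buffer Approach mentioned above: a fixed-radius buffer around each station intersected with population zones to estimate the catchment population (the Service Area Approach would instead require a street-network search). The stations, zones and radius are invented, and Shapely is assumed for the geometry operations.

```python
# Circular-buffer catchment estimate: buffer each station, intersect with zones,
# and area-weight the zone populations. Coordinates are metres in a projected CRS.
from shapely.geometry import Point, box

stations = {"Central": Point(1000, 1000), "Harbour": Point(2500, 1200)}
zones = [
    # (zone polygon, residents)  -- invented
    (box(0, 0, 1500, 1500), 4000),
    (box(1500, 0, 3000, 1500), 2500),
]

radius = 600.0  # walking-distance proxy in metres
for name, pt in stations.items():
    catchment = pt.buffer(radius)
    population = sum(
        residents * zone.intersection(catchment).area / zone.area
        for zone, residents in zones
    )
    print(f"{name}: ~{population:.0f} residents within {radius:.0f} m (circular buffer)")
```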

  17. Design of Mobile Operation and Maintenance System Based on Power GIS and GPS

    Directory of Open Access Journals (Sweden)

    Jing TAO

    2017-01-01

    Full Text Available To address the low efficiency of operation and maintenance and the management coordination problems caused by the coexistence of off-line operation and maintenance with multiple systems in current power telecommunication operation and maintenance, this paper proposes a mobile operation and maintenance system based on power GIS and GPS location technology. The system uses a C/S architecture, and the mobile operation and maintenance terminal is connected to the operation and maintenance management terminal through wireless and wired networks. In addition to meeting basic business requirements, the system integrates power GIS functions to visually present communication resources, personnel, and failures. Staff can carry out operation and maintenance using a wireless mobile terminal with an integrated GPS module and can be located accurately in real time.

  18. Technical Potential Assessment for the Renewable Energy Zone (REZ) Process: A GIS-Based Approach

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Nathan [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Roberts, Billy J [National Renewable Energy Laboratory (NREL), Golden, CO (United States)

    2018-04-05

    Geographic Information Systems (GIS)-based energy resource and technical potential assessments identify areas capable of supporting high levels of renewable energy (RE) development as part of a Renewable Energy Zone (REZ) Transmission Planning process. This document expands on the REZ Process to aid practitioners in conducting GIS-based RE resource and technical potential assessments. The REZ process is an approach to plan, approve, and build transmission infrastructure that connects REZs - geographic areas that have high-quality RE resources, suitable topography and land-use designations, and demonstrated developer interest - to the power system. The REZ process helps to increase the share of solar photovoltaic (PV), wind, and other resources while also maintaining reliability and economics.

  19. GIS based generation of dynamic hydrological and land patch simulation models for rural watershed areas

    Directory of Open Access Journals (Sweden)

    M. Varga

    2016-03-01

    Full Text Available This paper introduces a GIS-based methodology to generate a dynamic process model for the simulation-based analysis of a sensitive rural watershed. The Direct Computer Mapping (DCM) based solution starts from GIS layers and, via graph interpretation and graphical editing of the process network, the expert interface is able to integrate the field experts' knowledge into the computer-aided generation of the simulation model. The methodology was applied and tested for the southern catchment basin of Lake Balaton, Hungary. In the simplified hydrological model, the GIS description of nine watercourses, 121 water sections, 57 small lakes and 20 Lake Balaton compartments was mapped through the expert interface to the dynamic databases of the DCM model. The hydrological model involved precipitation, evaporation, transpiration, runoff and infiltration. The COoRdination of INformation on the Environment (CORINE) land cover based simplified "land patch" model considered the effect of meteorological and hydrological scenarios on freshwater resources in the land patches, rivers and lakes. The first results show that the applied model generation methodology helps to build complex models which, after validation, can support the analysis of various land uses with consideration of environmental aspects.

  20. Predictors of suicidal ideation in older people: a decision tree analysis.

    Science.gov (United States)

    Handley, Tonelle E; Hiles, Sarah A; Inder, Kerry J; Kay-Lambkin, Frances J; Kelly, Brian J; Lewin, Terry J; McEvoy, Mark; Peel, Roseanne; Attia, John R

    2014-11-01

    Suicide among older adults is a major public health issue worldwide. Although studies have identified psychological, physical, and social contributors to suicidal thoughts in older adults, few have explored the specific interactions between these factors. This article used a novel statistical approach to explore predictors of suicidal ideation in a community-based sample of older adults. Prospective cohort study. Participants aged 55-85 years were randomly selected from the Hunter Region, a large regional center in New South Wales, Australia. Baseline psychological, physical, and social factors, including psychological distress, physical functioning, and social support, were used to predict suicidal ideation at the 5-year follow-up. Classification and regression tree modeling was used to determine specific risk profiles for participants depending on their individual well-being in each of these key areas. Psychological distress was the strongest predictor, with 25% of people with high distress reporting suicidal ideation. Within high psychological distress, lower physical functioning significantly increased the likelihood of suicidal ideation, with high distress and low functioning being associated with ideation in 50% of cases. A substantial subgroup reported suicidal ideation in the absence of psychological distress; dissatisfaction with social support was the most important predictor among this group. The performance of the model was high (area under the curve: 0.81). Decision tree modeling enabled individualized "risk" profiles for suicidal ideation to be determined. Although psychological factors are important for predicting suicidal ideation, both physical and social factors significantly improved the predictive ability of the model. Assessing these factors may enhance identification of older people at risk of suicidal ideation. Copyright © 2014. Published by Elsevier Inc.

  1. Application of GIS-based SCS-CN method in West Bank catchments, Palestine

    OpenAIRE

    Sameer Shadeed; Mohammad Almasri

    2010-01-01

    Among the most basic challenges of hydrology are the prediction and quantification of catchment surface runoff. The runoff curve number (CN) is a key factor in determining runoff in the SCS (Soil Conservation Service) based hydrologic modeling method. The traditional SCS-CN method for calculating the composite curve number is very tedious and consumes a major portion of the hydrologic modeling time. Therefore, geographic information systems (GIS) are now being used in combination with the SCS...
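
    The SCS-CN relation underlying the method can be applied per GIS cell; a short sketch using the standard equations (S = 25400/CN − 254 in mm, and Q = (P − Ia)² / (P − Ia + S) for P > Ia, else Q = 0) is shown below, with illustrative curve numbers and rainfall.

```python
# Standard SCS-CN runoff relation applied per cell with NumPy (illustrative values).
import numpy as np

def scs_runoff(p_mm, cn, ia_ratio=0.2):
    """Direct runoff depth Q (mm) for rainfall P (mm) and curve number CN."""
    s = 25400.0 / cn - 254.0          # potential maximum retention (mm)
    ia = ia_ratio * s                 # initial abstraction
    q = np.where(p_mm > ia, (p_mm - ia) ** 2 / (p_mm - ia + s), 0.0)
    return q

# Example: a small grid of composite curve numbers and a 60 mm storm
cn_grid = np.array([[70, 78], [85, 92]], dtype=float)
print(scs_runoff(60.0, cn_grid))
```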

  2. Landslide susceptibility mapping using GIS-based statistical models and Remote sensing data in tropical environment.

    Science.gov (United States)

    Shahabi, Himan; Hashim, Mazlan

    2015-04-22

    This research presents the results of GIS-based statistical models for generating a landslide susceptibility map using geographic information system (GIS) and remote-sensing data for the Cameron Highlands area in Malaysia. Ten factors, including slope, aspect, soil, lithology, NDVI, land cover, distance to drainage, precipitation, distance to fault, and distance to road, were extracted from SAR data, SPOT 5 and WorldView-1 images. The relationships between the detected landslide locations and these ten related factors were identified by using GIS-based statistical models including the analytical hierarchy process (AHP), weighted linear combination (WLC) and spatial multi-criteria evaluation (SMCE) models. The landslide inventory map, which has a total of 92 landslide locations, was created based on numerous resources such as digital aerial photographs, AIRSAR data, WorldView-1 images, and field surveys. Then, 80% of the landslide inventory was used for training the statistical models and the remaining 20% was used for validation purposes. The validation results using the relative landslide density index (R-index) and receiver operating characteristic (ROC) demonstrated that the SMCE model (accuracy of 96%) is better at prediction than the AHP (accuracy of 91%) and WLC (accuracy of 89%) models. These landslide susceptibility maps would be useful for hazard mitigation purposes and regional planning.

  3. Application of decision tree algorithm for identification of rock forming minerals using energy dispersive spectrometry

    Science.gov (United States)

    Akkaş, Efe; Çubukçu, H. Evren; Artuner, Harun

    2014-05-01

    Rapid and automated mineral identification is compulsory in certain applications concerning natural rocks. Among all microscopic and spectrometric methods, energy dispersive X-ray spectrometers (EDS) integrated with scanning electron microscopes produce rapid information with reliable chemical data. Although obtaining elemental data with EDS analyses is fast and easy thanks to improving technology, it is rather challenging to perform accurate and rapid identification considering the large number of minerals in a rock sample, with dimensions varying between nanometers and centimeters. Furthermore, the physical properties of the specimen (roughness, thickness, electrical conductivity, position in the instrument, etc.) and of the incident electron beam (accelerating voltage, beam current, spot size, etc.) control the produced characteristic X-rays, which in turn affect the elemental analyses. In order to minimize the effects of these physical constraints and develop an automated mineral identification system, a rule induction paradigm has been applied to energy dispersive spectral data. Decision tree classifiers divide training data sets into subclasses using generated rules or decisions, and thereby produce a classification or recognition associated with these data sets. A number of thin sections prepared from rock samples with suitable mineralogy have been investigated and a preliminary set of 12 distinct mineral groups (olivine, orthopyroxene, clinopyroxene, apatite, amphibole, plagioclase, K-feldspar, zircon, magnetite, titanomagnetite, biotite, quartz), comprised mostly of silicates and oxides, has been selected. Energy dispersive spectral data for each group, consisting of 240 reference and 200 test analyses, have been acquired under various, non-standard, physical and electrical conditions. The reference X-ray data have been used to assign the spectral distribution of elements to the specified mineral groups. Consequently, the test data have been analyzed using
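
    A rule-induction sketch in the spirit of the abstract: a decision tree trained on made-up EDS elemental intensities for three mineral groups and printed as explicit rules. The element list, intensity values and class labels are assumptions for illustration only.

```python
# Induce mineral-identification rules from toy elemental intensity data.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

elements = ["Si", "Al", "Fe", "Mg", "Ca", "K"]
rng = np.random.default_rng(7)

# Toy reference "spectra" for three mineral groups (values are invented)
X = np.vstack([
    rng.normal([30, 1, 15, 25, 1, 0.5], 2, size=(80, 6)),      # "olivine"
    rng.normal([45, 1, 0.5, 0.5, 0.5, 0.5], 2, size=(80, 6)),  # "quartz"
    rng.normal([30, 10, 1, 0.5, 1, 12], 2, size=(80, 6)),      # "K-feldspar"
])
y = np.repeat(["olivine", "quartz", "K-feldspar"], 80)

clf = DecisionTreeClassifier(max_depth=3).fit(X, y)
print(export_text(clf, feature_names=elements))   # the induced decision rules
```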

  4. Agroforestry suitability analysis based upon nutrient availability mapping: a GIS based suitability mapping

    Directory of Open Access Journals (Sweden)

    Firoz Ahmad

    2017-05-01

    Full Text Available Agroforestry has drawn the attention of researchers due to its capacity to reduce the poverty and land degradation, improve food security and mitigate the climate change. However, the progress in promoting agroforestry is held back due to the lack of reliable data sets and appropriate tools to accurately map and to have an adequate decision making system for agroforestry modules. Agroforestry suitability being one special form of land suitability is very pertinent to study in the current times when there is tremendous pressure on the land as it is a limited commodity. The study aims for applying the geo-spatial tools towards visualizing various soil and environmental data to reveal the trends and interrelationships and to achieve a nutrient availability and agroforestry suitability map. Using weight matrix and ranks, individual maps were developed in ArcGIS 10.1 platform to generate nutrient availability map, which was later used to develop agroforestry suitability map. Watersheds were delineated using DEM in some part of the study area and were evaluated for prioritizing it and agroforestry suitability of the watersheds were also done as per the schematic flowchart. Agroforestry suitability regions were delineated based upon the weight and ranks by integrated mapping. The total open area was identified 42.4% out of which 21.6% area was found to have high suitability towards agroforestry. Within the watersheds, 22 village points were generated for creating buffers, which were further evaluated showing its proximity to high suitable agroforestry sites thus generating tremendous opportunity to the villagers to carry out agroforestry projects locally. This research shows the capability of remote sensing in studying agroforestry practices and in estimating the prominent factors for its optimal productivity. The ongoing agroforestry projects can be potentially diverted in the areas of high suitability as an extension. The use of ancillary data in GIS
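
    The weight-and-rank overlay described above can be illustrated with plain NumPy arrays standing in for reclassified GIS layers; the layer names, ranks, weights and class thresholds below are invented for the sketch.

```python
# Weighted overlay of ranked suitability layers.
import numpy as np

rng = np.random.default_rng(3)
shape = (100, 100)

# Reclassified layers: each cell holds a suitability rank from 1 (poor) to 5 (good)
layers = {
    "soil_nutrients": rng.integers(1, 6, shape),
    "slope":          rng.integers(1, 6, shape),
    "rainfall":       rng.integers(1, 6, shape),
    "land_cover":     rng.integers(1, 6, shape),
}
weights = {"soil_nutrients": 0.4, "slope": 0.2, "rainfall": 0.25, "land_cover": 0.15}

suitability = sum(weights[name] * layers[name].astype(float) for name in layers)

# Bin the weighted score into suitability classes (thresholds are arbitrary here)
classes = np.digitize(suitability, bins=[2.0, 3.0, 4.0])   # 0 = low ... 3 = high
print("share of highly suitable cells:", (classes == 3).mean())
```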

  5. Integrated prediction based on GIS for sandstone-type uranium deposits in the northwest of Ordos Basin

    International Nuclear Information System (INIS)

    Han Shaoyang; Ke Dan; Hu Shuiqing; Guo Qingyin; Hou Huiqun

    2005-01-01

    The integrated prediction model of sandstone-type uranium deposits and its integrated evaluation methods as well as flow of the work based on GIS are studied. A software for extracting metallogenic information is also developed. A multi-source exploring information database is established in the northwest of Ordos Basin, and an integrated digital mineral deposit prospecting model of sandstone-type uranium deposits is designed based on GIS. The authors have completed metallogenic information extraction and integrated evaluation of sandstone-type uranium deposits based on GIS in the study area. Research results prove that the integrated prediction of sandstone-type uranium deposits based on GIS may further delineate prospective target areas rapidly and improve the predictive precision. (authors)

  6. GIS-based Geotechnical Microzonation Mapping using Analytic ...

    African Journals Online (AJOL)

    Bheema

    based ... of parameters that are important for different multivariate decision making ... The methodology used in this study is shown in the flow chart (Fig 4). .... layer was prepared by considering the highest elevations of the static water levels.

  7. Evaluating public ambulance service levels by applying a GIS based ...

    African Journals Online (AJOL)

    Hunadi Mokgalaka

    based analysis of ambulance response time was undertaken. The purpose was to .... He tested the travel time of the primary response vehicles and ..... Real CORP Proceedings / Tagungsband, ISBN: 978-3-9502139-7-3, pp, 22-25, viewed 18.

  8. A method and application study on holistic decision tree for human reliability analysis in nuclear power plant

    International Nuclear Information System (INIS)

    Sun Feng; Zhong Shan; Wu Zhiyu

    2008-01-01

    The paper introduces the Holistic Decision Tree (HDT) method, a human reliability analysis method used mainly in nuclear power plant safety assessment, and how to apply it. The focus is primarily on providing the basic framework and background of the HDT method and the steps required to perform it. Influence factors and quality descriptors were established through interviews with operators at Qinshan Nuclear Power Plant, and HDT analyses were performed for SGTR and SLOCA based on this information. The HDT model uses a graphic tree structure to express the error rate as a function of the influence factors. The HDT method is capable of dealing with the uncertainty in HRA, and it is reliable and practical. (authors)

  9. Bayesian Decision Trees for predicting survival of patients: a study on the US National Trauma Data Bank.

    Science.gov (United States)

    Schetinin, Vitaly; Jakaite, Livia; Jakaitis, Janis; Krzanowski, Wojtek

    2013-09-01

    Trauma and Injury Severity Score (TRISS) models have been developed for predicting the survival probability of injured patients, the majority of whom sustain up to three injuries in six body regions. Practitioners have noted that the accuracy of TRISS predictions is unacceptable for patients with a larger number of injuries. Moreover, the TRISS method is incapable of providing accurate estimates of the predictive density of survival, which are required for calculating confidence intervals. In this paper we propose Bayesian inference for estimating the desired predictive density. The inference is based on decision tree models which split data along explanatory variables, which makes these models interpretable. The proposed method has outperformed the TRISS method in terms of accuracy of prediction on the cases recorded in the US National Trauma Data Bank. The developed method has been made available for evaluation purposes as a stand-alone application. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  10. A study on spatial decision support systems for HIV/AIDS prevention based on COM GIS technology

    Science.gov (United States)

    Yang, Kun; Luo, Huasong; Peng, Shungyun; Xu, Quanli

    2007-06-01

    Based on an in-depth analysis of the current status and existing problems of GIS technology applications in epidemiology, this paper proposes a method and process for establishing a spatial decision support system for AIDS epidemic prevention by integrating COM GIS, spatial database, GPS, remote sensing, and communication technologies, as well as ASP and ActiveX software development technologies. One of the most important issues in constructing such a system is how to integrate AIDS spreading models with GIS. The capabilities of GIS applications in AIDS epidemic prevention are described first. Some mature epidemic spreading models are then discussed in order to extract the computation parameters. Furthermore, a technical schema is proposed for integrating the AIDS spreading models with GIS and related geospatial technologies, in which the GIS and model-running platforms share a common spatial database and the computed results can be spatially visualized in desktop or Web GIS clients. Finally, a complete solution for establishing the decision support system for AIDS epidemic prevention is offered, based on the model integration methods and ESRI COM GIS software packages. The overall system is composed of sub-systems for data acquisition, network communication, model integration, the AIDS epidemic information spatial database, epidemic information querying and statistical analysis, dynamic epidemic surveillance, epidemic spatial analysis and decision support, as well as epidemic information publishing based on Web GIS.

  11. [Ecological environmental quality assessment of Hangzhou urban area based on RS and GIS].

    Science.gov (United States)

    Xu, Pengwei; Zhao, Duo

    2006-06-01

    To address the shortcomings of traditional ecological environmental quality assessment, this paper studied the spatial distribution of assessment factors at a mid-to-small scale and the conversion of integrated characteristics to gridded assessment cells. The main assessment factors, including natural environmental conditions, environmental quality, natural landscape, and urbanization pressure, were grouped into four types comprising about eleven factors and were derived from RS images and GIS-based spatial analysis of environmental quality vector maps. Based on GIS, a comprehensive assessment model for the ecological environmental quality of the Hangzhou urban area was established. In comparison with observed urban heat island effects, the assessment results were in good agreement with the ecological environmental quality of the urban area of Hangzhou.

  12. [Location selection for Shenyang urban parks based on GIS and multi-objective location allocation model].

    Science.gov (United States)

    Zhou, Yuan; Shi, Tie-Mao; Hu, Yuan-Man; Gao, Chang; Liu, Miao; Song, Lin-Qi

    2011-12-01

    Based on geographic information system (GIS) technology and a multi-objective location-allocation (LA) model, and considering four relatively independent objective factors (population density level, air pollution level, urban heat island effect level, and urban land use pattern), an optimized location selection for the urban parks within the Third Ring of Shenyang was conducted, and the selection results were compared with the spatial distribution of existing parks, aiming to evaluate the rationality of the spatial distribution of urban green spaces. In the location selection of urban green spaces in the study area, the air pollution factor was the most important, and, compared with a single objective factor, the weighted analysis of multi-objective factors could provide an optimized spatial location selection for new urban green spaces. The combination of GIS technology with the LA model would be a new approach for the spatial optimization of urban green spaces.

  13. A GIS-based method for household recruitment in a prospective pesticide exposure study

    Directory of Open Access Journals (Sweden)

    Phillips Michael J

    2008-04-01

    Full Text Available Abstract Background Recent advances in GIS technology and remote sensing have provided new opportunities to collect ecologic data on agricultural pesticide exposure. Many pesticide studies have used historical or records-based data on crops and their associated pesticide applications to estimate exposure by measuring residential proximity to agricultural fields. Very few of these studies collected environmental and biological samples from study participants. One of the reasons for this is the cost of identifying participants who reside near study fields and analyzing samples obtained from them. In this paper, we present a cost-effective, GIS-based method for crop field selection and household recruitment in a prospective pesticide exposure study in a remote location. For the most part, our multi-phased approach was carried out in a research facility, but involved two brief episodes of fieldwork for ground truthing purposes. This method was developed for a larger study designed to examine the validity of indirect pesticide exposure estimates by comparing measured exposures in household dust, water and urine with records-based estimates that use crop location, residential proximity and pesticide application data. The study focused on the pesticide atrazine, a broadleaf herbicide used in corn production and one of the most widely-used pesticides in the U.S. Results We successfully used a combination of remotely-sensed data, GIS-based methods and fieldwork to select study fields and recruit participants in Illinois, a state with high corn production and heavy atrazine use. Our several-step process consisted of the identification of potential study fields and residential areas using aerial photography; verification of crop patterns and land use via site visits; development of a GIS-based algorithm to define recruitment areas around crop fields; acquisition of geocoded household-level data within each recruitment area from a commercial vendor; and

  14. Application of GIS-based SCS-CN method in West Bank catchments, Palestine

    Directory of Open Access Journals (Sweden)

    Sameer Shadeed

    2010-03-01

    Full Text Available Among the most basic challenges of hydrology are the prediction and quantification of catchment surface runoff. The runoff curve number (CN) is a key factor in determining runoff in the SCS (Soil Conservation Service) based hydrologic modeling method. The traditional SCS-CN method for calculating the composite curve number is very tedious and consumes a major portion of the hydrologic modeling time. Therefore, geographic information systems (GIS) are now being used in combination with the SCS-CN method. This paper assesses the modeling of flow in West Bank catchments using the GIS-based SCS-CN method. The West Bank, Palestine, is characterized as an arid to semi-arid region with annual rainfall depths ranging between 100 mm in the vicinity of the Jordan River to 700 mm in the mountains extending across the central parts of the region. The estimated composite curve number for the entire West Bank is about 50 assuming dry conditions. This paper clearly demonstrates that the integration of GIS with the SCS-CN method provides a powerful tool for estimating runoff volumes in West Bank catchments, representing arid to semi-arid catchments of Palestine.
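
    For reference, a small sketch of the standard SCS-CN runoff relation that underlies the method (metric form, with the usual initial abstraction Ia = 0.2S); the storm depth and the composite CN of 50 quoted for dry conditions are used only as an example input.

```python
# Direct runoff from the SCS-CN relation (metric units).
def scs_cn_runoff(p_mm: float, cn: float) -> float:
    """Direct runoff depth (mm) for a storm of depth p_mm and curve number cn."""
    s = 25400.0 / cn - 254.0          # potential maximum retention (mm)
    ia = 0.2 * s                      # initial abstraction (mm)
    if p_mm <= ia:
        return 0.0
    return (p_mm - ia) ** 2 / (p_mm - ia + s)

# e.g. a hypothetical 60 mm storm on a catchment with composite CN = 50
print(round(scs_cn_runoff(60.0, 50.0), 1), "mm of direct runoff")
```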

  15. A Web GIS Framework for Participatory Sensing Service: An Open Source-Based Implementation

    Directory of Open Access Journals (Sweden)

    Yu Nakayama

    2017-04-01

    Full Text Available Participatory sensing is the process in which individuals or communities collect and analyze systematic data using mobile phones and cloud services. To efficiently develop participatory sensing services, some server-side technologies have been proposed. Although they provide a good platform for participatory sensing, they are not optimized for spatial data management and processing. For the purpose of spatial data collection and management, many web GIS approaches have been studied. However, they still have not focused on the optimal framework for participatory sensing services. This paper presents a web GIS framework for participatory sensing service (FPSS. The proposed FPSS enables an integrated deployment of spatial data capture, storage, and data management functions. In various types of participatory sensing experiments, users can collect and manage spatial data in a unified manner. This feature is realized by the optimized system architecture and use case based on the general requirements for participatory sensing. We developed an open source GIS-based implementation of the proposed framework, which can overcome financial difficulties that are one of the major problems of deploying sensing experiments. We confirmed with the prototype that participatory sensing experiments can be performed efficiently with the proposed FPSS.

  16. GIS-based modelling of odour emitted from the waste processing plant: case study

    Directory of Open Access Journals (Sweden)

    Sówka Izabela

    2017-01-01

    Full Text Available The emission of odours into the atmospheric air from the municipal economy and industrial plants, especially in urbanized areas, causes a serious problem with which mankind has been struggling for years. Excessive exposure of people to odours may result in many negative health effects, including, for example, headaches and vomiting. There are many different methods that are used to evaluate odour nuisance. The results obtained through those methods can then be used to visualize and analyse the distribution of odour concentrations in a given area using a GIS (Geographic Information System). Applying GIS to the spatial analysis of the impact of odours enables the assessment of the magnitude and likelihood of the occurrence of odour nuisance. Modelling using GIS tools and spatial interpolation methods such as IDW and kriging can provide an alternative to the standard modelling tools, which generally use emission values from sources identified as major emitters of odours. The work presents the results, based on odour measurement data from a waste processing plant, of an attempt to connect two different tools – the reference model OPERAT FB and GIS-based dispersion modelling performed using the IDW method and ordinary kriging – to analyse their behaviour in terms of limited observation values.
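
    A bare-bones inverse-distance-weighting (IDW) interpolation of the kind referred to above; the sample locations, odour concentrations and the power parameter are placeholders, not the plant's measurement data.

```python
# IDW interpolation of scattered samples onto a regular grid.
import numpy as np

def idw(xy_known, z_known, xy_query, power=2.0, eps=1e-12):
    """Interpolate values at xy_query from scattered samples (xy_known, z_known)."""
    d = np.linalg.norm(xy_query[:, None, :] - xy_known[None, :, :], axis=2)
    w = 1.0 / (d + eps) ** power              # inverse-distance weights
    return (w * z_known).sum(axis=1) / w.sum(axis=1)

rng = np.random.default_rng(5)
samples_xy = rng.uniform(0, 1000, size=(20, 2))     # measurement locations (m), assumed
samples_z = rng.uniform(1, 50, size=20)             # odour concentrations, assumed

# interpolate onto a coarse grid
gx, gy = np.meshgrid(np.linspace(0, 1000, 50), np.linspace(0, 1000, 50))
grid_xy = np.column_stack([gx.ravel(), gy.ravel()])
grid_z = idw(samples_xy, samples_z, grid_xy).reshape(gx.shape)
```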

  17. Research on evaluation of degree of complexity of mining fault network based on GIS

    Energy Technology Data Exchange (ETDEWEB)

    Hua Zhang; Yun-jia Wang; Chuan-zhi Liu [China University of Mining and Technology, Jiangsu (China). School of Environment Science and Spatial Informatics

    2007-03-15

    A large number of spatial and attribute data are involved in coal resource evaluation. Databases are a relatively advanced data management technology, but their major defect is poor handling of graphic and spatial data, which makes it difficult to realize scientific management of evaluation data with spatial characteristics and of evaluation result maps. On account of these deficiencies, the evaluation of the degree of complexity of a mining fault network based on a geographic information system (GIS), which integrates the management of spatial and attribute data, is proposed. The fractal dimension is an index which can reflect the comprehensive information of fault number, density, size, composition and dynamic mechanism, and it is used as the quantitative evaluation index. Evaluation software has been developed based on a component GIS (MapX), with which the degree of complexity of the fault network is evaluated quantitatively using the fractal dimension in the Liuqiao No.2 coal mine as an example. Results show that the application of GIS technology is effective in acquiring model parameters and enhancing the definition of data and evaluation results. The fault network is a system with fractal structure and its complexity can be described reasonably and accurately by the fractal dimension, which provides an effective method for coal resource evaluation. 9 refs., 6 figs., 2 tabs.
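
    The fractal dimension used as the complexity index above is commonly estimated by box counting; the sketch below runs the computation on a randomly generated binary "fault raster", so only the procedure, not the values, is meaningful.

```python
# Box-counting estimate of the fractal dimension of a binary raster.
import numpy as np

def box_counting_dimension(mask: np.ndarray, sizes=(2, 4, 8, 16, 32)) -> float:
    counts = []
    for s in sizes:
        n_boxes = 0
        for i in range(0, mask.shape[0], s):
            for j in range(0, mask.shape[1], s):
                if mask[i:i + s, j:j + s].any():   # box contains part of a fault
                    n_boxes += 1
        counts.append(n_boxes)
    # slope of log(count) vs log(1/size) is the box-counting dimension
    coeffs = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return coeffs[0]

rng = np.random.default_rng(11)
fault_mask = rng.random((128, 128)) < 0.05          # stand-in rasterized fault traces
print("estimated fractal dimension:", round(box_counting_dimension(fault_mask), 2))
```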

  18. Learning machines and sleeping brains: Automatic sleep stage classification using decision-tree multi-class support vector machines.

    Science.gov (United States)

    Lajnef, Tarek; Chaibi, Sahbi; Ruby, Perrine; Aguera, Pierre-Emmanuel; Eichenlaub, Jean-Baptiste; Samet, Mounir; Kachouri, Abdennaceur; Jerbi, Karim

    2015-07-30

    Sleep staging is a critical step in a range of electrophysiological signal processing pipelines used in clinical routine as well as in sleep research. Although the results currently achievable with automatic sleep staging methods are promising, there is need for improvement, especially given the time-consuming and tedious nature of visual sleep scoring. Here we propose a sleep staging framework that consists of a multi-class support vector machine (SVM) classification based on a decision tree approach. The performance of the method was evaluated using polysomnographic data from 15 subjects (electroencephalogram (EEG), electrooculogram (EOG) and electromyogram (EMG) recordings). The decision tree, or dendrogram, was obtained using a hierarchical clustering technique and a wide range of time and frequency-domain features were extracted. Feature selection was carried out using forward sequential selection and classification was evaluated using k-fold cross-validation. The dendrogram-based SVM (DSVM) achieved mean specificity, sensitivity and overall accuracy of 0.92, 0.74 and 0.88 respectively, compared to expert visual scoring. Restricting DSVM classification to data where both experts' scoring was consistent (76.73% of the data) led to a mean specificity, sensitivity and overall accuracy of 0.94, 0.82 and 0.92 respectively. The DSVM framework outperforms classification with more standard multi-class "one-against-all" SVM and linear-discriminant analysis. The promising results of the proposed methodology suggest that it may be a valuable alternative to existing automatic methods and that it could accelerate visual scoring by providing a robust starting hypnogram that can be further fine-tuned by expert inspection. Copyright © 2015 Elsevier B.V. All rights reserved.
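
    A toy illustration of arranging binary SVMs in a small decision tree, in the spirit of the dendrogram-based SVM described above; the stage grouping (wake vs. sleep, then REM vs. NREM), the features and the data are assumptions, not the authors' pipeline.

```python
# Two-level cascade of binary SVM classifiers on simulated features.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 10))                 # stand-in EEG/EOG/EMG features
y = rng.integers(0, 3, size=300)               # 0 = wake, 1 = REM, 2 = NREM (assumed)

top = SVC(kernel="rbf").fit(X, (y == 0).astype(int))                   # wake vs. sleep
sleep = y != 0
bottom = SVC(kernel="rbf").fit(X[sleep], (y[sleep] == 1).astype(int))  # REM vs. NREM

def predict_stage(x: np.ndarray) -> int:
    """Walk the two-node tree: first wake vs. sleep, then REM vs. NREM."""
    x = x.reshape(1, -1)
    if top.predict(x)[0] == 1:
        return 0                               # wake
    return 1 if bottom.predict(x)[0] == 1 else 2

print(predict_stage(X[0]))
```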

  19. Single nucleotide polymorphism barcoding of cytochrome c oxidase I sequences for discriminating 17 species of Columbidae by decision tree algorithm.

    Science.gov (United States)

    Yang, Cheng-Hong; Wu, Kuo-Chuan; Dahms, Hans-Uwe; Chuang, Li-Yeh; Chang, Hsueh-Wei

    2017-07-01

    DNA barcodes are widely used in taxonomy, systematics, species identification, food safety, and forensic science. Most conventional DNA barcode sequences contain the whole information of a given barcoding gene. Most of the sequence information does not vary and is uninformative for a given group of taxa within a monophylum. We suggest here a method that reduces the amount of non-informative nucleotides in a given barcoding sequence of a major taxon, like the prokaryotes, or eukaryotic animals, plants, or fungi. The actual differences in genetic sequences, called single nucleotide polymorphism (SNP) genotyping, provide a tool for developing a rapid, reliable, and high-throughput assay for the discrimination between known species. Here, we investigated SNPs as robust markers of genetic variation for identifying different pigeon species based on available cytochrome c oxidase I (COI) data. We propose a decision tree-based SNP barcoding (DTSB) algorithm in which SNP patterns are selected from the DNA barcoding sequences of several evolutionarily related species in order to identify a single species, with pigeons as an example. This approach can make use of any established barcoding system. We first used as an example the mitochondrial COI gene information of 17 pigeon species (Columbidae, Aves), applying DTSB after sequence trimming and alignment. SNPs were chosen following the decision tree rules to form species-specific SNP barcodes. The shortest barcode, of about 11 bp, was then generated for discriminating the 17 pigeon species using the DTSB method. This method provides a sequence alignment and decision tree approach to parsimoniously assign a unique and shortest SNP barcode for any known species of a chosen monophyletic taxon where a barcoding sequence is available.
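
    A toy illustration of picking species-diagnostic SNP positions with a decision tree: aligned sequences are one-hot encoded per position and the positions used by the tree's splits form a short barcode. The sequences and species names are invented; this is not the DTSB algorithm itself.

```python
# Select diagnostic alignment positions with a decision tree.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

aligned = {                      # fabricated 12-bp "COI" fragments, 3 species
    "species_A": ["ATGCGTACGTTA", "ATGCGTACGTTA", "ATGCGAACGTTA"],
    "species_B": ["ATGCGTACGCTA", "ATGCGTACGCTA", "ATGCGTACGCTA"],
    "species_C": ["TTGCGTACGTTA", "TTGCGTACGTTA", "TTGCGTTCGTTA"],
}
bases = "ACGT"

def one_hot(seq: str) -> list:
    return [1 if seq[i] == b else 0 for i in range(len(seq)) for b in bases]

X = np.array([one_hot(s) for seqs in aligned.values() for s in seqs])
y = np.array([sp for sp, seqs in aligned.items() for _ in seqs])

tree = DecisionTreeClassifier().fit(X, y)
used = sorted({int(f) // len(bases) for f in tree.tree_.feature if f >= 0})
print("alignment positions used as SNP barcode:", used)
```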

  20. Gis-Based Multi-Criteria Decision Analysis for Forest Fire Risk Mapping

    Science.gov (United States)

    Akay, A. E.; Erdoğan, A.

    2017-11-01

    The forested areas along the coastal zone of the Mediterranean region in Turkey are classified as first-degree fire sensitive areas. Forest fires are a major environmental disaster that affects the sustainability of forest ecosystems. Besides, forest fires result in important economic losses and even threaten human lives. Thus, it is critical to determine the forested areas with fire risk and thereby minimize the damage to forest resources by taking the necessary precautions in these areas. The risk of forest fire can be assessed based on various factors such as forest vegetation structure (tree species, crown closure, tree stage), topographic features (slope and aspect), and climatic parameters (temperature, wind). In this study, a GIS-based Multi-Criteria Decision Analysis (MCDA) method was used to generate a forest fire risk map. The study was implemented in the forested areas within the Yayla Forest Enterprise Chiefs at Dursunbey Forest Enterprise Directorate, which is classified as a first-degree fire sensitive area. In the solution process, the "extAhp 2.0" plug-in running the Analytic Hierarchy Process (AHP) method in ArcGIS 10.4.1 was used to categorize the study area into fire risk classes: extreme risk, high risk, moderate risk, and low risk. The results indicated that 23.81% of the area was of extreme risk, while 25.81% was of high risk. The results also indicated that the most effective criterion was tree species, followed by tree stage, while aspect was the least effective criterion for forest fire risk. It was revealed that GIS techniques integrated with MCDA methods are effective tools to quickly estimate forest fire risk at low cost. The integration of these factors into GIS can be very useful to determine forested areas with high fire risk and also to plan forestry management after fire.
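
    The AHP step can be illustrated in miniature: criterion weights from the principal eigenvector of a pairwise comparison matrix, plus the consistency ratio. The criteria and judgment values below are invented, not those of the cited study.

```python
# AHP weights and consistency ratio from a pairwise comparison matrix.
import numpy as np

criteria = ["tree species", "tree stage", "slope", "aspect"]   # assumed criteria
A = np.array([[1,   2,   4,   6],
              [1/2, 1,   3,   5],
              [1/4, 1/3, 1,   3],
              [1/6, 1/5, 1/3, 1]], dtype=float)                # assumed judgments

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
weights = np.abs(eigvecs[:, k].real)
weights /= weights.sum()                       # principal-eigenvector weights

lam_max = eigvals.real[k]
n = A.shape[0]
ci = (lam_max - n) / (n - 1)                   # consistency index
cr = ci / 0.90                                 # random index RI = 0.90 for n = 4
print(dict(zip(criteria, weights.round(3))), "CR =", round(cr, 3))
```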

  1. GIS based analysis of future district heating potential in Denmark

    DEFF Research Database (Denmark)

    Nielsen, Steffen; Möller, Bernd

    2013-01-01

    in Denmark have been mapped in a heat atlas which includes all buildings and their heat demands. This article focuses on developing a method for assessing the costs associated with supplying these buildings with DH. The analysis is based on the existing DH areas in Denmark. By finding the heat production...

  2. Building confidence and credibility into CAD with belief decision trees

    Science.gov (United States)

    Affenit, Rachael N.; Barns, Erik R.; Furst, Jacob D.; Rasin, Alexander; Raicu, Daniela S.

    2017-03-01

    Creating classifiers for computer-aided diagnosis in the absence of ground truth is a challenging problem. Using experts' opinions as reference truth is difficult because the variability in the experts' interpretations introduces uncertainty in the labeled diagnostic data. This uncertainty translates into noise, which can significantly affect the performance of any classifier on test data. To address this problem, we propose a new label set weighting approach to combine the experts' interpretations and their variability, as well as a selective iterative classification (SIC) approach that is based on conformal prediction. Using the NIH/NCI Lung Image Database Consortium (LIDC) dataset in which four radiologists interpreted the lung nodule characteristics, including the degree of malignancy, we illustrate the benefits of the proposed approach. Our results show that the proposed 2-label-weighted approach significantly outperforms the accuracy of the original 5- label and 2-label-unweighted classification approaches by 39.9% and 7.6%, respectively. We also found that the weighted 2-label models produce higher skewness values by 1.05 and 0.61 for non-SIC and SIC respectively on root mean square error (RMSE) distributions. When each approach was combined with selective iterative classification, this further improved the accuracy of classification for the 2-weighted-label by 7.5% over the original, and improved the skewness of the 5-label and 2-unweighted-label by 0.22 and 0.44 respectively.

  3. ADA perceived disability claims: a decision-tree analysis.

    Science.gov (United States)

    Draper, William R; Hawley, Carolyn E; McMahon, Brian T; Reid, Christine A; Barbir, Lara A

    2014-06-01

    The purpose of this study is to examine the possible interactions of predictor variables pertaining to perceived disability claims contained in a large governmental database. Specifically, it is a retrospective analysis of US Equal Employment Opportunity Commission (EEOC) data for the entire population of workplace discrimination claims based on the "regarded as disabled" prong of the Americans with Disabilities Act (ADA) definition of disability. The study utilized records extracted from a "master database" of over two million charges of workplace discrimination in the Integrated Mission System of the EEOC. This database includes all ADA-related discrimination allegations filed from July 26, 1992 through December 31, 2008. Chi squared automatic interaction detection (CHAID) was employed to analyze interaction effects of relevant variables, such as issue (grievance) and industry type. The research question addressed by CHAID is: What combination of factors are associated with merit outcomes for people making ADA EEOC allegations who are "regarded as" having disabilities? The CHAID analysis shows how merit outcome is predicted by the interaction of relevant variables. Issue was found to be the most prominent variable in determining merit outcome, followed by industry type, but the picture is made more complex by qualifications regarding age and race data. Although discharge was the most frequent grievance among charging parties in the perceived disability group, its merit outcome was significantly less than that for the leading factor of hiring.

  4. Prediction of the compression ratio for municipal solid waste using decision tree.

    Science.gov (United States)

    Heshmati R, Ali Akbar; Mokhtari, Maryam; Shakiba Rad, Saeed

    2014-01-01

    The compression ratio of municipal solid waste (MSW) is an essential parameter for evaluation of waste settlement and landfill design. However, no appropriate model has been proposed to estimate the waste compression ratio so far. In this study, a decision tree method was utilized to predict the waste compression ratio (C'c). The tree was constructed using Quinlan's M5 algorithm. A reliable database retrieved from the literature was used to develop a practical model that relates C'c to waste composition and properties, including dry density, dry weight water content, and percentage of biodegradable organic waste using the decision tree method. The performance of the developed model was examined in terms of different statistical criteria, including correlation coefficient, root mean squared error, mean absolute error and mean bias error, recommended by researchers. The obtained results demonstrate that the suggested model is able to evaluate the compression ratio of MSW effectively.
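
    A regression-tree sketch for C'c with the evaluation metrics named in the abstract; scikit-learn's CART-style regressor stands in for Quinlan's M5 model tree, and the waste-composition data are simulated rather than taken from the literature database.

```python
# Regression tree for a simulated compression ratio and the four error metrics.
import numpy as np
from sklearn.tree import DecisionTreeRegressor
from sklearn.metrics import mean_absolute_error, mean_squared_error

rng = np.random.default_rng(8)
n = 200
dry_density = rng.uniform(0.3, 1.2, n)        # t/m3 (assumed range)
water_content = rng.uniform(20, 120, n)       # % of dry weight (assumed range)
organic = rng.uniform(10, 70, n)              # % biodegradable organics (assumed)
cc = 0.1 + 0.002 * organic + 0.0005 * water_content - 0.05 * dry_density \
     + rng.normal(0, 0.01, n)                 # synthetic compression ratio

X = np.column_stack([dry_density, water_content, organic])
model = DecisionTreeRegressor(max_depth=4, min_samples_leaf=10).fit(X, cc)
pred = model.predict(X)

print("R   :", round(np.corrcoef(cc, pred)[0, 1], 3))
print("RMSE:", round(mean_squared_error(cc, pred) ** 0.5, 4))
print("MAE :", round(mean_absolute_error(cc, pred), 4))
print("MBE :", round(float(np.mean(pred - cc)), 4))
```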

  5. Learning from examples - Generation and evaluation of decision trees for software resource analysis

    Science.gov (United States)

    Selby, Richard W.; Porter, Adam A.

    1988-01-01

    A general solution method for the automatic generation of decision (or classification) trees is investigated. The approach is to provide insights through in-depth empirical characterization and evaluation of decision trees for software resource data analysis. The trees identify classes of objects (software modules) that had high development effort. Sixteen software systems ranging from 3,000 to 112,000 source lines were selected for analysis from a NASA production environment. The collection and analysis of 74 attributes (or metrics), for over 4,700 objects, captured information about the development effort, faults, changes, design style, and implementation style. A total of 9,600 decision trees were automatically generated and evaluated. The trees correctly identified 79.3 percent of the software modules that had high development effort or faults, and the trees generated from the best parameter combinations correctly identified 88.4 percent of the modules on the average.

  6. Three-dimensional object recognition using similar triangles and decision trees

    Science.gov (United States)

    Spirkovska, Lilly

    1993-01-01

    A system, TRIDEC, that is capable of distinguishing between a set of objects despite changes in the objects' positions in the input field, their size, or their rotational orientation in 3D space is described. TRIDEC combines very simple yet effective features with the classification capabilities of inductive decision tree methods. The feature vector is a list of all similar triangles defined by connecting all combinations of three pixels in a coarse coded 127 x 127 pixel input field. The classification is accomplished by building a decision tree using the information provided from a limited number of translated, scaled, and rotated samples. Simulation results are presented which show that TRIDEC achieves 94 percent recognition accuracy in the 2D invariant object recognition domain and 98 percent recognition accuracy in the 3D invariant object recognition domain after training on only a small sample of transformed views of the objects.

  7. 'Misclassification error' greedy heuristic to construct decision trees for inconsistent decision tables

    KAUST Repository

    Azad, Mohammad; Moshkov, Mikhail

    2014-01-01

    A greedy algorithm is presented in this paper to construct decision trees for three different approaches (many-valued decision, most common decision, and generalized decision) in order to handle the inconsistency of multiple decisions in a decision table. In this algorithm, a greedy heuristic, 'misclassification error', is used which runs faster and, for some cost functions, gives better results than the 'number of boundary subtables' heuristic from the literature. Therefore, it can be used for larger data sets and does not require a huge amount of memory. Experimental results on the depth, average depth and number of nodes of decision trees constructed by this algorithm are compared in the framework of each of the three approaches.
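
    The 'misclassification error' heuristic can be shown in isolation: for each candidate attribute of a toy decision table, score the split by the number of rows whose decision differs from the majority decision of their subtable. This mirrors the flavour of the greedy criterion, not the paper's full algorithm.

```python
# Greedy attribute choice by misclassification error on a toy decision table.
from collections import Counter

# toy decision table: (attribute values ..., decision)
rows = [
    ((0, 1), "a"), ((0, 0), "a"), ((0, 1), "b"),
    ((1, 1), "b"), ((1, 0), "b"), ((1, 0), "a"),
]

def misclassification_error(rows, attr):
    """Rows left misclassified if each subtable predicts its majority decision."""
    groups = {}
    for values, decision in rows:
        groups.setdefault(values[attr], []).append(decision)
    return sum(len(g) - Counter(g).most_common(1)[0][1] for g in groups.values())

best = min(range(2), key=lambda a: misclassification_error(rows, a))
print("split on attribute", best,
      "with", misclassification_error(rows, best), "misclassified rows")
```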

  8. Binary Decision Tree Development for Probabilistic Safety Assessment Applications

    International Nuclear Information System (INIS)

    Simic, Z.; Banov, R.; Mikulicic, V.

    2008-01-01

    The aim of this article is to describe the state of development of a relatively new approach in probabilistic safety analysis (PSA). The approach is based on applying the binary decision diagram (BDD) representation of logical functions to the quantitative and qualitative analysis of complex systems that are represented by fault trees and event trees in the PSA used for nuclear power plant risk determination. Even though the BDD approach offers a full solution compared to the partial one from the conventional quantification approach, there are still problems to be solved before the new approach can be fully implemented. The major problem with full application of BDD is the difficulty of obtaining any solution for PSA models of a certain complexity. This paper compares the two approaches to PSA quantification. Its major focus is the description of an in-house developed BDD application implementing original algorithms. The number of nodes required to represent the BDD is extremely sensitive to the chosen order of variables (i.e., basic events in PSA), and the problem of finding an optimal order of variables that form the BDD is NP-complete. This paper presents an original approach to finding the initial order of variables used for BDD construction by various dynamic reordering schemes. The main advantage of this approach over known methods of finding the initial order is its better results with respect to the working memory required and the time needed to finish the BDD construction. The developed method is compared against results from well-known methods such as depth-first and breadth-first search procedures. The described method may be applied to finding an initial order for fault trees/event trees created from basic events by means of logical operations (e.g., negation, and, or, exclusive or). With some testing models a significant reduction of used memory has been achieved, sometimes

  9. Exploratory Use of Decision Tree Analysis in Classification of Outcome in Hypoxic–Ischemic Brain Injury

    Directory of Open Access Journals (Sweden)

    Thanh G. Phan

    2018-03-01

    Full Text Available Background: Prognostication following hypoxic ischemic encephalopathy (brain injury) is important for clinical management. The aim of this exploratory study is to use a decision tree model to find clinical and MRI associates of severe disability and death in this condition. We evaluate a clinical model and then the added value of MRI data. Method: The inclusion criteria were as follows: age ≥17 years, cardio-respiratory arrest, and coma on admission (2003–2011). Decision tree analysis was used to find clinical [Glasgow Coma Score (GCS), features of the cardiac arrest, therapeutic hypothermia, age, and sex] and MRI (infarct volume) associates of severe disability and death. We used the area under the ROC (auROC) to determine the accuracy of the model. There were 41 patients (63.7% males) with MRI imaging, with an average age of 51.5 ± 18.9 years. The decision trees showed that infarct volume and age were important factors for discrimination between mild to moderate disability and severe disability and death at day 0 and day 2. The auROC for this model was 0.94 (95% CI 0.82–1.00). At day 7, GCS value was the only predictor; the auROC was 0.96 (95% CI 0.86–1.00). Conclusion: Our findings provide proof of concept for further exploration of the role of MR imaging and decision tree analysis in the early prognostication of hypoxic ischemic brain injury.

  10. Validating a decision tree for serious infection: diagnostic accuracy in acutely ill children in ambulatory care.

    Science.gov (United States)

    Verbakel, Jan Y; Lemiengre, Marieke B; De Burghgraeve, Tine; De Sutter, An; Aertgeerts, Bert; Bullens, Dominique M A; Shinkins, Bethany; Van den Bruel, Ann; Buntinx, Frank

    2015-08-07

    Acute infection is the most common presentation of children in primary care, with only a few having a serious infection (eg, sepsis, meningitis, pneumonia). To avoid complications or death, early recognition and adequate referral are essential. Clinical prediction rules have the potential to improve diagnostic decision-making for rare but serious conditions. In this study, we aimed to validate a recently developed decision tree in a new but similar population. Diagnostic accuracy study validating a clinical prediction rule. Acutely ill children presenting to ambulatory care in Flanders, Belgium, consisting of general practice and paediatric assessment in outpatient clinics or the emergency department. Physicians were asked to score the decision tree in every child. The outcome of interest was hospital admission for at least 24 h with a serious infection within 5 days after initial presentation. We report the diagnostic accuracy of the decision tree in sensitivity, specificity, likelihood ratios and predictive values. In total, 8962 acute illness episodes were included, of which 283 led to admission to hospital with a serious infection. Sensitivity of the decision tree was 100% (95% CI 71.5% to 100%) at a specificity of 83.6% (95% CI 82.3% to 84.9%) in the general practitioner setting, with 17% of children testing positive. In the paediatric outpatient and emergency department settings, sensitivities were below 92%, with specificities below 44.8%. In an independent validation cohort, this clinical prediction rule has been shown to be extremely sensitive in identifying children at risk of hospital admission for a serious infection in general practice, making it suitable for ruling out. NCT02024282. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://group.bmj.com/group/rights-licensing/permissions.
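
    The accuracy measures reported in such validation studies follow directly from a 2x2 table; the sketch below computes sensitivity, specificity, likelihood ratios and predictive values from illustrative counts (not the Belgian cohort's).

```python
# Diagnostic accuracy measures from a 2x2 validation table.
def diagnostic_accuracy(tp: int, fp: int, fn: int, tn: int) -> dict:
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    return {
        "sensitivity": sens,
        "specificity": spec,
        "LR+": sens / (1 - spec) if spec < 1 else float("inf"),
        "LR-": (1 - sens) / spec if spec > 0 else float("inf"),
        "PPV": tp / (tp + fp),
        "NPV": tn / (tn + fn),
    }

# illustrative counts only
print(diagnostic_accuracy(tp=40, fp=700, fn=0, tn=3500))
```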

  11. Office of Legacy Management Decision Tree for Solar Photovoltaic Projects - 13317

    International Nuclear Information System (INIS)

    Elmer, John; Butherus, Michael; Barr, Deborah L.

    2013-01-01

    To support consideration of renewable energy power development as a land reuse option, the DOE Office of Legacy Management (LM) and the National Renewable Energy Laboratory (NREL) established a partnership to conduct an assessment of wind and solar renewable energy resources on LM lands. From a solar capacity perspective, the larger sites in the western United States present opportunities for constructing solar photovoltaic (PV) projects. A detailed analysis and preliminary plan was developed for three large sites in New Mexico, assessing the costs, the conceptual layout of a PV system, and the electric utility interconnection process. As a result of the study, a 1,214-hectare (3,000-acre) site near Grants, New Mexico, was chosen for further study. The state incentives, utility connection process, and transmission line capacity were key factors in assessing the feasibility of the project. LM's Durango, Colorado, Disposal Site was also chosen for consideration because the uranium mill tailings disposal cell is on a hillside facing south, transmission lines cross the property, and the community was very supportive of the project. LM worked with the regulators to demonstrate that the disposal cell's long-term performance would not be impacted by the installation of a PV solar system. A number of LM-unique issues were resolved in making the site available for a private party to lease a portion of the site for a solar PV project. A lease was awarded in September 2012. Using a solar decision tree that was developed and launched by the EPA and NREL, LM has modified and expanded the decision tree structure to address the unique aspects and challenges faced by LM on its multiple sites. The LM solar decision tree covers factors such as land ownership, usable acreage, financial viability of the project, stakeholder involvement, and transmission line capacity. As additional sites are transferred to LM in the future, the decision tree will assist in determining whether a solar

  12. Penerapan Algoritma Decision Tree C4.5 Untuk Penilaian Rumah Tinggal

    OpenAIRE

    Setiadi, Budi

    2015-01-01

    There is still a possibility of error in appraising homes used as reference collateral values for credit, which opens up opportunities for NPL. A method of assessment (value prediction) is therefore needed that is sufficiently proportional, credible and accurate, since inaccurate predictions lead to improper credit management planning. Predicting the value of a collateral house has attracted the interest of many researchers because of its importance both theoretically and empirically. Namely, the C4.5 decision tree algorithm, CART and CHAID that c...

  13. Non-compliance with a postmastectomy radiotherapy guideline: Decision tree and cause analysis

    OpenAIRE

    Razavi, Amir R; Gill, Hans; Åhlfeldt, Hans; Shahsavar, Nosrat

    2008-01-01

    Background: The guideline for postmastectomy radiotherapy (PMRT), which is prescribed to reduce recurrence of breast cancer in the chest wall and improve overall survival, is not always followed. Identifying and extracting important patterns of non-compliance are crucial in maintaining the quality of care in Oncology. Methods: Analysis of 759 patients with malignant breast cancer using decision tree induction (DTI) found patterns of non-compliance with the guideline. The PMRT guideline was us...

  14. Combining evolutionary algorithms with oblique decision trees to detect bent-double galaxies

    Science.gov (United States)

    Cantu-Paz, Erick; Kamath, Chandrika

    2000-10-01

    Decision trees have long been popular in classification as they use simple and easy-to-understand tests at each node. Most variants of decision trees test a single attribute at a node, leading to axis-parallel trees, where the test results in a hyperplane which is parallel to one of the dimensions in the attribute space. These trees can be rather large and inaccurate in cases where the concept to be learned is best approximated by oblique hyperplanes. In such cases, it may be more appropriate to use an oblique decision tree, where the decision at each node is a linear combination of the attributes. Oblique decision trees have not gained wide popularity in part due to the complexity of constructing good oblique splits and the tendency of existing splitting algorithms to get stuck in local minima. Several alternatives have been proposed to handle these problems, including randomization in conjunction with deterministic hill-climbing and the use of simulated annealing. In this paper, we use evolutionary algorithms (EAs) to determine the split. EAs are well suited for this problem because of their global search properties, their tolerance to noisy fitness evaluations, and their scalability to large dimensional search spaces. We demonstrate our technique on a synthetic data set, and then we apply it to a practical problem from astronomy, namely, the classification of galaxies with a bent-double morphology. In addition, we describe our experiences with several split evaluation criteria. Our results suggest that, in some cases, the evolutionary approach is faster and more accurate than existing oblique decision tree algorithms. However, for our astronomical data, the accuracy is not significantly different from that of the axis-parallel trees.

  15. Decision tree analysis to evaluate dry cow strategies under UK conditions

    OpenAIRE

    Berry, E.A.; Hogeveen, H.; Hillerton, J.E.

    2005-01-01

    Economic decisions on animal health strategies address the cost-benefit aspect along with animal welfare and public health concerns. Decision tree analysis at an individual cow level highlighted that there is little economic difference between the use of either dry cow antibiotic or an internal teat sealant in preventing a new intramammary infection in a cow free of infection in all quarters of the mammary gland at drying off. However, a potential net loss of over pound20 per cow might occur ...

  16. Multivariate decision tree designing for the classification of multi-jet topologies in e+e- collisions

    CERN Document Server

    Mjahed, M

    2002-01-01

    The binary decision tree method is used to separate between several multi-jet topologies in e+e- collisions. Instead of the univariate process usually taken, a new design procedure for constructing multivariate decision trees is proposed. The segmentation is obtained by considering some features functions, where linear and non-linear discriminant functions and a minimal distance method are used. The classification focuses on ALEPH simulated events, with multi-jet topologies. Compared to a standard univariate tree, the multivariate decision trees offer significantly better performance.

  17. A GIS-based hedonic price model for agricultural land

    Science.gov (United States)

    Demetriou, Demetris

    2015-06-01

    Land consolidation is a very effective land management planning approach that aims towards rural/agricultural sustainable development. Land reallocation, which involves land tenure restructuring, is the most important, complex and time consuming component of land consolidation. Land reallocation relies on land valuation, since its fundamental principle provides that after consolidation each landowner shall be granted a property of an aggregate value that is approximately the same as the value of the property owned prior to consolidation. Therefore, land value is the crucial factor for the land reallocation process and hence for the success and acceptance of the final land consolidation plan. Land valuation is a process of assigning values to all parcels (and their contents) and it is usually carried out by an ad-hoc committee. However, the process faces some problems: it is time consuming and hence costly, and its outcomes may be inconsistent because it is carried out manually and empirically without employing systematic analytical tools, in particular spatial analysis tools and statistical/mathematical techniques. A solution to these problems can be the employment of mass appraisal land valuation methods using automated valuation models (AVM) based on international standards. In this context, this paper presents a spatially based linear hedonic price model which has been developed and tested in a case study land consolidation area in Cyprus. Results showed that the AVM is capable of producing land values that are acceptable in terms of accuracy and reliability and of reducing the time, and hence the cost, required by around 80%.
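
    The skeleton of a linear hedonic price model is an ordinary least-squares regression of parcel value on structural and locational attributes; the variables, coefficients and data below are synthetic placeholders for whatever the case-study AVM actually used.

```python
# Ordinary least-squares hedonic regression on synthetic parcel attributes.
import numpy as np

rng = np.random.default_rng(21)
n = 500
area = rng.uniform(500, 20000, n)               # parcel area (m2), assumed
road_dist = rng.uniform(0, 2000, n)             # distance to nearest road (m), assumed
slope = rng.uniform(0, 30, n)                   # slope (%), assumed
value = 5.0 * area - 20.0 * road_dist - 300.0 * slope + rng.normal(0, 5000, n)

X = np.column_stack([np.ones(n), area, road_dist, slope])
coef, *_ = np.linalg.lstsq(X, value, rcond=None)    # intercept + hedonic coefficients
print("intercept and hedonic coefficients:", coef.round(2))
```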

  18. Modeling flash floods in ungauged mountain catchments of China: A decision tree learning approach for parameter regionalization

    Science.gov (United States)

    Ragettli, S.; Zhou, J.; Wang, H.; Liu, C.; Guo, L.

    2017-12-01

    Flash floods in small mountain catchments are one of the most frequent causes of loss of life and property from natural hazards in China. Hydrological models can be a useful tool for the anticipation of these events and the issuing of timely warnings. One of the main challenges of setting up such a system is finding appropriate model parameter values for ungauged catchments. Previous studies have shown that the transfer of parameter sets from hydrologically similar gauged catchments is one of the best performing regionalization methods. However, a remaining key issue is the identification of suitable descriptors of similarity. In this study, we use decision tree learning to explore parameter set transferability in the full space of catchment descriptors. For this purpose, a semi-distributed rainfall-runoff model is set up for 35 catchments in ten Chinese provinces. Hourly runoff data from in total 858 storm events are used to calibrate the model and to evaluate the performance of parameter set transfers between catchments. We then present a novel technique that uses the splitting rules of classification and regression trees (CART) for finding suitable donor catchments for ungauged target catchments. The ability of the model to detect flood events in assumed ungauged catchments is evaluated in a series of leave-one-out tests. We show that CART analysis increases the probability of detection of 10-year flood events in comparison to a conventional measure of physiographic-climatic similarity by up to 20%. Decision tree learning can outperform other regionalization approaches because it generates rules that optimally consider spatial proximity and physical similarity. Spatial proximity can be used as a selection criterion but is skipped when no similar gauged catchments are in the vicinity. We conclude that the CART regionalization concept is particularly suitable for implementation in sparsely gauged and topographically complex environments where a proximity-based

  19. [Early warning for various internal faults of GIS based on ultraviolet spectroscopy].

    Science.gov (United States)

    Zhao, Yu; Wang, Xian-pei; Hu, Hong-hong; Dai, Dang-dang; Long, Jia-chuan; Tian, Meng; Zhu, Guo-wei; Huang, Yun-guang

    2015-02-01

    As the basis of accurate diagnosis, fault early warning for gas insulated switchgear (GIS) focuses on timeliness and applicability. It would be significant to research a method of unified early warning for partial discharge (PD) and overheating faults in GIS. In the present paper, SO2 is proposed as the common and typical by-product. Unified monitoring can be achieved through ultraviolet (UV) spectroscopy detection of SO2. The derivative method and Savitzky-Golay filtering are employed for baseline correction and smoothing. The wavelength range of 290-310 nm is selected for quantitative detection of SO2. With the UV method, the spectral interference of SF6 and other complex by-products, e.g., SOF2 and SO2F2, can be avoided and the features of trace SO2 in GIS can be extracted. The detection system features a compact structure, low maintenance and satisfactory suitability for field surveillance. By conducting SF6 decomposition experiments, including two types of PD faults and overheating faults between 200 and 400 °C, the feasibility of the proposed UV method has been verified. Fourier transform infrared spectroscopy and gas chromatography methods can be used for subsequent fault diagnosis. The different decomposition features of the two kinds of faults are confirmed and the diagnosis strategy is briefly analyzed. The main by-products under PD are SOF2 and SO2F2, and the generated SO2 is significantly less than SOF2. More carbon-containing by-products are generated when PD involves epoxy. By contrast, when the heater material is stainless steel, SF6 decomposes at about 300 °C and the main by-products in overheating faults are SO2 and SO2F2. When heated over 350 °C, SO2 is generated much faster. The SO2 content increases steadily as long as the GIS fault persists. The fault types can be preliminarily identified based on the generation features of SO2.

  20. [Wetland landscape pattern change based on GIS and RS: a review].

    Science.gov (United States)

    Kong, Fan-Ting; Xi, Min; Li, Yue; Kong, Fan-Long; Chen, Wan

    2013-04-01

    Wetland is an ecological landscape with some of the richest biodiversity in nature; it has a unique ecological structure and function and contains abundant natural resources that provide a material guarantee for human life and development. The wetland landscape pattern is the comprehensive result of various ecological processes and has become a hot issue in wetland ecological research. At present, the combination of geographic information system (GIS) and remote sensing (RS) technologies is an important way to study wetland landscape pattern change. This paper reviewed the research progress in wetland landscape change based on GIS and RS from the aspects of research methods for wetland landscape pattern, wetland landscape pattern indices, and driving forces of wetland landscape pattern evolution, and discussed the applications of the combination of GIS and RS in monitoring wetland landscape pattern change, the selection of wetland landscape pattern indices, and the driving mechanisms of the combined action of humans and nature. Some deficiencies in the current studies were pointed out, and directions for future studies were proposed.