WorldWideScience

Sample records for agent-based classification

  1. A strategy learning model for autonomous agents based on classification

    Directory of Open Access Journals (Sweden)

    Śnieżyński Bartłomiej

    2015-09-01

    Full Text Available In this paper we propose a strategy learning model for autonomous agents based on classification. In the literature, the most commonly used learning method in agent-based systems is reinforcement learning. In our opinion, classification can be considered a good alternative. This type of supervised learning can be used to generate a classifier that allows the agent to choose an appropriate action for execution. Experimental results show that this model can be successfully applied for strategy generation even if rewards are delayed. We compare the efficiency of the proposed model and reinforcement learning using the farmer-pest domain and configurations of various complexity. In complex environments, supervised learning can improve the performance of agents much faster than reinforcement learning. If an appropriate knowledge representation is used, the learned knowledge may be analyzed by humans, which allows tracking of the learning process.
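
    The idea admits a compact illustration. Below is a minimal sketch, not the paper's implementation: the agent logs state-action pairs that led to positive reward and trains a classifier to pick actions. The generic state features, the reward > 0 filter, and the decision tree are all assumptions; a tree is used here because its rules stay human-readable, echoing the abstract's point about analyzable knowledge.

```python
# Minimal sketch (not the paper's code): an agent that learns its strategy by
# training a classifier on its own experience instead of value estimates.
from sklearn.tree import DecisionTreeClassifier

class ClassifierAgent:
    def __init__(self):
        self.examples = []            # (state_features, action) pairs
        self.model = None

    def record(self, state, action, reward):
        # Crude supervised credit assignment: keep only rewarded actions.
        if reward > 0:
            self.examples.append((state, action))

    def train(self):
        X = [s for s, _ in self.examples]
        y = [a for _, a in self.examples]
        self.model = DecisionTreeClassifier().fit(X, y)

    def act(self, state, fallback_action):
        if self.model is None:        # explore until enough data is gathered
            return fallback_action
        return self.model.predict([state])[0]

agent = ClassifierAgent()
agent.record([1, 0], "spray", 1)      # toy experience in place of the
agent.record([0, 1], "harvest", 1)    # paper's farmer-pest episodes
agent.train()
print(agent.act([1, 0], fallback_action="wait"))   # -> "spray"
```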

  2. Agent Collaborative Target Localization and Classification in Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Sheng Wang

    2007-07-01

    Full Text Available Wireless sensor networks (WSNs) are autonomous networks that have been frequently deployed to collaboratively perform target localization and classification tasks. Their autonomous and collaborative features resemble the characteristics of agents. Such similarities inspire the development of a heterogeneous agent architecture for WSN in this paper. The proposed agent architecture views the WSN as a multi-agent system, and mobile agents are employed to reduce in-network communication. According to the architecture, an energy-based acoustic localization algorithm is proposed. In localization, the estimate of target location is obtained by steepest descent search. The search algorithm adapts to measurement environments by dynamically adjusting its termination condition. With the agent architecture, target classification is accomplished by distributed support vector machine (SVM). Mobile agents are employed for feature extraction and distributed SVM learning to reduce communication load. Desirable learning performance is guaranteed by combining support vectors and convex hull vectors. Fusion algorithms are designed to merge SVM classification decisions made from various modalities. Real world experiments with MICAz sensor nodes are conducted for vehicle localization and classification. Experimental results show the proposed agent architecture remarkably facilitates WSN designs and algorithm implementation. The localization and classification algorithms also prove to be accurate and energy efficient.

  3. Odor Classification using Agent Technology

    Directory of Open Access Journals (Sweden)

    Sigeru OMATU

    2014-03-01

    Full Text Available In order to measure and classify odors, a Quartz Crystal Microbalance (QCM) can be used. In the present study, seven QCM sensors and three different odors are used. The system has been developed as a virtual organization of agents using an agent platform called PANGEA (Platform for Automatic coNstruction of orGanizations of intElligent Agents), a platform for developing open multi-agent systems, specifically those including organizational aspects. The main reason for the use of agents is the scalability of the platform, i.e. the way in which it models the services. The system models functionalities as services inside the agents, or as services compliant with a Service Oriented Approach (SOA) architecture using Web Services. In this way, the odor classification system can be adapted with new algorithms, tools and classification techniques.

  4. Multi-Agent Information Classification Using Dynamic Acquaintance Lists.

    Science.gov (United States)

    Mukhopadhyay, Snehasis; Peng, Shengquan; Raje, Rajeev; Palakal, Mathew; Mostafa, Javed

    2003-01-01

    Discussion of automated information services focuses on information classification and collaborative agents, i.e. intelligent computer programs. Highlights include multi-agent systems; distributed artificial intelligence; thesauri; document representation and classification; agent modeling; acquaintances, or remote agents discovered through…

  5. Multi-agent Negotiation Mechanisms for Statistical Target Classification in Wireless Multimedia Sensor Networks

    Science.gov (United States)

    Wang, Xue; Bi, Dao-wei; Ding, Liang; Wang, Sheng

    2007-01-01

    The recent availability of low cost and miniaturized hardware has allowed wireless sensor networks (WSNs) to retrieve audio and video data in real world applications, which has fostered the development of wireless multimedia sensor networks (WMSNs). Resource constraints and challenging multimedia data volume make development of efficient algorithms to perform in-network processing of multimedia contents imperative. This paper proposes solving problems in the domain of WMSNs from the perspective of multi-agent systems. The multi-agent framework enables flexible network configuration and efficient collaborative in-network processing. The focus is placed on target classification in WMSNs where audio information is retrieved by microphones. To deal with the uncertainties related to audio information retrieval, the statistical approaches of power spectral density estimates, principal component analysis and Gaussian process classification are employed. A multi-agent negotiation mechanism is specially developed to efficiently utilize limited resources and simultaneously enhance classification accuracy and reliability. The negotiation is composed of two phases, where an auction based approach is first exploited to allocate the classification task among the agents and then individual agent decisions are combined by the committee decision mechanism. Simulation experiments with real world data are conducted and the results show that the proposed statistical approaches and negotiation mechanism not only reduce memory and computation requirements in WMSNs but also significantly enhance classification accuracy and reliability. PMID:28903223
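
    A toy sketch of the two-phase negotiation may clarify the flow: an auction first allocates the classification task to the best-bidding agents, then a committee combines their decisions. The bid rule (residual energy times signal quality) and the confidence-weighted vote are illustrative assumptions, not the paper's exact protocol.

```python
# Illustrative sketch of the two-phase negotiation described above.

def auction(agents, k):
    """Phase 1: each agent bids (here: residual energy x signal quality);
    the k best-bidding agents win the classification task."""
    bids = {a["id"]: a["energy"] * a["snr"] for a in agents}
    winners = sorted(bids, key=bids.get, reverse=True)[:k]
    return [a for a in agents if a["id"] in winners]

def committee_decision(votes):
    """Phase 2: combine individual decisions, weighting each vote by the
    agent's reported confidence."""
    scores = {}
    for label, confidence in votes:
        scores[label] = scores.get(label, 0.0) + confidence
    return max(scores, key=scores.get)

agents = [{"id": 1, "energy": 0.9, "snr": 2.0},
          {"id": 2, "energy": 0.4, "snr": 3.0},
          {"id": 3, "energy": 0.8, "snr": 1.0}]
print([a["id"] for a in auction(agents, k=2)])                    # -> [1, 2]
print(committee_decision([("car", 0.7), ("truck", 0.4), ("car", 0.6)]))  # car
```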

  6. From fault classification to fault tolerance for multi-agent systems

    CERN Document Server

    Potiron, Katia; Taillibert, Patrick

    2013-01-01

    Faults are a concern for Multi-Agent Systems (MAS) designers, especially if the MAS are built for industrial or military use because there must be some guarantee of dependability. Some fault classification exists for classical systems, and is used to define faults. When dependability is at stake, such fault classification may be used from the beginning of the system's conception to define fault classes and specify which types of faults are expected. Thus, one may want to use fault classification for MAS; however, From Fault Classification to Fault Tolerance for Multi-Agent Systems argues that

  8. Review of therapeutic agents for burns pruritus and protocols for management in adult and paediatric patients using the GRADE classification

    Directory of Open Access Journals (Sweden)

    Goutos Ioannis

    2010-10-01

    Full Text Available To review the current evidence on therapeutic agents for burns pruritus and use the Grading of Recommendations, Assessment, Development and Evaluation (GRADE) classification to propose therapeutic protocols for adult and paediatric patients. All published interventions for burns pruritus were analysed by a multidisciplinary panel of burns specialists following the GRADE classification to rate individual agents. Following the collation of results and panel discussion, consensus protocols are presented. Twenty-three studies appraising therapeutic agents in the burns literature were identified. The majority of these studies (16 out of 23) are of an observational nature, making an evidence-based approach to defining optimal therapy not feasible. Our multidisciplinary approach employing the GRADE classification recommends the use of antihistamines (cetirizine and cimetidine) and gabapentin as the first-line pharmacological agents for both adult and paediatric patients. Ondansetron and loratadine are the second-line medications in our protocols. We additionally recommend a variety of non-pharmacological adjuncts for the perusal of clinicians in order to maximise symptomatic relief in patients troubled with postburn itch. Most studies in the subject area lack sufficient statistical power to dictate a 'gold standard' treatment agent for burns itch. We encourage clinicians to employ the GRADE system in order to delineate the most appropriate therapeutic approach for burns pruritus until further research elucidates the most efficacious interventions. This widely adopted classification empowers burns clinicians to tailor therapeutic regimens according to current evidence, patient values, risks and resource considerations in different medical environments.

  9. N-grams Based Supervised Machine Learning Model for Mobile Agent Platform Protection against Unknown Malicious Mobile Agents

    Directory of Open Access Journals (Sweden)

    Pallavi Bagga

    2017-12-01

    Full Text Available For many years, the detection of unknown malicious mobile agents before they invade the Mobile Agent Platform has been the subject of much challenging activity. The ever-growing threat of malicious agents calls for techniques for automated malicious agent detection. In this context, machine learning (ML) methods are acknowledged to be more effective than signature-based and behaviour-based detection methods. Therefore, in this paper, the prime contribution is the detection of unknown malicious mobile agents based on n-gram features and a supervised ML approach, which has not been done so far in the sphere of Mobile Agent System (MAS) security. To carry out the study, n-grams ranging from 3 to 9 are extracted from a dataset containing 40 malicious and 40 non-malicious mobile agents. Subsequently, classification is performed using different classifiers. A nested 5-fold cross-validation scheme is employed in order to avoid bias in the selection of the classifiers' optimal parameters. The observations of extensive experiments demonstrate that the work done in this paper is suitable for the task of unknown malicious mobile agent detection in a Mobile Agent Environment, and also adds ML to the interest list of researchers dealing with MAS security.
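
    As a rough illustration of this pipeline, the sketch below extracts character n-grams (n = 3 to 9) and wraps a grid search inside an outer cross-validation loop, which is the nested scheme the abstract describes. The toy strings, the SVC classifier, and the shrunken fold counts are assumptions; the paper uses a nested 5-fold scheme on real serialized mobile agents.

```python
# Hedged sketch: character n-gram features (n = 3..9) with a grid search
# nested inside an outer cross-validation loop for unbiased evaluation.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.model_selection import GridSearchCV, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

# Toy stand-ins for serialized mobile-agent code.
agents = ["mov eax ebx call dump", "push eax ret", "call send call recv",
          "mov ecx eax jmp loop", "push ebp mov esp", "call dump call dump"]
labels = [1, 0, 0, 1, 0, 1]          # 1 = malicious, 0 = non-malicious

pipeline = make_pipeline(
    CountVectorizer(analyzer="char", ngram_range=(3, 9)),   # n-gram features
    SVC(),
)
inner = GridSearchCV(pipeline, {"svc__C": [0.1, 1, 10]}, cv=2)  # tune params
scores = cross_val_score(inner, agents, labels, cv=3)  # outer loop estimate
print(scores.mean())
```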

  10. The efficiency of the RULES-4 classification learning algorithm in predicting the density of agents

    Directory of Open Access Journals (Sweden)

    Ziad Salem

    2014-12-01

    Full Text Available Learning is the act of obtaining new or modifying existing knowledge, behaviours, skills or preferences. The ability to learn is found in humans, other organisms and some machines. Learning is always based on some sort of observations or data, such as examples, direct experience or instruction. This paper presents a classification algorithm to learn the density of agents in an arena based on the measurements of six proximity sensors of a combined actuator-sensor unit (CASU). Rules induced by the learning algorithm are presented; the algorithm was trained with datasets based on the CASU's sensor data streams collected during a number of experiments with "Bristlebots" (agents) in the arena (environment). It was found that a set of rules generated by the learning algorithm is able to predict the number of bristlebots in the arena based on the CASU's sensor readings with satisfactory accuracy.

  11. Improvement of Bioactive Compound Classification through Integration of Orthogonal Cell-Based Biosensing Methods

    Directory of Open Access Journals (Sweden)

    Goran N. Jovanovic

    2007-01-01

    Full Text Available Lack of specificity for different classes of chemical and biological agents, and false positives and negatives, can limit the range of applications for cell-based biosensors. This study suggests that the integration of results from algal cells (Mesotaenium caldariorum) and fish chromatophores (Betta splendens) improves classification efficiency and detection reliability. Cells were challenged with paraquat, mercuric chloride, sodium arsenite and clonidine. The two detection systems were independently investigated for classification of the toxin set by performing discriminant analysis. The algal system correctly classified 72% of the bioactive compounds, whereas the fish chromatophore system correctly classified 68%. The combined classification efficiency was 95%. The algal sensor readout is based on fluorescence measurements of changes in the energy-producing pathways of photosynthetic cells, whereas the response from fish chromatophores was quantified using optical density. Change in optical density reflects interference with the functioning of cellular signal transduction networks. Thus, algal cells and fish chromatophores respond to the challenge agents through sufficiently different mechanisms of action to be considered orthogonal.

  12. Intelligent Agent-Based Intrusion Detection System Using Enhanced Multiclass SVM

    Science.gov (United States)

    Ganapathy, S.; Yogesh, P.; Kannan, A.

    2012-01-01

    Intrusion detection systems were used in the past along with various techniques to detect intrusions in networks effectively. However, most of these systems are able to detect intruders only with a high false alarm rate. In this paper, we propose a new intelligent agent-based intrusion detection model for mobile ad hoc networks using a combination of attribute selection, outlier detection, and enhanced multiclass SVM classification methods. For this purpose, an effective preprocessing technique is proposed that improves the detection accuracy and reduces the processing time. Moreover, two new algorithms, namely an Intelligent Agent Weighted Distance Outlier Detection algorithm and an Intelligent Agent-based Enhanced Multiclass Support Vector Machine algorithm, are proposed for detecting intruders in a distributed database environment that uses intelligent agents for trust management and coordination in transaction processing. The experimental results of the proposed model show that this system detects anomalies with a low false alarm rate and a high detection rate when tested with the KDD Cup 99 data set. PMID:23056036
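
    The overall pipeline (attribute selection, then outlier filtering, then multiclass SVM) can be sketched with off-the-shelf stand-ins. The paper's own weighted-distance outlier detection and enhanced SVM are specific to the study, so LocalOutlierFactor and a standard one-vs-one SVC are substituted here, and the data is synthetic.

```python
# Sketch of the detection pipeline with off-the-shelf stand-ins.
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.neighbors import LocalOutlierFactor
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 10))            # stand-in for KDD-style records
y = rng.integers(0, 3, size=300)          # three traffic classes

X_sel = SelectKBest(f_classif, k=5).fit_transform(X, y)   # attribute selection
keep = LocalOutlierFactor().fit_predict(X_sel) == 1       # drop outliers
clf = SVC(decision_function_shape="ovo").fit(X_sel[keep], y[keep])
print(clf.predict(X_sel[:3]))
```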

  13. Patterns of Use of an Agent-Based Model and a System Dynamics Model: The Application of Patterns of Use and the Impacts on Learning Outcomes

    Science.gov (United States)

    Thompson, Kate; Reimann, Peter

    2010-01-01

    A classification system that was developed for the use of agent-based models was applied to strategies used by school-aged students to interrogate an agent-based model and a system dynamics model. These were compared, and relationships between learning outcomes and the strategies used were also analysed. It was found that the classification system…

  14. A New Classification Approach Based on Multiple Classification Rules

    OpenAIRE

    Zhongmei Zhou

    2014-01-01

    A good classifier can correctly predict new data for which the class label is unknown, so it is important to construct a high accuracy classifier. Hence, classification techniques are much useful in ubiquitous computing. Associative classification achieves higher classification accuracy than some traditional rule-based classification approaches. However, the approach also has two major deficiencies. First, it generates a very large number of association classification rules, especially when t...

  15. Agent Persuasion Mechanism of Acquaintance

    Science.gov (United States)

    Jinghua, Wu; Wenguang, Lu; Hailiang, Meng

    Agent persuasion can improve negotiation efficiency in dynamic environments owing to properties such as initiative and autonomy, and it is strongly affected by acquaintance. A classification of acquaintance in agent persuasion is illustrated, as is the agent persuasion model of acquaintance. The concept of the agent persuasion degree of acquaintance is then given. Finally, the relevant interaction mechanism is elaborated.

  16. A Multiagent-based Intrusion Detection System with the Support of Multi-Class Supervised Classification

    Science.gov (United States)

    Shyu, Mei-Ling; Sainani, Varsha

    The increasing number of network-security-related incidents has made it necessary for organizations to actively protect their sensitive data with network intrusion detection systems (IDSs). IDSs are expected to analyze a large volume of data while not placing a significantly added load on the monitoring systems and networks. This requires good data mining strategies, which take less time and give accurate results. In this study, a novel data mining assisted multiagent-based intrusion detection system (DMAS-IDS) is proposed, particularly with the support of multiclass supervised classification. These agents can detect and take predefined actions against malicious activities, and data mining techniques can help detect them. Our proposed DMAS-IDS shows superior performance compared to central sniffing IDS techniques, and saves network resources compared to other distributed IDSs with mobile agents that activate too many sniffers, causing bottlenecks in the network. This is one of the major motivations to use a distributed model based on a multiagent platform along with a supervised classification technique.

  17. Agent Programming Languages and Logics in Agent-Based Simulation

    DEFF Research Database (Denmark)

    Larsen, John

    2018-01-01

    and social behavior, and work on verification. Agent-based simulation is an approach for simulation that also uses the notion of agents. Although agent programming languages and logics are much less used in agent-based simulation, there are successful examples with agents designed according to the BDI...

  18. Towards a framework for agent-based image analysis of remote-sensing data.

    Science.gov (United States)

    Hofmann, Peter; Lettmayer, Paul; Blaschke, Thomas; Belgiu, Mariana; Wegenkittl, Stefan; Graf, Roland; Lampoltshammer, Thomas Josef; Andrejchenko, Vera

    2015-04-03

    Object-based image analysis (OBIA) as a paradigm for analysing remotely sensed image data has in many cases led to spatially and thematically improved classification results in comparison to pixel-based approaches. Nevertheless, robust and transferable object-based solutions for automated image analysis capable of analysing sets of images or even large image archives without any human interaction are still rare. A major reason for this lack of robustness and transferability is the high complexity of image contents: Especially in very high resolution (VHR) remote-sensing data with varying imaging conditions or sensor characteristics, the variability of the objects' properties in these varying images is hardly predictable. The work described in this article builds on so-called rule sets. While earlier work has demonstrated that OBIA rule sets bear a high potential of transferability, they need to be adapted manually, or classification results need to be adjusted manually in a post-processing step. In order to automate these adaptation and adjustment procedures, we investigate the coupling, extension and integration of OBIA with the agent-based paradigm, which is exhaustively investigated in software engineering. The aims of such integration are (a) autonomously adapting rule sets and (b) image objects that can adopt and adjust themselves according to different imaging conditions and sensor characteristics. This article focuses on self-adapting image objects and therefore introduces a framework for agent-based image analysis (ABIA).

  19. A CSP-Based Agent Modeling Framework for the Cougaar Agent-Based Architecture

    Science.gov (United States)

    Gracanin, Denis; Singh, H. Lally; Eltoweissy, Mohamed; Hinchey, Michael G.; Bohner, Shawn A.

    2005-01-01

    Cognitive Agent Architecture (Cougaar) is a Java-based architecture for large-scale distributed agent-based applications. A Cougaar agent is an autonomous software entity with behaviors that represent a real-world entity (e.g., a business process). A Cougaar-based Model Driven Architecture approach, currently under development, uses a description of a system's functionality (requirements) to automatically implement the system in Cougaar. The Communicating Sequential Processes (CSP) formalism is used for the formal validation of the generated system. Two main agent components, a blackboard and a plugin, are modeled as CSP processes. A set of channels represents communications between the blackboard and individual plugins. The blackboard is represented as a CSP process that communicates with every agent in the collection. The developed CSP-based Cougaar modeling framework provides a starting point for a more complete formal verification of the automatically generated Cougaar code. Currently it is used to verify the behavior of an individual agent in terms of CSP properties and to analyze the corresponding Cougaar society.

  20. Online Learning for Classification of Alzheimer Disease based on Cortical Thickness and Hippocampal Shape Analysis.

    Science.gov (United States)

    Lee, Ga-Young; Kim, Jeonghun; Kim, Ju Han; Kim, Kiwoong; Seong, Joon-Kyung

    2014-01-01

    Mobile healthcare applications are becoming a growing trend, and the prevalence of dementia in modern society is also steadily growing. Among the degenerative brain diseases that cause dementia, Alzheimer disease (AD) is the most common. The purpose of this study was to identify AD patients using magnetic resonance imaging in the mobile environment. We propose an incremental classification for mobile healthcare systems. Our classification method is based on incremental learning for AD diagnosis and AD prediction using cortical thickness data and hippocampal shape. We constructed a classifier based on principal component analysis and linear discriminant analysis. We performed initial learning and mobile subject classification. Initial learning is the group learning part on our server. Our smartphone agent implements the mobile classification and shows various results. With use of cortical thickness data analysis alone, the discrimination accuracy was 87.33% (sensitivity 96.49% and specificity 64.33%). When cortical thickness data and hippocampal shape were analyzed together, the achieved accuracy was 87.52% (sensitivity 96.79% and specificity 63.24%). In this paper, we presented a classification method based on online learning for AD diagnosis employing both cortical thickness data and hippocampal shape analysis data. Our method was implemented on smartphone devices and discriminated AD patients from the normal group.
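
    A minimal sketch of the PCA-plus-LDA classifier follows. The synthetic features stand in for the cortical thickness and hippocampal shape measurements, and the train/test split stands in for the server-side initial learning and the mobile classification step; dimensions and labels are assumptions.

```python
# Minimal sketch of a PCA + LDA classification pipeline (illustrative data).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 40))               # per-subject morphometric features
y = (X[:, :5].sum(axis=1) > 0).astype(int)   # toy label: 0 = normal, 1 = AD

clf = make_pipeline(PCA(n_components=10), LinearDiscriminantAnalysis())
clf.fit(X[:80], y[:80])                      # "initial learning" on the server
print(clf.score(X[80:], y[80:]))             # classify new mobile subjects
```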

  1. Pitch Based Sound Classification

    DEFF Research Database (Denmark)

    Nielsen, Andreas Brinch; Hansen, Lars Kai; Kjems, U

    2006-01-01

    A sound classification model is presented that can classify signals into music, noise and speech. The model extracts the pitch of the signal using the harmonic product spectrum. Based on the pitch estimate and a pitch error measure, features are created and used in a probabilistic model with a softmax output function. Both linear and quadratic inputs are used. The model is trained on 2 hours of sound and tested on publicly available data. A test classification error below 0.05 with 1 s classification windows is achieved. Furthermore, it is shown that linear input performs as well as quadratic, and that even though classification gets marginally better, not much is achieved by increasing the window size beyond 1 s.
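
    The harmonic product spectrum step can be shown in a few lines: the magnitude spectrum is decimated by factors 2..R and multiplied element-wise, so energy accumulates at the fundamental frequency. This is a generic HPS sketch, not the authors' code; the sample rate and test tone are assumptions.

```python
# Generic harmonic product spectrum (HPS) pitch estimator.
import numpy as np

def hps_pitch(signal, sr, n_harmonics=4):
    spectrum = np.abs(np.fft.rfft(signal))
    hps = spectrum.copy()
    for r in range(2, n_harmonics + 1):
        decimated = spectrum[::r]             # spectrum sampled at r * f
        hps[:len(decimated)] *= decimated     # harmonics reinforce f0
    peak = int(np.argmax(hps))
    return peak * sr / len(signal)            # bin index -> Hz

sr = 8000
t = np.arange(sr) / sr
# Test tone: 220 Hz fundamental with three harmonics of decreasing amplitude.
tone = sum(np.sin(2 * np.pi * 220 * h * t) / h for h in range(1, 5))
print(hps_pitch(tone, sr))                    # -> 220.0
```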

  2. Cluster Based Text Classification Model

    DEFF Research Database (Denmark)

    Nizamani, Sarwat; Memon, Nasrullah; Wiil, Uffe Kock

    2011-01-01

    We propose a cluster-based classification model for suspicious email detection and other text classification tasks. The text classification tasks comprise many training examples that require a complex classification model. Using clusters for classification makes the model simpler and increases the accuracy at the same time. The test example is classified using a simpler and smaller model. The training examples in a particular cluster share a common vocabulary. At the time of clustering, we do not take into account the labels of the training examples. After the clusters have been created, the classifier is trained on each cluster, having reduced dimensionality and fewer examples. The experimental results show that the proposed model outperforms the existing classification models for the task of suspicious email detection and topic categorization on the Reuters-21578 and 20 Newsgroups datasets.
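
    In sketch form, the cluster-then-classify idea reduces to unlabeled clustering followed by one small per-cluster classifier; a test document is routed to its nearest cluster's model. KMeans and naive Bayes below are illustrative stand-ins, not necessarily the authors' components, and the toy emails are invented.

```python
# Sketch: cluster without labels, then train one small classifier per cluster.
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB

docs = ["transfer funds now", "meeting at noon", "wire money urgent",
        "lunch tomorrow?", "send account password", "project status update"]
labels = [1, 0, 1, 0, 1, 0]          # 1 = suspicious, 0 = normal

vec = TfidfVectorizer()
X = vec.fit_transform(docs)
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)

models = {}
for c in set(km.labels_):
    idx = [i for i, l in enumerate(km.labels_) if l == c]
    models[c] = MultinomialNB().fit(X[idx], [labels[i] for i in idx])

test = vec.transform(["urgent: wire funds"])
print(models[km.predict(test)[0]].predict(test))   # routed to its cluster
```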

  3. Exploring complex dynamics in multi agent-based intelligent systems: Theoretical and experimental approaches using the Multi Agent-based Behavioral Economic Landscape (MABEL) model

    Science.gov (United States)

    Alexandridis, Konstantinos T.

    This dissertation adopts a holistic and detailed approach to modeling spatially explicit agent-based artificial intelligent systems, using the Multi Agent-based Behavioral Economic Landscape (MABEL) model. The research questions that it addresses stem from the need to understand and analyze the real-world patterns and dynamics of land use change from a coupled human-environmental systems perspective. Describes the systemic, mathematical, statistical, socio-economic and spatial dynamics of the MABEL modeling framework, and provides a wide array of cross-disciplinary modeling applications within the research, decision-making and policy domains. Establishes the symbolic properties of the MABEL model as a Markov decision process, analyzes the decision-theoretic utility and optimization attributes of agents towards comprising statistically and spatially optimal policies and actions, and explores the probabilogic character of the agents' decision-making and inference mechanisms via the use of Bayesian belief and decision networks. Develops and describes a Monte Carlo methodology for experimental replications of agents' decisions regarding complex spatial parcel acquisition and learning. Recognizes the gap in spatially explicit accuracy assessment techniques for complex spatial models, and proposes an ensemble of statistical tools designed to address this problem. Advanced information assessment techniques such as the Receiver-Operator Characteristic curve, the impurity entropy and Gini functions, and the Bayesian classification functions are proposed. The theoretical foundation for modular Bayesian inference in spatially-explicit multi-agent artificial intelligent systems, and the ensembles of cognitive and scenario assessment modular tools built for the MABEL model, are provided. Emphasizes modularity and robustness as valuable qualitative modeling attributes, and examines the role of robust intelligent modeling as a tool for improving policy-decisions related to land

  4. Russian and Foreign Experience of Integration of Agent-Based Models and Geographic Information Systems

    Directory of Open Access Journals (Sweden)

    Konstantin Anatol’evich Gulin

    2016-11-01

    Full Text Available The article provides an overview of the mechanisms of integration of agent-based models and GIS technology developed by Russian and foreign researchers. The basic framework of the article is based on critical analysis of domestic and foreign literature (monographs, scientific articles). The study is based on the application of universal scientific research methods: system approach, analysis and synthesis, classification, systematization and grouping, generalization and comparison. The article presents theoretical and methodological bases of integration of agent-based models and geographic information systems. The concept and essence of agent-based models are explained; their main advantages (compared to other modeling methods) are identified. The paper characterizes the operating environment of agents as a key concept in the theory of agent-based modeling. It is shown that geographic information systems have a wide range of information resources for calculations, searching, modeling of the real world in various aspects, acting as an effective tool for displaying the agents' operating environment and allowing to bring the model as close as possible to the real conditions. The authors also focus on a wide range of possibilities for various researches in different spatial and temporal contexts. Comparative analysis of platforms supporting the integration of agent-based models and geographic information systems has been carried out. The authors give examples of complex socio-economic models: the model of a creative city, a humanitarian assistance model. In the absence of standards for describing research results, the authors focus on the models' elements such as the characteristics of the agents and their operating environment, agents' behavior, rules of interaction between the agents and the external environment. The paper describes the possibilities and prospects of implementing these models.

  5. Dissimilarity-based classification of anatomical tree structures

    DEFF Research Database (Denmark)

    Sørensen, Lauge; Lo, Pechin Chien Pau; Dirksen, Asger

    2011-01-01

    A novel method for classification of abnormality in anatomical tree structures is presented. A tree is classified based on direct comparisons with other trees in a dissimilarity-based classification scheme. The pair-wise dissimilarity measure between two trees is based on a linear assignment betw...

  7. Cloud field classification based on textural features

    Science.gov (United States)

    Sengupta, Sailes Kumar

    1989-01-01

    An essential component in global climate research is accurate cloud cover and type determination. Of the two approaches to texture-based classification (statistical and structural), only the former is effective in the classification of natural scenes such as land, ocean, and atmosphere. In the statistical approach that was adopted, parameters characterizing the stochastic properties of the spatial distribution of grey levels in an image are estimated and then used as features for cloud classification. Two types of textural measures were used. One is based on the distribution of the grey level difference vector (GLDV), and the other on a set of textural features derived from the MaxMin cooccurrence matrix (MMCM). The GLDV method looks at the difference D of grey levels at pixels separated by a horizontal distance d and computes several statistics based on this distribution. These are then used as features in subsequent classification. The MaxMin textural features, on the other hand, are based on the MMCM, a matrix whose (I,J)th entry gives the relative frequency of occurrences of the grey level pair (I,J) that are consecutive and thresholded local extremes separated by a given pixel distance d. Textural measures are then computed based on this matrix in much the same manner as is done in texture computation using the grey level cooccurrence matrix. The database consists of 37 cloud field scenes from LANDSAT imagery using a near IR visible channel. The classification algorithm used is the well known Stepwise Discriminant Analysis. The overall accuracy was estimated by the percentage of correct classifications in each case. It turns out that both types of classifiers, at their best combination of features, and at any given spatial resolution, give approximately the same classification accuracy. A neural network based classifier with a feed forward architecture and a back propagation training algorithm is used to increase the classification accuracy, using these two classes
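
    The GLDV features are straightforward to compute; a sketch follows with a few of the usual statistics (mean, contrast, entropy, angular second moment). The abstract does not list the exact statistics used in the study, so the choice below is an assumption.

```python
# Sketch: grey-level difference vector (GLDV) texture features at horizontal
# displacement d, computed from the histogram of absolute grey differences.
import numpy as np

def gldv_features(img, d=1, levels=256):
    diffs = np.abs(img[:, d:].astype(int) - img[:, :-d].astype(int))
    hist = np.bincount(diffs.ravel(), minlength=levels).astype(float)
    p = hist / hist.sum()                      # P(D = k)
    k = np.arange(levels)
    mean = (k * p).sum()
    contrast = (k**2 * p).sum()
    entropy = -(p[p > 0] * np.log2(p[p > 0])).sum()
    asm = (p**2).sum()                         # angular second moment
    return mean, contrast, entropy, asm

img = (np.random.default_rng(0).random((64, 64)) * 255).astype(np.uint8)
print(gldv_features(img, d=1))
```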

  8. AN OBJECT-BASED METHOD FOR CHINESE LANDFORM TYPES CLASSIFICATION

    Directory of Open Access Journals (Sweden)

    H. Ding

    2016-06-01

    Full Text Available Landform classification is a necessary task for various fields of landscape and regional planning, for example landscape evaluation, erosion studies and hazard prediction. This study proposes an improved object-based classification for Chinese landform types using the factor importance analysis of random forest and the gray-level co-occurrence matrix (GLCM). In this research, based on a 1 km DEM of China, the combination of terrain factors extracted from the DEM is selected by correlation analysis and Sheffield's entropy method. A random forest classification tree is applied to evaluate the importance of the terrain factors, which are used as multi-scale segmentation thresholds. Then the GLCM is computed for the knowledge base of the classification. The classification result was checked using the 1:4,000,000 Chinese Geomorphological Map as reference. The overall classification accuracy of the proposed method is 5.7% higher than ISODATA unsupervised classification, and 15.7% higher than the traditional object-based classification method.
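
    Ranking terrain factors by random forest importance, as described above, can be sketched as follows; the factor names, the synthetic data, and the toy label rule are placeholders, not the study's inputs.

```python
# Sketch: rank terrain factors by random forest feature importance.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
factors = ["slope", "relief", "roughness", "curvature", "aspect"]
X = rng.normal(size=(500, len(factors)))       # terrain factors per DEM cell
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # toy landform label

rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
for name, imp in sorted(zip(factors, rf.feature_importances_),
                        key=lambda t: -t[1]):
    print(f"{name}: {imp:.3f}")                # slope and relief rank highest
```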

  9. Agent-based modeling and network dynamics

    CERN Document Server

    Namatame, Akira

    2016-01-01

    The book integrates agent-based modeling and network science. It is divided into three parts, namely foundations, primary dynamics on and of social networks, and applications. The book begins with the network origin of agent-based models, known as cellular automata, and introduces a number of classic models, such as Schelling's segregation model and Axelrod's spatial game. The essence of the foundation part is the network-based agent-based models, in which agents follow network-based decision rules. Under the influence of the substantial progress in network science in the late 1990s, these models have been extended from using lattices to using small-world networks, scale-free networks, etc. The book also shows that modern network science, mainly driven by game theorists and sociophysicists, has inspired agent-based social scientists to develop alternative formation algorithms, known as agent-based social networks. The book reviews a number of pioneering and representative models in this family. Upon the gi...

  10. Comparison Effectiveness of Pixel Based Classification and Object Based Classification Using High Resolution Image In Floristic Composition Mapping (Study Case: Gunung Tidar Magelang City)

    Science.gov (United States)

    Ardha Aryaguna, Prama; Danoedoro, Projo

    2016-11-01

    Developments in remote sensing analysis have followed developments in technology, especially in sensors and platforms. Many images now have high spatial and radiometric resolution, and therefore carry much more information. Analysis of vegetation objects, such as floristic composition, benefits greatly from these developments. Floristic composition can be interpreted using methods such as pixel-based classification and object-based classification. The problem for pixel-based methods on high-spatial-resolution images is the salt-and-pepper effect that appears in the classification result. The purpose of this research is to compare the effectiveness of pixel-based and object-based classification for floristic composition mapping on the high-resolution Worldview-2 image. The results show that pixel-based classification using a 5×5 majority kernel window gives the highest accuracy among the tested classifications: 73.32% on the Worldview-2 image radiometrically corrected to surface reflectance. For per-class accuracy, however, the object-based method is the best among the tested methods. From the effectiveness standpoint, pixel-based classification is more effective than object-based classification for vegetation composition mapping in the Tidar forest.

  11. Inventory classification based on decoupling points

    Directory of Open Access Journals (Sweden)

    Joakim Wikner

    2015-01-01

    Full Text Available The ideal state of continuous one-piece flow may never be achieved. Still the logistics manager can improve the flow by carefully positioning inventory to buffer against variations. Strategies such as lean, postponement, mass customization, and outsourcing all rely on strategic positioning of decoupling points to separate forecast-driven from customer-order-driven flows. Planning and scheduling of the flow are also based on classification of decoupling points as master scheduled or not. A comprehensive classification scheme for these types of decoupling points is introduced. The approach rests on identification of flows as being either demand based or supply based. The demand or supply is then combined with exogenous factors, classified as independent, or endogenous factors, classified as dependent. As a result, eight types of strategic as well as tactical decoupling points are identified resulting in a process-based framework for inventory classification that can be used for flow design.

  12. Sentiment classification technology based on Markov logic networks

    Science.gov (United States)

    He, Hui; Li, Zhigang; Yao, Chongchong; Zhang, Weizhe

    2016-07-01

    With diverse online media emerging, there is growing concern with the sentiment classification problem. At present, text sentiment classification mainly utilizes supervised machine learning methods, which exhibit a certain domain dependency. On the basis of Markov logic networks (MLNs), this study proposed a cross-domain multi-task text sentiment classification method rooted in transfer learning. Through many-to-one knowledge transfer, labeled text sentiment classification knowledge was successfully transferred into other domains, and the precision of the sentiment classification analysis in the text tendency domain was improved. The experimental results revealed the following: (1) the model based on an MLN demonstrated higher precision than the single individual learning plan model; (2) multi-task transfer learning based on Markov logic networks could acquire more knowledge than self-domain learning. The cross-domain text sentiment classification model could significantly improve the precision and efficiency of text sentiment classification.

  13. Mechanism-based drug exposure classification in pharmacoepidemiological studies

    NARCIS (Netherlands)

    Verdel, B.M.

    2010-01-01

    Mechanism-based classification of drug exposure in pharmacoepidemiological studies. In pharmacoepidemiology and pharmacovigilance, the relation between drug exposure and clinical outcomes is crucial. Exposure classification in pharmacoepidemiological studies is traditionally based on

  14. Assurance in Agent-Based Systems

    Energy Technology Data Exchange (ETDEWEB)

    Gilliom, Laura R.; Goldsmith, Steven Y.

    1999-05-10

    Our vision of the future of information systems is one that includes engineered collectives of software agents which are situated in an environment over years and which increasingly improve the performance of the overall system of which they are a part. At a minimum, the movement of agent and multi-agent technology into National Security applications, including their use in information assurance, is apparent today. The use of deliberative, autonomous agents in high-consequence/high-security applications will require a commensurate level of protection and confidence in the predictability of system-level behavior. At Sandia National Laboratories, we have defined and are addressing a research agenda that integrates the surety (safety, security, and reliability) into agent-based systems at a deep level. Surety is addressed at multiple levels: The integrity of individual agents must be protected by addressing potential failure modes and vulnerabilities to malevolent threats. Providing for the surety of the collective requires attention to communications surety issues and mechanisms for identifying and working with trusted collaborators. At the highest level, using agent-based collectives within a large-scale distributed system requires the development of principled design methods to deliver the desired emergent performance or surety characteristics. This position paper will outline the research directions underway at Sandia, will discuss relevant work being performed elsewhere, and will report progress to date toward assurance in agent-based systems.

  16. Agent-based modeling of sustainable behaviors

    CERN Document Server

    Sánchez-Maroño, Noelia; Fontenla-Romero, Oscar; Polhill, J; Craig, Tony; Bajo, Javier; Corchado, Juan

    2017-01-01

    Using the O.D.D. (Overview, Design concepts, Detail) protocol, this title explores the role of agent-based modeling in predicting the feasibility of various approaches to sustainability. The chapters incorporated in this volume consist of real case studies to illustrate the utility of agent-based modeling and complexity theory in discovering a path to more efficient and sustainable lifestyles. The topics covered within include: households' attitudes toward recycling, designing decision trees for representing sustainable behaviors, negotiation-based parking allocation, auction-based traffic signal control, and others. This selection of papers will be of interest to social scientists who wish to learn more about agent-based modeling as well as experts in the field of agent-based modeling.

  17. Agent-Based Optimization

    CERN Document Server

    Jędrzejowicz, Piotr; Kacprzyk, Janusz

    2013-01-01

    This volume presents a collection of original research works by leading specialists focusing on novel and promising approaches in which the multi-agent system paradigm is used to support, enhance or replace traditional approaches to solving difficult optimization problems. The editors have invited several well-known specialists to present their solutions, tools, and models falling under the common denominator of the agent-based optimization. The book consists of eight chapters covering examples of application of the multi-agent paradigm and respective customized tools to solve  difficult optimization problems arising in different areas such as machine learning, scheduling, transportation and, more generally, distributed and cooperative problem solving.

  18. Bargaining agents based system for automatic classification of potential allergens in recipes

    Directory of Open Access Journals (Sweden)

    José ALEMANY

    2016-11-01

    Full Text Available Automatic recipe recommendation that takes into account the dietary restrictions of users (such as allergies or intolerances) is a complex and open problem. Among its limitations are the lack of food databases correctly labeled with potential allergens and the absence of unified labeling by companies in the food sector. In the absence of an appropriate solution, people affected by food restrictions cannot use recommender systems, because these may recommend inappropriate recipes. To resolve this situation, in this article we propose a solution based on a collaborative multi-agent system that, using negotiation and machine learning techniques, is able to detect and label potential allergens in recipes. The proposed system is being employed in receteame.com, a recipe recommendation system that includes persuasive technologies (interactive technologies aimed at changing users' attitudes or behaviors through persuasion and social influence) and social information to improve the recommendations.

  19. Security Framework for Agent-Based Cloud Computing

    Directory of Open Access Journals (Sweden)

    K Venkateshwaran

    2015-06-01

    Full Text Available Agents can play a key role in bringing suitable cloud services to the customer based on their requirements. In agent-based cloud computing, the agent performs negotiation, coordination, cooperation and collaboration on behalf of the customer to make decisions in an efficient manner. However, agent-based cloud computing has some security issues: (a) addition of a malicious agent to the cloud environment, which could demolish the process by attacking other agents; (b) denial of service by creating flooding attacks on other involved agents; (c) misuse of exceptions in the agent interaction protocol, such as Not-Understood and Cancel_Meta, which may lead to terminating the connections of all the other agents participating in the negotiating services. This paper proposes algorithms to solve these issues and to ensure that there will be no intervention of any malicious activities during agent interaction.

  20. Deep learning for EEG-Based preference classification

    Science.gov (United States)

    Teo, Jason; Hou, Chew Lin; Mountstephens, James

    2017-10-01

    Electroencephalogram (EEG)-based emotion classification is rapidly becoming one of the most intensely studied areas of brain-computer interfacing (BCI). The ability to passively identify yet accurately correlate brainwaves with our immediate emotions opens up truly meaningful and previously unattainable human-computer interactions such as in forensic neuroscience, rehabilitative medicine, affective entertainment and neuro-marketing. One particularly useful yet rarely explored area of EEG-based emotion classification is preference recognition [1], which is simply the detection of like versus dislike. Within the limited investigations into preference classification, all reported studies were based on musically-induced stimuli except for a single study which used 2D images. The main objective of this study is to apply deep learning, which has been shown to produce state-of-the-art results in diverse hard problems such as computer vision, natural language processing and audio recognition, to 3D object preference classification over a larger group of test subjects. A cohort of 16 users was shown 60 bracelet-like objects as rotating visual stimuli on a computer display while their preferences and EEGs were recorded. After training a variety of machine learning approaches, which included deep neural networks, we then attempted to classify the users' preferences for the 3D visual stimuli based on their EEGs. Here, we show that deep learning outperforms a variety of other machine learning classifiers for this EEG-based preference classification task, particularly on a highly challenging dataset with large inter- and intra-subject variability.

  1. Knowledge-based approach to video content classification

    Science.gov (United States)

    Chen, Yu; Wong, Edward K.

    2001-01-01

    A framework for video content classification using a knowledge-based approach is herein proposed. This approach is motivated by the fact that videos are rich in semantic content, which can best be interpreted and analyzed by human experts. We demonstrate the concept by implementing a prototype video classification system using the rule-based programming language CLIPS 6.05. Knowledge for video classification is encoded as a set of rules in the rule base. The left-hand sides of rules contain high-level and low-level features, while the right-hand sides of rules contain intermediate results or conclusions. Our current implementation includes features computed from motion, color, and text extracted from video frames. Our current rule set allows us to classify input video into one of five classes: news, weather reporting, commercial, basketball and football. We use MYCIN's inexact reasoning method for combining evidence and to handle the uncertainties in the features and in the classification results. We obtained good results in a preliminary experiment, which demonstrated the validity of the proposed approach.
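
    MYCIN's inexact reasoning combines certainty factors for the same conclusion with a standard formula; the sketch below implements that formula, with the video-classification rules reduced to two illustrative evidence scores (the cues and values are assumptions, not from the paper).

```python
# Standard MYCIN certainty-factor combination for two pieces of evidence
# supporting (or opposing) the same conclusion.
def combine_cf(cf1, cf2):
    if cf1 >= 0 and cf2 >= 0:
        return cf1 + cf2 * (1 - cf1)
    if cf1 < 0 and cf2 < 0:
        return cf1 + cf2 * (1 + cf1)
    return (cf1 + cf2) / (1 - min(abs(cf1), abs(cf2)))

# Two hypothetical cues that a clip is a commercial: fast cuts, overlay text.
print(combine_cf(0.6, 0.5))   # 0.8: evidence accumulates toward certainty
```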

  2. Land Cover and Land Use Classification with TWOPAC: towards Automated Processing for Pixel- and Object-Based Image Classification

    Directory of Open Access Journals (Sweden)

    Stefan Dech

    2012-09-01

    Full Text Available We present a novel and innovative automated processing environment for the derivation of land cover (LC) and land use (LU) information. This processing framework, named TWOPAC (TWinned Object and Pixel based Automated classification Chain), enables the standardized, independent, user-friendly, and comparable derivation of LC and LU information, with minimized manual classification labor. TWOPAC allows classification of multi-spectral and multi-temporal remote sensing imagery from different sensor types. TWOPAC enables not only pixel-based classification, but also allows classification based on object-based characteristics. Classification is based on a Decision Tree (DT) approach, for which the well-known C5.0 code has been implemented; it builds decision trees based on the concept of information entropy. TWOPAC enables automatic generation of the decision tree classifier based on a C5.0-retrieved ascii file, as well as fully automatic validation of the classification output via sample-based accuracy assessment. Envisaging the automated generation of standardized land cover products, as well as area-wide classification of large amounts of data in preferably a short processing time, standardized interfaces for process control, Web Processing Services (WPS), as introduced by the Open Geospatial Consortium (OGC), are utilized. TWOPAC's functionality to process geospatial raster or vector data via web resources (server, network) enables TWOPAC's usability independent of any commercial client or desktop software and allows for large-scale data processing on servers. Furthermore, the components of TWOPAC were built using open source code components and are implemented as a plug-in for Quantum GIS software for easy handling of the classification process from the user's perspective.
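
    The information-entropy criterion behind C5.0-style decision trees can be stated in a few lines: choose the attribute split that maximizes information gain, i.e., the drop in entropy from parent to children. A generic sketch, not the TWOPAC code, with invented land-cover labels:

```python
# Information gain, the splitting criterion of entropy-based decision trees.
from collections import Counter
from math import log2

def entropy(labels):
    n = len(labels)
    return -sum((c / n) * log2(c / n) for c in Counter(labels).values())

def information_gain(parent, splits):
    n = len(parent)
    remainder = sum(len(s) / n * entropy(s) for s in splits)
    return entropy(parent) - remainder

parent = ["water"] * 4 + ["forest"] * 4
split = [["water"] * 4, ["forest"] * 4]     # a perfect split on some band
print(information_gain(parent, split))      # -> 1.0 bit
```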

  3. Econophysics of agent-based models

    CERN Document Server

    Aoyama, Hideaki; Chakrabarti, Bikas; Chakraborti, Anirban; Ghosh, Asim

    2014-01-01

    The primary goal of this book is to present the research findings and conclusions of physicists, economists, mathematicians and financial engineers working in the field of "Econophysics" who have undertaken agent-based modelling, comparison with empirical studies and related investigations. Most standard economic models assume the existence of the representative agent, who is “perfectly rational” and applies the utility maximization principle when taking action. One reason for this is the desire to keep models mathematically tractable: no tools are available to economists for solving non-linear models of heterogeneous adaptive agents without explicit optimization. In contrast, multi-agent models, which originated from statistical physics considerations, allow us to go beyond the prototype theories of traditional economics involving the representative agent. This book is based on the Econophys-Kolkata VII Workshop, at which many such modelling efforts were presented. In the book, leading researchers in the...

  4. The generalization ability of online SVM classification based on Markov sampling.

    Science.gov (United States)

    Xu, Jie; Yan Tang, Yuan; Zou, Bin; Xu, Zongben; Li, Luoqing; Lu, Yang

    2015-03-01

    In this paper, we consider online support vector machine (SVM) classification learning algorithms with uniformly ergodic Markov chain (u.e.M.c.) samples. We establish the bound on the misclassification error of an online SVM classification algorithm with u.e.M.c. samples based on reproducing kernel Hilbert spaces and obtain a satisfactory convergence rate. We also introduce a novel online SVM classification algorithm based on Markov sampling, and present numerical studies on the learning ability of online SVM classification based on Markov sampling for benchmark repositories. The numerical studies show that the learning performance of the online SVM classification algorithm based on Markov sampling is better than that of classical online SVM classification based on random sampling when the training sample size is larger.
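
    A hedged sketch of the Markov-sampling idea: instead of drawing training samples i.i.d., the learner accepts the next candidate with a probability tied to its loss given the previously accepted sample, so the accepted sequence forms a Markov chain biased toward informative examples. The acceptance rule below is a simplification for illustration, not the paper's exact algorithm.

```python
# Simplified Markov sampling around an online hinge-loss learner.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 5))
y = np.sign(X[:, 0] + 0.3 * rng.normal(size=2000)).astype(int)

clf = SGDClassifier(loss="hinge")
clf.partial_fit(X[:2], y[:2], classes=np.array([-1, 1]))

prev_loss = 1.0
for _ in range(500):
    i = rng.integers(len(X))
    margin = y[i] * clf.decision_function(X[i:i+1])[0]
    loss = max(0.0, 1.0 - margin)              # hinge loss of the candidate
    # Accept high-loss (informative) samples; else accept with ratio prob.
    if loss >= prev_loss or rng.random() < loss / (prev_loss + 1e-9):
        clf.partial_fit(X[i:i+1], y[i:i+1])
        prev_loss = max(loss, 1e-9)
print(clf.score(X, y))
```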

  5. Three essays in agent-based macroeconomics

    OpenAIRE

    Canzian, Giulia

    2009-01-01

    The dissertation is aimed at offering an insight into the agent-based methodology and its possible application to macroeconomic analysis. Relying on this methodology, I deal with three different issues concerning the heterogeneity of economic agents, bounded rationality and interaction. Specifically, the first chapter is devoted to describing the distinctive characteristics of agent-based economics and its advantages and disadvantages. In the second chapter I propose a credit market framework c...

  6. Multi-label literature classification based on the Gene Ontology graph

    Directory of Open Access Journals (Sweden)

    Lu Xinghua

    2008-12-01

    Full Text Available Abstract Background The Gene Ontology is a controlled vocabulary for representing knowledge related to genes and proteins in a computable form. The current effort of manually annotating proteins with the Gene Ontology is outpaced by the rate of accumulation of biomedical knowledge in literature, which urges the development of text mining approaches to facilitate the process by automatically extracting the Gene Ontology annotation from literature. The task is usually cast as a text classification problem, and contemporary methods are confronted with unbalanced training data and the difficulties associated with multi-label classification. Results In this research, we investigated the methods of enhancing automatic multi-label classification of biomedical literature by utilizing the structure of the Gene Ontology graph. We have studied three graph-based multi-label classification algorithms, including a novel stochastic algorithm and two top-down hierarchical classification methods for multi-label literature classification. We systematically evaluated and compared these graph-based classification algorithms to a conventional flat multi-label algorithm. The results indicate that, through utilizing the information from the structure of the Gene Ontology graph, the graph-based multi-label classification methods can significantly improve predictions of the Gene Ontology terms implied by the analyzed text. Furthermore, the graph-based multi-label classifiers are capable of suggesting Gene Ontology annotations (to curators that are closely related to the true annotations even if they fail to predict the true ones directly. A software package implementing the studied algorithms is available for the research community. Conclusion Through utilizing the information from the structure of the Gene Ontology graph, the graph-based multi-label classification methods have better potential than the conventional flat multi-label classification approach to facilitate
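    For contrast with the paper's graph-based methods, below is a minimal sketch of the conventional flat multi-label baseline they compare against: one-vs-rest logistic regression over TF-IDF features. The documents and GO identifiers are illustrative placeholders, and the hierarchy-exploiting step that constitutes the paper's actual contribution is not reproduced here.

```python
# Flat multi-label baseline: one binary classifier per GO term, with no use
# of the GO graph structure. Documents and GO IDs are invented examples.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier
from sklearn.preprocessing import MultiLabelBinarizer

docs = ["kinase activity in cell signaling", "membrane transport protein",
        "dna repair and replication", "signal transduction kinase cascade"]
go_labels = [["GO:0016301"], ["GO:0055085"],
             ["GO:0006281"], ["GO:0016301", "GO:0007165"]]

mlb = MultiLabelBinarizer()
Y = mlb.fit_transform(go_labels)          # binary indicator matrix
X = TfidfVectorizer().fit_transform(docs)

clf = OneVsRestClassifier(LogisticRegression(max_iter=1000)).fit(X, Y)
print(mlb.inverse_transform(clf.predict(X[:1])))  # predicted GO terms
```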

  7. A review of supervised object-based land-cover image classification

    Science.gov (United States)

    Ma, Lei; Li, Manchun; Ma, Xiaoxue; Cheng, Liang; Du, Peijun; Liu, Yongxue

    2017-08-01

    Object-based image classification for land-cover mapping purposes using remote-sensing imagery has attracted significant attention in recent years. Numerous studies conducted over the past decade have investigated a broad array of sensors, feature selection, classifiers, and other factors of interest. However, these research results have not yet been synthesized to provide coherent guidance on the effect of different supervised object-based land-cover classification processes. In this study, we first construct a database with 28 fields using qualitative and quantitative information extracted from 254 experimental cases described in 173 scientific papers. Second, the results of the meta-analysis are reported, including general characteristics of the studies (e.g., the geographic range of relevant institutes, preferred journals) and the relationships between factors of interest (e.g., spatial resolution and study area or optimal segmentation scale, accuracy and number of targeted classes), especially with respect to the classification accuracy of different sensors, segmentation scale, training set size, supervised classifiers, and land-cover types. Third, useful data on supervised object-based image classification are determined from the meta-analysis. For example, we find that supervised object-based classification is currently experiencing rapid advances, while development of the fuzzy technique is limited in the object-based framework. Furthermore, spatial resolution correlates with the optimal segmentation scale and study area, and Random Forest (RF) shows the best performance in object-based classification. The area-based accuracy assessment method can obtain stable classification performance, and indicates a strong correlation between accuracy and training set size, while the accuracy of the point-based method is likely to be unstable due to mixed objects. In addition, the overall accuracy benefits from higher spatial resolution images (e.g., unmanned aerial

  8. Structure-based classification and ontology in chemistry

    Directory of Open Access Journals (Sweden)

    Hastings Janna

    2012-04-01

    Full Text Available Abstract Background Recent years have seen an explosion in the availability of data in the chemistry domain. With this information explosion, however, retrieving relevant results from the available information, and organising those results, become even harder problems. Computational processing is essential to filter and organise the available resources so as to better facilitate the work of scientists. Ontologies encode expert domain knowledge in a hierarchically organised machine-processable format. One such ontology for the chemical domain is ChEBI. ChEBI provides a classification of chemicals based on their structural features and a role or activity-based classification. An example of a structure-based class is 'pentacyclic compound' (compounds containing five-ring structures), while an example of a role-based class is 'analgesic', since many different chemicals can act as analgesics without sharing structural features. Structure-based classification in chemistry exploits elegant regularities and symmetries in the underlying chemical domain. As yet, there has been neither a systematic analysis of the types of structural classification in use in chemistry nor a comparison to the capabilities of available technologies. Results We analyze the different categories of structural classes in chemistry, presenting a list of patterns for features found in class definitions. We compare these patterns of class definition to tools which allow for automation of hierarchy construction within cheminformatics and within logic-based ontology technology, going into detail in the latter case with respect to the expressive capabilities of the Web Ontology Language and recent extensions for modelling structured objects. Finally we discuss the relationships and interactions between cheminformatics approaches and logic-based approaches. Conclusion Systems that perform intelligent reasoning tasks on chemistry data require a diverse set of underlying computational

  9. Contextual segment-based classification of airborne laser scanner data

    NARCIS (Netherlands)

    Vosselman, George; Coenen, Maximilian; Rottensteiner, Franz

    2017-01-01

    Classification of point clouds is needed as a first step in the extraction of various types of geo-information from point clouds. We present a new approach to contextual classification of segmented airborne laser scanning data. Potential advantages of segment-based classification are easily offset

  10. Integrating Globality and Locality for Robust Representation Based Classification

    Directory of Open Access Journals (Sweden)

    Zheng Zhang

    2014-01-01

    Full Text Available The representation based classification method (RBCM) has shown huge potential for face recognition since it first emerged. The linear regression classification (LRC) method and the collaborative representation classification (CRC) method are two well-known RBCMs. LRC and CRC exploit the training samples of each class and all the training samples, respectively, to represent the testing sample, and subsequently conduct classification on the basis of the representation residual. The LRC method can be viewed as a "locality representation" method because it uses only the training samples of each class to represent the testing sample, and it cannot embody the effectiveness of "globality representation." Conversely, the CRC method cannot enjoy the locality benefit of the general RBCM. Thus we propose to integrate CRC and LRC to perform more robust representation based classification. The experimental results on benchmark face databases substantially demonstrate that the proposed method achieves high classification accuracy.
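    A minimal numpy sketch of the two ingredients being integrated follows: LRC's per-class least-squares residual and CRC's single ridge-regularized representation over all training samples. The fusion rule used here (summing the two residual vectors) is an illustrative assumption, not necessarily the paper's exact formulation; data and dimensions are synthetic.

```python
# LRC: represent the test sample with each class's training matrix alone.
# CRC: one regularized representation over all training samples jointly.
import numpy as np

def lrc_residuals(X_by_class, y):
    res = []
    for Xc in X_by_class:                           # columns = training samples
        coef, *_ = np.linalg.lstsq(Xc, y, rcond=None)
        res.append(np.linalg.norm(y - Xc @ coef))
    return np.array(res)

def crc_residuals(X_by_class, y, lam=0.01):
    X = np.hstack(X_by_class)                       # all samples jointly
    coef = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)
    res, start = [], 0
    for Xc in X_by_class:
        k = Xc.shape[1]
        res.append(np.linalg.norm(y - Xc @ coef[start:start + k]))
        start += k
    return np.array(res)

rng = np.random.default_rng(2)
# Three classes, 20-dimensional features, 5 training samples per class.
X_by_class = [rng.normal(loc=c, size=(20, 5)) for c in range(3)]
y = X_by_class[1][:, 0] + 0.05 * rng.normal(size=20)  # test sample near class 1

# Illustrative fusion: sum the locality and globality residuals.
score = lrc_residuals(X_by_class, y) + crc_residuals(X_by_class, y)
print("predicted class:", int(np.argmin(score)))
```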

  11. EMG finger movement classification based on ANFIS

    Science.gov (United States)

    Caesarendra, W.; Tjahjowidodo, T.; Nico, Y.; Wahyudati, S.; Nurhasanah, L.

    2018-04-01

    An increasing number of people suffering from stroke has driven the rapid development of finger-hand exoskeletons that enable automatic physical therapy. Prior to the development of a finger exoskeleton, an important preliminary research topic, machine-learning-based classification of finger gestures, is conducted. This paper presents a study on EMG signal classification of 5 finger gestures as a preliminary study toward finger exoskeleton design and development in Indonesia. The EMG signals of the 5 finger gestures were acquired using a Myo EMG sensor. The EMG signal features were extracted and reduced using PCA. ANFIS-based learning is used to classify the reduced features of the 5 finger gestures. The results show that the classification accuracy for the 5 finger gestures is lower than that for 7 hand gestures.

  12. Chinese Sentence Classification Based on Convolutional Neural Network

    Science.gov (United States)

    Gu, Chengwei; Wu, Ming; Zhang, Chuang

    2017-10-01

    Sentence classification is one of the significant issues in Natural Language Processing (NLP). Feature extraction is often regarded as the key point for natural language processing. Traditional methods based on machine learning, such as the Naive Bayes model, cannot take high-level features into consideration. Neural networks for sentence classification can make use of contextual information to achieve better results in sentence classification tasks. In this paper, we focus on classifying Chinese sentences, and most importantly, we propose a novel Convolutional Neural Network (CNN) architecture for Chinese sentence classification. In particular, while most previous methods use a softmax classifier for prediction, we embed a linear support vector machine in place of softmax in the deep neural network model, minimizing a margin-based loss to obtain a better result. We also use tanh as the activation function instead of ReLU. The CNN model improves the results of Chinese sentence classification tasks. Experimental results on a Chinese news title database validate the effectiveness of our model.
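    A toy PyTorch sketch of the two modifications described, a margin-based (SVM-style) loss in place of softmax cross-entropy and tanh in place of ReLU, might look as follows. Vocabulary size, dimensions, and the random batch are placeholders rather than the paper's configuration.

```python
# Sentence CNN with tanh activation and a multi-class hinge (margin) loss
# instead of softmax cross-entropy. All sizes and data are toy placeholders.
import torch
import torch.nn as nn

class SentenceCNN(nn.Module):
    def __init__(self, vocab=5000, emb=64, n_filters=32, n_classes=4):
        super().__init__()
        self.emb = nn.Embedding(vocab, emb)
        self.conv = nn.Conv1d(emb, n_filters, kernel_size=3, padding=1)
        self.fc = nn.Linear(n_filters, n_classes)   # linear scores, no softmax

    def forward(self, tokens):                      # tokens: (batch, seq_len)
        x = self.emb(tokens).transpose(1, 2)        # (batch, emb, seq_len)
        x = torch.tanh(self.conv(x))                # tanh instead of ReLU
        x = x.max(dim=2).values                     # max-over-time pooling
        return self.fc(x)

model = SentenceCNN()
loss_fn = nn.MultiMarginLoss()                      # SVM-style margin loss
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

tokens = torch.randint(0, 5000, (8, 20))            # fake batch of sentences
labels = torch.randint(0, 4, (8,))
loss = loss_fn(model(tokens), labels)
opt.zero_grad(); loss.backward(); opt.step()
print(float(loss))
```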

  13. Preliminary Research on Grassland Fine-classification Based on MODIS

    International Nuclear Information System (INIS)

    Hu, Z W; Zhang, S; Yu, X Y; Wang, X S

    2014-01-01

    Grassland ecosystems are important for climate regulation and for maintaining soil and water. Research on grassland monitoring methods can provide an effective reference for grassland resource investigation. In this study, we used the vegetation index method for grassland classification. There are several types of climate in China; therefore, we used China's Main Climate Zone Maps to divide the study region into four climate zones. Based on the grassland classification system of the first nation-wide grass resource survey in China, we established a new grassland classification system suited to this research. We used MODIS images as the basic data resource and applied the expert classifier method to perform grassland classification. Based on the 1:1,000,000 Grassland Resource Map of China, we obtained the basic distribution of all grassland types and selected 20 samples evenly distributed within each type, then used NDVI/EVI products to summarize the different spectral features of the grassland types. Finally, we introduced other classification auxiliary data, such as elevation, accumulated temperature (AT), humidity index (HI) and rainfall. China's nation-wide grassland classification map results from merging the grassland maps of the different climate zones. The overall classification accuracy is 60.4%. The result indicates that the expert classifier is suitable for nation-wide grassland classification, but the classification accuracy needs to be improved

  14. An Emotional Agent Model Based on Granular Computing

    Directory of Open Access Journals (Sweden)

    Jun Hu

    2012-01-01

    Full Text Available Affective computing is of great significance for intelligent information processing and harmonious communication between human beings and computers. A new model for an emotional agent is proposed in this paper to give agents the ability to handle emotions, based on granular computing theory and the traditional BDI agent model. Firstly, a new emotion knowledge base based on granular computing for emotion expression is presented in the model. Secondly, a new emotional reasoning algorithm based on granular computing is proposed. Thirdly, a new emotional agent model based on granular computing is presented. Finally, based on the model, an emotional agent for patient assistance in a hospital is realized; experimental results show that it handles simple emotions efficiently.

  15. An Authentication Technique Based on Classification

    Institute of Scientific and Technical Information of China (English)

    李钢; 杨杰

    2004-01-01

    We present a novel watermarking approach based on classification for authentication, in which a watermark is embedded into the host image. When the marked image is modified, the extracted watermark differs from the original watermark, and different kinds of modification lead to different extracted watermarks. In this paper, different kinds of modification are considered as classes, and we use a classification algorithm to recognize the modifications with high probability. Simulation results show that the proposed method is promising and effective.

  16. Methods for Model-Based Reasoning within Agent-Based Ambient Intelligence Applications

    NARCIS (Netherlands)

    Bosse, T.; Both, F.; Gerritsen, C.; Hoogendoorn, M.; Treur, J.

    2012-01-01

    Within agent-based Ambient Intelligence applications agents react to humans based on information obtained by sensoring and their knowledge about human functioning. Appropriate types of reactions depend on the extent to which an agent understands the human and is able to interpret the available

  17. An Agent-Based Monetary Production Simulation Model

    DEFF Research Database (Denmark)

    Bruun, Charlotte

    2006-01-01

    An Agent-Based Simulation Model Programmed in Objective Borland Pascal. Program and source code are downloadable.

  18. Simple adaptive sparse representation based classification schemes for EEG based brain-computer interface applications.

    Science.gov (United States)

    Shin, Younghak; Lee, Seungchan; Ahn, Minkyu; Cho, Hohyun; Jun, Sung Chan; Lee, Heung-No

    2015-11-01

    One of the main problems related to electroencephalogram (EEG) based brain-computer interface (BCI) systems is the non-stationarity of the underlying EEG signals. This results in the deterioration of the classification performance during experimental sessions. Therefore, adaptive classification techniques are required for EEG based BCI applications. In this paper, we propose simple adaptive sparse representation based classification (SRC) schemes. Supervised and unsupervised dictionary update techniques for new test data and a dictionary modification method by using the incoherence measure of the training data are investigated. The proposed methods are very simple and additional computation for the re-training of the classifier is not needed. The proposed adaptive SRC schemes are evaluated using two BCI experimental datasets. The proposed methods are assessed by comparing classification results with the conventional SRC and other adaptive classification methods. On the basis of the results, we find that the proposed adaptive schemes show relatively improved classification accuracy as compared to conventional methods without requiring additional computation.
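    The plain SRC decision rule that such adaptive schemes build on can be sketched as follows: sparse-code the test sample over the training dictionary, then choose the class with the smallest class-restricted reconstruction residual. The dictionary-update (adaptation) step, which is the paper's contribution, is only indicated by a comment; data and parameters are invented.

```python
# Plain SRC: sparse representation over the training dictionary, then
# classification by per-class reconstruction residual. Synthetic data.
import numpy as np
from sklearn.linear_model import orthogonal_mp

rng = np.random.default_rng(3)
n_dim, per_class, n_classes = 30, 10, 3
D = np.hstack([rng.normal(loc=c, size=(n_dim, per_class))
               for c in range(n_classes)])          # columns = training trials
D /= np.linalg.norm(D, axis=0)                      # unit-norm atoms
class_of_atom = np.repeat(np.arange(n_classes), per_class)

y = D[:, 15] + 0.05 * rng.normal(size=n_dim)        # test trial near class 1

code = orthogonal_mp(D, y, n_nonzero_coefs=5)       # sparse representation
residuals = []
for c in range(n_classes):
    mask = class_of_atom == c
    residuals.append(np.linalg.norm(y - D[:, mask] @ code[mask]))
print("predicted class:", int(np.argmin(residuals)))
# An adaptive SRC scheme would now update the dictionary with the
# (confidently classified) test sample before the next trial.
```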

  19. Story telling engine based on agent interaction

    OpenAIRE

    Porcel, Juan Carlos

    2008-01-01

    Comics have been used as a programming tool for agents, giving them instructions on how to act. In this thesis I do this in reverse: I use comics to describe the actions of agents already interacting with each other, to create a storytelling engine that dynamically generates stories based on the interaction of said agents. The model for the agent behaviours is based on the improvisational puppets model of Barbara Hayes-Roth. This model is chosen due to the nature of comics themselves. Comics ...

  20. A Secure Protocol Based on a Sedentary Agent for Mobile Agent Environments

    OpenAIRE

    Abdelmorhit E. Rhazi; Samuel Pierre; Hanifa Boucheneb

    2007-01-01

    The main challenge when deploying mobile agent environments pertains to security issues concerning mobile agents and their execution platform. This paper proposes a secure protocol which protects mobile agents against attacks from malicious hosts in these environments. Protection is based on the perfect cooperation of a sedentary agent running inside a trusted third host. Results show that the protocol detects several attacks, such as denial of service, incorrect execution and re-execution of...

  1. Evolutionary game theory using agent-based methods.

    Science.gov (United States)

    Adami, Christoph; Schossau, Jory; Hintze, Arend

    2016-12-01

    Evolutionary game theory is a successful mathematical framework geared towards understanding the selective pressures that affect the evolution of the strategies of agents engaged in interactions with potential conflicts. While a mathematical treatment of the costs and benefits of decisions can predict the optimal strategy in simple settings, more realistic settings such as finite populations, non-vanishing mutation rates, stochastic decisions, communication between agents, and spatial interactions require agent-based methods where each agent is modeled as an individual, carries its own genes that determine its decisions, and where the evolutionary outcome can only be ascertained by evolving the population of agents forward in time. While highlighting standard mathematical results, we compare those to agent-based methods that can go beyond the limitations of equations and simulate the complexity of heterogeneous populations and an ever-changing set of interactors. We conclude that agent-based methods can predict evolutionary outcomes where purely mathematical treatments cannot tread (for example in the weak selection-strong mutation limit), but that mathematics is crucial to validate the computational simulations.
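    In the spirit of the methods the review describes, here is a minimal agent-based sketch of an evolutionary game: a finite population of strategy-carrying agents, payoffs from pairwise prisoner's-dilemma interactions, death-birth updating, and a non-vanishing mutation rate. Payoff values and parameters are illustrative only.

```python
# Finite-population evolutionary game simulated agent by agent, rather than
# via replicator equations. Payoffs and rates are invented for illustration.
import numpy as np

rng = np.random.default_rng(4)
PAYOFF = np.array([[3, 0],      # row: own strategy (0=cooperate, 1=defect)
                   [5, 1]])     # column: opponent's strategy
N, MU, GENERATIONS = 100, 0.01, 500

pop = rng.integers(0, 2, N)     # each agent carries one strategy "gene"
for _ in range(GENERATIONS):
    opponents = rng.permutation(N)
    fitness = PAYOFF[pop, pop[opponents]].astype(float)
    # Death-birth update: a random agent dies, a fitness-proportional
    # parent reproduces into the vacated slot.
    dead = rng.integers(N)
    parent = rng.choice(N, p=fitness / fitness.sum())
    pop[dead] = pop[parent]
    if rng.random() < MU:        # mutation keeps the population heterogeneous
        pop[rng.integers(N)] ^= 1

print("fraction defecting:", pop.mean())
```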

  2. Fuzzy Constraint-Based Agent Negotiation

    Institute of Scientific and Technical Information of China (English)

    Menq-Wen Lin; K. Robert Lai; Ting-Jung Yu

    2005-01-01

    Conflicts between two or more parties arise for various reasons and from various perspectives. Thus, the resolution of conflicts frequently relies on some form of negotiation. This paper presents a general problem-solving framework for modeling multi-issue multilateral negotiation using fuzzy constraints. Agent negotiation is formulated as a distributed fuzzy constraint satisfaction problem (DFCSP). Fuzzy constraints are thus used to naturally represent each agent's desires involving imprecision and human conceptualization, particularly when lexical imprecision and subjective matters are concerned. On the other hand, based on fuzzy constraint-based problem-solving, our approach enables an agent not only to systematically relax fuzzy constraints to generate a proposal, but also to employ fuzzy similarity to select the alternative that is subject to its acceptability by the opponents. The task of this problem-solving is to reach an agreement that benefits all agents with a high satisfaction degree of fuzzy constraints, and to move towards the deal more quickly, since the search focuses only on the feasible solution space. An application to multilateral negotiation of travel planning is provided to demonstrate the usefulness and effectiveness of our framework.

  3. Agent-based land markets: Heterogeneous agents, land prices and urban land use change

    NARCIS (Netherlands)

    Filatova, Tatiana; Parker, Dawn C.; van der Veen, A.; Amblard, F.

    2007-01-01

    We construct a spatially explicit agent-based model of a bilateral land market. Heterogeneous agents form their bid and ask prices for land based on the utility that they obtain from a certain location (house/land) and on the state of the market (an excess of demand or supply). We underline the

  4. ICF-based classification and measurement of functioning.

    Science.gov (United States)

    Stucki, G; Kostanjsek, N; Ustün, B; Cieza, A

    2008-09-01

    If we aim towards a comprehensive understanding of human functioning and the development of comprehensive programs to optimize the functioning of individuals and populations, we need to develop suitable measures. The approval of the International Classification of Functioning, Disability and Health (ICF) in 2001 by the 54th World Health Assembly as the first universally shared model and classification of functioning, disability and health marks, therefore, an important step in the development of measurement instruments and ultimately for our understanding of functioning, disability and health. The acceptance and use of the ICF as a reference framework and classification has been facilitated by its development in a worldwide, comprehensive consensus process and the increasing evidence regarding its validity. However, the broad acceptance and use of the ICF as a reference framework and classification will also depend on the resolution of conceptual and methodological challenges relevant for the classification and measurement of functioning. This paper therefore describes first how the ICF categories can serve as building blocks for the measurement of functioning, then the current state of the development of ICF-based practical tools and international standards such as the ICF Core Sets. Finally, it illustrates how to map the world of measures to the ICF and vice versa, and the methodological principles relevant for the transformation of information obtained with a clinical test or a patient-oriented instrument to the ICF, as well as the development of ICF-based clinical and self-reported measurement instruments.

  5. Voice based gender classification using machine learning

    Science.gov (United States)

    Raahul, A.; Sapthagiri, R.; Pankaj, K.; Vijayarajan, V.

    2017-11-01

    Gender identification is one of the major problems in speech analysis today: tracing gender from acoustic data such as pitch, median, and frequency. Machine learning gives promising results for classification problems in all research domains. There are several performance metrics with which to evaluate the algorithms of an area. We propose a comparative model for evaluating 5 different machine learning algorithms, based on eight different metrics, in gender classification from acoustic data. The agenda is to identify gender with five different algorithms: Linear Discriminant Analysis (LDA), K-Nearest Neighbour (KNN), Classification and Regression Trees (CART), Random Forest (RF), and Support Vector Machine (SVM), on the basis of eight different metrics. The main parameter in evaluating any algorithm is its performance. The misclassification rate must be low in classification problems, which means that the accuracy rate must be high. The location and gender of a person have become very crucial in economic markets in the form of AdSense. With this comparative model, we assess the different ML algorithms and find the best fit for gender classification of acoustic data.
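    A hedged sketch of the kind of comparison the abstract describes follows: the same five classifiers evaluated with cross-validation on acoustic-style feature vectors. The features here are synthetic stand-ins for the real voice statistics, and plain accuracy replaces the paper's eight metrics.

```python
# Compare the five named classifiers with 5-fold cross-validation.
# The feature matrix is synthetic; real inputs would be voice statistics.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(5)
X = rng.normal(size=(400, 8))                                # stand-in acoustic features
y = (X[:, 0] + 0.3 * rng.normal(size=400) > 0).astype(int)   # stand-in gender labels

models = {"LDA": LinearDiscriminantAnalysis(),
          "KNN": KNeighborsClassifier(),
          "CART": DecisionTreeClassifier(),
          "RF": RandomForestClassifier(),
          "SVM": SVC()}
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5)
    print(f"{name}: {scores.mean():.3f} +/- {scores.std():.3f}")
```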

  6. Recent advances in agent-based complex automated negotiation

    CERN Document Server

    Ito, Takayuki; Zhang, Minjie; Fujita, Katsuhide; Robu, Valentin

    2016-01-01

    This book covers recent advances in Complex Automated Negotiations as a widely studied emerging area in the field of Autonomous Agents and Multi-Agent Systems. The book includes selected revised and extended papers from the 7th International Workshop on Agent-Based Complex Automated Negotiation (ACAN2014), which was held in Paris, France, in May 2014. The book also includes brief introductions about Agent-based Complex Automated Negotiation which are based on tutorials provided in the workshop, and brief summaries and descriptions about the ANAC'14 (Automated Negotiating Agents Competition) competition, where authors of selected finalist agents explain the strategies and the ideas used by them. The book is targeted to academic and industrial researchers in various communities of autonomous agents and multi-agent systems, such as agreement technology, mechanism design, electronic commerce, related areas, as well as graduate, undergraduate, and PhD students working in those areas or having interest in them.

  7. Agent Based Reasoning in Multilevel Flow Modeling

    DEFF Research Database (Denmark)

    Lind, Morten; Zhang, Xinxin

    2012-01-01

    to launch the MFM Workbench into an agent-based environment, which can complement disadvantages of the original software. The agent-based MFM Workbench is centered on a concept called "Blackboard System" and uses an event-based mechanism to arrange the reasoning tasks. This design will support the new

  8. Agent Based Modeling Applications for Geosciences

    Science.gov (United States)

    Stein, J. S.

    2004-12-01

    Agent-based modeling techniques have successfully been applied to systems in which complex behaviors or outcomes arise from varied interactions between individuals in the system. Each individual interacts with its environment, as well as with other individuals, by following a set of relatively simple rules. Traditionally this "bottom-up" modeling approach has been applied to problems in the fields of economics and sociology, but more recently it has been introduced to various disciplines in the geosciences. This technique can help explain the origin of complex processes from a relatively simple set of rules, incorporate large and detailed datasets when they exist, and simulate the effects of extreme events on system-wide behavior. Some of the challenges associated with this modeling method include significant computational requirements in order to keep track of thousands to millions of agents, and a lack of methods and strategies for model validation, as well as of a formal methodology for evaluating model uncertainty. Challenges specific to the geosciences include how to define agents that control water, contaminant fluxes, climate forcing and other physical processes, and how to link these "geo-agents" into larger agent-based simulations that include social systems such as demographics, economics and regulations. Effective management of limited natural resources (such as water, hydrocarbons, or land) requires an understanding of what factors influence the demand for these resources on a regional and temporal scale. Agent-based models can be used to simulate this demand across a variety of sectors under a range of conditions and to determine effective and robust management policies and monitoring strategies. The recent focus on the role of biological processes in the geosciences is another example of an area that could benefit from agent-based applications. A typical approach to modeling the effect of biological processes in geologic media has been to represent these processes in

  9. Cluster Validity Classification Approaches Based on Geometric Probability and Application in the Classification of Remotely Sensed Images

    Directory of Open Access Journals (Sweden)

    LI Jian-Wei

    2014-08-01

    Full Text Available On the basis of the cluster validity function based on geometric probability in literature [1, 2], we propose a cluster analysis method based on geometric probability to process large amounts of data in a rectangular area. The basic idea is top-down stepwise refinement: first categories, then subcategories. On all clustering levels, the cluster validity function based on geometric probability is used first to determine the clusters and the gathering direction; the clustering centers and the cluster borders are then determined. Through TM remote sensing image classification examples, we compare the method with the supervised and unsupervised classification in ERDAS and with the cluster analysis method based on geometric probability in a two-dimensional square proposed in literature [2]. Results show that the proposed method can significantly improve the classification accuracy.

  10. Investigating the feasibility of a BCI-driven robot-based writing agent for handicapped individuals

    Science.gov (United States)

    Syan, Chanan S.; Harnarinesingh, Randy E. S.; Beharry, Rishi

    2014-07-01

    Brain-Computer Interfaces (BCIs) predominantly employ output actuators such as virtual keyboards and wheelchair controllers to enable handicapped individuals to interact and communicate with their environment. However, BCI-based assistive technologies are limited in their application. There is minimal research geared towards granting disabled individuals the ability to communicate using written words. This is a drawback because involving a human attendant in writing tasks can entail a breach of personal privacy where the task entails sensitive and private information such as banking matters. BCI-driven robot-based writing however can provide a safeguard for user privacy where it is required. This study investigated the feasibility of a BCI-driven writing agent using the 3-degree-of-freedom Phantom Omnibot. A full alphanumerical English character set was developed and validated using a teach pendant program in MATLAB. The Omnibot was subsequently interfaced to a P300-based BCI. Three subjects utilised the BCI in the online context to communicate words to the writing robot over a Local Area Network (LAN). The average online letter-wise classification accuracy was 91.43%. The writing agent legibly constructed the communicated letters with minor errors in trajectory execution. The developed system therefore provided a feasible platform for BCI-based writing.

  11. Investigating the feasibility of a BCI-driven robot-based writing agent for handicapped individuals

    International Nuclear Information System (INIS)

    Syan, Chanan S; Harnarinesingh, Randy E S; Beharry, Rishi

    2014-01-01

    Brain-Computer Interfaces (BCIs) predominantly employ output actuators such as virtual keyboards and wheelchair controllers to enable handicapped individuals to interact and communicate with their environment. However, BCI-based assistive technologies are limited in their application. There is minimal research geared towards granting disabled individuals the ability to communicate using written words. This is a drawback because involving a human attendant in writing tasks can entail a breach of personal privacy where the task entails sensitive and private information such as banking matters. BCI-driven robot-based writing however can provide a safeguard for user privacy where it is required. This study investigated the feasibility of a BCI-driven writing agent using the 3-degree-of-freedom Phantom Omnibot. A full alphanumerical English character set was developed and validated using a teach pendant program in MATLAB. The Omnibot was subsequently interfaced to a P300-based BCI. Three subjects utilised the BCI in the online context to communicate words to the writing robot over a Local Area Network (LAN). The average online letter-wise classification accuracy was 91.43%. The writing agent legibly constructed the communicated letters with minor errors in trajectory execution. The developed system therefore provided a feasible platform for BCI-based writing.

  12. Multi-issue Agent Negotiation Based on Fairness

    Science.gov (United States)

    Zuo, Baohe; Zheng, Sue; Wu, Hong

    Agent-based e-commerce service has become a research hotspot. How to make the agent negotiation process quick and efficient is the main research direction of this area. In multi-issue models, MAUT (Multi-Attribute Utility Theory) and its derived theories usually give little consideration to the fairness of both negotiators. This work presents a general model of agent negotiation which considers the satisfaction of both negotiators via autonomous learning. The model can evaluate offers from the opponent agent based on the satisfaction degree, learn online to acquire the opponent's knowledge from historical interaction instances and the current negotiation, and make concessions dynamically based on a fairness objective. Through building the optimal negotiation model, the bilateral negotiation achieves higher efficiency and a fairer deal.

  13. Vision-Based Perception and Classification of Mosquitoes Using Support Vector Machine

    Directory of Open Access Journals (Sweden)

    Masataka Fuchida

    2017-01-01

    Full Text Available The need for a novel automated mosquito perception and classification method has become increasingly essential in recent years, with a steeply increasing number of mosquito-borne diseases and associated casualties. There exist remote sensing and GIS-based methods for mapping potential mosquito habitats and locations that are prone to mosquito-borne diseases, but these methods generally do not account for species-wise identification of mosquitoes in closed-perimeter regions. Traditional methods for mosquito classification involve highly manual processes requiring tedious sample collection and supervised laboratory analysis. In this research work, we present the design and experimental validation of an automated vision-based mosquito classification module that can be deployed in closed-perimeter mosquito habitats. The module is capable of distinguishing mosquitoes from other bugs such as bees and flies by extracting morphological features, followed by support vector machine-based classification. In addition, this paper presents the results of three variants of the support vector machine classifier in the context of the mosquito classification problem. This vision-based approach to the mosquito classification problem presents an efficient alternative to the conventional methods for mosquito surveillance, mapping and sample image collection. Experimental results involving classification between mosquitoes and a predefined set of other bugs using multiple classification strategies demonstrate the efficacy and validity of the proposed approach with a maximum recall of 98%.
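    One plausible reading of the "three variants" comparison is three SVM kernels evaluated on the extracted morphological feature vectors, as sketched below with synthetic placeholder features; the paper does not necessarily use these exact kernels.

```python
# Compare three SVM variants (here: kernels) on morphological features.
# Features are random stand-ins for the extracted shape descriptors.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(6)
X = rng.normal(size=(300, 12))                 # stand-in morphological features
y = (X[:, :3].sum(axis=1) > 0).astype(int)     # 1 = mosquito, 0 = other bug

for kernel in ("linear", "rbf", "poly"):
    acc = cross_val_score(SVC(kernel=kernel), X, y, cv=5).mean()
    print(kernel, round(acc, 3))
```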

  14. Classification of research reactors and discussion of thinking of safety regulation based on the classification

    International Nuclear Information System (INIS)

    Song Chenxiu; Zhu Lixin

    2013-01-01

    Research reactors differ in reactor type, use, power level, design principle, operation mode and safety performance, and they also differ significantly with respect to nuclear safety regulation. This paper introduces a classification of research reactors and discusses thinking on safety regulation based on this classification. (authors)

  15. Agent-based models in economics a toolkit

    CERN Document Server

    Fagiolo, Giorgio; Gallegati, Mauro; Richiardi, Matteo; Russo, Alberto

    2018-01-01

    In contrast to mainstream economics, complexity theory conceives the economy as a complex system of heterogeneous interacting agents characterised by limited information and bounded rationality. Agent Based Models (ABMs) are the analytical and computational tools developed by the proponents of this emerging methodology. Aimed at students and scholars of contemporary economics, this book includes a comprehensive toolkit for agent-based computational economics, now quickly becoming the new way to study evolving economic systems. Leading scholars in the field explain how ABMs can be applied fruitfully to many real-world economic examples and represent a great advancement over mainstream approaches. The essays discuss the methodological bases of agent-based approaches and demonstrate step-by-step how to build, simulate and analyse ABMs and how to validate their outputs empirically using the data. They also present a wide set of applications of these models to key economic topics, including the business cycle, lab...

  16. Radar Target Classification using Recursive Knowledge-Based Methods

    DEFF Research Database (Denmark)

    Jochumsen, Lars Wurtz

    The topic of this thesis is target classification of radar tracks from a 2D mechanically scanning coastal surveillance radar. The measurements provided by the radar are position data, and therefore the classification is mainly based on kinematic data deduced from the position. The target...... been terminated. Therefore, an update of the classification results must be made for each measurement of the target. The data for this work were collected throughout the PhD, both from radars and from other sensors such as GPS.

  17. Energy-efficiency based classification of the manufacturing workstation

    Science.gov (United States)

    Frumuşanu, G.; Afteni, C.; Badea, N.; Epureanu, A.

    2017-08-01

    EU Directive 92/75/EC established for the first time an energy consumption labelling scheme, further implemented by several other directives. As a consequence, nowadays many products (e.g. home appliances, tyres, light bulbs, houses) carry an EU Energy Label when offered for sale or rent. Several energy consumption models of manufacturing equipment have also been developed. This paper proposes an energy-efficiency-based classification of the manufacturing workstation, aiming to characterize its energetic behaviour. The concept of energy efficiency of the manufacturing workstation is defined. On this basis, a classification methodology has been developed. It covers specific criteria and their evaluation modalities, together with the definition and delimitation of energy efficiency classes. The position of an energy class is defined by the amount of energy needed by the workstation at the middle point of its operating domain, while its extension is determined by the value of the first coefficient of the Taylor series that approximates the dependence between energy consumption and the chosen parameter of the working regime. The main domain of interest for this classification appears to be the optimization of manufacturing activity planning and programming. A case study regarding the classification of an actual lathe from the energy efficiency point of view, based on two different approaches (analytical and numerical), is also included.

  18. NIM: A Node Influence Based Method for Cancer Classification

    Directory of Open Access Journals (Sweden)

    Yiwen Wang

    2014-01-01

    Full Text Available The classification of different cancer types is of great significance in the medical field. However, the great majority of existing cancer classification methods are clinical-based and have relatively weak diagnostic ability. With the rapid development of gene expression technology, it has become possible to classify different kinds of cancers using DNA microarrays. Our main idea is to confront the problem of cancer classification using gene expression data from a graph-based view. Based on a new node influence model we propose, this paper presents a novel high-accuracy method for cancer classification, which is composed of four parts: the first is to calculate the similarity matrix of all samples, the second is to compute the node influence of training samples, the third is to obtain the similarity between every test sample and each class using a weighted sum of node influence and the similarity matrix, and the last is to classify each test sample based on its similarity to every class. The data sets used in our experiments are breast cancer, central nervous system, colon tumor, prostate cancer, acute lymphoblastic leukemia, and lung cancer. Experimental results showed that our node influence based method (NIM) is more efficient and robust than the support vector machine, K-nearest neighbor, C4.5, naive Bayes, and CART.

  19. TENSOR MODELING BASED FOR AIRBORNE LiDAR DATA CLASSIFICATION

    Directory of Open Access Journals (Sweden)

    N. Li

    2016-06-01

    Full Text Available Feature selection and description is a key factor in the classification of Earth observation data. In this paper a classification method based on tensor decomposition is proposed. First, multiple features are extracted from the raw LiDAR point cloud, and raster LiDAR images are derived by accumulating features or the "raw" data attributes. Then, the feature rasters of the LiDAR data are stored as a tensor, and tensor decomposition is used to select component features. This tensor representation preserves the initial spatial structure and ensures that the neighborhood is taken into account. Based on a small number of component features, a k-nearest-neighbor classification is applied.

  20. Waste-acceptance criteria and risk-based thinking for radioactive-waste classification

    International Nuclear Information System (INIS)

    Lowenthal, M.D.

    1998-01-01

    The US system of radioactive-waste classification and its development provide a reference point for the discussion of risk-based thinking in waste classification. The official US system is described, and waste-acceptance criteria for disposal sites are introduced because they constitute a form of de facto waste classification. Risk-based classification is explored, and it is found that a truly risk-based system is context-dependent: risk depends not only on the waste-management activity but, for some activities such as disposal, on the specific physical context. Some of the elements of the official US system incorporate risk-based thinking, but like many proposed alternative schemes, the physical context of disposal is ignored. The waste-acceptance criteria for disposal sites do account for this context dependence and could be used as a risk-based classification scheme for disposal. While different classes would be necessary for different management activities, the waste-acceptance criteria would obviate the need for the current system and could better match wastes to disposal environments, saving money, improving safety, or both

  1. Research on Classification of Chinese Text Data Based on SVM

    Science.gov (United States)

    Lin, Yuan; Yu, Hongzhi; Wan, Fucheng; Xu, Tao

    2017-09-01

    Data mining has important application value in today's industry and academia, and text classification is a very important technology within it. At present, there are many mature algorithms for text classification; KNN, NB, AB, SVM, decision trees and other classification methods all show good classification performance. The Support Vector Machine (SVM) classification method is a good classifier in machine learning research. This paper studies the classification effect of the SVM method on Chinese text data and uses it to classify Chinese texts, aiming to combine academic research with practical application.
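    A minimal sketch of SVM-based Chinese text classification might use character n-gram TF-IDF features (sidestepping word segmentation) with a linear SVM; the four example sentences and their labels below are invented for illustration.

```python
# Character n-gram TF-IDF + linear SVM for Chinese text. The tiny corpus
# and its finance/sports labels are invented placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline

docs = ["股市今日大涨", "球队赢得比赛", "央行调整利率", "运动员打破纪录"]
labels = ["finance", "sports", "finance", "sports"]

clf = make_pipeline(
    TfidfVectorizer(analyzer="char", ngram_range=(1, 2)),  # char n-grams
    LinearSVC())
clf.fit(docs, labels)
print(clf.predict(["利率市场波动"]))  # expected: finance
```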

  2. Iris Image Classification Based on Hierarchical Visual Codebook.

    Science.gov (United States)

    Zhenan Sun; Hui Zhang; Tieniu Tan; Jianyu Wang

    2014-06-01

    Iris recognition as a reliable method for personal identification has been well studied, with the objective of assigning the class label of each iris image to a unique subject. In contrast, iris image classification aims to classify an iris image into an application-specific category, e.g., iris liveness detection (classification of genuine and fake iris images), race classification (e.g., classification of iris images of Asian and non-Asian subjects), or coarse-to-fine iris identification (classification of all iris images in the central database into multiple categories). This paper proposes a general framework for iris image classification based on texture analysis. A novel texture pattern representation method called the Hierarchical Visual Codebook (HVC) is proposed to encode the texture primitives of iris images. The proposed HVC method is an integration of two existing Bag-of-Words models, namely the Vocabulary Tree (VT) and Locality-constrained Linear Coding (LLC). The HVC adopts a coarse-to-fine visual coding strategy and takes advantage of both VT and LLC for accurate and sparse representation of iris texture. Extensive experimental results demonstrate that the proposed iris image classification method achieves state-of-the-art performance for iris liveness detection, race classification, and coarse-to-fine iris identification. A comprehensive fake iris image database simulating four types of iris spoof attacks has been developed as a benchmark for research on iris liveness detection.

  3. Improving the Computational Performance of Ontology-Based Classification Using Graph Databases

    Directory of Open Access Journals (Sweden)

    Thomas J. Lampoltshammer

    2015-07-01

    Full Text Available The increasing availability of very high-resolution remote sensing imagery (i.e., from satellites, airborne laser scanning, or aerial photography) represents both a blessing and a curse for researchers. The manual classification of these images, or other similar geo-sensor data, is time-consuming and leads to subjective and non-deterministic results. Due to this fact, (semi-)automated classification approaches are in high demand in affected research areas. Ontologies provide a proper way of automated classification for various kinds of sensor data, including remotely sensed data. However, the processing of data entities—so-called individuals—is one of the most cost-intensive computational operations within ontology reasoning. Therefore, an approach based on graph databases is proposed to overcome the issue of high time consumption in the classification task. The introduced approach shifts the classification task from the classical Protégé environment and its common reasoners to the proposed graph-based approaches. For validation, the authors tested the approach on a simulation scenario based on a real-world example. The results demonstrate a quite promising improvement of classification speed—up to 80,000 times faster than the Protégé-based approach.

  4. An Immune Agent for Web-Based AI Course

    Science.gov (United States)

    Gong, Tao; Cai, Zixing

    2006-01-01

    To overcome weakness and faults of a web-based e-learning course such as Artificial Intelligence (AI), an immune agent was proposed, simulating a natural immune mechanism against a virus. The immune agent was built on the multi-dimension education agent model and immune algorithm. The web-based AI course was comprised of many files, such as HTML…

  5. Hot complaint intelligent classification based on text mining

    Directory of Open Access Journals (Sweden)

    XIA Haifeng

    2013-10-01

    Full Text Available The complaint recognizer system plays an important role in ensuring the correct classification of hot complaints and improving the service quality of the telecommunications industry. Customer complaints in the telecommunications industry have a special particularity: they must be handled in limited time, which causes errors in the classification of hot complaints. The paper presents a model of intelligent hot-complaint classification based on text mining, which can classify a hot complaint at the correct level of the complaint navigation. Examples show that the model can classify complaint texts efficiently.

  6. Vehicle Maneuver Detection with Accelerometer-Based Classification

    Directory of Open Access Journals (Sweden)

    Javier Cervantes-Villanueva

    2016-09-01

    Full Text Available In the mobile computing era, smartphones have become instrumental tools to develop innovative mobile context-aware systems. In that sense, their usage in the vehicular domain eases the development of novel and personal transportation solutions. In this frame, the present work introduces an innovative mechanism to perceive the current kinematic state of a vehicle on the basis of the accelerometer data from a smartphone mounted in the vehicle. Unlike previous proposals, the introduced architecture targets the computational limitations of such devices to carry out the detection process following an incremental approach. For its realization, we have evaluated different classification algorithms to act as agents within the architecture. Finally, our approach has been tested with a real-world dataset collected by means of the ad hoc mobile application developed.

  7. A proposed data base system for detection, classification and ...

    African Journals Online (AJOL)

    A proposed data base system for detection, classification and location of fault on electricity company of Ghana electrical distribution system. Isaac Owusu-Nyarko, Mensah-Ananoo Eugine. Abstract. No Abstract. Keywords: database, classification of fault, power, distribution system, SCADA, ECG.

  8. Hydrologic-Process-Based Soil Texture Classifications for Improved Visualization of Landscape Function

    Science.gov (United States)

    Groenendyk, Derek G.; Ferré, Ty P.A.; Thorp, Kelly R.; Rice, Amy K.

    2015-01-01

    Soils lie at the interface between the atmosphere and the subsurface and are a key component that control ecosystem services, food production, and many other processes at the Earth’s surface. There is a long-established convention for identifying and mapping soils by texture. These readily available, georeferenced soil maps and databases are used widely in environmental sciences. Here, we show that these traditional soil classifications can be inappropriate, contributing to bias and uncertainty in applications from slope stability to water resource management. We suggest a new approach to soil classification, with a detailed example from the science of hydrology. Hydrologic simulations based on common meteorological conditions were performed using HYDRUS-1D, spanning textures identified by the United States Department of Agriculture soil texture triangle. We consider these common conditions to be: drainage from saturation, infiltration onto a drained soil, and combined infiltration and drainage events. Using a k-means clustering algorithm, we created soil classifications based on the modeled hydrologic responses of these soils. The hydrologic-process-based classifications were compared to those based on soil texture and a single hydraulic property, Ks. Differences in classifications based on hydrologic response versus soil texture demonstrate that traditional soil texture classification is a poor predictor of hydrologic response. We then developed a QGIS plugin to construct soil maps combining a classification with georeferenced soil data from the Natural Resource Conservation Service. The spatial patterns of hydrologic response were more immediately informative, much simpler, and less ambiguous, for use in applications ranging from trafficability to irrigation management to flood control. The ease with which hydrologic-process-based classifications can be made, along with the improved quantitative predictions of soil responses and visualization of landscape
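    The classification step itself reduces to clustering per-soil response features, roughly as sketched below, where random vectors stand in for the HYDRUS-1D simulation summaries (drainage, infiltration, and combined events) computed for each soil; cluster count and shapes are illustrative.

```python
# k-means clustering over hydrologic-response features, one row per soil.
# Random data stands in for summaries of the HYDRUS-1D simulations.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(7)
responses = rng.random((120, 6))   # stand-in per-soil response summaries

features = StandardScaler().fit_transform(responses)  # comparable scales
classes = KMeans(n_clusters=5, n_init=10).fit_predict(features)
print(np.bincount(classes))        # soils per hydrologic-response class
```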

  9. Quantum Ensemble Classification: A Sampling-Based Learning Control Approach.

    Science.gov (United States)

    Chen, Chunlin; Dong, Daoyi; Qi, Bo; Petersen, Ian R; Rabitz, Herschel

    2017-06-01

    Quantum ensemble classification (QEC) has significant applications in discrimination of atoms (or molecules), separation of isotopes, and quantum information extraction. However, quantum mechanics forbids deterministic discrimination among nonorthogonal states. The classification of inhomogeneous quantum ensembles is very challenging, since there exist variations in the parameters characterizing the members within different classes. In this paper, we recast QEC as a supervised quantum learning problem. A systematic classification methodology is presented by using a sampling-based learning control (SLC) approach for quantum discrimination. The classification task is accomplished via simultaneously steering members belonging to different classes to their corresponding target states (e.g., mutually orthogonal states). First, a new discrimination method is proposed for two similar quantum systems. Then, an SLC method is presented for QEC. Numerical results demonstrate the effectiveness of the proposed approach for the binary classification of two-level quantum ensembles and the multiclass classification of multilevel quantum ensembles.

  10. Evaluating Water Demand Using Agent-Based Modeling

    Science.gov (United States)

    Lowry, T. S.

    2004-12-01

    The supply and demand of water resources are functions of complex, inter-related systems including hydrology, climate, demographics, economics, and policy. To assess the safety and sustainability of water resources, planners often rely on complex numerical models that relate some or all of these systems using mathematical abstractions. The accuracy of these models relies on how well the abstractions capture the true nature of the systems interactions. Typically, these abstractions are based on analyses of observations and/or experiments that account only for the statistical mean behavior of each system. This limits the approach in two important ways: 1) It cannot capture cross-system disruptive events, such as major drought, significant policy change, or terrorist attack, and 2) it cannot resolve sub-system level responses. To overcome these limitations, we are developing an agent-based water resources model that includes the systems of hydrology, climate, demographics, economics, and policy, to examine water demand during normal and extraordinary conditions. Agent-based modeling (ABM) develops functional relationships between systems by modeling the interaction between individuals (agents), who behave according to a probabilistic set of rules. ABM is a "bottom-up" modeling approach in that it defines macro-system behavior by modeling the micro-behavior of individual agents. While each agent's behavior is often simple and predictable, the aggregate behavior of all agents in each system can be complex, unpredictable, and different than behaviors observed in mean-behavior models. Furthermore, the ABM approach creates a virtual laboratory where the effects of policy changes and/or extraordinary events can be simulated. Our model, which is based on the demographics and hydrology of the Middle Rio Grande Basin in the state of New Mexico, includes agent groups of residential, agricultural, and industrial users. Each agent within each group determines its water usage

  11. A technology path to tactical agent-based modeling

    Science.gov (United States)

    James, Alex; Hanratty, Timothy P.

    2017-05-01

    Wargaming is a process of thinking through and visualizing events that could occur during a possible course of action. Over the past 200 years, wargaming has matured into a set of formalized processes. One area of growing interest is the application of agent-based modeling. Agent-based modeling and its supporting technologies have the potential to introduce a third-generation wargaming capability to the Army, creating a positive overmatch decision-making capability. In its simplest form, agent-based modeling is a computational technique that helps the modeler understand and simulate how the "whole of a system" responds to change over time. It provides a decentralized method of looking at situations where individual agents are instantiated within an environment, interact with each other, and are empowered to make their own decisions. However, this technology is not without its own risks and limitations. This paper explores a technology roadmap, identifying research topics that could realize agent-based modeling within a tactical wargaming context.

  12. Granular loess classification based

    International Nuclear Information System (INIS)

    Browzin, B.S.

    1985-01-01

    This paper discusses how loess might be identified by two index properties: the granulometric composition and the dry unit weight. These two indices are necessary but not always sufficient for identification of loess. On the basis of analyses of samples from three continents, it was concluded that the 0.01-0.5-mm fraction deserves the name loessial fraction. Based on the loessial fraction concept, a granulometric classification of loess is proposed. A triangular chart is used to classify loess

  13. A hybrid agent-based approach for modeling microbiological systems.

    Science.gov (United States)

    Guo, Zaiyi; Sloot, Peter M A; Tay, Joc Cing

    2008-11-21

    Models for systems biology commonly adopt Differential Equations or Agent-Based modeling approaches for simulating the processes as a whole. Models based on differential equations presuppose phenomenological intracellular behavioral mechanisms, while models based on the Multi-Agent approach often use directly translated, quantitatively less precise if-then rule constructs. We propose an extendible systems model based on a hybrid agent-based approach where biological cells are modeled as individuals (agents) while molecules are represented by quantities. This hybridization in entity representation entails a combined modeling strategy with agent-based behavioral rules and differential equations, thereby balancing the requirements of extendible model granularity with computational tractability. We demonstrate the efficacy of this approach with models of chemotaxis involving an assay of 10^3 cells and 1.2x10^6 molecules. The model produces cell migration patterns that are comparable to laboratory observations.
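
    A minimal sketch of the hybrid idea, under assumptions: a hypothetical 1-D setting where cells are discrete agents following an if-then movement rule, while the attractant is a continuous quantity advanced by a discretized diffusion-decay equation. All parameters and rules here are illustrative, not the authors' model.

      import numpy as np

      # Molecules as a quantity: 1-D attractant field, forward-Euler update of
      # du/dt = D * d2u/dx2 - decay * u, with a constant point source.
      L, dt, D, decay = 100, 0.1, 1.0, 0.01
      field = np.zeros(L)
      field[L // 2] = 100.0

      # Cells as agents: each holds a position and a simple behavioral rule.
      rng = np.random.default_rng(0)
      cells = rng.integers(0, L, size=50)

      for _ in range(200):
          lap = np.roll(field, 1) - 2 * field + np.roll(field, -1)
          field += dt * (D * lap - decay * field)
          field[L // 2] = 100.0                 # source kept constant

          # Agent rule: move toward the higher-concentration neighbor.
          left = field[(cells - 1) % L]
          right = field[(cells + 1) % L]
          cells = (cells + np.where(right > left, 1, -1)) % L

      print("mean distance to source:", np.abs(cells - L // 2).mean())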

  14. Failure diagnosis using deep belief learning based health state classification

    International Nuclear Information System (INIS)

    Tamilselvan, Prasanna; Wang, Pingfeng

    2013-01-01

    Effective health diagnosis provides multifarious benefits such as improved safety, improved reliability and reduced costs for operation and maintenance of complex engineered systems. This paper presents a novel multi-sensor health diagnosis method using a deep belief network (DBN). The DBN has recently become a popular approach in machine learning for its promised advantages such as fast inference and the ability to encode richer and higher-order network structures. The DBN employs a hierarchical structure with multiple stacked restricted Boltzmann machines and works through a layer-by-layer successive learning process. The proposed multi-sensor health diagnosis methodology using DBN-based state classification can be structured in three consecutive stages: first, defining health states and preprocessing sensory data for DBN training and testing; second, developing DBN-based classification models for diagnosis of predefined health states; third, validating DBN classification models with a testing sensory dataset. Health diagnosis using the DBN-based health state classification technique is compared with four existing diagnosis techniques. Benchmark classification problems and two engineering health diagnosis applications (aircraft engine health diagnosis and electric power transformer health diagnosis) are employed to demonstrate the efficacy of the proposed approach.
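
    The three-stage scheme can be approximated with off-the-shelf tools. The sketch below stacks two restricted Boltzmann machines and a logistic-regression output layer in scikit-learn; the digits data merely stands in for sensory health-state data, and all hyperparameters are assumptions rather than the paper's settings.

      from sklearn.datasets import load_digits
      from sklearn.neural_network import BernoulliRBM
      from sklearn.linear_model import LogisticRegression
      from sklearn.pipeline import Pipeline
      from sklearn.model_selection import train_test_split

      # Stage 1: define states / preprocess; BernoulliRBM expects values in [0, 1].
      X, y = load_digits(return_X_y=True)
      X = X / 16.0
      X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

      # Stage 2: layer-by-layer structure, two stacked RBMs then a classifier.
      model = Pipeline([
          ("rbm1", BernoulliRBM(n_components=128, learning_rate=0.05, n_iter=15, random_state=0)),
          ("rbm2", BernoulliRBM(n_components=64, learning_rate=0.05, n_iter=15, random_state=0)),
          ("clf", LogisticRegression(max_iter=1000)),
      ])
      model.fit(X_tr, y_tr)

      # Stage 3: validate on held-out data.
      print("test accuracy:", model.score(X_te, y_te))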

  15. Towards Agent-Based Model Specification in Smart Grid: A Cognitive Agent-based Computing Approach

    OpenAIRE

    Akram, Waseem; Niazi, Muaz A.; Iantovics, Laszlo Barna

    2017-01-01

    A smart grid can be considered as a complex network where each node represents a generation unit or a consumer, whereas links represent transmission lines. One way to study complex systems is to use the agent-based modeling (ABM) paradigm. An ABM is a way of representing a complex system of autonomous agents interacting with each other. Previously, a number of studies have been presented in the smart grid domain making use of the ABM paradigm. However, to the best of our know...

  16. Multi-Agent Pathfinding with n Agents on Graphs with n Vertices

    DEFF Research Database (Denmark)

    Förster, Klaus-Tycho; Groner, Linus; Hoefler, Torsten

    2017-01-01

    We investigate the multi-agent pathfinding (MAPF) problem with n agents on graphs with n vertices: each agent has a unique start and goal vertex, with the objective of moving all agents in parallel movements to their goals such that each vertex and each edge may only be used by one agent at a time. We give a combinatorial classification of all graphs where this problem is solvable in general, including cases where the solvability depends on the initial agent placement. Furthermore, we present an algorithm solving the MAPF problem in our setting, requiring O(n²) rounds, or O(n³) moves of individual agents. Complementing these results, we show that there are graphs where Ω(n²) rounds and Ω(n³) moves are required for any algorithm.

  17. Classification of Noisy Data: An Approach Based on Genetic Algorithms and Voronoi Tessellation

    DEFF Research Database (Denmark)

    Khan, Abdul Rauf; Schiøler, Henrik; Knudsen, Torben

    Classification is one of the major constituents of the data-mining toolkit. The well-known methods for classification are built on either the principle of logic or statistical/mathematical reasoning. In this article we propose: (1) a different strategy, which is based on the po...

  18. Novel insights in agent-based complex automated negotiation

    CERN Document Server

    Lopez-Carmona, Miguel; Ito, Takayuki; Zhang, Minjie; Bai, Quan; Fujita, Katsuhide

    2014-01-01

    This book focuses on all aspects of complex automated negotiations, which are studied in the field of autonomous agents and multi-agent systems. It consists of two parts: Part I, Agent-Based Complex Automated Negotiations, and Part II, Automated Negotiation Agents Competition. The chapters in Part I are extended versions of papers presented at the 2012 international workshop on Agent-Based Complex Automated Negotiation (ACAN), after peer review by three Program Committee members. Part II examines in detail ANAC 2012 (The Third Automated Negotiating Agents Competition), in which automated agents that have different negotiation strategies and are implemented by different developers negotiate automatically in several negotiation domains. ANAC is an international competition in which automated negotiation strategies, submitted by a number of universities and research institutes across the world, are evaluated in tournament style. The purpose of the competition is to steer the research in the area of bilate...

  19. Color Independent Components Based SIFT Descriptors for Object/Scene Classification

    Science.gov (United States)

    Ai, Dan-Ni; Han, Xian-Hua; Ruan, Xiang; Chen, Yen-Wei

    In this paper, we present a novel color independent components based SIFT descriptor (termed CIC-SIFT) for object/scene classification. We first learn an efficient color transformation matrix based on independent component analysis (ICA), which is adaptive to each category in a database. The ICA-based color transformation can enhance contrast between the objects and the background in an image. Then we compute CIC-SIFT descriptors over all three transformed color independent components. Since the ICA-based color transformation can boost the objects and suppress the background, the proposed CIC-SIFT can extract more effective and discriminative local features for object/scene classification. The comparison is performed among seven SIFT descriptors, and the experimental classification results show that our proposed CIC-SIFT is superior to other conventional SIFT descriptors.

  20. Object-Based Classification as an Alternative Approach to the Traditional Pixel-Based Classification to Identify Potential Habitat of the Grasshopper Sparrow

    Science.gov (United States)

    Jobin, Benoît; Labrecque, Sandra; Grenier, Marcelle; Falardeau, Gilles

    2008-01-01

    The traditional method of identifying wildlife habitat distribution over large regions consists of pixel-based classification of satellite images into a suite of habitat classes used to select suitable habitat patches. Object-based classification is a newer method that can achieve the same objective based on the segmentation of spectral bands of the image, creating homogeneous polygons with regard to spatial or spectral characteristics. The segmentation algorithm does not rely solely on the single pixel value, but also on shape, texture, and pixel spatial continuity. Object-based classification is a knowledge-based process where an interpretation key is developed using ground control points, and objects are assigned to specific classes according to threshold values of determined spectral and/or spatial attributes. We developed a model using the eCognition software to identify suitable habitats for the Grasshopper Sparrow, a rare and declining species found in southwestern Québec. The model was developed in a region with known breeding sites and applied on other images covering adjacent regions where potential breeding habitats may be present. We were successful in locating potential habitats in areas where dairy farming prevailed but failed in an adjacent region covered by a distinct Landsat scene and dominated by annual crops. We discuss the added value of this method, such as the possibility of using the contextual information associated with objects and the ability to eliminate unsuitable areas in the segmentation and land cover classification processes, as well as technical and logistical constraints. A series of recommendations on the use of this method and on conservation issues of Grasshopper Sparrow habitat is also provided.

  1. Comparison of hand-craft feature based SVM and CNN based deep learning framework for automatic polyp classification.

    Science.gov (United States)

    Younghak Shin; Balasingham, Ilangko

    2017-07-01

    Colonoscopy is a standard method for screening polyps by highly trained physicians. Polyps missed during colonoscopy are a potential risk factor for colorectal cancer. In this study, we investigate an automatic polyp classification framework. We aim to compare two different approaches: a hand-crafted feature method and a convolutional neural network (CNN) based deep learning method. Combined shape and color features are used for hand-crafted feature extraction, and a support vector machine (SVM) is adopted for classification. For the CNN approach, a deep learning framework with three convolution and pooling layers is used for classification. The proposed framework is evaluated using three public polyp databases. The experimental results show that the CNN-based deep learning framework achieves better classification performance than the hand-crafted feature based method, reaching over 90% classification accuracy, sensitivity, specificity and precision.

  2. Accurate crop classification using hierarchical genetic fuzzy rule-based systems

    Science.gov (United States)

    Topaloglou, Charalampos A.; Mylonas, Stelios K.; Stavrakoudis, Dimitris G.; Mastorocostas, Paris A.; Theocharis, John B.

    2014-10-01

    This paper investigates the effectiveness of an advanced classification system for accurate crop classification using very high resolution (VHR) satellite imagery. Specifically, a recently proposed genetic fuzzy rule-based classification system (GFRBCS) is employed, namely, the Hierarchical Rule-based Linguistic Classifier (HiRLiC). HiRLiC's model comprises a small set of simple IF-THEN fuzzy rules, easily interpretable by humans. One of its most important attributes is that its learning algorithm requires minimal user interaction, since the most important learning parameters affecting the classification accuracy are determined by the learning algorithm automatically. HiRLiC is applied in a challenging crop classification task, using a SPOT5 satellite image over an intensively cultivated area in a lake-wetland ecosystem in northern Greece. A rich set of higher-order spectral and textural features is derived from the initial bands of the (pan-sharpened) image, resulting in an input space comprising 119 features. The experimental analysis proves that HiRLiC compares favorably to other interpretable classifiers of the literature, both in terms of structural complexity and classification accuracy. Its testing accuracy was very close to that obtained by complex state-of-the-art classification systems, such as the support vector machine (SVM) and random forest (RF) classifiers. Nevertheless, visual inspection of the derived classification maps shows that HiRLiC is characterized by higher generalization properties, providing more homogeneous classifications than the competitors. Moreover, the runtime requirements for producing the thematic map were orders of magnitude lower than those of the competitors.

  3. Hierarchical structure for audio-video based semantic classification of sports video sequences

    Science.gov (United States)

    Kolekar, M. H.; Sengupta, S.

    2005-07-01

    A hierarchical structure for sports event classification based on audio and video content analysis is proposed in this paper. Compared to the event classifications in other games, those of cricket are very challenging and yet unexplored. We have successfully solved cricket video classification problem using a six level hierarchical structure. The first level performs event detection based on audio energy and Zero Crossing Rate (ZCR) of short-time audio signal. In the subsequent levels, we classify the events based on video features using a Hidden Markov Model implemented through Dynamic Programming (HMM-DP) using color or motion as a likelihood function. For some of the game-specific decisions, a rule-based classification is also performed. Our proposed hierarchical structure can easily be applied to any other sports. Our results are very promising and we have moved a step forward towards addressing semantic classification problems in general.

  4. Agent-based Simulation of the Maritime Domain

    Directory of Open Access Journals (Sweden)

    O. Vaněk

    2010-01-01

    In this paper, a multi-agent simulation platform is introduced that focuses on legitimate and illegitimate aspects of maritime traffic, mainly on intercontinental transport through piracy-afflicted areas. The extensible architecture presented here comprises several modules controlling the simulation and the life-cycle of the agents, analyzing the simulation output and visualizing the entire simulated domain. The simulation control module is initialized by various configuration scenarios to simulate various real-world situations, such as a pirate ambush, coordinated transit through a transport corridor, or coastal fishing and local traffic. The environmental model provides a rich set of inputs for agents that use the geo-spatial data and the vessel operational characteristics for their reasoning. The agent behavior model, based on finite state machines together with planning algorithms, allows complex expression of agent behavior, so the resulting simulation output can serve as a substitute for real-world data from the maritime domain.
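
    A finite-state-machine behavior model of the kind described can be sketched in a few lines; the states, transition rules, and the VesselAgent class below are hypothetical stand-ins for the platform's richer behavior model.

      from enum import Enum, auto

      class State(Enum):
          TRANSIT = auto()
          EVADE = auto()
          ARRIVED = auto()

      class VesselAgent:
          """Finite-state-machine behavior for a transport vessel (illustrative)."""
          def __init__(self, route):
              self.route = list(route)          # waypoints as (x, y) tuples
              self.state = State.TRANSIT

          def step(self, pirate_nearby: bool):
              if self.state is State.TRANSIT:
                  if pirate_nearby:
                      self.state = State.EVADE      # interrupt the planned route
                  elif not self.route:
                      self.state = State.ARRIVED
                  else:
                      self.route.pop(0)             # advance to the next waypoint
              elif self.state is State.EVADE:
                  if not pirate_nearby:
                      self.state = State.TRANSIT    # resume the planned route

      vessel = VesselAgent(route=[(0, 0), (10, 5), (20, 8)])
      for danger in (False, True, True, False, False, False):
          vessel.step(pirate_nearby=danger)
          print(vessel.state.name)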

  5. Agent-Based Data Integration Framework

    Directory of Open Access Journals (Sweden)

    Łukasz Faber

    2014-01-01

    Combining data from diverse, heterogeneous sources while facilitating unified access to it is an important (albeit difficult) task. There are various possibilities for performing it. In this publication, we propose and describe an agent-based framework dedicated to acquiring and processing distributed, heterogeneous data collected from diverse sources (e.g., the Internet, external software, relational, and document databases). Using this multi-agent-based approach in the aspects of the general architecture (the organization and management of the framework), we create a proof-of-concept implementation. The approach is presented using a sample scenario in which the system is used to search for personal and professional profiles of scientists.

  6. SQL based cardiovascular ultrasound image classification.

    Science.gov (United States)

    Nandagopalan, S; Suryanarayana, Adiga B; Sudarshan, T S B; Chandrashekar, Dhanalakshmi; Manjunath, C N

    2013-01-01

    This paper proposes a novel method to analyze and classify cardiovascular ultrasound echocardiographic images using a Naïve-Bayesian model via database OLAP-SQL. Efficient data mining algorithms based on a tightly-coupled model are used to extract features. Three algorithms are proposed for classification, namely Naïve-Bayesian Classifier for Discrete variables (NBCD) with SQL, NBCD with OLAP-SQL, and Naïve-Bayesian Classifier for Continuous variables (NBCC) using OLAP-SQL. The proposed model is trained with 207 patient images containing normal and abnormal categories. Of the three proposed algorithms, the highest classification accuracy, 96.59%, was achieved with NBCC, which is better than that of earlier methods.

  7. Design and implementation based on the classification protection vulnerability scanning system

    International Nuclear Information System (INIS)

    Wang Chao; Lu Zhigang; Liu Baoxu

    2010-01-01

    With the application and spread of classification protection, network security vulnerability scanning should consider both efficiency and function expansion. This paper proposes a view of system vulnerabilities from the perspective of classification protection, and elaborates the design and implementation of a vulnerability scanning system based on vulnerability-classification plug-in technology and oriented to classification protection. According to the experiments, the system shows good adaptability and scalability with respect to the application of classification protection, and the efficiency of scanning is also confirmed. (authors)

  8. New approaches in agent-based modeling of complex financial systems

    Science.gov (United States)

    Chen, Ting-Ting; Zheng, Bo; Li, Yan; Jiang, Xiong-Fei

    2017-12-01

    Agent-based modeling is a powerful simulation technique to understand the collective behavior and microscopic interaction in complex financial systems. Recently, the idea of determining the key parameters of agent-based models from empirical data, instead of setting them artificially, was suggested. We first review several agent-based models and the new approaches to determine the key model parameters from historical market data. Based on the agents' behaviors with heterogeneous personal preferences and interactions, these models are successful in explaining the microscopic origination of the temporal and spatial correlations of financial markets. We then present a novel paradigm combining big-data analysis with agent-based modeling. Specifically, from internet query and stock market data, we extract the information driving forces and develop an agent-based model to simulate the dynamic behaviors of complex financial systems.

  9. Analysis of composition-based metagenomic classification.

    Science.gov (United States)

    Higashi, Susan; Barreto, André da Motta Salles; Cantão, Maurício Egidio; de Vasconcelos, Ana Tereza Ribeiro

    2012-01-01

    An essential step of a metagenomic study is the taxonomic classification, that is, the identification of the taxonomic lineage of the organisms in a given sample. The taxonomic classification process involves a series of decisions. Currently, in the context of metagenomics, such decisions are usually based on empirical studies that consider one specific type of classifier. In this study we propose a general framework for analyzing the impact that several decisions can have on the classification problem. Instead of focusing on any specific classifier, we define a generic score function that provides a measure of the difficulty of the classification task. Using this framework, we analyze the impact of the following parameters on the taxonomic classification problem: (i) the length of the n-mers used to encode the metagenomic sequences, (ii) the similarity measure used to compare sequences, and (iii) the type of taxonomic classification, which can be conventional or hierarchical, depending on whether the classification process occurs in a single shot or in several steps according to the taxonomic tree. We defined a score function that measures the degree of separability of the taxonomic classes under a given configuration induced by the parameters above. We conducted an extensive computational experiment and found that reasonable values for the parameters of interest could be (i) intermediate values of n, the length of the n-mers; (ii) any similarity measure, because all of them resulted in similar scores; and (iii) the hierarchical strategy, which performed better in all of the cases. As expected, short n-mers generate lower configuration scores because they give rise to frequency vectors that represent distinct sequences in a similar way. On the other hand, large values of n result in sparse frequency vectors that represent similar metagenomic fragments differently, also leading to low configuration scores. Regarding the similarity measure, in
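
    For concreteness, a small sketch of the n-mer encoding step under discussion: each fragment becomes a normalized frequency vector of length 4^n, which any similarity measure can then compare. The helper names and the cosine choice are illustrative assumptions.

      from itertools import product
      import math

      def nmer_frequency_vector(sequence, n):
          """Encode a DNA fragment as a normalized n-mer frequency vector (length 4^n)."""
          mers = ["".join(p) for p in product("ACGT", repeat=n)]
          counts = {m: 0 for m in mers}
          for i in range(len(sequence) - n + 1):
              window = sequence[i:i + n]
              if window in counts:
                  counts[window] += 1
          total = max(1, len(sequence) - n + 1)
          return [counts[m] / total for m in mers]

      def cosine(u, v):
          dot = sum(a * b for a, b in zip(u, v))
          norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
          return dot / norm if norm else 0.0

      a = nmer_frequency_vector("ACGTACGTGGCCAT", n=3)
      b = nmer_frequency_vector("ACGTACGAGGCCTT", n=3)
      print("similarity:", round(cosine(a, b), 3))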

  10. Group-Based Active Learning of Classification Models.

    Science.gov (United States)

    Luo, Zhipeng; Hauskrecht, Milos

    2017-05-01

    Learning of classification models from real-world data often requires additional human expert effort to annotate the data. However, this process can be rather costly and finding ways of reducing the human annotation effort is critical for this task. The objective of this paper is to develop and study new ways of providing human feedback for efficient learning of classification models by labeling groups of examples. Briefly, unlike traditional active learning methods that seek feedback on individual examples, we develop a new group-based active learning framework that solicits label information on groups of multiple examples. In order to describe groups in a user-friendly way, conjunctive patterns are used to compactly represent groups. Our empirical study on 12 UCI data sets demonstrates the advantages and superiority of our approach over both classic instance-based active learning work, as well as existing group-based active-learning methods.

  11. An Interactive Tool for Creating Multi-Agent Systems and Interactive Agent-based Games

    DEFF Research Database (Denmark)

    Lund, Henrik Hautop; Pagliarini, Luigi

    2011-01-01

    Utilizing principles from parallel and distributed processing combined with inspiration from modular robotics, we developed the modular interactive tiles. As an educational tool, the modular interactive tiles facilitate the learning of multi-agent systems and interactive agent-based games...

  12. Validation of Agent Based Distillation Movement Algorithms

    National Research Council Canada - National Science Library

    Gill, Andrew

    2003-01-01

    Agent based distillations (ABD) are low-resolution abstract models, which can be used to explore questions associated with land combat operations in a short period of time. Movement of agents within the EINSTein and MANA ABDs...

  13. Data Clustering and Evolving Fuzzy Decision Tree for Data Base Classification Problems

    Science.gov (United States)

    Chang, Pei-Chann; Fan, Chin-Yuan; Wang, Yen-Wen

    Database classification suffers from two well-known difficulties, i.e., the high dimensionality and non-stationary variations within the large historic data. This paper presents a hybrid classification model by integrating a case-based reasoning technique, a Fuzzy Decision Tree (FDT), and Genetic Algorithms (GA) to construct a decision-making system for data classification in various database applications. The model is mainly based on the idea that the historic database can be transformed into a smaller case base together with a group of fuzzy decision rules. As a result, the model can respond more accurately to the data under classification, using the inductions from these smaller case-based fuzzy decision trees. Hit rate is applied as a performance measure, and the effectiveness of our proposed model is demonstrated by experimental comparison with other approaches on different database classification applications. The average hit rate of our proposed model is the highest among the compared methods.

  14. A new gammagraphic and functional-based classification for hyperthyroidism

    International Nuclear Information System (INIS)

    Sanchez, J.; Lamata, F.; Cerdan, R.; Agilella, V.; Gastaminza, R.; Abusada, R.; Gonzales, M.; Martinez, M.

    2000-01-01

    The absence of a universal classification for hyperthyroidism (HT) gives rise to inadequate interpretation of series and trials, and hinders decision making. We offer a tentative classification based on gammagraphic and functional findings. Clinical records from patients who underwent thyroidectomy in our Department from 1967 to 1997 were reviewed. Those with functional measurements of hyperthyroidism were considered. All were managed according to the same preestablished guidelines. HT was the surgical indication in 694 (27.1%) of the 2559 thyroidectomies. Based on gammagraphic studies, we classified HTs into: parenchymatous increased-uptake, which can be diffuse, diffuse with cold nodules, or diffuse with at least one nodule; and nodular increased-uptake (Autonomous Functioning Thyroid Nodules, AFTN), divided into solitary AFTN (toxic adenoma) and multiple AFTN (toxic multinodular goiter). This gammagraphic-based classification is useful and has high sensitivity to detect these nodules while assessing their activity, allowing therapeutic decision making and, in some cases, the choice of surgical technique. (authors)

  15. An Active Learning Exercise for Introducing Agent-Based Modeling

    Science.gov (United States)

    Pinder, Jonathan P.

    2013-01-01

    Recent developments in agent-based modeling as a method of systems analysis and optimization indicate that students in business analytics need an introduction to the terminology, concepts, and framework of agent-based modeling. This article presents an active learning exercise for MBA students in business analytics that demonstrates agent-based…

  16. Consentaneous agent-based and stochastic model of the financial markets.

    Science.gov (United States)

    Gontis, Vygintas; Kononovicius, Aleksejus

    2014-01-01

    We look for an agent-based treatment of the financial markets, considering the necessity to build bridges between microscopic, agent-based, and macroscopic, phenomenological modeling. The acknowledgment that the agent-based modeling framework, which may provide qualitative and quantitative understanding of the financial markets, is very ambiguous emphasizes the exceptional value of well-defined, analytically tractable agent systems. Herding, one of the behavioral peculiarities considered in behavioral finance, is the main property of the agent interactions we deal with in this contribution. Looking for a consentaneous agent-based and macroscopic approach, we combine two origins of noise: an exogenous one, related to the information flow, and an endogenous one, arising from the complex stochastic dynamics of agents. As a result, we propose a three-state agent-based herding model of the financial markets. From this agent-based model we derive a set of stochastic differential equations, which describes the underlying macroscopic dynamics of the agent population and the log price in the financial markets. The obtained solution is then subjected to the exogenous noise, which shapes instantaneous return fluctuations. We test both Gaussian and q-Gaussian noise as sources of the short-term fluctuations. The resulting model of returns in the financial markets, with a single set of parameters, reproduces the empirical probability and spectral densities of absolute returns observed on the New York, Warsaw and NASDAQ OMX Vilnius Stock Exchanges. Our result confirms the prevalent idea in behavioral finance that herding interactions may dominate over agent rationality and contribute towards bubble formation.
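
    A loose, Kirman-style sketch of a three-state herding market is given below; the switching rates, price-impact rule, and noise scales are assumptions for illustration, not the calibrated model of the paper.

      import numpy as np

      rng = np.random.default_rng(1)
      N, steps = 1000, 20000
      eps, h = 0.01, 0.002      # idiosyncratic vs. herding switching rates (assumed)

      # Three agent states: 0 fundamentalist, 1 optimistic chartist, 2 pessimistic chartist.
      state = rng.integers(0, 3, size=N)
      returns = []

      for _ in range(steps):
          i = rng.integers(N)
          counts = np.bincount(state, minlength=3).astype(float)
          # Herding: the probability of adopting a state grows with its population.
          probs = eps + h * counts
          probs[state[i]] = 0.0             # consider only actual switches
          probs /= probs.sum()
          state[i] = rng.choice(3, p=probs)

          # Return driven by chartist imbalance plus exogenous Gaussian noise.
          imbalance = (counts[1] - counts[2]) / N
          returns.append(0.05 * imbalance + 0.01 * rng.normal())

      returns = np.asarray(returns)
      print("std:", returns.std(),
            "excess kurtosis:", np.mean(returns**4) / np.mean(returns**2)**2 - 3.0)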

  17. Modelling of robotic work cells using agent based-approach

    Science.gov (United States)

    Sękala, A.; Banaś, W.; Gwiazda, A.; Monica, Z.; Kost, G.; Hryniewicz, P.

    2016-08-01

    In modern manufacturing systems, the requirements, both in the scope and in the characteristics of technical procedures, change dynamically. This can leave the organization of a production system unable to keep up with changes in market demand. Accordingly, there is a need for new design methods characterized, on the one hand, by high efficiency and, on the other, by an adequate level of generated organizational solutions. One of the tools that could be used for this purpose is the concept of agent systems. These systems are tools of artificial intelligence. They allow assigning to agents the proper domains of procedures and knowledge, so that the agents represent, in a self-organizing agent environment, the components of a real system. An agent-based system for modelling a robotic work cell should be designed taking into consideration the many constraints associated with the characteristics of this production unit. It is possible to distinguish groups of structural components that constitute such a system. This confirms the structural complexity of a work cell as a specific production system, so it is necessary to develop agents depicting various aspects of the work cell structure. The main groups of agents used to model a robotic work cell should at least include the following representatives: machine tool agents, auxiliary equipment agents, robot agents, transport equipment agents, organizational agents, as well as data and knowledge base agents. In this way it is possible to create the holarchy of the agent-based system.

  18. Performance Evaluation of Frequency Transform Based Block Classification of Compound Image Segmentation Techniques

    Science.gov (United States)

    Selwyn, Ebenezer Juliet; Florinabel, D. Jemi

    2018-04-01

    Compound image segmentation plays a vital role in the compression of computer screen images. Computer screen images are images mixed with textual, graphical, or pictorial content. In this paper, we present a comparison of two transform-based block classification techniques for compound images, based on metrics such as classification speed, precision, and recall rate. Block-based classification approaches normally divide the compound images into fixed-size, non-overlapping blocks. Then a frequency transform such as the Discrete Cosine Transform (DCT) or the Discrete Wavelet Transform (DWT) is applied over each block. Mean and standard deviation are computed for each 8 × 8 block and are used as a feature set to classify the compound images into text/graphics and picture/background blocks. The classification accuracy of block-classification-based segmentation techniques is measured by evaluation metrics such as precision and recall rate. Compound images with smooth backgrounds and complex-background images containing text of varying size, colour and orientation are considered for testing. Experimental evidence shows that DWT-based segmentation provides an improvement in recall rate and precision rate of approximately 2.3% over DCT-based segmentation, at the cost of an increase in block classification time, for both smooth and complex background images.
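
    The block pipeline can be illustrated with the DCT branch: split the image into 8 x 8 blocks, take the transform, and threshold simple statistics of the coefficients. The variance threshold and the synthetic test image are assumptions; a real system would tune or learn the rule.

      import numpy as np
      from scipy.fft import dctn

      def classify_blocks(image, block=8, std_thresh=12.0):
          """Label each non-overlapping block from the statistics of its DCT
          coefficients: a wide coefficient spread suggests text/graphics."""
          labels = {}
          h, w = image.shape
          for y in range(0, h - block + 1, block):
              for x in range(0, w - block + 1, block):
                  coeffs = dctn(image[y:y + block, x:x + block], norm="ortho")
                  # Assumed rule: sharp text edges spread energy across coefficients.
                  labels[(y, x)] = ("text/graphics" if coeffs.std() > std_thresh
                                    else "picture/background")
          return labels

      rng = np.random.default_rng(0)
      img = np.full((16, 16), 128.0)                  # smooth background half
      img[:, 8:] = rng.integers(0, 255, (16, 8))      # busy, text-like half
      print(classify_blocks(img))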

  19. Dissemination of Cultural Norms and Values: Agent-Based Modeling

    Directory of Open Access Journals (Sweden)

    Denis Andreevich Degterev

    2016-12-01

    This article shows how agent-based modeling allows us to explore the mechanisms of the dissemination of cultural norms and values, both within one country and across the world. In recent years, this type of simulation has become particularly prevalent in the analysis of international relations, more popular than system dynamics and discrete event simulation. The use of agent-based modeling in the analysis of international relations is connected with the agent-structure problem in international relations: structure and agents act as interdependent and dynamically changing entities in the process of their interaction. Agent-structure interaction can be modeled by means of the theory of complex adaptive systems with the use of agent-based modeling techniques. One of the first examples of the use of agent-based modeling in political science is T. Schelling's model of racial segregation. On the basis of this model, the author shows how changes in behavioral patterns at the micro level impact the macro level. Patterns change due to the dynamics of cultural norms and values formed by mass media and other social institutions. The author shows the main areas of modern application of agent-based modeling in international studies, including the analysis of ethnic conflicts and the formation of international coalitions. Particular attention is paid to Robert Axelrod's approach, based on the use of genetic algorithms, to the spread of cultural norms and values. Agent-based modeling shows how to create conditions under which norms that originally are not shared by a significant part of the population eventually spread everywhere. The practical application of these algorithms is shown by the author on the example of the situation in Ukraine in 2015-2016. The article also reveals the mechanisms of the international spread of cultural norms and values. The main think-tanks using agent-based modeling in international studies are
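
    A compact sketch of the Schelling-type dynamics mentioned above, assuming a toroidal grid, two agent types, and an illustrative happiness threshold; it shows how a mild micro-level preference produces macro-level segregation.

      import random

      # Minimal Schelling-style grid: an agent is unhappy when fewer than
      # THRESHOLD of its occupied neighbors share its type, and then relocates.
      SIZE, P_EMPTY, THRESHOLD = 20, 0.2, 0.4
      random.seed(0)
      grid = [[None if random.random() < P_EMPTY else random.choice("AB")
               for _ in range(SIZE)] for _ in range(SIZE)]

      def unhappy(r, c):
          me = grid[r][c]
          nbrs = [grid[(r + dr) % SIZE][(c + dc) % SIZE]
                  for dr in (-1, 0, 1) for dc in (-1, 0, 1) if (dr, dc) != (0, 0)]
          occupied = [n for n in nbrs if n is not None]
          return bool(occupied) and occupied.count(me) / len(occupied) < THRESHOLD

      movers = []
      for sweep in range(200):
          movers = [(r, c) for r in range(SIZE) for c in range(SIZE)
                    if grid[r][c] is not None and unhappy(r, c)]
          if not movers:
              break
          for r, c in movers:
              empties = [(i, j) for i in range(SIZE) for j in range(SIZE)
                         if grid[i][j] is None]
              ni, nj = random.choice(empties)
              grid[ni][nj], grid[r][c] = grid[r][c], None

      print(f"stopped after {sweep + 1} sweeps; unhappy agents left: {len(movers)}")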

  20. Agent-based services for B2B electronic commerce

    Science.gov (United States)

    Fong, Elizabeth; Ivezic, Nenad; Rhodes, Tom; Peng, Yun

    2000-12-01

    The potential of agent-based systems has not been realized yet, in part, because of the lack of understanding of how the agent technology supports industrial needs and emerging standards. The area of business-to-business electronic commerce (b2b e-commerce) is one of the most rapidly developing sectors of industry with huge impact on manufacturing practices. In this paper, we investigate the current state of agent technology and the feasibility of applying agent-based computing to b2b e-commerce in the circuit board manufacturing sector. We identify critical tasks and opportunities in the b2b e-commerce area where agent-based services can best be deployed. We describe an implemented agent-based prototype system to facilitate the bidding process for printed circuit board manufacturing and assembly. These activities are taking place within the Internet Commerce for Manufacturing (ICM) project, the NIST- sponsored project working with industry to create an environment where small manufacturers of mechanical and electronic components may participate competitively in virtual enterprises that manufacture printed circuit assemblies.

  1. Internet-enabled collaborative agent-based supply chains

    Science.gov (United States)

    Shen, Weiming; Kremer, Rob; Norrie, Douglas H.

    2000-12-01

    This paper presents some results of our recent research work related to the development of a new Collaborative Agent System Architecture (CASA) and an Infrastructure for Collaborative Agent Systems (ICAS). Initially proposed as a general architecture for Internet-based collaborative agent systems (particularly complex industrial collaborative agent systems), the architecture is very suitable for managing the Internet-enabled complex supply chain of a large manufacturing enterprise. The general collaborative agent system architecture, with its basic communication and cooperation services, domain-independent components, prototypes and mechanisms, is described. Benefits of implementing Internet-enabled supply chains with the proposed infrastructure are discussed. A case study on Internet-enabled supply chain management is presented.

  2. Agent-Based Models in Social Physics

    Science.gov (United States)

    Quang, Le Anh; Jung, Nam; Cho, Eun Sung; Choi, Jae Han; Lee, Jae Woo

    2018-06-01

    We review agent-based models (ABM) in social physics, including econophysics. An ABM consists of agents, a system space, and an external environment. An agent is autonomous and decides its behavior by interacting with its neighbors or the external environment according to rules of behavior. Agents are only boundedly rational because they have limited information when they make decisions; they adapt by learning from past memories. Agents have various attributes and are heterogeneous. An ABM is a non-equilibrium complex system that exhibits various emergent phenomena. Social-complexity ABMs describe human behavioral characteristics. Among ABMs of econophysics, we introduce the Sugarscape model and artificial market models. We review minority games and majority games in ABMs of game theory. Social-flow ABMs cover crowding, evacuation, traffic congestion, and pedestrian dynamics. We also review ABMs for opinion dynamics and the voter model. We discuss the features, advantages, and disadvantages of NetLogo, Repast, Swarm, and MASON, which are representative platforms for implementing ABM.

  3. Moral Guilt : An Agent-Based Model Analysis

    OpenAIRE

    Gaudou , Benoit; Lorini , Emiliano; Mayor , Eunate

    2013-01-01

    In this article we analyze the influence of a concrete moral emotion (i.e., moral guilt) on strategic decision making. We present a normal-form Prisoner's Dilemma with a moral component. We assume that agents evaluate the game's outcomes with respect to their ideality degree (i.e., how much a given outcome conforms to the player's moral values), based on two proposed notions of ethical preferences: Harsanyi's and Rawls'. Based on such a game, we construct an agent-based m...

  4. Graph-Based Semi-Supervised Hyperspectral Image Classification Using Spatial Information

    Science.gov (United States)

    Jamshidpour, N.; Homayouni, S.; Safari, A.

    2017-09-01

    Hyperspectral image classification has been one of the most popular research areas in the remote sensing community in the past decades. However, there are still some problems that need specific attention. For example, the lack of enough labeled samples and the high dimensionality problem are the two most important issues which degrade the performance of supervised classification dramatically. The main idea of semi-supervised learning is to overcome these issues through the contribution of unlabeled samples, which are available in enormous amounts. In this paper, we propose a graph-based semi-supervised classification method which uses both spectral and spatial information for hyperspectral image classification. More specifically, two graphs were designed and constructed in order to exploit the relationships among pixels in the spectral and spatial spaces, respectively. Then, the Laplacians of both graphs were merged to form a weighted joint graph. The experiments were carried out on two different benchmark hyperspectral data sets. The proposed method performed significantly better than well-known supervised classification methods, such as SVM. The assessments consisted of both accuracy and homogeneity analyses of the produced classification maps. The proposed spectral-spatial SSL method considerably increased the classification accuracy when the labeled training data set is too scarce. When there were only five labeled samples for each class, the performance improved 5.92% and 10.76% compared to spatial graph-based SSL, for the AVIRIS Indian Pine and Pavia University data sets respectively.

  5. GRAPH-BASED SEMI-SUPERVISED HYPERSPECTRAL IMAGE CLASSIFICATION USING SPATIAL INFORMATION

    Directory of Open Access Journals (Sweden)

    N. Jamshidpour

    2017-09-01

    Hyperspectral image classification has been one of the most popular research areas in the remote sensing community in the past decades. However, there are still some problems that need specific attention. For example, the lack of enough labeled samples and the high dimensionality problem are the two most important issues which degrade the performance of supervised classification dramatically. The main idea of semi-supervised learning is to overcome these issues through the contribution of unlabeled samples, which are available in enormous amounts. In this paper, we propose a graph-based semi-supervised classification method which uses both spectral and spatial information for hyperspectral image classification. More specifically, two graphs were designed and constructed in order to exploit the relationships among pixels in the spectral and spatial spaces, respectively. Then, the Laplacians of both graphs were merged to form a weighted joint graph. The experiments were carried out on two different benchmark hyperspectral data sets. The proposed method performed significantly better than well-known supervised classification methods, such as SVM. The assessments consisted of both accuracy and homogeneity analyses of the produced classification maps. The proposed spectral-spatial SSL method considerably increased the classification accuracy when the labeled training data set is too scarce. When there were only five labeled samples for each class, the performance improved 5.92% and 10.76% compared to spatial graph-based SSL, for the AVIRIS Indian Pine and Pavia University data sets respectively.
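
    A rough sketch of the spectral-spatial idea under stated assumptions: synthetic data stands in for a hyperspectral scene, two kNN graphs are merged with a weight alpha, and the unlabeled nodes are inferred by the standard harmonic solution on the joint Laplacian (not necessarily the authors' exact formulation).

      import numpy as np
      from sklearn.neighbors import kneighbors_graph

      rng = np.random.default_rng(0)
      n = 200
      spectral = rng.normal(size=(n, 10))        # stand-in spectra
      coords = rng.uniform(size=(n, 2))          # stand-in pixel coordinates
      true = (spectral[:, 0] > 0).astype(int)
      labels = -np.ones(n, dtype=int)            # -1 = unlabeled
      labels[:10] = true[:10]                    # only a few labeled samples

      # Two graphs (spectral and spatial), merged with a weight alpha.
      W_spec = kneighbors_graph(spectral, 8, mode="connectivity").toarray()
      W_spat = kneighbors_graph(coords, 8, mode="connectivity").toarray()
      alpha = 0.5
      W = (alpha * np.maximum(W_spec, W_spec.T)
           + (1 - alpha) * np.maximum(W_spat, W_spat.T))

      # Harmonic label propagation on the joint Laplacian: L_uu f_u = -L_ul f_l.
      Lap = np.diag(W.sum(axis=1)) - W
      u, l = labels < 0, labels >= 0
      f_l = np.eye(2)[labels[l]]                 # one-hot labels
      f_u = np.linalg.solve(Lap[np.ix_(u, u)] + 1e-6 * np.eye(u.sum()),
                            -Lap[np.ix_(u, l)] @ f_l)
      pred = f_u.argmax(axis=1)
      print("unlabeled accuracy:", (pred == true[u]).mean())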

  6. Java-based mobile agent platforms for wireless sensor networks

    NARCIS (Netherlands)

    Aiello, F.; Carbone, A.; Fortino, G.; Galzarano, S.; Ganzha, M.; Paprzycki, M.

    2010-01-01

    This paper proposes an overview and comparison of mobile agent platforms for the development of wireless sensor network applications. In particular, the architecture, programming model and basic performance of two Java-based agent platforms, Mobile Agent Platform for Sun SPOT (MAPS) and Agent

  7. Key-phrase based classification of public health web pages.

    Science.gov (United States)

    Dolamic, Ljiljana; Boyer, Célia

    2013-01-01

    This paper describes and evaluates a public health web page classification model based on key phrase extraction and matching. Easily extendible both in terms of new classes and new languages, this method proves to be a good solution for text classification in the face of a total lack of training data. To evaluate the proposed solution, we used a small collection of public-health-related web pages created by double-blind manual classification. Our experiments have shown that by choosing an adequate threshold value, the desired value for either precision or recall can be achieved.

  8. Agent-based modeling and simulation Part 3 : desktop ABMS.

    Energy Technology Data Exchange (ETDEWEB)

    Macal, C. M.; North, M. J.; Decision and Information Sciences

    2007-01-01

    Agent-based modeling and simulation (ABMS) is a new approach to modeling systems comprised of autonomous, interacting agents. ABMS promises to have far-reaching effects on the way that businesses use computers to support decision-making and researchers use electronic laboratories to support their research. Some have gone so far as to contend that ABMS 'is a third way of doing science,' in addition to traditional deductive and inductive reasoning (Axelrod 1997b). Computational advances have made possible a growing number of agent-based models across a variety of application domains. Applications range from modeling agent behavior in the stock market, supply chains, and consumer markets, to predicting the spread of epidemics, the threat of bio-warfare, and the factors responsible for the fall of ancient civilizations. This tutorial describes the theoretical and practical foundations of ABMS, identifies toolkits and methods for developing agent models, and illustrates the development of a simple agent-based model of shopper behavior using spreadsheets.

  9. Establish an Agent-Simulant Technology Relationship (ASTR)

    Science.gov (United States)

    2017-04-14

    Keywords: collective protection (CP); decontamination (decon); contamination avoidance (CA); chemical, biological, radiological (CBR). Within chemical defense, the individual protection (IP), collective protection (CP), decontamination (decon), and contamination avoidance (CA) ... (OT). Testing may use chemical warfare agent (CWA), biological warfare agent (BWA), radiological agent, or simulant (surrogate). A simulant is a

  10. The Study of Land Use Classification Based on SPOT6 High Resolution Data

    OpenAIRE

    Wu Song; Jiang Qigang

    2016-01-01

    A method is presented for quick classification and extraction of land-use types in agricultural areas, based on SPOT6 high-resolution remote sensing data and the good nonlinear classification ability of support vector machines. The results show that SPOT6 high-resolution remote sensing data enable efficient land classification; the overall classification accuracy reached 88.79% and the Kappa coefficient is 0.8632, which means that the classif...

  11. Rough set classification based on quantum logic

    Science.gov (United States)

    Hassan, Yasser F.

    2017-11-01

    By combining the advantages of quantum computing and soft computing, the paper shows that rough sets can be used with quantum logic for classification and recognition systems. We suggest a new definition of rough set theory as quantum logic theory. Rough approximations are essential elements in rough set theory; the quantum rough set model for set-valued data directly constructs set approximations based on a kind of quantum similarity relation, which is presented here. Theoretical analyses demonstrate that the new model for quantum rough sets yields a new type of decision rule with less redundancy, which can be used to give accurate classification using principles of quantum superposition and non-linear quantum relations. To our knowledge, this is the first attempt aiming to define rough sets in a quantum representation rather than in logic or sets. Experiments on data sets have demonstrated that the proposed model is more accurate than traditional rough sets in terms of finding optimal classifications.

  12. Organizational Data Classification Based on the Importance Concept of Complex Networks.

    Science.gov (United States)

    Carneiro, Murillo Guimaraes; Zhao, Liang

    2017-08-01

    Data classification is a common task, which can be performed by both computers and human beings. However, a fundamental difference between them can be observed: computer-based classification considers only physical features (e.g., similarity, distance, or distribution) of input data; by contrast, brain-based classification takes into account not only physical features, but also the organizational structure of data. In this paper, we figure out the data organizational structure for classification using complex networks constructed from training data. Specifically, an unlabeled instance is classified by the importance concept characterized by Google's PageRank measure of the underlying data networks. Before a test data instance is classified, a network is constructed from vector-based data set and the test instance is inserted into the network in a proper manner. To this end, we also propose a measure, called spatio-structural differential efficiency, to combine the physical and topological features of the input data. Such a method allows for the classification technique to capture a variety of data patterns using the unique importance measure. Extensive experiments demonstrate that the proposed technique has promising predictive performance on the detection of heart abnormalities.
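
    A loose adaptation of the importance idea, under assumptions: build a kNN network from training data, insert the test instance, and compare its PageRank under class-restricted insertions. This simplification omits the paper's spatio-structural differential efficiency measure; the data set and k are arbitrary choices.

      import numpy as np
      import networkx as nx
      from sklearn.datasets import load_iris
      from sklearn.neighbors import NearestNeighbors

      X, y = load_iris(return_X_y=True)
      X_tr, y_tr, x_test, y_test = X[:-1], y[:-1], X[-1], y[-1]

      # Network construction from the vector-based training set (kNN edges).
      k = 5
      nn = NearestNeighbors(n_neighbors=k).fit(X_tr)
      G = nx.Graph()
      for i, neigh in enumerate(nn.kneighbors(X_tr, return_distance=False)):
          for j in neigh[1:]:                    # skip self (first neighbor)
              G.add_edge(i, int(j))

      # Insert the test node per candidate class; score by its PageRank importance.
      scores = {}
      neigh = nn.kneighbors([x_test], return_distance=False)[0]
      for c in np.unique(y_tr):
          links = [int(j) for j in neigh if y_tr[j] == c]
          if not links:
              continue
          H = G.copy()
          H.add_edges_from(("test", j) for j in links)
          scores[c] = nx.pagerank(H)["test"]

      pred = max(scores, key=scores.get)
      print("predicted:", pred, "true:", y_test)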

  13. Combined Kernel-Based BDT-SMO Classification of Hyperspectral Fused Images

    Directory of Open Access Journals (Sweden)

    Fenghua Huang

    2014-01-01

    To solve the poor generalization and flexibility problems that single-kernel SVM classifiers have when classifying combined spectral and spatial features, this paper proposes a solution to improve the classification accuracy and efficiency of hyperspectral fused images: (1) different radial basis kernel functions (RBFs) are employed for spectral and textural features, and a new combined radial basis kernel function (CRBF) is proposed by combining them in a weighted manner; (2) the binary decision tree-based multiclass SMO (BDT-SMO) is used in the classification of hyperspectral fused images; (3) experiments are carried out in which the single radial basis function (SRBF) based BDT-SMO classifier and the CRBF-based BDT-SMO classifier are used, respectively, to classify the land usage of hyperspectral fused images, and genetic algorithms (GA) are used to optimize the kernel parameters of the classifiers. The results show that, compared with SRBF, CRBF-based BDT-SMO classifiers display greater classification accuracy and efficiency.
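
    The weighted combination is easy to prototype, since a weighted sum of valid kernels is itself a valid kernel. The sketch below uses a callable kernel in scikit-learn with synthetic "spectral" and "textural" feature groups; the weight and gamma values play the role the GA would tune, and the BDT multiclass stage is omitted (binary stand-in).

      from sklearn.svm import SVC
      from sklearn.metrics.pairwise import rbf_kernel
      from sklearn.datasets import make_classification
      from sklearn.model_selection import train_test_split

      # Stand-in data: first 20 columns "spectral", last 10 columns "textural".
      X, y = make_classification(n_samples=400, n_features=30,
                                 n_informative=10, random_state=0)
      X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

      SPEC, TEX = slice(0, 20), slice(20, 30)
      w, g_spec, g_tex = 0.6, 0.05, 0.2        # parameters a GA would optimize

      def combined_rbf(A, B):
          """Weighted sum of two RBFs, one per feature group (a valid kernel)."""
          return (w * rbf_kernel(A[:, SPEC], B[:, SPEC], gamma=g_spec)
                  + (1 - w) * rbf_kernel(A[:, TEX], B[:, TEX], gamma=g_tex))

      clf = SVC(kernel=combined_rbf).fit(X_tr, y_tr)
      print("test accuracy:", clf.score(X_te, y_te))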

  14. Pathological Bases for a Robust Application of Cancer Molecular Classification

    Directory of Open Access Journals (Sweden)

    Salvador J. Diaz-Cano

    2015-04-01

    Any robust classification system depends on its purpose and must refer to accepted standards, its strength relying on predictive values and a careful consideration of known factors that can affect its reliability. In this context, a molecular classification of human cancer must refer to the current gold standard (histological classification) and try to improve it with key prognosticators for metastatic potential, staging and grading. Although organ-specific examples have been published based on proteomics, transcriptomics and genomics evaluations, the most popular approach uses gene expression analysis as a direct correlate of cellular differentiation, which represents the key feature of the histological classification. RNA is a labile molecule that varies significantly according to the preservation protocol; its transcription reflects the adaptation of the tumor cells to the microenvironment, it can be passed on through mechanisms of intercellular transference of genetic information (exosomes), and it is exposed to epigenetic modifications. More robust classifications should be based on stable molecules, at the genetic level represented by DNA, to improve reliability, and their analysis must deal with the concept of intratumoral heterogeneity, which is at the origin of tumor progression and is the byproduct of the selection process during the clonal expansion and progression of neoplasms. The simultaneous analysis of multiple DNA targets and next generation sequencing offer the best practical approach for an analytical genomic classification of tumors.

  15. Empirical agent-based modelling challenges and solutions

    CERN Document Server

    Barreteau, Olivier

    2014-01-01

    This instructional book showcases techniques to parameterise human agents in empirical agent-based models (ABM). In doing so, it provides a timely overview of key ABM methodologies and the most innovative approaches through a variety of empirical applications.  It features cutting-edge research from leading academics and practitioners, and will provide a guide for characterising and parameterising human agents in empirical ABM.  In order to facilitate learning, this text shares the valuable experiences of other modellers in particular modelling situations. Very little has been published in the area of empirical ABM, and this contributed volume will appeal to graduate-level students and researchers studying simulation modeling in economics, sociology, ecology, and trans-disciplinary studies, such as topics related to sustainability. In a similar vein to the instruction found in a cookbook, this text provides the empirical modeller with a set of 'recipes'  ready to be implemented. Agent-based modeling (AB...

  16. Hardware Accelerators Targeting a Novel Group Based Packet Classification Algorithm

    Directory of Open Access Journals (Sweden)

    O. Ahmed

    2013-01-01

    Packet classification is a ubiquitous and key building block for many critical network devices. However, it remains one of the main bottlenecks faced when designing fast network devices. In this paper, we propose a novel Group Based Search packet classification Algorithm (GBSA) that is scalable, fast, and efficient. GBSA consumes an average of 0.4 megabytes of memory for a 10 k rule set. The worst-case classification time per packet is 2 microseconds, and the preprocessing speed is 3 M rules/second on a Xeon processor operating at 3.4 GHz. When compared with other state-of-the-art classification techniques, the results showed that GBSA outperforms the competition with respect to speed, memory usage, and processing time. Moreover, GBSA is amenable to implementation in hardware. Three different hardware implementations are also presented in this paper, including an Application Specific Instruction Set Processor (ASIP) implementation and two pure Register-Transfer Level (RTL) implementations based on Impulse-C and Handel-C flows, respectively. Speedups achieved with these hardware accelerators ranged from 9x to 18x compared with a pure software implementation running on a Xeon processor.

  17. Classification of high resolution imagery based on fusion of multiscale texture features

    International Nuclear Information System (INIS)

    Liu, Jinxiu; Liu, Huiping; Lv, Ying; Xue, Xiaojuan

    2014-01-01

    In high resolution data classification, combining texture features with spectral bands can effectively improve the classification accuracy. However, the window size, which is difficult to choose, is an important factor influencing overall classification accuracy in textural classification, and current approaches to image texture analysis depend on a single moving window, which ignores the different scale features of various land cover types. In this paper, we propose a new method based on the fusion of multiscale texture features to overcome these problems. The main steps of the new method are the classification of fixed-window-size spectral/textural images from 3×3 to 15×15 and the comparison of all the posterior probability values for every pixel; the class with the biggest probability value is assigned to the pixel automatically. The proposed approach is tested on University of Pavia ROSIS data. The results indicate that the new method improves the classification accuracy compared to the results of methods based on fixed-window-size textural classification.

  18. Empirical Studies On Machine Learning Based Text Classification Algorithms

    OpenAIRE

    Shweta C. Dharmadhikari; Maya Ingle; Parag Kulkarni

    2011-01-01

    Automatic classification of text documents has become an important research issue nowadays. Proper classification of text documents requires information retrieval, machine learning and Natural Language Processing (NLP) techniques. Our aim is to focus on important approaches to automatic text classification based on machine learning techniques, viz. supervised, unsupervised and semi-supervised. In this paper we present a review of various text classification approaches under the machine learning paradig...

  19. Locality-preserving sparse representation-based classification in hyperspectral imagery

    Science.gov (United States)

    Gao, Lianru; Yu, Haoyang; Zhang, Bing; Li, Qingting

    2016-10-01

    This paper proposes to combine locality-preserving projections (LPP) and sparse representation (SR) for hyperspectral image classification. The LPP is first used to reduce the dimensionality of all the training and testing data by finding the optimal linear approximations to the eigenfunctions of the Laplace Beltrami operator on the manifold, where the high-dimensional data lies. Then, SR codes the projected testing pixels as sparse linear combinations of all the training samples to classify the testing pixels by evaluating which class leads to the minimum approximation error. The integration of LPP and SR represents an innovative contribution to the literature. The proposed approach, called locality-preserving SR-based classification, addresses the imbalance between high dimensionality of hyperspectral data and the limited number of training samples. Experimental results on three real hyperspectral data sets demonstrate that the proposed approach outperforms the original counterpart, i.e., SR-based classification.
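
    The SR half of the method can be sketched directly: code each test sample over the dictionary of all training samples with an l1 solver and pick the class whose atoms reconstruct it with minimum residual. Here Lasso plays the sparse coder, no LPP step is applied (a projection such as PCA could stand in for it), and alpha and sample counts are arbitrary.

      import numpy as np
      from sklearn.linear_model import Lasso
      from sklearn.datasets import load_digits
      from sklearn.model_selection import train_test_split

      X, y = load_digits(return_X_y=True)
      X = X / np.linalg.norm(X, axis=1, keepdims=True)   # unit-norm atoms
      X_tr, X_te, y_tr, y_te = train_test_split(X, y, train_size=300, random_state=0)

      D = X_tr.T                                         # dictionary: features x atoms
      correct = 0
      for x, target in zip(X_te[:50], y_te[:50]):
          # Sparse coding: min ||x - D a||^2 + alpha * ||a||_1 over coefficients a.
          coder = Lasso(alpha=0.001, fit_intercept=False, max_iter=5000).fit(D, x)
          a = coder.coef_
          # Class-wise residuals: reconstruct x using only each class's atoms.
          residuals = {c: np.linalg.norm(x - D[:, y_tr == c] @ a[y_tr == c])
                       for c in np.unique(y_tr)}
          correct += min(residuals, key=residuals.get) == target

      print("accuracy on 50 test samples:", correct / 50)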

  20. AN ADABOOST OPTIMIZED CCFIS BASED CLASSIFICATION MODEL FOR BREAST CANCER DETECTION

    Directory of Open Access Journals (Sweden)

    CHANDRASEKAR RAVI

    2017-06-01

    Full Text Available Classification is a Data Mining technique used for building a prototype of the data behaviour, with which unseen data can be classified into one of the defined classes. Several researchers have proposed classification techniques, but most of them did not place much emphasis on misclassified instances and storage space. In this paper, a classification model is proposed that takes both into account. The classification model is efficiently developed using a tree structure to reduce the storage complexity, and it uses a single scan of the dataset. During the training phase, Class-based Closed Frequent ItemSets (CCFIS) were mined from the training dataset in the form of a tree structure. The classification model has been developed using the CCFIS and a similarity measure based on the Longest Common Subsequence (LCS). Further, the Particle Swarm Optimization algorithm is applied to the generated CCFIS, which assigns weights to the itemsets and their associated classes. Most classifiers correctly classify the common instances but misclassify the rare ones. In view of that, the AdaBoost algorithm has been used to boost the weights of the instances misclassified in the previous round so as to include them in the training phase and classify the rare instances. This improves the accuracy of the classification model. During the testing phase, the classification model is used to classify the instances of the test dataset. The Breast Cancer dataset from the UCI repository is used for the experiment. Experimental analysis shows that the proposed classification model outperforms the PSOAdaBoost-Sequence classifier by 7% and is superior to other approaches such as the Naïve Bayes Classifier, Support Vector Machine Classifier, Instance Based Classifier, ID3 Classifier, J48 Classifier, etc.
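
    The boosting step named above has a standard form, sketched here with a shallow decision tree standing in for the paper's CCFIS/LCS model: after each round the weights of misclassified (typically rare) instances are increased so the next round concentrates on them.

        import numpy as np
        from sklearn.tree import DecisionTreeClassifier

        def adaboost_rounds(X, y, n_rounds=10):
            w = np.full(len(y), 1.0 / len(y))
            learners, alphas = [], []
            for _ in range(n_rounds):
                stump = DecisionTreeClassifier(max_depth=1).fit(X, y, sample_weight=w)
                pred = stump.predict(X)
                err = w[pred != y].sum()
                if err == 0 or err >= 0.5:                    # base learner too good / too weak
                    break
                alpha = 0.5 * np.log((1 - err) / err)
                w *= np.exp(np.where(pred != y, alpha, -alpha))   # boost misclassified weights
                w /= w.sum()
                learners.append(stump)
                alphas.append(alpha)
            return learners, alphas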

  1. Next frontier in agent-based complex automated negotiation

    CERN Document Server

    Ito, Takayuki; Zhang, Minjie; Robu, Valentin

    2015-01-01

    This book focuses on automated negotiations based on multi-agent systems. It is intended for researchers and students in various fields involving autonomous agents and multi-agent systems, such as e-commerce tools, decision-making and negotiation support systems, and collaboration tools. The contents will help them to understand the concept of automated negotiations, negotiation protocols, negotiating agents’ strategies, and the applications of those strategies. In this book, some negotiation protocols focusing on the multiple interdependent issues in negotiations are presented, making it possible to find high-quality solutions for the complex agents’ utility functions. This book is a compilation of the extended versions of the very best papers selected from the many that were presented at the International Workshop on Agent-Based Complex Automated Negotiations.

  2. The selection of adhesive systems for resin-based luting agents.

    Science.gov (United States)

    Carville, Rebecca; Quinn, Frank

    2008-01-01

    The use of resin-based luting agents is ever expanding with the development of adhesive dentistry. A multitude of different adhesive systems are used with resin-based luting agents, and new products are introduced to the market frequently. Traditional adhesives generally required a multiple step bonding procedure prior to cementing with active resin-based luting materials; however, combined agents offer a simple application procedure. Self-etching 'all-in-one' systems claim that there is no need for the use of a separate adhesive process. The following review addresses the advantages and disadvantages of the available adhesive systems used with resin-based luting agents.

  3. Macromolecular and dendrimer-based magnetic resonance contrast agents

    Energy Technology Data Exchange (ETDEWEB)

    Bumb, Ambika; Brechbiel, Martin W. (Radiation Oncology Branch, National Cancer Inst., National Inst. of Health, Bethesda, MD (United States)), e-mail: pchoyke@mail.nih.gov; Choyke, Peter (Molecular Imaging Program, National Cancer Inst., National Inst. of Health, Bethesda, MD (United States))

    2010-09-15

    Magnetic resonance imaging (MRI) is a powerful imaging modality that can provide an assessment of function or molecular expression in tandem with anatomic detail. Over the last 20-25 years, a number of gadolinium-based MR contrast agents have been developed to enhance signal by altering proton relaxation properties. This review explores a range of these agents, from small molecule chelates, such as Gd-DTPA and Gd-DOTA, to macromolecular structures composed of albumin, polylysine, polysaccharides (dextran, inulin, starch), poly(ethylene glycol), copolymers of cystamine and cystine with Gd-DTPA, and various dendritic structures based on polyamidoamine and polylysine (Gadomers). The synthesis, structure, biodistribution, and targeting of dendrimer-based MR contrast agents are also discussed.

  4. Ligand and structure-based classification models for Prediction of P-glycoprotein inhibitors

    DEFF Research Database (Denmark)

    Klepsch, Freya; Poongavanam, Vasanthanathan; Ecker, Gerhard Franz

    2014-01-01

    an algorithm based on Euclidean distance. Results show that random forest and SVM performed best for classification of P-gp inhibitors and non-inhibitors, correctly predicting 73/75 % of the external test set compounds. Classification based on the docking experiments using the scoring function Chem...

  5. Improving Classification of Protein Interaction Articles Using Context Similarity-Based Feature Selection.

    Science.gov (United States)

    Chen, Yifei; Sun, Yuxing; Han, Bing-Qing

    2015-01-01

    Protein interaction article classification is a text classification task in the biological domain to determine which articles describe protein-protein interactions. Since the feature space in text classification is high-dimensional, feature selection is widely used to reduce the dimensionality of features and speed up computation without sacrificing classification performance. Many existing feature selection methods are based on the statistical measures of document frequency and term frequency. One potential drawback of these methods is that they treat features separately. Hence, we first design a similarity measure between context information that takes word cooccurrences and phrase chunks around the features into account. Then we introduce the similarity of context information into the importance measure of the features, replacing document and term frequency. On this basis we propose new context similarity-based feature selection methods. Their performance is evaluated on two protein interaction article collections and compared against the frequency-based methods. The experimental results reveal that the context similarity-based methods perform better in terms of the F1 measure and the dimension reduction rate. Benefiting from the context information surrounding the features, the proposed methods can select distinctive features effectively for protein interaction article classification.

  6. Agent-Based Computing: Promise and Perils

    OpenAIRE

    Jennings, N. R.

    1999-01-01

    Agent-based computing represents an exciting new synthesis both for Artificial Intelligence (AI) and, more generally, Computer Science. It has the potential to significantly improve the theory and practice of modelling, designing and implementing complex systems. Yet, to date, there has been little systematic analysis of what makes an agent such an appealing and powerful conceptual model. Moreover, even less effort has been devoted to exploring the inherent disadvantages that stem from adoptin...

  7. Polarimetric SAR image classification based on discriminative dictionary learning model

    Science.gov (United States)

    Sang, Cheng Wei; Sun, Hong

    2018-03-01

    Polarimetric SAR (PolSAR) image classification is one of the important applications of PolSAR remote sensing. It is a difficult high-dimensional nonlinear mapping problem, and sparse representations based on learned overcomplete dictionaries have shown great potential for solving such problems. The overcomplete dictionary plays an important role in PolSAR image classification; however, in complex PolSAR scenes, features shared by different classes weaken the discrimination of the learned dictionary and thus degrade classification performance. In this paper, we propose a novel overcomplete dictionary learning model to enhance the discrimination of the dictionary. The overcomplete dictionary learned by the proposed model is more discriminative and very suitable for PolSAR classification.

  8. Design and simulation of material-integrated distributed sensor processing with a code-based agent platform and mobile multi-agent systems.

    Science.gov (United States)

    Bosse, Stefan

    2015-02-16

    Multi-agent systems (MAS) can be used for decentralized and self-organizing data processing in a distributed system, like a resource-constrained sensor network, enabling distributed information extraction, for example, based on pattern recognition and self-organization, by decomposing complex tasks into simpler cooperative agents. Reliable MAS-based data processing approaches can aid the material-integration of structural-monitoring applications, with agent processing platforms scaled to the microchip level. The agent behavior, based on a dynamic activity-transition graph (ATG) model, is implemented with program code storing the control and the data state of an agent, which is novel. The program code can be modified by the agent itself using code morphing techniques and is capable of migrating in the network between nodes. The program code is a self-contained unit (a container) and embeds the agent data, the initialization instructions and the ATG behavior implementation. The microchip agent processing platform used for the execution of the agent code is a standalone multi-core stack machine with a zero-operand instruction format, leading to a small-sized agent program code, low system complexity and high system performance. The agent processing is token-queue-based, similar to Petri-nets. The agent platform can be implemented in software, too, offering compatibility at the operational and code level, supporting agent processing in strongly heterogeneous networks. In this work, the agent platform embedded in a large-scale distributed sensor network is simulated at the architectural level by using agent-based simulation techniques.

  9. Design and Simulation of Material-Integrated Distributed Sensor Processing with a Code-Based Agent Platform and Mobile Multi-Agent Systems

    Directory of Open Access Journals (Sweden)

    Stefan Bosse

    2015-02-01

    Full Text Available Multi-agent systems (MAS) can be used for decentralized and self-organizing data processing in a distributed system, like a resource-constrained sensor network, enabling distributed information extraction, for example, based on pattern recognition and self-organization, by decomposing complex tasks into simpler cooperative agents. Reliable MAS-based data processing approaches can aid the material-integration of structural-monitoring applications, with agent processing platforms scaled to the microchip level. The agent behavior, based on a dynamic activity-transition graph (ATG) model, is implemented with program code storing the control and the data state of an agent, which is novel. The program code can be modified by the agent itself using code morphing techniques and is capable of migrating in the network between nodes. The program code is a self-contained unit (a container) and embeds the agent data, the initialization instructions and the ATG behavior implementation. The microchip agent processing platform used for the execution of the agent code is a standalone multi-core stack machine with a zero-operand instruction format, leading to a small-sized agent program code, low system complexity and high system performance. The agent processing is token-queue-based, similar to Petri-nets. The agent platform can be implemented in software, too, offering compatibility at the operational and code level, supporting agent processing in strongly heterogeneous networks. In this work, the agent platform embedded in a large-scale distributed sensor network is simulated at the architectural level by using agent-based simulation techniques.

  10. Agent-based simulation of electricity markets : a literature review

    International Nuclear Information System (INIS)

    Sensfuss, F.; Genoese, M.; Genoese, M.; Most, D.

    2007-01-01

    The electricity sector in Europe and North America is undergoing considerable changes as a result of deregulation, issues related to climate change, and the integration of renewable resources within the electricity grid. This article reviewed agent-based simulation methods of analyzing electricity markets. The paper provided an analysis of research currently being conducted on electricity market designs and examined methods of modelling agent decisions. Methods of coupling long term and short term decisions were also reviewed. Issues related to single and multiple market analysis methods were discussed, as well as different approaches to integrating agent-based models with models of other commodities. The integration of transmission constraints within agent-based models was also discussed, and methods of measuring market efficiency were evaluated. Other topics examined in the paper included approaches to integrating investment decisions, carbon dioxide (CO2) trading, and renewable support schemes. It was concluded that agent-based models serve as a test bed for the electricity sector, and will help to provide insights for future policy decisions. 74 refs., 6 figs

  11. An object-oriented classification method of high resolution imagery based on improved AdaTree

    International Nuclear Information System (INIS)

    Xiaohe, Zhang; Liang, Zhai; Jixian, Zhang; Huiyong, Sang

    2014-01-01

    With the popularity of applications using high spatial resolution remote sensing imagery, more and more studies have paid attention to object-oriented classification, both to image segmentation and to automatic classification after segmentation. This paper proposes a fast method of object-oriented automatic classification. First, edge-based or FNEA-based segmentation is used to identify image objects, and the values of the attributes of image objects most suitable for classification are calculated. Then a certain number of samples from the image objects are selected as training data for the improved AdaTree algorithm to derive classification rules. Finally, the image objects can be classified easily using these rules. In the AdaTree, we mainly modified the final hypothesis to obtain the classification rules. In an experiment with a WorldView2 image, the AdaTree-based method showed clear improvements in accuracy and efficiency compared with an SVM-based method, with the kappa coefficient reaching 0.9242.

  12. Agent-based Big Data Classification

    African Journals Online (AJOL)

    pc

    2018-03-05

    Mar 5, 2018 ... relevant and interesting knowledge from data, while data mining is a particular ... partitioning the original feature space instead of using the whole input ... classifying sentiment of online reviews using ontology. The proposed ...

  13. Cheese Classification, Characterization, and Categorization: A Global Perspective.

    Science.gov (United States)

    Almena-Aliste, Montserrat; Mietton, Bernard

    2014-02-01

    Cheese is one of the most fascinating, complex, and diverse foods enjoyed today. Three elements constitute the cheese ecosystem: ripening agents, consisting of enzymes and microorganisms; the composition of the fresh cheese; and the environmental conditions during aging. These factors determine and define not only the sensory quality of the final cheese product but also the vast diversity of cheeses produced worldwide. How we define and categorize cheese is a complicated matter. There are various approaches to cheese classification, and a global approach for classification and characterization is needed. We review current cheese classification schemes and the limitations inherent in each of the schemes described. While some classification schemes are based on microbiological criteria, others rely on descriptions of the technologies used for cheese production. The goal of this review is to present an overview of comprehensive and practical integrative classification models in order to better describe cheese diversity and the fundamental differences within cheeses, as well as to connect fundamental technological, microbiological, chemical, and sensory characteristics to contribute to an overall characterization of the main families of cheese, including the expanding world of American artisanal cheeses.

  14. Building an asynchronous web-based tool for machine learning classification.

    Science.gov (United States)

    Weber, Griffin; Vinterbo, Staal; Ohno-Machado, Lucila

    2002-01-01

    Various unsupervised and supervised learning methods including support vector machines, classification trees, linear discriminant analysis and nearest neighbor classifiers have been used to classify high-throughput gene expression data. Simpler and more widely accepted statistical tools have not yet been used for this purpose, hence proper comparisons between classification methods have not been conducted. We developed free software that implements logistic regression with stepwise variable selection as a quick and simple method for initial exploration of important genetic markers in disease classification. To implement the algorithm and allow our collaborators in remote locations to evaluate and compare its results against those of other methods, we developed a user-friendly asynchronous web-based application with a minimal amount of programming using free, downloadable software tools. With this program, we show that classification using logistic regression can perform as well as other more sophisticated algorithms, and it has the advantages of being easy to interpret and reproduce. By making the tool freely and easily available, we hope to promote the comparison of classification methods. In addition, we believe our web application can be used as a model for other bioinformatics laboratories that need to develop web-based analysis tools in a short amount of time and on a limited budget.
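
    A standalone sketch of the core algorithm (the original is a web application, so this reconstruction rests on stated assumptions): logistic regression with forward stepwise variable selection, greedily adding the marker that most improves cross-validated accuracy and stopping when nothing helps.

        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import cross_val_score

        def forward_stepwise(X, y, max_features=10):
            selected, remaining, best = [], list(range(X.shape[1])), 0.0
            while remaining and len(selected) < max_features:
                score, j = max((cross_val_score(LogisticRegression(max_iter=1000),
                                                X[:, selected + [j]], y, cv=5).mean(), j)
                               for j in remaining)
                if score <= best:
                    break                                     # no marker improves the model
                best = score
                selected.append(j)
                remaining.remove(j)
            return selected, best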

  15. Application of In-Segment Multiple Sampling in Object-Based Classification

    Directory of Open Access Journals (Sweden)

    Nataša Đurić

    2014-12-01

    Full Text Available When object-based analysis is applied to very high-resolution imagery, pixels within the segments reveal large spectral inhomogeneity; their distribution can be considered complex rather than normal. When normality is violated, classification methods that rely on the assumption of normally distributed data are not as successful or accurate. It is hard to detect normality violations in small samples. The segmentation process produces segments that vary highly in size; samples can be very big or very small. This paper investigates whether the complexity within the segment can be addressed using multiple random sampling of segment pixels and multiple calculations of similarity measures. In order to analyze the effect sampling has on classification results, the statistics and probability values of the non-parametric two-sample Kolmogorov-Smirnov test and the parametric Student's t-test are selected as similarity measures in the classification process. The performance of both classifiers was assessed on a WorldView-2 image for four land cover classes (roads, buildings, grass and trees) and compared to two commonly used object-based classifiers—k-Nearest Neighbor (k-NN) and Support Vector Machine (SVM). Both proposed classifiers showed a slight improvement in overall classification accuracy and produced more accurate classification maps when compared to the ground truth image.
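
    A sketch of the in-segment multiple-sampling idea with the Kolmogorov-Smirnov measure: draw several random pixel samples from a segment, test each against reference samples of every class, and assign the class with the highest mean p-value. The single-band arrays and the class-reference dictionary are illustrative assumptions.

        import numpy as np
        from scipy.stats import ks_2samp

        def classify_segment(segment_pixels, class_references, n_draws=10, sample_size=50):
            rng = np.random.default_rng(0)
            scores = {}
            for cls, ref in class_references.items():
                size = min(sample_size, len(segment_pixels))
                pvals = [ks_2samp(rng.choice(segment_pixels, size=size, replace=False),
                                  ref).pvalue
                         for _ in range(n_draws)]             # repeated random in-segment samples
                scores[cls] = float(np.mean(pvals))           # average similarity to the class
            return max(scores, key=scores.get)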

  16. Lidar-based individual tree species classification using convolutional neural network

    Science.gov (United States)

    Mizoguchi, Tomohiro; Ishii, Akira; Nakamura, Hiroyuki; Inoue, Tsuyoshi; Takamatsu, Hisashi

    2017-06-01

    Terrestrial lidar is commonly used for detailed documentation in the field of forest inventory investigation. Recent improvements in point cloud processing techniques have enabled efficient and precise computation of individual tree shape parameters, such as breast-height diameter, height, and volume. However, tree species are still specified manually by skilled workers. Previous work on automatic tree species classification has mainly focused on aerial or satellite images, and few works have been reported on classification techniques using ground-based sensor data. Several candidate sensors can be considered for classification, such as RGB or multi/hyperspectral cameras. Above all candidates, we use terrestrial lidar because it can obtain a high resolution point cloud in the dark forest. We selected bark texture as the classification criterion, since it clearly represents the unique characteristics of each tree and does not change its appearance under seasonal variation and age-related deterioration. In this paper, we propose a new method for automatic individual tree species classification based on terrestrial lidar using a Convolutional Neural Network (CNN). The key component is the creation of a depth image that well describes the characteristics of each species from a point cloud. We focus on Japanese cedar and cypress, which cover a large part of the domestic forest. Our experimental results demonstrate the effectiveness of the proposed method.

  17. Changing Histopathological Diagnostics by Genome-Based Tumor Classification

    Directory of Open Access Journals (Sweden)

    Michael Kloth

    2014-05-01

    Full Text Available Traditionally, tumors are classified by histopathological criteria, i.e., based on their specific morphological appearances. Consequently, current therapeutic decisions in oncology are strongly influenced by histology rather than underlying molecular or genomic aberrations. The increase of information on molecular changes however, enabled by the Human Genome Project and the International Cancer Genome Consortium as well as the manifold advances in molecular biology and high-throughput sequencing techniques, inaugurated the integration of genomic information into disease classification. Furthermore, in some cases it became evident that former classifications needed major revision and adaption. Such adaptations are often required by understanding the pathogenesis of a disease from a specific molecular alteration, using this molecular driver for targeted and highly effective therapies. Altogether, reclassifications should lead to higher information content of the underlying diagnoses, reflecting their molecular pathogenesis and resulting in optimized and individual therapeutic decisions. The objective of this article is to summarize some particularly important examples of genome-based classification approaches and associated therapeutic concepts. In addition to reviewing disease specific markers, we focus on potentially therapeutic or predictive markers and the relevance of molecular diagnostics in disease monitoring.

  18. VigilAgent for the development of agent-based multi-robot surveillance systems

    OpenAIRE

    Gascueña Noheda, José Manuel; Navarro Martínez, Elena María; Fernández Caballero, Antonio

    2011-01-01

    Usually, surveillance applications are developed following an ad-hoc approach instead of using a methodology to guide stakeholders in achieving the quality standards expected from commercial software. To close this gap, our conjecture is that surveillance applications can be fully developed from their initial design stages by means of agent-based methodologies. Specifically, this paper describes the experience and the results of using a multi-agent systems approach according to the process provid...

  19. Agent-based inter-organizational systems in advanced logistics operations

    NARCIS (Netherlands)

    M. Wasesa (Meditya)

    2017-01-01

    textabstract“Agent-based Inter-organizational Systems (ABIOS) in Advanced Logistics Operations” explores the concepts, the design, and the role and impact of agent-based systems to improve coordination and performance of logistics operations. The dissertation consists of one conceptual study and

  20. Sparse Representation Based Binary Hypothesis Model for Hyperspectral Image Classification

    Directory of Open Access Journals (Sweden)

    Yidong Tang

    2016-01-01

    Full Text Available The sparse representation based classifier (SRC) and its kernel version (KSRC) have been employed for hyperspectral image (HSI) classification. However, the state-of-the-art SRC often aims at extended surface objects with linear mixture in smooth scenes and assumes that the number of classes is given. Considering small targets with complex backgrounds, a sparse representation based binary hypothesis (SRBBH) model is established in this paper. In this model, a query pixel is represented in two ways, namely by the background dictionary and by the union dictionary. The background dictionary is composed of samples selected from the local dual concentric window centered at the query pixel. Thus, for each pixel the classification issue becomes an adaptive multiclass classification problem, where only the number of desired classes is required. Furthermore, the kernel method is employed to improve the interclass separability. In kernel space, the coding vector is obtained by using the kernel-based orthogonal matching pursuit (KOMP) algorithm. Then the query pixel can be labeled by the characteristics of the coding vectors. Instead of directly using the reconstruction residuals, the different impacts the background dictionary and union dictionary have on reconstruction are used for validation and classification. This enhances the discrimination and hence improves the performance.
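
    A linear sketch of the binary-hypothesis test, with plain OMP standing in for the kernelized KOMP step: reconstruct the query pixel once from the background dictionary and once from the union dictionary, and flag a target when the union dictionary fits markedly better. The dictionaries are hypothetical column-stacked sample matrices.

        import numpy as np
        from sklearn.linear_model import OrthogonalMatchingPursuit

        def residual(D, x, n_nonzero=5):
            omp = OrthogonalMatchingPursuit(n_nonzero_coefs=n_nonzero).fit(D, x)
            return np.linalg.norm(x - D @ omp.coef_)

        def srbbh_detect(x, D_background, D_target, ratio_threshold=0.8):
            D_union = np.hstack([D_background, D_target])
            r_b = residual(D_background, x)                   # H0: background only
            r_u = residual(D_union, x)                        # H1: background + target
            return (r_u / max(r_b, 1e-12)) < ratio_threshold  # target if union fits much better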

  1. The development of a classification schema for arts-based approaches to knowledge translation.

    Science.gov (United States)

    Archibald, Mandy M; Caine, Vera; Scott, Shannon D

    2014-10-01

    Arts-based approaches to knowledge translation are emerging as powerful interprofessional strategies with potential to facilitate evidence uptake, communication, knowledge, attitude, and behavior change across healthcare provider and consumer groups. These strategies are in the early stages of development. To date, no classification system for arts-based knowledge translation exists, which limits development and understandings of effectiveness in evidence syntheses. We developed a classification schema of arts-based knowledge translation strategies based on two mechanisms by which these approaches function: (a) the degree of precision in key message delivery, and (b) the degree of end-user participation. We demonstrate how this classification is necessary to explore how context, time, and location shape arts-based knowledge translation strategies. Classifying arts-based knowledge translation strategies according to their core attributes extends understandings of the appropriateness of these approaches for various healthcare settings and provider groups. The classification schema developed may enhance understanding of how, where, and for whom arts-based knowledge translation approaches are effective, and enable theorizing of essential knowledge translation constructs, such as the influence of context, time, and location on utilization strategies. The classification schema developed may encourage systematic inquiry into the effectiveness of these approaches in diverse interprofessional contexts. © 2014 Sigma Theta Tau International.

  2. Research on Remote Sensing Image Classification Based on Feature Level Fusion

    Science.gov (United States)

    Yuan, L.; Zhu, G.

    2018-04-01

    Remote sensing image classification, as an important direction of remote sensing image processing and application, has been widely studied. However, existing classification algorithms still suffer from misclassification and missed detections, so the final classification accuracy is not high. In this paper, we selected Sentinel-1A and Landsat8 OLI images as data sources and propose a classification method based on feature-level fusion. We compare three feature-level fusion algorithms (i.e., Gram-Schmidt spectral sharpening, Principal Component Analysis transform and Brovey transform) and select the best fused image for the classification experiment. In the classification process, we choose four image classification algorithms (i.e., Minimum distance, Mahalanobis distance, Support Vector Machine and ISODATA) for contrast experiments. We use overall classification precision and the Kappa coefficient as the classification accuracy evaluation criteria and analyse the four classification results of the fused image. The experimental results show that the fusion effect of Gram-Schmidt spectral sharpening is better than that of the other methods. Among the four classification algorithms, the fused image is best suited to Support Vector Machine classification, with an overall classification precision of 94.01 % and a Kappa coefficient of 0.91. The image fused from Sentinel-1A and Landsat8 OLI not only has more spatial information and spectral texture characteristics, but also enhances the distinguishing features of the images. The proposed method is beneficial to improving the accuracy and stability of remote sensing image classification.
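
    Of the three fusion algorithms compared, the Brovey transform has the simplest closed form, sketched below: each multispectral band is rescaled by the ratio of a high-resolution intensity band to the sum of the multispectral bands. Pairing the SAR intensity with the optical bands in this role is an illustrative assumption.

        import numpy as np

        def brovey_fusion(ms_bands, pan):
            """ms_bands: (k, H, W) multispectral stack; pan: (H, W) high-resolution band."""
            total = ms_bands.sum(axis=0) + 1e-12              # guard against division by zero
            return ms_bands * (pan / total)                   # fused band i = B_i * pan / sum(B)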

  3. Gd-HOPO Based High Relaxivity MRI Contrast Agents

    Energy Technology Data Exchange (ETDEWEB)

    Datta, Ankona; Raymond, Kenneth

    2008-11-06

    Tris-bidentate HOPO-based ligands developed in our laboratory were designed to complement the coordination preferences of Gd{sup 3+}, especially its oxophilicity. The HOPO ligands provide a hexadentate coordination environment for Gd{sup 3+} in which all the donor atoms are oxygen. Because Gd{sup 3+} favors eight or nine coordination, this design provides two to three open sites for inner-sphere water molecules. These water molecules rapidly exchange with bulk solution, hence affecting the relaxation rates of bulk water molecules. The parameters affecting the efficiency of these contrast agents have been tuned to improve contrast while still maintaining a high thermodynamic stability for Gd{sup 3+} binding. The Gd-HOPO-based contrast agents surpass current commercially available agents because of a higher number of inner-sphere water molecules, rapid exchange of inner-sphere water molecules via an associative mechanism, and a long electronic relaxation time. The contrast enhancement provided by these agents is at least twice that of commercial contrast agents, which are based on polyaminocarboxylate ligands.

  4. Semantic Document Image Classification Based on Valuable Text Pattern

    Directory of Open Access Journals (Sweden)

    Hossein Pourghassem

    2011-01-01

    Full Text Available Knowledge extraction from detected document images is a complex problem in the field of information technology. The problem becomes more intricate when we know that only a negligible percentage of the detected document images are valuable. In this paper, a segmentation-based classification algorithm is used to analyze the document image. In this algorithm, using a two-stage segmentation approach, regions of the image are detected and then classified into document and non-document (pure region) regions in a hierarchical classification. Furthermore, a novel definition of value is proposed to classify document images into valuable or invaluable categories. The proposed algorithm is evaluated on a database consisting of document and non-document images obtained from the Internet. Experimental results show the efficiency of the proposed algorithm for semantic document image classification. The proposed algorithm achieves an accuracy rate of 98.8 % on the valuable versus invaluable document image classification problem.

  5. Ship Classification with High Resolution TerraSAR-X Imagery Based on Analytic Hierarchy Process

    Directory of Open Access Journals (Sweden)

    Zhi Zhao

    2013-01-01

    Full Text Available Ship surveillance using space-borne synthetic aperture radar (SAR), taking advantage of high resolution over wide swaths and all-weather working capability, has attracted worldwide attention. Recent activity in this field has concentrated mainly on the study of ship detection, but classification is largely still open. In this paper, we propose a novel ship classification scheme based on the analytic hierarchy process (AHP) in order to achieve better performance. The main idea is to apply AHP to both feature selection and the classification decision. On one hand, the AHP-based feature selection constructs a selection decision problem based on several feature evaluation measures (e.g., discriminability, stability, and information measure) and provides objective criteria to make comprehensive decisions about their combinations quantitatively. On the other hand, we take the selected feature sets as the input of KNN classifiers and fuse the multiple classification results based on AHP, in which the feature sets' confidence is taken into account when the AHP-based classification decision is made. We analyze the proposed classification scheme and demonstrate its results on a ship dataset that comes from TerraSAR-X SAR images.
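
    Both uses of AHP rest on the same computation, sketched here: derive priority weights from a pairwise comparison matrix via its principal eigenvector and check consistency with the consistency ratio. The example matrix comparing three feature-evaluation criteria is a placeholder.

        import numpy as np

        def ahp_weights(M):
            vals, vecs = np.linalg.eig(M)
            k = int(np.argmax(vals.real))
            w = np.abs(vecs[:, k].real)
            w /= w.sum()                                      # priority weights
            n = M.shape[0]
            ci = (vals[k].real - n) / (n - 1)                 # consistency index
            ri = {3: 0.58, 4: 0.90, 5: 1.12}.get(n, 1.12)     # Saaty's tabulated random index
            return w, ci / ri                                 # weights and consistency ratio (CR)

        M = np.array([[1, 3, 5], [1/3, 1, 2], [1/5, 1/2, 1]])
        w, cr = ahp_weights(M)                                # CR < 0.1 is conventionally acceptable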

  6. Texture-based classification of different gastric tumors at contrast-enhanced CT

    Energy Technology Data Exchange (ETDEWEB)

    Ba-Ssalamah, Ahmed, E-mail: ahmed.ba-ssalamah@meduniwien.ac.at [Department of Radiology, Medical University of Vienna (Austria); Muin, Dina; Schernthaner, Ruediger; Kulinna-Cosentini, Christiana; Bastati, Nina [Department of Radiology, Medical University of Vienna (Austria); Stift, Judith [Department of Pathology, Medical University of Vienna (Austria); Gore, Richard [Department of Radiology, University of Chicago Pritzker School of Medicine, Chicago, IL (United States); Mayerhoefer, Marius E. [Department of Radiology, Medical University of Vienna (Austria)

    2013-10-01

    Purpose: To determine the feasibility of texture analysis for the classification of gastric adenocarcinoma, lymphoma, and gastrointestinal stromal tumors on contrast-enhanced hydrodynamic-MDCT images. Materials and methods: The arterial phase scans of 47 patients with adenocarcinoma (AC) and a histologic tumor grade of [AC-G1, n = 4, G1, n = 4; AC-G2, n = 7; AC-G3, n = 16]; GIST, n = 15; and lymphoma, n = 5, and the venous phase scans of 48 patients with AC-G1, n = 3; AC-G2, n = 6; AC-G3, n = 14; GIST, n = 17; lymphoma, n = 8, were retrospectively reviewed. Based on regions of interest, texture analysis was performed, and features derived from the gray-level histogram, run-length and co-occurrence matrix, absolute gradient, autoregressive model, and wavelet transform were calculated. Fisher coefficients, probability of classification error, average correlation coefficients, and mutual information coefficients were used to create combinations of texture features that were optimized for tumor differentiation. Linear discriminant analysis in combination with a k-nearest neighbor classifier was used for tumor classification. Results: On arterial-phase scans, texture-based lesion classification was highly successful in differentiating between AC and lymphoma, and GIST and lymphoma, with misclassification rates of 3.1% and 0%, respectively. On venous-phase scans, texture-based classification was slightly less successful for AC vs. lymphoma (9.7% misclassification) and GIST vs. lymphoma (8% misclassification), but enabled the differentiation between AC and GIST (10% misclassification), and between the different grades of AC (4.4% misclassification). No texture feature combination was able to adequately distinguish between all three tumor types. Conclusion: Classification of different gastric tumors based on textural information may aid radiologists in establishing the correct diagnosis, at least in cases where the differential diagnosis can be narrowed down to two

  7. Texture-based classification of different gastric tumors at contrast-enhanced CT

    International Nuclear Information System (INIS)

    Ba-Ssalamah, Ahmed; Muin, Dina; Schernthaner, Ruediger; Kulinna-Cosentini, Christiana; Bastati, Nina; Stift, Judith; Gore, Richard; Mayerhoefer, Marius E.

    2013-01-01

    Purpose: To determine the feasibility of texture analysis for the classification of gastric adenocarcinoma, lymphoma, and gastrointestinal stromal tumors on contrast-enhanced hydrodynamic-MDCT images. Materials and methods: The arterial phase scans of 47 patients with adenocarcinoma (AC) and a histologic tumor grade of [AC-G1, n = 4, G1, n = 4; AC-G2, n = 7; AC-G3, n = 16]; GIST, n = 15; and lymphoma, n = 5, and the venous phase scans of 48 patients with AC-G1, n = 3; AC-G2, n = 6; AC-G3, n = 14; GIST, n = 17; lymphoma, n = 8, were retrospectively reviewed. Based on regions of interest, texture analysis was performed, and features derived from the gray-level histogram, run-length and co-occurrence matrix, absolute gradient, autoregressive model, and wavelet transform were calculated. Fisher coefficients, probability of classification error, average correlation coefficients, and mutual information coefficients were used to create combinations of texture features that were optimized for tumor differentiation. Linear discriminant analysis in combination with a k-nearest neighbor classifier was used for tumor classification. Results: On arterial-phase scans, texture-based lesion classification was highly successful in differentiating between AC and lymphoma, and GIST and lymphoma, with misclassification rates of 3.1% and 0%, respectively. On venous-phase scans, texture-based classification was slightly less successful for AC vs. lymphoma (9.7% misclassification) and GIST vs. lymphoma (8% misclassification), but enabled the differentiation between AC and GIST (10% misclassification), and between the different grades of AC (4.4% misclassification). No texture feature combination was able to adequately distinguish between all three tumor types. Conclusion: Classification of different gastric tumors based on textural information may aid radiologists in establishing the correct diagnosis, at least in cases where the differential diagnosis can be narrowed down to two

  8. Classification of arterial and venous cerebral vasculature based on wavelet postprocessing of CT perfusion data.

    Science.gov (United States)

    Havla, Lukas; Schneider, Moritz J; Thierfelder, Kolja M; Beyer, Sebastian E; Ertl-Wagner, Birgit; Reiser, Maximilian F; Sommer, Wieland H; Dietrich, Olaf

    2016-02-01

    The purpose of this study was to propose and evaluate a new wavelet-based technique for classification of arterial and venous vessels using time-resolved cerebral CT perfusion data sets. Fourteen consecutive patients (mean age 73 yr, range 17-97) with suspected stroke but no pathology in follow-up MRI were included. A CT perfusion scan with 32 dynamic phases was performed during intravenous bolus contrast-agent application. After rigid-body motion correction, a Paul wavelet (order 1) was used to calculate voxelwise the wavelet power spectrum (WPS) of each attenuation-time course. The angiographic intensity A was defined as the maximum of the WPS, located at the coordinates T (time axis) and W (scale/width axis) within the WPS. Using these three parameters (A, T, W) separately as well as combined by (1) Fisher's linear discriminant analysis (FLDA), (2) logistic regression (LogR) analysis, or (3) support vector machine (SVM) analysis, their potential to classify 18 different arterial and venous vessel segments per subject was evaluated. The best vessel classification was obtained using all three parameters A and T and W [area under the curve (AUC): 0.953 with FLDA and 0.957 with LogR or SVM]. In direct comparison, the wavelet-derived parameters provided performance at least equal to conventional attenuation-time-course parameters. The maximum AUC obtained from the proposed wavelet parameters was slightly (although not statistically significantly) higher than the maximum AUC (0.945) obtained from the conventional parameters. A new method to classify arterial and venous cerebral vessels with high statistical accuracy was introduced based on the time-domain wavelet transform of dynamic CT perfusion data in combination with linear or nonlinear multidimensional classification techniques.
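
    A sketch of the wavelet feature extraction under one substitution: PyWavelets has no Paul wavelet, so the Mexican-hat wavelet stands in. Each voxel's attenuation-time course yields the power-spectrum maximum A and its time/scale coordinates (T, W), which feed a linear discriminant classifier; the curves and labels are placeholders.

        import numpy as np
        import pywt
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

        def wps_features(time_course, scales=np.arange(1, 17)):
            coeffs, _ = pywt.cwt(time_course, scales, 'mexh')  # stand-in for the Paul wavelet
            power = np.abs(coeffs) ** 2                        # wavelet power spectrum (scale x time)
            s_idx, t_idx = np.unravel_index(power.argmax(), power.shape)
            return power[s_idx, t_idx], t_idx, scales[s_idx]   # A, T, W

        def classify_vessels(curves, labels):
            X = np.array([wps_features(c) for c in curves])
            return LinearDiscriminantAnalysis().fit(X, labels)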

  9. In Defense of Agent-Based Virtue Ethics | Van Zyl | Philosophical ...

    African Journals Online (AJOL)

    In 'Against agent-based virtue ethics' (2004) Michael Brady rejects agent-based virtue ethics on the grounds that it fails to capture the commonsense distinction between an agent's doing the right thing, and her doing it for the right reason. In his view, the failure to account for this distinction has paradoxical results, making it ...

  10. Ensemble Classification of Data Streams Based on Attribute Reduction and a Sliding Window

    Directory of Open Access Journals (Sweden)

    Yingchun Chen

    2018-04-01

    Full Text Available With the current increasing volume and dimensionality of data, traditional data classification algorithms are unable to satisfy the demands of practical classification applications of data streams. To deal with noise and concept drift in data streams, we propose an ensemble classification algorithm based on attribute reduction and a sliding window in this paper. Using mutual information, an approximate attribute reduction algorithm based on rough sets is used to reduce data dimensionality and increase the diversity of reduced results in the algorithm. A double-threshold concept drift detection method and a three-stage sliding window control strategy are introduced to improve the performance of the algorithm when dealing with both noise and concept drift. The classification precision is further improved by updating the base classifiers and their nonlinear weights. Experiments on synthetic datasets and actual datasets demonstrate the performance of the algorithm in terms of classification precision, memory use, and time efficiency.

  11. Classification of right-hand grasp movement based on EMOTIV Epoc+

    Science.gov (United States)

    Tobing, T. A. M. L.; Prawito, Wijaya, S. K.

    2017-07-01

    Combinations of BCT elements for right-hand grasp movement have been obtained, providing the average value of their classification accuracy. The aim of this study is to find a suitable combination for the best classification accuracy of right-hand grasp movement based on the EEG headset EMOTIV Epoc+. There are three movement classifications: grasping hand, relax, and opening hand. These classifications take advantage of the Event-Related Desynchronization (ERD) phenomenon, which makes it possible to distinguish the relaxation, imagery, and movement states from each other. The combined elements are the usage of Independent Component Analysis (ICA), spectrum analysis by Fast Fourier Transform (FFT), maximum mu and beta power with their frequencies as features, and the classifiers Probabilistic Neural Network (PNN) and Radial Basis Function (RBF). The average values of classification accuracy are ± 83% for training and ± 57% for testing. To better understand the signal quality recorded by EMOTIV Epoc+, the classification accuracy for a left- or right-hand grasping movement EEG signal (provided by Physionet) is also given, i.e., ± 85% for training and ± 70% for testing. A comparison of the accuracy values from each combination, experiment condition, and the external EEG data is provided for the purpose of analyzing classification accuracy.
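
    A sketch of the spectral feature path described above: estimate an epoch's power spectrum with the FFT and take the maximum power and its frequency within the mu (8-13 Hz) and beta (13-30 Hz) bands. The 128 Hz sampling rate and the band edges are illustrative assumptions.

        import numpy as np

        def band_peak(epoch, fs=128.0, band=(8.0, 13.0)):
            spectrum = np.abs(np.fft.rfft(epoch)) ** 2
            freqs = np.fft.rfftfreq(len(epoch), d=1.0 / fs)
            mask = (freqs >= band[0]) & (freqs < band[1])
            i = int(np.argmax(spectrum[mask]))
            return spectrum[mask][i], freqs[mask][i]          # peak power and its frequency

        def grasp_features(epoch, fs=128.0):
            mu_p, mu_f = band_peak(epoch, fs, (8.0, 13.0))    # mu band
            be_p, be_f = band_peak(epoch, fs, (13.0, 30.0))   # beta band
            return np.array([mu_p, mu_f, be_p, be_f])         # input to the PNN/RBF classifier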

  12. A role based coordination model in agent systems

    Institute of Scientific and Technical Information of China (English)

    ZHANG Ya-ying; YOU Jin-yuan

    2005-01-01

    Coordination technology addresses the construction of open, flexible systems from active and independent software agents in concurrent and distributed systems. In most open distributed applications, multiple agents need interaction and communication to achieve their overall goal. Coordination technologies for the Internet are typically concerned with enabling interaction among agents and helping them cooperate with each other. At the same time, access control should also be considered to constrain interaction to make it harmless. Access control should be regarded as the security counterpart of coordination. At present, the combination of coordination and access control remains an open problem. Thus, we propose a role based coordination model with policy enforcement in agent application systems. In this model, coordination is combined with access control so as to fully characterize the interactions in agent systems. A set of agents interacting with each other for a common global system task constitutes a coordination group. Role based access control is applied in this model to prevent unauthorized accesses. Coordination policy is enforced in a distributed manner so that the model can be applied to open distributed systems such as the Internet. An Internet online auction system is presented as a case study to illustrate the proposed coordination model, and finally a performance analysis of the model is introduced.

  13. Sparse Representation Based Multi-Instance Learning for Breast Ultrasound Image Classification

    Directory of Open Access Journals (Sweden)

    Lu Bing

    2017-01-01

    Full Text Available We propose a novel method based on sparse representation for breast ultrasound image classification under the framework of multi-instance learning (MIL). After image enhancement and segmentation, concentric circle is used to extract the global and local features for improving the accuracy in diagnosis and prediction. The classification problem of ultrasound image is converted to sparse representation based MIL problem. Each instance of a bag is represented as a sparse linear combination of all basis vectors in the dictionary, and then the bag is represented by one feature vector which is obtained via sparse representations of all instances within the bag. The sparse and MIL problem is further converted to a conventional learning problem that is solved by relevance vector machine (RVM). Results of single classifiers are combined to be used for classification. Experimental results on the breast cancer datasets demonstrate the superiority of the proposed method in terms of classification accuracy as compared with state-of-the-art MIL methods.

  14. Sparse Representation Based Multi-Instance Learning for Breast Ultrasound Image Classification.

    Science.gov (United States)

    Bing, Lu; Wang, Wei

    2017-01-01

    We propose a novel method based on sparse representation for breast ultrasound image classification under the framework of multi-instance learning (MIL). After image enhancement and segmentation, concentric circle is used to extract the global and local features for improving the accuracy in diagnosis and prediction. The classification problem of ultrasound image is converted to sparse representation based MIL problem. Each instance of a bag is represented as a sparse linear combination of all basis vectors in the dictionary, and then the bag is represented by one feature vector which is obtained via sparse representations of all instances within the bag. The sparse and MIL problem is further converted to a conventional learning problem that is solved by relevance vector machine (RVM). Results of single classifiers are combined to be used for classification. Experimental results on the breast cancer datasets demonstrate the superiority of the proposed method in terms of classification accuracy as compared with state-of-the-art MIL methods.

  15. Stability of subsystem solutions in agent-based models

    Science.gov (United States)

    Perc, Matjaž

    2018-01-01

    The fact that relatively simple entities, such as particles or neurons, or even ants or bees or humans, give rise to fascinatingly complex behaviour when interacting in large numbers is the hallmark of complex systems science. Agent-based models are frequently employed for modelling and obtaining a predictive understanding of complex systems. Since the sheer number of equations that describe the behaviour of an entire agent-based model often makes it impossible to solve such models exactly, Monte Carlo simulation methods must be used for the analysis. However, unlike pairwise interactions among particles that typically govern solid-state physics systems, interactions among agents that describe systems in biology, sociology or the humanities often involve group interactions, and they also involve a larger number of possible states even for the most simplified description of reality. This begets the question: when can we be certain that an observed simulation outcome of an agent-based model is actually stable and valid in the large system-size limit? The latter is key for the correct determination of phase transitions between different stable solutions, and for the understanding of the underlying microscopic processes that led to these phase transitions. We show that a satisfactory answer can only be obtained by means of a complete stability analysis of subsystem solutions. A subsystem solution can be formed by any subset of all possible agent states. The winner between two subsystem solutions can be determined by the average moving direction of the invasion front that separates them, yet it is crucial that the competing subsystem solutions are characterised by a proper composition and spatiotemporal structure before the competition starts. We use the spatial public goods game with diverse tolerance as an example, but the approach has relevance for a wide variety of agent-based models.

  16. A DIMENSION REDUCTION-BASED METHOD FOR CLASSIFICATION OF HYPERSPECTRAL AND LIDAR DATA

    Directory of Open Access Journals (Sweden)

    B. Abbasi

    2015-12-01

    Full Text Available The existence of various natural objects such as grass, trees, and rivers, along with artificial manmade features such as buildings and roads, makes it difficult to classify ground objects. Consequently, using a single data source or a simple classification approach cannot improve classification results in object identification, whereas using a variety of data from different sensors increases the accuracy of spatial and spectral information. In this paper, we propose a classification algorithm for the joint use of hyperspectral and Lidar (Light Detection and Ranging) data based on dimension reduction. First, some feature extraction techniques are applied to obtain more information from the Lidar and hyperspectral data. Principal Component Analysis (PCA) and Minimum Noise Fraction (MNF) are utilized to reduce the dimension of the spectral features; the 30 features containing the most information in the hyperspectral images are retained for both PCA and MNF. In addition, the Normalized Difference Vegetation Index (NDVI) is computed to highlight the vegetation. Furthermore, features are extracted from the Lidar data based on the relation between every pixel and its surrounding pixels in local neighbourhood windows, using the Grey Level Co-occurrence Matrix (GLCM). In the second step, classification is performed on all features obtained from MNF, PCA, NDVI and GLCM, trained with class samples. After this step, two classification maps are obtained by an SVM classifier with MNF+NDVI+GLCM features and PCA+NDVI+GLCM features, respectively. Finally, the classified images are fused into a final classification map by a decision-fusion-based majority voting strategy.
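
    A sketch of the final two steps, with one caveat: with only two classification maps a strict majority vote is undefined on disagreement, so this version sums the two SVMs' class probabilities (soft voting) as a stand-in for the paper's decision-fusion rule. The feature matrices are hypothetical per-pixel stacks.

        import numpy as np
        from sklearn.svm import SVC

        def classify_and_fuse(X_mnf, X_pca, Xtr_mnf, Xtr_pca, y_train):
            clf1 = SVC(probability=True).fit(Xtr_mnf, y_train)   # MNF+NDVI+GLCM features
            clf2 = SVC(probability=True).fit(Xtr_pca, y_train)   # PCA+NDVI+GLCM features
            p = clf1.predict_proba(X_mnf) + clf2.predict_proba(X_pca)
            return clf1.classes_[p.argmax(axis=1)]               # fused classification map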

  17. Agent-based models for higher-order theory of mind

    NARCIS (Netherlands)

    de Weerd, Harmen; Verbrugge, Rineke; Verheij, Bart; Kamiński, Bogumił; Koloch, Grzegorz

    2014-01-01

    Agent-based models are a powerful tool for explaining the emergence of social phenomena in a society. In such models, individual agents typically have little cognitive ability. In this paper, we model agents with the cognitive ability to make use of theory of mind. People use this ability to reason

  18. Vessel-guided airway segmentation based on voxel classification

    DEFF Research Database (Denmark)

    Lo, Pechin Chien Pau; Sporring, Jon; Ashraf, Haseem

    2008-01-01

    This paper presents a method for improving airway tree segmentation using vessel orientation information. We use the fact that an airway branch is always accompanied by an artery, with both structures having similar orientations. This work is based on a voxel classification airway segmentation method proposed previously. The probability of a voxel belonging to the airway, from the voxel classification method, is augmented with an orientation similarity measure as a criterion for region growing. The orientation similarity measure of a voxel indicates how similar the orientation of the surroundings of the voxel, estimated based on a tube model, is to that of a neighboring vessel. The proposed method is tested on 20 CT images from different subjects selected randomly from a lung cancer screening study. The lengths of the airway branches from the results of the proposed method are significantly...

  19. Yarn-dyed fabric defect classification based on convolutional neural network

    Science.gov (United States)

    Jing, Junfeng; Dong, Amei; Li, Pengfei; Zhang, Kaibing

    2017-09-01

    Considering that manual inspection of yarn-dyed fabric can be time-consuming and inefficient, we propose a yarn-dyed fabric defect classification method using a convolutional neural network (CNN) based on a modified AlexNet. A CNN shows powerful ability in performing feature extraction and fusion by simulating the learning mechanism of the human brain. The local response normalization layers in AlexNet are replaced by batch normalization layers, which can enhance both the computational efficiency and the classification accuracy. In the training process of the network, the characteristics of the defect are extracted step by step, and the essential features of the image can be obtained from the fusion of the edge details with several convolution operations. Then the max-pooling layers, the dropout layers, and the fully connected layers are employed in the classification model to reduce the computation cost and extract more precise features of the defective fabric. Finally, the results of the defect classification are predicted by the softmax function. The experimental results show promising performance with an acceptable average classification rate and strong robustness on yarn-dyed fabric defect classification.
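
    A PyTorch sketch of the stated modification: an AlexNet-style convolutional stem whose local response normalization layers are replaced by batch normalization, followed by max pooling and a dropout-regularized classifier head. Layer sizes are illustrative, not the paper's exact configuration.

        import torch.nn as nn

        class FabricDefectNet(nn.Module):
            def __init__(self, n_classes=5):
                super().__init__()
                self.features = nn.Sequential(
                    nn.Conv2d(3, 64, kernel_size=11, stride=4, padding=2),
                    nn.BatchNorm2d(64),                       # replaces AlexNet's first LRN layer
                    nn.ReLU(inplace=True),
                    nn.MaxPool2d(3, stride=2),
                    nn.Conv2d(64, 192, kernel_size=5, padding=2),
                    nn.BatchNorm2d(192),                      # replaces the second LRN layer
                    nn.ReLU(inplace=True),
                    nn.MaxPool2d(3, stride=2),
                )
                self.classifier = nn.Sequential(
                    nn.Flatten(),
                    nn.Dropout(0.5),
                    nn.LazyLinear(n_classes),                 # softmax is applied in the loss
                )

            def forward(self, x):
                return self.classifier(self.features(x))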

  20. Dynamic Allocation of a Domestic Heating Task to Gas-Based and Heatpump-Based Heating Agents

    NARCIS (Netherlands)

    Treur, J.

    2013-01-01

    In this paper a multi-agent model for a domestic heating task is introduced and analysed. The model includes two alternative heating agents (for gas-based heating and for heatpump-based heating), and a third allocation agent which determines the most economic allocation of the heating task to these

  1. Agent-based simulation of a financial market

    Science.gov (United States)

    Raberto, Marco; Cincotti, Silvano; Focardi, Sergio M.; Marchesi, Michele

    2001-10-01

    This paper introduces an agent-based artificial financial market in which heterogeneous agents trade one single asset through a realistic trading mechanism for price formation. Agents are initially endowed with a finite amount of cash and a given finite portfolio of assets. There is no money-creation process; the total available cash is conserved in time. In each period, agents make random buy and sell decisions that are constrained by available resources, subject to clustering, and dependent on the volatility of previous periods. The model proposed herein is able to reproduce the leptokurtic shape of the probability density of log price returns and the clustering of volatility. Implemented using extreme programming and object-oriented technology, the simulator is a flexible computational experimental facility that can find applications in both academic and industrial research projects.
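
    A toy sketch of the mechanism: agents with finite cash and asset endowments place random, resource-constrained orders each period, and the price moves with the order imbalance. Unlike the paper's closed system, orders here execute against an implicit counterparty, and the price-impact rule and parameters are illustrative simplifications.

        import numpy as np

        rng = np.random.default_rng(42)
        n_agents, n_steps = 100, 1000
        cash = np.full(n_agents, 1000.0)
        assets = np.full(n_agents, 100.0)
        price, prices = 10.0, []

        for _ in range(n_steps):
            side = rng.choice([-1, 0, 1], size=n_agents)          # sell / hold / buy
            qty = rng.integers(1, 5, size=n_agents).astype(float)
            qty = np.where(side > 0, np.minimum(qty, cash // price), qty)   # budget constraint
            qty = np.where(side < 0, np.minimum(qty, assets), qty)          # holdings constraint
            price *= np.exp(0.0005 * float((side * qty).sum()))   # simple price-impact rule
            cash -= side * qty * price
            assets += side * qty
            prices.append(price)

        log_returns = np.diff(np.log(prices))   # inspect for fat tails and volatility clustering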

  2. An Intelligent Fleet Condition-Based Maintenance Decision Making Method Based on Multi-Agent

    OpenAIRE

    Bo Sun; Qiang Feng; Songjie Li

    2012-01-01

    To meet the demand for online condition-based maintenance decision making in a mission-oriented fleet, an intelligent maintenance decision-making method based on multi-agent technology and heuristic rules is proposed. The process of condition-based maintenance within an aircraft fleet (each aircraft containing one or more Line Replaceable Modules) based on multiple maintenance thresholds is analyzed. Then the process is abstracted into a Multi-Agent Model, a 2-layer model structure containing host negoti...

  3. Generative embedding for model-based classification of fMRI data.

    Directory of Open Access Journals (Sweden)

    Kay H Brodersen

    2011-06-01

    Full Text Available Decoding models, such as those underlying multivariate classification algorithms, have been increasingly used to infer cognitive or clinical brain states from measures of brain activity obtained by functional magnetic resonance imaging (fMRI). The practicality of current classifiers, however, is restricted by two major challenges. First, due to the high data dimensionality and low sample size, algorithms struggle to separate informative from uninformative features, resulting in poor generalization performance. Second, popular discriminative methods such as support vector machines (SVMs) rarely afford mechanistic interpretability. In this paper, we address these issues by proposing a novel generative-embedding approach that incorporates neurobiologically interpretable generative models into discriminative classifiers. Our approach extends previous work on trial-by-trial classification for electrophysiological recordings to subject-by-subject classification for fMRI and offers two key advantages over conventional methods: it may provide more accurate predictions by exploiting discriminative information encoded in 'hidden' physiological quantities such as synaptic connection strengths; and it affords mechanistic interpretability of clinical classifications. Here, we introduce generative embedding for fMRI using a combination of dynamic causal models (DCMs) and SVMs. We propose a general procedure of DCM-based generative embedding for subject-wise classification, provide a concrete implementation, and suggest good-practice guidelines for unbiased application of generative embedding in the context of fMRI. We illustrate the utility of our approach by a clinical example in which we classify moderately aphasic patients and healthy controls using a DCM of thalamo-temporal regions during speech processing. Generative embedding achieves a near-perfect balanced classification accuracy of 98% and significantly outperforms conventional activation-based and...

  4. A Coupled Simulation Architecture for Agent-Based/Geohydrological Modelling

    Science.gov (United States)

    Jaxa-Rozen, M.

    2016-12-01

    The quantitative modelling of social-ecological systems can provide useful insights into the interplay between social and environmental processes, and their impact on emergent system dynamics. However, such models should acknowledge the complexity and uncertainty of both of the underlying subsystems. For instance, the agent-based models which are increasingly popular for groundwater management studies can be made more useful by directly accounting for the hydrological processes which drive environmental outcomes. Conversely, conventional environmental models can benefit from an agent-based depiction of the feedbacks and heuristics which influence the decisions of groundwater users. From this perspective, this work describes a Python-based software architecture which couples the popular NetLogo agent-based platform with the MODFLOW/SEAWAT geohydrological modelling environment. This approach enables users to implement agent-based models in NetLogo's user-friendly platform, while benefiting from the full capabilities of MODFLOW/SEAWAT packages or reusing existing geohydrological models. The software architecture is based on the pyNetLogo connector, which provides an interface between the NetLogo agent-based modelling software and the Python programming language. This functionality is then extended and combined with Python's object-oriented features, to design a simulation architecture which couples NetLogo with MODFLOW/SEAWAT through the FloPy library (Bakker et al., 2016). The Python programming language also provides access to a range of external packages which can be used for testing and analysing the coupled models, which is illustrated for an application of Aquifer Thermal Energy Storage (ATES).
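
    A compressed sketch of the coupling loop, assuming the pyNetLogo and FloPy packages are installed; the model files, reporter names, and the well-package update are hypothetical placeholders, not the architecture from the paper:

```python
import pyNetLogo
import flopy

# Hypothetical file names; any NetLogo ABM and MODFLOW model could be used.
netlogo = pyNetLogo.NetLogoLink(gui=False)
netlogo.load_model('groundwater_abm.nlogo')
netlogo.command('setup')

mf = flopy.modflow.Modflow.load('aquifer.nam')

for step in range(120):                      # e.g., monthly coupling steps
    netlogo.command('go')                    # agents decide pumping rates
    pumping = netlogo.report('[pumping-rate] of farmers')
    # ...write the agents' pumping rates into the MODFLOW well package here...
    success, _ = mf.run_model(silent=True)
    heads = flopy.utils.HeadFile('aquifer.hds').get_data()
    # feed the simulated water table back to the agents
    netlogo.command(f'set water-table {heads.mean()}')

netlogo.kill_workspace()
```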

  5. Agent-based transportation planning compared with scheduling heuristics

    NARCIS (Netherlands)

    Mes, Martijn R.K.; van der Heijden, Matthijs C.; van Harten, Aart

    2004-01-01

    Here we consider the problem of dynamically assigning vehicles to transportation orders that have different time windows and should be handled in real time. We introduce a new agent-based system for the planning and scheduling of these transportation networks. Intelligent vehicle agents schedule

  6. Object-Based Crop Species Classification Based on the Combination of Airborne Hyperspectral Images and LiDAR Data

    Directory of Open Access Journals (Sweden)

    Xiaolong Liu

    2015-01-01

    Full Text Available Identification of crop species is an important issue in agricultural management. In recent years, many studies have explored this topic using multi-spectral and hyperspectral remote sensing data. In this study, we propose a framework for mapping crop species by combining hyperspectral and Light Detection and Ranging (LiDAR) data in an object-based image analysis (OBIA) paradigm. The aims of this work were the following: (i) to understand the performance of different spectral dimension-reduced features from hyperspectral data, and of their combination with LiDAR-derived height information, in image segmentation; (ii) to understand what classification accuracies of crop species can be achieved by combining hyperspectral and LiDAR data in an OBIA paradigm, especially in regions that have a fragmented agricultural landscape and a complicated crop planting structure; and (iii) to understand the contributions of the crop height derived from LiDAR data, as well as of the geometric and textural features of image objects, to the separability of crop species. The study region was an irrigated agricultural area in the central Heihe river basin, which is characterized by many crop species, complicated crop planting structures, and a fragmented landscape. The airborne hyperspectral data acquired by the Compact Airborne Spectrographic Imager (CASI) with a 1 m spatial resolution and the Canopy Height Model (CHM) data derived from the LiDAR data acquired by the airborne Leica ALS70 LiDAR system were used for this study. The image segmentation accuracies of different feature combination schemes (very high-resolution imagery (VHR), VHR/CHM, and minimum noise fraction transformed data (MNF)/CHM) were evaluated and analyzed. The results showed that VHR/CHM outperformed the other two combination schemes with a segmentation accuracy of 84.8%. The object-based crop species classification results of different feature integrations indicated that...

  7. American College of Rheumatology classification criteria for Sjögren's syndrome

    DEFF Research Database (Denmark)

    Shiboski, S C; Shiboski, C H; Criswell, L A

    2012-01-01

    We propose new classification criteria for Sjögren's syndrome (SS), which are needed considering the emergence of biologic agents as potential treatments and their associated comorbidity. These criteria target individuals with signs/symptoms suggestive of SS.

  8. Improving Generalization Based on l1-Norm Regularization for EEG-Based Motor Imagery Classification

    Directory of Open Access Journals (Sweden)

    Yuwei Zhao

    2018-05-01

    Full Text Available Multichannel electroencephalography (EEG) is widely used in typical brain-computer interface (BCI) systems. In general, a large number of parameters is essential for an EEG classification algorithm due to the redundant features involved in EEG signals. However, the generalization of an EEG method is often adversely affected by the model complexity, which is closely tied to its number of undetermined parameters, further leading to heavy overfitting. To decrease the complexity and improve the generalization of the EEG method, we present a novel l1-norm-based approach that combines the decision values obtained from each EEG channel directly. By extracting the information from different channels on independent frequency bands (FBs) with l1-norm regularization, the proposed method fits the training data with far fewer parameters than common spatial pattern (CSP) methods, in order to reduce overfitting. Moreover, an effective and efficient solution to minimize the optimization objective is proposed. The experimental results on dataset IVa of BCI competition III and dataset I of BCI competition IV show that the proposed method yields high classification accuracy and increases generalization performance for the classification of MI EEG. As the training set ratio decreases from 80 to 20%, the average classification accuracy on the two datasets changes from 85.86 and 86.13% to 84.81 and 76.59%, respectively. The classification performance and generalization of the proposed method contribute to the practical application of MI-based BCI systems.
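
    The channel-combination idea can be pictured as follows (an illustrative reduction, not the authors' optimization: here an l1-penalized linear model simply learns a sparse weighting of per-channel, per-band decision values; all shapes are assumptions):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# X: per-trial decision values, one column per (channel, frequency band)
# pair; y: motor-imagery class labels. Shapes are illustrative.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 22 * 5))   # e.g., 22 channels x 5 bands
y = rng.integers(0, 2, size=200)

# The l1 penalty drives most combination weights to zero, so the final
# classifier uses far fewer parameters than a dense combination would.
clf = LogisticRegression(penalty='l1', solver='liblinear', C=0.1)
clf.fit(X, y)
print('non-zero weights:', np.count_nonzero(clf.coef_))
```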

  9. Video based object representation and classification using multiple covariance matrices.

    Science.gov (United States)

    Zhang, Yurong; Liu, Quan

    2017-01-01

    Video-based object recognition and classification have been widely studied in the computer vision and image processing areas. One main issue of this task is to develop an effective representation for video. This problem can generally be formulated as image set representation. In this paper, we present a new method called Multiple Covariance Discriminative Learning (MCDL) for the image set representation and classification problem. The core idea of MCDL is to represent an image set using multiple covariance matrices, with each covariance matrix representing one cluster of images. Firstly, we use the Nonnegative Matrix Factorization (NMF) method to cluster the images within each image set, and then adopt covariance discriminative learning on each cluster (subset) of images. At last, we adopt KLDA and a nearest neighbor classification method for image set classification. Promising experimental results on several datasets show the effectiveness of our MCDL method.
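
    A compressed sketch of the representation step only (the KLDA and nearest neighbor stages are omitted, and shapes are hypothetical): cluster the images of a set with NMF, then summarize each cluster by a covariance matrix.

```python
import numpy as np
from sklearn.decomposition import NMF

def set_to_covariances(images, n_clusters=3):
    """Represent an image set (n_images x n_features, non-negative
    pixel values) by one covariance matrix per NMF cluster."""
    W = NMF(n_components=n_clusters, init='nndsvda',
            max_iter=500).fit_transform(images)
    labels = W.argmax(axis=1)      # assign each image to its dominant factor
    covs = []
    for k in range(n_clusters):
        members = images[labels == k]
        if len(members) > 1:
            covs.append(np.cov(members, rowvar=False))
    return covs
```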

  10. Agent-based enterprise integration

    Energy Technology Data Exchange (ETDEWEB)

    N. M. Berry; C. M. Pancerella

    1998-12-01

    The authors are developing and deploying software agents in an enterprise information architecture such that the agents manage enterprise resources and facilitate user interaction with these resources. The enterprise agents are built on top of a robust software architecture for data exchange and tool integration across heterogeneous hardware and software. The resulting distributed multi-agent system serves as a method of enhancing enterprises in the following ways: providing users with knowledge about enterprise resources and applications; accessing the dynamically changing enterprise; locating enterprise applications and services; and improving search capabilities for applications and data. Furthermore, agents can access non-agents (i.e., databases and tools) through the enterprise framework. The ultimate target of the effort is the user; they are attempting to increase user productivity in the enterprise. This paper describes their design and early implementation and discusses the planned future work.

  11. Torrent classification - Base of rational management of erosive regions

    International Nuclear Information System (INIS)

    Gavrilovic, Zoran; Stefanovic, Milutin; Milovanovic, Irina; Cotric, Jelena; Milojevic, Mileta

    2008-01-01

    A complex methodology for torrents and erosion and the associated calculations, the 'Erosion Potential Method', was developed in Serbia during the second half of the twentieth century. One of the modules of that complex method was focused on torrent classification. The module enables the identification of hydrographic, climate and erosion characteristics. The method makes it possible for each torrent, regardless of its magnitude, to be simply and recognizably described by the 'formula of torrentiality'. This torrent classification is the base on which a set of optimisation calculations is developed for the required scope of erosion-control works and measures, the application of which enables the management of significantly larger erosion and torrential regions compared to the previous period. This paper will present the procedure and the method of torrent classification.

  12. A classification model of Hyperion image base on SAM combined decision tree

    Science.gov (United States)

    Wang, Zhenghai; Hu, Guangdao; Zhou, YongZhang; Liu, Xin

    2009-10-01

    Monitoring the Earth using imaging spectrometers has necessitated more accurate analyses and new applications of remote sensing. A very high-dimensional input space requires an exponentially large amount of data to adequately and reliably represent the classes in that space; on the other hand, as the input dimensionality increases, the hypothesis space grows exponentially, which makes classification performance highly unreliable. Traditional classification algorithms fall short, so classification of hyperspectral images is challenging and new algorithms have to be developed for hyperspectral data classification. The Spectral Angle Mapper (SAM) is a physically based spectral classification that uses an n-dimensional angle to match pixels to reference spectra. The algorithm determines the spectral similarity between two spectra by calculating the angle between them, treating them as vectors in a space with dimensionality equal to the number of bands. The key difficulty is that the threshold of SAM must be defined manually, and the classification precision depends on how rational that threshold is. To resolve this problem, this paper proposes a new automatic classification model for remote sensing images using SAM combined with a decision tree. It can automatically choose an appropriate threshold for SAM and improve the classification precision of SAM based on the analysis of field spectra. The test area, located in Heqing, Yunnan, was imaged by the EO_1 Hyperion imaging spectrometer using 224 bands in the visible and near infrared. The area included limestone areas, rock fields, soil and forests, and was classified into four different vegetation and soil types. The results show that this method chooses an appropriate threshold for SAM and effectively eliminates the disturbance and influence of unwanted objects, thus improving the classification precision. Compared with the likelihood classification by field survey data, the classification precision of this model...
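
    The spectral angle itself is straightforward to compute; a minimal sketch (the reference spectra and the threshold value are illustrative, and the paper's decision tree would replace the single hand-set threshold shown here):

```python
import numpy as np

def spectral_angle(pixel, reference):
    """Angle (radians) between a pixel spectrum and a reference
    spectrum, each treated as a vector with one component per band."""
    cos = np.dot(pixel, reference) / (
        np.linalg.norm(pixel) * np.linalg.norm(reference))
    return np.arccos(np.clip(cos, -1.0, 1.0))

def sam_classify(pixel, references, threshold=0.1):
    """Assign the pixel to the reference with the smallest angle,
    or leave it unclassified (-1) if no angle is below the threshold."""
    angles = [spectral_angle(pixel, r) for r in references]
    best = int(np.argmin(angles))
    return best if angles[best] < threshold else -1
```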

  13. Joint Probability-Based Neuronal Spike Train Classification

    Directory of Open Access Journals (Sweden)

    Yan Chen

    2009-01-01

    Full Text Available Neuronal spike trains are used by the nervous system to encode and transmit information. Euclidean distance-based methods (EDBMs) have been applied to quantify the similarity between temporally discretized spike trains and model responses. In this study, using the same discretization procedure, we developed and applied a joint probability-based method (JPBM) to classify individual spike trains of slowly adapting pulmonary stretch receptors (SARs). The activity of individual SARs was recorded in anaesthetized, paralysed adult male rabbits, which were artificially ventilated at constant rate and one of three different volumes. Two-thirds of the responses to the 600 stimuli presented at each volume were used to construct three response models (one for each stimulus volume) consisting of a series of time bins, each with spike probabilities. The remaining one-third of the responses were used as test responses to be classified into one of the three model responses. This was done by computing the joint probability of observing, in a given model, the same series of events (spikes or no spikes) dictated by the test response, and determining which of the three probabilities was highest. The JPBM generally produced better classification accuracy than the EDBM, and both performed well above chance. Both methods were similarly affected by variations in discretization parameters, response epoch duration, and two different response alignment strategies. Increasing bin widths increased classification accuracy, which also improved with increased observation time, but primarily during periods of increasing lung inflation. Thus, the JPBM is a simple and effective method for spike train classification.
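
    The classification rule reduces to comparing the joint (log-)probabilities of the binarized test response under each volume's model; a schematic version (bin width and model construction are simplified):

```python
import numpy as np

def joint_log_prob(test_bins, model_probs, eps=1e-6):
    """Log joint probability of a binarized spike train (1 = spike in
    bin, 0 = none) under a model giving a spike probability per bin."""
    p = np.clip(model_probs, eps, 1 - eps)
    return np.sum(test_bins * np.log(p) + (1 - test_bins) * np.log(1 - p))

def classify(test_bins, models):
    """Assign the test response to the model (one per stimulus volume)
    with the highest joint probability."""
    scores = [joint_log_prob(test_bins, m) for m in models]
    return int(np.argmax(scores))
```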

  14. The fractional volatility model: An agent-based interpretation

    Science.gov (United States)

    Vilela Mendes, R.

    2008-06-01

    Based on the criteria of mathematical simplicity and consistency with empirical market data, a model with volatility driven by fractional noise has been constructed which provides a fairly accurate mathematical parametrization of the data. Here, some features of the model are reviewed and extended to account for leverage effects. Using agent-based models, one tries to find which agent strategies and/or properties of the financial institutions might be responsible for the features of the fractional volatility model.

  15. Natural Language Processing Based Instrument for Classification of Free Text Medical Records

    Directory of Open Access Journals (Sweden)

    Manana Khachidze

    2016-01-01

    Full Text Available According to the Ministry of Labor, Health and Social Affairs of Georgia, a new health management system has to be introduced in the near future. In this context arises the problem of structuring and classifying documents containing the entire history of the medical services provided. The present work introduces an instrument for the classification of medical records in the Georgian language; it is the first attempt at such classification of Georgian-language medical records. In total, 24,855 examination records were studied. The documents were classified into three main groups (ultrasonography, endoscopy, and X-ray) and 13 subgroups using two well-known methods: Support Vector Machine (SVM) and K-Nearest Neighbor (KNN). The results obtained demonstrated that both machine learning methods performed successfully, with SVM slightly ahead. In the process of classification, a 'shrink' method based on feature selection was introduced and applied. At the first stage of classification the results of the 'shrink' case were better; however, at the second stage of classification into subclasses, 23% of all documents could not be linked to one definite individual subclass (liver or biliary system) due to common features characterizing these subclasses. The overall results of the study were successful.

  16. A Framework For Agent-Based Educational Guidance And ...

    African Journals Online (AJOL)

    This work applies principles of artificial intelligence and agent development to educational guidance and counselling. An agent-based expert system is developed. The system supports the storage and intelligent interactive processing of the knowledge acquired by study and experience of the human expert in the domain ...

  17. Markov chain aggregation for agent-based models

    CERN Document Server

    Banisch, Sven

    2016-01-01

    This self-contained text develops a Markov chain approach that makes the rigorous analysis of a class of microscopic models that specify the dynamics of complex systems at the individual level possible. It presents a general framework of aggregation in agent-based and related computational models, one which makes use of lumpability and information theory in order to link the micro and macro levels of observation. The starting point is a microscopic Markov chain description of the dynamical process in complete correspondence with the dynamical behavior of the agent-based model (ABM), which is obtained by considering the set of all possible agent configurations as the state space of a huge Markov chain. An explicit formal representation of a resulting “micro-chain” including microscopic transition rates is derived for a class of models by using the random mapping representation of a Markov process. The type of probability distribution used to implement the stochastic part of the model, which defines the upd...

  18. Modeling collective emotions: a stochastic approach based on Brownian agents

    International Nuclear Information System (INIS)

    Schweitzer, F.

    2010-01-01

    We develop an agent-based framework to model the emergence of collective emotions, which is applied to online communities. Agents' individual emotions are described by their valence and arousal. Using the concept of Brownian agents, these variables change according to stochastic dynamics, which also consider the feedback from online communication. Agents generate emotional information, which is stored and distributed in a field modeling the online medium. This field affects the emotional states of agents in a non-linear manner. We derive conditions for the emergence of collective emotions, observable in a bimodal valence distribution. Depending on whether the feedback between the information field and the agent's arousal is saturated or superlinear, we further identify scenarios where collective emotions appear only once or in a repeated manner. The analytical results are illustrated by agent-based computer simulations. Our framework provides testable hypotheses about the emergence of collective emotions, which can be verified by data from online communities. (author)
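
    The valence dynamics can be pictured with an Euler-Maruyama update; all coefficients and the field coupling below are illustrative stand-ins for the paper's equations, not the published model:

```python
import numpy as np

rng = np.random.default_rng(1)
N, steps, dt = 500, 2000, 0.01
valence = rng.normal(0, 0.1, N)
field_pos = field_neg = 0.0        # emotional information in the medium
gamma, noise, decay, s = 0.5, 0.3, 0.9, 0.1

for _ in range(steps):
    # non-linear feedback of the information field on each agent's valence
    drive = s * (field_pos - field_neg) * (1 + valence**2)
    valence += (-gamma * valence + drive) * dt \
               + noise * np.sqrt(dt) * rng.normal(size=N)
    # agents express emotions into the field, which also decays
    field_pos = decay * field_pos + np.sum(valence > 0.5) / N
    field_neg = decay * field_neg + np.sum(valence < -0.5) / N

# A bimodal histogram of `valence` indicates a collective emotion.
```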

  19. Dendrimer-based Macromolecular MRI Contrast Agents: Characteristics and Application

    Directory of Open Access Journals (Sweden)

    Hisataka Kobayashi

    2003-01-01

    Full Text Available Numerous macromolecular MRI contrast agents prepared employing relatively simple chemistry may be readily available that can provide sufficient enhancement for multiple applications. These agents operate using a ~100-fold lower concentration of gadolinium ions in comparison to the necessary concentration of iodine employed in CT imaging. Herein, we describe some of the general potential directions of macromolecular MRI contrast agents using our recently reported families of dendrimer-based agents as examples. Changes in molecular size altered the route of excretion. Smaller-sized contrast agents less than 60 kDa molecular weight were excreted through the kidney resulting in these agents being potentially suitable as functional renal contrast agents. Hydrophilic and larger-sized contrast agents were found better suited for use as blood pool contrast agents. Hydrophobic variants formed with polypropylenimine diaminobutane dendrimer cores created liver contrast agents. Larger hydrophilic agents are useful for lymphatic imaging. Finally, contrast agents conjugated with either monoclonal antibodies or with avidin are able to function as tumor-specific contrast agents, which also might be employed as therapeutic drugs for either gadolinium neutron capture therapy or in conjunction with radioimmunotherapy.

  20. Classification of Hearing Loss Disorders Using Teoae-Based Descriptors

    Science.gov (United States)

    Hatzopoulos, Stavros Dimitris

    Transiently Evoked Otoacoustic Emissions (TEOAE) are signals produced by the cochlea upon stimulation by an acoustic click. Within the context of this dissertation, it was hypothesized that the relationship between the TEOAEs and the functional status of the OHCs provided an opportunity for designing a TEOAE-based clinical procedure that could be used to assess cochlear function. To understand the nature of the TEOAE signals in the time and the frequency domain several different analyses were performed. Using normative Input-Output (IO) curves, short-time FFT analyses and cochlear computer simulations, it was found that for optimization of the hearing loss classification it is necessary to use a complete 20 ms TEOAE segment. It was also determined that various 2-D filtering methods (median and averaging filtering masks, LP-FFT) used to enhance of the TEOAE S/N offered minimal improvement (less than 6 dB per stimulus level). Higher S/N improvements resulted in TEOAE sequences that were over-smoothed. The final classification algorithm was based on a statistical analysis of raw FFT data and when applied to a sample set of clinically obtained TEOAE recordings (from 56 normal and 66 hearing-loss subjects) correctly identified 94.3% of the normal and 90% of the hearing loss subjects, at the 80 dB SPL stimulus level. To enhance the discrimination between the conductive and the sensorineural populations, data from the 68 dB SPL stimulus level were used, which yielded a normal classification of 90.2%, a hearing loss classification of 87.5% and a conductive-sensorineural classification of 87%. Among the hearing-loss populations the best discrimination was obtained in the group of otosclerosis and the worst in the group of acute acoustic trauma.

  1. Radiographic classification for fractures of the fifth metatarsal base

    International Nuclear Information System (INIS)

    Mehlhorn, Alexander T.; Zwingmann, Joern; Hirschmueller, Anja; Suedkamp, Norbert P.; Schmal, Hagen

    2014-01-01

    Avulsion fractures of the fifth metatarsal base (MTB5) are common forefoot injuries. Based on a radiomorphometric analysis reflecting the risk for a secondary displacement, a new classification was developed. A cohort of 95 healthy, sportive, and young patients (age ≤ 50 years) with avulsion fractures of the MTB5 was included in the study and divided into groups with non-displaced, primary-displaced, and secondary-displaced fractures. Radiomorphometric data obtained using standard oblique and dorso-plantar views were analyzed in association with secondary displacement. Based on this, a classification was developed and checked for reproducibility. Fractures with a longer distance between the lateral edge of the styloid process and the lateral fracture step-off and fractures with a more medial joint entry of the fracture line at the MTB5 are at higher risk of secondary displacement. Based on these findings, all fractures were divided into three types: type I with a fracture entry in the lateral third; type II in the middle third; and type III in the medial third of the MTB5. Additionally, the three types were subdivided into an A-type with a fracture displacement <2 mm and a B-type with a fracture displacement ≥ 2 mm. A substantial level of interobserver agreement was found in the assignment of all 95 fractures to the six fracture types (κ = 0.72). The secondary displacement of fractures was confirmed by all examiners in 100 %. Radiomorphometric data may identify fractures at risk for secondary displacement of the MTB5. Based on this, a reliable classification was developed. (orig.)

  2. Radiographic classification for fractures of the fifth metatarsal base

    Energy Technology Data Exchange (ETDEWEB)

    Mehlhorn, Alexander T.; Zwingmann, Joern; Hirschmueller, Anja; Suedkamp, Norbert P.; Schmal, Hagen [University of Freiburg Medical Center, Department of Orthopaedic Surgery, Freiburg (Germany)

    2014-04-15

    Avulsion fractures of the fifth metatarsal base (MTB5) are common forefoot injuries. Based on a radiomorphometric analysis reflecting the risk for a secondary displacement, a new classification was developed. A cohort of 95 healthy, sportive, and young patients (age ≤ 50 years) with avulsion fractures of the MTB5 was included in the study and divided into groups with non-displaced, primary-displaced, and secondary-displaced fractures. Radiomorphometric data obtained using standard oblique and dorso-plantar views were analyzed in association with secondary displacement. Based on this, a classification was developed and checked for reproducibility. Fractures with a longer distance between the lateral edge of the styloid process and the lateral fracture step-off and fractures with a more medial joint entry of the fracture line at the MTB5 are at higher risk of secondary displacement. Based on these findings, all fractures were divided into three types: type I with a fracture entry in the lateral third; type II in the middle third; and type III in the medial third of the MTB5. Additionally, the three types were subdivided into an A-type with a fracture displacement <2 mm and a B-type with a fracture displacement ≥ 2 mm. A substantial level of interobserver agreement was found in the assignment of all 95 fractures to the six fracture types (κ = 0.72). The secondary displacement of fractures was confirmed by all examiners in 100 %. Radiomorphometric data may identify fractures at risk for secondary displacement of the MTB5. Based on this, a reliable classification was developed. (orig.)

  3. Towards an agent-oriented programming language based on Scala

    Science.gov (United States)

    Mitrović, Dejan; Ivanović, Mirjana; Budimac, Zoran

    2012-09-01

    Scala and its multi-threaded model based on actors represent an excellent framework for developing purely reactive agents. This paper presents early research on extending Scala with declarative programming constructs, which would result in a new agent-oriented programming language suitable for developing more advanced BDI agent architectures. The main advantage of the new language over many other existing solutions for programming BDI agents is the natural and straightforward integration of imperative and declarative programming constructs under a single development framework.

  4. Gadolinium-based contrast agents in pediatric magnetic resonance imaging

    Energy Technology Data Exchange (ETDEWEB)

    Gale, Eric M.; Caravan, Peter [Massachusetts General Hospital, Harvard Medical School, Department of Radiology, The Martinos Center for Biomedical Imaging, Boston, MA (United States); Rao, Anil G. [Medical University of South Carolina, Department of Radiology and Radiological Science, Charleston, SC (United States); McDonald, Robert J. [College of Medicine, Mayo Clinic, Department of Radiology, Rochester, MN (United States); Winfeld, Matthew [University of Pennsylvania Perelman School of Medicine, Philadelphia, PA (United States); Fleck, Robert J. [Cincinnati Children' s Hospital Medical Center, Department of Pediatric Radiology, Cincinnati, OH (United States); Gee, Michael S. [MassGeneral Hospital for Children, Harvard Medical School, Division of Pediatric Imaging, Department of Radiology, Boston, MA (United States)

    2017-05-15

    Gadolinium-based contrast agents can increase the accuracy and expediency of an MRI examination. However the benefits of a contrast-enhanced scan must be carefully weighed against the well-documented risks associated with administration of exogenous contrast media. The purpose of this review is to discuss commercially available gadolinium-based contrast agents (GBCAs) in the context of pediatric radiology. We discuss the chemistry, regulatory status, safety and clinical applications, with particular emphasis on imaging of the blood vessels, heart, hepatobiliary tree and central nervous system. We also discuss non-GBCA MRI contrast agents that are less frequently used or not commercially available. (orig.)

  5. Chinese wine classification system based on micrograph using combination of shape and structure features

    Science.gov (United States)

    Wan, Yi

    2011-06-01

    Chinese wines can be classified or graded from their micrographs. Micrographs of Chinese wines show floccules, sticks and granules of varying shape and size. Since different wines have different microstructures and micrographs, we study the classification of Chinese wines based on the micrographs. The shape and structure of a wine's particles in the microstructure are the most important features for recognition and classification of wines, so we introduce a feature extraction method which can describe the structure and region shape of a micrograph efficiently. First, the micrographs are enhanced using total variation denoising and segmented using a modified Otsu's method based on the Rayleigh distribution. Then features are extracted using the method proposed in this paper, based on area, perimeter and traditional shape features; eight kinds of features, 26 in total, are selected. Finally, a Chinese wine classification system based on micrographs, using the combination of shape and structure features and a BP neural network, is presented. We compare the recognition results for different choices of features (traditional shape features or the proposed features). The experimental results show that a better classification rate is achieved using the combined features proposed in this paper.

  6. Desert plains classification based on Geomorphometrical parameters (Case study: Aghda, Yazd)

    Science.gov (United States)

    Tazeh, mahdi; Kalantari, Saeideh

    2013-04-01

    This research focuses on plains. Several methods and classifications have been presented for plain classification. One natural-resource-based classification, widely used in Iran, divides plains into three types: erosional pediment, denudation pediment, and aggradational piedmont; qualitative and quantitative factors to differentiate them from each other are used as appropriate. In this study, the geomorphometrical parameters effective in differentiating landforms were applied to plains. Geomorphometrical parameters are calculable and can be extracted using mathematical equations and the corresponding relations on a digital elevation model. The geomorphometrical parameters used in this study included percent of slope, plan curvature, profile curvature, minimum curvature, maximum curvature, cross-sectional curvature, longitudinal curvature and Gaussian curvature. The results indicated that the most important geomorphometrical parameters for plain and desert classification include: percent of slope, minimum curvature, profile curvature, and longitudinal curvature. Key Words: Plain, Geomorphometry, Classification, Biophysical, Yazd Khezarabad.

  7. A canonical correlation analysis based EMG classification algorithm for eliminating electrode shift effect.

    Science.gov (United States)

    Zhe Fan; Zhong Wang; Guanglin Li; Ruomei Wang

    2016-08-01

    Motion classification systems based on surface electromyography (sEMG) pattern recognition have achieved good results in experimental conditions, but clinical implementation and practical application remain a challenge. Many factors contribute to the difficulty of clinical use of EMG-based dexterous control; the most obvious and important is noise in the EMG signal caused by electrode shift, muscle fatigue, motion artifact, the inherent instability of the signal, and biological signals such as the electrocardiogram. In this paper, a novel method based on Canonical Correlation Analysis (CCA) was developed to eliminate the reduction of classification accuracy caused by electrode shift. The average classification accuracy of our method was above 95% for the healthy subjects. In the process, we validated the influence of electrode shift on motion classification accuracy and found a strong correlation (correlation coefficient > 0.9) between shifted-position data and normal-position data.
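
    One way to picture the CCA step (a sketch under assumed data shapes, not the authors' pipeline): learn correlated projections between features recorded at the normal and shifted electrode positions, then map shifted-position features into that correlated space before classification.

```python
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(0)
# Paired feature matrices from the same motions: normal electrode
# position vs. shifted position (trials x features; shapes illustrative).
X_normal = rng.normal(size=(120, 16))
X_shifted = (X_normal @ rng.normal(size=(16, 16))) * 0.5 \
            + rng.normal(size=(120, 16)) * 0.1

cca = CCA(n_components=8)
cca.fit(X_shifted, X_normal)

# Project both recordings into the space where they are maximally
# correlated; a classifier trained on projected normal-position data
# can then be reused after an electrode shift.
X_shifted_c, X_normal_c = cca.transform(X_shifted, X_normal)
```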

  8. A knowledge base architecture for distributed knowledge agents

    Science.gov (United States)

    Riedesel, Joel; Walls, Bryan

    1990-01-01

    A tuple space based object oriented model for knowledge base representation and interpretation is presented. An architecture for managing distributed knowledge agents is then implemented within the model. The general model is based upon a database implementation of a tuple space. Objects are then defined as an additional layer upon the database. The tuple space may or may not be distributed depending upon the database implementation. A language for representing knowledge and inference strategy is defined whose implementation takes advantage of the tuple space. The general model may then be instantiated in many different forms, each of which may be a distinct knowledge agent. Knowledge agents may communicate using tuple space mechanisms as in the LINDA model as well as using more well known message passing mechanisms. An implementation of the model is presented describing strategies used to keep inference tractable without giving up expressivity. An example applied to a power management and distribution network for Space Station Freedom is given.
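
    A toy version of the underlying tuple-space operations, in the spirit of the LINDA model mentioned above (a minimal sketch, not the paper's database-backed implementation; the example tuple is hypothetical):

```python
import threading

class TupleSpace:
    """Minimal LINDA-style tuple space: out() writes a tuple, rd()
    reads a matching tuple, in_() reads and removes it. None fields
    in a pattern act as wildcards."""
    def __init__(self):
        self._tuples = []
        self._lock = threading.Lock()

    def out(self, tup):
        with self._lock:
            self._tuples.append(tup)

    def _match(self, pattern):
        for t in self._tuples:
            if len(t) == len(pattern) and all(
                    p is None or p == v for p, v in zip(pattern, t)):
                return t
        return None

    def rd(self, pattern):
        with self._lock:
            return self._match(pattern)

    def in_(self, pattern):
        with self._lock:
            t = self._match(pattern)
            if t is not None:
                self._tuples.remove(t)
            return t

# Two knowledge agents could then communicate by value:
space = TupleSpace()
space.out(('sensor', 'bus-7', 28.4))
print(space.in_(('sensor', 'bus-7', None)))   # -> ('sensor', 'bus-7', 28.4)
```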

  9. Torrent classification - Base of rational management of erosive regions

    Energy Technology Data Exchange (ETDEWEB)

    Gavrilovic, Zoran; Stefanovic, Milutin; Milovanovic, Irina; Cotric, Jelena; Milojevic, Mileta [Institute for the Development of Water Resources ' Jaroslav Cerni' , 11226 Beograd (Pinosava), Jaroslava Cernog 80 (Serbia)], E-mail: gavrilovicz@sbb.rs

    2008-11-01

    A complex methodology for torrents and erosion and the associated calculations, the 'Erosion Potential Method', was developed in Serbia during the second half of the twentieth century. One of the modules of that complex method was focused on torrent classification. The module enables the identification of hydrographic, climate and erosion characteristics. The method makes it possible for each torrent, regardless of its magnitude, to be simply and recognizably described by the 'formula of torrentiality'. This torrent classification is the base on which a set of optimisation calculations is developed for the required scope of erosion-control works and measures, the application of which enables the management of significantly larger erosion and torrential regions compared to the previous period. This paper will present the procedure and the method of torrent classification.

  10. Automated classification of mouse pup isolation syllables: from cluster analysis to an Excel based ‘mouse pup syllable classification calculator’

    Directory of Open Access Journals (Sweden)

    Jasmine eGrimsley

    2013-01-01

    Full Text Available Mouse pups vocalize at high rates when they are cold or isolated from the nest. The proportions of each syllable type produced carry information about disease state and are being used as behavioral markers for the internal state of animals. Manual classifications of these vocalizations identified ten syllable types based on their spectro-temporal features. However, manual classification of mouse syllables is time consuming and vulnerable to experimenter bias. This study uses an automated cluster analysis to identify acoustically distinct syllable types produced by CBA/CaJ mouse pups, and then compares the results to prior manual classification methods. The cluster analysis identified two syllable types, based on their frequency bands, that have continuous frequency-time structure, and two syllable types featuring abrupt frequency transitions. Although cluster analysis computed fewer syllable types than manual classification, the clusters represented well the probability distributions of the acoustic features within syllables. These probability distributions indicate that some of the manually classified syllable types are not statistically distinct. The characteristics of the four classified clusters were used to generate a Microsoft Excel-based mouse syllable classifier that rapidly categorizes syllables, with over a 90% match, into the syllable types determined by cluster analysis.

  11. Uav-Based Crops Classification with Joint Features from Orthoimage and Dsm Data

    Science.gov (United States)

    Liu, B.; Shi, Y.; Duan, Y.; Wu, W.

    2018-04-01

    Accurate crop classification remains a challenging task because the same crop can show different spectra and different crops can share the same spectrum. Recently, UAV-based remote sensing approaches have gained popularity, not only for their high spatial and temporal resolution, but also for their ability to obtain spectral and spatial data at the same time. This paper focuses on how to take full advantage of spatial and spectral features to improve crop classification accuracy, based on a UAV platform equipped with a general digital camera. Texture and spatial features extracted from the RGB orthoimage and the digital surface model of the monitoring area are analysed and integrated within an SVM classification framework. Extensive experimental results indicate that the overall classification accuracy improves drastically, from 72.9% to 94.5%, when the spatial features are combined, which verifies the feasibility and effectiveness of the proposed method.

  12. Hyperspectral image classification based on local binary patterns and PCANet

    Science.gov (United States)

    Yang, Huizhen; Gao, Feng; Dong, Junyu; Yang, Yang

    2018-04-01

    Hyperspectral image classification has been well acknowledged as one of the challenging tasks of hyperspectral data processing. In this paper, we propose a novel hyperspectral image classification framework based on local binary pattern (LBP) features and PCANet. In the proposed method, linear prediction error (LPE) is first employed to select a subset of informative bands, and LBP is utilized to extract texture features. Then, spectral and texture features are stacked into a high-dimensional vector. Next, the extracted features of a specified position are transformed into a 2-D image. The obtained images of all pixels are fed into PCANet for classification. Experimental results on a real hyperspectral dataset demonstrate the effectiveness of the proposed method.
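
    The texture-extraction step can be sketched with scikit-image (the band selection and LBP parameters below are illustrative; the paper's LPE band selection and PCANet stages are not shown):

```python
import numpy as np
from skimage.feature import local_binary_pattern

def lbp_spectral_features(cube, band_idx, P=8, R=1.0):
    """Stack LBP texture codes from selected bands with the raw
    spectrum of each pixel (cube: rows x cols x bands)."""
    rows, cols, bands = cube.shape
    textures = [local_binary_pattern(cube[:, :, b], P, R, method='uniform')
                for b in band_idx]
    texture = np.stack(textures, axis=-1).reshape(rows * cols, -1)
    spectra = cube.reshape(rows * cols, bands)
    return np.hstack([spectra, texture])   # one feature vector per pixel
```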

  13. Optical beam classification using deep learning: a comparison with rule- and feature-based classification

    Science.gov (United States)

    Alom, Md. Zahangir; Awwal, Abdul A. S.; Lowe-Webb, Roger; Taha, Tarek M.

    2017-08-01

    ... Support Vector Machine (SVM). The experimental results show around 96% classification accuracy using the CNN; the CNN approach also provides recognition results comparable to the present feature-based off-normal detection. The feature-based solution was developed to capture the expertise of a human expert in classifying the images. The misclassified results are further studied to explain the differences and discover any discrepancies or inconsistencies in the current classification.

  14. Classification of forensic autopsy reports through conceptual graph-based document representation model.

    Science.gov (United States)

    Mujtaba, Ghulam; Shuib, Liyana; Raj, Ram Gopal; Rajandram, Retnagowri; Shaikh, Khairunisa; Al-Garadi, Mohammed Ali

    2018-06-01

    Text categorization has been used extensively in recent years to classify plain-text clinical reports. This study employs text categorization techniques for the classification of open narrative forensic autopsy reports. One of the key steps in text classification is document representation, in which a clinical report is transformed into a format that is suitable for classification. The traditional document representation technique for text categorization is the bag-of-words (BoW) technique. In this study, the traditional BoW technique proved ineffective in classifying forensic autopsy reports because it merely extracts frequent, but not necessarily discriminative, features from clinical reports. Moreover, this technique fails to capture word inversion, as well as word-level synonymy and polysemy, when classifying autopsy reports. Hence, the BoW technique suffers from low accuracy and low robustness unless it is improved with contextual and application-specific information. To overcome these limitations of the BoW technique, this research aims to develop an effective conceptual graph-based document representation (CGDR) technique to classify 1500 forensic autopsy reports from four (4) manners of death (MoD) and sixteen (16) causes of death (CoD). Term-based and Systematized Nomenclature of Medicine-Clinical Terms (SNOMED CT) based conceptual features were extracted and represented through graphs. These features were then used to train a two-level text classifier: the first-level classifier was responsible for predicting MoD, and the second-level classifier was responsible for predicting CoD using the proposed conceptual graph-based document representation technique. To demonstrate the significance of the proposed technique, its results were compared with those of six (6) state-of-the-art document representation techniques. Lastly, this study compared the effects of one-level classification and two-level classification on the experimental results...

  15. Proposing a Hybrid Model Based on Robson's Classification for Better Impact on Trends of Cesarean Deliveries.

    Science.gov (United States)

    Hans, Punit; Rohatgi, Renu

    2017-06-01

    To construct a hybrid classification model for cesarean section (CS) deliveries based on woman-characteristics (Robson's classification with additional layers of indications for CS, keeping in view the low-resource settings available in India). This is a cross-sectional study conducted at Nalanda Medical College, Patna. All the women who delivered from January 2016 to May 2016 in the labor ward were included. The results obtained were compared with the values obtained for India from a secondary analysis of the WHO multi-country survey (2010-2011) by Joshua Vogel and colleagues, published in "The Lancet Global Health." The three classifications (indication-based, Robson's, and the hybrid model) were applied to categorize the cesarean deliveries from the same sample of data, and a semiqualitative evaluation was done considering the main characteristics, strengths, and weaknesses of each classification system. The total number of women who delivered during the study period was 1462, of which 471 were CS deliveries. The overall CS rate calculated for NMCH hospital in this period was 32.21% (p = 0.001). The hybrid model scored 23/23, while the scores of the Robson classification and the indication-based classification were 21/23 and 10/23, respectively. A single study centre and referral bias are the limitations of the study. Given the flexibility of the classifications, we constructed a hybrid model based on the woman-characteristics system with additional layers of the other classification. Indication-based classification answers why, Robson classification answers on whom, while through our hybrid model we learn both why and on whom cesarean deliveries are being performed.

  16. Agent Behavior-Based Simulation Study on Mass Collaborative Product Development Process

    Directory of Open Access Journals (Sweden)

    Shuo Zhang

    2015-01-01

    Full Text Available Mass collaborative product development (MCPD) benefits people with highly innovative products at lower cost and shorter lead time, owing to the quick development of group innovation, Internet-based customization, and prototype manufacturing. Simulation is an effective way to study the evolution process and therefore to guarantee the success of MCPD. In this paper, an agent behavior-based simulation approach for MCPD is developed, which models the MCPD process as the interactive process of design agents and environment objects based on Complex Adaptive System (CAS) theory. Next, the structure model of the design agent is proposed, and the modification and collaboration behaviors are described. Third, the agent behavior-based simulation flow of MCPD is designed. At last, simulation experiments are carried out based on an engineering case of mobile phone design. The experimental results show the following: (1) the community scale has significant influence on the MCPD process; (2) the simulation process can explicitly represent the modification and collaboration behaviors of design agents; (3) the community evolution process can be observed and analyzed dynamically based on simulation data.

  17. Image Classification Based on Convolutional Denoising Sparse Autoencoder

    Directory of Open Access Journals (Sweden)

    Shuangshuang Chen

    2017-01-01

    Full Text Available Image classification aims to group images into corresponding semantic categories. Due to the difficulties of interclass similarity and intraclass variability, it is a challenging issue in computer vision. In this paper, an unsupervised feature learning approach called convolutional denoising sparse autoencoder (CDSAE) is proposed, based on the theory of the visual attention mechanism and deep learning methods. Firstly, a saliency detection method is utilized to get training samples for unsupervised feature learning. Next, these samples are sent to the denoising sparse autoencoder (DSAE), followed by a convolutional layer and a local contrast normalization layer. Generally, prior knowledge in a specific task is helpful for the task solution. Therefore, a new pooling strategy, spatial pyramid pooling (SPP) fused with a center-bias prior, is introduced into our approach. Experimental results on two common image datasets (STL-10 and CIFAR-10) demonstrate that our approach is effective in image classification. They also demonstrate that none of the three components (local contrast normalization, SPP fused with center-bias prior, and l2 vector normalization) can be excluded from our proposed approach: they jointly improve image representation and classification performance.

  18. Feature-Based Classification of Amino Acid Substitutions outside Conserved Functional Protein Domains

    Directory of Open Access Journals (Sweden)

    Branislava Gemovic

    2013-01-01

    Full Text Available There are more than 500 amino acid substitutions in each human genome, and bioinformatics tools contribute irreplaceably to the determination of their functional effects. We have developed a feature-based algorithm for the detection of mutations outside conserved functional domains (CFDs) and compared its classification efficacy with the most commonly used phylogeny-based tools, PolyPhen-2 and SIFT. The new algorithm is based on the informational spectrum method (ISM), a feature-based technique, and statistical analysis. Our dataset contained neutral polymorphisms and mutations associated with myeloid malignancies from the epigenetic regulators ASXL1, DNMT3A, EZH2, and TET2. PolyPhen-2 and SIFT had significantly lower accuracies in predicting the effects of amino acid substitutions outside CFDs than expected, with especially low sensitivity. On the other hand, only the ISM algorithm showed statistically significant classification of these sequences. It outperformed PolyPhen-2 and SIFT by 15% and 13%, respectively. These results suggest that feature-based methods, like ISM, are more suitable for the classification of amino acid substitutions outside CFDs than phylogeny-based tools.

  19. Gradient Evolution-based Support Vector Machine Algorithm for Classification

    Science.gov (United States)

    Zulvia, Ferani E.; Kuo, R. J.

    2018-03-01

    This paper proposes a classification algorithm based on support vector machine (SVM) and gradient evolution (GE) algorithms. The SVM algorithm has been widely used in classification; however, its results are significantly influenced by its parameters. Therefore, this paper proposes an improvement of the SVM algorithm which can find the best SVM parameters automatically. The proposed algorithm employs a GE algorithm to determine the SVM parameters automatically; the GE algorithm acts as a global optimizer in finding the best parameters, which are then used by the SVM algorithm. The proposed GE-SVM algorithm is verified on some benchmark datasets and compared with other metaheuristic-based SVM algorithms. The experimental results show that the proposed GE-SVM algorithm obtains better results than the other algorithms tested in this paper.
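
    A schematic stand-in for the idea (the paper's gradient-evolution operators are replaced here by a simple mutate-and-select loop, and the dataset and population sizes are illustrative):

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

# Metaheuristic search over SVM hyperparameters (C, gamma), with
# cross-validated accuracy as the fitness to maximize.
X, y = load_breast_cancer(return_X_y=True)
rng = np.random.default_rng(0)

def fitness(log_c, log_g):
    svc = SVC(C=10**log_c, gamma=10**log_g)
    return cross_val_score(svc, X, y, cv=3).mean()

pop = rng.uniform(-3, 3, size=(8, 2))               # log10(C), log10(gamma)
for gen in range(10):
    scores = np.array([fitness(c, g) for c, g in pop])
    best = pop[scores.argsort()[-4:]]               # keep the best half
    children = best + rng.normal(0, 0.3, best.shape)  # mutate survivors
    pop = np.vstack([best, children])

winner = pop[np.argmax([fitness(c, g) for c, g in pop])]
print('best (log10 C, log10 gamma):', winner)
```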

  20. Agent based Particle Swarm Optimization for Load Frequency Control of Distribution Grid

    DEFF Research Database (Denmark)

    Cha, Seung-Tae; Saleem, Arshad; Wu, Qiuwei

    2012-01-01

    This paper presents a Particle Swarm Optimization (PSO) based multi-agent controller. A real-time digital simulator (RTDS) is used for modelling the power system, while a PSO-based multi-agent LFC algorithm is developed in JAVA for communicating with resource agents and determining the scenario... to stabilize the frequency and voltage after the system enters the islanding operation mode. The proposed algorithm is based on the formulation of an optimization problem using agent-based PSO. The modified IEEE 9-bus system is employed to illustrate the performance of the proposed controller via RTDS...

  1. Evolving cancer classification in the era of personalized medicine: A primer for radiologists

    Energy Technology Data Exchange (ETDEWEB)

    O' Neill, Alibhe C.; Jagannathan, Jyothi P.; Ramaiya, Nikhil H. [Dept. of of Imaging, Dana Farber Cancer Institute, Boston (United States)

    2017-01-15

    Traditionally, tumors were classified based on anatomic location, but specific genetic mutations in cancers are now leading to the treatment of tumors with molecular targeted therapies. This has led to a paradigm shift in the classification and treatment of cancer. Tumors treated with molecular targeted therapies often show morphological changes rather than changes in size, and are associated with class-specific and drug-specific toxicities, different from those encountered with conventional chemotherapeutic agents. It is important for radiologists to be familiar with the new cancer classification and the various treatment strategies employed, in order to effectively communicate and participate in multi-disciplinary care. In this paper we focus on lung cancer as a prototype of the new molecular classification.

  2. Atmospheric circulation classification comparison based on wildfires in Portugal

    Science.gov (United States)

    Pereira, M. G.; Trigo, R. M.

    2009-04-01

    Atmospheric circulation classifications are not simply a description of atmospheric states, but a tool to understand and interpret atmospheric processes and to model the relation between atmospheric circulation and surface climate and other related variables (Huth et al., 2008). Classifications were initially developed for weather forecasting purposes; however, with the progress in computer processing capability, new and more robust objective methods were developed and applied to large datasets, making atmospheric circulation classification one of the most important fields in synoptic and statistical climatology. Classification studies have been extensively used in climate change studies (e.g. reconstructing past climates, recent observed changes and future climates), in bioclimatological research (e.g. relating human mortality to climatic factors) and in a wide variety of synoptic climatological applications (e.g. comparison between datasets, air pollution, snow avalanches, wine quality, fish captures and forest fires). Likewise, atmospheric circulation classifications are important for the study of the role of weather in wildfire occurrence in Portugal, because daily synoptic variability is the most important driver of local weather conditions (Pereira et al., 2005). In particular, the objective classification scheme developed by Trigo and DaCamara (2000) to classify the atmospheric circulation affecting Portugal has proved quite useful in discriminating the occurrence and development of wildfires, as well as the distribution over Portugal of surface climatic variables with impact on wildfire activity, such as maximum and minimum temperature and precipitation. This work aims to present: (i) an overview of the existing circulation classifications for the Iberian Peninsula, and (ii) the results of a comparison study between these atmospheric circulation classifications based on their relation with wildfires and relevant meteorological...

  3. CT-based injury classification

    International Nuclear Information System (INIS)

    Mirvis, S.E.; Whitley, N.O.; Vainright, J.; Gens, D.

    1988-01-01

    Review of preoperative abdominal CT scans obtained in adults after blunt trauma during a 2.5-year period demonstrated isolated or predominant liver injury in 35 patients and splenic injury in 33 patients. CT-based injury scores, consisting of five levels of hepatic injury and four levels of splenic injury, were correlated with clinical outcome and surgical findings. Hepatic injury grades I-III, present in 33 of 35 patients, were associated with successful nonsurgical management in 27 (82%) or with findings at celiotomy not requiring surgical intervention in four (12%). Higher grades of splenic injury generally required early operative intervention, but eight (36%) of 22 patients with initial grade III or IV injury were managed without surgery, while four (36%) of 11 patients with grade I or II injury required delayed celiotomy and splenectomy (three patients) or emergent rehospitalization (one patient). CT-based injury classification is useful in guiding the nonoperative management of blunt hepatic injury in hemodynamically stable adults but appears to be less reliable in predicting the outcome of blunt splenic injury

  4. A new circulation type classification based upon Lagrangian air trajectories

    Directory of Open Access Journals (Sweden)

    Alexandre M. Ramos

    2014-10-01

    Full Text Available A new classification method of the large-scale circulation characteristic for a specific target area (NW Iberian Peninsula) is presented, based on the analysis of 90-h backward trajectories arriving in this area calculated with the 3-D Lagrangian particle dispersion model FLEXPART. A cluster analysis is applied to separate the backward trajectories into up to five representative air streams for each day. Specific measures are then used to characterise the distinct air streams (e.g., curvature of the trajectories, cyclonic or anticyclonic flow, moisture evolution, origin and length of the trajectories). The robustness of the presented method is demonstrated in comparison with the Eulerian Lamb weather type classification. A case study of the 2003 heatwave is discussed in terms of the new Lagrangian circulation and the Lamb weather type classifications. It is shown that the new classification method adds valuable information about the pertinent meteorological conditions, which is missing in an Eulerian approach. The new method is climatologically evaluated for the five-year time period from December 1999 to November 2004. The ability of the method to capture the inter-seasonal circulation variability in the target region is shown. Furthermore, the multi-dimensional character of the classification is briefly discussed, in particular with respect to inter-seasonal differences. Finally, the relationship between the new Lagrangian classification and the precipitation in the target area is studied.

  5. Agent-based method for distributed clustering of textual information

    Science.gov (United States)

    Potok, Thomas E [Oak Ridge, TN; Reed, Joel W [Knoxville, TN; Elmore, Mark T [Oak Ridge, TN; Treadwell, Jim N [Louisville, TN

    2010-09-28

    A computer method and system for storing, retrieving and displaying information has a multiplexing agent (20) that calculates a new document vector (25) for a new document (21) to be added to the system and transmits the new document vector (25) to master cluster agents (22) and cluster agents (23) for evaluation. These agents (22, 23) perform the evaluation and return values upstream to the multiplexing agent (20) based on the similarity of the document to documents stored under their control. The multiplexing agent (20) then sends the document (21) and the document vector (25) to the master cluster agent (22), which then forwards it to a cluster agent (23) or creates a new cluster agent (23) to manage the document (21). The system also searches for stored documents according to a search query having at least one term and identifying the documents found in the search, and displays the documents in a clustering display (80) of similarity so as to indicate similarity of the documents to each other.
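
    A minimal sketch of the routing idea this patent describes, assuming scikit-learn: a multiplexing step computes a vector for a new document, polls the cluster "agents" for similarity, and either hands the document to the most similar cluster or creates a new one. All names and the 0.2 threshold are illustrative, not values from the patent.

```python
# Hypothetical sketch of agent-based document routing; not the patented system.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

class ClusterAgent:
    def __init__(self, centroid):
        self.centroid = centroid  # (1, n) vector representing the cluster
        self.docs = []

    def similarity(self, vec):
        return cosine_similarity(vec, self.centroid)[0, 0]

    def add(self, doc, vec):
        self.docs.append(doc)
        # Keep the centroid as the running mean of member vectors.
        n = len(self.docs)
        self.centroid = (self.centroid * (n - 1) + vec) / n

def route(doc, vectorizer, clusters, threshold=0.2):
    vec = vectorizer.transform([doc]).toarray()
    if clusters:
        best = max(clusters, key=lambda c: c.similarity(vec))
        if best.similarity(vec) >= threshold:
            best.add(doc, vec)          # route to the most similar cluster
            return best
    new = ClusterAgent(vec)             # otherwise spawn a new cluster agent
    new.docs.append(doc)
    clusters.append(new)
    return new

corpus = ["solar power grid", "wind power grid", "movie review corpus"]
vectorizer = TfidfVectorizer().fit(corpus)
clusters = []
for d in corpus:
    route(d, vectorizer, clusters)
print(len(clusters), "clusters")
```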

  6. Classification of scintigrams on the base of an automatic analysis

    International Nuclear Information System (INIS)

    Vidyukov, V.I.; Kasatkin, Yu.N.; Kal'nitskaya, E.F.; Mironov, S.P.; Rotenberg, E.M.

    1980-01-01

    The stages of constructing a discriminative system based on self-education for the automatic analysis of scintigrams have been considered. The results of the classification of 240 scintigrams of the liver into ''normal'', ''diffuse lesions'' and ''focal lesions'' have been evaluated by medical experts and by computer. The accuracy of the computerized classification was 91.7%; that of the experts, 85%. The automatic analysis methods for scintigrams of the liver have been realized using the specialized MDS data processing system. The quality of the discriminative system has been assessed on 125 scintigrams, where the accuracy of the classification is 89.6%. The employment of the self-education methods permitted singling out two subclasses depending on the severity of diffuse lesions

  7. An application-based classification to understand buyer-seller interaction in business services

    NARCIS (Netherlands)

    Valk, van der W.; Wynstra, J.Y.F.; Axelsson, B.

    2006-01-01

    Abstract: Purpose – Most existing classifications of business services have taken the perspective of the supplier as opposed to that of the buyer. To address this imbalance, the purpose of this paper is to propose a classification of business services based on how the buying company applies the

  8. Agent-based modelling of cholera diffusion

    NARCIS (Netherlands)

    Augustijn-Beckers, Petronella; Doldersum, Tom; Useya, Juliana; Augustijn, Dionysius C.M.

    2016-01-01

    This paper introduces a spatially explicit agent-based simulation model for micro-scale cholera diffusion. The model simulates both an environmental reservoir of naturally occurring V. cholerae bacteria and hyperinfectious V. cholerae. The objective of the research is to test if runoff from open refuse

  9. Agent-based simulation in entrepreneurship research

    NARCIS (Netherlands)

    Yang, S.-J.S.; Chandra, Y.

    2009-01-01

    Agent-based modeling (ABM) has wide applications in natural and social sciences yet it has not been widely applied in entrepreneurship research. We discuss the nature of ABM, its position among conventional methodologies and then offer a roadmap for developing, testing and extending theories of

  10. Agent-based simulation of animal behaviour

    NARCIS (Netherlands)

    C.M. Jonker (Catholijn); J. Treur

    1998-01-01

    In this paper it is shown how animal behaviour can be simulated in an agent-based manner. Different models are shown for different types of behaviour, varying from purely reactive behaviour to pro-active, social and adaptive behaviour. The compositional development method for

  11. Patent Keyword Extraction Algorithm Based on Distributed Representation for Patent Classification

    Directory of Open Access Journals (Sweden)

    Jie Hu

    2018-02-01

    Full Text Available Many text mining tasks such as text retrieval, text summarization, and text comparisons depend on the extraction of representative keywords from the main text. Most existing keyword extraction algorithms are based on discrete bag-of-words type word representations of the text. In this paper, we propose a patent keyword extraction algorithm (PKEA) based on the distributed Skip-gram model for patent classification. We also develop a set of quantitative performance measures for keyword extraction evaluation based on information gain and on cross-validation with Support Vector Machine (SVM) classification, which are valuable when human-annotated keywords are not available. We used a standard benchmark dataset and a homemade patent dataset to evaluate the performance of PKEA. Our patent dataset includes 2500 patents from five distinct technological fields related to autonomous cars (GPS systems, lidar systems, object recognition systems, radar systems, and vehicle control systems). We compared our method with Frequency, Term Frequency-Inverse Document Frequency (TF-IDF), TextRank and Rapid Automatic Keyword Extraction (RAKE). The experimental results show that our proposed algorithm provides a promising way to extract keywords from patent texts for patent classification.
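
    A minimal sketch of the core idea (not the authors' exact PKEA), assuming gensim is installed: train a Skip-gram model, represent a document as the mean of its word vectors, and rank candidate keywords by cosine similarity to that document vector. The tiny corpus is fabricated for illustration.

```python
# Skip-gram-based keyword ranking sketch; corpus and sizes are toy values.
import numpy as np
from gensim.models import Word2Vec

docs = [
    ["lidar", "sensor", "measures", "distance", "with", "laser", "pulses"],
    ["radar", "sensor", "measures", "speed", "with", "radio", "waves"],
    ["vehicle", "control", "system", "steers", "the", "autonomous", "car"],
]
# sg=1 selects the Skip-gram architecture used in the paper.
model = Word2Vec(docs, vector_size=50, window=3, min_count=1, sg=1, epochs=50)

def top_keywords(tokens, k=3):
    # Document vector = mean of its word vectors.
    doc_vec = np.mean([model.wv[t] for t in tokens], axis=0)
    def cos(w):
        v = model.wv[w]
        return float(np.dot(v, doc_vec) /
                     (np.linalg.norm(v) * np.linalg.norm(doc_vec)))
    # Words closest to the document vector are taken as keywords.
    return sorted(set(tokens), key=cos, reverse=True)[:k]

print(top_keywords(docs[0]))
```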

  12. A comparison of the accuracy of pixel based and object based classifications of integrated optical and LiDAR data

    Science.gov (United States)

    Gajda, Agnieszka; Wójtowicz-Nowakowska, Anna

    2013-04-01

    Land cover maps are generally produced on the basis of high resolution imagery. Recently, LiDAR (Light Detection and Ranging) data have been brought into use in diverse applications including land cover mapping. In this study we attempted to assess the accuracy of land cover classification using both high resolution aerial imagery and LiDAR data (airborne laser scanning, ALS), testing two classification approaches: a pixel-based classification and object-oriented image analysis (OBIA). The study was conducted on three test areas (3 km2 each) in the administrative area of Kraków, Poland, along the course of the Vistula River. They represent three different dominating land cover types of the Vistula River valley. Test site 1 had semi-natural vegetation, with riparian forests and shrubs, test site 2 represented a densely built-up area, and test site 3 was an industrial site. Point clouds from ALS and orthophotomaps were both captured in November 2007. Point cloud density was on average 16 pt/m2 and it contained additional information about intensity and encoded RGB values. Orthophotomaps had a spatial resolution of 10 cm. From the point clouds two raster maps were generated: (1) intensity and (2) normalised Digital Surface Model (nDSM), both with a spatial resolution of 50 cm. To classify the aerial data, a supervised classification approach was selected. The pixel-based classification was carried out in ERDAS Imagine software, using the orthophotomaps together with the intensity and nDSM rasters. 15 homogeneous training areas representing each cover class were chosen. Classified pixels were clumped to avoid the salt-and-pepper effect. Object-oriented image classification was carried out in eCognition software, which uses both the optical and ALS data. Elevation layers (intensity, first/last reflection, etc.) were used at the segmentation stage due to
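
    An illustrative sketch of the pixel-based branch of such a comparison, assuming scikit-learn: stack orthophoto bands with the LiDAR-derived intensity and nDSM rasters, then classify every pixel with a supervised learner. Arrays are filled with random data here; real rasters would be read with a library such as rasterio.

```python
# Pixel-based supervised classification of a stacked optical + LiDAR raster.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

h, w = 100, 100
rgb = np.random.rand(h, w, 3)          # orthophoto bands (fake)
intensity = np.random.rand(h, w, 1)    # LiDAR intensity raster (fake)
ndsm = np.random.rand(h, w, 1)         # normalised DSM raster (fake)
stack = np.concatenate([rgb, intensity, ndsm], axis=2)

X = stack.reshape(-1, stack.shape[2])              # one row per pixel
train_idx = np.random.choice(X.shape[0], 500, replace=False)
y_train = np.random.randint(0, 4, 500)             # 4 fake land-cover classes

clf = RandomForestClassifier(n_estimators=100).fit(X[train_idx], y_train)
land_cover = clf.predict(X).reshape(h, w)          # per-pixel class map
print(land_cover.shape)
```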

  13. Mobile Agent-Based Software Systems Modeling Approaches: A Comparative Study

    Directory of Open Access Journals (Sweden)

    Aissam Belghiat

    2016-06-01

    Full Text Available Mobile agent-based applications are a special type of software system which takes advantage of mobile agents in order to provide a new beneficial paradigm for solving multiple complex problems in several fields and areas such as network management, e-commerce, e-learning, etc. However, we notice a lack of real applications based on this paradigm and a lack of serious evaluations of their modeling approaches. Hence, this paper provides a comparative study of modeling approaches for mobile agent-based software systems. The objective is to give the reader an overview and a thorough understanding of the work that has been done and where the gaps in the research are.

  14. G0-WISHART Distribution Based Classification from Polarimetric SAR Images

    Science.gov (United States)

    Hu, G. C.; Zhao, Q. H.

    2017-09-01

    Enormous scientific and technical developments have been carried out to further improve remote sensing for decades, particularly the Polarimetric Synthetic Aperture Radar (PolSAR) technique, so classification methods based on PolSAR images have received much attention from scholars and related departments around the world. The multilook polarimetric G0-Wishart model is a more flexible model which describes homogeneous, heterogeneous and extremely heterogeneous regions in the image. Moreover, the polarimetric G0-Wishart distribution does not include the modified Bessel function of the second kind; it is a simple statistical distribution model with fewer parameters. To prove its feasibility, a classification process has been tested on a fully polarimetric Synthetic Aperture Radar (SAR) image with this method. First, multilook polarimetric SAR data processing and speckle filtering are applied to reduce the influence of speckle on the classification result. The image is initially classified into sixteen classes by H/A/α decomposition, and the ICM algorithm is then used to classify features based on the G0-Wishart distance. Qualitative and quantitative results show that the proposed method can classify polarimetric SAR data effectively and efficiently.
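
    The paper's G0-Wishart model extends the standard complex Wishart classifier; the sketch below shows only that baseline, assuming the usual Wishart distance d(C, Σ) = ln|Σ| + tr(Σ⁻¹C) and assigning each pixel's coherency matrix C to the nearest class centre Σ. Data are randomly generated for illustration.

```python
# Baseline Wishart-distance classifier for PolSAR coherency matrices (toy data).
import numpy as np

def wishart_distance(C, sigma):
    _sign, logdet = np.linalg.slogdet(sigma)          # ln|Sigma|
    return logdet + np.real(np.trace(np.linalg.solve(sigma, C)))

def classify(pixels, class_centres):
    # pixels: (n, 3, 3) coherency matrices; class_centres: list of (3, 3).
    labels = np.empty(len(pixels), dtype=int)
    for i, C in enumerate(pixels):
        labels[i] = int(np.argmin([wishart_distance(C, s)
                                   for s in class_centres]))
    return labels

def random_spd(scale):
    # Random Hermitian positive-definite matrix as a fake class centre.
    A = np.random.randn(3, 3) + 1j * np.random.randn(3, 3)
    return scale * (A @ A.conj().T + 3 * np.eye(3))

centres = [random_spd(s) for s in (0.5, 1.0, 2.0)]
pixels = np.stack([random_spd(1.0) for _ in range(10)])
print(classify(pixels, centres))
```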

  15. Semi-supervised vibration-based classification and condition monitoring of compressors

    Science.gov (United States)

    Potočnik, Primož; Govekar, Edvard

    2017-09-01

    Semi-supervised vibration-based classification and condition monitoring of the reciprocating compressors installed in refrigeration appliances is proposed in this paper. The method addresses the problem of industrial condition monitoring where prior class definitions are often not available or difficult to obtain from local experts. The proposed method combines feature extraction, principal component analysis, and statistical analysis for the extraction of initial class representatives, and compares the capability of various classification methods, including discriminant analysis (DA), neural networks (NN), support vector machines (SVM), and extreme learning machines (ELM). The use of the method is demonstrated on a case study which was based on industrially acquired vibration measurements of reciprocating compressors during the production of refrigeration appliances. The paper presents a comparative qualitative analysis of the applied classifiers, confirming the good performance of several nonlinear classifiers. If the model parameters are properly selected, then very good classification performance can be obtained from NN trained by Bayesian regularization, SVM and ELM classifiers. The method can be effectively applied for the industrial condition monitoring of compressors.
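
    A minimal sketch of the overall flow described above, assuming scikit-learn: extract simple statistics from vibration signals, reduce them with principal component analysis, and score a classifier by cross-validation. The signals, labels and feature set are fabricated; the paper's feature extraction is richer.

```python
# Feature extraction -> PCA -> SVM pipeline on fake vibration records.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
signals = rng.normal(size=(200, 1024))      # fake vibration records
labels = rng.integers(0, 2, size=200)       # fake OK / faulty labels

# Simple per-signal features: RMS, peak, crest factor, fourth-moment proxy.
rms = np.sqrt((signals ** 2).mean(axis=1))
peak = np.abs(signals).max(axis=1)
fourth = ((signals - signals.mean(axis=1, keepdims=True)) ** 4).mean(axis=1)
features = np.column_stack([rms, peak, peak / rms, fourth])

model = make_pipeline(StandardScaler(), PCA(n_components=3), SVC(kernel="rbf"))
print(cross_val_score(model, features, labels, cv=5).mean())
```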

  16. Agent-based Modeling with MATSim for Hazards Evacuation Planning

    Science.gov (United States)

    Jones, J. M.; Ng, P.; Henry, K.; Peters, J.; Wood, N. J.

    2015-12-01

    Hazard evacuation planning requires robust modeling tools and techniques, such as least cost distance or agent-based modeling, to gain an understanding of a community's potential to reach safety before event (e.g. tsunami) arrival. Least cost distance modeling provides a static view of the evacuation landscape with an estimate of travel times to safety from each location in the hazard space. With this information, practitioners can assess a community's overall ability for timely evacuation. More information may be needed if evacuee congestion creates bottlenecks in the flow patterns. Dynamic movement patterns are best explored with agent-based models that simulate movement of and interaction between individual agents as evacuees through the hazard space, reacting to potential congestion areas along the evacuation route. The multi-agent transport simulation model MATSim is an agent-based modeling framework that can be applied to hazard evacuation planning. Developed jointly by universities in Switzerland and Germany, MATSim is open-source software written in Java and freely available for modification or enhancement. We successfully used MATSim to illustrate tsunami evacuation challenges in two island communities in California, USA, that are impacted by limited escape routes. However, working with MATSim's data preparation, simulation, and visualization modules in an integrated development environment requires a significant investment of time to develop the software expertise to link the modules and run a simulation. To facilitate our evacuation research, we packaged the MATSim modules into a single application tailored to the needs of the hazards community. By exposing the modeling parameters of interest to researchers in an intuitive user interface and hiding the software complexities, we bring agent-based modeling closer to practitioners and provide access to the powerful visual and analytic information that this modeling can provide.

  17. Single-labelled music genre classification using content-based features

    CSIR Research Space (South Africa)

    Ajoodha, R

    2015-11-01

    Full Text Available In this paper we use content-based features to perform automatic classification of music pieces into genres. We categorise these features into four groups: features extracted from the Fourier transform’s magnitude spectrum, features designed...

  18. On the Feature Selection and Classification Based on Information Gain for Document Sentiment Analysis

    Directory of Open Access Journals (Sweden)

    Asriyanti Indah Pratiwi

    2018-01-01

    Full Text Available Sentiment analysis of movie reviews is a need of today's lifestyle. Unfortunately, the enormous number of features makes sentiment analysis slow and less sensitive. Finding the optimal feature selection and classification is still a challenge. In order to handle an enormous number of features and provide better sentiment classification, an information-based feature selection and classification scheme is proposed. The proposed method removes more than 90% of unnecessary features, while the proposed classification scheme achieves 96% accuracy of sentiment classification. From the experimental results, it can be concluded that the combination of the proposed feature selection and classification achieves the best performance so far.
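
    A sketch of information-gain-based feature selection for sentiment classification, assuming scikit-learn and using mutual information as the information-gain estimate. The four reviews, labels and k=5 are illustrative only.

```python
# Select the k most informative word features, then train a classifier.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.naive_bayes import MultinomialNB

reviews = ["great acting and a moving story", "dull plot and poor acting",
           "wonderful film, loved it", "boring, a waste of time"]
labels = [1, 0, 1, 0]  # 1 = positive, 0 = negative

X = CountVectorizer().fit_transform(reviews)
# Keep only the k highest-information features, discarding the rest.
X_small = SelectKBest(mutual_info_classif, k=5).fit_transform(X, labels)
clf = MultinomialNB().fit(X_small, labels)
print(clf.predict(X_small))
```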

  19. Agent-Based Crowd Simulation Considering Emotion Contagion for Emergency Evacuation Problem

    Science.gov (United States)

    Faroqi, H.; Mesgari, M.-S.

    2015-12-01

    During emergencies, emotions greatly affect human behaviour. For more realistic multi-agent systems in simulations of emergency evacuations, it is important to incorporate emotions and their effects on the agents. In a few words, emotional contagion is a process in which a person or group influences the emotions or behavior of another person or group through the conscious or unconscious induction of emotion states and behavioral attitudes. In this study, we simulate an emergency situation in an open square area with three exits, considering Adult and Child agents with different behavior. Security agents are also considered, in order to guide Adults and Children in finding the exits and staying calm. Six emotion levels are considered for each agent in different scenarios and situations. The agent-based model is initialized with a random scattering of the agent populations; when an alarm occurs, each agent reacts to the situation based on its own and its neighbors' current circumstances. The main goal of each agent is first to find the exit, and then to help other agents find their ways. The numbers of exited agents, along with their emotion levels, and of damaged agents are compared in different scenarios with different initializations in order to evaluate the results of the simulated model. NetLogo 5.2 is used as the multi-agent simulation framework, with R as the developing language.

  20. AGENT-BASED CROWD SIMULATION CONSIDERING EMOTION CONTAGION FOR EMERGENCY EVACUATION PROBLEM

    Directory of Open Access Journals (Sweden)

    H. Faroqi

    2015-12-01

    Full Text Available During emergencies, emotions greatly affect human behaviour. For more realistic multi-agent systems in simulations of emergency evacuations, it is important to incorporate emotions and their effects on the agents. In a few words, emotional contagion is a process in which a person or group influences the emotions or behavior of another person or group through the conscious or unconscious induction of emotion states and behavioral attitudes. In this study, we simulate an emergency situation in an open square area with three exits, considering Adult and Child agents with different behavior. Security agents are also considered, in order to guide Adults and Children in finding the exits and staying calm. Six emotion levels are considered for each agent in different scenarios and situations. The agent-based model is initialized with a random scattering of the agent populations; when an alarm occurs, each agent reacts to the situation based on its own and its neighbors' current circumstances. The main goal of each agent is first to find the exit, and then to help other agents find their ways. The numbers of exited agents, along with their emotion levels, and of damaged agents are compared in different scenarios with different initializations in order to evaluate the results of the simulated model. NetLogo 5.2 is used as the multi-agent simulation framework, with R as the developing language.
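
    An illustrative recreation of the contagion rule described above (the original model was written in NetLogo, so this Python version is only a sketch): each agent drifts toward the mean emotion level of its neighbours within a fixed radius. The six discrete levels are from the record; the radius and drift rate are assumptions, not the paper's values.

```python
# Toy emotion-contagion update among randomly scattered agents.
import numpy as np

rng = np.random.default_rng(1)
n = 100
pos = rng.uniform(0, 50, size=(n, 2))                # random scattering
emotion = rng.integers(0, 6, size=n).astype(float)   # six levels, 0-5

def step(pos, emotion, radius=5.0, rate=0.3):
    dist = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=2)
    new = emotion.copy()
    for i in range(len(emotion)):
        nbrs = (dist[i] < radius) & (dist[i] > 0)
        if nbrs.any():
            # Drift toward the neighbourhood mean, then re-discretise.
            new[i] += rate * (emotion[nbrs].mean() - emotion[i])
    return np.clip(np.round(new), 0, 5)

for _ in range(20):
    emotion = step(pos, emotion)
print(np.bincount(emotion.astype(int), minlength=6))
```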

  1. MODEL-BASED CLUSTERING FOR CLASSIFICATION OF AQUATIC SYSTEMS AND DIAGNOSIS OF ECOLOGICAL STRESS

    Science.gov (United States)

    Clustering approaches were developed using the classification likelihood, the mixture likelihood, and also using a randomization approach with a model index. Using a clustering approach based on the mixture and classification likelihoods, we have developed an algorithm that...

  2. CATS-based Air Traffic Controller Agents

    Science.gov (United States)

    Callantine, Todd J.

    2002-01-01

    This report describes intelligent agents that function as air traffic controllers. Each agent controls traffic in a single sector in real time; agents controlling traffic in adjoining sectors can coordinate to manage an arrival flow across a given meter fix. The purpose of this research is threefold. First, it seeks to study the design of agents for controlling complex systems. In particular, it investigates agent planning and reactive control functionality in a dynamic environment in which a variety of perceptual and decision-making skills play a central role. It examines how heuristic rules can be applied to model planning and decision making skills, rather than attempting to apply optimization methods. Thus, the research attempts to develop intelligent agents that provide an approximation of human air traffic controller behavior that, while not based on an explicit cognitive model, does produce task performance consistent with the way human air traffic controllers operate. Second, this research seeks to extend previous research on using the Crew Activity Tracking System (CATS) as the basis for intelligent agents. The agents use a high-level model of air traffic controller activities to structure the control task. To execute an activity in the CATS model, according to the current task context, the agents reference a 'skill library' and 'control rules' that in turn execute the pattern recognition, planning, and decision-making required to perform the activity. Applying the skills enables the agents to modify their representation of the current control situation (i.e., the 'flick' or 'picture'). The updated representation supports the next activity in a cycle of action that, taken as a whole, simulates air traffic controller behavior. A third, practical motivation for this research is to use intelligent agents to support evaluation of new air traffic control (ATC) methods to support new Air Traffic Management (ATM) concepts. Current approaches that use large, human

  3. Chronic Heart Failure Follow-up Management Based on Agent Technology.

    Science.gov (United States)

    Mohammadzadeh, Niloofar; Safdari, Reza

    2015-10-01

    Monitoring heart failure patients through continuous assessment of signs and symptoms with information technology tools leads to a large reduction in re-hospitalization. Agent technology is one of the strongest artificial intelligence areas; therefore, it can be expected to facilitate, accelerate, and improve health services, especially in home care and telemedicine. The aim of this article is to provide an agent-based model for chronic heart failure (CHF) follow-up management. This research was performed in 2013-2014 to determine appropriate scenarios and the data required to monitor and follow up CHF patients, and then an agent-based model was designed. Agents in the proposed model perform the following tasks: medical data access, communication with other agents of the framework, and intelligent data analysis, including medical data processing, reasoning, negotiation for decision-making, and learning capabilities. The proposed multi-agent system has the ability to learn and thus improve itself. Implementation of this model with more and various interval times at a broader level could achieve better results. The proposed multi-agent system is no substitute for cardiologists, but it could assist them in decision-making.

  4. Agent-based modelling in synthetic biology.

    Science.gov (United States)

    Gorochowski, Thomas E

    2016-11-30

    Biological systems exhibit complex behaviours that emerge at many different levels of organization. These range from the regulation of gene expression within single cells to the use of quorum sensing to co-ordinate the action of entire bacterial colonies. Synthetic biology aims to make the engineering of biology easier, offering an opportunity to control natural systems and develop new synthetic systems with useful prescribed behaviours. However, in many cases, it is not understood how individual cells should be programmed to ensure the emergence of a required collective behaviour. Agent-based modelling aims to tackle this problem, offering a framework in which to simulate such systems and explore cellular design rules. In this article, I review the use of agent-based models in synthetic biology, outline the available computational tools, and provide details on recently engineered biological systems that are amenable to this approach. I further highlight the challenges facing this methodology and some of the potential future directions. © 2016 The Author(s).

  5. Remote Sensing Image Classification Based on Stacked Denoising Autoencoder

    Directory of Open Access Journals (Sweden)

    Peng Liang

    2017-12-01

    Full Text Available Focused on the issue that conventional remote sensing image classification methods have run into a bottleneck in accuracy, a new remote sensing image classification method inspired by deep learning is proposed, based on the Stacked Denoising Autoencoder. First, the deep network model is built by stacking layers of Denoising Autoencoders. Then, with noised inputs, the unsupervised greedy layer-wise training algorithm is used to train each layer in turn for more robust expression; characteristics are obtained by supervised learning with a Back Propagation (BP) neural network, and the whole network is optimized by error back propagation. Finally, Gaofen-1 satellite (GF-1) remote sensing data are used for evaluation, and the total accuracy and kappa accuracy reach 95.7% and 0.955, respectively, which are higher than those of the Support Vector Machine and the Back Propagation neural network. The experiment results show that the proposed method can effectively improve the accuracy of remote sensing image classification.
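
    A minimal sketch of one denoising-autoencoder building block on fake "pixel" data, assuming TensorFlow/Keras is available; the paper stacks several such layers and attaches a supervised BP classifier on top, and all sizes here are illustrative.

```python
# One denoising-autoencoder layer: reconstruct clean inputs from noised inputs.
import numpy as np
import tensorflow as tf

x = np.random.rand(1000, 64).astype("float32")                 # clean inputs
x_noisy = x + 0.1 * np.random.randn(1000, 64).astype("float32")

inputs = tf.keras.Input(shape=(64,))
code = tf.keras.layers.Dense(32, activation="relu")(inputs)    # encoder
recon = tf.keras.layers.Dense(64, activation="sigmoid")(code)  # decoder
dae = tf.keras.Model(inputs, recon)
dae.compile(optimizer="adam", loss="mse")
# Train to reconstruct the clean signal from the corrupted input.
dae.fit(x_noisy, x, epochs=5, batch_size=32, verbose=0)

# The trained encoder yields the robust features fed to the next layer
# or to the final supervised (Back Propagation) classification stage.
encoder = tf.keras.Model(inputs, code)
features = encoder.predict(x, verbose=0)
print(features.shape)
```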

  6. FPGA-Based Online PQD Detection and Classification through DWT, Mathematical Morphology and SVD

    Directory of Open Access Journals (Sweden)

    Misael Lopez-Ramirez

    2018-03-01

    Full Text Available Power quality disturbances (PQD) in electric distribution systems can be produced by the utilization of non-linear loads or by environmental circumstances, causing electrical equipment malfunction and reducing its useful life. Detecting and classifying different PQDs implies great effort in planning and structuring the monitoring system. The main disadvantage of most works in the literature is that they treat a limited number of electrical disturbances through personal computer (PC)-based computation techniques, which makes it difficult to perform online PQD classification. In this work, the novel contribution is a methodology for PQD recognition and classification through the discrete wavelet transform, mathematical morphology, singular value decomposition, and statistical analysis. Furthermore, timely and reliable classification of different disturbances is necessary; hence, a field programmable gate array (FPGA)-based integrated circuit is developed to offer a portable hardware processing unit that performs fast, online PQD classification. The obtained numerical and experimental results demonstrate that the proposed method guarantees high effectiveness during online PQD detection and classification of real voltage/current signals.

  7. GMDH-Based Semi-Supervised Feature Selection for Electricity Load Classification Forecasting

    Directory of Open Access Journals (Sweden)

    Lintao Yang

    2018-01-01

    Full Text Available With the development of smart power grids, communication network technology and sensor technology, there has been an exponential growth in complex electricity load data. Irregular electricity load fluctuations caused by weather and holiday factors disrupt the daily operation of the power companies. To deal with these challenges, this paper investigates a day-ahead electricity peak load interval forecasting problem. It transforms the conventional continuous forecasting problem into a novel interval forecasting problem, and then further converts the interval forecasting problem into a classification forecasting problem. In addition, an indicator system influencing the electricity load is established from three dimensions, namely the load series, calendar data, and weather data. A semi-supervised feature selection algorithm is proposed to address the electricity load classification forecasting issue based on the group method of data handling (GMDH) technology. The proposed algorithm consists of three main stages: (1) training the basic classifier; (2) selectively marking the most suitable samples from the unlabeled data and adding them to the initial training set; and (3) training the classification models on the final training set and classifying the test samples. An empirical analysis of electricity load datasets from four Chinese cities is conducted. Results show that the proposed model can address the electricity load classification forecasting problem more efficiently and effectively than the FW-Semi FS (forward semi-supervised feature selection) and GMDH-U (GMDH-based semi-supervised feature selection for customer classification) models.
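
    A sketch of the three-stage loop described above, with a generic scikit-learn classifier standing in for the GMDH network: train on labelled data, move the most confident unlabelled samples into the training set, retrain, and classify the test set. The 0.9 confidence threshold, data and loop count are illustrative.

```python
# Self-training loop: label confident samples, grow the training set, retrain.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X_lab = rng.normal(size=(40, 6))
y_lab = (X_lab[:, 0] > 0).astype(int)      # fake labels for the labelled pool
X_unlab = rng.normal(size=(200, 6))        # unlabelled pool

clf = LogisticRegression().fit(X_lab, y_lab)           # stage 1: base model
for _ in range(3):                                     # stage 2: self-label
    proba = clf.predict_proba(X_unlab)
    confident = proba.max(axis=1) > 0.9
    if not confident.any():
        break
    X_lab = np.vstack([X_lab, X_unlab[confident]])
    y_lab = np.concatenate([y_lab, proba[confident].argmax(axis=1)])
    X_unlab = X_unlab[~confident]
    clf = LogisticRegression().fit(X_lab, y_lab)

X_test = rng.normal(size=(10, 6))
print(clf.predict(X_test))                             # stage 3: classify
```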

  8. FIPA agent based network distributed control system

    Energy Technology Data Exchange (ETDEWEB)

    D. Abbott; V. Gyurjyan; G. Heyes; E. Jastrzembski; C. Timmer; E. Wolin

    2003-03-01

    A control system with the capabilities to combine heterogeneous control systems or processes into a uniform homogeneous environment is discussed. This dynamically extensible system is an example of the software system at the agent level of abstraction. This level of abstraction considers agents as atomic entities that communicate to implement the functionality of the control system. Agents' engineering aspects are addressed by adopting the domain independent software standard, formulated by FIPA. Jade core Java classes are used as a FIPA specification implementation. A special, lightweight, XML RDFS based, control oriented, ontology markup language is developed to standardize the description of the arbitrary control system data processor. Control processes, described in this language, are integrated into the global system at runtime, without actual programming. Fault tolerance and recovery issues are also addressed.

  9. FIPA agent based network distributed control system

    International Nuclear Information System (INIS)

    Abbott, D.; Gyurjyan, V.; Heyes, G.; Jastrzembski, E.; Timmer, C.; Wolin, E.

    2003-01-01

    A control system with the capabilities to combine heterogeneous control systems or processes into a uniform homogeneous environment is discussed. This dynamically extensible system is an example of the software system at the agent level of abstraction. This level of abstraction considers agents as atomic entities that communicate to implement the functionality of the control system. Agents' engineering aspects are addressed by adopting the domain independent software standard, formulated by FIPA. Jade core Java classes are used as a FIPA specification implementation. A special, lightweight, XML RDFS based, control oriented, ontology markup language is developed to standardize the description of the arbitrary control system data processor. Control processes, described in this language, are integrated into the global system at runtime, without actual programming. Fault tolerance and recovery issues are also addressed

  10. Agent Based Fuzzy T-S Multi-Model System and Its Applications

    Directory of Open Access Journals (Sweden)

    Xiaopeng Zhao

    2015-11-01

    Full Text Available Based on the basic concepts of agents and the fuzzy T-S model, an agent based fuzzy T-S multi-model (ABFT-SMM) system is proposed in this paper. Different from the traditional method, the parameters and the membership value of an agent can be adjusted along with the process. In this system, each agent can be described as a dynamic equation, which can be seen as the local part of the multi-model, and it can execute the task alone or collaborate with other agents to accomplish a fixed goal. It is proved in this paper that the agent based fuzzy T-S multi-model system can approximate any linear or nonlinear system at arbitrary accuracy. The applications to the benchmark problem of chaotic time series prediction, a water heater system and a waste heat utilization process illustrate the viability and the efficiency of the mentioned approach. At the same time, the method can easily be applied to a number of engineering fields, including identification, nonlinear control, fault diagnostics and performance analysis.
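
    A sketch of the T-S idea underlying such a system: local linear models (one per "agent" or operating region) are blended by normalised membership values, so the global output is a smooth mixture of the local outputs. The Gaussian memberships, centres and local parameters below are toy values, not the paper's.

```python
# Takagi-Sugeno blend of local linear models via normalised memberships.
import numpy as np

centres = np.array([-2.0, 0.0, 2.0])      # one local model per region
widths = np.ones(3)
local_params = [(0.5, 1.0), (2.0, 0.0), (0.5, -1.0)]  # (slope, offset)

def ts_output(x):
    mu = np.exp(-((x - centres) ** 2) / (2 * widths ** 2))    # memberships
    mu = mu / mu.sum()                                        # normalise
    locals_ = np.array([a * x + b for a, b in local_params])  # local outputs
    return float(mu @ locals_)                                # weighted blend

for x in (-3.0, 0.0, 3.0):
    print(x, ts_output(x))
```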

  11. Complexity in Simplicity: Flexible Agent-based State Space Exploration

    DEFF Research Database (Denmark)

    Rasmussen, Jacob Illum; Larsen, Kim Guldstrand

    2007-01-01

    In this paper, we describe a new flexible framework for state space exploration based on cooperating agents. The idea is to let various agents with different search patterns explore the state space individually and communicate information about fruitful subpaths of the search tree to each other...

  12. Agent-Based Model Approach to Complex Phenomena in Real Economy

    Science.gov (United States)

    Iyetomi, H.; Aoyama, H.; Fujiwara, Y.; Ikeda, Y.; Souma, W.

    An agent-based model for firms' dynamics is developed. The model consists of firm agents with identical characteristic parameters and a bank agent. Dynamics of those agents are described by their balance sheets. Each firm tries to maximize its expected profit with possible risks in the market. Infinite growth of a firm directed by the "profit maximization" principle is suppressed by the concept of a "going concern". The possibility of bankruptcy of firms is also introduced by incorporating a retardation effect of information on firms' decisions. The firms, mutually interacting through the monopolistic bank, become heterogeneous in the course of temporal evolution. Statistical properties of firms' dynamics obtained by simulations based on the model are discussed in light of observations in the real economy.

  13. Agent Based Modelling for Social Simulation

    NARCIS (Netherlands)

    Smit, S.K.; Ubink, E.M.; Vecht, B. van der; Langley, D.J.

    2013-01-01

    This document is the result of an exploratory project looking into the status of, and opportunities for Agent Based Modelling (ABM) at TNO. The project focussed on ABM applications containing social interactions and human factors, which we termed ABM for social simulation (ABM4SS). During the course

  14. A Multi-Agent Traffic Control Model Based on Distributed System

    Directory of Open Access Journals (Sweden)

    Qian WU

    2014-06-01

    Full Text Available With the development of urbanization, urban travel has become a quite thorny and imminent problem. Previous research on large urban traffic systems easily turns into NP-complete problems. We propose a multi-agent inductive control model based on a distributed approach. To describe the real traffic scene, this model designs four different types of intelligent agents, i.e. we regard each lane, route, intersection and traffic region as a different type of intelligent agent. Each agent can obtain real-time traffic data from its neighbor agents, and decision-making agents establish real-time traffic signal plans through the communication between local agents and their neighbor agents. To evaluate the traffic system, this paper takes the average delay, the stopped time and the average speed as performance parameters. Finally, the distributed multi-agent system is simulated on the VISSIM simulation platform; the simulation results show that the multi-agent system is more effective than the adaptive control system in solving traffic congestion.

  15. Comparison Of Power Quality Disturbances Classification Based On Neural Network

    Directory of Open Access Journals (Sweden)

    Nway Nway Kyaw Win

    2015-07-01

    Full Text Available Power quality disturbances (PQDs) cause serious problems in the reliability, safety and economy of power system networks. In order to improve electric power quality, the detection and classification of PQD events must be made according to the type of transient fault. A methodology based on software analysis of the wavelet transform with the multiresolution analysis (MRA) algorithm and feed-forward neural networks (probabilistic and multilayer feed-forward) for the automatic classification of eight types of PQ signals (flicker, harmonics, sag, swell, impulse, fluctuation, notch and oscillatory) will be presented. The wavelet family Db4 is chosen in this system to calculate the values of the detailed energy distributions as input features for classification, because it performs well in detecting and localizing various types of PQ disturbances. This technique classifies the types of PQD events. The classifiers classify and identify the disturbance type according to the energy distribution. The results show that the PNN can analyze different power disturbance types efficiently. Therefore it can be seen that the PNN has better classification accuracy than the MLFF.

  16. Rediscovering the Economics of Keynes in an Agent-Based Computational Setting

    DEFF Research Database (Denmark)

    Bruun, Charlotte

    The aim of this paper is to use agent-based computational economics to explore the economic thinking of Keynes. Taking his starting point at the macroeconomic level, Keynes argued that economic systems are characterized by fundamental uncertainty - an uncertainty that makes rule-based behaviour...... and reliance on monetary magnitudes more optimal to the economic agent than profit and utility optimization in the traditional sense. Unfortunately, more systematic studies of the properties of such a system were not possible at the time of Keynes. The system envisioned by Keynes holds a lot of properties...... in common with what we today call complex dynamic systems, and today we may apply the method of agent-based computational economics to the ideas of Keynes. The presented agent-based Keynesian model demonstrates, as argued by Keynes, that the economy can self-organize without relying on price movement......

  17. Agent Based Individual Traffic guidance

    DEFF Research Database (Denmark)

    Wanscher, Jørgen Bundgaard

    2004-01-01

    When working with traffic planning or guidance it is common practice to view the vehicles as a combined mass. From this, models are employed to specify the vehicle supply and demand for each region. As the models are complex and the calculations equally demanding, the regions and the detail...... of the road network are aggregated. As a result the calculations reveal only what the mass of vehicles is doing and not what a single vehicle is doing. This is the crucial difference to ABIT (Agent Based Individual Traffic guidance). ABIT is based on the fact that information on the destination of each vehicle

  18. A minimum spanning forest based classification method for dedicated breast CT images

    International Nuclear Information System (INIS)

    Pike, Robert; Sechopoulos, Ioannis; Fei, Baowei

    2015-01-01

    Purpose: To develop and test an automated algorithm to classify different types of tissue in dedicated breast CT images. Methods: Images of a single breast of five different patients were acquired with a dedicated breast CT clinical prototype. The breast CT images were processed by a multiscale bilateral filter to reduce noise while keeping edge information and were corrected to overcome cupping artifacts. As skin and glandular tissue have similar CT values on breast CT images, morphologic processing is used to identify the skin based on its position information. A support vector machine (SVM) is trained and the resulting model used to create a pixelwise classification map of fat and glandular tissue. By combining the results of the skin mask with the SVM results, the breast tissue is classified as skin, fat, and glandular tissue. This map is then used to identify markers for a minimum spanning forest that is grown to segment the image using spatial and intensity information. To evaluate the authors’ classification method, they use DICE overlap ratios to compare the results of the automated classification to those obtained by manual segmentation on five patient images. Results: Comparison between the automatic and the manual segmentation shows that the minimum spanning forest based classification method was able to successfully classify dedicated breast CT images with average DICE ratios of 96.9%, 89.8%, and 89.5% for fat, glandular, and skin tissue, respectively. Conclusions: A 2D minimum spanning forest based classification method was proposed and evaluated for classifying the fat, skin, and glandular tissue in dedicated breast CT images. The classification method can be used for dense breast tissue quantification, radiation dose assessment, and other applications in breast imaging

  19. Genome profiling (GP method based classification of insects: congruence with that of classical phenotype-based one.

    Directory of Open Access Journals (Sweden)

    Shamim Ahmed

    Full Text Available Ribosomal RNAs have been widely used for the identification and classification of species, and have produced data giving new insights into phylogenetic relationships. Recently, multilocus genotyping and even whole genome sequencing-based technologies have been adopted in ambitious comparative biology studies. However, such technologies are still far from routine use in species classification studies due to their high costs in terms of labor, equipment and consumables. Here, we describe a simple and powerful approach for species classification called genome profiling (GP). The GP method, composed of random PCR, temperature gradient gel electrophoresis (TGGE) and computer-aided gel image processing, is highly informative and less laborious. For demonstration, we classified 26 species of insects using the GP and 18S rDNA-sequencing approaches. The GP method was found to give a better correspondence to the classical phenotype-based approach than did 18S rDNA sequencing, as measured by a congruence value. To our surprise, the use of a single probe in GP was sufficient to identify the relationships between the insect species, making this approach more straightforward. The data gathered here, together with those of previous studies, show that GP is a simple and powerful method that can be applied virtually universally for identifying and classifying species. The current success supports our previous proposal that a GP-based web database can be constructed and would be effective for the global identification/classification of species.

  20. An Intelligent Agent based Architecture for Visual Data Mining

    OpenAIRE

    Hamdi Ellouzi; Hela Ltifi; Mounir Ben Ayed

    2016-01-01

    The aim of this paper is to present an intelligent architecture for Decision Support Systems (DSS) based on visual data mining. This architecture applies multi-agent technology to facilitate the design and development of DSS in complex and dynamic environments. Multi-Agent Systems add a high level of abstraction. To validate the proposed architecture, it is implemented to develop a distributed visual data mining based DSS to predict nosocomial infections occurrence in intensive care units. Th...

  1. RESEARCH ON REMOTE SENSING GEOLOGICAL INFORMATION EXTRACTION BASED ON OBJECT ORIENTED CLASSIFICATION

    Directory of Open Access Journals (Sweden)

    H. Gao

    2018-04-01

    Full Text Available Northern Tibet belongs to the sub-cold arid climate zone of the plateau. It is rarely visited by people, and the geological working conditions are very poor. However, the stratum exposures are good and human interference is very small. Therefore, research on the automatic classification and extraction of remote sensing geological information there has typical significance and good application prospects. Based on object-oriented classification in northern Tibet, using Worldview2 high-resolution remote sensing data combined with tectonic information and image enhancement, the lithological spectral features, shape features, spatial locations and topological relations of various kinds of geological information are mined. By setting thresholds based on a hierarchical classification, eight kinds of geological information were classified and extracted. Compared with the existing geological maps, the accuracy analysis shows that the overall accuracy reached 87.8561 %, indicating that the object-oriented classification method is effective and feasible for this study area and provides a new idea for the automatic extraction of remote sensing geological information.

  2. Agent-Based Modeling: A Powerful Tool for Tourism Researchers

    NARCIS (Netherlands)

    Nicholls, Sarah; Amelung, B.; Student, Jillian

    2017-01-01

    Agent-based modeling (ABM) is a way of representing complex systems of autonomous agents or actors, and of simulating the multiple potential outcomes of these agents’ behaviors and interactions in the form of a range of alternatives or futures. Despite the complexity of the tourism system, and the

  3. Characterization of nanoparticle-based contrast agents for molecular magnetic resonance imaging

    International Nuclear Information System (INIS)

    Shan, Liang; Chopra, Arvind; Leung, Kam; Eckelman, William C.; Menkens, Anne E.

    2012-01-01

    The development of molecular imaging agents is currently undergoing a dramatic expansion. As of October 2011, ∼4,800 newly developed agents have been synthesized and characterized in vitro and in animal models of human disease. Despite this rapid progress, the transfer of these agents to clinical practice is rather slow. To address this issue, the National Institutes of Health launched the Molecular Imaging and Contrast Agents Database (MICAD) in 2005 to provide freely accessible online information regarding molecular imaging probes and contrast agents for the imaging community. While compiling information regarding imaging agents published in peer-reviewed journals, the MICAD editors have observed that some important information regarding the characterization of a contrast agent is not consistently reported. This makes it difficult for investigators to evaluate and meta-analyze data generated from different studies of imaging agents, especially for the agents based on nanoparticles. This article is intended to serve as a guideline for new investigators for the characterization of preclinical studies performed with nanoparticle-based MRI contrast agents. The common characterization parameters are summarized into seven categories: contrast agent designation, physicochemical properties, magnetic properties, in vitro studies, animal studies, MRI studies, and toxicity. Although no single set of parameters is suitable to define the properties of the various types of contrast agents, it is essential to ensure that these agents meet certain quality control parameters at the preclinical stage, so that they can be used without delay for clinical studies.

  4. Agent-Based Framework for Personalized Service Provisioning in Converged IP Networks

    Science.gov (United States)

    Podobnik, Vedran; Matijasevic, Maja; Lovrek, Ignac; Skorin-Kapov, Lea; Desic, Sasa

    In a global multi-service and multi-provider market, the Internet Service Providers will increasingly need to differentiate in the service quality they offer and base their operation on new, consumer-centric business models. In this paper, we propose an agent-based framework for the Business-to-Consumer (B2C) electronic market, comprising the Consumer Agents, Broker Agents and Content Agents, which enable Internet consumers to select a content provider in an automated manner. We also discuss how to dynamically allocate network resources to provide end-to-end Quality of Service (QoS) for a given consumer and content provider.

  5. Wearable-Sensor-Based Classification Models of Faller Status in Older Adults.

    Directory of Open Access Journals (Sweden)

    Jennifer Howcroft

    Full Text Available Wearable sensors have potential for quantitative, gait-based, point-of-care fall risk assessment that can be easily and quickly implemented in clinical-care and older-adult living environments. This investigation generated models for wearable-sensor based fall-risk classification in older adults and identified the optimal sensor type, location, combination, and modelling method, for walking with and without a cognitive load task. A convenience sample of 100 older individuals (75.5 ± 6.7 years; 76 non-fallers, 24 fallers based on 6-month retrospective fall occurrence) walked 7.62 m under single-task and dual-task conditions while wearing pressure-sensing insoles and tri-axial accelerometers at the head, pelvis, and left and right shanks. Participants also completed the Activities-specific Balance Confidence scale, the Community Health Activities Model Program for Seniors questionnaire, the six minute walk test, and ranked their fear of falling. Fall risk classification models were assessed for all sensor combinations and three model types: multi-layer perceptron neural network, naïve Bayesian, and support vector machine. The best performing model was a multi-layer perceptron neural network with input parameters from pressure-sensing insoles and head, pelvis, and left shank accelerometers (accuracy = 84%, F1 score = 0.600, MCC score = 0.521). Head sensor-based models had the best performance of the single-sensor models for single-task gait assessment. Single-task gait assessment models outperformed models based on dual-task walking or clinical assessment data. Support vector machines and neural networks were the best modelling techniques for fall risk classification. Fall risk classification models developed for point-of-care environments should be developed using support vector machines and neural networks, with a multi-sensor single-task gait assessment.

  6. A Sieving ANN for Emotion-Based Movie Clip Classification

    Science.gov (United States)

    Watanapa, Saowaluk C.; Thipakorn, Bundit; Charoenkitkarn, Nipon

    Effective classification and analysis of semantic contents are very important for the content-based indexing and retrieval of video databases. Our research attempts to classify movie clips into three groups of commonly elicited emotions, namely excitement, joy and sadness, based on a set of abstract-level semantic features extracted from the film sequence. In particular, these features consist of six visual and audio measures grounded in artistic film theories. A unique sieving-structured neural network is proposed as the classifying model due to its robustness. The performance of the proposed model is tested with 101 movie clips excerpted from 24 award-winning and well-known Hollywood feature films. The experimental result of a 97.8% correct classification rate, measured against the collected human judgments, indicates the great potential of using abstract-level semantic features as an engineered tool for the application of video-content retrieval/indexing.

  7. Faller Classification in Older Adults Using Wearable Sensors Based on Turn and Straight-Walking Accelerometer-Based Features.

    Science.gov (United States)

    Drover, Dylan; Howcroft, Jennifer; Kofman, Jonathan; Lemaire, Edward D

    2017-06-07

    Faller classification in elderly populations can facilitate preventative care before a fall occurs. A novel wearable-sensor based faller classification method for the elderly was developed using accelerometer-based features from straight walking and turns. Seventy-six older individuals (74.15 ± 7.0 years), categorized as prospective fallers and non-fallers, completed a six-minute walk test with accelerometers attached to their lower legs and pelvis. After segmenting straight and turn sections, cross validation tests were conducted on straight and turn walking features to assess classification performance. The best "classifier model-feature selector" combination used turn data, random forest classifier, and select-5-best feature selector (73.4% accuracy, 60.5% sensitivity, 82.0% specificity, and 0.44 Matthew's Correlation Coefficient (MCC)). Using only the most frequently occurring features, a feature subset (minimum of anterior-posterior ratio of even/odd harmonics for right shank, standard deviation (SD) of anterior left shank acceleration SD, SD of mean anterior left shank acceleration, maximum of medial-lateral first quartile of Fourier transform (FQFFT) for lower back, maximum of anterior-posterior FQFFT for lower back) achieved better classification results, with 77.3% accuracy, 66.1% sensitivity, 84.7% specificity, and 0.52 MCC score. All classification performance metrics improved when turn data was used for faller classification, compared to straight walking data. Combining turn and straight walking features decreased performance metrics compared to turn features for similar classifier model-feature selector combinations.
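
    A sketch of the best-performing combination reported above, assuming scikit-learn: select the five highest-scoring features, then classify with a random forest, scored by cross-validation. The ANOVA F-score stands in for the paper's "select-5-best" selector, and the data are simulated rather than real accelerometer features.

```python
# Select-5-best feature selection + random forest faller classification.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(42)
X = rng.normal(size=(76, 30))     # 76 participants, 30 fake gait features
y = rng.integers(0, 2, size=76)   # faller / non-faller labels (fake)

model = make_pipeline(SelectKBest(f_classif, k=5),
                      RandomForestClassifier(n_estimators=200))
print(cross_val_score(model, X, y, cv=5).mean())
```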

  8. Classification Framework for ICT-Based Learning Technologies for Disabled People

    Science.gov (United States)

    Hersh, Marion

    2017-01-01

    The paper presents the first systematic approach to the classification of inclusive information and communication technologies (ICT)-based learning technologies and ICT-based learning technologies for disabled people which covers both assistive and general learning technologies, is valid for all disabled people and considers the full range of…

  9. Migration control for mobile agents based on passport and visa

    OpenAIRE

    Guan, SU; Wang, T; Ong, SH

    2003-01-01

    Research on mobile agents has attracted much attention as this paradigm has demonstrated great potential for next-generation e-commerce. Proper solutions to security-related problems become key factors in the successful deployment of mobile agents in e-commerce systems. We propose the use of passport and visa (P/V) for securing mobile agent migration across communities based on the SAFER e-commerce framework. P/V not only serves as up-to-date digital credentials for agent-host authentica...

  10. Convolution-based classification of audio and symbolic representations of music

    DEFF Research Database (Denmark)

    Velarde, Gissel; Cancino Chacón, Carlos; Meredith, David

    2018-01-01

    We present a novel convolution-based method for classification of audio and symbolic representations of music, which we apply to classification of music by style. Pieces of music are first sampled to pitch–time representations (piano-rolls or spectrograms) and then convolved with a Gaussian filter......-class composer identification, methods specialised for classifying symbolic representations of music are more effective. We also performed experiments on symbolic representations, synthetic audio and two different recordings of The Well-Tempered Clavier by J. S. Bach to study the method’s capacity to distinguish...

  11. Deep Galaxy: Classification of Galaxies based on Deep Convolutional Neural Networks

    OpenAIRE

    Khalifa, Nour Eldeen M.; Taha, Mohamed Hamed N.; Hassanien, Aboul Ella; Selim, I. M.

    2017-01-01

    In this paper, a deep convolutional neural network architecture for galaxies classification is presented. A galaxy can be classified based on its features into three main categories: Elliptical, Spiral, and Irregular. The proposed deep galaxies architecture consists of 8 layers, one main convolutional layer for feature extraction with 96 filters, followed by two principal fully connected layers for classification. It is trained over 1356 images and achieved 97.272% testing accuracy. A c...

  12. Task Classification Based Energy-Aware Consolidation in Clouds

    Directory of Open Access Journals (Sweden)

    HeeSeok Choi

    2016-01-01

    We consider a cloud data center, in which the service provider supplies virtual machines (VMs) on hosts or physical machines (PMs) to its subscribers for computation in an on-demand fashion. For the cloud data center, we propose a task consolidation algorithm based on task classification (i.e., computation-intensive and data-intensive) and resource utilization (e.g., CPU and RAM). Furthermore, we design a VM consolidation algorithm to balance task execution time and energy consumption without violating a predefined service level agreement (SLA). Unlike the existing research on VM consolidation or scheduling that applies no threshold or a single-threshold scheme, we focus on a double-threshold (upper and lower) scheme, which is used for VM consolidation. More specifically, when a host operates with resource utilization below the lower threshold, all the VMs on the host will be scheduled to be migrated to other hosts and then the host will be powered down, while when a host operates with resource utilization above the upper threshold, a VM will be migrated to avoid using 100% of resource utilization. Based on experimental performance evaluations with real-world traces, we prove that our task classification based energy-aware consolidation algorithm (TCEA) achieves a significant energy reduction without incurring predefined SLA violations.
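
    The double-threshold migration rule reads naturally as a small decision function. The sketch below is an illustration under assumed thresholds and a toy host/VM representation, not the paper's TCEA implementation.

```python
# Minimal sketch of the double-threshold consolidation rule described
# above: hosts below the lower threshold are fully evacuated and powered
# down; hosts above the upper threshold migrate one VM. Thresholds and
# the host/VM structures are assumptions for illustration.
LOWER, UPPER = 0.2, 0.8

def consolidation_actions(hosts):
    """hosts: dict host_id -> list of per-VM utilisation fractions."""
    actions = []
    for host_id, vms in hosts.items():
        util = sum(vms)
        if util < LOWER and vms:
            # Under-utilised: schedule every VM for migration, power down.
            actions.append((host_id, "migrate_all_and_power_down", list(vms)))
        elif util > UPPER:
            # Over-utilised: migrate one VM (here the largest) to avoid
            # running at 100% utilisation.
            actions.append((host_id, "migrate_one", [max(vms)]))
    return actions

print(consolidation_actions({"h1": [0.05, 0.1], "h2": [0.5, 0.45], "h3": [0.4]}))
```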

  13. Model-based object classification using unification grammars and abstract representations

    Science.gov (United States)

    Liburdy, Kathleen A.; Schalkoff, Robert J.

    1993-04-01

    The design and implementation of a high level computer vision system which performs object classification is described. General object labelling and functional analysis require models of classes which display a wide range of geometric variations. A large representational gap exists between abstract criteria such as 'graspable' and current geometric image descriptions. The vision system developed and described in this work addresses this problem and implements solutions based on a fusion of semantics, unification, and formal language theory. Object models are represented using unification grammars, which provide a framework for the integration of structure and semantics. A methodology for the derivation of symbolic image descriptions capable of interacting with the grammar-based models is described and implemented. A unification-based parser developed for this system achieves object classification by determining if the symbolic image description can be unified with the abstract criteria of an object model. Future research directions are indicated.

  14. Histological image classification using biologically interpretable shape-based features

    International Nuclear Information System (INIS)

    Kothari, Sonal; Phan, John H; Young, Andrew N; Wang, May D

    2013-01-01

    Automatic cancer diagnostic systems based on histological image classification are important for improving therapeutic decisions. Previous studies propose textural and morphological features for such systems. These features capture patterns in histological images that are useful for both cancer grading and subtyping. However, because many of these features lack a clear biological interpretation, pathologists may be reluctant to adopt these features for clinical diagnosis. We examine the utility of biologically interpretable shape-based features for classification of histological renal tumor images. Using Fourier shape descriptors, we extract shape-based features that capture the distribution of stain-enhanced cellular and tissue structures in each image and evaluate these features using a multi-class prediction model. We compare the predictive performance of the shape-based diagnostic model to that of traditional models, i.e., using textural, morphological and topological features. The shape-based model, with an average accuracy of 77%, outperforms or complements traditional models. We identify the most informative shapes for each renal tumor subtype from the top-selected features. Results suggest that these shapes are not only accurate diagnostic features, but also correlate with known biological characteristics of renal tumors. Shape-based analysis of histological renal tumor images accurately classifies disease subtypes and reveals biologically insightful discriminatory features. This method for shape-based analysis can be extended to other histological datasets to aid pathologists in diagnostic and therapeutic decisions.

  15. Intelligent judgements over health risks in a spatial agent-based model.

    Science.gov (United States)

    Abdulkareem, Shaheen A; Augustijn, Ellen-Wien; Mustafa, Yaseen T; Filatova, Tatiana

    2018-03-20

    Millions of people worldwide are exposed to deadly infectious diseases on a regular basis. Breaking news of the Zika outbreak, for instance, made it to the main media titles internationally. Perceiving disease risks motivates people to adapt their behavior toward a safer and more protective lifestyle. Computational science is instrumental in exploring patterns of disease spread emerging from many individual decisions and interactions among agents and their environment by means of agent-based models. Yet, current disease models rarely consider simulating dynamics in risk perception and its impact on adaptive protective behavior. Social sciences offer insights into individual risk perception and corresponding protective actions, while machine learning provides algorithms and methods to capture these learning processes. This article presents an innovative approach to extend agent-based disease models by capturing behavioral aspects of decision-making in a risky context using machine learning techniques. We illustrate it with a case of cholera in Kumasi, Ghana, accounting for spatial and social risk factors that affect intelligent behavior and corresponding disease incidents. The results of computational experiments comparing intelligent with zero-intelligent representations of agents in a spatial disease agent-based model are discussed. We present a spatial disease agent-based model (ABM) with agents' behavior grounded in Protection Motivation Theory. Spatial and temporal patterns of disease diffusion among zero-intelligent agents are compared to those produced by a population of intelligent agents. Two Bayesian Networks (BNs) were designed and coded using R and are further integrated with the NetLogo-based Cholera ABM. The first is a one-tier BN1 (only risk perception); the second is a two-tier BN2 (risk and coping behavior). We run three experiments (zero-intelligent agents, BN1 intelligence and BN2 intelligence) and report the results per experiment in terms of

  16. A texton-based approach for the classification of lung parenchyma in CT images

    DEFF Research Database (Denmark)

    Gangeh, Mehrdad J.; Sørensen, Lauge; Shaker, Saher B.

    2010-01-01

    In this paper, a texton-based classification system based on raw pixel representation along with a support vector machine with radial basis function kernel is proposed for the classification of emphysema in computed tomography images of the lung. The proposed approach is tested on 168 annotated...... regions of interest consisting of normal tissue, centrilobular emphysema, and paraseptal emphysema. The results show the superiority of the proposed approach to common techniques in the literature including moments of the histogram of filter responses based on Gaussian derivatives. The performance...

  17. SAW Classification Algorithm for Chinese Text Classification

    OpenAIRE

    Xiaoli Guo; Huiyu Sun; Tiehua Zhou; Ling Wang; Zhaoyang Qu; Jiannan Zang

    2015-01-01

    Considering the explosive growth of data, the increasing amount of text data places higher demands on the performance of text categorization, demands that existing classification methods cannot satisfy. Based on a study of existing text classification technology and semantics, this paper puts forward a Chinese-text-classification-oriented SAW (Structural Auxiliary Word) algorithm. The algorithm uses the special space effect of Chinese text where words...

  18. Spatial agent-based models for socio-ecological systems: challenges and prospects

    NARCIS (Netherlands)

    de Filatova, T.; Verburg, P.H.; Parker, D.C.; Stannard, S.R.

    2013-01-01

    Departing from the comprehensive reviews carried out in the field, we identify the key challenges that agent-based methodology faces when modeling coupled socio-ecological systems. Focusing primarily on the papers presented in this thematic issue, we review progress in spatial agent-based models

  19. Reverse engineering a social agent-based hidden markov model--visage.

    Science.gov (United States)

    Chen, Hung-Ching Justin; Goldberg, Mark; Magdon-Ismail, Malik; Wallace, William A

    2008-12-01

    We present a machine learning approach to discover the agent dynamics that drives the evolution of the social groups in a community. We set up the problem by introducing an agent-based hidden Markov model for the agent dynamics: an agent's actions are determined by micro-laws. We then learn the agent dynamics from the observed communications, without knowing the state transitions. Our approach is to identify the appropriate micro-laws, which corresponds to identifying the appropriate parameters in the model. The model identification problem is then formulated as a mixed optimization problem. To solve the problem, we develop a multistage learning process for determining the group structure, the group evolution, and the micro-laws of a community based on the observed set of communications among actors, without knowing the semantic contents. Finally, to test the quality of our approximations and the feasibility of the approach, we present the results of extensive experiments on synthetic data as well as the results on real communities, such as Enron email and Movie newsgroups. Insight into agent dynamics helps us understand the driving forces behind social evolution.

  20. A patch-based convolutional neural network for remote sensing image classification.

    Science.gov (United States)

    Sharma, Atharva; Liu, Xiuwen; Yang, Xiaojun; Shi, Di

    2017-11-01

    Availability of accurate land cover information over large areas is essential to global environmental sustainability; digital classification using medium-resolution remote sensing data would provide an effective method to generate the required land cover information. However, the low accuracy of existing per-pixel based classification methods for medium-resolution data is a fundamental limiting factor. While convolutional neural networks (CNNs) with deep layers have achieved unprecedented improvements in object recognition applications that rely on fine image structures, they cannot be applied directly to medium-resolution data due to the lack of such fine structures. In this paper, considering the spatial relation of a pixel to its neighborhood, we propose a new deep patch-based CNN system tailored for medium-resolution remote sensing data. The system is designed by incorporating distinctive characteristics of medium-resolution data; in particular, the system computes patch-based samples from multidimensional top of atmosphere reflectance data. With a test site from the Florida Everglades area (with a size of 771 square kilometers), the proposed new system has outperformed a pixel-based neural network, a pixel-based CNN and a patch-based neural network by 24.36%, 24.23% and 11.52%, respectively, in overall classification accuracy. By combining the proposed deep CNN and the huge collection of medium-resolution remote sensing data, we believe that much more accurate land cover datasets can be produced over large areas. Copyright © 2017 Elsevier Ltd. All rights reserved.

  1. Classification of cancerous cells based on the one-class problem approach

    Science.gov (United States)

    Murshed, Nabeel A.; Bortolozzi, Flavio; Sabourin, Robert

    1996-03-01

    One of the most important factors in reducing the effect of cancerous diseases is early diagnosis, which requires a good and robust method. With the advancement of computer technologies and digital image processing, the development of a computer-based system has become feasible. In this paper, we introduce a new approach for the detection of cancerous cells. It is based on the one-class problem approach, through which the classification system need only be trained with patterns of cancerous cells. This reduces the burden of the training task by about 50%. Based on this approach, a computer-based classification system is developed using Fuzzy ARTMAP neural networks. Experiments were performed using a set of 542 patterns taken from a sample of breast cancer. Results of the experiment show 98% correct identification of cancerous cells and 95% correct identification of non-cancerous cells.
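
    The one-class idea (train on patterns of the target class only, then accept or reject new patterns) can be sketched as follows. The paper uses a Fuzzy ARTMAP network, which scikit-learn does not provide, so a OneClassSVM stands in purely to show the training and decision shape; all data below are synthetic.

```python
# One-class training sketch: fit on "cancerous" patterns only, then flag
# test patterns as in-class (+1) or out-of-class (-1). OneClassSVM is a
# substitute for the paper's Fuzzy ARTMAP; data are synthetic stand-ins.
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(1)
cancerous = rng.normal(loc=0.0, scale=1.0, size=(300, 16))   # training class only
test = np.vstack([rng.normal(0.0, 1.0, (10, 16)),            # cancerous-like
                  rng.normal(5.0, 1.0, (10, 16))])           # non-cancerous-like

clf = OneClassSVM(kernel="rbf", gamma="scale", nu=0.05).fit(cancerous)
pred = clf.predict(test)          # +1 = identified as the trained class
print(pred)
```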

  2. Task-Driven Dictionary Learning Based on Mutual Information for Medical Image Classification.

    Science.gov (United States)

    Diamant, Idit; Klang, Eyal; Amitai, Michal; Konen, Eli; Goldberger, Jacob; Greenspan, Hayit

    2017-06-01

    We present a novel variant of the bag-of-visual-words (BoVW) method for automated medical image classification. Our approach improves the BoVW model by learning a task-driven dictionary of the most relevant visual words per task using a mutual information-based criterion. Additionally, we generate relevance maps to visualize and localize the decision of the automatic classification algorithm. These maps demonstrate how the algorithm works and show the spatial layout of the most relevant words. We applied our algorithm to three different tasks: chest x-ray pathology identification (of four pathologies: cardiomegaly, enlarged mediastinum, right consolidation, and left consolidation), liver lesion classification into four categories in computed tomography (CT) images and benign/malignant clusters of microcalcifications (MCs) classification in breast mammograms. Validation was conducted on three datasets: 443 chest x-rays, 118 portal phase CT images of liver lesions, and 260 mammography MCs. The proposed method improves the classical BoVW method for all tested applications. For chest x-ray, an area under the curve of 0.876 was obtained for enlarged mediastinum identification compared to 0.855 using classical BoVW (with p-value 0.01). For MC classification, a significant improvement of 4% was achieved using our new approach (with p-value = 0.03). For liver lesion classification, improvements of 6% in sensitivity and 2% in specificity were obtained (with p-value 0.001). We demonstrated that classification based on an informative selected set of words results in significant improvement. Our new BoVW approach shows promising results in clinically important domains. Additionally, it can discover relevant parts of images for the task at hand without explicit annotations for training data. This can provide computer-aided support for medical experts in challenging image analysis tasks.
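
    The task-driven selection of visual words can be approximated with an off-the-shelf mutual-information ranking. The sketch below uses random stand-in BoVW histograms and scikit-learn's mutual_info_classif; it illustrates the selection step only, not the paper's full relevance-map pipeline.

```python
# Sketch of the task-driven dictionary idea: rank BoVW visual words by
# mutual information with the task labels and keep only the most relevant
# ones. The histograms here are random stand-ins for real BoVW counts.
import numpy as np
from sklearn.feature_selection import mutual_info_classif

rng = np.random.default_rng(2)
bovw = rng.poisson(2.0, size=(118, 500)).astype(float)  # 118 images x 500 words
labels = rng.integers(0, 4, size=118)                   # e.g., 4 lesion classes

mi = mutual_info_classif(bovw, labels, random_state=0)
top_words = np.argsort(mi)[::-1][:100]   # keep the 100 most informative words
task_driven_hist = bovw[:, top_words]    # reduced, task-driven representation
print(task_driven_hist.shape)
```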

  3. Invariance and universality in social agent-based simulations

    Science.gov (United States)

    Cioffi-Revilla, Claudio

    2002-01-01

    Agent-based simulation models have a promising future in the social sciences, from political science to anthropology, economics, and sociology. To realize their full scientific potential, however, these models must address a set of key problems, such as the number of interacting agents and their geometry, network topology, time calibration, phenomenological calibration, structural stability, power laws, and other substantive and methodological issues. This paper discusses and highlights these problems and outlines some solutions. PMID:12011412

  4. Contract Monitoring in Agent-Based Systems: Case Study

    Science.gov (United States)

    Hodík, Jiří; Vokřínek, Jiří; Jakob, Michal

    Monitoring of fulfilment of obligations defined by electronic contracts in distributed domains is presented in this paper. A two-level model of contract-based systems and the types of observations needed for contract monitoring are introduced. The observations (inter-agent communication and agents’ actions) are collected and processed by the contract observation and analysis pipeline. The presented approach has been utilized in a multi-agent system for electronic contracting in a modular certification testing domain.

  5. Multi-sparse dictionary colorization algorithm based on the feature classification and detail enhancement

    Science.gov (United States)

    Yan, Dan; Bai, Lianfa; Zhang, Yi; Han, Jing

    2018-02-01

    To address the problems of missing details and limited performance in colorization based on sparse representation, we propose a conceptual model framework for colorizing gray-scale images, and then a multi-sparse dictionary colorization algorithm based on feature classification and detail enhancement (CEMDC) is proposed based on this framework. The algorithm can achieve a natural colorized effect for a gray-scale image that is consistent with human vision. First, the algorithm establishes a multi-sparse dictionary classification colorization model. Then, to improve the accuracy rate of the classification, a corresponding local constraint algorithm is proposed. Finally, we propose a detail enhancement based on the Laplacian Pyramid, which is effective in solving the problem of missing details and improving the speed of image colorization. In addition, the algorithm not only realizes the colorization of visual gray-scale images, but can also be applied to other areas, such as color transfer between color images, colorizing gray fusion images, and infrared images.

  6. SVM classification model in depression recognition based on mutation PSO parameter optimization

    Directory of Open Access Journals (Sweden)

    Zhang Ming

    2017-01-01

    At present, the clinical diagnosis of depression is mainly made through structured interviews by psychiatrists, which lack objective diagnostic measures and therefore lead to a higher rate of misdiagnosis. In this paper, a method of depression recognition based on SVM and a mutation particle swarm optimization algorithm is proposed. To address the problem that the particle swarm optimization (PSO) algorithm easily becomes trapped in local optima, we propose a feedback mutation PSO algorithm (FBPSO) to balance local search and global exploration ability, so that the parameters of the classification model are optimal. We compared the depression classification accuracy of different PSO mutation algorithms and found that the classification accuracy of a support vector machine (SVM) classifier based on the feedback mutation PSO algorithm is the highest. Our study provides an important reference for establishing auxiliary diagnosis tools for depression recognition in clinical practice.
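
    A generic mutation-PSO loop for tuning the SVM's (C, gamma) pair might look like the sketch below. The paper's specific feedback rule (FBPSO) is not reproduced; the search bounds, PSO coefficients, and data are all assumptions for illustration, with particle positions in log10 space.

```python
# Generic mutation PSO over SVM hyper-parameters: particles search
# (log10 C, log10 gamma), and the global best is perturbed each iteration
# to help escape local optima. This is a sketch, not the paper's FBPSO.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(3)
X, y = make_classification(n_samples=200, n_features=10, random_state=0)

def fitness(pos):                        # pos = [log10(C), log10(gamma)]
    return cross_val_score(SVC(C=10 ** pos[0], gamma=10 ** pos[1]),
                           X, y, cv=3).mean()

n_particles, n_iters = 10, 15
pos = rng.uniform(-3, 3, size=(n_particles, 2))
vel = np.zeros_like(pos)
pbest, pbest_f = pos.copy(), np.array([fitness(p) for p in pos])
g_i = int(pbest_f.argmax())
gbest, gbest_f = pbest[g_i].copy(), pbest_f[g_i]

for _ in range(n_iters):
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, -3, 3)
    fit = np.array([fitness(p) for p in pos])
    better = fit > pbest_f
    pbest[better], pbest_f[better] = pos[better], fit[better]
    if pbest_f.max() > gbest_f:
        g_i = int(pbest_f.argmax())
        gbest, gbest_f = pbest[g_i].copy(), pbest_f[g_i]
    # Mutation step: random perturbation of the global best.
    cand = np.clip(gbest + rng.normal(0.0, 0.5, size=2), -3, 3)
    cand_f = fitness(cand)
    if cand_f > gbest_f:
        gbest, gbest_f = cand, cand_f

print("best log10(C), log10(gamma):", gbest, "CV accuracy:", round(gbest_f, 3))
```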

  7. A Novel Imbalanced Data Classification Approach Based on Logistic Regression and Fisher Discriminant

    Directory of Open Access Journals (Sweden)

    Baofeng Shi

    2015-01-01

    We introduce an imbalanced data classification approach based on logistic regression significant discriminant and Fisher discriminant. First of all, a key-indicator extraction model based on logistic regression significant discriminant and correlation analysis is derived to extract features for customer classification. Secondly, on the basis of linear weighting using the Fisher discriminant, a customer scoring model is established. Then, a customer rating model in which the customer number across all ratings follows a normal distribution is constructed. The performance of the proposed model and the classical SVM classification method are evaluated in terms of their ability to correctly classify consumers as default or non-default customers. Empirical results using the data of 2157 customers in financial engineering suggest that the proposed approach performs better than the SVM model in dealing with imbalanced data classification. Moreover, our approach contributes to locating qualified customers for banks and bond investors.

  8. A Discrete Wavelet Based Feature Extraction and Hybrid Classification Technique for Microarray Data Analysis

    Directory of Open Access Journals (Sweden)

    Jaison Bennet

    2014-01-01

    In earlier days, cancer classification by doctors and radiologists was based on morphological and clinical features and had limited diagnostic ability. The recent arrival of DNA microarray technology has enabled the concurrent monitoring of thousands of gene expressions on a single chip, which stimulates progress in cancer classification. In this paper, we propose a hybrid approach for microarray data classification based on nearest neighbor (KNN), naive Bayes, and support vector machine (SVM) classifiers. Feature selection prior to classification plays a vital role, and a feature selection technique which combines the discrete wavelet transform (DWT) and a moving window technique (MWT) is used. The performance of the proposed method is compared with that of the conventional classifiers, namely support vector machine, nearest neighbor, and naive Bayes. Experiments have been conducted on both real and benchmark datasets, and the results indicate that the ensemble approach produces higher classification accuracy than the conventional classifiers. This paper serves as an automated system for the classification of cancer and can be applied by doctors in real cases, serving as a boon to the medical community. This work further reduces the misclassification of cancers, which is unacceptable in cancer detection.
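
    The overall pipeline shape, wavelet-based feature extraction followed by a KNN / naive Bayes / SVM ensemble, can be sketched as follows. The moving window step is omitted, PyWavelets supplies the DWT, and the expression profiles and labels are random placeholders.

```python
# Sketch of the pipeline shape: DWT feature extraction followed by a
# soft-voting KNN / naive Bayes / SVM ensemble. Synthetic data stand in
# for real microarray profiles; the paper's moving-window step is omitted.
import numpy as np
import pywt
from sklearn.ensemble import VotingClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

rng = np.random.default_rng(4)
profiles = rng.normal(size=(60, 256))     # 60 samples x 256 expression values
labels = rng.integers(0, 2, size=60)      # cancer / normal

def dwt_features(x, wavelet="db4", level=3):
    # Keep the coarse approximation coefficients as a compact feature set.
    return pywt.wavedec(x, wavelet, level=level)[0]

X = np.array([dwt_features(p) for p in profiles])
ensemble = VotingClassifier(
    estimators=[("knn", KNeighborsClassifier(5)),
                ("nb", GaussianNB()),
                ("svm", SVC(probability=True))],
    voting="soft",
)
ensemble.fit(X, labels)
print("train accuracy:", ensemble.score(X, labels))
```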

  9. Rule-based land cover classification from very high-resolution satellite image with multiresolution segmentation

    Science.gov (United States)

    Haque, Md. Enamul; Al-Ramadan, Baqer; Johnson, Brian A.

    2016-07-01

    Multiresolution segmentation and rule-based classification techniques are used to classify objects from very high-resolution satellite images of urban areas. Custom rules are developed using different spectral, geometric, and textural features with five scale parameters, which yield varying classification accuracy. Principal component analysis is used to select the most important features out of a total of 207 different features. In particular, seven different object types are considered for classification. The overall classification accuracy achieved for the rule-based method is 95.55% and 98.95% for seven and five classes, respectively. Other classifiers that do not use rules perform at 84.17% and 97.3% accuracy for seven and five classes, respectively. The results reflect coarse segmentation at higher scale parameters and fine segmentation at lower scale parameters. The major contribution of this research is the development of rule sets and the identification of major features for satellite image classification, where the rule sets are transferable and the parameters are tunable for different types of imagery. Additionally, the individual object-wise classification and principal component analysis help to identify the required object from an arbitrary number of objects within images, given ground truth data for training.

  10. Automated mode shape estimation in agent-based wireless sensor networks

    Science.gov (United States)

    Zimmerman, Andrew T.; Lynch, Jerome P.

    2010-04-01

    Recent advances in wireless sensing technology have made it possible to deploy dense networks of sensing transducers within large structural systems. Because these networks leverage the embedded computing power and agent-based abilities integral to many wireless sensing devices, it is possible to analyze sensor data autonomously and in-network. In this study, market-based techniques are used to autonomously estimate mode shapes within a network of agent-based wireless sensors. Specifically, recent work in both decentralized Frequency Domain Decomposition and market-based resource allocation is leveraged to create a mode shape estimation algorithm derived from free-market principles. This algorithm allows an agent-based wireless sensor network to autonomously shift emphasis between improving mode shape accuracy and limiting the consumption of certain scarce network resources: processing time, storage capacity, and power consumption. The developed algorithm is validated by successfully estimating mode shapes using a network of wireless sensor prototypes deployed on the mezzanine balcony of Hill Auditorium, located on the University of Michigan campus.

  11. [Severity classification of chronic obstructive pulmonary disease based on deep learning].

    Science.gov (United States)

    Ying, Jun; Yang, Ceyuan; Li, Quanzheng; Xue, Wanguo; Li, Tanshi; Cao, Wenzhe

    2017-12-01

    In this paper, a deep learning method is proposed to build an automatic classification algorithm for the severity of chronic obstructive pulmonary disease. Clinical data from a large sample were analyzed as input features for their weights in classification. Through feature selection, model training, parameter optimization and model testing, a classification prediction model based on a deep belief network was built to predict the severity classification criteria raised by the Global Initiative for Chronic Obstructive Lung Disease (GOLD). We achieved over 90% prediction accuracy for two different standardized versions of the severity criteria, issued in 2007 and 2011 respectively. Moreover, we also obtained the contribution ranking of the different input features by analyzing the model coefficient matrix, and confirmed that there was a certain degree of agreement between the most contributive input features and clinical diagnostic knowledge. This result proves the validity of the deep belief network model. This study provides an effective solution for applying deep learning methods to automatic diagnostic decision making.

  12. Classification and Target Group Selection Based Upon Frequent Patterns

    NARCIS (Netherlands)

    W.H.L.M. Pijls (Wim); R. Potharst (Rob)

    2000-01-01

    In this technical report, two new algorithms based upon frequent patterns are proposed. One algorithm is a classification method. The other one is an algorithm for target group selection. In both algorithms, first of all, the collection of frequent patterns in the training set is

  13. Density Based Support Vector Machines for Classification

    OpenAIRE

    Zahra Nazari; Dongshik Kang

    2015-01-01

    Support Vector Machines (SVM) is the most successful algorithm for classification problems. SVM learns the decision boundary from two classes (for binary classification) of training points. However, sometimes there are less meaningful samples amongst the training points, which are corrupted by noise or misplaced on the wrong side, called outliers. These outliers affect the margin and the classification performance, and the machine would do better to discard them. SVM as a popular and widely used cl...

  14. Risk Classification and Risk-based Safety and Mission Assurance

    Science.gov (United States)

    Leitner, Jesse A.

    2014-01-01

    Recent activities to revamp and emphasize the need to streamline processes and activities for Class D missions across the agency have led to various interpretations of Class D, including the lumping of a variety of low-cost projects into Class D. Sometimes terms such as "Class D minus" are used. In this presentation, mission risk classifications will be traced to official requirements and definitions as a measure to ensure that projects and programs align with the guidance and requirements that are commensurate with their defined risk posture. As part of this, the full suite of risk classifications, formal and informal, will be defined, followed by an introduction to the new GPR 8705.4 that is currently under review. GPR 8705.4 lays out guidance for the mission success activities performed at Classes A-D for NPR 7120.5 projects as well as for projects not under NPR 7120.5. Furthermore, the trends in stepping from Class A into higher risk posture classifications will be discussed. The talk will conclude with a discussion about risk-based safety and mission assurance at GSFC.

  15. Overfitting Reduction of Text Classification Based on AdaBELM

    Directory of Open Access Journals (Sweden)

    Xiaoyue Feng

    2017-07-01

    Overfitting is an important problem in machine learning. Several algorithms, such as the extreme learning machine (ELM), suffer from this issue when facing high-dimensional sparse data, e.g., in text classification. One common issue is that the extent of overfitting is not well quantified. In this paper, we propose a quantitative measure of overfitting referred to as the rate of overfitting (RO) and a novel model, named AdaBELM, to reduce overfitting. With RO, the overfitting problem can be quantitatively measured and identified. The newly proposed model can achieve high performance on multi-class text classification. To evaluate the generalizability of the new model, we designed experiments based on three datasets, i.e., the 20 Newsgroups, Reuters-21578, and BioMed corpora, which represent balanced, unbalanced, and real application data, respectively. Experimental results demonstrate that AdaBELM can reduce overfitting and outperform the classical ELM, decision tree, random forests, and AdaBoost on all three text-classification datasets; for example, it achieves 62.2% higher accuracy than ELM. Therefore, the proposed model has good generalizability.

  16. Multi-agent based distributed control architecture for microgrid energy management and optimization

    International Nuclear Information System (INIS)

    Basir Khan, M. Reyasudin; Jidin, Razali; Pasupuleti, Jagadeesh

    2016-01-01

    Highlights: • A new multi-agent based distributed control architecture for energy management. • Multi-agent coordination based on non-cooperative game theory. • A microgrid model comprised of renewable energy generation systems. • Performance comparison of distributed with conventional centralized control. - Abstract: Most energy management systems are based on a centralized controller, which makes it difficult to satisfy criteria such as fault tolerance and adaptability. Therefore, a new multi-agent based distributed energy management system architecture is proposed in this paper. The distributed generation system is composed of several distributed energy resources and a group of loads. A multi-agent system based decentralized control architecture was developed in order to provide control for the complex energy management of the distributed generation system. Then, non-cooperative game theory was used for the multi-agent coordination in the system. The distributed generation system was assessed by simulation under renewable resource fluctuations, seasonal load demand and grid disturbances. The simulation results show that the new energy management system provides more robust and higher-performance control than conventional centralized energy management systems.

  17. An application to pulmonary emphysema classification based on model of texton learning by sparse representation

    Science.gov (United States)

    Zhang, Min; Zhou, Xiangrong; Goshima, Satoshi; Chen, Huayue; Muramatsu, Chisako; Hara, Takeshi; Yokoyama, Ryojiro; Kanematsu, Masayuki; Fujita, Hiroshi

    2012-03-01

    We aim to use a new texton based texture classification method for the classification of pulmonary emphysema in computed tomography (CT) images of the lungs. Different from conventional computer-aided diagnosis (CAD) pulmonary emphysema classification methods, in this paper the texton dictionary is first learned by applying sparse representation (SR) to image patches in the training dataset. Then the SR coefficients of the test images over the dictionary are used to construct histograms for texture presentation. Finally, classification is performed using a nearest neighbor classifier with a histogram dissimilarity measure as the distance. The proposed approach is tested on 3840 annotated regions of interest consisting of normal tissue and mild, moderate and severe pulmonary emphysema of three subtypes. The performance of the proposed system, with an accuracy of about 88%, is comparably higher than that of the state-of-the-art method based on basic rotation-invariant local binary pattern histograms and the texture classification method based on texton learning by k-means, which performs almost the best among the other approaches in the literature.
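
    A loose sketch of the texton workflow (learn a patch dictionary by sparse coding, summarize each image as a histogram over atoms, classify by nearest neighbor under a histogram dissimilarity) is given below. Patch data, dictionary size, and the L1 dissimilarity are illustrative choices, not the paper's exact settings.

```python
# Texton-style sketch: sparse-coding dictionary over patches, histogram
# of each image's dominant atoms, and 1-NN classification under an L1
# histogram dissimilarity. All patches here are random stand-ins.
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning

rng = np.random.default_rng(5)
train_patches = rng.normal(size=(500, 25))      # 5x5 patches, flattened

dico = MiniBatchDictionaryLearning(n_components=30, alpha=1.0, random_state=0)
dico.fit(train_patches)

def texton_histogram(patches):
    codes = dico.transform(patches)              # sparse codes over the atoms
    dominant = np.abs(codes).argmax(axis=1)      # dominant atom per patch
    hist = np.bincount(dominant, minlength=30).astype(float)
    return hist / hist.sum()

def nn_classify(query_hist, ref_hists, ref_labels):
    dists = [np.abs(query_hist - r).sum() for r in ref_hists]  # L1 dissimilarity
    return ref_labels[int(np.argmin(dists))]

refs = [texton_histogram(rng.normal(size=(200, 25))) for _ in range(4)]
print(nn_classify(texton_histogram(rng.normal(size=(200, 25))), refs,
                  ["normal", "mild", "moderate", "severe"]))
```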

  18. Multi-agent cooperation rescue algorithm based on influence degree and state prediction

    Science.gov (United States)

    Zheng, Yanbin; Ma, Guangfu; Wang, Linlin; Xi, Pengxue

    2018-04-01

    Aiming at multi-agent cooperative rescue in disasters, a multi-agent cooperative rescue algorithm based on influence degree and state prediction is proposed. Firstly, based on the influence of the information in the scene on the collaborative task, an influence degree function is used to filter the information. Secondly, the selected information is used to predict the state of the system and agent behavior. Finally, the forecast results are used to guide the agents' cooperative behavior and improve the efficiency of individual collaboration. The simulation results show that this algorithm can effectively solve the multi-agent cooperative rescue problem and ensure efficient completion of the task.

  19. Persuasion Model and Its Evaluation Based on Positive Change Degree of Agent Emotion

    Science.gov (United States)

    Jinghua, Wu; Wenguang, Lu; Hailiang, Meng

    Because it can meet the needs of negotiations among organizations that take place at different times and places, and because it can make the negotiation process more rational and its results more ideal, agent-based persuasion can improve cooperation among organizations well. Integrating emotion change into agent persuasion can further bring the artificial-intelligence advantages of agents into play. The emotions in agent persuasion are classified, and the concept of positive change degree is given. Based on this, a persuasion model based on the positive change degree of agent emotion is constructed and explained clearly through an example. Finally, a relative evaluation method is given, which is also verified through a calculation example.

  20. An agent-based information management model of the Chinese pig sector

    NARCIS (Netherlands)

    Osinga, S.A.; Kramer, M.R.; Hofstede, G.J.; Roozmand, O.; Beulens, A.J.M.

    2010-01-01

    This paper investigates the effect of a selected top-down measure (what-if scenario) on actual agent behaviour and total system behaviour by means of an agent-based simulation model, when agents’ behaviour cannot fully be managed because the agents are autonomous. The Chinese pork sector serves as

  1. Modeling and simulation of complex systems a framework for efficient agent-based modeling and simulation

    CERN Document Server

    Siegfried, Robert

    2014-01-01

    Robert Siegfried presents a framework for efficient agent-based modeling and simulation of complex systems. He compares different approaches for describing structure and dynamics of agent-based models in detail. Based on this evaluation the author introduces the "General Reference Model for Agent-based Modeling and Simulation" (GRAMS). Furthermore he presents parallel and distributed simulation approaches for execution of agent-based models -from small scale to very large scale. The author shows how agent-based models may be executed by different simulation engines that utilize underlying hard

  2. Soil classification basing on the spectral characteristics of topsoil samples

    Science.gov (United States)

    Liu, Huanjun; Zhang, Xiaokang; Zhang, Xinle

    2016-04-01

    Soil taxonomy plays an important role in soil utility and management, but China has only a coarse soil map created from 1980s data. New technology, e.g. spectroscopy, could simplify soil classification. This study attempts to classify soils based on the spectral characteristics of topsoil samples. 148 topsoil samples of typical soils, including Black soil, Chernozem, Blown soil and Meadow soil, were collected from the Songnen plain, Northeast China, and their laboratory spectral reflectance in the visible and near-infrared region (400-2500 nm) was processed with weighted moving average, resampling, and continuum removal. Spectral indices were extracted from the soil spectral characteristics, including the second absorption position of the spectral curve, the area of the first absorption valley, and the slope of the spectral curve at 500-600 nm and 1340-1360 nm. Then K-means clustering and a decision tree were used respectively to build the soil classification model. The results indicated that 1) the second absorption positions of Black soil and Chernozem were located at 610 nm and 650 nm respectively; 2) the spectral curve of the Meadow soil is similar to that of its adjacent soil, which could be due to soil erosion; 3) the decision tree model showed higher classification accuracy, with accuracies of 100%, 88%, 97% and 50% for Black soil, Chernozem, Blown soil and Meadow soil respectively, and the accuracy for Blown soil could be increased to 100% by adding one more spectral index (the area of the first two valleys) to the model, which shows that the model could be used for soil classification and soil mapping in the near future.

  3. A Skeleton Based Programming Paradigm for Mobile Multi-Agents on Distributed Systems and Its Realization within the MAGDA Mobile Agents Platform

    Directory of Open Access Journals (Sweden)

    R. Aversa

    2008-01-01

    Parallel programming effort can be reduced by using high-level constructs such as algorithmic skeletons. Within the MAGDA toolset, which supports programming and execution of mobile agent based distributed applications, we provide a skeleton-based parallel programming environment based on specialization of Algorithmic Skeleton Java interfaces and classes. Their implementation includes mobile agent features for execution on heterogeneous systems, such as clusters of WSs and PCs, and supports reliability and dynamic workload balancing. The user can thus develop a parallel, mobile agent based application by simply specialising a given set of classes and methods and using a set of added functionalities.

  4. A Novel Algorithm for Imbalance Data Classification Based on Neighborhood Hypergraph

    Directory of Open Access Journals (Sweden)

    Feng Hu

    2014-01-01

    The classification problem for imbalanced data has received increasing attention. Many significant methods have been proposed and applied in many fields so far, but more efficient methods are still needed. Although the hypergraph is an efficient tool for knowledge discovery, it may not be powerful enough to deal with data in the boundary region. In this paper, the neighborhood hypergraph is presented, combining rough set theory and the hypergraph. After that, a novel classification algorithm for imbalanced data based on the neighborhood hypergraph is developed, which is composed of three steps: initialization of hyperedges, classification of the training data set, and substitution of hyperedges. In an experiment of 10-fold cross validation on 18 data sets, the proposed algorithm achieved higher average accuracy than the other methods.

  5. Segmentation of Clinical Endoscopic Images Based on the Classification of Topological Vector Features

    Directory of Open Access Journals (Sweden)

    O. A. Dunaeva

    2013-01-01

    In this work, we describe a prototype of an automatic segmentation and annotation system for endoscopy images. The algorithm used is based on the classification of vectors of topological features of the original image. We use an image processing scheme which includes image preprocessing, calculation of vector descriptors defined for every point of the source image, and the subsequent classification of the descriptors. Image preprocessing includes finding and selecting artifacts and equalizing the image brightness. In this work, we give a detailed algorithm for the construction of the topological descriptors and the classifier creation procedure, based on combining the AdaBoost scheme with a naive Bayes classifier. In the final section, we show the results of the classification of real endoscopic images.

  6. Some improved classification-based ridge parameter of Hoerl and ...

    African Journals Online (AJOL)

    Some improved classification-based ridge parameter of Hoerl and Kennard estimation techniques. ... This assumption is often violated, and the Ridge Regression estimator introduced by [2] has been identified to be more efficient than ordinary least squares (OLS) in handling it. However, it requires a ridge parameter, K, of which ...

  7. Contaminant classification using cosine distances based on multiple conventional sensors.

    Science.gov (United States)

    Liu, Shuming; Che, Han; Smith, Kate; Chang, Tian

    2015-02-01

    Emergent contamination events have a significant impact on water systems. After contamination detection, it is important to classify the type of contaminant quickly to provide support for remediation attempts. Conventional methods generally either rely on laboratory-based analysis, which requires a long analysis time, or on multivariable-based geometry analysis and sequence analysis, which are prone to being affected by the contaminant concentration. This paper proposes a new contaminant classification method, which discriminates contaminants in real time, independent of the contaminant concentration. The proposed method quantifies the similarities or dissimilarities between sensors' responses to different types of contaminants. The performance of the proposed method was evaluated using data from contaminant injection experiments in a laboratory and compared with a Euclidean distance-based method. The robustness of the proposed method was evaluated using an uncertainty analysis. The results show that the proposed method performed better in identifying the type of contaminant than the Euclidean distance based method and that it could classify the type of contaminant in minutes without significantly compromising the correct classification rate (CCR).
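
    Because cosine distance depends only on the direction of the multi-sensor response vector, scaling the response by concentration leaves the classification unchanged, which is the core of the concentration-independence claim above. A minimal sketch, with invented sensor signatures:

```python
# Cosine-distance classification sketch: compare a multi-sensor response
# vector against library signatures. Cosine distance ignores magnitude,
# so the result is insensitive to contaminant concentration. The library
# values below are invented for illustration.
import numpy as np

library = {                      # sensor-response signatures per contaminant
    "pesticide":   np.array([0.9, -0.2, 0.4, 0.1]),
    "heavy_metal": np.array([0.1, 0.8, -0.3, 0.5]),
}

def classify(response):
    def cos_dist(a, b):
        return 1 - np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return min(library, key=lambda k: cos_dist(response, library[k]))

# A scaled response (3x concentration) still maps to the same contaminant.
print(classify(3.0 * np.array([0.85, -0.15, 0.45, 0.12])))
```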

  8. Object-based Dimensionality Reduction in Land Surface Phenology Classification

    Directory of Open Access Journals (Sweden)

    Brian E. Bunker

    2016-11-01

    Unsupervised classification or clustering of multi-decadal land surface phenology provides a spatio-temporal synopsis of natural and agricultural vegetation response to environmental variability and anthropogenic activities. Notwithstanding the detailed temporal information available in calibrated bi-monthly normalized difference vegetation index (NDVI) and comparable time series, typical pre-classification workflows average a pixel's bi-monthly index within the larger multi-decadal time series. While this process is one practical way to reduce the dimensionality of time series with many hundreds of image epochs, it effectively dampens temporal variation from both intra- and inter-annual observations related to land surface phenology. Through a novel application of object-based segmentation aimed at spatial (not temporal) dimensionality reduction, all 294 image epochs from a Moderate Resolution Imaging Spectroradiometer (MODIS) bi-monthly NDVI time series covering the northern Fertile Crescent were retained (in homogenous landscape units) as unsupervised classification inputs. Given the inherent challenges of in situ or manual image interpretation of land surface phenology classes, a cluster validation approach based on transformed divergence enabled comparison between traditional and novel techniques. Improved intra-annual contrast was clearly manifest in rain-fed agriculture, and inter-annual trajectories showed increased cluster cohesion, reducing the overall number of classes identified in the Fertile Crescent study area from 24 to 10. Given careful segmentation parameters, this spatial dimensionality reduction technique augments the value of unsupervised learning to generate homogeneous land surface phenology units. By combining recent scalable computational approaches to image segmentation, future work can pursue new global land surface phenology products based on the high temporal resolution signatures of vegetation index time series.

  9. A Multi-Agent Based Energy Management Solution for Integrated Buildings and Microgrid System

    DEFF Research Database (Denmark)

    Anvari-Moghaddam, Amjad; Rahimi-Kian, Ashkan; Mirian, Maryam S.

    2017-01-01

    In this paper, an ontology-driven multi-agent based energy management system (EMS) is proposed for monitoring and optimal control of an integrated homes/buildings and microgrid system with various renewable energy resources (RESs) and controllable loads. Different agents ranging from simple-reflex to complex learning agents are designed and implemented to cooperate with each other to reach an optimal operating strategy for the mentioned integrated energy system (IES) while meeting the system’s objectives and related constraints. The optimization process for the EMS is defined as a coordinated distributed generation (DG) and demand response (DR) management problem within the studied environment and is solved by the proposed agent-based approach utilizing cooperation and communication among decision agents. To verify the effectiveness and applicability of the proposed multi-agent based EMS, several...

  10. Efficacy measures associated to a plantar pressure based classification system in diabetic foot medicine.

    Science.gov (United States)

    Deschamps, Kevin; Matricali, Giovanni Arnoldo; Desmet, Dirk; Roosen, Philip; Keijsers, Noel; Nobels, Frank; Bruyninckx, Herman; Staes, Filip

    2016-09-01

    The concept of 'classification' has, similar to many other diseases, been found to be fundamental in the field of diabetic medicine. In the current study, we aimed at determining efficacy measures of a recently published plantar pressure based classification system. Technical efficacy of the classification system was investigated by applying a high-resolution, pixel-level analysis to the normalized plantar pressure pedobarographic fields of the original experimental dataset consisting of 97 patients with diabetes and 33 persons without diabetes. Clinical efficacy was assessed by considering the occurrence of foot ulcers at the plantar aspect of the forefoot in this dataset. Classification efficacy was assessed by determining the classification recognition rate as well as its sensitivity and specificity, using cross-validation subsets of the experimental dataset together with a novel cohort of 12 patients with diabetes. Pixel-level comparison of the four groups associated with the classification system highlighted distinct regional differences. Retrospective analysis showed the occurrence of eleven foot ulcers in the experimental dataset since the patients' gait analysis. Eight of the eleven ulcers developed in a region of the foot which had the highest forces. The overall classification recognition rate exceeded 90% for all cross-validation subsets. Sensitivity and specificity of the four groups associated with the classification system exceeded the 0.7 and 0.8 levels, respectively, in all cross-validation subsets. The results of the current study support the use of the novel plantar pressure based classification system in diabetic foot medicine. It may particularly serve in communication, diagnosis and clinical decision making. Copyright © 2016 Elsevier B.V. All rights reserved.

  11. Agent-Based Collaborative Traffic Flow Management, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — We propose agent-based game-theoretic approaches for simulation of strategies involved in multi-objective collaborative traffic flow management (CTFM). Intelligent...

  12. A new web-based system for unsupervised classification of satellite images from the Google Maps engine

    Science.gov (United States)

    Ferrán, Ángel; Bernabé, Sergio; García-Rodríguez, Pablo; Plaza, Antonio

    2012-10-01

    In this paper, we develop a new web-based system for unsupervised classification of satellite images available from the Google Maps engine. The system has been developed using the Google Maps API and incorporates functionalities such as unsupervised classification of image portions selected by the user (at the desired zoom level). For this purpose, we use a processing chain made up of the well-known ISODATA and k-means algorithms, followed by spatial post-processing based on majority voting. The system is currently hosted on a high-performance server which performs the execution of the classification algorithms and returns the obtained classification results in a very efficient way. These functionalities are necessary for using efficient image classification techniques and for the incorporation of content-based image retrieval (CBIR). Several experiments validate the classification results of the proposed chain by comparing its accuracy against techniques available in the well-known Environment for Visualizing Images (ENVI) software package. The server has access to a cluster of commodity graphics processing units (GPUs); hence, in future work we plan to perform the processing in parallel by taking advantage of the cluster.
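
    The clustering-plus-majority-voting chain can be sketched with scikit-learn and SciPy as follows; only the k-means half of the ISODATA/k-means pair is shown, the image tile is a random stand-in, and the majority vote is applied in an assumed 3x3 window as the spatial post-processing step.

```python
# Sketch of the processing chain shape: k-means clustering of pixels
# followed by spatial post-processing with a per-window majority vote.
import numpy as np
from scipy.ndimage import generic_filter
from sklearn.cluster import KMeans

rng = np.random.default_rng(6)
img = rng.random((64, 64, 3))                       # stand-in satellite tile

labels = KMeans(n_clusters=5, n_init=10, random_state=0) \
    .fit_predict(img.reshape(-1, 3)).reshape(64, 64)

def majority(window):
    values, counts = np.unique(window.astype(int), return_counts=True)
    return values[counts.argmax()]

smoothed = generic_filter(labels, majority, size=3)  # majority voting
print(smoothed.shape)
```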

  13. Geometry of behavioral spaces: A computational approach to analysis and understanding of agent based models and agent behaviors

    Science.gov (United States)

    Cenek, Martin; Dahl, Spencer K.

    2016-11-01

    Systems with non-linear dynamics frequently exhibit emergent system behavior, which is important to find and specify rigorously to understand the nature of the modeled phenomena. Through this analysis, it is possible to characterize phenomena such as how systems assemble or dissipate and what behaviors lead to specific final system configurations. Agent Based Modeling (ABM) is one of the modeling techniques used to study the interaction dynamics between a system's agents and its environment. Although the methodology of ABM construction is well understood and practiced, there are no computational, statistically rigorous, comprehensive tools to evaluate an ABM's execution. Often, a human has to observe an ABM's execution in order to analyze how the ABM functions, identify the emergent processes in the agent's behavior, or study a parameter's effect on the system-wide behavior. This paper introduces a new statistically based framework to automatically analyze agents' behavior, identify common system-wide patterns, and record the probability of agents changing their behavior from one pattern of behavior to another. We use network based techniques to analyze the landscape of common behaviors in an ABM's execution. Finally, we test the proposed framework with a series of experiments featuring increasingly emergent behavior. The proposed framework will allow computational comparison of ABM executions, exploration of a model's parameter configuration space, and identification of the behavioral building blocks in a model's dynamics.

  14. Agent-Based Modeling of Day-Ahead Real Time Pricing in a Pool-Based Electricity Market

    Directory of Open Access Journals (Sweden)

    Sh. Yousefi

    2011-09-01

    In this paper, an agent-based structure of the electricity retail market is presented, based on which day-ahead (DA) energy procurement for customers is modeled. Here, we focus on the operation of only one Retail Energy Provider (REP) agent, who purchases energy from the DA pool-based wholesale market and offers DA real time tariffs to a group of its customers. As a model of customer response to the offered real time prices, an hourly acceptance function is proposed in order to represent the hourly changes in the customer's effective demand according to the prices. Here, a Q-learning (QL) approach is applied in day-ahead real time pricing for the customers, enabling the REP agent to discover which price yields the most benefit through a trial-and-error search. Numerical studies are presented based on New England day-ahead market data, which include comparing the results of RTP based on the QL approach with those of genetic-based pricing.
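
    The trial-and-error price search can be illustrated with a single-state (bandit-style) tabular Q-learning loop; the candidate tariffs, wholesale price, and customer-acceptance curve below are inventions standing in for the paper's hourly acceptance function and profit model.

```python
# Toy sketch of the REP agent's trial-and-error pricing: single-state
# Q-learning over a few discrete tariffs, with an invented linear
# customer-acceptance curve and noisy profit signal.
import numpy as np

rng = np.random.default_rng(7)
prices = np.array([0.08, 0.10, 0.12, 0.15])   # candidate $/kWh tariffs
wholesale = 0.07
Q = np.zeros(len(prices))
alpha, eps = 0.1, 0.2

def profit(price):
    accepted_demand = max(0.0, 100 * (1.8 - 10 * price))  # invented response
    return (price - wholesale) * accepted_demand + rng.normal(0, 0.5)

for _ in range(2000):
    a = rng.integers(len(prices)) if rng.random() < eps else int(Q.argmax())
    Q[a] += alpha * (profit(prices[a]) - Q[a])   # epsilon-greedy Q update

print("learned best tariff:", prices[int(Q.argmax())])
```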

  15. Attribute-based classification for zero-shot visual object categorization.

    Science.gov (United States)

    Lampert, Christoph H; Nickisch, Hannes; Harmeling, Stefan

    2014-03-01

    We study the problem of object recognition for categories for which we have no training examples, a task also called zero-data or zero-shot learning. This situation has hardly been studied in computer vision research, even though it occurs frequently; the world contains tens of thousands of different object classes, and image collections have been formed and suitably annotated for only a few of them. To tackle the problem, we introduce attribute-based classification: Objects are identified based on a high-level description that is phrased in terms of semantic attributes, such as the object's color or shape. Because the identification of each such property transcends the specific learning task at hand, the attribute classifiers can be prelearned independently, for example, from existing image data sets unrelated to the current task. Afterward, new classes can be detected based on their attribute representation, without the need for a new training phase. In this paper, we also introduce a new data set, Animals with Attributes, of over 30,000 images of 50 animal classes, annotated with 85 semantic attributes. Extensive experiments on this and two more data sets show that attribute-based classification indeed is able to categorize images without access to any training images of the target classes.
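
    The attribute-based prediction idea can be sketched compactly: train one probabilistic classifier per attribute on seen classes, then score each unseen class by the likelihood of its known attribute signature. The data, signatures, and classifier choice below are synthetic illustrations, not the paper's setup.

```python
# Attribute-based zero-shot sketch: per-attribute classifiers trained on
# seen classes; unseen classes scored by how well their known attribute
# signatures match the predicted attribute probabilities.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(8)
n_attr = 4
# Binary attribute signature per class (e.g., "striped", "aquatic", ...).
seen_sigs = np.array([[1, 0, 1, 0], [0, 1, 0, 1]])
unseen_sigs = np.array([[1, 1, 0, 0], [0, 0, 1, 1]])

X = rng.normal(size=(200, 10))
cls = rng.integers(0, 2, size=200)               # seen-class labels
A = seen_sigs[cls]                               # per-sample attribute labels

attr_clfs = [LogisticRegression(max_iter=1000).fit(X, A[:, j])
             for j in range(n_attr)]

def zero_shot_predict(x):
    p = np.array([c.predict_proba(x.reshape(1, -1))[0, 1] for c in attr_clfs])
    # Likelihood of each unseen class given predicted attribute probabilities.
    scores = [np.prod(np.where(s == 1, p, 1 - p)) for s in unseen_sigs]
    return int(np.argmax(scores))

print("predicted unseen class:", zero_shot_predict(rng.normal(size=10)))
```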

  16. KNN BASED CLASSIFICATION OF DIGITAL MODULATED SIGNALS

    Directory of Open Access Journals (Sweden)

    Sajjad Ahmed Ghauri

    2016-11-01

    Demodulation without knowledge of the modulation scheme requires Automatic Modulation Classification (AMC). When the receiver has limited information about the received signal, AMC becomes an essential process. AMC plays an important role in many civil and military fields, such as modern electronic warfare, interfering source recognition, frequency management, and link adaptation. In this paper we explore the use of K-nearest neighbor (KNN) for modulation classification with different distance measurement methods. Five modulation schemes are used for classification, namely Binary Phase Shift Keying (BPSK), Quadrature Phase Shift Keying (QPSK), Quadrature Amplitude Modulation (QAM), 16-QAM and 64-QAM. Higher order cumulants (HOC) are used as the input feature set to the classifier. Simulation results show that the proposed classification method provides better results for the considered modulation formats.
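
    A minimal version of the HOC-plus-KNN classifier, restricted to BPSK versus QPSK for brevity: higher-order cumulant estimates C40 and C42 are computed from noisy symbols and fed to a KNN with a chosen distance metric. The SNR, constellation handling, and feature normalization are illustrative assumptions.

```python
# HOC-feature KNN sketch: generate noisy BPSK/QPSK symbols, estimate the
# fourth-order cumulants C40 and C42 (normalized by signal power), and
# classify with KNN. Constellations and SNR are illustrative.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(9)

def symbols(mod, n=512, snr_db=10):
    if mod == "BPSK":
        s = rng.choice([-1, 1], n).astype(complex)
    else:  # QPSK
        s = (rng.choice([-1, 1], n) + 1j * rng.choice([-1, 1], n)) / np.sqrt(2)
    noise = (rng.normal(size=n) + 1j * rng.normal(size=n)) / np.sqrt(2)
    return s + noise * 10 ** (-snr_db / 20)

def hoc_features(x):
    m20, m21 = np.mean(x ** 2), np.mean(np.abs(x) ** 2)
    m40, m42 = np.mean(x ** 4), np.mean(np.abs(x) ** 4)
    c40 = m40 - 3 * m20 ** 2
    c42 = m42 - np.abs(m20) ** 2 - 2 * m21 ** 2
    return [np.abs(c40) / m21 ** 2, np.abs(c42) / m21 ** 2]

X = [hoc_features(symbols(m)) for m in ["BPSK", "QPSK"] for _ in range(100)]
y = ["BPSK"] * 100 + ["QPSK"] * 100
knn = KNeighborsClassifier(n_neighbors=5, metric="euclidean").fit(X, y)
print(knn.predict([hoc_features(symbols("QPSK"))]))
```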

  17. Personalized E- learning System Based on Intelligent Agent

    Science.gov (United States)

    Duo, Sun; Ying, Zhou Cai

    Lack of personalization is the key shortcoming of traditional e-Learning systems. This paper analyzes the personal characteristics in e-Learning activity. In order to provide personalized e-learning, a personalized e-learning system based on intelligent agents was proposed and realized in this paper. The structure of the system, its work process, the design of the intelligent agent and the realization of the intelligent agent are introduced in the paper. After trial use of the system by an online school, we found that the system could improve learners' active participation and provide them with personalized knowledge services. Thus, we consider it a practical solution for realizing self-learning and self-promotion in the age of lifelong education.

  18. A k-mer-based barcode DNA classification methodology based on spectral representation and a neural gas network.

    Science.gov (United States)

    Fiannaca, Antonino; La Rosa, Massimo; Rizzo, Riccardo; Urso, Alfonso

    2015-07-01

    In this paper, an alignment-free method for DNA barcode classification that is based on both a spectral representation and a neural gas network for unsupervised clustering is proposed. In the proposed methodology, distinctive words are identified from a spectral representation of DNA sequences. A taxonomic classification of the DNA sequence is then performed using the sequence signature, i.e., the smallest set of k-mers that can assign a DNA sequence to its proper taxonomic category. Experiments were then performed to compare our method with other supervised machine learning classification algorithms, such as support vector machine, random forest, ripper, naïve Bayes, ridor, and classification tree, which also consider short DNA sequence fragments of 200 and 300 base pairs (bp). The experimental tests were conducted over 10 real barcode datasets belonging to different animal species, which were provided by the on-line resource "Barcode of Life Database". The experimental results showed that our k-mer-based approach is directly comparable, in terms of accuracy, recall and precision metrics, with the other classifiers when considering full-length sequences. In addition, we demonstrate the robustness of our method when the classification task is performed with a set of short DNA sequences that were randomly extracted from the original data. For example, the proposed method can reach an accuracy of 64.8% at the species level with 200-bp fragments. Under the same conditions, the best other classifier (random forest) reaches an accuracy of 20.9%. Our results indicate that we obtained a clear improvement over the other classifiers for the study of short DNA barcode sequence fragments. Copyright © 2015 Elsevier B.V. All rights reserved.
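    The spectral representation at the core of such alignment-free methods is just a k-mer count vector. A minimal sketch is below; the paper's neural gas clustering and signature selection are not shown, and the example sequence is invented.

```python
# Minimal k-mer spectral representation for a DNA barcode fragment:
# each sequence becomes a normalized count vector over all k-mers.
from itertools import product
import numpy as np

def kmer_spectrum(seq, k=3):
    alphabet = "ACGT"
    index = {"".join(p): i for i, p in enumerate(product(alphabet, repeat=k))}
    vec = np.zeros(len(index))
    for i in range(len(seq) - k + 1):
        kmer = seq[i:i+k]
        if kmer in index:          # skip ambiguous bases such as 'N'
            vec[index[kmer]] += 1
    total = vec.sum()
    return vec / total if total else vec

spectrum = kmer_spectrum("ACGTACGTTAGC", k=3)
print(spectrum.shape)               # (64,) for k=3
print(spectrum.nonzero()[0][:5])    # indices of observed 3-mers
```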

  19. Toward Intelligent Autonomous Agents for Cyber Defense: Report of the 2017 Workshop by the North Atlantic Treaty Organization (NATO) Research Group IST-152 RTG

    Science.gov (United States)

    2018-04-18

    Report documentation page excerpt; the full abstract is not recoverable. Subject terms: cybersecurity, cyber defense, autonomous agents, resilience, adversarial intelligence. Surviving fragments discuss explaining attack sequences (e.g., via Kullback–Leibler divergence), the DARPA Explainable Artificial Intelligence program, and questions of robot ethics and self-guidance for autonomous cyber-defense agents.

  20. Toward Intelligent Autonomous Agents for Cyber Defense: Report of the 2017 Workshop by the North Atlantic Treaty Organization (NATO) Research Group IST-152-RTG

    Science.gov (United States)

    2018-04-01

    Report documentation page excerpt; the full abstract is not recoverable. Subject terms: cybersecurity, cyber defense, autonomous agents, resilience, adversarial intelligence. Surviving fragments discuss explaining attack sequences (e.g., via Kullback–Leibler divergence), the DARPA Explainable Artificial Intelligence program, and questions of robot ethics and self-guidance for autonomous cyber-defense agents.

  1. Collective Machine Learning: Team Learning and Classification in Multi-Agent Systems

    Science.gov (United States)

    Gifford, Christopher M.

    2009-01-01

    This dissertation focuses on multiple heterogeneous, intelligent agents (hardware or software) which collaborate to learn a task and are capable of sharing knowledge. The concept of collaborative learning in multi-agent and multi-robot systems is largely understudied, and represents an area where further research is needed to…

  2. Classification and Quality Evaluation of Tobacco Leaves Based on Image Processing and Fuzzy Comprehensive Evaluation

    Science.gov (United States)

    Zhang, Fan; Zhang, Xinhong

    2011-01-01

    Most classification, quality evaluation, or grading of flue-cured tobacco leaves is performed manually, relying on the judgmental experience of experts, and is inevitably limited by personal, physical and environmental factors. The classification and quality evaluation are therefore subjective and experience-based. In this paper, an automatic classification method for tobacco leaves based on digital image processing and fuzzy sets theory is presented. A grading system based on image processing techniques was developed for automatically inspecting and grading flue-cured tobacco leaves. This system uses machine vision for the extraction and analysis of color, size, shape and surface texture. Fuzzy comprehensive evaluation provides a high level of confidence in decision making based on fuzzy logic. A neural network is used to estimate and forecast the membership functions of the features of tobacco leaves in the fuzzy sets. The experimental results of the two-level fuzzy comprehensive evaluation (FCE) show that the accuracy rate of classification is about 94% for the trained tobacco leaves, and about 72% for non-trained tobacco leaves. We believe that fuzzy comprehensive evaluation is a viable way for the automatic classification and quality evaluation of tobacco leaves. PMID:22163744
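    The two-level aggregation at the heart of fuzzy comprehensive evaluation can be sketched as two weighted matrix products. In the paper the membership degrees come from a trained neural network; the numbers and weights below are hand-written stand-ins for illustration only.

```python
# Hedged sketch of a two-level fuzzy comprehensive evaluation over grades.
import numpy as np

grades = ["high", "middle", "low"]

# First level: membership of each feature over the grades (rows sum to 1).
color   = np.array([[0.7, 0.2, 0.1],    # e.g., hue
                    [0.6, 0.3, 0.1]])   # e.g., saturation
texture = np.array([[0.5, 0.4, 0.1],
                    [0.4, 0.4, 0.2]])

w_color, w_texture = np.array([0.6, 0.4]), np.array([0.5, 0.5])
b_color = w_color @ color               # first-level evaluation vectors
b_texture = w_texture @ texture

# Second level: combine the group evaluations with assumed group weights.
W = np.array([0.55, 0.45])              # importance of color vs. texture
B = W @ np.vstack([b_color, b_texture])
print("grade:", grades[int(np.argmax(B))], B.round(3))
```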

  3. Optimal Couple Projections for Domain Adaptive Sparse Representation-based Classification.

    Science.gov (United States)

    Zhang, Guoqing; Sun, Huaijiang; Porikli, Fatih; Liu, Yazhou; Sun, Quansen

    2017-08-29

    In recent years, sparse representation based classification (SRC) has become one of the most successful methods and has shown impressive performance in various classification tasks. However, when the training data have a different distribution than the testing data, the learned sparse representation may not be optimal, and the performance of SRC will degrade significantly. To address this problem, in this paper, we propose an optimal couple projections for domain-adaptive sparse representation-based classification (OCPD-SRC) method, in which the discriminative features of data in the two domains are simultaneously learned with a dictionary that can succinctly represent the training and testing data in the projected space. OCPD-SRC is designed based on the decision rule of SRC, with the objective of learning coupled projection matrices and a common discriminative dictionary such that the between-class sparse reconstruction residuals of data from both domains are maximized, and the within-class sparse reconstruction residuals of data are minimized in the projected low-dimensional space. Thus, the resulting representations can well fit SRC and simultaneously have better discriminant ability. In addition, our method can be easily extended to multiple domains and can be kernelized to deal with the nonlinear structure of data. The optimal solution for the proposed method can be efficiently obtained following an alternating optimization method. Extensive experimental results on a series of benchmark databases show that our method is better than or comparable to many state-of-the-art methods.
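    For readers unfamiliar with the SRC decision rule that OCPD-SRC builds on, the sketch below codes a test sample sparsely over a concatenated training dictionary and assigns the class whose atoms give the smallest reconstruction residual. The coupled projection learning of the paper is not reproduced; the dictionary and data are synthetic.

```python
# Sketch of the basic SRC decision rule on synthetic data.
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

rng = np.random.default_rng(3)

# Synthetic dictionary: 20 atoms per class, 2 classes, 50-dim features.
centers = rng.normal(size=(2, 50))
D = np.hstack([centers[c][:, None] + 0.3 * rng.normal(size=(50, 20))
               for c in range(2)])
D /= np.linalg.norm(D, axis=0)          # unit-norm atoms, as SRC assumes
labels = np.repeat([0, 1], 20)

x = centers[1] + 0.3 * rng.normal(size=50)   # test sample from class 1

# Sparse coding of x over the whole dictionary (OMP as a simple solver).
omp = OrthogonalMatchingPursuit(n_nonzero_coefs=5, fit_intercept=False).fit(D, x)
alpha = omp.coef_

# Class-wise reconstruction residuals: keep only class-c coefficients.
residuals = [np.linalg.norm(x - D @ np.where(labels == c, alpha, 0.0))
             for c in range(2)]
print("predicted class:", int(np.argmin(residuals)), np.round(residuals, 3))
```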

  4. An approach for classification of hydrogeological systems at the regional scale based on groundwater hydrographs

    Science.gov (United States)

    Haaf, Ezra; Barthel, Roland

    2016-04-01

    When assessing hydrogeological conditions at the regional scale, the analyst is often confronted with uncertainty of structures, inputs and processes while having to base inference on scarce and patchy data. Haaf and Barthel (2015) proposed a concept for handling this predicament by developing a groundwater systems classification framework, where information is transferred from similar, but well-explored and better understood systems to poorly described ones. The concept is based on the central hypothesis that similar systems react similarly to the same inputs, and vice versa. It is conceptually related to PUB (Prediction in Ungauged Basins), where organization of systems and processes by quantitative methods is intended and used to improve understanding and prediction. Furthermore, using the framework it is expected that regional conceptual and numerical models can be checked or enriched by ensemble-generated data from neighborhood-based estimators. In a first step, groundwater hydrographs from a large dataset in Southern Germany are compared in an effort to identify structural similarity in groundwater dynamics. A number of approaches to group hydrographs, mostly based on a similarity measure, can be found in the literature, though they have previously only been used in local-scale studies. These are tested alongside different global feature extraction techniques. The resulting classifications are then compared to a visual "expert assessment"-based classification, which serves as a reference. A ranking of the classification methods is carried out and differences are shown. Selected groups from the classifications are related to geological descriptors. Here we present the most promising results from a comparison of classifications based on series correlation, different series distances and series features, such as the coefficients of the discrete Fourier transform and the intrinsic mode functions of empirical mode decomposition. Additionally, we show examples of classes
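    One of the feature-based variants mentioned above can be sketched compactly: describe each hydrograph by the magnitudes of its leading discrete Fourier coefficients, then group the series with k-means. The series distances, EMD features, and the expert reference classification from the study are not reproduced, and the two synthetic dynamic regimes below are invented for illustration.

```python
# Illustrative DFT-feature grouping of synthetic groundwater hydrographs.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(4)
t = np.arange(365)

# Two regimes: fast (extra monthly component) vs. damped seasonal response.
fast = [np.sin(2*np.pi*t/365) + 0.5*np.sin(2*np.pi*t/30)
        + 0.1*rng.normal(size=365) for _ in range(10)]
slow = [np.sin(2*np.pi*t/365) + 0.1*rng.normal(size=365) for _ in range(10)]
series = np.array(fast + slow)

def dft_features(x, n_coef=8):
    x = (x - x.mean()) / x.std()               # remove level; compare dynamics
    return np.abs(np.fft.rfft(x))[1:n_coef+1]  # low-frequency magnitudes

X = np.array([dft_features(s) for s in series])
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print(labels)   # the two regimes should separate into the two clusters
```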

  5. Semiotics and agents for integrating and navigating through multimedia representations of concepts

    Science.gov (United States)

    Joyce, Dan W.; Lewis, Paul H.; Tansley, Robert H.; Dobie, Mark R.; Hall, Wendy

    1999-12-01

    The purpose of this paper is two-fold. We begin by exploring the emerging trend of viewing multimedia information in terms of low-level and high-level components; the former being feature-based and the latter the 'semantics' intrinsic to what is portrayed by the media object. Traditionally, this has been viewed by employing analogies with generative linguistics. Recently, a new perspective based on the semiotic tradition has been alluded to in several papers. We believe this to be a more appropriate approach. From this, we propose an approach for tackling the problem which uses an associative data structure expressing authored information together with intelligent agents acting autonomously over this structure. We then show how neural networks can be used to implement such agents. The agents act as 'vehicles' for bridging the gap between multimedia semantics and concrete expressions of high-level knowledge, but we suggest that traditional neural network techniques for classification are not architecturally adequate.

  6. Research on environmental impact of water-based fire extinguishing agents

    Science.gov (United States)

    Wang, Shuai

    2018-02-01

    This paper reviews the current status of application of water-based fire extinguishing agents and the environmental considerations that motivate the study of their toxicity. It also offers a systematic review of currently available test methods for the toxicity and environmental impact of water-based fire extinguishing agents, illustrates the main requirements and relevant test methods, and offers some findings for future research. The paper also discusses the limitations of current studies.

  7. QoS Negotiation and Renegotiation Based on Mobile Agents

    Institute of Scientific and Technical Information of China (English)

    ZHANG Shi-bing; ZHANG Deng-yin

    2006-01-01

    Quality of Service (QoS) has received more and more attention as it becomes increasingly important to Internet development. Mobile software agents represent a valid alternative for implementing negotiation strategies. In this paper, a QoS negotiation and renegotiation system architecture based on mobile agents is proposed. The agents perform the task throughout the whole process. Therefore, such a system can reduce the network load, overcome latency, and avoid frequent exchange of information between clients and server. The simulation results show that the proposed system could improve network resource utilization by about 10%.

  8. Emergent Macroeconomics An Agent-Based Approach to Business Fluctuations

    CERN Document Server

    Delli Gatti, Domenico; Gallegati, Mauro; Giulioni, Gianfranco; Palestrini, Antonio

    2008-01-01

    This book contributes substantively to the current state-of-the-art of macroeconomics by providing a method for building models in which business cycles and economic growth emerge from the interactions of a large number of heterogeneous agents. Drawing from recent advances in agent-based computational modeling, the authors show how insights from dispersed fields like the microeconomics of capital market imperfections, industrial dynamics and the theory of stochastic processes can be fruitfully combined to improve our understanding of macroeconomic dynamics. This book should be a valuable resource for all researchers interested in analyzing macroeconomic issues without resorting to a fictitious representative agent.

  9. Initial steps towards an evidence-based classification system for golfers with a physical impairment

    NARCIS (Netherlands)

    Stoter, Inge K.; Hettinga, Florentina J.; Altmann, Viola; Eisma, Wim; Arendzen, Hans; Bennett, Tony; van der Woude, Lucas H.; Dekker, Rienk

    2017-01-01

    Purpose: The present narrative review aims to make a first step towards an evidence-based classification system in handigolf following the International Paralympic Committee (IPC). It intends to create a conceptual framework of classification for handigolf and an agenda for future research. Method:

  10. Classification and global distribution of ocean precipitation types based on satellite passive microwave signatures

    Science.gov (United States)

    Gautam, Nitin

    The main objectives of this thesis are to develop a robust statistical method for the classification of ocean precipitation based on physical properties to which the SSM/I is sensitive and to examine how these properties vary globally and seasonally. A two-step approach is adopted for the classification of oceanic precipitation classes from multispectral SSM/I data: (1) we subjectively define precipitation classes using a priori information about the precipitating system and its possible distinct signature on SSM/I data, such as scattering by ice particles aloft in the precipitating cloud, emission by liquid rain water below the freezing level, the difference of polarization at 19 GHz (an indirect measure of optical depth), etc.; (2) we then develop an objective classification scheme which is found to reproduce the subjective classification with high accuracy. This hybrid strategy allows us to use the characteristics of the data to define and encode classes and helps retain the physical interpretation of classes. Classification methods based on k-nearest neighbor and neural network approaches are developed to objectively classify six precipitation classes. It is found that the neural-network-based classification method yields high accuracy for all precipitation classes. An inversion method based on a minimum variance approach was used to retrieve gross microphysical properties of these precipitation classes, such as column-integrated liquid water path, column-integrated ice water path, and column-integrated rain water path. This classification method is then applied to 2 years (1991-92) of SSM/I data to examine and document the seasonal and global distribution of precipitation frequency corresponding to each of these objectively defined six classes. The characteristics of the distribution are found to be consistent with the assumptions used in defining these six precipitation classes and also with well-known climatological patterns of precipitation regions. The seasonal and global

  11. Structuring Qualitative Data for Agent-Based Modelling

    NARCIS (Netherlands)

    Ghorbani, Amineh; Dijkema, Gerard P.J.; Schrauwen, Noortje

    2015-01-01

    Using ethnography to build agent-based models may result in more empirically grounded simulations. Our study on innovation practice and culture in the Westland horticulture sector served to explore what information and data from ethnographic analysis could be used in models and how. MAIA, a

  12. An agent-based architecture for multimodal interaction

    NARCIS (Netherlands)

    Jonker, C.M.; Treur, J.; Wijngaards, W.C.A.

    In this paper, an executable generic process model is proposed for combined verbal and non-verbal communication processes and their interaction. The agent-based architecture can be used to create multimodal interaction. The generic process model has been designed, implemented and used to simulate

  13. An agent-based architecture for multimodal interaction

    NARCIS (Netherlands)

    Jonker, C.M.; Treur, J.; Wijngaards, W.C.A.

    2001-01-01

    In this paper, an executable generic process model is proposed for combined verbal and non-verbal communication processes and their interaction. The agent-based architecture can be used to create multimodal interaction. The generic process model has been designed, implemented and used to simulate

  14. Laser-based instrumentation for the detection of chemical agents

    International Nuclear Information System (INIS)

    Hartford, A. Jr.; Sander, R.K.; Quigley, G.P.; Radziemski, L.J.; Cremers, D.A.

    1982-01-01

    Several laser-based techniques are being evaluated for the remote, point, and surface detection of chemical agents. Among the methods under investigation are optoacoustic spectroscopy, laser-induced breakdown spectroscopy (LIBS), and synchronous detection of laser-induced fluorescence (SDLIF). Optoacoustic detection has already been shown to be capable of extremely sensitive point detection. Its application to remote sensing of chemical agents is currently being evaluated. Atomic emission from the region of a laser-generated plasma has been used to identify the characteristic elements contained in nerve (P and F) and blister (S and Cl) agents. Employing this LIBS approach, detection of chemical agent simulants dispersed in air and adsorbed on a variety of surfaces has been achieved. Synchronous detection of laser-induced fluorescence provides an attractive alternative to conventional LIF, in that an artificial narrowing of the fluorescence emission is obtained. The application of this technique to chemical agent simulants has been successfully demonstrated. 19 figures

  15. Connectionist agent-based learning in bank-run decision making

    Science.gov (United States)

    Huang, Weihong; Huang, Qiao

    2018-05-01

    It is of utter importance for policy makers, bankers, and investors to thoroughly understand the probability of bank-run (PBR), which was often neglected in classical models. Bank runs are not merely due to miscoordination (Diamond and Dybvig, 1983) or deterioration of bank assets (Allen and Gale, 1998) but to various factors. This paper presents simulation results for the nonlinear dynamic probabilities of bank runs based on the global games approach, with the distinct assumption that heterogeneous agents hold highly correlated but unidentical beliefs about the true payoffs. The specific technique used in the simulation is to let agents have an integrated cognitive-affective network. It is observed that, even when the economy is good, agents are significantly affected by the cognitive-affective network in reacting to bad news, which might lead to a bank run. Rises in both the late payoff, R, and the early payoff, r, will decrease the effect of the affective process. Increased risk sharing might or might not increase the PBR, and an increase in the late payoff is beneficial for preventing bank runs. This paper is one of the pioneering works linking agent-based computational economics and behavioral economics.
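    A schematic Monte Carlo in the global-games spirit of the paper is sketched below: agents observe correlated noisy signals of the bank's fundamentals and withdraw early when their signal falls below a threshold, and the PBR is estimated as the fraction of simulated economies where withdrawals exceed the bank's liquid capacity. The cognitive-affective network of the paper is not modeled; all numbers are illustrative.

```python
# Hedged global-games-style estimate of the probability of bank-run (PBR).
import numpy as np

rng = np.random.default_rng(7)
n_agents, n_sims = 500, 2000
threshold, capacity = 0.4, 0.5          # assumed switching signal and liquidity

runs = 0
for _ in range(n_sims):
    theta = rng.uniform(0, 1)                    # economy's fundamentals
    common = rng.normal(0, 0.05)                 # shared news shock
    # Correlated but unidentical beliefs: common shock + private noise.
    signals = theta + common + rng.normal(0, 0.05, n_agents)
    if (signals < threshold).mean() > capacity:  # withdrawals exceed liquidity
        runs += 1

print("estimated PBR:", runs / n_sims)
```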

  16. Multi-material classification of dry recyclables from municipal solid waste based on thermal imaging.

    Science.gov (United States)

    Gundupalli, Sathish Paulraj; Hait, Subrata; Thakur, Atul

    2017-12-01

    There has been a significant rise in municipal solid waste (MSW) generation in the last few decades due to rapid urbanization and industrialization. Due to the lack of source segregation practice, a need for automated segregation of recyclables from MSW exists in developing countries. This paper reports a thermal imaging based system for classifying useful recyclables from a simulated MSW sample. Experimental results have demonstrated the possibility of using the thermal imaging technique for classification and a robotic system for sorting of recyclables in a single process step. The reported classification system yields an accuracy in the range of 85-96% and is comparable with existing single-material recyclable classification techniques. We believe that the reported thermal imaging based system can emerge as a viable and inexpensive large-scale classification-cum-sorting technology in recycling plants for processing MSW in developing countries. Copyright © 2017 Elsevier Ltd. All rights reserved.

  17. A kernel-based multi-feature image representation for histopathology image classification

    International Nuclear Information System (INIS)

    Moreno, J.; Caicedo, J.; Gonzalez, F.

    2010-01-01

    This paper presents a novel strategy for building a high-dimensional feature space to represent histopathology image contents. Histogram features, related to colors, textures and edges, are combined together in a unique image representation space using kernel functions. This feature space is further enhanced by the application of latent semantic analysis, to model hidden relationships among visual patterns. All that information is included in the new image representation space. Then, support vector machine classifiers are used to assign semantic labels to images. Processing and classification algorithms operate on top of kernel functions, so that the structure of the feature space is completely controlled using similarity measures and a dual representation. The proposed approach has shown successful performance in a classification task using a dataset with 1,502 real histopathology images in 18 different classes. The results show that our approach for histological image classification obtains an improved average performance of 20.6% when compared to a conventional classification approach based on SVM directly applied to the original kernel.
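    The multi-feature kernel idea can be sketched as follows: per-feature histogram kernels are combined into one kernel matrix that drives an SVM with a precomputed kernel. The histograms below are synthetic stand-ins for the color, texture, and edge features of the paper, the latent semantic analysis step is omitted, and the equal kernel weights are an assumption.

```python
# Hedged sketch: combining histogram kernels for a precomputed-kernel SVM.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(5)

def hist_intersection(A, B):
    # Histogram intersection kernel between row-histograms of A and B.
    return np.array([[np.minimum(a, b).sum() for b in B] for a in A])

n = 60
y = np.repeat([0, 1], n // 2)
# Synthetic per-class histograms (Dirichlet draws with class-specific mass).
color = np.vstack([rng.dirichlet(np.r_[np.ones(8)*5, np.ones(8)], size=n//2),
                   rng.dirichlet(np.r_[np.ones(8), np.ones(8)*5], size=n//2)])
texture = np.vstack([rng.dirichlet(np.ones(16)*2, size=n//2),
                     rng.dirichlet(np.r_[np.ones(4)*6, np.ones(12)], size=n//2)])

# Combine the two feature kernels (equal weights assumed for illustration).
K = 0.5 * hist_intersection(color, color) + 0.5 * hist_intersection(texture, texture)

train, test = np.arange(0, n, 2), np.arange(1, n, 2)
svm = SVC(kernel="precomputed").fit(K[np.ix_(train, train)], y[train])
print("accuracy:", svm.score(K[np.ix_(test, train)], y[test]))
```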

  18. A KERNEL-BASED MULTI-FEATURE IMAGE REPRESENTATION FOR HISTOPATHOLOGY IMAGE CLASSIFICATION

    Directory of Open Access Journals (Sweden)

    J Carlos Moreno

    2010-09-01

    Full Text Available This paper presents a novel strategy for building a high-dimensional feature space to represent histopathology image contents. Histogram features, related to colors, textures and edges, are combined together in a unique image representation space using kernel functions. This feature space is further enhanced by the application of Latent Semantic Analysis, to model hidden relationships among visual patterns. All that information is included in the new image representation space. Then, Support Vector Machine classifiers are used to assign semantic labels to images. Processing and classification algorithms operate on top of kernel functions, so that the structure of the feature space is completely controlled using similarity measures and a dual representation. The proposed approach has shown successful performance in a classification task using a dataset with 1,502 real histopathology images in 18 different classes. The results show that our approach for histological image classification obtains an improved average performance of 20.6% when compared to a conventional classification approach based on SVM directly applied to the original kernel.

  19. Non-target adjacent stimuli classification improves performance of classical ERP-based brain computer interface

    Science.gov (United States)

    Ceballos, G. A.; Hernández, L. F.

    2015-04-01

    Objective. The classical ERP-based speller, or P300 Speller, is one of the most commonly used paradigms in the field of Brain Computer Interfaces (BCI). Several alterations to the visual stimuli presentation system have been developed to avoid unfavorable effects elicited by adjacent stimuli. However, there has been little, if any, regard to the useful information contained in responses to adjacent stimuli about the spatial location of target symbols. This paper aims to demonstrate that combining the classification of non-target adjacent stimuli with standard classification (target versus non-target) significantly improves classical ERP-based speller efficiency. Approach. Four stepwise linear discriminant analysis (SWLDA) classifiers were trained and combined with the standard classifier: the lower row, upper row, right column and left column classifiers. This new feature extraction procedure and the classification method were carried out on three open databases: the UAM P300 database (Universidad Autonoma Metropolitana, Mexico), BCI competition II (dataset IIb) and BCI competition III (dataset II). Main results. The inclusion of the classification of non-target adjacent stimuli improves target classification in the classical row/column paradigm. A gain in mean single trial classification of 9.6% and an overall improvement of 25% in simulated spelling speed was achieved. Significance. We have provided further evidence that the ERPs produced by adjacent stimuli present discriminable features, which could provide additional information about the spatial location of intended symbols. This work promotes the search for information in peripheral stimulation responses to improve the performance of emerging visual ERP-based spellers.

  20. Complex between lignin and a Ti-based coupling agent

    DEFF Research Database (Denmark)

    Rasmussen, Jonas Stensgaard; Barsberg, Søren Talbro; Felby, Claus

    2014-01-01

    Coating formulations would have a better performance if the adhesion to wood could be improved. In the present work, the chemical interaction between a titanium-based coupling agent, isopropyl triisostearoyl titanate (titanium agent, TA), and lignin has been studied by means of attenuated total reflectance-Fourier transform infrared spectroscopy in combination with first-principles predictions based on density functional theory (DFT). In the infrared spectra, a new band at 1586 cm-1 was identified, and the DFT predictions confirmed that the new band is due to covalent bonds in the form of ether linkages…

  1. The Discriminative validity of "nociceptive," "peripheral neuropathic," and "central sensitization" as mechanisms-based classifications of musculoskeletal pain.

    LENUS (Irish Health Repository)

    Smart, Keith M

    2012-02-01

    OBJECTIVES: Empirical evidence of discriminative validity is required to justify the use of mechanisms-based classifications of musculoskeletal pain in clinical practice. The purpose of this study was to evaluate the discriminative validity of mechanisms-based classifications of pain by identifying discriminatory clusters of clinical criteria predictive of "nociceptive," "peripheral neuropathic," and "central sensitization" pain in patients with low back (+/- leg) pain disorders. METHODS: This study was a cross-sectional, between-patients design using the extreme-groups method. Four hundred sixty-four patients with low back (+/- leg) pain were assessed using a standardized assessment protocol. After each assessment, patients' pain was assigned a mechanisms-based classification. Clinicians then completed a clinical criteria checklist indicating the presence/absence of various clinical criteria. RESULTS: Multivariate analyses using binary logistic regression with Bayesian model averaging identified a discriminative cluster of 7, 3, and 4 symptoms and signs predictive of a dominance of "nociceptive," "peripheral neuropathic," and "central sensitization" pain, respectively. Each cluster was found to have high levels of classification accuracy (sensitivity, specificity, positive/negative predictive values, positive/negative likelihood ratios). DISCUSSION: By identifying a discriminatory cluster of symptoms and signs predictive of "nociceptive," "peripheral neuropathic," and "central" pain, this study provides some preliminary discriminative validity evidence for mechanisms-based classifications of musculoskeletal pain. Classification system validation requires the accumulation of validity evidence before their use in clinical practice can be recommended. Further studies are required to evaluate the construct and criterion validity of mechanisms-based classifications of musculoskeletal pain.

  2. From learning taxonomies to phylogenetic learning: Integration of 16S rRNA gene data into FAME-based bacterial classification

    Science.gov (United States)

    2010-01-01

    Background Machine learning techniques have been shown to improve bacterial species classification based on fatty acid methyl ester (FAME) data. Nonetheless, FAME analysis has a limited resolution for discrimination of bacteria at the species level. In this paper, we approach the species classification problem from a taxonomic point of view. Such a taxonomy or tree is typically obtained by applying clustering algorithms on FAME data or on 16S rRNA gene data. The knowledge gained from the tree can then be used to evaluate FAME-based classifiers, resulting in a novel framework for bacterial species classification. Results In view of learning in a taxonomic framework, we consider two types of trees. First, a FAME tree is constructed with a supervised divisive clustering algorithm. Subsequently, based on 16S rRNA gene sequence analysis, phylogenetic trees are inferred by the NJ and UPGMA methods. In this second approach, the species classification problem is based on the combination of two different types of data. Herein, 16S rRNA gene sequence data is used for phylogenetic tree inference and the corresponding binary tree splits are learned based on FAME data. We call this learning approach 'phylogenetic learning'. Supervised Random Forest models are developed to train the classification tasks in a stratified cross-validation setting. In this way, better classification results are obtained for species that are typically hard to distinguish by a single or flat multi-class classification model. Conclusions FAME-based bacterial species classification is successfully evaluated in a taxonomic framework. Although the proposed approach does not improve the overall accuracy compared to flat multi-class classification, it has some distinct advantages. First, it has better capabilities for distinguishing species on which flat multi-class classification fails. Secondly, the hierarchical classification structure allows us to easily evaluate and visualize the resolution of FAME data for
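    A conceptual sketch of this 'phylogenetic learning' idea follows: each binary split of a (here hand-specified) taxonomy gets its own classifier trained on FAME-like feature vectors, and a test sample is routed down the tree. The study used 16S rRNA-derived trees and stratified cross-validation; the tree, features, and species below are invented for illustration.

```python
# Per-split classifiers over a tiny hand-specified taxonomy (synthetic data).
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(6)
means = {"A": [0, 0], "B": [3, 0], "C": [3, 3]}   # synthetic FAME-like profiles
def sample(sp, n=50):
    return rng.normal(means[sp], 0.7, size=(n, 2))

X = {sp: sample(sp) for sp in "ABC"}

# Tree: root split {A} vs {B, C}; inner split {B} vs {C}.
root = RandomForestClassifier(random_state=0).fit(
    np.vstack([X["A"], X["B"], X["C"]]), ["A"]*50 + ["BC"]*100)
inner = RandomForestClassifier(random_state=0).fit(
    np.vstack([X["B"], X["C"]]), ["B"]*50 + ["C"]*50)

def classify(x):
    # Route the sample down the taxonomy, one trained split at a time.
    x = x.reshape(1, -1)
    if root.predict(x)[0] == "A":
        return "A"
    return inner.predict(x)[0]

print(classify(rng.normal(means["C"], 0.7, size=2)))   # expected: 'C'
```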

  3. From learning taxonomies to phylogenetic learning: Integration of 16S rRNA gene data into FAME-based bacterial classification

    Directory of Open Access Journals (Sweden)

    Dawyndt Peter

    2010-01-01

    Full Text Available Abstract Background Machine learning techniques have been shown to improve bacterial species classification based on fatty acid methyl ester (FAME) data. Nonetheless, FAME analysis has a limited resolution for discrimination of bacteria at the species level. In this paper, we approach the species classification problem from a taxonomic point of view. Such a taxonomy or tree is typically obtained by applying clustering algorithms on FAME data or on 16S rRNA gene data. The knowledge gained from the tree can then be used to evaluate FAME-based classifiers, resulting in a novel framework for bacterial species classification. Results In view of learning in a taxonomic framework, we consider two types of trees. First, a FAME tree is constructed with a supervised divisive clustering algorithm. Subsequently, based on 16S rRNA gene sequence analysis, phylogenetic trees are inferred by the NJ and UPGMA methods. In this second approach, the species classification problem is based on the combination of two different types of data. Herein, 16S rRNA gene sequence data is used for phylogenetic tree inference and the corresponding binary tree splits are learned based on FAME data. We call this learning approach 'phylogenetic learning'. Supervised Random Forest models are developed to train the classification tasks in a stratified cross-validation setting. In this way, better classification results are obtained for species that are typically hard to distinguish by a single or flat multi-class classification model. Conclusions FAME-based bacterial species classification is successfully evaluated in a taxonomic framework. Although the proposed approach does not improve the overall accuracy compared to flat multi-class classification, it has some distinct advantages. First, it has better capabilities for distinguishing species on which flat multi-class classification fails. Secondly, the hierarchical classification structure allows us to easily evaluate and visualize the

  4. From learning taxonomies to phylogenetic learning: integration of 16S rRNA gene data into FAME-based bacterial classification.

    Science.gov (United States)

    Slabbinck, Bram; Waegeman, Willem; Dawyndt, Peter; De Vos, Paul; De Baets, Bernard

    2010-01-30

    Machine learning techniques have been shown to improve bacterial species classification based on fatty acid methyl ester (FAME) data. Nonetheless, FAME analysis has a limited resolution for discrimination of bacteria at the species level. In this paper, we approach the species classification problem from a taxonomic point of view. Such a taxonomy or tree is typically obtained by applying clustering algorithms on FAME data or on 16S rRNA gene data. The knowledge gained from the tree can then be used to evaluate FAME-based classifiers, resulting in a novel framework for bacterial species classification. In view of learning in a taxonomic framework, we consider two types of trees. First, a FAME tree is constructed with a supervised divisive clustering algorithm. Subsequently, based on 16S rRNA gene sequence analysis, phylogenetic trees are inferred by the NJ and UPGMA methods. In this second approach, the species classification problem is based on the combination of two different types of data. Herein, 16S rRNA gene sequence data is used for phylogenetic tree inference and the corresponding binary tree splits are learned based on FAME data. We call this learning approach 'phylogenetic learning'. Supervised Random Forest models are developed to train the classification tasks in a stratified cross-validation setting. In this way, better classification results are obtained for species that are typically hard to distinguish by a single or flat multi-class classification model. FAME-based bacterial species classification is successfully evaluated in a taxonomic framework. Although the proposed approach does not improve the overall accuracy compared to flat multi-class classification, it has some distinct advantages. First, it has better capabilities for distinguishing species on which flat multi-class classification fails. Secondly, the hierarchical classification structure allows us to easily evaluate and visualize the resolution of FAME data for the discrimination of bacterial

  5. The paradox of atheoretical classification

    DEFF Research Database (Denmark)

    Hjørland, Birger

    2016-01-01

    A distinction can be made between “artificial classifications” and “natural classifications,” where artificial classifications may adequately serve some limited purposes, but natural classifications are overall most fruitful by allowing inference and thus many different purposes. There is strong support for the view that a natural classification should be based on a theory (and, of course, that the most fruitful theory provides the most fruitful classification). Nevertheless, atheoretical (or “descriptive”) classifications are often produced. Paradoxically, atheoretical classifications may be very successful. The best example of a successful “atheoretical” classification is probably the prestigious Diagnostic and Statistical Manual of Mental Disorders (DSM) since its third edition from 1980. Based on such successes one may ask: Should the claim that classifications ideally are natural

  6. A Novel Approach to Selecting Contractor in Agent-based Multi-sensor Battlefield Reconnaissance Simulation

    Directory of Open Access Journals (Sweden)

    Xiong Li

    2012-11-01

    Full Text Available This paper presents a novel approach showing how a contractor can be selected with high efficiency in the Contract Net Protocol (CNP) in agent-based simulation of complex warfare systems such as a multi-sensor battlefield reconnaissance system. We first analyze agents and the agent-based simulation framework, CNP and collaborators, and present an agent interaction chain used to actualize CNP and establish an agent trust network. We then obtain the contractor's importance weight and dynamic trust by presenting a fuzzy similarity-based algorithm and a trust-modifying algorithm, and propose a contractor selection approach based on maximum dynamic integrative trust. We validate the feasibility and capability of this approach by implementing the simulation, analyzing the compared results and checking the model.
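    A rough sketch of the selection rule described above: each candidate contractor's dynamic integrative trust is taken as its (fuzzy-similarity-derived) importance weight times its dynamic trust, and the manager awards the task to the maximum. The weights, trust values, and update rule below are illustrative stand-ins, not the paper's algorithms.

```python
# Hedged sketch of maximum-dynamic-integrative-trust contractor selection.
import numpy as np

weights = np.array([0.8, 0.6, 0.9])    # importance weights (assumed given)
trust = np.array([0.7, 0.9, 0.6])      # current dynamic trust values

def update_trust(trust, performed_ok, i, lr=0.2):
    # Illustrative trust-modifying rule: move trust toward 1 on success,
    # toward 0 on failure.
    target = 1.0 if performed_ok else 0.0
    trust[i] += lr * (target - trust[i])
    return trust

integrative = weights * trust          # dynamic integrative trust per candidate
winner = int(np.argmax(integrative))
print("award task to contractor", winner, integrative.round(3))
trust = update_trust(trust, performed_ok=True, i=winner)
```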

  7. Knowledge Management in Role Based Agents

    Science.gov (United States)

    Kır, Hüseyin; Ekinci, Erdem Eser; Dikenelli, Oguz

    In the multi-agent system literature, the role concept is increasingly researched to provide an abstraction to scope the beliefs, norms and goals of agents and to shape the relationships of agents in an organization. In this research, we propose a knowledgebase architecture to increase the applicability of roles in the MAS domain, drawing inspiration from the self concept in the role theory of sociology. The proposed knowledgebase architecture has a granulated structure that is dynamically organized according to the agent's identification in a social environment. Thanks to this dynamic structure, agents are enabled to work on consistent knowledge in spite of inevitable conflicts between roles and the agent. The knowledgebase architecture is also implemented and incorporated into the SEAGENT multi-agent system development framework.

  8. Data Provenance for Agent-Based Models in a Distributed Memory

    Directory of Open Access Journals (Sweden)

    Delmar B. Davis

    2018-04-01

    Full Text Available Agent-Based Models (ABMs) assist with studying emergent collective behavior of individual entities in social, biological, economic, network, and physical systems. Data provenance can support ABM by explaining individual agent behavior. However, there is no provenance support for ABMs in a distributed setting. The Multi-Agent Spatial Simulation (MASS) library provides a framework for simulating ABMs at fine granularity, where agents and spatial data are shared application resources in a distributed memory. We introduce a novel approach to capture ABM provenance in a distributed memory, called ProvMASS. We evaluate our technique with traditional data provenance queries and performance measures. Our results indicate that a configurable approach can capture provenance that explains coordination of distributed shared resources, simulation logic, and agent behavior while limiting performance overhead. We also show the ability to support practical analyses (e.g., agent tracking) and storage requirements for different capture configurations.

  9. A Transform-Based Feature Extraction Approach for Motor Imagery Tasks Classification

    Science.gov (United States)

    Khorshidtalab, Aida; Mesbah, Mostefa; Salami, Momoh J. E.

    2015-01-01

    In this paper, we present a new motor imagery classification method in the context of electroencephalography (EEG)-based brain–computer interface (BCI). This method uses a signal-dependent orthogonal transform, referred to as linear prediction singular value decomposition (LP-SVD), for feature extraction. The transform defines the mapping as the left singular vectors of the LP coefficient filter impulse response matrix. Using a logistic tree-based model classifier, the extracted features are classified into one of four motor imagery movements. The proposed approach was first benchmarked against two related state-of-the-art feature extraction approaches, namely, discrete cosine transform (DCT) and adaptive autoregressive (AAR)-based methods. By achieving an accuracy of 67.35%, the LP-SVD approach outperformed the other approaches by large margins (25% compared with DCT and 6% compared with AAR-based methods). To further improve the discriminatory capability of the extracted features and reduce the computational complexity, we enlarged the extracted feature subset by incorporating two extra features, namely, the Q- and Hotelling's T² statistics of the transformed EEG, and introduced a new EEG channel selection method. The performance of the EEG classification based on the expanded feature set and channel selection method was compared with that of a number of the state-of-the-art classification methods previously reported with the BCI IIIa competition data set. Our method came second with an average accuracy of 81.38%. PMID:27170898

  10. Interactive classification and content-based retrieval of tissue images

    Science.gov (United States)

    Aksoy, Selim; Marchisio, Giovanni B.; Tusk, Carsten; Koperski, Krzysztof

    2002-11-01

    We describe a system for interactive classification and retrieval of microscopic tissue images. Our system models tissues at the pixel, region and image levels. Pixel-level features are generated using unsupervised clustering of color and texture values. Region-level features include shape information and statistics of pixel-level feature values. Image-level features include statistics and spatial relationships of regions. To reduce the gap between low-level features and high-level expert knowledge, we define the concept of prototype regions. The system learns the prototype regions in an image collection using model-based clustering and density estimation. Different tissue types are modeled using spatial relationships of these regions. Spatial relationships are represented by fuzzy membership functions. The system automatically selects significant relationships from training data and builds models which can also be updated using user relevance feedback. A Bayesian framework is used to classify tissues based on these models. Preliminary experiments show that the spatial relationship models we developed provide a flexible and powerful framework for classification and retrieval of tissue images.

  11. Robust Pedestrian Classification Based on Hierarchical Kernel Sparse Representation

    Directory of Open Access Journals (Sweden)

    Rui Sun

    2016-08-01

    Full Text Available Vision-based pedestrian detection has become an active topic in computer vision and autonomous vehicles. It aims at detecting pedestrians appearing ahead of the vehicle using a camera so that autonomous vehicles can assess the danger and take action. Due to varied illumination and appearance, complex backgrounds and occlusion, pedestrian detection in outdoor environments is a difficult problem. In this paper, we propose a novel hierarchical feature extraction and weighted kernel sparse representation model for pedestrian classification. Initially, hierarchical feature extraction based on a CENTRIST descriptor is used to capture discriminative structures. A max pooling operation is used to enhance invariance to varying appearance. Then, a kernel sparse representation model is proposed to fully exploit the discriminative information embedded in the hierarchical local features, and a Gaussian weight function is used as the measure to effectively handle occlusion in pedestrian images. Extensive experiments are conducted on benchmark databases, including INRIA, Daimler, an artificially generated dataset and a real occluded dataset, demonstrating the more robust performance of the proposed method compared to state-of-the-art pedestrian classification methods.

  12. Style-based classification of Chinese ink and wash paintings

    Science.gov (United States)

    Sheng, Jiachuan; Jiang, Jianmin

    2013-09-01

    Following the fact that a large collection of ink and wash paintings (IWP) is being digitized and made available on the Internet, their automated content description, analysis, and management are attracting attention across research communities. While existing research in relevant areas is primarily focused on image processing approaches, a style-based algorithm is proposed to classify IWPs automatically by their authors. As IWPs do not have colors or even tones, the proposed algorithm applies edge detection to locate local regions and detect painting strokes to enable histogram-based feature extraction and capture of important cues that reflect the styles of different artists. Such features are then applied to drive a number of neural networks in parallel to complete the classification, and an information entropy balanced fusion is proposed to make an integrated decision from the multiple neural network classification results, in which the entropy is used as a pointer to combine the global and local features. Evaluations via experiments support that the proposed algorithm achieves good performance, providing excellent potential for computerized analysis and management of IWPs.

  13. Understanding Group/Party Affiliation Using Social Networks and Agent-Based Modeling

    Science.gov (United States)

    Campbell, Kenyth

    2012-01-01

    The dynamics of group affiliation and group dispersion is a concept that is most often studied so that political candidates can better understand the most efficient way to conduct their campaigns. While political campaigning in the United States is a very hot topic that most politicians analyze and study, the concept of group/party affiliation presents its own area of study that produces very interesting results. One tool for examining party affiliation on a large scale is agent-based modeling (ABM), a paradigm in the modeling and simulation (M&S) field perfectly suited for aggregating individual behaviors to observe large swaths of a population. For this study, agent-based modeling was used to look at a community of agents and determine what factors can affect the group/party affiliation patterns that are present. In the agent-based model used for this experiment, many factors were present, but two main factors were used to determine the results. The results of this study show that it is possible to use agent-based modeling to explore group/party affiliation and construct a model that can mimic real world events. More importantly, the model in the study allows the results found in a smaller community to be translated into larger experiments to determine if the results will remain present on a much larger scale.

  14. Modeling Oil Exploration and Production: Resource-Constrained and Agent-Based Approaches

    International Nuclear Information System (INIS)

    Jakobsson, Kristofer

    2010-05-01

    Energy is essential to the functioning of society, and oil is the single largest commercial energy source. Some analysts have concluded that the peak in oil production will soon occur on the global scale, while others disagree. Such incompatible views can persist because the issue of 'peak oil' cuts through the established scientific disciplines. The question is: what characterizes the modeling approaches that are available today, and how can they be further developed to improve a trans-disciplinary understanding of oil depletion? The objective of this thesis is to present long-term scenarios of oil production (Paper I) using a resource-constrained model, and an agent-based model of the oil exploration process (Paper II). It is also an objective to assess the strengths, limitations, and future development potentials of resource-constrained modeling, analytical economic modeling, and agent-based modeling. Resource-constrained models are only suitable when the time frame is measured in decades, but they can give a rough indication of which production scenarios are reasonable given the size of the resource. Moreover, the models are comprehensible, transparent and the only feasible long-term forecasting tools at present. It is certainly possible to distinguish between reasonable scenarios, based on historically observed parameter values, and unreasonable scenarios with parameter values obtained through flawed analogy. The economic subfield of optimal depletion theory is founded on the notion of rational economic agents, and there is a causal relation between decisions made at the micro-level and the macro-result. In terms of future improvements, however, the analytical form considerably restricts the versatility of the approach. Agent-based modeling makes it feasible to combine economically motivated agents with a physical environment. An example relating to oil exploration is given in Paper II, where it is shown that the exploratory activities of individual

  15. Multi Agent System Based Wide Area Protection against Cascading Events

    DEFF Research Database (Denmark)

    Liu, Zhou; Chen, Zhe; Liu, Leo

    2012-01-01

    In this paper, a multi-agent system based wide area protection scheme is proposed in order to prevent long-term voltage instability induced cascading events. The distributed relays and controllers work as device agents which not only execute their normal functions automatically but also can… the effectiveness of the proposed protection strategy. The simulation results indicate that the proposed multi-agent control system can effectively coordinate the distributed relays and controllers to prevent long-term voltage instability induced cascading events.

  16. An agent-based model of signal transduction in bacterial chemotaxis.

    Directory of Open Access Journals (Sweden)

    Jameson Miller

    2010-05-01

    Full Text Available We report the application of agent-based modeling to examine the signal transduction network and receptor arrays for chemotaxis in Escherichia coli, which are responsible for regulating swimming behavior in response to environmental stimuli. Agent-based modeling is a stochastic and bottom-up approach, where individual components of the modeled system are explicitly represented, and bulk properties emerge from their movement and interactions. We present the Chemoscape model: a collection of agents representing both fixed membrane-embedded and mobile cytoplasmic proteins, each governed by a set of rules representing knowledge or hypotheses about their function. When the agents were placed in a simulated cellular space and then allowed to move and interact stochastically, the model exhibited many properties similar to the biological system including adaptation, high signal gain, and wide dynamic range. We found the agent-based modeling approach to be both powerful and intuitive for testing hypotheses about biological properties such as self-assembly, the non-linear dynamics that occur through cooperative protein interactions, and non-uniform distributions of proteins in the cell. We applied the model to explore the role of receptor type, geometry and cooperativity in the signal gain and dynamic range of the chemotactic response to environmental stimuli. The model provided substantial qualitative evidence that the dynamic range of chemotactic response can be traced to both the heterogeneity of receptor types present, and the modulation of their cooperativity by their methylation state.
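    A generic sketch of the bottom-up approach described above follows: each agent is explicitly represented, moves stochastically on a lattice, and a bulk property (here, the fraction of agents bound into complexes) emerges from simple interaction rules. This is not the Chemoscape model itself; the lattice size, binding probability, and rules are illustrative assumptions.

```python
# Minimal agent-based model: random walkers that stochastically bind
# when co-located, loosely mimicking protein-protein interaction.
import numpy as np

rng = np.random.default_rng(8)
size, n_agents, steps = 50, 200, 500
pos = rng.integers(0, size, size=(n_agents, 2))
bound = np.zeros(n_agents, dtype=bool)

for _ in range(steps):
    movers = ~bound
    # Unbound agents take a random lattice step (toroidal boundaries).
    pos[movers] = (pos[movers] + rng.integers(-1, 2, size=(movers.sum(), 2))) % size
    # Rule: an agent binds with small probability when sharing a site
    # with another agent; bound agents stay fixed, like membrane complexes.
    _, inverse, counts = np.unique(pos, axis=0, return_inverse=True,
                                   return_counts=True)
    crowded = counts[inverse] > 1
    bound |= crowded & (rng.random(n_agents) < 0.05)

print("fraction of agents bound into complexes:", bound.mean())
```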

  17. An agent-based model of signal transduction in bacterial chemotaxis.

    Science.gov (United States)

    Miller, Jameson; Parker, Miles; Bourret, Robert B; Giddings, Morgan C

    2010-05-13

    We report the application of agent-based modeling to examine the signal transduction network and receptor arrays for chemotaxis in Escherichia coli, which are responsible for regulating swimming behavior in response to environmental stimuli. Agent-based modeling is a stochastic and bottom-up approach, where individual components of the modeled system are explicitly represented, and bulk properties emerge from their movement and interactions. We present the Chemoscape model: a collection of agents representing both fixed membrane-embedded and mobile cytoplasmic proteins, each governed by a set of rules representing knowledge or hypotheses about their function. When the agents were placed in a simulated cellular space and then allowed to move and interact stochastically, the model exhibited many properties similar to the biological system including adaptation, high signal gain, and wide dynamic range. We found the agent-based modeling approach to be both powerful and intuitive for testing hypotheses about biological properties such as self-assembly, the non-linear dynamics that occur through cooperative protein interactions, and non-uniform distributions of proteins in the cell. We applied the model to explore the role of receptor type, geometry and cooperativity in the signal gain and dynamic range of the chemotactic response to environmental stimuli. The model provided substantial qualitative evidence that the dynamic range of chemotactic response can be traced to both the heterogeneity of receptor types present, and the modulation of their cooperativity by their methylation state.

  18. Aesthetics-based classification of geological structures in outcrops for geotourism purposes: a tentative proposal

    Science.gov (United States)

    Mikhailenko, Anna V.; Nazarenko, Olesya V.; Ruban, Dmitry A.; Zayats, Pavel P.

    2017-03-01

    The current growth in geotourism requires an urgent development of classifications of geological features on the basis of criteria that are relevant to tourist perceptions. It appears that structure-related patterns are especially attractive for geotourists. Consideration of the main criteria by which tourists judge beauty, together with observations made in the geodiversity hotspot of the Western Caucasus, allows us to propose a tentative aesthetics-based classification of geological structures in outcrops, with two classes and four subclasses. It is possible to distinguish between regular and quasi-regular patterns (i.e., striped and lined, and contorted patterns) and irregular and complex patterns (paysage and sculptured patterns). Typical examples of each case are found both in the study area and on a global scale. The application of the proposed classification makes it possible to emphasise features of interest to a broad range of tourists. Aesthetics-based (i.e., non-geological) classifications are necessary to take into account the visions and attitudes of visitors.

  19. Resting State fMRI Functional Connectivity-Based Classification Using a Convolutional Neural Network Architecture.

    Science.gov (United States)

    Meszlényi, Regina J; Buza, Krisztian; Vidnyánszky, Zoltán

    2017-01-01

    Machine learning techniques have become increasingly popular in the field of resting state fMRI (functional magnetic resonance imaging) network based classification. However, the application of convolutional networks has been proposed only very recently and has remained largely unexplored. In this paper we describe a convolutional neural network architecture for functional connectome classification called the connectome-convolutional neural network (CCNN). Our results on simulated datasets and a publicly available dataset for amnestic mild cognitive impairment classification demonstrate that our CCNN model can efficiently distinguish between subject groups. We also show that the connectome-convolutional network is capable of combining information from diverse functional connectivity metrics and that models using a combination of different connectivity descriptors are able to outperform classifiers using only one metric. From this flexibility it follows that our proposed CCNN model can be easily adapted to a wide range of connectome based classification or regression tasks, by varying which connectivity descriptor combinations are used to train the network.
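    A minimal PyTorch sketch in the spirit of a connectome-convolutional network is given below: line-shaped convolution filters sweep the rows of an N x N functional connectivity matrix, so each filter aggregates one region's connections before a second convolution collapses the remaining dimension. The layer sizes, region count, and head are illustrative, not the paper's architecture; combining several connectivity metrics could be approximated by stacking them as input channels.

```python
# Hedged CCNN-style sketch: row-wise then column-wise convolutions over
# functional connectivity matrices, followed by a linear classifier head.
import torch
import torch.nn as nn

N = 90                                     # assumed number of brain regions

model = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=(1, N)),  # row-wise filters -> (B, 16, N, 1)
    nn.ReLU(),
    nn.Conv2d(16, 32, kernel_size=(N, 1)), # collapse rows -> (B, 32, 1, 1)
    nn.ReLU(),
    nn.Flatten(),
    nn.Linear(32, 2),                      # two subject groups
)

x = torch.randn(4, 1, N, N)                # batch of connectivity matrices
logits = model(x)
print(logits.shape)                        # torch.Size([4, 2])
```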

  20. Agent-based Personal Network (PN) service architecture

    DEFF Research Database (Denmark)

    Jiang, Bo; Olesen, Henning

    2004-01-01

    In this paper we propose a new concept for a centralized agent system as the solution for the PN service architecture, which aims to efficiently control and manage the PN resources and enable the PN based services to run seamlessly over different networks and devices. The working principle...

  1. Agent-Based Modeling of Taxi Behavior Simulation with Probe Vehicle Data

    Directory of Open Access Journals (Sweden)

    Saurav Ranjit

    2018-05-01

    Full Text Available Taxi behavior is a spatial–temporal dynamic process involving discrete time-dependent events, such as customer pick-up, customer drop-off, cruising, and parking. Simulation models, which are a simplification of a real-world system, can help understand the effects of changes in such dynamic behavior. In this paper, agent-based modeling and simulation is proposed that describes the dynamic actions of an agent, i.e., a taxi, governed by behavior rules and properties that emulate taxi behavior. Taxi behavior simulations are fundamentally done to optimize the service level for both taxi drivers and passengers. Moreover, such simulation techniques can also be applied to other fields where obtaining real raw data is difficult due to privacy issues, such as human mobility data or call detail record data. This paper describes the development of an agent-based simulation model based on multiple input parameters (taxi stay-point clusters; trip information (origin and destination); taxi demand information; free taxi movement; and network travel time) that were derived from taxi probe GPS data. The agents' parameters were mapped onto a grid network and the road network: the grid network was used as the base for querying, searching, and retrieving taxi agents' parameters, while the actual movement of the taxi agents took place on the road network with routing and interpolation. The results obtained from the simulated taxi agent data and real taxi data showed a significant level of similarity in taxi behaviors such as trip generation, trip time, trip distance, and trip occupancy, based on their distributions. For efficient data handling, a distributed computing platform for large-scale data was used to extract the taxi agent parameters from the probe data, utilizing both spatial and non-spatial indexing techniques.
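    A toy version of such a taxi agent loop, heavily simplified and hypothetical: a synthetic demand surface stands in for the probe-derived demand, and a grid stands in for the routed road network:

```python
import numpy as np

rng = np.random.default_rng(0)
N_TAXIS, STEPS, GRID = 50, 200, 20

# Pick-up probability per grid cell (derived from probe GPS data in the
# study; synthetic here).
demand = rng.random((GRID, GRID)) * 0.3

pos = rng.integers(0, GRID, size=(N_TAXIS, 2))
occupied = np.zeros(N_TAXIS, dtype=bool)
dropoff = np.zeros((N_TAXIS, 2), dtype=int)
trips = 0

for _ in range(STEPS):
    for i in range(N_TAXIS):
        if occupied[i]:
            # Move one cell toward the drop-off point (routed and
            # interpolated on a road network in the real model).
            pos[i] += np.sign(dropoff[i] - pos[i])
            if np.array_equal(pos[i], dropoff[i]):
                occupied[i] = False
                trips += 1
        else:
            # Cruise randomly; pick up with the local demand probability.
            pos[i] = np.clip(pos[i] + rng.integers(-1, 2, 2), 0, GRID - 1)
            if rng.random() < demand[tuple(pos[i])]:
                occupied[i] = True
                dropoff[i] = rng.integers(0, GRID, 2)

print("completed trips:", trips)
```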

  2. Classification of schizophrenia patients based on resting-state functional network connectivity

    Directory of Open Access Journals (Sweden)

    Mohammad Reza Arbabshirani

    2013-07-01

    Full Text Available There is a growing interest in automatic classification of mental disorders based on neuroimaging data. Small training data sets (subjects) and very large amounts of high-dimensional data make it a challenging task to design robust and accurate classifiers for heterogeneous disorders such as schizophrenia. Most previous studies considered structural MRI, diffusion tensor imaging and task-based fMRI for this purpose. However, resting-state data has been rarely used in discrimination of schizophrenia patients from healthy controls. Resting-state data are of great interest, since they are relatively easy to collect and are not confounded by behavioral performance on a task. Several linear and non-linear classification methods were trained using a training dataset and evaluated with a separate testing dataset. Results show that classification with high accuracy is achievable using simple non-linear discriminative methods such as k-nearest neighbors, which is very promising. We compare and report detailed results of each classifier as well as statistical analysis and evaluation of each single feature. To our knowledge, our results represent the first use of resting-state functional network connectivity features to classify schizophrenia.
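    A minimal sketch of the kind of classifier highlighted above, assuming scikit-learn, with random numbers as a stand-in for FNC features (a real study would derive one feature per pair of functional networks from fMRI):

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
# Stand-in for resting-state FNC features: one row per subject, one column
# per network pair.
X = rng.normal(size=(120, 45))
y = rng.integers(0, 2, size=120)  # 0 = healthy control, 1 = patient

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = KNeighborsClassifier(n_neighbors=5).fit(X_tr, y_tr)
print("held-out accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```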

  3. Hand eczema classification

    DEFF Research Database (Denmark)

    Diepgen, T L; Andersen, Klaus Ejner; Brandao, F M

    2008-01-01

    of the disease is rarely evidence based, and a classification system for different subdiagnoses of hand eczema is not agreed upon. Randomized controlled trials investigating the treatment of hand eczema are called for. For this, as well as for clinical purposes, a generally accepted classification system...... A classification system for hand eczema is proposed. Conclusions It is suggested that this classification be used in clinical work and in clinical trials....

  4. Agent-Based Decision Control—How to Appreciate Multivariate Optimisation in Architecture

    DEFF Research Database (Denmark)

    Negendahl, Kristoffer; Perkov, Thomas Holmer; Kolarik, Jakub

    2015-01-01

    , the method is applied to a multivariate optimisation problem. The aim is specifically to demonstrate optimisation for entire building energy consumption, daylight distribution and capital cost. Based on the demonstrations Moth’s ability to find local minima is discussed. It is concluded that agent-based...... in the early design stage. The main focus is to demonstrate the optimisation method, which is done in two ways. Firstly, the newly developed agent-based optimisation algorithm named Moth is tested on three different single objective search spaces. Here Moth is compared to two evolutionary algorithms. Secondly...... optimisation algorithms like Moth open up for new uses of optimisation in the early design stage. With Moth the final outcome is less dependent on pre- and post-processing, and Moth allows user intervention during optimisation. Therefore, agent-based models for optimisation such as Moth can be a powerful...

  5. Effective Sequential Classifier Training for SVM-Based Multitemporal Remote Sensing Image Classification

    Science.gov (United States)

    Guo, Yiqing; Jia, Xiuping; Paull, David

    2018-06-01

    The explosive availability of remote sensing images has challenged supervised classification algorithms such as Support Vector Machines (SVM), as training samples tend to be highly limited due to the expensive and laborious task of ground truthing. The temporal correlation and spectral similarity between multitemporal images have opened up an opportunity to alleviate this problem. In this study, an SVM-based Sequential Classifier Training (SCT-SVM) approach is proposed for multitemporal remote sensing image classification. The approach leverages the classifiers of previous images to reduce the required number of training samples for the classifier training of an incoming image. For each incoming image, a rough classifier is first predicted based on the temporal trend of a set of previous classifiers. The predicted classifier is then fine-tuned into a more accurate position with current training samples. This approach can be applied progressively to sequential image data, with only a small number of training samples being required from each image. Experiments were conducted with Sentinel-2A multitemporal data over an agricultural area in Australia. Results showed that the proposed SCT-SVM achieved better classification accuracies compared with two state-of-the-art model transfer algorithms. When training data are insufficient, the overall classification accuracy of the incoming image was improved from 76.18% to 94.02% with the proposed SCT-SVM, compared with that obtained without the assistance of previous images. These results demonstrate that leveraging a priori information from previous images can provide advantageous assistance for later images in multitemporal image classification.
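    A hedged sketch of the sequential-training idea, assuming scikit-learn and using hinge-loss SGD as the linear SVM; the linear extrapolation of the weights and the warm start are simplifications of the paper's method, not its exact procedure:

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

def fit_with_prediction(images, labels, incoming_X, incoming_y):
    """images/labels: per-date training sets (at least two previous dates).

    A linear SVM is fitted per previous date, the temporal trend of its
    weights is extrapolated one step ahead (the "rough classifier"), and
    the predicted weights warm-start the incoming image's classifier.
    """
    coefs, intercepts = [], []
    for X, y in zip(images, labels):
        clf = SGDClassifier(loss="hinge", random_state=0).fit(X, y)
        coefs.append(clf.coef_.copy())
        intercepts.append(clf.intercept_.copy())

    # Linear extrapolation of each weight over time.
    t = np.arange(len(coefs))
    stacked = np.stack(coefs).reshape(len(t), -1)   # (time, n_features)
    slope = np.polyfit(t, stacked, deg=1)[0]
    predicted = stacked[-1] + slope                 # one step ahead

    # Fine-tune from the predicted position with the few current samples.
    clf = SGDClassifier(loss="hinge", random_state=0)
    clf.fit(incoming_X, incoming_y,
            coef_init=predicted.reshape(1, -1),
            intercept_init=intercepts[-1])
    return clf
```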

  6. Comparison of some classification algorithms based on deterministic and nondeterministic decision rules

    KAUST Repository

    Delimata, Paweł

    2010-01-01

    We discuss two, in a sense extreme, kinds of nondeterministic rules in decision tables. The first kind of rules, called inhibitory rules, block only one decision value (i.e., they have all but one of the possible decisions on their right-hand sides). Contrary to this, any rule of the second kind, called a bounded nondeterministic rule, can have only a few decisions on its right-hand side. We show that both kinds of rules can be used for improving the quality of classification. In the paper, two lazy classification algorithms of polynomial time complexity are considered. These algorithms are based on deterministic and inhibitory decision rules, but the direct generation of rules is not required. Instead, for any new object the considered algorithms efficiently extract from a given decision table some information about the set of rules. Next, this information is used by a decision-making procedure. The reported results of experiments show that the algorithms based on inhibitory decision rules are often better than those based on deterministic decision rules. We also present an application of bounded nondeterministic rules in the construction of rule-based classifiers. We include the results of experiments showing that by combining rule-based classifiers based on minimal decision rules with bounded nondeterministic rules having confidence close to 1 and sufficiently large support, it is possible to improve the classification quality. © 2010 Springer-Verlag.

  7. Agent Based Knowledge Management Solution using Ontology, Semantic Web Services and GIS

    Directory of Open Access Journals (Sweden)

    Andreea DIOSTEANU

    2009-01-01

    Full Text Available The purpose of our research is to develop an agent-based knowledge management application framework using a specific type of ontology that is able to facilitate semantic web service search and automatic composition. This solution can later be used to develop complex solutions for location-based services, supply chain management, etc. This application for modeling knowledge highlights the importance of agent interaction that leads to efficient enterprise interoperability. Furthermore, it proposes an "agent communication language" ontology that extends the OWL Lite standard approach and makes it more flexible in retrieving proper data for identifying the agents that can best communicate and negotiate.

  8. Classification of EEG signals using a genetic-based machine learning classifier.

    Science.gov (United States)

    Skinner, B T; Nguyen, H T; Liu, D K

    2007-01-01

    This paper investigates the efficacy of the genetic-based learning classifier system XCS for the classification of noisy, artefact-inclusive human electroencephalogram (EEG) signals represented using large condition strings (108 bits). EEG signals from three participants were recorded while they performed four mental tasks designed to elicit hemispheric responses. Autoregressive (AR) models and Fast Fourier Transform (FFT) methods were used to form feature vectors with which mental tasks can be discriminated. XCS achieved a maximum classification accuracy of 99.3% and a best average of 88.9%. The relative classification performance of XCS was then compared against four non-evolutionary classifier systems originating from different learning techniques. The experimental results will be used as part of our larger research effort investigating the feasibility of using EEG signals as an interface to allow paralysed persons to control a powered wheelchair or other devices.

  9. Multiscale agent-based cancer modeling.

    Science.gov (United States)

    Zhang, Le; Wang, Zhihui; Sagotsky, Jonathan A; Deisboeck, Thomas S

    2009-04-01

    Agent-based modeling (ABM) is an in silico technique that is being used in a variety of research areas such as in social sciences, economics and increasingly in biomedicine as an interdisciplinary tool to study the dynamics of complex systems. Here, we describe its applicability to integrative tumor biology research by introducing a multi-scale tumor modeling platform that understands brain cancer as a complex dynamic biosystem. We summarize significant findings of this work, and discuss both challenges and future directions for ABM in the field of cancer research.

  10. Monitoring of Oil Exploitation Infrastructure by Combining Unsupervised Pixel-Based Classification of Polarimetric SAR and Object-Based Image Analysis

    Directory of Open Access Journals (Sweden)

    Simon Plank

    2014-12-01

    Full Text Available In developing countries, there is a high correlation between dependence on oil exports and violent conflicts. Furthermore, even in countries which experienced a peaceful development of their oil industry, land use and environmental issues occur. Therefore, independent monitoring of oil field infrastructure may support problem solving. Earth observation data enables fast monitoring of large areas, which allows comparing the real amount of land used by the oil exploitation with the companies’ contractual obligations. The target feature of this monitoring is the infrastructure of the oil exploitation: oil well pads—rectangular features of bare land covering an area of approximately 50–60 m × 100 m. This article presents an automated feature extraction procedure based on the combination of a pixel-based unsupervised classification of polarimetric synthetic aperture radar data (PolSAR) and an object-based post-classification. The method is developed and tested using dual-polarimetric TerraSAR-X imagery acquired over the Doba basin in southern Chad. The advantages of PolSAR are independence from cloud coverage (vs. optical imagery) and the possibility of detailed land use classification (vs. single-pol SAR). The PolSAR classification uses the polarimetric Wishart probability density function based on the anisotropy/entropy/alpha decomposition. The object-based post-classification refinement, based on properties of the target features such as shape and area, increases the user’s accuracy of the methodology by an order of magnitude. The final achieved user’s and producer’s accuracies are 59%–71% in each case (area-based accuracy assessment). Considering only the numbers of correctly/falsely detected oil well pads, the user’s and producer’s accuracies increase to 74%–89%. In an iterative training procedure, the best-suited polarimetric speckle filter and processing parameters of the developed feature extraction procedure are determined.
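    The object-based refinement step can be sketched as follows, assuming SciPy; the pixel size and the area/side thresholds are illustrative assumptions, not the calibrated values from the paper:

```python
import numpy as np
from scipy import ndimage

def filter_well_pads(class_map, pad_class=3, pixel_m=2.5,
                     area_m2=(4000, 8000), side_m=(40, 120)):
    """Keep only connected objects of `pad_class` whose area and bounding-box
    sides fall in the expected range for an oil well pad."""
    labels, _ = ndimage.label(class_map == pad_class)
    keep = np.zeros(class_map.shape, dtype=bool)
    for i, sl in enumerate(ndimage.find_objects(labels), start=1):
        obj = labels[sl] == i
        area = obj.sum() * pixel_m ** 2
        h = (sl[0].stop - sl[0].start) * pixel_m
        w = (sl[1].stop - sl[1].start) * pixel_m
        if area_m2[0] <= area <= area_m2[1] and \
           side_m[0] <= min(h, w) and max(h, w) <= side_m[1]:
            keep[sl] |= obj
    return keep

# Example on a random classified map (in practice, the Wishart result).
rng = np.random.default_rng(0)
pads = filter_well_pads(rng.integers(0, 5, size=(500, 500)))
```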

  11. Engineering large-scale agent-based systems with consensus

    Science.gov (United States)

    Bokma, A.; Slade, A.; Kerridge, S.; Johnson, K.

    1994-01-01

    The paper presents the consensus method for the development of large-scale agent-based systems. Systems can be developed as networks of knowledge based agents (KBA) which engage in a collaborative problem solving effort. The method provides a comprehensive and integrated approach to the development of this type of system. This includes a systematic analysis of user requirements as well as a structured approach to generating a system design which exhibits the desired functionality. There is a direct correspondence between system requirements and design components. The benefits of this approach are that requirements are traceable into design components and code thus facilitating verification. The use of the consensus method with two major test applications showed it to be successful and also provided valuable insight into problems typically associated with the development of large systems.

  12. Summary of the NICHD-BPCA Pediatric Formulation Initiatives Workshop-Pediatric Biopharmaceutics Classification System (PBCS) Working Group

    Science.gov (United States)

    Abdel-Rahman, Susan; Amidon, Gordon L.; Kaul, Ajay; Lukacova, Viera; Vinks, Alexander A.; Knipp, Gregory

    2012-01-01

    2) an incomplete understanding of age-based changes in the GI, liver and kidney physiology; 3) a clear need to better understand age-based intestinal permeability and the fraction absorbed required to develop the PBCS; 4) a clear need for the development and organization of pediatric tissue biobanks to serve as a source for ontogenic research; and 5) a lack of published literature on age-based pediatric pharmacokinetics needed to build Physiologically- and Population-Based Pharmacokinetic (PBPK) databases. Conclusions: To begin the process of establishing a PBPK model, ten pediatric therapeutic agents were selected (based on their adult BCS classifications). Those agents should be targeted for additional research in the future. The PBCS working group also identified several areas where a greater emphasis on research is needed to enable the development of a PBCS. PMID:23149009

  13. iCrowd: agent-based behavior modeling and crowd simulator

    Science.gov (United States)

    Kountouriotis, Vassilios I.; Paterakis, Manolis; Thomopoulos, Stelios C. A.

    2016-05-01

    Initially designed in the context of the TASS (Total Airport Security System) FP-7 project, the Crowd Simulation platform developed by the Integrated Systems Lab of the Institute of Informatics and Telecommunications at N.C.S.R. Demokritos has evolved into a complete domain-independent agent-based behavior simulator with an emphasis on crowd behavior and building evacuation simulation. Under continuous development, it reflects an effort to implement a modern, multithreaded, data-oriented simulation engine employing the latest state-of-the-art programming technologies and paradigms. It is based on an extensible architecture that separates core services from the individual layers of agent behavior, offering a concrete simulation kernel designed for high performance and stability. Its primary goal is to deliver an abstract platform to facilitate the implementation of several agent-based simulation solutions with applicability in several domains of knowledge, such as: (i) Crowd behavior simulation during [in/out]door evacuation. (ii) Non-Player Character AI for Game-oriented applications and Gamification activities. (iii) Vessel traffic modeling and simulation for Maritime Security and Surveillance applications. (iv) Urban and Highway Traffic and Transportation Simulations. (v) Social Behavior Simulation and Modeling.

  14. Standard classification: Physics

    International Nuclear Information System (INIS)

    1977-01-01

    This is a draft standard classification of physics. The conception is based on the physics part of the systematic catalogue of the Bayerische Staatsbibliothek and on the classification given in standard textbooks. The ICSU-AB classification now used worldwide by physics information services was not taken into account. (BJ) [de

  15. Agent-based modeling as a tool for program design and evaluation.

    Science.gov (United States)

    Lawlor, Jennifer A; McGirr, Sara

    2017-12-01

    Recently, systems thinking and systems science approaches have gained popularity in the field of evaluation; however, there has been relatively little exploration of how evaluators could use quantitative tools to assist in the implementation of systems approaches therein. The purpose of this paper is to explore potential uses of one such quantitative tool, agent-based modeling, in evaluation practice. To this end, we define agent-based modeling and offer potential uses for it in typical evaluation activities, including: engaging stakeholders, selecting an intervention, modeling program theory, setting performance targets, and interpreting evaluation results. We provide demonstrative examples from published agent-based modeling efforts both inside and outside the field of evaluation for each of the evaluative activities discussed. We further describe potential pitfalls of this tool and offer cautions for evaluators who may choose to implement it in their practice. Finally, the article concludes with a discussion of the future of agent-based modeling in evaluation practice and a call for more formal exploration of this tool as well as other approaches to simulation modeling in the field. Copyright © 2017 Elsevier Ltd. All rights reserved.

  16. Sensitivity Analysis of an Agent-Based Model of Culture's Consequences for Trade

    NARCIS (Netherlands)

    Burgers, S.L.G.E.; Jonker, C.M.; Hofstede, G.J.; Verwaart, D.

    2010-01-01

    This paper describes the analysis of an agent-based model’s sensitivity to changes in parameters that describe the agents’ cultural background, relational parameters, and parameters of the decision functions. As agent-based models may be very sensitive to small changes in parameter values, it is of

  17. A fingerprint classification algorithm based on combination of local and global information

    Science.gov (United States)

    Liu, Chongjin; Fu, Xiang; Bian, Junjie; Feng, Jufu

    2011-12-01

    Fingerprint recognition is one of the most important technologies in biometric identification and has been widely applied in commercial and forensic areas. Fingerprint classification, as the fundamental procedure in fingerprint recognition, can sharply decrease the number of candidates for fingerprint matching and improve the efficiency of fingerprint recognition. Most fingerprint classification algorithms are based on the number and position of singular points. Because singular-point detection methods commonly consider only local information, these classification algorithms are sensitive to noise. In this paper, we propose a novel fingerprint classification algorithm combining the local and global information of a fingerprint. First, we use local information to detect singular points and measure their quality, considering orientation structure and image texture in adjacent areas. Furthermore, a global orientation model is adopted to measure the reliability of the singular point group. Finally, the local quality and global reliability are weighted to classify the fingerprint. Experiments demonstrate the accuracy and effectiveness of our algorithm, especially for poor-quality fingerprint images.

  18. SoFoCles: feature filtering for microarray classification based on gene ontology.

    Science.gov (United States)

    Papachristoudis, Georgios; Diplaris, Sotiris; Mitkas, Pericles A

    2010-02-01

    Marker gene selection has been an important research topic in the classification analysis of gene expression data. Current methods try to reduce the "curse of dimensionality" by using statistical intra-feature set calculations, or classifiers that are based on the given dataset. In this paper, we present SoFoCles, an interactive tool that enables semantic feature filtering in microarray classification problems with the use of external, well-defined knowledge retrieved from the Gene Ontology. The notion of semantic similarity is used to derive genes that are involved in the same biological path during the microarray experiment, by enriching a feature set that has been initially produced with legacy methods. Among its other functionalities, SoFoCles offers a large repository of semantic similarity methods that are used in order to derive feature sets and marker genes. The structure and functionality of the tool are discussed in detail, as well as its ability to improve classification accuracy. Through experimental evaluation, SoFoCles is shown to outperform other classification schemes in terms of classification accuracy in two real datasets using different semantic similarity computation approaches.

  19. Hydrologic classification of rivers based on cluster analysis of dimensionless hydrologic signatures: Applications for environmental instream flows

    Science.gov (United States)

    Praskievicz, S. J.; Luo, C.

    2017-12-01

    Classification of rivers is useful for a variety of purposes, such as generating and testing hypotheses about watershed controls on hydrology, predicting hydrologic variables for ungaged rivers, and setting goals for river management. In this research, we present a bottom-up (based on machine learning) river classification designed to investigate the underlying physical processes governing rivers' hydrologic regimes. The classification was developed for the entire state of Alabama, based on 248 United States Geological Survey (USGS) stream gages that met criteria for length and completeness of records. Five dimensionless hydrologic signatures were derived for each gage: slope of the flow duration curve (indicator of flow variability), baseflow index (ratio of baseflow to average streamflow), rising limb density (number of rising limbs per unit time), runoff ratio (ratio of long-term average streamflow to long-term average precipitation), and streamflow elasticity (sensitivity of streamflow to precipitation). We used a Bayesian clustering algorithm to classify the gages, based on the five hydrologic signatures, into distinct hydrologic regimes. We then used classification and regression trees (CART) to predict each gaged river's membership in different hydrologic regimes based on climatic and watershed variables. Using existing geospatial data, we applied the CART analysis to classify ungaged streams in Alabama, with the National Hydrography Dataset Plus (NHDPlus) catchment (average area 3 km2) as the unit of classification. The results of the classification can be used for meeting management and conservation objectives in Alabama, such as developing statewide standards for environmental instream flows. Such hydrologic classification approaches are promising for contributing to process-based understanding of river systems.
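    A rough sketch of the signatures-clustering-CART pipeline, assuming scikit-learn; a Gaussian mixture stands in for the Bayesian clustering algorithm, the baseflow separation is a crude rolling minimum rather than a standard hydrograph-separation method, and all data are synthetic:

```python
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.tree import DecisionTreeClassifier

def signatures(q, p):
    """Three of the five dimensionless signatures, from daily streamflow q
    and precipitation p."""
    q_sorted = np.sort(q)[::-1]
    i33, i66 = int(0.33 * len(q)), int(0.66 * len(q))
    fdc_slope = (np.log(q_sorted[i33]) - np.log(q_sorted[i66])) / 0.33
    baseflow = np.minimum.reduce([np.roll(q, k) for k in range(7)])
    return [fdc_slope, baseflow.mean() / q.mean(), q.mean() / p.mean()]

rng = np.random.default_rng(1)
sig = np.array([signatures(rng.gamma(2.0, 3.0, 3650) + 0.1,
                           rng.gamma(2.0, 5.0, 3650) + 0.1)
                for _ in range(248)])                 # one row per gage

# Cluster gages into distinct hydrologic regimes.
regime = GaussianMixture(n_components=4, random_state=0).fit_predict(sig)

# CART: predict regime membership from climate/watershed descriptors,
# then apply the tree to ungaged catchments (columns are placeholders).
watershed_vars = rng.normal(size=(248, 6))
tree = DecisionTreeClassifier(max_depth=4).fit(watershed_vars, regime)
```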

  20. Feature selection gait-based gender classification under different circumstances

    Science.gov (United States)

    Sabir, Azhin; Al-Jawad, Naseer; Jassim, Sabah

    2014-05-01

    This paper proposes gender classification based on human gait features and investigates the problem of two variations, clothing (wearing coats) and carrying a bag, in addition to the normal gait sequence. The feature vectors in the proposed system are constructed after applying the wavelet transform. Three different sets of features are proposed in this method. The first is spatio-temporal distance, which deals with the distances between different parts of the human body (such as the feet, knees, hands, height, and shoulders) during one gait cycle. The second and third feature sets are constructed from the approximation and non-approximation coefficients of the human body, respectively. To extract these two sets of features we divided the human body into two parts, the upper and lower body, based on the golden ratio proportion. In this paper, we have adopted a statistical method for constructing the feature vector from the above sets. The dimension of the constructed feature vector is reduced based on the Fisher score as a feature selection method to optimize its discriminating significance. Finally, k-nearest neighbor is applied as the classification method. Experimental results demonstrate that our approach provides a more realistic scenario and relatively better performance compared with existing approaches.
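    The Fisher-score selection step followed by k-nearest neighbor can be sketched as below; the stand-in data are synthetic and the number of retained features is an illustrative assumption:

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def fisher_score(X, y):
    """Fisher score per feature: between-class scatter of the class means
    over the summed within-class variance."""
    mean_all = X.mean(axis=0)
    num = np.zeros(X.shape[1])
    den = np.zeros(X.shape[1])
    for c in np.unique(y):
        Xc = X[y == c]
        num += len(Xc) * (Xc.mean(axis=0) - mean_all) ** 2
        den += len(Xc) * Xc.var(axis=0)
    return num / (den + 1e-12)

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 60))        # stand-in for wavelet gait features
y = rng.integers(0, 2, size=200)      # 0 = female, 1 = male (synthetic)

top = np.argsort(fisher_score(X, y))[::-1][:15]   # keep the 15 best features
clf = KNeighborsClassifier(n_neighbors=3).fit(X[:, top], y)
```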

  1. Organization of the secure distributed computing based on multi-agent system

    Science.gov (United States)

    Khovanskov, Sergey; Rumyantsev, Konstantin; Khovanskova, Vera

    2018-04-01

    Nowadays, developing methods for distributed computing receives much attention. One such method is the use of multi-agent systems. Distributed computing based on conventional networked computers can experience security threats posed by computational processes. The authors have developed a unified agent algorithm for a control system governing the operation of computing network nodes, with network PCs used as computing nodes. The proposed multi-agent control system for distributed computing makes it possible to quickly harness the processing power of the computers of any existing network to solve large tasks. Agents based on a computer network can: configure a distributed computing system; distribute the computational load among the computers they operate; and optimize the distributed computing system according to the computing power of the computers on the network. The number of computers connected to the network can be increased by connecting computers to the new computer system, which leads to an increase in overall processing power. Adding a central agent to the multi-agent system increases the security of distributed computing. This organization of the distributed computing system reduces the problem-solving time and increases the fault tolerance (vitality) of computing processes in a changing computing environment (dynamic changes in the number of computers on the network). The developed multi-agent system detects cases of falsification of the results of the distributed system, which could otherwise lead to wrong decisions. In addition, the system checks and corrects wrong results.
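    One simple way to detect falsified results, shown here as a hypothetical sketch rather than the authors' scheme, is redundant assignment with a majority vote by the central agent:

```python
import random
from collections import Counter

def run_with_verification(task, nodes, redundancy=3):
    """Central-agent sketch: give the same task to several worker agents
    and accept the majority answer, flagging falsified results."""
    workers = random.sample(nodes, redundancy)
    results = [worker(task) for worker in workers]
    value, votes = Counter(results).most_common(1)[0]
    if votes <= redundancy // 2:
        raise RuntimeError("no majority -- reassign the task")
    return value

def honest(task):            # computes the result correctly
    return sum(task)

def faulty(task):            # falsifies its result
    return sum(task) + 1

random.seed(1)
nodes = [honest] * 8 + [faulty] * 2
print(run_with_verification(list(range(100)), nodes))
```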

  2. GIS/RS-based Rapid Reassessment for Slope Land Capability Classification

    Science.gov (United States)

    Chang, T. Y.; Chompuchan, C.

    2014-12-01

    Farmland resources in Taiwan are limited because about 73% of the island is mountainous slope land. Moreover, rapid urbanization and a dense population have resulted in highly developed flat areas. Therefore, the utilization of slope land for agriculture is increasingly needed. In 1976, the "Slope Land Conservation and Utilization Act" was promulgated to regulate slope land utilization. Consequently, slope land capability was categorized into Classes I-VI according to 4 criteria, i.e., average land slope, effective soil depth, degree of soil erosion, and parent rock. Slope land capability Classes I-IV are suitable for cultivation and pasture, whereas Class V should be used for forestry purposes and Class VI should be conservation land, which requires intensive conservation practices. A field survey was conducted to categorize each land unit according to this classification scheme. Landowners may not use land beyond its capability limitation. In the last decade, typhoons and landslides have frequently devastated Taiwan, so rapid post-disaster reassessment of the slope land capability classification is necessary. However, large-scale disasters on slope land constrain field investigation. This study focused on using satellite remote sensing and GIS as a rapid re-evaluation method. Chenyulan watershed in Nantou County, Taiwan was selected as the case study area. Grid-based slope derivation, the topographic wetness index (TWI) and USLE soil loss calculation were used to classify slope land capability. The results showed that the GIS-based classification gives an overall accuracy of 68.32%. In addition, the post-disaster areas of Typhoon Morakot in 2009, interpreted from SPOT satellite imagery, were suggested for classification as conservation land. These tools perform better for large-coverage post-disaster updates of the slope land capability classification and reduce the time, manpower and material resources required for field investigation.
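    A minimal sketch of the grid-based terrain derivatives, assuming NumPy; the flow-accumulation stand-in and the slope class breaks are assumptions for illustration, not the Act's definitions:

```python
import numpy as np

def slope_and_twi(dem, cell=10.0):
    """Grid slope (degrees) and a simple topographic wetness index from a
    DEM array. The specific catchment area below is a crude rank-based
    stand-in for real flow accumulation, so the TWI is illustrative only."""
    dz_dy, dz_dx = np.gradient(dem, cell)
    slope_rad = np.arctan(np.hypot(dz_dx, dz_dy))
    # Crude specific catchment area: larger for lower-lying cells.
    rank = dem.ravel().argsort().argsort().reshape(dem.shape)
    area = (dem.size - rank).astype(float) * cell
    twi = np.log(area / np.tan(np.clip(slope_rad, 1e-3, None)))
    return np.degrees(slope_rad), twi

dem = np.random.default_rng(0).normal(size=(200, 200)).cumsum(axis=0)
slope_deg, twi = slope_and_twi(dem)
# Average slope is the first of the four capability criteria; a simple
# threshold classification (class breaks are assumed, not the Act's):
capability = np.digitize(slope_deg, [5, 15, 30, 40, 55])
```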

  3. Arrhythmia Classification Based on Multi-Domain Feature Extraction for an ECG Recognition System

    Directory of Open Access Journals (Sweden)

    Hongqiang Li

    2016-10-01

    Full Text Available Automatic recognition of arrhythmias is particularly important in the diagnosis of heart diseases. This study presents an electrocardiogram (ECG) recognition system based on multi-domain feature extraction to classify ECG beats. An improved wavelet threshold method for ECG signal pre-processing is applied to remove noise interference. A novel multi-domain feature extraction method is proposed; this method employs kernel-independent component analysis in nonlinear feature extraction and uses the discrete wavelet transform to extract frequency domain features. The proposed system utilises a support vector machine classifier optimized with a genetic algorithm to recognize different types of heartbeats. An ECG acquisition experimental platform, in which ECG beats are collected as ECG data for classification, is constructed to demonstrate the effectiveness of the system in ECG beat classification. The presented system, when applied to the MIT-BIH arrhythmia database, achieves a high classification accuracy of 98.8%. Experimental results based on the ECG acquisition experimental platform show that the system obtains a satisfactory classification accuracy of 97.3% and is able to classify ECG beats efficiently for the automatic identification of cardiac arrhythmias.
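    The wavelet-threshold pre-processing step can be sketched with PyWavelets; the universal threshold used here is a common choice, in the spirit of (but not identical to) the paper's improved threshold method:

```python
import numpy as np
import pywt

def denoise_ecg(signal, wavelet="db6", level=6):
    """Soft-threshold wavelet denoising of a 1-D signal."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    # Universal threshold estimated from the finest detail coefficients.
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745
    thr = sigma * np.sqrt(2 * np.log(len(signal)))
    coeffs[1:] = [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[: len(signal)]

# Synthetic noisy trace standing in for a recorded ECG beat sequence.
noisy = np.sin(np.linspace(0, 20 * np.pi, 4096)) + \
        0.3 * np.random.default_rng(0).normal(size=4096)
clean = denoise_ecg(noisy)
```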

  4. A drone detection with aircraft classification based on a camera array

    Science.gov (United States)

    Liu, Hao; Qu, Fangchao; Liu, Yingjian; Zhao, Wei; Chen, Yitong

    2018-03-01

    In recent years, because of the rapid rise in the popularity of drones, many people have begun to operate them, bringing a range of security issues to sensitive areas such as airports and military locations. Realizing fine-grained classification and providing fast, accurate detection of different drone models is one of the important ways to solve these problems. The main challenges of fine-grained classification are that: (1) there are various types of drones, and the models are complex and diverse; (2) recognition must be fast and accurate, and the existing methods are not efficient. In this paper, we propose a fine-grained drone detection system based on a high-resolution camera array. The system can quickly and accurately perform fine-grained drone detection based on the HD cameras.

  5. Study on the E-commerce platform based on the agent

    Science.gov (United States)

    Fu, Ruixue; Qin, Lishuan; Gao, Yinmin

    2011-10-01

    To solve the problem of dynamic integration in e-commerce, a multi-agent architecture for an electronic commerce platform system based on agents and ontology is introduced, which includes three major types of agents, an ontology, and a rule collection. In this architecture, service agents and rules are used to realize business process reengineering, the reuse of software components, and agility of the electronic commerce platform. To illustrate the architecture, a simulation was carried out, and the results imply that the architecture provides a very efficient method to design and implement a flexible, distributed, open and intelligent electronic commerce platform system that solves the problem of dynamic integration in e-commerce. The objective of this paper is to illustrate the architecture of the electronic commerce platform system and the approach by which agents and ontology support it.

  6. A Computational Agent-Based Modeling Approach for Competitive Wireless Service Market

    KAUST Repository

    Douglas, C C; Hyoseop Lee,; Wonsuck Lee,

    2011-01-01

    Using an agent-based modeling method, we study market dynamism with regard to wireless cellular services that are in competition for a greater market share and profit. In the proposed model, service providers and consumers are described as agents

  7. An ant colony optimization based feature selection for web page classification.

    Science.gov (United States)

    Saraç, Esra; Özel, Selma Ayşe

    2014-01-01

    The increased popularity of the web has caused a huge amount of information to be added to it, and as a result of this explosive information growth, automated web page classification systems are needed to improve search engines' performance. Web pages have a large number of features, such as HTML/XML tags, URLs, hyperlinks, and text contents, that should be considered during an automated classification process. The aim of this study is to reduce the number of features used, in order to improve the runtime and accuracy of the classification of web pages. In this study, we used an ant colony optimization (ACO) algorithm to select the best features, and then we applied the well-known C4.5, naive Bayes, and k-nearest neighbor classifiers to assign class labels to web pages. We used the WebKB and Conference datasets in our experiments, and we showed that using the ACO for feature selection improves both the accuracy and runtime performance of classification. We also showed that the proposed ACO-based algorithm can select better features compared with the well-known information gain and chi-square feature selection methods.
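    A compact sketch of ACO-style feature selection, assuming scikit-learn; the pheromone update rule and the absence of a heuristic desirability term are simplifications of the paper's algorithm:

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import MultinomialNB

def aco_select(X, y, n_select=20, n_ants=10, n_iter=15, evaporation=0.1, seed=0):
    """Tiny ACO for feature selection: each ant samples a feature subset
    with probability proportional to pheromone; good subsets deposit
    pheromone, and pheromone evaporates each iteration."""
    rng = np.random.default_rng(seed)
    pheromone = np.ones(X.shape[1])
    best_subset, best_score = None, -np.inf
    for _ in range(n_iter):
        for _ant in range(n_ants):
            p = pheromone / pheromone.sum()
            subset = rng.choice(X.shape[1], size=n_select, replace=False, p=p)
            score = cross_val_score(MultinomialNB(), X[:, subset], y, cv=3).mean()
            if score > best_score:
                best_subset, best_score = subset, score
            pheromone[subset] += score      # reward the features an ant used
        pheromone *= 1.0 - evaporation      # global evaporation
    return best_subset, best_score

rng = np.random.default_rng(0)
X = rng.integers(0, 5, size=(300, 200)).astype(float)  # term-count stand-in
y = rng.integers(0, 3, size=300)                       # page categories
subset, score = aco_select(X, y)
```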

  8. [Classification of cell-based medicinal products and legal implications: An overview and an update].

    Science.gov (United States)

    Scherer, Jürgen; Flory, Egbert

    2015-11-01

    In general, cell-based medicinal products do not represent a uniform class of medicinal products, but instead comprise medicinal products with diverse regulatory classification as advanced-therapy medicinal products (ATMP), medicinal products (MP), tissue preparations, or blood products. Due to the legal and scientific consequences of the development and approval of MPs, classification should be clarified as early as possible. This paper describes the legal situation in Germany and highlights specific criteria and concepts for classification, with a focus on, but not limited to, ATMPs and non-ATMPs. Depending on the stage of product development and the specific application submitted to a competent authority, legally binding classification is done by the German Länder Authorities, Paul-Ehrlich-Institut, or European Medicines Agency. On request by the applicants, the Committee for Advanced Therapies may issue scientific recommendations for classification.

  9. A spatial web/agent-based model to support stakeholders' negotiation regarding land development.

    Science.gov (United States)

    Pooyandeh, Majeed; Marceau, Danielle J

    2013-11-15

    Decision making in land management can be greatly enhanced if the perspectives of concerned stakeholders are taken into consideration. This often implies negotiation in order to reach an agreement based on the examination of multiple alternatives. This paper describes a spatial web/agent-based modeling system that was developed to support the negotiation process of stakeholders regarding land development in southern Alberta, Canada. The system integrates a fuzzy analytic hierarchy procedure within an agent-based model in an interactive visualization environment, provided through a web interface, to facilitate the learning and negotiation of the stakeholders. In the pre-negotiation phase, the stakeholders compare their evaluation criteria using linguistic expressions. Due to the uncertainty and fuzzy nature of such comparisons, a fuzzy Analytic Hierarchy Process is then used to prioritize the criteria. The negotiation starts with a development plan being submitted by a user (stakeholder) through the web interface. An agent called the proposer, which represents the proposer of the plan, receives this plan and starts negotiating with all other agents. The negotiation is conducted in a step-wise manner, where the agents change their attitudes by assigning a new set of weights to their criteria. If an agreement is not achieved, a new location for development is proposed by the proposer agent. This process is repeated until a location is found that satisfies all agents to a certain predefined degree. To evaluate the performance of the model, the negotiation was simulated with four agents, one of which was the proposer agent, using two hypothetical development plans. The first plan was selected randomly; the other was chosen in an area that is of high importance to one of the agents. While the agents managed to achieve an agreement about the location of the land development after three rounds of negotiation in the first scenario, seven rounds were required in the second

  10. Object based image analysis for the classification of the growth stages of Avocado crop, in Michoacán State, Mexico

    Science.gov (United States)

    Gao, Yan; Marpu, Prashanth; Morales Manila, Luis M.

    2014-11-01

    This paper assesses the suitability of 8-band Worldview-2 (WV2) satellite data and an object-based random forest algorithm for the classification of avocado growth stages in Mexico. We tested both pixel-based classification with minimum distance (MD) and maximum likelihood (MLC) classifiers and object-based classification with the Random Forest (RF) algorithm for this task. Training samples and verification data were selected by visually interpreting the WV2 images for seven thematic classes: fully grown, middle stage, and early stage of avocado crops, bare land, two types of natural forests, and water body. To examine the contribution of the four new spectral bands of the WV2 sensor, all the tested classifications were carried out with and without them. Classification accuracy assessment results show that object-based classification with the RF algorithm obtained higher overall accuracy (93.06%) than the pixel-based MD (69.37%) and MLC (64.03%) methods. For both pixel-based and object-based methods, the classifications with the four new spectral bands obtained higher accuracy than those without (object-based RF: 93.06% vs. 83.59%; pixel-based MD: 69.37% vs. 67.2%; pixel-based MLC: 64.03% vs. 36.05%), suggesting that the four new spectral bands of the WV2 sensor contributed to the increase in classification accuracy.

  11. Drunk driving detection based on classification of multivariate time series.

    Science.gov (United States)

    Li, Zhenlong; Jin, Xue; Zhao, Xiaohua

    2015-09-01

    This paper addresses the problem of detecting drunk driving based on classification of multivariate time series. First, driving performance measures were collected from a test in a driving simulator located in the Traffic Research Center, Beijing University of Technology. Lateral position and steering angle were used to detect drunk driving. Second, multivariate time series analysis was performed to extract the features. A piecewise linear representation was used to represent the multivariate time series, and a bottom-up algorithm was then employed to separate them into segments. The slope and time interval of each segment were extracted as the features for classification. Third, a support vector machine classifier was used to classify the driver's state into two classes (normal or drunk) according to the extracted features. The proposed approach achieved an accuracy of 80.0%. Drunk driving detection based on the analysis of multivariate time series is feasible and effective, and the approach has implications for drunk driving detection. Copyright © 2015 Elsevier Ltd and National Safety Council. All rights reserved.
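    The piecewise linear representation with bottom-up segmentation can be sketched as follows; the initial segment length and the merge cost are illustrative assumptions, not the paper's exact settings:

```python
import numpy as np

def bottom_up_segments(series, max_segments=8):
    """Bottom-up piecewise linear representation: start from fine segments
    and repeatedly merge the adjacent pair with the smallest fitting error,
    then return (slope, duration) features per segment."""
    bounds = list(range(0, len(series), 10)) + [len(series)]

    def sse(a, b):                      # residual of a straight-line fit
        x = np.arange(a, b)
        slope, icpt = np.polyfit(x, series[a:b], 1)
        return float(((series[a:b] - (slope * x + icpt)) ** 2).sum())

    while len(bounds) - 1 > max_segments:
        costs = [sse(bounds[i], bounds[i + 2]) for i in range(len(bounds) - 2)]
        bounds.pop(int(np.argmin(costs)) + 1)   # merge the cheapest pair

    feats = []
    for a, b in zip(bounds[:-1], bounds[1:]):
        slope = np.polyfit(np.arange(a, b), series[a:b], 1)[0]
        feats.append((slope, b - a))
    return np.array(feats)

# Synthetic lateral-position trace; the features would feed an SVM.
lateral_position = np.cumsum(np.random.default_rng(0).normal(size=300))
features = bottom_up_segments(lateral_position)
```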

  12. SIMULATING AN EVOLUTIONARY MULTI-AGENT BASED MODEL OF THE STOCK MARKET

    Directory of Open Access Journals (Sweden)

    Diana MARICA

    2015-08-01

    Full Text Available The paper focuses on artificial stock market simulations using a multi-agent model incorporating 2,000 heterogeneous agents interacting on the artificial market. The agents' interaction arises from trading activity on the market through a call auction trading mechanism. The multi-agent model uses evolutionary techniques such as genetic programming in order to generate an adaptive and evolving population of agents. Each artificial agent is endowed with wealth and a genetic programming induced trading strategy. The trading strategy evolves and adapts to new market conditions through a process called breeding: at each simulation step, the model generates new agents with better trading strategies by recombining the best-performing trading strategies and replacing the agents that have the worst-performing trading strategies. The simulation model was built with the help of the simulation software Altreva Adaptive Modeler, which offers a suitable platform for financial market simulations of evolutionary agent-based models; the S&P500 composite index was used as a benchmark for the simulation results.

  13. Classification Based on Pruning and Double Covered Rule Sets for the Internet of Things Applications

    Science.gov (United States)

    Zhou, Zhongmei; Wang, Weiping

    2014-01-01

    The Internet of things (IOT) is a hot issue in recent years. Large amounts of data are accumulated by IOT users, making it a great challenge to mine useful knowledge from IOT. Classification is an effective strategy which can predict the needs of users in IOT. However, many traditional rule-based classifiers cannot guarantee that all instances are covered by at least two classification rules, and thus cannot achieve high accuracy on some datasets. In this paper, we propose a new rule-based classification method, CDCR-P (Classification based on the Pruning and Double Covered Rule sets). CDCR-P induces two different rule sets, A and B, such that every instance in the training set is covered by at least one rule not only in rule set A, but also in rule set B. In order to improve the quality of rule set B, we take measures to prune the length of its rules. Our experimental results indicate that CDCR-P is not only feasible, but can also achieve high accuracy. PMID:24511304

  14. Classification based on pruning and double covered rule sets for the internet of things applications.

    Science.gov (United States)

    Li, Shasha; Zhou, Zhongmei; Wang, Weiping

    2014-01-01

    The Internet of things (IOT) is a hot issue in recent years. Large amounts of data are accumulated by IOT users, making it a great challenge to mine useful knowledge from IOT. Classification is an effective strategy which can predict the needs of users in IOT. However, many traditional rule-based classifiers cannot guarantee that all instances are covered by at least two classification rules, and thus cannot achieve high accuracy on some datasets. In this paper, we propose a new rule-based classification method, CDCR-P (Classification based on the Pruning and Double Covered Rule sets). CDCR-P induces two different rule sets, A and B, such that every instance in the training set is covered by at least one rule not only in rule set A, but also in rule set B. In order to improve the quality of rule set B, we take measures to prune the length of its rules. Our experimental results indicate that CDCR-P is not only feasible, but can also achieve high accuracy.

  15. Cancer Classification Based on Support Vector Machine Optimized by Particle Swarm Optimization and Artificial Bee Colony.

    Science.gov (United States)

    Gao, Lingyun; Ye, Mingquan; Wu, Changrong

    2017-11-29

    Intelligent optimization algorithms have advantages in dealing with complex nonlinear problems, accompanied by good flexibility and adaptability. In this paper, the FCBF (Fast Correlation-Based Feature selection) method is used to filter irrelevant and redundant features in order to improve the quality of cancer classification. Then, we perform classification based on an SVM (Support Vector Machine) optimized by PSO (Particle Swarm Optimization) combined with ABC (Artificial Bee Colony) approaches, which is denoted PA-SVM. The proposed PA-SVM method is applied to nine cancer datasets, including five outcome-prediction datasets and a protein dataset of ovarian cancer. Comparison with other classification methods demonstrates the effectiveness and robustness of the proposed PA-SVM method in handling various types of data for cancer classification.
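    A plain PSO loop for tuning the SVM's C and gamma, assuming scikit-learn; this sketches only the PSO half of the paper's PSO+ABC hybrid, and the swarm parameters and search bounds are illustrative assumptions:

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

def pso_svm(X, y, n_particles=8, n_iter=10, seed=0):
    """PSO over log10(C) and log10(gamma), maximising cross-validated accuracy."""
    rng = np.random.default_rng(seed)
    pos = rng.uniform([-2, -4], [3, 1], size=(n_particles, 2))
    vel = np.zeros_like(pos)
    pbest, pbest_val = pos.copy(), np.full(n_particles, -np.inf)
    gbest, gbest_val = None, -np.inf

    def fitness(p):
        return cross_val_score(SVC(C=10 ** p[0], gamma=10 ** p[1]), X, y, cv=3).mean()

    for _ in range(n_iter):
        for i in range(n_particles):
            val = fitness(pos[i])
            if val > pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i].copy(), val
            if val > gbest_val:
                gbest, gbest_val = pos[i].copy(), val
        # Standard velocity update: inertia + cognitive + social terms.
        r1, r2 = rng.random((2, n_particles, 1))
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, [-2, -4], [3, 1])
    return gbest, gbest_val

rng = np.random.default_rng(0)
X = rng.normal(size=(90, 10))          # stand-in for filtered gene features
y = rng.integers(0, 2, size=90)
params, cv_score = pso_svm(X, y)
```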

  16. DOE LLW classification rationale

    International Nuclear Information System (INIS)

    Flores, A.Y.

    1991-01-01

    This report presents the rationale behind the US Department of Energy's low-level radioactive waste (LLW) classification, which is based on the Nuclear Regulatory Commission's classification system. DOE site operators met to review the qualifications and characteristics of the classification systems. They evaluated performance objectives, developed waste classification tables, and compiled dose limits on the waste. A goal of the LLW classification system was to allow each disposal site the freedom to develop limits on radionuclide inventories and concentrations according to its own site-specific characteristics. This goal was achieved with the adoption of a performance-objectives system based on a performance assessment, with site-specific environmental conditions and engineered disposal systems.

  17. Web based parallel/distributed medical data mining using software agents

    Energy Technology Data Exchange (ETDEWEB)

    Kargupta, H.; Stafford, B.; Hamzaoglu, I.

    1997-12-31

    This paper describes an experimental parallel/distributed data mining system, PADMA (PArallel Data Mining Agents), that uses software agents for local data access and analysis and a web-based interface for interactive data visualization. It also presents the results of applying PADMA to detect patterns in unstructured texts of postmortem reports and laboratory test data for Hepatitis C patients.

  18. Hypercompetitive Environments: An Agent-based model approach

    Science.gov (United States)

    Dias, Manuel; Araújo, Tanya

    Information technology (IT) environments are characterized by complex changes and rapid evolution. Globalization and the spread of technological innovation have increased the need for new strategic information resources, both from individual firms and management environments. Improvements in multidisciplinary methods and, particularly, the availability of powerful computational tools are giving researchers an increasing opportunity to investigate management environments in their true complex nature. The adoption of a complex systems approach allows for modeling business strategies from a bottom-up perspective, understood as resulting from repeated and local interaction of economic agents, without disregarding the consequences of the business strategies themselves for the individual behavior of enterprises and the emergence of interaction patterns between firms and management environments. Agent-based models are the leading approach in this effort.

  19. Mechanism-based classification of pain for physical therapy management in palliative care: A clinical commentary

    Directory of Open Access Journals (Sweden)

    Senthil P Kumar

    2011-01-01

    Full Text Available Pain relief is such a major goal of palliative care in India that most palliative care interventions necessarily begin with pain relief. Physical therapists play an important role in palliative care and are regarded as highly proficient members of a multidisciplinary healthcare team in the management of chronic pain. Pain classification involves three different levels, based upon pain symptoms, pain mechanisms and pain syndromes. Mechanism-based treatments are most likely to succeed compared to symptomatic treatments or diagnosis-based treatments. The objective of this clinical commentary is to update physical therapists working in palliative care on the mechanism-based classification of pain and its interpretation, with available therapeutic evidence for providing optimal patient care using physical therapy. The paper describes the evolution of the mechanism-based classification of pain; the five mechanisms (central sensitization, peripheral neuropathic, nociceptive, sympathetically maintained, and cognitive-affective) are explained, with recent evidence for physical therapy treatments for each of the mechanisms.

  20. Urban Image Classification: Per-Pixel Classifiers, Sub-Pixel Analysis, Object-Based Image Analysis, and Geospatial Methods (Chapter 10)

    Science.gov (United States)

    Myint, Soe W.; Mesev, Victor; Quattrochi, Dale; Wentz, Elizabeth A.

    2013-01-01

    Remote sensing methods used to generate base maps to analyze the urban environment rely predominantly on digital sensor data from space-borne platforms. This is due in part to new sources of high spatial resolution data covering the globe, a variety of multispectral and multitemporal sources, sophisticated statistical and geospatial methods, and compatibility with GIS data sources and methods. The goal of this chapter is to review the four groups of classification methods for digital sensor data from space-borne platforms: per-pixel, sub-pixel, object-based (spatial-based), and geospatial methods. Per-pixel methods are widely used methods that classify pixels into distinct categories based solely on the spectral and ancillary information within that pixel. They are used for everything from simple calculations of environmental indices (e.g., NDVI) to sophisticated expert systems that assign urban land covers. Researchers recognize, however, that even with the smallest pixel size the spectral information within a pixel is really a combination of multiple urban surfaces. Sub-pixel classification methods therefore aim to statistically quantify the mixture of surfaces to improve overall classification accuracy. While within-pixel variations exist, there is also significant evidence that groups of nearby pixels have similar spectral information and therefore belong to the same classification category. Object-oriented methods have emerged that group pixels prior to classification based on spectral similarity and spatial proximity. Classification accuracy using object-based methods shows significant success and promise for numerous urban applications. Like the object-oriented methods that recognize the importance of spatial proximity, geospatial methods for urban mapping also utilize neighboring pixels in the classification process. The primary difference, though, is that geostatistical methods (e.g., spatial autocorrelation methods) are utilized during both the pre- and post-classification

  1. Application of a niche-based model for forest cover classification

    Directory of Open Access Journals (Sweden)

    Amici V

    2012-05-01

    Full Text Available In recent years, a surge of interest in biodiversity conservation has led to the development of new approaches to facilitate ecologically-based conservation policies and management plans. In particular, image classification and predictive distribution modeling applied to forest habitats constitute a crucial issue, as forests constitute the most widespread vegetation type and play a key role in ecosystem functioning. The general purpose of this study is therefore to develop a framework that, in the absence of large amounts of field data for large areas, may allow selecting the most appropriate classification. In some cases, a hard division of classes is required, especially as support for environmental policies; despite this, it is necessary to take into account problems which derive from a crisp view of the ecological entities being mapped, since habitats are expected to be structurally complex and to vary continuously within a landscape. In this paper, a niche model (MaxEnt), generally used to estimate species/habitat distributions, has been applied to classify forest cover in a complex Mediterranean area and to estimate the probability distribution of four forest types, producing continuous maps of forest cover. Using the obtained models to validate crisp classifications highlighted that crisp classification, which is continuously used in landscape research and planning, is not free from drawbacks, as it shows a high degree of inner variability. The modeling approach followed by this study, taking into account the uncertainty proper to natural ecosystems and the use of environmental variables in land cover classification, may represent a useful approach to making field inventories more efficient and effective and to developing effective forest conservation policies.

  2. Feature selection for neural network based defect classification of ceramic components using high frequency ultrasound.

    Science.gov (United States)

    Kesharaju, Manasa; Nagarajah, Romesh

    2015-09-01

    The motivation for this research stems from the need to provide a non-destructive testing method capable of detecting and locating any defects and microstructural variations within armour ceramic components before issuing them to the soldiers who rely on them for their survival. The development of an automated ultrasonic inspection based classification system would make it possible to check each ceramic component and immediately alert the operator to the presence of defects. Generally, in many classification problems the choice of features or dimensionality reduction is significant and simultaneously very difficult, as a substantial computational effort is required to evaluate possible feature subsets. In this research, a combination of artificial neural networks and genetic algorithms is used to optimize the feature subset used in the classification of various defects in reaction-sintered silicon carbide ceramic components. Initially, wavelet-based feature extraction is implemented on the region of interest. An artificial neural network classifier is employed to evaluate the performance of these features, and genetic algorithm based feature selection is performed. Principal Component Analysis, a popular technique for feature selection, is compared with the genetic algorithm based technique in terms of classification accuracy and the selection of an optimal number of features. The experimental results confirm that features identified by Principal Component Analysis lead to improved performance, with a classification accuracy of 96% compared with 94% for the genetic algorithm. Copyright © 2015 Elsevier B.V. All rights reserved.

  3. Learning features for tissue classification with the classification restricted Boltzmann machine

    DEFF Research Database (Denmark)

    van Tulder, Gijs; de Bruijne, Marleen

    2014-01-01

    Performance of automated tissue classification in medical imaging depends on the choice of descriptive features. In this paper, we show how restricted Boltzmann machines (RBMs) can be used to learn features that are especially suited for texture-based tissue classification. We introduce the convo...... outperform conventional RBM-based feature learning, which is unsupervised and uses only a generative learning objective, as well as often-used filter banks. We show that a mixture of generative and discriminative learning can produce filters that give a higher classification accuracy....
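
    For orientation, a minimal scikit-learn sketch of the conventional baseline this record compares against: generative, unsupervised RBM feature learning followed by a separate classifier. Note that scikit-learn provides only this generative BernoulliRBM, not the classification RBM introduced in the paper, and the digits data here merely stands in for tissue texture patches:

        from sklearn.datasets import load_digits
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import train_test_split
        from sklearn.neural_network import BernoulliRBM
        from sklearn.pipeline import make_pipeline

        X, y = load_digits(return_X_y=True)
        X = X / 16.0                    # BernoulliRBM expects values in [0, 1]
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

        # Unsupervised RBM learns features; logistic regression classifies them.
        pipe = make_pipeline(
            BernoulliRBM(n_components=64, learning_rate=0.05,
                         n_iter=20, random_state=0),
            LogisticRegression(max_iter=1000))
        pipe.fit(X_tr, y_tr)
        print("accuracy:", round(pipe.score(X_te, y_te), 3))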

  4. Tongue Images Classification Based on Constrained High Dispersal Network

    Directory of Open Access Journals (Sweden)

    Dan Meng

    2017-01-01

    Full Text Available Computer aided tongue diagnosis has a great potential to play important roles in traditional Chinese medicine (TCM). However, the majority of the existing tongue image analysis and classification methods are based on low-level features, which may not provide a holistic view of the tongue. Inspired by deep convolutional neural networks (CNN), we propose a novel feature extraction framework called constrained high dispersal neural networks (CHDNet) to extract unbiased features and reduce human labor for tongue diagnosis in TCM. Previous CNN models have mostly focused on learning convolutional filters and adapting weights between them, but these models have two major issues: redundancy and insufficient capability in handling unbalanced sample distributions. We introduce high dispersal and local response normalization operations to address the issue of redundancy. We also add multiscale feature analysis to avoid the problem of sensitivity to deformation. Our proposed CHDNet learns high-level features and provides more classification information during training time, which may result in higher accuracy when predicting testing samples. We tested the proposed method on a set of 267 gastritis patients and a control group of 48 healthy volunteers. Test results show that CHDNet is a promising method for tongue image classification in the TCM study.

  5. A Skeleton Based Programming Paradigm for Mobile Multi-Agents on Distributed Systems and Its Realization within the MAGDA Mobile Agents Platform

    OpenAIRE

    R. Aversa; B. Di Martino; N. Mazzocca; S. Venticinque

    2008-01-01

    Parallel programming effort can be reduced by using high level constructs such as algorithmic skeletons. Within the MAGDA toolset, supporting programming and execution of mobile agent based distributed applications, we provide a skeleton-based parallel programming environment, based on specialization of Algorithmic Skeleton Java interfaces and classes. Their implementation includes mobile agent features for execution on heterogeneous systems, such as clusters of WSs and PCs, and supports reliab...

  6. An Intelligent Fleet Condition-Based Maintenance Decision Making Method Based on Multi-Agent

    Directory of Open Access Journals (Sweden)

    Bo Sun

    2012-01-01

    Full Text Available According to the demand for online condition-based maintenance decision making within a mission oriented fleet, an intelligent maintenance decision making method based on multi-agent technology and heuristic rules is proposed. The process of condition-based maintenance within an aircraft fleet (each aircraft containing one or more Line Replaceable Modules), based on multiple maintenance thresholds, is analyzed. The process is then abstracted into a multi-agent model: a 2-layer model structure containing host negotiation and independent negotiation is established, and heuristic rules applied to global and local maintenance decision making are proposed. Based on the Contract Net Protocol and these heuristic rules, the maintenance decision making algorithm is put forward. Finally, a fleet consisting of 10 aircraft on a 3-wave continuous mission is illustrated to verify this method. Simulation results indicate that this method can improve the availability of the fleet, meet mission demands, rationalize the utilization of support resources and provide support for online maintenance decision making within a mission oriented fleet.
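
    A toy sketch of the Contract Net Protocol step named in the record — announce, bid, award — with an invented bidding rule; the paper's heuristic rules and two-layer negotiation structure are not reproduced here:

        from dataclasses import dataclass

        @dataclass
        class MaintenanceTask:
            aircraft_id: int
            workload: float

        class SupportAgent:
            """A maintenance station that bids on announced tasks.
            Hypothetical rule: bid = current queue load + task workload."""
            def __init__(self, name):
                self.name, self.load = name, 0.0
            def bid(self, task):
                return self.load + task.workload
            def award(self, task):
                self.load += task.workload

        def contract_net(tasks, contractors):
            # Manager announces each task, collects bids, awards the lowest.
            for task in tasks:
                bids = {c: c.bid(task) for c in contractors}
                winner = min(bids, key=bids.get)
                winner.award(task)
                print(f"task for aircraft {task.aircraft_id} -> {winner.name}")

        contract_net([MaintenanceTask(i, w) for i, w in enumerate([3, 1, 2, 5])],
                     [SupportAgent("station-A"), SupportAgent("station-B")])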

  7. Conceptual Framework for Agent-Based Modeling of Customer-Oriented Supply Networks

    OpenAIRE

    Solano-Vanegas , Clara ,; Carrillo-Ramos , Angela; Montoya-Torres , Jairo ,

    2015-01-01

    Part 3: Collaboration Frameworks; International audience; Supply Networks (SN) are complex systems involving the interaction of different actors, very often with different objectives and goals. Among the different existing modeling approaches, agent-based systems can properly represent the autonomous behavior of SN links and, simultaneously, observe the general response of the system as a result of individual actions. Most research using agent-based modeling in SN focuses on production is...

  8. Ontology-based multi-agent systems

    Energy Technology Data Exchange (ETDEWEB)

    Hadzic, Maja; Wongthongtham, Pornpit; Dillon, Tharam; Chang, Elizabeth [Digital Ecosystems and Business Intelligence Institute, Perth, WA (Australia)

    2009-07-01

    The Semantic web has given a great deal of impetus to the development of ontologies and multi-agent systems. Several books have appeared which discuss the development of ontologies or of multi-agent systems separately on their own. The growing interaction between agents and ontologies has highlighted the need for integrated development of these. This book is unique in being the first to provide an integrated treatment of the modeling, design and implementation of such combined ontology/multi-agent systems. It provides clear exposition of this integrated modeling and design methodology. It further illustrates this with two detailed case studies in (a) the biomedical area and (b) the software engineering area. The book is, therefore, of interest to researchers, graduate students and practitioners in the semantic web and web science area. (orig.)

  9. Agent-Based Modeling in Systems Pharmacology.

    Science.gov (United States)

    Cosgrove, J; Butler, J; Alden, K; Read, M; Kumar, V; Cucurull-Sanchez, L; Timmis, J; Coles, M

    2015-11-01

    Modeling and simulation (M&S) techniques provide a platform for knowledge integration and hypothesis testing to gain insights into biological systems that would not be possible a priori. Agent-based modeling (ABM) is an M&S technique that focuses on describing individual components rather than homogeneous populations. This tutorial introduces ABM to systems pharmacologists, using relevant case studies to highlight how ABM-specific strengths have yielded success in the area of preclinical mechanistic modeling.

  10. Multi-Frequency Polarimetric SAR Classification Based on Riemannian Manifold and Simultaneous Sparse Representation

    Directory of Open Access Journals (Sweden)

    Fan Yang

    2015-07-01

    Full Text Available Normally, polarimetric SAR classification is a high-dimensional nonlinear mapping problem. In the realm of pattern recognition, sparse representation is a very efficacious and powerful approach. As classical descriptors of polarimetric SAR, covariance and coherency matrices are Hermitian semidefinite and form a Riemannian manifold. Conventional Euclidean metrics are not suitable for a Riemannian manifold, and hence, normal sparse representation classification cannot be applied to polarimetric SAR directly. This paper proposes a new land cover classification approach for polarimetric SAR. There are two principal novelties in this paper. First, a Stein kernel on a Riemannian manifold instead of Euclidean metrics, combined with sparse representation, is employed for polarimetric SAR land cover classification. This approach is named Stein sparse representation-based classification (Stein-SRC). Second, using simultaneous sparse representation and reasonable assumptions about the correlation of representations among different frequency bands, Stein-SRC is generalized to simultaneous Stein-SRC for multi-frequency polarimetric SAR classification. These classifiers are assessed using polarimetric SAR images from the Airborne Synthetic Aperture Radar (AIRSAR) sensor of the Jet Propulsion Laboratory (JPL) and the Electromagnetics Institute Synthetic Aperture Radar (EMISAR) sensor of the Technical University of Denmark (DTU). Experiments on single-band and multi-band data both show that these approaches acquire more accurate classification results in comparison to many conventional and advanced classifiers.
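
    A small NumPy sketch of the Stein (S-)divergence commonly used to build such kernels on the manifold of positive-definite matrices, with k(X, Y) = exp(-beta * S(X, Y)); whether this matches the paper's exact kernel parameters is an assumption:

        import numpy as np

        def stein_divergence(X, Y):
            """S(X, Y) = log det((X+Y)/2) - 0.5*(log det X + log det Y) for
            Hermitian positive-definite matrices such as coherency matrices."""
            mid = np.linalg.slogdet((X + Y) / 2)[1]
            return mid - 0.5 * (np.linalg.slogdet(X)[1] + np.linalg.slogdet(Y)[1])

        def stein_kernel(X, Y, beta=1.0):
            return np.exp(-beta * stein_divergence(X, Y))

        # Two random SPD matrices standing in for 3x3 coherency matrices.
        rng = np.random.default_rng(0)
        A = rng.random((3, 3)); A = A @ A.T + 3 * np.eye(3)
        B = rng.random((3, 3)); B = B @ B.T + 3 * np.eye(3)
        print(stein_kernel(A, B), stein_kernel(A, A))   # k(A, A) == 1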

  11. A Framework for Agent-based Human Interaction Support

    Directory of Open Access Journals (Sweden)

    Axel Bürkle

    2008-10-01

    Full Text Available In this paper we describe an agent-based infrastructure for multimodal perceptual systems which aims at developing and realizing computer services that are delivered to humans in an implicit and unobtrusive way. The framework presented here supports the implementation of human-centric context-aware applications providing non-obtrusive assistance to participants in events such as meetings, lectures, conferences and presentations taking place in indoor "smart spaces". We emphasize the design and implementation of an agent-based framework that supports "pluggable" service logic, in the sense that the service developer can concentrate on coding the service logic independently of the underlying middleware. Furthermore, we give an example of the architecture's ability to support the cooperation of multiple services in a meeting scenario using an intelligent connector service and a semantic web oriented travel service.

  12. Agent-Based Approach for Modelling the Labour Migration from China to Russia

    Directory of Open Access Journals (Sweden)

    Valeriy Leonidovich Makarov

    2017-06-01

    Full Text Available The article describes the process of labour migration from China to Russia and shows its modelling using the agent-based approach. This approach allows us to simulate an artificial society in a computer program, taking into account the diversity of the individuals under consideration, as well as to model the set of laws and rules of conduct that make up the institutional environment in which the members of this society live. A brief review and analysis of agent-based migration models presented in the foreign literature are given. The agent-based model of labour migration from China to Russia developed by the Central Economic Mathematical Institute of the Russian Academy of Sciences simulates human behaviour close to reality, based on the internal purposes that determine the agents' choice of territory as a place of residence. Therefore, in developing the agents of the model and their behaviour algorithms, as well as in organizing the environment in which they exist and interact, the main characteristics of the populations of the two neighbouring countries and their demographic processes have been considered. Using the model, two experiments have been conducted. The purpose of the first was to assess the effect of the depreciation of the rouble against the yuan on the overall indexes of labour migration, as well as on its structure. In the second experiment, the procedure by which agents search for information for migration decision-making was changed: all generalizing information on the average salary by type of activity and skill level of employees, both in China and Russia, became available to all agents irrespective of their qualification level.

  13. Classification of Land Cover and Land Use Based on Convolutional Neural Networks

    Science.gov (United States)

    Yang, Chun; Rottensteiner, Franz; Heipke, Christian

    2018-04-01

    Land cover describes the physical material of the earth's surface, whereas land use describes the socio-economic function of a piece of land. Land use information is typically collected in geospatial databases. As such databases become outdated quickly, an automatic update process is required. This paper presents a new approach to determine land cover and to classify land use objects based on convolutional neural networks (CNN). The input data are aerial images and derived data such as digital surface models. Firstly, we apply a CNN to determine the land cover for each pixel of the input image. We compare different CNN structures, all of them based on an encoder-decoder structure for obtaining dense class predictions. Secondly, we propose a new CNN-based methodology for the prediction of the land use label of objects from a geospatial database. In this context, we present a strategy for generating image patches of identical size from the input data, which are classified by a CNN. Again, we compare different CNN architectures. Our experiments show that an overall accuracy of up to 85.7 % and 77.4 % can be achieved for land cover and land use, respectively. The land cover classification makes a positive contribution to the classification of land use.

  14. Agent-based Modeling Methodology for Analyzing Weapons Systems

    Science.gov (United States)

    2015-03-26

    technique involve model structure, system representation and the degree of validity, coupled with the simplicity, of the overall model. ABM is best suited... system representation of the air combat system. We feel that a simulation model that combines ABM with equation-based representation of weapons and... AGENT-BASED MODELING METHODOLOGY FOR ANALYZING WEAPONS SYSTEMS, THESIS, Casey D. Connors, Major, USA

  15. Training ANFIS structure using genetic algorithm for liver cancer classification based on microarray gene expression data

    Directory of Open Access Journals (Sweden)

    Bülent Haznedar

    2017-02-01

    Full Text Available Classification is an important data mining technique used in many fields, such as medicine, genetics and biomedical engineering. The number of studies on the classification of DNA microarray gene expression data has increased markedly in recent years. However, because of the large number of genes in microarray gene expression data and the mostly nonlinear relations among those data, the success of conventional classification algorithms can be limited. For these reasons, interest in classification methods based on artificial intelligence has gradually increased in recent times. In this study, a hybrid approach based on an Adaptive Neuro-Fuzzy Inference System (ANFIS) and a Genetic Algorithm (GA) is suggested for classifying a liver microarray cancer data set. Simulation results are compared with the results of other methods. According to the results obtained, the recommended method performs better than the other methods.

  16. Agent autonomy approach to probabilistic physics-of-failure modeling of complex dynamic systems with interacting failure mechanisms

    Science.gov (United States)

    Gromek, Katherine Emily

    A novel computational and inference framework for physics-of-failure (PoF) reliability modeling of complex dynamic systems has been established in this research. The PoF-based reliability models are used to perform a real time simulation of system failure processes, so that system level reliability modeling constitutes inferences from checking the status of component level reliability at any given time. The "agent autonomy" concept is applied as a solution method for the system-level probabilistic PoF-based (i.e. PPoF-based) modeling. This concept originated in artificial intelligence (AI) as a leading intelligent computational inference approach in the modeling of multi-agent systems (MAS). The concept of agent autonomy in the context of reliability modeling was first proposed by M. Azarkhail [1], where a fundamentally new idea of system representation by autonomous intelligent agents for the purpose of reliability modeling was introduced. The contribution of the current work lies in the further development of the agent autonomy concept, particularly the refined agent classification within the scope of PoF-based system reliability modeling, new approaches to the learning and autonomy properties of the intelligent agents, and the modeling of interacting failure mechanisms within the dynamic engineering system. The autonomous property of intelligent agents is defined as the agents' ability to self-activate, deactivate or completely redefine their role in the analysis. This property of agents, together with the ability to model interacting failure mechanisms of the system elements, makes agent autonomy fundamentally different from all existing methods of probabilistic PoF-based reliability modeling. 1. Azarkhail, M., "Agent Autonomy Approach to Physics-Based Reliability Modeling of Structures and Mechanical Systems", PhD thesis, University of Maryland, College Park, 2007.

  17. An Agent-Based Dynamic Model of Politics, Fertility and Economic Development

    Directory of Open Access Journals (Sweden)

    Zining Yang

    2016-08-01

    Full Text Available In the political economy of development, government policy choices at a single point in time can dramatically affect a country's development path by impacting fertility and economic and political decisions across generations. Combining system dynamics and agent-based modeling approaches in a complex adaptive system, a simulation framework of the Politics of Fertility and Economic Development (POFED) is formalized to understand the relationship between political, economic, and demographic change at both macro and micro levels. First, a new political capacity measurement is used, and the system dynamics model is validated with the latest data. Second, the endogenous attributes are fused with non-cooperative game theory in an agent-based framework to simulate the interactive political-economic dynamics of individual intra-societal transactions. Finally, the macro and micro levels are connected with the policy levers of political capacity and political instability by merging the system dynamics and agent-based components. This paper also explores the agent-based model's behavioral dynamics via simulation methods to identify paths towards economic development and political stability. The model demonstrates that micro-level human agency can act, react and interact, thus driving macro-level dynamics, while macro structures provide political, social and economic environments that constrain or incentivize micro-level human behavior.

  18. Spectral-spatial classification of hyperspectral data with mutual information based segmented stacked autoencoder approach

    Science.gov (United States)

    Paul, Subir; Nagesh Kumar, D.

    2018-04-01

    Hyperspectral (HS) data comprise continuous spectral responses of hundreds of narrow spectral bands with very fine spectral resolution or bandwidth, which offer feature identification and classification with high accuracy. In the present study, a Mutual Information (MI) based Segmented Stacked Autoencoder (S-SAE) approach for spectral-spatial classification of HS data is proposed to reduce the complexity and computational time compared to Stacked Autoencoder (SAE) based feature extraction. A non-parametric dependency measure (MI) based spectral segmentation is proposed instead of linear and parametric dependency measures, to take care of both linear and nonlinear inter-band dependency in the spectral segmentation of the HS bands. Morphological profiles are then created corresponding to the segmented spectral features to assimilate the spatial information into the spectral-spatial classification approach. Two non-parametric classifiers, Support Vector Machine (SVM) with a Gaussian kernel and Random Forest (RF), are used for the classification of the three most popularly used HS datasets. Results of the numerical experiments carried out in this study show that SVM with a Gaussian kernel provides better results for the Pavia University and Botswana datasets, whereas RF performs better for the Indian Pines dataset. The experiments performed with the proposed methodology provide encouraging results compared to numerous existing approaches.
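
    A hedged sketch of the MI-based spectral segmentation idea on invented data: estimate mutual information between neighbouring bands and cut segment boundaries where the dependency drops. The histogram estimator and the percentile threshold rule here are illustrative choices, not the paper's:

        import numpy as np
        from sklearn.metrics import mutual_info_score

        def band_mi(a, b, bins=32):
            # Histogram-based mutual information between two spectral bands.
            a_d = np.digitize(a, np.histogram_bin_edges(a, bins))
            b_d = np.digitize(b, np.histogram_bin_edges(b, bins))
            return mutual_info_score(a_d, b_d)

        # Synthetic HS cube flattened to (pixels, bands); cumsum makes
        # neighbouring bands correlated, loosely imitating real spectra.
        rng = np.random.default_rng(0)
        cube = rng.random((1000, 50)).cumsum(axis=1)

        mi = np.array([band_mi(cube[:, i], cube[:, i + 1])
                       for i in range(cube.shape[1] - 1)])
        # Start a new segment where inter-band dependency is weakest; each
        # segment would then feed its own autoencoder in the S-SAE scheme.
        boundaries = np.where(mi < np.percentile(mi, 10))[0] + 1
        segments = np.split(np.arange(cube.shape[1]), boundaries)
        print([len(s) for s in segments])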

  19. Use of Ecohydraulic-Based Mesohabitat Classification and Fish Species Traits for Stream Restoration Design

    Directory of Open Access Journals (Sweden)

    John S. Schwartz

    2016-11-01

    Full Text Available Stream restoration practice typically relies on a geomorphological design approach in which the integration of ecological criteria is limited and generally qualitative, although the most commonly stated project objective is to restore biological integrity by enhancing habitat and water quality. Restoration has achieved mixed results in terms of ecological success, and it is evident that improved methodologies for assessment and design are needed. A design approach is suggested for mesohabitat restoration based on a review and integration of fundamental processes associated with: (1) lotic ecological concepts; (2) applied geomorphic processes for mesohabitat self-maintenance; (3) multidimensional hydraulics and habitat suitability modeling; (4) species functional traits correlated with fish mesohabitat use; and (5) multi-stage ecohydraulics-based mesohabitat classification. The classification of mesohabitat units demonstrated in this article is based on fish preferences specifically linked to functional trait strategies (i.e., feeding, resting, evasion, spawning, and flow refugia), recognizing that habitat preferences shift by season and flow stage. A multi-stage classification scheme developed under this premise provides the basic "building blocks" for ecological design criteria for stream restoration. The scheme was developed for Midwest US prairie streams, but the conceptual framework for mesohabitat classification and functional traits analysis can be applied to other ecoregions.

  20. The ITE Land classification: Providing an environmental stratification of Great Britain.

    Science.gov (United States)

    Bunce, R G; Barr, C J; Gillespie, M K; Howard, D C

    1996-01-01

    The surface of Great Britain (GB) varies continuously in land cover from one area to another. The objective of any environmentally based land classification is to produce classes that match the patterns that are present by helping to define clear boundaries. The more appropriate the analysis and data used, the better the classes will fit the natural patterns. The observation of inter-correlations between ecological factors is the basis for interpreting ecological patterns in the field, and the Institute of Terrestrial Ecology (ITE) Land Classification formalises such subjective ideas. The data inevitably comprise a large number of factors in order to describe the environment adequately. Single factors, such as altitude, would only be useful on a national basis if they were the only dominant causative agent of ecological variation.The ITE Land Classification has defined 32 environmental categories called 'land classes', initially based on a sample of 1-km squares in Great Britain but subsequently extended to all 240 000 1-km squares. The original classification was produced using multivariate analysis of 75 environmental variables. The extension to all squares in GB was performed using a combination of logistic discrimination and discriminant functions. The classes have provided a stratification for successive ecological surveys, the results of which have characterised the classes in terms of botanical, zoological and landscape features.The classification has also been applied to integrate diverse datasets including satellite imagery, soils and socio-economic information. A variety of models have used the structure of the classification, for example to show potential land use change under different economic conditions. The principal data sets relevant for planning purposes have been incorporated into a user-friendly computer package, called the 'Countryside Information System'.

  1. Quantum Cascade Laser-Based Infrared Microscopy for Label-Free and Automated Cancer Classification in Tissue Sections.

    Science.gov (United States)

    Kuepper, Claus; Kallenbach-Thieltges, Angela; Juette, Hendrik; Tannapfel, Andrea; Großerueschkamp, Frederik; Gerwert, Klaus

    2018-05-16

    A feasibility study using a quantum cascade laser-based infrared microscope for the rapid and label-free classification of colorectal cancer tissues is presented. Infrared imaging is a reliable, robust, automated, and operator-independent tissue classification method that has been used for the differential classification of thin tissue sections, identifying tumorous regions. However, the long acquisition times of the FT-IR-based microscopes used so far have hampered the clinical translation of this technique. Here, the quantum cascade laser-based microscope provides infrared images for precise tissue classification within a few minutes. We analyzed 110 patients with UICC-Stage II and III colorectal cancer, showing 96% sensitivity and 100% specificity of this label-free method as compared to histopathology, the gold standard in routine clinical diagnostics. The main hurdle for the clinical translation of IR imaging is now overcome by the short acquisition time for high quality diagnostic images, which is in the same time range as frozen sections by pathologists.

  2. Recurrent neural networks for breast lesion classification based on DCE-MRIs

    Science.gov (United States)

    Antropova, Natasha; Huynh, Benjamin; Giger, Maryellen

    2018-02-01

    Dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) plays a significant role in breast cancer screening, cancer staging, and monitoring response to therapy. Recently, deep learning methods have been rapidly incorporated into image-based breast cancer diagnosis and prognosis. However, most current deep learning methods make clinical decisions based on 2-dimensional (2D) or 3D images and are not well suited for temporal image data. In this study, we develop a deep learning methodology that enables integration of the clinically valuable temporal components of DCE-MRIs into deep learning-based lesion classification. Our work is performed on a database of 703 DCE-MRI cases for the task of distinguishing benign and malignant lesions, and uses the area under the ROC curve (AUC) as the performance metric for that task. We train a recurrent neural network, specifically a long short-term memory network (LSTM), on sequences of image features extracted from the dynamic MRI sequences. These features are extracted with VGGNet, a convolutional neural network pre-trained on a large dataset of natural images (ImageNet). The features are obtained from various levels of the network to capture low-, mid-, and high-level information about the lesion. Compared to a classification method that takes as input only images at a single time point (yielding an AUC of 0.81 (se = 0.04)), our LSTM method improves lesion classification with an AUC of 0.85 (se = 0.03).
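
    A minimal PyTorch sketch of the core architecture described: an LSTM consuming a sequence of per-time-point feature vectors and emitting one lesion classification. Dimensions are invented, and random tensors stand in for the VGGNet features:

        import torch
        import torch.nn as nn

        class LesionLSTM(nn.Module):
            """Classify a lesion from a sequence of per-time-point CNN
            feature vectors (feature size here is illustrative)."""
            def __init__(self, feat_dim=512, hidden=128):
                super().__init__()
                self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True)
                self.head = nn.Linear(hidden, 1)
            def forward(self, x):           # x: (batch, time_points, feat_dim)
                _, (h, _) = self.lstm(x)    # h: final hidden state
                return self.head(h[-1])     # one malignancy logit per case

        model = LesionLSTM()
        dce_features = torch.randn(4, 6, 512)   # 4 cases, 6 DCE time points
        print(torch.sigmoid(model(dce_features)).shape)   # torch.Size([4, 1])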

  3. A Decision-Tree-Based Algorithm for Speech/Music Classification and Segmentation

    Directory of Open Access Journals (Sweden)

    Lavner Yizhar

    2009-01-01

    Full Text Available We present an efficient algorithm for the segmentation of audio signals into speech or music. The central motivation for our study is consumer audio applications, where various real-time enhancements are often applied. The algorithm consists of a learning phase and a classification phase. In the learning phase, predefined training data is used for computing various time-domain and frequency-domain features, for speech and music signals separately, and for estimating the optimal speech/music thresholds, based on the probability density functions of the features. An automatic procedure is employed to select the best features for separation. In the classification phase, an initial classification is performed for each segment of the audio signal, using a three-stage sieve-like approach, applying both Bayesian and rule-based methods. To avoid erroneous rapid alternations in the classification, a smoothing technique is applied, averaging the decision on each segment with past segment decisions. Extensive evaluation of the algorithm, on a database of more than 12 hours of speech and more than 22 hours of music, showed correct identification rates of 99.4% and 97.8%, respectively, and quick adjustment to alternating speech/music sections. In addition to its accuracy and robustness, the algorithm can be easily adapted to different audio types, and is suitable for real-time operation.

  4. Knowledge-based sea ice classification by polarimetric SAR

    DEFF Research Database (Denmark)

    Skriver, Henning; Dierking, Wolfgang

    2004-01-01

    Polarimetric SAR images acquired at C- and L-band over sea ice in the Greenland Sea, Baltic Sea, and Beaufort Sea have been analysed with respect to their potential for ice type classification. The polarimetric data were gathered by the Danish EMISAR and the US AIRSAR which both are airborne...... systems. A hierarchical classification scheme was chosen for sea ice because our knowledge about magnitudes, variations, and dependences of sea ice signatures can be directly considered. The optimal sequence of classification rules and the rules themselves depend on the ice conditions/regimes. The use...... of the polarimetric phase information improves the classification only in the case of thin ice types but is not necessary for thicker ice (above about 30 cm thickness)...

  5. Between Complexity and Parsimony: Can Agent-Based Modelling Resolve the Trade-off

    DEFF Research Database (Denmark)

    Nielsen, Helle Ørsted; Malawska, Anna Katarzyna

    2013-01-01

    to BR-based policy studies would be to couple research on bounded rationality with agent-based modeling. Agent-based models (ABMs) are computational models for simulating the behavior and interactions of any number of decision makers in a dynamic system. Agent-based models are better suited than...... are general equilibrium models for capturing behavior patterns of complex systems. ABMs may have the potential to represent complex systems without oversimplifying them. At the same time, research in bounded rationality and behavioral economics has already yielded many insights that could inform the modeling......While Herbert Simon espoused development of general models of behavior, he also strongly advocated that these models be based on realistic assumptions about humans and therefore reflect the complexity of human cognition and social systems (Simon 1997). Hence, the model of bounded rationality...

  6. Developing framework for agent- based diabetes disease management system: user perspective.

    Science.gov (United States)

    Mohammadzadeh, Niloofar; Safdari, Reza; Rahimi, Azin

    2014-02-01

    One of the characteristics of agents is mobility, which makes them very suitable for remote electronic health and telemedicine. The aim of this study is to develop a framework for agent-based diabetes information management at the national level by identifying the required agents. The main tool is a questionnaire designed in three sections, based on a study of library resources, the performance of major organizations in the field of diabetes inside and outside the country, and interviews with experts in the medical, health information management and software fields. Questionnaires based on the Delphi method were distributed among 20 experts. In order to design and identify the agents required in health information management for the prevention and appropriate, rapid treatment of diabetes, the results were analyzed using SPSS 17 and plotted with the FREEPLANE mind-map software. Access to data technology in the proposed framework, in order of priority, is: mobile (mean 1.80), SMS and e-mail (mean 2.80), internet and web (mean 3.30), phone (mean 3.60), and Wi-Fi (mean 4.60). In delivering health care to diabetic patients, considering social and human aspects is essential. A systematic view of the implementation of agent systems, paying attention to all aspects such as feedback, user acceptance, budget, motivation, hierarchy, useful standards, affordability for individuals, identification of barriers and opportunities and so on, is necessary.

  7. Teamcore Project Control of Agent-Based Systems (COABS) Program

    National Research Council Canada - National Science Library

    Tambe, Milind

    2002-01-01

    An increasing number of agent-based systems now operate in complex dynamic environments, such as disaster rescue missions, monitoring/surveillance tasks, enterprise integration, and education/training environments...

  8. Application of Bayesian Classification to Content-Based Data Management

    Science.gov (United States)

    Lynnes, Christopher; Berrick, S.; Gopalan, A.; Hua, X.; Shen, S.; Smith, P.; Yang, K-Y.; Wheeler, K.; Curry, C.

    2004-01-01

    The high volume of Earth Observing System data has proven to be challenging to manage for data centers and users alike. At the Goddard Earth Sciences Distributed Active Archive Center (GES DAAC), about 1 TB of new data are archived each day. Distribution to users is also about 1 TB/day. A substantial portion of this distribution is MODIS calibrated radiance data, which has a wide variety of uses. However, much of the data is not useful for a particular user's needs: for example, ocean color users typically need oceanic pixels that are free of cloud and sun-glint. The GES DAAC is using a simple Bayesian classification scheme to rapidly classify each pixel in the scene in order to support several experimental content-based data services for near-real-time MODIS calibrated radiance products (from Direct Readout stations). Content-based subsetting would allow distribution of, say, only clear pixels to the user if desired. Content-based subscriptions would distribute data to users only when they fit the user's usability criteria in their area of interest within the scene. Content-based cache management would retain more useful data on disk for easy online access. The classification may even be exploited in an automated quality assessment of the geolocation product. Though initially to be demonstrated at the GES DAAC, these techniques have applicability in other resource-limited environments, such as spaceborne data systems.
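
    A toy scikit-learn sketch of the underlying idea: a simple (here Gaussian) naive Bayes classifier labelling each pixel so that content-based subsetting can mask, say, clear-ocean pixels. Band values and classes are invented stand-ins, not the GES DAAC scheme:

        import numpy as np
        from sklearn.naive_bayes import GaussianNB

        # Hypothetical training pixels: two band reflectances per labelled
        # class (0 = clear ocean, 1 = cloud, 2 = sun-glint); values made up.
        X_train = np.array([[0.02, 0.01], [0.03, 0.02],   # ocean: dark bands
                            [0.60, 0.55], [0.70, 0.65],   # cloud: bright both
                            [0.40, 0.05], [0.45, 0.08]])  # glint: band 1 only
        y_train = np.array([0, 0, 1, 1, 2, 2])

        clf = GaussianNB().fit(X_train, y_train)

        scene = np.random.default_rng(0).random((5, 5, 2))  # fake 5x5 scene
        labels = clf.predict(scene.reshape(-1, 2)).reshape(5, 5)
        ocean_only = labels == 0    # mask for a content-based ocean subset
        print(labels)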

  9. Faults Classification Of Power Electronic Circuits Based On A Support Vector Data Description Method

    Directory of Open Access Journals (Sweden)

    Cui Jiang

    2015-06-01

    Full Text Available Power electronic circuits (PECs) are prone to various failures, whose classification is of paramount importance. This paper presents a data-driven fault diagnosis technique, which employs a support vector data description (SVDD) method to perform fault classification of PECs. In the presented method, fault signals (e.g. currents, voltages, etc.) are collected from accessible nodes of circuits, and then signal processing techniques (e.g. Fourier analysis, wavelet transform, etc.) are adopted to extract feature samples, which are subsequently used to perform offline machine learning. Finally, the SVDD classifier is used to implement the fault classification task. However, in some cases, the conventional SVDD cannot achieve good classification performance, because this classifier may generate some so-called refusal areas (RAs), and in our design these RAs are resolved with the one-against-one support vector machine (SVM) classifier. The experimental results obtained from simulated and actual circuits demonstrate that the improved SVDD has a classification performance close to the conventional one-against-one SVM, and can be applied to the fault classification of PECs in practice.
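
    scikit-learn has no SVDD implementation, but a one-class SVM with an RBF kernel learns a closely related enclosing boundary; a hedged sketch of one per-fault-class data description on synthetic features:

        import numpy as np
        from sklearn.svm import OneClassSVM

        # Synthetic feature samples for one known fault class (imagine
        # wavelet features extracted from node voltages).
        rng = np.random.default_rng(0)
        fault_A = rng.normal(loc=0.0, scale=1.0, size=(200, 4))

        boundary_A = OneClassSVM(kernel="rbf", nu=0.05, gamma="scale").fit(fault_A)

        test = np.vstack([rng.normal(0, 1, (5, 4)),    # more fault-A samples
                          rng.normal(6, 1, (5, 4))])   # some other fault
        print(boundary_A.predict(test))   # +1 = inside description, -1 = outside

    In the record's design, one such description would be trained per fault class, with samples falling into refusal areas arbitrated by a pairwise one-against-one SVM.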

  10. Quality-Oriented Classification of Aircraft Material Based on SVM

    Directory of Open Access Journals (Sweden)

    Hongxia Cai

    2014-01-01

    Full Text Available Existing material classifications were proposed to improve inventory management. However, different materials have different quality-related attributes, especially in the aircraft industry. In order to reduce cost without sacrificing quality, we propose a quality-oriented material classification system considering material quality character, quality cost, and quality influence. The Analytic Hierarchy Process helps to make the feature selection and classification decisions. We use an improved Kraljic Portfolio Matrix to establish a three-dimensional classification model. The aircraft materials can be divided into eight types, including general, key, risk, and leveraged types. Aiming to improve the classification accuracy for various materials, the Support Vector Machine algorithm is introduced. Finally, we compare SVM and a BP neural network in the application. The results prove that the SVM algorithm is more efficient and accurate, and that the quality-oriented material classification is valuable.

  11. Implementation and design of a communication system of an agent-based automated substation

    Institute of Scientific and Technical Information of China (English)

    LIN Yong-jun; LIU Yu-tao; ZHANG Dan-hui

    2006-01-01

    A substation system requires that communication be transmitted reliably, accurately and in real time. Aimed at solving problems such as flow conflicts and sensitive data transmission, a model of the communication system of an agent-based automated substation is introduced. The running principle is discussed in detail and each type of agent is discussed further. Finally, the realization of the agent system applied to the substation is presented. The outcome shows that the communication system of an agent-based automated substation improves the accuracy and reliability of data transfer and presents it in real time.

  12. Synthesis and characterization of novel curing agents for surface coatings based on acrylamide copolymers

    International Nuclear Information System (INIS)

    Patel, N. V.; Parmar, R. J.; Parmar, J. S.

    2003-01-01

    Acrylamide based curing agents (ACAs) were prepared from methyl methacrylate-acrylamide copolymers by methylolation and subsequent etherification with butanol, and were characterized for their various physico-chemical characteristics. Various sets of these ACAs were blended with a hydroxyl-functional acrylic resin to prepare stoving compositions, which were compared with compositions containing a conventional melamine-formaldehyde curing agent. The films were also characterized by thermogravimetric analysis and IR spectra. The results reveal that the properties of certain compositions based on ACAs were remarkably better than those of coatings based on the conventional melamine-formaldehyde curing agent.

  13. Creating a three level building classification using topographic and address-based data for Manchester

    Science.gov (United States)

    Hussain, M.; Chen, D.

    2014-11-01

    Buildings, the basic unit of an urban landscape, host most of its socio-economic activities and play an important role in the creation of urban land-use patterns. The spatial arrangement of different building types creates varied urban land-use clusters, which can provide an insight into the relationships between social, economic, and living spaces. The classification of such urban clusters can help in policy-making and resource management. In many countries, including the UK, no national-level cadastral database containing information on individual building types exists in the public domain. In this paper, we present a framework for inferring the functional types of buildings based on the analysis of their form (e.g. geometrical properties such as area and perimeter, and layout) and spatial relationships, derived from a large topographic and address-based GIS database. Machine learning algorithms along with exploratory spatial analysis techniques are used to create the classification rules. The classification is extended to two further levels based on the functions (use) of buildings derived from address-based data. The developed methodology was applied to the Manchester metropolitan area using the Ordnance Survey's MasterMap®, a large-scale topographic and address-based dataset available for the UK.

  14. An Agent Based Modelling Approach for Multi-Stakeholder Analysis of City Logistics Solutions

    NARCIS (Netherlands)

    Anand, N.

    2015-01-01

    This thesis presents a comprehensive framework for multi-stakeholder analysis of city logistics solutions using agent based modeling. The framework describes different stages for the systematic development of an agent based model for the city logistics domain. The framework includes a

  15. An agent-based method for simulating porous fluid-saturated structures with indistinguishable components

    Science.gov (United States)

    Kashani, Jamal; Pettet, Graeme John; Gu, YuanTong; Zhang, Lihai; Oloyede, Adekunle

    2017-10-01

    Single-phase porous materials contain multiple components that intermingle up to the ultramicroscopic level. Although the structures of porous materials have been simulated with agent-based methods, the available methods continue to produce patterns of distinguishable solid and fluid agents, which do not represent materials with indistinguishable phases. This paper introduces a new agent (the hybrid agent) and a new category of rules (intra-agent rules) that can be used to create emergent structures that more accurately represent single-phase structures and materials. The novel hybrid agent carries the characteristics of the system's elements and is capable of changing within itself, while also responding to its neighbours as they change. As an example, the hybrid agent, under a one-dimensional cellular automata formalism in a two-dimensional domain, is used to generate patterns that demonstrate striking morphological and characteristic similarities with porous saturated single-phase structures, where each agent of the "structure" carries a semi-permeability property and consists of both fluid and solid in space and at all times. We conclude that the ability of the hybrid agent to change locally provides an enhanced protocol for simulating complex porous structures such as biological tissues, which could facilitate models for agent-based techniques and numerical methods.

  16. A Feature Selection Method for Large-Scale Network Traffic Classification Based on Spark

    Directory of Open Access Journals (Sweden)

    Yong Wang

    2016-02-01

    Full Text Available Currently, with the rapid increase of data scales in network traffic classification, how to select traffic features efficiently is becoming a big challenge. Although a number of traditional feature selection methods using the Hadoop-MapReduce framework have been proposed, their execution time remains unsatisfactory because of the numerous iterative computations involved in the processing. To address this issue, an efficient feature selection method for network traffic based on a new parallel computing framework called Spark is proposed in this paper. In our approach, the complete feature set is first preprocessed based on the Fisher score, and a sequential forward search strategy is employed to generate candidate subsets. The optimal feature subset is then selected using the continuous iterations of the Spark computing framework. The implementation demonstrates that, on the precondition of keeping the classification accuracy, our method reduces the time cost of modeling and classification, and improves the execution efficiency of feature selection significantly.
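
    A minimal NumPy sketch of the Fisher-score preprocessing step named above (between-class scatter over within-class scatter, computed per feature); the data are synthetic and the Spark parallelization is omitted:

        import numpy as np

        def fisher_scores(X, y):
            """Per-feature Fisher score: sum_k n_k*(mu_k - mu)^2 over
            sum_k n_k*var_k, a common ranking criterion for features."""
            mu = X.mean(axis=0)
            num = np.zeros(X.shape[1])
            den = np.zeros(X.shape[1])
            for c in np.unique(y):
                Xc = X[y == c]
                num += len(Xc) * (Xc.mean(axis=0) - mu) ** 2
                den += len(Xc) * Xc.var(axis=0)
            return num / den

        rng = np.random.default_rng(0)
        X = rng.random((300, 6))
        y = (X[:, 2] > 0.5).astype(int)       # only feature 2 carries signal
        print(fisher_scores(X, y).round(3))   # feature 2 should dominate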

  17. Wavelet-based multicomponent denoising on GPU to improve the classification of hyperspectral images

    Science.gov (United States)

    Quesada-Barriuso, Pablo; Heras, Dora B.; Argüello, Francisco; Mouriño, J. C.

    2017-10-01

    Supervised classification allows a wide range of remote sensing hyperspectral applications to be handled. Enhancing the spatial organization of the pixels in the image has proven to be beneficial for the interpretation of the image content, thus increasing the classification accuracy. Denoising in the spatial domain of the image has been shown to be a technique that enhances the structures in the image. This paper proposes a multi-component denoising approach in order to increase the classification accuracy when a classification method is applied. It is computed on multicore CPUs and NVIDIA GPUs. The method combines feature extraction based on a 1D discrete wavelet transform (DWT) applied in the spectral dimension, followed by an Extended Morphological Profile (EMP) and a classifier (SVM or ELM). The multi-component noise reduction is applied to the EMP just before the classification. The denoising recursively applies a separable 2D DWT, after which the number of wavelet coefficients is reduced by using a threshold. Finally, inverse 2D DWT filters are applied to reconstruct the noise-free original component. The computational cost of the classifiers, as well as the cost of the whole classification chain, is high, but it is reduced, achieving real-time behavior for some applications, through computation on NVIDIA multi-GPU platforms.
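
    A single-band, CPU-only sketch of the denoising step using PyWavelets (decompose, soft-threshold the detail coefficients, reconstruct); the wavelet, level and threshold are placeholders, and the GPU implementation is not shown:

        import numpy as np
        import pywt

        def dwt_denoise(band, wavelet="db4", level=2, thr=0.1):
            """Separable 2D DWT, shrink detail coefficients, inverse
            transform -- the per-component step described above."""
            coeffs = pywt.wavedec2(band, wavelet, level=level)
            cA, details = coeffs[0], coeffs[1:]
            shrunk = [tuple(pywt.threshold(d, thr, mode="soft") for d in lvl)
                      for lvl in details]
            return pywt.waverec2([cA] + shrunk, wavelet)

        noisy = np.random.default_rng(0).random((64, 64))
        print(dwt_denoise(noisy).shape)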

  18. A review of classification algorithms for EEG-based brain-computer interfaces: a 10 year update.

    Science.gov (United States)

    Lotte, F; Bougrain, L; Cichocki, A; Clerc, M; Congedo, M; Rakotomamonjy, A; Yger, F

    2018-06-01

    Most current electroencephalography (EEG)-based brain-computer interfaces (BCIs) are based on machine learning algorithms. There is a large diversity of classifier types that are used in this field, as described in our 2007 review paper. Now, approximately ten years after this review publication, many new algorithms have been developed and tested to classify EEG signals in BCIs. The time is therefore ripe for an updated review of EEG classification algorithms for BCIs. We surveyed the BCI and machine learning literature from 2007 to 2017 to identify the new classification approaches that have been investigated to design BCIs. We synthesize these studies in order to present such algorithms, to report how they were used for BCIs, what were the outcomes, and to identify their pros and cons. We found that the recently designed classification algorithms for EEG-based BCIs can be divided into four main categories: adaptive classifiers, matrix and tensor classifiers, transfer learning and deep learning, plus a few other miscellaneous classifiers. Among these, adaptive classifiers were demonstrated to be generally superior to static ones, even with unsupervised adaptation. Transfer learning can also prove useful although the benefits of transfer learning remain unpredictable. Riemannian geometry-based methods have reached state-of-the-art performances on multiple BCI problems and deserve to be explored more thoroughly, along with tensor-based methods. Shrinkage linear discriminant analysis and random forests also appear particularly useful for small training samples settings. On the other hand, deep learning methods have not yet shown convincing improvement over state-of-the-art BCI methods. This paper provides a comprehensive overview of the modern classification algorithms used in EEG-based BCIs, presents the principles of these methods and guidelines on when and how to use them. It also identifies a number of challenges to further advance EEG classification in BCI.

  19. A review of classification algorithms for EEG-based brain–computer interfaces: a 10 year update

    Science.gov (United States)

    Lotte, F.; Bougrain, L.; Cichocki, A.; Clerc, M.; Congedo, M.; Rakotomamonjy, A.; Yger, F.

    2018-06-01

    Objective. Most current electroencephalography (EEG)-based brain–computer interfaces (BCIs) are based on machine learning algorithms. There is a large diversity of classifier types that are used in this field, as described in our 2007 review paper. Now, approximately ten years after this review publication, many new algorithms have been developed and tested to classify EEG signals in BCIs. The time is therefore ripe for an updated review of EEG classification algorithms for BCIs. Approach. We surveyed the BCI and machine learning literature from 2007 to 2017 to identify the new classification approaches that have been investigated to design BCIs. We synthesize these studies in order to present such algorithms, to report how they were used for BCIs, what were the outcomes, and to identify their pros and cons. Main results. We found that the recently designed classification algorithms for EEG-based BCIs can be divided into four main categories: adaptive classifiers, matrix and tensor classifiers, transfer learning and deep learning, plus a few other miscellaneous classifiers. Among these, adaptive classifiers were demonstrated to be generally superior to static ones, even with unsupervised adaptation. Transfer learning can also prove useful although the benefits of transfer learning remain unpredictable. Riemannian geometry-based methods have reached state-of-the-art performances on multiple BCI problems and deserve to be explored more thoroughly, along with tensor-based methods. Shrinkage linear discriminant analysis and random forests also appear particularly useful for small training samples settings. On the other hand, deep learning methods have not yet shown convincing improvement over state-of-the-art BCI methods. Significance. This paper provides a comprehensive overview of the modern classification algorithms used in EEG-based BCIs, presents the principles of these methods and guidelines on when and how to use them. It also identifies a number of challenges

  20. Agent-based modelling of consumer energy choices

    Science.gov (United States)

    Rai, Varun; Henry, Adam Douglas

    2016-06-01

    Strategies to mitigate global climate change should be grounded in a rigorous understanding of energy systems, particularly the factors that drive energy demand. Agent-based modelling (ABM) is a powerful tool for representing the complexities of energy demand, such as social interactions and spatial constraints. Unlike other approaches for modelling energy demand, ABM is not limited to studying perfectly rational agents or to abstracting micro details into system-level equations. Instead, ABM provides the ability to represent behaviours of energy consumers -- such as individual households -- using a range of theories, and to examine how the interaction of heterogeneous agents at the micro-level produces macro outcomes of importance to the global climate, such as the adoption of low-carbon behaviours and technologies over space and time. We provide an overview of ABM work in the area of consumer energy choices, with a focus on identifying specific ways in which ABM can improve understanding of both fundamental scientific and applied aspects of the demand side of energy to aid the design of better policies and programmes. Future research needs for improving the practice of ABM to better understand energy demand are also discussed.
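
    A toy threshold-adoption model illustrating the ABM idea described: micro-level rules and social interactions producing a macro-level adoption curve. Every parameter and rule here is invented for illustration:

        import random

        def adoption_abm(n=500, neighbours=8, threshold=0.25,
                         seed_adopters=10, steps=30):
            """Each household adopts a low-carbon technology once enough of
            its (randomly assigned) social contacts have adopted."""
            random.seed(1)
            adopted = [False] * n
            for i in random.sample(range(n), seed_adopters):
                adopted[i] = True
            contacts = [random.sample(range(n), neighbours) for _ in range(n)]
            history = []
            for _ in range(steps):
                for i in range(n):
                    if not adopted[i]:
                        peer_share = sum(adopted[j] for j in contacts[i]) / neighbours
                        if peer_share >= threshold:
                            adopted[i] = True
                history.append(sum(adopted))
            return history   # macro adoption curve emerging from micro rules

        print(adoption_abm()[-5:])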

  1. Tweet-based Target Market Classification Using Ensemble Method

    Directory of Open Access Journals (Sweden)

    Muhammad Adi Khairul Anshary

    2016-09-01

    Full Text Available Target market classification is aimed at focusing marketing activities on the right targets. Classification of target markets can be done through data mining and by utilizing data from social media, e.g. Twitter. The end results of data mining are learning models that can classify new data. Ensemble methods can improve the accuracy of such models and therefore provide better results. In this study, classification of target markets was conducted on a dataset of 3000 tweets from which features were extracted. Classification models were constructed by manipulating the training data using two ensemble methods (bagging and boosting). To investigate the effectiveness of the ensemble methods, this study used the CART (classification and regression tree) algorithm for comparison. Three categories of consumer goods (computers, mobile phones and cameras) and three categories of sentiments (positive, negative and neutral) were classified towards three target-market categories. Machine learning was performed using Weka 3.6.9. The results on the test data showed that the bagging method improved the accuracy of CART by 1.9% (to 85.20%). On the other hand, for sentiment classification, the ensemble methods were not successful in increasing the accuracy of CART. The results of this study may be taken into consideration by companies who approach their customers through social media, especially Twitter.
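
    A hedged scikit-learn sketch of the bagging-versus-CART comparison on synthetic stand-ins for the tweet features (BaggingClassifier's default base estimator is a decision tree, so this is bagged CART; the study itself used Weka, and its reported accuracies will not be reproduced):

        from sklearn.datasets import make_classification
        from sklearn.ensemble import BaggingClassifier
        from sklearn.model_selection import train_test_split
        from sklearn.tree import DecisionTreeClassifier

        # Stand-in for extracted tweet features across three target markets.
        X, y = make_classification(n_samples=3000, n_features=100,
                                   n_informative=15, n_classes=3,
                                   random_state=0)
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

        cart = DecisionTreeClassifier(random_state=0).fit(X_tr, y_tr)
        # Bagged CART; boosting (e.g. AdaBoostClassifier) slots in the same way.
        bagged = BaggingClassifier(n_estimators=50, random_state=0).fit(X_tr, y_tr)

        print("CART:  ", round(cart.score(X_te, y_te), 3))
        print("bagged:", round(bagged.score(X_te, y_te), 3))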

  2. A new classification scheme of plastic wastes based upon recycling labels

    Energy Technology Data Exchange (ETDEWEB)

    Özkan, Kemal, E-mail: kozkan@ogu.edu.tr [Computer Engineering Dept., Eskişehir Osmangazi University, 26480 Eskişehir (Turkey); Ergin, Semih, E-mail: sergin@ogu.edu.tr [Electrical Electronics Engineering Dept., Eskişehir Osmangazi University, 26480 Eskişehir (Turkey); Işık, Şahin, E-mail: sahini@ogu.edu.tr [Computer Engineering Dept., Eskişehir Osmangazi University, 26480 Eskişehir (Turkey); Işıklı, İdil, E-mail: idil.isikli@bilecik.edu.tr [Electrical Electronics Engineering Dept., Bilecik University, 11210 Bilecik (Turkey)

    2015-01-15

    Highlights: • PET, HDPE and PP types of plastics are considered. • An automated classification of plastic bottles based on feature extraction and classification methods is performed. • The decision mechanism consists of PCA, Kernel PCA, FLDA, SVD and Laplacian Eigenmaps methods. • SVM is selected to achieve the classification task and a majority voting technique is used. - Abstract: Since the recycling of materials is widely assumed to be environmentally and economically beneficial, reliable sorting and processing of waste packaging materials such as plastics is very important for recycling with high efficiency. An automated system that can quickly categorize these materials is certainly needed for obtaining maximum classification while maintaining high throughput. In this paper, first of all, photographs of the plastic bottles were taken and several preprocessing steps were carried out. The first preprocessing step is to extract the plastic area of a bottle from the background. Then, morphological image operations are implemented. These operations are edge detection, noise removal, hole removal, image enhancement, and image segmentation. These morphological operations can be generally defined in terms of combinations of erosion and dilation. The effects of bottle color as well as label are eliminated using these operations. Secondly, the pixel-wise intensity values of the plastic bottle images have been used together with the most popular subspace and statistical feature extraction methods to construct the feature vectors in this study. Only three types of plastics are considered due to their higher prevalence than other plastic types in the world. The decision mechanism consists of five different feature extraction methods, including Principal Component Analysis (PCA), Kernel PCA (KPCA), Fisher's Linear Discriminant Analysis (FLDA), Singular Value Decomposition (SVD) and Laplacian Eigenmaps (LEMAP), and uses a simple

  3. A new classification scheme of plastic wastes based upon recycling labels

    International Nuclear Information System (INIS)

    Özkan, Kemal; Ergin, Semih; Işık, Şahin; Işıklı, İdil

    2015-01-01

    Highlights: • PET, HDPE and PP types of plastics are considered. • An automated classification of plastic bottles based on feature extraction and classification methods is performed. • The decision mechanism consists of PCA, Kernel PCA, FLDA, SVD and Laplacian Eigenmaps methods. • SVM is selected to achieve the classification task and a majority voting technique is used. - Abstract: Since the recycling of materials is widely assumed to be environmentally and economically beneficial, reliable sorting and processing of waste packaging materials such as plastics is very important for recycling with high efficiency. An automated system that can quickly categorize these materials is certainly needed for obtaining maximum classification while maintaining high throughput. In this paper, first of all, photographs of the plastic bottles were taken and several preprocessing steps were carried out. The first preprocessing step is to extract the plastic area of a bottle from the background. Then, morphological image operations are implemented. These operations are edge detection, noise removal, hole removal, image enhancement, and image segmentation. These morphological operations can be generally defined in terms of combinations of erosion and dilation. The effects of bottle color as well as label are eliminated using these operations. Secondly, the pixel-wise intensity values of the plastic bottle images have been used together with the most popular subspace and statistical feature extraction methods to construct the feature vectors in this study. Only three types of plastics are considered due to their higher prevalence than other plastic types in the world. The decision mechanism consists of five different feature extraction methods, including Principal Component Analysis (PCA), Kernel PCA (KPCA), Fisher's Linear Discriminant Analysis (FLDA), Singular Value Decomposition (SVD) and Laplacian Eigenmaps (LEMAP), and uses a simple

  4. Age group classification and gender detection based on forced expiratory spirometry.

    Science.gov (United States)

    Cosgun, Sema; Ozbek, I Yucel

    2015-08-01

    This paper investigates the utility of the forced expiratory spirometry (FES) test with efficient machine learning algorithms for the purpose of gender detection and age group classification. The proposed method has three main stages: feature extraction, training of the models, and detection. In the first stage, features are extracted from the volume-time curve and the expiratory flow-volume loop obtained from the FES test. In the second stage, probabilistic models for each gender and age group are constructed by training Gaussian mixture models (GMMs) and a support vector machine (SVM) algorithm. In the final stage, the gender (or age group) of a test subject is estimated using the trained GMM (or SVM) model. Experiments were evaluated on a large database of 4571 subjects. The experimental results show that the average correct classification rates of both the GMM and SVM methods based on the FES test are more than 99.3% and 96.8% for gender and age group classification, respectively.
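
    The GMM stage described above amounts to a per-class likelihood model: fit one Gaussian mixture per gender and assign a test subject to the class with the higher log-likelihood plus log prior. A minimal sketch, with synthetic features standing in for the spirometry measurements:

```python
# Hedged sketch: gender detection by per-class Gaussian mixture models,
# in the spirit of the record above; the data are synthetic placeholders.
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.mixture import GaussianMixture
from sklearn.model_selection import train_test_split

# Two overlapping clusters stand in for male/female FES feature vectors.
X, y = make_blobs(n_samples=600, centers=2, cluster_std=2.5, random_state=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=1)

models, log_priors = [], []
for cls in (0, 1):
    gmm = GaussianMixture(n_components=3, covariance_type="full",
                          random_state=1).fit(X_tr[y_tr == cls])
    models.append(gmm)
    log_priors.append(np.log((y_tr == cls).mean()))

# Bayes decision: argmax over class log-likelihood + log prior.
scores = np.column_stack([m.score_samples(X_te) + p
                          for m, p in zip(models, log_priors)])
pred = scores.argmax(axis=1)
print(f"GMM gender-detection accuracy: {(pred == y_te).mean():.3f}")
```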

  5. Agent-based Simulation of Reactive, Pro-active, and Social Animal Behaviour

    NARCIS (Netherlands)

    Jonker, C.M.; Treur, J.; Mira, J.

    1998-01-01

    In this paper it is shown how animal behaviour can be simulated in an agent-based manner. Different models are shown for different types of behaviour, varying from purely reactive behaviour to pro-active and social behaviour. The compositional development method for multi-agent systems DESIRE and

  6. FEATURE EXTRACTION BASED WAVELET TRANSFORM IN BREAST CANCER DIAGNOSIS USING FUZZY AND NON-FUZZY CLASSIFICATION

    Directory of Open Access Journals (Sweden)

    Pelin GORGEL

    2013-01-01

    Full Text Available This study helps to provide a second opinion for expert radiologists in the classification of manually extracted breast masses taken from 60 digital mammograms. These mammograms were acquired from Istanbul University Faculty of Medicine Hospital and contain 78 masses. The diagnosis is implemented with pre-processing using feature extraction based on the Fast Wavelet Transform (FWT). Afterwards, Adaptive Neuro-Fuzzy Inference System (ANFIS) based fuzzy subtractive clustering and Support Vector Machine (SVM) methods are used for the classification. It is a comparative study of these methods. According to the results of the study, ANFIS based subtractive clustering produces ??% while SVM produces ??% accuracy in malignant-benign classification. The results demonstrate that the developed system could help radiologists reach a correct diagnosis and decrease the number of missed cancerous regions or unnecessary biopsies.

  7. FACET CLASSIFICATIONS OF E-LEARNING TOOLS

    Directory of Open Access Journals (Sweden)

    Olena Yu. Balalaieva

    2013-12-01

    Full Text Available The article deals with the classification of e-learning tools based on the facet method, which separates a parallel set of objects into independent classification groups; it does not assume a rigid, pre-built classification structure with finite groups, since classification groups are formed by combining values taken from the relevant facets. An attempt to systematize the existing classifications of e-learning tools from the standpoint of classification theory is made for the first time. Modern Ukrainian and foreign facet classifications of e-learning tools are described; their positive and negative features compared to classifications based on a hierarchical method are analyzed. The author's original facet classification of e-learning tools is proposed.

  8. Comparison of real-time classification systems for arrhythmia detection on Android-based mobile devices.

    Science.gov (United States)

    Leutheuser, Heike; Gradl, Stefan; Kugler, Patrick; Anneken, Lars; Arnold, Martin; Achenbach, Stephan; Eskofier, Bjoern M

    2014-01-01

    The electrocardiogram (ECG) is a key diagnostic tool in heart disease and may serve to detect ischemia, arrhythmias, and other conditions. Automatic, low-cost monitoring of the ECG signal could provide instantaneous analysis in case of symptoms and may trigger presentation to the emergency department. Since mobile devices (smartphones, tablets) are an integral part of daily life, they could form an ideal basis for an automatic and low-cost ECG monitoring solution. In this work, we aim for a real-time classification system for arrhythmia detection that is able to run on Android-based mobile devices. Our analysis is based on 70% of the MIT-BIH Arrhythmia and on 70% of the MIT-BIH Supraventricular Arrhythmia databases. The remaining 30% are reserved for the final evaluation. We detected the R-peaks with a QRS detection algorithm and, based on the detected R-peaks, calculated 16 features (statistical, heartbeat, and template-based). With these features and four different feature subsets we trained 8 classifiers using the Embedded Classification Software Toolbox (ECST) and compared the computational cost of each classification decision and the memory demand of each classifier. We conclude that the C4.5 classifier is best for our two-class classification problem (distinguishing normal from abnormal heartbeats) with an accuracy of 91.6%. This classifier still needs a detailed feature selection evaluation. Our next steps are implementing the C4.5 classifier for Android-based mobile devices and evaluating the final system using the remaining 30% of the two databases.
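
    scikit-learn does not ship C4.5 itself, but a decision tree with the entropy criterion mirrors its information-gain splitting and serves as a reasonable stand-in for the two-class heartbeat problem. In this sketch the 16 statistical/heartbeat/template features are replaced by synthetic placeholders:

```python
# Hedged sketch: a C4.5-style (entropy-based) decision tree for two-class
# heartbeat classification; synthetic features stand in for the 16 used above.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=2000, n_features=16, n_informative=8,
                           n_classes=2, weights=[0.8, 0.2], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# criterion="entropy" mirrors C4.5's information-gain splitting rule;
# max_depth bounds model size, which matters on mobile devices.
tree = DecisionTreeClassifier(criterion="entropy", max_depth=6, random_state=0)
tree.fit(X_tr, y_tr)
print(f"accuracy: {tree.score(X_te, y_te):.3f}")
```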

  9. Histogram Curve Matching Approaches for Object-based Image Classification of Land Cover and Land Use

    Science.gov (United States)

    Toure, Sory I.; Stow, Douglas A.; Weeks, John R.; Kumar, Sunil

    2013-01-01

    The classification of image-objects is usually done using parametric statistical measures of central tendency and/or dispersion (e.g., mean or standard deviation). The objectives of this study were to analyze digital number histograms of image objects and evaluate classification measures exploiting characteristic signatures of such histograms. Two histogram-matching classifiers were evaluated and compared to the standard nearest-neighbor-to-mean classifier. An ADS40 airborne multispectral image of San Diego, California was used for assessing the utility of curve matching classifiers in a geographic object-based image analysis (GEOBIA) approach. The classifications were performed with data sets having 0.5 m, 2.5 m, and 5 m spatial resolutions. Results show that histograms are reliable features for characterizing classes. Also, both histogram-matching classifiers consistently performed better than the one based on the standard nearest-neighbor-to-mean rule. The highest classification accuracies were produced with images having 2.5 m spatial resolution. PMID:24403648
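
    The core idea, characterizing each class by a reference histogram and labeling an image object by its closest histogram match, fits in a few lines. In this sketch the L1 distance between histogram curves is one plausible matching measure, not necessarily the one used in the study, and the pixel populations are synthetic:

```python
# Hedged sketch: histogram curve matching classification of image objects;
# objects are synthetic pixel populations standing in for real segments.
import numpy as np

rng = np.random.default_rng(0)
BINS = np.linspace(0, 255, 33)  # 32-bin digital-number histograms

def histogram(obj_pixels):
    h, _ = np.histogram(obj_pixels, bins=BINS, density=True)
    return h

# Two synthetic land-cover classes with different DN distributions.
train = {
    "vegetation": [rng.normal(80, 15, 500) for _ in range(20)],
    "built-up": [rng.normal(160, 40, 500) for _ in range(20)],
}
# Class signature = mean histogram over training objects.
signatures = {c: np.mean([histogram(o) for o in objs], axis=0)
              for c, objs in train.items()}

def classify(obj_pixels):
    h = histogram(obj_pixels)
    # L1 distance between histogram curves; smallest distance wins.
    return min(signatures, key=lambda c: np.abs(h - signatures[c]).sum())

test_obj = rng.normal(150, 35, 500)  # should match "built-up"
print(classify(test_obj))
```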

  10. SUPPORT VECTOR MACHINE CLASSIFICATION OF OBJECT-BASED DATA FOR CROP MAPPING, USING MULTI-TEMPORAL LANDSAT IMAGERY

    Directory of Open Access Journals (Sweden)

    R. Devadas

    2012-07-01

    Full Text Available Crop mapping and time series analysis of agronomic cycles are critical for monitoring land use and land management practices, and for analysing the issues of agro-environmental impacts and climate change. Multi-temporal Landsat data can be used to analyse decadal changes in cropping patterns at field level, owing to its medium spatial resolution and historical availability. This study attempts to develop robust remote sensing techniques, applicable across a large geographic extent, for state-wide mapping of cropping history in Queensland, Australia. In this context, traditional pixel-based classification was analysed in comparison with image object-based classification using advanced supervised machine-learning algorithms such as the Support Vector Machine (SVM). For the Darling Downs region of southern Queensland we gathered a set of Landsat TM images from the 2010–2011 cropping season. Landsat data, along with the vegetation index images, were subjected to multiresolution segmentation to obtain polygon objects. Object-based methods enabled the analysis of aggregated sets of pixels, and exploited shape-related and textural variation, as well as spectral characteristics. SVM models were chosen after examining three shape-based parameters, twenty-three textural parameters and ten spectral parameters of the objects. We found that the object-based methods were superior to the pixel-based methods for classifying 4 major land use/land cover classes, considering the complexities of within-field spectral heterogeneity and spectral mixing. Comparative analysis clearly revealed that higher overall classification accuracy (95%) was observed in the object-based SVM compared with that of traditional pixel-based classification (89%) using the maximum likelihood classifier (MLC). Object-based classification also resulted in speckle-free images. Further, object-based SVM models were used to classify different broadacre crop types for summer and winter seasons. The influence of

  11. Bearing Fault Classification Based on Conditional Random Field

    Directory of Open Access Journals (Sweden)

    Guofeng Wang

    2013-01-01

    Full Text Available Condition monitoring of rolling element bearings is paramount for predicting the lifetime and performing effective maintenance of mechanical equipment. To overcome the drawbacks of the hidden Markov model (HMM) and improve diagnosis accuracy, a conditional random field (CRF) model based classifier is proposed. In this model, the feature vector sequences and the fault categories are linked by an undirected graphical model in which their relationship is represented by a global conditional probability distribution. In comparison with the HMM, the main advantage of the CRF model is that it can depict the temporal dynamic information between the observation sequences and state sequences without assuming the independence of the input feature vectors. Therefore, the interrelationship between adjacent observation vectors can also be depicted and integrated into the model, which makes the classifier more robust and accurate than the HMM. To evaluate the effectiveness of the proposed method, four kinds of bearing vibration signals, corresponding to normal, inner race pit, outer race pit and roller pit conditions respectively, are collected from the test rig. The CRF and HMM models are then built to perform fault classification by taking the sub-band energy features of wavelet packet decomposition (WPD) as the observation sequences. Moreover, the K-fold cross validation method is adopted to improve the evaluation accuracy of the classifier. The analysis and comparison under different fold times show that the classification accuracy of the CRF model is higher than that of the HMM. This method sheds new light on the accurate classification of bearing faults.
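
    The observation sequences fed to the CRF and HMM above are sub-band energies from a wavelet packet decomposition. A sketch of that feature-extraction step with PyWavelets follows; the db4 wavelet, decomposition level, and test signal are illustrative assumptions:

```python
# Hedged sketch: wavelet packet sub-band energy features for a vibration
# signal, the kind of observations used by the CRF/HMM classifiers above.
import numpy as np
import pywt

def wpd_energy_features(signal, wavelet="db4", level=3):
    wp = pywt.WaveletPacket(data=signal, wavelet=wavelet,
                            mode="symmetric", maxlevel=level)
    # One terminal node per frequency sub-band at the chosen level.
    nodes = wp.get_level(level, order="freq")
    energies = np.array([np.sum(n.data ** 2) for n in nodes])
    return energies / energies.sum()  # normalized sub-band energy vector

# Synthetic "bearing" signal: a carrier plus impulsive fault-like bursts.
t = np.linspace(0, 1, 4096)
x = np.sin(2 * np.pi * 50 * t) + 0.5 * (np.sin(2 * np.pi * 400 * t) > 0.99)
print(wpd_energy_features(x))  # 2**3 = 8 features per frame
```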

  12. Efficient Fingercode Classification

    Science.gov (United States)

    Sun, Hong-Wei; Law, Kwok-Yan; Gollmann, Dieter; Chung, Siu-Leung; Li, Jian-Bin; Sun, Jia-Guang

    In this paper, we present an efficient fingerprint classification algorithm which is an essential component in many critical security application systems, e.g. systems in the e-government and e-finance domains. Fingerprint identification is one of the most important security requirements in homeland security systems such as personnel screening and anti-money laundering. The problem of fingerprint identification involves searching (matching) the fingerprint of a person against each of the fingerprints of all registered persons. To enhance performance and reliability, a common approach is to reduce the search space by firstly classifying the fingerprints and then performing the search in the respective class. Jain et al. proposed a fingerprint classification algorithm based on a two-stage classifier, which uses a K-nearest neighbor classifier in its first stage. The fingerprint classification algorithm is based on the fingercode representation, an encoding of fingerprints that has been demonstrated to be an effective fingerprint biometric scheme because of its ability to capture both local and global details in a fingerprint image. We enhance this approach by improving the efficiency of the K-nearest neighbor classifier for fingercode-based fingerprint classification. Our research firstly investigates the various fast search algorithms in vector quantization (VQ) and their potential application in fingerprint classification, and then proposes two efficient algorithms based on the pyramid-based search algorithms in VQ. Experimental results on DB1 of FVC 2004 demonstrate that our algorithms can outperform the full search algorithm and the original pyramid-based search algorithms in terms of computational efficiency without sacrificing accuracy.
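
    The first-stage K-nearest-neighbor search over fingercode vectors can be sketched with scikit-learn as below. The vectors are random stand-ins for real fingercodes, the dimensionality is illustrative, and the pyramid-based fast search proposed in the record is not reproduced:

```python
# Hedged sketch: first-stage KNN classification of fingercode-like vectors;
# random data stand in for real fingercode features.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
N_CLASSES, PER_CLASS, DIM = 5, 40, 640  # DIM is an illustrative assumption

centers = rng.normal(0, 1, (N_CLASSES, DIM))
X = np.vstack([c + 0.3 * rng.normal(0, 1, (PER_CLASS, DIM)) for c in centers])
y = np.repeat(np.arange(N_CLASSES), PER_CLASS)

knn = KNeighborsClassifier(n_neighbors=5).fit(X, y)
query = centers[2] + 0.3 * rng.normal(0, 1, DIM)
# The expensive matching step is then restricted to the predicted class.
print("search within class:", knn.predict(query[None, :])[0])
```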

  13. [Surgical treatment of chronic pancreatitis based on classification of M. Buchler and coworkers].

    Science.gov (United States)

    Krivoruchko, I A; Boĭko, V V; Goncharova, N N; Andreeshchev, S A

    2011-08-01

    The results of surgical treatment of 452 patients suffering from chronic pancreatitis (CHP) were analyzed. The CHP classification elaborated by M. Buchler and coworkers (2009), based on clinical signs, morphological peculiarities and analysis of pancreatic function, contains scientifically substantiated recommendations for the choice of diagnostic methods and complex treatment of the disease. The proposed classification is simple to apply and constitutes an instrument for studying and comparing the severity of the CHP course, the patients' prognosis and treatment.

  14. Movie Popularity Classification based on Inherent Movie Attributes using C4.5, PART and Correlation Coefficient

    DEFF Research Database (Denmark)

    Ibnal Asad, Khalid; Ahmed, Tanvir; Rahman, Md. Saiedur

    2012-01-01

    Abundance of movie data across the internet makes it an obvious candidate for machine learning and knowledge discovery. But most research is directed towards bi-polar classification of movies or generation of movie recommendation systems based on reviews given by viewers on various internet sites. Classification of movie popularity based solely on attributes of a movie, i.e. actor, actress, director rating, language, country, budget etc., has been less highlighted due to the large number of attributes associated with each movie and their differences in dimensions. In this paper, we propose a classification scheme for pre-release movie popularity based on inherent attributes using the C4.5 and PART classifier algorithms, and define the relation between attributes of post-release movies using the correlation coefficient.

  15. A Classification-based Review Recommender

    Science.gov (United States)

    O'Mahony, Michael P.; Smyth, Barry

    Many online stores encourage their users to submit product/service reviews in order to guide future purchasing decisions. These reviews are often listed alongside product recommendations but, to date, limited attention has been paid as to how best to present these reviews to the end-user. In this paper, we describe a supervised classification approach that is designed to identify and recommend the most helpful product reviews. Using the TripAdvisor service as a case study, we compare the performance of several classification techniques using a range of features derived from hotel reviews. We then describe how these classifiers can be used as the basis for a practical recommender that automatically suggests the most helpful contrasting reviews to end-users. We present an empirical evaluation which shows that our approach achieves a statistically significant improvement over alternative review ranking schemes.

  16. Elements of decisional dynamics: An agent-based approach applied to artificial financial market

    Science.gov (United States)

    Lucas, Iris; Cotsaftis, Michel; Bertelle, Cyrille

    2018-02-01

    This paper introduces an original mathematical description of agents' decision-making processes for problems affected by both individual and collective behaviors in systems characterized by nonlinear, path-dependent, and self-organizing interactions. An application to artificial financial markets is proposed by designing a multi-agent system based on the proposed formalization. In this application, the agents' decision-making process is based on fuzzy logic rules and the price dynamics is purely deterministic according to the basic matching rules of a central order book. Finally, while putting most parameters under evolutionary control, the computational agent-based system is able to replicate several stylized facts of financial time series (distributions of stock returns showing a heavy tail with positive excess kurtosis, absence of autocorrelations in stock returns, and the volatility clustering phenomenon).

  17. Multi-agent based modeling for electric vehicle integration in a distribution network operation

    DEFF Research Database (Denmark)

    Hu, Junjie; Morais, Hugo; Lind, Morten

    2016-01-01

    The purpose of this paper is to present a multi-agent based modeling technology for simulating and operating the hierarchical energy management of a power distribution system with a focus on EV integration. The proposed multi-agent system consists of four types of agents: i) Distribution system operator (DSO) technical agents and ii) DSO market agents, which both belong to the top layer of the hierarchy and whose roles are to manage the distribution network by avoiding grid congestions and using congestion prices to coordinate the energy scheduled; iii) Electric vehicle virtual power plant agents...

  18. Modeling Multi-Mobile Agents System Based on Coalition Signature Mechanism Using UML

    Institute of Scientific and Technical Information of China (English)

    SUNZhixin; HUANGHaiping; WANGRuchuan

    2004-01-01

    With the development of electronic commerce and agent techniques, multi-mobile agent cooperation can not only improve the efficiency of electronic business trade but, more importantly, has comprehensive applicative value in solving the security issues of mobile agent systems. This paper firstly describes the mechanism of multi-mobile agent coalition signatures aimed at system security. Subsequently it brings forward a basic architecture of a Multi-mobile agents system (MMAS) based on the design pattern of multi-mobile agents. The paper uses UML diagrams, such as the use case diagram, class diagram and sequence diagram, to build a detailed model of the coalition signature and the results of multi-mobile agent cooperation. Through security analysis, we find that multi-mobile agent cooperation and interaction can solve some security problems of mobile agents during transfer, and it can also improve the efficiency of business trade. These results indicate that MMAS has high security performance and can be widely used in E-commerce trade.

  19. Ontology-based intelligent fuzzy agent for diabetes application

    NARCIS (Netherlands)

    Acampora, G.; Lee, C.-S.; Wang, M.-H.; Hsu, C.-Y.; Loia, V.

    2009-01-01

    It is widely pointed out that classical ontologies are not sufficient to deal with imprecise and vague knowledge in some real-world applications, whereas fuzzy ontologies can effectively handle data and knowledge with uncertainty. In this paper, an ontology-based intelligent fuzzy agent (OIFA),

  20. A Neural-Network-Based Approach to White Blood Cell Classification

    Directory of Open Access Journals (Sweden)

    Mu-Chun Su

    2014-01-01

    Full Text Available This paper presents a new white blood cell classification system for the recognition of five types of white blood cells. We propose a new segmentation algorithm for the segmentation of white blood cells from smear images. The core idea of the proposed segmentation algorithm is to find a discriminating region of white blood cells in the HSI color space. Pixels with color lying in the discriminating region, described by an ellipsoidal region, are regarded as the nucleus and granules of the cytoplasm of a white blood cell. Then, through a further morphological process, we can segment a white blood cell from a smear image. Three kinds of features (i.e., geometrical features, color features, and LDP-based texture features) are extracted from the segmented cell. These features are fed into three different kinds of neural networks to recognize the types of the white blood cells. To test the effectiveness of the proposed white blood cell classification system, a total of 450 white blood cell images were used. The highest overall correct recognition rate reached 99.11%. Simulation results showed that the proposed white blood cell classification system is very competitive with some existing systems.
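
    The discriminating-region test, i.e. whether a pixel's color falls inside an ellipsoid fitted to known nucleus colors, reduces to a Mahalanobis-distance threshold. A minimal sketch follows; the color samples, the fake image, and the threshold are assumptions, and the RGB-to-HSI conversion is omitted:

```python
# Hedged sketch: ellipsoidal color-region membership for nucleus pixels,
# i.e., a Mahalanobis-distance threshold in an (H, S, I)-like color space.
import numpy as np

def fit_ellipsoid(samples):
    """Mean and covariance of known nucleus-pixel colors (n x 3 array)."""
    return samples.mean(axis=0), np.cov(samples, rowvar=False)

def inside_ellipsoid(pixels, mean, cov, radius=2.0):
    """Boolean mask: True where a pixel's color lies within `radius`
    Mahalanobis units of the nucleus color model."""
    diff = pixels - mean
    inv = np.linalg.inv(cov)
    d2 = np.einsum("...i,ij,...j->...", diff, inv, diff)
    return d2 <= radius ** 2

rng = np.random.default_rng(0)
nucleus = rng.normal([0.7, 0.6, 0.3], 0.05, (200, 3))  # training colors
mean, cov = fit_ellipsoid(nucleus)

image = rng.uniform(0, 1, (64, 64, 3))                  # fake color image
mask = inside_ellipsoid(image, mean, cov)
print("segmented nucleus pixels:", int(mask.sum()))
```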

  1. The Geographic Information Grid System Based on Mobile Agent

    Institute of Scientific and Technical Information of China (English)

    2006-01-01

    We analyze the deficiencies of current application systems and discuss the key requirements of distributed Geographic Information Service (GIS). We construct the distributed GIS on a grid platform. Considering flexibility and efficiency, we integrate mobile agent technology into the system. We propose a new prototype system, the Geographic Information Grid System (GIGS), based on mobile agents. This system has flexible services and high performance, and improves the sharing of distributed resources. The service strategy of the system and examples are also presented.

  2. Object-based vegetation classification with high resolution remote sensing imagery

    Science.gov (United States)

    Yu, Qian

    Vegetation species are valuable indicators to understand the earth system. Information from mapping of vegetation species and community distribution at large scales provides important insight for studying the phenological (growth) cycles of vegetation and plant physiology. Such information plays an important role in land process modeling including climate, ecosystem and hydrological models. The rapidly growing remote sensing technology has increased its potential in vegetation species mapping. However, extracting information at a species level is still a challenging research topic. I proposed an effective method for extracting vegetation species distribution from remotely sensed data and investigated some ways for accuracy improvement. The study consists of three phases. Firstly, a statistical analysis was conducted to explore the spatial variation and class separability of vegetation as a function of image scale. This analysis aimed to confirm that high resolution imagery contains the information on spatial vegetation variation and these species classes can be potentially separable. The second phase was a major effort in advancing classification by proposing a method for extracting vegetation species from high spatial resolution remote sensing data. The proposed classification employs an object-based approach that integrates GIS and remote sensing data and explores the usefulness of ancillary information. The whole process includes image segmentation, feature generation and selection, and nearest neighbor classification. The third phase introduces a spatial regression model for evaluating the mapping quality from the above vegetation classification results. The effects of six categories of sample characteristics on the classification uncertainty are examined: topography, sample membership, sample density, spatial composition characteristics, training reliability and sample object features. This evaluation analysis answered several interesting scientific questions

  3. A ROUGH SET DECISION TREE BASED MLP-CNN FOR VERY HIGH RESOLUTION REMOTELY SENSED IMAGE CLASSIFICATION

    Directory of Open Access Journals (Sweden)

    C. Zhang

    2017-09-01

    Full Text Available Recent advances in remote sensing have witnessed a great amount of very high resolution (VHR) images acquired at sub-metre spatial resolution. These VHR remotely sensed data have posed enormous challenges in processing, analysing and classifying them effectively due to their high spatial complexity and heterogeneity. Although many computer-aided classification methods based on machine learning approaches have been developed over the past decades, most of them are oriented toward pixel-level spectral differentiation, e.g. the Multi-Layer Perceptron (MLP), and are unable to exploit the abundant spatial details within VHR images. This paper introduces a rough set model as a general framework to objectively characterize the uncertainty in CNN classification results, and further partition them into correct and incorrect regions on the map. The correctly classified regions of the CNN were trusted and maintained, whereas the misclassified areas were reclassified using a decision tree with both CNN and MLP. The effectiveness of the proposed rough set decision tree based MLP-CNN was tested using an urban area at Bournemouth, United Kingdom. The MLP-CNN, well capturing the complementarity between CNN and MLP through the rough set based decision tree, achieved the best classification performance both visually and numerically. Therefore, this research paves the way to achieving fully automatic and effective VHR image classification.

  4. Panacea : Automating attack classification for anomaly-based network intrusion detection systems

    NARCIS (Netherlands)

    Bolzoni, D.; Etalle, S.; Hartel, P.H.; Kirda, E.; Jha, S.; Balzarotti, D.

    2009-01-01

    Anomaly-based intrusion detection systems are usually criticized because they lack a classification of attacks, thus security teams have to manually inspect any raised alert to classify it. We present a new approach, Panacea, to automatically and systematically classify attacks detected by an

  5. Panacea : Automating attack classification for anomaly-based network intrusion detection systems

    NARCIS (Netherlands)

    Bolzoni, D.; Etalle, S.; Hartel, P.H.

    2009-01-01

    Anomaly-based intrusion detection systems are usually criticized because they lack a classification of attacks, thus security teams have to manually inspect any raised alert to classify it. We present a new approach, Panacea, to automatically and systematically classify attacks detected by an

  6. Agent Based Modelling for Social Simulation

    OpenAIRE

    Smit, S.K.; Ubink, E.M.; Vecht, B. van der; Langley, D.J.

    2013-01-01

    This document is the result of an exploratory project looking into the status of, and opportunities for Agent Based Modelling (ABM) at TNO. The project focussed on ABM applications containing social interactions and human factors, which we termed ABM for social simulation (ABM4SS). During the course of this project two workshops were organized. At these workshops, a wide range of experts, both ABM experts and domain experts, worked on several potential applications of ABM. The results and ins...

  7. MR/PET quantification tools: Registration, segmentation, classification, and MR-based attenuation correction

    Science.gov (United States)

    Fei, Baowei; Yang, Xiaofeng; Nye, Jonathon A.; Aarsvold, John N.; Raghunath, Nivedita; Cervo, Morgan; Stark, Rebecca; Meltzer, Carolyn C.; Votaw, John R.

    2012-01-01

    Purpose: Combined MR/PET is a relatively new, hybrid imaging modality. A human MR/PET prototype system consisting of a Siemens 3T Trio MR and brain PET insert was installed and tested at our institution. Its present design does not offer measured attenuation correction (AC) using traditional transmission imaging. This study is the development of quantification tools including MR-based AC for quantification in combined MR/PET for brain imaging. Methods: The developed quantification tools include image registration, segmentation, classification, and MR-based AC. These components were integrated into a single scheme for processing MR/PET data. The segmentation method is multiscale and based on the Radon transform of brain MR images. It was developed to segment the skull on T1-weighted MR images. A modified fuzzy C-means classification scheme was developed to classify brain tissue into gray matter, white matter, and cerebrospinal fluid. Classified tissue is assigned an attenuation coefficient so that AC factors can be generated. PET emission data are then reconstructed using a three-dimensional ordered sets expectation maximization method with the MR-based AC map. Ten subjects had separate MR and PET scans. The PET with [11C]PIB was acquired using a high-resolution research tomography (HRRT) PET. MR-based AC was compared with transmission (TX)-based AC on the HRRT. Seventeen volumes of interest were drawn manually on each subject image to compare the PET activities between the MR-based and TX-based AC methods. Results: For skull segmentation, the overlap ratio between our segmented results and the ground truth is 85.2 ± 2.6%. Attenuation correction results from the ten subjects show that the difference between the MR and TX-based methods was <6.5%. Conclusions: MR-based AC compared favorably with conventional transmission-based AC. Quantitative tools including registration, segmentation, classification, and MR-based AC have been developed for use in combined MR

  8. MR/PET quantification tools: Registration, segmentation, classification, and MR-based attenuation correction

    Energy Technology Data Exchange (ETDEWEB)

    Fei, Baowei, E-mail: bfei@emory.edu [Department of Radiology and Imaging Sciences, Emory University School of Medicine, 1841 Clifton Road Northeast, Atlanta, Georgia 30329 (United States); Department of Biomedical Engineering, Emory University and Georgia Institute of Technology, Atlanta, Georgia 30322 (United States); Department of Mathematics and Computer Sciences, Emory University, Atlanta, Georgia 30322 (United States); Yang, Xiaofeng; Nye, Jonathon A.; Raghunath, Nivedita; Votaw, John R. [Department of Radiology and Imaging Sciences, Emory University School of Medicine, Atlanta, Georgia 30329 (United States); Aarsvold, John N. [Department of Radiology and Imaging Sciences, Emory University School of Medicine, Atlanta, Georgia 30329 (United States); Nuclear Medicine Service, Atlanta Veterans Affairs Medical Center, Atlanta, Georgia 30033 (United States); Cervo, Morgan; Stark, Rebecca [The Medical Physics Graduate Program in the George W. Woodruff School, Georgia Institute of Technology, Atlanta, Georgia 30332 (United States); Meltzer, Carolyn C. [Department of Radiology and Imaging Sciences, Emory University School of Medicine, Atlanta, Georgia 30329 (United States); Department of Neurology and Department of Psychiatry and Behavior Sciences, Emory University School of Medicine, Atlanta, Georgia 30322 (United States)

    2012-10-15

    Purpose: Combined MR/PET is a relatively new, hybrid imaging modality. A human MR/PET prototype system consisting of a Siemens 3T Trio MR and brain PET insert was installed and tested at our institution. Its present design does not offer measured attenuation correction (AC) using traditional transmission imaging. This study is the development of quantification tools including MR-based AC for quantification in combined MR/PET for brain imaging. Methods: The developed quantification tools include image registration, segmentation, classification, and MR-based AC. These components were integrated into a single scheme for processing MR/PET data. The segmentation method is multiscale and based on the Radon transform of brain MR images. It was developed to segment the skull on T1-weighted MR images. A modified fuzzy C-means classification scheme was developed to classify brain tissue into gray matter, white matter, and cerebrospinal fluid. Classified tissue is assigned an attenuation coefficient so that AC factors can be generated. PET emission data are then reconstructed using a three-dimensional ordered sets expectation maximization method with the MR-based AC map. Ten subjects had separate MR and PET scans. The PET with [11C]PIB was acquired using a high-resolution research tomography (HRRT) PET. MR-based AC was compared with transmission (TX)-based AC on the HRRT. Seventeen volumes of interest were drawn manually on each subject image to compare the PET activities between the MR-based and TX-based AC methods. Results: For skull segmentation, the overlap ratio between our segmented results and the ground truth is 85.2 ± 2.6%. Attenuation correction results from the ten subjects show that the difference between the MR and TX-based methods was <6.5%. Conclusions: MR-based AC compared favorably with conventional transmission-based AC. Quantitative tools including registration, segmentation, classification, and MR-based AC have been developed for use in combined MR/PET.

  9. MR/PET quantification tools: Registration, segmentation, classification, and MR-based attenuation correction

    International Nuclear Information System (INIS)

    Fei, Baowei; Yang, Xiaofeng; Nye, Jonathon A.; Raghunath, Nivedita; Votaw, John R.; Aarsvold, John N.; Cervo, Morgan; Stark, Rebecca; Meltzer, Carolyn C.

    2012-01-01

    Purpose: Combined MR/PET is a relatively new, hybrid imaging modality. A human MR/PET prototype system consisting of a Siemens 3T Trio MR and brain PET insert was installed and tested at our institution. Its present design does not offer measured attenuation correction (AC) using traditional transmission imaging. This study is the development of quantification tools including MR-based AC for quantification in combined MR/PET for brain imaging. Methods: The developed quantification tools include image registration, segmentation, classification, and MR-based AC. These components were integrated into a single scheme for processing MR/PET data. The segmentation method is multiscale and based on the Radon transform of brain MR images. It was developed to segment the skull on T1-weighted MR images. A modified fuzzy C-means classification scheme was developed to classify brain tissue into gray matter, white matter, and cerebrospinal fluid. Classified tissue is assigned an attenuation coefficient so that AC factors can be generated. PET emission data are then reconstructed using a three-dimensional ordered sets expectation maximization method with the MR-based AC map. Ten subjects had separate MR and PET scans. The PET with [11C]PIB was acquired using a high-resolution research tomography (HRRT) PET. MR-based AC was compared with transmission (TX)-based AC on the HRRT. Seventeen volumes of interest were drawn manually on each subject image to compare the PET activities between the MR-based and TX-based AC methods. Results: For skull segmentation, the overlap ratio between our segmented results and the ground truth is 85.2 ± 2.6%. Attenuation correction results from the ten subjects show that the difference between the MR and TX-based methods was <6.5%. Conclusions: MR-based AC compared favorably with conventional transmission-based AC. Quantitative tools including registration, segmentation, classification, and MR-based AC have been developed for use in combined MR/PET.
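
    The tissue classification step in the three records above relies on a modified fuzzy C-means algorithm. Plain, unmodified fuzzy C-means on voxel intensities is sketched below as a reference point; the intensity distributions are synthetic:

```python
# Hedged sketch: plain fuzzy C-means on voxel intensities, the baseline
# behind the modified FCM tissue classification described above.
import numpy as np

def fuzzy_c_means(x, n_clusters=3, m=2.0, n_iter=100, seed=0):
    """x: (n_samples, n_features); returns (centers, memberships)."""
    rng = np.random.default_rng(seed)
    u = rng.dirichlet(np.ones(n_clusters), size=len(x))  # random memberships
    for _ in range(n_iter):
        w = u ** m
        # Cluster centers: membership-weighted means of the samples.
        centers = (w.T @ x) / w.sum(axis=0)[:, None]
        d = np.linalg.norm(x[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        # Standard FCM membership update: u_ik = 1 / sum_j (d_ik/d_jk)^(2/(m-1)).
        u = 1.0 / np.sum((d[:, :, None] / d[:, None, :]) ** (2 / (m - 1)), axis=2)
    return centers, u

# Synthetic T1 intensities for CSF / gray matter / white matter voxels.
rng = np.random.default_rng(0)
intensities = np.concatenate([rng.normal(mu, 8, 1000)
                              for mu in (30, 100, 160)])[:, None]
centers, u = fuzzy_c_means(intensities)
print("tissue class centers:", np.sort(centers.ravel()).round(1))
```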

  10. Tile-Based Semisupervised Classification of Large-Scale VHR Remote Sensing Images

    Directory of Open Access Journals (Sweden)

    Haikel Alhichri

    2018-01-01

    Full Text Available This paper deals with the problem of the classification of large-scale very high-resolution (VHR) remote sensing (RS) images in a semisupervised scenario, where we have a limited training set (less than ten training samples per class). Typical pixel-based classification methods are unfeasible for large-scale VHR images. Thus, as a practical and efficient solution, we propose to subdivide the large image into a grid of tiles and then classify the tiles instead of classifying pixels. Our proposed method uses the power of a pretrained convolutional neural network (CNN) to first extract descriptive features from each tile. Next, a neural network classifier (composed of 2 fully connected layers) is trained in a semisupervised fashion and used to classify all remaining tiles in the image. This basically presents a coarse classification of the image, which is sufficient for many RS applications. The second contribution deals with the employment of semisupervised learning to improve the classification accuracy. We present a novel semisupervised approach which exploits both the spectral and spatial relationships embedded in the remaining unlabelled tiles. In particular, we embed a spectral graph Laplacian in the hidden layer of the neural network. In addition, we apply regularization of the output labels using a spatial graph Laplacian and the random walker algorithm. Experimental results obtained by testing the method on two large-scale images acquired by the IKONOS2 sensor reveal promising capabilities of this method in terms of classification accuracy even with less than ten training samples per class.
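
    The semisupervised step, propagating a handful of tile labels through a graph built over tile features, can be approximated with scikit-learn's graph-based LabelSpreading. In this sketch the tile features are synthetic stand-ins for the pretrained-CNN descriptors, and the spatial regularization and random walker steps are not reproduced:

```python
# Hedged sketch: graph-Laplacian label propagation over tile features,
# with fewer than ten labeled samples per class, mirroring the setting above.
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.semi_supervised import LabelSpreading

# 500 "tiles" in a CNN-feature space; 3 land-cover classes.
X, y_true = make_blobs(n_samples=500, centers=3, cluster_std=1.5,
                       random_state=0)

# Keep only 8 labels per class; mark the rest unlabeled (-1).
rng = np.random.default_rng(0)
y = np.full(len(y_true), -1)
for cls in range(3):
    idx = rng.choice(np.flatnonzero(y_true == cls), size=8, replace=False)
    y[idx] = cls

model = LabelSpreading(kernel="knn", n_neighbors=10).fit(X, y)
unlabeled = y == -1
acc = (model.transduction_[unlabeled] == y_true[unlabeled]).mean()
print(f"accuracy on unlabeled tiles: {acc:.3f}")
```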

  11. A novel fruit shape classification method based on multi-scale analysis

    Science.gov (United States)

    Gui, Jiangsheng; Ying, Yibin; Rao, Xiuqin

    2005-11-01

    Shape is one of the major concerns and still a difficult problem in the automated inspection and sorting of fruits. In this research, we propose the multi-scale energy distribution (MSED) for object shape description; the relationship between an object's shape and its boundary energy distribution at multiple scales was explored for shape extraction. MSED offers not only the main energy, which represents primary shape information at the lower scales, but also subordinate energy, which represents local shape information at higher differential scales. Thus, it provides a natural tool for multi-resolution representation and can be used as a feature for shape classification. We address the three main processing steps in MSED-based shape classification, namely: 1) image preprocessing and citrus shape extraction, 2) shape resampling and shape feature normalization, 3) energy decomposition by wavelet and classification by a BP neural network. Here, shape resampling takes 256 boundary pixels from a curve that approximates the original boundary using a cubic spline, in order to obtain uniform raw data. A probability function was defined and an effective method to select a start point was given through maximal expectation, which overcomes the inconvenience of traditional methods and gives the descriptor rotation invariance. The experimental results separate relatively normal citrus from seriously abnormal fruit, with a classification rate above 91.2%. The global correct classification rate is 89.77%, and our method is more effective than the traditional method. The global result can meet the requirements of fruit grading.
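
    The MSED pipeline, resampling 256 boundary points via a cubic spline, forming a one-dimensional shape signature, and taking per-scale wavelet energies as features, can be sketched as follows. The centroid-distance signature and the db4 wavelet are assumptions, not necessarily the authors' choices:

```python
# Hedged sketch: multi-scale energy distribution of a closed fruit contour,
# via cubic-spline resampling and wavelet decomposition of its signature.
import numpy as np
import pywt
from scipy.interpolate import splev, splprep

def msed_features(bx, by, n_points=256, wavelet="db4", level=5):
    # Periodic cubic spline through the raw boundary, resampled uniformly.
    tck, _ = splprep([bx, by], s=0, per=True)
    u = np.linspace(0, 1, n_points, endpoint=False)
    xs, ys = splev(u, tck)
    # Centroid-distance signature: translation invariant by construction.
    r = np.hypot(xs - np.mean(xs), ys - np.mean(ys))
    coeffs = pywt.wavedec(r - r.mean(), wavelet, level=level)
    energy = np.array([np.sum(c ** 2) for c in coeffs])
    return energy / energy.sum()  # energy per scale, coarse to fine

# A slightly lumpy "citrus" contour as a test boundary.
t = np.linspace(0, 2 * np.pi, 60, endpoint=False)
rho = 1.0 + 0.08 * np.sin(5 * t)
print(msed_features(rho * np.cos(t), rho * np.sin(t)).round(3))
```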

  12. Multiple kernel boosting framework based on information measure for classification

    International Nuclear Information System (INIS)

    Qi, Chengming; Wang, Yuping; Tian, Wenjie; Wang, Qun

    2016-01-01

    The performance of kernel-based methods, such as the support vector machine (SVM), is greatly affected by the choice of kernel function. Multiple kernel learning (MKL) is a promising family of machine learning algorithms and has attracted much attention in recent years. MKL combines multiple sub-kernels to seek better results than single kernel learning. In order to improve the efficiency of SVM and MKL, in this paper the Kullback–Leibler kernel function is derived to develop the SVM. The proposed method employs an improved ensemble learning framework, named KLMKB, which applies Adaboost to learning a multiple kernel-based classifier. In the experiment on hyperspectral remote sensing image classification, we employ features selected through the Optimum Index Factor (OIF) to classify the satellite image. We extensively examine the performance of our approach in comparison to some relevant and state-of-the-art algorithms on a number of benchmark classification data sets and a hyperspectral remote sensing image data set. Experimental results show that our method has stable behavior and noticeable accuracy for different data sets.
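
    The boosting idea behind KLMKB, where each round selects the sub-kernel whose SVM has the lowest weighted error, can be approximated with a hand-rolled AdaBoost loop. This toy binary example uses a generic kernel set and does not reproduce the Kullback–Leibler kernel itself:

```python
# Hedged sketch: AdaBoost over SVMs with different sub-kernels, a toy
# analogue of the multiple kernel boosting framework described above.
import numpy as np
from sklearn.datasets import make_moons
from sklearn.svm import SVC

X, y01 = make_moons(n_samples=400, noise=0.25, random_state=0)
y = 2 * y01 - 1                      # AdaBoost works with labels in {-1, +1}
w = np.full(len(y), 1.0 / len(y))    # sample weights

KERNELS = [dict(kernel="linear"), dict(kernel="rbf", gamma=1.0),
           dict(kernel="poly", degree=3)]
ensemble = []
for _ in range(10):
    # Pick the sub-kernel whose SVM has the lowest weighted training error.
    trials = []
    for params in KERNELS:
        clf = SVC(C=1.0, **params).fit(X, y, sample_weight=w)
        pred = clf.predict(X)
        trials.append((w[pred != y].sum(), clf, pred))
    err, clf, pred = min(trials, key=lambda t: t[0])
    err = np.clip(err, 1e-10, 1 - 1e-10)
    alpha = 0.5 * np.log((1 - err) / err)
    ensemble.append((alpha, clf))
    w *= np.exp(-alpha * y * pred)   # up-weight misclassified samples
    w /= w.sum()

F = sum(a * c.predict(X) for a, c in ensemble)
print(f"boosted train accuracy: {(np.sign(F) == y).mean():.3f}")
```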

  13. Actionable gene-based classification toward precision medicine in gastric cancer

    Directory of Open Access Journals (Sweden)

    Hiroshi Ichikawa

    2017-10-01

    Full Text Available Abstract Background Intertumoral heterogeneity represents a significant hurdle to identifying optimized targeted therapies in gastric cancer (GC). To realize precision medicine for GC patients, an actionable gene alteration-based molecular classification that directly associates GCs with targeted therapies is needed. Methods A total of 207 Japanese patients with GC were included in this study. Formalin-fixed, paraffin-embedded (FFPE) tumor tissues were obtained from surgical or biopsy specimens and were subjected to DNA extraction. We generated comprehensive genomic profiling data using a 435-gene panel including 69 actionable genes paired with US Food and Drug Administration-approved targeted therapies, and the evaluation of Epstein-Barr virus (EBV) infection and microsatellite instability (MSI) status. Results Comprehensive genomic sequencing detected at least one alteration of 435 cancer-related genes in 194 GCs (93.7%) and of 69 actionable genes in 141 GCs (68.1%). We classified the 207 GCs into four The Cancer Genome Atlas (TCGA) subtypes using the genomic profiling data; EBV (N = 9), MSI (N = 17), chromosomal instability (N = 119), and genomically stable subtype (N = 62). Actionable gene alterations were not specific and were widely observed throughout all TCGA subtypes. To discover a novel classification which more precisely selects candidates for targeted therapies, 207 GCs were classified using hypermutated phenotype and the mutation profile of 69 actionable genes. We identified a hypermutated group (N = 32), while the others (N = 175) were sub-divided into six clusters including five with actionable gene alterations: ERBB2 (N = 25), CDKN2A and CDKN2B (N = 10), KRAS (N = 10), BRCA2 (N = 9), and ATM cluster (N = 12). The clinical utility of this classification was demonstrated by a case of unresectable GC with a remarkable response to anti-HER2 therapy in the ERBB2 cluster. Conclusions This actionable gene-based

  14. Agent Based Framework Architecture for Supporting Content Adaptation for Mobile Government

    Directory of Open Access Journals (Sweden)

    Hasan Omar Al-Sakran

    2013-01-01

    Full Text Available The rapid spread of smart mobile technology that supports internet access is transforming the way governments provide services to their citizens. Mobile devices have different capabilities depending on their manufacturers and models. This paper proposes a new framework for adapting the content of M-government services using mobile agent technology. The framework is based on a mediation architecture that uses multiple mobile agents and XML as a semi-structured mediation language. The flexibility of the mediation architecture and XML provides an adaptive environment for streaming data based on the capabilities of the device sending the query to the system.

  15. Motif-Based Text Mining of Microbial Metagenome Redundancy Profiling Data for Disease Classification.

    Science.gov (United States)

    Wang, Yin; Li, Rudong; Zhou, Yuhua; Ling, Zongxin; Guo, Xiaokui; Xie, Lu; Liu, Lei

    2016-01-01

    Text data of 16S rRNA are informative for classifications of microbiota-associated diseases. However, the raw text data need to be systematically processed so that features for classification can be defined/extracted; moreover, the high-dimension feature spaces generated by the text data also pose an additional difficulty. Here we present a Phylogenetic Tree-Based Motif Finding algorithm (PMF) to analyze 16S rRNA text data. By integrating phylogenetic rules and other statistical indexes for classification, we can effectively reduce the dimension of the large feature spaces generated by the text datasets. Using the retrieved motifs in combination with common classification methods, we can discriminate different samples of both pneumonia and dental caries better than other existing methods. We extend the phylogenetic approaches to perform supervised learning on microbiota text data to discriminate the pathological states for pneumonia and dental caries. The results have shown that PMF may enhance the efficiency and reliability in analyzing high-dimension text data.

  16. Integration agent-based models and GIS as a virtual urban dynamic laboratory

    Science.gov (United States)

    Chen, Peng; Liu, Miaolong

    2007-06-01

    Based on the Agent-based Model and a spatial data model, a tightly coupled method for integrating GIS and Agent-based Models (ABM) is discussed in this paper. The use of object orientation for both spatial data and spatial process models facilitates their integration, which allows exploration and explanation of spatial-temporal phenomena such as urban dynamics. In order to better understand how tight coupling might proceed, and to evaluate the possible functional and efficiency gains from such coupling, the agent-based model and the spatial data model are discussed, followed by the relationships governing the interaction of spatial data models and agent-based process models. After that, a realistic crowd flow simulation experiment is presented. Using tools provided by general GIS systems and a few specific programming languages, a new software system integrating GIS and MAS, serving as a virtual laboratory for simulating pedestrian flows in a crowd activity centre, has been developed successfully. Under the environment supported by this software system, as an applicable case, the dynamic evolution of pedestrian flows (the dispersal of spectators) in a crowd activity centre, the Shanghai Stadium, has been simulated successfully. At the end of the paper, some new research problems are pointed out for future work.

  17. Agent-based modeling in ecological economics.

    Science.gov (United States)

    Heckbert, Scott; Baynes, Tim; Reeson, Andrew

    2010-01-01

    Interconnected social and environmental systems are the domain of ecological economics, and models can be used to explore feedbacks and adaptations inherent in these systems. Agent-based modeling (ABM) represents autonomous entities, each with dynamic behavior and heterogeneous characteristics. Agents interact with each other and their environment, resulting in emergent outcomes at the macroscale that can be used to quantitatively analyze complex systems. ABM is contributing to research questions in ecological economics in the areas of natural resource management and land-use change, urban systems modeling, market dynamics, changes in consumer attitudes, innovation, and diffusion of technology and management practices, commons dilemmas and self-governance, and psychological aspects to human decision making and behavior change. Frontiers for ABM research in ecological economics involve advancing the empirical calibration and validation of models through mixed methods, including surveys, interviews, participatory modeling, and, notably, experimental economics to test specific decision-making hypotheses. Linking ABM with other modeling techniques at the level of emergent properties will further advance efforts to understand dynamics of social-environmental systems.

  18. Biodiesel classification by base stock type (vegetable oil) using near infrared spectroscopy data

    Energy Technology Data Exchange (ETDEWEB)

    Balabin, Roman M., E-mail: balabin@org.chem.ethz.ch [Department of Chemistry and Applied Biosciences, ETH Zurich, 8093 Zurich (Switzerland); Safieva, Ravilya Z. [Gubkin Russian State University of Oil and Gas, 119991 Moscow (Russian Federation)

    2011-03-18

    The use of biofuels, such as bioethanol or biodiesel, has rapidly increased in the last few years. Near infrared (near-IR, NIR, or NIRS) spectroscopy (>4000 cm⁻¹) has previously been reported as a cheap and fast alternative for biodiesel quality control when compared with infrared, Raman, or nuclear magnetic resonance (NMR) methods; in addition, NIR can easily be done in real time (on-line). In this proof-of-principle paper, we attempt to find a correlation between the near infrared spectrum of a biodiesel sample and its base stock. This correlation is used to classify fuel samples into 10 groups according to their origin (vegetable oil): sunflower, coconut, palm, soy/soya, cottonseed, castor, Jatropha, etc. Principal component analysis (PCA) is used for outlier detection and dimensionality reduction of the NIR spectral data. Four different multivariate data analysis techniques are used to solve the classification problem, including regularized discriminant analysis (RDA), partial least squares method/projection on latent structures (PLS-DA), K-nearest neighbors (KNN) technique, and support vector machines (SVMs). Classifying biodiesel by feedstock (base stock) type can be successfully solved with modern machine learning techniques and NIR spectroscopy data. KNN and SVM methods were found to be highly effective for biodiesel classification by feedstock oil type. A classification error (E) of less than 5% can be reached using an SVM-based approach. If computational time is an important consideration, the KNN technique (E = 6.2%) can be recommended for practical (industrial) implementation. Comparison with gasoline and motor oil data shows the relative simplicity of this methodology for biodiesel classification.

  19. Biodiesel classification by base stock type (vegetable oil) using near infrared spectroscopy data

    International Nuclear Information System (INIS)

    Balabin, Roman M.; Safieva, Ravilya Z.

    2011-01-01

    The use of biofuels, such as bioethanol or biodiesel, has rapidly increased in the last few years. Near infrared (near-IR, NIR, or NIRS) spectroscopy (>4000 cm⁻¹) has previously been reported as a cheap and fast alternative for biodiesel quality control when compared with infrared, Raman, or nuclear magnetic resonance (NMR) methods; in addition, NIR can easily be done in real time (on-line). In this proof-of-principle paper, we attempt to find a correlation between the near infrared spectrum of a biodiesel sample and its base stock. This correlation is used to classify fuel samples into 10 groups according to their origin (vegetable oil): sunflower, coconut, palm, soy/soya, cottonseed, castor, Jatropha, etc. Principal component analysis (PCA) is used for outlier detection and dimensionality reduction of the NIR spectral data. Four different multivariate data analysis techniques are used to solve the classification problem, including regularized discriminant analysis (RDA), partial least squares method/projection on latent structures (PLS-DA), K-nearest neighbors (KNN) technique, and support vector machines (SVMs). Classifying biodiesel by feedstock (base stock) type can be successfully solved with modern machine learning techniques and NIR spectroscopy data. KNN and SVM methods were found to be highly effective for biodiesel classification by feedstock oil type. A classification error (E) of less than 5% can be reached using an SVM-based approach. If computational time is an important consideration, the KNN technique (E = 6.2%) can be recommended for practical (industrial) implementation. Comparison with gasoline and motor oil data shows the relative simplicity of this methodology for biodiesel classification.
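
    The chemometric workflow in the two records above, PCA for dimensionality reduction followed by KNN or SVM on the scores, is easy to reproduce in outline. The synthetic "spectra" below merely stand in for real NIR measurements of the ten feedstocks:

```python
# Hedged sketch: PCA-compressed NIR spectra classified by KNN and SVM,
# following the workflow in the biodiesel records above (synthetic data).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
N_FEEDSTOCKS, PER_CLASS, N_WAVENUMBERS = 10, 30, 500

# Each feedstock gets a smooth random "spectrum" plus measurement noise.
base = rng.normal(0, 1, (N_FEEDSTOCKS, N_WAVENUMBERS)).cumsum(axis=1)
X = np.vstack([b + rng.normal(0, 2.0, (PER_CLASS, N_WAVENUMBERS))
               for b in base])
y = np.repeat(np.arange(N_FEEDSTOCKS), PER_CLASS)

for name, clf in [("KNN", KNeighborsClassifier(n_neighbors=5)),
                  ("SVM", SVC(kernel="rbf", C=10.0))]:
    pipe = make_pipeline(StandardScaler(), PCA(n_components=15), clf)
    score = cross_val_score(pipe, X, y, cv=5).mean()
    print(f"{name}: mean CV accuracy = {score:.3f}")
```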

  20. Feature selection based on SVM significance maps for classification of dementia

    NARCIS (Netherlands)

    E.E. Bron (Esther); M. Smits (Marion); J.C. van Swieten (John); W.J. Niessen (Wiro); S. Klein (Stefan)

    2014-01-01

    Support vector machine significance maps (SVM p-maps) previously showed clusters of significantly different voxels in dementia-related brain regions. We propose a novel feature selection method for classification of dementia based on these p-maps. In our approach, the SVM p-maps are

  1. An agent-based QoS provisioning mechanism for WDM optical networks

    Science.gov (United States)

    Ouyang, Yong; Zeng, Qingji; Yue, Ling

    2004-04-01

    This paper addresses QoS provisioning mechanisms in WDM optical networks. With the appearance of metropolitan optical networks, a hierarchical metro and wide-area optical network is envisioned for the near future. This hierarchical optical transport network is often divided into optical domains by geography, administration and technology, which usually employ different QoS routing algorithms and policies. Providing end-to-end optical QoS is becoming a new challenge for optical network design. In this paper, we first give an overview of issues in QoS provisioning in the data, control and management planes of WDM optical networks. Three provisioning approaches are then analyzed and compared. Finally, we propose an agent-based hybrid centralized/distributed QoS provisioning mechanism based on the concept of a domain agent. This agent-based hybrid mechanism employs a centralized approach within each domain and a distributed approach between domains. It offers scalability and intra-domain optimal QoS routing, and it also preserves independence and interoperability between domains.

  2. Agent-Based Modeling of Consumer Decision making Process Based on Power Distance and Personality

    NARCIS (Netherlands)

    Roozmand, O.; Ghasem-Aghaee, N.; Hofstede, G.J.; Nematbakhsh, M.A.; Baraani, A.; Verwaart, T.

    2011-01-01

    Simulating consumer decision making processes involves different disciplines such as: sociology, social psychology, marketing, and computer science. In this paper, we propose an agent-based conceptual and computational model of consumer decision-making based on culture, personality and human needs.

  3. Assessing Consequential Scenarios in a Complex Operational Environment Using Agent Based Simulation

    Science.gov (United States)

    2017-03-16

    capabilities and maturities of 4 subelements: cognition, judgment, emotion, and critical thinking. Each model represents these subelements differently ... Evaluating Agent-Based Technologies: Maturity Level and the Human Domain ... describes the maturity of agent-based models, ranging from realistic caricatures to quantitatively characterized phenomena at the microlevel. This

  4. Classification of high resolution remote sensing image based on geo-ontology and conditional random fields

    Science.gov (United States)

    Hong, Liang

    2013-10-01

    The availability of high spatial resolution remote sensing data provides new opportunities for urban land-cover classification. More geometric details can be observed in high resolution remote sensing images, and ground objects in them display rich texture, structure, shape and hierarchical semantic characteristics, with more landscape elements represented by small groups of pixels. In recent years, the object-based remote sensing analysis methodology has been widely accepted and applied in high resolution remote sensing image processing. A classification method based on geo-ontology and conditional random fields is presented in this paper. The proposed method is made up of four blocks: (1) a hierarchical ground-object semantic framework is constructed based on geo-ontology; (2) segmentation by the mean-shift algorithm generates image objects, yielding boundary-preserving and spectrally homogeneous over-segmentation regions; (3) the relations between the hierarchical ground-object semantics and the over-segmentation regions are defined within a conditional random field framework; (4) hierarchical classification results are obtained based on geo-ontology and conditional random fields. Finally, high-resolution remotely sensed image data from GeoEye is used to test the performance of the presented method. The experimental results show the superiority of this method over the eCognition method in both effectiveness and accuracy, which implies it is suitable for the classification of high resolution remote sensing images.

  5. [Object-oriented stand type classification based on the combination of multi-source remote sen-sing data].

    Science.gov (United States)

    Mao, Xue Gang; Wei, Jing Yu

    2017-11-01

    The recognition of forest type is one of the key problems in forest resource monitoring. Radarsat-2 data and a QuickBird remote sensing image were used for object-based classification to study object-based forest type classification and recognition based on the combination of multi-source remote sensing data. In the process of object-based classification, three segmentation schemes were adopted (segmentation with the QuickBird image only, segmentation with the Radarsat-2 data only, and segmentation with the combination of QuickBird and Radarsat-2). For the three segmentation schemes, ten segmentation scale parameters were adopted (25-250, step 25), and the modified Euclidean distance 3 index was further used to evaluate the segmented results and determine the optimal segmentation scheme and scale. Based on the optimal segmented result, three forest types (Chinese fir, Masson pine and broad-leaved forest) were classified and recognized using a Support Vector Machine (SVM) classifier with a Radial Basis Function (RBF) kernel according to different feature combinations of topography, height, spectrum and common features. The results showed that the combination of Radarsat-2 data and the QuickBird image has advantages for object-based forest type classification over using Radarsat-2 data or the QuickBird image only. The optimal scale parameter for QuickBird-Radarsat-2 segmentation was 100, and at the optimal scale the accuracy of object-based forest type classification was the highest (OA = 86%, Kappa = 0.86) when using all features extracted from the two data sources. This study not only provides a reference for forest type recognition using multi-source remote sensing data, but also has practical significance for forest resource investigation and monitoring.

  6. Using Agent-Based Models in the Analysis and Forecast of Socio-Economic Development of Territories

    Directory of Open Access Journals (Sweden)

    Vitalii Nikolaevich Makoveev

    2016-11-01

    Full Text Available The purpose of the paper is to study the essence of agent-based modeling, to define its features and prospects for use in modeling the socio-economic development of territories, and to systematize domestic and foreign approaches to developing prototypes of agent-based models of territories. The information basis for the research comprises works on agent-based modeling by Russian and foreign scholars, especially articles and monographs by scientists of the Central Economics and Mathematics Institute of the Russian Academy of Sciences, papers published in the international journal The Journal of Artificial Societies and Social Simulation, and other sources available on the Internet. The article presents theoretical and methodological foundations of agent-based models of territories. The author considers the concepts of “agent-based modeling” and “agent” and defines the specifics of agent-based models in comparison with other types of simulation modeling. The paper also describes the major stages of building agent-based models of territories and considers the qualification requirements for the modeler. Furthermore, it reviews Russian and foreign approaches to the development of prototypes for agent-based models of territories. It has been determined that most of them deal with the modeling of spatial, territorial and socio-economic development of regions, cities and municipal entities. Agents in such models are represented by households, residents of regions and cities, enterprises and organizations operating in their territory, and public administration authorities (their inclusion in the model makes it possible to test different management impacts on territories by changing the model parameters, for instance the introduction of certain prohibitions and quotas, issuance of permits, distribution of financial resources, etc.). At the end of the paper, the author formulates major conclusions. He shows the complexity faced
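
    To make the structure of such models concrete, here is a deliberately toy sketch (not the model from the paper): households and firms are agents, and a single policy parameter, a tax rate, stands in for the management impacts mentioned above; all names and numbers are illustrative assumptions.

        # Toy territory model: firms produce, a tax is collected and
        # redistributed to households; vary tax_rate to test policy options.
        import random

        class Household:
            def __init__(self):
                self.income = random.uniform(20, 60)

        class Firm:
            def __init__(self):
                self.output = random.uniform(100, 300)

        def step(households, firms, tax_rate):
            revenue = 0.0
            for firm in firms:
                firm.output *= random.uniform(0.98, 1.04)  # stochastic growth
                revenue += tax_rate * firm.output
            subsidy = revenue / len(households)            # uniform redistribution
            for hh in households:
                hh.income += subsidy
            return sum(hh.income for hh in households)

        random.seed(1)
        households = [Household() for _ in range(100)]
        firms = [Firm() for _ in range(20)]
        for year in range(10):
            total_income = step(households, firms, tax_rate=0.1)
        print(round(total_income, 1))  # aggregate income after ten model years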

  7. Agent-Based Model of Information Security System: Architecture and Formal Framework for Coordinated Intelligent Agents Behavior Specification

    National Research Council Canada - National Science Library

    Gorodetski, Vladimir

    2001-01-01

    The contractor will research and further develop the technology supporting an agent-based architecture for an information security system and a formal framework to specify a model of distributed knowledge...

  8. Agent-based power sharing scheme for active hybrid power sources

    Science.gov (United States)

    Jiang, Zhenhua

    The active hybridization technique provides an effective approach to combining the best properties of a heterogeneous set of power sources to achieve higher energy density, power density and fuel efficiency. Active hybrid power sources can be used to power hybrid electric vehicles with selected combinations of internal combustion engines, fuel cells, batteries, and/or supercapacitors. They can be deployed in all-electric ships to build a distributed electric power system. They can also be used in a bulk power system to construct an autonomous distributed energy system. An important aspect in designing an active hybrid power source is to find a suitable control strategy that can manage the active power sharing and take advantage of the inherent scalability and robustness benefits of the hybrid system. This paper presents an agent-based power sharing scheme for active hybrid power sources. To demonstrate the effectiveness of the proposed agent-based power sharing scheme, simulation studies are performed for a hybrid power source that can be used in a solar car as the main propulsion power module. Simulation results clearly indicate that the agent-based control framework is effective in coordinating the various energy sources and managing the power/voltage profiles.
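
    The record does not give the algorithm, but the flavour of an agent-based sharing rule can be sketched as follows: each source agent bids its available power, and the demanded power is split in proportion to the bids. This is an assumed toy scheme with illustrative names and numbers, not the scheme from the paper.

        # Proportional power sharing negotiated among source agents (toy).
        class SourceAgent:
            def __init__(self, name, max_power_kw):
                self.name, self.max_power_kw = name, max_power_kw
            def bid(self):
                # A real agent would factor in state of charge, temperature, etc.
                return self.max_power_kw

        def share_power(agents, demand_kw):
            bids = {a.name: a.bid() for a in agents}
            total = sum(bids.values())
            # Split the demand across agents in proportion to their bids.
            return {name: demand_kw * b / total for name, b in bids.items()}

        agents = [SourceAgent("fuel_cell", 5.0),
                  SourceAgent("battery", 3.0),
                  SourceAgent("supercap", 2.0)]
        print(share_power(agents, demand_kw=6.0))
        # {'fuel_cell': 3.0, 'battery': 1.8, 'supercap': 1.2}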

  9. Classification of e-government documents based on cooperative expression of word vectors

    Science.gov (United States)

    Fu, Qianqian; Liu, Hao; Wei, Zhiqiang

    2017-03-01

    Effective document classification is a powerful technique for handling the huge volume of e-government documents automatically rather than manually. The word-to-vector (word2vec) model, which converts words into low-dimensional semantic vectors, can be successfully employed to classify e-government documents. In this paper, we propose the cooperative expression of word vectors (Co-word-vector), whose multi-granularity integration explores the possibility of modeling documents in the semantic space. We also improve the weighted continuous-bag-of-words model based on the word2vec model and the distributed representation of topic words based on the LDA model. Combining the two levels of word representation, experimental results show that the proposed method outperforms traditional methods on e-government document classification.
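
    A much-simplified sketch of the word-vector idea (averaged word2vec vectors fed to a linear classifier) is shown below; it uses gensim and scikit-learn with toy documents and omits the paper's weighting and LDA topic components, so it illustrates the general approach rather than the Co-word-vector method itself.

        # Document classification from averaged word2vec vectors (toy data).
        import numpy as np
        from gensim.models import Word2Vec
        from sklearn.linear_model import LogisticRegression

        docs = [["budget", "report", "approved"],
                ["road", "construction", "permit"],
                ["annual", "budget", "plan"],
                ["bridge", "construction", "notice"]]
        labels = [0, 1, 0, 1]  # toy classes: 0 = finance, 1 = infrastructure

        w2v = Word2Vec(sentences=docs, vector_size=50, window=5, min_count=1, seed=1)

        def doc_vector(tokens):
            # Average the word vectors into one fixed-length document vector.
            return np.mean([w2v.wv[t] for t in tokens if t in w2v.wv], axis=0)

        X = np.vstack([doc_vector(d) for d in docs])
        clf = LogisticRegression().fit(X, labels)
        print(clf.predict(X))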

  10. SB certification handout material requirements, test methods, responsibilities, and minimum classification levels for mixture-based specification for flexible base.

    Science.gov (United States)

    2012-10-01

    A handout with tables representing the material requirements, test methods, responsibilities, and minimum classification levels for the mixture-based specification for flexible base, and details on aggregate and test methods employed, along with agency and co...

  11. MATT: Multi Agents Testing Tool Based Nets within Nets

    Directory of Open Access Journals (Sweden)

    Sara Kerraoui

    2016-12-01

    As part of this effort, we propose a model-based testing approach for multi-agent systems built on the Reference net model (nets within nets), and develop a tool that aims to provide a uniform and automated testing approach. The feasibility and advantages of the proposed approach are shown through a short case study.

  12. Forest Classification Based on Forest texture in Northwest Yunnan Province

    Science.gov (United States)

    Wang, Jinliang; Gao, Yan; Wang, Xiaohua; Fu, Lei

    2014-03-01

    Forest texture is an intrinsic characteristic and an important visual feature of a forest ecological system. Full utilization of forest texture will be a great help in increasing the accuracy of forest classification based on remotely sensed data. Taking Shangri-La as the study area, forest classification was carried out based on texture. The results show that: (1) in terms of texture abundance, texture boundary, entropy and visual interpretation, the combination of the Grayscale-gradient co-occurrence matrix and the wavelet transformation is much better than either method alone for extracting forest texture information; (2) during forest texture information extraction, the suitable texture window size, determined by the semi-variogram method, depends on the forest type (evergreen broadleaf forest is 3×3, deciduous broadleaf forest is 5×5, etc.); (3) when classifying forest based on texture information, the texture factor assembly differs among forests: Variance, Heterogeneity and Correlation should be selected when the window is between 3×3 and 5×5; Mean, Correlation and Entropy should be used when the window is in the range of 7×7 to 19×19; and Correlation, Second Moment and Variance should be used when the window is larger than 21×21.
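
    For a concrete view of the kind of window-based texture factors discussed above, the sketch below computes Correlation, Second Moment (ASM) and Variance over one window using scikit-image's gray-level co-occurrence matrix; it is a generic GLCM example, not the Grayscale-gradient/wavelet combination used in the study, and the window contents are random stand-in data.

        # Correlation, second moment and variance for one texture window.
        import numpy as np
        from skimage.feature import graycomatrix, graycoprops

        def window_texture(patch, levels=32):
            patch = (patch / patch.max() * (levels - 1)).astype(np.uint8)
            glcm = graycomatrix(patch, distances=[1], angles=[0],
                                levels=levels, symmetric=True, normed=True)
            correlation = graycoprops(glcm, "correlation")[0, 0]
            second_moment = graycoprops(glcm, "ASM")[0, 0]
            variance = patch.astype(float).var()
            return correlation, second_moment, variance

        rng = np.random.default_rng(0)
        patch = rng.integers(0, 255, size=(21, 21))  # stand-in 21x21 window
        print(window_texture(patch))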

  13. Forest Classification Based on Forest texture in Northwest Yunnan Province

    International Nuclear Information System (INIS)

    Wang, Jinliang; Gao, Yan; Fu, Lei; Wang, Xiaohua

    2014-01-01

    Forest texture is an intrinsic characteristic and an important visual feature of a forest ecological system. Full utilization of forest texture will be a great help in increasing the accuracy of forest classification based on remotely sensed data. Taking Shangri-La as the study area, forest classification was carried out based on texture. The results show that: (1) in terms of texture abundance, texture boundary, entropy and visual interpretation, the combination of the Grayscale-gradient co-occurrence matrix and the wavelet transformation is much better than either method alone for extracting forest texture information; (2) during forest texture information extraction, the suitable texture window size, determined by the semi-variogram method, depends on the forest type (evergreen broadleaf forest is 3×3, deciduous broadleaf forest is 5×5, etc.); (3) when classifying forest based on texture information, the texture factor assembly differs among forests: Variance, Heterogeneity and Correlation should be selected when the window is between 3×3 and 5×5; Mean, Correlation and Entropy should be used when the window is in the range of 7×7 to 19×19; and Correlation, Second Moment and Variance should be used when the window is larger than 21×21.

  14. Agent-based models of financial markets

    Energy Technology Data Exchange (ETDEWEB)

    Samanidou, E [Department of Economics, University of Kiel, Olshausenstrasse 40, D-24118 Kiel (Germany); Zschischang, E [HSH Nord Bank, Portfolio Mngmt. and Inv., Martensdamm 6, D-24103 Kiel (Germany); Stauffer, D [Institute for Theoretical Physics, Cologne University, D-50923 Koeln (Germany); Lux, T [Department of Economics, University of Kiel, Olshausenstrasse 40, D-24118 Kiel (Germany)

    2007-03-15

    This review deals with several microscopic ('agent-based') models of financial markets which have been studied by economists and physicists over the last decade: Kim-Markowitz, Levy-Levy-Solomon, Cont-Bouchaud, Solomon-Weisbuch, Lux-Marchesi, Donangelo-Sneppen and Solomon-Levy-Huang. After an overview of simulation approaches in financial economics, we first give a summary of the Donangelo-Sneppen model of monetary exchange and compare it with related models in economics literature. Our selective review then outlines the main ingredients of some influential early models of multi-agent dynamics in financial markets (Kim-Markowitz, Levy-Levy-Solomon). As will be seen, these contributions draw their inspiration from the complex appearance of investors' interactions in real-life markets. Their main aim is to reproduce (and, thereby, provide possible explanations for) the spectacular bubbles and crashes seen in certain historical episodes, but they lack (like almost all the work before 1998 or so) a perspective in terms of the universal statistical features of financial time series. In fact, awareness of a set of such regularities (power-law tails of the distribution of returns, temporal scaling of volatility) only gradually appeared over the nineties. With the more precise description of the formerly relatively vague characteristics (e.g. moving from the notion of fat tails to the more concrete one of a power law with index around three), it became clear that financial market dynamics give rise to some kind of universal scaling law. Showing similarities with scaling laws for other systems with many interacting sub-units, an exploration of financial markets as multi-agent systems appeared to be a natural consequence. This topic has been pursued by quite a number of contributions appearing in both the physics and economics literature since the late nineties. From the wealth of different flavours of multi-agent models that have appeared up to now, we
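
    As a hedged taste of this family of models, the toy sketch below mimics the herding mechanism popularized by Cont-Bouchaud: traders form random clusters, each cluster buys, sells or stays inactive, and the return is proportional to net demand, which tends to produce occasional large price moves. The grouping rule and all parameters are illustrative simplifications, not a faithful implementation of any of the cited models.

        # Toy cluster-herding market: returns driven by net cluster demand.
        import random

        def simulate_returns(n_traders=1000, steps=2000, trade_prob=0.05, seed=2):
            random.seed(seed)
            returns = []
            for _ in range(steps):
                remaining, net_demand = n_traders, 0
                while remaining > 0:
                    cluster = random.randint(1, remaining)  # crude random grouping
                    remaining -= cluster
                    r = random.random()
                    if r < trade_prob:            # whole cluster buys
                        net_demand += cluster
                    elif r < 2 * trade_prob:      # whole cluster sells
                        net_demand -= cluster
                returns.append(net_demand / n_traders)
            return returns

        rets = simulate_returns()
        print(min(rets), max(rets))  # occasional large moves from big clusters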

  15. Agent-based models of financial markets

    Science.gov (United States)

    Samanidou, E.; Zschischang, E.; Stauffer, D.; Lux, T.

    2007-03-01

    This review deals with several microscopic ('agent-based') models of financial markets which have been studied by economists and physicists over the last decade: Kim-Markowitz, Levy-Levy-Solomon, Cont-Bouchaud, Solomon-Weisbuch, Lux-Marchesi, Donangelo-Sneppen and Solomon-Levy-Huang. After an overview of simulation approaches in financial economics, we first give a summary of the Donangelo-Sneppen model of monetary exchange and compare it with related models in economics literature. Our selective review then outlines the main ingredients of some influential early models of multi-agent dynamics in financial markets (Kim-Markowitz, Levy-Levy-Solomon). As will be seen, these contributions draw their inspiration from the complex appearance of investors' interactions in real-life markets. Their main aim is to reproduce (and, thereby, provide possible explanations for) the spectacular bubbles and crashes seen in certain historical episodes, but they lack (like almost all the work before 1998 or so) a perspective in terms of the universal statistical features of financial time series. In fact, awareness of a set of such regularities (power-law tails of the distribution of returns, temporal scaling of volatility) only gradually appeared over the nineties. With the more precise description of the formerly relatively vague characteristics (e.g. moving from the notion of fat tails to the more concrete one of a power law with index around three), it became clear that financial market dynamics give rise to some kind of universal scaling law. Showing similarities with scaling laws for other systems with many interacting sub-units, an exploration of financial markets as multi-agent systems appeared to be a natural consequence. This topic has been pursued by quite a number of contributions appearing in both the physics and economics literature since the late nineties. From the wealth of different flavours of multi-agent models that have appeared up to now, we discuss the Cont

  16. Agent-based models of financial markets

    International Nuclear Information System (INIS)

    Samanidou, E; Zschischang, E; Stauffer, D; Lux, T

    2007-01-01

    This review deals with several microscopic ('agent-based') models of financial markets which have been studied by economists and physicists over the last decade: Kim-Markowitz, Levy-Levy-Solomon, Cont-Bouchaud, Solomon-Weisbuch, Lux-Marchesi, Donangelo-Sneppen and Solomon-Levy-Huang. After an overview of simulation approaches in financial economics, we first give a summary of the Donangelo-Sneppen model of monetary exchange and compare it with related models in economics literature. Our selective review then outlines the main ingredients of some influential early models of multi-agent dynamics in financial markets (Kim-Markowitz, Levy-Levy-Solomon). As will be seen, these contributions draw their inspiration from the complex appearance of investors' interactions in real-life markets. Their main aim is to reproduce (and, thereby, provide possible explanations for) the spectacular bubbles and crashes seen in certain historical episodes, but they lack (like almost all the work before 1998 or so) a perspective in terms of the universal statistical features of financial time series. In fact, awareness of a set of such regularities (power-law tails of the distribution of returns, temporal scaling of volatility) only gradually appeared over the nineties. With the more precise description of the formerly relatively vague characteristics (e.g. moving from the notion of fat tails to the more concrete one of a power law with index around three), it became clear that financial market dynamics give rise to some kind of universal scaling law. Showing similarities with scaling laws for other systems with many interacting sub-units, an exploration of financial markets as multi-agent systems appeared to be a natural consequence. This topic has been pursued by quite a number of contributions appearing in both the physics and economics literature since the late nineties. From the wealth of different flavours of multi-agent models that have appeared up to now, we discuss the Cont

  17. Erbium-Based Perfusion Contrast Agent for Small-Animal Microvessel Imaging

    Directory of Open Access Journals (Sweden)

    Justin J. Tse

    2017-01-01

    Full Text Available Micro-computed tomography (micro-CT) facilitates the visualization and quantification of contrast-enhanced microvessels within intact tissue specimens, but conventional preclinical vascular contrast agents may be inadequate near dense tissue (such as bone). Typical lead-based contrast agents do not exhibit optimal X-ray absorption properties when used with X-ray tube potentials below 90 kilo-electron volts (keV). We have developed a high-atomic number lanthanide (erbium) contrast agent, with a K-edge at 57.5 keV. This approach optimizes X-ray absorption in the output spectral band of conventional microfocal spot X-ray tubes. Erbium oxide nanoparticles produced vascular attenuation greater than 4000 Hounsfield units, and perfusion of vessels < 10 μm in diameter was demonstrated in kidney glomeruli. The described new contrast agent facilitated the visualization and quantification of vessel density and microarchitecture, even adjacent to dense bone. Erbium’s K-edge makes this contrast agent ideally suited for both single- and dual-energy micro-CT, expanding potential preclinical research applications in models of musculoskeletal, oncological, cardiovascular, and neurovascular diseases.

  18. A Chinese text classification system based on Naive Bayes algorithm

    Directory of Open Access Journals (Sweden)

    Cui Wei

    2016-01-01

    Full Text Available In this paper, aiming at the characteristics of Chinese text classification, we use ICTCLAS (the Chinese lexical analysis system of the Chinese Academy of Sciences) for document segmentation, clean the data by filtering stop words, and apply information gain and document frequency feature selection algorithms for document feature selection. On this basis, a text classifier is implemented based on the Naive Bayes algorithm, and experiments and analysis are carried out on the system using a Chinese corpus from Fudan University.
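
    A minimal sketch of the classifier stage is shown below, assuming segmentation (e.g. by ICTCLAS) has already split the text into space-separated words; it uses scikit-learn's multinomial Naive Bayes on bag-of-words counts, with tiny illustrative documents rather than the Fudan corpus.

        # Bag-of-words + multinomial Naive Bayes on pre-segmented Chinese text.
        from sklearn.feature_extraction.text import CountVectorizer
        from sklearn.naive_bayes import MultinomialNB
        from sklearn.pipeline import make_pipeline

        train_docs = ["经济 增长 市场", "足球 比赛 冠军", "股票 市场 投资", "球队 比赛 胜利"]
        train_y = ["economy", "sports", "economy", "sports"]

        clf = make_pipeline(CountVectorizer(), MultinomialNB())
        clf.fit(train_docs, train_y)
        print(clf.predict(["市场 投资 增长"]))  # -> ['economy']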

  19. Multi-dimensional information diffusion and balancing market supply: an agent-based approach

    NARCIS (Netherlands)

    Osinga, S.A.; Kramer, M.R.; Hofstede, G.J.; Beulens, A.J.M.

    2013-01-01

    This agent-based information management model is designed to explore how multi-dimensional information, spreading through a population of agents (for example farmers) affects market supply. Farmers make quality decisions that must be aligned with available markets. Markets distinguish themselves by

  20. Reliability of a treatment-based classification system for subgrouping people with low back pain.

    Science.gov (United States)

    Henry, Sharon M; Fritz, Julie M; Trombley, Andrea R; Bunn, Janice Y

    2012-09-01

    Observational, cross-sectional reliability study. To examine the interrater reliability of novice raters in their use of the treatment-based classification (TBC) system for low back pain and to explore the patterns of disagreement in classification errors. Although the interrater reliability of individual test items in the TBC system is moderate to good, some error persists in classification decision making. Understanding which classification errors are common could direct further refinement of the TBC system. Using previously recorded patient data (n = 24), 12 novice raters classified patients according to the TBC schema. These classification results were combined with those of 7 other raters, allowing examination of the overall agreement using the kappa statistic, as well as agreement/disagreement among pairwise comparisons in classification assignments. A chi-square test examined differences in percent agreement between the novice and more experienced raters and differences in classification distributions between these 2 groups of raters. Among 12 novice raters, there was 80.9% agreement in the pairs of classification (κ = 0.62; 95% confidence interval: 0.59, 0.65) and an overall 75.5% agreement (κ = 0.57; 95% confidence interval: 0.55, 0.69) for the combined data set. Raters were least likely to agree on a classification of stabilization (77.5% agreement). The overall percentage of pairwise classification judgments that disagreed was 24.5%, with the most common disagreement being between manipulation and stabilization (11.0%), followed by a mismatch between stabilization and specific exercise (8.2%). Additional refinement is needed to reduce rater disagreement that persists in the TBC decision-making algorithm, particularly in the stabilization category. J Orthop Sports Phys Ther 2012;42(9):797-805, Epub 7 June 2012. doi:10.2519/jospt.2012.4078.
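
    The kappa statistic reported above corrects observed agreement for chance agreement; a minimal sketch of computing it for one pair of raters is shown below (scikit-learn), with invented category labels that echo the TBC classes rather than the study's actual data.

        # Cohen's kappa for one pair of raters (illustrative labels).
        from sklearn.metrics import cohen_kappa_score

        rater_a = ["manip", "stab", "stab", "exercise", "traction", "stab"]
        rater_b = ["manip", "stab", "exercise", "exercise", "traction", "manip"]
        print(round(cohen_kappa_score(rater_a, rater_b), 2))  # -> 0.57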