WorldWideScience

Sample records for agent based classification

  1. An Agent Based Classification Model

    CERN Document Server

    Gu, Feng; Greensmith, Julie

    2009-01-01

    The major function of this model is to access the UCI Wisconsin Breast Cancer data set [1] and classify the data items into two categories, normal and anomalous. This kind of classification can be referred to as anomaly detection, which discriminates anomalous behaviour from normal behaviour in computer systems. One popular solution for anomaly detection is Artificial Immune Systems (AIS). AIS are adaptive systems inspired by theoretical immunology and observed immune functions, principles and models, which are applied to problem solving. The Dendritic Cell Algorithm (DCA) [2] is an AIS algorithm developed specifically for anomaly detection. It has been successfully applied to intrusion detection in computer security. It is believed that agent-based modelling is an ideal approach for implementing AIS, as intelligent agents could be the perfect representations of immune entities in AIS. This model evaluates the feasibility of re-implementing the DCA in an agent-based simulation environment ...
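
    A minimal, hypothetical sketch of DCA-style anomaly scoring may make the idea concrete: dendritic-cell agents sample items, accumulate "danger" and "safe" signals, and each item is scored by how often it was presented in a mature (anomalous) context. The function name, parameters and the (danger, safe) item encoding below are illustrative assumptions, not the paper's implementation.

    ```python
    import random

    # Heavily simplified, DCA-inspired anomaly scoring -- an illustrative
    # sketch, not the published Dendritic Cell Algorithm. Deriving danger/safe
    # signals from the Wisconsin features is domain knowledge omitted here.
    def dca_score(items, n_cells=10, migration_threshold=5.0):
        """Return per-item MCAV-like scores in [0, 1]; higher = more anomalous."""
        presentations = {i: [0, 0] for i in range(len(items))}   # [mature, total]
        cells = [{"csm": 0.0, "k": 0.0, "sampled": []} for _ in range(n_cells)]
        for idx, (danger, safe) in enumerate(items):
            cell = random.choice(cells)           # a cell samples this item
            cell["sampled"].append(idx)
            cell["csm"] += danger + safe          # cumulative stimulation
            cell["k"] += danger - safe            # context: > 0 leans "mature"
            if cell["csm"] >= migration_threshold:   # cell migrates and presents
                mature = cell["k"] > 0
                for j in cell["sampled"]:
                    presentations[j][0] += int(mature)
                    presentations[j][1] += 1
                cell.update(csm=0.0, k=0.0, sampled=[])
        return {i: (m / t if t else 0.0) for i, (m, t) in presentations.items()}

    # Items as hypothetical (danger_signal, safe_signal) pairs
    scores = dca_score([(0.2, 1.0), (0.9, 0.1), (0.8, 0.2), (0.1, 1.2)])
    anomalous = [i for i, s in scores.items() if s > 0.5]
    ```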

  2. AN AGENT BASED FRAMEWORK FOR SENTIMENT CLASSIFICATION OF ONLINE REVIEWS USING ONTOLOGY

    Directory of Open Access Journals (Sweden)

    P. Kalaivani

    2014-01-01

    In this study, we design and develop an agent-based framework for sentiment classification of online reviews using ontology; the book-review ranking is based on the sentiment classification result. We propose a novel approach, built on the JADE platform, for non-visual automatic sentiment classification. Descriptions of the book-review ranking are generated from ontology-based mapping. The framework employs a data extraction agent to retrieve book comments, i.e., the user reviews, from the specified blogs. The second agent is the recommendation agent, in which domain ontology is used to identify domain-related features in comments. The third agent is the feature selection agent, in which the XML document content is split into single sentences and each word in a sentence is mapped to the ontology. A mapping process identifies the domain-related sentences in that context. These processes are used for ranking the book results based on customer reviews. The book-review ranking system can easily be extended to other product reviews.

  3. Odor Classification using Agent Technology

    Directory of Open Access Journals (Sweden)

    Sigeru OMATU

    2014-03-01

    In order to measure and classify odors, Quartz Crystal Microbalance (QCM) sensors can be used. In the present study, seven QCM sensors and three different odors are used. The system has been developed as a virtual organization of agents using an agent platform called PANGEA (Platform for Automatic coNstruction of orGanizations of intElligent Agents), a platform for developing open multi-agent systems, specifically those including organizational aspects. The main reason for the use of agents is the scalability of the platform, i.e. the way in which it models the services. The system models functionalities as services inside the agents, or as Service Oriented Architecture (SOA) compliant services using Web Services. This allows the odor classification system to be adapted to new algorithms, tools and classification techniques.

  4. Towards A Multi Agent System Based Data Mining For Proteins Prediction And Classification

    Directory of Open Access Journals (Sweden)

    Mohammad Khaled Awwad Al-Maghasbeh

    2015-08-01

    To understand the structure-function paradigm, a new algorithm for protein classification and prediction is proposed in this paper. It uses a multi-agent system, a technique that represents a new paradigm for conceptualizing, designing and implementing software systems, to predict and classify protein structures. For classifying the proteins, a support vector machine (SVM) has been developed to extract features from the protein sequences. This paper describes a method for predicting and classifying the secondary structure of proteins. SVM modules were developed using multi-agent system principles for predicting proteins and their function, and achieved a maximum accuracy, specificity and sensitivity of 92%, 94.09% and 91.59%, respectively. The proposed algorithm provides a good understanding of protein structure, which has a positive impact on biological science, especially on understanding the behaviour of, and the relationships between, proteins.
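
    The reported accuracy, specificity and sensitivity are conventionally computed from a confusion matrix; the short sketch below shows that standard computation (generic, not code from the paper).

    ```python
    def confusion_metrics(y_true, y_pred, positive=1):
        """Accuracy, sensitivity and specificity from binary predictions."""
        tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
        tn = sum(t != positive and p != positive for t, p in zip(y_true, y_pred))
        fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))
        fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
        accuracy = (tp + tn) / len(y_true)
        sensitivity = tp / (tp + fn) if tp + fn else 0.0   # true-positive rate
        specificity = tn / (tn + fp) if tn + fp else 0.0   # true-negative rate
        return accuracy, sensitivity, specificity
    ```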

  5. Multi-Agent Information Classification Using Dynamic Acquaintance Lists.

    Science.gov (United States)

    Mukhopadhyay, Snehasis; Peng, Shengquan; Raje, Rajeev; Palakal, Mathew; Mostafa, Javed

    2003-01-01

    Discussion of automated information services focuses on information classification and collaborative agents, i.e. intelligent computer programs. Highlights include multi-agent systems; distributed artificial intelligence; thesauri; document representation and classification; agent modeling; acquaintances, or remote agents discovered through…

  6. Agent Collaborative Target Localization and Classification in Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Sheng Wang

    2007-07-01

    Wireless sensor networks (WSNs) are autonomous networks that have been frequently deployed to collaboratively perform target localization and classification tasks. Their autonomous and collaborative features resemble the characteristics of agents. Such similarities inspire the development of the heterogeneous agent architecture for WSNs proposed in this paper. The proposed agent architecture views a WSN as a multi-agent system, and mobile agents are employed to reduce in-network communication. According to the architecture, an energy-based acoustic localization algorithm is proposed. In localization, an estimate of the target location is obtained by steepest descent search. The search algorithm adapts to measurement environments by dynamically adjusting its termination condition. With the agent architecture, target classification is accomplished by distributed support vector machines (SVM). Mobile agents are employed for feature extraction and distributed SVM learning to reduce communication load. Desirable learning performance is guaranteed by combining support vectors and convex hull vectors. Fusion algorithms are designed to merge SVM classification decisions made from various modalities. Real-world experiments with MICAz sensor nodes are conducted for vehicle localization and classification. Experimental results show that the proposed agent architecture remarkably facilitates WSN designs and algorithm implementation. The localization and classification algorithms also prove to be accurate and energy efficient.
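
    The abstract's energy-based localization by steepest descent can be illustrated with the standard acoustic energy-decay model y_i = S / ||x - s_i||²; the cost function, fixed learning rate and simple termination test below are illustrative assumptions, not the authors' adaptive algorithm.

    ```python
    import numpy as np

    def localize(sensors, energies, source_energy=1.0, lr=1e-3, tol=1e-9, max_iter=10000):
        """Steepest-descent fit of the target position under the energy-decay
        model y_i = S / ||x - s_i||^2, matching the model-implied squared distances."""
        implied_d2 = source_energy / energies        # model-implied ||x - s_i||^2
        x = sensors.mean(axis=0)                     # start at the sensor centroid
        for _ in range(max_iter):
            diff = x - sensors                       # shape (N, 2)
            resid = (diff ** 2).sum(axis=1) - implied_d2
            grad = 4 * resid @ diff                  # gradient of the sum of squares
            if np.linalg.norm(lr * grad) < tol:      # simple termination condition
                break
            x = x - lr * grad
        return x

    # Synthetic check: 4 sensors, target at (2, 3), ideal (noise-free) energies
    sensors = np.array([[0., 0.], [5., 0.], [0., 5.], [5., 5.]])
    target = np.array([2., 3.])
    energies = 1.0 / ((sensors - target) ** 2).sum(axis=1)
    print(localize(sensors, energies))               # approaches (2, 3)
    ```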

  7. Social Network Style Classification Method Based on Multi-agent Theory

    Institute of Scientific and Technical Information of China (English)

    吴家菁; 王杨; 闫小敬; 赵传信; 陈付龙

    2014-01-01

    Stylistic classification in social networks currently suffers from difficult feature extraction and a lack of suitable classification methods. Considering the diversity, multiple attribution and dynamic nature of network writing styles, a multi-agent-based method combining attribute fusion and thesaurus association is proposed, starting from feature extraction. First, basic attributes such as feature keywords and word senses are extracted. Then a multi-agent fusion classification model is established through the interaction of multiple agents, and the corresponding algorithm is given. Experimental results show that, compared with a traditional single classifier and other multi-classifier fusion methods, this method not only achieves high-precision classification of network writing styles through semantic feature extraction, but also automates social network style classification, offering higher classification accuracy and stability.

  8. Cluster Based Text Classification Model

    DEFF Research Database (Denmark)

    Nizamani, Sarwat; Memon, Nasrullah; Wiil, Uffe Kock

    2011-01-01

    We propose a cluster based classification model for suspicious email detection and other text classification tasks. The text classification tasks comprise many training examples that require a complex classification model. Using clusters for classification makes the model simpler and increases...

  9. Pitch Based Sound Classification

    DEFF Research Database (Denmark)

    Nielsen, Andreas Brinch; Hansen, Lars Kai; Kjems, U

    2006-01-01

    A sound classification model is presented that can classify signals into music, noise and speech. The model extracts the pitch of the signal using the harmonic product spectrum. Based on the pitch estimate and a pitch error measure, features are created and used in a probabilistic model with a soft-max output function. Both linear and quadratic inputs are used. The model is trained on 2 hours of sound and tested on publicly available data. A test classification error below 0.05 with 1 s classification windows is achieved. Furthermore, it is shown that linear input performs as well as quadratic, and that even though classification gets marginally better, not much is achieved by increasing the window size beyond 1 s.
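
    The harmonic product spectrum named in the abstract is compact enough to sketch: the magnitude spectrum is downsampled by successive integer factors and multiplied, so only the fundamental survives the product. The window choice and harmonic count below are assumptions, not the paper's settings.

    ```python
    import numpy as np

    def hps_pitch(signal, fs, n_harmonics=4):
        """Pitch estimate via the harmonic product spectrum (HPS)."""
        spectrum = np.abs(np.fft.rfft(signal * np.hanning(len(signal))))
        hps = spectrum.copy()
        for h in range(2, n_harmonics + 1):
            decimated = spectrum[::h]           # spectrum compressed by factor h
            hps[:len(decimated)] *= decimated   # harmonics line up at f0
        peak = np.argmax(hps[1:]) + 1           # skip the DC bin
        return peak * fs / len(signal)

    # Harmonic-rich test tone at 220 Hz (1 s at 8 kHz gives 1 Hz bin resolution)
    fs = 8000
    t = np.arange(fs) / fs
    tone = sum(np.sin(2 * np.pi * 220 * k * t) / k for k in range(1, 5))
    print(hps_pitch(tone, fs))                  # prints approximately 220.0
    ```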

  10. Classification and basic properties of contrast agents for magnetic resonance imaging.

    Science.gov (United States)

    Geraldes, Carlos F G C; Laurent, Sophie

    2009-01-01

    A comprehensive classification of contrast agents currently used or under development for magnetic resonance imaging (MRI) is presented. Agents based on small chelates, macromolecular systems, iron oxides and other nanosystems, as well as responsive, chemical exchange saturation transfer (CEST) and hyperpolarization agents are covered in order to discuss the various possibilities of using MRI as a molecular imaging technique. The classification includes composition, magnetic properties, biodistribution and imaging applications. Chemical compositions of various classes of MRI contrast agents are tabulated, and their magnetic status including diamagnetic, paramagnetic and superparamagnetic are outlined. Classification according to biodistribution covers all types of MRI contrast agents including, among others, extracellular, blood pool, polymeric, particulate, responsive, oral, and organ specific (hepatobiliary, RES, lymph nodes, bone marrow and brain). Various targeting strategies of molecular, macromolecular and particulate carriers are also illustrated.

  11. Agent-Based Optimization

    CERN Document Server

    Jędrzejowicz, Piotr; Kacprzyk, Janusz

    2013-01-01

    This volume presents a collection of original research works by leading specialists focusing on novel and promising approaches in which the multi-agent system paradigm is used to support, enhance or replace traditional approaches to solving difficult optimization problems. The editors have invited several well-known specialists to present their solutions, tools, and models falling under the common denominator of agent-based optimization. The book consists of eight chapters covering examples of application of the multi-agent paradigm and respective customized tools to solve difficult optimization problems arising in different areas such as machine learning, scheduling, transportation and, more generally, distributed and cooperative problem solving.

  12. A new multi criteria classification approach in a multi agent system applied to SEEG analysis.

    Science.gov (United States)

    Kinié, A; Ndiaye, M; Montois, J J; Jacquelet, Y

    2007-01-01

    This work is focused on the study of the organization of SEEG signals during epileptic seizures using a multi-agent system approach. This approach is based on cooperative mechanisms of self-organization at the micro level and the emergence of a global function at the macro level. In order to evaluate this approach, we propose a distributed collaborative approach for the classification of the signals of interest. This new multi-criteria classification method is able to provide a relevant organization of brain area structures and to bring out elements of epileptogenic networks. The method is compared to another classification approach, fuzzy classification, and gives better results when applied to SEEG signals.

  13. Agent-Based Cloud Computing

    OpenAIRE

    Sim, Kwang Mong

    2012-01-01

    Agent-based cloud computing is concerned with the design and development of software agents for bolstering cloud service discovery, service negotiation, and service composition. The significance of this work is introducing an agent-based paradigm for constructing software tools and testbeds for cloud resource management. The novel contributions of this work include: 1) developing Cloudle: an agent-based search engine for cloud service discovery, 2) showing that agent-based negotiation...

  14. A Novel Approach for Cardiac Disease Prediction and Classification Using Intelligent Agents

    CERN Document Server

    Kuttikrishnan, Murugesan

    2010-01-01

    The goal is to develop a novel approach for cardiac disease prediction and diagnosis using intelligent agents. Initially, the symptoms are preprocessed using filter- and wrapper-based agents. The filter removes missing or irrelevant symptoms; the wrapper extracts data from the data set according to threshold limits. The dependency of each symptom is identified using a dependency-checker agent. The classification is based on the prior and posterior probabilities of the symptoms given the evidence value. Finally, the symptoms are classified into five classes, namely absence, starting, mild, moderate and serious. Using the cooperative approach, the cardiac problem is solved and verified.
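
    The classification "based on the prior and posterior probabilities of the symptoms" suggests a Bayesian scheme over the five classes; below is a minimal naive-Bayes-style sketch under that reading, with discrete symptom values and crude add-one smoothing as assumptions (the paper's actual probability model is not detailed in the abstract).

    ```python
    from collections import Counter, defaultdict

    CLASSES = ["absence", "starting", "mild", "moderate", "serious"]

    def train(records):
        """records: list of (symptom_dict, class_label) with discrete values."""
        priors = Counter(label for _, label in records)
        likelihood = defaultdict(Counter)    # (symptom, value) -> class counts
        for symptoms, label in records:
            for s, v in symptoms.items():
                likelihood[(s, v)][label] += 1
        return priors, likelihood

    def classify(symptoms, priors, likelihood):
        total = sum(priors.values())
        scores = {}
        for c in CLASSES:
            score = priors[c] / total                       # prior P(class)
            for s, v in symptoms.items():
                counts = likelihood[(s, v)]
                score *= (counts[c] + 1) / (priors[c] + 1)  # crude add-one smoothing
            scores[c] = score                               # unnormalized posterior
        return max(scores, key=scores.get)
    ```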

  15. Biogeography based Satellite Image Classification

    CERN Document Server

    Panchal, V K; Kaur, Navdeep; Kundra, Harish

    2009-01-01

    Biogeography is the study of the geographical distribution of biological organisms. The mindset of the engineer is that we can learn from nature. Biogeography Based Optimization (BBO) is a burgeoning nature-inspired technique for finding the optimal solution of a problem. Satellite image classification is an important task because it is the only way we can know about the land cover map of inaccessible areas. Though satellite images have been classified in the past by using various techniques, researchers are always seeking alternative strategies for satellite image classification, so that they may be prepared to select the most appropriate technique for the feature extraction task at hand. This paper is focused on classification of the satellite image of a particular land cover using the theory of Biogeography Based Optimization. The original BBO algorithm does not have the inbuilt property of clustering, which is required during image classification. Hence modifications have been proposed to the original algorithm and...

  16. Multi-agent Negotiation Mechanisms for Statistical Target Classification in Wireless Multimedia Sensor Networks

    Directory of Open Access Journals (Sweden)

    Sheng Wang

    2007-10-01

    The recent availability of low-cost and miniaturized hardware has allowed wireless sensor networks (WSNs) to retrieve audio and video data in real-world applications, which has fostered the development of wireless multimedia sensor networks (WMSNs). Resource constraints and challenging multimedia data volumes make the development of efficient algorithms to perform in-network processing of multimedia contents imperative. This paper proposes solving problems in the domain of WMSNs from the perspective of multi-agent systems. The multi-agent framework enables flexible network configuration and efficient collaborative in-network processing. The focus is placed on target classification in WMSNs where audio information is retrieved by microphones. To deal with the uncertainties related to audio information retrieval, the statistical approaches of power spectral density estimates, principal component analysis and Gaussian process classification are employed. A multi-agent negotiation mechanism is specially developed to efficiently utilize limited resources and simultaneously enhance classification accuracy and reliability. The negotiation is composed of two phases, where an auction-based approach is first exploited to allocate the classification task among the agents and then individual agent decisions are combined by the committee decision mechanism. Simulation experiments with real-world data are conducted and the results show that the proposed statistical approaches and negotiation mechanism not only reduce memory and computation requirements ...
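
    The two negotiation phases can be sketched abstractly: a sealed-bid auction allocates the classification task, then a committee combines individual decisions. The bid formula (residual energy times signal quality) and the confidence weighting below are hypothetical stand-ins, not the paper's actual design.

    ```python
    def allocate_by_auction(agents):
        """Phase 1: sealed-bid auction; the highest bidder takes the task.
        The bid formula here (energy x SNR) is an assumed placeholder."""
        bids = {a["id"]: a["energy"] * a["snr"] for a in agents}
        return max(bids, key=bids.get)

    def committee_decision(votes):
        """Phase 2: combine per-agent (label, confidence) decisions by weighted vote."""
        tally = {}
        for label, confidence in votes:
            tally[label] = tally.get(label, 0.0) + confidence
        return max(tally, key=tally.get)

    winner = allocate_by_auction([
        {"id": "node-a", "energy": 0.8, "snr": 2.1},
        {"id": "node-b", "energy": 0.5, "snr": 3.0},
    ])
    label = committee_decision([("truck", 0.7), ("car", 0.4), ("truck", 0.55)])
    ```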

  17. Modulation classification based on spectrogram

    Institute of Scientific and Technical Information of China (English)

    2005-01-01

    The aim of modulation classification (MC) is to identify the modulation type of a communication signal. It plays an important role in many cooperative or noncooperative communication applications. Three spectrogram-based modulation classification methods are proposed. Their recognition scope and performance are investigated or evaluated by theoretical analysis and extensive simulation studies. The method taking moment-like features is robust to frequency offset, while the other two, which make use of principal component analysis (PCA) with different transformation inputs, can achieve satisfactory accuracy even at low SNR (as low as 2 dB). Due to the properties of the spectrogram, the statistical pattern recognition techniques, and the image preprocessing steps, all of our methods are insensitive to unknown phase and frequency offsets, timing errors, and the arriving sequence of symbols.

  18. Agent Based Individual Traffic Guidance

    DEFF Research Database (Denmark)

    Wanscher, Jørgen

    This thesis investigates the possibilities in applying Operations Research (OR) to autonomous vehicular traffic. The explicit difference to most other research today is that we presume that an agent is present in every vehicle - hence Agent Based Individual Traffic guidance (ABIT). The next evolutionary step for the in-vehicle route planners is the introduction of two-way communication. We presume that the agent is capable of exactly this. Based on this presumption we discuss the possibilities and define a taxonomy and use this to discuss the ABIT system. Based on a set of scenarios we conclude...

  19. Agent Based Multiviews Requirements Model

    Institute of Scientific and Technical Information of China (English)

    2006-01-01

    Based on current research on viewpoint-oriented requirements engineering and intelligent agents, we present the concept of a viewpoint agent and its abstract model based on a meta-language for multi-view requirements engineering. It provides a basis for consistency checking and integration of different viewpoint requirements; at the same time, this checking and integration can be realized automatically by virtue of the intelligent agent's autonomy, proactiveness and social ability. Finally, we introduce the practical application of the model through a case study of a data flow diagram.

  20. Multi-robot system learning based on evolutionary classification

    Directory of Open Access Journals (Sweden)

    Manko Sergey

    2016-01-01

    This paper presents a novel machine learning method for the agents of a multi-robot system. The learning process is based on knowledge discovery through continual analysis of robot sensory information. We demonstrate that classification trees and evolutionary forests may be a basis for the creation of autonomous robots capable both of learning and of knowledge exchange with other agents in a multi-robot system. The results of experimental studies confirm the effectiveness of the proposed approach.

  1. Classification based polynomial image interpolation

    Science.gov (United States)

    Lenke, Sebastian; Schröder, Hartmut

    2008-02-01

    Due to the fast migration of high-resolution displays to home and office environments there is a strong demand for high-quality picture scaling. This is caused on the one hand by large picture sizes and on the other hand by an enhanced visibility of picture artifacts on these displays [1]. There are many proposals for enhanced spatial interpolation adaptively matched to picture contents, e.g. edges. The drawback of these approaches is the normally integer and often limited interpolation factor. In order to achieve rational factors there exist combinations of adaptive and non-adaptive linear filters, but due to the non-adaptive step the overall quality is notably limited. We present in this paper a content-adaptive polyphase interpolation method which uses "offline" trained filter coefficients and an "online" linear filtering depending on a simple classification of the input situation. Furthermore we present a new approach to a content-adaptive interpolation polynomial, which allows arbitrary polyphase interpolation factors at runtime and further improves the overall interpolation quality. The main goal of our new approach is to optimize interpolation quality by adapting higher-order polynomials directly to the image content. In addition we derive filter constraints for enhanced picture quality. Furthermore we extend the classification-based filtering to the temporal dimension in order to use it for intermediate image interpolation.
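
    A one-dimensional toy version of the classify-then-filter idea: the local neighborhood is classified (here crudely into flat vs. edge) and a per-class filter, which in the paper would be trained offline, produces the interpolated sample. The two classes, the threshold and the filter taps below are placeholders, not the trained coefficients.

    ```python
    import numpy as np

    # Illustrative per-class filters; real systems train these offline.
    FILTERS = {
        "flat": np.array([-0.0625, 0.5625, 0.5625, -0.0625]),  # cubic-like kernel
        "edge": np.array([0.0, 0.5, 0.5, 0.0]),                # conservative average
    }

    def classify_patch(patch, edge_threshold=30.0):
        """Crude classification of the local input situation."""
        return "edge" if np.abs(np.diff(patch)).max() > edge_threshold else "flat"

    def interpolate_midpoints(signal):
        """Insert one sample between neighbors using the class-matched filter."""
        out = []
        for i in range(1, len(signal) - 2):
            patch = signal[i - 1:i + 3]                  # 4-tap neighborhood
            out.append(FILTERS[classify_patch(patch)] @ patch)
        return np.array(out)

    upsampled = interpolate_midpoints(np.array([10., 12., 11., 80., 82., 81.]))
    ```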

  2. From fault classification to fault tolerance for multi-agent systems

    CERN Document Server

    Potiron, Katia; Taillibert, Patrick

    2013-01-01

    Faults are a concern for Multi-Agent Systems (MAS) designers, especially if the MAS are built for industrial or military use because there must be some guarantee of dependability. Some fault classification exists for classical systems, and is used to define faults. When dependability is at stake, such fault classification may be used from the beginning of the system's conception to define fault classes and specify which types of faults are expected. Thus, one may want to use fault classification for MAS; however, From Fault Classification to Fault Tolerance for Multi-Agent Systems argues that

  3. Multi-Agent Pathfinding with n Agents on Graphs with n Vertices: Combinatorial Classification and Tight Algorithmic Bounds

    DEFF Research Database (Denmark)

    Förster, Klaus-Tycho; Groner, Linus; Hoefler, Torsten

    2017-01-01

    We investigate the multi-agent pathfinding (MAPF) problem with n agents on graphs with n vertices: each agent has a unique start and goal vertex, with the objective of moving all agents in parallel movements to their goals such that each vertex and each edge may only be used by one agent at a time. We give a combinatorial classification of all graphs where this problem is solvable in general, including cases where the solvability depends on the initial agent placement. Furthermore, we present an algorithm solving the MAPF problem in our setting, requiring O(n²) rounds, or O(n³) moves of individual agents. Complementing these results, we show that there are graphs where Ω(n²) rounds and Ω(n³) moves are required for any algorithm.

  4. Review of therapeutic agents for burns pruritus and protocols for management in adult and paediatric patients using the GRADE classification

    Directory of Open Access Journals (Sweden)

    Goutos Ioannis

    2010-10-01

    To review the current evidence on therapeutic agents for burns pruritus and use the Grading of Recommendations, Assessment, Development and Evaluation (GRADE) classification to propose therapeutic protocols for adult and paediatric patients. All published interventions for burns pruritus were analysed by a multidisciplinary panel of burns specialists following the GRADE classification to rate individual agents. Following the collation of results and panel discussion, consensus protocols are presented. Twenty-three studies appraising therapeutic agents in the burns literature were identified. The majority of these studies (16 out of 23) are of an observational nature, making an evidence-based approach to defining optimal therapy not feasible. Our multidisciplinary approach employing the GRADE classification recommends the use of antihistamines (cetirizine and cimetidine) and gabapentin as the first-line pharmacological agents for both adult and paediatric patients. Ondansetron and loratadine are the second-line medications in our protocols. We additionally recommend a variety of non-pharmacological adjuncts for the perusal of clinicians in order to maximise symptomatic relief in patients troubled with postburn itch. Most studies in the subject area lack sufficient statistical power to dictate a 'gold standard' treatment agent for burns itch. We encourage clinicians to employ the GRADE system in order to delineate the most appropriate therapeutic approach for burns pruritus until further research elucidates the most efficacious interventions. This widely adopted classification empowers burns clinicians to tailor therapeutic regimens according to current evidence, patient values, risks and resource considerations in different medical environments.

  5. Cluster-based adaptive metric classification

    NARCIS (Netherlands)

    Giotis, Ioannis; Petkov, Nicolai

    2012-01-01

    Introducing an adaptive metric has been shown to improve the results of distance-based classification algorithms. Existing methods are often computationally intensive, either in the training or in the classification phase. We present a novel algorithm that we call Cluster-Based Adaptive Metric (CLAM) classification...

  6. Ontology-Based Classification System Development Methodology

    Directory of Open Access Journals (Sweden)

    Grabusts Peter

    2015-12-01

    The aim of the article is to analyse and develop an ontology-based classification system methodology that uses decision tree learning with statement-propositionalized attributes. Classical decision tree learning algorithms, as well as decision tree learning with taxonomy and propositionalized attributes, have been observed. Thus, domain ontology can be extracted from the data sets and used for data classification with the help of a decision tree. The use of ontology methods in decision tree-based classification systems has been researched. Using such methodologies, the classification accuracy in some cases can be improved.

  7. Empirically Based, Agent-based models

    Directory of Open Access Journals (Sweden)

    Elinor Ostrom

    2006-12-01

    There is an increasing drive to combine agent-based models with empirical methods. An overview is provided of the various empirical methods that are used for different kinds of questions. Four categories of empirical approaches are identified in which agent-based models have been empirically tested: case studies, stylized facts, role-playing games, and laboratory experiments. We discuss how these different types of empirical studies can be combined. The various ways empirical techniques are used illustrate the main challenges of contemporary social sciences: (1) how to develop models that are generalizable and still applicable in specific cases, and (2) how to scale up the processes of interactions of a few agents to interactions among many agents.

  8. Efficient Model for Distributed Computing based on Smart Embedded Agent

    Directory of Open Access Journals (Sweden)

    Hassna Bensag

    2017-02-01

    Technological advances in embedded computing have exposed humans to an increasing intrusion of computing in their day-to-day life (e.g. smart devices). Cooperation, autonomy, and mobility have made the agent a promising mechanism for embedded devices. This work aims to present a new model of an embedded agent designed to be implemented in smart devices in order to achieve parallel tasks in a distributed environment. To validate the proposed model, a case study was developed for medical image segmentation using cardiac Magnetic Resonance Imaging (MRI). In the first part of this paper, we focus on implementing the parallel classification algorithm using the C-means method in embedded systems. We then propose a new concept of distributed classification using multi-agent systems based on JADE and Raspberry Pi 2 devices.
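
    As a reference point for the classification step, here is a plain, single-process fuzzy C-means sketch; the paper's contribution is distributing this kind of computation across JADE agents on Raspberry Pi devices, which is omitted here. Parameters are standard defaults, not the authors' settings.

    ```python
    import numpy as np

    def fuzzy_c_means(X, c=2, m=2.0, n_iter=100, seed=0):
        """Standard fuzzy C-means: alternate membership and center updates."""
        rng = np.random.default_rng(seed)
        U = rng.random((len(X), c))
        U /= U.sum(axis=1, keepdims=True)       # memberships sum to 1 per sample
        for _ in range(n_iter):
            W = U ** m                          # fuzzified memberships
            centers = (W.T @ X) / W.sum(axis=0)[:, None]
            d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-9
            U = 1.0 / d ** (2 / (m - 1))        # closer centers -> larger membership
            U /= U.sum(axis=1, keepdims=True)
        return centers, U

    rng = np.random.default_rng(1)
    X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(5, 1, (50, 2))])
    centers, U = fuzzy_c_means(X, c=2)
    labels = U.argmax(axis=1)                   # hard labels from fuzzy memberships
    ```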

  9. Ontology-Based Classification System Development Methodology

    OpenAIRE

    2015-01-01

    The aim of the article is to analyse and develop an ontology-based classification system methodology that uses decision tree learning with statement propositionalized attributes. Classical decision tree learning algorithms, as well as decision tree learning with taxonomy and propositionalized attributes have been observed. Thus, domain ontology can be extracted from the data sets and can be used for data classification with the help of a decision tree. The use of ontology methods in decision ...

  10. An Authentication Technique Based on Classification

    Institute of Scientific and Technical Information of China (English)

    李钢; 杨杰

    2004-01-01

    We present a novel watermarking approach based on classification for authentication, in which a watermark is embedded into the host image. When the marked image is modified, the extracted watermark also differs from the original watermark, and different kinds of modification lead to different extracted watermarks. In this paper, different kinds of modification are considered as classes, and we use a classification algorithm to recognize the modifications with high probability. Simulation results show that the proposed method is promising and effective.

  11. Distance-based features in pattern classification

    Directory of Open Access Journals (Sweden)

    Lin Wei-Yang

    2011-01-01

    In data mining and pattern classification, feature extraction and representation methods are a very important step, since the extracted features have a direct and significant impact on classification accuracy. In the literature, a number of novel feature extraction and representation methods have been proposed. However, many of them only focus on specific domain problems. In this article, we introduce a novel distance-based feature extraction method for various pattern classification problems. Specifically, two distances are extracted, based on (1) the distance between the data and its intra-cluster center and (2) the distance between the data and its extra-cluster centers. Experiments based on ten datasets containing different numbers of classes, samples, and dimensions are examined. The experimental results using naïve Bayes, k-NN, and SVM classifiers show that concatenating the original features provided by the datasets with the distance-based features can improve classification accuracy, except on image-related datasets. In particular, the distance-based features are suitable for datasets which have smaller numbers of classes and samples and lower feature dimensionality. Moreover, two datasets which have similar characteristics are further used to validate this finding. The result is consistent with the first experiment in that adding the distance-based features can improve the classification performance.
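
    The two described distances are easy to sketch. Below, class means stand in for the cluster centers (an assumption; the paper derives centers by clustering), and the two distances are appended to the original feature matrix.

    ```python
    import numpy as np

    def distance_features(X, y):
        """Append per-sample distances to the intra-cluster (own-class) center
        and the mean distance to the extra-cluster (other-class) centers."""
        classes = np.unique(y)
        centers = {c: X[y == c].mean(axis=0) for c in classes}
        intra = np.array([np.linalg.norm(x - centers[c]) for x, c in zip(X, y)])
        extra = np.array([
            np.mean([np.linalg.norm(x - centers[o]) for o in classes if o != c])
            for x, c in zip(X, y)
        ])
        return np.column_stack([X, intra, extra])   # original + two new features
    ```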

  12. Texture Classification based on Gabor Wavelet

    Directory of Open Access Journals (Sweden)

    Amandeep Kaur

    2012-07-01

    This paper presents a comparison of texture classification algorithms based on Gabor wavelets. The focus of this paper is on the feature extraction scheme for texture classification. The texture features of an image can be classified using texture descriptors. In this paper we have used the Homogeneous Texture Descriptor, which uses the Gabor wavelet concept. For texture classification, we have used an online texture database, Brodatz's database, and three advanced, well-known classifiers: Support Vector Machine, the K-nearest neighbor method and the decision tree induction method. The results show that classification using Support Vector Machines gives better results compared to the other classifiers. It can accurately discriminate between testing image data and training data.
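
    A small Gabor filter bank with mean and standard deviation of the responses per orientation is a common way to realize such texture features; the kernel parameters below are illustrative and do not reproduce the Homogeneous Texture Descriptor's exact frequency layout or normalization.

    ```python
    import numpy as np
    from scipy.signal import fftconvolve

    def gabor_kernel(theta, lam=8.0, sigma=4.0, gamma=0.5, size=21):
        """Real part of a Gabor filter at orientation theta (radians)."""
        half = size // 2
        yy, xx = np.mgrid[-half:half + 1, -half:half + 1]
        xr = xx * np.cos(theta) + yy * np.sin(theta)
        yr = -xx * np.sin(theta) + yy * np.cos(theta)
        envelope = np.exp(-(xr**2 + (gamma * yr)**2) / (2 * sigma**2))
        return envelope * np.cos(2 * np.pi * xr / lam)

    def texture_features(image, n_orient=4):
        """Mean/std of Gabor responses per orientation as a texture descriptor."""
        feats = []
        for k in range(n_orient):
            resp = fftconvolve(image, gabor_kernel(np.pi * k / n_orient), mode="same")
            feats += [np.abs(resp).mean(), resp.std()]
        return np.array(feats)
    ```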

  13. Inventory classification based on decoupling points

    Directory of Open Access Journals (Sweden)

    Joakim Wikner

    2015-01-01

    The ideal state of continuous one-piece flow may never be achieved. Still, the logistics manager can improve the flow by carefully positioning inventory to buffer against variations. Strategies such as lean, postponement, mass customization, and outsourcing all rely on strategic positioning of decoupling points to separate forecast-driven from customer-order-driven flows. Planning and scheduling of the flow are also based on classification of decoupling points as master scheduled or not. A comprehensive classification scheme for these types of decoupling points is introduced. The approach rests on identification of flows as being either demand based or supply based. The demand or supply is then combined with exogenous factors, classified as independent, or endogenous factors, classified as dependent. As a result, eight types of strategic as well as tactical decoupling points are identified, resulting in a process-based framework for inventory classification that can be used for flow design.

  14. Agent-based enterprise integration

    Energy Technology Data Exchange (ETDEWEB)

    N. M. Berry; C. M. Pancerella

    1998-12-01

    The authors are developing and deploying software agents in an enterprise information architecture such that the agents manage enterprise resources and facilitate user interaction with these resources. The enterprise agents are built on top of a robust software architecture for data exchange and tool integration across heterogeneous hardware and software. The resulting distributed multi-agent system serves as a method of enhancing enterprises in the following ways: providing users with knowledge about enterprise resources and applications; accessing the dynamically changing enterprise; locating enterprise applications and services; and improving search capabilities for applications and data. Furthermore, agents can access non-agents (i.e., databases and tools) through the enterprise framework. The ultimate target of the effort is the user; they are attempting to increase user productivity in the enterprise. This paper describes their design and early implementation and discusses the planned future work.

  15. Model Based Testing for Agent Systems

    Science.gov (United States)

    Zhang, Zhiyong; Thangarajah, John; Padgham, Lin

    Although agent technology is gaining worldwide popularity, a hindrance to its uptake is the lack of proper testing mechanisms for agent-based systems. While many traditional software testing methods can be generalized to agent systems, there are many aspects that are different and which require an understanding of the underlying agent paradigm. In this paper we present certain aspects of a testing framework that we have developed for agent-based systems. The testing framework is a model-based approach using the design models of the Prometheus agent development methodology. In this paper we focus on model-based unit testing and identify the appropriate units, present mechanisms for generating suitable test cases and for determining the order in which the units are to be tested, and present a brief overview of the unit testing process and an example. Although we use the design artefacts from Prometheus, the approach is suitable for any plan- and event-based agent system.

  16. CATS-based Agents That Err

    Science.gov (United States)

    Callantine, Todd J.

    2002-01-01

    This report describes preliminary research on intelligent agents that make errors. Such agents are crucial to the development of novel agent-based techniques for assessing system safety. The agents extend an agent architecture derived from the Crew Activity Tracking System that has been used as the basis for air traffic controller agents. The report first reviews several error taxonomies. Next, it presents an overview of the air traffic controller agents, then details several mechanisms for causing the agents to err in realistic ways. The report presents a performance assessment of the error-generating agents, and identifies directions for further research. The research was supported by the System-Wide Accident Prevention element of the FAA/NASA Aviation Safety Program.

  17. Texture Image Classification Based on Gabor Wavelet

    Institute of Scientific and Technical Information of China (English)

    DENG Wei-bing; LI Hai-fei; SHI Ya-li; YANG Xiao-hui

    2014-01-01

    A texture image can be partitioned into disjoint regions of uniform texture by recognizing the class of every pixel of the image. This paper proposes a texture image classification algorithm based on Gabor wavelets. In this algorithm, the characteristics of an image are obtained from every pixel and its neighborhood, and the algorithm can transfer information between neighborhoods of different sizes. Experiments on the standard Brodatz texture image dataset show that the proposed algorithm can achieve good classification rates.

  18. Density Based Support Vector Machines for Classification

    Directory of Open Access Journals (Sweden)

    Zahra Nazari

    2015-04-01

    Support Vector Machines (SVM) are among the most successful algorithms for classification problems. An SVM learns the decision boundary from the training points of two classes (for binary classification). However, sometimes there are less meaningful samples amongst the training points, corrupted by noise or misplaced on the wrong side, called outliers. These outliers affect the margin and the classification performance, and the machine would do better to discard them. SVM, as a popular and widely used classification algorithm, is very sensitive to these outliers and lacks the ability to discard them. Many research results demonstrate this sensitivity, which is a weak point of SVM. Different approaches have been proposed to reduce the effect of outliers, but no method is suitable for all types of data sets. In this paper, the new method of Density Based SVM (DBSVM) is introduced. Population density is the basic concept used in this method, for both linear and non-linear SVM, to detect outliers. Experiments on artificial data sets, real high-dimensional benchmark data sets of liver disorder and heart disease, and data sets of new and fatigued banknotes' acoustic signals prove the efficiency of this method on noisy data classification and the better generalization it can provide compared to the standard SVM.
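
    The density idea can be sketched with a simple k-NN density estimate: discard the sparsest training points, then fit an ordinary SVM (scikit-learn here). The paper's population-density definition and the keep fraction below are assumptions, not the published DBSVM.

    ```python
    import numpy as np
    from sklearn.svm import SVC

    def density_filter(X, y, k=5, keep=0.9):
        """Drop the lowest-density points; density = inverse k-NN radius."""
        d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
        knn_radius = np.sort(d, axis=1)[:, k]     # distance to k-th neighbour
        density = 1.0 / (knn_radius + 1e-9)
        kept = np.argsort(density)[int((1 - keep) * len(X)):]  # sparsest removed
        return X[kept], y[kept]

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 2))
    y = (X[:, 0] + 0.3 * rng.normal(size=200) > 0).astype(int)  # noisy labels
    clf = SVC(kernel="rbf").fit(*density_filter(X, y))
    ```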

  19. Classification of Base Sequences BS(n+1, n)

    Directory of Open Access Journals (Sweden)

    Dragomir Ž. Ðoković

    2010-01-01

    Base sequences BS(n+1, n) are quadruples of {±1}-sequences (A; B; C; D), with A and B of length n+1 and C and D of length n, such that the sum of their nonperiodic autocorrelation functions is a δ-function. The base sequence conjecture, asserting that BS(n+1, n) exist for all n, is stronger than the famous Hadamard matrix conjecture. We introduce a new definition of equivalence for base sequences BS(n+1, n) and construct a canonical form. By using this canonical form, we have enumerated the equivalence classes of BS(n+1, n) for n ≤ 30. As the number of equivalence classes grows rapidly (but not monotonically) with n, the tables in the paper cover only the cases n ≤ 13.
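
    Written out in standard notation (a reconstruction consistent with the abstract, not copied from the paper), the defining δ-function condition on the nonperiodic autocorrelations is:

    ```latex
    % Nonperiodic autocorrelation of a {±1}-sequence A = (a_1, ..., a_m):
    N_A(s) = \sum_{i=1}^{m-s} a_i\, a_{i+s}, \qquad s = 0, 1, \ldots, m-1.

    % Base sequences BS(n+1, n): A, B of length n+1 and C, D of length n with
    N_A(s) + N_B(s) + N_C(s) + N_D(s) = 0 \qquad \text{for all } s \ge 1.
    ```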

  20. Agent Based Reasoning in Multilevel Flow Modeling

    DEFF Research Database (Denmark)

    Lind, Morten; Zhang, Xinxin

    2012-01-01

    to launch the MFM Workbench in an agent-based environment, which can compensate for disadvantages of the original software. The agent-based MFM Workbench is centered on a concept called the "Blackboard System" and uses an event-based mechanism to arrange the reasoning tasks. This design will support the new...

  1. An Agent-Based Distributed Manufacturing System

    Institute of Scientific and Technical Information of China (English)

    J.Li; J.Y.H.Fuh; Y.F.Zhang; A.Y.C.Nee

    2006-01-01

    Agent theories have shown promising capability in solving distributed complex systems ever since their development. In this paper, a multi-agent-based distributed product design and manufacturing planning system is presented. The objective of the research is to develop a distributed collaborative design environment for supporting cooperation among the existing engineering functions. In the system, the functional agents for design, manufacturability evaluation, process planning and scheduling are efficiently integrated with a facilitator agent. This paper first gives an introduction to the system structure; the definitions for each executive agent are then described, and a prototype of the proposed system is included at the end.

  2. Image-based Vehicle Classification System

    CERN Document Server

    Ng, Jun Yee

    2012-01-01

    Electronic toll collection (ETC) systems have become a common trend for toll collection on toll roads nowadays. The implementation of electronic toll collection allows vehicles to travel at low or full speed during toll payment, which helps to avoid traffic delay at the toll road. One of the major components of an electronic toll collection system is the automatic vehicle detection and classification (AVDC) system, which is important for classifying the vehicle so that the toll is charged according to the vehicle class. A vision-based vehicle classification system is one type of vehicle classification system, which adopts a camera as the input sensing device. This type of system has an advantage over the rest as it is cost-efficient, since a low-cost camera is used. The implementation of a vision-based vehicle classification system requires a lower initial investment cost and is very suitable for the toll collection trend migration in Malaysia from single ETC systems to full-scale multi-lane free flow (MLFF). This project ...

  3. Distance-based classification of keystroke dynamics

    Science.gov (United States)

    Tran Nguyen, Ngoc

    2016-07-01

    This paper uses keystroke dynamics in user authentication. The relationship between the distance metrics and the data template was analyzed for the first time, and a new distance-based algorithm for keystroke dynamics classification was proposed. The results of the experiments on the CMU keystroke dynamics benchmark dataset were evaluated with an equal error rate of 0.0614. Classifiers using the proposed distance metric outperform existing top-performing keystroke dynamics classifiers which use traditional distance metrics.
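
    For context, a common baseline on the CMU benchmark scores a login attempt by a scaled Manhattan distance to the user's timing template; the paper's proposed metric differs, so the sketch below shows only the generic distance-based scheme it improves upon.

    ```python
    import numpy as np

    def train_template(samples):
        """samples: (n, d) array of key-hold/latency timing vectors from the user."""
        return samples.mean(axis=0), samples.std(axis=0) + 1e-9

    def anomaly_score(timing, mean, std):
        """Scaled Manhattan distance to the template; lower = more user-like."""
        return np.sum(np.abs(timing - mean) / std)

    # An attempt is accepted when its score is below a threshold tuned so that
    # false accepts and false rejects balance (the equal error rate, EER).
    ```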

  4. Web-Based Computing Resource Agent Publishing

    Institute of Scientific and Technical Information of China (English)

    2000-01-01

    Web-based computing resource publishing is an efficient way to provide additional computing capacity for users who need more computing resources than they themselves can afford, by making use of idle computing resources in the Web. Extensibility and reliability are crucial for agent publishing. The parent-child agent framework and the primary-slave agent framework are proposed and discussed in detail.

  5. Agent-based modeling of sustainable behaviors

    CERN Document Server

    Sánchez-Maroño, Noelia; Fontenla-Romero, Oscar; Polhill, J; Craig, Tony; Bajo, Javier; Corchado, Juan

    2017-01-01

    Using the O.D.D. (Overview, Design concepts, Detail) protocol, this title explores the role of agent-based modeling in predicting the feasibility of various approaches to sustainability. The chapters incorporated in this volume consist of real case studies to illustrate the utility of agent-based modeling and complexity theory in discovering a path to more efficient and sustainable lifestyles. The topics covered within include: households' attitudes toward recycling, designing decision trees for representing sustainable behaviors, negotiation-based parking allocation, auction-based traffic signal control, and others. This selection of papers will be of interest to social scientists who wish to learn more about agent-based modeling as well as experts in the field of agent-based modeling.

  6. Agent based computational model of trust

    NARCIS (Netherlands)

    A. Gorobets (Alexander); B. Nooteboom (Bart)

    2004-01-01

    This paper employs the methodology of Agent-Based Computational Economics (ACE) to investigate under what conditions trust can be viable in markets. The emergence and breakdown of trust is modeled in a context of multiple buyers and suppliers. Agents adapt their trust in a partner, the w...

  7. Collaborative Representation based Classification for Face Recognition

    CERN Document Server

    Zhang, Lei; Feng, Xiangchu; Ma, Yi; Zhang, David

    2012-01-01

    By coding a query sample as a sparse linear combination of all training samples and then classifying it by evaluating which class leads to the minimal coding residual, sparse representation based classification (SRC) leads to interesting results for robust face recognition. It is widely believed that the l1-norm sparsity constraint on coding coefficients plays a key role in the success of SRC, while its use of all training samples to collaboratively represent the query sample is rather ignored. In this paper we discuss how SRC works, and show that the collaborative representation mechanism used in SRC is much more crucial to its success in face classification. SRC is a special case of collaborative representation based classification (CRC), which has various instantiations obtained by applying different norms to the coding residual and coding coefficient. More specifically, the l1 or l2 norm characterization of the coding residual is related to the robustness of CRC to outlier facial pixels, while the l1 or l2 norm c...
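
    The l2-regularized instantiation of CRC is short enough to sketch: code the query over all training samples with ridge regression, then pick the class whose samples reconstruct it with the smallest normalized residual. The matrix layout and λ below are illustrative choices.

    ```python
    import numpy as np

    def crc_classify(A, labels, y, lam=1e-3):
        """CRC with l2 coding: A is (d, n) with training samples as columns,
        labels has length n, y is the (d,) query sample."""
        n = A.shape[1]
        alpha = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ y)  # ridge coding
        residuals = {}
        for c in set(labels):
            mask = np.array([l == c for l in labels])
            r = y - A[:, mask] @ alpha[mask]        # reconstruction by class c only
            residuals[c] = np.linalg.norm(r) / (np.linalg.norm(alpha[mask]) + 1e-9)
        return min(residuals, key=residuals.get)
    ```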

  8. Texture feature based liver lesion classification

    Science.gov (United States)

    Doron, Yeela; Mayer-Wolf, Nitzan; Diamant, Idit; Greenspan, Hayit

    2014-03-01

    Liver lesion classification is a difficult clinical task. Computerized analysis can support clinical workflow by enabling more objective and reproducible evaluation. In this paper, we evaluate the contribution of several types of texture features for a computer-aided diagnostic (CAD) system which automatically classifies liver lesions from CT images. Based on the assumption that liver lesions of various classes differ in their texture characteristics, a variety of texture features were examined as lesion descriptors. Although texture features are often used for this task, there is currently a lack of detailed research focusing on the comparison across different texture features, or their combinations, on a given dataset. In this work we investigated the performance of Gray Level Co-occurrence Matrix (GLCM), Local Binary Patterns (LBP), Gabor, gray level intensity values and Gabor-based LBP (GLBP), where the features are obtained from a given lesion's region of interest (ROI). For the classification module, SVM and KNN classifiers were examined. Using a single type of texture feature, the best result, 91% accuracy, was obtained with Gabor filtering and SVM classification. Combination of Gabor, LBP and intensity features improved the results to a final accuracy of 97%.

  9. Texture classification based on EMD and FFT

    Institute of Scientific and Technical Information of China (English)

    XIONG Chang-zhen; XU Jun-yi; ZOU Jian-cheng; QI Dong-xu

    2006-01-01

    Empirical mode decomposition (EMD) is an adaptive and approximately orthogonal filtering process that reflects the human visual mechanism of differentiating textures. In this paper, we present a modified 2D EMD algorithm using the FastRBF and an appropriate number of iterations in the sifting process (SP), then apply it to texture classification. Rotation-invariant texture feature vectors are extracted using auto-registration and circular regions of the magnitude spectra of the 2D fast Fourier transform (FFT). In the experiments, we employ a Bayesian classifier to classify a set of 15 distinct natural textures selected from the Brodatz album. The experimental results, based on different testing datasets for images with different orientations, show the effectiveness of the proposed classification scheme.

  10. Feature-Based Classification of Networks

    CERN Document Server

    Barnett, Ian; Kuijjer, Marieke L; Mucha, Peter J; Onnela, Jukka-Pekka

    2016-01-01

    Network representations of systems from various scientific and societal domains are neither completely random nor fully regular, but instead appear to contain recurring structural building blocks. These features tend to be shared by networks belonging to the same broad class, such as the class of social networks or the class of biological networks. At a finer scale of classification within each such class, networks describing more similar systems tend to have more similar features. This occurs presumably because networks representing similar purposes or constructions would be expected to be generated by a shared set of domain specific mechanisms, and it should therefore be possible to classify these networks into categories based on their features at various structural levels. Here we describe and demonstrate a new, hybrid approach that combines manual selection of features of potential interest with existing automated classification methods. In particular, selecting well-known and well-studied features that ...

  11. SQL based cardiovascular ultrasound image classification.

    Science.gov (United States)

    Nandagopalan, S; Suryanarayana, Adiga B; Sudarshan, T S B; Chandrashekar, Dhanalakshmi; Manjunath, C N

    2013-01-01

    This paper proposes a novel method to analyze and classify cardiovascular ultrasound echocardiographic images using a Naïve-Bayesian model via database OLAP-SQL. Efficient data mining algorithms based on a tightly-coupled model are used to extract features. Three algorithms are proposed for classification, namely the Naïve-Bayesian Classifier for Discrete variables (NBCD) with SQL, NBCD with OLAP-SQL, and the Naïve-Bayesian Classifier for Continuous variables (NBCC) using OLAP-SQL. The proposed model is trained with 207 patient images containing normal and abnormal categories. Of the three proposed algorithms, the highest classification accuracy, 96.59%, was achieved with NBCC, which is better than earlier methods.

  12. Econophysics of agent-based models

    CERN Document Server

    Aoyama, Hideaki; Chakrabarti, Bikas; Chakraborti, Anirban; Ghosh, Asim

    2014-01-01

    The primary goal of this book is to present the research findings and conclusions of physicists, economists, mathematicians and financial engineers working in the field of "Econophysics" who have undertaken agent-based modelling, comparison with empirical studies and related investigations. Most standard economic models assume the existence of the representative agent, who is “perfectly rational” and applies the utility maximization principle when taking action. One reason for this is the desire to keep models mathematically tractable: no tools are available to economists for solving non-linear models of heterogeneous adaptive agents without explicit optimization. In contrast, multi-agent models, which originated from statistical physics considerations, allow us to go beyond the prototype theories of traditional economics involving the representative agent. This book is based on the Econophys-Kolkata VII Workshop, at which many such modelling efforts were presented. In the book, leading researchers in the...

  13. Digital image-based classification of biodiesel.

    Science.gov (United States)

    Costa, Gean Bezerra; Fernandes, David Douglas Sousa; Almeida, Valber Elias; Araújo, Thomas Souto Policarpo; Melo, Jessica Priscila; Diniz, Paulo Henrique Gonçalves Dias; Véras, Germano

    2015-07-01

    This work proposes a simple, rapid, inexpensive, and non-destructive methodology based on digital images and pattern recognition techniques for classification of biodiesel according to oil type (cottonseed, sunflower, corn, or soybean). For this, color histograms in the RGB (extracted from digital images), HSI, and grayscale channels, and their combinations, were used as analytical information, which was then statistically evaluated using Soft Independent Modeling by Class Analogy (SIMCA), Partial Least Squares Discriminant Analysis (PLS-DA), and variable selection using the Successive Projections Algorithm associated with Linear Discriminant Analysis (SPA-LDA). Despite good performances by the SIMCA and PLS-DA classification models, SPA-LDA provided better results (up to 95% for all approaches) in terms of accuracy, sensitivity, and specificity for both the training and test sets. The variables selected by the Successive Projections Algorithm clearly contained the information necessary for biodiesel type classification. This is important since a product may exhibit different properties depending on the feedstock used, and such variations directly influence the quality and consequently the price. Moreover, intrinsic advantages such as quick analysis, requiring no reagents, and a noteworthy reduction of waste generation (avoiding chemical characterization) all contribute towards the primary objective of green chemistry.

  14. Diagnosing Learning Disabilities in a Special Education By an Intelligent Agent Based System

    Directory of Open Access Journals (Sweden)

    Khaled Nasser elSayed

    2013-04-01

    The presented paper provides an intelligent agent-based classification system for the diagnosis and evaluation of learning disabilities in special education students. It provides pedagogical-psychological profiles for those students and offers solution strategies with the best educational activities. It provides tools that allow class teachers to discuss the psychological functions and basic skills underlying learning, and then performs a psycho-pedagogical evaluation by combining a series of strategies in a semantic network knowledge base. The system's agent performs its classification of a student's disabilities based on the past experience it gained from the exemplars that were classified by an expert and acquired in its knowledge base.

  15. Agent-based modeling and network dynamics

    CERN Document Server

    Namatame, Akira

    2016-01-01

    The book integrates agent-based modeling and network science. It is divided into three parts, namely, foundations, primary dynamics on and of social networks, and applications. The book begins with the network origin of agent-based models, known as cellular automata, and introduces a number of classic models, such as Schelling's segregation model and Axelrod's spatial game. The essence of the foundations part is the network-based agent-based models, in which agents follow network-based decision rules. Under the influence of the substantial progress in network science in the late 1990s, these models have been extended from using lattices to using small-world networks, scale-free networks, etc. The book also shows that modern network science, mainly driven by game theorists and sociophysicists, has inspired agent-based social scientists to develop alternative formation algorithms, known as agent-based social networks. The book reviews a number of pioneering and representative models in this family. Upon the gi...

  16. Agent-oriented commonsense knowledge base

    Institute of Scientific and Technical Information of China (English)

    陆汝钤; 石纯一; 张松懋; 毛希平; 徐晋晖; 杨萍; 范路

    2000-01-01

    Common sense processing has been a key difficulty in the AI community. After analyzing various research methods on common sense, a large-scale agent-oriented commonsense knowledge base is described in this paper. We propose a new type of agent, the CBS agent; specify a common-sense-oriented semantic network description language, Csnet; augment Prolog for common sense; analyze the ontology structure; and give the execution mechanism of the knowledge base.

  17. Genome-based Taxonomic Classification of Bacteroidetes

    Directory of Open Access Journals (Sweden)

    Richard L. Hahnke

    2016-12-01

    Full Text Available The bacterial phylum Bacteroidetes, characterized by a distinct gliding motility, occurs in a broad variety of ecosystems, habitats, life styles and physiologies. Accordingly, taxonomic classification of the phylum, based on a limited number of features, proved difficult and controversial in the past, for example, when decisions were based on unresolved phylogenetic trees of the 16S rRNA gene sequence. Here we use a large collection of type-strain genomes from Bacteroidetes and closely related phyla for assessing their taxonomy based on the principles of phylogenetic classification and trees inferred from genome-scale data. No significant conflict between 16S rRNA gene and whole-genome phylogenetic analysis is found, whereas many but not all of the involved taxa are supported as monophyletic groups, particularly in the genome-scale trees. Phenotypic and phylogenomic features support the separation of Balneolaceae as new phylum Balneolaeota from Rhodothermaeota and of Saprospiraceae as new class Saprospiria from Chitinophagia. Epilithonimonas is nested within the older genus Chryseobacterium and without significant phenotypic differences; thus merging the two genera is proposed. Similarly, Vitellibacter is proposed to be included in Aequorivita. Flexibacter is confirmed as being heterogeneous and dissected, yielding six distinct genera. Hallella seregens is a later heterotypic synonym of Prevotella dentalis. Compared to values directly calculated from genome sequences, the G+C content mentioned in many species descriptions is too imprecise; moreover, corrected G+C content values have a significantly better fit to the phylogeny. Corresponding emendations of species descriptions are provided where necessary. Whereas most observed conflict with the current classification of Bacteroidetes is already visible in 16S rRNA gene trees, as expected whole-genome phylogenies are much better resolved.

  18. Genome-Based Taxonomic Classification of Bacteroidetes.

    Science.gov (United States)

    Hahnke, Richard L; Meier-Kolthoff, Jan P; García-López, Marina; Mukherjee, Supratim; Huntemann, Marcel; Ivanova, Natalia N; Woyke, Tanja; Kyrpides, Nikos C; Klenk, Hans-Peter; Göker, Markus

    2016-01-01

    The bacterial phylum Bacteroidetes, characterized by a distinct gliding motility, occurs in a broad variety of ecosystems, habitats, life styles, and physiologies. Accordingly, taxonomic classification of the phylum, based on a limited number of features, proved difficult and controversial in the past, for example, when decisions were based on unresolved phylogenetic trees of the 16S rRNA gene sequence. Here we use a large collection of type-strain genomes from Bacteroidetes and closely related phyla for assessing their taxonomy based on the principles of phylogenetic classification and trees inferred from genome-scale data. No significant conflict between 16S rRNA gene and whole-genome phylogenetic analysis is found, whereas many but not all of the involved taxa are supported as monophyletic groups, particularly in the genome-scale trees. Phenotypic and phylogenomic features support the separation of Balneolaceae as new phylum Balneolaeota from Rhodothermaeota and of Saprospiraceae as new class Saprospiria from Chitinophagia. Epilithonimonas is nested within the older genus Chryseobacterium and without significant phenotypic differences; thus merging the two genera is proposed. Similarly, Vitellibacter is proposed to be included in Aequorivita. Flexibacter is confirmed as being heterogeneous and dissected, yielding six distinct genera. Hallella seregens is a later heterotypic synonym of Prevotella dentalis. Compared to values directly calculated from genome sequences, the G+C content mentioned in many species descriptions is too imprecise; moreover, corrected G+C content values have a significantly better fit to the phylogeny. Corresponding emendations of species descriptions are provided where necessary. Whereas most observed conflict with the current classification of Bacteroidetes is already visible in 16S rRNA gene trees, as expected whole-genome phylogenies are much better resolved.

  19. Cirrhosis Classification Based on Texture Classification of Random Features

    Directory of Open Access Journals (Sweden)

    Hui Liu

    2014-01-01

    Full Text Available Accurate staging of hepatic cirrhosis is important in investigating the cause and slowing down the effects of cirrhosis. Computer-aided diagnosis (CAD) can provide doctors with an alternative second opinion and assist them in choosing a specific treatment based on an accurate cirrhosis stage. MRI has many advantages, including high resolution for soft tissue, no radiation, and multiparameter imaging modalities. Thus, in this paper, multisequence MRIs, including T1-weighted, T2-weighted, arterial, portal venous, and equilibrium phase, are applied. However, CAD does not yet meet the clinical needs of cirrhosis, and few researchers are concerned with it at present. Cirrhosis is characterized by the presence of widespread fibrosis and regenerative nodules in the liver, leading to different texture patterns at different stages, so extracting texture features is the primary task. Compared with typical gray-level co-occurrence matrix (GLCM) features, texture classification from random features provides an effective alternative; we adopt it and propose CCTCRF for triple classification (normal, early stage, and middle-to-advanced stage). CCTCRF needs no strong assumptions beyond the sparse character of the image, captures sufficient texture information, follows a concise and effective process, and makes case decisions with high accuracy. Experimental results illustrate its satisfying performance, and it is also compared with a typical neural network using GLCM features.
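
    The GLCM baseline that the authors compare against can be sketched in a few lines; graycomatrix/graycoprops from scikit-image are assumed available, and the random patch stands in for a liver ROI from one MRI sequence.

```python
# Sketch of GLCM texture features for one image patch (stand-in ROI).
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_features(patch):
    """patch: 2-D uint8 array; returns a small co-occurrence feature vector."""
    glcm = graycomatrix(patch, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    props = ["contrast", "homogeneity", "energy", "correlation"]
    return np.hstack([graycoprops(glcm, p).ravel() for p in props])

patch = (np.random.rand(64, 64) * 255).astype(np.uint8)
print(glcm_features(patch))  # 8 values: 4 properties x 2 angles
```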

  20. Cirrhosis classification based on texture classification of random features.

    Science.gov (United States)

    Liu, Hui; Shao, Ying; Guo, Dongmei; Zheng, Yuanjie; Zhao, Zuowei; Qiu, Tianshuang

    2014-01-01

    Accurate staging of hepatic cirrhosis is important in investigating the cause and slowing down the effects of cirrhosis. Computer-aided diagnosis (CAD) can provide doctors with an alternative second opinion and assist them in choosing a specific treatment based on an accurate cirrhosis stage. MRI has many advantages, including high resolution for soft tissue, no radiation, and multiparameter imaging modalities. Thus, in this paper, multisequence MRIs, including T1-weighted, T2-weighted, arterial, portal venous, and equilibrium phase, are applied. However, CAD does not yet meet the clinical needs of cirrhosis, and few researchers are concerned with it at present. Cirrhosis is characterized by the presence of widespread fibrosis and regenerative nodules in the liver, leading to different texture patterns at different stages, so extracting texture features is the primary task. Compared with typical gray-level co-occurrence matrix (GLCM) features, texture classification from random features provides an effective alternative; we adopt it and propose CCTCRF for triple classification (normal, early stage, and middle-to-advanced stage). CCTCRF needs no strong assumptions beyond the sparse character of the image, captures sufficient texture information, follows a concise and effective process, and makes case decisions with high accuracy. Experimental results illustrate its satisfying performance, and it is also compared with a typical neural network using GLCM features.

  1. "Chromosome": a knowledge-based system for the chromosome classification.

    Science.gov (United States)

    Ramstein, G; Bernadet, M

    1993-01-01

    Chromosome, a knowledge-based analysis system, has been designed for the classification of human chromosomes. Its aim is to perform an optimal classification by driving a toolbox containing image processing, pattern recognition, and classification procedures. This paper presents the general architecture of Chromosome, based on a multi-agent system generator. The image processing toolbox is described, from metaphase enhancement to fine classification. Emphasis is then put on the knowledge base intended for chromosome recognition. The global classification process is also presented, showing how Chromosome proceeds to classify a given chromosome. Finally, we discuss further extensions of the system for karyotype building.

  2. Spatial interactions in agent-based modeling

    CERN Document Server

    Ausloos, Marcel; Merlone, Ugo

    2014-01-01

    Agent Based Modeling (ABM) has become a widespread approach to modeling complex interactions. In this chapter, after briefly summarizing some features of ABM, the different approaches to modeling spatial interactions are discussed. It is stressed that agents can interact either indirectly, through a shared environment, and/or directly with each other. In such an approach, higher-order variables such as commodity prices, population dynamics, or even institutions are not exogenously specified but instead are seen as the results of interactions. The chapter highlights that understanding the patterns emerging from such spatial interaction between agents is as key a problem as describing them through analytical or simulation means. The chapter reviews different approaches for modeling agents' behavior, taking into account either explicit spatial (lattice-based) structures or networks. Some emphasis is placed on recent ABM as applied to the description of the dynamics of the geographical distribution o...

  3. Fuzzy Rule Base System for Software Classification

    Directory of Open Access Journals (Sweden)

    Adnan Shaout

    2013-07-01

    Full Text Available Given the central role that software development plays in the delivery and application of information technology, managers have been focusing on process improvement in the software development area. This improvement has increased the demand for software measures, or metrics, to manage the process. These metrics provide a quantitative basis for the development and validation of models during the software development process. In this paper a fuzzy rule-based system is developed to classify Java applications using object-oriented (OO) metrics. The system contains the following features: an automated method to extract the OO metrics from the source code; a default/base set of rules that can be easily configured via an XML file, so companies, developers, team leaders, etc., can modify the set of rules according to their needs; a framework so that new metrics, fuzzy sets, and fuzzy rules can be added or removed depending on the needs of the end user; general classification of the software application and fine-grained classification of the Java classes based on OO metrics; and two interfaces to the system: a GUI and a command line.

  4. Agent-based simulation of animal behaviour

    OpenAIRE

    Jonker, C.M.; Treur, J.

    1998-01-01

    In this paper it is shown how animal behaviour can be simulated in an agent-based manner. Different models are shown for different types of behaviour, varying from purely reactive behaviour to pro-active, social and adaptive behaviour. The compositional development method for multi-agent systems DESIRE and its software environment supports the conceptual and detailed design, and execution of these models. Experiments reported in the literature on animal behaviour have been simulated for a num...

  5. Malware Classification based on Call Graph Clustering

    CERN Document Server

    Kinable, Joris

    2010-01-01

    Each day, anti-virus companies receive tens of thousands of samples of potentially harmful executables. Many of the malicious samples are variations of previously encountered malware, created by their authors to evade pattern-based detection. Dealing with these large amounts of data requires robust, automatic detection approaches. This paper studies malware classification based on call graph clustering. By representing malware samples as call graphs, it is possible to abstract certain variations away and enable the detection of structural similarities between samples. The ability to cluster similar samples together will make more generic detection techniques possible, targeting the commonalities of the samples within a cluster. To compare call graphs mutually, we compute pairwise graph similarity scores via graph matchings which approximately minimize the graph edit distance. Next, to facilitate the discovery of similar malware samples, we employ several clustering algorithms, including k-medoids and DB...
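
    The overall pipeline can be sketched as follows: compute pairwise graph edit distances, then cluster on the resulting distance matrix. Toy graphs stand in for real call graphs, hierarchical clustering replaces the paper's k-medoids/DBSCAN choices for brevity, and exact edit distance is only tractable for very small graphs, so production systems use approximations.

```python
# Sketch: pairwise graph edit distance + clustering on the distance matrix.
import networkx as nx
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import squareform

# Toy stand-ins for call graphs extracted from executables.
graphs = [nx.path_graph(4), nx.path_graph(5), nx.star_graph(4), nx.star_graph(5)]

n = len(graphs)
D = np.zeros((n, n))
for i in range(n):
    for j in range(i + 1, n):
        D[i, j] = D[j, i] = nx.graph_edit_distance(graphs[i], graphs[j])

Z = linkage(squareform(D), method="average")   # agglomerative clustering
print(fcluster(Z, t=2, criterion="maxclust"))  # e.g. paths vs. stars
```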

  6. Agent Based Individual Traffic guidance

    DEFF Research Database (Denmark)

    Wanscher, Jørgen Bundgaard

    2004-01-01

    When working with traffic planning or guidance it is common practice to view the vehicles as a combined mass. From this, models are employed to specify the vehicle supply and demand for each region. As the models are complex and the calculations are equally demanding, the regions and the detail...... can be obtained through cellular phone tracking or GPS systems. This information can then be used to provide individual traffic guidance, as opposed to the mass information systems of today: dynamic road signs and traffic radio. The goal is to achieve better usage of road and time. The main topic...... of the paper is the possibilities of using ABIT when disruptions occur (accidents, congestion, and roadwork). The discussion will be based on realistic case studies.

  7. Age Classification Based On Integrated Approach

    Directory of Open Access Journals (Sweden)

    Pullela. SVVSR Kumar

    2014-05-01

    Full Text Available The present paper presents a new age classification method by integrating the features derived from the Grey Level Co-occurrence Matrix (GLCM) with a new structural approach derived from four distinct LBPs (4-DLBP) on a 3 x 3 image. The present paper derives four distinct patterns, called Left Diagonal (LD), Right Diagonal (RD), Vertical Centre (VC), and Horizontal Centre (HC) LBPs. For all the LBPs the central pixel value of the 3 x 3 neighbourhood is significant. For this reason, in the present research LBP values are evaluated by comparing all 9 pixels of the 3 x 3 neighbourhood with the average value of the neighbourhood. The four distinct LBPs are grouped into two distinct LBPs. Based on these two distinct LBPs the GLCM is computed and features are evaluated to classify the human age into four age groups, i.e., child (0-15), young adult (16-30), middle-aged adult (31-50), and senior adult (>50). The co-occurrence features extracted from the 4-DLBP provide complete texture information about an image, which is useful for classification. The proposed 4-DLBP reduces the size of the LBP from 6561 to 79 in the case of the original texture spectrum, and from 2020 to 79 in the case of the fuzzy texture approach.
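
    The average-based thresholding idea is simple to sketch: binarize the 3 x 3 neighbourhood against its own mean, then read off one directional pattern as a short binary code. The vertical-centre pattern is shown below; the helper name is illustrative, not from the paper.

```python
# Sketch: vertical-centre (VC) pattern of an average-thresholded 3x3 block.
import numpy as np

def vc_pattern(nbhd):
    """nbhd: 3x3 array; returns a 3-bit code for the vertical-centre column."""
    bits = (nbhd >= nbhd.mean()).astype(int)  # compare all 9 pixels to the mean
    vc = bits[:, 1]                           # top, centre, bottom of column 1
    return int("".join(map(str, vc)), 2)      # value in [0, 7]

nbhd = np.array([[12, 200, 13],
                 [11, 180, 14],
                 [10, 190, 15]])
print(vc_pattern(nbhd))  # -> 7: the whole centre column lies above the mean
```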

  8. Behavior-based dual dynamic agent architecture

    Institute of Scientific and Technical Information of China (English)

    仵博; 吴敏; 曹卫华

    2003-01-01

    The objective of the architecture is to make the agent promptly and adaptively accomplish tasks in a real-time, dynamic environment. The architecture is composed of an elementary-level behavior layer and a high-level behavior layer. In the elementary-level behavior layer, a reactive architecture is introduced to make the agent react promptly to events; in the high-level behavior layer, a deliberative architecture is used to enhance the intelligence of the agent. A confidence degree concept is proposed to combine the two layers of the architecture. An agent decision-making process based on the architecture is also presented. The results of experiments on a RoboSoccer simulation team show that the proposed architecture and decision process are successful.

  9. Automatic web services classification based on rough set theory

    Institute of Scientific and Technical Information of China (English)

    陈立; 张英; 宋自林; 苗壮

    2013-01-01

    With the development of web services technology, the number of services on the internet is growing day by day. In order to achieve automatic and accurate services classification, which can be beneficial for service-related tasks, a rough set theory based method for services classification is proposed. First, the service descriptions are preprocessed and represented as vectors. Inspired by discernibility-matrix-based attribute reduction in rough set theory, and taking into account the characteristics of the decision table for services classification, a method based on continuous discernibility matrices is proposed for dimensionality reduction. Finally, services classification is performed automatically. In the experiment, the proposed method achieves satisfactory classification results in all five test categories. The experimental results show that the proposed method is accurate and could be used in practical web services classification.

  10. Agent Based Modeling Applications for Geosciences

    Science.gov (United States)

    Stein, J. S.

    2004-12-01

    Agent-based modeling techniques have successfully been applied to systems in which complex behaviors or outcomes arise from varied interactions between individuals in the system. Each individual interacts with its environment, as well as with other individuals, by following a set of relatively simple rules. Traditionally this "bottom-up" modeling approach has been applied to problems in the fields of economics and sociology, but more recently it has been introduced to various disciplines in the geosciences. This technique can help explain the origin of complex processes from a relatively simple set of rules, incorporate large and detailed datasets when they exist, and simulate the effects of extreme events on system-wide behavior. Some of the challenges associated with this modeling method include: significant computational requirements in order to keep track of thousands to millions of agents; a lack of methods and strategies for model validation; and the absence of a formal methodology for evaluating model uncertainty. Challenges specific to the geosciences include how to define agents that control water, contaminant fluxes, climate forcing, and other physical processes, and how to link these "geo-agents" into larger agent-based simulations that include social systems such as demographics, economics, and regulations. Effective management of limited natural resources (such as water, hydrocarbons, or land) requires an understanding of what factors influence the demand for these resources on a regional and temporal scale. Agent-based models can be used to simulate this demand across a variety of sectors under a range of conditions and to determine effective and robust management policies and monitoring strategies. The recent focus on the role of biological processes in the geosciences is another example of an area that could benefit from agent-based applications. A typical approach to modeling the effect of biological processes in geologic media has been to represent these processes in

  11. Graph-based Methods for Orbit Classification

    Energy Technology Data Exchange (ETDEWEB)

    Bagherjeiran, A; Kamath, C

    2005-09-29

    An important step in the quest for low-cost fusion power is the ability to perform and analyze experiments in prototype fusion reactors. One of the tasks in the analysis of experimental data is the classification of orbits in Poincare plots. These plots are generated by the particles in a fusion reactor as they move within the toroidal device. In this paper, we describe the use of graph-based methods to extract features from orbits. These features are then used to classify the orbits into several categories. Our results show that existing machine learning algorithms are successful in classifying orbits with few points, a situation which can arise in data from experiments.

  12. Sentiment classification technology based on Markov logic networks

    Science.gov (United States)

    He, Hui; Li, Zhigang; Yao, Chongchong; Zhang, Weizhe

    2016-07-01

    With diverse online media emerging, there is growing concern with the sentiment classification problem. At present, text sentiment classification mainly utilizes supervised machine learning methods, which exhibit a degree of domain dependency. On the basis of Markov logic networks (MLNs), this study proposes a cross-domain multi-task text sentiment classification method rooted in transfer learning. Through many-to-one knowledge transfer, labeled text sentiment classification knowledge was successfully transferred into other domains, and the precision of sentiment classification analysis in the target domain was improved. The experimental results revealed the following: (1) the model based on an MLN demonstrated higher precision than the single individual learning plan model; (2) multi-task transfer learning based on Markov logic networks could acquire more knowledge than self-domain learning. The cross-domain text sentiment classification model could significantly improve the precision and efficiency of text sentiment classification.

  13. Complexing agents (podands, coronands, and cryptands): classification and nomenclature

    Directory of Open Access Journals (Sweden)

    Oh Lin Whei

    1998-10-01

    Full Text Available The scientific and practical interest in crown ethers as complexing agents for cations, as well as for anions and neutral low-molecular-weight species, is undeniable. New molecules with crown ether properties are constantly being synthesized and new applications discovered. This paper presents the classification and nomenclature of the classical oligoethers (crown ethers): monocyclic coronands, oligocyclic spherical cryptands, and acyclic podands.

  14. Agent-based modelling of cholera diffusion

    NARCIS (Netherlands)

    Augustijn, Ellen-Wien; Doldersum, Tom; Useya, Juliana; Augustijn, Denie

    2016-01-01

    This paper introduces a spatially explicit agent-based simulation model for micro-scale cholera diffusion. The model simulates both an environmental reservoir of naturally occurring V. cholerae bacteria and hyperinfectious V. cholerae. Objective of the research is to test if runoff from open refuse

  15. Agent-based modelling of cholera diffusion

    NARCIS (Netherlands)

    Augustijn, Ellen-Wien; Doldersum, Tom; Useya, Juliana; Augustijn, Denie

    2016-01-01

    This paper introduces a spatially explicit agent-based simulation model for micro-scale cholera diffusion. The model simulates both an environmental reservoir of naturally occurring V. cholerae bacteria and hyperinfectious V. cholerae. Objective of the research is to test if runoff from open refuse d

  16. FIPA agent based network distributed control system

    Energy Technology Data Exchange (ETDEWEB)

    D. Abbott; V. Gyurjyan; G. Heyes; E. Jastrzembski; C. Timmer; E. Wolin

    2003-03-01

    A control system with the capability to combine heterogeneous control systems or processes into a uniform homogeneous environment is discussed. This dynamically extensible system is an example of a software system at the agent level of abstraction. This level of abstraction treats agents as atomic entities that communicate to implement the functionality of the control system. Agents' engineering aspects are addressed by adopting the domain-independent software standard formulated by FIPA. The JADE core Java classes are used as an implementation of the FIPA specification. A special lightweight, XML/RDFS-based, control-oriented ontology markup language has been developed to standardize the description of an arbitrary control system data processor. Control processes described in this language are integrated into the global system at runtime, without actual programming. Fault tolerance and recovery issues are also addressed.

  17. Structure-Based Algorithms for Microvessel Classification

    KAUST Repository

    Smith, Amy F.

    2015-02-01

    Objective: Recent developments in high-resolution imaging techniques have enabled digital reconstruction of three-dimensional sections of microvascular networks down to the capillary scale. To better interpret these large data sets, our goal is to distinguish branching trees of arterioles and venules from capillaries. Methods: Two novel algorithms are presented for classifying vessels in microvascular anatomical data sets without requiring flow information. The algorithms are compared with a classification based on observed flow directions (considered the gold standard), and with an existing resistance-based method that relies only on structural data. Results: The first algorithm, developed for networks with one arteriolar and one venular tree, performs well in identifying arterioles and venules and is robust to parameter changes, but incorrectly labels a significant number of capillaries as arterioles or venules. The second algorithm, developed for networks with multiple inlets and outlets, correctly identifies more arterioles and venules, but is more sensitive to parameter changes. Conclusions: The algorithms presented here can be used to classify microvessels in large microvascular data sets lacking flow information. This provides a basis for analyzing the distinct geometrical properties and modelling the functional behavior of arterioles, capillaries, and venules.

  18. RECURSIVE CLASSIFICATION OF MQAM SIGNALS BASED ON HIGHER ORDER CUMULANTS

    Institute of Scientific and Technical Information of China (English)

    Chen Weidong; Yang Shaoquan

    2002-01-01

    A new feature based on higher order cumulants is proposed for the classification of MQAM signals. Theoretical analysis justifies that the new feature is invariant with respect to translation (shift), scale, and rotation transforms of signal constellations, and can suppress colored or white additive Gaussian noise. Computer simulation shows that the proposed recursive order-reduction based classification algorithm can classify MQAM signals of any order.
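
    The underlying cumulant features are standard and easy to estimate from received symbols; the sketch below computes sample C40 and C42 for zero-mean complex baseband data (the paper's specific recursive order-reduction feature is not reproduced).

```python
# Sample fourth-order cumulants commonly used for MQAM classification.
import numpy as np

def qam_cumulants(s):
    """s: 1-D complex array of zero-mean received symbols."""
    m20 = np.mean(s ** 2)
    m21 = np.mean(np.abs(s) ** 2)
    m40 = np.mean(s ** 4)
    m42 = np.mean(np.abs(s) ** 4)
    c40 = m40 - 3 * m20 ** 2
    c42 = m42 - np.abs(m20) ** 2 - 2 * m21 ** 2
    return c40, c42

# Unit-power QPSK gives C40 = C42 = -1 in theory (16-QAM: C42 ~ -0.68).
sym = (np.random.choice([-1, 1], 10000)
       + 1j * np.random.choice([-1, 1], 10000)) / np.sqrt(2)
print(qam_cumulants(sym))
```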

  19. Fuzzy Constraint-Based Agent Negotiation

    Institute of Scientific and Technical Information of China (English)

    Menq-Wen Lin; K. Robert Lai; Ting-Jung Yu

    2005-01-01

    Conflicts between two or more parties arise for various reasons and from various perspectives. Thus, the resolution of conflicts frequently relies on some form of negotiation. This paper presents a general problem-solving framework for modeling multi-issue multilateral negotiation using fuzzy constraints. Agent negotiation is formulated as a distributed fuzzy constraint satisfaction problem (DFCSP). Fuzzy constraints are thus used to naturally represent each agent's desires involving imprecision and human conceptualization, particularly when lexical imprecision and subjective matters are concerned. On the other hand, based on fuzzy constraint-based problem-solving, our approach enables an agent not only to systematically relax fuzzy constraints to generate a proposal, but also to employ fuzzy similarity to select the alternative that is subject to its acceptability by the opponents. The task of this problem-solving is to reach an agreement that benefits all agents with a high satisfaction degree of fuzzy constraints, and to move towards the deal more quickly, since the search focuses only on the feasible solution space. An application to the multilateral negotiation of travel planning is provided to demonstrate the usefulness and effectiveness of the framework.

  20. Spectral-Spatial Hyperspectral Image Classification Based on KNN

    Science.gov (United States)

    Huang, Kunshan; Li, Shutao; Kang, Xudong; Fang, Leyuan

    2016-12-01

    Fusion of spectral and spatial information is an effective way in improving the accuracy of hyperspectral image classification. In this paper, a novel spectral-spatial hyperspectral image classification method based on K nearest neighbor (KNN) is proposed, which consists of the following steps. First, the support vector machine is adopted to obtain the initial classification probability maps which reflect the probability that each hyperspectral pixel belongs to different classes. Then, the obtained pixel-wise probability maps are refined with the proposed KNN filtering algorithm that is based on matching and averaging nonlocal neighborhoods. The proposed method does not need sophisticated segmentation and optimization strategies while still being able to make full use of the nonlocal principle of real images by using KNN, and thus, providing competitive classification with fast computation. Experiments performed on two real hyperspectral data sets show that the classification results obtained by the proposed method are comparable to several recently proposed hyperspectral image classification methods.
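
    A much-simplified sketch of the two-stage idea follows: per-pixel SVM probability maps, then spatial refinement of each class map before the argmax. A plain box filter stands in for the paper's nonlocal KNN filtering, and the data cube and labels are random stand-ins.

```python
# Sketch: pixel-wise SVM probabilities refined by spatial smoothing.
import numpy as np
from scipy.ndimage import uniform_filter
from sklearn.svm import SVC

H, W, B, C = 20, 20, 10, 3                    # height, width, bands, classes
cube = np.random.rand(H, W, B)                # stand-in hyperspectral cube
train_idx = np.random.choice(H * W, 60, replace=False)
train_y = np.random.randint(0, C, 60)         # stand-in training labels

X = cube.reshape(-1, B)
svm = SVC(probability=True).fit(X[train_idx], train_y)
prob = svm.predict_proba(X).reshape(H, W, C)  # initial probability maps

refined = np.stack([uniform_filter(prob[..., c], size=5) for c in range(C)],
                   axis=-1)
labels = refined.argmax(axis=-1)              # final classification map
print(labels.shape)                           # (20, 20)
```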

  1. Integrating Globality and Locality for Robust Representation Based Classification

    Directory of Open Access Journals (Sweden)

    Zheng Zhang

    2014-01-01

    Full Text Available The representation based classification method (RBCM) has shown huge potential for face recognition since it first emerged. The linear regression classification (LRC) method and the collaborative representation classification (CRC) method are two well-known RBCMs. LRC and CRC exploit the training samples of each class and all the training samples, respectively, to represent the testing sample, and subsequently conduct classification on the basis of the representation residual. The LRC method can be viewed as a “locality representation” method because it uses only the training samples of each class to represent the testing sample, and it cannot embody the effectiveness of “globality representation.” Conversely, the CRC method cannot enjoy the locality benefit of the general RBCM. Thus we propose to integrate CRC and LRC to perform more robust representation based classification. The experimental results on benchmark face databases substantially demonstrate that the proposed method achieves high classification accuracy.
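
    The "locality" half (LRC) reduces to a least-squares representation per class followed by a minimum-residual decision, as in this sketch on synthetic data:

```python
# Sketch of linear regression classification (LRC): represent the test
# sample with each class's training samples and pick the smallest residual.
import numpy as np

def lrc_predict(x, class_samples):
    """x: (d,) test vector; class_samples: dict class -> (d, n_c) matrix."""
    residuals = {}
    for c, Xc in class_samples.items():
        beta, *_ = np.linalg.lstsq(Xc, x, rcond=None)  # x ~= Xc @ beta
        residuals[c] = np.linalg.norm(x - Xc @ beta)
    return min(residuals, key=residuals.get)

rng = np.random.default_rng(0)
classes = {0: rng.normal(0, 1, (50, 8)),   # 8 training samples per class
           1: rng.normal(3, 1, (50, 8))}
x = rng.normal(3, 1, 50)                   # drawn near class 1
print(lrc_predict(x, classes))             # -> 1
```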

  2. A new classification algorithm based on RGH-tree search

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    In this paper, we put forward a new classification algorithm based on RGH-tree search and perform a classification analysis and comparison study. This algorithm can save computing resources and increase classification efficiency. The experiments show that this algorithm achieves better results when dealing with three-dimensional, multi-class data, and that it has better generalization ability for small training sets and large test sets.

  3. Intelligent Agent-Based System for Digital Library Information Retrieval

    Institute of Scientific and Technical Information of China (English)

    师雪霖; 牛振东; 宋瀚涛; 宋丽哲

    2003-01-01

    A new information search model is reported, and the design and implementation of a system based on intelligent agents is presented. The system is an assistant information retrieval system that helps users search for what they need. The system consists of four main components: an interface agent, an information retrieval agent, a broker agent, and a learning agent. They collaborate to implement the system's functions. The agents apply learning mechanisms based on an improved ID3 algorithm.

  4. CATS-based Air Traffic Controller Agents

    Science.gov (United States)

    Callantine, Todd J.

    2002-01-01

    This report describes intelligent agents that function as air traffic controllers. Each agent controls traffic in a single sector in real time; agents controlling traffic in adjoining sectors can coordinate to manage an arrival flow across a given meter fix. The purpose of this research is threefold. First, it seeks to study the design of agents for controlling complex systems. In particular, it investigates agent planning and reactive control functionality in a dynamic environment in which a variety of perceptual and decision-making skills play a central role. It examines how heuristic rules can be applied to model planning and decision-making skills, rather than attempting to apply optimization methods. Thus, the research attempts to develop intelligent agents that provide an approximation of human air traffic controller behavior that, while not based on an explicit cognitive model, does produce task performance consistent with the way human air traffic controllers operate. Second, this research sought to extend previous research on using the Crew Activity Tracking System (CATS) as the basis for intelligent agents. The agents use a high-level model of air traffic controller activities to structure the control task. To execute an activity in the CATS model, according to the current task context, the agents reference a 'skill library' and 'control rules' that in turn execute the pattern recognition, planning, and decision-making required to perform the activity. Applying the skills enables the agents to modify their representation of the current control situation (i.e., the 'flick' or 'picture'). The updated representation supports the next activity in a cycle of action that, taken as a whole, simulates air traffic controller behavior. A third, practical motivation for this research is to use intelligent agents to support evaluation of new air traffic control (ATC) methods to support new Air Traffic Management (ATM) concepts. Current approaches that use large, human

  5. AN OBJECT-BASED METHOD FOR CHINESE LANDFORM TYPES CLASSIFICATION

    Directory of Open Access Journals (Sweden)

    H. Ding

    2016-06-01

    Full Text Available Landform classification is a necessary task for various fields of landscape and regional planning, for example landscape evaluation, erosion studies, and hazard prediction. This study proposes an improved object-based classification for Chinese landform types using the factor importance analysis of random forest and the gray-level co-occurrence matrix (GLCM). In this research, based on a 1 km DEM of China, the combination of terrain factors extracted from the DEM is selected by correlation analysis and Sheffield's entropy method. A random forest classification tree is applied to evaluate the importance of the terrain factors, which are used as multi-scale segmentation thresholds. Then the GLCM is computed for the knowledge base of the classification. The classification result was checked against the 1:4,000,000 Chinese Geomorphological Map as reference. The overall classification accuracy of the proposed method is 5.7% higher than that of ISODATA unsupervised classification, and 15.7% higher than that of the traditional object-based classification method.
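
    The factor-importance step is easy to sketch with scikit-learn: fit a random forest on candidate terrain factors and rank them by impurity importance. The factor names and data below are illustrative, not the study's 1 km DEM derivatives.

```python
# Sketch: rank candidate terrain factors by random forest importance.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
factors = ["elevation", "slope", "relief", "roughness", "curvature"]
X = rng.random((500, len(factors)))
y = (X[:, 1] + 0.5 * X[:, 2] > 0.8).astype(int)  # label driven by slope/relief

rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
for name, imp in sorted(zip(factors, rf.feature_importances_),
                        key=lambda t: -t[1]):
    print(f"{name:10s} {imp:.3f}")  # slope and relief should rank highest
```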

  6. An Object-Based Method for Chinese Landform Types Classification

    Science.gov (United States)

    Ding, Hu; Tao, Fei; Zhao, Wufan; Na, Jiaming; Tang, Guo'an

    2016-06-01

    Landform classification is a necessary task for various fields of landscape and regional planning, for example landscape evaluation, erosion studies, and hazard prediction. This study proposes an improved object-based classification for Chinese landform types using the factor importance analysis of random forest and the gray-level co-occurrence matrix (GLCM). In this research, based on a 1 km DEM of China, the combination of terrain factors extracted from the DEM is selected by correlation analysis and Sheffield's entropy method. A random forest classification tree is applied to evaluate the importance of the terrain factors, which are used as multi-scale segmentation thresholds. Then the GLCM is computed for the knowledge base of the classification. The classification result was checked against the 1:4,000,000 Chinese Geomorphological Map as reference. The overall classification accuracy of the proposed method is 5.7% higher than that of ISODATA unsupervised classification, and 15.7% higher than that of the traditional object-based classification method.

  7. Fast Wavelet-Based Visual Classification

    CERN Document Server

    Yu, Guoshen

    2008-01-01

    We investigate a biologically motivated approach to fast visual classification, directly inspired by the recent work of Serre et al. Specifically, trading off biological accuracy for computational efficiency, we explore using wavelet and grouplet-like transforms to parallel the tuning of visual cortex V1 and V2 cells, alternated with max operations to achieve scale and translation invariance. A feature selection procedure is applied during learning to accelerate recognition. We introduce a simple attention-like feedback mechanism, significantly improving recognition and robustness in multiple-object scenes. In experiments, the proposed algorithm achieves or exceeds state-of-the-art success rates on object recognition, texture and satellite image classification, language identification, and sound classification.

  8. Knowledge-Based Classification in Automated Soil Mapping

    Institute of Scientific and Technical Information of China (English)

    ZHOU BIN; WANG RENCHAO

    2003-01-01

    A machine-learning approach was developed for automated building of knowledge bases for soil resources mapping, using a classification tree to generate knowledge from training data. With this method, building a knowledge base for automated soil mapping was easier than using the conventional knowledge acquisition approach. The knowledge base built by the classification tree was used by the knowledge classifier to perform the soil type classification of Longyou County, Zhejiang Province, China using Landsat TM bi-temporal images and GIS data. To evaluate the performance of the resultant knowledge bases, the classification results were compared to an existing soil map based on a field survey. The accuracy assessment and analysis of the resultant soil maps suggested that the knowledge bases built by the machine-learning method were of good quality for mapping the distribution of soil classes over the study area.

  9. Shape classification based on singular value decomposition transform

    Institute of Scientific and Technical Information of China (English)

    SHAABAN Zyad; ARIF Thawar; BABA Sami; KREKOR Lala

    2009-01-01

    In this paper, a new shape classification system based on the singular value decomposition (SVD) transform and a nearest neighbour classifier is proposed. The gray-scale image of the shape object is converted into a black and white image. The squared Euclidean distance transform is applied to the binary image to extract the boundary image of the shape. SVD transform features are extracted from the boundary of the object shapes. The proposed classification system based on SVD transform feature extraction is compared with a classifier based on moment invariants, using the same nearest neighbour classifier. The experimental results show the advantage of our proposed classification system.
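
    One way to realize this scheme is to use the leading singular values of the boundary image as a compact shape descriptor and classify with a nearest neighbour rule, as in the sketch below (boundary extraction is assumed done upstream; the data are random stand-ins).

```python
# Sketch: singular values of a boundary image as shape features + 1-NN.
import numpy as np

def svd_features(boundary_img, k=10):
    """boundary_img: 2-D binary array; returns its k largest singular values."""
    s = np.linalg.svd(boundary_img.astype(float), compute_uv=False)
    return s[:k]

def nn_classify(x, train_feats, train_labels):
    d = [np.linalg.norm(x - f) for f in train_feats]
    return train_labels[int(np.argmin(d))]

rng = np.random.default_rng(2)
train = [(rng.random((32, 32)) > 0.8).astype(int) for _ in range(4)]
feats = [svd_features(b) for b in train]
labels = [0, 0, 1, 1]
print(nn_classify(svd_features(train[0]), feats, labels))  # -> 0
```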

  10. Multiclass Classification Based on the Analytical Center of Version Space

    Institute of Scientific and Technical Information of China (English)

    ZENGFanzi; QIUZhengding; YUEJianhai; LIXiangqian

    2005-01-01

    The analytical center machine, based on the analytical center of version space, outperforms the support vector machine, especially when the version space is elongated or asymmetric. While the analytical center machine for binary classification is well understood, little is known about the corresponding multiclass classification. Moreover, the current multiclass classification method, "one versus all", needs to repeatedly construct classifiers to separate a single class from all the others, which leads to daunting computation and low classification efficiency; and though the multiclass support vector machine corresponds to a simple quadratic optimization, it is not very effective when the version space is asymmetric or elongated. Thus, a multiclass classification approach based on the analytical center of version space is proposed to address the above problems. Experiments on the wine recognition and glass identification datasets demonstrate the validity of the proposed approach.

  11. Parallel Implementation of Classification Algorithms Based on Cloud Computing Environment

    Directory of Open Access Journals (Sweden)

    Wenbo Wang

    2012-09-01

    Full Text Available As an important task of data mining, classification has received considerable attention in many applications, such as information retrieval, web searching, etc. The growing volume of information produced by technological progress and the increasing individual needs of data mining make classifying very large-scale data a challenging task. In order to deal with this problem, many researchers have tried to design efficient parallel classification algorithms. This paper briefly introduces classification algorithms and cloud computing, analyses the shortcomings of present parallel classification algorithms on that basis, and then proposes a new model of parallel classification algorithms. It mainly introduces a parallel Naïve Bayes classification algorithm based on MapReduce, which is a simple yet powerful parallel programming technique. The experimental results demonstrate that the proposed algorithm improves on the original algorithm's performance and can process large datasets efficiently on commodity hardware.
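
    The MapReduce pattern behind such a parallel Naïve Bayes is essentially distributed counting: mappers emit ((class, feature), 1) pairs and reducers sum them into the model's count tables. The sketch simulates this in-process; a real deployment would run the same functions as Hadoop or Spark tasks.

```python
# In-process simulation of MapReduce-style Naive Bayes count aggregation.
from collections import defaultdict

docs = [("spam", ["win", "cash", "now"]),
        ("ham",  ["meeting", "now"]),
        ("spam", ["cash", "prize"])]

def mapper(label, words):
    for w in words:
        yield (label, w), 1
    yield (label, "__docs__"), 1          # per-class document count

def reducer(pairs):
    counts = defaultdict(int)
    for key, v in pairs:
        counts[key] += v
    return counts

counts = reducer(kv for label, words in docs for kv in mapper(label, words))
print(counts[("spam", "cash")], counts[("spam", "__docs__")])  # 2 2
# Class priors and word likelihoods follow directly from these counts.
```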

  12. An Efficient Audio Classification Approach Based on Support Vector Machines

    Directory of Open Access Journals (Sweden)

    Lhoucine Bahatti

    2016-05-01

    Full Text Available In order to achieve audio classification aimed at identifying the composer, the use of adequate and relevant features is important to improve performance, especially when the classification algorithm is based on support vector machines. As opposed to conventional approaches that often use timbral features based on a time-frequency representation of the musical signal with a constant window, this paper deals with a new audio classification method which improves feature extraction using the Constant Q Transform (CQT) approach and includes original audio features related to the musical context in which the notes appear. This work also proposes an optimal feature selection procedure which combines filter and wrapper strategies. Experimental results show the accuracy and efficiency of the adopted approach in binary classification as well as in multi-class classification.
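
    A minimal CQT feature extractor can be sketched with librosa (assumed available); the mean log-magnitude per CQT bin yields a fixed-length vector that could feed an SVM. The paper's context-dependent note features and filter/wrapper selection are not reproduced here.

```python
# Sketch: fixed-length CQT features from an audio signal (stand-in tone).
import numpy as np
import librosa

sr = 22050
y = np.sin(2 * np.pi * 440 * np.arange(sr) / sr)  # 1-second 440 Hz stand-in
C = np.abs(librosa.cqt(y, sr=sr))                 # (n_bins, n_frames)
feature = np.log1p(C).mean(axis=1)                # one value per CQT bin
print(feature.shape)  # e.g. (84,) with librosa's default 84 CQT bins
# Feature vectors like this could then be fed to sklearn.svm.SVC.
```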

  13. Behavior Based Social Dimensions Extraction for Multi-Label Classification.

    Science.gov (United States)

    Li, Le; Xu, Junyi; Xiao, Weidong; Ge, Bin

    2016-01-01

    Classification based on social dimensions is commonly used to handle the multi-label classification task in heterogeneous networks. However, traditional methods, which mostly rely on the community detection algorithms to extract the latent social dimensions, produce unsatisfactory performance when community detection algorithms fail. In this paper, we propose a novel behavior based social dimensions extraction method to improve the classification performance in multi-label heterogeneous networks. In our method, nodes' behavior features, instead of community memberships, are used to extract social dimensions. By introducing Latent Dirichlet Allocation (LDA) to model the network generation process, nodes' connection behaviors with different communities can be extracted accurately, which are applied as latent social dimensions for classification. Experiments on various public datasets reveal that the proposed method can obtain satisfactory classification results in comparison to other state-of-the-art methods on smaller social dimensions.

  14. Classification

    Science.gov (United States)

    Clary, Renee; Wandersee, James

    2013-01-01

    In this article, Renee Clary and James Wandersee describe the beginnings of "Classification," which lies at the very heart of science and depends upon pattern recognition. Clary and Wandersee approach patterns by first telling the story of the "Linnaean classification system," introduced by Carl Linnacus (1707-1778), who is…

  15. Agent-based Cloud service composition

    OpenAIRE

    Sim, Kwang Mong; Gutierrez-Garcia, J. Octavio

    2013-01-01

    Service composition in multi-Cloud environments must coordinate self-interested participants, automate service selection, (re)configure distributed services, and deal with incomplete information about Cloud providers and their services. This work proposes an agent-based approach to compose services in multi-Cloud environments for different types of Cloud services: one-time virtualized services, e.g., processing a rendering job, persistent virtualized services, e.g., in...

  16. Agent-based modeling and simulation

    CERN Document Server

    Taylor, Simon

    2014-01-01

    Operational Research (OR) deals with the use of advanced analytical methods to support better decision-making. It is multidisciplinary with strong links to management science, decision science, computer science and many application areas such as engineering, manufacturing, commerce and healthcare. In the study of emergent behaviour in complex adaptive systems, Agent-based Modelling & Simulation (ABMS) is being used in many different domains such as healthcare, energy, evacuation, commerce, manufacturing and defense. This collection of articles presents a convenient introduction to ABMS with pa

  17. Lipid-based antifungal agents: current status.

    Science.gov (United States)

    Arikan, S; Rex, J H

    2001-03-01

    Immunocompromised patients are well known to be predisposed to developing invasive fungal infections. These infections are usually difficult to diagnose and more importantly, the resulting mortality rate is high. The limited number of antifungal agents available and their high rate of toxicity are the major factors complicating the issue. However, the development of lipid-based formulations of existing antifungal agents has opened a new era in antifungal therapy. The best examples are the lipid-based amphotericin B preparations, amphotericin B lipid complex (ABLC; Abelcet), amphotericin B colloidal dispersion (ABCD; Amphotec or Amphocil), and liposomal amphotericin B (AmBisome). These formulations have shown that antifungal activity is maintained while toxicity is reduced. This progress is followed by the incorporation of nystatin into liposomes. Liposomal nystatin formulation is under development and studies of it have provided encouraging data. Finally, lipid-based formulations of hamycin, miconazole, and ketoconazole have been developed but remain experimental. Advances in technology of liposomes and other lipid formulations have provided promising new tools for management of fungal infections.

  18. TENSOR MODELING BASED FOR AIRBORNE LiDAR DATA CLASSIFICATION

    Directory of Open Access Journals (Sweden)

    N. Li

    2016-06-01

    Full Text Available Feature selection and description is a key factor in the classification of Earth observation data. In this paper a classification method based on tensor decomposition is proposed. First, multiple features are extracted from the raw LiDAR point cloud, and raster LiDAR images are derived by accumulating features or the "raw" data attributes. Then, the feature rasters of the LiDAR data are stored as a tensor, and tensor decomposition is used to select component features. This tensor representation preserves the initial spatial structure and ensures that the neighborhood is taken into account. Based on a small number of component features, a k-nearest-neighbor classification is applied.

  19. Tensor Modeling Based for Airborne LiDAR Data Classification

    Science.gov (United States)

    Li, N.; Liu, C.; Pfeifer, N.; Yin, J. F.; Liao, Z. Y.; Zhou, Y.

    2016-06-01

    Feature selection and description is a key factor in the classification of Earth observation data. In this paper a classification method based on tensor decomposition is proposed. First, multiple features are extracted from the raw LiDAR point cloud, and raster LiDAR images are derived by accumulating features or the "raw" data attributes. Then, the feature rasters of the LiDAR data are stored as a tensor, and tensor decomposition is used to select component features. This tensor representation preserves the initial spatial structure and ensures that the neighborhood is taken into account. Based on a small number of component features, a k-nearest-neighbor classification is applied.

  20. An Agent-Based Monetary Production Simulation Model

    DEFF Research Database (Denmark)

    Bruun, Charlotte

    2006-01-01

    An agent-based simulation model programmed in Objective Borland Pascal. The program and source code are downloadable.

  1. Speech Segregation based on Binary Classification

    Science.gov (United States)

    2016-07-15

    ... to the adoption of the ideal ratio mask (IRM). A subsequent listening evaluation shows increased intelligibility in noise for human listeners. Subject terms: binary classification, time-frequency masking, supervised speech segregation, speech intelligibility, room reverberation.

  2. Intelligent Hybrid Cluster Based Classification Algorithm for Social Network Analysis

    Directory of Open Access Journals (Sweden)

    S. Muthurajkumar

    2014-05-01

    Full Text Available In this paper, we propose a hybrid clustering-based classification algorithm, based on a mean approach, to effectively classify and mine the ordered sequences (paths) from weblog data in order to perform social network analysis. In the system proposed in this work for social pattern analysis, the sequences of human activities are typically analyzed via switching behaviors, which are likely to produce overlapping clusters. In this proposed system, a robust modified boosting algorithm is applied to hybrid clustering-based classification for clustering the data. This work is useful in providing a connection between the aggregated features from the network data and traditional indices used in social network analysis. Experimental results show that the proposed algorithm improves the decision results from data clustering when combined with the proposed classification algorithm, and it provides better classification accuracy when tested with a weblog dataset. In addition, the algorithm improves predictive performance, especially for multiclass datasets.

  3. Hybrid Support Vector Machines-Based Multi-fault Classification

    Institute of Scientific and Technical Information of China (English)

    GAO Guo-hua; ZHANG Yong-zhong; ZHU Yu; DUAN Guang-huang

    2007-01-01

    Support Vector Machines (SVM) is a new general machine-learning tool based on the structural risk minimization principle. This characteristic is very significant for fault diagnostics when the number of fault samples is limited. Considering that SVM theory was originally designed for two-class classification, a hybrid SVM scheme is proposed for multi-fault classification of rotating machinery in our paper. Two SVM strategies, 1-v-1 (one versus one) and 1-v-r (one versus rest), are adopted at different classification levels. At the parallel classification level, using the 1-v-1 strategy, the fault features extracted by various signal analysis methods are fed into multiple parallel SVMs and the local classification results are obtained. At the serial classification level, these local results are fused by one serial SVM based on the 1-v-r strategy. The hybrid SVM scheme introduced in our paper not only generalizes the performance of single binary SVMs but also improves the precision and reliability of the fault classification results. The actual test results show the suitability of this new method.
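
    The two strategies correspond directly to scikit-learn's multiclass wrappers around a binary SVM, as in this toy sketch (the paper's serial fusion of the two levels is not reproduced):

```python
# Sketch: 1-v-1 and 1-v-r SVM strategies on toy multi-fault data.
from sklearn.datasets import make_classification
from sklearn.multiclass import OneVsOneClassifier, OneVsRestClassifier
from sklearn.svm import SVC

X, y = make_classification(n_samples=300, n_classes=4, n_informative=6,
                           random_state=0)

ovo = OneVsOneClassifier(SVC()).fit(X, y)   # one SVM per pair of classes
ovr = OneVsRestClassifier(SVC()).fit(X, y)  # one SVM per class vs. the rest
print(ovo.score(X, y), ovr.score(X, y))
```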

  4. Agent Communication Channel Based on BACnet

    Institute of Scientific and Technical Information of China (English)

    Jiang Wen-bin; Zhou Man-li

    2004-01-01

    We analyze the common shortcoming in existing agent MTPs (message transport protocols). By employing the File object and the related AtomicWriteFile service of BACnet (a data communication protocol for building automation and control networks), a new method of agent message transport is proposed and implemented. Every agent platform (AP) has one specified File object, and agents in another AP can communicate with agents in that AP by using the AtomicWriteFile service. Agent messages can be in a variety of formats. In the implementation, BACnet/IP and Ethernet are applied as the BACnet data link layers respectively. The experimental results show that BACnet can provide full support for agent communication, like conventional protocols such as the hypertext transfer protocol (HTTP), remote method invocation (RMI), etc., and has broken through the restriction of TCP/IP. By this approach, agent technology is introduced into the building automation control network system.

  5. Key-phrase based classification of public health web pages.

    Science.gov (United States)

    Dolamic, Ljiljana; Boyer, Célia

    2013-01-01

    This paper describes and evaluates a public health web page classification model based on key phrase extraction and matching. Easily extensible both in terms of new classes and new languages, this method proves to be a good solution for text classification in the face of a total lack of training data. To evaluate the proposed solution we used a small collection of public health related web pages created by double-blind manual classification. Our experiments have shown that by choosing an adequate threshold value, the desired value for either precision or recall can be achieved.

  6. Support vector classification algorithm based on variable parameter linear programming

    Institute of Scientific and Technical Information of China (English)

    Xiao Jianhua; Lin Jian

    2007-01-01

    To solve the problems SVM faces in dealing with large sample sizes and asymmetrically distributed samples, a support vector classification algorithm based on variable parameter linear programming is proposed. In the proposed algorithm, linear programming is employed to solve the optimization problem of classification, decreasing the computation time and reducing complexity compared with the original model. The adjusted punishment parameter greatly reduces the classification error resulting from asymmetrically distributed samples, and the detailed procedure of the proposed algorithm is given. An experiment is conducted to verify whether the proposed algorithm is suitable for asymmetrically distributed samples.

  7. Words semantic orientation classification based on HowNet

    Institute of Scientific and Technical Information of China (English)

    LI Dun; MA Yong-tao; GUO Jian-li

    2009-01-01

    Based on the text orientation classification, a new measurement approach to semantic orientation of words was proposed. According to the integrated and detailed definition of words in HowNet, seed sets including the words with intense orientations were built up. The orientation similarity between the seed words and the given word was then calculated using the sentiment weight priority to recognize the semantic orientation of common words. Finally, the words' semantic orientation and the context were combined to recognize the given words' orientation. The experiments show that the measurement approach achieves better results for common words' orientation classification and contributes particularly to the text orientation classification of large granularities.

  8. Radar Target Classification using Recursive Knowledge-Based Methods

    DEFF Research Database (Denmark)

    Jochumsen, Lars Wurtz

    The topic of this thesis is target classification of radar tracks from a 2D mechanically scanning coastal surveillance radar. The measurements provided by the radar are position data, and therefore the classification is mainly based on kinematic data deduced from the position. The target...... been terminated. Therefore, an update of the classification results must be made for each measurement of the target. The data for this work were collected throughout the PhD project, both from radars and from other sensors such as GPS.

  9. Cancer classification based on gene expression using neural networks.

    Science.gov (United States)

    Hu, H P; Niu, Z J; Bai, Y P; Tan, X H

    2015-12-21

    Based on gene expression, we classified 53 colon cancer patients with UICC stage II into two groups: relapse and no relapse. Samples were taken from each patient, and gene information was extracted. Of the 53 samples examined, 500 genes were considered suitable through analyses by S-Kohonen, BP, and SVM neural networks. The classification accuracy obtained by the S-Kohonen neural network reached 91%, more accurate than classification by the BP and SVM neural networks. The results show that the S-Kohonen neural network is better suited for this classification task and demonstrates feasibility and validity compared with the BP and SVM neural networks.

  10. Comparison and analysis of biological agent category lists based on biosafety and biodefense.

    Directory of Open Access Journals (Sweden)

    Deqiao Tian

    Full Text Available Biological agents pose a serious threat to human health, economic development, social stability and even national security. The classification of biological agents is a basic requirement for both biosafety and biodefense. We compared and analyzed the Biological Agent Laboratory Biosafety Category lists and their defining criteria according to the World Health Organization (WHO), the National Institutes of Health (NIH), the European Union (EU) and China. We also compared and analyzed the Biological Agent Biodefense Category lists and their defining criteria according to the Centers for Disease Control and Prevention (CDC) of the United States, the EU and Russia. The results show some inconsistencies among and between the two types of category lists and criteria. We suggest that the classification of biological agents based on laboratory biosafety should reduce the number of inconsistencies and contradictions. Developing countries should also produce lists of biological agents to direct the development of their biodefense capabilities. Producing suitable biological agent lists will also require strengthened international collaboration and cooperation.

  11. Agent Based Model of Livestock Movements

    Science.gov (United States)

    Miron, D. J.; Emelyanova, I. V.; Donald, G. E.; Garner, G. M.

    The modelling of livestock movements within Australia is of national importance for the purposes of managing and controlling exotic disease spread, infrastructure development and the economic forecasting of livestock markets. In this paper an agent based model for the forecasting of livestock movements is presented. The model represents livestock movements from farm to farm through a saleyard. The decision of farmers to sell or buy cattle is often complex and involves many factors, such as the climate forecast, commodity prices, the type of farm enterprise, the number of animals available and associated off-shore effects. In this model the farm agent's intelligence is implemented using a fuzzy decision tree that utilises two of these factors: the livestock price fetched at the last sale and the number of stock on the farm. On each iteration of the model, farms choose either to buy, sell or abstain from the market, thus creating an artificial supply and demand. The buyers and sellers then congregate at the saleyard, where livestock are auctioned using a second-price sealed bid. The price time series output by the model exhibits properties similar to those found in real livestock markets.
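
    The market mechanism described, agents choosing to buy, sell, or abstain, followed by a second-price sealed-bid auction at the saleyard, can be sketched compactly. The decision rule below is a hypothetical simplification (crisp thresholds on last price and stock level) of the paper's fuzzy decision tree.

        import random

        class Farm:
            def __init__(self):
                self.stock = random.randint(50, 500)
                self.last_price = 100.0

            def decide(self):
                # Simplified stand-in for the fuzzy decision tree: price high and
                # stock plentiful -> sell; price low and stock scarce -> buy.
                if self.last_price > 110 and self.stock > 300:
                    return "sell"
                if self.last_price < 90 and self.stock < 150:
                    return "buy"
                return "abstain"

        def second_price_auction(buyers):
            """Sealed bids; the highest bidder wins but pays the second-highest bid."""
            bids = sorted(((random.uniform(80, 130), b) for b in buyers), reverse=True)
            if len(bids) < 2:
                return None
            (top_bid, winner), (second_bid, _) = bids[0], bids[1]
            return winner, second_bid

        farms = [Farm() for _ in range(100)]
        for step in range(10):
            sellers = [f for f in farms if f.decide() == "sell"]
            buyers = [f for f in farms if f.decide() == "buy"]
            for seller in sellers:
                result = second_price_auction(buyers)
                if result is None:
                    break
                winner, price = result
                seller.stock -= 10; winner.stock += 10
                for f in farms:
                    f.last_price = price      # the cleared price feeds the next decision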

  12. Agent-based modeling in ecological economics.

    Science.gov (United States)

    Heckbert, Scott; Baynes, Tim; Reeson, Andrew

    2010-01-01

    Interconnected social and environmental systems are the domain of ecological economics, and models can be used to explore feedbacks and adaptations inherent in these systems. Agent-based modeling (ABM) represents autonomous entities, each with dynamic behavior and heterogeneous characteristics. Agents interact with each other and their environment, resulting in emergent outcomes at the macroscale that can be used to quantitatively analyze complex systems. ABM is contributing to research questions in ecological economics in the areas of natural resource management and land-use change, urban systems modeling, market dynamics, changes in consumer attitudes, innovation, and diffusion of technology and management practices, commons dilemmas and self-governance, and psychological aspects to human decision making and behavior change. Frontiers for ABM research in ecological economics involve advancing the empirical calibration and validation of models through mixed methods, including surveys, interviews, participatory modeling, and, notably, experimental economics to test specific decision-making hypotheses. Linking ABM with other modeling techniques at the level of emergent properties will further advance efforts to understand dynamics of social-environmental systems.

  13. Fuzzy Aspect Based Opinion Classification System for Mining Tourist Reviews

    Directory of Open Access Journals (Sweden)

    Muhammad Afzaal

    2016-01-01

    Full Text Available Due to the large number of opinions available on websites, tourists are often overwhelmed with information and find it extremely difficult to use the available information to decide which tourist places to visit. A number of opinion mining methods have been proposed in the past to identify and classify an opinion as positive or negative. Recently, aspect based opinion mining has been introduced, which targets the various aspects present in the opinion text. A number of existing aspect based opinion classification methods are available in the literature, but very limited research work has targeted automatic aspect identification and the extraction of implicit, infrequent, and coreferential aspects. Aspect based classification suffers from the presence of irrelevant sentences in a typical user review. Such sentences make the data noisy and degrade the classification accuracy of machine learning algorithms. This paper presents a fuzzy aspect based opinion classification system which efficiently extracts aspects from user opinions and performs highly accurate classification. We conducted experiments on real world datasets to evaluate the effectiveness of the proposed system. Experimental results show that the proposed system is not only effective in aspect extraction but also improves classification accuracy.

  14. A Syntactic Classification based Web Page Ranking Algorithm

    CERN Document Server

    Mukhopadhyay, Debajyoti; Kim, Young-Chon

    2011-01-01

    Existing search engines sometimes give unsatisfactory results for lack of any categorization of the search results. If there were some means to learn a user's preferences about the results and to rank pages according to those preferences, the results would be more useful and accurate for the user. In the present paper a web page ranking algorithm is proposed based on syntactic classification of web pages. Syntactic classification does not consider the meaning of a web page's content. The proposed approach consists of three main steps: select some properties of web pages based on the user's demand, measure them, and give different weightage to each property during ranking for different types of pages. The existence of syntactic classes is supported by running the fuzzy c-means algorithm and a neural network classifier on a set of web pages. The change in ranking for different types of pages given the same query string is also demonstrated.

  15. Texture Classification Using Sparse Frame-Based Representations

    Directory of Open Access Journals (Sweden)

    Skretting Karl

    2006-01-01

    Full Text Available A new method for supervised texture classification, denoted by frame texture classification method (FTCM, is proposed. The method is based on a deterministic texture model in which a small image block, taken from a texture region, is modeled as a sparse linear combination of frame elements. FTCM has two phases. In the design phase a frame is trained for each texture class based on given texture example images. The design method is an iterative procedure in which the representation error, given a sparseness constraint, is minimized. In the classification phase each pixel in a test image is labeled by analyzing its spatial neighborhood. This block is represented by each of the frames designed for the texture classes under consideration, and the frame giving the best representation gives the class. The FTCM is applied to nine test images of natural textures commonly used in other texture classification work, yielding excellent overall performance.
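
    The classification phase of FTCM amounts to representing each image block in every class's learned frame and picking the class with the smallest representation error. The sketch below is a simplified stand-in: it skips the iterative frame-design phase, using random matrices as each class's frame, and computes the sparse approximation with orthogonal matching pursuit from scikit-learn; the frame sizes and sparsity level are invented for illustration.

        import numpy as np
        from sklearn.linear_model import OrthogonalMatchingPursuit

        def sparse_residual(frame, block, sparsity=3):
            """Error of the best s-sparse representation of `block` in `frame`."""
            omp = OrthogonalMatchingPursuit(n_nonzero_coefs=sparsity)
            omp.fit(frame, block)
            return np.linalg.norm(block - omp.predict(frame))

        def classify_block(block, frames, sparsity=3):
            """FTCM-style decision: the frame representing the block best wins."""
            errors = [sparse_residual(F, block, sparsity) for F in frames]
            return int(np.argmin(errors))

        rng = np.random.default_rng(1)
        dim, atoms = 25, 40               # e.g. 5x5 pixel blocks, 40 frame vectors
        # Stand-in frames; FTCM would instead train one per texture class by
        # iteratively minimising the representation error under a sparseness constraint.
        frames = [rng.normal(size=(dim, atoms)) for _ in range(2)]
        coeffs = np.zeros(atoms)
        coeffs[[3, 17, 28]] = rng.normal(size=3)              # sparse ground truth
        block = frames[1] @ coeffs + 0.01 * rng.normal(size=dim)
        print("predicted class:", classify_block(block, frames))   # expect 1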

  16. Feature Extraction based Face Recognition, Gender and Age Classification

    Directory of Open Access Journals (Sweden)

    Venugopal K R

    2010-01-01

    Full Text Available A face recognition system with large training sets for personal identification normally attains good accuracy. In this paper, we propose the Feature Extraction based Face Recognition, Gender and Age Classification (FEBFRGAC) algorithm, which requires only small training sets and yields good results even with one image per person. The process involves three stages: pre-processing, feature extraction and classification. The geometric features of facial images, such as the eyes, nose and mouth, are located using the Canny edge operator, and face recognition is performed. Based on texture and shape information, gender and age classification is done using the posteriori class probability and an artificial neural network, respectively. It is observed that face recognition accuracy is 100%, while gender and age classification accuracies are around 98% and 94%, respectively.

  17. Analysis of Kernel Approach in Fuzzy-Based Image Classifications

    Directory of Open Access Journals (Sweden)

    Mragank Singhal

    2013-03-01

    Full Text Available This paper presents a framework for the kernel approach in the field of fuzzy based image classification in remote sensing. The goal of image classification is to separate images according to their visual content into two or more disjoint classes. Fuzzy logic is a relatively young theory. A major advantage of this theory is that it allows problems to be described naturally, in linguistic terms, rather than in terms of relationships between precise numerical values. This paper describes how remote sensing data with uncertainty are handled with fuzzy based classification using the kernel approach for land use/land cover map generation. The introduction of fuzzification using the kernel approach provides the basis for the development of more robust approaches to the remote sensing classification problem. The kernel explicitly defines a similarity measure between two samples and implicitly represents the mapping of the input space to the feature space.

  18. Object Based and Pixel Based Classification Using Rapideye Satellite Imager of ETI-OSA, Lagos, Nigeria

    Directory of Open Access Journals (Sweden)

    Esther Oluwafunmilayo Makinde

    2016-12-01

    Full Text Available Several studies have been carried out to find an appropriate method to classify remote sensing data. Traditional classification approaches are all pixel-based, and do not utilize the spatial information within an object, which is an important source of information for image classification. Thus, this study compared pixel based and object based classification algorithms using a RapidEye satellite image of Eti-Osa LGA, Lagos. In the object-oriented approach, the image was segmented into homogeneous areas using suitable parameters such as scale parameter, compactness and shape. Classification based on the segments was done by a nearest neighbour classifier. In the pixel-based classification, the spectral angle mapper was used to classify the images. The user accuracies for the object based classification were 98.31% for waterbody, 92.31% for vegetation, 86.67% for bare soil and 90.57% for built up, while the user accuracies for the pixel based classification were 98.28% for waterbody, 84.06% for vegetation, 86.36% for bare soil and 79.41% for built up. These classification techniques were subjected to accuracy assessment: the overall accuracy of the object based classification was 94.47%, while the pixel based classification yielded 86.64%. The results of the classification and accuracy assessment show that the object-based approach gave more accurate and satisfying results.

  19. Agent based modeling in tactical wargaming

    Science.gov (United States)

    James, Alex; Hanratty, Timothy P.; Tuttle, Daniel C.; Coles, John B.

    2016-05-01

    Army staffs at division, brigade, and battalion levels often plan for contingency operations. As such, analysts consider the impact and potential consequences of actions taken. The Army Military Decision-Making Process (MDMP) dictates identification and evaluation of possible enemy courses of action; however, non-state actors often do not exhibit the same level and consistency of planned actions that the MDMP was originally designed to anticipate. The fourth MDMP step is a particular challenge: wargaming courses of action within the context of complex social-cultural behaviors. Agent-based modeling (ABM) and its resulting emergent behavior are a potential solution for modeling terrain in terms of the human domain and improving the results and rigor of the traditional wargaming process.

  20. Agents-based distributed processes control systems

    Directory of Open Access Journals (Sweden)

    Adrian Gligor

    2011-12-01

    Full Text Available Large industrial distributed systems have seen remarkable development in recent years. Their structural and functional complexity has increased, along with the requirements placed on them. These are some of the reasons why numerous research efforts, energy and resources are devoted to solving problems related to these types of systems. The paper addresses the issue of industrial distributed systems, with special attention given to distributed industrial process control systems. A solution for a distributed process control system based on mobile intelligent agents is presented. The main objective of the proposed system is to provide an optimal solution in terms of cost, maintenance, reliability and flexibility. The paper focuses on the requirements, architecture, functionality and advantages brought by the proposed solution.

  1. Knowledge Management in Role Based Agents

    Science.gov (United States)

    Kır, Hüseyin; Ekinci, Erdem Eser; Dikenelli, Oguz

    In the multi-agent systems literature, the role concept is increasingly researched as an abstraction that scopes the beliefs, norms and goals of agents and shapes the relationships of agents in an organization. In this research, we propose a knowledgebase architecture that increases the applicability of roles in the MAS domain by drawing inspiration from the self concept in the role theory of sociology. The proposed knowledgebase architecture has a granulated structure that is dynamically organized according to the agent's identification in a social environment. Thanks to this dynamic structure, agents are able to work on consistent knowledge in spite of the inevitable conflicts between roles and the agent. The knowledgebase architecture has also been implemented and incorporated into the SEAGENT multi-agent system development framework.

  2. An Active Learning Exercise for Introducing Agent-Based Modeling

    Science.gov (United States)

    Pinder, Jonathan P.

    2013-01-01

    Recent developments in agent-based modeling as a method of systems analysis and optimization indicate that students in business analytics need an introduction to the terminology, concepts, and framework of agent-based modeling. This article presents an active learning exercise for MBA students in business analytics that demonstrates agent-based…

  3. Tomato classification based on laser metrology and computer algorithms

    Science.gov (United States)

    Igno Rosario, Otoniel; Muñoz Rodríguez, J. Apolinar; Martínez Hernández, Haydeé P.

    2011-08-01

    An automatic technique for tomato classification based on size and color is presented. Size is determined from surface contouring by laser line scanning, where a Bezier network computes the tomato height from the line position. Tomato color is determined in the CIELCH color space from the red and green components. The tomato size is thus classified into large, medium and small, and the tomato is also classified into one of six colors associated with its maturity. The performance and accuracy of the classification system are evaluated against methods reported in recent years. The technique is tested and experimental results are presented.

  4. Visual words based approach for tissue classification in mammograms

    Science.gov (United States)

    Diamant, Idit; Goldberger, Jacob; Greenspan, Hayit

    2013-02-01

    The presence of microcalcifications (MC) is an important indicator for developing breast cancer. Additional indicators of cancer risk exist, such as breast tissue density type. Different methods have been developed for breast tissue classification for use in computer-aided diagnosis systems. Recently, the visual words (VW) model has been successfully applied to various classification tasks. The goal of our work is to explore VW based methodologies for various mammography classification tasks. We start with the challenge of classifying breast density and then focus on classification of normal tissue versus microcalcifications. The presented methodology is based on a patch-based visual words model, which includes building a dictionary for a training set using local descriptors and representing the image by a visual word histogram. Classification is then performed using k-nearest-neighbour (KNN) and support vector machine (SVM) classifiers. We tested our algorithm on the publicly available MIAS and DDSM datasets. The input is a representative region of interest per mammography image, manually selected and labelled by an expert. In the tissue density task, classification accuracy reached 85% using KNN and 88% using SVM, which competes with state-of-the-art results. For MC versus normal tissue, accuracy reached 95.6% using SVM. The results demonstrate the feasibility of classifying breast tissue using our model. Currently, we are improving the results further while also investigating the capability of VW to address additional important mammogram classification problems. We expect that the presented methodology will enable high levels of classification, suggesting new means for automated tools to support mammography diagnosis.
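
    A patch-based visual-words pipeline of the kind described, build a dictionary from local descriptors, represent each image as a visual-word histogram, then classify, can be assembled from standard components. The sketch below uses raw pixel patches as descriptors and k-means for the dictionary; those choices, and the patch and dictionary sizes, are assumptions, since the paper's specific descriptor is not given here.

        import numpy as np
        from sklearn.cluster import KMeans
        from sklearn.svm import SVC

        def extract_patches(image, size=8, step=4):
            """Dense square patches used as local descriptors."""
            h, w = image.shape
            return np.array([image[r:r+size, c:c+size].ravel()
                             for r in range(0, h-size+1, step)
                             for c in range(0, w-size+1, step)])

        def vw_histogram(image, dictionary):
            """Represent an image as a normalized histogram of visual words."""
            words = dictionary.predict(extract_patches(image))
            hist = np.bincount(words, minlength=dictionary.n_clusters)
            return hist / hist.sum()

        rng = np.random.default_rng(0)
        train_images = [rng.random((64, 64)) for _ in range(20)]   # stand-in ROIs
        train_labels = rng.integers(0, 2, 20)                      # e.g. MC vs normal

        # 1) Dictionary: cluster all training patches into visual words.
        all_patches = np.vstack([extract_patches(im) for im in train_images])
        dictionary = KMeans(n_clusters=50, n_init=4, random_state=0).fit(all_patches)

        # 2) Histograms + 3) classifier (SVM here; KNN is the paper's other option).
        X = np.array([vw_histogram(im, dictionary) for im in train_images])
        clf = SVC(kernel="rbf").fit(X, train_labels)
        print("prediction:", clf.predict([vw_histogram(rng.random((64, 64)), dictionary)]))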

  5. Classification of LiDAR Data with Point Based Classification Methods

    Science.gov (United States)

    Yastikli, N.; Cetin, Z.

    2016-06-01

    LiDAR is one of the most effective systems for three-dimensional (3D) data collection over wide areas. Nowadays, airborne LiDAR data are used frequently in various applications, such as object extraction, 3D modelling, change detection and map revision, with increasing point density and accuracy. The classification of LiDAR points is the first step of the LiDAR data processing chain and should be handled properly, since applications such as 3D city modelling, building extraction and DEM generation directly use the classified point clouds. Different classification methods can be seen in recent research, most of which works with gridded LiDAR point clouds. In grid based processing of LiDAR data, the loss of characteristic points, especially in vegetation and buildings, or the loss of height accuracy during the interpolation stage, is inevitable. In this case, a possible solution is to use the raw point cloud data for classification, avoiding data and accuracy loss in the gridding process. In this study, the point based classification possibilities of the LiDAR point cloud are investigated to obtain more accurate classes. Automatic point based approaches based on hierarchical rules are proposed to derive ground, building and vegetation classes from the raw LiDAR point cloud data. In the proposed approaches, every single LiDAR point is analyzed according to features such as height and multi-return, and is then automatically assigned to the class to which it belongs. The use of the un-gridded point cloud in the proposed point based classification process helped in determining more realistic rule sets. Detailed parameter analyses were performed to obtain the most appropriate parameters in the rule sets and thus achieve accurate classes. Hierarchical rule sets were created for the proposed Approach 1 (using selected spatial-based and echo-based features) and Approach 2 (using only selected spatial-based features
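
    The hierarchical rule idea, inspect each raw point's features (height above ground, return count, etc.) and assign it to ground, building, or vegetation, translates directly into code. The thresholds below are invented placeholders; the paper derives its actual rule sets from detailed parameter analyses.

        from dataclasses import dataclass

        @dataclass
        class LidarPoint:
            z: float              # height above a local ground estimate (m)
            num_returns: int      # returns recorded for the emitted pulse
            is_last_return: bool

        def classify_point(p: LidarPoint) -> str:
            """Hierarchical rules applied to every single (un-gridded) point."""
            if p.z < 0.3:                      # near the ground surface
                return "ground"
            if p.num_returns > 1:              # multiple returns suggest penetrable cover
                return "vegetation"
            if p.z > 2.5 and p.is_last_return: # tall, solid, single-return surface
                return "building"
            return "unclassified"

        points = [LidarPoint(0.1, 1, True), LidarPoint(7.9, 3, False),
                  LidarPoint(6.2, 1, True)]
        print([classify_point(p) for p in points])
        # ['ground', 'vegetation', 'building']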

  6. Semantic Document Image Classification Based on Valuable Text Pattern

    Directory of Open Access Journals (Sweden)

    Hossein Pourghassem

    2011-01-01

    Full Text Available Knowledge extraction from detected document images is a complex problem in the field of information technology. The problem becomes more intricate when we consider that only a negligible percentage of the detected document images are valuable. In this paper, a segmentation-based classification algorithm is used to analyze the document image. In this algorithm, regions of the image are detected using a two-stage segmentation approach and then classified into document and non-document (pure) regions in a hierarchical classification. A novel definition of value is proposed to classify document images into valuable or invaluable categories. The proposed algorithm is evaluated on a database consisting of document and non-document images obtained from the Internet. Experimental results show the efficiency of the proposed algorithm in semantic document image classification. The proposed algorithm achieves an accuracy rate of 98.8% on the valuable versus invaluable document image classification problem.

  7. Indoor scene classification of robot vision based on cloud computing

    Science.gov (United States)

    Hu, Tao; Qi, Yuxiao; Li, Shipeng

    2016-07-01

    For intelligent service robots, indoor scene classification is an important issue. To overcome the weak real-time performance of conventional algorithms, a new method based on cloud computing is proposed for global image features in indoor scene classification. With the MapReduce method, the global PHOG feature of an indoor scene image is extracted in parallel, and the feature vector is used to train a decision classifier through SVM concurrently. The indoor scene is then classified by the decision classifier. To verify the algorithm's performance, we carried out an experiment with 350 typical indoor scene images from the MIT LabelMe image library. Experimental results show that the proposed algorithm attains better real-time performance: it is generally 1.4 to 2.1 times faster than traditional classification methods that rely on a single computer, while keeping a stable classification accuracy of 70%.

  8. Classification approach based on association rules mining for unbalanced data

    CERN Document Server

    Ndour, Cheikh

    2012-01-01

    This paper deals with supervised classification when the response variable is binary and its class distribution is unbalanced. In such situations, it is not possible to build a powerful classifier using standard methods such as logistic regression, classification trees, discriminant analysis, etc. To overcome this shortcoming of these methods, which provide classifiers with low sensitivity, we tackle the classification problem through an approach based on association rules learning, because this approach has the advantage of allowing the identification of patterns that are well correlated with the target class. Association rules learning is a well known method in the area of data mining. It is used with large databases for unsupervised discovery of local patterns that express hidden relationships between variables. Considering association rules from a supervised learning point of view, a relevant set of weak classifiers is obtained from which one derives a classification rule...

  9. Ensemble polarimetric SAR image classification based on contextual sparse representation

    Science.gov (United States)

    Zhang, Lamei; Wang, Xiao; Zou, Bin; Qiao, Zhijun

    2016-05-01

    Polarimetric SAR image interpretation has become one of the most interesting topics, in which the construction of a reasonable and effective image classification technique is of key importance. Sparse representation represents the data using the most succinct sparse atoms of an over-complete dictionary, and its advantages have also been confirmed in the field of PolSAR classification. However, like any ordinary classifier, it is imperfect in various respects. Ensemble learning is therefore introduced to address this issue: a number of different learners are trained, and their individual results are combined to obtain more accurate and ideal learning results. This paper accordingly presents a polarimetric SAR image classification method based on ensemble learning of sparse representations to achieve optimal classification.

  10. Pathological Bases for a Robust Application of Cancer Molecular Classification

    Directory of Open Access Journals (Sweden)

    Salvador J. Diaz-Cano

    2015-04-01

    Full Text Available Any robust classification system depends on its purpose and must refer to accepted standards, its strength relying on predictive values and a careful consideration of known factors that can affect its reliability. In this context, a molecular classification of human cancer must refer to the current gold standard (histological classification) and try to improve it with key prognosticators for metastatic potential, staging and grading. Although organ-specific examples have been published based on proteomic, transcriptomic and genomic evaluations, the most popular approach uses gene expression analysis as a direct correlate of cellular differentiation, which represents the key feature of the histological classification. RNA is a labile molecule that varies significantly according to the preservation protocol; its transcription reflects the adaptation of the tumor cells to the microenvironment; it can be passed on through mechanisms of intercellular transfer of genetic information (exosomes); and it is exposed to epigenetic modifications. More robust classifications should be based on stable molecules, represented at the genetic level by DNA, to improve reliability, and their analysis must deal with the concept of intratumoral heterogeneity, which is at the origin of tumor progression and is the byproduct of the selection process during the clonal expansion and progression of neoplasms. The simultaneous analysis of multiple DNA targets and next generation sequencing offer the best practical approach for an analytical genomic classification of tumors.

  11. Decentralized network management based on mobile agent

    Institute of Scientific and Technical Information of China (English)

    李锋; 冯珊

    2004-01-01

    Mobile agent technology can be employed effectively for the decentralized management of complex networks. We show how the integration of mobile agents with legacy management protocols, such as the simple network management protocol (SNMP), leads to a decentralized management architecture. HostWatcher is a framework that allows mobile agents to roam a network, collect and process data, and perform certain adaptive actions. A prototype system is built, and a quantitative analysis underlines the benefits with respect to reducing network load.

  12. Study on Increasing the Accuracy of Classification Based on Ant Colony algorithm

    Science.gov (United States)

    Yu, M.; Chen, D.-W.; Dai, C.-Y.; Li, Z.-L.

    2013-05-01

    The application of GIS advances the capability of data analysis on remote sensing images. Classified and extracted remote sensing imagery is the primary information source for GIS in LUCC applications, and how to increase classification accuracy is an important topic in remote sensing research. Adding features and developing new classification methods are the main ways to improve accuracy. Under a defined mode framework, the agents of nature-inspired computation algorithms such as the ant colony algorithm exhibit a uniform intelligent computation mode; applying the ant colony algorithm to remote sensing image classification is a new, preliminary method of swarm intelligence. Studying the applicability of the ant colony algorithm with more features, and exploring its advantages and performance, is therefore of great significance. The study takes the outskirts of Fuzhou, an area with complicated land use in Fujian Province, as the study area. A multi-source database was built integrating spectral information (TM1-5, TM7, NDVI, NDBI), topographic characteristics (DEM, slope, aspect) and textural information (mean, variance, homogeneity, contrast, dissimilarity, entropy, second moment, correlation). Classification rules based on the different characteristics are discovered from the samples through the ant colony algorithm, and a classification test is performed based on these rules. At the same time, we compare the accuracies with the traditional maximum likelihood method, the C4.5 algorithm and rough set classification. The study showed that the accuracy of classification based on the ant colony algorithm is higher than that of the other methods. In addition, the near-term land use and cover changes in Fuzhou are studied and the figures are displayed using remote sensing technology based on the ant colony algorithm.

  13. Agent-Based Health Monitoring System Project

    Data.gov (United States)

    National Aeronautics and Space Administration — We propose a combination of software intelligent agents to achieve decentralized reasoning, with fault detection and diagnosis using PCA, neural nets, and maximum...

  14. Recent advances in agent-based complex automated negotiation

    CERN Document Server

    Ito, Takayuki; Zhang, Minjie; Fujita, Katsuhide; Robu, Valentin

    2016-01-01

    This book covers recent advances in Complex Automated Negotiations, a widely studied emerging area in the field of Autonomous Agents and Multi-Agent Systems. The book includes selected revised and extended papers from the 7th International Workshop on Agent-Based Complex Automated Negotiation (ACAN2014), which was held in Paris, France, in May 2014. The book also includes brief introductions to agent-based complex automated negotiation, based on tutorials provided at the workshop, and brief summaries and descriptions of the ANAC'14 (Automated Negotiating Agents Competition), where the authors of selected finalist agents explain the strategies and ideas they used. The book is targeted at academic and industrial researchers in various communities of autonomous agents and multi-agent systems, such as agreement technology, mechanism design, electronic commerce and related areas, as well as graduate, undergraduate, and PhD students working in those areas or interested in them.

  15. Dissimilarity-based classification of anatomical tree structures

    DEFF Research Database (Denmark)

    Sørensen, Lauge Emil Borch Laurs; Lo, Pechin Chien Pau; Dirksen, Asger

    2011-01-01

    A novel method for classification of abnormality in anatomical tree structures is presented. A tree is classified based on direct comparisons with other trees in a dissimilarity-based classification scheme. The pair-wise dissimilarity measure between two trees is based on a linear assignment...... by including anatomical features in the branch feature vectors. The proposed approach is applied to classify airway trees in computed tomography images of subjects with and without chronic obstructive pulmonary disease (COPD). Using the wall area percentage (WA%), a common measure of airway abnormality in COPD...

  16. Modelling of robotic work cells using agent based-approach

    Science.gov (United States)

    Sękala, A.; Banaś, W.; Gwiazda, A.; Monica, Z.; Kost, G.; Hryniewicz, P.

    2016-08-01

    In modern manufacturing systems the requirements, both in scope and in the characteristics of technical procedures, are changing dynamically. As a result, the organization of a production system may be unable to keep up with changes in market demand. Accordingly, there is a need for new design methods characterized, on the one hand, by high efficiency and, on the other, by an adequate level of generated organizational solutions. One of the tools that could be used for this purpose is the concept of agent systems. These systems are tools of artificial intelligence. They allow assigning to agents the proper domains of procedures and knowledge, so that, in a self-organizing agent environment, the agents represent components of a real system. An agent-based system for modelling a robotic work cell should be designed taking into consideration the many limitations associated with the characteristics of this production unit. It is possible to distinguish groups of structural components that constitute such a system, which confirms the structural complexity of a work cell as a specific production system. It is therefore necessary to develop agents depicting various aspects of the work cell structure. The main groups of agents used to model a robotic work cell should include at least the following pattern representatives: machine tool agents, auxiliary equipment agents, robot agents, transport equipment agents, organizational agents, as well as data and knowledge base agents. In this way it is possible to create the holarchy of the agent-based system.

  17. Dissimilarity-based classification of anatomical tree structures

    DEFF Research Database (Denmark)

    Sørensen, Lauge; Lo, Pechin Chien Pau; Dirksen, Asger

    2011-01-01

    A novel method for classification of abnormality in anatomical tree structures is presented. A tree is classified based on direct comparisons with other trees in a dissimilarity-based classification scheme. The pair-wise dissimilarity measure between two trees is based on a linear assignment...... between the branch feature vectors representing those trees. Hereby, localized information in the branches is collectively used in classification and variations in feature values across the tree are taken into account. An approximate anatomical correspondence between matched branches can be achieved...... by including anatomical features in the branch feature vectors. The proposed approach is applied to classify airway trees in computed tomography images of subjects with and without chronic obstructive pulmonary disease (COPD). Using the wall area percentage (WA%), a common measure of airway abnormality in COPD...

  18. Classification of Gait Types Based on the Duty-factor

    DEFF Research Database (Denmark)

    Fihl, Preben; Moeslund, Thomas B.

    2007-01-01

    This paper deals with classification of human gait types based on the notion that different gait types are in fact different types of locomotion, i.e., running is not simply walking done faster. We present the duty-factor, which is a descriptor based on this notion. The duty-factor is independent...

  19. Mobile Agent Based on Internet

    Institute of Scientific and Technical Information of China (English)

    徐练; 周龙骧; 王翰虎

    2001-01-01

    The mobile agent is a hybrid of Internet technology and artificial intelligence. Today, tremendous amounts of information resources are distributed across the Internet, but it is very difficult to find what one wants. The Internet has increasingly become a vital computing platform for electronic commerce, which has become highly popular throughout the world. Developing new Internet-based application programs, such as online shopping, e-business and search engines, poses new tasks, and the mobile agent offers new ideas and technology for them. With the Internet in mind, this thesis conducts research on the architecture and mobility mechanism of mobile agent systems. Building on agent theory research and engineering, the thesis focuses on mobile agents, which have the ability to rove through the network. Using OMG's "Mobile Agent Facility Specification" for reference, we design a model architecture for a mobile agent system. Based on this architecture, the article analyzes the key technologies and gives methods for realizing them, with emphasis on the mobility mechanism of agents and its implementation. Finally, a Java-based mobile agent system model is given.

  20. 3D Land Cover Classification Based on Multispectral LIDAR Point Clouds

    Science.gov (United States)

    Zou, Xiaoliang; Zhao, Guihua; Li, Jonathan; Yang, Yuanxi; Fang, Yong

    2016-06-01

    A multispectral Lidar system can emit simultaneous laser pulses at different wavelengths. The reflected multispectral energy is captured through a receiver of the sensor, and the return signal is recorded together with the position and orientation information of the sensor. These recorded data are processed with GNSS/IMU data in further post-processing, forming high density multispectral 3D point clouds. As the first commercial multispectral airborne Lidar sensor, the Optech Titan system is capable of collecting point cloud data in all three channels: at 532 nm visible (green), at 1064 nm near infrared (NIR) and at 1550 nm intermediate infrared (IR). It has become a new source of data for 3D land cover classification. The paper presents an Object Based Image Analysis (OBIA) approach that uses only multispectral Lidar point cloud datasets for 3D land cover classification. The approach consists of three steps. Firstly, multispectral intensity images are segmented into image objects on the basis of multi-resolution segmentation integrating different scale parameters. Secondly, intensity objects are classified into nine categories using customized classification-index features and a combination of the multispectral reflectance with the vertical distribution of object features. Finally, accuracy assessment is conducted by comparing random reference sample points from Google imagery tiles with the classification results. The classification results show higher overall accuracy for most land cover types; over 90% overall accuracy is achieved using multispectral Lidar point clouds for 3D land cover classification.

  1. Super pixel density based clustering automatic image classification method

    Science.gov (United States)

    Xu, Mingxing; Zhang, Chuan; Zhang, Tianxu

    2015-12-01

    Image classification is an important means of image segmentation and data mining, and how to achieve rapid automated image classification has been a focus of research. In this paper, an algorithm based on the density of cluster centers over superpixels is used for automatic image classification and outlier identification. Pixel location coordinates and gray values are used to compute density and distance, from which automatic classification and outlier extraction are achieved. Because large pixel counts dramatically increase the computational complexity, the image is preprocessed into a small number of superpixel sub-blocks before the density and distance calculations. A normalized density-distance discrimination rule is designed to achieve automatic classification and cluster center selection, whereby the image is automatically classified and outliers are identified. Extensive experiments show that our method requires no human intervention, categorizes images automatically with greater computing speed than the density clustering algorithm, and performs automated classification and outlier extraction effectively.
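
    The density-and-distance rule described matches the "clustering by density peaks" idea: for each sample compute a local density rho and the distance delta to the nearest denser sample, then pick as cluster centers the points where the normalized product rho*delta is large, and flag low-density, high-delta points as outliers. A minimal sketch of that computation (on raw points rather than superpixel blocks, for brevity; the cutoff d_c and the outlier rule are assumed values):

        import numpy as np

        def density_peaks(points, d_c=0.5, n_centers=2):
            """Return cluster-center and outlier indices by the density/distance rule."""
            d = np.linalg.norm(points[:, None] - points[None, :], axis=2)
            rho = (d < d_c).sum(axis=1) - 1          # neighbours within cutoff d_c
            delta = np.empty(len(points))
            for i in range(len(points)):
                denser = np.where(rho > rho[i])[0]   # points with higher density
                delta[i] = d[i, denser].min() if len(denser) else d[i].max()
            gamma = (rho / rho.max()) * (delta / delta.max())   # normalized product
            centers = np.argsort(gamma)[-n_centers:]
            # Points far from anything denser but with almost no neighbours are outliers.
            outliers = np.where((delta > np.median(delta) * 3) & (rho <= 1))[0]
            return centers, outliers

        rng = np.random.default_rng(2)
        pts = np.vstack([rng.normal(0, 0.2, (50, 2)), rng.normal(3, 0.2, (50, 2)),
                         [[10.0, 10.0]]])            # two clusters plus one outlier
        centers, outliers = density_peaks(pts)
        print("centers:", centers, "outliers:", outliers)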

  2. Validating agent based models through virtual worlds.

    Energy Technology Data Exchange (ETDEWEB)

    Lakkaraju, Kiran; Whetzel, Jonathan H.; Lee, Jina; Bier, Asmeret Brooke; Cardona-Rivera, Rogelio E.; Bernstein, Jeremy Ray Rhythm

    2014-01-01

    As the US continues its vigilance against distributed, embedded threats, understanding the political and social structure of these groups becomes paramount for predicting and disrupting their attacks. Agent-based models (ABMs) serve as a powerful tool to study these groups. While the popularity of social network tools (e.g., Facebook, Twitter) has provided extensive communication data, there is a lack of fine-grained behavioral data with which to inform and validate existing ABMs. Virtual worlds, in particular massively multiplayer online games (MMOGs), where large numbers of people interact within a complex environment for long periods of time, provide an alternative source of data. These environments provide a rich social setting where players engage in a variety of activities observed between real-world groups: collaborating and/or competing with other groups, conducting battles for scarce resources, and trading in a market economy. Strategies employed by player groups surprisingly reflect those seen in present-day conflicts, where players use diplomacy or espionage as their means for accomplishing their goals. In this project, we propose to address the need for fine-grained behavioral data by acquiring and analyzing game data from a commercial MMOG, referred to within this report as Game X. The goals of this research were: (1) devising toolsets for analyzing virtual world data to better inform the rules that govern a social ABM and (2) exploring how virtual worlds could serve as a source of data to validate ABMs established for analogous real-world phenomena. During this research, we studied certain patterns of group behavior to complement social modeling efforts where a significant lack of detailed examples of observed phenomena exists. This report outlines our work examining group behaviors that underlie what we have termed the Expression-To-Action (E2A) problem: determining the changes in social contact that lead individuals/groups to engage in a particular behavior.

  3. Atmospheric circulation classification comparison based on wildfires in Portugal

    Science.gov (United States)

    Pereira, M. G.; Trigo, R. M.

    2009-04-01

    Atmospheric circulation classifications are not simply descriptions of atmospheric states but a tool for understanding and interpreting atmospheric processes and for modelling the relation between atmospheric circulation and surface climate and other related variables (Huth et al., 2008). Classifications were initially developed for weather forecasting purposes; however, with progress in computer processing capability, new and more robust objective methods were developed and applied to large datasets, making atmospheric circulation classification one of the most important fields in synoptic and statistical climatology. Classification studies have been extensively used in climate change studies (e.g. reconstructed past climates, recent observed changes and future climates), in bioclimatological research (e.g. relating human mortality to climatic factors) and in a wide variety of synoptic climatological applications (e.g. comparison between datasets, air pollution, snow avalanches, wine quality, fish captures and forest fires). Likewise, atmospheric circulation classifications are important for studying the role of weather in wildfire occurrence in Portugal, because daily synoptic variability is the most important driver of local weather conditions (Pereira et al., 2005). In particular, the objective classification scheme developed by Trigo and DaCamara (2000) to classify the atmospheric circulation affecting Portugal has proved quite useful in discriminating the occurrence and development of wildfires, as well as the distribution over Portugal of surface climatic variables with impact on wildfire activity, such as maximum and minimum temperature and precipitation. This work aims to present: (i) an overview of the existing circulation classifications for the Iberian Peninsula, and (ii) the results of a comparison study between these atmospheric circulation classifications based on their relation with wildfires and relevant meteorological

  4. Agent-Based Decentralized Control Method for Islanded Microgrids

    DEFF Research Database (Denmark)

    Li, Qiang; Chen, Feixiong; Chen, Minyou;

    2016-01-01

    In this paper, an agent-based decentralized control model for islanded microgrids is proposed, which consists of a two-layer control structure. The bottom layer is the electrical distribution microgrid, while the top layer is the communication network composed of agents. An agent is regarded as a...

  5. Cement industry control system based on multi agent

    Institute of Scientific and Technical Information of China (English)

    王海东; 邱冠周; 黄圣生

    2004-01-01

    Cement production is characterized by its great capacity, long time delays, multiple variables, difficult measurement and multiple disturbances. Following a distributed intelligent control strategy based on multiple agents, a multi-agent control system for cement production is built, which includes integrated optimal control and diagnosis control. The distributed, multi-level structure of the multi-agent system for cement control is studied. The optimal agents are distributed, each addressing a partial process of cement production, and together form the optimization layer. The diagnosis agent, located on the diagnosis layer, is the diagnosis unit for the whole cement production process and the central management unit of the system. System cooperation is realized through communication among the optimal agents and the diagnosis agent. The architectures of the optimal agent and the diagnosis agent are designed, and their detailed functions are analyzed. Finally, the realization methods of the agents are given, and an application of the multi-agent control system is presented. The multi-agent system has been successfully applied to the off-line control of a cement plant with a capacity of 5 000 t/d. The results show that the average yield of clinker increases by 9.3% and coal consumption decreases by 7.5 kg/t.

  6. Agent Based Processing of Global Evaluation Function

    CERN Document Server

    Hossain, M Shahriar; Joarder, Md Mahbubul Alam

    2011-01-01

    Load balancing across a networked environment is a monotonous job. Moreover, if the job to be distributed is a constraint-satisfying one, the distribution of load demands real intelligence. This paper proposes parallel processing through a Global Evaluation Function by means of randomly initialized agents for solving Constraint Satisfaction Problems. A potential issue, the number of agents per machine involved in the distribution, is discussed with a view to securing the maximum benefit from Global Evaluation and parallel processing. The proposed system is compared with a typical solution, and the outcome supports the merit of a parallel implementation of the Global Evaluation Function with a certain number of agents on each invoked machine.
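
    The paper's exact Global Evaluation Function is not spelled out here, so the sketch below assumes it counts violated constraints, with n-queens as the example CSP, and simulates the randomly initialized agents with a pool of worker processes: each agent runs a local search that greedily lowers the global evaluation, and the first agent to reach zero wins.

        import random
        from multiprocessing import Pool

        N = 8                                   # n-queens as the example CSP

        def global_evaluation(cols):
            """Number of violated constraints: attacking queen pairs (0 = solved)."""
            return sum(1 for i in range(N) for j in range(i + 1, N)
                       if cols[i] == cols[j] or abs(cols[i] - cols[j]) == j - i)

        def agent(seed):
            """One randomly initialized agent doing min-conflicts local search."""
            rng = random.Random(seed)
            cols = [rng.randrange(N) for _ in range(N)]
            for _ in range(10_000):
                if global_evaluation(cols) == 0:
                    return cols
                row = rng.randrange(N)
                # Move the queen in `row` to the column minimizing the global evaluation.
                cols[row] = min(range(N), key=lambda c: global_evaluation(
                    cols[:row] + [c] + cols[row + 1:]))
            return None                          # this agent failed; others may succeed

        if __name__ == "__main__":
            with Pool(4) as pool:               # agents distributed over worker processes
                for result in pool.imap_unordered(agent, range(8)):
                    if result is not None:
                        print("solution:", result)
                        pool.terminate()
                        break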

  7. Agent Persuasion Mechanism of Acquaintance

    Science.gov (United States)

    Jinghua, Wu; Wenguang, Lu; Hailiang, Meng

    Agent persuasion can improve negotiation efficiency in dynamic environments thanks to its initiative, autonomy, and related properties, and it is strongly affected by acquaintance. A classification of acquaintance in agent persuasion is illustrated, and the agent persuasion model of acquaintance is also presented. Then the concept of the agent persuasion degree of acquaintance is given. Finally, the related interaction mechanism is elaborated.

  8. An Efficient Semantic Model For Concept Based Clustering And Classification

    Directory of Open Access Journals (Sweden)

    SaiSindhu Bandaru

    2012-03-01

    Full Text Available Usually in text mining techniques, basic measures like the term frequency of a term (word or phrase) are computed to assess the importance of the term in the document. But with statistical analysis alone, the original semantics of the term may not carry its exact meaning. To overcome this problem, a new framework has been introduced which relies on a concept-based model and a synonym-based approach. The proposed model can efficiently find significant matching and related concepts between documents according to the concept-based and synonym-based approaches. Large sets of experiments using the proposed model on different datasets for clustering and classification were conducted. Experimental results demonstrate a substantial enhancement of clustering quality using sentence-based, document-based, corpus-based and combined-approach concept analysis. A new similarity measure has been proposed to find the similarity between a document and the existing clusters, which can be used in classifying the document against the existing clusters.

  9. Object Based and Pixel Based Classification Using Rapideye Satellite Imager of ETI-OSA, Lagos, Nigeria

    OpenAIRE

    Esther Oluwafunmilayo Makinde; Ayobami Taofeek Salami; James Bolarinwa Olaleye; Oluwapelumi Comfort Okewusi

    2016-01-01

    Several studies have been carried out to find an appropriate method to classify the remote sensing data. Traditional classification approaches are all pixel-based, and do not utilize the spatial information within an object which is an important source of information to image classification. Thus, this study compared the pixel based and object based classification algorithms using RapidEye satellite image of Eti-Osa LGA, Lagos. In the object-oriented approach, the image was segmented to homog...

  10. An Interactive Tool for Creating Multi-Agent Systems and Interactive Agent-based Games

    DEFF Research Database (Denmark)

    Lund, Henrik Hautop; Pagliarini, Luigi

    2011-01-01

    Utilizing principles from parallel and distributed processing combined with inspiration from modular robotics, we developed the modular interactive tiles. As an educational tool, the modular interactive tiles facilitate the learning of multi-agent systems and interactive agent-based games..... The modular and physical properties of the tiles provide students with hands-on experience in exploring the theoretical aspects underlying multi-agent systems, which often appear challenging to students. By changing the representation of the cognitively challenging aspects of multi-agent systems education...

  11. Improvement of Bioactive Compound Classification through Integration of Orthogonal Cell-Based Biosensing Methods

    Directory of Open Access Journals (Sweden)

    Goran N. Jovanovic

    2007-01-01

    Full Text Available Lack of specificity for different classes of chemical and biological agents, and false positives and negatives, can limit the range of applications for cell-based biosensors. This study suggests that the integration of results from algal cells (Mesotaenium caldariorum) and fish chromatophores (Betta splendens) improves classification efficiency and detection reliability. Cells were challenged with paraquat, mercuric chloride, sodium arsenite and clonidine. The two detection systems were independently investigated for classification of the toxin set by performing discriminant analysis. The algal system correctly classified 72% of the bioactive compounds, whereas the fish chromatophore system correctly classified 68%. The combined classification efficiency was 95%. The algal sensor readout is based on fluorescence measurements of changes in the energy producing pathways of photosynthetic cells, whereas the response from fish chromatophores was quantified using optical density. Change in optical density reflects interference with the functioning of cellular signal transduction networks. Thus, algal cells and fish chromatophores respond to the challenge agents through sufficiently different mechanisms of action to be considered orthogonal.

  12. NIM: A Node Influence Based Method for Cancer Classification

    Directory of Open Access Journals (Sweden)

    Yiwen Wang

    2014-01-01

    Full Text Available The classification of different cancer types has great significance in the medical field. However, the great majority of existing cancer classification methods are clinical-based and have relatively weak diagnostic ability. With the rapid development of gene expression technology, it has become possible to classify different kinds of cancers using DNA microarrays. Our main idea is to approach the problem of cancer classification using gene expression data from a graph-based view. Based on a new node influence model we propose, this paper presents a novel high accuracy method for cancer classification, which is composed of four parts: the first is to calculate the similarity matrix of all samples, the second is to compute the node influence of the training samples, the third is to obtain the similarity between every test sample and each class using a weighted sum of node influence and the similarity matrix, and the last is to classify each test sample based on its similarity to every class. The data sets used in our experiments are breast cancer, central nervous system, colon tumor, prostate cancer, acute lymphoblastic leukemia, and lung cancer. Experimental results showed that our node influence based method (NIM) is more efficient and robust than support vector machines, K-nearest neighbor, C4.5, naive Bayes, and CART.
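
    The four steps enumerated translate almost line for line into code: build a sample-similarity matrix, score each training sample's influence, accumulate influence-weighted similarity per class, and take the argmax. The sketch below is one plausible reading of the method with assumed details, an RBF similarity and influence defined as a training sample's total similarity to its own class, since the abstract does not fix either choice.

        import numpy as np

        def rbf_similarity(A, B, gamma=0.5):
            """Step 1: pairwise similarity matrix between sample sets A and B."""
            d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
            return np.exp(-gamma * d2)

        def nim_classify(X_train, y_train, X_test):
            S_train = rbf_similarity(X_train, X_train)
            # Step 2 (assumed form): a node's influence is its total similarity
            # to the other training samples of the same class.
            same = (y_train[:, None] == y_train[None, :]).astype(float)
            influence = (S_train * same).sum(axis=1)
            # Step 3: weighted sum of node influence and similarity, per class.
            S_test = rbf_similarity(X_test, X_train)
            classes = np.unique(y_train)
            scores = np.stack(
                [(S_test[:, y_train == c] * influence[y_train == c]).sum(axis=1)
                 / (y_train == c).sum() for c in classes], axis=1)
            # Step 4: assign each test sample to its highest-scoring class.
            return classes[np.argmax(scores, axis=1)]

        rng = np.random.default_rng(3)
        X = np.vstack([rng.normal(0, 1, (30, 5)), rng.normal(3, 1, (30, 5))])
        y = np.array([0] * 30 + [1] * 30)
        print(nim_classify(X, y, X[:5] + 0.1))   # expected mostly class 0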

  13. NIM: a node influence based method for cancer classification.

    Science.gov (United States)

    Wang, Yiwen; Yao, Min; Yang, Jianhua

    2014-01-01

    The classification of different cancer types has great significance in the medical field. However, the great majority of existing cancer classification methods are clinical-based and have relatively weak diagnostic ability. With the rapid development of gene expression technology, it has become possible to classify different kinds of cancers using DNA microarray. Our main idea is to approach the problem of cancer classification using gene expression data from a graph-based view. Based on a new node influence model we proposed, this paper presents a novel high accuracy method for cancer classification, which is composed of four parts: the first is to calculate the similarity matrix of all samples, the second is to compute the node influence of training samples, the third is to obtain the similarity between every test sample and each class using a weighted sum of node influence and the similarity matrix, and the last is to classify each test sample based on its similarity to every class. The data sets used in our experiments are breast cancer, central nervous system, colon tumor, prostate cancer, acute lymphoblastic leukemia, and lung cancer. Experimental results showed that our node influence based method (NIM) is more efficient and robust than the support vector machine, K-nearest neighbor, C4.5, naive Bayes, and CART.

  14. MAIA: a framework for developing agent-based social simulations

    NARCIS (Netherlands)

    Ghorbani, Amineh; Dignum, Virginia; Bots, Pieter; Dijkema, Gerhard

    2013-01-01

    In this paper we introduce and motivate a conceptualization framework for agent-based social simulation, MAIA: Modelling Agent systems based on Institutional Analysis. The MAIA framework is based on Ostrom's Institutional Analysis and Development framework, and provides an extensive set of modelling

  15. Russian and Foreign Experience of Integration of Agent-Based Models and Geographic Information Systems

    Directory of Open Access Journals (Sweden)

    Konstantin Anatol’evich Gulin

    2016-11-01

    Full Text Available The article provides an overview of the mechanisms of integration of agent-based models and GIS technology developed by Russian and foreign researchers. The basic framework of the article rests on a critical analysis of domestic and foreign literature (monographs, scientific articles). The study applies universal scientific research methods: the system approach, analysis and synthesis, classification, systematization and grouping, generalization and comparison. The article presents the theoretical and methodological bases of integration of agent-based models and geographic information systems. The concept and essence of agent-based models are explained, and their main advantages (compared to other modeling methods) are identified. The paper characterizes the operating environment of agents as a key concept in the theory of agent-based modeling. It is shown that geographic information systems have a wide range of information resources for calculation, search and modeling of the real world in various aspects, acting as an effective tool for displaying the agents' operating environment and allowing the model to be brought as close as possible to real conditions. The authors also focus on the wide range of possibilities for research in different spatial and temporal contexts. A comparative analysis of platforms supporting the integration of agent-based models and geographic information systems has been carried out. The authors give examples of complex socio-economic models: a creative city model and a humanitarian assistance model. In the absence of standards for describing research results, the authors focus on model elements such as the characteristics of the agents and their operating environment, the agents' behavior, and the rules of interaction between the agents and the external environment. The paper describes the possibilities and prospects of implementing these models.

  16. Vessel-guided airway segmentation based on voxel classification

    DEFF Research Database (Denmark)

    Lo, Pechin Chien Pau; Sporring, Jon; Ashraf, Haseem;

    2008-01-01

    This paper presents a method for improving airway tree segmentation using vessel orientation information. We use the fact that an airway branch is always accompanied by an artery, with both structures having similar orientations. This work is based on a voxel classification airway segmentation...

  17. Hierarchical Real-time Network Traffic Classification Based on ECOC

    Directory of Open Access Journals (Sweden)

    Yaou Zhao

    2013-09-01

    Full Text Available Classification of network traffic is basic and essential for many network research and management tasks. With the rapid development of peer-to-peer (P2P) applications using dynamic port disguising techniques and encryption to avoid detection, port-based and simple payload-based network traffic classification methods have diminished in usefulness. An alternative approach based on statistics and machine learning has attracted researchers' attention in recent years. However, most of the proposed algorithms are off-line and usually use a single classifier. In this paper a new hierarchical real-time model is proposed which comprises a three-tuple (source IP, destination IP and destination port) look-up table (TT-LUT) part and a layered milestone part. The TT-LUT is used to quickly classify short flows, which need not pass through the layered milestone part, and the milestones in the layered milestone part can classify the remaining flows in real time with real-time feature selection and statistics. Every milestone is an ECOC (Error-Correcting Output Codes) based model, used to improve classification performance. Experiments showed that the proposed model can improve real-time efficiency to 80%, and multi-class classification accuracy encouragingly to 91.4%, on datasets captured from the backbone router of our campus over a week.
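
    Error-correcting output codes turn one multi-class decision into several binary decisions whose joint outputs form a codeword; a test flow receives the class with the nearest codeword. A minimal sketch using scikit-learn's ECOC wrapper, with hypothetical per-flow statistics in place of the paper's traffic features:

        import numpy as np
        from sklearn.multiclass import OutputCodeClassifier
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(0)
        X = rng.normal(size=(300, 10))      # hypothetical per-flow statistics
        y = rng.integers(0, 5, size=300)    # hypothetical application classes

        # code_size sets the codeword length relative to the number of classes;
        # longer codes add redundancy and hence error-correcting capability.
        ecoc = OutputCodeClassifier(estimator=LogisticRegression(max_iter=1000),
                                    code_size=2.0, random_state=0)
        ecoc.fit(X, y)
        print(ecoc.predict(X[:5]))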

  18. Classification and Target Group Selection Based Upon Frequent Patterns

    NARCIS (Netherlands)

    W.H.L.M. Pijls (Wim); R. Potharst (Rob)

    2000-01-01

    In this technical report, two new algorithms based upon frequent patterns are proposed. One algorithm is a classification method. The other one is an algorithm for target group selection. In both algorithms, first of all, the collection of frequent patterns in the training set is constructed...

  19. Novel insights in agent-based complex automated negotiation

    CERN Document Server

    Lopez-Carmona, Miguel; Ito, Takayuki; Zhang, Minjie; Bai, Quan; Fujita, Katsuhide

    2014-01-01

    This book focuses on all aspects of complex automated negotiations, which are studied in the field of autonomous agents and multi-agent systems. The book consists of two parts: I, Agent-Based Complex Automated Negotiations, and II, Automated Negotiation Agents Competition. The chapters in Part I are extended versions of papers presented at the 2012 international workshop on Agent-Based Complex Automated Negotiation (ACAN), after peer review by three Program Committee members. Part II examines in detail ANAC 2012 (The Third Automated Negotiating Agents Competition), in which automated agents that have different negotiation strategies and are implemented by different developers negotiate automatically in several negotiation domains. ANAC is an international competition in which automated negotiation strategies, submitted by a number of universities and research institutes across the world, are evaluated in tournament style. The purpose of the competition is to steer the research in the area of bilate...

  20. Pulse frequency classification based on BP neural network

    Institute of Scientific and Technical Information of China (English)

    WANG Rui; WANG Xu; YANG Dan; FU Rong

    2006-01-01

    In Traditional Chinese Medicine (TCM), pulse frequency analysis is an important parameter for clinical disease diagnosis. This article uses the eight major essentials of the pulse to identify pulse types through pulse frequency classification based on back-propagation neural networks (BPNN). The pulse frequency classes include the slow pulse, moderate pulse, rapid pulse, etc. Feature parameters of the pulse frequency are analyzed and an identification system for pulse frequency features is established. The pulse signal from the detecting system yields feature parameters such as period and frequency, which are compared with the standard feature values of each pulse type. The results show that the identification rate reaches 92.5% or above.
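
    A BPNN of this kind is a small feed-forward network over a handful of pulse features, trained by back-propagation. A minimal sketch with scikit-learn's MLPClassifier, using fabricated period/frequency features and three illustrative pulse classes:

        import numpy as np
        from sklearn.neural_network import MLPClassifier

        rng = np.random.default_rng(1)
        # Hypothetical features per beat: [period (s), frequency (Hz), amplitude]
        X = np.vstack([rng.normal([1.2, 0.8, 1.0], 0.05, size=(40, 3)),   # slow
                       rng.normal([0.8, 1.2, 1.0], 0.05, size=(40, 3)),   # moderate
                       rng.normal([0.5, 2.0, 1.0], 0.05, size=(40, 3))])  # rapid
        y = np.repeat(["slow", "moderate", "rapid"], 40)

        # One hidden layer trained by back-propagation, as in a classic BPNN.
        clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
        clf.fit(X, y)
        print(clf.predict([[1.15, 0.85, 1.0]]))   # expected: 'slow'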

  1. Optimizing Mining Association Rules for Artificial Immune System based Classification

    Directory of Open Access Journals (Sweden)

    SAMEER DIXIT

    2011-08-01

    Full Text Available The primary function of a biological immune system is to protect the body from foreign molecules known as antigens. It has great pattern recognition capability that may be used to distinguish between foreign cells entering the body (non-self, or antigen) and the body's own cells (self). Immune systems have many characteristics such as uniqueness, autonomy, recognition of foreigners, distributed detection, and noise tolerance. Inspired by biological immune systems, Artificial Immune Systems have emerged during the last decade, and many researchers have used them to design and build immune-based models for a variety of application domains. Artificial immune systems can be defined as a computational paradigm that is inspired by theoretical immunology, observed immune functions, principles and mechanisms. Association rule mining is one of the most important and well researched techniques of data mining. The goal of association rules is to extract interesting correlations, frequent patterns, associations or causal structures among sets of items in transaction databases or other data repositories. Association rules are widely used in various areas such as inventory control, telecommunication networks, intelligent decision making, market analysis and risk management. Apriori is the most widely used algorithm for mining association rules. Other popular association rule mining algorithms are frequent pattern (FP) growth, Eclat, dynamic itemset counting (DIC), etc. Associative classification uses association rule mining in the rule discovery process to predict the class labels of the data. This technique has shown great promise over many other classification techniques. Associative classification also integrates the process of rule discovery and classification to build the classifier for the purpose of prediction. The main problem with the associative classification approach is the discovery of high-quality association rules in a very large space of...
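
    Apriori rests on the property that every subset of a frequent itemset is itself frequent, which lets candidates be grown and pruned level by level. A self-contained Python sketch of that loop (the transactions and support threshold are illustrative placeholders):

        from itertools import combinations

        def apriori(transactions, min_support):
            """Return frequent itemsets (as frozensets) with their support counts."""
            transactions = [frozenset(t) for t in transactions]
            # Level 1: candidate single items.
            freq = {frozenset([i]) for t in transactions for i in t}
            result, k = {}, 1
            while freq:
                # Count support of the current level's candidates.
                counts = {c: sum(1 for t in transactions if c <= t) for c in freq}
                level = {c: n for c, n in counts.items() if n >= min_support}
                result.update(level)
                # Join frequent k-itemsets into (k+1)-candidates; prune any
                # candidate with an infrequent k-subset (the Apriori property).
                k += 1
                cands = {a | b for a in level for b in level if len(a | b) == k}
                freq = {c for c in cands
                        if all(frozenset(s) in level for s in combinations(c, k - 1))}
            return result

        tx = [{"bread", "milk"}, {"bread", "beer"}, {"bread", "milk", "beer"}, {"milk"}]
        print(apriori(tx, min_support=2))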

  2. Fault Diagnosis for Fuel Cell Based on Naive Bayesian Classification

    Directory of Open Access Journals (Sweden)

    Liping Fan

    2013-07-01

    Full Text Available Many kinds of uncertain factors may exist in the process of fault diagnosis and affect diagnostic results. The Bayesian network is one of the most effective theoretical models for uncertain knowledge expression and reasoning. In this paper, the method of naive Bayesian classification is applied to fault diagnosis of a proton exchange membrane fuel cell (PEMFC) system. Based on the model of the PEMFC, fault data are obtained through simulation experiments, learning and training of the naive Bayesian classifier are carried out, and testing samples are selected to validate the method. Simulation results demonstrate that the method is feasible.
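
    Naive Bayesian classification assumes the features are conditionally independent given the fault class and selects the class with the highest posterior probability. A minimal sketch with scikit-learn's GaussianNB; the sensor features and fault labels below are fabricated stand-ins for the paper's PEMFC simulation data:

        import numpy as np
        from sklearn.naive_bayes import GaussianNB

        rng = np.random.default_rng(2)
        # Hypothetical sensor vectors: [stack voltage, temperature, H2 pressure]
        normal  = rng.normal([60.0, 70.0, 1.2], 0.5, size=(50, 3))
        flooded = rng.normal([52.0, 68.0, 1.2], 0.5, size=(50, 3))
        dried   = rng.normal([55.0, 80.0, 1.1], 0.5, size=(50, 3))
        X = np.vstack([normal, flooded, dried])
        y = np.repeat(["normal", "flooding", "membrane drying"], 50)

        clf = GaussianNB().fit(X, y)
        print(clf.predict([[52.3, 68.1, 1.2]]))        # expected: 'flooding'
        print(clf.predict_proba([[52.3, 68.1, 1.2]]))  # class posteriors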

  3. Adaptive stellar spectral subclass classification based on Bayesian SVMs

    Science.gov (United States)

    Du, Changde; Luo, Ali; Yang, Haifeng

    2017-02-01

    Stellar spectral classification is one of the most fundamental tasks in survey astronomy. Many automated classification methods have been applied to spectral data. However, their main limitation is that the model parameters must be tuned repeatedly to deal with different data sets. In this paper, we utilize Bayesian support vector machines (BSVM) to classify spectral subclass data. Based on Gibbs sampling, BSVM can infer all model parameters adaptively according to different data sets, which allows us to circumvent time-consuming cross-validation for the penalty parameter. We explored different normalization methods for stellar spectral data, and the best one is suggested in this study. Finally, experimental results on several stellar spectral subclass classification problems show that the BSVM model not only possesses good adaptability but also provides better prediction performance than traditional methods.

  4. Hyperspectral image classification based on volumetric texture and dimensionality reduction

    Science.gov (United States)

    Su, Hongjun; Sheng, Yehua; Du, Peijun; Chen, Chen; Liu, Kui

    2015-06-01

    A novel approach using volumetric texture and reduced spectral features is presented for hyperspectral image classification. In this approach, the volumetric textural features are extracted by volumetric gray-level co-occurrence matrices (VGLCM). The spectral features are extracted by minimum estimated abundance covariance (MEAC) and linear prediction (LP)-based band selection, and by a semi-supervised k-means (SKM) clustering method with deletion of the worst cluster (SKMd) as the band-clustering algorithm. Moreover, four feature-combination schemes are designed for hyperspectral image classification using spectral and textural features. It is shown that the proposed method using VGLCM outperforms the gray-level co-occurrence matrix (GLCM) method, and the experimental results indicate that combining spectral information with volumetric textural features leads to improved classification performance in hyperspectral imagery.
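
    A gray-level co-occurrence matrix counts how often pairs of gray levels occur at a given spatial offset, and statistics of that matrix serve as texture features; the paper's VGLCM extends the offsets into the third (spectral) dimension. A 2-D sketch with scikit-image on a random placeholder band (the functions are spelled graycomatrix/graycoprops from scikit-image 0.19 onward, greycomatrix/greycoprops before):

        import numpy as np
        from skimage.feature import graycomatrix, graycoprops

        rng = np.random.default_rng(3)
        band = rng.integers(0, 8, size=(64, 64), dtype=np.uint8)  # placeholder band

        # Co-occurrence counts for 1-pixel offsets at 0 and 90 degrees.
        glcm = graycomatrix(band, distances=[1], angles=[0, np.pi / 2],
                            levels=8, symmetric=True, normed=True)

        # Classic Haralick-style texture statistics used as features.
        features = {prop: graycoprops(glcm, prop).ravel()
                    for prop in ("contrast", "homogeneity", "energy", "correlation")}
        print(features)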

  5. Collective Machine Learning: Team Learning and Classification in Multi-Agent Systems

    Science.gov (United States)

    Gifford, Christopher M.

    2009-01-01

    This dissertation focuses on the collaboration of multiple heterogeneous, intelligent agents (hardware or software) which collaborate to learn a task and are capable of sharing knowledge. The concept of collaborative learning in multi-agent and multi-robot systems is largely understudied, and represents an area where further research is needed to…

  6. Magnetic resonance imaging using gadolinium-based contrast agents.

    Science.gov (United States)

    Mitsumori, Lee M; Bhargava, Puneet; Essig, Marco; Maki, Jeffrey H

    2014-02-01

    The purpose of this article was to review the basic properties of available gadolinium-based magnetic resonance contrast agents, discuss their fundamental differences, and explore common and evolving applications of gadolinium-based magnetic resonance contrast throughout the body excluding the central nervous system. A more specific aim of this article was to explore novel uses of these gadolinium-based contrast agents and applications where a particular agent has been demonstrated to behave differently or be better suited for certain applications than the other contrast agents in this class.

  7. Multi-Agent Reinforcement Learning Algorithm Based on Action Prediction

    Institute of Scientific and Technical Information of China (English)

    TONG Liang; LU Ji-lian

    2006-01-01

    Multi-agent reinforcement learning algorithms are studied. A prediction-based multi-agent reinforcement learning algorithm is presented for a multi-robot cooperation task. A multi-robot cooperation experiment based on the multi-agent inverted pendulum is performed to test the efficiency of the new algorithm, and the experimental results show that the new algorithm can reach a cooperation strategy much faster than the primitive multi-agent reinforcement learning algorithm.

  8. Integrative disease classification based on cross-platform microarray data

    Directory of Open Access Journals (Sweden)

    Huang Haiyan

    2009-01-01

    Full Text Available Abstract Background Disease classification has been an important application of microarray technology. However, most microarray-based classifiers can only handle data generated within the same study, since microarray data generated by different laboratories or with different platforms cannot be compared directly due to systematic variations. This issue has severely limited the practical use of microarray-based disease classification. Results In this study, we tested the feasibility of disease classification by integrating the large amount of heterogeneous microarray datasets from public microarray repositories. Cross-platform data compatibility is created by deriving expression log-rank ratios within datasets; one may then compare vectors of log-rank ratios across datasets. In addition, we systematically map textual annotations of datasets to concepts in the Unified Medical Language System (UMLS), permitting quantitative analysis of the phenotype "distance" between datasets and automated construction of disease classes. We design a new classification approach named ManiSVM, which integrates Manifold data transformation with SVM learning to exploit the data properties. Using leave-one-dataset-out cross-validation, ManiSVM achieved an overall accuracy of 70.7% (68.6% precision and 76.9% recall), with many disease classes achieving accuracy higher than 80%. Conclusion Our results not only demonstrated the feasibility of the integrated disease classification approach, but also showed that the classification accuracy increases with the number of homogeneous training datasets. Thus, the power of the integrative approach will increase with the continuous accumulation of microarray data in public repositories. Our study shows that automated disease diagnosis can be an important and promising application of the enormous amount of costly to generate, yet freely available, public microarray data.

  9. Hardware Accelerators Targeting a Novel Group Based Packet Classification Algorithm

    Directory of Open Access Journals (Sweden)

    O. Ahmed

    2013-01-01

    Full Text Available Packet classification is a ubiquitous and key building block for many critical network devices. However, it remains one of the main bottlenecks faced when designing fast network devices. In this paper, we propose a novel Group Based Search packet classification Algorithm (GBSA) that is scalable, fast, and efficient. GBSA consumes an average of 0.4 megabytes of memory for a 10 k rule set. The worst-case classification time per packet is 2 microseconds, and the preprocessing speed is 3 M rules/second, based on a Xeon processor operating at 3.4 GHz. When compared with other state-of-the-art classification techniques, the results showed that GBSA outperforms the competition with respect to speed, memory usage, and processing time. Moreover, GBSA is amenable to implementation in hardware. Three different hardware implementations are also presented in this paper, including an Application Specific Instruction Set Processor (ASIP) implementation and two pure Register-Transfer Level (RTL) implementations based on Impulse-C and Handel-C flows, respectively. Speedups achieved with these hardware accelerators ranged from 9x to 18x compared with a pure software implementation running on a Xeon processor.

  10. Fast rule-based bioactivity prediction using associative classification mining

    Directory of Open Access Journals (Sweden)

    Yu Pulan

    2012-11-01

    Full Text Available Abstract Relating chemical features to bioactivities is critical in molecular design and is used extensively in the lead discovery and optimization process. A variety of techniques from statistics, data mining and machine learning have been applied to this process. In this study, we utilize a collection of methods called associative classification mining (ACM), which are popular in the data mining community but so far have not been applied widely in cheminformatics. More specifically, classification based on predictive association rules (CPAR), classification based on multiple association rules (CMAR) and classification based on association rules (CBA) are employed on three datasets using various descriptor sets. Experimental evaluations on anti-tuberculosis (antiTB), mutagenicity and hERG (the human Ether-a-go-go-Related Gene) blocker datasets show that these three methods are computationally scalable and appropriate for high-speed mining. Additionally, they provide comparable accuracy and efficiency to the commonly used Bayesian and support vector machine (SVM) methods, and produce highly interpretable models.

  11. Sparse Representation Based Binary Hypothesis Model for Hyperspectral Image Classification

    Directory of Open Access Journals (Sweden)

    Yidong Tang

    2016-01-01

    Full Text Available The sparse representation based classifier (SRC and its kernel version (KSRC have been employed for hyperspectral image (HSI classification. However, the state-of-the-art SRC often aims at extended surface objects with linear mixture in smooth scene and assumes that the number of classes is given. Considering the small target with complex background, a sparse representation based binary hypothesis (SRBBH model is established in this paper. In this model, a query pixel is represented in two ways, which are, respectively, by background dictionary and by union dictionary. The background dictionary is composed of samples selected from the local dual concentric window centered at the query pixel. Thus, for each pixel the classification issue becomes an adaptive multiclass classification problem, where only the number of desired classes is required. Furthermore, the kernel method is employed to improve the interclass separability. In kernel space, the coding vector is obtained by using kernel-based orthogonal matching pursuit (KOMP algorithm. Then the query pixel can be labeled by the characteristics of the coding vectors. Instead of directly using the reconstruction residuals, the different impacts the background dictionary and union dictionary have on reconstruction are used for validation and classification. It enhances the discrimination and hence improves the performance.
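
    The OMP step greedily picks dictionary atoms to reconstruct the query pixel, and the competing residuals under the background and union dictionaries drive the binary hypothesis test. A linear (non-kernel) sketch with scikit-learn's OrthogonalMatchingPursuit, in which the dictionaries and the decision threshold are illustrative placeholders:

        import numpy as np
        from sklearn.linear_model import OrthogonalMatchingPursuit

        rng = np.random.default_rng(4)
        n_bands = 50
        D_bg = rng.normal(size=(n_bands, 30))        # background dictionary
        D_target = rng.normal(size=(n_bands, 10))    # target samples
        D_union = np.hstack([D_bg, D_target])        # union dictionary
        pixel = D_target[:, 0] + 0.05 * rng.normal(size=n_bands)  # query pixel

        def residual(D, x, k=5):
            """Reconstruction error of x using at most k atoms of D."""
            omp = OrthogonalMatchingPursuit(n_nonzero_coefs=k,
                                            fit_intercept=False).fit(D, x)
            return np.linalg.norm(x - D @ omp.coef_)

        r_bg, r_union = residual(D_bg, pixel), residual(D_union, pixel)
        # If adding target atoms shrinks the residual markedly, accept the
        # target hypothesis (the 0.5 threshold is an arbitrary placeholder).
        print("target" if r_union / r_bg < 0.5 else "background")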

  12. An Emotional Agent Model Based on Granular Computing

    Directory of Open Access Journals (Sweden)

    Jun Hu

    2012-01-01

    Full Text Available Affective computing has very important significance for fulfilling intelligent information processing and harmonious communication between human beings and computers. A new model for emotional agents is proposed in this paper to give agents the ability to handle emotions, based on granular computing theory and the traditional BDI agent model. Firstly, a new emotion knowledge base based on granular computing for emotion expression is presented in the model. Secondly, a new emotional reasoning algorithm based on granular computing is proposed. Thirdly, a new emotional agent model based on granular computing is presented. Finally, based on the model, an emotional agent for patient assistance in a hospital is realized; experimental results show that it handles simple emotions efficiently.

  13. Agent-based Modeling Methodology for Analyzing Weapons Systems

    Science.gov (United States)

    2015-03-26

    Agent-Based Modeling Methodology for Analyzing Weapons Systems. Thesis by Casey D. Connors, Major, USA, presented to the Faculty, Department of Operational Sciences. [Figure 14: Simulation Study Methodology for the Weapon System Analysis]

  14. Agent Types and Structures based on Analysis of Building Design

    DEFF Research Database (Denmark)

    Hartvig, Susanne C

    1997-01-01

    Based on an analysis of building design, an initial division of design agents into five classes (information collectors, generators, modifiers and evaluators, ...) is presented.

  15. Choice-Based Conjoint Analysis: Classification vs. Discrete Choice Models

    Science.gov (United States)

    Giesen, Joachim; Mueller, Klaus; Taneva, Bilyana; Zolliker, Peter

    Conjoint analysis is a family of techniques that originated in psychology and later became popular in market research. The main objective of conjoint analysis is to measure an individual's or a population's preferences on a class of options that can be described by parameters and their levels. We consider preference data obtained in choice-based conjoint analysis studies, where one observes test persons' choices on small subsets of the options. There are many ways to analyze choice-based conjoint analysis data. Here we discuss the intuition behind a classification based approach, and compare this approach to one based on statistical assumptions (discrete choice models) and to a regression approach. Our comparison on real and synthetic data indicates that the classification approach outperforms the discrete choice models.

  16. Trace elements based classification on clinkers. Application to Spanish clinkers

    OpenAIRE

    Tamás, F. D.; Abonyi, J.; Puertas, F.

    2001-01-01

    The qualitative identification used to determine the origin (i.e., the manufacturing factory) of Spanish clinkers is described. The classification of clinkers produced in different factories can be based on their trace element content. Approximately fifteen clinker sorts, collected from 11 Spanish cement factories, are analysed to determine their Mg, Sr, Ba, Mn, Ti, Zr, Zn and V content. An expert system formulated as a binary decision tree is designed based on the collected data. The performance of the...
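
    A binary decision tree over trace-element concentrations can be learned directly from labelled analyses. A minimal sketch with scikit-learn, in which the element values and factory labels are fabricated placeholders rather than the paper's measurements:

        import numpy as np
        from sklearn.tree import DecisionTreeClassifier, export_text

        rng = np.random.default_rng(5)
        elements = ["Mg", "Sr", "Ba", "Mn", "Ti", "Zr", "Zn", "V"]
        # Fabricated trace-element profiles (ppm) for three factories.
        X = np.vstack([rng.normal(mu, 5.0, size=(20, len(elements)))
                       for mu in ([100, 40, 60, 30, 90, 15, 20, 10],
                                  [120, 55, 45, 35, 70, 25, 18, 12],
                                  [ 90, 35, 80, 28, 95, 10, 30,  9])])
        y = np.repeat(["factory A", "factory B", "factory C"], 20)

        tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
        print(export_text(tree, feature_names=elements))   # human-readable rules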

  17. Classification of Mental Disorders Based on Temperament

    Directory of Open Access Journals (Sweden)

    Nadi Sakhvidi

    2015-08-01

    Full Text Available Context Different paradoxical theories are available regarding psychiatric disorders. The current study aimed to establish a more comprehensive overall approach. Evidence Acquisition This basic study examined ancient medical books. "The Canon" by Avicenna and "Comprehensive Textbook of Psychiatry" by Kaplan and Sadock were the most important and frequently consulted books in this study. Results Four groups of temperaments were identified: high active, high flexible; high active, low flexible; low active, low flexible; and low active, high flexible. When temperament deteriorates, personality, non-psychotic, and psychotic psychiatric disorders can develop. Conclusions Temperaments can provide a basis for classifying psychiatric disorders. Psychiatric disorders can be placed on a spectrum based on temperaments.

  18. An Agent-Based Modeling for Pandemic Influenza in Egypt

    OpenAIRE

    Khalil, Khaled M.; Abdel-Aziz, M.; Nazmy, Taymour T.; Salem, Abdel-Badeeh M.

    2010-01-01

    Pandemic influenza has great potential to cause large and rapid increases in deaths and serious illness. The objective of this paper is to develop an agent-based model to simulate the spread of pandemic influenza (novel H1N1) in Egypt. The proposed multi-agent model is based on the modeling of individuals' interactions in a space-time context. The proposed model involves different types of parameters such as: social agent attributes, distribution of Egypt's population, and patterns of agents' i...

  19. Automatic and Intelligent Power Quality Disturbances Monitoring Based on Multi-Agent Systems

    Directory of Open Access Journals (Sweden)

    M. Hajian

    2012-09-01

    Full Text Available Power quality monitoring is the first step in identifying power quality disturbances and reducing them in order to improve the performance of the power system. The aim of this paper is to propose the architecture of a new intelligent strategy for an online and offline power quality monitoring system based on multi-agent systems. In this study, a multi-agent system is proposed for solving several problems in power quality monitoring, including computational complexity, low accuracy, changes in the data pattern, and the non-adaptive structure of the detection system under changing conditions. The proposed strategy exploits agent characteristics such as automatic and dynamic performance, intelligence, learning, reasoning ability, objectivity, and the interoperability of agents. The paper is presented in two stages. In the first stage, to illustrate the problems in power quality monitoring, different methods of feature extraction, feature selection and classification for the automatic recognition of power quality disturbances are analyzed. Optimal selection of the input feature vector of the recognition system is carried out using different data mining methods, and three well-known classifiers are considered. In the second stage, to address these challenges, the design of the investigated structures in the form of a multi-agent system is described. The experimental results in this paper demonstrate the superiority of agents and multi-agent systems for online and offline power quality monitoring.

  20. A wrapper-based approach to image segmentation and classification.

    Science.gov (United States)

    Farmer, Michael E; Jain, Anil K

    2005-12-01

    The traditional processing flow of segmentation followed by classification in computer vision assumes that the segmentation is able to successfully extract the object of interest from the background image. It is extremely difficult to obtain a reliable segmentation without any prior knowledge about the object that is being extracted from the scene. This is further complicated by the lack of any clearly defined metrics for evaluating the quality of segmentation or for comparing segmentation algorithms. We propose a method of segmentation that addresses both of these issues, by using the object classification subsystem as an integral part of the segmentation. This will provide contextual information regarding the objects to be segmented, as well as allow us to use the probability of correct classification as a metric to determine the quality of the segmentation. We view traditional segmentation as a filter operating on the image that is independent of the classifier, much like the filter methods for feature selection. We propose a new paradigm for segmentation and classification that follows the wrapper methods of feature selection. Our method wraps the segmentation and classification together, and uses the classification accuracy as the metric to determine the best segmentation. By using shape as the classification feature, we are able to develop a segmentation algorithm that relaxes the requirement that the object of interest to be segmented must be homogeneous in some low-level image parameter, such as texture, color, or grayscale. This represents an improvement over other segmentation methods that have used classification information only to modify the segmenter parameters, since these algorithms still require an underlying homogeneity in some parameter space. Rather than considering our method as yet another segmentation algorithm, we propose that our wrapper method can be considered as an image segmentation framework, within which existing image segmentation

  1. Similarity-Based Classification in Partially Labeled Networks

    Science.gov (United States)

    Zhang, Qian-Ming; Shang, Ming-Sheng; Lü, Linyuan

    Two main difficulties in the problem of classification in partially labeled networks are the sparsity of the known labeled nodes and the inconsistency of label information. To address these two difficulties, we propose a similarity-based method, where the basic assumption is that two nodes are more likely to be categorized into the same class if they are more similar. In this paper, we introduce ten similarity indices defined based on the network structure. Empirical results on the co-purchase network of political books show that the similarity-based method can, to some extent, overcome these two difficulties and give more accurate classification than the relational neighbors method, especially when the labeled nodes are sparse. Furthermore, we find that when the information of known labeled nodes is sufficient, the indices considering only local information can perform as well as the global indices while having much lower computational complexity.
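
    Under the similarity assumption, classifying an unlabeled node reduces to scoring it against the labeled nodes of each class. A sketch using the simplest structural index, common neighbors, on a toy adjacency matrix (the graph and labels are illustrative placeholders):

        import numpy as np

        # Toy undirected graph: two loose clusters joined by one edge.
        A = np.zeros((6, 6), dtype=int)
        for i, j in [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]:
            A[i, j] = A[j, i] = 1

        labels = {0: "red", 1: "red", 4: "blue"}       # sparse known labels

        # Common-neighbors similarity: S[i, j] = |N(i) & N(j)|.
        S = A @ A

        for node in range(6):
            if node in labels:
                continue
            # Score each class by summed similarity to its labeled nodes.
            scores = {c: sum(S[node, m] for m, cm in labels.items() if cm == c)
                      for c in set(labels.values())}
            print(node, max(scores, key=scores.get), scores)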

  2. Object-Based Classification and Change Detection of Hokkaido, Japan

    Science.gov (United States)

    Park, J. G.; Harada, I.; Kwak, Y.

    2016-06-01

    Topography and geology are factors that characterize the distribution of natural vegetation. Topographic contours particularly influence the living conditions of plants, such as soil moisture, sunlight, and windiness. Vegetation associations having similar characteristics are present in locations having similar topographic conditions, unless natural disturbances such as landslides and forest fires or artificial disturbances such as deforestation and man-made plantation bring about changes in such conditions. We developed a vegetation map of Japan using an object-based segmentation approach with topographic information (elevation, slope, slope direction) that is closely related to the distribution of vegetation. The results show that object-based classification is more effective than pixel-based classification for producing a vegetation map.

  3. Classification data mining method based on dynamic RBF neural networks

    Science.gov (United States)

    Zhou, Lijuan; Xu, Min; Zhang, Zhang; Duan, Luping

    2009-04-01

    With the wide application of databases and the sharp development of the Internet, the capacity to manufacture and collect data using information technology has improved greatly. Mining useful information or knowledge from large databases or data warehouses is an urgent problem, and data mining technology has developed rapidly to meet this need. But data mining (DM) often faces data that are noisy, disordered and nonlinear. Fortunately, artificial neural networks (ANN) are suitable for solving these problems, because they have merits such as good robustness, adaptability, parallel processing, distributed memory and high error tolerance. This paper gives a detailed discussion of ANN methods used in DM based on an analysis of the various data mining technologies, and especially stresses classification data mining based on RBF neural networks. Pattern classification is an important part of RBF neural network applications. In an on-line environment, the training dataset is variable, so batch learning algorithms (e.g. OLS), which generate plenty of unnecessary retraining, have lower efficiency. This paper derives an incremental learning algorithm (ILA) from the gradient descent algorithm to overcome this bottleneck. ILA can adaptively adjust the parameters of RBF networks by minimizing the error cost, without any redundant retraining. Using the method proposed in this paper, an on-line classification system was constructed to solve the IRIS classification problem. Experimental results show that the algorithm has a fast convergence rate and excellent on-line classification performance.
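
    An RBF network passes inputs through Gaussian basis functions centred on prototype points and learns only the linear output weights, which is what makes cheap incremental updates possible. A minimal batch sketch in NumPy on the Iris data mentioned in the abstract; random centre selection and the closed-form ridge solution are illustrative simplifications of the paper's ILA:

        import numpy as np
        from sklearn.datasets import load_iris

        X, y = load_iris(return_X_y=True)
        rng = np.random.default_rng(6)
        centers = X[rng.choice(len(X), size=10, replace=False)]  # random prototypes
        gamma = 1.0

        def rbf_features(X):
            # Gaussian activations: phi_ij = exp(-gamma * ||x_i - c_j||^2)
            d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
            return np.exp(-gamma * d2)

        Phi = rbf_features(X)                       # (n_samples, n_centers)
        Y = np.eye(3)[y]                            # one-hot targets
        # Closed-form ridge solution for the linear output layer; the paper's
        # ILA would instead update W by gradient steps as samples arrive.
        W = np.linalg.solve(Phi.T @ Phi + 1e-3 * np.eye(10), Phi.T @ Y)
        pred = np.argmax(Phi @ W, axis=1)
        print("training accuracy:", (pred == y).mean())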

  4. Land Cover and Land Use Classification with TWOPAC: towards Automated Processing for Pixel- and Object-Based Image Classification

    Directory of Open Access Journals (Sweden)

    Stefan Dech

    2012-09-01

    Full Text Available We present a novel and innovative automated processing environment for the derivation of land cover (LC) and land use (LU) information. This processing framework, named TWOPAC (TWinned Object and Pixel based Automated classification Chain), enables the standardized, independent, user-friendly, and comparable derivation of LC and LU information, with minimized manual classification labor. TWOPAC allows classification of multi-spectral and multi-temporal remote sensing imagery from different sensor types. TWOPAC enables not only pixel-based classification, but also allows classification based on object-based characteristics. Classification is based on a Decision Tree approach (DT), for which the well-known C5.0 code has been implemented, which builds decision trees based on the concept of information entropy. TWOPAC enables automatic generation of the decision tree classifier based on a C5.0-retrieved ascii file, as well as fully automatic validation of the classification output via sample-based accuracy assessment. Envisaging the automated generation of standardized land cover products, as well as area-wide classification of large amounts of data in a preferably short processing time, standardized interfaces for process control, Web Processing Services (WPS), as introduced by the Open Geospatial Consortium (OGC), are utilized. TWOPAC's functionality to process geospatial raster or vector data via web resources (server, network) enables TWOPAC's usability independent of any commercial client or desktop software and allows for large-scale data processing on servers. Furthermore, the components of TWOPAC were built up using open source code components and are implemented as a plug-in for Quantum GIS software for easy handling of the classification process from the user's perspective.

  5. Rule-Based Classification of Chemical Structures by Scaffold.

    Science.gov (United States)

    Schuffenhauer, Ansgar; Varin, Thibault

    2011-08-01

    Databases of small organic chemical molecules usually contain millions of structures. The screening decks of pharmaceutical companies contain more than a million structures. Nevertheless, chemical substructure searching in these databases can be performed interactively in seconds. Because of this, nobody has really missed structural classification of these databases for the purpose of finding data for individual chemical substructures. However, a full-deck high-throughput screen also produces activity data for more than a million substances. How can this amount of data be analyzed? Which are the active scaffolds identified by an assay? To answer such questions, systematic classifications of molecules by scaffolds are needed. In this review it is described how molecules can be hierarchically classified by their scaffolds. It is explained how such classifications can be used to identify active scaffolds in an HTS data set. Once active classes are identified, they need to be visualized in the context of related scaffolds in order to understand SAR. Consequently, such visualizations are another topic of this review. In addition, scaffold-based diversity measures are discussed and an outlook is given on the potential impact of structural classifications on a chemically aware semantic web.

  6. Comparison Of Power Quality Disturbances Classification Based On Neural Network

    Directory of Open Access Journals (Sweden)

    Nway Nway Kyaw Win

    2015-07-01

    Full Text Available Abstract Power quality disturbances (PQDs) cause serious problems for the reliability, safety and economy of the power system network. In order to improve electric power quality, the detection and classification of PQDs must identify the type of transient fault. A software analysis based on the wavelet transform with a multiresolution analysis (MRA) algorithm and feed-forward neural networks (a probabilistic neural network and a multilayer feed-forward neural network) is presented for the automatic classification of eight types of PQ signals: flicker, harmonics, sag, swell, impulse, fluctuation, notch and oscillatory transient. The wavelet family Db4 is chosen in this system to calculate the detailed energy distributions as input features for classification, because it performs well in detecting and localizing various types of PQ disturbances. The classifiers identify the disturbance type according to the energy distribution. The results show that the PNN can analyze the different power disturbance types efficiently, and it has better classification accuracy than the MLFF network.
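
    A Db4 multiresolution decomposition yields one energy value per level, giving the compact feature vector fed to the neural classifiers. A sketch with PyWavelets on a synthetic sag-like waveform (the signal parameters are illustrative):

        import numpy as np
        import pywt

        fs, f0 = 3200, 50                        # sampling rate and mains frequency
        t = np.arange(0, 0.2, 1 / fs)
        signal = np.sin(2 * np.pi * f0 * t)
        signal[(t > 0.06) & (t < 0.14)] *= 0.5   # synthetic voltage sag

        # Db4 multiresolution analysis: approximation + detail coefficients.
        coeffs = pywt.wavedec(signal, "db4", level=6)

        # Energy of each decomposition level as the classifier's input features.
        energies = [float(np.sum(c ** 2)) for c in coeffs]
        print(energies)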

  7. Structure-based classification and ontology in chemistry

    Directory of Open Access Journals (Sweden)

    Hastings Janna

    2012-04-01

    Full Text Available Abstract Background Recent years have seen an explosion in the availability of data in the chemistry domain. With this information explosion, however, retrieving relevant results from the available information, and organising those results, become even harder problems. Computational processing is essential to filter and organise the available resources so as to better facilitate the work of scientists. Ontologies encode expert domain knowledge in a hierarchically organised machine-processable format. One such ontology for the chemical domain is ChEBI. ChEBI provides a classification of chemicals based on their structural features and a role or activity-based classification. An example of a structure-based class is 'pentacyclic compound' (compounds containing five-ring structures), while an example of a role-based class is 'analgesic', since many different chemicals can act as analgesics without sharing structural features. Structure-based classification in chemistry exploits elegant regularities and symmetries in the underlying chemical domain. As yet, there has been neither a systematic analysis of the types of structural classification in use in chemistry nor a comparison to the capabilities of available technologies. Results We analyze the different categories of structural classes in chemistry, presenting a list of patterns for features found in class definitions. We compare these patterns of class definition to tools which allow for automation of hierarchy construction within cheminformatics and within logic-based ontology technology, going into detail in the latter case with respect to the expressive capabilities of the Web Ontology Language and recent extensions for modelling structured objects. Finally we discuss the relationships and interactions between cheminformatics approaches and logic-based approaches. Conclusion Systems that perform intelligent reasoning tasks on chemistry data require a diverse set of underlying computational

  8. An AERONET-based aerosol classification using the Mahalanobis distance

    Science.gov (United States)

    Hamill, Patrick; Giordano, Marco; Ward, Carolyne; Giles, David; Holben, Brent

    2016-09-01

    We present an aerosol classification based on AERONET aerosol data from 1993 to 2012. We used the AERONET Level 2.0 almucantar aerosol retrieval products to define several reference aerosol clusters which are characteristic of the following general aerosol types: Urban-Industrial, Biomass Burning, Mixed Aerosol, Dust, and Maritime. The classification of a particular aerosol observation as one of these aerosol types is determined by its five-dimensional Mahalanobis distance to each reference cluster. We have calculated the fractional aerosol type distribution at 190 AERONET sites, as well as the monthly variation in aerosol type at those locations. The results are presented on a global map and individually in the supplementary material. Our aerosol typing is based on recognizing that different geographic regions exhibit characteristic aerosol types. To generate reference clusters we only keep data points that lie within a Mahalanobis distance of 2 from the centroid. Our aerosol characterization is based on the AERONET retrieved quantities, therefore it does not include low optical depth values. The analysis is based on "point sources" (the AERONET sites) rather than globally distributed values. The classifications obtained will be useful in interpreting aerosol retrievals from satellite borne instruments.
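
    Classifying an observation by its Mahalanobis distance to each reference cluster weights every dimension by the cluster's covariance structure. A sketch with SciPy, where the five-dimensional retrievals and cluster statistics are random placeholders rather than AERONET values:

        import numpy as np
        from scipy.spatial.distance import mahalanobis

        rng = np.random.default_rng(7)
        types = ["Urban-Industrial", "Biomass Burning", "Mixed", "Dust", "Maritime"]

        # Placeholder reference clusters: mean and inverse covariance per type,
        # estimated here from random 5-D samples standing in for retrievals.
        clusters = {}
        for name in types:
            samples = rng.normal(rng.uniform(-1, 1, 5), 0.3, size=(200, 5))
            clusters[name] = (samples.mean(axis=0),
                              np.linalg.inv(np.cov(samples, rowvar=False)))

        obs = rng.normal(size=5)          # one aerosol observation
        dists = {name: mahalanobis(obs, mu, VI)
                 for name, (mu, VI) in clusters.items()}
        print(min(dists, key=dists.get), dists)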

  9. Vehicle Maneuver Detection with Accelerometer-Based Classification

    Directory of Open Access Journals (Sweden)

    Javier Cervantes-Villanueva

    2016-09-01

    Full Text Available In the mobile computing era, smartphones have become instrumental tools for developing innovative mobile context-aware systems. In that sense, their usage in the vehicular domain eases the development of novel and personal transportation solutions. In this context, the present work introduces an innovative mechanism to perceive the current kinematic state of a vehicle on the basis of accelerometer data from a smartphone mounted in the vehicle. Unlike previous proposals, the introduced architecture targets the computational limitations of such devices by carrying out the detection process following an incremental approach. For its realization, we have evaluated different classification algorithms to act as agents within the architecture. Finally, our approach has been tested with a real-world dataset collected by means of an ad hoc mobile application developed for this purpose.

  10. Agent-Based Design for E-learning Environment

    Directory of Open Access Journals (Sweden)

    Khadidja Harbouche

    2007-01-01

    Full Text Available We present an agent-based e-learning environment. Our aim is to allow many users to interact collectively and intelligently with the environment. In this cooperation model, human users and artificial agents carry out tasks in the service of learners. We define the internal structure of our kernel, intended to work within Internet/Intranet settings. The design is structured in three parts: an individual learning space, a collaborative space, and a cooperative space. We advocate the use of an agent-based approach, which is suitable for two main reasons: agents are a natural metaphor for human action, and learning systems are generally complex. The Prometheus methodology was used for the design, with emphasis placed on the agent-based features.

  11. Applying revenue management to agent-based transportation planning

    NARCIS (Netherlands)

    Douma, Albert; Schuur, Peter; Heijden, van der Matthieu

    2006-01-01

    We consider a multi-company, less-than-truckload, dynamic VRP based on the concept of multi-agent systems. We focus on the intelligence of one vehicle agent and especially on its bidding strategy. We address the problem of how to price loads that are offered in real time such that available capacity is

  12. Access Control for Agent-based Computing: A Distributed Approach.

    Science.gov (United States)

    Antonopoulos, Nick; Koukoumpetsos, Kyriakos; Shafarenko, Alex

    2001-01-01

    Discusses the mobile software agent paradigm that provides a foundation for the development of high performance distributed applications and presents a simple, distributed access control architecture based on the concept of distributed, active authorization entities (lock cells), any combination of which can be referenced by an agent to provide…

  13. Complexity in Simplicity: Flexible Agent-based State Space Exploration

    DEFF Research Database (Denmark)

    Rasmussen, Jacob Illum; Larsen, Kim Guldstrand

    2007-01-01

    In this paper, we describe a new flexible framework for state space exploration based on cooperating agents. The idea is to let various agents with different search patterns explore the state space individually and communicate information about fruitful subpaths of the search tree to each other...

  14. Agent-based transportation planning compared with scheduling heuristics

    NARCIS (Netherlands)

    Mes, Martijn; Heijden, van der Matthieu; Harten, van Aart

    2004-01-01

    Here we consider the problem of dynamically assigning vehicles to transportation orders that have different time windows and should be handled in real time. We introduce a new agent-based system for the planning and scheduling of these transportation networks. Intelligent vehicle agents schedule their

  15. Label-Embedding for Attribute-Based Classification

    OpenAIRE

    Akata, Zeynep; Perronnin, Florent; Harchaoui, Zaid; Schmid, Cordelia

    2013-01-01

    Attributes are an intermediate representation, which enables parameter sharing between classes, a must when training data is scarce. We propose to view attribute-based image classification as a label-embedding problem: each class is embedded in the space of attribute vectors. We introduce a function which measures the compatibility between an image and a label embedding. The parameters of this function are learned on a training set of labeled samples to ensure that, gi...

  16. Hierarchical Classification of Chinese Documents Based on N-grams

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    We explore techniques for utilizing N-gram information to categorize Chinese text documents hierarchically, so that the classifier can shake off the burden of large dictionaries and complex segmentation processing, and subsequently be domain- and time-independent. A hierarchical Chinese text classifier is implemented. Experimental results show that hierarchically classifying Chinese text documents based on N-grams can achieve satisfactory performance and outperforms the other traditional Chinese text classifiers.
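
    Character N-grams avoid dictionaries and word segmentation because the features come straight from the raw character stream. A sketch with scikit-learn's character-level vectorizer and a naive Bayes classifier; the tiny two-category corpus is an illustrative placeholder:

        from sklearn.feature_extraction.text import CountVectorizer
        from sklearn.naive_bayes import MultinomialNB
        from sklearn.pipeline import make_pipeline

        # Tiny placeholder corpus: two classes of short Chinese snippets.
        docs = ["股票市场上涨", "银行利率下降", "球队赢得比赛", "运动员打破纪录"]
        labels = ["finance", "finance", "sports", "sports"]

        # Character uni- and bi-grams: no dictionary or word segmentation needed.
        clf = make_pipeline(
            CountVectorizer(analyzer="char", ngram_range=(1, 2)),
            MultinomialNB(),
        )
        clf.fit(docs, labels)
        print(clf.predict(["市场利率上涨"]))   # expected: 'finance'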

  17. An agent oriented information system: an MDA based development

    Directory of Open Access Journals (Sweden)

    Mohamed Sadgal

    2012-09-01

    Full Text Available Information systems (IS) development should produce not only functional models but also conceptual models that represent the organizational environment in which the system will have to evolve, and it must be aligned with strategic objectives. Generally, a significant innovation in the enterprise is to organize its IS around its business processes. In addition, business models must be enriched by the agent paradigm to reduce the complexity involved in solving a problem, through the structuring of knowledge into a set of intelligent agents, the association between agents and activities, and collaboration among agents. To do this, we propose an agent-oriented approach based on model-driven architecture (MDA) for information system development. This approach uses, in its different phases, the BPMN language for business process modeling, the AML language for agent modeling, and the JADEX platform for implementation. The IS development is realized by different automated mappings from source models to target models.

  18. MODEL-BASED PERFORMANCE EVALUATION APPROACH FOR MOBILE AGENT SYSTEMS

    Institute of Scientific and Technical Information of China (English)

    Li Xin; Mi Zhengkun; Meng Xudong

    2004-01-01

    Claimed as the next-generation programming paradigm, mobile agent technology has attracted extensive interest in recent years. However, up to now, limited research effort has been devoted to the performance study of mobile agent systems, and most of this research focuses on agent behavior analysis, with the result that the models are hard to apply to real mobile agent systems. To bridge the gap, a new performance evaluation model derived from the operation mechanisms of mobile agent platforms is proposed. Details are discussed for the design of companion simulation software, which can provide system performance measures such as the response time of the platform to a mobile agent. Further investigation follows on the determination of model parameters. Finally, a comparison is made between the model-based simulation results and the measurement-based real performance of mobile agent systems. The results show that the proposed model and the designed software are effective in evaluating the performance characteristics of mobile agent systems. The proposed approach can also be considered as the basis of performance analysis for large systems composed of multiple mobile agent platforms.

  19. Tree-based disease classification using protein data.

    Science.gov (United States)

    Zhu, Hongtu; Yu, Chang-Yung; Zhang, Heping

    2003-09-01

    A reliable and precise classification of diseases is essential for successful diagnosis and treatment. Using mass spectrometry from clinical specimens, scientists may find the protein variations among diseases and use this information to improve diagnosis. In this paper, we propose a novel procedure to classify disease status based on protein data from mass spectrometry. Our new tree-based algorithm consists of three steps: projection, selection and classification tree. The projection step aims to project all observations from specimens onto the same bases so that the projected data have fixed coordinates. Thus, for each specimen, we obtain a large vector of 'coefficients' on the same basis. The purpose of the selection step is data reduction, condensing the large vector from the projection step into a much lower-order informative vector. Finally, using these reduced vectors, we apply recursive partitioning to construct an informative classification tree. This method has been successfully applied to protein data provided by the Department of Radiology and Chemistry at Duke University.
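
    The projection/selection/tree chain maps naturally onto a three-step pipeline. A sketch with scikit-learn, using PCA as a stand-in for the paper's projection basis and random data standing in for mass-spectrometry measurements:

        import numpy as np
        from sklearn.pipeline import make_pipeline
        from sklearn.decomposition import PCA
        from sklearn.feature_selection import SelectKBest, f_classif
        from sklearn.tree import DecisionTreeClassifier

        rng = np.random.default_rng(8)
        # Hypothetical spectra: 80 specimens x 500 intensity values.
        X = rng.normal(size=(80, 500))
        y = rng.integers(0, 2, size=80)          # disease status labels
        X[y == 1, :50] += 0.8                    # planted class signal

        pipe = make_pipeline(
            PCA(n_components=30),                # projection onto a common basis
            SelectKBest(f_classif, k=10),        # condense to an informative vector
            DecisionTreeClassifier(max_depth=3, random_state=0),  # recursive partitioning
        )
        pipe.fit(X, y)
        print("training accuracy:", pipe.score(X, y))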

  20. Multi-agent Based Charges subsystem for Supply Chain Logistics

    Directory of Open Access Journals (Sweden)

    Pankaj Rani

    2012-05-01

    Full Text Available The main objective of this paper is to design a charges subsystem using multi-agent technology, which deals with the calculation, accrual and collection of the various charges levied on goods in supply chain logistics. The accrual of charges such as freight, demurrage, and wharfage takes place implicitly in the SC system at the various events of different subsystems, and these charges are calculated and collected by software agents. Agent-based modeling is an approach based on the idea that a system is composed of decentralized individual 'agents' and that each agent interacts with other agents according to its localized knowledge. Our aim is to design a flexible architecture that can deal with next-generation supply chain problems based on a multi-agent architecture. In this article, a multi-agent system has been developed to calculate the charges levied at various stages on goods sheds. Each entity is modeled as one agent, and their coordination leads to controlled inventories and minimizes the total cost of the SC by sharing information and forecasting knowledge and by using a negotiation mechanism.

  1. Agent-based Simulation of the Maritime Domain

    Directory of Open Access Journals (Sweden)

    O. Vaněk

    2010-01-01

    Full Text Available In this paper, a multi-agent based simulation platform is introduced that focuses on legitimate and illegitimate aspects of maritime traffic, mainly on intercontinental transport through piracy afflicted areas. The extensible architecture presented here comprises several modules controlling the simulation and the life-cycle of the agents, analyzing the simulation output and visualizing the entire simulated domain. The simulation control module is initialized by various configuration scenarios to simulate various real-world situations, such as a pirate ambush, coordinated transit through a transport corridor, or coastal fishing and local traffic. The environmental model provides a rich set of inputs for agents that use the geo-spatial data and the vessel operational characteristics for their reasoning. The agent behavior model based on finite state machines together with planning algorithms allows complex expression of agent behavior, so the resulting simulation output can serve as a substitution for real world data from the maritime domain.

  2. The DTW-based representation space for seismic pattern classification

    Science.gov (United States)

    Orozco-Alzate, Mauricio; Castro-Cabrera, Paola Alexandra; Bicego, Manuele; Londoño-Bonilla, John Makario

    2015-12-01

    Distinguishing among the different seismic volcanic patterns is still one of the most important and labor-intensive tasks for volcano monitoring. This task could be lightened and made free from subjective bias by using automatic classification techniques. In this context, a core but often overlooked issue is the choice of an appropriate representation of the data to be classified. Recently, it has been suggested that using a relative representation (i.e. proximities, namely dissimilarities on pairs of objects) instead of an absolute one (i.e. features, namely measurements on single objects) is advantageous to exploit the relational information contained in the dissimilarities to derive highly discriminant vector spaces, where any classifier can be used. According to that motivation, this paper investigates the suitability of a dynamic time warping (DTW) dissimilarity-based vector representation for the classification of seismic patterns. Results show the usefulness of such a representation in the seismic pattern classification scenario, including analyses of potential benefits from recent advances in the dissimilarity-based paradigm such as the proper selection of representation sets and the combination of different dissimilarity representations that might be available for the same data.
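
    A dissimilarity-based representation needs only pairwise DTW distances: each signal becomes the vector of its distances to a representation set, after which any vector-space classifier applies. A compact dynamic-programming DTW and the resulting embedding in NumPy (the short random signals are placeholders for seismic records):

        import numpy as np

        def dtw(a, b):
            """Classic O(len(a)*len(b)) dynamic time warping distance."""
            D = np.full((len(a) + 1, len(b) + 1), np.inf)
            D[0, 0] = 0.0
            for i in range(1, len(a) + 1):
                for j in range(1, len(b) + 1):
                    cost = abs(a[i - 1] - b[j - 1])
                    D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
            return D[-1, -1]

        rng = np.random.default_rng(9)
        signals = [rng.normal(size=rng.integers(40, 60)) for _ in range(5)]
        prototypes = signals[:3]                 # representation set

        # Dissimilarity-space embedding: one DTW distance per prototype.
        embedding = np.array([[dtw(s, p) for p in prototypes] for s in signals])
        print(embedding.round(2))                # rows are now ordinary vectors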

  3. Data Classification Based on Confidentiality in Virtual Cloud Environment

    Directory of Open Access Journals (Sweden)

    Munwar Ali Zardari

    2014-10-01

    Full Text Available The aim of this study is to provide suitable security for data based on the security needs of the data. It is very difficult to decide (in the cloud) which data need what security and which data do not need security. However, it becomes easy to decide the security level for data once the data are classified according to their security level based on their characteristics. In this study, we propose a data classification cloud model to solve the data confidentiality issue in cloud computing environments. The data are classified into two major classes: sensitive and non-sensitive. The K-Nearest Neighbour (K-NN) classifier is used for data classification, and the Rivest, Shamir and Adleman (RSA) algorithm is used to encrypt sensitive data. After implementing the proposed model, it is found that the confidentiality level of the data is increased, and the model proves to be more cost- and memory-friendly for the users as well as for the cloud service providers. The data storage service is one of the cloud services, where data servers are virtualized across all users. In a cloud server, the data are stored in two ways: either the received data are encrypted and then stored on cloud servers, or the data are stored on the cloud servers without encryption. Both of these data storage methods can face data confidentiality issues, because the data have different values and characteristics that must be identified before being sent to cloud servers.

  4. Changing Histopathological Diagnostics by Genome-Based Tumor Classification

    Directory of Open Access Journals (Sweden)

    Michael Kloth

    2014-05-01

    Full Text Available Traditionally, tumors are classified by histopathological criteria, i.e., based on their specific morphological appearances. Consequently, current therapeutic decisions in oncology are strongly influenced by histology rather than underlying molecular or genomic aberrations. The increase of information on molecular changes, however, enabled by the Human Genome Project and the International Cancer Genome Consortium as well as the manifold advances in molecular biology and high-throughput sequencing techniques, inaugurated the integration of genomic information into disease classification. Furthermore, in some cases it became evident that former classifications needed major revision and adaptation. Such adaptations are often required by understanding the pathogenesis of a disease from a specific molecular alteration, and using this molecular driver for targeted and highly effective therapies. Altogether, reclassifications should lead to a higher information content of the underlying diagnoses, reflecting their molecular pathogenesis and resulting in optimized and individual therapeutic decisions. The objective of this article is to summarize some particularly important examples of genome-based classification approaches and associated therapeutic concepts. In addition to reviewing disease-specific markers, we focus on potentially therapeutic or predictive markers and the relevance of molecular diagnostics in disease monitoring.

  5. An ellipse detection algorithm based on edge classification

    Science.gov (United States)

    Yu, Liu; Chen, Feng; Huang, Jianming; Wei, Xiangquan

    2015-12-01

    In order to enhance the speed and accuracy of ellipse detection, an ellipse detection algorithm based on edge classification is proposed. Excess edge points are removed by serializing edges into point form and by a distance constraint between the edge points. Effective classification is achieved using the angle between the edge points as the criterion, which greatly increases the probability that randomly selected edge points fall on the same ellipse. Ellipse fitting accuracy is significantly improved by an optimization of the RED algorithm, which uses the Euclidean distance to measure the distance from an edge point to the elliptical boundary. Experimental results show that the method can detect ellipses well in cases of edges with interference or edges blocking each other, and that it has higher detection precision and lower time consumption than the RED algorithm.

  6. Entropy coders for image compression based on binary forward classification

    Science.gov (United States)

    Yoo, Hoon; Jeong, Jechang

    2000-12-01

    Entropy coders, as a noiseless compression method, are widely used as the final compression step for images, and there have been many contributions to increasing entropy coder performance and reducing entropy coder complexity. In this paper, we propose entropy coders based on the binary forward classification (BFC). The BFC requires classification overhead, but there is no change between the amount of input information and the total amount of classified output information, a property that we prove in this paper. Using this property, we propose entropy coders consisting of the BFC followed by Golomb-Rice coders (BFC+GR) and the BFC followed by arithmetic coders (BFC+A). The proposed entropy coders introduce negligible additional complexity due to the BFC. Simulation results also show better performance than other entropy coders of similar complexity.
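
    To make the BFC+GR stage concrete, here is a minimal Golomb-Rice coder for nonnegative integers; the Rice parameter k and the bit-string representation are illustrative choices, not the paper's implementation.

```python
# Minimal Golomb-Rice coder: unary quotient followed by k-bit remainder.
def rice_encode(value: int, k: int) -> str:
    """Encode a nonnegative integer with Rice parameter k (divisor 2**k)."""
    q, r = value >> k, value & ((1 << k) - 1)
    return "1" * q + "0" + (format(r, f"0{k}b") if k else "")

def rice_decode(bits: str, k: int) -> int:
    q = bits.index("0")                       # unary part: count leading 1s
    r = int(bits[q + 1 : q + 1 + k], 2) if k else 0
    return (q << k) | r

# Round-trip check over a range of symbols.
assert all(rice_decode(rice_encode(v, 2), 2) == v for v in range(64))
print(rice_encode(11, 2))   # '110' + '11': quotient 2, remainder 3
```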

  7. A novel classification method based on membership function

    Science.gov (United States)

    Peng, Yaxin; Shen, Chaomin; Wang, Lijia; Zhang, Guixu

    2011-03-01

    We propose a method for medical image classification using membership functions. Our aim is to classify the image into several classes based on prior knowledge. For every point, we calculate its membership function, i.e., the probability that the point belongs to each class. The point is finally labeled as the class with the highest value of the membership function. The classification is reduced to a minimization problem of a functional whose arguments are the membership functions. Our paper contains three novelties. First, bias correction and the Rudin-Osher-Fatemi (ROF) model are applied to the input image to enhance the image quality. Second, an unconstrained functional is used: we use variable substitution to avoid the constraints that membership functions should be positive and sum to one. Third, several techniques are used to speed up the computation. Experimental results on ventricle images show the validity of this approach.
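
    A toy version of the membership-function idea, with the variational machinery (bias correction, ROF smoothing, functional minimization) omitted: per-class memberships are Gaussian in pixel intensity and each pixel takes the arg-max class. The class means and sigma are invented for illustration.

```python
# Toy membership-function labeling: each pixel is assigned the class whose
# (normalized) Gaussian membership in intensity is highest.
import numpy as np

rng = np.random.default_rng(0)
image = rng.uniform(0.0, 1.0, size=(64, 64))        # stand-in medical image

class_means = np.array([0.2, 0.5, 0.8])             # assumed prior knowledge
sigma = 0.1

# memberships[k, i, j] ~ exp(-(I(i,j) - m_k)^2 / (2 sigma^2)), normalized
# over classes so each pixel's memberships sum to one.
d2 = (image[None, :, :] - class_means[:, None, None]) ** 2
memberships = np.exp(-d2 / (2 * sigma ** 2))
memberships /= memberships.sum(axis=0, keepdims=True)

labels = memberships.argmax(axis=0)                 # highest membership wins
print(np.bincount(labels.ravel()))                  # pixels per class
```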

  8. SPEECH/MUSIC CLASSIFICATION USING WAVELET BASED FEATURE EXTRACTION TECHNIQUES

    Directory of Open Access Journals (Sweden)

    Thiruvengatanadhan Ramalingam

    2014-01-01

    Full Text Available Audio classification is a fundamental step in coping with the rapid growth in audio data volume. Due to the increasing size of multimedia sources, speech and music classification is one of the most important issues for multimedia information retrieval. In this work a speech/music discrimination system is developed which utilizes the Discrete Wavelet Transform (DWT) as the acoustic feature. Multi-resolution analysis is a significant statistical way to extract features from an input signal, and in this study a method is deployed to model the extracted wavelet features. Support Vector Machines (SVM) are based on the principle of structural risk minimization; SVM is applied to classify audio into the classes speech and music by learning from training data. The proposed method then extends the application of Gaussian Mixture Models (GMM) to estimate the probability density function using maximum likelihood decision methods. The system shows significant results with an accuracy of 94.5%.
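
    A compact sketch of the DWT-plus-SVM stage described above: per-subband log energies from a wavelet decomposition feed an RBF SVM. The signals, wavelet choice and labels are synthetic stand-ins (PyWavelets and scikit-learn assumed available).

```python
# DWT subband energies as features for an SVM speech/music discriminator.
import numpy as np
import pywt
from sklearn.svm import SVC

def dwt_features(signal, wavelet="db4", level=4):
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    return [np.log1p(np.sum(c ** 2)) for c in coeffs]  # per-subband energy

rng = np.random.default_rng(1)
speech = [rng.normal(size=2048) for _ in range(20)]          # noisy stand-in
music = [np.sin(np.linspace(0, 400 * np.pi, 2048)) + 0.1 * rng.normal(size=2048)
         for _ in range(20)]                                 # tonal stand-in

X = [dwt_features(s) for s in speech + music]
y = [0] * 20 + [1] * 20                                      # 0=speech 1=music
clf = SVC(kernel="rbf").fit(X, y)
print("training accuracy:", clf.score(X, y))
```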

  9. A Fuzzy Similarity Based Concept Mining Model for Text Classification

    CERN Document Server

    Puri, Shalini

    2012-01-01

    Text classification is a challenging and highly active field and has great importance in text categorization applications. A lot of research work has been done in this field, but there is a need to categorize a collection of text documents into mutually exclusive categories by extracting the concepts or features using a supervised learning paradigm and different classification algorithms. In this paper, a new Fuzzy Similarity Based Concept Mining Model (FSCMM) is proposed to classify a set of text documents into pre-defined Category Groups (CG) by training and preparing them at the sentence, document and integrated corpora levels, along with feature reduction and ambiguity removal at each level to achieve high system performance. The Fuzzy Feature Category Similarity Analyzer (FFCSA) is used to analyze each extracted feature of the Integrated Corpora Feature Vector (ICFV) with the corresponding categories or classes. This model uses a Support Vector Machine Classifier (SVMC) to classify correct...

  10. Information Fusion Using Ontology-Based Communication between Agents

    Directory of Open Access Journals (Sweden)

    Tarek Sobh

    2009-06-01

    Full Text Available The distribution of on-line applications among network nodes may require obtaining acceptable results from data analysis of multiple sensors. Such sensor data are likely heterogeneous, inconsistent, and of different types; therefore, multiple-sensor data fusion is required. There are many levels of information fusion (from low-level signals to high-level knowledge). Agents monitoring application field events could be used to react dynamically to those events and to take appropriate actions. In a dynamic environment, even a single agent may have varying capabilities to sense that environment, and the situation becomes more complex when various heterogeneous agents need to communicate with each other. Ontologies offer significant benefits to multi-agent systems: interoperability, reusability, and support for multi-agent system development activities such as system analysis and agent knowledge modeling. Ontologies also support multi-agent system operations such as agent communication and reasoning. The agent-based model proposed in this paper can provide a promising model for obtaining acceptable information in the case of multiple sensors.

  11. Agent Based Control of Electric Power Systems with Distributed Generation

    DEFF Research Database (Denmark)

    Saleem, Arshad

    This thesis focuses on making a systematic evaluation of using intelligent software agent technology for control of electric power systems with high penetration of distributed generation. The thesis is based upon a requirement-driven approach. It starts with investigating new trends and challenges in Electric... agents. It suggests a multiagent-based flexible control architecture (subgrid control) suitable for the implementation of the innovative control concepts. This subgrid control architecture is tested on a novel distributed software platform which has been developed to design, test and evaluate distributed... The methodology consists of suggestions for redesign of the control architecture, a prototype for a software platform which facilitates implementation of multiagent control, and results from case studies of specific scenarios. The work also contributes to agent-based control with an approach of model-based agents.

  12. A Software Service Framework Model Based on Agent

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

    This paper presents an agent-based software service framework model called ASF, and defines the basic concepts and structure of the ASF model. It also describes the management and process mechanisms in the ASF model.

  13. Chitosan-based formulations of drugs, imaging agents and biotherapeutics

    NARCIS (Netherlands)

    Amidi, M.; Hennink, W.E.

    2010-01-01

    This preface is part of the Advanced Drug Delivery Reviews theme issue on “Chitosan-Based Formulations of Drugs, Imaging Agents and Biotherapeutics”. This special Advanced Drug Delivery Reviews issue summarizes recent progress and different applications of chitosan-based formulations.

  14. Agent-Based Collaborative Traffic Flow Management Project

    Data.gov (United States)

    National Aeronautics and Space Administration — We propose agent-based game-theoretic approaches for simulation of strategies involved in multi-objective collaborative traffic flow management (CTFM). Intelligent...

  15. Local fractal dimension based approaches for colonic polyp classification.

    Science.gov (United States)

    Häfner, Michael; Tamaki, Toru; Tanaka, Shinji; Uhl, Andreas; Wimmer, Georg; Yoshida, Shigeto

    2015-12-01

    This work introduces texture analysis methods that are based on computing the local fractal dimension (LFD; also called the local density function) and applies them to colonic polyp classification. The methods are tested on 8 HD-endoscopic image databases, where each database is acquired using different imaging modalities (Pentax's i-Scan technology combined with or without staining the mucosa), and on a zoom-endoscopic image database using narrow band imaging. In this paper, we present three novel extensions to an LFD-based approach. These extensions additionally extract shape and/or gradient information from the image to enhance the discriminativity of the original approach. To compare the results of the LFD-based approaches with those of other approaches, five state-of-the-art approaches for colonic polyp classification are applied to the employed databases. Experiments show that LFD-based approaches are well suited for colonic polyp classification, especially the three proposed extensions, which are the best performing methods, or at least among the best, for each of the employed databases. The methods are additionally tested on a public texture image database, the UIUCtex database. With this database, the viewpoint invariance of the methods is assessed, an important feature for the employed endoscopic image databases. Results imply that most of the LFD-based methods are more viewpoint invariant than the other methods. However, the shape, size and orientation adapted LFD approaches (which are especially designed to enhance viewpoint invariance) are in general not more viewpoint invariant than the other LFD-based approaches.
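
    The LFD computations themselves are involved; as a minimal related sketch, the box-counting estimate below recovers a (global) fractal dimension from a binary texture, which the record's methods refine into a local, per-pixel density function.

```python
# Box-counting fractal dimension of a binary image: slope of log N(s)
# against log(1/s), where N(s) counts occupied s-by-s boxes.
import numpy as np

def box_count_dimension(binary, sizes=(2, 4, 8, 16, 32)):
    counts = []
    h, w = binary.shape
    for s in sizes:
        view = binary[: h - h % s, : w - w % s]
        blocks = view.reshape(h // s, s, w // s, s).any(axis=(1, 3))
        counts.append(blocks.sum())
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return slope

rng = np.random.default_rng(2)
texture = rng.random((128, 128)) > 0.5       # stand-in binary texture
print(box_count_dimension(texture))          # ~2 for a dense random field
```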

  16. Rule based fuzzy logic approach for classification of fibromyalgia syndrome.

    Science.gov (United States)

    Arslan, Evren; Yildiz, Sedat; Albayrak, Yalcin; Koklukaya, Etem

    2016-06-01

    Fibromyalgia syndrome (FMS) is a chronic muscle and skeletal system disease observed generally in women, manifesting itself as widespread pain and impairing the individual's quality of life. FMS diagnosis is made based on the American College of Rheumatology (ACR) criteria. However, recently the employability and sufficiency of the ACR criteria have come under debate, and in this context several evaluation methods, including clinical evaluation methods, have been proposed by researchers. Accordingly, ACR had to update the criteria announced in 1990, 2010 and 2011. The proposed rule-based fuzzy logic method aims to evaluate FMS from a different angle as well. This method contains a rule base derived from the 1990 ACR criteria and the individual experiences of specialists. The study was conducted using data collected from 60 inpatients and 30 healthy volunteers. Several tests and physical examinations were administered to the participants. The fuzzy logic rule base was structured using the parameters of tender point count, chronic widespread pain period, pain severity, fatigue severity and sleep disturbance level, which were deemed important in FMS diagnosis. It was observed that the fuzzy predictor was generally 95.56% consistent with at least one of the specialists who were not creators of the fuzzy rule base. Thus, in diagnostic classification, where the severity of FMS was classified as well, consistent findings were obtained from the comparison of the interpretations and experiences of specialists with the fuzzy logic approach. The study proposes a rule base which could eliminate the shortcomings of the 1990 ACR criteria during the FMS evaluation process. Furthermore, the proposed method presents a classification of the severity of the disease, which was not available with the ACR criteria. The study was not limited to disease classification; at the same time, the probability of occurrence and the severity were classified. In addition, those who were not suffering from FMS were...

  17. Pivotal Technology Research of Grid Based on Mobile Agent

    Institute of Scientific and Technical Information of China (English)

    CHEN Hong-wei; WANG Ru-chuan

    2004-01-01

    Grid Based on Mobile Agent is a new grid scheme. The purpose of this paper is to solve the pivotal technology problems of Grid Based on Mobile Agent (GBMA) combined with the concept of the Virtual Organization (VO). In GBMA, a virtual organization is viewed as the basic management unit of the grid, and a mobile agent is regarded as an important interactive means. Grid architecture, grid resource management and grid task management are the core technology problems of GBMA. The simulation results show that the Inter-VO pattern has an obvious advantage because it can make full use of resources from other virtual organizations in the GBMA environment.

  18. Dictionary-Based, Clustered Sparse Representation for Hyperspectral Image Classification

    Directory of Open Access Journals (Sweden)

    Zhen-tao Qin

    2015-01-01

    Full Text Available This paper presents a new, dictionary-based method for hyperspectral image classification, which incorporates both the spectral and contextual characteristics of the samples, which are clustered to obtain a dictionary for each pixel. The resulting pixels display a common sparsity pattern within identical clustered groups. We calculated the image's sparse coefficients using the dictionary approach, which generated the sparse representation features of the remote sensing images. The sparse coefficients are then used to classify the hyperspectral images via a linear SVM. Experiments show that our proposed method of dictionary-based, clustered sparse coefficients can create better representations of hyperspectral images, with greater overall accuracy and Kappa coefficient.

  19. Typology of Digital News Media: Theoretical Bases for their Classification

    Directory of Open Access Journals (Sweden)

    Ramón SALAVERRÍA

    2017-01-01

    Full Text Available Since their beginnings in the 1990s, digital news media have undergone a process of settlement and diversification. As a result, the prolific classification of online media has become increasingly rich and complex. Based on a review of media typologies, this article proposes some theoretical bases for distinguishing online media from previous media and, above all, for differentiating the various types of online media among themselves. With that purpose, nine typological criteria are proposed: 1) platform, 2) temporality, 3) topic, 4) reach, 5) ownership, 6) authorship, 7) focus, 8) economic purpose, and 9) dynamism.

  20. Network Traffic Anomalies Identification Based on Classification Methods

    Directory of Open Access Journals (Sweden)

    Donatas Račys

    2015-07-01

    Full Text Available A problem of network traffic anomaly detection in computer networks is analyzed. An overview of anomaly detection methods is given, and the advantages and disadvantages of the different methods are analyzed. A model for traffic anomaly detection was developed based on IBM SPSS Modeler and is used to analyze SNMP data from a router. The investigation of traffic anomalies was done using three classification methods and different sets of learning data. Based on the results of the investigation, it was determined that the C5.0 decision tree method has the highest accuracy and performance and can be successfully used for identification of network traffic anomalies.
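
    The record relies on SPSS Modeler's C5.0 tree; as a rough open-source stand-in, the sketch below trains scikit-learn's CART decision tree on hypothetical SNMP-style traffic features (packet rate, byte rate, error rate) with injected anomalies.

```python
# Decision-tree traffic-anomaly classification on synthetic SNMP-like data.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(3)
normal = rng.normal([100, 5e4, 0.01], [10, 5e3, 0.005], size=(200, 3))
anomaly = rng.normal([400, 2e5, 0.20], [50, 2e4, 0.050], size=(20, 3))
X = np.vstack([normal, anomaly])
y = np.array([0] * 200 + [1] * 20)            # 1 = traffic anomaly

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
tree = DecisionTreeClassifier(max_depth=4).fit(X_tr, y_tr)
print("test accuracy:", tree.score(X_te, y_te))
```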

  1. ACO Agent Based Routing in AOMDV Environment

    Directory of Open Access Journals (Sweden)

    Kaur Amanpreet

    2016-01-01

    Full Text Available A Mobile Ad-hoc Network (MANET) is a group of moving nodes which can communicate with each other without the help of any central stationary node. All the nodes in a MANET act as routers for forwarding data packets. The nodes in the network also move randomly and there exists no fixed infrastructure, so path breaks are a frequent problem in MANETs, and routing protocols face many problems due to these path breaks. Therefore, a routing protocol which is multipath in nature is more reliable than a unipath routing protocol. Ant colony optimization is a relatively new technique which is suitable for optimization problems. AOMDV is a multipath routing protocol; thus, if a path break happens, the packets can start following a new path which has already been selected. In this paper, we add ant agents to the behavior of AOMDV. In this way, the new protocol benefits from dual properties: the ants' nature and the multipath nature of AOMDV. The modified concept is simulated and the outcomes are compared with the AOMDV, AODV and DSR routing protocols for several performance parameters. The results obtained are encouraging; the new algorithm performs better than traditional unipath and multipath routing protocols.

  2. Classification of body movements based on posturographic data.

    Science.gov (United States)

    Saripalle, Sashi K; Paiva, Gavin C; Cliett, Thomas C; Derakhshani, Reza R; King, Gregory W; Lovelace, Christopher T

    2014-02-01

    The human body, standing on two feet, produces a continuous sway pattern. Intended movements, sensory cues, emotional states, and illnesses can all lead to subtle changes in sway appearing as alterations in ground reaction forces and the body's center of pressure (COP). The purpose of this study is to demonstrate that carefully selected COP parameters and classification methods can differentiate among specific body movements while standing, providing new prospects in camera-free motion identification. Force platform data were collected from participants performing 11 choreographed postural and gestural movements. Twenty-three different displacement- and frequency-based features were extracted from COP time series, and supplied to classification-guided feature extraction modules. For identification of movement type, several linear and nonlinear classifiers were explored; including linear discriminants, nearest neighbor classifiers, and support vector machines. The average classification rates on previously unseen test sets ranged from 67% to 100%. Within the context of this experiment, no single method was able to uniformly outperform the others for all movement types, and therefore a set of movement-specific features and classifiers is recommended.

  3. Spectrum-based kernel length estimation for Gaussian process classification.

    Science.gov (United States)

    Wang, Liang; Li, Chuan

    2014-06-01

    Recent studies have shown that Gaussian process (GP) classification, a discriminative supervised learning approach, has achieved competitive performance in real applications compared with most state-of-the-art supervised learning methods. However, the problem of automatic model selection in GP classification, involving the kernel function form and the corresponding parameter values (which are unknown in advance), remains a challenge. To make GP classification a more practical tool, this paper presents a novel spectrum analysis-based approach for model selection by refining the GP kernel function to match the given input data. Specifically, we target the problem of GP kernel length scale estimation. Spectrums are first calculated analytically from the kernel function itself using the autocorrelation theorem as well as being estimated numerically from the training data themselves. Then, the kernel length scale is automatically estimated by equating the two spectrum values, i.e., the kernel function spectrum equals to the estimated training data spectrum. Compared with the classical Bayesian method for kernel length scale estimation via maximizing the marginal likelihood (which is time consuming and could suffer from multiple local optima), extensive experimental results on various data sets show that our proposed method is both efficient and accurate.

  4. Risk Classification and Risk-based Safety and Mission Assurance

    Science.gov (United States)

    Leitner, Jesse A.

    2014-01-01

    Recent activities to revamp and emphasize the need to streamline processes and activities for Class D missions across the agency have led to various interpretations of Class D, including the lumping of a variety of low-cost projects into Class D; sometimes terms such as "Class D minus" are used. In this presentation, mission risk classifications will be traced to official requirements and definitions as a measure to ensure that projects and programs align with the guidance and requirements commensurate with their defined risk posture. As part of this, the full suite of risk classifications, formal and informal, will be defined, followed by an introduction to the new GPR 8705.4 that is currently under review. GPR 8705.4 lays out guidance for the mission success activities performed at Classes A-D for NPR 7120.5 projects as well as for projects not under NPR 7120.5. Furthermore, the trends in stepping from Class A into higher risk posture classifications will be discussed. The talk will conclude with a discussion about risk-based safety and mission assurance at GSFC.

  5. Geographical classification of apple based on hyperspectral imaging

    Science.gov (United States)

    Guo, Zhiming; Huang, Wenqian; Chen, Liping; Zhao, Chunjiang; Peng, Yankun

    2013-05-01

    The geographical origin of an apple is often recognized and appreciated by consumers and is usually an important factor in determining the price of a commercial product. Hyperspectral imaging technology and supervised pattern recognition were employed to discriminate apples according to geographical origin in this work. Hyperspectral images of 207 Fuji apple samples were collected by a hyperspectral camera (400-1000 nm). Principal component analysis (PCA) was performed on the hyperspectral imaging data to determine the main efficient wavelength images, and then characteristic variables were extracted by texture analysis based on the gray level co-occurrence matrix (GLCM) from the dominant waveband image. All characteristic variables were obtained by fusing the data of images in the efficient spectra. A support vector machine (SVM) was used to construct the classification model, which showed excellent performance, with high classification accuracies of 92.75% in the training set and 89.86% in the prediction set. The overall results demonstrate that the hyperspectral imaging technique coupled with an SVM classifier can be efficiently utilized to discriminate Fuji apples according to geographical origin.
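
    As a sketch of the GLCM-texture-plus-SVM stage (the hyperspectral acquisition and PCA band selection are out of scope here), the fragment below computes co-occurrence statistics from synthetic 8-bit band images and feeds them to an SVM; in scikit-image versions before 0.19 the functions are spelled greycomatrix/greycoprops.

```python
# GLCM texture features (contrast, homogeneity, energy, correlation)
# from synthetic 8-bit "band images", classified with an SVM.
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.svm import SVC

def glcm_features(img8):
    glcm = graycomatrix(img8, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    return [graycoprops(glcm, p).mean()
            for p in ("contrast", "homogeneity", "energy", "correlation")]

rng = np.random.default_rng(4)
smooth = [rng.normal(128, 5, (32, 32)).astype(np.uint8) for _ in range(10)]
rough = [rng.integers(0, 256, (32, 32)).astype(np.uint8) for _ in range(10)]

X = [glcm_features(i) for i in smooth + rough]
y = [0] * 10 + [1] * 10                      # stand-in origin labels
print("training accuracy:", SVC().fit(X, y).score(X, y))
```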

  6. Spectral classification of stars based on LAMOST spectra

    CERN Document Server

    Liu, Chao; Zhang, Bo; Wan, Jun-Chen; Deng, Li-Cai; Hou, Yonghui; Wang, Yuefei; Yang, Ming; Zhang, Yong

    2015-01-01

    In this work, we select the high signal-to-noise ratio spectra of stars from the LAMOST data and map their MK classes to the spectral features. The equivalent widths of the prominent spectral lines, playing a similar role to multi-color photometry, form a clean stellar locus well ordered by MK class. The advantage of the stellar locus in line indices is that it gives a natural and continuous classification of stars consistent with either the broadly used MK classes or the stellar astrophysical parameters. We also employ an SVM-based classification algorithm to assign MK classes to the LAMOST stellar spectra. We find that the completeness of the classification is up to 90% for A and G type stars, while it is down to about 50% for OB and K type stars. About 40% of the OB and K type stars are mis-classified as A and G type stars, respectively. This is likely because the differences in the spectral features between the late B type and early A type stars, or between the late G and early K type stars, are very we...

  7. Evolutionary game theory using agent-based methods.

    Science.gov (United States)

    Adami, Christoph; Schossau, Jory; Hintze, Arend

    2016-12-01

    Evolutionary game theory is a successful mathematical framework geared towards understanding the selective pressures that affect the evolution of the strategies of agents engaged in interactions with potential conflicts. While a mathematical treatment of the costs and benefits of decisions can predict the optimal strategy in simple settings, more realistic settings such as finite populations, non-vanishing mutation rates, stochastic decisions, communication between agents, and spatial interactions require agent-based methods where each agent is modeled as an individual, carries its own genes that determine its decisions, and where the evolutionary outcome can only be ascertained by evolving the population of agents forward in time. While highlighting standard mathematical results, we compare those to agent-based methods that can go beyond the limitations of equations and simulate the complexity of heterogeneous populations and an ever-changing set of interactors. We conclude that agent-based methods can predict evolutionary outcomes where purely mathematical treatments cannot tread (for example in the weak selection-strong mutation limit), but that mathematics is crucial to validate the computational simulations.

  8. Supervisory Control of Fuzzy Discrete Event Systems Based on Agent

    Institute of Scientific and Technical Information of China (English)

    2006-01-01

    FDES (fuzzy discrete event systems) can effectively represent, from the viewpoint of discrete events, a class of complicated systems involving uncertainties and vagueness as well as human subjective observation and judgement. Here, the information system is divided into independent intelligent entity Agents. The concept of an Agent-based information processing state is proposed. The processing state of an Agent can be judged from assistant observation parameters about the Agent and its surrounding environment, and the transitions among these states can be represented by a rule-based FDES. In order to ensure the harmony of the Agents during information processing, the upstream and downstream buffers of each Agent are considered in the modeling of the Agent system, and a supervisory controller based on FDES is constructed. The processing state of an Agent can be adjusted by the supervisory controller, according to the control policies, to improve the stability of the system and the efficiency of resource utilization during the process. The result of an application is provided to illustrate the validity of the supervisory adjustment.

  9. Multi Agent System Based Wide Area Protection against Cascading Events

    DEFF Research Database (Denmark)

    Liu, Zhou; Chen, Zhe; Liu, Leo;

    2012-01-01

    In this paper, a multi-agent system based wide area protection scheme is proposed in order to prevent long term voltage instability induced cascading events. The distributed relays and controllers work as device agents which not only execute the normal function automatically but also can be modified to fulfill extra functions according to external requirements. The control center is designed as the highest level agent in the MAS to coordinate all the lower agents to prevent system wide voltage disturbance. A hybrid simulation platform with MATLAB and RTDS is set up to demonstrate the effectiveness of the proposed protection strategy. The simulation results indicate that the proposed multi agent control system can effectively coordinate the distributed relays and controllers to prevent the long term voltage instability induced cascading events.

  10. Autonomous Traffic Control System Using Agent Based Technology

    CERN Document Server

    M, Venkatesh; V, Srinivas

    2011-01-01

    The way of analyzing, designing and building real-time projects has changed due to the rapid growth of the internet, mobile technologies and intelligent applications. Most of these applications are built from intelligent, tiny, distributed components called agents. An agent takes input from numerous real-time sources and gives back real-time responses. This paper considers how these agents can be applied to vehicle traffic management, especially in large cities, and identifies the various challenges that arise when there is rapid growth of population and vehicles. Our proposal gives a solution based on autonomous, agent-based technology. These autonomous or intelligent agents have the capability to observe, act and learn from their past experience. The system uses the knowledge of the preceding signal's data flow to identify the incoming flow of the forthcoming signal. Our architecture involves video analysis and exploration using an intelligent learning algorithm to estimate and identify the...

  11. An Approach for Leukemia Classification Based on Cooperative Game Theory

    Directory of Open Access Journals (Sweden)

    Atefeh Torkaman

    2011-01-01

    Full Text Available Hematological malignancies are the types of cancer that affect blood, bone marrow and lymph nodes. As these tissues are naturally connected through the immune system, a disease affecting one of them will often affect the others as well. The hematological malignancies include leukemia, lymphoma and multiple myeloma. Among them, leukemia is a serious malignancy that starts in blood tissues, especially the bone marrow, where the blood is made. Research shows that leukemia is one of the most common cancers in the world, so an emphasis on diagnostic techniques and best treatments would be able to provide better prognosis and survival for patients. In this paper, an automatic diagnosis recommender system for classifying leukemia based on cooperative game theory is presented. Throughout this research, we analyze flow cytometry data toward the classification of leukemia into eight classes. We work on a real data set from different types of leukemia collected at the Iran Blood Transfusion Organization (IBTO). In general, the data set contains 400 samples taken from human leukemic bone marrow. This study deals with cooperative game theory used for classification according to different weights assigned to the markers. The proposed method is versatile, as there are no constraints on what the input or output represent, meaning that it can be used to classify a population according to their contributions; in other words, it applies equally to other groups of data. The experimental results show an accuracy rate of 93.12% for classification, compared to a decision tree (C4.5) with 90.16% accuracy. The results demonstrate that cooperative game theory is very promising for direct use in the classification of leukemia as part of an active medical decision support system for the interpretation of flow cytometry readouts. This system could assist clinical hematologists to properly recognize different kinds of leukemia by preparing suggestions, and this could improve the treatment...

  12. A belief revision approach for argumentation-based negotiation agents

    Directory of Open Access Journals (Sweden)

    Pilotti Pablo

    2015-09-01

    Full Text Available Negotiation is an interaction that happens in multi-agent systems when agents have conflicting objectives and must look for an acceptable agreement. A typical negotiating situation involves two agents that cannot reach their goals by themselves because they do not have some resources they need or they do not know how to use them to reach their goals. Therefore, they must start a negotiation dialogue, also taking into account that they might have incomplete or wrong beliefs about the other agent’s goals and resources. This article presents a negotiating agent model based on argumentation, which is used by the agents to reason on how to exchange resources and knowledge in order to achieve their goals. Agents that negotiate have incomplete beliefs about the others, so that the exchange of arguments gives them information that makes it possible to update their beliefs. In order to formalize their proposals in a negotiation setting, the agents must be able to generate, select and evaluate arguments associated with such offers, updating their mental state accordingly. In our approach, we will focus on an argumentation-based negotiation model between two cooperative agents. The arguments generation and interpretation process is based on belief change operations (expansions, contractions and revisions), and the selection process is based on a strategy. This approach is presented through a high-level algorithm implemented in logic programming. We show various theoretical properties associated with this approach, which have been formalized and proved using Coq, a formal proof management system. We also illustrate, through a case study, the applicability of our approach in order to solve a slightly modified version of the well-known home improvement agents problem. Moreover, we present various simulations that allow assessing the impact of belief revision on the negotiation process.

  13. Intrusion Awareness Based on Data Fusion and SVM Classification

    Directory of Open Access Journals (Sweden)

    Ramnaresh Sharma

    2012-06-01

    Full Text Available Network intrusion awareness is an important factor for risk analysis of network security. In the current decade, various methods and frameworks are available for intrusion detection and security awareness: some methods are based on the knowledge discovery process and some frameworks are based on neural networks. All these models take rule-based decisions for the generation of security alerts. In this paper we propose a novel method for intrusion awareness using data fusion and SVM classification. Data fusion works on the basis of gathering the features of events, and the support vector machine is a powerful classifier of such data; here we use SVM for the detection of closed items of the rule-based technique. Our proposed method is simulated on the KDD1999 DARPA data set and obtains better empirical evaluation results in comparison with rule-based techniques and neural network models.

  14. Content Based Image Retrieval: Classification Using Neural Networks

    Directory of Open Access Journals (Sweden)

    Shereena V.B

    2014-10-01

    Full Text Available In a content-based image retrieval (CBIR) system, the main issue is to extract the image features that effectively represent the image contents in a database. Such an extraction requires a detailed evaluation of the retrieval performance of image features. This paper presents a review of fundamental aspects of content-based image retrieval, including the extraction of color and texture features. Commonly used color features, including color moments, the color histogram and the color correlogram, and Gabor texture features are compared. The paper reviews the increase in the efficiency of image retrieval when the color and texture features are combined. The similarity measures, based on which matches are made and images are retrieved, are also discussed. For effective indexing and fast searching of images based on visual features, neural network based pattern learning can be used to achieve effective classification.
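
    A minimal sketch of the classification stage the review points to: concatenated per-channel color histograms as features for a small neural network (scikit-learn's MLP); the synthetic "reddish"/"bluish" images stand in for a real image database.

```python
# Color-histogram features + MLP classifier for CBIR-style labeling.
import numpy as np
from sklearn.neural_network import MLPClassifier

def color_histogram(img, bins=8):
    """Concatenated per-channel histograms, normalized to sum to one."""
    feats = [np.histogram(img[..., c], bins=bins, range=(0, 256))[0]
             for c in range(3)]
    h = np.concatenate(feats).astype(float)
    return h / h.sum()

rng = np.random.default_rng(5)
reddish = [rng.integers(0, 256, (16, 16, 3)) * [1.0, 0.3, 0.3]
           for _ in range(20)]
bluish = [rng.integers(0, 256, (16, 16, 3)) * [0.3, 0.3, 1.0]
          for _ in range(20)]

X = [color_histogram(i) for i in reddish + bluish]
y = [0] * 20 + [1] * 20
clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000,
                    random_state=0).fit(X, y)
print("training accuracy:", clf.score(X, y))
```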

  15. A Rough Sets-based Agent Trust Management Framework

    Directory of Open Access Journals (Sweden)

    Sadra Abedinzadeh

    2013-03-01

    Full Text Available In a virtual society, which consists of several autonomous agents, trust helps agents to deal with the openness of the system by identifying the agents best capable of performing a specific task or achieving a special goal. In this paper, we introduce ROSTAM, a new approach for agent trust management based on the theory of Rough Sets. ROSTAM is a generic trust management framework that can be applied to any type of multi-agent system; however, the features of the application domain must be provided to ROSTAM. These features form the trust attributes. By collecting the values for these attributes, ROSTAM is able to generate a set of trust rules by employing the theory of Rough Sets. ROSTAM then uses the trust rules to extract the set of the most trusted agents and forwards the user’s request to those agents only. After getting the results, the user must rate the interaction with each trusted agent. The rating values are subsequently utilized for updating the trust rules. We applied ROSTAM to the domain of cross-language Web search. The resulting Web search system recommends to the user the pairs of translator and search engine that return the results with the highest retrieval precision.

  16. S1 gene-based phylogeny of infectious bronchitis virus: An attempt to harmonize virus classification.

    Science.gov (United States)

    Valastro, Viviana; Holmes, Edward C; Britton, Paul; Fusaro, Alice; Jackwood, Mark W; Cattoli, Giovanni; Monne, Isabella

    2016-04-01

    Infectious bronchitis virus (IBV) is the causative agent of a highly contagious disease that results in severe economic losses to the global poultry industry. The virus exists in a wide variety of genetically distinct viral types, and both phylogenetic analysis and measures of pairwise similarity among nucleotide or amino acid sequences have been used to classify IBV strains. However, there is currently no consensus on the method by which IBV sequences should be compared, and heterogeneous genetic group designations that are inconsistent with phylogenetic history have been adopted, leading to the confusing coexistence of multiple genotyping schemes. Herein, we propose a simple and repeatable phylogeny-based classification system combined with an unambiguous and rational lineage nomenclature for the assignment of IBV strains. By using complete nucleotide sequences of the S1 gene, we determined the phylogenetic structure of IBV, which in turn allowed us to define 6 genotypes that together comprise 32 distinct viral lineages and a number of inter-lineage recombinants. Because of extensive rate variation among IBVs, we suggest that the inference of phylogenetic relationships alone represents a more appropriate criterion for sequence classification than pairwise sequence comparisons. The adoption of an internationally accepted viral nomenclature is crucial for future studies of IBV epidemiology and evolution, and the classification scheme presented here can be updated and revised as novel S1 sequences become available.

  17. Application of Bayesian Classification to Content-Based Data Management

    Science.gov (United States)

    Lynnes, Christopher; Berrick, S.; Gopalan, A.; Hua, X.; Shen, S.; Smith, P.; Yang, K-Y.; Wheeler, K.; Curry, C.

    2004-01-01

    The high volume of Earth Observing System data has proven to be challenging to manage for data centers and users alike. At the Goddard Earth Sciences Distributed Active Archive Center (GES DAAC), about 1 TB of new data are archived each day. Distribution to users is also about 1 TB/day. A substantial portion of this distribution is MODIS calibrated radiance data, which has a wide variety of uses. However, much of the data is not useful for a particular user's needs: for example, ocean color users typically need oceanic pixels that are free of cloud and sun-glint. The GES DAAC is using a simple Bayesian classification scheme to rapidly classify each pixel in the scene in order to support several experimental content-based data services for near-real-time MODIS calibrated radiance products (from Direct Readout stations). Content-based subsetting would allow distribution of, say, only clear pixels to the user if desired. Content-based subscriptions would distribute data to users only when they fit the user's usability criteria in their area of interest within the scene. Content-based cache management would retain more useful data on disk for easy online access. The classification may even be exploited in an automated quality assessment of the geolocation product. Though initially to be demonstrated at the GES DAAC, these techniques have applicability in other resource-limited environments, such as spaceborne data systems.
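
    A toy version of the per-pixel Bayesian scheme: Gaussian class likelihoods over band reflectances combined with priors, arg-max posterior per pixel. The class statistics and priors below are invented for illustration, not the GES DAAC's.

```python
# Toy Gaussian naive-Bayes pixel classifier: argmax of log prior plus
# log likelihood under per-class Gaussian band statistics.
import numpy as np

classes = {"clear_ocean": ([0.02, 0.01], 0.5),   # (band means, prior)
           "cloud":       ([0.60, 0.55], 0.3),
           "sun_glint":   ([0.30, 0.10], 0.2)}
sigma = 0.05                                      # shared band stddev

def classify(pixels):
    """pixels: (N, 2) array of two-band reflectances."""
    names = list(classes)
    logpost = []
    for name in names:
        mu, prior = classes[name]
        ll = -np.sum((pixels - np.array(mu)) ** 2, axis=1) / (2 * sigma ** 2)
        logpost.append(ll + np.log(prior))
    return [names[i] for i in np.argmax(logpost, axis=0)]

print(classify(np.array([[0.03, 0.02], [0.58, 0.50], [0.29, 0.11]])))
```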

  18. The Development of Sugar-Based Anti-Melanogenic Agents.

    Science.gov (United States)

    Bin, Bum-Ho; Kim, Sung Tae; Bhin, Jinhyuk; Lee, Tae Ryong; Cho, Eun-Gyung

    2016-04-16

    The regulation of melanin production is important for managing skin darkness and hyperpigmentary disorders. Numerous anti-melanogenic agents that target tyrosinase activity/stability, melanosome maturation/transfer, or melanogenesis-related signaling pathways have been developed. As a rate-limiting enzyme in melanogenesis, tyrosinase has been the most attractive target, but tyrosinase-targeted treatments still pose serious potential risks, indicating the necessity of developing lower-risk anti-melanogenic agents. Sugars are ubiquitous natural compounds found in humans and other organisms. Here, we review the recent advances in research on the roles of sugars and sugar-related agents in melanogenesis and in the development of sugar-based anti-melanogenic agents. The proposed mechanisms of action of these agents include: (a) (natural sugars) disturbing proper melanosome maturation by inducing osmotic stress and inhibiting the PI3 kinase pathway and (b) (sugar derivatives) inhibiting tyrosinase maturation by blocking N-glycosylation. Finally, we propose an alternative strategy for developing anti-melanogenic sugars that theoretically reduce melanosomal pH by inhibiting a sucrose transporter and reduce tyrosinase activity by inhibiting copper incorporation into an active site. These studies provide evidence of the utility of sugar-based anti-melanogenic agents in managing skin darkness and curing pigmentary disorders and suggest a future direction for the development of physiologically favorable anti-melanogenic agents.

  1. Agent-based services for B2B electronic commerce

    Science.gov (United States)

    Fong, Elizabeth; Ivezic, Nenad; Rhodes, Tom; Peng, Yun

    2000-12-01

    The potential of agent-based systems has not been realized yet, in part, because of the lack of understanding of how the agent technology supports industrial needs and emerging standards. The area of business-to-business electronic commerce (b2b e-commerce) is one of the most rapidly developing sectors of industry with huge impact on manufacturing practices. In this paper, we investigate the current state of agent technology and the feasibility of applying agent-based computing to b2b e-commerce in the circuit board manufacturing sector. We identify critical tasks and opportunities in the b2b e-commerce area where agent-based services can best be deployed. We describe an implemented agent-based prototype system to facilitate the bidding process for printed circuit board manufacturing and assembly. These activities are taking place within the Internet Commerce for Manufacturing (ICM) project, the NIST- sponsored project working with industry to create an environment where small manufacturers of mechanical and electronic components may participate competitively in virtual enterprises that manufacture printed circuit assemblies.

  2. Object-based Dimensionality Reduction in Land Surface Phenology Classification

    Directory of Open Access Journals (Sweden)

    Brian E. Bunker

    2016-11-01

    Full Text Available Unsupervised classification or clustering of multi-decadal land surface phenology provides a spatio-temporal synopsis of natural and agricultural vegetation response to environmental variability and anthropogenic activities. Notwithstanding the detailed temporal information available in calibrated bi-monthly normalized difference vegetation index (NDVI and comparable time series, typical pre-classification workflows average a pixel’s bi-monthly index within the larger multi-decadal time series. While this process is one practical way to reduce the dimensionality of time series with many hundreds of image epochs, it effectively dampens temporal variation from both intra and inter-annual observations related to land surface phenology. Through a novel application of object-based segmentation aimed at spatial (not temporal dimensionality reduction, all 294 image epochs from a Moderate Resolution Imaging Spectroradiometer (MODIS bi-monthly NDVI time series covering the northern Fertile Crescent were retained (in homogenous landscape units as unsupervised classification inputs. Given the inherent challenges of in situ or manual image interpretation of land surface phenology classes, a cluster validation approach based on transformed divergence enabled comparison between traditional and novel techniques. Improved intra-annual contrast was clearly manifest in rain-fed agriculture and inter-annual trajectories showed increased cluster cohesion, reducing the overall number of classes identified in the Fertile Crescent study area from 24 to 10. Given careful segmentation parameters, this spatial dimensionality reduction technique augments the value of unsupervised learning to generate homogeneous land surface phenology units. By combining recent scalable computational approaches to image segmentation, future work can pursue new global land surface phenology products based on the high temporal resolution signatures of vegetation index time series.

  3. Fuzzy Motivations in a Multiple Agent Behaviour-Based Architecture

    Directory of Open Access Journals (Sweden)

    Tomás V. Arredondo

    2013-08-01

    Full Text Available In this article we introduce a blackboard-based multiple agent system framework that considers biologically-based motivations as a means to develop a user friendly interface. The framework includes a population-based heuristic as well as a fuzzy logic-based inference system used toward scoring system behaviours. The heuristic provides an optimization environment and the fuzzy scoring mechanism is used to give a fitness score to possible system outputs (i.e. solutions). This framework results in the generation of complex behaviours which respond to previously specified motivations. Our multiple agent blackboard and motivation-based framework is validated in a low cost mobile robot specifically built for this task. The robot was used in several navigation experiments and the motivation profile that was considered included "curiosity", "homing", "energy" and "missions". Our results show that this motivation-based approach permits a low cost multiple agent-based autonomous mobile robot to acquire a diverse set of fit behaviours that respond well to user and performance expectations. These results also validate our multiple agent framework as an incremental, flexible and practical method for the development of robust multiple agent systems.

  4. Agent-based computational economics using NetLogo

    CERN Document Server

    Damaceanu, Romulus-Catalin

    2013-01-01

    Agent-based Computational Economics using NetLogo explores how researchers can create, use and implement multi-agent computational models in Economics by using the NetLogo software platform. Problems of economic science can be solved using multi-agent modelling (MAM). This technique uses a computer model to simulate the actions and interactions of autonomous entities in a network, in order to analyze the effects on the entire economic system. MAM combines elements of game theory, complex systems, emergence and evolutionary programming. The Monte Carlo method is also used in this e-book to introduce...

  5. Emergent Macroeconomics An Agent-Based Approach to Business Fluctuations

    CERN Document Server

    Delli Gatti, Domenico; Gallegati, Mauro; Giulioni, Gianfranco; Palestrini, Antonio

    2008-01-01

    This book contributes substantively to the current state-of-the-art of macroeconomics by providing a method for building models in which business cycles and economic growth emerge from the interactions of a large number of heterogeneous agents. Drawing from recent advances in agent-based computational modeling, the authors show how insights from dispersed fields like the microeconomics of capital market imperfections, industrial dynamics and the theory of stochastic processes can be fruitfully combined to improve our understanding of macroeconomic dynamics. This book should be a valuable resource for all researchers interested in analyzing macroeconomic issues without resorting to a fictitious representative agent.

  6. QoS Negotiation and Renegotiation Based on Mobile Agents

    Institute of Scientific and Technical Information of China (English)

    ZHANG Shi-bing; ZHANG Deng-yin

    2006-01-01

    The Quality of Service (QoS) has received more and more attention as QoS becomes increasingly important in Internet development. Mobile software agents represent a valid alternative for the implementation of negotiation strategies. In this paper, a QoS negotiation and renegotiation system architecture based on mobile agents is proposed. The agents perform the tasks throughout the whole process; therefore, such a system can reduce the network load, overcome latency, and avoid frequent exchange of information between clients and the server. The simulation results show that the proposed system can improve network resource utilization by about 10%.

  7. Tutorial on agent-based modeling and simulation.

    Energy Technology Data Exchange (ETDEWEB)

    Macal, C. M.; North, M. J.; Decision and Information Sciences

    2005-01-01

    Agent-based modeling and simulation (ABMS) is a new approach to modeling systems comprised of autonomous, interacting agents. ABMS promises to have far-reaching effects on the way that businesses use computers to support decision-making and researchers use electronic laboratories to support their research. Some have gone so far as to contend that ABMS is a third way of doing science besides deductive and inductive reasoning. Computational advances have made possible a growing number of agent-based applications in a variety of fields. Applications range from modeling agent behavior in the stock market and supply chains, to predicting the spread of epidemics and the threat of bio-warfare, from modeling consumer behavior to understanding the fall of ancient civilizations, to name a few. This tutorial describes the theoretical and practical foundations of ABMS, identifies toolkits and methods for developing ABMS models, and provides some thoughts on the relationship between ABMS and traditional modeling techniques.
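
    The core ABMS pattern the tutorial describes, autonomous agents with local state, local interaction rules, and emergent aggregate behavior, fits in a few lines; the toy contagion model below is an invented illustration, not one of the tutorial's case studies.

```python
# Minimal agent-based simulation loop: agents with state (infected or not)
# interact with randomly sampled peers; an S-shaped epidemic curve emerges.
import random

random.seed(42)

class Agent:
    def __init__(self):
        self.infected = False

    def step(self, neighbors):
        # Local rule: possibly become infected if a contact is infected.
        if not self.infected and any(n.infected for n in neighbors):
            if random.random() < 0.3:
                self.infected = True

population = [Agent() for _ in range(200)]
population[0].infected = True                  # seed case

for t in range(20):
    for agent in population:
        agent.step(random.sample(population, 3))   # random mixing
    print(t, sum(a.infected for a in population))  # emergent aggregate
```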

  8. Novel nanomedicine-based MRI contrast agents for gynecological malignancies.

    Science.gov (United States)

    Mody, Vicky V; Nounou, Mohamed Ismail; Bikram, Malavosklish

    2009-08-10

    Gynecological cancers result in significant morbidity and mortality in women despite advances in treatment and diagnosis. This is due to detection of the disease in the late stages following metastatic spread in which treatment options become limited and may not result in positive outcomes. In addition, traditional contrast agents are not very effective in detecting primary metastatic tumors and cells due to a lack of specificity and sensitivity of the diagnostic tools, which limits their effectiveness. Recently, the field of nanomedicine-based contrast agents offers a great opportunity to develop highly sophisticated devices that can overcome many traditional hurdles of contrast agents including solubility, cell-specific targeting, toxicities, and immunological responses. These nanomedicine-based contrast agents including liposomes, micelles, dendrimers, multifunctional magnetic polymeric nanohybrids, fullerenes, and nanotubes represent improvements over their traditional counterparts, which can significantly advance the field of molecular imaging.

  9. An Agent-Based Modeling for Pandemic Influenza in Egypt

    CERN Document Server

    Khalil, Khaled M; Nazmy, Taymour T; Salem, Abdel-Badeeh M

    2010-01-01

    Pandemic influenza has great potential to cause large and rapid increases in deaths and serious illness. The objective of this paper is to develop an agent-based model to simulate the spread of pandemic influenza (novel H1N1) in Egypt. The proposed multi-agent model is based on the modeling of individuals' interactions in a space-time context. The proposed model involves different types of parameters such as social agent attributes, the distribution of the Egyptian population, and patterns of agents' interactions. Analysis of the modeling results leads to an understanding of the characteristics of the modeled pandemic, transmission patterns, and the conditions under which an outbreak might occur. In addition, the proposed model is used to measure the effectiveness of different control strategies for intervening in the pandemic spread.

  10. Next frontier in agent-based complex automated negotiation

    CERN Document Server

    Ito, Takayuki; Zhang, Minjie; Robu, Valentin

    2015-01-01

    This book focuses on automated negotiations based on multi-agent systems. It is intended for researchers and students in various fields involving autonomous agents and multi-agent systems, such as e-commerce tools, decision-making and negotiation support systems, and collaboration tools. The contents will help them to understand the concept of automated negotiations, negotiation protocols, negotiating agents’ strategies, and the applications of those strategies. In this book, some negotiation protocols focusing on the multiple interdependent issues in negotiations are presented, making it possible to find high-quality solutions for the complex agents’ utility functions. This book is a compilation of the extended versions of the very best papers selected from the many that were presented at the International Workshop on Agent-Based Complex Automated Negotiations.

  11. Generalization performance of graph-based semisupervised classification

    Institute of Scientific and Technical Information of China (English)

    2009-01-01

    Semi-supervised learning has been of growing interest over the past few years and many methods have been proposed. Although various algorithms are provided to implement semi-supervised learning, there are still gaps in our understanding of the dependence of generalization error on the numbers of labeled and unlabeled data. In this paper, we consider a graph-based semi-supervised classification algorithm and establish its generalization error bounds. Our results show the close relations between the generalization performance and the structural invariants of the data graph.

  12. Hydrophobicity classification of polymeric materials based on fractal dimension

    Directory of Open Access Journals (Sweden)

    Daniel Thomazini

    2008-12-01

    This study proposes a new method to obtain hydrophobicity classification (HC) in high-voltage polymer insulators. In this method, the HC was analyzed by fractal dimension (fd) and its processing time was evaluated with a view to application in mobile devices. Texture images were created by spraying solutions produced from mixtures of isopropyl alcohol and distilled water in proportions ranging from 0 to 100% volume of alcohol (%AIA). Based on these solutions, the contact angles of the drops were measured and the textures were used as patterns for fractal dimension calculations.
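
    The paper does not publish its implementation, but the standard box-counting estimate of fractal dimension for a binary texture image looks roughly like the sketch below (NumPy assumed; the thresholded random input is only a stand-in for the spray-pattern textures).

        # Box-counting fractal dimension of a binary image (sketch).
        import numpy as np

        def box_counting_dimension(binary_img):
            sizes, counts = [], []
            size = min(binary_img.shape) // 2
            while size >= 2:
                h, w = (s // size for s in binary_img.shape)
                trimmed = binary_img[:h * size, :w * size]
                # does each (size x size) box contain any foreground pixel?
                boxes = trimmed.reshape(h, size, w, size).any(axis=(1, 3))
                sizes.append(size)
                counts.append(boxes.sum())
                size //= 2
            # fd is the slope of log(count) against log(1/size)
            slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)),
                                  np.log(counts), 1)
            return slope

        texture = np.random.rand(256, 256) > 0.5   # placeholder texture
        print(box_counting_dimension(texture))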

  13. Radar Image Texture Classification based on Gabor Filter Bank

    OpenAIRE

    Mbainaibeye Jérôme; Olfa Marrakchi Charfi

    2014-01-01

    The aim of this paper is to design and develop a filter bank for the detection and classification of radar image texture with 4.6m resolution obtained by airborne Synthetic Aperture Radar. The textures of this kind of image are highly correlated and contain randomly arranged forms. The design and development of the filter bank are based on the Gabor filter. We have elaborated a set of filters applied to each texture feature, allowing its identification and enhancement in comparison with other textures.

  14. A Large Scale, High Resolution Agent-Based Insurgency Model

    Science.gov (United States)

    2013-09-30

    Recent years have seen a large growth in research aimed at modeling the intricate social-cultural climate in...a relatively small (40x40) lattice with toroidal structure, such that agent movement that extended beyond a grid edge would appear on the opposite...locations containing obstacles such as rivers.

  15. A Method for Data Classification Based on Discernibility Matrix and Discernibility Function

    Institute of Scientific and Technical Information of China (English)

    2006-01-01

    The method chosen for data classification influences the efficiency of classification. Attribute reduction based on the discernibility matrix and discernibility function in rough set theory can be used for data classification, so we put forward such a method. First, we use the discernibility matrix and discernibility function to delete superfluous attributes from the information system and obtain a necessary attribute set. Second, we delete superfluous attribute values and obtain decision rules. Finally, we classify data by means of these decision rules. Experiments show that data classification using this method is simpler in structure and can improve the efficiency of classification.
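
    To make the first step concrete, the toy sketch below builds a discernibility matrix for an invented decision table: each entry records the condition attributes that distinguish a pair of objects with different decisions, and any attribute appearing in no entry is superfluous. This is only the textbook construction, not the authors' code.

        # Discernibility-matrix sketch on an invented decision table.
        from itertools import combinations

        attrs = ["a", "b", "c"]
        table = [({"a": 0, "b": 0, "c": 1}, "yes"),
                 ({"a": 1, "b": 0, "c": 1}, "no"),
                 ({"a": 0, "b": 1, "c": 1}, "yes"),
                 ({"a": 1, "b": 1, "c": 1}, "no")]

        matrix = []
        for (u, du), (v, dv) in combinations(table, 2):
            if du != dv:            # only pairs with different decisions
                matrix.append({a for a in attrs if u[a] != v[a]})

        used = set().union(*matrix)
        core = {a for entry in matrix if len(entry) == 1 for a in entry}
        print("discernibility entries:", matrix)
        print("core attributes:", core)
        print("superfluous attributes:", set(attrs) - used)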

  16. Semi-Supervised Classification based on Gaussian Mixture Model for remote imagery

    Institute of Scientific and Technical Information of China (English)

    2010-01-01

    Semi-Supervised Classification (SSC), which makes use of both labeled and unlabeled data to determine classification borders in feature space, has great advantages in extracting classification information from massive data sets. In this paper, a novel SSC method based on the Gaussian Mixture Model (GMM) is proposed, in which each class's feature space is described by one GMM. Experiments show the proposed method can achieve high classification accuracy with a small amount of labeled data. However, to reach the same accuracy, supervised classification methods such as Support Vector Machine, Object Oriented Classification, etc. must be provided with much more labeled data.
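
    A hedged sketch of the core idea on synthetic data: fit one Gaussian Mixture Model per class on the labeled samples, then label new samples by whichever class GMM assigns them the highest likelihood (the full method also exploits the unlabeled data, which this fragment omits). scikit-learn's GaussianMixture is assumed.

        # One GMM per class; classify by maximum log-likelihood.
        import numpy as np
        from sklearn.mixture import GaussianMixture

        rng = np.random.RandomState(0)
        X0 = rng.normal(0, 1, size=(100, 2))    # labeled class-0 features
        X1 = rng.normal(3, 1, size=(100, 2))    # labeled class-1 features

        gmms = [GaussianMixture(n_components=2, random_state=0).fit(X)
                for X in (X0, X1)]

        X_test = np.vstack([rng.normal(0, 1, (10, 2)),
                            rng.normal(3, 1, (10, 2))])
        loglik = np.column_stack([g.score_samples(X_test) for g in gmms])
        print("predicted classes:", loglik.argmax(axis=1))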

  17. Task Classification Based Energy-Aware Consolidation in Clouds

    Directory of Open Access Journals (Sweden)

    HeeSeok Choi

    2016-01-01

    We consider a cloud data center, in which the service provider supplies virtual machines (VMs) on hosts or physical machines (PMs) to its subscribers for computation in an on-demand fashion. For the cloud data center, we propose a task consolidation algorithm based on task classification (i.e., computation-intensive and data-intensive) and resource utilization (e.g., CPU and RAM). Furthermore, we design a VM consolidation algorithm to balance task execution time and energy consumption without violating a predefined service level agreement (SLA). Unlike the existing research on VM consolidation or scheduling that applies no or single-threshold schemes, we focus on a double-threshold (upper and lower) scheme, which is used for VM consolidation. More specifically, when a host operates with resource utilization below the lower threshold, all the VMs on the host will be scheduled to be migrated to other hosts and then the host will be powered down, while when a host operates with resource utilization above the upper threshold, a VM will be migrated to avoid using 100% of resource utilization. Based on experimental performance evaluations with real-world traces, we prove that our task classification based energy-aware consolidation algorithm (TCEA) achieves a significant energy reduction without incurring predefined SLA violations.
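
    The double-threshold rule itself is simple; the sketch below restates it as code. The threshold values and utilization figures are illustrative, not the paper's.

        # Double-threshold consolidation decision (illustrative values).
        LOWER, UPPER = 0.2, 0.8

        def consolidation_action(utilization):
            """Decide what to do with a host, given utilization in [0, 1]."""
            if utilization < LOWER:
                return "migrate all VMs away, then power the host down"
            if utilization > UPPER:
                return "migrate one VM away to avoid full saturation"
            return "leave the host as is"

        for util in (0.1, 0.5, 0.93):
            print(f"host at {util:.0%}: {consolidation_action(util)}")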

  18. Feature selection gait-based gender classification under different circumstances

    Science.gov (United States)

    Sabir, Azhin; Al-Jawad, Naseer; Jassim, Sabah

    2014-05-01

    This paper proposes a gender classification based on human gait features and investigates the problem of two variations: clothing (wearing coats) and a carrying-bag condition, in addition to the normal gait sequence. The feature vectors in the proposed system are constructed after applying the wavelet transform. Three different sets of features are proposed in this method. The first, spatio-temporal distance, deals with the distances between different parts of the human body (such as feet, knees, hands, height and shoulders) during one gait cycle. The second and third feature sets are constructed from the approximation and non-approximation coefficients of the human body, respectively. To extract these two feature sets, we divided the human body into upper and lower parts based on the golden ratio proportion. In this paper, we have adopted a statistical method for constructing the feature vector from the above sets. The dimension of the constructed feature vector is reduced based on the Fisher score as a feature selection method to optimize its discriminating significance. Finally, k-Nearest Neighbor is applied as the classification method. Experimental results demonstrate that our approach provides a more realistic scenario and relatively better performance compared with existing approaches.
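
    The selection-then-classification pipeline can be sketched briefly: rank features by their Fisher score and feed the top-ranked ones to k-NN. The synthetic feature matrix below stands in for the paper's wavelet gait features; scikit-learn is assumed.

        # Fisher-score feature selection followed by k-NN (sketch).
        import numpy as np
        from sklearn.neighbors import KNeighborsClassifier

        def fisher_scores(X, y):
            mu = X.mean(axis=0)
            scores = np.zeros(X.shape[1])
            for j in range(X.shape[1]):
                num = den = 0.0
                for c in np.unique(y):
                    xc = X[y == c, j]
                    num += len(xc) * (xc.mean() - mu[j]) ** 2
                    den += len(xc) * xc.var()
                scores[j] = num / den
            return scores

        rng = np.random.RandomState(1)
        X = rng.normal(size=(200, 40))
        y = rng.randint(0, 2, 200)
        X[y == 1, :5] += 1.5               # make 5 features informative

        top = np.argsort(fisher_scores(X, y))[::-1][:5]
        knn = KNeighborsClassifier(n_neighbors=5)
        knn.fit(X[:150][:, top], y[:150])
        print("held-out accuracy:", knn.score(X[150:][:, top], y[150:]))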

  19. Agent-based modeling and simulation Part 3 : desktop ABMS.

    Energy Technology Data Exchange (ETDEWEB)

    Macal, C. M.; North, M. J.; Decision and Information Sciences

    2007-01-01

    Agent-based modeling and simulation (ABMS) is a new approach to modeling systems comprised of autonomous, interacting agents. ABMS promises to have far-reaching effects on the way that businesses use computers to support decision-making and researchers use electronic laboratories to support their research. Some have gone so far as to contend that ABMS 'is a third way of doing science,' in addition to traditional deductive and inductive reasoning (Axelrod 1997b). Computational advances have made possible a growing number of agent-based models across a variety of application domains. Applications range from modeling agent behavior in the stock market, supply chains, and consumer markets, to predicting the spread of epidemics, the threat of bio-warfare, and the factors responsible for the fall of ancient civilizations. This tutorial describes the theoretical and practical foundations of ABMS, identifies toolkits and methods for developing agent models, and illustrates the development of a simple agent-based model of shopper behavior using spreadsheets.

  20. Optimization based tumor classification from microarray gene expression data.

    Directory of Open Access Journals (Sweden)

    Onur Dagliyan

    BACKGROUND: An important use of data obtained from microarray measurements is the classification of tumor types with respect to genes that are either up- or down-regulated in specific cancer types. A number of algorithms have been proposed to obtain such classifications. These algorithms usually require parameter optimization to obtain accurate results depending on the type of data. Additionally, it is highly critical to find an optimal set of markers among those up- or down-regulated genes that can be clinically utilized to build assays for the diagnosis or to follow progression of specific cancer types. In this paper, we employ a mixed integer programming based classification algorithm named the hyper-box enclosure method (HBE) for the classification of some cancer types with a minimal set of predictor genes. This optimization based method, which is a user-friendly and efficient classifier, may allow clinicians to diagnose and follow progression of certain cancer types. METHODOLOGY/PRINCIPAL FINDINGS: We apply the HBE algorithm to some well-known data sets such as leukemia, prostate cancer, diffuse large B-cell lymphoma (DLBCL) and small round blue cell tumors (SRBCT) to find predictor genes that can be utilized for diagnosis and prognosis in a robust manner with high accuracy. Our approach does not require any modification or parameter optimization for each data set. Additionally, the information gain attribute evaluator, relief attribute evaluator and correlation-based feature selection methods are employed for the gene selection. The results are compared with those from other studies and the biological roles of the selected genes in the corresponding cancer types are described. CONCLUSIONS/SIGNIFICANCE: The performance of our algorithm overall was better than the other algorithms reported in the literature and classifiers found in the WEKA data-mining package. Since it does not require a parameter optimization and it performs consistently very high prediction rate on

  1. Scene classification of infrared images based on texture feature

    Science.gov (United States)

    Zhang, Xiao; Bai, Tingzhu; Shang, Fei

    2008-12-01

    Scene classification refers to assigning a physical scene to one of a set of predefined categories. Texture features provide a good approach to classifying scenes. Texture can be considered to be repeating patterns of local variation in pixel intensities, and texture analysis is important in many applications of computer image analysis for the classification or segmentation of images based on local spatial variations of intensity. Texture describes the structural information of images, so it provides data for classification beyond the spectrum. Infrared thermal imagers are now used in many different fields. Since infrared images of objects reflect their own thermal radiation, infrared images have some shortcomings: poor contrast between objects and background, blurred edges, heavy noise, and so on. Because of these shortcomings, it is difficult to extract texture features from infrared images. In this paper we have developed a texture feature-based algorithm to classify scenes in infrared images. The paper investigates texture extraction using the Gabor wavelet transform, which has an excellent capability for analyzing local frequency and orientation; Gabor wavelets are chosen for their biological relevance and technical properties. First, after introducing the Gabor wavelet transform and texture analysis methods, texture features are extracted from the infrared images by the Gabor wavelet transform, exploiting the multi-scale property of the Gabor filter. Second, we take the means and standard deviations at different scales and directions as texture parameters. The last stage is classification of the scene texture parameters with the least squares support vector machine (LS-SVM) algorithm. SVM is based on the principle of structural risk minimization (SRM). Compared with SVM, LS-SVM has overcome the shortcoming of

  2. Evaluating Water Demand Using Agent-Based Modeling

    Science.gov (United States)

    Lowry, T. S.

    2004-12-01

    The supply and demand of water resources are functions of complex, inter-related systems including hydrology, climate, demographics, economics, and policy. To assess the safety and sustainability of water resources, planners often rely on complex numerical models that relate some or all of these systems using mathematical abstractions. The accuracy of these models relies on how well the abstractions capture the true nature of the systems interactions. Typically, these abstractions are based on analyses of observations and/or experiments that account only for the statistical mean behavior of each system. This limits the approach in two important ways: 1) It cannot capture cross-system disruptive events, such as major drought, significant policy change, or terrorist attack, and 2) it cannot resolve sub-system level responses. To overcome these limitations, we are developing an agent-based water resources model that includes the systems of hydrology, climate, demographics, economics, and policy, to examine water demand during normal and extraordinary conditions. Agent-based modeling (ABM) develops functional relationships between systems by modeling the interaction between individuals (agents), who behave according to a probabilistic set of rules. ABM is a "bottom-up" modeling approach in that it defines macro-system behavior by modeling the micro-behavior of individual agents. While each agent's behavior is often simple and predictable, the aggregate behavior of all agents in each system can be complex, unpredictable, and different than behaviors observed in mean-behavior models. Furthermore, the ABM approach creates a virtual laboratory where the effects of policy changes and/or extraordinary events can be simulated. Our model, which is based on the demographics and hydrology of the Middle Rio Grande Basin in the state of New Mexico, includes agent groups of residential, agricultural, and industrial users. Each agent within each group determines its water usage

  3. Agent-Based Simulations for Project Management

    Science.gov (United States)

    White, J. Chris; Sholtes, Robert M.

    2011-01-01

    Currently, the most common approach used in project planning tools is the Critical Path Method (CPM). While this method was a great improvement over the basic Gantt chart technique being used at the time, it now suffers from three primary flaws: (1) task duration is an input, (2) productivity impacts are not considered, and (3) management corrective actions are not included. Today, computers have exceptional computational power to handle complex simulations of task execution and project management activities (e.g., dynamically changing the number of resources assigned to a task when it is behind schedule). Through research under a Department of Defense contract, the author and the ViaSim team have developed a project simulation tool that enables more realistic cost and schedule estimates by using a resource-based model that literally turns the current duration-based CPM approach "on its head." The approach represents a fundamental paradigm shift in estimating projects, managing schedules, and reducing risk through innovative predictive techniques.

  4. Web entity extraction based on entity attribute classification

    Science.gov (United States)

    Li, Chuan-Xi; Chen, Peng; Wang, Ru-Jing; Su, Ya-Ru

    2011-12-01

    Large amounts of entity data are continuously published on web pages. Extracting these entities automatically for further application is very significant. Rule-based entity extraction methods yield promising results; however, they are labor-intensive and hard to scale. This paper proposes a web entity extraction method based on entity attribute classification, which avoids manual annotation of samples. First, web pages are segmented into different blocks by the Vision-based Page Segmentation (VIPS) algorithm, and a binary LibSVM classifier is trained to retrieve the candidate blocks that contain the entity contents. Second, the candidate blocks are partitioned into candidate items, LibSVM classifiers are applied to annotate the attributes of the items, and the annotation results are then aggregated into an entity. Results show that the proposed method performs well in extracting agricultural supply and demand entities from web pages.

  5. Nanochemistry of protein-based delivery agents

    Directory of Open Access Journals (Sweden)

    Subin R.C.K. Rajendran

    2016-07-01

    The past decade has seen an increased interest in the conversion of food proteins into functional biomaterials, including their use for loading and delivery of physiologically active compounds such as nutraceuticals and pharmaceuticals. Proteins possess a competitive advantage over other platforms for the development of nanodelivery systems since they are biocompatible, amphipathic, and widely available. Proteins also have unique molecular structures and diverse functional groups that can be selectively modified to alter encapsulation and release properties. A number of physical and chemical methods have been used for preparing protein nanoformulations, each based on different underlying protein chemistry. This review focuses on the chemistry of the reorganization and/or modification of proteins into functional nanostructures for delivery, from the perspective of their preparation, functionality, stability and physiological behavior.

  6. Nanochemistry of protein-based delivery agents

    Science.gov (United States)

    Rajendran, Subin; Udenigwe, Chibuike; Yada, Rickey

    2016-07-01

    The past decade has seen an increased interest in the conversion of food proteins into functional biomaterials, including their use for loading and delivery of physiologically active compounds such as nutraceuticals and pharmaceuticals. Proteins possess a competitive advantage over other platforms for the development of nanodelivery systems since they are biocompatible, amphipathic, and widely available. Proteins also have unique molecular structures and diverse functional groups that can be selectively modified to alter encapsulation and release properties. A number of physical and chemical methods have been used for preparing protein nanoformulations, each based on different underlying protein chemistry. This review focuses on the chemistry of the reorganization and/or modification of proteins into functional nanostructures for delivery, from the perspective of their preparation, functionality, stability and physiological behavior.

  7. Soft computing based feature selection for environmental sound classification

    NARCIS (Netherlands)

    Shakoor, A.; May, T.M.; Van Schijndel, N.H.

    2010-01-01

    Environmental sound classification has a wide range of applications, like hearing aids, mobile communication devices, portable media players, and auditory protection devices. Sound classification systems typically extract features from the input sound. Using too many features increases complexity unnecessarily.

  8. ECG-based heartbeat classification for arrhythmia detection: A survey.

    Science.gov (United States)

    Luz, Eduardo José da S; Schwartz, William Robson; Cámara-Chávez, Guillermo; Menotti, David

    2016-04-01

    An electrocardiogram (ECG) measures the electric activity of the heart and has been widely used for detecting heart diseases due to its simplicity and non-invasive nature. By analyzing the electrical signal of each heartbeat, i.e., the combination of action impulse waveforms produced by different specialized cardiac tissues found in the heart, it is possible to detect some of its abnormalities. In the last decades, several works were developed to produce automatic ECG-based heartbeat classification methods. In this work, we survey the current state-of-the-art methods for automated ECG-based heartbeat classification of abnormalities by presenting the ECG signal preprocessing, the heartbeat segmentation techniques, the feature description methods and the learning algorithms used. In addition, we describe some of the databases used for evaluation of methods, as indicated by a well-known standard developed by the Association for the Advancement of Medical Instrumentation (AAMI) and described in ANSI/AAMI EC57:1998/(R)2008 (ANSI/AAMI, 2008). Finally, we discuss the limitations and drawbacks of the methods in the literature, presenting concluding remarks and future challenges, and we also propose an evaluation process workflow to guide authors in future works.

  9. Understanding Acupuncture Based on ZHENG Classification from System Perspective

    Directory of Open Access Journals (Sweden)

    Junwei Fang

    2013-01-01

    Acupuncture is an efficient therapy method that originated in ancient China; studying it on the basis of ZHENG classification is a systematic way to approach its complexity. The system perspective helps in understanding the essence of phenomena, and with the coming of the systems biology era, broader technology platforms such as omics technologies have been established for the objective study of traditional Chinese medicine (TCM). Omics technologies can dynamically determine molecular components at various levels, and thus can achieve a systematic understanding of acupuncture by uncovering the relationships among the various responding parts. After reviewing the literature on acupuncture studied by omics approaches, the following points were found. First, with the help of omics approaches, acupuncture was found to treat diseases by regulating the neuroendocrine immune (NEI) network, changes in which reflect the global effect of acupuncture. Second, the global effect of acupuncture can reflect ZHENG information at certain structural and functional levels, which might reveal the mechanism of meridian and acupoint specificity. Furthermore, based on comprehensive ZHENG classification, omics research can help us understand the action characteristics of acupoints and the molecular mechanisms of their synergistic effect.

  10. Gear Crack Level Classification Based on EMD and EDT

    Directory of Open Access Journals (Sweden)

    Haiping Li

    2015-01-01

    Gears are among the most essential parts of rotating machinery, and cracking is one of the damage modes that occurs most frequently in gears. This paper therefore deals with the problem of classifying different crack levels. The proposed method is based mainly on empirical mode decomposition (EMD) and the Euclidean distance technique (EDT). First, the vibration signal acquired by an accelerometer is processed by EMD and intrinsic mode functions (IMFs) are obtained. Then, a correlation coefficient based method is proposed to select the sensitive IMFs, which contain the main gear fault information, and the energy of these IMFs is chosen as the fault feature after comparison with kurtosis and skewness. Finally, the Euclidean distances between the test sample and the trained samples of the four classes are calculated, and on this basis the crack level of the test sample is classified. The proposed approach is tested and validated through a gearbox experiment, in which four crack levels and three kinds of loads are used. The results show that the proposed method has a high accuracy rate in classifying different crack levels and may adapt to different conditions.
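
    Assuming the IMFs have already been computed by EMD, the remaining steps read almost directly as code: keep the IMFs most correlated with the raw signal, take their energies as the feature vector, and assign the test sample to the class whose trained mean feature is closest in Euclidean distance. The signals and class means below are synthetic placeholders, not the gearbox data.

        # Post-EMD steps: IMF selection, energy features, distance rule.
        import numpy as np

        def select_imfs(signal, imfs, keep=3):
            corr = [abs(np.corrcoef(signal, imf)[0, 1]) for imf in imfs]
            return [imfs[i] for i in np.argsort(corr)[::-1][:keep]]

        def energy_features(imfs):
            return np.array([np.sum(imf ** 2) for imf in imfs])

        rng = np.random.RandomState(0)
        signal = rng.normal(size=1024)
        imfs = [rng.normal(size=1024) for _ in range(6)]  # stand-in IMFs

        feat = energy_features(select_imfs(signal, imfs))
        class_means = {"no crack":  np.array([900.0, 950.0, 1000.0]),
                       "25% crack": np.array([1200.0, 1100.0, 1050.0])}
        pred = min(class_means,
                   key=lambda c: np.linalg.norm(feat - class_means[c]))
        print("predicted crack level:", pred)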

  11. Robust Pedestrian Classification Based on Hierarchical Kernel Sparse Representation

    Directory of Open Access Journals (Sweden)

    Rui Sun

    2016-08-01

    Vision-based pedestrian detection has become an active topic in computer vision and autonomous vehicles. It aims at detecting pedestrians appearing ahead of the vehicle using a camera, so that autonomous vehicles can assess the danger and take action. Due to varied illumination and appearance, complex backgrounds and occlusion, pedestrian detection in outdoor environments is a difficult problem. In this paper, we propose a novel hierarchical feature extraction and weighted kernel sparse representation model for pedestrian classification. Initially, hierarchical feature extraction based on a CENTRIST descriptor is used to capture discriminative structures, and a max pooling operation is used to enhance invariance to varying appearance. Then, a kernel sparse representation model is proposed to fully exploit the discriminative information embedded in the hierarchical local features, with a Gaussian weight function as the measure for effectively handling occlusion in pedestrian images. Extensive experiments are conducted on benchmark databases, including INRIA, Daimler, an artificially generated dataset and a real occluded dataset, demonstrating the more robust performance of the proposed method compared to state-of-the-art pedestrian classification methods.

  12. 78 FR 58153 - Prevailing Rate Systems; North American Industry Classification System Based Federal Wage System...

    Science.gov (United States)

    2013-09-23

    ... RIN 3206-AM78 Prevailing Rate Systems; North American Industry Classification System Based Federal... Industry Classification System (NAICS) codes currently used in Federal Wage System wage survey industry... 2007 North American Industry Classification System (NAICS) codes used in Federal Wage System (FWS)...

  13. 78 FR 18252 - Prevailing Rate Systems; North American Industry Classification System Based Federal Wage System...

    Science.gov (United States)

    2013-03-26

    ... Industry Classification System Based Federal Wage System Wage Surveys AGENCY: U. S. Office of Personnel... is issuing a proposed rule that would update the 2007 North American Industry Classification System... North American Industry Classification System (NAICS) codes used in Federal Wage System (FWS)...

  14. An Agent Communication Framework Based on XML and SOAP Technique

    Institute of Scientific and Technical Information of China (English)

    李晓瑜

    2009-01-01

    This thesis introduces XML technology and SOAP technology, presents an agent communication framework based on XML and SOAP techniques, and analyzes its principle, architecture, function and benefits. At the end, it draws on KQML communication primitive languages.

  15. MATT: Multi Agents Testing Tool Based Nets within Nets

    Directory of Open Access Journals (Sweden)

    Sara Kerraoui

    2016-12-01

    As part of this effort, we propose a model-based testing approach for multi-agent systems built on a model called Reference Nets, for which a tool aiming to provide a uniform and automated approach is developed. The feasibility and advantages of the proposed approach are shown through a short case study.

  16. Agent-based analysis of organizations : formalization and simulation

    NARCIS (Netherlands)

    Dignum, M.V.; Tick, C.

    2008-01-01

    Organizational effectiveness depends on many factors, including individual excellence, efficient structures, effective planning and capability to understand and match context requirements. We propose a way to model organizational performance based on a combination of formal models and agent-based simulation.

  17. Agent-based modelling of socio-technical systems

    CERN Document Server

    van Dam, Koen H; Lukszo, Zofia

    2012-01-01

    Here is a practical introduction to agent-based modelling of socio-technical systems, based on methodology developed at TU Delft, which has been deployed in a number of case studies. Offers theory, methods and practical steps for creating real-world models.

  18. A role based coordination model in agent systems

    Institute of Scientific and Technical Information of China (English)

    ZHANG Ya-ying; YOU Jin-yuan

    2005-01-01

    Coordination technology addresses the construction of open, flexible systems from active and independent software agents in concurrent and distributed systems. In most open distributed applications, multiple agents need interaction and communication to achieve their overall goal. Coordination technologies for the Internet are typically concerned with enabling interaction among agents and helping them cooperate with each other. At the same time, access control should also be considered to constrain interaction so as to make it harmless; access control should be regarded as the security counterpart of coordination. At present, the combination of coordination and access control remains an open problem. Thus, we propose a role based coordination model with policy enforcement in agent application systems. In this model, coordination is combined with access control so as to fully characterize the interactions in agent systems. A set of agents interacting with each other for a common global system task constitutes a coordination group. Role based access control is applied in this model to prevent unauthorized accesses. Coordination policy is enforced in a distributed manner so that the model can be applied to open distributed systems such as the Internet. An Internet online auction system is presented as a case study to illustrate the proposed coordination model, and finally a performance analysis of the model is presented.

  19. Web Crawler Based on Mobile Agent and Java Aglets

    Directory of Open Access Journals (Sweden)

    Md. Abu Kausar

    2013-09-01

    With the huge growth of the Internet, many web pages are available online. Search engines use web crawlers to collect these web pages from the World Wide Web for the purpose of storage and indexing. Basically, a web crawler is a program that finds information on the World Wide Web in a systematic and automated manner. The proposed approach uses mobile agents to crawl the pages, further reducing network load. A mobile agent is not bound to the system in which it starts execution; it has the unique ability to transfer itself from one system in a network to another. The main advantage of a web crawler based on mobile agents is that the analysis part of the crawling process is done locally rather than on the remote side. This drastically reduces network load and traffic, which can improve the performance and efficiency of the whole crawling process.

  20. AGENT based structural static and dynamic collaborative optimization

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    A static and dynamic collaborative optimization mode for complex machine systems and its ontology project relationship are put forward, on which an agent-based structural static and dynamic collaborative optimization system is constructed as two agent colonies: an optimization agent colony and a finite element analysis colony. A two-level solving strategy, as well as the necessity and possibility of handling the finite element analysis model in multi-level mode, is discussed. Furthermore, the cooperation of all FEA agents for the optimal design of complicated structures is studied in detail. Structural static and dynamic collaborative optimization of hydraulic excavator working equipment is taken as an example to show that the system is reliable.

  1. Study on the agile supply chain management based on agent

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    The most important task of agile supply chain management (ASCM) is to reconfigure a supply chain based on customers' requirements. Without sophisticated cooperation and dynamic formation in an agile supply chain, mass customization, rapid response and high-quality services cannot be achieved. Because of its great potential for supporting cooperation in supply chain management, agent technology can carry out cooperative work through interoperation across networked humans, organizations and machines at an abstract level in a computational system. A major challenge in building such a system is to coordinate the behavior of an individual agent or a group of agents to achieve the individual and shared goals of the participants. In this paper, agent technology is used to support the modeling and coordination of supply chain management.

  2. Bearing Fault Classification Based on Conditional Random Field

    Directory of Open Access Journals (Sweden)

    Guofeng Wang

    2013-01-01

    Condition monitoring of rolling element bearings is paramount for predicting the lifetime and performing effective maintenance of mechanical equipment. To overcome the drawbacks of the hidden Markov model (HMM) and improve diagnosis accuracy, a classifier based on the conditional random field (CRF) model is proposed. In this model, the feature vector sequences and the fault categories are linked by an undirected graphical model in which their relationship is represented by a global conditional probability distribution. In comparison with the HMM, the main advantage of the CRF model is that it can depict the temporal dynamic information between the observation sequences and state sequences without assuming the independence of the input feature vectors. Therefore, the interrelationship between adjacent observation vectors can also be depicted and integrated into the model, which makes the classifier more robust and accurate than the HMM. To evaluate the effectiveness of the proposed method, four kinds of bearing vibration signals, corresponding to normal, inner race pit, outer race pit and roller pit conditions, are collected from a test rig. CRF and HMM models are then built to perform fault classification, taking the sub-band energy features of wavelet packet decomposition (WPD) as the observation sequences. Moreover, the K-fold cross-validation method is adopted to improve the evaluation accuracy of the classifier. The analysis and comparison under different fold numbers show that the classification accuracy of the CRF model is higher than that of the HMM. This method sheds new light on the accurate classification of bearing faults.
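
    The observation sequences fed to the CRF/HMM are sub-band energies from wavelet packet decomposition; a minimal version of that feature extraction, assuming the PyWavelets package and a synthetic vibration frame, might look like this (the CRF training itself is omitted):

        # WPD sub-band energy features for one vibration frame (sketch).
        import numpy as np
        import pywt

        def wpd_energy_features(frame, wavelet="db4", level=3):
            wp = pywt.WaveletPacket(data=frame, wavelet=wavelet,
                                    maxlevel=level)
            energies = np.array([np.sum(node.data ** 2)
                                 for node in wp.get_level(level, "natural")])
            return energies / energies.sum()   # normalized energies

        rng = np.random.RandomState(0)
        frame = rng.normal(size=2048)          # stand-in vibration frame
        print(wpd_energy_features(frame))      # 2**3 = 8 features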

  3. Comparison Effectiveness of Pixel Based Classification and Object Based Classification Using High Resolution Image In Floristic Composition Mapping (Study Case: Gunung Tidar Magelang City)

    Science.gov (United States)

    Ardha Aryaguna, Prama; Danoedoro, Projo

    2016-11-01

    Developments in remote sensing analysis have gone hand in hand with developments in technology, especially in sensors and platforms. Many images now have high spatial and radiometric resolution and therefore carry much more information. Vegetation analyses such as floristic composition mapping benefit greatly from these developments. Floristic composition can be interpreted using several methods, such as pixel-based classification and object-based classification. The problem with pixel-based methods on high-spatial-resolution images is the salt-and-pepper effect that appears in the classification results. The purpose of this research is to compare the effectiveness of pixel-based classification and object-based classification for vegetation composition mapping on the high-resolution WorldView-2 image. The results show that pixel-based classification using a majority filter with a 5×5 kernel window gives the highest accuracy among the compared classifications. The highest overall accuracy, 73.32%, is obtained from the WorldView-2 image radiometrically corrected to surface-reflectance level, but for per-class accuracy the object-based method is the best among the compared methods. From the standpoint of effectiveness, pixel-based classification is more effective than object-based classification for vegetation composition mapping in the Tidar forest.

  4. Kernel-based machine learning techniques for infrasound signal classification

    Science.gov (United States)

    Tuma, Matthias; Igel, Christian; Mialle, Pierrick

    2014-05-01

    Infrasound monitoring is one of four remote sensing technologies continuously employed by the CTBTO Preparatory Commission. The CTBTO's infrasound network is designed to monitor the Earth for potential evidence of atmospheric or shallow underground nuclear explosions. Upon completion, it will comprise 60 infrasound array stations distributed around the globe, of which 47 were certified in January 2014. Three stages can be identified in CTBTO infrasound data processing: automated processing at the level of single array stations, automated processing at the level of the overall global network, and interactive review by human analysts. At station level, the cross correlation-based PMCC algorithm is used for initial detection of coherent wavefronts. It produces estimates for trace velocity and azimuth of incoming wavefronts, as well as other descriptive features characterizing a signal. Detected arrivals are then categorized into potentially treaty-relevant versus noise-type signals by a rule-based expert system. This corresponds to a binary classification task at the level of station processing. In addition, incoming signals may be grouped according to their travel path in the atmosphere. The present work investigates automatic classification of infrasound arrivals by kernel-based pattern recognition methods. It aims to explore the potential of state-of-the-art machine learning methods vis-a-vis the current rule-based and task-tailored expert system. To this purpose, we first address the compilation of a representative, labeled reference benchmark dataset as a prerequisite for both classifier training and evaluation. Data representation is based on features extracted by the CTBTO's PMCC algorithm. As classifiers, we employ support vector machines (SVMs) in a supervised learning setting. Different SVM kernel functions are used and adapted through different hyperparameter optimization routines. The resulting performance is compared to several baseline classifiers. All
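
    As a hedged illustration of the kernel-classifier setup described above, the sketch below trains an RBF-kernel SVM with cross-validated grid search over its hyperparameters; the random features stand in for PMCC-derived descriptors such as azimuth and trace velocity, and scikit-learn is assumed.

        # RBF-SVM with hyperparameter grid search (toy stand-in data).
        import numpy as np
        from sklearn.model_selection import GridSearchCV
        from sklearn.svm import SVC

        rng = np.random.RandomState(0)
        X = rng.normal(size=(300, 6))          # stand-in PMCC features
        y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)   # signal vs. noise

        grid = GridSearchCV(SVC(kernel="rbf"),
                            {"C": [0.1, 1, 10], "gamma": [0.01, 0.1, 1]},
                            cv=5)
        grid.fit(X, y)
        print(grid.best_params_, grid.best_score_)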

  5. Macromolecular and dendrimer-based magnetic resonance contrast agents

    Energy Technology Data Exchange (ETDEWEB)

    Bumb, Ambika; Brechbiel, Martin W. (Radiation Oncology Branch, National Cancer Inst., National Inst. of Health, Bethesda, MD (United States)), e-mail: pchoyke@mail.nih.gov; Choyke, Peter (Molecular Imaging Program, National Cancer Inst., National Inst. of Health, Bethesda, MD (United States))

    2010-09-15

    Magnetic resonance imaging (MRI) is a powerful imaging modality that can provide an assessment of function or molecular expression in tandem with anatomic detail. Over the last 20-25 years, a number of gadolinium-based MR contrast agents have been developed to enhance signal by altering proton relaxation properties. This review explores a range of these agents, from small molecule chelates, such as Gd-DTPA and Gd-DOTA, to macromolecular structures composed of albumin, polylysine, polysaccharides (dextran, inulin, starch), poly(ethylene glycol), copolymers of cystamine and cystine with Gd-DTPA, and various dendritic structures based on polyamidoamine and polylysine (Gadomers). The synthesis, structure, biodistribution, and targeting of dendrimer-based MR contrast agents are also discussed.

  6. Hepatic CT Image Query Based on Threshold-based Classification Scheme with Gabor Features

    Institute of Scientific and Technical Information of China (English)

    JIANG Li-jun; LUO Yong-zing; ZHAO Jun; ZHUANG Tian-ge

    2008-01-01

    Hepatic computed tomography (CT) images were analyzed with Gabor functions. A threshold-based classification scheme using Gabor features was then proposed and applied to the retrieval of hepatic CT images. In our experiments, a batch of hepatic CT images containing several types of CT findings was used, and the scheme was compared with Zhao's image classification scheme, a support vector machine (SVM) scheme and a threshold-based scheme.

  7. Highly comparative, feature-based time-series classification

    CERN Document Server

    Fulcher, Ben D

    2014-01-01

    A highly comparative, feature-based approach to time series classification is introduced that uses an extensive database of algorithms to extract thousands of interpretable features from time series. These features are derived from across the scientific time-series analysis literature, and include summaries of time series in terms of their correlation structure, distribution, entropy, stationarity, scaling properties, and fits to a range of time-series models. After computing thousands of features for each time series in a training set, those that are most informative of the class structure are selected using greedy forward feature selection with a linear classifier. The resulting feature-based classifiers automatically learn the differences between classes using a reduced number of time-series properties, and circumvent the need to calculate distances between time series. Representing time series in this way results in orders of magnitude of dimensionality reduction, allowing the method to perform well on ve...
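
    On a tiny scale, the idea reduces to: represent each series by global features, then greedily add whichever feature most improves a linear classifier. The four features and synthetic series below are our own minimal stand-ins for the thousands of literature-derived features the paper uses.

        # Feature-based time-series classification with greedy selection.
        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import cross_val_score

        def features(ts):
            return np.array([ts.mean(), ts.std(),
                             np.corrcoef(ts[:-1], ts[1:])[0, 1],  # lag-1 ac
                             np.abs(np.diff(ts)).mean()])         # roughness

        rng = np.random.RandomState(0)
        series = ([rng.normal(size=200).cumsum() for _ in range(50)] +
                  [rng.normal(size=200) for _ in range(50)])
        X = np.array([features(s) for s in series])
        y = np.array([0] * 50 + [1] * 50)

        chosen = []
        for _ in range(2):                 # two rounds of forward selection
            best = max((f for f in range(X.shape[1]) if f not in chosen),
                       key=lambda f: cross_val_score(
                           LogisticRegression(max_iter=1000),
                           X[:, chosen + [f]], y, cv=5).mean())
            chosen.append(best)
        print("selected feature indices:", chosen)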

  8. Credal Classification based on AODE and compression coefficients

    CERN Document Server

    Corani, Giorgio

    2012-01-01

    Bayesian model averaging (BMA) is an approach to average over alternative models; yet, it usually gets excessively concentrated around the single most probable model, therefore achieving only sub-optimal classification performance. The compression-based approach (Boulle, 2007) overcomes this problem, averaging over the different models by applying a logarithmic smoothing over the models' posterior probabilities. This approach has shown excellent performances when applied to ensembles of naive Bayes classifiers. AODE is another ensemble of models with high performance (Webb, 2005), based on a collection of non-naive classifiers (called SPODE) whose probabilistic predictions are aggregated by simple arithmetic mean. Aggregating the SPODEs via BMA rather than by arithmetic mean deteriorates the performance; instead, we aggregate the SPODEs via the compression coefficients and we show that the resulting classifier obtains a slight but consistent improvement over AODE. However, an important issue in any Bayesian e...

  9. Automated glioblastoma segmentation based on a multiparametric structured unsupervised classification.

    Science.gov (United States)

    Juan-Albarracín, Javier; Fuster-Garcia, Elies; Manjón, José V; Robles, Montserrat; Aparici, F; Martí-Bonmatí, L; García-Gómez, Juan M

    2015-01-01

    Automatic brain tumour segmentation has become a key component for the future of brain tumour treatment. Currently, most brain tumour segmentation approaches arise from the supervised learning standpoint, which requires a labelled training dataset from which to infer the models of the classes. The performance of these models is directly determined by the size and quality of the training corpus, whose retrieval becomes a tedious and time-consuming task. On the other hand, unsupervised approaches avoid these limitations but often do not reach results comparable to those of the supervised methods. In this sense, we propose an automated unsupervised method for brain tumour segmentation based on anatomical Magnetic Resonance (MR) images. Four unsupervised classification algorithms, grouped by their structured or non-structured condition, were evaluated within our pipeline. Considering the non-structured algorithms, we evaluated K-means, Fuzzy K-means and the Gaussian Mixture Model (GMM), whereas as structured classification algorithms we evaluated the Gaussian Hidden Markov Random Field (GHMRF). An automated postprocess based on a statistical approach supported by tissue probability maps is proposed to automatically identify the tumour classes after the segmentations. We evaluated our brain tumour segmentation method with the public BRAin Tumor Segmentation (BRATS) 2013 Test and Leaderboard datasets. Our approach based on the GMM model improves the results obtained by most of the supervised methods evaluated with the Leaderboard set and reaches the second position in the ranking. Our variant based on the GHMRF achieves the first position in the Test ranking of the unsupervised approaches and the seventh position in the general Test ranking, which confirms the method as a viable alternative for brain tumour segmentation.

  10. Automated glioblastoma segmentation based on a multiparametric structured unsupervised classification.

    Directory of Open Access Journals (Sweden)

    Javier Juan-Albarracín

    Automatic brain tumour segmentation has become a key component for the future of brain tumour treatment. Currently, most brain tumour segmentation approaches arise from the supervised learning standpoint, which requires a labelled training dataset from which to infer the models of the classes. The performance of these models is directly determined by the size and quality of the training corpus, whose retrieval becomes a tedious and time-consuming task. On the other hand, unsupervised approaches avoid these limitations but often do not reach results comparable to those of the supervised methods. In this sense, we propose an automated unsupervised method for brain tumour segmentation based on anatomical Magnetic Resonance (MR) images. Four unsupervised classification algorithms, grouped by their structured or non-structured condition, were evaluated within our pipeline. Considering the non-structured algorithms, we evaluated K-means, Fuzzy K-means and the Gaussian Mixture Model (GMM), whereas as structured classification algorithms we evaluated the Gaussian Hidden Markov Random Field (GHMRF). An automated postprocess based on a statistical approach supported by tissue probability maps is proposed to automatically identify the tumour classes after the segmentations. We evaluated our brain tumour segmentation method with the public BRAin Tumor Segmentation (BRATS) 2013 Test and Leaderboard datasets. Our approach based on the GMM model improves the results obtained by most of the supervised methods evaluated with the Leaderboard set and reaches the second position in the ranking. Our variant based on the GHMRF achieves the first position in the Test ranking of the unsupervised approaches and the seventh position in the general Test ranking, which confirms the method as a viable alternative for brain tumour segmentation.

  11. Smart Agent Learning based Hotel Search System- Android Environment

    Directory of Open Access Journals (Sweden)

    Wayne Lawrence

    2012-08-01

    The process of finding the best hotel in a central location is time-consuming and overwhelming, involves information overload, and in some cases poses a security risk to the client. Over time, with competition in the market among travel agents and hotels, the process of hotel search and booking has improved with advances in technology. Various web sites allow a user to select a destination from a pull-down list along with several categories to suit one's preference. Some of the more advanced web sites allow for a search of the destination via a map, for example hotelguidge.com and jamaica.hotels.hu. Recently, a good amount of work has been carried out on the use of intelligent agents for hotel search on J2ME-based mobile handsets, which still has some weaknesses. The proposed system uses smart software agents that overcome the weaknesses of the previous system by collaborating among themselves, searching Google Maps based on criteria selected by the user, and returning results to the client that are precise and best suit the user's requirements. In addition, the agents possess a learning capability for hotel search based on past search experience. Hotel booking involving cryptography has not been incorporated in this paper and has been published elsewhere. The system is facilitated on an Android 2.2-enabled mobile phone using the JADE-LEAP agent development kit.

  12. A Fuzzy Similarity Based Concept Mining Model for Text Classification

    Directory of Open Access Journals (Sweden)

    Shalini Puri

    2011-11-01

    Text classification is a challenging and very active field with great importance in text categorization applications. A lot of research work has been done in this field, but there is a need to categorize a collection of text documents into mutually exclusive categories by extracting the concepts or features using a supervised learning paradigm and different classification algorithms. In this paper, a new Fuzzy Similarity Based Concept Mining Model (FSCMM) is proposed to classify a set of text documents into pre-defined Category Groups (CG) by training and preparing them at the sentence, document and integrated corpora levels, along with feature reduction and ambiguity removal at each level to achieve high system performance. A Fuzzy Feature Category Similarity Analyzer (FFCSA) is used to analyze each extracted feature of the Integrated Corpora Feature Vector (ICFV) with the corresponding categories or classes. The model uses a Support Vector Machine Classifier (SVMC) to classify the training data patterns correctly into two groups, i.e., +1 and -1, thereby producing accurate and correct results. The proposed model works efficiently and effectively, with strong performance and high-accuracy results.

  13. Radar Image Texture Classification based on Gabor Filter Bank

    Directory of Open Access Journals (Sweden)

    Mbainaibeye Jérôme

    2014-01-01

    The aim of this paper is to design and develop a filter bank for the detection and classification of radar image texture with 4.6m resolution obtained by airborne Synthetic Aperture Radar. The textures of this kind of image are highly correlated and contain randomly arranged forms. The design and development of the filter bank are based on the Gabor filter. We have elaborated a set of filters applied to each texture feature, allowing its identification and enhancement in comparison with other textures. The filter bank we have elaborated is represented by a combination of different texture filters; after processing, the selected filter bank is the one that allows the identification of all the textures of an image with a significant identification rate. The developed filter bank is applied to a radar image and the results are compared with those obtained using filter banks derived from generalized Gaussian models (GGM). We have shown that the Gabor filter bank developed in this work gives a higher classification rate than the generalized Gaussian model. The main contribution of this work is the generation of filter banks able to give an optimal filter bank for a given texture, and in particular for radar image textures.
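
    A Gabor filter bank of the general kind described here is easy to sketch with OpenCV: kernels at several scales and orientations are convolved with the image, and simple statistics of the responses serve as texture features. The parameters below are illustrative, not the paper's tuned bank.

        # Gabor filter bank texture features (illustrative parameters).
        import cv2
        import numpy as np

        def gabor_bank(ksize=31, sigmas=(2.0, 4.0), n_thetas=4, lambd=10.0):
            kernels = []
            for sigma in sigmas:
                for k in range(n_thetas):
                    theta = k * np.pi / n_thetas
                    # args: ksize, sigma, theta, wavelength, gamma, psi
                    kernels.append(cv2.getGaborKernel(
                        (ksize, ksize), sigma, theta, lambd, 0.5, 0))
            return kernels

        img = np.random.rand(128, 128).astype(np.float32)  # placeholder
        feats = [cv2.filter2D(img, -1, k).mean() for k in gabor_bank()]
        print(len(feats), "texture features per image")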

  14. Neighborhood Hypergraph Based Classification Algorithm for Incomplete Information System

    Directory of Open Access Journals (Sweden)

    Feng Hu

    2015-01-01

    The problem of classification in incomplete information systems is a hot issue in intelligent information processing, and the hypergraph is a new intelligent method for machine learning. However, it is hard to process an incomplete information system with the traditional hypergraph, for two reasons: (1) the hyperedges are generated randomly in the traditional hypergraph model; (2) the existing methods are unsuitable for dealing with incomplete information systems because of their missing values. In this paper, we propose a novel classification algorithm for incomplete information systems based on the hypergraph model and rough set theory. First, we initialize the hypergraph. Second, we classify the training set by a neighborhood hypergraph. Third, under the guidance of rough set theory, we replace the poor hyperedges. After that, we can obtain a good classifier. The proposed approach is tested on 15 data sets from the UCI machine learning repository. Furthermore, it is compared with some existing methods, such as C4.5, SVM, NaiveBayes, and KNN. The experimental results show that the proposed algorithm has better performance in terms of Precision, Recall, AUC, and F-measure.

  15. Classification of knee arthropathy with accelerometer-based vibroarthrography.

    Science.gov (United States)

    Moreira, Dinis; Silva, Joana; Correia, Miguel V; Massada, Marta

    2016-01-01

    One of the most common knee joint disorders is osteoarthritis, which results from the progressive degeneration of cartilage and subchondral bone over time and essentially affects elderly adults. Current evaluation techniques are either complex, expensive or invasive, or simply fail to detect the small and progressive changes that occur within the knee. Vibroarthrography has appeared as a new solution, in which the mechanical vibratory signals arising from the knee are recorded using only an accelerometer and subsequently analyzed, enabling differentiation between a healthy and an arthritic joint. In this study, a vibration-based classification system was created using a dataset with 92 healthy and 120 arthritic segments of knee joint signals collected from 19 healthy and 20 arthritic volunteers, evaluated with k-nearest neighbors and support vector machine classifiers. The best classification was obtained using the k-nearest neighbors classifier with only 6 time-frequency features, with an overall accuracy of 89.8% and a precision, recall and f-measure of 88.3%, 92.4% and 90.1%, respectively. These preliminary results show that vibroarthrography can be a promising, non-invasive and low-cost tool that could be used for screening purposes. Despite these encouraging results, several upgrades to the data collection process and analysis can be further implemented.

  16. Product Image Classification Based on Fusion Features

    Institute of Scientific and Technical Information of China (English)

    YANG Xiao-hui; LIU Jing-jing; YANG Li-jun

    2015-01-01

    Two key challenges for a product image classification system are classification precision and classification time. In some categories, the classification precision of the latest techniques is still low. In this paper, we propose a local texture descriptor termed the fan refined local binary pattern, which captures more detailed information by integrating the spatial distribution into the local binary pattern feature. We compare our approach with different methods on a subset of product images from Amazon/eBay and parts of PI100, and experimental results demonstrate that our proposed approach is superior to existing methods. The highest classification precision is increased by 21% and the average classification time is reduced by 2/3.

  17. Hyperspectral image classification based on spatial and spectral features and sparse representation

    Institute of Scientific and Technical Information of China (English)

    Yang Jing-Hui; Wang Li-Guo; Qian Jin-Xi

    2014-01-01

    To address the low classification accuracy and low utilization of spatial information in traditional hyperspectral image classification methods, we propose a new hyperspectral image classification method based on Gabor spatial texture features, nonparametric weighted spectral features and the sparse representation classification method (Gabor–NWSF and SRC), abbreviated GNWSF–SRC. The proposed GNWSF–SRC method first combines the Gabor spatial features and nonparametric weighted spectral features to describe the hyperspectral image, and then applies the sparse representation method; the classification is finally obtained by analyzing the reconstruction error. We use the proposed method to process two typical hyperspectral data sets with different percentages of training samples. Theoretical analysis and simulation demonstrate that the proposed method improves the classification accuracy and Kappa coefficient compared with traditional classification methods and achieves better classification performance.
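
    The decision rule at the end of such a pipeline can be sketched in a simplified form: represent the test sample with each class's training dictionary and choose the class with the smallest reconstruction error. (True sparse representation classification solves an l1-regularized coding problem; plain least squares is substituted here to keep the sketch short, and the data are synthetic.)

        # Reconstruction-error classification, least-squares variant.
        import numpy as np

        rng = np.random.RandomState(0)
        # columns of each dictionary are training samples of that class
        dicts = {c: rng.normal(c, 1.0, size=(30, 20)) for c in (0, 3)}

        def classify(sample):
            residuals = {}
            for c, D in dicts.items():
                coef, *_ = np.linalg.lstsq(D, sample, rcond=None)
                residuals[c] = np.linalg.norm(sample - D @ coef)
            return min(residuals, key=residuals.get)

        test = rng.normal(3, 1.0, size=30)     # resembles class "3"
        print("predicted class:", classify(test))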

  18. Return Migration After Brain Drain: An Agent Based Simulation Approach

    CERN Document Server

    Biondo, A E; Rapisarda, A

    2012-01-01

    The brain drain phenomenon is particularly heterogeneous and is characterized by peculiar specifications. It influences the economic fundamentals of both the country of origin and the host one in terms of human capital accumulation. Here, the brain drain is considered from a microeconomic perspective: more precisely, we focus on the individual's rational decision to return, relating it to the social capital owned by the worker. The presented model, restricted to the case of academic personnel, compares utility levels to explain the agent's migration conduct and to simulate several scenarios with a NetLogo agent-based model. In particular, we developed a simulation framework based on two fundamental individual features, i.e. risk aversion and initial expectation, which characterize the dynamics of different agents according to the random evolution of their personal social networks. Our main result is that, according to the value of risk aversion and initial expectation, the probability of return migration depends on...

  19. A Method of Soil Salinization Information Extraction with SVM Classification Based on ICA and Texture Features

    Institute of Scientific and Technical Information of China (English)

    ZHANG Fei; TASHPOLAT Tiyip; KUNG Hsiang-te; DING Jian-li; MAMAT Sawut; VERNER Johnson; HAN Gui-hong; GUI Dong-wei

    2011-01-01

    Salt-affected soil classification using remotely sensed images is one of the most common applications in remote sensing, and many algorithms have been developed and applied for this purpose in the literature. This study takes the Delta Oasis of the Weigan and Kuqa Rivers as a study area and discusses the prediction of soil salinization from ETM+ Landsat data. It reports a Support Vector Machine (SVM) classification method based on Independent Component Analysis (ICA) and texture features. The paper introduces the fundamental theory of the SVM algorithm and ICA, and then incorporates ICA and texture features. The classification result is compared qualitatively and quantitatively with ICA-SVM classification, single-data-source SVM classification, maximum likelihood classification (MLC), and neural network classification. The results show that this method effectively solves the problems of low accuracy and fragmented classification results in single-data-source classification, and that it scales well to higher-dimensional inputs. The overall accuracy is 98.64%, an increase of 10.2% over maximum likelihood classification and of 12.94% over neural network classification, which is a good result. Therefore, the classification method based on SVM incorporating ICA and texture features is well suited to RS image classification and to monitoring of soil salinization.
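
    A hedged sketch of the ICA-plus-texture idea: independent components extracted from the spectral bands are concatenated with texture features before an RBF-SVM. The input arrays, component count, and SVM settings are placeholder assumptions; the study's ETM+ preprocessing is not reproduced:

        import numpy as np
        from sklearn.decomposition import FastICA
        from sklearn.svm import SVC
        from sklearn.preprocessing import StandardScaler
        from sklearn.pipeline import make_pipeline

        bands = np.load("etm_bands.npy")        # (n_pixels, n_bands) reflectance values
        texture = np.load("texture_feats.npy")  # (n_pixels, n_texture), e.g. GLCM measures
        labels = np.load("labels.npy")          # salinization class per training pixel

        ica = FastICA(n_components=4, random_state=0)
        ic = ica.fit_transform(bands)           # independent components per pixel
        X = np.hstack([ic, texture])            # fuse ICA and texture features

        clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10, gamma="scale"))
        clf.fit(X, labels)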

  20. Radiological classification of renal angiomyolipomas based on 127 tumors

    Directory of Open Access Journals (Sweden)

    Prando Adilson

    2003-01-01

    Full Text Available PURPOSE: To demonstrate the radiological findings of 127 angiomyolipomas (AMLs) and propose a classification based on the radiological evidence of fat. MATERIALS AND METHODS: The imaging findings of 85 consecutive patients with AMLs (isolated, n = 73; multiple without tuberous sclerosis (TS), n = 4; multiple with TS, n = 8) were retrospectively reviewed. Eighteen AMLs (14%) presented with hemorrhage. All patients underwent dedicated helical CT or magnetic resonance studies. All hemorrhagic and non-hemorrhagic lesions were grouped together, since our objective was to analyze the presence of detectable fat. Of the 85 patients, 53 were monitored and 32 were treated surgically due to a large perirenal component (n = 13), hemorrhage (n = 11), or the impossibility of adequate preoperative characterization (n = 8). There was no case of renal cell carcinoma (RCC) with a fat component in this group of patients. RESULTS: Based on the presence and amount of detectable fat within the lesion, AMLs were classified into 4 distinct radiological patterns: Pattern I, predominantly fatty (usually less than 2 cm in diameter and intrarenal): 54%; Pattern II, partially fatty (intrarenal or exophytic): 29%; Pattern III, minimally fatty (mostly exophytic and perirenal): 11%; and Pattern IV, without fat (mostly exophytic and perirenal): 6%. CONCLUSIONS: This proposed classification might be useful for understanding the imaging manifestations of AMLs and their differential diagnosis, and for determining when further radiological evaluation is necessary. Small (< 1.5 cm), pattern-I AMLs tend to be intrarenal, homogeneous and predominantly fatty. As they grow they tend to become partially or completely exophytic and heterogeneous (patterns II and III). The rare pattern-IV AMLs, however, can be small or large, intrarenal or exophytic, but always appear as a homogeneous, hyperdense mass. Since no renal cell carcinoma was found in our series, from an evidence-based practice, all renal mass with detectable

  1. TOWARDS AN ONTOLOGY-BASED MULTI-AGENT MEDICAL INFORMATION SYSTEM BASED ON THE WEB

    Institute of Scientific and Technical Information of China (English)

    张全海; 施鹏飞

    2002-01-01

    This paper describes an ontology-based multi-agent knowledge processing model (MAKM), a kind of multi-agent system (MAS) that uses a semantic network to describe agents and help locate related agents distributed across a workgroup. In MAKM, an agent is the entity that implements distributed task processing and accesses information or knowledge. Knowledge Query and Manipulation Language (KQML) is adopted to realize communication among agents. Using the MAKM model, different knowledge and information in the medical domain can be organized and utilized efficiently when a collaborative task is carried out on the web.

  2. Simulating cancer growth with multiscale agent-based modeling.

    Science.gov (United States)

    Wang, Zhihui; Butner, Joseph D; Kerketta, Romica; Cristini, Vittorio; Deisboeck, Thomas S

    2015-02-01

    Many techniques have been developed in recent years to model a variety of cancer behaviors in silico. Agent-based modeling is a specific discrete-based hybrid modeling approach that allows simulating the role of diversity in cell populations as well as within each individual cell; it has therefore become a powerful modeling method widely used by computational cancer researchers. Many aspects of tumor morphology, including phenotype-changing mutations, adaptation to the microenvironment, the process of angiogenesis, the influence of the extracellular matrix, reactions to chemotherapy or surgical intervention, the effects of oxygen and nutrient availability, and metastasis and invasion of healthy tissues, have been incorporated and investigated in agent-based models. In this review, we introduce some of the most recent agent-based models that have provided insight into the understanding of cancer growth and invasion, spanning multiple biological scales in time and space, and we further describe several experimentally testable hypotheses generated by those models. We also discuss some of the current challenges of multiscale agent-based cancer models.
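
    To make the paradigm concrete, a toy agent-based sketch in the spirit of the models reviewed: tumor cells on a lattice divide into free neighboring sites and die with fixed probabilities. All parameters are illustrative and not taken from any cited model:

        import random

        SIZE, STEPS = 50, 100
        P_DIV, P_DEATH = 0.3, 0.02
        grid = {(SIZE // 2, SIZE // 2)}          # occupied lattice sites (tumor cells)

        def neighbors(x, y):
            return [(x + dx, y + dy)
                    for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
                    if 0 <= x + dx < SIZE and 0 <= y + dy < SIZE]

        for _ in range(STEPS):
            for cell in list(grid):
                free = [n for n in neighbors(*cell) if n not in grid]
                if free and random.random() < P_DIV:
                    grid.add(random.choice(free))    # division into a free site
                if random.random() < P_DEATH:
                    grid.discard(cell)               # cell death
        print(f"tumor size after {STEPS} steps: {len(grid)} cells")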

  3. Agent-based Modeling with MATSim for Hazards Evacuation Planning

    Science.gov (United States)

    Jones, J. M.; Ng, P.; Henry, K.; Peters, J.; Wood, N. J.

    2015-12-01

    Hazard evacuation planning requires robust modeling tools and techniques, such as least cost distance or agent-based modeling, to gain an understanding of a community's potential to reach safety before the event (e.g., a tsunami) arrives. Least cost distance modeling provides a static view of the evacuation landscape with an estimate of travel times to safety from each location in the hazard space. With this information, practitioners can assess a community's overall ability for timely evacuation. More information may be needed if evacuee congestion creates bottlenecks in the flow patterns. Dynamic movement patterns are best explored with agent-based models that simulate the movement of and interaction between individual agents as evacuees through the hazard space, reacting to potential congestion areas along the evacuation route. The multi-agent transport simulation model MATSim is an agent-based modeling framework that can be applied to hazard evacuation planning. Developed jointly by universities in Switzerland and Germany, MATSim is open-source software written in Java and freely available for modification or enhancement. We successfully used MATSim to illustrate tsunami evacuation challenges in two island communities in California, USA, that are constrained by limited escape routes. However, working with MATSim's data preparation, simulation, and visualization modules in an integrated development environment requires a significant investment of time to develop the software expertise to link the modules and run a simulation. To facilitate our evacuation research, we packaged the MATSim modules into a single application tailored to the needs of the hazards community. By exposing the modeling parameters of interest to researchers in an intuitive user interface and hiding the software complexities, we bring agent-based modeling closer to practitioners and provide access to the powerful visual and analytic information that this modeling can provide.

  4. Fines Classification Based on Sensitivity to Pore-Fluid Chemistry

    KAUST Repository

    Jang, Junbong

    2015-12-28

    The 75-μm particle size is used to discriminate between fine and coarse grains. Further analysis of fine grains is typically based on the plasticity chart. Whereas pore-fluid-chemistry-dependent soil response is a salient and distinguishing characteristic of fine grains, pore-fluid chemistry is not addressed in current classification systems. Liquid limits obtained with electrically contrasting pore fluids (deionized water, 2-M NaCl brine, and kerosene) are combined to define the soil "electrical sensitivity." Liquid limit and electrical sensitivity can be effectively used to classify fine grains according to their fluid-soil response into no-, low-, intermediate-, or high-plasticity fine grains of low, intermediate, or high electrical sensitivity. The proposed methodology benefits from the accumulated experience with liquid limit in the field and addresses the needs of a broader range of geotechnical engineering problems. © ASCE.

  5. Improved Collaborative Filtering Recommendation Based on Classification and User Trust

    Institute of Scientific and Technical Information of China (English)

    Xiao-Lin Xu; Guang-Lin Xu

    2016-01-01

    When dealing with ratings from users, traditional collaborative filtering algorithms do not consider the credibility of the rating data, which affects the accuracy of similarity computation. To address this issue, this paper proposes an improved algorithm based on classification and user trust. It first classifies all ratings by item category. Then, for each category, it evaluates each user's degree of trustworthiness in that category and weights the user's ratings accordingly. Finally, the algorithm computes the similarities between users, finds the nearest neighbors, and makes recommendations within each category. Simulations show that the improved algorithm outperforms traditional collaborative filtering algorithms and enhances the accuracy of recommendation.
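
    A hedged sketch of the category-wise, trust-weighted idea for a single category's rating matrix; the trust measure used here (agreement with per-item mean ratings) is a plausible stand-in, not the paper's exact formula:

        import numpy as np

        def predict_in_category(R, target, item, k=10):
            """R: (n_users, n_items) rating matrix for ONE item category, 0 = missing."""
            mask = R > 0
            counts = mask.sum(axis=0)
            item_means = np.where(counts > 0, R.sum(axis=0) / np.maximum(counts, 1), 0.0)
            # Trust: users whose ratings sit close to the category's item means
            # are deemed more credible (a stand-in for the paper's trust degree).
            dev = np.array([np.abs(R[u][mask[u]] - item_means[mask[u]]).mean()
                            if mask[u].any() else np.inf
                            for u in range(R.shape[0])])
            trust = 1.0 / (1.0 + dev)
            W = R * trust[:, None]                        # trust-weighted ratings
            sims = W @ W[target] / (np.linalg.norm(W, axis=1)
                                    * np.linalg.norm(W[target]) + 1e-9)
            neighbors = np.argsort(sims)[::-1][1:k + 1]   # skip the target itself
            rated = [u for u in neighbors if mask[u, item]]
            if not rated:
                return float(item_means[item])
            w = sims[rated]
            return float(np.dot(w, R[rated, item]) / (w.sum() + 1e-9))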

  6. About Classification Methods Based on Tensor Modelling for Hyperspectral Images

    Directory of Open Access Journals (Sweden)

    Salah Bourennane

    2010-03-01

    Full Text Available Denoising and dimensionality reduction (DR) are key issues in improving classifier efficiency for hyperspectral images (HSI). The recently developed multi-way Wiener filtering is used, and principal component analysis (PCA), independent component analysis (ICA), and projection pursuit (PP) approaches to DR have been investigated. These matrix algebra methods are applied to vectorized images, whereby the spatial arrangement is lost. To jointly take advantage of the spatial and spectral information, HSI have recently been represented as tensors. Offering multiple ways to decompose data orthogonally, we introduce filtering and DR methods based on multilinear algebra tools. The DR is performed along the spectral way using PCA or PP, joint with an orthogonal projection onto a lower-dimensional subspace of the spatial ways. We show the classification improvement of the introduced methods relative to existing methods. The experiments are exemplified using real-world HYDICE data. Keywords: multi-way filtering, dimensionality reduction, matrix and multilinear algebra tools, tensor processing.
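
    A small sketch of spectral-way DR that keeps the spatial arrangement intact, assuming a (rows, cols, bands) data cube; plain PCA via SVD stands in for the multilinear decompositions discussed:

        import numpy as np

        def spectral_mode_pca(cube, n_components=10):
            """Reduce the spectral way of an HSI tensor while preserving spatial layout."""
            rows, cols, bands = cube.shape
            unfolded = cube.reshape(-1, bands)            # spectral-mode unfolding: pixels x bands
            unfolded = unfolded - unfolded.mean(axis=0)
            _, _, Vt = np.linalg.svd(unfolded, full_matrices=False)
            reduced = unfolded @ Vt[:n_components].T      # project onto leading components
            return reduced.reshape(rows, cols, n_components)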

  7. Prediction of Breast Cancer using Rule Based Classification

    Directory of Open Access Journals (Sweden)

    Nagendra Kumar SINGH

    2015-12-01

    Full Text Available The current work proposes a model for the prediction of breast cancer using the classification approach in data mining. The proposed model is based on various parameters, including symptoms of breast cancer, gene mutations, and other risk factors causing breast cancer. Mutations are predicted in breast cancer causing genes with the help of alignment of normal and abnormal gene sequences; the class label of breast cancer (risky or safe) is then predicted on the basis of IF-THEN rules, using a Genetic Algorithm (GA). In this work, the GA uses variable gene encoding mechanisms for chromosome encoding and uniform population generation, and selects two chromosomes by the roulette-wheel selection technique for two-point crossover, which gives better solutions. The performance of the model is evaluated using the F-score measure, Matthews Correlation Coefficient (MCC) and Receiver Operating Characteristic (ROC) curves, plotting sensitivity against 1 − specificity.
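
    An illustrative fragment of the two GA operators the record names, roulette-wheel selection and two-point crossover, over binary chromosomes; the chromosome encoding and the fitness values (e.g., rule accuracy on training data) are placeholder assumptions:

        import random

        def roulette_select(population, fitness):
            """Pick one chromosome with probability proportional to fitness."""
            r = random.uniform(0, sum(fitness))
            acc = 0.0
            for chrom, f in zip(population, fitness):
                acc += f
                if acc >= r:
                    return chrom
            return population[-1]

        def two_point_crossover(a, b):
            """Swap the middle segment between two parent chromosomes."""
            i, j = sorted(random.sample(range(1, len(a)), 2))
            return a[:i] + b[i:j] + a[j:], b[:i] + a[i:j] + b[j:]

        def mutate(chrom, p=0.01):
            return [1 - g if random.random() < p else g for g in chrom]

        def next_generation(population, fitness):
            new_pop = []
            while len(new_pop) < len(population):
                p1 = roulette_select(population, fitness)
                p2 = roulette_select(population, fitness)
                c1, c2 = two_point_crossover(p1, p2)
                new_pop += [mutate(c1), mutate(c2)]
            return new_pop[:len(population)]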

  8. The generalization ability of online SVM classification based on Markov sampling.

    Science.gov (United States)

    Xu, Jie; Yan Tang, Yuan; Zou, Bin; Xu, Zongben; Li, Luoqing; Lu, Yang

    2015-03-01

    In this paper, we consider online support vector machine (SVM) classification learning algorithms with uniformly ergodic Markov chain (u.e.M.c.) samples. We establish a bound on the misclassification error of an online SVM classification algorithm with u.e.M.c. samples based on reproducing kernel Hilbert spaces and obtain a satisfactory convergence rate. We also introduce a novel online SVM classification algorithm based on Markov sampling, and present numerical studies on the learning ability of online SVM classification based on Markov sampling for benchmark repositories. The numerical studies show that the learning performance of the online SVM classification algorithm based on Markov sampling is better than that of classical online SVM classification based on random sampling when the training sample size is larger.
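
    A hedged sketch of the sampling idea with a linear online learner: successive training examples are drawn by a Markov transition over sample indices (a simplifying stand-in for a u.e.M.c. on the data) rather than i.i.d., and a hinge-loss SGD model, which approximates a linear SVM, is updated one example at a time:

        import numpy as np
        from sklearn.linear_model import SGDClassifier

        def markov_online_svm(X, y, n_steps=5000, step=25):
            clf = SGDClassifier(loss="hinge", alpha=1e-4)   # hinge loss ~ linear SVM
            classes = np.unique(y)
            i = np.random.randint(len(X))
            for _ in range(n_steps):
                # Markov transition: move to a nearby index instead of i.i.d. sampling.
                i = (i + np.random.randint(-step, step + 1)) % len(X)
                clf.partial_fit(X[i:i + 1], y[i:i + 1], classes=classes)
            return clf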

  9. Classification of cassava genotypes based on qualitative and quantitative data.

    Science.gov (United States)

    Oliveira, E J; Oliveira Filho, O S; Santos, V S

    2015-02-02

    We evaluated the genetic variation of cassava accessions based on qualitative (binomial and multicategorical) and quantitative (continuous) traits. We characterized 95 accessions obtained from the Cassava Germplasm Bank of Embrapa Mandioca e Fruticultura, evaluating these accessions for 13 continuous, 10 binary, and 25 multicategorical traits. First, we analyzed the accessions based only on quantitative traits; next, we conducted a joint analysis (qualitative and quantitative traits) based on the Ward-MLM method, which performs clustering in two stages. According to the pseudo-F, pseudo-t2, and maximum likelihood criteria, we identified five and four groups based on the quantitative-trait and joint analyses, respectively. The smaller number of groups identified in the joint analysis may be related to the nature of the data. Quantitative data are more subject to environmental effects on phenotype expression, so differentiation among accessions may partly reflect environmental rather than genetic differences. For most of the accessions, the maximum probability of classification was >0.90, independent of the traits analyzed, indicating a good fit of the clustering method. Differences in clustering according to the type of data imply that analysis of quantitative and qualitative traits in cassava germplasm might explore different genomic regions. On the other hand, when joint analysis was used, the means and ranges of genetic distances were high, indicating that the Ward-MLM method is very useful for clustering genotypes when several phenotypic traits are available, as is the case for genetic resources and breeding programs.

  10. Agent-based Personal Network (PN) service architecture

    DEFF Research Database (Denmark)

    Jiang, Bo; Olesen, Henning

    2004-01-01

    In this paper we propose a new concept for a centralized agent system as the solution for the PN service architecture, which aims to efficiently control and manage PN resources and enable PN-based services to run seamlessly over different networks and devices. The working principle...

  11. An Intelligent Agent Based on Virtual Geographic Environment System

    Institute of Scientific and Technical Information of China (English)

    SHEN Dayong; LIN Hui; GONG Jianhua; ZHAO Yibin; FANG Zhaobao; GUO Zhongyang

    2004-01-01

    On the basis of previous work, this paper designs an intelligent-agent-based virtual geographic environment (VGE) system characterized by huge data volumes, rapid computation, multiple users, multi-threading, and intelligence, which poses challenges to traditional GIS models and algorithms. New advances in software and hardware technology lay a reliable basis for the design, development, and application of the system.

  12. Resource Based Multi Agent Plan Merging: framework and application

    NARCIS (Netherlands)

    De Weerdt, M.M.; Van der Krogt, R.P.J.; Witteveen, C.

    2003-01-01

    We discuss a resource-based planning framework where agents are able to merge plans by exchanging resources. In this framework, plans are specified as structured objects composed of resource consuming and resource producing processes (actions). A plan itself can also be conceived as a process consum

  13. Mobile Agent-Based Directed Diffusion in Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Victor C. M. Leung

    2007-01-01

    Full Text Available In environments where the source nodes are close to one another and generate a lot of sensory data traffic with redundancy, transmitting all sensory data via individual nodes not only wastes the scarce wireless bandwidth, but also consumes a lot of battery energy. Instead of each source node sending sensory data to its sink for aggregation (the so-called client/server computing), Qi et al. in 2003 proposed a mobile-agent-based distributed sensor network (MADSN) for collaborative signal and information processing, which considerably reduces the sensory data traffic and query latency as well. However, MADSN is based on the assumption that the operation of the mobile agent is only carried out within one hop in a clustering-based architecture. This paper considers mobile agents (MAs) in multihop environments and adopts directed diffusion (DD) to dispatch MAs. The gradient in DD gives a hint for efficiently forwarding the MA among target sensors. The mobile agent paradigm in combination with the DD framework is dubbed mobile-agent-based directed diffusion (MADD). With appropriate parameters set, extensive simulation shows that MADD exhibits better performance than the original DD (in the client/server paradigm) in terms of packet delivery ratio, energy consumption, and end-to-end delivery latency.

  14. Structuring Qualitative Data for Agent-Based Modelling

    NARCIS (Netherlands)

    Ghorbani, Amineh; Dijkema, Gerard P.J.; Schrauwen, Noortje

    2015-01-01

    Using ethnography to build agent-based models may result in more empirically grounded simulations. Our study on innovation practice and culture in the Westland horticulture sector served to explore what information and data from ethnographic analysis could be used in models and how. MAIA, a framewor

  15. On infrastructure network design with agent-based modelling

    NARCIS (Netherlands)

    Chappin, E.J.L.; Heijnen, P.W.

    2014-01-01

    We have developed an agent-based model to optimize green-field network design in an industrial area. We aim to capture some of the deep uncertainties surrounding infrastructure design by modelling it developing specific ant colony optimizations. Hence, we propose a variety of extensions to our exist

  16. Agent-based Multimodal Interface for Dynamically Autonomous Mobile Robots

    Science.gov (United States)

    2003-01-01

    Donald Sofge, Magdalena Bugajska, William Adams, Dennis... computing paradigm for integrated distributed artificial intelligence systems on autonomous mobile robots (Figure 1: CoABS Grid Architecture for Dynamically Autonomous Mobile Robots). The remainder of the paper is organized as follows. Section 2 describes our integrated AI

  17. An agent-based architecture for multimodal interaction

    NARCIS (Netherlands)

    Jonker, C.M.; Treur, J.; Wijngaards, W.C.A.

    2001-01-01

    In this paper, an executable generic process model is proposed for combined verbal and non-verbal communication processes and their interaction. The agent-based architecture can be used to create multimodal interaction. The generic process model has been designed, implemented and used to simulate di

  18. A comparative study on classification of sleep stage based on EEG signals using feature selection and classification algorithms.

    Science.gov (United States)

    Şen, Baha; Peker, Musa; Çavuşoğlu, Abdullah; Çelebi, Fatih V

    2014-03-01

    Sleep scoring is one of the most important diagnostic methods in psychiatry and neurology. Sleep staging is a time-consuming and difficult task undertaken by sleep experts. This study aims to identify a method that would classify sleep stages automatically and with a high degree of accuracy and, in this manner, assist sleep experts. The study consists of three stages: feature extraction from EEG signals, feature selection, and classification of these signals. In the feature extraction stage, 20 attribute algorithms in four categories are used, and 41 feature parameters are obtained from these algorithms. Feature selection is important for eliminating irrelevant and redundant features; in this manner prediction accuracy is improved and the computational overhead of classification is reduced. Effective feature selection algorithms such as minimum redundancy maximum relevance (mRMR), fast correlation based feature selection (FCBF), ReliefF, t-test, and Fisher score are preferred at the feature selection stage to select a set of features that best represent the EEG signals. The selected features are used as input parameters for the classification algorithms. At the classification stage, five different classification algorithms (random forest (RF), feed-forward neural network (FFNN), decision tree (DT), support vector machine (SVM), and radial basis function neural network (RBF)) classify the problem. The results obtained from the different classification algorithms are provided so that a comparison can be made between computation times and accuracy rates. Finally, a classification accuracy of 97.03% is obtained using the proposed method. The results show the feasibility of designing a new intelligent assistive sleep scoring system with the proposed method.
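
    A sketch of the pipeline shape described: select a feature subset, then compare several classifiers on accuracy and computation time. Mutual information stands in for mRMR/FCBF/ReliefF (which are not available in scikit-learn), an MLP stands in for the FFNN, the RBF network is omitted, and the data files and k are placeholders:

        import time
        import numpy as np
        from sklearn.feature_selection import SelectKBest, mutual_info_classif
        from sklearn.model_selection import cross_val_score
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.tree import DecisionTreeClassifier
        from sklearn.svm import SVC
        from sklearn.neural_network import MLPClassifier
        from sklearn.pipeline import make_pipeline

        X = np.load("eeg_features.npy")   # (n_epochs, 41) features per EEG epoch
        y = np.load("sleep_stages.npy")   # stage label per epoch

        for name, clf in [("RF", RandomForestClassifier()),
                          ("DT", DecisionTreeClassifier()),
                          ("SVM", SVC()),
                          ("FFNN", MLPClassifier(max_iter=500))]:
            pipe = make_pipeline(SelectKBest(mutual_info_classif, k=20), clf)
            t0 = time.time()
            acc = cross_val_score(pipe, X, y, cv=5).mean()
            print(f"{name}: accuracy={acc:.3f}, time={time.time() - t0:.1f}s")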

  19. Quality-Oriented Classification of Aircraft Material Based on SVM

    Directory of Open Access Journals (Sweden)

    Hongxia Cai

    2014-01-01

    Full Text Available Existing material classification schemes were proposed to improve inventory management. However, different materials have different quality-related attributes, especially in the aircraft industry. In order to reduce cost without sacrificing quality, we propose a quality-oriented material classification system considering material quality character, quality cost, and quality influence. The Analytic Hierarchy Process helps to make the feature selection and classification decisions. We use an improved Kraljic Portfolio Matrix to establish a three-dimensional classification model. Aircraft materials can be divided into eight types, including the general, key, risk, and leveraged types. Aiming to improve the classification accuracy for various materials, the Support Vector Machine algorithm is introduced. Finally, we compare SVM with a BP neural network in this application. The results prove that the SVM algorithm is more efficient and accurate, and that the quality-oriented material classification is valuable.

  20. An Agent Based Model for Social Class Emergence

    Science.gov (United States)

    Yang, Xiaoxiang; Rodriguez Segura, Daniel; Lin, Fei; Mazilu, Irina

    We present an open-system agent-based model to analyze the effects of education and society-specific wealth transactions on the emergence of social classes. Building on previous studies, we use realistic functions to model how years of education affect income level. Numerical simulations show that the fraction of an individual's total transactions that is invested rather than consumed can cause wealth gaps between different income brackets in the long run. In an attempt to incorporate network effects, we also explore how wealth distribution is affected when the probability of interaction among agents depends on the spread of their income brackets.
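
    A minimal sketch of the transaction mechanism described, under stated assumptions: randomly paired agents exchange a random amount, and a fixed fraction of each transaction is invested at a fixed return rather than consumed. All parameter values are illustrative:

        import random

        N, T, INVEST_FRAC, RETURN_RATE = 1000, 10000, 0.2, 0.05
        wealth = [100.0] * N

        for _ in range(T):
            a, b = random.sample(range(N), 2)
            amount = random.uniform(0, min(wealth[a], wealth[b]))
            wealth[a] -= amount
            wealth[b] += amount * (1 - INVEST_FRAC)          # consumed share
            # The invested share of the transaction earns a return for the receiver.
            wealth[b] += amount * INVEST_FRAC * (1 + RETURN_RATE)

        wealth.sort()
        top10 = sum(wealth[int(0.9 * N):]) / sum(wealth)
        print(f"share of total wealth held by top 10%: {top10:.2f}")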

  1. The Geographic Information Grid System Based on Mobile Agent

    Institute of Scientific and Technical Information of China (English)

    2006-01-01

    We analyze the deficiencies of current application systems and discuss the key requirements of distributed Geographic Information Services (GIS). We construct the distributed GIS on a grid platform. Considering flexibility and efficiency, we integrate mobile agent technology into the system. We propose a new prototype system, the Geographic Information Grid System (GIGS), based on mobile agents. This system offers flexible services and high performance, and improves the sharing of distributed resources. The service strategy of the system and examples of its use are also presented.

  2. SPY AGENT BASED SECURE DATA AGGREGATION IN WSN

    Directory of Open Access Journals (Sweden)

    T. Lathies Bhasker

    2014-12-01

    Full Text Available Wireless sensor networks consist of many sensor devices powered by batteries. These sensor devices are mostly used in hostile environments, military applications, etc. In this type of environment it is highly difficult to collect data and transmit it to the sink without any data loss. In this paper we propose a SPY-agent-based secure data aggregation scheme, in which a SPY agent moves around the network and monitors the aggregator nodes, i.e., the cluster heads, for secure data collection. In the simulation section we analyze our proposed architecture for both proactive and reactive protocols.

  3. Many-body methods in agent-based epidemic models

    CERN Document Server

    Nakamura, Gilberto M

    2016-01-01

    The susceptible-infected-susceptible (SIS) agent-based model is usually employed in the investigation of epidemics. The model describes a Markov process for a single communicable disease among susceptible (S) and infected (I) agents. However, disease-spread forecasting is often restricted to numerical simulations, while analytic formulations lack both general results and perturbative approaches, since they are subject to asymmetric time generators. Here, we discuss perturbation theory, approximations, and the application of many-body techniques to epidemic models in the framework of the squared norm of the probability vector, $\|P(t)\|^2$, in which asymmetric time generators are replaced by their symmetric counterparts.
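
    A minimal agent-based SIS simulation on a contact network, the numerical counterpart of the analytic treatment discussed; the network model and the infection and recovery rates are illustrative assumptions:

        import random
        import networkx as nx

        G = nx.erdos_renyi_graph(n=500, p=0.02, seed=1)
        BETA, GAMMA, STEPS = 0.3, 0.1, 200
        infected = set(random.sample(list(G.nodes), 5))

        for _ in range(STEPS):
            new_inf, recovered = set(), set()
            for i in infected:
                for j in G.neighbors(i):
                    if j not in infected and random.random() < BETA:
                        new_inf.add(j)             # S -> I along a contact edge
                if random.random() < GAMMA:
                    recovered.add(i)               # I -> S (no immunity in SIS)
            infected = (infected | new_inf) - recovered
        print(f"endemic prevalence ~ {len(infected) / G.number_of_nodes():.2f}")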

  4. Dynamic Agent Classification and Tracking Using an Ad Hoc Mobile Acoustic Sensor Network

    Directory of Open Access Journals (Sweden)

    Friedlander David

    2003-01-01

    Full Text Available Autonomous networks of sensor platforms can be designed to interact in dynamic and noisy environments to determine the occurrence of specified transient events that define the dynamic process of interest. For example, a sensor network may be used for battlefield surveillance with the purpose of detecting, identifying, and tracking enemy activity. When the number of nodes is large, human oversight and control of low-level operations is not feasible. Coordination and self-organization of multiple autonomous nodes is necessary to maintain connectivity and sensor coverage and to combine information for better understanding the dynamics of the environment. Resource conservation requires adaptive clustering in the vicinity of the event. This paper presents methods for dynamic distributed signal processing using an ad hoc mobile network of microsensors to detect, identify, and track targets in noisy environments. They seamlessly integrate data from fixed and mobile platforms and dynamically organize platforms into clusters to process local data along the trajectory of the targets. Local analysis of sensor data is used to determine a set of target attribute values and classify the target. Sensor data from a field test at the Marine base at Twentynine Palms, California, was analyzed using the techniques described in this paper. The results were compared to "ground truth" data obtained from GPS receivers on the vehicles.

  5. Using Agents in Web-Based Constructivist Collaborative Learning System

    Institute of Scientific and Technical Information of China (English)

    刘莹; 林福宗; 王雪

    2004-01-01

    Web-based learning systems are one of the most interesting topics in the area of the application of computers to education. Collaborative learning, as an important principle in constructivist learning theory, is an important instruction mode for open and distance learning systems. Through collaborative learning, students can greatly improve their creativity, exploration capability, and social cooperation. This paper used an agent-based coordination mechanism to respond to the requirements of an efficient and motivating learning process. This coordination mechanism is based on a Web-based constructivist collaborative learning system, in which students can learn in groups and interact with each other by several kinds of communication modes to achieve their learning objectives efficiently and actively. In this learning system, artificial agents represent an active part in the collaborative learning process; they can partially replace human instructors during the multi-mode interaction of the students.

  6. A Formal Approach for Agent Based Large Concurrent Intelligent Systems

    CERN Document Server

    Chaudhary, Ankit

    2011-01-01

    Large intelligent systems are now so complex that there is an urgent need to design them in the best available way. Modeling is a useful technique for representing a complex real-world system as an abstraction, so that analysis and implementation of the intelligent system become easier; it is also useful for gathering prior knowledge of a system when experimenting with the real-world complex system is not possible. This paper discusses a formal approach to agent-based modeling of large intelligent systems, describing design-level precautions, challenges and techniques, using autonomous agents as its fundamental modeling abstraction. We discuss an ad hoc network system as a case study, in which we use mobile agents and nodes are free to relocate as they form an intelligent system. Design is critical in this scenario and can reduce the overall cost, duration, and risk involved in the project.

  7. Technology of structure damage monitoring based on multi-agent

    Institute of Scientific and Technical Information of China (English)

    Hongbing Sun; Shenfang Yuan; Xia Zhao; Hengbao Zhou; Dong Liang

    2010-01-01

    Health monitoring for large-scale structures must resolve a large number of difficulties, such as data transmission and distributed information handling. To solve these problems, multi-agent technology is a good candidate for use in the field of structural health monitoring. A structural health monitoring system architecture based on multi-agent technology is proposed. The measurement system for an aircraft airfoil is designed with FBG sensors, strain gauges, and the corresponding signal processing circuit. An experiment to determine the location of a concentrated loading on the structure is carried out with the system, combined with pattern recognition and multi-agent technologies. The results show that the system can locate the concentrated loading on the aircraft airfoil with an accuracy of 91.2%.

  8. Differential Protection for Distributed Micro-Grid Based on Agent

    Directory of Open Access Journals (Sweden)

    ZHOU Bin

    2013-05-01

    Full Text Available The micro-grid, even though not a replacement for the conventional centralized power transmission grid, plays a very important role in the success of the rapid development of renewable energy technologies. Due to the decentralization, independence and dynamics of sources within a micro-grid, a high level of protection automation is a must. Multi-agent systems (MAS) have been developed as an approach to handling distributed system issues. This paper presents an MAS-based differential protection method for a distributed micro-grid. The nodes within a micro-grid are divided into primary and backup protection zones. The agents follow predefined rules to take actions to protect the system and isolate a fault when it happens. Furthermore, an algorithm is proposed to achieve high availability in case an agent itself malfunctions. The method is simulated in Matlab and shown to satisfy relay protection requirements in terms of selectivity, sensitivity, rapidity and reliability.
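
    A toy sketch of the differential rule a zone agent could apply, assuming phasor current measurements at every boundary of its zone; the pickup and slope settings, and the messaging between primary and backup agents, are placeholder assumptions:

        def zone_agent(currents_in, pickup=0.2, slope=0.3):
            """currents_in: complex current phasors measured at every zone boundary."""
            i_diff = abs(sum(currents_in))                    # differential current
            i_restraint = sum(abs(i) for i in currents_in)    # through-current restraint
            return i_diff > pickup + slope * i_restraint      # True => trip this zone

        # Healthy zone: what flows in flows out, so the phasor sum is near zero.
        print(zone_agent([1 + 0j, -1.01 + 0j]))   # False, no trip
        # Internal fault: current flows in from both ends of the zone.
        print(zone_agent([1 + 0j, 0.9 + 0j]))     # True, trip and isolate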

  9. Automated segmentation of atherosclerotic histology based on pattern classification

    Directory of Open Access Journals (Sweden)

    Arna van Engelen

    2013-01-01

    Full Text Available Background: Histology sections provide accurate information on atherosclerotic plaque composition, and are used in various applications. To our knowledge, no automated systems for plaque component segmentation in histology sections currently exist. Materials and Methods: We perform pixel-wise classification of fibrous, lipid, and necrotic tissue in Elastica Von Gieson-stained histology sections, using features based on color channel intensity and local image texture and structure. We compare an approach where we train on independent data to an approach where we train on one or two sections per specimen in order to segment the remaining sections. We evaluate the results on segmentation accuracy in histology, and we use the obtained histology segmentations to train plaque component classification methods in ex vivo magnetic resonance imaging (MRI) and in vivo MRI and computed tomography (CT). Results: In leave-one-specimen-out experiments on 176 histology slices of 13 plaques, a pixel-wise accuracy of 75.7 ± 6.8% was obtained. This increased to 77.6 ± 6.5% when two manually annotated slices of the specimen to be segmented were used for training. Rank correlations of relative component volumes with manually annotated volumes were high in this situation (P = 0.82-0.98). Using the obtained histology segmentations to train plaque component classification methods in ex vivo MRI and in vivo MRI and CT resulted in similar image segmentations for training on the automated histology segmentations as for training on a fully manual ground truth. The size of the lipid-rich necrotic core was significantly smaller when training on fully automated histology segmentations than when manually annotated histology sections were used. This difference was reduced and not statistically significant when one or two slices per section were manually annotated for histology segmentation. Conclusions: Good histology segmentations can be obtained by automated segmentation

  10. Agent-based reasoning for distributed multi-INT analysis

    Science.gov (United States)

    Inchiosa, Mario E.; Parker, Miles T.; Perline, Richard

    2006-05-01

    Fully exploiting the intelligence community's exponentially growing data resources will require computational approaches differing radically from those currently available. Intelligence data is massive, distributed, and heterogeneous. Conventional approaches requiring highly structured and centralized data will not meet this challenge. We report on a new approach, Agent-Based Reasoning (ABR). In NIST evaluations, the use of ABR software tripled analysts' solution speed, doubled accuracy, and halved perceived difficulty. ABR makes use of populations of fine-grained, locally interacting agents that collectively reason about intelligence scenarios in a self-organizing, "bottom-up" process akin to those found in biological and other complex systems. Reproduction rules allow agents to make inferences from multi-INT data, while movement rules organize information and optimize reasoning. Complementary deterministic and stochastic agent behaviors enhance reasoning power and flexibility. Agent interaction via small-world networks - such as are found in nervous systems, social networks, and power distribution grids - dramatically increases the rate of discovering intelligence fragments that usefully connect to yield new inferences. Small-world networks also support the distributed processing necessary to address intelligence community data challenges. In addition, we have found that ABR pre-processing can boost the performance of commercial text clustering software. Finally, we have demonstrated interoperability with Knowledge Engineering systems and seen that reasoning across diverse data sources can be a rich source of inferences.

  11. Agent based decision support in the supply chain context

    OpenAIRE

    Hilletofth, Per; Lättilä, Lauri

    2012-01-01

    Purpose – The purpose of this paper is to investigate the benefits and the barriers of agent based decision support (ABDS) systems in the supply chain context. Design/methodology/approach – Two ABDS systems have been developed and evaluated. The first system concerns a manufacturing supply chain while the second concerns a service supply chain. The systems are based on actual case companies. Findings – This research shows that the benefits of ABDS systems in the supply chain context include t...

  12. Multi-agent Based Hierarchy Simulation Models of Carrier-based Aircraft Catapult Launch

    Institute of Scientific and Technical Information of China (English)

    Wang Weijun; Qu Xiangju; Guo Linliang

    2008-01-01

    With the aid of a multi-agent based modeling approach to complex systems, hierarchy simulation models of carrier-based aircraft catapult launch are developed. Ocean, carrier, aircraft, and atmosphere are treated as aggregation agents, while detailed components such as the catapult, landing gears, and disturbances are considered meta-agents belonging to their aggregation agents. Thus a model with two layers is formed, i.e. the aggregation agent layer and the meta-agent layer. The information communication among all agents is described. The meta-agents within one aggregation agent communicate with each other directly by information sharing, but meta-agents belonging to different aggregation agents exchange their information through the aggregation layer first, and then perceive it from the shared environment, that is, the aggregation agent. Thus, not only is the hierarchy model built, but the environment perceived by each agent is also specified. Meanwhile, the problem of balancing agent independence against the resource consumption brought by real-time communication within a multi-agent system (MAS) is resolved. Each agent involved in carrier-based aircraft catapult launch is depicted, considering the interaction within the disturbed atmospheric environment and multiple motion bodies including the carrier, aircraft, and landing gears. The models of the reactive agents among them are derived based on tensors, and the perceived messages and inner frameworks of each agent are characterized. Finally, some results of a simulation instance are given. The simulation and modeling of dynamic systems based on multi-agent systems helps express physical concepts and logical hierarchy clearly and precisely. The system model can easily incorporate other kinds of agents to achieve a precise simulation of a more complex system. This modeling technique decomposes the complex integral dynamic equations of multiple bodies into parallel operations of single agents, and it is convenient to expand, maintain, and reuse

  13. Simulating Interactive Learning Scenarios with Intelligent Pedagogical Agents in a Virtual World through BDI-Based Agents

    Directory of Open Access Journals (Sweden)

    Mohamed Soliman

    2013-04-01

    Full Text Available Intelligent Pedagogical Agents (IPAs) are designed for pedagogical purposes to support learning in 3D virtual learning environments. Several benefits of IPAs that add to learning effectiveness have been found. Pedagogical agents can be thought of as a central point of interaction between the learner and the learning environment, and hence the intelligent behavior and functional richness of pedagogical agents have the potential to translate into increased engagement and learning effectiveness. However, realizing such agents based on intelligent agents in virtual worlds remains a challenge. This paper reports the reasons why this is challenging and, most importantly, an approach to simplification. A simulation based on BDI agents is introduced, opening the road for several extensions and experimentation before implementation of IPAs in a virtual world can take place. The simulation provides a proof-of-concept based on three intelligent agents representing an IPA, a learner, and a learning object, implemented on the JACK and Jadex intelligent agent platforms. To that end, the paper exhibits the difficulties, resolutions, and decisions made when designing and implementing the learning scenario in both domains of the virtual world and the agent-based simulation, while comparing the two agent platforms.

  14. Classification of types of stuttering symptoms based on brain activity.

    Science.gov (United States)

    Jiang, Jing; Lu, Chunming; Peng, Danling; Zhu, Chaozhe; Howell, Peter

    2012-01-01

    Among the non-fluencies seen in speech, some are more typical (MT) of stuttering speakers, whereas others are less typical (LT) and are common to both stuttering and fluent speakers. No neuroimaging work has evaluated the neural basis for grouping these symptom types. Another long-debated issue is which type (LT, MT) whole-word repetitions (WWR) should be placed in. In this study, a sentence completion task was performed by twenty stuttering patients who were scanned using an event-related design. This task elicited stuttering in these patients. Each stuttered trial from each patient was sorted into the MT or LT types with WWR put aside. Pattern classification was employed to train a patient-specific single trial model to automatically classify each trial as MT or LT using the corresponding fMRI data. This model was then validated by using test data that were independent of the training data. In a subsequent analysis, the classification model, just established, was used to determine which type the WWR should be placed in. The results showed that the LT and the MT could be separated with high accuracy based on their brain activity. The brain regions that made most contribution to the separation of the types were: the left inferior frontal cortex and bilateral precuneus, both of which showed higher activity in the MT than in the LT; and the left putamen and right cerebellum which showed the opposite activity pattern. The results also showed that the brain activity for WWR was more similar to that of the LT and fluent speech than to that of the MT. These findings provide a neurological basis for separating the MT and the LT types, and support the widely-used MT/LT symptom grouping scheme. In addition, WWR play a similar role as the LT, and thus should be placed in the LT type.

  16. Sequence-based classification using discriminatory motif feature selection.

    Directory of Open Access Journals (Sweden)

    Hao Xiong

    Full Text Available Most existing methods for sequence-based classification use exhaustive feature generation, employing, for example, all k-mer patterns. The motivation behind such (enumerative) approaches is to minimize the potential for overlooking important features. However, there are shortcomings to this strategy. First, practical constraints limit the scope of exhaustive feature generation to patterns of length ≤ k, such that potentially important, longer (> k) predictors are not considered. Second, features so generated exhibit strong dependencies, which can complicate understanding of derived classification rules. Third, and most importantly, numerous irrelevant features are created. These concerns can compromise prediction and interpretation. While remedies have been proposed, they tend to be problem-specific and not broadly applicable. Here, we develop a generally applicable methodology, and an attendant software pipeline, that is predicated on discriminatory motif finding. In addition to the traditional training and validation partitions, our framework entails a third level of data partitioning, a discovery partition. A discriminatory motif finder is used on sequences and associated class labels in the discovery partition to yield a (small) set of features. These features are then used as inputs to a classifier in the training partition. Finally, performance assessment occurs on the validation partition. Important attributes of our approach are its modularity (any discriminatory motif finder and any classifier can be deployed) and its universality (all data, including sequences that are unaligned and/or of unequal length, can be accommodated). We illustrate our approach on two nucleosome occupancy datasets and a protein solubility dataset, previously analyzed using enumerative feature generation. Our method achieves excellent performance results, with and without optimization of classifier tuning parameters. A Python pipeline implementing the approach is
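
    A hedged sketch of the three-level partitioning the framework entails: motifs are discovered on a discovery partition, used as features to train a classifier on a training partition, and assessed on a validation partition. The motif finder and the presence/absence featurization are placeholder assumptions:

        import numpy as np
        from sklearn.model_selection import train_test_split
        from sklearn.linear_model import LogisticRegression

        def motif_features(seqs, motifs):
            # Presence/absence of each discovered motif in each sequence.
            return np.array([[int(m in s) for m in motifs] for s in seqs])

        def run_pipeline(seqs, labels, find_motifs):
            """find_motifs: any discriminatory motif finder, plugged in by the caller."""
            s_rest, s_disc, y_rest, y_disc = train_test_split(seqs, labels, test_size=0.3)
            s_tr, s_val, y_tr, y_val = train_test_split(s_rest, y_rest, test_size=0.3)
            motifs = find_motifs(s_disc, y_disc)      # discovery partition only
            clf = LogisticRegression().fit(motif_features(s_tr, motifs), y_tr)
            return clf.score(motif_features(s_val, motifs), y_val)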

  17. An innovative blazar classification based on radio jet kinematics

    Science.gov (United States)

    Hervet, O.; Boisson, C.; Sol, H.

    2016-07-01

    Context. Blazars are usually classified following their synchrotron peak frequency (νF(ν) scale) as high, intermediate, low frequency peaked BL Lacs (HBLs, IBLs, LBLs), and flat spectrum radio quasars (FSRQs), or, according to their radio morphology at large scale, FR I or FR II. However, the diversity of blazars is such that these classes seem insufficient to chart the specific properties of each source. Aims: We propose to classify a wide sample of blazars following the kinematic features of their radio jets seen in very long baseline interferometry (VLBI). Methods: For this purpose we use public data from the MOJAVE collaboration in which we select a sample of blazars with known redshift and sufficient monitoring to constrain apparent velocities. We selected 161 blazars from a sample of 200 sources. We identify three distinct classes of VLBI jets depending on radio knot kinematics: class I with quasi-stationary knots, class II with knots in relativistic motion from the radio core, and class I/II, intermediate, showing quasi-stationary knots at the jet base and relativistic motions downstream. Results: A notable result is the good overlap of this kinematic classification with the usual spectral classification; class I corresponds to HBLs, class II to FSRQs, and class I/II to IBLs/LBLs. We deepen this study by characterizing the physical parameters of jets from VLBI radio data. Hence we focus on the singular case of the class I/II by the study of the blazar BL Lac itself. Finally we show how the interpretation that radio knots are recollimation shocks is fully appropriate to describe the characteristics of these three classes.

  18. Research and Application of Human Capital Strategic Classification Tool: Human Capital Classification Matrix Based on Biological Natural Attribute

    Directory of Open Access Journals (Sweden)

    Yong Liu

    2014-12-01

    Full Text Available In order to study the causes of weak strategic classification management of human capital structure in China, we observe that enterprises around the world find human capital management increasingly difficult. To provide strategically sound answers, HR managers need the critical information supplied by the right technology and analytical tools. There are different types and levels of human capital in formal organization management, and they do not contribute equally to a formal organization. An important guarantee of the sustained and healthy development of a formal or informal organization is low human capital risk. Resisting this risk depends primarily on the hedging and appreciation forces of human capital value, which in turn depend largely on the strategic value performance of senior managers. Based on an analysis from the perspective of high-level managers, we also discuss the value, configuration principles, and methods to be followed in strategic classification of human capital, building on the Boston Consulting Group (BCG) matrix, and construct a Human Capital Classification (HCC) matrix based on biological natural attributes to effectively realize strategic classification of human capital structure.

  19. Automated classification of mouse pup isolation syllables: from cluster analysis to an Excel-based "mouse pup syllable classification calculator".

    Science.gov (United States)

    Grimsley, Jasmine M S; Gadziola, Marie A; Wenstrup, Jeffrey J

    2012-01-01

    Mouse pups vocalize at high rates when they are cold or isolated from the nest. The proportions of each syllable type produced carry information about disease state and are being used as behavioral markers for the internal state of animals. Manual classifications of these vocalizations identified 10 syllable types based on their spectro-temporal features. However, manual classification of mouse syllables is time consuming and vulnerable to experimenter bias. This study uses an automated cluster analysis to identify acoustically distinct syllable types produced by CBA/CaJ mouse pups, and then compares the results to prior manual classification methods. The cluster analysis identified two syllable types, based on their frequency bands, that have continuous frequency-time structure, and two syllable types featuring abrupt frequency transitions. Although cluster analysis computed fewer syllable types than manual classification, the clusters represented well the probability distributions of the acoustic features within syllables. These probability distributions indicate that some of the manually classified syllable types are not statistically distinct. The characteristics of the four classified clusters were used to generate a Microsoft Excel-based mouse syllable classifier that rapidly categorizes syllables, with over a 90% match, into the syllable types determined by cluster analysis.

  20. Hyperspectral remote sensing image classification based on decision level fusion

    Institute of Scientific and Technical Information of China (English)

    Peijun Du; Wei Zhang; Junshi Xia

    2011-01-01

    To apply decision level fusion to hyperspectral remote sensing (HRS) image classification, three decision level fusion strategies are experimented on and compared, namely, the linear consensus algorithm, improved evidence theory, and the proposed support vector machine (SVM) combiner. To evaluate the effects of the input features on classification performance, four schemes are used to organize input features for member classifiers. In the experiment, using the operational modular imaging spectrometer (OMIS) II HRS image, decision level fusion is shown to be an effective way of improving the classification accuracy of the HRS image, and the proposed SVM combiner is especially suitable for decision level fusion. The results also indicate that the optimization of input features can improve the classification performance.
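
    A compact sketch of the SVM-combiner idea, with scikit-learn's stacking as a convenient stand-in: member classifiers produce decisions that a final SVM fuses. In the paper each member classifier works on its own input feature scheme, a detail not reproduced here:

        from sklearn.ensemble import RandomForestClassifier, StackingClassifier
        from sklearn.naive_bayes import GaussianNB
        from sklearn.neighbors import KNeighborsClassifier
        from sklearn.svm import SVC

        members = [("rf", RandomForestClassifier()),
                   ("knn", KNeighborsClassifier()),
                   ("nb", GaussianNB())]
        # The final SVM learns how to fuse the member classifiers' decisions.
        fusion = StackingClassifier(estimators=members, final_estimator=SVC())
        # fusion.fit(X_train, y_train); fusion.predict(X_test)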

  1. Text Passage Retrieval Based on Colon Classification: Retrieval Performance.

    Science.gov (United States)

    Shepherd, Michael A.

    1981-01-01

    Reports the results of experiments using colon classification for the analysis, representation, and retrieval of primary information from the full text of documents. Recall, precision, and search length measures indicate colon classification did not perform significantly better than Boolean or simple word occurrence systems. Thirteen references…

  2. Text Classification Retrieval Based on Complex Network and ICA Algorithm

    Directory of Open Access Journals (Sweden)

    Hongxia Li

    2013-08-01

    Full Text Available With the development of computer science and information technology, libraries are moving toward digitization and networking. The library digitization process converts books into digital information, whose high-quality preservation and management are achieved through computer technology and text classification techniques, realizing knowledge appreciation. This paper introduces complex network theory into the text classification process and puts forward an ICA semantic clustering algorithm, realizing independent component analysis for complex-network text classification. Through the ICA clustering algorithm over independent components, clustering extraction of characteristic words for text classification is realized, and the visualization of text retrieval is improved. Finally, we make a comparative analysis of a collocation algorithm and the ICA clustering algorithm through text classification and keyword search experiments, reporting each algorithm's clustering degree and accuracy. The simulation analysis shows that the ICA clustering algorithm improves the clustering degree by 1.2% compared with baseline text classification and improves accuracy by up to 11.1%, thus improving the efficiency and accuracy of text classification retrieval and providing a theoretical reference for text retrieval classification of eBooks.

  3. Empirical agent-based modelling challenges and solutions

    CERN Document Server

    Barreteau, Olivier

    2014-01-01

    This instructional book showcases techniques to parameterise human agents in empirical agent-based models (ABM). In doing so, it provides a timely overview of key ABM methodologies and the most innovative approaches through a variety of empirical applications.  It features cutting-edge research from leading academics and practitioners, and will provide a guide for characterising and parameterising human agents in empirical ABM.  In order to facilitate learning, this text shares the valuable experiences of other modellers in particular modelling situations. Very little has been published in the area of empirical ABM, and this contributed volume will appeal to graduate-level students and researchers studying simulation modeling in economics, sociology, ecology, and trans-disciplinary studies, such as topics related to sustainability. In a similar vein to the instruction found in a cookbook, this text provides the empirical modeller with a set of 'recipes'  ready to be implemented. Agent-based modeling (AB...

  4. A spectral-spatial kernel-based method for hyperspectral imagery classification

    Science.gov (United States)

    Li, Li; Ge, Hongwei; Gao, Jianqiang

    2017-02-01

    Spectral-based classification methods have gained increasing attention in hyperspectral imagery classification. Nevertheless, spectral information alone cannot fully represent the inherent spatial distribution of the imagery. In this paper, a spectral-spatial kernel-based method for hyperspectral imagery classification is proposed. Firstly, the spatial feature was extracted by using area median filtering (AMF). Secondly, the result of the AMF was used to construct spatial feature patches according to different window sizes. Finally, using the kernel technique, the spectral feature and the spatial feature were jointly used for classification through a support vector machine (SVM) formulation. The proposed method is therefore called the spectral-spatial kernel-based support vector machine (SSF-SVM). To evaluate the proposed method, experiments are performed on three hyperspectral images. The experimental results show that an improvement is possible with the proposed technique in most of the real-world classification problems.
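
    A minimal sketch of the joint spectral-spatial idea follows, under stated assumptions: a synthetic cube stands in for real imagery, per-band median filtering approximates the AMF step, and the stacked features feed an RBF SVM; the paper's patch construction and kernel composition are simplified.

```python
# A minimal sketch of spectral-spatial SVM classification on a synthetic cube.
import numpy as np
from scipy.ndimage import median_filter
from sklearn.svm import SVC

rng = np.random.default_rng(0)
cube = rng.random((20, 20, 8))                 # (rows, cols, bands), synthetic
spatial = median_filter(cube, size=(3, 3, 1))  # median filtering per band

X = np.concatenate([cube, spatial], axis=2).reshape(-1, 16)  # stacked features
y = rng.integers(0, 3, size=X.shape[0])                      # synthetic labels

clf = SVC(kernel="rbf", gamma="scale").fit(X, y)
print(clf.predict(X[:5]))
```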

  5. Hydrological landscape classification: investigating the performance of HAND based landscape classifications in a central European meso-scale catchment

    Directory of Open Access Journals (Sweden)

    S. Gharari

    2011-11-01

    Full Text Available This paper presents a detailed performance and sensitivity analysis of a recently developed hydrological landscape classification method based on dominant runoff mechanisms. Three landscape classes are distinguished: wetland, hillslope and plateau, corresponding to three dominant hydrological regimes: saturation excess overland flow, storage excess sub-surface flow, and deep percolation. Topography, geology and land use hold the key to identifying these landscapes. The height above the nearest drainage (HAND and the surface slope, which can be easily obtained from a digital elevation model, appear to be the dominant topographical controls for hydrological classification. In this paper several indicators for classification are tested as well as their sensitivity to scale and resolution of observed points (sample size. The best results are obtained by the simple use of HAND and slope. The results obtained compared well with the topographical wetness index. The HAND based landscape classification appears to be an efficient method to ''read the landscape'' on the basis of which conceptual models can be developed.
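
    The classification rule itself is simple enough to sketch; the HAND and slope thresholds below are illustrative assumptions, not the calibrated values from the study.

```python
# A minimal sketch of HAND-plus-slope landscape classification.
import numpy as np

def classify_landscape(hand, slope, hand_thresh=5.0, slope_thresh=0.1):
    """Label cells as wetland, hillslope, or plateau from HAND (m) and slope (-)."""
    labels = np.full(hand.shape, "plateau", dtype=object)
    labels[(hand >= hand_thresh) & (slope >= slope_thresh)] = "hillslope"
    labels[hand < hand_thresh] = "wetland"   # low HAND dominates
    return labels

hand = np.array([[2.0, 8.0], [12.0, 3.0]])
slope = np.array([[0.02, 0.25], [0.05, 0.30]])
print(classify_landscape(hand, slope))
```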

  6. Data Stream Classification Based on the Gamma Classifier

    Directory of Open Access Journals (Sweden)

    Abril Valeria Uriarte-Arcia

    2015-01-01

    Full Text Available The ever increasing data generation confronts us with the problem of handling online massive amounts of information. One of the biggest challenges is how to extract valuable information from these massive continuous data streams during single scanning. In a data stream context, data arrive continuously at high speed; therefore the algorithms developed to address this context must be efficient regarding memory and time management and capable of detecting changes over time in the underlying distribution that generated the data. This work describes a novel method for the task of pattern classification over a continuous data stream based on an associative model. The proposed method is based on the Gamma classifier, which is inspired by the Alpha-Beta associative memories, which are both supervised pattern recognition models. The proposed method is capable of handling the space and time constraints inherent in data stream scenarios. The Data Streaming Gamma classifier (DS-Gamma classifier) implements a sliding window approach to provide concept drift detection and a forgetting mechanism. In order to test the classifier, several experiments were performed using different data stream scenarios with real and synthetic data streams. The experimental results show that the method exhibits competitive performance when compared to other state-of-the-art algorithms.
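
    The sliding-window-with-forgetting idea can be sketched as below; k-NN stands in for the Gamma associative classifier, whose internals the abstract does not spell out, and the window size and stream are illustrative.

```python
# A minimal sketch of sliding-window stream classification with forgetting.
from collections import deque
from sklearn.neighbors import KNeighborsClassifier

class SlidingWindowClassifier:
    def __init__(self, window=100, k=3):
        self.window = deque(maxlen=window)   # old samples are forgotten
        self.k = k

    def update(self, x, y):
        self.window.append((x, y))

    def predict(self, x):
        X, y = zip(*self.window)
        k = min(self.k, len(X))
        return KNeighborsClassifier(n_neighbors=k).fit(X, y).predict([x])[0]

stream = [([0.1, 0.2], 0), ([0.9, 0.8], 1), ([0.2, 0.1], 0), ([0.8, 0.9], 1)]
clf = SlidingWindowClassifier(window=3)
for x, y in stream:
    clf.update(x, y)
print(clf.predict([0.85, 0.9]))  # likely class 1
```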

  7. Comprehensive Study on Lexicon-based Ensemble Classification Sentiment Analysis

    Directory of Open Access Journals (Sweden)

    Łukasz Augustyniak

    2015-12-01

    Full Text Available We propose a novel method for counting sentiment orientation that outperforms supervised learning approaches in time and memory complexity and is not statistically significantly different from them in accuracy. Our method consists of a novel approach to generating unigram, bigram and trigram lexicons. The proposed method, called frequentiment, is based on calculating the frequency of features (words in the document and averaging their impact on the sentiment score as opposed to documents that do not contain these features. Afterwards, we use ensemble classification to improve the overall accuracy of the method. What is important is that the frequentiment-based lexicons with sentiment threshold selection outperform other popular lexicons and some supervised learners, while being 3–5 times faster than the supervised approach. We compare 37 methods (lexicons, ensembles with lexicon’s predictions as input and supervised learners applied to 10 Amazon review data sets and provide the first statistical comparison of the sentiment annotation methods that include ensemble approaches. It is one of the most comprehensive comparisons of domain sentiment analysis in the literature.
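
    The frequency-and-average idea behind such lexicon scoring can be sketched as follows; the lexicon weights, documents and threshold are illustrative assumptions, not the frequentiment lexicons themselves.

```python
# A minimal sketch of lexicon-based sentiment scoring with threshold selection.
lexicon = {"great": 1.5, "good": 1.0, "poor": -1.2, "awful": -2.0}  # illustrative

def sentiment(doc, threshold=0.0):
    words = doc.lower().split()
    # average word impact over the document, zero for out-of-lexicon words
    score = sum(lexicon.get(w, 0.0) for w in words) / max(len(words), 1)
    return "positive" if score > threshold else "negative"

print(sentiment("great book with good plot"))     # positive
print(sentiment("awful pacing and poor ending"))  # negative
```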

  8. A texton-based approach for the classification of lung parenchyma in CT images

    DEFF Research Database (Denmark)

    Gangeh, Mehrdad J.; Sørensen, Lauge; Shaker, Saher B.

    2010-01-01

    In this paper, a texton-based classification system based on raw pixel representation along with a support vector machine with radial basis function kernel is proposed for the classification of emphysema in computed tomography images of the lung. The proposed approach is tested on 168 annotated...

  9. Classification of Polarimetric SAR Image Based on the Subspace Method

    Science.gov (United States)

    Xu, J.; Li, Z.; Tian, B.; Chen, Q.; Zhang, P.

    2013-07-01

    Land cover classification is one of the most significant applications in remote sensing. Compared to optical sensing technologies, synthetic aperture radar (SAR) can penetrate through clouds and has all-weather capabilities. Therefore, land cover classification for SAR images is important in remote sensing. The subspace method is a novel method for SAR data, which reduces data dimensionality by incorporating feature extraction into the classification process. This paper uses the averaged learning subspace method (ALSM), which can be applied to fully polarimetric SAR images for classification. The ALSM algorithm integrates three-component decomposition, eigenvalue/eigenvector decomposition and textural features derived from the gray-level co-occurrence matrix (GLCM). The study site is located in Dingxing County, Hebei Province, China. We compare the subspace method with the traditional supervised Wishart classification. By conducting experiments on a fully polarimetric Radarsat-2 image, we conclude that the proposed method yields higher classification accuracy. Therefore, the ALSM classification method is a feasible and alternative method for SAR images.

  10. Mobile Agent Based Framework for Integrating Digital Library System

    Institute of Scientific and Technical Information of China (English)

    2005-01-01

    Few of the current approaches to achieve the integration of digital library systems have considered the influence of network factors on quality of service for the integrated system of digital libraries. For this reason, a mobile agent based framework for integrating digital library systems is proposed. Based on this framework, a prototype system is implemented and the key techniques for it are described. Compared with the current approaches, using the mobile agent technique to achieve the integration of digital library systems can not only avoid transmitting large amounts of data over the network and lower the system's dependence on network bandwidth, but also improve the quality of service for the integrated system of digital libraries in intermittent or unreliable network connection settings.

  11. Agent-based Modelling, a new kind of research

    DEFF Research Database (Denmark)

    Held, Fabian P.; Wilkinson, Ian F.; Marks, Robert E.;

    2014-01-01

    We discuss the use of Agent-based Modelling for the development and testing of theories about emergent social phenomena in marketing and the social sciences in general. We address both theoretical aspects about the types of phenomena that are suitably addressed with this approach and practical guidance for model development. The main goal of this paper was to make research on complex social systems more accessible and help anticipate and structure the research process. © 2014 Australian and New Zealand Marketing Academy.

  12. A Framework for Agent-based Human Interaction Support

    Directory of Open Access Journals (Sweden)

    Axel Bürkle

    2008-10-01

    Full Text Available In this paper we describe an agent-based infrastructure for multimodal perceptual systems which aims at developing and realizing computer services that are delivered to humans in an implicit and unobtrusive way. The framework presented here supports the implementation of human-centric context-aware applications providing non-obtrusive assistance to participants in events such as meetings, lectures, conferences and presentations taking place in indoor "smart spaces". We emphasize the design and implementation of an agent-based framework that supports "pluggable" service logic in the sense that the service developer can concentrate on coding the service logic independently of the underlying middleware. Furthermore, we give an example of the architecture's ability to support the cooperation of multiple services in a meeting scenario using an intelligent connector service and a semantic web oriented travel service.

  13. Web-based supplier relationship framework using agent systems

    Institute of Scientific and Technical Information of China (English)

    Oboulhas Conrad Tsahat Onesime; XU Xiao-fei(徐晓飞); ZHAN De-chen(战德臣)

    2004-01-01

    In order to enable both manufacturers and suppliers to be profitable in today's highly competitive markets, manufacturers and suppliers must be quick in selecting the best partners, establishing strategic relationships, and collaborating with each other so that they can satisfy changing competitive manufacturing requirements. A web-based supplier relationship (SR) framework is therefore proposed, using multi-agent systems and a linear programming technique to reduce supply cost, increase flexibility and shorten response time. The web-based SR approach is an ideal platform for information exchange that helps buyers and suppliers to maintain the availability of materials in the right quantity, at the right place, and at the right time, and keeps the customer-supplier relationship transparent. A multi-agent system prototype was implemented by simulation, which shows the feasibility of the proposed architecture.

  14. CORBA-Based Analysis of Multi Agent Behavior

    Institute of Scientific and Technical Information of China (English)

    Swapan Bhattacharya; Anirban Banerjee; Shibdas Bandyopadhyay

    2005-01-01

    An agent is a piece of computer software that is capable of taking independent action on behalf of its user or owner. It is an entity with goals, actions and domain knowledge, situated in an environment. Multiagent systems comprise multiple autonomous, interacting software agents. These systems can successfully emulate the entities active in a distributed environment. The analysis of multiagent behavior is studied in this paper based on a specific board game problem similar to the famous game of GO. A framework is developed to define the states of the multiagent entities and measure the convergence metrics for this problem. An analysis of the changes of states leading to the goal state is also made. We support our study of multiagent behavior with simulations based on a CORBA framework in order to substantiate our findings.

  15. Agent-Based Chemical Plume Tracing Using Fluid Dynamics

    Science.gov (United States)

    Zarzhitsky, Dimitri; Spears, Diana; Thayer, David; Spears, William

    2004-01-01

    This paper presents a rigorous evaluation of a novel, distributed chemical plume tracing algorithm. The algorithm is a combination of the best aspects of the two most popular predecessors for this task. Furthermore, it is based on solid, formal principles from the field of fluid mechanics. The algorithm is applied by a network of mobile sensing agents (e.g., robots or micro-air vehicles) that sense the ambient fluid velocity and chemical concentration, and calculate derivatives. The algorithm drives the robotic network to the source of the toxic plume, where measures can be taken to disable the source emitter. This work is part of a much larger effort in research and development of a physics-based approach to developing networks of mobile sensing agents for monitoring, tracking, reporting and responding to hazardous conditions.

  16. An approach for mechanical fault classification based on generalized discriminant analysis

    Institute of Scientific and Technical Information of China (English)

    LI Wei-hua; SHI Tie-lin; YANG Shu-zi

    2006-01-01

    To deal with pattern classification of complicated mechanical faults, an approach to multi-fault classification based on generalized discriminant analysis is presented. Compared with linear discriminant analysis (LDA), generalized discriminant analysis (GDA), one of the nonlinear discriminant analysis methods, is more suitable for classifying linearly non-separable problems. The connection and difference between KPCA (Kernel Principal Component Analysis) and GDA is discussed. KPCA is good at detecting machine abnormality, while GDA performs well in multi-fault classification based on the collection of historical fault symptoms. When the proposed method is applied to air compressor condition classification and gear fault classification, excellent performance in complicated multi-fault classification is presented.

  17. Rapid Occupant Classification System Based Rough Sets Theory

    Directory of Open Access Journals (Sweden)

    Lin Chen

    2012-09-01

    Full Text Available In an intelligent airbag system, the correct classification of occupant type is a precondition for, and plays an important role in, controlling the airbag release time and inflation strength during accidents. In this paper, a novel rapid occupant classification system is proposed, in which tens of pressure sensors collect pressure distribution data in real time and rough sets theory is then applied to extract classification knowledge from the data features. Furthermore, experiments have been done to verify its efficiency and effectiveness.

  18. A NEW SVM BASED EMOTIONAL CLASSIFICATION OF IMAGE

    Institute of Scientific and Technical Information of China (English)

    Wang Weining; Yu Yinglin; Zhang Jianchao

    2005-01-01

    How a high-level emotional representation of art paintings can be inferred from perceptual-level features suited to the particular classes (dynamic vs. static classification) is presented. The key points are feature selection and classification. According to the strong relationship between notable lines of an image and human sensations, a novel feature vector WLDLV (Weighted Line Direction-Length Vector) is proposed, which includes both orientation and length information of lines in an image. Classification is performed by SVM (Support Vector Machine) and images can be classified into dynamic and static. Experimental results demonstrate the effectiveness and superiority of the algorithm.

  19. Thrombin-Based Hemostatic Agent in Primary Total Knee Arthroplasty.

    Science.gov (United States)

    Fu, Xin; Tian, Peng; Xu, Gui-Jun; Sun, Xiao-Lei; Ma, Xin-Long

    2017-02-01

    The present meta-analysis pooled the results from randomized controlled trials (RCTs) to identify and assess the efficacy and safety of thrombin-based hemostatic agent in primary total knee arthroplasty (TKA). Potential academic articles were identified from the Cochrane Library, Medline (1966-2015.5), PubMed (1966-2015.5), Embase (1980-2015.5), and ScienceDirect (1966-2015.5). Relevant journals and the recommendations of expert panels were also searched by using Google search engine. RCTs assessing the efficacy and safety of thrombin-based hemostatic agent in primary TKA were included. Pooling of data was analyzed by RevMan 5.1 (The Cochrane Collaboration, Oxford, UK). A total of four RCTs met the inclusion criteria. The meta-analysis revealed significant differences in postoperative hemoglobin decline (p < 0.00001), total blood loss (p < 0.00001), drainage volume (p = 0.01), and allogenic blood transfusion (p = 0.01) between the treatment group and the control group. No significant differences were found regarding incidence of infection (p = 0.45) and deep vein thrombosis (DVT; p = 0.80) between the groups. Meta-analysis indicated that the application of thrombin-based hemostatic agent before wound closure decreased postoperative hemoglobin decline, drainage volume, total blood loss, and transfusion rate and did not increase the risk of infection, DVT, or other complications. Therefore, the reviewers believe that thrombin-based hemostatic agent is effective and safe in primary TKA.

  20. Essays in Agent-Based Macro and Monetary Economics

    OpenAIRE

    Lengnick, Matthias

    2015-01-01

    This dissertation consists of three major parts. The first part (chapter 2 and 3) presents different models which integrate macroeconomics with agent-based financial markets. These models feature bounded rational expectations. They are applied to analyse the impact of financial market speculation on the macro economy in general and the performance of several kinds of financial transaction taxes as well as conventional and unconventional monetary policy in particular. The second part (chapter ...

  1. Agent Based Model of Young Researchers in Higher Education Institutions

    Directory of Open Access Journals (Sweden)

    Josip Stepanic

    2013-04-01

    Full Text Available Young researchers in higher education institutions generally perform demanding tasks that contribute substantially to their institutions' and societies' innovation production. In order to analyse in more detail the interaction between young researchers and diverse institutions in society, we aim at developing a numerical simulation, an agent-based model. This article presents the foundations of the model and preliminary results of its simulation, along with perspectives for its further development and improvement.

  2. Cognitive Modeling for Agent-Based Simulation of Child Maltreatment

    Science.gov (United States)

    Hu, Xiaolin; Puddy, Richard

    This paper extends previous work to develop cognitive modeling for agent-based simulation of child maltreatment (CM). The developed model is inspired by parental efficacy, parenting stress, and the theory of planned behavior. It provides an explanatory, process-oriented model of CM and incorporates causality relationships and feedback loops from different factors in the social ecology in order to simulate the dynamics of CM. We describe the model and present simulation results to demonstrate the features of this model.

  3. Investigating the feasibility of a BCI-driven robot-based writing agent for handicapped individuals

    Science.gov (United States)

    Syan, Chanan S.; Harnarinesingh, Randy E. S.; Beharry, Rishi

    2014-07-01

    Brain-Computer Interfaces (BCIs) predominantly employ output actuators such as virtual keyboards and wheelchair controllers to enable handicapped individuals to interact and communicate with their environment. However, BCI-based assistive technologies are limited in their application. There is minimal research geared towards granting disabled individuals the ability to communicate using written words. This is a drawback because involving a human attendant in writing tasks can entail a breach of personal privacy where the task entails sensitive and private information such as banking matters. BCI-driven robot-based writing however can provide a safeguard for user privacy where it is required. This study investigated the feasibility of a BCI-driven writing agent using the 3-degree-of-freedom Phantom Omnibot. A full alphanumerical English character set was developed and validated using a teach pendant program in MATLAB. The Omnibot was subsequently interfaced to a P300-based BCI. Three subjects utilised the BCI in the online context to communicate words to the writing robot over a Local Area Network (LAN). The average online letter-wise classification accuracy was 91.43%. The writing agent legibly constructed the communicated letters with minor errors in trajectory execution. The developed system therefore provided a feasible platform for BCI-based writing.

  4. An agent-based multi-scale wind generation model

    Energy Technology Data Exchange (ETDEWEB)

    Kremers, E.; Lewald, N. [Karlsruhe Univ., Karlsruhe (Germany). European Inst. for Energy Research; Barambones, O.; Gonzalez de Durana, J.M. [Univ. of the Basque Country, Vitoria (Spain). Dept. of Engineering

    2009-07-01

    The introduction of renewable energies, the liberalization of energy markets and the emergence of new, distributed producers that feed into the grid at almost every level of the system have all contributed to a paradigm shift in energy systems. This paper presented an agent-based model for simulating wind power systems on multiple time scales. The purpose of the study was to generate a flexible model that would permit simulating the output of a wind farm. The model was developed using multiparadigm modelling. It also combined a variety of approaches such as agent-based modelling, discrete events and dynamic systems. The paper explained the theoretical background concerning the basic models for wind speed generation and power turbines, as well as the fundamentals of agent-based modelling. The implementation of these models was illustrated. The paper also discussed several sample simulations and discussed the application of the model. It was concluded that the paradigm change encompassed new tools and methods that could deal with decentralized decision-making, planning and self-organisation. The large amount of new technologies in the energy production chain requires a shift from a top-down to a more bottom-up approach. 12 refs., 1 tab., 7 figs.

  5. An Efficient Method for Landscape Image Classification and Matching Based on MPEG-7 Descriptors

    OpenAIRE

    2011-01-01

    In this thesis, an efficient approach for a landscape image classification and matching system based on the MPEG-7 (Moving Picture Expert Group) color and shape descriptors is presented. Image classification is the task of deciding whether an image is a landscape or not. The classification uses the dominant color descriptor (DCD) method for finding the dominant color in the image. In DCD we examine whole-image pixel values. Each pixel value contains red, green and blue color values in the RGB color model. After calcul...

  6. Analysis on Design of Kohonen-network System Based on Classification of Complex Signals

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

    The key methods of detection and classification of the electroencephalogram (EEG) used in recent years are introduced. Taking EEG as an example, the design plan of a Kohonen neural network system based on detection and classification of complex signals is proposed, and both the network design and signal processing are analyzed, including pre-processing of signals, extraction of signal features, classification of signals and network topology, etc.

  7. Trace elements based classification on clinkers. Application to Spanish clinkers

    Directory of Open Access Journals (Sweden)

    Tamás, F. D.

    2001-12-01

    Full Text Available The qualitative identification procedure to determine the origin (i.e., manufacturing factory) of Spanish clinkers is described. The classification of clinkers produced in different factories can be based on their trace element content. Approximately fifteen clinker sorts, collected from 11 Spanish cement factories, are analysed to determine their Mg, Sr, Ba, Mn, Ti, Zr, Zn and V content. An expert system formulated by a binary decision tree is designed based on the collected data. The performance of the obtained classifier was measured by ten-fold cross-validation. The results show that the proposed method is useful for building an easy-to-use expert system that is able to determine the origin of a clinker based on its trace element content.


  8. Knowledge-based sea ice classification by polarimetric SAR

    DEFF Research Database (Denmark)

    Skriver, Henning; Dierking, Wolfgang

    2004-01-01

    Polarimetric SAR images acquired at C- and L-band over sea ice in the Greenland Sea, Baltic Sea, and Beaufort Sea have been analysed with respect to their potential for ice type classification. The polarimetric data were gathered by the Danish EMISAR and the US AIRSAR, which both are airborne systems. A hierarchical classification scheme was chosen for sea ice because our knowledge about magnitudes, variations, and dependences of sea ice signatures can be directly considered. The optimal sequence of classification rules and the rules themselves depend on the ice conditions/regimes. The use of the polarimetric phase information improves the classification only in the case of thin ice types but is not necessary for thicker ice (above about 30 cm thickness)...

  9. Plant Electrical Signal Classification Based on Waveform Similarity

    Directory of Open Access Journals (Sweden)

    Yang Chen

    2016-10-01

    Full Text Available (1) Background: Plant electrical signals are important physiological traits which reflect plant physiological state. As a kind of phenotypic data, plant action potential (AP) evoked by external stimuli—e.g., electrical stimulation, environmental stress—may be associated with inhibition of gene expression related to stress tolerance. However, plant AP is a response to environment changes and full of variability. It is an aperiodic signal with refractory period, discontinuity, noise, and artifacts. In consequence, there are still challenges in automatically recognizing and classifying plant AP; (2) Methods: Therefore, we proposed an AP recognition algorithm based on a dynamic difference threshold to extract all waveforms similar to AP. Next, an incremental template matching algorithm was used to classify the AP and non-AP waveforms; (3) Results: Experiment results indicated that the template matching algorithm achieved a classification rate of 96.0%, and it was superior to backpropagation artificial neural networks (BP-ANNs), support vector machine (SVM) and deep learning methods; (4) Conclusion: These findings imply that the proposed methods are likely to expand possibilities for rapidly recognizing and classifying plant action potentials in databases in the future.
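
    The two-stage idea (threshold-based candidate extraction, then template matching) can be sketched as below; the paper's dynamic difference threshold and incremental template update are simplified here to a median-range rule and a fixed template, both illustrative assumptions.

```python
# A minimal sketch of threshold-based waveform extraction plus template matching.
import numpy as np

def detect_candidates(signal, win=20, k=3.0):
    """Flag window starts whose peak-to-peak range exceeds k * median range."""
    ranges = np.array([np.ptp(signal[i:i + win]) for i in range(len(signal) - win)])
    return np.where(ranges > k * np.median(ranges))[0]

def is_ap(segment, template, min_corr=0.8):
    """Classify a candidate by normalized correlation with an AP template."""
    seg = (segment - segment.mean()) / (segment.std() + 1e-9)
    tem = (template - template.mean()) / (template.std() + 1e-9)
    return np.dot(seg, tem) / len(seg) > min_corr

t = np.linspace(0, 1, 200)
template = np.exp(-((t - 0.5) ** 2) / 0.005)           # spike-like template
signal = np.concatenate([np.zeros(200), template, np.zeros(200)])
print(len(detect_candidates(signal)) > 0, is_ap(signal[200:400], template))
```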

  10. Radar-Derived Quantitative Precipitation Estimation Based on Precipitation Classification

    Directory of Open Access Journals (Sweden)

    Lili Yang

    2016-01-01

    Full Text Available A method for improving radar-derived quantitative precipitation estimation is proposed. Tropical vertical profiles of reflectivity (VPRs) are first determined from multiple VPRs. Upon identifying a tropical VPR, the event can be further classified as either tropical-stratiform or tropical-convective rainfall by a fuzzy logic (FL) algorithm. Based on the precipitation-type fields, the reflectivity values are converted into rainfall rate using a Z-R relationship. In order to evaluate the performance of this rainfall classification scheme, three experiments were conducted using three months of data and two study cases. In Experiment I, the Weather Surveillance Radar-1988 Doppler (WSR-88D) default Z-R relationship was applied. In Experiment II, the precipitation regime was separated into convective and stratiform rainfall using the FL algorithm, and corresponding Z-R relationships were used. In Experiment III, the precipitation regime was separated into convective, stratiform, and tropical rainfall, and the corresponding Z-R relationships were applied. The results show that the rainfall rates obtained from all three experiments match closely with the gauge observations, although Experiment II reduced the underestimation seen in Experiment I. Experiment III significantly reduced this underestimation and generated the most accurate radar estimates of rain rate among the three experiments.
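
    The core conversion step is compact enough to sketch. For Z = a·R^b pairs, the convective pair (300, 1.4) is the WSR-88D default; the stratiform and tropical pairs below are common published choices used here as illustrative assumptions rather than the exact relationships of this study.

```python
# A minimal sketch of type-dependent Z-R conversion from reflectivity to rain rate.
def rain_rate(dbz, ptype):
    a, b = {"convective": (300.0, 1.4),
            "stratiform": (200.0, 1.6),
            "tropical":   (250.0, 1.2)}[ptype]   # Z = a * R^b
    z = 10.0 ** (dbz / 10.0)          # dBZ -> linear reflectivity (mm^6 m^-3)
    return (z / a) ** (1.0 / b)       # rain rate in mm/h

print(round(rain_rate(40.0, "convective"), 2))  # ~12 mm/h at 40 dBZ
```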

  11. A tentative classification of paleoweathering formations based on geomorphological criteria

    Science.gov (United States)

    Battiau-Queney, Yvonne

    1996-05-01

    A geomorphological classification is proposed that emphasizes the usefulness of paleoweathering records in any reconstruction of past landscapes. Four main paleoweathering records are recognized: 1. Paleoweathering formations buried beneath a sedimentary or volcanic cover. Most of them are saprolites, sometimes with preserved overlying soils. Ages range from Archean to late Cenozoic times; 2. Paleoweathering formations trapped in karst: some of them have buried pre-existent karst landforms, others have developed simultaneously with the subjacent karst; 3. Relict paleoweathering formations: although inherited, they belong to the present landscape. Some of them are indurated (duricrusts, silcretes, ferricretes,…); others are not and owe their preservation to a stable morphotectonic environment; 4. Polyphased weathering mantles: weathering has taken place in changing geochemical conditions. After examples of each type are provided, the paper considers the relations between chemical weathering and landform development. The climatic significance of paleoweathering formations is discussed. Some remote morphogenic systems have no present equivalent. It is doubtful that chemical weathering alone might lead to widespread planation surfaces. Moreover, classical theories based on sea-level and rivers as the main factors of erosion are not really adequate to explain the observed landscapes.

  12. Classification of CT brain images based on deep learning networks.

    Science.gov (United States)

    Gao, Xiaohong W; Hui, Rui; Tian, Zengmin

    2017-01-01

    While computerised tomography (CT) may have been the first imaging tool to study the human brain, it has not yet been implemented into the clinical decision making process for diagnosis of Alzheimer's disease (AD). On the other hand, being prevalent, inexpensive and non-invasive, CT does present diagnostic features of AD to a great extent. This study explores the significance and impact of applying the burgeoning deep learning techniques to the task of classification of CT brain images, in particular utilising convolutional neural networks (CNNs), aiming at providing supplementary information for the early diagnosis of Alzheimer's disease. Towards this end, three categories of CT images (N = 285) are clustered into three groups, which are AD, lesion (e.g. tumour) and normal ageing. In addition, considering the characteristics of this collection, with larger thickness along the direction of depth (z) (~3-5 mm), an advanced CNN architecture is established integrating both 2D and 3D CNN networks. The fusion of the two CNN networks is subsequently coordinated based on the average of the Softmax scores obtained from both networks, consolidating 2D images along spatial axial directions and 3D segmented blocks respectively. As a result, the classification accuracy rates rendered by this elaborated CNN architecture are 85.2%, 80% and 95.3% for the classes of AD, lesion and normal respectively, with an average of 87.6%. Additionally, this improved CNN network appears to outperform the 2D-only version of the CNN network as well as a number of state-of-the-art hand-crafted approaches, which deliver accuracy rates of 86.3%, 85.6 ± 1.10%, 86.3 ± 1.04%, 85.2 ± 1.60% and 83.1 ± 0.35% for 2D CNN, 2D SIFT, 2D KAZE, 3D SIFT and 3D KAZE respectively. The two major contributions of the paper constitute a new 3-D approach while applying deep learning technique to extract signature information
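
    The fusion step itself (averaging the Softmax scores from the 2D and 3D networks and taking the argmax) can be sketched directly; the score vectors below are illustrative, not the networks' actual outputs.

```python
# A minimal sketch of decision fusion by averaging per-class softmax scores.
import numpy as np

classes = ["AD", "lesion", "normal"]
scores_2d = np.array([0.55, 0.25, 0.20])   # softmax output of the 2D CNN (illustrative)
scores_3d = np.array([0.40, 0.15, 0.45])   # softmax output of the 3D CNN (illustrative)

fused = (scores_2d + scores_3d) / 2.0
print(classes[int(np.argmax(fused))])      # -> "AD"
```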

  13. Group Behavior Learning in Multi-Agent Systems Based on Social Interaction Among Agents

    OpenAIRE

    Zhang, Kun; Maeda, Yoichiro; Takahashi, Yasutake

    2011-01-01

    Research on multi-agent systems, in which autonomous agents are able to learn cooperative behavior, has been the subject of rising expectations in recent years. We have aimed at the group behavior generation of multi-agents who have high levels of autonomous learning ability, like that of human beings, through social interaction between agents to acquire cooperative behavior. The sharing of environment states can improve cooperative ability, and the changing state of the environment in the i...

  14. IMPROVEMENT OF TCAM-BASED PACKET CLASSIFICATION ALGORITHM

    Institute of Scientific and Technical Information of China (English)

    Xu Zhen; Zhang Jun; Rui Liyang; Sun Jun

    2008-01-01

    The feature of Ternary Content Addressable Memories (TCAMs) makes them particularly attractive for IP address lookup and packet classification applications in a router system. However, the limitations of TCAMs impede their utilization. In this paper, the solutions for decreasing the power consumption and avoiding entry expansion in range matching are addressed. Experimental results demonstrate that the proposed techniques can make some big improvements on the performance of TCAMs in IP address lookup and packet classification.

  15. Influence-based autonomy levels in agent Decision-making

    NARCIS (Netherlands)

    Vecht, B. van der; Meyer, A.P.; Neef, R.M.; Dignum, F.; Meyer, J.J.C.

    2007-01-01

    Autonomy is a crucial and powerful feature of agents and it is the subject of much research in the agent field. Controlling the autonomy of agents is a way to coordinate the behavior of groups of agents. Our approach is to look at it as a design problem for agents. We analyze the autonomy of an agen

  16. Tweet-based Target Market Classification Using Ensemble Method

    Directory of Open Access Journals (Sweden)

    Muhammad Adi Khairul Anshary

    2016-09-01

    Full Text Available Target market classification is aimed at focusing marketing activities on the right targets. Classification of target markets can be done through data mining and by utilizing data from social media, e.g. Twitter. The end results of data mining are learning models that can classify new data. Ensemble methods can improve the accuracy of the models and therefore provide better results. In this study, classification of target markets was conducted on a dataset of 3000 tweets in order to extract features. Classification models were constructed to manipulate the training data using two ensemble methods (bagging and boosting). To investigate the effectiveness of the ensemble methods, this study used the CART (classification and regression tree) algorithm for comparison. Three categories of consumer goods (computers, mobile phones and cameras) and three categories of sentiments (positive, negative and neutral) were classified towards three target-market categories. Machine learning was performed using Weka 3.6.9. The results on the test data showed that the bagging method improved the accuracy of CART by 1.9% (to 85.20%). On the other hand, for sentiment classification, the ensemble methods were not successful in increasing the accuracy of CART. The results of this study may be taken into consideration by companies who approach their customers through social media, especially Twitter.
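
    A sketch of the bagging-over-CART comparison follows, using scikit-learn rather than the Weka setup of the study; the synthetic dataset, sample count and number of estimators are illustrative assumptions standing in for the tweet features.

```python
# A minimal sketch of bagging with CART base learners versus a single CART.
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=300, n_features=20, n_classes=3,
                           n_informative=5, random_state=0)  # stand-in for tweet features
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

cart = DecisionTreeClassifier(random_state=0)
bag = BaggingClassifier(estimator=DecisionTreeClassifier(random_state=0),
                        n_estimators=50, random_state=0)
print("CART  :", cart.fit(X_tr, y_tr).score(X_te, y_te))
print("Bagged:", bag.fit(X_tr, y_tr).score(X_te, y_te))
```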

  17. Renoprotection and the Bardoxolone Methyl Story - Is This the Right Way Forward? A Novel View of Renoprotection in CKD Trials: A New Classification Scheme for Renoprotective Agents

    Directory of Open Access Journals (Sweden)

    Macaulay Onuigbo

    2013-04-01

    Full Text Available In the June 2011 issue of the New England Journal of Medicine, the BEAM (Bardoxolone Methyl Treatment: Renal Function in CKD/Type 2 Diabetes) trial investigators rekindled new interest and also some controversy regarding the concept of renoprotection and the role of renoprotective agents, when they reported significant increases in the mean estimated glomerular filtration rate (eGFR) in diabetic chronic kidney disease (CKD) patients with an eGFR of 20-45 ml/min/1.73 m2 of body surface area at enrollment who received the trial drug bardoxolone methyl versus placebo. Unfortunately, subsequent phase IIIb trials failed to show that the drug is a safe alternative renoprotective agent. Current renoprotection paradigms depend wholly and entirely on angiotensin blockade; however, these agents [angiotensin converting enzyme (ACE) inhibitors and angiotensin receptor blockers (ARBs)] have proved to be imperfect renoprotective agents. In this review, we examine the mechanistic limitations of the various previous randomized controlled trials on CKD renoprotection, including the paucity of veritable, elaborate and systematic assessment methods for the documentation and reporting of individual patient-level, drug-related adverse events. We review the evidence base for the presence of putative, multiple independent and unrelated pathogenetic mechanisms that drive (diabetic and non-diabetic) CKD progression. Furthermore, we examine the validity, or lack thereof, of the hyped notion that the blockade of a single molecule (angiotensin II), which can only antagonize the angiotensin cascade, would veritably successfully, consistently and unfailingly deliver adequate and qualitative renoprotection results in (diabetic and non-diabetic) CKD patients. We clearly posit that there is an overarching impetus to arrive at the inference that multiple, disparately diverse and independent pathways, including any veritable combination of the mechanisms that we examine in this review

  18. Hierarchical structure for audio-video based semantic classification of sports video sequences

    Science.gov (United States)

    Kolekar, M. H.; Sengupta, S.

    2005-07-01

    A hierarchical structure for sports event classification based on audio and video content analysis is proposed in this paper. Compared to the event classifications in other games, those of cricket are very challenging and yet unexplored. We have successfully solved cricket video classification problem using a six level hierarchical structure. The first level performs event detection based on audio energy and Zero Crossing Rate (ZCR) of short-time audio signal. In the subsequent levels, we classify the events based on video features using a Hidden Markov Model implemented through Dynamic Programming (HMM-DP) using color or motion as a likelihood function. For some of the game-specific decisions, a rule-based classification is also performed. Our proposed hierarchical structure can easily be applied to any other sports. Our results are very promising and we have moved a step forward towards addressing semantic classification problems in general.
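
    The first level of the hierarchy (event detection from short-time audio energy and zero-crossing rate) can be sketched as below; the frame length, thresholds and synthetic signal are illustrative assumptions standing in for broadcast audio.

```python
# A minimal sketch of audio event detection via short-time energy and ZCR.
import numpy as np

def frame_features(audio, frame_len=400):
    frames = audio[:len(audio) // frame_len * frame_len].reshape(-1, frame_len)
    energy = (frames ** 2).mean(axis=1)                          # short-time energy
    zcr = (np.abs(np.diff(np.sign(frames), axis=1)) > 0).mean(axis=1)  # zero-crossing rate
    return energy, zcr

rng = np.random.default_rng(0)
audio = np.concatenate([0.05 * rng.standard_normal(4000),   # quiet play
                        0.8 * rng.standard_normal(4000)])   # loud event (e.g., cheering)
energy, zcr = frame_features(audio)
events = (energy > 0.1) & (zcr > 0.2)   # illustrative thresholds
print(events)                           # True on the loud frames
```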

  19. Phylogeny of certain biocontrol agents with special reference to nematophagous fungi based on RAPD.

    Science.gov (United States)

    Jarullah, B M S; Subramanian, R B; Jummanah, M S J

    2005-01-01

    A number of phylogenetic studies have been carried out on biocontrol agents having similar biological control activity. However, no work has been carried out to determine the phylogenetic relationship amongst various groups of biological control agents with varied biocontrol properties. Our aim was to derive a phylogenetic relationship between diverse biocontrol agents belonging to the deuteromycetes and determine its correlation with their spore morphology and their biocontrol activity. RAPD was used to assess genomic variability in fungi used as biological control agents, which included ten isolates of nematophagous fungi such as Arthrobotrys sp., Duddingtonia sp., Paecilomyces sp. and Verticillium sp., along with two isolates of fungal biocontrol agents such as Trichoderma sp. and two isolates of entomopathogenic fungi including Beauveria sp. A plant pathogenic fungus, Verticillium alboatrum, was also included to increase the diversity of Deuteromycetes used. A similarity matrix was created using Jaccard's similarity coefficient, and clustering was done using the unweighted pair group arithmetic mean method (UPGMA). The final dendrogram was created using a combination of two programs, Freetree and TreeExplorer. The phylogenetic tree constructed from the RAPD data showed marked genetic variability among different strains of the same species. The spore morphologies of all these fungi were also studied. The phylogenetic pattern could be correlated with the conidial and conidiophore morphology, a criterion commonly used for the classification of fungi in general and Deuteromycetes in particular. Interestingly, the inferred phylogeny showed no significant grouping based on either their biological control properties or the trapping structures amongst the nematophagous fungi as reported earlier by other workers. The phylogenetic pattern was also similar to the tree obtained by comparing the 18S rRNA sequences from the database. The result clearly indicates that the classical

  20. A Multi Agent Based Model for Airport Service Planning

    Directory of Open Access Journals (Sweden)

    W.H. Ip

    2010-09-01

    Full Text Available The aviation industry is highly dynamic and demanding in nature: time and safety are the two most important factors, while one of the major sources of delay is aircraft ground handling, because of its complexity, the large amount of machinery such as vehicles involved, and the extensive communication required. As one of the aircraft ground services providers at Hong Kong International Airport (HKIA), China Aircraft Services Limited (CASL) aims to increase competitiveness by improving the service it provides while also minimizing cost. One of the ways is to optimize the number of maintenance vehicles allocated in order to minimize both the chance of delay and operating costs. In this paper, an agent-based model is proposed to support decision making in vehicle allocation. An overview of the aircraft ground services procedures is first given, together with different optimization methods suggested by researchers. Then, the agent-based approach is introduced, and in the latter part of the paper a multi-agent system is built which supports CASL decisions in optimizing the allocation of maintenance vehicles. The application provides flexibility for inputting the number of different kinds of vehicles, the simulation duration and the aircraft arrival rate in order to simulate different scenarios which occur at HKIA.

  1. Markov chain aggregation for agent-based models

    CERN Document Server

    Banisch, Sven

    2016-01-01

    This self-contained text develops a Markov chain approach that makes the rigorous analysis of a class of microscopic models that specify the dynamics of complex systems at the individual level possible. It presents a general framework of aggregation in agent-based and related computational models, one which makes use of lumpability and information theory in order to link the micro and macro levels of observation. The starting point is a microscopic Markov chain description of the dynamical process in complete correspondence with the dynamical behavior of the agent-based model (ABM), which is obtained by considering the set of all possible agent configurations as the state space of a huge Markov chain. An explicit formal representation of a resulting “micro-chain” including microscopic transition rates is derived for a class of models by using the random mapping representation of a Markov process. The type of probability distribution used to implement the stochastic part of the model, which defines the upd...

  2. An agent-based microsimulation of critical infrastructure systems

    Energy Technology Data Exchange (ETDEWEB)

    BARTON,DIANNE C.; STAMBER,KEVIN L.

    2000-03-29

    US infrastructures provide essential services that support economic prosperity and quality of life. Today, the latest threat to these infrastructures is the increasing complexity and interconnectedness of the system. On balance, added connectivity will improve economic efficiency; however, increased coupling could also result in situations where a disturbance in an isolated infrastructure unexpectedly cascades across diverse infrastructures. An understanding of the behavior of complex systems can be critical to understanding and predicting infrastructure responses to unexpected perturbation. Sandia National Laboratories has developed an agent-based model of critical US infrastructures using time-dependent Monte Carlo methods and a genetic algorithm learning classifier system to control decision making. The model is currently under development and contains agents that represent several areas within the interconnected infrastructures, including electric power and fuel supply. Previous work shows that agent-based simulation models have the potential to improve the accuracy of complex system forecasting and to provide new insights into the factors that are the primary drivers of emergent behaviors in interdependent systems. Simulation results can be examined both computationally and analytically, offering new ways of theorizing about the impact of perturbations to an infrastructure network.

  3. Amino acid–based surfactants: New antimicrobial agents.

    Science.gov (United States)

    Pinazo, A; Manresa, M A; Marques, A M; Bustelo, M; Espuny, M J; Pérez, L

    2016-02-01

    The rapid increase of drug resistant bacteria makes necessary the development of new antimicrobial agents. Synthetic amino acid-based surfactants constitute a promising alternative to conventional antimicrobial compounds given that they can be prepared from renewable raw materials. In this review, we discuss the structural features that promote antimicrobial activity of amino acid-based surfactants. Monocatenary, dicatenary and gemini surfactants that contain different amino acids on the polar head and show activity against bacteria are revised. The synthesis and basic physico-chemical properties have also been included.

  4. Statistical Agent Based Modelization of the Phenomenon of Drug Abuse

    Science.gov (United States)

    di Clemente, Riccardo; Pietronero, Luciano

    2012-07-01

    We introduce a statistical agent based model to describe the phenomenon of drug abuse and its dynamical evolution at the individual and global level. The agents are heterogeneous with respect to their intrinsic inclination to drugs, their budget attitude and social environment. The various levels of drug use were inspired by the professional description of the phenomenon, and this permits a direct comparison with all available data. We show that certain elements have a great importance in starting the use of drugs, for example rare events in personal experience which permit one to overcome the barrier of drug use occasionally. The analysis of how the system reacts to perturbations is very important to understand its key elements, and it provides strategies for effective policy making. The present model represents the first step of a realistic description of this phenomenon and can be easily generalized in various directions.

  5. Statistical Agent Based Modelization of the Phenomenon of Drug Abuse

    CERN Document Server

    Di Clemente, Riccardo; 10.1038/srep00532

    2012-01-01

    We introduce a statistical agent based model to describe the phenomenon of drug abuse and its dynamical evolution at the individual and global level. The agents are heterogeneous with respect to their intrinsic inclination to drugs, their budget attitude and social environment. The various levels of drug use were inspired by the professional description of the phenomenon, and this permits a direct comparison with all available data. We show that certain elements have a great importance in starting the use of drugs, for example rare events in personal experience which permit one to overcome the barrier of drug use occasionally. The analysis of how the system reacts to perturbations is very important to understand its key elements, and it provides strategies for effective policy making. The present model represents the first step of a realistic description of this phenomenon and can be easily generalized in various directions.

  6. Agent-Based Crowd Simulation of Daily Goods Traditional Markets

    Directory of Open Access Journals (Sweden)

    Purba D. Kusuma

    2016-10-01

    Full Text Available In a traditional market, buyers are not only moving from one place to another, but also interacting with traders to purchase their products. When a buyer interacts with a trader, he blocks some space in the corridor. Besides, while buyers are walking, they may be attracted by non-preferred traders, though they may have preferred traders. These situations have not been covered in most existing crowd simulation models. Hence, these existing models cannot be directly implemented in traditional market environments since they mainly focus on crowd members' movement. This research emphasizes a crowd model that includes simplified movement and unplanned purchasing models. This model has been developed based on the intelligent agent concept, where each agent represents a buyer. Two traditional markets are used for simulation in this research, namely Gedongkuning and Ngasem, in Yogyakarta, Indonesia. The simulation shows that some places are visited more frequently than others. Overall, the simulation result matches the situation found in the real world.

  7. Capacity Analysis for Parallel Runway through Agent-Based Simulation

    Directory of Open Access Journals (Sweden)

    Yang Peng

    2013-01-01

    Full Text Available Parallel runways are the mainstream structure of Chinese hub airports, the runway is often the bottleneck of an airport, and the evaluation of its capacity is of great importance to airport management. This study outlines a model, multi-agent architecture, implementation approach, and software prototype of a simulation system for evaluating runway capacity. Agent Unified Modeling Language (AUML) is applied to illustrate the inbound and departure procedures of planes and to design the agent-based model. The model is evaluated experimentally, and its quality is studied in comparison with models created by SIMMOD and Arena. The results appear highly efficient, so the method can be applied to parallel runway capacity evaluation, and the model offers favorable flexibility and extensibility.

  8. Building Distributed Web GIS: A Mobile-Agent Based Approach

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    The diversity of GISs and the wide-spread availability of the WWW have led to an increasing amount of research on integrating a variety of heterogeneous and autonomous GISs in a cooperative environment to construct a new generation of GIS characterized by open architecture, distributed computation, interoperability, and extensibility. Our on-going research project MADGIS (Mobile Agent based Distributed Geographic Information System) is reported, in which we propose the architecture of MADGIS to meet the requirements of integrating distributed GIS applications in an Internet environment. We first describe the architecture of MADGIS, with detailed discussions focusing on the structure of the client site, server site and mobile agent in MADGIS. Then we explore key techniques for MADGIS implementation.

  9. Object Classification based Context Management for Identity Management in Internet of Things

    DEFF Research Database (Denmark)

    Mahalle, Parikshit N.; Prasad, Neeli R.; Prasad, Ramjee

    2013-01-01

    ...and there is a need of a context-aware access control solution for IdM. Confronting uncertainty of different types of objects in IoT is not easy. This paper presents a logical framework for object classification in context-aware IoT, as richer contextual information creates an impact on the access control. This paper proposes decision theory based object classification to provide contextual information and context management. Simulation results show that the proposed object classification is useful to improve network lifetime. Results also give motivation for object classification in terms of energy consumption...

  10. Speech/Music Classification Enhancement for 3GPP2 SMV Codec Based on Support Vector Machine

    Science.gov (United States)

    Kim, Sang-Kyun; Chang, Joon-Hyuk

    In this letter, we propose a novel approach to speech/music classification based on the support vector machine (SVM) to improve the performance of the 3GPP2 selectable mode vocoder (SMV) codec. We first analyze the features and the classification method used in the real-time speech/music classification algorithm in SMV, and then apply the SVM for enhanced speech/music classification. For evaluation of performance, we compare the proposed algorithm and the traditional algorithm of the SMV. The performance of the proposed system is evaluated under various environments and shows better performance compared to the original method in the SMV.

  11. Signal classification method based on data mining for multi-mode radar

    Institute of Scientific and Technical Information of China (English)

    Qiang Guo; Pulong Nan; Jian Wan

    2016-01-01

    For multi-mode radars working on the modern electronic battlefield, different working states of one single radar are prone to being classified as multiple emitters when traditional classification methods are adopted to process intercepted signals, which has a negative effect on signal classification. A classification method based on spatial data mining is presented to address the above challenge. Inspired by the idea of spatial data mining, the classification method applies a nuclear field to depict the distribution information of pulse samples in feature space, and digs out the hidden cluster information by analyzing distribution characteristics. In addition, a membership-degree criterion to quantify the correlation among all classes is established, which ensures the classification accuracy of signal samples. Numerical experiments show that the presented method can effectively prevent different working states of a multi-mode emitter from being classified as several emitters, and achieves higher classification accuracy.

  12. Classification of Noisy Data: An Approach Based on Genetic Algorithms and Voronoi Tessellation

    DEFF Research Database (Denmark)

    Khan, Abdul Rauf; Schiøler, Henrik; Knudsen, Torben;

    2016-01-01

    Classification is one of the major constituents of the data-mining toolkit. The well-known methods for classification are built on either the principle of logic or statistical/mathematical reasoning. In this article we propose: (1) a different strategy, which is based on the partitioning of the information space; and (2) use of the genetic algorithm to solve combinatorial problems for classification. In particular, we will implement our methodology to solve complex classification problems and compare the performance of our classifier with other well-known methods (SVM, KNN, and ANN). The results of this study suggest that our proposed methodology is specialized to deal with the classification problem of highly imbalanced classes with significant overlap.
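
    As a rough illustration of the two ingredients named above, the sketch below classifies points by their nearest class-labelled generator (i.e., by Voronoi cell) and uses a very simple genetic algorithm to tune the generator positions for training accuracy. This is one reading of the approach under stated assumptions, not the authors' implementation.

        import numpy as np

        rng = np.random.default_rng(0)

        def predict(generators, gen_labels, X):
            # Each point takes the class of its nearest generator, i.e. its Voronoi cell.
            d = np.linalg.norm(X[:, None, :] - generators[None, :, :], axis=2)
            return gen_labels[np.argmin(d, axis=1)]

        def fitness(generators, gen_labels, X, y):
            return np.mean(predict(generators, gen_labels, X) == y)

        def ga_voronoi(X, y, n_per_class=3, pop_size=30, iters=100):
            classes = np.unique(y)
            gen_labels = np.repeat(classes, n_per_class)
            k, d = len(gen_labels), X.shape[1]
            pop = rng.uniform(X.min(0), X.max(0), size=(pop_size, k, d))
            for _ in range(iters):
                scores = np.array([fitness(g, gen_labels, X, y) for g in pop])
                elite = pop[np.argsort(scores)[::-1][: pop_size // 2]]   # selection
                children = elite + rng.normal(0.0, 0.1, elite.shape)     # mutation
                pop = np.concatenate([elite, children])
            best = pop[np.argmax([fitness(g, gen_labels, X, y) for g in pop])]
            return best, gen_labels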

  13. SAR images classification method based on Dempster-Shafer theory and kernel estimate

    Institute of Scientific and Technical Information of China (English)

    He Chu; Xia Guisong; Sun Hong

    2007-01-01

    To study the scene classification in the Synthetic Aperture Radar (SAR) image, a novel method based on kernel estimate, with the Markov context and Dempster-Shafer evidence theory, is proposed. Initially, a nonparametric Probability Density Function (PDF) estimate method is introduced to describe the scene of SAR images. And then, under the Markov context, both the determinate PDF and the kernel estimate method are adopted, respectively, to form a primary classification. Next, the primary classification results are fused using the evidence theory in an unsupervised way to get the scene classification. Finally, a regularization step is used, in which an iterated maximum selecting approach is introduced to control the fragments and modify the errors of the classification. Use of the kernel estimate and evidence theory can describe the complicated scenes with little prior knowledge and eliminate the ambiguities of the primary classification results. Experimental results on real SAR images illustrate a rather impressive performance.

  14. Classification of PolSAR image based on quotient space theory

    Science.gov (United States)

    An, Zhihui; Yu, Jie; Liu, Xiaomeng; Liu, Limin; Jiao, Shuai; Zhu, Teng; Wang, Shaohua

    2015-12-01

    In order to improve the classification accuracy, quotient space theory was applied in the classification of polarimetric SAR (PolSAR) images. Firstly, the Yamaguchi decomposition method is adopted, which can get the polarimetric characteristics of the image. At the same time, the Gray Level Co-occurrence Matrix (GLCM) and Gabor wavelets are used to get texture features, respectively. Secondly, combining texture features and polarimetric characteristics, a Support Vector Machine (SVM) classifier is used for initial classification to establish different granularity spaces. Finally, according to quotient space granularity synthesis theory, we merge and reason over the different quotient spaces to get the comprehensive classification result. The method proposed in this paper is tested with L-band AIRSAR data of San Francisco Bay. The result shows that the comprehensive classification result based on quotient space theory is superior to the classification result of a single granularity space.
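
    The texture side of such a pipeline can be sketched as follows with scikit-image (version 0.19+ for the graycomatrix naming) and scikit-learn: GLCM statistics and Gabor magnitudes per image patch, fed to an SVM. The patches and labels variables are assumed to come from an upstream step, and the Yamaguchi polarimetric features would simply be concatenated onto the same feature vectors.

        import numpy as np
        from skimage.feature import graycomatrix, graycoprops
        from skimage.filters import gabor
        from sklearn.svm import SVC

        def texture_features(patch):
            # patch: 2-D uint8 array (one quantized PolSAR intensity channel).
            glcm = graycomatrix(patch, distances=[1], angles=[0, np.pi / 2],
                                levels=256, symmetric=True, normed=True)
            feats = [graycoprops(glcm, prop).mean()
                     for prop in ("contrast", "homogeneity", "energy", "correlation")]
            for freq in (0.1, 0.3):
                real, imag = gabor(patch.astype(float), frequency=freq)
                feats.append(np.sqrt(real ** 2 + imag ** 2).mean())
            return np.array(feats)

        # patches and labels are assumed to come from an upstream step.
        X = np.vstack([texture_features(p) for p in patches])
        clf = SVC(kernel="rbf").fit(X, labels)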

  15. Review of Remotely Sensed Imagery Classification Patterns Based on Object-oriented Image Analysis

    Institute of Scientific and Technical Information of China (English)

    LIU Yongxue; LI Manchun; MAO Liang; XU Feifei; HUANG Shuo

    2006-01-01

    With the wide use of high-resolution remotely sensed imagery, the object-oriented remotely sensed information classification pattern has been intensively studied. Starting with the definition of the object-oriented remotely sensed information classification pattern and a literature review of related research progress, this paper sums up 4 developing phases of the object-oriented classification pattern during the past 20 years. Then, we discuss the three aspects of methodology in detail, namely remotely sensed imagery segmentation, feature analysis and feature selection, and classification rule generation, through comparing them with the per-pixel remotely sensed information classification method. At last, this paper presents several points that deserve attention in future studies on the object-oriented RS information classification pattern: 1) developing robust and highly effective image segmentation algorithms for multi-spectral RS imagery; 2) improving the feature set to include edge, spatial-adjacency and temporal characteristics; 3) studying decision-tree-based classifiers for classification rule generation; 4) developing evaluation methods for the results of object-oriented classification.

  16. Land cover classification using random forest with genetic algorithm-based parameter optimization

    Science.gov (United States)

    Ming, Dongping; Zhou, Tianning; Wang, Min; Tan, Tian

    2016-07-01

    Land cover classification based on remote sensing imagery is an important means to monitor, evaluate, and manage land resources. However, it requires robust classification methods that allow accurate mapping of complex land cover categories. Random forest (RF) is a powerful machine-learning classifier that can be used in land remote sensing. However, two important parameters of RF classification, namely, the number of trees and the number of variables tried at each split, affect classification accuracy. Thus, optimal parameter selection is an inevitable problem in RF-based image classification. This study uses the genetic algorithm (GA) to optimize the two parameters of RF to produce optimal land cover classification accuracy. HJ-1B CCD2 image data are used to classify six different land cover categories in Changping, Beijing, China. Experimental results show that GA-RF can avoid arbitrariness in the selection of parameters. The experiments also compare land cover classification results obtained with the GA-RF method, the traditional RF method (with default parameters), and the support vector machine method. With the GA-RF method, classification accuracies improved by 1.02% and 6.64% over the traditional RF and SVM methods, respectively. The comparison results show that GA-RF is a feasible solution for land cover classification without compromising accuracy or incurring excessive time.
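
    A minimal sketch of the GA tuning loop for the two RF parameters named above (number of trees, variables tried per split) might look as follows; population size, mutation ranges and the dataset variables X and y are placeholders, not values from the paper.

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(42)

        def fitness(params, X, y):
            n_trees, max_feats = int(params[0]), int(params[1])
            rf = RandomForestClassifier(n_estimators=n_trees, max_features=max_feats,
                                        random_state=0, n_jobs=-1)
            return cross_val_score(rf, X, y, cv=3).mean()   # CV accuracy as fitness

        def ga_tune_rf(X, y, pop_size=10, iters=15):
            n_feat = X.shape[1]
            pop = np.column_stack([rng.integers(10, 501, pop_size),
                                   rng.integers(1, n_feat + 1, pop_size)])
            for _ in range(iters):
                scores = np.array([fitness(p, X, y) for p in pop])
                elite = pop[np.argsort(scores)[::-1][: pop_size // 2]]  # selection
                kids = elite.copy()                                     # mutation below
                kids[:, 0] = np.clip(kids[:, 0] + rng.integers(-50, 51, len(kids)), 10, 1000)
                kids[:, 1] = np.clip(kids[:, 1] + rng.integers(-2, 3, len(kids)), 1, n_feat)
                pop = np.vstack([elite, kids])
            return pop[np.argmax([fitness(p, X, y) for p in pop])]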

  17. Objected-oriented remote sensing image classification method based on geographic ontology model

    Science.gov (United States)

    Chu, Z.; Liu, Z. J.; Gu, H. Y.

    2016-11-01

    Nowadays, with the development of high-resolution remote sensing imagery and the wide application of laser point cloud data, object-oriented remote sensing classification based on the characteristic knowledge of multi-source spatial data has become an important trend in the field of remote sensing image classification, gradually replacing the traditional approach of improving algorithms to optimize image classification results. For this purpose, the paper puts forward a remote sensing image classification method that uses the characteristic knowledge of multi-source spatial data to build a geographic ontology semantic network model, and carries out an object-oriented classification experiment to implement urban feature classification. The experiment uses the Protégé software developed by Stanford University in the United States and the intelligent image analysis software eCognition as the experimental platform, and uses hyperspectral imagery and Lidar data obtained through flight over DaFeng City in JiangSu as the main data sources. First, the hyperspectral imagery is used to obtain feature knowledge of the remote sensing image and related special indices; second, the Lidar data are used to generate an nDSM (Normalized Digital Surface Model), providing elevation information; finally, the image feature knowledge, special indices and elevation information are combined to build the geographic ontology semantic network model that implements urban feature classification. The experimental results show that this method achieves significantly higher classification accuracy than traditional classification algorithms, and performs especially well on building classification. The method not only exploits the advantages of multi-source spatial data such as remote sensing imagery and Lidar data, but also realizes multi-source spatial data knowledge integration and application

  18. Classification and Identification of Over-voltage Based on HHT and SVM

    Institute of Scientific and Technical Information of China (English)

    WANG Jing; YANG Qing; CHEN Lin; SIMA Wenxia

    2012-01-01

    This paper proposes an effective method for over-voltage classification based on the Hilbert-Huang transform (HHT). The Hilbert-Huang transform is composed of empirical mode decomposition (EMD) and the Hilbert transform. Nine kinds of common power system over-voltages are calculated and analyzed by HHT. Based on the instantaneous amplitude spectrum, the Hilbert marginal spectrum and the Hilbert time-frequency spectrum, three kinds of over-voltage characteristic quantities are obtained. A hierarchical classification system is built based on HHT and a support vector machine (SVM). This classification system is tested on 106 field over-voltage signals, and the average classification rate is 94.3%. This research shows that HHT is an effective time-frequency analysis algorithm for over-voltage classification and identification.
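
    A hedged sketch of the HHT feature-extraction step is given below. It assumes the third-party PyEMD package (pip package EMD-signal) for the EMD stage, approximates the Hilbert marginal spectrum by an amplitude-weighted histogram of instantaneous frequency, and takes the signals, labels and sampling rate as given; none of this reproduces the paper's exact feature definitions.

        import numpy as np
        from PyEMD import EMD                 # third-party package EMD-signal (assumed)
        from scipy.signal import hilbert
        from sklearn.svm import SVC

        def hht_features(signal, fs, n_imfs=4, n_bins=16):
            imfs = EMD().emd(signal)[:n_imfs]
            marginal = np.zeros(n_bins)
            for imf in imfs:
                analytic = hilbert(imf)
                amp = np.abs(analytic)
                inst_freq = np.diff(np.unwrap(np.angle(analytic))) * fs / (2 * np.pi)
                hist, _ = np.histogram(inst_freq, bins=n_bins, range=(0, fs / 2),
                                       weights=amp[:-1])
                marginal += hist              # crude Hilbert marginal spectrum
            return marginal / (marginal.sum() + 1e-12)

        # signals: list of over-voltage waveforms; labels assumed given.
        X = np.vstack([hht_features(s, fs=10_000) for s in signals])
        clf = SVC(kernel="rbf").fit(X, labels)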

  19. Dihedral-Based Segment Identification and Classification of Biopolymers II: Polynucleotides

    Science.gov (United States)

    2013-01-01

    In an accompanying paper (Nagy, G.; Oostenbrink, C. Dihedral-based segment identification and classification of biopolymers I: Proteins. J. Chem. Inf. Model. 2013, DOI: 10.1021/ci400541d), we introduce a new algorithm for structure classification of biopolymeric structures based on main-chain dihedral angles. The DISICL algorithm (short for DIhedral-based Segment Identification and CLassification) classifies segments of structures containing two central residues. Here, we introduce the DISICL library for polynucleotides, which is based on the dihedral angles ε, ζ, and χ for the two central residues of a three-nucleotide segment of a single strand. Seventeen distinct structural classes are defined for nucleotide structures, some of which—to our knowledge—were not described previously in other structure classification algorithms. In particular, DISICL also classifies noncanonical single-stranded structural elements. DISICL is applied to databases of DNA and RNA structures containing 80,000 and 180,000 segments, respectively. The classifications according to DISICL are compared to those of another popular classification scheme in terms of the amount of classified nucleotides, average occurrence and length of structural elements, and pairwise matches of the classifications. While the detailed classification of DISICL adds sensitivity to a structure analysis, it can be readily reduced to eight simplified classes providing a more general overview of the secondary structure in polynucleotides. PMID:24364355

  20. A Bayesian Based Search and Classification System for Product Information of Agricultural Logistics Information Technology

    OpenAIRE

    2011-01-01

    Part 1: Decision Support Systems, Intelligent Systems and Artificial Intelligence Applications; International audience; In order to meet the needs of users who search for agricultural products logistics information technology, this paper introduces a search and classification system for agricultural products logistics information technology. Firstly, a dictionary of field concept words was built based on analyzing the characteristics of agricultural products logistics in...
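
    The Bayesian classification step can be sketched as below; the field-dictionary entries and the training variables train_docs and train_labels are invented placeholders standing in for the paper's concept dictionary and corpus.

        from sklearn.feature_extraction.text import CountVectorizer
        from sklearn.naive_bayes import MultinomialNB
        from sklearn.pipeline import make_pipeline

        # Hypothetical excerpt of the field-concept dictionary (lowercase, since
        # CountVectorizer lowercases input by default).
        field_dictionary = ["cold chain", "rfid", "traceability", "warehouse"]

        clf = make_pipeline(
            CountVectorizer(vocabulary=field_dictionary, ngram_range=(1, 2)),
            MultinomialNB(),
        )
        clf.fit(train_docs, train_labels)   # documents and categories assumed given
        print(clf.predict(["rfid-based cold chain monitoring for produce"]))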

  1. Dihedral-based segment identification and classification of biopolymers II: polynucleotides.

    Science.gov (United States)

    Nagy, Gabor; Oostenbrink, Chris

    2014-01-27

    In an accompanying paper (Nagy, G.; Oostenbrink, C. Dihedral-based segment identification and classification of biopolymers I: Proteins. J. Chem. Inf. Model. 2013, DOI: 10.1021/ci400541d), we introduce a new algorithm for structure classification of biopolymeric structures based on main-chain dihedral angles. The DISICL algorithm (short for DIhedral-based Segment Identification and CLassification) classifies segments of structures containing two central residues. Here, we introduce the DISICL library for polynucleotides, which is based on the dihedral angles ε, ζ, and χ for the two central residues of a three-nucleotide segment of a single strand. Seventeen distinct structural classes are defined for nucleotide structures, some of which--to our knowledge--were not described previously in other structure classification algorithms. In particular, DISICL also classifies noncanonical single-stranded structural elements. DISICL is applied to databases of DNA and RNA structures containing 80,000 and 180,000 segments, respectively. The classifications according to DISICL are compared to those of another popular classification scheme in terms of the amount of classified nucleotides, average occurrence and length of structural elements, and pairwise matches of the classifications. While the detailed classification of DISICL adds sensitivity to a structure analysis, it can be readily reduced to eight simplified classes providing a more general overview of the secondary structure in polynucleotides.

  2. A Kernel-Based Nonlinear Representor with Application to Eigenface Classification

    Institute of Scientific and Technical Information of China (English)

    ZHANG Jing; LIU Ben-yong; TAN Hao

    2004-01-01

    This paper presents a classifier named kernel-based nonlinear representor (KNR) for optimal representation of pattern features. Adopting the Gaussian kernel, with the kernel width adaptively estimated by a simple technique, it is applied to eigenface classification. Experimental results on the ORL face database show that it improves the classification rate by around 6 percentage points over the Euclidean distance classifier.

  3. Using Discrete Loss Functions and Weighted Kappa for Classification: An Illustration Based on Bayesian Network Analysis

    Science.gov (United States)

    Zwick, Rebecca; Lenaburg, Lubella

    2009-01-01

    In certain data analyses (e.g., multiple discriminant analysis and multinomial log-linear modeling), classification decisions are made based on the estimated posterior probabilities that individuals belong to each of several distinct categories. In the Bayesian network literature, this type of classification is often accomplished by assigning…
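
    A small worked example of the general idea: given posterior probabilities over ordered categories, assign the category that minimizes the expected discrete loss, then score agreement with weighted kappa. The numbers are made up for illustration.

        import numpy as np
        from sklearn.metrics import cohen_kappa_score

        posteriors = np.array([[0.1, 0.3, 0.6],     # assumed posterior row per case
                               [0.5, 0.4, 0.1]])
        K = posteriors.shape[1]
        # Discrete loss grows with the distance between true and assigned category.
        loss = np.abs(np.arange(K)[:, None] - np.arange(K)[None, :])
        expected_loss = posteriors @ loss           # E[loss | assign column j]
        decisions = expected_loss.argmin(axis=1)
        true_labels = np.array([2, 0])
        print(decisions, cohen_kappa_score(true_labels, decisions, weights="linear"))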

  4. Initial steps towards an evidence-based classification system for golfers with a physical impairment

    NARCIS (Netherlands)

    Stoter, Inge K; Hettinga, Florentina J; Altmann, Viola; Eisma, Wim; Arendzen, Hans; Bennett, Tony; van der Woude, Lucas H; Dekker, Rienk

    2015-01-01

    PURPOSE: The present narrative review aims to make a first step towards an evidence-based classification system in handigolf following the International Paralympic Committee (IPC). It intends to create a conceptual framework of classification for handigolf and an agenda for future research. METHOD:

  5. Initial steps towards an evidence-based classification system for golfers with a physical impairment

    NARCIS (Netherlands)

    Stoter, Inge K.; Hettinga, Florentina J.; Altmann, Viola; Eisma, Wim; Arendzen, Hans; Bennett, Tony; van der Woude, Lucas H.; Dekker, Rienk

    2017-01-01

    Purpose: The present narrative review aims to make a first step towards an evidence-based classification system in handigolf following the International Paralympic Committee (IPC). It intends to create a conceptual framework of classification for handigolf and an agenda for future research. Method:

  6. Multi-label literature classification based on the Gene Ontology graph

    Directory of Open Access Journals (Sweden)

    Lu Xinghua

    2008-12-01

    Full Text Available Abstract Background The Gene Ontology is a controlled vocabulary for representing knowledge related to genes and proteins in a computable form. The current effort of manually annotating proteins with the Gene Ontology is outpaced by the rate of accumulation of biomedical knowledge in literature, which urges the development of text mining approaches to facilitate the process by automatically extracting the Gene Ontology annotation from literature. The task is usually cast as a text classification problem, and contemporary methods are confronted with unbalanced training data and the difficulties associated with multi-label classification. Results In this research, we investigated the methods of enhancing automatic multi-label classification of biomedical literature by utilizing the structure of the Gene Ontology graph. We have studied three graph-based multi-label classification algorithms, including a novel stochastic algorithm and two top-down hierarchical classification methods for multi-label literature classification. We systematically evaluated and compared these graph-based classification algorithms to a conventional flat multi-label algorithm. The results indicate that, through utilizing the information from the structure of the Gene Ontology graph, the graph-based multi-label classification methods can significantly improve predictions of the Gene Ontology terms implied by the analyzed text. Furthermore, the graph-based multi-label classifiers are capable of suggesting Gene Ontology annotations (to curators that are closely related to the true annotations even if they fail to predict the true ones directly. A software package implementing the studied algorithms is available for the research community. Conclusion Through utilizing the information from the structure of the Gene Ontology graph, the graph-based multi-label classification methods have better potential than the conventional flat multi-label classification approach to facilitate
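
    One of the top-down hierarchical schemes can be sketched roughly as follows, using a toy ontology fragment, logistic regression as the base learner, and boolean label vectors per term; the paper's stochastic algorithm is not shown, and each training subset is assumed to contain both classes.

        import networkx as nx
        from sklearn.linear_model import LogisticRegression

        # Toy ontology fragment: edges point from parent term to child term.
        go = nx.DiGraph([("GO:root", "GO:a"), ("GO:root", "GO:b"), ("GO:a", "GO:c")])

        def train_top_down(X, labels):
            # labels: dict mapping term -> boolean numpy vector over documents.
            models = {}
            for term in go.nodes:
                if term == "GO:root":
                    continue
                parent = next(iter(go.predecessors(term)))
                mask = labels[parent] if parent != "GO:root" else slice(None)
                models[term] = LogisticRegression().fit(X[mask], labels[term][mask])
            return models

        def predict_top_down(models, x):
            # x: numpy feature vector. Descend into a child term only if its
            # parent was predicted positive.
            predicted, frontier = set(), ["GO:root"]
            while frontier:
                node = frontier.pop()
                for child in go.successors(node):
                    if models[child].predict(x.reshape(1, -1))[0]:
                        predicted.add(child)
                        frontier.append(child)
            return predicted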

  7. Agent-based modelling of consumer energy choices

    Science.gov (United States)

    Rai, Varun; Henry, Adam Douglas

    2016-06-01

    Strategies to mitigate global climate change should be grounded in a rigorous understanding of energy systems, particularly the factors that drive energy demand. Agent-based modelling (ABM) is a powerful tool for representing the complexities of energy demand, such as social interactions and spatial constraints. Unlike other approaches for modelling energy demand, ABM is not limited to studying perfectly rational agents or to abstracting micro details into system-level equations. Instead, ABM provides the ability to represent behaviours of energy consumers -- such as individual households -- using a range of theories, and to examine how the interaction of heterogeneous agents at the micro-level produces macro outcomes of importance to the global climate, such as the adoption of low-carbon behaviours and technologies over space and time. We provide an overview of ABM work in the area of consumer energy choices, with a focus on identifying specific ways in which ABM can improve understanding of both fundamental scientific and applied aspects of the demand side of energy to aid the design of better policies and programmes. Future research needs for improving the practice of ABM to better understand energy demand are also discussed.
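
    In the spirit of the ABMs surveyed here, a minimal threshold-adoption toy model is sketched below: heterogeneous households adopt a low-carbon technology when peer influence plus economic benefit crosses an individual threshold. All parameters are illustrative only.

        import numpy as np

        rng = np.random.default_rng(1)
        n, steps = 500, 50
        threshold = rng.uniform(0.2, 0.9, n)        # heterogeneous adoption thresholds
        neighbors = [rng.choice(n, 8, replace=False) for _ in range(n)]
        adopted = np.zeros(n, dtype=bool)
        adopted[rng.choice(n, 10, replace=False)] = True   # seed early adopters

        for t in range(steps):
            benefit = 0.3 + 0.005 * t               # slowly improving economics
            peer = np.array([adopted[nb].mean() for nb in neighbors])
            adopted |= (0.6 * peer + 0.4 * benefit) > threshold

        print("final adoption rate:", adopted.mean())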

  8. Using the Agent-Based Modeling in Economic Field

    Directory of Open Access Journals (Sweden)

    Nora Mihail

    2006-12-01

    Full Text Available The last decade of the 20th century witnessed the emergence of a new scientific field, which is usually defined as the study of “complex adaptive systems”. This field, generically named Complexity Sciences, shares its subject, the general properties of complex systems across traditional disciplinary boundaries, with cybernetics and general systems theory. But the development of Complexity Sciences approaches is driven by the extensive use of Agent-Based Models (ABM) as a research tool and by an emphasis on systems, such as markets, populations or ecologies, which are less integrated or “organized” than the ones, such as companies and economies, intensively studied by the traditional disciplines. For ABM, a complex system is a system of individual agents who have the freedom to act in ways that are not always totally predictable, and whose actions are interconnected such that one agent’s actions change the context (environment) for other agents. There are many examples of such complex systems: the stock market, the human body's immune system, a business organization, an institution, a work team, a family etc.

  9. Leveraging Sequence Classification by Taxonomy-Based Multitask Learning

    Science.gov (United States)

    Widmer, Christian; Leiva, Jose; Altun, Yasemin; Rätsch, Gunnar

    In this work we consider an inference task that biologists are very good at: deciphering biological processes by bringing together knowledge that has been obtained by experiments using various organisms, while respecting the differences and commonalities of these organisms. We look at this problem from a sequence analysis point of view, where we aim at solving the same classification task in different organisms. We investigate the challenge of combining information from several organisms, where we consider the relation between the organisms to be defined by a tree structure derived from their phylogeny. Multitask learning, a machine learning technique that recently received considerable attention, considers the problem of learning across tasks that are related to each other. We treat each organism as one task and present three novel multitask learning methods to handle situations in which the relationships among tasks can be described by a hierarchy. These algorithms are designed for large-scale applications and are therefore applicable to problems with a large number of training examples, which are frequently encountered in sequence analysis. We perform experimental analyses on synthetic data sets in order to illustrate the properties of our algorithms. Moreover, we consider a problem from genomic sequence analysis, namely splice site recognition, to illustrate the usefulness of our approach. We show that intelligently combining data from 15 eukaryotic organisms can indeed significantly improve the prediction performance compared to traditional learning approaches. On a broader perspective, we expect that algorithms like the ones presented in this work have the potential to complement and enrich the strategy of homology-based sequence analysis that is currently the quasi-standard in biological sequence analysis.

  10. SPAM CLASSIFICATION BASED ON SUPERVISED LEARNING USING MACHINE LEARNING TECHNIQUES

    Directory of Open Access Journals (Sweden)

    T. Hamsapriya

    2011-12-01

    Full Text Available E-mail is one of the most popular and frequently used ways of communication due to its worldwide accessibility, relatively fast message transfer, and low sending cost. The flaws in the e-mail protocols and the increasing amount of electronic business and financial transactions directly contribute to the increase in e-mail-based threats. Email spam is one of the major problems of today’s Internet, bringing financial damage to companies and annoying individual users. Spam emails invade users without their consent and fill their mail boxes. They consume network capacity as well as time in checking and deleting spam mails. The vast majority of Internet users are outspoken in their disdain for spam, although enough of them respond to commercial offers that spam remains a viable source of income to spammers. While most users want to do the right thing to avoid and get rid of spam, they need clear and simple guidelines on how to behave. In spite of all the measures taken to eliminate spam, it has not yet been eradicated. Also, when the countermeasures are over-sensitive, even legitimate emails will be eliminated. Among the approaches developed to stop spam, filtering is one of the most important techniques. Much research in spam filtering has been centered on the more sophisticated classifier-related issues. In recent days, machine learning for spam classification is an important research issue. This work explores and identifies the effectiveness of different learning algorithms for classifying spam messages from e-mail. A comparative analysis among the algorithms has also been presented.
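
    The comparative set-up can be sketched as follows: several supervised learners trained on the same bag-of-words features, with the corpus variables emails and is_spam assumed given. The learners shown are common choices, not necessarily the exact set compared in the paper.

        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.naive_bayes import MultinomialNB
        from sklearn.svm import LinearSVC
        from sklearn.tree import DecisionTreeClassifier
        from sklearn.model_selection import cross_val_score
        from sklearn.pipeline import make_pipeline

        # emails: list of message strings; is_spam: 0/1 labels (assumed given).
        for name, learner in [("naive Bayes", MultinomialNB()),
                              ("linear SVM", LinearSVC()),
                              ("decision tree", DecisionTreeClassifier())]:
            pipe = make_pipeline(TfidfVectorizer(stop_words="english"), learner)
            score = cross_val_score(pipe, emails, is_spam, cv=5).mean()
            print(f"{name}: {score:.3f}")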

  11. Ultrasonic signal classification based on ambiguity plane feature

    Institute of Scientific and Technical Information of China (English)

    Du Xiuli; Wang Yan; Shen Yi

    2009-01-01

    The ambiguity function (AF) is proposed to represent the ultrasonic signal, to resolve the preprocessing problem of different center frequencies and different arriving times among ultrasonic signals for feature extraction, as well as to offer time-frequency features for signal classification. Moreover, the Karhunen-Loeve (K-L) transform is considered to extract signal features from the ambiguity plane, and then the features are presented to a probabilistic neural network (PNN) for signal classification. Experimental results show that the ambiguity function eliminates the difference of center frequency and arriving time existing in ultrasonic signals, and ambiguity plane features extracted by the K-L transform describe the signals of different classes effectively in a reduced dimensional space. The classification result suggests that the ambiguity plane features obtain better performance than the features extracted by the wavelet transform (WT).
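
    A rough sketch of such a pipeline follows: a discrete (asymmetric) ambiguity plane computed via an FFT over lag products, reduced by PCA (a Karhunen-Loeve transform estimated from sample data), and classified. Since scikit-learn has no probabilistic neural network, an MLP stands in for the PNN; the variables echoes and labels are assumed given, with equal-length complex signals.

        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.neural_network import MLPClassifier   # stand-in for a PNN

        def ambiguity_plane(s, max_lag=32):
            # Asymmetric discrete AF: FFT over t of s[t + lag] * conj(s[t]).
            n = len(s)
            rows = []
            for lag in range(max_lag):
                prod = s[lag:] * np.conj(s[:n - lag])
                rows.append(np.abs(np.fft.fft(prod, n)))   # Doppler axis via FFT
            return np.array(rows)

        # echoes: equal-length complex ultrasonic signals; labels assumed given.
        X = np.vstack([ambiguity_plane(s).ravel() for s in echoes])
        feats = PCA(n_components=20).fit_transform(X)      # K-L transform on samples
        clf = MLPClassifier(hidden_layer_sizes=(32,)).fit(feats, labels)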

  12. Algebraic classification of higher dimensional spacetimes based on null alignment

    CERN Document Server

    Ortaggio, Marcello; Pravdova, Alena

    2012-01-01

    We review recent developments and applications of the classification of the Weyl tensor in higher dimensional Lorentzian geometries. First, we discuss the general setup, i.e. main definitions and methods for the classification, some refinements and the generalized Newman-Penrose and Geroch-Held-Penrose formalisms. Next, we summarize general results, such as a partial extension of the Goldberg-Sachs theorem, characterization of spacetimes with vanishing (or constant) curvature invariants and the peeling behaviour in asymptotically flat spacetimes. Finally, we discuss certain invariantly defined families of metrics and their relation with the Weyl tensor classification, including: Kundt and Robinson-Trautman spacetimes; the Kerr-Schild ansatz in a constant-curvature background; purely electric and purely magnetic spacetimes; direct and (some) warped products; and geometries with certain symmetries. To conclude, some applications to quadratic gravity are also overviewed.

  13. NONSUBSAMPLED CONTOURLET TRANSFORM BASED CLASSIFICATION OF MICROCALCIFICATION IN DIGITAL MAMMOGRAMS

    Directory of Open Access Journals (Sweden)

    J. S. Leena Jasmine

    2013-01-01

    Full Text Available Mammogram is the best available radiographic method to detect breast cancer in the early stage. However, detecting microcalcification clusters in the early stage is a tough task for the radiologist. Herein we present a novel approach for classifying microcalcification in digital mammograms using the Nonsubsampled Contourlet Transform (NSCT) and Support Vector Machine (SVM). The classification of microcalcification is achieved by extracting the microcalcification features from the Contourlet coefficients of the image, and the outcomes are used as input to the SVM for classification. The system classifies the mammogram images as normal or abnormal, and the abnormal severity as benign or malignant. The evaluation of the system is carried out using the Mammography Image Analysis Society (MIAS) database. The experimental result shows that the proposed method provides an improved classification rate.

  14. MASS CLASSIFICATION IN DIGITAL MAMMOGRAMS BASED ON DISCRETE SHEARLET TRANSFORM

    Directory of Open Access Journals (Sweden)

    J. Amjath Ali

    2013-01-01

    Full Text Available The most significant health problem in the world is breast cancer, and early detection is the key to predicting it. Mammography is the most reliable method to diagnose breast cancer at the earliest stage. The classification of the two most common findings in digital mammograms, microcalcifications and masses, is valuable for early detection. Since the appearance of masses is similar to the surrounding parenchyma, their classification is not an easy task. In this study, an efficient approach to classify masses in the Mammography Image Analysis Society (MIAS) database mammogram images is presented. The key features used for the classification are the energies of the shearlet-decomposed image. These features are fed into an SVM classifier to classify mass/non-mass images and also benign/malignant. The results demonstrate that the proposed shearlet energy features outperform the wavelet energy features in terms of accuracy.

  15. Support vector machine classification trees based on fuzzy entropy of classification.

    Science.gov (United States)

    de Boves Harrington, Peter

    2017-02-15

    The support vector machine (SVM) is a powerful classifier that has recently been implemented in a classification tree (SVMTreeG). This classifier partitioned the data by finding gaps in the data space. For large and complex datasets, there may be no gaps in the data space, confounding this type of classifier. A novel algorithm was devised that uses fuzzy entropy to find optimal partitions for situations when clusters of data overlap in the data space. Also, a kernel version of the fuzzy entropy algorithm was devised. A fast support vector machine implementation is used that has no cost C or slack variables to optimize. Statistical comparisons using bootstrapped Latin partitions among the tree classifiers were made using a synthetic XOR data set, validated with ten prediction sets comprising 50,000 objects, and a data set of NMR spectra obtained from 12 tea sample extracts.

  16. Seafloor Sediment Classification Based on Multibeam Sonar Data

    Institute of Scientific and Technical Information of China (English)

    ZHOU Xinghua; CHEN Yongqi

    2004-01-01

    Multibeam sonars can provide hydrographic-quality depth data as well as hold the potential to provide calibrated measurements of the seafloor acoustic backscattering strength. There has been much interest in utilizing backscatter data and images from multibeam sonar for seabed type identification, and many results have been obtained. This paper presents a focused review of several main methods and recent developments in seafloor classification utilizing multibeam sonar data or/and images, including power spectral analysis methods, texture analysis, traditional Bayesian classification theory and the most active neural network approaches.

  17. Agent-Based Computing in Distributed Adversarial Planning

    Science.gov (United States)

    2010-08-09

  18. A Multi-Label Classification Approach Based on Correlations Among Labels

    Directory of Open Access Journals (Sweden)

    Raed Alazaidah

    2015-02-01

    Full Text Available Multi-label classification is concerned with learning from a set of instances that are associated with a set of labels; that is, an instance could be associated with multiple labels at the same time. This task occurs frequently in application areas like text categorization, multimedia classification, bioinformatics, protein function classification and semantic scene classification. Current multi-label classification methods could be divided into two categories. The first is called problem transformation methods, which transform the multi-label classification problem into a single label classification problem, and then apply any single label classifier to solve the problem. The second category is called algorithm adaptation methods, which adapt an existing single label classification algorithm to handle multi-label data. In this paper, we propose a multi-label classification approach based on correlations among labels that uses both problem transformation methods and algorithm adaptation methods. The approach begins with transforming the multi-label dataset into a single label dataset using a least frequent label criterion, and then applies the PART algorithm on the transformed dataset. The output of the approach is a set of multi-label rules. The approach also tries to benefit from positive correlations among labels using the predictive Apriori algorithm. The proposed approach has been evaluated using two multi-label datasets named Emotions and Yeast and three evaluation measures (Accuracy, Hamming Loss, and Harmonic Mean). The experiments showed that the proposed approach has fair accuracy in comparison to other related methods.
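
    The transformation step described above can be sketched as follows; a decision tree stands in for the PART rule learner (which scikit-learn does not provide), the indicator matrix Y and features X are assumed given, and every instance is assumed to carry at least one label.

        import numpy as np
        from sklearn.tree import DecisionTreeClassifier

        # X: feature matrix; Y: binary indicator matrix, one column per label
        # (assumed given, with at least one label per instance).
        label_freq = Y.sum(axis=0)

        def least_frequent(row):
            # Keep only the rarest of the labels present on this instance.
            present = np.flatnonzero(row)
            return present[np.argmin(label_freq[present])]

        y_single = np.array([least_frequent(r) for r in Y])
        rules = DecisionTreeClassifier(max_depth=5).fit(X, y_single)  # PART stand-in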

  19. A method for cloud detection and opacity classification based on ground based sky imagery

    Directory of Open Access Journals (Sweden)

    M. S. Ghonima

    2012-07-01

    Full Text Available Digital images of the sky obtained using a total sky imager (TSI) are classified pixel by pixel into clear sky, optically thin and optically thick clouds. A new classification algorithm was developed that compares the pixel red-blue ratio (RBR) to the RBR of a clear sky library (CSL) generated from images captured on clear days. The difference, rather than the ratio, between pixel RBR and CSL RBR resulted in more accurate cloud classification. High correlation between TSI image RBR and aerosol optical depth (AOD) measured by an AERONET photometer was observed and motivated the addition of a haze correction factor (HCF) to the classification model to account for variations in AOD. Thresholds for clear and thick clouds were chosen based on a training image set and validated with a set of manually annotated images. Misclassifications of clear and thick clouds into the opposite category were less than 1%. Thin clouds were classified with an accuracy of 60%. Accurate cloud detection and opacity classification techniques will improve the accuracy of short-term solar power forecasting.
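
    The per-pixel decision rule can be written down compactly as below. The threshold values and the way the haze correction factor enters are placeholders chosen for illustration, not the paper's calibrated values.

        import numpy as np

        def classify_sky(rgb, csl_rbr, hcf=1.0, thin_thr=0.05, thick_thr=0.20):
            # rgb: H x W x 3 image array; csl_rbr: clear-sky-library RBR per pixel.
            r = rgb[..., 0].astype(float)
            b = rgb[..., 2].astype(float)
            rbr = r / np.maximum(b, 1.0)
            diff = rbr - hcf * csl_rbr       # difference, not ratio, per the abstract
            classes = np.zeros(diff.shape, dtype=np.uint8)   # 0 = clear sky
            classes[diff > thin_thr] = 1                     # 1 = optically thin cloud
            classes[diff > thick_thr] = 2                    # 2 = optically thick cloud
            return classes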

  20. A method for cloud detection and opacity classification based on ground based sky imagery

    Directory of Open Access Journals (Sweden)

    M. S. Ghonima

    2012-11-01

    Full Text Available Digital images of the sky obtained using a total sky imager (TSI) are classified pixel by pixel into clear sky, optically thin and optically thick clouds. A new classification algorithm was developed that compares the pixel red-blue ratio (RBR) to the RBR of a clear sky library (CSL) generated from images captured on clear days. The difference, rather than the ratio, between pixel RBR and CSL RBR resulted in more accurate cloud classification. High correlation between TSI image RBR and aerosol optical depth (AOD) measured by an AERONET photometer was observed and motivated the addition of a haze correction factor (HCF) to the classification model to account for variations in AOD. Thresholds for clear and thick clouds were chosen based on a training image set and validated with a set of manually annotated images. Misclassifications of clear and thick clouds into the opposite category were less than 1%. Thin clouds were classified with an accuracy of 60%. Accurate cloud detection and opacity classification techniques will improve the accuracy of short-term solar power forecasting.

  1. An Agent-based Model of a Capitalist Economy

    Institute of Scientific and Technical Information of China (English)

    ZHANG Li-jun; CHENG Dai-zhan

    2002-01-01

    In this paper we investigate the stabilizing role of taxation in an agent based model of a capitalist economy. The model may be considered as a control system with taxation as the control, but it differs from conventional models of control systems: it has some significant characteristics of complex systems. The system is described and studied by combining mathematical formulas with computer simulations. Several related concepts such as "output" and "control" are reconsidered from new viewpoints. Via this problem we explore some phenomena in controlled complex systems.

  2. Agent Based Approaches to Engineering Autonomous Space Software

    CERN Document Server

    Dennis, Louise A; Lincoln, Nicholas; Lisitsa, Alexei; Veres, Sandor M

    2010-01-01

    Current approaches to the engineering of space software such as satellite control systems are based around the development of feedback controllers using packages such as MatLab's Simulink toolbox. These provide powerful tools for engineering real time systems that adapt to changes in the environment but are limited when the controller itself needs to be adapted. We are investigating ways in which ideas from temporal logics and agent programming can be integrated with the use of such control systems to provide a more powerful layer of autonomous decision making. This paper will discuss our initial approaches to the engineering of such systems.

  3. Metathesis access to monocyclic iminocyclitol-based therapeutic agents

    Directory of Open Access Journals (Sweden)

    Albert Demonceau

    2011-05-01

    Full Text Available By focusing on recent developments on natural and non-natural azasugars (iminocyclitols, this review bolsters the case for the role of olefin metathesis reactions (RCM, CM as key transformations in the multistep syntheses of pyrrolidine-, piperidine- and azepane-based iminocyclitols, as important therapeutic agents against a range of common diseases and as tools for studying metabolic disorders. Considerable improvements brought about by introduction of one or more metathesis steps are outlined, with emphasis on the exquisite steric control and atom-economical outcome of the overall process. The comparative performance of several established metathesis catalysts is also highlighted.

  4. Agent-based Algorithm for Spatial Distribution of Objects

    KAUST Repository

    Collier, Nathan

    2012-06-02

    In this paper we present an agent-based algorithm for the spatial distribution of objects. The algorithm is a generalization of the bubble mesh algorithm, initially created for the point insertion stage of the meshing process of the finite element method. The bubble mesh algorithm treats objects in space as bubbles, which repel and attract each other. The dynamics of each bubble are approximated by solving a series of ordinary differential equations. We present numerical results for a meshing application as well as a graph visualization application.
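
    A small sketch of the bubble dynamics follows: pairwise short-range repulsion and long-range attraction with velocity damping, integrated by explicit Euler steps. The force law and integrator are simplifications, not the paper's exact formulation.

        import numpy as np

        def relax_bubbles(pos, r0=0.1, k=1.0, damping=0.9, dt=0.01, steps=500):
            # Bubbles repel below the target spacing r0 and attract above it.
            vel = np.zeros_like(pos)
            for _ in range(steps):
                diff = pos[:, None, :] - pos[None, :, :]
                dist = np.linalg.norm(diff, axis=2) + np.eye(len(pos))
                mag = k * (r0 - dist) / dist        # >0 repulsive, <0 attractive
                np.fill_diagonal(mag, 0.0)          # no self-force
                force = (mag[..., None] * diff).sum(axis=1)
                vel = damping * (vel + dt * force)  # damped explicit Euler step
                pos = pos + dt * vel
            return pos

        points = relax_bubbles(np.random.default_rng(0).random((100, 2)))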

  5. Ontology-based, multi-agent support of production management

    Science.gov (United States)

    Meridou, Despina T.; Inden, Udo; Rückemann, Claus-Peter; Patrikakis, Charalampos Z.; Kaklamani, Dimitra-Theodora I.; Venieris, Iakovos S.

    2016-06-01

    Over recent years, reported incidents of failed aircraft ramp-ups or delayed small-lot production have increased substantially. In this paper, we present a production management platform that combines agent-based techniques with the Service Oriented Architecture paradigm. This platform takes advantage of the functionality offered by the semantic web language OWL, which allows the users and services of the platform to speak a common language and, at the same time, facilitates risk management and decision making.

  6. Agent-based simulation of electricity markets. A literature review

    Energy Technology Data Exchange (ETDEWEB)

    Sensfuss, F.; Ragwitz, M. [Fraunhofer-Institut fuer Systemtechnik und Innovationsforschung (ISI), Karlsruhe (Germany); Genoese, M.; Moest, D. [Karlsruhe Univ. (T.H.) (Germany). Inst. fuer Industriebetriebslehre und Industrielle Produktion

    2007-07-01

    Liberalisation, climate policy and promotion of renewable energy are challenges to players of the electricity sector in many countries. Policy makers have to consider issues like market power, bounded rationality of players and the appearance of fluctuating energy sources in order to provide adequate legislation. Furthermore, the interactions between markets and environmental policy instruments become an issue of increasing importance. A promising approach for the scientific analysis of these developments is the field of agent-based simulation. The goal of this article is to provide an overview of the current work applying this methodology to the analysis of electricity markets. (orig.)

  7. Segmentation-Based PolSAR Image Classification Using Visual Features: RHLBP and Color Features

    Directory of Open Access Journals (Sweden)

    Jian Cheng

    2015-05-01

    Full Text Available A segmentation-based fully-polarimetric synthetic aperture radar (PolSAR image classification method that incorporates texture features and color features is designed and implemented. This method is based on the framework that conjunctively uses statistical region merging (SRM for segmentation and support vector machine (SVM for classification. In the segmentation step, we propose an improved local binary pattern (LBP operator named the regional homogeneity local binary pattern (RHLBP to guarantee the regional homogeneity in PolSAR images. In the classification step, the color features extracted from false color images are applied to improve the classification accuracy. The RHLBP operator and color features can provide discriminative information to separate those pixels and regions with similar polarimetric features, which are from different classes. Extensive experimental comparison results with conventional methods on L-band PolSAR data demonstrate the effectiveness of our proposed method for PolSAR image classification.
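
    A hedged sketch of the feature side is shown below, with the standard uniform LBP from scikit-image standing in for the paper's regional-homogeneity variant (RHLBP) and a single color histogram standing in for the full color feature set; the SRM segmentation is assumed done upstream, yielding region pairs and labels.

        import numpy as np
        from skimage.feature import local_binary_pattern
        from sklearn.svm import SVC

        def region_features(gray_region, color_region, p=8, r=1.0):
            # Uniform LBP gives integer codes in [0, p + 1], hence p + 2 bins.
            lbp = local_binary_pattern(gray_region, P=p, R=r, method="uniform")
            lbp_hist, _ = np.histogram(lbp, bins=p + 2, range=(0, p + 2), density=True)
            hue_hist, _ = np.histogram(color_region[..., 0], bins=16, range=(0, 256),
                                       density=True)
            return np.concatenate([lbp_hist, hue_hist])

        # regions: (gray, color) array pairs from SRM; region_labels assumed given.
        X = np.vstack([region_features(g, c) for g, c in regions])
        clf = SVC(kernel="rbf").fit(X, region_labels)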

  8. The method of narrow-band audio classification based on universal noise background model

    Science.gov (United States)

    Rui, Rui; Bao, Chang-chun

    2013-03-01

    Audio classification is the basis of content-based audio analysis and retrieval. Conventional classification methods mainly depend on feature extraction from the whole audio clip, which increases the time required for classification. An approach for classifying a narrow-band audio stream based on frame-level feature extraction is presented in this paper. The audio signals are divided into speech, instrumental music, song with accompaniment and noise using a Gaussian mixture model (GMM). In order to cope with changing acoustic environments, a universal noise background model (UNBM) covering white noise, street noise, factory noise and car interior noise is built. In addition, three feature schemes are considered to optimize feature selection. The experimental results show that the proposed algorithm achieves high accuracy for audio classification, especially under each noise background used, and keeps the classification time under one second.
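
    The GMM classification stage can be sketched as follows; train_frames is an assumed dictionary of per-class frame-feature matrices (e.g., MFCC-like features), and the noise model trained on pooled noise recordings plays the role of the universal noise background model.

        from sklearn.mixture import GaussianMixture

        classes = ["speech", "music", "song", "noise"]
        models = {}
        for c in classes:
            gmm = GaussianMixture(n_components=16, covariance_type="diag")
            models[c] = gmm.fit(train_frames[c])   # per-class frame features, assumed given
        # The "noise" model, trained on pooled white/street/factory/car noise,
        # plays the role of the universal noise background model (UNBM).

        def classify(frames):
            # Pick the class whose mixture gives the highest mean log-likelihood.
            scores = {c: models[c].score(frames) for c in classes}
            return max(scores, key=scores.get)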

  9. Maximum-margin based representation learning from multiple atlases for Alzheimer's disease classification.

    Science.gov (United States)

    Min, Rui; Cheng, Jian; Price, True; Wu, Guorong; Shen, Dinggang

    2014-01-01

    In order to establish the correspondences between different brains for comparison, spatial normalization based morphometric measurements have been widely used in the analysis of Alzheimer's disease (AD). In the literature, different subjects are often compared in one atlas space, which may be insufficient in revealing complex brain changes. In this paper, instead of deploying one atlas for feature extraction and classification, we propose a maximum-margin based representation learning (MMRL) method to learn the optimal representation from multiple atlases. Unlike traditional methods that perform the representation learning separately from the classification, we propose to learn the new representation jointly with the classification model, which is more powerful in discriminating AD patients from normal controls (NC). We evaluated the proposed method on the ADNI database, and achieved 90.69% for AD/NC classification and 73.69% for p-MCI/s-MCI classification.

  10. INSPECTING COMPLIANCE TO MANY RULES: AN AGENT-BASED MODEL

    Directory of Open Access Journals (Sweden)

    Slaven Smojver

    2016-12-01

    Full Text Available The ever-increasing scope and complexity of regulations and other rules that govern human society emphasise the importance of inspecting compliance with those rules. Often-used approaches to the inspection of compliance suffer from drawbacks such as overly idealistic assumptions and narrowness of application. Specifically, inspection models are frequently limited to situations where the inspected entity has to comply with only one rule. Furthermore, inspection strategies regularly overlook useful and available information such as the varying costs of compliance with different rules. This article presents an agent-based model for the inspection of compliance with many rules, which addresses the abovementioned drawbacks. In the article, crime-economic, game-theoretic and agent-based modelling approaches to inspection are briefly described, as well as their impact on the model. The model is described and a simulation of a simplified version of the model is presented. The obtained results demonstrate that inspection strategies which take into account rules’ compliance costs perform significantly better than random strategies and better than cycle-based strategies. Additionally, the results encourage further, wider testing and validation of the model.

  11. Salient Feature Identification and Analysis using Kernel-Based Classification Techniques for Synthetic Aperture Radar Automatic Target Recognition

    Science.gov (United States)

    2014-03-27

  12. Histotype-based prognostic classification of gastric cancer

    Institute of Scientific and Technical Information of China (English)

    Anna Maria Chiaravalli; Catherine Klersy; Alessandro Vanoli; Andrea Ferretti; Carlo Capella; Enrico Solcia

    2012-01-01

    AIM: To test the efficiency of a recently proposed histotype-based grading system in a consecutive series of gastric cancers. METHODS: Two hundred advanced gastric cancers operated upon in 1980-1987 and followed for a median 159 mo were investigated on hematoxylin-eosin-stained sections to identify low-grade [muconodular, well differentiated tubular, diffuse desmoplastic and high lymphoid response (HLR)], high-grade (anaplastic and mucinous invasive) and intermediate-grade (ordinary cohesive, diffuse and mucinous) cancers, in parallel with a previously investigated series of 292 cases. In addition, immunohistochemical analyses for CD8, CD11 and HLA-DR antigens, pancytokeratin and podoplanin, as well as immunohistochemical and molecular tests for microsatellite DNA instability and in situ hybridization for the Epstein-Barr virus (EBV) EBER1 gene were performed. Patient survival was assessed with death rates per 100 person-years and with Kaplan-Meier or Cox model estimates. RESULTS: Collectively, the four low-grade histotypes accounted for 22% and the two high-grade histotypes for 7% of the consecutive cancers investigated, while the remaining 71% of cases were intermediate-grade cancers, with highly significant, stage-independent survival differences among the three tumor grades (P = 0.004 for grade 1 vs 2 and P = 0.0019 for grade 2 vs grade 3), thus confirming the results in the original series. A combined analysis of 492 cases showed an improved prognostic value of histotype-based grading compared with the Lauren classification. In addition, it allowed better characterization of rare histotypes, particularly the three subsets of prognostically different mucinous neoplasms, of which 10 ordinary mucinous cancers showed stage-inclusive survival worse than that of 20 muconodular (P = 0.037) and better than that of 21 high-grade (P < 0.001) cases. Tumors with high-level microsatellite DNA instability (MSI-H) or EBV infection, together with a third subset negative for both conditions, formed the

  13. Analysis of uncertainty in multi-temporal object-based classification

    Science.gov (United States)

    Löw, Fabian; Knöfel, Patrick; Conrad, Christopher

    2015-07-01

    Agricultural management increasingly uses crop maps based on classification of remotely sensed data. However, classification errors can translate to errors in model outputs, for instance agricultural production monitoring (yield, water demand) or crop acreage calculation. Hence, knowledge of the spatial variability of classifier performance is important information for the user, but this is not provided by traditional assessments of accuracy, which are based on the confusion matrix. In this study, classification uncertainty was analyzed, based on the support vector machines (SVM) algorithm. SVM was applied to multi-spectral time series data of RapidEye from different agricultural landscapes and years. Entropy was calculated as a measure of classification uncertainty, based on the per-object class membership estimations from the SVM algorithm. Permuting all possible combinations of available images allowed investigating the impact of the image acquisition frequency and timing, respectively, on the classification uncertainty. Results show that multi-temporal datasets decrease classification uncertainty for different crops compared to single data sets, but there was no "one-image-combination-fits-all" solution. The number and acquisition timing of the images, for which a decrease in uncertainty could be realized, proved to be specific to a given landscape, and for each crop they differed across different landscapes. For some crops, an increase of uncertainty was observed when increasing the quantity of images, even if classification accuracy was improved. Random forest regression was employed to investigate the impact of different explanatory variables on the observed spatial pattern of classification uncertainty. It was strongly influenced by factors related to the agricultural management and training sample density. Lower uncertainties were revealed for fields close to rivers or irrigation canals. This study demonstrates that classification uncertainty estimates
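
    The uncertainty measure itself is compact: Shannon entropy of the per-object class membership probabilities from a probability-calibrated SVM. A minimal sketch, with the training and object-feature variables assumed given:

        import numpy as np
        from sklearn.svm import SVC

        # X_train, y_train: training data; X_objects: per-object features (assumed given).
        svm = SVC(probability=True).fit(X_train, y_train)
        proba = svm.predict_proba(X_objects)               # one row per image object
        entropy = -np.sum(proba * np.log(proba + 1e-12), axis=1)
        # High entropy flags objects whose class assignment is uncertain.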

  14. Emotion of Physiological Signals Classification Based on TS Feature Selection

    Institute of Scientific and Technical Information of China (English)

    Wang Yujing; Mo Jianlin

    2015-01-01

    This paper proposes a TS-MLP method for emotion recognition from physiological signals. It recognizes emotion by using Tabu search to select features of the emotional physiological signals and a multilayer perceptron to classify the emotion. Simulations show that it achieves good emotion classification performance.

  15. Laguerre Kernels –Based SVM for Image Classification

    Directory of Open Access Journals (Sweden)

    Ashraf Afifi

    2014-01-01

    Full Text Available Support vector machines (SVMs) have been promising methods for classification and regression analysis because of their solid mathematical foundations, which convey several salient properties that other methods hardly provide. However, the performance of SVMs is very sensitive to how the kernel function is selected; the challenge is to choose a kernel function that yields accurate data classification. In this paper, we introduce a set of new kernel functions derived from the generalized Laguerre polynomials. The proposed kernels could improve the classification accuracy of SVMs for both linear and nonlinear data sets. The proposed kernel functions satisfy Mercer’s condition and orthogonality properties, which are important and useful in some applications, for example when the number of support vectors is needed as in feature selection. The performance of the generalized Laguerre kernels is evaluated in comparison with the existing kernels. It was found that the choice of the kernel function, and the values of the parameters for that kernel, are critical for a given amount of data. The proposed kernels give good classification accuracy in nearly all the data sets, especially those of high dimensions.

  16. Image-Based Coral Reef Classification and Thematic Mapping

    Directory of Open Access Journals (Sweden)

    Brooke Gintert

    2013-04-01

    Full Text Available This paper presents a novel image classification scheme for benthic coral reef images that can be applied to both single image and composite mosaic datasets. The proposed method can be configured to the characteristics (e.g., the size of the dataset, number of classes, resolution of the samples, color information availability, class types, etc.) of individual datasets. The proposed method uses completed local binary pattern (CLBP), grey level co-occurrence matrix (GLCM), Gabor filter response, and opponent angle and hue channel color histograms as feature descriptors. For classification, either k-nearest neighbor (KNN), neural network (NN), support vector machine (SVM) or probability density weighted mean distance (PDWMD) is used. The combination of features and classifiers that attains the best results is presented together with guidelines for selection. The accuracy and efficiency of our proposed method are compared with other state-of-the-art techniques using three benthic and three texture datasets. The proposed method achieves the highest overall classification accuracy of any of the tested methods and has moderate execution time. Finally, the proposed classification scheme is applied to a large-scale image mosaic of the Red Sea to create a completely classified thematic map of the reef benthos.

  17. Colour based off-road environment and terrain type classification

    NARCIS (Netherlands)

    Jansen, P.; Mark, W. van der; Heuvel, J.C. van den; Groen, F.C.A.

    2005-01-01

    Terrain classification is an important problem that still remains to be solved for off-road autonomous robot vehicle guidance. Often, obstacle detection systems are used which cannot distinguish between solid obstacles such as rocks and soft obstacles such as tall patches of grass. Terrain classifica

  18. Intelligent Agent based Flight Search and Booking System

    Directory of Open Access Journals (Sweden)

    Floyd Garvey

    2012-07-01

    Full Text Available The word globalization is widely used, and there are several definitions that may fit this one word. However, the reality remains that globalization has impacted and is impacting each individual on this planet. It is defined as the greater movement of people, goods, capital and ideas due to increased economic integration, which in turn is propelled by increased trade and investment. It is like moving towards living in a borderless world. With the reality of globalization, the travel industry has benefited significantly; it could equally be said that globalization is benefiting from the flight industry. Regardless of the way one looks at it, more persons are traveling each day and are exploring several places that were once distant places on a map. Equally, technology has been growing at an increasingly rapid pace and is being utilized by several persons all over the world. With the combination of globalization, the increase in technology and the frequency of travel, there is a need to provide an intelligent application that is capable of meeting the needs of travelers who utilize mobile phones all over the world. It is a solution that fits perfectly into a user’s busy lifestyle, offers ease of use and enough intelligence to make a user’s experience worthwhile. Having recognized this need, the Agent based Mobile Airline Search and Booking System has been developed; it is built to work on Android and performs airline search and booking using biometrics. The system also possesses agent learning capability to search airlines based on previous search patterns. The development has been carried out using the JADE-LEAP agent development kit on Android.

  19. Router Agent Technology for Policy-Based Network Management

    Science.gov (United States)

    Chow, Edward T.; Sudhir, Gurusham; Chang, Hsin-Ping; James, Mark; Liu, Yih-Chiao J.; Chiang, Winston

    2011-01-01

    This innovation can be run as a standalone network application on any computer in a networked environment. This design can be configured to control one or more routers (one instance per router), and can also be configured to listen to a policy server over the network to receive new policies based on the policy-based network management technology. The Router Agent Technology transforms the received policies into suitable Access Control List syntax for the routers it is configured to control. It commits the newly generated access control lists to the routers and provides feedback regarding any errors that were encountered. The innovation also automatically generates a time-stamped log file recording all updates to the router it is configured to control. This technology, once installed on a local network computer and started, is autonomous because it has the capability to keep listening for new policies from the policy server, transforming those policies to router-compliant access lists, and committing those access lists to a specified interface on the specified router on the network, with error feedback regarding the commitment process. The stand-alone application is named RouterAgent and is currently realized as a fully functional (version 1) implementation for the Windows operating system and for Cisco routers.
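
    As an illustration only (not the RouterAgent source), the sketch below turns a simple policy record into a Cisco-style access-list line and writes a time-stamped log entry; the policy schema (action, proto, src, dst) is a hypothetical stand-in for whatever the policy server actually sends.

        from datetime import datetime

        def policy_to_acl(policy, acl_id=101):
            """Render one hypothetical policy record as a Cisco-style ACL line."""
            return (f"access-list {acl_id} {policy['action']} {policy['proto']} "
                    f"{policy['src']} {policy['dst']}")

        def log_update(router, acl_line, logfile="routeragent.log"):
            """Append a time-stamped record of a committed ACL line."""
            with open(logfile, "a") as f:
                f.write(f"{datetime.now().isoformat()} {router}: {acl_line}\n")

        acl = policy_to_acl({"action": "deny", "proto": "ip",
                             "src": "host 10.0.0.5", "dst": "any"})
        log_update("edge-router-1", acl)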

  20. An agent-based approach to financial stylized facts

    Science.gov (United States)

    Shimokawa, Tetsuya; Suzuki, Kyoko; Misawa, Tadanobu

    2007-06-01

    An important challenge of financial theory in recent years is to construct more sophisticated models that are consistent with as many as possible of the financial stylized facts that cannot be explained by traditional models. Recently, psychological studies on decision making under uncertainty, originating in Kahneman and Tversky's research, have attracted a lot of interest as key factors for explaining the financial stylized facts. These psychological results have been applied to the theory of investors' decision making and financial equilibrium modeling. This paper, following these behavioral finance studies, proposes an agent-based equilibrium model with prospect-theoretic features of investors. Our goal is to point out the possibility that the loss-averse feature of investors explains a vast number of financial stylized facts and plays a crucial role in the price formation of financial markets. The price process endogenously generated by our model is consistent with not only the equity premium puzzle and the volatility puzzle, but also excess kurtosis, asymmetry of the return distribution, auto-correlation of return volatility, and cross-correlation between return volatility and trading volume. Moreover, using agent-based simulations, the paper also provides a rigorous explanation of the size effect, whereby small-sized stocks enjoy excess returns compared to large-sized stocks, from the viewpoint of a lack of market liquidity.
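
    A minimal sketch of the loss-averse, prospect-theoretic value function such agents build on, after Kahneman and Tversky; the parameter values (alpha = beta = 0.88, lambda = 2.25) are Tversky and Kahneman's 1992 estimates, not values taken from this paper.

        def prospect_value(x, alpha=0.88, beta=0.88, lam=2.25):
            """Value of a gain or loss x relative to the reference point x = 0."""
            if x >= 0:
                return x ** alpha            # concave over gains
            return -lam * ((-x) ** beta)     # convex and steeper over losses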

  1. LEARNING REPOSITORY ADAPTABILITY IN AN AGENT-BASED UNIVERSITY ENVIRONMENT

    Directory of Open Access Journals (Sweden)

    Vanco Cabukovski

    2016-06-01

    Full Text Available Automated e-Learning Systems (AeLS) are fundamental to contemporary educational concepts worldwide. They have become a standard not only in support of the formal curriculum, but also contain social platform capabilities, gamification elements and functionalities fostering communities of experts for faster knowledge dissemination. Additionally, AeLSs support internal communications and customizable analytics and methodologies to quickly identify learning performance, which in turn can be used as feedback to implement adaptability, tailoring the content management to meet specific individual needs. The volume of fast-growing AeLS content, of supplementary material and exchanged communication, combined with the already huge material archived in university libraries, is enormous and needs sophisticated management through electronic repositories. Such integration of content management systems (CMS) presents challenges which can be solved optimally with the use of distributed management implemented through agent-based systems. This paper depicts a successful implementation of an Integrated Intelligent Agent Based University Information System (IABUIS).

  2. Classification of weld defect based on information fusion technology for radiographic testing system

    Science.gov (United States)

    Jiang, Hongquan; Liang, Zeming; Gao, Jianmin; Dang, Changying

    2016-03-01

    Improving the efficiency and accuracy of weld defect classification is an important technical problem in developing radiographic testing systems. This paper proposes a novel weld defect classification method based on information fusion technology, namely Dempster-Shafer evidence theory. First, to characterize weld defects and improve the accuracy of their classification, 11 weld defect features were defined based on the sub-pixel level edges of radiographic images, four of which are presented for the first time in this paper. Second, we applied information fusion technology to combine different features for weld defect classification, including a mass function defined based on the weld defect feature information and a quartile-method-based calculation of the standard weld defect class, which addresses the problem of a limited number of training samples. A steam turbine weld defect classification case study is also presented herein to illustrate our technique. The results show that the proposed method can increase the correct classification rate with limited training samples and address the uncertainties associated with weld defect classification.
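
    A minimal sketch of Dempster's rule of combination, the evidence-fusion step the method builds on; the mass functions below are toy examples, and the paper's feature-specific mass definitions are not reproduced.

        def dempster_combine(m1, m2):
            """Combine two mass functions keyed by frozensets of defect classes."""
            combined, conflict = {}, 0.0
            for b, mb in m1.items():
                for c, mc in m2.items():
                    inter = b & c
                    if inter:
                        combined[inter] = combined.get(inter, 0.0) + mb * mc
                    else:
                        conflict += mb * mc
            if conflict >= 1.0:
                raise ValueError("total conflict: evidence cannot be combined")
            return {a: v / (1.0 - conflict) for a, v in combined.items()}

        # Two features that both lean towards 'crack', with some ambiguity:
        m1 = {frozenset({"crack"}): 0.7, frozenset({"crack", "pore"}): 0.3}
        m2 = {frozenset({"crack"}): 0.6, frozenset({"crack", "pore"}): 0.4}
        print(dempster_combine(m1, m2))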

  3. Agent-Based Learning Environments as a Research Tool for Investigating Teaching and Learning.

    Science.gov (United States)

    Baylor, Amy L.

    2002-01-01

    Discusses intelligent learning environments for computer-based learning, such as agent-based learning environments, and their advantages over human-based instruction. Considers the effects of multiple agents; agents and research design; the use of Multiple Intelligent Mentors Instructing Collaboratively (MIMIC) for instructional design for…

  4. Empirical agent-based land market: Integrating adaptive economic behavior in urban land-use models

    NARCIS (Netherlands)

    Filatova, Tatiana

    2015-01-01

    This paper introduces an economic agent-based model of an urban housing market. The RHEA (Risks and Hedonics in Empirical Agent-based land market) model captures natural hazard risks and environmental amenities through hedonic analysis, facilitating empirical agent-based land market modeling. RHEA is…

  5. Robust real-time mine classification based on side-scan sonar imagery

    Science.gov (United States)

    Bello, Martin G.

    2000-08-01

    We describe here image processing and neural network based algorithms for the detection and classification of mines in side-scan sonar imagery, and the results obtained from their application to two distinct image databases. These algorithms evolved over a period from 1994 to the present, originally at Draper Laboratory and currently at Alphatech Inc. The mine-detection/classification system is partitioned into an anomaly screening stage followed by a classification stage involving the calculation of features on blobs and their input into a multilayer perceptron neural network. Particular attention is given to the selection of algorithm parameters and training data in order to optimize performance over the aggregate data set.

  6. Classification of melanoma using wavelet-transform-based optimal feature set

    Science.gov (United States)

    Walvick, Ronn P.; Patel, Ketan; Patwardhan, Sachin V.; Dhawan, Atam P.

    2004-05-01

    The features used in the ABCD rule for the characterization of skin lesions suggest that the spatial and frequency information in the nevi changes at various stages of melanoma development. To analyze these changes, wavelet-transform-based features have been reported. The classification of melanoma using these features has produced varying results. In this work, all the reported wavelet-transform-based features are combined to form a single feature set. This feature set is then optimized by removing redundancies using principal component analysis. A feed-forward neural network trained with the backpropagation algorithm is then used in the classification process to obtain better classification results.
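
    A minimal sketch of the reported pipeline under stated assumptions: subband-energy features from a 2-D wavelet transform, redundancy removal with PCA, and a backpropagation-trained feed-forward network; the wavelet, level, variance threshold, and network size are illustrative, not the paper's choices.

        import numpy as np
        import pywt
        from sklearn.decomposition import PCA
        from sklearn.neural_network import MLPClassifier
        from sklearn.pipeline import make_pipeline

        def wavelet_energy_features(image, wavelet="db4", level=2):
            """Mean absolute coefficient per subband of a 2-D wavelet transform."""
            coeffs = pywt.wavedec2(image, wavelet, level=level)
            feats = [np.mean(np.abs(coeffs[0]))]          # approximation band
            for detail_bands in coeffs[1:]:               # (cH, cV, cD) per level
                feats.extend(np.mean(np.abs(b)) for b in detail_bands)
            return np.array(feats)

        def build_classifier(images, labels):
            X = np.array([wavelet_energy_features(im) for im in images])
            model = make_pipeline(PCA(n_components=0.95),  # keep 95% of variance
                                  MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000))
            return model.fit(X, labels)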

  7. Agent-based multi-optional model of innovations diffusion

    CERN Document Server

    Laciana, Carlos E

    2013-01-01

    We propose a formalism that allows the study of the diffusion of several products competing in a common market. It is based on a generalization of the Ising model of statistical physics (the Potts model). For the implementation, agent-based modeling is used, applied to a problem with three options: adopt product A, adopt product B, or do not adopt. A launching strategy is analyzed for one of the two products, which delays its launch with the objective of competing through improvements. The proportion reached by each product at market saturation is calculated. The simulations are run varying the social network topology, the uncertainty in the decision, and the population's homogeneity.
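
    A minimal sketch of a three-option (product A, product B, non-adoption) agent update of the Potts type described above: each option is scored by neighbor adoption plus a private preference, with an inverse-noise parameter controlling decision uncertainty; the network, population size, and parameters are illustrative, not the paper's calibration.

        import numpy as np

        rng = np.random.default_rng(0)
        N, OPTIONS = 500, 3              # 0 = none, 1 = product A, 2 = product B
        adj = rng.random((N, N)) < 0.02  # random (Erdos-Renyi-like) social network
        state = np.zeros(N, dtype=int)
        pref = rng.normal(0, 0.5, (N, OPTIONS))  # private preferences

        def step(beta=2.0):
            """One asynchronous sweep; beta is the inverse decision noise."""
            for i in rng.permutation(N):
                social = np.array([(state[adj[i]] == k).sum() for k in range(OPTIONS)])
                score = beta * (social + pref[i])
                p = np.exp(score - score.max())
                state[i] = rng.choice(OPTIONS, p=p / p.sum())

        for _ in range(50):
            step()
        print("shares at saturation:", np.bincount(state, minlength=OPTIONS) / N)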

  8. Agent Based Modeling on Organizational Dynamics of Terrorist Network

    Directory of Open Access Journals (Sweden)

    Bo Li

    2015-01-01

    Full Text Available Modeling the organizational dynamics of terrorist networks is a critical issue in the computational analysis of terrorism research. The first step for effective counterterrorism and strategic intervention is to investigate how terrorists operate within the relational network and what affects their performance. In this paper, we investigate the organizational dynamics by employing a computational experimentation methodology. The hierarchical cellular network model and the organizational dynamics model are developed for modeling the hybrid relational structure and complex operational processes, respectively. To intuitively elucidate this method, agent-based modeling is used to simulate the terrorist network and test its performance in diverse scenarios. Based on the experimental results, we show how changes in operational environments affect the development of the terrorist organization in terms of its recovery and capacity to perform future tasks. Potential strategies that can be used to restrain the activities of terrorists are also discussed.

  9. Complexity and agent-based modelling in urban research

    DEFF Research Database (Denmark)

    Fertner, Christian

    Urbanisation processes result from a broad variety of actors or actor groups and their behaviour and decisions, based on different experiences, knowledge, resources, values etc. The decisions made are often on a micro/individual level but result in macro/collective behaviour. In urban research...... influence on the bigger system. Traditional scientific methods or theories often tried to simplify, not accounting for the complex relations of actors and decision-making. The introduction of computers in simulation made new approaches in modelling possible, as for example agent-based modelling (ABM), dealing...... of complexity for a majority of science, there exists a huge number of scientific articles, books, tutorials etc. on these topics, which doesn't make it easy for a novice in the field to find the right literature. The literature used gives an optimistic outlook for the future of this methodology, although ABM…

  10. Radial Basis Function Networks Applied in Bacterial Classification Based on MALDI-TOF-MS

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

    Radial basis function networks were applied to bacterial classification based on matrix-assisted laser desorption/ionization time-of-flight mass spectrometric (MALDI-TOF-MS) data. The classification of bacteria cultured at different times was discussed, and the effect of the network parameters on the classification was investigated. The cross-validation method was used to test the trained networks. The classification accuracy for the different bacteria investigated varied over a wide range, from 61.5% to 92.8%. Owing to the complexity of biological effects in bacterial growth, more rigid control of bacterial culture conditions seems to be a critical factor for improving the rate of correct bacterial classification.
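
    A minimal sketch of a radial basis function network of the kind used here: Gaussian units centered by k-means and a linear readout fitted by least squares; the mass-spectrum feature matrix X and integer labels y are assumed given, and the unit count and width are illustrative.

        import numpy as np
        from sklearn.cluster import KMeans

        class RBFNetwork:
            def __init__(self, n_units=20, width=1.0):
                self.n_units, self.width = n_units, width

            def _phi(self, X):
                """Gaussian activations of every unit for every sample."""
                d2 = ((X[:, None, :] - self.centers[None]) ** 2).sum(-1)
                return np.exp(-d2 / (2 * self.width ** 2))

            def fit(self, X, y):
                self.centers = KMeans(self.n_units, n_init=10).fit(X).cluster_centers_
                Y = np.eye(y.max() + 1)[y]                # one-hot targets
                self.W, *_ = np.linalg.lstsq(self._phi(X), Y, rcond=None)
                return self

            def predict(self, X):
                return self._phi(X).dot(self.W).argmax(axis=1)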

  11. [Classification of cell-based medicinal products and legal implications: An overview and an update].

    Science.gov (United States)

    Scherer, Jürgen; Flory, Egbert

    2015-11-01

    In general, cell-based medicinal products do not represent a uniform class of medicinal products, but instead comprise medicinal products with diverse regulatory classification as advanced-therapy medicinal products (ATMP), medicinal products (MP), tissue preparations, or blood products. Due to the legal and scientific consequences of the development and approval of MPs, classification should be clarified as early as possible. This paper describes the legal situation in Germany and highlights specific criteria and concepts for classification, with a focus on, but not limited to, ATMPs and non-ATMPs. Depending on the stage of product development and the specific application submitted to a competent authority, legally binding classification is done by the German Länder Authorities, Paul-Ehrlich-Institut, or European Medicines Agency. On request by the applicants, the Committee for Advanced Therapies may issue scientific recommendations for classification.

  12. Scene Classification of Remote Sensing Image Based on Multi-scale Feature and Deep Neural Network

    Directory of Open Access Journals (Sweden)

    XU Suhui

    2016-07-01

    Full Text Available Aiming at the low precision of remote sensing image scene classification owing to small sample sizes, a new classification approach is proposed based on a multi-scale deep convolutional neural network (MS-DCNN), which is composed of the nonsubsampled contourlet transform (NSCT), a deep convolutional neural network (DCNN), and a multiple-kernel support vector machine (MKSVM). Firstly, multi-scale decomposition of the remote sensing image is conducted via NSCT. Secondly, the decomposed high-frequency and low-frequency subbands are trained by the DCNN to obtain image features at different scales. Finally, MKSVM is adopted to integrate the multi-scale image features and implement remote sensing image scene classification. Experimental results on standard image classification data sets indicate that the proposed approach obtains a strong classification effect by combining the recognition advantages of the low-frequency and high-frequency subbands for different scenes.

  13. Three-Class EEG-Based Motor Imagery Classification Using Phase-Space Reconstruction Technique

    Science.gov (United States)

    Djemal, Ridha; Bazyed, Ayad G.; Belwafi, Kais; Gannouni, Sofien; Kaaniche, Walid

    2016-01-01

    Over the last few decades, brain signals have been significantly exploited for brain-computer interface (BCI) applications. In this paper, we study the extraction of features using event-related desynchronization/synchronization techniques to improve the classification accuracy for a three-class motor imagery (MI) BCI. The classification approach is based on combining the features of the phase and amplitude of the brain signals using the fast Fourier transform (FFT) and autoregressive (AR) modeling of the reconstructed phase space, as well as the modification of the BCI parameters (trial length, trial frequency band, classification method). Utilizing sequential forward floating selection (SFFS) and multi-class linear discriminant analysis (LDA), our approach showed superior classification results compared with those in the literature, with classification accuracies of 86.06% and 93% on two BCI competition datasets. PMID:27563927
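
    A minimal sketch of two of the ingredients described, under stated assumptions: a time-delay (phase-space) reconstruction of an EEG segment, AR coefficients fitted to it by least squares, and LDA on the pooled features; the embedding dimension, lag, and AR order are illustrative, not the paper's tuned values.

        import numpy as np
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

        def delay_embed(x, dim=3, lag=2):
            """Rows are points of the reconstructed phase space."""
            n = len(x) - (dim - 1) * lag
            return np.column_stack([x[i * lag:i * lag + n] for i in range(dim)])

        def ar_features(x, order=6):
            """AR(order) coefficients of a 1-D signal via least squares."""
            X = np.column_stack([x[i:len(x) - order + i] for i in range(order)])
            coeffs, *_ = np.linalg.lstsq(X, x[order:], rcond=None)
            return coeffs

        def trial_features(trial):
            emb = delay_embed(trial)
            return np.concatenate([ar_features(emb[:, j]) for j in range(emb.shape[1])])

        def train(trials, labels):   # trials: 1-D EEG segments; labels: MI classes
            X = np.array([trial_features(t) for t in trials])
            return LinearDiscriminantAnalysis().fit(X, labels)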

  14. A Statistical Approach of Texton Based Texture Classification Using LPboosting Classifier

    Directory of Open Access Journals (Sweden)

    C. Vivek

    2014-05-01

    Full Text Available This study addresses accurate texture classification, which has enormous potential in real-world imaging applications. The texton co-occurrence matrix is applied to the Brodatz database images to derive template texton grid images, which then undergo the discrete shearlet transform to decompose the image. Entropy-based parameters of the redundant regions are interpolated, congregating adjacent regions based on geometric properties; classification is then performed by comparing the similarity between the estimated distributions of all detail subbands using strong LPBoost classification with various weak classifier configurations. We show that the resulting texture features retain the maximum of the discriminative information. Our hybrid classification method significantly outperforms existing texture descriptors and delivers state-of-the-art classification accuracy in real-world imaging applications.

  15. Modeling and simulation of complex systems a framework for efficient agent-based modeling and simulation

    CERN Document Server

    Siegfried, Robert

    2014-01-01

    Robert Siegfried presents a framework for efficient agent-based modeling and simulation of complex systems. He compares different approaches for describing the structure and dynamics of agent-based models in detail. Based on this evaluation the author introduces the "General Reference Model for Agent-based Modeling and Simulation" (GRAMS). Furthermore he presents parallel and distributed simulation approaches for the execution of agent-based models, from small scale to very large scale. The author shows how agent-based models may be executed by different simulation engines that utilize the underlying hardware…

  16. Spectral Collaborative Representation based Classification for Hand Gestures recognition on Electromyography Signals

    OpenAIRE

    Boyali, Ali

    2015-01-01

    In this study, we introduce a novel variant and application of Collaborative Representation based Classification in the spectral domain for the recognition of hand gestures using raw surface electromyography signals. The intuitive use of spectral features is explained via circulant matrices. The proposed Spectral Collaborative Representation based Classification (SCRC) is able to recognize gestures with higher levels of accuracy for a fairly rich gesture set. The worst recognition result...
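
    A minimal sketch of plain collaborative representation based classification, the time-domain baseline that the spectral variant builds on: code the test sample over all training samples with l2-regularized least squares, then assign the class with the smallest class-wise reconstruction residual. The circulant/spectral machinery of SCRC is not reproduced here.

        import numpy as np

        def crc_classify(D, labels, y, lam=0.01):
            """D: one training sample per column; labels: class per column."""
            # ridge solution: x = (D^T D + lam I)^(-1) D^T y
            G = D.T @ D + lam * np.eye(D.shape[1])
            x = np.linalg.solve(G, D.T @ y)
            residuals = {}
            for c in np.unique(labels):
                mask = labels == c
                residuals[c] = np.linalg.norm(y - D[:, mask] @ x[mask])
            return min(residuals, key=residuals.get)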

  17. State-Based Models for Light Curve Classification

    Science.gov (United States)

    Becker, A.

    I discuss here the application of continuous time autoregressive models to the characterization of astrophysical variability. These types of models are general enough to represent many classes of variability, and descriptive enough to provide features for lightcurve classification. Importantly, the features of these models may be interpreted in terms of the power spectrum of the lightcurve, enabling constraints on characteristic timescales and periodicity. These models may be extended to include vector-valued inputs, raising the prospect of a fully general modeling and classification environment that uses multi-passband inputs to create a single phenomenological model. These types of spectral-temporal models are an important extension of extant techniques, and necessary in the upcoming eras of Gaia and LSST.

  18. Knowledge Based Pipeline Network Classification and Recognition Method of Maps

    Institute of Scientific and Technical Information of China (English)

    Liu Tongyu; Gu Shusheng

    2001-01-01

    Map recognition is an essential data input means for Geographic Information Systems (GIS). How to solve the problems in the procedure, such as the recognition of maps with crisscrossing pipeline networks, the classification of buildings and roads, and the processing of connected text, is a critical step for the continued rapid development of GIS. In this paper, a new recognition method for pipeline maps is presented, and some common patterns of pipeline connection and component labels are established. Through pattern matching, pipelines and component labels are recognized and peeled off from the maps. After this step, the maps simply consist of buildings and roads, which are recognized and classified with a fuzzy classification method. In addition, the Double Sides Scan (DSS) technique is also described, through which the effect of connected text can be eliminated.

  19. A Spectral Signature Shape-Based Algorithm for Landsat Image Classification

    Directory of Open Access Journals (Sweden)

    Yuanyuan Chen

    2016-08-01

    Full Text Available Land-cover datasets are crucial for earth system modeling and human-nature interaction research at local, regional and global scales. They can be obtained from remotely sensed data using image classification methods. However, in image classification, spectral values have received considerable attention in most classification methods, while the shape of the spectral curve has seldom been used because it is difficult to quantify. This study presents a classification method based on the observation that the spectral curve is composed of segments and certain extreme values. The presented classification method quantifies the spectral curve shape and makes full use of the spectral shape differences among land covers to classify remotely sensed images. Using this method, classification maps from TM (Thematic Mapper) data were obtained with overall accuracies of 0.834 and 0.854 for the two test areas. The approach presented in this paper, which differs from previous image classification methods that were mostly concerned with spectral "value" similarity characteristics, emphasizes the "shape" similarity characteristics of the spectral curve. Moreover, this study will be helpful for classification research on hyperspectral and multi-temporal images.

  20. Dihedral-Based Segment Identification and Classification of Biopolymers I: Proteins

    Science.gov (United States)

    2013-01-01

    A new structure classification scheme for biopolymers is introduced, which is solely based on main-chain dihedral angles. It is shown that by dividing a biopolymer into segments containing two central residues, a local classification can be performed. The method is referred to as DISICL, short for Dihedral-based Segment Identification and Classification. Compared to other popular secondary structure classification programs, DISICL is more detailed as it offers 18 distinct structural classes, which may be simplified into a classification in terms of seven more general classes. It was designed with an eye to analyzing subtle structural changes as observed in molecular dynamics simulations of biomolecular systems. Here, the DISICL algorithm is used to classify two databases of protein structures, jointly containing more than 10 million segments. The data is compared to two alternative approaches in terms of the amount of classified residues, average occurrence and length of structural elements, and pairwise matches of the classifications by the different programs. In an accompanying paper (Nagy, G.; Oostenbrink, C. Dihedral-based segment identification and classification of biopolymers II: Polynucleotides. J. Chem. Inf. Model. 2013, DOI: 10.1021/ci400542n), the analysis of polynucleotides is described and applied. Overall, DISICL represents a potentially useful tool to analyze biopolymer structures at a high level of detail. PMID:24364820

  1. Dihedral-based segment identification and classification of biopolymers I: proteins.

    Science.gov (United States)

    Nagy, Gabor; Oostenbrink, Chris

    2014-01-27

    A new structure classification scheme for biopolymers is introduced, which is solely based on main-chain dihedral angles. It is shown that by dividing a biopolymer into segments containing two central residues, a local classification can be performed. The method is referred to as DISICL, short for Dihedral-based Segment Identification and Classification. Compared to other popular secondary structure classification programs, DISICL is more detailed as it offers 18 distinct structural classes, which may be simplified into a classification in terms of seven more general classes. It was designed with an eye to analyzing subtle structural changes as observed in molecular dynamics simulations of biomolecular systems. Here, the DISICL algorithm is used to classify two databases of protein structures, jointly containing more than 10 million segments. The data is compared to two alternative approaches in terms of the amount of classified residues, average occurrence and length of structural elements, and pairwise matches of the classifications by the different programs. In an accompanying paper (Nagy, G.; Oostenbrink, C. Dihedral-based segment identification and classification of biopolymers II: Polynucleotides. J. Chem. Inf. Model. 2013, DOI: 10.1021/ci400542n), the analysis of polynucleotides is described and applied. Overall, DISICL represents a potentially useful tool to analyze biopolymer structures at a high level of detail.

  2. Power Disturbances Classification Using S-Transform Based GA-PNN

    Science.gov (United States)

    Manimala, K.; Selvi, K.

    2015-09-01

    The significance of detecting and classifying power quality events that disturb the voltage and/or current waveforms in electrical power distribution networks is well known. However, in spite of a large number of research reports in this area, research on the selection of proper parameters for specific classifiers has so far not been explored. Parameter selection is very important for successful modelling of the input-output relationship in a function approximation model. In this study, a probabilistic neural network (PNN) has been used as a function approximation tool for power disturbance classification, and a genetic algorithm (GA) is utilised to optimise the smoothing parameter of the PNN. The important features extracted from the raw power disturbance signal using the S-Transform are given to the PNN for effective classification. The choice of smoothing parameter for the PNN classifier significantly impacts the classification accuracy. Hence, GA-based parameter optimisation is done to ensure good classification accuracy by selecting a suitable parameter for the PNN classifier. Testing results show that the proposed S-Transform based GA-PNN model has better classification ability than classifiers based on the conventional grid search method for parameter selection. Noisy and practical signals are considered in the classification process to show the effectiveness of the proposed method in comparison with existing methods.
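
    A minimal sketch of the probabilistic neural network core: a Parzen-window class-conditional density with a Gaussian kernel whose smoothing parameter sigma is the quantity the paper tunes with a GA; a plain validation grid search is used below as a labeled stand-in for the GA.

        import numpy as np

        def pnn_predict(X_train, y_train, x, sigma):
            """Assign x to the class with the largest average Gaussian kernel."""
            scores = {}
            for c in np.unique(y_train):
                d2 = ((X_train[y_train == c] - x) ** 2).sum(axis=1)
                scores[c] = np.mean(np.exp(-d2 / (2 * sigma ** 2)))
            return max(scores, key=scores.get)

        def tune_sigma(X_tr, y_tr, X_val, y_val, grid=(0.01, 0.05, 0.1, 0.5, 1.0)):
            """Stand-in for the GA: pick the sigma with best validation accuracy."""
            def acc(s):
                preds = [pnn_predict(X_tr, y_tr, x, s) for x in X_val]
                return np.mean(np.array(preds) == y_val)
            return max(grid, key=acc)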

  3. Investigation into Text Classification With Kernel Based Schemes

    Science.gov (United States)

    2010-03-01

    classification/categorization applications. The text database considered in this study was collected from the IEEE Xplore database website [2]. The documents collected were limited to electrical engineering… Linear Discriminant Analysis (LDA) scheme. Titles, along with abstracts from IEEE journal articles published between 1990 and 1999 with specific keywords…

  4. Image Analysis and Classification Based on Soil Strength

    Science.gov (United States)

    2016-08-01

    Impact Hammer, which is light, easy to operate, and cost effective. The Clegg Impact Hammer measures the stiffness of the soil surface by dropping a… WorldView-2 multispectral satellite imagery. This paper presents the work done on the imagery classification for soil strength, the apparent… landing zone… CRREL research technician, Jesse Stanley, taking Clegg measurements at a test location in San Miguelito…

  5. An implementation of norm-based agent negotiation.

    NARCIS (Netherlands)

    Dijkstra, Pieter; Prakken, H.; Vey Mestdagh, C.N.J. de

    2007-01-01

    In this paper, we develop our previous outline of a multi-agent architecture for regulated information exchange in crime investigations. Interactions about information exchange between agents (representing police officers) are further analysed as negotiation dialogues with embedded persuasion dialogues…

  6. Instrument classification in polyphonic music based on timbre analysis

    Science.gov (United States)

    Zhang, Tong

    2001-07-01

    While most previous work on musical instrument recognition is focused on the classification of single notes in monophonic music, a scheme is proposed in this paper for the distinction of instruments in continuous music pieces which may contain one or more kinds of instruments. Highlights of the system include music segmentation into notes, harmonic partial estimation in polyphonic sound, note feature calculation and normalization, note classification using a set of neural networks, and music piece categorization with fuzzy logic principles. Example outputs of the system are 'the music piece is 100% guitar (with 90% likelihood)' and 'the music piece is 60% violin and 40% piano, thus a violin/piano duet'. The system has been tested with twelve kinds of musical instruments, and very promising experimental results have been obtained. An accuracy of about 80% is achieved, and the number can be raised to 90% if misindexings within the same instrument family are tolerated (e.g. cello, viola and violin). A demonstration system for musical instrument classification and music timbre retrieval is also presented.

  7. Three-Class Mammogram Classification Based on Descriptive CNN Features

    Science.gov (United States)

    Zhang, Qianni; Jadoon, Adeel

    2017-01-01

    In this paper, a novel classification technique for a large data set of mammograms using a deep learning method is proposed. The proposed model targets a three-class classification study (normal, malignant, and benign cases). In our model we have presented two methods, namely, convolutional neural network-discrete wavelet (CNN-DW) and convolutional neural network-curvelet transform (CNN-CT). An augmented data set is generated by using mammogram patches. To enhance the contrast of mammogram images, the data set is filtered by contrast limited adaptive histogram equalization (CLAHE). In the CNN-DW method, enhanced mammogram images are decomposed into four subbands by means of the two-dimensional discrete wavelet transform (2D-DWT), while in the second method the discrete curvelet transform (DCT) is used. In both methods, dense scale-invariant features (DSIFT) are extracted for all subbands. An input data matrix containing these subband features of all the mammogram patches is created and processed as input to a convolutional neural network (CNN). A softmax layer and a support vector machine (SVM) layer are used to train the CNN for classification. The proposed methods have been compared with existing methods in terms of accuracy rate, error rate, and various validation assessment measures. CNN-DW and CNN-CT have achieved accuracy rates of 81.83% and 83.74%, respectively. Simulation results clearly validate the significance and impact of our proposed model as compared to other well-known existing techniques. PMID:28191461

  8. Measure of Landscape Heterogeneity by Agent-Based Methodology

    Science.gov (United States)

    Wirth, E.; Szabó, Gy.; Czinkóczky, A.

    2016-06-01

    With the rapid increase of the world's population, the efficient food production is one of the key factors of the human survival. Since biodiversity and heterogeneity is the basis of the sustainable agriculture, the authors tried to measure the heterogeneity of a chosen landscape. The EU farming and subsidizing policies (EEA, 2014) support landscape heterogeneity and diversity, nevertheless exact measurements and calculations apart from statistical parameters (standard deviation, mean), do not really exist. In the present paper the authors' goal is to find an objective, dynamic method that measures landscape heterogeneity. It is achieved with the so called agent-based modelling, where randomly dispatched dynamic scouts record the observed land cover parameters and sum up the features of a new type of land. During the simulation the agents collect a Monte Carlo integral as a diversity landscape potential which can be considered as the unit of the `greening' measure. As a final product of the ABM method, a landscape potential map is obtained that can serve as a tool for objective decision making to support agricultural diversity.

  9. Agent-based modelling of heating system adoption in Norway

    Energy Technology Data Exchange (ETDEWEB)

    Sopha, Bertha Maya; Kloeckner, Christian A.; Hertwich, Edgar G.

    2010-07-01

    Full text: This paper introduces agent-based modelling as a methodological approach to understand the effect of decision-making mechanisms on the adoption of heating systems in Norway. The model is used as an experimental/learning tool to design possible interventions, not for prediction; the intended users of the model are therefore policy designers. Electric heating, heat pumps and wood pellet heating were selected as the primary heating systems. A random topology was chosen to represent the social network among households. Agents were households with a certain location, number of peers, currently adopted heating system, employed decision strategy, and degree of social influence in decision making. The overall framework of decision making integrated theories from different disciplines (customer behavior theory, behavioral economics, the theory of planned behavior, and diffusion of innovation) in order to capture possible decision-making processes in households. A mail survey of 270 Norwegian households conducted in 2008 was designed specifically to acquire data for the simulation. The model represents the real geographic area of the households and simulates the overall fraction of adopted heating systems. The model was calibrated with historical data from Statistics Norway (SSB). Interventions with respect to total cost, norms, indoor air quality, reliability, supply security, and required work could be explored using the model. For instance, the model demonstrates that a considerable total cost (investment and operating cost) increase for electric heating and heat pumps, rather than a reduction of wood pellet heating's total cost, is required to initiate and speed up wood pellet adoption. (Author)

  10. Agent-based Market Research Learning Environment for New Entrepreneurs

    Directory of Open Access Journals (Sweden)

    Alejandro Valencia

    2012-01-01

    Full Text Available Due to the importance of creating alternative mechanisms to generate know-how on potential markets for new entrepreneurs, this paper proposes an agent-based learning environment to help them learn market research strategies within new businesses. An instructor agent, serving as a learning assistant within the MAS environment, guides new entrepreneurs to identify their most adequate market niche. The integration of the MAS-CommonKADS and GAIA methodologies is used along with AUML diagrams in order to design and develop this agent-based learning environment, called MaREMAS. The paper describes all the stages of MaREMAS construction, focusing on conceptualization, analysis, design, prototype development, and validation. The tests developed in the MaREMAS learning environment were satisfactory; however, as future work we propose to provide the system with a more robust statistical module that allows a better analysis of the research variables and hence generates more useful suggestions for the entrepreneur.

  11. Recent progress on pyrazole scaffold-based antimycobacterial agents.

    Science.gov (United States)

    Keri, Rangappa S; Chand, Karam; Ramakrishnappa, Thippeswamy; Nagaraja, Bhari Mallanna

    2015-05-01

    New and reemerging infectious diseases will continue to pose serious global health threats well into the 21st century and according to the World Health Organization report, these are still the leading cause of death among humans worldwide. Among infectious diseases, tuberculosis claims approximately 2 million deaths per year worldwide. Also, agents that reduce the duration and complexity of the current therapy would have a major impact on the overall cure rate. Due to the development of resistance to conventional antibiotics there is a need for new therapeutic strategies to combat Mycobacterium tuberculosis. Subsequently, there is an urgent need for the development of new drug candidates with newer targets and alternative mechanism of action. In this perspective, pyrazole, one of the most important classes of heterocycles, has been the topic of research for thousands of researchers all over the world because of its wide spectrum of biological activities. To pave the way for future research, there is a need to collect the latest information in this promising area. In the present review, we have collated published reports on the pyrazole core to provide an insight so that its full therapeutic potential can be utilized for the treatment of tuberculosis. In this article, the possible structure-activity relationship of pyrazole analogs for designing better antituberculosis (anti-TB) agents has been discussed and is also helpful for new thoughts in the quest for rational designs of more active and less toxic pyrazole-based anti-TB drugs.

  12. Datasize-Based Confidence Measure for a Learning Agent

    NARCIS (Netherlands)

    Jamroga, W.J.; Wiering, M.

    2002-01-01

    In this paper a confidence measure is considered for an agent who tries to keep a probabilistic model of her environment of action. The measure is meant to capture only one factor of the agent's doubt, namely, the issue of whether the agent has been able to collect a sufficient number of observations. In…

  13. Chemoinformatics-based classification of prohibited substances employed for doping in sport.

    Science.gov (United States)

    Cannon, Edward O; Bender, Andreas; Palmer, David S; Mitchell, John B O

    2006-01-01

    Representative molecules from 10 classes of prohibited substances were taken from the World Anti-Doping Agency (WADA) list, augmented by molecules from corresponding activity classes found in the MDDR database. Together with some explicitly allowed compounds, these formed a set of 5245 molecules. Five types of fingerprints were calculated for these substances. The random forest classification method was used to predict membership of each prohibited class on the basis of each type of fingerprint, using 5-fold cross-validation. We also used a k-nearest neighbors (kNN) approach, which worked well for the smallest values of k. The most successful classifiers are based on Unity 2D fingerprints and give very similar Matthews correlation coefficients of 0.836 (kNN) and 0.829 (random forest). The kNN classifiers tend to give a higher recall of positives at the expense of lower precision. A naïve Bayesian classifier, however, lies much further toward the extreme of high recall and low precision. Our results suggest that it will be possible to produce a reliable and quantitative assignment of membership or otherwise of each class of prohibited substances. This should aid the fight against the use of bioactive novel compounds as doping agents, while also protecting athletes against unjust disqualification.
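
    A minimal sketch of the evaluation idea: a random forest over binary structural fingerprints scored with the Matthews correlation coefficient under 5-fold cross-validation; random bit vectors stand in here for real Unity 2D fingerprints, so the printed score is meaningful only as a usage illustration.

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.metrics import matthews_corrcoef
        from sklearn.model_selection import cross_val_predict

        rng = np.random.default_rng(0)
        X = rng.integers(0, 2, size=(500, 988))   # 500 molecules, synthetic bit prints
        y = rng.integers(0, 2, size=500)          # prohibited-class membership

        clf = RandomForestClassifier(n_estimators=200, random_state=0)
        pred = cross_val_predict(clf, X, y, cv=5)  # 5-fold CV, as in the paper
        print("MCC:", matthews_corrcoef(y, pred))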

  14. Role Based Multi-Agent System for E-Learning (MASeL)

    Directory of Open Access Journals (Sweden)

    Mustafa Hameed

    2016-03-01

    Full Text Available Software agents are autonomous entities that can interact intelligently with other agents as well as their environment in order to carry out a specific task. We have proposed a role-based multi-agent system for e-learning. This multi-agent system is based on the Agent-Group-Role (AGR) method. As a multi-agent system is distributed, ensuring correctness is an important issue. We have formally modeled our role-based multi-agent system. The correctness properties of liveness and safety are specified as well as verified. The timed-automata-based model checker UPPAAL is used for the specification as well as the verification of the e-learning system. This results in a formally specified and verified model of the role-based multi-agent system.

  15. Ship Classification with High Resolution TerraSAR-X Imagery Based on Analytic Hierarchy Process

    Directory of Open Access Journals (Sweden)

    Zhi Zhao

    2013-01-01

    Full Text Available Ship surveillance using space-borne synthetic aperture radar (SAR), taking advantage of high resolution over wide swaths and all-weather working capability, has attracted worldwide attention. Recent activity in this field has concentrated mainly on the study of ship detection, but classification is largely still open. In this paper, we propose a novel ship classification scheme based on the analytic hierarchy process (AHP) in order to achieve better performance. The main idea is to apply AHP to both feature selection and the classification decision. On the one hand, AHP-based feature selection constructs a selection decision problem based on several feature evaluation measures (e.g., discriminability, stability, and information measure) and provides objective criteria to make comprehensive decisions for their combinations quantitatively. On the other hand, we take the selected feature sets as the input of KNN classifiers and fuse the multiple classification results based on AHP, in which the feature sets' confidence is taken into account when the AHP-based classification decision is made. We analyze the proposed classification scheme and demonstrate its results on a ship dataset derived from TerraSAR-X SAR images.
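
    A minimal sketch of the AHP step: deriving weights for feature-evaluation criteria (e.g., discriminability, stability, information measure) from a pairwise comparison matrix via its principal eigenvector, with Saaty's consistency check; the comparison values are illustrative, not the paper's.

        import numpy as np

        A = np.array([[1.0, 3.0, 5.0],    # discriminability vs. the others
                      [1/3, 1.0, 2.0],    # stability
                      [1/5, 1/2, 1.0]])   # information measure

        eigvals, eigvecs = np.linalg.eig(A)
        w = np.real(eigvecs[:, eigvals.real.argmax()])
        w = w / w.sum()                   # normalized AHP criteria weights
        print("criteria weights:", w)

        ci = (eigvals.real.max() - len(A)) / (len(A) - 1)
        print("consistency ratio:", ci / 0.58)  # RI = 0.58 for a 3x3 matrix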

  16. A New Classification Analysis of Customer Requirement Information Based on Quantitative Standardization for Product Configuration

    Directory of Open Access Journals (Sweden)

    Zheng Xiao

    2016-01-01

    Full Text Available Traditional methods used for the classification of customer requirement information are typically based on specific indicators, hierarchical structures, and data formats and involve qualitative analysis in terms of stationary patterns. Because these methods neither consider the scalability of classification results nor regard their subsequent application to product configuration, their classification becomes an isolated operation. However, the transformation of customer requirement information into quantifiable values would lead to a dynamic classification according to specific conditions and would enable an association with product configuration in an enterprise. This paper introduces a classification analysis based on quantitative standardization, which focuses on (i) expressing customer requirement information mathematically and (ii) classifying customer requirement information for product configuration purposes. Our classification analysis treated customer requirement information as follows: first, it was transformed into standardized values using mathematics, after which it was classified by calculating the dissimilarity with the general customer requirement information related to the product family. Finally, a case study was used to demonstrate and validate the feasibility and effectiveness of the classification analysis.

  17. Land Cover Classification from Full-Waveform LIDAR Data Based on Support Vector Machines

    Science.gov (United States)

    Zhou, M.; Li, C. R.; Ma, L.; Guan, H. C.

    2016-06-01

    In this study, a land cover classification method based on multi-class Support Vector Machines (SVM) is presented to predict the types of land cover in the Miyun area. The obtained backscattered full-waveforms were processed following a workflow of waveform pre-processing, waveform decomposition and feature extraction. The extracted features, which consist of distance, intensity, Full Width at Half Maximum (FWHM) and backscattering cross-section, were corrected and used as attributes of the training data to generate the SVM prediction model. The SVM prediction model was applied to predict the types of land cover in the Miyun area as ground, trees, buildings and farmland. The classification results for these four types of land cover were obtained based on the ground truth information according to the CCD image data of the Miyun area. The results showed that the proposed classification algorithm achieved an overall classification accuracy of 90.63%. To better evaluate the SVM classification results, they were compared with those of an Artificial Neural Networks (ANNs) method, and the SVM method achieved better classification results.

  18. LAND COVER CLASSIFICATION FROM FULL-WAVEFORM LIDAR DATA BASED ON SUPPORT VECTOR MACHINES

    Directory of Open Access Journals (Sweden)

    M. Zhou

    2016-06-01

    Full Text Available In this study, a land cover classification method based on multi-class Support Vector Machines (SVM) is presented to predict the types of land cover in the Miyun area. The obtained backscattered full-waveforms were processed following a workflow of waveform pre-processing, waveform decomposition and feature extraction. The extracted features, which consist of distance, intensity, Full Width at Half Maximum (FWHM) and backscattering cross-section, were corrected and used as attributes of the training data to generate the SVM prediction model. The SVM prediction model was applied to predict the types of land cover in the Miyun area as ground, trees, buildings and farmland. The classification results for these four types of land cover were obtained based on the ground truth information according to the CCD image data of the Miyun area. The results showed that the proposed classification algorithm achieved an overall classification accuracy of 90.63%. To better evaluate the SVM classification results, they were compared with those of an Artificial Neural Networks (ANNs) method, and the SVM method achieved better classification results.
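
    A minimal sketch of the waveform-decomposition step these two records describe: fitting a sum of Gaussians to a return waveform with non-linear least squares and reading off per-echo features (position for distance, amplitude for intensity, FWHM from sigma); the two-echo assumption and initial guesses are illustrative.

        import numpy as np
        from scipy.optimize import curve_fit

        def two_gaussians(t, a1, m1, s1, a2, m2, s2):
            return (a1 * np.exp(-(t - m1) ** 2 / (2 * s1 ** 2)) +
                    a2 * np.exp(-(t - m2) ** 2 / (2 * s2 ** 2)))

        def decompose(t, waveform, p0=(1, 20, 3, 0.5, 60, 3)):
            """Fit two echoes and return their feature dictionaries."""
            params, _ = curve_fit(two_gaussians, t, waveform, p0=p0)
            echoes = []
            for a, m, s in (params[:3], params[3:]):
                echoes.append({"amplitude": a, "position": m,
                               "fwhm": 2.3548 * abs(s)})  # 2*sqrt(2 ln 2)*sigma
            return echoes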

  19. Virtual images inspired consolidate collaborative representation-based classification method for face recognition

    Science.gov (United States)

    Liu, Shigang; Zhang, Xinxin; Peng, Yali; Cao, Han

    2016-07-01

    The collaborative representation-based classification method performs well in the classification of high-dimensional images such as face recognition. It utilizes training samples from all classes to represent a test sample and assigns a class label to the test sample using the representation residuals. However, this method still suffers from the problem that a limited number of training samples influences the classification accuracy when applied to image classification. In this paper, we propose a modified collaborative representation-based classification method (MCRC), which exploits novel virtual images and can obtain high classification accuracy. The procedure to produce the virtual images is very simple, but their use can bring a surprising performance improvement. The virtual images can sufficiently denote the features of the original face images in some cases. Extensive experimental results demonstrate that the proposed method can effectively improve the classification accuracy. This is mainly attributed to the integration of the collaborative representation and the proposed feature-information-dominated virtual images.

  20. System-Awareness for Agent-based Power System Control

    DEFF Research Database (Denmark)

    Heussen, Kai; Saleem, Arshad; Lind, Morten

    2010-01-01

    Operational intelligence in electric power systems is focused in a small number of control rooms that coordinate their actions. A clear division of responsibility and a command hierarchy organize system operation. With multi-agent based control systems, this control paradigm may be shifted...... to a more decentralized open-access collaboration control paradigm. This shift cannot happen at once, but must also fit with current operation principles. In order to establish a scalable and transparent system control architecture, organizing principles have to be identified that allow for a smooth...... transition. This paper presents a concept for the representation and organization of control and resource allocation, enabling computational reasoning and system awareness. The principles are discussed with respect to a recently proposed Subgrid operation concept....

  1. Agent-based distributed hierarchical control of dc microgrid systems

    DEFF Research Database (Denmark)

    Meng, Lexuan; Vasquez, Juan Carlos; Guerrero, Josep M.

    2014-01-01

    In order to enable distributed control and management for microgrids, this paper explores the application of information consensus and local decision-making methods, formulating an agent-based distributed hierarchical control system. A droop-controlled paralleled DC/DC converter system is taken...... as a case study. The objective is to enhance the system efficiency by finding the optimal sharing ratio of the load current. Virtual resistances in the local control systems are taken as decision variables. Consensus algorithms are applied for global information discovery and local control system coordination....... A standard genetic algorithm is applied in each local control system in order to search for a global optimum. Hardware-in-the-loop simulation results are shown to demonstrate the effectiveness of the method....
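
    A minimal sketch of the consensus step used for global information discovery: each local controller repeatedly averages with its neighbors, so every agent converges to the network-wide mean (for example, of measured load currents) using only local communication; the topology and gain are illustrative.

        import numpy as np

        neighbors = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}  # line topology
        x = np.array([10.0, 4.0, 6.0, 8.0])                 # local measurements
        eps = 0.3                                           # consensus gain

        for _ in range(100):
            x = x + eps * np.array([sum(x[j] - x[i] for j in neighbors[i])
                                    for i in neighbors])
        print(x)  # every entry approaches the network mean, 7.0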

  2. A knowledge-based agent prototype for Chinese address geocoding

    Science.gov (United States)

    Wei, Ran; Zhang, Xuehu; Ding, Linfang; Ma, Haoming; Li, Qi

    2009-10-01

    Chinese address geocoding is a difficult problem to deal with due to intrinsic complexities in Chinese address systems and a lack of standards in address assignment and usage. In order to improve an existing address geocoding algorithm, a spatial knowledge-based agent prototype aimed at validating address geocoding results is built to determine the spatial accuracy as well as the matching confidence. A portion of the human knowledge used to judge the spatial closeness of two addresses is represented via first-order logic, and the corresponding algorithms are implemented in the Prolog language. Preliminary tests conducted using address matching results in the Beijing area showed that the prototype can successfully assess the spatial closeness between the matched address and the query address with 97% accuracy.

  3. Load management through agent based coordination of flexible electricity consumers

    DEFF Research Database (Denmark)

    Clausen, Anders; Jørgensen, Bo Nørregaard

    2015-01-01

    Demand Response (DR) offers a cost-effective and carbon-friendly way of performing load balancing. DR describes a change in the electricity consumption of flexible consumers in response to the supply situation. In DR, flexible consumers may perform their own load balancing through load management..... In this paper, we propose an approach to perform such coordination through a Virtual Power Plant (VPP) [1]. We represent flexible electricity consumers as software agents and we solve the coordination problem through multi-objective multi-issue optimization using a mediator-based negotiation mechanism. We...... illustrate how we can coordinate flexible consumers through a VPP in response to external events simulating the need for load balancing services.

  4. An Agent Based approach to design Serious Game

    Directory of Open Access Journals (Sweden)

    Manuel Gentile

    2014-06-01

    Full Text Available Serious games are designed to train and educate learners, opening up new learning approaches like exploratory learning and situated cognition. Despite growing interest in these games, their design is still an artisanal process. On the basis of experience in designing computer simulations, this paper proposes an agent-based approach to guide the design process of a serious game. The proposed methodology allows the designer to strike the right equilibrium between educational effectiveness and entertainment, realism and complexity. The design of the PNPVillage game is used as a case study. The PNPVillage game aims to introduce and foster an entrepreneurial mindset among young students. It was implemented within the framework of the European project “I can… I cannot… I go!” Rev.2

  5. Distributed Research Project Scheduling Based on Multi-Agent Methods

    Directory of Open Access Journals (Sweden)

    Constanta Nicoleta Bodea

    2011-01-01

    Full Text Available Different project planning and scheduling approaches have been developed. Operational Research (OR) provides two major planning techniques: CPM (Critical Path Method) and PERT (Program Evaluation and Review Technique). Due to the complexity of projects and the difficulty of using classical methods, new approaches were developed. Artificial Intelligence (AI) initially promoted the automatic planner concept, but model-based planning and scheduling methods emerged later on. The paper addresses the project scheduling optimization problem, when projects are seen as Complex Adaptive Systems (CAS). Taking into consideration two different approaches for project scheduling optimization, TCPSP (Time-Constrained Project Scheduling) and RCPSP (Resource-Constrained Project Scheduling), the paper focuses on a multi-agent implementation in MATLAB for TCPSP. Using a research project as a case study, the paper includes a comparison between two multi-agent methods: the Genetic Algorithm (GA) and the Ant Colony Algorithm (ACO).

  6. A Systematic Review of Agent-Based Modelling and Simulation Applications in the Higher Education Domain

    Science.gov (United States)

    Gu, X.; Blackmore, K. L.

    2015-01-01

    This paper presents the results of a systematic review of agent-based modelling and simulation (ABMS) applications in the higher education (HE) domain. Agent-based modelling is a "bottom-up" modelling paradigm in which system-level behaviour (macro) is modelled through the behaviour of individual local-level agent interactions (micro).…

  7. Agent based Particle Swarm Optimization for Load Frequency Control of Distribution Grid

    DEFF Research Database (Denmark)

    Cha, Seung-Tae; Saleem, Arshad; Wu, Qiuwei;

    2012-01-01

    This paper presents a Particle Swarm Optimization (PSO) based multi-agent controller. A real-time digital simulator (RTDS) is used for modelling the power system, while a PSO-based multi-agent LFC algorithm is developed in JAVA for communicating with resource agents and determining the scenario t…
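
    A minimal sketch of the PSO core such a controller would run: particles search a controller-parameter space to minimize a cost, here a stand-in quadratic where the LFC case would instead return a frequency-deviation index from the simulator; the swarm size and coefficients are standard textbook values, not the paper's.

        import numpy as np

        def pso(cost, dim=2, n=20, iters=100, w=0.7, c1=1.5, c2=1.5):
            rng = np.random.default_rng(0)
            x = rng.uniform(-5, 5, (n, dim))       # particle positions
            v = np.zeros((n, dim))                 # velocities
            pbest, pbest_cost = x.copy(), np.array([cost(p) for p in x])
            gbest = pbest[pbest_cost.argmin()]
            for _ in range(iters):
                r1, r2 = rng.random((2, n, dim))
                v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
                x = x + v
                costs = np.array([cost(p) for p in x])
                better = costs < pbest_cost
                pbest[better], pbest_cost[better] = x[better], costs[better]
                gbest = pbest[pbest_cost.argmin()]
            return gbest

        print(pso(lambda p: ((p - 1.0) ** 2).sum()))  # converges near [1, 1]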

  8. Classification-based summation of cerebral digital subtraction angiography series for image post-processing algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Schuldhaus, D; Spiegel, M; Polyanskaya, M; Hornegger, J [Pattern Recognition Lab, University Erlangen-Nuremberg (Germany); Redel, T [Siemens AG Healthcare Sector, Forchheim (Germany); Struffert, T; Doerfler, A, E-mail: martin.spiegel@informatik.uni-erlangen.de [Department of Neuroradiology, University Erlangen-Nuremberg (Germany)

    2011-03-21

    X-ray-based 2D digital subtraction angiography (DSA) plays a major role in the diagnosis, treatment planning and assessment of cerebrovascular disease, e.g., aneurysms, arteriovenous malformations and intracranial stenosis. DSA information is increasingly used for secondary image post-processing such as vessel segmentation, registration and comparison to hemodynamic calculation using computational fluid dynamics. Depending on the amount of injected contrast agent and the duration of injection, these DSA series may not exhibit one single DSA image showing the entire vessel tree. The interesting information for these algorithms, however, is usually depicted within a few images. If these images were combined into one image, the complexity of segmentation or registration methods using DSA series would decrease drastically. In this paper, we propose a novel method that automatically splits a DSA series into three parts, i.e. mask, arterial and parenchymal phase, to provide one final image showing all important vessels with less noise and fewer motion artifacts. This final image covers all arterial phase images, either by image summation or by taking the minimum intensities. The phase classification is done by a two-step approach. The mask/arterial phase border is determined by a Perceptron-based method trained on a set of DSA series. The arterial/parenchymal phase border is specified by a threshold-based method. The evaluation of the proposed method is twofold: (1) comparison between automatic and medical-expert-based phase selection and (2) the quality of the final image, measured by gradient magnitudes inside the vessels and the signal-to-noise ratio (SNR) outside. Experimental results show a match between expert and automatic phase separation of 93%/50% and an average SNR increase of up to 182% compared to summing up the entire series.

  11. A Neuro-Fuzzy based System for Classification of Natural Textures

    Science.gov (United States)

    Jiji, G. Wiselin

    2016-12-01

    A statistical approach based on the coordinated clusters representation of images is used for the classification and recognition of textured images. In this paper, two issues are addressed: first, the extraction of texture features from the fuzzy texture spectrum, in the chromatic and achromatic domains, from each colour component histogram of natural texture images; second, the fusion of multiple classifiers. An advanced neuro-fuzzy learning scheme has also been adopted. Classification tests show that the proposed method performs well compared with other works and may find industrial application in texture classification.
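
    As a minimal stand-in for the chromatic feature-extraction step, the sketch below concatenates normalised per-channel colour histograms of a texture patch. The coordinated clusters representation, the fuzzy texture spectrum and the neuro-fuzzy learner themselves are not reproduced here.

```python
# Rough sketch of per-channel colour-histogram texture features; a toy
# stand-in for the fuzzy texture spectrum described above.
import numpy as np

def channel_histogram_features(image, bins=16):
    """Concatenate normalised histograms of each colour channel."""
    feats = []
    for c in range(image.shape[2]):                  # R, G, B channels
        hist, _ = np.histogram(image[:, :, c], bins=bins, range=(0, 256))
        feats.append(hist / hist.sum())              # normalise per channel
    return np.concatenate(feats)

rng = np.random.default_rng(1)
texture = rng.integers(0, 256, size=(64, 64, 3))     # toy RGB texture patch
print(channel_histogram_features(texture).shape)     # (48,) for 16 bins x 3
```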

  12. Multi-agent reinforcement learning with cooperation based on eligibility traces

    Institute of Scientific and Technical Information of China (English)

    杨玉君; 程君实; 陈佳品

    2004-01-01

    Reinforcement learning has been widely applied in multi-agent systems in recent years. In a multi-agent system, an agent cooperates with other agents to accomplish a given task, and one agent's behavior usually affects the others' behaviors. In traditional reinforcement learning, one agent considers only the other agents' locations, so it is difficult to take their behavior into account, which decreases learning efficiency. This paper proposes multi-agent reinforcement learning with cooperation based on eligibility traces, i.e. one agent estimates another agent's behavior from that agent's eligibility traces. Simulation results demonstrate the validity of the proposed learning method.
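
    The eligibility-trace machinery the method builds on can be shown with single-agent tabular Sarsa(lambda) on a toy chain environment. The multi-agent extension, in which one agent keeps traces of the other agents' behaviour, is not reproduced, and all hyperparameters are assumptions.

```python
# Tabular Sarsa(lambda) sketch: accumulating eligibility traces spread each
# temporal-difference error back over recently visited state-action pairs.
# Toy 5-state chain with a goal at the right end; parameters are assumed.
import random

N_STATES, ACTIONS = 5, [0, 1]                        # actions: move left / right
ALPHA, GAMMA, LAMBDA, EPS = 0.1, 0.95, 0.9, 0.1

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(s, a):
    s2 = max(0, min(N_STATES - 1, s + (1 if a == 1 else -1)))
    reward = 1.0 if s2 == N_STATES - 1 else 0.0      # reward only at the goal
    return s2, reward, s2 == N_STATES - 1

def policy(s):
    if random.random() < EPS:
        return random.choice(ACTIONS)
    # greedy with random tie-breaking
    return max(ACTIONS, key=lambda a: (Q[(s, a)], random.random()))

for _ in range(200):                                 # episodes
    e = {k: 0.0 for k in Q}                          # eligibility traces
    s, a = 0, policy(0)
    done = False
    while not done:
        s2, r, done = step(s, a)
        a2 = policy(s2)
        delta = r + GAMMA * Q[(s2, a2)] * (not done) - Q[(s, a)]
        e[(s, a)] += 1.0                             # accumulating trace
        for k in Q:                                  # update and decay all traces
            Q[k] += ALPHA * delta * e[k]
            e[k] *= GAMMA * LAMBDA
        s, a = s2, a2

print(max(Q.items(), key=lambda kv: kv[1]))          # best learned state-action
```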

  13. Classification of pulmonary airway disease based on mucosal color analysis

    Science.gov (United States)

    Suter, Melissa; Reinhardt, Joseph M.; Riker, David; Ferguson, John Scott; McLennan, Geoffrey

    2005-04-01

    Airway mucosal color changes occur in response to the development of bronchial diseases including lung cancer, cystic fibrosis, chronic bronchitis, emphysema and asthma. These associated changes are often visualized using standard macro-optical bronchoscopy techniques. A limitation of this form of assessment is that the subtle changes indicating early stages of disease development may often be missed as a result of this highly subjective assessment, especially for inexperienced bronchoscopists. Tri-chromatic CCD chip bronchoscopes allow for digital color analysis of the pulmonary airway mucosa. This form of analysis may facilitate a greater understanding of airway disease response. A 2-step image classification approach is employed: the first step is to distinguish between healthy and diseased bronchoscope images, and the second is to classify the detected abnormal images into one of four possible disease categories. A database of airway mucosal color constructed from healthy human volunteers is used as a standard against which statistical comparisons are made with mucosa showing known apparent airway abnormalities. This approach demonstrates great promise as an effective detection and diagnosis tool to highlight potentially abnormal airway mucosa, identifying regions possibly suited to further analysis via airway forceps biopsy or newly developed micro-optical biopsy strategies. Following the identification of abnormal airway images, a neural network is used to distinguish between the different disease classes. We have shown that classification of potentially diseased airway mucosa is possible through comparative color analysis of digital bronchoscope images. The combination of the two strategies appears to increase the classification accuracy in addition to greatly decreasing the computational time.
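
    A minimal sketch of the first step might flag an image whose mean mucosal colour lies far, in Mahalanobis distance, from a healthy colour database. The data, feature choice and threshold below are hypothetical, and the second-step neural network over disease classes is omitted.

```python
# Sketch of a statistical healthy-vs-abnormal check: compare an image's mean
# RGB colour against a healthy colour database via Mahalanobis distance.
# Database, features and threshold are toy assumptions.
import numpy as np

rng = np.random.default_rng(2)
healthy = rng.normal(loc=[180, 120, 110], scale=8, size=(200, 3))  # toy mean colours

mu = healthy.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(healthy, rowvar=False))

def is_abnormal(mean_color, threshold=3.5):
    d = mean_color - mu
    mahalanobis = float(np.sqrt(d @ cov_inv @ d))
    return mahalanobis > threshold

print(is_abnormal(np.array([182, 118, 112])))        # near healthy -> False
print(is_abnormal(np.array([210, 90, 80])))          # shifted colour -> True
```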

  14. An Analysis of Social Class Classification Based on Linguistic Variables

    Institute of Scientific and Technical Information of China (English)

    QU Xia-sha

    2016-01-01

    Since language is an influential tool in social interaction, the relationship between speech and social factors such as social class, gender and even age is worth studying. People employ different linguistic variables to signal their social class, status and identity in social interaction; this linguistic variation involves vocabulary, sounds, grammatical constructions, dialects and so on. As a result, the classification of social class has drawn attention. Linguistic variables in speech interactions indicate the social relationship between speakers. This paper attempts to illustrate three main linguistic variables that bear on social class and that further sociolinguistic studies should pay more attention to.

  15. Currency-based Iterative Multi-Agent Bidding Mechanism Based on Genetic Algorithm

    Institute of Scientific and Technical Information of China (English)

    M. K. LIM; Z. ZHANG

    2002-01-01

    This paper introduces a multi-agent system which i nt egrates process planning and production scheduling, in order to increase the fle xibility of manufacturing systems in coping with rapid changes in dynamic market and dealing with internal uncertainties such as machine breakdown or resources shortage. This system consists of various autonomous agents, each of which has t he capability of communicating with one another and making decisions based on it s knowledge and if necessary on information provided ...

  16. Gaussian Mixture Model and Deep Neural Network based Vehicle Detection and Classification

    Directory of Open Access Journals (Sweden)

    S Sri Harsha

    2016-09-01

    Full Text Available The exponential rise in the demand for vision based traffic surveillance systems has motivated academia and industry to develop optimal vehicle detection and classification schemes. In this paper, an adaptive learning rate based Gaussian mixture model (GMM) algorithm has been developed for background subtraction of multilane traffic data. Here, vehicle rear information and road dash-markings have been used for vehicle detection. After background subtraction, connected component analysis has been applied to retrieve the vehicle region. A multilayered AlexNet deep neural network (DNN) has been applied to extract higher layer features. Furthermore, scale invariant feature transform (SIFT) based vehicle feature extraction has been performed. The extracted 4096-dimensional features have been processed for dimensionality reduction using principal component analysis (PCA) and linear discriminant analysis (LDA). The features have then been mapped to SVM-based classification. The classification results show that AlexNet-FC6 features with LDA give an accuracy of 97.80%, followed by AlexNet-FC6 with PCA (96.75%). AlexNet-FC7 features with the LDA and PCA algorithms have exhibited classification accuracies of 91.40% and 96.30%, respectively. By comparison, SIFT features with the LDA algorithm have exhibited 96.46% classification accuracy. The results reveal that enhanced GMM with the AlexNet DNN at FC6 and FC7 can be significant for optimal vehicle detection and classification.
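
    The dimensionality-reduction and classification tail of this pipeline can be sketched with a standard PCA-plus-SVM chain in scikit-learn. Random vectors stand in for the 4096-dimensional AlexNet FC6/FC7 features, and the GMM background subtraction and detection stages are not reproduced.

```python
# Sketch of the feature-reduction + classification stage: PCA to compress
# high-dimensional CNN-style features, then an RBF SVM. Features and labels
# are random stand-ins, so held-out accuracy is at chance level.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

rng = np.random.default_rng(3)
X = rng.normal(size=(300, 4096))                     # stand-in deep features
y = rng.integers(0, 3, size=300)                     # toy vehicle classes

model = make_pipeline(PCA(n_components=50), SVC(kernel="rbf"))
model.fit(X[:250], y[:250])
print(model.score(X[250:], y[250:]))                 # ~chance on random data
```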

  17. Computer vision-based limestone rock-type classification using probabilistic neural network

    Institute of Scientific and Technical Information of China (English)

    Ashok Kumar Patel; Snehamoy Chatterjee

    2016-01-01

    Proper quality planning of limestone raw materials is an essential job in maintaining the desired feed in a cement plant. Rock-type identification is an integral part of quality planning for a limestone mine. In this paper, a computer vision-based rock-type classification algorithm is proposed for fast and reliable identification without human intervention. A laboratory-scale vision-based model was developed using a probabilistic neural network (PNN) where color histogram features are used as input. The color image histogram-based features, which include weighted mean, skewness and kurtosis, are extracted for all three color channels: red, green, and blue. A total of nine features are used as input to the PNN classification model. The smoothing parameter for the PNN model is selected judiciously to develop an optimal or close-to-optimal classification model. The developed PNN is validated using the test data set, and the results reveal that the proposed vision-based model can perform satisfactorily in classifying limestone rock types. Overall, the misclassification error is below 6%. When compared with three other classification algorithms, it is observed that the proposed method performs substantially better than all three.
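
    A compact sketch of the nine-feature extraction and a Parzen-window PNN decision rule is given below. The images, labels and smoothing parameter sigma are toy assumptions rather than the paper's tuned values.

```python
# Sketch: mean, skewness and kurtosis per R, G, B channel (nine features),
# classified by a minimal probabilistic neural network (sum of Gaussian
# kernels per class). Data and sigma are hypothetical.
import numpy as np

def rock_features(image):
    feats = []
    for c in range(3):                               # R, G, B channels
        x = image[:, :, c].ravel().astype(float)
        m, s = x.mean(), x.std() + 1e-9
        feats += [m,
                  ((x - m) ** 3).mean() / s ** 3,    # skewness
                  ((x - m) ** 4).mean() / s ** 4]    # kurtosis
    return np.array(feats)

def pnn_predict(x, X_train, y_train, sigma=5.0):
    """PNN decision rule: pick the class with the largest kernel-density score."""
    scores = {}
    for cls in np.unique(y_train):
        d = X_train[y_train == cls] - x
        scores[cls] = np.exp(-(d ** 2).sum(axis=1) / (2 * sigma ** 2)).mean()
    return max(scores, key=scores.get)

rng = np.random.default_rng(4)
images = rng.integers(0, 256, size=(40, 32, 32, 3))  # toy rock images
X = np.array([rock_features(im) for im in images])
y = rng.integers(0, 2, size=40)                      # two toy rock types
print(pnn_predict(X[0], X, y), y[0])
```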

  19. A Bayes fusion method based ensemble classification approach for Brown cloud application

    Directory of Open Access Journals (Sweden)

    M. Krishnaveni

    2014-03-01

    Full Text Available Classification is the recurrent task of determining a target function that maps each attribute set to one of the predefined class labels. Ensemble fusion is a classifier fusion technique that combines multiple classifiers to achieve higher classification accuracy than individual classifiers. The main objective of this paper is to combine base classifiers using the ensemble fusion methods Decision Template, Dempster-Shafer and Bayes, and to compare the accuracy of each fusion method on the brown cloud dataset. The base classifiers KNN, MLP and SVM have been considered in the ensemble classification, each classifier with four different function parameters. The experimental study shows that the Bayes fusion method achieves a classification accuracy of 95%, better than the Decision Template (80%) and Dempster-Shafer (85%) methods, on a Brown Cloud image dataset.
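
    Probability-level fusion of the three named base classifiers can be sketched with scikit-learn. A mean-of-posteriors rule stands in for the paper's Bayes fusion method, and a synthetic dataset replaces the Brown Cloud images.

```python
# Sketch of posterior-level classifier fusion: train KNN, MLP and SVM, then
# average their predicted class probabilities. The fusion rule and data are
# stand-ins, not the paper's exact setup.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

X, y = make_classification(n_samples=300, n_features=10, random_state=0)
X_tr, y_tr, X_te, y_te = X[:250], y[:250], X[250:], y[250:]

classifiers = [
    KNeighborsClassifier(n_neighbors=5),
    MLPClassifier(hidden_layer_sizes=(20,), max_iter=1000, random_state=0),
    SVC(kernel="rbf", probability=True, random_state=0),  # enable posteriors
]
for clf in classifiers:
    clf.fit(X_tr, y_tr)

# mean-of-posteriors fusion, then argmax over classes
posteriors = np.mean([clf.predict_proba(X_te) for clf in classifiers], axis=0)
fused = posteriors.argmax(axis=1)
print("fused accuracy:", (fused == y_te).mean())
```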

  20. Content-based similarity for 3D model retrieval and classification

    Institute of Scientific and Technical Information of China (English)

    Ke Lü; Ning He; Jian Xue

    2009-01-01

    With the rapid development of 3D digital shape information, content-based 3D model retrieval and classification has become an important research area. This paper presents a novel 3D model retrieval and classification algorithm. For feature representation, a method combining a distance histogram and moment invariants is proposed to improve retrieval performance. The major advantage of using a distance histogram is its invariance to the transforms of scaling, translation and rotation. Based on the premise that two similar objects should have high mutual information, a query on 3D data should convey a great deal of information about the shape of the two objects, and so we propose a mutual information distance measurement to perform the similarity comparison of 3D objects. The proposed algorithm is tested with a 3D model retrieval and classification prototype, and the experimental evaluation demonstrates satisfactory retrieval results and classification accuracy.
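
    The distance-histogram descriptor is easy to sketch: the histogram of pairwise point distances, normalised by the largest distance, is invariant to rotation, translation and scale. Histogram intersection below is a simple stand-in similarity; the paper's mutual-information distance measurement and the moment invariants are not reproduced.

```python
# Sketch of a distance-histogram shape descriptor for 3D point sets, with
# histogram intersection as a stand-in similarity measure. Data are toy.
import numpy as np

def distance_histogram(points, bins=32):
    diff = points[:, None, :] - points[None, :, :]
    d = np.sqrt((diff ** 2).sum(axis=-1))
    d = d[np.triu_indices(len(points), k=1)]         # unique point pairs
    d = d / d.max()                                  # scale invariance
    hist, _ = np.histogram(d, bins=bins, range=(0, 1))
    return hist / hist.sum()

def similarity(h1, h2):
    return np.minimum(h1, h2).sum()                  # histogram intersection

rng = np.random.default_rng(5)
model_a = rng.normal(size=(200, 3))                  # toy 3D point set
model_b = model_a @ np.eye(3)[::-1] * 2.0            # permuted axes + scaled copy
print(similarity(distance_histogram(model_a), distance_histogram(model_b)))
```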