WorldWideScience

Sample records for intelligent deep web

  1. Deep iCrawl: An Intelligent Vision-Based Deep Web Crawler

    OpenAIRE

    R. Anita; V. Ganga Bharani; N. Nityanandam; Pradeep Kumar Sahoo

    2011-01-01

    The explosive growth of the World Wide Web has posed a challenging problem in extracting relevant data. Traditional web crawlers focus only on the surface web, while the deep web keeps expanding behind the scenes. Deep web pages are created dynamically as a result of queries posed to specific web databases. The structure of deep web pages makes it impossible for traditional web crawlers to access deep web content. This paper, Deep iCrawl, gives a novel and vision-based app...

  2. Web Intelligence and Artificial Intelligence in Education

    Science.gov (United States)

    Devedzic, Vladan

    2004-01-01

    This paper surveys important aspects of Web Intelligence (WI) in the context of Artificial Intelligence in Education (AIED) research. WI explores the fundamental roles as well as practical impacts of Artificial Intelligence (AI) and advanced Information Technology (IT) on the next generation of Web-related products, systems, services, and…

  3. An Intelligent QoS Identification for Untrustworthy Web Services Via Two-phase Neural Networks

    OpenAIRE

    Wang, Weidong; Wang, Liqiang; Lu, Wei

    2016-01-01

    QoS identification for untrustworthy Web services is critical to QoS management in service computing, since the performance of untrustworthy Web services may result in QoS downgrades. The key issue is to intelligently learn the characteristics of trustworthy Web services from different QoS levels, and then to identify untrustworthy ones according to the characteristics of their QoS metrics. As one of the intelligent identification approaches, the deep neural network has emerged as a powerful techniqu...

  4. How to Improve Artificial Intelligence through Web

    OpenAIRE

    Adrian Lupasc

    2005-01-01

    Intelligent agents, intelligent software applications and artificial intelligent applications from artificial intelligence service providers may make their way onto the Web in greater numbers as adaptive software, dynamic programming languages and learning algorithms are introduced into Web services. The evolution of Web architecture may allow intelligent applications to run directly on the Web by introducing XML, RDF and a logic layer. The Intelligent Wireless Web’s significant potential for ra...

  5. Emergent web intelligence: advanced information retrieval

    CERN Document Server

    Badr, Youakim; Abraham, Ajith; Hassanien, Aboul-Ella

    2010-01-01

    Web Intelligence explores the impact of artificial intelligence and advanced information technologies representing the next generation of Web-based systems, services, and environments, and the design of hybrid web systems that serve wired and wireless users more efficiently. Multimedia and XML-based data are produced regularly and in increasing volume in our daily digital activities, and their retrieval must be explored and studied in this emergent web-based era. 'Emergent Web Intelligence: Advanced Information Retrieval' provides reviews of the related cutting-edge technologies and insights. It is v

  6. A study on the Web intelligence

    Institute of Scientific and Technical Information of China (English)

    Sang-Geun Kim

    2004-01-01

    This paper surveys important aspects of Web Intelligence (WI). WI explores the fundamental roles as well as practical impacts of Artificial Intelligence (AI) and advanced Information Technology (IT) on the next generation of Web-related products, systems, and activities. As a direction for scientific research and development, WI can be extremely beneficial for the field of Artificial Intelligence in Education (AIED). This paper covers these issues only very briefly. It focuses more on other issues in WI, such as intelligent Web services and the Semantic Web, and proposes how to use them as a basis for tackling new and challenging research problems in AIED.

  7. Hacking web intelligence: open source intelligence and web reconnaissance concepts and techniques

    CERN Document Server

    Chauhan, Sudhanshu

    2015-01-01

    Open source intelligence (OSINT) and web reconnaissance are rich topics for infosec professionals looking for the best ways to sift through the abundance of information widely available online. In many cases, the first stage of any security assessment, namely reconnaissance, is not given enough attention by security professionals, hackers, and penetration testers. Often, the information openly present is as critical as the confidential data. Hacking Web Intelligence shows you how to dig into the Web and uncover the information many don't even know exists. The book takes a holistic approach

  8. How to Improve Artificial Intelligence through Web

    Directory of Open Access Journals (Sweden)

    Adrian Lupasc

    2005-10-01

    Intelligent agents, intelligent software applications and artificial intelligent applications from artificial intelligence service providers may make their way onto the Web in greater numbers as adaptive software, dynamic programming languages and learning algorithms are introduced into Web services. The evolution of Web architecture may allow intelligent applications to run directly on the Web by introducing XML, RDF and a logic layer. The Intelligent Wireless Web’s significant potential for rapidly completing information transactions may make an important contribution to global worker productivity. Artificial intelligence can be defined as the study of the ways in which computers can be made to perform cognitive tasks. Examples of such tasks include understanding natural language statements, recognizing visual patterns or scenes, diagnosing diseases or illnesses, solving mathematical problems, performing financial analyses, and learning new procedures for solving problems. The term expert system can be considered to denote a particular type of knowledge-based system. An expert system is a system in which the knowledge is deliberately represented “as it is”. Expert systems are applications that make decisions in real-life situations that would otherwise be performed by a human expert. They are programs designed to mimic human performance at specialized, constrained problem-solving tasks. They are constructed as a collection of IF-THEN production rules combined with a reasoning engine that applies those rules, either in a forward or backward direction, to specific problems.

  9. The deep learning AI playbook: strategy for disruptive artificial intelligence

    CERN Document Server

    Perez, Carlos E

    2017-01-01

    Deep Learning Artificial Intelligence involves the interplay of Computer Science, Physics, Biology, Linguistics and Psychology. In addition, it is a technology that can be extremely disruptive. The ramifications for society and even our own humanity will be profound. There are few subjects that are as captivating and as consequential as this. Surprisingly, very little has been written about this new technology in a comprehensive and cohesive way. This book is an opinionated take on the developments of Deep Learning AI. One question many have is "how do I apply Deep Learning AI in a business context?" Technology that is disruptive does not automatically imply that its application to valuable use cases will be apparent. For years, many people could not figure out how to monetize the World Wide Web. We are in a similar situation with Deep Learning AI. The developments may be mind-boggling, but their monetization is far from obvious. This book presents a framework to address this shortcomi...

  10. Intelligent web agents for a 3D virtual community

    Science.gov (United States)

    Dave, T. M.; Zhang, Yanqing; Owen, G. S. S.; Sunderraman, Rajshekhar

    2003-08-01

    In this paper, we propose an Avatar-based intelligent agent technique for 3D Web-based virtual communities, built on distributed artificial intelligence, intelligent agent techniques, and the databases and knowledge bases in a digital library. One of the goals of this joint NSF (IIS-9980130) and ACM SIGGRAPH Education Committee (ASEC) project is to create a virtual community of educators and students who have a common interest in computer graphics, visualization, and interactive techniques. In this virtual community (ASEC World), Avatars will represent the educators, students, and other visitors to the world. Intelligent agents represented as specially dressed Avatars will be available to assist the visitors to ASEC World. The basic Web client-server architecture of the intelligent knowledge-based avatars is given. Importantly, the intelligent Web agent software system for the 3D virtual community is implemented successfully.

  11. Business intelligence and capacity planning: web-based solutions.

    Science.gov (United States)

    James, Roger

    2010-07-01

    Income (activity) and expenditure (costs) form the basis of a modern hospital's 'business intelligence'. However, clinical engagement in business intelligence is patchy. This article describes the principles of business intelligence and outlines some recent developments using web-based applications.

  12. Harnessing the Deep Web: Present and Future

    OpenAIRE

    Madhavan, Jayant; Afanasiev, Loredana; Antova, Lyublena; Halevy, Alon

    2009-01-01

    Over the past few years, we have built a system that has exposed large volumes of Deep-Web content to Google.com users. The content that our system exposes contributes to more than 1000 search queries per second and spans over 50 languages and hundreds of domains. The Deep Web has long been acknowledged to be a major source of structured data on the web, and hence accessing Deep-Web content has long been a problem of interest in the data management community. In this paper, we report on where...

  13. Stratification-Based Outlier Detection over the Deep Web.

    Science.gov (United States)

    Xian, Xuefeng; Zhao, Pengpeng; Sheng, Victor S; Fang, Ligang; Gu, Caidong; Yang, Yuanfeng; Cui, Zhiming

    2016-01-01

    For many applications, finding rare instances or outliers can be more interesting than finding common patterns. Existing work in outlier detection never considers the context of the deep web. In this paper, we argue that, for many scenarios, it is more meaningful to detect outliers over the deep web. In the context of the deep web, users must submit queries through a query interface to retrieve corresponding data. Therefore, traditional data mining methods cannot be directly applied. The primary contribution of this paper is to develop a new data mining method for outlier detection over the deep web. In our approach, the query space of a deep web data source is stratified based on a pilot sample. Neighborhood sampling and uncertainty sampling are developed in this paper with the goal of improving recall and precision based on stratification. Finally, a careful performance evaluation of our algorithm confirms that our approach can effectively detect outliers in the deep web.
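
    As a concrete illustration of the pilot-sample-then-stratify flow described above, a minimal Python sketch follows. The "query" callable, the two-stratum split, and the z-score rule are simplifying assumptions for illustration only; the paper's stratification, neighborhood sampling, and uncertainty sampling are considerably more elaborate.

```python
# Minimal sketch: pilot sampling, stratification, and z-score outlier
# flagging over a (simulated) deep-web source. All names are invented.
import random
import statistics

def detect_outliers(query, query_space, pilot_size=30, z_threshold=3.0):
    # 1. Pilot sample: probe a few query terms to estimate result sizes.
    pilot = random.sample(list(query_space), min(pilot_size, len(query_space)))
    sizes = {term: len(query(term)) for term in pilot}
    median_size = statistics.median(sizes.values())

    # 2. Stratify the probed terms into sparse and dense strata.
    sparse = [t for t, s in sizes.items() if s < median_size]
    dense = [t for t, s in sizes.items() if s >= median_size]

    # 3. Oversample the sparse stratum, where rare records tend to hide.
    terms = sparse + random.sample(dense, max(1, len(dense) // 2))
    values = [v for t in terms for v in query(t)]

    # 4. Flag values far from the sample mean (simple z-score rule).
    mean, stdev = statistics.mean(values), statistics.pstdev(values)
    return [v for v in values if stdev and abs(v - mean) / stdev > z_threshold]

# Demo with a fake source: each term returns a list of numeric records.
fake_db = {f"term{i}": [10.0] * 50 for i in range(40)}
fake_db["rare"] = [10.0, 9999.0]  # a sparse term hiding an outlier
print(detect_outliers(fake_db.get, list(fake_db), pilot_size=len(fake_db)))
```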

  14. Deep Web and Dark Web: Deep World of the Internet

    OpenAIRE

    Çelik, Emine

    2018-01-01

    The Internet is undoubtedly still a revolutionary breakthrough in the history of humanity. Many people use the internet for communication, social media, shopping, political and social agendas, and more. Deep Web and Dark Web concepts are handled not only by computer and software engineers but also by social scientists, because of the role the internet plays for states in international arenas, public institutions, and human life. Starting from the very important role of the internet for social s...

  15. Focused Crawling of the Deep Web Using Service Class Descriptions

    Energy Technology Data Exchange (ETDEWEB)

    Rocco, D; Liu, L; Critchlow, T

    2004-06-21

    Dynamic Web data sources, sometimes known collectively as the Deep Web, increase the utility of the Web by providing intuitive access to data repositories anywhere that Web access is available. Deep Web services provide access to real-time information, like entertainment event listings, or present a Web interface to large databases or other data repositories. Recent studies suggest that the size and growth rate of the dynamic Web greatly exceed those of the static Web, yet dynamic content is often ignored by existing search engine indexers owing to the technical challenges that arise when attempting to search the Deep Web. To address these challenges, we present DynaBot, a service-centric crawler for discovering and clustering Deep Web sources offering dynamic content. DynaBot has three unique characteristics. First, DynaBot utilizes a service class model of the Web implemented through the construction of service class descriptions (SCDs). Second, DynaBot employs a modular, self-tuning system architecture for focused crawling of the Deep Web using service class descriptions. Third, DynaBot incorporates methods and algorithms for efficient probing of the Deep Web and for discovering and clustering Deep Web sources and services through SCD-based service matching analysis. Our experimental results demonstrate the effectiveness of the service class discovery, probing, and matching algorithms and suggest techniques for efficiently managing service discovery in the face of the immense scale of the Deep Web.
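
    To make the service class description (SCD) idea concrete, here is a small hypothetical sketch in Python: an SCD is modeled as regular-expression patterns over a form's input-field names, and a candidate interface matches when enough patterns are satisfied. The class name, patterns, and threshold are invented; DynaBot's actual SCD model and probing algorithms are far richer.

```python
# Hypothetical sketch of SCD-style matching: an SCD as field-name patterns.
import re

FLIGHT_SEARCH_SCD = {
    "name": "flight-search",
    "required_fields": [r"(origin|from|depart)", r"(dest|to|arriv)", r"date"],
    "min_matches": 2,
}

def matches_scd(form_field_names, scd):
    # Count SCD patterns satisfied by at least one field of the form.
    hits = sum(
        any(re.search(pattern, name, re.I) for name in form_field_names)
        for pattern in scd["required_fields"]
    )
    return hits >= scd["min_matches"]

# Field names scraped from two candidate pages' query forms.
print(matches_scd(["from_city", "to_city", "depart_date"], FLIGHT_SEARCH_SCD))  # True
print(matches_scd(["username", "password"], FLIGHT_SEARCH_SCD))                 # False
```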

  16. Intelligent web data management: software architectures and emerging technologies

    CERN Document Server

    Ma, Kun; Yang, Bo; Sun, Runyuan

    2016-01-01

    This book presents some of the emerging techniques and technologies used to handle Web data management. The authors present novel software architectures and emerging technologies and validate them using experimental data and real-world applications. The contents of this book are focused on four popular thematic categories of intelligent Web data management: cloud computing, social networking, monitoring, and literature management. The volume will be a valuable reference for researchers, students, and practitioners in the fields of Web data management, cloud computing, and social networks using advanced intelligence tools.

  17. Towards Brain-inspired Web Intelligence

    Science.gov (United States)

    Zhong, Ning

    Artificial Intelligence (AI) has been mainly studied within the realm of computer-based technologies. Various computational models and knowledge-based systems have been developed for automated reasoning, learning, and problem-solving. However, several grand challenges still exist. AI research has not produced major breakthroughs recently, due to a lack of understanding of human brains and natural intelligence. In addition, most AI models and systems will not work well when dealing with large-scale, dynamically changing, open and distributed information sources at a Web scale.

  18. Stratification-Based Outlier Detection over the Deep Web

    OpenAIRE

    Xian, Xuefeng; Zhao, Pengpeng; Sheng, Victor S.; Fang, Ligang; Gu, Caidong; Yang, Yuanfeng; Cui, Zhiming

    2016-01-01

    For many applications, finding rare instances or outliers can be more interesting than finding common patterns. Existing work in outlier detection never considers the context of the deep web. In this paper, we argue that, for many scenarios, it is more meaningful to detect outliers over the deep web. In the context of the deep web, users must submit queries through a query interface to retrieve corresponding data. Therefore, traditional data mining methods cannot be directly applied. The primary contribu...

  19. Deep web search: an overview and roadmap

    NARCIS (Netherlands)

    Tjin-Kam-Jet, Kien; Trieschnigg, Rudolf Berend; Hiemstra, Djoerd

    2011-01-01

    We review the state-of-the-art in deep web search and propose a novel classification scheme to better compare deep web search systems. The current binary classification (surfacing versus virtual integration) hides a number of implicit decisions that must be made by a developer. We make these

  20. Research Proposal for Distributed Deep Web Search

    NARCIS (Netherlands)

    Tjin-Kam-Jet, Kien

    2010-01-01

    This proposal identifies two main problems related to deep web search and proposes a step-by-step solution for each of them. The first problem is about searching deep web content by means of a simple free-text interface (with just one input field, instead of a complex interface with many input

  1. A Framework for Transparently Accessing Deep Web Sources

    Science.gov (United States)

    Dragut, Eduard Constantin

    2010-01-01

    An increasing number of Web sites expose their content via query interfaces, many of them offering the same type of products/services (e.g., flight tickets, car rental/purchasing). They constitute the so-called "Deep Web". Accessing the content on the Deep Web has been a long-standing challenge for the database community. For a user interested in…

  2. Digging Deeper: The Deep Web.

    Science.gov (United States)

    Turner, Laura

    2001-01-01

    Focuses on the Deep Web, defined as Web content in searchable databases of the type that can be found only by direct query. Discusses the problems of indexing; inability to find information not indexed in the search engine's database; and metasearch engines. Describes 10 sites created to access online databases or directly search them. Lists ways…

  3. Intelligent Agent Based Semantic Web in Cloud Computing Environment

    OpenAIRE

    Mukhopadhyay, Debajyoti; Sharma, Manoj; Joshi, Gajanan; Pagare, Trupti; Palwe, Adarsha

    2013-01-01

    Considering today's web scenario, there is a need for effective and meaningful search over the web, which is provided by the Semantic Web. Existing search engines are keyword based. They are vulnerable in answering intelligent queries from the user due to the dependence of their results on the information available in web pages. Semantic search engines, by contrast, provide efficient and relevant results, as the semantic web is an extension of the current web in which information is given well-defined meaning....

  4. Un paseo por la Deep Web

    OpenAIRE

    Ortega Castillo, Carlos

    2018-01-01

    This document seeks to present a technical and inclusive look at some of the interconnection technologies developed on the Deep Web, first from a theoretical point of view and then with a brief practical introduction. Demystifying the processes developed on the Deep Web gives users tools to clarify and build new paradigms of society, knowledge, and technology that contribute to the responsible development of this type of network and contribute to the grow...

  5. AN EFFICIENT METHOD FOR DEEP WEB CRAWLER BASED ON ACCURACY -A REVIEW

    OpenAIRE

    Pranali Zade; S. W. Mohod

    2018-01-01

    As the deep web grows at a very fast pace, there has been increased interest in techniques that help efficiently locate deep-web interfaces. However, due to the large volume of web resources and the dynamic nature of the deep web, achieving wide coverage and high efficiency is a challenging issue. We propose a three-stage framework for efficiently harvesting deep web interfaces. Experimental results on a set of representative domains show the agility and accuracy of our proposed crawler framew...

  6. REVIEW PAPER ON THE DEEP WEB DATA EXTRACTION

    OpenAIRE

    V. S. Patil; Sneha Sitafale; Priyanka Kale; Poonam Bhujbal; Mohini Dandge

    2018-01-01

    Deep web data extraction is the process of extracting a set of data records and the items that they contain from a query result page. Such structured data can be later integrated into results from other data sources and given to the user in a single, cohesive view. Domain identification is used to identify the query interfaces related to the domain from the forms obtained in the search process. The surface web contains a large amount of unfiltered information, whereas the deep web includes hi...

  7. La deep web: el mercado negro global

    OpenAIRE

    Gay Fernández, José

    2015-01-01

    The deep web is a hidden space on the internet where the first guarantee is anonymity. In general terms, the deep web contains everything that conventional search engines cannot locate. This guarantee serves to host a vast network of illegal services, such as drug trafficking, human trafficking, the hiring of hitmen, the buying and selling of passports and bank accounts, and child pornography, among many others. But anonymity also makes it possible for activ...

  8. Using the Web for Competitive Intelligence (CI) Gathering

    Science.gov (United States)

    Rocker, JoAnne; Roncaglia, George

    2002-01-01

    Businesses use the Internet to communicate company information and to engage their customers. As the use of the Web for business transactions and advertising grows, so too does the amount of useful information for practitioners of competitive intelligence (CI). CI is the legal and ethical practice of gathering information about competitors and the marketplace. Information sources like company webpages, online newspapers and news organizations, electronic journal articles and reports, and Internet search engines allow CI practitioners to analyze company strengths and weaknesses for their customers. More company and marketplace information than ever is available on the Internet, and a lot of it is free. Companies should view the Web not only as a business tool but also as a source of competitive intelligence. In a highly competitive marketplace, can any organization afford to ignore information about the other players and customers in that same marketplace?

  9. Intelligent Learning Infrastructure for Knowledge Intensive Organizations: A Semantic Web Perspective

    Science.gov (United States)

    Lytras, Miltiadis, Ed.; Naeve, Ambjorn, Ed.

    2005-01-01

    In the context of Knowledge Society, the convergence of knowledge and learning management is a critical milestone. "Intelligent Learning Infrastructure for Knowledge Intensive Organizations: A Semantic Web Perspective" provides state-of-the art knowledge through a balanced theoretical and technological discussion. The semantic web perspective…

  10. Enhancing E-Learning through Web Service and Intelligent Agents

    Directory of Open Access Journals (Sweden)

    Nasir Hussain

    2006-04-01

    E-learning is basically the integration of various technologies. E-learning technology is now maturing and we can find a multiplicity of standards. New technologies such as agents and web services are promising better results. In this paper we have proposed an e-learning architecture that is dependent on intelligent agent systems and web services. These communication technologies will make the architecture more robust, scalable and efficient.

  11. Intelligent Web-Based English Instruction in Middle Schools

    Science.gov (United States)

    Jia, Jiyou

    2015-01-01

    The integration of technology into educational environments has become more prominent over the years. The combination of technology and face-to-face interaction with instructors allows for a thorough, more valuable educational experience. "Intelligent Web-Based English Instruction in Middle Schools" addresses the concerns associated with…

  12. deepTools2: a next generation web server for deep-sequencing data analysis.

    Science.gov (United States)

    Ramírez, Fidel; Ryan, Devon P; Grüning, Björn; Bhardwaj, Vivek; Kilpert, Fabian; Richter, Andreas S; Heyne, Steffen; Dündar, Friederike; Manke, Thomas

    2016-07-08

    We present an update to our Galaxy-based web server for processing and visualizing deeply sequenced data. Its core tool set, deepTools, allows users to perform complete bioinformatic workflows ranging from quality controls and normalizations of aligned reads to integrative analyses, including clustering and visualization approaches. Since we first described our deepTools Galaxy server in 2014, we have implemented new solutions for many requests from the community and our users. Here, we introduce significant enhancements and new tools to further improve data visualization and interpretation. deepTools continues to be open to all users and freely available as a web service at deeptools.ie-freiburg.mpg.de. The new deepTools2 suite can be easily deployed within any Galaxy framework via the toolshed repository, and we also provide source code for command-line usage under Linux and Mac OS X. A public and documented API for access to deepTools functionality is also available.

  13. Design and Application of an Intelligent Agent for Web Information Discovery

    Institute of Scientific and Technical Information of China (English)

    闵君; 冯珊; 唐超; 许立达

    2003-01-01

    With the propagation of applications on the internet, the internet has become a great information source which supplies users with valuable information. But it is hard for users to quickly acquire the right information on the web. This paper presents an intelligent agent for internet applications that retrieves and extracts web information under the user's guidance. The intelligent agent is made up of a retrieval script to identify web sources, an extraction script based on the document object model to express the extraction process, a data translator to export the extracted information into knowledge bases with frame structures, and a data reasoner to answer users' questions. A GUI tool named Script Writer helps to generate the extraction script visually, and knowledge rule databases help to extract the wanted information and to generate answers to questions.

  14. An insight into the deep web; why it matters for addiction psychiatry?

    Science.gov (United States)

    Orsolini, Laura; Papanti, Duccio; Corkery, John; Schifano, Fabrizio

    2017-05-01

    Nowadays, the web is rapidly spreading, playing a significant role in the marketing, sale, and distribution of "quasi" legal drugs, hence facilitating continuous changes in drug scenarios. The easily renewable and anarchic online drug market is indeed gradually transforming the drug market itself, from a "street" market to a "virtual" one, with customers able to shop with relative anonymity in a 24-hr marketplace. The hidden "deep web" is facilitating this phenomenon. The paper aims at providing an overview for mental health and addiction professionals of current knowledge about pro-drug activities on the deep web. A nonparticipant netnographic qualitative study of a list of pro-drug websites (blogs, fora, and drug marketplaces) located on the surface web was carried out. A systematic Internet search was conducted on Duckduckgo® and Google® including the following keywords: "drugs" or "legal highs" or "Novel Psychoactive Substances" or "NPS" combined with the words "deep web". Four themes (e.g., "How to access the deep web"; "Darknet and the online drug trading sites"; "Grams, a search engine for the deep web"; and "Cryptocurrencies") and 14 categories were generated and discussed. This paper represents a complete and systematic guide to the deep web, specifically focusing on practical information on online drug marketplaces useful for addiction professionals.

  15. Deep Web: acceso, seguridad y análisis de tráfico

    OpenAIRE

    Cagiga Vila, Ignacio

    2017-01-01

    ABSTRACT: This project aims to provide a technical analysis of the Deep Web in the field of networks and Internet technologies. The main part of the project can be divided into two parts: accessing the Deep Web as a client, and implementing a relay of the Tor Network. Implementing a Tor Network relay makes it possible to understand how the anonymity and security of users trying to access the Deep Web through this network are ensured. The laboratory part...

  16. Discovering Land Cover Web Map Services from the Deep Web with JavaScript Invocation Rules

    Directory of Open Access Journals (Sweden)

    Dongyang Hou

    2016-06-01

    Automatic discovery of isolated land cover web map services (LCWMSs) can potentially help in sharing land cover data. Currently, various search engine-based and crawler-based approaches have been developed for finding services dispersed throughout the surface web. In fact, with the prevalence of geospatial web applications, a considerable number of LCWMSs are hidden in JavaScript code, which belongs to the deep web. However, discovering LCWMSs from JavaScript code remains an open challenge. This paper aims to solve this challenge by proposing a focused deep web crawler for finding more LCWMSs from deep web JavaScript code and the surface web. First, the names of a group of JavaScript links are abstracted as initial judgements. Through name matching, these judgements are utilized to judge whether or not the fetched webpages contain predefined JavaScript links that may prompt JavaScript code to invoke WMSs. Second, some JavaScript invocation functions and URL formats for WMS are summarized as JavaScript invocation rules from prior knowledge of how WMSs are employed and coded in JavaScript. These invocation rules are used to identify the JavaScript code for extracting candidate WMSs through rule matching. The above two operations are incorporated into a traditional focused crawling strategy situated between the tasks of fetching webpages and parsing webpages. Third, LCWMSs are selected by matching services with a set of land cover keywords. Moreover, a search engine for LCWMSs is implemented that uses the focused deep web crawler to retrieve and integrate the LCWMSs it discovers. In the first experiment, eight online geospatial web applications serve as seed URLs (Uniform Resource Locators) and crawling scopes; the proposed crawler addresses only the JavaScript code in these eight applications. All 32 available WMSs hidden in JavaScript code were found using the proposed crawler, while not one WMS was discovered through the focused crawler
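
    A drastically simplified version of the invocation-rule matching step might look like the following Python sketch, which scans JavaScript source for URL strings that carry WMS-style parameters. The regular expression and the sample snippet are illustrative assumptions; the paper's rules also cover library-specific invocation functions.

```python
# Simplified "JavaScript invocation rule": find URL strings in JS source
# whose query parameters follow the OGC WMS convention (SERVICE=WMS,
# REQUEST=GetMap/GetCapabilities). Pattern and example are assumptions.
import re

WMS_URL_RULE = re.compile(
    r"""["'](https?://[^"'\s]+?(?:SERVICE=WMS|REQUEST=Get(?:Map|Capabilities))[^"']*)["']""",
    re.IGNORECASE,
)

def extract_candidate_wms(javascript_source):
    """Return candidate WMS endpoints referenced in a page's JavaScript."""
    return WMS_URL_RULE.findall(javascript_source)

js = 'var layer = new TileLayer({url: "http://example.org/ows?SERVICE=WMS&REQUEST=GetMap"});'
print(extract_candidate_wms(js))
```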

  17. International Conference on Computational Intelligence 2015

    CERN Document Server

    Saha, Sujan

    2017-01-01

    This volume comprises the proceedings of the International Conference on Computational Intelligence 2015 (ICCI15). This book aims to bring together work from leading academicians, scientists, researchers and research scholars from across the globe on all aspects of computational intelligence. The work is composed mainly of original and unpublished results of conceptual, constructive, empirical, experimental, or theoretical work in all areas of computational intelligence. Specifically, the major topics covered include classical computational intelligence models and artificial intelligence, neural networks and deep learning, evolutionary swarm and particle algorithms, hybrid systems optimization, constraint programming, human-machine interaction, computational intelligence for the web analytics, robotics, computational neurosciences, neurodynamics, bioinspired and biomorphic algorithms, cross disciplinary topics and applications. The contents of this volume will be of use to researchers and professionals alike....

  18. Intelligent fault diagnosis of rolling bearings using an improved deep recurrent neural network

    Science.gov (United States)

    Jiang, Hongkai; Li, Xingqiu; Shao, Haidong; Zhao, Ke

    2018-06-01

    Traditional intelligent fault diagnosis methods for rolling bearings heavily depend on manual feature extraction and feature selection. To overcome this dependence, an intelligent deep learning method, named the improved deep recurrent neural network (DRNN), is proposed in this paper. Firstly, frequency spectrum sequences are used as inputs to reduce the input size and ensure good robustness. Secondly, the DRNN is constructed from stacks of recurrent hidden layers to automatically extract features from the input spectrum sequences. Thirdly, an adaptive learning rate is adopted to improve the training performance of the constructed DRNN. The proposed method is verified with experimental rolling bearing data, and the results confirm that the proposed method is more effective than traditional intelligent fault diagnosis methods.
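
    For readers who want a feel for the overall pipeline, a hedged PyTorch sketch follows: spectrum sequences feed stacked recurrent layers, and the final hidden state is classified into fault types. The layer sizes are invented, and Adam with a plateau scheduler merely stands in for the paper's adaptive learning-rate scheme.

```python
# Hedged sketch of a stacked recurrent diagnoser; all sizes are invented.
import torch
import torch.nn as nn

class DeepRecurrentDiagnoser(nn.Module):
    def __init__(self, spectrum_bins=64, hidden=128, layers=3, fault_classes=4):
        super().__init__()
        # Stacked recurrent hidden layers extract features from the spectra.
        self.rnn = nn.LSTM(spectrum_bins, hidden, num_layers=layers, batch_first=True)
        self.head = nn.Linear(hidden, fault_classes)

    def forward(self, x):        # x: (batch, time_steps, spectrum_bins)
        _, (h, _) = self.rnn(x)
        return self.head(h[-1])  # logits: (batch, fault_classes)

model = DeepRecurrentDiagnoser()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(optimizer, factor=0.5)

x = torch.randn(8, 20, 64)  # 8 dummy bearing signals, 20 spectrum frames each
loss = nn.functional.cross_entropy(model(x), torch.randint(0, 4, (8,)))
loss.backward()
optimizer.step()
scheduler.step(loss.item())
```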

  19. A COMPARATIVE ANALYSIS OF WEB INFORMATION EXTRACTION TECHNIQUES DEEP LEARNING vs. NAÏVE BAYES vs. BACK PROPAGATION NEURAL NETWORKS IN WEB DOCUMENT EXTRACTION

    Directory of Open Access Journals (Sweden)

    J. Sharmila

    2016-01-01

    Research related to Web mining is becoming more essential these days because a great deal of information is managed through the web, and web usage is expanding in an uncontrolled way. A particular framework is required for controlling such a large amount of information in the web space. Web mining is classified into three major divisions: web content mining, web usage mining, and web structure mining. Tak-Lam Wong proposed a web content mining methodology with the aid of Bayesian Networks (BN), learning to extract web data and discover characteristics based on the Bayesian approach. Motivated by that investigation, we propose a web content mining methodology based on a deep learning algorithm. The deep learning algorithm offers an advantage over BN, since BN is not embedded in a learning architecture like the proposed system. The main objective of this investigation is web document extraction using different classification algorithms and their analysis. This work extracts data from web URLs and presents three classification algorithms: a deep learning algorithm, a Bayesian algorithm, and a BPNN algorithm. Deep learning is a powerful set of techniques for learning in neural networks that is applied in areas such as computer vision, speech recognition, natural language processing, and biometrics. Deep learning is a simple classification technique that is used on subsets of large fields and requires less time for classification. Naive Bayes classifiers are a family of simple probabilistic classifiers based on applying Bayes' theorem with strong independence assumptions between the features. The BPNN algorithm is then used for classification. Initially, the training and testing dataset contains many URLs. We then extract the content from the dataset. The
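
    To make the Naive Bayes baseline of this comparison concrete, a minimal scikit-learn pipeline is sketched below; the documents and labels are invented placeholders, and the deep learning and BPNN models of the paper would slot into the same fit/predict flow.

```python
# Minimal Naive Bayes text classifier; training data is a toy placeholder.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

train_docs = [
    "cheap flights and hotel deals",
    "protein folding research paper",
    "buy discount tickets online",
    "genome sequencing methods",
]
train_labels = ["commerce", "science", "commerce", "science"]

model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(train_docs, train_labels)
print(model.predict(["new research on genome methods"]))  # ['science']
```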

  20. Deep Blue Cannot Play Checkers: The Need for Generalized Intelligence for Mobile Robots

    Directory of Open Access Journals (Sweden)

    Troy D. Kelley

    2010-01-01

    Generalized intelligence is much more difficult than originally anticipated when Artificial Intelligence (AI) was first introduced in the early 1960s. Deep Blue, the chess-playing supercomputer, was developed to defeat the top-rated human chess player and successfully did so by defeating Garry Kasparov in 1997. However, Deep Blue only played chess; it did not play checkers, or any other games. Other examples of AI programs which learned and played games were successful at specific tasks, but generalizing the learned behavior to other domains was not attempted. So the question remains: Why is generalized intelligence so difficult? If complex tasks require a significant amount of development time, and task generalization is not easily accomplished, then a significant amount of effort is going to be required to develop an intelligent system. This will require a system-of-systems approach that uses many AI techniques: neural networks, fuzzy logic, and cognitive architectures.

  1. Moby and Moby 2: creatures of the deep (web).

    Science.gov (United States)

    Vandervalk, Ben P; McCarthy, E Luke; Wilkinson, Mark D

    2009-03-01

    Facile and meaningful integration of data from disparate resources is the 'holy grail' of bioinformatics. Some resources have begun to address this problem by providing their data using Semantic Web standards, specifically the Resource Description Framework (RDF) and the Web Ontology Language (OWL). Unfortunately, adoption of Semantic Web standards has been slow overall, and even in cases where the standards are being utilized, interconnectivity between resources is rare. In response, we have seen the emergence of centralized 'semantic warehouses' that collect public data from third parties, integrate it, translate it into OWL/RDF and provide it to the community as a unified and queryable resource. One limitation of the warehouse approach is that queries are confined to the resources that have been selected for inclusion. A related problem, perhaps of greater concern, is that the majority of bioinformatics data exists in the 'Deep Web'-that is, the data does not exist until an application or analytical tool is invoked, and therefore does not have a predictable Web address. The inability to utilize Uniform Resource Identifiers (URIs) to address this data is a barrier to its accessibility via URI-centric Semantic Web technologies. Here we examine 'The State of the Union' for the adoption of Semantic Web standards in the health care and life sciences domain by key bioinformatics resources, explore the nature and connectivity of several community-driven semantic warehousing projects, and report on our own progress with the CardioSHARE/Moby-2 project, which aims to make the resources of the Deep Web transparently accessible through SPARQL queries.
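
    The SPARQL access pattern mentioned at the end is easy to sketch in Python. The endpoint URL and predicate below are placeholders rather than the project's actual service; SPARQLWrapper is simply a common Python client for SPARQL endpoints.

```python
# Sketch of querying a SPARQL endpoint; URL and predicate are placeholders.
from SPARQLWrapper import SPARQLWrapper, JSON

sparql = SPARQLWrapper("http://example.org/sparql")  # hypothetical endpoint
sparql.setQuery("""
    PREFIX ex: <http://example.org/ontology#>
    SELECT ?protein ?interactor WHERE {
        ?protein ex:interactsWith ?interactor .
    } LIMIT 10
""")
sparql.setReturnFormat(JSON)

for row in sparql.query().convert()["results"]["bindings"]:
    print(row["protein"]["value"], "->", row["interactor"]["value"])
```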

  2. Effectiveness of Web Quest in Enhancing 4th Grade Students' Spiritual Intelligence

    Science.gov (United States)

    Jwaifell, Mustafa; Al-Mouhtadi, Reham; Aldarabah, Intisar

    2015-01-01

    Spiritual intelligence has gained great interest from a good number of researchers and scholars, yet there is a lack of research on using new technologies, such as WebQuest, one of the e-learning applications in education, as instructional tools for enhancing the spiritual intelligence of 4th graders in Jordanian schools. This study aimed at…

  3. Cyanide Suicide After Deep Web Shopping: A Case Report.

    Science.gov (United States)

    Le Garff, Erwan; Delannoy, Yann; Mesli, Vadim; Allorge, Delphine; Hédouin, Valéry; Tournel, Gilles

    2016-09-01

    Cyanide is a product known for its use in industrial and laboratory processes, as well as for intentional intoxication. The toxicity of cyanide is well described in humans, with rapid inhibition of cellular aerobic metabolism after ingestion or inhalation leading to severe clinical effects that are frequently lethal. We report the case of a young white man found dead in a hotel room after self-poisoning with cyanide ordered on the deep Web. This case shows the probable use of a complex suicide kit including cyanide, as a lethal tool, and dextromethorphan, as a sedative and anxiolytic substance. This case is an original example of the emerging use of deep Web shopping in illegal drug procurement.

  4. From machine learning to deep learning: progress in machine intelligence for rational drug discovery.

    Science.gov (United States)

    Zhang, Lu; Tan, Jianjun; Han, Dan; Zhu, Hao

    2017-11-01

    Machine intelligence, which is normally presented as artificial intelligence, refers to the intelligence exhibited by computers. In the history of rational drug discovery, various machine intelligence approaches have been applied to guide traditional experiments, which are expensive and time-consuming. Over the past several decades, machine-learning tools, such as quantitative structure-activity relationship (QSAR) modeling, were developed that can identify potential biologically active molecules from millions of candidate compounds quickly and cheaply. However, when drug discovery moved into the era of 'big' data, machine learning approaches evolved into deep learning approaches, which are a more powerful and efficient way to deal with the massive amounts of data generated from modern drug discovery approaches. Here, we summarize the history of machine learning and provide insight into recently developed deep learning approaches and their applications in rational drug discovery. We suggest that this evolution of machine intelligence now provides a guide for early-stage drug design and discovery in the current big data era.

  5. Deep web query interface understanding and integration

    CERN Document Server

    Dragut, Eduard C; Yu, Clement T

    2012-01-01

    There are millions of searchable data sources on the Web and to a large extent their contents can only be reached through their own query interfaces. There is an enormous interest in making the data in these sources easily accessible. There are primarily two general approaches to achieve this objective. The first is to surface the contents of these sources from the deep Web and add the contents to the index of regular search engines. The second is to integrate the searching capabilities of these sources and support integrated access to them. In this book, we introduce the state-of-the-art tech

  6. The Effect of Web Assisted Learning with Emotional Intelligence Content on Students' Information about Energy Saving, Attitudes towards Environment and Emotional Intelligence

    Science.gov (United States)

    Ercan, Orhan; Ural, Evrim; Köse, Sinan

    2017-01-01

    For a sustainable world, it is very important for students to develop positive environmental attitudes and to have awareness of energy use. The study aims to investigate the effect of web assisted instruction with emotional intelligence content on 8th grade students' emotional intelligence, attitudes towards environment and energy saving, academic…

  7. SMART CITIES INTELLIGENCE SYSTEM (SMACiSYS) INTEGRATING SENSOR WEB WITH SPATIAL DATA INFRASTRUCTURES (SENSDI)

    OpenAIRE

    D. Bhattacharya; M. Painho

    2017-01-01

    The paper endeavours to enhance the Sensor Web with crucial geospatial analysis capabilities through integration with Spatial Data Infrastructure. The objective is the development of an automated smart cities intelligence system (SMACiSYS) with sensor-web access (SENSDI), utilizing geomatics for sustainable societies. There has been a need to develop an automated integrated system to categorize events and issue information that reaches users directly. At present, no web-enabled information system exists...

  8. Prediction of the behavior of reinforced concrete deep beams with web openings using the finite ele

    Directory of Open Access Journals (Sweden)

    Ashraf Ragab Mohamed

    2014-06-01

    The exact analysis of reinforced concrete deep beams is a complex problem, and the presence of web openings aggravates the situation. However, no code provision exists for the analysis of deep beams with web openings. The code-implemented strut-and-tie models are debatable, and no unique solution using these models is available. In this study, the finite element method is utilized to study the behavior of reinforced concrete deep beams with and without web openings. Furthermore, the effect of the reinforcement distribution on the beam's overall capacity has been studied and compared to the Egyptian code guidelines. The damaged plasticity model has been used for the analysis. Models of simply supported deep beams under 3- and 4-point bending and continuous deep beams with and without web openings have been analyzed. Model verification has shown good agreement with experimental work from the literature. Results of the parametric analysis have shown that web openings crossing the expected compression struts should be avoided, and that the depth of the opening should not exceed 20% of the beam's overall depth. The reinforcement distribution should be in the range of 0.1-0.2 of the beam depth for simply supported deep beams.

  9. Security Guidelines for the Development of Accessible Web Applications through the implementation of intelligent systems

    Directory of Open Access Journals (Sweden)

    Luis Joyanes Aguilar

    2009-12-01

    The significant increase in threats, attacks, and vulnerabilities affecting the Web in recent years has resulted in the development and implementation of tools and methods to ensure security measures for the privacy, confidentiality, and integrity of users' and businesses' data. Under certain circumstances, despite the implementation of these tools, the flow of information is not always passed in a secure manner. Many of these security tools and methods cannot be accessed by people who have disabilities, or by the assistive technologies that enable such people to access the Web efficiently. Among these inaccessible security tools are the virtual keyboard, the CAPTCHA, and other technologies that help to some extent to ensure safety on the Internet and are used in some measure to combat the malicious code and attacks that have increased in recent times on the Web. Through the implementation of intelligent systems, one can detect, recover, and receive information on the characteristics and properties of the different tools and of the hardware or software with which the user is accessing a web application; through the analysis and interpretation performed by these intelligent systems, the characteristics these tools need in order to be accessible to anyone, regardless of disability or navigation context, can be inferred and adjusted automatically. This paper defines a set of guidelines and specific features that security tools and methods should have in order to ensure Web accessibility through the implementation of intelligent systems.

  10. Nature vs. Nurture: The Role of Environmental Resources in Evolutionary Deep Intelligence

    OpenAIRE

    Chung, Audrey G.; Fieguth, Paul; Wong, Alexander

    2018-01-01

    Evolutionary deep intelligence synthesizes highly efficient deep neural network architectures over successive generations. Inspired by the nature versus nurture debate, we propose a study to examine the role of external factors in the network synthesis process by varying the availability of simulated environmental resources. Experimental results were obtained for networks synthesized via asexual evolutionary synthesis (1-parent) and sexual evolutionary synthesis (2-parent, 3-parent, and 5-pa...

  11. The Potential Transformative Impact of Web 2.0 Technology on the Intelligence Community

    National Research Council Canada - National Science Library

    Werner, Adrienne

    2008-01-01

    Web 2.0 technologies can transform and improve interagency collaboration in the Intelligence Community in many of the same ways that have marked their use through the internet in the public domain and private industry...

  12. Deep Web: aproximaciones a la ciber irresponsabilidad

    Directory of Open Access Journals (Sweden)

    Dulce María Bautista Luzardo

    2015-01-01

    The Deep web, or Hard web, is a gigantic part of the undetectable virtual platforms where cyber-actions occur whose precondition is the concealment of the user's identity. These actions have given rise to the distortion of the concept of the person and to the use of the web in an irresponsible way, in some cases to cause unease, to stalk, or at times to hack banks, entities, and private accounts. This is a reflection article that analyzes the scope of the practice of hiding actions on the Internet and of masking one's face in contemporary cyber-society. This reflection aims to draw attention to the responsibility we bear when entering the world of the Internet, and analyzes the dangers that these practices entail.

  13. Post-processing of Deep Web Information Extraction Based on Domain Ontology

    Directory of Open Access Journals (Sweden)

    PENG, T.

    2013-11-01

    Many methods are utilized to extract and process query results in the deep Web, relying on the different structures of Web pages and the various design modes of databases. However, some semantic meanings and relations are ignored. In this paper, we therefore present an approach for post-processing deep Web query results based on a domain ontology, which can utilize semantic meanings and relations. A block identification model (BIM) based on node similarity is defined to extract data blocks that are relevant to a specific domain after reducing noisy nodes. A feature vector of domain books is obtained by a result set extraction model (RSEM) based on the vector space model (VSM). RSEM, in combination with BIM, builds the domain ontology on books, which not only removes the limit of Web page structures when extracting data information, but also makes use of the semantic meanings of the domain ontology. After extracting basic information from Web pages, a ranking algorithm is adopted to offer an ordered list of data records to users. Experimental results show that BIM and RSEM extract data blocks and build the domain ontology accurately. In addition, relevant data records and basic information are extracted and ranked. The precision and recall results show that our proposed method is feasible and efficient.
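
    The vector-space core of this approach can be illustrated in a few lines of Python. The sketch below scores candidate page blocks against a domain seed text using TF-IDF and cosine similarity; the seed text, blocks, and threshold are invented, and the paper's BIM operates on DOM nodes rather than raw strings.

```python
# Vector-space sketch: keep blocks similar to a domain seed, drop noise.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

domain_seed = "book title author publisher isbn price edition"
blocks = [
    "The Art of Computer Science, John Doe, Acme Press, ISBN 123, $40",
    "Copyright 2013 | About us | Contact | Terms of service",
]

vectors = TfidfVectorizer().fit_transform([domain_seed] + blocks)
scores = cosine_similarity(vectors[0], vectors[1:]).ravel()
relevant = [b for b, s in zip(blocks, scores) if s > 0.1]  # assumed threshold
print(relevant)  # the data block survives; the noisy footer is dropped
```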

  14. IAServ: An Intelligent Home Care Web Services Platform in a Cloud for Aging-in-Place

    Directory of Open Access Journals (Sweden)

    Chang-Yu Chiang

    2013-11-01

    As the elderly population has been rapidly expanding and the core tax-paying population has been shrinking, the need for adequate elderly health and housing services continues to grow while the resources to provide such services are becoming increasingly scarce. Thus, increasing the efficiency of the delivery of healthcare services through the use of modern technology is a pressing issue. The seamless integration of such enabling technologies as ontology, intelligent agents, web services, and cloud computing is transforming healthcare from hospital-based treatments to home-based self-care and preventive care. A ubiquitous healthcare platform based on this technological integration, which synergizes service providers with patients’ needs, must be developed to provide personalized healthcare services at the right time, in the right place, and in the right manner. This paper presents the development and overall architecture of IAServ (the Intelligent Aging-in-place Home care Web Services Platform) to provide personalized healthcare service ubiquitously in a cloud computing setting and to support the most desirable and cost-efficient method of care for the aged: aging in place. IAServ is expected to offer intelligent, pervasive, accurate and contextually-aware personal care services. Architecturally, the implemented IAServ leverages web services and cloud computing to provide economic, scalable, and robust healthcare services over the Internet.

  15. IAServ: an intelligent home care web services platform in a cloud for aging-in-place.

    Science.gov (United States)

    Su, Chuan-Jun; Chiang, Chang-Yu

    2013-11-12

    As the elderly population has been rapidly expanding and the core tax-paying population has been shrinking, the need for adequate elderly health and housing services continues to grow while the resources to provide such services are becoming increasingly scarce. Thus, increasing the efficiency of the delivery of healthcare services through the use of modern technology is a pressing issue. The seamless integration of such enabling technologies as ontology, intelligent agents, web services, and cloud computing is transforming healthcare from hospital-based treatments to home-based self-care and preventive care. A ubiquitous healthcare platform based on this technological integration, which synergizes service providers with patients' needs, must be developed to provide personalized healthcare services at the right time, in the right place, and in the right manner. This paper presents the development and overall architecture of IAServ (the Intelligent Aging-in-place Home care Web Services Platform) to provide personalized healthcare service ubiquitously in a cloud computing setting and to support the most desirable and cost-efficient method of care for the aged: aging in place. IAServ is expected to offer intelligent, pervasive, accurate and contextually-aware personal care services. Architecturally, the implemented IAServ leverages web services and cloud computing to provide economic, scalable, and robust healthcare services over the Internet.

  16. Designing A General Deep Web Harvester by Harvestability Factor

    NARCIS (Netherlands)

    Khelghati, Mohammadreza; van Keulen, Maurice; Hiemstra, Djoerd

    2014-01-01

    To make deep web data accessible, harvesters have a crucial role. Targeting different domains and websites enhances the need for a general-purpose harvester which can be applied to different settings and situations. To develop such a harvester, a large number of issues should be addressed. To have

  17. A Web-Based Authoring Tool for Algebra-Related Intelligent Tutoring Systems

    Directory of Open Access Journals (Sweden)

    Maria Virvou

    2000-01-01

    This paper describes the development of a web-based authoring tool for Intelligent Tutoring Systems. The tool aims to be useful to teachers and students of domains that make use of algebraic equations. The initial input to the tool is a "description" of a specific domain given by a human teacher. In return, the tool provides assistance in the construction of exercises by the human teacher, and then monitors the students while they are solving the exercises and provides appropriate feedback. The tool incorporates intelligence in its diagnostic component, which performs diagnosis of students’ errors. It also handles the teaching material in a flexible and individualised way.
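
    A tiny example of the kind of check such a diagnostic component can build on: testing whether a student's rewritten equation preserves the solution set of the original, here via SymPy. This is an illustrative assumption, not the tool's actual diagnosis algorithm, which models specific error types beyond a pass/fail equivalence test.

```python
# Equivalence check an algebra ITS can build on; illustrative only.
from sympy import Eq, solveset, symbols

x = symbols("x")

def same_solutions(original, student_step):
    # Two equations are equivalent if they have identical solution sets.
    return solveset(original, x) == solveset(student_step, x)

original = Eq(2 * x + 6, 10)
print(same_solutions(original, Eq(2 * x, 4)))   # True: subtracted 6 correctly
print(same_solutions(original, Eq(2 * x, 16)))  # False: sign error detected
```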

  18. Deep pelagic food web structure as revealed by in situ feeding observations.

    Science.gov (United States)

    Choy, C Anela; Haddock, Steven H D; Robison, Bruce H

    2017-12-06

    Food web linkages, or the feeding relationships between species inhabiting a shared ecosystem, are an ecological lens through which ecosystem structure and function can be assessed, and thus are fundamental to informing sustainable resource management. Empirical feeding datasets have traditionally been painstakingly generated from stomach content analysis, direct observations and from biochemical trophic markers (stable isotopes, fatty acids, molecular tools). Each approach carries inherent biases and limitations, as well as advantages. Here, using 27 years (1991-2016) of in situ feeding observations collected by remotely operated vehicles (ROVs), we quantitatively characterize the deep pelagic food web of central California within the California Current, complementing existing studies of diet and trophic interactions with a unique perspective. Seven hundred and forty-three independent feeding events were observed with ROVs from near-surface waters down to depths approaching 4000 m, involving an assemblage of 84 different predators and 82 different prey types, for a total of 242 unique feeding relationships. The greatest diversity of prey was consumed by narcomedusae, followed by physonect siphonophores, ctenophores and cephalopods. We highlight key interactions within the poorly understood 'jelly web', showing the importance of medusae, ctenophores and siphonophores as key predators, whose ecological significance is comparable to large fish and squid species within the central California deep pelagic food web. Gelatinous predators are often thought to comprise relatively inefficient trophic pathways within marine communities, but we build upon previous findings to document their substantial and integral roles in deep pelagic food webs.

  19. Deep Web Search Interface Identification: A Semi-Supervised Ensemble Approach

    OpenAIRE

    Hong Wang; Qingsong Xu; Lifeng Zhou

    2014-01-01

    To surface the Deep Web, one crucial task is to predict whether a given web page has a search interface (searchable HyperText Markup Language (HTML) form) or not. Previous studies have focused on supervised classification with labeled examples. However, labeled data are scarce, hard to get, and require tedious manual work, while unlabeled HTML forms are abundant and easy to obtain. In this research, we consider the plausibility of using both labeled and unlabeled data to train better models to...
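
    A hand-written heuristic makes the classification target concrete: inspect a page's form elements for free-text inputs with search-like names. The hint list and rule below are assumptions for illustration; the paper instead trains a semi-supervised ensemble over features like these.

```python
# Heuristic baseline: does this HTML page contain a searchable form?
from bs4 import BeautifulSoup

SEARCH_HINTS = ("search", "query", "find", "keyword", "q")

def has_search_interface(html):
    soup = BeautifulSoup(html, "html.parser")
    for form in soup.find_all("form"):
        # Free-text inputs (a missing type attribute defaults to "text").
        text_inputs = [
            i for i in form.find_all("input")
            if i.get("type", "text").lower() in ("text", "search")
        ]
        names = " ".join(
            (i.get("name") or "") + " " + (i.get("id") or "") for i in text_inputs
        ).lower()
        # Rough substring test; a real classifier would use richer features.
        if text_inputs and any(hint in names for hint in SEARCH_HINTS):
            return True
    return False

print(has_search_interface('<form><input type="text" name="q"></form>'))       # True
print(has_search_interface('<form><input type="password" name="pw"></form>'))  # False
```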

  20. Effects of an Intelligent Web-Based English Instruction System on Students' Academic Performance

    Science.gov (United States)

    Jia, J.; Chen, Y.; Ding, Z.; Bai, Y.; Yang, B.; Li, M.; Qi, J.

    2013-01-01

    This research conducted quasi-experiments in four middle schools to evaluate the long-term effects of an intelligent web-based English instruction system, Computer Simulation in Educational Communication (CSIEC), on students' academic attainment. The analysis of regular examination scores and vocabulary test validates the positive impact of CSIEC,…

  1. The utilisation of the deep web for military counter terrorist operations

    CSIR Research Space (South Africa)

    Aschmann, MJ

    2017-03-01

    Full Text Available The Internet offers anonymity and a disregard of national boundaries. Most countries are deeply concerned about the threat that cyberspace, and in particular cyberterrorism, poses to national security. The Deep and Dark Web is associated...

  2. Intelligent Detection of Structure from Remote Sensing Images Based on Deep Learning Method

    Science.gov (United States)

    Xin, L.

    2018-04-01

    Utilizing high-resolution remote sensing images for earth observation has become the common method of land use monitoring. Traditional image interpretation requires extensive human participation, which is inefficient and makes accuracy difficult to guarantee. At present, artificial intelligence methods such as deep learning have substantial advantages in image recognition. By means of a large number of remote sensing image samples and deep neural network models, we can rapidly decipher objects of interest such as buildings. In terms of both efficiency and accuracy, the deep learning method is preponderant. This paper describes research on the deep learning method using a large number of remote sensing image samples and verifies the feasibility of building extraction via experiments.

  3. GROWTH OF COLLECTIVE INTELLIGENCE BY LINKING KNOWLEDGE WORKERS THROUGH SOCIAL MEDIA

    Directory of Open Access Journals (Sweden)

    JAROSLAVA KUBÁTOVÁ

    2012-05-01

    Full Text Available Collective intelligence can be defined, very broadly, as groups of individuals that do things collectively, and that seem to be intelligent. Collective intelligence has existed for ages. Families, tribes, companies, countries, etc., are all groups of individuals doing things collectively, and that seem to be intelligent. However, over the past two decades, the rise of the Internet has enabled new types of collective intelligence. Companies can take advantage of so-called Web-enabled collective intelligence, which is based on linking knowledge workers through social media. That means that companies can hire geographically dispersed knowledge workers and create so-called virtual teams of these knowledge workers (members of the virtual teams are connected only via the Internet and do not meet face to face). By providing an online social network, the companies can achieve significant growth of collective intelligence. But to create and use an online social network within a company in a really efficient way, the managers need to have a deep understanding of how such a system works. Thus the purpose of this paper is to share knowledge about the effective use of social networks in companies. The main objectives of this paper are as follows: to introduce some good practices of the use of social media in companies, to analyze these practices, and to generalize recommendations for a successful introduction and use of social media to increase the collective intelligence of a company.

  4. An Autonomous Learning System of Bengali Characters Using Web-Based Intelligent Handwriting Recognition

    Science.gov (United States)

    Khatun, Nazma; Miwa, Jouji

    2016-01-01

    This research project aimed to develop an intelligent Bengali handwriting education system to improve the literacy level in Bangladesh. Due to socio-economic limitations, not all of the population has the chance to go to school. Here, we developed a prototype of a web-based (iPhone/smartphone or computer browser) intelligent…

  5. Deep neural networks: A promising tool for fault characteristic mining and intelligent diagnosis of rotating machinery with massive data

    Science.gov (United States)

    Jia, Feng; Lei, Yaguo; Lin, Jing; Zhou, Xin; Lu, Na

    2016-05-01

    Aiming to promptly process the massive fault data and automatically provide accurate diagnosis results, numerous studies have been conducted on intelligent fault diagnosis of rotating machinery. Among these studies, the methods based on artificial neural networks (ANNs) are commonly used, which employ signal processing techniques for extracting features and further input the features to ANNs for classifying faults. Though these methods did work in intelligent fault diagnosis of rotating machinery, they still have two deficiencies. (1) The features are manually extracted depending on much prior knowledge about signal processing techniques and diagnostic expertise. In addition, these manual features are extracted according to a specific diagnosis issue and probably unsuitable for other issues. (2) The ANNs adopted in these methods have shallow architectures, which limits the capacity of ANNs to learn the complex non-linear relationships in fault diagnosis issues. As a breakthrough in artificial intelligence, deep learning holds the potential to overcome the aforementioned deficiencies. Through deep learning, deep neural networks (DNNs) with deep architectures, instead of shallow ones, could be established to mine the useful information from raw data and approximate complex non-linear functions. Based on DNNs, a novel intelligent method is proposed in this paper to overcome the deficiencies of the aforementioned intelligent diagnosis methods. The effectiveness of the proposed method is validated using datasets from rolling element bearings and planetary gearboxes. These datasets contain massive measured signals involving different health conditions under various operating conditions. The diagnosis results show that the proposed method is able to not only adaptively mine available fault characteristics from the measured signals, but also obtain superior diagnosis accuracy compared with the existing methods.
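
    As a rough illustration of the idea above (a sketch under assumptions, not the authors' exact network), the following PyTorch snippet maps raw vibration-signal windows directly to health-condition labels with a deep fully connected architecture; all layer sizes, window length, and data are made-up placeholders.

        import torch
        import torch.nn as nn

        # Hypothetical deep net: raw vibration windows in, fault classes out.
        class DiagnosisDNN(nn.Module):
            def __init__(self, window_size=1024, n_conditions=10):
                super().__init__()
                self.net = nn.Sequential(
                    nn.Linear(window_size, 512), nn.ReLU(),
                    nn.Linear(512, 256), nn.ReLU(),
                    nn.Linear(256, 64), nn.ReLU(),
                    nn.Linear(64, n_conditions),
                )

            def forward(self, x):
                return self.net(x)

        model = DiagnosisDNN()
        signals = torch.randn(32, 1024)        # stand-in measured signal windows
        labels = torch.randint(0, 10, (32,))   # stand-in health conditions
        loss = nn.CrossEntropyLoss()(model(signals), labels)
        loss.backward()                        # one training step (optimizer omitted)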

  6. A deep knowledge architecture for intelligent support of nuclear waste transportation decisions

    International Nuclear Information System (INIS)

    Batra, D.; Bowen, W.M.; Hill, T.R.; Weeks, K.D.

    1988-01-01

    The concept of intelligent decision support has been discussed and explored in several recent papers, one of which has suggested the use of a Deep Knowledge Architecture. This paper explores this concept through application to a specific decision environment. The complex problems involved in nuclear waste disposal decisions provide an excellent test case. The resulting architecture uses an integrated, multi-level model base to represent the deep knowledge of the problem. Combined with the surface-level knowledge represented by the database, the proposed knowledge base complements that of the decision-maker, allowing analysis, at a range of levels, of decisions that may themselves occur at a range of levels

  7. Rapid and accurate intraoperative pathological diagnosis by artificial intelligence with deep learning technology.

    Science.gov (United States)

    Zhang, Jing; Song, Yanlin; Xia, Fan; Zhu, Chenjing; Zhang, Yingying; Song, Wenpeng; Xu, Jianguo; Ma, Xuelei

    2017-09-01

    Frozen section is widely used for intraoperative pathological diagnosis (IOPD), which is essential for intraoperative decision making. However, frozen section suffers from drawbacks such as being time consuming and having a high misdiagnosis rate. Recently, artificial intelligence (AI) with deep learning technology has shown a bright future in medicine. We hypothesize that AI with deep learning technology could help IOPD, with a computer trained by a dataset of intraoperative lesion images. Evidence supporting our hypothesis includes the successful use of AI with deep learning technology in diagnosing skin cancer, and the development of deep-learning algorithms. A large training dataset is critical to increasing diagnostic accuracy. The performance of the trained machine could be tested on new images before clinical use. Real-time diagnosis, ease of use, and potentially high accuracy are the advantages of AI for IOPD. In sum, AI with deep learning technology is a promising method for helping rapid and accurate IOPD. Copyright © 2017 Elsevier Ltd. All rights reserved.

  8. Cluo: Web-Scale Text Mining System For Open Source Intelligence Purposes

    Directory of Open Access Journals (Sweden)

    Przemyslaw Maciolek

    2013-01-01

    Full Text Available The amount of textual information published on the Internet is considered to be in the billions of web pages, blog posts, comments, social media updates and others. Analyzing such quantities of data requires a high level of distribution – of both data and computing. This is especially true in the case of complex algorithms, often used in text mining tasks. The paper presents a prototype implementation of CLUO – an Open Source Intelligence (OSINT) system, which extracts and analyzes significant quantities of openly available information.

  9. Intelligent Information Fusion in the Aviation Domain: A Semantic-Web based Approach

    Science.gov (United States)

    Ashish, Naveen; Goforth, Andre

    2005-01-01

    Information fusion from multiple sources is a critical requirement for System Wide Information Management in the National Airspace (NAS). NASA and the FAA envision creating an "integrated pool" of information originally coming from different sources, which users, intelligent agents and NAS decision support tools can tap into. In this paper we present the results of our initial investigations into the requirements and prototype development of such an integrated information pool for the NAS. We have attempted to ascertain key requirements for such an integrated pool based on a survey of DSS tools that will benefit from this integrated pool. We then advocate key technologies from computer science research areas such as the semantic web, information integration, and intelligent agents that we believe are well suited to achieving the envisioned system wide information management capabilities.

  10. Informatics in radiology: automated Web-based graphical dashboard for radiology operational business intelligence.

    Science.gov (United States)

    Nagy, Paul G; Warnock, Max J; Daly, Mark; Toland, Christopher; Meenan, Christopher D; Mezrich, Reuben S

    2009-11-01

    Radiology departments today are faced with many challenges to improve operational efficiency, performance, and quality. Many organizations rely on antiquated, paper-based methods to review their historical performance and understand their operations. With increased workloads, geographically dispersed image acquisition and reading sites, and rapidly changing technologies, this approach is increasingly untenable. A Web-based dashboard was constructed to automate the extraction, processing, and display of indicators and thereby provide useful and current data for twice-monthly departmental operational meetings. The feasibility of extracting specific metrics from clinical information systems was evaluated as part of a longer-term effort to build a radiology business intelligence architecture. Operational data were extracted from clinical information systems and stored in a centralized data warehouse. Higher-level analytics were performed on the centralized data, a process that generated indicators in a dynamic Web-based graphical environment that proved valuable in discussion and root cause analysis. Results aggregated over a 24-month period since implementation suggest that this operational business intelligence reporting system has provided significant data for driving more effective management decisions to improve productivity, performance, and quality of service in the department.

  11. Hospital-based nurses' perceptions of the adoption of Web 2.0 tools for knowledge sharing, learning, social interaction and the production of collective intelligence.

    Science.gov (United States)

    Lau, Adela S M

    2011-11-11

    Web 2.0 provides a platform or a set of tools such as blogs, wikis, really simple syndication (RSS), podcasts, tags, social bookmarks, and social networking software for knowledge sharing, learning, social interaction, and the production of collective intelligence in a virtual environment. Web 2.0 is also becoming increasingly popular in e-learning and e-social communities. The objectives were to investigate how Web 2.0 tools can be applied for knowledge sharing, learning, social interaction, and the production of collective intelligence in the nursing domain and to investigate what behavioral perceptions are involved in the adoption of Web 2.0 tools by nurses. The decomposed technology acceptance model was applied to construct the research model on which the hypotheses were based. A questionnaire was developed based on the model and data from nurses (n = 388) were collected from late January 2009 until April 30, 2009. Pearson's correlation analysis and t tests were used for data analysis. Intention toward using Web 2.0 tools was positively correlated with usage behavior (r = .60, P Web 2.0 tools and enable them to better plan the strategy of implementation of Web 2.0 tools for knowledge sharing, learning, social interaction, and the production of collective intelligence.

  12. Hospital-Based Nurses’ Perceptions of the Adoption of Web 2.0 Tools for Knowledge Sharing, Learning, Social Interaction and the Production of Collective Intelligence

    Science.gov (United States)

    2011-01-01

    Background Web 2.0 provides a platform or a set of tools such as blogs, wikis, really simple syndication (RSS), podcasts, tags, social bookmarks, and social networking software for knowledge sharing, learning, social interaction, and the production of collective intelligence in a virtual environment. Web 2.0 is also becoming increasingly popular in e-learning and e-social communities. Objectives The objectives were to investigate how Web 2.0 tools can be applied for knowledge sharing, learning, social interaction, and the production of collective intelligence in the nursing domain and to investigate what behavioral perceptions are involved in the adoption of Web 2.0 tools by nurses. Methods The decomposed technology acceptance model was applied to construct the research model on which the hypotheses were based. A questionnaire was developed based on the model and data from nurses (n = 388) were collected from late January 2009 until April 30, 2009. Pearson’s correlation analysis and t tests were used for data analysis. Results Intention toward using Web 2.0 tools was positively correlated with usage behavior (r = .60, P Web 2.0 tools and enable them to better plan the strategy of implementation of Web 2.0 tools for knowledge sharing, learning, social interaction, and the production of collective intelligence. PMID:22079851

  13. A novel method for intelligent fault diagnosis of rolling bearings using ensemble deep auto-encoders

    Science.gov (United States)

    Shao, Haidong; Jiang, Hongkai; Lin, Ying; Li, Xingqiu

    2018-03-01

    Automatic and accurate identification of rolling bearing fault categories, especially the fault severities and fault orientations, is still a major challenge in rotating machinery fault diagnosis. In this paper, a novel method called ensemble deep auto-encoders (EDAEs) is proposed for intelligent fault diagnosis of rolling bearings. Firstly, different activation functions are employed as the hidden-layer functions to design a series of auto-encoders (AEs) with different characteristics. Secondly, EDAEs are constructed with the various auto-encoders for unsupervised feature learning from the measured vibration signals. Finally, a combination strategy is designed to ensure accurate and stable diagnosis results. The proposed method is applied to analyze experimental bearing vibration signals. The results confirm that the proposed method removes the dependence on manual feature extraction and overcomes the limitations of individual deep learning models, making it more effective than existing intelligent diagnosis methods.
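
    The core construction can be pictured with the following sketch (an illustrative reading with assumed sizes and an assumed majority-vote combination, not the paper's exact design): several auto-encoders that differ only in their hidden activation learn features independently, and their downstream predictions are then combined.

        import torch
        import torch.nn as nn

        class AutoEncoder(nn.Module):
            def __init__(self, n_in=512, n_hidden=64, act=nn.ReLU):
                super().__init__()
                self.encoder = nn.Sequential(nn.Linear(n_in, n_hidden), act())
                self.decoder = nn.Linear(n_hidden, n_in)

            def forward(self, x):
                return self.decoder(self.encoder(x))

        # Ensemble members differ only in their hidden activation function.
        ensemble = [AutoEncoder(act=a) for a in (nn.ReLU, nn.Tanh, nn.Sigmoid, nn.ELU)]

        x = torch.randn(16, 512)               # stand-in vibration features
        for ae in ensemble:                    # unsupervised reconstruction step
            nn.MSELoss()(ae(x), x).backward()

        # Each trained member would feed a classifier; one simple combination
        # strategy is a majority vote over the members' predicted labels.
        votes = torch.stack([torch.randint(0, 4, (16,)) for _ in ensemble])  # placeholder predictions
        final = votes.mode(dim=0).values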

  14. Autonomous development and learning in artificial intelligence and robotics: Scaling up deep learning to human-like learning.

    Science.gov (United States)

    Oudeyer, Pierre-Yves

    2017-01-01

    Autonomous lifelong development and learning are fundamental capabilities of humans, differentiating them from current deep learning systems. However, other branches of artificial intelligence have designed crucial ingredients towards autonomous learning: curiosity and intrinsic motivation, social learning and natural interaction with peers, and embodiment. These mechanisms guide exploration and autonomous choice of goals, and integrating them with deep learning opens stimulating perspectives.

  15. Intelligible Artificial Intelligence

    OpenAIRE

    Weld, Daniel S.; Bansal, Gagan

    2018-01-01

    Since Artificial Intelligence (AI) software uses techniques like deep lookahead search and stochastic optimization of huge neural networks to fit mammoth datasets, it often results in complex behavior that is difficult for people to understand. Yet organizations are deploying AI algorithms in many mission-critical settings. In order to trust their behavior, we must make it intelligible --- either by using inherently interpretable models or by developing methods for explaining otherwise overwh...

  16. Designing A General Deep Web Access Approach Based On A Newly Introduced Factor; Harvestability Factor (HF)

    NARCIS (Netherlands)

    Khelghati, Mohammadreza; van Keulen, Maurice; Hiemstra, Djoerd

    2014-01-01

    The growing need to access more and more information draws attention to the huge amount of data hidden behind web forms, known as the deep web. To make this data accessible, harvesters have a crucial role. Targeting different domains and websites enhances the need to have a general-purpose harvester

  17. Deep Learning-Based Noise Reduction Approach to Improve Speech Intelligibility for Cochlear Implant Recipients.

    Science.gov (United States)

    Lai, Ying-Hui; Tsao, Yu; Lu, Xugang; Chen, Fei; Su, Yu-Ting; Chen, Kuang-Chao; Chen, Yu-Hsuan; Chen, Li-Ching; Po-Hung Li, Lieber; Lee, Chin-Hui

    2018-01-20

    We investigate the clinical effectiveness of a novel deep learning-based noise reduction (NR) approach under noisy conditions with challenging noise types at low signal to noise ratio (SNR) levels for Mandarin-speaking cochlear implant (CI) recipients. The deep learning-based NR approach used in this study consists of two modules: noise classifier (NC) and deep denoising autoencoder (DDAE), thus termed (NC + DDAE). In a series of comprehensive experiments, we conduct qualitative and quantitative analyses on the NC module and the overall NC + DDAE approach. Moreover, we evaluate the speech recognition performance of the NC + DDAE NR and classical single-microphone NR approaches for Mandarin-speaking CI recipients under different noisy conditions. The testing set contains Mandarin sentences corrupted by two types of maskers, two-talker babble noise, and a construction jackhammer noise, at 0 and 5 dB SNR levels. Two conventional NR techniques and the proposed deep learning-based approach are used to process the noisy utterances. We qualitatively compare the NR approaches by the amplitude envelope and spectrogram plots of the processed utterances. Quantitative objective measures include (1) normalized covariance measure to test the intelligibility of the utterances processed by each of the NR approaches; and (2) speech recognition tests conducted by nine Mandarin-speaking CI recipients. These nine CI recipients use their own clinical speech processors during testing. The experimental results of objective evaluation and listening test indicate that under challenging listening conditions, the proposed NC + DDAE NR approach yields higher intelligibility scores than the two compared classical NR techniques, under both matched and mismatched training-testing conditions. When compared to the two well-known conventional NR techniques under challenging listening condition, the proposed NC + DDAE NR approach has superior noise suppression capabilities and gives less distortion

  18. Deep Web Search Interface Identification: A Semi-Supervised Ensemble Approach

    Directory of Open Access Journals (Sweden)

    Hong Wang

    2014-12-01

    Full Text Available To surface the Deep Web, one crucial task is to predict whether a given web page has a search interface (searchable HyperText Markup Language (HTML) form) or not. Previous studies have focused on supervised classification with labeled examples. However, labeled data are scarce, hard to get, and require tedious manual work, while unlabeled HTML forms are abundant and easy to obtain. In this research, we consider the plausibility of using both labeled and unlabeled data to train better models to identify search interfaces more effectively. We present a semi-supervised co-training ensemble learning approach using both neural networks and decision trees to deal with the search interface identification problem. We show that the proposed model outperforms previous methods using only labeled data. We also show that adding unlabeled data improves the effectiveness of the proposed model.
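
    A minimal co-training sketch of the kind of semi-supervised loop described above (sizes, confidence thresholds, and data are assumptions): a neural network and a decision tree each pass their confident pseudo-labels on unlabeled forms to the other learner.

        import numpy as np
        from sklearn.neural_network import MLPClassifier
        from sklearn.tree import DecisionTreeClassifier

        rng = np.random.default_rng(0)
        X_lab = rng.normal(size=(40, 20)); y_lab = rng.integers(0, 2, 40)  # labeled forms
        X_unlab = rng.normal(size=(200, 20))                               # unlabeled forms

        mlp = MLPClassifier(max_iter=500).fit(X_lab, y_lab)
        tree = DecisionTreeClassifier().fit(X_lab, y_lab)

        for _ in range(3):                                 # a few co-training rounds
            y_mlp, y_tree = mlp.predict(X_unlab), tree.predict(X_unlab)
            sure_mlp = mlp.predict_proba(X_unlab).max(axis=1) > 0.9
            sure_tree = tree.predict_proba(X_unlab).max(axis=1) > 0.9
            # Each learner teaches the other with its confident pseudo-labels.
            tree.fit(np.vstack([X_lab, X_unlab[sure_mlp]]),
                     np.concatenate([y_lab, y_mlp[sure_mlp]]))
            mlp.fit(np.vstack([X_lab, X_unlab[sure_tree]]),
                    np.concatenate([y_lab, y_tree[sure_tree]]))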

  19. A Dynamic Recommender System for Improved Web Usage Mining and CRM Using Swarm Intelligence.

    Science.gov (United States)

    Alphy, Anna; Prabakaran, S

    2015-01-01

    Today, to enrich e-business, websites are personalized for each user based on an understanding of their interests and behavior. The main challenges of online usage data are information overload and its dynamic nature. In this paper, to address these issues, we propose WebBluegillRecom-annealing, a dynamic recommender system that uses web usage mining techniques in tandem with software agents to provide dynamic recommendations to users that can be used for customizing a website. The proposed WebBluegillRecom-annealing dynamic recommender uses swarm intelligence derived from the foraging behavior of the bluegill fish. It overcomes information overload by handling the dynamic behavior of users. Our dynamic recommender system was compared against traditional collaborative filtering systems. The results show that the proposed system has higher precision, coverage, F1 measure, and scalability than traditional collaborative filtering systems. Moreover, the recommendations given by our system overcome the overspecialization problem by including variety in the recommendations.

  20. Smart Cities Intelligence System (SMACiSYS) Integrating Sensor Web with Spatial Data Infrastructures (sensdi)

    Science.gov (United States)

    Bhattacharya, D.; Painho, M.

    2017-09-01

    The paper endeavours to enhance the Sensor Web with crucial geospatial analysis capabilities through integration with Spatial Data Infrastructure. The objective is the development of an automated smart cities intelligence system (SMACiSYS) with sensor-web access (SENSDI) utilizing geomatics for sustainable societies. There has been a need to develop an automated integrated system to categorize events and issue information that reaches users directly. At present, no web-enabled information system exists which can disseminate messages after evaluating events in real time. The research formalizes a notion of an integrated, independent, generalized, and automated geo-event analysing system making use of geo-spatial data under a popular usage platform. Integrating Sensor Web with Spatial Data Infrastructures (SENSDI) aims to extend SDIs with sensor web enablement, converging geospatial and built infrastructure, and to implement test cases with sensor data and SDI. The other benefit, conversely, is the expansion of spatial data infrastructure to utilize the sensor web, dynamically and in real time, for the smart applications that smarter cities demand nowadays. Hence, SENSDI augments existing smart cities platforms by utilizing the sensor web and spatial information, achieved by coupling pairs of otherwise disjoint interfaces and APIs formulated by the Open Geospatial Consortium (OGC), keeping the entire platform open access and open source. SENSDI is based on Geonode, QGIS and Java, which bind most of the functionalities of the Internet, the sensor web and nowadays the Internet of Things, superseding the Internet of Sensors as well. In a nutshell, the project delivers a generalized, real-time, accessible and analysable platform for sensing the environment and mapping the captured information for optimal decision-making and societal benefit.

  1. SMART CITIES INTELLIGENCE SYSTEM (SMACiSYS) INTEGRATING SENSOR WEB WITH SPATIAL DATA INFRASTRUCTURES (SENSDI)

    Directory of Open Access Journals (Sweden)

    D. Bhattacharya

    2017-09-01

    Full Text Available The paper endeavours to enhance the Sensor Web with crucial geospatial analysis capabilities through integration with Spatial Data Infrastructure. The objective is the development of an automated smart cities intelligence system (SMACiSYS) with sensor-web access (SENSDI) utilizing geomatics for sustainable societies. There has been a need to develop an automated integrated system to categorize events and issue information that reaches users directly. At present, no web-enabled information system exists which can disseminate messages after evaluating events in real time. The research formalizes a notion of an integrated, independent, generalized, and automated geo-event analysing system making use of geo-spatial data under a popular usage platform. Integrating Sensor Web with Spatial Data Infrastructures (SENSDI) aims to extend SDIs with sensor web enablement, converging geospatial and built infrastructure, and to implement test cases with sensor data and SDI. The other benefit, conversely, is the expansion of spatial data infrastructure to utilize the sensor web, dynamically and in real time, for the smart applications that smarter cities demand nowadays. Hence, SENSDI augments existing smart cities platforms by utilizing the sensor web and spatial information, achieved by coupling pairs of otherwise disjoint interfaces and APIs formulated by the Open Geospatial Consortium (OGC), keeping the entire platform open access and open source. SENSDI is based on Geonode, QGIS and Java, which bind most of the functionalities of the Internet, the sensor web and nowadays the Internet of Things, superseding the Internet of Sensors as well. In a nutshell, the project delivers a generalized, real-time, accessible and analysable platform for sensing the environment and mapping the captured information for optimal decision-making and societal benefit.

  2. Efficient Web Harvesting Strategies for Monitoring Deep Web Content

    NARCIS (Netherlands)

    Khelghati, Mohammadreza; Hiemstra, Djoerd; van Keulen, Maurice

    2016-01-01

    Web content changes rapidly. In Focused Web Harvesting, which aims at achieving a complete harvest for a given topic, this dynamic nature of the web creates problems for users who need to access a complete set of web data related to their topics of interest. Whether you are a fan

  3. Efficient Web Harvesting Strategies for Monitoring Deep Web Content

    NARCIS (Netherlands)

    Khelghati, Mohammadreza; Hiemstra, Djoerd; van Keulen, Maurice

    2016-01-01

    Web content changes rapidly [18]. In Focused Web Harvesting [17], which aims to achieve a complete harvest for a given topic, this dynamic nature of the web creates problems for users who need to access all the web data relevant to their topics of interest. Whether you are a fan

  4. Food web structure and vulnerability of a deep-sea ecosystem in the NW Mediterranean Sea

    OpenAIRE

    Tecchio, Samuele; Coll, Marta; Christensen, Villy; Company, Joan B.; Ramirez-Llodra, Eva; Sarda, Francisco

    2013-01-01

    There is increasing fishing pressure on the continental margins of the oceans, and this raises concerns about the vulnerability of the ecosystems thriving there. The current knowledge of the biology of deep-water fish species identifies potential reduced resilience to anthropogenic disturbance. However, there are extreme difficulties in sampling the deep sea, resulting in poorly resolved and indirectly obtained food-web relationships. Here, we modelled the flows and biomasses of a Mediterrane...

  5. Artificial Intelligence as Structural Estimation: Economic Interpretations of Deep Blue, Bonanza, and AlphaGo

    OpenAIRE

    Igami, Mitsuru

    2017-01-01

    Artificial intelligence (AI) has achieved superhuman performance in a growing number of tasks, but understanding and explaining AI remain challenging. This paper clarifies the connections between machine-learning algorithms to develop AIs and the econometrics of dynamic structural models through the case studies of three famous game AIs. Chess-playing Deep Blue is a calibrated value function, whereas shogi-playing Bonanza is an estimated value function via Rust's (1987) nested fixed-point met...

  6. TOPIC MODELING: CLUSTERING OF DEEP WEBPAGES

    OpenAIRE

    Muhunthaadithya C; Rohit J.V; Sadhana Kesavan; E. Sivasankar

    2015-01-01

    The internet comprises a massive amount of information in the form of zillions of web pages. This information can be categorized into the surface web and the deep web. Existing search engines can effectively make use of surface web information, but the deep web remains unexploited. Machine learning techniques have commonly been employed to access deep web content.

  7. Semantic Business Intelligence - a New Generation of Business Intelligence

    Directory of Open Access Journals (Sweden)

    Dinu AIRINEI

    2012-01-01

    Full Text Available Business Intelligence solutions represent applications used by companies to manage, process and analyze data to support substantiated decisions. In the context of the Semantic Web, the development trend is to integrate semantic, unstructured data, requiring business intelligence solutions to be redesigned in such a manner that they can analyze, process and synthesize, in addition to traditional data, data of another form and structure integrated with semantics. This invariably leads to the appearance of a new BI solution, called Semantic Business Intelligence.

  8. Food web flows through a sub-arctic deep-sea benthic community

    Science.gov (United States)

    Gontikaki, E.; van Oevelen, D.; Soetaert, K.; Witte, U.

    2011-11-01

    The benthic food web of the deep Faroe-Shetland Channel (FSC) was modelled by using the linear inverse modelling methodology. The reconstruction of carbon pathways by inverse analysis was based on benthic oxygen uptake rates, biomass data and the transfer of labile carbon through the food web as revealed by a pulse-chase experiment. Carbon deposition was estimated at 2.2 mmol C m⁻² d⁻¹. Approximately 69% of the deposited carbon was respired by the benthic community, with bacteria being responsible for 70% of the total respiration. The major fraction of the labile detritus flux was recycled within the microbial loop, leaving merely 2% of the deposited labile phytodetritus available for metazoan consumption. Bacteria assimilated carbon at high efficiency (0.55) but only 24% of bacterial production was grazed by metazoans; the remainder returned to the dissolved organic matter pool due to viral lysis. Refractory detritus was the basal food resource for nematodes, covering ∼99% of their carbon requirements. On the contrary, macrofauna seemed to obtain the major part of their metabolic needs from bacteria (49% of macrofaunal consumption). Labile detritus transfer was well-constrained, based on the data from the pulse-chase experiment, but appeared to be of limited importance to the diet of the examined benthic organisms; the preferred prey, in this case, was other macrofaunal animals rather than nematodes. Bacteria and detritus contributed 53% and 12%, respectively, to the total carbon ingestion of carnivorous polychaetes, suggesting a high degree of omnivory among higher consumers in the FSC benthic food web. Overall, this study provided a unique insight into the functioning of a deep-sea benthic community and demonstrated how conventional data can be exploited further when combined with state-of-the-art modelling approaches.

  9. Biomagnification of persistent organic pollutants in a deep-sea, temperate food web.

    Science.gov (United States)

    Romero-Romero, Sonia; Herrero, Laura; Fernández, Mario; Gómara, Belén; Acuña, José Luis

    2017-12-15

    Polychlorinated biphenyls (PCBs), polybrominated diphenyl ethers (PBDEs) and polychlorinated dibenzo-p-dioxins and -furans (PCDD/Fs) were measured in a temperate, deep-sea ecosystem, the Avilés submarine Canyon (AC; Cantabrian Sea, Southern Bay of Biscay). There was an increase of contaminant concentration with the trophic level of the organisms, as calculated from stable nitrogen isotope data (δ15N). Such biomagnification was only significant for the pelagic food web, and its magnitude was highly dependent on the type of top predators included in the analysis. The trophic magnification factor (TMF) for PCB-153 in the pelagic food web (spanning four trophic levels) was 6.2 or 2.2, depending on whether homeotherm top predators (cetaceans and seabirds) were included in the analysis or not, respectively. Since body size is significantly correlated with δ15N, it can be used as a proxy to estimate trophic magnification, which can potentially lead to a simple and convenient method to calculate the TMF. In spite of their lower biomagnification, deep-sea fishes showed higher concentrations than their shallower counterparts, although those differences were not significant. In summary, the AC fauna exhibits contaminant levels comparable to or lower than those reported in other systems. Copyright © 2017 Elsevier B.V. All rights reserved.
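
    The trophic-magnification arithmetic mentioned here can be made concrete with a small sketch (all numbers are invented, and the 3.4‰ enrichment of δ15N per trophic level is the conventional assumption): regress log10 concentration on trophic level and exponentiate the slope.

        import numpy as np

        d15n = np.array([6.0, 9.4, 12.8, 16.2])     # per mille, base to top predator
        conc = np.array([0.5, 1.4, 4.1, 11.8])      # PCB-153, ng/g lipid (invented)

        trophic_level = 2.0 + (d15n - d15n[0]) / 3.4   # primary consumer as baseline
        slope, _ = np.polyfit(trophic_level, np.log10(conc), 1)
        tmf = 10 ** slope                           # TMF > 1 implies biomagnification
        print(f"TMF = {tmf:.2f}")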

  10. Why & When Deep Learning Works: Looking Inside Deep Learnings

    OpenAIRE

    Ronen, Ronny

    2017-01-01

    The Intel Collaborative Research Institute for Computational Intelligence (ICRI-CI) has been heavily supporting Machine Learning and Deep Learning research from its foundation in 2012. We have asked six leading ICRI-CI Deep Learning researchers to address the challenge of "Why & When Deep Learning works", with the goal of looking inside Deep Learning, providing insights on how deep networks function, and uncovering key observations on their expressiveness, limitations, and potential. The outp...

  11. Engineering Adaptive Web Applications

    DEFF Research Database (Denmark)

    Dolog, Peter

    2007-01-01

    Information and services on the web are accessible for everyone. Users of the web differ in their background, culture, political and social environment, interests and so on. Ambient intelligence was envisioned as a concept for systems which are able to adapt to user actions and needs. With the growing amount of information and services, web applications become natural candidates to adopt the concepts of ambient intelligence. Such applications can deal with diverse user intentions and actions based on the user profile and can suggest the combination of information content and services which suit the user profile the most. This paper summarizes the domain engineering framework for such adaptive web applications. The framework provides guidelines to develop adaptive web applications as members of a family. It suggests how to utilize the design artifacts as knowledge which can be used...

  12. Deep learning architectures for multi-label classification of intelligent health risk prediction.

    Science.gov (United States)

    Maxwell, Andrew; Li, Runzhi; Yang, Bei; Weng, Heng; Ou, Aihua; Hong, Huixiao; Zhou, Zhaoxian; Gong, Ping; Zhang, Chaoyang

    2017-12-28

    Multi-label classification of data remains a challenging problem. Because of the complexity of the data, it is sometimes difficult to infer information about classes that are not mutually exclusive. For medical data, patients could have symptoms of multiple different diseases at the same time and it is important to develop tools that help to identify problems early. Intelligent health risk prediction models built with deep learning architectures offer a powerful tool for physicians to identify patterns in patient data that indicate risks associated with certain types of chronic diseases. Physical examination records of 110,300 anonymous patients were used to predict diabetes, hypertension, fatty liver, a combination of these three chronic diseases, and the absence of disease (8 classes in total). The dataset was split into training (90%) and testing (10%) sub-datasets. Ten-fold cross validation was used to evaluate prediction accuracy with metrics such as precision, recall, and F-score. Deep Learning (DL) architectures were compared with standard and state-of-the-art multi-label classification methods. Preliminary results suggest that Deep Neural Networks (DNN), a DL architecture, when applied to multi-label classification of chronic diseases, produced accuracy that was comparable to that of common methods such as Support Vector Machines. We have implemented DNNs to handle both problem transformation and algorithm adaptation type multi-label methods and compare both to see which is preferable. Deep Learning architectures have the potential of inferring more information about the patterns of physical examination data than common classification methods. The advanced techniques of Deep Learning can be used to identify the significance of different features from physical examination data as well as to learn the contributions of each feature that impact a patient's risk for chronic diseases. However, accurate prediction of chronic disease risks remains a challenging
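
    A multi-label setup of this kind is commonly realized with independent sigmoid outputs, one per disease, trained with binary cross-entropy; the PyTorch sketch below assumes invented feature and label counts rather than the paper's 8-class encoding.

        import torch
        import torch.nn as nn

        model = nn.Sequential(
            nn.Linear(60, 128), nn.ReLU(),
            nn.Linear(128, 64), nn.ReLU(),
            nn.Linear(64, 3),                  # e.g. diabetes, hypertension, fatty liver
        )

        records = torch.randn(64, 60)                   # physical-exam features
        targets = torch.randint(0, 2, (64, 3)).float()  # multi-hot disease labels
        logits = model(records)
        loss = nn.BCEWithLogitsLoss()(logits, targets)  # independent per-label loss
        loss.backward()
        risks = torch.sigmoid(logits)                   # per-disease probabilities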

  13. Estimating Ground-Level PM2.5 by Fusing Satellite and Station Observations: A Geo-Intelligent Deep Learning Approach

    Science.gov (United States)

    Li, Tongwen; Shen, Huanfeng; Yuan, Qiangqiang; Zhang, Xuechen; Zhang, Liangpei

    2017-12-01

    Fusing satellite observations and station measurements to estimate ground-level PM2.5 is promising for monitoring PM2.5 pollution. A geo-intelligent approach, which incorporates geographical correlation into an intelligent deep learning architecture, is developed to estimate PM2.5. Specifically, it considers geographical distance and spatiotemporally correlated PM2.5 in a deep belief network (denoted as Geoi-DBN). Geoi-DBN can capture the essential features associated with PM2.5 from latent factors. It was trained and tested with data from China in 2015. The results show that Geoi-DBN performs significantly better than the traditional neural network. The out-of-sample cross-validation R² increases from 0.42 to 0.88, and RMSE decreases from 29.96 to 13.03 μg/m³. On the basis of the derived PM2.5 distribution, it is predicted that over 80% of the Chinese population live in areas with an annual mean PM2.5 of greater than 35 μg/m³. This study provides a new perspective for air pollution monitoring in large geographic regions.
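
    The geo-intelligent ingredient can be pictured with a small sketch (the weighting scheme and feature set are assumptions for illustration): each station's satellite-derived predictors are augmented with an inverse-distance-weighted average of PM2.5 at neighbouring stations before entering the deep model.

        import numpy as np

        coords = np.array([[30.5, 114.3], [31.2, 121.5], [39.9, 116.4]])  # lat, lon
        pm25 = np.array([55.0, 40.0, 80.0])                               # station means

        def geo_feature(i):
            d = np.linalg.norm(coords - coords[i], axis=1)
            w = np.where(d > 0, 1.0 / np.maximum(d, 1e-6), 0.0)  # excludes the station itself
            return np.sum(w * pm25) / np.sum(w)

        geo = np.array([geo_feature(i) for i in range(len(pm25))])
        aod = np.array([0.8, 0.6, 1.1])            # satellite aerosol optical depth
        X = np.column_stack([aod, geo])            # inputs for the deep network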

  14. An Intelligent Web Digital Image Metadata Service Platform for Social Curation Commerce Environment

    Directory of Open Access Journals (Sweden)

    Seong-Yong Hong

    2015-01-01

    Full Text Available Information management includes multimedia data management, knowledge management, collaboration, and agents, all of which are supporting technologies for XML. XML technologies have an impact on multimedia databases as well as on collaborative technologies and knowledge management. That is, e-commerce documents are encoded in XML and are gaining much popularity for business-to-business or business-to-consumer transactions. Recently, internet sites such as e-commerce and shopping mall sites have come to deal with large amounts of image and multimedia information. This paper proposes an intelligent web digital image information retrieval platform, which adopts XML technology, for the social curation commerce environment. To support object-based content retrieval on product catalog images containing multiple objects, we describe multilevel metadata structures representing the local features, global features, and semantics of image data. To enable semantic-based and content-based retrieval on such image data, we design an XML-Schema for the proposed metadata. We also describe how to automatically transform the retrieval results into forms suitable for various user environments, such as a web browser or mobile device, using XSLT. The proposed scheme can be utilized to enable efficient e-catalog metadata sharing between systems, and it will contribute to the improvement of retrieval correctness and user satisfaction with semantic-based web digital image information retrieval.

  15. Science.Gov - A single gateway to the deep web knowledge of U.S. science agencies

    International Nuclear Information System (INIS)

    Hitson, B.A.

    2004-01-01

    The impact of science and technology on our daily lives is easily demonstrated. From new drug discoveries, to new and more efficient energy sources, to the incorporation of new technologies into business and industry, the productive applications of R and D are innumerable. The possibility of creating such applications depends most heavily on the availability of one resource: knowledge. Knowledge must be shared for scientific progress to occur. In the past, the ability to share knowledge electronically has been limited by the 'deep Web' nature of scientific databases and the lack of technology to simultaneously search disparate and decentralized information collections. U.S. science agencies invest billions of dollars each year on basic and applied research and development projects. To make the collective knowledge from this R and D more easily accessible and searchable, 12 science agencies collaborated to develop Science.gov - a single, searchable gateway to the deep Web knowledge of U.S. science agencies. This paper will describe Science.gov and its contribution to nuclear knowledge management. (author)

  16. Obstacle Detection for Intelligent Transportation Systems Using Deep Stacked Autoencoder and k-Nearest Neighbor Scheme

    KAUST Repository

    Dairi, Abdelkader; Harrou, Fouzi; Sun, Ying; Senouci, Mohamed

    2018-01-01

    Obstacle detection is an essential element for the development of intelligent transportation systems so that accidents can be avoided. In this study, we propose a stereovision-based method for detecting obstacles in urban environments. The proposed method uses a deep stacked auto-encoder (DSA) model that combines greedy learning features with dimensionality reduction capacity and employs an unsupervised k-nearest neighbors algorithm (KNN) to accurately and reliably detect the presence of obstacles. We consider obstacle detection as an anomaly detection problem. We evaluated the proposed method using practical data from three publicly available datasets: the Malaga stereovision urban dataset (MSVUD), the Daimler urban segmentation dataset (DUSD), and the Bahnhof dataset. We also compared the efficiency of the DSA-KNN approach to deep belief network (DBN)-based clustering schemes. Results show that the DSA-KNN is suitable for visually monitoring urban scenes.

  17. Obstacle Detection for Intelligent Transportation Systems Using Deep Stacked Autoencoder and k-Nearest Neighbor Scheme

    KAUST Repository

    Dairi, Abdelkader

    2018-04-30

    Obstacle detection is an essential element for the development of intelligent transportation systems so that accidents can be avoided. In this study, we propose a stereovision-based method for detecting obstacles in urban environments. The proposed method uses a deep stacked auto-encoder (DSA) model that combines greedy learning features with dimensionality reduction capacity and employs an unsupervised k-nearest neighbors algorithm (KNN) to accurately and reliably detect the presence of obstacles. We consider obstacle detection as an anomaly detection problem. We evaluated the proposed method using practical data from three publicly available datasets: the Malaga stereovision urban dataset (MSVUD), the Daimler urban segmentation dataset (DUSD), and the Bahnhof dataset. We also compared the efficiency of the DSA-KNN approach to deep belief network (DBN)-based clustering schemes. Results show that the DSA-KNN is suitable for visually monitoring urban scenes.
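
    The DSA-KNN pipeline can be summarized in a few lines (an untrained encoder and an invented threshold stand in for the trained model): encode scene features, then flag as obstacles the scenes whose average distance to the k nearest obstacle-free training encodings is large.

        import numpy as np
        import torch
        import torch.nn as nn
        from sklearn.neighbors import NearestNeighbors

        encoder = nn.Sequential(nn.Linear(256, 64), nn.ReLU(), nn.Linear(64, 16))

        free_scenes = torch.randn(500, 256)      # obstacle-free training scenes
        test_scenes = torch.randn(10, 256)       # incoming scenes to check
        with torch.no_grad():
            z_train = encoder(free_scenes).numpy()
            z_test = encoder(test_scenes).numpy()

        knn = NearestNeighbors(n_neighbors=5).fit(z_train)
        dist, _ = knn.kneighbors(z_test)
        score = dist.mean(axis=1)                        # anomaly score per scene
        is_obstacle = score > np.percentile(score, 90)   # illustrative threshold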

  18. The benefit of combining a deep neural network architecture with ideal ratio mask estimation in computational speech segregation to improve speech intelligibility.

    Science.gov (United States)

    Bentsen, Thomas; May, Tobias; Kressner, Abigail A; Dau, Torsten

    2018-01-01

    Computational speech segregation attempts to automatically separate speech from noise. This is challenging in conditions with interfering talkers and low signal-to-noise ratios. Recent approaches have adopted deep neural networks and successfully demonstrated speech intelligibility improvements. A selection of components may be responsible for the success with these state-of-the-art approaches: the system architecture, a time frame concatenation technique and the learning objective. The aim of this study was to explore the roles and the relative contributions of these components by measuring speech intelligibility in normal-hearing listeners. A substantial improvement of 25.4 percentage points in speech intelligibility scores was found going from a subband-based architecture, in which a Gaussian Mixture Model-based classifier predicts the distributions of speech and noise for each frequency channel, to a state-of-the-art deep neural network-based architecture. Another improvement of 13.9 percentage points was obtained by changing the learning objective from the ideal binary mask, in which individual time-frequency units are labeled as either speech- or noise-dominated, to the ideal ratio mask, where the units are assigned a continuous value between zero and one. Therefore, both components play significant roles and by combining them, speech intelligibility improvements were obtained in a six-talker condition at a low signal-to-noise ratio.
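
    The difference between the two learning objectives is easy to state numerically; the sketch below (with random spectrogram stand-ins) computes both the ideal binary mask and the ideal ratio mask that a DNN would be trained to predict.

        import numpy as np

        speech_power = np.random.rand(257, 100)   # |STFT|^2 of clean speech
        noise_power = np.random.rand(257, 100)    # |STFT|^2 of noise

        irm = speech_power / (speech_power + noise_power + 1e-12)  # ratio mask in [0, 1]
        ibm = (speech_power > noise_power).astype(float)           # binary mask in {0, 1}

        # At test time the predicted mask scales the noisy magnitude spectrogram:
        noisy_mag = np.sqrt(speech_power + noise_power)
        enhanced = irm * noisy_mag                # oracle upper bound on separation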

  19. Semantic Business Intelligence - a New Generation of Business Intelligence

    OpenAIRE

    Dinu AIRINEI; Dora-Anca BERTA

    2012-01-01

    Business Intelligence solutions represent applications used by companies to manage, process and analyze data to support substantiated decisions. In the context of the Semantic Web, the development trend is to integrate semantic, unstructured data, requiring business intelligence solutions to be redesigned in such a manner that they can analyze, process and synthesize, in addition to traditional data, data of another form and structure integrated with semantics. This invariably leads to the appearance of a new BI solutio...

  20. DATA EXTRACTION AND LABEL ASSIGNMENT FOR WEB DATABASES

    OpenAIRE

    T. Rajesh; T. Prathap; S.Naveen Nambi; A.R. Arunachalam

    2015-01-01

    Deep Web contents are accessed by queries submitted to Web databases, and the returned data records are wrapped in dynamically generated Web pages (which will be called deep Web pages in this paper). Extracting structured data from deep Web pages is a challenging problem due to the underlying intricate structures of such pages. Until now, a large number of techniques have been proposed to address this problem, but all of them have limitations because they are Web-page-programming...

  1. Design and Development of a University Business Intelligence System to Support Academic Decision Making

    Directory of Open Access Journals (Sweden)

    Zainal Arifin

    2016-01-01

    Full Text Available The university business intelligence system starts with the stages of data integration, data analysis, report creation, and web portal development, and then integrates the reports with the web portal. Data analysis is performed with OLAP, KPIs, and data mining to extract information from the data stored in the data warehouse. The results of the data analysis are represented in the form of statistical reports and dashboards, which are then used to support academic decision making. This research aims to design and build a web-based university business intelligence system with OLAP to support academic decision making at Universitas Mulawarman. The research produces a system framework and a web portal for the university business intelligence system, accessed online through a browser. Business intelligence can serve as a solution for informing decision-making processes in university management and for improving academic management performance to achieve academic excellence. Keywords: Business Intelligence; Data warehouse; OLAP; KPI; Data mining

  2. Autonomous Mission Operations for Sensor Webs

    Science.gov (United States)

    Underbrink, A.; Witt, K.; Stanley, J.; Mandl, D.

    2008-12-01

    We present interim results of a 2005 ROSES AIST project entitled, "Using Intelligent Agents to Form a Sensor Web for Autonomous Mission Operations", or SWAMO. The goal of the SWAMO project is to shift the control of spacecraft missions from a ground-based, centrally controlled architecture to a collaborative, distributed set of intelligent agents. The network of intelligent agents intends to reduce management requirements by utilizing model-based system prediction and autonomic model/agent collaboration. SWAMO agents are distributed throughout the Sensor Web environment, which may include multiple spacecraft, aircraft, ground systems, and ocean systems, as well as manned operations centers. The agents monitor and manage sensor platforms, Earth sensing systems, and Earth sensing models and processes. The SWAMO agents form a Sensor Web of agents via peer-to-peer coordination. Some of the intelligent agents are mobile and able to traverse between on-orbit and ground-based systems. Other agents in the network are responsible for encapsulating system models to perform prediction of future behavior of the modeled subsystems and components to which they are assigned. The software agents use semantic web technologies to enable improved information sharing among the operational entities of the Sensor Web. The semantics include ontological conceptualizations of the Sensor Web environment, plus conceptualizations of the SWAMO agents themselves. By conceptualizations of the agents, we mean knowledge of their state, operational capabilities, current operational capacities, Web Service search and discovery results, agent collaboration rules, etc. The need for ontological conceptualizations over the agents is to enable autonomous and autonomic operations of the Sensor Web. The SWAMO ontology enables automated decision making and responses to the dynamic Sensor Web environment and to end user science requests. The current ontology is compatible with Open Geospatial Consortium (OGC

  3. Deep Brain Stimulation of the Subthalamic Nucleus Parameter Optimization for Vowel Acoustics and Speech Intelligibility in Parkinson's Disease

    Science.gov (United States)

    Knowles, Thea; Adams, Scott; Abeyesekera, Anita; Mancinelli, Cynthia; Gilmore, Greydon; Jog, Mandar

    2018-01-01

    Purpose: The settings of 3 electrical stimulation parameters were adjusted in 12 speakers with Parkinson's disease (PD) with deep brain stimulation of the subthalamic nucleus (STN-DBS) to examine their effects on vowel acoustics and speech intelligibility. Method: Participants were tested under permutations of low, mid, and high STN-DBS frequency,…

  4. Maximum Spanning Tree Model on Personalized Web Based Collaborative Learning in Web 3.0

    OpenAIRE

    Padma, S.; Seshasaayee, Ananthi

    2012-01-01

    Web 3.0 is an evolving extension of the current web environment. Information in web 3.0 can be collaborated on and communicated when queried. The web 3.0 architecture provides an excellent learning experience to students. Web 3.0 is 3D, media centric and semantic. Web-based learning has been on the rise in recent years. Web 3.0 has intelligent agents as tutors to collect and disseminate the answers to students' queries. The completely interactive learner's queries determine the customization of...

  5. Machine listening intelligence

    Science.gov (United States)

    Cella, C. E.

    2017-05-01

    This manifesto paper will introduce machine listening intelligence, an integrated research framework for acoustic and musical signals modelling, based on signal processing, deep learning and computational musicology.

  6. Deep learning application: rubbish classification with aid of an android device

    Science.gov (United States)

    Liu, Sijiang; Jiang, Bo; Zhan, Jie

    2017-06-01

    Deep learning is currently a very hot topic in pattern recognition and artificial intelligence research. Aiming at the practical problem that people often do not know the correct category to which rubbish belongs, and drawing on the powerful image classification ability of deep learning, we have designed a prototype system to help users classify rubbish. Firstly, the CaffeNet model was adopted for training our classification network on the ImageNet dataset, and the trained network was deployed on a web server. Secondly, an android app was developed for users to capture images of unclassified rubbish, upload the images to the web server for backstage analysis, and retrieve the feedback, so that users can conveniently obtain classification guidance on an android device. Tests on our prototype system show that an image of a single type of rubbish in its original shape can be used to judge its classification well, while an image containing several kinds of rubbish, or rubbish with a changed shape, may fail to help users decide its classification. However, the system still shows a promising auxiliary function for rubbish classification if the network training strategy is optimized further.
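
    On the server side, the upload-and-classify round trip can be sketched as a small Flask endpoint (a hypothetical stand-in: the route name and field name are invented, and classify() is a stub where the trained CaffeNet-style network would run).

        from flask import Flask, request, jsonify

        app = Flask(__name__)
        CATEGORIES = ["recyclable", "kitchen waste", "hazardous", "other"]

        def classify(image_bytes: bytes) -> str:
            # Placeholder: a real deployment would run the trained network here.
            return CATEGORIES[len(image_bytes) % len(CATEGORIES)]

        @app.route("/classify", methods=["POST"])
        def classify_endpoint():
            image = request.files["image"].read()   # photo uploaded by the android app
            return jsonify({"category": classify(image)})

        if __name__ == "__main__":
            app.run(host="0.0.0.0", port=8000)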

  7. Semantic web for the working ontologist effective modeling in RDFS and OWL

    CERN Document Server

    Allemang, Dean

    2011-01-01

    Semantic Web models and technologies provide information in machine-readable languages that enable computers to access the Web more intelligently and perform tasks automatically without the direction of users. These technologies are relatively recent and advancing rapidly, creating a set of unique challenges for those developing applications. Semantic Web for the Working Ontologist is the essential, comprehensive resource on semantic modeling, for practitioners in health care, artificial intelligence, finance, engineering, military intelligence, enterprise architecture, and more. Focused on

  8. Design and development of an IoT-based web application for an intelligent remote SCADA system

    Science.gov (United States)

    Kao, Kuang-Chi; Chieng, Wei-Hua; Jeng, Shyr-Long

    2018-03-01

    This paper presents a design for an intelligent remote electrical power supervisory control and data acquisition (SCADA) system based on the Internet of Things (IoT), with Internet Information Services (IIS) for setting up web servers, an ASP.NET model-view-controller (MVC) for establishing a remote electrical power monitoring and control system using responsive web design (RWD), and a Microsoft SQL Server as the database. With the web browser connected to the Internet, the sensing data is sent to the client by using the TCP/IP protocol, which supports mobile devices with different screen sizes. The users can provide instructions immediately without being present to check the conditions, which considerably reduces labor and time costs. The developed system incorporates a remote measuring function by using a wireless sensor network and utilizes a visual interface to make the human-machine interface (HMI) more intuitive. Moreover, it contains an analog input/output and a basic digital input/output that can be applied to a motor driver and an inverter for integration with a remote SCADA system based on IoT, and thus achieve efficient power management.
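
    The sensing-data path over TCP/IP can be illustrated with a minimal Python sender (host, port, and message layout are assumptions, not the paper's actual protocol):

        import json
        import socket
        import time

        reading = {"sensor": "power-meter-01", "kW": 3.7, "ts": time.time()}

        # Push one JSON-encoded reading to the monitoring server over TCP/IP.
        with socket.create_connection(("192.0.2.10", 5020), timeout=5) as sock:
            sock.sendall(json.dumps(reading).encode("utf-8"))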

  9. A novel design of hidden web crawler using ontology

    OpenAIRE

    Manvi; Bhatia, Komal Kumar; Dixit, Ashutosh

    2015-01-01

    The Deep Web is content hidden behind HTML forms. Since it represents a large portion of the structured, unstructured and dynamic data on the Web, accessing Deep Web content has long been a challenge for the database community. This paper describes a crawler for accessing Deep Web content using ontologies. Performance evaluation of the proposed work showed that this new approach has promising results.
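
    The probing idea can be sketched in a few lines: match ontology concepts to a form's field names and submit instance terms to surface otherwise hidden result pages. Everything below (the toy ontology, field names and URL) is invented for illustration; this is a sketch of the general technique, not the authors' crawler.

      import requests

      # Toy concept -> instance-term ontology; a real system would load a populated ontology.
      ONTOLOGY = {
          "make": ["toyota", "ford"],
          "model": ["corolla", "focus"],
      }

      def candidate_queries(form_fields):
          """Pair each form field with ontology terms whose concept name matches it."""
          for field in form_fields:
              for concept, terms in ONTOLOGY.items():
                  if concept in field.lower():
                      yield from ((field, term) for term in terms)

      def probe(form_url, form_fields):
          """Submit each candidate query and keep the result pages that come back."""
          pages = []
          for field, term in candidate_queries(form_fields):
              response = requests.post(form_url, data={field: term}, timeout=10)
              if response.ok:
                  pages.append(response.text)
          return pages

      # pages = probe("http://example.com/car-search", ["car_make", "car_model"])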

  10. SchNet - A deep learning architecture for molecules and materials

    Science.gov (United States)

    Schütt, K. T.; Sauceda, H. E.; Kindermans, P.-J.; Tkatchenko, A.; Müller, K.-R.

    2018-06-01

    Deep learning has led to a paradigm shift in artificial intelligence, including web, text, and image search, speech recognition, as well as bioinformatics, with growing impact in chemical physics. Machine learning, in general, and deep learning, in particular, are ideally suitable for representing quantum-mechanical interactions, enabling us to model nonlinear potential-energy surfaces or enhancing the exploration of chemical compound space. Here we present the deep learning architecture SchNet that is specifically designed to model atomistic systems by making use of continuous-filter convolutional layers. We demonstrate the capabilities of SchNet by accurately predicting a range of properties across chemical space for molecules and materials, where our model learns chemically plausible embeddings of atom types across the periodic table. Finally, we employ SchNet to predict potential-energy surfaces and energy-conserving force fields for molecular dynamics simulations of small molecules and perform an exemplary study on the quantum-mechanical properties of C20-fullerene that would have been infeasible with regular ab initio molecular dynamics.
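
    The continuous-filter convolution at the heart of the architecture can be sketched compactly: a small filter-generating network maps each interatomic distance, expanded in radial basis functions, to a filter vector that gates the neighbouring atom's features. The NumPy sketch below is a deliberate simplification, assuming made-up widths, basis parameters and a tanh nonlinearity rather than the published layer.

      import numpy as np

      rng = np.random.default_rng(0)
      F, K = 8, 16                                    # feature width, number of radial basis functions
      W1, W2 = rng.normal(size=(K, F)), rng.normal(size=(F, F))

      def rbf_expand(d, centers=np.linspace(0.0, 5.0, K), gamma=10.0):
          """Expand a scalar distance in Gaussian radial basis functions."""
          return np.exp(-gamma * (d - centers) ** 2)  # shape (K,)

      def cfconv(x, positions):
          """x: (N, F) atom features; positions: (N, 3) coordinates."""
          out = np.zeros_like(x)
          for i in range(len(x)):
              for j in range(len(x)):
                  if i == j:
                      continue
                  d = np.linalg.norm(positions[i] - positions[j])
                  filt = np.tanh(rbf_expand(d) @ W1) @ W2   # distance -> filter vector (F,)
                  out[i] += x[j] * filt                     # element-wise gating of neighbour features
          return out

      x, pos = rng.normal(size=(5, F)), rng.normal(size=(5, 3))
      print(cfconv(x, pos).shape)                     # (5, 8)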

  11. Organizational and Competitive Intelligence and Web 2.0 (Inteligência Organizacional e Competitiva e a Web 2.0)

    Directory of Open Access Journals (Sweden)

    Kira Tarapanoff

    2013-11-01

    New possibilities for organizational and competitive intelligence under Web 2.0 are analyzed. The underlying theoretical approach draws, in an integrated manner, on Information Science, information and knowledge management, competitive intelligence and related disciplines. The thesis defended in this essay is that the basic elements of what is understood as competitive and organizational intelligence 2.0 are the adequate use of the collective intelligence accessible on the Web 2.0 to create and share knowledge, associated with emerging concepts of the corporate world such as sustainability.

  12. New challenges in computational collective intelligence

    Energy Technology Data Exchange (ETDEWEB)

    Nguyen, Ngoc Thanh; Katarzyniak, Radoslaw Piotr [Wroclaw Univ. of Technology (Poland). Inst. of Informatics; Janiak, Adam (eds.) [Wroclaw Univ. of Technology (Poland). Inst. of Computer Engineering, Control and Robotics

    2009-07-01

    The book consists of 29 chapters which have been selected and invited from the submissions to the 1st International Conference on Computational Collective Intelligence - Semantic Web, Social Networks and Multiagent Systems (ICCCI 2009). All chapters in the book discuss various examples of applications of computational collective intelligence and related technologies to such fields as the semantic web, information systems ontologies, social networks, and agent and multiagent systems. The editors hope that the book can be useful for graduate and Ph.D. students in Computer Science, in particular participants in courses on Soft Computing, Multi-Agent Systems and Robotics. This book can also be useful for researchers working on the concept of computational collective intelligence in artificial populations. It is the hope of the editors that readers of this volume can find many inspiring ideas and use them to create new cases of intelligent collectives. Many such challenges are suggested by the particular approaches and models presented in the individual chapters of this book. (orig.)

  13. ICT-Supported Gaming for Competitive Intelligence

    NARCIS (Netherlands)

    Achterbergh, J.M.I.M.; Khosrow-Pour, M.

    2005-01-01

    Collecting and processing competitive intelligence for the purpose of strategy formulation are complex activities requiring deep insight in and models of the “organization in its environment.” These insights and models need to be not only shared between CI (competitive intelligence) practitioners

  14. Data transfer based on intelligent ethernet card

    International Nuclear Information System (INIS)

    Zhu Haitao; Chinese Academy of Sciences, Beijing; Chu Yuanping; Zhao Jingwei

    2007-01-01

    Intelligent Ethernet cards are widely used in systems where the network throughput is very large, such as the DAQ systems of modern high energy physics experiments and web services. Taking a commercial intelligent Ethernet card as an example, this paper introduces the architecture, principle and workflow of intelligent Ethernet cards. In addition, the results of several experiments showing the differences between intelligent Ethernet cards and general ones are also presented. (authors)

  15. Web service composition: a semantic web and automated planning technique application

    Directory of Open Access Journals (Sweden)

    Jaime Alberto Guzmán Luna

    2008-09-01

    This article proposes applying semantic web and artificial intelligence planning techniques to a web services composition model, addressing problems of ambiguity in web service descriptions and of handling incomplete web information. The model uses OWL-S service descriptions and implements a planning technique which handles open-world semantics in its reasoning process to resolve these problems. The result is a web services composition system incorporating a module for interpreting OWL-S services and converting them into a planning problem in PDDL, a planning module handling incomplete information, and an execution service module that interacts concurrently with the planner to execute each service of the composition plan.
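
    The planning view of composition can be illustrated with a toy forward-chaining sketch: each service is treated as an operator with input (precondition) and output (effect) concepts, and services are chained until the goal concept is produced. The service names and concepts below are invented; the actual system compiles OWL-S descriptions into PDDL for a full planner rather than using this naive loop.

      from dataclasses import dataclass

      @dataclass
      class Service:
          name: str
          inputs: frozenset   # concepts the service consumes
          outputs: frozenset  # concepts the service produces

      SERVICES = [
          Service("geocode", frozenset({"address"}), frozenset({"coordinates"})),
          Service("weather", frozenset({"coordinates"}), frozenset({"forecast"})),
      ]

      def compose(known, goal):
          """Greedily chain services forward from the known facts until goal is produced."""
          plan, known = [], set(known)
          while goal not in known:
              step = next((s for s in SERVICES
                           if s.inputs <= known and not s.outputs <= known), None)
              if step is None:
                  return None          # no composition found
              plan.append(step.name)
              known |= step.outputs
          return plan

      print(compose({"address"}, "forecast"))   # ['geocode', 'weather']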

  16. Students in a Teacher College of Education Develop Educational Programs and Activities Related to Intelligent Use of the Web: Cultivating New Knowledge

    Science.gov (United States)

    Wadmany, Rivka; Zeichner, Orit; Melamed, Orly

    2014-01-01

    Students in a teacher training college in Israel have developed and taught curricula on the intelligent use of the Web. The educational programs were based on activities thematically related to the world of digital citizenship, such as the rights of the child and the Internet, identity theft, copyrights, freedom of expression and its limitations,…

  17. Advanced Techniques in Web Intelligence-2 Web User Browsing Behaviour and Preference Analysis

    CERN Document Server

    Palade, Vasile; Jain, Lakhmi

    2013-01-01

    This research volume focuses on analyzing web user browsing behaviour and preferences in traditional web-based environments, social networks and Web 2.0 applications, using advanced techniques in data acquisition, data processing, pattern extraction and cognitive science for modeling human actions. The book is directed at graduate students, researchers/scientists and engineers interested in updating their knowledge with the recent trends in web user analysis, for developing the next generation of web-based systems and applications.

  18. Distributed Deep Web Search

    NARCIS (Netherlands)

    Tjin-Kam-Jet, Kien

    2013-01-01

    The World Wide Web contains billions of documents (and counting); hence, it is likely that some document will contain the answer or content you are searching for. While major search engines like Bing and Google often manage to return relevant results to your query, there are plenty of situations in

  19. Efficacy of a web-based intelligent tutoring system for communicating genetic risk of breast cancer: a fuzzy-trace theory approach.

    Science.gov (United States)

    Wolfe, Christopher R; Reyna, Valerie F; Widmer, Colin L; Cedillos, Elizabeth M; Fisher, Christopher R; Brust-Renck, Priscila G; Weil, Audrey M

    2015-01-01

    Background: Many healthy women consider genetic testing for breast cancer risk, yet BRCA testing issues are complex. Objective: To determine whether an intelligent tutor, BRCA Gist, grounded in fuzzy-trace theory (FTT), increases gist comprehension and knowledge about genetic testing for breast cancer risk, improving decision making. Methods: In 2 experiments, 410 healthy undergraduate women were randomly assigned to 1 of 3 groups: an online module using a web-based tutoring system (BRCA Gist) that uses artificial intelligence technology; a second group that read highly similar content from the National Cancer Institute (NCI) web site; and a third that completed an unrelated tutorial. Intervention: BRCA Gist applied FTT and was designed to help participants develop gist comprehension of topics relevant to decisions about BRCA genetic testing, including how breast cancer spreads, inherited genetic mutations, and base rates. Measures: We measured content knowledge, gist comprehension of decision-relevant information, interest in testing, and genetic risk and testing judgments. Results: Control knowledge scores ranged from 54% to 56%; NCI improved significantly to 65% and 70%; and BRCA Gist improved significantly more, to 75% and 77%. Conclusions: Intelligent tutors, such as BRCA Gist, are scalable, cost-effective ways of helping people understand complex issues, improving decision making. © The Author(s) 2014.

  20. Intelligent Overload Control for Composite Web Services

    NARCIS (Netherlands)

    Meulenhoff, P.J.; Ostendorf, D.R.; Zivkovic, Miroslav; Meeuwissen, H.B.; Gijsen, B.M.M.

    2009-01-01

    In this paper, we analyze overload control for composite web services in service oriented architectures by an orchestrating broker, and propose two practical access control rules which effectively mitigate the effects of severe overloads at some web services in the composite service. These two rules

  1. Novel applications of intelligent systems

    CERN Document Server

    Kasabov, Nikola; Filev, Dimitar; Jotsov, Vladimir

    2016-01-01

    In this carefully edited book some selected results of theoretical and applied research in the field of broadly perceived intelligent systems are presented. The problems vary from industrial to web and problem-independent applications. All this is united under the slogan "Intelligent systems conquer the world". The book brings together innovation projects with analytical research, invention, retrieval and processing of knowledge, and logical applications in technology. It is aimed at a wide circle of readers, and particularly at the young generation of IT/ICT experts who will build the next generations of intelligent systems.

  2. Responsible vendors, intelligent consumers: Silk Road, the online revolution in drug trading.

    Science.gov (United States)

    Van Hout, Marie Claire; Bingham, Tim

    2014-03-01

    Silk Road is located on the Deep Web and provides an anonymous transacting infrastructure for the retail of drugs and pharmaceuticals. Members are attracted to the site by the protection of identity through screen pseudonyms, the variety and quality of product listings, the selection of vendors based on reviews, reduced personal risks, stealth of product delivery, development of personal connections with vendors in stealth modes, and forum activity. The study aimed to explore vendor accounts of Silk Road as retail infrastructure. A single and holistic case study with embedded units approach (Yin, 2003) was chosen to explore the accounts of vendor subunits situated within the Silk Road marketplace. Vendors (n=10) completed an online interview via the direct message facility and via Tor mail. Vendors described themselves as 'intelligent and responsible' consumers of drugs. Decisions to commence vending operations on the site centred on the simplicity of setting up vendor accounts and the opportunity to operate within a low-risk, high-traffic, high-mark-up, secure and anonymous Deep Web infrastructure. The embedded online culture of harm reduction ethos appealed to them in terms of the responsible vending and use of personally tested, high-quality products. The professional approach to running their Silk Road businesses and dedication to providing a quality service was characterised by professional advertising of quality products, professional communication and visibility on forum pages, speedy dispatch of slightly overweight products, competitive pricing, good stealth techniques and efforts to avoid customer disputes. Vendors appeared content with a fairly constant buyer demand and described a relatively competitive market between small and big-time market players. Concerns were evident with regard to Bitcoin instability. The greatest threat to Silk Road and other sites operating on the Deep Web is not law enforcement or market dynamics; it is technology itself. Copyright © 2013 Elsevier

  3. Bioaccumulation of tributyltin and triphenyltin compounds through the food web in deep offshore water

    OpenAIRE

    KONO, Kumiko; MINAMI, Takashi; YAMADA, Hisashi; TANAKA, Hiroyuki; KOYAMA, Jiro

    2008-01-01

    Concentrations of tributyltin (TBT) and triphenyltin (TPT) compounds were determined in bottom seawater, sediments, and organisms of various trophic levels in the marine benthic food web in the Sea of Japan to clarify how the bioaccumulation patterns of TBT and TPT in the deep-sea ecosystem differ. TBT was detected in all samples: 0.3-0.8 ng/l in bottom seawater, 4.4-16 ng/g dry wt in sediment, and 1.8-240 ng/g dry wt in various organisms. TBT and TPT concentrations were lower in bottom seawa...

  4. Learning Structural Classification Rules for Web-page Categorization

    NARCIS (Netherlands)

    Stuckenschmidt, Heiner; Hartmann, Jens; Van Harmelen, Frank

    2002-01-01

    Content-related metadata plays an important role in the effort of developing intelligent web applications. One of the most established forms of providing content-related metadata is the assignment of web pages to content categories. We describe the Spectacle system for classifying individual web

  5. Deep smarts.

    Science.gov (United States)

    Leonard, Dorothy; Swap, Walter

    2004-09-01

    When a person sizes up a complex situation and rapidly comes to a decision that proves to be not just good but brilliant, you think, "That was smart." After you watch him do this a few times, you realize you're in the presence of something special. It's not raw brainpower, though that helps. It's not emotional intelligence, either, though that, too, is often involved. It's deep smarts. Deep smarts are not philosophical--they're not "wisdom" in that sense, but they're as close to wisdom as business gets. You see them in the manager who understands when and how to move into a new international market, in the executive who knows just what kind of talk to give when her organization is in crisis, in the technician who can track a product failure back to an interaction between independently produced elements. These are people whose knowledge would be hard to purchase on the open market. Their insight is based on know-how more than on know-what; it comprises a system view as well as expertise in individual areas. Because deep smarts are experience based and often context specific, they can't be produced overnight or readily imported into an organization. It takes years for an individual to develop them--and no time at all for an organization to lose them when a valued veteran walks out the door. They can be taught, however, with the right techniques. Drawing on their forthcoming book Deep Smarts, Dorothy Leonard and Walter Swap say the best way to transfer such expertise to novices--and, on a larger scale, to make individual knowledge institutional--isn't through PowerPoint slides, a Web site of best practices, online training, project reports, or lectures. Rather, the sage needs to teach the neophyte individually how to draw wisdom from experience. Companies have to be willing to dedicate time and effort to such extensive training, but the investment more than pays for itself.

  6. 1st International Conference on Intelligent Computing and Communication

    CERN Document Server

    Satapathy, Suresh; Sanyal, Manas; Bhateja, Vikrant

    2017-01-01

    The book covers a wide range of topics in Computer Science and Information Technology including swarm intelligence, artificial intelligence, evolutionary algorithms, and bio-inspired algorithms. It is a collection of papers presented at the First International Conference on Intelligent Computing and Communication (ICIC2) 2016. The prime areas of the conference are Intelligent Computing, Intelligent Communication, Bio-informatics, Geo-informatics, Algorithm, Graphics and Image Processing, Graph Labeling, Web Security, Privacy and e-Commerce, Computational Geometry, Service Orient Architecture, and Data Engineering.

  7. Geospatial Semantics and the Semantic Web

    CERN Document Server

    Ashish, Naveen

    2011-01-01

    The availability of geographic and geospatial information and services, especially on the open Web has become abundant in the last several years with the proliferation of online maps, geo-coding services, geospatial Web services and geospatially enabled applications. The need for geospatial reasoning has significantly increased in many everyday applications including personal digital assistants, Web search applications, local aware mobile services, specialized systems for emergency response, medical triaging, intelligence analysis and more. Geospatial Semantics and the Semantic Web: Foundation

  8. Deep Learning for Drug Design: an Artificial Intelligence Paradigm for Drug Discovery in the Big Data Era.

    Science.gov (United States)

    Jing, Yankang; Bian, Yuemin; Hu, Ziheng; Wang, Lirong; Xie, Xiang-Qun Sean

    2018-03-30

    Over the last decade, deep learning (DL) methods have been extremely successful and widely used to develop artificial intelligence (AI) in almost every domain, especially after their celebrated success at computational Go. Compared to traditional machine learning (ML) algorithms, DL methods still have a long way to go to achieve recognition in small-molecule drug discovery and development, and there is still much work to do to popularize and apply DL for research purposes, e.g., for small-molecule drug research and development. In this review, we mainly discuss several of the most powerful and mainstream architectures, including the convolutional neural network (CNN), recurrent neural network (RNN), and deep auto-encoder networks (DAENs), for supervised and unsupervised learning; summarize most of the representative applications in small-molecule drug design; and briefly introduce how DL methods were used in those applications. The pros and cons of DL methods, as well as the main challenges we need to tackle, are also discussed.

  9. 76 FR 22940 - Intelligent Transportation Systems Program Advisory Committee; Notice of Meeting

    Science.gov (United States)

    2011-04-25

    ... DEPARTMENT OF TRANSPORTATION Intelligent Transportation Systems Program Advisory Committee; Notice...-363; 5 U.S.C. app. 2), a Web conference of the Intelligent Transportation Systems (ITS) Program... implementation of intelligent transportation systems. Through its sponsor, the ITS Joint Program Office (JPO...

  10. Fuzzy Clustering: An Approachfor Mining Usage Profilesfrom Web

    OpenAIRE

    Ms. Archana N. Boob; Prof. D. M. Dakhane

    2012-01-01

    Web usage mining is an application of data mining technology to the data in web server log files. It can discover the browsing patterns of users and certain correlations between web pages. Web usage mining provides support for web site design, personalization services and other business decision making. Web mining applies data mining, artificial intelligence, chart technology and so on to web data and traces users' visiting characteris...
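
    One concrete instance of the approach, fuzzy c-means over per-session page-visit vectors, can be sketched as follows; each session receives a degree of membership in every usage profile rather than a hard assignment. The session data and all parameters below are invented for illustration.

      import numpy as np

      def fuzzy_cmeans(X, c=2, m=2.0, iters=100, eps=1e-8):
          """Return soft memberships U (n, c) and profile centers (c, features)."""
          rng = np.random.default_rng(1)
          U = rng.random((len(X), c))
          U /= U.sum(axis=1, keepdims=True)
          for _ in range(iters):
              Um = U ** m
              centers = (Um.T @ X) / (Um.sum(axis=0)[:, None] + eps)
              d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + eps
              U = 1.0 / (d ** (2.0 / (m - 1.0)))      # closer profile -> larger membership
              U /= U.sum(axis=1, keepdims=True)
          return U, centers

      # Rows are sessions; columns are time spent on pages A, B and C.
      sessions = np.array([[9, 1, 0], [8, 2, 0], [0, 1, 9], [1, 0, 8]], dtype=float)
      U, profiles = fuzzy_cmeans(sessions)
      print(U.round(2))   # soft membership of each session in each usage profile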

  11. Survey of Techniques for Deep Web Source Selection and Surfacing the Hidden Web Content

    OpenAIRE

    Khushboo Khurana; M.B. Chandak

    2016-01-01

    Large and continuously growing dynamic web content has created new opportunities for large-scale data analysis in recent years. There is a huge amount of information that traditional web crawlers cannot access, since they use link analysis, through which only the surface web can be accessed. Traditional search engine crawlers require web pages to be linked to other pages via hyperlinks, causing a large amount of web data to be hidden from the crawlers. Enormous data is available in...

  12. Applied Semantic Web Technologies

    CERN Document Server

    Sugumaran, Vijayan

    2011-01-01

    The rapid advancement of semantic web technologies, along with the fact that they are at various levels of maturity, has left many practitioners confused about the current state of these technologies. Focusing on the most mature technologies, Applied Semantic Web Technologies integrates theory with case studies to illustrate the history, current state, and future direction of the semantic web. It maintains an emphasis on real-world applications and examines the technical and practical issues related to the use of semantic technologies in intelligent information management. The book starts with

  13. Expertik: Experience with Artificial Intelligence and Mobile Computing

    Directory of Open Access Journals (Sweden)

    José Edward Beltrán Lozano

    2013-06-01

    This article presents an experience in the development of services based on artificial intelligence, service-oriented architecture and mobile computing. It aims to combine the technology offered by mobile computing with artificial intelligence techniques, through a service, to provide diagnostic solutions to problems in industrial maintenance. For service creation, the elements of an expert system are identified: the knowledge base, the inference engine, and the interfaces for knowledge acquisition and consultation. The applications were developed in ASP.NET under a three-layer architecture. The data layer was developed in SQL Server together with data-management classes, the business layer in VB.NET, and the presentation layer in ASP.NET with XHTML. The interfaces for knowledge acquisition and query were developed for the Web and the mobile Web. The inference engine was implemented as a web service based on a fuzzy logic model (initially an exact rule-based logic) to resolve knowledge-consultation requests from the applications. This experience seeks to strengthen a technology-based company offering AI-based services to service companies in Colombia.

  14. The Intelligent Technologies of Electronic Information System

    Science.gov (United States)

    Li, Xianyu

    2017-08-01

    Based upon a synopsis of system intelligence and information services, this paper puts forward the attributes and logical structure of information services, sets forth an intelligent technology framework for electronic information systems, presents a series of measures, such as optimizing business information flow, advancing data decision capability, improving information fusion precision, strengthening deep learning applications and enhancing prognostics and health management, and demonstrates the system's operational effectiveness. This will benefit the enhancement of system intelligence.

  15. Internet-based intelligent information processing systems

    CERN Document Server

    Tonfoni, G; Ichalkaranje, N S

    2003-01-01

    The Internet/WWW has made it possible to easily access quantities of information never available before. However, both the amount of information and the variation in quality pose obstacles to the efficient use of the medium. Artificial intelligence techniques can be useful tools in this context. Intelligent systems can be applied to searching the Internet and data-mining, interpreting Internet-derived material, the human-Web interface, remote condition monitoring and many other areas. This volume presents the latest research on the interaction between intelligent systems (neural networks, adap

  16. [Development and application of a web-based expert system using artificial intelligence for management of mental health by Korean emigrants].

    Science.gov (United States)

    Bae, Jeongyee

    2013-04-01

    The purpose of this project was to develop an international web-based expert system using principles of artificial intelligence and user-centered design for the management of mental health by Korean emigrants. Using this system, anyone with a computer can access the system via the web. Our design process utilized principles of user-centered design with 4 phases: needs assessment, analysis, design/development/testing, and application release. A survey was done with 3,235 Korean emigrants. Focus group interviews were also conducted. Survey and analysis results guided the design of the web-based expert system. With this system, anyone can check their mental health status by themselves using a personal computer. The system analyzes facts based on answers to automated questions, and suggests solutions accordingly. A history-tracking mechanism enables monitoring and future analysis. In addition, this system will include intervention programs to promote mental health status. This system is interactive and accessible to anyone in the world. It is expected that this management system will contribute to Korean emigrants' mental health promotion and allow researchers and professionals to share information on mental health.

  17. A Semantically Automated Protocol Adapter for Mapping SOAP Web Services to RESTful HTTP Format to Enable the Web Infrastructure, Enhance Web Service Interoperability and Ease Web Service Migration

    Directory of Open Access Journals (Sweden)

    Frank Doheny

    2012-04-01

    Semantic Web Services (SWS) are Web Service (WS) descriptions augmented with semantic information. SWS enable intelligent reasoning and automation in areas such as service discovery, composition, mediation, ranking and invocation. This paper applies SWS to a previous protocol adapter which, operating within clearly defined constraints, maps SOAP Web Services to RESTful HTTP format. However, in the previous adapter the configuration element is manual and the latency implications are locally based. This paper applies SWS technologies to automate the configuration element, and the latency tests are conducted in a more realistic Internet-based setting.

  18. Web-enabling technologies for the factory floor: a web-enabling strategy for emanufacturing

    Science.gov (United States)

    Velez, Ricardo; Lastra, Jose L. M.; Tuokko, Reijo O.

    2001-10-01

    This paper addresses the different technologies available for Web-enabling the factory floor. It gives an overview of the importance of Web-enabling the factory floor when applying the concepts of flexible and intelligent manufacturing in conjunction with e-commerce. The final section defines a Web-enabling strategy for application in eManufacturing. The discussion is scoped to the electronics manufacturing industry, and every application, technology or related matter is presented within that scope.

  19. 2nd International Conference on Intelligent Computing, Communication & Devices

    CERN Document Server

    Popentiu-Vladicescu, Florin

    2017-01-01

    The book presents high-quality papers from the 2nd International Conference on Intelligent Computing, Communication & Devices (ICCD 2016), organized by Interscience Institute of Management and Technology (IIMT), Bhubaneswar, Odisha, India, during 13 and 14 August 2016. The book covers all dimensions of intelligent sciences in its three tracks, namely intelligent computing, intelligent communication and intelligent devices. The intelligent computing track covers areas such as intelligent and distributed computing, intelligent grid and cloud computing, internet of things, soft computing and engineering applications, data mining and knowledge discovery, semantic and web technology, hybrid systems, agent computing, bioinformatics, and recommendation systems. Intelligent communication covers communication and network technologies, including mobile broadband and all-optical networks, which are the key to groundbreaking inventions of intelligent communication technologies. This covers communication hardware, soft...

  20. Memory Based Machine Intelligence Techniques in VLSI hardware

    OpenAIRE

    James, Alex Pappachen

    2012-01-01

    We briefly introduce memory-based approaches to emulating machine intelligence in VLSI hardware, describing the challenges and advantages. Implementation of artificial intelligence techniques in VLSI hardware is a practical and difficult problem. Deep architectures, hierarchical temporal memories and memory networks are some of the contemporary approaches in this area of research. The techniques attempt to emulate low-level intelligence tasks and aim at providing scalable solutions to high ...

  1. Building Program Vector Representations for Deep Learning

    OpenAIRE

    Mou, Lili; Li, Ge; Liu, Yuxuan; Peng, Hao; Jin, Zhi; Xu, Yan; Zhang, Lu

    2014-01-01

    Deep learning has made significant breakthroughs in various fields of artificial intelligence. Advantages of deep learning include the ability to capture highly complicated features, weak involvement of human engineering, etc. However, it is still virtually impossible to use deep learning to analyze programs since deep architectures cannot be trained effectively with pure back propagation. In this pioneering paper, we propose the "coding criterion" to build program vector representations, whi...

  2. An intelligent framework for dynamic web services composition in the semantic web

    OpenAIRE

    Thakker, D

    2008-01-01

    As Web services are being increasingly adopted as the distributed computing technology of choice to securely publish application services beyond the firewall, the importance of composing them to create new, value-added services is increasing. Thus far, the most successful practical approach to Web services composition, largely endorsed by industry, falls under the static composition category, where service selection and flow management are done a priori and manually. The second approach...

  3. E-Learning 3.0 = E-Learning 2.0 + Web 3.0?

    Science.gov (United States)

    Hussain, Fehmida

    2012-01-01

    Web 3.0, termed the semantic web or the web of data, is the transformed version of Web 2.0, with technologies and functionalities such as intelligent collaborative filtering, cloud computing, big data, linked data, openness, interoperability and smart mobility. If Web 2.0 is about social networking and mass collaboration between the creator and…

  4. Semantic mashups intelligent reuse of web resources

    CERN Document Server

    Endres-Niggemeyer, Brigitte

    2013-01-01

    Mashups are mostly lightweight Web applications that offer new functionalities by combining, aggregating and transforming resources and services available on the Web. Popular examples include applications that embed a map in their main offering, for instance for real estate, hotel recommendations, or navigation tools. Mashups may contain and mix client-side and server-side activity. Obviously, understanding the incoming resources (services, statistical figures, text, videos, etc.) is a precondition for optimally combining them, so there are always some implicit semantics being used. By using semantic annotations

  5. Space Radiation Intelligence System (SPRINTS), Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — NextGen Federal Systems proposes an innovative SPace Radiation INTelligence System (SPRINTS) which provides an interactive and web-delivered capability that...

  6. Survey on deep learning for radiotherapy.

    Science.gov (United States)

    Meyer, Philippe; Noblet, Vincent; Mazzara, Christophe; Lallement, Alex

    2018-05-17

    More than 50% of cancer patients are treated with radiotherapy, either exclusively or in combination with other methods. The planning and delivery of radiotherapy treatment is a complex process, but can now be greatly facilitated by artificial intelligence technology. Deep learning is the fastest-growing field in artificial intelligence and has been successfully used in recent years in many domains, including medicine. In this article, we first explain the concept of deep learning, addressing it in the broader context of machine learning. The most common network architectures are presented, with a more specific focus on convolutional neural networks. We then present a review of the published works on deep learning methods that can be applied to radiotherapy, which are classified into seven categories related to the patient workflow, and can provide some insights of potential future applications. We have attempted to make this paper accessible to both radiotherapy and deep learning communities, and hope that it will inspire new collaborations between these two communities to develop dedicated radiotherapy applications. Copyright © 2018 Elsevier Ltd. All rights reserved.

  7. Computational Intelligence and Decision Making Trends and Applications

    CERN Document Server

    Madureira, Ana; Marques, Viriato

    2013-01-01

    This book provides a general overview and original analysis of new developments and applications in several areas of Computational Intelligence and Information Systems. Computational Intelligence has become an important tool for engineers to develop and analyze novel techniques to solve problems in basic sciences such as physics, chemistry, biology, engineering, environment and social sciences.   The material contained in this book addresses the foundations and applications of Artificial Intelligence and Decision Support Systems, Complex and Biological Inspired Systems, Simulation and Evolution of Real and Artificial Life Forms, Intelligent Models and Control Systems, Knowledge and Learning Technologies, Web Semantics and Ontologies, Intelligent Tutoring Systems, Intelligent Power Systems, Self-Organized and Distributed Systems, Intelligent Manufacturing Systems and Affective Computing. The contributions have all been written by international experts, who provide current views on the topics discussed and pr...

  8. Development of an asynchronous communication channel between wireless sensor nodes, smartphone devices, and web applications using RESTful Web Services for intelligent farming

    Science.gov (United States)

    De Leon, Marlene M.; Estuar, Maria Regina E.; Lim, Hadrian Paulo; Victorino, John Noel C.; Co, Jerelyn; Saddi, Ivan Lester; Paelmo, Sharlene Mae; Dela Cruz, Bon Lemuel

    2017-09-01

    Environment- and agriculture-related applications have been gaining ground for the past several years and have been the context for research in ubiquitous and pervasive computing. This study is part of a bigger study that uses artificial intelligence in developing models to detect, monitor, and forecast the spread of Fusarium oxysporum cubense TR4 (FOC TR4) on Cavendish bananas cultivated in the Philippines. To implement an intelligent farming system, 1) wireless sensor nodes (WSNs) are deployed in Philippine banana plantations to collect soil parameter data that is considered to affect the health of Cavendish bananas, 2) a custom-built smartphone application is used for collecting, storing, and transmitting soil data, plant images and plant status data to a cloud storage, and 3) a custom-built web application is used to load and display results of physico-chemical analysis of soil, analysis of data models, and geographic locations of plants being monitored. This study discusses the issues, considerations, and solutions implemented in the development of an asynchronous communication channel to ensure that all data collected by WSNs and smartphone applications are transmitted with a high degree of accuracy and reliability. From a design standpoint, standard API documentation of data types is required to avoid inconsistencies in parameter passing. From a technical standpoint, error-handling mechanisms need to be included, especially for delays in data transmission, along with a generalized method for parsing multidimensional arrays of data. Strategies are presented in the paper.
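
    The transmission-reliability concern can be pictured from the client side: queue readings locally and retry uploads with back-off so that delays do not lose data. The endpoint URL and payload fields below are hypothetical; this is an illustrative Python sketch of the pattern, not the system's actual API.

      import json, time, urllib.error, urllib.request

      QUEUE = [{"node": "wsn-01", "soil_moisture": 31.2, "ts": 1694000000}]  # made-up reading

      def upload(reading, url="http://example.com/api/readings", retries=3):
          """POST one reading as JSON; back off and retry on failure."""
          request = urllib.request.Request(
              url, data=json.dumps(reading).encode(),
              headers={"Content-Type": "application/json"})
          for attempt in range(retries):
              try:
                  with urllib.request.urlopen(request, timeout=10) as response:
                      return response.status == 200
              except urllib.error.URLError:
                  time.sleep(2 ** attempt)   # transmission delay: wait and retry
          return False                       # give up for now; reading stays queued

      for reading in list(QUEUE):
          if upload(reading):
              QUEUE.remove(reading)          # only dequeue once the server confirmed receipt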

  9. Present situation and trend of precision guidance technology and its intelligence

    Science.gov (United States)

    Shang, Zhengguo; Liu, Tiandong

    2017-11-01

    This paper first introduces the basic concepts of precision guidance technology and artificial intelligence technology. It then gives a brief introduction to intelligent precision guidance technology and, with reference to foreign deep-learning-based intelligent weapon development efforts (the LRASM missile, TRACE, and BLADE projects), gives an overview of the current state of precision guidance technology abroad. Finally, the future development trend of intelligent precision guidance technology is summarized, concentrating mainly on multi-target engagement, intelligent classification, weak-target detection and recognition, intelligent anti-jamming in complex environments, and multi-source, multi-missile cooperative engagement.

  10. Categorization of web pages - Performance enhancement to search engine

    Digital Repository Service at National Institute of Oceanography (India)

    Lakshminarayana, S.


  11. New trends in computational collective intelligence

    CERN Document Server

    Kim, Sang-Wook; Trawiński, Bogdan

    2015-01-01

    This book consists of 20 chapters in which the authors deal with different theoretical and practical aspects of new trends in Collective Computational Intelligence techniques. Computational Collective Intelligence methods and algorithms are among the currently trending research topics in areas related to Artificial Intelligence, Soft Computing and Data Mining, among others. Computational Collective Intelligence is a rapidly growing field that is most often understood as an AI sub-field dealing with soft computing methods which enable making group decisions and processing knowledge among autonomous units acting in distributed environments. Web-based Systems, Social Networks, and Multi-Agent Systems very often need these tools for working out consistent knowledge states, resolving conflicts and making decisions. The chapters included in this volume cover a selection of topics and new trends in several domains related to Collective Computational Intelligence: Language and Knowledge Processing, Data Mining Methods an...

  12. Swiss Première of the film "Deep Web" | 11 March 7 p.m. | CERN Main Auditorium

    CERN Multimedia

    2016-01-01

    On Friday 11 March, the CineGlobe Film Festival at CERN will host the FIFDH (International Film Festival and Forum on Human Rights) in the CERN Main Auditorium for the Swiss première of the film Deep Web. Starting from the online black market Silk Road, this investigation immerses us in the universe of the Tor network and the Dark Web, the cryptic and anonymous side of the Internet. In this modern version of the Far West, inhabited by bounty hunters, libertarians and political dissidents, everything is paid for in bitcoins. After the screening, filmmaker Miruna Coca-Cozma will moderate a discussion on security and the evolution of the web, with the participation of the director of the DiploFoundation, Jovan Kurbalija, and CERN Computer Security Officer Stefan Lueders. Doors open at 7:00 p.m.; the film begins at 7:30 p.m. Entry is free with reservation by email to deepweb.cern@fifdh.org. Anyone interested in volunteering for the s...

  13. Personalization of Rule-based Web Services.

    Science.gov (United States)

    Choi, Okkyung; Han, Sang Yong

    2008-04-04

    Nowadays, Web users have clearly expressed their wish to receive personalized services directly. Personalization is the way to tailor services directly to the immediate requirements of the user. However, the current Web Services System does not provide features supporting this, such as personalization of services and intelligent matchmaking. In this research, a flexible, personalized Rule-based Web Services System is proposed to address these problems and to enable efficient search, discovery and construction across general Web documents and Semantic Web documents. This system utilizes matchmaking among service requesters', service providers' and users' preferences using a Rule-based Search Method, and subsequently ranks search results. A prototype of efficient Web Services search and construction for the suggested system has been developed based on the current work.
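
    The matchmaking-and-ranking step can be illustrated with a toy sketch: score each candidate service against the requester's stated preferences using simple rules and sort by score. The rules, services and weights below are invented; the actual system matches requester, provider and user preferences over general Web and Semantic Web documents.

      PREFERENCES = {"language": "en", "max_price": 10}

      SERVICES = [
          {"name": "svcA", "language": "en", "price": 8},
          {"name": "svcB", "language": "fr", "price": 5},
          {"name": "svcC", "language": "en", "price": 12},
      ]

      RULES = [  # each rule maps (service, preferences) to a score contribution
          lambda s, p: 2 if s["language"] == p["language"] else 0,
          lambda s, p: 1 if s["price"] <= p["max_price"] else -1,
      ]

      def rank(services, prefs):
          """Apply every rule to every service; return (score, name) pairs, best first."""
          scored = [(sum(rule(s, prefs) for rule in RULES), s["name"]) for s in services]
          return sorted(scored, reverse=True)

      print(rank(SERVICES, PREFERENCES))   # highest-scoring service first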

  14. FUDAOWANG: A Web-Based Intelligent Tutoring System Implementing Advanced Education Concepts

    Science.gov (United States)

    Xu, Wei; Zhao, Ke; Li, Yatao; Yi, Zhenzhen

    2012-01-01

    Determining how to provide good tutoring functions is an important research direction of intelligent tutoring systems. In this study, the authors develop an intelligent tutoring system with good tutoring functions, called "FUDAOWANG." The research domain that FUDAOWANG treats is junior middle school mathematics, which belongs to the objective…

  15. Applying artificial intelligence to disease staging: Deep learning for improved staging of diabetic retinopathy.

    Science.gov (United States)

    Takahashi, Hidenori; Tampo, Hironobu; Arai, Yusuke; Inoue, Yuji; Kawashima, Hidetoshi

    2017-01-01

    Disease staging involves the assessment of disease severity or progression and is used for treatment selection. In diabetic retinopathy, disease staging using a wide retinal area is more desirable than staging using a limited area. We investigated whether deep learning artificial intelligence (AI) could be used to grade diabetic retinopathy and determine treatment and prognosis. The retrospective study analyzed 9,939 posterior pole photographs of 2,740 patients with diabetes. Nonmydriatic 45° field color fundus photographs were taken of four fields in each eye annually at Jichi Medical University between May 2011 and June 2015. A modified, fully randomly initialized GoogLeNet deep learning neural network was trained on 95% of the photographs using manual modified Davis grading of three additional adjacent photographs. We graded 4,709 of the 9,939 posterior pole fundus photographs using real prognoses, and 95% of the photographs were used to train the modified GoogLeNet. Main outcome measures were prevalence- and bias-adjusted Fleiss' kappa (PABAK) of AI staging of the remaining 5% of the photographs. The PABAK relative to modified Davis grading was 0.64 (accuracy, 81%; correct answer in 402 of 496 photographs). The PABAK relative to real prognosis grading was 0.37 (accuracy, 96%). We propose a novel AI disease-staging system for grading diabetic retinopathy that involves a retinal area not typically visualized on fundoscopy, and another AI that directly suggests treatments and determines prognoses.
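
    The training setup the abstract implies can be sketched as follows, using PyTorch/torchvision (recent versions) as a stand-in for the authors' framework: a randomly initialized GoogLeNet trained to emit one grade per fundus photograph. The number of grades and all hyperparameters below are assumptions.

      import torch
      import torchvision

      NUM_GRADES = 4   # assumed class count for a modified Davis-style grading

      # Randomly initialized GoogLeNet with a grading head (no pretrained weights).
      model = torchvision.models.googlenet(weights=None, aux_logits=False,
                                           init_weights=True, num_classes=NUM_GRADES)
      optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)
      criterion = torch.nn.CrossEntropyLoss()

      def train_step(images, grades):
          """images: (B, 3, 224, 224) float tensor; grades: (B,) integer labels."""
          optimizer.zero_grad()
          loss = criterion(model(images), grades)
          loss.backward()
          optimizer.step()
          return loss.item()

      print(train_step(torch.randn(2, 3, 224, 224), torch.tensor([0, 3])))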

  16. PredMP: A Web Resource for Computationally Predicted Membrane Proteins via Deep Learning

    KAUST Repository

    Wang, Sheng

    2018-02-06

    Experimental determination of membrane protein (MP) structures is challenging as they are often too large for nuclear magnetic resonance (NMR) experiments and difficult to crystallize. Currently there are only about 510 non-redundant MPs with solved structures in Protein Data Bank (PDB). To elucidate the MP structures computationally, we developed a novel web resource, denoted as PredMP (http://52.87.130.56:3001/#/proteinindex), that delivers one-dimensional (1D) annotation of the membrane topology and secondary structure, two-dimensional (2D) prediction of the contact/distance map, together with three-dimensional (3D) modeling of the MP structure in the lipid bilayer, for each MP target from a given model organism. The precision of the computationally constructed MP structures is leveraged by state-of-the-art deep learning methods as well as cutting-edge modeling strategies. In particular, (i) we annotate 1D property via DeepCNF (Deep Convolutional Neural Fields) that not only models complex sequence-structure relationship but also interdependency between adjacent property labels; (ii) we predict 2D contact/distance map through Deep Transfer Learning which learns the patterns as well as the complex relationship between contacts/distances and protein features from non-membrane proteins; and (iii) we model 3D structure by feeding its predicted contacts and secondary structure to the Crystallography & NMR System (CNS) suite combined with a membrane burial potential that is residue-specific and depth-dependent. PredMP currently contains more than 2,200 multi-pass transmembrane proteins (length<700 residues) from Human. These transmembrane proteins are classified according to IUPHAR/BPS Guide, which provides a hierarchical organization of receptors, channels, transporters, enzymes and other drug targets according to their molecular relationships and physiological functions. Among these MPs, we estimated that our approach could predict correct folds for 1

  17. International Conference on Frontiers of Intelligent Computing : Theory and Applications

    CERN Document Server

    Udgata, Siba; Biswal, Bhabendra

    2014-01-01

    This volume contains the papers presented at the Second International Conference on Frontiers in Intelligent Computing: Theory and Applications (FICTA-2013), held during 14-16 November 2013 and organized by Bhubaneswar Engineering College (BEC), Bhubaneswar, Odisha, India. It contains 63 papers focusing on applications of intelligent techniques, including evolutionary computation techniques such as genetic algorithms, particle swarm optimization and teaching-learning-based optimization, for various engineering applications such as data mining, fuzzy systems, machine intelligence and ANN, web technologies, multimedia applications, and intelligent computing and networking.

  18. Web-based e-learning and virtual lab of human-artificial immune system.

    Science.gov (United States)

    Gong, Tao; Ding, Yongsheng; Xiong, Qin

    2014-05-01

    The human immune system is as important in keeping the body healthy as the brain is in supporting intelligence. However, the traditional models of the human immune system are built on mathematical equations, which are not easy for students to understand. To help students understand immune systems, a web-based e-learning approach with a virtual lab was designed for the intelligent system control course by using new intelligent educational technology. Compared with the traditional classroom-based graduate educational model, web-based e-learning with the virtual lab proved more inspiring in guiding graduate students to think independently and innovatively, as the students reported. This web-based immune e-learning system with the online virtual lab has been found useful for teaching graduate students to understand immune systems in an easier way and to design their simulations more creatively and cooperatively. The teaching practice shows that the optimum web-based e-learning system can be used to increase the learning effectiveness of the students.

  19. The Sensor Web: A Macro-Instrument for Coordinated Sensing

    Directory of Open Access Journals (Sweden)

    Kevin A. Delin

    2002-07-01

    The Sensor Web is a macro-instrument concept that allows for the spatiotemporal understanding of an environment through coordinated efforts among multiple numbers and types of sensing platforms, including both orbital and terrestrial and both fixed and mobile ones. Each of these platforms, or pods, communicates within its local neighborhood and thus distributes information to the instrument as a whole. Much as intelligence in the brain is a result of the myriad connections between dendrites, it is anticipated that the Sensor Web will develop a macro-intelligence as a result of its distributed information, with the pods reacting and adapting to their environment in a way that is much more than their individual sum. The sharing of data among individual pods will allow for a global perception and purpose of the instrument as a whole. The Sensor Web is to sensors what the Internet is to computers, with different platforms and operating systems communicating via a set of shared, robust protocols. This paper outlines the potential of the Sensor Web concept and describes the Jet Propulsion Laboratory (JPL) Sensor Webs Project (http://sensorwebs.jpl.nasa.gov/). In particular, various fielded Sensor Webs are discussed.

  20. Deep learning with Python

    CERN Document Server

    Chollet, Francois

    2018-01-01

    DESCRIPTION Deep learning is applicable to a widening range of artificial intelligence problems, such as image classification, speech recognition, text classification, question answering, text-to-speech, and optical character recognition. Deep Learning with Python is structured around a series of practical code examples that illustrate each new concept introduced and demonstrate best practices. By the time you reach the end of this book, you will have become a Keras expert and will be able to apply deep learning in your own projects. KEY FEATURES • Practical code examples • In-depth introduction to Keras • Teaches the difference between Deep Learning and AI ABOUT THE TECHNOLOGY Deep learning is the technology behind photo tagging systems at Facebook and Google, self-driving cars, speech recognition systems on your smartphone, and much more. AUTHOR BIO Francois Chollet is the author of Keras, one of the most widely used libraries for deep learning in Python. He has been working with deep neural ...

  1. A World Wide Web Region-Based Image Search Engine

    DEFF Research Database (Denmark)

    Kompatsiaris, Ioannis; Triantafyllou, Evangelia; Strintzis, Michael G.

    2001-01-01

    In this paper the development of an intelligent image content-based search engine for the World Wide Web is presented. This system offers a new form of media representation and access to content available on the WWW. Information Web Crawlers continuously traverse the Internet and collect images...

  2. Effects of internal phosphorus loadings and food-web structure on the recovery of a deep lake from eutrophication

    Science.gov (United States)

    Lepori, Fabio; Roberts, James J.

    2017-01-01

    We used monitoring data from Lake Lugano (Switzerland and Italy) to assess key ecosystem responses to three decades of nutrient management (1983–2014). We investigated whether reductions in external phosphorus loadings (Lext) caused declines in lake phosphorus concentrations (P) and phytoplankton biomass (Chl a), as assumed by the predictive models that underpinned the management plan. Additionally, we examined the hypothesis that deep lakes respond quickly to Lext reductions. During the study period, nutrient management reduced Lext by approximately a half. However, the effects of such reduction on P and Chl a were complex. Far from the scenarios predicted by classic nutrient-management approaches, the responses of P and Chl a did not only reflect changes in Lext, but also variation in internal P loadings (Lint) and food-web structure. In turn, Lint varied depending on basin morphometry and climatic effects, whereas food-web structure varied due to apparently stochastic events of colonization and near-extinction of key species. Our results highlight the complexity of the trajectory of deep-lake ecosystems undergoing nutrient management. From an applied standpoint, they also suggest that [i] the recovery of warm monomictic lakes may be slower than expected due to the development of Lint, and that [ii] classic P and Chl a models based on Lext may be useful in nutrient management programs only if their predictions are used as starting points within adaptive frameworks.

  3. Exploring the academic invisible web

    OpenAIRE

    Lewandowski, Dirk; Mayr, Philipp

    2006-01-01

    Purpose: To provide a critical review of Bergman’s 2001 study on the Deep Web. In addition, we bring a new concept into the discussion, the Academic Invisible Web (AIW). We define the Academic Invisible Web as consisting of all databases and collections relevant to academia but not searchable by the general-purpose internet search engines. Indexing this part of the Invisible Web is central to scientific search engines. We provide an overview of approaches followed thus far. Design/methodol...

  4. EIIS: An Educational Information Intelligent Search Engine Supported by Semantic Services

    Science.gov (United States)

    Huang, Chang-Qin; Duan, Ru-Lin; Tang, Yong; Zhu, Zhi-Ting; Yan, Yong-Jian; Guo, Yu-Qing

    2011-01-01

    The semantic web brings a new opportunity for efficient information organization and search. To meet the special requirements of the educational field, this paper proposes an intelligent search engine enabled by educational semantic support service, where three kinds of searches are integrated into Educational Information Intelligent Search (EIIS)…

  5. ComplexContact: a web server for inter-protein contact prediction using deep learning

    KAUST Repository

    Zeng, Hong; Wang, Sheng; Zhou, Tianming; Zhao, Feifeng; Li, Xiufeng; Wu, Qing; Xu, Jinbo

    2018-01-01

    ComplexContact (http://raptorx2.uchicago.edu/ComplexContact/) is a web server for sequence-based interfacial residue-residue contact prediction of a putative protein complex. Interfacial residue-residue contacts are critical for understanding how proteins form complex and interact at residue level. When receiving a pair of protein sequences, ComplexContact first searches for their sequence homologs and builds two paired multiple sequence alignments (MSA), then it applies co-evolution analysis and a CASP-winning deep learning (DL) method to predict interfacial contacts from paired MSAs and visualizes the prediction as an image. The DL method was originally developed for intra-protein contact prediction and performed the best in CASP12. Our large-scale experimental test further shows that ComplexContact greatly outperforms pure co-evolution methods for inter-protein contact prediction, regardless of the species.

  6. ComplexContact: a web server for inter-protein contact prediction using deep learning

    KAUST Repository

    Zeng, Hong

    2018-05-20

    ComplexContact (http://raptorx2.uchicago.edu/ComplexContact/) is a web server for sequence-based interfacial residue-residue contact prediction of a putative protein complex. Interfacial residue-residue contacts are critical for understanding how proteins form complex and interact at residue level. When receiving a pair of protein sequences, ComplexContact first searches for their sequence homologs and builds two paired multiple sequence alignments (MSA), then it applies co-evolution analysis and a CASP-winning deep learning (DL) method to predict interfacial contacts from paired MSAs and visualizes the prediction as an image. The DL method was originally developed for intra-protein contact prediction and performed the best in CASP12. Our large-scale experimental test further shows that ComplexContact greatly outperforms pure co-evolution methods for inter-protein contact prediction, regardless of the species.

  7. ComplexContact: a web server for inter-protein contact prediction using deep learning.

    Science.gov (United States)

    Zeng, Hong; Wang, Sheng; Zhou, Tianming; Zhao, Feifeng; Li, Xiufeng; Wu, Qing; Xu, Jinbo

    2018-05-22

    ComplexContact (http://raptorx2.uchicago.edu/ComplexContact/) is a web server for sequence-based interfacial residue-residue contact prediction of a putative protein complex. Interfacial residue-residue contacts are critical for understanding how proteins form complexes and interact at the residue level. When receiving a pair of protein sequences, ComplexContact first searches for their sequence homologs and builds two paired multiple sequence alignments (MSA), then it applies co-evolution analysis and a CASP-winning deep learning (DL) method to predict interfacial contacts from the paired MSAs and visualizes the prediction as an image. The DL method was originally developed for intra-protein contact prediction and performed the best in CASP12. Our large-scale experimental test further shows that ComplexContact greatly outperforms pure co-evolution methods for inter-protein contact prediction, regardless of the species.

  8. Role of Librarian in Internet and World Wide Web Environment

    Directory of Open Access Journals (Sweden)

    K. Nageswara Rao

    2001-01-01

    Full Text Available The transition of traditional library collections to digital or virtual collections has presented the librarian with new opportunities. The Internet, the Web environment and associated sophisticated tools have given the librarian a new dynamic role to play in serving the new information-based society in better ways than hitherto. Because of the powerful features of the Web, i.e. its distributed, heterogeneous, collaborative, multimedia, multi-protocol, hypermedia-oriented architecture, the World Wide Web has revolutionized the way people access information, and has opened up new possibilities in areas such as digital libraries, virtual libraries, scientific information retrieval and dissemination. Not only is the world becoming interconnected, but the use of the Internet and the Web has also changed the fundamental roles, paradigms, and organizational culture of libraries and librarians. The article describes the limitless scope of the Internet and the Web, the existence of the librarian in the changing environment, the parallelism between information science and information technology, librarians and intelligent agents, the working of intelligent agents, and the strengths, weaknesses, threats and opportunities involved in the relationship between librarians and the Web. The role of the librarian in the Internet and Web environment, especially as intermediary, facilitator, end-user trainer, Web site builder, researcher, interface designer, knowledge manager and sifter of information resources, is also described.

  9. Philosophical engineering toward a philosophy of the web

    CERN Document Server

    Halpin, Harry

    2013-01-01

    This is the first interdisciplinary exploration of the philosophical foundations of the Web, a new area of inquiry that has important implications across a range of domains. Contains twelve essays that bridge the fields of philosophy, cognitive science, and phenomenology. Tackles questions such as the impact of Google on intelligence and epistemology, the philosophical status of digital objects, ethics on the Web, semantic and ontological changes caused by the Web, and the potential of the Web to serve as a genuine cognitive extension. Brings together insightful new scholarship from well-known an…

  10. New in protein structure and function annotation: hotspots, single nucleotide polymorphisms and the 'Deep Web'.

    Science.gov (United States)

    Bromberg, Yana; Yachdav, Guy; Ofran, Yanay; Schneider, Reinhard; Rost, Burkhard

    2009-05-01

    The rapidly increasing quantity of protein sequence data continues to widen the gap between available sequences and annotations. Comparative modeling suggests some aspects of the 3D structures of approximately half of all known proteins; homology- and network-based inferences annotate some aspect of function for a similar fraction of the proteome. For most known protein sequences, however, there is detailed knowledge about neither their function nor their structure. Comprehensive efforts towards the expert curation of sequence annotations have failed to meet the demand of the rapidly increasing number of available sequences. Only the automated prediction of protein function in the absence of homology can close the gap between available sequences and annotations in the foreseeable future. This review focuses on two novel methods for automated annotation, and briefly presents an outlook on how modern web software may revolutionize the field of protein sequence annotation. First, predictions of protein binding sites and functional hotspots, and the evolution of these into the most successful type of prediction of protein function from sequence will be discussed. Second, a new tool, comprehensive in silico mutagenesis, which contributes important novel predictions of function and at the same time prepares for the onset of the next sequencing revolution, will be described. While these two new sub-fields of protein prediction represent the breakthroughs that have been achieved methodologically, it will then be argued that a different development might further change the way biomedical researchers benefit from annotations: modern web software can connect the worldwide web in any browser with the 'Deep Web' (i.e., proprietary data resources). The availability of this direct connection, and the resulting access to a wealth of data, may impact drug discovery and development more than any existing method that contributes to protein annotation.
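    To make the scale of "comprehensive in silico mutagenesis" concrete, a minimal Python sketch of the underlying mutant space (the enumeration is standard; the scoring of each mutant, which real tools perform, is omitted here).

        # Enumerate every single-residue substitution of a protein sequence.
        AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

        def all_point_mutants(seq):
            for pos, wild_type in enumerate(seq):
                for mutant in AMINO_ACIDS:
                    if mutant != wild_type:
                        yield pos, wild_type, mutant, seq[:pos] + mutant + seq[pos + 1:]

        if __name__ == "__main__":
            count = sum(1 for _ in all_point_mutants("MKTAYIAK"))
            print(count)  # 8 positions x 19 substitutions = 152 mutants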

  11. An Immune Agent for Web-Based AI Course

    Science.gov (United States)

    Gong, Tao; Cai, Zixing

    2006-01-01

    To overcome weakness and faults of a web-based e-learning course such as Artificial Intelligence (AI), an immune agent was proposed, simulating a natural immune mechanism against a virus. The immune agent was built on the multi-dimension education agent model and immune algorithm. The web-based AI course was comprised of many files, such as HTML…

  12. A Web Observatory for the Machine Processability of Structured Data on the Web

    NARCIS (Netherlands)

    Beek, W.; Groth, P.; Schlobach, S.; Hoekstra, R.

    2014-01-01

    General human intelligence is needed in order to process Linked Open Data (LOD). On the Semantic Web (SW), content is intended to be machine-processable as well. But the extent to which a machine is able to navigate, access, and process the SW has not been extensively researched. We present LOD

  13. A Windows Phone 7 Oriented Secure Architecture for Business Intelligence Mobile Applications

    Directory of Open Access Journals (Sweden)

    Silvia TRIF

    2011-01-01

    Full Text Available This paper presents and implements a Windows Phone 7 oriented secure architecture for business intelligence mobile applications. The development process uses a Windows Phone 7 application that interacts with a WCF Web Service and a database. The types of business intelligence mobile applications are presented. Windows mobile device security and restrictions are presented. The namespaces and security algorithms used in the .NET Compact Framework for assuring application security are presented. The proposed architecture is shown, highlighting the flows between the application and the web service.

  14. The benefit of combining a deep neural network architecture with ideal ratio mask estimation in computational speech segregation to improve speech intelligibility

    DEFF Research Database (Denmark)

    Bentsen, Thomas; May, Tobias; Kressner, Abigail Anne

    2018-01-01

    Computational speech segregation attempts to automatically separate speech from noise. This is challenging in conditions with interfering talkers and low signal-to-noise ratios. Recent approaches have adopted deep neural networks and successfully demonstrated speech intelligibility improvements. A selection of components may be responsible for the success with these state-of-the-art approaches: the system architecture, a time frame concatenation technique and the learning objective. The aim of this study was to explore the roles and the relative contributions of these components by measuring speech …, to a state-of-the-art deep neural network-based architecture. Another improvement of 13.9 percentage points was obtained by changing the learning objective from the ideal binary mask, in which individual time-frequency units are labeled as either speech- or noise-dominated, to the ideal ratio mask, where …
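    The two learning objectives contrasted in this record have standard definitions in the segregation literature; in common notation (not necessarily the exact variant used in the study):

        % IBM labels each time-frequency unit as speech- or noise-dominated;
        % the IRM assigns it a soft value between 0 and 1.
        \[
        \mathrm{IBM}(t,f) =
        \begin{cases}
        1, & \mathrm{SNR}(t,f) > \mathrm{LC} \\
        0, & \text{otherwise}
        \end{cases}
        \qquad
        \mathrm{IRM}(t,f) = \left( \frac{S^2(t,f)}{S^2(t,f) + N^2(t,f)} \right)^{\beta}
        \]
        % S and N are the speech and noise energies in time-frequency unit
        % (t,f), LC is a local SNR criterion, and beta (commonly 0.5)
        % compresses the mask.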

  15. Artificial intelligence for Mariáš

    OpenAIRE

    Kaštánková, Petra

    2016-01-01

    This thesis focuses on the implementation of a card game, Mariáš, and an artificial intelligence for this game. The game is designed for three players and can be played either with other human players or against a computer adversary. The game is designed as a client-server application, whereby the player connects to the game using a web page. The basis of the artificial intelligence is the Minimax algorithm. To speed it up we use Alpha-Beta pruning, hash tables for storing equivalent sta…

  16. Quantum neuromorphic hardware for quantum artificial intelligence

    Science.gov (United States)

    Prati, Enrico

    2017-08-01

    The development of machine learning methods based on deep learning has boosted the field of artificial intelligence towards unprecedented achievements and applications in several fields. Such prominent results were achieved in parallel with the first successful demonstrations of fault-tolerant hardware for quantum information processing. To what extent deep learning can take advantage of hardware based on qubits behaving as a universal quantum computer is an open question under investigation. Here I review the convergence between the two fields towards the implementation of advanced quantum algorithms, including quantum deep learning.

  17. Deep into the Brain: Artificial Intelligence in Stroke Imaging.

    Science.gov (United States)

    Lee, Eun-Jae; Kim, Yong-Hwan; Kim, Namkug; Kang, Dong-Wha

    2017-09-01

    Artificial intelligence (AI), a computer system aiming to mimic human intelligence, is gaining increasing interest and is being incorporated into many fields, including medicine. Stroke medicine is one such area of application of AI, for improving the accuracy of diagnosis and the quality of patient care. For stroke management, adequate analysis of stroke imaging is crucial. Recently, AI techniques have been applied to decipher the data from stroke imaging and have demonstrated some promising results. In the very near future, such AI techniques may play a pivotal role in determining the therapeutic methods and predicting the prognosis for stroke patients in an individualized manner. In this review, we offer a glimpse at the use of AI in stroke imaging, specifically focusing on its technical principles, clinical application, and future perspectives.

  18. The new challenge for e-learning : the educational semantic web

    NARCIS (Netherlands)

    Aroyo, L.M.; Dicheva, D.

    2004-01-01

    The big question for many researchers in the area of educational systems now is: what is the next step in the evolution of e-learning? Are we finally moving from a scattered intelligence to a coherent space of collaborative intelligence? How close are we to the vision of the Educational Semantic Web?

  19. 3rd Euro-China Conference on Intelligent Data Analysis and Applications

    CERN Document Server

    Snášel, Václav; Sung, Tien-Wen; Wang, Xiao

    2017-01-01

    This book gathers papers presented at the ECC 2016, the Third Euro-China Conference on Intelligent Data Analysis and Applications, which was held in Fuzhou City, China from November 7 to 9, 2016. The aim of the ECC is to provide an internationally respected forum for scientific research in the broad areas of intelligent data analysis, computational intelligence, signal processing, and all associated applications of artificial intelligence (AI). The third installment of the ECC was jointly organized by Fujian University of Technology, China, and VSB-Technical University of Ostrava, Czech Republic. The conference was co-sponsored by Taiwan Association for Web Intelligence Consortium, and Immersion Co., Ltd.

  20. Deep Corals, Deep Learning: Moving the Deep Net Towards Real-Time Image Annotation

    OpenAIRE

    Lea-Anne Henry; Sankha S. Mukherjee; Neil M. Roberston; Laurence De Clippele; J. Murray Roberts

    2016-01-01

    The mismatch between human capacity and the acquisition of Big Data such as Earth imagery undermines commitments to the Convention on Biological Diversity (CBD) and the Aichi targets. Artificial intelligence (AI) solutions to Big Data issues are urgently needed as these could prove to be faster, more accurate, and cheaper. Reducing the costs of managing protected areas in remote deep waters and in the High Seas is of great importance, and this is a realm where autonomous technology will be transformative.

  1. Bacterioplankton communities of Crater Lake, OR: Dynamic changes with euphotic zone food web structure and stable deep water populations

    Science.gov (United States)

    Urbach, E.; Vergin, K.L.; Larson, G.L.; Giovannoni, S.J.

    2007-01-01

    The distribution of bacterial and archaeal species in Crater Lake plankton varies dramatically over depth and with time, as assessed by hybridization of group-specific oligonucleotides to RNA extracted from lakewater. Nonmetric, multidimensional scaling (MDS) analysis of relative bacterial phylotype densities revealed complex relationships among assemblages sampled from depth profiles in July, August and September of 1997 through 1999. CL500-11 green nonsulfur bacteria (Phylum Chloroflexi) and marine Group I crenarchaeota are consistently dominant groups in the oxygenated deep waters at 300 and 500 m. Other phylotypes found in the deep waters are similar to surface and mid-depth populations and vary with time. Euphotic zone assemblages are dominated either by β-proteobacteria or CL120-10 verrucomicrobia, and ACK4 actinomycetes. MDS analyses of euphotic zone populations in relation to environmental variables and phytoplankton and zooplankton population structures reveal apparent links between Daphnia pulicaria zooplankton population densities and microbial community structure. These patterns may reflect food web interactions that link kokanee salmon population densities to community structure of the bacterioplankton, via fish predation on Daphnia with cascading consequences to Daphnia bacterivory and predation on bacterivorous protists. These results demonstrate a stable bottom-water microbial community. They also extend previous observations of food web-driven changes in euphotic zone bacterioplankton community structure to an oligotrophic setting. © 2007 Springer Science+Business Media B.V.

  2. Inferring Trust Relationships in Web-Based Social Networks

    National Research Council Canada - National Science Library

    Golbeck, Jennifer; Hendler, James

    2006-01-01

    The growth of web-based social networking and the properties of those networks have created great potential for producing intelligent software that integrates a user's social network and preferences...

  3. Intelligent Access to Sequence and Structure Databases (IASSD) - an interface for accessing information from major web databases.

    Science.gov (United States)

    Ganguli, Sayak; Gupta, Manoj Kumar; Basu, Protip; Banik, Rahul; Singh, Pankaj Kumar; Vishal, Vineet; Bera, Abhisek Ranjan; Chakraborty, Hirak Jyoti; Das, Sasti Gopal

    2014-01-01

    With the advent of the age of big data and advances in high-throughput technology, accessing data has become one of the most important steps in the entire knowledge discovery process. Most users are not able to decipher the query result that is obtained when non-specific keywords or a combination of keywords are used. Intelligent Access to Sequence and Structure Databases (IASSD) is a desktop application for the Windows operating system. It is written in Java and utilizes the web service description language (WSDL) files and JAR files of the E-utilities of various databases, such as the National Centre for Biotechnology Information (NCBI) and the Protein Data Bank (PDB). Apart from that, IASSD allows the user to view protein structure using a JMOL application which supports conditional editing. The JAR file is freely available through e-mail from the corresponding author.

  4. Web threat and its implication for E-business in Nigeria ...

    African Journals Online (AJOL)

    Web threat is any threat that uses the internet to facilitate identity theft, fraud, espionage and intelligence gathering. Web-based vulnerabilities now outnumber traditional computer security concerns. Such threats use multiple types of malware and fraud, all of which utilize HTTP or HTTPS protocols, but may also employ …

  5. How much data resides in a web collection: how to estimate size of a web collection

    NARCIS (Netherlands)

    Khelghati, Mohammadreza; Hiemstra, Djoerd; van Keulen, Maurice

    2013-01-01

    With the increasing amount of data in deep web sources (hidden from general search engines behind web forms), accessing this data has gained more attention. In the algorithms applied for this purpose, it is knowledge of a data source's size that enables the algorithms to make accurate decisions in …
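    One classical way to estimate the size of a hidden collection is capture-recapture over query samples; below is a minimal Python sketch of the Lincoln-Petersen estimator (a textbook method offered for illustration, not necessarily the one used in this paper).

        # Estimate collection size from two independent document samples
        # (e.g., results of two disjoint query sets against the same source).
        def lincoln_petersen(sample_a, sample_b):
            overlap = len(set(sample_a) & set(sample_b))
            if overlap == 0:
                raise ValueError("no overlap; draw larger samples")
            return len(set(sample_a)) * len(set(sample_b)) / overlap

        if __name__ == "__main__":
            a = {"doc1", "doc2", "doc3", "doc4"}
            b = {"doc3", "doc4", "doc5", "doc6"}
            print(lincoln_petersen(a, b))  # |A||B|/overlap = 16/2 = 8.0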

  6. Food web functioning of the benthopelagic community in a deep-sea seamount based on diet and stable isotope analyses

    Science.gov (United States)

    Preciado, Izaskun; Cartes, Joan E.; Punzón, Antonio; Frutos, Inmaculada; López-López, Lucía; Serrano, Alberto

    2017-03-01

    Trophic interactions in the deep-sea fish community of the Galicia Bank seamount (NE Atlantic) were inferred by using stomach contents analyses (SCA) and stable isotope analyses (SIA) of 27 fish species and their main prey items. Samples were collected during three surveys performed in 2009, 2010 and 2011 between 625 and 1800 m depth. Three main trophic guilds were determined using SCA data: pelagic, benthopelagic and benthic feeders, respectively. Vertically migrating macrozooplankton and meso-bathypelagic shrimps were found to play a key role as pelagic prey for the deep-sea fish community of the Galicia Bank. Habitat overlap was rarely detected; in fact, when species coexisted, most of them showed low dietary overlap, indicating a high degree of resource partitioning. A high potential competition, however, was observed among benthopelagic feeders, i.e.: Etmopterus spinax, Hoplostethus mediterraneus and Epigonus telescopus. A significant correlation was found between δ15N and δ13C for all the analysed species. When calculating Trophic Levels (TLs) for the main fish species, using both the SCA and SIA approaches, some discrepancies arose: TLs calculated from SIA were significantly higher than those obtained from SCA, probably indicating a higher consumption of benthic-suprabenthic prey in the previous months. During the summer, food web functioning in the Galicia Bank was more influenced by the assemblages dwelling in the water column than by deep-sea benthos, which was rather scarce in the summer samples. These discrepancies demonstrate the importance of using both approaches, SCA (snapshot of diet) and SIA (assimilated food in previous months), when undertaking trophic studies, if an overview of food web dynamics in different compartments of the ecosystem is to be obtained.
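    The SIA-based trophic levels mentioned here are conventionally computed from nitrogen isotope ratios; a standard formulation (the study's exact baseline and enrichment factor are not given in this abstract) is:

        \[
        \mathrm{TL}_{\mathrm{consumer}} = \lambda +
        \frac{\delta^{15}\mathrm{N}_{\mathrm{consumer}} -
              \delta^{15}\mathrm{N}_{\mathrm{baseline}}}{\Delta^{15}\mathrm{N}}
        \]
        % lambda is the trophic level of the baseline organism (e.g., 2 for a
        % primary consumer) and Delta^15 N is the per-level enrichment,
        % commonly taken as about 3.4 permil.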

  7. Programming Collective Intelligence Building Smart Web 2.0 Applications

    CERN Document Server

    Segaran, Toby

    2008-01-01

    This fascinating book demonstrates how you can build web applications to mine the enormous amount of data created by people on the Internet. With the sophisticated algorithms in this book, you can write smart programs to access interesting datasets from other web sites, collect data from users of your own applications, and analyze and understand the data once you've found it.

  8. Understanding a Deep Learning Technique through a Neuromorphic System a Case Study with SpiNNaker Neuromorphic Platform

    Directory of Open Access Journals (Sweden)

    Sugiarto Indar

    2018-01-01

    Full Text Available Deep learning (DL) has been considered a breakthrough technique in the field of artificial intelligence and machine learning. Conceptually, it relies on a many-layer network that exhibits a hierarchically non-linear processing capability. Some DL architectures such as deep neural networks, deep belief networks and recurrent neural networks have been developed and applied to many fields with incredible results, even comparable to human intelligence. However, many researchers are still sceptical about its true capability: can the intelligence demonstrated by the deep learning technique be applied to general tasks? This question motivates the emergence of another research discipline: neuromorphic computing (NC). In NC, researchers try to identify the most fundamental ingredients that construct the intelligent behaviour produced by the brain itself. To achieve this, neuromorphic systems are developed to mimic the brain's functionality down to the cellular level. In this paper, a neuromorphic platform called SpiNNaker is described and evaluated in order to understand its potential use as a platform for a deep learning approach. This paper is a literature review that contains a comparative study of algorithms that have been implemented in SpiNNaker.

  9. Understanding a Deep Learning Technique through a Neuromorphic System a Case Study with SpiNNaker Neuromorphic Platform

    OpenAIRE

    Sugiarto Indar; Pasila Felix

    2018-01-01

    Deep learning (DL) has been considered a breakthrough technique in the field of artificial intelligence and machine learning. Conceptually, it relies on a many-layer network that exhibits a hierarchically non-linear processing capability. Some DL architectures such as deep neural networks, deep belief networks and recurrent neural networks have been developed and applied to many fields with incredible results, even comparable to human intelligence. However, many researchers are still scept…

  10. Developing Deep Learning Applications for Life Science and Pharma Industry.

    Science.gov (United States)

    Siegismund, Daniel; Tolkachev, Vasily; Heyse, Stephan; Sick, Beate; Duerr, Oliver; Steigele, Stephan

    2018-06-01

    Deep Learning has boosted artificial intelligence over the past 5 years and is now seen as one of the major technological innovation areas, predicted to replace many repetitive but complex tasks of human labor within the next decade. It is also expected to be 'game changing' for research activities in pharma and the life sciences, where large sets of similar yet complex data samples are systematically analyzed. Deep learning is currently conquering formerly expert domains, especially in areas requiring perception that were previously not amenable to standard machine learning. A typical example is the automated analysis of images, which are typically produced en masse in many domains, e.g., in high-content screening or digital pathology. Deep learning enables the creation of competitive applications in so-far defined core domains of 'human intelligence'. Applications of artificial intelligence have been enabled in recent years by (i) the massive availability of data samples collected in pharma-driven drug programs ('big data'), (ii) deep learning algorithmic advancements, and (iii) increases in compute power. Such applications are based on software frameworks with specific strengths and weaknesses. Here, we introduce typical applications and underlying frameworks for deep learning, together with a set of practical criteria for developing production-ready solutions in life science and pharma research. Based on our own experience in successfully developing deep learning applications, we provide suggestions and a baseline for selecting the most suited frameworks for a future-proof and cost-effective development. © Georg Thieme Verlag KG Stuttgart · New York.

  11. Emotional intelligence education in pre-registration nursing programmes: an integrative review.

    Science.gov (United States)

    Foster, Kim; McCloughen, Andrea; Delgado, Cynthia; Kefalas, Claudia; Harkness, Emily

    2015-03-01

    To investigate the state of knowledge on emotional intelligence (EI) education in pre-registration nursing programmes. Integrative literature review. CINAHL, Medline, Scopus, ERIC, and Web of Knowledge electronic databases were searched for abstracts published in English between 1992-2014. Data extraction and constant comparative analysis of 17 articles. Three categories were identified: Constructs of emotional intelligence; emotional intelligence curricula components; and strategies for emotional intelligence education. A wide range of emotional intelligence constructs were found, with a predominance of trait-based constructs. A variety of strategies to enhance students' emotional intelligence skills were identified, but limited curricula components and frameworks reported in the literature. An ability-based model for curricula and learning and teaching approaches is recommended. Copyright © 2014. Published by Elsevier Ltd.

  12. Semantic Web

    Directory of Open Access Journals (Sweden)

    Anna Lamandini

    2011-06-01

    Full Text Available The semantic Web is a technology at the service of knowledge which is aimed at accessibility and the sharing of content, facilitating interoperability between different systems; as such it is one of the nine key technological pillars of TIC (technologies for information and communication) within the third theme, the specific cooperation programme, of the seventh framework programme for research and development (7°PQRS, 2007-2013). As a system it seeks to overcome the overload or excess of irrelevant information on the Internet, in order to facilitate specific or pertinent research. It is an extension of the existing Web in which the aim is cooperation between computers and people (the dream of Sir Tim Berners-Lee), where machines can give more support to people when integrating and elaborating data in order to obtain inferences and a global sharing of data. It is a technology that is able to favour the development of a “data web”, in other words the creation of a space of interconnected and shared data sets (Linked Data) which allows users to link different types of data coming from different sources. It is a technology that will have a great effect on everyday life since it will permit the planning of “intelligent applications” in various sectors such as education and training, research, the business world, public information, tourism, health, and e-government. It is an innovative technology that activates a social transformation (the socio-semantic Web) on a world level since it redefines the cognitive universe of users and enables the sharing not only of information but of significance (collective and connected intelligence).
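    A minimal taste of the Linked Data idea described above, using the rdflib Python library (the resource URI and predicate are examples only; the snippet assumes network access, `pip install rdflib`, and that DBpedia still serves RDF at this address).

        from rdflib import Graph, URIRef

        g = Graph()
        # rdflib content-negotiates and parses the RDF behind this resource.
        g.parse("http://dbpedia.org/resource/Berlin")

        label = URIRef("http://www.w3.org/2000/01/rdf-schema#label")
        for obj in g.objects(URIRef("http://dbpedia.org/resource/Berlin"), label):
            if getattr(obj, "language", None) == "en":
                print(obj)  # the English label of the linked resource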

  13. An Intelligent Framework for Website Usability

    Directory of Open Access Journals (Sweden)

    Alexiei Dingli

    2014-01-01

    Full Text Available With the major advances of the Internet over the past couple of years, websites have come to play a central role in the modern marketing business program. However, simply owning a website is not enough for a business to prosper on the Web. Indeed, it is the level of usability of a website that determines whether a user stays or abandons it for a competing one. It is therefore crucial to understand the importance of usability on the web, and consequently the need for its evaluation. Nonetheless, there exist a number of obstacles preventing software organizations from successfully applying sound website usability evaluation strategies in practice. From this point of view automation of the latter is extremely beneficial, as it not only assists designers in creating more usable websites, but also enhances the Internet users’ experience on the Web and increases their level of satisfaction. As a means of addressing this problem, an Intelligent Usability Evaluation (IUE) tool is proposed that automates the usability evaluation process by employing a Heuristic Evaluation technique in an intelligent manner through the adoption of several research-based AI methods. Experimental results show there exists a high correlation between the tool and human annotators when identifying the considered usability violations.

  14. ANALYSIS OF WEB MINING APPLICATIONS AND BENEFICIAL AREAS

    Directory of Open Access Journals (Sweden)

    Khaleel Ahmad

    2011-10-01

    Full Text Available The main purpose of this paper is to study the process of Web mining techniques, their features, applications (e-commerce and e-business) and the areas that benefit from them. Web mining has become more popular and is widely used in various application areas (such as business intelligence systems, e-commerce and e-business). E-commerce and e-business results are improved by the application of mining techniques such as data mining and text mining; among all the mining techniques, web mining is the best suited.

  15. High-Redshift Radio Galaxies from Deep Fields

    Indian Academy of Sciences (India)

    2016-01-27

    Here we present results from the deep 150 MHz observations of the LBDS-Lynx field, which has been imaged at 327, …

  16. The deep lymphatic anatomy of the hand.

    Science.gov (United States)

    Ma, Chuan-Xiang; Pan, Wei-Ren; Liu, Zhi-An; Zeng, Fan-Qiang; Qiu, Zhi-Qiang

    2018-04-03

    The deep lymphatic anatomy of the hand still remains the least described in the medical literature. Eight hands were harvested from four non-embalmed human cadavers, amputated above the wrist. A small amount of 6% hydrogen peroxide was employed to detect the lymphatic vessels around the superficial and deep palmar vascular arches, in the webs from the index to little fingers, and in the thenar and hypothenar areas. A 30-gauge needle was inserted into the vessels, which were injected with a barium sulphate compound. Each specimen was dissected, photographed and radiographed to demonstrate the deep lymphatic distribution of the hand. Five groups of deep collecting lymph vessels were found in the hand: the superficial palmar arch lymph vessel (SPALV); the deep palmar arch lymph vessel (DPALV); the thenar lymph vessel (TLV); the hypothenar lymph vessel (HTLV); and the deep finger web lymph vessel (DFWLV). Each group of vessels first drained in different directions, then all turned and ran towards the wrist in different layers. The deep lymphatic drainage of the hand has been presented. The results will provide an anatomical basis for clinical management, educational reference and scientific research. Copyright © 2018 Elsevier GmbH. All rights reserved.

  17. Web-video-mining-supported workflow modeling for laparoscopic surgeries.

    Science.gov (United States)

    Liu, Rui; Zhang, Xiaoli; Zhang, Hao

    2016-11-01

    As quality assurance is of strong concern in advanced surgeries, intelligent surgical systems are expected to have knowledge such as the knowledge of the surgical workflow model (SWM) to support their intuitive cooperation with surgeons. For generating a robust and reliable SWM, a large amount of training data is required. However, training data collected by physically recording surgery operations is often limited and data collection is time-consuming and labor-intensive, severely influencing knowledge scalability of the surgical systems. The objective of this research is to solve the knowledge scalability problem in surgical workflow modeling in a low-cost, labor-efficient way. A novel web-video-mining-supported surgical workflow modeling (webSWM) method is developed. A novel video quality analysis method based on topic analysis and sentiment analysis techniques is developed to select high-quality videos from abundant and noisy web videos. A statistical learning method is then used to build the workflow model based on the selected videos. To test the effectiveness of the webSWM method, 250 web videos were mined to generate a surgical workflow for the robotic cholecystectomy surgery. The generated workflow was evaluated by 4 web-retrieved videos and 4 operation-room-recorded videos, respectively. The evaluation results (video selection consistency n-index ≥0.60; surgical workflow matching degree ≥0.84) proved the effectiveness of the webSWM method in generating robust and reliable SWM knowledge by mining web videos. With the webSWM method, abundant web videos were selected and a reliable SWM was modeled in a short time with low labor cost. The satisfactory performance in mining web videos and learning surgery-related knowledge shows that the webSWM method is promising in scaling knowledge for intelligent surgical systems. Copyright © 2016 Elsevier B.V. All rights reserved.

  18. Artificial Intelligence in Cardiology.

    Science.gov (United States)

    Johnson, Kipp W; Torres Soto, Jessica; Glicksberg, Benjamin S; Shameer, Khader; Miotto, Riccardo; Ali, Mohsin; Ashley, Euan; Dudley, Joel T

    2018-06-12

    Artificial intelligence and machine learning are poised to influence nearly every aspect of the human condition, and cardiology is not an exception to this trend. This paper provides a guide for clinicians on relevant aspects of artificial intelligence and machine learning, reviews selected applications of these methods in cardiology to date, and identifies how cardiovascular medicine could incorporate artificial intelligence in the future. In particular, the paper first reviews predictive modeling concepts relevant to cardiology such as feature selection and frequent pitfalls such as improper dichotomization. Second, it discusses common algorithms used in supervised learning and reviews selected applications in cardiology and related disciplines. Third, it describes the advent of deep learning and related methods collectively called unsupervised learning, provides contextual examples both in general medicine and in cardiovascular medicine, and then explains how these methods could be applied to enable precision cardiology and improve patient outcomes. Copyright © 2018 The Authors. Published by Elsevier Inc. All rights reserved.

  19. International Conference on Intelligent and Interactive Systems and Applications

    CERN Document Server

    Patnaik, Srikanta; Yu, Zhengtao

    2017-01-01

    This book provides the latest research findings and developments in the field of interactive intelligent systems, addressing diverse areas such as autonomous systems, Internet and cloud computing, pattern recognition and vision systems, mobile computing and intelligent networking, and e-enabled systems. It gathers selected papers from the International Conference on Intelligent and Interactive Systems and Applications (IISA2016) held on June 25–26, 2016 in Shanghai, China. Interactive intelligent systems are among the most important multi-disciplinary research and development domains of artificial intelligence, human–computer interaction, machine learning and new Internet-based technologies. Accordingly, these systems embrace a considerable number of application areas such as autonomous systems, expert systems, mobile systems, recommender systems, knowledge-based and semantic web-based systems, virtual communication environments, and decision support systems, to name a few. To date, research on interactiv...

  20. From outbound to inbound marketing for a web-development company

    OpenAIRE

    Liukkonen, Maria

    2016-01-01

    The objective of the thesis is the transformation from outbound to inbound marketing of a web-development company, based on social media channels. The company is called Tulikipuna and it offers web-development services, coding for the web, intelligent website solutions and software services to all kinds of corporate clients and companies. The theoretical framework was based on defining the concept of digital marketing: the difference between outbound and inbound marketing, social media sites and curre…

  1. Underwater Web Work

    Science.gov (United States)

    Wighting, Mervyn J.; Lucking, Robert A.; Christmann, Edwin P.

    2004-01-01

    Teachers search for ways to enhance oceanography units in the classroom. There are many online resources available to help one explore the mysteries of the deep. This article describes a collection of Web sites on this topic appropriate for middle level classrooms.

  2. An Object-Oriented Architecture for a Web-Based CAI System.

    Science.gov (United States)

    Nakabayashi, Kiyoshi; Hoshide, Takahide; Seshimo, Hitoshi; Fukuhara, Yoshimi

    This paper describes the design and implementation of an object-oriented World Wide Web-based CAI (Computer-Assisted Instruction) system. The goal of the design is to provide a flexible CAI/ITS (Intelligent Tutoring System) framework with full extendibility and reusability, as well as to exploit Web-based software technologies such as JAVA, ASP (a…

  3. Load Forecasting with Artificial Intelligence on Big Data

    OpenAIRE

    Glauner, Patrick; State, Radu

    2016-01-01

    In the domain of electrical power grids, there is particular interest in time series analysis using artificial intelligence. Machine learning is the branch of artificial intelligence that gives computers the ability to learn patterns from data without being explicitly programmed. Deep Learning is a set of cutting-edge machine learning algorithms inspired by how the human brain works. It allows feature hierarchies to be learned from the data rather than modeling hand-crafted features. I…
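    For orientation, a minimal sliding-window formulation of load forecasting in Python; this is an illustrative linear baseline with synthetic data, far simpler than the deep models the abstract refers to, and all names are invented for the example.

        import numpy as np

        def make_windows(series, k):
            """Turn a 1-D load series into (previous k hours -> next hour) pairs."""
            X = np.array([series[i:i + k] for i in range(len(series) - k)])
            y = np.array(series[k:])
            return X, y

        if __name__ == "__main__":
            rng = np.random.default_rng(0)
            hours = np.arange(500)
            # Synthetic hourly load: daily cycle plus noise.
            load = 100 + 20 * np.sin(2 * np.pi * hours / 24) + rng.normal(0, 2, 500)
            X, y = make_windows(load, k=24)
            coef, *_ = np.linalg.lstsq(X, y, rcond=None)  # fit linear predictor
            print(float(X[-1] @ coef), float(y[-1]))      # forecast vs. actual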

  4. The Semantic Web: From Representation to Realization

    Science.gov (United States)

    Thórisson, Kristinn R.; Spivack, Nova; Wissner, James M.

    A semantically-linked web of electronic information - the Semantic Web - promises numerous benefits including increased precision in automated information sorting, searching, organizing and summarizing. Realizing this requires significantly more reliable meta-information than is readily available today. It also requires a better way to represent information that supports unified management of diverse data and diverse manipulation methods: from basic keywords to various types of artificial intelligence, to the highest level of intelligent manipulation - the human mind. How this is best done is far from obvious. Relying solely on hand-crafted annotation and ontologies, or solely on artificial intelligence techniques, seems less likely to succeed than a combination of the two. In this paper we describe an integrated, complete solution to these challenges that has already been implemented and tested with hundreds of thousands of users. It is based on an ontological representational level we call SemCards that combines ontological rigour with flexible user interface constructs. SemCards are machine- and human-readable digital entities that allow non-experts to create and use semantic content, while empowering machines to better assist and participate in the process. SemCards enable users to easily create semantically-grounded data that in turn acts as examples for automation processes, creating a positive iterative feedback loop of metadata creation and refinement between user and machine. They provide a holistic solution to the Semantic Web, supporting powerful management of the full lifecycle of data, including its creation, retrieval, classification, sorting and sharing. We have implemented the SemCard technology on the semantic Web site Twine.com, showing that the technology is indeed versatile and scalable. Here we present the key ideas behind SemCards and describe the initial implementation of the technology.

  5. A Framework for the Systematic Collection of Open Source Intelligence

    Energy Technology Data Exchange (ETDEWEB)

    Pouchard, Line Catherine [ORNL; Trien, Joseph P [ORNL; Dobson, Jonathan D [ORNL

    2009-01-01

    Following legislative directions, the Intelligence Community has been mandated to make greater use of Open Source Intelligence (OSINT). Efforts are underway to increase the use of OSINT but there are many obstacles. One of these obstacles is the lack of tools helping to manage the volume of available data and ascertain its credibility. We propose a unique system for selecting, collecting and storing Open Source data from the Web and the Open Source Center. Some data management tasks are automated, document source is retained, and metadata containing geographical coordinates are added to the documents. Analysts are thus empowered to search, view, store, and analyze Web data within a single tool. We present ORCAT I and ORCAT II, two implementations of the system.

  6. Computational intelligence for technology enhanced learning

    Energy Technology Data Exchange (ETDEWEB)

    Xhafa, Fatos [Polytechnic Univ. of Catalonia, Barcelona (Spain). Dept. of Languages and Informatics Systems; Caballe, Santi; Daradoumis, Thanasis [Open Univ. of Catalonia, Barcelona (Spain). Dept. of Computer Sciences Multimedia and Telecommunications; Abraham, Ajith [Machine Intelligence Research Labs (MIR Labs), Auburn, WA (United States). Scientific Network for Innovation and Research Excellence; Juan Perez, Angel Alejandro (eds.) [Open Univ. of Catalonia, Barcelona (Spain). Dept. of Information Sciences

    2010-07-01

    E-Learning has become one of the most widespread ways of distance teaching and learning. Technologies such as Web, Grid, and Mobile and Wireless networks are pushing teaching and learning communities to find new and intelligent ways of using these technologies to enhance teaching and learning activities. Indeed, these new technologies can play an important role in increasing the support for teachers and learners and in shortening the time to learning and teaching; yet, it is necessary to use intelligent techniques to take advantage of these new technologies to achieve the desired support for teachers and learners and enhance learners' performance in distributed learning environments. The chapters of this volume bring advances in using intelligent techniques for technology enhanced learning as well as development of e-Learning applications based on such techniques and supported by technology. Such intelligent techniques include clustering and classification for personalization of learning, intelligent context-aware techniques, adaptive learning, data mining techniques and ontologies in e-Learning systems, among others. Academics, scientists, software developers, teachers and tutors and students interested in e-Learning will find this book useful for their academic, research and practice activity. (orig.)

  7. Smart Aerospace eCommerce: Using Intelligent Agents in a NASA Mission Services Ordering Application

    Science.gov (United States)

    Moleski, Walt; Luczak, Ed; Morris, Kim; Clayton, Bill; Scherf, Patricia; Obenschain, Arthur F. (Technical Monitor)

    2002-01-01

    This paper describes how intelligent agent technology was successfully prototyped and then deployed in a smart eCommerce application for NASA. An intelligent software agent called the Intelligent Service Validation Agent (ISVA) was added to an existing web-based ordering application to validate complex orders for spacecraft mission services. This integration of intelligent agent technology with conventional web technology satisfies an immediate NASA need to reduce manual order processing costs. The ISVA agent checks orders for completeness, consistency, and correctness, and notifies users of detected problems. ISVA uses NASA business rules and a knowledge base of NASA services, and is implemented using the Java Expert System Shell (Jess), a fast rule-based inference engine. The paper discusses the design of the agent and knowledge base, and the prototyping and deployment approach. It also discusses future directions and other applications, as well as lessons learned that may help other projects make their aerospace eCommerce applications smarter.
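    A toy re-expression of the rule-based validation idea in Python (ISVA itself uses Jess rules over NASA's business knowledge; the rules, field names, and services below are invented purely for illustration).

        # Each rule is a (message, predicate) pair over an order dict;
        # validation collects the messages of every rule that fires.
        RULES = [
            ("missing mission id", lambda o: not o.get("mission_id")),
            ("start after end", lambda o: o.get("start", 0) > o.get("end", 0)),
            ("unknown service", lambda o: o.get("service") not in {"tracking", "telemetry"}),
        ]

        def validate(order):
            return [msg for msg, broken in RULES if broken(order)]

        if __name__ == "__main__":
            order = {"mission_id": "", "service": "telemetry", "start": 5, "end": 3}
            print(validate(order))  # ['missing mission id', 'start after end']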

  8. Intelligent Web-Based Learning System with Personalized Learning Path Guidance

    Science.gov (United States)

    Chen, C. M.

    2008-01-01

    Personalized curriculum sequencing is an important research issue for web-based learning systems because no fixed learning paths will be appropriate for all learners. Therefore, many researchers focused on developing e-learning systems with personalized learning mechanisms to assist on-line web-based learning and adaptively provide learning paths…

  9. Invited talk: Deep Learning Meets Physics

    CERN Multimedia

    CERN. Geneva

    2018-01-01

    Deep Learning has emerged as one of the most successful fields of machine learning and artificial intelligence, with overwhelming success in industrial speech, text and vision benchmarks. Consequently it evolved into the central field of research for IT giants like Google, Facebook, Microsoft, Baidu, and Amazon. Deep Learning is founded on novel neural network techniques, the recent availability of very fast computers, and massive data sets. At its core, Deep Learning discovers multiple levels of abstract representations of the input. The main obstacle to learning deep neural networks is the vanishing gradient problem. The vanishing gradient impedes credit assignment to the first layers of a deep network or to early elements of a sequence, and therefore limits model selection. Major advances in Deep Learning can be related to avoiding the vanishing gradient, like stacking, ReLUs, residual networks, highway networks, and LSTM. For Deep Learning, we suggested self-normalizing neural networks (SNNs) which automatica…
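    For reference, the self-normalizing networks mentioned at the (truncated) end of this abstract are built on the SELU activation (Klambauer et al., 2017):

        \[
        \mathrm{selu}(x) = \lambda
        \begin{cases}
        x, & x > 0 \\
        \alpha \left(e^{x} - 1\right), & x \le 0
        \end{cases}
        \qquad \lambda \approx 1.0507,\; \alpha \approx 1.6733
        \]
        % With these constants, activations are driven toward zero mean and
        % unit variance across layers, mitigating the vanishing gradient.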

  10. Representing System Behaviors and Expert Behaviors for Intelligent Tutoring. Technical Report No. 108.

    Science.gov (United States)

    Towne, Douglas M.; And Others

    Simulation-based software tools that can infer system behaviors from a deep model of the system have the potential for automatically building the semantic representations required to support intelligent tutoring in fault diagnosis. The Intelligent Maintenance Training System (IMTS) is such a resource, designed for use in training troubleshooting…

  11. Towards an Intelligent Possibilistic Web Information Retrieval Using Multiagent System

    Science.gov (United States)

    Elayeb, Bilel; Evrard, Fabrice; Zaghdoud, Montaceur; Ahmed, Mohamed Ben

    2009-01-01

    Purpose: The purpose of this paper is to make a scientific contribution to web information retrieval (IR). Design/methodology/approach: A multiagent system for web IR is proposed based on new technologies: Hierarchical Small-Worlds (HSW) and Possibilistic Networks (PN). This system is based on a possibilistic qualitative approach which extends the…

  12. BUILDING A WEB APPLICATION WITH LARAVEL 5

    OpenAIRE

    Nguyen, Quang

    2015-01-01

    In the modern IT industry, it is essential for web developers to know at least one battle-proven framework. Laravel is one of the most successful PHP frameworks in 2015, based on the annual framework popularity survey conducted by SitePoint (SitePoint, The Best PHP Framework for 2015: SitePoint Survey Results, cited 25.10.2015). There are several advantages and benefits of using a web framework in general and Laravel in particular. A framework is a product of collective intelligence, comprising many …

  13. 7th Asian Conference on Intelligent Information and Database Systems (ACIIDS 2015)

    CERN Document Server

    Nguyen, Ngoc; Batubara, John; New Trends in Intelligent Information and Database Systems

    2015-01-01

    Intelligent information and database systems are two closely related subfields of modern computer science which have been known for over thirty years. They focus on the integration of artificial intelligence and classic database technologies to create the class of next generation information systems. The book focuses on new trends in intelligent information and database systems and discusses topics addressed to the foundations and principles of data, information, and knowledge models, methodologies for intelligent information and database systems analysis, design, and implementation, their validation, maintenance and evolution. They cover a broad spectrum of research topics discussed both from the practical and theoretical points of view such as: intelligent information retrieval, natural language processing, semantic web, social networks, machine learning, knowledge discovery, data mining, uncertainty management and reasoning under uncertainty, intelligent optimization techniques in information systems, secu...

  14. Business Intelligence in support of GIS: the latest news from the GFOSS world

    Directory of Open Access Journals (Sweden)

    Fabio D'Ovidio

    2009-03-01

    Full Text Available GeoBI (GeoSpatial Business Intelligence), a new open source project coming from Italy, looks at Geographic Information Systems (GIS) from a different point of view: it proposes to manage the alphanumeric component of cartographic data through Business Intelligence tools. This implies multidimensional webGIS usage in order to display charts, dials and diagrams on the map and to spatially navigate OLAP cubes.

  16. Intelligent Networks Data Fusion Web-based Services for Ad-hoc Integrated WSNs-RFID

    Directory of Open Access Journals (Sweden)

    Falah Alshahrany

    2016-01-01

    Full Text Available The use of a variety of data fusion tools and techniques for big data processing poses the problem of data and information integration, called data fusion, whose objectives can differ from one application to another. The design of network data fusion systems aimed at meeting these objectives needs to take account of the necessary synergy that can result from distributed data processing within the data networks and data centres, involving increased computation and communication. This paper reports on how this processing distribution is functionally structured as configurable integrated web-based support services, in the context of an ad-hoc wireless sensor network used for sensing and tracking, to support real-time decision making based on distributed detection with complete observations. The interrelated functional and hardware RFID-WSN integration is an essential aspect of the data fusion framework, which focuses on multi-sensor collaboration as an innovative approach to extend the heterogeneity of the devices and sensor nodes of ad-hoc networks that generate a huge amount of heterogeneous soft and hard raw data. The deployment and configuration of these networks require data fusion processing that includes network and service management and enhances the performance and reliability of network data fusion support systems, providing intelligent capabilities for real-time access control and fire detection.

  17. A Comparative Survey of Lotka and Pao’s Laws Conformity with the Number of Researchers and Their Articles in Computer Science and Artificial Intelligence Fields in Web of Science (1986-2009)

    Directory of Open Access Journals (Sweden)

    Farideh Osareh

    2011-10-01

    Full Text Available The purpose of this research was to examine the validity of Lotka's and Pao's laws against the authorship distribution of the "Computer Science" and "Artificial Intelligence" fields using the Web of Science (WoS) during 1986 to 2009, and to compare the results. The study used citation analysis methods, which are scientometrics techniques. The research sample includes all articles in the computer science and artificial intelligence fields indexed in the databases accessible via Web of Science during 1986-2009; these were stored in 500-record files and added to the "ISI.exe" software for analysis. The required output of this software was then saved in Excel. There were 19150 articles in the computer science field (by 45713 authors) and 958 articles in the artificial intelligence field (by 2487 authors). For final counting and analysis, the data were converted to Excel spreadsheets. Lotka's and Pao's laws were tested using Lotka's formula x^n · y = c (with n = 2 for Lotka's law); for testing Pao's law, the values of the exponent n and the constant c were computed and Kolmogorov-Smirnov goodness-of-fit tests were applied. The results suggested that the author productivity distribution predicted by "Lotka's generalized inverse square law" was not applicable to computer science and artificial intelligence, but Pao's law was applicable to these subject areas. Both the literature survey and the original examination of Lotka's and Pao's laws showed that several aspects should be considered. The main elements involved in fitting a bibliometrics method have been identified: using Lotka's or Pao's law, subject area, period of time, measurement of authors, and a criterion for assessing goodness-of-fit.
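    In the usual notation, the generalized law being tested and Pao's least-squares estimate of its exponent are:

        \[
        y_x = \frac{c}{x^{n}}
        \]
        \[
        n = \frac{N \sum XY - \sum X \sum Y}{N \sum X^2 - \left(\sum X\right)^2},
        \qquad X = \log x,\; Y = \log y
        \]
        % y_x is the number of authors with x publications; Lotka's original
        % law fixes n = 2, while Pao estimates n and c from the data and
        % checks the fit with a Kolmogorov-Smirnov test.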

  18. A novel AIDS/HIV intelligent medical consulting system based on expert systems.

    Science.gov (United States)

    Ebrahimi, Alireza Pour; Toloui Ashlaghi, Abbas; Mahdavy Rad, Maryam

    2013-01-01

    The purpose of this paper is to propose a novel intelligent model for AIDS/HIV data based on an expert system, and to use it for developing an intelligent medical consulting system for AIDS/HIV. In this descriptive research, 752 frequently asked questions (FAQs) about AIDS/HIV were gathered from numerous websites about this disease. To perform the data mining and extract the intelligent model, the six stages of the CRISP-DM method were completed for the FAQs: business understanding, data understanding, data preparation, modelling, evaluation and deployment. The C5.0 tree classification algorithm was used for modelling. Also, the Rational Unified Process (RUP) was used to develop the web-based medical consulting software, through its stages of inception, elaboration, construction and transition. The intelligent model developed has been used in the infrastructure of the software: based on the client's inquiry and keywords, related FAQs are displayed to the client according to rank. FAQs' ranks are gradually determined as clients read them. Based on the displayed FAQs, test and entertainment links are also displayed. The accuracy of the AIDS/HIV intelligent web-based medical consulting system is estimated to be 78.76%. AIDS/HIV medical consulting systems have thus been developed on an intelligent infrastructure. Being equipped with an intelligent model, providing consulting services on systematic textual data and providing side services based on the client's activities make the implemented system unique. The research has been approved by the Iranian Ministry of Health and Medical Education for being practical.
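    A compact stand-in for the FAQ-classification idea in Python (the study uses C5.0; scikit-learn's CART-based DecisionTreeClassifier is substituted here as an accessible alternative, and the toy questions and topic labels are invented).

        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.tree import DecisionTreeClassifier

        faqs = ["how is hiv transmitted", "what are aids symptoms",
                "where to get an hiv test", "how to prevent hiv infection"]
        topics = ["transmission", "symptoms", "testing", "prevention"]

        vec = TfidfVectorizer()
        X = vec.fit_transform(faqs)                      # text -> tf-idf features
        clf = DecisionTreeClassifier(random_state=0).fit(X, topics)

        query = vec.transform(["hiv test locations"])
        print(clf.predict(query))  # e.g. ['testing']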

  19. Mutual intelligibility between closely related languages in Europe.

    NARCIS (Netherlands)

    Gooskens, Charlotte; van Heuven, Vincent; Golubovic, Jelena; Schüppert, Anja; Swarte, Femke; Voigt, Stefanie

    2018-01-01

    By means of a large-scale web-based investigation, we established the degree of mutual intelligibility of 16 closely related spoken languages within the Germanic, Slavic and Romance language families in Europe. We first present the results of a selection of 1833 listeners representing the mutual

  20. Towards deep learning with segregated dendrites.

    Science.gov (United States)

    Guerguiev, Jordan; Lillicrap, Timothy P; Richards, Blake A

    2017-12-05

    Deep learning has led to significant advances in artificial intelligence, in part, by adopting strategies motivated by neurophysiology. However, it is unclear whether deep learning could occur in the real brain. Here, we show that a deep learning algorithm that utilizes multi-compartment neurons might help us to understand how the neocortex optimizes cost functions. Like neocortical pyramidal neurons, neurons in our model receive sensory information and higher-order feedback in electrotonically segregated compartments. Thanks to this segregation, neurons in different layers of the network can coordinate synaptic weight updates. As a result, the network learns to categorize images better than a single layer network. Furthermore, we show that our algorithm takes advantage of multilayer architectures to identify useful higher-order representations-the hallmark of deep learning. This work demonstrates that deep learning can be achieved using segregated dendritic compartments, which may help to explain the morphology of neocortical pyramidal neurons.

  1. Nonvolatile Memory Materials for Neuromorphic Intelligent Machines.

    Science.gov (United States)

    Jeong, Doo Seok; Hwang, Cheol Seong

    2018-04-18

    Recent progress in deep learning extends the capability of artificial intelligence to various practical tasks, making the deep neural network (DNN) an extremely versatile hypothesis. While such DNN is virtually built on contemporary data centers of the von Neumann architecture, physical (in part) DNN of non-von Neumann architecture, also known as neuromorphic computing, can remarkably improve learning and inference efficiency. Particularly, resistance-based nonvolatile random access memory (NVRAM) highlights its handy and efficient application to the multiply-accumulate (MAC) operation in an analog manner. Here, an overview is given of the available types of resistance-based NVRAMs and their technological maturity from the material- and device-points of view. Examples within the strategy are subsequently addressed in comparison with their benchmarks (virtual DNN in deep learning). A spiking neural network (SNN) is another type of neural network that is more biologically plausible than the DNN. The successful incorporation of resistance-based NVRAM in SNN-based neuromorphic computing offers an efficient solution to the MAC operation and spike timing-based learning in nature. This strategy is exemplified from a material perspective. Intelligent machines are categorized according to their architecture and learning type. Also, the functionality and usefulness of NVRAM-based neuromorphic computing are addressed. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
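
    The analog MAC operation highlighted above reduces to a matrix-vector product: Ohm's law performs the per-cell multiplications and Kirchhoff's current law accumulates them along each output line. A toy numerical sketch, with illustrative values not taken from the record:

```python
# Crossbar MAC: conductances act as weights, applied voltages as inputs,
# and the summed line currents as the weighted-sum outputs (W @ x).
import numpy as np

G = np.array([[1.0, 0.2, 0.5],    # cell conductances on two output lines
              [0.3, 0.8, 0.1]])
v = np.array([0.10, 0.20, 0.05])  # input voltages on three input lines

i_out = G @ v                     # currents = analog multiply-accumulate
print(i_out)                      # the crossbar's dense-layer equivalent
```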

  2. Competitive Intelligence on the Internet-Going for the Gold.

    Science.gov (United States)

    Kassler, Helene

    2000-01-01

    Discussion of competitive intelligence (CI) focuses on recent Web sites and several search techniques that provide valuable CI information. Highlights include links that display business relationships; information from vendors; general business sites; search engine strategies; local business newspapers; job postings; patent and trademark…

  3. DEEPWATER AND NEARSHORE FOOD WEB CHARACTERIZATIONS IN LAKE SUPERIOR

    Science.gov (United States)

    Due to the difficulty associated with sampling deep aquatic systems, food web relationships among deepwater fauna are often poorly known. We are characterizing nearshore versus offshore habitats in the Great Lakes and investigating food web linkages among profundal, pelagic, and ...

  4. Digging deeper on "deep" learning: A computational ecology approach.

    Science.gov (United States)

    Buscema, Massimo; Sacco, Pier Luigi

    2017-01-01

    We propose an alternative approach to "deep" learning that is based on computational ecologies of structurally diverse artificial neural networks, and on dynamic associative memory responses to stimuli. Rather than focusing on massive computation of many different examples of a single situation, we opt for model-based learning and adaptive flexibility. Cross-fertilization of learning processes across multiple domains is the fundamental feature of human intelligence that must inform "new" artificial intelligence.

  5. Intelligent Shimming for Deep Drawing Processes

    DEFF Research Database (Denmark)

    Tommerup, Søren; Endelt, Benny Ørtoft; Danckert, Joachim

    2011-01-01

    cavities the blank-holder force distribution can be controlled during the punch stroke. By means of a sequence of numerical simulations, abrasive wear is imposed on the deep drawing of a rectangular cup. The abrasive wear is modelled by changing the tool surface geometry using an algorithm based...... on the sliding energy density. As the tool surfaces are changed, the material draw-in is significantly altered when using conventional open-loop control of the blank-holder force. A feed-back controller is presented which is capable of reducing the draw-in difference to a certain degree. Further a learning...

  6. State-of-the-Art Mobile Intelligence: Enabling Robots to Move Like Humans by Estimating Mobility with Artificial Intelligence

    Directory of Open Access Journals (Sweden)

    Xue-Bo Jin

    2018-03-01

    Full Text Available Mobility is a significant robotic task. It is the most important function when robotics is applied to domains such as autonomous cars, home service robots, and autonomous underwater vehicles. Despite extensive research on this topic, robots still suffer from difficulties when moving in complex environments, especially in practical applications. Therefore, the ability to have enough intelligence while moving is a key issue for the success of robots. Researchers have proposed a variety of methods and algorithms, including navigation and tracking. To help readers swiftly understand the recent advances in methodology and algorithms for robot movement, we present this survey, which provides a detailed review of the existing methods of navigation and tracking. In particular, this survey features a relation-based architecture that enables readers to easily grasp the key points of mobile intelligence. We first outline the key problems in robot systems and point out the relationship among robotics, navigation, and tracking. We then illustrate navigation using different sensors and the fusion methods and detail the state estimation and tracking models for target maneuvering. Finally, we address several issues of deep learning as well as the mobile intelligence of robots as suggested future research topics. The contributions of this survey are threefold. First, we review the literature of navigation according to the applied sensors and fusion methods. Second, we detail the models for target maneuvering and the existing estimation-based tracking, such as the Kalman filter and its further developed forms, according to their model-construction mechanisms: linear, nonlinear, and non-Gaussian white noise. Third, we illustrate the artificial intelligence approach, especially deep learning methods, and discuss its combination with the estimation method.
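
    Among the estimation-based trackers the survey reviews, the Kalman filter is the canonical linear-Gaussian case. A minimal constant-velocity sketch, with illustrative noise settings and measurements:

```python
# 1D constant-velocity Kalman filter: state = [position, velocity].
import numpy as np

dt = 1.0
F = np.array([[1.0, dt], [0.0, 1.0]])   # state transition
H = np.array([[1.0, 0.0]])              # only position is measured
Q = 0.01 * np.eye(2)                    # process noise covariance
R = np.array([[0.5]])                   # measurement noise covariance

x = np.array([[0.0], [1.0]])            # initial state estimate
P = np.eye(2)                           # initial estimate covariance

for z in [1.1, 2.0, 2.9, 4.2]:          # noisy position measurements
    x = F @ x                           # predict
    P = F @ P @ F.T + Q
    y = np.array([[z]]) - H @ x         # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)      # Kalman gain
    x = x + K @ y                       # update
    P = (np.eye(2) - K @ H) @ P

print(x.ravel())                        # filtered [position, velocity]
```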

  7. Towards web documents quality assessment for digital humanities scholars

    NARCIS (Netherlands)

    Ceolin, D.; Noordegraaf, Julia; Aroyo, L.M.; van Son, C.M.; Nejdl, Wolfgang; Hall, Wendy; Parigi, Paolo; Staab, Steffen

    2016-01-01

    We present a framework for assessing the quality of Web documents, and a baseline of three quality dimensions: trustworthiness, objectivity and basic scholarly quality. Assessing Web document quality is a "deep data" problem necessitating approaches to handle both data size and complexity.

  8. Training teachers to observation: an approach through multiple intelligences theory

    Directory of Open Access Journals (Sweden)

    Nicolini, P.

    2010-11-01

    Full Text Available Observation is a daily practice in scholastic and educational contexts, but it needs to develop into a professional competence in order to be helpful. In fact, to design an educative and didactic plan and to provide useful tools, activities and tasks to their students, teachers and educators need to collect information about learners. For these reasons we will build a Web-Observation (Web-Ob) application, a tool able to support good practices in observation. In particular, the Web-Ob can provide Multiple Intelligences Theory as a framework through which children's behaviors and attitudes can be observed, assessed and evaluated.

  9. A Web API ecosystem through feature-based reuse

    NARCIS (Netherlands)

    Verborgh, Ruben; Dumontier, Michel

    2016-01-01

    The current Web API landscape does not scale well: every API requires its own hardcoded clients in an unusually short-lived, tightly coupled relationship of highly subjective quality. This directly leads to inflated development costs, and prevents the design of a more intelligent generation of

  10. Development of intelligent semantic search system for rubber research data in Thailand

    Science.gov (United States)

    Kaewboonma, Nattapong; Panawong, Jirapong; Pianhanuruk, Ekkawit; Buranarach, Marut

    2017-10-01

    The rubber production of Thailand increased not only through strong demand from the world market, but was also stimulated strongly by the replanting program of the Thai Government from 1961 onwards. With the continuous growth of rubber research data volume on the Web, the search for information has become a challenging task. Ontologies are used to improve the accuracy of information retrieval from the web by incorporating a degree of semantic analysis during the search. In this context, we propose an intelligent semantic search system for rubber research data in Thailand. The research methods included 1) analyzing domain knowledge, 2) developing ontologies, and 3) developing an intelligent semantic search system to curate research data in trusted digital repositories so that they may be shared among the wider Thailand rubber research community.

  11. The new community rules marketing on the social web

    CERN Document Server

    Weinberg, Tamar

    2009-01-01

    Blogs, networking sites, and other examples of the social web provide businesses with a largely untapped marketing channel for products and services. But how do you take advantage of them? With The New Community Rules, you'll understand how social web technologies work, and learn the most practical and effective ways to reach people who frequent these sites. Written by an expert in social media and viral marketing, this book cuts through the hype and jargon to give you intelligent advice and strategies for positioning your business on the social web, with case studies that show how other c

  12. Onto-Agents-Enabling Intelligent Agents on the Web

    Science.gov (United States)

    2005-05-01

    Manual annotation is tedious, and often done poorly. Even within the funded DAML project, fewer pages were annotated than was hoped. In eCommerce, there...been overcome, congratulations! The DAML project was initiated at the birth of the semantic web. It contributed greatly to define a new research

  13. Intelligent networks recent approaches and applications in medical systems

    CERN Document Server

    Ahamed, Syed V

    2013-01-01

    This textbook offers an insightful study of the intelligent Internet-driven revolutionary and fundamental forces at work in society. Readers will have access to tools and techniques to mentor and monitor these forces rather than be driven by changes in Internet technology and flow of money. These submerged social and human forces form a powerful synergistic foursome web of (a) processor technology, (b) evolving wireless networks of the next generation, (c) the intelligent Internet, and (d) the motivation that drives individuals and corporations. In unison, the technological forces can tear

  14. Mutual intelligibility between closely related languages in Europe

    NARCIS (Netherlands)

    Gooskens, C.; Heuven, van V.J.J.P.; Golubović, J.; Schüppert, A.; Swarte, F.; Voigt, S.

    2017-01-01

    By means of a large-scale web-based investigation, we established the degree of mutual intelligibility of 16 closely related spoken languages within the Germanic, Slavic and Romance language families in Europe. We first present the results of a selection of 1833 listeners representing the mutual

  15. DESIGN OF A WEB SEMI-INTELLIGENT METADATA SEARCH MODEL APPLIED IN DATA WAREHOUSING SYSTEMS

    Directory of Open Access Journals (Sweden)

    Enrique Luna Ramírez

    2008-12-01

    Full Text Available In this paper, the design of a Web metadata search model with semi-intelligent features is proposed. The search model is oriented to retrieving the metadata associated with a data warehouse in a fast, flexible and reliable way. Our proposal includes a set of distinctive functionalities, which consist of the temporary storage of frequently used metadata in an exclusive store, separate from the global data warehouse metadata store, and of the use of control processes to retrieve information from both stores through aliases of concepts.
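
    A toy sketch of the two-store lookup with concept aliases that the model describes; the store contents, alias table and promotion policy are hypothetical:

```python
# Frequently used metadata live in an exclusive store; misses fall back
# to the global data warehouse metadata store and are then promoted.
GLOBAL_STORE = {
    "customer_dim": {"table": "DW_CUSTOMER", "grain": "one row per customer"},
}
FREQUENT_STORE = {}                       # exclusive store for hot metadata
ALIASES = {"client": "customer_dim", "buyer": "customer_dim"}

def lookup(concept):
    key = ALIASES.get(concept, concept)   # resolve aliases of concepts
    if key in FREQUENT_STORE:             # fast path
        return FREQUENT_STORE[key]
    meta = GLOBAL_STORE.get(key)
    if meta is not None:
        FREQUENT_STORE[key] = meta        # promote to the frequent store
    return meta

print(lookup("client"))  # served from the global store, then cached
print(lookup("buyer"))   # now answered from the frequent store
```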

  16. 8th Asian Conference on Intelligent Information and Database Systems

    CERN Document Server

    Madeyski, Lech; Nguyen, Ngoc

    2016-01-01

    The objective of this book is to contribute to the development of the intelligent information and database systems with the essentials of current knowledge, experience and know-how. The book contains a selection of 40 chapters based on original research presented as posters during the 8th Asian Conference on Intelligent Information and Database Systems (ACIIDS 2016) held on 14–16 March 2016 in Da Nang, Vietnam. The papers to some extent reflect the achievements of scientific teams from 17 countries in five continents. The volume is divided into six parts: (a) Computational Intelligence in Data Mining and Machine Learning, (b) Ontologies, Social Networks and Recommendation Systems, (c) Web Services, Cloud Computing, Security and Intelligent Internet Systems, (d) Knowledge Management and Language Processing, (e) Image, Video, Motion Analysis and Recognition, and (f) Advanced Computing Applications and Technologies. The book is an excellent resource for researchers, those working in artificial intelligence, mu...

  17. DeepBase: annotation and discovery of microRNAs and other noncoding RNAs from deep-sequencing data.

    Science.gov (United States)

    Yang, Jian-Hua; Qu, Liang-Hu

    2012-01-01

    Recent advances in high-throughput deep-sequencing technology have produced large numbers of short and long RNA sequences and enabled the detection and profiling of known and novel microRNAs (miRNAs) and other noncoding RNAs (ncRNAs) at unprecedented sensitivity and depth. In this chapter, we describe the use of deepBase, a database that we have developed to integrate all public deep-sequencing data and to facilitate the comprehensive annotation and discovery of miRNAs and other ncRNAs from these data. deepBase provides an integrative, interactive, and versatile web graphical interface to evaluate miRBase-annotated miRNA genes and other known ncRNAs, explore the expression patterns of miRNAs and other ncRNAs, and discover novel miRNAs and other ncRNAs from deep-sequencing data. deepBase also provides a deepView genome browser to comparatively analyze these data at multiple levels. deepBase is available at http://deepbase.sysu.edu.cn/.

  18. Research on intelligent machine self-perception method based on LSTM

    Science.gov (United States)

    Wang, Qiang; Cheng, Tao

    2018-05-01

    In this paper, we exploit the advantages of LSTM in extracting features from high-dimensional, complex nonlinear data and apply it to the autonomous perception of intelligent machines. Compared with a traditional multi-layer neural network, this model has memory and can handle time-series information of any length. Since the multi-physical-domain signals of processing machines have a temporal structure, with contextual relationships between successive states, using this deep learning method to realize the self-perception of intelligent processing machines offers strong versatility and adaptability. The experimental results show that the proposed method clearly improves sensing accuracy under the various working conditions of the intelligent machine, and that the algorithm can well support an intelligent processing machine in realizing self-perception.
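
    A minimal sketch of such a perception model, assuming PyTorch; the channel count, window length and number of machine states are hypothetical:

```python
# An LSTM maps a multi-channel machine-signal window to a condition class.
import torch
import torch.nn as nn

class StatePerceiver(nn.Module):
    def __init__(self, n_channels=8, hidden=64, n_states=4):
        super().__init__()
        self.lstm = nn.LSTM(n_channels, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_states)

    def forward(self, x):            # x: (batch, time, channels)
        _, (h, _) = self.lstm(x)     # h: final hidden state
        return self.head(h[-1])      # logits over machine states

model = StatePerceiver()
window = torch.randn(2, 200, 8)      # two 200-step multi-sensor windows
print(model(window).shape)           # -> torch.Size([2, 4])
```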

  19. What makes you clever the puzzle of intelligence

    CERN Document Server

    Partridge, Derek

    2014-01-01

    From Black Holes and Big Bangs to the Higgs boson and the infinitesimal building blocks of all matter, modern science has been spectacularly successful, with one glaring exception: intelligence. Intelligence remains one of the greatest mysteries in science. How do you chat so effortlessly? How do you remember, and why do you forget? From a basis of ten maxims, What Makes You Clever explains the difficulties as well as the persuasive and persistent over-estimations of progress in Artificial Intelligence. Computers have transformed our lives, and will continue to do so for many years to come. But ever since the Turing Test was proposed in 1950, and up to IBM's Deep Blue computer that won the second six-game match against world champion Garry Kasparov, the science of artificial intelligence has struggled to make progress. The reader's expertise is engaged to probe human language, machine learning, neural computing, holistic systems and emergent phenomena. What Makes You Clever reveals the difficulties that s...

  20. Intelligent System for Data Tracking in Image Editing Company

    Directory of Open Access Journals (Sweden)

    Kimlong Ngin

    2017-11-01

    Full Text Available The success of data transactions in a company largely depends on the intelligence system used in its database and application system. The complex and heterogeneous data in the log file make it difficult for users to manage data effectively. Therefore, this paper creates an application system that can manage data from the log file. A sample was collected from an image editing company in Cambodia by interviewing five customers and seven operators, who worked on the data files for 300 images. This paper reports two results: first, an agent script was used for retrieving data from the log file, classifying the data, and inserting them into a database; and second, a web interface was used by users to view the results. The intelligence capabilities of our application, together with a friendly web-based and window-based experience, allow users to easily acquire, manage, and access the data in an image editing company.

  1. Meaning on the web: Evolution vs intelligent design?

    NARCIS (Netherlands)

    Brachman, Ron; Connolly, Dan; Khare, Rohit; Smadja, Frank; Van Harmelen, Frank

    2006-01-01

    It is a truism that as the Web grows in size and scope, it becomes harder to find what we want, to identify like-minded people and communities, to find the best ads to offer, and to have applications work together smoothly. Services don't interoperate; queries yield long lists of results, most of

  2. Using Deep Learning Techniques to Forecast Environmental Consumption Level

    Directory of Open Access Journals (Sweden)

    Donghyun Lee

    2017-10-01

    Full Text Available Artificial intelligence is a promising futuristic concept in the field of science and technology, and is widely used in new industries. Deep-learning technology leads to performance enhancement and generalization of artificial intelligence technology. The global leader in the field of information technology has declared its intention to utilize deep-learning technology to solve environmental problems such as climate change, but few environmental applications have so far been developed. This study uses deep-learning technologies in the environmental field to predict the status of pro-environmental consumption. We predicted the pro-environmental consumption index based on Google search query data, using a recurrent neural network (RNN) model. To verify the accuracy of the index, we compared the prediction accuracy of the RNN model with that of ordinary least squares and artificial neural network models. The RNN model predicts the pro-environmental consumption index better than any other model. We expect the RNN model to perform still better in a big data environment, because deep-learning technologies become increasingly sophisticated as the volume of data grows. Moreover, the framework of this study could be useful in environmental forecasting to prevent damage caused by climate change.
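
    A hedged sketch of the forecasting setup, assuming PyTorch; the series below are random placeholders, not the paper's Google search-query data:

```python
# A plain RNN regresses next month's consumption index from a window of
# monthly search-query volumes (placeholder data, mean-squared-error loss).
import torch
import torch.nn as nn

rnn = nn.RNN(input_size=1, hidden_size=16, batch_first=True)
head = nn.Linear(16, 1)
opt = torch.optim.Adam(list(rnn.parameters()) + list(head.parameters()), lr=1e-2)

x = torch.randn(32, 12, 1)        # 32 samples of 12 months of query volume
y = torch.randn(32, 1)            # next-month index (placeholder targets)

for _ in range(100):
    out, _ = rnn(x)
    pred = head(out[:, -1])       # regress from the last time step
    loss = nn.functional.mse_loss(pred, y)
    opt.zero_grad()
    loss.backward()
    opt.step()
```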

  3. Deep imitation learning for 3D navigation tasks.

    Science.gov (United States)

    Hussein, Ahmed; Elyan, Eyad; Gaber, Mohamed Medhat; Jayne, Chrisina

    2018-01-01

    Deep learning techniques have shown success in learning from raw high-dimensional data in various applications. While deep reinforcement learning is recently gaining popularity as a method to train intelligent agents, utilizing deep learning in imitation learning has been scarcely explored. Imitation learning can be an efficient method to teach intelligent agents by providing a set of demonstrations to learn from. However, generalizing to situations that are not represented in the demonstrations can be challenging, especially in 3D environments. In this paper, we propose a deep imitation learning method to learn navigation tasks from demonstrations in a 3D environment. The supervised policy is refined using active learning in order to generalize to unseen situations. This approach is compared to two popular deep reinforcement learning techniques: deep Q-networks (DQN) and asynchronous advantage actor-critic (A3C). The proposed method as well as the reinforcement learning methods employ deep convolutional neural networks and learn directly from raw visual input. Methods for combining learning from demonstrations and experience are also investigated. This combination aims to join the generalization ability of learning by experience with the efficiency of learning by imitation. The proposed methods are evaluated on 4 navigation tasks in a 3D simulated environment. Navigation tasks are a typical problem that is relevant to many real applications. They pose the challenge of requiring demonstrations of long trajectories to reach the target and only providing delayed rewards (usually terminal) to the agent. The experiments show that the proposed method can successfully learn navigation tasks from raw visual input while learning from experience methods fail to learn an effective policy. Moreover, it is shown that active learning can significantly improve the performance of the initially learned policy using a small number of active samples.
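
    The behavioural-cloning core of such a method can be sketched as follows, assuming PyTorch; the frame size, four-action set and network shape are illustrative, not the paper's architecture:

```python
# Supervised policy learning: a small CNN maps raw frames to the actions
# demonstrated at those frames (one gradient step shown).
import torch
import torch.nn as nn

policy = nn.Sequential(                        # raw pixels -> action logits
    nn.Conv2d(3, 16, 8, stride=4), nn.ReLU(),  # 84x84 -> 20x20
    nn.Conv2d(16, 32, 4, stride=2), nn.ReLU(), # 20x20 -> 9x9
    nn.Flatten(),
    nn.Linear(32 * 9 * 9, 4),                  # 4 discrete moves (assumed)
)
opt = torch.optim.Adam(policy.parameters(), lr=3e-4)

frames = torch.randn(64, 3, 84, 84)            # demonstration frames
actions = torch.randint(0, 4, (64,))           # demonstrated actions

loss = nn.functional.cross_entropy(policy(frames), actions)
opt.zero_grad()
loss.backward()
opt.step()
```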

  4. Mining social media and web searches for disease detection.

    Science.gov (United States)

    Yang, Y Tony; Horneffer, Michael; DiLisio, Nicole

    2013-04-28

    Web-based social media is increasingly being used across different settings in the health care industry. The increased frequency in the use of the Internet via computer or mobile devices provides an opportunity for social media to be the medium through which people can be provided with valuable health information quickly and directly. While traditional methods of detection relied predominately on hierarchical or bureaucratic lines of communication, these often failed to yield timely and accurate epidemiological intelligence. New web-based platforms promise increased opportunities for a more timely and accurate spreading of information and analysis. This article aims to provide an overview and discussion of the availability of timely and accurate information. It is especially useful for the rapid identification of an outbreak of an infectious disease that is necessary to promptly and effectively develop public health responses. These web-based platforms include search queries, data mining of web and social media, process and analysis of blogs containing epidemic key words, text mining, and geographical information system data analyses. These new sources of analysis and information are intended to complement traditional sources of epidemic intelligence. Despite the attractiveness of these new approaches, further study is needed to determine the accuracy of blogger statements, as increases in public participation may not necessarily mean the information provided is more accurate.

  5. Mining social media and web searches for disease detection

    Directory of Open Access Journals (Sweden)

    Y. Tony Yang

    2013-05-01

    Full Text Available Web-based social media is increasingly being used across different settings in the health care industry. The increased frequency in the use of the Internet via computer or mobile devices provides an opportunity for social media to be the medium through which people can be provided with valuable health information quickly and directly. While traditional methods of detection relied predominately on hierarchical or bureaucratic lines of communication, these often failed to yield timely and accurate epidemiological intelligence. New web-based platforms promise increased opportunities for a more timely and accurate spreading of information and analysis. This article aims to provide an overview and discussion of the availability of timely and accurate information. It is especially useful for the rapid identification of an outbreak of an infectious disease that is necessary to promptly and effectively develop public health responses. These web-based platforms include search queries, data mining of web and social media, process and analysis of blogs containing epidemic key words, text mining, and geographical information system data analyses. These new sources of analysis and information are intended to complement traditional sources of epidemic intelligence. Despite the attractiveness of these new approaches, further study is needed to determine the accuracy of blogger statements, as increases in public participation may not necessarily mean the information provided is more accurate.

  6. Understanding the Web from an Economic Perspective: The Evolution of Business Models and the Web

    Directory of Open Access Journals (Sweden)

    Louis Rinfret

    2014-08-01

    Full Text Available The advent of the World Wide Web is arguably amongst the most important changes that have occurred since the 1990s in the business landscape. It has fueled the rise of new industries, supported the convergence and reshaping of existing ones and enabled the development of new business models. During this time the web has evolved tremendously from a relatively static page-display tool to a massive network of user-generated content, collective intelligence, applications and hypermedia. As technical standards continue to evolve, business models catch up to the new capabilities. New ways of creating value, distributing it and profiting from it emerge more rapidly than ever. In this paper we explore how the World Wide Web and business models evolve, and we identify avenues for future research in light of the web's ever-evolving nature and its influence on business models.

  7. Harvesting All Matching Information To A Given Query From a Deep Website

    NARCIS (Netherlands)

    Khelghati, Mohammadreza; Hiemstra, Djoerd; van Keulen, Maurice; Armano, Giuliano; Bozzon, Alessandro; Giuliani, Alessandro

    In this paper, the goal is harvesting all documents matching a given (entity) query from a deep web source. The objective is to retrieve all information about for instance "Denzel Washington", "Iran Nuclear Deal", or "FC Barcelona" from data hidden behind web forms. Policies of web search engines

  8. From academia to industry: The story of Google DeepMind

    OpenAIRE

    Legg, Shane

    2014-01-01

    Shane Legg left academia to cofound DeepMind Technologies in 2010, along with Demis Hassabis and Mustafa Suleyman. Their vision was to bring together cutting edge machine learning and systems neuroscience in order to create artificial agents with general intelligence. Following investments from a number of famous technology entrepreneurs, including Peter Thiel and Elon Musk, they assembled a team of world class researchers with backgrounds in systems neuroscience, deep learning, reinforcement...

  9. Web survey methodology

    CERN Document Server

    Callegaro, Mario; Vehovar, Vasja

    2015-01-01

    Web Survey Methodology guides the reader through the past fifteen years of research in web survey methodology. It both provides practical guidance on the latest techniques for collecting valid and reliable data and offers a comprehensive overview of research issues. Core topics from preparation to questionnaire design, recruitment testing to analysis and survey software are all covered in a systematic and insightful way. The reader will be exposed to key concepts and key findings in the literature, covering measurement, non-response, adjustments, paradata, and cost issues. The book also discusses the hottest research topics in survey research today, such as internet panels, virtual interviewing, mobile surveys and the integration with passive measurements, e-social sciences, mixed modes and business intelligence. The book is intended for students, practitioners, and researchers in fields such as survey and market research, psychological research, official statistics and customer satisfaction research.

  10. Meta-Search Utilizing Evolutionary Recommendation: A Web Search Architecture Proposal

    Czech Academy of Sciences Publication Activity Database

    Húsek, Dušan; Keyhanipour, A.; Krömer, P.; Moshiri, B.; Owais, S.; Snášel, V.

    2008-01-01

    Roč. 33, - (2008), s. 189-200 ISSN 1870-4069 Institutional research plan: CEZ:AV0Z10300504 Keywords : web search * meta-search engine * intelligent re-ranking * ordered weighted averaging * Boolean search queries optimizing Subject RIV: IN - Informatics, Computer Science

  11. Design and realization of intelligent tourism service system based on voice interaction

    Science.gov (United States)

    Hu, Lei-di; Long, Yi; Qian, Cheng-yang; Zhang, Ling; Lv, Guo-nian

    2008-10-01

    Voice technology is one of the important means of improving the intelligence and humanization of tourism service systems. Combining voice technology, the paper concentrates on application needs and system composition to present an overall framework for an intelligent tourism service system consisting of a presentation layer, a Web services layer, and a tourism application service layer. On this basis, the paper further elaborates the implementation of the system and its key technologies, including intelligent voice interaction technology, seamless integration of multiple data sources, location-aware guide services technology, and tourism safety control technology. Finally, based on the situation of Nanjing tourism, a prototype of the tourism service system is realized.

  12. A COMPARATIVE ANALYSIS OF WEB INFORMATION EXTRACTION TECHNIQUES: DEEP LEARNING vs. NAÏVE BAYES vs. BACK PROPAGATION NEURAL NETWORKS IN WEB DOCUMENT EXTRACTION

    OpenAIRE

    J. Sharmila; A. Subramani

    2016-01-01

    Research on web mining is becoming more important these days because a great deal of information is managed through the web, and web utilization is expanding in an uncontrolled way. A particular framework is required for controlling such a large amount of information in the web space. Web mining is classified into three major divisions: web content mining, web usage mining and web structure mining. Tak-Lam Wong has proposed a web content mining methodolog...

  13. An Ontology-supported Approach for Automatic Chaining of Web Services in Geospatial Knowledge Discovery

    Science.gov (United States)

    di, L.; Yue, P.; Yang, W.; Yu, G.

    2006-12-01

    Recent developments in the geospatial semantic Web have shown promise for automatic discovery, access, and use of geospatial Web services to quickly and efficiently solve particular application problems. With semantic Web technology, it is highly feasible to construct intelligent geospatial knowledge systems that can provide answers to many geospatial application questions. A key challenge in constructing such an intelligent knowledge system is to automate the creation of a chain or process workflow that involves multiple services and highly diversified data and can generate the answer to a specific user question. This presentation discusses an approach for automating the composition of geospatial Web service chains by employing geospatial semantics described by geospatial ontologies. It shows how ontology-based geospatial semantics are used to enable the automatic discovery, mediation, and chaining of geospatial Web services. OWL-S is used to represent the geospatial semantics of each individual Web service, the type of service it belongs to, and the type of data it can handle. The hierarchy and classification of service types are described in the service ontology. The hierarchy and classification of data types are presented in the data ontology. For answering users' geospatial questions, an Artificial Intelligence (AI) planning algorithm is used to construct the service chain, using the service and data logics expressed in the ontologies. The chain can be expressed as a graph with nodes representing services and connection weights representing degrees of semantic matching between nodes. The graph is a visual representation of the logical geo-processing path for answering the user's question, and it can be instantiated to a physical service workflow for execution to generate the answer. A prototype system, which includes real-world geospatial applications, is implemented to demonstrate the concept and approach.
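
    The chaining step can be illustrated with a toy planner: services are typed by their input and output data types, as the ontologies would describe them, and a breadth-first search composes a chain from the available data to the requested product. The service catalogue below is hypothetical:

```python
# Breadth-first service composition over (inputs -> output) typed services.
from collections import deque

SERVICES = {                     # name: (required input types, output type)
    "reproject": ({"raw_image"}, "geo_image"),
    "classify":  ({"geo_image"}, "landcover_map"),
    "overlay":   ({"landcover_map", "roads"}, "risk_map"),
}

def plan(available, goal):
    """Return a service chain turning `available` data types into `goal`."""
    queue, seen = deque([(frozenset(available), [])]), set()
    while queue:
        have, chain = queue.popleft()
        if goal in have:
            return chain
        if have in seen:
            continue
        seen.add(have)
        for name, (needs, out) in SERVICES.items():
            if needs <= have and out not in have:
                queue.append((have | {out}, chain + [name]))
    return None                  # no chain answers the question

print(plan({"raw_image", "roads"}, "risk_map"))
# -> ['reproject', 'classify', 'overlay']
```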

  14. Inside the Web: A Look at Digital Libraries and the Invisible/Deep Web

    Science.gov (United States)

    Su, Mila C.

    2009-01-01

    The evolution of the Internet and the World Wide Web continually exceeds expectations with the "swift pace" of technological innovations. Information is added, and just as quickly becomes outdated. Researchers have found that digital materials can provide access to primary source materials and connect the researcher to institutions…

  15. Head pose estimation algorithm based on deep learning

    Science.gov (United States)

    Cao, Yuanming; Liu, Yijun

    2017-05-01

    Head pose estimation has been widely used in the fields of artificial intelligence, pattern recognition and intelligent human-computer interaction. A good head pose estimation algorithm should deal robustly with light, noise, identity, occlusion and other factors, but so far how to improve the accuracy and robustness of pose estimation remains a major challenge in the field of computer vision. A method based on deep learning for pose estimation is presented. Deep learning has a strong learning ability: it can extract high-level features from the input image through a series of non-linear operations and then classify the image using the extracted features. Such features differ greatly across poses while remaining robust to light, identity, occlusion and other factors. The proposed head pose estimation is evaluated on the CAS-PEAL data set. Experimental results show that this method is effective in improving the accuracy of pose estimation.

  16. Deep learning for visual understanding

    NARCIS (Netherlands)

    Guo, Y.

    2017-01-01

    With the dramatic growth of the image data on the web, there is an increasing demand of the algorithms capable of understanding the visual information automatically. Deep learning, served as one of the most significant breakthroughs, has brought revolutionary success in diverse visual applications,

  17. Development and application of deep convolutional neural network in target detection

    Science.gov (United States)

    Jiang, Xiaowei; Wang, Chunping; Fu, Qiang

    2018-04-01

    With the development of big data and algorithms, deep convolutional neural networks with more hidden layers have more powerful feature learning and feature expression abilities than traditional machine learning methods, enabling artificial intelligence to surpass human-level performance in many fields. This paper first reviews the development and application of deep convolutional neural networks in the field of object detection in recent years, then briefly summarizes some open problems in current research, and finally offers an outlook on the future development of deep convolutional neural networks.

  18. Radiologic diagnosis of bone tumours using Webonex, a web-based artificial intelligence program

    International Nuclear Information System (INIS)

    Rasuli, P.; Rasouli, F.; Rasouli, T.

    2001-01-01

    A knowledge-based system is a decision support system in which an expert's knowledge and reasoning can be applied to problems in bounded knowledge domains. These systems, using knowledge and inference techniques, mimic human reasoning to solve problems. Knowledge-based systems are said to be 'intelligent' because they possess massive stores of information and exhibit many attributes commonly associated with human experts performing difficult tasks and using specialized knowledge and sophisticated problem-solving strategies. Knowledge-based systems differ from conventional software such as database systems in that they are able to reason about data and draw conclusions employing heuristic rules. Heuristics embody human expertise in some knowledge domain and are sometimes characterized as the 'rules of thumb' that one acquires through practical experience and uses to solve everyday problems. Knowledge-based systems have been developed in a variety of fields, including medical disciplines. Decision support systems have been assisting clinicians in areas such as infectious disease therapy for many years. For example, these systems can help radiologists formulate and evaluate diagnostic hypotheses by recalling associations between diseases and imaging findings. Although radiologic technology relies heavily on computers, it has been slow to develop knowledge-based systems to aid in diagnosis. These systems can be valuable interactive educational tools for medical students. In 1992, we developed Bonex, a DOS-based, menu-driven expert system for the differential diagnosis of bone tumours, using PDC Prolog. It was a rule-based expert system that led the user through a menu of questions and generated a hard-copy report and a list of diagnoses with an estimate of the likelihood of each. Bonex was presented at the 1992 Annual Meeting of the Radiological Society of North America (RSNA) in Chicago. We also developed an expert system for the differential diagnosis of brain lesions

  19. Radiologic diagnosis of bone tumours using Webonex, a web-based artificial intelligence program

    Energy Technology Data Exchange (ETDEWEB)

    Rasuli, P. [Univ. of Ottawa, Dept. of Radiology, Ottawa Hospital, Ottawa, Ontario (Canada); Rasouli, F. [Research, Development and Engineering Center, PMUSA, Richmond, VA (United States); Rasouli, T. [Johns Hopkins Univ., Dept. of Cognitive Science, Baltimore, Maryland (United States)

    2001-08-01

    A knowledge-based system is a decision support system in which an expert's knowledge and reasoning can be applied to problems in bounded knowledge domains. These systems, using knowledge and inference techniques, mimic human reasoning to solve problems. Knowledge-based systems are said to be 'intelligent' because they possess massive stores of information and exhibit many attributes commonly associated with human experts performing difficult tasks and using specialized knowledge and sophisticated problem-solving strategies. Knowledge-based systems differ from conventional software such as database systems in that they are able to reason about data and draw conclusions employing heuristic rules. Heuristics embody human expertise in some knowledge domain and are sometimes characterized as the 'rules of thumb' that one acquires through practical experience and uses to solve everyday problems. Knowledge-based systems have been developed in a variety of fields, including medical disciplines. Decision support systems have been assisting clinicians in areas such as infectious disease therapy for many years. For example, these systems can help radiologists formulate and evaluate diagnostic hypotheses by recalling associations between diseases and imaging findings. Although radiologic technology relies heavily on computers, it has been slow to develop knowledge-based systems to aid in diagnosis. These systems can be valuable interactive educational tools for medical students. In 1992, we developed Bonex, a DOS-based, menu-driven expert system for the differential diagnosis of bone tumours, using PDC Prolog. It was a rule-based expert system that led the user through a menu of questions and generated a hard-copy report and a list of diagnoses with an estimate of the likelihood of each. Bonex was presented at the 1992 Annual Meeting of the Radiological Society of North America (RSNA) in Chicago. We also developed an expert system for the differential

  20. Implementation of E-Service Intelligence in the Field of Web Mining

    OpenAIRE

    PROF. MS. S. P. SHINDE,; PROF. V.P.DESHMUKH

    2011-01-01

    The World Wide Web is a popular and interactive medium to disseminate information today. The web is a huge, diverse, dynamic, widely distributed global information service centre. We are familiar with terms like e-commerce, e-governance, e-market, e-finance, e-learning, e-banking etc. These terms come under online services called e-service applications. E-services involve various types of delivery systems, advanced information technologies, methodologies and applications of online services....

  1. Infrastructural intelligence: Contemporary entanglements between neuroscience and AI.

    Science.gov (United States)

    Bruder, Johannes

    2017-01-01

    In this chapter, I reflect on contemporary entanglements between artificial intelligence and the neurosciences by tracing the development of Google's recent DeepMind algorithms back to their roots in neuroscientific studies of episodic memory and imagination. Google promotes a new form of "infrastructural intelligence," which excels by constantly reassessing its cognitive architecture in exchange with a cloud of data that surrounds it, and exhibits putatively human capacities such as intuition. I argue that such (re)alignments of biological and artificial intelligence have been enabled by a paradigmatic infrastructuralization of the brain in contemporary neuroscience. This infrastructuralization is based in methodologies that epistemically liken the brain to complex systems of an entirely different scale (i.e., global logistics) and has given rise to diverse research efforts that target the neuronal infrastructures of higher cognitive functions such as empathy and creativity. What is at stake in this process is no less than the shape of brains to come and a revised understanding of the intelligent and creative social subject. © 2017 Elsevier B.V. All rights reserved.

  2. Use of a Deep Recurrent Neural Network to Reduce Wind Noise: Effects on Judged Speech Intelligibility and Sound Quality.

    Science.gov (United States)

    Keshavarzi, Mahmoud; Goehring, Tobias; Zakis, Justin; Turner, Richard E; Moore, Brian C J

    2018-01-01

    Despite great advances in hearing-aid technology, users still experience problems with noise in windy environments. The potential benefits of using a deep recurrent neural network (RNN) for reducing wind noise were assessed. The RNN was trained using recordings of the output of the two microphones of a behind-the-ear hearing aid in response to male and female speech at various azimuths in the presence of noise produced by wind from various azimuths with a velocity of 3 m/s, using the "clean" speech as a reference. A paired-comparison procedure was used to compare all possible combinations of three conditions for subjective intelligibility and for sound quality or comfort. The conditions were unprocessed noisy speech, noisy speech processed using the RNN, and noisy speech that was high-pass filtered (which also reduced wind noise). Eighteen native English-speaking participants were tested, nine with normal hearing and nine with mild-to-moderate hearing impairment. Frequency-dependent linear amplification was provided for the latter. Processing using the RNN was significantly preferred over no processing by both subject groups for both subjective intelligibility and sound quality, although the magnitude of the preferences was small. High-pass filtering (HPF) was not significantly preferred over no processing. Although RNN was significantly preferred over HPF only for sound quality for the hearing-impaired participants, for the results as a whole, there was a preference for RNN over HPF. Overall, the results suggest that reduction of wind noise using an RNN is possible and might have beneficial effects when used in hearing aids.

  3. Use of a Deep Recurrent Neural Network to Reduce Wind Noise: Effects on Judged Speech Intelligibility and Sound Quality

    Science.gov (United States)

    Keshavarzi, Mahmoud; Goehring, Tobias; Zakis, Justin; Turner, Richard E.; Moore, Brian C. J.

    2018-01-01

    Despite great advances in hearing-aid technology, users still experience problems with noise in windy environments. The potential benefits of using a deep recurrent neural network (RNN) for reducing wind noise were assessed. The RNN was trained using recordings of the output of the two microphones of a behind-the-ear hearing aid in response to male and female speech at various azimuths in the presence of noise produced by wind from various azimuths with a velocity of 3 m/s, using the “clean” speech as a reference. A paired-comparison procedure was used to compare all possible combinations of three conditions for subjective intelligibility and for sound quality or comfort. The conditions were unprocessed noisy speech, noisy speech processed using the RNN, and noisy speech that was high-pass filtered (which also reduced wind noise). Eighteen native English-speaking participants were tested, nine with normal hearing and nine with mild-to-moderate hearing impairment. Frequency-dependent linear amplification was provided for the latter. Processing using the RNN was significantly preferred over no processing by both subject groups for both subjective intelligibility and sound quality, although the magnitude of the preferences was small. High-pass filtering (HPF) was not significantly preferred over no processing. Although RNN was significantly preferred over HPF only for sound quality for the hearing-impaired participants, for the results as a whole, there was a preference for RNN over HPF. Overall, the results suggest that reduction of wind noise using an RNN is possible and might have beneficial effects when used in hearing aids. PMID:29708061
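
    A hedged sketch of the enhancement setup both records describe, assuming PyTorch: a recurrent network estimates a per-frequency gain mask from noisy spectral frames and is trained against the clean reference. The GRU layer and feature sizes are assumptions, not the paper's exact design:

```python
# Mask-based recurrent speech enhancement (one training step shown).
import torch
import torch.nn as nn

n_bins = 257                           # e.g. magnitude bins of a 512-pt STFT
rnn = nn.GRU(n_bins, 128, batch_first=True)
mask = nn.Sequential(nn.Linear(128, n_bins), nn.Sigmoid())

noisy = torch.rand(1, 100, n_bins)     # 100 noisy magnitude frames
clean = torch.rand(1, 100, n_bins)     # time-aligned clean reference

h, _ = rnn(noisy)
enhanced = noisy * mask(h)             # apply the estimated gain mask
loss = nn.functional.mse_loss(enhanced, clean)
loss.backward()                        # gradients for an optimizer step
```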

  4. E-learning systems intelligent techniques for personalization

    CERN Document Server

    Klašnja-Milićević, Aleksandra; Ivanović, Mirjana; Budimac, Zoran; Jain, Lakhmi C

    2017-01-01

    This monograph provides a comprehensive research review of intelligent techniques for the personalisation of e-learning systems. Special emphasis is given to intelligent tutoring systems as a particular class of e-learning systems, which support and improve the learning and teaching of domain-specific knowledge. A new approach to performing effective personalization based on Semantic Web technologies, realized in a tutoring system, is presented. This approach incorporates a recommender system based on collaborative tagging techniques that adapts to the interests and knowledge levels of students. These innovations are important contributions of this monograph. Theoretical models and techniques are illustrated on a real personalised tutoring system for teaching the Java programming language. The monograph is directed to students and researchers interested in e-learning and personalization techniques.

  5. Modular Logic Programming for Web Data, Inheritance and Agents

    Science.gov (United States)

    Karali, Isambo

    The Semantic Web provides a framework and a set of technologies enabling effective machine-processable information. However, most of the problems addressed in the Semantic Web were tackled by the artificial intelligence community in the past. Within this period, Logic Programming emerged as a complete framework ranging from a sound formal theory, based on Horn clauses, to a declarative description language and an operational behavior that can be executed. Logic programming and its extensions have already been used in various approaches in the Semantic Web or the traditional Web context. In this work, we investigate the use of Modular Logic Programming, i.e. Logic Programming extended with modules, to address issues of the Semantic Web ranging from the ontology layer to reasoning and agents. These techniques provide a uniform framework ranging from the data layer to the higher layers of logic, avoiding the problem of incompatibilities between technologies related to different Semantic Web layers. What is more, it can operate directly on top of existing World Wide Web sources.

  6. Designing Adaptive Web Applications

    DEFF Research Database (Denmark)

    Dolog, Peter

    2008-01-01

    Learning system to study a discipline. In business to business interaction, different requirements and parameters of exchanged business requests might be served by different services from third parties. Such applications require certain intelligence and a slightly different approach to design. Adaptive web......The unique characteristic of web applications is that they are supposed to be used by a much bigger and more diverse set of users and stakeholders. An example application area is e-Learning or business to business interaction. In an eLearning environment, various users with different backgrounds use the e......-based applications aim to leave some of their features at the design stage in the form of variables which are dependent on several criteria. The resolution of the variables is called adaptation and can be seen from two perspectives: adaptation by humans to the changed requirements of stakeholders, and dynamic system...

  7. Artificial intelligence for analyzing orthopedic trauma radiographs.

    Science.gov (United States)

    Olczak, Jakub; Fahlberg, Niklas; Maki, Atsuto; Razavian, Ali Sharif; Jilert, Anthony; Stark, André; Sköldenberg, Olof; Gordon, Max

    2017-12-01

    Background and purpose - Recent advances in artificial intelligence (deep learning) have shown remarkable performance in classifying non-medical images, and the technology is believed to be the next technological revolution. So far it has never been applied in an orthopedic setting, and in this study we sought to determine the feasibility of using deep learning for skeletal radiographs. Methods - We extracted 256,000 wrist, hand, and ankle radiographs from Danderyd's Hospital and identified 4 classes: fracture, laterality, body part, and exam view. We then selected 5 openly available deep learning networks that were adapted for these images. The most accurate network was benchmarked against a gold standard for fractures. We furthermore compared the network's performance with that of 2 senior orthopedic surgeons who reviewed images at the same resolution as the network. Results - All networks exhibited an accuracy of at least 90% when identifying laterality, body part, and exam view. The final accuracy for fractures was estimated at 83% for the best performing network. The network performed similarly to the senior orthopedic surgeons when presented with images at the same resolution as the network. The Cohen's kappa between the 2 reviewers under these conditions was 0.76. Interpretation - This study supports the use of artificial intelligence for orthopedic radiographs, as it can perform at a human level. While the current implementation lacks important features that surgeons require, e.g. risk of dislocation, classifications, measurements, and combining multiple exam views, these problems have technical solutions that are waiting to be implemented for orthopedics.

  8. Emotional intelligence and affective events in nurse education: A narrative review.

    Science.gov (United States)

    Lewis, Gillian M; Neville, Christine; Ashkanasy, Neal M

    2017-06-01

    To investigate the current state of knowledge about emotional intelligence and affective events that arise during nursing students' clinical placement experiences. Narrative literature review. CINAHL, MEDLINE, PsycINFO, Scopus, Web of Science, ERIC and APAIS-Health databases published in English between 1990 and 2016. Data extraction from and constant comparative analysis of ten (10) research articles. We found four main themes: (1) emotional intelligence buffers stress; (2) emotional intelligence reduces anxiety associated with end of life care; (3) emotional intelligence promotes effective communication; and (4) emotional intelligence improves nursing performance. The articles we analysed adopted a variety of emotional intelligence models. Using the Ashkanasy and Daus "three-stream" taxonomy (Stream 1: ability models; 2: self-report; 3: mixed models), we found that Stream 2 self-report measures were the most popular followed by Stream 3 mixed model measures. None of the studies we surveyed used the Stream 1 approach. Findings nonetheless indicated that emotional intelligence was important in maintaining physical and psychological well-being. We concluded that developing emotional intelligence should be a useful adjunct to improve academic and clinical performance and to reduce the risk of emotional distress during clinical placement experiences. We call for more consistency in the use of emotional intelligence tests as a means to create an empirical evidence base in the field of nurse education. Copyright © 2017 Elsevier Ltd. All rights reserved.

  9. Structure, functioning, and cumulative stressors of Mediterranean deep-sea ecosystems

    Science.gov (United States)

    Tecchio, Samuele; Coll, Marta; Sardà, Francisco

    2015-06-01

    Environmental stressors, such as climate fluctuations, and anthropogenic stressors, such as fishing, are of major concern for the management of deep-sea ecosystems. Deep-water habitats are limited by primary productivity and are mainly dependent on the vertical input of organic matter from the surface. Global change over the latest decades is imparting variations in primary productivity levels across oceans, and thus it has an impact on the amount of organic matter landing on the deep seafloor. In addition, anthropogenic impacts are now reaching the deep ocean. The Mediterranean Sea, the largest enclosed basin on the planet, is not an exception. However, ecosystem-level studies of response to varying food input and anthropogenic stressors on deep-sea ecosystems are still scant. We present here a comparative ecological network analysis of three food webs of the deep Mediterranean Sea, with contrasting trophic structure. After modelling the flows of these food webs with the Ecopath with Ecosim approach, we compared indicators of network structure and functioning. We then developed temporal dynamic simulations varying the organic matter input to evaluate its potential effect. Results show that, following the west-to-east gradient in the Mediterranean Sea of marine snow input, organic matter recycling increases, net production decreases to negative values and trophic organisation is overall reduced. The levels of food-web activity followed the gradient of organic matter availability at the seafloor, confirming that deep-water ecosystems directly depend on marine snow and are therefore influenced by variations of energy input, such as climate-driven changes. In addition, simulations of varying marine snow arrival at the seafloor, combined with the hypothesis of a possible fishery expansion on the lower continental slope in the western basin, evidence that the trawling fishery may pose an impact which could be an order of magnitude stronger than a climate
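
    The record's food-web models are built with the Ecopath with Ecosim approach, whose mass balance for each functional group i is commonly written as the master equation below; this is the generic textbook form, not the paper's specific parameterization:

```latex
B_i \left(\frac{P}{B}\right)_i EE_i = \sum_j B_j \left(\frac{Q}{B}\right)_j DC_{ji} + Y_i + E_i + BA_i
```

    Here B is biomass, P/B the production rate, EE the ecotrophic efficiency, Q/B the consumption rate, DC_ji the fraction of prey i in predator j's diet, Y the fishery catch, E the net export, and BA the biomass accumulation.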

  10. Potential and Challenges of Web-based Collective Intelligence to Tackle Societal Problems

    Directory of Open Access Journals (Sweden)

    Birutė Pitrėnaitė-Žilėnienė

    2014-03-01

    Full Text Available Purpose – to research the conditions and challenges under which collective intelligence (hereinafter – CI) emerges when social technologies are applied to tackle societal problems. Several objectives were set in order to achieve the goal: to analyze the scientific concepts of CI and its contents; to summarize the possibilities and challenges of applying CI in large-scale online argumentation; and, following theoretical attitudes towards CI, to analyze Lithuanian praxis in the application of CI technologies in large-scale online argumentation. Methodology – the methods of document analysis and content analysis of virtual community projects were applied. Theoretical analysis enabled recognition of the CI phenomenon and the variety of interpretations of CI, as well as the preconditions and difficulties to be tackled in order to ensure effective application of CI technologies in the processes of policy design and societal problem solving. With the theoretical analysis as a base, the authors researched how the theoretical frameworks correspond to the practices of Lithuanian virtual community projects, which are oriented to the identification and analysis of relevant problems that communities are facing. Findings – scientific document analysis demonstrates the variety of possible interpretations of CI. Such interpretations depend on the researcher's attitude towards this phenomenon: some authors explain CI in a very broad sense, not including the aspects of social technologies. However, in the last decades, with the emergence of the Internet, social technologies have become a concurrent dimension of CI. The main principles of Web-based CI are geographically spread users and a big number of them. Materialization of these principles ensures the variety of elements needed for the emergence of CI. There are diverse web-based mediums where CI is being developed. However, not all of them ensure collective action, which is obligatory for CI. Researchers have analyzed

  11. Bio-AIMS Collection of Chemoinformatics Web Tools based on Molecular Graph Information and Artificial Intelligence Models.

    Science.gov (United States)

    Munteanu, Cristian R; Gonzalez-Diaz, Humberto; Garcia, Rafael; Loza, Mabel; Pazos, Alejandro

    2015-01-01

    The encoding of molecular information into molecular descriptors is the first step in in silico Chemoinformatics methods in Drug Design. Machine Learning methods offer a powerful way to find prediction models for specific biological properties of molecules. These models connect molecular structure information, such as atom connectivity (molecular graphs) or physical-chemical properties of an atom or group of atoms, to the molecular activity (Quantitative Structure-Activity Relationship, QSAR). Due to the complexity of proteins, the prediction of their activity is a complicated task and the interpretation of the models is more difficult. The current review presents a series of 11 prediction models for proteins, implemented as free Web tools on an Artificial Intelligence Model Server in Biosciences, Bio-AIMS (http://bio-aims.udc.es/TargetPred.php). Six tools predict protein activity, two models evaluate drug-protein target interactions and the other three calculate protein-protein interactions. The input information is based on the protein 3D structure for nine models, the 1D peptide amino acid sequence for three tools and drug SMILES formulas for two servers. The molecular graph descriptor-based Machine Learning models could be useful tools for in silico screening of new peptides/proteins as future drug targets for specific treatments.
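
    The record does not expose the Bio-AIMS code, but the kind of molecular-graph descriptor it refers to is easy to illustrate. The sketch below computes the classical Randić connectivity index for a hypothetical hydrogen-suppressed n-butane graph; it is a generic example, not one of the 11 Bio-AIMS models:

```python
# Illustrative molecular-graph descriptor (not a Bio-AIMS algorithm):
# the Randic connectivity index, the sum over bonds of 1/sqrt(deg(u)*deg(v)).
from math import sqrt

# Hypothetical hydrogen-suppressed graph of n-butane: C1-C2-C3-C4.
edges = [(0, 1), (1, 2), (2, 3)]

degree = {}
for u, v in edges:
    degree[u] = degree.get(u, 0) + 1
    degree[v] = degree.get(v, 0) + 1

randic = sum(1.0 / sqrt(degree[u] * degree[v]) for u, v in edges)
print(f"Randic index: {randic:.3f}")  # 1/sqrt(2) + 1/2 + 1/sqrt(2) ~= 1.914
```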

  12. Applying Adaptive Swarm Intelligence Technology with Structuration in Web-Based Collaborative Learning

    Science.gov (United States)

    Huang, Yueh-Min; Liu, Chien-Hung

    2009-01-01

    One of the key challenges in the promotion of web-based learning is the development of effective collaborative learning environments. We posit that the structuration process strongly influences the effectiveness of technology used in web-based collaborative learning activities. In this paper, we propose an ant swarm collaborative learning (ASCL)…

  13. 1st International Conference on Computational Intelligence and Informatics

    CERN Document Server

    Prasad, V; Rani, B; Udgata, Siba; Raju, K

    2017-01-01

    The book covers a variety of topics which include data mining and data warehousing, high performance computing, parallel and distributed computing, computational intelligence, soft computing, big data, cloud computing, grid computing, cognitive computing, image processing, computer networks, wireless networks, social networks, wireless sensor networks, information and network security, web security, internet of things, bioinformatics and geoinformatics. The book is a collection of the best papers submitted to the First International Conference on Computational Intelligence and Informatics (ICCII 2016) held during 28-30 May 2016 at JNTUH CEH, Hyderabad, India. It was hosted by the Department of Computer Science and Engineering, JNTUH College of Engineering in association with Division V (Education & Research) CSI, India.

  14. Authoring support in concept-based web information systems for educational applications

    NARCIS (Netherlands)

    Aroyo, L.M.; Dicheva, D.

    2004-01-01

    The increasing complexity of concept-based web information systems (WIS) and their educational applications requires more intelligent support for their authoring. We propose an ontological approach towards a common authoring framework for such systems to formally describe the overall authoring…

  15. LOG FILE ANALYSIS AND CREATION OF MORE INTELLIGENT WEB SITES

    Directory of Open Access Journals (Sweden)

    Mislav Šimunić

    2012-07-01

    Full Text Available To enable successful performance of any company or business system, both in the world and in the Republic of Croatia, among many problems relating to its operations and particularly to maximum utilization and efficiency of the Internet as a medium for running business (especially in terms of marketing), they should make the best possible use of present-day global trends and the advantages of sophisticated technologies and approaches to running a business. Bearing in mind the fact of daily increasing competition and a more demanding market, this paper offers a certain scientific and practical contribution to continuous analysis of the demand market and adaptation thereto by analyzing the log files and by retroactive effect on the web site. A log file is a carrier of numerous data and indicators that should be used in the best possible way to improve the entire business operations of a company. However, this is not always simple and easy. Web sites differ in size, purpose, and the technology used for designing them. For this very reason, the analytic frameworks should be such that they can cover any web site and at the same time leave some space for analyzing and investigating the specific characteristics of each web site and provide for its dynamics by analyzing the log file records. Those considerations were a basis for this paper.
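
    As an illustration of the kind of log-file analysis described (the log format and regular expression below are generic NCSA/Apache conventions, not taken from the paper), page-view counts can be extracted from raw access-log lines:

```python
# Minimal sketch of access-log analysis: parse NCSA/Apache-style
# "combined" log lines and count successful page views per path.
import re
from collections import Counter

LINE = re.compile(r'(?P<ip>\S+) \S+ \S+ \[(?P<ts>[^\]]+)\] '
                  r'"(?P<method>\S+) (?P<path>\S+) [^"]*" (?P<status>\d{3}) \S+')

sample = [  # invented log lines for illustration
    '203.0.113.9 - - [10/Jul/2012:10:00:01 +0200] "GET /index.html HTTP/1.1" 200 5120',
    '203.0.113.9 - - [10/Jul/2012:10:00:05 +0200] "GET /offers.html HTTP/1.1" 200 2048',
    '198.51.100.4 - - [10/Jul/2012:10:01:12 +0200] "GET /index.html HTTP/1.1" 200 5120',
]

views = Counter()
for line in sample:
    m = LINE.match(line)
    if m and m.group("status") == "200":
        views[m.group("path")] += 1

for path, n in views.most_common():
    print(path, n)   # /index.html 2, /offers.html 1
```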

  16. Towards the Development of Web-based Business intelligence Tools

    DEFF Research Database (Denmark)

    Georgiev, Lachezar; Tanev, Stoyan

    2011-01-01

    This paper focuses on using web search techniques in examining the co-creation strategies of technology driven firms. It does not focus on the co-creation results but describes the implementation of a software tool using data mining techniques to analyze the content on firms’ websites. The tool...

  17. W.E.B. DuBois's Challenge to Scientific Racism.

    Science.gov (United States)

    Taylor, Carol M.

    1981-01-01

    Proposes that a direct and authoritative challenge to the scientific racism of the late nineteenth and early twentieth centuries was urgently needed, and was one of the leading rhetorical contributions of W.E.B. DuBois. Specifically examines three issues: social Darwinism, the eugenics movement, and psychologists' measurement of intelligence.…

  18. Analysing Student Programs in the PHP Intelligent Tutoring System

    Science.gov (United States)

    Weragama, Dinesha; Reye, Jim

    2014-01-01

    Programming is a subject that many beginning students find difficult. The PHP Intelligent Tutoring System (PHP ITS) has been designed with the aim of making it easier for novices to learn the PHP language in order to develop dynamic web pages. Programming requires practice. This makes it necessary to include practical exercises in any ITS that…

  19. Starvation and recovery in the deep-sea methanotroph Methyloprofundus sedimenti

    OpenAIRE

    Tavormina, Patricia L.; Kellermann, Matthias Y.; Antony, Chakkiath Paul; Tocheva, Elitza I.; Dalleska, Nathan F.; Jensen, Ashley J.; Valentine, David L.; Hinrichs, Kai-Uwe; Jensen, Grant J.; Dubilier, Nicole; Orphan, Victoria J.

    2017-01-01

    In the deep ocean, the conversion of methane into derived carbon and energy drives the establishment of diverse faunal communities. Yet specific biological mechanisms underlying the introduction of methane-derived carbon into the food web remain poorly described, due to a lack of cultured representative deep-sea methanotrophic prokaryotes. Here, the response of the deep-sea aerobic methanotroph Methyloprofundus sedimenti to methane starvation and recovery was characterized. By combining lipid...

  20. Qualitative Evaluation of the Java Intelligent Tutoring System

    Directory of Open Access Journals (Sweden)

    Edward Sykes

    2005-10-01

    Full Text Available In an effort to support the growing trend of the Java programming language and to promote web-based personalized education, the Java Intelligent Tutoring System (JITS) was designed and developed. This tutoring system is unique in a number of ways. Most Intelligent Tutoring Systems require the teacher to author problems with corresponding solutions. JITS, on the other hand, requires the teacher to supply only the problem and the problem specification. JITS is designed to "intelligently" examine the student's submitted code and determine appropriate feedback based on a number of factors such as JITS' cognitive model of the student, the student's skill level, and problem details. JITS is intended to be used by beginner programming students in their first year of College or University. This paper discusses the important aspects of the design and development of JITS, the qualitative methods and procedures, and findings. Research was conducted at the Sheridan Institute of Technology and Advanced Learning, Ontario, Canada.

  1. FINITE ELEMENT ANALYSIS OF DEEP BEAM UNDER DIRECT AND INDIRECT LOAD

    Directory of Open Access Journals (Sweden)

    Haleem K. Hussain

    2018-05-01

    Full Text Available This research studies the effect of openings in the web of deep beams loaded directly and indirectly, and the behavior of reinforced concrete deep beams with and without web reinforcement; the opening size and shear span ratio (a/d) were kept constant. Nonlinear analysis using the finite element method with the ANSYS software (release 12.0) was used to predict the ultimate load capacity and crack propagation of reinforced concrete deep beams with openings. The adopted beam models were based on an experimental test program of reinforced concrete deep beams with and without openings, and the finite element analysis results showed good agreement with the experiments, with only a small difference in ultimate beam capacity, so the ANSYS analysis was fully capable of simulating the behavior of reinforced concrete deep beams. The mid-span deflection at ultimate applied load and the inclined cracking were highly consistent with the experimental results. The model with an opening in the shear span shows a reduction in the load-carrying capacity of the beam, and adding vertical stirrups improved the ultimate beam load capacity.

  2. The future of radiology augmented with Artificial Intelligence: A strategy for success.

    Science.gov (United States)

    Liew, Charlene

    2018-05-01

    The rapid development of Artificial Intelligence/deep learning technology and its implementation into routine clinical imaging will cause a major transformation to the practice of radiology. Strategic positioning will ensure the successful transition of radiologists into their new roles as augmented clinicians. This paper describes an overall vision on how to achieve a smooth transition through the practice of augmented radiology where radiologists-in-the-loop ensure the safe implementation of Artificial Intelligence systems. Copyright © 2018 Elsevier B.V. All rights reserved.

  3. Intelligence in Artificial Intelligence

    OpenAIRE

    Datta, Shoumen Palit Austin

    2016-01-01

    The elusive quest for intelligence in artificial intelligence prompts us to consider that instituting human-level intelligence in systems may be (still) in the realm of utopia. In about a quarter century, we have witnessed the winter of AI (1990) being transformed and transported to the zenith of tabloid fodder about AI (2015). The discussion at hand is about the elements that constitute the canonical idea of intelligence. The delivery of intelligence as a pay-per-use-service, popping out of ...

  4. Ubiquitous Computing Services Discovery and Execution Using a Novel Intelligent Web Services Algorithm

    Science.gov (United States)

    Choi, Okkyung; Han, SangYong

    2007-01-01

    Ubiquitous Computing makes it possible to determine in real time the location and situations of service requesters in a web service environment, as it enables access to computers at any time and in any place. Though research on various aspects of ubiquitous commerce is progressing at enterprises and research centers, both domestically and overseas, analysis of a customer's personal preferences based on the semantic web and rule-based services using semantics is not currently being conducted. This paper proposes a Ubiquitous Computing Services System that enables a rule-based search as well as a semantics-based search, reflecting the fact that the electronic space and the physical space can be combined into one; real-time search for web services and the construction of efficient web services thus become possible.

  5. Bringing Web 2.0 to bioinformatics.

    Science.gov (United States)

    Zhang, Zhang; Cheung, Kei-Hoi; Townsend, Jeffrey P

    2009-01-01

    Enabling deft data integration from numerous, voluminous and heterogeneous data sources is a major bioinformatic challenge. Several approaches have been proposed to address this challenge, including data warehousing and federated databasing. Yet despite the rise of these approaches, integration of data from multiple sources remains problematic and toilsome. These two approaches follow a user-to-computer communication model for data exchange, and do not facilitate a broader concept of data sharing or collaboration among users. In this report, we discuss the potential of Web 2.0 technologies to transcend this model and enhance bioinformatics research. We propose a Web 2.0-based Scientific Social Community (SSC) model for the implementation of these technologies. By establishing a social, collective and collaborative platform for data creation, sharing and integration, we promote a web services-based pipeline featuring web services for computer-to-computer data exchange as users add value. This pipeline aims to simplify data integration and creation, to realize automatic analysis, and to facilitate reuse and sharing of data. SSC can foster collaboration and harness collective intelligence to create and discover new knowledge. In addition to its research potential, we also describe its potential role as an e-learning platform in education. We discuss lessons from information technology, predict the next generation of Web (Web 3.0), and describe its potential impact on the future of bioinformatics studies.

  6. An intelligent sales assistant for configurable products

    OpenAIRE

    Molina, Martin

    2001-01-01

    Some of the recent proposals of web-based applications are oriented to provide advanced search services through virtual shops. Within this context, this paper proposes an advanced type of software application that simulates how a sales assistant dialogues with a consumer to dynamically configure a product according to particular needs. The paper presents the general knowledge model that uses artificial intelligence and knowledge-based techniques to simulate the configuration process. Finall...

  7. Artificial Intelligence is Getting Personal : A study on the Usage Motivations and Privacy Concerns of Intelligent Personal Assistants’ Users

    OpenAIRE

    Tundrea, Darius

    2017-01-01

    The present study aims to evaluate the usage motivations of Intelligent Personal Assistants, addressing at the same time various privacy issues and concerns related to this emergent technology. To fulfil the purpose of the study, I applied two different research methods. Initially, a web survey gathered 18 respondents answering 24 questions related to the presented topic. Subsequently, a focus group was organised, gathering seven respondents who shared their opinions on the ...

  8. In Pursuit of Alternatives in ELT Methodology: WebQuests

    Science.gov (United States)

    Sen, Ayfer; Neufeld, Steve

    2006-01-01

    Although the Internet has opened up a vast new source of information for university students to use and explore, many students lack the skills to find, critically evaluate and intelligently exploit web-based resources. This problem is accentuated in English-medium universities where students learn and use English as a foreign language. In these…

  9. DeepLoc: prediction of protein subcellular localization using deep learning

    DEFF Research Database (Denmark)

    Almagro Armenteros, Jose Juan; Sønderby, Casper Kaae; Sønderby, Søren Kaae

    2017-01-01

    The prediction of eukaryotic protein subcellular localization is a well-studied topic in bioinformatics due to its relevance in proteomics research. Many machine learning methods have been successfully applied in this task, but in most of them, predictions rely on annotation of homologues from...... knowledge databases. For novel proteins where no annotated homologues exist, and for predicting the effects of sequence variants, it is desirable to have methods for predicting protein properties from sequence information only. Here, we present a prediction algorithm using deep neural networks to predict...... current state-of-the-art algorithms, including those relying on homology information. The method is available as a web server at http://www.cbs.dtu.dk/services/DeepLoc . Example code is available at https://github.com/JJAlmagro/subcellular_localization . The dataset is available at http...
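
    DeepLoc itself is a deep neural network trained on large annotated datasets; purely as a toy illustration of the task setup (sequence in, location label out), the sketch below fits a bag-of-amino-acids baseline on invented sequences and labels:

```python
# Toy baseline only -- NOT the DeepLoc architecture: classify subcellular
# location from amino-acid composition. Sequences and labels are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

AA = "ACDEFGHIKLMNPQRSTVWY"

def composition(seq):
    """Fraction of each of the 20 standard amino acids in the sequence."""
    counts = np.array([seq.count(a) for a in AA], dtype=float)
    return counts / max(len(seq), 1)

train_seqs = ["MKKLLPTAA", "MEEDDSSEE", "MKRKRKAAA", "MDDEEPPSS"]  # invented
train_locs = ["membrane", "nucleus", "membrane", "nucleus"]       # invented

X = np.array([composition(s) for s in train_seqs])
clf = LogisticRegression(max_iter=1000).fit(X, train_locs)
print(clf.predict([composition("MKKLAAKRK")]))  # likely "membrane"
```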

  10. Intelligent automotive battery systems

    Science.gov (United States)

    Witehira, P.

    A single power-supply battery is incompatible with modern vehicles. A one-combination 12 cell/12 V battery, developed by Power Beat International Limited (PBIL), is described. The battery is designed to be a 'drop in' replacement for existing batteries. The cell structures, however, are designed according to load function, i.e., high-current shallow-discharge cycles and low-current deep-discharge cycles. The preferred energy discharge management logic and its integration into the power distribution network of the vehicle to provide safe, user-friendly usage is described. The system is designed to operate transparently to the vehicle user. The integrity of the volatile high-current cells is maintained by temperature-sensitive voltage control and discharge management. The deep-cycle cells can be fully utilized without affecting startability under extreme conditions. Electric energy management synchronization with engine starting will provide at least a 6% overall reduction in hydrocarbon emissions using an intelligent on-board power-supply technology developed by PBIL.

  11. tOWL: a temporal Web Ontology Language.

    Science.gov (United States)

    Milea, Viorel; Frasincar, Flavius; Kaymak, Uzay

    2012-02-01

    Through its interoperability and reasoning capabilities, the Semantic Web opens a realm of possibilities for developing intelligent systems on the Web. The Web Ontology Language (OWL) is the most expressive standard language for modeling ontologies, the cornerstone of the Semantic Web. However, up until now, no standard way of expressing time and time-dependent information in OWL has been provided. In this paper, we present a temporal extension of the very expressive fragment SHIN(D) of the OWL Description Logic language, resulting in the temporal OWL language. Through a layered approach, we introduce three extensions: 1) concrete domains, which allow the representation of restrictions using concrete domain binary predicates; 2) temporal representation, which introduces time points, relations between time points, intervals, and Allen's 13 interval relations into the language; and 3) timeslices/fluents, which implement a perdurantist view on individuals and allow for the representation of complex temporal aspects, such as process state transitions. We illustrate the expressiveness of the newly introduced language by using an example from the financial domain.
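
    Allen's 13 interval relations, which tOWL builds into the ontology layer, are simple order constraints on interval endpoints. A sketch of three of them follows (the remaining ten use the same endpoint-comparison pattern; the intervals are hypothetical process states):

```python
# Three of Allen's 13 interval relations; intervals are (start, end)
# pairs with start < end. Values below are invented.
def before(i, j):
    """i ends strictly before j starts."""
    return i[1] < j[0]

def meets(i, j):
    """i ends exactly where j starts."""
    return i[1] == j[0]

def overlaps(i, j):
    """i starts first and the two intervals partially overlap."""
    return i[0] < j[0] < i[1] < j[1]

state_a = (1, 3)   # e.g. a process state holding from t=1 to t=3
state_b = (3, 5)
print(before(state_a, state_b))    # False
print(meets(state_a, state_b))     # True
print(overlaps(state_a, (2, 6)))   # True
```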

  12. Polite Web-Based Intelligent Tutors: Can They Improve Learning in Classrooms?

    Science.gov (United States)

    McLaren, Bruce M.; DeLeeuw, Krista E.; Mayer, Richard E.

    2011-01-01

    Should an intelligent software tutor be polite, in an effort to motivate and cajole students to learn, or should it use more direct language? If it should be polite, under what conditions? In a series of studies in different contexts (e.g., lab versus classroom) with a variety of students (e.g., low prior knowledge versus high prior knowledge),…

  13. Discovery radiomics via evolutionary deep radiomic sequencer discovery for pathologically proven lung cancer detection.

    Science.gov (United States)

    Shafiee, Mohammad Javad; Chung, Audrey G; Khalvati, Farzad; Haider, Masoom A; Wong, Alexander

    2017-10-01

    While lung cancer is the second most diagnosed form of cancer in men and women, a sufficiently early diagnosis can be pivotal in patient survival rates. Imaging-based, or radiomics-driven, detection methods have been developed to aid diagnosticians, but largely rely on hand-crafted features that may not fully encapsulate the differences between cancerous and healthy tissue. Recently, the concept of discovery radiomics was introduced, where custom abstract features are discovered from readily available imaging data. We propose an evolutionary deep radiomic sequencer discovery approach based on evolutionary deep intelligence. Motivated by patient privacy concerns and the idea of operational artificial intelligence, the evolutionary deep radiomic sequencer discovery approach organically evolves increasingly more efficient deep radiomic sequencers that produce significantly more compact yet similarly descriptive radiomic sequences over multiple generations. As a result, this framework improves operational efficiency and enables diagnosis to be run locally at the radiologist's computer while maintaining detection accuracy. We evaluated the evolved deep radiomic sequencer (EDRS) discovered via the proposed evolutionary deep radiomic sequencer discovery framework against state-of-the-art radiomics-driven and discovery radiomics methods using clinical lung CT data with pathologically proven diagnostic data from the LIDC-IDRI dataset. The EDRS shows improved sensitivity (93.42%), specificity (82.39%), and diagnostic accuracy (88.78%) relative to previous radiomics approaches.
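
    The reported sensitivity, specificity and diagnostic accuracy are standard confusion-matrix quantities. A sketch of how they are computed (the counts below are invented, not the LIDC-IDRI results):

```python
# How detection metrics follow from a confusion matrix.
# The counts below are invented, not the LIDC-IDRI results.
TP, FN, TN, FP = 90, 10, 80, 20   # true/false positives/negatives

sensitivity = TP / (TP + FN)                  # true-positive rate
specificity = TN / (TN + FP)                  # true-negative rate
accuracy = (TP + TN) / (TP + TN + FP + FN)    # overall correctness

print(f"sensitivity={sensitivity:.2%}, specificity={specificity:.2%}, "
      f"accuracy={accuracy:.2%}")   # 90.00%, 80.00%, 85.00%
```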

  14. The Potential Transformative Impact of Web 2.0 Technology on the Intelligence Community

    Science.gov (United States)

    2008-12-01

    wikis, mashups and folksonomies. As the web is considered a platform, web 2.0 lacks concrete boundaries; instead, it possesses a gravitational... engagement and marketing. Folksonomy: the practice and method of collaboratively creating and managing tags to annotate and categorize content

  15. Deep learning beyond cats and dogs: recent advances in diagnosing breast cancer with deep neural networks.

    Science.gov (United States)

    Burt, Jeremy R; Torosdagli, Neslisah; Khosravan, Naji; RaviPrakash, Harish; Mortazi, Aliasghar; Tissavirasingham, Fiona; Hussein, Sarfaraz; Bagci, Ulas

    2018-04-10

    Deep learning has demonstrated tremendous revolutionary changes in the computing industry and its effects in radiology and imaging sciences have begun to dramatically change screening paradigms. Specifically, these advances have influenced the development of computer-aided detection and diagnosis (CAD) systems. These technologies have long been thought of as "second-opinion" tools for radiologists and clinicians. However, with significant improvements in deep neural networks, the diagnostic capabilities of learning algorithms are approaching levels of human expertise (radiologists, clinicians etc.), shifting the CAD paradigm from a "second opinion" tool to a more collaborative utility. This paper reviews recently developed CAD systems based on deep learning technologies for breast cancer diagnosis, explains their superiorities with respect to previously established systems, defines the methodologies behind the improved achievements including algorithmic developments, and describes remaining challenges in breast cancer screening and diagnosis. We also discuss possible future directions for new CAD models that continue to change as artificial intelligence algorithms evolve.

  16. The dynamic interplay among EFL learners’ ambiguity tolerance, adaptability, cultural intelligence, learning approach, and language achievement

    Directory of Open Access Journals (Sweden)

    Shadi Alahdadi

    2017-01-01

    Full Text Available A key objective of education is to prepare individuals to be fully-functioning learners. This entails developing the cognitive, metacognitive, motivational, cultural, and emotional competencies. The present study aimed to examine the interrelationships among adaptability, tolerance of ambiguity, cultural intelligence, learning approach, and language achievement as manifestations of the above competencies within a single model. The participants comprised one hundred eighty BA and MA Iranian university students studying English language teaching and translation. The instruments used in this study consisted of the translated versions of four questionnaires: the second language tolerance of ambiguity scale, the adaptability scale taken from the emotional intelligence inventory, the cultural intelligence (CQ) inventory, and the revised study process questionnaire measuring surface and deep learning. The results, estimated via structural equation modeling (SEM), revealed that the proposed model containing the variables under study had a good fit with the data. It was found that all the variables except adaptability directly influenced language achievement, with the deep approach having the highest impact and ambiguity tolerance the lowest. In addition, ambiguity tolerance was a positive and significant predictor of the deep approach. CQ was found to be under the influence of both ambiguity tolerance and adaptability. The findings were discussed in the light of the yielded results.

  17. National Water Model: Providing the Nation with Actionable Water Intelligence

    Science.gov (United States)

    Aggett, G. R.; Bates, B.

    2017-12-01

    The National Water Model (NWM) provides national, street-level detail of water movement through time and space. Operating hourly, this flood of information offers enormous benefits in the form of water resource management, natural disaster preparedness, and the protection of life and property. The Geo-Intelligence Division at the NOAA National Water Center supplies forecasters and decision-makers with timely, actionable water intelligence through the processing of billions of NWM data points every hour. These datasets include current streamflow estimates, short and medium range streamflow forecasts, and many other ancillary datasets. The sheer amount of NWM data produced yields a dataset too large to allow for direct human comprehension. As such, it is necessary to undergo model data post-processing, filtering, and data ingestion by visualization web apps that make use of cartographic techniques to bring attention to the areas of highest urgency. This poster illustrates NWM output post-processing and cartographic visualization techniques being developed and employed by the Geo-Intelligence Division at the NOAA National Water Center to provide national actionable water intelligence.

  18. Cost Effective Evaluation of Companies' Storytelling on the Web

    DEFF Research Database (Denmark)

    Clemmensen, Torkil; Vendelø, Morten Thanning

    2004-01-01

    narrative qualities above the company that has a web site with good graphical appearance, but poor narrative qualities. In conclusion, we suggest that user centred evaluation of commercial web sites by using the suggested method can pay attention to deep, narrative structures in both the company's self......, initiate the customers' imagination and narrative mind and hence their decision making. These ideas are investigated in a qualitative study of two companies' self-presentation as future work places for students. The results demonstrate that the students choose the company that has a web site with rich...

  19. A Deep Learning Approach for Fault Diagnosis of Induction Motors in Manufacturing

    Science.gov (United States)

    Shao, Si-Yu; Sun, Wen-Jun; Yan, Ru-Qiang; Wang, Peng; Gao, Robert X.

    2017-11-01

    Extracting features from original signals is a key procedure for traditional fault diagnosis of induction motors, as it directly influences the performance of fault recognition. However, high-quality features need expert knowledge and human intervention. In this paper, a deep learning approach based on deep belief networks (DBN) is developed to learn features from the frequency distribution of vibration signals with the purpose of characterizing the working status of induction motors. It combines the feature extraction procedure with the classification task to achieve automated and intelligent fault diagnosis. The DBN model is built by stacking multiple units of restricted Boltzmann machines (RBM), and is trained using a layer-by-layer pre-training algorithm. Compared with traditional diagnostic approaches where feature extraction is needed, the presented approach has the ability to learn hierarchical representations, which are suitable for fault classification, directly from the frequency distribution of the measurement data. The structure of the DBN model is investigated, as the scale and depth of the DBN architecture directly affect its classification performance. An experimental study conducted on a machine fault simulator verifies the effectiveness of the deep learning approach for fault diagnosis of induction motors. This research proposes an intelligent diagnosis method for induction motors which utilizes a deep learning model to automatically learn features from sensor data and realize working-status recognition.
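
    The greedy layer-by-layer pre-training described can be sketched with off-the-shelf components: stacked Bernoulli RBMs learn features unsupervised, and a classifier is fitted on top. The sketch uses scikit-learn's digits data rather than the paper's vibration spectra:

```python
# DBN-style greedy layer-wise pre-training: two stacked Bernoulli RBMs
# learn features unsupervised, then a classifier is fit on top.
# Uses scikit-learn's digits data, not motor-vibration spectra.
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neural_network import BernoulliRBM
from sklearn.pipeline import Pipeline

X, y = load_digits(return_X_y=True)
X = X / 16.0  # scale pixel intensities into [0, 1] for Bernoulli units
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

dbn = Pipeline([
    ("rbm1", BernoulliRBM(n_components=128, learning_rate=0.06, n_iter=15, random_state=0)),
    ("rbm2", BernoulliRBM(n_components=64, learning_rate=0.06, n_iter=15, random_state=0)),
    ("clf", LogisticRegression(max_iter=1000)),
])
dbn.fit(Xtr, ytr)  # each RBM is trained in turn on the previous layer's output
print("test accuracy:", dbn.score(Xte, yte))
```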

  20. Artificial Intelligence and Moral intelligence

    OpenAIRE

    Laura Pana

    2008-01-01

    We discuss the thesis that the implementation of a moral code in the behaviour of artificial intelligent systems needs a specific form of human and artificial intelligence, not just an abstract intelligence. We present intelligence as a system with an internal structure and the structural levels of the moral system, as well as certain characteristics of artificial intelligent agents which can/must be treated as 1- individual entities (with a complex, specialized, autonomous or self-determined,...

  1. Competitive intelligence in services organizations: a systematic literature review

    Directory of Open Access Journals (Sweden)

    Danielle Faust Cruz

    2015-02-01

    Full Text Available The importance of the services sector in the global economy is growing. Facing a global and dynamic market characterized by fierce competition, Competitive Intelligence (CI) can help services organizations in the decision-making process and in building competitive advantages over competitors. This paper aims to outline the state of the art concerning the use of competitive intelligence in services sector organizations, through the search and analysis of articles found in major databases. This is a theoretical study consisting of a systematic literature review including bibliometric and content analysis. Relevant publications were retrieved from the following databases related to the subject: Web of Knowledge, Scopus, Ebsco, ScienceDirect, and Engineering Village. The results point to the importance of competitive intelligence for the survival and competitiveness of services organizations. Finally, a gap related to prescriptive studies focusing on investigations of the subject was identified, this type of study being relevant for the area to reach maturity.

  2. Virtual Sensor Web Architecture

    Science.gov (United States)

    Bose, P.; Zimdars, A.; Hurlburt, N.; Doug, S.

    2006-12-01

    NASA envisions the development of smart sensor webs: intelligent and integrated observation networks that harness distributed sensing assets, their associated continuous and complex data sets, and predictive observation processing mechanisms for timely, collaborative hazard mitigation and enhanced science productivity and reliability. This paper presents the Virtual Sensor Web Infrastructure for Collaborative Science (VSICS) architecture for sustained coordination of (numerical and distributed) model-based processing, closed-loop resource allocation, and observation planning. VSICS's key ideas include i) rich descriptions of sensors as services based on semantic markup languages like OWL and SensorML; ii) service-oriented workflow composition and repair for simple and ensemble models; iii) event-driven workflow execution based on event-based and distributed workflow management mechanisms; and iv) development of autonomous model interaction management capabilities providing closed-loop control of collection resources driven by competing targeted observation needs. We present results from initial work on collaborative science processing involving distributed services (the COSEC framework) that is being extended to create VSICS.

  3. 9th Asian Conference on Intelligent Information and Database Systems

    CERN Document Server

    Nguyen, Ngoc; Shirai, Kiyoaki

    2017-01-01

    This book presents recent research in intelligent information and database systems. The carefully selected contributions were initially accepted for presentation as posters at the 9th Asian Conference on Intelligent Information and Database Systems (ACIIDS 2017) held from 3 to 5 April 2017 in Kanazawa, Japan. While the contributions are of an advanced scientific level, several are accessible for non-expert readers. The book brings together 47 chapters divided into six main parts: • Part I. From Machine Learning to Data Mining. • Part II. Big Data and Collaborative Decision Support Systems, • Part III. Computer Vision Analysis, Detection, Tracking and Recognition, • Part IV. Data-Intensive Text Processing, • Part V. Innovations in Web and Internet Technologies, and • Part VI. New Methods and Applications in Information and Software Engineering. The book is an excellent resource for researchers and those working in algorithmics, artificial and computational intelligence, collaborative systems, decisio...

  4. Food-web and ecosystem structure of the open-ocean and deep-sea environments of the Azores, NE Atlantic

    Directory of Open Access Journals (Sweden)

    Telmo Morato

    2016-12-01

    Full Text Available The Marine Strategy Framework Directive intends to adopt ecosystem-based management for resources, biodiversity and habitats that puts emphasis on maintaining the health of the ecosystem alongside appropriate human use of the marine environment, for the benefit of current and future generations. Within the overall framework of ecosystem-based management, ecosystem models are tools to evaluate and gain insights into ecosystem properties. Low data availability and the complexity of modelling deep-water ecosystems have limited the application of ecosystem models to few deep-water ecosystems. Here, we aim to develop an ecosystem model for the deep sea and open ocean in the Azores exclusive economic zone, with the overarching objective of characterising the food-web and ecosystem structure of the ecosystem. An ecosystem model with 45 functional groups, including a detritus group, two primary producer groups, eight invertebrate groups, 29 fish groups, three marine mammal groups, a turtle and a seabird group, was built. Overall data quality measured by the pedigree index was estimated to be higher than the mean value of all published models. Therefore, the model was built with source data of an overall reasonable quality, especially considering the normally low data availability for deep-sea ecosystems. The total biomass (excluding detritus) of the modelled ecosystem for the whole area was calculated as 24.7 t·km⁻². The mean trophic level for the total marine catch of the Azores was estimated to be 3.95, similar to the trophic level of the bathypelagic and medium-size pelagic fish. Trophic levels for the different functional groups were estimated to be similar to those obtained with stable isotopes and stomach contents analyses, with some exceptions on both ends of the trophic spectra. Omnivory indices were in general low, indicating prey specialisation for the majority of the groups. Cephalopods, pelagic sharks and toothed whales were identified as groups with…

  5. Pro deep learning with TensorFlow a mathematical approach to advanced artificial intelligence in Python

    CERN Document Server

    Pattanayak, Santanu

    2017-01-01

    Deploy deep learning solutions in production with ease using TensorFlow. You'll also develop the mathematical understanding and intuition required to invent new deep learning architectures and solutions on your own. Pro Deep Learning with TensorFlow provides practical, hands-on expertise so you can learn deep learning from scratch and deploy meaningful deep learning solutions. This book will allow you to get up to speed quickly using TensorFlow and to optimize different deep learning architectures. All of the practical aspects of deep learning that are relevant in any industry are emphasized in this book. You will be able to use the prototypes demonstrated to build new deep learning applications. The code presented in the book is available in the form of iPython notebooks and scripts which allow you to try out examples and extend them in interesting ways. You will be equipped with the mathematical foundation and scientific knowledge to pursue research in this field and give back to the community.

  6. ShapeShop: Towards Understanding Deep Learning Representations via Interactive Experimentation.

    Science.gov (United States)

    Hohman, Fred; Hodas, Nathan; Chau, Duen Horng

    2017-05-01

    Deep learning is the driving force behind many recent technologies; however, deep neural networks are often viewed as "black-boxes" due to their internal complexity that is hard to understand. Little research focuses on helping people explore and understand the relationship between a user's data and the learned representations in deep learning models. We present our ongoing work, ShapeShop, an interactive system for visualizing and understanding what semantics a neural network model has learned. Built using standard web technologies, ShapeShop allows users to experiment with and compare deep learning models to help explore the robustness of image classifiers.

  7. ShapeShop: Towards Understanding Deep Learning Representations via Interactive Experimentation

    Energy Technology Data Exchange (ETDEWEB)

    Hohman, Frederick M.; Hodas, Nathan O.; Chau, Duen Horng

    2017-05-30

    Deep learning is the driving force behind many recent technologies; however, deep neural networks are often viewed as “black-boxes” due to their internal complexity that is hard to understand. Little research focuses on helping people explore and understand the relationship between a user’s data and the learned representations in deep learning models. We present our ongoing work, ShapeShop, an interactive system for visualizing and understanding what semantics a neural network model has learned. Built using standard web technologies, ShapeShop allows users to experiment with and compare deep learning models to help explore the robustness of image classifiers.

  8. Intelligence Naturelle et Intelligence Artificielle

    OpenAIRE

    Dubois, Daniel

    2011-01-01

    This article presents a systemic approach to the concept of natural intelligence, with the objective of creating an artificial intelligence. Natural intelligence, human and non-human animal, is thus a function composed of faculties for knowing and understanding. Moreover, natural intelligence remains inseparable from its structure, namely the organs of the brain and body. The temptation is great to endow computer systems with an artificial intelligence ...

  9. The Linking Probability of Deep Spider-Web Networks

    OpenAIRE

    Pippenger, Nicholas

    2005-01-01

    We consider crossbar switching networks with base $b$ (that is, constructed from $b \times b$ crossbar switches), scale $k$ (that is, with $b^k$ inputs, $b^k$ outputs and $b^k$ links between each consecutive pair of stages) and depth $l$ (that is, with $l$ stages). We assume that the crossbars are interconnected according to the spider-web pattern, whereby two diverging paths reconverge only after at least $k$ stages. We assume that each vertex is independently idle with probability $q$, the v...

  10. Context-dependent Reasoning for the Semantic Web

    Directory of Open Access Journals (Sweden)

    Neli P. Zlatareva

    2011-08-01

    Full Text Available Ontologies are the backbone of the emerging Semantic Web, which is envisioned to dramatically improve current web services by extending them with intelligent capabilities such as reasoning and context-awareness. They define a shared vocabulary of common domains accessible to both humans and computers, and support various types of information management, including the storage and processing of data. Current ontology languages, which are designed to be decidable to allow for automatic data processing, target simple typed ontologies that are completely and consistently specified. As the size of ontologies and the complexity of web applications grow, the need for more flexible representation and reasoning schemes emerges. This article presents a logical framework utilizing context-dependent rules, which are intended to support ontologies that are not fully and/or precisely specified. A hypothetical application scenario is described to illustrate the type of ontologies targeted, and the type of queries that the presented logical framework is intended to address.
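
    A context-dependent rule in the sense sketched above only fires when its context assumption is active. A minimal forward-chaining illustration follows (the facts, rules and contexts are invented; the framework itself is defined over ontology languages, not Python):

```python
# Minimal forward chaining with context-dependent rules: a rule fires
# only when its context is active. Facts, rules and contexts invented.
rules = [
    # (context, antecedents, consequent)
    ("normal_ops", {"sensor_online", "reading_high"}, "raise_alert"),
    ("maintenance", {"reading_high"}, "log_only"),
]

def forward_chain(facts, active_context):
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for ctx, body, head in rules:
            if ctx == active_context and body <= facts and head not in facts:
                facts.add(head)   # derive the consequent
                changed = True
    return facts

print(forward_chain({"sensor_online", "reading_high"}, "normal_ops"))
# {'sensor_online', 'reading_high', 'raise_alert'}
```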

  11. Energy transfer in the Congo deep-sea fan: From terrestrially-derived organic matter to chemosynthetic food webs

    Science.gov (United States)

    Pruski, A. M.; Decker, C.; Stetten, E.; Vétion, G.; Martinez, P.; Charlier, K.; Senyarich, C.; Olu, K.

    2017-08-01

    Large amounts of recent terrestrial organic matter (OM) from the African continent are delivered to the abyssal plain by turbidity currents and accumulate in the Congo deep-sea fan. In the recent lobe complex, large clusters of vesicomyid bivalves are found all along the active channel in areas of reduced sediment. These soft-sediment communities resemble those fuelled by chemoautotrophy in cold-seep settings. The aim of this study was to elucidate feeding strategies in these macrofaunal assemblages as part of a greater effort to understand the link between the inputs of terrestrially-derived OM and the chemosynthetic habitats. The biochemical composition of the sedimentary OM was first analysed in order to evaluate how nutritious the available particulate OM is for the benthic macrofauna. The terrestrial OM is already degraded when it reaches the final depositional area. However, high biopolymeric carbon contents (proteins, carbohydrates and lipids) are found in the channel of the recent lobe complex. In addition, about one to two thirds of the nitrogen can be assigned to peptide-like material. Even if this soil-derived OM is poorly digestible, turbiditic deposits contain such high amounts of organic carbon that there is enough biopolymeric carbon and proteinaceous nitrogen to support dense benthic communities that contrast with the usual depauperate abyssal plains. Stable carbon and nitrogen isotopes and fatty acid biomarkers were then used to shed light on the feeding strategies allowing the energy transfer from the terrestrial OM brought by the turbidity currents to the abyssal food web. In the non-reduced sediment, surface detritivorous holothurians and suspension-feeding poriferans rely on detrital OM, thereby depending directly on the turbiditic deposits. The sulphur-oxidising symbiont-bearing vesicomyids closely depend on the reprocessing of OM with methane and sulphide as final products. Their carbon and nitrogen isotopic signatures vary greatly among sites…
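
    Nitrogen-isotope signatures like those mentioned here are commonly converted to trophic-level estimates with the standard enrichment formula, using a widely assumed per-step enrichment of 3.4‰ (the values below are illustrative, not measurements from the Congo lobe sites):

```python
# Standard stable-isotope trophic-level estimate:
# TL = TL_base + (d15N_consumer - d15N_base) / 3.4, where 3.4 permil is
# the assumed per-trophic-step enrichment. Values are illustrative only.
def trophic_level(d15n_consumer, d15n_base, tl_base=2.0, enrichment=3.4):
    return tl_base + (d15n_consumer - d15n_base) / enrichment

# A consumer enriched by 6.8 permil over a primary-consumer baseline:
print(round(trophic_level(d15n_consumer=11.2, d15n_base=4.4), 2))  # 4.0
```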

  12. Bee Swarm Optimization for Medical Web Information Foraging.

    Science.gov (United States)

    Drias, Yassine; Kechid, Samir; Pasi, Gabriella

    2016-02-01

    The present work is related to Web intelligence and more precisely to medical information foraging. We present a novel approach based on agent technology for information foraging. An architecture is proposed in which we distinguish two important phases. The first is a learning process for localizing the most relevant pages that might interest the user, performed on a fixed instance of the Web. The second takes into account the openness and dynamicity of the Web: it consists of incremental learning that starts from the result of the first phase and reshapes the outcomes to take into account the changes the Web undergoes. The whole system offers a tool to help the user undertake information foraging. We implemented the system using a group of cooperative reactive agents, more precisely a colony of artificial bees. In order to validate our proposal, experiments were conducted on MedlinePlus, a benchmark dedicated to research in the domain of Health. The results are promising, both for those related to Web regularities and for the response time, which is very short and hence complies with the real-time constraint.
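
    For orientation, the core artificial-bee-colony loop (employed bees, fitness-proportional onlookers, scouts) can be sketched for continuous minimisation; this is the generic ABC metaheuristic, not the paper's web-foraging agent architecture:

```python
# Generic artificial-bee-colony sketch for continuous minimisation of a
# non-negative objective f (fitness = 1/(1+f)). Not the paper's system.
import random

def abc_minimise(f, dim=2, n_sources=10, limit=20, iters=200, lo=-5.0, hi=5.0):
    src = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_sources)]
    trials = [0] * n_sources

    def try_neighbour(i):
        k = random.randrange(n_sources)      # reference food source
        d = random.randrange(dim)            # dimension to perturb
        cand = src[i][:]
        cand[d] += random.uniform(-1, 1) * (src[i][d] - src[k][d])
        if f(cand) < f(src[i]):
            src[i], trials[i] = cand, 0      # greedy replacement
        else:
            trials[i] += 1

    for _ in range(iters):
        for i in range(n_sources):           # employed bees
            try_neighbour(i)
        fit = [1.0 / (1.0 + f(s)) for s in src]
        for _ in range(n_sources):           # onlookers, fitness-proportional
            i = random.choices(range(n_sources), weights=fit)[0]
            try_neighbour(i)
        for i in range(n_sources):           # scouts abandon stale sources
            if trials[i] > limit:
                src[i] = [random.uniform(lo, hi) for _ in range(dim)]
                trials[i] = 0
    return min(src, key=f)

print(abc_minimise(lambda x: sum(v * v for v in x)))  # near [0, 0]
```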

  13. A Lead Provided by Bookmarks - Intelligent Browsers

    Directory of Open Access Journals (Sweden)

    Dan Balanescu

    2015-05-01

    Full Text Available Browsers are applications that allow Internet access. A defining characteristic is their unidirectionality: Navigator -> Internet. The purpose of this article is to support the idea of Intelligent Browsers, defined by bidirectionality: Navigator -> Internet and Internet -> Navigator. The fundamental idea is that the Internet contains huge resources of knowledge, but they are "passive". The purpose of this article is to propose the "activation" of this knowledge so that, through "Intelligent Browsers", it turns from Sitting Ducks into Active Mentors. Following this idea, the present article proposes changes to the Bookmarks function, from its current status of Favorites to Recommendations. The article presents an analysis of the utility of this function (by presenting research on web browsing behaviors) and in particular finds that the significance of this utility has decreased lately (to the point of becoming almost useless, as will be shown, in terms of data-information-knowledge). Finally, it presents the idea of a project which aims to be an applied approach that anticipates the findings of this study and the concept of Intelligent Browsers (or Active Browsers) required in the context of the Big Data concept.

  14. Artificial Intelligence, Machine Learning, Deep Learning, and Cognitive Computing: What Do These Terms Mean and How Will They Impact Health Care?

    Science.gov (United States)

    Bini, Stefano A

    2018-02-27

    This article was presented at the 2017 annual meeting of the American Association of Hip and Knee Surgeons to introduce the members gathered as the audience to the concepts behind artificial intelligence (AI) and the applications that AI can have in the world of health care today. We discuss the origin of AI, progress to machine learning, and then discuss how the limits of machine learning lead data scientists to develop artificial neural networks and deep learning algorithms through biomimicry. We will place all these technologies in the context of practical clinical examples and show how AI can act as a tool to support and amplify human cognitive functions for physicians delivering care to increasingly complex patients. The aim of this article is to provide the reader with a basic understanding of the fundamentals of AI. Its purpose is to demystify this technology for practicing surgeons so they can better understand how and where to apply it. Copyright © 2018 Elsevier Inc. All rights reserved.

  15. Artificial intelligence in medicine.

    Science.gov (United States)

    Hamet, Pavel; Tremblay, Johanne

    2017-04-01

    Artificial Intelligence (AI) is a general term that implies the use of a computer to model intelligent behavior with minimal human intervention. AI is generally accepted as having started with the invention of robots. The term derives from the Czech word robota, meaning biosynthetic machines used as forced labor. In this field, Leonardo Da Vinci's lasting heritage is today's burgeoning use of robotic-assisted surgery, named after him, for complex urologic and gynecologic procedures. Da Vinci's sketchbooks of robots helped set the stage for this innovation. AI, described as the science and engineering of making intelligent machines, was officially born in 1956. The term is applicable to a broad range of items in medicine such as robotics, medical diagnosis, medical statistics, and human biology-up to and including today's "omics". AI in medicine, which is the focus of this review, has two main branches: virtual and physical. The virtual branch includes informatics approaches from deep learning information management to control of health management systems, including electronic health records, and active guidance of physicians in their treatment decisions. The physical branch is best represented by robots used to assist the elderly patient or the attending surgeon. Also embodied in this branch are targeted nanorobots, a unique new drug delivery system. The societal and ethical complexities of these applications require further reflection, proof of their medical utility, economic value, and development of interdisciplinary strategies for their wider application. Copyright © 2017. Published by Elsevier Inc.

  16. Intelligent (Autonomous) Power Controller Development for Human Deep Space Exploration

    Science.gov (United States)

    Soeder, James; Raitano, Paul; McNelis, Anne

    2016-01-01

    As NASA's Evolvable Mars Campaign and other exploration initiatives continue to mature, they have identified the need for more autonomous operation of the power system. For current human space operations such as the International Space Station, the paradigm is to perform the planning, operation and fault diagnosis from the ground. However, the dual problems of communication lag and limited communication bandwidth beyond geosynchronous orbit underscore the need to change the operation methodology for human operations in deep space. To address this need, for the past several years the Glenn Research Center has had an effort to develop an autonomous power controller for human deep space vehicles. This presentation discusses the present roadmap for deep space exploration along with a description of a conceptual power system architecture for exploration modules. It then contrasts the present ground-centric control and management architecture, with limited autonomy on board the spacecraft, with an advanced autonomous power control system that features ground-based monitoring and a spacecraft mission manager with autonomous control of all core systems, including power. It then presents a functional breakdown of the autonomous power control system and examines its operation in both normal and fault modes. Finally, it discusses progress made in the development of a real-time power system model and how it is being used to evaluate the performance of the controller as well as to verify the overall operation.

  17. TRSDL: Tag-Aware Recommender System Based on Deep Learning–Intelligent Computing Systems

    Directory of Open Access Journals (Sweden)

    Nan Liang

    2018-05-01

    Full Text Available In recommender systems (RS), many models are designed to predict ratings of items for the target user. To improve the performance of rating prediction, some studies have introduced tags into recommender systems. Tags benefit RS considerably; however, they are also redundant and ambiguous. In this paper, we propose a hybrid deep learning model, TRSDL (tag-aware recommender system based on deep learning), to improve the performance of tag-aware recommender systems (TRS). First, TRSDL uses pre-trained word embeddings to represent user-defined tags, and constructs item and user profiles based on the items’ tag sets and users’ tagging behaviors. Then, it utilizes deep neural networks (DNNs) and recurrent neural networks (RNNs) to extract the latent features of items and users, respectively. Finally, it predicts ratings from these latent features. The model not only addresses tag limitations and takes advantage of semantic tag information, but also learns more advanced implicit features via deep structures. We evaluated our proposed approach and several baselines on MovieLens-20M, and the experimental results demonstrate that TRSDL significantly outperforms all the baselines (including the state-of-the-art models BiasedMF and I-AutoRec). In addition, we also explore the impacts of network depth and type on model performance.
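
    For contrast with TRSDL's deep architecture, the underlying rating-prediction task can be shown with plain SGD matrix factorisation, the family the BiasedMF baseline belongs to (the ratings and hyper-parameters below are invented toy values):

```python
# Toy SGD matrix factorisation for rating prediction -- far simpler
# than TRSDL's DNN/RNN hybrid. Ratings and hyper-parameters invented.
import numpy as np

rng = np.random.default_rng(0)
ratings = [(0, 0, 5.0), (0, 1, 3.0), (1, 0, 4.0), (1, 2, 1.0), (2, 1, 2.0)]
n_users, n_items, k = 3, 3, 4

P = rng.normal(scale=0.1, size=(n_users, k))   # user latent factors
Q = rng.normal(scale=0.1, size=(n_items, k))   # item latent factors

for epoch in range(200):
    for u, i, r in ratings:
        err = r - P[u] @ Q[i]                  # prediction error
        pu = P[u].copy()
        P[u] += 0.05 * (err * Q[i] - 0.02 * P[u])   # gradient step + L2
        Q[i] += 0.05 * (err * pu - 0.02 * Q[i])

print(round(P[0] @ Q[2], 2))  # predicted rating for an unseen user-item pair
```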

  18. The internet and intelligent machines: search engines, agents and robots

    International Nuclear Information System (INIS)

    Achenbach, S.; Alfke, H.

    2000-01-01

    The internet plays an important role in a growing number of medical applications. Finding relevant information is not always easy, as the amount of information available on the Web is rising quickly. Even the best Search Engines can only collect links to a fraction of all existing Web pages. In addition, many of these indexed documents have been changed or deleted. The vast majority of information on the Web is not searchable with conventional methods. New search strategies, technologies and standards are combined in Intelligent Search Agents (ISAs) and Robots, which can retrieve desired information in a targeted way. Conclusion: The article describes the differences between ISAs and conventional Search Engines and how communication between Agents improves their ability to find information. Examples of existing ISAs are given and the possible influence on current and future work in radiology is discussed. (orig.) [de]

  19. Crowdteaching: Supporting Teaching as Designing in Collective Intelligence Communities

    Directory of Open Access Journals (Sweden)

    Mimi Recker

    2014-09-01

    Full Text Available The widespread availability of high-quality Web-based content offers new potential for supporting teachers as designers of curricula and classroom activities. When coupled with a participatory Web culture and infrastructure, teachers can share their creations as well as leverage from the best that their peers have to offer to support a collective intelligence or crowdsourcing community, which we dub crowdteaching. We applied a collective intelligence framework to characterize crowdteaching in the context of a Web-based tool for teachers called the Instructional Architect (IA). The IA enables teachers to find, create, and share instructional activities (called IA projects) for their students using online learning resources. These IA projects can further be viewed, copied, or adapted by other IA users. This study examines the usage activities of two samples of teachers, and also analyzes the characteristics of a subset of their IA projects. Analyses of teacher activities suggest that they are engaging in crowdteaching processes. Teachers, on average, chose to share over half of their IA projects, and copied some directly from other IA projects. Thus, these teachers can be seen as both contributors to and consumers of crowdteaching processes. In addition, IA users preferred to view IA projects rather than to completely copy them. Finally, correlational results based on an analysis of the characteristics of IA projects suggest that several easily computed metrics (number of views, number of copies, and number of words in IA projects) can act as an indirect proxy of instructionally relevant indicators of the content of IA projects.

  20. An Intelligent Tool for Activity Data Collection

    Directory of Open Access Journals (Sweden)

    A. M. Jehad Sarkar

    2011-04-01

    Full Text Available Activity recognition systems using simple and ubiquitous sensors require a large variety of real-world sensor data for not only evaluating their performance but also training the systems for better functioning. However, a tremendous amount of effort is required to setup an environment for collecting such data. For example, expertise and resources are needed to design and install the sensors, controllers, network components, and middleware just to perform basic data collections. It is therefore desirable to have a data collection method that is inexpensive, flexible, user-friendly, and capable of providing large and diverse activity datasets. In this paper, we propose an intelligent activity data collection tool which has the ability to provide such datasets inexpensively without physically deploying the testbeds. It can be used as an inexpensive and alternative technique to collect human activity data. The tool provides a set of web interfaces to create a web-based activity data collection environment. It also provides a web-based experience sampling tool to take the user’s activity input. The tool generates an activity log using its activity knowledge and the user-given inputs. The activity knowledge is mined from the web. We have performed two experiments to validate the tool’s performance in producing reliable datasets.

  1. An intelligent tool for activity data collection.

    Science.gov (United States)

    Sarkar, A M Jehad

    2011-01-01

    Activity recognition systems using simple and ubiquitous sensors require a large variety of real-world sensor data for not only evaluating their performance but also training the systems for better functioning. However, a tremendous amount of effort is required to set up an environment for collecting such data. For example, expertise and resources are needed to design and install the sensors, controllers, network components, and middleware just to perform basic data collection. It is therefore desirable to have a data collection method that is inexpensive, flexible, user-friendly, and capable of providing large and diverse activity datasets. In this paper, we propose an intelligent activity data collection tool which has the ability to provide such datasets inexpensively without physically deploying the testbeds. It can be used as an inexpensive and alternative technique to collect human activity data. The tool provides a set of web interfaces to create a web-based activity data collection environment. It also provides a web-based experience sampling tool to take the user's activity input. The tool generates an activity log using its activity knowledge and the user-given inputs. The activity knowledge is mined from the web. We have performed two experiments to validate the tool's performance in producing reliable datasets.

  2. Situation-aware GeoVisualization considering applied logic and extensibility: a new architecture and mechanism for intelligent GeoWeb

    Science.gov (United States)

    He, Xuelin; Gold, Christopher

    2010-11-01

    Recent years have witnessed the emergence of Virtual Globe technology, which increasingly exhibits powerful features and capabilities. However, the current technical architecture for geovisualization is still the traditional data-viewer mode, i.e. KML-geobrowser. Current KML is basically an encoding format for wrapping static snapshots of information frozen at discrete time points, and a geobrowser is virtually a data renderer for geovisualization. In the real world, spatial-temporal objects and elements possess specific semantics, applied logic and operational rules, natural or social, which need to be considered and executed when the corresponding data is integrated or visualized in a visual geocontext. However, there is currently no way to express and execute this kind of applied logic and control rules within the current geobrowsing architecture. This paper proposes a novel architecture by originating a new mechanism, DKML, and implementing a DKML-supporting prototype geobrowser. Embedded programming script within KML files can express applied logic, control conditions, situation-aware analysis utilities and special functionality, to achieve intelligent, controllable and applied logic-conformant geovisualization, and to flexibly extend and customize the DKML-supporting geobrowser. Benefiting from the mechanism developed in this research, geobrowsers can truly evolve into powerful multi-purpose GeoWeb platforms with promising potential and prospects.

  3. Semantic Web based Self-management for a Pervasive Service Middleware

    DEFF Research Database (Denmark)

    Zhang, Weishan; Hansen, Klaus Marius

    2008-01-01

    Self-management is one of the challenges for realizing ambient intelligence in pervasive computing. In this paper, we propose and present a semantic Web based self-management approach for a pervasive service middleware where dynamic context information is encoded in a set of self-management context ontologies. The proposed approach is justified by the characteristics of pervasive computing and by the open world assumption and reasoning potential of the semantic Web and its rule language. To enable real-time self-management, application-level and network-level state reporting is employed in our approach. State changes trigger execution of self-management rules for adaptation, monitoring, diagnosis, and so on. Evaluations of self-diagnosis in terms of extensibility, performance, and scalability show that the semantic Web based self-management approach is effective in achieving the self-diagnosis goals.

  4. Research and development of artificial intelligence in China

    Institute of Scientific and Technical Information of China (English)

    Jane Qiu

    2016-01-01

    This year saw several milestones in the development of artificial intelligence. In March, AlphaGo, a computer algorithm developed by Google's London-based company DeepMind, beat the world champion Lee Sedol at Go, an ancient Chinese board game. In October, the same company unveiled in the journal Nature its latest technique, which allows a machine to solve tasks that require logic and reasoning, such as finding its way around the London Underground.

  5. System Interface for an Integrated Intelligent Safety System (ISS) for Vehicle Applications

    Directory of Open Access Journals (Sweden)

    Mahammad A. Hannan

    2010-01-01

    Full Text Available This paper deals with the interface-relevant activity of a vehicle integrated intelligent safety system (ISS) that includes an airbag deployment decision system (ADDS) and a tire pressure monitoring system (TPMS). A program is developed in LabWindows/CVI, using C, for prototype implementation. The prototype is primarily concerned with the interconnection between hardware objects such as a load cell, web camera, accelerometer, TPM tire module and receiver module, DAQ card, CPU card and a touch screen. Several safety subsystems, including image processing, weight sensing and crash detection systems, are integrated, and their outputs are combined to yield intelligent decisions regarding airbag deployment. The integrated safety system also monitors tire pressure and temperature. Testing and experimentation with this ISS suggests that the system is unique, robust, intelligent, and appropriate for in-vehicle applications.

  6. Recent advances in swarm intelligence and evolutionary computation

    CERN Document Server

    2015-01-01

    This timely review volume summarizes the state-of-the-art developments in nature-inspired algorithms and applications with the emphasis on swarm intelligence and bio-inspired computation. Topics include the analysis and overview of swarm intelligence and evolutionary computation, hybrid metaheuristic algorithms, bat algorithm, discrete cuckoo search, firefly algorithm, particle swarm optimization, and harmony search as well as convergent hybridization. Application case studies have focused on the dehydration of fruits and vegetables by the firefly algorithm and goal programming, feature selection by the binary flower pollination algorithm, job shop scheduling, single row facility layout optimization, training of feed-forward neural networks, damage and stiffness identification, synthesis of cross-ambiguity functions by the bat algorithm, web document clustering, truss analysis, water distribution networks, sustainable building designs and others. As a timely review, this book can serve as an ideal reference f...

  7. Deep learning in pharmacogenomics: from gene regulation to patient stratification.

    Science.gov (United States)

    Kalinin, Alexandr A; Higgins, Gerald A; Reamaroon, Narathip; Soroushmehr, Sayedmohammadreza; Allyn-Feuer, Ari; Dinov, Ivo D; Najarian, Kayvan; Athey, Brian D

    2018-05-01

    This Perspective provides examples of current and future applications of deep learning in pharmacogenomics, including: identification of novel regulatory variants located in noncoding domains of the genome and their function as applied to pharmacoepigenomics; patient stratification from medical records; and the mechanistic prediction of drug response, targets and their interactions. Deep learning encapsulates a family of machine learning algorithms that has transformed many important subfields of artificial intelligence over the last decade, and has demonstrated breakthrough performance improvements on a wide range of tasks in biomedicine. We anticipate that in the future, deep learning will be widely used to predict personalized drug response and optimize medication selection and dosing, using knowledge extracted from large and complex molecular, epidemiological, clinical and demographic datasets.

  8. Deep learning for automated drivetrain fault detection

    DEFF Research Database (Denmark)

    Bach-Andersen, Martin; Rømer-Odgaard, Bo; Winther, Ole

    2018-01-01

    A novel data-driven deep-learning system for large-scale wind turbine drivetrain monitoring applications is presented. It uses convolutional neural network processing on complex vibration signal inputs. The system is demonstrated to learn successfully from the actions of human diagnostic experts … the fleet-wide diagnostic model performance. The analysis also explores the time dependence of the diagnostic performance, providing a detailed view of the timeliness and accuracy of the diagnostic outputs across the different architectures. Deep architectures are shown to outperform the human analyst as well as shallow-learning architectures, and the results demonstrate that when applied in a large-scale monitoring system, machine intelligence is now able to handle some of the most challenging diagnostic tasks related to wind turbines.

  9. Designing a patient monitoring system for bipolar disorder using Semantic Web technologies.

    Science.gov (United States)

    Thermolia, Chryssa; Bei, Ekaterini S; Petrakis, Euripides G M; Kritsotakis, Vangelis; Tsiknakis, Manolis; Sakkalis, Vangelis

    2015-01-01

    The new movement to personalize treatment plans and improve prediction capabilities is greatly facilitated by intelligent remote patient monitoring and risk prevention. This paper focuses on patients suffering from bipolar disorder, a mental illness characterized by severe mood swings. We exploit the advantages of Semantic Web and Electronic Health Record technologies to develop a patient monitoring platform to support clinicians. Relying on intelligent filtering of clinical evidence-based information and individual-specific knowledge, we aim to provide recommendations for treatment and monitoring at the appropriate time, or to issue alerts for serious shifts in mood and patients' non-response to treatment.

  10. Species- and habitat-specific bioaccumulation of total mercury and methylmercury in the food web of a deep oligotrophic lake.

    Science.gov (United States)

    Arcagni, Marina; Juncos, Romina; Rizzo, Andrea; Pavlin, Majda; Fajon, Vesna; Arribére, María A; Horvat, Milena; Ribeiro Guevara, Sergio

    2018-01-15

    Niche segregation between introduced and native fish in Lake Nahuel Huapi, a deep oligotrophic lake in Northwest Patagonia (Argentina), occurs through the consumption of different prey. Therefore, in this work we analyzed total mercury [THg] and methylmercury [MeHg] concentrations in top predator fish and in their main prey to test whether their feeding habits influence [Hg]. Results indicate that [THg] and [MeHg] varied by foraging habitat, increasing with a greater proportion of benthic prey in the diet and decreasing with a more pelagic diet in Lake Nahuel Huapi. This is consistent with the fact that the native creole perch, a mostly benthivorous feeder, which shares the highest trophic level of the food web with introduced salmonids, had higher [THg] and [MeHg] than the more pelagic-feeding rainbow trout and the bentho-pelagic-feeding brown trout. This differential THg and MeHg bioaccumulation observed in native and introduced fish supports the hypothesis that there are two main Hg transfer pathways from the base of the food web to top predators: a pelagic pathway, where Hg is transferred from water through plankton (with Hg mostly in inorganic species) and forage fish to salmonids, and a benthic pathway, where Hg is transferred from the sediments (where most Hg methylation occurs) through crayfish (with higher [MeHg] than plankton) to native fish, leading to [Hg] about one-fold higher. Copyright © 2017 Elsevier B.V. All rights reserved.

  11. Semantically-Enabled Sensor Plug & Play for the Sensor Web

    Science.gov (United States)

    Bröring, Arne; Maúe, Patrick; Janowicz, Krzysztof; Nüst, Daniel; Malewski, Christian

    2011-01-01

    Environmental sensors have continuously improved over the past years, becoming smaller, cheaper, and more intelligent. As a consequence of these technological advancements, sensors are increasingly deployed to monitor our environment. The large variety of available sensor types with often incompatible protocols complicates the integration of sensors into observing systems. The standardized Web service interfaces and data encodings defined within OGC’s Sensor Web Enablement (SWE) framework make sensors available over the Web and hide the heterogeneous sensor protocols from applications. So far, the SWE framework does not describe how to integrate sensors on the fly with minimal human intervention. The driver software which enables access to sensors has to be implemented, and the measured sensor data has to be manually mapped to the SWE models. In this article we introduce a Sensor Plug & Play infrastructure for the Sensor Web by combining (1) semantic matchmaking functionality, (2) a publish/subscribe mechanism underlying the Sensor Web, as well as (3) a model for the declarative description of sensor interfaces which serves as a generic driver mechanism. We implement and evaluate our approach by applying it to an oil spill scenario. The matchmaking is realized using existing ontologies and reasoning engines and provides a strong case for the semantic integration capabilities provided by Semantic Web research. PMID:22164033

  12. Web-based tool for visualization of electric field distribution in deep-seated body structures and planning of electroporation-based treatments.

    Science.gov (United States)

    Marčan, Marija; Pavliha, Denis; Kos, Bor; Forjanič, Tadeja; Miklavčič, Damijan

    2015-01-01

    Treatments based on electroporation are a new and promising approach to treating tumors, especially non-resectable ones. The success of the treatment is, however, heavily dependent on coverage of the entire tumor volume with a sufficiently high electric field. Ensuring complete coverage in the case of deep-seated tumors is not trivial and is best ensured by patient-specific treatment planning. The treatment planning process consists of two complex tasks: medical image segmentation, and numerical modeling and optimization. In addition to previously developed segmentation algorithms for several tissues (human liver, hepatic vessels, bone tissue and canine brain) and the algorithms for numerical modeling and optimization of treatment parameters, we developed a web-based tool to facilitate the translation of the algorithms and their application in the clinic. The developed web-based tool automatically builds a 3D model of the target tissue from the medical images uploaded by the user and then uses this 3D model to optimize treatment parameters. The tool enables the user to validate the results of the automatic segmentation and make corrections if necessary before delivering the final treatment plan. Evaluation of the tool was performed by five independent experts from four different institutions. During the evaluation, we gathered data concerning user experience and measured performance times for different components of the tool. Both user reports and performance times show a significant reduction in treatment-planning complexity and time consumption, from 1-2 days to a few hours. The presented web-based tool is intended to facilitate the treatment planning process and reduce the time needed for it. It is crucial for facilitating expansion of electroporation-based treatments in the clinic and ensuring reliable treatment for patients. The additional value of the tool is the possibility of easy upgrade and integration of modules with new

  13. Price Comparisons on the Internet Based on Computational Intelligence

    Science.gov (United States)

    Kim, Jun Woo; Ha, Sung Ho

    2014-01-01

    Information-intensive Web services such as price comparison sites have recently been gaining popularity. However, most users including novice shoppers have difficulty in browsing such sites because of the massive amount of information gathered and the uncertainty surrounding Web environments. Even conventional price comparison sites face various problems, which suggests the necessity of a new approach to address these problems. Therefore, for this study, an intelligent product search system was developed that enables price comparisons for online shoppers in a more effective manner. In particular, the developed system adopts linguistic price ratings based on fuzzy logic to accommodate user-defined price ranges, and personalizes product recommendations based on linguistic product clusters, which help online shoppers find desired items in a convenient manner. PMID:25268901
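
    A minimal sketch in Python of the fuzzy idea described above, assuming triangular membership functions and hypothetical prices; the paper does not publish its code, so this illustrates linguistic price ratings rather than the authors' implementation.

    ```python
    # Triangular membership: 0 outside [a, c], rising to 1 at the peak b.
    def triangular(x, a, b, c):
        if x <= a or x >= c:
            return 0.0
        if x == b:
            return 1.0
        return (x - a) / (b - a) if x < b else (c - x) / (c - b)

    # Linguistic price ratings over a user-defined acceptable range [low, high].
    def linguistic_price_ratings(price, low, high):
        mid = (low + high) / 2.0
        return {
            "cheap": triangular(price, low - (mid - low), low, mid),
            "moderate": triangular(price, low, mid, high),
            "expensive": triangular(price, mid, high, high + (high - mid)),
        }

    # A shopper who considers 300-700 acceptable gets graded, overlapping labels.
    print(linguistic_price_ratings(450.0, 300.0, 700.0))
    ```

    An item near the middle of the range scores highest on "moderate" while retaining partial membership in "cheap", which is what lets such a system rank items against vague, user-defined budgets.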

  14. Investigating Pre-Service Mathematics Teachers' Innovation Awareness and Views Regarding Intelligent Tutoring Systems

    Science.gov (United States)

    Erdemir, Mustafa; Ingeç, Sebnem Kandil

    2016-01-01

    The purpose of this study is to identify pre-service primary mathematics teachers' views regarding Web-based Intelligent Tutoring Systems (WBITS) in relation to their usability and influence on teaching. A survey method was used. The study was conducted with 43 students attending the mathematics teaching program under the department of elementary…

  15. Search of the Deep and Dark Web via DARPA Memex

    Science.gov (United States)

    Mattmann, C. A.

    2015-12-01

    Search has progressed through several stages due to the increasing size of the Web. Search engines first focused on text and its rate of occurrence; then on link analysis and citation; then on interactivity and guided search; and now on the use of social media - who we interact with, what we comment on, and who we follow (and who follows us). The next stage, referred to as "deep search," requires solutions that can bring together text, images, video, importance, interactivity, and social media. The Apache Nutch project provides an open framework for large-scale, targeted, vertical search with capabilities to support all past and potential future search engine foci. Nutch is a flexible infrastructure allowing open access to ranking, to URL selection and filtering approaches, and to the link graph generated from search, and it has spawned entire sub-communities including Apache Hadoop and Apache Tika. It addresses many current needs with the capability to support new technologies such as image and video. On the DARPA Memex project, we are creating specific extensions to Nutch that will directly improve its overall technological superiority for search and that will directly allow us to address complex search problems including human trafficking. We are integrating state-of-the-art algorithms developed by Kitware for IARPA Aladdin combined with work by Harvard to provide image and video understanding support, allowing automatic detection of people and things and massive deployment via Nutch. We are expanding Apache Tika for scene understanding, object/person detection and classification in images/video. We are delivering an interactive and visual interface for initiating Nutch crawls. The interface uses Python technologies to expose Nutch data and to provide a domain-specific language for crawls. With the Bokeh visualization library, the interface delivers simple interactive crawl visualization and
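
    Nutch itself is a Java framework, so as a hedged illustration only, the following Python sketch captures the core loop of a targeted, vertical crawl of the kind described above: fetch pages, follow only links within a domain scope, and rank pages by hits against a topic vocabulary. The seed URL, keywords and scope are hypothetical, and this is not Nutch's API.

    ```python
    import re
    import urllib.request
    from collections import deque
    from urllib.parse import urljoin, urlparse

    def vertical_crawl(seed, keywords, scope, max_pages=10):
        """Breadth-first crawl restricted to one domain, ranked by keyword hits.
        Illustration only: a real crawler must also honor robots.txt."""
        seen, queue, scored = {seed}, deque([seed]), []
        while queue and len(scored) < max_pages:
            url = queue.popleft()
            try:
                html = urllib.request.urlopen(url, timeout=5).read().decode("utf-8", "ignore")
            except OSError:
                continue
            # Crude relevance proxy: count keyword occurrences in the page.
            score = sum(len(re.findall(kw, html, re.I)) for kw in keywords)
            scored.append((score, url))
            for href in re.findall(r'href="([^"#]+)"', html):
                link = urljoin(url, href)
                if urlparse(link).netloc.endswith(scope) and link not in seen:
                    seen.add(link)
                    queue.append(link)
        return sorted(scored, reverse=True)

    # Hypothetical usage, staying inside one host:
    # vertical_crawl("https://example.org/", ["crawl", "search"], "example.org")
    ```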

  16. An Intelligent Tutoring System for Learning Android Applications UI Development

    OpenAIRE

    Al Rekhawi, Hazem Awni; Abu Naser, Samy S

    2018-01-01

    The paper describes the design of a web-based intelligent tutoring system for teaching Android applications development to students, to overcome the difficulties they face. The basic idea of this system is a systematic introduction to the concept of Android application development. The system presents the topic of Android application development and administers automatically generated problems for the students to solve. The system is automatically adapted at run time ...

  17. Strategic latency and warning. Private sector perspectives on current intelligence challenges in science and technology

    Energy Technology Data Exchange (ETDEWEB)

    Davis, Zachary [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Gac, Frank [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Nacht, Michael [Univ. of California, Berkeley, CA (United States)

    2016-01-08

    Lawrence Livermore National Laboratory and National Intelligence University convened a group of business experts to examine parallels between S&T competition in the marketplace and science and technology intelligence (S&TI). The experts identified the centrality of people — individuals and connected groups — to the successful development and application of latent S&T capabilities. People may indeed be more important to recognizing S&T potential than deep knowledge of any particular technology. This report explores the significance of this key insight for S&TI.

  18. Artificial intelligence, neural network, and Internet tool integration in a pathology workstation to improve information access

    Science.gov (United States)

    Sargis, J. C.; Gray, W. A.

    1999-03-01

    The APWS allows user-friendly access to several legacy systems which would normally each demand domain expertise for proper utilization. The generalized model, including objects, classes, strategies and patterns, is presented. The core components of the APWS are the Microsoft Windows 95 operating system, Oracle, Oracle Power Objects, artificial intelligence tools, a medical hyperlibrary and a web site. The paper includes a discussion of how information access could be automated by taking advantage of the expert system, object-oriented programming and intelligent relational database tools within the APWS.

  19. Nonlinear finite element modeling of concrete deep beams with openings strengthened with externally-bonded composites

    International Nuclear Information System (INIS)

    Hawileh, Rami A.; El-Maaddawy, Tamer A.; Naser, Mohannad Z.

    2012-01-01

    Highlights: ► A 3D nonlinear FE model of RC deep beams with web openings is developed. ► We used cohesion elements to simulate bond. ► The developed FE model is suitable for analysis of such complex structures. -- Abstract: This paper aims to develop 3D nonlinear finite element (FE) models for reinforced concrete (RC) deep beams containing web openings and strengthened in shear with carbon fiber reinforced polymer (CFRP) composite sheets. The web openings interrupted the natural load path either fully or partially. The FE models adopted realistic material constitutive laws that account for the nonlinear behavior of materials. In the FE models, solid elements for concrete, multi-layer shell elements for CFRP and link elements for steel reinforcement were used to simulate the physical models. Special interface elements were implemented in the FE models to simulate the interfacial bond behavior between the concrete and CFRP composites. A comparison between the FE results and experimental data published in the literature demonstrated the validity of the computational models in capturing the structural response of both unstrengthened and CFRP-strengthened deep beams with openings. The developed FE models can serve as a numerical platform for performance prediction of RC deep beams with openings strengthened in shear with CFRP composites.

  20. Automated intelligent emergency assessment of GTA pipeline events

    Energy Technology Data Exchange (ETDEWEB)

    Asgary, Ali; Ghaffari, Alireza; Kong, Albert [University of York, Toronto, (Canada)

    2010-07-01

    The traditional approach used for risk assessment in pipeline operations is stochastic, using probabilities of events. This paper reports on an investigation into the deployment of an automated intelligent reasoning system used in decision support for risk assessments related to oil and gas emergencies in the Greater Toronto Area (GTA). The study evaluated the use of fuzzy inference rules encoded using JESS and FuzzyJ to develop a risk assessment system. Real-time data from web services such as weather, Geographic Information Systems (GIS) and Supervisory Control and Data Acquisition (SCADA) systems were used. The study took into consideration the most recent communications infrastructure and technologies, involving the most advanced human-machine interface (HMI) access via hypertext transfer protocol (HTTP). This new approach will support decision making in emergency response scenarios. The study showed that the convergence of several technologies may change the automated intelligence system design paradigm.

  1. Regionalization: The Cure for an Ailing Intelligence Career Field

    Science.gov (United States)

    2013-03-01

    lenses, and it must resist judging the world as if it operated along the same principles and values that drive America. A competent strategic … microeconomics – an officer must spend years working within the region and studying within the network of people with long dwell time and deep … nurturing those relationships. One of MG Flynn's principal initiatives for intelligence improvements in Afghanistan directed the analysts to divide their

  2. Parasites in food webs: the ultimate missing links

    Science.gov (United States)

    Lafferty, Kevin D.; Allesina, Stefano; Arim, Matias; Briggs, Cherie J.; De Leo, Giulio A.; Dobson, Andrew P.; Dunne, Jennifer A.; Johnson, Pieter T.J.; Kuris, Armand M.; Marcogliese, David J.; Martinez, Neo D.; Memmott, Jane; Marquet, Pablo A.; McLaughlin, John P.; Mordecai, Erin A.; Pascual, Mercedes; Poulin, Robert; Thieltges, David W.

    2008-01-01

    Parasitism is the most common consumer strategy among organisms, yet only recently has there been a call for the inclusion of infectious disease agents in food webs. The value of this effort hinges on whether parasites affect food-web properties. Increasing evidence suggests that parasites have the potential to uniquely alter food-web topology in terms of chain length, connectance and robustness. In addition, parasites might affect food-web stability, interaction strength and energy flow. Food-web structure also affects infectious disease dynamics because parasites depend on the ecological networks in which they live. Empirically, incorporating parasites into food webs is straightforward. We may start with existing food webs and add parasites as nodes, or we may try to build food webs around systems for which we already have a good understanding of infectious processes. In the future, perhaps researchers will add parasites while they construct food webs. Less clear is how food-web theory can accommodate parasites. This is a deep and central problem in theoretical biology and applied mathematics. For instance, is representing parasites with complex life cycles as a single node equivalent to representing other species with ontogenetic niche shifts as a single node? Can parasitism fit into fundamental frameworks such as the niche model? Can we integrate infectious disease models into the emerging field of dynamic food-web modelling? Future progress will benefit from interdisciplinary collaborations between ecologists and infectious disease biologists.

  3. Advancing Autonomous Operations for Deep Space Vehicles

    Science.gov (United States)

    Haddock, Angie T.; Stetson, Howard K.

    2014-01-01

    Starting in January 2012, the Advanced Exploration Systems (AES) Autonomous Mission Operations (AMO) Project began to investigate the ability to create and execute "single button" crew-initiated autonomous activities [1]. NASA Marshall Space Flight Center (MSFC) designed and built a fluid transfer hardware test-bed to use as a sub-system target for the investigation of intelligent procedures that would command and control a fluid transfer test-bed, perform self-monitoring during fluid transfers, detect anomalies and faults, isolate the fault and recover the function of the procedure being executed, all without operator intervention. In addition to the development of intelligent procedures, the team is also exploring various methods for autonomous activity execution, where a planned timeline of activities is executed autonomously, as well as an initial analysis of crew procedure development. This paper will detail the development of intelligent procedures for the NASA MSFC Autonomous Fluid Transfer System (AFTS) as well as the autonomous plan execution capabilities being investigated. Manned deep space missions, with extreme communication delays with Earth-based assets, present significant challenges for what the on-board procedure content will encompass as well as for the planned execution of the procedures.

  4. DOORS to the semantic web and grid with a PORTAL for biomedical computing.

    Science.gov (United States)

    Taswell, Carl

    2008-03-01

    The semantic web remains in the early stages of development. It has not yet achieved the goals envisioned by its founders as a pervasive web of distributed knowledge and intelligence. Success will be attained when a dynamic synergism can be created between people and a sufficient number of infrastructure systems and tools for the semantic web, in analogy with those for the original web. The domain name system (DNS), web browsers, and the benefits of publishing web pages motivated many people to register domain names and publish web sites on the original web. An analogous resource label system, semantic search applications, and the benefits of collaborative semantic networks will motivate people to register resource labels and publish resource descriptions on the semantic web. The Domain Ontology Oriented Resource System (DOORS) and Problem Oriented Registry of Tags and Labels (PORTAL) are proposed as infrastructure systems for resource metadata within a paradigm that can serve as a bridge between the original web and the semantic web. The Internet Registry Information Service (IRIS) registers domain names while DNS publishes domain addresses, with mapping of names to addresses, for the original web. Analogously, PORTAL registers resource labels and tags while DOORS publishes resource locations and descriptions, with mapping of labels to locations, for the semantic web. BioPORT is proposed as a prototype PORTAL registry specific for the problem domain of biomedical computing.
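
    The registry/resolver analogy can be made concrete with a toy sketch. In the Python fragment below (labels, locations and record fields are hypothetical, not the actual DOORS/PORTAL implementation), PORTAL plays the role of IRIS by holding registered labels, and DOORS plays the role of DNS by mapping labels to locations and descriptions.

    ```python
    # PORTAL-like registry: the set of registered resource labels.
    portal_registry = {"biomed:atlas-01"}

    # DOORS-like publication: label -> location and description records.
    doors_records = {
        "biomed:atlas-01": {
            "location": "https://example.org/atlas-01",
            "description": "A reference resource for biomedical computing.",
        }
    }

    def resolve(label):
        """Resolve a registered label to its published record, DNS-style."""
        if label not in portal_registry:
            raise KeyError(f"label {label!r} is not registered")
        return doors_records[label]

    print(resolve("biomed:atlas-01")["location"])
    ```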

  5. 6th International Conference in Methodologies and intelligent Systems for Technology Enhanced Learning

    CERN Document Server

    Prieta, Fernando; Mascio, Tania; Gennari, Rosella; Rodríguez, Javier; Vittorini, Pierpaolo

    2016-01-01

    The 6th International Conference in Methodologies and Intelligent Systems for Technology Enhanced Learning, held in Seville (Spain), is hosted by the University of Seville from 1st to 3rd June, 2016. The 6th edition of this conference expands the topics of the evidence-based TEL workshop series in order to provide an open forum for discussing intelligent systems for TEL, their roots in novel learning theories, empirical methodologies for their design or evaluation, and stand-alone or web-based solutions. It intends to bring together researchers and developers from industry, the education field and the academic world to report on the latest scientific research, technical advances and methodologies.

  6. DeepARG: a deep learning approach for predicting antibiotic resistance genes from metagenomic data.

    Science.gov (United States)

    Arango-Argoty, Gustavo; Garner, Emily; Pruden, Amy; Heath, Lenwood S; Vikesland, Peter; Zhang, Liqing

    2018-02-01

    DeepARG models and database are available as a command-line version and as a Web service at http://bench.cs.vt.edu/deeparg.

  7. A Chatbot as a Natural Web Interface to Arabic Web QA

    Directory of Open Access Journals (Sweden)

    Bayan Abu Shawar

    2011-03-01

    Full Text Available In this paper, we describe a way to access an Arabic Web Question Answering (QA) corpus using a chatbot, without the need for sophisticated natural language processing or logical inference. Any natural language (NL) interface to a question answering (QA) system is constrained to reply with the given answers, so there is no need for NL generation to recreate well-formed answers, or for deep analysis or logical inference to map user input questions onto a logical ontology; a simple (but large) set of pattern-template matching rules will suffice. In previous research, this approach worked properly with English and other European languages. In this paper, we examine how the same chatbot performs on an Arabic Web QA corpus. Initial results show that 93% of answers were correct, but because of many characteristics of the Arabic language, changing Arabic questions into other forms may lead to no answers.
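
    The "simple (but large) set of pattern-template matching rules" can be illustrated in a few lines. This Python sketch (the stored question-answer pairs are hypothetical) matches a user question against stored patterns and returns the canned answer of the closest match, with no deeper analysis or inference, which is exactly the constraint the abstract describes.

    ```python
    import difflib

    # Hypothetical pattern -> template pairs; a real system holds thousands.
    QA_PAIRS = {
        "what is the deep web": "The deep web consists of pages not indexed by search engines.",
        "how do web crawlers work": "Crawlers follow hyperlinks and index the pages they fetch.",
    }

    def chatbot_reply(user_question, cutoff=0.6):
        """Return the stored answer whose pattern best matches the question."""
        query = user_question.lower().strip("?! .")
        match = difflib.get_close_matches(query, list(QA_PAIRS), n=1, cutoff=cutoff)
        return QA_PAIRS[match[0]] if match else "No answer found."

    print(chatbot_reply("What is the Deep Web?"))
    ```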

  8. Automating Deep Space Network scheduling and conflict resolution

    Science.gov (United States)

    Johnston, Mark D.; Clement, Bradley

    2005-01-01

    The Deep Space Network (DSN) is a central part of NASA's infrastructure for communicating with active space missions, from Earth orbit to beyond the solar system. We describe our recent work in modeling the complexities of user requirements, and then scheduling and resolving conflicts on that basis. We emphasize our innovative use of background "intelligent assistants" that carry out search asynchronously while the user is focusing on various aspects of the schedule.

  9. JAMSTEC E-library of Deep-sea Images (J-EDI) Realizes a Virtual Journey to the Earth's Unexplored Deep Ocean

    Science.gov (United States)

    Sasaki, T.; Azuma, S.; Matsuda, S.; Nagayama, A.; Ogido, M.; Saito, H.; Hanafusa, Y.

    2016-12-01

    The Japan Agency for Marine-Earth Science and Technology (JAMSTEC) archives a large amount of deep-sea research videos and photos obtained by JAMSTEC's research submersibles and vehicles with cameras. The web site "JAMSTEC E-library of Deep-sea Images: J-EDI" (http://www.godac.jamstec.go.jp/jedi/e/) has made videos and photos available to the public via the Internet since 2011. Users can search for target videos and photos by keywords, easy-to-understand icons, and dive information at J-EDI because operating staff classify videos and photos by content, e.g. living organisms and geological environment, and add comments to them. Dive survey data including videos and photos are not only valuable academically but also helpful for education and outreach activities. With the aim of improving visibility for broader communities, this year we added new functions for 3-dimensional display that synchronize various dive survey data with videos. New functions: Users can search for dive survey data on 3D maps with plotted dive points using the WebGL virtual map engine "Cesium". By selecting a dive point, users can watch deep-sea videos and photos and associated environmental data, e.g. water temperature, salinity, rock and biological sample photos, obtained by the dive survey. Users can browse a dive track visualized in a 3D virtual space using the WebGL JavaScript library. By synchronizing this virtual dive track with videos, users can watch deep-sea videos recorded at a point on a dive track. Users can play an animation in which a submersible-shaped polygon automatically traces a 3D virtual dive track while displays of dive survey data are synchronized with the trace. Users can directly refer to additional information from other JAMSTEC data sites, such as the marine biodiversity database, marine biological sample database, rock sample database, and cruise and dive information database, on each page where a 3D virtual dive track is displayed. A 3D visualization of a dive

  10. 12th International Conference on Intelligent Information Hiding and Multimedia Signal Processing

    CERN Document Server

    Tsai, Pei-Wei; Huang, Hsiang-Cheh

    2017-01-01

    This volume of Smart Innovation, Systems and Technologies contains accepted papers presented at IIH-MSP 2016, the 12th International Conference on Intelligent Information Hiding and Multimedia Signal Processing. The conference this year was technically co-sponsored by the Tainan Chapter of the IEEE Signal Processing Society, Fujian University of Technology, Chaoyang University of Technology, Taiwan Association for Web Intelligence Consortium, Fujian Provincial Key Laboratory of Big Data Mining and Applications (Fujian University of Technology), and Harbin Institute of Technology Shenzhen Graduate School. IIH-MSP 2016 was held on 21-23 November 2016 in Kaohsiung, Taiwan. The conference is an international forum for researchers and professionals in all areas of information hiding and multimedia signal processing.

  11. High-speed railway real-time localization auxiliary method based on deep neural network

    Science.gov (United States)

    Chen, Dongjie; Zhang, Wensheng; Yang, Yang

    2017-11-01

    High-speed railway intelligent monitoring and management systems are composed of schedule integration, geographic information, location services, and data mining technology for the integration of time and space data. Assistant localization is a significant submodule of the intelligent monitoring system. In practical applications, the general approach is to capture image sequences of the components using a high-definition camera, together with digital image processing and methods for target detection, tracking and even behavior analysis. In this paper, we present an end-to-end character recognition method, based on a deep CNN called YOLO-toc, for high-speed railway pillar plate numbers. Different from other deep CNNs, YOLO-toc is an end-to-end multi-target detection framework; furthermore, it exhibits state-of-the-art real-time detection performance, with nearly 50 fps achieved on a GPU (GTX960). Finally, we realize a real-time yet high-accuracy pillar plate number recognition system and integrate natural-scene OCR into a dedicated classification YOLO-toc model.
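
    One standard building block of YOLO-style detectors such as YOLO-toc is non-maximum suppression (NMS), which keeps only the highest-confidence box among heavily overlapping detections of the same character. A minimal sketch follows (Python; the box format and threshold are illustrative assumptions, not details taken from the paper).

    ```python
    def iou(a, b):
        """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
        ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
        ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
        inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
        area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
        union = area(a) + area(b) - inter
        return inter / union if union else 0.0

    def nms(boxes, scores, threshold=0.5):
        """Greedily keep top-scoring boxes, dropping overlaps above threshold."""
        order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
        keep = []
        for i in order:
            if all(iou(boxes[i], boxes[j]) < threshold for j in keep):
                keep.append(i)
        return keep

    # Two overlapping candidates for the same digit; the weaker one is dropped.
    print(nms([(0, 0, 10, 10), (1, 1, 11, 11), (20, 20, 30, 30)], [0.9, 0.6, 0.8]))
    ```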

  12. Food-web dynamics and isotopic niches in deep-sea communities residing in a submarine canyon and on the adjacent open slopes

    Science.gov (United States)

    Demopoulos, Amanda W.J.; McClain-Counts, Jennifer; Ross, Steve W.; Brooke, Sandra; Mienis, Furu

    2017-01-01

    Examination of food webs and trophic niches provide insights into organisms' functional ecology, yet few studies have examined trophodynamics within submarine canyons, where the interaction of canyon morphology and oceanography influences habitat provision and food deposition. Using stable isotope analysis and Bayesian ellipses, we documented deep-sea food-web structure and trophic niches in Baltimore Canyon and the adjacent open slopes in the US Mid-Atlantic Region. Results revealed isotopically diverse feeding groups, comprising approximately 5 trophic levels. Regression analysis indicated that consumer isotope data are structured by habitat (canyon vs. slope), feeding group, and depth. Benthic feeders were enriched in 13C and 15N relative to suspension feeders, consistent with consuming older, more refractory organic matter. In contrast, canyon suspension feeders had the largest and more distinct isotopic niche, indicating they consume an isotopically discrete food source, possibly fresher organic material. The wider isotopic niche observed for canyon consumers indicated the presence of feeding specialists and generalists. High dispersion in δ13C values for canyon consumers suggests that the isotopic composition of particulate organic matter changes, which is linked to depositional dynamics, resulting in discrete zones of organic matter accumulation or resuspension. Heterogeneity in habitat and food availability likely enhances trophic diversity in canyons. Given their abundance in the world's oceans, our results from Baltimore Canyon suggest that submarine canyons may represent important havens for trophic diversity.

  13. A New Dimension of Business Intelligence: Location-based Intelligence

    OpenAIRE

    Zeljko Panian

    2012-01-01

    In the course of this paper we define Location-based Intelligence (LBI), which grows out of the amalgamation of geolocation and Business Intelligence. Amalgamating geolocation with traditional Business Intelligence (BI) results in a new dimension of BI named Location-based Intelligence. LBI is defined as leveraging unified location information for business intelligence. Collectively, enterprises can transform location data into business intelligence applic...

  14. A Proposed Smart E-Learning System Using Cloud Computing Services: PaaS, IaaS and Web 3.0

    Directory of Open Access Journals (Sweden)

    Dr. Mona M. Nasr

    2012-09-01

    Full Text Available E-learning systems need to improve their infrastructure, which can devote the required computation and storage resources to e-learning systems. Microsoft cloud computing technologies, although in their early stages, have managed to change the way applications are developed and accessed. The objective of the paper is to combine various technologies to design an architecture for e-learning systems. Web 3.0 uses widget aggregation, intelligent retrieval, user interest modeling and semantic annotation. These technologies are aimed at running applications as services over the internet on a flexible infrastructure. Cloud computing provides a low-cost solution for academic institutions, their researchers, faculty and learners. In this paper we integrate cloud computing as a platform with Web 3.0 to build intelligent e-learning systems.

  15. Species diversity variations in Neogene deep-sea benthic

    Indian Academy of Sciences (India)

    Some species of benthic foraminifera are sensitive to changes in water mass properties, whereas others are sensitive to organic fluxes and deep-sea oxygenation. Benthic faunal diversity has been found to be closely linked to the food web, bottom-water oxygen levels, and substrate and water mass stability. The present study is ...

  16. Facilitating Multiple Intelligences Through Multimodal Learning Analytics

    Directory of Open Access Journals (Sweden)

    Ayesha PERVEEN

    2018-01-01

    Full Text Available This paper develops a theoretical framework for employing learning analytics in online education to trace multiple learning variations of online students, considering their potential in terms of multiple intelligences, based on Howard Gardner’s 1983 theory of multiple intelligences. The study first emphasizes the need for online education systems to facilitate students’ multiple intelligences and then suggests a framework for an advanced form of learning analytics, i.e., multimodal learning analytics, for tracing and facilitating multiple intelligences while students are engaged in online ubiquitous learning. As multimodal learning analytics is still an evolving area, it poses many challenges for technologists, educationists and organizational managers. Learning analytics makes machines meet humans; therefore, educationists with expertise in learning theories can help technologists devise the latest technological methods for multimodal learning analytics, and organizational managers can implement them for the improvement of online education. A careful instructional design, based on a deep understanding of students’ learning abilities, is therefore required to develop teaching plans and technological possibilities for monitoring students’ learning paths. This is how learning analytics can help design an adaptive instructional design based on a quick analysis of the data gathered. Based on that analysis, academicians can critically reflect upon the quick or delayed implementation of the existing instructional design, based on students’ cognitive abilities, or even about single- or double-loop learning design. The researcher concludes that online education is multimodal in nature, has the capacity to endorse multiliteracies, and that multiple intelligences can therefore be tracked and facilitated through multimodal learning analytics in an online mode. However, online teachers’ training both in technological implementations and

  17. Intelligence analysis – the royal discipline of Competitive Intelligence

    Directory of Open Access Journals (Sweden)

    František Bartes

    2011-01-01

    Full Text Available The aim of this article is to propose a working methodology for Competitive Intelligence teams in one specific area of the intelligence cycle, the so-called “intelligence analysis”. Intelligence analysis is one of the stages of the intelligence cycle, in which data from both primary and secondary research are analyzed. The main result of the effort is the creation of added value for the information collected. Company Competitive Intelligence, correctly understood and implemented in business practice, is the “forecasting of the future”: forecasting about the future that forms the basis for strategic decisions made by the company’s top management. To implement that requirement in corporate practice, the author perceives Competitive Intelligence as a systemic application discipline. This approach allows him to propose a “Work Plan” for Competitive Intelligence as a fundamental standardized document to steer Competitive Intelligence team activities. The author divides the Competitive Intelligence team work plan into five basic parts, derived from the five-stage model of the intelligence cycle, which, in the author’s opinion, is more appropriate for complicated cases of Competitive Intelligence.

  18. Biodiversity maintenance in food webs with regulatory environmental feedbacks.

    Science.gov (United States)

    Bagdassarian, Carey K; Dunham, Amy E; Brown, Christopher G; Rauscher, Daniel

    2007-04-21

    Although the food web is one of the most fundamental and oldest concepts in ecology, elucidating the strategies and structures by which natural communities of species persist remains a challenge to empirical and theoretical ecologists. We show that simple regulatory feedbacks between autotrophs and their environment when embedded within complex and realistic food-web models enhance biodiversity. The food webs are generated through the niche-model algorithm and coupled with predator-prey dynamics, with and without environmental feedbacks at the autotroph level. With high probability and especially at lower, more realistic connectance levels, regulatory environmental feedbacks result in fewer species extinctions, that is, in increased species persistence. These same feedback couplings, however, also sensitize food webs to environmental stresses leading to abrupt collapses in biodiversity with increased forcing. Feedback interactions between species and their material environments anchor food-web persistence, adding another dimension to biodiversity conservation. We suggest that the regulatory features of two natural systems, deep-sea tubeworms with their microbial consortia and a soil ecosystem manifesting adaptive homeostatic changes, can be embedded within niche-model food-web dynamics.
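
    The qualitative effect can be pictured with a toy model. The Python sketch below is an illustrative assumption, not the niche-model food webs of the paper: a predator-prey pair in which the autotroph also regulates an environmental variable E that feeds back on its own growth rate, so the same dynamics can be run with the feedback switched on or off.

    ```python
    def simulate(steps=5000, dt=0.01, feedback=True):
        """Euler-integrate autotroph A, herbivore H and environment E."""
        A, H, E = 1.0, 0.5, 0.2
        for _ in range(steps):
            # Growth is highest when E sits at the set point 0.5 (feedback on).
            growth = 1.0 - 4.0 * (E - 0.5) ** 2 if feedback else 1.0
            dA = growth * A * (1 - A) - 0.8 * A * H
            dH = 0.6 * A * H - 0.4 * H
            dE = 0.1 * A - 0.2 * E          # the autotroph regulates E
            A, H, E = A + dA * dt, H + dH * dt, E + dE * dt
        return round(A, 3), round(H, 3), round(E, 3)

    print("with feedback:   ", simulate(feedback=True))
    print("without feedback:", simulate(feedback=False))
    ```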

  19. Spiritual Intelligence, Emotional Intelligence and Auditor’s Performance

    OpenAIRE

    Hanafi, Rustam

    2010-01-01

    The objective of this research was to investigate empirical evidence about the influence of auditor spiritual intelligence on performance, with emotional intelligence as a mediator variable. Linear regression models and path analysis are developed to examine the hypothesis. The dependent variable of each model is auditor performance, whereas the independent variable of model 1 is spiritual intelligence, and of model 2 emotional intelligence and spiritual intelligence. The parameters were estima...

  20. Network-Capable Application Process and Wireless Intelligent Sensors for ISHM

    Science.gov (United States)

    Figueroa, Fernando; Morris, Jon; Turowski, Mark; Wang, Ray

    2011-01-01

    invention enables wide-area sensing and employs numerous globally distributed sensing devices that observe the physical world through the existing sensor network. This innovation enables distributed storage, distributed processing, distributed intelligence, and the availability of DiaK (Data, Information, and Knowledge) to any element as needed. It also enables the simultaneous execution of multiple processes, and represents models that contribute to the determination of the condition and health of each element in the system. The NCAP (intelligent process) can configure data-collection and filtering processes in reaction to sensed data, allowing it to decide when and how to adapt collection and processing with regard to sophisticated analysis of data derived from multiple sensors. The user will be able to view the sensing device network as a single unit that supports a high-level query language. Each query would be able to operate over data collected from across the global sensor network just as a search query encompasses millions of Web pages. The sensor web can preserve ubiquitous information access between the querier and the queried data. Pervasive monitoring of the physical world raises significant data and privacy concerns. This innovation enables different authorities to control portions of the sensing infrastructure, and sensor service authors may wish to compose services across authority boundaries.
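
    As a hedged sketch of the "single unit" query idea, the Python fragment below runs one high-level query over readings gathered from many sensing devices; the node layout, field names and values are hypothetical, and a real NCAP would distribute this work across the network rather than hold it in one list.

    ```python
    # Hypothetical snapshot of readings collected from distributed nodes.
    NODES = [
        {"id": "n1", "readings": [{"sensor": "temp", "value": 21.5}]},
        {"id": "n2", "readings": [{"sensor": "temp", "value": 87.0},
                                  {"sensor": "pressure", "value": 101.2}]},
    ]

    def query(sensor, predicate):
        """Evaluate one query over the whole network as if it were one unit."""
        return [(node["id"], r["value"])
                for node in NODES for r in node["readings"]
                if r["sensor"] == sensor and predicate(r["value"])]

    # Which nodes report anomalously high temperature?
    print(query("temp", lambda v: v > 60.0))
    ```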

  1. Sentimen Analisis Tweet Berbahasa Indonesia Dengan Deep Belief Network

    Directory of Open Access Journals (Sweden)

    Ira Zulfa

    2017-07-01

    Full Text Available Sentiment analysis is the computational study of opinions, sentiments and emotions expressed in text. Twitter has become the most popular communication medium among internet users. Deep learning is a new area of machine learning research. It aims to move machine learning closer to its main goal, artificial intelligence, by replacing manually engineered features with learned representations. As it has grown, deep learning has produced families of algorithms that focus on non-linear data representations. One such method is the Deep Belief Network (DBN), a stack of several algorithms with extraction features that optimally utilize all resources. This study has two aims. First, it classifies positive, negative, and neutral sentiments in the test data. Second, it determines the accuracy of the classification model built with the Deep Belief Network method, so that it can be applied to tweet classification, highlighting the sentiment class of training-data tweets in Bahasa Indonesia. Based on the experimental results, it can be concluded that the best method for handling the tweet data is the DBN method, with an accuracy of 93.31%, compared with the Naive Bayes method with an accuracy of 79.10% and the SVM (Support Vector Machine) method with an accuracy of 92.18%.
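
    To give a flavor of such a comparison, the sketch below approximates a DBN in scikit-learn as a stack of Restricted Boltzmann Machines feeding a logistic classifier, next to Naive Bayes and linear SVM baselines. The toy English texts and labels are hypothetical stand-ins; the paper's Indonesian tweet corpus and exact DBN training procedure are not reproduced here.

    ```python
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.naive_bayes import MultinomialNB
    from sklearn.neural_network import BernoulliRBM
    from sklearn.pipeline import Pipeline
    from sklearn.svm import LinearSVC

    texts = ["great product, love it", "terrible service, very bad",
             "love the quality", "bad and disappointing", "quite good overall"]
    labels = [1, 0, 1, 0, 1]  # toy sentiment labels

    models = {
        "DBN-like": Pipeline([
            ("vec", CountVectorizer(binary=True)),   # binary word features
            ("rbm1", BernoulliRBM(n_components=16, n_iter=20, random_state=0)),
            ("rbm2", BernoulliRBM(n_components=8, n_iter=20, random_state=0)),
            ("clf", LogisticRegression()),
        ]),
        "NaiveBayes": Pipeline([("vec", CountVectorizer()), ("clf", MultinomialNB())]),
        "SVM": Pipeline([("vec", CountVectorizer()), ("clf", LinearSVC())]),
    }
    for name, model in models.items():
        model.fit(texts, labels)
        print(name, model.score(texts, labels))  # training accuracy on toy data
    ```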

  2. Multidimensional Learner Model In Intelligent Learning System

    Science.gov (United States)

    Deliyska, B.; Rozeva, A.

    2009-11-01

    The learner model in an intelligent learning system (ILS) has to ensure the personalization (individualization) and adaptability of e-learning in an online learner-centered environment. An ILS is a distributed e-learning system whose modules can be independent and located in different nodes (servers) on the Web. This kind of e-learning is achieved through the resources of the Semantic Web and is designed and developed around a course, group of courses or specialty. An essential part of an ILS is the learner model database, which contains structured data about the learner profile and temporal status in the learning process of one or more courses. In the paper, the position of the learner model in an ILS is considered and a relational database is designed from the learner's domain ontology. A multidimensional modeling agent for the source database is designed and the resultant learner data cube is presented. The agent's modules are proposed with corresponding algorithms and procedures. Guidelines for multidimensional (OLAP) analysis of the resultant learner model, for designing dynamic learning strategies, are highlighted.
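
    The "learner data cube" can be pictured as an OLAP-style roll-up of a learner-activity fact table. A minimal pandas sketch follows; the dimension and measure names are hypothetical, since the paper derives its actual schema from the learner domain ontology.

    ```python
    import pandas as pd

    # Hypothetical fact table of learner activity (the source database role).
    log = pd.DataFrame({
        "learner": ["ana", "ana", "ben", "ben"],
        "course":  ["math", "web", "math", "math"],
        "module":  ["m1", "m1", "m1", "m2"],
        "score":   [78, 90, 65, 80],
        "minutes": [40, 25, 55, 30],
    })

    # Roll up along the learner x course dimensions, aggregating both measures.
    cube = pd.pivot_table(log, index="learner", columns="course",
                          values=["score", "minutes"], aggfunc="mean")
    print(cube)
    ```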

  3. Cognitive computing and eScience in health and life science research: artificial intelligence and obesity intervention programs.

    Science.gov (United States)

    Marshall, Thomas; Champagne-Langabeer, Tiffiany; Castelli, Darla; Hoelscher, Deanna

    2017-12-01

    To present research models based on artificial intelligence and discuss the concept of cognitive computing and eScience as disruptive factors in health and life science research methodologies. The paper identifies big data as a catalyst to innovation and the development of artificial intelligence, presents a framework for computer-supported human problem solving and describes a transformation of research support models. This framework includes traditional computer support; federated cognition using machine learning and cognitive agents to augment human intelligence; and a semi-autonomous/autonomous cognitive model, based on deep machine learning, which supports eScience. The paper provides a forward view of the impact of artificial intelligence on our human-computer support and research methods in health and life science research. By augmenting or amplifying human task performance with artificial intelligence, cognitive computing and eScience research models are discussed as novel and innovative systems for developing more effective adaptive obesity intervention programs.

  4. Artificial Intelligence in planetary spectroscopy

    Science.gov (United States)

    Waldmann, Ingo

    2017-10-01

    The field of exoplanetary spectroscopy is as fast-moving as it is new. Analysing currently available observations of exoplanetary atmospheres often involves large and correlated parameter spaces that can be difficult to map or constrain. This is true both for the data analysis of observations and for the theoretical modelling of their atmospheres. Issues of low signal-to-noise data and large, non-linear parameter spaces are nothing new and are commonly found in many fields of engineering and the physical sciences. Recent years have seen vast improvements in statistical data analysis and machine learning that have revolutionised fields as diverse as telecommunication, pattern recognition, medical physics and cosmology. In many respects, the data mining and non-linearity challenges encountered in other data-intensive fields are directly transferable to the field of extrasolar planets. In this talk, I will discuss how deep neural networks can be designed to facilitate solving these issues both for exoplanet atmospheres and for atmospheres in our own solar system. I will present a deep belief network, RobERt (Robotic Exoplanet Recognition), able to learn to recognise exoplanetary spectra and provide artificial intelligence to state-of-the-art atmospheric retrieval algorithms. Furthermore, I will present a new deep convolutional network that is able to map planetary surface compositions using hyper-spectral imaging and demonstrate its use on Cassini-VIMS data of Saturn.

  5. Emerging trends in geospatial artificial intelligence (geoAI): potential applications for environmental epidemiology.

    Science.gov (United States)

    VoPham, Trang; Hart, Jaime E; Laden, Francine; Chiang, Yao-Yi

    2018-04-17

    Geospatial artificial intelligence (geoAI) is an emerging scientific discipline that combines innovations in spatial science, artificial intelligence methods in machine learning (e.g., deep learning), data mining, and high-performance computing to extract knowledge from spatial big data. In environmental epidemiology, exposure modeling is a commonly used approach to conduct exposure assessment to determine the distribution of exposures in study populations. geoAI technologies provide important advantages for exposure modeling in environmental epidemiology, including the ability to incorporate large amounts of big spatial and temporal data in a variety of formats; computational efficiency; flexibility in algorithms and workflows to accommodate relevant characteristics of spatial (environmental) processes including spatial nonstationarity; and scalability to model other environmental exposures across different geographic areas. The objectives of this commentary are to provide an overview of key concepts surrounding the evolving and interdisciplinary field of geoAI including spatial data science, machine learning, deep learning, and data mining; recent geoAI applications in research; and potential future directions for geoAI in environmental epidemiology.

  6. Mining for Strategic Competitive Intelligence Foundations and Applications

    CERN Document Server

    Ziegler, Cai-Nicolas

    2012-01-01

    The textbook at hand aims to provide an introduction to the use of automated methods for gathering strategic competitive intelligence. Hereby, the text does not describe a singleton research discipline in its own right, such as machine learning or Web mining. It rather contemplates an application scenario, namely the gathering of knowledge that appears of paramount importance to organizations, e.g., companies and corporations. To this end, the book first summarizes the range of research disciplines that contribute to addressing the issue, extracting from each those grains that are of utmost relevance to the depicted application scope. Moreover, the book presents systems that put these techniques to practical use (e.g., reputation monitoring platforms) and takes an inductive approach to define the gestalt of mining for competitive strategic intelligence by selecting major use cases that are laid out and explained in detail. These pieces form the first part of the book. Each of those use cases is backed by a nu...

  7. Using business intelligence for efficient inter-facility patient transfer.

    Science.gov (United States)

    Haque, Waqar; Derksen, Beth Ann; Calado, Devin; Foster, Lee

    2015-01-01

    In the context of inter-facility patient transfer, a transfer operator must be able to objectively identify a destination which meets the needs of a patient, while keeping in mind each facility's limitations. We propose a solution which uses Business Intelligence (BI) techniques to analyze data related to healthcare infrastructure and services, and provides a web-based system to identify optimal destination(s). The proposed inter-facility transfer system uses a single data warehouse with an Online Analytical Processing (OLAP) cube built on top that supplies analytical data to multiple reports embedded in web pages. The data visualization tool includes map-based navigation of the health authority as well as an interactive filtering mechanism which finds facilities meeting the selected criteria. The data visualization is backed by an intuitive data-entry web form which safely constrains the data, ensuring consistency and a single version of the truth. The overall time required to identify the destination for inter-facility transfers is reduced from hours to a few minutes with this interactive solution.
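
    As a rough illustration of the OLAP-style "slice and dice" such a cube supplies to embedded reports, the sketch below uses pandas in place of a real data warehouse; all column names and rows are invented, not the system's actual schema.

```python
# A minimal sketch of OLAP-style aggregation and interactive filtering,
# using an in-memory DataFrame in place of the paper's data warehouse
# and cube. Column names and data are hypothetical.
import pandas as pd

facilities = pd.DataFrame([
    {"facility": "A", "region": "North", "service": "ICU",    "beds_free": 2},
    {"facility": "B", "region": "North", "service": "Stroke", "beds_free": 0},
    {"facility": "C", "region": "South", "service": "ICU",    "beds_free": 5},
])

# "Slice and dice": free capacity by region and service, as a report
# page might request it from the cube.
cube_view = facilities.pivot_table(index="region", columns="service",
                                   values="beds_free", aggfunc="sum",
                                   fill_value=0)
print(cube_view)

# Interactive filtering: facilities meeting the selected criteria.
print(facilities[(facilities.service == "ICU") & (facilities.beds_free > 0)])
```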

  8. Artificial intelligence in diagnosis of obstructive lung disease: current status and future potential.

    Science.gov (United States)

    Das, Nilakash; Topalovic, Marko; Janssens, Wim

    2018-03-01

    The application of artificial intelligence in the diagnosis of obstructive lung diseases is an exciting phenomenon. Artificial intelligence algorithms work by finding patterns in data obtained from diagnostic tests, which can be used to predict clinical outcomes or to detect obstructive phenotypes. The purpose of this review is to describe the latest trends and to discuss the future potential of artificial intelligence in the diagnosis of obstructive lung diseases. Machine learning has been successfully used in automated interpretation of pulmonary function tests for differential diagnosis of obstructive lung diseases. Deep learning models such as convolutional neural networks are state-of-the-art for obstructive pattern recognition in computed tomography. Machine learning has also been applied in other diagnostic approaches such as forced oscillation tests, breath analysis, lung sound analysis and telemedicine, with promising results in small-scale studies. Overall, the application of artificial intelligence has produced encouraging results in the diagnosis of obstructive lung diseases. However, large-scale studies are still required to validate current findings and to boost its adoption by the medical community.

  9. Deep-Sea Microbes: Linking Biogeochemical Rates to -Omics Approaches

    Science.gov (United States)

    Herndl, G. J.; Sintes, E.; Bayer, B.; Bergauer, K.; Amano, C.; Hansman, R.; Garcia, J.; Reinthaler, T.

    2016-02-01

    Over the past decade substantial progress has been made in determining deep-ocean microbial activity and resolving some of the enigmas in understanding the deep-ocean carbon flux. Metagenomics approaches have also shed light on the dark ocean's microbes, but linking -omics approaches to biogeochemical rate measurements is generally rare in microbial oceanography, and even more so for the deep ocean. In this presentation, we will show, by combining metagenomics, metaproteomics and biogeochemical rate measurements at the bulk and single-cell level, that deep-sea microbes exhibit characteristics of generalists with a large genome repertoire, versatile in utilizing substrates as revealed by metaproteomics. This is in striking contrast with the apparently rather uniform dissolved organic matter pool in the deep ocean. Combining the different -omics approaches with metabolic rate measurements, we will highlight some major inconsistencies and enigmas in our understanding of carbon cycling and microbial food web structure in the dark ocean.

  10. Intelligence analysis – the royal discipline of Competitive Intelligence

    OpenAIRE

    František Bartes

    2011-01-01

    The aim of this article is to propose a working methodology for Competitive Intelligence teams in one specific area of the intelligence cycle, the so-called "Intelligence Analysis". Intelligence Analysis is the stage of the Intelligence Cycle in which data from both primary and secondary research are analyzed. The main result of the effort is the creation of added value for the information collected. Company Competitive Intelligence, correctly understood and implemented in busines...

  11. Extracting Databases from Dark Data with DeepDive.

    Science.gov (United States)

    Zhang, Ce; Shin, Jaeho; Ré, Christopher; Cafarella, Michael; Niu, Feng

    2016-01-01

    DeepDive is a system for extracting relational databases from dark data: the mass of text, tables, and images that are widely collected and stored but which cannot be exploited by standard relational tools. If the information in dark data - scientific papers, Web classified ads, customer service notes, and so on - were instead in a relational database, it would give analysts a massive and valuable new set of "big data." DeepDive is distinctive when compared to previous information extraction systems in its ability to obtain very high precision and recall at reasonable engineering cost; in a number of applications, we have used DeepDive to create databases with accuracy that meets that of human annotators. To date we have successfully deployed DeepDive to create data-centric applications for insurance, materials science, genomics, paleontology, law enforcement, and others. The data unlocked by DeepDive represents a massive opportunity for industry, government, and scientific researchers. DeepDive is enabled by an unusual design that combines large-scale probabilistic inference with a novel developer interaction cycle. This design is enabled by several core innovations around probabilistic training and inference.
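
    The toy sketch below illustrates only the candidate-extraction flavour of such systems, not DeepDive's actual declarative language or inference engine; the texts, the tiny gazetteer and the fixed confidence value are all invented for illustration.

```python
# A toy illustration of turning "dark data" text into relational rows.
# DeepDive itself couples declarative rules with large-scale
# probabilistic inference; this sketch only shows a candidate
# extraction step with a hand-written pattern and a crude confidence.
import re

TEXTS = [
    "Fossil of Tyrannosaurus rex found in Hell Creek Formation.",
    "A Triceratops specimen was recovered from the Lance Formation.",
]
TAXA = ["Tyrannosaurus rex", "Triceratops"]          # tiny gazetteer
FORMATION = re.compile(r"[A-Z][a-z]+(?: [A-Z][a-z]+)* Formation")

rows = []
for text in TEXTS:
    match = FORMATION.search(text)
    for taxon in TAXA:
        if match and taxon in text:
            # A real system scores candidates with learned factors;
            # here every match gets a fixed placeholder probability.
            rows.append((taxon, match.group(), 0.9))

for taxon, formation, p in rows:
    print(f"occurs_in({taxon!r}, {formation!r})  p={p}")
```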

  12. 7th International Conference in Methodologies and Intelligent Systems for Technology Enhanced Learning

    CERN Document Server

    Gennari, Rosella; Mascio, Tania; Rodríguez, Sara; Prieta, Fernando; Ramos, Carlos; Silveira, Ricardo

    2017-01-01

    This book presents the outcomes of the 7th International Conference in Methodologies and Intelligent Systems for Technology Enhanced Learning (MIS4TEL'17), hosted by the Polytechnic of Porto, Portugal from 21 to 23 June 2017. Expanding on the topics of the previous conferences, it provided an open forum for discussing intelligent systems for technology enhanced learning (TEL) and their roots in novel learning theories, empirical methodologies for their design or evaluation, stand-alone and web-based solutions, and makerspaces. It also fostered entrepreneurship and business startup ideas, bringing together researchers and developers from industry, education and the academic world to report on the latest scientific research, technical advances and methodologies.

  13. Forecasting rain events - Meteorological models or collective intelligence?

    Science.gov (United States)

    Arazy, Ofer; Halfon, Noam; Malkinson, Dan

    2015-04-01

    Collective intelligence is shared (or group) intelligence that emerges from the collective efforts of many individuals. Collective intelligence is the aggregate of individual contributions: from simple collective decision making to more sophisticated aggregations such as in crowdsourcing and peer-production systems. In particular, collective intelligence can be used to make predictions about future events, for example by using prediction markets to forecast election results, stock prices, or the outcomes of sport events. To date, there is little research regarding the use of collective intelligence for weather forecasting. The objective of this study is to investigate the extent to which collective intelligence can be utilized to accurately predict weather events, and in particular rainfall. Our analyses employ metrics of group intelligence and compare the accuracy of the groups' predictions against the predictions of the standard model used by the National Meteorological Services. We report on preliminary results from a study conducted over the 2013-2014 and 2014-2015 winters. We built a web site that allows people to make predictions of precipitation levels at certain locations. During each competition participants were allowed to enter their precipitation forecasts (i.e. 'bets') at three locations, and these locations changed between competitions. A precipitation competition was defined as a 48-96 hour period (depending on the expected weather conditions), bets were open 24-48 hours prior to the competition, and during the betting period participants were allowed to change their bets without limitation. In order to explore the effect of transparency, betting mechanisms varied across the study sites: full transparency (participants could see each other's bets); partial transparency (participants saw the group's average bet); and no transparency (no information about others' bets was made available). Several interesting findings emerged from
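
    A minimal sketch of the aggregation step behind such comparisons follows, with invented numbers: the crowd's mean and median bets are scored against a reference model forecast and the observed rainfall.

```python
# Sketch of scoring the "wisdom of the crowd" against a reference
# forecast, in the spirit of the study's rain competitions. All numbers
# are made up; the real study compares against the National
# Meteorological Services model.
import statistics

bets_mm = [12.0, 8.5, 20.0, 10.0, 15.5]    # participants' precipitation bets
model_mm = 9.0                             # reference model forecast
observed_mm = 13.2                         # measured precipitation

crowd_mean = statistics.mean(bets_mm)
crowd_median = statistics.median(bets_mm)

for name, forecast in [("crowd mean", crowd_mean),
                       ("crowd median", crowd_median),
                       ("model", model_mm)]:
    error = abs(forecast - observed_mm)
    print(f"{name:12s} {forecast:6.1f} mm   abs. error {error:5.1f} mm")
```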

  14. Web-based Weather Expert System (WES) for Space Shuttle Launch

    Science.gov (United States)

    Bardina, Jorge E.; Rajkumar, T.

    2003-01-01

    The Web-based Weather Expert System (WES) is a critical module of the Virtual Test Bed development to support 'go/no go' decisions for Space Shuttle operations in the Intelligent Launch and Range Operations program of NASA. The weather rules characterize certain aspects of the environment related to the launch or landing site, the time of day or night, the pad or runway conditions, the mission duration, the runway equipment and the landing type. The expert system rules are derived from weather contingency rules, which were developed by NASA over many years. Backward chaining, a goal-directed inference method, is adopted: a particular consequence or goal clause is evaluated first, and then chained backward through the rules. Once a rule is satisfied or true, that particular rule is fired and the decision is expressed. The expert system continuously verifies the rules against the past hour's weather conditions, and decisions are made. The normal procedure of operations requires a formal pre-launch weather briefing held on Launch minus 1 day, which is a specific weather briefing for all areas of Space Shuttle launch operations. In this paper, the Web-based Weather Expert System of the Intelligent Launch and Range Operations program is presented.
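
    A minimal backward-chaining sketch in the spirit of the description above follows; the rules, facts and goal names are illustrative only, not NASA's actual weather contingency rules.

```python
# Minimal backward chaining: to evaluate the goal "launch_no_go", the
# engine works backward through the rules, recursively proving
# antecedents against observed weather facts. Rules and facts here are
# illustrative, not NASA's actual criteria.
RULES = {
    # consequent: list of alternative antecedent sets (an OR of ANDs)
    "launch_no_go": [["lightning_risk"], ["high_wind"]],
    "lightning_risk": [["cumulus_near_pad", "cloud_tops_cold"]],
}

FACTS = {"cumulus_near_pad", "cloud_tops_cold"}   # past-hour observations

def prove(goal, facts, rules):
    """A goal holds if it is a known fact, or if all antecedents of any
    rule concluding it can themselves be proven."""
    if goal in facts:
        return True
    return any(all(prove(a, facts, rules) for a in antecedents)
               for antecedents in rules.get(goal, []))

print(prove("launch_no_go", FACTS, RULES))   # True: the lightning branch fires
```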

  15. Food web transport of trace metals and radionuclides from the deep sea: a review

    International Nuclear Information System (INIS)

    Young, J.S.

    1979-06-01

    This report summarizes aspects of the potential distribution pathways of metals and radionuclides, particularly Co and Ni, through a biological trophic framework after their deposition at 4000 to 5000 meters in the North Atlantic or North Pacific. It discusses (a) the basic, deep-sea trophic structure of eutrophic and oligotrophic regions; (b) the transport pathways of biologically available energy to and from the deep sea, pathways that may act as accumulators and vectors of radionuclide distribution, and (c) distribution routes that have come into question as potential carriers of radionuclides from the deep-sea bed to man

  16. Prototyping a Web-of-Energy Architecture for Smart Integration of Sensor Networks in Smart Grids Domain

    Science.gov (United States)

    Caballero, Víctor; Vernet, David; Zaballos, Agustín; Corral, Guiomar

    2018-01-01

    Sensor networks and the Internet of Things have driven the evolution of traditional electric power distribution networks towards a new paradigm referred to as Smart Grid. However, the different elements that compose the Information and Communication Technologies (ICTs) layer of a Smart Grid are usually conceived as isolated systems that typically result in rigid hardware architectures which are hard to interoperate, manage, and to adapt to new situations. If the Smart Grid paradigm has to be presented as a solution to the demand for distributed and intelligent energy management system, it is necessary to deploy innovative IT infrastructures to support these smart functions. One of the main issues of Smart Grids is the heterogeneity of communication protocols used by the smart sensor devices that integrate them. The use of the concept of the Web of Things is proposed in this work to tackle this problem. More specifically, the implementation of a Smart Grid’s Web of Things, coined as the Web of Energy is introduced. The purpose of this paper is to propose the usage of Web of Energy by means of the Actor Model paradigm to address the latent deployment and management limitations of Smart Grids. Smart Grid designers can use the Actor Model as a design model for an infrastructure that supports the intelligent functions demanded and is capable of grouping and converting the heterogeneity of traditional infrastructures into the homogeneity feature of the Web of Things. Conducted experimentations endorse the feasibility of this solution and encourage practitioners to point their efforts in this direction. PMID:29385748

  17. Prototyping a Web-of-Energy Architecture for Smart Integration of Sensor Networks in Smart Grids Domain.

    Science.gov (United States)

    Caballero, Víctor; Vernet, David; Zaballos, Agustín; Corral, Guiomar

    2018-01-30

    Sensor networks and the Internet of Things have driven the evolution of traditional electric power distribution networks towards a new paradigm referred to as Smart Grid. However, the different elements that compose the Information and Communication Technologies (ICTs) layer of a Smart Grid are usually conceived as isolated systems that typically result in rigid hardware architectures which are hard to interoperate, manage, and to adapt to new situations. If the Smart Grid paradigm has to be presented as a solution to the demand for distributed and intelligent energy management system, it is necessary to deploy innovative IT infrastructures to support these smart functions. One of the main issues of Smart Grids is the heterogeneity of communication protocols used by the smart sensor devices that integrate them. The use of the concept of the Web of Things is proposed in this work to tackle this problem. More specifically, the implementation of a Smart Grid's Web of Things, coined as the Web of Energy is introduced. The purpose of this paper is to propose the usage of Web of Energy by means of the Actor Model paradigm to address the latent deployment and management limitations of Smart Grids. Smart Grid designers can use the Actor Model as a design model for an infrastructure that supports the intelligent functions demanded and is capable of grouping and converting the heterogeneity of traditional infrastructures into the homogeneity feature of the Web of Things. Conducted experimentations endorse the feasibility of this solution and encourage practitioners to point their efforts in this direction.

  18. Prototyping a Web-of-Energy Architecture for Smart Integration of Sensor Networks in Smart Grids Domain

    Directory of Open Access Journals (Sweden)

    Víctor Caballero

    2018-01-01

    Full Text Available Sensor networks and the Internet of Things have driven the evolution of traditional electric power distribution networks towards a new paradigm referred to as Smart Grid. However, the different elements that compose the Information and Communication Technologies (ICTs) layer of a Smart Grid are usually conceived as isolated systems that typically result in rigid hardware architectures which are hard to interoperate, manage, and to adapt to new situations. If the Smart Grid paradigm has to be presented as a solution to the demand for distributed and intelligent energy management system, it is necessary to deploy innovative IT infrastructures to support these smart functions. One of the main issues of Smart Grids is the heterogeneity of communication protocols used by the smart sensor devices that integrate them. The use of the concept of the Web of Things is proposed in this work to tackle this problem. More specifically, the implementation of a Smart Grid’s Web of Things, coined as the Web of Energy is introduced. The purpose of this paper is to propose the usage of Web of Energy by means of the Actor Model paradigm to address the latent deployment and management limitations of Smart Grids. Smart Grid designers can use the Actor Model as a design model for an infrastructure that supports the intelligent functions demanded and is capable of grouping and converting the heterogeneity of traditional infrastructures into the homogeneity feature of the Web of Things. Conducted experimentations endorse the feasibility of this solution and encourage practitioners to point their efforts in this direction.
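
    The three records above describe the same Actor Model approach. As a minimal sketch of that idea, the example below uses Python's asyncio in place of a real actor framework: each device actor hides its native protocol behind a uniform mailbox, so a coordinator can address heterogeneous sensors homogeneously. Device names and messages are hypothetical.

```python
# Actor-style sketch of the Web of Energy idea: heterogeneous devices
# are wrapped in actors with a uniform mailbox interface. A real actor
# would translate each message into the device's native protocol.
import asyncio

class DeviceActor:
    def __init__(self, name):
        self.name = name
        self.mailbox = asyncio.Queue()

    async def run(self):
        while True:
            msg = await self.mailbox.get()
            if msg == "stop":
                break
            # Placeholder for protocol translation and device I/O.
            print(f"{self.name}: handled {msg!r}")

async def main():
    actors = [DeviceActor("smart_meter_1"), DeviceActor("pmu_7")]
    tasks = [asyncio.create_task(a.run()) for a in actors]
    for a in actors:                      # homogeneous interface:
        await a.mailbox.put("read")       # same message, any device
        await a.mailbox.put("stop")
    await asyncio.gather(*tasks)

asyncio.run(main())
```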

  19. Using Web 2.0 technologies to enhance evidence-based medical information.

    Science.gov (United States)

    Metzger, Miriam J; Flanagin, Andrew J

    2011-01-01

    This article invokes research on information seeking and evaluation to address how providers of evidence-based medical information can use Web 2.0 technologies to increase access to, enliven users' experiences with, and enrich the quality of the information available. In an ideal scenario, evidence-based medical information can take appropriate advantage of community intelligence spawned by Web 2.0 technologies, resulting in the ideal combination of scientifically sound, high-quality information that is imbued with experiential insights from a multitude of individuals. To achieve this goal, the authors argue that people will engage with information that they can access easily, and that they perceive as (a) relevant to their information-seeking goals and (b) credible. The authors suggest the utility of Web 2.0 technologies for engaging stakeholders with evidence-based medical information through these mechanisms, and the degree to which the information provided can and should be trusted. Last, the authors discuss potential problems with Web 2.0 information in relation to decision making in health contexts, and they conclude with specific and practical recommendations for the dissemination of evidence-based health information via Web 2.0 technologies.

  20. Challenges and solutions for installing an intelligent completion in offshore deepwater Brazil

    Energy Technology Data Exchange (ETDEWEB)

    Reyes, Alfonso R. [WellDynamics, Spring, TX (United States); Arias, Jose Luiz [PETROBRAS, Rio de Janeiro, RJ (Brazil)

    2004-07-01

    This paper describes an atypical and challenging Intelligent Well Completion (IWC) installed in ultra-deep water (1500-2000 m), offshore Brazil. The well is a water injector designed to selectively control the injection flow rate into two stacked gravel-pack zones. The field is Roncador, approximately 150 kilometers offshore the north-eastern coast of the state of Rio de Janeiro, Brazil. This application is an atypical IWC due to the long distance (approximately 15 km) from the production platform to the well. Intelligent wells have been installed at such distances previously, but never with a direct control umbilical; previous completions used a Subsea Control Module (SCM), or pod, located in the wellhead. Reduced intervention costs are the typical driver for IWC in deep-water applications, but water management is becoming an increasingly common application. The Roncador field development team has taken a novel approach by using IWC to manage water injection in an ultra-deep-water development. The challenge for the project team was to design an IWC system which would accommodate the field infrastructure constraints, require minimal modification to the existing subsea hardware and ensure the necessary flexibility to locate surface equipment without the need for modification to the production facilities. The solution adopted for Roncador 35 is mainly based on an emerging ISO standard for the integration of IWC into Subsea Production Systems. The modular and expandable approach will enable extension of this solution to other wells in the Roncador field.

  1. Big Data Analytics and Machine Intelligence Capability Development at NASA Langley Research Center: Strategy, Roadmap, and Progress

    Science.gov (United States)

    Ambur, Manjula Y.; Yagle, Jeremy J.; Reith, William; McLarney, Edward

    2016-01-01

    In 2014, a team of researchers, engineers and information technology specialists at NASA Langley Research Center developed a Big Data Analytics and Machine Intelligence Strategy and Roadmap as part of Langley's Comprehensive Digital Transformation Initiative, with the goal of identifying the goals, objectives, initiatives, and recommendations needed to develop near-, mid- and long-term capabilities for data analytics and machine intelligence in aerospace domains. Since that time, significant progress has been made in developing pilots and projects in several research, engineering, and scientific domains by following the original strategy of collaboration between mission support organizations, mission organizations, and external partners from universities and industry. This report summarizes the work to date in Data Intensive Scientific Discovery, Deep Content Analytics, and Deep Q&A projects, as well as the progress made in collaboration, outreach, and education. Recommendations for continuing this success into future phases of the initiative are also made.

  2. Students' Evaluation Strategies in a Web Research Task: Are They Sensitive to Relevance and Reliability?

    Science.gov (United States)

    Rodicio, Héctor García

    2015-01-01

    When searching and using resources on the Web, students have to evaluate Web pages in terms of relevance and reliability. This evaluation can be done in a more or less systematic way, by either considering deep or superficial cues of relevance and reliability. The goal of this study was to examine how systematic students are when evaluating Web…

  3. Using a Metro Map Metaphor for organizing Web-based learning resources

    DEFF Research Database (Denmark)

    Grønbæk, Kaj; Bang, Tove; Hansen, Per Steen

    2002-01-01

    This paper briefly describes the WebNize system and how it applies a Metro Map metaphor for organizing guided tours in Web based resources. Then, experiences in using the Metro Map based tours in a Knowledge Sharing project at the library at Aarhus School of Business (ASB) in Denmark, are discussed. The Library has been involved in establishing a Learning Resource Center (LRC). The LRC serves as an exploratorium for the development and the testing of new forms of communication and learning, at the same time as it integrates the information resources of the electronic research library. The objective is to create models for Intelligent Knowledge Solutions that can contribute to form the learning environments of the School in the 21st century. The WebNize system is used for sharing of knowledge through metro maps for specific subject areas made available in the Learning Resource Centre at ASB. The metro...

  4. Artificial Intelligence and Moral intelligence

    Directory of Open Access Journals (Sweden)

    Laura Pana

    2008-07-01

    Full Text Available We discuss the thesis that the implementation of a moral code in the behaviour of artificial intelligent systems needs a specific form of human and artificial intelligence, not just an abstract intelligence. We present intelligence as a system with an internal structure and the structural levels of the moral system, as well as certain characteristics of artificial intelligent agents which can/must be treated as: 1 - individual entities (with a complex, specialized, autonomous or self-determined, even unpredictable conduct); 2 - entities endowed with diverse or even multiple intelligence forms, like moral intelligence; 3 - open and, even, free-conduct performing systems (with specific, flexible and heuristic mechanisms and procedures of decision); 4 - systems which are open to education, not just to instruction; 5 - entities with “lifegraphy”, not just “stategraphy”; 6 - equipped not just with automatisms but with beliefs (cognitive and affective complexes); 7 - capable even of reflection (“moral life” is a form of spiritual, not just of conscious, activity); 8 - elements/members of some real (corporal or virtual) community; 9 - cultural beings: free conduct gives cultural value to the action of a “natural” or artificial being. Implementation of such characteristics does not necessarily suppose efforts to design, construct and educate machines like human beings. The human moral code is irremediably imperfect: it is a morality of preference, of accountability (not of responsibility) and a morality of non-liberty, which cannot be remedied by the invention of ethical systems, by the circulation of ideal values and by ethical (even computing) education. But such an imperfect morality needs perfect instruments for its implementation: applications of special logic fields; efficient psychological (theoretical and technical) attainments to endow the machine not just with intelligence, but with conscience and even spirit; comprehensive technical

  5. Google DeepMind and healthcare in an age of algorithms.

    Science.gov (United States)

    Powles, Julia; Hodson, Hal

    2017-01-01

    Data-driven tools and techniques, particularly machine learning methods that underpin artificial intelligence, offer promise in improving healthcare systems and services. One of the companies aspiring to pioneer these advances is DeepMind Technologies Limited, a wholly-owned subsidiary of the Google conglomerate, Alphabet Inc. In 2016, DeepMind announced its first major health project: a collaboration with the Royal Free London NHS Foundation Trust, to assist in the management of acute kidney injury. Initially received with great enthusiasm, the collaboration has suffered from a lack of clarity and openness, with issues of privacy and power emerging as potent challenges as the project has unfolded. Taking the DeepMind-Royal Free case study as its pivot, this article draws a number of lessons on the transfer of population-derived datasets to large private prospectors, identifying critical questions for policy-makers, industry and individuals as healthcare moves into an algorithmic age.

  6. Zinc in an ultraoligotrophic lake food web.

    Science.gov (United States)

    Montañez, Juan Cruz; Arribére, María A; Rizzo, Andrea; Arcagni, Marina; Campbell, Linda; Ribeiro Guevara, Sergio

    2018-03-21

    Zinc (Zn) bioaccumulation and trophic transfer were analyzed in the food web of Lake Nahuel Huapi, a deep, unpolluted ultraoligotrophic system in North Patagonia. Benthic macroinvertebrates, plankton, and native and introduced fish were collected at three sites. The effect of pyroclastic inputs on Zn levels in lacustrine food webs was assessed by studying the impact of the eruption of the Puyehue-Cordón Caulle volcanic complex (PCCVC) in 2011, by performing three sampling campaigns immediately before and after the PCCVC eruption, and after 2 years of recovery of the ecosystem. Zinc trophodynamics in the L. Nahuel Huapi food web was assessed using nitrogen stable isotopes (δ15N). There was no significant increase of Zn concentrations ([Zn]) in L. Nahuel Huapi biota after the PCCVC eruption, despite the evidence of a [Zn] increase in lake water that could be associated with volcanic ash leaching. The organisms studied exhibited [Zn] above the threshold level considered for dietary deficiency, regulating Zn adequately even under catastrophic situations like the 2011 PCCVC eruption. Zinc concentrations exhibited a biodilution pattern in the lake's food web. To the best of our knowledge, the present research is the first report of Zn biodilution in lacustrine systems, and the first to study Zn transfer in a freshwater food web including both pelagic and benthic compartments.
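
    The trophodynamics claim rests on the standard test of regressing concentration against δ15N, sketched below with invented data points: a negative slope of log-concentration on δ15N indicates biodilution, a positive one biomagnification.

```python
# Sketch of the standard trophic-transfer test behind a statement like
# "Zn biodiluted through the food web": regress log10 concentration on
# d15N (a proxy for trophic position). The data points are invented.
import numpy as np

d15n = np.array([4.0, 7.5, 9.0, 12.5, 14.0])         # trophic position proxy
zn_ug_g = np.array([120.0, 95.0, 80.0, 60.0, 48.0])  # [Zn], dry weight

slope, intercept = np.polyfit(d15n, np.log10(zn_ug_g), 1)
print(f"slope = {slope:.3f} per unit d15N")
print("biodilution" if slope < 0 else "biomagnification")
```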

  7. Naturalist Intelligence Among the Other Multiple Intelligences [In Bulgarian

    Directory of Open Access Journals (Sweden)

    R. Genkov

    2007-09-01

    Full Text Available The theory of multiple intelligences was presented by Gardner in 1983. The theory was later revised (1999), and a naturalist intelligence was added to the other intelligences. The criteria for distinguishing the different types of intelligences are considered. While Gardner restricted his analysis of naturalist intelligence to examples from living nature only, the present paper considers the problem against a wider background, including objects and persons of the natural sciences.

  8. Artificial intelligence

    CERN Document Server

    Hunt, Earl B

    1975-01-01

    Artificial Intelligence provides information pertinent to the fundamental aspects of artificial intelligence. This book presents the basic mathematical and computational approaches to problems in the artificial intelligence field.Organized into four parts encompassing 16 chapters, this book begins with an overview of the various fields of artificial intelligence. This text then attempts to connect artificial intelligence problems to some of the notions of computability and abstract computing devices. Other chapters consider the general notion of computability, with focus on the interaction bet

  9. Intelligence Ethics:

    DEFF Research Database (Denmark)

    Rønn, Kira Vrist

    2016-01-01

    Questions concerning what constitutes a morally justified conduct of intelligence activities have received increased attention in recent decades. However, intelligence ethics is not yet homogeneous or embedded as a solid research field. The aim of this article is to sketch the state of the art of intelligence ethics and point out subjects for further scrutiny in future research. The review clusters the literature on intelligence ethics into two groups: respectively, contributions on external topics (i.e., the accountability of and the public trust in intelligence agencies) and internal topics (i.e., the search for an ideal ethical framework for intelligence actions). The article concludes that there are many holes to fill for future studies on intelligence ethics both in external and internal discussions. Thus, the article is an invitation – especially, to moral philosophers and political theorists...

  10. An adaptive deep convolutional neural network for rolling bearing fault diagnosis

    International Nuclear Information System (INIS)

    Fuan, Wang; Hongkai, Jiang; Haidong, Shao; Wenjing, Duan; Shuaipeng, Wu

    2017-01-01

    The working conditions of rolling bearings are usually very complex, which makes it difficult to diagnose rolling bearing faults. In this paper, a novel method called the adaptive deep convolutional neural network (CNN) is proposed for rolling bearing fault diagnosis. Firstly, to avoid manual feature extraction, the deep CNN model is initialized for automatic feature learning. Secondly, to adapt to different signal characteristics, the main parameters of the deep CNN model are determined with a particle swarm optimization method. Thirdly, to evaluate the feature learning ability of the proposed method, t-distributed stochastic neighbor embedding (t-SNE) is further adopted to visualize the hierarchical feature learning process. The proposed method is applied to diagnose rolling bearing faults, and the results confirm that it is more effective and robust than other intelligent methods.
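
    A compact sketch of the paper's central idea follows: let a particle swarm choose the CNN's main hyper-parameters instead of tuning them by hand. Actual CNN training is replaced here by a stand-in fitness function so the example stays self-contained; all constants and parameter ranges are illustrative.

```python
# Particle swarm optimisation (PSO) over two hypothetical CNN
# hyper-parameters (number of filters, kernel size). The fitness
# function is a placeholder for "validation error of a CNN built with
# these hyper-parameters"; the real method trains on vibration signals.
import numpy as np

rng = np.random.default_rng(0)
LOW, HIGH = np.array([8.0, 2.0]), np.array([128.0, 9.0])

def fitness(p):
    n_filters, kernel = p
    # Surrogate with a known optimum near (64, 5), for illustration only.
    return (n_filters - 64) ** 2 / 1000 + (kernel - 5) ** 2

n_particles, n_iters, w, c1, c2 = 12, 30, 0.7, 1.5, 1.5
x = rng.uniform(LOW, HIGH, (n_particles, 2))     # positions
v = np.zeros_like(x)                             # velocities
pbest, pbest_f = x.copy(), np.array([fitness(p) for p in x])
gbest = pbest[pbest_f.argmin()].copy()

for _ in range(n_iters):
    r1, r2 = rng.random((2, n_particles, 1))
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    x = np.clip(x + v, LOW, HIGH)
    f = np.array([fitness(p) for p in x])
    improved = f < pbest_f
    pbest[improved], pbest_f[improved] = x[improved], f[improved]
    gbest = pbest[pbest_f.argmin()].copy()

print("best (n_filters, kernel_size):", np.round(gbest).astype(int))
```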

  11. Applications of artificial intelligence in intelligent manufacturing: a review

    Institute of Scientific and Technical Information of China (English)

    2017-01-01

    Based on research into the applications of artificial intelligence (AI) technology in the manufacturing industry in recent years, we analyze the rapid development of core technologies in the new era of 'Internet plus AI', which is triggering a great change in the models, means, and ecosystems of the manufacturing industry, as well as in the development of AI. We then propose new models, means, and forms of intelligent manufacturing, an intelligent manufacturing system architecture, and an intelligent manufacturing technology system, based on the integration of AI technology with information communications, manufacturing, and related product technology. Moreover, the current development of intelligent manufacturing is discussed from the perspectives of intelligent manufacturing application technology, industry, and application demonstration. Finally, suggestions for the application of AI in intelligent manufacturing in China are presented.

  12. A Comparative Investigation on Citation Counts and Altmetrics between Papers Authored by Universities and Companies in the Research Field of Artificial Intelligence

    OpenAIRE

    Luo, Feiheng; Zheng, Han; Erdt, Mojisola Helen; Raamkumar, Aravind Sesagiri; Theng, Yin-Leng

    2018-01-01

    Artificial Intelligence is currently a popular research field. With the development of deep learning techniques, researchers in this area have achieved impressive results in a variety of tasks. In this initial study, we explored scientific papers in Artificial Intelligence, making comparisons between papers authored by the top universities and companies from the dual perspectives of bibliometrics and altmetrics. We selected publication venues according to the venue rankings provided by Google...

  13. The natural diet of a hexactinellid sponge: Benthic pelagic coupling in a deep-sea microbial food web

    Science.gov (United States)

    Pile, Adele J.; Young, Craig M.

    2006-07-01

    Dense communities of shallow-water suspension feeders are known to sidestep the microbial loop by grazing on ultraplankton at its base. We quantified the diet, rates of water processing, and abundance of the deep-sea hexactinellid sponge Sericolophus hawaiicus, and found that, like their demosponge relatives in shallow water, hexactinellids are a significant sink for ultraplankton. S. hawaiicus forms a dense bed of sponges off the Big Island of Hawaii between 360 and 460 m depth, with a mean density of 4.7 sponges m⁻². Grazing of S. hawaiicus on ultraplankton was quantified from in situ samples using flow cytometry, and was found to be unselective. Rates of water processing were determined with dye visualization and ranged from 1.62 to 3.57 cm s⁻¹, resulting in a processing rate of 7.9 ± 2.4 ml sponge⁻¹ s⁻¹. The large amount of water processed by these benthic suspension feeders results in the transfer of approximately 55 mg carbon and 7.3 mg N d⁻¹ m⁻² from the water column to the benthos. The magnitude of this flux places S. hawaiicus squarely within the functional group of organisms that link the pelagic microbial food web to the benthos.

  14. Breastfeeding and intelligence: a systematic review and meta-analysis.

    Science.gov (United States)

    Horta, Bernardo L; Loret de Mola, Christian; Victora, Cesar G

    2015-12-01

    This study was aimed at systematically reviewing evidence of the association between breastfeeding and performance in intelligence tests. Two independent searches were carried out using Medline, LILACS, SCIELO and Web of Science. Studies restricted to infants and those where estimates were not adjusted for stimulation or interaction at home were excluded. Fixed- and random-effects models were used to pool the effect estimates, and a random-effects regression was used to assess potential sources of heterogeneity. We included 17 studies with 18 estimates of the relationship between breastfeeding and performance in intelligence tests. In a random-effects model, breastfed subjects achieved a higher IQ [mean difference: 3.44 points (95% confidence interval: 2.30; 4.58)]. We found no evidence of publication bias. Studies that controlled for maternal IQ showed a smaller benefit from breastfeeding [mean difference 2.62 points (95% confidence interval: 1.25; 3.98)]. In the meta-regression, none of the study characteristics explained the heterogeneity among the studies. Breastfeeding is related to improved performance in intelligence tests. A positive effect of breastfeeding on cognition was also observed in a randomised trial. This suggests that the association is causal.
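
    The random-effects pooling commonly used in such meta-analyses (the DerSimonian-Laird method) is sketched below; the per-study estimates are invented, not the paper's 18 real ones.

```python
# DerSimonian-Laird random-effects pooling sketch. Each study supplies
# a mean difference and its standard error; tau^2 captures
# between-study heterogeneity. The inputs below are invented.
import numpy as np

y = np.array([3.2, 4.1, 2.0, 5.0, 2.8])     # study mean differences (IQ pts)
se = np.array([1.0, 1.5, 0.8, 2.0, 1.1])    # their standard errors

w = 1 / se**2                                # fixed-effect weights
ybar = (w * y).sum() / w.sum()
Q = (w * (y - ybar)**2).sum()                # heterogeneity statistic
k = len(y)
tau2 = max(0.0, (Q - (k - 1)) / (w.sum() - (w**2).sum() / w.sum()))

w_re = 1 / (se**2 + tau2)                    # random-effects weights
pooled = (w_re * y).sum() / w_re.sum()
se_pooled = np.sqrt(1 / w_re.sum())
print(f"pooled difference {pooled:.2f} "
      f"(95% CI {pooled - 1.96*se_pooled:.2f}; {pooled + 1.96*se_pooled:.2f})")
```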

  15. Flood AI: An Intelligent Systems for Discovery and Communication of Disaster Knowledge

    Science.gov (United States)

    Demir, I.; Sermet, M. Y.

    2017-12-01

    Communities are not immune from extreme events or natural disasters that can lead to large-scale consequences for the nation and the public. Improving resilience, so that communities can better prepare for, plan for, recover from, and adapt to disasters, is critical to reducing the impacts of extreme events. The National Research Council (NRC) report discusses how to increase resilience to extreme events through a vision of a resilient nation in the year 2030. The report highlights the importance of data and information, identifies gaps and knowledge challenges that need to be addressed, and suggests that every individual should have access to risk and vulnerability information to make their communities more resilient. This project presents Flood AI, an intelligent system for flooding that improves societal preparedness by providing a knowledge engine built on voice recognition, artificial intelligence, and natural language processing, based on a generalized ontology for disasters with a primary focus on flooding. The knowledge engine utilizes the flood ontology and its concepts to connect user input to relevant knowledge discovery channels on flooding, through a data acquisition and processing framework utilizing environmental observations, forecast models, and knowledge bases. The communication channels of the framework include web-based systems, agent-based chatbots, smartphone applications, automated web workflows, and smart home devices, opening up knowledge discovery for flooding to many unique use cases.
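
    A minimal sketch of the front end of such a knowledge engine follows: map a free-text flood question to an ontology concept and route it to a discovery channel. The concepts, keywords and channels below are hypothetical, not Flood AI's actual ontology.

```python
# Toy intent matching: pick the ontology concept sharing the most
# keywords with the question, then route to its discovery channel.
# Concepts, keywords and channel names are hypothetical.
import re

ONTOLOGY = {
    "flood_forecast": {"forecast", "predict", "tomorrow", "crest"},
    "gauge_reading":  {"level", "stage", "gauge", "depth"},
    "preparedness":   {"sandbag", "prepare", "evacuate", "insurance"},
}
CHANNELS = {
    "flood_forecast": "forecast-model workflow",
    "gauge_reading":  "environmental-observation feed",
    "preparedness":   "knowledge-base articles",
}

def route(question):
    words = set(re.findall(r"[a-z]+", question.lower()))
    concept = max(ONTOLOGY, key=lambda c: len(ONTOLOGY[c] & words))
    return concept, CHANNELS[concept]

print(route("Will the river crest tomorrow?"))
```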

  16. Artificial Intelligence and Information Management

    Science.gov (United States)

    Fukumura, Teruo

    After reviewing the recent popularization of information transmission and processing technologies, which are supported by the progress of electronics, the authors describe how the introduction of opto-electronics into information technology has created the possibility of applying artificial intelligence (AI) techniques to the mechanization of information management. It is pointed out that although AI deals with problems in the mental world, its basic methodology relies upon verification by evidence, so experiments on computers become indispensable for the study of AI. The authors also note that, since computers operate by program, the basic intelligence with which AI is concerned is that expressed by language. As a result, the main tool of AI is logical proof, which involves an intrinsic limitation. To answer the question "Why do you employ AI in your problem solving?", one must have ill-structured problems and intend to conduct deep studies on thinking and inference, and on memory and knowledge representation. Finally, the authors discuss the application of AI techniques to information management, covering the possibility of expert systems, query processing, and the necessity of a document knowledge base.

  17. Increasing the Intelligence of Virtual Sales Assistants through Knowledge Modeling Techniques

    OpenAIRE

    Molina, Martin

    2001-01-01

    Shopping agents are web-based applications that help consumers to find appropriate products in the context of e-commerce. In this paper we argue about the utility of advanced model-based techniques that recently have been proposed in the fields of Artificial Intelligence and Knowledge Engineering, in order to increase the level of support provided by this type of applications. We illustrate this approach with a virtual sales assistant that dynamically configures a product according to the nee...

  18. Time to regenerate: the doctor in the age of artificial intelligence

    OpenAIRE

    Liu, X.; Keane, P. A.; Denniston, A. K.

    2018-01-01

    Introduction We are experiencing a rapid expansion of new technologies which are fusing the digital and biological worlds. New digital technologies—such as artificial intelligence, electronic health records and Big Data, telemedicine, ‘wearables’ for home monitoring and virtual/augmented realities—are shaping the future of medicine to become more efficient, more accurate and more sustainable.1 Digital systems from industry leaders such as DeepMind and IBM Watson are already being tested fo...

  19. Web-based expert system for foundry pollution prevention

    Science.gov (United States)

    Moynihan, Gary P.

    2004-02-01

    Pollution prevention is a complex task. Many small foundries lack the in-house expertise to perform these tasks. Expert systems are a type of computer information system that incorporates artificial intelligence. As noted in the literature, they provide a means of automating specialized expertise. This approach may be further leveraged by implementing the expert system on the internet (or world-wide web). This will allow distribution of the expertise to a variety of geographically-dispersed foundries. The purpose of this research is to develop a prototype web-based expert system to support pollution prevention for the foundry industry. The prototype system identifies potential emissions for a specified process, and also provides recommendations for the prevention of these contaminants. The system is viewed as an initial step toward assisting the foundry industry in better meeting government pollution regulations, as well as improving operating efficiencies within these companies.
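
    A toy sketch of the rule-base lookup such a system performs is shown below, mapping a process to likely emissions and prevention advice; the process names, pollutants and tips are illustrative only, not the prototype's actual knowledge base.

```python
# Toy foundry knowledge base: given a process, look up its likely
# emissions and matching prevention recommendations. All entries are
# illustrative placeholders.
KNOWLEDGE_BASE = {
    "green_sand_molding": {
        "emissions": ["particulate matter", "benzene"],
        "advice": ["enclose shakeout and vent to a baghouse",
                   "reduce seacoal additive in the sand mix"],
    },
    "induction_melting": {
        "emissions": ["metal oxide fumes"],
        "advice": ["use clean charge materials",
                   "capture fumes at the furnace lid"],
    },
}

def consult(process):
    entry = KNOWLEDGE_BASE.get(process)
    if entry is None:
        return f"no rules for {process!r}"
    lines = [f"Potential emissions: {', '.join(entry['emissions'])}"]
    lines += [f"- {tip}" for tip in entry["advice"]]
    return "\n".join(lines)

print(consult("green_sand_molding"))
```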

  20. Social intelligence, human intelligence and niche construction.

    Science.gov (United States)

    Sterelny, Kim

    2007-04-29

    This paper is about the evolution of hominin intelligence. I agree with defenders of the social intelligence hypothesis in thinking that externalist models of hominin intelligence are not plausible: such models cannot explain the unique cognition and cooperation explosion in our lineage, for changes in the external environment (e.g. increasing environmental unpredictability) affect many lineages. Both the social intelligence hypothesis and the social intelligence-ecological complexity hybrid I outline here are niche construction models. Hominin evolution is hominin response to selective environments that earlier hominins have made. In contrast to social intelligence models, I argue that hominins have both created and responded to a unique foraging mode; a mode that is both social in itself and which has further effects on hominin social environments. In contrast to some social intelligence models, on this view, hominin encounters with their ecological environments continue to have profound selective effects. However, though the ecological environment selects, it does not select on its own. Accidents and their consequences, differential success and failure, result from the combination of the ecological environment an agent faces and the social features that enhance some opportunities and suppress others and that exacerbate some dangers and lessen others. Individuals do not face the ecological filters on their environment alone, but with others, and with the technology, information and misinformation that their social world provides.

  1. Trends in ambient intelligent systems the role of computational intelligence

    CERN Document Server

    Khan, Mohammad; Abraham, Ajith

    2016-01-01

    This book demonstrates the success of Ambient Intelligence in providing possible solutions for the daily needs of humans. The book addresses implications of ambient intelligence in areas of domestic living, elderly care, robotics, communication, philosophy and others. The objective of this edited volume is to show that Ambient Intelligence is a boon to humanity with conceptual, philosophical, methodical and applicative understanding. The book also aims to schematically demonstrate developments in the direction of augmented sensors, embedded systems and behavioral intelligence towards Ambient Intelligent Networks or Smart Living Technology. It contains chapters in the field of Ambient Intelligent Networks, which received highly positive feedback during the review process. The book contains research work, with in-depth state of the art from augmented sensors, embedded technology and artificial intelligence along with cutting-edge research and development of technologies and applications of Ambient Intelligent N...

  2. Intelligent Parking Assistant - A Showcase of the MOBiNET Platform Functionalities

    DEFF Research Database (Denmark)

    Mikkelsen, Lars Møller; Toledo, Raphael; Agerholm, Niels

    The Intelligent Parking Assistant (IPA) system is developed based on work done in the Danish ITS Platform project. The IPA system includes an Android app that helps the user choose, find and pay for parking. The app uses information about parking lots fetched from a web service, including live availability, location and pricing schemes. The intelligence of the IPA system consists of automatically fetching parking lot information from the relevant service, automatically registering when the vehicle is stopped, evaluating whether the vehicle is stopped inside a paid parking lot, and automatically initiating the payment for the parking. The IPA app and the back-end services are published via the MOBiNET platform, which is a European-wide platform supporting ITS services and offering functionalities enabling easy migration of services. The migration is enabled by defining a common methodology...

  3. The structure of the pelagic food web in relation to water column structure in the Skagerrak

    DEFF Research Database (Denmark)

    Kiørboe, Thomas; Kaas, H.; Kruse, B.

    1990-01-01

    by a doming of the pycnocline, with a deep mixed layer along the periphery and a very shallow pycnocline in central parts. Average phytoplankton size increased with the depth of the upper mixed layer, and the central stratified area was characterized by small flagellates while large and chain-forming diatoms... on particle surface area rather than particle volume or chl a, and showed a distributional pattern that was nearly the inverse of the distribution of copepod activity. That is, peak bacterial growth rates occurred in central, stratified parts and lower rates were found along the margin with a deep mixed layer... Thus a 'microbial loop' type of food web seemed to be evolving in the central, strongly stratified parts of the Skagerrak, while a shorter 'classical' type of food web appeared to dominate along the margin. The relation between food web structure and vertical mixing processes observed on oceanwide...

  4. Web-based research publications on Sub-Saharan Africa's prized ...

    African Journals Online (AJOL)

    The study confirms Africa's deep interest in the grasscutter which is not shared by other parts of the world. We recommend increased publication of research on cane rats in web-based journals to quickly spread the food value of this prized meat rodent to other parts of the world and so attract research interest and funding.

  5. On-Board Mining in the Sensor Web

    Science.gov (United States)

    Tanner, S.; Conover, H.; Graves, S.; Ramachandran, R.; Rushing, J.

    2004-12-01

    On-board data mining can contribute to many research and engineering applications, including natural hazard detection and prediction, intelligent sensor control, and the generation of customized data products for direct distribution to users. The ability to mine sensor data in real time can also be a critical component of autonomous operations, supporting deep space missions, unmanned aerial and ground-based vehicles (UAVs, UGVs), and a wide range of sensor meshes, webs and grids. On-board processing is expected to play a significant role in the next generation of NASA, Homeland Security, Department of Defense and civilian programs, providing for greater flexibility and versatility in measurements of physical systems. In addition, the use of UAV and UGV systems is increasing in military, emergency response and industrial applications. As research into the autonomy of these vehicles progresses, especially in fleet or web configurations, the applicability of on-board data mining is expected to increase significantly. Data mining in real time on board sensor platforms presents unique challenges. Most notably, the data to be mined is a continuous stream, rather than a fixed store such as a database. This means that the data mining algorithms must be modified to make only a single pass through the data. In addition, the on-board environment requires real time processing with limited computing resources, thus the algorithms must use fixed and relatively small amounts of processing time and memory. The University of Alabama in Huntsville is developing an innovative processing framework for the on-board data and information environment. The Environment for On-Board Processing (EVE) and the Adaptive On-board Data Processing (AODP) projects serve as proofs-of-concept of advanced information systems for remote sensing platforms. The EVE real-time processing infrastructure will upload, schedule and control the execution of processing plans on board remote sensors. These plans
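
    The single-pass, fixed-memory constraint described above is illustrated by the sketch below: Welford's online algorithm maintains a running mean and variance in constant memory, letting a sensor flag anomalous readings without storing the stream. The simulated readings and the flagging threshold are invented.

```python
# One-pass stream mining under on-board constraints: Welford's online
# algorithm updates mean and variance incrementally, so each reading is
# seen exactly once and memory use is constant.
import math
import random

random.seed(1)

count, mean, m2 = 0, 0.0, 0.0
for t in range(1, 501):
    # Simulated sensor reading, with a spike injected at t = 400.
    x = random.gauss(20.0, 2.0) + (15.0 if t == 400 else 0.0)
    count += 1
    delta = x - mean
    mean += delta / count
    m2 += delta * (x - mean)           # single pass, fixed memory
    if count > 30:                     # warm-up before flagging
        std = math.sqrt(m2 / (count - 1))
        if abs(x - mean) > 4 * std:
            print(f"t={t}: anomaly {x:.1f} (mean {mean:.1f}, std {std:.2f})")
```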

  6. Advanced intelligent systems

    CERN Document Server

    Ryoo, Young; Jang, Moon-soo; Bae, Young-Chul

    2014-01-01

    Intelligent systems have been initiated with the attempt to imitate the human brain. People wish to let machines perform intelligent work. Many techniques of intelligent systems are based on artificial intelligence. According to changing and novel requirements, advanced intelligent systems cover a wide spectrum: big data processing, intelligent control, advanced robotics, artificial intelligence and machine learning. This book focuses on coordinating intelligent systems with highly integrated and foundationally functional components. The book consists of 19 contributions that feature social network-based recommender systems, application of fuzzy enforcement, energy visualization, ultrasonic muscular thickness measurement, regional analysis and predictive modeling, analysis of 3D polygon data, blood pressure estimation system, fuzzy human model, fuzzy ultrasonic imaging method, ultrasonic mobile smart technology, pseudo-normal image synthesis, subspace classifier, mobile object tracking, standing-up moti...

  7. Realizing the Promise of Web 2.0: Engaging Community Intelligence

    Science.gov (United States)

    HESSE, BRADFORD W.; O’CONNELL, MARY; AUGUSTSON, ERIK M.; CHOU, WEN-YING SYLVIA; SHAIKH, ABDUL R.; RUTTEN, LILA J. FINNEY

    2011-01-01

    Discussions of “Health 2.0,” first coined in 2005, were guided by three main tenets: (a) health was to become more participatory, as an evolution in the Web encouraged more direct consumer engagement in their own healthcare; (b) data was to become the new “Intel Inside” for systems supporting the “vital decisions” in health; and (c) a sense of “collective intelligence” from the network would supplement traditional sources of knowledge in health decision-making. Interests in understanding the implications of a new paradigm for patient engagement in health and healthcare were kindled by findings from surveys such as the National Cancer Institute’s Health Information National Trends Survey (HINTS), showing that patients were quick to look online for information to help them cope with disease. This paper considers how these three facets of Health 2.0–participation, data, and collective intelligence–can be harnessed to improve the health of the nation according to Healthy People 2020 goals. We begin with an examination of evidence from behavioral science to understand how Web 2.0 participative technologies may influence patient processes and outcomes, better or worse, in an era of changing communication technologies. The paper then focuses specifically on the clinical implications of “Health 2.0” and offers recommendations to ensure that changes in the communication environment do not detract from national (e.g., Health People 2020) health goals. Changes in the clinical environment, as catalyzed by the Health Information Technology for Economic and Clinical Health (HITECH) Act to take advantage of Health 2.0 principles in evidence-based ways, are also considered. PMID:21843093

  8. Patient's Guide to Recovery After Deep Vein Thrombosis or Pulmonary Embolism

    Science.gov (United States)

    A Circulation patient page offering guidance on recovery after deep vein thrombosis or pulmonary embolism.

  9. Extreme diving behaviour in devil rays links surface waters and the deep ocean

    KAUST Repository

    Thorrold, Simon R.; Afonso, Pedro; Fontes, Jorge; Braun, Camrin D.; Santos, Ricardo S.; Skomal, Gregory B.; Berumen, Michael L.

    2014-01-01

    Ecological connections between surface waters and the deep ocean remain poorly studied despite the high biomass of fishes and squids residing at depths beyond the euphotic zone. These animals likely support pelagic food webs containing a suite

  10. Competitive Intelligence.

    Science.gov (United States)

    Bergeron, Pierrette; Hiller, Christine A.

    2002-01-01

    Reviews the evolution of competitive intelligence since 1994, including terminology and definitions and analytical techniques. Addresses the issue of ethics; explores how information technology supports the competitive intelligence process; and discusses education and training opportunities for competitive intelligence, including core competencies…

  11. The Association Between Maternal Subclinical Hypothyroidism and Growth, Development, and Childhood Intelligence: A Meta-analysis

    Science.gov (United States)

    Liu, Yahong; Chen, Hui; Jing, Chen; Li, FuPin

    2018-06-01

    To explore the association between maternal subclinical hypothyroidism (SCH) in pregnancy and the somatic and intellectual development of their offspring. Using RevMan 5.3 software, a meta-analysis of cohort studies published from inception to May 2017, focusing on the association between maternal SCH in pregnancy and childhood growth, development and intelligence, was performed. Sources included the Cochrane Library, PubMed, Web of Science, China National Knowledge Infrastructure and Wan Fang Data. Analysis of a total of 15 cohort studies involving 1,896 pregnant women with SCH revealed that SCH in pregnancy was significantly associated with the intelligence (p=0.0007) and motor development of offspring, as well as with low birth weight, premature delivery, fetal distress and fetal growth restriction.

  12. A simple method for serving Web hypermaps with dynamic database drill-down

    Directory of Open Access Journals (Sweden)

    Carson Ewart R

    2002-08-01

    Full Text Available Abstract Background HealthCyberMap http://healthcybermap.semanticweb.org aims at mapping parts of health information cyberspace in novel ways to deliver a semantically superior user experience. This is achieved through "intelligent" categorisation and interactive hypermedia visualisation of health resources using metadata, clinical codes and GIS. HealthCyberMap is an ArcView 3.1 project. WebView, the Internet extension to ArcView, publishes HealthCyberMap ArcView Views as Web client-side imagemaps. The basic WebView set-up does not support any GIS database connection, and published Web maps become disconnected from the original project. A dedicated Internet map server would be the best way to serve HealthCyberMap database-driven interactive Web maps, but is an expensive and complex solution to acquire, run and maintain. This paper describes HealthCyberMap's simple, low-cost method for "patching" WebView to serve hypermaps with dynamic database drill-down functionality on the Web. Results The proposed solution is currently used for publishing HealthCyberMap GIS-generated navigational information maps on the Web while maintaining their links with the underlying resource metadata base. Conclusion The authors believe their map serving approach as adopted in HealthCyberMap has been very successful, especially in cases when only map attribute data change without a corresponding effect on map appearance. It should also be possible to use the same solution to publish other interactive GIS-driven maps on the Web, e.g., maps of real world health problems.
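
    The drill-down mechanism described above hinges on one idea: each map hotspot carries a feature identifier that is resolved against the resource metadata base at request time, so attribute updates need no re-publishing of the map image. The minimal sketch below illustrates that idea with a hypothetical Flask route and toy metadata; HealthCyberMap itself achieves this by patching WebView/ArcView, not with Python.

    ```python
    from flask import Flask, jsonify

    app = Flask(__name__)

    # Toy stand-in for the resource metadata base; keys are the feature ids
    # embedded in the imagemap hotspot URLs.
    METADATA = {"42": {"title": "Diabetes resources", "resource_count": 17}}

    @app.route("/drilldown/<feature_id>")
    def drilldown(feature_id):
        # Look up the clicked feature and return its current metadata.
        return jsonify(METADATA.get(feature_id, {"error": "unknown feature"}))

    if __name__ == "__main__":
        app.run()
    ```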

  13. Organohalogen compounds in deep-sea fishes from the western North Pacific, off-Tohoku, Japan: Contamination status and bioaccumulation profiles

    International Nuclear Information System (INIS)

    Takahashi, Shin; Oshihoi, Tomoko; Ramu, Karri; Isobe, Tomohiko; Ohmori, Koji; Kubodera, Tsunemi; Tanabe, Shinsuke

    2010-01-01

    Twelve species of deep-sea fishes collected in 2005 from the western North Pacific, off-Tohoku, Japan were analyzed for organohalogen compounds. Among the compounds analyzed, concentrations of DDTs and PCBs (up to 23,000 and 12,400 ng/g lipid wt, respectively) were the highest. The present study is the first to report the occurrence of brominated flame retardants such as PBDEs and HBCDs in deep-sea organisms from the North Pacific region. Significant positive correlations found between δ15N (‰) and PCBs, DDTs and PBDEs suggest the high biomagnification potential of these contaminants in the food web. The large variation in δ13C (‰) values observed between the species indicates multiple sources of carbon in the food web and specific accumulation of hydrophobic organohalogen compounds in benthic-dwelling carnivore species like the snubnosed eel. The results obtained in this study highlight the usefulness of deep-sea fishes as sentinel species to monitor the deep-sea environment.

  14. Compilation and network analyses of cambrian food webs.

    Directory of Open Access Journals (Sweden)

    Jennifer A Dunne

    2008-04-01

    diversification of species, body plans, and trophic roles during the Cambrian radiation. More research is needed to explore the generality of food-web structure through deep time and across habitats, especially to investigate potential mechanisms that could give rise to similar structure, as well as any differences.

  15. Compilation and network analyses of cambrian food webs.

    Science.gov (United States)

    Dunne, Jennifer A; Williams, Richard J; Martinez, Neo D; Wood, Rachel A; Erwin, Douglas H

    2008-04-29

    plans, and trophic roles during the Cambrian radiation. More research is needed to explore the generality of food-web structure through deep time and across habitats, especially to investigate potential mechanisms that could give rise to similar structure, as well as any differences.

  16. Large-Scale Image Analytics Using Deep Learning

    Science.gov (United States)

    Ganguly, S.; Nemani, R. R.; Basu, S.; Mukhopadhyay, S.; Michaelis, A.; Votava, P.

    2014-12-01

    High resolution land cover classification maps are needed to increase the accuracy of current land ecosystem and climate model outputs. Few studies demonstrate the state of the art in deriving very high resolution (VHR) land cover products. In addition, most methods rely heavily on commercial software that is difficult to scale to large regions of study (e.g., continents to the globe). Complexities in present approaches relate to (a) scalability of the algorithm, (b) large image data processing (compute and memory intensive), (c) computational cost, (d) massively parallel architecture, and (e) machine learning automation. In addition, VHR satellite datasets are of the order of terabytes and features extracted from these datasets are of the order of petabytes. In our present study, we have acquired the National Agricultural Imaging Program (NAIP) dataset for the Continental United States at a spatial resolution of 1 m. This data comes as image tiles (a total of a quarter million image scenes with ~60 million pixels each) and has a total size of ~100 terabytes for a single acquisition. Features extracted from the entire dataset would amount to ~8-10 petabytes. In our proposed approach, we have implemented a novel semi-automated machine learning algorithm rooted in the principles of "deep learning" to delineate the percentage of tree cover. In order to perform image analytics in such a granular system, it is mandatory to devise an intelligent archiving and query system for image retrieval, file structuring, metadata processing and filtering of all available image scenes. Using the Open NASA Earth Exchange (NEX) initiative, a partnership with Amazon Web Services (AWS), we have developed an end-to-end architecture for designing the database and the deep belief network (following the DistBelief computing model) to solve the grand challenge of scaling this process across the quarter million NAIP tiles that cover the entire Continental United States.

  17. A System to Provide Real-Time Collaborative Situational Awareness by Web Enabling a Distributed Sensor Network

    Science.gov (United States)

    Panangadan, Anand; Monacos, Steve; Burleigh, Scott; Joswig, Joseph; James, Mark; Chow, Edward

    2012-01-01

    In this paper, we describe the architecture of both the PATS and SAP systems and how these two systems interoperate with each other forming a unified capability for deploying intelligence in hostile environments with the objective of providing actionable situational awareness of individuals. The SAP system works in concert with the UICDS information sharing middleware to provide data fusion from multiple sources. UICDS can then publish the sensor data using the OGC's Web Mapping Service, Web Feature Service, and Sensor Observation Service standards. The system described in the paper is able to integrate a spatially distributed sensor system, operating without the benefit of the Web infrastructure, with a remote monitoring and control system that is equipped to take advantage of SWE.

  18. Ductility and performance assessment of high strength self compacting concrete (HSSCC) deep beams: An experimental investigation

    International Nuclear Information System (INIS)

    Mohammadhassani, Mohammad; Jumaat, Mohd Zamin; Jameel, Mohammed; Badiee, Hamid; Arumugam, Arul M.S.

    2012-01-01

    Highlights: ► Ductility decreased with increase in tensile reinforcement ratio. ► The width of the load point and the support point influences premature failure. ► Load–deflection relationship is linear till 85% of the ultimate load. ► The absorbed energy increases with the increase of tensile reinforcement ratios. - Abstract: The behavior of deep beams is significantly different from that of normal beams. Because of their proportions, deep beams are likely to have strength controlled by shear. This paper discusses the results of eight simply supported high strength self compacting concrete (HSSCC) deep beams having variation in ratio of web reinforcement and tensile reinforcement. The deflection at two points along the beam length, web strains, tensile bars strains and the strain at concrete surface are recorded. The results show that the strain distribution at the section height of mid span is nonlinear. Ductility decreased with increase in tensile reinforcement ratio. The effect of width of load point and the support point is more important than the effect of tensile reinforcement ratio in preventing premature failure. Load–deflection graphs confirm linear relationship up to 85% of the ultimate load for HSSCC over-reinforcement web sections. The absorbed energy index increases with the increase in tensile reinforcement ratios.

  19. Development of cyberblog-based intelligent tutorial system to improve students learning ability algorithm

    Science.gov (United States)

    Wahyudin; Riza, L. S.; Putro, B. L.

    2018-05-01

    E-learning, as a learning activity conducted online with familiar tools, is favoured by students. The use of computer media in learning provides a benefit not shared by other learning media: the ability of the computer to interact individually with each student. A weakness of many learning media, however, is the assumption that all students have uniform ability, which in reality is not the case. The concept of an Intelligent Tutorial System (ITS) combined with a cyberblog application can overcome this neglect of diversity. A cyberblog-based Intelligent Tutorial System is a web-based interactive application program that implements artificial intelligence and can be used as a learning and evaluation medium in the learning process. The use of an ITS-based cyberblog in learning is an alternative learning medium that is engaging and able to help students measure their understanding of the material. This research addresses the improvement of students' logical thinking ability, especially in algorithm subjects.

  20. The Social Semantic Web in Intelligent Learning Environments: State of the Art and Future Challenges

    Science.gov (United States)

    Jovanovic, Jelena; Gasevic, Dragan; Torniai, Carlo; Bateman, Scott; Hatala, Marek

    2009-01-01

    Today's technology-enhanced learning practices cater to students and teachers who use many different learning tools and environments and are used to a paradigm of interaction derived from open, ubiquitous, and socially oriented services. In this context, a crucial issue for education systems in general, and for Intelligent Learning Environments…

  1. Modelling traffic flows with intelligent cars and intelligent roads

    NARCIS (Netherlands)

    van Arem, Bart; Tampere, Chris M.J.; Malone, Kerry

    2003-01-01

    This paper addresses the modeling of traffic flows with intelligent cars and intelligent roads. It will describe the modeling approach MIXIC and review the results for different ADA systems: Adaptive Cruise Control, a special lane for Intelligent Vehicles, cooperative following and external speed

  2. Intelligence.

    Science.gov (United States)

    Sternberg, Robert J

    2012-03-01

    Intelligence is the ability to learn from experience and to adapt to, shape, and select environments. Intelligence as measured by (raw scores on) conventional standardized tests varies across the lifespan, and also across generations. Intelligence can be understood in part in terms of the biology of the brain-especially with regard to the functioning in the prefrontal cortex-and also correlates with brain size, at least within humans. Studies of the effects of genes and environment suggest that the heritability coefficient (ratio of genetic to phenotypic variation) is between .4 and .8, although heritability varies as a function of socioeconomic status and other factors. Racial differences in measured intelligence have been observed, but race is a socially constructed rather than biological variable, so such differences are difficult to interpret.

  3. Intelligence

    Science.gov (United States)

    Sternberg, Robert J.

    2012-01-01

    Intelligence is the ability to learn from experience and to adapt to, shape, and select environments. Intelligence as measured by (raw scores on) conventional standardized tests varies across the lifespan, and also across generations. Intelligence can be understood in part in terms of the biology of the brain—especially with regard to the functioning in the prefrontal cortex—and also correlates with brain size, at least within humans. Studies of the effects of genes and environment suggest that the heritability coefficient (ratio of genetic to phenotypic variation) is between .4 and .8, although heritability varies as a function of socioeconomic status and other factors. Racial differences in measured intelligence have been observed, but race is a socially constructed rather than biological variable, so such differences are difficult to interpret. PMID:22577301

  4. Investigations into Library Web-Scale Discovery Services

    Directory of Open Access Journals (Sweden)

    Jason Vaughan

    2008-03-01

    Full Text Available Web-scale discovery services for libraries provide deep discovery to a library’s local and licensed content, and represent an evolution, perhaps a revolution, for end user information discovery as pertains to library collections.  This article frames the topic of web-scale discovery, and begins by illuminating web-scale discovery from an academic library’s perspective – that is, the internal perspective seeking widespread staff participation in the discovery conversation.  This included the creation of a discovery task force, a group which educated library staff, conducted internal staff surveys, and gathered observations from early adopters.  The article next addresses the substantial research conducted with library vendors which have developed these services.  Such work included drafting of multiple comprehensive question lists distributed to the vendors, onsite vendor visits, and continual tracking of service enhancements.  Together, feedback gained from library staff, insights arrived at by the Discovery Task Force, and information gathered from vendors collectively informed the recommendation of a service for the UNLV Libraries.

  5. Combined Intelligent Control (CIC), an Intelligent Decision-Making Algorithm

    Directory of Open Access Journals (Sweden)

    Moteaal Asadi Shirzi

    2007-03-01

    Full Text Available The focus of this research is to introduce the concept of combined intelligent control (CIC) as an effective architecture for decision-making and control of intelligent agents and multi-robot sets. Basically, CIC is a combination of various architectures and methods from fields such as artificial intelligence, Distributed Artificial Intelligence (DAI), control and biological computing. Although any one intelligent architecture may be very effective for some specific applications, it may be less effective for others. Therefore, CIC combines and arranges them in a way that the strengths of each approach cover the weaknesses of the others. In this paper we first introduce some intelligent architectures from a new perspective. Afterward, we offer CIC by combining them. CIC has been executed in a multi-agent set in which robots must cooperate to perform various tasks in a complex and nondeterministic environment with low sensory feedback and communication. In order to investigate, improve, and correct the combined intelligent control method, simulation software has been designed, which is presented and discussed. To show the ability of the CIC algorithm as a distributed architecture, a central algorithm is designed and compared with CIC.

  6. Intelligence and negotiating

    International Nuclear Information System (INIS)

    George, D.G.

    1990-01-01

    This paper discusses the role of US intelligence during arms control negotiations between 1982 and 1987. It also covers: the orchestration of intelligence projects; an evaluation of the performance of intelligence activities; the effect intelligence work had on actual arms negotiations; and suggestions for improvements in the future.

  7. Web-based telemonitoring and delivery of caregiver support for patients with Parkinson disease after deep brain stimulation: protocol.

    Science.gov (United States)

    Marceglia, Sara; Rossi, Elena; Rosa, Manuela; Cogiamanian, Filippo; Rossi, Lorenzo; Bertolasi, Laura; Vogrig, Alberto; Pinciroli, Francesco; Barbieri, Sergio; Priori, Alberto

    2015-03-06

    The increasing number of patients, the high costs of management, and the chronic progression of the disease that prevents patients from performing even simple daily activities make Parkinson disease (PD) a complex pathology with a high impact on society. In particular, patients implanted with deep brain stimulation (DBS) electrodes face a highly fragile stabilization period, requiring specific support at home. However, DBS patients are usually followed by untrained personnel (caregivers or family), without specific care pathways and supporting systems. This project aims to (1) create a reference consensus guideline and a shared requirements set for the homecare and monitoring of DBS patients, (2) define a set of biomarkers that provides alarms to caregivers for continuous home monitoring, and (3) implement an information system architecture allowing communication between health care professionals and caregivers and improving the quality of care for DBS patients. The definitions of the consensus care pathway and of caregiver needs will be obtained by analyzing the current practices for patient follow-up through focus groups and structured interviews involving health care professionals, patients, and caregivers. The results of this analysis will be represented in a formal graphical model of the process of DBS patient care at home. To define the neurophysiological biomarkers to be used to raise alarms during the monitoring process, neurosignals will be acquired from DBS electrodes through a new experimental system that records while DBS is turned ON and transmits signals by radiofrequency. Motor, cognitive, and behavioral protocols will be used to study possible feedback/alarms to be provided by the system. Finally, a set of mobile apps to support the caregiver at home in managing and monitoring the patient will be developed and tested in the community of caregivers that participated in the focus groups. The set of developed apps will be connected to the already

  8. Intelligence and childlessness.

    Science.gov (United States)

    Kanazawa, Satoshi

    2014-11-01

    Demographers debate why people have children in advanced industrial societies where children are net economic costs. From an evolutionary perspective, however, the important question is why some individuals choose not to have children. Recent theoretical developments in evolutionary psychology suggest that more intelligent individuals may be more likely to prefer to remain childless than less intelligent individuals. Analyses of the National Child Development Study show that more intelligent men and women express a preference to remain childless early in their reproductive careers, but only more intelligent women (not more intelligent men) are more likely to remain childless by the end of their reproductive careers. Controlling for education and earnings does not at all attenuate the association between childhood general intelligence and lifetime childlessness among women. A one-standard-deviation increase in childhood general intelligence (15 IQ points) decreases women's odds of parenthood by 21-25%. Because women have a greater impact on the average intelligence of future generations, the dysgenic fertility among women is predicted to lead to a decline in the average intelligence of the population in advanced industrial nations. Copyright © 2014 Elsevier Inc. All rights reserved.

  9. Perceived intelligence is associated with measured intelligence in men but not women.

    Science.gov (United States)

    Kleisner, Karel; Chvátalová, Veronika; Flegr, Jaroslav

    2014-01-01

    The ability to accurately assess the intelligence of other persons finds its place in everyday social interaction and should have important evolutionary consequences. We used static facial photographs of 40 men and 40 women to test the relationship between measured IQ, perceived intelligence, and facial shape. Both men and women were able to accurately evaluate the intelligence of men by viewing facial photographs. In addition to general intelligence, figural and fluid intelligence showed a significant relationship with perceived intelligence, but again, only in men. No relationship between perceived intelligence and IQ was found for women. We used geometric morphometrics to determine which facial traits are associated with the perception of intelligence, as well as with intelligence as measured by IQ testing. Faces that are perceived as highly intelligent are rather prolonged with a broader distance between the eyes, a larger nose, a slight upturn to the corners of the mouth, and a sharper, pointing, less rounded chin. By contrast, the perception of lower intelligence is associated with broader, more rounded faces with eyes closer to each other, a shorter nose, declining corners of the mouth, and a rounded and massive chin. By contrast, we found no correlation between morphological traits and real intelligence measured with an IQ test, in either men or women. These results suggest that a perceiver can accurately gauge the real intelligence of men, but not women, by viewing their faces in photographs; however, this estimation is possibly not based on facial shape. Our study revealed no relation between intelligence and either attractiveness or face shape.

  10. Artificial Intelligence.

    Science.gov (United States)

    Information Technology Quarterly, 1985

    1985-01-01

    This issue of "Information Technology Quarterly" is devoted to the theme of "Artificial Intelligence." It contains two major articles: (1) Artificial Intelligence and Law" (D. Peter O'Neill and George D. Wood); (2) "Artificial Intelligence: A Long and Winding Road" (John J. Simon, Jr.). In addition, it contains two sidebars: (1) "Calculating and…

  11. Stellar Atmospheric Parameterization Based on Deep Learning

    Science.gov (United States)

    Pan, Ru-yang; Li, Xiang-ru

    2017-07-01

    Deep learning is a typical learning method widely studied in the fields of machine learning, pattern recognition, and artificial intelligence. This work investigates the problem of stellar atmospheric parameterization by constructing a deep neural network with five layers, with node numbers of 3821-500-100-50-1 in the successive layers of the network. The proposed scheme is verified on both the real spectra measured by the Sloan Digital Sky Survey (SDSS) and the theoretic spectra computed with the Kurucz's New Opacity Distribution Function (NEWODF) model, to make an automatic estimation of three physical parameters: the effective temperature (Teff), surface gravitational acceleration (lg g), and metallic abundance ([Fe/H]). The results show that the stacked-autoencoder deep neural network has a better accuracy for the estimation. On the SDSS spectra, the mean absolute errors (MAEs) are 79.95 for Teff/K, 0.0058 for lg(Teff/K), 0.1706 for lg(g/(cm·s⁻²)), and 0.1294 dex for [Fe/H]; on the theoretic spectra, the MAEs are 15.34 for Teff/K, 0.0011 for lg(Teff/K), 0.0214 for lg(g/(cm·s⁻²)), and 0.0121 dex for [Fe/H].
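
    As a minimal sketch of the network size quoted above, the following PyTorch model wires up the 3821-500-100-50-1 layer structure. The activation function and training setup are assumptions of this illustration; the paper additionally pretrains the hidden layers as stacked autoencoders, which is not reproduced here.

    ```python
    import torch
    import torch.nn as nn

    # 3821 input flux values -> one stellar parameter (e.g. Teff).
    # Sigmoid activations are an assumption, not the published choice.
    class SpectrumRegressor(nn.Module):
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(3821, 500), nn.Sigmoid(),
                nn.Linear(500, 100), nn.Sigmoid(),
                nn.Linear(100, 50), nn.Sigmoid(),
                nn.Linear(50, 1),
            )

        def forward(self, x):  # x: (batch, 3821) spectra
            return self.net(x)

    model = SpectrumRegressor()
    print(model(torch.randn(8, 3821)).shape)  # torch.Size([8, 1])
    ```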

  12. Learning Deep Visual Object Models From Noisy Web Data: How to Make it Work

    OpenAIRE

    Massouh, Nizar; Babiloni, Francesca; Tommasi, Tatiana; Young, Jay; Hawes, Nick; Caputo, Barbara

    2017-01-01

    Deep networks thrive when trained on large scale data collections. This has given ImageNet a central role in the development of deep architectures for visual object classification. However, ImageNet was created during a specific period in time, and as such it is prone to aging, as well as dataset bias issues. Moving beyond fixed training datasets will lead to more robust visual systems, especially when deployed on robots in new environments which must train on the objects they encounter there...

  13. A preliminary examination of the diagnostic value of deep learning in hip osteoarthritis.

    Directory of Open Access Journals (Sweden)

    Yanping Xue

    Full Text Available Hip osteoarthritis (OA) is a common disease among middle-aged and elderly people. Conventionally, hip OA is diagnosed by manually assessing X-ray images. This study took the hip joint as the object of observation and explored the diagnostic value of deep learning in hip osteoarthritis. A deep convolutional neural network (CNN) was trained and tested on 420 hip X-ray images to automatically diagnose hip OA. This CNN model achieved a balance of high sensitivity (95.0%) and high specificity (90.7%), as well as an accuracy of 92.8%, compared to the chief physicians. The CNN model's performance is comparable to that of an attending physician with 10 years of experience. The results of this study indicate that deep learning has promising potential in the field of intelligent medical image diagnosis practice.
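
    The abstract does not name the CNN architecture, so the hedged sketch below stands in with transfer learning on a pretrained ResNet-18, a common baseline for small radiograph data sets such as the 420 images used here.

    ```python
    import torch.nn as nn
    from torchvision import models

    # Pretrained ImageNet backbone; only the classification head is
    # replaced for the two-class task (hip OA vs. normal).
    model = models.resnet18(weights="IMAGENET1K_V1")
    model.fc = nn.Linear(model.fc.in_features, 2)
    criterion = nn.CrossEntropyLoss()  # train on labelled hip radiographs
    ```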

  14. Artificial intelligence in radiology.

    Science.gov (United States)

    Hosny, Ahmed; Parmar, Chintan; Quackenbush, John; Schwartz, Lawrence H; Aerts, Hugo J W L

    2018-05-17

    Artificial intelligence (AI) algorithms, particularly deep learning, have demonstrated remarkable progress in image-recognition tasks. Methods ranging from convolutional neural networks to variational autoencoders have found myriad applications in the medical image analysis field, propelling it forward at a rapid pace. Historically, in radiology practice, trained physicians visually assessed medical images for the detection, characterization and monitoring of diseases. AI methods excel at automatically recognizing complex patterns in imaging data and providing quantitative, rather than qualitative, assessments of radiographic characteristics. In this Opinion article, we establish a general understanding of AI methods, particularly those pertaining to image-based tasks. We explore how these methods could impact multiple facets of radiology, with a general focus on applications in oncology, and demonstrate ways in which these methods are advancing the field. Finally, we discuss the challenges facing clinical implementation and provide our perspective on how the domain could be advanced.

  15. Self-Assessing of the Emotional Intelligence and Organizational Intelligence in Schools

    Science.gov (United States)

    Dagiene, Valentina; Juškeviciene, Anita; Carneiro, Roberto; Child, Camilla; Cullen, Joe

    2015-01-01

    The paper presents the results of an evaluation of the Emotional Intelligence (EI) and Organisational Intelligence (OI) competences self-assessment tools developed and applied by the IGUANA project. In the paper Emotional Intelligence and Organisational Intelligence competences are discussed, their use in action research experiments to assess and…

  16. De Novo Design of Bioactive Small Molecules by Artificial Intelligence.

    Science.gov (United States)

    Merk, Daniel; Friedrich, Lukas; Grisoni, Francesca; Schneider, Gisbert

    2018-01-01

    Generative artificial intelligence offers a fresh view on molecular design. We present the first-time prospective application of a deep learning model for designing new druglike compounds with desired activities. For this purpose, we trained a recurrent neural network to capture the constitution of a large set of known bioactive compounds represented as SMILES strings. By transfer learning, this general model was fine-tuned on recognizing retinoid X and peroxisome proliferator-activated receptor agonists. We synthesized five top-ranking compounds designed by the generative model. Four of the compounds revealed nanomolar to low-micromolar receptor modulatory activity in cell-based assays. Apparently, the computational model intrinsically captured relevant chemical and biological knowledge without the need for explicit rules. The results of this study advocate generative artificial intelligence for prospective de novo molecular design, and demonstrate the potential of these methods for future medicinal chemistry. © 2018 The Authors. Published by Wiley-VCH Verlag GmbH & Co. KGaA.
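
    A character-level recurrent model over SMILES strings of the kind described above can be sketched as follows; the vocabulary size, layer widths, and two-layer LSTM are illustrative assumptions, and the transfer-learning step would amount to continuing training on the receptor-agonist subsets.

    ```python
    import torch
    import torch.nn as nn

    # Next-character language model over SMILES tokens: sampling from it
    # one character at a time generates candidate molecules.
    class SmilesRNN(nn.Module):
        def __init__(self, vocab_size=40, embed=64, hidden=256):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, embed)
            self.lstm = nn.LSTM(embed, hidden, num_layers=2, batch_first=True)
            self.head = nn.Linear(hidden, vocab_size)

        def forward(self, tokens, state=None):  # tokens: (batch, length)
            x = self.embed(tokens)
            out, state = self.lstm(x, state)
            return self.head(out), state        # logits for the next character

    logits, _ = SmilesRNN()(torch.randint(0, 40, (2, 60)))
    print(logits.shape)  # torch.Size([2, 60, 40])
    ```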

  17. Deep primary production in coastal pelagic systems

    DEFF Research Database (Denmark)

    Lyngsgaard, Maren Moltke; Richardson, Katherine; Markager, Stiig

    2014-01-01

    produced. The primary production (PP) occurring below the surface layer, i.e. in the pycnocline-bottom layer (PBL), is shown to contribute significantly to total PP. Oxygen concentrations in the PBL are shown to correlate significantly with the deep primary production (DPP) as well as with salinity...... that eutrophication effects may include changes in the structure of planktonic food webs and element cycling in the water column, both brought about through an altered vertical distribution of PP....

  18. An algorithm for management of deep brain stimulation battery replacements: devising a web-based battery estimator and clinical symptom approach.

    Science.gov (United States)

    Montuno, Michael A; Kohner, Andrew B; Foote, Kelly D; Okun, Michael S

    2013-01-01

    Deep brain stimulation (DBS) is an effective technique that has been utilized to treat advanced and medication-refractory movement and psychiatric disorders. In order to avoid implanted pulse generator (IPG) failure and consequent adverse symptoms, a better understanding of IPG battery longevity and management is necessary. Existing methods for battery estimation lack the specificity required for clinical incorporation. Technical challenges prevent higher accuracy longevity estimations, and a better approach to managing end of DBS battery life is needed. The literature was reviewed and DBS battery estimators were constructed by the authors and made available on the web at http://mdc.mbi.ufl.edu/surgery/dbs-battery-estimator. A clinical algorithm for management of DBS battery life was constructed. The algorithm takes into account battery estimations and clinical symptoms. Existing methods of DBS battery life estimation utilize an interpolation of averaged current drains to calculate how long a battery will last. Unfortunately, this technique can only provide general approximations. There are inherent errors in this technique, and these errors compound with each iteration of the battery estimation. Some of these errors cannot be accounted for in the estimation process, and some of the errors stem from device variation, battery voltage dependence, battery usage, battery chemistry, impedance fluctuations, interpolation error, usage patterns, and self-discharge. We present web-based battery estimators along with an algorithm for clinical management. We discuss the perils of using a battery estimator without taking into account the clinical picture. Future work will be needed to provide more reliable management of implanted device batteries; however, implementation of a clinical algorithm that accounts for both estimated battery life and patient symptoms should improve the care of DBS patients. © 2012 International Neuromodulation Society.
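
    The interpolation approach the paper critiques boils down to dividing remaining battery capacity by an averaged current drain, which is why device variation and fluctuating impedance compound into large errors. A minimal sketch of that arithmetic, with illustrative (not manufacturer) numbers:

    ```python
    # Naive longevity estimate: remaining capacity over an averaged drain.
    # Values are illustrative only; real IPGs vary in chemistry,
    # self-discharge, and usage pattern, all of which this ignores.
    def estimate_battery_years(remaining_capacity_ah, avg_drain_ma):
        hours = (remaining_capacity_ah * 1000.0) / avg_drain_ma
        return hours / (24 * 365)

    # e.g. 1.2 Ah left at an average 0.025 mA drain -> about 5.5 years
    print(round(estimate_battery_years(1.2, 0.025), 1))
    ```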

  19. Perceived Intelligence Is Associated with Measured Intelligence in Men but Not Women

    Science.gov (United States)

    Kleisner, Karel; Chvátalová, Veronika; Flegr, Jaroslav

    2014-01-01

    Background The ability to accurately assess the intelligence of other persons finds its place in everyday social interaction and should have important evolutionary consequences. Methodology/Principal Findings We used static facial photographs of 40 men and 40 women to test the relationship between measured IQ, perceived intelligence, and facial shape. Both men and women were able to accurately evaluate the intelligence of men by viewing facial photographs. In addition to general intelligence, figural and fluid intelligence showed a significant relationship with perceived intelligence, but again, only in men. No relationship between perceived intelligence and IQ was found for women. We used geometric morphometrics to determine which facial traits are associated with the perception of intelligence, as well as with intelligence as measured by IQ testing. Faces that are perceived as highly intelligent are rather prolonged with a broader distance between the eyes, a larger nose, a slight upturn to the corners of the mouth, and a sharper, pointing, less rounded chin. By contrast, the perception of lower intelligence is associated with broader, more rounded faces with eyes closer to each other, a shorter nose, declining corners of the mouth, and a rounded and massive chin. By contrast, we found no correlation between morphological traits and real intelligence measured with an IQ test, in either men or women. Conclusions These results suggest that a perceiver can accurately gauge the real intelligence of men, but not women, by viewing their faces in photographs; however, this estimation is possibly not based on facial shape. Our study revealed no relation between intelligence and either attractiveness or face shape. PMID:24651120

  20. Nexus between Intelligence Education and Intelligence Training: A South African Perspective

    Directory of Open Access Journals (Sweden)

    M. A. van den Berg

    2015-10-01

    Full Text Available This paper examines the nexus of intelligence education and training from a South African perspective, with the focus on current practices in light of the country’s transition towards democracy. A brief overview is provided of the history and development of the South African intelligence community, with specific focus on the civilian intelligence services from the period prior to 1994 to date (2015). The main focus, however, is on intelligence education as currently available from training institutions and universities in South Africa registered with the Department of Higher Education, as well as private training institutions, on the one hand, and the intelligence training practices within the statutory intelligence environment on the other. To this extent, the relations between academic institutions and the intelligence structures in terms of education and training within South Africa are examined against other practices within the African continent and internationally. The approaches to the study of intelligence are also addressed within this paper. Likewise, the how and what of intelligence education and training, as well as its availability and accessibility to students and practitioners within South Africa, are reviewed and analysed, with a focus on making recommendations for enhancement and improvement so as to prepare the next generation of professional intelligence officers.

  1. Modeling food web interactions in benthic deep-sea ecosystems. A practical guide

    NARCIS (Netherlands)

    Soetaert, K.E.R.; Van Oevelen, D.J.

    2009-01-01

    Deep-sea benthic systems are notoriously difficult to sample. Even more than for other benthic systems, many flows among biological groups cannot be directly measured, and data sets remain incomplete and uncertain. In such cases, mathematical models are often used to quantify unmeasured biological

  2. Results of AN Evaluation of the Orchestration Capabilities of the Zoo Project and the 52° North Framework for AN Intelligent Geoportal

    Science.gov (United States)

    Rautenbach, V.; Coetzee, S.; Strzelecki, M.; Iwaniak, A.

    2012-07-01

    The aim of a spatial data infrastructure (SDI) is to make data available for economic and societal benefit to a wide audience. A geoportal typically provides access to spatial data and associated web services in an SDI, facilitating the discovery, display, editing and analysis of data. In contrast, a spatial information infrastructure (SII) should provide access to information, i.e. data that has been processed, organized and presented so as to be useful. Thematic maps are an example of the representation of spatial information. An SII geoportal requires intelligence to orchestrate (automatically coordinate) web services that prepare, discover and present information, instead of data, to the user. We call this an intelligent geoportal. The Open Geospatial Consortium's Web Processing Service (WPS) standard provides the rules for describing the input and output of any type of spatial process. In this paper we present the results of an evaluation of two orchestration platforms: the 52° North framework and the ZOO project. We evaluated the frameworks' orchestration capabilities for producing thematic maps. Results of the evaluation show that both frameworks have the potential to facilitate orchestration in an intelligent geoportal, but that some functionality is still lacking, notably semantic information and framework usability; these limitations create barriers to the widespread use of the frameworks and need to be addressed before the frameworks can be used for advanced orchestration. The results of our evaluation of these frameworks, with their respective strengths and weaknesses, can guide developers in choosing the framework best suited to their specific needs.

  3. Computational intelligence from AI to BI to NI

    Science.gov (United States)

    Werbos, Paul J.

    2015-05-01

    This paper gives highlights of the history of the neural network field, stressing the fundamental ideas which have been in play. Early neural network research was motivated mainly by the goals of artificial intelligence (AI) and of functional neuroscience (biological intelligence, BI), but the field almost died due to frustrations articulated in the famous book Perceptrons by Minsky and Papert. When I found a way to overcome the difficulties by 1974, the community mindset was very resistant to change; it was not until 1987/1988 that the field was reborn in a spectacular way, leading to the organized communities now in place. Even then, it took many more years to establish crossdisciplinary research in the types of mathematical neural networks needed to really understand the kind of intelligence we see in the brain, and to address the most demanding engineering applications. Only through a new (albeit short-lived) funding initiative, funding crossdisciplinary teams of systems engineers and neuroscientists, were we able to fund the critical empirical demonstrations which put our old basic principle of "deep learning" firmly on the map in computer science. Progress has rightly been inhibited at times by legitimate concerns about the "Terminator threat" and other possible abuses of technology. This year, at SPIE, in the quantum computing track, we outline the next stage ahead of us in breaking out of the box, again and again, and rising to fundamental challenges and opportunities still ahead of us.

  4. Applications of Deep Learning and Reinforcement Learning to Biological Data.

    Science.gov (United States)

    Mahmud, Mufti; Kaiser, Mohammed Shamim; Hussain, Amir; Vassanelli, Stefano

    2018-06-01

    Rapid advances in hardware-based technologies during the past decades have opened up new possibilities for life scientists to gather multimodal data in various application domains, such as omics, bioimaging, medical imaging, and (brain/body)-machine interfaces. These have generated novel opportunities for development of dedicated data-intensive machine learning techniques. In particular, recent research in deep learning (DL), reinforcement learning (RL), and their combination (deep RL) promise to revolutionize the future of artificial intelligence. The growth in computational power accompanied by faster and increased data storage, and declining computing costs have already allowed scientists in various fields to apply these techniques on data sets that were previously intractable owing to their size and complexity. This paper provides a comprehensive survey on the application of DL, RL, and deep RL techniques in mining biological data. In addition, we compare the performances of DL techniques when applied to different data sets across various application domains. Finally, we outline open issues in this challenging research area and discuss future development perspectives.

  5. The relationship of Emotional Intelligence with Academic Intelligence and the Big Five

    NARCIS (Netherlands)

    Van der Zee, K.I.; Thijs, Melanie; Schakel, Lolle

    The present study examines the relationship of self- and other ratings of emotional intelligence with academic intelligence and personality, as well as the incremental validity of emotional intelligence beyond academic intelligence and personality in predicting academic and social success. A sample

  6. The relationship of emotional intelligence with academic intelligence and the Big Five

    NARCIS (Netherlands)

    van der Zee, K; Thijs, M; Schakel, L

    2002-01-01

    The present study examines the relationship of self- and other ratings of emotional intelligence with academic intelligence and personality, as well as the incremental validity of emotional intelligence beyond academic intelligence and personality in predicting academic and social success. A sample

  7. RaptorX-Property: a web server for protein structure property prediction.

    Science.gov (United States)

    Wang, Sheng; Li, Wei; Liu, Shiwang; Xu, Jinbo

    2016-07-08

    RaptorX Property (http://raptorx2.uchicago.edu/StructurePropertyPred/predict/) is a web server predicting structure property of a protein sequence without using any templates. It outperforms other servers, especially for proteins without close homologs in PDB or with very sparse sequence profile (i.e. carries little evolutionary information). This server employs a powerful in-house deep learning model DeepCNF (Deep Convolutional Neural Fields) to predict secondary structure (SS), solvent accessibility (ACC) and disorder regions (DISO). DeepCNF not only models complex sequence-structure relationship by a deep hierarchical architecture, but also interdependency between adjacent property labels. Our experimental results show that, tested on CASP10, CASP11 and the other benchmarks, this server can obtain ∼84% Q3 accuracy for 3-state SS, ∼72% Q8 accuracy for 8-state SS, ∼66% Q3 accuracy for 3-state solvent accessibility, and ∼0.89 area under the ROC curve (AUC) for disorder prediction. © The Author(s) 2016. Published by Oxford University Press on behalf of Nucleic Acids Research.

  8. Incorporating intelligence into structured radiology reports

    Science.gov (United States)

    Kahn, Charles E.

    2014-03-01

    The new standard for radiology reporting templates being developed through the Integrating the Healthcare Enterprise (IHE) and DICOM organizations defines the storage and exchange of reporting templates as Hypertext Markup Language version 5 (HTML5) documents. The use of HTML5 enables the incorporation of "dynamic HTML," in which documents can be altered in response to their content. HTML5 documents can employ JavaScript, the HTML Document Object Model (DOM), and external web services to create intelligent reporting templates. Several reporting templates were created to demonstrate the use of scripts to perform in-template calculations and decision support. For example, a template for adrenal CT was created to compute contrast washout percentage from input values of precontrast, dynamic postcontrast, and delayed adrenal nodule attenuation values; the washout value can be used to classify an adrenal nodule as a benign cortical adenoma. Dynamic templates were developed to compute volumes and apply diagnostic criteria, such as those for determination of internal carotid artery stenosis. Although reporting systems need not use a web browser to render the templates or their contents, the use of JavaScript creates innumerable opportunities to construct highly sophisticated HTML5 reporting templates. This report demonstrates the ability to incorporate dynamic content to enhance the use of radiology reporting templates.
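
    As a worked example of the in-template calculation mentioned above, adrenal washout reduces to simple arithmetic over three attenuation values. The sketch below shows that arithmetic in Python (the templates themselves embed it as JavaScript); the sample values and the 60% adenoma threshold are standard in the adrenal CT literature but are illustrative here.

    ```python
    # Absolute percentage washout (APW) from three attenuation values (HU).
    def absolute_washout(pre_hu, post_hu, delayed_hu):
        return (post_hu - delayed_hu) / (post_hu - pre_hu) * 100.0

    apw = absolute_washout(pre_hu=10, post_hu=80, delayed_hu=35)
    print(f"{apw:.0f}%")  # 64% -> washout >= 60% favors a benign adenoma
    ```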

  9. Helping utilities harness the power of the web for substation automation

    Energy Technology Data Exchange (ETDEWEB)

    Finney, D.

    2000-11-01

    The significance of the Internet to the deregulated electric power industry and the ability to tap into the benefits of Web-enabled substation monitoring and control are reviewed. It is this author's contention that the convergence of Internet access from PCs, servers and Internet-ready intelligent electronic devices makes it possible to have full-scale substation automation and control without the high price tag associated with SCADA systems. Whereas in the past automation solutions were thought to be appropriate only for big utilities, Internet offerings such as the GE-hosted enerVista.com service, which is made up of a number of modules that can provide many of the services of a complex enterprise management system at a fraction of the cost, make it possible for smaller utilities to overcome substation automation problems at an affordable cost. By having the communications link over the web, and data acquisition hosted by an outside vendor, even the smallest municipal utility can have the most up-to-date equipment at its disposal, and expand its control to SCADA-level functionality without having to incur the usual programming and technology costs. The example is cited of Whitby Hydro, which automated its three-substation system with GE Power Management's Universal Relay (UR) intelligent electronic devices by installing a modem as an Internet appliance for 24/7 monitoring and optional protection and control. Utilities in Oshawa and Thunder Bay, Ontario, and others in New York State and Tennessee are among those currently developing web-based applications that address their unique requirements. At present, there appears to be no limit to the role that the Internet can play in substation automation and control for utilities competing in a global market.

  10. Web Page Recommendation Using Web Mining

    OpenAIRE

    Modraj Bhavsar; Mrs. P. M. Chavan

    2014-01-01

    On the World Wide Web, content of various kinds is generated in huge amounts, so giving relevant results to users makes web recommendation an important part of web applications. Different kinds of web recommendations are made available to users every day, including images, video, audio, query suggestions and web pages. In this paper we aim at providing a framework for web page recommendation. 1) First we describe the basics of web mining and the types of web mining. 2) Details of each...

  11. Ductility and performance assessment of high strength self compacting concrete (HSSCC) deep beams: An experimental investigation

    Energy Technology Data Exchange (ETDEWEB)

    Mohammadhassani, Mohammad, E-mail: mmh356@yahoo.com [Department of Civil Engineering, University of Malaya, Kuala Lumpur (Malaysia); Jumaat, Mohd Zamin; Jameel, Mohammed [Department of Civil Engineering, University of Malaya, Kuala Lumpur (Malaysia); Badiee, Hamid [Department of Civil Engineering, University of Kerman (Iran, Islamic Republic of); Arumugam, Arul M.S. [Department of Civil Engineering, University of Malaya, Kuala Lumpur (Malaysia)

    2012-09-15

    Highlights: ► Ductility decreased with increase in tensile reinforcement ratio. ► The width of the load point and the support point influences premature failure. ► Load–deflection relationship is linear till 85% of the ultimate load. ► The absorbed energy increases with the increase of tensile reinforcement ratios. - Abstract: The behavior of deep beams is significantly different from that of normal beams. Because of their proportions, deep beams are likely to have strength controlled by shear. This paper discusses the results of eight simply supported high strength self compacting concrete (HSSCC) deep beams having variation in ratio of web reinforcement and tensile reinforcement. The deflection at two points along the beam length, web strains, tensile bars strains and the strain at concrete surface are recorded. The results show that the strain distribution at the section height of mid span is nonlinear. Ductility decreased with increase in tensile reinforcement ratio. The effect of width of load point and the support point is more important than the effect of tensile reinforcement ratio in preventing premature failure. Load–deflection graphs confirm linear relationship up to 85% of the ultimate load for HSSCC over-reinforcement web sections. The absorbed energy index increases with the increase in tensile reinforcement ratios.

  12. Web sites survey for electronic public participation

    International Nuclear Information System (INIS)

    Park, Moon Su; Lee, Young Wook; Kang, Chang Sun

    2004-01-01

    Public acceptance has been a key factor in the nuclear industry, as in other fields. There are many ways to gain public acceptance, and public participation in policy-making is a good tool for this purpose. Moreover, participation by means of the Internet may be an excellent way to increase voluntary participation. In this paper, the level of electronic public participation is defined, and how easily and deeply the lay public can participate electronically is assessed for several organizations' web sites

  13. OPUS One: An Intelligent Adaptive Learning Environment Using Artificial Intelligence Support

    Science.gov (United States)

    Pedrazzoli, Attilio

    2010-06-01

    AI-based tutoring and learning path adaptation are well known concepts in e-learning scenarios today and are increasingly applied in modern learning environments. In order to gain more flexibility and to enhance existing e-learning platforms, the OPUS One LMS Extension package enables a generic intelligent tutored adaptive learning environment, based on a holistic Multidimensional Instructional Design Model (PENTHA ID Model), adding AI-based tutoring and adaptation functionality to existing Web-based e-learning systems. Relying on "real time" adapted profiles, it allows content and course authors to apply a dynamic course design, supporting tutored, collaborative sessions and activities, as suggested by modern pedagogy. The concept presented combines a personalized level of surveillance with learning activity and learning path adaptation suggestions to ensure the student's learning motivation and learning success. The OPUS One concept implements an advanced tutoring approach that combines "expert based" e-tutoring with the more "personal" human tutoring function, supplying the "Human Tutor" with precise, extended course activity data and "adaptation" suggestions based on predefined subject matter rules. The concept architecture is modular, allowing a personalized platform configuration.

  14. PredMP: A Web Resource for Computationally Predicted Membrane Proteins via Deep Learning

    KAUST Repository

    Wang, Sheng; Fei, Shiyang; Zongan, Wang; Li, Yu; Zhao, Feng; Gao, Xin

    2018-01-01

    structures in Protein Data Bank (PDB). To elucidate the MP structures computationally, we developed a novel web resource, denoted as PredMP (http://52.87.130.56:3001/#/proteinindex), that delivers one-dimensional (1D) annotation of the membrane topology

  15. Keeping Dublin Core Simple: Cross-Domain Discovery or Resource Description?; First Steps in an Information Commerce Economy: Digital Rights Management in the Emerging E-Book Environment; Interoperability: Digital Rights Management and the Emerging EBook Environment; Searching the Deep Web: Direct Query Engine Applications at the Department of Energy.

    Science.gov (United States)

    Lagoze, Carl; Neylon, Eamonn; Mooney, Stephen; Warnick, Walter L.; Scott, R. L.; Spence, Karen J.; Johnson, Lorrie A.; Allen, Valerie S.; Lederman, Abe

    2001-01-01

    Includes four articles that discuss Dublin Core metadata, digital rights management and electronic books, including interoperability; and directed query engines, a type of search engine designed to access resources on the deep Web that is being used at the Department of Energy. (LRW)

  16. Artificial Intelligence Project

    Science.gov (United States)

    1990-01-01

    Artificial Intelligence Project: Final Technical Report, ARO Contract DAAG29-84-K-0060, Artificial Intelligence Laboratory, The University of Texas at Austin, Austin, Texas 78712. Includes "Symposium on Artificial Intelligence and Software Engineering Working Notes, March 1989" and Blumenthal, Brad, "An Architecture for Automating..."

  17. Improving Logistics Processes in Industry Using Web Technologies

    Science.gov (United States)

    Jánošík, Ján; Tanuška, Pavol; Václavová, Andrea

    2016-12-01

    The aim of this paper is to propose the concept of a system that takes advantage of web technologies and integrates them into the process of managing internal stocks, which may relate to external applications, and that creates the conditions for establishing Computerized Control of Warehouse Stock (CCWS) in the company. The importance of implementing CCWS lies in eliminating the claims caused by the human factor, as well as in allowing information to be processed for analytical purposes and subsequently used to improve internal processes. Using CCWS in the company would also facilitate better use of tools such as Business Intelligence and Data Mining.

  18. Sharing adverse drug event data using business intelligence technology.

    Science.gov (United States)

    Horvath, Monica M; Cozart, Heidi; Ahmad, Asif; Langman, Matthew K; Ferranti, Jeffrey

    2009-03-01

    Duke University Health System uses computerized adverse drug event surveillance as an integral part of medication safety at 2 community hospitals and an academic medical center. This information must be swiftly communicated to organizational patient safety stakeholders to find opportunities to improve patient care; however, this process is encumbered by highly manual methods of preparing the data. Following the examples of other industries, we deployed a business intelligence tool to provide dynamic safety reports on adverse drug events. Once data were migrated into the health system data warehouse, we developed census-adjusted reports with user-driven prompts. Drill-down functionality enables navigation from aggregate trends to event details by clicking report graphics. Reports can be accessed by patient safety leadership either through an existing safety reporting portal or the health system performance improvement Web site. Elaborate prompt screens allow many varieties of reports to be created quickly by patient safety personnel without consultation with the research analyst. The reduction in research analyst workload resulting from the business intelligence implementation made this individual available for additional patient safety projects, thereby leveraging their talents more effectively. Dedicated liaisons are essential to ensure clear communication between clinical and technical staff throughout the development life cycle. Design and development of the business intelligence model for adverse drug event data must reflect the eccentricities of the operational system, especially as new areas of emphasis evolve. Future usability studies examining the data presentation and access model are needed.

  19. Routledge companion to intelligence studies

    CERN Document Server

    Dover, Robert; Hillebrand, Claudia

    2013-01-01

    The Routledge Companion to Intelligence Studies provides a broad overview of the growing field of intelligence studies. The recent growth of interest in intelligence and security studies has led to an increased demand for popular depictions of intelligence and reference works to explain the architecture and underpinnings of intelligence activity. Divided into five comprehensive sections, this Companion provides a strong survey of the cutting-edge research in the field of intelligence studies: Part I: The evolution of intelligence studies; Part II: Abstract approaches to intelligence; Part III: Historical approaches to intelligence; Part IV: Systems of intelligence; Part V: Contemporary challenges. With a broad focus on the origins, practices and nature of intelligence, the book not only addresses classical issues, but also examines topics of recent interest in security studies. The overarching aim is to reveal the rich tapestry of intelligence studies in both a sophisticated and accessible way. This Companion...

  20. Definición y desarrollo de herramienta web de gestión de metadatos Business Intelligence

    OpenAIRE

    Montalvillo Mendizabal, Leticia

    2012-01-01

    Nowadays, large companies rely on Business Intelligence (BI) systems for the various objectives and tasks they must carry out. This project focuses on the definition of a BI metadata repository that will store data related to Key Performance Indicators (KPIs).

  1. Deep Learning Improves Antimicrobial Peptide Recognition.

    Science.gov (United States)

    Veltri, Daniel; Kamath, Uday; Shehu, Amarda

    2018-03-24

    Bacterial resistance to antibiotics is a growing concern. Antimicrobial peptides (AMPs), natural components of innate immunity, are popular targets for developing new drugs. Machine learning methods are now commonly adopted by wet-laboratory researchers to screen for promising candidates. In this work we utilize deep learning to recognize antimicrobial activity. We propose a neural network model with convolutional and recurrent layers that leverage primary sequence composition. Results show that the proposed model outperforms state-of-the-art classification models on a comprehensive data set. By utilizing the embedding weights, we also present a reduced-alphabet representation and show that reasonable AMP recognition can be maintained using nine amino-acid types. Models and data sets are made freely available through the Antimicrobial Peptide Scanner vr.2 web server at www.ampscanner.com. Contact amarda@gmu.edu for general inquiries and dan.veltri@gmail.com for web server information. Supplementary data are available at Bioinformatics online.
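
    The model described above combines convolutional motif detection with recurrent sequence context. The following Keras sketch is a minimal illustration of that general architecture, not the authors' exact network; vocabulary size, sequence length and layer sizes are assumptions:

    ```python
    from tensorflow.keras import layers, models

    MAX_LEN = 200  # peptides padded/truncated to this length (assumption)
    VOCAB = 21     # 20 amino acids + 1 padding token

    model = models.Sequential([
        layers.Input(shape=(MAX_LEN,)),
        layers.Embedding(input_dim=VOCAB, output_dim=16),     # learned residue embedding
        layers.Conv1D(64, kernel_size=5, activation="relu"),  # local sequence-motif detector
        layers.MaxPooling1D(pool_size=2),
        layers.LSTM(64),                                      # long-range sequence context
        layers.Dense(1, activation="sigmoid"),                # P(peptide is antimicrobial)
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    model.summary()
    ```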

  2. 78 FR 90 - Defense Intelligence Agency National Intelligence University Board of Visitors Closed Meeting

    Science.gov (United States)

    2013-01-02

    ... DEPARTMENT OF DEFENSE Office of the Secretary Defense Intelligence Agency National Intelligence University Board of Visitors Closed Meeting AGENCY: National Intelligence University, Defense Intelligence... hereby given that a closed meeting of the National Intelligence University Board of Visitors has been...

  3. Intelligent approaches for the synthesis of petrophysical logs

    International Nuclear Information System (INIS)

    Rezaee, M Reza; Kadkhodaie-Ilkhchi, Ali; Alizadeh, Pooya Mohammad

    2008-01-01

    Log data are of prime importance in acquiring petrophysical data from hydrocarbon reservoirs. Reliable log analysis in a hydrocarbon reservoir requires a complete set of logs. For many reasons, such as incomplete logging in old wells, destruction of logs due to inappropriate data storage and measurement errors due to problems with logging apparatus or hole conditions, log suites are either incomplete or unreliable. In this study, fuzzy logic and artificial neural networks were used as intelligent tools to synthesize petrophysical logs including neutron, density, sonic and deep resistivity. The petrophysical data from two wells were used for constructing intelligent models in the Fahlian limestone reservoir, Southern Iran. A third well from the field was used to evaluate the reliability of the models. The results showed that fuzzy logic and artificial neural networks were successful in synthesizing wireline logs. The combination of the results obtained from fuzzy logic and neural networks in a simple averaging committee machine (CM) showed a significant improvement in the accuracy of the estimations. This committee machine performed better than fuzzy logic or the neural network model in the problem of estimating petrophysical properties from well logs
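
    The committee machine mentioned above simply averages the predictions of separately trained estimators. A minimal sketch, with two generic scikit-learn regressors standing in for the neural-network and fuzzy-logic members (data and features are synthetic placeholders):

    ```python
    import numpy as np
    from sklearn.neural_network import MLPRegressor
    from sklearn.tree import DecisionTreeRegressor

    rng = np.random.default_rng(1)
    X = rng.normal(size=(300, 4))  # stand-ins for input logs (e.g. gamma ray, SP, ...)
    y = X @ np.array([1.5, -0.8, 0.3, 0.1]) + rng.normal(scale=0.2, size=300)  # target log

    members = [
        MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0).fit(X, y),
        DecisionTreeRegressor(max_depth=5, random_state=0).fit(X, y),
    ]

    def committee_predict(models, X_new):
        """Simple averaging committee machine over the member predictions."""
        return np.mean([m.predict(X_new) for m in models], axis=0)

    print(committee_predict(members, X[:5]))
    ```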

  4. Intelligent medical information filtering.

    Science.gov (United States)

    Quintana, Y

    1998-01-01

    This paper describes an intelligent information filtering system to assist users in being notified of updates to new and relevant medical information. Among the major problems users face are the large volume of medical information generated each day and the need to filter and retrieve relevant information. The Internet has dramatically increased the amount of electronically accessible medical information and reduced the cost and time needed to publish. The opportunity the Internet offers the medical profession and consumers is to have more information with which to make decisions, and this could potentially lead to better medical decisions and outcomes. However, without the assistance of professional medical librarians, retrieving new and relevant information from databases and the Internet remains a challenge. Many physicians do not have access to the services of a medical librarian. Most physicians indicate on surveys that they prefer not to retrieve the literature themselves, or to visit libraries, because of the lack of recent materials, poor organisation and indexing of materials, lack of appropriate and available material, and lack of time. The information filtering system described in this paper records the online web browsing behaviour of each user and creates a user profile of the index terms found on the web pages visited by the user. A relevance-ranking algorithm then matches the user profiles to the index terms of new health care web pages that are added each day. The system creates customised summaries of new information for each user. A user can then connect to the web site to read the new information. Relevance feedback buttons on each page ask the user to rate the usefulness of the page to their immediate information needs. Errors in relevance ranking are reduced in this system by having both the user profile and the medical information represented in the same representation language using a controlled vocabulary. This system also updates the user profiles
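
    The matching step described above, comparing a term-frequency profile against the index terms of new pages, can be expressed as a cosine similarity over a shared controlled vocabulary. A minimal sketch with hypothetical terms and page names:

    ```python
    from collections import Counter
    import math

    def cosine(a: Counter, b: Counter) -> float:
        """Cosine similarity between two term-frequency vectors."""
        dot = sum(a[t] * b[t] for t in set(a) & set(b))
        na = math.sqrt(sum(v * v for v in a.values()))
        nb = math.sqrt(sum(v * v for v in b.values()))
        return dot / (na * nb) if na and nb else 0.0

    # Profile accumulated from index terms of pages the user has visited
    profile = Counter({"diabetes": 5, "insulin": 3, "pediatrics": 1})

    # Index terms of newly added health care pages (hypothetical)
    new_pages = {
        "page-glucose-monitoring": Counter({"insulin": 2, "diabetes": 1}),
        "page-oncology-news": Counter({"oncology": 3, "radiotherapy": 1}),
    }

    ranked = sorted(new_pages, key=lambda p: cosine(profile, new_pages[p]), reverse=True)
    print(ranked)  # most relevant new pages first
    ```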

  5. Deep learning guided stroke management: a review of clinical applications.

    Science.gov (United States)

    Feng, Rui; Badgeley, Marcus; Mocco, J; Oermann, Eric K

    2018-04-01

    Stroke is a leading cause of long-term disability, and outcome is directly related to timely intervention. Not all patients benefit from rapid intervention, however. Thus a significant amount of attention has been paid to using neuroimaging to assess potential benefit by identifying areas of ischemia that have not yet experienced cellular death. The perfusion-diffusion mismatch is used as a simple metric for potential benefit with timely intervention, yet penumbral patterns provide an inaccurate predictor of clinical outcome. Machine learning research in the form of deep learning (artificial intelligence) techniques using deep neural networks (DNNs) excels at working with complex inputs. The key areas where deep learning may be imminently applied to stroke management are image segmentation, automated featurization (radiomics), and multimodal prognostication. The application of convolutional neural networks, the family of DNN architectures designed to work with images, to stroke imaging data is a perfect match between a mature deep learning technique and a data type that is naturally suited to benefit from deep learning's strengths. These powerful tools have opened up exciting opportunities for data-driven stroke management for acute intervention and for guiding prognosis. Deep learning techniques are useful for the speed and power of results they can deliver and will become an increasingly standard tool in the modern stroke specialist's arsenal for delivering personalized medicine to patients with ischemic stroke. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2018. All rights reserved. No commercial use is permitted unless otherwise expressly granted.

  6. Neural Network Substorm Identification: Enabling TREx Sensor Web Modes

    Science.gov (United States)

    Chaddock, D.; Spanswick, E.; Arnason, K. M.; Donovan, E.; Liang, J.; Ahmad, S.; Jackel, B. J.

    2017-12-01

    Transition Region Explorer (TREx) is a ground-based sensor web of optical and radio instruments that is presently being deployed across central Canada. The project consists of an array of co-located blue-line, full-colour, and near-infrared all-sky imagers, imaging riometers, proton aurora spectrographs, and GNSS systems. A key goal of the TREx project is to create the world's first (artificial) intelligent sensor web for remote sensing space weather. The sensor web will autonomously control and coordinate instrument operations in real-time. To accomplish this, we will use real-time in-line analytics of TREx and other data to dynamically switch between operational modes. An operating mode could be, for example, to have a blue-line imager gather data at a cadence one or two orders of magnitude higher than in its `baseline' mode. The software decision to increase the imaging cadence would be in response to an anticipated increase in auroral activity or other programmatic requirements. Our first test for TREx's sensor web technologies is to develop the capacity to autonomously alter the TREx operating mode prior to a substorm expansion phase onset. In this paper, we present our neural network analysis of historical optical and riometer data and our ability to predict an optical onset. We also present preliminary insights into using a neural network to pick out trends and features that it deems similar among substorms.
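
    Mode switching of the kind described, raising the imaging cadence when an onset is anticipated, reduces at its simplest to threshold logic with hysteresis on the classifier output. A toy sketch (thresholds and mode names are invented for illustration, not TREx's actual control logic):

    ```python
    def select_mode(p_onset: float, current: str = "baseline",
                    high: float = 0.7, low: float = 0.3) -> str:
        """Pick a sensor-web operating mode from a predicted onset probability.

        Hysteresis (two thresholds) avoids rapid thrashing between modes.
        """
        if p_onset >= high:
            return "burst"     # e.g. imaging cadence 1-2 orders of magnitude higher
        if p_onset <= low:
            return "baseline"
        return current         # between thresholds: keep the present mode

    print(select_mode(0.85))                   # -> "burst"
    print(select_mode(0.5, current="burst"))   # -> "burst" (held by hysteresis)
    ```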

  7. Web of Objects Based Ambient Assisted Living Framework for Emergency Psychiatric State Prediction

    Science.gov (United States)

    Alam, Md Golam Rabiul; Abedin, Sarder Fakhrul; Al Ameen, Moshaddique; Hong, Choong Seon

    2016-01-01

    Ambient assisted living can facilitate optimum health and wellness by aiding physical, mental and social well-being. In this paper, patients’ psychiatric symptoms are collected through lightweight biosensors and web-based psychiatric screening scales in a smart home environment and then analyzed through machine learning algorithms to provide ambient intelligence in a psychiatric emergency. The psychiatric states are modeled through a Hidden Markov Model (HMM), and the model parameters are estimated using a Viterbi path counting and scalable Stochastic Variational Inference (SVI)-based training algorithm. The most likely psychiatric state sequence of the corresponding observation sequence is determined, and an emergency psychiatric state is predicted through the proposed algorithm. Moreover, to enable personalized psychiatric emergency care, a web of objects-based service framework is proposed for a smart-home environment. In this framework, the biosensor observations and the psychiatric rating scales are objectified and virtualized in the web space. Then, the web of objects of sensor observations and psychiatric rating scores are used to assess the dweller’s mental health status and to predict an emergency psychiatric state. The proposed psychiatric state prediction algorithm reported 83.03 percent prediction accuracy in an empirical performance study. PMID:27608023
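
    The Viterbi step referred to above finds the most likely hidden-state sequence given the observations. A generic log-domain Viterbi decoder (toy parameters, not the paper's trained model):

    ```python
    import numpy as np

    def viterbi(obs, pi, A, B):
        """Most likely hidden-state path for an observation sequence (log domain)."""
        T, n = len(obs), len(pi)
        logd = np.full((T, n), -np.inf)      # best log-probability ending in each state
        back = np.zeros((T, n), dtype=int)   # backpointers
        logd[0] = np.log(pi) + np.log(B[:, obs[0]])
        for t in range(1, T):
            for j in range(n):
                scores = logd[t - 1] + np.log(A[:, j])
                back[t, j] = int(np.argmax(scores))
                logd[t, j] = scores[back[t, j]] + np.log(B[j, obs[t]])
        path = [int(np.argmax(logd[-1]))]
        for t in range(T - 1, 0, -1):
            path.append(int(back[t, path[-1]]))
        return path[::-1]

    pi = np.array([0.7, 0.3])                # P(initial state): stable, at-risk (toy)
    A = np.array([[0.9, 0.1], [0.4, 0.6]])   # state-transition probabilities
    B = np.array([[0.8, 0.2], [0.3, 0.7]])   # P(observation | state)
    print(viterbi([0, 1, 1], pi, A, B))      # most likely state path for the observations
    ```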

  8. Drupal 7 Mobile Web Development Beginner's Guide

    CERN Document Server

    StovallTom

    2012-01-01

    Follow the fun example of a family pizza restaurant to help you adapt your own website to one that is fully functional in a mobile environment. Each chapter covers a different aspect of mobile web development with plenty of step-by-step instructions and screenshots to make things clearer. This book is for independent developers who may or may not have had experience with Drupal websites. We take some "deep dives" into customized functionality that will take your Drupal development and your development workflow to the next level.

  9. Intelligent products : A survey

    NARCIS (Netherlands)

    Meyer, G.G.; Främling, K.; Holmström, J.

    This paper presents an overview of the field of Intelligent Products. As Intelligent Products have many facets, this paper is mainly focused on the concept behind Intelligent Products, the technical foundations, and the achievable practical goals of Intelligent Products. A novel classification of

  10. Benefits of collective intelligence: Swarm intelligent foraging, an ethnographic research

    Directory of Open Access Journals (Sweden)

    Sivave Mashingaidze

    2014-12-01

    Full Text Available Wisdom of crowds; bees, colonies of ants, schools of fish, flocks of birds, and fireflies flashing synchronously are all examples of highly coordinated behaviors that emerge from collective, decentralized intelligence. This article is an ethnographic study of swarm intelligence foraging and the benefits derived from collective decision making. The author used secondary data analysis to look at the benefits of swarm intelligence in decision making to achieve intended goals. Concepts like combined decision making and consensus were discussed, and four principles of swarm intelligence were also discussed, viz. coordination, cooperation, deliberation and collaboration. The research found that collective decision making in swarms is the touchstone of achieving their goals. The research further recommended that corporations adopt collective intelligence for business sustainability.

  11. Gulf of Mexico Deep-Sea Coral Ecosystem Studies, 2008-2011

    Science.gov (United States)

    Kellogg, Christina A.

    2009-01-01

    Most people are familiar with tropical coral reefs, located in warm, well-illuminated, shallow waters. However, corals also exist hundreds and even thousands of meters below the ocean surface, where it is cold and completely dark. These deep-sea corals, also known as cold-water corals, have become a topic of interest due to conservation concerns over the impacts of trawling, exploration for oil and gas, and climate change. Although the existence of these corals has been known since the 1800s, our understanding of their distribution, ecology, and biology is limited due to the technical difficulties of conducting deep-sea research. DISCOVRE (DIversity, Systematics, and COnnectivity of Vulnerable Reef Ecosystems) is a new U.S. Geological Survey (USGS) program focused on deep-water coral ecosystems in the Gulf of Mexico. This integrated, multidisciplinary, international effort investigates a variety of topics related to unique and fragile deep-sea coral ecosystems from the microscopic level to the ecosystem level, including components of microbiology, population genetics, paleoecology, food webs, taxonomy, community ecology, physical oceanography, and mapping.

  12. Intelligence: Real or artificial?

    OpenAIRE

    Schlinger, Henry D.

    1992-01-01

    Throughout the history of the artificial intelligence movement, researchers have strived to create computers that could simulate general human intelligence. This paper argues that workers in artificial intelligence have failed to achieve this goal because they adopted the wrong model of human behavior and intelligence, namely a cognitive essentialist model with origins in the traditional philosophies of natural intelligence. An analysis of the word “intelligence” suggests that it originally r...

  13. Intelligent technology for construction of tutoring integrated expert systems: new aspects

    Directory of Open Access Journals (Sweden)

    Galina V. Rybina

    2017-01-01

    Full Text Available The main aim of this paper is to acquaint readers of the journal "Open Education" with the experience accumulated at the Cybernetics Department of the National Research Nuclear University MEPhI in constructing, and using in the educational process, a special class of intelligent tutoring systems based on the architectures of tutoring integrated expert systems. The development is carried out on the basis of a problem-oriented methodology and the intelligent software environment of the AT-TECHNOLOGY workbench, which together automate support of all stages of constructing such systems and maintaining their life cycle. In the context of the basic models, methods, algorithms and tools that implement the conceptual foundations of the problem-oriented methodology, and that are evolutionarily developed and experimentally investigated in the process of constructing various architectures of tutoring integrated expert systems, including web-based ones, some features of the generalized model of intelligent tutoring and its components are considered (in particular, the competence-based model of the trainee, the adaptive tutoring model, the ontology model of the course/discipline, et al., as well as the methods and means of their realization in the current versions of tutoring integrated expert systems. For the current versions of tutoring integrated expert systems, examples of the implementation of typical intelligent tutoring problems are described for the generalized ontology "Intelligent systems and technologies" (individual planning of the method of studying the training course, intelligent analysis of training tasks, intelligent support for decision making. A brief description of the conceptual foundations of the model of the intelligent software environment of the AT-TECHNOLOGY workbench is given, and a description of some components of the model is presented, with a focus on the basic components – intelligent planner, standard design procedures and reusable

  14. Search Techniques for the Web of Things: A Taxonomy and Survey

    Science.gov (United States)

    Zhou, Yuchao; De, Suparna; Wang, Wei; Moessner, Klaus

    2016-01-01

    The Web of Things aims to make physical world objects and their data accessible through standard Web technologies to enable intelligent applications and sophisticated data analytics. Due to the amount and heterogeneity of the data, it is challenging to perform data analysis directly; especially when the data is captured from a large number of distributed sources. However, the size and scope of the data can be reduced and narrowed down with search techniques, so that only the most relevant and useful data items are selected according to the application requirements. Search is fundamental to the Web of Things, yet challenging by nature in this context owing to, e.g., the mobility of objects, opportunistic presence and sensing, continuous data streams with changing spatial and temporal properties, and the need for efficient indexing of historical and real-time data. The research community has developed numerous techniques and methods to tackle these problems as reported by a large body of literature in the last few years. A comprehensive investigation of the current and past studies is necessary to gain a clear view of the research landscape and to identify promising future directions. This survey reviews the state-of-the-art search methods for the Web of Things, which are classified according to three different viewpoints: basic principles, data/knowledge representation, and contents being searched. Experiences and lessons learned from the existing work and some EU research projects related to Web of Things are discussed, and an outlook to the future research is presented. PMID:27128918

  15. Quality control of intelligence research

    International Nuclear Information System (INIS)

    Lu Yan; Xin Pingping; Wu Jian

    2014-01-01

    Quality control of intelligence research is the core issue of intelligence management and a key problem in the study of information science. This paper focuses on the performance of intelligence work to explain the significance of quality control in intelligence research. Summing up and analysing the results of previous studies, it discusses quality control methods in intelligence research, introduces the experience of foreign intelligence research quality control, and proposes some recommendations to improve quality control in intelligence research. (authors)

  16. Intelligence Issues for Congress

    Science.gov (United States)

    2013-04-23

    open source information—OSINT (newspapers...by user agencies. Section 1052 of the Intelligence Reform Act expressed the sense of Congress that there should be an open source intelligence...center to coordinate the collection, analysis, production, and dissemination of open source intelligence to other intelligence agencies. An Open Source

  17. Algorithmic memory and the right to be forgotten on the web

    Directory of Open Access Journals (Sweden)

    Elena Esposito

    2017-04-01

    Full Text Available The debate on the right to be forgotten on Google involves the relationship between human information processing and digital processing by algorithms. The specificity of digital memory is not so much its often discussed inability to forget. What distinguishes digital memory is, instead, its ability to process information without understanding. Algorithms only work with data (i.e. with differences), without remembering or forgetting. Merely calculating, algorithms manage to produce significant results not because they operate in an intelligent way, but because they “parasitically” exploit the intelligence, the memory, and the attribution of meaning by human actors. The specificity of algorithmic processing makes it possible to bypass the paradox of remembering to forget, which up to now has blocked any human-based forgetting technique. If you decide to forget some memory, the most immediate effect is drawing attention to it, thereby activating remembering. Working differently from human intelligence, however, algorithms can implement, for the first time, the classical insight that it might be possible to reinforce forgetting not by erasing memories but by multiplying them. After discussing several projects on the web which implicitly adopt this approach, the article concludes by raising some deeper problems posed when algorithms use data and metadata to produce information that cannot be attributed to any human being.

  18. Orchestrating Multiple Intelligences

    Science.gov (United States)

    Moran, Seana; Kornhaber, Mindy; Gardner, Howard

    2006-01-01

    Education policymakers often go astray when they attempt to integrate multiple intelligences theory into schools, according to the originator of the theory, Howard Gardner, and his colleagues. The greatest potential of a multiple intelligences approach to education grows from the concept of a profile of intelligences. Each learner's intelligence…

  19. Deep ART Neural Model for Biologically Inspired Episodic Memory and Its Application to Task Performance of Robots.

    Science.gov (United States)

    Park, Gyeong-Moon; Yoo, Yong-Ho; Kim, Deok-Hwa; Kim, Jong-Hwan

    2017-06-26

    Robots are expected to perform smart services and to undertake various troublesome or difficult tasks in the place of humans. Since these human-scale tasks consist of a temporal sequence of events, robots need episodic memory to store and retrieve the sequences to perform the tasks autonomously in similar situations. As episodic memory, in this paper we propose a novel Deep adaptive resonance theory (ART) neural model and apply it to the task performance of the humanoid robot, Mybot, developed in the Robot Intelligence Technology Laboratory at KAIST. Deep ART has a deep structure to learn events, episodes, and even larger units such as daily episodes. Moreover, it can robustly retrieve the correct episode from partial input cues. To demonstrate the effectiveness and applicability of the proposed Deep ART, experiments are conducted with the humanoid robot, Mybot, for performing the three tasks of arranging toys, making cereal, and disposing of garbage.

  20. Web components and the semantic web

    OpenAIRE

    Casey, Maire; Pahl, Claus

    2003-01-01

    Component-based software engineering on the Web differs from traditional component and software engineering. We investigate Web component engineering activities that are crucial for the development, composition, and deployment of components on the Web. The current Web Services and Semantic Web initiatives strongly influence our work. Focussing on Web component composition we develop description and reasoning techniques that support a component developer in the composition activities, focussing...

  1. [Advantages and Application Prospects of Deep Learning in Image Recognition and Bone Age Assessment].

    Science.gov (United States)

    Hu, T H; Wan, L; Liu, T A; Wang, M W; Chen, T; Wang, Y H

    2017-12-01

    Deep learning and neural network models have been new research directions and hot issues in the fields of machine learning and artificial intelligence in recent years. Deep learning has made breakthroughs in the applications of image and speech recognition, and has also been extensively used in the fields of face recognition and information retrieval because of its special superiority. Bone X-ray images express variations in black-white-gray gradations, with image features of black-and-white contrast and level differences. Based on these advantages of deep learning in image recognition, we combine it with research on bone age assessment to provide basic data for constructing a forensic automatic system of bone age assessment. This paper reviews the basic concept and network architectures of deep learning, describes its recent research progress on image recognition in different research fields at home and abroad, and explores its advantages and application prospects in bone age assessment. Copyright© by the Editorial Department of Journal of Forensic Medicine.

  2. Research of Classical and Intelligent Information System Solutions for Criminal Intelligence Analysis

    OpenAIRE

    Šimović, Vladimir

    2001-01-01

    The objective of this study is to present research on classical and intelligent information system solutions used in criminal intelligence analysis in Croatian security system theory. The study analyses objective and classical methods of information science, including artificial intelligence and other scientific methods. The intelligence and classical software solutions researched, proposed, and presented in this study were used in developing the integrated information system for the Croatian...

  3. Monte Carlo techniques for analyzing deep-penetration problems

    International Nuclear Information System (INIS)

    Cramer, S.N.; Gonnord, J.; Hendricks, J.S.

    1986-01-01

    Current methods and difficulties in Monte Carlo deep-penetration calculations are reviewed, including statistical uncertainty and recent adjoint optimization of splitting, Russian roulette, and exponential transformation biasing. Other aspects of the random walk and estimation processes are covered, including the relatively new DXANG angular biasing technique. Specific items summarized are albedo scattering, Monte Carlo coupling techniques with discrete ordinates and other methods, adjoint solutions, and multigroup Monte Carlo. The topic of code-generated biasing parameters is presented, including the creation of adjoint importance functions from forward calculations. Finally, current and future work in the area of computer learning and artificial intelligence is discussed in connection with Monte Carlo applications
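
    Splitting and Russian roulette, mentioned above, are the canonical weight-window techniques: low-weight particles are probabilistically killed (with the survivors' weights boosted to keep the estimate unbiased), while high-weight particles are split into several lighter ones. A generic textbook-style sketch, not taken from any of the reviewed codes; the window bounds are illustrative:

    ```python
    import random

    def weight_window(weight, w_low=0.25, w_high=2.0, survival=0.5):
        """Apply Russian roulette / splitting to one particle's statistical weight.

        Returns the list of weights of the particles that continue the random
        walk: empty if the particle is killed, several entries if it is split.
        """
        if weight < w_low:
            # Russian roulette: kill with probability 1 - survival; otherwise
            # boost the weight so the expected weight is unchanged (unbiased).
            if random.random() < survival:
                return [weight / survival]
            return []
        if weight > w_high:
            # Splitting: replace one heavy particle by n lighter copies.
            n = int(weight / w_high) + 1
            return [weight / n] * n
        return [weight]

    random.seed(0)
    print(weight_window(0.1))  # roulette: [] or [0.2]
    print(weight_window(5.0))  # splitting: three particles of weight 5/3
    ```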

  4. Deep learning in color: towards automated quark/gluon jet discrimination

    International Nuclear Information System (INIS)

    Komiske, Patrick T.; Metodiev, Eric M.; Schwartz, Matthew D.

    2017-01-01

    Artificial intelligence offers the potential to automate challenging data-processing tasks in collider physics. Here, to establish its prospects, we explore to what extent deep learning with convolutional neural networks can discriminate quark and gluon jets better than observables designed by physicists. Our approach builds upon the paradigm that a jet can be treated as an image, with intensity given by the local calorimeter deposits. We supplement this construction by adding color to the images, with red, green and blue intensities given by the transverse momentum in charged particles, transverse momentum in neutral particles, and pixel-level charged particle counts. Overall, the deep networks match or outperform traditional jet variables. We also find that, while various simulations produce different quark and gluon jets, the neural networks are surprisingly insensitive to these differences, similar to traditional observables. This suggests that the networks can extract robust physical information from imperfect simulations.
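
    The jet-image construction described above amounts to pixelating the calorimeter plane and filling three channels. A schematic Python version of that preprocessing (grid size and coordinate ranges are illustrative assumptions, not the paper's exact settings):

    ```python
    import numpy as np

    def jet_image(particles, npix=33, extent=0.8):
        """Build a 3-channel jet image from (eta, phi, pt, charge) tuples.

        Coordinates are relative to the jet axis. Channels follow the paper's
        colour scheme: red = charged pT, green = neutral pT, blue = charged counts.
        """
        img = np.zeros((3, npix, npix))
        for eta, phi, pt, charge in particles:
            i = int((eta + extent) / (2 * extent) * npix)
            j = int((phi + extent) / (2 * extent) * npix)
            if 0 <= i < npix and 0 <= j < npix:
                if charge != 0:
                    img[0, i, j] += pt    # red: charged-particle transverse momentum
                    img[2, i, j] += 1.0   # blue: charged-particle multiplicity
                else:
                    img[1, i, j] += pt    # green: neutral-particle transverse momentum
        return img

    toy_jet = [(0.05, -0.02, 40.0, 1), (-0.10, 0.08, 25.0, 0), (0.30, 0.25, 5.0, -1)]
    print(jet_image(toy_jet).shape)  # (3, 33, 33), ready for a CNN
    ```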

  5. Artificial Consciousness or Artificial Intelligence

    OpenAIRE

    Spanache Florin

    2017-01-01

    Artificial intelligence is a tool designed by people for the gratification of their own creative ego, so we cannot confuse consciousness with intelligence, nor intelligence in its human representation with consciousness. They are all different concepts with different uses. Philosophically, there are differences between autonomous people and automatic artificial intelligence. This is the difference between intelligence and artificial intelligence, autonomous versus a...

  6. Food-Web Complexity in Guaymas Basin Hydrothermal Vents and Cold Seeps.

    Directory of Open Access Journals (Sweden)

    Marie Portail

    Full Text Available In the Guaymas Basin, the presence of cold seeps and hydrothermal vents in close proximity, similar sedimentary settings and comparable depths offers a unique opportunity to assess and compare the functioning of these deep-sea chemosynthetic ecosystems. The food webs of five seep and four vent assemblages were studied using stable carbon and nitrogen isotope analyses. Although the two ecosystems shared similar potential basal sources, their food webs differed: seeps relied predominantly on methanotrophy and thiotrophy via the Calvin-Benson-Bassham (CBB cycle and vents on petroleum-derived organic matter and thiotrophy via the CBB and reductive tricarboxylic acid (rTCA cycles. In contrast to symbiotic species, the heterotrophic fauna exhibited high trophic flexibility among assemblages, suggesting weak trophic links to the metabolic diversity of chemosynthetic primary producers. At both ecosystems, food webs did not appear to be organised through predator-prey links but rather through weak trophic relationships among co-occurring species. Examples of trophic or spatial niche differentiation highlighted the importance of species-sorting processes within chemosynthetic ecosystems. Variability in food web structure, addressed through Bayesian metrics, revealed consistent trends across ecosystems. Food-web complexity significantly decreased with increasing methane concentrations, a common proxy for the intensity of seep and vent fluid fluxes. Although high fluid-fluxes have the potential to enhance primary productivity, they generate environmental constraints that may limit microbial diversity, colonisation of consumers and the structuring role of competitive interactions, leading to an overall reduction of food-web complexity and an increase in trophic redundancy. Heterogeneity provided by foundation species was identified as an additional structuring factor. According to their biological activities, foundation species may have the potential to

  7. Intelligent mechatronics

    Energy Technology Data Exchange (ETDEWEB)

    Hashimoto, H. [The University of Tokyo, Tokyo (Japan). Institute of Industrial Science

    1995-10-01

    Intelligent mechatronics (IM) is explained as follows: the study of IM essentially targets the realization of robots, but at the present stage the target is the creation of new value through the intellectualization of machines, that is, the combination of the information infrastructure and intelligent machine systems. IM is also thought to be constituted of positively used computers and micromechatronics. The paper next introduces examples of IM study, mainly those the author is concerned with, as shown below: sensor gloves, robot hands, robot eyes, teleoperation, three-dimensional object recognition, mobile robots, magnetic bearings, construction of a remote-controlled unmanned dam, robot networks, sensitivity communication using neuro baby, etc. 27 figs.

  8. Brain Intelligence: Go Beyond Artificial Intelligence

    OpenAIRE

    Lu, Huimin; Li, Yujie; Chen, Min; Kim, Hyoungseop; Serikawa, Seiichi

    2017-01-01

    Artificial intelligence (AI) is an important technology that supports daily social life and economic activities. It contributes greatly to the sustainable growth of Japan's economy and solves various social problems. In recent years, AI has attracted attention as a key for growth in developed countries such as Europe and the United States and developing countries such as China and India. The attention has been focused mainly on developing new artificial intelligence information communication ...

  9. Business Intelligence Systems

    Directory of Open Access Journals (Sweden)

    Bogdan NEDELCU

    2014-02-01

    Full Text Available The aim of this article is to show the importance of business intelligence and its growing influence. It also shows when the concept of business intelligence was first used and how it has evolved over time. The paper discusses the utility of a business intelligence system in any organization and its contribution to daily activities. Furthermore, we highlight the role and the objectives of business intelligence systems inside an organization, as well as the need to grow revenues and reduce costs, to manage the complexity of the business environment and to cut IT costs so that the organization survives in the current competitive climate. The article contains information about the architectural principles of a business intelligence system and how such a system can be achieved.

  10. Taxonomic names, metadata, and the Semantic Web

    Directory of Open Access Journals (Sweden)

    Roderic D. M. Page

    2006-01-01

    Full Text Available Life Science Identifiers (LSIDs) offer an attractive solution to the problem of globally unique identifiers for digital objects in biology. However, I suggest that in the context of taxonomic names, the most compelling benefit of adopting these identifiers comes from the metadata associated with each LSID. By using existing vocabularies wherever possible, and using a simple vocabulary for taxonomy-specific concepts, we can quickly capture the essential information about a taxonomic name in the Resource Description Framework (RDF) format. This opens up the prospect of using technologies developed for the Semantic Web to add "taxonomic intelligence" to biodiversity databases. This essay explores some of these ideas in the context of providing a taxonomic framework for the phylogenetic database TreeBASE.
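
    The essay's core idea — resolvable identifiers plus reusable vocabularies — is easy to prototype with an RDF library. A minimal sketch using rdflib, with a placeholder vocabulary namespace and a hypothetical LSID (the essay itself does not prescribe these exact terms):

    ```python
    from rdflib import Graph, Literal, Namespace, URIRef
    from rdflib.namespace import DC, RDF

    TN = Namespace("http://example.org/taxonname#")    # placeholder vocabulary

    g = Graph()
    name = URIRef("urn:lsid:example.org:names:12345")  # hypothetical LSID

    g.add((name, RDF.type, TN.TaxonName))
    g.add((name, DC.title, Literal("Homo sapiens")))   # reuse Dublin Core where possible
    g.add((name, DC.creator, Literal("Linnaeus")))
    g.add((name, TN.rank, Literal("species")))         # taxonomy-specific term

    print(g.serialize(format="turtle"))
    ```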

  11. Web-based E-commerce information consultation system

    International Nuclear Information System (INIS)

    Zhao Yanping; Xu Rongsheng

    2003-01-01

    This paper discusses an on-line e-Commerce information consultation system. It uses NLP and robot techniques to make information retrieval easier for users and to find the required content answers, rather than large numbers of documents, from the variety of e-Commerce market and product information on the Internet. It supplies more convenient, quicker and more exact query results. In the design of our system framework, we integrate an FAQ database with the Internet as a knowledge base, which enables users to query not only existing EC product information but also just-in-time information. An intelligent web crawler is integrated to help users gather specific information from EC sites. We briefly introduce the function and realization of each part of the system and test the system. (authors)

  12. WebVis: a hierarchical web homepage visualizer

    Science.gov (United States)

    Renteria, Jose C.; Lodha, Suresh K.

    2000-02-01

    WebVis, the Hierarchical Web Home Page Visualizer, is a tool for managing home web pages. The user can access this tool via the WWW and obtain a hierarchical visualization of their home web pages. WebVis is a real-time interactive tool that supports many different queries on the statistics of internal files, such as size, age, and type. In addition, statistics on embedded information such as VRML files, Java applets, images and sound files can be extracted and queried. Results of these queries are visualized using the color, shape and size of different nodes of the hierarchy. The visualization assists the user in a variety of tasks, such as quickly finding outdated information or locating large files. WebVis is one solution to the growing web space maintenance problem. Implementation of WebVis is realized with Perl and Java. Perl pattern matching and file handling routines are used to collect and process web space linkage information and web document information. Java utilizes the collected information to produce a visualization of the web space. Java also provides WebVis with real-time interactivity while running off the WWW. Some WebVis examples of home web page visualization are presented.

  13. Professionalizing Intelligence Analysis

    Directory of Open Access Journals (Sweden)

    James B. Bruce

    2015-09-01

    Full Text Available This article examines the current state of professionalism in national security intelligence analysis in the U.S. Government. Since the introduction of major intelligence reforms directed by the Intelligence Reform and Terrorism Prevention Act (IRTPA in December, 2004, we have seen notable strides in many aspects of intelligence professionalization, including in analysis. But progress is halting, uneven, and by no means permanent. To consolidate its gains, and if it is to continue improving, the U.S. intelligence community (IC should commit itself to accomplishing a new program of further professionalization of analysis to ensure that it will develop an analytic cadre that is fully prepared to deal with the complexities of an emerging multipolar and highly dynamic world that the IC itself is forecasting. Some recent reforms in intelligence analysis can be assessed against established standards of more fully developed professions; these may well fall short of moving the IC closer to the more fully professionalized analytical capability required for producing the kind of analysis needed now by the United States.

  14. The SP Theory of Intelligence: Benefits and Applications

    Directory of Open Access Journals (Sweden)

    J. Gerard Wolff

    2013-12-01

    Full Text Available This article describes existing and expected benefits of the SP theory of intelligence, and some potential applications. The theory aims to simplify and integrate ideas across artificial intelligence, mainstream computing, and human perception and cognition, with information compression as a unifying theme. It combines conceptual simplicity with descriptive and explanatory power across several areas of computing and cognition. In the SP machine—an expression of the SP theory which is currently realized in the form of a computer model—there is potential for an overall simplification of computing systems, including software. The SP theory promises deeper insights and better solutions in several areas of application including, most notably, unsupervised learning, natural language processing, autonomous robots, computer vision, intelligent databases, software engineering, information compression, medical diagnosis and big data. There is also potential in areas such as the semantic web, bioinformatics, structuring of documents, the detection of computer viruses, data fusion, new kinds of computer, and the development of scientific theories. The theory promises seamless integration of structures and functions within and between different areas of application. The potential value, worldwide, of these benefits and applications is at least $190 billion each year. Further development would be facilitated by the creation of a high-parallel, open-source version of the SP machine, available to researchers everywhere.

  15. Deep-apical tubules: dynamic lipid-raft microdomains in the brush-border region of enterocytes

    DEFF Research Database (Denmark)

    Hansen, Gert H; Pedersen, Jens; Niels-Christiansen, Lise-Lotte

    2003-01-01

    ...lipid raft-containing compartments, but little is otherwise known about these raft microdomains. We therefore studied in closer detail apical lipid-raft compartments in enterocytes by immunogold electron microscopy and biochemical analyses. Novel membrane structures, deep-apical tubules, were visualized in the brush-border region: they were positioned close to the actin rootlets of adjacent microvilli in the terminal web region, had a diameter of 50-100 nm, and penetrated up to 1 μm into the cytoplasm. Markers for transcytosis, IgA and the polymeric immunoglobulin receptor, as well as the resident brush-border enzyme aminopeptidase N, were present in these deep-apical tubules. We propose that deep-apical tubules are a specialized lipid-raft microdomain in the brush-border region functioning as a hub in membrane trafficking at the brush border. In addition, the sensitivity to cholesterol depletion...

  16. Revolutionary Intelligence: The Expanding Intelligence Role of the Iranian Revolutionary Guard Corps

    Directory of Open Access Journals (Sweden)

    Udit Banerjea

    2015-09-01

    Full Text Available The Iranian Revolutionary Guard Corps (IRGC) is a military and paramilitary organization meant to defend the ideals of the 1979 Iranian Islamic Revolution. Since its formation, the IRGC has grown in influence and its intelligence role has expanded. This paper examines the role of the IRGC in Iran’s intelligence system through a comprehensive analysis of the organization of the IRGC’s intelligence arm, along with its operations and capabilities. In doing so, the scope, objectives, resources, customers, and sponsors of the IRGC’s intelligence activities are also analyzed. Additionally, this paper explores how the IRGC interacts with the government of Iran, the Ministry of Intelligence and Security (MOIS, other key internal stakeholders, and foreign client organizations. A key focus of this analysis is the evolution of the relationship between the IRGC and the MOIS and the growing influence of the IRGC in Iran’s intelligence community over the last decade. The paper concludes that the IRGC has now eclipsed the MOIS within Iran’s intelligence community and is one of the most powerful institutions in Iranian politics today, using its intelligence activities as its key means of maintaining power and influence within the country.

  17. Improving Logistics Processes in Industry Using Web Technologies

    Directory of Open Access Journals (Sweden)

    Jánošík Ján

    2016-12-01

    Full Text Available The aim of this paper is to propose the concept of a system that takes advantage of web technologies, integrates them into the process of managing internal stocks, can connect to external applications, and creates the conditions for introducing Computerized Control of Warehouse Stock (CCWS) in the company. The importance of implementing CCWS lies in eliminating claims caused by the human factor, as well as in allowing information to be processed for analytical purposes and subsequently used to improve internal processes. Using CCWS in the company would also facilitate better use of the potential of Business Intelligence and Data Mining tools.

  18. Designing a holistic end-to-end intelligent network analysis and security platform

    Science.gov (United States)

    Alzahrani, M.

    2018-03-01

    A firewall protects a network from outside attacks; however, once an attack enters the network, it is difficult to detect. Significant incidents have happened recently, e.g., millions of Yahoo email accounts were stolen and crucial data from institutions are held for ransom. For two years, Yahoo’s system administrators were not aware that there were intruders inside the network. This happened due to the lack of intelligent tools to monitor user behaviour in the internal network. This paper discusses the design of an intelligent anomaly/malware detection system with proper proactive actions. The aim is to equip the system administrator with a proper tool to battle insider attackers. The proposed system adopts machine learning to analyse users’ behaviour through the runtime behaviour of each node in the network. The machine learning techniques include deep learning, an evolving machine learning perceptron, a hybrid of neural networks and fuzzy logic, as well as predictive memory techniques. The proposed system is expanded to deal with larger networks using agent techniques.
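
    One concrete way to realize the behaviour-monitoring component is unsupervised outlier detection over per-node activity features. The sketch below uses an isolation forest as a simple stand-in for the paper's ensemble of techniques; the feature set and numbers are invented for illustration:

    ```python
    import numpy as np
    from sklearn.ensemble import IsolationForest

    rng = np.random.default_rng(0)
    # Per-node features: logins/hour, MB sent out, distinct internal hosts contacted
    normal_activity = rng.normal(loc=[2.0, 50.0, 5.0], scale=[1.0, 15.0, 2.0],
                                 size=(500, 3))

    # Learn what "normal" runtime behaviour looks like for the network's nodes
    detector = IsolationForest(contamination=0.01, random_state=0)
    detector.fit(normal_activity)

    # A node suddenly logging in constantly and pushing data out of the network
    suspect = np.array([[40.0, 2000.0, 120.0]])
    print(detector.predict(suspect))  # -1 flags an anomaly worth investigating
    ```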

  19. Condition Monitoring Using Computational Intelligence Methods Applications in Mechanical and Electrical Systems

    CERN Document Server

    Marwala, Tshilidzi

    2012-01-01

    Condition monitoring uses the observed operating characteristics of a machine or structure to diagnose trends in the signal being monitored and to predict the need for maintenance before a breakdown occurs. This reduces the risk, inherent in a fixed maintenance schedule, of performing maintenance needlessly early or of having a machine fail before maintenance is due, either of which can be expensive, with the latter also posing a risk of serious accident, especially in systems like aeroengines in which a catastrophic failure would put lives at risk. The technique also measures responses from the whole of the system under observation, so it can detect the effects of faults that might be hidden deep within a system, hidden from traditional methods of inspection. Condition Monitoring Using Computational Intelligence Methods promotes the various approaches gathered under the umbrella of computational intelligence to show how condition monitoring can be used to avoid equipment failures and lengthen its useful life, m...

  20. 77 FR 32952 - Defense Intelligence Agency National Intelligence University Board of Visitors Closed Meeting

    Science.gov (United States)

    2012-06-04

    ... DEPARTMENT OF DEFENSE Office of the Secretary Defense Intelligence Agency National Intelligence University Board of Visitors Closed Meeting AGENCY: Department of Defense, Defense Intelligence Agency, National Intelligence University. ACTION: Notice of closed meeting. SUMMARY: Pursuant to the provisions of...