WorldWideScience

Sample records for intelligent deep web

  1. Deep iCrawl: An Intelligent Vision-Based Deep Web Crawler

    OpenAIRE

    R.Anita; V.Ganga Bharani; N.Nityanandam; Pradeep Kumar Sahoo

    2011-01-01

    The explosive growth of the World Wide Web has posed a challenging problem in extracting relevant data. Traditional web crawlers focus only on the surface web while the deep web keeps expanding behind the scenes. Deep web pages are created dynamically as a result of queries posed to specific web databases. The structure of deep web pages makes it impossible for traditional web crawlers to access deep web contents. This paper, Deep iCrawl, gives a novel and vision-based app...

  2. Web Intelligence and Artificial Intelligence in Education

    Science.gov (United States)

    Devedzic, Vladan

    2004-01-01

    This paper surveys important aspects of Web Intelligence (WI) in the context of Artificial Intelligence in Education (AIED) research. WI explores the fundamental roles as well as practical impacts of Artificial Intelligence (AI) and advanced Information Technology (IT) on the next generation of Web-related products, systems, services, and…

  3. How to Improve Artificial Intelligence through Web

    OpenAIRE

    Adrian Lupasc

    2005-01-01

    Intelligent agents, intelligent software applications and artificial intelligent applications from artificial intelligence service providers may make their way onto the Web in greater number as adaptive software, dynamic programming languages and Learning Algorithms are introduced into Web Services. The evolution of Web architecture may allow intelligent applications to run directly on the Web by introducing XML, RDF and logic layer. The Intelligent Wireless Web’s significant potential for ra...

  4. Digging Deeper: The Deep Web.

    Science.gov (United States)

    Turner, Laura

    2001-01-01

    Focuses on the Deep Web, defined as Web content in searchable databases of the type that can be found only by direct query. Discusses the problems of indexing; inability to find information not indexed in the search engine's database; and metasearch engines. Describes 10 sites created to access online databases or directly search them. Lists ways…

  5. Deep Web and Dark Web: Deep World of the Internet

    OpenAIRE

    Çelik, Emine

    2018-01-01

    The Internet is undoubtedly still a revolutionary breakthrough in the history of humanity. Many people use the internet for communication, social media, shopping, political and social agendas, and more. Deep Web and Dark Web concepts are handled not only by computer and software engineers but also by social scientists because of the role of the internet for states in international arenas, public institutions and human life. Starting from the very important role of the internet for social s...

  6. A study on the Web intelligence

    Institute of Scientific and Technical Information of China (English)

    Sang-Geun Kim

    2004-01-01

    This paper surveys important aspects of Web Intelligence (WI). WI explores the fundamental roles as well as practical impacts of Artificial Intelligence (AI) and advanced Information Technology (IT) on the next generation of Web-related products, systems, and activities. As a direction for scientific research and development, WI can be extremely beneficial for the field of Artificial Intelligence in Education (AIED). This paper covers these issues only very briefly. It focuses more on other issues in WI, such as intelligent Web services and the semantic web, and proposes how to use them as a basis for tackling new and challenging research problems in AIED.

  7. Distributed Deep Web Search

    NARCIS (Netherlands)

    Tjin-Kam-Jet, Kien

    2013-01-01

    The World Wide Web contains billions of documents (and counting); hence, it is likely that some document will contain the answer or content you are searching for. While major search engines like Bing and Google often manage to return relevant results to your query, there are plenty of situations in

  8. Emergent web intelligence advanced information retrieval

    CERN Document Server

    Badr, Youakim; Abraham, Ajith; Hassanien, Aboul-Ella

    2010-01-01

    Web Intelligence explores the impact of artificial intelligence and advanced information technologies representing the next generation of Web-based systems, services, and environments, and designing hybrid web systems that serve wired and wireless users more efficiently. Multimedia and XML-based data are produced regularly and in increasing volume in our daily digital activities, and their retrieval must be explored and studied in this emergent web-based era. 'Emergent Web Intelligence: Advanced Information Retrieval' provides reviews of the related cutting-edge technologies and insights. It is v

  9. How to Improve Artificial Intelligence through Web

    Directory of Open Access Journals (Sweden)

    Adrian LUPASC

    2005-10-01

    Full Text Available Intelligent agents, intelligent software applications and artificial intelligent applications from artificial intelligence service providers may make their way onto the Web in greater numbers as adaptive software, dynamic programming languages and Learning Algorithms are introduced into Web Services. The evolution of Web architecture may allow intelligent applications to run directly on the Web by introducing XML, RDF and a logic layer. The Intelligent Wireless Web’s significant potential for rapidly completing information transactions may make an important contribution to global worker productivity. Artificial intelligence can be defined as the study of the ways in which computers can be made to perform cognitive tasks. Examples of such tasks include understanding natural language statements, recognizing visual patterns or scenes, diagnosing diseases or illnesses, solving mathematical problems, performing financial analyses, and learning new procedures for solving problems. The term expert system can be considered to denote a particular type of knowledge-based system. An expert system is a system in which the knowledge is deliberately represented “as it is”. Expert systems are applications that make decisions in real-life situations that would otherwise be performed by a human expert. They are programs designed to mimic human performance at specialized, constrained problem-solving tasks. They are constructed as a collection of IF-THEN production rules combined with a reasoning engine that applies those rules, either in a forward or backward direction, to specific problems.
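
    To make the forward-chaining behaviour of such IF-THEN production rules concrete, here is a minimal sketch in Python. The rules, facts, and the forward_chain helper are illustrative assumptions, not material from the paper itself.

      # Minimal forward-chaining production system: a rule fires when all of its
      # IF-conditions are already in the fact base, adding its THEN-conclusion.
      # Rules and facts below are made up for illustration.
      RULES = [
          ({"fever", "cough"}, "flu_suspected"),
          ({"flu_suspected", "shortness_of_breath"}, "see_doctor"),
      ]

      def forward_chain(facts, rules):
          """Repeatedly apply rules until no new fact can be derived."""
          facts = set(facts)
          changed = True
          while changed:
              changed = False
              for conditions, conclusion in rules:
                  if conditions <= facts and conclusion not in facts:
                      facts.add(conclusion)   # the rule fires
                      changed = True
          return facts

      print(forward_chain({"fever", "cough", "shortness_of_breath"}, RULES))
      # -> includes 'flu_suspected' and 'see_doctor'

    A backward-chaining engine would instead start from a goal fact and search for rules whose conclusions support it; the data structures stay the same.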

  10. An Intelligent QoS Identification for Untrustworthy Web Services Via Two-phase Neural Networks

    OpenAIRE

    Wang, Weidong; Wang, Liqiang; Lu, Wei

    2016-01-01

    QoS identification for untrustworthy Web services is critical in QoS management in the service computing since the performance of untrustworthy Web services may result in QoS downgrade. The key issue is to intelligently learn the characteristics of trustworthy Web services from different QoS levels, then to identify the untrustworthy ones according to the characteristics of QoS metrics. As one of the intelligent identification approaches, deep neural network has emerged as a powerful techniqu...

  11. Deep web search: an overview and roadmap

    NARCIS (Netherlands)

    Tjin-Kam-Jet, Kien; Trieschnigg, Rudolf Berend; Hiemstra, Djoerd

    2011-01-01

    We review the state-of-the-art in deep web search and propose a novel classification scheme to better compare deep web search systems. The current binary classification (surfacing versus virtual integration) hides a number of implicit decisions that must be made by a developer. We make these

  12. Research Proposal for Distributed Deep Web Search

    NARCIS (Netherlands)

    Tjin-Kam-Jet, Kien

    2010-01-01

    This proposal identifies two main problems related to deep web search, and proposes a step by step solution for each of them. The first problem is about searching deep web content by means of a simple free-text interface (with just one input field, instead of a complex interface with many input

  13. Harnessing the Deep Web: Present and Future

    OpenAIRE

    Madhavan, Jayant; Afanasiev, Loredana; Antova, Lyublena; Halevy, Alon

    2009-01-01

    Over the past few years, we have built a system that has exposed large volumes of Deep-Web content to Google.com users. The content that our system exposes contributes to more than 1000 search queries per-second and spans over 50 languages and hundreds of domains. The Deep Web has long been acknowledged to be a major source of structured data on the web, and hence accessing Deep-Web content has long been a problem of interest in the data management community. In this paper, we report on where...

  14. Business intelligence and capacity planning: web-based solutions.

    Science.gov (United States)

    James, Roger

    2010-07-01

    Income (activity) and expenditure (costs) form the basis of a modern hospital's 'business intelligence'. However, clinical engagement in business intelligence is patchy. This article describes the principles of business intelligence and outlines some recent developments using web-based applications.

  15. Towards Brain-inspired Web Intelligence

    Science.gov (United States)

    Zhong, Ning

    Artificial Intelligence (AI) has been mainly studied within the realm of computer based technologies. Various computational models and knowledge based systems have been developed for automated reasoning, learning, and problem-solving. However, several grand challenges still exist. AI research has not produced major breakthroughs recently, due to a lack of understanding of human brains and natural intelligence. In addition, most of the AI models and systems will not work well when dealing with large-scale, dynamically changing, open and distributed information sources at a Web scale.

  16. Hacking web intelligence open source intelligence and web reconnaissance concepts and techniques

    CERN Document Server

    Chauhan, Sudhanshu

    2015-01-01

    Open source intelligence (OSINT) and web reconnaissance are rich topics for infosec professionals looking for the best ways to sift through the abundance of information widely available online. In many cases, the first stage of any security assessment-that is, reconnaissance-is not given enough attention by security professionals, hackers, and penetration testers. Often, the information openly present is as critical as the confidential data. Hacking Web Intelligence shows you how to dig into the Web and uncover the information many don't even know exists. The book takes a holistic approach

  17. Un paseo por la Deep Web

    OpenAIRE

    Ortega Castillo, Carlos

    2018-01-01

    This document seeks to offer a technical and inclusive look at some of the interconnection technologies developed in the Deep Web, first from a theoretical point of view and then through a brief practical introduction. Demystifying the processes carried out within the Deep Web gives users tools to clarify and build new paradigms of society, knowledge and technology that support the responsible development of this type of network and contribute to the grow...

  18. La deep web : el mercado negro global

    OpenAIRE

    Gay Fernández, José

    2015-01-01

    The deep web is a hidden space of the internet where the first guarantee is anonymity. In general terms, the deep web contains everything that conventional search engines cannot locate. This guarantee serves to host a vast network of illegal services, such as drug trafficking, human trafficking, the hiring of hitmen, the buying and selling of passports and bank accounts, or child pornography, among many others. But anonymity also makes it possible for activ...

  19. Intelligent Agent Based Semantic Web in Cloud Computing Environment

    OpenAIRE

    Mukhopadhyay, Debajyoti; Sharma, Manoj; Joshi, Gajanan; Pagare, Trupti; Palwe, Adarsha

    2013-01-01

    Considering today's web scenario, there is a need for effective and meaningful search over the web, which is provided by the Semantic Web. Existing search engines are keyword based. They are vulnerable in answering intelligent queries from the user due to the dependence of their results on the information available in web pages. Semantic search engines, in contrast, provide efficient and relevant results, as the semantic web is an extension of the current web in which information is given well-defined meaning....

  20. Deep web query interface understanding and integration

    CERN Document Server

    Dragut, Eduard C; Yu, Clement T

    2012-01-01

    There are millions of searchable data sources on the Web and to a large extent their contents can only be reached through their own query interfaces. There is an enormous interest in making the data in these sources easily accessible. There are primarily two general approaches to achieve this objective. The first is to surface the contents of these sources from the deep Web and add the contents to the index of regular search engines. The second is to integrate the searching capabilities of these sources and support integrated access to them. In this book, we introduce the state-of-the-art tech

  1. The deep learning AI playbook strategy for disruptive artificial intelligence

    CERN Document Server

    Perez, Carlos E

    2017-01-01

    Deep Learning Artificial Intelligence involves the interplay of Computer Science, Physics, Biology, Linguistics and Psychology. In addition to that, it is technology that can be extremely disruptive. The ramifications to society and even our own humanity will be profound. There are few subjects that are as captivating and as consequential as this. Surprisingly, there is very little that is written about this new technology in a more comprehensive and cohesive way. This book is an opinionated take on the developments of Deep Learning AI. One question many have will be "how to apply Deep Learning AI in a business context?" Technology that is disruptive does not automatically imply that its application to valuable use cases will be apparent. For years, many people could not figure out how to monetize the World Wide Web. We are in a similar situation with Deep Learning AI. The developments may be mind-boggling but its monetization is far from being obvious. This book presents a framework to address this shortcomi...

  2. Deep Web: aproximaciones a la ciber irresponsabilidad

    Directory of Open Access Journals (Sweden)

    Dulce María Bautista Luzardo

    2015-01-01

    Full Text Available The Deep web, or Hard web, is a gigantic part of the undetectable virtual platforms where cyber-actions take place that are premised on concealing the user's identity; these have given rise to a distortion of the concept of the person and to the use of the web in an irresponsible way, in some cases, to cause distress, to persecute, or at times to hack banks, institutions and private accounts. This is a reflection article that analyzes the implications of the practice of hiding one's actions on the Internet and of changing one's face in contemporary cyber-society. This reflection aims to draw attention to the responsibility we bear when entering the world of the Internet, and it analyzes the dangers that these practices entail.

  3. Focused Crawling of the Deep Web Using Service Class Descriptions

    Energy Technology Data Exchange (ETDEWEB)

    Rocco, D; Liu, L; Critchlow, T

    2004-06-21

    Dynamic Web data sources--sometimes known collectively as the Deep Web--increase the utility of the Web by providing intuitive access to data repositories anywhere that Web access is available. Deep Web services provide access to real-time information, like entertainment event listings, or present a Web interface to large databases or other data repositories. Recent studies suggest that the size and growth rate of the dynamic Web greatly exceed that of the static Web, yet dynamic content is often ignored by existing search engine indexers owing to the technical challenges that arise when attempting to search the Deep Web. To address these challenges, we present DynaBot, a service-centric crawler for discovering and clustering Deep Web sources offering dynamic content. DynaBot has three unique characteristics. First, DynaBot utilizes a service class model of the Web implemented through the construction of service class descriptions (SCDs). Second, DynaBot employs a modular, self-tuning system architecture for focused crawling of the Deep Web using service class descriptions. Third, DynaBot incorporates methods and algorithms for efficient probing of the Deep Web and for discovering and clustering Deep Web sources and services through SCD-based service matching analysis. Our experimental results demonstrate the effectiveness of the service class discovery, probing, and matching algorithms and suggest techniques for efficiently managing service discovery in the face of the immense scale of the Deep Web.
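
    As a rough, hypothetical illustration of the service-class idea (not the paper's actual SCD format), the Python sketch below scores a discovered query form against a service class by comparing its field names and surrounding keywords.

      # Hypothetical SCD-style matching: a service class description lists the
      # input fields and vocabulary expected of a class of Deep Web services,
      # and a crawled form is scored against it. All names and weights are
      # illustrative assumptions.
      FLIGHT_SCD = {
          "required_fields": {"origin", "destination", "date"},
          "keywords": {"flight", "airline", "airport", "depart", "return"},
      }

      def match_score(form_fields, page_text, scd):
          """Return a score in [0, 1]; higher means the form likely fits the class."""
          fields = {f.lower() for f in form_fields}
          words = set(page_text.lower().split())
          field_cov = len(scd["required_fields"] & fields) / len(scd["required_fields"])
          kw_cov = len(scd["keywords"] & words) / len(scd["keywords"])
          return 0.7 * field_cov + 0.3 * kw_cov

      form = ["origin", "destination", "date", "passengers"]
      text = "Search cheap flight tickets by airline and airport"
      print(round(match_score(form, text, FLIGHT_SCD), 2))   # -> 0.88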

  4. Intelligent web agents for a 3D virtual community

    Science.gov (United States)

    Dave, T. M.; Zhang, Yanqing; Owen, G. S. S.; Sunderraman, Rajshekhar

    2003-08-01

    In this paper, we propose an Avatar-based intelligent agent technique for 3D Web based Virtual Communities based on distributed artificial intelligence, intelligent agent techniques, and databases and knowledge bases in a digital library. One of the goals of this joint NSF (IIS-9980130) and ACM SIGGRAPH Education Committee (ASEC) project is to create a virtual community of educators and students who have a common interest in computer graphics, visualization, and interactive techniques. In this virtual community (ASEC World) Avatars will represent the educators, students, and other visitors to the world. Intelligent agents represented as specially dressed Avatars will be available to assist the visitors to ASEC World. The basic Web client-server architecture of the intelligent knowledge-based avatars is given. Importantly, the intelligent Web agent software system for the 3D virtual community is implemented successfully.

  5. A Framework for Transparently Accessing Deep Web Sources

    Science.gov (United States)

    Dragut, Eduard Constantin

    2010-01-01

    An increasing number of Web sites expose their content via query interfaces, many of them offering the same type of products/services (e.g., flight tickets, car rental/purchasing). They constitute the so-called "Deep Web". Accessing the content on the Deep Web has been a long-standing challenge for the database community. For a user interested in…

  6. Stratification-Based Outlier Detection over the Deep Web.

    Science.gov (United States)

    Xian, Xuefeng; Zhao, Pengpeng; Sheng, Victor S; Fang, Ligang; Gu, Caidong; Yang, Yuanfeng; Cui, Zhiming

    2016-01-01

    For many applications, finding rare instances or outliers can be more interesting than finding common patterns. Existing work in outlier detection never considers the context of deep web. In this paper, we argue that, for many scenarios, it is more meaningful to detect outliers over deep web. In the context of deep web, users must submit queries through a query interface to retrieve corresponding data. Therefore, traditional data mining methods cannot be directly applied. The primary contribution of this paper is to develop a new data mining method for outlier detection over deep web. In our approach, the query space of a deep web data source is stratified based on a pilot sample. Neighborhood sampling and uncertainty sampling are developed in this paper with the goal of improving recall and precision based on stratification. Finally, a careful performance evaluation of our algorithm confirms that our approach can effectively detect outliers in deep web.
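
    As a toy sketch of the stratified idea (under assumptions; the paper's neighborhood and uncertainty sampling steps are not reproduced here), the snippet below groups a pilot sample by a categorical query attribute and flags values that deviate strongly within their stratum.

      # Toy stratification-based outlier check on records sampled from a deep
      # web source: group a pilot sample by stratum and flag values far from
      # the stratum mean. The data and threshold are made up for illustration.
      from collections import defaultdict
      from statistics import mean, pstdev

      pilot_sample = [                      # (stratum, price) pairs from query results
          ("laptops", 900), ("laptops", 1100), ("laptops", 950), ("laptops", 4000),
          ("cables", 10), ("cables", 12), ("cables", 9), ("cables", 250),
      ]

      def stratified_outliers(records, z_threshold=1.5):
          strata = defaultdict(list)
          for stratum, value in records:
              strata[stratum].append(value)
          outliers = []
          for stratum, value in records:
              vals = strata[stratum]
              sd = pstdev(vals) or 1.0      # guard against zero spread
              if abs(value - mean(vals)) / sd > z_threshold:
                  outliers.append((stratum, value))
          return outliers

      print(stratified_outliers(pilot_sample))
      # -> [('laptops', 4000), ('cables', 250)]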

  7. Stratification-Based Outlier Detection over the Deep Web

    OpenAIRE

    Xian, Xuefeng; Zhao, Pengpeng; Sheng, Victor S.; Fang, Ligang; Gu, Caidong; Yang, Yuanfeng; Cui, Zhiming

    2016-01-01

    For many applications, finding rare instances or outliers can be more interesting than finding common patterns. Existing work in outlier detection never considers the context of deep web. In this paper, we argue that, for many scenarios, it is more meaningful to detect outliers over deep web. In the context of deep web, users must submit queries through a query interface to retrieve corresponding data. Therefore, traditional data mining methods cannot be directly applied. The primary contribu...

  8. Intelligent web data management software architectures and emerging technologies

    CERN Document Server

    Ma, Kun; Yang, Bo; Sun, Runyuan

    2016-01-01

    This book presents some of the emerging techniques and technologies used to handle Web data management. Authors present novel software architectures and emerging technologies and then validate using experimental data and real world applications. The contents of this book are focused on four popular thematic categories of intelligent Web data management: cloud computing, social networking, monitoring and literature management. The Volume will be a valuable reference to researchers, students and practitioners in the field of Web data management, cloud computing, social networks using advanced intelligence tools.

  9. Efficient Web Harvesting Strategies for Monitoring Deep Web Content

    NARCIS (Netherlands)

    Khelghati, Mohammadreza; Hiemstra, Djoerd; van Keulen, Maurice

    2016-01-01

    Web content changes rapidly [18]. In Focused Web Harvesting [17], the aim of which is to achieve a complete harvest for a given topic, this dynamic nature of the web creates problems for users who need access to all the web data relevant to their topics of interest. Whether you are a fan

  10. Efficient Web Harvesting Strategies for Monitoring Deep Web Content

    NARCIS (Netherlands)

    Khelghati, Mohammadreza; Hiemstra, Djoerd; van Keulen, Maurice

    2016-01-01

    The web content changes rapidly. In Focused Web Harvesting [?], which aims at achieving a complete harvest for a given topic, this dynamic nature of the web creates problems for users who need to access a complete set of web data related to their topics of interest. Whether you are a fan

  11. Intelligent Overload Control for Composite Web Services

    NARCIS (Netherlands)

    Meulenhoff, P.J.; Ostendorf, D.R.; Zivkovic, Miroslav; Meeuwissen, H.B.; Gijsen, B.M.M.

    2009-01-01

    In this paper, we analyze overload control for composite web services in service oriented architectures by an orchestrating broker, and propose two practical access control rules which effectively mitigate the effects of severe overloads at some web services in the composite service. These two rules

  12. Enhancing E-Learning through Web Service and Intelligent Agents

    Directory of Open Access Journals (Sweden)

    Nasir Hussain

    2006-04-01

    Full Text Available E-learning is basically the integration of various technologies. E-Learning technology is now maturing and we can find a multiplicity of standards. New technologies such as agents and web services are promising better results. In this paper we have proposed an e-learning architecture that is dependent on intelligent agent systems and web services. These communication technologies will make the architecture more robust, scalable and efficient.

  13. Intelligent Web-Based English Instruction in Middle Schools

    Science.gov (United States)

    Jia, Jiyou

    2015-01-01

    The integration of technology into educational environments has become more prominent over the years. The combination of technology and face-to-face interaction with instructors allows for a thorough, more valuable educational experience. "Intelligent Web-Based English Instruction in Middle Schools" addresses the concerns associated with…

  14. Semantic mashups intelligent reuse of web resources

    CERN Document Server

    Endres-Niggemeyer, Brigitte

    2013-01-01

    Mashups are mostly lightweight Web applications that offer new functionalities by combining, aggregating and transforming resources and services available on the Web. Popular examples feature a map as their main offering, for instance for real estate, hotel recommendations, or navigation tools. Mashups may contain and mix client-side and server-side activity. Obviously, understanding the incoming resources (services, statistical figures, text, videos, etc.) is a precondition for optimally combining them, so that there is always some undercover semantics being used. By using semantic annotations

  15. REVIEW PAPER ON THE DEEP WEB DATA EXTRACTION

    OpenAIRE

    V. S. Patil; Sneha Sitafale; Priyanka Kale; Poonam Bhujbal; Mohini Dandge

    2018-01-01

    Deep web data extraction is the process of extracting a set of data records and the items that they contain from a query result page. Such structured data can be later integrated into results from other data sources and given to the user in a single, cohesive view. Domain identification is used to identify the query interfaces related to the domain from the forms obtained in the search process. The surface web contains a large amount of unfiltered information, whereas the deep web includes hi...
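
    As a hedged sketch of the record-extraction step only (the surveyed systems are far more general), the snippet below pulls repeated records out of a hypothetical query result page; the HTML structure and class names are assumptions.

      # Hedged sketch: extract repeated data records from a deep web query
      # result page. The div.result / span.title / span.price layout is assumed.
      from bs4 import BeautifulSoup   # pip install beautifulsoup4

      html = """
      <div class="result"><span class="title">Book A</span><span class="price">12.50</span></div>
      <div class="result"><span class="title">Book B</span><span class="price">8.99</span></div>
      """

      def extract_records(page_html):
          soup = BeautifulSoup(page_html, "html.parser")
          records = []
          for row in soup.select("div.result"):          # one element per record
              records.append({
                  "title": row.select_one("span.title").get_text(strip=True),
                  "price": float(row.select_one("span.price").get_text(strip=True)),
              })
          return records

      print(extract_records(html))
      # -> [{'title': 'Book A', 'price': 12.5}, {'title': 'Book B', 'price': 8.99}]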

  16. Using the Web for Competitive Intelligence (CI) Gathering

    Science.gov (United States)

    Rocker, JoAnne; Roncaglia, George

    2002-01-01

    Businesses use the Internet to communicate company information and to engage their customers. As the use of the Web for business transactions and advertising grows, so too does the amount of useful information for practitioners of competitive intelligence (CI). CI is the legal and ethical practice of gathering information about competitors and the marketplace. Information sources like company webpages, online newspapers and news organizations, electronic journal articles and reports, and Internet search engines allow CI practitioners to analyze company strengths and weaknesses for their customers. More company and marketplace information than ever is available on the Internet, and a lot of it is free. Companies should view the Web not only as a business tool but also as a source of competitive intelligence. In a highly competitive marketplace, can any organization afford to ignore information about the other players and customers in that same marketplace?

  17. Intelligent Shimming for Deep Drawing Processes

    DEFF Research Database (Denmark)

    Tommerup, Søren; Endelt, Benny Ørtoft; Danckert, Joachim

    2011-01-01

    cavities the blank-holder force distribution can be controlled during the punch stroke. By means of a sequence of numerical simulations abrasive wear is imposed to the deep drawing of a rectangular cup. The abrasive wear is modelled by changing the tool surface geometry using an algorithm based...... on the sliding energy density. As the tool surfaces are changed the material draw-in is significantly altered when using conventional open-loop control of the blank-holder force. A feed-back controller is presented which is capable of reducing the draw-in difference to a certain degree. Further a learning...

  18. AN EFFICIENT METHOD FOR DEEP WEB CRAWLER BASED ON ACCURACY -A REVIEW

    OpenAIRE

    Pranali Zade; S. W. Mohod

    2018-01-01

    As the deep web grows at a very fast pace, there has been increased interest in techniques that help efficiently locate deep-web interfaces. However, due to the large volume of web resources and the dynamic nature of the deep web, achieving wide coverage and high efficiency is a challenging issue. We propose a three-stage framework for efficiently harvesting deep web interfaces. Experimental results on a set of representative domains show the agility and accuracy of our proposed crawler framew...

  19. Designing A General Deep Web Harvester by Harvestability Factor

    NARCIS (Netherlands)

    Khelghati, Mohammadreza; van Keulen, Maurice; Hiemstra, Djoerd

    2014-01-01

    To make deep web data accessible, harvesters have a crucial role. Targeting different domains and websites enhances the need of a general-purpose harvester which can be applied to different settings and situations. To develop such a harvester, a large number of issues should be addressed. To have

  20. Moby and Moby 2: creatures of the deep (web).

    Science.gov (United States)

    Vandervalk, Ben P; McCarthy, E Luke; Wilkinson, Mark D

    2009-03-01

    Facile and meaningful integration of data from disparate resources is the 'holy grail' of bioinformatics. Some resources have begun to address this problem by providing their data using Semantic Web standards, specifically the Resource Description Framework (RDF) and the Web Ontology Language (OWL). Unfortunately, adoption of Semantic Web standards has been slow overall, and even in cases where the standards are being utilized, interconnectivity between resources is rare. In response, we have seen the emergence of centralized 'semantic warehouses' that collect public data from third parties, integrate it, translate it into OWL/RDF and provide it to the community as a unified and queryable resource. One limitation of the warehouse approach is that queries are confined to the resources that have been selected for inclusion. A related problem, perhaps of greater concern, is that the majority of bioinformatics data exists in the 'Deep Web'-that is, the data does not exist until an application or analytical tool is invoked, and therefore does not have a predictable Web address. The inability to utilize Uniform Resource Identifiers (URIs) to address this data is a barrier to its accessibility via URI-centric Semantic Web technologies. Here we examine 'The State of the Union' for the adoption of Semantic Web standards in the health care and life sciences domain by key bioinformatics resources, explore the nature and connectivity of several community-driven semantic warehousing projects, and report on our own progress with the CardioSHARE/Moby-2 project, which aims to make the resources of the Deep Web transparently accessible through SPARQL queries.
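
    To make the SPARQL-centric access model concrete, here is a minimal hedged example using the SPARQLWrapper Python library against a placeholder endpoint; the endpoint URL and vocabulary are assumptions for illustration, not the CardioSHARE/Moby-2 services themselves.

      # Minimal SPARQL client sketch; endpoint and predicates are placeholders.
      from SPARQLWrapper import SPARQLWrapper, JSON   # pip install sparqlwrapper

      sparql = SPARQLWrapper("http://example.org/sparql")   # hypothetical endpoint
      sparql.setQuery("""
          PREFIX up: <http://purl.uniprot.org/core/>
          SELECT ?protein ?name WHERE {
              ?protein a up:Protein ;
                       up:recommendedName/up:fullName ?name .
          } LIMIT 5
      """)
      sparql.setReturnFormat(JSON)

      for row in sparql.query().convert()["results"]["bindings"]:
          print(row["protein"]["value"], row["name"]["value"])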

  1. Cyanide Suicide After Deep Web Shopping: A Case Report.

    Science.gov (United States)

    Le Garff, Erwan; Delannoy, Yann; Mesli, Vadim; Allorge, Delphine; Hédouin, Valéry; Tournel, Gilles

    2016-09-01

    Cyanide is a product that is known for its use in industrial or laboratory processes, as well as for intentional intoxication. The toxicity of cyanide is well described in humans with rapid inhibition of cellular aerobic metabolism after ingestion or inhalation, leading to severe clinical effects that are frequently lethal. We report the case of a young white man found dead in a hotel room after self-poisoning with cyanide ordered in the deep Web. This case shows a probable complex suicide kit use including cyanide, as a lethal tool, and dextromethorphan, as a sedative and anxiolytic substance. This case is an original example of the emerging deep Web shopping in illegal drug procurement.

  2. Discovering Land Cover Web Map Services from the Deep Web with JavaScript Invocation Rules

    Directory of Open Access Journals (Sweden)

    Dongyang Hou

    2016-06-01

    Full Text Available Automatic discovery of isolated land cover web map services (LCWMSs) can potentially help in sharing land cover data. Currently, various search engine-based and crawler-based approaches have been developed for finding services dispersed throughout the surface web. In fact, with the prevalence of geospatial web applications, a considerable number of LCWMSs are hidden in JavaScript code, which belongs to the deep web. However, discovering LCWMSs from JavaScript code remains an open challenge. This paper aims to solve this challenge by proposing a focused deep web crawler for finding more LCWMSs from deep web JavaScript code and the surface web. First, the names of a group of JavaScript links are abstracted as initial judgements. Through name matching, these judgements are utilized to judge whether or not the fetched webpages contain predefined JavaScript links that may prompt JavaScript code to invoke WMSs. Secondly, some JavaScript invocation functions and URL formats for WMS are summarized as JavaScript invocation rules from prior knowledge of how WMSs are employed and coded in JavaScript. These invocation rules are used to identify the JavaScript code for extracting candidate WMSs through rule matching. The above two operations are incorporated into a traditional focused crawling strategy situated between the tasks of fetching webpages and parsing webpages. Thirdly, LCWMSs are selected by matching services with a set of land cover keywords. Moreover, a search engine for LCWMSs is implemented that uses the focused deep web crawler to retrieve and integrate the LCWMSs it discovers. In the first experiment, eight online geospatial web applications serve as seed URLs (Uniform Resource Locators) and crawling scopes; the proposed crawler addresses only the JavaScript code in these eight applications. All 32 available WMSs hidden in JavaScript code were found using the proposed crawler, while not one WMS was discovered through the focused crawler
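
    A simplified, hedged illustration of the rule-matching step follows: scanning JavaScript source for URL patterns that look like WMS invocations. The regular expression and sample code are assumptions, not the authors' full rule set.

      # Hedged sketch: extract candidate WMS endpoints from JavaScript code by
      # matching URL patterns typical of WMS requests. The sample snippet and
      # the single regex rule are illustrative only.
      import re

      js_code = """
      var layer = new OpenLayers.Layer.WMS("landcover",
          "http://example.org/geoserver/wms?service=WMS&request=GetCapabilities", {});
      """

      WMS_URL_RULE = re.compile(
          r"https?://[^\s\"']+?(?:service=WMS|request=GetCapabilities|/wms)[^\s\"']*",
          re.IGNORECASE,
      )

      def extract_candidate_wms(source):
          """Return de-duplicated WMS-looking URLs found in JavaScript source."""
          return sorted(set(WMS_URL_RULE.findall(source)))

      print(extract_candidate_wms(js_code))
      # -> ['http://example.org/geoserver/wms?service=WMS&request=GetCapabilities']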

  3. deepTools2: a next generation web server for deep-sequencing data analysis.

    Science.gov (United States)

    Ramírez, Fidel; Ryan, Devon P; Grüning, Björn; Bhardwaj, Vivek; Kilpert, Fabian; Richter, Andreas S; Heyne, Steffen; Dündar, Friederike; Manke, Thomas

    2016-07-08

    We present an update to our Galaxy-based web server for processing and visualizing deeply sequenced data. Its core tool set, deepTools, allows users to perform complete bioinformatic workflows ranging from quality controls and normalizations of aligned reads to integrative analyses, including clustering and visualization approaches. Since we first described our deepTools Galaxy server in 2014, we have implemented new solutions for many requests from the community and our users. Here, we introduce significant enhancements and new tools to further improve data visualization and interpretation. deepTools continue to be open to all users and freely available as a web service at deeptools.ie-freiburg.mpg.de. The new deepTools2 suite can be easily deployed within any Galaxy framework via the toolshed repository, and we also provide source code for command line usage under Linux and Mac OS X. A public and documented API for access to deepTools functionality is also available. © The Author(s) 2016. Published by Oxford University Press on behalf of Nucleic Acids Research.

  4. The Potential Transformative Impact of Web 2.0 Technology on the Intelligence Community

    National Research Council Canada - National Science Library

    Werner, Adrienne

    2008-01-01

    Web 2.0 technologies can transform and improve interagency collaboration in the Intelligence Community in many of the same ways that have marked their use through the internet in the public domain and private industry...

  5. Effectiveness of Web Quest in Enhancing 4th Grade Students' Spiritual Intelligence

    Science.gov (United States)

    Jwaifell, Mustafa; Al-Mouhtadi, Reham; Aldarabah, Intisar

    2015-01-01

    Spiritual intelligence has gained great interest from a good number of the researchers and scholars, while there is a lack of using new technologies such as WebQuest as an instructional tool; which is one of the e-learning applications in education in enhancing spiritual intelligence of 4th graders in Jordanian schools. This study aimed at…

  6. Intelligent Learning Infrastructure for Knowledge Intensive Organizations: A Semantic Web Perspective

    Science.gov (United States)

    Lytras, Miltiadis, Ed.; Naeve, Ambjorn, Ed.

    2005-01-01

    In the context of Knowledge Society, the convergence of knowledge and learning management is a critical milestone. "Intelligent Learning Infrastructure for Knowledge Intensive Organizations: A Semantic Web Perspective" provides state-of-the art knowledge through a balanced theoretical and technological discussion. The semantic web perspective…

  7. SMART CITIES INTELLIGENCE SYSTEM (SMACiSYS) INTEGRATING SENSOR WEB WITH SPATIAL DATA INFRASTRUCTURES (SENSDI)

    OpenAIRE

    D. Bhattacharya; M. Painho

    2017-01-01

    The paper endeavours to enhance the Sensor Web with crucial geospatial analysis capabilities through integration with Spatial Data Infrastructure. The objective is development of automated smart cities intelligence system (SMACiSYS) with sensor-web access (SENSDI) utilizing geomatics for sustainable societies. There has been a need to develop automated integrated system to categorize events and issue information that reaches users directly. At present, no web-enabled information system exists...

  8. Deep Blue Cannot Play Checkers: The Need for Generalized Intelligence for Mobile Robots

    Directory of Open Access Journals (Sweden)

    Troy D. Kelley

    2010-01-01

    Full Text Available Generalized intelligence is much more difficult than originally anticipated when Artificial Intelligence (AI) was first introduced in the early 1960s. Deep Blue, the chess playing supercomputer, was developed to defeat the top rated human chess player and successfully did so by defeating Garry Kasparov in 1997. However, Deep Blue only played chess; it did not play checkers, or any other games. Other examples of AI programs which learned and played games were successful at specific tasks, but generalizing the learned behavior to other domains was not attempted. So the question remains: Why is generalized intelligence so difficult? If complex tasks require a significant amount of development time, and task generalization is not easily accomplished, then a significant amount of effort is going to be required to develop an intelligent system. This will require a system-of-systems approach that uses many AI techniques: neural networks, fuzzy logic, and cognitive architectures.

  9. Towards an Intelligent Possibilistic Web Information Retrieval Using Multiagent System

    Science.gov (United States)

    Elayeb, Bilel; Evrard, Fabrice; Zaghdoud, Montaceur; Ahmed, Mohamed Ben

    2009-01-01

    Purpose: The purpose of this paper is to make a scientific contribution to web information retrieval (IR). Design/methodology/approach: A multiagent system for web IR is proposed based on new technologies: Hierarchical Small-Worlds (HSW) and Possibilistic Networks (PN). This system is based on a possibilistic qualitative approach which extends the…

  10. Programming Collective Intelligence Building Smart Web 2.0 Applications

    CERN Document Server

    Segaran, Toby

    2008-01-01

    This fascinating book demonstrates how you can build web applications to mine the enormous amount of data created by people on the Internet. With the sophisticated algorithms in this book, you can write smart programs to access interesting datasets from other web sites, collect data from users of your own applications, and analyze and understand the data once you've found it.

  11. Deep Web : acceso, seguridad y análisis de tráfico

    OpenAIRE

    Cagiga Vila, Ignacio

    2017-01-01

    ABSTRACT: This work sets out to provide a technical analysis of the Deep Web in the context of networks and Internet technologies. The main part of the project can be divided into two parts: accessing the Deep Web as a client, and implementing a relay of the Tor Network. Implementing a Tor Network relay makes it possible to understand how the anonymity and security of users who try to access the Deep Web through this network are ensured. The laboratory part...

  12. Nature vs. Nurture: The Role of Environmental Resources in Evolutionary Deep Intelligence

    OpenAIRE

    Chung, Audrey G.; Fieguth, Paul; Wong, Alexander

    2018-01-01

    Evolutionary deep intelligence synthesizes highly efficient deep neural network architectures over successive generations. Inspired by the nature versus nurture debate, we propose a study to examine the role of external factors on the network synthesis process by varying the availability of simulated environmental resources. Experimental results were obtained for networks synthesized via asexual evolutionary synthesis (1-parent) and sexual evolutionary synthesis (2-parent, 3-parent, and 5-pa...

  13. An Autonomous Learning System of Bengali Characters Using Web-Based Intelligent Handwriting Recognition

    Science.gov (United States)

    Khatun, Nazma; Miwa, Jouji

    2016-01-01

    This research project aimed to develop an intelligent Bengali handwriting education system to improve the literacy level in Bangladesh. Due to socio-economic limitations, not all of the population has the chance to go to school. Here, we developed a prototype of a web-based (iPhone/smartphone or computer browser) intelligent…

  14. Effects of an Intelligent Web-Based English Instruction System on Students' Academic Performance

    Science.gov (United States)

    Jia, J.; Chen, Y.; Ding, Z.; Bai, Y.; Yang, B.; Li, M.; Qi, J.

    2013-01-01

    This research conducted quasi-experiments in four middle schools to evaluate the long-term effects of an intelligent web-based English instruction system, Computer Simulation in Educational Communication (CSIEC), on students' academic attainment. The analysis of regular examination scores and vocabulary test validates the positive impact of CSIEC,…

  15. The utilisation of the deep web for military counter terrorist operations

    CSIR Research Space (South Africa)

    Aschmann, MJ

    2017-03-01

    Full Text Available The Internet offers anonymity and a disregard of national boundaries. Most countries are deeply concerned about the threat that cyberspace, and in particular cyberterrorism, poses to national security. The Deep and Dark Web is associated...

  16. Security Guidelines for the Development of Accessible Web Applications through the implementation of intelligent systems

    Directory of Open Access Journals (Sweden)

    Luis Joyanes Aguilar

    2009-12-01

    Full Text Available The significant increase in threats, attacks and vulnerabilities affecting the Web in recent years has led to the development and implementation of tools and methods intended to protect the privacy, confidentiality and integrity of user and business data. Even when these tools are deployed, the information exchanged is not always transmitted securely. Moreover, many of these security tools and methods cannot be used by people with disabilities or by the assistive technologies that enable them to access the Web efficiently. Among these inaccessible security tools are the virtual keyboard, the CAPTCHA and other technologies that help, to some extent, to ensure safety on the Internet and are used to combat malicious code and the attacks that have increased in recent times on the Web. Through the implementation of intelligent systems, it is possible to detect, retrieve and receive information on the characteristics and properties of the tools, hardware devices or software with which a user is accessing a web application; by analyzing and interpreting this information, the intelligent systems can infer and automatically adjust the characteristics these tools need in order to be accessible to anyone, regardless of disability or navigation context. This paper defines a set of guidelines and specific features that security tools and methods should have in order to ensure Web accessibility through the implementation of intelligent systems.

  17. Design and Application of an Intelligent Agent for Web Information Discovery

    Institute of Scientific and Technical Information of China (English)

    闵君; 冯珊; 唐超; 许立达

    2003-01-01

    With the propagation of applications on the internet, the internet has become a great information source which supplies users with valuable information. But it is hard for users to quickly acquire the right information on the web. This paper presents an intelligent agent for internet applications to retrieve and extract web information under the user's guidance. The intelligent agent is made up of a retrieval script to identify web sources, an extraction script based on the document object model to express the extraction process, a data translator to export the extracted information into knowledge bases with frame structures, and a data reasoner to answer users' questions. A GUI tool named Script Writer helps to generate the extraction script visually, and knowledge rule databases help to extract the wanted information and to generate answers to questions.

  18. Deep into the Brain: Artificial Intelligence in Stroke Imaging.

    Science.gov (United States)

    Lee, Eun-Jae; Kim, Yong-Hwan; Kim, Namkug; Kang, Dong-Wha

    2017-09-01

    Artificial intelligence (AI), a computer system aiming to mimic human intelligence, is gaining increasing interest and is being incorporated into many fields, including medicine. Stroke medicine is one such area of application of AI, for improving the accuracy of diagnosis and the quality of patient care. For stroke management, adequate analysis of stroke imaging is crucial. Recently, AI techniques have been applied to decipher the data from stroke imaging and have demonstrated some promising results. In the very near future, such AI techniques may play a pivotal role in determining the therapeutic methods and predicting the prognosis for stroke patients in an individualized manner. In this review, we offer a glimpse at the use of AI in stroke imaging, specifically focusing on its technical principles, clinical application, and future perspectives.

  19. Intelligent fault diagnosis of rolling bearings using an improved deep recurrent neural network

    Science.gov (United States)

    Jiang, Hongkai; Li, Xingqiu; Shao, Haidong; Zhao, Ke

    2018-06-01

    Traditional intelligent fault diagnosis methods for rolling bearings heavily depend on manual feature extraction and feature selection. For this purpose, an intelligent deep learning method, named the improved deep recurrent neural network (DRNN), is proposed in this paper. Firstly, frequency spectrum sequences are used as inputs to reduce the input size and ensure good robustness. Secondly, DRNN is constructed by the stacks of the recurrent hidden layer to automatically extract the features from the input spectrum sequences. Thirdly, an adaptive learning rate is adopted to improve the training performance of the constructed DRNN. The proposed method is verified with experimental rolling bearing data, and the results confirm that the proposed method is more effective than traditional intelligent fault diagnosis methods.
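
    The sketch below, written in PyTorch under assumptions about sizes and data, mirrors the three described steps (spectrum input, stacked recurrent layers, adaptive learning rate via a scheduler); it is not the authors' exact network.

      # Hedged PyTorch sketch: FFT spectra as input, stacked recurrent layers,
      # and a learning rate adapted to the training loss. Shapes, layer sizes
      # and the scheduler choice are assumptions.
      import torch
      import torch.nn as nn

      class SpectrumRNN(nn.Module):
          def __init__(self, spectrum_bins=64, hidden=32, num_layers=2, classes=4):
              super().__init__()
              self.rnn = nn.RNN(spectrum_bins, hidden, num_layers, batch_first=True)
              self.head = nn.Linear(hidden, classes)

          def forward(self, x):                 # x: (batch, seq_len, spectrum_bins)
              out, _ = self.rnn(x)
              return self.head(out[:, -1, :])   # classify from the last time step

      model = SpectrumRNN()
      opt = torch.optim.Adam(model.parameters(), lr=1e-3)
      sched = torch.optim.lr_scheduler.ReduceLROnPlateau(opt, factor=0.5, patience=2)

      x = torch.randn(8, 10, 64)                # fake spectrum sequences
      y = torch.randint(0, 4, (8,))             # fake fault labels
      for epoch in range(5):
          loss = nn.functional.cross_entropy(model(x), y)
          opt.zero_grad()
          loss.backward()
          opt.step()
          sched.step(loss.item())               # adapt the learning rate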

  20. The Effect of Web Assisted Learning with Emotional Intelligence Content on Students' Information about Energy Saving, Attitudes towards Environment and Emotional Intelligence

    Science.gov (United States)

    Ercan, Orhan; Ural, Evrim; Köse, Sinan

    2017-01-01

    For a sustainable world, it is very important for students to develop positive environmental attitudes and to have awareness of energy use. The study aims to investigate the effect of web assisted instruction with emotional intelligence content on 8th grade students' emotional intelligence, attitudes towards environment and energy saving, academic…

  1. Advanced Techniques in Web Intelligence-2 Web User Browsing Behaviour and Preference Analysis

    CERN Document Server

    Palade, Vasile; Jain, Lakhmi

    2013-01-01

    This research volume focuses on analyzing the web user browsing behaviour and preferences in traditional web-based environments, social networks and web 2.0 applications, by using advanced techniques in data acquisition, data processing, pattern extraction and cognitive science for modeling the human actions. The book is directed to graduate students, researchers/scientists and engineers interested in updating their knowledge with the recent trends in web user analysis, for developing the next generation of web-based systems and applications.

  2. An insight into the deep web; why it matters for addiction psychiatry?

    Science.gov (United States)

    Orsolini, Laura; Papanti, Duccio; Corkery, John; Schifano, Fabrizio

    2017-05-01

    Nowadays, the web is rapidly spreading, playing a significant role in the marketing, sale and distribution of "quasi" legal drugs, hence facilitating continuous changes in drug scenarios. The easily renewable and anarchic online drug market is indeed gradually transforming the drug market itself, from a "street" to a "virtual" one, with customers able to shop with relative anonymity in a 24-hr marketplace. The hidden "deep web" is facilitating this phenomenon. The paper aims at providing mental health and addiction professionals with an overview of current knowledge about pro-drug activities on the deep web. A nonparticipant netnographic qualitative study of a list of pro-drug websites (blogs, fora, and drug marketplaces) located on the surface web was carried out. A systematic Internet search was conducted on Duckduckgo® and Google® whilst including the following keywords: "drugs" or "legal highs" or "Novel Psychoactive Substances" or "NPS" combined with the word deep web. Four themes (e.g., "How to access into the deepweb"; "Darknet and the online drug trading sites"; "Grams-search engine for the deep web"; and "Cryptocurrencies") and 14 categories were generated and discussed. This paper represents a complete and systematic guideline about the deep web, specifically focusing on practical information on online drug marketplaces, useful for addiction professionals. Copyright © 2017 John Wiley & Sons, Ltd.

  3. Onto-Agents-Enabling Intelligent Agents on the Web

    Science.gov (United States)

    2005-05-01

    Manual annotation is tedious, and often done poorly. Even within the funded DAML project fewer pages were annotated than was hoped. In eCommerce, there...been overcome, congratulations! The DAML project was initiated at the birth of the semantic web. It contributed greatly to define a new research

  4. Meaning on the web : Evolution vs intelligent design?

    NARCIS (Netherlands)

    Brachman, Ron; Connolly, Dan; Khare, Rohit; Smadja, Frank; Van Harmelen, Frank

    2006-01-01

    It is a truism that as the Web grows in size and scope, it becomes harder to find what we want, to identify like-minded people and communities, to find the best ads to offer, and to have applications work together smoothly. Services don't interoperate; queries yield long lists of results, most of

  5. Towards the Development of Web-based Business intelligence Tools

    DEFF Research Database (Denmark)

    Georgiev, Lachezar; Tanev, Stoyan

    2011-01-01

    This paper focuses on using web search techniques in examining the co-creation strategies of technology driven firms. It does not focus on the co-creation results but describes the implementation of a software tool using data mining techniques to analyze the content on firms’ websites. The tool...

  6. Designing A General Deep Web Access Approach Based On A Newly Introduced Factor; Harvestability Factor (HF)

    NARCIS (Netherlands)

    Khelghati, Mohammadreza; van Keulen, Maurice; Hiemstra, Djoerd

    2014-01-01

    The growing need to access more and more information draws attention to the huge amount of data hidden behind web forms, known as the deep web. To make this data accessible, harvesters have a crucial role. Targeting different domains and websites enhances the need to have a general-purpose harvester

  7. Search of the Deep and Dark Web via DARPA Memex

    Science.gov (United States)

    Mattmann, C. A.

    2015-12-01

    Search has progressed through several stages due to the increasing size of the Web. Search engines first focused on text and its rate of occurrence; then focused on the notion of link analysis and citation; then on interactivity and guided search; and now on the use of social media - who we interact with, what we comment on, and who we follow (and who follows us). The next stage, referred to as "deep search," requires solutions that can bring together text, images, video, importance, interactivity, and social media to solve this challenging problem. The Apache Nutch project provides an open framework for large-scale, targeted, vertical search with capabilities to support all past and potential future search engine foci. Nutch is a flexible infrastructure allowing open access to ranking, URL selection and filtering approaches, and to the link graph generated from search, and Nutch has spawned entire sub-communities including Apache Hadoop and Apache Tika. It addresses many current needs with the capability to support new technologies such as image and video. On the DARPA Memex project, we are creating specific extensions to Nutch that will directly improve its overall technological superiority for search and that will directly allow us to address complex search problems including human trafficking. We are integrating state-of-the-art algorithms developed by Kitware for IARPA Aladdin combined with work by Harvard to provide image and video understanding support allowing automatic detection of people and things and massive deployment via Nutch. We are expanding Apache Tika for scene understanding, object/person detection and classification in images/video. We are delivering an interactive and visual interface for initiating Nutch crawls. The interface uses Python technologies to expose Nutch data and to provide a domain specific language for crawls. With the Bokeh visualization library, the interface delivers simple interactive crawl visualization and

  8. LOG FILE ANALYSIS AND CREATION OF MORE INTELLIGENT WEB SITES

    Directory of Open Access Journals (Sweden)

    Mislav Šimunić

    2012-07-01

    Full Text Available To enable successful performance of any company or business system, both in the world and in the Republic of Croatia, among many problems relating to its operations and particularly to maximum utilization and efficiency of the Internet as a medium for running business (especially in terms of marketing), they should make the best possible use of the present-day global trends and advantages of sophisticated technologies and approaches to running a business. Bearing in mind the fact of daily increasing competition and a more demanding market, this paper offers a certain scientific and practical contribution to continuous analysis of market demand and adaptation thereto by analyzing the log files and by acting retroactively on the web site. A log file is a carrier of numerous data and indicators that should be used in the best possible way to improve the entire business operations of a company. However, this is not always simple and easy. Web sites differ in size, purpose, and the technology used for designing them. For this very reason, the analytic frameworks should be such that they can cover any web site and at the same time leave some space for analyzing and investigating the specific characteristics of each web site and provide for its dynamics by analyzing the log file records. Those considerations were a basis for this paper
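
    As a small, hedged illustration of the kind of log file analysis the paper advocates, the Python snippet below parses lines in the common Apache combined log format and counts the most requested pages; the sample lines and field layout are assumptions about a typical web server.

      # Hedged sketch: parse Apache combined-format access log lines and count
      # which pages are requested most often, a basic input for adapting a site.
      import re
      from collections import Counter

      LOG_LINE = re.compile(
          r'(?P<ip>\S+) \S+ \S+ \[(?P<time>[^\]]+)\] '
          r'"(?P<method>\S+) (?P<path>\S+) [^"]*" (?P<status>\d{3}) \S+'
      )

      sample_log = [
          '203.0.113.7 - - [10/Jul/2012:13:55:36 +0200] "GET /products.html HTTP/1.1" 200 2326',
          '203.0.113.8 - - [10/Jul/2012:13:56:01 +0200] "GET /products.html HTTP/1.1" 200 2326',
          '203.0.113.9 - - [10/Jul/2012:13:57:12 +0200] "GET /contact.html HTTP/1.1" 404 512',
      ]

      hits = Counter()
      for line in sample_log:
          m = LOG_LINE.match(line)
          if m and m.group("status").startswith("2"):   # count successful requests only
              hits[m.group("path")] += 1

      print(hits.most_common())   # -> [('/products.html', 2)]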

  9. Survey of Techniques for Deep Web Source Selection and Surfacing the Hidden Web Content

    OpenAIRE

    Khushboo Khurana; M.B. Chandak

    2016-01-01

    Large and continuously growing dynamic web content has created new opportunities for large-scale data analysis in recent years. There is a huge amount of information that traditional web crawlers cannot access, since they use a link analysis technique by which only the surface web can be accessed. Traditional search engine crawlers require web pages to be linked to other pages via hyperlinks, causing a large amount of web data to be hidden from the crawlers. Enormous data is available in...

  10. Prediction of the behavior of reinforced concrete deep beams with web openings using the finite ele

    Directory of Open Access Journals (Sweden)

    Ashraf Ragab Mohamed

    2014-06-01

    Full Text Available The exact analysis of reinforced concrete deep beams is a complex problem and the presence of web openings aggravates the situation. However, no code provision exists for the analysis of deep beams with web opening. The code implemented strut and tie models are debatable and no unique solution using these models is available. In this study, the finite element method is utilized to study the behavior of reinforced concrete deep beams with and without web openings. Furthermore, the effect of the reinforcement distribution on the beam overall capacity has been studied and compared to the Egyptian code guidelines. The damaged plasticity model has been used for the analysis. Models of simply supported deep beams under 3 and 4-point bending and continuous deep beams with and without web openings have been analyzed. Model verification has shown good agreement to literature experimental work. Results of the parametric analysis have shown that web openings crossing the expected compression struts should be avoided, and the depth of the opening should not exceed 20% of the beam overall depth. The reinforcement distribution should be in the range of 0.1–0.2 beam depth for simply supported deep beams.

  11. From machine learning to deep learning: progress in machine intelligence for rational drug discovery.

    Science.gov (United States)

    Zhang, Lu; Tan, Jianjun; Han, Dan; Zhu, Hao

    2017-11-01

    Machine intelligence, which is normally presented as artificial intelligence, refers to the intelligence exhibited by computers. In the history of rational drug discovery, various machine intelligence approaches have been applied to guide traditional experiments, which are expensive and time-consuming. Over the past several decades, machine-learning tools, such as quantitative structure-activity relationship (QSAR) modeling, were developed that can identify potentially biologically active molecules from millions of candidate compounds quickly and cheaply. However, when drug discovery moved into the era of 'big' data, machine learning approaches evolved into deep learning approaches, which are a more powerful and efficient way to deal with the massive amounts of data generated from modern drug discovery approaches. Here, we summarize the history of machine learning and provide insight into recently developed deep learning approaches and their applications in rational drug discovery. We suggest that this evolution of machine intelligence now provides a guide for early-stage drug design and discovery in the current big data era. Copyright © 2017 Elsevier Ltd. All rights reserved.

  12. Intelligent (Autonomous) Power Controller Development for Human Deep Space Exploration

    Science.gov (United States)

    Soeder, James; Raitano, Paul; McNelis, Anne

    2016-01-01

    As NASA's Evolvable Mars Campaign and other exploration initiatives continue to mature, they have identified the need for more autonomous operation of the power system. For current human space operations such as the International Space Station, the paradigm is to perform the planning, operation and fault diagnosis from the ground. However, the dual problems of communication lag as well as limited communication bandwidth beyond geosynchronous orbit underscore the need to change the operation methodology for human operation in deep space. To address this need, for the past several years the Glenn Research Center has had an effort to develop an autonomous power controller for human deep space vehicles. This presentation discusses the present roadmap for deep space exploration along with a description of a conceptual power system architecture for exploration modules. It then contrasts the present ground-centric control and management architecture, with limited autonomy on board the spacecraft, with an advanced autonomous power control system that features ground-based monitoring and a spacecraft mission manager with autonomous control of all core systems, including power. It then presents a functional breakdown of the autonomous power control system and examines its operation in both normal and fault modes. Finally, it discusses progress made in the development of a real-time power system model and how it is being used to evaluate the performance of the controller as well as for verification of the overall operation.

  13. Intelligent Detection of Structure from Remote Sensing Images Based on Deep Learning Method

    Science.gov (United States)

    Xin, L.

    2018-04-01

    Utilizing high-resolution remote sensing images for earth observation has become a common method of land use monitoring. Traditional image interpretation requires substantial human participation, which is inefficient and makes accuracy difficult to guarantee. At present, artificial intelligence methods such as deep learning have many advantages in image recognition. By means of a large number of remote sensing image samples and deep neural network models, we can rapidly identify objects of interest such as buildings. In terms of both efficiency and accuracy, the deep learning method is superior. This paper explains the research on the deep learning method using a large number of remote sensing image samples and verifies the feasibility of building extraction via experiments.
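    The record describes training deep neural networks on labelled remote-sensing samples to extract buildings, but the exact network is not given; the sketch below is therefore only a minimal PyTorch patch classifier of that general kind. The architecture, patch size and dummy data are assumptions, not the paper's model.

```python
# Illustrative sketch only: a small CNN that classifies remote-sensing patches
# as "building" vs "background". Architecture and data are assumptions.
import torch
import torch.nn as nn

class PatchClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 16 * 16, 64), nn.ReLU(),
            nn.Linear(64, 2),            # two classes: building / background
        )

    def forward(self, x):                # x: (N, 3, 64, 64) image patches
        return self.head(self.features(x))

model = PatchClassifier()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Dummy batch standing in for labelled 64x64 RGB patches.
patches = torch.randn(8, 3, 64, 64)
labels = torch.randint(0, 2, (8,))

optimizer.zero_grad()
loss = loss_fn(model(patches), labels)
loss.backward()
optimizer.step()
print("training loss on dummy batch:", loss.item())
```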

  14. IAServ: an intelligent home care web services platform in a cloud for aging-in-place.

    Science.gov (United States)

    Su, Chuan-Jun; Chiang, Chang-Yu

    2013-11-12

    As the elderly population has been rapidly expanding and the core tax-paying population has been shrinking, the need for adequate elderly health and housing services continues to grow while the resources to provide such services are becoming increasingly scarce. Thus, increasing the efficiency of the delivery of healthcare services through the use of modern technology is a pressing issue. The seamless integration of such enabling technologies as ontology, intelligent agents, web services, and cloud computing is transforming healthcare from hospital-based treatments to home-based self-care and preventive care. A ubiquitous healthcare platform based on this technological integration, which synergizes service providers with patients' needs, needs to be developed to provide personalized healthcare services at the right time, in the right place, and in the right manner. This paper presents the development and overall architecture of IAServ (the Intelligent Aging-in-place Home care Web Services Platform) to provide personalized healthcare service ubiquitously in a cloud computing setting to support the most desirable and cost-efficient method of care for the aged: aging in place. IAServ is expected to offer intelligent, pervasive, accurate and contextually-aware personal care services. Architecturally, the implemented IAServ leverages web services and cloud computing to provide economical, scalable, and robust healthcare services over the Internet.

  15. IAServ: An Intelligent Home Care Web Services Platform in a Cloud for Aging-in-Place

    Directory of Open Access Journals (Sweden)

    Chang-Yu Chiang

    2013-11-01

    Full Text Available As the elderly population has been rapidly expanding and the core tax-paying population has been shrinking, the need for adequate elderly health and housing services continues to grow while the resources to provide such services are becoming increasingly scarce. Thus, increasing the efficiency of the delivery of healthcare services through the use of modern technology is a pressing issue. The seamless integration of such enabling technologies as ontology, intelligent agents, web services, and cloud computing is transforming healthcare from hospital-based treatments to home-based self-care and preventive care. A ubiquitous healthcare platform based on this technological integration, which synergizes service providers with patients’ needs, needs to be developed to provide personalized healthcare services at the right time, in the right place, and in the right manner. This paper presents the development and overall architecture of IAServ (the Intelligent Aging-in-place Home care Web Services Platform) to provide personalized healthcare service ubiquitously in a cloud computing setting to support the most desirable and cost-efficient method of care for the aged: aging in place. IAServ is expected to offer intelligent, pervasive, accurate and contextually-aware personal care services. Architecturally, the implemented IAServ leverages web services and cloud computing to provide economical, scalable, and robust healthcare services over the Internet.

  16. An intelligent framework for dynamic web services composition in the semantic web

    OpenAIRE

    Thakker, D

    2008-01-01

    As Web services are being increasingly adopted as the distributed computing technology of choice to securely publish application services beyond the firewall, the importance of composing them to create new, value-added services is increasing. Thus far, the most successful practical approach to Web services composition, largely endorsed by the industry, falls under the static composition category, where the service selection and flow management are done a priori and manually. The second approach...

  17. A deep knowledge architecture for intelligent support of nuclear waste transportation decisions

    International Nuclear Information System (INIS)

    Batra, D.; Bowen, W.M.; Hill, T.R.; Weeks, K.D.

    1988-01-01

    The concept of intelligent decision support has been discussed and explored in several recent papers, one of which has suggested the use of a Deep Knowledge Architecture. This paper explores this concept through application to a specific decision environment. The complex problems involved in nuclear waste disposal decisions provide an excellent test case. The resulting architecture uses an integrated, multi-level model base to represent the deep knowledge of the problem. Combined with the surface-level knowledge represented by the database, the proposed knowledge base complements that of the decision-maker, allowing analysis of decisions at a range of levels, which may also occur at a range of levels.

  18. A novel method for intelligent fault diagnosis of rolling bearings using ensemble deep auto-encoders

    Science.gov (United States)

    Shao, Haidong; Jiang, Hongkai; Lin, Ying; Li, Xingqiu

    2018-03-01

    Automatic and accurate identification of rolling bearings fault categories, especially for the fault severities and fault orientations, is still a major challenge in rotating machinery fault diagnosis. In this paper, a novel method called ensemble deep auto-encoders (EDAEs) is proposed for intelligent fault diagnosis of rolling bearings. Firstly, different activation functions are employed as the hidden functions to design a series of auto-encoders (AEs) with different characteristics. Secondly, EDAEs are constructed with various auto-encoders for unsupervised feature learning from the measured vibration signals. Finally, a combination strategy is designed to ensure accurate and stable diagnosis results. The proposed method is applied to analyze the experimental bearing vibration signals. The results confirm that the proposed method can get rid of the dependence on manual feature extraction and overcome the limitations of individual deep learning models, which is more effective than the existing intelligent diagnosis methods.
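    The EDAE idea combines several auto-encoders that differ in their hidden activation functions and fuses their diagnoses. The sketch below approximates that pattern with scikit-learn, using single-hidden-layer MLPRegressor auto-encoders, one classifier per encoding, and majority voting on synthetic data; it is a simplified stand-in, not the authors' implementation.

```python
# Rough sketch of an ensemble of auto-encoders with different activations,
# approximated with scikit-learn; not the authors' exact EDAE method.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.linear_model import LogisticRegression

def encode(X, ae, activation):
    """Hidden-layer representation of a single-hidden-layer MLPRegressor."""
    h = X @ ae.coefs_[0] + ae.intercepts_[0]
    if activation == "relu":
        return np.maximum(h, 0)
    if activation == "tanh":
        return np.tanh(h)
    return 1.0 / (1.0 + np.exp(-h))          # "logistic"

rng = np.random.default_rng(0)
# Stand-in for vibration features: 200 samples, 32 features, 3 fault classes.
X = rng.normal(size=(200, 32))
y = rng.integers(0, 3, size=200)

members = []
for act in ("relu", "tanh", "logistic"):
    ae = MLPRegressor(hidden_layer_sizes=(16,), activation=act,
                      max_iter=2000, random_state=0).fit(X, X)  # reconstruct input
    clf = LogisticRegression(max_iter=1000).fit(encode(X, ae, act), y)
    members.append((ae, act, clf))

# Combination strategy: simple majority vote across ensemble members.
votes = np.stack([clf.predict(encode(X, ae, act)) for ae, act, clf in members])
pred = np.apply_along_axis(lambda v: np.bincount(v).argmax(), 0, votes)
print("ensemble training accuracy:", (pred == y).mean())
```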

  19. Artificial Intelligence as Structural Estimation: Economic Interpretations of Deep Blue, Bonanza, and AlphaGo

    OpenAIRE

    Igami, Mitsuru

    2017-01-01

    Artificial intelligence (AI) has achieved superhuman performance in a growing number of tasks, but understanding and explaining AI remain challenging. This paper clarifies the connections between machine-learning algorithms to develop AIs and the econometrics of dynamic structural models through the case studies of three famous game AIs. Chess-playing Deep Blue is a calibrated value function, whereas shogi-playing Bonanza is an estimated value function via Rust's (1987) nested fixed-point met...

  20. A Dynamic Recommender System for Improved Web Usage Mining and CRM Using Swarm Intelligence.

    Science.gov (United States)

    Alphy, Anna; Prabakaran, S

    2015-01-01

    In modern days, to enrich e-business, the websites are personalized for each user by understanding their interests and behavior. The main challenges of online usage data are information overload and their dynamic nature. In this paper, to address these issues, a WebBluegillRecom-annealing dynamic recommender system that uses web usage mining techniques in tandem with software agents developed for providing dynamic recommendations to users that can be used for customizing a website is proposed. The proposed WebBluegillRecom-annealing dynamic recommender uses swarm intelligence from the foraging behavior of a bluegill fish. It overcomes the information overload by handling dynamic behaviors of users. Our dynamic recommender system was compared against traditional collaborative filtering systems. The results show that the proposed system has higher precision, coverage, F1 measure, and scalability than the traditional collaborative filtering systems. Moreover, the recommendations given by our system overcome the overspecialization problem by including variety in recommendations.
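    The system above is evaluated with precision, coverage, F1 measure and scalability; as a reminder of how the set-based part of such an evaluation is usually computed for a top-N recommender, a generic precision/recall/F1 helper is shown below. Coverage has several competing definitions in the recommender literature and is omitted; the example lists are invented.

```python
# Standard set-based evaluation of top-N recommendations (illustrative only).
def precision_recall_f1(recommended, relevant):
    recommended, relevant = set(recommended), set(relevant)
    hits = len(recommended & relevant)
    precision = hits / len(recommended) if recommended else 0.0
    recall = hits / len(relevant) if relevant else 0.0
    f1 = (2 * precision * recall / (precision + recall)) if (precision + recall) else 0.0
    return precision, recall, f1

# Example: pages recommended to one user vs. pages the user actually visited.
print(precision_recall_f1(["p1", "p2", "p3", "p4"], ["p2", "p4", "p7"]))
```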

  1. Deep Web Search Interface Identification: A Semi-Supervised Ensemble Approach

    OpenAIRE

    Hong Wang; Qingsong Xu; Lifeng Zhou

    2014-01-01

    To surface the Deep Web, one crucial task is to predict whether a given web page has a search interface (searchable HyperText Markup Language (HTML) form) or not. Previous studies have focused on supervised classification with labeled examples. However, labeled data are scarce, hard to get and require tedious manual work, while unlabeled HTML forms are abundant and easy to obtain. In this research, we consider the plausibility of using both labeled and unlabeled data to train better models to...

  2. Food web structure and vulnerability of a deep-sea ecosystem in the NW Mediterranean Sea

    OpenAIRE

    Tecchio, Samuele; Coll, Marta; Christensen, Villy; Company, Joan B.; Ramirez-Llodra, Eva; Sarda, Francisco

    2013-01-01

    There is increasing fishing pressure on the continental margins of the oceans, and this raises concerns about the vulnerability of the ecosystems thriving there. The current knowledge of the biology of deep-water fish species identifies potential reduced resilience to anthropogenic disturbance. However, there are extreme difficulties in sampling the deep sea, resulting in poorly resolved and indirectly obtained food-web relationships. Here, we modelled the flows and biomasses of a Mediterrane...

  3. SMART CITIES INTELLIGENCE SYSTEM (SMACiSYS) INTEGRATING SENSOR WEB WITH SPATIAL DATA INFRASTRUCTURES (SENSDI)

    Directory of Open Access Journals (Sweden)

    D. Bhattacharya

    2017-09-01

    Full Text Available The paper endeavours to enhance the Sensor Web with crucial geospatial analysis capabilities through integration with Spatial Data Infrastructure. The objective is development of automated smart cities intelligence system (SMACiSYS) with sensor-web access (SENSDI) utilizing geomatics for sustainable societies. There has been a need to develop automated integrated system to categorize events and issue information that reaches users directly. At present, no web-enabled information system exists which can disseminate messages after events evaluation in real time. Research work formalizes a notion of an integrated, independent, generalized, and automated geo-event analysing system making use of geo-spatial data under popular usage platform. Integrating Sensor Web With Spatial Data Infrastructures (SENSDI) aims to extend SDIs with sensor web enablement, converging geospatial and built infrastructure, and implement test cases with sensor data and SDI. The other benefit, conversely, is the expansion of spatial data infrastructure to utilize sensor web, dynamically and in real time for smart applications that smarter cities demand nowadays. Hence, SENSDI augments existing smart cities platforms utilizing sensor web and spatial information achieved by coupling pairs of otherwise disjoint interfaces and APIs formulated by Open Geospatial Consortium (OGC) keeping entire platform open access and open source. SENSDI is based on Geonode, QGIS and Java, that bind most of the functionalities of Internet, sensor web and nowadays Internet of Things superseding Internet of Sensors as well. In a nutshell, the project delivers a generalized real-time accessible and analysable platform for sensing the environment and mapping the captured information for optimal decision-making and societal benefit.

  4. Smart Cities Intelligence System (SMACiSYS) Integrating Sensor Web with Spatial Data Infrastructures (sensdi)

    Science.gov (United States)

    Bhattacharya, D.; Painho, M.

    2017-09-01

    The paper endeavours to enhance the Sensor Web with crucial geospatial analysis capabilities through integration with Spatial Data Infrastructure. The objective is development of automated smart cities intelligence system (SMACiSYS) with sensor-web access (SENSDI) utilizing geomatics for sustainable societies. There has been a need to develop automated integrated system to categorize events and issue information that reaches users directly. At present, no web-enabled information system exists which can disseminate messages after events evaluation in real time. Research work formalizes a notion of an integrated, independent, generalized, and automated geo-event analysing system making use of geo-spatial data under popular usage platform. Integrating Sensor Web With Spatial Data Infrastructures (SENSDI) aims to extend SDIs with sensor web enablement, converging geospatial and built infrastructure, and implement test cases with sensor data and SDI. The other benefit, conversely, is the expansion of spatial data infrastructure to utilize sensor web, dynamically and in real time for smart applications that smarter cities demand nowadays. Hence, SENSDI augments existing smart cities platforms utilizing sensor web and spatial information achieved by coupling pairs of otherwise disjoint interfaces and APIs formulated by Open Geospatial Consortium (OGC) keeping entire platform open access and open source. SENSDI is based on Geonode, QGIS and Java, that bind most of the functionalities of Internet, sensor web and nowadays Internet of Things superseding Internet of Sensors as well. In a nutshell, the project delivers a generalized real-time accessible and analysable platform for sensing the environment and mapping the captured information for optimal decision-making and societal benefit.

  5. The Linking Probability of Deep Spider-Web Networks

    OpenAIRE

    Pippenger, Nicholas

    2005-01-01

    We consider crossbar switching networks with base $b$ (that is, constructed from $b\times b$ crossbar switches), scale $k$ (that is, with $b^k$ inputs, $b^k$ outputs and $b^k$ links between each consecutive pair of stages) and depth $l$ (that is, with $l$ stages). We assume that the crossbars are interconnected according to the spider-web pattern, whereby two diverging paths reconverge only after at least $k$ stages. We assume that each vertex is independently idle with probability $q$, the v...

  6. Autonomous development and learning in artificial intelligence and robotics: Scaling up deep learning to human-like learning.

    Science.gov (United States)

    Oudeyer, Pierre-Yves

    2017-01-01

    Autonomous lifelong development and learning are fundamental capabilities of humans, differentiating them from current deep learning systems. However, other branches of artificial intelligence have designed crucial ingredients towards autonomous learning: curiosity and intrinsic motivation, social learning and natural interaction with peers, and embodiment. These mechanisms guide exploration and autonomous choice of goals, and integrating them with deep learning opens stimulating perspectives.

  7. A Web-Based Authoring Tool for Algebra-Related Intelligent Tutoring Systems

    Directory of Open Access Journals (Sweden)

    Maria Virvou

    2000-01-01

    Full Text Available This paper describes the development of a web-based authoring tool for Intelligent Tutoring Systems. The tool aims to be useful to teachers and students of domains that make use of algebraic equations. The initial input to the tool is a "description" of a specific domain given by a human teacher. In return, the tool provides assistance with the construction of exercises by the human teacher and then monitors the students while they are solving the exercises and provides appropriate feedback. The tool incorporates intelligence in its diagnostic component, which performs diagnosis of students’ errors. It also handles the teaching material in a flexible and individualised way.

  8. Intelligent Information Fusion in the Aviation Domain: A Semantic-Web based Approach

    Science.gov (United States)

    Ashish, Naveen; Goforth, Andre

    2005-01-01

    Information fusion from multiple sources is a critical requirement for System Wide Information Management in the National Airspace (NAS). NASA and the FAA envision creating an "integrated pool" of information originally coming from different sources, which users, intelligent agents and NAS decision support tools can tap into. In this paper we present the results of our initial investigations into the requirements and prototype development of such an integrated information pool for the NAS. We have attempted to ascertain key requirements for such an integrated pool based on a survey of DSS tools that will benefit from this integrated pool. We then advocate key technologies from computer science research areas such as the semantic web, information integration, and intelligent agents that we believe are well suited to achieving the envisioned system wide information management capabilities.

  9. Deep pelagic food web structure as revealed by in situ feeding observations.

    Science.gov (United States)

    Choy, C Anela; Haddock, Steven H D; Robison, Bruce H

    2017-12-06

    Food web linkages, or the feeding relationships between species inhabiting a shared ecosystem, are an ecological lens through which ecosystem structure and function can be assessed, and thus are fundamental to informing sustainable resource management. Empirical feeding datasets have traditionally been painstakingly generated from stomach content analysis, direct observations and from biochemical trophic markers (stable isotopes, fatty acids, molecular tools). Each approach carries inherent biases and limitations, as well as advantages. Here, using 27 years (1991-2016) of in situ feeding observations collected by remotely operated vehicles (ROVs), we quantitatively characterize the deep pelagic food web of central California within the California Current, complementing existing studies of diet and trophic interactions with a unique perspective. Seven hundred and forty-three independent feeding events were observed with ROVs from near-surface waters down to depths approaching 4000 m, involving an assemblage of 84 different predators and 82 different prey types, for a total of 242 unique feeding relationships. The greatest diversity of prey was consumed by narcomedusae, followed by physonect siphonophores, ctenophores and cephalopods. We highlight key interactions within the poorly understood 'jelly web', showing the importance of medusae, ctenophores and siphonophores as key predators, whose ecological significance is comparable to large fish and squid species within the central California deep pelagic food web. Gelatinous predators are often thought to comprise relatively inefficient trophic pathways within marine communities, but we build upon previous findings to document their substantial and integral roles in deep pelagic food webs. © 2017 The Authors.

  10. Cluo: Web-Scale Text Mining System For Open Source Intelligence Purposes

    Directory of Open Access Journals (Sweden)

    Przemyslaw Maciolek

    2013-01-01

    Full Text Available The amount of textual information published on the Internet is considered to be in billions of web pages, blog posts, comments, social media updates and others. Analyzing such quantities of data requires a high level of distribution – both data and computing. This is especially true in the case of complex algorithms, often used in text mining tasks. The paper presents a prototype implementation of CLUO – an Open Source Intelligence (OSINT) system, which extracts and analyzes significant quantities of openly available information.

  11. Rapid and accurate intraoperative pathological diagnosis by artificial intelligence with deep learning technology.

    Science.gov (United States)

    Zhang, Jing; Song, Yanlin; Xia, Fan; Zhu, Chenjing; Zhang, Yingying; Song, Wenpeng; Xu, Jianguo; Ma, Xuelei

    2017-09-01

    Frozen section is widely used for intraoperative pathological diagnosis (IOPD), which is essential for intraoperative decision making. However, frozen section suffers from some drawbacks, such as being time consuming and having a high misdiagnosis rate. Recently, artificial intelligence (AI) with deep learning technology has shown a bright future in medicine. We hypothesize that AI with deep learning technology could help IOPD, with a computer trained by a dataset of intraoperative lesion images. Evidence supporting our hypothesis includes the successful use of AI with deep learning technology in diagnosing skin cancer, and the development of deep-learning algorithms. A large training dataset is critical to increasing diagnostic accuracy. The performance of the trained machine could be tested with new images before clinical use. Real-time diagnosis, ease of use and potentially high accuracy are the advantages of AI for IOPD. In sum, AI with deep learning technology is a promising method to help rapid and accurate IOPD. Copyright © 2017 Elsevier Ltd. All rights reserved.

  12. Inside the Web: A Look at Digital Libraries and the Invisible/Deep Web

    Science.gov (United States)

    Su, Mila C.

    2009-01-01

    The evolution of the Internet and the World Wide Web continually exceeds expectations with the "swift pace" of technological innovations. Information is added, and just as quickly becomes outdated at a rapid pace. Researchers have found that Digital materials can provide access to primary source materials and connect the researcher to institutions…

  13. Obstacle Detection for Intelligent Transportation Systems Using Deep Stacked Autoencoder and k-Nearest Neighbor Scheme

    KAUST Repository

    Dairi, Abdelkader; Harrou, Fouzi; Sun, Ying; Senouci, Mohamed

    2018-01-01

    Obstacle detection is an essential element for the development of intelligent transportation systems so that accidents can be avoided. In this study, we propose a stereovision-based method for detecting obstacles in an urban environment. The proposed method uses a deep stacked auto-encoders (DSA) model that combines the greedy learning features with the dimensionality reduction capacity and employs an unsupervised k-nearest neighbors algorithm (KNN) to accurately and reliably detect the presence of obstacles. We consider obstacle detection as an anomaly detection problem. We evaluated the proposed method by using practical data from three publicly available datasets, the Malaga stereovision urban dataset (MSVUD), the Daimler urban segmentation dataset (DUSD), and the Bahnhof dataset. Also, we compared the efficiency of the DSA-KNN approach to deep belief network (DBN)-based clustering schemes. Results show that the DSA-KNN is suitable to visually monitor urban scenes.
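    The DSA-KNN method treats obstacles as anomalies: auto-encoder features are learned from obstacle-free scenes and k-nearest-neighbour distances flag outliers. The simplified sketch below reproduces that pattern with scikit-learn and a single shallow auto-encoder in place of a deep stack; the synthetic data and the threshold rule are assumptions, not the KAUST implementation.

```python
# Simplified anomaly-detection sketch in the spirit of DSA-KNN:
# auto-encoder features + k-nearest-neighbour distance threshold.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(1)
X_free = rng.normal(0.0, 1.0, size=(300, 20))      # stand-in for obstacle-free scenes
X_test = np.vstack([rng.normal(0.0, 1.0, size=(20, 20)),
                    rng.normal(4.0, 1.0, size=(5, 20))])   # last 5 rows: "obstacles"

# One shallow auto-encoder instead of a deep stack (simplification).
ae = MLPRegressor(hidden_layer_sizes=(8,), activation="relu",
                  max_iter=3000, random_state=0).fit(X_free, X_free)

def encode(X):
    return np.maximum(X @ ae.coefs_[0] + ae.intercepts_[0], 0)

Z_free = encode(X_free)
knn = NearestNeighbors(n_neighbors=6).fit(Z_free)
self_dist, _ = knn.kneighbors(Z_free)              # first column is the point itself
threshold = np.percentile(self_dist[:, 1:].mean(axis=1), 99)

test_dist, _ = knn.kneighbors(encode(X_test), n_neighbors=5)
score = test_dist.mean(axis=1)                     # mean distance to 5 nearest "free" scenes
print("flagged as obstacles:", np.where(score > threshold)[0])
```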

  14. Obstacle Detection for Intelligent Transportation Systems Using Deep Stacked Autoencoder and k-Nearest Neighbor Scheme

    KAUST Repository

    Dairi, Abdelkader

    2018-04-30

    Obstacle detection is an essential element for the development of intelligent transportation systems so that accidents can be avoided. In this study, we propose a stereovision-based method for detecting obstacles in an urban environment. The proposed method uses a deep stacked auto-encoders (DSA) model that combines the greedy learning features with the dimensionality reduction capacity and employs an unsupervised k-nearest neighbors algorithm (KNN) to accurately and reliably detect the presence of obstacles. We consider obstacle detection as an anomaly detection problem. We evaluated the proposed method by using practical data from three publicly available datasets, the Malaga stereovision urban dataset (MSVUD), the Daimler urban segmentation dataset (DUSD), and the Bahnhof dataset. Also, we compared the efficiency of the DSA-KNN approach to deep belief network (DBN)-based clustering schemes. Results show that the DSA-KNN is suitable to visually monitor urban scenes.

  15. Informatics in radiology: automated Web-based graphical dashboard for radiology operational business intelligence.

    Science.gov (United States)

    Nagy, Paul G; Warnock, Max J; Daly, Mark; Toland, Christopher; Meenan, Christopher D; Mezrich, Reuben S

    2009-11-01

    Radiology departments today are faced with many challenges to improve operational efficiency, performance, and quality. Many organizations rely on antiquated, paper-based methods to review their historical performance and understand their operations. With increased workloads, geographically dispersed image acquisition and reading sites, and rapidly changing technologies, this approach is increasingly untenable. A Web-based dashboard was constructed to automate the extraction, processing, and display of indicators and thereby provide useful and current data for twice-monthly departmental operational meetings. The feasibility of extracting specific metrics from clinical information systems was evaluated as part of a longer-term effort to build a radiology business intelligence architecture. Operational data were extracted from clinical information systems and stored in a centralized data warehouse. Higher-level analytics were performed on the centralized data, a process that generated indicators in a dynamic Web-based graphical environment that proved valuable in discussion and root cause analysis. Results aggregated over a 24-month period since implementation suggest that this operational business intelligence reporting system has provided significant data for driving more effective management decisions to improve productivity, performance, and quality of service in the department.

  16. A COMPARATIVE ANALYSIS OF WEB INFORMATION EXTRACTION TECHNIQUES: DEEP LEARNING vs. NAÏVE BAYES vs. BACK PROPAGATION NEURAL NETWORKS IN WEB DOCUMENT EXTRACTION

    Directory of Open Access Journals (Sweden)

    J. Sharmila

    2016-01-01

    Full Text Available Web mining research is becoming more important these days because a large amount of information is managed through the web. Web usage is expanding in an uncontrolled way, so a dedicated framework is required for managing such large volumes of information in the web space. Web mining is divided into three major branches: web content mining, web usage mining and web structure mining. Tak-Lam Wong proposed a web content mining methodology based on Bayesian Networks (BN), learning to extract web data and discover features using the Bayesian approach. Motivated by that work, we propose a web content mining methodology based on a deep learning algorithm. Deep learning is preferred over BN because BN does not involve the kind of learning architecture used in the proposed system. The main objective of this investigation is web document extraction using different classification algorithms and their analysis. The work extracts data from web URLs and presents three classification algorithms: a deep learning algorithm, a Bayesian algorithm and a BPNN algorithm. Deep learning is a powerful set of techniques for learning in neural networks, applied in areas such as computer vision, speech recognition, natural language processing and biometrics. It is a relatively simple classification technique, applicable to a subset of a broad field, and requires less time for classification. Naive Bayes classifiers are a family of simple probabilistic classifiers based on applying Bayes' theorem with strong independence assumptions between the features. The BPNN algorithm is then used for classification. Initially, the training and testing dataset contains many URLs; the content is then extracted from the dataset. The
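    As a toy counterpart to the comparison described above, the sketch below trains a naive Bayes classifier and a small neural network on TF-IDF features of a few web-document snippets using scikit-learn; the corpus is invented, and a genuine deep learning model would replace the small MLP used here.

```python
# Toy comparison of a naive Bayes classifier and a small neural network on
# web-document text; data and pipeline are illustrative assumptions only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.neural_network import MLPClassifier

docs = ["cheap flights and hotel deals", "book your holiday package online",
        "football match results and league table", "live scores and player transfers",
        "stock market rally lifts tech shares", "central bank raises interest rates"]
labels = ["travel", "travel", "sport", "sport", "finance", "finance"]

vec = TfidfVectorizer()
X = vec.fit_transform(docs)

nb = MultinomialNB().fit(X, labels)
mlp = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000,
                    random_state=0).fit(X, labels)

test = vec.transform(["latest transfer news and match report"])
print("naive Bayes :", nb.predict(test)[0])
print("neural net  :", mlp.predict(test)[0])
```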

  17. Post-processing of Deep Web Information Extraction Based on Domain Ontology

    Directory of Open Access Journals (Sweden)

    PENG, T.

    2013-11-01

    Full Text Available Many methods are utilized to extract and process query results in the deep Web, which rely on the different structures of Web pages and various designing modes of databases. However, some semantic meanings and relations are ignored. So, in this paper, we present an approach for post-processing deep Web query results based on domain ontology which can utilize the semantic meanings and relations. A block identification model (BIM) based on node similarity is defined to extract data blocks that are relevant to a specific domain after reducing noisy nodes. A feature vector of domain books is obtained by a result set extraction model (RSEM) based on the vector space model (VSM). RSEM, in combination with BIM, builds the domain ontology on books, which can not only remove the limit of Web page structures when extracting data information, but also make use of the semantic meanings of the domain ontology. After extracting basic information of Web pages, a ranking algorithm is adopted to offer an ordered list of data records to users. Experimental results show that BIM and RSEM extract data blocks and build the domain ontology accurately. In addition, relevant data records and basic information are extracted and ranked. The precision and recall results show that our proposed method is feasible and efficient.
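    Both BIM and RSEM build on vector-space similarity; the snippet below shows the generic TF-IDF and cosine-similarity computation that underlies such relevance scoring of page blocks against a domain profile. It illustrates the VSM building block only, not the paper's BIM or RSEM models, and the sample blocks are invented.

```python
# Generic vector-space-model relevance scoring with TF-IDF and cosine similarity;
# this illustrates the VSM building block, not the paper's BIM/RSEM models.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

blocks = [
    "Deep Learning, Ian Goodfellow, MIT Press, 2016, hardcover",
    "Site navigation: home | products | contact us",
    "Pattern Recognition and Machine Learning, Christopher Bishop, Springer",
]
domain_profile = ["book title author publisher year machine learning"]

vec = TfidfVectorizer()
block_vectors = vec.fit_transform(blocks)
profile_vector = vec.transform(domain_profile)

# Rank page blocks by similarity to the domain profile; low scores ~ noisy nodes.
scores = cosine_similarity(block_vectors, profile_vector).ravel()
for block, score in sorted(zip(blocks, scores), key=lambda p: -p[1]):
    print(f"{score:.3f}  {block}")
```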

  18. An Intelligent Web Digital Image Metadata Service Platform for Social Curation Commerce Environment

    Directory of Open Access Journals (Sweden)

    Seong-Yong Hong

    2015-01-01

    Full Text Available Information management includes multimedia data management, knowledge management, collaboration, and agents, all of which are supporting technologies for XML. XML technologies have an impact on multimedia databases as well as collaborative technologies and knowledge management. That is, e-commerce documents are encoded in XML and are gaining much popularity for business-to-business or business-to-consumer transactions. Recently, the internet sites, such as e-commerce sites and shopping mall sites, deal with a lot of image and multimedia information. This paper proposes an intelligent web digital image information retrieval platform, which adopts XML technology for social curation commerce environment. To support object-based content retrieval on product catalog images containing multiple objects, we describe multilevel metadata structures representing the local features, global features, and semantics of image data. To enable semantic-based and content-based retrieval on such image data, we design an XML-Schema for the proposed metadata. We also describe how to automatically transform the retrieval results into the forms suitable for the various user environments, such as web browser or mobile device, using XSLT. The proposed scheme can be utilized to enable efficient e-catalog metadata sharing between systems, and it will contribute to the improvement of the retrieval correctness and the user’s satisfaction on semantic-based web digital image information retrieval.

  19. Deep learning architectures for multi-label classification of intelligent health risk prediction.

    Science.gov (United States)

    Maxwell, Andrew; Li, Runzhi; Yang, Bei; Weng, Heng; Ou, Aihua; Hong, Huixiao; Zhou, Zhaoxian; Gong, Ping; Zhang, Chaoyang

    2017-12-28

    Multi-label classification of data remains a challenging problem. Because of the complexity of the data, it is sometimes difficult to infer information about classes that are not mutually exclusive. For medical data, patients could have symptoms of multiple different diseases at the same time and it is important to develop tools that help to identify problems early. Intelligent health risk prediction models built with deep learning architectures offer a powerful tool for physicians to identify patterns in patient data that indicate risks associated with certain types of chronic diseases. Physical examination records of 110,300 anonymous patients were used to predict diabetes, hypertension, fatty liver, a combination of these three chronic diseases, and the absence of disease (8 classes in total). The dataset was split into training (90%) and testing (10%) sub-datasets. Ten-fold cross validation was used to evaluate prediction accuracy with metrics such as precision, recall, and F-score. Deep Learning (DL) architectures were compared with standard and state-of-the-art multi-label classification methods. Preliminary results suggest that Deep Neural Networks (DNN), a DL architecture, when applied to multi-label classification of chronic diseases, produced accuracy that was comparable to that of common methods such as Support Vector Machines. We have implemented DNNs to handle both problem transformation and algorithm adaptation type multi-label methods and compare both to see which is preferable. Deep Learning architectures have the potential of inferring more information about the patterns of physical examination data than common classification methods. The advanced techniques of Deep Learning can be used to identify the significance of different features from physical examination data as well as to learn the contributions of each feature that impact a patient's risk for chronic diseases. However, accurate prediction of chronic disease risks remains a challenging
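    The study contrasts problem-transformation and algorithm-adaptation multi-label methods; the sketch below shows the problem-transformation route (binary relevance via one-vs-rest) with micro-averaged precision, recall and F1 in scikit-learn. The synthetic features and label sets are stand-ins, not the physical-examination data, and the classifier is a simple baseline rather than the study's DNN.

```python
# Problem-transformation multi-label baseline (binary relevance) with
# micro-averaged precision/recall/F1; synthetic data stands in for exam records.
import numpy as np
from sklearn.multiclass import OneVsRestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import MultiLabelBinarizer
from sklearn.model_selection import train_test_split
from sklearn.metrics import precision_score, recall_score, f1_score

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 12))                     # 12 synthetic exam features
label_sets = [rng.choice(["diabetes", "hypertension", "fatty_liver"],
                         size=rng.integers(0, 3), replace=False) for _ in range(500)]

mlb = MultiLabelBinarizer()
Y = mlb.fit_transform(label_sets)                  # one binary column per disease

X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, test_size=0.1, random_state=0)
clf = OneVsRestClassifier(LogisticRegression(max_iter=1000)).fit(X_tr, Y_tr)
Y_hat = clf.predict(X_te)

for name, fn in [("precision", precision_score), ("recall", recall_score), ("F1", f1_score)]:
    print(name, round(fn(Y_te, Y_hat, average="micro", zero_division=0), 3))
```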

  20. Deep Learning-Based Noise Reduction Approach to Improve Speech Intelligibility for Cochlear Implant Recipients.

    Science.gov (United States)

    Lai, Ying-Hui; Tsao, Yu; Lu, Xugang; Chen, Fei; Su, Yu-Ting; Chen, Kuang-Chao; Chen, Yu-Hsuan; Chen, Li-Ching; Po-Hung Li, Lieber; Lee, Chin-Hui

    2018-01-20

    We investigate the clinical effectiveness of a novel deep learning-based noise reduction (NR) approach under noisy conditions with challenging noise types at low signal to noise ratio (SNR) levels for Mandarin-speaking cochlear implant (CI) recipients. The deep learning-based NR approach used in this study consists of two modules: noise classifier (NC) and deep denoising autoencoder (DDAE), thus termed (NC + DDAE). In a series of comprehensive experiments, we conduct qualitative and quantitative analyses on the NC module and the overall NC + DDAE approach. Moreover, we evaluate the speech recognition performance of the NC + DDAE NR and classical single-microphone NR approaches for Mandarin-speaking CI recipients under different noisy conditions. The testing set contains Mandarin sentences corrupted by two types of maskers, two-talker babble noise, and a construction jackhammer noise, at 0 and 5 dB SNR levels. Two conventional NR techniques and the proposed deep learning-based approach are used to process the noisy utterances. We qualitatively compare the NR approaches by the amplitude envelope and spectrogram plots of the processed utterances. Quantitative objective measures include (1) normalized covariance measure to test the intelligibility of the utterances processed by each of the NR approaches; and (2) speech recognition tests conducted by nine Mandarin-speaking CI recipients. These nine CI recipients use their own clinical speech processors during testing. The experimental results of objective evaluation and listening test indicate that under challenging listening conditions, the proposed NC + DDAE NR approach yields higher intelligibility scores than the two compared classical NR techniques, under both matched and mismatched training-testing conditions. When compared to the two well-known conventional NR techniques under challenging listening condition, the proposed NC + DDAE NR approach has superior noise suppression capabilities and gives less distortion

  1. Applying artificial intelligence to disease staging: Deep learning for improved staging of diabetic retinopathy.

    Science.gov (United States)

    Takahashi, Hidenori; Tampo, Hironobu; Arai, Yusuke; Inoue, Yuji; Kawashima, Hidetoshi

    2017-01-01

    Disease staging involves the assessment of disease severity or progression and is used for treatment selection. In diabetic retinopathy, disease staging using a wide area is more desirable than that using a limited area. We investigated if deep learning artificial intelligence (AI) could be used to grade diabetic retinopathy and determine treatment and prognosis. The retrospective study analyzed 9,939 posterior pole photographs of 2,740 patients with diabetes. Nonmydriatic 45° field color fundus photographs were taken of four fields in each eye annually at Jichi Medical University between May 2011 and June 2015. A modified fully randomly initialized GoogLeNet deep learning neural network was trained on 95% of the photographs using manual modified Davis grading of three additional adjacent photographs. We graded 4,709 of the 9,939 posterior pole fundus photographs using real prognoses. In addition, 95% of the photographs were learned by the modified GoogLeNet. Main outcome measures were prevalence and bias-adjusted Fleiss' kappa (PABAK) of AI staging of the remaining 5% of the photographs. The PABAK to modified Davis grading was 0.64 (accuracy, 81%; correct answer in 402 of 496 photographs). The PABAK to real prognosis grading was 0.37 (accuracy, 96%). We propose a novel AI disease-staging system for grading diabetic retinopathy that involves a retinal area not typically visualized on fundoscopy and another AI that directly suggests treatments and determines prognoses.
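    PABAK, the agreement statistic reported above, depends only on the observed agreement once prevalence and bias are adjusted away; for k categories it is (k * p_o - 1) / (k - 1). A minimal computation is shown below with invented gradings, not the study's data.

```python
# Prevalence- and bias-adjusted kappa (PABAK): with k categories it depends
# only on the observed agreement p_o: PABAK = (k * p_o - 1) / (k - 1).
def pabak(ratings_a, ratings_b, k):
    agree = sum(a == b for a, b in zip(ratings_a, ratings_b))
    p_o = agree / len(ratings_a)
    return (k * p_o - 1) / (k - 1)

# Example with 3 severity grades assigned by the AI and by manual grading (invented).
ai     = [0, 1, 2, 1, 0, 2, 2, 1, 0, 1]
manual = [0, 1, 2, 2, 0, 2, 1, 1, 0, 1]
print(round(pabak(ai, manual, k=3), 2))   # 8/10 agreement gives PABAK = 0.7
```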

  2. Deep Web Search Interface Identification: A Semi-Supervised Ensemble Approach

    Directory of Open Access Journals (Sweden)

    Hong Wang

    2014-12-01

    Full Text Available To surface the Deep Web, one crucial task is to predict whether a given web page has a search interface (searchable HyperText Markup Language (HTML) form) or not. Previous studies have focused on supervised classification with labeled examples. However, labeled data are scarce, hard to get and require tedious manual work, while unlabeled HTML forms are abundant and easy to obtain. In this research, we consider the plausibility of using both labeled and unlabeled data to train better models to identify search interfaces more effectively. We present a semi-supervised co-training ensemble learning approach using both neural networks and decision trees to deal with the search interface identification problem. We show that the proposed model outperforms previous methods using only labeled data. We also show that adding unlabeled data improves the effectiveness of the proposed model.
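    The record describes a co-training ensemble of neural networks and decision trees; the sketch below illustrates the basic co-training loop on synthetic data with scikit-learn, in a simplified form where both learners see all features (classic co-training splits the features into two views) and each adds its most confident pseudo-labels to the labeled pool.

```python
# Simplified co-training loop (illustrative, synthetic data): two different
# learners exchange their most confident pseudo-labels on unlabeled examples.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=600, n_features=20, random_state=0)
labeled = np.zeros(len(y), dtype=bool)
labeled[:40] = True                                  # start from 40 labeled pages
pseudo_y = np.full(len(y), -1)
pseudo_y[labeled] = y[labeled]

models = [MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000, random_state=0),
          DecisionTreeClassifier(max_depth=5, random_state=0)]

for _ in range(5):                                   # a few co-training rounds
    for model in models:
        model.fit(X[labeled], pseudo_y[labeled])
        rest = np.where(~labeled)[0]
        if rest.size == 0:
            break
        proba = model.predict_proba(X[rest])
        sure = proba.max(axis=1) > 0.95              # keep only confident predictions
        pseudo_y[rest[sure]] = model.classes_[proba.argmax(axis=1)[sure]]
        labeled[rest[sure]] = True

print("labeled examples after co-training:", labeled.sum())
print("neural net accuracy on all data:", round(models[0].score(X, y), 3))
```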

  3. Bioaccumulation of tributyltin and triphenyltin compounds through the food web in deep offshore water

    OpenAIRE

    KONO, Kumiko; MINAMI, Takashi; YAMADA, Hisashi; TANAKA, Hiroyuki; KOYAMA, Jiro

    2008-01-01

    Concentrations of tributyltin (TBT) and triphenyltin (TPT) compounds were determined in bottom seawater, sediments, and organisms of various trophic levels in the marine benthic food web in the Sea of Japan to clarify how the bioaccumulation patterns of TBT and TPT in the deep-sea ecosystem differ. TBT was detected in all samples: 0.3-0.8 ng/l in bottom seawater, 4.4-16 ng/g dry wt in sediment, and 1.8-240 ng/g dry wt in various organisms. TBT and TPT concentrations were lower in bottom seawa...

  4. Deep neural networks: A promising tool for fault characteristic mining and intelligent diagnosis of rotating machinery with massive data

    Science.gov (United States)

    Jia, Feng; Lei, Yaguo; Lin, Jing; Zhou, Xin; Lu, Na

    2016-05-01

    Aiming to promptly process the massive fault data and automatically provide accurate diagnosis results, numerous studies have been conducted on intelligent fault diagnosis of rotating machinery. Among these studies, the methods based on artificial neural networks (ANNs) are commonly used, which employ signal processing techniques for extracting features and further input the features to ANNs for classifying faults. Though these methods did work in intelligent fault diagnosis of rotating machinery, they still have two deficiencies. (1) The features are manually extracted depending on much prior knowledge about signal processing techniques and diagnostic expertise. In addition, these manual features are extracted according to a specific diagnosis issue and probably unsuitable for other issues. (2) The ANNs adopted in these methods have shallow architectures, which limits the capacity of ANNs to learn the complex non-linear relationships in fault diagnosis issues. As a breakthrough in artificial intelligence, deep learning holds the potential to overcome the aforementioned deficiencies. Through deep learning, deep neural networks (DNNs) with deep architectures, instead of shallow ones, could be established to mine the useful information from raw data and approximate complex non-linear functions. Based on DNNs, a novel intelligent method is proposed in this paper to overcome the deficiencies of the aforementioned intelligent diagnosis methods. The effectiveness of the proposed method is validated using datasets from rolling element bearings and planetary gearboxes. These datasets contain massive measured signals involving different health conditions under various operating conditions. The diagnosis results show that the proposed method is able to not only adaptively mine available fault characteristics from the measured signals, but also obtain superior diagnosis accuracy compared with the existing methods.

  5. Biomagnification of persistent organic pollutants in a deep-sea, temperate food web.

    Science.gov (United States)

    Romero-Romero, Sonia; Herrero, Laura; Fernández, Mario; Gómara, Belén; Acuña, José Luis

    2017-12-15

    Polychlorinated biphenyls (PCBs), polybrominated diphenyl ethers (PBDEs) and polychlorinated dibenzo-p-dioxins and -furans (PCDD/Fs) were measured in a temperate, deep-sea ecosystem, the Avilés submarine Canyon (AC; Cantabrian Sea, Southern Bay of Biscay). There was an increase of contaminant concentration with the trophic level of the organisms, as calculated from stable nitrogen isotope data (δ15N). Such biomagnification was only significant for the pelagic food web and its magnitude was highly dependent on the type of top predators included in the analysis. The trophic magnification factor (TMF) for PCB-153 in the pelagic food web (spanning four trophic levels) was 6.2 or 2.2, depending on whether homeotherm top predators (cetaceans and seabirds) were included or not in the analysis, respectively. Since body size is significantly correlated with δ15N, it can be used as a proxy to estimate trophic magnification, which can potentially lead to a simple and convenient method to calculate the TMF. In spite of their lower biomagnification, deep-sea fishes showed higher concentrations than their shallower counterparts, although those differences were not significant. In summary, the AC fauna exhibits contaminant levels comparable to or lower than those reported in other systems. Copyright © 2017 Elsevier B.V. All rights reserved.
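    The TMF reported above is conventionally obtained by regressing log-transformed concentrations on trophic level estimated from δ15N; a common form of that calculation is sketched below with invented numbers. The baseline δ15N, the 3.4 per mil enrichment per trophic level and the example values are assumptions, not the study's data.

```python
# Conventional TMF calculation: TL = 2 + (d15N_consumer - d15N_baseline) / 3.4,
# regress log10(concentration) on TL, then TMF = 10**slope.
# Values below are invented for illustration; they are not the study's data.
import numpy as np

d15n = np.array([6.0, 8.5, 10.2, 12.9, 15.1])         # per mil, consumers
conc = np.array([0.8, 1.9, 4.5, 11.0, 30.0])           # e.g. PCB-153, ng/g
d15n_baseline = 6.0                                     # primary consumer baseline

trophic_level = 2 + (d15n - d15n_baseline) / 3.4
slope, intercept = np.polyfit(trophic_level, np.log10(conc), 1)
tmf = 10 ** slope
print(f"slope = {slope:.2f}, TMF = {tmf:.1f}")
```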

  6. Intelligent Networks Data Fusion Web-based Services for Ad-hoc Integrated WSNs-RFID

    Directory of Open Access Journals (Sweden)

    Falah Alshahrany

    2016-01-01

    Full Text Available The use of a variety of data fusion tools and techniques for big data processing poses the problem of data and information integration, called data fusion, whose objectives can differ from one application to another. The design of network data fusion systems aimed at meeting these objectives needs to take into account the necessary synergy that can result from distributed data processing within the data networks and data centres, involving increased computation and communication. This paper reports on how this processing distribution is functionally structured as configurable integrated web-based support services, in the context of an ad-hoc wireless sensor network used for sensing and tracking, and of distributed detection based on complete observations to support real-time decision making. The interrelated functional and hardware RFID-WSN integration is an essential aspect of the data fusion framework that focuses on multi-sensor collaboration as an innovative approach to extend the heterogeneity of the devices and sensor nodes of ad-hoc networks generating a huge amount of heterogeneous soft and hard raw data. The deployment and configuration of these networks require data fusion processing that includes network and service management and enhances the performance and reliability of network data fusion support systems providing intelligent capabilities for real-time access control and fire detection.

  7. Food web flows through a sub-arctic deep-sea benthic community

    Science.gov (United States)

    Gontikaki, E.; van Oevelen, D.; Soetaert, K.; Witte, U.

    2011-11-01

    The benthic food web of the deep Faroe-Shetland Channel (FSC) was modelled by using the linear inverse modelling methodology. The reconstruction of carbon pathways by inverse analysis was based on benthic oxygen uptake rates, biomass data and transfer of labile carbon through the food web as revealed by a pulse-chase experiment. Carbon deposition was estimated at 2.2 mmol C m⁻² d⁻¹. Approximately 69% of the deposited carbon was respired by the benthic community, with bacteria being responsible for 70% of the total respiration. The major fraction of the labile detritus flux was recycled within the microbial loop, leaving merely 2% of the deposited labile phytodetritus available for metazoan consumption. Bacteria assimilated carbon at high efficiency (0.55) but only 24% of bacterial production was grazed by metazoans; the remainder returned to the dissolved organic matter pool due to viral lysis. Refractory detritus was the basal food resource for nematodes, covering ∼99% of their carbon requirements. On the contrary, macrofauna seemed to obtain the major part of their metabolic needs from bacteria (49% of macrofaunal consumption). Labile detritus transfer was well-constrained, based on the data from the pulse-chase experiment, but appeared to be of limited importance to the diet of the examined benthic organisms (preferred prey, in this case, was other macrofaunal animals rather than nematodes). Bacteria and detritus contributed 53% and 12% to the total carbon ingestion of carnivorous polychaetes, suggesting a high degree of omnivory among higher consumers in the FSC benthic food web. Overall, this study provided a unique insight into the functioning of a deep-sea benthic community and demonstrated how conventional data can be exploited further when combined with state-of-the-art modelling approaches.

  8. New in protein structure and function annotation: hotspots, single nucleotide polymorphisms and the 'Deep Web'.

    Science.gov (United States)

    Bromberg, Yana; Yachdav, Guy; Ofran, Yanay; Schneider, Reinhard; Rost, Burkhard

    2009-05-01

    The rapidly increasing quantity of protein sequence data continues to widen the gap between available sequences and annotations. Comparative modeling suggests some aspects of the 3D structures of approximately half of all known proteins; homology- and network-based inferences annotate some aspect of function for a similar fraction of the proteome. For most known protein sequences, however, there is detailed knowledge about neither their function nor their structure. Comprehensive efforts towards the expert curation of sequence annotations have failed to meet the demand of the rapidly increasing number of available sequences. Only the automated prediction of protein function in the absence of homology can close the gap between available sequences and annotations in the foreseeable future. This review focuses on two novel methods for automated annotation, and briefly presents an outlook on how modern web software may revolutionize the field of protein sequence annotation. First, predictions of protein binding sites and functional hotspots, and the evolution of these into the most successful type of prediction of protein function from sequence will be discussed. Second, a new tool, comprehensive in silico mutagenesis, which contributes important novel predictions of function and at the same time prepares for the onset of the next sequencing revolution, will be described. While these two new sub-fields of protein prediction represent the breakthroughs that have been achieved methodologically, it will then be argued that a different development might further change the way biomedical researchers benefit from annotations: modern web software can connect the worldwide web in any browser with the 'Deep Web' (ie, proprietary data resources). The availability of this direct connection, and the resulting access to a wealth of data, may impact drug discovery and development more than any existing method that contributes to protein annotation.

  9. Radiologic diagnosis of bone tumours using Webonex, a web-based artificial intelligence program

    International Nuclear Information System (INIS)

    Rasuli, P.; Rasouli, F.; Rasouli, T.

    2001-01-01

    A knowledge-based system is a decision support system in which an expert's knowledge and reasoning can be applied to problems in bounded knowledge domains. These systems, using knowledge and inference techniques, mimic human reasoning to solve problems. Knowledge-based systems are said to be 'intelligent' because they possess massive stores of information and exhibit many attributes commonly associated with human experts performing difficult tasks and using specialized knowledge and sophisticated problem-solving strategies. Knowledge-based systems differ from conventional software such as database systems in that they are able to reason about data and draw conclusions employing heuristic rules. Heuristics embody human expertise in some knowledge domain and are sometimes characterized as the 'rules of thumb' that one acquires through practical experience and uses to solve everyday problems. Knowledge-based systems have been developed in a variety of fields, including medical disciplines. Decision support systems have been assisting clinicians in areas such as infectious disease therapy for many years. For example, these systems can help radiologists formulate and evaluate diagnostic hypotheses by recalling associations between diseases and imaging findings. Although radiologic technology relies heavily on computers, it has been slow to develop knowledge-based systems to aid in diagnosis. These systems can be valuable interactive educational tools for medical students. In 1992, we developed the DOS-based Bonex, a menu-driven expert system for the differential diagnosis of bone tumours using PDC Prolog. It was a rule-based expert system that led the user through a menu of questions and generated a hard copy report and a list of diagnoses with an estimate of the likelihood of each. Bonex was presented at the 1992 Annual Meeting of the Radiological Society of North America (RSNA) in Chicago. We also developed an expert system for the differential diagnosis of brain lesions.
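    Webonex and Bonex are rule-based systems that map imaging findings to a ranked list of diagnoses with likelihood estimates; the toy sketch below shows the general rule-scoring pattern of such systems. The findings, rules and weights are invented for illustration and are not the Webonex knowledge base.

```python
# Toy rule-based scoring in the general style of a diagnostic expert system;
# findings, rules and weights are invented, not Webonex's actual knowledge base.
RULES = {
    "diagnosis_A": {"age_10_25": 2, "metaphyseal_location": 2, "aggressive_periosteal_reaction": 3},
    "diagnosis_B": {"age_20_40": 2, "epiphyseal_location": 3, "eccentric_lytic_lesion": 2},
    "diagnosis_C": {"age_10_25": 1, "metaphyseal_location": 1, "central_lytic_lesion": 2},
}

def rank_diagnoses(findings):
    scores = {d: sum(w for f, w in rules.items() if f in findings)
              for d, rules in RULES.items()}
    total = sum(scores.values()) or 1
    # Normalize to a rough "likelihood" per diagnosis, as a report might list.
    return sorted(((d, s / total) for d, s in scores.items()), key=lambda p: -p[1])

for diagnosis, likelihood in rank_diagnoses(
        {"age_10_25", "metaphyseal_location", "aggressive_periosteal_reaction"}):
    print(f"{diagnosis:12s} {likelihood:.0%}")
```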

  10. Radiologic diagnosis of bone tumours using Webonex, a web-based artificial intelligence program

    Energy Technology Data Exchange (ETDEWEB)

    Rasuli, P. [Univ. of Ottawa, Dept. of Radiology, Ottawa Hospital, Ottawa, Ontario (Canada); Rasouli, F. [Research, Development and Engineering Center, PMUSA, Richmond, VA (United States); Rasouli, T. [Johns Hopkins Univ., Dept. of Cognitive Science, Baltimore, Maryland (United States)

    2001-08-01

    A knowledge-based system is a decision support system in which an expert's knowledge and reasoning can be applied to problems in bounded knowledge domains. These systems, using knowledge and inference techniques, mimic human reasoning to solve problems. Knowledge-based systems are said to be 'intelligent' because they possess massive stores of information and exhibit many attributes commonly associated with human experts performing difficult tasks and using specialized knowledge and sophisticated problem-solving strategies. Knowledge-based systems differ from conventional software such as database systems in that they are able to reason about data and draw conclusions employing heuristic rules. Heuristics embody human expertise in some knowledge domain and are sometimes characterized as the 'rules of thumb' that one acquires through practical experience and uses to solve everyday problems. Knowledge-based systems have been developed in a variety of fields, including medical disciplines. Decision support systems have been assisting clinicians in areas such as infectious disease therapy for many years. For example, these systems can help radiologists formulate and evaluate diagnostic hypotheses by recalling associations between diseases and imaging findings. Although radiologic technology relies heavily on computers, it has been slow to develop knowledge-based systems to aid in diagnosis. These systems can be valuable interactive educational tools for medical students. In 1992, we developed the DOS-based Bonex, a menu-driven expert system for the differential diagnosis of bone tumours using PDC Prolog. It was a rule-based expert system that led the user through a menu of questions and generated a hard copy report and a list of diagnoses with an estimate of the likelihood of each. Bonex was presented at the 1992 Annual Meeting of the Radiological Society of North America (RSNA) in Chicago. We also developed an expert system for the differential

  11. ComplexContact: a web server for inter-protein contact prediction using deep learning

    KAUST Repository

    Zeng, Hong; Wang, Sheng; Zhou, Tianming; Zhao, Feifeng; Li, Xiufeng; Wu, Qing; Xu, Jinbo

    2018-01-01

    ComplexContact (http://raptorx2.uchicago.edu/ComplexContact/) is a web server for sequence-based interfacial residue-residue contact prediction of a putative protein complex. Interfacial residue-residue contacts are critical for understanding how proteins form a complex and interact at the residue level. When receiving a pair of protein sequences, ComplexContact first searches for their sequence homologs and builds two paired multiple sequence alignments (MSAs), then it applies co-evolution analysis and a CASP-winning deep learning (DL) method to predict interfacial contacts from the paired MSAs and visualizes the prediction as an image. The DL method was originally developed for intra-protein contact prediction and performed the best in CASP12. Our large-scale experimental test further shows that ComplexContact greatly outperforms pure co-evolution methods for inter-protein contact prediction, regardless of the species.
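
    ComplexContact's pipeline (paired MSA construction followed by a CASP-winning deep network) cannot be reproduced here; the following Python sketch only illustrates the first ingredient in a toy form: sequences of the two proteins are paired by species into a concatenated alignment and a naive mutual-information coupling is computed for inter-protein column pairs. All sequences and species labels are invented, and a real co-evolution analysis would add pseudocounts, APC correction and much deeper alignments.

    import math
    from collections import Counter

    # Toy paired MSA: sequences of protein A and protein B from the same species
    # are concatenated row-wise (all data here is invented for illustration).
    msa_a = {"sp1": "MKV", "sp2": "MRV", "sp3": "MKI", "sp4": "MRI"}
    msa_b = {"sp1": "DEA", "sp2": "DQA", "sp3": "DEA", "sp4": "DQA"}

    paired = [msa_a[s] + msa_b[s] for s in msa_a if s in msa_b]

    def mutual_information(column_i, column_j):
        """Naive MI between two alignment columns (no pseudocounts, no APC)."""
        n = len(column_i)
        pi, pj = Counter(column_i), Counter(column_j)
        pij = Counter(zip(column_i, column_j))
        mi = 0.0
        for (a, b), count in pij.items():
            p_ab = count / n
            mi += p_ab * math.log(p_ab / ((pi[a] / n) * (pj[b] / n)))
        return mi

    len_a = len(next(iter(msa_a.values())))
    cols = list(zip(*paired))          # columns of the concatenated alignment
    # Score every inter-protein column pair (i in protein A, j in protein B).
    for i in range(len_a):
        for j in range(len_a, len(cols)):
            print(f"A:{i + 1} - B:{j - len_a + 1}  MI = {mutual_information(cols[i], cols[j]):.3f}")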

  12. ComplexContact: a web server for inter-protein contact prediction using deep learning

    KAUST Repository

    Zeng, Hong

    2018-05-20

    ComplexContact (http://raptorx2.uchicago.edu/ComplexContact/) is a web server for sequence-based interfacial residue-residue contact prediction of a putative protein complex. Interfacial residue-residue contacts are critical for understanding how proteins form a complex and interact at the residue level. When receiving a pair of protein sequences, ComplexContact first searches for their sequence homologs and builds two paired multiple sequence alignments (MSAs), then it applies co-evolution analysis and a CASP-winning deep learning (DL) method to predict interfacial contacts from the paired MSAs and visualizes the prediction as an image. The DL method was originally developed for intra-protein contact prediction and performed the best in CASP12. Our large-scale experimental test further shows that ComplexContact greatly outperforms pure co-evolution methods for inter-protein contact prediction, regardless of the species.

  13. ComplexContact: a web server for inter-protein contact prediction using deep learning.

    Science.gov (United States)

    Zeng, Hong; Wang, Sheng; Zhou, Tianming; Zhao, Feifeng; Li, Xiufeng; Wu, Qing; Xu, Jinbo

    2018-05-22

    ComplexContact (http://raptorx2.uchicago.edu/ComplexContact/) is a web server for sequence-based interfacial residue-residue contact prediction of a putative protein complex. Interfacial residue-residue contacts are critical for understanding how proteins form a complex and interact at the residue level. When receiving a pair of protein sequences, ComplexContact first searches for their sequence homologs and builds two paired multiple sequence alignments (MSAs), then it applies co-evolution analysis and a CASP-winning deep learning (DL) method to predict interfacial contacts from the paired MSAs and visualizes the prediction as an image. The DL method was originally developed for intra-protein contact prediction and performed the best in CASP12. Our large-scale experimental test further shows that ComplexContact greatly outperforms pure co-evolution methods for inter-protein contact prediction, regardless of the species.

  14. Intelligence

    Science.gov (United States)

    Sternberg, Robert J.

    2012-01-01

    Intelligence is the ability to learn from experience and to adapt to, shape, and select environments. Intelligence as measured by (raw scores on) conventional standardized tests varies across the lifespan, and also across generations. Intelligence can be understood in part in terms of the biology of the brain—especially with regard to the functioning in the prefrontal cortex—and also correlates with brain size, at least within humans. Studies of the effects of genes and environment suggest that the heritability coefficient (ratio of genetic to phenotypic variation) is between .4 and .8, although heritability varies as a function of socioeconomic status and other factors. Racial differences in measured intelligence have been observed, but race is a socially constructed rather than biological variable, so such differences are difficult to interpret. PMID:22577301

  15. Intelligence.

    Science.gov (United States)

    Sternberg, Robert J

    2012-03-01

    Intelligence is the ability to learn from experience and to adapt to, shape, and select environments. Intelligence as measured by (raw scores on) conventional standardized tests varies across the lifespan, and also across generations. Intelligence can be understood in part in terms of the biology of the brain-especially with regard to the functioning in the prefrontal cortex-and also correlates with brain size, at least within humans. Studies of the effects of genes and environment suggest that the heritability coefficient (ratio of genetic to phenotypic variation) is between .4 and .8, although heritability varies as a function of socioeconomic status and other factors. Racial differences in measured intelligence have been observed, but race is a socially constructed rather than biological variable, so such differences are difficult to interpret.

  16. FUDAOWANG: A Web-Based Intelligent Tutoring System Implementing Advanced Education Concepts

    Science.gov (United States)

    Xu, Wei; Zhao, Ke; Li, Yatao; Yi, Zhenzhen

    2012-01-01

    Determining how to provide good tutoring functions is an important research direction of intelligent tutoring systems. In this study, the authors develop an intelligent tutoring system with good tutoring functions, called "FUDAOWANG." The research domain that FUDAOWANG treats is junior middle school mathematics, which belongs to the objective…

  17. Potential and Challenges of Web-based Collective Intelligence to Tackle Societal Problems

    Directory of Open Access Journals (Sweden)

    Birutė Pitrėnaitė-Žilėnienė

    2014-03-01

    Full Text Available Purpose – to investigate the conditions and challenges for collective intelligence (hereinafter CI), i.e., CI emerging through the application of social technologies, to tackle societal problems. Several objectives were set in order to achieve this goal: to analyze the scientific concepts of CI and their contents; to summarize the possibilities and challenges of applying CI in large-scale online argumentation; and, following theoretical attitudes towards CI, to analyze Lithuanian practice in the application of CI technologies in large-scale online argumentation. Methodology – the methods of document analysis and content analysis of virtual community projects were applied. Theoretical analysis enabled recognition of the CI phenomenon and the variety of interpretations of CI, as well as the preconditions and difficulties that must be tackled in order to ensure effective application of CI technologies in the design of different policies and/or in societal problem solving. With the theoretical analysis as a base, the authors investigated how the theoretical frameworks correspond to the practices of Lithuanian virtual community projects, which are oriented towards the identification and analysis of relevant problems that communities are facing. Findings – analysis of the scientific documents demonstrates the variety of possible interpretations of CI. Such interpretations depend on the researcher's attitude towards the phenomenon: some authors explain CI in a very broad sense that does not include the aspects of social technologies. However, in recent decades, with the emergence of the Internet, social technologies have become a concurrent dimension of CI. The main principles of Web-based CI are geographically dispersed users and a large number of them. Materialization of these principles ensures the variety of elements needed for CI to emerge. There are diverse web-based media where CI is being developed; however, not all of them ensure collective action, which is obligatory for CI. Researchers have analyzed

  18. PredMP: A Web Resource for Computationally Predicted Membrane Proteins via Deep Learning

    KAUST Repository

    Wang, Sheng

    2018-02-06

    Experimental determination of membrane protein (MP) structures is challenging as they are often too large for nuclear magnetic resonance (NMR) experiments and difficult to crystallize. Currently there are only about 510 non-redundant MPs with solved structures in Protein Data Bank (PDB). To elucidate the MP structures computationally, we developed a novel web resource, denoted as PredMP (http://52.87.130.56:3001/#/proteinindex), that delivers one-dimensional (1D) annotation of the membrane topology and secondary structure, two-dimensional (2D) prediction of the contact/distance map, together with three-dimensional (3D) modeling of the MP structure in the lipid bilayer, for each MP target from a given model organism. The precision of the computationally constructed MP structures is leveraged by state-of-the-art deep learning methods as well as cutting-edge modeling strategies. In particular, (i) we annotate 1D property via DeepCNF (Deep Convolutional Neural Fields) that not only models complex sequence-structure relationship but also interdependency between adjacent property labels; (ii) we predict 2D contact/distance map through Deep Transfer Learning which learns the patterns as well as the complex relationship between contacts/distances and protein features from non-membrane proteins; and (iii) we model 3D structure by feeding its predicted contacts and secondary structure to the Crystallography & NMR System (CNS) suite combined with a membrane burial potential that is residue-specific and depth-dependent. PredMP currently contains more than 2,200 multi-pass transmembrane proteins (length<700 residues) from Human. These transmembrane proteins are classified according to IUPHAR/BPS Guide, which provides a hierarchical organization of receptors, channels, transporters, enzymes and other drug targets according to their molecular relationships and physiological functions. Among these MPs, we estimated that our approach could predict correct folds for 1

  19. Hospital-based nurses' perceptions of the adoption of Web 2.0 tools for knowledge sharing, learning, social interaction and the production of collective intelligence.

    Science.gov (United States)

    Lau, Adela S M

    2011-11-11

    Web 2.0 provides a platform or a set of tools such as blogs, wikis, really simple syndication (RSS), podcasts, tags, social bookmarks, and social networking software for knowledge sharing, learning, social interaction, and the production of collective intelligence in a virtual environment. Web 2.0 is also becoming increasingly popular in e-learning and e-social communities. The objectives were to investigate how Web 2.0 tools can be applied for knowledge sharing, learning, social interaction, and the production of collective intelligence in the nursing domain and to investigate what behavioral perceptions are involved in the adoption of Web 2.0 tools by nurses. The decomposed technology acceptance model was applied to construct the research model on which the hypotheses were based. A questionnaire was developed based on the model and data from nurses (n = 388) were collected from late January 2009 until April 30, 2009. Pearson's correlation analysis and t tests were used for data analysis. Intention toward using Web 2.0 tools was positively correlated with usage behavior (r = .60, P Web 2.0 tools and enable them to better plan the strategy of implementation of Web 2.0 tools for knowledge sharing, learning, social interaction, and the production of collective intelligence.

  20. Hospital-Based Nurses’ Perceptions of the Adoption of Web 2.0 Tools for Knowledge Sharing, Learning, Social Interaction and the Production of Collective Intelligence

    Science.gov (United States)

    2011-01-01

    Background Web 2.0 provides a platform or a set of tools such as blogs, wikis, really simple syndication (RSS), podcasts, tags, social bookmarks, and social networking software for knowledge sharing, learning, social interaction, and the production of collective intelligence in a virtual environment. Web 2.0 is also becoming increasingly popular in e-learning and e-social communities. Objectives The objectives were to investigate how Web 2.0 tools can be applied for knowledge sharing, learning, social interaction, and the production of collective intelligence in the nursing domain and to investigate what behavioral perceptions are involved in the adoption of Web 2.0 tools by nurses. Methods The decomposed technology acceptance model was applied to construct the research model on which the hypotheses were based. A questionnaire was developed based on the model and data from nurses (n = 388) were collected from late January 2009 until April 30, 2009. Pearson’s correlation analysis and t tests were used for data analysis. Results Intention toward using Web 2.0 tools was positively correlated with usage behavior (r = .60, P Web 2.0 tools and enable them to better plan the strategy of implementation of Web 2.0 tools for knowledge sharing, learning, social interaction, and the production of collective intelligence. PMID:22079851

  1. Development of an asynchronous communication channel between wireless sensor nodes, smartphone devices, and web applications using RESTful Web Services for intelligent farming

    Science.gov (United States)

    De Leon, Marlene M.; Estuar, Maria Regina E.; Lim, Hadrian Paulo; Victorino, John Noel C.; Co, Jerelyn; Saddi, Ivan Lester; Paelmo, Sharlene Mae; Dela Cruz, Bon Lemuel

    2017-09-01

    Environment- and agriculture-related applications have been gaining ground for the past several years and have been the context for research in ubiquitous and pervasive computing. This study is part of a bigger study that uses artificial intelligence in developing models to detect, monitor, and forecast the spread of Fusarium oxysporum cubense TR4 (FOC TR4) on Cavendish bananas cultivated in the Philippines. To implement an Intelligent Farming system, 1) wireless sensor nodes (WSNs) are deployed in Philippine banana plantations to collect soil parameter data that is considered to affect the health of Cavendish bananas, 2) a custom-built smartphone application is used for collecting, storing, and transmitting soil data, plant images and plant status data to a cloud storage, and 3) a custom-built web application is used to load and display results of physico-chemical analysis of soil, analysis of data models, and geographic locations of plants being monitored. This study discusses the issues, considerations, and solutions implemented in the development of an asynchronous communication channel to ensure that all data collected by WSNs and smartphone applications are transmitted with a high degree of accuracy and reliability. From a design standpoint, standard API documentation on the usage of data types is required to avoid inconsistencies in parameter passing. From a technical standpoint, there is a need to include error-handling mechanisms, especially for delays in data transmission, as well as a generalized method of parsing through multidimensional arrays of data. Strategies are presented in the paper.
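
    The study's actual API is not documented in the abstract, so the endpoint, payload fields and retry policy in the following Python sketch are hypothetical; it merely illustrates one way to post a sensor reading over a RESTful channel with a timeout, retries and simple back-off, which is the kind of error handling the authors call for.

    import time
    import requests

    # Hypothetical endpoint and payload schema (not the API used in the study).
    API_URL = "https://example.org/api/soil-readings"

    def post_reading(reading, retries=3, timeout=5.0, backoff=2.0):
        """POST one sensor reading; retry on timeouts or transient server errors."""
        for attempt in range(1, retries + 1):
            try:
                resp = requests.post(API_URL, json=reading, timeout=timeout)
                if resp.status_code < 500:
                    resp.raise_for_status()   # 4xx means a client-side problem; do not retry
                    return resp.json()
            except (requests.Timeout, requests.ConnectionError):
                pass                          # transient network problem; retry below
            time.sleep(backoff * attempt)     # simple linear back-off between attempts
        raise RuntimeError("reading could not be delivered after %d attempts" % retries)

    if __name__ == "__main__":
        sample = {"node_id": "wsn-01", "soil_moisture": 0.31,
                  "soil_ph": 6.2, "timestamp": "2017-09-01T08:00:00Z"}
        print(post_reading(sample))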

  2. Applying Adaptive Swarm Intelligence Technology with Structuration in Web-Based Collaborative Learning

    Science.gov (United States)

    Huang, Yueh-Min; Liu, Chien-Hung

    2009-01-01

    One of the key challenges in the promotion of web-based learning is the development of effective collaborative learning environments. We posit that the structuration process strongly influences the effectiveness of technology used in web-based collaborative learning activities. In this paper, we propose an ant swarm collaborative learning (ASCL)…

  3. Intelligent Web-Based Learning System with Personalized Learning Path Guidance

    Science.gov (United States)

    Chen, C. M.

    2008-01-01

    Personalized curriculum sequencing is an important research issue for web-based learning systems because no fixed learning paths will be appropriate for all learners. Therefore, many researchers focused on developing e-learning systems with personalized learning mechanisms to assist on-line web-based learning and adaptively provide learning paths…

  4. The Potential Transformative Impact of Web 2.0 Technology on the Intelligence Community

    Science.gov (United States)

    2008-12-01

    wikis, mashups and folksonomies. As the web is considered a platform, web 2.0 lacks concrete boundaries; instead, it possesses a gravitational core. Folksonomy: the practice and method of collaboratively creating and managing tags to annotate and categorize content

  5. Design and development of an IoT-based web application for an intelligent remote SCADA system

    Science.gov (United States)

    Kao, Kuang-Chi; Chieng, Wei-Hua; Jeng, Shyr-Long

    2018-03-01

    This paper presents a design of an intelligent remote electrical power supervisory control and data acquisition (SCADA) system based on the Internet of Things (IoT), with Internet Information Services (IIS) for setting up web servers, an ASP.NET model-view-controller (MVC) for establishing a remote electrical power monitoring and control system by using responsive web design (RWD), and a Microsoft SQL Server as the database. With the web browser connected to the Internet, the sensing data is sent to the client by using the TCP/IP protocol, and responsive web design supports mobile devices with different screen sizes. The users can provide instructions immediately without being present to check the conditions, which considerably reduces labor and time costs. The developed system incorporates a remote measuring function by using a wireless sensor network and utilizes a visual interface to make the human-machine interface (HMI) more intuitive. Moreover, it contains an analog input/output and a basic digital input/output that can be applied to a motor driver and an inverter for integration with a remote SCADA system based on IoT, and thus achieve efficient power management.

  6. Pro deep learning with TensorFlow a mathematical approach to advanced artificial intelligence in Python

    CERN Document Server

    Pattanayak, Santanu

    2017-01-01

    Deploy deep learning solutions in production with ease using TensorFlow. You'll also develop the mathematical understanding and intuition required to invent new deep learning architectures and solutions on your own. Pro Deep Learning with TensorFlow provides practical, hands-on expertise so you can learn deep learning from scratch and deploy meaningful deep learning solutions. This book will allow you to get up to speed quickly using TensorFlow and to optimize different deep learning architectures. All of the practical aspects of deep learning that are relevant in any industry are emphasized in this book. You will be able to use the prototypes demonstrated to build new deep learning applications. The code presented in the book is available in the form of iPython notebooks and scripts which allow you to try out examples and extend them in interesting ways. You will be equipped with the mathematical foundation and scientific knowledge to pursue research in this field and give back to the community.

  7. International Conference on Computational Intelligence 2015

    CERN Document Server

    Saha, Sujan

    2017-01-01

    This volume comprises the proceedings of the International Conference on Computational Intelligence 2015 (ICCI15). This book aims to bring together work from leading academicians, scientists, researchers and research scholars from across the globe on all aspects of computational intelligence. The work is composed mainly of original and unpublished results of conceptual, constructive, empirical, experimental, or theoretical work in all areas of computational intelligence. Specifically, the major topics covered include classical computational intelligence models and artificial intelligence, neural networks and deep learning, evolutionary swarm and particle algorithms, hybrid systems optimization, constraint programming, human-machine interaction, computational intelligence for web analytics, robotics, computational neurosciences, neurodynamics, bioinspired and biomorphic algorithms, cross-disciplinary topics and applications. The contents of this volume will be of use to researchers and professionals alike....

  8. The Social Semantic Web in Intelligent Learning Environments: State of the Art and Future Challenges

    Science.gov (United States)

    Jovanovic, Jelena; Gasevic, Dragan; Torniai, Carlo; Bateman, Scott; Hatala, Marek

    2009-01-01

    Today's technology-enhanced learning practices cater to students and teachers who use many different learning tools and environments and are used to a paradigm of interaction derived from open, ubiquitous, and socially oriented services. In this context, a crucial issue for education systems in general, and for Intelligent Learning Environments…

  9. Polite Web-Based Intelligent Tutors: Can They Improve Learning in Classrooms?

    Science.gov (United States)

    McLaren, Bruce M.; DeLeeuw, Krista E.; Mayer, Richard E.

    2011-01-01

    Should an intelligent software tutor be polite, in an effort to motivate and cajole students to learn, or should it use more direct language? If it should be polite, under what conditions? In a series of studies in different contexts (e.g., lab versus classroom) with a variety of students (e.g., low prior knowledge versus high prior knowledge),…

  10. Ubiquitous Computing Services Discovery and Execution Using a Novel Intelligent Web Services Algorithm

    Science.gov (United States)

    Choi, Okkyung; Han, SangYong

    2007-01-01

    Ubiquitous Computing makes it possible to determine in real time the location and situations of service requesters in a web service environment as it enables access to computers at any time and in any place. Though research on various aspects of ubiquitous commerce is progressing at enterprises and research centers, both domestically and overseas, analysis of a customer's personal preferences based on the semantic web and rule-based services using semantics is not currently being conducted. This paper proposes a Ubiquitous Computing Services System that enables a rule-based search as well as a semantics-based search, reflecting the fact that the electronic space and the physical space can be combined into one, so that real-time search for web services and the construction of efficient web services become possible.

  11. Implementation of E-Service Intelligence in the Field of Web Mining

    OpenAIRE

    PROF. MS. S. P. SHINDE; PROF. V. P. DESHMUKH

    2011-01-01

    The World Wide Web is a popular and interactive medium for disseminating information today. The web is a huge, diverse, dynamic and widely distributed global information service centre. We are familiar with terms like e-commerce, e-governance, e-market, e-finance, e-learning, e-banking etc. These terms come under online services called e-service applications. E-services involve various types of delivery systems, advanced information technologies, methodologies and applications of online services....

  12. Students in a Teacher College of Education Develop Educational Programs and Activities Related to Intelligent Use of the Web: Cultivating New Knowledge

    Science.gov (United States)

    Wadmany, Rivka; Zeichner, Orit; Melamed, Orly

    2014-01-01

    Students in a teacher training college in Israel have developed and taught curricula on the intelligent use of the Web. The educational programs were based on activities thematically related to the world of digital citizenship, such as the rights of the child and the Internet, identity theft, copyrights, freedom of expression and its limitations,…

  13. Intelligible Artificial Intelligence

    OpenAIRE

    Weld, Daniel S.; Bansal, Gagan

    2018-01-01

    Since Artificial Intelligence (AI) software uses techniques like deep lookahead search and stochastic optimization of huge neural networks to fit mammoth datasets, it often results in complex behavior that is difficult for people to understand. Yet organizations are deploying AI algorithms in many mission-critical settings. In order to trust their behavior, we must make it intelligible --- either by using inherently interpretable models or by developing methods for explaining otherwise overwh...

  14. Intelligent Access to Sequence and Structure Databases (IASSD) - an interface for accessing information from major web databases.

    Science.gov (United States)

    Ganguli, Sayak; Gupta, Manoj Kumar; Basu, Protip; Banik, Rahul; Singh, Pankaj Kumar; Vishal, Vineet; Bera, Abhisek Ranjan; Chakraborty, Hirak Jyoti; Das, Sasti Gopal

    2014-01-01

    With the advent of the age of big data and advances in high-throughput technology, accessing data has become one of the most important steps in the entire knowledge discovery process. Most users are not able to decipher the query result that is obtained when non-specific keywords or a combination of keywords are used. Intelligent Access to Sequence and Structure Databases (IASSD) is a desktop application for the Windows operating system. It is written in Java and utilizes the Web Service Description Language (WSDL) files and JAR files of the E-utilities of various databases such as the National Centre for Biotechnology Information (NCBI) and the Protein Data Bank (PDB). Apart from that, IASSD allows the user to view protein structures using a JMOL application, which supports conditional editing. The JAR file is freely available through e-mail from the corresponding author.

  15. Deep Brain Stimulation of the Subthalamic Nucleus Parameter Optimization for Vowel Acoustics and Speech Intelligibility in Parkinson's Disease

    Science.gov (United States)

    Knowles, Thea; Adams, Scott; Abeyesekera, Anita; Mancinelli, Cynthia; Gilmore, Greydon; Jog, Mandar

    2018-01-01

    Purpose: The settings of 3 electrical stimulation parameters were adjusted in 12 speakers with Parkinson's disease (PD) with deep brain stimulation of the subthalamic nucleus (STN-DBS) to examine their effects on vowel acoustics and speech intelligibility. Method: Participants were tested under permutations of low, mid, and high STN-DBS frequency,…

  16. A COMPARATIVE ANALYSIS OF WEB INFORMATION EXTRACTION TECHNIQUES DEEP LEARNING vs. NAÏVE BAYES vs. BACK PROPAGATION NEURAL NETWORKS IN WEB DOCUMENT EXTRACTION

    OpenAIRE

    J. Sharmila; A. Subramani

    2016-01-01

    Research related to web mining is becoming more essential these days because a large amount of information is managed through the web. Web usage is expanding in an uncontrolled way, and a dedicated framework is required for managing such a large amount of information in the web space. Web mining is divided into three major areas: web content mining, web usage mining and web structure mining. Tak-Lam Wong has proposed a web content mining methodolog...

  17. TRSDL: Tag-Aware Recommender System Based on Deep Learning–Intelligent Computing Systems

    Directory of Open Access Journals (Sweden)

    Nan Liang

    2018-05-01

    Full Text Available In recommender systems (RS), many models are designed to predict ratings of items for the target user. To improve the performance for rating prediction, some studies have introduced tags into recommender systems. Tags benefit RS considerably; however, they are also redundant and ambiguous. In this paper, we propose a hybrid deep learning model TRSDL (tag-aware recommender system based on deep learning) to improve the performance of tag-aware recommender systems (TRS). First, TRSDL uses pre-trained word embeddings to represent user-defined tags, and constructs item and user profiles based on the items' tags set and users' tagging behaviors. Then, it utilizes deep neural networks (DNNs) and recurrent neural networks (RNNs) to extract the latent features of items and users, respectively. Finally, it predicts ratings from these latent features. The model not only addresses tag limitations and takes advantage of semantic tag information but also learns more advanced implicit features via deep structures. We evaluated our proposed approach and several baselines on MovieLens-20M, and the experimental results demonstrate that TRSDL significantly outperforms all the baselines (including the state-of-the-art models BiasedMF and I-AutoRec). In addition, we also explore the impacts of network depth and type on model performance.
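
    As a hedged sketch of the general tag-aware idea (not the TRSDL architecture, which combines pre-trained word embeddings with DNNs and RNNs), the following PyTorch snippet embeds user and item tags, averages them into profiles, and regresses ratings with a small MLP; the vocabulary size, layer sizes and toy data are invented.

    import torch
    import torch.nn as nn

    # Simplified tag-aware rating predictor: a sketch inspired by the TRSDL idea,
    # not the authors' architecture (tag vocabulary, sizes and layers are invented).
    class TagAwareRecommender(nn.Module):
        def __init__(self, n_tags, emb_dim=32):
            super().__init__()
            self.tag_emb = nn.Embedding(n_tags, emb_dim)
            self.mlp = nn.Sequential(
                nn.Linear(2 * emb_dim, 64), nn.ReLU(), nn.Linear(64, 1))

        def forward(self, user_tags, item_tags):
            # Average the tag embeddings to obtain user and item profiles.
            u = self.tag_emb(user_tags).mean(dim=1)
            i = self.tag_emb(item_tags).mean(dim=1)
            return self.mlp(torch.cat([u, i], dim=1)).squeeze(-1)

    # Toy training loop on random data, just to show the model runs end to end.
    model = TagAwareRecommender(n_tags=100)
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    user_tags = torch.randint(0, 100, (8, 5))   # 8 users, 5 tags each
    item_tags = torch.randint(0, 100, (8, 5))   # 8 items, 5 tags each
    ratings = torch.rand(8) * 5                 # ratings in [0, 5)
    for _ in range(10):
        opt.zero_grad()
        loss = nn.functional.mse_loss(model(user_tags, item_tags), ratings)
        loss.backward()
        opt.step()
    print("final MSE:", loss.item())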

  18. Deep Learning for Drug Design: an Artificial Intelligence Paradigm for Drug Discovery in the Big Data Era.

    Science.gov (United States)

    Jing, Yankang; Bian, Yuemin; Hu, Ziheng; Wang, Lirong; Xie, Xiang-Qun Sean

    2018-03-30

    Over the last decade, deep learning (DL) methods have been extremely successful and widely used to develop artificial intelligence (AI) in almost every domain, especially after they achieved their proud record on computational Go. Compared to traditional machine learning (ML) algorithms, DL methods still have a long way to go to achieve recognition in small-molecule drug discovery and development, and there is still much work to do for the popularization and application of DL for research purposes, e.g., for small-molecule drug research and development. In this review, we mainly discuss several of the most powerful and mainstream architectures, including the convolutional neural network (CNN), recurrent neural network (RNN), and deep auto-encoder networks (DAENs), for supervised and unsupervised learning; summarize most of the representative applications in small-molecule drug design; and briefly introduce how DL methods were used in those applications. The pros and cons of DL methods, as well as the main challenges we need to tackle, are also emphasized.

  19. Food web transport of trace metals and radionuclides from the deep sea: a review

    International Nuclear Information System (INIS)

    Young, J.S.

    1979-06-01

    This report summarizes aspects of the potential distribution pathways of metals and radionuclides, particularly Co and Ni, through a biological trophic framework after their deposition at 4000 to 5000 meters in the North Atlantic or North Pacific. It discusses (a) the basic, deep-sea trophic structure of eutrophic and oligotrophic regions; (b) the transport pathways of biologically available energy to and from the deep sea, pathways that may act as accumulators and vectors of radionuclide distribution, and (c) distribution routes that have come into question as potential carriers of radionuclides from the deep-sea bed to man

  20. Science.Gov - A single gateway to the deep web knowledge of U.S. science agencies

    International Nuclear Information System (INIS)

    Hitson, B.A.

    2004-01-01

    The impact of science and technology on our daily lives is easily demonstrated. From new drug discoveries, to new and more efficient energy sources, to the incorporation of new technologies into business and industry, the productive applications of R and D are innumerable. The possibility of creating such applications depends most heavily on the availability of one resource: knowledge. Knowledge must be shared for scientific progress to occur. In the past, the ability to share knowledge electronically has been limited by the 'deep Web' nature of scientific databases and the lack of technology to simultaneously search disparate and decentralized information collections. U.S. science agencies invest billions of dollars each year on basic and applied research and development projects. To make the collective knowledge from this R and D more easily accessible and searchable, 12 science agencies collaborated to develop Science.gov - a single, searchable gateway to the deep Web knowledge of U.S. science agencies. This paper will describe Science.gov and its contribution to nuclear knowledge management. (author)
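
    Science.gov federates a single query across many agency databases rather than crawling them. The following Python sketch illustrates that fan-out-and-merge pattern in a purely schematic way; the sources and their contents are placeholders, not real agency APIs, and no relevance ranking is attempted.

    from concurrent.futures import ThreadPoolExecutor

    # Conceptual sketch of federated ("deep web") search: fan a query out to
    # several collections in parallel and merge the results. The sources and
    # their contents are invented placeholders.
    def make_source(name, records):
        def search(query):
            return [(name, r) for r in records if query.lower() in r.lower()]
        return search

    SOURCES = [
        make_source("energy-db",  ["Deep web gateway for energy research", "Reactor safety report"]),
        make_source("health-db",  ["Deep learning for radiology", "Clinical trials registry"]),
        make_source("physics-db", ["Neutrino detection methods", "Deep inelastic scattering data"]),
    ]

    def federated_search(query):
        with ThreadPoolExecutor(max_workers=len(SOURCES)) as pool:
            result_lists = pool.map(lambda s: s(query), SOURCES)
        return [hit for hits in result_lists for hit in hits]

    for source, title in federated_search("deep"):
        print(f"[{source}] {title}")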

  1. Bacterioplankton communities of Crater Lake, OR: Dynamic changes with euphotic zone food web structure and stable deep water populations

    Science.gov (United States)

    Urbach, E.; Vergin, K.L.; Larson, G.L.; Giovannoni, S.J.

    2007-01-01

    The distribution of bacterial and archaeal species in Crater Lake plankton varies dramatically over depth and with time, as assessed by hybridization of group-specific oligonucleotides to RNA extracted from lakewater. Nonmetric, multidimensional scaling (MDS) analysis of relative bacterial phylotype densities revealed complex relationships among assemblages sampled from depth profiles in July, August and September of 1997 through 1999. CL500-11 green nonsulfur bacteria (Phylum Chloroflexi) and marine Group I crenarchaeota are consistently dominant groups in the oxygenated deep waters at 300 and 500 m. Other phylotypes found in the deep waters are similar to surface and mid-depth populations and vary with time. Euphotic zone assemblages are dominated either by ??-proteobacteria or CL120-10 verrucomicrobia, and ACK4 actinomycetes. MDS analyses of euphotic zone populations in relation to environmental variables and phytoplankton and zooplankton population structures reveal apparent links between Daphnia pulicaria zooplankton population densities and microbial community structure. These patterns may reflect food web interactions that link kokanee salmon population densities to community structure of the bacterioplankton, via fish predation on Daphnia with cascading consequences to Daphnia bacterivory and predation on bacterivorous protists. These results demonstrate a stable bottom-water microbial community. They also extend previous observations of food web-driven changes in euphotic zone bacterioplankton community structure to an oligotrophic setting. © 2007 Springer Science+Business Media B.V.

  2. Effects of internal phosphorus loadings and food-web structure on the recovery of a deep lake from eutrophication

    Science.gov (United States)

    Lepori, Fabio; Roberts, James J.

    2017-01-01

    We used monitoring data from Lake Lugano (Switzerland and Italy) to assess key ecosystem responses to three decades of nutrient management (1983–2014). We investigated whether reductions in external phosphorus loadings (Lext) caused declines in lake phosphorus concentrations (P) and phytoplankton biomass (Chl a), as assumed by the predictive models that underpinned the management plan. Additionally, we examined the hypothesis that deep lakes respond quickly to Lext reductions. During the study period, nutrient management reduced Lext by approximately a half. However, the effects of such reduction on P and Chl a were complex. Far from the scenarios predicted by classic nutrient-management approaches, the responses of P and Chl a did not only reflect changes in Lext, but also variation in internal P loadings (Lint) and food-web structure. In turn, Lint varied depending on basin morphometry and climatic effects, whereas food-web structure varied due to apparently stochastic events of colonization and near-extinction of key species. Our results highlight the complexity of the trajectory of deep-lake ecosystems undergoing nutrient management. From an applied standpoint, they also suggest that [i] the recovery of warm monomictic lakes may be slower than expected due to the development of Lint, and that [ii] classic P and Chl a models based on Lext may be useful in nutrient management programs only if their predictions are used as starting points within adaptive frameworks.

  3. Use of a Deep Recurrent Neural Network to Reduce Wind Noise: Effects on Judged Speech Intelligibility and Sound Quality

    Science.gov (United States)

    Keshavarzi, Mahmoud; Goehring, Tobias; Zakis, Justin; Turner, Richard E.; Moore, Brian C. J.

    2018-01-01

    Despite great advances in hearing-aid technology, users still experience problems with noise in windy environments. The potential benefits of using a deep recurrent neural network (RNN) for reducing wind noise were assessed. The RNN was trained using recordings of the output of the two microphones of a behind-the-ear hearing aid in response to male and female speech at various azimuths in the presence of noise produced by wind from various azimuths with a velocity of 3 m/s, using the “clean” speech as a reference. A paired-comparison procedure was used to compare all possible combinations of three conditions for subjective intelligibility and for sound quality or comfort. The conditions were unprocessed noisy speech, noisy speech processed using the RNN, and noisy speech that was high-pass filtered (which also reduced wind noise). Eighteen native English-speaking participants were tested, nine with normal hearing and nine with mild-to-moderate hearing impairment. Frequency-dependent linear amplification was provided for the latter. Processing using the RNN was significantly preferred over no processing by both subject groups for both subjective intelligibility and sound quality, although the magnitude of the preferences was small. High-pass filtering (HPF) was not significantly preferred over no processing. Although RNN was significantly preferred over HPF only for sound quality for the hearing-impaired participants, for the results as a whole, there was a preference for RNN over HPF. Overall, the results suggest that reduction of wind noise using an RNN is possible and might have beneficial effects when used in hearing aids. PMID:29708061

  4. Use of a Deep Recurrent Neural Network to Reduce Wind Noise: Effects on Judged Speech Intelligibility and Sound Quality.

    Science.gov (United States)

    Keshavarzi, Mahmoud; Goehring, Tobias; Zakis, Justin; Turner, Richard E; Moore, Brian C J

    2018-01-01

    Despite great advances in hearing-aid technology, users still experience problems with noise in windy environments. The potential benefits of using a deep recurrent neural network (RNN) for reducing wind noise were assessed. The RNN was trained using recordings of the output of the two microphones of a behind-the-ear hearing aid in response to male and female speech at various azimuths in the presence of noise produced by wind from various azimuths with a velocity of 3 m/s, using the "clean" speech as a reference. A paired-comparison procedure was used to compare all possible combinations of three conditions for subjective intelligibility and for sound quality or comfort. The conditions were unprocessed noisy speech, noisy speech processed using the RNN, and noisy speech that was high-pass filtered (which also reduced wind noise). Eighteen native English-speaking participants were tested, nine with normal hearing and nine with mild-to-moderate hearing impairment. Frequency-dependent linear amplification was provided for the latter. Processing using the RNN was significantly preferred over no processing by both subject groups for both subjective intelligibility and sound quality, although the magnitude of the preferences was small. High-pass filtering (HPF) was not significantly preferred over no processing. Although RNN was significantly preferred over HPF only for sound quality for the hearing-impaired participants, for the results as a whole, there was a preference for RNN over HPF. Overall, the results suggest that reduction of wind noise using an RNN is possible and might have beneficial effects when used in hearing aids.
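
    The study's RNN, microphone configuration and training corpus cannot be reconstructed from the abstract alone; the following Python sketch only illustrates the general idea of RNN-based noise reduction: a small GRU estimates a time-frequency gain mask from a noisy magnitude spectrogram and is trained against a clean reference. All shapes, layer sizes and the toy data are assumptions.

    import torch
    import torch.nn as nn

    # Minimal sketch of an RNN-based noise-reduction stage: a GRU predicts a
    # time-frequency gain mask from the noisy magnitude spectrogram.
    class MaskEstimator(nn.Module):
        def __init__(self, n_freq=129, hidden=64):
            super().__init__()
            self.gru = nn.GRU(n_freq, hidden, num_layers=2, batch_first=True)
            self.out = nn.Linear(hidden, n_freq)

        def forward(self, noisy_mag):                 # (batch, frames, freq)
            h, _ = self.gru(noisy_mag)
            return torch.sigmoid(self.out(h))         # gain mask in [0, 1]

    model = MaskEstimator()
    noisy = torch.rand(2, 100, 129)                   # toy spectrograms
    clean = noisy * 0.6                               # stand-in "clean" target
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    for _ in range(5):
        opt.zero_grad()
        enhanced = model(noisy) * noisy               # apply the estimated mask
        loss = nn.functional.mse_loss(enhanced, clean)
        loss.backward()
        opt.step()
    print("training loss:", loss.item())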

  5. Food web functioning of the benthopelagic community in a deep-sea seamount based on diet and stable isotope analyses

    Science.gov (United States)

    Preciado, Izaskun; Cartes, Joan E.; Punzón, Antonio; Frutos, Inmaculada; López-López, Lucía; Serrano, Alberto

    2017-03-01

    Trophic interactions in the deep-sea fish community of the Galicia Bank seamount (NE Atlantic) were inferred by using stomach contents analyses (SCA) and stable isotope analyses (SIA) of 27 fish species and their main prey items. Samples were collected during three surveys performed in 2009, 2010 and 2011 between 625 and 1800 m depth. Three main trophic guilds were determined using SCA data: pelagic, benthopelagic and benthic feeders, respectively. Vertically migrating macrozooplankton and meso-bathypelagic shrimps were identified to play a key role as pelagic prey for the deep sea fish community of the Galicia Bank. Habitat overlap was hardly detected; as a matter of fact, when species coexisted most of them evidenced a low dietary overlap, indicating a high degree of resource partitioning. A high potential competition, however, was observed among benthopelagic feeders, i.e.: Etmopterus spinax, Hoplostethus mediterraneus and Epigonus telescopus. A significant correlation was found between δ15N and δ13C for all the analysed species. When calculating Trophic Levels (TLs) for the main fish species, using both the SCA and SIA approaches, some discrepancies arose: TLs calculated from SIA were significantly higher than those obtained from SCA, probably indicating a higher consumption of benthic-suprabenthic prey in the previous months. During the summer, food web functioning in the Galicia Bank was more influenced by the assemblages dwelling in the water column than by deep-sea benthos, which was rather scarce in the summer samples. These discrepancies demonstrate the importance of using both approaches, SCA (snapshot of diet) and SIA (assimilated food in previous months), when attempting trophic studies, if an overview of food web dynamics in different compartments of the ecosystem is to be obtained.

  6. Predicting IVF Outcome: A Proposed Web-based System Using Artificial Intelligence.

    Science.gov (United States)

    Siristatidis, Charalampos; Vogiatzi, Paraskevi; Pouliakis, Abraham; Trivella, Marialenna; Papantoniou, Nikolaos; Bettocchi, Stefano

    2016-01-01

    To propose a functional in vitro fertilization (IVF) prediction model to assist clinicians in tailoring personalized treatment of subfertile couples and improve assisted reproduction outcomes. Construction and evaluation of an enhanced web-based system with a novel Artificial Neural Network (ANN) architecture and input and output parameters conforming to clinical and bibliographical standards, driven by a complete data set and "trained" by a network expert in an IVF setting. The system is capable of acting as a routine information technology platform for the IVF unit and of recalling and evaluating a vast amount of information in a rapid and automated manner to provide an objective indication of the outcome of an artificial reproductive cycle. ANNs are an exceptional candidate for providing the fertility specialist with numerical estimates to promote personalization of healthcare and adaptation of the course of treatment according to the indications. Copyright © 2016 International Institute of Anticancer Research (Dr. John G. Delinassios), All rights reserved.
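
    The paper's ANN architecture and clinical input set are not given in the abstract, so the sketch below is only an illustration of the general approach: a small multilayer perceptron trained to predict a binary cycle outcome from a handful of hypothetical clinical features, here on synthetic data.

    import numpy as np
    from sklearn.neural_network import MLPClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.preprocessing import StandardScaler
    from sklearn.pipeline import make_pipeline

    # Illustrative ANN outcome classifier on synthetic data; the feature set and
    # network size are invented, not the paper's configuration.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 6))           # e.g. age, hormone levels, BMI, ... (hypothetical)
    y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=500) > 0).astype(int)

    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    clf = make_pipeline(StandardScaler(),
                        MLPClassifier(hidden_layer_sizes=(16, 8), max_iter=1000,
                                      random_state=0))
    clf.fit(X_train, y_train)
    print("held-out accuracy:", clf.score(X_test, y_test))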

  7. A new intelligent algorithm to create a profile for user based on web interactions

    Directory of Open Access Journals (Sweden)

    Zeinab khademali

    2013-04-01

    Full Text Available This paper presents a method to classify a web user's navigation patterns automatically. The proposed model classifies the user's navigation patterns and predicts his/her upcoming requirements. To create users' profiles, a new method is introduced that records the user's active settings and measures the user's similarity with neighboring users. The proposed model is capable of creating the profile implicitly and updates it as changes occur. In fact, we try to improve the functioning of the recommender engine using the user's navigation patterns and clustering. The method is based on the user's navigation patterns and is able to present the results of the recommender engine according to the user's requirements and interests. In addition, this method can help customize websites more efficiently.
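
    The abstract does not specify the clustering algorithm, so the following Python sketch is only a hedged illustration of the general idea: each user session is represented as a visit-count vector and sessions are grouped into navigation-pattern profiles with k-means. The page categories, counts and number of clusters are invented.

    import numpy as np
    from sklearn.cluster import KMeans

    # Toy sketch: represent each user session as a visit-count vector over page
    # categories and cluster sessions to obtain navigation-pattern profiles.
    pages = ["home", "search", "product", "cart", "help"]
    sessions = np.array([
        [3, 1, 0, 0, 2],    # help-oriented visitor
        [1, 4, 5, 2, 0],    # shopping-oriented visitor
        [0, 5, 6, 3, 0],
        [4, 0, 0, 0, 3],
    ])

    kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(sessions)
    for label, session in zip(kmeans.labels_, sessions):
        print(f"profile {label}: {dict(zip(pages, session.tolist()))}")

    # A recommender could then suggest the pages most visited by the new user's cluster.
    new_user = np.array([[0, 4, 4, 1, 0]])
    print("assigned profile:", kmeans.predict(new_user)[0])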

  8. Ontology-driven education: Teaching anatomy with intelligent 3D games on the web

    Science.gov (United States)

    Nilsen, Trond

    Human anatomy is a challenging and intimidating subject whose understanding is essential to good medical practice, taught primarily using a combination of lectures and the dissection of human cadavers. Lectures are cheap and scalable, but do a poor job of teaching spatial understanding, whereas dissection lets students experience the body's interior first-hand, but is expensive, cannot be repeated, and is often imperfect. Educational games and online learning activities have the potential to supplement these teaching methods in a cheap and relatively effective way, but they are difficult for educators to customize for particular curricula and lack the tutoring support that human instructors provide. I present an approach to the creation of learning activities for anatomy called ontology-driven education, in which the Foundational Model of Anatomy, an ontological representation of knowledge about anatomy, is leveraged to generate educational content, model student knowledge, and support learning activities and games in a configurable web-based educational framework for anatomy.

  9. Mobile Tourist Guide – An Intelligent Wireless System to Improve Tourism, using Semantic Web

    Directory of Open Access Journals (Sweden)

    Hosam El-Sofany

    2011-10-01

    Full Text Available With the recent advances in Internet and mobile technologies, there are increasing demands for electronic access to tourist information systems for service coordination and process integration. Mobile computing and mobile devices are used to implement various tourist services (e.g. electronic tourist guides, digital interactive maps, and tourist e-commerce transactions). However, due to disparate tourist information and service resources such as airlines, hotels, and tour operators, it is still difficult for tourists to use them effectively during their trips or even in the planning stage. Neither can current tourist portals assist tourists proactively. To overcome this problem, we propose the analysis, design, and implementation of the "Mobile Tourist Guide" system, which is accessed through wireless devices and uses Semantic Web technologies for the effective organization of information resources and service processes. The proposed system provides the users with various services such as: (1) displaying the shortest path between the sources and destinations the visitors specify, (2) displaying general information about shops and the newest events of the plaza and shops, (3) providing hotel, restaurant and cinema-ticket reservation services, and (4) providing a user-friendly administration service. The Admin can manage the position, blocking path details, general information of hotel, restaurant, shops and plaza, and reservation details via a web browser without changing the framework of the system. The system prototype has been developed on top of Java 2 Micro Edition, which offers an ideal platform for the development of full-fledged, interactive and portable applications tailored for resource-constrained mobile devices. The paper presents our development experiences and highlights the main advantages and limitations in relation to the implementation of such kinds of applications.

  10. Realizing the Promise of Web 2.0: Engaging Community Intelligence

    Science.gov (United States)

    HESSE, BRADFORD W.; O’CONNELL, MARY; AUGUSTSON, ERIK M.; CHOU, WEN-YING SYLVIA; SHAIKH, ABDUL R.; RUTTEN, LILA J. FINNEY

    2011-01-01

    Discussions of "Health 2.0," first coined in 2005, were guided by three main tenets: (a) health was to become more participatory, as an evolution in the Web encouraged consumers' more direct engagement in their own healthcare; (b) data was to become the new "Intel Inside" for systems supporting the "vital decisions" in health; and (c) a sense of "collective intelligence" from the network would supplement traditional sources of knowledge in health decision-making. Interest in understanding the implications of a new paradigm for patient engagement in health and healthcare was kindled by findings from surveys such as the National Cancer Institute's Health Information National Trends Survey (HINTS), showing that patients were quick to look online for information to help them cope with disease. This paper considers how these three facets of Health 2.0 (participation, data, and collective intelligence) can be harnessed to improve the health of the nation according to Healthy People 2020 goals. We begin with an examination of evidence from behavioral science to understand how Web 2.0 participative technologies may influence patient processes and outcomes, for better or worse, in an era of changing communication technologies. The paper then focuses specifically on the clinical implications of "Health 2.0" and offers recommendations to ensure that changes in the communication environment do not detract from national (e.g., Healthy People 2020) health goals. Changes in the clinical environment, as catalyzed by the Health Information Technology for Economic and Clinical Health (HITECH) Act to take advantage of Health 2.0 principles in evidence-based ways, are also considered. PMID:21843093

  11. Bio-AIMS Collection of Chemoinformatics Web Tools based on Molecular Graph Information and Artificial Intelligence Models.

    Science.gov (United States)

    Munteanu, Cristian R; Gonzalez-Diaz, Humberto; Garcia, Rafael; Loza, Mabel; Pazos, Alejandro

    2015-01-01

    The encoding of molecular information into molecular descriptors is the first step in in silico Chemoinformatics methods in Drug Design. Machine Learning methods offer a complex solution for finding prediction models for specific biological properties of molecules. These models connect molecular structure information, such as atom connectivity (molecular graphs) or the physical-chemical properties of an atom/group of atoms, to the molecular activity (Quantitative Structure - Activity Relationship, QSAR). Due to the complexity of proteins, the prediction of their activity is a complicated task and the interpretation of the models is more difficult. The current review presents a series of 11 prediction models for proteins, implemented as free Web tools on an Artificial Intelligence Model Server in Biosciences, Bio-AIMS (http://bio-aims.udc.es/TargetPred.php). Six tools predict protein activity, two models evaluate drug - protein target interactions and the other three calculate protein - protein interactions. The input information is based on the protein 3D structure for nine models, the 1D peptide amino acid sequence for three tools and drug SMILES formulas for two servers. The molecular graph descriptor-based Machine Learning models could be useful tools for the in silico screening of new peptides/proteins as future drug targets for specific treatments.

  12. Overview of Intelligent Power Controller Development for Human Deep Space Exploration

    Science.gov (United States)

    Soeder, James F.; Dever, Timothy P.; McNelis, Anne M.; Beach, Raymond F.; Trase, Larry M.; May, Ryan D.

    2014-01-01

    Intelligent or autonomous control of an entire spacecraft is a major technology that must be developed to enable NASA to meet its human exploration goals. NASA's current long-term human space platform, the International Space Station, is in low Earth orbit with almost continuous communication with ground-based mission control. This permits near real-time control by the ground of all of the core systems, including power. As NASA moves beyond low Earth orbit, the issues of communication time lag and lack of communication bandwidth beyond geosynchronous orbit do not permit this type of operation. This paper presents the work currently ongoing at NASA to develop an architecture for an autonomous power control system, as well as the effort to assemble that controller into the framework of the vehicle mission manager and other subsystem controllers to enable autonomous control of the complete spacecraft. Due to the common problems faced in both space power systems and terrestrial power systems, the potential spin-off applications of this technology for use in micro-grids located at the edge or user end of terrestrial power grids, for peak power accommodation and reliability, are described.

  13. Not Deep Learning but Autonomous Learning of Open Innovation for Sustainable Artificial Intelligence

    Directory of Open Access Journals (Sweden)

    JinHyo Joseph Yun

    2016-08-01

    Full Text Available What do we need for sustainable artificial intelligence that is not harmful but beneficial to human life? This paper builds up an interaction model between direct and autonomous learning from the human cognitive learning process and firms' open innovation process. It conceptually establishes a direct and autonomous learning interaction model. The key feature of this model is that the process of responding to inputs from external environments through interactions between autonomous learning and direct learning, and of rearranging internal knowledge, is incessant. When autonomous learning happens, the units of knowledge determination that arise from indirect learning are separated. They induce not only broad autonomous learning, made through horizontal combinations that surpass the combinations that occurred in direct learning, but also in-depth autonomous learning, made through vertical combinations that appear so that new knowledge is added. The core of the interaction model between direct and autonomous learning is the variability of the boundary between proven knowledge and hypothetical knowledge, limitations in knowledge accumulation, as well as complementarity and conflict between direct and autonomous learning. Therefore, these should be considered when introducing the interaction model between direct and autonomous learning into navigation systems, cleaning robots, search engines, etc. In addition, we should consider the relationship between direct learning and autonomous learning when building up open innovation strategies and policies.

  14. Learning Deep Visual Object Models From Noisy Web Data: How to Make it Work

    OpenAIRE

    Massouh, Nizar; Babiloni, Francesca; Tommasi, Tatiana; Young, Jay; Hawes, Nick; Caputo, Barbara

    2017-01-01

    Deep networks thrive when trained on large scale data collections. This has given ImageNet a central role in the development of deep architectures for visual object classification. However, ImageNet was created during a specific period in time, and as such it is prone to aging, as well as dataset bias issues. Moving beyond fixed training datasets will lead to more robust visual systems, especially when deployed on robots in new environments which must train on the objects they encounter there...

  15. PredMP: A Web Resource for Computationally Predicted Membrane Proteins via Deep Learning

    KAUST Repository

    Wang, Sheng; Fei, Shiyang; Zongan, Wang; Li, Yu; Zhao, Feng; Gao, Xin

    2018-01-01

    structures in Protein Data Bank (PDB). To elucidate the MP structures computationally, we developed a novel web resource, denoted as PredMP (http://52.87.130.56:3001/#/proteinindex), that delivers one-dimensional (1D) annotation of the membrane topology

  16. Linking shallow, Linking deep : how scientific intermediaries use the Web for their network of collaborators

    NARCIS (Netherlands)

    Vasileiadou, E.; Besselaar, van den P.

    2006-01-01

    In this paper we explore the possibility of using Web links to study collaborations between organisations, combining the results of qualitative analysis of interviews and quantitative analysis of linking patterns. We use case studies of scientific intermediaries, that is, organisations that mediate

  17. Speech Intelligibility Potential of General and Specialized Deep Neural Network Based Speech Enhancement Systems

    DEFF Research Database (Denmark)

    Kolbæk, Morten; Tan, Zheng-Hua; Jensen, Jesper

    2017-01-01

    In this paper, we study aspects of single microphone speech enhancement (SE) based on deep neural networks (DNNs). Specifically, we explore the generalizability capabilities of state-of-the-art DNN-based SE systems with respect to the background noise type, the gender of the target speaker...... general. Finally, we compare how a DNN-based SE system trained to be noise type general, speaker general, and SNR general performs relative to a state-of-the-art short-time spectral amplitude minimum mean square error (STSA-MMSE) based SE algorithm. We show that DNN-based SE systems, when trained...... a state-of-the-art STSA-MMSE based SE method, when tested using a range of unseen speakers and noise types. Finally, a listening test using several DNN-based SE systems tested in unseen speaker conditions show that these systems can improve SI for some SNR and noise type configurations but degrade SI...

  18. Intelligent Image Recognition System for Marine Fouling Using Softmax Transfer Learning and Deep Convolutional Neural Networks

    Directory of Open Access Journals (Sweden)

    C. S. Chin

    2017-01-01

    Full Text Available The control of biofouling on marine vessels is challenging and costly. Early detection before hull performance is significantly affected is desirable, especially if “grooming” is an option. Here, a system is described to detect marine fouling at an early stage of development. In this study, an image of fouling can be transferred wirelessly via a mobile network for analysis. The proposed system utilizes transfer learning and a deep convolutional neural network (CNN) to perform image recognition on the fouling image by classifying the detected fouling species and the density of fouling on the surface. Transfer learning using Google’s Inception V3 model with a Softmax final layer was carried out on a fouling database of 10 categories and 1825 images. Experimental results gave acceptable accuracies for fouling detection and recognition.
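    As an illustration of the kind of pipeline described above, here is a minimal transfer-learning sketch in Python/Keras (not the authors' exact system): an ImageNet-pretrained Inception V3 is frozen as a feature extractor and only a new Softmax layer is trained; the class count and the "fouling_images/" folder are assumptions.

    import tensorflow as tf

    NUM_CLASSES = 10  # assumption: one class per fouling category

    # Frozen ImageNet-pretrained Inception V3 used as a feature extractor
    base = tf.keras.applications.InceptionV3(
        weights="imagenet", include_top=False, pooling="avg",
        input_shape=(299, 299, 3))
    base.trainable = False

    # New trainable Softmax classification layer on top of the frozen features
    model = tf.keras.Sequential([
        base,
        tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])

    # "fouling_images/" is a hypothetical folder with one sub-folder per class
    train_ds = tf.keras.utils.image_dataset_from_directory(
        "fouling_images/", image_size=(299, 299), batch_size=32)
    # Apply the Inception V3 preprocessing expected by the pretrained weights
    train_ds = train_ds.map(
        lambda x, y: (tf.keras.applications.inception_v3.preprocess_input(x), y))
    model.fit(train_ds, epochs=5)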

  19. Pixel-Level Deep Segmentation: Artificial Intelligence Quantifies Muscle on Computed Tomography for Body Morphometric Analysis.

    Science.gov (United States)

    Lee, Hyunkwang; Troschel, Fabian M; Tajmir, Shahein; Fuchs, Georg; Mario, Julia; Fintelmann, Florian J; Do, Synho

    2017-08-01

    Pretreatment risk stratification is key for personalized medicine. While many physicians rely on an "eyeball test" to assess whether patients will tolerate major surgery or chemotherapy, "eyeballing" is inherently subjective and difficult to quantify. The concept of morphometric age derived from cross-sectional imaging has been found to correlate well with outcomes such as length of stay, morbidity, and mortality. However, the determination of the morphometric age is time intensive and requires highly trained experts. In this study, we propose a fully automated deep learning system for the segmentation of skeletal muscle cross-sectional area (CSA) on an axial computed tomography image taken at the third lumbar vertebra. We utilized a fully automated deep segmentation model derived from an extended implementation of a fully convolutional network with weight initialization of an ImageNet pre-trained model, followed by post processing to eliminate intramuscular fat for a more accurate analysis. This experiment was conducted by varying window level (WL), window width (WW), and bit resolutions in order to better understand the effects of the parameters on the model performance. Our best model, fine-tuned on 250 training images and ground truth labels, achieves 0.93 ± 0.02 Dice similarity coefficient (DSC) and 3.68 ± 2.29% difference between predicted and ground truth muscle CSA on 150 held-out test cases. Ultimately, the fully automated segmentation system can be embedded into the clinical environment to accelerate the quantification of muscle and expanded to volume analysis of 3D datasets.
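    The Dice similarity coefficient reported above can be computed directly from a predicted and a ground-truth binary mask; the following is a minimal sketch (the masks and values are illustrative, not the study's data).

    import numpy as np

    def dice(pred: np.ndarray, truth: np.ndarray) -> float:
        """DSC = 2|A intersect B| / (|A| + |B|) for binary masks A and B."""
        pred = pred.astype(bool)
        truth = truth.astype(bool)
        denom = pred.sum() + truth.sum()
        if denom == 0:
            return 1.0  # both masks empty: treat as perfect agreement
        return 2.0 * np.logical_and(pred, truth).sum() / denom

    # Toy 1x3 masks: two pixels predicted, one of them correct -> DSC = 2/3
    print(dice(np.array([[1, 1, 0]]), np.array([[1, 0, 0]])))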

  20. Modeling food web interactions in benthic deep-sea ecosystems. A practical guide

    NARCIS (Netherlands)

    Soetaert, K.E.R.; Van Oevelen, D.J.

    2009-01-01

    Deep-sea benthic systems are notoriously difficult to sample. Even more than for other benthic systems, many flows among biological groups cannot be directly measured, and data sets remain incomplete and uncertain. In such cases, mathematical models are often used to quantify unmeasured biological

  1. The benefit of combining a deep neural network architecture with ideal ratio mask estimation in computational speech segregation to improve speech intelligibility.

    Science.gov (United States)

    Bentsen, Thomas; May, Tobias; Kressner, Abigail A; Dau, Torsten

    2018-01-01

    Computational speech segregation attempts to automatically separate speech from noise. This is challenging in conditions with interfering talkers and low signal-to-noise ratios. Recent approaches have adopted deep neural networks and successfully demonstrated speech intelligibility improvements. A selection of components may be responsible for the success with these state-of-the-art approaches: the system architecture, a time frame concatenation technique and the learning objective. The aim of this study was to explore the roles and the relative contributions of these components by measuring speech intelligibility in normal-hearing listeners. A substantial improvement of 25.4 percentage points in speech intelligibility scores was found going from a subband-based architecture, in which a Gaussian Mixture Model-based classifier predicts the distributions of speech and noise for each frequency channel, to a state-of-the-art deep neural network-based architecture. Another improvement of 13.9 percentage points was obtained by changing the learning objective from the ideal binary mask, in which individual time-frequency units are labeled as either speech- or noise-dominated, to the ideal ratio mask, where the units are assigned a continuous value between zero and one. Therefore, both components play significant roles and by combining them, speech intelligibility improvements were obtained in a six-talker condition at a low signal-to-noise ratio.
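    For readers unfamiliar with the two learning objectives compared above, the sketch below computes an ideal binary mask (IBM) and an ideal ratio mask (IRM) from hypothetical speech and noise magnitude spectrograms; the local-criterion value and the random spectrograms are assumptions, not the study's settings.

    import numpy as np

    def ideal_binary_mask(speech_mag, noise_mag, lc_db=0.0):
        """1 where the local SNR exceeds the criterion lc_db, else 0."""
        snr_db = 20.0 * np.log10((speech_mag + 1e-12) / (noise_mag + 1e-12))
        return (snr_db > lc_db).astype(float)

    def ideal_ratio_mask(speech_mag, noise_mag):
        """Continuous mask in [0, 1] based on the speech/noise energy ratio."""
        s2, n2 = speech_mag ** 2, noise_mag ** 2
        return np.sqrt(s2 / (s2 + n2 + 1e-12))

    speech = np.abs(np.random.randn(257, 100))  # stand-in magnitude spectrograms
    noise = np.abs(np.random.randn(257, 100))
    print(ideal_binary_mask(speech, noise).mean(), ideal_ratio_mask(speech, noise).mean())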

  2. Artificial intelligence in fracture detection: transfer learning from deep convolutional neural networks.

    Science.gov (United States)

    Kim, D H; MacKinnon, T

    2018-05-01

    The aim was to identify the extent to which transfer learning from deep convolutional neural networks (CNNs), pre-trained on non-medical images, can be used for automated fracture detection on plain radiographs. The top layer of the Inception v3 network was re-trained using lateral wrist radiographs to produce a model for the classification of new studies as either "fracture" or "no fracture". The model was trained on a total of 11,112 images, after an eightfold data augmentation technique, from an initial set of 1,389 radiographs (695 "fracture" and 694 "no fracture"). The training data set was split 80:10:10 into training, validation, and test groups, respectively. An additional 100 wrist radiographs, comprising 50 "fracture" and 50 "no fracture" images, were used for final testing and statistical analysis. The area under the receiver operator characteristic curve (AUC) for this test was 0.954. Setting the diagnostic cut-off at a threshold designed to maximise both sensitivity and specificity resulted in values of 0.9 and 0.88, respectively. The AUC scores for this test were comparable to the state of the art, providing proof of concept for transfer learning from CNNs in fracture detection on plain radiographs. This was achieved using only a moderate sample size. This technique is largely transferable and, therefore, has many potential applications in medical imaging, which may lead to significant improvements in workflow productivity and in clinical risk reduction. Copyright © 2017 The Royal College of Radiologists. Published by Elsevier Ltd. All rights reserved.
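    The abstract does not state which rule was used to pick the cut-off; one common choice that balances sensitivity and specificity is to maximise Youden's J (sensitivity + specificity - 1) over the ROC curve, sketched below with toy labels and scores.

    import numpy as np
    from sklearn.metrics import roc_curve, roc_auc_score

    y_true = np.array([0, 0, 1, 1, 1, 0, 1, 0])                    # toy labels
    y_score = np.array([0.1, 0.4, 0.8, 0.7, 0.9, 0.3, 0.6, 0.2])   # toy model outputs

    fpr, tpr, thresholds = roc_curve(y_true, y_score)
    j = tpr - fpr                      # Youden's J at each candidate threshold
    best = np.argmax(j)
    print("AUC:", roc_auc_score(y_true, y_score))
    print("cut-off:", thresholds[best],
          "sensitivity:", tpr[best], "specificity:", 1 - fpr[best])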

  3. Species- and habitat-specific bioaccumulation of total mercury and methylmercury in the food web of a deep oligotrophic lake.

    Science.gov (United States)

    Arcagni, Marina; Juncos, Romina; Rizzo, Andrea; Pavlin, Majda; Fajon, Vesna; Arribére, María A; Horvat, Milena; Ribeiro Guevara, Sergio

    2018-01-15

    Niche segregation between introduced and native fish in Lake Nahuel Huapi, a deep oligotrophic lake in Northwest Patagonia (Argentina), occurs through the consumption of different prey. Therefore, in this work we analyzed total mercury [THg] and methylmercury [MeHg] concentrations in top predator fish and in their main prey to test whether their feeding habits influence [Hg]. Results indicate that [THg] and [MeHg] varied by foraging habitat and that they increased with a greater percentage of benthic diet and decreased with a more pelagic diet in Lake Nahuel Huapi. This is consistent with the fact that the native creole perch, a mostly benthivorous feeder, which shares the highest trophic level of the food web with introduced salmonids, had higher [THg] and [MeHg] than the more pelagic feeder rainbow trout and the bentho-pelagic feeder brown trout. This differential THg and MeHg bioaccumulation observed in native and introduced fish provides evidence for the hypothesis that there are two main Hg transfer pathways from the base of the food web to top predators: a pelagic pathway, where Hg is transferred from water, through plankton (with Hg mostly as inorganic species) and forage fish, to salmonids, and a benthic pathway, where Hg is transferred from the sediments (where most Hg methylation occurs), through crayfish (with higher [MeHg] than plankton), to native fish, leading to one-fold higher [Hg]. Copyright © 2017 Elsevier B.V. All rights reserved.

  4. The deep web, dark matter, metabundles and the broadband elites: do you need an informaticist?

    Science.gov (United States)

    Holden, Gary; Rosenberg, Gary

    2003-01-01

    The World Wide Web (WWW) is growing in size and is becoming a substantial component of life. This seems especially true for US professionals, including social workers. It will require effort by these professionals to use the WWW effectively and efficiently. One of the main issues that these professionals will encounter in these efforts is the quality of materials located on the WWW. This paper reviews some of the factors related to improving the quality of information obtained from the WWW by social workers.

  5. Machine listening intelligence

    Science.gov (United States)

    Cella, C. E.

    2017-05-01

    This manifesto paper will introduce machine listening intelligence, an integrated research framework for acoustic and musical signals modelling, based on signal processing, deep learning and computational musicology.

  6. A minimalistic microbial food web in an excavated deep subsurface clay rock.

    Science.gov (United States)

    Bagnoud, Alexandre; de Bruijn, Ino; Andersson, Anders F; Diomidis, Nikitas; Leupin, Olivier X; Schwyn, Bernhard; Bernier-Latmani, Rizlan

    2016-01-01

    Clay rocks are being considered for radioactive waste disposal, but relatively little is known about the impact of microbes on the long-term safety of geological repositories. Thus, a more complete understanding of microbial community structure and function in these environments would provide further detail for the evaluation of the safety of geological disposal of radioactive waste in clay rocks. It would also provide a unique glimpse into a poorly studied deep subsurface microbial ecosystem. Previous studies concluded that microorganisms were present in pristine Opalinus Clay, but inactive. In this work, we describe the microbial community and assess the metabolic activities taking place within borehole water. Metagenomic sequencing and genome-binning of a porewater sample containing suspended clay particles revealed a remarkably simple heterotrophic microbial community, fueled by sedimentary organic carbon, mainly composed of two organisms: a Pseudomonas sp. fermenting bacterium growing on organic macromolecules and releasing organic acids and H2, and a sulfate-reducing Peptococcaceae able to oxidize organic molecules to CO2. In Opalinus Clay, this microbial system likely thrives where pore space allows it. In a repository, this may occur where the clay rock has been locally damaged by excavation or in engineered backfills. © FEMS 2015. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  7. Energy transfer in the Congo deep-sea fan: From terrestrially-derived organic matter to chemosynthetic food webs

    Science.gov (United States)

    Pruski, A. M.; Decker, C.; Stetten, E.; Vétion, G.; Martinez, P.; Charlier, K.; Senyarich, C.; Olu, K.

    2017-08-01

    Large amounts of recent terrestrial organic matter (OM) from the African continent are delivered to the abyssal plain by turbidity currents and accumulate in the Congo deep-sea fan. In the recent lobe complex, large clusters of vesicomyid bivalves are found all along the active channel in areas of reduced sediment. These soft-sediment communities resemble those fuelled by chemoautotrophy in cold-seep settings. The aim of this study was to elucidate feeding strategies in these macrofaunal assemblages as part of a greater effort to understand the link between the inputs of terrestrially-derived OM and the chemosynthetic habitats. The biochemical composition of the sedimentary OM was first analysed in order to evaluate how nutritious the available particulate OM is for the benthic macrofauna. The terrestrial OM is already degraded when it reaches the final depositional area. However, high biopolymeric carbon contents (proteins, carbohydrates and lipids) are found in the channel of the recent lobe complex. In addition, about one to two thirds of the nitrogen can be assigned to peptide-like material. Even if this soil-derived OM is poorly digestible, turbiditic deposits contain such high amounts of organic carbon that there is enough biopolymeric carbon and proteinaceous nitrogen to support dense benthic communities that contrast with the usual depauperate abyssal plains. Stable carbon and nitrogen isotopes and fatty acid biomarkers were then used to shed light on the feeding strategies allowing the energy transfer from the terrestrial OM brought by the turbidity currents to the abyssal food web. In the non-reduced sediment, surface detritivorous holothurians and suspension-feeding poriferans rely on detritic OM, thereby depending directly on the turbiditic deposits. The sulphur-oxidising symbiont bearing vesicomyids closely depend on the reprocessing of OM with methane and sulphide as final products. Their carbon and nitrogen isotopic signatures vary greatly among sites

  8. Web-based telemonitoring and delivery of caregiver support for patients with Parkinson disease after deep brain stimulation: protocol.

    Science.gov (United States)

    Marceglia, Sara; Rossi, Elena; Rosa, Manuela; Cogiamanian, Filippo; Rossi, Lorenzo; Bertolasi, Laura; Vogrig, Alberto; Pinciroli, Francesco; Barbieri, Sergio; Priori, Alberto

    2015-03-06

    The increasing number of patients, the high costs of management, and the chronic progress of the disease that prevents patients from performing even simple daily activities make Parkinson disease (PD) a complex pathology with a high impact on society. In particular, patients implanted with deep brain stimulation (DBS) electrodes face a highly fragile stabilization period, requiring specific support at home. However, DBS patients are followed usually by untrained personnel (caregivers or family), without specific care pathways and supporting systems. This project aims to (1) create a reference consensus guideline and a shared requirements set for the homecare and monitoring of DBS patients, (2) define a set of biomarkers that provides alarms to caregivers for continuous home monitoring, and (3) implement an information system architecture allowing communication between health care professionals and caregivers and improving the quality of care for DBS patients. The definitions of the consensus care pathway and of caregiver needs will be obtained by analyzing the current practices for patient follow-up through focus groups and structured interviews involving health care professionals, patients, and caregivers. The results of this analysis will be represented in a formal graphical model of the process of DBS patient care at home. To define the neurophysiological biomarkers to be used to raise alarms during the monitoring process, neurosignals will be acquired from DBS electrodes through a new experimental system that records while DBS is turned ON and transmits signals by radiofrequency. Motor, cognitive, and behavioral protocols will be used to study possible feedback/alarms to be provided by the system. Finally, a set of mobile apps to support the caregiver at home in managing and monitoring the patient will be developed and tested in the community of caregivers that participated in the focus groups. The set of developed apps will be connected to the already

  9. The natural diet of a hexactinellid sponge: Benthic pelagic coupling in a deep-sea microbial food web

    Science.gov (United States)

    Pile, Adele J.; Young, Craig M.

    2006-07-01

    Dense communities of shallow-water suspension feeders are known to sidestep the microbial loop by grazing on ultraplankton at its base. We quantified the diet, rates of water processing, and abundance of the deep-sea hexactinellid sponge Sericolophus hawaiicus, and found that, like their demosponge relatives in shallow water, hexactinellids are a significant sink for ultraplankton. S. hawaiicus forms a dense bed of sponges on the Big Island of Hawaii between 360 and 460 m depth, with a mean density of 4.7 sponges m⁻². Grazing of S. hawaiicus on ultraplankton was quantified from in situ samples using flow cytometry, and was found to be unselective. Rates of water processing were determined with dye visualization and ranged from 1.62 to 3.57 cm s⁻¹, resulting in a processing rate of 7.9 ± 2.4 ml sponge⁻¹ s⁻¹. The large amount of water processed by these benthic suspension feeders results in the transfer of approximately 55 mg carbon and 7.3 mg N d⁻¹ m⁻² from the water column to the benthos. The magnitude of this flux places S. hawaiicus squarely within the functional group of organisms that link the pelagic microbial food web to the benthos.

  10. The benefit of combining a deep neural network architecture with ideal ratio mask estimation in computational speech segregation to improve speech intelligibility

    DEFF Research Database (Denmark)

    Bentsen, Thomas; May, Tobias; Kressner, Abigail Anne

    2018-01-01

    Computational speech segregation attempts to automatically separate speech from noise. This is challenging in conditions with interfering talkers and low signal-to-noise ratios. Recent approaches have adopted deep neural networks and successfully demonstrated speech intelligibility improvements....... A selection of components may be responsible for the success with these state-of-the-art approaches: the system architecture, a time frame concatenation technique and the learning objective. The aim of this study was to explore the roles and the relative contributions of these components by measuring speech......, to a state-of-the-art deep neural network-based architecture. Another improvement of 13.9 percentage points was obtained by changing the learning objective from the ideal binary mask, in which individual time-frequency units are labeled as either speech- or noise-dominated, to the ideal ratio mask, where...

  11. Efficacy of a web-based intelligent tutoring system for communicating genetic risk of breast cancer: a fuzzy-trace theory approach.

    Science.gov (United States)

    Wolfe, Christopher R; Reyna, Valerie F; Widmer, Colin L; Cedillos, Elizabeth M; Fisher, Christopher R; Brust-Renck, Priscila G; Weil, Audrey M

    2015-01-01

    Many healthy women consider genetic testing for breast cancer risk, yet BRCA testing issues are complex. The aim was to determine whether an intelligent tutor, BRCA Gist, grounded in fuzzy-trace theory (FTT), increases gist comprehension and knowledge about genetic testing for breast cancer risk, improving decision making. In 2 experiments, 410 healthy undergraduate women were randomly assigned to 1 of 3 groups: an online module using a Web-based tutoring system (BRCA Gist) that uses artificial intelligence technology, a second group that read highly similar content from the National Cancer Institute (NCI) Web site, and a third that completed an unrelated tutorial. BRCA Gist applied FTT and was designed to help participants develop gist comprehension of topics relevant to decisions about BRCA genetic testing, including how breast cancer spreads, inherited genetic mutations, and base rates. We measured content knowledge, gist comprehension of decision-relevant information, interest in testing, and genetic risk and testing judgments. Control knowledge scores ranged from 54% to 56%, NCI improved significantly to 65% and 70%, and BRCA Gist improved significantly more, to 75% and 77%. Intelligent tutors, such as BRCA Gist, are scalable, cost-effective ways of helping people understand complex issues, improving decision making. © The Author(s) 2014.

  12. Estimating Ground-Level PM2.5 by Fusing Satellite and Station Observations: A Geo-Intelligent Deep Learning Approach

    Science.gov (United States)

    Li, Tongwen; Shen, Huanfeng; Yuan, Qiangqiang; Zhang, Xuechen; Zhang, Liangpei

    2017-12-01

    Fusing satellite observations and station measurements to estimate ground-level PM2.5 is promising for monitoring PM2.5 pollution. A geo-intelligent approach, which incorporates geographical correlation into an intelligent deep learning architecture, is developed to estimate PM2.5. Specifically, it considers geographical distance and spatiotemporally correlated PM2.5 in a deep belief network (denoted as Geoi-DBN). Geoi-DBN can capture the essential features associated with PM2.5 from latent factors. It was trained and tested with data from China in 2015. The results show that Geoi-DBN performs significantly better than the traditional neural network. The out-of-sample cross-validation R2 increases from 0.42 to 0.88, and RMSE decreases from 29.96 to 13.03 μg/m3. On the basis of the derived PM2.5 distribution, it is predicted that over 80% of the Chinese population live in areas with an annual mean PM2.5 of greater than 35 μg/m3. This study provides a new perspective for air pollution monitoring in large geographic regions.
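    The two validation metrics quoted above (out-of-sample cross-validation R2 and RMSE) can be reproduced from held-out observations and predictions as in this minimal sketch; the station values are hypothetical.

    import numpy as np

    def r2_and_rmse(y_true, y_pred):
        """Coefficient of determination and root-mean-square error."""
        y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
        ss_res = np.sum((y_true - y_pred) ** 2)
        ss_tot = np.sum((y_true - y_true.mean()) ** 2)
        return 1.0 - ss_res / ss_tot, np.sqrt(np.mean((y_true - y_pred) ** 2))

    observed = [35.0, 60.0, 22.0, 80.0, 47.0]    # toy station PM2.5 (ug/m3)
    predicted = [40.0, 55.0, 25.0, 70.0, 50.0]   # toy model estimates
    print(r2_and_rmse(observed, predicted))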

  13. TOPIC MODELING: CLUSTERING OF DEEP WEBPAGES

    OpenAIRE

    Muhunthaadithya C; Rohit J.V; Sadhana Kesavan; E. Sivasankar

    2015-01-01

    The internet comprises a massive amount of information in the form of zillions of web pages. This information can be categorized into the surface web and the deep web. Existing search engines can effectively make use of surface web information, but the deep web remains largely unexploited. Machine learning techniques have been commonly employed to access deep web content.

  14. [Development and application of a web-based expert system using artificial intelligence for management of mental health by Korean emigrants].

    Science.gov (United States)

    Bae, Jeongyee

    2013-04-01

    The purpose of this project was to develop an international web-based expert system using principles of artificial intelligence and user-centered design for management of mental health by Korean emigrants. With this system, anyone with computer access to the web can use the system. Our design process utilized principles of user-centered design with 4 phases: needs assessment, analysis, design/development/testing, and application release. A survey was done with 3,235 Korean emigrants. Focus group interviews were also conducted. Survey and analysis results guided the design of the web-based expert system. With this system, anyone can check their mental health status by themselves using a personal computer. The system analyzes facts based on answers to automated questions, and suggests solutions accordingly. A history tracking mechanism enables monitoring and future analysis. In addition, this system will include intervention programs to promote mental health status. This system is interactive and accessible to anyone in the world. It is expected that this management system will contribute to Korean emigrants' mental health promotion and allow researchers and professionals to share information on mental health.

  15. Definición y desarrollo de herramienta web de gestión de metadatos Business Intelligence

    OpenAIRE

    Montalvillo Mendizabal, Leticia

    2012-01-01

    Today, large companies rely on Business Intelligence (BI) systems for the various objectives and tasks they have to carry out. This project focuses on the definition of a BI metadata repository that will store the data relating to Key Performance Indicators (KPIs).

  16. Deep web entity monitoring

    NARCIS (Netherlands)

    Khelghati, Mohammadreza; Hiemstra, Djoerd; van Keulen, Maurice

    2012-01-01

    Accessing information is an essential factor in decision making processes occurring in different domains. Therefore, broadening the coverage of available information for the decision makers is of vital importance. In such an information-thirsty environment, accessing every source of information is

  17. Deep Web Content Monitoring

    NARCIS (Netherlands)

    Khelghati, Mohammadreza

    2016-01-01

    Data is one of the keys to success. Whether you are a fraud detection officer in a tax office, a data journalist or a business analyst, your primary concern is to access all the data relevant to your topics of interest. In such an information-thirsty environment, accessing every source of

  18. Deep Web Entity Monitoring

    NARCIS (Netherlands)

    Khelghati, Mohammadreza; Hiemstra, Djoerd; van Keulen, Maurice

    Accessing information is an essential factor in decision making processes occurring in different domains. Therefore, broadening the coverage of available information for the decision makers is of vital importance. In such an information-thirsty environment, accessing every source of information is

  19. Deep smarts.

    Science.gov (United States)

    Leonard, Dorothy; Swap, Walter

    2004-09-01

    When a person sizes up a complex situation and rapidly comes to a decision that proves to be not just good but brilliant, you think, "That was smart." After you watch him do this a few times, you realize you're in the presence of something special. It's not raw brainpower, though that helps. It's not emotional intelligence, either, though that, too, is often involved. It's deep smarts. Deep smarts are not philosophical--they're not "wisdom" in that sense, but they're as close to wisdom as business gets. You see them in the manager who understands when and how to move into a new international market, in the executive who knows just what kind of talk to give when her organization is in crisis, in the technician who can track a product failure back to an interaction between independently produced elements. These are people whose knowledge would be hard to purchase on the open market. Their insight is based on know-how more than on know-what; it comprises a system view as well as expertise in individual areas. Because deep smarts are experience based and often context specific, they can't be produced overnight or readily imported into an organization. It takes years for an individual to develop them--and no time at all for an organization to lose them when a valued veteran walks out the door. They can be taught, however, with the right techniques. Drawing on their forthcoming book Deep Smarts, Dorothy Leonard and Walter Swap say the best way to transfer such expertise to novices--and, on a larger scale, to make individual knowledge institutional--isn't through PowerPoint slides, a Web site of best practices, online training, project reports, or lectures. Rather, the sage needs to teach the neophyte individually how to draw wisdom from experience. Companies have to be willing to dedicate time and effort to such extensive training, but the investment more than pays for itself.

  20. The Influence of Surface and Deep Cues on Primary and Secondary School Students' Assessment of Relevance in Web Menus

    Science.gov (United States)

    Rouet, Jean-Francois; Ros, Christine; Goumi, Antonine; Macedo-Rouet, Monica; Dinet, Jerome

    2011-01-01

    Two experiments investigated primary and secondary school students' Web menu selection strategies using simulated Web search tasks. It was hypothesized that students' selections of websites depend on their perception and integration of multiple relevance cues. More specifically, students should be able to disentangle superficial cues (e.g.,…

  1. DESIGN OF A WEB SEMI-INTELLIGENT METADATA SEARCH MODEL APPLIED IN DATA WAREHOUSING SYSTEMS DISEÑO DE UN MODELO SEMIINTELIGENTE DE BÚSQUEDA DE METADATOS EN LA WEB, APLICADO A SISTEMAS DATA WAREHOUSING

    Directory of Open Access Journals (Sweden)

    Enrique Luna Ramírez

    2008-12-01

    Full Text Available In this paper, the design of a Web metadata search model with semi-intelligent features is proposed. The search model is oriented to retrieving the metadata associated with a data warehouse in a fast, flexible and reliable way. Our proposal includes a set of distinctive functionalities, which consist of the temporary storage of frequently used metadata in an exclusive store, different from the global data warehouse metadata store, and of the use of control processes to retrieve information from both stores through aliases of concepts.
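    The retrieval idea described above (a small store of frequently used metadata consulted before the global metadata store, with concept aliases resolved first) can be sketched as follows; this is an illustration under assumed names, not the authors' implementation.

    class MetadataSearch:
        def __init__(self, global_store: dict, aliases: dict):
            self.global_store = global_store   # full data-warehouse metadata
            self.frequent_store = {}           # exclusive store of frequently used entries
            self.aliases = aliases             # alias -> canonical concept name

        def lookup(self, concept: str):
            key = self.aliases.get(concept, concept)   # resolve alias first
            if key in self.frequent_store:             # fast path: frequently used store
                return self.frequent_store[key]
            value = self.global_store.get(key)         # fall back to the global store
            if value is not None:
                self.frequent_store[key] = value       # keep it for later queries
            return value

    search = MetadataSearch(
        global_store={"sales_fact": {"grain": "daily", "source": "ERP"}},
        aliases={"ventas": "sales_fact"})
    print(search.lookup("ventas"))   # resolved through the alias, then cached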

  2. GROWTH OF COLLECTIVE INTELLIGENCE BY LINKING KNOWLEDGE WORKERS THROUGH SOCIAL MEDIA

    Directory of Open Access Journals (Sweden)

    JAROSLAVA KUBÁTOVÁ

    2012-05-01

    Full Text Available Collective intelligence can be defined, very broadly, as groups of individuals that do things collectively, and that seem to be intelligent. Collective intelligence has existed for ages. Families, tribes, companies, countries, etc., are all groups of individuals doing things collectively, and that seem to be intelligent. However, over the past two decades, the rise of the Internet has given rise to new types of collective intelligence. Companies can take advantage of the so-called Web-enabled collective intelligence. Web-enabled collective intelligence is based on linking knowledge workers through social media. That means that companies can hire geographically dispersed knowledge workers and create so-called virtual teams of these knowledge workers (members of the virtual teams are connected only via the Internet and do not meet face to face). By providing an online social network, the companies can achieve significant growth of collective intelligence. But to create and use an online social network within a company in a really efficient way, the managers need to have a deep understanding of how such a system works. Thus the purpose of this paper is to share the knowledge about effective use of social networks in companies. The main objectives of this paper are as follows: to introduce some good practices of the use of social media in companies, to analyze these practices and to generalize recommendations for a successful introduction and use of social media to increase collective intelligence of a company.

  3. Using artificial intelligence and web media data to evaluate the growth potential of companies in emerging industry sectors

    DEFF Research Database (Denmark)

    Tanev, Stoyan; Droll, Andrew; Khan, Shahzad

    2017-01-01

    In this article, we describe our efforts to adapt and validate a web search and analytics tool – the Gnowit Cognitive Insight Engine – to evaluate the growth and competitive potential of new technology startups and existing firms in the newly emerging precision medicine sector. The results are ba...

  4. Carbon cycling in the deep eastern North Pacific benthic food web: Investigating the effect of organic carbon input

    NARCIS (Netherlands)

    Dunlop, K.M.; Van Oevelen, D.; Ruhl, H.A.; Huffard, C.L.; Kuhnz, L.A.; Smith, K.L.

    2016-01-01

    The deep ocean benthic environment plays a role in long-term carbon sequestration. Understanding carbon cycling in the deep ocean floor is critical to evaluate the impact of changing climate on the oceanic systems. Linear inverse modeling was used to quantify carbon transfer between compartments in

  5. A Web-based Multilingual Intelligent Tutor System based on Jackson's Learning Styles Profiler and Expert Systems

    OpenAIRE

    Ghadirli, Hossein Movafegh; Rastgarpour, Maryam

    2013-01-01

    Nowadays, Intelligent Tutoring Systems (ITSs) are regarded as a way to improve education quality via new technologies in this area. One of the problems is that the language of an ITS may differ from the learner's, which forces learners to learn the system language. This paper tries to remove this necessity by using an Automatic Translator Component in the system structure, such as the Google Translate API. The system carries out a pre-test and post-test using an Expert System and the Jackson Model before ...

  6. BEETLE II: Deep Natural Language Understanding and Automatic Feedback Generation for Intelligent Tutoring in Basic Electricity and Electronics

    Science.gov (United States)

    Dzikovska, Myroslava; Steinhauser, Natalie; Farrow, Elaine; Moore, Johanna; Campbell, Gwendolyn

    2014-01-01

    Within STEM domains, physics is considered to be one of the most difficult topics to master, in part because many of the underlying principles are counter-intuitive. Effective teaching methods rely on engaging the student in active experimentation and encouraging deep reasoning, often through the use of self-explanation. Supporting such…

  7. Using Artificial Intelligence and Web Media Data to Evaluate the Growth Potential of Companies in Emerging Industry Sectors

    Directory of Open Access Journals (Sweden)

    Andrew Droll

    2017-06-01

    Full Text Available In this article, we describe our efforts to adapt and validate a web search and analytics tool – the Gnowit Cognitive Insight Engine – to evaluate the growth and competitive potential of new technology startups and existing firms in the newly emerging precision medicine sector. The results are based on two different search ontologies and two different samples of firms. The first sample includes established drug companies operating in the precision medicine field and was used to estimate the relationship between the firms’ innovativeness and the extent of online discussions focusing on their potential growth. The second sample includes new technology firms in the same sector. The firms in the second sample were used as test cases to determine whether their growth-related web search scores would relate to the degree of their innovativeness. The second part of the study applied the same methodology to the real-time monitoring of the firms’ competitive actions. In our findings, we see that our methodology reveals a moderate degree of correlation between the Insight Engine’s algorithmically computed relevance scores and independent measures of innovation potential. The existence of such correlations invites future work in attempting to analyze company growth potential using techniques founded in web content scraping, natural language processing, and machine learning.

  8. Swiss Première of the film "Deep Web" | 11 March 7 p.m. | CERN Main Auditorium

    CERN Multimedia

    2016-01-01

On Friday 11 March, the CineGlobe Film Festival at CERN will host the FIFDH (International Film Festival and Forum on Human Rights) in the CERN Main Auditorium for the Swiss Première of the film Deep Web.   Starting from the online black market Silk Road, this investigation immerses us in the universe of the Tor network and the Dark Web, the cryptic and anonymous side of the Internet. In this modern version of the Far West, inhabited by bounty hunters, libertarians and political dissidents, everything is paid in bitcoins.  After the screening, filmmaker Miruna Coca-Cozma will moderate a discussion on security and the evolution of the web, with the participation of the director of the DiploFoundation, Jovan Kubalija, and CERN Computer Security Officer Stefan Lueders. Doors open at 7:00 p.m., film begins at 7:30 p.m. Entry is free with reservation by email to deepweb.cern@fifdh.org. Anyone interested in volunteering for the s...

  9. Situation-aware GeoVisualization considering applied logic and extensibility: a new architecture and mechanism for intelligent GeoWeb

    Science.gov (United States)

    He, Xuelin; Gold, Christopher

    2010-11-01

    Recent years have witnessed the emergence of Virtual Globe technology, which has been increasingly exhibiting powerful features and capabilities. However, the current technical architecture for geovisualization is still the traditional data-viewer mode, i.e. KML geobrowser. Current KML is basically an encoding format for wrapping static snapshots of information frozen at discrete time points, and a geobrowser is virtually a data renderer for geovisualization. In the real world, spatial-temporal objects and elements possess specific semantics, applied logic and operational rules, naturally or socially, which need to be considered and executed when the corresponding data is integrated or visualized in a visual geocontext. However, there is currently no way to express and execute this kind of applied logic and control rules within the current geobrowsing architecture. This paper proposes a novel architecture by originating a new mechanism, DKML, and implementing a DKML-supporting prototype geobrowser. Embedded programming script within KML files can express applied logic, control conditions, situation-aware analysis utilities and special functionality, to achieve intelligent, controllable and applied logic-conformant geovisualization, and to flexibly extend and customize the DKML-supporting geobrowser. Benefiting from the mechanism developed in this research, geobrowsers can truly evolve into powerful multi-purpose GeoWeb platforms with promising potential and prospects.

  10. Artificial Intelligence, Machine Learning, Deep Learning, and Cognitive Computing: What Do These Terms Mean and How Will They Impact Health Care?

    Science.gov (United States)

    Bini, Stefano A

    2018-02-27

    This article was presented at the 2017 annual meeting of the American Association of Hip and Knee Surgeons to introduce the members gathered as the audience to the concepts behind artificial intelligence (AI) and the applications that AI can have in the world of health care today. We discuss the origin of AI, progress to machine learning, and then discuss how the limits of machine learning lead data scientists to develop artificial neural networks and deep learning algorithms through biomimicry. We will place all these technologies in the context of practical clinical examples and show how AI can act as a tool to support and amplify human cognitive functions for physicians delivering care to increasingly complex patients. The aim of this article is to provide the reader with a basic understanding of the fundamentals of AI. Its purpose is to demystify this technology for practicing surgeons so they can better understand how and where to apply it. Copyright © 2018 Elsevier Inc. All rights reserved.

  11. Intelligence in Artificial Intelligence

    OpenAIRE

    Datta, Shoumen Palit Austin

    2016-01-01

    The elusive quest for intelligence in artificial intelligence prompts us to consider that instituting human-level intelligence in systems may be (still) in the realm of utopia. In about a quarter century, we have witnessed the winter of AI (1990) being transformed and transported to the zenith of tabloid fodder about AI (2015). The discussion at hand is about the elements that constitute the canonical idea of intelligence. The delivery of intelligence as a pay-per-use-service, popping out of ...

  12. Collaborative web-based annotation of video footage of deep-sea life, ecosystems and geological processes

    Science.gov (United States)

    Kottmann, R.; Ratmeyer, V.; Pop Ristov, A.; Boetius, A.

    2012-04-01

    More and more seagoing scientific expeditions use video-controlled research platforms such as Remotely Operated Vehicles (ROVs), Autonomous Underwater Vehicles (AUVs), and towed camera systems. These produce many hours of video material which contains detailed and scientifically highly valuable footage of the biological, chemical, geological, and physical aspects of the oceans. Many of the videos contain unique observations of unknown life-forms which are rare, and which cannot be sampled and studied otherwise. To make such video material accessible online and to create a collaborative annotation environment, the "Video Annotation and processing platform" (V-App) was developed. A first, solely web-based installation for ROV videos has been set up at the German Center for Marine Environmental Sciences (available at http://videolib.marum.de). It allows users to search and watch videos with a standard web browser based on the HTML5 standard. Moreover, V-App implements social web technologies allowing a distributed, world-wide scientific community to collaboratively annotate videos anywhere at any time. Several features are fully implemented, among which are: • User login system for fine-grained permission and access control • Video watching • Video search using keywords, geographic position, depth and time range, and any combination thereof • Video annotation organised in themes (tracks) such as biology and geology, among others, in standard or full-screen mode • Annotation keyword management: administrative users can add, delete, and update single keywords for annotation or upload sets of keywords from Excel sheets • Download of products for scientific use. This unique web application system helps make costly ROV videos available online (estimated cost range between 5,000 and 10,000 Euros per hour, depending on the combination of ship and ROV). Moreover, with this system each expert annotation instantly adds available and valuable knowledge to otherwise uncharted

  13. Plant identification based on noisy web data: the amazing performance of deep learning (LifeCLEF 2017)

    OpenAIRE

    Goeau, Herve; Bonnet, Pierre; Joly, Alexis

    2017-01-01

    International audience; The 2017 edition of the LifeCLEF plant identification challenge is an important milestone towards automated plant identification systems working at the scale of continental floras, with 10,000 plant species living mainly in Europe and North America illustrated by a total of 1.1M images. Nowadays, such ambitious systems are enabled thanks to the conjunction of the dazzling recent progress in image classification with deep learning and several outstanding international...

  14. Food-web dynamics and isotopic niches in deep-sea communities residing in a submarine canyon and on the adjacent open slopes

    Science.gov (United States)

    Demopoulos, Amanda W.J.; McClain-Counts, Jennifer; Ross, Steve W.; Brooke, Sandra; Mienis, Furu

    2017-01-01

    Examination of food webs and trophic niches provides insights into organisms' functional ecology, yet few studies have examined trophodynamics within submarine canyons, where the interaction of canyon morphology and oceanography influences habitat provision and food deposition. Using stable isotope analysis and Bayesian ellipses, we documented deep-sea food-web structure and trophic niches in Baltimore Canyon and the adjacent open slopes in the US Mid-Atlantic Region. Results revealed isotopically diverse feeding groups, comprising approximately 5 trophic levels. Regression analysis indicated that consumer isotope data are structured by habitat (canyon vs. slope), feeding group, and depth. Benthic feeders were enriched in 13C and 15N relative to suspension feeders, consistent with consuming older, more refractory organic matter. In contrast, canyon suspension feeders had the largest and more distinct isotopic niche, indicating they consume an isotopically discrete food source, possibly fresher organic material. The wider isotopic niche observed for canyon consumers indicated the presence of feeding specialists and generalists. High dispersion in δ13C values for canyon consumers suggests that the isotopic composition of particulate organic matter changes, which is linked to depositional dynamics, resulting in discrete zones of organic matter accumulation or resuspension. Heterogeneity in habitat and food availability likely enhances trophic diversity in canyons. Given their abundance in the world's oceans, our results from Baltimore Canyon suggest that submarine canyons may represent important havens for trophic diversity.

  15. Web-based tool for visualization of electric field distribution in deep-seated body structures and planning of electroporation-based treatments.

    Science.gov (United States)

    Marčan, Marija; Pavliha, Denis; Kos, Bor; Forjanič, Tadeja; Miklavčič, Damijan

    2015-01-01

    Treatments based on electroporation are a new and promising approach to treating tumors, especially non-resectable ones. The success of the treatment is, however, heavily dependent on coverage of the entire tumor volume with a sufficiently high electric field. Ensuring complete coverage in the case of deep-seated tumors is not trivial and is best ensured by patient-specific treatment planning. The basis of the treatment planning process consists of two complex tasks: medical image segmentation, and numerical modeling and optimization. In addition to previously developed segmentation algorithms for several tissues (human liver, hepatic vessels, bone tissue and canine brain) and the algorithms for numerical modeling and optimization of treatment parameters, we developed a web-based tool to facilitate the translation of the algorithms and their application in the clinic. The developed web-based tool automatically builds a 3D model of the target tissue from the medical images uploaded by the user and then uses this 3D model to optimize treatment parameters. The tool enables the user to validate the results of the automatic segmentation and make corrections if necessary before delivering the final treatment plan. Evaluation of the tool was performed by five independent experts from four different institutions. During the evaluation, we gathered data concerning user experience and measured performance times for different components of the tool. Both user reports and performance times show significant reduction in treatment-planning complexity and time-consumption from 1-2 days to a few hours. The presented web-based tool is intended to facilitate the treatment planning process and reduce the time needed for it. It is crucial for facilitating expansion of electroporation-based treatments in the clinic and ensuring reliable treatment for the patients. The additional value of the tool is the possibility of easy upgrade and integration of modules with new

  16. Stable-isotope analysis of a deep-sea benthic-fish assemblage: evidence of an enriched benthic food web.

    Science.gov (United States)

    Boyle, M D; Ebert, D A; Cailliet, G M

    2012-04-01

    In this study, fishes and invertebrates collected from the continental slope (1000 m) of the eastern North Pacific Ocean were analysed using stable-isotope analysis (SIA). Resulting trophic positions (TP) were compared to known diets and habitats from the literature. Dual isotope plots indicated that most species groups (invertebrates and fishes) sorted as expected along the carbon and nitrogen axes, with less intraspecific variability than interspecific variability. Results also indicated an isotopically distinct benthic and pelagic food web, as the benthic food web was more enriched in both nitrogen and carbon isotopes. Trophic positions from SIA supported this finding, resulting in the assignment of fishes to different trophic positions from those expected based on published dietary information. These differences can be explained largely by the habitat of the prey and the percentage of the diet that was scavenged. A mixing model estimated dietary contributions of prey similar to those of the known diet of Bathyraja trachura from stomach-content analysis (SCA). Linear regressions indicated that trophic positions calculated from SIA and SCA, when plotted against B. trachura total length for 32 individuals, exhibited similar variation and patterns. Only the TP from SCA yielded significant results (stomach content: P < 0·05). © 2012 The Authors. Journal of Fish Biology © 2012 The Fisheries Society of the British Isles.
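    For orientation, trophic positions from nitrogen stable isotopes are commonly computed with the standard baseline formula sketched below; the 3.4 per-mil enrichment per trophic step and the baseline trophic level of 2 are conventional defaults, not necessarily the values used in this study.

    def trophic_position(d15n_consumer, d15n_baseline, baseline_tp=2.0, delta_n=3.4):
        """TP = baseline TP + (d15N_consumer - d15N_baseline) / enrichment per step."""
        return baseline_tp + (d15n_consumer - d15n_baseline) / delta_n

    # Toy values in per mil: a consumer roughly two steps above the baseline
    print(trophic_position(d15n_consumer=15.2, d15n_baseline=8.0))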

  17. Food-web and ecosystem structure of the open-ocean and deep-sea environments of the Azores, NE Atlantic

    Directory of Open Access Journals (Sweden)

    Telmo Morato

    2016-12-01

    Full Text Available The Marine Strategy Framework Directive intends to adopt ecosystem-based management for resources, biodiversity and habitats that puts emphasis on maintaining the health of the ecosystem alongside appropriate human use of the marine environment, for the benefit of current and future generations. Within the overall framework of ecosystem-based management, ecosystem models are tools to evaluate and gain insights into ecosystem properties. Low data availability and the complexity of modelling deep-water ecosystems have limited the application of ecosystem models to few deep-water ecosystems. Here, we aim to develop an ecosystem model for the deep sea and open ocean in the Azores exclusive economic zone with the overarching objective of characterising the food-web and ecosystem structure of the ecosystem. An ecosystem model with 45 functional groups, including a detritus group, two primary producer groups, eight invertebrate groups, 29 fish groups, three marine mammal groups, a turtle and a seabird group, was built. Overall data quality measured by the pedigree index was estimated to be higher than the mean value of all published models. Therefore, the model was built with source data of an overall reasonable quality, especially considering the normally low data availability for deep-sea ecosystems. The total biomass (excluding detritus) of the modelled ecosystem for the whole area was calculated as 24.7 t km⁻². The mean trophic level for the total marine catch of the Azores was estimated to be 3.95, similar to the trophic level of the bathypelagic and medium-size pelagic fish. Trophic levels for the different functional groups were estimated to be similar to those obtained with stable isotopes and stomach contents analyses, with some exceptions on both ends of the trophic spectra. Omnivory indices were in general low, indicating prey speciation for the majority of the groups. Cephalopods, pelagic sharks and toothed whales were identified as groups with

  18. A Comparative Survey of Lotka and Pao’s Laws Conformity with the Number of Researchers and Their Articles in Computer Science and Artificial Intelligence Fields in Web of Science (1986-2009)

    Directory of Open Access Journals (Sweden)

    Farideh Osareh

    2011-10-01

    Full Text Available The purpose of this research was to examine the validity of Lotka's and Pao's laws against the authorship distributions of the "Computer Science" and "Artificial Intelligence" fields using Web of Science (WoS) during 1986 to 2009, and to compare the results. The study used citation analysis methods, which are scientometric techniques. The research sample includes all articles in the computer science and artificial intelligence fields indexed in the databases accessible via Web of Science during 1986-2009; these were stored in 500-record files and loaded into the "ISI.exe" software for analysis. The required output of this software was then saved in Excel. There were 19150 articles in the computer science field (by 45713 authors) and 958 articles in the artificial intelligence field (by 2487 authors). For final counting and analysis, the data were converted to "Excel" spreadsheet software. Lotka's and Pao's laws were tested using Lotka's formula x^n · y = c (with n = 2 for Lotka's law); for Pao's law, the values of the exponent n and the constant c were computed, and Kolmogorov-Smirnov goodness-of-fit tests were applied. The results suggested that the author productivity distribution predicted by "Lotka's generalized inverse square law" was not applicable to computer science and artificial intelligence, but Pao's law was applicable to these subject areas. Both the literature survey and the original examination of Lotka's and Pao's laws showed that several aspects should be considered. The main elements involved in fitting a bibliometric model were identified: the choice of Lotka's or Pao's law, the subject area, the time period, how authors are counted, and the criterion for assessing goodness of fit.
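    A minimal sketch of the fitting procedure described above, assuming toy counts: the exponent n is estimated by least squares on the log-log form of Lotka's law y(x) = c / x^n, the constant c is normalised over the observed range, and a simple Kolmogorov-Smirnov statistic compares observed and expected cumulative proportions.

    import numpy as np

    papers = np.array([1, 2, 3, 4, 5])             # papers per author
    authors = np.array([1000, 260, 120, 65, 40])   # hypothetical author counts

    # Least-squares slope of log(authors) vs log(papers) gives -n
    slope, intercept = np.polyfit(np.log(papers), np.log(authors), 1)
    n = -slope
    c = 1.0 / np.sum(1.0 / papers ** n)            # normalise expected proportions

    observed = authors / authors.sum()
    expected = c / papers ** n
    ks_stat = np.max(np.abs(np.cumsum(observed) - np.cumsum(expected)))
    print(f"n = {n:.2f}, c = {c:.3f}, K-S statistic = {ks_stat:.3f}")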

  19. An algorithm for management of deep brain stimulation battery replacements: devising a web-based battery estimator and clinical symptom approach.

    Science.gov (United States)

    Montuno, Michael A; Kohner, Andrew B; Foote, Kelly D; Okun, Michael S

    2013-01-01

    Deep brain stimulation (DBS) is an effective technique that has been utilized to treat advanced and medication-refractory movement and psychiatric disorders. In order to avoid implanted pulse generator (IPG) failure and consequent adverse symptoms, a better understanding of IPG battery longevity and management is necessary. Existing methods for battery estimation lack the specificity required for clinical incorporation. Technical challenges prevent higher accuracy longevity estimations, and a better approach to managing end of DBS battery life is needed. The literature was reviewed and DBS battery estimators were constructed by the authors and made available on the web at http://mdc.mbi.ufl.edu/surgery/dbs-battery-estimator. A clinical algorithm for management of DBS battery life was constructed. The algorithm takes into account battery estimations and clinical symptoms. Existing methods of DBS battery life estimation utilize an interpolation of averaged current drains to calculate how long a battery will last. Unfortunately, this technique can only provide general approximations. There are inherent errors in this technique, and these errors compound with each iteration of the battery estimation. Some of these errors cannot be accounted for in the estimation process, and some of the errors stem from device variation, battery voltage dependence, battery usage, battery chemistry, impedance fluctuations, interpolation error, usage patterns, and self-discharge. We present web-based battery estimators along with an algorithm for clinical management. We discuss the perils of using a battery estimator without taking into account the clinical picture. Future work will be needed to provide more reliable management of implanted device batteries; however, implementation of a clinical algorithm that accounts for both estimated battery life and for patient symptoms should improve the care of DBS patients. © 2012 International Neuromodulation Society.
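    The interpolation-of-averaged-current-drain estimation criticised above boils down to dividing usable battery capacity by an averaged drain, as in this back-of-the-envelope sketch; all figures are illustrative, not device data.

    HOURS_PER_YEAR = 24 * 365.25

    def estimated_longevity_years(usable_capacity_ah: float,
                                  avg_current_drain_ma: float) -> float:
        """Naive longevity estimate: capacity divided by an averaged current drain."""
        hours = usable_capacity_ah / (avg_current_drain_ma / 1000.0)
        return hours / HOURS_PER_YEAR

    # Hypothetical IPG: 1.2 Ah usable capacity at an averaged 0.040 mA drain
    print(round(estimated_longevity_years(1.2, 0.040), 1), "years")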

  20. Semantic Business Intelligence - a New Generation of Business Intelligence

    Directory of Open Access Journals (Sweden)

    Dinu AIRINEI

    2012-01-01

    Full Text Available Business Intelligence solutions represent applications used by companies to manage, process and analyze data in order to provide substantiated decisions. In the context of Semantic Web development, the trend is to integrate semantic, unstructured data, which requires business intelligence solutions to be redesigned in such a manner that they can analyze, process and synthesize, in addition to traditional data, data integrated with semantics in other forms and structures. This invariably leads to the appearance of a new BI solution, called Semantic Business Intelligence.

  1. Artificial intelligence: Deep neural reasoning

    Science.gov (United States)

    Jaeger, Herbert

    2016-10-01

    The human brain can solve highly abstract reasoning problems using a neural network that is entirely physical. The underlying mechanisms are only partially understood, but an artificial network provides valuable insight. See Article p.471

  2. Why & When Deep Learning Works: Looking Inside Deep Learnings

    OpenAIRE

    Ronen, Ronny

    2017-01-01

    The Intel Collaborative Research Institute for Computational Intelligence (ICRI-CI) has been heavily supporting Machine Learning and Deep Learning research from its foundation in 2012. We have asked six leading ICRI-CI Deep Learning researchers to address the challenge of "Why & When Deep Learning works", with the goal of looking inside Deep Learning, providing insights on how deep networks function, and uncovering key observations on their expressiveness, limitations, and potential. The outp...

  3. Engineering Adaptive Web Applications

    DEFF Research Database (Denmark)

    Dolog, Peter

    2007-01-01

    Information and services on the web are accessible for everyone. Users of the web differ in their background, culture, political and social environment, interests and so on. Ambient intelligence was envisioned as a concept for systems which are able to adapt to user actions and needs. With the growing amount of information and services, web applications become natural candidates to adopt the concepts of ambient intelligence. Such applications can deal with diverse user intentions and actions based on the user profile and can suggest the combination of information content and services which suit the user profile the most. This paper summarizes the domain engineering framework for such adaptive web applications. The framework provides guidelines to develop adaptive web applications as members of a family. It suggests how to utilize the design artifacts as knowledge which can be used...

  4. A novel design of hidden web crawler using ontology

    OpenAIRE

    Manvi; Bhatia, Komal Kumar; Dixit, Ashutosh

    2015-01-01

    Deep Web is content hidden behind HTML forms. Since it represents a large portion of the structured, unstructured and dynamic data on the Web, accessing Deep-Web content has been a long challenge for the database community. This paper describes a crawler for accessing Deep-Web using Ontologies. Performance evaluation of the proposed work showed that this new approach has promising results.
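
    As a rough illustration of the idea (not the authors' crawler), the sketch below expands a seed concept through a toy ontology and submits each term to a search form, which is how a hidden-web crawler can reach dynamically generated result pages. The endpoint URL, form field and ontology content are placeholders.

```python
# Hedged sketch of an ontology-driven deep-web crawl: fill a search form with
# terms taken from a small domain ontology and collect the result pages.
# The URL, form field and ontology terms are placeholders, not from the paper.

import requests

ONTOLOGY = {
    "vehicle": ["car", "truck", "motorcycle"],
    "car": ["sedan", "hatchback", "SUV"],
}

def expand(term, depth=1):
    """Return the term plus its narrower concepts up to the given depth."""
    terms = {term}
    if depth > 0:
        for child in ONTOLOGY.get(term, []):
            terms |= expand(child, depth - 1)
    return terms

def crawl(form_url, field_name, seed_term):
    pages = {}
    for query in expand(seed_term):
        resp = requests.get(form_url, params={field_name: query}, timeout=10)
        if resp.ok:
            pages[query] = resp.text   # dynamically generated deep-web page
    return pages

# Example (placeholder endpoint): crawl("https://example.org/search", "q", "vehicle")
```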

  5. Data transfer based on intelligent ethernet card

    International Nuclear Information System (INIS)

    Zhu Haitao; Chinese Academy of Sciences, Beijing; Chu Yuanping; Zhao Jingwei

    2007-01-01

    Intelligent Ethernet cards are widely used in systems where the network throughput is very high, such as the DAQ systems of modern high energy physics experiments and web services. With the example of a commercial intelligent Ethernet card, this paper introduces the architecture, the principle and the process of intelligent Ethernet cards. In addition, the results of several experiments showing the differences between intelligent Ethernet cards and general ones are also presented. (authors)

  6. Deep learning with Python

    CERN Document Server

    Chollet, Francois

    2018-01-01

    DESCRIPTION Deep learning is applicable to a widening range of artificial intelligence problems, such as image classification, speech recognition, text classification, question answering, text-to-speech, and optical character recognition. Deep Learning with Python is structured around a series of practical code examples that illustrate each new concept introduced and demonstrate best practices. By the time you reach the end of this book, you will have become a Keras expert and will be able to apply deep learning in your own projects. KEY FEATURES • Practical code examples • In-depth introduction to Keras • Teaches the difference between Deep Learning and AI ABOUT THE TECHNOLOGY Deep learning is the technology behind photo tagging systems at Facebook and Google, self-driving cars, speech recognition systems on your smartphone, and much more. AUTHOR BIO Francois Chollet is the author of Keras, one of the most widely used libraries for deep learning in Python. He has been working with deep neural ...
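
    For readers unfamiliar with the style of example the blurb refers to, the following sketch shows a minimal Keras workflow of the kind the book teaches (it is not code from the book); it assumes TensorFlow 2.x with its bundled tf.keras and the built-in MNIST dataset.

```python
# Minimal Keras image-classification sketch in the spirit of the book's
# practical examples (not code from the book); assumes TensorFlow 2.x.

import tensorflow as tf

(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0   # scale pixels to [0, 1]

model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x_train, y_train, epochs=2, validation_data=(x_test, y_test))
```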

  7. Semantic Web

    Directory of Open Access Journals (Sweden)

    Anna Lamandini

    2011-06-01

    Full Text Available The semantic Web is a technology at the service of knowledge, aimed at accessibility and the sharing of content and at facilitating interoperability between different systems; as such it is one of the nine key technological pillars of TIC (technologies for information and communication) within the third theme, programme-specific cooperation, of the seventh framework programme for research and development (7°PQRS, 2007-2013). As a system it seeks to overcome the overload or excess of irrelevant information on the Internet, in order to facilitate specific or pertinent research. It is an extension of the existing Web in which the aim is cooperation between computers and people (the dream of Sir Tim Berners-Lee), where machines can give more support to people when integrating and elaborating data in order to obtain inferences and a global sharing of data. It is a technology able to favour the development of a “data web”, in other words the creation of a space of interconnected and shared data sets (Linked Data) which allows users to link different types of data coming from different sources. It is a technology that will have a great effect on everyday life since it will permit the planning of “intelligent applications” in various sectors such as education and training, research, the business world, public information, tourism, health and e-government. It is an innovative technology that activates a social transformation (socio-semantic Web) on a world level, since it redefines the cognitive universe of users and enables the sharing not only of information but of significance (collective and connected intelligence).
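
    The “data web” idea mentioned above, linking data of different types coming from different sources, can be sketched with the rdflib library; the resources, predicates and namespaces below are made-up examples rather than anything from the article.

```python
# Hedged Linked Data sketch with rdflib: link resources from two "sources"
# in one RDF graph and query it with SPARQL. Resources and predicates are
# made-up examples, not data from the cited article.

from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import FOAF, RDF

EX = Namespace("http://example.org/")
g = Graph()

# "Source one": a person description.
alice = URIRef("http://example.org/people/alice")
g.add((alice, RDF.type, FOAF.Person))
g.add((alice, FOAF.name, Literal("Alice")))

# "Source two": a dataset that references the same person URI.
g.add((URIRef("http://example.org/courses/sw101"), EX.teacher, alice))

for row in g.query(
    "SELECT ?course ?name WHERE { ?course <http://example.org/teacher> ?p . "
    "?p <http://xmlns.com/foaf/0.1/name> ?name }"
):
    print(row.course, row.name)
```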

  8. Web Caching

    Indian Academy of Sciences (India)

    leveraged through Web caching technology. Specifically, Web caching becomes an ... Web routing can improve the overall performance of the Internet. Web caching is similar to memory system caching - a Web cache stores Web resources in ...
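
    The analogy drawn above between Web caching and memory-system caching can be made concrete with a small sketch: an LRU store keyed by URL that serves repeated requests from memory and only fetches from the origin on a miss. The capacity and fetch mechanism are illustrative choices, not part of the cited article.

```python
# Minimal web-cache sketch: an LRU store keyed by URL, analogous to a
# memory-system cache. The fetch function and capacity are illustrative.

from collections import OrderedDict
import requests

class WebCache:
    def __init__(self, capacity=100):
        self.capacity = capacity
        self._store = OrderedDict()          # URL -> response body

    def get(self, url):
        if url in self._store:               # cache hit: refresh recency
            self._store.move_to_end(url)
            return self._store[url]
        body = requests.get(url, timeout=10).text   # cache miss: fetch origin
        self._store[url] = body
        if len(self._store) > self.capacity: # evict least recently used entry
            self._store.popitem(last=False)
        return body

# cache = WebCache(); page = cache.get("https://example.org/")
```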

  9. A business intelligence approach using web search tools and online data reduction techniques to examine the value of product-enabled services

    DEFF Research Database (Denmark)

    Tanev, Stoyan; Liotta, Giacomo; Kleismantas, Andrius

    2015-01-01

    in Canada and Europe. It adopts an innovative methodology based on online textual data that could be implemented in advanced business intelligence tools aiming at the facilitation of innovation, marketing and business decision making. Combinations of keywords referring to different aspects of service value......-service innovation as a competitive advantage on the marketplace. On the other hand, the focus of EU firms on innovative hybrid offerings is not explicitly related to business differentiation and competitiveness....

  10. Artificial intelligence

    CERN Document Server

    Hunt, Earl B

    1975-01-01

    Artificial Intelligence provides information pertinent to the fundamental aspects of artificial intelligence. This book presents the basic mathematical and computational approaches to problems in the artificial intelligence field.Organized into four parts encompassing 16 chapters, this book begins with an overview of the various fields of artificial intelligence. This text then attempts to connect artificial intelligence problems to some of the notions of computability and abstract computing devices. Other chapters consider the general notion of computability, with focus on the interaction bet

  11. Intelligent mechatronics

    Energy Technology Data Exchange (ETDEWEB)

    Hashimoto, H. [The University of Tokyo, Tokyo (Japan). Institute of Industrial Science

    1995-10-01

    Intelligent mechatronics (IM) was explained as follows: the study of IM essentially targets the realization of a robot, but at the present stage the target is the creation of new value through the intellectualization of machines, that is, the combination of the information infrastructure with intelligent machine systems. IM is also thought to be constituted of positively used computers and micromechatronics. The paper next introduces examples of IM study, mainly those the author is concerned with, as shown below: sensor gloves, robot hands, robot eyes, teleoperation, three-dimensional object recognition, mobile robots, magnetic bearings, construction of a remotely controlled unmanned dam, robot networks, sensitivity communication using neuro baby, etc. 27 figs.

  12. Artificial Intelligence and Moral intelligence

    OpenAIRE

    Laura Pana

    2008-01-01

    We discuss the thesis that the implementation of a moral code in the behaviour of artificial intelligent systems needs a specific form of human and artificial intelligence, not just an abstract intelligence. We present intelligence as a system with an internal structure and the structural levels of the moral system, as well as certain characteristics of artificial intelligent agents which can/must be treated as 1- individual entities (with a complex, specialized, autonomous or selfdetermined,...

  13. Applied Semantic Web Technologies

    CERN Document Server

    Sugumaran, Vijayan

    2011-01-01

    The rapid advancement of semantic web technologies, along with the fact that they are at various levels of maturity, has left many practitioners confused about the current state of these technologies. Focusing on the most mature technologies, Applied Semantic Web Technologies integrates theory with case studies to illustrate the history, current state, and future direction of the semantic web. It maintains an emphasis on real-world applications and examines the technical and practical issues related to the use of semantic technologies in intelligent information management. The book starts with

  14. Semantic Business Intelligence - a New Generation of Business Intelligence

    OpenAIRE

    Dinu AIRINEI; Dora-Anca BERTA

    2012-01-01

    Business Intelligence solutions are applications used by companies to manage, process and analyze data in order to support substantiated decisions. In the context of the Semantic Web, the development trend is to integrate semantic, unstructured data, so that business intelligence solutions must be redesigned in such a manner that they can analyze, process and synthesize, in addition to traditional data, data integrated with semantics of another form and structure. This invariably leads to the appearance of a new BI solutio...

  15. Food-web dynamics and isotopic niches in deep-sea communities residing in a submarine canyon and on the adjacent open slopes

    NARCIS (Netherlands)

    Demopoulos, A.W.J.; McClain-Counts, J.; Ross, S.W.; Brooke, S.; Mienis, F.

    2017-01-01

    Examination of food webs and trophic niches provides insights into organisms' functional ecology, yet few studies have examined trophodynamics within submarine canyons, where the interaction of canyon morphology and oceanography influences habitat provision and food deposition. Using stable isotope

  16. Fuzzy Clustering: An Approach for Mining Usage Profiles from Web

    OpenAIRE

    Ms.Archana N. Boob; Prof. D. M. Dakhane

    2012-01-01

    Web usage mining is an application of data mining technology to mine the data in web server log files. It can discover the browsing patterns of users and correlations between web pages. Web usage mining provides support for web site design, personalization services and other business decision making. Web mining applies data mining, artificial intelligence, chart technology and so on to web data and traces users' visiting characteris...
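
    A minimal sketch of the core technique, fuzzy c-means clustering over session vectors derived from a server log, is given below; the session data, number of clusters and fuzzifier are made-up illustrations, not the authors' experimental setup.

```python
# Hedged sketch: fuzzy c-means over toy "session vectors" (per-page visit
# counts from a server log). Data and parameters are illustrative only.

import numpy as np

def fuzzy_cmeans(X, c=2, m=2.0, iters=100, seed=0):
    """Return cluster centres and the fuzzy membership matrix U (n x c)."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)            # each row sums to 1
    for _ in range(iters):
        W = U ** m                               # fuzzified memberships
        centres = (W.T @ X) / W.sum(axis=0)[:, None]
        dist = np.linalg.norm(X[:, None, :] - centres[None, :, :], axis=2) + 1e-9
        U = 1.0 / (dist ** (2.0 / (m - 1.0)))    # standard FCM update
        U /= U.sum(axis=1, keepdims=True)
    return centres, U

# Rows: sessions; columns: visit counts for /home, /products, /support.
sessions = np.array([[5, 1, 0], [4, 2, 0], [0, 1, 6], [1, 0, 5]], dtype=float)
centres, memberships = fuzzy_cmeans(sessions)
print(np.round(memberships, 2))   # soft usage profiles per session
```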

  17. Veille technologique et intelligence économique en PME et TPE : réalités d'une approche nouvelle avec le Web 2.0

    OpenAIRE

    DIAKHATE, Djibril

    2011-01-01

    With the web as a platform and the development of collaborative tools (blogs, RSS feeds, social networking, information sharing...), information is changing and its production becomes increasingly simple. The average user, who since the advent of the Internet has had little more than the role of a consumer of information, is transformed into a "consumer-producer" of information. He is at the heart of the new system of production and dissemination of information. This change, whose characteristics cannot b...

  18. Intelligence Artificielle Distribuée Et Gestion Des Connaissances : Ontologies Et Systèmes Multi-Agents Pour Un Web Sémantique Organisationnel

    OpenAIRE

    Gandon , Fabien

    2002-01-01

    This work concerns multi-agent systems for the management of a corporate semantic web based on an ontology. It was carried out in the context of the European project CoMMA, focusing on two application scenarios: supporting technology monitoring activities and assisting the integration of a new employee into the organisation. Three aspects were essentially developed in this work: the design of a multi-agent architecture supporting both scenarios, and the organisational top-down approach followed to i...

  19. Exploring the academic invisible web

    OpenAIRE

    Lewandowski, Dirk; Mayr, Philipp

    2006-01-01

    Purpose: To provide a critical review of Bergman’s 2001 study on the Deep Web. In addition, we bring a new concept into the discussion, the Academic Invisible Web (AIW). We define the Academic Invisible Web as consisting of all databases and collections relevant to academia but not searchable by the general-purpose internet search engines. Indexing this part of the Invisible Web is central to scientific search engines. We provide an overview of approaches followed thus far. Design/methodol...

  20. Space Radiation Intelligence System (SPRINTS), Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — NextGen Federal Systems proposes an innovative SPace Radiation INTelligence System (SPRINTS) which provides an interactive and web-delivered capability that...

  1. Artificial Intelligence in Cardiology.

    Science.gov (United States)

    Johnson, Kipp W; Torres Soto, Jessica; Glicksberg, Benjamin S; Shameer, Khader; Miotto, Riccardo; Ali, Mohsin; Ashley, Euan; Dudley, Joel T

    2018-06-12

    Artificial intelligence and machine learning are poised to influence nearly every aspect of the human condition, and cardiology is not an exception to this trend. This paper provides a guide for clinicians on relevant aspects of artificial intelligence and machine learning, reviews selected applications of these methods in cardiology to date, and identifies how cardiovascular medicine could incorporate artificial intelligence in the future. In particular, the paper first reviews predictive modeling concepts relevant to cardiology such as feature selection and frequent pitfalls such as improper dichotomization. Second, it discusses common algorithms used in supervised learning and reviews selected applications in cardiology and related disciplines. Third, it describes the advent of deep learning and related methods collectively called unsupervised learning, provides contextual examples both in general medicine and in cardiovascular medicine, and then explains how these methods could be applied to enable precision cardiology and improve patient outcomes. Copyright © 2018 The Authors. Published by Elsevier Inc. All rights reserved.

  2. Novel applications of intelligent systems

    CERN Document Server

    Kasabov, Nikola; Filev, Dimitar; Jotsov, Vladimir

    2016-01-01

    In this carefully edited book some selected results of theoretical and applied research in the field of broadly perceived intelligent systems are presented. The problems range from industrial to web and problem-independent applications. All this is united under the slogan "Intelligent systems conquer the world". The book brings together innovation projects with analytical research, invention, retrieval and processing of knowledge, and logical applications in technology. The book is aimed at a wide circle of readers and particularly at the young generation of IT/ICT experts who will build the next generations of intelligent systems.

  3. Artificial Intelligence.

    Science.gov (United States)

    Information Technology Quarterly, 1985

    1985-01-01

    This issue of "Information Technology Quarterly" is devoted to the theme of "Artificial Intelligence." It contains two major articles: (1) Artificial Intelligence and Law" (D. Peter O'Neill and George D. Wood); (2) "Artificial Intelligence: A Long and Winding Road" (John J. Simon, Jr.). In addition, it contains two sidebars: (1) "Calculating and…

  4. Competitive Intelligence.

    Science.gov (United States)

    Bergeron, Pierrette; Hiller, Christine A.

    2002-01-01

    Reviews the evolution of competitive intelligence since 1994, including terminology and definitions and analytical techniques. Addresses the issue of ethics; explores how information technology supports the competitive intelligence process; and discusses education and training opportunities for competitive intelligence, including core competencies…

  5. Intelligence Ethics:

    DEFF Research Database (Denmark)

    Rønn, Kira Vrist

    2016-01-01

    Questions concerning what constitutes a morally justified conduct of intelligence activities have received increased attention in recent decades. However, intelligence ethics is not yet homogeneous or embedded as a solid research field. The aim of this article is to sketch the state of the art of intelligence ethics and point out subjects for further scrutiny in future research. The review clusters the literature on intelligence ethics into two groups: respectively, contributions on external topics (i.e., the accountability of and the public trust in intelligence agencies) and internal topics (i.e., the search for an ideal ethical framework for intelligence actions). The article concludes that there are many holes to fill for future studies on intelligence ethics, both in external and internal discussions. Thus, the article is an invitation – especially to moral philosophers and political theorists...

  6. The Intelligent Technologies of Electronic Information System

    Science.gov (United States)

    Li, Xianyu

    2017-08-01

    Based upon a synopsis of system intelligence and information services, this paper puts forward the attributes and the logical structure of information services, sets forth an intelligent technology framework for electronic information systems, and presents a series of measures, such as optimizing business information flow, advancing data-driven decision capability, improving information fusion precision, strengthening deep learning applications and enhancing prognostics and health management, and demonstrates system operation effectiveness. This will benefit the enhancement of system intelligence.

  7. ICT-Supported Gaming for Competitive Intelligence

    NARCIS (Netherlands)

    Achterbergh, J.M.I.M.; Khosrow-Pour, M.

    2005-01-01

    Collecting and processing competitive intelligence for the purpose of strategy formulation are complex activities requiring deep insight into, and models of, the “organization in its environment.” These insights and models need to be not only shared between CI (competitive intelligence) practitioners

  8. Intelligence Naturelle et Intelligence Artificielle

    OpenAIRE

    Dubois, Daniel

    2011-01-01

    This article presents a systemic approach to the concept of natural intelligence, with the objective of creating an artificial intelligence. Natural intelligence, human and non-human animal, is thus a function composed of faculties that allow one to know and to understand. Moreover, natural intelligence remains inseparable from its structure, namely the organs of the brain and the body. The temptation is great to endow computer systems with an artificial intelligence ...

  9. Applying an intelligent model and sensitivity analysis to inspect mass transfer kinetics, shrinkage and crust color changes of deep-fat fried ostrich meat cubes.

    Science.gov (United States)

    Amiryousefi, Mohammad Reza; Mohebbi, Mohebbat; Khodaiyan, Faramarz

    2014-01-01

    The objectives of this study were to use image analysis and artificial neural networks (ANN) to predict mass transfer kinetics as well as color changes and shrinkage of deep-fat fried ostrich meat cubes. Two generalized feedforward networks were developed separately, using the operating conditions as inputs. Results, based on the high correlation coefficients between experimental and predicted values, showed a proper fit. Sensitivity analysis of the selected ANNs showed that, among the input variables, moisture content (MC) and fat content (FC) were most sensitive to frying temperature. Similarly, for the second ANN architecture, microwave power density was the most influential variable, having the maximum impact on both shrinkage percentage and color changes. Copyright © 2013 Elsevier Ltd. All rights reserved.
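
    The modelling step described above, a feedforward network mapping operating conditions to quality attributes, can be sketched with scikit-learn's MLPRegressor on synthetic data; the input ranges, target relations and network size are assumptions for illustration only, not the study's measurements.

```python
# Hedged sketch of a small feedforward regressor mapping frying conditions to
# moisture and fat content; synthetic data, not the study's measurements.

import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
# Inputs: frying temperature (deg C), time (min), microwave power density (W/g).
X = rng.uniform([150, 2, 0.0], [190, 10, 1.0], size=(200, 3))
# Synthetic targets: moisture content and fat content (arbitrary relations).
y = np.column_stack([
    70 - 0.15 * X[:, 0] - 1.2 * X[:, 1] + rng.normal(0, 1.0, 200),
    5 + 0.05 * X[:, 0] + 0.8 * X[:, 1] - 2.0 * X[:, 2] + rng.normal(0, 0.5, 200),
])

model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
model.fit(X, y)
print(model.predict([[170, 6, 0.5]]))   # predicted [MC, FC] for one condition
```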

  10. Geospatial Semantics and the Semantic Web

    CERN Document Server

    Ashish, Naveen

    2011-01-01

    The availability of geographic and geospatial information and services, especially on the open Web, has become abundant in the last several years with the proliferation of online maps, geo-coding services, geospatial Web services and geospatially enabled applications. The need for geospatial reasoning has significantly increased in many everyday applications, including personal digital assistants, Web search applications, location-aware mobile services, specialized systems for emergency response, medical triaging, intelligence analysis and more. Geospatial Semantics and the Semantic Web: Foundation

  11. 2nd International Conference on Intelligent Computing, Communication & Devices

    CERN Document Server

    Popentiu-Vladicescu, Florin

    2017-01-01

    The book presents high quality papers presented at 2nd International Conference on Intelligent Computing, Communication & Devices (ICCD 2016) organized by Interscience Institute of Management and Technology (IIMT), Bhubaneswar, Odisha, India, during 13 and 14 August, 2016. The book covers all dimensions of intelligent sciences in its three tracks, namely, intelligent computing, intelligent communication and intelligent devices. intelligent computing track covers areas such as intelligent and distributed computing, intelligent grid and cloud computing, internet of things, soft computing and engineering applications, data mining and knowledge discovery, semantic and web technology, hybrid systems, agent computing, bioinformatics, and recommendation systems. Intelligent communication covers communication and network technologies, including mobile broadband and all optical networks that are the key to groundbreaking inventions of intelligent communication technologies. This covers communication hardware, soft...

  12. Maximum Spanning Tree Model on Personalized Web Based Collaborative Learning in Web 3.0

    OpenAIRE

    Padma, S.; Seshasaayee, Ananthi

    2012-01-01

    Web 3.0 is an evolving extension of the current web environment. Information in Web 3.0 can be collaborated on and communicated when queried. The Web 3.0 architecture provides an excellent learning experience to students. Web 3.0 is 3D, media-centric and semantic. Web-based learning has been on the rise in recent years. Web 3.0 has intelligent agents as tutors to collect and disseminate the answers to the queries posed by students. Completely interactive learners' queries determine the customization of...

  13. Zooplankton predators and prey: body size and stable isotope to investigate the pelagic food web in a deep lake (Lake Iseo, Northern Italy)

    Directory of Open Access Journals (Sweden)

    Barbara Leoni

    2016-09-01

    Full Text Available Seasonal changes in the trophic position and food sources of zooplankton taxa in a deep subalpine lake (Lake Iseo, Northern Italy) were investigated during the year 2011. Furthermore, combined carbon and nitrogen Stable Isotope Analysis (SIA) was paired with size-specific analyses of both the major predatory cladoceran (Leptodora kindtii, Focke) and two potential prey (the Daphnia longispina complex and Eubosmina longicornis). SIA studies have been extremely useful for tracking energy flow through complex trophic networks; however, when applied to analyze the relation between only two or a few species they may lead to misleading interpretations. In fact, integrating size-specificity allowed us to understand why the L. kindtii nitrogen isotopic fingerprint fully overlapped with that of Daphnia in spring. By investigating changes in L. kindtii's feeding basket, we found that in spring L. kindtii mainly relied upon E. longicornis as prey, Daphnia being of too large a body size to be captured by L. kindtii. Among prey encountered directly in front of a free-swimming Leptodora, only those able to fit into the basket opening can be captured. As basket diameter increases with animal body length, size selection of prey depends on L. kindtii body length. As in other deep, subalpine lakes, E. longicornis was less 15N-enriched than Daphnia, most likely because it exploited nitrogen-fixing cyanobacteria colonies, commonly detected in Lake Iseo with the onset of thermal stratification. Cyclopoid adults were at the top of the zooplankton food chain and could potentially be feeding on Daphnia. They, however, likely fed in a different habitat (>20 m deep water), as suggested by a non-negligible carbon fractionation. The results overall suggest that size-specificity is crucial for addressing spatial and temporal changes in trophic links between organisms composing the two hierarchical levels within the open-water zooplankton community.

  14. Underwater Web Work

    Science.gov (United States)

    Wighting, Mervyn J.; Lucking, Robert A.; Christmann, Edwin P.

    2004-01-01

    Teachers search for ways to enhance oceanography units in the classroom. There are many online resources available to help one explore the mysteries of the deep. This article describes a collection of Web sites on this topic appropriate for middle level classrooms.

  15. Building Program Vector Representations for Deep Learning

    OpenAIRE

    Mou, Lili; Li, Ge; Liu, Yuxuan; Peng, Hao; Jin, Zhi; Xu, Yan; Zhang, Lu

    2014-01-01

    Deep learning has made significant breakthroughs in various fields of artificial intelligence. Advantages of deep learning include the ability to capture highly complicated features, weak involvement of human engineering, etc. However, it is still virtually impossible to use deep learning to analyze programs since deep architectures cannot be trained effectively with pure back propagation. In this pioneering paper, we propose the "coding criterion" to build program vector representations, whi...

  16. Artificial Intelligence.

    Science.gov (United States)

    Wash, Darrel Patrick

    1989-01-01

    Making a machine seem intelligent is not easy. As a consequence, demand has been rising for computer professionals skilled in artificial intelligence and is likely to continue to go up. These workers develop expert systems and solve the mysteries of machine vision, natural language processing, and neural networks. (Editor)

  17. Intelligent Design

    DEFF Research Database (Denmark)

    Hjorth, Poul G.

    2005-01-01

    The notion that nature is designed by a divine 'intelligence' is a beautiful philosophical principle. Theories of Intelligent Design as a scientifically grounded theory are, on the other hand, utterly dreadful.

  18. Keeping Dublin Core Simple: Cross-Domain Discovery or Resource Description?; First Steps in an Information Commerce Economy: Digital Rights Management in the Emerging E-Book Environment; Interoperability: Digital Rights Management and the Emerging EBook Environment; Searching the Deep Web: Direct Query Engine Applications at the Department of Energy.

    Science.gov (United States)

    Lagoze, Carl; Neylon, Eamonn; Mooney, Stephen; Warnick, Walter L.; Scott, R. L.; Spence, Karen J.; Johnson, Lorrie A.; Allen, Valerie S.; Lederman, Abe

    2001-01-01

    Includes four articles that discuss Dublin Core metadata, digital rights management and electronic books, including interoperability; and directed query engines, a type of search engine designed to access resources on the deep Web that is being used at the Department of Energy. (LRW)

  19. miRDis: a Web tool for endogenous and exogenous microRNA discovery based on deep-sequencing data analysis.

    Science.gov (United States)

    Zhang, Hanyuan; Vieira Resende E Silva, Bruno; Cui, Juan

    2018-05-01

    Small RNA sequencing is the most widely used tool for microRNA (miRNA) discovery, and shows great potential for the efficient study of miRNA cross-species transport, i.e., by detecting the presence of exogenous miRNA sequences in the host species. Because of the increased appreciation of dietary miRNAs and their far-reaching implication in human health, research interests are currently growing with regard to exogenous miRNAs bioavailability, mechanisms of cross-species transport and miRNA function in cellular biological processes. In this article, we present microRNA Discovery (miRDis), a new small RNA sequencing data analysis pipeline for both endogenous and exogenous miRNA detection. Specifically, we developed and deployed a Web service that supports the annotation and expression profiling data of known host miRNAs and the detection of novel miRNAs, other noncoding RNAs, and the exogenous miRNAs from dietary species. As a proof-of-concept, we analyzed a set of human plasma sequencing data from a milk-feeding study where 225 human miRNAs were detected in the plasma samples and 44 show elevated expression after milk intake. By examining the bovine-specific sequences, data indicate that three bovine miRNAs (bta-miR-378, -181* and -150) are present in human plasma possibly because of the dietary uptake. Further evaluation based on different sets of public data demonstrates that miRDis outperforms other state-of-the-art tools in both detection and quantification of miRNA from either animal or plant sources. The miRDis Web server is available at: http://sbbi.unl.edu/miRDis/index.php.

  20. E-Learning 3.0 = E-Learning 2.0 + Web 3.0?

    Science.gov (United States)

    Hussain, Fehmida

    2012-01-01

    Web 3.0, termed as the semantic web or the web of data is the transformed version of Web 2.0 with technologies and functionalities such as intelligent collaborative filtering, cloud computing, big data, linked data, openness, interoperability and smart mobility. If Web 2.0 is about social networking and mass collaboration between the creator and…

  1. Intelligent playgrounds

    DEFF Research Database (Denmark)

    Larsen, Lasse Juel

    2009-01-01

    This paper examines play, gaming and learning in regard to intelligent playware developed for outdoor use. The key question is how these novel artefacts influence the concepts of play, gaming and learning. Up until now, play and games have been understood as different activities. The paper examines whether this sharp differentiation between the two can be upheld in regard to intelligent playware for outdoor use. Play and game activities will be analysed and viewed in conjunction with learning contexts. The paper stipulates that intelligent playware facilitates rapid shifts in contexts...

  2. Artificial intelligence

    CERN Document Server

    Ennals, J R

    1987-01-01

    Artificial Intelligence: State of the Art Report is a two-part report consisting of the invited papers and the analysis. The editor first gives an introduction to the invited papers before presenting each paper and the analysis, and then concludes with the list of references related to the study. The invited papers explore the various aspects of artificial intelligence. The analysis part assesses the major advances in artificial intelligence and provides a balanced analysis of the state of the art in this field. The Bibliography compiles the most important published material on the subject of

  3. Artificial Intelligence

    CERN Document Server

    Warwick, Kevin

    2011-01-01

    'If AI is outside your field, or you know something of the subject and would like to know more, then Artificial Intelligence: The Basics is a brilliant primer.' - Nick Smith, Engineering and Technology Magazine, November 2011. Artificial Intelligence: The Basics is a concise and cutting-edge introduction to the fast-moving world of AI. The author, Kevin Warwick, a pioneer in the field, examines issues of what it means to be man or machine and looks at advances in robotics which have blurred the boundaries. Topics covered include: how intelligence can be defined; whether machines can 'think'; sensory

  4. DATA EXTRACTION AND LABEL ASSIGNMENT FOR WEB DATABASES

    OpenAIRE

    T. Rajesh; T. Prathap; S.Naveen Nambi; A.R. Arunachalam

    2015-01-01

    Deep Web contents are accessed by queries submitted to Web databases, and the returned data records are wrapped in dynamically generated Web pages (called deep Web pages in this paper). Extracting structured data from deep Web pages is a challenging problem due to the underlying intricate structures of such pages. Until now, a large number of techniques have been proposed to address this problem, but all of them have limitations because they are Web-page-programming...
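
    As a toy illustration of the extraction task (not any of the wrapper techniques surveyed in the paper), the sketch below pulls repeated records out of a dynamically generated result page; the HTML structure and class names are placeholders.

```python
# Hedged sketch: pull repeated data records out of a dynamically generated
# result page with BeautifulSoup. The HTML structure and class names are
# placeholders, not the wrappers discussed in the cited paper.

from bs4 import BeautifulSoup

html = """
<div class="result"><span class="title">Widget A</span><span class="price">9.99</span></div>
<div class="result"><span class="title">Widget B</span><span class="price">14.50</span></div>
"""

soup = BeautifulSoup(html, "html.parser")
records = [
    {
        "title": r.find("span", class_="title").get_text(strip=True),
        "price": float(r.find("span", class_="price").get_text(strip=True)),
    }
    for r in soup.find_all("div", class_="result")
]
print(records)   # list of structured records extracted from the page
```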

  5. Power is only skin deep: an institutional ethnography of nurse-driven outpatient psoriasis treatment in the era of clinic web sites.

    Science.gov (United States)

    Winkelman, Warren J; Halifax, Nancy V Davis

    2007-04-01

    We present an institutional ethnography of hospital-based psoriasis day treatment in the context of evaluating readiness to supplement services and support with a new web site. Through observation, interviews and a critical consideration of documents, forms and other textually mediated discourses in the day-to-day work of nurses and physicians, we come to understand how the historical gender-determined power structure of nurses and physicians impacts nurses' work. On the one hand, nurses' work can have certain social benefits that would usually be considered untenable in traditional healthcare: nurses as primary decision-makers, nurses as experts in the treatment of disease, physicians as secondary consultants, and patients as co-facilitators in care delivery processes. However, these benefits seem to have come at the nurses' expense, as they are required to maintain a cloak of invisibility for themselves and for their workplace, so that the Centre appears like all other outpatient clinics and the nurses do not enjoy appropriate economic recognition. Implications of this negotiated invisibility for the implementation of new information systems in healthcare are discussed.

  6. SLITHER: a web server for generating contiguous conformations of substrate molecules entering into deep active sites of proteins or migrating through channels in membrane transporters.

    Science.gov (United States)

    Lee, Po-Hsien; Kuo, Kuei-Ling; Chu, Pei-Ying; Liu, Eric M; Lin, Jung-Hsin

    2009-07-01

    Many proteins use a long channel to guide the substrate or ligand molecules into the well-defined active sites for catalytic reactions or for switching molecular states. In addition, substrates of membrane transporters can migrate to another side of cellular compartment by means of certain selective mechanisms. SLITHER (http://bioinfo.mc.ntu.edu.tw/slither/or http://slither.rcas.sinica.edu.tw/) is a web server that can generate contiguous conformations of a molecule along a curved tunnel inside a protein, and the binding free energy profile along the predicted channel pathway. SLITHER adopts an iterative docking scheme, which combines with a puddle-skimming procedure, i.e. repeatedly elevating the potential energies of the identified global minima, thereby determines the contiguous binding modes of substrates inside the protein. In contrast to some programs that are widely used to determine the geometric dimensions in the ion channels, SLITHER can be applied to predict whether a substrate molecule can crawl through an inner channel or a half-channel of proteins across surmountable energy barriers. Besides, SLITHER also provides the list of the pore-facing residues, which can be directly compared with many genetic diseases. Finally, the adjacent binding poses determined by SLITHER can also be used for fragment-based drug design.

  7. Intelligent Advertising

    OpenAIRE

    Díaz Pinedo, Edilfredo Eliot

    2012-01-01

    Intelligent Advertisement designs and implements an advertising system for mobile devices in a shopping centre, where customers passively receive advertising on their devices while they are inside.

  8. BUSINESS INTELLIGENCE

    OpenAIRE

    Bogdan Mohor Dumitrita

    2011-01-01

    The purpose of this work is to present business intelligence systems. These systems can be extremely complex and important in modern market competition. Their effectiveness is also reflected in their price, so we have to explore their financial potential before investing. The systems have a twenty-year history, and during that time many such tools have been developed, but few are still in use. A business intelligence system consists of three main areas: the Data Warehouse, ETL tools and tools f...

  9. Intelligent indexing

    International Nuclear Information System (INIS)

    Farkas, J.

    1992-01-01

    In this paper we discuss the relevance of artificial intelligence to the automatic indexing of natural language text. We describe the use of domain-specific, semantically based thesauruses and address the problem of creating adequate knowledge bases for intelligent indexing systems. We also discuss the relevance of the Hilbert space ℓ² to the compact representation of documents and to the definition of the similarity of natural language texts. (author). 17 refs., 2 figs

  10. Intelligent indexing

    Energy Technology Data Exchange (ETDEWEB)

    Farkas, J

    1993-12-31

    In this paper we discuss the relevance of artificial intelligence to the automatic indexing of natural language text. We describe the use of domain-specific, semantically based thesauruses and address the problem of creating adequate knowledge bases for intelligent indexing systems. We also discuss the relevance of the Hilbert space ℓ² to the compact representation of documents and to the definition of the similarity of natural language texts. (author). 17 refs., 2 figs.

  11. Semantic web for the working ontologist effective modeling in RDFS and OWL

    CERN Document Server

    Allemang, Dean

    2011-01-01

    Semantic Web models and technologies provide information in machine-readable languages that enable computers to access the Web more intelligently and perform tasks automatically without the direction of users. These technologies are relatively recent and advancing rapidly, creating a set of unique challenges for those developing applications. Semantic Web for the Working Ontologist is the essential, comprehensive resource on semantic modeling, for practitioners in health care, artificial intelligence, finance, engineering, military intelligence, enterprise architecture, and more. Focused on

  12. New challenges in computational collective intelligence

    Energy Technology Data Exchange (ETDEWEB)

    Nguyen, Ngoc Thanh; Katarzyniak, Radoslaw Piotr [Wroclaw Univ. of Technology (Poland). Inst. of Informatics; Janiak, Adam (eds.) [Wroclaw Univ. of Technology (Poland). Inst. of Computer Engineering, Control and Robotics

    2009-07-01

    The book consists of 29 chapters which have been selected and invited from the submissions to the 1st International Conference on Collective Intelligence - Semantic Web, Social Networks and Multiagent Systems (ICCCI 2009). All chapters in the book discuss various examples of applications of computational collective intelligence and related technologies to such fields as the semantic web, information systems ontologies, social networks, and agent and multiagent systems. The editors hope that the book can be useful for graduate and Ph.D. students in Computer Science, in particular participants in courses on Soft Computing, Multi-Agent Systems and Robotics. This book can also be useful for researchers working on the concept of computational collective intelligence in artificial populations. It is the hope of the editors that readers of this volume can find many inspiring ideas and use them to create new cases of intelligent collectives. Many such challenges are suggested by the approaches and models presented in the individual chapters of this book. (orig.)

  13. Web Mining

    Science.gov (United States)

    Fürnkranz, Johannes

    The World-Wide Web provides every internet citizen with access to an abundance of information, but it becomes increasingly difficult to identify the relevant pieces of information. Research in web mining tries to address this problem by applying techniques from data mining and machine learning to Web data and documents. This chapter provides a brief overview of web mining techniques and research areas, most notably hypertext classification, wrapper induction, recommender systems and web usage mining.
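
    Hypertext classification, one of the techniques listed above, can be sketched in a few lines with TF-IDF features and a linear classifier; the page snippets and labels below are toy data invented for illustration.

```python
# Hedged sketch of hypertext classification, one of the web-mining tasks
# mentioned above: TF-IDF features plus a linear classifier on toy snippets.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

pages = [
    "buy cheap flights and hotel deals online",
    "latest football scores and match reports",
    "book your holiday package with discount fares",
    "league table, fixtures and player transfers",
]
labels = ["travel", "sport", "travel", "sport"]

clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(pages, labels)
print(clf.predict(["weekend city break flight offers"]))   # e.g. ['travel']
```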

  14. Learning Structural Classification Rules for Web-page Categorization

    NARCIS (Netherlands)

    Stuckenschmidt, Heiner; Hartmann, Jens; Van Harmelen, Frank

    2002-01-01

    Content-related metadata plays an important role in the effort of developing intelligent web applications. One of the most established form of providing content-related metadata is the assignment of web-pages to content categories. We describe the Spectacle system for classifying individual web

  15. 1st International Conference on Intelligent Computing and Communication

    CERN Document Server

    Satapathy, Suresh; Sanyal, Manas; Bhateja, Vikrant

    2017-01-01

    The book covers a wide range of topics in Computer Science and Information Technology including swarm intelligence, artificial intelligence, evolutionary algorithms, and bio-inspired algorithms. It is a collection of papers presented at the First International Conference on Intelligent Computing and Communication (ICIC2) 2016. The prime areas of the conference are Intelligent Computing, Intelligent Communication, Bio-informatics, Geo-informatics, Algorithm, Graphics and Image Processing, Graph Labeling, Web Security, Privacy and e-Commerce, Computational Geometry, Service Orient Architecture, and Data Engineering.

  16. Artificial intelligence in medicine.

    Science.gov (United States)

    Hamet, Pavel; Tremblay, Johanne

    2017-04-01

    Artificial Intelligence (AI) is a general term that implies the use of a computer to model intelligent behavior with minimal human intervention. AI is generally accepted as having started with the invention of robots. The term derives from the Czech word robota, meaning biosynthetic machines used as forced labor. In this field, Leonardo Da Vinci's lasting heritage is today's burgeoning use of robotic-assisted surgery, named after him, for complex urologic and gynecologic procedures. Da Vinci's sketchbooks of robots helped set the stage for this innovation. AI, described as the science and engineering of making intelligent machines, was officially born in 1956. The term is applicable to a broad range of items in medicine such as robotics, medical diagnosis, medical statistics, and human biology-up to and including today's "omics". AI in medicine, which is the focus of this review, has two main branches: virtual and physical. The virtual branch includes informatics approaches from deep learning information management to control of health management systems, including electronic health records, and active guidance of physicians in their treatment decisions. The physical branch is best represented by robots used to assist the elderly patient or the attending surgeon. Also embodied in this branch are targeted nanorobots, a unique new drug delivery system. The societal and ethical complexities of these applications require further reflection, proof of their medical utility, economic value, and development of interdisciplinary strategies for their wider application. Copyright © 2017. Published by Elsevier Inc.

  17. Intelligent Information Systems for Web Product Search

    NARCIS (Netherlands)

    D. Vandic (Damir)

    2017-01-01

    Over the last few years, we have experienced an increase in online shopping. Consequently, there is a need for efficient and effective product search engines. The rapid growth of e-commerce, however, has also introduced some challenges. Studies show that users can get overwhelmed by

  18. DEEPWATER AND NEARSHORE FOOD WEB CHARACTERIZATIONS IN LAKE SUPERIOR

    Science.gov (United States)

    Due to the difficulty associated with sampling deep aquatic systems, food web relationships among deepwater fauna are often poorly known. We are characterizing nearshore versus offshore habitats in the Great Lakes and investigating food web linkages among profundal, pelagic, and ...

  19. Internet-based intelligent information processing systems

    CERN Document Server

    Tonfoni, G; Ichalkaranje, N S

    2003-01-01

    The Internet/WWW has made it possible to easily access quantities of information never available before. However, both the amount of information and the variation in quality pose obstacles to the efficient use of the medium. Artificial intelligence techniques can be useful tools in this context. Intelligent systems can be applied to searching the Internet and data-mining, interpreting Internet-derived material, the human-Web interface, remote condition monitoring and many other areas. This volume presents the latest research on the interaction between intelligent systems (neural networks, adap

  20. New trends in computational collective intelligence

    CERN Document Server

    Kim, Sang-Wook; Trawiński, Bogdan

    2015-01-01

    This book consists of 20 chapters in which the authors deal with different theoretical and practical aspects of new trends in Collective Computational Intelligence techniques. Computational Collective Intelligence methods and algorithms are one of the current trending research topics in areas related to Artificial Intelligence, Soft Computing and Data Mining, among others. Computational Collective Intelligence is a rapidly growing field that is most often understood as an AI sub-field dealing with soft computing methods which enable making group decisions and processing knowledge among autonomous units acting in distributed environments. Web-based Systems, Social Networks, and Multi-Agent Systems very often need these tools for working out consistent knowledge states, resolving conflicts and making decisions. The chapters included in this volume cover a selection of topics and new trends in several domains related to Collective Computational Intelligence: Language and Knowledge Processing, Data Mining Methods an...

  1. Intelligent systems

    CERN Document Server

    Irwin, J David

    2011-01-01

    Technology has now progressed to the point that intelligent systems are replacing humans in decision-making processes as well as aiding in the solution of very complex problems. In many cases intelligent systems are already outperforming human activities. Artificial neural networks are not only capable of learning how to classify patterns, such as images or sequences of events, but they can also effectively model complex nonlinear systems. Their ability to classify sequences of events is probably more popular in industrial applications where there is an inherent need to model nonlinear system

  2. Intelligent Universe

    Energy Technology Data Exchange (ETDEWEB)

    Hoyle, F

    1983-01-01

    The subject is covered in chapters, entitled: chance and the universe (synthesis of proteins; the primordial soup); the gospel according to Darwin (discussion of Darwin theory of evolution); life did not originate on earth (fossils from space; life in space); the interstellar connection (living dust between the stars; bacteria in space falling to the earth; interplanetary dust); evolution by cosmic control (microorganisms; genetics); why aren't the others here (a cosmic origin of life); after the big bang (big bang and steady state); the information rich universe; what is intelligence up to; the intelligent universe.

  3. Artificial intelligence

    International Nuclear Information System (INIS)

    Perret-Galix, D.

    1992-01-01

    A vivid example of the growing need for frontier physics experiments to make use of frontier technology is in the field of artificial intelligence and related themes. This was reflected in the second international workshop on 'Software Engineering, Artificial Intelligence and Expert Systems in High Energy and Nuclear Physics' which took place from 13-18 January at France Telecom's Agelonde site at La Londe des Maures, Provence. It was the second in a series, the first having been held at Lyon in 1990

  4. Artificial Intelligence and Moral intelligence

    Directory of Open Access Journals (Sweden)

    Laura Pana

    2008-07-01

    Full Text Available We discuss the thesis that the implementation of a moral code in the behaviour of artificial intelligent systems needs a specific form of human and artificial intelligence, not just an abstract intelligence. We present intelligence as a system with an internal structure, and the structural levels of the moral system, as well as certain characteristics of artificial intelligent agents which can/must be treated as (1) individual entities (with a complex, specialized, autonomous or self-determined, even unpredictable conduct), (2) entities endowed with diverse or even multiple intelligence forms, like moral intelligence, (3) open and even free-conduct performing systems (with specific, flexible and heuristic mechanisms and procedures of decision), (4) systems which are open to education, not just to instruction, (5) entities with a “lifegraphy”, not just a “stategraphy”, (6) entities equipped not just with automatisms but with beliefs (cognitive and affective complexes), (7) entities capable even of reflection (“moral life” is a form of spiritual, not just of conscious, activity), (8) elements/members of some real (corporal or virtual) community, and (9) cultural beings: free conduct gives cultural value to the action of a “natural” or artificial being. Implementation of such characteristics does not necessarily suppose efforts to design, construct and educate machines like human beings. The human moral code is irremediably imperfect: it is a morality of preference, of accountability (not of responsibility) and a morality of non-liberty, which cannot be remedied by the invention of ethical systems, by the circulation of ideal values or by ethical (even computing) education. But such an imperfect morality needs perfect instruments for its implementation: applications of special logic fields; efficient psychological (theoretical and technical) attainments to endow the machine not just with intelligence, but with conscience and even spirit; comprehensive technical

  5. Present situation and trend of precision guidance technology and its intelligence

    Science.gov (United States)

    Shang, Zhengguo; Liu, Tiandong

    2017-11-01

    This paper first introduces the basic concepts of precision guidance technology and artificial intelligence technology. It then gives a brief introduction to intelligent precision guidance technology and, drawing on foreign projects developing intelligent weapons based on deep learning (the LRASM missile project, the TRACE project and the BLADE project), gives an overview of the current state of precision guidance technology abroad. Finally, the future development trend of intelligent precision guidance technology is summarized; it is mainly concentrated in multiple-target capability, intelligent classification, weak-target detection and recognition, intelligent jamming resistance in complex environments, and multi-source, multi-missile cooperative fighting, among other aspects.

  6. Plant intelligence

    Science.gov (United States)

    Lipavská, Helena; Žárský, Viktor

    2009-01-01

    The concept of plant intelligence, as proposed by Anthony Trewavas, has raised considerable discussion. However, plant intelligence remains loosely defined; often it is either perceived as practically synonymous to Darwinian fitness, or reduced to a mere decorative metaphor. A more strict view can be taken, emphasizing necessary prerequisites such as memory and learning, which requires clarifying the definition of memory itself. To qualify as memories, traces of past events have to be not only stored, but also actively accessed. We propose a criterion for eliminating false candidates of possible plant intelligence phenomena in this stricter sense: an “intelligent” behavior must involve a component that can be approximated by a plausible algorithmic model involving recourse to stored information about past states of the individual or its environment. Re-evaluation of previously presented examples of plant intelligence shows that only some of them pass our test. “You were hurt?” Kumiko said, looking at the scar. Sally looked down. “Yeah.” “Why didn't you have it removed?” “Sometimes it's good to remember.” “Being hurt?” “Being stupid.”—(W. Gibson: Mona Lisa Overdrive) PMID:19816094

  7. Speech Intelligibility

    Science.gov (United States)

    Brand, Thomas

    Speech intelligibility (SI) is important for different fields of research, engineering and diagnostics in order to quantify very different phenomena like the quality of recordings, communication and playback devices, the reverberation of auditoria, characteristics of hearing impairment, benefit using hearing aids or combinations of these things.

  8. Web service composition: a semantic web and automated planning technique application

    Directory of Open Access Journals (Sweden)

    Jaime Alberto Guzmán Luna

    2008-09-01

    Full Text Available This article proposes applying semantic web and artificial intelligence planning techniques to a web services composition model dealing with problems of ambiguity in web service descriptions and with handling incomplete web information. The model uses OWL-S services and implements a planning technique which handles open-world semantics in its reasoning process to resolve these problems. The result is a web services composition system incorporating a module for interpreting OWL-S services and converting them into a planning problem in PDDL, a planning module handling incomplete information, and an execution module that interacts concurrently with the planner to execute each service of the composition plan.

  9. Designing Adaptive Web Applications

    DEFF Research Database (Denmark)

    Dolog, Peter

    2008-01-01

    The unique characteristic of web applications is that they are supposed to be used by a much bigger and more diverse set of users and stakeholders. An example application area is e-Learning or business to business interaction. In an eLearning environment, various users with different backgrounds use the eLearning system to study a discipline. In business to business interaction, different requirements and parameters of exchanged business requests might be served by different services from third parties. Such applications require certain intelligence and a slightly different approach to design. Adaptive web-based applications aim to leave some of their features at the design stage in the form of variables which are dependent on several criteria. The resolution of the variables is called adaptation and can be seen from two perspectives: adaptation by humans to the changed requirements of stakeholders and dynamic system...

  10. Web survey methodology

    CERN Document Server

    Callegaro, Mario; Vehovar, Asja

    2015-01-01

    Web Survey Methodology guides the reader through the past fifteen years of research in web survey methodology. It both provides practical guidance on the latest techniques for collecting valid and reliable data and offers a comprehensive overview of research issues. Core topics from preparation to questionnaire design, recruitment testing to analysis and survey software are all covered in a systematic and insightful way. The reader will be exposed to key concepts and key findings in the literature, covering measurement, non-response, adjustments, paradata, and cost issues. The book also discusses the hottest research topics in survey research today, such as internet panels, virtual interviewing, mobile surveys and the integration with passive measurements, e-social sciences, mixed modes and business intelligence. The book is intended for students, practitioners, and researchers in fields such as survey and market research, psychological research, official statistics and customer satisfaction research.

  11. 76 FR 22940 - Intelligent Transportation Systems Program Advisory Committee; Notice of Meeting

    Science.gov (United States)

    2011-04-25

    ... DEPARTMENT OF TRANSPORTATION Intelligent Transportation Systems Program Advisory Committee; Notice...-363; 5 U.S.C. app. 2), a Web conference of the Intelligent Transportation Systems (ITS) Program... implementation of intelligent transportation systems. Through its sponsor, the ITS Joint Program Office (JPO...

  12. Memory Based Machine Intelligence Techniques in VLSI hardware

    OpenAIRE

    James, Alex Pappachen

    2012-01-01

    We briefly introduce the memory based approaches to emulate machine intelligence in VLSI hardware, describing the challenges and advantages. Implementation of artificial intelligence techniques in VLSI hardware is a practical and difficult problem. Deep architectures, hierarchical temporal memories and memory networks are some of the contemporary approaches in this area of research. The techniques attempt to emulate low level intelligence tasks and aim at providing scalable solutions to high ...

  13. Personalization of Rule-based Web Services.

    Science.gov (United States)

    Choi, Okkyung; Han, Sang Yong

    2008-04-04

    Nowadays Web users have clearly expressed their wishes to receive personalized services directly. Personalization is the way to tailor services directly to the immediate requirements of the user. However, the current Web Services System does not provide any features supporting this, such as personalization of services and intelligent matchmaking. In this research, a flexible, personalized Rule-based Web Services System is proposed to address these problems and to enable efficient search, discovery and construction across general Web documents and Semantic Web documents. This system utilizes matchmaking among service requesters', service providers' and users' preferences using a Rule-based Search Method, and subsequently ranks search results. A prototype of efficient Web Services search and construction for the suggested system is developed based on the current work.
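
    The matchmaking-and-ranking step described above can be illustrated with a small Python sketch; the rules, service entries and preference fields are entirely hypothetical and only stand in for the paper's Rule-based Search Method.

        # Hypothetical advertised services and a user request/preference profile.
        services = [
            {"name": "MapQuickly", "category": "maps",    "cost": 0.0, "rating": 4.1},
            {"name": "GeoPremium", "category": "maps",    "cost": 2.5, "rating": 4.8},
            {"name": "WeatherNow", "category": "weather", "cost": 0.0, "rating": 3.9},
        ]
        request = {"category": "maps"}
        preferences = {"max_cost": 1.0, "min_rating": 4.0}

        def matches(svc):
            """Hard rules: the service must satisfy the request and the user constraints."""
            return (svc["category"] == request["category"]
                    and svc["cost"] <= preferences["max_cost"]
                    and svc["rating"] >= preferences["min_rating"])

        def score(svc):
            """Soft ranking: prefer higher ratings, then lower cost."""
            return (svc["rating"], -svc["cost"])

        ranked = sorted((s for s in services if matches(s)), key=score, reverse=True)
        for s in ranked:
            print(s["name"], s["rating"], s["cost"])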

  14. Web archives

    DEFF Research Database (Denmark)

    Finnemann, Niels Ole

    2018-01-01

    This article deals with general web archives and the principles for selection of materials to be preserved. It opens with a brief overview of reasons why general web archives are needed. Sections two and three present major, long-term web archive initiatives, discuss the purposes and possible values of web archives, and ask how to meet unknown future needs, demands and concerns. Section four analyses three main principles in contemporary web archiving strategies (topic-centric, domain-centric and time-centric archiving strategies) and section five discusses how to combine these to provide a broad and rich archive. Section six is concerned with inherent limitations and why web archives are always flawed. The last sections deal with the question of how web archives may fit into the rapidly expanding, but fragmented, landscape of digital repositories taking care of various parts...

  15. Computational Intelligence and Decision Making Trends and Applications

    CERN Document Server

    Madureira, Ana; Marques, Viriato

    2013-01-01

    This book provides a general overview and original analysis of new developments and applications in several areas of Computational Intelligence and Information Systems. Computational Intelligence has become an important tool for engineers to develop and analyze novel techniques to solve problems in basic sciences such as physics, chemistry, biology, engineering, environment and social sciences.   The material contained in this book addresses the foundations and applications of Artificial Intelligence and Decision Support Systems, Complex and Biological Inspired Systems, Simulation and Evolution of Real and Artificial Life Forms, Intelligent Models and Control Systems, Knowledge and Learning Technologies, Web Semantics and Ontologies, Intelligent Tutoring Systems, Intelligent Power Systems, Self-Organized and Distributed Systems, Intelligent Manufacturing Systems and Affective Computing. The contributions have all been written by international experts, who provide current views on the topics discussed and pr...

  16. Inferring Trust Relationships in Web-Based Social Networks

    National Research Council Canada - National Science Library

    Golbeck, Jennifer; Hendler, James

    2006-01-01

    The growth of web-based social networking and the properties of those networks have created great potential for producing intelligent software that integrates a user's social network and preferences...

  17. Artificial Intelligence.

    Science.gov (United States)

    Lawrence, David R; Palacios-González, César; Harris, John

    2016-04-01

    It seems natural to think that the same prudential and ethical reasons for mutual respect and tolerance that one has vis-à-vis other human persons would hold toward newly encountered paradigmatic but nonhuman biological persons. One also tends to think that they would have similar reasons for treating us humans as creatures that count morally in our own right. This line of thought transcends biological boundaries, namely with regard to artificially (super)intelligent persons, but is this a safe assumption? The issue concerns ultimate moral significance: the significance possessed by human persons, persons from other planets, and hypothetical nonorganic persons in the form of artificial intelligence (AI). This article investigates why our possible relations to AI persons could be more complicated than they first might appear, given that they might possess a radically different nature to us, to the point that civilized or peaceful coexistence in a determinate geographical space could be impossible to achieve.

  18. Intelligent Tutor

    Science.gov (United States)

    1990-01-01

    NASA also seeks to advance American education by employing the technology utilization process to develop a computerized, artificial intelligence-based Intelligent Tutoring System (ITS) to help high school and college physics students. The tutoring system is designed for use with the lecture and laboratory portions of a typical physics instructional program. Its importance lies in its ability to observe continually as a student develops problem solutions and to intervene when appropriate with assistance specifically directed at the student's difficulty and tailored to his skill level and learning style. ITS originated as a project of the Johnson Space Center (JSC). It is being developed by JSC's Software Technology Branch in cooperation with Dr. R. Bowen Loftin at the University of Houston-Downtown. The program is jointly sponsored by NASA and ACOT (Apple Classrooms of Tomorrow). Other organizations providing support include the Texas Higher Education Coordinating Board, the National Research Council, Pennzoil Products Company and the George R. Brown Foundation. The Physics I class of Clear Creek High School, League City, Texas, is providing the classroom environment for test and evaluation of the system. The ITS is a spinoff product developed earlier to integrate artificial intelligence into training/tutoring systems for NASA astronauts, flight controllers and engineers.

  19. Artificial intelligence in radiology.

    Science.gov (United States)

    Hosny, Ahmed; Parmar, Chintan; Quackenbush, John; Schwartz, Lawrence H; Aerts, Hugo J W L

    2018-05-17

    Artificial intelligence (AI) algorithms, particularly deep learning, have demonstrated remarkable progress in image-recognition tasks. Methods ranging from convolutional neural networks to variational autoencoders have found myriad applications in the medical image analysis field, propelling it forward at a rapid pace. Historically, in radiology practice, trained physicians visually assessed medical images for the detection, characterization and monitoring of diseases. AI methods excel at automatically recognizing complex patterns in imaging data and providing quantitative, rather than qualitative, assessments of radiographic characteristics. In this Opinion article, we establish a general understanding of AI methods, particularly those pertaining to image-based tasks. We explore how these methods could impact multiple facets of radiology, with a general focus on applications in oncology, and demonstrate ways in which these methods are advancing the field. Finally, we discuss the challenges facing clinical implementation and provide our perspective on how the domain could be advanced.

  20. Intelligent Design and Intelligent Failure

    Science.gov (United States)

    Jerman, Gregory

    2015-01-01

    Good Evening, my name is Greg Jerman and for nearly a quarter century I have been performing failure analysis on NASA's aerospace hardware. During that time I had the distinct privilege of keeping the Space Shuttle flying for two thirds of its history. I have analyzed a wide variety of failed hardware from simple electrical cables to cryogenic fuel tanks to high temperature turbine blades. During this time I have found that for all the time we spend intelligently designing things, we need to be equally intelligent about understanding why things fail. The NASA Flight Director for Apollo 13, Gene Kranz, is best known for the expression "Failure is not an option." However, NASA history is filled with failures both large and small, so it might be more accurate to say failure is inevitable. It is how we react and learn from our failures that makes the difference.

  1. Türkçe Öğrenimi İçin Web Tabanlı Zeki Öğretim Sistemi (Türkzös) Ve Değerlendirmesi / Web Based Intelligent Tutoring System For Turkish Learning (Türkzös) And Evaluation

    Directory of Open Access Journals (Sweden)

    Nursal ARICI

    2013-09-01

    Full Text Available E-learning is a learning model which is the product of developments in information technologies. This model provides advantages in order to enrich the learning contents with audio-visual items and to deliver these contents to people at any time and any place. In language training, particularly in English, e-learning models are widely used. This model is crucial in the language learning and teaching area. With this awareness of the e-learning system in the language learning and teaching area, a special e-learning system named TÜRKZÖS has been developed for Turkish language teaching. In this article, the contributions of the developed e-learning system are introduced and explained together with the opinions of Turkish teachers and teacher candidates about the system. The purpose of developing TÜRKZÖS is to assist Turkish language learners and teachers by providing the opportunities of information technologies. The system has been designed to serve through the Internet and it provides the opportunity to use enriched elements such as the web, speech synthesis-recognition systems, animation, image and shape. The system, supported by these enriched elements, aims to improve basic skills such as reading, speaking, writing and listening in Turkish training. Another feature of TÜRKZÖS is that it is an intelligent tutoring system developed by using artificial intelligence techniques. The features which make the system intelligent are the following: (i) it can present the loaded domain content adapted to the student's knowledge level and personal capability, (ii) it can lead and give intelligent assistance and suitable guidance to the student. TÜRKZÖS consists of eight components. Three of them are the standard components in Intelligent Tutoring Systems. The other components are the speech synthesis and the speech recognition, which give the opportunity to arrange activities to improve the reading and speaking skills and can be exercised by

  2. Web Engineering

    Energy Technology Data Exchange (ETDEWEB)

    White, Bebo

    2003-06-23

    Web Engineering is the application of systematic, disciplined and quantifiable approaches to development, operation, and maintenance of Web-based applications. It is both a pro-active approach and a growing collection of theoretical and empirical research in Web application development. This paper gives an overview of Web Engineering by addressing the questions: (a) why is it needed? (b) what is its domain of operation? (c) how does it help and what should it do to improve Web application development? and (d) how should it be incorporated in education and training? The paper discusses the significant differences that exist between Web applications and conventional software, the taxonomy of Web applications, the progress made so far and the research issues and experience of creating a specialization at the master's level. The paper reaches a conclusion that Web Engineering at this stage is a moving target since Web technologies are constantly evolving, making new types of applications possible, which in turn may require innovations in how they are built, deployed and maintained.

  3. International Conference on Frontiers of Intelligent Computing : Theory and Applications

    CERN Document Server

    Udgata, Siba; Biswal, Bhabendra

    2014-01-01

    This volume contains the papers presented at the Second International Conference on Frontiers in Intelligent Computing: Theory and Applications (FICTA-2013), held during 14-16 November 2013 and organized by Bhubaneswar Engineering College (BEC), Bhubaneswar, Odisha, India. It contains 63 papers focusing on the application of intelligent techniques, including evolutionary computation techniques such as genetic algorithms, particle swarm optimization and teaching-learning based optimization, for various engineering applications such as data mining, fuzzy systems, machine intelligence and ANN, web technologies and multimedia applications, and intelligent computing and networking.

  4. Invited talk: Deep Learning Meets Physics

    CERN Multimedia

    CERN. Geneva

    2018-01-01

    Deep Learning has emerged as one of the most successful fields of machine learning and artificial intelligence with overwhelming success in industrial speech, text and vision benchmarks. Consequently it evolved into the central field of research for IT giants like Google, facebook, Microsoft, Baidu, and Amazon. Deep Learning is founded on novel neural network techniques, the recent availability of very fast computers, and massive data sets. In its core, Deep Learning discovers multiple levels of abstract representations of the input. The main obstacle to learning deep neural networks is the vanishing gradient problem. The vanishing gradient impedes credit assignment to the first layers of a deep network or to early elements of a sequence, therefore limits model selection. Major advances in Deep Learning can be related to avoiding the vanishing gradient like stacking, ReLUs, residual networks, highway networks, and LSTM. For Deep Learning, we suggested self-normalizing neural networks (SNNs) which automatica...

  5. Responsible vendors, intelligent consumers: Silk Road, the online revolution in drug trading.

    Science.gov (United States)

    Van Hout, Marie Claire; Bingham, Tim

    2014-03-01

    Silk Road is located on the Deep Web and provides an anonymous transacting infrastructure for the retail of drugs and pharmaceuticals. Members are attracted to the site due to protection of identity by screen pseudonyms, variety and quality of product listings, selection of vendors based on reviews, reduced personal risks, stealth of product delivery, development of personal connections with vendors in stealth modes and forum activity. The study aimed to explore vendor accounts of Silk Road as retail infrastructure. A single and holistic case study with embedded units approach (Yin, 2003) was chosen to explore the accounts of vendor subunits situated within the Silk Road marketplace. Vendors (n=10) completed an online interview via the direct message facility and via Tor mail. Vendors described themselves as 'intelligent and responsible' consumers of drugs. Decisions to commence vending operations on the site centred on simplicity in setting up vendor accounts, and opportunity to operate within a low risk, high traffic, high mark-up, secure and anonymous Deep Web infrastructure. The embedded online culture of harm reduction ethos appealed to them in terms of the responsible vending and use of personally tested high quality products. The professional approach to running their Silk Road businesses and dedication to providing a quality service was characterised by professional advertising of quality products, professional communication and visibility on forum pages, speedy dispatch of slightly overweight products, competitive pricing, good stealth techniques and efforts to avoid customer disputes. Vendors appeared content with a fairly constant buyer demand and described a relatively competitive market between small and big time market players. Concerns were evident with regard to Bitcoin instability. The greatest threat to Silk Road and other sites operating on the Deep Web is not law enforcement or market dynamics, it is technology itself. Copyright © 2013 Elsevier

  6. Deep learning for visual understanding

    NARCIS (Netherlands)

    Guo, Y.

    2017-01-01

    With the dramatic growth of the image data on the web, there is an increasing demand of the algorithms capable of understanding the visual information automatically. Deep learning, served as one of the most significant breakthroughs, has brought revolutionary success in diverse visual applications,

  7. Quantum neuromorphic hardware for quantum artificial intelligence

    Science.gov (United States)

    Prati, Enrico

    2017-08-01

    The development of machine learning methods based on deep learning boosted the field of artificial intelligence towards unprecedented achievements and applications in several fields. Such prominent results were made in parallel with the first successful demonstrations of fault tolerant hardware for quantum information processing. To what extent deep learning can take advantage of the existence of a hardware based on qubits behaving as a universal quantum computer is an open question under investigation. Here I review the convergence between the two fields towards implementation of advanced quantum algorithms, including quantum deep learning.

  8. Web 25

    DEFF Research Database (Denmark)

    the reader on an exciting time travel journey to learn more about the prehistory of the hyperlink, the birth of the Web, the spread of the early Web, and the Web’s introduction to the general public in mainstream media. Furthermore, case studies of blogs, literature, and traditional media going online...

  9. Intelligent medical information filtering.

    Science.gov (United States)

    Quintana, Y

    1998-01-01

    This paper describes an intelligent information filtering system to assist users to be notified of updates to new and relevant medical information. Among the major problems users face is the large volume of medical information that is generated each day, and the need to filter and retrieve relevant information. The Internet has dramatically increased the amount of electronically accessible medical information and reduced the cost and time needed to publish. The opportunity of the Internet for the medical profession and consumers is to have more information to make decisions and this could potentially lead to better medical decisions and outcomes. However, without the assistance from professional medical librarians, retrieving new and relevant information from databases and the Internet remains a challenge. Many physicians do not have access to the services of a medical librarian. Most physicians indicate on surveys that they do not prefer to retrieve the literature themselves, or visit libraries because of the lack of recent materials, poor organisation and indexing of materials, lack of appropriate and available material, and lack of time. The information filtering system described in this paper records the online web browsing behaviour of each user and creates a user profile of the index terms found on the web pages visited by the user. A relevance-ranking algorithm then matches the user profiles to the index terms of new health care web pages that are added each day. The system creates customised summaries of new information for each user. A user can then connect to the web site to read the new information. Relevance feedback buttons on each page ask the user to rate the usefulness of the page to their immediate information needs. Errors in relevance ranking are reduced in this system by having both the user profile and medical information represented in the same representation language using a controlled vocabulary. This system also updates the user profiles
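
    The matching step described above (a user profile of index terms scored against the index terms of newly published pages) can be sketched roughly as follows; plain word counts and cosine similarity stand in for the controlled vocabulary used by the actual system, and all page contents are invented.

        import math
        from collections import Counter

        def terms(text):
            """Very crude index-term extraction: lowercase word tokens."""
            return Counter(w for w in text.lower().split() if w.isalpha())

        def cosine(a, b):
            dot = sum(a[t] * b[t] for t in a.keys() & b.keys())
            norm = (math.sqrt(sum(v * v for v in a.values()))
                    * math.sqrt(sum(v * v for v in b.values())))
            return dot / norm if norm else 0.0

        # Profile accumulated from pages the user has browsed (hypothetical content).
        profile = Counter()
        for visited in ["diabetes insulin dosage guidelines", "insulin pump therapy outcomes"]:
            profile.update(terms(visited))

        # Relevance-rank newly published pages against the profile.
        new_pages = {
            "new-insulin-study": "randomized trial of insulin dosing in type two diabetes",
            "hospital-parking":  "updated parking arrangements at the hospital campus",
        }
        ranked = sorted(new_pages.items(),
                        key=lambda kv: cosine(profile, terms(kv[1])), reverse=True)
        for page, text in ranked:
            print(page, round(cosine(profile, terms(text)), 3))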

  10. Web Page Recommendation Using Web Mining

    OpenAIRE

    Modraj Bhavsar; Mrs. P. M. Chavan

    2014-01-01

    On the World Wide Web, various kinds of content are generated in huge amounts, so web recommendation has become an important part of web applications for giving relevant results to users. On the web, different kinds of recommendations are made available to users every day, including images, video, audio, query suggestions and web pages. In this paper we aim at providing a framework for web page recommendation. 1) First we describe the basics of web mining and the types of web mining. 2) Details of each...

  11. Intelligent automotive battery systems

    Science.gov (United States)

    Witehira, P.

    A single power-supply battery is incompatible with modern vehicles. A one-combination 12 cell/12 V battery, developed by Power Beat International Limited (PBIL), is described. The battery is designed to be a 'drop in' replacement for existing batteries. The cell structures, however, are designed according to load function, i.e., high-current shallow-discharge cycles and low-current deep-discharge cycles. The preferred energy discharge management logic and integration into the power distribution network of the vehicle to provide safe user-friendly usage is described. The system is designed to operate transparently to the vehicle user. The integrity of the volatile high-current cells is maintained by temperature-sensitive voltage control and discharge management. The deep-cycle cells can be fully utilized without affecting startability under extreme conditions. Electric energy management synchronization with engine starting will provide at least 6% overall reduction in hydrocarbon emissions using an intelligent on-board power-supply technology developed by PBIL.
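
    The division of labour described above (high-current shallow-discharge cells versus low-current deep-cycle cells, protected by temperature-sensitive voltage thresholds) might look roughly like the hypothetical decision logic below; the thresholds and function name are invented and do not reflect PBIL's actual control scheme.

        def select_bank(load_amps, volatile_volts, temperature_c):
            """Choose which cell group should serve a load (illustrative only)."""
            # Cold weather raises the voltage floor reserved for engine starting.
            start_reserve_volts = 12.4 if temperature_c < 0 else 12.1
            if load_amps > 30:
                # High-current demand (e.g. cranking) goes to the shallow-discharge cells,
                # provided they still hold enough charge to guarantee startability.
                if volatile_volts > start_reserve_volts:
                    return "high-current cells"
                return "deep-cycle cells"   # protect startability, fall back
            # Low-current accessory loads are served by the deep-cycle cells.
            return "deep-cycle cells"

        print(select_bank(load_amps=150, volatile_volts=12.6, temperature_c=-5))
        print(select_bank(load_amps=5,   volatile_volts=12.6, temperature_c=20))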

  12. Business Intelligence

    OpenAIRE

    Petersen, Anders

    2001-01-01

    The aim of this bachelor thesis is to present Business Intelligence and to examine a development trend that is shaping enterprise Business Intelligence solutions: Business Activity Monitoring. The topic was treated by studying specialist literature in both Czech and English. The main contribution of the thesis is a coherent treatment of Business Activity Monitoring written in Czech. The thesis is divided into six main chapters. The first five are devoted to...

  13. Deep Corals, Deep Learning: Moving the Deep Net Towards Real-Time Image Annotation

    OpenAIRE

    Lea-Anne Henry; Sankha S. Mukherjee; Neil M. Roberston; Laurence De Clippele; J. Murray Roberts

    2016-01-01

    The mismatch between human capacity and the acquisition of Big Data such as Earth imagery undermines commitments to Convention on Biological Diversity (CBD) and Aichi targets. Artificial intelligence (AI) solutions to Big Data issues are urgently needed as these could prove to be faster, more accurate, and cheaper. Reducing costs of managing protected areas in remote deep waters and in the High Seas is of great importance, and this is a realm where autonomous technology will be transformative.

  14. Artificial intelligence

    OpenAIRE

    Duda, Antonín

    2009-01-01

    Abstract: The aim of this work is to acquaint the reader with the history of artificial intelligence, especially the branch of chess computing. Main attention is given to progress from the fifties to the present. The work also deals with chess programs playing against each other and against human opponents. The greatest attention is focused on 1997 and the duel between Garry Kasparov and the chess program Deep Blue. The work is divided into chapters in chronological order.

  15. Categorization of web pages - Performance enhancement to search engine

    Digital Repository Service at National Institute of Oceanography (India)

    Lakshminarayana, S.


  16. An Immune Agent for Web-Based AI Course

    Science.gov (United States)

    Gong, Tao; Cai, Zixing

    2006-01-01

    To overcome weakness and faults of a web-based e-learning course such as Artificial Intelligence (AI), an immune agent was proposed, simulating a natural immune mechanism against a virus. The immune agent was built on the multi-dimension education agent model and immune algorithm. The web-based AI course was comprised of many files, such as HTML…

  17. A World Wide Web Region-Based Image Search Engine

    DEFF Research Database (Denmark)

    Kompatsiaris, Ioannis; Triantafyllou, Evangelia; Strintzis, Michael G.

    2001-01-01

    In this paper the development of an intelligent image content-based search engine for the World Wide Web is presented. This system will offer a new form of media representation and access of content available in WWW. Information Web Crawlers continuously traverse the Internet and collect images...

  18. Intelligence and negotiating

    International Nuclear Information System (INIS)

    George, D.G.

    1990-01-01

    This paper discusses the role of US intelligence during arms control negotiations between 1982 and 1987. It also covers: the orchestration of intelligence projects; an evaluation of the performance of intelligence activities; the effect intelligence work had on actual arms negotiations; and suggestions for improvements in the future.

  19. Intelligent products : A survey

    NARCIS (Netherlands)

    Meyer, G.G.; Främling, K.; Holmström, J.

    This paper presents an overview of the field of Intelligent Products. As Intelligent Products have many facets, this paper is mainly focused on the concept behind Intelligent Products, the technical foundations, and the achievable practical goals of Intelligent Products. A novel classification of

  20. Intelligence Issues for Congress

    Science.gov (United States)

    2013-04-23

    open source information—OSINT (newspapers... by user agencies. Section 1052 of the Intelligence Reform Act expressed the sense of Congress that there should be an open source intelligence... center to coordinate the collection, analysis, production, and dissemination of open source intelligence to other intelligence agencies. An Open Source

  1. Deep frying

    NARCIS (Netherlands)

    Koerten, van K.N.

    2016-01-01

    Deep frying is one of the most used methods in the food processing industry. Though practically any food can be fried, French fries are probably the most well-known deep fried products. The popularity of French fries stems from their unique taste and texture, a crispy outside with a mealy soft

  2. Sensor web

    Science.gov (United States)

    Delin, Kevin A. (Inventor); Jackson, Shannon P. (Inventor)

    2011-01-01

    A Sensor Web formed of a number of different sensor pods. Each of the sensor pods includes a clock which is synchronized with a master clock so that all of the sensor pods in the Web have a synchronized clock. The synchronization is carried out by first using a coarse synchronization which takes less power, and subsequently carrying out a fine synchronization to make a fine sync of all the pods on the Web. After the synchronization, the pods ping their neighbors to determine which pods are listening and responding, and then only listen during time slots corresponding to those pods which respond.
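
    The sequence described in the abstract (low-power coarse synchronization, then fine synchronization, then pinging neighbours and listening only in the slots of responders) is illustrated by the toy simulation below; pod behaviour and numbers are invented for illustration.

        import random

        random.seed(1)
        MASTER_TIME = 1000.000  # seconds, the master clock

        class Pod:
            def __init__(self, name, awake=True):
                self.name = name
                self.awake = awake
                self.clock = MASTER_TIME + random.uniform(-5, 5)  # drifted local clock

            def coarse_sync(self):
                # Cheap sync: snap to the nearest whole second of the master clock.
                self.clock = round(MASTER_TIME)

            def fine_sync(self):
                # Expensive sync: remove the residual offset.
                self.clock = MASTER_TIME

            def ping(self):
                # A pod answers a ping only if it is awake/listening.
                return self.awake

        pods = [Pod("A"), Pod("B"), Pod("C", awake=False), Pod("D")]
        for p in pods:
            p.coarse_sync()
            p.fine_sync()

        # Each pod only schedules listening slots for neighbours that answered.
        responders = [p.name for p in pods if p.ping()]
        slots = {name: i for i, name in enumerate(responders)}  # slot index per responder
        print("listen slots:", slots)   # e.g. {'A': 0, 'B': 1, 'D': 2}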

  3. Intelligent Governmentality

    Directory of Open Access Journals (Sweden)

    Willem de Lint

    2008-10-01

    Full Text Available Recently, within liberal democracies, the post-Westphalian consolidation of security and intelligence has ushered in the normalization not only of security in ‘securitization’ but also of intelligence in what is proposed here as ‘intelligencification.’ In outlining the features of intelligencified governance, my aim is to interrogate the view that effects or traces, and productivity rather than negation, is as persuasive as commonly thought by the constructivists. After all, counter-intelligence is both about purging and reconstructing the archive for undisclosed values. In practice, what is being normalized is the authorized and legalized use of release and retention protocols of politically actionable information. The intelligencification of governmentality affords a sovereignty shell-game or the instrumentalization of sovereign power by interests that are dependent on, yet often inimical to, the power of state, national, and popular sovereignty. The political and the social are seen as dependent on exclusive contingencies. Recently, within liberal democracies, the post-Westphalian consolidation of security and security intelligence services has given rise to the normalization not only of security into ‘securitization’ but also of intelligence services into what is proposed here as ‘intelligencification’ [a term coined by the author, derived from ‘intelligence’ in the sense of security intelligence]. In particular, what is being normalized, with the aim of circumventing exclusive contingencies, is the authorized and legalized use of protocols for the release and retention of information which, politically, could lead to prosecution. In outlining the features of ‘intelligencified’ governance, my aim is to interrogate the view that effects or traces, and productivity rather than the

  4. Pathogen intelligence

    Directory of Open Access Journals (Sweden)

    Michael eSteinert

    2014-01-01

    Full Text Available Different species inhabit different sensory worlds and thus have evolved diverse means of processing information, learning and memory. In the escalated arms race with host defense, each pathogenic bacterium not only has evolved its individual cellular sensing and behaviour, but also collective sensing, interbacterial communication, distributed information processing, joint decision making, dissociative behaviour, and the phenotypic and genotypic heterogeneity necessary for epidemiologic success. Moreover, pathogenic populations take advantage of dormancy strategies and rapid evolutionary speed, which allow them to save co-generated intelligent traits in a collective genomic memory. This review discusses how these mechanisms add further levels of complexity to bacterial pathogenicity and transmission, and how mining for these mechanisms could help to develop new anti-infective strategies.

  5. Intelligent Routines

    CERN Document Server

    Anastassiou, George A

    “Intelligent Routines II: Solving Linear Algebra and Differential Geometry with Sage” contains numerous examples and problems as well as many unsolved problems. This book extensively applies the successful software Sage, which can be found free online at http://www.sagemath.org/. Sage is a recent and popular software package for mathematical computation, available freely and simple to use. This book is useful to all applied scientists in mathematics, statistics and engineering, as well as to late undergraduate and graduate students of the above subjects. It is the first such book on solving problems in Linear Algebra and Differential Geometry symbolically with Sage. Plenty of Sage applications are given at each step of the exposition.

  6. Deep learning

    CERN Document Server

    Goodfellow, Ian; Courville, Aaron

    2016-01-01

    Deep learning is a form of machine learning that enables computers to learn from experience and understand the world in terms of a hierarchy of concepts. Because the computer gathers knowledge from experience, there is no need for a human computer operator to formally specify all the knowledge that the computer needs. The hierarchy of concepts allows the computer to learn complicated concepts by building them out of simpler ones; a graph of these hierarchies would be many layers deep. This book introduces a broad range of topics in deep learning. The text offers mathematical and conceptual background, covering relevant concepts in linear algebra, probability theory and information theory, numerical computation, and machine learning. It describes deep learning techniques used by practitioners in industry, including deep feedforward networks, regularization, optimization algorithms, convolutional networks, sequence modeling, and practical methodology; and it surveys such applications as natural language proces...

  7. A Semantically Automated Protocol Adapter for Mapping SOAP Web Services to RESTful HTTP Format to Enable the Web Infrastructure, Enhance Web Service Interoperability and Ease Web Service Migration

    Directory of Open Access Journals (Sweden)

    Frank Doheny

    2012-04-01

    Full Text Available Semantic Web Services (SWS are Web Service (WS descriptions augmented with semantic information. SWS enable intelligent reasoning and automation in areas such as service discovery, composition, mediation, ranking and invocation. This paper applies SWS to a previous protocol adapter which, operating within clearly defined constraints, maps SOAP Web Services to RESTful HTTP format. However, in the previous adapter, the configuration element is manual and the latency implications are locally based. This paper applies SWS technologies to automate the configuration element and the latency tests are conducted in a more realistic Internet based setting.

  8. Intelligence: Real or artificial?

    OpenAIRE

    Schlinger, Henry D.

    1992-01-01

    Throughout the history of the artificial intelligence movement, researchers have strived to create computers that could simulate general human intelligence. This paper argues that workers in artificial intelligence have failed to achieve this goal because they adopted the wrong model of human behavior and intelligence, namely a cognitive essentialist model with origins in the traditional philosophies of natural intelligence. An analysis of the word “intelligence” suggests that it originally r...

  9. Web Analytics

    Science.gov (United States)

    EPA’s Web Analytics Program collects, analyzes, and provides reports on traffic, quality assurance, and customer satisfaction metrics for EPA’s website. The program uses a variety of analytics tools, including Google Analytics and CrazyEgg.

  10. Web Service

    Science.gov (United States)

    ... topic data in XML format. Using the Web service, software developers can build applications that utilize MedlinePlus health topic information. The service accepts keyword searches as requests and returns relevant ...

  11. Artificial intelligence for Mariáš

    OpenAIRE

    Kaštánková, Petra

    2016-01-01

    This thesis focuses on the implementation of a card game, Mariáš, and an artificial intelligence for this game. The game is designed for three players and it can be played either with other human players or with a computer adversary. The game is designed as a client-server application, whereby the player connects to the game using a web page. The basis of the artificial intelligence is the Minimax algorithm. To speed it up we use Alpha-Beta pruning, hash tables for storing equivalent sta...

  12. How much data resides in a web collection: how to estimate size of a web collection

    NARCIS (Netherlands)

    Khelghati, Mohammadreza; Hiemstra, Djoerd; van Keulen, Maurice

    2013-01-01

    With the increasing amount of data in deep web sources (hidden from general search engines behind web forms), accessing this data has gained more attention. In the algorithms applied for this purpose, it is knowledge of a data source's size that enables the algorithms to make accurate decisions in
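
    The abstract does not spell out a particular estimator, but one classical way to estimate the size of a collection that can only be sampled through queries is capture-recapture: draw two independent random samples and infer the total from their overlap. A sketch under that assumption (the sampling function is only a stand-in for query-based sampling):

        import random

        random.seed(7)
        hidden_collection = range(50_000)   # the deep-web source; its size is "unknown" to us

        def random_sample(k):
            """Stand-in for sampling documents via random queries."""
            return set(random.sample(hidden_collection, k))

        s1, s2 = random_sample(2_000), random_sample(2_000)
        overlap = len(s1 & s2)

        # Lincoln-Petersen estimator: N ~ |s1| * |s2| / |s1 & s2|
        estimate = len(s1) * len(s2) / max(overlap, 1)
        print(f"overlap={overlap}, estimated size={estimate:.0f}")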

  13. Deep learning application: rubbish classification with aid of an android device

    Science.gov (United States)

    Liu, Sijiang; Jiang, Bo; Zhan, Jie

    2017-06-01

    Deep learning is currently a very hot topic in pattern recognition and artificial intelligence research. Aiming at the practical problem that people often do not know which category a given piece of rubbish belongs to, and based on the powerful image classification ability of deep learning methods, we have designed a prototype system to help users classify kinds of rubbish. Firstly, the CaffeNet model was adopted for training our classification network on the ImageNet dataset, and the trained network was deployed on a web server. Secondly, an Android app was developed for users to capture images of unclassified rubbish, upload the images to the web server for analysis in the background and retrieve the feedback, so that users can conveniently obtain classification guidance on an Android device. Tests on our prototype rubbish classification system show that an image of a single type of rubbish in its original shape can be used to judge its classification well, while an image containing several kinds of rubbish, or rubbish with a changed shape, may fail to help users decide its classification. However, the system still shows promise as an auxiliary tool for rubbish classification if the network training strategy can be optimized further.
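
    A minimal sketch of the server side of such a pipeline is shown below: an HTTP endpoint that accepts an image uploaded by the app and returns a label. The real prototype served a trained CaffeNet model; here classify_image is only a placeholder stub and the /classify route is invented.

        from http.server import BaseHTTPRequestHandler, HTTPServer

        def classify_image(image_bytes):
            # Placeholder: the real system would run the uploaded image through
            # a trained CNN (e.g. a CaffeNet-style model) and return the top label.
            return "recyclable" if len(image_bytes) % 2 == 0 else "residual waste"

        class RubbishHandler(BaseHTTPRequestHandler):
            def do_POST(self):
                if self.path != "/classify":           # hypothetical route
                    self.send_error(404)
                    return
                length = int(self.headers.get("Content-Length", 0))
                image_bytes = self.rfile.read(length)  # raw image uploaded by the app
                body = classify_image(image_bytes).encode("utf-8")
                self.send_response(200)
                self.send_header("Content-Type", "text/plain; charset=utf-8")
                self.send_header("Content-Length", str(len(body)))
                self.end_headers()
                self.wfile.write(body)

        if __name__ == "__main__":
            HTTPServer(("0.0.0.0", 8000), RubbishHandler).serve_forever()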

  14. 3rd Euro-China Conference on Intelligent Data Analysis and Applications

    CERN Document Server

    Snášel, Václav; Sung, Tien-Wen; Wang, Xiao

    2017-01-01

    This book gathers papers presented at the ECC 2016, the Third Euro-China Conference on Intelligent Data Analysis and Applications, which was held in Fuzhou City, China from November 7 to 9, 2016. The aim of the ECC is to provide an internationally respected forum for scientific research in the broad areas of intelligent data analysis, computational intelligence, signal processing, and all associated applications of artificial intelligence (AI). The third installment of the ECC was jointly organized by Fujian University of Technology, China, and VSB-Technical University of Ostrava, Czech Republic. The conference was co-sponsored by Taiwan Association for Web Intelligence Consortium, and Immersion Co., Ltd.

  15. Educational Programs for Intelligence Professionals.

    Science.gov (United States)

    Miller, Jerry P.

    1994-01-01

    Discusses the need for education programs for competitive intelligence professionals. Highlights include definitions of intelligence functions, focusing on business intelligence; information utilization by decision makers; information sources; competencies for intelligence professionals; and the development of formal education programs. (38…

  16. A New Dimension of Business Intelligence: Location-based Intelligence

    OpenAIRE

    Zeljko Panian

    2012-01-01

    Through the course of this paper we define Location-based Intelligence (LBI), which is growing out of the process of amalgamating geolocation and Business Intelligence. Amalgamating geolocation with traditional Business Intelligence (BI) results in a new dimension of BI named Location-based Intelligence. LBI is defined as leveraging unified location information for business intelligence. Collectively, enterprises can transform location data into business intelligence applic...

  17. Inteligência Organizacional e Competitiva e a Web 2.0

    Directory of Open Access Journals (Sweden)

    Kira Tarapanoff

    2013-11-01

    Full Text Available New possibilities are analyzed for Competitive and Organizational Intelligence under Web 2.0. The underlying theoretical approach is based on Information Science, Information and Knowledge Management, Competitive Intelligence and related disciplines, in an integrated manner. The thesis that is defended in this essay considers that the basic elements that constitute what is understood by competitive and organizational intelligence 2.0 are the adequate use of collective intelligence accessible at the Web 2.0 in order to create and share knowledge; associated with the emerging concepts of the corporate world such as sustainability.

  18. Virtual Sensor Web Architecture

    Science.gov (United States)

    Bose, P.; Zimdars, A.; Hurlburt, N.; Doug, S.

    2006-12-01

    NASA envisions the development of smart sensor webs, intelligent and integrated observation network that harness distributed sensing assets, their associated continuous and complex data sets, and predictive observation processing mechanisms for timely, collaborative hazard mitigation and enhanced science productivity and reliability. This paper presents Virtual Sensor Web Infrastructure for Collaborative Science (VSICS) Architecture for sustained coordination of (numerical and distributed) model-based processing, closed-loop resource allocation, and observation planning. VSICS's key ideas include i) rich descriptions of sensors as services based on semantic markup languages like OWL and SensorML; ii) service-oriented workflow composition and repair for simple and ensemble models; event-driven workflow execution based on event-based and distributed workflow management mechanisms; and iii) development of autonomous model interaction management capabilities providing closed-loop control of collection resources driven by competing targeted observation needs. We present results from initial work on collaborative science processing involving distributed services (COSEC framework) that is being extended to create VSICS.

  19. Intelligent Extruder

    Energy Technology Data Exchange (ETDEWEB)

    AlperEker; Mark Giammattia; Paul Houpt; Aditya Kumar; Oscar Montero; Minesh Shah; Norberto Silvi; Timothy Cribbs

    2003-04-24

    ''Intelligent Extruder'' described in this report is a software system and associated support services for monitoring and control of compounding extruders to improve material quality, reduce waste and energy use, with minimal addition of new sensors or changes to the factory floor system components. Emphasis is on process improvements to the mixing, melting and de-volatilization of base resins, fillers, pigments, fire retardants and other additives in the ''finishing'' stage of high value added engineering polymer materials. While GE Plastics materials were used for experimental studies throughout the program, the concepts and principles are broadly applicable to other manufacturers' materials. The project involved a joint collaboration among GE Global Research, GE Industrial Systems and Coperion Werner & Pfleiderer, USA, a major manufacturer of compounding equipment. Scope of the program included development of algorithms for monitoring process material viscosity without rheological sensors or generating waste streams, a novel detection scheme for rapid detection of process upsets and an adaptive feedback control system to compensate for process upsets where at-line adjustments are feasible. Software algorithms were implemented and tested on a laboratory scale extruder (50 lb/hr) at GE Global Research and data from a production scale system (2000 lb/hr) at GE Plastics was used to validate the monitoring and detection software. Although not evaluated experimentally, a new concept for extruder process monitoring through estimation of high frequency drive torque without strain gauges is developed and demonstrated in simulation. A plan to commercialize the software system is outlined, but commercialization has not been completed.

  20. Rancang Bangun Sistem Business Intelligence Universitas Sebagai Pendukung Pengambilan Keputusan Akademik

    Directory of Open Access Journals (Sweden)

    Zainal Arifin

    2016-01-01

    Full Text Available A university business intelligence system starts with the stages of data integration, data analysis, report creation and web portal creation, and then integrates those reports with the web portal. Data analysis is carried out with OLAP, KPIs and data mining to extract information from the data stored in the data warehouse. The results of the data analysis process are represented in the form of statistical reports and dashboards, which are then used to support academic decision making. This research aims to design and build a web-based university business intelligence system with OLAP to support academic decision making at Mulawarman University. The research produces a system framework and a web portal for the university business intelligence system, accessed online through a browser. Business intelligence can be used as a solution for supporting decision-making processes in university management and for improving the performance of academic management in order to achieve academic excellence. Keywords: Business Intelligence; Data warehouse; OLAP; KPI; Data mining

  1. Autonomous Mission Operations for Sensor Webs

    Science.gov (United States)

    Underbrink, A.; Witt, K.; Stanley, J.; Mandl, D.

    2008-12-01

    We present interim results of a 2005 ROSES AIST project entitled, "Using Intelligent Agents to Form a Sensor Web for Autonomous Mission Operations", or SWAMO. The goal of the SWAMO project is to shift the control of spacecraft missions from a ground-based, centrally controlled architecture to a collaborative, distributed set of intelligent agents. The network of intelligent agents intends to reduce management requirements by utilizing model-based system prediction and autonomic model/agent collaboration. SWAMO agents are distributed throughout the Sensor Web environment, which may include multiple spacecraft, aircraft, ground systems, and ocean systems, as well as manned operations centers. The agents monitor and manage sensor platforms, Earth sensing systems, and Earth sensing models and processes. The SWAMO agents form a Sensor Web of agents via peer-to-peer coordination. Some of the intelligent agents are mobile and able to traverse between on-orbit and ground-based systems. Other agents in the network are responsible for encapsulating system models to perform prediction of future behavior of the modeled subsystems and components to which they are assigned. The software agents use semantic web technologies to enable improved information sharing among the operational entities of the Sensor Web. The semantics include ontological conceptualizations of the Sensor Web environment, plus conceptualizations of the SWAMO agents themselves. By conceptualizations of the agents, we mean knowledge of their state, operational capabilities, current operational capacities, Web Service search and discovery results, agent collaboration rules, etc. The need for ontological conceptualizations over the agents is to enable autonomous and autonomic operations of the Sensor Web. The SWAMO ontology enables automated decision making and responses to the dynamic Sensor Web environment and to end user science requests. The current ontology is compatible with Open Geospatial Consortium (OGC

  2. An intelligent sales assistant for configurable products

    OpenAIRE

    Molina, Martin

    2001-01-01

    Some of the recent proposals of web-based applications are oriented to provide advanced search services through virtual shops. Within this context, this paper proposes an advanced type of software application that simulates how a sales assistant dialogues with a consumer to dynamically configure a product according to particular needs. The paper presents the general knowledge model that uses artificial intelligence and knowledge-based techniques to simulate the configuration process. Finall...

  3. A Web Observatory for the Machine Processability of Structured Data on the Web

    NARCIS (Netherlands)

    Beek, W.; Groth, P.; Schlobach, S.; Hoekstra, R.

    2014-01-01

    General human intelligence is needed in order to process Linked Open Data (LOD). On the Semantic Web (SW), content is intended to be machine-processable as well. But the extent to which a machine is able to navigate, access, and process the SW has not been extensively researched. We present LOD

  4. Intelligent Mission Controller Node

    National Research Council Canada - National Science Library

    Perme, David

    2002-01-01

    The goal of the Intelligent Mission Controller Node (IMCN) project was to improve the process of translating mission taskings between real-world Command, Control, Communications, Computers, and Intelligence (C4I...

  5. Algorithms in ambient intelligence

    NARCIS (Netherlands)

    Aarts, E.H.L.; Korst, J.H.M.; Verhaegh, W.F.J.; Verhaegh, W.F.J.; Aarts, E.H.L.; Korst, J.H.M.

    2004-01-01

    In this chapter, we discuss the new paradigm for user-centered computing known as ambient intelligence and its relation with methods and techniques from the field of computational intelligence, including problem solving, machine learning, and expert systems.

  6. Advanced intelligent systems

    CERN Document Server

    Ryoo, Young; Jang, Moon-soo; Bae, Young-Chul

    2014-01-01

    Intelligent systems have been initiated with the attempt to imitate the human brain. People wish to let machines perform intelligent works. Many techniques of intelligent systems are based on artificial intelligence. According to changing and novel requirements, the advanced intelligent systems cover a wide spectrum: big data processing, intelligent control, advanced robotics, artificial intelligence and machine learning. This book focuses on coordinating intelligent systems with highly integrated and foundationally functional components. The book consists of 19 contributions that features social network-based recommender systems, application of fuzzy enforcement, energy visualization, ultrasonic muscular thickness measurement, regional analysis and predictive modeling, analysis of 3D polygon data, blood pressure estimation system, fuzzy human model, fuzzy ultrasonic imaging method, ultrasonic mobile smart technology, pseudo-normal image synthesis, subspace classifier, mobile object tracking, standing-up moti...

  7. SchNet - A deep learning architecture for molecules and materials

    Science.gov (United States)

    Schütt, K. T.; Sauceda, H. E.; Kindermans, P.-J.; Tkatchenko, A.; Müller, K.-R.

    2018-06-01

    Deep learning has led to a paradigm shift in artificial intelligence, including web, text, and image search, speech recognition, as well as bioinformatics, with growing impact in chemical physics. Machine learning, in general, and deep learning, in particular, are ideally suitable for representing quantum-mechanical interactions, enabling us to model nonlinear potential-energy surfaces or enhancing the exploration of chemical compound space. Here we present the deep learning architecture SchNet that is specifically designed to model atomistic systems by making use of continuous-filter convolutional layers. We demonstrate the capabilities of SchNet by accurately predicting a range of properties across chemical space for molecules and materials, where our model learns chemically plausible embeddings of atom types across the periodic table. Finally, we employ SchNet to predict potential-energy surfaces and energy-conserving force fields for molecular dynamics simulations of small molecules and perform an exemplary study on the quantum-mechanical properties of C20-fullerene that would have been infeasible with regular ab initio molecular dynamics.
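
    The core operation, a continuous-filter convolution, generates a filter from each interatomic distance (expanded in radial basis functions) and applies it element-wise to the neighbouring atom's features. The NumPy sketch below illustrates that single step with random weights; it is a simplified illustration, not the published SchNet implementation.

        import numpy as np

        rng = np.random.default_rng(0)

        n_atoms, n_feat, n_rbf = 5, 16, 20
        positions = rng.uniform(0.0, 4.0, size=(n_atoms, 3))   # toy atomic coordinates
        features = rng.normal(size=(n_atoms, n_feat))          # per-atom feature vectors

        def rbf_expand(d, cutoff=5.0, gamma=10.0):
            """Expand a distance in Gaussian radial basis functions."""
            centers = np.linspace(0.0, cutoff, n_rbf)
            return np.exp(-gamma * (d - centers) ** 2)

        # Filter-generating network: maps the RBF-expanded distance to a filter
        # of the same width as the atom features (single dense layer for brevity).
        W_filter = rng.normal(scale=0.1, size=(n_rbf, n_feat))

        def cfconv(features, positions):
            """Continuous-filter convolution: x_i' = sum_j x_j * W(d_ij)."""
            out = np.zeros_like(features)
            for i in range(n_atoms):
                for j in range(n_atoms):
                    if i == j:
                        continue
                    d_ij = np.linalg.norm(positions[i] - positions[j])
                    filt = rbf_expand(d_ij) @ W_filter   # distance-dependent filter
                    out[i] += features[j] * filt         # element-wise modulation
            return out

        print(cfconv(features, positions).shape)   # (5, 16)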

  8. Load Forecasting with Artificial Intelligence on Big Data

    OpenAIRE

    Glauner, Patrick; State, Radu

    2016-01-01

    In the domain of electrical power grids, there is a particular interest in time series analysis using artificial intelligence. Machine learning is the branch of artificial intelligence giving computers the ability to learn patterns from data without being explicitly programmed. Deep Learning is a set of cutting-edge machine learning algorithms that are inspired by how the human brain works. It allows feature hierarchies to be learned from the data rather than modeling hand-crafted features. I...
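
    Whatever deep model is used, load forecasting is usually set up by turning the load time series into supervised (history window, next value) pairs. The sketch below shows that windowing step on synthetic data and fits a trivial linear baseline in place of a deep network; all numbers are illustrative.

        import numpy as np

        rng = np.random.default_rng(0)

        # Synthetic hourly load: daily cycle plus noise (stand-in for real grid data).
        hours = np.arange(24 * 30)
        load = 100 + 20 * np.sin(2 * np.pi * hours / 24) + rng.normal(scale=2.0, size=hours.size)

        def make_windows(series, window=24):
            """Pair each 24-hour history with the load one hour ahead."""
            X = np.stack([series[i:i + window] for i in range(series.size - window)])
            y = series[window:]
            return X, y

        X, y = make_windows(load)
        print(X.shape, y.shape)   # (696, 24) (696,)

        # A trivial linear baseline fitted by least squares, in place of a deep model.
        A = np.c_[X, np.ones(len(X))]
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)
        pred = A @ coef
        print("baseline MAE:", float(np.abs(pred - y).mean()))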

  9. Fiber webs

    Science.gov (United States)

    Roger M. Rowell; James S. Han; Von L. Byrd

    2005-01-01

    Wood fibers can be used to produce a wide variety of low-density three-dimensional webs, mats, and fiber-molded products. Short wood fibers blended with long fibers can be formed into flexible fiber mats, which can be made by physical entanglement, nonwoven needling, or thermoplastic fiber melt matrix technologies. The most common types of flexible mats are carded, air...

  10. Web Sitings.

    Science.gov (United States)

    Lo, Erika

    2001-01-01

    Presents seven mathematics games, located on the World Wide Web, for elementary students, including: Absurd Math: Pre-Algebra from Another Dimension; The Little Animals Activity Centre; MathDork Game Room (classic video games focusing on algebra); Lemonade Stand (students practice math and business skills); Math Cats (teaches the artistic beauty…

  11. Tracheal web

    International Nuclear Information System (INIS)

    Legasto, A.C.; Haller, J.O.; Giusti, R.J.

    2004-01-01

    Congenital tracheal web is a rare entity often misdiagnosed as refractory asthma. Clinical suspicion based on patient history, examination, and pulmonary function tests should lead to its consideration. Bronchoscopy combined with CT imaging and multiplanar reconstruction is an accepted, highly sensitive means of diagnosis. (orig.)

  12. Artificial Intelligence Project

    Science.gov (United States)

    1990-01-01

    Symposium on Artificial Intelligence and Software Engineering Working Notes, March 1989. Blumenthal, Brad, "An Architecture for Automating... Artificial Intelligence Project, Final Technical Report, ARO Contract: DAAG29-84-K-0060, Artificial Intelligence Laboratory, The University of Texas at Austin, Austin, Texas 78712.

  13. International Conference on Intelligent and Interactive Systems and Applications

    CERN Document Server

    Patnaik, Srikanta; Yu, Zhengtao

    2017-01-01

    This book provides the latest research findings and developments in the field of interactive intelligent systems, addressing diverse areas such as autonomous systems, Internet and cloud computing, pattern recognition and vision systems, mobile computing and intelligent networking, and e-enabled systems. It gathers selected papers from the International Conference on Intelligent and Interactive Systems and Applications (IISA2016) held on June 25–26, 2016 in Shanghai, China. Interactive intelligent systems are among the most important multi-disciplinary research and development domains of artificial intelligence, human–computer interaction, machine learning and new Internet-based technologies. Accordingly, these systems embrace a considerable number of application areas such as autonomous systems, expert systems, mobile systems, recommender systems, knowledge-based and semantic web-based systems, virtual communication environments, and decision support systems, to name a few. To date, research on interactiv...

  14. 8th Asian Conference on Intelligent Information and Database Systems

    CERN Document Server

    Madeyski, Lech; Nguyen, Ngoc

    2016-01-01

    The objective of this book is to contribute to the development of the intelligent information and database systems with the essentials of current knowledge, experience and know-how. The book contains a selection of 40 chapters based on original research presented as posters during the 8th Asian Conference on Intelligent Information and Database Systems (ACIIDS 2016) held on 14–16 March 2016 in Da Nang, Vietnam. The papers to some extent reflect the achievements of scientific teams from 17 countries in five continents. The volume is divided into six parts: (a) Computational Intelligence in Data Mining and Machine Learning, (b) Ontologies, Social Networks and Recommendation Systems, (c) Web Services, Cloud Computing, Security and Intelligent Internet Systems, (d) Knowledge Management and Language Processing, (e) Image, Video, Motion Analysis and Recognition, and (f) Advanced Computing Applications and Technologies. The book is an excellent resource for researchers, those working in artificial intelligence, mu...

  15. Towards deep learning with segregated dendrites.

    Science.gov (United States)

    Guerguiev, Jordan; Lillicrap, Timothy P; Richards, Blake A

    2017-12-05

    Deep learning has led to significant advances in artificial intelligence, in part, by adopting strategies motivated by neurophysiology. However, it is unclear whether deep learning could occur in the real brain. Here, we show that a deep learning algorithm that utilizes multi-compartment neurons might help us to understand how the neocortex optimizes cost functions. Like neocortical pyramidal neurons, neurons in our model receive sensory information and higher-order feedback in electrotonically segregated compartments. Thanks to this segregation, neurons in different layers of the network can coordinate synaptic weight updates. As a result, the network learns to categorize images better than a single layer network. Furthermore, we show that our algorithm takes advantage of multilayer architectures to identify useful higher-order representations-the hallmark of deep learning. This work demonstrates that deep learning can be achieved using segregated dendritic compartments, which may help to explain the morphology of neocortical pyramidal neurons.
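
    As a rough illustration of the idea of segregating feedforward and feedback signals into different compartments, the sketch below implements a toy two-compartment hidden layer in NumPy. All dimensions, learning rates and update rules are invented for illustration; this is not the authors' algorithm.

```python
# Illustrative sketch only: a toy layer whose "basal" compartment carries
# feedforward input and whose "apical" compartment carries top-down feedback
# used for the local weight update. Hypothetical sizes and rules throughout.
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hidden, n_out = 4, 8, 2
W_in = rng.normal(scale=0.5, size=(n_hidden, n_in))    # feedforward weights
W_out = rng.normal(scale=0.5, size=(n_out, n_hidden))  # output weights
B = rng.normal(scale=0.5, size=(n_hidden, n_out))      # fixed feedback weights

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

x = rng.normal(size=n_in)          # sensory input
target = np.array([1.0, 0.0])      # desired output

for step in range(200):
    basal = W_in @ x               # basal compartment: feedforward drive
    h = sigmoid(basal)
    y = sigmoid(W_out @ h)
    apical = B @ (target - y)      # apical compartment: top-down feedback
    # Local updates: the hidden weights follow the apical signal, the output
    # weights follow the output error (a crude stand-in for the paper's rule).
    W_out += 0.5 * np.outer(target - y, h)
    W_in += 0.5 * np.outer(apical * h * (1 - h), x)

print("final output:", sigmoid(W_out @ sigmoid(W_in @ x)))
```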

  16. Web-enabling technologies for the factory floor: a web-enabling strategy for emanufacturing

    Science.gov (United States)

    Velez, Ricardo; Lastra, Jose L. M.; Tuokko, Reijo O.

    2001-10-01

    This paper is intended to address the different technologies available for Web-enabling of the factory floor. It will give an overview of the importance of Web-enabling of the factory floor, in the application of the concepts of flexible and intelligent manufacturing, in conjunction with e-commerce. As a last section, it will try to define a Web-enabling strategy for the application in eManufacturing. This is made under the scope of the electronics manufacturing industry, so every application, technology or related matter is presented under such scope.

  17. Orchestrating Multiple Intelligences

    Science.gov (United States)

    Moran, Seana; Kornhaber, Mindy; Gardner, Howard

    2006-01-01

    Education policymakers often go astray when they attempt to integrate multiple intelligences theory into schools, according to the originator of the theory, Howard Gardner, and his colleagues. The greatest potential of a multiple intelligences approach to education grows from the concept of a profile of intelligences. Each learner's intelligence…

  18. Algorithms in ambient intelligence

    NARCIS (Netherlands)

    Aarts, E.H.L.; Korst, J.H.M.; Verhaegh, W.F.J.; Weber, W.; Rabaey, J.M.; Aarts, E.

    2005-01-01

    We briefly review the concept of ambient intelligence and discuss its relation with the domain of intelligent algorithms. By means of four examples of ambient intelligent systems, we argue that new computing methods and quantification measures are needed to bridge the gap between the class of

  19. Designing with computational intelligence

    CERN Document Server

    Lopes, Heitor; Mourelle, Luiza

    2017-01-01

    This book discusses a number of real-world applications of computational intelligence approaches. Using various examples, it demonstrates that computational intelligence has become a consolidated methodology for automatically creating new competitive solutions to complex real-world problems. It also presents a concise and efficient synthesis of different systems using computationally intelligent techniques.

  20. Reflection on robotic intelligence

    NARCIS (Netherlands)

    Bartneck, C.

    2006-01-01

    This paper reflects on the development of robots, both their physical shape as well as their intelligence. The latter strongly depends on the progress made in the artificial intelligence (AI) community which does not yet provide the models and tools necessary to create intelligent robots. It is time

  1. EIIS: An Educational Information Intelligent Search Engine Supported by Semantic Services

    Science.gov (United States)

    Huang, Chang-Qin; Duan, Ru-Lin; Tang, Yong; Zhu, Zhi-Ting; Yan, Yong-Jian; Guo, Yu-Qing

    2011-01-01

    The semantic web brings a new opportunity for efficient information organization and search. To meet the special requirements of the educational field, this paper proposes an intelligent search engine enabled by educational semantic support service, where three kinds of searches are integrated into Educational Information Intelligent Search (EIIS)…

  2. Web components and the semantic web

    OpenAIRE

    Casey, Maire; Pahl, Claus

    2003-01-01

    Component-based software engineering on the Web differs from traditional component and software engineering. We investigate Web component engineering activities that are crucial for the development, composition, and deployment of components on the Web. The current Web Services and Semantic Web initiatives strongly influence our work. Focussing on Web component composition we develop description and reasoning techniques that support a component developer in the composition activities, focussing...

  3. Computational intelligence for technology enhanced learning

    Energy Technology Data Exchange (ETDEWEB)

    Xhafa, Fatos [Polytechnic Univ. of Catalonia, Barcelona (Spain). Dept. of Languages and Informatics Systems; Caballe, Santi; Daradoumis, Thanasis [Open Univ. of Catalonia, Barcelona (Spain). Dept. of Computer Sciences Multimedia and Telecommunications; Abraham, Ajith [Machine Intelligence Research Labs (MIR Labs), Auburn, WA (United States). Scientific Network for Innovation and Research Excellence; Juan Perez, Angel Alejandro (eds.) [Open Univ. of Catalonia, Barcelona (Spain). Dept. of Information Sciences

    2010-07-01

    E-Learning has become one of the most widespread ways of distance teaching and learning. Technologies such as Web, Grid, and Mobile and Wireless networks are pushing teaching and learning communities to find new and intelligent ways of using these technologies to enhance teaching and learning activities. Indeed, these new technologies can play an important role in increasing the support to teachers and learners, and in shortening the time to learning and teaching; yet, it is necessary to use intelligent techniques to take advantage of these new technologies to achieve the desired support to teachers and learners and enhance learners' performance in distributed learning environments. The chapters of this volume bring advances in using intelligent techniques for technology enhanced learning as well as development of e-Learning applications based on such techniques and supported by technology. Such intelligent techniques include clustering and classification for personalization of learning, intelligent context-aware techniques, adaptive learning, data mining techniques and ontologies in e-Learning systems, among others. Academics, scientists, software developers, teachers and tutors and students interested in e-Learning will find this book useful for their academic, research and practice activity. (orig.)

  4. Social intelligence, human intelligence and niche construction.

    Science.gov (United States)

    Sterelny, Kim

    2007-04-29

    This paper is about the evolution of hominin intelligence. I agree with defenders of the social intelligence hypothesis in thinking that externalist models of hominin intelligence are not plausible: such models cannot explain the unique cognition and cooperation explosion in our lineage, for changes in the external environment (e.g. increasing environmental unpredictability) affect many lineages. Both the social intelligence hypothesis and the social intelligence-ecological complexity hybrid I outline here are niche construction models. Hominin evolution is hominin response to selective environments that earlier hominins have made. In contrast to social intelligence models, I argue that hominins have both created and responded to a unique foraging mode; a mode that is both social in itself and which has further effects on hominin social environments. In contrast to some social intelligence models, on this view, hominin encounters with their ecological environments continue to have profound selective effects. However, though the ecological environment selects, it does not select on its own. Accidents and their consequences, differential success and failure, result from the combination of the ecological environment an agent faces and the social features that enhance some opportunities and suppress others and that exacerbate some dangers and lessen others. Individuals do not face the ecological filters on their environment alone, but with others, and with the technology, information and misinformation that their social world provides.

  5. Philosophical engineering toward a philosophy of the web

    CERN Document Server

    Halpin, Harry

    2013-01-01

    This is the first interdisciplinary exploration of the philosophical foundations of the Web, a new area of inquiry that has important implications across a range of domains. Contains twelve essays that bridge the fields of philosophy, cognitive science, and phenomenology. Tackles questions such as the impact of Google on intelligence and epistemology, the philosophical status of digital objects, ethics on the Web, semantic and ontological changes caused by the Web, and the potential of the Web to serve as a genuine cognitive extension. Brings together insightful new scholarship from well-known an

  6. Quality control of intelligence research

    International Nuclear Information System (INIS)

    Lu Yan; Xin Pingping; Wu Jian

    2014-01-01

    Quality control of intelligence research is the core issue of intelligence management and a key problem in the study of information science. This paper focuses on the performance of intelligence to explain the significance of quality control in intelligence research. Summing up the results of the study on the basis of the analysis, it discusses quality control methods in intelligence research, introduces the experience of foreign intelligence research quality control, and proposes some recommendations to improve quality control in intelligence research. (authors)

  7. Artificial Intelligence in planetary spectroscopy

    Science.gov (United States)

    Waldmann, Ingo

    2017-10-01

    The field of exoplanetary spectroscopy is as fast moving as it is new. Analysing currently available observations of exoplanetary atmospheres often invokes large and correlated parameter spaces that can be difficult to map or constrain. This is true both for the data analysis of observations and for the theoretical modelling of their atmospheres. Issues of low signal-to-noise data and large, non-linear parameter spaces are nothing new and commonly found in many fields of engineering and the physical sciences. Recent years have seen vast improvements in statistical data analysis and machine learning that have revolutionised fields as diverse as telecommunication, pattern recognition, medical physics and cosmology. In many aspects, data mining and non-linearity challenges encountered in other data-intensive fields are directly transferable to the field of extrasolar planets. In this conference, I will discuss how deep neural networks can be designed to facilitate solving said issues both in exoplanet atmospheres as well as for atmospheres in our own solar system. I will present a deep belief network, RobERt (Robotic Exoplanet Recognition), able to learn to recognise exoplanetary spectra and provide artificial intelligence to state-of-the-art atmospheric retrieval algorithms. Furthermore, I will present a new deep convolutional network that is able to map planetary surface compositions using hyper-spectral imaging and demonstrate its uses on Cassini-VIMS data of Saturn.
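
    As a flavour of the kind of network described above, the following is a minimal 1-D convolutional classifier for spectra written with PyTorch. The number of spectral bins, the class count and the synthetic training data are placeholders; it is not RobERt or the Cassini-VIMS mapping network.

```python
# Illustrative sketch only: a tiny 1-D convolutional classifier for spectra.
import torch
import torch.nn as nn

n_bins, n_classes = 128, 5          # spectral bins and target classes (assumed)

model = nn.Sequential(
    nn.Conv1d(1, 16, kernel_size=7, padding=3),
    nn.ReLU(),
    nn.MaxPool1d(2),
    nn.Conv1d(16, 32, kernel_size=5, padding=2),
    nn.ReLU(),
    nn.AdaptiveAvgPool1d(1),
    nn.Flatten(),
    nn.Linear(32, n_classes),
)

# Synthetic stand-in data: 64 spectra with random labels.
spectra = torch.randn(64, 1, n_bins)
labels = torch.randint(0, n_classes, (64,))

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()
for epoch in range(5):
    optimizer.zero_grad()
    loss = loss_fn(model(spectra), labels)
    loss.backward()
    optimizer.step()
print("training loss:", float(loss))
```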

  8. Representing System Behaviors and Expert Behaviors for Intelligent Tutoring. Technical Report No. 108.

    Science.gov (United States)

    Towne, Douglas M.; And Others

    Simulation-based software tools that can infer system behaviors from a deep model of the system have the potential for automatically building the semantic representations required to support intelligent tutoring in fault diagnosis. The Intelligent Maintenance Training System (IMTS) is such a resource, designed for use in training troubleshooting…

  9. Deep Learning

    DEFF Research Database (Denmark)

    Jensen, Morten Bornø; Bahnsen, Chris Holmberg; Nasrollahi, Kamal

    2018-01-01

    Over the past 10 years, artificial neural networks have gone from being a dusty, cast-off technology to playing a leading role in the development of artificial intelligence. This phenomenon is called deep learning and is inspired by the structure of the brain.

  10. BUILDING A WEB APPLICATION WITH LARAVEL 5

    OpenAIRE

    Nguyen, Quang

    2015-01-01

    In the modern IT industry, it is essential for web developers to know at least one battle-proven framework. Laravel is one of the most successful PHP frameworks in 2015, based on the annual framework popularity survey conducted by SitePoint (SitePoint, The Best PHP Framework for 2015: SitePoint Survey Results, cited 25.10.2015). There are several advantages and benefits of using a web framework in general and Laravel in particular. A framework is a product of collective intelligence, comprising many ...

  11. Deep geothermics

    International Nuclear Information System (INIS)

    Anon.

    1995-01-01

    The hot dry rocks located at 3-4 km of depth correspond to low-permeability rocks carrying a large amount of heat. The extraction of this heat usually requires artificial hydraulic fracturing of the rock to increase its permeability before water injection. Hot-dry-rock geothermics, or deep geothermics, is not yet a commercial channel but only a scientific and technological research field. The Soultz-sous-Forets site (Northern Alsace, France) is characterized by a 6 degrees per meter geothermal gradient and is used as a natural laboratory for deep geothermal and geological studies in the framework of a European research program. Two boreholes have been drilled to 3600 m of depth in the highly fractured granite massif beneath the site. The aim is to create a deep heat exchanger using only the natural fracturing for water transfer. A consortium of German, French and Italian industrial companies (Pfalzwerke, Badenwerk, EdF and Enel) has been created for a more active participation in the pilot phase. (J.S.). 1 fig., 2 photos

  12. Brain Intelligence: Go Beyond Artificial Intelligence

    OpenAIRE

    Lu, Huimin; Li, Yujie; Chen, Min; Kim, Hyoungseop; Serikawa, Seiichi

    2017-01-01

    Artificial intelligence (AI) is an important technology that supports daily social life and economic activities. It contributes greatly to the sustainable growth of Japan's economy and solves various social problems. In recent years, AI has attracted attention as a key for growth in developed countries such as Europe and the United States and developing countries such as China and India. The attention has been focused mainly on developing new artificial intelligence information communication ...

  13. Usare WebDewey

    OpenAIRE

    Baldi, Paolo

    2016-01-01

    This presentation shows how to use the WebDewey tool. Features of WebDewey. Italian WebDewey compared with American WebDewey. Querying Italian WebDewey. Italian WebDewey and MARC21. Italian WebDewey and UNIMARC. Numbers, captions, "equivalente verbale": Dewey decimal classification in Italian catalogues. Italian WebDewey and Nuovo soggettario. Italian WebDewey and LCSH. Italian WebDewey compared with printed version of Italian Dewey Classification (22. edition): advantages and disadvantages o...

  14. Understanding a Deep Learning Technique through a Neuromorphic System a Case Study with SpiNNaker Neuromorphic Platform

    OpenAIRE

    Sugiarto Indar; Pasila Felix

    2018-01-01

    Deep learning (DL) has been considered as a breakthrough technique in the field of artificial intelligence and machine learning. Conceptually, it relies on a many-layer network that exhibits a hierarchically non-linear processing capability. Some DL architectures such as deep neural networks, deep belief networks and recurrent neural networks have been developed and applied to many fields with incredible results, even comparable to human intelligence. However, many researchers are still scept...

  15. Analysing Student Programs in the PHP Intelligent Tutoring System

    Science.gov (United States)

    Weragama, Dinesha; Reye, Jim

    2014-01-01

    Programming is a subject that many beginning students find difficult. The PHP Intelligent Tutoring System (PHP ITS) has been designed with the aim of making it easier for novices to learn the PHP language in order to develop dynamic web pages. Programming requires practice. This makes it necessary to include practical exercises in any ITS that…

  16. Mutual intelligibility between closely related languages in Europe.

    NARCIS (Netherlands)

    Gooskens, Charlotte; van Heuven, Vincent; Golubovic, Jelena; Schüppert, Anja; Swarte, Femke; Voigt, Stefanie

    2018-01-01

    By means of a large-scale web-based investigation, we established the degree of mutual intelligibility of 16 closely related spoken languages within the Germanic, Slavic and Romance language families in Europe. We first present the results of a selection of 1833 listeners representing the mutual

  17. Mutual intelligibility between closely related languages in Europe

    NARCIS (Netherlands)

    Gooskens, C.; Heuven, van V.J.J.P.; Golubović, J.; Schüppert, A.; Swarte, F.; Voigt, S.

    2017-01-01

    By means of a large-scale web-based investigation, we established the degree of mutual intelligibility of 16 closely related spoken languages within the Germanic, Slavic and Romance language families in Europe. We first present the results of a selection of 1833 listeners representing the mutual

  18. Competitive Intelligence on the Internet-Going for the Gold.

    Science.gov (United States)

    Kassler, Helene

    2000-01-01

    Discussion of competitive intelligence (CI) focuses on recent Web sites and several search techniques that provide valuable CI information. Highlights include links that display business relationships; information from vendors; general business sites; search engine strategies; local business newspapers; job postings; patent and trademark…

  19. Semantic Web Requirements through Web Mining Techniques

    OpenAIRE

    Hassanzadeh, Hamed; Keyvanpour, Mohammad Reza

    2012-01-01

    In recent years, the Semantic Web has become a topic of active research in several fields of computer science and has been applied in a wide range of domains such as bioinformatics, life sciences, and knowledge management. The two fast-developing research areas, the Semantic Web and web mining, can complement each other, and their different techniques can be used jointly or separately to solve issues in both areas. In addition, since shifting from the current web to the semantic web mainly depends on the enhance...

  20. Digging deeper on "deep" learning: A computational ecology approach.

    Science.gov (United States)

    Buscema, Massimo; Sacco, Pier Luigi

    2017-01-01

    We propose an alternative approach to "deep" learning that is based on computational ecologies of structurally diverse artificial neural networks, and on dynamic associative memory responses to stimuli. Rather than focusing on massive computation of many different examples of a single situation, we opt for model-based learning and adaptive flexibility. Cross-fertilization of learning processes across multiple domains is the fundamental feature of human intelligence that must inform "new" artificial intelligence.

  1. The new challenge for e-learning : the educational semantic web

    NARCIS (Netherlands)

    Aroyo, L.M.; Dicheva, D.

    2004-01-01

    The big question for many researchers in the area of educational systems now is: what is the next step in the evolution of e-learning? Are we finally moving from a scattered intelligence to a coherent space of collaborative intelligence? How close are we to the vision of the Educational Semantic Web

  2. Training teachers to observation: an approach through multiple intelligences theory

    Directory of Open Access Journals (Sweden)

    Nicolini, P.

    2010-11-01

    Observation is a daily practice in scholastic and educational contexts, but it needs to develop into a professional competence in order to be helpful. In fact, to design an educative and didactic plan and to provide useful tools, activities and tasks to their students, teachers and educators need to collect information about learners. For these reasons we built a Web-Observation (Web-Ob) application, a tool able to support good practices in observation. In particular, the Web-Ob can provide Multiple Intelligences Theory as a framework through which children's behaviors and attitudes can be observed, assessed and evaluated.

  3. Towards web documents quality assessment for digital humanities scholars

    NARCIS (Netherlands)

    Ceolin, D.; Noordegraaf, Julia; Aroyo, L.M.; van Son, C.M.; Nejdl, Wolfgang; Hall, Wendy; Parigi, Paolo; Staab, Steffen

    2016-01-01

    We present a framework for assessing the quality of Web documents, and a baseline of three quality dimensions: trustworthiness, objectivity and basic scholarly quality. Assessing Web document quality is a "deep data" problem necessitating approaches to handle both data size and complexity.

  4. Responsive web design workflow

    OpenAIRE

    LAAK, TIMO

    2013-01-01

    Responsive Web Design Workflow is a literature review about Responsive Web Design, a web standards based modern web design paradigm. The goals of this research were to define what responsive web design is, determine its importance in building modern websites and describe a workflow for responsive web design projects. Responsive web design is a paradigm to create adaptive websites, which respond to the properties of the media that is used to render them. The three key elements of responsi...

  5. Mathematical structures of natural intelligence

    CERN Document Server

    Neuman, Yair

    2017-01-01

    This book uncovers mathematical structures underlying natural intelligence and applies category theory as a modeling language for understanding human cognition, giving readers new insights into the nature of human thought. In this context, the book explores various topics and questions, such as the human representation of the number system, why our counting ability is different from that which is evident among non-human organisms, and why the idea of zero is so difficult to grasp. The book is organized into three parts: the first introduces the general reason for studying general structures underlying the human mind; the second part introduces category theory as a modeling language and use it for exposing the deep and fascinating structures underlying human cognition; and the third applies the general principles and ideas of the first two parts to reaching a better understanding of challenging aspects of the human mind such as our understanding of the number system, the metaphorical nature of our thinking and...

  6. [Artificial Intelligence in Drug Discovery].

    Science.gov (United States)

    Fujiwara, Takeshi; Kamada, Mayumi; Okuno, Yasushi

    2018-04-01

    With the increase in data generated from analytical instruments, the application of artificial intelligence (AI) technology in the medical field has become indispensable. In particular, practical application of AI technology is strongly required in "genomic medicine" and "genomic drug discovery", which conduct medical practice and novel drug development based on individual genomic information. In our laboratory, we have been developing a database to integrate genome data and clinical information obtained by clinical genome analysis, and a computational support system for the clinical interpretation of variants using AI. In addition, with the aim of creating new therapeutic targets in genomic drug discovery, we have also been working on the development of a binding affinity prediction system for mutated proteins and drugs by molecular dynamics simulation using the supercomputer "Kei". We have also tackled problems in drug virtual screening. Our AI technology has successfully generated a virtual compound library, and a deep learning method has enabled us to predict interactions between compounds and target proteins.
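
    To make the compound-protein interaction idea concrete, here is a minimal, hypothetical sketch using scikit-learn: binary "fingerprint-like" features for a compound-protein pair feed a logistic-regression classifier. The features, labels and model are placeholders, not the laboratory's actual pipeline.

```python
# Illustrative sketch only: predicting compound-protein interaction from
# synthetic binary features with scikit-learn.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n_pairs, n_compound_bits, n_protein_bits = 500, 64, 32

# Hypothetical features: compound fingerprint bits + protein descriptor bits.
X = rng.integers(0, 2, size=(n_pairs, n_compound_bits + n_protein_bits))
y = rng.integers(0, 2, size=n_pairs)          # 1 = binds, 0 = does not bind

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))
```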

  7. Quo Vadis, Artificial Intelligence?

    OpenAIRE

    Berrar, Daniel; Sato, Naoyuki; Schuster, Alfons

    2010-01-01

    Since its conception in the mid 1950s, artificial intelligence with its great ambition to understand and emulate intelligence in natural and artificial environments alike is now a truly multidisciplinary field that reaches out and is inspired by a great diversity of other fields. Rapid advances in research and technology in various fields have created environments into which artificial intelligence could embed itself naturally and comfortably. Neuroscience with its desire to understand nervou...

  8. Principles of artificial intelligence

    CERN Document Server

    Nilsson, Nils J

    1980-01-01

    A classic introduction to artificial intelligence intended to bridge the gap between theory and practice, Principles of Artificial Intelligence describes fundamental AI ideas that underlie applications such as natural language processing, automatic programming, robotics, machine vision, automatic theorem proving, and intelligent data retrieval. Rather than focusing on the subject matter of the applications, the book is organized around general computational concepts involving the kinds of data structures used, the types of operations performed on the data structures, and the properties of th

  9. Intelligence of programs

    Energy Technology Data Exchange (ETDEWEB)

    Novak, D

    1982-01-01

    A general discussion about the level of artificial intelligence in computer programs is presented. The suitability of various languages for the development of complex, intelligent programs is discussed, considering fourth-generation language as well as the well established structured COBOL language. It is concluded that the success of automation in many administrative fields depends to a large extent on the development of intelligent programs.

  10. Intelligence analysis – the royal discipline of Competitive Intelligence

    OpenAIRE

    František Bartes

    2011-01-01

    The aim of this article is to propose a working methodology for Competitive Intelligence teams in one of the intelligence cycle's specific areas, the so-called "Intelligence Analysis". Intelligence Analysis is one of the stages of the Intelligence Cycle in which data from both the primary and secondary research are analyzed. The main result of the effort is the creation of added value for the information collected. Company Competitive Intelligence, correctly understood and implemented in busines...

  11. The deep lymphatic anatomy of the hand.

    Science.gov (United States)

    Ma, Chuan-Xiang; Pan, Wei-Ren; Liu, Zhi-An; Zeng, Fan-Qiang; Qiu, Zhi-Qiang

    2018-04-03

    The deep lymphatic anatomy of the hand still remains the least described in medical literature. Eight hands were harvested from four nonembalmed human cadavers amputated above the wrist. A small amount of 6% hydrogen peroxide was employed to detect the lymphatic vessels around the superficial and deep palmar vascular arches, in webs from the index to little fingers, the thenar and hypothenar areas. A 30-gauge needle was inserted into the vessels and injected with a barium sulphate compound. Each specimen was dissected, photographed and radiographed to demonstrate deep lymphatic distribution of the hand. Five groups of deep collecting lymph vessels were found in the hand: superficial palmar arch lymph vessel (SPALV); deep palmar arch lymph vessel (DPALV); thenar lymph vessel (TLV); hypothenar lymph vessel (HTLV); deep finger web lymph vessel (DFWLV). Each group of vessels drained in different directions first, then all turned and ran towards the wrist in different layers. The deep lymphatic drainage of the hand has been presented. The results will provide an anatomical basis for clinical management, educational reference and scientific research. Copyright © 2018 Elsevier GmbH. All rights reserved.

  12. STANFORD ARTIFICIAL INTELLIGENCE PROJECT.

    Science.gov (United States)

    ARTIFICIAL INTELLIGENCE, GAME THEORY, DECISION MAKING, BIONICS, AUTOMATA, SPEECH RECOGNITION, GEOMETRIC FORMS, LEARNING MACHINES, MATHEMATICAL MODELS, PATTERN RECOGNITION, SERVOMECHANISMS, SIMULATION, BIBLIOGRAPHIES.

  13. Intelligent Optics Laboratory

    Data.gov (United States)

    Federal Laboratory Consortium — The Intelligent Optics Laboratory supports sophisticated investigations on adaptive and nonlinear optics; advanced imaging and image processing; ground-to-ground and...

  14. Intelligence and childlessness.

    Science.gov (United States)

    Kanazawa, Satoshi

    2014-11-01

    Demographers debate why people have children in advanced industrial societies where children are net economic costs. From an evolutionary perspective, however, the important question is why some individuals choose not to have children. Recent theoretical developments in evolutionary psychology suggest that more intelligent individuals may be more likely to prefer to remain childless than less intelligent individuals. Analyses of the National Child Development Study show that more intelligent men and women express preference to remain childless early in their reproductive careers, but only more intelligent women (not more intelligent men) are more likely to remain childless by the end of their reproductive careers. Controlling for education and earnings does not at all attenuate the association between childhood general intelligence and lifetime childlessness among women. One-standard-deviation increase in childhood general intelligence (15 IQ points) decreases women's odds of parenthood by 21-25%. Because women have a greater impact on the average intelligence of future generations, the dysgenic fertility among women is predicted to lead to a decline in the average intelligence of the population in advanced industrial nations. Copyright © 2014 Elsevier Inc. All rights reserved.

  15. Artificial Intelligence and Information Management

    Science.gov (United States)

    Fukumura, Teruo

    After reviewing the recent popularization of information transmission and processing technologies, which are supported by the progress of electronics, the authors describe how the introduction of opto-electronics into information technology has opened the possibility of applying artificial intelligence (AI) techniques to the mechanization of information management. It is pointed out that although AI deals with problems in the mental world, its basic methodology relies upon verification by evidence, so experiments on computers become indispensable for the study of AI. The authors also note that, as computers operate by program, the basic intelligence with which AI is concerned is that expressed by languages. As a result, the main tool of AI is logical proof, which involves an intrinsic limitation. To answer the question "Why do you employ AI in your problem solving", one must have ill-structured problems and intend to conduct deep studies on thinking and inference, and on memory and knowledge representation. Finally, the authors discuss the application of AI techniques to information management. The possibility of the expert system, the processing of queries, and the necessity of a document knowledge base are stated.

  16. Routledge companion to intelligence studies

    CERN Document Server

    Dover, Robert; Hillebrand, Claudia

    2013-01-01

    The Routledge Companion to Intelligence Studies provides a broad overview of the growing field of intelligence studies. The recent growth of interest in intelligence and security studies has led to an increased demand for popular depictions of intelligence and reference works to explain the architecture and underpinnings of intelligence activity. Divided into five comprehensive sections, this Companion provides a strong survey of the cutting-edge research in the field of intelligence studies: Part I: The evolution of intelligence studies; Part II: Abstract approaches to intelligence; Part III: Historical approaches to intelligence; Part IV: Systems of intelligence; Part V: Contemporary challenges. With a broad focus on the origins, practices and nature of intelligence, the book not only addresses classical issues, but also examines topics of recent interest in security studies. The overarching aim is to reveal the rich tapestry of intelligence studies in both a sophisticated and accessible way. This Companion...

  17. High-Redshift Radio Galaxies from Deep Fields

    Indian Academy of Sciences (India)

    2016-01-27

    Here we present results from the deep 150 MHz observations of the LBDS-Lynx field, which has been imaged at 327, ...

  18. Artificial Consciousness or Artificial Intelligence

    OpenAIRE

    Spanache Florin

    2017-01-01

    Artificial intelligence is a tool designed by people for the gratification of their own creative ego, so we cannot confuse consciousness with intelligence, nor intelligence in its human representation with consciousness. They are all different concepts and they have different uses. Philosophically, there are differences between autonomous people and automatic artificial intelligence. This is the difference between intelligence and artificial intelligence, autonomous versus a...

  19. 2015 Chinese Intelligent Systems Conference

    CERN Document Server

    Du, Junping; Li, Hongbo; Zhang, Weicun; CISC’15

    2016-01-01

    This book presents selected research papers from the 2015 Chinese Intelligent Systems Conference (CISC’15), held in Yangzhou, China. The topics covered include multi-agent systems, evolutionary computation, artificial intelligence, complex systems, computation intelligence and soft computing, intelligent control, advanced control technology, robotics and applications, intelligent information processing, iterative learning control, and machine learning. Engineers and researchers from academia, industry and the government can gain valuable insights into solutions combining ideas from multiple disciplines in the field of intelligent systems.

  20. An Intelligent Framework for Website Usability

    Directory of Open Access Journals (Sweden)

    Alexiei Dingli

    2014-01-01

    With the major advances of the Internet over the past couple of years, websites have come to play a central role in the modern marketing business program. However, simply owning a website is not enough for a business to prosper on the Web. Indeed, it is the level of usability of a website that determines whether a user stays or abandons it for a competing one. It is therefore crucial to understand the importance of usability on the web, and consequently the need for its evaluation. Nonetheless, there exist a number of obstacles preventing software organizations from successfully applying sound website usability evaluation strategies in practice. From this point of view, automation of the latter is extremely beneficial: it not only assists designers in creating more usable websites, but also enhances the Internet users' experience on the Web and increases their level of satisfaction. As a means of addressing this problem, an Intelligent Usability Evaluation (IUE) tool is proposed that automates the usability evaluation process by employing a Heuristic Evaluation technique in an intelligent manner through the adoption of several research-based AI methods. Experimental results show there exists a high correlation between the tool and human annotators when identifying the considered usability violations.
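
    The flavour of such automated heuristic checks can be sketched in a few lines; the example below (not the IUE tool itself) scans a page for three common usability violations with BeautifulSoup. The HTML and the rules are illustrative only.

```python
# Illustrative sketch only: rule-based usability/accessibility checks
# (missing page title, images without alt text, non-descriptive link text).
from bs4 import BeautifulSoup

html = """
<html><head></head><body>
  <img src="logo.png"><img src="banner.png" alt="Spring sale banner">
  <a href="#">click here</a>
</body></html>
"""

soup = BeautifulSoup(html, "html.parser")
violations = []

if soup.title is None or not soup.title.text.strip():
    violations.append("Page has no descriptive <title>.")
for img in soup.find_all("img"):
    if not img.get("alt"):
        violations.append(f"Image {img.get('src')} is missing alt text.")
for a in soup.find_all("a"):
    if a.get_text(strip=True).lower() in {"click here", "here"}:
        violations.append("Link uses non-descriptive text ('click here').")

print("\n".join(violations) or "No violations found.")
```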

  1. Role of Librarian in Internet and World Wide Web Environment

    Directory of Open Access Journals (Sweden)

    K. Nageswara Rao

    2001-01-01

    The transition of traditional library collections to digital or virtual collections presented the librarian with new opportunities. The Internet, Web environment and associated sophisticated tools have given the librarian a new dynamic role to play and serve the new information-based society in better ways than hitherto. Because of the powerful features of the Web, i.e. its distributed, heterogeneous, collaborative, multimedia, multi-protocol, hypermedia-oriented architecture, the World Wide Web has revolutionized the way people access information, and has opened up new possibilities in areas such as digital libraries, virtual libraries, scientific information retrieval and dissemination. Not only is the world becoming interconnected, but the use of the Internet and the Web has also changed the fundamental roles, paradigms, and organizational culture of libraries and librarians. The article describes the limitless scope of the Internet and the Web, the existence of the librarian in the changing environment, parallelism between information science and information technology, librarians and intelligent agents, the working of intelligent agents, and the strengths, weaknesses, threats and opportunities involved in the relationship between librarians and the Web. The role of the librarian in the Internet and Web environment, especially as intermediary, facilitator, end-user trainer, Web site builder, researcher, interface designer, knowledge manager and sifter of information resources, is also described.

  2. Distributed intelligence in CAMAC

    International Nuclear Information System (INIS)

    Kunz, P.F.

    1977-01-01

    The CAMAC digital interface standard has served us well since 1969. During this time there have been enormous advances in digital electronics. In particular, low cost microprocessors now make it feasible to consider use of distributed intelligence even in simple data acquisition systems. This paper describes a simple extension of the CAMAC standard which allows distributed intelligence at the crate level

  3. Intelligent design som videnskab?

    DEFF Research Database (Denmark)

    Klausen, Søren Harnow

    2007-01-01

    Discusses whether intelligent design can be characterized as science; argues that, given the absence of clear demarcation criteria, this can hardly be ruled out.

  4. Distributed intelligence in CAMAC

    International Nuclear Information System (INIS)

    Kunz, P.F.

    1977-01-01

    A simple extension of the CAMAC standard is described which allows distributed intelligence at the crate level. By distributed intelligence is meant that there is more than one source of control in a system. This standard is just now emerging from the NIM Dataway Working Group and its European counterpart. 1 figure

  5. Intelligence and treaty ratification

    International Nuclear Information System (INIS)

    Cahn, A.H.

    1990-01-01

    This paper reports that there are two sets of questions applicable to the ratification phase: What is the role of intelligence in the ratification process? What effect did intelligence have on that process? The author attempts to answer these and other questions.

  6. Applying Multiple Intelligences

    Science.gov (United States)

    Christodoulou, Joanna A.

    2009-01-01

    The ideas of multiple intelligences introduced by Howard Gardner of Harvard University more than 25 years ago have taken form in many ways, both in schools and in other sometimes-surprising settings. The silver anniversary of Gardner's learning theory provides an opportunity to reflect on the ways multiple intelligences theory has taken form and…

  7. Next generation Emotional Intelligence

    Science.gov (United States)

    J. Saveland

    2012-01-01

    Emotional Intelligence has been a hot topic in leadership training since Dan Goleman published his book on the subject in 1995. Emotional intelligence competencies are typically focused on recognition and regulation of emotions in one's self and social situations, yielding four categories: self-awareness, self-management, social awareness and relationship...

  8. Intelligence by consent

    DEFF Research Database (Denmark)

    Diderichsen, Adam; Rønn, Kira Vrist

    2017-01-01

    This article contributes to the current discussions concerning an adequate framework for intelligence ethics. The first part critically scrutinises the use of Just War Theory in intelligence ethics with specific focus on the just cause criterion. We argue that using self-defence as justifying cau...

  9. Intelligence and Physical Attractiveness

    Science.gov (United States)

    Kanazawa, Satoshi

    2011-01-01

    This brief research note aims to estimate the magnitude of the association between general intelligence and physical attractiveness with large nationally representative samples from two nations. In the United Kingdom, attractive children are more intelligent by 12.4 IQ points (r=0.381), whereas in the United States, the correlation between…

  10. Intelligence and treaty ratification

    International Nuclear Information System (INIS)

    Naftzinger, J.E.

    1990-01-01

    This paper describes the atmosphere leading up to the Senate INF hearings and then surveys the broad issues they raised. After that, the author highlights several aspects of the intelligence community's involvement and discusses the specific intelligence-related issues as the Senate committees saw them, notes their impact on the outcome, and finally draws several conclusions and lessons pertinent to the future

  11. Intelligence, Race, and Genetics

    Science.gov (United States)

    Sternberg, Robert J.; Grigorenko, Elena L.; Kidd, Kenneth K.

    2005-01-01

    In this article, the authors argue that the overwhelming portion of the literature on intelligence, race, and genetics is based on folk taxonomies rather than scientific analysis. They suggest that because theorists of intelligence disagree as to what it is, any consideration of its relationships to other constructs must be tentative at best. They…

  12. Multiple Intelligences in Action.

    Science.gov (United States)

    Campbell, Bruce

    1992-01-01

    Describes the investigation of the effects of a four-step model program used with third through fifth grade students to implement Gardner's concepts of seven human intelligences--linguistic, logical/mathematical, visual/spatial, musical, kinesthetic, intrapersonal, and interpersonal intelligence--into daily learning. (BB)

  13. The Reproduction of Intelligence

    Science.gov (United States)

    Meisenberg, Gerhard

    2010-01-01

    Although a negative relationship between fertility and education has been described consistently in most countries of the world, less is known about the relationship between intelligence and reproductive outcomes. Also the paths through which intelligence influences reproductive outcomes are uncertain. The present study uses the NLSY79 to analyze…

  14. Intelligent robot action planning

    Energy Technology Data Exchange (ETDEWEB)

    Vamos, T; Siegler, A

    1982-01-01

    Action planning methods used in intelligent robot control are discussed. Planning is accomplished through environment understanding, environment representation, task understanding and planning, motion analysis and man-machine communication. These fields are analysed in detail. The frames of an intelligent motion planning system are presented. Graphic simulation of the robot's environment and motion is used to support the planning. 14 references.

  15. Computational Intelligence in Intelligent Data Analysis

    CERN Document Server

    Nürnberger, Andreas

    2013-01-01

    Complex systems and their phenomena are ubiquitous as they can be found in biology, finance, the humanities, management sciences, medicine, physics and similar fields. For many problems in these fields, there are no conventional ways to mathematically or analytically solve them completely at low cost. On the other hand, nature already solved many optimization problems efficiently. Computational intelligence attempts to mimic nature-inspired problem-solving strategies and methods. These strategies can be used to study, model and analyze complex systems such that it becomes feasible to handle them. Key areas of computational intelligence are artificial neural networks, evolutionary computation and fuzzy systems. As only a few researchers in that field, Rudolf Kruse has contributed in many important ways to the understanding, modeling and application of computational intelligence methods. On occasion of his 60th birthday, a collection of original papers of leading researchers in the field of computational intell...

  16. Web-video-mining-supported workflow modeling for laparoscopic surgeries.

    Science.gov (United States)

    Liu, Rui; Zhang, Xiaoli; Zhang, Hao

    2016-11-01

    As quality assurance is of strong concern in advanced surgeries, intelligent surgical systems are expected to have knowledge such as the knowledge of the surgical workflow model (SWM) to support their intuitive cooperation with surgeons. For generating a robust and reliable SWM, a large amount of training data is required. However, training data collected by physically recording surgery operations is often limited and data collection is time-consuming and labor-intensive, severely influencing knowledge scalability of the surgical systems. The objective of this research is to solve the knowledge scalability problem in surgical workflow modeling in a low-cost and labor-efficient way. A novel web-video-mining-supported surgical workflow modeling (webSWM) method is developed. A novel video quality analysis method based on topic analysis and sentiment analysis techniques is developed to select high-quality videos from abundant and noisy web videos. A statistical learning method is then used to build the workflow model based on the selected videos. To test the effectiveness of the webSWM method, 250 web videos were mined to generate a surgical workflow for robotic cholecystectomy surgery. The generated workflow was evaluated with 4 web-retrieved videos and 4 operating-room-recorded videos. The evaluation results (video selection consistency n-index ≥0.60; surgical workflow matching degree ≥0.84) proved the effectiveness of the webSWM method in generating robust and reliable SWM knowledge by mining web videos. With the webSWM method, abundant web videos were selected and a reliable SWM was modeled in a short time with low labor cost. Satisfactory performance in mining web videos and learning surgery-related knowledge shows that the webSWM method is promising in scaling knowledge for intelligent surgical systems. Copyright © 2016 Elsevier B.V. All rights reserved.
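
    A much-simplified sketch of the video-selection step is given below: each candidate video receives a crude quality score combining topic relevance of its description and the sentiment of its comments, and only positively scored videos are kept. The scoring functions and data are invented for illustration and are not the webSWM implementation.

```python
# Illustrative sketch only: filtering web videos by a crude quality score
# built from topic relevance and comment sentiment.
def topic_relevance(description, keywords):
    words = description.lower().split()
    return sum(words.count(k) for k in keywords) / max(len(words), 1)

def mean_sentiment(comments,
                   positive={"great", "clear", "helpful"},
                   negative={"blurry", "wrong", "useless"}):
    score = 0
    for c in comments:
        tokens = set(c.lower().split())
        score += len(tokens & positive) - len(tokens & negative)
    return score / max(len(comments), 1)

keywords = ["cholecystectomy", "laparoscopic", "robotic"]
videos = [
    {"id": "v1",
     "description": "robotic laparoscopic cholecystectomy full procedure",
     "comments": ["great and clear demonstration", "helpful for residents"]},
    {"id": "v2",
     "description": "funny cat compilation",
     "comments": ["useless for surgery", "blurry"]},
]

for v in videos:
    v["quality"] = topic_relevance(v["description"], keywords) + mean_sentiment(v["comments"])

selected = [v["id"] for v in sorted(videos, key=lambda v: v["quality"], reverse=True)
            if v["quality"] > 0]
print("selected videos:", selected)
```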

  17. Intelligence and Prosocial Behavior

    DEFF Research Database (Denmark)

    Han, Ru; Shi, Jiannong; Yong, W.

    2012-01-01

    Results of previous studies of the relationship between prosocial behavior and intelligence have been inconsistent. This study attempts to distinguish the differences between several prosocial tasks, and explores the ways in which cognitive ability influences prosocial behavior. In Study One and Two, we reexamined the relationship between prosocial behavior and intelligence by employing a costly signaling theory with four games. The results revealed that the prosocial level of smarter children is higher than that of other children in more complicated tasks but not so in simple tasks. In Study Three, we tested the moderation effect of the average intelligence across classes, and the results did not show any group intelligence effect on the relationship between intelligence and prosocial behavior.

  18. Business Intelligence Systems

    Directory of Open Access Journals (Sweden)

    Bogdan NEDELCU

    2014-02-01

    The aim of this article is to show the importance of business intelligence and its growing influence. It also shows when the concept of business intelligence was used for the first time and how it evolved over time. The paper discusses the utility of a business intelligence system in any organization and its contribution to daily activities. Furthermore, we highlight the role and the objectives of business intelligence systems inside an organization and the need to grow income and reduce costs, to manage the complexity of the business environment and to cut IT costs so that the organization survives in the current competitive climate. The article contains information about the architectural principles of a business intelligence system and how such a system can be achieved.

  19. Web TA Production (WebTA)

    Data.gov (United States)

    US Agency for International Development — WebTA is a web-based time and attendance system that supports USAID payroll administration functions, and is designed to capture hours worked, leave used and...

  20. Web server attack analyzer

    OpenAIRE

    Mižišin, Michal

    2013-01-01

    Web server attack analyzer - Abstract The goal of this work was to create a prototype of an analyzer of injection-flaw attacks on a web server. The proposed solution combines the capabilities of a web application firewall and a web server log analyzer. Analysis is based on configurable signatures defined by regular expressions. This paper begins with a summary of web attacks, followed by an analysis of detection techniques on web servers, and a description and justification of the selected implementation. In the end are charact...

  1. Harvesting All Matching Information To A Given Query From a Deep Website

    NARCIS (Netherlands)

    Khelghati, Mohammadreza; Hiemstra, Djoerd; van Keulen, Maurice; Armano, Giuliano; Bozzon, Alessandro; Giuliani, Alessandro

    In this paper, the goal is harvesting all documents matching a given (entity) query from a deep web source. The objective is to retrieve all information about, for instance, "Denzel Washington", "Iran Nuclear Deal", or "FC Barcelona" from data hidden behind web forms. Policies of web search engines
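
    A minimal sketch of such harvesting is shown below: an entity query is submitted repeatedly to a form-style search endpoint and result pages are collected until an empty page is returned. The URL, parameter names and result markup are hypothetical; real deep web sources differ and typically impose query or rate limits.

```python
# Illustrative sketch only: paging through a form-based (deep web) search
# endpoint for a single entity query. Endpoint and markup are hypothetical.
import requests
from bs4 import BeautifulSoup

SEARCH_URL = "https://example.org/search"   # hypothetical deep-web form endpoint

def harvest(query, max_pages=50):
    documents = []
    for page in range(1, max_pages + 1):
        resp = requests.get(SEARCH_URL, params={"q": query, "page": page}, timeout=10)
        resp.raise_for_status()
        soup = BeautifulSoup(resp.text, "html.parser")
        results = soup.select("div.result")        # assumed result markup
        if not results:                            # stop when a page is empty
            break
        for r in results:
            documents.append(r.get_text(" ", strip=True))
    return documents

if __name__ == "__main__":
    docs = harvest("Denzel Washington")
    print(f"harvested {len(docs)} documents")
```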

  2. Web threat and its implication for E-business in Nigeria ...

    African Journals Online (AJOL)

    Web threat is any threat that uses the internet to facilitate identity theft, fraud, espionage and intelligence gathering. Web-based vulnerabilities now outnumber traditional computer security concerns. Such threats use multiple types of malware and fraud, all of which utilize HTTP or HTTPS protocols, but may also employ ...

  3. An Object-Oriented Architecture for a Web-Based CAI System.

    Science.gov (United States)

    Nakabayashi, Kiyoshi; Hoshide, Takahide; Seshimo, Hitoshi; Fukuhara, Yoshimi

    This paper describes the design and implementation of an object-oriented World Wide Web-based CAI (Computer-Assisted Instruction) system. The goal of the design is to provide a flexible CAI/ITS (Intelligent Tutoring System) framework with full extendibility and reusability, as well as to exploit Web-based software technologies such as JAVA, ASP (a…

  4. Nonvolatile Memory Materials for Neuromorphic Intelligent Machines.

    Science.gov (United States)

    Jeong, Doo Seok; Hwang, Cheol Seong

    2018-04-18

    Recent progress in deep learning extends the capability of artificial intelligence to various practical tasks, making the deep neural network (DNN) an extremely versatile hypothesis. While such DNN is virtually built on contemporary data centers of the von Neumann architecture, physical (in part) DNN of non-von Neumann architecture, also known as neuromorphic computing, can remarkably improve learning and inference efficiency. Particularly, resistance-based nonvolatile random access memory (NVRAM) highlights its handy and efficient application to the multiply-accumulate (MAC) operation in an analog manner. Here, an overview is given of the available types of resistance-based NVRAMs and their technological maturity from the material- and device-points of view. Examples within the strategy are subsequently addressed in comparison with their benchmarks (virtual DNN in deep learning). A spiking neural network (SNN) is another type of neural network that is more biologically plausible than the DNN. The successful incorporation of resistance-based NVRAM in SNN-based neuromorphic computing offers an efficient solution to the MAC operation and spike timing-based learning in nature. This strategy is exemplified from a material perspective. Intelligent machines are categorized according to their architecture and learning type. Also, the functionality and usefulness of NVRAM-based neuromorphic computing are addressed. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
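
    The analog MAC idea can be stated in a few lines: a crossbar of programmed conductances multiplies an input voltage vector by a matrix simply by summing currents along each column. The sketch below simulates this with arbitrary example values.

```python
# Illustrative sketch only: the multiply-accumulate (MAC) operation a resistive
# crossbar performs via Ohm's and Kirchhoff's laws. Values are arbitrary.
import numpy as np

# Each column current is the sum of (conductance x input voltage) down that
# column: I_j = sum_i G[i, j] * V[i], i.e. an analog vector-matrix multiply.
G = np.array([[1.0e-6, 2.0e-6],
              [0.5e-6, 1.5e-6],
              [2.0e-6, 0.1e-6]])      # programmed NVRAM conductances (siemens)
V = np.array([0.2, 0.1, 0.3])         # input voltages applied to the rows (volts)

I = V @ G                             # output currents read on the columns (amps)
print("column currents:", I)
```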

  5. Artificial intelligence for analyzing orthopedic trauma radiographs.

    Science.gov (United States)

    Olczak, Jakub; Fahlberg, Niklas; Maki, Atsuto; Razavian, Ali Sharif; Jilert, Anthony; Stark, André; Sköldenberg, Olof; Gordon, Max

    2017-12-01

    Background and purpose - Recent advances in artificial intelligence (deep learning) have shown remarkable performance in classifying non-medical images, and the technology is believed to be the next technological revolution. So far it has never been applied in an orthopedic setting, and in this study we sought to determine the feasibility of using deep learning for skeletal radiographs. Methods - We extracted 256,000 wrist, hand, and ankle radiographs from Danderyd's Hospital and identified 4 classes: fracture, laterality, body part, and exam view. We then selected 5 openly available deep learning networks that were adapted for these images. The most accurate network was benchmarked against a gold standard for fractures. We furthermore compared the network's performance with 2 senior orthopedic surgeons who reviewed images at the same resolution as the network. Results - All networks exhibited an accuracy of at least 90% when identifying laterality, body part, and exam view. The final accuracy for fractures was estimated at 83% for the best-performing network. The network performed similarly to senior orthopedic surgeons when presented with images at the same resolution as the network. The two-reviewer Cohen's kappa under these conditions was 0.76. Interpretation - This study supports the use of artificial intelligence for orthopedic radiographs, where it can perform at a human level. While the current implementation lacks important features that surgeons require, e.g. risk of dislocation, classifications, measurements, and combining multiple exam views, these problems have technical solutions that are waiting to be implemented for orthopedics.
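
    As a small aside on the agreement statistic quoted above, Cohen's kappa for two reviewers can be computed directly with scikit-learn; the labels below are made up for illustration.

```python
# Illustrative sketch only: Cohen's kappa for two reviewers' fracture labels.
from sklearn.metrics import cohen_kappa_score

reviewer_a = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]   # 1 = fracture, 0 = no fracture
reviewer_b = [1, 0, 1, 0, 0, 1, 0, 1, 1, 1]

print("Cohen's kappa:", cohen_kappa_score(reviewer_a, reviewer_b))
```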

  6. Semantic web for dummies

    CERN Document Server

    Pollock, Jeffrey T

    2009-01-01

    Semantic Web technology is already changing how we interact with data on the Web. By connecting random information on the Internet in new ways, Web 3.0, as it is sometimes called, represents an exciting online evolution. Whether you're a consumer doing research online, a business owner who wants to offer your customers the most useful Web site, or an IT manager eager to understand Semantic Web solutions, Semantic Web For Dummies is the place to start! It will help you: know how the typical Internet user will recognize the effects of the Semantic Web; explore all the benefits the data Web offers t

  7. In Pursuit of Alternatives in ELT Methodology: WebQuests

    Science.gov (United States)

    Sen, Ayfer; Neufeld, Steve

    2006-01-01

    Although the Internet has opened up a vast new source of information for university students to use and explore, many students lack the skills to find, critically evaluate and intelligently exploit web-based resources. This problem is accentuated in English-medium universities where students learn and use English as a foreign language. In these…

  8. A Web API ecosystem through feature-based reuse

    NARCIS (Netherlands)

    Verborgh, Ruben; Dumontier, Michel

    2016-01-01

    The current Web API landscape does not scale well: every API requires its own hardcoded clients in an unusually short-lived, tightly coupled relationship of highly subjective quality. This directly leads to inflated development costs, and prevents the design of a more intelligent generation of

  9. Bringing Web 2.0 to bioinformatics.

    Science.gov (United States)

    Zhang, Zhang; Cheung, Kei-Hoi; Townsend, Jeffrey P

    2009-01-01

    Enabling deft data integration from numerous, voluminous and heterogeneous data sources is a major bioinformatic challenge. Several approaches have been proposed to address this challenge, including data warehousing and federated databasing. Yet despite the rise of these approaches, integration of data from multiple sources remains problematic and toilsome. These two approaches follow a user-to-computer communication model for data exchange, and do not facilitate a broader concept of data sharing or collaboration among users. In this report, we discuss the potential of Web 2.0 technologies to transcend this model and enhance bioinformatics research. We propose a Web 2.0-based Scientific Social Community (SSC) model for the implementation of these technologies. By establishing a social, collective and collaborative platform for data creation, sharing and integration, we promote a web services-based pipeline featuring web services for computer-to-computer data exchange as users add value. This pipeline aims to simplify data integration and creation, to realize automatic analysis, and to facilitate reuse and sharing of data. SSC can foster collaboration and harness collective intelligence to create and discover new knowledge. In addition to its research potential, we also describe its potential role as an e-learning platform in education. We discuss lessons from information technology, predict the next generation of Web (Web 3.0), and describe its potential impact on the future of bioinformatics studies.

  10. Emotional intelligence education in pre-registration nursing programmes: an integrative review.

    Science.gov (United States)

    Foster, Kim; McCloughen, Andrea; Delgado, Cynthia; Kefalas, Claudia; Harkness, Emily

    2015-03-01

    To investigate the state of knowledge on emotional intelligence (EI) education in pre-registration nursing programmes. Integrative literature review. CINAHL, Medline, Scopus, ERIC, and Web of Knowledge electronic databases were searched for abstracts published in English between 1992-2014. Data extraction and constant comparative analysis of 17 articles. Three categories were identified: Constructs of emotional intelligence; emotional intelligence curricula components; and strategies for emotional intelligence education. A wide range of emotional intelligence constructs were found, with a predominance of trait-based constructs. A variety of strategies to enhance students' emotional intelligence skills were identified, but limited curricula components and frameworks reported in the literature. An ability-based model for curricula and learning and teaching approaches is recommended. Copyright © 2014. Published by Elsevier Ltd.

  11. 7th Asian Conference on Intelligent Information and Database Systems (ACIIDS 2015)

    CERN Document Server

    Nguyen, Ngoc; Batubara, John; New Trends in Intelligent Information and Database Systems

    2015-01-01

    Intelligent information and database systems are two closely related subfields of modern computer science which have been known for over thirty years. They focus on the integration of artificial intelligence and classic database technologies to create the class of next generation information systems. The book focuses on new trends in intelligent information and database systems and discusses topics addressing the foundations and principles of data, information, and knowledge models; methodologies for intelligent information and database systems analysis, design, and implementation; and their validation, maintenance and evolution. The contributions cover a broad spectrum of research topics, discussed from both practical and theoretical points of view, such as: intelligent information retrieval, natural language processing, semantic web, social networks, machine learning, knowledge discovery, data mining, uncertainty management and reasoning under uncertainty, intelligent optimization techniques in information systems, secu...

  12. Business Intelligence & Analytical Intelligence: hou het zakelijk

    OpenAIRE

    Van Nieuwenhuyse, Dries

    2013-01-01

    Technology is democratizing and the market is consolidating, while the amount of data is exploding. It seems an ideal breeding ground for projects around business intelligence and analytics. "The less the technology makes the difference, the more prominently the business will be present."

  13. Social Intelligence Design in Ambient Intelligence

    NARCIS (Netherlands)

    Nijholt, Antinus; Stock, Oliviero; Stock, O.; Nishida, T.; Nishida, Toyoaki

    2009-01-01

    This Special Issue of AI and Society contains a selection of papers presented at the 6th Social Intelligence Design Workshop held at ITC-irst, Povo (Trento, Italy) in July 2007. Being the 6th in a series means that there now is a well-established and also a growing research area. The interest in

  14. Spiritual Intelligence, Emotional Intelligence and Auditor’s Performance

    OpenAIRE

    Hanafi, Rustam

    2010-01-01

    The objective of this research was to investigate empirical evidence about the influence of auditor spiritual intelligence on performance, with emotional intelligence as a mediator variable. Linear regression models are developed to examine the hypothesis and path analysis. The dependent variable of each model is auditor performance, whereas the independent variable of model 1 is spiritual intelligence, and of model 2 emotional intelligence and spiritual intelligence. The parameters were estima...

  15. Naturalist Intelligence Among the Other Multiple Intelligences [In Bulgarian]

    Directory of Open Access Journals (Sweden)

    R. Genkov

    2007-09-01

    Full Text Available The theory of multiple intelligences was presented by Gardner in 1983. The theory was revised later (1999), and among the other intelligences a naturalist intelligence was added. The criteria for distinguishing the different types of intelligences are considered. While Gardner restricted the analysis of naturalist intelligence to examples from living nature only, the present paper considers the problem against a wider background, including objects and persons of the natural sciences.

  16. ANALYSIS OF WEB MINING APPLICATIONS AND BENEFICIAL AREAS

    Directory of Open Access Journals (Sweden)

    Khaleel Ahmad

    2011-10-01

    Full Text Available The main purpose of this paper is to study the process of Web mining techniques, their features, applications (e-commerce and e-business), and beneficial areas. Web mining has become more popular and is widely used in various application areas (such as business intelligence systems, e-commerce and e-business). E-commerce and e-business results are improved by the application of mining techniques such as data mining and text mining; among all the mining techniques, web mining is the best suited.

  17. The new community rules marketing on the social web

    CERN Document Server

    Weinberg, Tamar

    2009-01-01

    Blogs, networking sites, and other examples of the social web provide businesses with a largely untapped marketing channel for products and services. But how do you take advantage of them? With The New Community Rules, you'll understand how social web technologies work, and learn the most practical and effective ways to reach people who frequent these sites. Written by an expert in social media and viral marketing, this book cuts through the hype and jargon to give you intelligent advice and strategies for positioning your business on the social web, with case studies that show how other c

  18. Intelligence and treaty ratification

    International Nuclear Information System (INIS)

    Sojka, G.L.

    1990-01-01

    What did the intelligence community and the Intelligence Committee do poorly in regard to the treaty ratification process for arms control? We failed to solve the compartmentalization problem. This is a second-order problem, and, in general, analysts try to be very open; but there are problems nevertheless. There are very few, if any, people within the intelligence community who are cleared for everything relevant to our monitoring capability, short of probably the Director of Central Intelligence and the president, and this is a major problem. The formal monitoring estimates are drawn up by individuals who do not have access to all the information needed to make the monitoring judgements. This paper reports that the intelligence community did not present a formal document on either Soviet incentives or disincentives to cheat or on possible cheating scenarios, and that was a mistake. However, the intelligence community was very responsive in producing those types of estimates, and, ultimately, the evidence behind them, in response to questions. Nevertheless, the author thinks the intelligence community would do well to address this issue up front before a treaty is submitted to the Senate for advice and consent

  19. The Epistemic Status of Intelligence

    DEFF Research Database (Denmark)

    Rønn, Kira Vrist; Høffding, Simon

    2012-01-01

    We argue that the majority of intelligence definitions fail to recognize that the normative epistemic status of intelligence is knowledge and not an inferior alternative. We refute the counter-arguments that intelligence ought not to be seen as knowledge because of 1) its action-oriented scope...... and robustness of claims to intelligence-knowledge can be assessed....

  20. Moral Intelligence in the Schools

    Science.gov (United States)

    Clarken, Rodney H.

    2009-01-01

    Moral intelligence is newer and less studied than the more established cognitive, emotional and social intelligences, but has great potential to improve our understanding of learning and behavior. Moral intelligence refers to the ability to apply ethical principles to personal goals, values and actions. The construct of moral intelligence consists…

  1. Survey on deep learning for radiotherapy.

    Science.gov (United States)

    Meyer, Philippe; Noblet, Vincent; Mazzara, Christophe; Lallement, Alex

    2018-05-17

    More than 50% of cancer patients are treated with radiotherapy, either exclusively or in combination with other methods. The planning and delivery of radiotherapy treatment is a complex process, but can now be greatly facilitated by artificial intelligence technology. Deep learning is the fastest-growing field in artificial intelligence and has been successfully used in recent years in many domains, including medicine. In this article, we first explain the concept of deep learning, addressing it in the broader context of machine learning. The most common network architectures are presented, with a more specific focus on convolutional neural networks. We then present a review of the published works on deep learning methods that can be applied to radiotherapy, which are classified into seven categories related to the patient workflow, and can provide some insights of potential future applications. We have attempted to make this paper accessible to both radiotherapy and deep learning communities, and hope that it will inspire new collaborations between these two communities to develop dedicated radiotherapy applications. Copyright © 2018 Elsevier Ltd. All rights reserved.

  2. The Semantic Web: From Representation to Realization

    Science.gov (United States)

    Thórisson, Kristinn R.; Spivack, Nova; Wissner, James M.

    A semantically-linked web of electronic information - the Semantic Web - promises numerous benefits including increased precision in automated information sorting, searching, organizing and summarizing. Realizing this requires significantly more reliable meta-information than is readily available today. It also requires a better way to represent information that supports unified management of diverse data and diverse manipulation methods: from basic keywords to various types of artificial intelligence, to the highest level of intelligent manipulation - the human mind. How this is best done is far from obvious. Relying solely on hand-crafted annotation and ontologies, or solely on artificial intelligence techniques, seems less likely to succeed than a combination of the two. In this paper we describe an integrated, complete solution to these challenges that has already been implemented and tested with hundreds of thousands of users. It is based on an ontological representational level we call SemCards that combines ontological rigour with flexible user interface constructs. SemCards are machine- and human-readable digital entities that allow non-experts to create and use semantic content, while empowering machines to better assist and participate in the process. SemCards enable users to easily create semantically-grounded data that in turn acts as examples for automation processes, creating a positive iterative feedback loop of metadata creation and refinement between user and machine. They provide a holistic solution to the Semantic Web, supporting powerful management of the full lifecycle of data, including its creation, retrieval, classification, sorting and sharing. We have implemented the SemCard technology on the semantic Web site Twine.com, showing that the technology is indeed versatile and scalable. Here we present the key ideas behind SemCards and describe the initial implementation of the technology.

  3. Advanced intelligence and mechanism approach

    Institute of Scientific and Technical Information of China (English)

    ZHONG Yixin

    2007-01-01

    Advanced intelligence will feature intelligence research in the next 50 years. An understanding of the concept of advanced intelligence as well as its importance is provided first, followed by a detailed analysis of an approach, the mechanism approach, suitable for advanced intelligence research. The mutual relationship among the mechanism approach, the traditional approaches in artificial intelligence research, and cognitive informatics is then discussed. It is interesting to discover that the mechanism approach is well suited to advanced intelligence research and is a unified form of the existing approaches to artificial intelligence.

  4. Intelligent environmental sensing

    CERN Document Server

    Mukhopadhyay, Subhas

    2015-01-01

    Developing environmental sensing and monitoring technologies becomes essential, especially for industries that may cause severe contamination. Intelligent environmental sensing uses novel sensor techniques, intelligent signal and data processing algorithms, and wireless sensor networks to enhance environmental sensing and monitoring. It finds applications in many environmental problems such as oil and gas, water quality, and agriculture. This book addresses issues related to three main approaches to intelligent environmental sensing and discusses their latest technological developments. Key contents of the book include: agricultural monitoring; classification, detection, and estimation; data fusion; geological monitoring; motor monitoring; multi-sensor systems; oil reservoir monitoring; sensor motes; water quality monitoring; and wireless sensor network protocols.

  5. Is Intelligence Artificial?

    OpenAIRE

    Greer, Kieran

    2014-01-01

    Our understanding of intelligence is directed primarily at the level of human beings. This paper attempts to give a more unifying definition that can be applied to the natural world in general. The definition would be used more to verify a degree of intelligence, not to quantify it and might help when making judgements on the matter. A version of an accepted test for AI is then put forward as the 'acid test' for Artificial Intelligence itself. It might be what a free-thinking program or robot...

  6. Expertik: Experience with Artificial Intelligence and Mobile Computing

    Directory of Open Access Journals (Sweden)

    José Edward Beltrán Lozano

    2013-06-01

    Full Text Available This article presents an experience in the development of services based on Artificial Intelligence, Service-Oriented Architecture, and mobile computing. It aims to combine the technology offered by mobile computing with artificial intelligence techniques, through a service that provides diagnostic solutions to problems in industrial maintenance. For service creation, the elements of an expert system are identified: the knowledge base, the inference engine, and the interfaces for knowledge acquisition and consultation. The applications were developed in ASP.NET under a three-layer architecture. The data layer was developed in SQL Server with data management classes, the business layer in VB.NET, and the presentation layer in ASP.NET with XHTML. Web interfaces for knowledge acquisition and query were developed for the Web and the mobile Web. The inference engine was implemented as a web service developed for a fuzzy logic model (initially an exact rule-based logic) to resolve requests from applications consulting the knowledge base. This experience seeks to strengthen a technology-based company offering AI-based services to service companies in Colombia.

  7. From outbound to inbound marketing for a web-development company

    OpenAIRE

    Liukkonen, Maria

    2016-01-01

    The objective of the thesis is the transformation from outbound to inbound marketing of a web-development company, based on social media channels. The company is called Tulikipuna and it offers web-development services, coding for the web, intelligent website solutions and software services to all kinds of corporate clients and companies. The theoretical framework was based on defining the concept of digital marketing; the difference between outbound and inbound marketing, social media sites and curre...

  8. Multiple Intelligences and quotient spaces

    OpenAIRE

    Malatesta, Mike; Quintana, Yamilet

    2006-01-01

    The Multiple Intelligence Theory (MI) is one of the models that study and describe the cognitive abilities of an individual. In [7], a referential system is presented which allows identifying the Multiple Intelligences of the students in a course and classifying the level of development of such Intelligences. Following this tendency, the purpose of this paper is to describe the model of Multiple Intelligences as a quotient space, and also to study the Multiple Intelligences of an individual in...

  9. Business Intelligence using Software Agents

    OpenAIRE

    Ana-Ramona BOLOGA; Razvan BOLOGA

    2011-01-01

    This paper presents some ideas about business intelligence today and the importance of developing real-time business solutions. The authors explore links between business intelligence and artificial intelligence and focus specifically on the implementation of software agent-based systems in business intelligence. Some of the few solutions proposed so far that use software agent properties for the benefit of business intelligence are briefly presented. The authors then

  10. Intelligent networks recent approaches and applications in medical systems

    CERN Document Server

    Ahamed, Syed V

    2013-01-01

    This textbook offers an insightful study of the intelligent Internet-driven revolutionary and fundamental forces at work in society. Readers will have access to tools and techniques to mentor and monitor these forces rather than be driven by changes in Internet technology and flow of money. These submerged social and human forces form a powerful synergistic foursome web of (a) processor technology, (b) evolving wireless networks of the next generation, (c) the intelligent Internet, and (d) the motivation that drives individuals and corporations. In unison, the technological forces can tear

  11. 1st International Conference on Computational Intelligence and Informatics

    CERN Document Server

    Prasad, V; Rani, B; Udgata, Siba; Raju, K

    2017-01-01

    The book covers a variety of topics which include data mining and data warehousing, high performance computing, parallel and distributed computing, computational intelligence, soft computing, big data, cloud computing, grid computing, cognitive computing, image processing, computer networks, wireless networks, social networks, wireless sensor networks, information and network security, web security, internet of things, bioinformatics and geoinformatics. The book is a collection of the best papers submitted to the First International Conference on Computational Intelligence and Informatics (ICCII 2016) held during 28-30 May 2016 at JNTUH CEH, Hyderabad, India. It was hosted by the Department of Computer Science and Engineering, JNTUH College of Engineering in association with Division V (Education & Research) CSI, India.

  12. Het WEB leert begrijpen

    CERN Multimedia

    Stroeykens, Steven

    2004-01-01

    The WEB could be much more useful if computers understood something of the information on Web pages. That explains the goal of the "semantic Web", a project in which, amongst others, Tim Berners-Lee, the inventor of the original WEB, takes part

  13. Instant responsive web design

    CERN Document Server

    Simmons, Cory

    2013-01-01

    A step-by-step tutorial approach which will teach the readers what responsive web design is and how it is used in designing a responsive web page.If you are a web-designer looking to expand your skill set by learning the quickly growing industry standard of responsive web design, this book is ideal for you. Knowledge of CSS is assumed.

  14. Facilitating Multiple Intelligences Through Multimodal Learning Analytics

    Directory of Open Access Journals (Sweden)

    Ayesha PERVEEN

    2018-01-01

    Full Text Available This paper develops a theoretical framework for employing learning analytics in online education to trace multiple learning variations of online students by considering their potential of being multiple intelligences based on Howard Gardner’s 1983 theory of multiple intelligences. The study first emphasizes the need to facilitate students as multiple intelligences by online education systems and then suggests a framework of the advanced form of learning analytics i.e., multimodal learning analytics for tracing and facilitating multiple intelligences while they are engaged in online ubiquitous learning. As multimodal learning analytics is still an evolving area, it poses many challenges for technologists, educationists as well as organizational managers. Learning analytics make machines meet humans, therefore, the educationists with an expertise in learning theories can help technologists devise latest technological methods for multimodal learning analytics and organizational managers can implement them for the improvement of online education. Therefore, a careful instructional design based on a deep understanding of students’ learning abilities, is required to develop teaching plans and technological possibilities for monitoring students’ learning paths. This is how learning analytics can help design an adaptive instructional design based on a quick analysis of the data gathered. Based on that analysis, the academicians can critically reflect upon the quick or delayed implementation of the existing instructional design based on students’ cognitive abilities or even about the single or double loop learning design. The researcher concludes that the online education is multimodal in nature, has the capacity to endorse multiliteracies and, therefore, multiple intelligences can be tracked and facilitated through multimodal learning analytics in an online mode. However, online teachers’ training both in technological implementations and

  15. Geospatial semantic web

    CERN Document Server

    Zhang, Chuanrong; Li, Weidong

    2015-01-01

    This book covers key issues related to Geospatial Semantic Web, including geospatial web services for spatial data interoperability; geospatial ontology for semantic interoperability; ontology creation, sharing, and integration; querying knowledge and information from heterogeneous data source; interfaces for Geospatial Semantic Web, VGI (Volunteered Geographic Information) and Geospatial Semantic Web; challenges of Geospatial Semantic Web; and development of Geospatial Semantic Web applications. This book also describes state-of-the-art technologies that attempt to solve these problems such as WFS, WMS, RDF, OWL, and GeoSPARQL, and demonstrates how to use the Geospatial Semantic Web technologies to solve practical real-world problems such as spatial data interoperability.

  16. Engineering general intelligence

    CERN Document Server

    Goertzel, Ben; Geisweiller, Nil

    2014-01-01

    The work outlines a novel conceptual and theoretical framework for understanding Artificial General Intelligence and based on this framework outlines a practical roadmap for the development of AGI with capability at the human level and ultimately beyond.

  17. Understanding US National Intelligence

    DEFF Research Database (Denmark)

    Leander, Anna

    2014-01-01

    In July 2010, the Washington Post (WP) published the results of a project on “Top Secret America” on which twenty investigative journalists had been working for two years. The project drew attention to the change and growth in National Intelligence following 9/11 (Washington Post 2010a......). The initial idea had been to work on intelligence generally, but given that this proved overwhelming, the team narrowed down to focus only on intelligence qualified as “top secret.” Even so, the growth in this intelligence activity is remarkable. This public is returning, or in this case expanding...... at an impressive speed confirming the general contention of this volume. Between 2001 and 2010 the budget had increased by 250 percent, reaching $75 billion (the GDP of the Czech Republic). Thirty-three building complexes for top secret work had been or were under construction in the Washington area; 1...

  18. Engineering general intelligence

    CERN Document Server

    Goertzel, Ben; Geisweiller, Nil

    2014-01-01

    The work outlines a detailed blueprint for the creation of an Artificial General Intelligence system with capability at the human level and ultimately beyond, according to the Cog Prime AGI design and the Open Cog software architecture.

  19. Intelligence Issues for Congress

    National Research Council Canada - National Science Library

    Best, Jr, Richard A

    2007-01-01

    To address the challenges facing the U.S. Intelligence Community in the 21st century, congressional and executive branch initiatives have sought to improve coordination among the different agencies and to encourage better analysis...

  20. Intelligence Issues for Congress

    National Research Council Canada - National Science Library

    Best, Jr, Richard A

    2006-01-01

    To address the challenges facing the U.S. Intelligence Community in the 21st Century, congressional and executive branch initiatives have sought to improve coordination among the different agencies and to encourage better analysis...

  1. Intelligence Issues for Congress

    National Research Council Canada - National Science Library

    Best, Jr, Richard A

    2008-01-01

    To address the challenges facing the U.S. Intelligence Community in the 21st century, congressional and executive branch initiatives have sought to improve coordination among the different agencies and to encourage better analysis...

  2. Intelligent Information Systems Institute

    National Research Council Canada - National Science Library

    Gomes, Carla

    2004-01-01

    ...) at Cornell during the first three years of operation. IISI's mandate is threefold: To perform and stimulate research in computational and data-intensive methods for intelligent decision making systems...

  3. Quo vadis, Intelligent Machine?

    Directory of Open Access Journals (Sweden)

    Rosemarie Velik

    2010-09-01

    Full Text Available Artificial Intelligence (AI) is a branch of computer science concerned with making computers behave like humans. At least this was the original idea. However, it turned out that this is not an easy task to solve. This article aims to give a comprehensible review of the last 60 years of artificial intelligence from a philosophical viewpoint. It outlines what has happened so far in AI, what is currently going on in this research area, and what can be expected in the future. The goal is to convey an understanding of the developments and changes in thinking, over the course of time, about how to achieve machine intelligence. The clear message is that AI has to join forces with neuroscience and other brain disciplines in order to make a step towards the development of truly intelligent machines.

  4. Bibliography: Artificial Intelligence.

    Science.gov (United States)

    Smith, Richard L.

    1986-01-01

    Annotates reference material on artificial intelligence, mostly at an introductory level, with applications to education and learning. Topics include: (1) programing languages; (2) expert systems; (3) language instruction; (4) tutoring systems; and (5) problem solving and reasoning. (JM)

  5. Handbook of Intelligent Vehicles

    CERN Document Server

    2012-01-01

    The Handbook of Intelligent Vehicles provides a complete coverage of the fundamentals, new technologies, and sub-areas essential to the development of intelligent vehicles; it also includes advances made to date, challenges, and future trends. Significant strides in the field have been made to date; however, so far there has been no single book or volume which captures these advances in a comprehensive format, addressing all essential components and subspecialties of intelligent vehicles, as this book does. Since the intended users are engineering practitioners, as well as researchers and graduate students, the book chapters do not only cover fundamentals, methods, and algorithms but also include how software/hardware are implemented, and demonstrate the advances along with their present challenges. Research at both the component and systems levels is required to advance the functionality of intelligent vehicles. This volume covers both of these aspects in addition to the fundamentals listed above.

  6. Genes, evolution and intelligence.

    Science.gov (United States)

    Bouchard, Thomas J

    2014-11-01

    I argue that the g factor meets the fundamental criteria of a scientific construct more fully than any other conception of intelligence. I briefly discuss the evidence regarding the relationship of brain size to intelligence. A review of a large body of evidence demonstrates that there is a g factor in a wide range of species and that, in the species studied, it relates to brain size and is heritable. These findings suggest that many species have evolved a general-purpose mechanism (a general biological intelligence) for dealing with the environments in which they evolved. In spite of numerous studies with considerable statistical power, we know of very few genes that influence g, and the effects are very small. Nevertheless, g appears to be highly polygenic. Given the complexity of the human brain, it is not surprising that one of its primary faculties, intelligence, is best explained by the near infinitesimal model of quantitative genetics.

  7. Modelling intelligent behavior

    Science.gov (United States)

    Green, H. S.; Triffet, T.

    1993-01-01

    An introductory discussion of the related concepts of intelligence and consciousness suggests criteria to be met in the modeling of intelligence and the development of intelligent materials. Methods for the modeling of actual structure and activity of the animal cortex have been found, based on present knowledge of the ionic and cellular constitution of the nervous system. These have led to the development of a realistic neural network model, which has been used to study the formation of memory and the process of learning. An account is given of experiments with simple materials which exhibit almost all properties of biological synapses and suggest the possibility of a new type of computer architecture to implement an advanced type of artificial intelligence.

  8. Emotional Intelligence: Requiring Attention

    Directory of Open Access Journals (Sweden)

    Monica Tudor

    2016-01-01

    Full Text Available This article aims to highlight the need for emotional intelligence. Two methods of measurement are presented in this research, in order to better understand the necessity of a correct result. The results of research can lead to recommendations for improving levels of emotional intelligence and are useful for obtaining data to better compare past and present results. The papers presented in this research are significant for future study of this subject. The first paper presents the evolution of emotional intelligence in the past two years, more specifically its decrease concerning certain characteristics. The second one presents research on the differences between generations. The third one shows a difference in the emotional intelligence levels of children from rural versus urban environments and the obstacles that they encounter in their own development.

  9. Intelligence Issues for Congress

    National Research Council Canada - National Science Library

    Best. Jr, Richard A

    2006-01-01

    To address the challenges facing the U.S. Intelligence Community in the 21st century, congressional and executive branch initiatives have sought to improve coordination among the different agencies and to encourage better analysis...

  10. Towards Intelligent Supply Chains

    DEFF Research Database (Denmark)

    Siurdyban, Artur; Møller, Charles

    2012-01-01

    applied to the context of organizational processes can increase the success rate of business operations. The framework is created using a set of theoretical based constructs grounded in a discussion across several streams of research including psychology, pedagogy, artificial intelligence, learning...... of deploying inapt operations leading to deterioration of profits. To address this problem, we propose a unified business process design framework based on the paradigm of intelligence. Intelligence allows humans and human-designed systems cope with environmental volatility, and we argue that its principles......, business process management and supply chain management. It outlines a number of system tasks combined in four integrated management perspectives: build, execute, grow and innovate, put forward as business process design propositions for Intelligent Supply Chains....

  11. Business Intelligence Integrated Solutions

    Directory of Open Access Journals (Sweden)

    Cristescu Marian Pompiliu

    2017-01-01

    Full Text Available This paper shows how businesses make decisions better and faster in terms of customers, partners and operations by turning data into valuable business information. The paper describes how to bring together people's and business intelligence information to achieve successful business strategies. There is the possibility of developing business intelligence projects in large and medium-sized organizations only with the Microsoft product described in the paper, and possible alternatives can be discussed according to the required features.

  12. Artificial intelligence in medicine.

    OpenAIRE

    Ramesh, A. N.; Kambhampati, C.; Monson, J. R. T.; Drew, P. J.

    2004-01-01

    INTRODUCTION: Artificial intelligence is a branch of computer science capable of analysing complex medical data. Its potential to exploit meaningful relationships within a data set can be used in diagnosis, treatment and predicting outcomes in many clinical scenarios. METHODS: Medline and internet searches were carried out using the keywords 'artificial intelligence' and 'neural networks (computer)'. Further references were obtained by cross-referencing from key articles. An overview of ...

  13. Artificial Intelligence Study (AIS).

    Science.gov (United States)

    1987-02-01

    [Garbled OCR of the report front matter. Recoverable details: Artificial Intelligence Study (AIS), report CAA-RP-87-1, Concepts Analysis Agency, Bethesda, MD, February 1987, R. B. Nojeski; table-of-contents entries for "Artificial Intelligence Hardware", "AI Architecture", and "AI Hardware".]

  14. Artificial Intelligence in Astronomy

    Science.gov (United States)

    Devinney, E. J.; Prša, A.; Guinan, E. F.; Degeorge, M.

    2010-12-01

    From the perspective (and bias) of Eclipsing Binary researchers, we give a brief overview of the development of Artificial Intelligence (AI) applications, describe major application areas of AI in astronomy, and illustrate the power of an AI approach in an application developed under the EBAI (Eclipsing Binaries via Artificial Intelligence) project, which employs Artificial Neural Network technology for estimating light curve solution parameters of eclipsing binary systems.

  15. Minimally Naturalistic Artificial Intelligence

    OpenAIRE

    Hansen, Steven Stenberg

    2017-01-01

    The rapid advancement of machine learning techniques has re-energized research into general artificial intelligence. While the idea of domain-agnostic meta-learning is appealing, this emerging field must come to terms with its relationship to human cognition and the statistics and structure of the tasks humans perform. The position of this article is that only by aligning our agents' abilities and environments with those of humans do we stand a chance at developing general artificial intellig...

  16. Artificial intelligence in cardiology

    OpenAIRE

    Bonderman, Diana

    2017-01-01

    Summary Decision-making is complex in modern medicine and should ideally be based on available data, structured knowledge and proper interpretation in the context of an individual patient. Automated algorithms, also termed artificial intelligence that are able to extract meaningful patterns from data collections and build decisions upon identified patterns may be useful assistants in clinical decision-making processes. In this article, artificial intelligence-based studies in clinical cardiol...

  17. Intelligent distributed computing

    CERN Document Server

    Thampi, Sabu

    2015-01-01

    This book contains a selection of refereed and revised papers of the Intelligent Distributed Computing Track originally presented at the third International Symposium on Intelligent Informatics (ISI-2014), September 24-27, 2014, Delhi, India.  The papers selected for this Track cover several Distributed Computing and related topics including Peer-to-Peer Networks, Cloud Computing, Mobile Clouds, Wireless Sensor Networks, and their applications.

  18. The intelligent data recorder

    International Nuclear Information System (INIS)

    Kojima, Mamoru; Hidekuma, Sigeru.

    1985-01-01

    The intelligent data recorder has been developed for data acquisition with a microwave interferometer. The 'RS-232C', which is the standard interface, is used for data transmission to the host computer. It is therefore easy to connect with any computer which has a general purpose serial port. In this report, the characteristics of the intelligent data recorder and the way the software was developed are described. (author)

  19. Intelligent Lighting Control System

    OpenAIRE

    García, Elena; Rodríguez González, Sara; de Paz Santana, Juan F.; Bajo Pérez, Javier

    2014-01-01

    This paper presents an adaptive architecture that allows centralized control of public lighting and intelligent management, in order to economise on lighting and maintain maximum comfort in the illuminated areas. To carry out this management, the architecture merges various techniques of artificial intelligence (AI) and statistics such as artificial neural networks (ANN), multi-agent systems (MAS), the EM algorithm, methods based on ANOVA and a Service Oriented Approach (SOA). It performs optim...

  20. Virtual Web Services

    OpenAIRE

    Rykowski, Jarogniew

    2007-01-01

    In this paper we propose an application of software agents to provide Virtual Web Services. A Virtual Web Service is a linked collection of several real and/or virtual Web Services, and public and private agents, accessed by the user in the same way as a single real Web Service. A Virtual Web Service allows unrestricted comparison, information merging, pipelining, etc., of data coming from different sources and in different forms. Detailed architecture and functionality of a single Virtual We...

  1. The Semantic Web Revisited

    OpenAIRE

    Shadbolt, Nigel; Berners-Lee, Tim; Hall, Wendy

    2006-01-01

    The original Scientific American article on the Semantic Web appeared in 2001. It described the evolution of a Web that consisted largely of documents for humans to read to one that included data and information for computers to manipulate. The Semantic Web is a Web of actionable information--information derived from data through a semantic theory for interpreting the symbols. This simple idea, however, remains largely unrealized. Shopbots and auction bots abound on the Web, but these are esse...

  2. Web Project Management

    OpenAIRE

    Suralkar, Sunita; Joshi, Nilambari; Meshram, B B

    2013-01-01

    This paper describes the need for Web project management, and the fundamentals of project management for web projects: what it is, why projects go wrong, and what's different about web projects. We also discuss Cost Estimation Techniques based on Size Metrics. Though Web project development is similar to traditional software development applications, the special characteristics of Web Application development require adaptation of many software engineering approaches or even development of comple...

  3. DeepPy: Pythonic deep learning

    DEFF Research Database (Denmark)

    Larsen, Anders Boesen Lindbo

    This technical report introduces DeepPy – a deep learning framework built on top of NumPy with GPU acceleration. DeepPy bridges the gap between high-performance neural networks and the ease of development of Python/NumPy. Users with a background in scientific computing in Python will quickly be able to understand and change the DeepPy codebase as it is mainly implemented using high-level NumPy primitives. Moreover, DeepPy supports complex network architectures by letting the user compose mathematical expressions as directed graphs. The latest version is available at http
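
    The report's framing (high-level NumPy primitives, networks as composed expressions) can be illustrated with a generic sketch. This is not DeepPy's actual API; the layer classes, the simple SGD update rule, and the tiny training loop are invented for the example.

```python
# Generic illustration of a NumPy-backed layer stack; NOT DeepPy's API.
# Each layer exposes forward/backward passes written with high-level NumPy
# primitives, and a network is a simple composition of such layers.
import numpy as np

class Dense:
    def __init__(self, n_in, n_out, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.normal(scale=np.sqrt(2.0 / n_in), size=(n_in, n_out))
        self.b = np.zeros(n_out)

    def forward(self, x):
        self.x = x                          # cache the input for the backward pass
        return x @ self.W + self.b

    def backward(self, grad, lr=1e-2):
        dx = grad @ self.W.T                # gradient w.r.t. the layer input
        self.W -= lr * self.x.T @ grad      # simple SGD update of the parameters
        self.b -= lr * grad.sum(axis=0)
        return dx

class ReLU:
    def forward(self, x):
        self.mask = x > 0
        return x * self.mask

    def backward(self, grad, lr=None):
        return grad * self.mask

net = [Dense(2, 16, seed=1), ReLU(), Dense(16, 1, seed=2)]
x, y = np.array([[0.0, 1.0]]), np.array([[1.0]])
for _ in range(200):                        # tiny regression loop on one sample
    h = x
    for layer in net:
        h = layer.forward(h)
    grad = 2 * (h - y)                      # derivative of the squared error
    for layer in reversed(net):
        grad = layer.backward(grad)
```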

  4. An Intelligent Tool for Activity Data Collection

    Directory of Open Access Journals (Sweden)

    A. M. Jehad Sarkar

    2011-04-01

    Full Text Available Activity recognition systems using simple and ubiquitous sensors require a large variety of real-world sensor data for not only evaluating their performance but also training the systems for better functioning. However, a tremendous amount of effort is required to setup an environment for collecting such data. For example, expertise and resources are needed to design and install the sensors, controllers, network components, and middleware just to perform basic data collections. It is therefore desirable to have a data collection method that is inexpensive, flexible, user-friendly, and capable of providing large and diverse activity datasets. In this paper, we propose an intelligent activity data collection tool which has the ability to provide such datasets inexpensively without physically deploying the testbeds. It can be used as an inexpensive and alternative technique to collect human activity data. The tool provides a set of web interfaces to create a web-based activity data collection environment. It also provides a web-based experience sampling tool to take the user’s activity input. The tool generates an activity log using its activity knowledge and the user-given inputs. The activity knowledge is mined from the web. We have performed two experiments to validate the tool’s performance in producing reliable datasets.
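
    The tool described above exposes web interfaces for experience sampling and builds an activity log from user inputs. The following Flask sketch is a generic stand-in for that idea, not the authors' implementation; the route names, the JSON fields, and the in-memory log are assumptions.

```python
# Minimal, generic experience-sampling endpoint in the spirit of the tool
# described above; NOT the authors' implementation. Routes and fields assumed.
from datetime import datetime, timezone
from flask import Flask, jsonify, request

app = Flask(__name__)
activity_log = []          # a real deployment would persist this in a database

@app.route("/activities", methods=["POST"])
def record_activity():
    payload = request.get_json(force=True)   # e.g. {"user": "u1", "activity": "cooking"}
    entry = {
        "user": payload.get("user"),
        "activity": payload.get("activity"),
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }
    activity_log.append(entry)
    return jsonify(entry), 201

@app.route("/activities", methods=["GET"])
def list_activities():
    return jsonify(activity_log)

if __name__ == "__main__":
    app.run(debug=True)
```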

  5. An intelligent tool for activity data collection.

    Science.gov (United States)

    Sarkar, A M Jehad

    2011-01-01

    Activity recognition systems using simple and ubiquitous sensors require a large variety of real-world sensor data for not only evaluating their performance but also training the systems for better functioning. However, a tremendous amount of effort is required to setup an environment for collecting such data. For example, expertise and resources are needed to design and install the sensors, controllers, network components, and middleware just to perform basic data collections. It is therefore desirable to have a data collection method that is inexpensive, flexible, user-friendly, and capable of providing large and diverse activity datasets. In this paper, we propose an intelligent activity data collection tool which has the ability to provide such datasets inexpensively without physically deploying the testbeds. It can be used as an inexpensive and alternative technique to collect human activity data. The tool provides a set of web interfaces to create a web-based activity data collection environment. It also provides a web-based experience sampling tool to take the user's activity input. The tool generates an activity log using its activity knowledge and the user-given inputs. The activity knowledge is mined from the web. We have performed two experiments to validate the tool's performance in producing reliable datasets.

  6. Professionalizing Intelligence Analysis

    Directory of Open Access Journals (Sweden)

    James B. Bruce

    2015-09-01

    Full Text Available This article examines the current state of professionalism in national security intelligence analysis in the U.S. Government. Since the introduction of major intelligence reforms directed by the Intelligence Reform and Terrorism Prevention Act (IRTPA) in December 2004, we have seen notable strides in many aspects of intelligence professionalization, including in analysis. But progress is halting, uneven, and by no means permanent. To consolidate its gains, and if it is to continue improving, the U.S. intelligence community (IC) should commit itself to accomplishing a new program of further professionalization of analysis to ensure that it will develop an analytic cadre that is fully prepared to deal with the complexities of an emerging multipolar and highly dynamic world that the IC itself is forecasting. Some recent reforms in intelligence analysis can be assessed against established standards of more fully developed professions; these may well fall short of moving the IC closer to the more fully professionalized analytical capability required for producing the kind of analysis needed now by the United States.

  7. GABA predicts visual intelligence.

    Science.gov (United States)

    Cook, Emily; Hammett, Stephen T; Larsson, Jonas

    2016-10-06

    Early psychological researchers proposed a link between intelligence and low-level perceptual performance. It was recently suggested that this link is driven by individual variations in the ability to suppress irrelevant information, evidenced by the observation of strong correlations between perceptual surround suppression and cognitive performance. However, the neural mechanisms underlying such a link remain unclear. A candidate mechanism is neural inhibition by gamma-aminobutyric acid (GABA), but direct experimental support for GABA-mediated inhibition underlying suppression is inconsistent. Here we report evidence consistent with a global suppressive mechanism involving GABA underlying the link between sensory performance and intelligence. We measured visual cortical GABA concentration, visuo-spatial intelligence and visual surround suppression in a group of healthy adults. Levels of GABA were strongly predictive of both intelligence and surround suppression, with higher levels of intelligence associated with higher levels of GABA and stronger surround suppression. These results indicate that GABA-mediated neural inhibition may be a key factor determining cognitive performance and suggests a physiological mechanism linking surround suppression and intelligence. Copyright © 2016 The Authors. Published by Elsevier Ireland Ltd.. All rights reserved.

  8. Alzheimer's disease and intelligence.

    Science.gov (United States)

    Yeo, R A; Arden, R; Jung, R E

    2011-06-01

    A significant body of evidence has accumulated suggesting that individual variation in intellectual ability, whether assessed directly by intelligence tests or indirectly through proxy measures, is related to risk of developing Alzheimer's disease (AD) in later life. Important questions remain unanswered, however, such as the specificity of risk for AD vs. other forms of dementia, and the specific links between premorbid intelligence and development of the neuropathology characteristic of AD. Lower premorbid intelligence has also emerged as a risk factor for greater mortality across myriad health and mental health diagnoses. Genetic covariance contributes importantly to these associations, and pleiotropic genetic effects may impact diverse organ systems through similar processes, including inefficient design and oxidative stress. Through such processes, the genetic underpinnings of intelligence, specifically, mutation load, may also increase the risk of developing AD. We discuss how specific neurobiologic features of relatively lower premorbid intelligence, including reduced metabolic efficiency, may facilitate the development of AD neuropathology. The cognitive reserve hypothesis, the most widely accepted account of the intelligence-AD association, is reviewed in the context of this larger literature.

  9. WEB STRUCTURE MINING

    Directory of Open Access Journals (Sweden)

    CLAUDIA ELENA DINUCĂ

    2011-01-01

    Full Text Available The World Wide Web has become one of the most valuable resources for information retrieval and knowledge discovery due to the permanent increase in the amount of data available online. Given the Web's size, users easily get lost in its rich hyper structure. The application of data mining methods is the right solution for knowledge discovery on the Web. The knowledge extracted from the Web can be used to improve the performance of Web information retrieval, question answering and Web-based data warehousing. In this paper, I provide an introduction to the categories of Web mining and focus on one of them: Web structure mining. Web structure mining, one of three categories of web mining for data, is a tool used to identify the relationships between Web pages linked by information or direct link connection. It offers information about how different pages are linked together to form this huge web. Web Structure Mining finds hidden basic structures and uses hyperlinks for more web applications such as web search.
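
    A standard link-analysis example makes the idea concrete: PageRank computed by power iteration ranks pages from the hyperlink graph alone, which is exactly the kind of hidden structure web structure mining exploits. The sketch is not from the article, and the tiny link graph is made up.

```python
import numpy as np

# PageRank via power iteration on a made-up hyperlink graph: the ranking is
# derived purely from link structure, with no page content involved.
def pagerank(links, damping=0.85, tol=1e-10):
    pages = sorted({p for edge in links for p in edge})
    index = {p: i for i, p in enumerate(pages)}
    n = len(pages)
    M = np.zeros((n, n))
    for src, dst in links:
        M[index[dst], index[src]] = 1.0          # column j lists out-links of page j
    out_degree = M.sum(axis=0)
    M = np.where(out_degree > 0, M / np.maximum(out_degree, 1), 1.0 / n)  # dangling pages
    rank = np.full(n, 1.0 / n)
    while True:
        new_rank = (1 - damping) / n + damping * M @ rank
        if np.abs(new_rank - rank).sum() < tol:
            return dict(zip(pages, new_rank))
        rank = new_rank

links = [("A", "B"), ("A", "C"), ("B", "C"), ("C", "A"), ("D", "C")]
print(pagerank(links))   # page C, linked by everyone, receives the highest score
```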

  10. Semantic Web Technologies for the Adaptive Web

    DEFF Research Database (Denmark)

    Dolog, Peter; Nejdl, Wolfgang

    2007-01-01

    Ontologies and reasoning are the key terms brought into focus by the semantic web community. Formal representation of ontologies in a common data model on the web can be taken as a foundation for adaptive web technologies as well. This chapter describes how ontologies shared on the semantic web...... provide conceptualization for the links which are a main vehicle to access information on the web. The subject domain ontologies serve as constraints for generating only those links which are relevant for the domain a user is currently interested in. Furthermore, user model ontologies provide additional...... means for deciding which links to show, annotate, hide, generate, and reorder. The semantic web technologies provide means to formalize the domain ontologies and metadata created from them. The formalization enables reasoning for personalization decisions. This chapter describes which components...
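
    To illustrate the mechanism described above (a domain ontology constraining which links are generated, and a user-model ontology deciding which links to show, hide or annotate), here is a toy sketch in plain Python rather than the RDF/OWL formalisms the chapter actually refers to. The concept hierarchy, user profile and link metadata are invented for the example.

```python
# Toy adaptive-link illustration (plain Python stand-in for an ontology):
# the concept hierarchy, user model and link metadata are all invented.
domain_ontology = {                      # concept -> broader concept
    "neural_networks": "machine_learning",
    "machine_learning": "artificial_intelligence",
    "sparql": "semantic_web",
    "semantic_web": "artificial_intelligence",
}

def covers(concept, interest):
    """True if `concept` equals `interest` or is a narrower concept of it."""
    while concept is not None:
        if concept == interest:
            return True
        concept = domain_ontology.get(concept)
    return False

links = [
    {"target": "/docs/backprop", "concept": "neural_networks", "level": "advanced"},
    {"target": "/docs/sparql-intro", "concept": "sparql", "level": "beginner"},
    {"target": "/docs/history-of-ai", "concept": "artificial_intelligence", "level": "beginner"},
]
user_model = {"interest": "machine_learning", "expertise": "beginner"}

def adapt(link):
    if not covers(link["concept"], user_model["interest"]):
        return ("hide", link["target"])                    # outside the current interest
    if link["level"] == "advanced" and user_model["expertise"] == "beginner":
        return ("annotate: prerequisites recommended", link["target"])
    return ("show", link["target"])

for link in links:
    print(adapt(link))
```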

  11. 78 FR 90 - Defense Intelligence Agency National Intelligence University Board of Visitors Closed Meeting

    Science.gov (United States)

    2013-01-02

    ... DEPARTMENT OF DEFENSE Office of the Secretary Defense Intelligence Agency National Intelligence University Board of Visitors Closed Meeting AGENCY: National Intelligence University, Defense Intelligence... hereby given that a closed meeting of the National Intelligence University Board of Visitors has been...

  12. Applying semantic web services to enterprise web

    OpenAIRE

    Hu, Y; Yang, Q P; Sun, X; Wei, P

    2008-01-01

    Enterprise Web provides a convenient, extendable, integrated platform for information sharing and knowledge management. However, it still has many drawbacks due to complexity and increasing information glut, as well as the heterogeneity of the information processed. Research in the field of Semantic Web Services has shown the possibility of adding higher level of semantic functionality onto the top of current Enterprise Web, enhancing usability and usefulness of resource, enabling decision su...

  13. Smart Aerospace eCommerce: Using Intelligent Agents in a NASA Mission Services Ordering Application

    Science.gov (United States)

    Moleski, Walt; Luczak, Ed; Morris, Kim; Clayton, Bill; Scherf, Patricia; Obenschain, Arthur F. (Technical Monitor)

    2002-01-01

    This paper describes how intelligent agent technology was successfully prototyped and then deployed in a smart eCommerce application for NASA. An intelligent software agent called the Intelligent Service Validation Agent (ISVA) was added to an existing web-based ordering application to validate complex orders for spacecraft mission services. This integration of intelligent agent technology with conventional web technology satisfies an immediate NASA need to reduce manual order processing costs. The ISVA agent checks orders for completeness, consistency, and correctness, and notifies users of detected problems. ISVA uses NASA business rules and a knowledge base of NASA services, and is implemented using the Java Expert System Shell (Jess), a fast rule-based inference engine. The paper discusses the design of the agent and knowledge base, and the prototyping and deployment approach. It also discusses future directions and other applications, and discusses lessons-learned that may help other projects make their aerospace eCommerce applications smarter.
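
    The ISVA agent itself was rule-based and built on Jess with NASA business rules; the sketch below shows only the general pattern of checking an order for completeness and consistency against a small service knowledge base, in plain Python. The service catalog and the rules are invented for illustration.

```python
# Plain-Python sketch of rule-based order validation in the spirit of ISVA.
# The real agent used Jess and NASA business rules; everything below is invented.
SERVICE_CATALOG = {
    "telemetry_downlink": {"requires": ["ground_station", "schedule_window"]},
    "tracking": {"requires": ["ground_station"]},
}

RULES = [
    ("missing service type",
     lambda order: "service" not in order),
    ("unknown service",
     lambda order: order.get("service") not in SERVICE_CATALOG),
    ("missing required fields",
     lambda order: order.get("service") in SERVICE_CATALOG and any(
         field not in order
         for field in SERVICE_CATALOG[order["service"]]["requires"])),
    ("inconsistent dates",
     lambda order: "start" in order and "end" in order and order["start"] >= order["end"]),
]

def validate(order):
    """Return the list of detected problems; an empty list means the order passes."""
    return [name for name, violated in RULES if violated(order)]

order = {"service": "telemetry_downlink", "ground_station": "GS-1",
         "start": "2002-05-02T10:00", "end": "2002-05-02T09:00"}
print(validate(order))   # -> ['missing required fields', 'inconsistent dates']
```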

  14. Sounds of Web Advertising

    DEFF Research Database (Denmark)

    Jessen, Iben Bredahl; Graakjær, Nicolai Jørgensgaard

    2010-01-01

    Sound seems to be a neglected issue in the study of web ads. Web advertising is predominantly regarded as a visual phenomenon–commercial messages, as for instance banner ads that we watch, read, and eventually click on–but only rarely as something that we listen to. The present chapter presents an overview of the auditory dimensions in web advertising: Which kinds of sounds do we hear in web ads? What are the conditions and functions of sound in web ads? Moreover, the chapter proposes a theoretical framework in order to analyse the communicative functions of sound in web advertising. The main argument is that an understanding of the auditory dimensions in web advertising must include a reflection on the hypertextual settings of the web ad as well as a perspective on how users engage with web content.

  15. A Lead Provided by Bookmarks - Intelligent Browsers

    Directory of Open Access Journals (Sweden)

    Dan Balanescu

    2015-05-01

    Full Text Available Browsers are applications that allow Internet access. A defining characteristic is their unidirectionality: Navigator -> Internet. The purpose of this article is to support the idea of Intelligent Browsers, defined by bidirectionality: Navigator -> Internet and Internet -> Navigator. The fundamental idea is that the Internet contains huge resources of knowledge, but they are "passive". The purpose of this article is to propose the "activation" of this knowledge so that, through "Intelligent Browsers", it can turn from Sitting Ducks into Active Mentors. Following this idea, the present article proposes changes to the Bookmarks function, from its current status of Favorites to Recommendations. The article presents an analysis of the utility of this function (by presenting research on web browsing behaviors) and in particular finds that the significance of this utility has decreased lately (to the point of becoming almost useless, as will be shown) in terms of data-information-knowledge. Finally, it presents the idea of a project which aims to be an applied approach that anticipates the findings of this study and the concept of Intelligent Browsers (or Active Browsers) required in the context of the Big Data concept.

  16. Intelligent Discovery for Learning Objects Using Semantic Web Technologies

    Science.gov (United States)

    Hsu, I-Ching

    2012-01-01

    The concept of learning objects has been applied in the e-learning field to promote the accessibility, reusability, and interoperability of learning content. Learning Object Metadata (LOM) was developed to achieve these goals by describing learning objects in order to provide meaningful metadata. Unfortunately, the conventional LOM lacks the…

  17. Intelligent Information Retrieval and Web Mining Architecture Using SOA

    Science.gov (United States)

    El-Bathy, Naser Ibrahim

    2010-01-01

    The study of this dissertation provides a solution to a very specific problem instance in the area of data mining, data warehousing, and service-oriented architecture in publishing and newspaper industries. The research question focuses on the integration of data mining and data warehousing. The research problem focuses on the development of…

  18. A Web Based Intelligent Training System for SMEs

    Science.gov (United States)

    Mullins, Roisin; Duan, Yanqing; Hamblin, David; Burrell, Phillip; Jin, Huan; Jerzy, Goluchowski; Ewa, Ziemba; Aleksander, Billewicz

    2007-01-01

    It is widely accepted that employees in small business suffer from a lack of knowledge and skills. This lack of skills means that small companies will miss out on new business opportunities. This is even more evident with respect to the adoption of Internet marketing in Small and Medium Enterprises (SMEs). This paper reports a pilot research…

  19. Business Intelligence using Software Agents

    Directory of Open Access Journals (Sweden)

    Ana-Ramona BOLOGA

    2011-12-01

    Full Text Available This paper presents some ideas about business intelligence today and the importance of developing real-time business solutions. The authors explore the links between business intelligence and artificial intelligence and focus specifically on the implementation of software-agent-based systems in business intelligence. Some of the few solutions proposed so far that use software agent properties for the benefit of business intelligence are briefly presented. The authors then propose some basic ideas for developing a real-time agent-based software system for business intelligence in supply chain management, using case-based reasoning agents.
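
    The paper stays at the level of design ideas. As a rough sketch of the retrieval step a case-based reasoning agent might perform in this setting (find the stored supply-chain case closest to the current situation and reuse its recorded decision), consider the following Python fragment; the case features, distance measure, and decisions are assumptions made for illustration.

      # Minimal case-based reasoning retrieval: pick the stored case most similar to
      # the query and reuse its decision. All features and cases are illustrative.
      case_base = [
          {"demand": 120, "lead_time_days": 4, "decision": "order from primary supplier"},
          {"demand": 300, "lead_time_days": 9, "decision": "split order across two suppliers"},
          {"demand": 80,  "lead_time_days": 2, "decision": "serve from local stock"},
      ]

      def distance(case, query):
          # Weighted L1 distance; the weight on lead time is an arbitrary choice.
          return abs(case["demand"] - query["demand"]) + 10 * abs(case["lead_time_days"] - query["lead_time_days"])

      def retrieve(query):
          return min(case_base, key=lambda c: distance(c, query))

      query = {"demand": 110, "lead_time_days": 5}
      print("Suggested action:", retrieve(query)["decision"])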

  20. Fluid intelligence: A brief history.

    Science.gov (United States)

    Kent, Phillip

    2017-01-01

    The concept of fluid and crystallized intelligence was introduced to the psychological community approximately 75 years ago by Raymond B. Cattell, and it continues to be an area of active research and controversy. The purpose of this paper is to provide a brief overview of the origin of the concept, early efforts to define intelligence and uses of intelligence tests to address pressing social issues, and the ongoing controversies associated with fluid intelligence and the structure of intelligence. The putative neuropsychological underpinnings and neurological substrates of fluid intelligence are discussed.

  1. Parasites in food webs: the ultimate missing links

    Science.gov (United States)

    Lafferty, Kevin D.; Allesina, Stefano; Arim, Matias; Briggs, Cherie J.; De Leo, Giulio A.; Dobson, Andrew P.; Dunne, Jennifer A.; Johnson, Pieter T.J.; Kuris, Armand M.; Marcogliese, David J.; Martinez, Neo D.; Memmott, Jane; Marquet, Pablo A.; McLaughlin, John P.; Mordecai, Erin A.; Pascual, Mercedes; Poulin, Robert; Thieltges, David W.

    2008-01-01

    Parasitism is the most common consumer strategy among organisms, yet only recently has there been a call for the inclusion of infectious disease agents in food webs. The value of this effort hinges on whether parasites affect food-web properties. Increasing evidence suggests that parasites have the potential to uniquely alter food-web topology in terms of chain length, connectance and robustness. In addition, parasites might affect food-web stability, interaction strength and energy flow. Food-web structure also affects infectious disease dynamics because parasites depend on the ecological networks in which they live. Empirically, incorporating parasites into food webs is straightforward. We may start with existing food webs and add parasites as nodes, or we may try to build food webs around systems for which we already have a good understanding of infectious processes. In the future, perhaps researchers will add parasites while they construct food webs. Less clear is how food-web theory can accommodate parasites. This is a deep and central problem in theoretical biology and applied mathematics. For instance, is representing parasites with complex life cycles as a single node equivalent to representing other species with ontogenetic niche shifts as a single node? Can parasitism fit into fundamental frameworks such as the niche model? Can we integrate infectious disease models into the emerging field of dynamic food-web modelling? Future progress will benefit from interdisciplinary collaborations between ecologists and infectious disease biologists.
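
    The review notes that, empirically, adding parasites to an existing food web simply means adding nodes and links. The toy Python sketch below (using networkx, with invented species) shows the mechanical side of that step and how directed connectance, the number of links divided by the square of the number of species, changes when a parasite with a complex life cycle is represented as a single node; it illustrates the bookkeeping only, not the authors' analyses.

      # Toy example: add a parasite node to a small food web and recompute
      # directed connectance C = L / S^2. Species and links are invented.
      import networkx as nx

      web = nx.DiGraph()
      web.add_edges_from([("algae", "snail"), ("snail", "fish"), ("fish", "bird")])  # resource -> consumer

      def connectance(g):
          s = g.number_of_nodes()
          return g.number_of_edges() / s**2

      print("Without parasite:", round(connectance(web), 3))

      # A trematode-like parasite with a complex life cycle, kept as one node that
      # consumes (infects) several hosts along that cycle.
      web.add_edges_from([("snail", "trematode"), ("fish", "trematode"), ("bird", "trematode")])
      print("With parasite:   ", round(connectance(web), 3))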

  2. Understanding the Web from an Economic Perspective: The Evolution of Business Models and the Web

    Directory of Open Access Journals (Sweden)

    Louis Rinfret

    2014-08-01

    Full Text Available The advent of the World Wide Web is arguably amongst the most important changes that have occurred since the 1990s in the business landscape. It has fueled the rise of new industries, supported the convergence and reshaping of existing ones and enabled the development of new business models. During this time the web has evolved tremendously from a relatively static page-display tool to a massive network of user-generated content, collective intelligence, applications and hypermedia. As technical standards continue to evolve, business models catch up with the new capabilities. New ways of creating value, distributing it and profiting from it emerge more rapidly than ever. In this paper we explore how the World Wide Web and business models evolve, and we identify avenues for future research in light of the web's ever-evolving nature and its influence on business models.

  3. Artificial intelligence and deep learning - Radiology's next frontier?

    Science.gov (United States)

    Mayo, Ray Cody; Leung, Jessica

    Tracing the use of computers in the radiology department from administrative functions through image acquisition, storage, and reporting, to early attempts at improved diagnosis, we begin to imagine possible new frontiers for their use in exam interpretation. Given their initially slow but ultimately substantial progress in the noninterpretive areas, we are left desiring and even expecting more in the interpretation realm. New technological advances may provide the next wave of progress and radiologists should be early adopters. Several potential applications are discussed and hopefully will serve to inspire future progress. Published by Elsevier Inc.

  4. New Perspectives on Intelligence Collection and Processing

    Science.gov (United States)

    2016-06-01

    Acronyms from the report: MASINT, Measurement and Signature Intelligence; NPS, Naval Postgraduate School; OSINT, Open Source Intelligence; pdf, Probability Density Function; SIGINT, Signals Intelligence. From its overview of collection disciplines: Measurement and Signature Intelligence (MASINT): different types of sensors; Open Source Intelligence (OSINT): from all open sources; Signals Intelligence (SIGINT): intercepting the …

  5. Trends in ambient intelligent systems the role of computational intelligence

    CERN Document Server

    Khan, Mohammad; Abraham, Ajith

    2016-01-01

    This book demonstrates the success of Ambient Intelligence in providing possible solutions for the daily needs of humans. The book addresses implications of ambient intelligence in areas of domestic living, elderly care, robotics, communication, philosophy and others. The objective of this edited volume is to show that Ambient Intelligence is a boon to humanity with conceptual, philosophical, methodical and applicative understanding. The book also aims to schematically demonstrate developments in the direction of augmented sensors, embedded systems and behavioral intelligence towards Ambient Intelligent Networks or Smart Living Technology. It contains chapters in the field of Ambient Intelligent Networks, which received highly positive feedback during the review process. The book contains research work, with in-depth state of the art from augmented sensors, embedded technology and artificial intelligence along with cutting-edge research and development of technologies and applications of Ambient Intelligent N...

  6. Greedy Deep Dictionary Learning

    OpenAIRE

    Tariyal, Snigdha; Majumdar, Angshul; Singh, Richa; Vatsa, Mayank

    2016-01-01

    In this work we propose a new deep learning tool called deep dictionary learning. Multi-level dictionaries are learnt in a greedy fashion, one layer at a time; each layer requires solving a simple (shallow) dictionary learning problem whose solution is well known. We apply the proposed technique to some benchmark deep learning datasets. We compare our results with other deep learning tools like the stacked autoencoder and the deep belief network, and with state of the art supervised dictionary learning t...
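
    The abstract states the key idea compactly: learn one dictionary layer at a time, each layer being an ordinary shallow dictionary learning problem applied to the codes produced by the previous layer. A rough Python sketch of that greedy stacking, using scikit-learn's DictionaryLearning on toy data rather than the authors' own implementation, might look like this; the layer sizes and hyperparameters are arbitrary.

      # Greedy, layer-wise "deep" dictionary learning sketch: the sparse codes from
      # one layer become the training data for the next. Sizes are arbitrary choices.
      import numpy as np
      from sklearn.decomposition import DictionaryLearning

      rng = np.random.default_rng(0)
      X = rng.standard_normal((200, 64))        # 200 toy samples with 64 features

      codes = X
      for n_atoms in (32, 16):                  # two layers, learnt one at a time
          layer = DictionaryLearning(n_components=n_atoms, alpha=1.0,
                                     max_iter=20, random_state=0)
          codes = layer.fit_transform(codes)    # each layer is a shallow problem
          print("layer output shape:", codes.shape)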

  7. A Framework for the Systematic Collection of Open Source Intelligence

    Energy Technology Data Exchange (ETDEWEB)

    Pouchard, Line Catherine [ORNL; Trien, Joseph P [ORNL; Dobson, Jonathan D [ORNL

    2009-01-01

    Following legislative directions, the Intelligence Community has been mandated to make greater use of Open Source Intelligence (OSINT). Efforts are underway to increase the use of OSINT but there are many obstacles. One of these obstacles is the lack of tools helping to manage the volume of available data and ascertain its credibility. We propose a unique system for selecting, collecting and storing Open Source data from the Web and the Open Source Center. Some data management tasks are automated, document source is retained, and metadata containing geographical coordinates are added to the documents. Analysts are thus empowered to search, view, store, and analyze Web data within a single tool. We present ORCAT I and ORCAT II, two implementations of the system.
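
    ORCAT is described here only at the level of its goals. As a small, hypothetical illustration of the data-management idea the abstract emphasises (retain each document's source and attach geographic metadata so analysts can search everything in one place), a Python sketch could represent collected items like this; the record layout, coordinates, and filter are assumptions, not the system's actual design.

      # Illustrative only: keep provenance and geographic metadata with each collected
      # open-source document, then filter the collection by a crude lat/lon box.
      from dataclasses import dataclass

      @dataclass
      class Document:
          source_url: str      # provenance retained with every record
          text: str
          lat: float           # coordinates added by an (assumed) geotagging step
          lon: float

      store = [
          Document("https://example.org/a", "Report on harbour traffic ...", 36.8, -76.3),
          Document("https://example.org/b", "Local election coverage ...", 48.9, 2.4),
      ]

      def near(docs, lat, lon, radius_deg=5.0):
          """Keep documents whose tagged coordinates fall inside a lat/lon box."""
          return [d for d in docs if abs(d.lat - lat) <= radius_deg and abs(d.lon - lon) <= radius_deg]

      for d in near(store, 37.0, -76.0):
          print(d.source_url)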

  8. Intelligent System for Data Tracking in Image Editing Company

    Directory of Open Access Journals (Sweden)

    Kimlong Ngin

    2017-11-01

    Full Text Available The success of data transactions in a company largely depends on the intelligence system used in its database and application system. The complex and heterogeneous data in the log file make it more difficult for users to manage data effectively. Therefore, this paper creates an application system that can manage data from the log file. A sample was collected from an image editing company in Cambodia by interviewing five customers and seven operators, who worked on the data files for 300 images. This paper reports two results: first, the agent script was used for retrieving data from the log file, classifying the data, and inserting it into a database; and second, the web interface was used by users to view the results. The intelligence capabilities of our application, together with a friendly web-based and window-based experience, allow users to easily acquire, manage, and access the data in an image editing company.
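
    The agent's pipeline is described only in outline (read the log file, classify each record, insert it into a database for the web interface to display). One plausible reading of that pipeline, with an invented log format, invented categories, and SQLite standing in for the company's database, is sketched below in Python.

      # Illustrative agent script: parse log lines, classify each record, and load
      # them into SQLite for a web interface to query. Format and rules are invented.
      import re
      import sqlite3

      LINE = re.compile(r"(?P<time>\S+) (?P<operator>\S+) (?P<action>\S+) (?P<image>\S+)")

      def classify(action):
          return "edit" if action in {"crop", "retouch", "resize"} else "transfer"

      conn = sqlite3.connect(":memory:")
      conn.execute("CREATE TABLE events (time TEXT, operator TEXT, category TEXT, image TEXT)")

      sample_log = [
          "2017-11-02T09:15 alice crop IMG_001.jpg",
          "2017-11-02T09:17 bob upload IMG_002.jpg",
      ]
      for line in sample_log:
          match = LINE.match(line)
          if match:
              conn.execute("INSERT INTO events VALUES (?, ?, ?, ?)",
                           (match["time"], match["operator"], classify(match["action"]), match["image"]))
      conn.commit()
      print(list(conn.execute("SELECT * FROM events")))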

  9. The Sensor Web: A Macro-Instrument for Coordinated Sensing

    Directory of Open Access Journals (Sweden)

    Kevin A. Delin

    2002-07-01

    Full Text Available The Sensor Web is a macro-instrument concept that allows for the spatiotemporal understanding of an environment through coordinated efforts between multiple numbers and types of sensing platforms, including both orbital and terrestrial, and both fixed and mobile. Each of these platforms, or pods, communicates within its local neighborhood and thus distributes information to the instrument as a whole. Much as intelligence in the brain is a result of the myriad of connections between dendrites, it is anticipated that the Sensor Web will develop a macro-intelligence as a result of its distributed information, with the pods reacting and adapting to their environment in a way that is much more than their individual sum. The sharing of data among individual pods will allow for a global perception and purpose of the instrument as a whole. The Sensor Web is to sensors what the Internet is to computers, with different platforms and operating systems communicating via a set of shared, robust protocols. This paper will outline the potential of the Sensor Web concept and describe the Jet Propulsion Laboratory (JPL) Sensor Webs Project (http://sensorwebs.jpl.nasa.gov/). In particular, various fielded Sensor Webs will be discussed.
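
    The idea of pods sharing data within their local neighbourhood so that the instrument as a whole develops a global picture can be illustrated with a very small simulation. In the Python sketch below each pod repeatedly averages its reading with its neighbours'; the topology and update rule are invented for illustration and are not JPL's Sensor Web protocols.

      # Toy Sensor Web: each pod averages its reading with its neighbours' readings,
      # so local measurements diffuse into a shared, instrument-wide estimate.
      neighbours = {            # ring topology, purely illustrative
          "pod1": ["pod2", "pod4"],
          "pod2": ["pod1", "pod3"],
          "pod3": ["pod2", "pod4"],
          "pod4": ["pod3", "pod1"],
      }
      readings = {"pod1": 20.0, "pod2": 22.0, "pod3": 30.0, "pod4": 18.0}   # e.g. temperature

      for step in range(5):
          readings = {
              pod: (readings[pod] + sum(readings[n] for n in neighbours[pod])) / (1 + len(neighbours[pod]))
              for pod in readings
          }
          print(step, {pod: round(value, 2) for pod, value in readings.items()})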

  10. Social Representations of Intelligence

    Directory of Open Access Journals (Sweden)

    Elena Zubieta

    2016-02-01

    Full Text Available The article stresses the relationship between Explicit and Implicit theories of Intelligence. Following the line of common sense epistemology and the theory of Social Representations, a study was carried out in order to analyse lay explanations of definitions of intelligence. Based on Mugny & Carugati's (1989) research, a self-administered questionnaire was designed and filled in by 286 subjects. The results are congruent with the main hypothesis postulated: a general overlap between explicit and implicit theories showed up. According to the results, intelligence appears both as a social attribute related to social adaptation and as a concept defined in relation to contextual variables, similar to experts' current discourses. Nevertheless, conceptions based on a “gifted ideology” are still present, stressing the main axes of the intelligence debate: biological and sociological determinism. In the same sense, unfamiliarity and social identity are reaffirmed as organizing principles of social representation. The distance from the object (measured as the belief that differences in intelligence are a solvable or unsolvable problem) and the level of involvement with the topic (teachers versus non-teachers) appear as discriminating elements when specific dimensions are endorsed.

  11. 9th Asian Conference on Intelligent Information and Database Systems

    CERN Document Server

    Nguyen, Ngoc; Shirai, Kiyoaki

    2017-01-01

    This book presents recent research in intelligent information and database systems. The carefully selected contributions were initially accepted for presentation as posters at the 9th Asian Conference on Intelligent Information and Database Systems (ACIIDS 2017), held from 3 to 5 April 2017 in Kanazawa, Japan. While the contributions are of an advanced scientific level, several are accessible for non-expert readers. The book brings together 47 chapters divided into six main parts: Part I, From Machine Learning to Data Mining; Part II, Big Data and Collaborative Decision Support Systems; Part III, Computer Vision Analysis, Detection, Tracking and Recognition; Part IV, Data-Intensive Text Processing; Part V, Innovations in Web and Internet Technologies; and Part VI, New Methods and Applications in Information and Software Engineering. The book is an excellent resource for researchers and those working in algorithmics, artificial and computational intelligence, collaborative systems, decisio...

  12. Qualitative Evaluation of the Java Intelligent Tutoring System

    Directory of Open Access Journals (Sweden)

    Edward Sykes

    2005-10-01

    Full Text Available In an effort to support the growing trend of the Java programming language and to promote web-based personalized education, the Java Intelligent Tutoring System (JITS) was designed and developed. This tutoring system is unique in a number of ways. Most Intelligent Tutoring Systems require the teacher to author problems with corresponding solutions. JITS, on the other hand, requires the teacher to supply only the problem and the problem specification. JITS is designed to "intelligently" examine the student's submitted code and determine appropriate feedback based on a number of factors, such as JITS' cognitive model of the student, the student's skill level, and problem details. JITS is intended to be used by beginner programming students in their first year of College or University. This paper discusses the important aspects of the design and development of JITS, the qualitative methods and procedures, and the findings. Research was conducted at the Sheridan Institute of Technology and Advanced Learning, Ontario, Canada.

  13. Web sites survey for electronic public participation

    International Nuclear Information System (INIS)

    Park, Moon Su; Lee, Young Wook; Kang, Chang Sun

    2004-01-01

    Public acceptance has been a key factor in the nuclear industry as well as in other fields. There are many ways to gain public acceptance, and public participation in policy making is a good tool for this purpose. Moreover, participation by means of the internet may be an excellent way to increase voluntary participation. In this paper, levels of electronic public participation are defined, and how easily and deeply the lay public can participate electronically is assessed for selected organizations' web sites.

  14. Building web information systems using web services

    NARCIS (Netherlands)

    Frasincar, F.; Houben, G.J.P.M.; Barna, P.; Vasilecas, O.; Eder, J.; Caplinskas, A.

    2006-01-01

    Hera is a model-driven methodology for designing Web information systems. In the past, a CASE tool for the Hera methodology was implemented; this software had different components that together formed one centralized application. In this paper, we present a distributed Web service-oriented architecture.

  15. Modelling traffic flows with intelligent cars and intelligent roads

    NARCIS (Netherlands)

    van Arem, Bart; Tampere, Chris M.J.; Malone, Kerry

    2003-01-01

    This paper addresses the modeling of traffic flows with intelligent cars and intelligent roads. It will describe the modeling approach MIXIC and review the results for different ADA systems: Adaptive Cruise Control, a special lane for Intelligent Vehicles, cooperative following and external speed
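
    MIXIC itself is not reproduced in this record. A heavily simplified Python sketch of the kind of car-following rule an Adaptive Cruise Control model applies (accelerate towards a desired time gap behind the leader) is given below; the gains, time step, and initial conditions are assumptions for illustration, not MIXIC's parameters.

      # Simplified Adaptive Cruise Control follower: steer acceleration towards a
      # desired time gap behind the leader. All constants are illustrative only.
      DT = 0.5             # simulation time step, s
      DESIRED_GAP_S = 1.5  # desired time gap, s
      K_GAP, K_SPEED = 0.2, 0.4

      leader_speed = 25.0                       # m/s, held constant in this toy run
      leader_pos, follower_pos, follower_speed = 100.0, 50.0, 30.0

      for t in range(20):
          gap = leader_pos - follower_pos
          desired_gap = DESIRED_GAP_S * follower_speed
          accel = K_GAP * (gap - desired_gap) + K_SPEED * (leader_speed - follower_speed)
          follower_speed = max(0.0, follower_speed + accel * DT)
          follower_pos += follower_speed * DT
          leader_pos += leader_speed * DT
          print(f"t={t * DT:4.1f}s  gap={gap:6.1f} m  follower_speed={follower_speed:5.1f} m/s")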

  16. Intelligence analysis – the royal discipline of Competitive Intelligence

    Directory of Open Access Journals (Sweden)

    František Bartes

    2011-01-01

    Full Text Available The aim of this article is to propose a work methodology for Competitive Intelligence teams in one specific area of the intelligence cycle, the so-called “Intelligence Analysis”. Intelligence Analysis is the stage of the Intelligence Cycle in which data from both primary and secondary research are analyzed. The main result of the effort is the creation of added value for the information collected. Company Competitive Intelligence, correctly understood and implemented in business practice, is the “forecasting of the future”: forecasting that forms the basis for strategic decisions made by the company's top management. To implement that requirement in corporate practice, the author perceives Competitive Intelligence as a systemic application discipline. This approach allows him to propose a “Work Plan” for Competitive Intelligence as a fundamental standardized document to steer Competitive Intelligence team activities. The author divides the Competitive Intelligence team work plan into five basic parts, derived from the five-stage model of the intelligence cycle, which, in the author's opinion, is more appropriate for complicated cases of Competitive Intelligence.

  17. An Intelligent Tutoring System for Learning Android Applications UI Development

    OpenAIRE

    Al Rekhawi, Hazem Awni; Abu Naser, Samy S.

    2018-01-01

    The paper describes the design of a web-based intelligent tutoring system for teaching Android Applications Development to students, to overcome the difficulties they face. The basic idea of this system is a systematic introduction to the concepts of Android Application Development. The system presents the topic of Android Application Development and administers automatically generated problems for the students to solve. The system is automatically adapted at run time ...

  18. Deep primary production in coastal pelagic systems

    DEFF Research Database (Denmark)

    Lyngsgaard, Maren Moltke; Richardson, Katherine; Markager, Stiig

    2014-01-01

    … The primary production (PP) occurring below the surface layer, i.e. in the pycnocline-bottom layer (PBL), is shown to contribute significantly to total PP. Oxygen concentrations in the PBL are shown to correlate significantly with the deep primary production (DPP) as well as with salinity … that eutrophication effects may include changes in the structure of planktonic food webs and element cycling in the water column, both brought about through an altered vertical distribution of PP …

  19. The Literature of Competitive Intelligence.

    Science.gov (United States)

    Walker, Thomas D.

    1994-01-01

    Describes competitive intelligence (CI) literature in terms of its location, quantity, authorship, length, and problems of bibliographic access. Highlights include subject access; competitive intelligence research; espionage and security; monographs; and journals. (21 references) (LRW)

  20. A Windows Phone 7 Oriented Secure Architecture for Business Intelligence Mobile Applications

    Directory of Open Access Journals (Sweden)

    Silvia TRIF

    2011-01-01

    Full Text Available This paper presents and implements a Windows Phone 7 oriented secure architecture for Business Intelligence mobile applications. The development process uses a Windows Phone 7 application that interacts with a WCF Web Service and a database. The types of Business Intelligence mobile applications are presented. Windows mobile device security features and restrictions are presented. The namespaces and security algorithms used in the .NET Compact Framework to secure the application are presented. The proposed architecture is shown, underlining the flows between the application and the web service.
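
    The paper itself works with .NET Compact Framework namespaces on the device. As a language-neutral illustration of one element such an architecture needs (the mobile client authenticating its requests to the web service before any Business Intelligence data flows back), a Python sketch using HMAC request signing is shown below; the key handling and message format are assumptions and do not reflect the paper's .NET implementation.

      # Illustrative request signing between a mobile BI client and its web service:
      # the client attaches an HMAC of the request body, the service recomputes it
      # and compares. Key management is deliberately out of scope for this sketch.
      import hashlib
      import hmac

      SHARED_KEY = b"demo-key-not-for-production"

      def sign(body: bytes) -> str:
          return hmac.new(SHARED_KEY, body, hashlib.sha256).hexdigest()

      def verify(body: bytes, signature: str) -> bool:
          return hmac.compare_digest(sign(body), signature)

      request_body = b'{"report": "sales-by-region"}'
      signature = sign(request_body)
      print("valid:", verify(request_body, signature))               # True
      print("tampered:", verify(b'{"report": "other"}', signature))  # False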