WorldWideScience

Sample records for intelligent search engine

  1. EIIS: An Educational Information Intelligent Search Engine Supported by Semantic Services

    Science.gov (United States)

    Huang, Chang-Qin; Duan, Ru-Lin; Tang, Yong; Zhu, Zhi-Ting; Yan, Yong-Jian; Guo, Yu-Qing

    2011-01-01

    The semantic web brings a new opportunity for efficient information organization and search. To meet the special requirements of the educational field, this paper proposes an intelligent search engine enabled by an educational semantic support service, where three kinds of searches are integrated into Educational Information Intelligent Search (EIIS)…

  2. The internet and intelligent machines: search engines, agents and robots

    International Nuclear Information System (INIS)

    Achenbach, S.; Alfke, H.

    2000-01-01

    The internet plays an important role in a growing number of medical applications. Finding relevant information is not always easy, as the amount of available information on the Web is rising quickly. Even the best search engines can only collect links to a fraction of all existing Web pages. In addition, many of these indexed documents have been changed or deleted. The vast majority of information on the Web is not searchable with conventional methods. New search strategies, technologies and standards are combined in Intelligent Search Agents (ISA) and Robots, which can retrieve desired information in a more targeted way. Conclusion: The article describes the differences between ISAs and conventional search engines and how communication between agents improves their ability to find information. Examples of existing ISAs are given and the possible influences on current and future work in radiology are discussed. (orig.)

  3. Artificial Intelligence in Civil Engineering

    Directory of Open Access Journals (Sweden)

    Pengzhen Lu

    2012-01-01

    Artificial intelligence is a branch of computer science involved in the research, design, and application of intelligent computers. Traditional methods for modeling and optimizing complex structural systems require huge amounts of computing resources, and artificial-intelligence-based solutions can often provide valuable alternatives for efficiently solving problems in civil engineering. This paper summarizes recently developed methods and theories concerning the application of artificial intelligence in civil engineering, including evolutionary computation, neural networks, fuzzy systems, expert systems, reasoning, classification, and learning, as well as others such as chaos theory, cuckoo search, the firefly algorithm, knowledge-based engineering, and simulated annealing. The main research trends are also pointed out at the end. The paper provides an overview of the advances of artificial intelligence applied in civil engineering.

  4. Intelligent techniques in engineering management theory and applications

    CERN Document Server

    Onar, Sezi

    2015-01-01

    This book presents recently developed intelligent techniques, with applications and theory, in the area of engineering management. The applications of intelligent techniques such as neural networks, fuzzy sets, tabu search, and genetic algorithms will be useful for engineering managers, postgraduate students, researchers, and lecturers. The book has been written along the lines of a classical engineering management book, but intelligent techniques are used for handling the engineering management problem areas. These comprehensive characteristics make the book an excellent reference for the solution of complex problems in engineering management. The authors of the chapters are well-known researchers with previous work in the area of engineering management.

  5. An Innovative Approach for online Meta Search Engine Optimization

    OpenAIRE

    Manral, Jai; Hossain, Mohammed Alamgir

    2015-01-01

    This paper presents an approach to identifying efficient techniques used in Web Search Engine Optimization (SEO). Understanding the SEO factors that can influence page ranking in search engines is important for webmasters who wish to attract a large number of users to their website. Unlike previous relevant research, in this study we developed an intelligent meta search engine which aggregates results from various search engines and ranks them based on several important SEO parameters. The r...
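
    The aggregation-and-re-ranking idea described in this record can be sketched in a few lines: results from several engines are merged by URL and re-scored by a weighted sum of SEO-style features. The feature names and weights below are illustrative assumptions, not the parameters the authors actually used.

```python
# Minimal sketch of a meta-search aggregator that re-ranks merged results
# by a weighted sum of SEO-style features. Feature names and weights are
# illustrative assumptions, not the scoring model used in the paper.

WEIGHTS = {"keyword_in_title": 3.0, "backlinks": 2.0, "page_speed": 1.0}

def seo_score(features):
    """Weighted sum of normalised SEO features (each in [0, 1])."""
    return sum(WEIGHTS[name] * features.get(name, 0.0) for name in WEIGHTS)

def aggregate(result_lists):
    """Merge result lists from several engines and re-rank by SEO score.

    Each result is a dict like {"url": ..., "features": {...}}.
    """
    merged = {}
    for results in result_lists:
        for r in results:
            merged.setdefault(r["url"], r)   # keep first occurrence per URL
    return sorted(merged.values(),
                  key=lambda r: seo_score(r["features"]),
                  reverse=True)

# Example: two engines returning overlapping results.
engine_a = [{"url": "a.com", "features": {"keyword_in_title": 1.0, "backlinks": 0.8}}]
engine_b = [{"url": "b.com", "features": {"page_speed": 0.9}},
            {"url": "a.com", "features": {"keyword_in_title": 1.0}}]
for r in aggregate([engine_a, engine_b]):
    print(r["url"], round(seo_score(r["features"]), 2))
```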

  6. A Secured Cognitive Agent based Multi-strategic Intelligent Search System

    Directory of Open Access Journals (Sweden)

    Neha Gulati

    2018-04-01

    The Search Engine (SE) is the most preferred and ubiquitously used information retrieval tool. In spite of the vast-scale involvement of users in SEs, their limited capability to understand the user/searcher context and emotions places a high cognitive, perceptual and learning load on the user to maintain the search momentum. In this regard, the present work discusses a Cognitive Agent (CA) based approach to support the user in the Web-based search process. The work suggests a framework called Secured Cognitive Agent based Multi-strategic Intelligent Search System (CAbMsISS) to assist the user in the search process. It helps to reduce the contextual and emotional mismatch between the SE and the user. After implementation of the proposed framework, performance analysis shows that the CAbMsISS framework improves Query Retrieval Time (QRT) and the effectiveness of retrieving relevant results as compared to a Present Search Engine (PSE). Supplementary to this, it also provides search suggestions when a user accesses a resource previously tagged with negative emotions. Overall, the goal of the system is to enhance the search experience and keep the user motivated. The framework provides suggestions through a search log that tracks the queries searched, the resources accessed and the emotions experienced during the search. The implemented framework also considers user security. Keywords: BDI model, Cognitive Agent, Emotion, Information retrieval, Intelligent search, Search Engine
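
    A minimal sketch of the kind of search log the framework describes is shown below: each accessed resource is stored with the query and the emotion experienced, and a suggestion is produced when a resource previously tagged with a negative emotion is accessed again. The field names and the set of negative emotions are assumptions, not the CAbMsISS implementation.

```python
# Minimal sketch of an emotion-tagged search log with suggestion lookup.
# Field names and the list of negative emotions are assumptions.

from collections import defaultdict

NEGATIVE = {"frustration", "confusion", "anger"}

class SearchLog:
    def __init__(self):
        self.entries = defaultdict(list)   # resource URL -> list of (query, emotion)

    def record(self, query, resource, emotion):
        self.entries[resource].append((query, emotion))

    def suggestion_for(self, resource):
        """Return a hint when this resource was previously tagged with a negative emotion."""
        bad = [q for q, e in self.entries[resource] if e in NEGATIVE]
        if bad:
            return f"Resource was previously unhelpful for: {', '.join(sorted(set(bad)))}"
        return None

log = SearchLog()
log.record("binary search trees", "http://example.org/bst", "frustration")
print(log.suggestion_for("http://example.org/bst"))
```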

  7. Durham Zoo: Powering a Search-&-Innovation Engine with Collective Intelligence

    Directory of Open Access Journals (Sweden)

    Richard Absalom

    2015-02-01

    Purpose – Durham Zoo (hereinafter DZ) is a project to design and operate a concept search engine for science and technology. In DZ, a concept includes a solution to a problem in a particular context. Design – Concept searching is rendered complex by the fuzzy nature of a concept, the many possible implementations of the same concept, and the many more ways that the many implementations can be expressed in natural language. An additional complexity is the diversity of languages and formats in which the concepts can be disclosed. Humans understand language, inference, implication and abstraction and, hence, concepts much better than computers, which in turn are much better at storing and processing vast amounts of data. We are 7 billion on the planet and we have the Internet as the backbone for Collective Intelligence. So, our concept search engine uses humans to store concepts via a shorthand that can be stored, processed and searched by computers: humans IN and computers OUT. The shorthand is classification: metadata in a structure that can define the content of a disclosure. The classification is designed to be powerful in terms of defining and searching concepts, whilst suited to a crowdsourcing effort. It is simple and intuitive to use. Most importantly, it is adapted to restrict ambiguity, which is the poison of classification, without imposing a restrictive centralised management. In the classification scheme, each entity is shown in a graphical representation together with related entities. The entities are arranged on a sliding scale of similarity. This sliding scale is effectively fuzzy classification. Findings – The authors of the paper have been developing a first classification scheme for the technology of traffic cones, in preparation for a trial of a working system. The process has enabled the authors to further explore the practicalities of concept classification. The CmapTools knowledge modelling kit to develop the

  8. iPixel: a visual content-based and semantic search engine for retrieving digitized mammograms by using collective intelligence.

    Science.gov (United States)

    Alor-Hernández, Giner; Pérez-Gallardo, Yuliana; Posada-Gómez, Rubén; Cortes-Robles, Guillermo; Rodríguez-González, Alejandro; Aguilar-Laserre, Alberto A

    2012-09-01

    Nowadays, traditional search engines such as Google, Yahoo and Bing facilitate the retrieval of information in the form of images, but the results are not always useful for users. This is mainly due to two problems: (1) semantic keywords are not taken into consideration and (2) it is not always possible to formulate a query using the image features. This issue has been addressed in different domains in order to develop content-based image retrieval (CBIR) systems. The expert community has focused its attention on the healthcare domain, where a lot of visual information for medical analysis is available. This paper provides a solution called the iPixel Visual Search Engine, which involves both semantic and content issues in order to search for digitized mammograms. iPixel offers the possibility of retrieving mammogram features using collective intelligence and implementing a CBIR algorithm. Our proposal compares not only features with similar semantic meaning, but also visual features. In this sense, the comparisons are made in different ways: by the number of regions per image, by the maximum and minimum size of the regions per image and by the average intensity level of each region. The iPixel Visual Search Engine supports the medical community in differential diagnoses related to diseases of the breast. The iPixel Visual Search Engine has been validated by experts in the healthcare domain, such as radiologists, as well as by experts in digital image analysis.
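
    The region-based comparison the abstract lists (number of regions, largest and smallest region size, average intensity per region) can be illustrated with a small sketch, assuming a simple Euclidean distance over those summary features; this is not iPixel's actual CBIR algorithm.

```python
# Minimal sketch: summarise an image by (region count, max size, min size,
# mean intensity) and rank candidates by Euclidean distance to the query.
# The distance function is an assumption, not iPixel's CBIR algorithm.

import math

def region_signature(regions):
    """regions: list of (size_in_pixels, mean_intensity) tuples."""
    sizes = [s for s, _ in regions]
    intensities = [i for _, i in regions]
    return (len(regions), max(sizes), min(sizes),
            sum(intensities) / len(intensities))

def distance(sig_a, sig_b):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(sig_a, sig_b)))

query = region_signature([(120, 0.62), (45, 0.80)])
candidates = {"mammo_1": region_signature([(118, 0.60), (50, 0.78)]),
              "mammo_2": region_signature([(300, 0.30)])}
ranked = sorted(candidates, key=lambda k: distance(query, candidates[k]))
print(ranked)  # most similar first
```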

  9. PubData: search engine for bioinformatics databases worldwide

    OpenAIRE

    Vand, Kasra; Wahlestedt, Thor; Khomtchouk, Kelly; Sayed, Mohammed; Wahlestedt, Claes; Khomtchouk, Bohdan

    2016-01-01

    We propose a search engine and file retrieval system for all bioinformatics databases worldwide. PubData searches biomedical data in a user-friendly fashion similar to how PubMed searches biomedical literature. PubData is built on novel network programming, natural language processing, and artificial intelligence algorithms that can patch into the file transfer protocol servers of any user-specified bioinformatics database, query its contents, retrieve files for download, and adapt to the use...

  10. The internet and intelligent machines: search engines, agents and robots; Radiologische Informationssuche im Internet: Datenbanken, Suchmaschinen und intelligente Agenten

    Energy Technology Data Exchange (ETDEWEB)

    Achenbach, S; Alfke, H [Marburg Univ. (Germany). Abt. fuer Strahlendiagnostik

    2000-04-01

    The internet plays an important role in a growing number of medical applications. Finding relevant information is not always easy, as the amount of available information on the Web is rising quickly. Even the best search engines can only collect links to a fraction of all existing Web pages. In addition, many of these indexed documents have been changed or deleted. The vast majority of information on the Web is not searchable with conventional methods. New search strategies, technologies and standards are combined in Intelligent Search Agents (ISA) and Robots, which can retrieve desired information in a more targeted way. Conclusion: The article describes the differences between ISAs and conventional search engines and how communication between agents improves their ability to find information. Examples of existing ISAs are given and the possible influences on current and future work in radiology are discussed. (orig.) [German] The internet is increasingly used in medical applications, but finding relevant information is not always easy. The number of documents available on the World Wide Web is growing so quickly that searching is becoming ever more difficult: even good search engines capture only a few percent of the existing pages in their databases. In addition, constant changes mean that only a portion of these searchable documents still exists at all. The majority of the internet therefore cannot be accessed with conventional methods. New standards, search strategies and technologies come together in search agents and robots, which can find content in a more targeted and intelligent way. Conclusion: The article describes how an Intelligent Search Agent (ISA) differs from a search engine and how, through cooperation with other agents, it can better meet users' requirements. In addition to the fundamentals, exemplary applications that exist on the net today are shown, together with an outlook

  11. Categorization of web pages - Performance enhancement to search engine

    Digital Repository Service at National Institute of Oceanography (India)

    Lakshminarayana, S.

  12. Internet Search Engines

    OpenAIRE

    Fatmaa El Zahraa Mohamed Abdou

    2004-01-01

    A general study of internet search engines. The study deals with seven main points: the difference between search engines and search directories, the components of search engines, the percentage of sites covered by search engines, the cataloging of sites, the time needed for sites to appear in search engines, search capabilities, and the types of search engines.

  13. Computational Intelligence for Engineering Systems

    CERN Document Server

    Madureira, A; Vale, Zita

    2011-01-01

    "Computational Intelligence for Engineering Systems" provides an overview and original analysis of new developments and advances in several areas of computational intelligence. Computational Intelligence have become the road-map for engineers to develop and analyze novel techniques to solve problems in basic sciences (such as physics, chemistry and biology) and engineering, environmental, life and social sciences. The contributions are written by international experts, who provide up-to-date aspects of the topics discussed and present recent, original insights into their own experien

  14. A World Wide Web Region-Based Image Search Engine

    DEFF Research Database (Denmark)

    Kompatsiaris, Ioannis; Triantafyllou, Evangelia; Strintzis, Michael G.

    2001-01-01

    In this paper the development of an intelligent image content-based search engine for the World Wide Web is presented. This system will offer a new form of media representation and access to content available on the WWW. Information Web Crawlers continuously traverse the Internet and collect images...

  15. Search for extraterrestrial intelligence (SETI)

    International Nuclear Information System (INIS)

    Morrison, P.; Billingham, J.; Wolfe, J.

    1977-01-01

    Findings are presented from a series of workshops on the existence of extraterrestrial intelligent life and the ways in which extraterrestrial intelligence might be detected. The coverage includes cosmic and cultural evolution, search strategies, the detection of other planetary systems, alternative methods of communication, and radio frequency interference. 17 references

  16. ENGINEERING OF UNIVERSITY INTELLIGENT LEARNING SYSTEMS

    Directory of Open Access Journals (Sweden)

    Vasiliy M. Trembach

    2016-01-01

    In this article, issues in the engineering of university intelligent tutoring systems with adaptation are considered. The article also dwells on some modern approaches to the engineering of information systems. It shows the role of engineering e-learning devices (systems) in system engineering. The article describes the basic principles of system engineering, and these principles are extended with regard to intelligent information systems. The structure of intelligent learning systems with adaptation of individual learning environments based on services is presented in the article.

  17. The Search for Extraterrestrial Intelligence (SETI)

    Science.gov (United States)

    Tarter, Jill

    The search for evidence of extraterrestrial intelligence is placed in the broader astronomical context of the search for extrasolar planets and biomarkers of primitive life elsewhere in the universe. A decision tree of possible search strategies is presented as well as a brief history of the search for extraterrestrial intelligence (SETI) projects since 1960. The characteristics of 14 SETI projects currently operating on telescopes are discussed and compared using one of many possible figures of merit. Plans for SETI searches in the immediate and more distant future are outlined. Plans for success, the significance of null results, and some opinions on deliberate transmission of signals (as well as listening) are also included. SETI results to date are negative, but in reality, not much searching has yet been done.

  18. Coupling artificial intelligence and numerical computation for engineering design (Invited paper)

    Science.gov (United States)

    Tong, S. S.

    1986-01-01

    The possibility of combining artificial intelligence (AI) systems and numerical computation methods for engineering designs is considered. Attention is given to three possible areas of application involving fan design, controlled vortex design of turbine stage blade angles, and preliminary design of turbine cascade profiles. Among the AI techniques discussed are: knowledge-based systems; intelligent search; and pattern recognition systems. The potential cost and performance advantages of an AI-based design-generation system are discussed in detail.

  19. A swarm intelligence framework for reconstructing gene networks: searching for biologically plausible architectures.

    Science.gov (United States)

    Kentzoglanakis, Kyriakos; Poole, Matthew

    2012-01-01

    In this paper, we investigate the problem of reverse engineering the topology of gene regulatory networks from temporal gene expression data. We adopt a computational intelligence approach comprising swarm intelligence techniques, namely particle swarm optimization (PSO) and ant colony optimization (ACO). In addition, the recurrent neural network (RNN) formalism is employed for modeling the dynamical behavior of gene regulatory systems. More specifically, ACO is used for searching the discrete space of network architectures and PSO for searching the corresponding continuous space of RNN model parameters. We propose a novel solution construction process in the context of ACO for generating biologically plausible candidate architectures. The objective is to concentrate the search effort into areas of the structure space that contain architectures which are feasible in terms of their topological resemblance to real-world networks. The proposed framework is initially applied to the reconstruction of a small artificial network that has previously been studied in the context of gene network reverse engineering. Subsequently, we consider an artificial data set with added noise for reconstructing a subnetwork of the genetic interaction network of S. cerevisiae (yeast). Finally, the framework is applied to a real-world data set for reverse engineering the SOS response system of the bacterium Escherichia coli. Results demonstrate the relative advantage of utilizing problem-specific knowledge regarding biologically plausible structural properties of gene networks over conducting a problem-agnostic search in the vast space of network architectures.
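
    For the continuous part of such a framework (the RNN parameter search), the particle swarm update can be sketched as below. This is textbook PSO with a toy objective, not the configuration or fitness function reported in the paper.

```python
# Minimal sketch of a particle swarm optimisation step. The objective is a
# placeholder; in the paper it would be the fit error of an RNN gene-network
# model against expression data. Parameters w, c1, c2 are standard defaults.

import random

def evaluate(pos):
    # Placeholder objective to minimise (sum of squares).
    return sum(x * x for x in pos)

def pso_step(particles, w=0.7, c1=1.5, c2=1.5):
    """One PSO iteration over a list of particle dicts."""
    global_best = min(particles, key=lambda p: p["best_cost"])["best_pos"]
    for p in particles:
        for i in range(len(p["pos"])):
            r1, r2 = random.random(), random.random()
            p["vel"][i] = (w * p["vel"][i]
                           + c1 * r1 * (p["best_pos"][i] - p["pos"][i])
                           + c2 * r2 * (global_best[i] - p["pos"][i]))
            p["pos"][i] += p["vel"][i]
        cost = evaluate(p["pos"])
        if cost < p["best_cost"]:
            p["best_cost"], p["best_pos"] = cost, list(p["pos"])

particles = []
for _ in range(5):
    pos = [random.uniform(-1, 1) for _ in range(3)]
    particles.append({"pos": pos, "vel": [0.0] * 3,
                      "best_pos": list(pos), "best_cost": evaluate(pos)})
for _ in range(50):
    pso_step(particles)
print(min(p["best_cost"] for p in particles))
```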

  20. Model of intelligent information searching system

    International Nuclear Information System (INIS)

    Yastrebkov, D.I.

    2004-01-01

    A brief description of techniques for searching for electronic documents in large archives, as well as their drawbacks, is presented. A solution close to intelligent information searching systems is proposed. (author)

  1. Meta Search Engines.

    Science.gov (United States)

    Garman, Nancy

    1999-01-01

    Describes common options and features to consider in evaluating which meta search engine will best meet a searcher's needs. Discusses number and names of engines searched; other sources and specialty engines; search queries; other search options; and results options. (AEF)

  2. Competing intelligent search agents in global optimization

    Energy Technology Data Exchange (ETDEWEB)

    Streltsov, S.; Vakili, P. [Boston Univ., MA (United States); Muchnik, I. [Rutgers Univ., Piscataway, NJ (United States)

    1996-12-31

    In this paper we present a new search methodology that we view as a development of the intelligent agent approach to the analysis of complex systems. The main idea is to consider the search process as a competition mechanism between concurrent adaptive intelligent agents. Agents cooperate in achieving a common search goal and at the same time compete with each other for computational resources. We propose a statistical selection approach to resource allocation between agents that leads to simple and, on average, efficient index allocation policies. We use global optimization as the most general setting that encompasses many types of search problems, and show how the proposed selection policies can be used to improve and combine various global optimization methods.
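
    A minimal sketch of competing agents sharing a computation budget is given below, assuming a simple index rule: at each step, the next batch of iterations goes to the agent with the highest observed average improvement per batch. The rule and the toy random-search agents are illustrative, not the statistical selection policy of the paper.

```python
# Minimal sketch of budget allocation among competing search agents via an
# index rule (average improvement per batch). Illustrative only.

import random

class RandomSearchAgent:
    """A toy 'agent': random search over [-5, 5]^2 for a shared objective."""
    def __init__(self, objective):
        self.objective = objective
        self.best = objective([random.uniform(-5, 5) for _ in range(2)])
        self.gains = [1.0]                     # optimistic initial gain estimate

    def run_batch(self, n=10):
        before = self.best
        for _ in range(n):
            x = [random.uniform(-5, 5) for _ in range(2)]
            self.best = min(self.best, self.objective(x))
        self.gains.append(before - self.best)  # improvement obtained this batch

    def index(self):
        return sum(self.gains) / len(self.gains)

objective = lambda x: (x[0] - 1) ** 2 + (x[1] + 2) ** 2
agents = [RandomSearchAgent(objective) for _ in range(3)]
for _ in range(30):                             # total budget of 30 batches
    best_agent = max(agents, key=lambda a: a.index())
    best_agent.run_batch()
print(min(a.best for a in agents))
```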

  3. Hybrid intelligent engineering systems

    CERN Document Server

    Jain, L C; Adelaide, Australia University of

    1997-01-01

    This book on hybrid intelligent engineering systems is unique, in the sense that it presents the integration of expert systems, neural networks, fuzzy systems, genetic algorithms, and chaos engineering. It shows that these new techniques enhance the capabilities of one another. A number of hybrid systems for solving engineering problems are presented.

  4. Readings in artificial intelligence and software engineering

    CERN Document Server

    Rich, Charles

    1986-01-01

    Readings in Artificial Intelligence and Software Engineering covers the main techniques and applications of artificial intelligence and software engineering. The ultimate goal of artificial intelligence applied to software engineering is automatic programming. Automatic programming would allow a user to simply say what is wanted and have a program produced completely automatically. This book is organized into 11 parts encompassing 34 chapters that specifically tackle the topics of deductive synthesis, program transformations, program verification, and programming tutors. The opening parts p

  5. Artificial Intelligence in Civil Engineering

    OpenAIRE

    Lu, Pengzhen; Chen, Shengyong; Zheng, Yujun

    2012-01-01

    Artificial intelligence is a branch of computer science involved in the research, design, and application of intelligent computers. Traditional methods for modeling and optimizing complex structural systems require huge amounts of computing resources, and artificial-intelligence-based solutions can often provide valuable alternatives for efficiently solving problems in civil engineering. This paper summarizes recently developed methods and theories concerning the applicati...

  6. Myanmar Language Search Engine

    OpenAIRE

    Pann Yu Mon; Yoshiki Mikami

    2011-01-01

    With the enormous growth of the World Wide Web, search engines play a critical role in retrieving information from the borderless Web. Although many search engines are available for the major languages, they are not very proficient for less computerized languages, including Myanmar. The main reason is that those search engines do not take into account the specific features of those languages. A search engine capable of searching Web documents written in those languages is highly needed...

  7. Development of an Intelligent Car Engine Fault Troubleshooting ...

    African Journals Online (AJOL)

    Development of an Intelligent Car Engine Fault Troubleshooting System (CEFTS) ... and also provides a troubleshooting framework for other researchers to work on. Keywords: ... inference engine, knowledge acquisition, artificial intelligence.

  8. Teknik Perangkingan Meta-search Engine

    OpenAIRE

    Puspitaningrum, Diyah

    2014-01-01

    A meta-search engine organizes the merging of results from multiple search engines with the aim of improving the precision of web document retrieval. This survey of meta-search engine ranking techniques discusses issues of pre-processing, ranking, and the various techniques for combining search results from different search engines (multi-combination). Implementation issues in combining two and three search engines are also highlighted. The paper also discusses directions for resear...
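
    One widely used way of merging ranked lists from two or three engines is reciprocal rank fusion (RRF), sketched below as an illustration of the multi-combination merging the survey discusses; RRF is not necessarily one of the specific techniques the survey covers.

```python
# Minimal sketch of reciprocal rank fusion (RRF) over ranked URL lists
# returned by several engines. Shown for illustration only.

from collections import defaultdict

def reciprocal_rank_fusion(rankings, k=60):
    """rankings: list of ranked URL lists, one per engine."""
    scores = defaultdict(float)
    for ranking in rankings:
        for rank, url in enumerate(ranking, start=1):
            scores[url] += 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

engine_a = ["u1", "u2", "u3"]
engine_b = ["u3", "u1", "u4"]
engine_c = ["u2", "u3", "u5"]
print(reciprocal_rank_fusion([engine_a, engine_b, engine_c]))
```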

  9. Sound Search Engine Concept

    DEFF Research Database (Denmark)

    2006-01-01

    Sound search is provided by the major search engines; however, indexing is text based, not sound based. We will establish a dedicated sound search service based on sound feature indexing. The current demo shows the concept of the sound search engine. The first engine will be released in June...

  10. Evolution Engines and Artificial Intelligence

    Science.gov (United States)

    Hemker, Andreas; Becks, Karl-Heinz

    In recent years artificial intelligence has achieved great successes, mainly in the fields of expert systems and neural networks. Nevertheless, the road to truly intelligent systems is still obscured. Artificial intelligence systems with a broad range of cognitive abilities are not within sight. The limited competence of such systems (brittleness) is identified as a consequence of the top-down design process. The evolution principle of nature, on the other hand, shows an alternative and elegant way to build intelligent systems. We propose to take an evolution engine as the driving force for the bottom-up development of knowledge bases and for the optimization of the problem-solving process. A novel data analysis system for the high energy physics experiment DELPHI at CERN shows the practical relevance of this idea. The system is able to reconstruct the physical processes after the collision of particles by making use of the underlying standard model of elementary particle physics. The evolution engine acts as a global controller of a population of inference engines working on the reconstruction task. By implementing the system on the Connection Machine (Model CM-2) we take full advantage of the inherent parallelization potential of the evolutionary approach.

  11. International Conference on Intelligent Technologies and Engineering System (ICITES 2012)

    CERN Document Server

    Huang, Yi-Cheng; Intelligent Technologies and Engineering Systems

    2013-01-01

    This book concentrates on intelligent technologies as they relate to engineering systems. The book covers the following topics: networking, signal processing, artificial intelligence, control and software engineering, intelligent electronic circuits and systems, communications, and materials and mechanical engineering. The book is a collection of original papers that have been reviewed by technical editors. These papers were presented at the International Conference on Intelligent Technologies and Engineering Systems, held Dec. 13-15, 2012.

  12. Recent advances in intelligent image search and video retrieval

    CERN Document Server

    2017-01-01

    This book initially reviews the major feature representation and extraction methods and effective learning and recognition approaches, which have broad applications in the context of intelligent image search and video retrieval. It subsequently presents novel methods, such as improved soft assignment coding, Inheritable Color Space (InCS) and the Generalized InCS framework, the sparse kernel manifold learner method, the efficient Support Vector Machine (eSVM), and the Scale-Invariant Feature Transform (SIFT) features in multiple color spaces. Lastly, the book presents clothing analysis for subject identification and retrieval, and performance evaluation methods of video analytics for traffic monitoring. Digital images and videos are proliferating at an amazing speed in the fields of science, engineering and technology, media and entertainment. With the huge accumulation of such data, keyword searches and manual annotation schemes may no longer be able to meet the practical demand for retrieving relevant conte...

  13. Eighth International Conference on Intelligent Systems and Knowledge Engineering

    CERN Document Server

    Li, Tianrui; ISKE 2013; Foundations of Intelligent Systems; Knowledge Engineering and Management; Practical Applications of Intelligent Systems

    2014-01-01

    "Foundations of Intelligent Systems" presents selected papers from the 2013 International Conference on Intelligent Systems and Knowledge Engineering (ISKE2013). The aim of this conference is to bring together experts from different expertise areas to discuss the state-of-the-art in Intelligent Systems and Knowledge Engineering, and to present new research results and perspectives on future development. The topics in this volume include, but not limited to: Artificial Intelligence Theories, Pattern Recognition, Intelligent System Models, Speech Recognition, Computer Vision, Multi-Agent Systems, Machine Learning, Soft Computing and Fuzzy Systems, Biological Inspired Computation, Game Theory, Cognitive Systems and Information Processing, Computational Intelligence, etc. The proceedings are benefit for both researchers and practitioners who want to utilize intelligent methods in their specific research fields. Dr. Zhenkun Wen is a Professor at the College of Computer and Software Engineering, Shenzhen University...

  14. A Search Engine That's Aware of Your Needs

    Science.gov (United States)

    2005-01-01

    Internet research can be compared to trying to drink from a firehose. Such a wealth of information is available that even the simplest inquiry can sometimes generate tens of thousands of leads, more information than most people can handle, and more burdensome than most can endure. Like everyone else, NASA scientists rely on the Internet as a primary search tool. Unlike the average user, though, NASA scientists perform some pretty sophisticated, involved research. To help manage the Internet and to allow researchers at NASA to gain better, more efficient access to the wealth of information, the Agency needed a search tool that was more refined and intelligent than the typical search engine. Partnership: NASA funded Stottler Henke, Inc., of San Mateo, California, a cutting-edge software company, with a Small Business Innovation Research (SBIR) contract to develop the Aware software for searching through the vast stores of knowledge quickly and efficiently. The partnership was through NASA's Ames Research Center.

  15. Custom Search Engines: Tools & Tips

    Science.gov (United States)

    Notess, Greg R.

    2008-01-01

    Few have the resources to build a Google or Yahoo! from scratch. Yet anyone can build a search engine based on a subset of the large search engines' databases. Use Google Custom Search Engine or Yahoo! Search Builder or any of the other similar programs to create a vertical search engine targeting sites of interest to users. The basic steps to…

  16. Effect of Undergraduates’ Emotional Intelligence on Information Search Behavior

    Directory of Open Access Journals (Sweden)

    Wang Haocheng

    2017-06-01

    [Purpose/significance] Information search capability is the focus of information literacy education. This paper explores the relationship between emotional intelligence and information search behavior. [Method/process] Based on data from questionnaires completed by 250 undergraduates, this paper used IBM SPSS Statistics 19.0 for statistical data analysis. [Result/conclusion] The correlation between emotional intelligence and information search capability is clearly positive. Among all the variables in the regression equation, information search behavior is mainly affected by the regulation and utilization dimensions of emotion. Utilization of emotion mainly affects retrieval strategies, information evaluation, behavior adjustment and total score; regulation of emotion mainly affects information referencing.

  17. BIOMedical Search Engine Framework: Lightweight and customized implementation of domain-specific biomedical search engines.

    Science.gov (United States)

    Jácome, Alberto G; Fdez-Riverola, Florentino; Lourenço, Anália

    2016-07-01

    Text mining and semantic analysis approaches can be applied to the construction of biomedical domain-specific search engines and provide an attractive alternative for creating personalized and enhanced search experiences. Therefore, this work introduces the new open-source BIOMedical Search Engine Framework for the fast and lightweight development of domain-specific search engines. The rationale behind this framework is to incorporate core features typically available in search engine frameworks with flexible and extensible technologies to retrieve biomedical documents, annotate meaningful domain concepts, and develop highly customized Web search interfaces. The BIOMedical Search Engine Framework integrates taggers for major biomedical concepts, such as diseases, drugs, genes, proteins, compounds and organisms, and enables the use of domain-specific controlled vocabulary. Technologies from the Typesafe Reactive Platform, the AngularJS JavaScript framework and the Bootstrap HTML/CSS framework support the customization of the domain-oriented search application. Moreover, the RESTful API of the BIOMedical Search Engine Framework allows the integration of the search engine into existing systems or a complete web interface personalization. The construction of the Smart Drug Search is described as a proof of concept of the BIOMedical Search Engine Framework. This public search engine catalogs scientific literature about antimicrobial resistance, microbial virulence and similar topics. The keyword-based queries of the users are transformed into concepts and search results are presented and ranked accordingly. The semantic graph view portrays all the concepts found in the results, and the researcher may look into the relevance of different concepts, the strength of direct relations, and non-trivial, indirect relations. The number of occurrences of the concept shows its importance to the query, and the frequency of concept co-occurrence is indicative of biological relations
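
    The concept statistics behind the semantic graph view described above (occurrence counts as importance to the query, co-occurrence counts as candidate relations) can be sketched as follows; the counting scheme is an illustrative assumption, not the framework's exact weighting.

```python
# Minimal sketch of concept occurrence and co-occurrence counting over a
# set of concept-annotated documents. Illustrative counting scheme only.

from collections import Counter
from itertools import combinations

def concept_statistics(annotated_docs):
    """annotated_docs: list of sets of concept identifiers, one set per document."""
    occurrences, cooccurrences = Counter(), Counter()
    for concepts in annotated_docs:
        occurrences.update(concepts)
        cooccurrences.update(frozenset(pair)
                             for pair in combinations(sorted(concepts), 2))
    return occurrences, cooccurrences

docs = [{"E. coli", "ampicillin", "resistance"},
        {"E. coli", "resistance"},
        {"S. aureus", "resistance"}]
occ, cooc = concept_statistics(docs)
print(occ.most_common(2))   # concepts most important to the result set
print(cooc.most_common(1))  # strongest co-occurrence (candidate relation)
```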

  18. 15th IEEE International Conference on Intelligent Engineering Systems

    CERN Document Server

    Živčák, Jozef; Aspects of Computational Intelligence Theory and Applications

    2013-01-01

    This volume covers the state of the art of research and development in various aspects of computational intelligence and outlines some prospective directions of development. In addition to the traditional engineering areas that contain theoretical knowledge, applications, designs and projects, the book includes the use of computational intelligence in biomedical engineering. "Aspects of Computational Intelligence: Theory and Applications" is a compilation of carefully selected extended papers written on the basis of original contributions presented at the 15th IEEE International Conference on Intelligent Engineering Systems, INES 2011, held June 23-26, 2011 in AquaCity Poprad, Slovakia.

  19. Complex dynamics of our economic life on different scales: insights from search engine query data.

    Science.gov (United States)

    Preis, Tobias; Reith, Daniel; Stanley, H Eugene

    2010-12-28

    Search engine query data deliver insight into the behaviour of individuals who are the smallest possible scale of our economic life. Individuals are submitting several hundred million search engine queries around the world each day. We study weekly search volume data for various search terms from 2004 to 2010 that are offered by the search engine Google for scientific use, providing information about our economic life on an aggregated collective level. We ask the question whether there is a link between search volume data and financial market fluctuations on a weekly time scale. Both collective 'swarm intelligence' of Internet users and the group of financial market participants can be regarded as a complex system of many interacting subunits that react quickly to external changes. We find clear evidence that weekly transaction volumes of S&P 500 companies are correlated with weekly search volume of corresponding company names. Furthermore, we apply a recently introduced method for quantifying complex correlations in time series with which we find a clear tendency that search volume time series and transaction volume time series show recurring patterns.
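
    The basic link tested in the paper can be illustrated by computing the Pearson correlation between a weekly search-volume series and a weekly transaction-volume series for the same company name. The numbers below are made up; the paper additionally applies a pattern-based method for complex correlations that is not shown here.

```python
# Minimal sketch: Pearson correlation between weekly search volume and
# weekly transaction volume. The series below are invented for illustration.

import math

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

weekly_search_volume = [120, 135, 150, 160, 140, 155, 170]
weekly_transaction_volume = [1.1e6, 1.2e6, 1.4e6, 1.5e6, 1.3e6, 1.4e6, 1.6e6]
print(round(pearson(weekly_search_volume, weekly_transaction_volume), 3))
```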

  20. International conference in electrical engineering and intelligent systems

    CERN Document Server

    Gelman, Len; Electrical Engineering and Intelligent Systems

    2013-01-01

    The revised and extended papers collected in this volume represent the cutting-edge of research at the nexus of electrical engineering and intelligent systems. They were selected from well over 1000 papers submitted to the high-profile international World Congress on Engineering held in London in July 2011. The chapters cover material across the full spectrum of work in the field, including computational intelligence, control engineering, network management, and wireless networks. Readers will also find substantive papers on signal processing, Internet computing, high performance computing, and industrial applications.   The Electrical Engineering and Intelligent Systems conference, as part of the 2011 World Congress on Engineering was organized under the auspices of the non-profit International Association of Engineers (IAENG). With more than 30 nations represented on the conference committees alone, the Congress features the best and brightest scientific minds from a multitude of disciplines related to eng...

  1. Start Your Engines: Surfing with Search Engines for Kids.

    Science.gov (United States)

    Byerly, Greg; Brodie, Carolyn S.

    1999-01-01

    Suggests that to be an effective educator and user of the Web it is essential to know the basics about search engines. Presents tips for using search engines. Describes several search engines for children and young adults, as well as some general filtered search engines for children. (AEF)

  2. Intelligent Systems For Aerospace Engineering: An Overview

    Science.gov (United States)

    KrishnaKumar, K.

    2003-01-01

    Intelligent systems are nature-inspired, mathematically sound, computationally intensive problem solving tools and methodologies that have become extremely important for advancing the current trends in information technology. Artificially intelligent systems currently utilize computers to emulate various faculties of human intelligence and biological metaphors. They use a combination of symbolic and sub-symbolic systems capable of evolving human cognitive skills and intelligence, not just systems capable of doing things humans do not do well. Intelligent systems are ideally suited for tasks such as search and optimization, pattern recognition and matching, planning, uncertainty management, control, and adaptation. In this paper, the intelligent system technologies and their application potential are highlighted via several examples.

  3. Multimedia Search Engines : Concept, Performance, and Types

    OpenAIRE

    Sayed Rabeh Sayed

    2005-01-01

    A study of multimedia search engines: it starts with a definition of search engines in general and of multimedia search engines in particular, then explains how they work, and divides them into video search engines, image search engines, and audio search engines. Finally, it reviews a sample of multimedia search engines.

  4. Intelligent Search Optimization using Artificial Fuzzy Logics

    OpenAIRE

    Manral, Jai

    2015-01-01

    Information on the web is prodigious; finding relevant information is difficult, which makes web users rely on search engines to locate relevant information on the web. Search engines index and categorize web pages according to their contents using crawlers, and rank them accordingly. For a given user query they retrieve millions of web pages and display them to users according to page rank. Every search engine has its own algorithms, based on certain parameters, for ranking web pages. Searc...

  5. Seventh International Conference on Intelligent Systems and Knowledge Engineering - Foundations and Applications of Intelligent Systems

    CERN Document Server

    Li, Tianrui; Li, Hongbo

    2014-01-01

    These proceedings present technical papers selected from the 2012 International Conference on Intelligent Systems and Knowledge Engineering (ISKE 2012), held on December 15-17 in Beijing. The aim of this conference is to bring together experts from different fields of expertise to discuss the state-of-the-art in Intelligent Systems and Knowledge Engineering, and to present new findings and perspectives on future developments. The proceedings introduce current scientific and technical advances in the fields of artificial intelligence, machine learning, pattern recognition, data mining, knowledge engineering, information retrieval, information theory, knowledge-based systems, knowledge representation and reasoning, multi-agent systems, and natural-language processing, etc. Furthermore they include papers on new intelligent computing paradigms, which combine new computing methodologies, e.g., cloud computing, service computing and pervasive computing with traditional intelligent methods. By presenting new method...

  6. Meta-Search Utilizing Evolutionary Recommendation: A Web Search Architecture Proposal

    Czech Academy of Sciences Publication Activity Database

    Húsek, Dušan; Keyhanipour, A.; Krömer, P.; Moshiri, B.; Owais, S.; Snášel, V.

    2008-01-01

    Vol. 33 (2008), pp. 189-200. ISSN 1870-4069. Institutional research plan: CEZ:AV0Z10300504. Keywords: web search * meta-search engine * intelligent re-ranking * ordered weighted averaging * Boolean search queries optimizing. Subject RIV: IN - Informatics, Computer Science

  7. Market Dominance and Search Quality in the Search Engine Market

    NARCIS (Netherlands)

    Lianos, I.; Motchenkova, E.I.

    2013-01-01

    We analyze a search engine market from a law and economics perspective and incorporate the choice of quality-improving innovations by a search engine platform into a two-sided model of an Internet search engine. In the proposed framework, we first discuss the legal issues the search engine market raises

  8. INTERFACING GOOGLE SEARCH ENGINE TO CAPTURE USER WEB SEARCH BEHAVIOR

    OpenAIRE

    Fadhilah Mat Yamin; T. Ramayah

    2013-01-01

    The behaviour of the searcher when using a search engine, especially during query formulation, is crucial. Search engines capture users’ activities in the search log, which is stored at the search engine server. Due to the difficulty of obtaining this search log, this paper proposes and develops an interface framework for the Google search engine. This interface captures users’ queries before redirecting them to Google. The analysis of the search log will show that users are utili...

  9. Query Log Analysis of an Electronic Health Record Search Engine

    Science.gov (United States)

    Yang, Lei; Mei, Qiaozhu; Zheng, Kai; Hanauer, David A.

    2011-01-01

    We analyzed a longitudinal collection of query logs of a full-text search engine designed to facilitate information retrieval in electronic health records (EHR). The collection, 202,905 queries and 35,928 user sessions recorded over the course of 4 years, represents the information-seeking behavior of 533 medical professionals, including frontline practitioners, coding personnel, patient safety officers, and biomedical researchers for patient data stored in EHR systems. In this paper, we present descriptive statistics of the queries, a categorization of information needs manifested through the queries, as well as temporal patterns of the users’ information-seeking behavior. The results suggest that information needs in the medical domain are substantially more sophisticated than those that general-purpose web search engines need to accommodate. Therefore, we envision there exists a significant challenge, along with significant opportunities, to provide intelligent query recommendations to facilitate information retrieval in EHR. PMID:22195150
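
    The kind of descriptive statistics such a log analysis starts from (queries per session, most frequent query terms) can be sketched as below; the log record fields are assumptions, not the EHR system's actual schema.

```python
# Minimal sketch of descriptive statistics over a query log: queries per
# session and top query terms. The log fields and data are invented.

from collections import Counter
from statistics import mean

log = [  # (session_id, query string) -- toy data
    ("s1", "chest pain"), ("s1", "troponin"),
    ("s2", "diabetes mellitus"), ("s2", "hba1c"), ("s2", "metformin"),
    ("s3", "chest pain"),
]

queries_per_session = Counter(session for session, _ in log)
term_counts = Counter(term for _, q in log for term in q.split())

print("mean queries per session:", mean(queries_per_session.values()))
print("top terms:", term_counts.most_common(3))
```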

  10. Vertical Search Engines

    OpenAIRE

    Curran, Kevin; Mc Glinchey, Jude

    2017-01-01

    This paper outlines the growth in popularity of vertical search engines, their origins, the differences between them and well-known broad based search engines such as Google and Yahoo. We also discuss their use in business-to-business, their marketing and advertising costs, what the revenue streams are and who uses them.

  11. [Advanced online search techniques and dedicated search engines for physicians].

    Science.gov (United States)

    Nahum, Yoav

    2008-02-01

    In recent years search engines have become an essential tool in the work of physicians. This article will review advanced search techniques from the world of information specialists, as well as some advanced search engine operators that may help physicians improve their online search capabilities, and maximize the yield of their searches. This article also reviews popular dedicated scientific and biomedical literature search engines.

  12. Web Search Engines

    OpenAIRE

    Rajashekar, TB

    1998-01-01

    The World Wide Web is emerging as an all-in-one information source. Tools for searching Web-based information include search engines, subject directories and meta search tools. We take a look at key features of these tools and suggest practical hints for effective Web searching.

  13. Mathematical modeling and computational intelligence in engineering applications

    CERN Document Server

    Silva Neto, Antônio José da; Silva, Geraldo Nunes

    2016-01-01

    This book brings together a rich selection of studies in mathematical modeling and computational intelligence, with applications in several fields of engineering, such as automation, biomedical, chemical, civil, electrical, electronic, geophysical and mechanical engineering, in a multidisciplinary approach. Authors from five countries and 16 different research centers contribute with their expertise in both the fundamentals and real-problem applications, based upon their strong background in modeling and computational intelligence. The reader will find a wide variety of applications, mathematical and computational tools and original results, all presented with rigorous mathematical procedures. This work is intended for use in graduate courses of engineering, applied mathematics and applied computation where tools such as mathematical and computational modeling, numerical methods and computational intelligence are applied to the solution of real problems.

  14. Da "Search engines" a "Shop engines"

    OpenAIRE

    Lupi, Mauro

    2001-01-01

    The change occurring around “search engines” is a move towards e-commerce, transforming all the main search engines into channels for conveying information and commercial suggestions, and basing their business on this activity. In the near future we will find two main kinds of search engines: on one side, portals that will offer a general orientation guide and act as channels for services and products to buy; on the other side, vertical portals able to offer information and products on specific...

  15. Development and tuning of an original search engine for patent libraries in medicinal chemistry.

    Science.gov (United States)

    Pasche, Emilie; Gobeill, Julien; Kreim, Olivier; Oezdemir-Zaech, Fatma; Vachon, Therese; Lovis, Christian; Ruch, Patrick

    2014-01-01

    The large increase in the size of patent collections has led to the need for efficient search strategies. But the development of advanced text-mining applications dedicated to patents in the biomedical field remains rare, in particular applications that address the needs of the pharmaceutical and biotech industry, which intensively uses patent libraries for competitive intelligence and drug development. We describe here the development of an advanced retrieval engine to search information in patent collections in the field of medicinal chemistry. We investigate and combine different strategies and evaluate their respective impact on the performance of the search engine applied to various search tasks, which cover the putatively most frequent search behaviours of intellectual property officers in medicinal chemistry: 1) a prior art search task; 2) a technical survey task; and 3) a variant of the technical survey task, sometimes called a known-item search task, where a single patent is targeted. The optimal tuning of our engine resulted in a top-precision of 6.76% for the prior art search task, 23.28% for the technical survey task and 46.02% for the variant of the technical survey task. We observed that co-citation boosting was an appropriate strategy to improve prior art search tasks, while IPC classification of queries improved retrieval effectiveness for technical survey tasks. Surprisingly, the use of the full body of the patent was always detrimental to search effectiveness. It was also observed that normalizing biomedical entities using curated dictionaries had simply no impact on the search tasks we evaluated. The search engine was finally implemented as a web application within Novartis Pharma. The application is briefly described in the report. We have presented the development of a search engine dedicated to patent search, based on state-of-the-art methods applied to patent corpora. We have shown that a proper tuning of the system to adapt to the various search tasks
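
    Co-citation boosting, reported above as helpful for prior art search, can be read as increasing a candidate patent's retrieval score according to how often it is cited together with the top-ranked candidates. The sketch below follows that reading; the boost formula and the citation data structure are assumptions, since the abstract does not give the exact scheme.

```python
# Minimal sketch of a co-citation boost applied after text retrieval.
# The boost formula and data structures are illustrative assumptions.

def cocitation_boost(ranked, cocitation_counts, top_k=5, alpha=0.1):
    """ranked: list of (patent_id, score) from the text retrieval step.
    cocitation_counts: dict mapping frozenset({p1, p2}) -> co-citation count."""
    top_ids = [pid for pid, _ in ranked[:top_k]]
    boosted = []
    for pid, score in ranked:
        boost = sum(cocitation_counts.get(frozenset({pid, t}), 0)
                    for t in top_ids if t != pid)
        boosted.append((pid, score + alpha * boost))
    return sorted(boosted, key=lambda x: x[1], reverse=True)

ranked = [("EP100", 2.1), ("US200", 1.9), ("WO300", 1.5)]
cocites = {frozenset({"WO300", "EP100"}): 4}
print(cocitation_boost(ranked, cocites, top_k=2))
```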

  16. Applications of artificial intelligence in engineering problems

    Energy Technology Data Exchange (ETDEWEB)

    Sriram, D; Adey, R

    1986-01-01

    This book presents the papers given at a conference on the use of artificial intelligence in engineering. Topics considered at the conference included Prolog logic, expert systems, knowledge representation and acquisition, knowledge bases, machine learning, robotics, least-square algorithms, vision systems for robots, natural language, probability, mechanical engineering, civil engineering, and electrical engineering.

  17. [Development of domain specific search engines].

    Science.gov (United States)

    Takai, T; Tokunaga, M; Maeda, K; Kaminuma, T

    2000-01-01

    As cyberspace explodes at a pace that nobody ever imagined, it becomes very important to search it efficiently and effectively. One solution to this problem is search engines. A lot of commercial search engines have already been put on the market. However, these search engines return results so cumbersome that domain-specific experts cannot tolerate them. Using dedicated hardware and a commercial software package called OpenText, we have tried to develop several domain-specific search engines. These engines cover our institute's Web contents, drugs, chemical safety, endocrine disruptors, and emergency response for chemical hazards. These engines have been on our Web site for testing.

  18. 7th International Conference on Intelligent Systems and Knowledge Engineering

    CERN Document Server

    Li, Tianrui; Li, Hongbo

    2014-01-01

    These proceedings present technical papers selected from the 2012 International Conference on Intelligent Systems and Knowledge Engineering (ISKE 2012), held on December 15-17 in Beijing. The aim of this conference is to bring together experts from different fields of expertise to discuss the state-of-the-art in Intelligent Systems and Knowledge Engineering, and to present new findings and perspectives on future developments. The proceedings introduce current scientific and technical advances in the fields of artificial intelligence, machine learning, pattern recognition, data mining, knowledge engineering, information retrieval, information theory, knowledge-based systems, knowledge representation and reasoning, multi-agent systems, and natural-language processing, etc. Furthermore they include papers on new intelligent computing paradigms, which combine new computing methodologies, e.g., cloud computing, service computing and pervasive computing with traditional intelligent methods. By presenting new method...

  19. Foundations of Intelligent Systems : Proceedings of the Sixth International Conference on Intelligent Systems and Knowledge Engineering

    CERN Document Server

    Li, Tianrui

    2012-01-01

    Proceedings of the Sixth International Conference on Intelligent Systems and Knowledge Engineering presents selected papers from the conference ISKE 2011, held December 15-17 in Shanghai, China. These proceedings not only examine original research and approaches in the broad areas of intelligent systems and knowledge engineering, but also present new methodologies and practices in intelligent computing paradigms. The book introduces the current scientific and technical advances in the fields of artificial intelligence, machine learning, pattern recognition, data mining, information retrieval, knowledge-based systems, knowledge representation and reasoning, multi-agent systems, natural-language processing, etc. Furthermore, new computing methodologies are presented, including cloud computing, service computing and pervasive computing combined with traditional intelligent methods. The proceedings will be beneficial for both researchers and practitioners who want to utilize intelligent methods in their specific resea...

  20. A Python Engine for Teaching Artificial Intelligence in Games

    OpenAIRE

    Riedl, Mark O.

    2015-01-01

    Computer games play an important role in our society and motivate people to learn computer science. Since artificial intelligence is integral to most games, they can also be used to teach artificial intelligence. We introduce the Game AI Game Engine (GAIGE), a Python game engine specifically designed to teach about how AI is used in computer games. A progression of seven assignments builds toward a complete, working Multi-User Battle Arena (MOBA) game. We describe the engine, the assignments,...

  1. TECHNIQUES USED IN SEARCH ENGINE MARKETING

    OpenAIRE

    Assoc. Prof. Liviu Ion Ciora Ph. D; Lect. Ion Buligiu Ph. D

    2010-01-01

    Search engine marketing (SEM) is a generic term covering a variety of marketing techniques intended to attract web traffic from search engines and directories. SEM is a popular tool since it has the potential for substantial gains with minimal investment. On the one hand, most search engines and directories offer free or extremely cheap listings. On the other hand, the traffic coming from search engines and directories tends to be acquisition-motivated, making these visitors some of the ...

  2. A framework for intelligent data acquisition and real-time database searching for shotgun proteomics.

    Science.gov (United States)

    Graumann, Johannes; Scheltema, Richard A; Zhang, Yong; Cox, Jürgen; Mann, Matthias

    2012-03-01

    In the analysis of complex peptide mixtures by MS-based proteomics, many more peptides elute at any given time than can be identified and quantified by the mass spectrometer. This makes it desirable to optimally allocate peptide sequencing and narrow mass range quantification events. In computer science, intelligent agents are frequently used to make autonomous decisions in complex environments. Here we develop and describe a framework for intelligent data acquisition and real-time database searching, and showcase selected examples. The intelligent agent is implemented in the MaxQuant computational proteomics environment and termed MaxQuant Real-Time. It analyzes data as it is acquired on the mass spectrometer, constructs isotope patterns and SILAC pair information, and controls MS and tandem MS events based on real-time and prior MS data or external knowledge. Re-implementing a top10 method in the intelligent agent yields performance similar to the data-dependent methods running on the mass spectrometer itself. We demonstrate the capabilities of MaxQuant Real-Time by creating a real-time search engine capable of identifying peptides "on-the-fly" within 30 ms, well within the time constraints of a shotgun fragmentation "topN" method. The agent can focus sequencing events onto peptides of specific interest, such as those originating from a specific gene ontology (GO) term, or peptides that are likely modified versions of already identified peptides. Finally, we demonstrate enhanced quantification of SILAC pairs whose ratios were poorly defined in survey spectra. MaxQuant Real-Time is flexible and can be applied to a large number of scenarios that would benefit from intelligent, directed data acquisition. Our framework should be especially useful for new instrument types, such as the quadrupole-Orbitrap, that are currently becoming available.
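
    The acquisition decision described above can be sketched as a top-N selection over the precursors of a survey scan that skips masses already identified and prefers precursors whose real-time identification matches a gene-ontology term of interest. The field names, scoring rule and tolerance are assumptions, not the MaxQuant Real-Time implementation.

```python
# Minimal sketch of a "focused topN" precursor selection. Field names,
# the preference rule and the mass tolerance are illustrative assumptions.

def select_for_ms2(precursors, identified_masses, target_go_ids, n=10, tol=0.01):
    """precursors: list of dicts with 'mz', 'intensity', 'go_ids' (possibly empty)."""
    def already_identified(mz):
        return any(abs(mz - m) <= tol for m in identified_masses)

    candidates = [p for p in precursors if not already_identified(p["mz"])]
    # Prefer precursors linked to the GO terms of interest, then by intensity.
    candidates.sort(key=lambda p: (bool(set(p["go_ids"]) & target_go_ids),
                                   p["intensity"]),
                    reverse=True)
    return candidates[:n]

survey = [{"mz": 501.27, "intensity": 9e5, "go_ids": set()},
          {"mz": 623.81, "intensity": 4e5, "go_ids": {"GO:0006281"}},
          {"mz": 745.40, "intensity": 8e5, "go_ids": set()}]
picked = select_for_ms2(survey, identified_masses=[501.27],
                        target_go_ids={"GO:0006281"}, n=2)
print([p["mz"] for p in picked])  # GO-matched precursor first, known mass skipped
```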

  3. Dyniqx: a novel meta-search engine for metadata based cross search

    OpenAIRE

    Zhu, Jianhan; Song, Dawei; Eisenstadt, Marc; Barladeanu, Cristi; Rüger, Stefan

    2008-01-01

    The effect of metadata in collection fusion has not been sufficiently studied. In response to this, we present a novel meta-search engine called Dyniqx for metadata-based cross search. Dyniqx exploits the availability of metadata in academic search services such as PubMed and Google Scholar for fusing search results from heterogeneous search engines. In addition, metadata from these search engines are used for generating dynamic query controls such as sliders and tick boxes, which are ...

  4. Search Engines: Gateway to a New ``Panopticon''?

    Science.gov (United States)

    Kosta, Eleni; Kalloniatis, Christos; Mitrou, Lilian; Kavakli, Evangelia

    Nowadays, Internet users depend on various search engines in order to find requested information on the Web. Although most users feel that they are and remain anonymous when they place their search queries, reality proves otherwise. The increasing importance of search engines for locating desired information on the Internet usually leads to considerable inroads into the privacy of users. The scope of this paper is to study the main privacy issues with regard to search engines, such as the anonymisation of search logs and their retention period, and to examine the applicability of the European data protection legislation to non-EU search engine providers. Ixquick, a privacy-friendly meta search engine, will be presented as an alternative to the privacy-intrusive existing practices of search engines.

  5. Practical Applications of Intelligent Systems : Proceedings of the Sixth International Conference on Intelligent Systems and Knowledge Engineering

    CERN Document Server

    Li, Tianrui

    2012-01-01

    Proceedings of The Sixth International Conference on Intelligent System and Knowledge Engineering presents selected papers from the conference ISKE 2011, held December 15-17 in Shanghai, China. These proceedings not only examine original research and approaches in the broad areas of intelligent systems and knowledge engineering, but also present new methodologies and practices in intelligent computing paradigms. The book introduces the current scientific and technical advances in the fields of artificial intelligence, machine learning, pattern recognition, data mining, information retrieval, knowledge-based systems, knowledge representation and reasoning, multi-agent systems, natural-language processing, etc. Furthermore, new computing methodologies are presented, including cloud computing, service computing and pervasive computing with traditional intelligent methods. The proceedings will be beneficial for both researchers and practitioners who want to utilize intelligent methods in their specific res...

  6. Self-learning search engines

    NARCIS (Netherlands)

    Schuth, A.

    2015-01-01

    How does a search engine such as Google know which search results to display? There are many competing algorithms that generate search results, but which one works best? We developed a new probabilistic method for quickly comparing large numbers of search algorithms by examining the results users

  7. Predictors of effective leadership in industry - should engineering education focus on traditional intelligence, personality, or emotional intelligence?

    Science.gov (United States)

    Lappalainen, Pia

    2015-03-01

    Despite the changing global and industrial conditions requiring new approaches to leadership, management training as part of higher engineering education remains understudied. The subsequent gap in engineering education calls for research on today's leader requirements and pedagogy supporting the inclusion of management competence in higher engineering education. Previous organisation and management studies have, on a general level, established the importance of managerial qualities for industrial performance, but the nature and make-up of these qualifications have not been adequately analysed. To fill the related research gap, the present work embarked on a quantitative empirical effort to identify predictors of successful leadership in engineering. In particular, this study investigated relationships between perceived leader performance and three dimensions of managerial capability: (1) mathematical-logical intelligence, (2) personality, and (3) socio-emotional intelligence. This work complemented previous research by resorting to both self-reports and other-reports: the results acquired from the managerial sample were compared to subordinate perceptions as measured through an emotive intelligence other-report and a general managerial competence multi-source appraisal. The sample comprised 80 superiors and 354 subordinates operating in seven organisations in engineering industries. The results from the quantitative measurements signalled the strongest correlation for socio-emotional intelligence and certain personality dimensions with successful leadership. Mathematical-logical intelligence demonstrated no correlation with subordinate perceptions of good leadership. These findings lay the foundation for the incorporation of socio-emotive skills into higher engineering education.

  8. NASA Indexing Benchmarks: Evaluating Text Search Engines

    Science.gov (United States)

    Esler, Sandra L.; Nelson, Michael L.

    1997-01-01

    The current proliferation of on-line information resources underscores the requirement for the ability to index collections of information and search and retrieve them in a convenient manner. This study develops criteria for analytically comparing the index and search engines and presents results for a number of freely available search engines. A product of this research is a toolkit capable of automatically indexing, searching, and extracting performance statistics from each of the focused search engines. This toolkit is highly configurable and has the ability to run these benchmark tests against other engines as well. Results demonstrate that the tested search engines can be grouped into two levels. Level one engines are efficient on small to medium sized data collections, but show weaknesses when used for collections 100MB or larger. Level two search engines are recommended for data collections up to and beyond 100MB.

  9. Utilization of a radiology-centric search engine.

    Science.gov (United States)

    Sharpe, Richard E; Sharpe, Megan; Siegel, Eliot; Siddiqui, Khan

    2010-04-01

    Internet-based search engines have become a significant component of medical practice. Physicians increasingly rely on information available from search engines as a means to improve patient care, provide better education, and enhance research. Specialized search engines have emerged to more efficiently meet the needs of physicians. Details about the ways in which radiologists utilize search engines have not been documented. The authors categorized every 25th search query in a radiology-centric vertical search engine by radiologic subspecialty, imaging modality, geographic location of access, time of day, use of abbreviations, misspellings, and search language. Musculoskeletal and neurologic imaging were the most frequently searched subspecialties. The least frequently searched were breast imaging, pediatric imaging, and nuclear medicine. Magnetic resonance imaging and computed tomography were the most frequently searched modalities. A majority of searches were initiated in North America, but all continents were represented. Searches occurred 24 h/day in converted local times, with a majority occurring during the normal business day. Misspellings and abbreviations were common. Almost all searches were performed in English. Search engine utilization trends are likely to mirror trends in diagnostic imaging in the region from which searches originate. Internet searching appears to function as a real-time clinical decision-making tool, a research tool, and an educational resource. A more thorough understanding of search utilization patterns can be obtained by analyzing phrases as actually entered as well as the geographic location and time of origination. This knowledge may contribute to the development of more efficient and personalized search engines.

  10. ROLE AND IMPORTANCE OF SEARCH ENGINE OPTIMIZATION

    OpenAIRE

    Gurneet Kaur

    2017-01-01

    Search Engines are an indispensable platform for users all over the globe to search for relevant information online. Search Engine Optimization (SEO) is the exercise of improving the position of a website in search engine rankings, for a chosen set of keywords. SEO is divided into two parts: On-Page and Off-Page SEO. In order to be successful, both areas require equal attention. This paper aims to explain the functioning of the search engines along with the role and importance of search e...

  11. Recent Advances in Intelligent Engineering Systems

    CERN Document Server

    Klempous, Ryszard; Araujo, Carmen

    2012-01-01

    This volume is a collection of 19 chapters on intelligent engineering systems written by respectable experts of the fields. The book consists of three parts. The first part is devoted to the foundational aspects of computational intelligence. It consists of 8 chapters that include studies in genetic algorithms, fuzzy logic connectives, enhanced intelligence in product models, nature-inspired optimization technologies, particle swarm optimization, evolution algorithms, model complexity of neural networks, and fitness landscape analysis. The second part contains contributions to intelligent computation in networks, presented in 5 chapters. The covered subjects include the application of self-organizing maps for early detection of denial of service attacks, combating security threats via immunity and adaptability in cognitive radio networks, novel modifications in WSN network design for improved SNR and reliability, a conceptual framework for the design of audio based cognitive infocommunication channels, and a ...

  12. Budget constraints and optimization in sponsored search auctions

    CERN Document Server

    Yang, Yanwu

    2013-01-01

    The Intelligent Systems Series publishes reference works and handbooks in three core sub-topic areas: Intelligent Automation, Intelligent Transportation Systems, and Intelligent Computing. They include theoretical studies, design methods, and real-world implementations and applications. The series' readership is broad, but focuses on engineering, electronics, and computer science. Budget constraints and optimization in sponsored search auctions considers the entire life cycle of campaigns for researchers and developers working on search systems and ROI maximization

  13. Search Engine Liability for Copyright Infringement

    Science.gov (United States)

    Fitzgerald, B.; O'Brien, D.; Fitzgerald, A.

    The chapter provides a broad overview to the topic of search engine liability for copyright infringement. In doing so, the chapter examines some of the key copyright law principles and their application to search engines. The chapter also provides a discussion of some of the most important cases to be decided within the courts of the United States, Australia, China and Europe regarding the liability of search engines for copyright infringement. Finally, the chapter will conclude with some thoughts for reform, including how copyright law can be amended in order to accommodate and realise the great informative power which search engines have to offer society.

  14. Search engines that learn from their users

    NARCIS (Netherlands)

    Schuth, A.G.

    2016-01-01

    More than half the world’s population uses web search engines, resulting in over half a billion search queries every single day. For many people web search engines are among the first resources they go to when a question arises. Moreover, search engines have for many become the most trusted route to

  15. New generation of the multimedia search engines

    Science.gov (United States)

    Mijes Cruz, Mario Humberto; Soto Aldaco, Andrea; Maldonado Cano, Luis Alejandro; López Rodríguez, Mario; Rodríguez Vázqueza, Manuel Antonio; Amaya Reyes, Laura Mariel; Cano Martínez, Elizabeth; Pérez Rosas, Osvaldo Gerardo; Rodríguez Espejo, Luis; Flores Secundino, Jesús Abimelek; Rivera Martínez, José Luis; García Vázquez, Mireya Saraí; Zamudio Fuentes, Luis Miguel; Sánchez Valenzuela, Juan Carlos; Montoya Obeso, Abraham; Ramírez Acosta, Alejandro Álvaro

    2016-09-01

    Current search engines are based upon search methods that involve the combination of words (text-based search), which has been efficient until now. However, the Internet's continuing growth brings more diversity of content with each passing day. Text-based searches are becoming limited, as much of the information on the Internet is found in other types of content, denominated multimedia content (images, audio files, video files). Indeed, what needs to be improved in current search engines is the searchable content and the precision, as well as an accurate display of the search results expected by the user. Any search can be made more precise by using more text parameters, but this does not improve the content or speed of the search itself. One solution is to improve search engines through the characterization of the content of multimedia files. In this article, an analysis of new generation multimedia search engines is presented, focusing on the needs arising from new technologies. Multimedia content has become a central part of the flow of information in our daily life. This reflects the necessity of having multimedia search engines, as well as of knowing the real tasks they must fulfill. Through this analysis, it is shown that there are not many search engines that can perform content-based searches. Research on new generation multimedia search engines is a multidisciplinary area in constant growth, generating tools that satisfy the different needs of new generation systems.

  16. Development of intelligent semantic search system for rubber research data in Thailand

    Science.gov (United States)

    Kaewboonma, Nattapong; Panawong, Jirapong; Pianhanuruk, Ekkawit; Buranarach, Marut

    2017-10-01

    Rubber production in Thailand increased not only because of strong demand from the world market, but was also strongly stimulated by the replanting program of the Thai Government from 1961 onwards. With the continuous growth of the volume of rubber research data on the Web, the search for information has become a challenging task. Ontologies are used to improve the accuracy of information retrieval from the web by incorporating a degree of semantic analysis during the search. In this context, we propose an intelligent semantic search system for rubber research data in Thailand. The research methods included 1) analyzing domain knowledge, 2) developing ontologies, and 3) developing an intelligent semantic search system to curate research data in trusted digital repositories so that they may be shared among the wider Thailand rubber research community.

  17. Intelligent search in Big Data

    Science.gov (United States)

    Birialtsev, E.; Bukharaev, N.; Gusenkov, A.

    2017-10-01

    An approach to data integration, aimed at ontology-based intelligent search in Big Data, is considered for the case in which information objects are represented in the form of relational databases (RDBs), structurally marked by their schemas. The source of information for constructing an ontology and, later, for organizing the search is natural-language text treated as semi-structured data; for the RDBs, these are the comments on the names of tables and their attributes. A formal definition of the RDB integration model in terms of ontologies is given. Within the framework of the model, a universal RDB representation ontology, an oil-production subject domain ontology and a linguistic thesaurus of the subject domain language are built. A technique for automatically generating SQL queries for subject domain specialists is proposed. On this basis, an information system for the RDBs of the TATNEFT oil-producing company was implemented. Exploitation of the system showed good relevance for the majority of queries.

  18. Combining Search Engines for Comparative Proteomics

    Science.gov (United States)

    Tabb, David

    2012-01-01

    Many proteomics laboratories have found spectral counting to be an ideal way to recognize biomarkers that differentiate cohorts of samples. This approach assumes that proteins that differ in quantity between samples will generate different numbers of identifiable tandem mass spectra. Increasingly, researchers are employing multiple search engines to maximize the identifications generated from data collections. This talk evaluates four strategies to combine information from multiple search engines in comparative proteomics. The “Count Sum” model pools the spectra across search engines. The “Vote Counting” model combines the judgments from each search engine by protein. Two other models employ parametric and non-parametric analyses of protein-specific p-values from different search engines. We evaluated the four strategies in two different data sets. The ABRF iPRG 2009 study generated five LC-MS/MS analyses of “red” E. coli and five analyses of “yellow” E. coli. NCI CPTAC Study 6 generated five concentrations of Sigma UPS1 spiked into a yeast background. All data were identified with X!Tandem, Sequest, MyriMatch, and TagRecon. For both sample types, “Vote Counting” appeared to manage the diverse identification sets most effectively, yielding heightened discrimination as more search engines were added.
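
    The record describes the "Vote Counting" model only at a high level; the sketch below is an illustrative Python rendering, assuming each engine reports per-protein spectral counts for two cohorts and casts one vote per protein when its own counts exceed a fold-change threshold. The threshold and data are assumptions, not taken from the study.

        # Minimal sketch of the "Vote Counting" idea: each search engine casts one
        # vote per protein if its own spectral counts suggest a difference between
        # cohorts. The fold-change threshold is an illustrative assumption.

        def vote_counting(engine_counts, fold_change=2.0):
            """engine_counts: {engine: {protein: (count_cohort_a, count_cohort_b)}}.
            Returns {protein: votes}, one vote per engine that saw a difference."""
            votes = {}
            for counts in engine_counts.values():
                for protein, (a, b) in counts.items():
                    lo, hi = sorted((a + 1, b + 1))   # +1 avoids division by zero
                    if hi / lo >= fold_change:
                        votes[protein] = votes.get(protein, 0) + 1
            return votes

        if __name__ == "__main__":
            results = {
                "MyriMatch": {"P1": (40, 5), "P2": (10, 11)},
                "X!Tandem":  {"P1": (35, 8), "P2": (12, 10)},
                "Sequest":   {"P1": (50, 4), "P2": (9, 12)},
            }
            print(vote_counting(results))   # {'P1': 3}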

  19. Computational intelligence in nuclear engineering

    International Nuclear Information System (INIS)

    Uhrig, Robert E.; Hines, J. Wesley

    2005-01-01

    Approaches to several recent issues in the operation of nuclear power plants using computational intelligence are discussed. These issues include 1) noise analysis techniques, 2) on-line monitoring and sensor validation, 3) regularization of ill-posed surveillance and diagnostic measurements, 4) transient identification, 5) artificial intelligence-based core monitoring and diagnostic systems, 6) continuous efficiency improvement of nuclear power plants, and 7) autonomous anticipatory control and intelligent agents. Several changes to the focus of computational intelligence in nuclear engineering have occurred in the past few years. Whereas earlier activities focused on the development of condition monitoring and diagnostic techniques for current nuclear power plants, recent activities have focused on the implementation of those methods and the development of methods for next generation plants and space reactors. These advanced techniques are expected to become increasingly important as current generation nuclear power plants have their licenses extended to 60 years and next generation reactors are being designed to operate for extended fuel cycles (up to 25 years), with less operator oversight, and especially for nuclear plants operating in severe environments such as space or ice-bound locations

  20. Searching for Extraterrestrial Intelligence SETI Past, Present, and Future

    CERN Document Server

    Shuch, H Paul

    2011-01-01

    This book is a collection of essays written by the very scientists and engineers who have led, and continue to lead, the scientific quest known as SETI, the search for extraterrestrial intelligence. Divided into three parts, the first section, ‘The Spirit of SETI Past’, written by the surviving pioneers of this then emerging discipline, reviews the major projects undertaken during the first 50 years of SETI science and the results of that research. In the second section, ‘The Spirit of SETI Present’, the present-day science and technology is discussed in detail, providing the technical background to contemporary SETI instruments, experiments, and analytical techniques, including the processing of the received signals to extract potential alien communications. In the third and final section, ‘The Spirit of SETI Future’, the book looks ahead to the possible directions that SETI will take in the next 50 years, addressing such important topics as interstellar message construction, the risks and assump...

  1. Artificial Intelligence in Unity Game Engine

    OpenAIRE

    Yu, Li

    2017-01-01

    This thesis was conducted for Oulu Game Lab. The aim of this bachelor thesis was to develop, in Oulu Game Lab, a game called "feels good to be evil". The main purpose of the project was to develop a game and to learn game development with a focus on artificial intelligence. This thesis explains the theory behind artificial intelligence. The game was developed in the Unity Game Engine with the C# language, and the Panda Behavior Tree was used in this project as an asset. The result was the ...

  2. Analysis of Search Engines and Meta Search Engines' Position by University of Isfahan Users Based on Rogers' Diffusion of Innovation Theory

    Directory of Open Access Journals (Sweden)

    Maryam Akbari

    2012-10-01

    Full Text Available The present study analyzed the adoption process of search engines and meta search engines by University of Isfahan users during 2009-2010, based on Rogers' diffusion of innovation theory. The main aim of the research was to study the rate of adoption and to recognize the potentials and effective tools in search engine and meta search engine adoption among University of Isfahan users. The research method was a descriptive survey. The cases of the study were all postgraduate students of the University of Isfahan; 351 students were selected as the sample using a stratified random sampling method. A questionnaire was used for collecting data. The collected data were analyzed using SPSS 16 with both descriptive and analytic statistics. For descriptive statistics, frequency, percentage and mean were used, while for analytic statistics the t-test and the Kruskal-Wallis non-parametric test (H-test) were used. The findings of the t-test and the Kruskal-Wallis test indicated that mean adoption of search engines and meta search engines did not show statistical differences by gender, level of education or faculty. The adoption process of special search engines differed by gender but not by level of education or faculty. Other results indicated that among general search engines, Google had the highest adoption rate. In addition, among the special search engines Google Scholar, and among the meta search engines Mamma, had the highest adoption rate. Findings also showed that friends played an important role in how students adopted general search engines, while professors played an important role in how students adopted special search engines and meta search engines. Moreover, results showed that the place where students became most acquainted with search engines and meta search engines was the university. The findings showed that the curve of the adoption rate was not normal and was not S-shaped. Moreover

  3. Comparative analysis of some search engines

    Directory of Open Access Journals (Sweden)

    Taiwo O. Edosomwan

    2010-10-01

    Full Text Available We compared the information retrieval performance of some popular search engines (namely Google, Yahoo, AlltheWeb, Gigablast, Zworks, AltaVista and Bing/MSN) in response to a list of ten queries, varying in complexity. These queries were run on each search engine and the precision and response time of the retrieved results were recorded. The first ten documents of each retrieval output were evaluated as being ‘relevant’ or ‘non-relevant’ for evaluation of the search engine’s precision. To evaluate response time, normalised recall ratios were calculated at various cut-off points for each query and search engine. This study shows that Google appears to be the best search engine in terms of both average precision (70%) and average response time (2 s). Gigablast and AlltheWeb performed the worst overall in this study.
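
    The precision figure reported above is the fraction of relevant documents among the first ten results, averaged over queries; a minimal Python sketch of that computation, with invented relevance judgements, follows.

        # Precision-at-10 as used above: the share of the first ten retrieved
        # documents judged relevant, averaged over queries. Judgements invented.

        def precision_at_k(judgements, k=10):
            """judgements: booleans for the retrieved documents, in rank order."""
            top = judgements[:k]
            return sum(top) / k if top else 0.0

        if __name__ == "__main__":
            per_query = [
                [True, True, False, True, True, True, False, True, True, False],
                [True, False, True, True, False, True, True, True, False, True],
            ]
            scores = [precision_at_k(j) for j in per_query]
            print(sum(scores) / len(scores))   # 0.7 for these invented judgements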

  4. Search engine optimization

    OpenAIRE

    Marolt, Klemen

    2013-01-01

    Search engine optimization techniques, often shortened to “SEO,” should lead to first positions in organic search results. Some optimization techniques do not change over time, yet still form the basis for SEO. However, as the Internet and web design evolves dynamically, new optimization techniques flourish and flop. Thus, we looked at the most important factors that can help to improve positioning in search results. It is important to emphasize that none of the techniques can guarantee high ...

  5. Chemical-text hybrid search engines.

    Science.gov (United States)

    Zhou, Yingyao; Zhou, Bin; Jiang, Shumei; King, Frederick J

    2010-01-01

    As the amount of chemical literature increases, it is critical that researchers be enabled to accurately locate documents related to a particular aspect of a given compound. Existing solutions, based on text and chemical search engines alone, suffer from the inclusion of "false negative" and "false positive" results, and cannot accommodate the diverse repertoire of formats currently available for chemical documents. To address these concerns, we developed an approach called Entity-Canonical Keyword Indexing (ECKI), which converts a chemical entity embedded in a data source into its canonical keyword representation prior to being indexed by text search engines. We implemented ECKI using Microsoft Office SharePoint Server Search, and the resultant hybrid search engine not only supported complex mixed chemical and keyword queries but also was applied to both intranet and Internet environments. We envision that the adoption of ECKI will empower researchers to pose more complex search questions that were not readily attainable previously and to obtain answers at much improved speed and accuracy.
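
    ECKI is described here only at the level of "convert the entity to a canonical keyword before indexing"; the following Python sketch illustrates that general pattern with a made-up synonym table and toy documents, not the SharePoint-based implementation.

        # Illustrative sketch of canonicalising chemical names before indexing,
        # in the spirit of ECKI. The synonym table and documents are invented;
        # a real system would resolve entities with a chemistry toolkit.

        SYNONYMS = {
            "aspirin": "CHEM:50-78-2",
            "acetylsalicylic acid": "CHEM:50-78-2",
        }

        def canonical_tokens(text):
            """Return canonical keys for every known chemical name in the text."""
            lowered = text.lower()
            return [canon for name, canon in SYNONYMS.items() if name in lowered]

        def build_index(docs):
            """Map each canonical key to the set of document ids mentioning it."""
            index = {}
            for doc_id, text in docs.items():
                for token in canonical_tokens(text):
                    index.setdefault(token, set()).add(doc_id)
            return index

        if __name__ == "__main__":
            docs = {1: "Stability of acetylsalicylic acid tablets",
                    2: "Aspirin and cardiovascular outcomes"}
            index = build_index(docs)
            print(index.get("CHEM:50-78-2"))   # {1, 2}: both hits despite different names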

  6. Computational intelligence and quantitative software engineering

    CERN Document Server

    Succi, Giancarlo; Sillitti, Alberto

    2016-01-01

    In a down-to-earth manner, the volume lucidly presents how the fundamental concepts, methodology, and algorithms of Computational Intelligence are efficiently exploited in Software Engineering and opens up a novel and promising avenue of a comprehensive analysis and advanced design of software artifacts. It shows how the paradigm and the best practices of Computational Intelligence can be creatively explored to carry out comprehensive software requirement analysis, support design, testing, and maintenance. Software Engineering is an intensive knowledge-based endeavor of inherent human-centric nature, which profoundly relies on acquiring semiformal knowledge and then processing it to produce a running system. The knowledge spans a wide variety of artifacts, from requirements, captured in the interaction with customers, to design practices, testing, and code management strategies, which rely on the knowledge of the running system. This volume consists of contributions written by widely acknowledged experts ...

  7. Regulating Search Engines: Taking Stock And Looking Ahead

    OpenAIRE

    Gasser, Urs

    2006-01-01

    Since the creation of the first pre-Web Internet search engines in the early 1990s, search engines have become almost as important as email as a primary online activity. Arguably, search engines are among the most important gatekeepers in today's digitally networked environment. Thus, it does not come as a surprise that the evolution of search technology and the diffusion of search engines have been accompanied by a series of conflicts among stakeholders such as search operators, content crea...

  8. Database Search Engines: Paradigms, Challenges and Solutions.

    Science.gov (United States)

    Verheggen, Kenneth; Martens, Lennart; Berven, Frode S; Barsnes, Harald; Vaudel, Marc

    2016-01-01

    The first step in identifying proteins from mass spectrometry based shotgun proteomics data is to infer peptides from tandem mass spectra, a task generally achieved using database search engines. In this chapter, the basic principles of database search engines are introduced with a focus on open source software, and the use of database search engines is demonstrated using the freely available SearchGUI interface. This chapter also discusses how to tackle general issues related to sequence database searching and shows how to minimize their impact.

  9. Children's Search Engines from an Information Search Process Perspective.

    Science.gov (United States)

    Broch, Elana

    2000-01-01

    Describes cognitive and affective characteristics of children and teenagers that may affect their Web searching behavior. Reviews literature on children's searching in online public access catalogs (OPACs) and using digital libraries. Profiles two Web search engines. Discusses some of the difficulties children have searching the Web, in the…

  10. A fuzzy-match search engine for physician directories.

    Science.gov (United States)

    Rastegar-Mojarad, Majid; Kadolph, Christopher; Ye, Zhan; Wall, Daniel; Murali, Narayana; Lin, Simon

    2014-11-04

    A search engine to find physicians' information is a basic but crucial function of a health care provider's website. Inefficient search engines, which return no results or incorrect results, can lead to patient frustration and potential customer loss. A search engine that can handle misspellings and spelling variations of names is needed, as the United States (US) has culturally, racially, and ethnically diverse names. The Marshfield Clinic website provides a search engine for users to search for physicians' names. The current search engine provides an auto-completion function, but it requires an exact match. We observed that 26% of all searches yielded no results. The goal was to design a fuzzy-match algorithm to aid users in finding physicians easier and faster. Instead of an exact match search, we used a fuzzy algorithm to find similar matches for searched terms. In the algorithm, we solved three types of search engine failures: "Typographic", "Phonetic spelling variation", and "Nickname". To solve these mismatches, we used a customized Levenshtein distance calculation that incorporated Soundex coding and a lookup table of nicknames derived from US census data. Using the "Challenge Data Set of Marshfield Physician Names," we evaluated the accuracy of fuzzy-match engine-top ten (90%) and compared it with exact match (0%), Soundex (24%), Levenshtein distance (59%), and fuzzy-match engine-top one (71%). We designed, created a reference implementation, and evaluated a fuzzy-match search engine for physician directories. The open-source code is available at the codeplex website and a reference implementation is available for demonstration at the datamarsh website.
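
    The record names the ingredients of the fuzzy matcher (a customized Levenshtein distance, Soundex coding, and a nickname lookup table) without giving the algorithm; the Python sketch below combines simplified versions of those three ingredients, with an illustrative edit threshold and a tiny nickname table rather than the census-derived one used in the study.

        # Rough sketch of fuzzy name matching combining edit distance, a
        # simplified Soundex code, and a nickname table. Thresholds, the
        # nickname table and the Soundex simplification are assumptions.

        NICKNAMES = {"bill": "william", "bob": "robert", "peggy": "margaret"}

        def levenshtein(a, b):
            """Classic dynamic-programming edit distance."""
            prev = list(range(len(b) + 1))
            for i, ca in enumerate(a, 1):
                cur = [i]
                for j, cb in enumerate(b, 1):
                    cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (ca != cb)))
                prev = cur
            return prev[-1]

        def soundex(name):
            """Simplified Soundex: letter groups map to digits, vowels/H/W/Y drop out."""
            groups = {"BFPV": "1", "CGJKQSXZ": "2", "DT": "3", "L": "4", "MN": "5", "R": "6"}
            def code(ch):
                return next((d for letters, d in groups.items() if ch in letters), "")
            name = name.upper()
            out, prev = [], code(name[0])
            for ch in name[1:]:
                d = code(ch)
                if d and d != prev:
                    out.append(d)
                prev = d
            return (name[0] + "".join(out) + "000")[:4]

        def name_matches(query, candidate, max_edits=2):
            """Match if the names are close in edits or share a Soundex code."""
            q = NICKNAMES.get(query.lower(), query.lower())
            c = candidate.lower()
            return levenshtein(q, c) <= max_edits or soundex(q) == soundex(c)

        if __name__ == "__main__":
            print(name_matches("Bill", "William"))   # True (nickname lookup, then exact match)
            print(name_matches("Jon", "John"))       # True (within two edits)
            print(name_matches("Xerxes", "Smith"))   # False (no rule matches)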

  11. Clinician search behaviors may be influenced by search engine design.

    Science.gov (United States)

    Lau, Annie Y S; Coiera, Enrico; Zrimec, Tatjana; Compton, Paul

    2010-06-30

    Searching the Web for documents using information retrieval systems plays an important part in clinicians' practice of evidence-based medicine. While much research focuses on the design of methods to retrieve documents, there has been little examination of the way different search engine capabilities influence clinician search behaviors. Previous studies have shown that use of task-based search engines allows for faster searches with no loss of decision accuracy compared with resource-based engines. We hypothesized that changes in search behaviors may explain these differences. In all, 75 clinicians (44 doctors and 31 clinical nurse consultants) were randomized to use either a resource-based or a task-based version of a clinical information retrieval system to answer questions about 8 clinical scenarios in a controlled setting in a university computer laboratory. Clinicians using the resource-based system could select 1 of 6 resources, such as PubMed; clinicians using the task-based system could select 1 of 6 clinical tasks, such as diagnosis. Clinicians in both systems could reformulate search queries. System logs unobtrusively capturing clinicians' interactions with the systems were coded and analyzed for clinicians' search actions and query reformulation strategies. The most frequent search action of clinicians using the resource-based system was to explore a new resource with the same query, that is, these clinicians exhibited a "breadth-first" search behaviour. Of 1398 search actions, clinicians using the resource-based system conducted 401 (28.7%, 95% confidence interval [CI] 26.37-31.11) in this way. In contrast, the majority of clinicians using the task-based system exhibited a "depth-first" search behavior in which they reformulated query keywords while keeping to the same task profiles. Of 585 search actions conducted by clinicians using the task-based system, 379 (64.8%, 95% CI 60.83-68.55) were conducted in this way. This study provides evidence that

  12. Intelligent systems engineering methodology

    Science.gov (United States)

    Fouse, Scott

    1990-01-01

    An added challenge for the designers of large scale systems such as Space Station Freedom is the appropriate incorporation of intelligent system technology (artificial intelligence, expert systems, knowledge-based systems, etc.) into their requirements and design. This presentation will describe a view of systems engineering which successfully addresses several aspects of this complex problem: design of large scale systems, design with requirements that are so complex they only completely unfold during the development of a baseline system and even then continue to evolve throughout the system's life cycle, design that involves the incorporation of new technologies, and design and development that takes place with many players in a distributed manner yet can be easily integrated to meet a single view of the requirements. The first generation of this methodology was developed and evolved jointly by ISX and the Lockheed Aeronautical Systems Company over the past five years on the Defense Advanced Research Projects Agency/Air Force Pilot's Associate Program, one of the largest, most complex, and most successful intelligent systems constructed to date. As the methodology has evolved it has also been applied successfully to a number of other projects. Some of the lessons learned from this experience may be applicable to Freedom.

  13. Building a semantic search engine with games and crowdsourcing

    OpenAIRE

    Wieser, Christoph

    2014-01-01

    Semantic search engines aim at improving conventional search with semantic information, or meta-data, on the data searched for and/or on the searchers. So far, approaches to semantic search exploit characteristics of the searchers like age, education, or spoken language for selecting and/or ranking search results. Such data allow a semantic search engine to be built up as an extension of a conventional search engine. The crawlers of well-established search engines like Google, Yahoo! or Bing ...

  14. Comparing Artificial Intelligence and Genetic Engineering: Commercialization Lessons

    OpenAIRE

    Dickson, Edward M.

    1984-01-01

    Artificial Intelligence is rapidly leaving its academic home and moving into the marketplace. There are few precedents for an arcane academic subject becoming commercialized so rapidly. But, genetic engineering, which recently burst forth from academia to become the foundation for the hot new biotechnology industry, provides useful insights into the rites of passage awaiting the commercialization of artificial intelligence. This article examines the structural similarities and dissimilarities...

  15. WANDERER IN THE MIST: THE SEARCH FOR INTELLIGENCE, SURVEILLANCE, AND RECONNAISSANCE (ISR) STRATEGY

    Science.gov (United States)

    2017-06-01

    ... the production of over 383,000 photographic prints to support various intelligence, mapping, and ... WANDERER IN THE MIST: THE SEARCH FOR INTELLIGENCE, SURVEILLANCE, AND RECONNAISSANCE (ISR) STRATEGY, by Major Ryan D. Skaggs, USAF ... program from the University of California at Los Angeles (UCLA) in 2004. He is a career intelligence officer with over 13 years of experience across a ...

  16. Knowledge in Artificial Intelligence Systems: Searching the Strategies for Application

    OpenAIRE

    Kornienko, Alla A.; Kornienko, Anatoly V.; Fofanov, Oleg B.; Chubik, Maxim P.

    2015-01-01

    Studies based on auto-epistemic logic are pointed out as an advanced direction for the development of artificial intelligence (AI). Artificial intelligence is taken as a system that imitates the way humans solve complicated problems during the course of life. The structure of symbols and operations by which an intellectual solution is performed, as well as the search for the strategic reference points of those solutions, which are determined by certain structures of symbols and operations, are co...

  17. Funding the Search for Extraterrestrial Intelligence with a Lottery Bond

    OpenAIRE

    Haqq-Misra, Jacob

    2013-01-01

    I propose the establishment of a SETI Lottery Bond to provide a continued source of funding for the search for extraterrestrial intelligence (SETI). The SETI Lottery Bond is a fixed rate perpetual bond with a lottery at maturity, where maturity occurs only upon discovery and confirmation of extraterrestrial intelligent life. Investors in the SETI Lottery Bond purchase shares that yield a fixed rate of interest that continues indefinitely until SETI succeeds---at which point a random subset of...

  18. Experience of Developing a Meta-Semantic Search Engine

    OpenAIRE

    Mukhopadhyay, Debajyoti; Sharma, Manoj; Joshi, Gajanan; Pagare, Trupti; Palwe, Adarsha

    2013-01-01

    Today's web search scenario, which is mainly keyword based, points to the need for the effective and meaningful search provided by the Semantic Web. Existing search engines struggle to provide relevant answers to users' queries because they depend on the simple data available in web pages. On the other hand, semantic search engines provide efficient and relevant results, as the Semantic Web manages information with well-defined meaning using ontologies. A Meta-Search engine is a search tool that ...

  19. Assessment and Comparison of Search capabilities of Web-based Meta-Search Engines: A Checklist Approach

    Directory of Open Access Journals (Sweden)

    Alireza Isfandiyari Moghadam

    2010-03-01

    Full Text Available The present investigation concerns the evaluation, comparison and analysis of the search options existing within web-based meta-search engines. 64 meta-search engines were identified, and 19 meta-search engines that were free, accessible and compatible with the objectives of the present study were selected. An author-constructed checklist was used for data collection. Findings indicated that all meta-search engines studied supported the AND operator, phrase search, a setting for the number of results displayed, previous search query storage and help tutorials. Nevertheless, none of them offered search options for hypertext searching or for displaying the size of the pages searched. 94.7% supported features such as truncation, keyword-in-title and URL search, and text summary display. The checklist used in the study could serve as a model for investigating search options in search engines, digital libraries and other internet search tools.

  20. The end of meta search engines in Europe?

    NARCIS (Netherlands)

    Husovec, Martin

    2015-01-01

    The technology behind the meta search engines supports countless number of Internet services ranging from the price and quality comparison websites to more sophisticated traffic connection finders and general search engines like Google. Meta search engines generally increase market transparency,

  1. Cybervetting internet searches for vetting, investigations, and open-source intelligence

    CERN Document Server

    Appel, Edward J

    2014-01-01

    Section I, Behavior and Technology: The Internet's Potential for Investigators and Intelligence Officers; Introduction; Growth of Internet Use; A Practitioner's Perspective; The Search; Internet Posts and the People They Profile; Finding the Needles; The Need for Speed; Sufficiency of Searches; Notes. Behavior Online: Internet Use Growth; Evolution of Internet Uses; Physical World, Virtual Activities; Connections and Disconnecting; Notes. Use and Abuse: Crime and Mis...

  2. A Literature Review of Indexing and Searching Techniques Implementation in Educational Search Engines

    Science.gov (United States)

    El Guemmat, Kamal; Ouahabi, Sara

    2018-01-01

    The objective of this article is to analyze the searching and indexing techniques used in the implementation of educational search engines, while also treating future challenges. Educational search engines could greatly improve the effectiveness of e-learning if used correctly. However, these engines have several gaps which influence the performance of e-learning…

  3. BioCarian: search engine for exploratory searches in heterogeneous biological databases.

    Science.gov (United States)

    Zaki, Nazar; Tennakoon, Chandana

    2017-10-02

    There are a large number of biological databases publicly available to scientists on the web. In addition, there are many private databases generated in the course of research projects. These databases are in a wide variety of formats. Web standards have evolved in recent times and semantic web technologies are now available to interconnect diverse and heterogeneous sources of data. Therefore, integration and querying of biological databases can be facilitated by techniques used in the semantic web. Heterogeneous databases can be converted into Resource Description Framework (RDF) and queried using the SPARQL language. Searching for exact queries in these databases is trivial. However, exploratory searches need customized solutions, especially when multiple databases are involved. This process is cumbersome and time consuming for those without a sufficient background in computer science. In this context, a search engine facilitating exploratory searches of databases would be of great help to the scientific community. We present BioCarian, an efficient and user-friendly search engine for performing exploratory searches on biological databases. The search engine is an interface for SPARQL queries over RDF databases. We note that many of the databases can be converted to tabular form. We first convert the tabular databases to RDF. The search engine provides a graphical interface based on facets to explore the converted databases. The facet interface is more advanced than conventional facets. It allows complex queries to be constructed, and has additional features like ranking of facet values based on several criteria, visually indicating the relevance of a facet value and presenting the most important facet values when a large number of choices are available. For advanced users, SPARQL queries can be run directly on the databases. Using this feature, users will be able to incorporate federated searches of SPARQL endpoints. We used the search engine to do an exploratory search
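
    As a rough illustration of the pattern described above (tabular records converted to RDF and explored with SPARQL), the following Python sketch uses the rdflib package with an invented namespace and toy data; it is not BioCarian code, and the predicates and facet-style query are assumptions.

        # Convert a tiny tabular dataset to RDF triples, then answer an
        # exploratory, facet-style question with a SPARQL query (rdflib).

        from rdflib import Graph, Literal, Namespace

        EX = Namespace("http://example.org/bio/")   # invented namespace

        rows = [
            {"gene": "TP53",  "organism": "human", "pathway": "apoptosis"},
            {"gene": "BRCA1", "organism": "human", "pathway": "DNA repair"},
            {"gene": "cdc28", "organism": "yeast", "pathway": "cell cycle"},
        ]

        g = Graph()
        for row in rows:
            subject = EX[row["gene"]]
            g.add((subject, EX.organism, Literal(row["organism"])))
            g.add((subject, EX.pathway, Literal(row["pathway"])))

        # Facet-style query: all human genes and their pathways.
        query = """
            PREFIX ex: <http://example.org/bio/>
            SELECT ?gene ?pathway WHERE {
                ?gene ex:organism "human" .
                ?gene ex:pathway ?pathway .
            }
        """
        for gene, pathway in g.query(query):
            print(gene, pathway)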

  4. International Conference on Artificial Intelligence and Evolutionary Algorithms in Engineering Systems

    CERN Document Server

    Dash, Subhransu; Panigrahi, Bijaya

    2015-01-01

    The book is a collection of high-quality peer-reviewed research papers presented in the Proceedings of the International Conference on Artificial Intelligence and Evolutionary Algorithms in Engineering Systems (ICAEES 2014), held at Noorul Islam Centre for Higher Education, Kumaracoil, India. These research papers provide the latest developments in the broad area of the use of artificial intelligence and evolutionary algorithms in engineering systems. The book discusses a wide variety of industrial, engineering and scientific applications of the emerging techniques. It presents invited papers from the inventors/originators of new applications and advanced technologies.

  5. FindZebra: A search engine for rare diseases

    DEFF Research Database (Denmark)

    Dragusin, Radu; Petcu, Paula; Lioma, Christina Amalia

    2013-01-01

    Background: The web has become a primary information resource about illnesses and treatments for both medical and non-medical users. Standard web search is by far the most common interface for such information. It is therefore of interest to find out how well web search engines work for diagnostic...... approach for web search engines for rare disease diagnosis which includes 56 real life diagnostic cases, state-of-the-art evaluation measures, and curated information resources. In addition, we introduce FindZebra, a specialized (vertical) rare disease search engine. FindZebra is powered by open source...... medical concepts to demonstrate different ways of displaying the retrieved results to medical experts. Conclusions: Our results indicate that a specialized search engine can improve the diagnostic quality without compromising the ease of use of the currently widely popular web search engines. The proposed...

  6. Combining results of multiple search engines in proteomics.

    Science.gov (United States)

    Shteynberg, David; Nesvizhskii, Alexey I; Moritz, Robert L; Deutsch, Eric W

    2013-09-01

    A crucial component of the analysis of shotgun proteomics datasets is the search engine, an algorithm that attempts to identify the peptide sequence from the parent molecular ion that produced each fragment ion spectrum in the dataset. There are many different search engines, both commercial and open source, each employing a somewhat different technique for spectrum identification. The set of high-scoring peptide-spectrum matches for a defined set of input spectra differs markedly among the various search engine results; individual engines each provide unique correct identifications among a core set of correlative identifications. This has led to the approach of combining the results from multiple search engines to achieve improved analysis of each dataset. Here we review the techniques and available software for combining the results of multiple search engines and briefly compare the relative performance of these techniques.

  7. Combining Results of Multiple Search Engines in Proteomics*

    Science.gov (United States)

    Shteynberg, David; Nesvizhskii, Alexey I.; Moritz, Robert L.; Deutsch, Eric W.

    2013-01-01

    A crucial component of the analysis of shotgun proteomics datasets is the search engine, an algorithm that attempts to identify the peptide sequence from the parent molecular ion that produced each fragment ion spectrum in the dataset. There are many different search engines, both commercial and open source, each employing a somewhat different technique for spectrum identification. The set of high-scoring peptide-spectrum matches for a defined set of input spectra differs markedly among the various search engine results; individual engines each provide unique correct identifications among a core set of correlative identifications. This has led to the approach of combining the results from multiple search engines to achieve improved analysis of each dataset. Here we review the techniques and available software for combining the results of multiple search engines and briefly compare the relative performance of these techniques. PMID:23720762

  8. Challenging problems and solutions in intelligent systems

    CERN Document Server

    Grzegorzewski, Przemysław; Kacprzyk, Janusz; Owsiński, Jan; Penczek, Wojciech; Zadrożny, Sławomir

    2016-01-01

    This volume presents recent research, challenging problems and solutions in Intelligent Systems– covering the following disciplines: artificial and computational intelligence, fuzzy logic and other non-classic logics, intelligent database systems, information retrieval, information fusion, intelligent search (engines), data mining, cluster analysis, unsupervised learning, machine learning, intelligent data analysis, (group) decision support systems, intelligent agents and multi-agent systems, knowledge-based systems, imprecision and uncertainty handling, electronic commerce, distributed systems, etc. The book defines a common ground for sometimes seemingly disparate problems and addresses them by using the paradigm of broadly perceived intelligent systems. It presents a broad panorama of a multitude of theoretical and practical problems which have been successfully dealt with using the paradigm of intelligent computing.

  9. Next-Gen Search Engines

    Science.gov (United States)

    Gupta, Amardeep

    2005-01-01

    Current search engines--even the constantly surprising Google--seem unable to leap the next big barrier in search: the trillions of bytes of dynamically generated data created by individual web sites around the world, or what some researchers call the "deep web." The challenge now is not information overload, but information overlook.…

  10. Intelligent Agent Based Semantic Web in Cloud Computing Environment

    OpenAIRE

    Mukhopadhyay, Debajyoti; Sharma, Manoj; Joshi, Gajanan; Pagare, Trupti; Palwe, Adarsha

    2013-01-01

    Considering today's web scenario, there is a need for the effective and meaningful search over the web that is provided by the Semantic Web. Existing search engines are keyword based. They struggle to answer intelligent queries from the user because their results depend on the information available in web pages. Semantic search engines, in contrast, provide efficient and relevant results, as the semantic web is an extension of the current web in which information is given well-defined meaning....

  11. Multiple Presents: How Search Engines Re-write the Past

    NARCIS (Netherlands)

    Hellsten, I; Leydesdorff, L.; Wouters, P.

    2006-01-01

    Internet search engines function in a present which changes continuously. The search engines update their indices regularly, overwriting webpages with newer ones, adding new pages to the index and losing older ones. Some search engines can be used to search for information on the internet for

  12. Using Internet search engines to estimate word frequency.

    Science.gov (United States)

    Blair, Irene V; Urland, Geoffrey R; Ma, Jennifer E

    2002-05-01

    The present research investigated Internet search engines as a rapid, cost-effective alternative for estimating word frequencies. Frequency estimates for 382 words were obtained and compared across four methods: (1) Internet search engines, (2) the Kucera and Francis (1967) analysis of a traditional linguistic corpus, (3) the CELEX English linguistic database (Baayen, Piepenbrock, & Gulikers, 1995), and (4) participant ratings of familiarity. The results showed that Internet search engines produced frequency estimates that were highly consistent with those reported by Kucera and Francis and those calculated from CELEX, highly consistent across search engines, and very reliable over a 6-month period of time. Additional results suggested that Internet search engines are an excellent option when traditional word frequency analyses do not contain the necessary data (e.g., estimates for forenames and slang). In contrast, participants' familiarity judgments did not correspond well with the more objective estimates of word frequency. Researchers are advised to use search engines with large databases (e.g., AltaVista) to ensure the greatest representativeness of the frequency estimates.
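
    The study's exact estimation procedure is not reproduced in this record; the Python sketch below shows one hedged way to turn raw hit counts into per-million frequency estimates by scaling against a reference word of known corpus frequency. The get_hit_count function is a hypothetical placeholder for whatever engine query or scraped count a study would actually use, and the reference value is only illustrative.

        # Turn search-engine hit counts into per-million frequency estimates by
        # scaling against a reference word whose corpus frequency is known.
        # All numbers and the hit-count source are illustrative assumptions.

        def get_hit_count(word):
            # Placeholder: canned counts instead of querying a real engine.
            canned = {"the": 25_000_000_000, "serendipity": 12_000_000}
            return canned.get(word, 1)

        def per_million(word, reference_word="the", reference_per_million=68_000):
            """Scale a word's hit count by a reference word of assumed frequency."""
            ratio = get_hit_count(word) / get_hit_count(reference_word)
            return ratio * reference_per_million

        if __name__ == "__main__":
            for w in ["the", "serendipity"]:
                print(w, round(per_million(w), 2))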

  13. People searching for people: analysis of a people search engine log

    NARCIS (Netherlands)

    Weerkamp, W.; Berendsen, R.; Kovachev, B.; Meij, E.; Balog, K.; de Rijke, M.

    2011-01-01

    Recent years show an increasing interest in vertical search: searching within a particular type of information. Understanding what people search for in these "verticals" gives direction to research and provides pointers for the search engines themselves. In this paper we analyze the search logs of

  14. Web Feet Guide to Search Engines: Finding It on the Net.

    Science.gov (United States)

    Web Feet, 2001

    2001-01-01

    This guide to search engines for the World Wide Web discusses selecting the right search engine; interpreting search results; major search engines; online tutorials and guides; search engines for kids; specialized search tools for various subjects; and other specialized engines and gateways. (LRW)

  15. Competitive Intelligence on the Internet-Going for the Gold.

    Science.gov (United States)

    Kassler, Helene

    2000-01-01

    Discussion of competitive intelligence (CI) focuses on recent Web sites and several search techniques that provide valuable CI information. Highlights include links that display business relationships; information from vendors; general business sites; search engine strategies; local business newspapers; job postings; patent and trademark…

  16. Optimizing Vector-Quantization Processor Architecture for Intelligent Query-Search Applications

    Science.gov (United States)

    Xu, Huaiyu; Mita, Yoshio; Shibata, Tadashi

    2002-04-01

    The architecture of a very large scale integration (VLSI) vector-quantization processor (VQP) has been optimized to develop a general-purpose intelligent query-search agent. The agent performs a similarity-based search in a large-volume database. Although similarity-based search processing is computationally very expensive, latency-free searches have become possible due to the highly parallel maximum-likelihood search architecture of the VQP chip. Three architectures of the VQP chip have been studied and their performances are compared. In order to give reasonable search results according to different policies, the concept of a penalty function has been introduced into the VQP. An E-commerce real-estate agency system has been developed using the VQP chip implemented in a field-programmable gate array (FPGA), and the effectiveness of such an agency system has been demonstrated.
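
    The VLSI details are beyond the scope of a short record, but the core operation, a minimum-distance template search biased by a penalty term, can be sketched in software as follows; the data, penalty values and weighting are illustrative assumptions, not the chip's design.

        # Software sketch of the kind of similarity search a vector-quantization
        # processor performs: rank stored templates by distance to a query vector,
        # with an additive penalty term biasing the ranking toward a policy
        # (here, an invented over-budget penalty for a real-estate style query).

        def penalized_search(query, templates, penalties, weight=1.0):
            """templates: {item_id: vector}; penalties: {item_id: non-negative float}.
            Returns item ids ranked by squared distance plus weighted penalty."""
            def score(item_id):
                vec = templates[item_id]
                dist = sum((q - v) ** 2 for q, v in zip(query, vec))
                return dist + weight * penalties.get(item_id, 0.0)
            return sorted(templates, key=score)

        if __name__ == "__main__":
            listings = {"A": [3, 120, 2], "B": [2, 80, 1], "C": [3, 110, 2]}  # rooms, m2, baths
            over_budget_penalty = {"A": 50.0, "B": 0.0, "C": 0.0}
            print(penalized_search([3, 100, 2], listings, over_budget_penalty))  # ['C', 'B', 'A']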

  17. An Exploratory Survey of Student Perspectives Regarding Search Engines

    Science.gov (United States)

    Alshare, Khaled; Miller, Don; Wenger, James

    2005-01-01

    This study explored college students' perceptions regarding their use of search engines. The main objective was to determine how frequently students used various search engines, whether advanced search features were used, and how many search engines were used. Various factors that might influence student responses were examined. Results showed…

  18. Artificial intelligence in nuclear engineering: developments, lesson learned and future directions

    Energy Technology Data Exchange (ETDEWEB)

    Ruan, Da [The Belgian Nuclear Research Centre (SCK.CEN), Mol (Belgium)]. E-mail: druan@sckcen.be

    2005-07-01

    Full text of publication follows: In this lecture, an overview on artificial intelligence (AI) from control to decision making in nuclear engineering will be given mainly based on the 10 years progress of the FLINS forum (Fuzzy Logic and Intelligent Technology in Nuclear Science). Some FLINS concrete examples on nuclear reactor operation, nuclear safeguards information management, and cost estimation under uncertainty for a large nuclear project will be illustrated for the potential use of AI in nuclear engineering. Recommendations and future research directions on AI in nuclear engineering will be suggested from a practical point of view. (author)

  19. Artificial intelligence in nuclear engineering: developments, lesson learned and future directions

    International Nuclear Information System (INIS)

    Ruan, Da

    2005-01-01

    Full text of publication follows: In this lecture, an overview on artificial intelligence (AI) from control to decision making in nuclear engineering will be given mainly based on the 10 years progress of the FLINS forum (Fuzzy Logic and Intelligent Technology in Nuclear Science). Some FLINS concrete examples on nuclear reactor operation, nuclear safeguards information management, and cost estimation under uncertainty for a large nuclear project will be illustrated for the potential use of AI in nuclear engineering. Recommendations and future research directions on AI in nuclear engineering will be suggested from a practical point of view. (author)

  20. IBRI-CASONTO: Ontology-based semantic search engine

    Directory of Open Access Journals (Sweden)

    Awny Sayed

    2017-11-01

    Full Text Available The vast amount of information being added at a very fast pace to data repositories creates a challenge in extracting correct and accurate information, and has increased the competition among developers for technology that can understand the researcher's intent and the contextual meaning of terms. Arabic semantic search systems, however, are still in their infancy, which can be traced back to the complexity of the Arabic language: it has complex morphological, grammatical and semantic aspects, as it is a highly inflectional and derivational language. In this paper, we present an ontological search engine called IBRI-CASONTO for the Colleges of Applied Sciences, Oman. The proposed engine supports both Arabic and English and employs two types of search: a keyword-based search and a semantics-based search. IBRI-CASONTO is based on technologies such as Resource Description Framework (RDF) data and an ontological graph. The experiments are presented in two sections: the first compares the Entity-Search with the Classical-Search inside IBRI-CASONTO itself; the second compares the Entity-Search of IBRI-CASONTO with currently used search engines, such as Kngine, Wolfram Alpha and the most popular engine nowadays, Google, in order to measure their performance and efficiency.
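
    As a rough illustration of the distinction the abstract draws between keyword-based and entity (semantics-based) search, the following pure-Python sketch contrasts string matching over document text with pattern matching over RDF-style triples; the data, prefixes and function names are invented and do not reflect the actual IBRI-CASONTO implementation.

    # Keyword search over document text versus entity search over triples.
    documents = {
        "doc1": "The College of Applied Sciences in Ibri offers an IT programme.",
        "doc2": "Admission requirements for the engineering department.",
    }
    triples = [
        ("cas:Ibri", "rdf:type", "cas:College"),
        ("cas:Ibri", "cas:offersProgramme", "cas:InformationTechnology"),
        ("cas:InformationTechnology", "rdf:type", "cas:Programme"),
    ]

    def keyword_search(query):
        """Return the documents whose text contains the query string."""
        return [d for d, text in documents.items() if query.lower() in text.lower()]

    def entity_search(subject=None, predicate=None, obj=None):
        """Return the triples matching the given pattern (None is a wildcard)."""
        return [t for t in triples
                if (subject is None or t[0] == subject)
                and (predicate is None or t[1] == predicate)
                and (obj is None or t[2] == obj)]

    print(keyword_search("programme"))            # matches surface text only
    print(entity_search(subject="cas:Ibri"))      # returns everything asserted about the entity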

  1. Evaluating a Federated Medical Search Engine

    Science.gov (United States)

    Belden, J.; Williams, J.; Richardson, B.; Schuster, K.

    2014-01-01

    Summary Background Federated medical search engines are health information systems that provide a single access point to different types of information. Their efficiency as clinical decision support tools has been demonstrated through numerous evaluations. Despite their rigor, very few of these studies report holistic evaluations of medical search engines and even fewer base their evaluations on existing evaluation frameworks. Objectives To evaluate a federated medical search engine, MedSocket, for its potential net benefits in an established clinical setting. Methods This study applied the Human, Organization, and Technology (HOT-fit) evaluation framework in order to evaluate MedSocket. The hierarchical structure of the HOT-factors allowed for identification of a combination of efficiency metrics. Human fit was evaluated through user satisfaction and patterns of system use; technology fit was evaluated through the measurements of time-on-task and the accuracy of the found answers; and organization fit was evaluated from the perspective of system fit to the existing organizational structure. Results Evaluations produced mixed results and suggested several opportunities for system improvement. On average, participants were satisfied with MedSocket searches and confident in the accuracy of retrieved answers. However, MedSocket did not meet participants’ expectations in terms of download speed, access to information, and relevance of the search results. These mixed results made it necessary to conclude that in the case of MedSocket, technology fit had a significant influence on the human and organization fit. Hence, improving technological capabilities of the system is critical before its net benefits can become noticeable. Conclusions The HOT-fit evaluation framework was instrumental in tailoring the methodology for conducting a comprehensive evaluation of the search engine. Such multidimensional evaluation of the search engine resulted in recommendations for

  2. Adding a visualization feature to web search engines: it's time.

    Science.gov (United States)

    Wong, Pak Chung

    2008-01-01

    It's widely recognized that all Web search engines today are almost identical in presentation layout and behavior. In fact, the same presentation approach has been applied to depicting search engine results pages (SERPs) since the first Web search engine launched in 1993. In this Visualization Viewpoints article, I propose to add a visualization feature to Web search engines and suggest that the new addition can improve search engines' performance and capabilities, which in turn lead to better Web search technology.

  3. Comparative Study on Three Major Internet Search Engines ...

    African Journals Online (AJOL)

    The study compared the Yahoo, Google and ask.com search engines. An experimental method was used, with ten reference questions posed to each of the search engines. Yahoo returned the highest number of results (521,801,043) among the three Web search ...

  4. Variability of patient spine education by Internet search engine.

    Science.gov (United States)

    Ghobrial, George M; Mehdi, Angud; Maltenfort, Mitchell; Sharan, Ashwini D; Harrop, James S

    2014-03-01

    Patients are increasingly reliant upon the Internet as a primary source of medical information. The educational experience varies by search engine and search term, and changes daily. There are no tools for critical evaluation of spinal surgery websites. The objectives were to highlight the variability between common search engines for the same search terms, to detect bias by the prevalence of specific kinds of websites for certain spinal disorders, and to demonstrate a simple scoring system of spinal disorder websites for patient use, to maximize the quality of information exposed to the patient. Ten common search terms were used to query three of the most common search engines. The top fifty results of each query were tabulated. A negative binomial regression was performed to highlight the variation across each search engine. Google was more likely than the Bing and Yahoo search engines to return hospital ads (P=0.002) and more likely to return scholarly sites of peer-reviewed literature (P=0.003). Educational websites, surgical group sites, and online web communities had a significantly higher likelihood of returning on any search, regardless of search engine or search string (P=0.007). Likewise, professional websites, including hospital-run, industry-sponsored, legal, and peer-reviewed web pages, were less likely to be found on a search overall, regardless of engine and search string (P=0.078). The Internet is a rapidly growing body of medical information which can serve as a useful tool for patient education. High quality information is readily available, provided that the patient uses a consistent, focused metric for evaluating online spine surgery information, as there is clear variability in the way search engines present information to the patient. Published by Elsevier B.V.

  5. Intelligent environmental data warehouse

    International Nuclear Information System (INIS)

    Ekechukwu, B.

    1998-01-01

    Making quick and effective decisions in environmental management depends on multiple and complex parameters, and a data warehouse is a powerful tool for the overall management of massive amounts of environmental information. Selecting the right data from a warehouse is an important consideration for end-users. This paper proposes an intelligent environmental data warehouse system. It consists of a data warehouse that supplies environmental researchers and managers with the environmental information they need for their research studies and decisions, in the form of geometric and attribute data for a study area, together with metadata for other sources of environmental information. In addition, the proposed intelligent search engine works according to a set of rules, which enables the system to be aware of the environmental data wanted by the end-user. The system development process passes through four stages: data preparation, warehouse development, intelligent engine development and Internet platform system development. (author)

  6. The Breakthrough Listen Search for Intelligent Life: 1.1-1.9 GHz Observations of 692 Nearby Stars

    Science.gov (United States)

    Enriquez, J. Emilio; Siemion, Andrew; Foster, Griffin; Gajjar, Vishal; Hellbourg, Greg; Hickish, Jack; Isaacson, Howard; Price, Danny C.; Croft, Steve; DeBoer, David; Lebofsky, Matt; MacMahon, David H. E.; Werthimer, Dan

    2017-11-01

    We report on a search for engineered signals from a sample of 692 nearby stars using the Robert C. Byrd Green Bank Telescope, undertaken as part of the Breakthrough Listen Initiative search for extraterrestrial intelligence. Observations were made over 1.1-1.9 GHz (L band), with three sets of five-minute observations of the 692 primary targets, interspersed with five-minute observations of secondary targets. By comparing the “ON” and “OFF” observations, we are able to identify terrestrial interference and place limits on the presence of engineered signals from putative extraterrestrial civilizations inhabiting the environs of the target stars. During the analysis, 11 events passed our thresholding algorithm, but a detailed analysis of their properties indicates that they are consistent with known examples of anthropogenic radio-frequency interference. We conclude that, at the time of our observations, none of the observed systems host high-duty-cycle radio transmitters emitting between 1.1 and 1.9 GHz with an Equivalent Isotropic Radiated Power of ~10^13 W, which is readily achievable by our own civilization. Our results suggest that fewer than ~0.1% of the stellar systems within 50 pc possess the type of transmitters searched for in this survey.

  7. Nature-inspired computation in engineering

    CERN Document Server

    2016-01-01

    This timely review book summarizes the state-of-the-art developments in nature-inspired optimization algorithms and their applications in engineering. Algorithms and topics include the overview and history of nature-inspired algorithms, discrete firefly algorithm, discrete cuckoo search, plant propagation algorithm, parameter-free bat algorithm, gravitational search, biogeography-based algorithm, differential evolution, particle swarm optimization and others. Applications include vehicle routing, swarming robots, discrete and combinatorial optimization, clustering of wireless sensor networks, cell formation, economic load dispatch, metamodeling, surrogate-assisted cooperative co-evolution, data fitting and reverse engineering as well as other case studies in engineering. This book will be an ideal reference for researchers, lecturers, graduates and engineers who are interested in nature-inspired computation, artificial intelligence and computational intelligence. It can also serve as a reference for relevant...

  8. [An object-oriented intelligent engineering design approach for lake pollution control].

    Science.gov (United States)

    Zou, Rui; Zhou, Jing; Liu, Yong; Zhu, Xiang; Zhao, Lei; Yang, Ping-Jian; Guo, Huai-Cheng

    2013-03-01

    To address the shortcomings of traditional lake pollution control engineering techniques, a new lake pollution control engineering approach was proposed in this study, based on object-oriented intelligent design (OOID) from the perspective of intelligence. It can provide a new methodology and framework for effectively controlling lake pollution and improving water quality. The differences between the traditional engineering techniques and the OOID approach were compared. The key points of OOID were described as the object perspective, a cause-and-effect foundation, extending points into surfaces, and temporal and spatial optimization. Blue algae control in a lake was taken as an example in this study. The effects on algae control and water quality improvement were analyzed in detail from the perspective of object-oriented intelligent design, based on two engineering techniques (a vertical hydrodynamic mixer and pumping algaecide recharge). The modeling results showed that the traditional engineering design paradigm cannot provide scientific and effective guidance for engineering design and decision-making regarding lake pollution. The intelligent design approach is based on the object perspective and quantitative causal analysis in this case. This approach identified that the efficiency of mixers was much higher than that of pumps in achieving the goal of low to moderate water quality improvement. However, when the water quality objective exceeded a certain value (such as a control objective for peak Chl-a concentration exceeding 100 microg x L(-1) in this experimental water), the mixer could not achieve the goal. The pump technique could achieve the goal, but at higher cost. The efficiency of combining the two techniques was higher than using either technique alone. Moreover, the quantitative scale control of the two engineering techniques has a significant impact on the actual project benefits and costs.

  9. Intelligent energy management control of vehicle air conditioning system coupled with engine

    International Nuclear Information System (INIS)

    Khayyam, Hamid; Abawajy, Jemal; Jazar, Reza N.

    2012-01-01

    Vehicle Air Conditioning (AC) systems consist of an engine powered compressor activated by an electrical clutch. The AC system imposes an extra load to the vehicle's engine increasing the vehicle fuel consumption and emissions. Energy management control of the vehicle air conditioning is a nonlinear dynamic system, influenced by uncertain disturbances. In addition, the vehicle energy management control system interacts with different complex systems, such as engine, air conditioning system, environment, and driver, to deliver fuel consumption improvements. In this paper, we describe the energy management control of vehicle AC system coupled with vehicle engine through an intelligent control design. The Intelligent Energy Management Control (IEMC) system presented in this paper includes an intelligent algorithm which uses five exterior units and three integrated fuzzy controllers to produce desirable internal temperature and air quality, improved fuel consumption, low emission, and smooth driving. The three fuzzy controllers include: (i) a fuzzy cruise controller to adapt vehicle cruise speed via prediction of the road ahead using a Look-Ahead system, (ii) a fuzzy air conditioning controller to produce desirable temperature and air quality inside vehicle cabin room via a road information system, and (iii) a fuzzy engine controller to generate the required engine torque to move the vehicle smoothly on the road. We optimised the integrated operation of the air conditioning and the engine under various driving patterns and performed three simulations. Results show that the proposed IEMC system developed based on Fuzzy Air Conditioning Controller with Look-Ahead (FAC-LA) method is a more efficient controller for vehicle air conditioning system than the previously developed Coordinated Energy Management Systems (CEMS). - Highlights: ► AC interacts: vehicle, environment, driver components, and the interrelationships between them. ► Intelligent AC algorithm which uses
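
    As a rough illustration of the kind of fuzzy controller the abstract refers to, the sketch below maps a cabin temperature error to a compressor power level using triangular membership functions and weighted-average defuzzification; the membership functions, rule outputs and numbers are invented and are not taken from the FAC-LA design.

    def tri(x, a, b, c):
        """Triangular membership function rising from a, peaking at b, falling to c."""
        if x <= a or x >= c:
            return 0.0
        return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

    def fuzzy_ac_power(temp_error):
        """Map temperature error (cabin minus setpoint, deg C) to compressor power in [0, 1]."""
        cold = tri(temp_error, -10.0, -5.0, 0.0)   # cabin colder than setpoint
        ok = tri(temp_error, -2.0, 0.0, 2.0)       # cabin near setpoint
        hot = tri(temp_error, 0.0, 5.0, 10.0)      # cabin warmer than setpoint
        weights, outputs = [cold, ok, hot], [0.1, 0.4, 0.9]  # rule consequents (low, medium, high power)
        total = sum(weights)
        return sum(w * o for w, o in zip(weights, outputs)) / total if total else 0.0

    print(fuzzy_ac_power(4.0))   # cabin noticeably warm -> high compressor power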

  10. Search Engine : an effective tool for exploring the Internet

    OpenAIRE

    Ranasinghe, W.M. Tharanga Dilruk

    2006-01-01

    The Internet has become the largest source of information. Today, millions of websites exist and this number continues to grow. Finding the right information at the right time is the challenge of the Internet age. A search engine is a searchable database which allows users to locate information on the Internet by submitting keywords. Search engines can be divided into two categories: individual search engines and meta search engines. This article discusses the features of these search engines in detail.

  11. Search Engine Advertising Effectiveness in a Multimedia Campaign

    NARCIS (Netherlands)

    Zenetti, German; Bijmolt, Tammo H. A.; Leeflang, Peter S. H.; Klapper, Daniel

    2014-01-01

    Search engine advertising has become a multibillion-dollar business and one of the dominant forms of advertising on the Internet. This study examines the effectiveness of search engine advertising within a multimedia campaign, with explicit consideration of the interaction effects between search

  12. Searching for Suicide Information on Web Search Engines in Chinese

    Directory of Open Access Journals (Sweden)

    Yen-Feng Lee

    2017-01-01

    Full Text Available Introduction: Recently, suicide prevention has become an important public health issue. However, with growing access to information in cyberspace, harmful information is easily accessible online. To investigate the accessibility of potentially harmful suicide-related information on the internet, we examined searches for suicide information online in order to draw attention to the issue. Methods: We used five search engines (Google, Yahoo, Bing, Yam, and Sina) and four suicide-related search queries (suicide, how to suicide, suicide methods, and want to die) in traditional Chinese in April 2016. The first thirty links of the search results on each search engine were classified by a psychiatrist into suicide prevention, pro-suicide, neutral, unrelated to suicide, or error websites. Results: Among the total of 352 unique websites generated, suicide prevention websites were the most frequent among the search results (37.8%), followed by websites unrelated to suicide (25.9%) and neutral websites (23.0%). However, pro-suicide websites were still easily accessible (9.7%). In addition, compared with those from the USA and China, the search engine originating in Taiwan had the lowest accessibility to pro-suicide information. The results of ANOVA showed a significant difference between the groups, F = 8.772, P < 0.001. Conclusions: These results suggest a need for further restrictions and regulations of pro-suicide information on the internet. Providing more supportive information online may be an effective plan for suicide prevention.

  13. Google Patents: The global patent search engine

    OpenAIRE

    Noruzi, Alireza; Abdekhoda, Mohammadhiwa

    2014-01-01

    Google Patents (www.google.com/patents) includes over 8 million full-text patents. Google Patents works in the same way as the Google search engine. Google Patents is the global patent search engine that lets users search through patents from the USPTO (United States Patent and Trademark Office), EPO (European Patent Office), etc. This study begins with an overview of how to use Google Patents and identifies advanced search techniques not well documented by Google Patents. It makes several sug...

  14. The Little Engines That Could: Modeling the Performance of World Wide Web Search Engines

    OpenAIRE

    Eric T. Bradlow; David C. Schmittlein

    2000-01-01

    This research examines the ability of six popular Web search engines, individually and collectively, to locate Web pages containing common marketing/management phrases. We propose and validate a model for search engine performance that is able to represent key patterns of coverage and overlap among the engines. The model enables us to estimate the typical additional benefit of using multiple search engines, depending on the particular set of engines being considered. It also provides an estim...

  15. Reflections on New Search Engine 新型搜索引擎畅想

    OpenAIRE

    Huang, Jiannian

    2007-01-01

    [English abstract] The rapid increase in the need for Internet information resources has led to a rush of search engines. This article introduces some new types of search engines that are appearing or will appear. These search engines include: grey document search engines, invisible web search engines, knowledge discovery search engines, clustering meta search engines, academic clustering search engines, concept comparison and concept analogy search engines, consultation search engines, teachi...

  16. IntegromeDB: an integrated system and biological search engine.

    Science.gov (United States)

    Baitaluk, Michael; Kozhenkov, Sergey; Dubinina, Yulia; Ponomarenko, Julia

    2012-01-19

    With the growth of biological data in volume and heterogeneity, web search engines have become key tools for researchers. However, general-purpose search engines are not specialized for the search of biological data. Here, we present an approach to developing a biological web search engine based on Semantic Web technologies and demonstrate its implementation for retrieving gene- and protein-centered knowledge. The engine is available at http://www.integromedb.org. The IntegromeDB search engine allows scanning data on gene regulation, gene expression, protein-protein interactions, pathways, metagenomics, mutations, diseases, and other gene- and protein-related data that are automatically retrieved from publicly available databases and web pages using biological ontologies. To perfect the resource design and usability, we welcome and encourage community feedback.

  17. Beyond Artificial Intelligence toward Engineered Psychology

    Science.gov (United States)

    Bozinovski, Stevo; Bozinovska, Liljana

    This paper addresses the field of Artificial Intelligence, the road it has taken so far and the possible road it should take. The paper was invited by the Conference of IT Revolutions 2008, and discusses some issues not emphasized in the AI trajectory so far. The recommendations are that the main focus should be personalities rather than programs or agents, that the genetic environment should be introduced in reasoning about personalities, and that the limbic system should be studied and modeled. Engineered Psychology is proposed as the road to take. The need for basic principles in psychology is discussed and a mathematical equation is proposed as a fundamental law of engineered and human psychology.

  18. Subject Gateway Sites and Search Engine Ranking.

    Science.gov (United States)

    Thelwall, Mike

    2002-01-01

    Discusses subject gateway sites and commercial search engines for the Web and presents an explanation of Google's PageRank algorithm. The principle question addressed is the conditions under which a gateway site will increase the likelihood that a target page is found in search engines. (LRW)

  19. Human Flesh Search Engine and Online Privacy.

    Science.gov (United States)

    Zhang, Yang; Gao, Hong

    2016-04-01

    Human flesh search engine can be a double-edged sword, bringing convenience on the one hand and leading to infringement of personal privacy on the other hand. This paper discusses the ethical problems brought about by the human flesh search engine, as well as possible solutions.

  20. Understanding and modeling users of modern search engines

    NARCIS (Netherlands)

    Chuklin, A.

    2017-01-01

    As search is being used by billions of people, modern search engines are becoming more and more complex. And complexity does not just come from the algorithms. Richer and richer content is being added to search engine result pages: news and sports results, definitions and translations, images and

  1. L factor: hope and fear in the search for extraterrestrial intelligence

    Science.gov (United States)

    Rubin, Charles T.

    2001-08-01

    The L factor in the Drake equation is widely understood to account for most of the variance in estimates of the number of extraterrestrial intelligences that might be contacted by the search for extraterrestrial intelligence (SETI). It is also among the hardest to quantify. An examination of discussions of the L factor in the popular and technical SETI literature suggests that attempts to estimate L involve a variety of potentially conflicting assumptions about civilizational lifespan that reflect hopes and fears about the human future.

  2. Evaluation of an intelligent open learning system for engineering education

    Directory of Open Access Journals (Sweden)

    Maria Samarakou

    2016-09-01

    Full Text Available In computer-assisted education, continuous monitoring and assessment of the learner is crucial for the delivery of personalized education to be effective. In this paper, we present a pilot application of the Student Diagnosis, Assistance, Evaluation System based on Artificial Intelligence (StuDiAsE), an open learning system for unattended student diagnosis, assistance and evaluation based on artificial intelligence. The system demonstrated in this paper has been designed with engineering students in mind and is capable of monitoring their comprehension, assessing their prior knowledge, building individual learner profiles, providing personalized assistance and, finally, evaluating a learner's performance both quantitatively and qualitatively by means of artificial intelligence techniques. The architecture and user interface of the system are exhibited, the results and feedback received from a pilot application of the system within a theoretical engineering course are presented, and the outcomes are discussed.

  3. FindZebra: a search engine for rare diseases.

    Science.gov (United States)

    Dragusin, Radu; Petcu, Paula; Lioma, Christina; Larsen, Birger; Jørgensen, Henrik L; Cox, Ingemar J; Hansen, Lars Kai; Ingwersen, Peter; Winther, Ole

    2013-06-01

    The web has become a primary information resource about illnesses and treatments for both medical and non-medical users. Standard web search is by far the most common interface to this information. It is therefore of interest to find out how well web search engines work for diagnostic queries and what factors contribute to successes and failures. Among diseases, rare (or orphan) diseases represent an especially challenging and thus interesting class to diagnose as each is rare, diverse in symptoms and usually has scattered resources associated with it. We design an evaluation approach for web search engines for rare disease diagnosis which includes 56 real life diagnostic cases, performance measures, information resources and guidelines for customising Google Search to this task. In addition, we introduce FindZebra, a specialized (vertical) rare disease search engine. FindZebra is powered by open source search technology and uses curated freely available online medical information. FindZebra outperforms Google Search in both default set-up and customised to the resources used by FindZebra. We extend FindZebra with specialized functionalities exploiting medical ontological information and UMLS medical concepts to demonstrate different ways of displaying the retrieved results to medical experts. Our results indicate that a specialized search engine can improve the diagnostic quality without compromising the ease of use of the currently widely popular standard web search. The proposed evaluation approach can be valuable for future development and benchmarking. The FindZebra search engine is available at http://www.findzebra.com/. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  4. Music Search Engines: Specifications and Challenges

    DEFF Research Database (Denmark)

    Nanopoulos, Alexandros; Rafilidis, Dimitrios; Manolopoulos, Yannis

    2009-01-01

    Nowadays we have a proliferation of music data available over the Web. One of the imperative challenges is how to search these vast, global-scale musical resources to find preferred music. Recent research has envisaged the notion of music search engines (MSEs) that allow for searching preferred...

  5. Applying Russian Search Engines to market Finnish Corporates

    OpenAIRE

    Pankratovs, Vladimirs

    2013-01-01

    The goal of this thesis is to provide basic knowledge of the marketing capabilities of Russia-based search engines. After reading this material, the reader is able to distinguish between different kinds of search engine marketing tools and can carry out advertising campaigns. The study includes information about the majority of tools available to the user and provides up-to-date screenshots of the front-ends of Russian search engines, which can be useful in further work. The study discusses the main principles and ba...

  6. Use of search engine optimization factors for Google page rank prediction

    OpenAIRE

    Tvrdi, Barbara

    2012-01-01

    Over the years, search engines have become an important tool for finding information. It is known that users select a link on the first page of search results in 62% of cases. Search engine optimization techniques enable website improvement and therefore better ranking in search engines. The exact specification of the factors that affect website ranking is not disclosed by search engine owners. In this thesis we tried to choose some of the most frequently mentioned search engine optimizatio...

  7. Searching Choices: Quantifying Decision-Making Processes Using Search Engine Data.

    Science.gov (United States)

    Moat, Helen Susannah; Olivola, Christopher Y; Chater, Nick; Preis, Tobias

    2016-07-01

    When making a decision, humans consider two types of information: information they have acquired through their prior experience of the world, and further information they gather to support the decision in question. Here, we present evidence that data from search engines such as Google can help us model both sources of information. We show that statistics from search engines on the frequency of content on the Internet can help us estimate the statistical structure of prior experience; and, specifically, we outline how such statistics can inform psychological theories concerning the valuation of human lives, or choices involving delayed outcomes. Turning to information gathering, we show that search query data might help measure human information gathering, and it may predict subsequent decisions. Such data enable us to compare information gathered across nations, where analyses suggest, for example, a greater focus on the future in countries with a higher per capita GDP. We conclude that search engine data constitute a valuable new resource for cognitive scientists, offering a fascinating new tool for understanding the human decision-making process. Copyright © 2016 The Authors. Topics in Cognitive Science published by Wiley Periodicals, Inc. on behalf of Cognitive Science Society.

  8. The Use of Web Search Engines in Information Science Research.

    Science.gov (United States)

    Bar-Ilan, Judit

    2004-01-01

    Reviews the literature on the use of Web search engines in information science research, including: ways users interact with Web search engines; social aspects of searching; structure and dynamic nature of the Web; link analysis; other bibliometric applications; characterizing information on the Web; search engine evaluation and improvement; and…

  9. Automatic figure ranking and user interfacing for intelligent figure search.

    Directory of Open Access Journals (Sweden)

    Hong Yu

    2010-10-01

    Full Text Available Figures are important experimental results that are typically reported in full-text bioscience articles. Bioscience researchers need to access figures to validate research facts and to formulate or test novel research hypotheses. On the other hand, the sheer volume of bioscience literature has made it difficult to access figures. Therefore, we are developing an intelligent figure search engine (http://figuresearch.askhermes.org). Existing research in figure search treats each figure equally, but we introduce a novel concept of "figure ranking": figures appearing in a full-text biomedical article can be ranked by their contribution to the knowledge discovery. We empirically validated the hypothesis of figure ranking with over 100 bioscience researchers, and then developed unsupervised natural language processing (NLP) approaches to automatically rank figures. Evaluating on a collection of 202 full-text articles in which authors have ranked the figures based on importance, our best system achieved a weighted error rate of 0.2, which is significantly better than several other baseline systems we explored. We further explored a user interfacing application in which we built novel user interfaces (UIs) incorporating figure ranking, allowing bioscience researchers to efficiently access important figures. Our evaluation results show that 92% of the bioscience researchers prefer as the top two choices the user interfaces in which the most important figures are enlarged. With our automatic figure ranking NLP system, bioscience researchers preferred the UIs in which the most important figures were predicted by our NLP system over the UIs in which the most important figures were randomly assigned. In addition, our results show that there was no statistical difference in bioscience researchers' preference between the UIs generated by automatic figure ranking and the UIs generated by human ranking annotation. The evaluation results conclude that automatic figure ranking and user

  10. Semantic interpretation of search engine resultant

    Science.gov (United States)

    Nasution, M. K. M.

    2018-01-01

    In semantics, logical language can be interpreted in various forms, but the certainty of meaning is embedded in uncertainty, which always directly influences the role of technology. One result of this uncertainty applies to search engines as user interfaces to information spaces such as the Web. Therefore, the behaviour of search engine results should be interpreted with certainty through semantic formulation as interpretation. The formulation of this behaviour shows that there are various interpretations that can be made semantically, whether temporary, inclusive, or repeated.

  11. Intelligent engineering and technology for nuclear power plant operation

    International Nuclear Information System (INIS)

    Wang, P.P.; Gu, X.

    1996-01-01

    The Three-Mile-Island accident has drawn considerable attention from the engineering, scientific, management, financial, and political communities as well as society at large. This paper surveys possible causes of the accident studied by various groups. Research continues in this area, with many projects aimed specifically at improving the performance and operation of a nuclear power plant using the contemporary technologies available. In addition to reviewing the known causes of the accident, we suggest a strategy for coping with these problems in the future. With the increased use of intelligent methodologies, called computational intelligence or soft computing, a substantially larger collection of powerful tools is now available for designers to use in order to tackle these sensitive and difficult issues. These intelligent methodologies consist of fuzzy logic, genetic algorithms, neural networks, artificial intelligence and expert systems, pattern recognition, machine intelligence, and fuzzy constraint networks. Using the Three-Mile-Island experience, this paper offers a set of specific recommendations for future designers to take advantage of the powerful tools of intelligent technologies that we are now able to master, and encourages the adoption of a novel methodology called fuzzy constraint networks.

  12. 2nd International Conference on Intelligent Technologies and Engineering Systems

    CERN Document Server

    Chen, Cheng-Yi; Yang, Cheng-Fu

    2014-01-01

    This book includes the original, peer reviewed research papers from the 2nd International Conference on Intelligent Technologies and Engineering Systems (ICITES2013), which took place on December 12-14, 2013 at Cheng Shiu University in Kaohsiung, Taiwan. Topics covered include: laser technology, wireless and mobile networking, lean and agile manufacturing, speech processing, microwave dielectrics, intelligent circuits and systems, 3D graphics, communications, and structure dynamics and control.

  13. Document Clustering Approach for Meta Search Engine

    Science.gov (United States)

    Kumar, Naresh, Dr.

    2017-08-01

    The size of the WWW is growing exponentially with every change in technology. This results in a huge amount of information and long lists of URLs, and it is not possible to visit each page manually. If page ranking algorithms are used properly, the user's search space can be restricted to a few pages of search results. However, the available literature shows that no single search system can provide quality results across all domains. This paper provides a solution to this problem by introducing a new meta search engine that determines the relevancy of web pages to the query and clusters the results accordingly. The proposed approach reduces the user's effort and improves the quality of results and the performance of the meta search engine.
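
    As a rough sketch of what clustering merged result snippets can look like (the paper's own relevancy and clustering method is not reproduced here), the following example groups snippets with TF-IDF vectors and k-means; it assumes scikit-learn is available, and the snippets are invented.

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.cluster import KMeans

    # Snippets as they might be returned by several underlying engines.
    snippets = [
        "python tutorial for beginners",
        "learn python programming online",
        "jaguar speed and habitat facts",
        "jaguar car dealership prices",
    ]

    vectors = TfidfVectorizer(stop_words="english").fit_transform(snippets)
    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(vectors)

    for snippet, label in zip(snippets, labels):
        print(label, snippet)   # snippets with the same label end up in the same cluster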

  14. Artificial intelligence and engineering curricula - are changes needed?

    International Nuclear Information System (INIS)

    Jenkins, J.P.

    1988-01-01

    The purpose of this paper is to identify the expected impact of artificial intelligence (AI) on curricula and training courses. From this examination, new elements are proposed for the academic preparation and training of engineers who will evaluate and use these systems and capabilities. Artificial intelligence, from an operational viewpoint, begins with a set of rules governing the operation of logic, implemented via computer software and userware. These systems apply logic and experience to handle problems intelligently, especially when the number of alternatives for solving a problem is beyond the scope of the human user. Usually, AI applications take the form of expert systems. An expert system embodies in the computer the knowledge-based component of an expert, such as domain knowledge and reasoning techniques, in such a form that the system can offer intelligent advice and, on demand, justify its own line of reasoning. Two languages predominate, LISP and Prolog. The AI user may interface with the knowledge base via one of these languages or by means of menu displays, cursor selections, or other conventional user interface methods.
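
    To make the expert-system idea concrete, here is a toy forward-chaining rule engine in Python (rather than LISP or Prolog) that derives advice from facts and keeps a trace so it can justify its line of reasoning; the rules and fact names are invented for illustration.

    rules = [
        ({"vibration_high", "temperature_high"}, "bearing_wear_suspected"),
        ({"bearing_wear_suspected"}, "advise_inspection"),
    ]

    def infer(initial_facts):
        """Apply rules until no new facts appear; return all facts plus an explanation trace."""
        facts, trace = set(initial_facts), []
        changed = True
        while changed:
            changed = False
            for conditions, conclusion in rules:
                if conditions <= facts and conclusion not in facts:
                    facts.add(conclusion)
                    trace.append(f"{sorted(conditions)} => {conclusion}")
                    changed = True
        return facts, trace

    facts, trace = infer({"vibration_high", "temperature_high"})
    print(facts)
    for step in trace:
        print("because:", step)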

  15. Using Internet Search Engines to Obtain Medical Information: A Comparative Study

    Science.gov (United States)

    Wang, Liupu; Wang, Juexin; Wang, Michael; Li, Yong; Liang, Yanchun

    2012-01-01

    Background The Internet has become one of the most important means to obtain health and medical information. It is often the first step in checking for basic information about a disease and its treatment. The search results are often useful to general users. Various search engines such as Google, Yahoo!, Bing, and Ask.com can play an important role in obtaining medical information for both medical professionals and lay people. However, the usability and effectiveness of various search engines for medical information have not been comprehensively compared and evaluated. Objective To compare major Internet search engines in their usability of obtaining medical and health information. Methods We applied usability testing as a software engineering technique and a standard industry practice to compare the four major search engines (Google, Yahoo!, Bing, and Ask.com) in obtaining health and medical information. For this purpose, we searched the keyword breast cancer in Google, Yahoo!, Bing, and Ask.com and saved the results of the top 200 links from each search engine. We combined nonredundant links from the four search engines and gave them to volunteer users in an alphabetical order. The volunteer users evaluated the websites and scored each website from 0 to 10 (lowest to highest) based on the usefulness of the content relevant to breast cancer. A medical expert identified six well-known websites related to breast cancer in advance as standards. We also used five keywords associated with breast cancer defined in the latest release of Systematized Nomenclature of Medicine-Clinical Terms (SNOMED CT) and analyzed their occurrence in the websites. Results Each search engine provided rich information related to breast cancer in the search results. All six standard websites were among the top 30 in search results of all four search engines. Google had the best search validity (in terms of whether a website could be opened), followed by Bing, Ask.com, and Yahoo!. The search

  16. Using Internet search engines to obtain medical information: a comparative study.

    Science.gov (United States)

    Wang, Liupu; Wang, Juexin; Wang, Michael; Li, Yong; Liang, Yanchun; Xu, Dong

    2012-05-16

    The Internet has become one of the most important means to obtain health and medical information. It is often the first step in checking for basic information about a disease and its treatment. The search results are often useful to general users. Various search engines such as Google, Yahoo!, Bing, and Ask.com can play an important role in obtaining medical information for both medical professionals and lay people. However, the usability and effectiveness of various search engines for medical information have not been comprehensively compared and evaluated. To compare major Internet search engines in their usability of obtaining medical and health information. We applied usability testing as a software engineering technique and a standard industry practice to compare the four major search engines (Google, Yahoo!, Bing, and Ask.com) in obtaining health and medical information. For this purpose, we searched the keyword breast cancer in Google, Yahoo!, Bing, and Ask.com and saved the results of the top 200 links from each search engine. We combined nonredundant links from the four search engines and gave them to volunteer users in an alphabetical order. The volunteer users evaluated the websites and scored each website from 0 to 10 (lowest to highest) based on the usefulness of the content relevant to breast cancer. A medical expert identified six well-known websites related to breast cancer in advance as standards. We also used five keywords associated with breast cancer defined in the latest release of Systematized Nomenclature of Medicine-Clinical Terms (SNOMED CT) and analyzed their occurrence in the websites. Each search engine provided rich information related to breast cancer in the search results. All six standard websites were among the top 30 in search results of all four search engines. Google had the best search validity (in terms of whether a website could be opened), followed by Bing, Ask.com, and Yahoo!. The search results highly overlapped between the

  17. An ontology-based search engine for protein-protein interactions.

    Science.gov (United States)

    Park, Byungkyu; Han, Kyungsook

    2010-01-18

    Keyword matching or ID matching is the most common searching method in a large database of protein-protein interactions. They are purely syntactic methods, and retrieve the records in the database that contain a keyword or ID specified in a query. Such syntactic search methods often retrieve too few search results or no results despite many potential matches present in the database. We have developed a new method for representing protein-protein interactions and the Gene Ontology (GO) using modified Gödel numbers. This representation is hidden from users but enables a search engine using the representation to efficiently search protein-protein interactions in a biologically meaningful way. Given a query protein with optional search conditions expressed in one or more GO terms, the search engine finds all the interaction partners of the query protein by unique prime factorization of the modified Gödel numbers representing the query protein and the search conditions. Representing the biological relations of proteins and their GO annotations by modified Gödel numbers makes a search engine efficiently find all protein-protein interactions by prime factorization of the numbers. Keyword matching or ID matching search methods often miss the interactions involving a protein that has no explicit annotations matching the search condition, but our search engine retrieves such interactions as well if they satisfy the search condition with a more specific term in the ontology.
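
    The abstract does not spell out the "modified Gödel numbers", so the following is only a generic prime-encoding sketch of the underlying idea: assign each GO term a distinct prime, encode a protein's annotation set as the product of its primes, and test for a term by divisibility. The terms and annotations below are invented.

    # Each GO term gets a distinct prime number.
    term_prime = {"GO:0005634": 2, "GO:0003677": 3, "GO:0006351": 5}

    def encode(annotations):
        """Encode a set of GO terms as the product of their primes."""
        code = 1
        for term in annotations:
            code *= term_prime[term]
        return code

    def has_term(code, term):
        """A term is present iff its prime divides the encoded number."""
        return code % term_prime[term] == 0

    protein_code = encode({"GO:0005634", "GO:0003677"})   # 2 * 3 = 6
    print(has_term(protein_code, "GO:0003677"))           # True
    print(has_term(protein_code, "GO:0006351"))           # False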

  18. D-score: a search engine independent MD-score.

    Science.gov (United States)

    Vaudel, Marc; Breiter, Daniela; Beck, Florian; Rahnenführer, Jörg; Martens, Lennart; Zahedi, René P

    2013-03-01

    While peptides carrying PTMs are routinely identified in gel-free MS, the localization of the PTMs onto the peptide sequences remains challenging. Search engine scores of secondary peptide matches have been used in different approaches in order to infer the quality of site inference, by penalizing the localization whenever the search engine similarly scored two candidate peptides with different site assignments. In the present work, we show how the estimation of posterior error probabilities for peptide candidates allows the estimation of a PTM score called the D-score, for multiple search engine studies. We demonstrate the applicability of this score to three popular search engines: Mascot, OMSSA, and X!Tandem, and evaluate its performance using an already published high resolution data set of synthetic phosphopeptides. For those peptides with phosphorylation site inference uncertainty, the number of spectrum matches with correctly localized phosphorylation increased by up to 25.7% when compared to using Mascot alone, although the actual increase depended on the fragmentation method used. Since this method relies only on search engine scores, it can be readily applied to the scoring of the localization of virtually any modification at no additional experimental or in silico cost. © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  19. The Breakthrough Listen Initiative and the Future of the Search for Intelligent Life

    Science.gov (United States)

    Enriquez, J. Emilio; Siemion, Andrew; Croft, Steve; Hellbourg, Greg; Lebofsky, Matt; MacMahon, David; Price, Danny; DeBoer, David; Werthimer, Dan

    2017-05-01

    Unprecedented recent results in the fields of exoplanets and astrobiology have dramatically increased the interest in the potential existence of intelligent life elsewhere in the galaxy. Additionally, the capabilities of modern Searches for Extraterrestrial Intelligence (SETI) have increased tremendously. Much of this improvement is due to the ongoing development of wide bandwidth radio instruments and the Moore's Law increase in computing power over the previous decades. Together, these instrumentation improvements allow for narrow band signal searches of billions of frequency channels at once.The Breakthrough Listen Initiative (BL) was launched on July 20, 2015 at the Royal Society in London, UK with the goal to conduct the most comprehensive and sensitive search for advanced life in humanity's history. Here we detail important milestones achieved during the first year and a half of the program. We describe the key BL SETI surveys and briefly describe current facilities, including the Green Bank Telescope, the Automated Planet Finder and the Parkes Observatory. We also mention the ongoing and potential collaborations focused on complementary sciences, these include pulse searches of pulsars and FRBs, as well as astrophysically powered radio emission from stars targeted by our program.We conclude with a brief view towards future SETI searches with upcoming next-generation radio facilities such as SKA and ngVLA.

  20. Considerations for the development of task-based search engines

    DEFF Research Database (Denmark)

    Petcu, Paula; Dragusin, Radu

    2013-01-01

    Based on previous experience from working on a task-based search engine, we present a list of suggestions and ideas for an Information Retrieval (IR) framework that could inform the development of next-generation professional search systems. The specific task that we start from is the clinicians' information need in finding rare disease diagnostic hypotheses at the time and place where medical decisions are made. Our experience from the development of a search engine focused on supporting clinicians in completing this task has provided us with valuable insights into which aspects should be considered by the developers of vertical search engines.

  1. Grooker, KartOO, Addict-o-Matic and More: Really Different Search Engines

    Science.gov (United States)

    Descy, Don E.

    2009-01-01

    There are hundreds of unique search engines in the United States and thousands of unique search engines around the world. If people get into search engines designed just to search particular web sites, the number is in the hundreds of thousands. This article looks at: (1) clustering search engines, such as KartOO (www.kartoo.com) and Grokker…

  2. New Architectures for Presenting Search Results Based on Web Search Engines Users Experience

    Science.gov (United States)

    Martinez, F. J.; Pastor, J. A.; Rodriguez, J. V.; Lopez, Rosana; Rodriguez, J. V., Jr.

    2011-01-01

    Introduction: The Internet is a dynamic environment which is continuously being updated. Search engines have been, currently are and in all probability will continue to be the most popular systems in this information cosmos. Method: In this work, special attention has been paid to the series of changes made to search engines up to this point,…

  3. Searching for a New Way to Reach Patrons: A Search Engine Optimization Pilot Project at Binghamton University Libraries

    Science.gov (United States)

    Rushton, Erin E.; Kelehan, Martha Daisy; Strong, Marcy A.

    2008-01-01

    Search engine use is one of the most popular online activities. According to a recent OCLC report, nearly all students start their electronic research using a search engine instead of the library Web site. Instead of viewing search engines as competition, however, librarians at Binghamton University Libraries decided to employ search engine…

  4. Evaluation of Proteomic Search Engines for the Analysis of Histone Modifications

    Science.gov (United States)

    2015-01-01

    Identification of histone post-translational modifications (PTMs) is challenging for proteomics search engines. Including many histone PTMs in one search increases the number of candidate peptides dramatically, leading to low search speed and fewer identified spectra. To evaluate database search engines on identifying histone PTMs, we present a method in which one kind of modification is searched each time, for example, unmodified, individually modified, and multimodified, each search result is filtered with false discovery rate less than 1%, and the identifications of multiple search engines are combined to obtain confident results. We apply this method for eight search engines on histone data sets. We find that two search engines, pFind and Mascot, identify most of the confident results at a reasonable speed, so we recommend using them to identify histone modifications. During the evaluation, we also find some important aspects for the analysis of histone modifications. Our evaluation of different search engines on identifying histone modifications will hopefully help those who are hoping to enter the histone proteomics field. The mass spectrometry proteomics data have been deposited to the ProteomeXchange Consortium with the data set identifier PXD001118. PMID:25167464

  5. Evaluation of proteomic search engines for the analysis of histone modifications.

    Science.gov (United States)

    Yuan, Zuo-Fei; Lin, Shu; Molden, Rosalynn C; Garcia, Benjamin A

    2014-10-03

    Identification of histone post-translational modifications (PTMs) is challenging for proteomics search engines. Including many histone PTMs in one search increases the number of candidate peptides dramatically, leading to low search speed and fewer identified spectra. To evaluate database search engines on identifying histone PTMs, we present a method in which one kind of modification is searched each time, for example, unmodified, individually modified, and multimodified, each search result is filtered with false discovery rate less than 1%, and the identifications of multiple search engines are combined to obtain confident results. We apply this method for eight search engines on histone data sets. We find that two search engines, pFind and Mascot, identify most of the confident results at a reasonable speed, so we recommend using them to identify histone modifications. During the evaluation, we also find some important aspects for the analysis of histone modifications. Our evaluation of different search engines on identifying histone modifications will hopefully help those who are hoping to enter the histone proteomics field. The mass spectrometry proteomics data have been deposited to the ProteomeXchange Consortium with the data set identifier PXD001118.
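
    The combination step described above can be sketched roughly as follows: filter each engine's identifications at a false discovery rate below 1% and then merge the surviving hits. The data are invented, the q-values are assumed to be precomputed, and the use of a simple union as the combination rule is an assumption rather than the paper's exact procedure.

    # Per-engine hits as (spectrum, peptide, q_value); values are invented.
    engine_results = {
        "engineA": [("spectrum1", "PEPTIDEK", 0.003), ("spectrum2", "HISTONEK", 0.020)],
        "engineB": [("spectrum1", "PEPTIDEK", 0.001), ("spectrum3", "ACETYLK", 0.005)],
    }

    def filter_fdr(hits, threshold=0.01):
        """Keep only hits whose q-value is below the FDR threshold."""
        return {(spectrum, peptide) for spectrum, peptide, q in hits if q < threshold}

    confident = set()
    for engine, hits in engine_results.items():
        confident |= filter_fdr(hits)   # union of the per-engine confident identifications

    print(sorted(confident))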

  6. Chemical Information in Scirus and BASE (Bielefeld Academic Search Engine)

    Science.gov (United States)

    Bendig, Regina B.

    2009-01-01

    The author sought to determine to what extent the two search engines, Scirus and BASE (Bielefeld Academic Search Engines), would be useful to first-year university students as the first point of searching for chemical information. Five topics were searched and the first ten records of each search result were evaluated with regard to the type of…

  7. Estimating Search Engine Index Size Variability

    DEFF Research Database (Denmark)

    Van den Bosch, Antal; Bogers, Toine; De Kunder, Maurice

    2016-01-01

    One of the determining factors of the quality of Web search engines is the size of their index. In addition to its influence on search result quality, the size of the indexed Web can also tell us something about which parts of the WWW are directly accessible to the everyday user. We propose a novel method of estimating the size of a Web search engine's index by extrapolating from document frequencies of words observed in a large static corpus of Web pages. In addition, we provide a unique longitudinal perspective on the size of Google and Bing's indices over a nine-year period, from March 2006 until January 2015. We find that index size estimates of these two search engines tend to vary dramatically over time, with Google generally possessing a larger index than Bing. This result raises doubts about the reliability of previous one-off estimates of the size of the indexed Web. We find...
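
    A simplified sketch of the extrapolation idea: if a word appears in a known fraction of a representative background corpus and a search engine reports a hit count for that word, the index size is roughly the hit count divided by that fraction, and averaging over several words smooths the estimate. The corpus fractions and hit counts below are invented, and the actual method in the paper involves more careful corpus and word selection.

    # Fraction of documents in a background corpus containing each word (invented).
    corpus_doc_fraction = {"the": 0.60, "recipe": 0.02, "galaxy": 0.005}
    # Hit counts reported by a search engine for the same words (invented).
    engine_hit_counts = {"the": 24_000_000_000, "recipe": 900_000_000, "galaxy": 210_000_000}

    def estimate_index_size(fractions, hits):
        """Average the per-word extrapolated index-size estimates."""
        estimates = [hits[w] / fractions[w] for w in fractions if w in hits]
        return sum(estimates) / len(estimates)

    print(f"{estimate_index_size(corpus_doc_fraction, engine_hit_counts):.3e} documents")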

  8. The effective use of search engines on the Internet.

    Science.gov (United States)

    Younger, P

    This article explains how nurses can get the most out of researching information on the internet using the search engine Google. It also explores some of the other types of search engines that are available. Internet users are shown how to find text, images and reports and search within sites. Copyright issues are also discussed.

  9. Evidence-based Medicine Search: a customizable federated search engine.

    Science.gov (United States)

    Bracke, Paul J; Howse, David K; Keim, Samuel M

    2008-04-01

    This paper reports on the development of a tool by the Arizona Health Sciences Library (AHSL) for searching clinical evidence that can be customized for different user groups. The AHSL provides services to the University of Arizona's (UA's) health sciences programs and to the University Medical Center. Librarians at AHSL collaborated with UA College of Medicine faculty to create an innovative search engine, Evidence-based Medicine (EBM) Search, that provides users with a simple search interface to EBM resources and presents results organized according to an evidence pyramid. EBM Search was developed with a web-based configuration component that allows the tool to be customized for different specialties. Informal and anecdotal feedback from physicians indicates that EBM Search is a useful tool with potential in teaching evidence-based decision making. While formal evaluation is still being planned, a tool such as EBM Search, which can be configured for specific user populations, may help lower barriers to information resources in an academic health sciences center.

  10. SETI pioneers scientists talk about their search for extraterrestrial intelligence

    CERN Document Server

    Swift, David W.

    1990-01-01

    Why did some scientists decide to conduct a search for extraterrestrial intelligence (SETI)? What factors in their personal development predisposed them to such a quest? What obstacles did they encounter along the way? David Swift interviewed the first scientists involved in the search & offers a fascinating overview of the emergence of this modern scientific endeavor. He allows some of the most imaginative scientific thinkers of our time to hold forth on their views regarding SETI & extraterrestrial life & on how the field has developed. Readers will react with a range of opinions as broad as those concerning the likelihood of success in SETI itself. ''A goldmine of original information.''

  11. Noesis: Ontology based Scoped Search Engine and Resource Aggregator for Atmospheric Science

    Science.gov (United States)

    Ramachandran, R.; Movva, S.; Li, X.; Cherukuri, P.; Graves, S.

    2006-12-01

    The goal for search engines is to return results that are both accurate and complete. The search engines should find only what you really want and find everything you really want. Search engines (even meta search engines) lack semantics. Search is simply based on string matching between the user's query term and the resource database, and the semantics associated with the search string are not captured. For example, if an atmospheric scientist is searching for "pressure" related web resources, most search engines return inaccurate results such as web resources related to blood pressure. In this presentation, Noesis, a meta-search engine and resource aggregator that uses domain ontologies to provide scoped search capabilities, will be described. Noesis uses domain ontologies to help the user scope the search query to ensure that the search results are both accurate and complete. The domain ontologies guide the user to refine their search query and thereby reduce the user's burden of experimenting with different search strings. Semantics are captured by refining the query terms to cover synonyms, specializations, generalizations and related concepts. Noesis also serves as a resource aggregator. It categorizes the search results from different online resources, such as education materials, publications, datasets, and web search engines, that might be of interest to the user.

  12. The AXES-lite video search engine

    NARCIS (Netherlands)

    Chen, Shu; McGuinness, Kevin; Aly, Robin; de Jong, Franciska M.G.; O' Connor, Noel E.

    The aim of AXES is to develop tools that provide various types of users with new engaging ways to interact with audiovisual libraries, helping them discover, browse, navigate, search, and enrich archives. This paper describes the initial (lite) version of the AXES search engine, which is targeted at

  13. Dermatological image search engines on the Internet: do they work?

    Science.gov (United States)

    Cutrone, M; Grimalt, R

    2007-02-01

    Atlases on CD-ROM first replaced paediatric dermatology atlases printed on paper. This permitted a faster search and a practical comparison of differential diagnoses. The third step in the evolution of clinical atlases was the arrival of the online atlas. Many doctors now use Internet image search engines to obtain clinical images directly. The aim of this study was to test the reliability of image search engines compared to online atlases. We tested seven Internet image search engines with three paediatric dermatology diseases. In general, the service offered by the search engines is good, and continues to be free of charge. The correspondence between what we searched for and what we found was generally excellent, and the results contained no advertisements. Most Internet search engines provided similar results but some were more user friendly than others. It is not necessary to repeat the same search with Picsearch, Lycos and MSN, as the response would be the same; there is a possibility that they might share software. Image search engines are a useful, free and precise method of obtaining paediatric dermatology images for teaching purposes. There is still the matter of copyright to be resolved. What are the legal uses of these 'free' images? How do we define 'teaching purposes'? New watermark methods and encrypted electronic signatures might solve these problems and answer these questions.

  14. Anthropomorphism in the search for extra-terrestrial intelligence - The limits of cognition?

    Science.gov (United States)

    Bohlmann, Ulrike M.; Bürger, Moritz J. F.

    2018-02-01

    The question "Are we alone?" lingers in the human mind since ancient times. Early human civilisations populated the heavens above with a multitude of Gods endowed with some all too human characteristics - from their outer appearance to their innermost motivations. En passant they created thereby their own cultural founding myths on which they built their understanding of the world and its phenomena and deduced as well rules for the functioning of their own society. Advancing technology has enabled us to conduct this human quest for knowledge with more scientific means: optical and radio-wavelengths are being monitored for messages by an extra-terrestrial intelligence and active messaging attempts have also been undertaken. Scenarios have been developed for a possible detection of extra-terrestrial intelligence and post-detection guidelines and protocols have been elaborated. The human responses to the whole array of questions concerning the potential existence, discovery of and communication/interaction with an extra-terrestrial intelligence share as one clear thread a profound anthropomorphism, which ascribes classical human behavioural patterns also to an extra-terrestrial intelligence in much the same way as our ancestors attributed comparable conducts to mythological figures. This paper aims at pinpointing this thread in a number of classical reactions to basic questions related to the search for extra-terrestrial intelligence. Many of these reactions are based on human motives such as curiosity and fear, rationalised by experience and historical analogy and modelled in the Science Fiction Culture by literature and movies. Scrutinising the classical hypothetical explanations of the Fermi paradox under the angle of a potentially undue anthropomorphism, this paper intends to assist in understanding our human epistemological limitations in the search for extra-terrestrial intelligence. This attempt is structured into a series of questions: I. Can we be alone? II

  15. Sundanese ancient manuscripts search engine using probability approach

    Science.gov (United States)

    Suryani, Mira; Hadi, Setiawan; Paulus, Erick; Nurma Yulita, Intan; Supriatna, Asep K.

    2017-10-01

    Today, Information and Communication Technology (ICT) has become a regular part of every aspect of life, including culture and heritage. Sundanese ancient manuscripts, as Sundanese heritage, are in damaged condition, as is the information they contain. In order to preserve the information in Sundanese ancient manuscripts and make them easier to search, a search engine has been developed. The search engine must have good computing ability. To obtain the best computation in the developed search engine, three probabilistic approaches were compared in this study: the Bayesian Networks Model, Divergence from Randomness with the PL2 distribution (DFR-PL2), and DFR-PL2F, a derivative of DFR-PL2. The three probabilistic approaches are supported by a document index and three different weighting methods: term occurrence, term frequency, and TF-IDF. The experiment involved 12 Sundanese ancient manuscripts containing 474 distinct terms. The developed search engine was tested with 50 random queries for three types of query. The results showed that for both single and multiple queries, the best search performance was given by the combination of the PL2F approach and the TF-IDF weighting method. The performance was evaluated using average response time, about 0.08 seconds, and Mean Average Precision (MAP), about 0.33.
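
    As a side note, the Mean Average Precision figure reported above can be computed in a few lines. The sketch below uses made-up query results and relevance judgments purely to illustrate the measure; it is not the study's data.

```python
# Minimal sketch of Mean Average Precision (MAP). `results` maps each query to
# the ranked list of returned manuscript ids, and `relevant` maps each query to
# the set of ids judged relevant; all ids below are illustrative placeholders.

def average_precision(ranked_ids, relevant_ids):
    """Precision averaged over the rank positions of the relevant documents."""
    hits, precisions = 0, []
    for rank, doc_id in enumerate(ranked_ids, start=1):
        if doc_id in relevant_ids:
            hits += 1
            precisions.append(hits / rank)
    return sum(precisions) / len(relevant_ids) if relevant_ids else 0.0

def mean_average_precision(results, relevant):
    return sum(average_precision(results[q], relevant[q]) for q in results) / len(results)

results = {"q1": ["m3", "m7", "m1"], "q2": ["m2", "m9", "m4"]}
relevant = {"q1": {"m3", "m1"}, "q2": {"m9"}}
print(round(mean_average_precision(results, relevant), 2))
```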

  16. Short-term Internet search using makes people rely on search engines when facing unknown issues.

    Science.gov (United States)

    Wang, Yifan; Wu, Lingdan; Luo, Liang; Zhang, Yifen; Dong, Guangheng

    2017-01-01

    Internet search engines, which have powerful search/sort functions and ease-of-use features, have become an indispensable tool for many individuals. The current study tests whether short-term Internet search training can make people more dependent on search engines. Thirty-one of forty subjects completed the search training study, which included a pre-test, six days of Internet search training, and a post-test. During the pre- and post-tests, subjects were asked to search online for the answers to 40 unusual questions, remember the answers, and recall them in the scanner. Unlearned questions were randomly presented at the recall stage in order to elicit search impulse. Compared to the pre-test, subjects in the post-test reported a higher impulse to use search engines to answer unlearned questions. Consistently, subjects showed higher brain activations in the dorsolateral prefrontal cortex and anterior cingulate cortex in the post-test than in the pre-test. In addition, there were significant positive correlations between self-reported search impulse and brain responses in the frontal areas. The results suggest that just six days of Internet search training can make people dependent on search tools when facing unknown issues. People easily become dependent on Internet search engines.

  17. Short-term Internet search using makes people rely on search engines when facing unknown issues.

    Directory of Open Access Journals (Sweden)

    Yifan Wang

    Full Text Available Internet search engines, which have powerful search/sort functions and ease-of-use features, have become an indispensable tool for many individuals. The current study tests whether short-term Internet search training can make people more dependent on search engines. Thirty-one of forty subjects completed the search training study, which included a pre-test, six days of Internet search training, and a post-test. During the pre- and post-tests, subjects were asked to search online for the answers to 40 unusual questions, remember the answers, and recall them in the scanner. Unlearned questions were randomly presented at the recall stage in order to elicit search impulse. Compared to the pre-test, subjects in the post-test reported a higher impulse to use search engines to answer unlearned questions. Consistently, subjects showed higher brain activations in the dorsolateral prefrontal cortex and anterior cingulate cortex in the post-test than in the pre-test. In addition, there were significant positive correlations between self-reported search impulse and brain responses in the frontal areas. The results suggest that just six days of Internet search training can make people dependent on search tools when facing unknown issues. People easily become dependent on Internet search engines.

  18. Intelligent energy allocation strategy for PHEV charging station using gravitational search algorithm

    Science.gov (United States)

    Rahman, Imran; Vasant, Pandian M.; Singh, Balbir Singh Mahinder; Abdullah-Al-Wadud, M.

    2014-10-01

    Recent research on the use of green technologies to reduce pollution and increase the penetration of renewable energy sources in the transportation sector is gaining popularity. The development of a smart grid environment focused on PHEVs may also heal some of the prevailing grid problems by enabling the implementation of the Vehicle-to-Grid (V2G) concept. Intelligent energy management is an important issue that has already drawn much attention from researchers. Most of these works require the formulation of mathematical models and make extensive use of computational intelligence-based optimization techniques to solve many technical problems. Higher penetration of PHEVs requires adequate charging infrastructure as well as smart charging strategies. We used the Gravitational Search Algorithm (GSA) to intelligently allocate energy to PHEVs considering constraints such as energy price, remaining battery capacity, and remaining charging time.
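
    For orientation, the core GSA update treats candidate allocations as mass-bearing agents that attract one another, with better-scoring agents carrying more mass. The sketch below is a generic, simplified GSA; the fitness function, parameter values, and vehicle count are assumptions standing in for the record's cost model (energy price, battery capacity, charging time).

```python
# Compact, illustrative sketch of the Gravitational Search Algorithm (GSA) core
# loop for allocation-type problems. The fitness function is a placeholder.

import numpy as np

def gsa_minimize(fitness, dim, n_agents=20, iters=100, g0=100.0, lo=0.0, hi=1.0, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.uniform(lo, hi, size=(n_agents, dim))   # agent positions (candidate allocations)
    v = np.zeros_like(x)                            # agent velocities
    for t in range(iters):
        fit = np.array([fitness(a) for a in x])
        best, worst = fit.min(), fit.max()
        m = (fit - worst) / (best - worst + 1e-12)  # better (lower) fitness -> larger mass
        m = m / (m.sum() + 1e-12)
        g = g0 * np.exp(-20.0 * t / iters)          # decaying gravitational constant
        acc = np.zeros_like(x)
        for i in range(n_agents):                   # total force on each agent
            for j in range(n_agents):
                if i == j:
                    continue
                diff = x[j] - x[i]
                dist = np.linalg.norm(diff) + 1e-12
                acc[i] += rng.random() * g * m[j] * diff / dist
        v = rng.random(x.shape) * v + acc
        x = np.clip(x + v, lo, hi)
    best_idx = int(np.argmin([fitness(a) for a in x]))
    return x[best_idx]

# toy usage: allocate charge fractions to 5 vehicles, minimizing a dummy cost
solution = gsa_minimize(lambda a: float(np.sum((a - 0.6) ** 2)), dim=5)
print(solution)
```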

  19. Copyright over Works Reproduced and Published Online by Search Engines

    Directory of Open Access Journals (Sweden)

    Ernesto Rengifo García

    2016-12-01

    Full Text Available Search engines are an important technological tool that facilitates the dissemination of and access to information on the Internet. However, when it comes to works protected by authors' rights, in the case of continental law, or by copyright, in the Anglo-Saxon tradition, it is difficult to determine whether search engines infringe the rights of the owners of these works. Faced with this situation, the US and Europe have employed the exceptions to authors' rights and fair use to decide whether search engines infringe owners' rights. This article carries out a comparative analysis of the different judicial decisions in the US and Europe on search engines and protected works.

  20. Automatic Planning of External Search Engine Optimization

    Directory of Open Access Journals (Sweden)

    Vita Jasevičiūtė

    2015-07-01

    Full Text Available This paper describes an investigation of an external search engine optimization (SEO) action planning tool, dedicated to automatically extracting a small set of the most important keywords for each month over a whole-year period. The keywords in the set are extracted according to externally measured parameters, such as the average number of searches during the year and for every month individually. Additionally, the position of the optimized web site for each keyword is taken into account. The generated optimization plan is similar to the optimization plans prepared manually by SEO professionals and can be successfully used as a support tool for web site search engine optimization.
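
    A toy sketch of this kind of monthly keyword selection is given below: for each month, pick the few keywords with the highest expected payoff, combining that month's search volume with how far the site currently sits from the top position. The scoring formula, keywords, and figures are assumptions for illustration, not the tool's actual planning logic.

```python
# Illustrative monthly keyword selection for an SEO plan. Data are placeholders.

keywords = {
    # keyword: (searches per month Jan..Dec, current ranking position)
    "ski rental": ([900, 700, 400, 100, 50, 20, 20, 30, 60, 200, 600, 950], 14),
    "bike tours": ([50, 80, 200, 500, 800, 950, 900, 850, 500, 200, 80, 50], 9),
    "city guide": ([300] * 12, 22),
}

def monthly_plan(keywords, per_month=2):
    plan = {}
    for month in range(12):
        # score = monthly volume * distance from the top position (nothing to
        # gain for a keyword already ranked first)
        scored = sorted(
            keywords,
            key=lambda k: keywords[k][0][month] * (keywords[k][1] - 1),
            reverse=True,
        )
        plan[month + 1] = scored[:per_month]
    return plan

for month, picks in monthly_plan(keywords).items():
    print(month, picks)
```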

  1. Intelligent Information Systems for Web Product Search

    NARCIS (Netherlands)

    D. Vandic (Damir)

    2017-01-01

    markdownabstractOver the last few years, we have experienced an increase in online shopping. Consequently, there is a need for efficient and effective product search engines. The rapid growth of e-commerce, however, has also introduced some challenges. Studies show that users can get overwhelmed by

  2. Study of Search Engine Transaction Logs Shows Little Change in How Users Use Search Engines. A review of: Jansen, Bernard J., and Amanda Spink. “How Are We Searching the World Wide Web? A Comparison of Nine Search Engine Transaction Logs.” Information Processing & Management 42.1 (2006): 248-263.

    Directory of Open Access Journals (Sweden)

    David Hook

    2006-09-01

    Full Text Available Objective – To examine the interactions between users and search engines, and how they have changed over time. Design – Comparative analysis of search engine transaction logs. Setting – Nine major analyses of search engine transaction logs. Subjects – Nine web search engine studies (4 European, 5 American) over a seven-year period, covering the search engines Excite, Fireball, AltaVista, BWIE and AllTheWeb. Methods – The results from individual studies are compared by year of study for percentages of single query sessions, one-term queries, operator (and, or, not, etc.) usage and single result page viewing. As well, the authors group the search queries into eleven different topical categories and compare how the breakdown has changed over time. Main Results – Based on the percentage of single query sessions, it does not appear that the complexity of interactions has changed significantly for either the U.S.-based or the European-based search engines. As well, there was little change observed in the percentage of one-term queries over the years of study for either the U.S.-based or the European-based search engines. Few users (generally less than 20%) use Boolean or other operators in their queries, and these percentages have remained relatively stable. One area of noticeable change is in the percentage of users viewing only one results page, which has increased over the years of study. Based on the studies of the U.S.-based search engines, the topical categories of ‘People, Place or Things’ and ‘Commerce, Travel, Employment or Economy’ are becoming more popular, while the categories of ‘Sex and Pornography’ and ‘Entertainment or Recreation’ are declining. Conclusions – The percentage of users viewing only one results page increased during the years of the study, while the percentages of single query sessions, one-term sessions and operator usage remained stable. The increase in single result page viewing

  3. Can electronic search engines optimize screening of search results in systematic reviews: an empirical study.

    Science.gov (United States)

    Sampson, Margaret; Barrowman, Nicholas J; Moher, David; Clifford, Tammy J; Platt, Robert W; Morrison, Andra; Klassen, Terry P; Zhang, Li

    2006-02-24

    Most electronic search efforts directed at identifying primary studies for inclusion in systematic reviews rely on the optimal Boolean search features of search interfaces such as DIALOG and Ovid. Our objective is to test the ability of an Ultraseek search engine to rank MEDLINE records of the included studies of Cochrane reviews within the top half of all the records retrieved by the Boolean MEDLINE search used by the reviewers. Collections were created using the MEDLINE bibliographic records of included and excluded studies listed in the review and all records retrieved by the MEDLINE search. Records were converted to individual HTML files. Collections of records were indexed and searched through a statistical search engine, Ultraseek, using review-specific search terms. Our data sources, systematic reviews published in the Cochrane library, were included if they reported using at least one phase of the Cochrane Highly Sensitive Search Strategy (HSSS), provided citations for both included and excluded studies and conducted a meta-analysis using a binary outcome measure. Reviews were selected if they yielded between 1000-6000 records when the MEDLINE search strategy was replicated. Nine Cochrane reviews were included. Included studies within the Cochrane reviews were found within the first 500 retrieved studies more often than would be expected by chance. Across all reviews, recall of included studies into the top 500 was 0.70. There was no statistically significant difference in ranking when comparing included studies with just the subset of excluded studies listed as excluded in the published review. The relevance ranking provided by the search engine was better than expected by chance and shows promise for the preliminary evaluation of large results from Boolean searches. A statistical search engine does not appear to be able to make fine discriminations concerning the relevance of bibliographic records that have been pre-screened by systematic reviewers.
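
    The ranking evaluation described here boils down to a recall-at-k computation: how many of a review's included studies fall within the engine's top 500 ranked records. The sketch below illustrates the measure with placeholder identifiers, not the study's actual records.

```python
# Minimal recall@k sketch: the fraction of a review's included studies whose
# records are ranked within the top k of all records retrieved by the Boolean
# search. Identifiers below are placeholders.

def recall_at_k(ranked_record_ids, included_ids, k=500):
    top_k = set(ranked_record_ids[:k])
    return len(top_k & included_ids) / len(included_ids)

ranked = ["pmid_%d" % i for i in range(1, 3001)]      # engine's relevance ranking
included = {"pmid_12", "pmid_480", "pmid_2100"}       # studies included in the review
print(recall_at_k(ranked, included))                  # 2 of the 3 fall in the top 500
```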

  4. Teen smoking cessation help via the Internet: a survey of search engines.

    Science.gov (United States)

    Edwards, Christine C; Elliott, Sean P; Conway, Terry L; Woodruff, Susan I

    2003-07-01

    The objective of this study was to assess Web sites related to teen smoking cessation on the Internet. Seven Internet search engines were searched using the keywords teen quit smoking. The top 20 hits from each search engine were reviewed and categorized. The keywords teen quit smoking produced between 35 and 400,000 hits depending on the search engine. Of 140 potential hits, 62% were active, unique sites; 85% were listed by only one search engine; and 40% focused on cessation. Findings suggest that legitimate on-line smoking cessation help for teens is constrained by search engine choice and the amount of time teens spend looking through potential sites. Resource listings should be updated regularly. Smoking cessation Web sites need to be picked up by multiple search engine searches. Further evaluation of smoking cessation Web sites needs to be conducted to identify the most effective help for teens.

  5. Interest in Anesthesia as Reflected by Keyword Searches using Common Search Engines.

    Science.gov (United States)

    Liu, Renyu; García, Paul S; Fleisher, Lee A

    2012-01-23

    Since current general interest in anesthesia is unknown, we analyzed internet keyword searches to gauge general interest in anesthesia in comparison with surgery and pain. The trend of keyword searches from 2004 to 2010 related to anesthesia and anaesthesia was investigated using Google Insights for Search. The trend in the number of peer-reviewed articles on anesthesia cited in PubMed and Medline from 2004 to 2010 was investigated. The average cost of advertising on anesthesia, surgery and pain was estimated using Google AdWords. Search results from other common search engines were also analyzed. Correlation between year and relative number of searches was determined. Although different search engines may report different total numbers of search results (available posts), the ratios of search results between some common keywords related to perioperative care are comparable, indicating a similar trend. The number of peer-reviewed manuscripts on "anesthesia" and the proportion of papers on "anesthesia and outcome" are trending up. Estimated advertising spending is lower for anesthesia-related terms than for pain or surgery because of the relatively smaller search traffic. General interest in anesthesia (anaesthesia) as measured by internet searches appears to be decreasing. Pain, preanesthesia evaluation, anesthesia and outcome, and side effects of anesthesia are the critical areas that anesthesiologists should focus on to address the increasing concerns.

  6. Practical and Efficient Searching in Proteomics: A Cross Engine Comparison

    Science.gov (United States)

    Paulo, Joao A.

    2014-01-01

    Background Analysis of large datasets produced by mass spectrometry-based proteomics relies on database search algorithms to sequence peptides and identify proteins. Several such scoring methods are available, each based on different statistical foundations and thereby not producing identical results. Here, the aim is to compare peptide and protein identifications using multiple search engines and examine the additional proteins gained by increasing the number of technical replicate analyses. Methods A HeLa whole cell lysate was analyzed on an Orbitrap mass spectrometer for 10 technical replicates. The data were combined and searched using Mascot, SEQUEST, and Andromeda. Comparisons were made of peptide and protein identifications among the search engines. In addition, searches using each engine were performed with an incrementing number of technical replicates. Results The number and identity of peptides and proteins differed across search engines. For all three search engines, the differences in protein identifications were greater than the differences in peptide identifications, indicating that the major source of the disparity may be at the protein inference grouping level. The data also revealed that analysis of 2 technical replicates can increase protein identifications by up to 10-15%, while a third replicate results in an additional 4-5%. Conclusions The data emphasize two practical methods of increasing the robustness of mass spectrometry data analysis. The data show that 1) using multiple search engines can expand the number of identified proteins (union) and validate protein identifications (intersection), and 2) analysis of 2 or 3 technical replicates can substantially expand protein identifications. Moreover, information can be extracted from a dataset by performing database searching with different engines and performing technical repeats, which requires no additional sample preparation and effectively utilizes research time and effort. PMID:25346847

  7. A unified architecture for biomedical search engines based on semantic web technologies.

    Science.gov (United States)

    Jalali, Vahid; Matash Borujerdi, Mohammad Reza

    2011-04-01

    There has been huge growth in the volume of published biomedical research in recent years. Many medical search engines have been designed and developed to address the ever-growing information needs of biomedical experts and curators. Significant progress has been made in utilizing the knowledge embedded in medical ontologies and controlled vocabularies to assist these engines. However, the lack of a common architecture for the ontologies used and for the overall retrieval process hampers the evaluation of different search engines, and interoperability between them, under unified conditions. In this paper, a unified architecture for medical search engines is introduced. The proposed model contains standard schemas, declared in semantic web languages, for the ontologies and documents used by search engines. Unified models for the annotation and retrieval processes are other parts of the introduced architecture. A sample search engine is also designed and implemented based on the proposed architecture. The search engine is evaluated using two test collections, and results are reported in terms of precision vs. recall and mean average precision for the different approaches used by this search engine.

  8. Evaluating search effectiveness of some selected search engines ...

    African Journals Online (AJOL)

    With advancements in technology, many individuals are becoming familiar with the internet, and a lot of users seek information on the World Wide Web (WWW) using a variety of search engines. This research work evaluates the retrieval effectiveness of Google, Yahoo, Bing, AOL and Baidu. Precision, relative recall and response ...

  9. Robot-aided electrospinning toward intelligent biomedical engineering.

    Science.gov (United States)

    Tan, Rong; Yang, Xiong; Shen, Yajing

    2017-01-01

    The rapid development of robotics offers new opportunities for traditional biofabrication, with higher accuracy and controllability, which provides great potential for intelligent biomedical engineering. This paper reviews the state of the art of robotics in a widely used biomaterial fabrication process, i.e., electrospinning, including its working principle, main applications, challenges, and prospects. First, the principle and technique of electrospinning are introduced by categorizing it into melt electrospinning, solution electrospinning, and near-field electrospinning. Then, the applications of electrospinning in biomedical engineering are introduced briefly from the aspects of drug delivery, tissue engineering, and wound dressing. After that, we summarize the existing problems in traditional electrospinning, such as low production, rough nanofibers, and uncontrolled morphology, and then discuss how those problems are addressed by robotics via four case studies. Lastly, the challenges and outlook for robotics in electrospinning are discussed.

  10. Science, religion, and the search for extraterrestrial intelligence

    CERN Document Server

    Wilkinson, David

    2013-01-01

    If the discovery of life elsewhere in the universe is just around the corner, what would be the consequences for religion? Would it represent another major conflict between science and religion, even leading to the death of faith? Some would suggest that the discovery of any suggestion of extraterrestrial life would have a greater impact than even the Copernican and Darwinian revolutions. It is now over 50 years since the first modern scientific papers were published on the search for extraterrestrial intelligence (SETI). Yet the religious implications of this search and possible discovery have never been systematically addressed in the scientific or theological arena. SETI is now entering its most important era of scientific development. New observation techniques are leading to the discovery of extra-solar planets daily, and the Kepler mission has already collected over 1000 planetary candidates. This deluge of data is transforming the scientific and popular view of the existence of extraterrestrial intel...

  11. The Effect of Internet Searches on Afforestation: The Case of a Green Search Engine

    Directory of Open Access Journals (Sweden)

    Pedro Palos-Sanchez

    2018-01-01

    Full Text Available Ecosia is an Internet search engine that plants trees with the income obtained from advertising. This study explored the factors that affect the adoption of Ecosia.org from the perspective of technology adoption and trust. This was done by using the Unified Theory of Acceptance and Use of Technology (UTAUT2) and then analyzing the results with PLS-SEM (Partial Least Squares-Structural Equation Modeling). Subsequently, a survey was conducted with a structured questionnaire on search engines, which yielded the following results: (1) the idea of a company helping to mitigate the effects of climate change by planting trees is well received by Internet users, although few people accept the idea of changing their habits away from traditional search engines; (2) Ecosia is a search engine believed to have higher compatibility and to need fewer hardware resources; and (3) ecological marketing is an appropriate and forward-looking strategy that can increase the intention to use a technological product. Based on the results obtained, this study shows that a search engine or other service provided on the Internet, which can be audited (visits, searches, files, etc.), can also contribute to curbing the effects of deforestation and climate change. In addition, companies, and especially technological start-ups, are advised to take into account that users feel better using these tools. Finally, this study urges foundations and non-governmental organizations to fight against the effects of deforestation by supporting these initiatives. The study also urges companies to support technological services and to follow the behavior of Ecosia.org in order to positively influence user satisfaction by using ecological marketing strategies.

  12. Search Engine For Ebook Portal

    Directory of Open Access Journals (Sweden)

    Prashant Kanade

    2017-05-01

    Full Text Available The purpose of this paper is to establish the textual analytics involved in developing a search engine for an ebook portal. We have extracted our dataset from Project Gutenberg using a robot harvester. Textual analytics is used for efficient search retrieval. The entire dataset is represented using the Vector Space Model, where each document is a vector in the vector space. Further, for computational purposes we represent our dataset in the form of a Term Frequency-Inverse Document Frequency (tf-idf) matrix. The first step involves obtaining the most coherent sequence of words in the search query entered. The entered query is processed using front-end algorithms, including a spell checker, text segmentation, and language modeling. Back-end processing includes similarity modeling, clustering, indexing, and retrieval. The relationship between documents and words is established using cosine similarity measured between the documents and words in the vector space. The clustering performed is used to suggest books that are similar to the search query entered by the user. Lastly, the Lucene-based Elasticsearch engine is used for indexing the documents, which allows faster retrieval of data. Elasticsearch returns a dictionary and creates a tf-idf matrix. The processed query is compared with the dictionary obtained, and the tf-idf matrix is used to calculate a score for each match to give the most relevant results.
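
    The core scoring step described here (tf-idf document vectors compared to the query by cosine similarity) can be sketched in a few lines. The toy corpus and query below are placeholders rather than the portal's harvested Gutenberg data, and scikit-learn is used only for brevity.

```python
# Minimal vector-space scoring sketch: documents as tf-idf vectors, query
# scored against them by cosine similarity.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

books = [
    "a study in scarlet by arthur conan doyle",
    "the adventures of sherlock holmes",
    "pride and prejudice by jane austen",
]

vectorizer = TfidfVectorizer()
doc_matrix = vectorizer.fit_transform(books)          # documents as tf-idf vectors

query_vec = vectorizer.transform(["sherlock holmes adventures"])
scores = cosine_similarity(query_vec, doc_matrix).ravel()

# rank books by similarity to the query, best first
for idx in scores.argsort()[::-1]:
    print(round(float(scores[idx]), 3), books[idx])
```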

  13. Real-time earthquake monitoring using a search engine method.

    Science.gov (United States)

    Zhang, Jie; Zhang, Haijiang; Chen, Enhong; Zheng, Yi; Kuang, Wenhuan; Zhang, Xiong

    2014-12-04

    When an earthquake occurs, seismologists want to use recorded seismograms to infer its location, magnitude and source-focal mechanism as quickly as possible. If such information could be determined immediately, timely evacuations and emergency actions could be undertaken to mitigate earthquake damage. Current advanced methods can report the initial location and magnitude of an earthquake within a few seconds, but estimating the source-focal mechanism may require minutes to hours. Here we present an earthquake search engine, similar to a web search engine, that we developed by applying a computer fast search method to a large seismogram database to find waveforms that best fit the input data. Our method is several thousand times faster than an exact search. For an Mw 5.9 earthquake on 8 March 2012 in Xinjiang, China, the search engine can infer the earthquake's parameters in <1 s after receiving the long-period surface wave data.

  14. Dynamics of a macroscopic model characterizing mutualism of search engines and web sites

    Science.gov (United States)

    Wang, Yuanshi; Wu, Hong

    2006-05-01

    We present a model to describe the mutualism relationship between search engines and web sites. In the model, search engines and web sites benefit from each other while the search engines are derived products of the web sites and cannot survive independently. Our goal is to show strategies for the search engines to survive in the internet market. From mathematical analysis of the model, we show that mutualism does not always result in survival. We show various conditions under which the search engines would tend to extinction, persist or grow explosively. Then by the conditions, we deduce a series of strategies for the search engines to survive in the internet market. We present conditions under which the initial number of consumers of the search engines has little contribution to their persistence, which is in agreement with the results in previous works. Furthermore, we show novel conditions under which the initial value plays an important role in the persistence of the search engines and deduce new strategies. We also give suggestions for the web sites to cooperate with the search engines in order to form a win-win situation.
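
    The record does not reproduce the model's equations, so the sketch below only illustrates the general kind of obligate mutualism it describes: web sites grow on their own, while search engines decline without web sites and persist only when the sites sustain them. All equations and parameter values are assumptions chosen for illustration, not the paper's model.

```python
# Generic obligate-mutualism sketch: w = web sites, s = search engines.

import numpy as np
from scipy.integrate import solve_ivp

def mutualism(t, y, r_w=0.5, k_w=100.0, c_w=0.01, d_s=0.4, b_s=0.02, a=0.01, e_s=0.01):
    w, s = y
    # sites: logistic growth plus a saturating benefit from engines
    dw = r_w * w * (1 - w / k_w) + c_w * w * s / (1 + s)
    # engines: die out without sites, benefit saturates in w, limited by competition
    ds = s * (-d_s + b_s * w / (1 + a * w) - e_s * s)
    return [dw, ds]

sol = solve_ivp(mutualism, (0, 200), [10.0, 1.0])
w_end, s_end = sol.y[:, -1]
print(f"web sites: {w_end:.1f}, search engines: {s_end:.1f}")
```

    With these illustrative parameters the engines first decline (too few sites to sustain them) and then persist once the sites approach their carrying capacity; removing the benefit term drives the engines to extinction, echoing the regimes the record discusses.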

  15. The Theory of Planned Behaviour Applied to Search Engines as a Learning Tool

    Science.gov (United States)

    Liaw, Shu-Sheng

    2004-01-01

    Search engines have been developed for helping learners to seek online information. Based on theory of planned behaviour approach, this research intends to investigate the behaviour of using search engines as a learning tool. After factor analysis, the results suggest that perceived satisfaction of search engine, search engines as an information…

  16. MuZeeker - Adapting a music search engine for mobile phones

    DEFF Research Database (Denmark)

    Larsen, Jakob Eg; Halling, Søren Christian; Sigurdsson, Magnus Kristinn

    2010-01-01

    We describe MuZeeker, a search engine with domain knowledge based on Wikipedia. MuZeeker enables the user to refine a search in multiple steps by means of category selection. In the present version we focus on multimedia search related to music and we present two prototype search applications (web-based and mobile) and discuss the issues involved in adapting the search engine for mobile phones. A category based filtering approach enables the user to refine a search through relevance feedback by category selection instead of typing additional text, which is hypothesized to be an advantage in the mobile MuZeeker application. We report from two usability experiments using the think aloud protocol, in which N=20 participants performed tasks using MuZeeker and a customized Google search engine. In both experiments web-based and mobile user interfaces were used. The experiment shows that participants are capable...

  17. Design and economic investigation of shell and tube heat exchangers using Improved Intelligent Tuned Harmony Search algorithm

    Directory of Open Access Journals (Sweden)

    Oguz Emrah Turgut

    2014-12-01

    Full Text Available This study explores the thermal design of shell and tube heat exchangers using the Improved Intelligent Tuned Harmony Search (I-ITHS) algorithm. Intelligent Tuned Harmony Search (ITHS) is an upgraded version of the harmony search algorithm which has the advantage of deciding between intensification and diversification processes by applying a proper pitch-adjusting strategy. In this study, we aim to improve the search capacity of the ITHS algorithm by utilizing chaotic sequences instead of uniformly distributed random numbers and by applying alternative search strategies, inspired by the Artificial Bee Colony algorithm and Opposition Based Learning, on promising areas (best solutions). Design variables including baffle spacing, shell diameter, tube outer diameter and number of tube passes are used to minimize the total cost of the heat exchanger, which incorporates capital investment and the sum of discounted annual energy expenditures related to pumping and heat exchanger area. Results show that I-ITHS can be utilized in optimizing shell and tube heat exchangers.
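
    For readers unfamiliar with the underlying method, the sketch below shows a bare-bones harmony search loop (harmony memory, memory consideration, pitch adjustment, random consideration). It is a generic illustration under assumed parameter values, not the I-ITHS variant itself, whose chaotic sequences and opposition-based moves sit on top of this skeleton; the objective function is a stand-in for the heat-exchanger cost model.

```python
# Bare-bones harmony search sketch with an illustrative objective.

import random

def harmony_search(objective, bounds, hms=10, hmcr=0.9, par=0.3, iters=2000, seed=1):
    random.seed(seed)
    memory = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(hms)]
    scores = [objective(h) for h in memory]
    for _ in range(iters):
        new = []
        for d, (lo, hi) in enumerate(bounds):
            if random.random() < hmcr:                      # memory consideration
                value = random.choice(memory)[d]
                if random.random() < par:                   # pitch adjustment
                    value += random.uniform(-1, 1) * 0.01 * (hi - lo)
            else:                                           # random consideration
                value = random.uniform(lo, hi)
            new.append(min(max(value, lo), hi))
        new_score = objective(new)
        worst = max(range(hms), key=lambda i: scores[i])
        if new_score < scores[worst]:                       # replace the worst harmony
            memory[worst], scores[worst] = new, new_score
    best = min(range(hms), key=lambda i: scores[i])
    return memory[best], scores[best]

# toy usage: four normalized design variables standing in for baffle spacing,
# shell diameter, tube outer diameter and number of tube passes
best_x, best_cost = harmony_search(lambda x: sum((xi - 0.3) ** 2 for xi in x), bounds=[(0, 1)] * 4)
print(best_x, best_cost)
```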

  18. The Breakthrough Listen Search for Intelligent Life: the first SETI results and other future science.

    Science.gov (United States)

    Enriquez, J. Emilio; Breakthrough Listen Team

    2018-01-01

    The Breakthrough Listen (BL) Initiative is the largest campaign in human history in the search for extraterrestrial intelligence. The work presented here is the first BL search for engineered signals. It comprises a sample of 692 nearby stars within 50 pc. We used the Green Bank Telescope (GBT) to conduct observations over 1.1-1.9 GHz (L-band). Our observing strategy allows us to reject most of the detected signals as terrestrial interference. During the analysis, eleven stars showed events that passed our thresholding algorithm, but detailed analysis of their properties indicates they are consistent with known examples of anthropogenic radio frequency interference. This small number of false positives and their understood properties give confidence in the techniques used for this search. We conclude that, at the time of our observations, none of the observed systems hosted high-duty-cycle radio transmitters emitting at the observed frequencies with an EIRP of 10^13 W, readily achievable by our own civilization. We can place limits on the presence of engineered signals from putative extraterrestrial civilizations inhabiting the environs of the target stars. Our results suggest that fewer than ~0.1% of the stellar systems within 50 pc possess the type of transmitters searched for in this survey. This work provides the most stringent limit on the number of low power radio transmitters around nearby stars to date. We explored several metrics to compare our results to previous SETI efforts. We developed a new figure-of-merit that can encompass a wider set of parameters and can be used in future SETI experiments for a meaningful comparison. We note that the current BL state-of-the-art digital backend installed at the Green Bank Observatory is the fastest ever used for a SETI experiment by a factor of a few. Here we will describe the potential use of the BL backend by other groups for complementary science, as well as mention the ongoing and potential collaborations focused in

  19. Enhancing discovery in spatial data infrastructures using a search engine

    Directory of Open Access Journals (Sweden)

    Paolo Corti

    2018-05-01

    Full Text Available A spatial data infrastructure (SDI) is a framework of geospatial data, metadata, users and tools intended to provide an efficient and flexible way to use spatial information. One of the key software components of an SDI is the catalogue service which is needed to discover, query and manage the metadata. Catalogue services in an SDI are typically based on the Open Geospatial Consortium (OGC) Catalogue Service for the Web (CSW) standard which defines common interfaces for accessing the metadata information. A search engine is a software system capable of supporting fast and reliable search, which may use ‘any means necessary’ to get users to the resources they need quickly and efficiently. These techniques may include full text search, natural language processing, weighted results, fuzzy tolerance results, faceting, hit highlighting, recommendations and many others. In this paper we present an example of a search engine being added to an SDI to improve search against large collections of geospatial datasets. The Centre for Geographic Analysis (CGA) at Harvard University re-engineered the search component of its public domain SDI (Harvard WorldMap) which is based on the GeoNode platform. A search engine was added to the SDI stack to enhance the CSW catalogue discovery abilities. It is now possible to discover spatial datasets from metadata by using the standard search operations of the catalogue and to take advantage of the new abilities of the search engine, to return relevant and reliable content to SDI users.

  20. The impact of search engine selection and sorting criteria on vaccination beliefs and attitudes: two experiments manipulating Google output.

    Science.gov (United States)

    Allam, Ahmed; Schulz, Peter Johannes; Nakamoto, Kent

    2014-04-02

    During the past 2 decades, the Internet has evolved to become a necessity in our daily lives. The selection and sorting algorithms of search engines exert tremendous influence over the global spread of information and other communication processes. This study is concerned with demonstrating the influence of the selection and sorting/ranking criteria operating in search engines on users' knowledge, beliefs, and attitudes concerning websites about vaccination. In particular, it compares the effects of search engines that deliver websites emphasizing the pro side of vaccination with those focusing on the con side, and with normal Google as a control group. We conducted 2 online experiments using manipulated search engines. A pilot study was designed to verify the existence of dangerous health literacy in connection with searching for and using health information on the Internet by exploring the effect of 2 manipulated search engines that yielded either pro or con vaccination sites only, with a group receiving normal Google as control. A pre-post test design was used; participants were American marketing students enrolled in a study-abroad program in Lugano, Switzerland. The second experiment manipulated the search engine by applying different ratios of con versus pro vaccination webpages displayed in the search results. Participants were recruited from Amazon's Mechanical Turk platform, where the study was published as a human intelligence task (HIT). Both experiments showed knowledge highest in the group offered only pro vaccination sites (Z=-2.088, P=.03; Kruskal-Wallis H test [H₅]=11.30, P=.04). They acknowledged the importance/benefits (Z=-2.326, P=.02; H₅=11.34, P=.04) and effectiveness (Z=-2.230, P=.03) of vaccination more, whereas groups offered antivaccination sites only showed increased concern about effects (Z=-2.582, P=.01; H₅=16.88, P=.005) and harmful health outcomes (Z=-2.200, P=.02) of vaccination. Normal Google users perceived information quality to be positive despite a

  1. Hybrid intelligent optimization methods for engineering problems

    Science.gov (United States)

    Pehlivanoglu, Yasin Volkan

    The purpose of optimization is to obtain the best solution under certain conditions. There are numerous optimization methods because different problems need different solution methodologies; therefore, it is difficult to construct patterns. Also, mathematical modeling of a natural phenomenon is almost always based on differentials. Differential equations are constructed with relative increments among the factors related to yield. Therefore, the gradients of these increments are essential to search the yield space. However, the landscape of yield is not a simple one and is mostly multi-modal. Another issue is differentiability. Engineering design problems are usually nonlinear and they sometimes exhibit discontinuous derivatives for the objective and constraint functions. Due to these difficulties, non-gradient-based algorithms have become more popular in recent decades. Genetic algorithms (GA) and particle swarm optimization (PSO) algorithms are popular, non-gradient based algorithms. Both are population-based search algorithms and have multiple points for initiation. A significant difference from a gradient-based method is the nature of the search methodology. For example, randomness is essential for the search in GA or PSO. Hence, they are also called stochastic optimization methods. These algorithms are simple and robust, and have high fidelity. However, they suffer from similar defects, such as premature convergence, reduced accuracy, or large computational time. Premature convergence is sometimes inevitable due to the lack of diversity. As the generations of particles or individuals in the population evolve, they may lose their diversity and become similar to each other. To overcome this issue, we studied the diversity concept in GA and PSO algorithms. Diversity is essential for a healthy search, and mutations are the basic operators that provide the necessary variety within a population. After a close scrutiny of the diversity concept based on qualification and
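
    To make the diversity idea concrete, the sketch below computes a simple population-diversity measure (mean distance to the population centroid) and applies a Gaussian mutation step when diversity collapses. The measure, threshold, and mutation rate are illustrative assumptions, not the dissertation's specific operators.

```python
# Population diversity measure plus a mutation operator to restore variety.

import numpy as np

def diversity(pop):
    """Mean Euclidean distance of individuals from the population centroid."""
    centroid = pop.mean(axis=0)
    return float(np.linalg.norm(pop - centroid, axis=1).mean())

def mutate(pop, rate=0.2, sigma=0.1, rng=None):
    """Gaussian mutation applied gene-wise with probability `rate`."""
    rng = rng or np.random.default_rng()
    mask = rng.random(pop.shape) < rate
    return pop + mask * rng.normal(0.0, sigma, size=pop.shape)

rng = np.random.default_rng(0)
population = rng.normal(0.5, 0.01, size=(30, 5))      # nearly converged population
print("before:", round(diversity(population), 4))
if diversity(population) < 0.05:                      # diversity collapsed -> inject variety
    population = mutate(population, rng=rng)
print("after: ", round(diversity(population), 4))
```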

  2. PlateRunner: A Search Engine to Identify EMR Boilerplates.

    Science.gov (United States)

    Divita, Guy; Workman, T Elizabeth; Carter, Marjorie E; Redd, Andrew; Samore, Matthew H; Gundlapalli, Adi V

    2016-01-01

    Medical text contains boilerplated content, an artifact of pull-down forms from EMRs. Boilerplated content is the source of challenges for concept extraction on clinical text. This paper introduces PlateRunner, a search engine on boilerplates from the US Department of Veterans Affairs (VA) EMR. Boilerplates containing concepts should be identified and reviewed to recognize challenging formats, identify high yield document titles, and fine tune section zoning. This search engine has the capability to filter negated and asserted concepts, save and search query results. This tool can save queries, search results, and documents found for later analysis.

  3. Auditing Search Engines for Differential Satisfaction Across Demographics

    OpenAIRE

    Mehrotra, R.; Anderson, A.; Diaz, F.; Sharma, A.; Wallach, H. M.; Yilmaz, E.

    2017-01-01

    Many online services, such as search engines, social media platforms, and digital marketplaces, are advertised as being available to any user, regardless of their age, gender, or other demographic factors. However, there are growing concerns that these services may systematically underserve some groups of users. In this paper, we present a framework for internally auditing such services for differences in user satisfaction across demographic groups, using search engines as a case study. We fi...

  4. [Biomedical information on the internet using search engines. A one-year trial].

    Science.gov (United States)

    Corrao, Salvatore; Leone, Francesco; Arnone, Sabrina

    2004-01-01

    The internet is a communication medium and content distributor that provides information in a general sense, but it can be of great utility for the search and retrieval of biomedical information. Search engines are a great help in rapidly finding information on the net. However, we do not know whether general search engines and meta-search engines are reliable for finding useful and validated biomedical information. The aim of our study was to verify the reproducibility of a search by keywords (pediatric or evidence) using 9 international search engines and 1 meta-search engine at baseline and after a one-year period. We analysed the first 20 citations output by each search. We evaluated the formal quality of the Web sites and their domain extensions. Moreover, we compared the output of each search at the start of this study and after a one-year period, and we considered the number of Web sites cited again as a criterion of reliability. We found some interesting results, which are reported throughout the text. Our findings point to the extreme dynamism of information on the Web and, for this reason, we advise great caution when using search and meta-search engines as tools for searching for and retrieving reliable biomedical information. On the other hand, some search and meta-search engines can be very useful as a first step, for better defining a search and, moreover, for finding institutional Web sites. This paper encourages a more conscious approach to the internet biomedical information universe.

  5. An open-source, mobile-friendly search engine for public medical knowledge.

    Science.gov (United States)

    Samwald, Matthias; Hanbury, Allan

    2014-01-01

    The World Wide Web has become an important source of information for medical practitioners. To complement the capabilities of currently available web search engines we developed FindMeEvidence, an open-source, mobile-friendly medical search engine. In a preliminary evaluation, the quality of results from FindMeEvidence proved to be competitive with those from TRIP Database, an established, closed-source search engine for evidence-based medicine.

  6. HPT Clearance Control: Intelligent Engine Systems-Phase 1

    Science.gov (United States)

    2005-01-01

    The following work has been completed to satisfy the Phase I Deliverables for the "HPT Clearance Control" project under NASA GRC's "Intelligent Engine Systems" program: (1) Need for the development of an advanced HPT ACC system has been very clearly laid out, (2) Several existing and potential clearance control systems have been reviewed, (3) A scorecard has been developed to document the system, performance (fuel burn, range, payload, etc.), thermal, and mechanical characteristics of the existing clearance control systems, (4) Engine size and flight cycle selection for the advanced HPT ACC system has been reviewed with "large engine"/"long range mission" combination showing the most benefit, (5) A scoring criteria has been developed to tie together performance parameters for an objective, data driven comparison of competing systems, and (6) The existing HPT ACC systems have been scored based on this scoring system.

  7. International Conference on Artificial Intelligence and Evolutionary Computations in Engineering Systems

    CERN Document Server

    Bhaskar, M; Panigrahi, Bijaya; Das, Swagatam

    2016-01-01

    The book is a collection of high-quality peer-reviewed research papers presented at the first International Conference on Artificial Intelligence and Evolutionary Computations in Engineering Systems (ICAIECES-2015), held at Velammal Engineering College (VEC), Chennai, India during 22 – 23 April 2015. The book discusses a wide variety of industrial, engineering and scientific applications of the emerging techniques. Researchers from academia and industry present their original work and exchange ideas, information, techniques and applications in the field of Communication, Computing and Power Technologies.

  8. Search engines and the production of academic knowledge

    OpenAIRE

    van Dijck, J.

    2010-01-01

    This article argues that search engines in general, and Google Scholar in particular, have become significant co-producers of academic knowledge. Knowledge is not simply conveyed to users, but is co-produced by search engines’ ranking systems and profiling systems, none of which are open to the rules of transparency, relevance and privacy in a manner known from library scholarship in the public domain. Inexperienced users tend to trust proprietary engines as neutral mediators of knowledge and...

  9. Intelligent systems/software engineering methodology - A process to manage cost and risk

    Science.gov (United States)

    Friedlander, Carl; Lehrer, Nancy

    1991-01-01

    A systems development methodology is discussed that has been successfully applied to the construction of a number of intelligent systems. This methodology is a refinement of both evolutionary and spiral development methodologies. It is appropriate for development of intelligent systems. The application of advanced engineering methodology to the development of software products and intelligent systems is an important step toward supporting the transition of AI technology into aerospace applications. A description of the methodology and the process model from which it derives is given. Associated documents and tools are described which are used to manage the development process and record and report the emerging design.

  10. A real-time all-atom structural search engine for proteins.

    Science.gov (United States)

    Gonzalez, Gabriel; Hannigan, Brett; DeGrado, William F

    2014-07-01

    Protein designers use a wide variety of software tools for de novo design, yet their repertoire still lacks a fast and interactive all-atom search engine. To solve this, we have built the Suns program: a real-time, atomic search engine integrated into the PyMOL molecular visualization system. Users build atomic-level structural search queries within PyMOL and receive a stream of search results aligned to their query within a few seconds. This instant feedback cycle enables a new "designability"-inspired approach to protein design where the designer searches for and interactively incorporates native-like fragments from proven protein structures. We demonstrate the use of Suns to interactively build protein motifs, tertiary interactions, and to identify scaffolds compatible with hot-spot residues. The official web site and installer are located at http://www.degradolab.org/suns/ and the source code is hosted at https://github.com/godotgildor/Suns (PyMOL plugin, BSD license), https://github.com/Gabriel439/suns-cmd (command line client, BSD license), and https://github.com/Gabriel439/suns-search (search engine server, GPLv2 license).

  11. Virtual Reference Services through Web Search Engines: Study of Academic Libraries in Pakistan

    Directory of Open Access Journals (Sweden)

    Rubia Khan

    2017-03-01

    Full Text Available Web search engines (WSE) are powerful and popular tools in the field of information service management. This study is an attempt to examine the impact and usefulness of web search engines in providing virtual reference services (VRS) within academic libraries in Pakistan. The study also attempts to investigate the relevant expertise and skills of library professionals in providing digital reference services (DRS) efficiently using web search engines. The methodology used in this study is quantitative in nature. The data was collected from fifty public and private sector universities in Pakistan using a structured questionnaire. Microsoft Excel and SPSS were used for data analysis. The study concludes that web search engines are commonly used by librarians to help users (especially research scholars) by providing digital reference services. The study also finds a positive correlation between use of web search engines and quality of digital reference services provided to library users. It is concluded that although search engines have increased the expectations of users and are real competitors to a library’s reference desk, they are not an alternative to reference service. Findings reveal that search engines pose numerous challenges for librarians, and the study also attempts to bring together possible remedial measures. This study is useful for library professionals to understand the importance of search engines in providing VRS. The study also provides an intellectual comparison among different search engines, their capabilities, limitations, challenges and opportunities to provide VRS effectively in libraries.

  12. Information retrieval implementing and evaluating search engines

    CERN Document Server

    Büttcher, Stefan; Cormack, Gordon V

    2016-01-01

    Information retrieval is the foundation for modern search engines. This textbook offers an introduction to the core topics underlying modern search technologies, including algorithms, data structures, indexing, retrieval, and evaluation. The emphasis is on implementation and experimentation; each chapter includes exercises and suggestions for student projects. Wumpus -- a multiuser open-source information retrieval system developed by one of the authors and available online -- provides model implementations and a basis for student work. The modular structure of the book allows instructors to use it in a variety of graduate-level courses, including courses taught from a database systems perspective, traditional information retrieval courses with a focus on IR theory, and courses covering the basics of Web retrieval. In addition to its classroom use, Information Retrieval will be a valuable reference for professionals in computer science, computer engineering, and software engineering.

  13. LoyalTracker: Visualizing Loyalty Dynamics in Search Engines.

    Science.gov (United States)

    Shi, Conglei; Wu, Yingcai; Liu, Shixia; Zhou, Hong; Qu, Huamin

    2014-12-01

    The huge amount of user log data collected by search engine providers creates new opportunities to understand user loyalty and defection behavior at an unprecedented scale. However, it also poses a great challenge to analyze the behavior and glean insights from the complex, large-scale data. In this paper, we introduce LoyalTracker, a visual analytics system to track user loyalty and switching behavior towards multiple search engines from the vast amount of user log data. We propose a new interactive visualization technique (flow view) based on a flow metaphor, which conveys a proper visual summary of the dynamics of user loyalty of thousands of users over time. Two other visualization techniques, a density map and a word cloud, are integrated to enable analysts to gain further insights into the patterns identified by the flow view. Case studies and interviews with domain experts are conducted to demonstrate the usefulness of our technique in understanding user loyalty and switching behavior in search engines.

  14. Artificial intelligence search techniques for optimization of the cold source geometry

    International Nuclear Information System (INIS)

    Azmy, Y.Y.

    1988-01-01

    Most optimization studies of cold neutron sources have concentrated on the numerical prediction or experimental measurement of the cold moderator optimum thickness which produces the largest cold neutron leakage for a given thermal neutron source. Optimizing the geometrical shape of the cold source, however, is a more difficult problem because the optimized quantity, the cold neutron leakage, is an implicit function of the shape which is the unknown in such a study. We draw an analogy between this problem and a state space search, then we use a simple Artificial Intelligence (AI) search technique to determine the optimum cold source shape based on a two-group, r-z diffusion model. We implemented this AI design concept in the computer program AID which consists of two modules, a physical model module and a search module, which can be independently modified, improved, or made more sophisticated. 7 refs., 1 fig
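
    The record above treats shape optimization as a state space search over candidate cold source geometries. As a purely illustrative sketch (not the AID program), the same idea can be expressed as a greedy hill climb over a discretized shape, with a placeholder objective standing in for the cold neutron leakage that the two-group, r-z diffusion model would actually supply.

      # Schematic state-space search for a cold source shape (illustrative only).
      # The real objective would come from a two-group r-z diffusion calculation;
      # here `leakage` is a stand-in placeholder.
      import random

      def leakage(shape):
          # Placeholder objective: favors a smooth, moderately thick moderator.
          return -sum((t - 5.0) ** 2 for t in shape) - 0.5 * sum(
              abs(a - b) for a, b in zip(shape, shape[1:]))

      def neighbors(shape, step=1.0):
          # States reachable by thickening or thinning one axial zone.
          for i in range(len(shape)):
              for delta in (-step, step):
                  new = list(shape)
                  new[i] = max(0.0, new[i] + delta)
                  yield tuple(new)

      def hill_climb(n_zones=8, iterations=200):
          state = tuple(random.uniform(1.0, 9.0) for _ in range(n_zones))
          best = leakage(state)
          for _ in range(iterations):
              candidate = max(neighbors(state), key=leakage)
              score = leakage(candidate)
              if score <= best:      # local optimum reached
                  break
              state, best = candidate, score
          return state, best

      if __name__ == "__main__":
          shape, score = hill_climb()
          print("optimum shape:", [round(t, 1) for t in shape], "score:", round(score, 2))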

  15. Artificial intelligence search techniques for the optimization of cold source geometry

    International Nuclear Information System (INIS)

    Azmy, Y.Y.

    1988-01-01

    Most optimization studies of cold neutron sources have concentrated on the numerical prediction or experimental measurement of the cold moderator optimum thickness that produces the largest cold neutron leakage for a given thermal neutron source. Optimizing the geometric shape of the cold source, however, is a more difficult problem because the optimized quantity, the cold neutron leakage, is an implicit function of the shape, which is the unknown in such a study. An analogy is drawn between this problem and a state space search, then a simple artificial intelligence (AI) search technique is used to determine the optimum cold source shape based on a two-group, r-z diffusion model. This AI design concept was implemented in the computer program AID, which consists of two modules, a physical model module and a search module, which can be independently modified, improved, or made more sophisticated.

  16. Web Search Services in 1998: Trends and Challenges.

    Science.gov (United States)

    Feldman, Susan

    1998-01-01

    Charts the trends and challenges that 1998 has brought to popular search engines such as AltaVista, Excite, HotBot, Infoseek, Lycos, and Northern Light. Highlights testing strategies used, use of real (not artificial) intelligence, innovations, online market pressures, barriers to use, and tips and recommendations. (AEF)

  17. `Googling' Terrorists: Are Northern Irish Terrorists Visible on Internet Search Engines?

    Science.gov (United States)

    Reilly, P.

    In this chapter, the analysis suggests that Northern Irish terrorists are not visible on Web search engines when net users employ conventional Internet search techniques. Editors of mass media organisations traditionally have had the ability to decide whether a terrorist atrocity is `newsworthy,' controlling the `oxygen' supply that sustains all forms of terrorism. This process, also known as `gatekeeping,' is often influenced by the norms of social responsibility, or alternatively, with regard to the interests of the advertisers and corporate sponsors that sustain mass media organisations. The analysis presented in this chapter suggests that Internet search engines can also be characterised as `gatekeepers,' albeit without the ability to shape the content of Websites before it reaches net users. Instead, Internet search engines give priority retrieval to certain Websites within their directory, pointing net users towards these Websites rather than others on the Internet. Net users are more likely to click on links to the more `visible' Websites on Internet search engine directories, these sites invariably being the highest `ranked' in response to a particular search query. A number of factors including the design of the Website and the number of links to external sites determine the `visibility' of a Website on Internet search engines. The study suggests that Northern Irish terrorists and their sympathisers are unlikely to achieve a greater degree of `visibility' online than they enjoy in the conventional mass media through the perpetration of atrocities. Although these groups may have a greater degree of freedom on the Internet to publicise their ideologies, they are still likely to be speaking to the converted or members of the press. Although it is easier to locate Northern Irish terrorist organisations on Internet search engines by linking in via ideology, ideological description searches, such as `Irish Republican' and `Ulster Loyalist,' are more likely to

  18. Developing a new search engine and browser for libraries to search and organize the World Wide Web library resources

    OpenAIRE

    Sreenivasulu, V.

    2000-01-01

    Internet Granthalaya urges worldwide advocates and targets the task of creating a new search engine and dedicated browser. Internet Granthalaya may be the ultimate search engine exclusively dedicated to library use for searching and organizing the World Wide Web library resources.

  19. Development of health information search engine based on metadata and ontology.

    Science.gov (United States)

    Song, Tae-Min; Park, Hyeoun-Ae; Jin, Dal-Lae

    2014-04-01

    The aim of the study was to develop a metadata and ontology-based health information search engine ensuring semantic interoperability to collect and provide health information using different application programs. A health information metadata ontology was developed using a distributed semantic Web content publishing model based on vocabularies used to index the contents generated by the information producers as well as those used by users to search the contents. The vocabulary for the health information ontology was mapped to the Systematized Nomenclature of Medicine Clinical Terms (SNOMED CT), and a list of about 1,500 terms was proposed. The metadata schema used in this study was developed by adding an element describing the target audience to the Dublin Core Metadata Element Set. A metadata schema and an ontology ensuring interoperability of health information available on the internet were developed. The metadata and ontology-based health information search engine developed in this study produced better search results than existing search engines. A health information search engine based on metadata and ontology will provide reliable health information to both information producers and information consumers.
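
    To make the metadata idea concrete, the sketch below builds a toy record from Dublin Core elements plus a target-audience element. The element name targetAudience, the namespace handling, and the sample values are assumptions for illustration only, not the schema defined in the cited study.

      # Illustrative sketch (the targetAudience element and values are assumed,
      # not the schema from the cited study): a health-information record
      # described with Dublin Core elements plus a target-audience extension.
      import xml.etree.ElementTree as ET

      DC = "http://purl.org/dc/elements/1.1/"
      ET.register_namespace("dc", DC)

      record = ET.Element("record")
      for element, value in [
          ("title", "Managing type 2 diabetes with diet"),
          ("creator", "Example Health Portal"),
          ("subject", "Diabetes mellitus, type 2"),   # could be mapped to a SNOMED CT term
          ("language", "en"),
      ]:
          ET.SubElement(record, f"{{{DC}}}{element}").text = value

      # Hypothetical extension element describing the intended audience.
      ET.SubElement(record, "targetAudience").text = "general public"

      print(ET.tostring(record, encoding="unicode"))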

  20. Health literacy and usability of clinical trial search engines.

    Science.gov (United States)

    Utami, Dina; Bickmore, Timothy W; Barry, Barbara; Paasche-Orlow, Michael K

    2014-01-01

    Several web-based search engines have been developed to assist individuals to find clinical trials for which they may be interested in volunteering. However, these search engines may be difficult for individuals with low health and computer literacy to navigate. The authors present findings from a usability evaluation of clinical trial search tools with 41 participants across the health and computer literacy spectrum. The study consisted of 3 parts: (a) a usability study of an existing web-based clinical trial search tool; (b) a usability study of a keyword-based clinical trial search tool; and (c) an exploratory study investigating users' information needs when deciding among 2 or more candidate clinical trials. From the first 2 studies, the authors found that users with low health literacy have difficulty forming queries using keywords and have significantly more difficulty using a standard web-based clinical trial search tool compared with users with adequate health literacy. From the third study, the authors identified the search factors most important to individuals searching for clinical trials and how these varied by health literacy level.

  1. Archiving, ordering and searching: search engines, algorithms, databases and deep mediatization

    DEFF Research Database (Denmark)

    Andersen, Jack

    2018-01-01

    This article argues that search engines, algorithms, and databases can be considered as a way of understanding deep mediatization (Couldry & Hepp, 2016). They are embedded in a variety of social and cultural practices and as such they change our communicative actions to be shaped by their logic. Having reviewed recent trends in mediatization research, the argument is discussed and unfolded in-between the material and social constructivist-phenomenological interpretations of mediatization. In conclusion, it is discussed how deep this form of mediatization can be taken to be.

  2. Optimizing Online Suicide Prevention: A Search Engine-Based Tailored Approach.

    Science.gov (United States)

    Arendt, Florian; Scherr, Sebastian

    2017-11-01

    Search engines are increasingly used to seek suicide-related information online, which can serve both harmful and helpful purposes. Google acknowledges this fact and presents a suicide-prevention result for particular search terms. Unfortunately, the result is only presented to a limited number of visitors. Hence, Google is missing the opportunity to provide help to vulnerable people. We propose a two-step approach to a tailored optimization: First, research will identify the risk factors. Second, search engines will reweight algorithms according to the risk factors. In this study, we show that the query share of the search term "poisoning" on Google shows substantial peaks corresponding to peaks in actual suicidal behavior. Accordingly, thresholds for showing the suicide-prevention result should be set to the lowest levels during the spring, on Sundays and Mondays, on New Year's Day, and on Saturdays following Thanksgiving. Search engines can help to save lives globally by utilizing a more tailored approach to suicide prevention.

  3. MSblender: A probabilistic approach for integrating peptide identifications from multiple database search engines.

    Science.gov (United States)

    Kwon, Taejoon; Choi, Hyungwon; Vogel, Christine; Nesvizhskii, Alexey I; Marcotte, Edward M

    2011-07-01

    Shotgun proteomics using mass spectrometry is a powerful method for protein identification but suffers limited sensitivity in complex samples. Integrating peptide identifications from multiple database search engines is a promising strategy to increase the number of peptide identifications and reduce the volume of unassigned tandem mass spectra. Existing methods pool statistical significance scores such as p-values or posterior probabilities of peptide-spectrum matches (PSMs) from multiple search engines after high scoring peptides have been assigned to spectra, but these methods lack reliable control of identification error rates as data are integrated from different search engines. We developed a statistically coherent method for integrative analysis, termed MSblender. MSblender converts raw search scores from search engines into a probability score for every possible PSM and properly accounts for the correlation between search scores. The method reliably estimates false discovery rates and identifies more PSMs than any single search engine at the same false discovery rate. Increased identifications increment spectral counts for most proteins and allow quantification of proteins that would not have been quantified by individual search engines. We also demonstrate that enhanced quantification contributes to improve sensitivity in differential expression analyses.

  4. An intelligent instrument for measuring exhaust temperature of marine engine

    Science.gov (United States)

    Ma, Nan-Qi; Su, Hua; Liu, Jun

    2006-12-01

    The exhaust temperature of a marine engine is commonly measured with thermocouples. Measurement deviation occurs after a thermocouple has been in use for some time, due to the nonlinearity of the thermocouple itself, high temperatures, and chemical corrosion at the measurement point, and frequent replacement of thermocouples increases operating costs. This paper presents a new intelligent instrument that addresses these problems of marine engine temperature measurement by combining conventional thermocouple temperature measurement technology with a single-chip microcomputer (SCM). Reading the thermocouple is simple and precise, and calibration can be performed both automatically and manually.

  5. Are We Alone? GAVRT Search for Extra Terrestrial Intelligence (SETI) Project

    Science.gov (United States)

    Bensel, Holly; Cool, Ian; St. Mary's High School Astronomy Club; St. Mary's Middle School Astronomy Club

    2017-01-01

    The Goldstone Apple Valley Radio Telescope Program (GAVRT) is a partnership between NASA’s Jet Propulsion Laboratory and the Lewis Center for Educational Research. The program is an authentic science investigation program for students in grades K through 12 and offers them the ability to learn how to be a part of a science team while they are making a real contribution to scientific knowledge. Using the internet from their classroom, students take control of a 34-meter decommissioned NASA radio telescope located at the Goldstone Deep Space Network complex in California. Students collect data on strong radio sources and work in collaboration with professional radio astronomers to analyze the data. Throughout history man has wondered if we were alone in the Universe. SETI - or the Search for Extra Terrestrial Intelligence - is one of the programs offered through GAVRT that is designed to help answer that question. By participating in SETI, students learn about science by doing real science and maybe, if they get very lucky, they might make the most important discovery of our lifetime: Intelligent life beyond Earth! At St. Mary’s School, students in grades 6-12 have participated in the project since its inception. The St. Mary’s Middle School Astronomy Club is leading the way in their relentless search for ET and radio telescope studies. Students use the radio telescope to select a very small portion of the Milky Way Galaxy - or galactic plane - and scan across it over and over in the hopes of finding a signal that is not coming from humans or radio interference. The possibility of being the first to discover an alien signal has kept some students searching for the past three years. For them to discover something of this magnitude is like winning the lottery: small chance of winning - big payoff. To that end, the club is focusing on several portions of the Milky Way where they have detected a strong candidate in the past. The hope is to pick it up a second and

  6. A search engine for the engineering and equipment data management system (EDMS) at CERN

    International Nuclear Information System (INIS)

    Tsyganov, A; Amerigo, S M; Petit, S; Pettersson, T; Suwalska, A

    2008-01-01

    CERN, the European Laboratory for Particle Physics, located in Geneva, Switzerland, is currently building the LHC (Large Hadron Collider), a 27 km particle accelerator. The equipment life-cycle management of this project is provided by the Engineering and Equipment Data Management System (EDMS) Service. Using an Oracle database, it supports the management and follow-up of different kinds of documentation through the whole life cycle of the LHC project: design, manufacturing, installation, commissioning data, etc. The equipment data collection phase is now slowing down and the project is getting closer to the 'As-Built' phase: the phase of the project that consumes and explores the large volumes of data stored since 1996. Searching through millions of items of information (documents, equipment parts, operations, ...) viewed from dozens of points of view (operators, maintainers, ...) requires an efficient and flexible search engine. This paper describes the process followed by the team to implement the search engine for the LHC As-Built project in the EDMS Service. The emphasis is put on the design decision to decouple the search engine from any user interface, potentially enabling other systems to use it as well. Projections, algorithms, and the planned implementation are described in this paper. The implementation of the first version started in early 2007.

  7. Building a better search engine for earth science data

    Science.gov (United States)

    Armstrong, E. M.; Yang, C. P.; Moroni, D. F.; McGibbney, L. J.; Jiang, Y.; Huang, T.; Greguska, F. R., III; Li, Y.; Finch, C. J.

    2017-12-01

    Free text data searching of earth science datasets has been implemented with varying degrees of success and completeness across the spectrum of the 12 NASA earth sciences data centers. At the JPL Physical Oceanography Distributed Active Archive Center (PO.DAAC) the search engine has been developed around the Solr/Lucene platform. Others have chosen other popular enterprise search platforms like Elasticsearch. Regardless, the default implementations of these search engines, leveraging factors such as dataset popularity, term frequency, and inverse document frequency, do not fully meet the needs of precise relevancy and ranking of earth science search results. For the PO.DAAC, this shortcoming has been identified for several years by its external User Working Group, which has issued several recommendations to improve the relevancy and discoverability of datasets related to remotely sensed sea surface temperature, ocean wind, waves, salinity, height, and gravity, which comprise over 500 publicly available datasets. Recently, the PO.DAAC has teamed with an effort led by George Mason University to improve the search and relevancy ranking of oceanographic data via a simple search interface and powerful backend services called MUDROD (Mining and Utilizing Dataset Relevancy from Oceanographic Datasets to Improve Data Discovery), funded by the NASA AIST program. MUDROD has mined and utilized the combination of PO.DAAC earth science dataset metadata, usage metrics, and user feedback and search history to objectively extract relevance for improved data discovery and access. In addition to improved dataset relevance and ranking, the MUDROD search engine also returns recommendations for related datasets and related user queries. This presentation will report on use cases that drove the architecture and development, and the success metrics and improvements on search precision and recall that MUDROD has demonstrated over the existing PO.DAAC search

  8. Search engines, the new bottleneck for content access

    NARCIS (Netherlands)

    van Eijk, N.; Preissl, B.; Haucap, J.; Curwen, P.

    2009-01-01

    The core function of a search engine is to make content and sources of information easily accessible (although the search results themselves may actually include parts of the underlying information). In an environment with unlimited amounts of information available on open platforms such as the

  9. Improving sensitivity in proteome studies by analysis of false discovery rates for multiple search engines.

    Science.gov (United States)

    Jones, Andrew R; Siepen, Jennifer A; Hubbard, Simon J; Paton, Norman W

    2009-03-01

    LC-MS experiments can generate large quantities of data, for which a variety of database search engines are available to make peptide and protein identifications. Decoy databases are becoming widely used to place statistical confidence in result sets, allowing the false discovery rate (FDR) to be estimated. Different search engines produce different identification sets so employing more than one search engine could result in an increased number of peptides (and proteins) being identified, if an appropriate mechanism for combining data can be defined. We have developed a search engine independent score, based on FDR, which allows peptide identifications from different search engines to be combined, called the FDR Score. The results demonstrate that the observed FDR is significantly different when analysing the set of identifications made by all three search engines, by each pair of search engines or by a single search engine. Our algorithm assigns identifications to groups according to the set of search engines that have made the identification, and re-assigns the score (combined FDR Score). The combined FDR Score can differentiate between correct and incorrect peptide identifications with high accuracy, allowing on average 35% more peptide identifications to be made at a fixed FDR than using a single search engine.
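
    The combined FDR Score algorithm itself is defined in the cited paper; the sketch below only illustrates the underlying target-decoy estimate that such scores build on, assuming synthetic scores and a simple running decoy/target ratio rather than the authors' implementation.

      # Minimal target-decoy FDR sketch (not the authors' FDR Score implementation):
      # PSMs are ranked by score, and the FDR at each rank is estimated as the
      # number of decoy hits divided by the number of target hits at or above
      # that score. A full treatment would convert these ratios to monotone
      # q-values before thresholding.
      def estimate_fdr(psms):
          # psms: list of (score, is_decoy) tuples, higher score = better match
          ranked = sorted(psms, key=lambda p: p[0], reverse=True)
          targets = decoys = 0
          fdrs = []
          for score, is_decoy in ranked:
              if is_decoy:
                  decoys += 1
              else:
                  targets += 1
              fdrs.append((score, is_decoy, decoys / max(targets, 1)))
          return fdrs

      if __name__ == "__main__":
          import random
          random.seed(1)
          # Synthetic example: targets score a bit higher than decoys on average.
          psms = [(random.gauss(10, 3), False) for _ in range(900)]
          psms += [(random.gauss(5, 3), True) for _ in range(900)]
          fdrs = estimate_fdr(psms)
          accepted = [p for p in fdrs if p[2] <= 0.01 and not p[1]]
          print(f"{len(accepted)} target PSMs accepted at 1% estimated FDR")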

  10. Evaluating Open-Source Full-Text Search Engines for Matching ICD-10 Codes.

    Science.gov (United States)

    Jurcău, Daniel-Alexandru; Stoicu-Tivadar, Vasile

    2016-01-01

    This research presents the results of evaluating multiple free, open-source engines on matching ICD-10 diagnostic codes via full-text searches. The study investigates what it takes to get an accurate match when searching for a specific diagnostic code. For each code, the evaluation starts by extracting the words that make up its text and continues with building full-text search queries from the combinations of these words. The queries are then run against all the ICD-10 codes until a query returns the code in question as the match with the highest relative score. This method identifies the minimum number of words that must be provided in order for the search engines to choose the desired entry. The engines analyzed include a popular Java-based full-text search engine, a lightweight engine written in JavaScript which can even execute in the user's browser, and two popular open-source relational database management systems.
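
    A minimal sketch of the evaluation procedure described above, assuming a toy in-memory word-overlap scorer in place of the Java, JavaScript, and relational-database engines actually tested: for each code, word combinations of increasing size are tried until the code is ranked strictly first.

      # Illustrative sketch of the evaluation procedure, using a toy bag-of-words
      # scorer instead of the full-text engines evaluated in the study.
      from itertools import combinations

      CODES = {  # tiny stand-in for the ICD-10 code list
          "J45": "asthma",
          "E11": "type 2 diabetes mellitus",
          "E10": "type 1 diabetes mellitus",
          "I10": "essential primary hypertension",
      }

      def score(query_words, text):
          words = text.split()
          return sum(1 for w in query_words if w in words)

      def best_match(query_words):
          return max(CODES, key=lambda code: score(query_words, CODES[code]))

      def minimum_words(code):
          # Smallest combination of the code's own words that ranks it strictly first.
          words = CODES[code].split()
          for size in range(1, len(words) + 1):
              for combo in combinations(words, size):
                  others = max(score(combo, CODES[c]) for c in CODES if c != code)
                  if best_match(combo) == code and score(combo, CODES[code]) > others:
                      return combo
          return tuple(words)

      if __name__ == "__main__":
          for code in CODES:
              print(code, "->", minimum_words(code))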

  11. DRUMS: a human disease related unique gene mutation search engine.

    Science.gov (United States)

    Li, Zuofeng; Liu, Xingnan; Wen, Jingran; Xu, Ye; Zhao, Xin; Li, Xuan; Liu, Lei; Zhang, Xiaoyan

    2011-10-01

    With the completion of the human genome project and the development of new methods for gene variant detection, the integration of mutation data and its phenotypic consequences has become more important than ever. Among all available resources, locus-specific databases (LSDBs) curate one or more specific genes' mutation data along with high-quality phenotypes. Although some genotype-phenotype data from LSDBs have been integrated into central databases, little effort has been made to integrate all these data by a search engine approach. In this work, we have developed DRUMS, a search engine for human disease related unique gene mutations, as a convenient tool for biologists or physicians to retrieve gene variant and related phenotype information. Gene variant and phenotype information were stored in a gene-centred relational database. Moreover, the relationships between mutations and diseases were indexed by the uniform resource identifier from LSDB, or another central database. By querying DRUMS, users can access the most popular mutation databases under one interface. DRUMS could be treated as a domain specific search engine. By using web crawling, indexing, and searching technologies, it provides a competitively efficient interface for searching and retrieving mutation data and their relationships to diseases. The present system is freely accessible at http://www.scbit.org/glif/new/drums/index.html. © 2011 Wiley-Liss, Inc.

  12. The Effectiveness of Web Search Engines to Index New Sites from Different Countries

    Science.gov (United States)

    Pirkola, Ari

    2009-01-01

    Introduction: Investigates how effectively Web search engines index new sites from different countries. The primary interest is whether new sites are indexed equally or whether search engines are biased towards certain countries. If major search engines show biased coverage it can be considered a significant economic and political problem because…

  13. SOLVING ENGINEERING OPTIMIZATION PROBLEMS WITH THE SWARM INTELLIGENCE METHODS

    Directory of Open Access Journals (Sweden)

    V. Panteleev Andrei

    2017-01-01

    Full Text Available An important stage in the problem-solving process for aerospace and aerostructure design is the optimization of their main characteristics. The results of four constrained optimization problems related to the design of various technical systems are presented: determining the best parameters of a welded beam, a pressure vessel, a gear train, and a spring. The purpose of each task is to minimize the cost and weight of the construction. The objective functions in these practical optimization problems are nonlinear functions of many variables with complex, highly indented level surfaces, which is why a classical approach to extremum seeking is not efficient. This creates the need for optimization methods that find a near-optimal solution in an acceptable amount of time with minimal use of computing power. Such methods include the methods of Swarm Intelligence: the spiral dynamics algorithm, stochastic diffusion search, and the hybrid seeker optimization algorithm. Swarm Intelligence methods are designed so that a swarm of agents carries out the search for the extremum. While searching for the extremum point, the particles exchange information and consider their own experience as well as the experience of the population leader and of neighbors in some area. To solve the listed problems, a program complex has been designed whose efficiency is illustrated by the solutions of the four applied problems. Each of the considered applied optimization problems is solved with all three chosen methods, and the numerical results obtained can be compared with those found by the particle swarm method. The author gives recommendations on how to choose method parameters and the penalty function values that handle the inequality constraints.
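
    As a compact illustration of the penalty-function swarm approach mentioned above (not the authors' program complex), the sketch below applies particle swarm optimization with a quadratic penalty to a deliberately simple constrained problem; the welded beam, pressure vessel, gear, and spring formulations from the record would slot in as the objective and constraints.

      # Illustrative particle swarm optimization with a penalty function for an
      # inequality constraint. The toy problem (minimize x^2 + y^2 subject to
      # x + y >= 1) merely stands in for the engineering design problems above.
      import random

      def objective(x, y):
          return x * x + y * y

      def penalty(x, y, weight=1000.0):
          violation = max(0.0, 1.0 - (x + y))   # amount by which x + y >= 1 is violated
          return weight * violation ** 2

      def fitness(p):
          return objective(*p) + penalty(*p)

      def pso(n_particles=30, iterations=200, w=0.7, c1=1.5, c2=1.5):
          pos = [[random.uniform(-5, 5), random.uniform(-5, 5)] for _ in range(n_particles)]
          vel = [[0.0, 0.0] for _ in range(n_particles)]
          pbest = [p[:] for p in pos]
          gbest = min(pbest, key=fitness)[:]
          for _ in range(iterations):
              for i in range(n_particles):
                  for d in range(2):
                      r1, r2 = random.random(), random.random()
                      vel[i][d] = (w * vel[i][d]
                                   + c1 * r1 * (pbest[i][d] - pos[i][d])
                                   + c2 * r2 * (gbest[d] - pos[i][d]))
                      pos[i][d] += vel[i][d]
                  if fitness(pos[i]) < fitness(pbest[i]):
                      pbest[i] = pos[i][:]
                      if fitness(pbest[i]) < fitness(gbest):
                          gbest = pbest[i][:]
          return gbest, fitness(gbest)

      if __name__ == "__main__":
          best, value = pso()
          # Expected result is near x = y = 0.5 with fitness about 0.5.
          print("best point:", [round(v, 3) for v in best], "fitness:", round(value, 4))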

  14. Search engines and the production of academic knowledge

    NARCIS (Netherlands)

    van Dijck, J.

    2010-01-01

    This article argues that search engines in general, and Google Scholar in particular, have become significant co-producers of academic knowledge. Knowledge is not simply conveyed to users, but is co-produced by search engines’ ranking systems and profiling systems, none of which are open to the

  15. Intelligent Control Systems with an Introduction to System of Systems Engineering

    CERN Document Server

    Nanayakkara, Thrishantha

    2009-01-01

    From aeronautics and manufacturing to healthcare and disaster management, systems engineering (SE) focuses on designing applications that ensure performance optimization. This title integrates the fundamentals of artificial intelligence and systems control in a framework applicable to both simple dynamic systems and large-scale system of systems

  16. Determination of geographic variance in stroke prevalence using Internet search engine analytics.

    Science.gov (United States)

    Walcott, Brian P; Nahed, Brian V; Kahle, Kristopher T; Redjal, Navid; Coumans, Jean-Valery

    2011-06-01

    Previous methods to determine stroke prevalence, such as nationwide surveys, are labor-intensive endeavors. Recent advances in search engine query analytics have led to a new metric for disease surveillance to evaluate symptomatic phenomena, such as influenza. The authors hypothesized that the use of search engine query data can determine the prevalence of stroke. The Google Insights for Search database was accessed to analyze anonymized search engine query data. The authors' search strategy utilized common search queries used when attempting either to identify the signs and symptoms of a stroke or to perform stroke education. The search logic was as follows: (stroke signs + stroke symptoms + mini stroke--heat) from January 1, 2005, to December 31, 2010. The relative number of searches performed (the interest level) for this search logic was established for all 50 states and the District of Columbia. A Pearson product-moment correlation coefficient was calculated from the state-specific stroke prevalence data previously reported. Web search engine interest level was available for all 50 states and the District of Columbia over the period January 1, 2005, to December 31, 2010. The interest level was highest in Alabama and Tennessee (100 and 96, respectively) and lowest in California and Virginia (58 and 53, respectively). The Pearson correlation coefficient (r) was calculated to be 0.47 (p = 0.0005, 2-tailed). Search engine query data analysis allows for the determination of relative stroke prevalence. Further investigation will reveal the reliability of this metric to determine temporal pattern analysis and prevalence in this and other symptomatic diseases.
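
    The statistical step is a plain Pearson product-moment correlation between per-state search interest and reported stroke prevalence. The sketch below shows that computation with made-up placeholder numbers rather than the study's Google Insights and survey data.

      # Minimal sketch of the correlation step; interest levels and prevalence
      # figures below are made-up placeholders, not the study's data.
      from statistics import mean

      def pearson_r(xs, ys):
          mx, my = mean(xs), mean(ys)
          cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
          var_x = sum((x - mx) ** 2 for x in xs)
          var_y = sum((y - my) ** 2 for y in ys)
          return cov / (var_x ** 0.5 * var_y ** 0.5)

      if __name__ == "__main__":
          interest_level = [100, 96, 80, 72, 58, 53]          # per-state search interest
          stroke_prevalence = [3.8, 3.6, 3.0, 2.9, 2.4, 2.3]  # percent, placeholder values
          print("r =", round(pearson_r(interest_level, stroke_prevalence), 3))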

  17. A Longitudinal Analysis of Search Engine Index Size

    DEFF Research Database (Denmark)

    Van den Bosch, Antal; Bogers, Toine; De Kunder, Maurice

    2015-01-01

    One of the determining factors of the quality of Web search engines is the size of their index. In addition to its influence on search result quality, the size of the indexed Web can also tell us something about which parts of the WWW are directly accessible to the everyday user. We propose a novel method of estimating the size of a Web search engine’s index by extrapolating from document frequencies of words observed in a large static corpus of Web pages. In addition, we provide a unique longitudinal perspective on the size of Google and Bing’s indexes over a nine-year period, from March 2006 until January 2015. We find that index size estimates of these two search engines tend to vary dramatically over time, with Google generally possessing a larger index than Bing. This result raises doubts about the reliability of previous one-off estimates of the size of the indexed Web. We find that much, if not all, of this variability can be explained by changes in the indexing and ranking infrastructure of Google and Bing.

  18. Brief Report: Consistency of Search Engine Rankings for Autism Websites

    Science.gov (United States)

    Reichow, Brian; Naples, Adam; Steinhoff, Timothy; Halpern, Jason; Volkmar, Fred R.

    2012-01-01

    The World Wide Web is one of the most common methods used by parents to find information on autism spectrum disorders and most consumers find information through search engines such as Google or Bing. However, little is known about how the search engines operate or the consistency of the results that are returned over time. This study presents the…

  19. An approach in building a chemical compound search engine in oracle database.

    Science.gov (United States)

    Wang, H; Volarath, P; Harrison, R

    2005-01-01

    Searching for or identifying chemical compounds is an important process in drug design and in chemistry research. An efficient search engine involves a close coupling of the search algorithm and the database implementation. The database must process chemical structures, which demands approaches to represent, store, and retrieve structures in a database system. In this paper, a general database framework for serving as a chemical compound search engine in an Oracle database is described. The framework is devoted to eliminating data type constraints for potential search algorithms, which is a crucial step toward building a domain-specific query language on top of SQL. A search engine implementation based on the database framework is also demonstrated. The convenience of the implementation emphasizes the efficiency and simplicity of the framework.

  20. Google chemtrails: a methodology to analyze topic representation in search engine results

    OpenAIRE

    Ballatore, Andrea

    2015-01-01

    Search engine results influence the visibility of different viewpoints in political, cultural, and scientific debates. Treating search engines as editorial products with intrinsic biases can help understand the structure of information flows in new media. This paper outlines an empirical methodology to analyze the representation of topics in search engines, reducing the spatial and temporal biases in the results. As a case study, the methodology is applied to 15 popular conspiracy theories, e...

  1. 'Sciencenet'--towards a global search and share engine for all scientific knowledge.

    Science.gov (United States)

    Lütjohann, Dominic S; Shah, Asmi H; Christen, Michael P; Richter, Florian; Knese, Karsten; Liebel, Urban

    2011-06-15

    Modern biological experiments create vast amounts of data which are geographically distributed. These datasets consist of petabytes of raw data and billions of documents. Yet to the best of our knowledge, a search engine technology that searches and cross-links all different data types in life sciences does not exist. We have developed a prototype distributed scientific search engine technology, 'Sciencenet', which facilitates rapid searching over this large data space. By 'bringing the search engine to the data', we do not require server farms. This platform also allows users to contribute to the search index and publish their large-scale data to support e-Science. Furthermore, a community-driven method guarantees that only scientific content is crawled and presented. Our peer-to-peer approach is sufficiently scalable for the science web without performance or capacity tradeoff. The free to use search portal web page and the downloadable client are accessible at: http://sciencenet.kit.edu. The web portal for index administration is implemented in ASP.NET, the 'AskMe' experiment publisher is written in Python 2.7, and the backend 'YaCy' search engine is based on Java 1.6.

  2. The Gaze of the Perfect Search Engine: Google as an Infrastructure of Dataveillance

    Science.gov (United States)

    Zimmer, M.

    Web search engines have emerged as a ubiquitous and vital tool for the successful navigation of the growing online informational sphere. The goal of the world's largest search engine, Google, is to "organize the world's information and make it universally accessible and useful" and to create the "perfect search engine" that provides only intuitive, personalized, and relevant results. While intended to enhance intellectual mobility in the online sphere, this chapter reveals that the quest for the perfect search engine requires the widespread monitoring and aggregation of users' online personal and intellectual activities, threatening the values the perfect search engines were designed to sustain. It argues that these search-based infrastructures of dataveillance contribute to a rapidly emerging "soft cage" of everyday digital surveillance, where they, like other dataveillance technologies before them, contribute to the curtailing of individual freedom, affect users' sense of self, and present issues of deep discrimination and social justice.

  3. GeoSearcher: Location-Based Ranking of Search Engine Results.

    Science.gov (United States)

    Watters, Carolyn; Amoudi, Ghada

    2003-01-01

    Discussion of Web queries with geospatial dimensions focuses on an algorithm that assigns location coordinates dynamically to Web sites based on the URL. Describes a prototype search system that uses the algorithm to re-rank search engine results for queries with a geospatial dimension, thus providing an alternative ranking order for search engine…

  4. Search Engine Optimization

    CERN Document Server

    Davis, Harold

    2006-01-01

    SEO--short for Search Engine Optimization--is the art, craft, and science of driving web traffic to web sites. Web traffic is food, drink, and oxygen--in short, life itself--to any web-based business. Whether your web site depends on broad, general traffic, or high-quality, targeted traffic, this PDF has the tools and information you need to draw more traffic to your site. You'll learn how to effectively use PageRank (and Google itself); how to get listed, get links, and get syndicated; and much more. The field of SEO is expanding into all the possible ways of promoting web traffic. This

  5. MOMFER: A Search Engine of Thompson's Motif-Index of Folk Literature

    NARCIS (Netherlands)

    Karsdorp, F.B.; van der Meulen, Marten; Meder, Theo; van den Bosch, Antal

    2015-01-01

    More than fifty years after the first edition of Thompson's seminal Motif-Index of Folk Literature, we present an online search engine tailored to fully disclose the index digitally. This search engine, called MOMFER, greatly enhances the searchability of the Motif-Index and provides exciting new

  6. 3rd International Conference on Intelligent Technologies and Engineering Systems

    CERN Document Server

    2016-01-01

    This book includes the original, peer-reviewed research from the 3rd International Conference on Intelligent Technologies and Engineering Systems (ICITES2014), held in December 2014 at Cheng Shiu University in Kaohsiung, Taiwan. Topics covered include: automation and robotics, fiber optics and laser technologies, network and communication systems, micro and nano technologies, and solar and power systems. This book also explores emerging technologies and their application in a broad range of engineering disciplines; examines fiber optics and laser technologies; covers biomedical, electrical, industrial, and mechanical systems; and discusses multimedia systems and applications, computer vision, and image and video signal processing.

  7. Survey of formal and informal citation in Google search engine

    Directory of Open Access Journals (Sweden)

    Afsaneh Teymourikhani

    2016-03-01

    Full Text Available Aim: Informal citations are bibliographic information (a title or Internet address) citing sources of information resources in informal scholarly communication, and they are typically neglected in traditional citation databases. This study was carried out to answer the question of whether informal citations in the web environment are traceable. The present research aims to determine what proportion of web citations found through the Google search engine are formal and informal citations. Research method: Webometrics is the method used. The study covers 1344 research articles from 98 open access journals, and web citations were extracted from the Google search engine using the "Web/URL citation extraction" method. Findings: The findings showed that ten percent of the web citations found through the Google search engine are formal and informal citations. The highest share of formal citations, 19.27%, is in the field of library and information science, and the lowest, 1.54%, is in the field of civil engineering. The highest percentage of informal citations, 3.57%, is in sociology, and the lowest, 0.39%, is in civil engineering. Journal citation is highest, at 94.12%, in the surgical field and lowest, at 5.26%, in the field of philosophy. Result: Given that formal and informal citations make up only about 10 percent of Google web citations, a reduction compared to previous research, tracking citations with this engine should be treated with caution. The amount of formal citation varies across disciplines. Cited journals are most common in the field of surgery and least common in philosophy; this indicates that in philosophy, a subset of the social sciences, journals do not play a significant role in scholarly communication. On the other hand, books have a key role in this field.

  8. Querying archetype-based EHRs by search ontology-based XPath engineering.

    Science.gov (United States)

    Kropf, Stefan; Uciteli, Alexandr; Schierle, Katrin; Krücken, Peter; Denecke, Kerstin; Herre, Heinrich

    2018-05-11

    Legacy data and new structured data can be stored in a standardized format as XML-based EHRs on XML databases. Querying documents on these databases is crucial for answering research questions. Instead of using free text searches, which lead to false positive results, precision can be increased by constraining the search to certain parts of documents. A search ontology-based specification of queries on XML documents defines search concepts and relates them to parts of the XML document structure. Such a query specification method is practically introduced and evaluated by applying concrete research questions, formulated in natural language, to a data collection for information retrieval purposes. The search is performed by search ontology-based XPath engineering that reuses ontologies and XML-related W3C standards. The key result is that the specification of research questions can be supported by the use of search ontology-based XPath engineering. A deeper recognition of entities and a semantic understanding of the content are necessary for further improvement of precision and recall. A key limitation is that the application of the introduced process requires skills in ontology and software development. In the future, the time-consuming ontology development could be overcome by implementing a new clinical role: the clinical ontologist. The introduced Search Ontology XML extension connects search terms to certain parts of XML documents and enables an ontology-based definition of queries. Search ontology-based XPath engineering can support research question answering through the specification of complex XPath expressions without deep knowledge of XPath syntax.
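
    A small sketch of why constraining a query to parts of a document raises precision; the XML structure and element names below are invented for illustration and are not the archetype or Search Ontology structures used in the cited work.

      # Sketch of restricting a search to a specific part of an XML-based record.
      # The element names here are invented for illustration; real archetype-based
      # EHRs use standardized structures.
      import xml.etree.ElementTree as ET

      doc = ET.fromstring("""
      <ehr>
        <section name="diagnosis"><entry>chronic lymphocytic leukemia</entry></section>
        <section name="history"><entry>father treated for leukemia</entry></section>
      </ehr>
      """)

      # A free-text search over the whole document matches both entries ...
      all_hits = [e.text for e in doc.iter("entry") if "leukemia" in e.text]

      # ... whereas an XPath constrained to the diagnosis section avoids the
      # false positive from the family-history narrative.
      diagnosis_hits = [e.text for e in
                        doc.findall(".//section[@name='diagnosis']/entry")
                        if "leukemia" in e.text]

      print("unconstrained:", all_hits)
      print("constrained:  ", diagnosis_hits)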

  9. Sexual information seeking on web search engines.

    Science.gov (United States)

    Spink, Amanda; Koricich, Andrew; Jansen, B J; Cole, Charles

    2004-02-01

    Sexual information seeking is an important element within human information behavior. Seeking sexually related information on the Internet takes many forms and channels, including chat rooms discussions, accessing Websites or searching Web search engines for sexual materials. The study of sexual Web queries provides insight into sexually-related information-seeking behavior, of value to Web users and providers alike. We qualitatively analyzed queries from logs of 1,025,910 Alta Vista and AlltheWeb.com Web user queries from 2001. We compared the differences in sexually-related Web searching between Alta Vista and AlltheWeb.com users. Differences were found in session duration, query outcomes, and search term choices. Implications of the findings for sexual information seeking are discussed.

  10. Intelligible Artificial Intelligence

    OpenAIRE

    Weld, Daniel S.; Bansal, Gagan

    2018-01-01

    Since Artificial Intelligence (AI) software uses techniques like deep lookahead search and stochastic optimization of huge neural networks to fit mammoth datasets, it often results in complex behavior that is difficult for people to understand. Yet organizations are deploying AI algorithms in many mission-critical settings. In order to trust their behavior, we must make it intelligible --- either by using inherently interpretable models or by developing methods for explaining otherwise overwh...

  11. Quality Dimensions of Internet Search Engines.

    Science.gov (United States)

    Xie, M.; Wang, H.; Goh, T. N.

    1998-01-01

    Reviews commonly used search engines (AltaVista, Excite, infoseek, Lycos, HotBot, WebCrawler), focusing on existing comparative studies; considers quality dimensions from the customer's point of view based on a SERVQUAL framework; and groups these quality expectations in five dimensions: tangibles, reliability, responsiveness, assurance, and…

  12. Information Retrieval for Education: Making Search Engines Language Aware

    Science.gov (United States)

    Ott, Niels; Meurers, Detmar

    2010-01-01

    Search engines have been a major factor in making the web the successful and widely used information source it is today. Generally speaking, they make it possible to retrieve web pages on a topic specified by the keywords entered by the user. Yet web searching currently does not take into account which of the search results are comprehensible for…

  13. A longitudinal analysis of search engine index size

    NARCIS (Netherlands)

    Bosch, A.P.J. van den; Bogers, T.; Kunder, M. de; Salah, A. A.; Tonta, Y.; Salah, A. A. A.; Sugimoto, C.; Al, U.

    2015-01-01

    One of the determining factors of the quality of Web search engines is the size and quality of their index. In addition to its influence on search result quality, the size of the indexed Web can also tell us something about which parts of the WWW are directly accessible to the everyday user. We

  14. Andromeda - a peptide search engine integrated into the MaxQuant environment

    DEFF Research Database (Denmark)

    Cox, Jurgen; Neuhauser, Nadin; Michalski, Annette

    2011-01-01

    A key step in mass spectrometry (MS)-based proteomics is the identification of peptides in sequence databases by their fragmentation spectra. Here we describe Andromeda, a novel peptide search engine using a probabilistic scoring model. On proteome data Andromeda performs as well as Mascot, a widely used commercial search engine, as judged by sensitivity and specificity analysis based on target decoy searches. Furthermore, it can handle data with arbitrarily high fragment mass accuracy, is able to assign and score complex patterns of post-translational modifications, such as highly phosphorylated peptides, and accommodates extremely large databases. The algorithms of Andromeda are provided. Andromeda can function independently or as an integrated search engine of the widely used MaxQuant computational proteomics platform and both are freely available at www.maxquant.org. The combination...
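
    Andromeda's full scoring model is described in the cited paper; the sketch below only illustrates the general shape of a probabilistic fragment-match score, assuming a generic binomial null model in which each theoretical fragment matches by chance with a fixed probability. The chance-match probability used here is an arbitrary placeholder.

      # Illustrative probabilistic fragment-match score (a generic binomial model,
      # not Andromeda's exact scoring): given n theoretical fragments of which k
      # are observed, score = -10 * log10 P(at least k matches by chance).
      from math import comb, log10

      def match_score(n_theoretical, k_matched, p_random=0.01):
          p_at_least_k = sum(comb(n_theoretical, j) * p_random ** j *
                             (1 - p_random) ** (n_theoretical - j)
                             for j in range(k_matched, n_theoretical + 1))
          return -10 * log10(max(p_at_least_k, 1e-300))

      if __name__ == "__main__":
          print(round(match_score(n_theoretical=20, k_matched=8), 1))   # many matches: high score
          print(round(match_score(n_theoretical=20, k_matched=1), 1))   # one match: low score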

  15. Comparative Analysis Between Intelligent Biogas Engineering and Traditional Biogas Engineering

    Institute of Scientific and Technical Information of China (English)

    林斌; 徐庆贤; 官雪芳; 林碧芬

    2012-01-01

    Taking the Xinxing pig farm in Jian'ou City, Fujian Province, as an example, this paper describes the process flow and key technical points of an intelligent biogas engineering system, including a pH monitoring system for the acidification tank and biogas digester, an intelligent feeding system, a heating system for the fiberglass (FRP) digester, an automatic biogas stirring system, a biogas flow metering system, and an Internet-based remote control system. It then compares the cost and gas production rate of intelligent biogas engineering with those of traditional biogas engineering. The results show that, from the perspective of sustainable long-term operation, intelligent biogas engineering better conforms to the cost-benefit principle, and that the intelligent biogas engineering technology system can create the best environment for biogas fermentation and thus achieve a high gas production rate.

  16. How do radiologists use the human search engine?

    International Nuclear Information System (INIS)

    Wolfe, Jeremy M.; Evans, Karla K.; Drew, Trafton; Aizenman, Avigael; Josephs, Emilie

    2016-01-01

    Radiologists perform many 'visual search tasks' in which they look for one or more instances of one or more types of target item in a medical image (e.g. cancer screening). To understand and improve how radiologists do such tasks, it must be understood how the human 'search engine' works. This article briefly reviews some of the relevant work into this aspect of medical image perception. Questions include how attention and the eyes are guided in radiologic search? How is global (image-wide) information used in search? How might properties of human vision and human cognition lead to errors in radiologic search? (authors)

  17. Estimating search engine index size variability: a 9-year longitudinal study.

    Science.gov (United States)

    van den Bosch, Antal; Bogers, Toine; de Kunder, Maurice

    One of the determining factors of the quality of Web search engines is the size of their index. In addition to its influence on search result quality, the size of the indexed Web can also tell us something about which parts of the WWW are directly accessible to the everyday user. We propose a novel method of estimating the size of a Web search engine's index by extrapolating from document frequencies of words observed in a large static corpus of Web pages. In addition, we provide a unique longitudinal perspective on the size of Google and Bing's indices over a nine-year period, from March 2006 until January 2015. We find that index size estimates of these two search engines tend to vary dramatically over time, with Google generally possessing a larger index than Bing. This result raises doubts about the reliability of previous one-off estimates of the size of the indexed Web. We find that much, if not all of this variability can be explained by changes in the indexing and ranking infrastructure of Google and Bing. This casts further doubt on whether Web search engines can be used reliably for cross-sectional webometric studies.
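
    A minimal sketch of the extrapolation idea: if a word occurs in a known fraction of a representative corpus and a search engine reports a hit count for that word, the implied index size follows from the ratio. The corpus size, document frequencies, and hit counts below are placeholders, not measurements, and the median is just one robust way to combine per-word estimates.

      # Sketch of extrapolating index size from word document frequencies.
      # If a word appears in df/CORPUS_SIZE of a representative corpus and a
      # search engine reports `hits` documents containing it, the implied index
      # size is hits * CORPUS_SIZE / df. All numbers below are placeholders.
      from statistics import median

      CORPUS_SIZE = 1_000_000           # documents in the static reference corpus

      observations = [
          # (document frequency in the corpus, hit count reported by the engine)
          (52_000, 2_300_000_000),
          (8_700,  390_000_000),
          (1_200,  61_000_000),
      ]

      estimates = [hits * CORPUS_SIZE / df for df, hits in observations]
      print(f"estimated index size: {median(estimates):,.0f} documents")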

  18. 26th International Conference on Industrial, Engineering & Other Applications of Applied Intelligent Systems (IEA/AIE)

    CERN Document Server

    Bosse, Tibor; Hindriks, Koen; Hoogendoorn, Mark; Jonker, Catholijn; Treur, Jan; Contemporary Challenges and Solutions in Applied Artificial Intelligence

    2013-01-01

      Since its origination in the mid-twentieth century, the area of Artificial Intelligence (AI) has undergone a number of developments. While the early interest in AI was mainly triggered by the desire to develop artifacts that show the same intelligent behavior as humans, nowadays scientists have realized that research in AI involves a multitude of separate challenges, besides the traditional goal to replicate human intelligence. In particular, recent history has pointed out that a variety of ‘intelligent’ computational techniques, part of which are inspired by human intelligence, may be successfully applied to solve all kinds of practical problems. This sub-area of AI, which has its main emphasis on applications of intelligent systems to solve real-life problems, is currently known under the term Applied Intelligence.   The objective of the International Conference on Industrial, Engineering & Other Applications of Applied Intelligent Systems (IEA/AIE) is to promote and disseminate recent research ...

  19. Intelligent bioinformatics : the application of artificial intelligence techniques to bioinformatics problems

    National Research Council Canada - National Science Library

    Keedwell, Edward

    2005-01-01

    ... Intelligence and Computer Science 3.1 Introduction to search 3.2 Search algorithms 3.3 Heuristic search methods 3.4 Optimal search strategies 3.5 Problems with search techniques 3.6 Complexity of...

  20. Engineering and management of IT-based service systems an intelligent decision-making support systems approach

    CERN Document Server

    Gomez, Jorge; Garrido, Leonardo; Perez, Francisco

    2014-01-01

    Intelligent Decision-Making Support Systems (i-DMSS) are specialized IT-based systems that support some or several phases of the individual, team, organizational or inter-organizational decision making process by deploying some or several intelligent mechanisms. This book pursues the following academic aims: (i) generate a compendium of quality theoretical and applied contributions in Intelligent Decision-Making Support Systems (i-DMSS) for engineering and management IT-based service systems (ITSS); (ii)  diffuse scarce knowledge about foundations, architectures and effective and efficient methods and strategies for successfully planning, designing, building, operating, and evaluating i-DMSS for ITSS, and (iii) create an awareness of, and a bridge between ITSS and i-DMSS academicians and practitioners in the current complex and dynamic engineering and management ITSS organizational. The book presents a collection of 11 chapters referring to relevant topics for both IT service systems and i-DMSS including: pr...

  1. On introduction of artificial intelligence elements to heat power engineering

    Science.gov (United States)

    Dregalin, A. F.; Nazyrova, R. R.

    1993-10-01

    The basic problems of 'the thermodynamic intelligence' of personal computers are outlined. The concept of the thermodynamic intellect of personal computers is introduced for heat processes occurring in the engines of flying vehicles. In particular, the thermodynamic intellect of computers is determined by the possibility of deriving formal relationships between thermodynamic functions. In chemical thermodynamics, the concept of a characteristic function has been introduced.

  2. Interface Design Concepts in the Development of ELSA, an Intelligent Electronic Library Search Assistant.

    Science.gov (United States)

    Denning, Rebecca; Smith, Philip J.

    1994-01-01

    Describes issues and advances in the design of appropriate inference engines and knowledge structures needed by commercially feasible intelligent intermediary systems for information retrieval. Issues associated with the design of interfaces to such functions are discussed in detail. Design principles for guiding implementation of these interfaces…

  3. International Conference on Artificial Intelligence and Evolutionary Computations in Engineering Systems

    CERN Document Server

    Vijayakumar, K; Panigrahi, Bijaya; Das, Swagatam

    2017-01-01

    The volume is a collection of high-quality peer-reviewed research papers presented in the International Conference on Artificial Intelligence and Evolutionary Computation in Engineering Systems (ICAIECES 2016) held at SRM University, Chennai, Tamilnadu, India. This conference is an international forum for industry professionals and researchers to deliberate and state their research findings, discuss the latest advancements and explore the future directions in the emerging areas of engineering and technology. The book presents original work and novel ideas, information, techniques and applications in the field of communication, computing and power technologies.

  4. Taking It to the Top: A Lesson in Search Engine Optimization

    Science.gov (United States)

    Frydenberg, Mark; Miko, John S.

    2011-01-01

    Search engine optimization (SEO), the promoting of a Web site so it achieves optimal position with a search engine's rankings, is an important strategy for organizations and individuals in order to promote their brands online. Techniques for achieving SEO are relevant to students of marketing, computing, media arts, and other disciplines, and many…

  5. TOWARDS ACTIVE SEO (SEARCH ENGINE OPTIMIZATION 2.0

    Directory of Open Access Journals (Sweden)

    Charles-Victor Boutet

    2012-12-01

    Full Text Available In the age of the writable web, new skills and new practices are appearing. In an environment that allows everyone to communicate information globally, internet referencing (or SEO) is a strategic discipline that aims to generate visibility, internet traffic, and maximum exploitation of a site's publications. Often misperceived as fraud, SEO has evolved into a facilitating tool for anyone who wishes to reference their website with search engines. In this article we show that it is possible to achieve the first rank in search results for keywords that are very competitive, using methods that are quick, sustainable, and legal while applying the principles of active SEO 2.0. This article also clarifies some working functions of search engines and some advanced referencing techniques (that are completely ethical and legal), and we lay the foundations for an in-depth reflection on the qualities and advantages of these techniques.

  6. Predicting Drug Recalls From Internet Search Engine Queries.

    Science.gov (United States)

    Yom-Tov, Elad

    2017-01-01

    Batches of pharmaceuticals are sometimes recalled from the market when a safety issue or a defect is detected in specific production runs of a drug. Such problems are usually detected when patients or healthcare providers report abnormalities to medical authorities. Here, we test the hypothesis that defective production lots can be detected earlier by monitoring queries to Internet search engines. We extracted queries submitted from the USA to the Bing search engine during 2015 that mentioned one of 5195 pharmaceutical drugs, together with all recall notifications issued by the Food and Drug Administration (FDA) during that year. Using attributes that quantify the change in query volume at the state level, we attempted to predict whether a recall of a specific drug would be ordered by the FDA within a time horizon ranging from 1 to 40 days. Our results show that future drug recalls can indeed be identified, with an AUC of 0.791 and a lift at 5% of approximately 6 when predicting a recall occurring one day ahead. This performance degrades as predictions are made for longer periods ahead. The most indicative attributes for prediction are sudden spikes in query volume about a specific medicine in each state. Recalls of prescription drugs and those estimated to be of medium risk are more likely to be identified using search query data. These findings suggest that aggregated Internet search engine data can be used to facilitate early warning of faulty batches of medicines.
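
    As an illustration of the kind of pipeline the abstract describes (not the authors' actual implementation), the sketch below builds per-state query-volume spike features for each drug from synthetic data and scores a recall classifier by AUC; the data, feature definition and use of scikit-learn are assumptions made for the example.

```python
# Minimal sketch (not the authors' pipeline): per-state query-volume "spike"
# features for each drug feed a classifier that is evaluated by AUC.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_drugs, n_states, n_days = 500, 10, 60

# Synthetic daily query volumes per (drug, state); recalled drugs get a late spike.
volumes = rng.poisson(20, size=(n_drugs, n_states, n_days)).astype(float)
labels = rng.binomial(1, 0.1, size=n_drugs)          # 1 = recall ordered later
volumes[labels == 1, :, -7:] *= rng.uniform(1.5, 3.0, size=(labels.sum(), n_states, 7))

def spike_features(v):
    """Per-state ratio of recent query volume to the historical baseline."""
    baseline = v[:, :-7].mean(axis=1) + 1e-9
    recent = v[:, -7:].mean(axis=1)
    return recent / baseline                          # one feature per state

X = np.stack([spike_features(volumes[i]) for i in range(n_drugs)])
X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.3, random_state=0)

clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("AUC:", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))
```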

  7. 25th International Conference on Industrial, Engineering & Other Applications of Applied Intelligent Systems (IEA/AIE 2012)

    CERN Document Server

    Jiang, He; Ali, Moonis; Li, Mingchu; Modern Advances in Intelligent Systems and Tools

    2012-01-01

    Intelligent systems provide a platform to connect the research in artificial intelligence to real-world problem solving applications. Various intelligent systems have been developed to face real-world applications. This book discusses the modern advances in intelligent systems and the tools in applied artificial intelligence. It consists of twenty-three chapters authored by participants of the 25th International Conference on Industrial, Engineering & Other Applications of Applied Intelligent Systems (IEA/AIE 2012) which was held in Dalian, China. This book is divided into six parts, including Applied Intelligence, Cognitive Computing and Affective Computing, Data Mining and Intelligent Systems, Decision Support Systems, Machine Learning, and Natural Language Processing. Each part includes three to five chapters. In these chapters, many approaches, applications, restrictions, and discussions are presented. The material of each chapter is self-contained and was reviewed by at least two anonymous referees t...

  8. Cross-system evaluation of clinical trial search engines.

    Science.gov (United States)

    Jiang, Silis Y; Weng, Chunhua

    2014-01-01

    Clinical trials are fundamental to the advancement of medicine but constantly face recruitment difficulties. Various clinical trial search engines have been designed to help health consumers identify trials for which they may be eligible. Unfortunately, knowledge of the usefulness and usability of their designs remains scarce. In this study, we used mixed methods, including time-motion analysis, think-aloud protocol, and survey, to evaluate five popular clinical trial search engines with 11 users. Differences in user preferences and time spent on each system were observed and correlated with user characteristics. In general, searching for applicable trials using these systems is a cognitively demanding task. Our results show that user perceptions of these systems are multifactorial. The survey indicated that eTACTS was the generally preferred system, but this finding did not persist across all mixed methods. This study confirms the value of mixed methods for a comprehensive system evaluation. Future system designers must be aware that different user groups expect different functionalities.

  9. Collection of Medical Original Data with Search Engine for Decision Support.

    Science.gov (United States)

    Orthuber, Wolfgang

    2016-01-01

    Medicine is becoming more and more complex, and humans can capture total medical knowledge only partially. For specific access, a high-resolution search engine is demonstrated which allows, besides conventional text search, the search of precise quantitative data from medical findings, therapies and results. Users can define metric spaces ("Domain Spaces", DSs) containing all searchable quantitative data ("Domain Vectors", DVs). An implementation of the search engine is online at http://numericsearch.com. In future medicine the doctor could first make a rough diagnosis and check which fine diagnostics (quantitative data) colleagues had collected in such a case. Then the doctor decides about fine diagnostics, and the results are sent (semi-automatically) to the search engine, which filters the group of patients that best fits these data. In this specific group, variable therapies can be checked together with the associated therapeutic results, like an individual scientific study for the current patient. The statistical (anonymous) results could be used for specific decision support. Conversely, the therapeutic decision (in the best case with later results) could be used to enhance the collection of precise pseudonymous medical original data, which is used for better and better statistical (anonymous) search results.
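
    A minimal sketch of the metric-space retrieval idea, with an entirely hypothetical data model: quantitative findings are stored as numeric "Domain Vectors", and the records closest to a query vector are returned.

```python
# Minimal sketch (hypothetical data model): store quantitative findings as
# numeric "domain vectors" and retrieve the patients closest to a query vector.
import numpy as np

# Each row is one patient's findings in a fixed "domain space",
# e.g. (age, systolic BP, HbA1c); values are illustrative only.
domain_vectors = np.array([
    [54, 142, 7.1],
    [61, 130, 6.4],
    [47, 155, 8.0],
    [58, 138, 7.3],
])
patient_ids = ["P001", "P002", "P003", "P004"]

def most_similar(query, k=2):
    # Normalise each dimension so no single unit dominates the distance.
    scale = domain_vectors.std(axis=0) + 1e-9
    d = np.linalg.norm((domain_vectors - query) / scale, axis=1)
    order = np.argsort(d)[:k]
    return [(patient_ids[i], float(d[i])) for i in order]

print(most_similar(np.array([55, 140, 7.2])))
```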

  10. IdentiPy: an extensible search engine for protein identification in shotgun proteomics.

    Science.gov (United States)

    Levitsky, Lev I; Ivanov, Mark V; Lobas, Anna A; Bubis, Julia A; Tarasova, Irina A; Solovyeva, Elizaveta M; Pridatchenko, Marina L; Gorshkov, Mikhail V

    2018-04-23

    We present an open-source, extensible search engine for shotgun proteomics. Implemented in the Python programming language, IdentiPy shows competitive processing speed and sensitivity compared with the state-of-the-art search engines. It is equipped with a user-friendly web interface, IdentiPy Server, enabling the use of a single server installation accessed from multiple workstations. Using a simplified version of the X!Tandem scoring algorithm and its novel "auto-tune" feature, IdentiPy outperforms the popular alternatives on high-resolution data sets. Auto-tune adjusts the search parameters for the particular data set, resulting in improved search efficiency and a simplified user experience. IdentiPy with the auto-tune feature shows higher sensitivity compared with the evaluated search engines. IdentiPy Server has built-in post-processing and protein inference procedures and provides graphic visualization of the statistical properties of the data set and the search results. It is open-source and can be freely extended to use third-party scoring functions or processing algorithms, and it allows customization of the search workflow for specialized applications.

  11. Andromeda: a peptide search engine integrated into the MaxQuant environment.

    Science.gov (United States)

    Cox, Jürgen; Neuhauser, Nadin; Michalski, Annette; Scheltema, Richard A; Olsen, Jesper V; Mann, Matthias

    2011-04-01

    A key step in mass spectrometry (MS)-based proteomics is the identification of peptides in sequence databases by their fragmentation spectra. Here we describe Andromeda, a novel peptide search engine using a probabilistic scoring model. On proteome data, Andromeda performs as well as Mascot, a widely used commercial search engine, as judged by sensitivity and specificity analysis based on target decoy searches. Furthermore, it can handle data with arbitrarily high fragment mass accuracy, is able to assign and score complex patterns of post-translational modifications, such as highly phosphorylated peptides, and accommodates extremely large databases. The algorithms of Andromeda are provided. Andromeda can function independently or as an integrated search engine of the widely used MaxQuant computational proteomics platform and both are freely available at www.maxquant.org. The combination enables analysis of large data sets in a simple analysis workflow on a desktop computer. For searching individual spectra Andromeda is also accessible via a web server. We demonstrate the flexibility of the system by implementing the capability to identify cofragmented peptides, significantly improving the total number of identified peptides.

  12. Design of personalized search engine based on user-webpage dynamic model

    Science.gov (United States)

    Li, Jihan; Li, Shanglin; Zhu, Yingke; Xiao, Bo

    2013-12-01

    Personalized search engine focuses on establishing a user-webpage dynamic model. In this model, users' personalized factors are introduced so that the search engine is better able to provide the user with targeted feedback. This paper constructs user and webpage dynamic vector tables, introduces singular value decomposition analysis in the processes of topic categorization, and extends the traditional PageRank algorithm.
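
    The following sketch illustrates, on assumed toy data, how singular value decomposition of a user-webpage interaction matrix yields a shared latent topic space that a personalized ranker could combine with a PageRank-style score; it is not the paper's actual model.

```python
# Minimal sketch (assumed data): factor a user-webpage interaction matrix with
# SVD so that users and pages share a low-dimensional "topic" space.
import numpy as np

# Rows = users, columns = webpages; entries = visit/click counts (illustrative).
interactions = np.array([
    [5, 3, 0, 0, 1],
    [4, 0, 0, 1, 0],
    [0, 0, 4, 5, 0],
    [0, 1, 5, 4, 2],
], dtype=float)

k = 2                                             # number of latent topics
U, s, Vt = np.linalg.svd(interactions, full_matrices=False)
user_topics = U[:, :k] * s[:k]                    # user positions in topic space
page_topics = Vt[:k, :].T                         # page positions in topic space

def personalized_scores(user_index):
    """Score every page for one user by similarity in the latent topic space."""
    return page_topics @ user_topics[user_index]

print(np.argsort(-personalized_scores(0)))        # pages ranked for user 0
```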

  13. Index Compression and Efficient Query Processing in Large Web Search Engines

    Science.gov (United States)

    Ding, Shuai

    2013-01-01

    The inverted index is the main data structure used by all the major search engines. Search engines build an inverted index on their collection to speed up query processing. As the size of the web grows, the length of the inverted list structures, which can easily grow to hundreds of MBs or even GBs for common terms (roughly linear in the size of…

  14. HOW DO RADIOLOGISTS USE THE HUMAN SEARCH ENGINE?

    Science.gov (United States)

    Wolfe, Jeremy M; Evans, Karla K; Drew, Trafton; Aizenman, Avigael; Josephs, Emilie

    2016-06-01

    Radiologists perform many 'visual search tasks' in which they look for one or more instances of one or more types of target item in a medical image (e.g. cancer screening). To understand and improve how radiologists do such tasks, it must be understood how the human 'search engine' works. This article briefly reviews some of the relevant work on this aspect of medical image perception. Questions include: How are attention and the eyes guided in radiologic search? How is global (image-wide) information used in search? How might properties of human vision and human cognition lead to errors in radiologic search? © The Author 2015. Published by Oxford University Press. All rights reserved.

  15. Teaching Planetary Science as Part of the Search for Extraterrestrial Intelligence (SETI)

    Science.gov (United States)

    Margot, Jean-Luc; Greenberg, Adam H.

    2017-10-01

    In Spring 2016 and 2017, UCLA offered a course titled "EPSS C179/279 - Search for Extraterrestrial Intelligence: Theory and Applications". The course is designed for advanced undergraduate students and graduate students in the science, technical, engineering, and mathematical fields. Each year, students designed an observing sequence for the Green Bank telescope, observed known planetary systems remotely, wrote a sophisticated and modular data processing pipeline, analyzed the data, and presented their results. In 2016, 15 students participated in the course (9U, 5G; 11M, 3F) and observed 14 planetary systems in the Kepler field. In 2017, 17 students participated (15U, 2G; 10M, 7F) and observed 10 planetary systems in the Kepler field, TRAPPIST-1, and LHS 1140. In order to select suitable targets, students learned about planetary systems, planetary habitability, and planetary dynamics. In addition to planetary science fundamentals, students learned radio astronomy fundamentals, collaborative software development, signal processing techniques, and statistics. Evaluations indicate that the course is challenging but that students are eager to learn because of the engrossing nature of SETI. Students particularly value the teamwork approach, the observing experience, and working with their own data. The next offering of the course will be in Spring 2018. Additional information about our SETI work is available at seti.ucla.edu.

  16. A Modular Artificial Intelligence Inference Engine System (MAIS) for support of on orbit experiments

    Science.gov (United States)

    Hancock, Thomas M., III

    1994-01-01

    This paper describes a Modular Artificial Intelligence Inference Engine System (MAIS) support tool that would provide health and status monitoring, cognitive replanning, analysis and support of on-orbit Space Station, Spacelab experiments and systems.

  17. Intelligent systems for urban search and rescue: challenges and lessons learned

    Science.gov (United States)

    Jacoff, Adam; Messina, Elena; Weiss, Brian A.

    2003-09-01

    Urban search and rescue (USAR) is one of the most dangerous and time-critical non-wartime activities. Researchers have been developing hardware and software to enable robots to perform some search and rescue functions so as to minimize the exposure of human rescue personnel to danger and maximize the survival of victims. Significant progress has been achieved, but much work remains. USAR demands a blending of numerous specialized technologies. An effective USAR robot must be endowed with key competencies, such as being able to negotiate collapsed structures, find victims and assess their condition, identify potential hazards, generate maps of the structure and victim locations, and communicate with rescue personnel. These competencies bring to bear work in numerous sub-disciplines of intelligent systems (or artificial intelligence) such as sensory processing, world modeling, behavior generation, path planning, and human-robot interaction, in addition to work in communications, mechanism design and advanced sensors. In an attempt to stimulate progress in the field, reference USAR challenges are being developed and propagated worldwide. In order to make efficient use of finite research resources, the robotic USAR community must share a common understanding of what is required, technologically, to attain each competency, and have a rigorous measure of the current level of effectiveness of various technologies. NIST is working with partner organizations to measure the performance of robotic USAR competencies and technologies. In this paper, we describe the reference test arenas for USAR robots, assess the current challenges within the field, and discuss experiences thus far in the testing effort.

  18. Optimization process planning using hybrid genetic algorithm and intelligent search for job shop machining.

    Science.gov (United States)

    Salehi, Mojtaba; Bahreininejad, Ardeshir

    2011-08-01

    Optimization of process planning is considered as the key technology for computer-aided process planning which is a rather complex and difficult procedure. A good process plan of a part is built up based on two elements: (1) the optimized sequence of the operations of the part; and (2) the optimized selection of the machine, cutting tool and Tool Access Direction (TAD) for each operation. In the present work, the process planning is divided into preliminary planning, and secondary/detailed planning. In the preliminary stage, based on the analysis of order and clustering constraints as a compulsive constraint aggregation in operation sequencing and using an intelligent searching strategy, the feasible sequences are generated. Then, in the detailed planning stage, using the genetic algorithm which prunes the initial feasible sequences, the optimized operation sequence and the optimized selection of the machine, cutting tool and TAD for each operation based on optimization constraints as an additive constraint aggregation are obtained. The main contribution of this work is the optimization of sequence of the operations of the part, and optimization of machine selection, cutting tool and TAD for each operation using the intelligent search and genetic algorithm simultaneously.
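
    A simplified sketch of the evolutionary part of such an approach is given below, under assumed data: operation sequences that respect precedence constraints are generated and then evolved to reduce machine changes. The operations, machines and cost model are illustrative, not those of the paper.

```python
# Simplified sketch (not the authors' algorithm): evolve machining operation
# sequences that respect precedence constraints, minimising machine changes.
import random

operations = ["drill", "rough_mill", "finish_mill", "bore", "tap"]
machine = {"drill": "M1", "rough_mill": "M2", "finish_mill": "M2",
           "bore": "M1", "tap": "M1"}
# (a, b) means operation a must come before operation b.
precedence = [("drill", "tap"), ("rough_mill", "finish_mill"), ("drill", "bore")]

def feasible(seq):
    pos = {op: i for i, op in enumerate(seq)}
    return all(pos[a] < pos[b] for a, b in precedence)

def random_feasible():
    while True:
        seq = random.sample(operations, len(operations))
        if feasible(seq):
            return seq

def cost(seq):
    # Number of machine changes along the sequence (fewer is better).
    return sum(machine[a] != machine[b] for a, b in zip(seq, seq[1:]))

def mutate(seq):
    # Swap two positions; keep the child only if it stays feasible.
    child = seq[:]
    i, j = random.sample(range(len(child)), 2)
    child[i], child[j] = child[j], child[i]
    return child if feasible(child) else seq

random.seed(1)
population = [random_feasible() for _ in range(20)]
for _ in range(50):
    population.sort(key=cost)
    parents = population[:10]                       # truncation selection
    population = parents + [mutate(random.choice(parents)) for _ in range(10)]

best = min(population, key=cost)
print(best, "machine changes:", cost(best))
```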

  19. Developing a search engine for pharmacotherapeutic information that is not published in biomedical journals.

    Science.gov (United States)

    Do Pazo-Oubiña, F; Calvo Pita, C; Puigventós Latorre, F; Periañez-Párraga, L; Ventayol Bosch, P

    2011-01-01

    To identify publishers of pharmacotherapeutic information not found in biomedical journals that focus on evaluating and providing advice on medicines, and to develop a search engine to access this information. We compiled web sites that publish information on the rational use of medicines and have no commercial interests, restricted to free-access web sites in Spanish, Galician, Catalan or English, and designed a search engine using the Google "custom search" application. Overall, 159 internet addresses were compiled and classified under 9 labels. We were able to recover the information from the selected sources using a search engine, called "AlquimiA", available from http://www.elcomprimido.com/FARHSD/AlquimiA.htm. The main sources of pharmacotherapeutic information not published in biomedical journals were identified. The search engine is a useful tool for searching and accessing "grey literature" on the internet. Copyright © 2010 SEFH. Published by Elsevier España. All rights reserved.

  20. The LAILAPS Search Engine: Relevance Ranking in Life Science Databases

    Directory of Open Access Journals (Sweden)

    Lange Matthias

    2010-06-01

    Search engines and retrieval systems are popular tools on a life science desktop. The manual inspection of hundreds of database entries that reflect a life science concept or fact is time-intensive daily work. Here, it is not the number of query results that matters, but their relevance. In this paper, we present the LAILAPS search engine for life science databases. The concept is to combine a novel feature model for relevance ranking, a machine learning approach to model user relevance profiles, ranking improvement by user feedback tracking, and an intuitive and slim web user interface that estimates relevance rank by tracking user interactions. Queries are formulated as simple keyword lists and are expanded with synonyms. Supporting a flexible text index and a simple data import format, LAILAPS can easily be used both as a search engine for comprehensive integrated life science databases and for small in-house project databases.
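
    The sketch below illustrates two of the ingredients mentioned above on toy data: keyword queries expanded with synonyms, and a per-entry feedback weight of the kind a learned relevance profile could supply. The synonym table, entries and weights are invented for the example.

```python
# Minimal sketch (illustrative only): expand keyword queries with synonyms and
# rank database entries by how many expanded terms they contain, nudged by a
# simple per-entry feedback weight.
from collections import defaultdict

synonyms = {"maize": {"corn", "zea mays"}, "drought": {"water deficit"}}
entries = {
    "e1": "QTL mapping of drought tolerance in maize inbred lines",
    "e2": "Expression atlas of Zea mays roots under water deficit",
    "e3": "Barley genome assembly and annotation",
}
user_feedback = defaultdict(float, {"e2": 0.5})     # learned from past clicks

def expand(query_terms):
    expanded = set(query_terms)
    for t in query_terms:
        expanded |= synonyms.get(t, set())
    return expanded

def rank(query_terms):
    terms = expand(query_terms)
    scores = {}
    for eid, text in entries.items():
        hits = sum(t in text.lower() for t in terms)
        scores[eid] = hits + user_feedback[eid]
    return sorted(scores.items(), key=lambda kv: -kv[1])

print(rank(["maize", "drought"]))
```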

  1. Towards Identifying and Reducing the Bias of Disease Information Extracted from Search Engine Data.

    Science.gov (United States)

    Huang, Da-Cang; Wang, Jin-Feng; Huang, Ji-Xia; Sui, Daniel Z; Zhang, Hong-Yan; Hu, Mao-Gui; Xu, Cheng-Dong

    2016-06-01

    The estimation of disease prevalence from online search engine data (e.g., Google Flu Trends (GFT)) has received a considerable amount of scholarly and public attention in recent years. While the utility of search engine data for disease surveillance has been demonstrated, the scientific community still seeks ways to identify and reduce the biases that are embedded in search engine data. The primary goal of this study is to explore new ways of improving the accuracy of disease prevalence estimations by combining traditional disease data with search engine data. A novel method, Biased Sentinel Hospital-based Area Disease Estimation (B-SHADE), is introduced to reduce search engine data bias from a geographical perspective. To monitor search trends on Hand, Foot and Mouth Disease (HFMD) in Guangdong Province, China, we tested our approach by selecting 11 keywords from the Baidu index platform, a Chinese big data analytics platform similar to GFT. The correlation between the number of real cases and the composite index was 0.8. After decomposing the composite index at the city level, we found that only 10 cities presented a correlation of close to 0.8 or higher. These cities were found to be more stable with respect to search volume, and they were selected as sample cities in order to estimate the search volume of the entire province. After the estimation, the correlation improved from 0.8 to 0.864. After fitting the revised search volume with historical cases, the mean absolute error was 11.19% lower than it was when the original search volume and historical cases were combined. To our knowledge, this is the first study to reduce search engine data bias levels through the use of rigorous spatial sampling strategies.
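
    The following sketch (not the B-SHADE implementation) illustrates the sampling idea on synthetic data: city-level query series are screened by their correlation with reported cases, and only the stable "sentinel" cities are used to re-estimate the province-wide search volume.

```python
# Minimal sketch: keep only the sub-regions whose query volumes track reported
# cases well, then use them as sentinels to re-estimate the search signal.
import numpy as np

rng = np.random.default_rng(0)
weeks, n_cities = 52, 12
cases = rng.poisson(100, size=weeks).astype(float)          # reported cases

# Synthetic city-level query volumes: some cities track cases, some are noisy.
city_queries = np.empty((n_cities, weeks))
for c in range(n_cities):
    noise = rng.normal(0, 60 if c >= 8 else 10, size=weeks)
    city_queries[c] = 2.0 * cases + noise

# 1) Select "stable" sentinel cities by correlation with the case series.
corr = np.array([np.corrcoef(city_queries[c], cases)[0, 1] for c in range(n_cities)])
sentinels = np.where(corr >= 0.8)[0]

# 2) Re-estimate the overall search volume from the sentinel cities only.
estimated_volume = city_queries[sentinels].sum(axis=0)
print("sentinel cities:", sentinels)
print("correlation of estimate with cases:",
      round(float(np.corrcoef(estimated_volume, cases)[0, 1]), 3))
```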

  2. Engineering Courses on Computational Thinking Through Solving Problems in Artificial Intelligence

    Directory of Open Access Journals (Sweden)

    Piyanuch Silapachote

    2017-09-01

    Computational thinking sits at the core of every engineering and computing related discipline. It has increasingly emerged as a subject in its own right at all levels of education. It is a powerful cornerstone for cognitive development, creative problem solving, algorithmic thinking and design, and programming. How to effectively teach computational thinking skills poses real challenges and creates opportunities. Targeting incoming computer science and engineering undergraduates, we resourcefully integrate elements from artificial intelligence (AI) into introductory computing courses. In addition to comprehension of the essence of computational thinking, practical exercises in AI inspire collaborative problem solving beyond abstraction, logical reasoning, and critical and analytical thinking. Problems in machine intelligence systems intrinsically connect students to algorithm-oriented computing and essential mathematical foundations. Beyond knowledge representation, AI fosters a gentle introduction to data structures and algorithms. Focused on engaging the mind as the tool, a computer is never a necessity. Neither coding nor programming is ever required. Instead, students enjoy constructivist classrooms designed to always be active, flexible, and highly dynamic. Learning to learn and reflecting on cognitive experiences, they rigorously construct knowledge from collectively solving exciting puzzles, competing in strategic games, and participating in intellectual discussions.

  3. Intelligent Mining Engineering Systems in the Structure of Industry 4.0

    Directory of Open Access Journals (Sweden)

    Rylnikova Marina

    2017-01-01

    Improving the working environment and working conditions at mines depends on establishing the rationale for the parameters and conditions of an environmentally balanced cycle of comprehensive development of mineral deposits, based on the design of mining engineering systems that minimize the effect of the human factor in hazardous zones of mining operations. In this area, robotized technologies, machinery and mechanisms with elements of artificial intelligence, and automatic controls for mining and transport systems are being developed and put into service throughout the world. In the upcoming decades, mining machines and mechanisms will be virtually industrial robots. The article presents the results of zoning open-pit and underground mine production areas, as well as mining engineering systems for combined development, according to the presence and periodicity of humans in zones of mining processes. As a surface geotechnology case study, a software structure based on a modular concept is described. The operating philosophy of mining and transport equipment with elements of artificial intelligence is shown for its entry into service in an open pit.

  4. Intelligent Mining Engineering Systems in the Structure of Industry 4.0

    Science.gov (United States)

    Rylnikova, Marina; Radchenko, Dmitriy; Klebanov, Dmitriy

    2017-11-01

    Improving the working environment and working conditions at mines depends on establishing the rationale for the parameters and conditions of an environmentally balanced cycle of comprehensive development of mineral deposits, based on the design of mining engineering systems that minimize the effect of the human factor in hazardous zones of mining operations. In this area, robotized technologies, machinery and mechanisms with elements of artificial intelligence, and automatic controls for mining and transport systems are being developed and put into service throughout the world. In the upcoming decades, mining machines and mechanisms will be virtually industrial robots. The article presents the results of zoning open-pit and underground mine production areas, as well as mining engineering systems for combined development, according to the presence and periodicity of humans in zones of mining processes. As a surface geotechnology case study, a software structure based on a modular concept is described. The operating philosophy of mining and transport equipment with elements of artificial intelligence is shown for its entry into service in an open pit.

  5. Application of SEO (Search Engine Optimization) Techniques to Websites in an Internet Marketing Strategy

    Directory of Open Access Journals (Sweden)

    Rony Baskoro Lukito

    2014-12-01

    The purpose of this research is to determine how to optimize a web design so that it increases the number of visitors. The number of Internet users in the world continues to grow in line with advances in information technology. Marketing of products and services no longer relies only on print and electronic media; moreover, the cost of using the Internet as a marketing medium is relatively inexpensive when compared to the use of television. The reach of the Internet as a marketing medium lasts 24 hours a day in different parts of the world. But for a site to be visited by many Internet users, it is not enough for the site to look good from the outside. Websites that serve as a medium for marketing must be built according to the correct rules so that they become optimal marketing media. One of these rules is that the content of the website be indexed well in search engines such as Google. Optimization of indexing is focused on the Google search engine, because 83% of Internet users across the world use Google as their search engine. Search engine optimization, commonly known as SEO, is an important rule that makes a website easier for users to find with the desired keywords.

  6. Research on the optimization strategy of web search engine based on data mining

    Science.gov (United States)

    Chen, Ronghua

    2018-04-01

    With the wide application of search engines, websites have become an important way for people to obtain information, and the amount of website information is growing explosively. It has become very difficult for people to find the information they need, and current search engines cannot fully meet this need, so there is an urgent demand for personalized website information services; data mining technology offers a breakthrough for this new challenge. In order to improve the accuracy with which people find information on websites, a website search engine optimization strategy based on data mining is proposed and verified by a search engine optimization experiment. The results show that the proposed strategy improves the accuracy of finding information and reduces the time people spend finding it. It has important practical value.

  7. Modification site localization scoring integrated into a search engine.

    Science.gov (United States)

    Baker, Peter R; Trinidad, Jonathan C; Chalkley, Robert J

    2011-07-01

    Large proteomic data sets identifying hundreds or thousands of modified peptides are becoming increasingly common in the literature. Several methods for assessing the reliability of peptide identifications both at the individual peptide or data set level have become established. However, tools for measuring the confidence of modification site assignments are sparse and are not often employed. A few tools for estimating phosphorylation site assignment reliabilities have been developed, but these are not integral to a search engine, so require a particular search engine output for a second step of processing. They may also require use of a particular fragmentation method and are mostly only applicable for phosphorylation analysis, rather than post-translational modifications analysis in general. In this study, we present the performance of site assignment scoring that is directly integrated into the search engine Protein Prospector, which allows site assignment reliability to be automatically reported for all modifications present in an identified peptide. It clearly indicates when a site assignment is ambiguous (and if so, between which residues), and reports an assignment score that can be translated into a reliability measure for individual site assignments.

  8. Adding a Visualization Feature to Web Search Engines: It’s Time

    Energy Technology Data Exchange (ETDEWEB)

    Wong, Pak C.

    2008-11-11

    Since the first world wide web (WWW) search engine quietly entered our lives in 1994, the “information need” behind web searching has rapidly grown into a multi-billion dollar business that dominates the internet landscape, drives e-commerce traffic, propels global economy, and affects the lives of the whole human race. Today’s search engines are faster, smarter, and more powerful than those released just a few years ago. With the vast investment pouring into research and development by leading web technology providers and the intense emotion behind corporate slogans such as “win the web” or “take back the web,” I can’t help but ask why are we still using the very same “text-only” interface that was used 13 years ago to browse our search engine results pages (SERPs)? Why has the SERP interface technology lagged so far behind in the web evolution when the corresponding search technology has advanced so rapidly? In this article I explore some current SERP interface issues, suggest a simple but practical visual-based interface design approach, and argue why a visual approach can be a strong candidate for tomorrow’s SERP interface.

  9. Intelligent quality function deployment system in concurrent engineering environment

    Science.gov (United States)

    Lin, Zhihang; Che, Ada

    1998-10-01

    This paper describes work being undertaken in the development of an intelligent distributed quality function deployment (IDQFD) system, which supports the product design team in transferring and deploying the 'Voice of the Customer' through the 'House of Quality' into the various stages of product planning, engineering and manufacturing. The requirement modeling of products and the optimization in QFD are outlined. The framework of the system, including QFD tools and a platform for distributed collaborative work in QFD, is described. The strategy and methods for collaboration processing in the QFD process are presented. The system shows promise for application in practice.

  10. The invisible Web uncovering information sources search engines can't see

    CERN Document Server

    Sherman, Chris

    2001-01-01

    Enormous expanses of the Internet are unreachable with standard web search engines. This book provides the key to finding these hidden resources by identifying how to uncover and use invisible web resources. Mapping the invisible Web, when and how to use it, assessing the validity of the information, and the future of Web searching are topics covered in detail. Only 16 percent of Net-based information can be located using a general search engine. The other 84 percent is what is referred to as the invisible Web-made up of information stored in databases. Unlike pages on the visible Web, informa

  11. Comparing image search behaviour in the ARRS GoldMiner search engine and a clinical PACS/RIS.

    Science.gov (United States)

    De-Arteaga, Maria; Eggel, Ivan; Do, Bao; Rubin, Daniel; Kahn, Charles E; Müller, Henning

    2015-08-01

    Information search has changed the way we manage knowledge, and the ubiquity of information access has made search a frequent activity, whether via Internet search engines or, increasingly, via mobile devices. Medical information search is in this respect no different, and much research has been devoted to analyzing the way in which physicians aim to access information. Medical image search is a much smaller domain but has gained much attention as it has different characteristics than search for text documents. While web search log files have been analysed many times to better understand user behaviour, the log files of hospital-internal systems for search in a PACS/RIS (Picture Archival and Communication System, Radiology Information System) have rarely been analysed. Such a comparison between a hospital PACS/RIS search and a web system for searching images of the biomedical literature is the goal of this paper. The objectives are to identify similarities and differences in the search behaviour of the two systems, which could then be used to optimize existing systems and build new search engines. Log files of the ARRS GoldMiner medical image search engine (freely accessible on the Internet) containing 222,005 queries, and log files of Stanford's internal PACS/RIS search called radTF containing 18,068 queries, were analysed. Each query was preprocessed and all query terms were mapped to the RadLex (Radiology Lexicon) terminology, a comprehensive lexicon of radiology terms created and maintained by the Radiological Society of North America, so the semantic content of the queries and the links between terms could be analysed and synonyms for the same concept could be detected. RadLex was mainly created for use in radiology reports, to aid structured reporting and the preparation of educational material (Langlotz, 2006) [1]. In standard medical vocabularies such as MeSH (Medical Subject Headings) and UMLS (Unified Medical Language System) specific terms of radiology are often

  12. Engineering general intelligence

    CERN Document Server

    Goertzel, Ben; Geisweiller, Nil

    2014-01-01

    The work outlines a detailed blueprint for the creation of an Artificial General Intelligence system with capability at the human level and ultimately beyond, according to the CogPrime AGI design and the OpenCog software architecture.

  13. Engineering general intelligence

    CERN Document Server

    Goertzel, Ben; Geisweiller, Nil

    2014-01-01

    The work outlines a novel conceptual and theoretical framework for understanding Artificial General Intelligence and based on this framework outlines a practical roadmap for the development of AGI with capability at the human level and ultimately beyond.

  14. ERRATUM: TOWARDS ACTIVE SEO (SEARCH ENGINE OPTIMIZATION) 2.0

    Directory of Open Access Journals (Sweden)

    Charles-Victor Boutet

    2013-04-01

    In the age of the writable web, new skills and new practices are appearing. In an environment that allows everyone to communicate information globally, internet referencing (or SEO) is a strategic discipline that aims to generate visibility, internet traffic and maximum exploitation of a site's publications. Often misperceived as a fraud, SEO has evolved to be a facilitating tool for anyone who wishes to reference their website with search engines. In this article we show that it is possible to achieve the first rank in search results for keywords that are very competitive. We show methods that are quick, sustainable and legal, while applying the principles of active SEO 2.0. This article also clarifies some working functions of search engines and some advanced referencing techniques (that are completely ethical and legal), and we lay the foundations for an in-depth reflection on the qualities and advantages of these techniques.

  15. A study of medical and health queries to web search engines.

    Science.gov (United States)

    Spink, Amanda; Yang, Yin; Jansen, Jim; Nykanen, Pirrko; Lorence, Daniel P; Ozmutlu, Seda; Ozmutlu, H Cenk

    2004-03-01

    This paper reports findings from an analysis of medical or health queries to different web search engines. We report results: (i). comparing samples of 10000 web queries taken randomly from 1.2 million query logs from the AlltheWeb.com and Excite.com commercial web search engines in 2001 for medical or health queries, (ii). comparing the 2001 findings from Excite and AlltheWeb.com users with results from a previous analysis of medical and health related queries from the Excite Web search engine for 1997 and 1999, and (iii). medical or health advice-seeking queries beginning with the word 'should'. Findings suggest: (i). a small percentage of web queries are medical or health related, (ii). the top five categories of medical or health queries were: general health, weight issues, reproductive health and puberty, pregnancy/obstetrics, and human relationships, and (iii). over time, the medical and health queries may have declined as a proportion of all web queries, as the use of specialized medical/health websites and e-commerce-related queries has increased. Findings provide insights into medical and health-related web querying and suggests some implications for the use of the general web search engines when seeking medical/health information.

  16. Research Trends with Cross Tabulation Search Engine

    Science.gov (United States)

    Yin, Chengjiu; Hirokawa, Sachio; Yau, Jane Yin-Kim; Hashimoto, Kiyota; Tabata, Yoshiyuki; Nakatoh, Tetsuya

    2013-01-01

    To help researchers in building a knowledge foundation of their research fields which could be a time-consuming process, the authors have developed a Cross Tabulation Search Engine (CTSE). Its purpose is to assist researchers in 1) conducting research surveys, 2) efficiently and effectively retrieving information (such as important researchers,…

  17. FOAMSearch.net: A custom search engine for emergency medicine and critical care.

    Science.gov (United States)

    Raine, Todd; Thoma, Brent; Chan, Teresa M; Lin, Michelle

    2015-08-01

    The number of online resources read by and pertinent to clinicians has increased dramatically. However, most healthcare professionals still use mainstream search engines as their primary port of entry to the resources on the Internet. These search engines use algorithms that do not make it easy to find clinician-oriented resources. FOAMSearch, a custom search engine (CSE), was developed to find relevant, high-quality online resources for emergency medicine and critical care (EMCC) clinicians. Using Google™ algorithms, it searches a vetted list of >300 blogs, podcasts, wikis, knowledge translation tools, clinical decision support tools and medical journals. Utilisation has increased progressively to >3000 users/month since its launch in 2011. Further study of the role of CSEs to find medical resources is needed, and it might be possible to develop similar CSEs for other areas of medicine. © 2015 Australasian College for Emergency Medicine and Australasian Society for Emergency Medicine.

  18. Win the game of Googleopoly unlocking the secret strategy of search engines

    CERN Document Server

    Bradley, Sean V

    2015-01-01

    Rank higher in search results with this guide to SEO and content building supremacy Google is not only the number one search engine in the world, it is also the number one website in the world. Only 5 percent of site visitors search past the first page of Google, so if you're not in those top ten results, you are essentially invisible. Winning the Game of Googleopoly is the ultimate roadmap to Page One Domination. The POD strategy is what gets you on that super-critical first page of Google results by increasing your page views. You'll learn how to shape your online presence for Search Engine

  19. Teacher Effectiveness in Relation to Emotional Intelligence Among Medical and Engineering Faculty Members

    Directory of Open Access Journals (Sweden)

    Ajeya Jha

    2012-11-01

    Studies have revealed that emotional intelligence (EI) influences an individual's job performance in terms of organizational commitment and job satisfaction, but prior studies were limited mostly to the corporate sector. The present study was therefore conducted to understand the correlation between EI and teaching performance among faculty members at medical and engineering colleges, as courses in these two fields are quite extensive and demanding, which often leads to stress among students (Saipanish, 2003; Foster & Spencer, 2003; Schneider, 2007; Ray and Joseph, 2010). A total of 250 faculty members from three medical and four private engineering colleges of Uttar Pradesh, India, participated in the study. The Emotional Intelligence Scale (EIS, 2007), Teacher Effectiveness Scale (TES, 2010) and Teacher Rating Scale (TRS, 2003) were administered to measure the emotional intelligence, self-reported teacher effectiveness and student-rated teacher effectiveness of the faculty members, respectively. All materials used in the study were constructed and standardized on an Indian population. The study revealed a positive correlation between EI and teacher effectiveness, both self-reported and student-rated. Among the ten components of EI considered in the study, emotional stability, self-motivation, managing relations, self-awareness and integrity emerged as the best predictors of teacher effectiveness. Gender differences in the scores of EI and teacher effectiveness were insignificant. The EI and self-reported teacher effectiveness of engineering faculty members were relatively higher than those of medical faculty. However, according to students' ratings there was no significant difference in teacher effectiveness between the two groups. Implications of this research from the perspective of training faculty members are discussed.

  20. Enhanced identification of eligibility for depression research using an electronic medical record search engine.

    Science.gov (United States)

    Seyfried, Lisa; Hanauer, David A; Nease, Donald; Albeiruti, Rashad; Kavanagh, Janet; Kales, Helen C

    2009-12-01

    Electronic medical records (EMRs) have become part of daily practice for many physicians. Attempts have been made to apply electronic search engine technology to speed EMR review. This was a prospective, observational study to compare the speed and clinical accuracy of a medical record search engine vs. manual review of the EMR. Three raters reviewed 49 cases in the EMR to screen for eligibility in a depression study using the electronic medical record search engine (EMERSE). One week later raters received a scrambled set of the same patients including 9 distractor cases, and used manual EMR review to determine eligibility. For both methods, accuracy was assessed for the original 49 cases by comparison with a gold standard rater. Use of EMERSE resulted in considerable time savings; chart reviews using EMERSE were significantly faster than traditional manual review (p=0.03). The percent agreement of raters with the gold standard (e.g. concurrent validity) using either EMERSE or manual review was not significantly different. Using a search engine optimized for finding clinical information in the free-text sections of the EMR can provide significant time savings while preserving clinical accuracy. The major power of this search engine is not from a more advanced and sophisticated search algorithm, but rather from a user interface designed explicitly to help users search the entire medical record in a way that protects health information.

  1. The History of the Internet Search Engine: Navigational Media and the Traffic Commodity

    Science.gov (United States)

    van Couvering, E.

    This chapter traces the economic development of the search engine industry over time, beginning with the earliest Web search engines and ending with the domination of the market by Google, Yahoo! and MSN. Specifically, it focuses on the ways in which search engines are similar to and different from traditional media institutions, and how the relations between traditional and Internet media have changed over time. In addition to its historical overview, a core contribution of this chapter is the analysis of the industry using a media value chain based on audiences rather than on content, and the development of traffic as the core unit of exchange. It shows that traditional media companies failed when they attempted to create vertically integrated portals in the late 1990s, based on the idea of controlling Internet content, while search engines succeeded in creating huge "virtually integrated" networks based on control of Internet traffic rather than Internet content.

  2. The Breakthrough Listen Search for Intelligent Life

    Science.gov (United States)

    Croft, Steve; Siemion, Andrew; De Boer, David; Enriquez, J. Emilio; Foster, Griffin; Gajjar, Vishal; Hellbourg, Greg; Hickish, Jack; Isaacson, Howard; Lebofsky, Matt; MacMahon, David; Price, Daniel; Werthimer, Dan

    2018-01-01

    The $100M, 10-year philanthropic "Breakthrough Listen" project is driving an unprecedented expansion of the search for intelligent life beyond Earth. Modern instruments allow ever larger regions of parameter space (luminosity function, duty cycle, beaming fraction, frequency coverage) to be explored, which is enabling us to place meaningful physical limits on the prevalence of transmitting civilizations. Data volumes are huge, and preclude long-term storage of the raw data products, so real-time and machine learning processing techniques must be employed to identify candidate signals as well as simultaneously classifying interfering sources. However, the Galaxy is now known to be a target-rich environment, teeming with habitable planets.Data from Breakthrough Listen can also be used by researchers in other areas of astronomy to study pulsars, fast radio bursts, and a range of other science targets. Breakthrough Listen is already underway in the optical and radio bands, and is also engaging with facilities across the world, including Square Kilometer Array precursors and pathfinders. I will give an overview of the technology, science goals, data products, and roadmap of Breakthrough Listen, as we attempt to answer one of humanity's oldest questions: Are we alone?

  3. A Privacy-Preserving Intelligent Medical Diagnosis System Based on Oblivious Keyword Search

    Directory of Open Access Journals (Sweden)

    Zhaowen Lin

    2017-01-01

    One of the concerns people have is how to get a diagnosis online without their privacy being jeopardized. In this paper, we propose a privacy-preserving intelligent medical diagnosis system (IMDS), which can efficiently solve this problem. In IMDS, users submit their health examination parameters to the server in a protected form; this submission process is based on the Paillier cryptosystem and does not reveal any information about their data. The server then retrieves the most likely disease (or multiple diseases) from the database and returns it to the users. In the above search process, we use oblivious keyword search (OKS) as a basic framework, which lets the server retain its computational ability while learning nothing personal from the users' data. Besides, this paper also provides a preprocessing method for data stored on the server, to make our protocol more efficient.
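
    As an illustration of the protected submission step, the sketch below implements textbook Paillier encryption with insecure toy key sizes and shows the additive homomorphism that lets a server combine encrypted values without decrypting them; it is not the paper's full protocol and omits the oblivious keyword search entirely.

```python
# Minimal textbook Paillier sketch (illustrative only, tiny insecure key sizes):
# health parameters are encrypted before submission, and the server can add
# encrypted values without learning them.
import math
import random

p, q = 104723, 104729                      # toy primes; real keys are ~2048-bit
n, n2 = p * q, (p * q) ** 2
g = n + 1
lam = math.lcm(p - 1, q - 1)
L = lambda x: (x - 1) // n
mu = pow(L(pow(g, lam, n2)), -1, n)

def encrypt(m):
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return pow(g, m, n2) * pow(r, n, n2) % n2

def decrypt(c):
    return L(pow(c, lam, n2)) * mu % n

# A user submits two protected examination parameters.
c1, c2 = encrypt(120), encrypt(80)         # e.g. systolic / diastolic pressure
assert decrypt(c1) == 120 and decrypt(c2) == 80

# Additive homomorphism: the server combines ciphertexts without decrypting.
assert decrypt(c1 * c2 % n2) == 200
print("homomorphic sum OK")
```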

  4. An assessment of the visibility of MeSH-indexed medical web catalogs through search engines.

    Science.gov (United States)

    Zweigenbaum, P; Darmoni, S J; Grabar, N; Douyère, M; Benichou, J

    2002-01-01

    Manually indexed Internet health catalogs such as CliniWeb or CISMeF provide resources for retrieving high-quality health information. Users of these quality-controlled subject gateways are most often referred to them by general search engines such as Google, AltaVista, etc. This raises several questions, among which the following: what is the relative visibility of medical Internet catalogs through search engines? This study addresses this issue by measuring and comparing the visibility of six major, MeSH-indexed health catalogs through four different search engines (AltaVista, Google, Lycos, Northern Light) in two languages (English and French). Over half a million queries were sent to the search engines; for most of these search engines, according to our measures at the time the queries were sent, the most visible catalog for English MeSH terms was CliniWeb and the most visible one for French MeSH terms was CISMeF.

  5. Research on the User Interest Modeling of Personalized Search Engine

    Institute of Scientific and Technical Information of China (English)

    LI Zhengwei; XIA Shixiong; NIU Qiang; XIA Zhanguo

    2007-01-01

    At present, how to enable a search engine to construct an initial model of a user's personal interests, capture the user's personalized information in a timely manner, and provide personalized services accurately has become a hotspot of search engine research. Aiming at the problems of user model construction, and combining manual customization modeling with automatic analytical modeling, a User Interest Model (UIM) is proposed in this paper. On this basis, the corresponding establishment and update algorithms for the User Interest Profile (UIP) are presented. Simulation tests proved that the proposed UIM and corresponding algorithms can enhance retrieval precision effectively and have superior adaptability.
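
    A minimal sketch of what a profile update of this kind could look like is given below; the decay and reinforcement constants, and the term-vector representation, are assumptions for illustration rather than the algorithms proposed in the paper.

```python
# Minimal sketch (hypothetical model): keep a user's interest profile as a
# weighted term vector, decaying old interests and reinforcing terms from
# pages the user clicks, so later results can be re-ranked against it.
from collections import defaultdict

DECAY = 0.9          # how quickly old interests fade
LEARN = 1.0          # weight added for each clicked-page term

profile = defaultdict(float)

def update_profile(clicked_page_terms):
    # Decay every existing interest, then reinforce the observed terms.
    for term in list(profile):
        profile[term] *= DECAY
    for term in clicked_page_terms:
        profile[term] += LEARN

def personalization_score(doc_terms):
    return sum(profile[t] for t in doc_terms)

update_profile(["python", "search", "ranking"])
update_profile(["ranking", "evaluation"])
print(dict(profile))
print(personalization_score(["ranking", "python"]))
```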

  6. Competitive intelligence as an enabler for firm competitiveness: An overview

    Directory of Open Access Journals (Sweden)

    Alexander Maune

    2014-06-01

    The purpose of this article is to provide an overview, from the literature, of how competitive intelligence can be an enabler of a firm's competitiveness. This overview is provided against the background of the intense global competition that firms currently experience. The paper used qualitative content analysis as the data collection methodology on all identified journal articles on competitive intelligence and firm competitiveness. To identify relevant literature, academic databases and search engines were used. Moreover, a review of references in related studies led to more relevant sources, the references of which were further reviewed and analysed. To ensure reliability and trustworthiness, peer-reviewed journal articles and triangulation were used. The paper found that competitive intelligence is an important enabler of firm competitiveness. The findings will assist business managers in understanding and improving their outlook on competitive intelligence as an enabler of firm competitiveness, and will be of great academic value.

  7. New Capabilities in the Astrophysics Multispectral Archive Search Engine

    Science.gov (United States)

    Cheung, C. Y.; Kelley, S.; Roussopoulos, N.

    The Astrophysics Multispectral Archive Search Engine (AMASE) uses object-oriented database techniques to provide a uniform multi-mission and multi-spectral interface to search for data in the distributed archives. We describe our experience of porting AMASE from Illustra object-relational DBMS to the Informix Universal Data Server. New capabilities and utilities have been developed, including a spatial datablade that supports Nearest Neighbor queries.

  8. The U.S. Army Functional Concept for Intelligence 2020-2040

    Science.gov (United States)

    2017-02-01

    importance of open source intelligence (OSINT) and the Internet of things adds to the volume and diversity of data. Access to the vast amounts of data...U.S. interests. OSINT collection may be the driving source of information, particularly in large urban centers. ...private and public security and traffic systems. (2) OSINT provides insight into human terrain, including social media, search engines, databases

  9. Development and Evaluation of Thesauri-Based Bibliographic Biomedical Search Engine

    Science.gov (United States)

    Alghoson, Abdullah

    2017-01-01

    Due to the large volume and exponential growth of biomedical documents (e.g., books, journal articles), it has become increasingly challenging for biomedical search engines to retrieve relevant documents based on users' search queries. Part of the challenge is the matching mechanism of free-text indexing that performs matching based on…

  10. GeneView: a comprehensive semantic search engine for PubMed.

    Science.gov (United States)

    Thomas, Philippe; Starlinger, Johannes; Vowinkel, Alexander; Arzt, Sebastian; Leser, Ulf

    2012-07-01

    Research results are primarily published in the scientific literature, and curation efforts cannot keep up with the rapid growth of published literature. A plethora of knowledge remains hidden in large text repositories like MEDLINE. Consequently, life scientists have to spend a great amount of time searching for specific information. The enormous ambiguity among most names of biomedical objects such as genes, chemicals and diseases often produces overly large and unspecific search results. We present GeneView, a semantic search engine for biomedical knowledge. GeneView is built upon a comprehensively annotated version of PubMed abstracts and openly available PubMed Central full texts. This semi-structured representation of biomedical texts enables a number of features extending classical search engines. For instance, users may search for entities using unique database identifiers, or they may rank documents by the number of specific mentions they contain. Annotation is performed by a multitude of state-of-the-art text-mining tools for recognizing mentions from 10 entity classes and for identifying protein-protein interactions. GeneView currently contains annotations for >194 million entities from 10 classes for ∼21 million citations with 271,000 full text bodies. GeneView can be searched at http://bc3.informatik.hu-berlin.de/.
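
    The sketch below illustrates, on invented annotations, the ranking-by-mention-count feature described above: documents annotated with entity identifiers are ordered by how often a queried identifier occurs in them.

```python
# Minimal sketch (toy data): given per-document entity annotations keyed by a
# database identifier, rank citations by how many mentions of that entity
# they contain.
from collections import Counter

# doc id -> list of entity identifiers recognised in that document.
annotations = {
    "PMID:1": ["UniProt:P04637", "UniProt:P04637", "MESH:D009369"],
    "PMID:2": ["UniProt:P04637"],
    "PMID:3": ["MESH:D009369", "MESH:D009369", "MESH:D009369"],
}

def rank_by_entity(entity_id):
    counts = {doc: Counter(ents)[entity_id] for doc, ents in annotations.items()}
    return sorted(((d, c) for d, c in counts.items() if c > 0),
                  key=lambda dc: -dc[1])

print(rank_by_entity("UniProt:P04637"))   # [('PMID:1', 2), ('PMID:2', 1)]
```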

  11. An empirical study on website usability elements and how they affect search engine optimisation

    Directory of Open Access Journals (Sweden)

    Eugene B. Visser

    2011-03-01

    The primary objective of this research project was to identify and investigate the website usability attributes that are in contradiction with search engine optimisation elements. The secondary objective was to determine whether these usability attributes affect conversion. Although the literature review identifies the contradictions, experts disagree about their existence. An experiment was conducted whereby the conversion and/or traffic ratio results of an existing control website were compared to a usability-designed version of the control website, namely the experimental website. All optimisation elements were ignored, thus implementing only usability. The results clearly show that inclusion of the usability attributes positively affects conversion, indicating that usability is a prerequisite for effective website design. Search engine optimisation is also a prerequisite, for the very reason that if a website does not rank on the first page of the search engine results page for a given keyword, that website might as well not exist. According to this empirical work, usability is in contradiction with search engine optimisation best practices. Therefore the two need to be weighed up in terms of their importance to search engines and visitors.

  12. A Full-Text-Based Search Engine for Finding Highly Matched Documents Across Multiple Categories

    Science.gov (United States)

    Nguyen, Hung D.; Steele, Gynelle C.

    2016-01-01

    This report demonstrates a full-text-based search engine that works with any Web-based mobile application. The engine has the capability to search databases across multiple categories based on a user's queries and to identify the most relevant or similar documents. The search results presented here were found using an Android (Google Co.) mobile device; however, the engine is also compatible with other mobile phones.

  13. The Search for Extraterrestrial Intelligence in the 1960s: Science in Popular Culture

    Science.gov (United States)

    Smith, Sierra

    2012-01-01

    Building upon the advancement of technology during the Second World War and the important scientific discoveries which have been made about the structure and components of the universe, scientists, especially in radio astronomy and physics, began seriously addressing the possibility of extraterrestrial intelligence in the 1960s. The Search for Extraterrestrial Intelligence (SETI) quickly became one of the most controversial scientific issues in the post Second World War period. The controversy played out, not only in scientific and technical journals, but in newspapers and in popular literature. Proponents for SETI, including Frank Drake, Carl Sagan, and Philip Morrison, actively used a strategy of engagement with the public by using popular media to lobby for exposure and funding. This paper will examine the use of popular media by scientists interested in SETI to popularize and heighten public awareness and also to examine the effects of popularization on SETI's early development. My research has been generously supported by the National Radio Astronomy Observatory.

  14. Knowledge-based personalized search engine for the Web-based Human Musculoskeletal System Resources (HMSR) in biomechanics.

    Science.gov (United States)

    Dao, Tien Tuan; Hoang, Tuan Nha; Ta, Xuan Hien; Tho, Marie Christine Ho Ba

    2013-02-01

    Human musculoskeletal system resources are valuable for learning and medical purposes. Internet-based information from conventional search engines such as Google or Yahoo cannot respond to the need for useful, accurate, reliable and good-quality human musculoskeletal resources related to medical processes, pathological knowledge and practical expertise. In the present work, an advanced knowledge-based personalized search engine was developed. Our search engine was based on a client-server multi-layer multi-agent architecture and the principle of semantic web services to acquire dynamically accurate and reliable HMSR information by a semantic processing and visualization approach. A security-enhanced mechanism was applied to protect the medical information. A multi-agent crawler was implemented to develop a content-based database of HMSR information. A new semantic-based PageRank score and related mathematical formulas were also defined and implemented. As a result, semantic web service descriptions were presented in OWL, WSDL and OWL-S formats. Operational scenarios with related web-based interfaces for personal computers and mobile devices were presented and analyzed. A functional comparison between our knowledge-based search engine, a conventional search engine and a semantic search engine showed the originality and the robustness of our knowledge-based personalized search engine. In fact, our knowledge-based personalized search engine allows different users, such as orthopedic patients and experts, healthcare system managers or medical students, to access remotely useful, accurate, reliable and good-quality HMSR information for their learning and medical purposes. Copyright © 2012 Elsevier Inc. All rights reserved.

  15. The Case for Intelligent Propulsion Control for Fast Engine Response

    Science.gov (United States)

    Litt, Jonathan S.; Frederick, Dean K.; Guo, Ten-Huei

    2009-01-01

    Damaged aircraft have occasionally had to rely solely on thrust to maneuver as a consequence of losing hydraulic power needed to operate flight control surfaces. The lack of successful landings in these cases inspired research into more effective methods of utilizing propulsion-only control. That research demonstrated that one of the major contributors to the difficulty in landing is the slow response of the engines as compared to using traditional flight control. To address this, research is being conducted into ways of making the engine more responsive under emergency conditions. This can be achieved by relaxing controller limits, adjusting schedules, and/or redesigning the regulators to increase bandwidth. Any of these methods can enable faster response at the potential expense of engine life and increased likelihood of stall. However, an example sensitivity analysis revealed a complex interaction of the limits and the difficulty in predicting the way to achieve the fastest response. The sensitivity analysis was performed on a realistic engine model, and demonstrated that significantly faster engine response can be achieved compared to standard Bill of Material control. However, the example indicates the need for an intelligent approach to controller limit adjustment in order for the potential to be fulfilled.

  16. GTNDSE: The GA Tech nuclear data search engine

    International Nuclear Information System (INIS)

    Kulp, W.D.; Wood, J.L.

    2004-01-01

    The function of the search engine is to retrieve data from ENSDF-formatted files and to write the data in a user-selected format. Its purposes are to support horizontal systematics of the nuclear mass surface, to enable comparison with experimental data, and to assist in data analysis and evaluation.

  17. NEWordS: A News Search Engine for English Vocabulary Learning

    Directory of Open Access Journals (Sweden)

    Xuejing Huang

    2015-08-01

    Full Text Available Vocabulary is the first hurdle for English learners to overcome. Instead of simply showing a word again and again, we developed an English news article search engine based on users' word-reciting records on Shanbay.com. It is designed for advanced English learners to find suitable reading materials. The search engine consists of a Crawling Module, Document Normalizing Module, Indexing Module, Querying Module and Interface Module. We propose three sorting and ranking algorithms for the Querying Module. For the basic algorithm, five crucial principles are taken into consideration; term frequency, inverse document frequency, familiarity degree and article freshness degree are factors in this algorithm. We then designed an improved algorithm for the scenario in which a user reads multiple articles from the search result list. Here we adopt an iterative, greedy method: the essential idea is to select English news articles one by one according to the query, while dynamically updating the unfamiliarity of the words at each iterative step. Moreover, we developed an advanced algorithm that also takes article difficulty into account. The Interface Module is designed as a website, and data visualization technologies (e.g. word clouds) are applied there. Furthermore, we conducted both an applicability check and a performance evaluation. Metrics such as searching time, word-covering ratio and the minimum number of articles that completely cover all the queried vocabulary were randomly sampled and analyzed in depth. The results show that our search engine works well with satisfying performance.
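
    The basic ranking described above combines term statistics with the learner's word familiarity and article freshness. The sketch below shows one plausible way to combine these factors; the weighting scheme, field names and the freshness decay are assumptions made for illustration, not the formula published in the paper.

        import math, time

        # Hypothetical weights; the paper's actual formula and weights are not given here.
        W_TFIDF, W_FAMILIARITY, W_FRESHNESS = 1.0, 2.0, 0.5

        def score(article, query_terms, n_docs, doc_freq, familiarity, now=None):
            """Rank an article for a vocabulary-learning query.

            article:     dict with 'tokens' (list of words) and 'published' (unix time)
            doc_freq:    dict word -> number of documents containing the word
            familiarity: dict word -> user's familiarity in [0, 1] from reciting records
            """
            now = now or time.time()
            tokens = article["tokens"]
            s = 0.0
            for w in query_terms:
                tf = tokens.count(w) / max(len(tokens), 1)
                idf = math.log((1 + n_docs) / (1 + doc_freq.get(w, 0)))
                unfamiliarity = 1.0 - familiarity.get(w, 0.0)   # unfamiliar words matter more
                s += W_TFIDF * tf * idf + W_FAMILIARITY * unfamiliarity * tf
            age_days = (now - article["published"]) / 86400.0
            freshness = 1.0 / (1.0 + age_days)                  # newer articles score higher
            return s + W_FRESHNESS * freshness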

  18. Through the Google Goggles: Sociopolitical Bias in Search Engine Design

    Science.gov (United States)

    Diaz, A.

    Search engines like Google are essential to navigating the Web's endless supply of news, political information, and citizen discourse. The mechanisms and conditions under which search results are selected should therefore be of considerable interest to media scholars, political theorists, and citizens alike. In this chapter, I adopt a "deliberative" ideal for search engines and examine whether Google exhibits the "same old" media biases of mainstreaming, hypercommercialism, and industry consolidation. In the end, serious objections to Google are raised: Google may favor popularity over richness; it provides advertising that competes directly with "editorial" content; it so overwhelmingly dominates the industry that users seldom get a second opinion, and this is unlikely to change. Ultimately, however, the results of this analysis may speak less about Google than about contradictions in the deliberative ideal and the so-called "inherently democratic" nature of the Web.

  19. The EBI search engine: EBI search as a service—making biological data accessible for all

    Science.gov (United States)

    Park, Young M.; Squizzato, Silvano; Buso, Nicola; Gur, Tamer

    2017-01-01

    Abstract We present an update of the EBI Search engine, an easy-to-use fast text search and indexing system with powerful data navigation and retrieval capabilities. The interconnectivity that exists between data resources at EMBL–EBI provides easy, quick and precise navigation and a better understanding of the relationship between different data types that include nucleotide and protein sequences, genes, gene products, proteins, protein domains, protein families, enzymes and macromolecular structures, as well as the life science literature. EBI Search provides a powerful RESTful API that enables its integration into third-party portals, thus providing ‘Search as a Service’ capabilities, which are the main topic of this article. PMID:28472374

  20. Comparison of Four Search Engines and their efficacy With Emphasis on Literature Research in Addiction (Prevention and Treatment).

    Science.gov (United States)

    Samadzadeh, Gholam Reza; Rigi, Tahereh; Ganjali, Ali Reza

    2013-01-01

    Surveying valuable and up-to-date information from the internet has become vital for researchers and scholars, because every day thousands, perhaps millions, of scientific works are published as digital resources on the internet; researchers cannot ignore this great resource when searching for documents related to their work, many of which may not be found in any library. Given the variety of documents presented on the internet, search engines are among the most effective tools for finding information. The aim of this study was to evaluate three criteria (recall, preciseness and importance) for four search engines, namely PubMed, Science Direct, Google Scholar and the federated search of the Iranian National Medical Digital Library, in the field of addiction (prevention and treatment), in order to select the most effective search engine for literature research. This was a cross-sectional study in which four search engines popular in the medical sciences were evaluated. Medical Subject Headings (MeSH) were used to select keywords. The keywords were entered into each search engine and the first 10 entries of each result list were evaluated. Direct observation was used for data collection, and the data were analyzed with descriptive statistics (counts, percentages and means) and inferential statistics (one-way analysis of variance (ANOVA) and post hoc Tukey tests) in SPSS 15. The search engines performed differently with regard to the evaluated criteria, and the differences were statistically significant (p = 0.004). PubMed, Science Direct and Google Scholar were the best in recall, preciseness and importance, respectively. As literature research is one of the most important stages of research, researchers, and especially Substance-Related Disorders scholars, should use the search engines with the best recall, preciseness and importance in their subject field to reach desirable results, rather than depending on just one

  1. Intelligence Reach for Expertise (IREx)

    Science.gov (United States)

    Hadley, Christina; Schoening, James R.; Schreiber, Yonatan

    2015-05-01

    IREx is a search engine for next-generation analysts to find collaborators. U.S. Army Field Manual 2.0 (Intelligence) calls for collaboration within and outside the area of operations, but finding the best collaborator for a given task can be challenging. IREx will be demonstrated as part of the Actionable Intelligence Technology Enabled Capability Demonstration (AI-TECD) at the E15 field exercises at Ft. Dix in July 2015. It includes a Task Model for describing a task and its prerequisite competencies, plus a User Model (i.e., a user profile) for individuals to assert their capabilities and other relevant data. These models use a canonical suite of ontologies as a foundation, which enables robust queries and also keeps the models logically consistent. IREx also supports learning validation, where a learner who has completed a course module can search and find a suitable task to practice and demonstrate that their new knowledge can be used in the real world for its intended purpose. The IREx models are in the initial phase of a process to develop them as an IEEE standard. This initiative is currently an approved IEEE Study Group, after which follow a standards working group, then a balloting group, and, if all goes well, an IEEE standard.

  2. Intelligence Ethics:

    DEFF Research Database (Denmark)

    Rønn, Kira Vrist

    2016-01-01

    Questions concerning what constitutes a morally justified conduct of intelligence activities have received increased attention in recent decades. However, intelligence ethics is not yet homogeneous or embedded as a solid research field. The aim of this article is to sketch the state of the art of intelligence ethics and point out subjects for further scrutiny in future research. The review clusters the literature on intelligence ethics into two groups: contributions on external topics (i.e., the accountability of and the public trust in intelligence agencies) and on internal topics (i.e., the search for an ideal ethical framework for intelligence actions). The article concludes that there are many holes to fill for future studies on intelligence ethics, both in external and internal discussions. Thus, the article is an invitation – especially to moral philosophers and political theorists...

  3. Seasonal trends in sleep-disordered breathing: evidence from Internet search engine query data.

    Science.gov (United States)

    Ingram, David G; Matthews, Camilla K; Plante, David T

    2015-03-01

    The primary aim of the current study was to test the hypothesis that there is a seasonal component to snoring and obstructive sleep apnea (OSA) through the use of Google search engine query data. Internet search engine query data were retrieved from Google Trends from January 2006 to December 2012. Monthly normalized search volume was obtained over that 7-year period in the USA and Australia for the following search terms: "snoring" and "sleep apnea". Seasonal effects were investigated by fitting cosinor regression models. In addition, the search terms "snoring children" and "sleep apnea children" were evaluated to examine seasonal effects in pediatric populations. Statistically significant seasonal effects were found using cosinor analysis in both the USA and Australia for "snoring" and "sleep apnea", with the exception of the "sleep apnea" search term in Australia (p = 0.13). Seasonal patterns for "snoring children" and "sleep apnea children" were also observed in the USA (p = 0.002 for "snoring children"), but there was insufficient search volume to examine these search terms in Australia. All searches peaked in the winter or early spring in both countries, with the magnitude of the seasonal effect ranging from 5 to 50%. Our findings indicate that there are significant seasonal trends for both snoring and sleep apnea internet search engine queries, with a peak in the winter and early spring. Further research is indicated to determine the mechanisms underlying these findings, whether they have clinical impact, and whether they are associated with other comorbid medical conditions that have similar patterns of seasonal exacerbation.
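
    Cosinor regression, as used above, fits a mean level plus a cosine wave of fixed period to a time series and reports the size and timing of the seasonal swing. The sketch below fits such a model by least squares to simulated monthly search volumes; the data and the 12-month period are assumptions for illustration, not the study's series.

        import numpy as np

        # Toy monthly series (84 months, 2006-2012); replace with normalized
        # Google Trends volumes.  The values below are simulated for illustration.
        months = np.arange(84)
        volume = 50 + 10 * np.cos(2 * np.pi * months / 12) + np.random.normal(0, 2, 84)

        # Cosinor model: y = M + A*cos(2*pi*t/12) + B*sin(2*pi*t/12), fit by least squares.
        X = np.column_stack([
            np.ones_like(months, dtype=float),
            np.cos(2 * np.pi * months / 12),
            np.sin(2 * np.pi * months / 12),
        ])
        coef, *_ = np.linalg.lstsq(X, volume, rcond=None)
        mesor, a, b = coef
        amplitude = np.hypot(a, b)            # size of the seasonal swing
        acrophase = np.arctan2(-b, a)         # phase of the seasonal peak (radians)
        peak_month = (-acrophase % (2 * np.pi)) * 12 / (2 * np.pi)
        print(f"MESOR={mesor:.1f}, amplitude={amplitude:.1f}, peak near month {peak_month:.1f}")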

  4. Artificial Intelligence Project

    Science.gov (United States)

    1990-01-01

    Artificial Intelligence Project, Final Technical Report, ARO Contract DAAG29-84-K-0060, Artificial Intelligence Laboratory, The University of Texas at Austin, Austin, Texas 78712. Includes the Symposium on Artificial Intelligence and Software Engineering Working Notes, March 1989, and Blumenthal, Brad, "An Architecture for Automating...".

  5. Developing a distributed HTML5-based search engine for geospatial resource discovery

    Science.gov (United States)

    ZHOU, N.; XIA, J.; Nebert, D.; Yang, C.; Gui, Z.; Liu, K.

    2013-12-01

    With the explosive growth of data, Geospatial Cyberinfrastructure (GCI) components are developed to manage geospatial resources, such as data discovery and data publishing. However, the efficiency of geospatial resource discovery remains challenging in that: (1) existing GCIs are usually developed for users of specific domains, so users may have to visit a number of GCIs to find appropriate resources; (2) the complexity of the decentralized network environment usually results in slow response and poor user experience; (3) users who use different browsers and devices may have very different experiences because of the diversity of front-end platforms (e.g. Silverlight, Flash or HTML). To address these issues, we developed a distributed, HTML5-based search engine. Specifically, (1) the search engine adopts a brokering approach to retrieve geospatial metadata from various distributed GCIs; (2) the asynchronous record retrieval mode enhances search performance and user interactivity; (3) the search engine, based on HTML5, is able to provide unified access capabilities for users with different devices (e.g. tablets and smartphones).
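
    The brokering approach described above amounts to querying several catalogs concurrently and merging their partial results as they arrive. The sketch below does this with a thread pool; the catalog endpoints, query parameter and JSON response format are placeholders, not the authors' actual GCIs or protocols.

        import concurrent.futures
        import requests

        # Placeholder catalog endpoints; the GCIs brokered by the real system are not
        # specified here.
        CATALOGS = {
            "catalog_a": "https://example.org/csw/search",
            "catalog_b": "https://example.net/opensearch",
        }

        def query_catalog(name, url, keyword):
            """Fetch metadata records from one catalog; errors are reported, not fatal."""
            try:
                resp = requests.get(url, params={"q": keyword}, timeout=10)
                resp.raise_for_status()
                return name, resp.json()          # assumes a JSON response for simplicity
            except Exception as exc:
                return name, {"error": str(exc)}

        def brokered_search(keyword):
            """Query all catalogs concurrently and merge the partial results."""
            results = {}
            with concurrent.futures.ThreadPoolExecutor(max_workers=len(CATALOGS)) as pool:
                futures = [pool.submit(query_catalog, n, u, keyword)
                           for n, u in CATALOGS.items()]
                for fut in concurrent.futures.as_completed(futures):
                    name, records = fut.result()
                    results[name] = records       # each catalog's records arrive as completed
            return results

        print(brokered_search("antarctic ice thickness").keys())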

  6. Envisioning engineering education and practice in the coming intelligence convergence era — a complex adaptive systems approach

    Science.gov (United States)

    Noor, Ahmed K.

    2013-12-01

    Some of the recent attempts at improving and transforming engineering education are reviewed. The attempts aim at providing entry-level engineers with the skills needed to address the challenges of future large-scale complex systems and projects. Some of the frontier sectors and future challenges for engineers are outlined. The major characteristics of the coming intelligence convergence era (the post-information age) are identified. These include the prevalence of smart devices and environments, the widespread application of anticipatory computing and predictive/prescriptive analytics, as well as a symbiotic relationship between humans and machines. Devices and machines will be able to learn from, and with, humans in a natural collaborative way. The recent game changers in learnscapes (learning paradigms, technologies, platforms, spaces, and environments) that can significantly impact engineering education in the coming era are identified. Among these are open educational resources, knowledge-rich classrooms, immersive interactive 3D learning, augmented reality, reverse instruction/flipped classroom, gamification, robots in the classroom, and adaptive personalized learning. Significant transformative changes in, and mass customization of, learning are envisioned to emerge from the synergistic combination of the game changers and other technologies. The realization of the aforementioned vision requires the development of a new multidisciplinary framework of emergent engineering for relating innovation, complexity and cybernetics within the future learning environments. The framework can be used to treat engineering education as a complex adaptive system, with dynamically interacting and communicating components (instructors, and individual, small, and large groups of learners). The emergent behavior resulting from the interactions can produce a progressively better, and continuously improving, learning environment. As a first step towards the realization of

  7. Sentiment analysis and ontology engineering an environment of computational intelligence

    CERN Document Server

    Chen, Shyi-Ming

    2016-01-01

    This edited volume provides the reader with a fully updated, in-depth treatise on the emerging principles, conceptual underpinnings, algorithms and practice of Computational Intelligence in the realization of concepts and implementation of models of sentiment analysis and ontology-oriented engineering. The volume involves studies devoted to key issues of sentiment analysis, sentiment models, and ontology engineering. The book is structured into three main parts. The first part offers a comprehensive and prudently structured exposure to the fundamentals of sentiment analysis and natural language processing. The second part consists of studies devoted to the concepts, methodologies, and algorithmic developments elaborating on fuzzy linguistic aggregation for emotion analysis, the interpretability of computational sentiment models, emotion classification, sentiment-oriented information retrieval, and a methodology of adaptive dynamics in knowledge acquisition. The third part includes a plethora of applica...

  8. Finding Business Information on the "Invisible Web": Search Utilities vs. Conventional Search Engines.

    Science.gov (United States)

    Darrah, Brenda

    Researchers for small businesses, which may have no access to expensive databases or market research reports, must often rely on information found on the Internet, which can be difficult to find. Although current conventional Internet search engines are now able to index over one billion documents, there are many more documents existing in…

  9. Human-Centric Interfaces for Ambient Intelligence

    CERN Document Server

    Aghajan, Hamid; Delgado, Ramon Lopez-Cozar

    2009-01-01

    To create truly effective human-centric ambient intelligence systems both engineering and computing methods are needed. This is the first book to bridge data processing and intelligent reasoning methods for the creation of human-centered ambient intelligence systems. Interdisciplinary in nature, the book covers topics such as multi-modal interfaces, human-computer interaction, smart environments and pervasive computing, addressing principles, paradigms, methods and applications. This book will be an ideal reference for university researchers, R&D engineers, computer engineers, and graduate s

  10. The EBI search engine: EBI search as a service-making biological data accessible for all.

    Science.gov (United States)

    Park, Young M; Squizzato, Silvano; Buso, Nicola; Gur, Tamer; Lopez, Rodrigo

    2017-07-03

    We present an update of the EBI Search engine, an easy-to-use fast text search and indexing system with powerful data navigation and retrieval capabilities. The interconnectivity that exists between data resources at EMBL-EBI provides easy, quick and precise navigation and a better understanding of the relationship between different data types that include nucleotide and protein sequences, genes, gene products, proteins, protein domains, protein families, enzymes and macromolecular structures, as well as the life science literature. EBI Search provides a powerful RESTful API that enables its integration into third-party portals, thus providing 'Search as a Service' capabilities, which are the main topic of this article. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.

  11. Applications of artificial intelligence in engineering problems. Vol. 1 and 2. Proceedings

    Energy Technology Data Exchange (ETDEWEB)

    Sriram, D; Adey, R [eds.]

    1986-01-01

    The purpose of the first international conference on the application of AI to engineering problems is to provide a forum for engineers all over the world to present their work on these applications. The interest being shown in the engineering community is evident from the large number of papers from the conference that are compiled into these volumes. The topics span various engineering disciplines and are grouped into the following categories: knowledge representation and acquisition; robotics; natural language processing; inexact inference techniques; general design methodologies; constraints; design strategies specific to a particular domain; diagnostic systems; intelligent front ends; and tools and techniques for building expert systems. Most of the papers deal with knowledge-based expert systems. In addition to the regular sessions, two sessions discussing a number of ongoing projects were also included in the conference. A panel on the future of AI in the engineering industry and another panel on networking were scheduled. Distinguished invited speakers from academia and industry presented lectures on their experiences with the application of AI.

  12. Users' Understanding of Search Engine Advertisements

    Directory of Open Access Journals (Sweden)

    Lewandowski, Dirk

    2017-12-01

    Full Text Available In this paper, a large-scale study on users' understanding of search-based advertising is presented. It is based on (1) a survey, (2) a task-based user study, and (3) an online experiment. Data were collected from 1,000 users representative of the German online population. Findings show that users generally lack an understanding of Google's business model and the workings of search-based advertising. 42% of users self-report that they either do not know that it is possible to pay Google for preferred listings for one's company on the SERPs or do not know how to distinguish between organic results and ads. In the task-based user study, we found that only 1.3 percent of participants were able to mark all areas correctly. 9.6 percent had all their identifications correct but did not mark all results they were required to mark. For none of the screenshots given were more than 35% of users able to mark all areas correctly. In the experiment, we found that users who are not able to distinguish between the two result types choose ads around twice as often as users who can recognize the ads. The implications are that models of search engine advertising and of information seeking need to be amended, and that there is a severe need for regulating search-based advertising.

  13. Web Spam, Social Propaganda and the Evolution of Search Engine Rankings

    Science.gov (United States)

    Metaxas, Panagiotis Takis

    Search Engines have greatly influenced the way we experience the web. Since the early days of the web, users have been relying on them to get informed and make decisions. When the web was relatively small, web directories were built and maintained using human experts to screen and categorize pages according to their characteristics. By the mid 1990's, however, it was apparent that the human expert model of categorizing web pages does not scale. The first search engines appeared and they have been evolving ever since, taking over the role that web directories used to play.

  14. Searching with Experience - A Search Engine for Product Information that Learns from its Users

    NARCIS (Netherlands)

    Leeuwen, van J.P.; Jessurun, A.J.; Jansen, G.; Martens, B.; Brown, A.

    2005-01-01

    This paper describes the motivation and development of a new algorithm for ranking web pages. This development aims to enable the implementation of a search engine that can provide highly personalised results to queries. It was initiated by a request from the Dutch CAD industry, but has generic

  15. Computational intelligence in time series forecasting theory and engineering applications

    CERN Document Server

    Palit, Ajoy K

    2005-01-01

    Foresight in an engineering enterprise can make the difference between success and failure, and can be vital to the effective control of industrial systems. Applying time series analysis in the on-line milieu of most industrial plants has been problematic owing to the time and computational effort required. The advent of soft computing tools offers a solution. The authors harness the power of intelligent technologies individually and in combination. Examples of the particular systems and processes susceptible to each technique are investigated, cultivating a comprehensive exposition of the improvements on offer in quality, model building and predictive control and the selection of appropriate tools from the plethora available. Application-oriented engineers in process control, manufacturing, production industry and research centres will find much to interest them in this book. It is suitable for industrial training purposes, as well as serving as valuable reference material for experimental researchers.

  16. Anatomy and evolution of database search engines-a central component of mass spectrometry based proteomic workflows.

    Science.gov (United States)

    Verheggen, Kenneth; Raeder, Helge; Berven, Frode S; Martens, Lennart; Barsnes, Harald; Vaudel, Marc

    2017-09-13

    Sequence database search engines are bioinformatics algorithms that identify peptides from tandem mass spectra using a reference protein sequence database. Two decades of development, notably driven by advances in mass spectrometry, have provided scientists with more than 30 published search engines, each with its own properties. In this review, we present the common paradigm behind the different implementations, and its limitations for modern mass spectrometry datasets. We also detail how the search engines attempt to alleviate these limitations, and provide an overview of the different software frameworks available to the researcher. Finally, we highlight alternative approaches for the identification of proteomic mass spectrometry datasets, either as a replacement for, or as a complement to, sequence database search engines. © 2017 Wiley Periodicals, Inc.

  17. Search Engines : Some Data Protection Issues

    OpenAIRE

    Unger, Patrick

    2009-01-01

    The thesis elaborates on a topic which has attracted a lot of discussion in recent years and is the subject of an ongoing debate. A big controversy has flourished between the company Google Inc. and several privacy groups, ostensibly led by the "Working Party on the Protection of Individuals with regard to the Processing of Personal Data". Deep data protection concerns have been raised in this debate by the use of a search engine and its storing of personal data belonging to the data subject....

  18. Respiratory syncytial virus tracking using internet search engine data.

    Science.gov (United States)

    Oren, Eyal; Frere, Justin; Yom-Tov, Eran; Yom-Tov, Elad

    2018-04-03

    Respiratory Syncytial Virus (RSV) is the leading cause of hospitalization in children less than 1 year of age in the United States. Internet search engine queries may provide high-resolution temporal and spatial data to estimate and predict disease activity. After filtering an initial list of 613 symptoms using high-resolution Bing search logs, we used Google Trends data between 2004 and 2016 for a smaller list of 50 terms to build predictive models of RSV incidence for five states where long-term surveillance data were available. We then used domain adaptation to model RSV incidence for the 45 remaining US states. Surveillance data sources (hospitalization and laboratory reports) were highly correlated, as were laboratory reports with search engine data. The four terms which were most often statistically significantly correlated as time series with the surveillance data in the five state models were RSV, flu, pneumonia, and bronchiolitis. Using our models, we tracked the spread of RSV by observing the time of peak use of the search term in different states. In general, the RSV peak moved from the south-east (Florida) to the north-west of the US. Our study represents the first time that RSV has been tracked using Internet data and highlights the successful use of search filters and domain adaptation techniques, using data at multiple resolutions. Our approach may assist in identifying the spread of both local and more widespread RSV transmission and may be applicable to other seasonal conditions where comprehensive epidemiological data are difficult to collect or obtain.
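
    The modeling step described above can be illustrated with a simple regression of surveillance counts on search-term volumes. The sketch below uses simulated data and ordinary least squares; the study's actual term list, model class and domain adaptation across states are not reproduced here.

        import numpy as np
        from sklearn.linear_model import LinearRegression

        rng = np.random.default_rng(0)

        # Simulated monthly Google Trends volumes for four search terms (the real study
        # used many more terms plus domain adaptation across states).
        n_months = 120
        trends = rng.uniform(0, 100, size=(n_months, 4))   # rsv, flu, pneumonia, bronchiolitis
        true_weights = np.array([0.6, 0.1, 0.2, 0.4])
        rsv_incidence = trends @ true_weights + rng.normal(0, 5, n_months)

        # Fit on the first 8 years, evaluate on the remaining 2 years.
        train, test = slice(0, 96), slice(96, None)
        model = LinearRegression().fit(trends[train], rsv_incidence[train])
        r2 = model.score(trends[test], rsv_incidence[test])
        print("held-out R^2:", round(r2, 3))
        print("per-term weights:", dict(zip(["rsv", "flu", "pneumonia", "bronchiolitis"],
                                            model.coef_.round(2))))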

  19. GEMINI: a computationally-efficient search engine for large gene expression datasets.

    Science.gov (United States)

    DeFreitas, Timothy; Saddiki, Hachem; Flaherty, Patrick

    2016-02-24

    Low-cost DNA sequencing allows organizations to accumulate massive amounts of genomic data and use that data to answer a diverse range of research questions. Presently, users must search for relevant genomic data using a keyword, accession number or meta-data tag. However, in this search paradigm the form of the query - a text-based string - is mismatched with the form of the target - a genomic profile. To improve access to massive genomic data resources, we have developed a fast search engine, GEMINI, that uses a genomic profile as a query to search for similar genomic profiles. GEMINI implements a nearest-neighbor search algorithm using a vantage-point tree to store a database of n profiles and in certain circumstances achieves an O(log n) expected query time in the limit. We tested GEMINI on breast and ovarian cancer gene expression data from The Cancer Genome Atlas project and show that it achieves a query time that scales as the logarithm of the number of records in practice on genomic data. In a database with 10^5 samples, GEMINI identifies the nearest neighbor in 0.05 sec compared to a brute force search time of 0.6 sec. GEMINI is a fast search engine that uses a query genomic profile to search for similar profiles in a very large genomic database. It enables users to identify similar profiles independent of sample label, data origin or other meta-data information.
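
    The vantage-point tree mentioned above partitions profiles by their distance to a chosen vantage point, which lets a nearest-neighbor query prune whole subtrees. The sketch below is a minimal, illustrative implementation for Euclidean distance on toy expression profiles; it is not GEMINI's actual code and omits its engineering details.

        import numpy as np

        class VPTree:
            """Minimal vantage-point tree for Euclidean nearest-neighbor search."""

            def __init__(self, points):
                points = [np.asarray(p, dtype=float) for p in points]
                self.vp = points[0] if points else None
                self.inside = self.outside = None
                self.radius = 0.0
                rest = points[1:]
                if self.vp is None or not rest:
                    return
                dists = [float(np.linalg.norm(p - self.vp)) for p in rest]
                self.radius = float(np.median(dists))
                inner = [p for p, d in zip(rest, dists) if d <= self.radius]
                outer = [p for p, d in zip(rest, dists) if d > self.radius]
                self.inside = VPTree(inner) if inner else None
                self.outside = VPTree(outer) if outer else None

            def nearest(self, query, best=None):
                """Return (distance, point) of the closest stored profile to the query."""
                if self.vp is None:
                    return best
                d = float(np.linalg.norm(np.asarray(query, dtype=float) - self.vp))
                if best is None or d < best[0]:
                    best = (d, self.vp)
                # Visit the more promising side first; prune the other side when it
                # cannot possibly contain anything closer than the current best.
                near, far = ((self.inside, self.outside) if d <= self.radius
                             else (self.outside, self.inside))
                if near is not None:
                    best = near.nearest(query, best)
                if far is not None and abs(d - self.radius) <= best[0]:
                    best = far.nearest(query, best)
                return best

        profiles = np.random.default_rng(1).normal(size=(1000, 20))  # toy expression profiles
        tree = VPTree(profiles)
        distance, _profile = tree.nearest(profiles[42] + 0.01)
        print("distance to nearest stored profile:", round(distance, 4))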

  20. Open meta-search with OpenSearch: a case study

    OpenAIRE

    O'Riordan, Adrian P.

    2007-01-01

    The goal of this project was to demonstrate the possibilities of open source search engine and aggregation technology in a Web environment by building a meta-search engine which employs free open search engines and open protocols. In contrast many meta-search engines on the Internet use proprietary search systems. The search engines employed in this case study are all based on the OpenSearch protocol. OpenSearch-compliant systems support XML technologies such as RSS and Atom for aggregation a...

  1. An advanced search engine for patent analytics in medicinal chemistry.

    Science.gov (United States)

    Pasche, Emilie; Gobeill, Julien; Teodoro, Douglas; Gaudinat, Arnaud; Vishnykova, Dina; Lovis, Christian; Ruch, Patrick

    2012-01-01

    Patent collections contain an important amount of medical-related knowledge, but existing tools have been reported to lack useful functionalities. We present here the development of TWINC, an advanced search engine dedicated to patent retrieval in the domain of health and life sciences. Our tool embeds two search modes: an ad hoc search to retrieve relevant patents given a short query, and a related-patent search to retrieve similar patents given a patent. Both search modes rely on tuning experiments performed during several patent retrieval competitions. Moreover, TWINC is enhanced with interactive modules, such as chemical query expansion, which is of major importance for coping with the various ways of naming biomedical entities. While the related-patent search showed promising performance, the ad hoc search produced fairly mixed results. Nonetheless, TWINC performed well during the Chemathlon task of the PatOlympics competition and experts appreciated its usability.

  2. Providing Formative Assessment to Students Solving Multipath Engineering Problems with Complex Arrangements of Interacting Parts: An Intelligent Tutor Approach

    Science.gov (United States)

    Steif, Paul S.; Fu, Luoting; Kara, Levent Burak

    2016-01-01

    Problems faced by engineering students involve multiple pathways to solution. Students rarely receive effective formative feedback on handwritten homework. This paper examines the potential for computer-based formative assessment of student solutions to multipath engineering problems. In particular, an intelligent tutor approach is adopted and…

  3. Evaluating a federated medical search engine: tailoring the methodology and reporting the evaluation outcomes.

    Science.gov (United States)

    Saparova, D; Belden, J; Williams, J; Richardson, B; Schuster, K

    2014-01-01

    Federated medical search engines are health information systems that provide a single access point to different types of information. Their efficiency as clinical decision support tools has been demonstrated through numerous evaluations. Despite their rigor, very few of these studies report holistic evaluations of medical search engines and even fewer base their evaluations on existing evaluation frameworks. The objective of this study was to evaluate a federated medical search engine, MedSocket, for its potential net benefits in an established clinical setting. This study applied the Human, Organization, and Technology (HOT-fit) evaluation framework in order to evaluate MedSocket. The hierarchical structure of the HOT factors allowed for the identification of a combination of efficiency metrics: human fit was evaluated through user satisfaction and patterns of system use; technology fit was evaluated through measurements of time-on-task and the accuracy of the found answers; and organization fit was evaluated from the perspective of system fit to the existing organizational structure. Evaluations produced mixed results and suggested several opportunities for system improvement. On average, participants were satisfied with MedSocket searches and confident in the accuracy of retrieved answers. However, MedSocket did not meet participants' expectations in terms of download speed, access to information, and relevance of the search results. These mixed results made it necessary to conclude that in the case of MedSocket, technology fit had a significant influence on the human and organization fit. Hence, improving the technological capabilities of the system is critical before its net benefits can become noticeable. The HOT-fit evaluation framework was instrumental in tailoring the methodology for conducting a comprehensive evaluation of the search engine. Such a multidimensional evaluation of the search engine resulted in recommendations for system improvement.

  4. Towards artificial intelligence based diesel engine performance control under varying operating conditions using support vector regression

    Directory of Open Access Journals (Sweden)

    Naradasu Kumar Ravi

    2013-01-01

    Full Text Available Diesel engine designers are constantly on the lookout for performance enhancement through efficient control of operating parameters. In this paper, the concept of an intelligent engine control system is proposed that seeks to ensure optimized performance under varying operating conditions. The concept is based on arriving at the optimum engine operating parameters to ensure the desired output in terms of efficiency. In addition, a Support Vector Machine-based prediction model has been developed to predict the engine performance under varying operating conditions. Experiments were carried out at varying loads, compression ratios and amounts of exhaust gas recirculation using a variable compression ratio diesel engine for data acquisition. It was observed that the SVM model was able to predict the engine performance accurately.
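
    As a rough illustration of the prediction step described above, the sketch below fits a support vector regression model to simulated operating points (load, compression ratio, exhaust gas recirculation). The inputs, target variable and kernel settings are assumptions for the example; they are not the paper's experimental data or tuned model.

        import numpy as np
        from sklearn.model_selection import train_test_split
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.svm import SVR

        rng = np.random.default_rng(7)

        # Simulated operating points: load (%), compression ratio, EGR fraction (%).
        X = np.column_stack([
            rng.uniform(20, 100, 200),   # load
            rng.uniform(14, 18, 200),    # compression ratio
            rng.uniform(0, 20, 200),     # exhaust gas recirculation
        ])
        # Toy efficiency response; a real model would be trained on measured data.
        y = 0.25 * X[:, 0] + 1.5 * X[:, 1] - 0.3 * X[:, 2] + rng.normal(0, 1.0, 200)

        X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
        model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.1))
        model.fit(X_train, y_train)
        print("held-out R^2:", round(model.score(X_test, y_test), 3))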

  5. Global polar geospatial information service retrieval based on search engine and ontology reasoning

    Science.gov (United States)

    Chen, Nengcheng; E, Dongcheng; Di, Liping; Gong, Jianya; Chen, Zeqiang

    2007-01-01

    In order to improve the access precision of polar geospatial information services on the web, a new methodology for retrieving global spatial information services based on geospatial service search and ontology reasoning is proposed: the geospatial service search is implemented to find coarse services from the web, and the ontology reasoning is designed to find refined services among the coarse services. The proposed framework includes standardized distributed geospatial web services, a geospatial service search engine, an extended UDDI registry, and a multi-protocol geospatial information service client. Key technologies addressed include service discovery based on a search engine, and service ontology modeling and reasoning in the Antarctic geospatial context. Finally, an Antarctic multi-protocol OWS portal prototype based on the proposed methodology is introduced.

  6. The EBI Search engine: providing search and retrieval functionality for biological data from EMBL-EBI.

    Science.gov (United States)

    Squizzato, Silvano; Park, Young Mi; Buso, Nicola; Gur, Tamer; Cowley, Andrew; Li, Weizhong; Uludag, Mahmut; Pundir, Sangya; Cham, Jennifer A; McWilliam, Hamish; Lopez, Rodrigo

    2015-07-01

    The European Bioinformatics Institute (EMBL-EBI-https://www.ebi.ac.uk) provides free and unrestricted access to data across all major areas of biology and biomedicine. Searching and extracting knowledge across these domains requires a fast and scalable solution that addresses the requirements of domain experts as well as casual users. We present the EBI Search engine, referred to here as 'EBI Search', an easy-to-use fast text search and indexing system with powerful data navigation and retrieval capabilities. API integration provides access to analytical tools, allowing users to further investigate the results of their search. The interconnectivity that exists between data resources at EMBL-EBI provides easy, quick and precise navigation and a better understanding of the relationship between different data types including sequences, genes, gene products, proteins, protein domains, protein families, enzymes and macromolecular structures, together with relevant life science literature. © The Author(s) 2015. Published by Oxford University Press on behalf of Nucleic Acids Research.

  7. Curating the Web: Building a Google Custom Search Engine for the Arts

    Science.gov (United States)

    Hennesy, Cody; Bowman, John

    2008-01-01

    Google's first foray onto the web made search simple and results relevant. With its Co-op platform, Google has taken another step toward dramatically increasing the relevancy of search results, further adapting the World Wide Web to local needs. Google Custom Search Engine, a tool on the Co-op platform, puts one in control of his or her own search…

  8. On founding of the science and technology intelligence (STI) research system for the grand engineering research organization

    International Nuclear Information System (INIS)

    Li Zhimin; Tang Yong; Shi Yi; Wang Yirong

    2010-01-01

    This article discusses the science and technology intelligence (STI) research system for a grand engineering research organization, and proposes that this system should be composed of five elements: research category, research form, service patterns, quality control and fruit evaluation, and precession with ability. The definition, connotation and function of each element are described. Research category includes strategy intelligence, technology route and development trend, and technology detail; research form covers dynamic tracking, investigation and analysis, and consulting study; service patterns involve demand or induction service, and independence or mutual action service; quality control and fruit evaluation should be conducted by a group of technologists and intelligence experts; precession with ability should be an organized system with good configuration and learning ability. (authors)

  9. The ontology supported intelligent system for experiment search in the scientific Research center

    Directory of Open Access Journals (Sweden)

    Cvjetković Vladimir

    2014-01-01

    Full Text Available Ontologies and corresponding knowledge bases can be used quite successfully for many tasks that rely on domain knowledge and semantic structures, which should be available for machine processing and sharing. Using SPARQL queries for the retrieval of required elements from ontologies and knowledge bases can significantly simplify the modeling of arbitrary structures of concepts and data, and the implementation of required functionalities. This paper describes an ontology developed to support a Research Centre for testing of active substances that conducts scientific experiments. A corresponding knowledge base was created according to this ontology and populated with real experimental data. The developed ontology and knowledge base are directly used by an intelligent system for experiment search which is based on many criteria from the ontology. The proposed system returns the desired search result, which is an experiment in the form of a written report. The presented solution and implementation are very flexible and adaptable, and can be used as a kind of template by similar information systems dealing with biological or similarly complex systems.
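
    The multi-criteria retrieval described above can be sketched with a small RDF graph and a SPARQL query. The namespace, properties and example triples below are invented for illustration; they are not the Centre's actual ontology or schema.

        from rdflib import Graph, Literal, Namespace, RDF

        # Toy namespace and triples standing in for the Centre's actual ontology.
        EX = Namespace("http://example.org/experiment#")
        g = Graph()
        g.add((EX.exp42, RDF.type, EX.Experiment))
        g.add((EX.exp42, EX.substance, Literal("compound-7")))
        g.add((EX.exp42, EX.organism, Literal("Daphnia magna")))
        g.add((EX.exp42, EX.report, Literal("reports/exp42.pdf")))

        # Multi-criteria search: find experiment reports matching substance and organism.
        query = """
        PREFIX ex: <http://example.org/experiment#>
        SELECT ?exp ?report WHERE {
            ?exp a ex:Experiment ;
                 ex:substance ?s ;
                 ex:organism ?o ;
                 ex:report ?report .
            FILTER (?s = "compound-7" && ?o = "Daphnia magna")
        }
        """
        for row in g.query(query):
            print(row.exp, "->", row.report)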

  10. SpEnD: Linked Data SPARQL Endpoints Discovery Using Search Engines

    OpenAIRE

    Yumusak, Semih; Dogdu, Erdogan; Kodaz, Halife; Kamilaris, Andreas

    2016-01-01

    In this study, a novel metacrawling method is proposed for discovering and monitoring linked data sources on the Web. We implemented the method in a prototype system, named SPARQL Endpoints Discovery (SpEnD). SpEnD starts with a "search keyword" discovery process for finding relevant keywords for the linked data domain and specifically SPARQL endpoints. Then, these search keywords are utilized to find linked data sources via popular search engines (Google, Bing, Yahoo, Yandex). By using this ...

  11. Exploring the Relevance of Search Engines: An Overview of Google as a Case Study

    Directory of Open Access Journals (Sweden)

    Ricardo Beltrán-Alfonso

    2017-08-01

    Full Text Available The huge amount of data on the Internet and the diverse strategies used to link this information with relevant searches through Linked Data have generated a revolution in data treatment and its representation. Nevertheless, conventional search engines like Google remain well-accepted tools for carrying out search processes. The following article presents a study of the development and evolution of search engines; more specifically, it analyzes the relevance of findings based on the number of results displayed in paging systems, with Google as a case study. Finally, it is intended to contribute to indexing criteria for search results, based on an approach to the Semantic Web as a stage in the evolution of the Web.

  12. SearchResultFinder: federated search made easy

    NARCIS (Netherlands)

    Trieschnigg, Rudolf Berend; Tjin-Kam-Jet, Kien; Hiemstra, Djoerd

    Building a federated search engine based on a large number of existing web search engines is a challenge: implementing the programming interface (API) for each search engine is an exacting and time-consuming job. In this demonstration we present SearchResultFinder, a browser plugin which speeds up

  13. Study on Fault Diagnostics of a Turboprop Engine Using Inverse Performance Model and Artificial Intelligent Methods

    Science.gov (United States)

    Kong, Changduk; Lim, Semyeong

    2011-12-01

    Recently, health monitoring systems for the major gas path components of gas turbines have mostly used model-based methods such as Gas Path Analysis (GPA). This method finds quantitative changes in component performance characteristic parameters, such as isentropic efficiency and mass flow parameter, by comparing measured engine performance parameters (temperatures, pressures, rotational speeds, fuel consumption, etc.) with clean engine performance parameters, free of any engine faults, calculated by a base engine performance model. Currently, expert engine diagnostic systems using artificial intelligence methods such as Neural Networks (NNs), Fuzzy Logic and Genetic Algorithms (GAs) are being studied to improve on the model-based method. Among them, NNs are most often used in engine fault diagnostic systems due to their good learning performance, but they suffer from low accuracy and long learning times for building the learning database when there is a large amount of learning data. In addition, a very complex structure is required to effectively identify single-type or multiple-type faults of gas path components. This work inversely builds a base performance model of a turboprop engine, to be used in a high-altitude UAV, from measured performance data, and proposes a fault diagnostic system using the base engine performance model and artificial intelligence methods such as Fuzzy Logic and Neural Networks. The proposed diagnostic system first isolates the faulted components using Fuzzy Logic, then quantifies the faults of the identified components using an NN trained on a fault learning database, which is obtained from the developed base performance model. The NN is trained with the Feed Forward Back Propagation (FFBP) method. Finally, it is verified through several test examples that component faults implanted arbitrarily in the engine are well isolated and quantified by the proposed diagnostic system.
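
    The fault quantification step described above maps measurement residuals to component fault magnitudes with a feed-forward network trained by back-propagation. The sketch below uses a simulated linear fault-to-residual mapping in place of the engine model; the network size, the residual set and the fault parameters are assumptions for illustration only.

        import numpy as np
        from sklearn.neural_network import MLPRegressor

        rng = np.random.default_rng(3)

        # Simulated training set: measurement residuals (deltas of temperatures, pressures,
        # speed, fuel flow) -> component fault magnitudes (efficiency and flow deviations).
        # A real system would generate these pairs from the base engine performance model.
        n = 2000
        faults = rng.uniform(-0.05, 0.0, size=(n, 2))            # [d_efficiency, d_flow]
        mixing = np.array([[40.0, -15.0, 8.0, 5.0],
                           [10.0,  25.0, -6.0, 3.0]])
        residuals = faults @ mixing + rng.normal(0, 0.05, size=(n, 4))

        # Feed-forward network trained by back-propagation (one hidden layer).
        net = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
        net.fit(residuals, faults)

        # Quantify an unseen fault case from its measurement residuals.
        true_fault = np.array([[-0.02, -0.01]])
        observed = true_fault @ mixing
        print("estimated fault:", net.predict(observed).round(4), "true:", true_fault)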

  14. Information access in the art history domain. Evaluating a federated search engine for Rembrandt research

    NARCIS (Netherlands)

    Verberne, S.; Boves, L.W.J.; Bosch, A.P.J. van den

    2016-01-01

    The art history domain is an interesting case for search engines tailored to the digital humanities, because the domain involves different types of sources (primary and secondary; text and images). One example of an art history search engine is RemBench, which provides access to information in four

  15. A Webometric Analysis of ISI Medical Journals Using Yahoo, AltaVista, and All the Web Search Engines

    Directory of Open Access Journals (Sweden)

    Zohreh Zahedi

    2010-12-01

    Full Text Available The World Wide Web is an important information source for scholarly communication. Examining inlinks via webometric studies has attracted particular interest among information researchers. In this study, the number of inlinks to 69 ISI medical journals retrieved by the Yahoo, AltaVista, and All The Web search engines was examined in a comparative webometric study. SPSS software was employed for data analysis. Findings revealed that the British Medical Journal website attracted the most links of all in the three search engines. There is a significant correlation between the number of external links and the ISI impact factor. The strongest correlation among the three search engines exists between the external links of Yahoo and AltaVista (100%), and the weakest correlation is found between the external links of All The Web and the number of pages of AltaVista (0.51). There is no significant difference between the internal links and the number of pages found by the three search engines, but in the case of impact factors, significant differences are found between these three search engines. So, the study shows that journals with a higher impact factor attract more links to their websites. It also indicates that the three search engines are significantly different in terms of total links, outlinks and web impact factors.
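
    The inlink/impact-factor relationship described above is essentially a correlation analysis. The sketch below computes Pearson correlations on simulated journal data; the values and engine names are placeholders, not the study's measurements.

        import numpy as np
        from scipy.stats import pearsonr

        rng = np.random.default_rng(5)

        # Illustrative data for 69 journals: impact factors and counts of external
        # inlinks reported by two search engines (not the study's actual measurements).
        impact_factor = rng.uniform(1, 30, 69)
        inlinks_engine_a = (impact_factor * 120 + rng.normal(0, 300, 69)).clip(0)
        inlinks_engine_b = (impact_factor * 100 + rng.normal(0, 350, 69)).clip(0)

        # Correlation between web visibility (inlinks) and the ISI impact factor.
        r_a, p_a = pearsonr(inlinks_engine_a, impact_factor)
        r_b, p_b = pearsonr(inlinks_engine_b, impact_factor)
        # Agreement between the two engines' inlink counts.
        r_ab, _ = pearsonr(inlinks_engine_a, inlinks_engine_b)
        print(f"engine A vs IF: r={r_a:.2f} (p={p_a:.3g}), engine B vs IF: r={r_b:.2f}")
        print(f"engine A vs engine B: r={r_ab:.2f}")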

  16. The design of intelligent support systems for nuclear reactor operators

    International Nuclear Information System (INIS)

    Bernard, J.A.

    1992-01-01

    This paper identifies factors relevant to the design of intelligent support systems and their use for the provision of real-time diagnostic information. As such, it constitutes a follow-up to the state-of-the-art review that was previously published by Bernard and Washio on the utilization of expert systems within the nuclear industry. Some major differences between intelligent support tools and conventional expert systems are enumerated. In summary, conventional expert systems that encode experiential knowledge in production rules are not a suitable vehicle for the creation of operator support systems. The principal difficulty is the need for real-time operation. This in turn means that intelligent support systems will have knowledge bases derived from temporally accurate plant models, inference engines that permit revisions in the search process to accommodate revised data, and man-machine interfaces that do not require any human input. Such systems will be heavily instrumented, and the associated knowledge bases will require a hierarchical organization to emulate human approaches to analysis.

  17. Search and rescue in collapsed structures: engineering and social science aspects.

    Science.gov (United States)

    El-Tawil, Sherif; Aguirre, Benigno

    2010-10-01

    This paper discusses the social science and engineering dimensions of search and rescue (SAR) in collapsed buildings. First, existing information is presented on factors that influence the behaviour of trapped victims, particularly human, physical, socioeconomic and circumstantial factors. Trapped victims are most often discussed in the context of structural collapse and injuries sustained. Most studies in this area focus on earthquakes as the type of disaster that produces the most extensive structural damage. Second, information is set out on the engineering aspects of urban search and rescue (USAR) in the United States, including the role of structural engineers in USAR operations, training and certification of structural specialists, and safety and general procedures. The use of computational simulation to link the engineering and social science aspects of USAR is discussed. This could supplement training of local SAR groups and USAR teams, allowing them to understand better the collapse process and how voids form in a rubble pile. A preliminary simulation tool developed for this purpose is described. © 2010 The Author(s). Journal compilation © Overseas Development Institute, 2010.

  18. Key word placing in Web page body text to increase visibility to search engines

    Directory of Open Access Journals (Sweden)

    W. T. Kritzinger

    2007-11-01

    Full Text Available The growth of the World Wide Web has spawned a wide variety of new information sources, which has also left users with the daunting task of determining which sources are valid. Many users rely on the Web as an information source because of the low cost of information retrieval. It is also claimed that the Web has evolved into a powerful business tool. Examples include highly popular business services such as Amazon.com and Kalahari.net. It is estimated that around 80% of users utilize search engines to locate information on the Internet. This, by implication, places emphasis on the underlying importance of Web pages being listed in search engine indices. Empirical evidence that the placement of key words in certain areas of the body text will have an influence on a Web site's visibility to search engines could not be found in the literature. The results of two experiments indicated that key words should be concentrated towards the top, and diluted towards the bottom, of a Web page to increase visibility. However, care should be taken in terms of key word density, to prevent search engine algorithms from raising the spam alarm.

  19. Pyndri: a Python Interface to the Indri Search Engine

    NARCIS (Netherlands)

    Van Gysel, C.; Kanoulas, E.; de Rijke, M.; Jose, J.M.; Hauff, C.; Altıngovde, I.S.; Song, D.; Albakour, D.; Watt, S.; Tait, J.

    2017-01-01

    We introduce pyndri, a Python interface to the Indri search engine. Pyndri allows access to Indri indexes from Python at two levels: (1) the dictionary and tokenized document collection, and (2) evaluating queries on the index. We hope that with the release of pyndri, we will stimulate reproducible, open

  20. [On the seasonality of dermatoses: a retrospective analysis of search engine query data depending on the season].

    Science.gov (United States)

    Köhler, M J; Springer, S; Kaatz, M

    2014-09-01

    The volume of search engine queries about disease-relevant items reflects public interest and correlates with disease prevalence as proven by the example of flu (influenza). Other influences include media attention or holidays. The present work investigates if the seasonality of prevalence or symptom severity of dermatoses correlates with search engine query data. The relative weekly volume of dermatological relevant search terms was assessed by the online tool Google Trends for the years 2009-2013. For each item, the degree of seasonality was calculated via frequency analysis and a geometric approach. Many dermatoses show a marked seasonality, reflected by search engine query volumes. Unexpected seasonal variations of these queries suggest a previously unknown variability of the respective disease prevalence. Furthermore, using the example of allergic rhinitis, a close correlation of search engine query data with actual pollen count can be demonstrated. In many cases, search engine query data are appropriate to estimate seasonal variability in prevalence of common dermatoses. This finding may be useful for real-time analysis and formation of hypotheses concerning pathogenetic or symptom aggravating mechanisms and may thus contribute to improvement of diagnostics and prevention of skin diseases.

  1. How People Interact with Technology based on Natural and Artificial Intelligence

    Directory of Open Access Journals (Sweden)

    Vasile MAZILESCU

    2017-04-01

    Full Text Available This paper aims to analyse the different forms of intelligence within organizations in a systemic and inclusive vision, in order to design an integrated environment based on Artificial Intelligence (AI) and Collective Intelligence (CI). In this way we effectively shift from the classical approaches of connecting people with people using collaboration tools (which allow people to work together, such as videoconferencing or email, groupware in virtual space, forums, workflow), and of connecting people with a series of content and knowledge management elements (taxonomies and document classification, ontologies or thesauri, search engines, portals), to the current approaches of connecting people through the (automatic) use of operational knowledge to solve problems and make decisions based on intellectual cooperation. Few technologies have as big a potential to change how we live, move, and work. Artificial intelligence (AI) is nowadays the equivalent of electricity and the Internet. AI is expected to bring massive shifts in how people perceive and interact with technology, with machines performing a wider range of tasks, in many cases doing a better job than humans.

  2. A Dynamic Intelligent Decision Approach to Dependency Modeling of Project Tasks in Complex Engineering System Optimization

    Directory of Open Access Journals (Sweden)

    Tinggui Chen

    2013-01-01

    Full Text Available Complex engineering system optimization usually involves multiple projects or tasks. On the one hand, dependency modeling among projects or tasks highlights structures in systems and their environments, which can help to understand the implications of connectivity on different aspects of system performance and also assist in designing, optimizing, and maintaining complex systems. On the other hand, multiple projects or tasks either happen at the same time or are scheduled into a sequence in order to use common resources. In this paper, we propose a dynamic intelligent decision approach to dependency modeling of project tasks in complex engineering system optimization. The approach treats this decision process as a two-stage decision-making problem. In the first stage, a task clustering approach based on modularization is proposed so as to find a suitable decomposition scheme for a large-scale project. In the second stage, according to the decomposition result, a discrete artificial bee colony (ABC) algorithm inspired by the intelligent foraging behavior of honeybees is developed for the resource-constrained multiproject scheduling problem. Finally, a case from the engineering design of a chemical processing system is used to illustrate the proposed approach.
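
    A minimal sketch of the artificial bee colony idea on a toy task-ordering problem is given below. It only illustrates the employed/onlooker/scout structure the abstract refers to, not the authors' discrete ABC for resource-constrained multiproject scheduling; the objective function and neighbourhood move are hypothetical simplifications.

        import random

        # Toy objective: weighted completion time of tasks executed in order.
        def makespan(perm, durations):
            t, total = 0, 0
            for i in perm:
                t += durations[i]
                total += t
            return total

        # Neighbourhood move: swap two random positions in the schedule.
        def neighbor(perm):
            a, b = random.sample(range(len(perm)), 2)
            p = list(perm)
            p[a], p[b] = p[b], p[a]
            return p

        def abc_search(durations, n_sources=10, limit=20, iterations=200):
            n = len(durations)
            sources = [random.sample(range(n), n) for _ in range(n_sources)]
            trials = [0] * n_sources
            best = min(sources, key=lambda p: makespan(p, durations))
            for _ in range(iterations):
                # employed/onlooker phases: try to improve each food source
                for i in range(n_sources):
                    cand = neighbor(sources[i])
                    if makespan(cand, durations) < makespan(sources[i], durations):
                        sources[i], trials[i] = cand, 0
                    else:
                        trials[i] += 1
                # scout phase: abandon sources that stopped improving
                for i in range(n_sources):
                    if trials[i] > limit:
                        sources[i], trials[i] = random.sample(range(n), n), 0
                best = min(sources + [best], key=lambda p: makespan(p, durations))
            return best

        print(abc_search([3, 7, 2, 5, 1]))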

  3. PubMed vs. HighWire Press: a head-to-head comparison of two medical literature search engines.

    Science.gov (United States)

    Vanhecke, Thomas E; Barnes, Michael A; Zimmerman, Janet; Shoichet, Sandor

    2007-09-01

    PubMed and HighWire Press are both useful medical literature search engines available for free to anyone on the internet. We measured retrieval accuracy, number of results generated, retrieval speed, features and search tools on HighWire Press and PubMed using the quick search features of each. We found that using HighWire Press resulted in a higher likelihood of retrieving the desired article and a greater number of search results than the same search on PubMed. PubMed was faster than HighWire Press in delivering search results regardless of search settings. There are considerable differences in search features between these two search engines.

  4. In Search of Search Engine Marketing Strategy Amongst SME's in Ireland

    Science.gov (United States)

    Barry, Chris; Charleton, Debbie

    Researchers have identified the Web as a searcher's first port of call for locating information. Search Engine Marketing (SEM) strategies have been noted as a key consideration when developing, maintaining and managing Websites. A study presented here of the SEM practices of Irish small to medium enterprises (SMEs) reveals they plan to spend more resources on SEM in the future. Most firms utilize an informal SEM strategy, where Website optimization is perceived as most effective in attracting traffic. Respondents cite the use of ‘keywords in title and description tags’ as the most used SEM technique, followed by the use of ‘keywords throughout the whole Website’; ‘Pay for Placement’ was the most widely used paid search technique. In concurrence with the literature, measuring SEM performance remains a significant challenge, with many firms unsure if they measure it effectively. An encouraging finding is that Irish SMEs adopt a positive ethical posture when undertaking SEM.

  5. Quantitative evaluation of recall and precision of CAT Crawler, a search engine specialized on retrieval of Critically Appraised Topics

    Science.gov (United States)

    Dong, Peng; Wong, Ling Ling; Ng, Sarah; Loh, Marie; Mondry, Adrian

    2004-01-01

    Background Critically Appraised Topics (CATs) are a useful tool that helps physicians to make clinical decisions as healthcare moves towards the practice of Evidence-Based Medicine (EBM). The fast-growing World Wide Web has provided a place for physicians to share their appraised topics online, but an increasing amount of time is needed to find a particular topic within such a rich repository. Methods A web-based application, namely the CAT Crawler, was developed by Singapore's Bioinformatics Institute to allow physicians to adequately access available appraised topics on the Internet. A meta-search engine, as the core component of the application, finds relevant topics following keyword input. The primary objective of the work presented here is to evaluate the quantity and quality of search results obtained from the meta-search engine of the CAT Crawler by comparing them with those obtained from two individual CAT search engines. From the CAT libraries at these two sites, all possible keywords were extracted using a keyword extractor. Of those common to both libraries, ten were randomly chosen for evaluation. All ten were submitted to the two search engines individually, and through the meta-search engine of the CAT Crawler. Search results were evaluated for relevance both by medical amateurs and professionals, and the respective recall and precision were calculated. Results While achieving an identical recall, the meta-search engine showed a precision of 77.26% (±14.45) compared to the individual search engines' 52.65% (±12.0), demonstrating the validity of the CAT Crawler meta-search engine approach. The improved precision due to inherent filters underlines the practical usefulness of this tool for clinicians. PMID:15588311
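
    Recall and precision, as used in this evaluation, can be computed directly from relevance judgments. The sketch below shows the standard definitions; the document identifiers are made up for illustration and do not come from the study.

        def precision_recall(retrieved, relevant):
            """retrieved: list of document ids returned by a search engine;
            relevant: set of ids judged relevant by the evaluators."""
            retrieved_set = set(retrieved)
            true_positives = len(retrieved_set & relevant)
            precision = true_positives / len(retrieved_set) if retrieved_set else 0.0
            recall = true_positives / len(relevant) if relevant else 0.0
            return precision, recall

        # Example: a meta-search result list scored against human judgments
        print(precision_recall(["d1", "d2", "d3", "d4"], {"d1", "d3", "d7"}))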

  6. 2015 Chinese Intelligent Systems Conference

    CERN Document Server

    Du, Junping; Li, Hongbo; Zhang, Weicun; CISC’15

    2016-01-01

    This book presents selected research papers from the 2015 Chinese Intelligent Systems Conference (CISC’15), held in Yangzhou, China. The topics covered include multi-agent systems, evolutionary computation, artificial intelligence, complex systems, computational intelligence and soft computing, intelligent control, advanced control technology, robotics and applications, intelligent information processing, iterative learning control, and machine learning. Engineers and researchers from academia, industry and government can gain valuable insights into solutions combining ideas from multiple disciplines in the field of intelligent systems.

  7. [Engineered spider silk: the intelligent biomaterial of the future. Part I].

    Science.gov (United States)

    Florczak, Anna; Piekoś, Konrad; Kaźmierska, Katarzyna; Mackiewicz, Andrzej; Dams-Kozłowska, Hanna

    2011-06-17

    The unique properties of spider silk, such as strength, extensibility, toughness, biocompatibility and biodegradability, are the reasons for the recent development of silk biomaterial technology. For a long time scientific progress was impeded by limited access to spider silk. However, the development of molecular biology strategies was a turning point in synthetic spider silk protein design. The sequences of engineered spider silk are based on the consensus motifs of the corresponding natural equivalents. Moreover, the engineered silk proteins may be modified in order to gain a new function. The strategy of hybrid proteins constructed at the DNA level combines the sequence of engineered silk, which is responsible for the biomaterial structure, with the sequence of a polypeptide which allows functionalization of the silk biomaterial. The functional domains may comprise receptor binding sites, enzymes, metal or sugar binding sites and others. Currently, advanced research is being conducted which, on the one hand, focuses on establishing the particular silk structure and understanding the process of silk thread formation in nature. On the other hand, there are attempts to improve methods of engineered spider silk protein production. Thanks to the acquired knowledge and recent progress in synthetic protein technology, engineered silk will turn into the intelligent biomaterial of the future, while its industrial production scale will trigger a biotechnological revolution.

  8. An Integrated Conceptual Environment based on Collective Intelligence and Distributed Artificial Intelligence for Connecting People on Problem Solving

    Directory of Open Access Journals (Sweden)

    Vasile MAZILESCU

    2012-12-01

    Full Text Available This paper aims to analyze the different forms of intelligence within organizations in a systemic and inclusive vision, in order to conceptualize an integrated environment based on Distributed Artificial Intelligence (DAI) and Collective Intelligence (CI). In this way we effectively shift from the classical approaches of connecting people with people using collaboration tools (which allow people to work together, such as videoconferencing or email, groupware in virtual space, forums, workflow), and of connecting people with a series of content management knowledge (taxonomies and document classification, ontologies or thesauri, search engines, portals), to the current approaches of connecting people through the (automatic) use of operational knowledge to solve problems and make decisions based on intellectual cooperation. The best way to use collective intelligence is based on knowledge mobilization and semantic technologies. We must not let computers imitate people, but rather support people as they think and develop their ideas within a group. CI helps people to think together, while DAI tries to support people so as to limit human error. Within an organization, to manage CI is to combine instruments like Semantic Technologies (STs), knowledge mobilization methods for developing Knowledge Management (KM) strategies, and the processes that promote connection and collaboration between individual minds in order to achieve collective objectives, to perform a task or to solve increasingly complex economic problems.

  9. Improvement of natural image search engines results by emotional filtering

    Directory of Open Access Journals (Sweden)

    Patrice Denis

    2016-04-01

    Full Text Available With the Internet 2.0 era, managing user emotions is a problem that more and more actors are interested in. Historically, the first notions of emotion sharing were expressed and defined with emoticons, which allowed users to show their emotional status to others in an impersonal and emotionless digital world. Now, in the Internet of social media, users share large amounts of content with each other every day on Facebook, Twitter, Google+ and so on. Several popular new websites such as FlickR, Picasa, Pinterest, Instagram and DeviantArt are now specifically based on sharing image content as well as personal emotional status. This kind of information is economically very valuable, as it can for instance help commercial companies sell more efficiently: with this kind of emotional information, companies can better target their customers' needs and even sell them more products. Research has since been, and still is, interested in mining emotional information from user data. In this paper, we focus on the impact of emotions in images collected from image search engines. More specifically, we propose a filtering layer applied to the results of such image search engines. To our knowledge, this is the first attempt to filter image search engine results with an emotional filtering approach.

  10. Semantic Oriented Agent based Approach towards Engineering Data Management, Web Information Retrieval and User System Communication Problems

    OpenAIRE

    Ahmed, Zeeshan; Gerhard, Detlef

    2010-01-01

    Four pressing problems posed to software by the software industry, i.e., user-system communication / human-machine interface, metadata extraction, information processing and management, and data representation, are discussed in this research paper. To contribute to the field, we have proposed and described an intelligent semantic-oriented agent-based search engine including the concepts of an intelligent graphical user interface, natural language based information processing, data management...

  11. GeNemo: a search engine for web-based functional genomic data.

    Science.gov (United States)

    Zhang, Yongqing; Cao, Xiaoyi; Zhong, Sheng

    2016-07-08

    A set of new data types has emerged from functional genomic assays, including ChIP-seq, DNase-seq, FAIRE-seq and others. The results are typically stored as genome-wide intensities (WIG/bigWig files) or functional genomic regions (peak/BED files). These data types present new challenges to big data science. Here, we present GeNemo, a web-based search engine for functional genomic data. GeNemo searches user-input data against online functional genomic datasets, including the entire collection of ENCODE and mouse ENCODE datasets. Unlike text-based search engines, GeNemo's searches are based on pattern matching of functional genomic regions. This distinguishes GeNemo from text or DNA sequence searches. The user can input any complete or partial functional genomic dataset, for example, a binding intensity file (bigWig) or a peak file. GeNemo reports any genomic regions, ranging from hundreds of bases to hundreds of thousands of bases, from any of the online ENCODE datasets that share similar functional (binding, modification, accessibility) patterns. This is enabled by a Markov Chain Monte Carlo-based maximization process, executed on up to 24 parallel computing threads. By clicking on a search result, the user can visually compare her/his data with the found datasets and navigate the identified genomic regions. GeNemo is available at www.genemo.org. © The Author(s) 2016. Published by Oxford University Press on behalf of Nucleic Acids Research.

  12. Quantitative evaluation of recall and precision of CAT Crawler, a search engine specialized on retrieval of Critically Appraised Topics

    Directory of Open Access Journals (Sweden)

    Loh Marie

    2004-12-01

    Full Text Available Abstract Background Critically Appraised Topics (CATs) are a useful tool that helps physicians to make clinical decisions as healthcare moves towards the practice of Evidence-Based Medicine (EBM). The fast-growing World Wide Web has provided a place for physicians to share their appraised topics online, but an increasing amount of time is needed to find a particular topic within such a rich repository. Methods A web-based application, namely the CAT Crawler, was developed by Singapore's Bioinformatics Institute to allow physicians to adequately access available appraised topics on the Internet. A meta-search engine, as the core component of the application, finds relevant topics following keyword input. The primary objective of the work presented here is to evaluate the quantity and quality of search results obtained from the meta-search engine of the CAT Crawler by comparing them with those obtained from two individual CAT search engines. From the CAT libraries at these two sites, all possible keywords were extracted using a keyword extractor. Of those common to both libraries, ten were randomly chosen for evaluation. All ten were submitted to the two search engines individually, and through the meta-search engine of the CAT Crawler. Search results were evaluated for relevance both by medical amateurs and professionals, and the respective recall and precision were calculated. Results While achieving an identical recall, the meta-search engine showed a precision of 77.26% (±14.45) compared to the individual search engines' 52.65% (±12.0). Conclusion The results demonstrate the validity of the CAT Crawler meta-search engine approach. The improved precision due to inherent filters underlines the practical usefulness of this tool for clinicians.

  13. Using internet search engines and library catalogs to locate toxicology information.

    Science.gov (United States)

    Wukovitz, L D

    2001-01-12

    The increasing importance of the Internet demands that toxicologists become acquainted with its resources. To find information, researchers must be able to effectively use Internet search engines, directories, subject-oriented websites, and library catalogs. The article will explain these resources, explore their benefits and weaknesses, and identify skills that help the researcher to improve search results and critically evaluate sources for their relevancy, validity, accuracy, and timeliness.

  14. An Object-Oriented Graphical User Interface for a Reusable Rocket Engine Intelligent Control System

    Science.gov (United States)

    Litt, Jonathan S.; Musgrave, Jeffrey L.; Guo, Ten-Huei; Paxson, Daniel E.; Wong, Edmond; Saus, Joseph R.; Merrill, Walter C.

    1994-01-01

    An intelligent control system for reusable rocket engines under development at NASA Lewis Research Center requires a graphical user interface to allow observation of the closed-loop system in operation. The simulation testbed consists of a real-time engine simulation computer, a controls computer, and several auxiliary computers for diagnostics and coordination. The system is set up so that the simulation computer could be replaced by the real engine and the change would be transparent to the control system. Because of the hard real-time requirement of the control computer, putting a graphical user interface on it was not an option. Thus, a separate computer used strictly for the graphical user interface was warranted. An object-oriented LISP-based graphical user interface has been developed on a Texas Instruments Explorer 2+ to indicate the condition of the engine to the observer through plots, animation, interactive graphics, and text.

  15. Using the Web for Competitive Intelligence (CI) Gathering

    Science.gov (United States)

    Rocker, JoAnne; Roncaglia, George

    2002-01-01

    Businesses use the Internet to communicate company information and to engage their customers. As the use of the Web for business transactions and advertising grows, so too does the amount of useful information for practitioners of competitive intelligence (CI). CI is the legal and ethical practice of gathering information about competitors and the marketplace. Information sources like company webpages, online newspapers and news organizations, electronic journal articles and reports, and Internet search engines allow CI practitioners to analyze company strengths and weaknesses for their customers. More company and marketplace information than ever is available on the Internet, and a lot of it is free. Companies should view the Web not only as a business tool but also as a source of competitive intelligence. In a highly competitive marketplace, can any organization afford to ignore information about the other players and customers in that same marketplace?

  16. A search engine to find the best data?

    CERN Multimedia

    Katarina Anthony

    2014-01-01

    What if you could see your experiment’s results in a “page rank” style? How would your workflow change if you could collaborate with your colleagues on a single platform? What if you could search all your event data for certain specifications? All of these ideas (and more) are being explored at the LHCb experiment in collaboration with Internet giant Yandex.   An extremely rare B0s → μμ decay candidate event observed in the LHCb detector. As the leading search provider in Russia, with over 60% of the market share, Yandex is to the East what Google is to the West. Their collaboration with CERN began back in 2011, when Yandex co-founder Ilya Segalovich was approached by then-LHCb spokesperson Andrei Golutvin. “Just as Yandex's search engines sift through thousands of websites to find the right page, our experimentalists apply algorithms to find the best result in our data," says Andrei Golutvin. "Perhaps the techn...

  17. Engineering Evaluation and Assessment (EE and A) Report for the Symbolic and Sub-symbolic Robotics Intelligence Control System (SS-RICS)

    Science.gov (United States)

    2018-04-01

    ARL-TR-8352, April 2018, US Army Research Laboratory. Engineering Evaluation and Assessment (EE&A) Report for the Symbolic and Sub-symbolic Robotics Intelligence Control System (SS-RICS), by Troy Dale Kelley and Eric Avery (Human Research and Engineering Directorate, ARL) and Sean McGhee (STG Inc).

  18. Environmental engineering: Saving a threatened resource--In search of solutions

    International Nuclear Information System (INIS)

    Linaweaver, F.P.

    1992-01-01

    This proceedings, Environmental Engineering: Saving a Threatened Resource--In search of solutions, contains papers presented at the 1992 National Conference on Environmental Engineering, a component of Water Forum '92, Baltimore, Maryland, August 2-5, 1992. Some of the topics addressed include air quality; environmental assessment; sludge management and disposal; solid waste, toxic and hazardous materials; water supply and treatment; and water/wastewater infrastructure. In addition, key areas explored are toxicity reduction; urban nonpoint source pollution; incineration; landfills; leachate control; and VOC emissions from wastewater treatment plants. This publication provides the environmental engineer with state-of-the-art information on practical environmental engineering and results from recent advancements in scientific knowledge in this field. Individual papers are processed separately for inclusion in the appropriate data bases

  19. Crescendo: A Protein Sequence Database Search Engine for Tandem Mass Spectra.

    Science.gov (United States)

    Wang, Jianqi; Zhang, Yajie; Yu, Yonghao

    2015-07-01

    A search engine that reliably discovers more peptides is essential to the progress of computational proteomics. We propose two new scoring functions (L- and P-scores), which aim to capture similar characteristics of a peptide-spectrum match (PSM) as Sequest and Comet do. Crescendo, introduced here, is a software program that implements these two scores for peptide identification. We applied Crescendo to test datasets and compared its performance with widely used search engines, including Mascot, Sequest, and Comet. The results indicate that Crescendo identifies a similar or larger number of peptides at various predefined false discovery rates (FDR). Importantly, it also provides a better separation between the true and decoy PSMs, warranting the future development of a companion post-processing filtering algorithm.

  20. A novel algorithm for validating peptide identification from a shotgun proteomics search engine.

    Science.gov (United States)

    Jian, Ling; Niu, Xinnan; Xia, Zhonghang; Samir, Parimal; Sumanasekera, Chiranthani; Mu, Zheng; Jennings, Jennifer L; Hoek, Kristen L; Allos, Tara; Howard, Leigh M; Edwards, Kathryn M; Weil, P Anthony; Link, Andrew J

    2013-03-01

    Liquid chromatography coupled with tandem mass spectrometry (LC-MS/MS) has revolutionized the proteomics analysis of complexes, cells, and tissues. In a typical proteomic analysis, the tandem mass spectra from an LC-MS/MS experiment are assigned to a peptide by a search engine that compares the experimental MS/MS peptide data to theoretical peptide sequences in a protein database. The peptide-spectrum matches are then used to infer a list of identified proteins in the original sample. However, the search engines often fail to distinguish between correct and incorrect peptide assignments. In this study, we designed and implemented a novel algorithm called De-Noise to reduce the number of incorrect peptide matches and maximize the number of correct peptides at a fixed false discovery rate using a minimal number of scoring outputs from the SEQUEST search engine. The novel algorithm uses a three-step process: data cleaning, data refining through an SVM-based decision function, and a final data refining step based on proteolytic peptide patterns. Using proteomics data generated on different types of mass spectrometers, we optimized the De-Noise algorithm on the basis of the resolution and mass accuracy of the mass spectrometer employed in the LC-MS/MS experiment. Our results demonstrate that De-Noise improves peptide identification compared to other methods used to process the peptide sequence matches assigned by SEQUEST. Because De-Noise uses a limited number of scoring attributes, it can be easily implemented with other search engines.

  1. Semantics-Based Intelligent Indexing and Retrieval of Digital Images - A Case Study

    Science.gov (United States)

    Osman, Taha; Thakker, Dhavalkumar; Schaefer, Gerald

    The proliferation of digital media has led to a huge interest in classifying and indexing media objects for generic search and usage. In particular, we are witnessing colossal growth in digital image repositories that are difficult to navigate using free-text search mechanisms, which often return inaccurate matches as they typically rely on statistical analysis of query keyword recurrence in the image annotation or surrounding text. In this chapter we present a semantically enabled image annotation and retrieval engine that is designed to satisfy the requirements of the commercial image collections market in terms of both accuracy and efficiency of the retrieval process. Our search engine relies on methodically structured ontologies for image annotation, thus allowing for more intelligent reasoning about the image content and subsequently obtaining a more accurate set of results and a richer set of alternatives matching the original query. We also show how our well-analysed and designed domain ontology contributes to the implicit expansion of user queries, and we present our initial thoughts on exploiting lexical databases for explicit semantic-based query expansion.

  2. Preliminary Comparison of Three Search Engines for Point of Care Access to MEDLINE® Citations

    Science.gov (United States)

    Hauser, Susan E.; Demner-Fushman, Dina; Ford, Glenn M.; Jacobs, Joshua L.; Thoma, George

    2006-01-01

    Medical resident physicians used MD on Tap in real time to search for MEDLINE citations relevant to clinical questions using three search engines: Essie, Entrez and Google™, in order of performance. PMID:17238564

  3. Search Engine Customization and Data Set Builder

    OpenAIRE

    Arias Moreno, Fco Javier

    2009-01-01

    There are two core objectives in this work: firstly, to build a data set, and secondly, to customize a search engine. The first objective is to design and implement a data set builder. There are two steps required for this. The first step is to build a crawler. The second step is to include a cleaner. The crawler collects Web links. The cleaner extracts the main content and removes noise from the files crawled. The goal of this application is crawling Web news sites to find the...

  4. Reconsidering the Rhizome: A Textual Analysis of Web Search Engines as Gatekeepers of the Internet

    Science.gov (United States)

    Hess, A.

    Critical theorists have often drawn from Deleuze and Guattari's notion of the rhizome when discussing the potential of the Internet. While the Internet may structurally appear as a rhizome, its day-to-day usage by millions via search engines precludes experiencing the random interconnectedness and potential democratizing function. Through a textual analysis of four search engines, I argue that Web searching has grown hierarchies, or "trees," that organize data in tracts of knowledge and place users in marketing niches rather than assist in the development of new knowledge.

  5. Search Engine Marketing (SEM): Financial & Competitive Advantages of an Effective Hotel SEM Strategy

    OpenAIRE

    Leora Halpern Lanz

    2015-01-01

    Search Engine Marketing and Optimization (SEO, SEM) are keystones of a hotel's marketing strategy; in fact, research shows that 90% of travelers start their vacation planning with a Google search. Learn five strategies that can enhance a hotel's SEO and SEM strategies to boost bookings.

  6. Training Engineers for the Ambient Intelligence Challenge

    Science.gov (United States)

    Corno, Fulvio; De Russis, Luigi

    2017-01-01

    The increasing complexity of the new breed of distributed intelligent systems, such as the Internet of Things, which require a diversity of languages and protocols, can only be tamed with design and programming best practices. Interest is also growing for including the human factor, as advocated by the "ambient intelligence" (AmI)…

  7. Artificial Intelligence

    CERN Document Server

    Warwick, Kevin

    2011-01-01

    if AI is outside your field, or you know something of the subject and would like to know more then Artificial Intelligence: The Basics is a brilliant primer.' - Nick Smith, Engineering and Technology Magazine November 2011 Artificial Intelligence: The Basics is a concise and cutting-edge introduction to the fast moving world of AI. The author Kevin Warwick, a pioneer in the field, examines issues of what it means to be man or machine and looks at advances in robotics which have blurred the boundaries. Topics covered include: how intelligence can be defined whether machines can 'think' sensory

  8. Improved Multiobjective Harmony Search Algorithm with Application to Placement and Sizing of Distributed Generation

    Directory of Open Access Journals (Sweden)

    Wanxing Sheng

    2014-01-01

    Full Text Available To solve comprehensive multiobjective optimization problems, this study proposes an improved metaheuristic search algorithm that combines harmony search with the fast nondominated sorting approach, a novel intelligent optimization algorithm for multiobjective harmony search (MOHS). The algorithm is described and formulated in detail. Taking the optimal placement and sizing of distributed generation (DG) in a distribution power system as an example, the solving procedure of the proposed method is given. Simulation results on the modified IEEE 33-bus test system and a comparison with the NSGA-II algorithm show that the proposed MOHS can obtain promising results for engineering applications.

  9. The LAILAPS search engine: a feature model for relevance ranking in life science databases.

    Science.gov (United States)

    Lange, Matthias; Spies, Karl; Colmsee, Christian; Flemming, Steffen; Klapperstück, Matthias; Scholz, Uwe

    2010-03-25

    Efficient and effective information retrieval in the life sciences is one of the most pressing challenges in bioinformatics. The incredible growth of life science databases into a vast network of interconnected information systems is to the same extent a big challenge and a great chance for life science research. The knowledge found on the Web, and in particular in life-science databases, is a valuable resource. In order to bring it to the scientist's desktop, it is essential to have well-performing search engines. Here, neither the response time nor the number of results is the most important factor; for millions of query results, the most crucial factor is the relevance ranking. In this paper, we present a feature model for relevance ranking in life science databases and its implementation in the LAILAPS search engine. Motivated by observation of user behavior during inspection of search engine results, we condensed a set of 9 relevance-discriminating features. These features are intuitively used by scientists, who briefly screen database entries for potential relevance. The features are both sufficient to estimate the potential relevance and efficiently quantifiable. The derivation of a relevance prediction function that computes the relevance from these features constitutes a regression problem. To solve this problem, we used artificial neural networks that were trained with a reference set of relevant database entries for 19 protein queries. Supporting a flexible text index and a simple data import format, these concepts are implemented in the LAILAPS search engine. It can easily be used both as a search engine for comprehensive integrated life science databases and for small in-house project databases. LAILAPS is publicly available for SWISSPROT data at http://lailaps.ipk-gatersleben.de.
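
    The relevance prediction function described above is a regression from a small feature vector to a relevance score. A minimal sketch of that idea with a small feed-forward network is shown below, assuming scikit-learn is available; the feature values, labels and network size are hypothetical and do not reproduce the actual LAILAPS features or training data.

        import numpy as np
        from sklearn.neural_network import MLPRegressor

        # Hypothetical training data: each row holds 9 relevance-discriminating
        # feature values for a database entry; the target is a relevance label.
        rng = np.random.default_rng(0)
        X_train = rng.random((200, 9))        # stand-ins for the 9 features
        y_train = X_train @ rng.random(9)     # stand-in for expert relevance labels

        # Small feed-forward network as the relevance prediction function
        model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
        model.fit(X_train, y_train)

        # Rank new database entries by predicted relevance (highest first)
        X_new = rng.random((5, 9))
        ranking = np.argsort(-model.predict(X_new))
        print(ranking)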

  10. Hydra: a scalable proteomic search engine which utilizes the Hadoop distributed computing framework.

    Science.gov (United States)

    Lewis, Steven; Csordas, Attila; Killcoyne, Sarah; Hermjakob, Henning; Hoopmann, Michael R; Moritz, Robert L; Deutsch, Eric W; Boyle, John

    2012-12-05

    For shotgun mass spectrometry based proteomics the most computationally expensive step is in matching the spectra against an increasingly large database of sequences and their post-translational modifications with known masses. Each mass spectrometer can generate data at an astonishingly high rate, and the scope of what is searched for is continually increasing. Therefore solutions for improving our ability to perform these searches are needed. We present a sequence database search engine that is specifically designed to run efficiently on the Hadoop MapReduce distributed computing framework. The search engine implements the K-score algorithm, generating comparable output for the same input files as the original implementation. The scalability of the system is shown, and the architecture required for the development of such distributed processing is discussed. The software is scalable in its ability to handle a large peptide database, numerous modifications and large numbers of spectra. Performance scales with the number of processors in the cluster, allowing throughput to expand with the available resources.
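
    The core of such a system is the MapReduce pattern: spectra and candidate peptides are keyed by a (binned) precursor mass in the map phase, and all scoring happens within a mass bin in the reduce phase. The framework-free sketch below only illustrates this data flow; the records and the placeholder scoring function are invented, and Hydra's actual Hadoop-based K-score implementation is not shown.

        from collections import defaultdict

        # Toy records: (spectrum_id, precursor_mass) and (peptide, mass)
        spectra = [("s1", 800.4), ("s2", 1200.6)]
        peptides = [("PEPTIDEK", 800.4), ("LONGERPEPTIDER", 1200.6)]

        def score(spectrum_id, peptide):
            return len(peptide)  # placeholder for a real match score

        def map_phase(records, kind):
            # Key every record by an integer mass bin so that spectra and
            # candidate peptides with compatible masses meet at one reducer.
            for name, mass in records:
                yield int(mass), (kind, name)

        def reduce_phase(grouped):
            # Within one mass bin, score every spectrum against every candidate.
            for mass_bin, values in grouped.items():
                specs = [v for k, v in values if k == "spec"]
                peps = [v for k, v in values if k == "pep"]
                for s in specs:
                    for p in peps:
                        yield s, p, score(s, p)

        grouped = defaultdict(list)
        for key, value in list(map_phase(spectra, "spec")) + list(map_phase(peptides, "pep")):
            grouped[key].append(value)

        for match in reduce_phase(grouped):
            print(match)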

  11. Artificial Intelligence and Economic Theories

    OpenAIRE

    Marwala, Tshilidzi; Hurwitz, Evan

    2017-01-01

    The advent of artificial intelligence has changed many disciplines such as engineering, social science and economics. Artificial intelligence is a computational technique inspired by natural intelligence, such as the swarming of birds, the working of the brain and the pathfinding of ants. These techniques have an impact on economic theories. This book studies the impact of artificial intelligence on economic theories, a subject that has not been extensively studied. The theories that...

  12. Intelligent methods for data retrieval in fusion databases

    International Nuclear Information System (INIS)

    Vega, J.

    2008-01-01

    The plasma behaviour is identified through the recognition of patterns inside signals. The search for patterns is usually a manual and tedious procedure in which signals need to be examined individually. A breakthrough in data retrieval for fusion databases is the development of intelligent methods to search for patterns. A pattern (in the broadest sense) could be a single segment of a waveform, a set of pixels within an image or even a heterogeneous set of features made up of waveforms, images and any kind of experimental data. Intelligent methods will allow searching for data according to technical, scientific and structural criteria instead of an identifiable time interval or pulse number. Such search algorithms should be intelligent enough to avoid a full pass over the entire database. The benefits of such access methods are discussed and several available techniques are reviewed. In addition, the applicability of the methods, from general-purpose searching systems to ad hoc developments, is covered

  13. Balancing Efficiency and Effectiveness for Fusion-Based Search Engines in the "Big Data" Environment

    Science.gov (United States)

    Li, Jieyu; Huang, Chunlan; Wang, Xiuhong; Wu, Shengli

    2016-01-01

    Introduction: In the big data age, we have to deal with a tremendous amount of information, which can be collected from various types of sources. For information search systems such as Web search engines or online digital libraries, the collection of documents becomes larger and larger. For some queries, an information search system needs to…

  14. FPS-RAM: Fast Prefix Search RAM-Based Hardware for Forwarding Engine

    Science.gov (United States)

    Zaitsu, Kazuya; Yamamoto, Koji; Kuroda, Yasuto; Inoue, Kazunari; Ata, Shingo; Oka, Ikuo

    Ternary content addressable memory (TCAM) is becoming very popular for designing high-throughput forwarding engines on routers. However, TCAM has potential problems in terms of hardware and power costs, which limits its ability to deploy large amounts of capacity in IP routers. In this paper, we propose new hardware architecture for fast forwarding engines, called fast prefix search RAM-based hardware (FPS-RAM). We designed FPS-RAM hardware with the intent of maintaining the same search performance and physical user interface as TCAM because our objective is to replace the TCAM in the market. Our RAM-based hardware architecture is completely different from that of TCAM and has dramatically reduced the costs and power consumption to 62% and 52%, respectively. We implemented FPS-RAM on an FPGA to examine its lookup operation.
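
    The lookup problem addressed by FPS-RAM is longest-prefix matching of destination addresses against a forwarding table. As a software analogue only (not the FPS-RAM architecture itself), a binary trie performing longest-prefix match might look like the sketch below; the prefixes and next hops are invented.

        def to_bits(ip):
            return "".join(f"{int(o):08b}" for o in ip.split("."))

        class PrefixTrie:
            def __init__(self):
                self.root = {}

            def insert(self, prefix, length, next_hop):
                node = self.root
                for bit in to_bits(prefix)[:length]:
                    node = node.setdefault(bit, {})
                node["hop"] = next_hop

            def lookup(self, ip):
                node, best = self.root, None
                for bit in to_bits(ip):
                    if "hop" in node:
                        best = node["hop"]      # remember the longest match so far
                    if bit not in node:
                        break
                    node = node[bit]
                return node.get("hop", best)

        table = PrefixTrie()
        table.insert("10.0.0.0", 8, "if0")
        table.insert("10.1.0.0", 16, "if1")
        print(table.lookup("10.1.2.3"))   # -> "if1" (longest matching prefix wins)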

  15. General vs health specialized search engine: a blind comparative evaluation of top search results.

    Science.gov (United States)

    Pletneva, Natalia; Ruiz de Castaneda, Rafael; Baroz, Frederic; Boyer, Celia

    2014-01-01

    This paper presents the results of a blind comparison of the top ten search results retrieved by Google.ch (French) and Khresmoi for everyone, a health-specialized search engine. Participants, students of the Faculty of Medicine of the University of Geneva, had to complete three tasks and select their preferred results. The majority of participants largely preferred Google results, while Khresmoi results showed potential to compete on specific topics. The coverage of the results seems to be one of the reasons; the second is that participants do not know how to select quality, transparent health web pages. More awareness, tools and education about the matter are required for students of medicine to be able to efficiently distinguish trustworthy online health information.

  16. Search Engine Marketing (SEM: Financial & Competitive Advantages of an Effective Hotel SEM Strategy

    Directory of Open Access Journals (Sweden)

    Leora Halpern Lanz

    2015-05-01

    Full Text Available Search Engine Marketing and Optimization (SEO, SEM) are keystones of a hotel's marketing strategy; in fact, research shows that 90% of travelers start their vacation planning with a Google search. Learn five strategies that can enhance a hotel's SEO and SEM strategies to boost bookings.

  17. Search Engine Optimization for Flash Best Practices for Using Flash on the Web

    CERN Document Server

    Perkins, Todd

    2009-01-01

    Search Engine Optimization for Flash dispels the myth that Flash-based websites won't show up in a web search by demonstrating exactly what you can do to make your site fully searchable -- no matter how much Flash it contains. You'll learn best practices for using HTML, CSS and JavaScript, as well as SWFObject, for building sites with Flash that will stand tall in search rankings.

  18. Intelligence Is What the Intelligence Test Measures. Seriously

    Directory of Open Access Journals (Sweden)

    Han L. J. van der Maas

    2014-02-01

    Full Text Available The mutualism model, an alternative to the g-factor model of intelligence, implies a formative measurement model in which “g” is an index variable without a causal role. If this model is accurate, the search for a genetic or brain instantiation of “g” is deemed useless. This also implies that the (weighted) sum score of the items of an intelligence test is just what it is: a weighted sum score. Preference for one index above another is a pragmatic issue that rests mainly on predictive value.

  19. Issues regarding the design and acceptance of intelligent support systems for reactor operators

    International Nuclear Information System (INIS)

    Bernard, J.A.

    1992-01-01

    In this paper, factors relevant to the design and acceptance of intelligent support systems for the operation of nuclear power plants are enumerated and discussed. The central premise is that conventional expert systems which encode experiential knowledge in production rules are not a suitable vehicle for the creation of practical operator support systems. The principal difficulty is the need for real-time operation. This in turn means that intelligent support systems will have knowledge bases derived from temporally accurate plant models, inference engines that permit revisions in the search process so as to accommodate revised or new data, and man-machine interfaces that do not require any human input. Such systems will have to be heavily instrumented and the associated knowledge bases will require a hierarchical organization so as to emulate human approaches to analysis. Issues related to operator acceptance of intelligent support tools are then reviewed. Possible applications are described and the relative merits of the machine- and human-centered approaches to the implementation of intelligent support systems are enumerated. The paper concludes with a plea for additional experimental evaluations

  20. Maximizing the sensitivity and reliability of peptide identification in large-scale proteomic experiments by harnessing multiple search engines.

    Science.gov (United States)

    Yu, Wen; Taylor, J Alex; Davis, Michael T; Bonilla, Leo E; Lee, Kimberly A; Auger, Paul L; Farnsworth, Chris C; Welcher, Andrew A; Patterson, Scott D

    2010-03-01

    Despite recent advances in qualitative proteomics, the automatic identification of peptides with optimal sensitivity and accuracy remains a difficult goal. To address this deficiency, a novel algorithm, Multiple Search Engines, Normalization and Consensus, is described. The method employs six search engines and a re-scoring engine to search MS/MS spectra against protein and decoy sequences. After the peptide hits from each engine are normalized to error rates estimated from the decoy hits, peptide assignments are then deduced using a minimum consensus model. These assignments are produced in a series of progressively relaxed false-discovery rates, thus enabling a comprehensive interpretation of the data set. Additionally, the estimated false-discovery rate was found to have good concordance with the observed false-positive rate calculated from known identities. Benchmarking against standard protein data sets (ISBv1, sPRG2006) and their published analysis demonstrated that the Multiple Search Engines, Normalization and Consensus algorithm consistently achieved significantly higher sensitivity in peptide identifications, which led to increased or more robust protein identifications in all data sets compared with prior methods. The sensitivity and the false-positive rate of peptide identification exhibit an inverse-proportional and linear relationship with the number of participating search engines.

  1. Intelligent System Design Using Hyper-Heuristics

    Directory of Open Access Journals (Sweden)

    Nelishia Pillay

    2015-07-01

    Full Text Available Determining the most appropriate search method or artificial intelligence technique to solve a problem is not always evident and usually requires implementation of the different approaches to ascertain this. In some instances a single approach may not be sufficient and hybridization of methods may be needed to find a solution. This process can be time consuming. The paper proposes the use of hyper-heuristics as a means of identifying which method or combination of approaches is needed to solve a problem. The research presented forms part of a larger initiative aimed at using hyper-heuristics to develop intelligent hybrid systems. As an initial step in this direction, this paper investigates this for classical artificial intelligence uninformed and informed search methods, namely depth first search, breadth first search, best first search, hill-climbing and the A* algorithm. The hyper-heuristic determines the search or combination of searches to use to solve the problem. An evolutionary algorithm hyper-heuristic is implemented for this purpose and its performance is evaluated in solving the 8-Puzzle, Towers of Hanoi and Blocks World problems. The hyper-heuristic employs a generational evolutionary algorithm which iteratively refines an initial population using tournament selection to select parents, which the mutation and crossover operators are applied to for regeneration. The hyper-heuristic was able to identify a search or combination of searches to produce solutions for the twenty 8-Puzzle, five Towers of Hanoi and five Blocks World problems. Furthermore, admissible solutions were produced for all problem instances.
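
    The selection hyper-heuristic described above can be pictured as an evolutionary algorithm whose individuals are sequences of low-level search methods. The sketch below evolves such sequences for a toy bit-string (OneMax) problem; the low-level heuristics, fitness definition and genetic operators are simplified stand-ins, not the paper's 8-Puzzle, Towers of Hanoi or Blocks World setup.

        import random

        def hill_climb(s):
            i = random.randrange(len(s))
            t = s[:i] + "1" + s[i + 1:]
            return t if t.count("1") > s.count("1") else s

        def random_restart(s):
            return "".join(random.choice("01") for _ in s)

        def bit_flip(s):
            i = random.randrange(len(s))
            return s[:i] + ("1" if s[i] == "0" else "0") + s[i + 1:]

        LOW_LEVEL = {"HC": hill_climb, "RR": random_restart, "BF": bit_flip}

        def fitness(sequence, n_bits=20, steps=30):
            # Fitness of a *sequence of heuristics*: how good a solution it builds.
            s = "0" * n_bits
            for _ in range(steps):
                for name in sequence:
                    s = LOW_LEVEL[name](s)
            return s.count("1")            # OneMax: more 1s is better

        def evolve(pop_size=20, seq_len=4, generations=30):
            pop = [[random.choice(list(LOW_LEVEL)) for _ in range(seq_len)]
                   for _ in range(pop_size)]
            for _ in range(generations):
                # tournament selection, one-point crossover, light mutation
                parents = [max(random.sample(pop, 3), key=fitness) for _ in range(pop_size)]
                children = []
                for a, b in zip(parents[::2], parents[1::2]):
                    cut = random.randrange(1, seq_len)
                    child = a[:cut] + b[cut:]
                    if random.random() < 0.2:
                        child[random.randrange(seq_len)] = random.choice(list(LOW_LEVEL))
                    children.append(child)
                pop = children + parents[:pop_size - len(children)]
            return max(pop, key=fitness)

        print(evolve())   # e.g. ['HC', 'BF', 'HC', 'HC']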

  2. Visibiome: an efficient microbiome search engine based on a scalable, distributed architecture.

    Science.gov (United States)

    Azman, Syafiq Kamarul; Anwar, Muhammad Zohaib; Henschel, Andreas

    2017-07-24

    Given the current influx of 16S rRNA profiles of microbiota samples, it is conceivable that large amounts of them will eventually be available for search, comparison and contextualization with respect to novel samples. This process facilitates the identification of similar compositional features in microbiota elsewhere and therefore can help to understand the driving factors for microbial community assembly. We present Visibiome, a microbiome search engine that can perform exhaustive, phylogeny-based similarity search and contextualization of user-provided samples against a comprehensive dataset of 16S rRNA profiles from diverse environments, while tackling several computational challenges. In order to scale to high demands, we developed a distributed system that combines web framework technology, task queueing and scheduling, cloud computing and a dedicated database server. To further ensure speed and efficiency, we have deployed nearest neighbor search algorithms, capable of sublinear searches in high-dimensional metric spaces, in combination with an optimized Earth Mover's Distance based implementation of weighted UniFrac. The search also incorporates pairwise (adaptive) rarefaction and, optionally, 16S rRNA copy number correction. The result of a query microbiome sample is its contextualization against a comprehensive database of microbiome samples from a diverse range of environments, visualized through a rich set of interactive figures and diagrams, including barchart-based compositional comparisons and a ranking of the closest matches in the database. Visibiome is a convenient, scalable and efficient framework to search microbiomes against a comprehensive database of environmental samples. The search engine leverages a popular but computationally expensive, phylogeny-based distance metric, while providing numerous advantages over the current state of the art tool.

  3. Toward two-dimensional search engines

    International Nuclear Information System (INIS)

    Ermann, L; Shepelyansky, D L; Chepelianskii, A D

    2012-01-01

    We study the statistical properties of various directed networks using a ranking of their nodes based on the dominant vectors of the Google matrix, known as PageRank and CheiRank. On average, PageRank orders nodes proportionally to the number of ingoing links, while CheiRank orders nodes proportionally to the number of outgoing links. In this way, the ranking of nodes becomes two-dimensional, which paves the way for the development of two-dimensional search engines of a new type. Statistical properties of information flow on the PageRank–CheiRank plane are analyzed for networks of British, French and Italian universities, Wikipedia, the Linux Kernel, gene regulation and other networks. Special emphasis is placed on the networks of British universities, using the large database publicly available in the UK. Methods of spam link control are also analyzed.
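
    Since CheiRank is the PageRank of the network with all links reversed, both vectors can be obtained from the same power iteration. A minimal sketch is shown below; the toy adjacency matrix and the usual damping factor 0.85 are illustrative choices, not values from the paper.

        import numpy as np

        def pagerank(adj, alpha=0.85, tol=1e-10):
            """adj[i, j] = 1 if there is a link from node i to node j."""
            n = adj.shape[0]
            out_deg = adj.sum(axis=1)
            # column-stochastic Google matrix; dangling nodes spread uniformly
            S = np.where(out_deg[:, None] > 0,
                         adj / np.maximum(out_deg[:, None], 1), 1.0 / n).T
            G = alpha * S + (1 - alpha) / n
            p = np.full(n, 1.0 / n)
            while True:
                p_new = G @ p
                if np.abs(p_new - p).sum() < tol:
                    return p_new
                p = p_new

        adj = np.array([[0, 1, 1],
                        [0, 0, 1],
                        [1, 0, 0]], dtype=float)

        print("PageRank :", pagerank(adj))        # ranking by incoming links
        print("CheiRank :", pagerank(adj.T))      # PageRank of the inverted network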

  4. Starry messages: Searching for signatures of interstellar archaeology

    Energy Technology Data Exchange (ETDEWEB)

    Carrigan, Richard A., Jr.; /Fermilab

    2009-12-01

    Searching for signatures of cosmic-scale archaeological artifacts such as Dyson spheres or Kardashev civilizations is an interesting alternative to conventional SETI. Uncovering such an artifact does not require the intentional transmission of a signal on the part of the original civilization. This type of search is called interstellar archaeology or sometimes cosmic archaeology. The detection of intelligence elsewhere in the Universe with interstellar archaeology or SETI would have broad implications for science. For example, the constraints of the anthropic principle would have to be loosened if a different type of intelligence was discovered elsewhere. A variety of interstellar archaeology signatures are discussed including non-natural planetary atmospheric constituents, stellar doping with isotopes of nuclear wastes, Dyson spheres, as well as signatures of stellar and galactic-scale engineering. The concept of a Fermi bubble due to interstellar migration is introduced in the discussion of galactic signatures. These potential interstellar archaeological signatures are classified using the Kardashev scale. A modified Drake equation is used to evaluate the relative challenges of finding various sources. With few exceptions interstellar archaeological signatures are clouded and beyond current technological capabilities. However SETI for so-called cultural transmissions and planetary atmosphere signatures are within reach.

  5. Computational Intelligence and Decision Making Trends and Applications

    CERN Document Server

    Madureira, Ana; Marques, Viriato

    2013-01-01

    This book provides a general overview and original analysis of new developments and applications in several areas of Computational Intelligence and Information Systems. Computational Intelligence has become an important tool for engineers to develop and analyze novel techniques to solve problems in basic sciences such as physics, chemistry, biology, engineering, environment and social sciences.   The material contained in this book addresses the foundations and applications of Artificial Intelligence and Decision Support Systems, Complex and Biological Inspired Systems, Simulation and Evolution of Real and Artificial Life Forms, Intelligent Models and Control Systems, Knowledge and Learning Technologies, Web Semantics and Ontologies, Intelligent Tutoring Systems, Intelligent Power Systems, Self-Organized and Distributed Systems, Intelligent Manufacturing Systems and Affective Computing. The contributions have all been written by international experts, who provide current views on the topics discussed and pr...

  6. The accuracy of Internet search engines to predict diagnoses from symptoms can be assessed with a validated scoring system.

    Science.gov (United States)

    Shenker, Bennett S

    2014-02-01

    To validate a scoring system that evaluates the ability of Internet search engines to correctly predict diagnoses when symptoms are used as search terms. We developed a five-point scoring system to evaluate the diagnostic accuracy of Internet search engines. We identified twenty diagnoses common to a primary care setting to validate the scoring system. One investigator entered the symptoms for each diagnosis into three Internet search engines (Google, Bing, and Ask) and saved the first five webpages from each search. Other investigators reviewed the webpages and assigned a diagnostic accuracy score. They rescored a random sample of webpages two weeks later. To validate the five-point scoring system, we calculated convergent validity and test-retest reliability using Kendall's W and Spearman's rho, respectively. We used the Kruskal-Wallis test to look for differences in accuracy scores for the three Internet search engines. A total of 600 webpages were reviewed. Kendall's W for the raters was 0.71. The five-point scoring system used to evaluate the diagnostic accuracy of Internet search engines is a valid and reliable instrument. The scoring system may be used in future Internet research. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  7. High-speed data search

    Science.gov (United States)

    Driscoll, James N.

    1994-01-01

    The high-speed data search system developed for KSC incorporates existing and emerging information retrieval technology to help a user intelligently and rapidly locate information found in large textual databases. This technology includes: natural language input; statistical ranking of retrieved information; an artificial intelligence concept called semantics, where 'surface level' knowledge found in text is used to improve the ranking of retrieved information; and relevance feedback, where user judgements about viewed information are used to automatically modify the search for further information. Semantics and relevance feedback are features of the system which are not available commercially. The system also focuses on paragraphs of information to decide relevance, and it can be used (without modification) to intelligently search all kinds of document collections, such as collections of legal documents, medical documents, news stories, patents, and so forth. The purpose of this paper is to demonstrate the usefulness of statistical ranking, our semantic improvement, and relevance feedback.
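
    Relevance feedback of the kind described above is classically formalised as Rocchio reweighting: the query vector is moved towards the centroid of documents the user judged relevant and away from those judged non-relevant. The sketch below shows that classical formulation with made-up term-frequency vectors; it is not claimed to be the exact method used in the KSC system.

        import numpy as np

        def rocchio(query, relevant, nonrelevant, alpha=1.0, beta=0.75, gamma=0.15):
            # Move the query towards relevant docs and away from non-relevant docs.
            q = alpha * query
            if len(relevant):
                q = q + beta * np.mean(relevant, axis=0)
            if len(nonrelevant):
                q = q - gamma * np.mean(nonrelevant, axis=0)
            return np.clip(q, 0, None)   # negative term weights are usually dropped

        vocab = ["shuttle", "engine", "launch", "budget"]
        query = np.array([1.0, 1.0, 0.0, 0.0])
        relevant = np.array([[2.0, 1.0, 3.0, 0.0]])      # docs the user judged useful
        nonrelevant = np.array([[0.0, 0.0, 0.0, 4.0]])   # docs judged not useful

        print(dict(zip(vocab, rocchio(query, relevant, nonrelevant).round(2))))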

  8. The search engine manipulation effect (SEME) and its possible impact on the outcomes of elections.

    Science.gov (United States)

    Epstein, Robert; Robertson, Ronald E

    2015-08-18

    Internet search rankings have a significant impact on consumer choices, mainly because users trust and choose higher-ranked results more than lower-ranked results. Given the apparent power of search rankings, we asked whether they could be manipulated to alter the preferences of undecided voters in democratic elections. Here we report the results of five relevant double-blind, randomized controlled experiments, using a total of 4,556 undecided voters representing diverse demographic characteristics of the voting populations of the United States and India. The fifth experiment is especially notable in that it was conducted with eligible voters throughout India in the midst of India's 2014 Lok Sabha elections just before the final votes were cast. The results of these experiments demonstrate that (i) biased search rankings can shift the voting preferences of undecided voters by 20% or more, (ii) the shift can be much higher in some demographic groups, and (iii) search ranking bias can be masked so that people show no awareness of the manipulation. We call this type of influence, which might be applicable to a variety of attitudes and beliefs, the search engine manipulation effect. Given that many elections are won by small margins, our results suggest that a search engine company has the power to influence the results of a substantial number of elections with impunity. The impact of such manipulations would be especially large in countries dominated by a single search engine company.

  9. GGRNA: an ultrafast, transcript-oriented search engine for genes and transcripts.

    Science.gov (United States)

    Naito, Yuki; Bono, Hidemasa

    2012-07-01

    GGRNA (http://GGRNA.dbcls.jp/) is a Google-like, ultrafast search engine for genes and transcripts. The web server accepts arbitrary words and phrases, such as gene names, IDs, gene descriptions, annotations of gene and even nucleotide/amino acid sequences through one simple search box, and quickly returns relevant RefSeq transcripts. A typical search takes just a few seconds, which dramatically enhances the usability of routine searching. In particular, GGRNA can search sequences as short as 10 nt or 4 amino acids, which cannot be handled easily by popular sequence analysis tools. Nucleotide sequences can be searched allowing up to three mismatches, or the query sequences may contain degenerate nucleotide codes (e.g. N, R, Y, S). Furthermore, Gene Ontology annotations, Enzyme Commission numbers and probe sequences of catalog microarrays are also incorporated into GGRNA, which may help users to conduct searches by various types of keywords. GGRNA web server will provide a simple and powerful interface for finding genes and transcripts for a wide range of users. All services at GGRNA are provided free of charge to all users.
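
    Matching a short query that contains degenerate IUPAC codes, with a small mismatch budget, can be illustrated with a naive scan as below. GGRNA itself relies on a fast index rather than this brute-force loop; the sequences are invented for illustration.

        # Naive matcher for short nucleotide queries with IUPAC degenerate codes
        # and a mismatch budget.
        IUPAC = {
            "A": "A", "C": "C", "G": "G", "T": "T",
            "R": "AG", "Y": "CT", "S": "GC", "W": "AT",
            "K": "GT", "M": "AC", "N": "ACGT",
        }

        def matches(query, target, max_mismatches=3):
            positions = []
            for i in range(len(target) - len(query) + 1):
                mism = sum(target[i + j] not in IUPAC[q]
                           for j, q in enumerate(query))
                if mism <= max_mismatches:
                    positions.append((i, mism))
            return positions

        print(matches("ACGNNT", "TTACGGATACGAAT", max_mismatches=1))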

  10. Decision making in family medicine: randomized trial of the effects of the InfoClinique and Trip database search engines.

    Science.gov (United States)

    Labrecque, Michel; Ratté, Stéphane; Frémont, Pierre; Cauchon, Michel; Ouellet, Jérôme; Hogg, William; McGowan, Jessie; Gagnon, Marie-Pierre; Njoya, Merlin; Légaré, France

    2013-10-01

    To compare the ability of users of 2 medical search engines, InfoClinique and the Trip database, to provide correct answers to clinical questions and to explore the perceived effects of the tools on the clinical decision-making process. Randomized trial. Three family medicine units of the family medicine program of the Faculty of Medicine at Laval University in Quebec city, Que. Fifteen second-year family medicine residents. Residents generated 30 structured questions about therapy or preventive treatment (2 questions per resident) based on clinical encounters. Using an Internet platform designed for the trial, each resident answered 20 of these questions (their own 2, plus 18 of the questions formulated by other residents, selected randomly) before and after searching for information with 1 of the 2 search engines. For each question, 5 residents were randomly assigned to begin their search with InfoClinique and 5 with the Trip database. The ability of residents to provide correct answers to clinical questions using the search engines, as determined by third-party evaluation. After answering each question, participants completed a questionnaire to assess their perception of the engine's effect on the decision-making process in clinical practice. Of 300 possible pairs of answers (1 answer before and 1 after the initial search), 254 (85%) were produced by 14 residents. Of these, 132 (52%) and 122 (48%) pairs of answers concerned questions that had been assigned an initial search with InfoClinique and the Trip database, respectively. Both engines produced an important and similar absolute increase in the proportion of correct answers after searching (26% to 62% for InfoClinique, for an increase of 36%; 24% to 63% for the Trip database, for an increase of 39%; P = .68). For all 30 clinical questions, at least 1 resident produced the correct answer after searching with either search engine. The mean (SD) time of the initial search for each question was 23.5 (7

  11. Towards a universal competitive intelligence process model

    Directory of Open Access Journals (Sweden)

    Rene Pellissier

    2013-08-01

    Full Text Available Background: Competitive intelligence (CI) provides actionable intelligence, which provides a competitive edge in enterprises. However, without a proper process, it is difficult to develop actionable intelligence. There are disagreements about how the CI process should be structured. For CI professionals to focus on producing actionable intelligence, and to do so with simplicity, they need a common CI process model. Objectives: The purpose of this research is to review the current literature on CI, to identify and analyse CI process models, and finally to propose a universal CI process model. Method: The study was qualitative in nature and content analysis was conducted on all identified sources establishing and analysing CI process models. To identify relevant literature, academic databases and search engines were used. Moreover, a review of references in related studies led to more relevant sources, the references of which were further reviewed and analysed. To ensure reliability, only peer-reviewed articles were used. Results: The findings reveal that the majority of scholars view the CI process as a cycle of interrelated phases, where the output of one phase is the input of the next phase. Conclusion: The CI process is a cycle of interrelated phases. The output of one phase is the input of the next phase. These phases are influenced by the following factors: decision makers, process and structure, organisational awareness and culture, and feedback.

  12. Design implications for task-specific search utilities for retrieval and re-engineering of code

    Science.gov (United States)

    Iqbal, Rahat; Grzywaczewski, Adam; Halloran, John; Doctor, Faiyaz; Iqbal, Kashif

    2017-05-01

    The importance of information retrieval systems is unquestionable in the modern society and both individuals as well as enterprises recognise the benefits of being able to find information effectively. Current code-focused information retrieval systems such as Google Code Search, Codeplex or Koders produce results based on specific keywords. However, these systems do not take into account developers' context such as development language, technology framework, goal of the project, project complexity and developer's domain expertise. They also impose additional cognitive burden on users in switching between different interfaces and clicking through to find the relevant code. Hence, they are not used by software developers. In this paper, we discuss how software engineers interact with information and general-purpose information retrieval systems (e.g. Google, Yahoo!) and investigate to what extent domain-specific search and recommendation utilities can be developed in order to support their work-related activities. In order to investigate this, we conducted a user study and found that software engineers followed many identifiable and repeatable work tasks and behaviours. These behaviours can be used to develop implicit relevance feedback-based systems based on the observed retention actions. Moreover, we discuss the implications for the development of task-specific search and collaborative recommendation utilities embedded with the Google standard search engine and Microsoft IntelliSense for retrieval and re-engineering of code. Based on implicit relevance feedback, we have implemented a prototype of the proposed collaborative recommendation system, which was evaluated in a controlled environment simulating the real-world situation of professional software engineers. The evaluation has achieved promising initial results on the precision and recall performance of the system.

  13. PR Students' Perceptions and Readiness for Using Search Engine Optimization

    Science.gov (United States)

    Moody, Mia; Bates, Elizabeth

    2013-01-01

    Enough evidence is available to support the idea that public relations professionals must possess search engine optimization (SEO) skills to assist clients in a full-service capacity; however, little research exists on how much college students know about the tactic and best practices for incorporating SEO into course curriculum. Furthermore, much…

  14. A Competitive and Experiential Assignment in Search Engine Optimization Strategy

    Science.gov (United States)

    Clarke, Theresa B.; Clarke, Irvine, III

    2014-01-01

    Despite an increase in ad spending and demand for employees with expertise in search engine optimization (SEO), methods for teaching this important marketing strategy have received little coverage in the literature. Using Bloom's cognitive goals hierarchy as a framework, this experiential assignment provides a process for educators who may be new…

  15. The Role of Exploratory Talk in Classroom Search Engine Tasks

    Science.gov (United States)

    Knight, Simon; Mercer, Neil

    2015-01-01

    While search engines are commonly used by children to find information, and in classroom-based activities, children are not adept in their information seeking or evaluation of information sources. Prior work has explored such activities in isolated, individual contexts, failing to account for the collaborative, discourse-mediated nature of search…

  16. The Search for Extraterrestrial Intelligence (SETI) and Whether to send 'Messages' (METI): A Case for Conversation, Patience and Due Diligence

    Science.gov (United States)

    Brin, D.

    Understanding the controversy over "Messages to Extra Terrestrial Intelligence" or METI requires a grounding in the history and rationale of SETI (Search for ETI). Insights since the turn of the century have changed SETI's scientific basis. Continued null results from the radio search do not invalidate continuing effort, but they do raise questions about long-held assumptions. Modified search strategies are discussed. The Great Silence or Fermi Paradox is appraised, along with the disruptive plausibility of interstellar travel. Psychological motivations for METI are considered. With this underpinning, we consider why a small cadre of SETI-ist radio astronomers have resisted the notion of international consultations before humanity takes a brash and irreversible step into METI, shouting our presence into the cosmos.

  17. Search engine imaginary: Visions and values in the co-production of search technology and Europe.

    Science.gov (United States)

    Mager, Astrid

    2017-04-01

    This article discusses the co-production of search technology and a European identity in the context of the EU data protection reform. The negotiations of the EU data protection legislation ran from 2012 until 2015 and resulted in a unified data protection legislation directly binding for all European member states. I employ a discourse analysis to examine EU policy documents and Austrian media materials related to the reform process. Using the concept 'sociotechnical imaginary', I show how a European imaginary of search engines is forming in the EU policy domain, how a European identity is constructed in the envisioned politics of control, and how national specificities contribute to the making and unmaking of a European identity. I discuss the roles that national technopolitical identities play in shaping both search technology and Europe, taking as an example Austria, a small country with a long history in data protection and a tradition of restrained technology politics.

  18. REPTREE CLASSIFIER FOR IDENTIFYING LINK SPAM IN WEB SEARCH ENGINES

    Directory of Open Access Journals (Sweden)

    S.K. Jayanthi

    2013-01-01

    Full Text Available Search engines are used for retrieving information from the web. Most of the time, the importance is placed on the top 10 results, which sometimes shrinks to the top 5, because of time constraints and reliance on the search engines. Users believe that the top 10 or 5 of the total results are the most relevant. Here comes the problem of spamdexing: a method of deceiving search result quality. Falsified metrics, such as inserting an enormous number of keywords or links into a website, may take that website to the top 10 or 5 positions. This paper proposes a classifier based on REPTree (a regression tree representative). As an initial step, link-based features such as neighbors, PageRank, truncated PageRank, TrustRank and assortativity-related attributes are inferred. Based on these features, a tree is constructed. The tree uses the feature inference to differentiate spam sites from legitimate sites. The WEBSPAM-UK2007 dataset is taken as a base. It is preprocessed and converted into five datasets: FEATA, FEATB, FEATC, FEATD and FEATE. Only link-based features are used in the experiments; this paper focuses on link spam alone. Finally, a representative tree is created which more precisely classifies the web spam entries. Results are given. Regression tree classification seems to perform well, as shown through the experiments.
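
    The general idea of training a tree on link-based features can be approximated as below. REPTree itself is a Weka algorithm; this sketch uses scikit-learn's DecisionTreeClassifier instead, and the tiny feature matrix and labels are invented placeholders rather than rows from WEBSPAM-UK2007.

```python
# Approximating the paper's idea: train a decision tree on link-based features
# to separate spam hosts from legitimate ones. REPTree is a Weka algorithm;
# scikit-learn's DecisionTreeClassifier stands in for it here, and the data
# below are invented placeholders, not WEBSPAM-UK2007 rows.
from sklearn.tree import DecisionTreeClassifier

# columns: in-degree, pagerank, truncated pagerank, trustrank, assortativity
X = [
    [1200, 0.0009, 0.0008, 0.00001, -0.4],   # spammy link farm
    [  15, 0.0001, 0.0001, 0.00020,  0.1],   # small legitimate site
    [ 900, 0.0008, 0.0007, 0.00002, -0.3],
    [  40, 0.0002, 0.0002, 0.00030,  0.2],
]
y = [1, 0, 1, 0]   # 1 = spam, 0 = legitimate

tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
print(tree.predict([[1000, 0.0007, 0.0006, 0.00003, -0.2]]))  # expected: [1]
```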

  19. LIVIVO - the Vertical Search Engine for Life Sciences.

    Science.gov (United States)

    Müller, Bernd; Poley, Christoph; Pössel, Jana; Hagelstein, Alexandra; Gübitz, Thomas

    2017-01-01

    The explosive growth of literature and data in the life sciences challenges researchers to keep track of current advancements in their disciplines. Novel approaches in the life sciences, such as the One Health paradigm, require integrated methodologies in order to link and connect heterogeneous information from databases and literature resources. Current publications in the life sciences are increasingly characterized by the employment of trans-disciplinary methodologies comprising molecular and cell biology, genetics, genomic, epigenomic, transcriptional and proteomic high-throughput technologies with data from humans, plants, and animals. The literature search engine LIVIVO empowers retrieval functionality by incorporating various literature resources from medicine, health, environment, agriculture and nutrition. LIVIVO is developed in-house by ZB MED - Information Centre for Life Sciences. It provides a user-friendly and usability-tested search interface with a corpus of 55 million citations derived from 50 databases. Standardized application programming interfaces are available for data export and high-throughput retrieval. The search functions allow for semantic retrieval with filtering options based on life science entities. The service-oriented architecture of LIVIVO uses four different implementation layers to deliver search services. A Knowledge Environment is developed by ZB MED to deal with the heterogeneity of data as an integrative approach to model, store, and link semantic concepts within literature resources and databases. Future work will focus on the exploitation of life science ontologies and on the employment of NLP technologies in order to improve query expansion, filters in faceted search, and concept-based relevancy rankings in LIVIVO.

  20. Hunting for unexpected post-translational modifications by spectral library searching with tier-wise scoring.

    Science.gov (United States)

    Ma, Chun Wai Manson; Lam, Henry

    2014-05-02

    Discovering novel post-translational modifications (PTMs) to proteins and detecting specific modification sites on proteins is one of the last frontiers of proteomics. At present, hunting for post-translational modifications remains challenging in widely practiced shotgun proteomics workflows due to the typically low abundance of modified peptides and the greatly inflated search space as more potential mass shifts are considered by the search engines. Moreover, most popular search methods require that the user specifies the modification(s) for which to search; therefore, unexpected and novel PTMs will not be detected. Here a new algorithm is proposed to apply spectral library searching to the problem of open modification searches, namely, hunting for PTMs without prior knowledge of what PTMs are in the sample. The proposed tier-wise scoring method intelligently looks for unexpected PTMs by allowing mass-shifted peak matches but only when the number of matches found is deemed statistically significant. This allows the search engine to search for unexpected modifications while maintaining its ability to identify unmodified peptides effectively at the same time. The utility of the method is demonstrated using three different data sets, in which the numbers of spectrum identifications to both unmodified and modified peptides were substantially increased relative to a regular spectral library search as well as to another open modification spectral search method, pMatch.
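
    The tier idea can be pictured with a toy scorer: count unshifted peak matches first, then accept matches under a candidate mass shift only when that extra evidence is unlikely to arise by chance. The tolerances, the binomial test, and the spectra below are placeholders, not the published pMatch or SpectraST scoring functions.

```python
# Toy illustration of tier-wise scoring for open modification searching:
# tier 1 counts unshifted peak matches; tier 2 adds peaks explained by a
# single candidate mass shift, but only if the surplus matches clear a
# simple significance test. All values are invented for illustration.
from math import comb

def binom_sf(k, n, p):
    """P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

def tier_wise_score(spectrum, library, shift, tol=0.5, p_random=0.05, alpha=0.01):
    tier1 = [p for p in spectrum if any(abs(p - ref) <= tol for ref in library)]
    rest = [p for p in spectrum if p not in tier1]
    tier2 = [p for p in rest if any(abs(p - (ref + shift)) <= tol for ref in library)]
    # accept the shifted tier only when that many extra matches are unlikely by chance
    if tier2 and binom_sf(len(tier2), len(rest), p_random) < alpha:
        return len(tier1) + len(tier2)
    return len(tier1)

library = [175.1, 262.1, 333.2, 480.3]
spectrum = [175.1, 262.1, 413.2, 560.3, 641.4]   # last peaks shifted by +80 (phospho-like)
print(tier_wise_score(spectrum, library, shift=80.0))   # unshifted + significant shifted tier
```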

  1. EDITORIAL: Advances in Measurement Technology and Intelligent Instruments for Production Engineering

    Science.gov (United States)

    Gao, Wei; Takaya, Yasuhiro; Gao, Yongsheng; Krystek, Michael

    2008-08-01

    Measurement and instrumentation have long played an important role in Production Engineering, through supporting both the traditional field of manufacturing and the new field of micro/nano-technology. Papers published in this special feature were selected and updated from those presented at The 8th International Symposium on Measurement Technology and Intelligent Instruments (ISMTII 2007) held at Tohoku University, Sendai, Japan, on 24-27 September 2007. ISMTII 2007 was organized by ICMI (The International Committee on Measurements and Instrumentation), Japan Society for Precision Engineering (JSPE, Technical Committee of Intelligent Measurement with Nanoscale), Korean Society for Precision Engineering (KSPE), Chinese Society for Measurement (CSM) and Tohoku University. The conference was also supported by Center for Precision Metrology of UNC Charlotte and Singapore Institute of Manufacturing Technology. A total of 220 papers, including four keynote papers, were presented at ISMTII 2007, covering a wide range of topics, including micro/nano-metrology, precision measurement, online & in-process measurement, surface metrology, optical metrology & image processing, biomeasurement, sensor technology, intelligent measurement & instrumentation, uncertainty, traceability & calibration, and signal processing algorithms. The guest editors recommended publication of updated versions of some of the best ISMTII 2007 papers in this special feature of Measurement Science and Technology. The first two papers were presented in ISMTII 2007 as keynote papers. Takamasu et al from The University of Tokyo report uncertainty estimation for coordinate metrology, in which methods of estimating uncertainties using the coordinate measuring system after calibration are formulated. Haitjema, from Mitutoyo Research Center Europe, treats the most often used interferometric measurement techniques (displacement interferometry and surface interferometry) and their major sources of errors. Among

  2. Teaching Search Engine Marketing through the Google Ad Grants Program

    Science.gov (United States)

    Clarke, Theresa B.; Murphy, Jamie; Wetsch, Lyle R.; Boeck, Harold

    2018-01-01

    Instructors may find it difficult to stay abreast of the rapidly changing nature of search engine marketing (SEM) and to incorporate hands-on, practical classroom experiences. One solution is Google Ad Grants, a nonprofit edition of Google AdWords that provides up to $10,000 monthly in free advertising. A quasi-experiment revealed no differences…

  3. Psycholinguistics and the Search for Extraterrestrial Intelligence

    Directory of Open Access Journals (Sweden)

    Lidija Krotenko

    2017-09-01

    Full Text Available The author of the article reveals the possibilities of psycholinguistics in the identification and interpretation of languages and texts of Alien Civilizations. The author combines modern interdisciplinary research in psycholinguistics with the theory “Evolving Matter” proposed by Oleg Bazaluk and concludes that the identification of languages and texts of Alien Civilizations, as well as the communication of terrestrial civilization with Extraterrestrial Intelligence, is in principle possible. To that end, it is necessary to achieve the required level of the modeling of neurophilosophy and to include these achievements of modern psycholinguistics studies: (a) language acquisition; (b) language comprehension; (c) language production; (d) second language acquisition. On the one hand, the possibilities of neurophilosophy to accumulate and model advanced neuroscience research; on the other hand, highly specialized psycholinguistic studies in language evolution are able to provide the communication of terrestrial civilization with Extraterrestrial Intelligence.

  4. Rural architecture between artificial intelligence and natural intelligence

    Energy Technology Data Exchange (ETDEWEB)

    Cennamo, M.; Palma, P. di; Ricciardelli, A. [University of Naples Frederico II (Italy). Dept. of Configurazione e Attuazione dell Architettra

    2000-02-01

    Following the field of research carried out and reported in the Second International Conference for Teachers of Architecture held in Florence on October 16, 17 and 18, 1997, which stated the central position of the architectural project in relation to Human Intelligence, Natural Intelligence and Artificial Intelligence, the present paper suggests a phase of application of the theoretical assumptions to spatial models paradigmatic of the complexity of projects and building technique, as well as of the relationship between the man-made environment and the natural one. Among the different typologies in architecture, this research focuses on the rural buildings in Campania, mainly on the ones in the Vesuvius area, as those are the most suitable to be studied and salvaged with the help of biology, mathematics and high engineering. (author)

  5. A Taxonomic Search Engine: federating taxonomic databases using web services.

    Science.gov (United States)

    Page, Roderic D M

    2005-03-09

    The taxonomic name of an organism is a key link between different databases that store information on that organism. However, in the absence of a single, comprehensive database of organism names, individual databases lack an easy means of checking the correctness of a name. Furthermore, the same organism may have more than one name, and the same name may apply to more than one organism. The Taxonomic Search Engine (TSE) is a web application written in PHP that queries multiple taxonomic databases (ITIS, Index Fungorum, IPNI, NCBI, and uBIO) and summarises the results in a consistent format. It supports "drill-down" queries to retrieve a specific record. The TSE can optionally suggest alternative spellings the user can try. It also acts as a Life Science Identifier (LSID) authority for the source taxonomic databases, providing globally unique identifiers (and associated metadata) for each name. The Taxonomic Search Engine is available at http://darwin.zoology.gla.ac.uk/~rpage/portal/ and provides a simple demonstration of the potential of the federated approach to providing access to taxonomic names.
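
    The federation idea can be shown in miniature: fan one taxon-name query out to several sources and merge the answers into a single record format. The per-source fetchers below are stubs returning canned data; the real TSE calls ITIS, Index Fungorum, IPNI, NCBI and uBio through their own web services, which each need their own client code.

```python
# Federated taxonomic search in miniature. The fetchers are stubs with
# canned responses; real sources would be queried over HTTP and mapped
# into the same record shape so callers can treat all sources alike.
def query_itis(name):
    return [{"source": "ITIS", "name": name, "id": "itis:180092"}]

def query_ncbi(name):
    return [{"source": "NCBI", "name": name, "id": "ncbi:9606"}]

SOURCES = [query_itis, query_ncbi]

def federated_search(name):
    merged = []
    for fetch in SOURCES:
        try:
            merged.extend(fetch(name))
        except Exception:        # one flaky source should not break the whole search
            continue
    return merged

print(federated_search("Homo sapiens"))
```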

  6. RNA search engines empower the bacterial intranet.

    Science.gov (United States)

    Dendooven, Tom; Luisi, Ben F

    2017-08-15

    RNA acts not only as an information bearer in the biogenesis of proteins from genes, but also as a regulator that participates in the control of gene expression. In bacteria, small RNA molecules (sRNAs) play controlling roles in numerous processes and help to orchestrate complex regulatory networks. Such processes include cell growth and development, response to stress and metabolic change, transcription termination, cell-to-cell communication, and the launching of programmes for host invasion. All these processes require recognition of target messenger RNAs by the sRNAs. This review summarizes recent results that have provided insights into how bacterial sRNAs are recruited into effector ribonucleoprotein complexes that can seek out and act upon target transcripts. The results hint at how sRNAs and their protein partners act as pattern-matching search engines that efficaciously regulate gene expression, by performing with specificity and speed while avoiding off-target effects. The requirements for efficient searches of RNA patterns appear to be common to all domains of life. © 2017 The Author(s).

  7. Adverse Reactions Associated With Cannabis Consumption as Evident From Search Engine Queries.

    Science.gov (United States)

    Yom-Tov, Elad; Lev-Ran, Shaul

    2017-10-26

    Cannabis is one of the most widely used psychoactive substances worldwide, but adverse drug reactions (ADRs) associated with its use are difficult to study because of its prohibited status in many countries. Internet search engine queries have been used to investigate ADRs in pharmaceutical drugs. In this proof-of-concept study, we tested whether these queries can be used to detect the adverse reactions of cannabis use. We analyzed anonymized queries from US-based users of Bing, a widely used search engine, made over a period of 6 months and compared the results with the prevalence of cannabis use as reported in the US National Survey on Drug Use and Health (NSDUH) and with ADRs reported in the Food and Drug Administration's Adverse Drug Reporting System. Predicted prevalence of cannabis use was estimated from the fraction of people making queries about cannabis, marijuana, and 121 additional synonyms. Predicted ADRs were estimated from queries containing layperson descriptions of 195 ICD-10 symptoms. Our results indicated that the predicted prevalence of cannabis use at the US census regional level reaches an R² of .71 against NSDUH data. Queries for ADRs made by people who also searched for cannabis reveal many of the known adverse effects of cannabis (eg, cough and psychotic symptoms), as well as plausible unknown reactions (eg, pyrexia). These results indicate that search engine queries can serve as an important tool for the study of adverse reactions of illicit drugs, which are difficult to study in other settings. ©Elad Yom-Tov, Shaul Lev-Ran. Originally published in JMIR Public Health and Surveillance (http://publichealth.jmir.org), 26.10.2017.
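
    The comparison reported above (query-derived prevalence versus survey prevalence, summarised by R²) has roughly the shape sketched below. Every number in the sketch is an invented placeholder, not a Bing or NSDUH figure; only the form of the calculation is of interest.

```python
# Comparing query-derived prevalence estimates with survey data.
# All numbers are invented placeholders; the point is only the calculation.
import numpy as np

users_querying_cannabis = np.array([ 9000, 11000,  15000,  14000])   # per census region
total_users             = np.array([80000, 95000, 150000, 110000])
survey_prevalence       = np.array([0.118, 0.112,  0.098,  0.131])   # survey-reported use

predicted = users_querying_cannabis / total_users

# simple linear fit, then the usual coefficient of determination R^2
slope, intercept = np.polyfit(predicted, survey_prevalence, 1)
fitted = slope * predicted + intercept
ss_res = np.sum((survey_prevalence - fitted) ** 2)
ss_tot = np.sum((survey_prevalence - survey_prevalence.mean()) ** 2)
print("R^2 =", 1 - ss_res / ss_tot)
```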

  8. Surfing for suicide methods and help: content analysis of websites retrieved with search engines in Austria and the United States.

    Science.gov (United States)

    Till, Benedikt; Niederkrotenthaler, Thomas

    2014-08-01

    The Internet provides a variety of resources for individuals searching for suicide-related information. Structured content-analytic approaches to assess intercultural differences in web contents retrieved with method-related and help-related searches are scarce. We used the 2 most popular search engines (Google and Yahoo/Bing) to retrieve US-American and Austrian search results for the term suicide, method-related search terms (e.g., suicide methods, how to kill yourself, painless suicide, how to hang yourself), and help-related terms (e.g., suicidal thoughts, suicide help) on February 11, 2013. In total, 396 websites retrieved with US search engines and 335 websites from Austrian searches were analyzed with content analysis on the basis of current media guidelines for suicide reporting. We assessed the quality of websites and compared findings across search terms and between the United States and Austria. In both countries, protective outweighed harmful website characteristics by approximately 2:1. Websites retrieved with method-related search terms (e.g., how to hang yourself) contained more harmful (United States: P search engines generally had more protective characteristics (P search engines. Resources with harmful characteristics were better ranked than those with protective characteristics (United States: P < .01, Austria: P < .05). The quality of suicide-related websites obtained depends on the search terms used. Preventive efforts to improve the ranking of preventive web content, particularly regarding method-related search terms, seem necessary. © Copyright 2014 Physicians Postgraduate Press, Inc.

  9. Emerging interdisciplinary fields in the coming intelligence/convergence era

    Science.gov (United States)

    Noor, Ahmed K.

    2012-09-01

    Dramatic advances are on the horizon resulting from the rapid pace of development of several technologies, including computing, communication, mobile, robotic, and interactive technologies. These advances, along with the trend towards convergence of traditional engineering disciplines with physical, life and other science disciplines, will result in the development of new interdisciplinary fields, as well as in new paradigms for engineering practice in the coming intelligence/convergence era (post-information age). The interdisciplinary fields include Cyber Engineering, Living Systems Engineering, Biomechatronics/Robotics Engineering, Knowledge Engineering, Emergent/Complexity Engineering, and Multiscale Systems Engineering. The paper identifies some of the characteristics of the intelligence/convergence era, gives a broad definition of convergence, describes some of the emerging interdisciplinary fields, and lists some of the academic and other organizations working in these disciplines. The need for establishing a Hierarchical Cyber-Physical Ecosystem for facilitating interdisciplinary collaborations and accelerating the development of a skilled workforce in the new fields is described, and the major components of the ecosystem are listed. The new interdisciplinary fields will yield critical advances in engineering practice and help in addressing future challenges in a broad array of sectors, from manufacturing to energy, transportation, climate, and healthcare. They will also enable building large future complex adaptive systems-of-systems, such as intelligent multimodal transportation systems, optimized multi-energy systems, intelligent disaster prevention systems, and smart cities.

  10. Artificial intelligence in medicine.

    OpenAIRE

    Ramesh, A. N.; Kambhampati, C.; Monson, J. R. T.; Drew, P. J.

    2004-01-01

    INTRODUCTION: Artificial intelligence is a branch of computer science capable of analysing complex medical data. Its potential to exploit meaningful relationships within a data set can be used in the diagnosis, treatment and prediction of outcome in many clinical scenarios. METHODS: Medline and internet searches were carried out using the keywords 'artificial intelligence' and 'neural networks (computer)'. Further references were obtained by cross-referencing from key articles. An overview of ...

  11. Information theory, animal communication, and the search for extraterrestrial intelligence

    Science.gov (United States)

    Doyle, Laurance R.; McCowan, Brenda; Johnston, Simon; Hanser, Sean F.

    2011-02-01

    We present ongoing research in the application of information theory to animal communication systems with the goal of developing additional detectors and estimators for possible extraterrestrial intelligent signals. Regardless of the species, for intelligence (i.e., complex knowledge) to be transmitted certain rules of information theory must still be obeyed. We demonstrate some preliminary results of applying information theory to socially complex marine mammal species (bottlenose dolphins and humpback whales) as well as arboreal squirrel monkeys, because they almost exclusively rely on vocal signals for their communications, producing signals which can be readily characterized by signal analysis. Metrics such as Zipf's Law and higher-order information-entropic structure are emerging as indicators of the communicative complexity characteristic of an "intelligent message" content within these animals' signals, perhaps not surprising given these species' social complexity. In addition to human languages, for comparison we also apply these metrics to pulsar signals—perhaps (arguably) the most "organized" of stellar systems—as an example of astrophysical systems that would have to be distinguished from an extraterrestrial intelligence message by such information theoretic filters. We also look at a message transmitted from Earth (Arecibo Observatory) that contains a lot of meaning but little information in the mathematical sense we define it here. We conclude that the study of non-human communication systems on our own planet can make a valuable contribution to the detection of extraterrestrial intelligence by providing quantitative general measures of communicative complexity. Studying the complex communication systems of other intelligent species on our own planet may also be one of the best ways to deprovincialize our thinking about extraterrestrial communication systems in general.
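
    Two of the information-theoretic indicators mentioned above can be computed directly from a sequence of coded signal units, as in the sketch below: the Zipf rank-frequency slope and the first-order Shannon entropy. The toy "vocalization" sequence is invented for illustration and is not data from the species discussed.

```python
# Zipf rank-frequency slope and first-order entropy for a toy unit sequence.
# The sequence is invented; real analyses would use coded animal signal units.
import math
from collections import Counter

units = list("ABABCABDABACABABEABC")        # stand-in for coded vocalization units
counts = sorted(Counter(units).values(), reverse=True)

# Zipf's law: log(frequency) vs log(rank) should be roughly linear, slope near -1
ranks = range(1, len(counts) + 1)
xs = [math.log(r) for r in ranks]
ys = [math.log(c) for c in counts]
n = len(xs)
slope = (n * sum(x * y for x, y in zip(xs, ys)) - sum(xs) * sum(ys)) / \
        (n * sum(x * x for x in xs) - sum(xs) ** 2)
print("Zipf slope:", round(slope, 2))

# first-order Shannon entropy in bits per unit
total = sum(counts)
entropy = -sum((c / total) * math.log2(c / total) for c in counts)
print("H1 =", round(entropy, 2), "bits/unit")
```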

  12. Eczema, Atopic Dermatitis, or Atopic Eczema: Analysis of Global Search Engine Trends.

    Science.gov (United States)

    Xu, Shuai; Thyssen, Jacob P; Paller, Amy S; Silverberg, Jonathan I

    The lack of standardized nomenclature for atopic dermatitis (AD) creates challenges for scientific communication, patient education, and advocacy. We sought to determine the relative popularity of the terms eczema, AD, and atopic eczema (AE) using global search engine volumes. A retrospective analysis of average monthly search volumes from 2014 to 2016 of Google, Bing/Yahoo, and Baidu was performed for eczema, AD, and AE in English and 37 other languages. Google Trends was used to determine the relative search popularity of each term from 2006 to 2016 in English and the top foreign languages, German, Turkish, Russian, and Japanese. Overall, eczema accounted for 1.5 million monthly searches (84%) compared with 247 000 searches for AD (14%) and 44 000 searches for AE (2%). For English language, eczema accounted for 93% of searches compared with 6% for AD and 1% for AE. Search popularity for eczema increased from 2006 to 2016 but remained stable for AD and AE. Given the ambiguity of the term eczema, we recommend the universal use of the next most popular term, AD.

  13. Internet Search Engines: Copyright's "Fair Use" in Reproduction and Public Display Rights

    National Research Council Canada - National Science Library

    Jeweler, Robin

    2007-01-01

    .... If so, is the activity a "fair use" protected by the Copyright Act? These issues frequently implicate search engines, which scan the web to allow users to find content for uses, both legitimate and illegitimate...

  14. 2015 Chinese Intelligent Automation Conference

    CERN Document Server

    Li, Hongbo

    2015-01-01

    Proceedings of the 2015 Chinese Intelligent Automation Conference presents selected research papers from the CIAC’15, held in Fuzhou, China. The topics include adaptive control, fuzzy control, neural network based control, knowledge based control, hybrid intelligent control, learning control, evolutionary mechanism based control, multi-sensor integration, failure diagnosis, reconfigurable control, etc. Engineers and researchers from academia, industry and the government can gain valuable insights into interdisciplinary solutions in the field of intelligent automation.

  15. Application of computational intelligence in emerging power systems

    African Journals Online (AJOL)

    ... in electrical engineering applications. This paper highlights the application of computational intelligence methods to power system problems. Various types of CI methods that are widely used in power systems are also discussed in brief. Keywords: Power systems, computational intelligence, artificial intelligence.

  16. Artificial intelligence

    International Nuclear Information System (INIS)

    Perret-Galix, D.

    1992-01-01

    A vivid example of the growing need for frontier physics experiments to make use of frontier technology is in the field of artificial intelligence and related themes. This was reflected in the second international workshop on 'Software Engineering, Artificial Intelligence and Expert Systems in High Energy and Nuclear Physics' which took place from 13-18 January at France Telecom's Agelonde site at La Londe des Maures, Provence. It was the second in a series, the first having been held at Lyon in 1990

  17. Origin of Disagreements in Tandem Mass Spectra Interpretation by Search Engines.

    Science.gov (United States)

    Tessier, Dominique; Lollier, Virginie; Larré, Colette; Rogniaux, Hélène

    2016-10-07

    Several proteomic database search engines that interpret LC-MS/MS data do not identify the same set of peptides. These disagreements occur even when the scores of the peptide-to-spectrum matches suggest good confidence in the interpretation. Our study shows that these disagreements observed for the interpretations of a given spectrum are almost exclusively due to the variation of what we call the "peptide space", i.e., the set of peptides that are actually compared to the experimental spectra. We discuss the potential difficulties of precisely defining the "peptide space." Indeed, although several parameters that are generally reported in publications can easily be set to the same values, many additional parameters-with much less straightforward user access-might impact the "peptide space" used by each program. Moreover, in a configuration where each search engine identifies the same candidates for each spectrum, the inference of the proteins may remain quite different depending on the false discovery rate selected.

  18. Search for microorganisms on Europa and Mars in relation with the evolution of intelligent behavior on other worlds

    International Nuclear Information System (INIS)

    Chela-Flores, Julian

    2001-11-01

    Within the context of how to search for life in the Solar System, we discuss the need to consider universal evolutionary biomarkers, in addition to those of biochemical nature that have already been selected for in the biology experiments of the old Viking and future Beagle-2 landers. For the wider problem of the evolution of intelligent behavior on other worlds (the SETI program), the type of experiments suggested below aim at establishing a direct connection between Solar System exploration and the first steps along the pathway toward the evolution of intelligent behavior. The two leading sites for the implementation of the proposed first whole-cell experiments would be, firstly, Europa after the Europa-Orbiter mission, either on the ice-crust, or in the ocean itself by means of a submersible; secondly, such experiments could be implemented once isolated liquid water oases are identified in the Martian substratum. (author)

  19. Obstacles of Search Engines Used by Graduate Students at The Faculty of Education, The Islamic University in Gaza

    Directory of Open Access Journals (Sweden)

    Fayez Kamal Shaladan

    2018-03-01

    Full Text Available This study aimed to identify obstacles to the use of search engines by graduate students at the Faculty of Education, the Islamic University in Gaza, and to overcome them. The researchers utilized the analytical descriptive approach to achieve the goal of the study. They used the interview tool and designed a questionnaire to collect data for the study. The sample of the study was (164) male and female postgraduate students enrolled in the College of Education. The study results were as follows: The degree of obstacles to the use of search engines among postgraduate students at the Faculty of Education at the Islamic University in Gaza was high, with a percentage of (71.05%). There were no statistically significant differences between the averages of the study sample for the obstacles to the use of search engines among the postgraduate students in the Faculty of Education, the Islamic University, due to the gender and academic variables or the cumulative average. An exception to this was the third theme, personal constraints, which had differences in favor of students whose cumulative averages were less than (85%). The study concluded with these recommendations: The university should subscribe to various search engines and revise admission terms and conditions for postgraduate studies so that English and computer courses can be included. Keywords: Search engines, Students, Postgraduate studies, Islamic University.

  20. SpEnD: Linked Data SPARQL Endpoints Discovery Using Search Engines

    Science.gov (United States)

    Yumusak, Semih; Dogdu, Erdogan; Kodaz, Halife; Kamilaris, Andreas; Vandenbussche, Pierre-Yves

    In this study, a novel metacrawling method is proposed for discovering and monitoring linked data sources on the Web. We implemented the method in a prototype system, named SPARQL Endpoints Discovery (SpEnD). SpEnD starts with a "search keyword" discovery process for finding relevant keywords for the linked data domain and specifically SPARQL endpoints. Then, these search keywords are utilized to find linked data sources via popular search engines (Google, Bing, Yahoo, Yandex). By using this method, most of the currently listed SPARQL endpoints in existing endpoint repositories, as well as a significant number of new SPARQL endpoints, have been discovered. Finally, we have developed a new SPARQL endpoint crawler (SpEC) for crawling and link analysis.
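
    The discovery step described above can be sketched as follows: take candidate URLs (as would be returned by querying search engines with discovered keywords such as "sparql endpoint") and keep those that answer a trivial SPARQL ASK query. The candidate list is a placeholder, and real metacrawling would page through Google/Bing/Yahoo/Yandex results via their respective APIs, which are not shown here.

```python
# Probe candidate URLs and keep those that behave like SPARQL endpoints.
# The candidate list is a placeholder; harvesting candidates from search
# engines requires their own APIs and is not shown.
import requests

def looks_like_sparql_endpoint(url, timeout=10):
    try:
        r = requests.get(
            url,
            params={"query": "ASK {}", "format": "application/sparql-results+json"},
            timeout=timeout,
        )
        return r.ok and "boolean" in r.text
    except requests.RequestException:
        return False

candidates = [
    "https://dbpedia.org/sparql",          # well-known endpoint, should pass
    "https://example.org/not-an-endpoint", # placeholder, should fail
]
endpoints = [u for u in candidates if looks_like_sparql_endpoint(u)]
print(endpoints)
```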

  1. What do K-12 students feel when dealing with technology and engineering issues? Gardner's multiple intelligence theory implications in technology lessons for motivating engineering vocations at Spanish Secondary School

    Science.gov (United States)

    Sánchez-Martín, Jesús; Álvarez-Gragera, García J.; Dávila-Acedo, M. Antonia; Mellado, Vicente

    2017-11-01

    The interest in engineering and scientific studies can be raised even from the early years of the academic instructional process. This vocation may be linked to emotions and aptitudes towards technological education. In particular, students get in touch with these technological issues (namely STEM) during Compulsory Secondary Education in Spain (12-16 years old). This work presents a preliminary evaluation of how relevant Gardner's multiple intelligence theory (MIT) is in the teaching-learning process within Technology lessons. In this sense, MIT was considered as an explanatory variable of the emotional response within the different educational parts (so-called syllabus units, SU) in the Spanish Technology curriculum. A different intelligence style (IS) will orient the student to a different vision of engineering and technology. This work tries to identify which relationships can be established between IS and specific technology and engineering learning. The research involved up to 135 students, who were subsequently tested on their predominant IS and on the emotions that arose in them when working with each SU. The results were statistically significant, and only those with a logic-arithmetic or environmental IS were unaffected by the SU. Best teaching and learning practices are required for encouraging further engineering studies.

  2. Development of a special topics course on intelligent transportation systems for the Zachry Department of Civil Engineering of Texas A&M University.

    Science.gov (United States)

    2009-08-31

    With Intelligent Transportation Systems (ITS), engineers and system integrators blend emerging detection/surveillance, communications, and computer technologies with transportation management and control concepts to improve the safety and mobilit...

  3. Development of a Google-based search engine for data mining radiology reports.

    Science.gov (United States)

    Erinjeri, Joseph P; Picus, Daniel; Prior, Fred W; Rubin, David A; Koppel, Paul

    2009-08-01

    The aim of this study is to develop a secure, Google-based data-mining tool for radiology reports using free and open source technologies and to explore its use within an academic radiology department. A Health Insurance Portability and Accountability Act (HIPAA)-compliant data repository, search engine and user interface were created to facilitate treatment, operations, and reviews preparatory to research. The Institutional Review Board waived review of the project, and informed consent was not required. Comprising 7.9 GB of disk space, 2.9 million text reports were downloaded from our radiology information system to a fileserver. Extensible markup language (XML) representations of the reports were indexed using Google Desktop Enterprise search engine software. A hypertext markup language (HTML) form allowed users to submit queries to Google Desktop, and Google's XML response was interpreted by a practical extraction and report language (PERL) script, presenting ranked results in a web browser window. The query, reason for search, results, and documents visited were logged to maintain HIPAA compliance. Indexing averaged approximately 25,000 reports per hour. Keyword search of a common term like "pneumothorax" yielded the first ten most relevant results of 705,550 total results in 1.36 s. Keyword search of a rare term like "hemangioendothelioma" yielded the first ten most relevant results of 167 total results in 0.23 s; retrieval of all 167 results took 0.26 s. Data mining tools for radiology reports will improve the productivity of academic radiologists in clinical, educational, research, and administrative tasks. By leveraging existing knowledge of Google's interface, radiologists can quickly perform useful searches.
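
    A minimal stand-in for the workflow above is sketched below: build a tiny inverted index over (already de-identified) report texts, answer keyword queries, and log who searched what and why, mirroring the HIPAA audit-trail idea. The original system indexed XML reports with Google Desktop and glued the pieces together with a PERL script; that stack is not reproduced here, and all report texts, IDs, and file names are invented.

```python
# Illustrative keyword search over de-identified report texts with an audit log.
# Standard library only; not the Google Desktop/PERL stack from the paper.
import csv, re, time
from collections import defaultdict

reports = {
    "RPT0001": "small apical pneumothorax following biopsy",
    "RPT0002": "no pneumothorax, lungs clear",
    "RPT0003": "hepatic hemangioendothelioma, follow-up advised",
}

index = defaultdict(set)
for rid, text in reports.items():
    for token in re.findall(r"[a-z]+", text.lower()):
        index[token].add(rid)

def search(term, user, reason, log_path="query_log.csv"):
    hits = sorted(index.get(term.lower(), set()))
    with open(log_path, "a", newline="") as f:          # audit trail: who, why, what, results
        csv.writer(f).writerow([time.time(), user, reason, term, ";".join(hits)])
    return hits

print(search("pneumothorax", user="jdoe", reason="operations review"))
```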

  4. What Major Search Engines Like Google, Yahoo and Bing Need to Know about Teachers in the UK?

    Science.gov (United States)

    Seyedarabi, Faezeh

    2014-01-01

    This article briefly outlines the current major search engines' approach to teachers' web searching. The aim of this article is to make Web searching easier for teachers when searching for relevant online teaching materials, in general, and UK teacher practitioners at primary, secondary and post-compulsory levels, in particular. Therefore, major…

  5. 14th ACIS/IEEE International Conference on Software Engineering, Artificial Intelligence, Networking and Parallel/Distributed Computing

    CERN Document Server

    Studies in Computational Intelligence : Volume 492

    2013-01-01

    This edited book presents scientific results of the 14th ACIS/IEEE International Conference on Software Engineering, Artificial Intelligence, Networking and Parallel/Distributed Computing (SNPD 2013), held in Honolulu, Hawaii, USA on July 1-3, 2013. The aim of this conference was to bring together scientists, engineers, computer users, and students to share their experiences and exchange new ideas, research results about all aspects (theory, applications and tools) of computer and information science, and to discuss the practical challenges encountered along the way and the solutions adopted to solve them. The conference organizers selected the 17 outstanding papers from those papers accepted for presentation at the conference.  

  6. 15th IEEE/ACIS International Conference on Software Engineering, Artificial Intelligence, Networking and Parallel/Distributed Computing

    CERN Document Server

    2015-01-01

    This edited book presents scientific results of 15th IEEE/ACIS International Conference on Software Engineering, Artificial Intelligence, Networking and Parallel/Distributed Computing (SNPD 2014) held on June 30 – July 2, 2014 in Las Vegas Nevada, USA. The aim of this conference was to bring together scientists, engineers, computer users, and students to share their experiences and exchange new ideas, research results about all aspects (theory, applications and tools) of computer and information science, and to discuss the practical challenges encountered along the way and the solutions adopted to solve them. The conference organizers selected the 13 outstanding papers from those papers accepted for presentation at the conference.

  7. A cognitive evaluation of four online search engines for answering definitional questions posed by physicians.

    Science.gov (United States)

    Yu, Hong; Kaufman, David

    2007-01-01

    The Internet is having a profound impact on physicians' medical decision making. One recent survey of 277 physicians showed that 72% of physicians regularly used the Internet to research medical information and 51% admitted that information from web sites influenced their clinical decisions. This paper describes the first cognitive evaluation of four state-of-the-art Internet search engines: Google (i.e., Google and Scholar.Google), MedQA, Onelook, and PubMed for answering definitional questions (i.e., questions with the format of "What is X?") posed by physicians. Onelook is a portal for online definitions, and MedQA is a question answering system that automatically generates short texts to answer specific biomedical questions. Our evaluation criteria include quality of answer, ease of use, time spent, and number of actions taken. Our results show that MedQA outperforms Onelook and PubMed in most of the criteria, and that MedQA surpasses Google in time spent and number of actions, two important efficiency criteria. Our results show that Google is the best system for quality of answer and ease of use. We conclude that Google is an effective search engine for medical definitions, and that MedQA exceeds the other search engines in that it provides users direct answers to their questions; while the users of the other search engines have to visit several sites before finding all of the pertinent information.

  8. Advances in intelligent analysis of medical data and decision support systems

    CERN Document Server

    Iantovics, Barna

    2013-01-01

    This volume is a result of the fruitful and vivid discussions during the MedDecSup'2012 International Workshop bringing together a relevant body of knowledge, and new developments in the increasingly important field of medical informatics. This carefully edited book presents new ideas aimed at the development of intelligent processing of various kinds of medical information and the perfection of the contemporary computer systems for medical decision support. The book presents advances of the medical information systems for intelligent archiving, processing, analysis and search-by-content which will improve the quality of the medical services for every patient and of the global healthcare system. The book combines in a synergistic way theoretical developments with the practicability of the approaches developed and presents the last developments and achievements in  medical informatics to a broad range of readers: engineers, mathematicians, physicians, and PhD students.

  9. 14th International Conference on Software Engineering, Artificial Intelligence Research, Management and Applications

    CERN Document Server

    2016-01-01

    This edited book presents scientific results of the 14th International Conference on Software Engineering, Artificial Intelligence Research, Management and Applications (SERA 2016) held on June 8-10, 2016 at Towson University, USA. The aim of this conference was to bring together researchers and scientists, businessmen and entrepreneurs, teachers, engineers, computer users, and students to discuss the numerous fields of computer science, to share their experiences and exchange new ideas and information in a meaningful way, to present research results about all aspects (theory, applications and tools) of computer and information science, and to discuss the practical challenges encountered along the way and the solutions adopted to solve them. The conference organizers selected the best papers from those papers accepted for presentation at the conference. The papers were chosen based on review scores submitted by members of the program committee, and underwent further rigorous rounds of review. This publication capture...

  10. Emotional intelligence among nursing students: Findings from a cross-sectional study.

    Science.gov (United States)

    Štiglic, Gregor; Cilar, Leona; Novak, Žiga; Vrbnjak, Dominika; Stenhouse, Rosie; Snowden, Austyn; Pajnkihar, Majda

    2018-07-01

    Emotional intelligence in nursing is of global interest. International studies identify that emotional intelligence influences nurses' work and relationships with patients. It is associated with compassion and care. Nursing students scored higher on measures of emotional intelligence compared to students of other study programmes. The level of emotional intelligence increases with age and tends to be higher in women. This study aims to measure the differences in emotional intelligence between nursing students with previous caring experience and those without; to examine the effects of gender on emotional intelligence scores; and to test whether nursing students score higher than engineering colleagues on emotional intelligence measures. A cross-sectional descriptive study design was used. The study included 113 nursing and 104 engineering students at the beginning of their first year of study at a university in Slovenia. Emotional intelligence was measured using the Trait Emotional Intelligence Questionnaire (TEIQue) and Schutte Self Report Emotional Intelligence Test (SSEIT). Shapiro-Wilk's test of normality was used to test the sample distribution, while the differences in mean values were tested using Student t-test of independent samples. Emotional intelligence was higher in nursing students (n = 113) than engineering students (n = 104) in both measures [TEIQue t = 3.972; p emotional intelligence scores than male students on both measures, the difference was not statistically significant [TEIQue t = -0.839; p = 0.403; SSEIT t = -1.159; p = 0.249]. EI scores in nursing students with previous caring experience were not higher compared to students without such experience for any measure [TEIQue t = -1.633; p = 0.105; SSEIT t = -0.595; p = 0.553]. Emotional intelligence was higher in nursing than engineering students, and slightly higher in women than men. It was not associated with previous caring experience. Copyright

  11. Whiplash Syndrome Reloaded: Digital Echoes of Whiplash Syndrome in the European Internet Search Engine Context

    Science.gov (United States)

    2017-01-01

    Background: In many Western countries, after a motor vehicle collision, those involved seek health care for the assessment of injuries and for insurance documentation purposes. In contrast, in many less wealthy countries, there may be limited access to care and no insurance or compensation system. Objective: The purpose of this infodemiology study was to investigate the global pattern of evolving Internet usage in countries with and without insurance and the corresponding compensation systems for whiplash injury. Methods: We used Internet search engine analytics via Google Trends to study the health information-seeking behavior concerning whiplash injury at national population levels in Europe. Results: We found that the search for “whiplash” is strikingly and consistently often associated with the search for “compensation” in countries or cultures with a tort system. Frequent or traumatic painful injuries; diseases or disorders such as arthritis, headache, radius, and hip fracture; depressive disorders; and fibromyalgia were not associated similarly with searches on “compensation.” Conclusions: In this study, we present evidence from the evolving viewpoint of naturalistic Internet search engine analytics that the expectations for receiving compensation may influence Internet search behavior in relation to whiplash injury. PMID:28347974

  12. International Conference on Frontiers of Intelligent Computing : Theory and Applications

    CERN Document Server

    Udgata, Siba; Biswal, Bhabendra

    2014-01-01

    This volume contains the papers presented at the Second International Conference on Frontiers in Intelligent Computing: Theory and Applications (FICTA-2013) held during 14-16 November 2013, organized by Bhubaneswar Engineering College (BEC), Bhubaneswar, Odisha, India. It contains 63 papers focusing on the application of intelligent techniques, including evolutionary computation techniques such as genetic algorithms, particle swarm optimization and teaching-learning based optimization, to various engineering applications such as data mining, fuzzy systems, machine intelligence and ANN, web technologies and multimedia applications, and intelligent computing and networking.

  13. Refining comparative proteomics by spectral counting to account for shared peptides and multiple search engines.

    Science.gov (United States)

    Chen, Yao-Yi; Dasari, Surendra; Ma, Ze-Qiang; Vega-Montoto, Lorenzo J; Li, Ming; Tabb, David L

    2012-09-01

    Spectral counting has become a widely used approach for measuring and comparing protein abundance in label-free shotgun proteomics. However, when analyzing complex samples, the ambiguity of matching between peptides and proteins greatly affects the assessment of peptide and protein inventories, differentiation, and quantification. Meanwhile, the configuration of database searching algorithms that assign peptides to MS/MS spectra may produce different results in comparative proteomic analysis. Here, we present three strategies to improve comparative proteomics through spectral counting. We show that comparing spectral counts for peptide groups rather than for protein groups forestalls problems introduced by shared peptides. We demonstrate the advantage and flexibility of this new method in two datasets. We present four models to combine four popular search engines that lead to significant gains in spectral counting differentiation. Among these models, we demonstrate a powerful vote counting model that scales well for multiple search engines. We also show that semi-tryptic searching outperforms tryptic searching for comparative proteomics. Overall, these techniques considerably improve protein differentiation on the basis of spectral count tables.
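
    The vote-counting combination described above can be pictured with the toy sketch below: a spectrum's peptide assignment is kept only when at least a minimum number of engines agree, and spectral counts are then tallied per peptide. The engine names are real search engines, but their outputs here are invented, and the real models in the paper operate on richer identification records than this.

```python
# Toy vote-counting combination of several search engines: keep a spectrum's
# peptide assignment only when at least `min_votes` engines agree, then tally
# spectral counts per peptide. Engine outputs below are invented.
from collections import Counter

engine_results = {
    "MyriMatch": {"scan01": "PEPTIDER", "scan02": "SAMPLEK",   "scan03": "LYSINEK"},
    "X!Tandem":  {"scan01": "PEPTIDER", "scan02": "SAMPLEK",   "scan03": "DIFFERK"},
    "Sequest":   {"scan01": "PEPTIDER", "scan02": "OTHERPEPK"},
    "MS-GF+":    {"scan01": "PEPTIDER", "scan03": "LYSINEK"},
}

def vote_count(results, min_votes=2):
    votes = Counter()
    for engine_ids in results.values():
        for scan, peptide in engine_ids.items():
            votes[(scan, peptide)] += 1
    spectral_counts = Counter()
    for (scan, peptide), n in votes.items():
        if n >= min_votes:
            spectral_counts[peptide] += 1
    return spectral_counts

print(vote_count(engine_results))   # Counter({'PEPTIDER': 1, 'SAMPLEK': 1, 'LYSINEK': 1})
```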

  14. Artificial intelligence based models for stream-flow forecasting: 2000-2015

    Science.gov (United States)

    Yaseen, Zaher Mundher; El-shafie, Ahmed; Jaafar, Othman; Afan, Haitham Abdulmohsin; Sayl, Khamis Naba

    2015-11-01

    The use of Artificial Intelligence (AI) has increased since the middle of the 20th century, as seen in its application to a wide range of engineering and science problems. The last two decades, for example, have seen a dramatic increase in the development and application of various types of AI approaches for stream-flow forecasting. Generally speaking, AI has exhibited significant progress in forecasting and modeling non-linear hydrological applications and in capturing the noise complexity in the dataset. This paper explores the state-of-the-art application of AI in stream-flow forecasting, focusing on the data-driven character of AI, the advantages of complementary models, as well as the literature and possible future applications in modeling and forecasting stream-flow. The review also identifies the major challenges and opportunities for prospective research, including a new scheme for modeling the inflow, a novel method for preprocessing time-series frequency based on Fast Orthogonal Search (FOS) techniques, and Swarm Intelligence (SI) as an optimization approach.

  15. Changes in users' mental models of Web search engines after ten ...

    African Journals Online (AJOL)

    Ward's cluster analyses, including the pseudo-T² statistical analyses, were used to determine the mental model clusters for the seventeen salient design features of Web search engines at each time point. The cubic clustering criterion (CCC) and the dendrogram were conducted for each sample to help determine the number ...

  16. A Taxonomic Search Engine: Federating taxonomic databases using web services

    Directory of Open Access Journals (Sweden)

    Page Roderic DM

    2005-03-01

    Full Text Available Abstract Background The taxonomic name of an organism is a key link between different databases that store information on that organism. However, in the absence of a single, comprehensive database of organism names, individual databases lack an easy means of checking the correctness of a name. Furthermore, the same organism may have more than one name, and the same name may apply to more than one organism. Results The Taxonomic Search Engine (TSE) is a web application written in PHP that queries multiple taxonomic databases (ITIS, Index Fungorum, IPNI, NCBI, and uBIO) and summarises the results in a consistent format. It supports "drill-down" queries to retrieve a specific record. The TSE can optionally suggest alternative spellings the user can try. It also acts as a Life Science Identifier (LSID) authority for the source taxonomic databases, providing globally unique identifiers (and associated metadata) for each name. Conclusion The Taxonomic Search Engine is available at http://darwin.zoology.gla.ac.uk/~rpage/portal/ and provides a simple demonstration of the potential of the federated approach to providing access to taxonomic names.
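
    A minimal sketch of the federation pattern described above, assuming hypothetical provider functions in place of the real ITIS, NCBI, IPNI, Index Fungorum and uBio web-service calls: each source is queried, its answer is normalised to one record shape, and the merged result is de-duplicated. The provider functions, field names and identifiers are stand-ins, not the TSE's actual interfaces.

```python
# Federated-search sketch: query several taxonomic name providers in parallel
# and return their answers in one consistent format (stand-in providers only).
from concurrent.futures import ThreadPoolExecutor

def query_itis(name):      # stand-in for a real ITIS web-service call
    return [{"source": "ITIS", "name": name, "id": "itis:123"}]

def query_ncbi(name):      # stand-in for a real NCBI taxonomy query
    return [{"source": "NCBI", "name": name, "id": "ncbi:9606"}]

PROVIDERS = [query_itis, query_ncbi]

def federated_search(name):
    """Run all providers and merge the answers into one de-duplicated list."""
    with ThreadPoolExecutor() as pool:
        answers = pool.map(lambda fn: fn(name), PROVIDERS)
    merged = [record for answer in answers for record in answer]
    seen, unique = set(), []
    for record in merged:
        key = (record["source"], record["id"])   # collapse repeated hits
        if key not in seen:
            seen.add(key)
            unique.append(record)
    return unique

print(federated_search("Homo sapiens"))
```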

  17. Science and engineering intelligent surveillance systems

    CERN Document Server

    Huihuan, Qian; Xu, Yangsheng

    2011-01-01

    As shortcomings such as high labor costs make intelligent surveillance systems more desirable, this practical book focuses on detecting abnormal behavior based on learning and the analysis of dangerous crowd behavior based on texture and optical flow.

  18. Which Search Engine Is the Most Used One among University Students?

    Science.gov (United States)

    Cavus, Nadire; Alpan, Kezban

    2010-01-01

    The importance of information is increasing in the information age that we are living in, with the internet becoming the major information resource for people and the number of documents growing rapidly. This situation makes finding information on the internet without web search engines impossible. The aim of the study is to reveal the most widely used…

  19. Use of search engines for academic activities by the academic staff ...

    African Journals Online (AJOL)

    The research was designed to investigate the Internet Search Engine use behaviour and experiences of lecturers at the University of Jos, using the academics of the Faculty of Natural Sciences in the University as a focal population. The entire population of 148 academic staff members in the Faculty was adopted for the ...

  20. Intelligent Integrated System Health Management

    Science.gov (United States)

    Figueroa, Fernando

    2012-01-01

    Intelligent Integrated System Health Management (ISHM) is the management of data, information, and knowledge (DIaK) with the purposeful objective of determining the health of a system (Management: storage, distribution, sharing, maintenance, processing, reasoning, and presentation). Presentation discusses: (1) ISHM Capability Development. (1a) ISHM Knowledge Model. (1b) Standards for ISHM Implementation. (1c) ISHM Domain Models (ISHM-DM's). (1d) Intelligent Sensors and Components. (2) ISHM in Systems Design, Engineering, and Integration. (3) Intelligent Control for ISHM-Enabled Systems

  1. Inefficiency and Bias of Search Engines in Retrieving References Containing Scientific Names of Fossil Amphibians

    Science.gov (United States)

    Brown, Lauren E.; Dubois, Alain; Shepard, Donald B.

    2008-01-01

    Retrieval efficiencies of paper-based references in journals and other serials containing 10 scientific names of fossil amphibians were determined for seven major search engines. Retrievals were compared to the number of references obtained covering the period 1895-2006 by a Comprehensive Search. The latter was primarily a traditional…

  2. Seasonal trends in tinnitus symptomatology: evidence from Internet search engine query data.

    Science.gov (United States)

    Plante, David T; Ingram, David G

    2015-10-01

    The primary aim of this study was to test the hypothesis that the symptom of tinnitus demonstrates a seasonal pattern with worsening in the winter relative to the summer using Internet search engine query data. Normalized search volume for the term 'tinnitus' from January 2004 through December 2013 was retrieved from Google Trends. Seasonal effects were evaluated using cosinor regression models. Primary countries of interest were the United States and Australia. Secondary exploratory analyses were also performed using data from Germany, the United Kingdom, Canada, Sweden, and Switzerland. Significant seasonal effects for 'tinnitus' search queries were found in the United States and Australia, with increased search volume in the winter relative to the summer. Our findings indicate that there are significant seasonal trends for Internet search queries for tinnitus, with a zenith in winter months. Further research is indicated to determine the biological mechanisms underlying these findings, as they may provide insights into the pathophysiology of this common and debilitating medical symptom.
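
    A cosinor model of the kind used here can be fitted with ordinary least squares. The sketch below uses a simulated monthly series rather than the study's Google Trends data; the mesor, amplitude, and peak month reported at the end are the standard cosinor quantities, and all numbers are illustrative.

```python
# Cosinor fit for a monthly series: y = b0 + b1*cos(wt) + b2*sin(wt), w = 2*pi/12.
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(24, dtype=float)                      # two years of monthly samples
volume = 60 + 10 * np.cos(2 * np.pi * (t - 1) / 12) + rng.normal(0, 2, t.size)

X = np.column_stack([np.ones_like(t),
                     np.cos(2 * np.pi * t / 12),
                     np.sin(2 * np.pi * t / 12)])
b, *_ = np.linalg.lstsq(X, volume, rcond=None)

amplitude = np.hypot(b[1], b[2])                    # size of the seasonal swing
peak_month = (np.arctan2(b[2], b[1]) * 12 / (2 * np.pi)) % 12
print(f"mesor={b[0]:.1f}, amplitude={amplitude:.1f}, peak near month {peak_month:.1f}")
```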

  3. 6th IEEE/ACIS International Conference on Software Engineering, Artificial Intelligence, Networking and Parallel/Distributed Computing

    CERN Document Server

    2016-01-01

    This edited book presents scientific results of the 16th IEEE/ACIS International Conference on Software Engineering, Artificial Intelligence, Networking and Parallel/Distributed Computing (SNPD 2015) which was held on June 1 – 3, 2015 in Takamatsu, Japan. The aim of this conference was to bring together researchers and scientists, businessmen and entrepreneurs, teachers, engineers, computer users, and students to discuss the numerous fields of computer science, to share their experiences and exchange new ideas and information in a meaningful way, to present research results on all aspects (theory, applications and tools) of computer and information science, and to discuss the practical challenges encountered along the way and the solutions adopted to solve them.

  4. 17th IEEE/ACIS International Conference on Software Engineering, Artificial Intelligence, Networking and Parallel/Distributed Computing

    CERN Document Server

    SNPD 2016

    2016-01-01

    This edited book presents scientific results of the 17th IEEE/ACIS International Conference on Software Engineering, Artificial Intelligence, Networking and Parallel/Distributed Computing (SNPD 2016) which was held on May 30 - June 1, 2016 in Shanghai, China. The aim of this conference was to bring together researchers and scientists, businessmen and entrepreneurs, teachers, engineers, computer users, and students to discuss the numerous fields of computer science, to share their experiences and exchange new ideas and information in a meaningful way, to present research results on all aspects (theory, applications and tools) of computer and information science, and to discuss the practical challenges encountered along the way and the solutions adopted to solve them.

  5. Swarm Intelligence-Based Hybrid Models for Short-Term Power Load Prediction

    Directory of Open Access Journals (Sweden)

    Jianzhou Wang

    2014-01-01

    Full Text Available Swarm intelligence (SI) is widely and successfully applied in the engineering field to solve practical optimization problems because various hybrid models, which are based on the SI algorithm and statistical models, are developed to further improve the predictive abilities. In this paper, hybrid intelligent forecasting models based on the cuckoo search (CS) as well as the singular spectrum analysis (SSA), time series, and machine learning methods are proposed to conduct short-term power load prediction. The forecasting performance of the proposed models is augmented by a rolling multistep strategy over the prediction horizon. The test results are representative of the out-performance of the SSA and CS in tuning the seasonal autoregressive integrated moving average (SARIMA) and support vector regression (SVR) in improving load forecasting, which indicates that both the SSA-based data denoising and SI-based intelligent optimization strategy can effectively improve the model’s predictive performance. Additionally, the proposed CS-SSA-SARIMA and CS-SSA-SVR models provide very impressive forecasting results, demonstrating their strong robustness and universal forecasting capacities in terms of short-term power load prediction 24 hours in advance.
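
    A simplified sketch of the swarm-intelligence side of such a hybrid: cuckoo search, with heavy-tailed steps standing in for Lévy flights, tuning SVR hyper-parameters on a toy load series. The paper's full pipeline also includes SSA denoising and a rolling multistep strategy, which are not reproduced here; the series, parameter ranges and population sizes are illustrative assumptions.

```python
# Cuckoo-search-style tuning of SVR (C, gamma) on a toy load series.
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
t = np.arange(200)
load = 50 + 10 * np.sin(2 * np.pi * t / 24) + rng.normal(0, 1, t.size)   # toy hourly load
X = np.array([load[i:i + 24] for i in range(len(load) - 24)])            # 24-lag windows
y = load[24:]                                                            # next-hour target

def fitness(params):
    C, gamma = np.exp(params)          # search log(C), log(gamma) so both stay positive
    model = SVR(C=C, gamma=gamma)
    return cross_val_score(model, X, y, cv=3, scoring="neg_mean_squared_error").mean()

n_nests, n_iter, pa = 8, 20, 0.25
low, high = np.array([-2.0, -8.0]), np.array([6.0, 0.0])
nests = rng.uniform(low, high, size=(n_nests, 2))
scores = np.array([fitness(n) for n in nests])

for _ in range(n_iter):
    for i in range(n_nests):
        # heavy-tailed "Levy-like" step, clipped to the search bounds
        candidate = np.clip(nests[i] + 0.3 * rng.standard_cauchy(2), low, high)
        cand_score = fitness(candidate)
        if cand_score > scores[i]:
            nests[i], scores[i] = candidate, cand_score
    worst = scores.argsort()[: int(pa * n_nests)]             # abandon the worst nests
    nests[worst] = rng.uniform(low, high, size=(len(worst), 2))
    scores[worst] = [fitness(n) for n in nests[worst]]

best_C, best_gamma = np.exp(nests[scores.argmax()])
print(f"best C={best_C:.3f}, gamma={best_gamma:.5f}")
```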

  6. Whiplash Syndrome Reloaded: Digital Echoes of Whiplash Syndrome in the European Internet Search Engine Context.

    Science.gov (United States)

    Noll-Hussong, Michael

    2017-03-27

    In many Western countries, after a motor vehicle collision, those involved seek health care for the assessment of injuries and for insurance documentation purposes. In contrast, in many less wealthy countries, there may be limited access to care and no insurance or compensation system. The purpose of this infodemiology study was to investigate the global pattern of evolving Internet usage in countries with and without insurance and the corresponding compensation systems for whiplash injury. We used the Internet search engine analytics via Google Trends to study the health information-seeking behavior concerning whiplash injury at national population levels in Europe. We found that the search for "whiplash" is strikingly and consistently often associated with the search for "compensation" in countries or cultures with a tort system. Frequent or traumatic painful injuries; diseases or disorders such as arthritis, headache, radius, and hip fracture; depressive disorders; and fibromyalgia were not associated similarly with searches on "compensation." In this study, we present evidence from the evolving viewpoint of naturalistic Internet search engine analytics that the expectations for receiving compensation may influence Internet search behavior in relation to whiplash injury. ©Michael Noll-Hussong. Originally published in JMIR Public Health and Surveillance (http://publichealth.jmir.org), 27.03.2017.

  7. A search engine to identify pathway genes from expression data on multiple organisms

    Directory of Open Access Journals (Sweden)

    Zambon Alexander C

    2007-05-01

    Full Text Available Abstract Background The completion of several genome projects showed that most genes have not yet been characterized, especially in multicellular organisms. Although most genes have unknown functions, a large collection of data is available describing their transcriptional activities under many different experimental conditions. In many cases, the coregulation of a set of genes across a set of conditions can be used to infer roles for genes of unknown function. Results We developed a search engine, the Multiple-Species Gene Recommender (MSGR), which scans gene expression datasets from multiple organisms to identify genes that participate in a genetic pathway. The MSGR takes a query consisting of a list of genes that function together in a genetic pathway from one of six organisms: Homo sapiens, Drosophila melanogaster, Caenorhabditis elegans, Saccharomyces cerevisiae, Arabidopsis thaliana, and Helicobacter pylori. Using a probabilistic method to merge searches, the MSGR identifies genes that are significantly coregulated with the query genes in one or more of those organisms. The MSGR achieves its highest accuracy for many human pathways when searches are combined across species. We describe specific examples in which new genes were identified to be involved in a neuromuscular signaling pathway and a cell-adhesion pathway. Conclusion The search engine can scan large collections of gene expression data for new genes that are significantly coregulated with a pathway of interest. By integrating searches across organisms, the MSGR can identify pathway members whose coregulation is either ancient or newly evolved.
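
    The cross-organism merging can be sketched with a simple rank-based scheme. The code below is not the MSGR's actual probabilistic model: it scores each gene by its mean correlation with the query genes within each organism, turns ranks into empirical p-values, and merges organisms with Fisher's method. The toy matrices and the assumption that rows are matched orthologs across organisms are illustrative simplifications.

```python
# Rank-based merging of per-organism coregulation searches (illustrative sketch).
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(1)

def organism_pvalues(expr, query_genes):
    """expr: genes x conditions matrix; returns an empirical p-value per gene."""
    corr = np.corrcoef(expr)                       # gene-gene correlation
    score = corr[:, query_genes].mean(axis=1)      # coregulation with the query set
    ranks = (-score).argsort().argsort() + 1       # 1 = most coregulated
    return ranks / len(score)                      # empirical p-value

def fisher_merge(pvals_per_organism):
    """Combine per-organism p-values with Fisher's method (rows assumed orthologous)."""
    stat = -2 * np.log(np.vstack(pvals_per_organism)).sum(axis=0)
    return chi2.sf(stat, df=2 * len(pvals_per_organism))

expr_a = rng.normal(size=(100, 30))                # toy data for two organisms
expr_b = rng.normal(size=(100, 25))
merged = fisher_merge([organism_pvalues(expr_a, [0, 1, 2]),
                       organism_pvalues(expr_b, [0, 1, 2])])
print("top candidate genes:", merged.argsort()[:5])
```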

  8. Intelligent Growth Automaton of Virtual Plant Based on Physiological Engine

    Science.gov (United States)

    Zhu, Qingsheng; Guo, Mingwei; Qu, Hongchun; Deng, Qingqing

    In this paper, a novel intelligent growth automaton for virtual plants is proposed. Initially, the automaton analyzes the branching pattern, which is controlled by genes, and then builds the plant; moreover, it stores the information of plant growth, provides the interface between the virtual plant and its environment, and controls the growth and development of the plant on the basis of the environment and the function of the plant organs. The automaton can thus simulate plant growth as controlled by the genetic information system, environmental information, and the function of the plant organs. The experimental results show that the intelligent growth automaton can simulate the growth of plants conveniently and vividly.

  9. GIGGLE: a search engine for large-scale integrated genome analysis.

    Science.gov (United States)

    Layer, Ryan M; Pedersen, Brent S; DiSera, Tonya; Marth, Gabor T; Gertz, Jason; Quinlan, Aaron R

    2018-02-01

    GIGGLE is a genomics search engine that identifies and ranks the significance of genomic loci shared between query features and thousands of genome interval files. GIGGLE (https://github.com/ryanlayer/giggle) scales to billions of intervals and is over three orders of magnitude faster than existing methods. Its speed extends the accessibility and utility of resources such as ENCODE, Roadmap Epigenomics, and GTEx by facilitating data integration and hypothesis generation.
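
    The core operation such a genome-interval search engine performs can be shown in miniature: count how many query intervals overlap each annotation file and rank the files. GIGGLE itself does this with a compressed unified index over thousands of files and scores hits with a combined statistic; the toy version below just sorts and sweeps one chromosome's intervals, and the file names and coordinates are made up.

```python
# Toy interval-overlap ranking (not GIGGLE's index or its scoring statistic).
from bisect import bisect_right

def overlap_count(query, annotation):
    """Count query/annotation overlaps; intervals are half-open (start, end) on one chromosome."""
    starts = sorted(s for s, _ in annotation)
    ends = sorted(e for _, e in annotation)
    hits = 0
    for q_start, q_end in query:
        # overlaps = intervals that start before the query ends
        #          - intervals that end at or before the query starts
        hits += bisect_right(starts, q_end - 1) - bisect_right(ends, q_start)
    return hits

annotations = {
    "enhancers.bed": [(100, 200), (500, 650), (900, 950)],
    "repeats.bed": [(0, 50), (700, 800)],
}
query = [(120, 180), (600, 640), (920, 1000)]
ranked = sorted(annotations, key=lambda name: overlap_count(query, annotations[name]), reverse=True)
print(ranked)  # files ranked by how strongly they overlap the query regions
```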

  10. OrChem - An open source chemistry search engine for Oracle®

    Science.gov (United States)

    2009-01-01

    Background Registration, indexing and searching of chemical structures in relational databases is one of the core areas of cheminformatics. However, little detail has been published on the inner workings of search engines and their development has been mostly closed-source. We decided to develop an open source chemistry extension for Oracle, the de facto database platform in the commercial world. Results Here we present OrChem, an extension for the Oracle 11G database that adds registration and indexing of chemical structures to support fast substructure and similarity searching. The cheminformatics functionality is provided by the Chemistry Development Kit. OrChem provides similarity searching with response times in the order of seconds for databases with millions of compounds, depending on a given similarity cut-off. For substructure searching, it can make use of multiple processor cores on today's powerful database servers to provide fast response times in equally large data sets. Availability OrChem is free software and can be redistributed and/or modified under the terms of the GNU Lesser General Public License as published by the Free Software Foundation. All software is available via http://orchem.sourceforge.net. PMID:20298521

  11. OrChem - An open source chemistry search engine for Oracle(R).

    Science.gov (United States)

    Rijnbeek, Mark; Steinbeck, Christoph

    2009-10-22

    Registration, indexing and searching of chemical structures in relational databases is one of the core areas of cheminformatics. However, little detail has been published on the inner workings of search engines and their development has been mostly closed-source. We decided to develop an open source chemistry extension for Oracle, the de facto database platform in the commercial world. Here we present OrChem, an extension for the Oracle 11G database that adds registration and indexing of chemical structures to support fast substructure and similarity searching. The cheminformatics functionality is provided by the Chemistry Development Kit. OrChem provides similarity searching with response times in the order of seconds for databases with millions of compounds, depending on a given similarity cut-off. For substructure searching, it can make use of multiple processor cores on today's powerful database servers to provide fast response times in equally large data sets. OrChem is free software and can be redistributed and/or modified under the terms of the GNU Lesser General Public License as published by the Free Software Foundation. All software is available via http://orchem.sourceforge.net.
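
    The similarity search OrChem provides inside Oracle can be sketched outside the database. The example below uses RDKit in Python rather than the CDK/PL-SQL stack the paper describes, but the idea is the same: compute a binary fingerprint per structure and rank database molecules by Tanimoto similarity to the query above a cut-off. The molecule names, SMILES, and cut-off are illustrative.

```python
# Fingerprint/Tanimoto similarity search sketch (RDKit stand-in for OrChem's CDK backend).
from rdkit import Chem
from rdkit.Chem import AllChem
from rdkit import DataStructs

database_smiles = {"aspirin": "CC(=O)OC1=CC=CC=C1C(=O)O",
                   "caffeine": "CN1C=NC2=C1C(=O)N(C)C(=O)N2C",
                   "salicylic acid": "OC(=O)C1=CC=CC=C1O"}

def fingerprint(smiles):
    return AllChem.GetMorganFingerprintAsBitVect(Chem.MolFromSmiles(smiles), radius=2, nBits=2048)

def similarity_search(query_smiles, cutoff=0.3):
    query_fp = fingerprint(query_smiles)
    hits = []
    for name, smiles in database_smiles.items():
        score = DataStructs.TanimotoSimilarity(query_fp, fingerprint(smiles))
        if score >= cutoff:
            hits.append((name, round(score, 2)))
    return sorted(hits, key=lambda item: item[1], reverse=True)

print(similarity_search("OC(=O)C1=CC=CC=C1OC(C)=O"))  # an aspirin-like query
```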

  12. Impact of Internet Search Engines on OPAC Users: A Study of Punjabi University, Patiala (India)

    Science.gov (United States)

    Kumar, Shiv

    2012-01-01

    Purpose: The aim of this paper is to study the impact of internet search engine usage with special reference to OPAC searches in the Punjabi University Library, Patiala, Punjab (India). Design/methodology/approach: The primary data were collected from 352 users comprising faculty, research scholars and postgraduate students of the university. A…

  13. Intelligent Systems for Aerospace Engineering - An Overview

    National Research Council Canada - National Science Library

    Krishnakumar, K

    2003-01-01

    Intelligent systems are nature-inspired, mathematically sound, computationally intensive problem solving tools and methodologies that have become extremely important for advancing the current trends...

  14. Speech Intelligibility

    Science.gov (United States)

    Brand, Thomas

    Speech intelligibility (SI) is important for different fields of research, engineering and diagnostics in order to quantify very different phenomena like the quality of recordings, communication and playback devices, the reverberation of auditoria, characteristics of hearing impairment, the benefit of using hearing aids, or combinations of these things.

  15. Sampling the Radio Transient Universe: Studies of Pulsars and the Search for Extraterrestrial Intelligence

    Science.gov (United States)

    Chennamangalam, Jayanth

    The transient radio universe is a relatively unexplored area of astronomy, offering a variety of phenomena, from solar and Jovian bursts, to flare stars, pulsars, and bursts of Galactic and potentially even cosmological origin. Among these, perhaps the most widely studied radio transients, pulsars are fast-spinning neutron stars that emit radio beams from their magnetic poles. In spite of over 40 years of research on pulsars, we have more questions than answers on these exotic compact objects, chief among them the nature of their emission mechanism. Nevertheless, the wealth of phenomena exhibited by pulsars make them one of the most useful astrophysical tools. With their high densities, pulsars are probes of the nature of ultra-dense matter. Characterized by their high timing stability, pulsars can be used to verify the predictions of general relativity, discover planets around them, study bodies in the solar system, and even serve as an interplanetary (and possibly some day, interstellar) navigation aid. Pulsars are also used to study the nature of the interstellar medium, much like a flashlight illuminating airborne dust in a dark room. Studies of pulsars in the Galactic center can help answer questions about the massive black hole in the region and the star formation history in its vicinity. Millisecond pulsars in globular clusters are long-lived tracers of their progenitors, low-mass X-ray binaries, and can be used to study the dynamical history of those clusters. Another source of interest in radio transient astronomy is the hitherto undetected engineered signal from extraterrestrial intelligence. The Search for Extraterrestrial Intelligence (SETI) is an ongoing attempt at discovering the presence of technological life elsewhere in the Galaxy. In this work, I present my forays into two aspects of the study of the radio transient universe---pulsars and SETI. Firstly, I describe my work on the luminosity function and population size of pulsars in the globular

  16. Aplikasi Search Engine Perpustakaan Petra Berbasis Android dengan Apache SOLR (Petra Library Search Engine Application on Android with Apache SOLR)

    Directory of Open Access Journals (Sweden)

    Andreas Handojo

    2016-07-01

    Full Text Available Abstract: Education is an essential need for people to improve their abilities and standard of living. Besides formal education, knowledge can also be obtained through print media or books, and the library is one of the important facilities supporting this. Although very useful, library services can be difficult to use because the collection (books, journals, magazines, and so on) is so large that it is hard to find the desired item. Therefore, in addition to expanding its collection, a library must keep up with the times so that its services become easier to use. Petra Christian University currently operates a library with roughly 230,000 physical and digital items (based on 2014 data), whose catalogue of physical items and digital documents can be accessed through the library website. This very large collection makes the search process difficult for users. To extend the services offered, this research builds a library search engine application using the Apache SOLR platform and a PostgreSQL database. In addition, to further improve ease of access, the application is built for Android-based mobile devices. Besides testing the application itself, a questionnaire was distributed to 50 prospective users; the results show that the features built match user needs (78%). Keywords: SOLR, Search Engine, Library, PostgreSQL

  17. Software and Network Engineering

    CERN Document Server

    2012-01-01

    The series "Studies in Computational Intelligence" (SCI) publishes new developments and advances in the various areas of computational intelligence – quickly and with a high quality. The intent is to cover the theory, applications, and design methods of computational intelligence, as embedded in the fields of engineering, computer science, physics and life science, as well as the methodologies behind them. The series contains monographs, lecture notes and edited volumes in computational intelligence spanning the areas of neural networks, connectionist systems, genetic algorithms, evolutionary computation, artificial intelligence, cellular automata, self-organizing systems, soft computing, fuzzy systems, and hybrid intelligent systems. Critical to both contributors and readers are the short publication time and world-wide distribution - this permits a rapid and broad dissemination of research results.   The purpose of the first ACIS International Symposium on Software and Network Engineering held on Decembe...

  18. Internet-based intelligent information processing systems

    CERN Document Server

    Tonfoni, G; Ichalkaranje, N S

    2003-01-01

    The Internet/WWW has made it possible to easily access quantities of information never available before. However, both the amount of information and the variation in quality pose obstacles to the efficient use of the medium. Artificial intelligence techniques can be useful tools in this context. Intelligent systems can be applied to searching the Internet and data-mining, interpreting Internet-derived material, the human-Web interface, remote condition monitoring and many other areas. This volume presents the latest research on the interaction between intelligent systems (neural networks, adap

  19. Extraterrestrial Intelligence: What Would it Mean?

    Science.gov (United States)

    Impey, Chris

    2015-04-01

    Results from NASA's Kepler mission imply a hundred million Earth-like habitable worlds in the Milky Way galaxy, many of which formed billions of years before the Earth. Each of these worlds is likely to have all of the ingredients needed for biology. The real estate of time and space for the evolution of intelligent life is formidable, begging the question of whether or not we are alone in the universe. The implications of making contact have been explored extensively in science fiction and the popular culture, but less frequently in the serious scientific literature. Astronomers have carried out searches for extraterrestrial intelligence for over half a century, with no success so far. In practice, it is easier to search for alien technology than to discern intelligence of unknown function and form. In this talk, the modes of technology that can currently be detected are summarized, along with the implications of a timing argument than any detected civilization is likely to be much more advanced than ours. Fermi's famous question ``Where Are They?'' is as well posed now as it was sixty years ago. The existence of extraterrestrial intelligence would have profound practical, cultural, and religious implications for humanity.

  20. An end user evaluation of query formulation and results review tools in three medical meta-search engines.

    Science.gov (United States)

    Leroy, Gondy; Xu, Jennifer; Chung, Wingyan; Eggers, Shauna; Chen, Hsinchun

    2007-01-01

    Retrieving sufficient relevant information online is difficult for many people because they use too few keywords to search and search engines do not provide many support tools. To further complicate the search, users often ignore support tools when available. Our goal is to evaluate in a realistic setting when users use support tools and how they perceive these tools. We compared three medical search engines with support tools that require more or less effort from users to form a query and evaluate results. We carried out an end user study with 23 users who were asked to find information, i.e., subtopics and supporting abstracts, for a given theme. We used a balanced within-subjects design and report on the effectiveness, efficiency and usability of the support tools from the end user perspective. We found significant differences in efficiency but did not find significant differences in effectiveness between the three search engines. Dynamic user support tools requiring less effort led to higher efficiency. Fewer searches were needed and more documents were found per search when both query reformulation and result review tools dynamically adjust to the user query. The query reformulation tool that provided a long list of keywords, dynamically adjusted to the user query, was used most often and led to more subtopics. As hypothesized, the dynamic result review tools were used more often and led to more subtopics than static ones. These results were corroborated by the usability questionnaires, which showed that support tools that dynamically optimize output were preferred.

  1. International Conference on Frontiers of Intelligent Computing : Theory and Applications

    CERN Document Server

    Bhateja, Vikrant; Udgata, Siba; Pattnaik, Prasant

    2017-01-01

    The book is a collection of high-quality peer-reviewed research papers presented at International Conference on Frontiers of Intelligent Computing: Theory and applications (FICTA 2016) held at School of Computer Engineering, KIIT University, Bhubaneswar, India during 16 – 17 September 2016. The book presents theories, methodologies, new ideas, experiences and applications in all areas of intelligent computing and its applications to various engineering disciplines like computer science, electronics, electrical and mechanical engineering.

  2. Application of computational intelligence to biology

    CERN Document Server

    Sekhar, Akula

    2016-01-01

    This book is a contribution of translational and allied research to the proceedings of the International Conference on Computational Intelligence and Soft Computing. It explains how various computational intelligence techniques can be applied to investigate various biological problems. It is a good read for Research Scholars, Engineers, Medical Doctors and Bioinformatics researchers.

  3. Adaptive Engineering of an Embedded System, Engineered for use by Search and Rescue Canines

    Directory of Open Access Journals (Sweden)

    Cristina Ribeiro

    2011-06-01

    Full Text Available In Urban Search and Rescue (US&R) operations, canine teams are deployed to find live patients and save lives. US&R may benefit from increased levels of situational awareness, through information made available by embedded systems attached to the dogs. One of these is the Canine Pose Estimation (CPE) system. There are many challenges faced with such embedded systems, including the engineering of such devices for use in disaster environments. Durability and wireless connectivity in areas with materials that inhibit wireless communications, the safety of the dog wearing the devices, and form factor must be accommodated. All of these factors must be weighed without compromising the accuracy of the application and the timely delivery of its data. This paper discusses the adaptive engineering process and how each of the unique challenges of emergency response embedded systems can be defined and overcome through effective design methods.

  4. GIGGLE: a search engine for large-scale integrated genome analysis

    Science.gov (United States)

    Layer, Ryan M; Pedersen, Brent S; DiSera, Tonya; Marth, Gabor T; Gertz, Jason; Quinlan, Aaron R

    2018-01-01

    GIGGLE is a genomics search engine that identifies and ranks the significance of genomic loci shared between query features and thousands of genome interval files. GIGGLE (https://github.com/ryanlayer/giggle) scales to billions of intervals and is over three orders of magnitude faster than existing methods. Its speed extends the accessibility and utility of resources such as ENCODE, Roadmap Epigenomics, and GTEx by facilitating data integration and hypothesis generation. PMID:29309061

  5. Needle Custom Search: Recall-oriented search on the Web using semantic annotations

    NARCIS (Netherlands)

    Kaptein, Rianne; Koot, Gijs; Huis in 't Veld, Mirjam A.A.; van den Broek, Egon; de Rijke, Maarten; Kenter, Tom; de Vries, A.P.; Zhai, Chen Xiang; de Jong, Franciska M.G.; Radinsky, Kira; Hofmann, Katja

    Web search engines are optimized for early precision, which makes it difficult to perform recall-oriented tasks using these search engines. In this article, we present our tool Needle Custom Search. This tool exploits semantic annotations of Web search results and, thereby, increase the efficiency

  6. Needle Custom Search : Recall-oriented search on the web using semantic annotations

    NARCIS (Netherlands)

    Kaptein, Rianne; Koot, Gijs; Huis in 't Veld, Mirjam A.A.; van den Broek, Egon L.

    2014-01-01

    Web search engines are optimized for early precision, which makes it difficult to perform recall-oriented tasks using these search engines. In this article, we present our tool Needle Custom Search. This tool exploits semantic annotations of Web search results and, thereby, increase the efficiency

  7. Artificial Intelligence (AI) Studies in Water Resources

    OpenAIRE

    Ay, Murat; Özyıldırım, Serhat

    2018-01-01

    Artificial intelligence has been extensively used in many areas such as computer science, robotics, engineering, medicine, translation, economics, business, and psychology. Various studies in the literature show that the artificial intelligence in modeling approaches give close results to the real data for solution of linear, non-linear, and other systems. In this study, we reviewed the current state-of-the-art and progress on the modelling of artificial intelligence for water variables: rainfall-...

  8. AN OPPORTUNISTIC SEARCH FOR EXTRATERRESTRIAL INTELLIGENCE (SETI) WITH THE MURCHISON WIDEFIELD ARRAY

    Energy Technology Data Exchange (ETDEWEB)

    Tingay, S. J.; Tremblay, C.; Walsh, A.; Urquhart, R. [International Centre for Radio Astronomy Research (ICRAR), Curtin University, Bentley, WA 6102 (Australia)

    2016-08-20

    A spectral line image cube generated from 115 minutes of MWA data that covers a field of view of 400 sq. deg. around the Galactic Center is used to perform the first Search for ExtraTerrestrial Intelligence (SETI) with the Murchison Widefield Array (MWA). Our work constitutes the first modern SETI experiment at low radio frequencies, here between 103 and 133 MHz, paving the way for large-scale searches with the MWA and, in the future, the low-frequency Square Kilometre Array. Limits of a few hundred mJy beam⁻¹ for narrowband emission (10 kHz) are derived from our data, across our 400 sq. deg. field of view. Within this field, 45 exoplanets in 38 planetary systems are known. We extract spectra at the locations of these systems from our image cube to place limits on the presence of narrow line emission from these systems. We then derive minimum isotropic transmitter powers for these exoplanets; a small handful of the closest objects (10s of pc) yield our best limits of order 10¹⁴ W (Equivalent Isotropic Radiated Power). These limits lie above the highest power directional transmitters near these frequencies currently operational on Earth. A SETI experiment with the MWA covering the full accessible sky and its full frequency range would require approximately one month of observing time. The MWA frequency range, its southern hemisphere location on an extraordinarily radio quiet site, its very large field of view, and its high sensitivity make it a unique facility for SETI.
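
    The minimum isotropic transmitter power behind these limits follows from EIRP_min = 4πd²·S_min·Δν for a narrowband signal. The sketch below plugs in representative values taken from the abstract (a few hundred mJy beam⁻¹ limit, 10 kHz channels, a source a few tens of parsecs away), not the per-source numbers from the paper.

```python
# EIRP limit from a narrowband flux-density limit: EIRP_min = 4*pi*d^2 * S_min * delta_nu.
import math

JY = 1e-26          # W m^-2 Hz^-1
PC = 3.086e16       # metres

def min_eirp(flux_limit_mjy, channel_hz, distance_pc):
    s_min = flux_limit_mjy * 1e-3 * JY
    d = distance_pc * PC
    return 4 * math.pi * d**2 * s_min * channel_hz   # watts

print(f"{min_eirp(300, 10e3, 20):.2e} W")   # ~1e14 W for a source a few tens of pc away
```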

  9. Towards intelligent diagnostic system employing integration of mathematical and engineering model

    International Nuclear Information System (INIS)

    Isa, Nor Ashidi Mat

    2015-01-01

    The development of medical diagnostic systems has been one of the main research fields over the years. The goal of the medical diagnostic system is to put in place a nosological system that could ease the diagnostic evaluation normally performed by scientists and doctors. Efficient diagnostic evaluation is essential and requires broad knowledge in order to improve conventional diagnostic systems. Several approaches to developing medical diagnostic systems have been designed and tested since the early 60s. Attempts at improving their performance have been made which utilize the fields of artificial intelligence, statistical analysis, mathematical models and engineering theories. With the availability of the microcomputer and software development as well as the promising aforementioned fields, medical diagnostic prototypes could be developed. In general, a medical diagnostic system consists of several stages, namely the 1) data acquisition, 2) feature extraction, 3) feature selection, and 4) classification stages. The data acquisition stage plays an important role in converting the inputs measured from real-world physical conditions to the digital numeric values that can be manipulated by the computer system. Common medical inputs include medical microscopic images, radiographic images, magnetic resonance images (MRI) as well as medical signals such as the electrocardiogram (ECG) and electroencephalogram (EEG). Normally, the scientists or doctors have to deal with a myriad of data, much of it redundant, to be processed. In order to reduce the complexity of the diagnosis process, only the significant features of the raw data, such as the peak value of the ECG signal or the size of a lesion in mammogram images, will be extracted and considered in the subsequent stages. Mathematical models and statistical analyses will be performed to select the most significant features to be classified. Statistical analyses such as principal component analysis and discriminant analysis as well
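
    The feature-selection-then-classification stages described above can be sketched in a few lines. The example below uses synthetic "extracted features" rather than real ECG or mammogram data: principal component analysis compresses the feature vector and a linear discriminant classifier makes the diagnostic call. The feature counts, component count and labels are illustrative assumptions.

```python
# PCA feature reduction followed by linear discriminant classification (toy data).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)
n_cases = 200
features = rng.normal(size=(n_cases, 30))               # e.g. peak values, lesion sizes, ...
labels = (features[:, :3].sum(axis=1) > 0).astype(int)  # toy "diagnosis" driven by 3 features

model = make_pipeline(PCA(n_components=5), LinearDiscriminantAnalysis())
accuracy = cross_val_score(model, features, labels, cv=5).mean()
print(f"cross-validated accuracy: {accuracy:.2f}")
```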

  10. Towards intelligent diagnostic system employing integration of mathematical and engineering model

    Science.gov (United States)

    Isa, Nor Ashidi Mat

    2015-05-01

    The development of medical diagnostic systems has been one of the main research fields over the years. The goal of the medical diagnostic system is to put in place a nosological system that could ease the diagnostic evaluation normally performed by scientists and doctors. Efficient diagnostic evaluation is essential and requires broad knowledge in order to improve conventional diagnostic systems. Several approaches to developing medical diagnostic systems have been designed and tested since the early 60s. Attempts at improving their performance have been made which utilize the fields of artificial intelligence, statistical analysis, mathematical models and engineering theories. With the availability of the microcomputer and software development as well as the promising aforementioned fields, medical diagnostic prototypes could be developed. In general, a medical diagnostic system consists of several stages, namely the 1) data acquisition, 2) feature extraction, 3) feature selection, and 4) classification stages. The data acquisition stage plays an important role in converting the inputs measured from real-world physical conditions to the digital numeric values that can be manipulated by the computer system. Common medical inputs include medical microscopic images, radiographic images, magnetic resonance images (MRI) as well as medical signals such as the electrocardiogram (ECG) and electroencephalogram (EEG). Normally, the scientists or doctors have to deal with a myriad of data, much of it redundant, to be processed. In order to reduce the complexity of the diagnosis process, only the significant features of the raw data, such as the peak value of the ECG signal or the size of a lesion in mammogram images, will be extracted and considered in the subsequent stages. Mathematical models and statistical analyses will be performed to select the most significant features to be classified. Statistical analyses such as principal component analysis and discriminant analysis as well

  11. Towards intelligent diagnostic system employing integration of mathematical and engineering model

    Energy Technology Data Exchange (ETDEWEB)

    Isa, Nor Ashidi Mat [Imaging and Intelligent System Research Team (ISRT), School of Electrical and Electronic Engineering, Engineering Campus, Universiti Sains Malaysia, 14300 Nibong Tebal, Penang (Malaysia)

    2015-05-15

    The development of medical diagnostic systems has been one of the main research fields over the years. The goal of the medical diagnostic system is to put in place a nosological system that could ease the diagnostic evaluation normally performed by scientists and doctors. Efficient diagnostic evaluation is essential and requires broad knowledge in order to improve conventional diagnostic systems. Several approaches to developing medical diagnostic systems have been designed and tested since the early 60s. Attempts at improving their performance have been made which utilize the fields of artificial intelligence, statistical analysis, mathematical models and engineering theories. With the availability of the microcomputer and software development as well as the promising aforementioned fields, medical diagnostic prototypes could be developed. In general, a medical diagnostic system consists of several stages, namely the 1) data acquisition, 2) feature extraction, 3) feature selection, and 4) classification stages. The data acquisition stage plays an important role in converting the inputs measured from real-world physical conditions to the digital numeric values that can be manipulated by the computer system. Common medical inputs include medical microscopic images, radiographic images, magnetic resonance images (MRI) as well as medical signals such as the electrocardiogram (ECG) and electroencephalogram (EEG). Normally, the scientists or doctors have to deal with a myriad of data, much of it redundant, to be processed. In order to reduce the complexity of the diagnosis process, only the significant features of the raw data, such as the peak value of the ECG signal or the size of a lesion in mammogram images, will be extracted and considered in the subsequent stages. Mathematical models and statistical analyses will be performed to select the most significant features to be classified. Statistical analyses such as principal component analysis and discriminant analysis as well

  12. Using the open Web as an information resource and scholarly Web search engines as retrieval tools for academic and research purposes

    Directory of Open Access Journals (Sweden)

    Filistea Naude

    2010-08-01

    Full Text Available This study provided insight into the significance of the open Web as an information resource and Web search engines as research tools amongst academics. The academic staff establishment of the University of South Africa (Unisa was invited to participate in a questionnaire survey and included 1188 staff members from five colleges. This study culminated in a PhD dissertation in 2008. One hundred and eighty seven respondents participated in the survey which gave a response rate of 15.7%. The results of this study show that academics have indeed accepted the open Web as a useful information resource and Web search engines as retrieval tools when seeking information for academic and research work. The majority of respondents used the open Web and Web search engines on a daily or weekly basis to source academic and research information. The main obstacles presented by using the open Web and Web search engines included lack of time to search and browse the Web, information overload, poor network speed and the slow downloading speed of webpages.

  13. Using the open Web as an information resource and scholarly Web search engines as retrieval tools for academic and research purposes

    Directory of Open Access Journals (Sweden)

    Filistea Naude

    2010-12-01

    Full Text Available This study provided insight into the significance of the open Web as an information resource and Web search engines as research tools amongst academics. The academic staff establishment of the University of South Africa (Unisa was invited to participate in a questionnaire survey and included 1188 staff members from five colleges. This study culminated in a PhD dissertation in 2008. One hundred and eighty seven respondents participated in the survey which gave a response rate of 15.7%. The results of this study show that academics have indeed accepted the open Web as a useful information resource and Web search engines as retrieval tools when seeking information for academic and research work. The majority of respondents used the open Web and Web search engines on a daily or weekly basis to source academic and research information. The main obstacles presented by using the open Web and Web search engines included lack of time to search and browse the Web, information overload, poor network speed and the slow downloading speed of webpages.

  14. Application of heuristic and machine-learning approach to engine model calibration

    Science.gov (United States)

    Cheng, Jie; Ryu, Kwang R.; Newman, C. E.; Davis, George C.

    1993-03-01

    Automation of engine model calibration procedures is a very challenging task because (1) the calibration process searches for a goal state in a huge, continuous state space, (2) calibration is often a lengthy and frustrating task because of complicated mutual interference among the target parameters, and (3) the calibration problem is heuristic by nature, and often heuristic knowledge for constraining a search cannot be easily acquired from domain experts. A combined heuristic and machine learning approach has, therefore, been adopted to improve the efficiency of model calibration. We developed an intelligent calibration program called ICALIB. It has been used on a daily basis for engine model applications, and has reduced the time required for model calibrations from many hours to a few minutes on average. In this paper, we describe the heuristic control strategies employed in ICALIB such as a hill-climbing search based on a state distance estimation function, incremental problem solution refinement by using a dynamic tolerance window, and calibration target parameter ordering for guiding the search. In addition, we present the application of a machine learning program called GID3* for automatic acquisition of heuristic rules for ordering target parameters.
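
    A hill-climbing calibration of the kind described can be sketched generically. The code below is illustrative and is not ICALIB: it adjusts ordered target parameters to reduce a state-distance between model outputs and measured targets, and tightens the acceptance tolerance window as the fit improves. The stand-in model, parameter names, targets and step sizes are all assumptions.

```python
# Hill-climbing calibration sketch with a state-distance function, heuristic
# parameter ordering, and a shrinking tolerance window (illustrative only).
import random

def model(params):
    """Stand-in for the engine model: maps parameters to predicted outputs."""
    a, b, c = params["a"], params["b"], params["c"]
    return [a + 0.5 * b, b * c, a - c]

TARGETS = [2.0, 1.5, 0.2]

def state_distance(params):
    return sum((out - tgt) ** 2 for out, tgt in zip(model(params), TARGETS))

def calibrate(params, order=("a", "b", "c"), tolerance=1.0, shrink=0.7, iters=200):
    best = state_distance(params)
    step = 0.5
    for _ in range(iters):
        improved = False
        for name in order:                    # calibrate parameters in heuristic order
            trial = dict(params)
            trial[name] += random.uniform(-step, step)
            d = state_distance(trial)
            if d < best:
                params, best, improved = trial, d, True
        if best < tolerance:                  # goal reached within the current window...
            tolerance *= shrink               # ...so tighten the window and refine
            step *= shrink
        if not improved:
            step *= 0.9                       # shrink the neighbourhood when stuck
    return params, best

print(calibrate({"a": 0.0, "b": 0.0, "c": 0.0}))
```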

  15. How Will Online Affiliate Marketing Networks Impact Search Engine Rankings?

    OpenAIRE

    Janssen, David; Heck, Eric

    2007-01-01

    In online affiliate marketing networks advertising web sites offer their affiliates revenues based on provided web site traffic and associated leads and sales. Advertising web sites can have a network of thousands of affiliates providing them with web site traffic through hyperlinks on their web sites. Search engines such as Google, MSN, and Yahoo, consider hyperlinks as a proof of quality and/or reliability of the linked web sites, and therefore use them to determine the relevanc...

  16. Development of core thermal-hydraulics module for intelligent reactor design system (IRDS)

    International Nuclear Information System (INIS)

    Kugo, Teruhiko; Nakagawa, Masayuki; Fujii, Sadao.

    1994-08-01

    We have developed an innovative reactor core thermal-hydraulics module with which a designer can easily and efficiently evaluate the thermal-hydraulic aspects of a design concept for a new type of reactor. The main purpose of this module is to decide a feasible range of basic design parameters of a reactor core in the conceptual design stage of a new type of reactor. The module is to be implemented in the Intelligent Reactor Design System (IRDS). The module has the following characteristics: 1) it deals with several reactor types, 2) four thermal-hydraulics and fuel behavior analysis codes are installed to treat different reactor types and levels of design detail, 3) it flexibly follows modification of a reactor concept, 4) it provides analysis results in an understandable way so that a designer can easily evaluate the feasibility of his concept, and so on. The module runs on an engineering workstation (EWS) and has a user-friendly man-machine interface for pre- and post-processing. It is also equipped with a function to search a feasible range, called the Design Window, for two design parameters by means of artificial intelligence (AI) techniques and knowledge engineering. In this report, the structure of the module, guidance for users on its usage, and instructions for the input data of the analysis codes are presented. (author)
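
    The Design Window idea, finding the feasible region for two design parameters, can be illustrated with a simple scan. The sketch below is not the IRDS module: the constraint formulas, parameter names and limits are stand-ins chosen only to show the pattern of keeping parameter pairs that satisfy every constraint.

```python
# "Design Window" sketch: grid-scan two design parameters and keep feasible pairs.
import numpy as np

def constraints_ok(coolant_flow, power_density):
    peak_clad_temp = 280 + 40 * power_density / coolant_flow     # toy correlations
    pressure_drop = 0.02 * coolant_flow ** 2
    return peak_clad_temp < 350 and pressure_drop < 0.5

flows = np.linspace(1.0, 5.0, 41)
powers = np.linspace(0.5, 3.0, 26)
window = [(f, p) for f in flows for p in powers if constraints_ok(f, p)]
print(f"{len(window)} feasible (flow, power) combinations; e.g. {window[0]}")
```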

  17. evaluating search effectiveness of some selected search engines

    African Journals Online (AJOL)

    Precision, relative recall and response time were considered for this ... a total of 24 search queries were sampled based on information queries, .... searching process and results, although there are other ... Q3.2 Software prototype model.

  18. Seasonal trends in hypertension in Poland: evidence from Google search engine query data.

    Science.gov (United States)

    Płatek, Anna E; Sierdziński, Janusz; Krzowski, Bartosz; Szymański, Filip M

    2018-01-01

    Various conditions, including arterial hypertension, exhibit seasonal trends in their occurrence and magnitude. Those trends correspond to an interest exhibited in the number of Internet searches for the specific conditions per month. The aim of the study was to show how seasonal trends in hypertension prevalence in Poland relate to data from the Google Trends tool. Internet search engine query data were retrieved from Google Trends from January 2008 to November 2017. Data were calculated as a monthly normalised search volume from the nine-year period. Data were presented for specific geographic regions, including Poland, the United States of America, Australia, and worldwide for the following search terms: "arterial hypertension (pol. nadciśnienie tętnicze)", "hypertension (pol. nadciśnienie)" and "hypertension medical condition". Seasonal effects were calculated using regression models and presented graphically. In Poland the search volume is highest between November and May, while interest in arterial hypertension is lowest during the summer holidays; this seasonal pattern is apparent in the Google search data.

  19. An application of artificial intelligence for rainfall–runoff modeling

    Indian Academy of Sciences (India)

    This study proposes an application of two techniques of artificial intelligence (AI) ... (2006) applied rainfall–runoff modeling using ANN ... in artificial intelligence, engineering and science .... usually be estimated from a sample of observations.

  20. 1st International Conference on Intelligent Computing and Communication

    CERN Document Server

    Satapathy, Suresh; Sanyal, Manas; Bhateja, Vikrant

    2017-01-01

    The book covers a wide range of topics in Computer Science and Information Technology including swarm intelligence, artificial intelligence, evolutionary algorithms, and bio-inspired algorithms. It is a collection of papers presented at the First International Conference on Intelligent Computing and Communication (ICIC2) 2016. The prime areas of the conference are Intelligent Computing, Intelligent Communication, Bio-informatics, Geo-informatics, Algorithm, Graphics and Image Processing, Graph Labeling, Web Security, Privacy and e-Commerce, Computational Geometry, Service Orient Architecture, and Data Engineering.