WorldWideScience

Sample records for web search tasks

  1. Supporting complex search tasks

    DEFF Research Database (Denmark)

    Gäde, Maria; Hall, Mark; Huurdeman, Hugo

    2015-01-01

    There is broad consensus in the field of IR that search is complex in many use cases and applications, both on the Web and in domain-specific collections, and both professionally and in our daily life. Yet our understanding of complex search tasks, in comparison to simple look-up tasks…, is fragmented at best. The workshop addressed the many open research questions: What are the obvious use cases and applications of complex search? What are the essential features of work tasks and search tasks to take into account? And how do these evolve over time? With a multitude of information, varying from… introductory to specialized, and from authoritative to speculative or opinionated, when to show which sources of information? How does the information seeking process evolve, and what are the relevant differences between its stages? With complex task and search process management, blending searching, browsing…

  2. Web Search Engines

    OpenAIRE

    Rajashekar, TB

    1998-01-01

    The World Wide Web is emerging as an all-in-one information source. Tools for searching Web-based information include search engines, subject directories and meta search tools. We take a look at key features of these tools and suggest practical hints for effective Web searching.

  3. Book Review: Web Search-Public Searching of the Web

    OpenAIRE

    Yazdan Mansourian

    2004-01-01

    The book consists of four sections: (1) the context of web search, (2) how people search the web, (3) subjects of web search, and (4) conclusion: trends and future directions. The first section includes three chapters providing a brief but informative introduction to the main elements involved in the web search process and in web search research, including search engine mechanisms, human-computer interaction in web searching, and research design in web search studies.

  4. Chemical Search Web Utility

    Data.gov (United States)

    U.S. Environmental Protection Agency — The Chemical Search Web Utility is an intuitive web application that allows the public to easily find the chemical that they are interested in using, and which...

  5. Distributed Deep Web Search

    NARCIS (Netherlands)

    Tjin-Kam-Jet, Kien

    2013-01-01

    The World Wide Web contains billions of documents (and counting); hence, it is likely that some document will contain the answer or content you are searching for. While major search engines like Bing and Google often manage to return relevant results to your query, there are plenty of situations in

  6. A Comparison of the Use of Text Summaries, Plain Thumbnails, and Enhanced Thumbnails for Web Search Tasks.

    Science.gov (United States)

    Woodruff, Allison; Rosenholtz, Ruth; Morrison, Julie B.; Faulring, Andrew; Pirolli, Peter

    2002-01-01

    Discussion of Web search strategies focuses on a comparative study of textual and graphical summarization mechanisms applied to search engine results. Suggests that thumbnail images (graphical summaries) can increase efficiency in processing results, and that enhanced thumbnails (augmented with readable textual elements) had more consistent…

  7. The Evolution of Web Searching.

    Science.gov (United States)

    Green, David

    2000-01-01

    Explores the interrelation between Web publishing and information retrieval technologies and lists new approaches to Web indexing and searching. Highlights include Web directories; search engines; portalisation; Internet service providers; browser providers; meta search engines; popularity based analysis; natural language searching; links-based…

  8. XML and Better Web Searching.

    Science.gov (United States)

    Jackson, Joe; Gilstrap, Donald L.

    1999-01-01

    Addresses the implications of the new Web metalanguage XML for searching on the World Wide Web and considers the future of XML on the Web. Compared to HTML, XML is more concerned with structure of data than documents, and these data structures should prove conducive to precise, context rich searching. (Author/LRW)

  9. Image Searching across the Web.

    Science.gov (United States)

    Pack, Thomas

    2002-01-01

    Discusses how to find digital images on the Web. Considers images and copyright; provides an overview of the search capabilities of six search engines, including AltaVista, Google, AllTheWeb.com, Ditto.com, Picsearch, and Lycos; and describes specialized image search engines. (LRW)

  10. Measuring Personalization of Web Search

    DEFF Research Database (Denmark)

    Hannak, Aniko; Sapiezynski, Piotr; Kakhki, Arash Molavi

    2013-01-01

    Web search is an integral part of our daily lives. Recently, there has been a trend of personalization in Web search, where different users receive different results for the same search query. The increasing personalization is leading to concerns about Filter Bubble effects, where certain users… are simply unable to access information that the search engines' algorithm decides is irrelevant. Despite these concerns, there has been little quantification of the extent of personalization in Web search today, or the user attributes that cause it. In light of this situation, we make three contributions… First, we develop a methodology for measuring personalization in Web search results. While conceptually simple, there are numerous details that our methodology must handle in order to accurately attribute differences in search results to personalization. Second, we apply our methodology to 200 users…
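
    A minimal sketch of how two result lists for the same query might be compared in such a measurement study; the metric choices (URL overlap and average rank shift) and the example lists are illustrative assumptions, not the paper's exact methodology.

    ```python
    # Minimal sketch: comparing two search result lists returned for the same query,
    # as a personalization measurement might. The metrics below (Jaccard overlap and
    # average rank shift) are illustrative choices, not the paper's exact method.

    def jaccard_overlap(results_a, results_b):
        """Fraction of URLs shared between two result lists."""
        a, b = set(results_a), set(results_b)
        return len(a & b) / len(a | b) if a | b else 1.0

    def avg_rank_shift(results_a, results_b):
        """Average absolute rank difference for URLs appearing in both lists."""
        pos_b = {url: i for i, url in enumerate(results_b)}
        shifts = [abs(i - pos_b[url]) for i, url in enumerate(results_a) if url in pos_b]
        return sum(shifts) / len(shifts) if shifts else 0.0

    user1 = ["example.com/a", "example.com/b", "example.com/c"]
    user2 = ["example.com/b", "example.com/a", "example.com/d"]
    print(jaccard_overlap(user1, user2))  # 0.5
    print(avg_rank_shift(user1, user2))   # 1.0
    ```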

  11. Web Search Studies: Multidisciplinary Perspectives on Web Search Engines

    Science.gov (United States)

    Zimmer, Michael

    Perhaps the most significant tool of our internet age is the web search engine, providing a powerful interface for accessing the vast amount of information available on the world wide web and beyond. While still in its infancy compared to the knowledge tools that precede it - such as the dictionary or encyclopedia - the impact of web search engines on society and culture has already received considerable attention from a variety of academic disciplines and perspectives. This article aims to organize a meta-discipline of “web search studies,” centered around a nucleus of major research on web search engines from five key perspectives: technical foundations and evaluations; transaction log analyses; user studies; political, ethical, and cultural critiques; and legal and policy analyses.

  12. Overview of the TREC 2014 Federated Web Search Track

    OpenAIRE

    Demeester, Thomas; Trieschnigg, Rudolf Berend; Nguyen, Dong-Phuong; Zhou, Ke; Hiemstra, Djoerd

    2014-01-01

    The TREC Federated Web Search track facilitates research in topics related to federated web search by providing a large, realistic data collection sampled from a multitude of online search engines. The FedWeb 2013 challenges of Resource Selection and Results Merging are again included in FedWeb 2014, and we additionally introduced the task of vertical selection. Other new aspects are the required link between Resource Selection and Results Merging, and the importance of diversi...

  13. Drexel at TREC 2014 Federated Web Search Track

    Science.gov (United States)

    2014-11-01

    …of its input RS results. 1. INTRODUCTION. Federated Web Search is the task of searching multiple search engines simultaneously and combining their… or distributed properly [5]. The goal of RS is then, for a given query, to select only the most promising search engines from all those available. Most… result pages of 149 search engines. 4000 queries are used in building the sample set. As a part of the Vertical Selection task, search engines are…

  14. Semantic Search of Web Services

    Science.gov (United States)

    Hao, Ke

    2013-01-01

    This dissertation addresses semantic search of Web services using natural language processing. We first survey various existing approaches, focusing on the fact that the expensive costs of current semantic annotation frameworks result in limited use of semantic search for large scale applications. We then propose a vector space model based service…

  15. Click Models for Web Search

    NARCIS (Netherlands)

    Chuklin, A.; Markov, I.; de Rijke, M.

    2015-01-01

    With the rapid growth of web search in recent years, the problem of modeling its users has started to attract more and more attention from the information retrieval community. This has several motivations. By building a model of user behavior we are essentially developing a better understanding of a

  16. Multitasking Web Searching and Implications for Design.

    Science.gov (United States)

    Ozmutlu, Seda; Ozmutlu, H. C.; Spink, Amanda

    2003-01-01

    Findings from a study of users' multitasking searches on Web search engines include: multitasking searches are a noticeable user behavior; multitasking search sessions are longer than regular search sessions in terms of queries per session and duration; both Excite and AlltheWeb.com users search for about three topics per multitasking session and…

  17. Recall Oriented Search on the Web using Semantic Annotations

    NARCIS (Netherlands)

    Kaptein, A.M.; Broek, E.L. van den; Koot, G.; Huis in't Veld, M.A.A.

    2013-01-01

    Web search engines are optimized for early precision, which makes it difficult to perform recall oriented tasks with them. In this article, we propose several ways to leverage semantic annotations and, thereby, increase the efficiency of recall oriented search tasks, with a focus on forensic

  18. Recall oriented search on the web using semantic annotations

    NARCIS (Netherlands)

    Kaptein, Rianne; van den Broek, Egon; Koot, Gijs; Huis in 't Veld, Mirjam A.A.; Bennett, P.N.; Gabrilovich, E.; Kamps, J.; Karlgren, J.

    2013-01-01

    Web search engines are optimized for early precision, which makes it difficult to perform recall oriented tasks with them. In this article, we propose several ways to leverage semantic annotations and, thereby, increase the efficiency of recall oriented search tasks, with a focus on forensic

  19. Resolving Person Names in Web People Search

    Science.gov (United States)

    Balog, Krisztian; Azzopardi, Leif; de Rijke, Maarten

    Disambiguating person names in a set of documents (such as a set of web pages returned in response to a person name) is a key task for the presentation of results and the automatic profiling of experts. With largely unstructured documents and an unknown number of people with the same name, the problem presents many difficulties and challenges. This chapter treats the task of person name disambiguation as a document clustering problem, where it is assumed that the documents represent particular people. This leads to the person cluster hypothesis, which states that similar documents tend to represent the same person. Single Pass Clustering, k-Means Clustering, Agglomerative Clustering and Probabilistic Latent Semantic Analysis are employed and empirically evaluated in this context. On the SemEval 2007 Web People Search task, it is shown that the person cluster hypothesis holds reasonably well and that the Single Pass Clustering and Agglomerative Clustering methods provide the best performance.
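
    Since the chapter casts name disambiguation as document clustering, a minimal sketch of single pass clustering over TF-IDF vectors follows; the similarity threshold, the use of scikit-learn, and the toy pages are assumptions for illustration, not the chapter's configuration.

    ```python
    # Minimal sketch of single pass clustering over TF-IDF document vectors, in the
    # spirit of treating person name disambiguation as clustering. The threshold and
    # the use of scikit-learn are assumptions for illustration only.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    def single_pass_cluster(docs, threshold=0.2):
        vectors = TfidfVectorizer(stop_words="english").fit_transform(docs)
        clusters = []  # each cluster is a list of document indices
        for i in range(vectors.shape[0]):
            best, best_sim = None, threshold
            for c, members in enumerate(clusters):
                # similarity to a cluster = max similarity to any member (one common choice)
                sim = max(cosine_similarity(vectors[i], vectors[m])[0, 0] for m in members)
                if sim >= best_sim:
                    best, best_sim = c, sim
            if best is None:
                clusters.append([i])
            else:
                clusters[best].append(i)
        return clusters

    pages = [
        "John Smith, professor of biology at the university",
        "John Smith publishes new research on cell biology",
        "John Smith scores winning goal for the football club",
    ]
    print(single_pass_cluster(pages))  # [[0, 1], [2]] with these toy pages
    ```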

  20. Analyzing web log files of the health on the net HONmedia search engine to define typical image search tasks for image retrieval evaluation.

    Science.gov (United States)

    Müller, Henning; Boyer, Célia; Gaudinat, Arnaud; Hersh, William; Geissbuhler, Antoine

    2007-01-01

    Medical institutions produce ever-increasing amounts of diverse information. The digital form makes these data available for use on more than a single patient. Images are no exception to this. However, less is known about how medical professionals search for visual medical information and how they want to use it outside of the context of a single patient. This article analyzes ten months of usage log files of the Health on the Net (HON) medical media search engine. Keywords were extracted from all queries, and the most frequent terms and subjects were identified. The dataset required considerable pre-processing. Problems included national character sets, spelling errors and the use of terms in several languages. The results show that media search, particularly for images, was frequently used. The most common queries were for general concepts (e.g., heart, lung). To define realistic information needs for the ImageCLEFmed challenge evaluation (Cross Language Evaluation Forum medical image retrieval), we used frequent queries that were still specific enough to cover at least two of the three axes of modality, anatomic region, and pathology. Several research groups evaluated their image retrieval algorithms based on these defined topics.
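
    A minimal sketch of the query-log term counting this kind of analysis relies on; the log file name, its one-query-per-line format, and the tokenization rule are assumptions for illustration.

    ```python
    # Minimal sketch: counting the most frequent terms in a media search query log.
    # The log format (one query per line) and file name are assumptions for illustration.
    import re
    from collections import Counter

    def top_query_terms(log_path, n=10):
        counts = Counter()
        with open(log_path, encoding="utf-8") as log:
            for line in log:
                # lowercase, keep letter sequences only (crude multilingual handling)
                counts.update(re.findall(r"[a-zà-öø-ÿ]+", line.lower()))
        return counts.most_common(n)

    # top_query_terms("honmedia_queries.log")  # e.g. [('heart', ...), ('lung', ...)]
    ```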

  1. Semantic search meets the Web

    OpenAIRE

    Fernández Sánchez, Miriam; López, Vanessa; Sabou, Marta; Uren, Victoria S; Vallet Weadon, David Jordi; Motta, Enrico; Castells, Pablo

    2008-01-01

    M. Fernández, V. López, M. Sabou, V. S. Uren, D. Vallet, E. Motta, and P. Castells, "Semantic search meets the Web", 2008 IE...

  2. A Survey on Semantic Web Search Engine

    OpenAIRE

    G.Sudeepthi; Anuradha, G.; M.Surendra Prasad Babu

    2012-01-01

    With the tremendous growth in the volume of data and the number of web pages, traditional search engines are nowadays no longer appropriate or suitable. The search engine is the most important tool for discovering any information on the World Wide Web. The semantic search engine was born of the traditional search engine to overcome the above problem. The Semantic Web is an extension of the current web in which information is given well-defined meaning. Semantic web technologies are pla...

  3. Process-oriented semantic web search

    CERN Document Server

    Tran, DT

    2011-01-01

    The book is composed of two main parts. The first part is a general study of Semantic Web Search. The second part specifically focuses on the use of semantics throughout the search process, compiling a big picture of Process-oriented Semantic Web Search from different pieces of work that target specific aspects of the process. In particular, this book provides a rigorous account of the concepts and technologies proposed for searching resources and semantic data on the Semantic Web. To collate the various approaches and to better understand what the notion of Semantic Web Search entails, this bo

  4. A neural click model for web search

    NARCIS (Netherlands)

    Borisov, A.; Markov, I.; de Rijke, M.; Serdyukov, P.

    2016-01-01

    Understanding user browsing behavior in web search is key to improving web search effectiveness. Many click models have been proposed to explain or predict user clicks on search engine results. They are based on the probabilistic graphical model (PGM) framework, in which user behavior is represented

  5. WebMARS: a multimedia search engine

    Science.gov (United States)

    Ortega-Binderberger, Michael; Mehrotra, Sharad; Chakrabarti, Kaushik; Porkaew, Kriengkrai

    1999-12-01

    The Web provides a large repository of multimedia data: text, images, etc. Most current search engines focus on textual retrieval. In this paper, we focus on using an integrated textual and visual search engine for Web documents. We support query refinement, which proves useful and enables cross-media browsing in addition to regular search.

  6. Example Based Entity Search in the Web of Data

    NARCIS (Netherlands)

    Bron, M.; Balog, K.; de Rijke, M.

    2013-01-01

    The scale of today's Web of Data motivates the use of keyword search-based approaches to entity-oriented search tasks in addition to traditional structure-based approaches, which require users to have knowledge of the underlying schema. We propose an alternative structure-based approach that makes

  7. Do two heads search better than one? Effects of student collaboration on web search behavior and search outcomes.

    NARCIS (Netherlands)

    Lazonder, Adrianus W.

    2005-01-01

    This study compared pairs of students with single students in web search tasks. The underlying hypothesis was that peer-to-peer collaboration encourages students to articulate their thoughts, which in turn has a facilitative effect on the regulation of the search process as well as on search outcomes.

  8. Searching a database based web site

    OpenAIRE

    Filipe Silva; Gabriel David

    2003-01-01

    Currently, information systems are usually supported by databases (DB) and accessed through a Web interface. Pages in such Web sites are not drawn from HTML files but are generated on the fly upon request. Indexing and searching such dynamic pages raises several extra difficulties not solved by most search engines, which were designed for static contents. In this paper we describe the development of a search engine that overcomes most of the problems for a specific Web site, how the limitatio...

  9. Sexual information seeking on web search engines.

    Science.gov (United States)

    Spink, Amanda; Koricich, Andrew; Jansen, B J; Cole, Charles

    2004-02-01

    Sexual information seeking is an important element within human information behavior. Seeking sexually related information on the Internet takes many forms and channels, including chat room discussions, accessing Websites or searching Web search engines for sexual materials. The study of sexual Web queries provides insight into sexually related information-seeking behavior, of value to Web users and providers alike. We qualitatively analyzed queries from logs of 1,025,910 Alta Vista and AlltheWeb.com Web user queries from 2001. We compared the differences in sexually related Web searching between Alta Vista and AlltheWeb.com users. Differences were found in session duration, query outcomes, and search term choices. Implications of the findings for sexual information seeking are discussed.

  10. Where Is It? How Deaf Adolescents Complete Fact-Based Internet Search Tasks

    Science.gov (United States)

    Smith, Chad E.

    2007-01-01

    An exploratory study was designed to describe Internet search behaviors of deaf adolescents who used Internet search engines to complete fact-based search tasks. The study examined search behaviors of deaf high school students such as query formation, query modification, Web site identification, and Web site selection. Consisting of two fact-based…

  11. Asymptotic analysis for personalized web search

    NARCIS (Netherlands)

    Volkovich, Y.; Litvak, Nelli

    2010-01-01

    PageRank with personalization is used in Web search as an importance measure for Web documents. The goal of this paper is to characterize the tail behavior of the PageRank distribution in the Web and other complex networks characterized by power laws. To this end, we model the PageRank as a solution
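
    For reference, the standard personalized PageRank formulation that this line of analysis builds on can be written as follows; this is the textbook statement, not the paper's specific stochastic model of the tail behavior.

    ```latex
    % Personalized PageRank (standard formulation): \pi is the PageRank vector,
    % P the hyperlink transition matrix, v the personalization (preference) vector,
    % and c in (0,1) the damping factor.
    \[
      \pi = c\,\pi P + (1 - c)\,v ,
      \qquad \pi_i \ge 0, \quad \sum_i \pi_i = 1 .
    \]
    ```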

  12. Research Proposal for Distributed Deep Web Search

    NARCIS (Netherlands)

    Tjin-Kam-Jet, Kien

    2010-01-01

    This proposal identifies two main problems related to deep web search, and proposes a step by step solution for each of them. The first problem is about searching deep web content by means of a simple free-text interface (with just one input field, instead of a complex interface with many input

  13. Deep web search: an overview and roadmap

    NARCIS (Netherlands)

    Tjin-Kam-Jet, Kien; Trieschnigg, Rudolf Berend; Hiemstra, Djoerd

    2011-01-01

    We review the state-of-the-art in deep web search and propose a novel classification scheme to better compare deep web search systems. The current binary classification (surfacing versus virtual integration) hides a number of implicit decisions that must be made by a developer. We make these

  14. Image Searching on the Excite Web Search Engine.

    Science.gov (United States)

    Goodrum, Abby; Spink, Amanda

    2001-01-01

    Examines visual information needs as expressed in users' Web image queries on the Excite search engine. Discusses metadata; content-based image retrieval; user interaction with images; terms per query; term frequency; and implications for the development of models for visual information retrieval and for the design of Web search engines.…

  15. An intelligent method for geographic Web search

    Science.gov (United States)

    Mei, Kun; Yuan, Ying

    2008-10-01

    While the electronically available information on the World Wide Web is growing explosively, the difficulty of finding relevant information is also increasing for search engine users. In this paper we discuss how to constrain web queries geographically. A number of search queries are associated with geographical locations, either explicitly or implicitly. Accurately and effectively detecting the locations that search queries are truly about has a huge potential impact on increasing search relevance, bringing better targeted search results, and improving search user satisfaction. Our approach focuses both on the way geographic information is extracted from the web and, as far as we can tell, on the way it is integrated into query processing. This paper gives an overview of a spatially aware search engine for semantic querying of web documents. It also illustrates algorithms for extracting locations from web documents and query requests, using location ontologies to encode and reason about the formal semantics of geographic web search. Based on a real-world scenario of tourism guide search, the application of our approach shows that geographic information retrieval can be efficiently supported.

  16. Psychophysics in a Web browser? Comparing response times collected with JavaScript and Psychophysics Toolbox in a visual search task.

    Science.gov (United States)

    de Leeuw, Joshua R; Motz, Benjamin A

    2016-03-01

    Behavioral researchers are increasingly using Web-based software such as JavaScript to conduct response time experiments. Although there has been some research on the accuracy and reliability of response time measurements collected using JavaScript, it remains unclear how well this method performs relative to standard laboratory software in psychologically relevant experimental manipulations. Here we present results from a visual search experiment in which we measured response time distributions with both Psychophysics Toolbox (PTB) and JavaScript. We developed a methodology that allowed us to simultaneously run the visual search experiment with both systems, interleaving trials between two independent computers, thus minimizing the effects of factors other than the experimental software. The response times measured by JavaScript were approximately 25 ms longer than those measured by PTB. However, we found no reliable difference in the variability of the distributions related to the software, and both software packages were equally sensitive to changes in the response times as a result of the experimental manipulations. We concluded that JavaScript is a suitable tool for measuring response times in behavioral research.

  17. Task search in a human computation market

    OpenAIRE

    Chilton, Lydia B.; Miller, Robert C.; Horton, John J.; Azenkot, Shiri

    2010-01-01

    In order to understand how a labor market for human computation functions, it is important to know how workers search for tasks. This paper uses two complementary methods to gain insight into how workers search for tasks on Mechanical Turk. First, we perform a high frequency scrape of 36 pages of search results and analyze it by looking at the rate of disappearance of tasks across key ways Mechanical Turk allows workers to sort tasks. Second, we present the results of a survey in which we pai...

  18. Collaborative Web Search Who, What, Where, When, and Why

    CERN Document Server

    Morris, Meredith Ringel

    2009-01-01

    Today, Web search is treated as a solitary experience. Web browsers and search engines are typically designed to support a single user, working alone. However, collaboration on information-seeking tasks is actually commonplace. Students work together to complete homework assignments, friends seek information about joint entertainment opportunities, family members jointly plan vacation travel, and colleagues jointly conduct research for their projects. As improved networking technologies and the rise of social media simplify the process of remote collaboration, and large, novel display form-fac

  19. A grammar checker based on web searching

    Directory of Open Access Journals (Sweden)

    Joaquim Moré

    2006-05-01

    This paper presents an English grammar and style checker for non-native English speakers. The main characteristic of this checker is the use of an Internet search engine. As the number of web pages written in English is immense, the system hypothesises that a piece of text not found on the Web is probably badly written. The system also hypothesises that the Web will provide examples of how the content of the text segment can be expressed in a grammatically correct and idiomatic way. Thus, when the checker warns the user about the odd nature of a text segment, the Internet engine searches for contexts that can help the user decide whether or not to correct the segment. By means of the search engine, the checker also suggests other expressions that appear on the Web more often than the expression the user actually wrote.
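
    A minimal sketch of the core heuristic (flag a text segment whose exact-phrase web hit count is low); web_hit_count is a hypothetical placeholder for whatever search engine API the checker would call, and the threshold is an arbitrary illustrative value.

    ```python
    # Minimal sketch of the "not found on the Web => probably badly written" heuristic.
    # web_hit_count() is a hypothetical placeholder for a real search engine API call;
    # the threshold below is an arbitrary illustrative value.

    def web_hit_count(phrase: str) -> int:
        """Hypothetical: return the number of web pages containing the exact phrase."""
        raise NotImplementedError("plug in a real search engine API here")

    def flag_suspect_segments(segments, threshold=5):
        """Return segments whose exact-phrase hit count falls below the threshold."""
        return [s for s in segments if web_hit_count(f'"{s}"') < threshold]

    # flag_suspect_segments(["depend of the weather", "depend on the weather"])
    # would likely flag only the first, non-idiomatic segment.
    ```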

  20. The Use of Web Search Engines in Information Science Research.

    Science.gov (United States)

    Bar-Ilan, Judit

    2004-01-01

    Reviews the literature on the use of Web search engines in information science research, including: ways users interact with Web search engines; social aspects of searching; structure and dynamic nature of the Web; link analysis; other bibliometric applications; characterizing information on the Web; search engine evaluation and improvement; and…

  1. Developing a new search engine and browser for libraries to search and organize the World Wide Web library resources

    OpenAIRE

    Sreenivasulu, V.

    2000-01-01

    Internet Granthalaya urges worldwide advocates and targets the task of creating a new search engine and dedicated browser. Internet Granthalaya may be the ultimate search engine exclusively dedicated to every library's use, to search and organize the World Wide Web library resources

  2. Adding a visualization feature to web search engines: it's time.

    Science.gov (United States)

    Wong, Pak Chung

    2008-01-01

    It's widely recognized that all Web search engines today are almost identical in presentation layout and behavior. In fact, the same presentation approach has been applied to depicting search engine results pages (SERPs) since the first Web search engine launched in 1993. In this Visualization Viewpoints article, I propose to add a visualization feature to Web search engines and suggest that the new addition can improve search engines' performance and capabilities, which in turn lead to better Web search technology.

  3. A Survey On Meta Search Engine in Semantic Web

    OpenAIRE

    Prof. M.Surendra Prasad Babu; G.Sudeepthi

    2011-01-01

    Search engines play an important role in the success of the Web; they help any Internet user rapidly find relevant information. But the unsolved problems of current search engines have led to the development of the Semantic Web. In the Semantic Web environment, search engines are more useful and efficient in searching for relevant web information, and our work shows how the fundamental elements of the meta search engine can be used in retrieving the information resou...

  4. Statistical search on the Semantic Web.

    Science.gov (United States)

    Kobayashi, Norio; Toyoda, Tetsuro

    2008-04-01

    Statistical analysis of links on the Semantic Web is important for various evaluation purposes such as quantifying an individual's scientific research output based on citation links. SPARQL has been proposed as a standardized query language for the Semantic Web and is intuitively understandable; however, it does not adequately support statistical evaluation of semantic links. We have extended SPARQL to a novel Resource Description Framework (RDF) query language termed General and Rapid Association Study Query Language (GRASQL) to generate inferences connecting semantic Boolean-based deduction and statistical evaluation of RDF resources. We have verified the descriptive capability of GRASQL by writing GRASQL queries for practical biomedical search patterns including in silico positional cloning studies and for ranking researchers in a specific domain of expertise by introducing k index, the number of papers containing specific keywords that are published in a fixed period by a researcher. We have also developed a search engine termed General and Rapid Association Study Engine (GRASE), which executes a restricted variety of GRASQL queries by requesting a dynamic and comprehensive evaluation of statistical significance of intersections between each group of documents assigned to URIs and those documents matching user-specified keywords and omics conditions. By performing practical in silico positional cloning searches with GRASE, we show the relevance of our approach on the Semantic Web for biomedical knowledge discovery problem solving. GRASE is used as the search engine for the Positional Medline (PosMed) service and Researcher Finder service at http://omicspace.riken.jp/.
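
    The statistical evaluation layered on top of Boolean semantic queries amounts to testing whether the overlap between two document sets is larger than chance. A minimal sketch of such a test via a hypergeometric tail probability follows; the set sizes are toy numbers, and this is not the GRASE implementation.

    ```python
    # Minimal sketch: testing whether the intersection of two document sets
    # (e.g., documents annotated with a URI vs. documents matching a keyword)
    # is larger than expected by chance, via a hypergeometric tail probability.
    # Toy numbers; this is not the GRASE implementation.
    from math import comb

    def intersection_p_value(set_a, set_b, corpus_size):
        """P(overlap >= observed) if set_b were drawn at random from the corpus."""
        k_obs = len(set_a & set_b)
        K, n, N = len(set_a), len(set_b), corpus_size
        total = comb(N, n)
        return sum(comb(K, k) * comb(N - K, n - k)
                   for k in range(k_obs, min(K, n) + 1)) / total

    docs_for_uri = set(range(0, 40))       # toy: 40 documents annotated with a URI
    docs_for_keyword = set(range(30, 60))  # toy: 30 documents matching a keyword
    print(intersection_p_value(docs_for_uri, docs_for_keyword, corpus_size=10_000))
    ```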

  5. Resource Selection for Federated Search on the Web

    OpenAIRE

    Nguyen, Dong; Demeester, Thomas; Trieschnigg, Dolf; Hiemstra, Djoerd

    2016-01-01

    A publicly available dataset for federated search reflecting a real web environment has long been absent, making it difficult for researchers to test the validity of their federated search algorithms for the web setting. We present several experiments and analyses on resource selection on the web using a recently released test collection containing the results from more than a hundred real search engines, ranging from large general web search engines such as Google, Bing and Yahoo to small do...

  6. An Analysis of Web Image Queries for Search.

    Science.gov (United States)

    Pu, Hsiao-Tieh

    2003-01-01

    Examines the differences between Web image and textual queries, and attempts to develop an analytic model to investigate their implications for Web image retrieval systems. Provides results that give insight into Web image searching behavior and suggests implications for improvement of current Web image search engines. (AEF)

  7. Overview of the TREC 2013 Federated Web Search Track

    NARCIS (Netherlands)

    Demeester, Thomas; Trieschnigg, Rudolf Berend; Nguyen, Dong-Phuong; Hiemstra, Djoerd

    The TREC Federated Web Search track is intended to promote research related to federated search in a realistic web setting, and hereto provides a large data collection gathered from a series of online search engines. This overview paper discusses the results of the first edition of the track, FedWeb

  8. Predicting consumer behavior with Web search.

    Science.gov (United States)

    Goel, Sharad; Hofman, Jake M; Lahaie, Sébastien; Pennock, David M; Watts, Duncan J

    2010-10-12

    Recent work has demonstrated that Web search volume can "predict the present," meaning that it can be used to accurately track outcomes such as unemployment levels, auto and home sales, and disease prevalence in near real time. Here we show that what consumers are searching for online can also predict their collective future behavior days or even weeks in advance. Specifically we use search query volume to forecast the opening weekend box-office revenue for feature films, first-month sales of video games, and the rank of songs on the Billboard Hot 100 chart, finding in all cases that search counts are highly predictive of future outcomes. We also find that search counts generally boost the performance of baseline models fit on other publicly available data, where the boost varies from modest to dramatic, depending on the application in question. Finally, we reexamine previous work on tracking flu trends and show that, perhaps surprisingly, the utility of search data relative to a simple autoregressive model is modest. We conclude that in the absence of other data sources, or where small improvements in predictive performance are material, search queries provide a useful guide to the near future.
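
    A minimal sketch of the kind of comparison described, a simple autoregressive baseline versus the same model augmented with a search-volume feature; the data are random placeholders, and the one-lag ordinary-least-squares setup is an illustrative assumption, not the paper's model.

    ```python
    # Minimal sketch: does adding search-query volume improve a simple autoregressive
    # baseline? Data are random placeholders; one-lag OLS is an illustrative choice,
    # not the paper's exact model.
    import numpy as np
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(0)
    weeks = 120
    search_volume = rng.normal(size=weeks)
    outcome = 0.6 * np.roll(search_volume, 1) + rng.normal(scale=0.5, size=weeks)

    X_base = outcome[:-1].reshape(-1, 1)                         # AR(1): last week's outcome
    X_aug = np.column_stack([outcome[:-1], search_volume[:-1]])  # AR(1) + search volume
    y = outcome[1:]

    r2_base = LinearRegression().fit(X_base, y).score(X_base, y)
    r2_aug = LinearRegression().fit(X_aug, y).score(X_aug, y)
    print(f"baseline R^2: {r2_base:.2f}, with search volume: {r2_aug:.2f}")
    ```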

  9. Active reranking for web image search.

    Science.gov (United States)

    Tian, Xinmei; Tao, Dacheng; Hua, Xian-Sheng; Wu, Xiuqing

    2010-03-01

    Image search reranking methods usually fail to capture the user's intention when the query term is ambiguous. Therefore, reranking with user interactions, or active reranking, is highly demanded to effectively improve the search performance. The essential problem in active reranking is how to target the user's intention. To complete this goal, this paper presents a structural information based sample selection strategy to reduce the user's labeling efforts. Furthermore, to localize the user's intention in the visual feature space, a novel local-global discriminative dimension reduction algorithm is proposed. In this algorithm, a submanifold is learned by transferring the local geometry and the discriminative information from the labelled images to the whole (global) image database. Experiments on both synthetic datasets and a real Web image search dataset demonstrate the effectiveness of the proposed active reranking scheme, including both the structural information based active sample selection strategy and the local-global discriminative dimension reduction algorithm.

  10. Changes in users' Web search performance after ten years ...

    African Journals Online (AJOL)

    The changes in users' Web search performance using search engines over ten years were investigated in this study. Matched data obtained from samples in 2000 and 2010 were used for the comparative analysis. The patterns of Web search engine use suggested the dominance of a particular search engine. Statistical ...

  11. FedWeb Greatest Hits: Presenting the New Test Collection for Federated Web Search

    NARCIS (Netherlands)

    Demeester, Thomas; Trieschnigg, Rudolf Berend; Zhou, Ke; Nguyen, Dong-Phuong; Hiemstra, Djoerd

    This paper presents 'FedWeb Greatest Hits', a large new test collection for research in web information retrieval. As a combination and extension of the datasets used in the TREC Federated Web Search Track, this collection opens up new research possibilities on federated web search challenges, as

  12. Mining Tasks from the Web Anchor Text Graph: MSR Notebook Paper for the TREC 2015 Tasks Track

    Science.gov (United States)

    2015-11-20

    Mining Tasks from the Web Anchor Text Graph: MSR Notebook Paper for the TREC 2015 Tasks Track. Paul N. Bennett, Microsoft Research, Redmond, USA. pauben… investigated the effectiveness of mining session co-occurrence data. For a search engine log, session boundaries can be defined in the typical way, but to… of common failures. To be conservative and attempt to eliminate these failures, we require a candidate to have overlap with the filter phrase for a

  13. Resource Selection for Federated Search on the Web

    NARCIS (Netherlands)

    Nguyen, Dong-Phuong; Demeester, Thomas; Trieschnigg, Rudolf Berend; Hiemstra, Djoerd

    A publicly available dataset for federated search reflecting a real web environment has long been absent, making it difficult for researchers to test the validity of their federated search algorithms for the web setting. We present several experiments and analyses on resource selection on the web

  14. Social Tagging for Personalized Web Search

    Science.gov (United States)

    Biancalana, Claudio

    Social networks and collaborative tagging systems are rapidly gaining popularity as primary means for sorting and sharing data: users tag their bookmarks in order to simplify information dissemination and later lookup. Social Bookmarking services are useful in two important respects: first, they can allow an individual to remember the visited URLs, and second, tags can be made by the community to guide users towards valuable content. In this paper we focus on the latter use: we present a novel approach for personalized web search using query expansion. We further extend the family of well-known co-occurrence matrix technique models by using a new way of exploring social tagging services. Our approach shows its strength particularly in the case of disambiguation of word contexts. We show how to design and implement such a system in practice and conduct several experiments. To the best of our knowledge this is the first study centered on using social bookmarking and tagging techniques for personalization of web search and its evaluation in a real-world scenario.
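
    A minimal sketch of co-occurrence-based query expansion over social tags, in the spirit of the approach described; the bookmark tag sets and the expansion rule (top co-occurring tags) are illustrative assumptions.

    ```python
    # Minimal sketch: expanding a query term with tags that frequently co-occur with it
    # in social bookmarking data. The bookmark tag sets and the expansion rule
    # (top-k co-occurring tags) are illustrative assumptions.
    from collections import Counter
    from itertools import combinations

    bookmarks = [  # toy tag sets attached to bookmarked pages
        {"python", "programming", "tutorial"},
        {"python", "snake", "reptile"},
        {"python", "programming", "web"},
    ]

    def tag_cooccurrence(bookmarks):
        """Count how often each unordered pair of tags appears on the same bookmark."""
        counts = Counter()
        for tags in bookmarks:
            counts.update(frozenset(pair) for pair in combinations(sorted(tags), 2))
        return counts

    def expand_query(term, bookmarks, k=2):
        """Return the term plus the k tags that co-occur with it most often."""
        counts = tag_cooccurrence(bookmarks)
        related = Counter({
            next(iter(pair - {term})): c for pair, c in counts.items() if term in pair
        })
        return [term] + [tag for tag, _ in related.most_common(k)]

    print(expand_query("python", bookmarks))  # e.g. ['python', 'programming', 'tutorial']
    ```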

  15. Adding to the Students' Toolbox: Using Directories, Search Engines, and the Hidden Web in Search Processes.

    Science.gov (United States)

    Mardis, Marcia A.

    2002-01-01

    Discussion of searching for information on the Web focuses on resources that are not always found by traditional Web searches. Describes sources on the hidden Web, including full-text databases, clearinghouses, digital libraries, and learning objects; explains how search engines operate; and suggests that traditional print sources are still…

  16. Heterogeneous Graph Propagation for Large-Scale Web Image Search.

    Science.gov (United States)

    Xie, Lingxi; Tian, Qi; Zhou, Wengang; Zhang, Bo

    2015-11-01

    State-of-the-art web image search frameworks are often based on the bag-of-visual-words (BoVWs) model and the inverted index structure. Despite the simplicity, efficiency, and scalability, they often suffer from low precision and/or recall, due to the limited stability of local features and the considerable information loss on the quantization stage. To refine the quality of retrieved images, various postprocessing methods have been adopted after the initial search process. In this paper, we investigate the online querying process from a graph-based perspective. We introduce a heterogeneous graph model containing both image and feature nodes explicitly, and propose an efficient reranking approach consisting of two successive modules, i.e., incremental query expansion and image-feature voting, to improve the recall and precision, respectively. Compared with the conventional reranking algorithms, our method does not require using geometric information of visual words, therefore enjoys low consumptions of both time and memory. Moreover, our method is independent of the initial search process, and could cooperate with many BoVW-based image search pipelines, or adopted after other postprocessing algorithms. We evaluate our approach on large-scale image search tasks and verify its competitive search performance.

  17. New Methods and Tools for the World Wide Web Search

    OpenAIRE

    Ceric, Vlatko

    2000-01-01

    The explosive growth of the World Wide Web, as well as its heterogeneity, calls for powerful and easy-to-use search tools capable of providing the user with a moderate number of relevant answers. This paper presents an analysis of key aspects of recently developed Web search methods and tools: visual representation of subject trees, interactive user interfaces, linguistic approaches, image search, ranking and grouping of search results, database search, and scientific information retrieval. Current trend...

  18. IMPROVING PERSONALIZED WEB SEARCH USING BOOKSHELF DATA STRUCTURE

    Directory of Open Access Journals (Sweden)

    S.K. Jayanthi

    2012-10-01

    Search engines play a vital role in retrieving relevant information for the web user. In this research work, a user-profile-based web search is proposed, so that web users from different domains may receive different sets of results. The main challenge is to provide relevant results at the right level of reading difficulty. Estimating user expertise and re-ranking the results are the main aspects of this paper. The retrieved results are arranged in a Bookshelf Data Structure for easy access. Better presentation of search results thus significantly increases the usability of web search engines in visual mode.

  19. How Users Search the Mobile Web: A Model for Understanding the Impact of Motivation and Context on Search Behaviors

    Directory of Open Access Journals (Sweden)

    Dan Wu

    2016-03-01

    Purpose: This study explores how search motivation and context influence mobile Web search behaviors. Design/methodology/approach: We studied 30 experienced mobile Web users via questionnaires, semi-structured interviews, and an online diary tool that participants used to record their daily search activities. SQLite Developer was used to extract data from the users' phone logs for correlation analysis in Statistical Product and Service Solutions (SPSS). Findings: One quarter of mobile search sessions were driven by two or more search motivations. It was especially difficult to distinguish curiosity from time killing in particular user reporting. Multi-dimensional contexts and motivations influenced mobile search behaviors, and among the context dimensions, gender, place, activities engaged in while searching, task importance, portal, and interpersonal relations (whether accompanied or alone when searching) correlated with each other. Research limitations: The sample consisted entirely of college students, so our findings may not generalize to other populations. More participants and a longer experimental duration would improve the accuracy and objectivity of the research. Practical implications: Motivation analysis and search context recognition can help mobile service providers design applications and services for particular mobile contexts and usages. Originality/value: Most current research focuses on specific contexts, such as studies on place or other contextual influences on mobile search, and lacks a systematic analysis of mobile search context. Based on analysis of the impact of mobile search motivations and search context on search behaviors, we built a multi-dimensional model of mobile search behaviors.

  20. ExactSearch: a web-based plant motif search tool.

    Science.gov (United States)

    Gunasekara, Chathura; Subramanian, Avinash; Avvari, Janaki Venkata Ram Kumar; Li, Bin; Chen, Su; Wei, Hairong

    2016-01-01

    Plant biologists frequently need to examine whether a sequence motif bound by a specific transcription or translation factor is present in the proximal promoters or 3' untranslated regions (3' UTR) of a set of plant genes of interest. To achieve such a task, plant biologists have to not only identify an appropriate algorithm for motif searching, but also manipulate a large volume of sequence data, making the task burdensome to carry out. In this study, we developed a web portal that enables plant molecular biologists to search for DNA motifs, especially degenerate ones, in custom sequences or in the flanking regions of all genes in the 50 plant species whose genomes have been sequenced. A web tool like this is needed to meet the varied needs of plant biologists for identifying potential gene regulatory relationships. We implemented a suffix tree algorithm to accelerate the search for a group of motifs in a multitude of target genes. The motifs to be searched can contain degenerate bases in addition to adenine (A), cytosine (C), guanine (G), and thymine (T). The target sequences can be custom sequences or the selected proximal gene sequences from any one of the 50 sequenced plant species. The web portal also contains functionality to facilitate the search of motifs that are represented by a position probability matrix in the above-mentioned species. Currently, the algorithm can accomplish an exhaustive search of 100 motifs in 35,000 target sequences of 2 kb length in 4.2 min. However, the runtime may change in the future depending on space availability, the number of running jobs, network traffic, data loading, and output packing and delivery through electronic mailing. A web portal was developed to facilitate searching for motifs present in custom sequences or the proximal promoters or 3' UTR of 50 plant species with sequenced genomes. This web tool is accessible at this URL: http://sys.bio.mtu.edu/motif/index.php.
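
    A minimal sketch of degenerate (IUPAC) motif matching by translating the motif into a regular expression; this only illustrates the kind of search the portal performs, not its suffix tree implementation, and the example sequence is made up.

    ```python
    # Minimal sketch: searching a DNA sequence for a degenerate (IUPAC) motif by
    # translating it into a regular expression. This illustrates the kind of search
    # the portal performs; it is not its suffix tree implementation.
    import re

    IUPAC = {
        "A": "A", "C": "C", "G": "G", "T": "T",
        "R": "[AG]", "Y": "[CT]", "S": "[CG]", "W": "[AT]",
        "K": "[GT]", "M": "[AC]", "B": "[CGT]", "D": "[AGT]",
        "H": "[ACT]", "V": "[ACG]", "N": "[ACGT]",
    }

    def find_motif(motif, sequence):
        """Return start positions of all (possibly overlapping) matches of the motif."""
        regex = "".join(IUPAC[base] for base in motif.upper())
        return [m.start() for m in re.finditer(f"(?=({regex}))", sequence.upper())]

    print(find_motif("TATAWA", "GCTATATAAGGC"))  # prints [2] for this made-up sequence
    ```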

  1. AN OVERVIEW OF SEARCHING AND DISCOVERING WEB BASED INFORMATION RESOURCES

    Directory of Open Access Journals (Sweden)

    Cezar VASILESCU

    2010-01-01

    The Internet has become a daily instrument for most of us, for professional or personal reasons. We hardly remember the times when a computer and a broadband connection were luxury items. More and more people rely on the complex web network to find the information they need. This paper presents an overview of Internet search related issues and search engines, and describes the parties and the basic mechanism embedded in a search for web-based information resources. It also presents ways to increase the efficiency of web searches through a better understanding of what search engines ignore in website content.

  2. What Snippets Say About Pages in Federated Web Search

    OpenAIRE

    DEMEESTER, Thomas; Nguyen, Dong-Phuong; Trieschnigg, Rudolf Berend; Develder, Chris; Hiemstra, Djoerd; Hou, Yuexian; Nie, Jian-Yun; Sun, Le; Wang, Bo; Zhang, Peng

    2012-01-01

    What is the likelihood that a Web page is considered relevant to a query, given the relevance assessment of the corresponding snippet? Using a new federated IR test collection that contains search results from over a hundred search engines on the internet, we are able to investigate such research questions from a global perspective. Our test collection covers the main Web search engines like Google, Yahoo!, and Bing, as well as a number of smaller search engines dedicated to multimedia, shopp...

  3. Image and video search engine for the World Wide Web

    Science.gov (United States)

    Smith, John R.; Chang, Shih-Fu

    1997-01-01

    We describe a visual information system prototype for searching for images and videos on the World-Wide Web. New visual information in the form of images, graphics, animations and videos is being published on the Web at an incredible rate. However, cataloging this visual data is beyond the capabilities of current text-based Web search engines. In this paper, we describe a complete system by which visual information on the Web is (1) collected by automated agents, (2) processed in both text and visual feature domains, (3) catalogued and (4) indexed for fast search and retrieval. We introduce an image and video search engine which utilizes both text-based navigation and content-based technology for searching visually through the catalogued images and videos. Finally, we provide an initial evaluation based upon the cataloging of over one half million images and videos collected from the Web.

  4. Uncovering Web search strategies in South African higher education

    Directory of Open Access Journals (Sweden)

    Surika Civilcharran

    2016-11-01

    Background: In spite of the enormous amount of information available on the Web and the fact that search engines are continuously evolving to enhance the search experience, students are nevertheless faced with the difficulty of effectively retrieving information. It is, therefore, imperative for the interaction between students and search tools to be understood and search strategies to be identified, in order to promote successful information retrieval. Objectives: This study identifies the Web search strategies used by postgraduate students and forms part of a wider study into information retrieval strategies used by postgraduate students at the University of KwaZulu-Natal (UKZN), Pietermaritzburg campus, South Africa. Method: Largely underpinned by Thatcher’s cognitive search strategies, the mixed-methods approach was utilised for this study, in which questionnaires were employed in Phase 1 and structured interviews in Phase 2. This article reports and reflects on the findings of Phase 2, which focus on identifying the Web search strategies employed by postgraduate students. The Phase 1 results were reported in Civilcharran, Hughes and Maharaj (2015). Results: Findings reveal the Web search strategies used for academic information retrieval. In spite of easy access to the invisible Web and the advent of meta-search engines, Web search engines still remain the preferred search tool. The UKZN online library databases, and especially the UKZN online library Online Public Access Catalogue system, are being underutilised. Conclusion: Being ranked in the top three percent of the world’s universities, UKZN is investing in search tools that are not being used to their full potential. This evidence suggests an urgent need for students to be trained in Web searching and to have greater exposure to a variety of search tools. This article is intended to further contribute to the design of undergraduate training programmes in order to deal

  5. Uncovering Web search strategies in South African higher education

    Directory of Open Access Journals (Sweden)

    Surika Civilcharran

    2016-04-01

    Background: In spite of the enormous amount of information available on the Web and the fact that search engines are continuously evolving to enhance the search experience, students are nevertheless faced with the difficulty of effectively retrieving information. It is, therefore, imperative for the interaction between students and search tools to be understood and search strategies to be identified, in order to promote successful information retrieval. Objectives: This study identifies the Web search strategies used by postgraduate students and forms part of a wider study into information retrieval strategies used by postgraduate students at the University of KwaZulu-Natal (UKZN), Pietermaritzburg campus, South Africa. Method: Largely underpinned by Thatcher’s cognitive search strategies, the mixed-methods approach was utilised for this study, in which questionnaires were employed in Phase 1 and structured interviews in Phase 2. This article reports and reflects on the findings of Phase 2, which focus on identifying the Web search strategies employed by postgraduate students. The Phase 1 results were reported in Civilcharran, Hughes and Maharaj (2015). Results: Findings reveal the Web search strategies used for academic information retrieval. In spite of easy access to the invisible Web and the advent of meta-search engines, Web search engines still remain the preferred search tool. The UKZN online library databases, and especially the UKZN online library Online Public Access Catalogue system, are being underutilised. Conclusion: Being ranked in the top three percent of the world’s universities, UKZN is investing in search tools that are not being used to their full potential. This evidence suggests an urgent need for students to be trained in Web searching and to have greater exposure to a variety of search tools. This article is intended to further contribute to the design of undergraduate training programmes in order to deal

  6. From people to entities new semantic search paradigms for the web

    CERN Document Server

    Demartini, G

    2014-01-01

    The exponential growth of digital information available in companies and on the Web creates the need for search tools that can respond to the most sophisticated information needs. Many user tasks would be simplified if search engines supported typed search and returned entities instead of just Web documents. For example, an executive who tries to solve a problem needs to find people in the company who are knowledgeable about a certain topic. In the first part of the book, we propose a model for expert finding based on the well-consolidated vector space model for Information Retrieval and inv

  7. Social Search: A Taxonomy of, and a User-Centred Approach to, Social Web Search

    Science.gov (United States)

    McDonnell, Michael; Shiri, Ali

    2011-01-01

    Purpose: The purpose of this paper is to introduce the notion of social search as a new concept, drawing upon the patterns of web search behaviour. It aims to: define social search; present a taxonomy of social search; and propose a user-centred social search method. Design/methodology/approach: A mixed method approach was adopted to investigate…

  8. Searching the Online catalog and the World Wide Web

    Directory of Open Access Journals (Sweden)

    Shu-Hsien L. Chen

    2003-09-01

    The article discusses the searching behaviors of school children using the online catalog and the World Wide Web. Although the online catalog and the World Wide Web differ greatly in the amount of information and search capability they offer, students share several common problems in using them. They have problems with spelling and typing, phrasing search terms, extracting key concepts, formulating search strategies, and evaluating search results. Their specific problems in searching the World Wide Web include rapid navigation of the Internet, overuse of the Back button and browsing strategy, and evaluating only the first screen. Teachers and media specialists need to address these problems in the instruction of information literacy skills so that students can fully utilize the power of online searching and become efficient information searchers.

  9. Communities, Collaboration, and Recommender Systems in Personalized Web Search

    Science.gov (United States)

    Smyth, Barry; Coyle, Maurice; Briggs, Peter

    Web search engines are the primary means by which millions of users access information everyday and the sheer scale and success of the leading search engines is a testimony to the scientific and engineering progress that has been made over the last ten years. However, mainstream search engines continue to deliver largely one-size-fits-all services to their user-base, ultimately limiting the relevance of their result-lists. In this chapter we will explore recent research that is seeking to make Web search a more personal and collaborative experience as we look towards a new breed of more social search engines.

  10. Web Search Personalization Via Social Bookmarking and Tagging

    Science.gov (United States)

    Noll, Michael G.; Meinel, Christoph

    In this paper, we present a new approach to web search personalization based on user collaboration and sharing of information about web documents. The proposed personalization technique separates data collection and user profiling from the information system whose contents and indexed documents are being searched for, i.e. the search engines, and uses social bookmarking and tagging to re-rank web search results. It is independent of the search engine being used, so users are free to choose the one they prefer, even if their favorite search engine does not natively support personalization. We show how to design and implement such a system in practice and investigate its feasibility and usefulness with large sets of real-word data and a user study.
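
    A minimal sketch of the re-ranking idea (boost results whose community tags overlap the user's own tag profile); the scoring rule and the toy data are illustrative assumptions, not the paper's exact algorithm.

    ```python
    # Minimal sketch: re-ranking search results by the overlap between the user's tag
    # profile and each result's community tags. The scoring rule and data are
    # illustrative assumptions, not the paper's exact algorithm.

    def rerank(results, user_tags):
        """results: list of (url, original_rank, community_tags)."""
        def score(item):
            url, rank, tags = item
            overlap = len(user_tags & tags)
            return (-overlap, rank)  # more tag overlap first, original rank breaks ties
        return [url for url, _, _ in sorted(results, key=score)]

    user_profile = {"photography", "travel", "hiking"}
    results = [
        ("example.org/camera-reviews", 1, {"photography", "gear"}),
        ("example.org/stock-quotes", 2, {"finance"}),
        ("example.org/trail-guide", 3, {"hiking", "travel", "maps"}),
    ]
    print(rerank(results, user_profile))
    # ['example.org/trail-guide', 'example.org/camera-reviews', 'example.org/stock-quotes']
    ```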

  11. A novel visualization model for web search results.

    Science.gov (United States)

    Nguyen, Tien N; Zhang, Jin

    2006-01-01

    This paper presents an interactive visualization system, named WebSearchViz, for visualizing Web search results and facilitating users' navigation and exploration. The metaphor in our model is the solar system with its planets and asteroids revolving around the sun. Location, color, movement, and spatial distance of objects in the visual space are used to represent the semantic relationships between a query and relevant Web pages. In particular, the movement of objects and their speeds add a new dimension to the visual space, illustrating the degree of relevance between a query and the Web search results in the context of users' subjects of interest. By interacting with the visual space, users are able to observe the semantic relevance between a query and a resulting Web page with respect to their subjects of interest, context information, or concern. Users' subjects of interest can be dynamically changed, redefined, added, or deleted from the visual space.

  12. Semantic similarity measure in biomedical domain leverage web search engine.

    Science.gov (United States)

    Chen, Chi-Huang; Hsieh, Sheau-Ling; Weng, Yung-Ching; Chang, Wen-Yung; Lai, Feipei

    2010-01-01

    Semantic similarity measures play an essential role in Information Retrieval and Natural Language Processing. In this paper we propose a page-count-based semantic similarity measure and apply it in biomedical domains. Previous research in semantic-web-related applications has deployed various semantic similarity measures. Despite the usefulness of these measures in those applications, measuring semantic similarity between two terms remains a challenging task. The proposed method exploits page counts returned by a Web search engine. We define various similarity scores for two given terms P and Q, using the page counts obtained by querying P, Q, and P AND Q. Moreover, we propose a novel approach to computing semantic similarity using lexico-syntactic patterns together with page counts. These different similarity scores are integrated by adapting support vector machines, to leverage the robustness of semantic similarity measures. Experimental results on two datasets achieve correlation coefficients of 0.798 on the dataset provided by A. Hliaoutakis, 0.705 on the dataset provided by T. Pedersen with physician scores, and 0.496 on the dataset provided by T. Pedersen et al. with expert scores.
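
    The abstract does not give the exact formulas, but page-count-based similarity scores of this kind are commonly variants of Jaccard, Dice, and pointwise mutual information computed from the counts for P, Q, and "P AND Q". The sketch below is a minimal illustration under that assumption; the function name, the total-page estimate n_total, and the low-count cutoff are hypothetical, not taken from the paper.

```python
import math

def page_count_similarities(count_p, count_q, count_pq, n_total=1e10, cutoff=5):
    """Illustrative page-count-based similarity scores for two terms P and Q.

    count_p, count_q -- page counts for querying P and Q individually
    count_pq         -- page count for the conjunctive query "P AND Q"
    n_total          -- assumed number of pages indexed by the engine
    cutoff           -- treat very small co-occurrence counts as noise
    """
    if count_pq < cutoff:
        return {"jaccard": 0.0, "dice": 0.0, "pmi": 0.0}
    jaccard = count_pq / (count_p + count_q - count_pq)
    dice = 2.0 * count_pq / (count_p + count_q)
    pmi = math.log2((count_pq / n_total) / ((count_p / n_total) * (count_q / n_total)))
    return {"jaccard": jaccard, "dice": dice, "pmi": pmi}

# Example with hypothetical page counts for two biomedical terms
print(page_count_similarities(count_p=120000, count_q=45000, count_pq=9000))
```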

  13. How To Do Field Searching in Web Search Engines: A Field Trip.

    Science.gov (United States)

    Hock, Ran

    1998-01-01

    Describes the field search capabilities of selected Web search engines (AltaVista, HotBot, Infoseek, Lycos, Yahoo!) and includes a chart outlining what fields (date, title, URL, images, audio, video, links, page depth) are searchable, where to go on the page to search them, the syntax required (if any), and how field search queries are entered.…

  14. How Google Web Search copes with very similar documents

    NARCIS (Netherlands)

    W. Mettrop (Wouter); P. Nieuwenhuysen; H. Smulders

    2006-01-01

    A significant portion of the computer files that carry documents, multimedia, programs etc. on the Web are identical or very similar to other files on the Web. How do search engines cope with this? Do they perform some kind of “deduplication”? How should users take into account that

  15. Considerations for the development of task-based search engines

    DEFF Research Database (Denmark)

    Petcu, Paula; Dragusin, Radu

    2013-01-01

    Based on previous experience from working on a task-based search engine, we present a list of suggestions and ideas for an Information Retrieval (IR) framework that could inform the development of next-generation professional search systems. The specific task that we start from is the clinicians' information need in finding rare disease diagnostic hypotheses at the time and place where medical decisions are made. Our experience from the development of a search engine focused on supporting clinicians in completing this task has provided us with valuable insights into which aspects should be considered by the developers of vertical search engines.

  16. A World Wide Web Region-Based Image Search Engine

    DEFF Research Database (Denmark)

    Kompatsiaris, Ioannis; Triantafyllou, Evangelia; Strintzis, Michael G.

    2001-01-01

    In this paper the development of an intelligent image content-based search engine for the World Wide Web is presented. This system will offer a new form of media representation and access of content available in WWW. Information Web Crawlers continuously traverse the Internet and collect images … These features along with additional information such as the URL location and the date of the index procedure are stored in a database. The user can access and search this indexed content through the Web with an advanced and user friendly interface. The output of the system is a set of links …

  17. Specialized tools are needed when searching the web for rare disease diagnoses.

    Science.gov (United States)

    Dragusin, Radu; Petcu, Paula; Lioma, Christina; Larsen, Birger; Jørgensen, Henrik L; Cox, Ingemar J; Hansen, Lars Kai; Ingwersen, Peter; Winther, Ole

    2013-01-01

    In our recent paper, we study web search as an aid in the process of diagnosing rare diseases. To answer the question of how well Google Search and PubMed perform, we created an evaluation framework with 56 diagnostic cases and made our own specialized search engine, FindZebra (findzebra.com). FindZebra uses a set of publicly available curated sources on rare diseases and an open-source information retrieval system, Indri. Our evaluation and the feedback received after the publication of our paper both show that FindZebra outperforms Google Search and PubMed. In this paper, we summarize the original findings and the response to FindZebra, discuss why Google Search is not designed for specialized tasks and outline some of the current trends in using web resources and social media for medical diagnosis.

  18. Improving Web Search for Difficult Queries

    Science.gov (United States)

    Wang, Xuanhui

    2009-01-01

    Search engines have now become essential tools in all aspects of our life. Although a variety of information needs can be served very successfully, there are still a lot of queries that search engines cannot answer very effectively, and these queries often leave users feeling frustrated. Since it is quite often that users encounter such "difficult…

  19. What Snippets Say About Pages in Federated Web Search

    NARCIS (Netherlands)

    Demeester, Thomas; Nguyen, Dong-Phuong; Trieschnigg, Rudolf Berend; Develder, Chris; Hiemstra, Djoerd; Hou, Yuexian; Nie, Jian-Yun; Sun, Le; Wang, Bo; Zhang, Peng

    2012-01-01

    What is the likelihood that a Web page is considered relevant to a query, given the relevance assessment of the corresponding snippet? Using a new federated IR test collection that contains search results from over a hundred search engines on the internet, we are able to investigate such research

  20. World Wide Web Metaphors for Search Mission Data

    Science.gov (United States)

    Norris, Jeffrey S.; Wallick, Michael N.; Joswig, Joseph C.; Powell, Mark W.; Torres, Recaredo J.; Mittman, David S.; Abramyan, Lucy; Crockett, Thomas M.; Shams, Khawaja S.; Fox, Jason M.

    2010-01-01

    A software program that searches and browses mission data emulates a Web browser, containing standard metaphors for Web browsing. By taking advantage of back-end URLs, users may save and share search states. Also, since a Web interface is familiar to users, training time is reduced. Familiar back and forward buttons move through a local search history. A refresh/reload button regenerates a query and loads in any new data. URLs can be constructed to save search results. Adding context to the current search is also handled through a familiar Web metaphor. The query is constructed by clicking on hyperlinks that represent new components of the search query. The selection of a link appears to the user as a page change; the choice of links changes to represent the updated search, and the results are filtered by the new criteria. Selecting a navigation link changes the current query and also the URL that is associated with it. The back button can be used to return to the previous search state. This software is part of the MSLICE release, which was written in Java. It will run on any current Windows, Macintosh, or Linux system.
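
    As an illustration of the "URL as saved search state" metaphor described above, the following sketch encodes a query state into a shareable URL and recovers it again. It is written in Python for consistency with the other examples in this listing (MSLICE itself is Java), and the parameter names are hypothetical.

```python
from urllib.parse import urlencode, urlparse, parse_qs

def search_state_to_url(base_url, state):
    """Encode a search state (filters, query terms) into a shareable URL."""
    return base_url + "?" + urlencode(state, doseq=True)

def url_to_search_state(url):
    """Recover the search state from a previously saved URL."""
    return parse_qs(urlparse(url).query)

# Hypothetical mission-data query: a sol range plus an instrument filter
state = {"sol_min": 100, "sol_max": 120, "instrument": ["MAHLI", "MASTCAM"]}
url = search_state_to_url("https://example.org/search", state)
print(url)
print(url_to_search_state(url))
```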

  1. Ontology-Based Information Behaviour to Improve Web Search

    Directory of Open Access Journals (Sweden)

    Silvia Calegari

    2010-10-01

    Web search engines provide a huge number of answers in response to a user query, many of which are not relevant, whereas some of the most relevant ones may not be found. In the literature several approaches have been proposed in order to help a user find the information relevant to his/her real needs on the Web. To achieve this goal the individual Information Behaviour can be analyzed to 'keep' track of the user's interests. Keeping information is a type of Information Behaviour, and in several works researchers have referred to it as the study of what people do during a search on the Web. Generally, the user's actions (e.g., how the user moves from one Web page to another, or her/his download of a document, etc.) are recorded in Web logs. This paper reports on research activities which aim to exploit the information extracted from Web logs (or query logs) in personalized user ontologies, with the objective to support the user in the process of discovering Web information relevant to her/his information needs. Personalized ontologies are used to improve the quality of Web search by applying two main techniques: query reformulation and re-ranking of query evaluation results. In this paper we analyze various methodologies presented in the literature aimed at using personalized ontologies, defined on the basis of the observation of Information Behaviour, to help the user in finding relevant information.

  2. The intelligent web search, smart algorithms, and big data

    CERN Document Server

    Shroff, Gautam

    2013-01-01

    As we use the Web for social networking, shopping, and news, we leave a personal trail. These days, linger over a Web page selling lamps, and they will turn up in the advertising margins as you move around the Internet, reminding you, tempting you to make that purchase. Search engines such as Google can now look deep into the data on the Web to pull out instances of the words you are looking for. And there are pages that collect and assess information to give you a snapshot of changing political opinion. These are just basic examples of the growth of "Web intelligence", as increasingly sophis

  3. Search strategies in practice: Influence of information and task constraints.

    Science.gov (United States)

    Pacheco, Matheus M; Newell, Karl M

    2017-11-07

    The practice of a motor task has been conceptualized as a process of search through a perceptual-motor workspace. The present study investigated the influence of information and task constraints on the search strategy, as reflected in the sequential relations of the outcome in a discrete movement virtual projectile task. The results showed that the relation of trial-to-trial changes in movement outcome to performance level was dependent on the landscape of the task dynamics and the influence of inherent variability. Furthermore, the search was conducted in a constrained parameter region of the perceptual-motor workspace that depended on the task constraints. These findings show that there is not a single function of trial-to-trial change over practice, but rather that local search strategies (proportional, discontinuous, constant) adapt to the level of performance and the confluence of constraints on action.

  4. Uncovering Web search tactics in South African higher education

    Directory of Open Access Journals (Sweden)

    Surika Civilcharran

    2015-02-01

    Background: The potential of the World Wide Web (‘the Web’) as a tool for information retrieval in higher education is beyond question. Harnessing this potential, however, remains a challenge, particularly in the context of developing countries, where students are drawn from diverse socio-economic, educational and technological backgrounds. Objectives: The purpose of this study is to identify the Web search tactics used by postgraduate students in order to address the weaknesses of undergraduate students with regard to their Web searching tactics. This article forms part of a wider study into postgraduate students’ information retrieval strategies at the University of KwaZulu-Natal, Pietermaritzburg campus, South Africa. Method: The study utilised a mixed methods approach, employing both questionnaires (Phase 1) and structured interviews (Phase 2), and was largely underpinned by Bates’s model of information search tactics. This article reports and reflects on the findings of Phase 1, which focused on identifying the Web search tactics employed by postgraduate students. Results: Findings indicated a preference for lower-level Web search tactics, despite respondents largely self-reporting as intermediate or expert users. Moreover, the majority of respondents gained their knowledge of Web searching through experience, and only a quarter of respondents had been given formal training on Web searching. Conclusion: In addition to contributing to theory, it is envisaged that this article will contribute to practice by informing the design of undergraduate training interventions to proactively address the information retrieval challenges faced by novice users. Subsequent papers will report on Phase 2 of the study.

  5. Searching for Suicide Information on Web Search Engines in Chinese

    Directory of Open Access Journals (Sweden)

    Yen-Feng Lee

    2017-01-01

    Introduction: Suicide prevention has recently become an important public health issue. However, with growing access to information in cyberspace, harmful information is easily accessible online. To investigate the accessibility of potentially harmful suicide-related information on the internet, we examined searches for suicide information online in order to draw attention to the issue. Methods: We used five search engines (Google, Yahoo, Bing, Yam, and Sina) and four suicide-related search queries (suicide, how to suicide, suicide methods, and want to die) in traditional Chinese in April 2016. The first thirty links of the search results on each search engine were classified by a psychiatric doctor into suicide prevention, pro-suicide, neutral, unrelated to suicide, or error websites. Results: Among the total of 352 unique websites generated, suicide prevention websites were the most frequent among the search results (37.8%), followed by websites unrelated to suicide (25.9%) and neutral websites (23.0%). However, pro-suicide websites were still easily accessible (9.7%). In addition, compared with those originating in the USA and China, the search engine originating in Taiwan had the lowest accessibility to pro-suicide information. The results of ANOVA showed a significant difference between the groups, F = 8.772, P < 0.001. Conclusions: These results suggest a need for further restrictions and regulation of pro-suicide information on the internet. Providing more supportive information online may be an effective plan for suicide prevention.

  6. Key word placing in Web page body text to increase visibility to search engines

    Directory of Open Access Journals (Sweden)

    W. T. Kritzinger

    2007-11-01

    The growth of the World Wide Web has spawned a wide variety of new information sources, which has also left users with the daunting task of determining which sources are valid. Many users rely on the Web as an information source because of the low cost of information retrieval. It is also claimed that the Web has evolved into a powerful business tool. Examples include highly popular business services such as Amazon.com and Kalahari.net. It is estimated that around 80% of users utilize search engines to locate information on the Internet. This, by implication, places emphasis on the underlying importance of Web pages being listed on search engine indices. Empirical evidence that the placement of key words in certain areas of the body text influences a Web site's visibility to search engines could not be found in the literature. The results of two experiments indicated that key words should be concentrated towards the top, and diluted towards the bottom, of a Web page to increase visibility. However, care should be taken in terms of key word density, to prevent search engine algorithms from raising the spam alarm.

  7. The effect of query complexity on Web searching results

    Directory of Open Access Journals (Sweden)

    B.J. Jansen

    2000-01-01

    This paper presents findings from a study of the effects of query structure on retrieval by Web search services. Fifteen queries were selected from the transaction log of a major Web search service in simple query form, with no advanced operators (e.g., Boolean operators, phrase operators, etc.), and submitted to 5 major search engines: Alta Vista, Excite, FAST Search, Infoseek, and Northern Light. The results from these queries became the baseline data. The original 15 queries were then modified using the various search operators supported by each of the 5 search engines, for a total of 210 queries. Each of these 210 queries was also submitted to the applicable search service. The results obtained were then compared to the baseline results. A total of 2,768 search results were returned by the set of all queries. In general, increasing the complexity of the queries had little effect on the results, with a greater than 70% overlap in results, on average. Implications for the design of Web search services and directions for future research are discussed.
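
    The reported figure of a greater than 70% overlap suggests a simple set comparison between the baseline result list and the list returned by the reformulated query. A minimal sketch of such a computation, with made-up URLs, is shown below; the paper's exact counting procedure is not specified here.

```python
def result_overlap(baseline, modified):
    """Percentage of baseline result URLs that also appear in the results
    of the reformulated (more complex) query."""
    baseline_set, modified_set = set(baseline), set(modified)
    if not baseline_set:
        return 0.0
    return 100.0 * len(baseline_set & modified_set) / len(baseline_set)

# Hypothetical result lists for one query, before and after adding operators
baseline = ["url1", "url2", "url3", "url4", "url5"]
modified = ["url2", "url3", "url5", "url6", "url1"]
print(f"Overlap: {result_overlap(baseline, modified):.0f}%")  # prints 80%
```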

  8. Meta-Search Utilizing Evolutionary Recommendation: A Web Search Architecture Proposal

    Czech Academy of Sciences Publication Activity Database

    Húsek, Dušan; Keyhanipour, A.; Krömer, P.; Moshiri, B.; Owais, S.; Snášel, V.

    2008-01-01

    Roč. 33, - (2008), s. 189-200 ISSN 1870-4069 Institutional research plan: CEZ:AV0Z10300504 Keywords: web search * meta-search engine * intelligent re-ranking * ordered weighted averaging * Boolean search queries optimizing Subject RIV: IN - Informatics, Computer Science

  9. Review of Metadata Elements within the Web Pages Resulting from Searching in General Search Engines

    Directory of Open Access Journals (Sweden)

    Sima Shafi’ie Alavijeh

    2009-12-01

    The present investigation aimed to study the extent of presence of Dublin Core metadata elements and HTML meta tags in web pages. Ninety web pages were chosen by searching general search engines (Google, Yahoo and MSN). The extent of metadata elements (Dublin Core and HTML meta tags) present in these pages, as well as the existence of a significant correlation between the presence of meta elements and the type of search engine, was investigated. Findings indicated a very low presence of both Dublin Core metadata elements and HTML meta tags in the pages retrieved, which in turn illustrates the very low usage of metadata elements in web pages. Furthermore, findings indicated that there is no significant correlation between the type of search engine used and the presence of metadata elements. From the standpoint of including metadata in the retrieval of web sources, search engines do not significantly differ from one another.
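
    A present-day check of this kind could be automated by parsing each retrieved page and counting its meta tags and Dublin Core elements (conventionally, meta tags whose name starts with "DC."). The sketch below, using BeautifulSoup, is an illustrative assumption rather than the method actually used in the study.

```python
from bs4 import BeautifulSoup

def meta_tag_report(html):
    """Count HTML meta tags and Dublin Core elements (names starting with
    'DC.') in a single web page."""
    soup = BeautifulSoup(html, "html.parser")
    metas = soup.find_all("meta")
    dublin_core = [m for m in metas if m.get("name", "").lower().startswith("dc.")]
    return {"meta_tags": len(metas), "dublin_core_elements": len(dublin_core)}

sample = """<html><head>
  <meta name="keywords" content="search, metadata">
  <meta name="DC.Title" content="Sample page">
</head><body></body></html>"""
print(meta_tag_report(sample))  # {'meta_tags': 2, 'dublin_core_elements': 1}
```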

  10. Learning Task Knowledge from Dialog and Web Access

    Directory of Open Access Journals (Sweden)

    Vittorio Perera

    2015-06-01

    We present KnoWDiaL, an approach for Learning and using task-relevant Knowledge from human-robot Dialog and access to the Web. KnoWDiaL assumes that there is an autonomous agent that performs tasks, as requested by humans through speech. The agent needs to “understand” the request (i.e., to fully ground the task) until it can proceed to plan for and execute it. KnoWDiaL contributes such understanding by using and updating a Knowledge Base, by dialoguing with the user, and by accessing the web. We believe that KnoWDiaL, as we present it, can be applied to general autonomous agents. However, we focus on our work with our autonomous collaborative robot, CoBot, which executes service tasks in a building, moving around and transporting objects between locations. Hence, the knowledge acquired and accessed consists of groundings of language to robot actions, and to building locations, persons, and objects. KnoWDiaL handles the interpretation of voice commands, is robust to speech recognition errors, and is able to learn commands involving referring expressions in an open domain (i.e., without requiring a lexicon). We present in detail the multiple components of KnoWDiaL, namely a frame-semantic parser, a probabilistic grounding model, a web-based predicate evaluator, a dialog manager, and the weighted predicate-based Knowledge Base. We illustrate the knowledge access and updates from the dialog and Web access through detailed and complete examples. We further evaluate the correctness of the predicate instances learned into the Knowledge Base, and show the increase in dialog efficiency as a function of the number of interactions. We have extensively and successfully used KnoWDiaL in CoBot dialoguing and accessing the Web, and we extract a few corresponding example sequences from captured videos.

  11. Visual search in barn owls: Task difficulty and saccadic behavior.

    Science.gov (United States)

    Orlowski, Julius; Ben-Shahar, Ohad; Wagner, Hermann

    2018-01-01

    How do we find what we are looking for? A target can be in plain view, but it may be detected only after extensive search. During a search we make directed attentional deployments like saccades to segment the scene until we detect the target. Depending on difficulty, the search may be fast with few attentional deployments or slow with many, shorter deployments. Here we study visual search in barn owls by tracking their overt attentional deployments, that is, their head movements, with a camera. We conducted a low-contrast feature search, a high-contrast orientation conjunction search, and a low-contrast orientation conjunction search, each with set sizes varying from 16 to 64 items. The barn owls were able to learn all of these tasks and showed serial search behavior. In a subsequent step, we analyzed how search behavior of owls changes with search complexity. We compared the search mechanisms in these three serial searches with results from pop-out searches our group had reported earlier. Saccade amplitude shortened and fixation duration increased in difficult searches. Also, in conjunction search saccades were guided toward items with shared target features. These data suggest that during visual search, barn owls utilize mechanisms similar to those that humans use.

  12. Intelligent Information Systems for Web Product Search

    NARCIS (Netherlands)

    D. Vandic (Damir)

    2017-01-01

    Over the last few years, we have experienced an increase in online shopping. Consequently, there is a need for efficient and effective product search engines. The rapid growth of e-commerce, however, has also introduced some challenges. Studies show that users can get overwhelmed by

  13. Urban search and rescue medical teams: FEMA Task Force System.

    Science.gov (United States)

    Barbera, J A; Lozano, M

    1993-01-01

    Recent national and international disasters involving collapsed structures and trapped casualties (Mexico City; Armenia; Iran; Philippines; Charleston, South Carolina; Loma Prieta, California; and others) have provoked a heightened national concern for the development of an adequate capability to respond quickly and effectively to this type of calamity. The Federal Emergency Management Agency (FEMA) has responded to this need by developing an Urban Search and Rescue (US&R) Response System, a national system of multi-disciplinary task forces for rapid deployment to the site of a collapsed structure incident. Each 56-person task force includes a medical team capable of providing advanced emergency medical care both for task force members and for victims located and reached by the sophisticated search, rescue, and technical components of the task force. This paper reviews the background and development of urban search and rescue, and describes the make-up and function of the Federal Emergency Management Agency (FEMA) Task Force medical teams.

  14. UAV and Service Robot Coordination for Indoor Object Search Tasks

    OpenAIRE

    Konam, Sandeep; Rosenthal, Stephanie; Veloso, Manuela

    2017-01-01

    Our CoBot robots have successfully performed a variety of service tasks in our multi-building environment, including accompanying people to meetings and delivering objects to offices, due to their navigation and localization capabilities. However, they lack the capability to visually search over desks and other confined locations for an object of interest. Conversely, an inexpensive GPS-denied quadcopter platform such as the Parrot ARDrone 2.0 could perform this object search task if it had acces...

  15. A World Wide Web Region-Based Image Search Engine

    OpenAIRE

    Kompatsiaris, Ioannis; Triantafyllou, Evangelia; Strintzis, Michael G.

    2001-01-01

    In this paper the development of an intelligent image content-based search engine for the World Wide Web is presented. This system will offer a new form of media representation and access of content available in WWW. Information Web Crawlers continuously traverse the Internet and collect images that are subsequently indexed based on integrated feature vectors. As a basis for the indexing, the K-Means algorithm is used, modified so as to take into account the coherence of the regions. Based on the ext...

  16. WISE: a content-based Web image search engine

    Science.gov (United States)

    Qiu, Guoping; Palmer, R. D.

    2000-12-01

    This paper describes the development of a prototype of a Web Image Search Engine (WISE), which allows users to search for images on the WWW by image example, in a similar fashion to current search engines that allow users to find related Web pages using text matching on keywords. The system takes an image specified by the user and finds similar images available on the WWW by comparing image contents using low-level image features. The current version of the WISE system consists of a graphical user interface (GUI), an autonomous Web agent, an image comparison program and a query processing program. The user specifies the URL of a target image and the URL of the starting Web page from which the program will 'crawl' the Web, finding images along the way and retrieving those satisfying certain constraints. The program then computes the visual features of the retrieved images and performs content-based comparison with the target image. The results of the comparison are then sorted according to a similarity measure and, along with thumbnails and information associated with the images (such as their URLs and sizes), written to an HTML page. The resultant page is stored on a Web server and is delivered to the user's Web browser once the search process is complete. A unique feature of the current version of WISE is its image content comparison algorithm. It is based on the comparison of image palettes and is therefore very efficient in retrieving one of the two universally accepted image formats on the Web, 'gif'. In gif images, the color palette is contained in the header, so it is only necessary to retrieve the header information rather than the whole image, thus making the approach very efficient.
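
    The abstract describes palette-based comparison of GIF images but not the exact similarity measure. The following sketch, using Pillow, reads each GIF palette, coarsely quantizes it, and scores two images by the Jaccard overlap of their palettes; the quantization level and the overlap measure are assumptions for illustration, not the WISE algorithm itself.

```python
from PIL import Image

def gif_palette(path, levels=4):
    """Read the GIF color palette and coarsely quantize each RGB entry,
    so that near-identical colors compare as equal."""
    palette = Image.open(path).getpalette() or []
    step = 256 // levels
    rgb = zip(palette[0::3], palette[1::3], palette[2::3])
    return {(r // step, g // step, b // step) for r, g, b in rgb}

def palette_similarity(path_a, path_b):
    """Jaccard overlap of the two quantized palettes (0 = disjoint, 1 = identical)."""
    a, b = gif_palette(path_a), gif_palette(path_b)
    return len(a & b) / len(a | b) if (a | b) else 0.0

# Hypothetical usage: compare a target image against a crawled candidate
# print(palette_similarity("target.gif", "candidate.gif"))
```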

  17. Equipped Search Results Using Machine Learning from Web Databases

    OpenAIRE

    Ahmed Mudassar Ali; Ramakrishnan, M.

    2015-01-01

    The aim of this study is to form clusters of search results based on similarity and to assign meaningful labels to them. Database-driven web pages play a vital role in multiple domains like online shopping, e-education systems, cloud computing and others. Such databases are accessible through HTML forms and user interfaces. They return result pages that come from the underlying databases according to the nature of the user query. Such databases are termed Web Databases (WDB). Web databases have ...

  18. Eye Movements Reveal How Task Difficulty Moulds Visual Search

    Science.gov (United States)

    Young, Angela H.; Hulleman, Johan

    2013-01-01

    In two experiments we investigated the relationship between eye movements and performance in visual search tasks of varying difficulty. Experiment 1 provided evidence that a single process is used for search among static and moving items. Moreover, we estimated the functional visual field (FVF) from the gaze coordinates and found that its size…

  19. Undergraduate Students Searching and Reading Web Sources for Writing

    Science.gov (United States)

    Li, Yongyan

    2012-01-01

    With the Internet-evoked paradigm shift in the academy, there has been a growing interest in students' Web-based information-seeking and source-use practices. Nevertheless, little is known as to how individual students go about searching for sources online and selecting source material for writing particular assignments. This exploratory study…

  20. Museum Web search behavior of special interest visitors

    DEFF Research Database (Denmark)

    Skov, Mette; Ingwersen, Peter

    2014-01-01

    There is a current trend to make museum collections widely accessible by digitising cultural heritage collections for the Internet. The present study takes a user perspective and explores the characteristics of online museum visitors' web search behaviour. A combination of quantitative and qualit...

  1. Analysis of Scifinder Scholar and Web of Science Citation Searches.

    Science.gov (United States)

    Whitley, Katherine M.

    2002-01-01

    With "Chemical Abstracts" and "Science Citation Index" both now available for citation searching, this study compares the duplication and uniqueness of citing references for works of chemistry researchers for the years 1999-2001. The two indexes cover very similar source material. This analysis of SciFinder Scholar and Web of…

  2. Raising Reliability of Web Search Tool Research through Replication and Chaos Theory

    OpenAIRE

    Nicholson, Scott

    1999-01-01

    Because the World Wide Web is a dynamic collection of information, the Web search tools (or "search engines") that index the Web are dynamic. Traditional information retrieval evaluation techniques may not provide reliable results when applied to the Web search tools. This study is the result of ten replications of the classic 1996 Ding and Marchionini Web search tool research. It explores the effects that replication can have on transforming unreliable results from one iteration into replica...

  3. Deep Web Search Interface Identification: A Semi-Supervised Ensemble Approach

    Directory of Open Access Journals (Sweden)

    Hong Wang

    2014-12-01

    To surface the Deep Web, one crucial task is to predict whether a given web page has a search interface (a searchable HyperText Markup Language (HTML) form) or not. Previous studies have focused on supervised classification with labeled examples. However, labeled data are scarce, hard to get and require tedious manual work, while unlabeled HTML forms are abundant and easy to obtain. In this research, we consider the plausibility of using both labeled and unlabeled data to train better models to identify search interfaces more effectively. We present a semi-supervised co-training ensemble learning approach using both neural networks and decision trees to deal with the search interface identification problem. We show that the proposed model outperforms previous methods using only labeled data. We also show that adding unlabeled data improves the effectiveness of the proposed model.
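
    The paper's specific co-training procedure is not detailed in the abstract; the sketch below shows one common form of the idea, pairing a decision tree and a small neural network that repeatedly add confidently and consistently pseudo-labeled examples to the training pool. The synthetic features, the confidence threshold, and the use of a single feature view for both learners are simplifying assumptions.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

# Synthetic stand-in for HTML-form features (e.g. number of text inputs,
# presence of a submit button, keyword hits in labels, ...)
X_labeled = rng.normal(size=(40, 6))
y_labeled = (X_labeled[:, 0] + X_labeled[:, 1] > 0).astype(int)
X_unlabeled = rng.normal(size=(400, 6))

tree = DecisionTreeClassifier(max_depth=4, random_state=0)
net = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)

X_train, y_train = X_labeled.copy(), y_labeled.copy()
for _ in range(5):  # a few co-training rounds
    tree.fit(X_train, y_train)
    net.fit(X_train, y_train)
    if len(X_unlabeled) == 0:
        break
    # Each learner labels the unlabeled forms; keep examples where both
    # agree with high confidence and add them to the training pool.
    p_tree = tree.predict_proba(X_unlabeled)
    p_net = net.predict_proba(X_unlabeled)
    confident = ((p_tree.max(axis=1) > 0.9) & (p_net.max(axis=1) > 0.9)
                 & (p_tree.argmax(axis=1) == p_net.argmax(axis=1)))
    if not confident.any():
        break
    X_train = np.vstack([X_train, X_unlabeled[confident]])
    y_train = np.concatenate([y_train, p_tree.argmax(axis=1)[confident]])
    X_unlabeled = X_unlabeled[~confident]

print("final training-set size:", len(X_train))
```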

  4. Database selection and result merging in P2P web search

    NARCIS (Netherlands)

    Chernov, S.; Serdyukov, Pavel; Bender, M.; Michel, S.; Weikum, G.; Zimmer, C.

    Intelligent Web search engines are extremely popular now. Currently, only the commercial centralized search engines like Google can process terabytes of Web data. Alternative search engines fulfilling collaborative Web search on a voluntary basis are usually based on a blooming Peer-to-Peer (P2P)

  5. Nowcasting Mobile Games Ranking Using Web Search Query Data

    Directory of Open Access Journals (Sweden)

    Yoones A. Sekhavat

    2016-01-01

    In recent years, the Internet has become embedded in the purchasing decisions of consumers. The purpose of this paper is to study whether the Internet behavior of users correlates with their actual behavior in the computer games market. Rather than proposing the most accurate model for computer game sales, we aim to investigate to what extent web search query data can be exploited to nowcast (a contraction of "now" and "forecasting", referring to techniques used to make short-term forecasts, i.e., to predict the present status) the ranking of mobile games in the world. Google search query data is used for this purpose, since this data can provide a real-time view of the topics of interest. Various statistical techniques are used to show the effectiveness of using web search query data to nowcast mobile games rankings.
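
    At its simplest, this kind of nowcasting question can be posed as a correlation between a query-volume series and a ranking series. The sketch below uses made-up weekly numbers purely for illustration and does not reproduce the statistical techniques used in the paper.

```python
import numpy as np

# Hypothetical weekly series: normalized search-query volume for a game
# title and the game's observed store ranking (lower rank = more popular).
query_volume = np.array([42, 48, 55, 61, 70, 66, 59, 52, 47, 45], dtype=float)
store_rank = np.array([30, 27, 22, 18, 12, 14, 17, 21, 25, 26], dtype=float)

# Pearson correlation; a strong negative value means rising query volume
# coincides with a better (numerically lower) ranking.
r = np.corrcoef(query_volume, store_rank)[0, 1]
print(f"correlation between query volume and rank: {r:.2f}")
```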

  6. Has Retrieval Technology in Vertical Site Search Systems Improved over the Years? A Holistic Evaluation for Real Web Systems

    Directory of Open Access Journals (Sweden)

    Mandl, Thomas

    2015-12-01

    Evaluation of retrieval systems is mostly limited to laboratory settings and rarely considers changes of performance over time. This article presents an evaluation of retrieval systems for internal Web site search between the years 2006 and 2011. A holistic evaluation methodology for real Web sites was developed which includes tests for functionality, search quality, and user interaction. Among other sites, one set of 20 Web site search systems was evaluated three times in different years, and no substantial improvement could be shown. It is surprising that the communication between site and user still leads to very poor results in many cases. Overall, the quality of these search systems could be improved, and several areas for improvement are apparent from our evaluation. For comparison, Google's site search function was also tested with the same tasks.

  7. The Impact of Web Search Engines on Subject Searching in OPAC

    Directory of Open Access Journals (Sweden)

    Holly Yu

    2017-09-01

    This paper analyzes the results of transaction logs at California State University, Los Angeles (CSULA) and studies the effects of implementing a Web-based OPAC along with interface changes. The authors find that user success in subject searching remains problematic. A major increase in the frequency of searches that would have been more successful in resources other than the library catalog is noted over the time period 2000-2002. The authors attribute this increase to the prevalence of Web search engines and suggest that metasearching, relevance-ranked results, and relevance feedback ("more like this") are now expected in user searching and should be integrated into online catalogs as search options.

  8. Start Your Search Engines. Part One: Taming Google--and Other Tips to Master Web Searches

    Science.gov (United States)

    Adam, Anna; Mowers, Helen

    2008-01-01

    There are a lot of useful tools on the Web, all those social applications, and the like. Still, most people go online for one thing: to perform a basic search. For most fact-finding missions, the Web is there. But, as media specialists well know, the sheer wealth of online information can hamper efforts to focus on a few reliable references.…

  9. Academic Users' Information Searching on Research Topics: Characteristics of Research Tasks and Search Strategies

    Science.gov (United States)

    Du, Jia Tina; Evans, Nina

    2011-01-01

    This project investigated how academic users search for information on their real-life research tasks. This article presents the findings of the first of two studies. The study data were collected in the Queensland University of Technology (QUT) in Brisbane, Australia. Eleven PhD students' searching behaviors on personal research topics were…

  10. Source evaluation of domain experts and novices during Web search

    OpenAIRE

    Brand-Gruwel, Saskia; Kammerer, Yvonne; Van Meeuwen, Ludo; Van Gog, Tamara

    2018-01-01

    Nowadays, almost everyone uses the World Wide Web (WWW) to search for information of any kind. In education, students frequently use the WWW for selecting information to accomplish assignments such as writing an essay or preparing a presentation. The evaluation of sources and information is an important sub-skill in this process. But many students have not yet optimally developed this skill. On the basis of verbal reports, eye-tracking data, and navigation logs this study investigated how nov...

  11. Reference frame congruency in search-and-rescue tasks.

    Science.gov (United States)

    Pavlovic, Nada J; Keillor, Jocelyn; Hollands, Justin G; Chignell, Mark H

    2009-04-01

    Our aim was to investigate how the congruency between visual displays and auditory cues affects performance on various spatial tasks. Previous studies have demonstrated that spatial auditory cues, when combined with visual displays, can enhance performance and decrease workload. However, this facilitation was achieved only when auditory cues shared a common reference frame (RF) with the visual display. In complex and dynamic environments, such as airborne search and rescue (SAR), it is often difficult to ensure such congruency. In a simulated SAR operation, participants performed three spatial tasks: target search, target localization, and target recall. The interface consisted of the camera view of the terrain from the aircraft-mounted sensor, a map of the area flown over, a joystick that controlled the sensor, and a mouse. Auditory cues were used to indicate target location. While flying in the scenario, participants searched for targets, identified their locations in one of two coordinate systems, and memorized their location relative to the terrain layout. Congruent cues produced the fastest and most accurate performance. Performance advantages were observed even with incongruent cues relative to neutral cues, and egocentric cues were more effective than exocentric cues. Although the congruent cues are most effective, in cases in which the same cue is used across spatial tasks, egocentric cues are a better choice than exocentric cues. Egocentric auditory cues should be used in display design for tasks that involve RF transformations, such as SAR, air traffic control, and unmanned aerial vehicle operations.

  12. Mining social media and web searches for disease detection.

    Science.gov (United States)

    Yang, Y Tony; Horneffer, Michael; DiLisio, Nicole

    2013-04-28

    Web-based social media is increasingly being used across different settings in the health care industry. The increased frequency in the use of the Internet via computer or mobile devices provides an opportunity for social media to be the medium through which people can be provided with valuable health information quickly and directly. While traditional methods of detection relied predominately on hierarchical or bureaucratic lines of communication, these often failed to yield timely and accurate epidemiological intelligence. New web-based platforms promise increased opportunities for a more timely and accurate spreading of information and analysis. This article aims to provide an overview and discussion of the availability of timely and accurate information. It is especially useful for the rapid identification of an outbreak of an infectious disease that is necessary to promptly and effectively develop public health responses. These web-based platforms include search queries, data mining of web and social media, process and analysis of blogs containing epidemic key words, text mining, and geographical information system data analyses. These new sources of analysis and information are intended to complement traditional sources of epidemic intelligence. Despite the attractiveness of these new approaches, further study is needed to determine the accuracy of blogger statements, as increases in public participation may not necessarily mean the information provided is more accurate.

  13. REPTREE CLASSIFIER FOR IDENTIFYING LINK SPAM IN WEB SEARCH ENGINES

    Directory of Open Access Journals (Sweden)

    S.K. Jayanthi

    2013-01-01

    Search engines are used for retrieving information from the web. Most of the time, importance is placed on the top 10 results, which sometimes shrinks to the top 5, because of time constraints and reliance on the search engines. Users believe that the top 10 or top 5 of the total results are the most relevant. Here the problem of spamdexing arises: it is a method of deceiving search result quality. Falsified metrics, such as inserting an enormous number of keywords or links in a website, may take that website into the top 10 or top 5 positions. This paper proposes a classifier based on RepTree (a regression tree representative). As an initial step, link-based features such as neighbors, PageRank, truncated PageRank, TrustRank and assortativity-related attributes are inferred. Based on these features, a tree is constructed. The tree uses the feature inference to differentiate spam sites from legitimate sites. The WEBSPAM-UK-2007 dataset is taken as a base. It is preprocessed and converted into five datasets: FEATA, FEATB, FEATC, FEATD and FEATE. Only link-based features are taken for the experiments; this paper focuses on link spam alone. Finally, a representative tree is created which more precisely classifies the web spam entries. Results are given. Regression tree classification seems to perform well, as shown through the experiments.
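
    Weka's REPTree is not available in Python, so the sketch below uses scikit-learn's DecisionTreeClassifier as a stand-in, trained on synthetic link-based features in the spirit of the ones listed above; the feature distributions and dataset are invented for illustration and are unrelated to WEBSPAM-UK-2007.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import classification_report

rng = np.random.default_rng(1)

# Synthetic link-based features per host: [in-degree, PageRank, truncated
# PageRank, TrustRank, assortativity].  Spam hosts get an inflated in-degree
# and depressed TrustRank, purely for illustration.
n = 500
legit = np.column_stack([rng.poisson(20, n), rng.random(n) * 0.01,
                         rng.random(n) * 0.01, rng.random(n), rng.normal(0, 1, n)])
spam = np.column_stack([rng.poisson(80, n), rng.random(n) * 0.01,
                        rng.random(n) * 0.005, rng.random(n) * 0.3, rng.normal(0, 1, n)])
X = np.vstack([legit, spam])
y = np.array([0] * n + [1] * n)  # 0 = legitimate, 1 = link spam

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=1)
clf = DecisionTreeClassifier(max_depth=5, random_state=1).fit(X_tr, y_tr)
print(classification_report(y_te, clf.predict(X_te), target_names=["legit", "spam"]))
```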

  14. Web Feet Guide to Search Engines: Finding It on the Net.

    Science.gov (United States)

    Web Feet, 2001

    2001-01-01

    This guide to search engines for the World Wide Web discusses selecting the right search engine; interpreting search results; major search engines; online tutorials and guides; search engines for kids; specialized search tools for various subjects; and other specialized engines and gateways. (LRW)

  15. Ontologies for ship's flag search on the Web

    Science.gov (United States)

    Gkinos, Nikolaos-Panagiotis; Triakosaris, Evangelos; Galiotou, Eleni

    2015-02-01

    In this paper, we discuss issues in the representation and search of maritime data on the semantic web. In particular, we are concerned with data related to the requirements that are put forward by ship registries and that should be met in view of a ship's flag acquisition. The choice of a ship's flag is a matter of crucial importance since it is the basis for tax payment and for the enforcement of the rules and regulations attached to the flying of flags. In our approach, flags and the corresponding requirements are represented in the form of an ontology. Data "opening" and linking with the use of the ontology in question will facilitate a ship's flag search based on its particular characteristics and will provide useful information on ship registration requirements.

  16. Novel web service selection model based on discrete group search.

    Science.gov (United States)

    Zhai, Jie; Shao, Zhiqing; Guo, Yi; Zhang, Haiteng

    2014-01-01

    In our earlier work, we presented a novel formal method for the semiautomatic verification of specifications and for describing web service composition components by using abstract concepts. After verification, the instantiations of components were selected to satisfy the complex service performance constraints. However, selecting an optimal instantiation, which comprises different candidate services for each generic service, from a large number of instantiations is difficult. Therefore, we present a new evolutionary approach based on the discrete group search service (D-GSS) model. With regard to obtaining the optimal multiconstraint instantiation of the complex component, the D-GSS model has competitive performance compared with other service selection models in terms of accuracy, efficiency, and ability to solve high-dimensional service composition component problems. We propose the cost function and the discrete group search optimizer (D-GSO) algorithm and study the convergence of the D-GSS model through verification and test cases.

  17. The invisible Web uncovering information sources search engines can't see

    CERN Document Server

    Sherman, Chris

    2001-01-01

    Enormous expanses of the Internet are unreachable with standard web search engines. This book provides the key to finding these hidden resources by identifying how to uncover and use invisible web resources. Mapping the invisible Web, when and how to use it, assessing the validity of the information, and the future of Web searching are topics covered in detail. Only 16 percent of Net-based information can be located using a general search engine. The other 84 percent is what is referred to as the invisible Web, made up of information stored in databases. Unlike pages on the visible Web, informa

  18. Search of the Deep and Dark Web via DARPA Memex

    Science.gov (United States)

    Mattmann, C. A.

    2015-12-01

    Search has progressed through several stages due to the increasing size of the Web. Search engines first focused on text and its rate of occurrence; then on the notion of link analysis and citation; then on interactivity and guided search; and now on the use of social media - who we interact with, what we comment on, and who we follow (and who follows us). The next stage, referred to as "deep search," requires solutions that can bring together text, images, video, importance, interactivity, and social media to solve this challenging problem. The Apache Nutch project provides an open framework for large-scale, targeted, vertical search with capabilities to support all past and potential future search engine foci. Nutch is a flexible infrastructure allowing open access to ranking, to URL selection and filtering approaches, and to the link graph generated from search, and Nutch has spawned entire sub-communities including Apache Hadoop and Apache Tika. It addresses many current needs with the capability to support new technologies such as image and video. On the DARPA Memex project, we are creating specific extensions to Nutch that will directly improve its overall technological superiority for search and that will directly allow us to address complex search problems including human trafficking. We are integrating state-of-the-art algorithms developed by Kitware for IARPA Aladdin, combined with work by Harvard, to provide image and video understanding support, allowing automatic detection of people and things and massive deployment via Nutch. We are expanding Apache Tika for scene understanding, object/person detection and classification in images/video. We are delivering an interactive and visual interface for initiating Nutch crawls. The interface uses Python technologies to expose Nutch data and to provide a domain-specific language for crawls. With the Bokeh visualization library, the interface delivers simple interactive crawl visualization and

  19. Assessment and Comparison of Search capabilities of Web-based Meta-Search Engines: A Checklist Approach

    OpenAIRE

    Alireza Isfandiyari Moghadam; Zohreh Bahari Mova’fagh

    2010-01-01

    The present investigation concerns evaluation, comparison and analysis of the search options existing within web-based meta-search engines. 64 meta-search engines were identified. 19 meta-search engines that were free, accessible and compatible with the objectives of the present study were selected. An author-constructed checklist was used for data collection. Findings indicated that all meta-search engines studied used the AND operator, phrase search, number of results displayed setting, pr...

  20. Enabling task-based information prioritization via semantic web encodings

    Science.gov (United States)

    Michaelis, James R.

    2016-05-01

    Modern Soldiers rely upon accurate and actionable information technology to achieve mission objectives. While increasingly rich sensor networks for Areas of Operation (AO) can offer many directions for aiding Soldiers, limitations are imposed by current tactical edge systems on the rate at which content can be transmitted. Furthermore, mission tasks will often require very specific sets of information which may easily be drowned out by other content sources. Prior research on Quality and Value of Information (QoI/VoI) has aimed to define ways to prioritize information objects based on their intrinsic attributes (QoI) and perceived value to a consumer (VoI). As part of this effort, established ranking approaches for obtaining Subject Matter Expert (SME) recommendations, such as the Analytic Hierarchy Process (AHP), have been considered. However, limited work has been done to tie Soldier context, such as descriptions of their mission and tasks, back to intrinsic attributes of information objects. As a first step toward addressing the above challenges, this work introduces an ontology-backed approach, rooted in Semantic Web publication practices, for expressing both AHP decision hierarchies and corresponding SME feedback. Following a short discussion of related QoI/VoI research, an ontology-based data structure is introduced for supporting evaluation of Information Objects, using AHP rankings designed to facilitate information object prioritization. Consistent with alternative AHP approaches, prioritization in this approach is based on pairwise comparisons between Information Objects with respect to established criteria, as well as on pairwise comparison of the criteria to assess their relative importance. The paper concludes with a discussion of both ongoing and future work.
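
    For readers unfamiliar with AHP, the sketch below shows the standard weight derivation from a pairwise comparison matrix using the geometric-mean approximation, together with a basic consistency check. The three criteria are hypothetical; the paper's ontology encoding of hierarchies and SME feedback is not reproduced here.

```python
import numpy as np

# Pairwise comparison matrix over three hypothetical criteria, e.g.
# timeliness, provenance, and resolution.  A[i, j] states how much more
# important criterion i is than criterion j on the usual 1-9 AHP scale.
A = np.array([[1.0,  3.0, 5.0],
              [1/3., 1.0, 2.0],
              [1/5., 1/2., 1.0]])

# Geometric-mean approximation of the principal eigenvector.
gm = A.prod(axis=1) ** (1.0 / A.shape[0])
weights = gm / gm.sum()
print("criterion weights:", np.round(weights, 3))

# Consistency check: lambda_max close to n means the judgments are coherent.
lambda_max = (A @ weights / weights).mean()
ci = (lambda_max - A.shape[0]) / (A.shape[0] - 1)
print("consistency index:", round(ci, 3))
```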

  1. Ontology-Driven Search and Triage: Design of a Web-Based Visual Interface for MEDLINE.

    Science.gov (United States)

    Demelo, Jonathan; Parsons, Paul; Sedig, Kamran

    2017-02-02

    Diverse users need to search health and medical literature to satisfy open-ended goals such as making evidence-based decisions and updating their knowledge. However, doing so is challenging due to at least two major difficulties: (1) articulating information needs using accurate vocabulary and (2) dealing with large document sets returned from searches. Common search interfaces such as PubMed do not provide adequate support for exploratory search tasks. Our objective was to improve support for exploratory search tasks by combining two strategies in the design of an interactive visual interface by (1) using a formal ontology to help users build domain-specific knowledge and vocabulary and (2) providing multi-stage triaging support to help mitigate the information overload problem. We developed a Web-based tool, Ontology-Driven Visual Search and Triage Interface for MEDLINE (OVERT-MED), to test our design ideas. We implemented a custom searchable index of MEDLINE, which comprises approximately 25 million document citations. We chose a popular biomedical ontology, the Human Phenotype Ontology (HPO), to test our solution to the vocabulary problem. We implemented multistage triaging support in OVERT-MED, with the aid of interactive visualization techniques, to help users deal with large document sets returned from searches. Formative evaluation suggests that the design features in OVERT-MED are helpful in addressing the two major difficulties described above. Using a formal ontology seems to help users articulate their information needs with more accurate vocabulary. In addition, multistage triaging combined with interactive visualizations shows promise in mitigating the information overload problem. Our strategies appear to be valuable in addressing the two major problems in exploratory search. Although we tested OVERT-MED with a particular ontology and document collection, we anticipate that our strategies can be transferred successfully to other contexts.

  2. Web search queries can predict stock market volumes.

    Science.gov (United States)

    Bordino, Ilaria; Battiston, Stefano; Caldarelli, Guido; Cristelli, Matthieu; Ukkonen, Antti; Weber, Ingmar

    2012-01-01

    We live in a computerized and networked society where many of our actions leave a digital trace and affect other people's actions. This has led to the emergence of a new data-driven research field: mathematical methods of computer science, statistical physics and sociometry provide insights on a wide range of disciplines ranging from social science to human mobility. A recent important discovery is that search engine traffic (i.e., the number of requests submitted by users to search engines on the www) can be used to track and, in some cases, to anticipate the dynamics of social phenomena. Successful examples include unemployment levels, car and home sales, and epidemic spreading. A few recent works have applied this approach to stock prices and market sentiment. However, it remains unclear whether trends in financial markets can be anticipated by the collective wisdom of on-line users on the web. Here we show that daily trading volumes of stocks traded in NASDAQ-100 are correlated with daily volumes of queries related to the same stocks. In particular, query volumes anticipate in many cases peaks of trading by one day or more. Our analysis is carried out on a unique dataset of queries, submitted to an important web search engine, which enables us to also investigate user behavior. We show that the query volume dynamics emerge from the collective but seemingly uncoordinated activity of many users. These findings contribute to the debate on the identification of early warnings of financial systemic risk, based on the activity of users of the www.
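
    The one-day lead relationship described here can be illustrated with a lagged correlation between a query-volume series and a trading-volume series. The sketch below uses synthetic data constructed so that queries lead trading by one day; it is not the paper's dataset or methodology.

```python
import numpy as np

# Hypothetical daily series: query volume for a ticker and the trading
# volume, constructed so that queries lead trading by one day.
rng = np.random.default_rng(7)
query_volume = rng.poisson(1000, 60).astype(float)
trading_volume = 50.0 * np.roll(query_volume, 1) + rng.normal(0, 500, 60)

def lagged_corr(x, y, lag):
    """Correlation between x and y shifted so that x leads y by `lag` days."""
    if lag > 0:
        return np.corrcoef(x[:-lag], y[lag:])[0, 1]
    return np.corrcoef(x, y)[0, 1]

for lag in range(0, 4):
    print(f"lag {lag} day(s): r = {lagged_corr(query_volume, trading_volume, lag):.2f}")
```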

  3. Web search queries can predict stock market volumes.

    Directory of Open Access Journals (Sweden)

    Ilaria Bordino

    We live in a computerized and networked society where many of our actions leave a digital trace and affect other people's actions. This has led to the emergence of a new data-driven research field: mathematical methods of computer science, statistical physics and sociometry provide insights on a wide range of disciplines ranging from social science to human mobility. A recent important discovery is that search engine traffic (i.e., the number of requests submitted by users to search engines on the www) can be used to track and, in some cases, to anticipate the dynamics of social phenomena. Successful examples include unemployment levels, car and home sales, and epidemic spreading. A few recent works have applied this approach to stock prices and market sentiment. However, it remains unclear whether trends in financial markets can be anticipated by the collective wisdom of on-line users on the web. Here we show that daily trading volumes of stocks traded in NASDAQ-100 are correlated with daily volumes of queries related to the same stocks. In particular, query volumes anticipate in many cases peaks of trading by one day or more. Our analysis is carried out on a unique dataset of queries, submitted to an important web search engine, which enables us to also investigate user behavior. We show that the query volume dynamics emerge from the collective but seemingly uncoordinated activity of many users. These findings contribute to the debate on the identification of early warnings of financial systemic risk, based on the activity of users of the www.

  4. Discovering How Students Search a Library Web Site: A Usability Case Study.

    Science.gov (United States)

    Augustine, Susan; Greene, Courtney

    2002-01-01

    Discusses results of a usability study at the University of Illinois Chicago that investigated whether Internet search engines have influenced the way students search library Web sites. Results show students use the Web site's internal search engine rather than navigating through the pages; have difficulty interpreting library terminology; and…

  5. Curating the Web: Building a Google Custom Search Engine for the Arts

    Science.gov (United States)

    Hennesy, Cody; Bowman, John

    2008-01-01

    Google's first foray onto the web made search simple and results relevant. With its Co-op platform, Google has taken another step toward dramatically increasing the relevancy of search results, further adapting the World Wide Web to local needs. Google Custom Search Engine, a tool on the Co-op platform, puts one in control of his or her own search…

  6. Federated Search and the Library Web Site: A Study of Association of Research Libraries Member Web Sites

    Science.gov (United States)

    Williams, Sarah C.

    2010-01-01

    The purpose of this study was to investigate how federated search engines are incorporated into the Web sites of libraries in the Association of Research Libraries. In 2009, information was gathered for each library in the Association of Research Libraries with a federated search engine. This included the name of the federated search service and…

  7. Assessment and Comparison of Search capabilities of Web-based Meta-Search Engines: A Checklist Approach

    Directory of Open Access Journals (Sweden)

    Alireza Isfandiyari Moghadam

    2010-03-01

    Full Text Available The present investigation concerns the evaluation, comparison and analysis of the search options available in web-based meta-search engines. 64 meta-search engines were identified. 19 meta-search engines that were free, accessible and compatible with the objectives of the present study were selected. An author-constructed checklist was used for data collection. Findings indicated that all the meta-search engines studied supported the AND operator, phrase searching, setting the number of results displayed, storage of previous search queries, and help tutorials. Nevertheless, none of them offered options for hypertext searching or for displaying the size of the pages searched. 94.7% supported features such as truncation, keyword-in-title and URL searching, and text summary display. The checklist used in the study could serve as a model for investigating search options in search engines, digital libraries and other internet search tools.

  8. Proceedings of the ECIR 2012 Workshop on Task-Based and Aggregated Search (TBAS2012)

    DEFF Research Database (Denmark)

    2012-01-01

    Task-based search aims to understand the user's current task and desired outcomes, and how this may provide useful context for the Information Retrieval (IR) process. An example of task-based search is a situation where additional user information on e.g. the purpose of the search or what the user...

  9. Visual search in unilateral spatial neglect: The effects of distractors on a dynamic visual search task.

    Science.gov (United States)

    Emerson, Rebeca Lauren; García-Molina, Alberto; López Carballo, Jaume; García Fernández, Juan; Aparicio-López, Celeste; Novo, Junquera; Sánchez-Carrión, Rocío; Enseñat-Cantallops, Antonia; Peña-Casanova, Jordi

    2018-02-22

    The objective of this study was to examine visual scanning performance in patients with Unilateral Spatial Neglect (USN) in a visual search task. Thirty-one right hemisphere stroke patients with USN were recruited. They performed a dynamic visual search task with two conditions, with and without distractors, while eye movements were monitored with an eye-tracker. The main goal of the task was to select target stimuli that appeared from the top of the screen and moved vertically downward. Target detection and visual scanning percentage were assessed over two hemispaces (right, left) on two conditions (distractor, no distractor). Most Scanned Regions (MSR) were calculated to analyze the areas of the screen to which most points of fixation were directed. Higher target detection rates and visual scanning percentages were found in the right hemispace in both conditions. From the MSRs we found that participants with a center of attention further to the right of the screen also presented smaller overall MSRs. Right hemisphere stroke patients with USN presented not only a significant rightward bias but also reduced overall search areas, implying that hyperattention restricts search not only along the horizontal (right-left) axis but also along the vertical (top-bottom) axis.

  10. Research on Web Search Behavior: How Online Query Data Inform Social Psychology.

    Science.gov (United States)

    Lai, Kaisheng; Lee, Yan Xin; Chen, Hao; Yu, Rongjun

    2017-10-01

    The widespread use of web searches in daily life has allowed researchers to study people's online social and psychological behavior. Using web search data has advantages in terms of data objectivity, ecological validity, temporal resolution, and unique application value. This review integrates existing studies on web search data that have explored topics including sexual behavior, suicidal behavior, mental health, social prejudice, social inequality, public responses to policies, and other psychosocial issues. These studies are categorized as descriptive, correlational, inferential, predictive, and policy evaluation research. The integration of theory-based hypothesis testing in future web search research will result in even stronger contributions to social psychology.

  11. Formal Concept Analysis for Arabic Web Search Results Clustering

    Directory of Open Access Journals (Sweden)

    Issam Sahmoudi

    2017-04-01

    Presenting the results of a user request as a ranked list is time-consuming to browse, and this style of presentation is not user-friendly. In this paper, we propose to study how to integrate and adapt Formal Concept Analysis (FCA) as a new system for Arabic Web Search Results Clustering based on their hierarchical structure. The effectiveness of our proposed system is illustrated by an experimental study using a comprehensive set of Arabic documents from the Open Directory Project hierarchy as a benchmark, where we compare our system with two others: Suffix Tree Clustering (STC) and Lingo. The comparison focuses on the quality of the clustering results and of the labels produced by the different systems. It shows that our system outperforms the two others.

  12. Multimodal graph-based reranking for web image search.

    Science.gov (United States)

    Wang, Meng; Li, Hao; Tao, Dacheng; Lu, Ke; Wu, Xindong

    2012-11-01

    This paper introduces a web image search reranking approach that explores multiple modalities in a graph-based learning scheme. Different from the conventional methods that usually adopt a single modality or integrate multiple modalities into a long feature vector, our approach can effectively integrate the learning of relevance scores, weights of modalities, and the distance metric and its scaling for each modality into a unified scheme. In this way, the effects of different modalities can be adaptively modulated and better reranking performance can be achieved. We conduct experiments on a large dataset that contains more than 1000 queries and 1 million images to evaluate our approach. Experimental results demonstrate that the proposed reranking approach is more robust than using each individual modality, and it also performs better than many existing methods.

  13. Task Specificity and the Influence of Memory on Visual Search: Comment on Vo and Wolfe (2012)

    Science.gov (United States)

    Hollingworth, Andrew

    2012-01-01

    Recent results from Vo and Wolfe (2012b) suggest that the application of memory to visual search may be task specific: Previous experience searching for an object facilitated later search for that object, but object information acquired during a different task did not appear to transfer to search. The latter inference depended on evidence that a…

  14. A Search for Understanding. Analysis of Human Performance on Target Acquisition and Search Tasks using Eyetracker Data.

    Science.gov (United States)

    1995-06-01

    Jeffrey F. Nicoll and David H. Hsu. A Search for Understanding: Analysis of Human Performance on Target Acquisition and Search Tasks Using Eyetracker Data. IDA Paper P-3036, June 1995. Prepared for the Advanced ... Agency.

  15. A Web Browser Interface to Manage the Searching and Organizing of Information on the Web by Learners

    Science.gov (United States)

    Li, Liang-Yi; Chen, Gwo-Dong

    2010-01-01

    Information Gathering is a knowledge construction process. Web learners make a plan for their Information Gathering task based on their prior knowledge. The plan is evolved with new information encountered and their mental model is constructed through continuously assimilating and accommodating new information gathered from different Web pages. In…

  16. Darwin on the Web: The Evolution of Search Tools.

    Science.gov (United States)

    Vidmar, Dale J.

    1999-01-01

    Discusses various search strategies and tools that can be used for searching on the Internet, including search engines and search directories; Boolean searching; metasearching; relevancy ranking; automatic phrase detection; backlinks; natural-language searching; clustering and cataloging information; image searching; customization and portals;…

  17. DYNIQX: A novel meta-search engine for the web

    OpenAIRE

    Zhu, Jianhan; Song, Dawei; Eisenstadt, Marc; Barladeanu, Cristi; Rüger, Stefan

    2009-01-01

    The effect of metadata in collection fusion has not been sufficiently studied. In response to this, we present a novel meta-search engine called Dyniqx for metadata-based search. Dyniqx integrates search results from search services for documents, images, and videos to generate a unified list of ranked search results. Dyniqx exploits the availability of metadata in search services such as PubMed, Google Scholar, Google Image Search, and Google Video Search for fusing search results from...

  18. Semantic similarity measures in the biomedical domain by leveraging a web search engine.

    Science.gov (United States)

    Hsieh, Sheau-Ling; Chang, Wen-Yung; Chen, Chi-Huang; Weng, Yung-Ching

    2013-07-01

    Various studies of web-based semantic similarity measures have been carried out. However, measuring semantic similarity between two terms remains a challenging task. Traditional ontology-based methodologies have the limitation that both concepts must reside in the same ontology tree(s). Unfortunately, in practice, this assumption does not always hold. On the other hand, if the corpus is sufficiently adequate, corpus-based methodologies can overcome the limitation, and the web is a continuously and enormously growing corpus. Therefore, a method of estimating semantic similarity is proposed that exploits the page counts of two biomedical concepts returned by the Google AJAX web search engine. The features are extracted as the co-occurrence patterns of two given terms P and Q, obtained by querying P, Q, as well as P AND Q, together with the web search hit counts of the defined lexico-syntactic patterns. These similarity scores of different patterns are evaluated, by adapting support vector machines for classification, to leverage the robustness of semantic similarity measures. Experimental results validated against two datasets, dataset 1 provided by A. Hliaoutakis and dataset 2 provided by T. Pedersen, are presented and discussed. In dataset 1, the proposed approach achieves the best correlation coefficient (0.802) under SNOMED-CT. In dataset 2, the proposed method obtains the best correlation coefficient (SNOMED-CT: 0.705; MeSH: 0.723) with physician scores compared with measures of other methods. However, the correlation coefficients (SNOMED-CT: 0.496; MeSH: 0.539) with coder scores showed the opposite outcome. In conclusion, the semantic similarity findings of the proposed method are close to those of physicians' ratings. Furthermore, the study provides a cornerstone investigation for extracting fully relevant information from digitized, free-text medical records in the National Taiwan University Hospital database.
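    A minimal sketch of page-count-based co-occurrence scores of the kind used as features above; the full method also relies on lexico-syntactic patterns and a support vector machine, which are not reproduced here, and the hit counts below are hypothetical.

        import math

        def web_jaccard(n_p, n_q, n_pq, c=5):
            """Jaccard-style co-occurrence score from page counts of P, Q and 'P AND Q'."""
            if n_pq < c:          # ignore very rare co-occurrence
                return 0.0
            return n_pq / (n_p + n_q - n_pq)

        def web_pmi(n_p, n_q, n_pq, total_pages=1e10, c=5):
            """Pointwise-mutual-information-style score from the same counts."""
            if n_pq < c:
                return 0.0
            return math.log2((n_pq / total_pages) / ((n_p / total_pages) * (n_q / total_pages)))

        # Hypothetical hit counts returned by a search engine for two biomedical terms.
        n_p, n_q, n_pq = 1_200_000, 800_000, 150_000
        print(web_jaccard(n_p, n_q, n_pq))   # co-occurrence relative to either term
        print(web_pmi(n_p, n_q, n_pq))       # positive value: terms co-occur more than chance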

  19. A Semantic Web Application for the Air Tasking Order

    National Research Council Canada - National Science Library

    Frantz, Albert; Franco, Milvio

    2005-01-01

    .... We used existing Semantic Web tools to construct an ATO knowledge base. The knowledge base is used to select potential air missions to reassign to strike time sensitive targets by the computer...

  20. Students' Evaluation Strategies in a Web Research Task: Are They Sensitive to Relevance and Reliability?

    Science.gov (United States)

    Rodicio, Héctor García

    2015-01-01

    When searching and using resources on the Web, students have to evaluate Web pages in terms of relevance and reliability. This evaluation can be done in a more or less systematic way, by either considering deep or superficial cues of relevance and reliability. The goal of this study was to examine how systematic students are when evaluating Web…

  1. White Hat Search Engine Optimization (SEO): Structured Web Data for Libraries

    Directory of Open Access Journals (Sweden)

    Dan Scott

    2015-06-01

    Full Text Available “White hat” search engine optimization refers to the practice of publishing web pages that are useful to humans, while enabling search engines and web applications to better understand the structure and content of your website. This article teaches you to add structured data to your website so that search engines can more easily connect patrons to your library locations, hours, and contact information. A web page for a branch of the Greater Sudbury Public Library retrieved in January 2015 is used as the basis for examples that progressively enhance the page with structured data. Finally, some of the advantages structured data enables beyond search engine optimization are explored.
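    One common way to publish such structured data is schema.org markup embedded as JSON-LD; the sketch below generates a minimal Library description in Python. This is not the article's own example: the branch name, address, hours and URL are all placeholders.

        import json

        # All values are placeholders, not the real data of any library branch.
        library = {
            "@context": "http://schema.org",
            "@type": "Library",
            "name": "Example Branch Library",
            "telephone": "+1-555-555-0100",
            "address": {
                "@type": "PostalAddress",
                "streetAddress": "123 Example Street",
                "addressLocality": "Exampletown",
                "addressCountry": "CA",
            },
            "openingHours": ["Mo-Fr 09:00-20:00", "Sa 10:00-17:00"],
            "url": "https://library.example.org/branches/example",
        }

        # The JSON below would be embedded in the page inside a
        # <script type="application/ld+json"> element.
        print(json.dumps(library, indent=2))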

  2. A Picture is Worth a Thousand Keywords: Exploring Mobile Image-Based Web Searching

    OpenAIRE

    Konrad Tollmar; Ted Möller; Björn Nilsved

    2008-01-01

    Using images of objects as queries is a new approach to search for information on the Web. Image-based information retrieval goes beyond only matching images, as information in other modalities also can be extracted from data collections using an image search. We have developed a new system that uses images to search for web-based information. This paper has a particular focus on exploring users' experience of general mobile image-based web searches to find what issues and phenomena it contai...

  3. Self-Education through Web-Searching - An Exploratory Study

    Directory of Open Access Journals (Sweden)

    Răzvan-Alexandru Călin

    2015-10-01

    Full Text Available The 21st century is marked by extensive and easy access to information through the virtual environment. Does today's Romanian school offer a formative space that, on the one hand, facilitates maximal exploitation of these opportunities and, on the other hand, acts as a "sensor" for the new risks characteristic of the information era? Is the "digital generation" (Mark Prensky) of the beginning of the century in Romania ready from these perspectives? The present paper outlines the results of a comparative exploratory study of the methods ordinarily used by youngsters - from the 5th and 6th grades, as well as the 11th and 12th grades, in six different schools, high schools and colleges in Dolj county - to find information about different topics and homework. The results offer premises for hypotheses regarding this phenomenon at the national level. The conclusions indicate web searching as the main method of obtaining information. They emphasize the absence of specific initial educational training in this domain and allow the delineation of a suggestive picture of possible future methods of action.

  4. Task 28: Web Accessible APIs in the Cloud Trade Study

    Science.gov (United States)

    Gallagher, James; Habermann, Ted; Jelenak, Aleksandar; Lee, Joe; Potter, Nathan; Yang, Muqun

    2017-01-01

    This study explored three candidate architectures for serving NASA Earth Science Hierarchical Data Format Version 5 (HDF5) data via Hyrax running on Amazon Web Services (AWS). We studied the cost and performance for each architecture using several representative Use-Cases. The objectives of the project are: Conduct a trade study to identify one or more high performance integrated solutions for storing and retrieving NASA HDF5 and Network Common Data Format Version 4 (netCDF4) data in a cloud (web object store) environment. The target environment is Amazon Web Services (AWS) Simple Storage Service (S3). Conduct needed level of software development to properly evaluate solutions in the trade study and to obtain required benchmarking metrics for input into government decision of potential follow-on prototyping. Develop a cloud cost model for the preferred data storage solution (or solutions) that accounts for different granulation and aggregation schemes as well as cost and performance trades.

  5. Large-area sheet task advanced dendritic web growth development

    Science.gov (United States)

    Duncan, C. S.; Seidensticker, R. G.; Mchugh, J. P.

    1984-01-01

    The thermal models used for analyzing dendritic web growth and calculating the thermal stress were reexamined to establish the validity limits imposed by the assumptions of the models. Also, the effects of thermal conduction through the gas phase were evaluated and found to be small. New growth designs, both static and dynamic, were generated using the modeling results. Residual stress effects in dendritic web were examined. In the laboratory, new techniques for the control of temperature distributions in three dimensions were developed. A new maximum undeformed web width of 5.8 cm was achieved. A 58% increase in growth velocity of 150 micrometers thickness was achieved with dynamic hardware. The area throughput goals for transient growth of 30 and 35 sq cm/min were exceeded.

  6. Large area sheet task: Advanced dendritic web growth development

    Science.gov (United States)

    Duncan, C. S.; Seidensticker, R. G.; Mchugh, J. P.; Hopkins, R. H.; Meier, D.; Schruben, J.

    1981-01-01

    The growth of silicon dendritic web for photovoltaic applications was investigated. The application of a thermal model for calculating buckling stresses as a function of temperature profile in the web is discussed. Lid and shield concepts were evaluated to provide the data base for enhancing growth velocity. An experimental web growth machine which embodies in one unit the mechanical and electronic features developed in previous work was developed. In addition, evaluation of a melt level control system was begun, along with preliminary tests of an elongated crucible design. The economic analysis was also updated to incorporate some minor cost changes. The initial applications of the thermal model to a specific configuration gave results consistent with experimental observation in terms of the initiation of buckling vs. width for a given crystal thickness.

  7. Interactive Visualization and Navigation of Web Search Results Revealing Community Structures and Bridges

    OpenAIRE

    Sallaberry, Arnaud; Zaidi, Faraz; Pich, C.; Melançon, Guy

    2010-01-01

    With the information overload on the Internet, organization and visualization of web search results so as to facilitate faster access to information is a necessity. The classical methods present search results as an ordered list of web pages ranked in terms of relevance to the searched topic. Users thus have to scan text snippets or navigate through various pages before finding the required information. In this paper we present an interactive visualization system for c...

  8. The poor quality of information about laparoscopy on the World Wide Web as indexed by popular search engines.

    Science.gov (United States)

    Allen, J W; Finch, R J; Coleman, M G; Nathanson, L K; O'Rourke, N A; Fielding, G A

    2002-01-01

    This study was undertaken to determine the quality of information on the Internet regarding laparoscopy. Four popular World Wide Web search engines were used with the key word "laparoscopy." Advertisements, patient- or physician-directed information, and controversial material were noted. A total of 14,030 Web pages were found, but only 104 were unique Web sites. The majority of the sites were duplicate pages, subpages within a main Web page, or dead links. Twenty-eight of the 104 pages had a medical product for sale, 26 were patient-directed, 23 were written by a physician or group of physicians, and six represented corporations. The remaining 21 were "miscellaneous." The 46 pages containing educational material were critically reviewed. At least one of the senior authors found that 32 of the pages contained controversial or misleading statements. All of the three senior authors (LKN, NAO, GAF) independently agreed that 17 of the 46 pages contained controversial information. The World Wide Web is not a reliable source for patient or physician information about laparoscopy. Authenticating medical information on the World Wide Web is a difficult task, and no government or surgical society has taken the lead in regulating what is presented as fact on the World Wide Web.

  9. Using Heuristic Task Analysis to Create Web-Based Instructional Design Theory

    Science.gov (United States)

    Fiester, Herbert R.

    2010-01-01

    The first purpose of this study was to identify procedural and heuristic knowledge used when creating web-based instruction. The second purpose of this study was to develop suggestions for improving the Heuristic Task Analysis process, a technique for eliciting, analyzing, and representing expertise in cognitively complex tasks. Three expert…

  10. A study of medical and health queries to web search engines.

    Science.gov (United States)

    Spink, Amanda; Yang, Yin; Jansen, Jim; Nykanen, Pirrko; Lorence, Daniel P; Ozmutlu, Seda; Ozmutlu, H Cenk

    2004-03-01

    This paper reports findings from an analysis of medical or health queries to different web search engines. We report results: (i) comparing samples of 10,000 web queries taken randomly from 1.2 million query logs from the AlltheWeb.com and Excite.com commercial web search engines in 2001 for medical or health queries, (ii) comparing the 2001 findings from Excite and AlltheWeb.com users with results from a previous analysis of medical and health related queries from the Excite Web search engine for 1997 and 1999, and (iii) examining medical or health advice-seeking queries beginning with the word 'should'. Findings suggest: (i) a small percentage of web queries are medical or health related, (ii) the top five categories of medical or health queries were general health, weight issues, reproductive health and puberty, pregnancy/obstetrics, and human relationships, and (iii) over time, the medical and health queries may have declined as a proportion of all web queries, as the use of specialized medical/health websites and e-commerce-related queries has increased. Findings provide insights into medical and health-related web querying and suggest some implications for the use of general web search engines when seeking medical/health information.

  11. Identification and Analysis of Multi-tasking Product Information Search Sessions with Query Logs

    Directory of Open Access Journals (Sweden)

    Xiang Zhou

    2016-09-01

    Full Text Available Purpose: This research aims to identify product search tasks in online shopping and analyze the characteristics of consumer multi-tasking search sessions. Design/methodology/approach: The experimental dataset contains 8,949 queries of 582 users from 3,483 search sessions. A sequential comparison of the Jaccard similarity coefficient between two adjacent search queries and hierarchical clustering of queries are used to identify search tasks. Findings: (1) Users issued a similar number of queries (1.43 to 1.47) with similar lengths (7.3-7.6 characters) per task in mono-tasking and multi-tasking sessions, and (2) users spent more time on average in sessions with more tasks, but spent less time on each task as the number of tasks in a session increased. Research limitations: The task identification method, which relies only on query terms, does not completely reflect the complex nature of consumer shopping behavior. Practical implications: These results provide an exploratory understanding of the relationships among multiple shopping tasks, and can be useful for product recommendation and shopping task prediction. Originality/value: The originality of this research is its use of query clustering for online shopping task identification and analysis, and its analysis of product search session characteristics.
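    A minimal sketch, under assumed thresholds and toy queries, of the adjacent-query Jaccard comparison used as a task-boundary signal above (the hierarchical clustering step is not reproduced).

        def jaccard(q1, q2):
            a, b = set(q1.lower().split()), set(q2.lower().split())
            return len(a & b) / len(a | b) if a | b else 0.0

        def segment_session(queries, threshold=0.2):
            """Start a new task whenever an adjacent query pair falls below the threshold."""
            tasks, current = [], [queries[0]]
            for prev, cur in zip(queries, queries[1:]):
                if jaccard(prev, cur) >= threshold:
                    current.append(cur)
                else:
                    tasks.append(current)
                    current = [cur]
            tasks.append(current)
            return tasks

        session = ["wireless mouse", "wireless mouse bluetooth",
                   "running shoes men", "running shoes men size 10"]
        print(segment_session(session))
        # [['wireless mouse', 'wireless mouse bluetooth'],
        #  ['running shoes men', 'running shoes men size 10']]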

  12. Task specificity and the influence of memory on visual search: comment on Võ and Wolfe (2012).

    Science.gov (United States)

    Hollingworth, Andrew

    2012-12-01

    Recent results from Võ and Wolfe (2012b) suggest that the application of memory to visual search may be task specific: Previous experience searching for an object facilitated later search for that object, but object information acquired during a different task did not appear to transfer to search. The latter inference depended on evidence that a preview task did not improve later search, but Võ and Wolfe used a relatively insensitive, between-subjects design. Here, we replicated the Võ and Wolfe study using a within-subject manipulation of scene preview. A preview session (focused either on object location memory or on the assessment of object semantics) reliably facilitated later search. In addition, information acquired from distractors in a scene-facilitated search when the distractor later became the target. Instead of being strongly constrained by task, visual memory is applied flexibly to guide attention and gaze during visual search.

  13. The Effectiveness of Web Search Engines to Index New Sites from Different Countries

    Science.gov (United States)

    Pirkola, Ari

    2009-01-01

    Introduction: Investigates how effectively Web search engines index new sites from different countries. The primary interest is whether new sites are indexed equally or whether search engines are biased towards certain countries. If major search engines show biased coverage it can be considered a significant economic and political problem because…

  14. Effect of a Body Model on Performance in a Virtual Environment Search Task

    National Research Council Canada - National Science Library

    Singer, Michael

    1998-01-01

    ...) in training dismounted soldiers. This experiment investigated full body representation (generic) versus a hand linked pointer on movement performance in an office building interior during a search task...

  15. Ensemble learned vaccination uptake prediction using web search queries

    DEFF Research Database (Denmark)

    Hansen, Niels Dalum; Lioma, Christina; Mølbak, Kåre

    2016-01-01

    We present a method that uses ensemble learning to combine clinical and web-mined time-series data in order to predict future vaccination uptake. The clinical data is official vaccination registries, and the web data is query frequencies collected from Google Trends. Experiments with official...... vaccine records show that our method predicts vaccination uptake effectively (4.7 Root Mean Squared Error). Whereas performance is best when combining clinical and web data, using solely web data yields comparative performance. To our knowledge, this is the first study to predict vaccination uptake...
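    A minimal sketch of the general idea, not the authors' model: fit one simple predictor on lagged clinical history and one on web-query trends, then average their predictions and report the root mean squared error. All data and features below are synthetic assumptions.

        import numpy as np

        rng = np.random.default_rng(1)
        weeks = 120
        trend = rng.uniform(20, 80, weeks)                    # hypothetical weekly query frequencies
        uptake = 0.5 * trend + rng.normal(0, 5, weeks) + 30   # hypothetical vaccination uptake

        def fit_linear(X, y):
            X1 = np.column_stack([X, np.ones(len(X))])        # add an intercept column
            coef, *_ = np.linalg.lstsq(X1, y, rcond=None)
            return lambda Xn: np.column_stack([Xn, np.ones(len(Xn))]) @ coef

        lagged_uptake = uptake[:-1]    # clinical signal: last week's registry value
        current_trend = trend[1:]      # web signal: this week's query volume
        target = uptake[1:]
        split = 90

        clin_model = fit_linear(lagged_uptake[:split, None], target[:split])
        web_model = fit_linear(current_trend[:split, None], target[:split])

        clin_pred = clin_model(lagged_uptake[split:, None])
        web_pred = web_model(current_trend[split:, None])
        ensemble_pred = (clin_pred + web_pred) / 2.0           # simple averaging ensemble

        rmse = np.sqrt(np.mean((ensemble_pred - target[split:]) ** 2))
        print(f"ensemble RMSE: {rmse:.2f}")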

  16. Web Based Client/Server Interface for Part Task Training

    National Research Council Canada - National Science Library

    Blemel, Peter

    2000-01-01

    .... The project focused on developing concepts for ways to use the Internet to provide individual and cooperative Distance Part Task Training using virtual or real training equipment. The Phase I goal was to define a commercially viable multi-media virtual training environment for providing realistic training wherever and whenever needed.

  17. Utility of Web search query data in testing theoretical assumptions about mephedrone.

    Science.gov (United States)

    Kapitány-Fövény, Máté; Demetrovics, Zsolt

    2017-05-01

    With growing access to the Internet, people who use drugs and traffickers started to obtain information about novel psychoactive substances (NPS) via online platforms. This paper aims to analyze whether a decreasing Web interest in formerly banned substances-cocaine, heroin, and MDMA-and the legislative status of mephedrone predict Web interest in this NPS. Google Trends was used to measure changes in Web interest in cocaine, heroin, MDMA, and mephedrone. Google search results for mephedrone within the same time frame were analyzed and categorized. Web interest in classic drugs was found to be more persistent. Regarding geographical distribution, the location of Web searches for heroin and cocaine was less centralized. The illicit status of mephedrone was a negative predictor of its Web search query rates. The connection between mephedrone-related Web search rates and the legislative status of this substance was significantly mediated by ecstasy-related Web search queries, the number of documentaries, and forum/blog entries about mephedrone. The results might provide support for the hypothesis that mephedrone's popularity was highly correlated with its legal status and that it functioned as a potential substitute for MDMA. Google Trends was found to be a useful tool for testing theoretical assumptions about NPS. Copyright © 2017 John Wiley & Sons, Ltd.

  18. Virtual Reference Services through Web Search Engines: Study of Academic Libraries in Pakistan

    Directory of Open Access Journals (Sweden)

    Rubia Khan

    2017-03-01

    Full Text Available Web search engines (WSE are powerful and popular tools in the field of information service management. This study is an attempt to examine the impact and usefulness of web search engines in providing virtual reference services (VRS within academic libraries in Pakistan. The study also attempts to investigate the relevant expertise and skills of library professionals in providing digital reference services (DRS efficiently using web search engines. Methodology used in this study is quantitative in nature. The data was collected from fifty public and private sector universities in Pakistan using a structured questionnaire. Microsoft Excel and SPSS were used for data analysis. The study concludes that web search engines are commonly used by librarians to help users (especially research scholars by providing digital reference services. The study also finds a positive correlation between use of web search engines and quality of digital reference services provided to library users. It is concluded that although search engines have increased the expectations of users and are really big competitors to a library’s reference desk, they are however not an alternative to reference service. Findings reveal that search engines pose numerous challenges for librarians and the study also attempts to bring together possible remedial measures. This study is useful for library professionals to understand the importance of search engines in providing VRS. The study also provides an intellectual comparison among different search engines, their capabilities, limitations, challenges and opportunities to provide VRS effectively in libraries.

  19. Categorization of web pages - Performance enhancement to search engine

    Digital Repository Service at National Institute of Oceanography (India)

    Lakshminarayana, S.

    Weight (PFW) for a web page and grouped for categorization. Using these experimental results, we classified the web pages into four different groups, i.e., (1) simple type, (2) axis-shifted, (3) fluctuated, and (4) oscillating types. Implication in development...

  20. The “I’m Feeling Lucky Syndrome”: Teacher-Candidates’ Knowledge of Web Searching Strategies

    Directory of Open Access Journals (Sweden)

    Corinne Laverty

    2008-06-01

    Full Text Available The need for web literacy has become increasingly important with the exponential growth of learning materials on the web that are freely accessible to educators. Teachers need the skills to locate these tools and also the ability to teach their students web search strategies and evaluation of websites so they can effectively explore the web by themselves. This study examined the web searching strategies of 253 teachers-in-training using both a survey (247 participants) and live screen capture with think-aloud audio recording (6 participants). The results present a picture of the strategic, syntactic, and evaluative search abilities of these students that librarians and faculty can use to plan how instruction can target information skill deficits in university student populations.

  1. Web text corpus extraction system for linguistic tasks

    Directory of Open Access Journals (Sweden)

    Héctor Fabio Cadavid Rengifo

    2010-05-01

    Full Text Available Internet content, used as a text corpus for natural language learning, offers important advantages for such a task: its huge volume, the fact that it is permanently up to date with linguistic variants, and its low time and resource costs compared to the traditional way text corpora are built for natural language machine learning tasks. This paper describes a system for the automatic extraction of large bodies of text from the Internet as a valuable tool for such learning tasks. A concurrent-programming-based, hardware-use optimisation strategy that significantly improves extraction performance is also presented. The strategies incorporated into the system for maximising hardware resource exploitation, thereby reducing extraction time, are presented, as are its extendibility (supporting digital-content formats) and adaptability (regarding how the system cleanses content to obtain pure natural language samples). The experimental results obtained after processing one of the biggest Spanish domains on the Internet (i.e. es.wikipedia.org) are presented. These results are used to draw initial conclusions about the validity and applicability of corpora extracted directly from the Internet as morphological or syntactical learning input.
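    A minimal sketch, under stated assumptions, of a concurrent fetch-and-clean pipeline of the kind the abstract describes; the URLs are placeholders and the tag stripping is deliberately naive compared with a real content cleanser.

        import re
        import urllib.request
        from concurrent.futures import ThreadPoolExecutor

        URLS = [                                   # placeholder pages, not the corpus used in the paper
            "https://example.org/articles/1.html",
            "https://example.org/articles/2.html",
        ]

        TAG_RE = re.compile(r"<[^>]+>")            # crude markup stripper
        WS_RE = re.compile(r"\s+")

        def fetch_clean(url):
            with urllib.request.urlopen(url, timeout=10) as resp:
                html = resp.read().decode("utf-8", errors="ignore")
            text = TAG_RE.sub(" ", html)           # drop tags, keep visible text
            return WS_RE.sub(" ", text).strip()

        def build_corpus(urls, workers=8):
            # Fetching is I/O-bound, so a thread pool keeps several downloads in flight.
            with ThreadPoolExecutor(max_workers=workers) as pool:
                return list(pool.map(fetch_clean, urls))

        if __name__ == "__main__":
            corpus = build_corpus(URLS)
            print(sum(len(doc.split()) for doc in corpus), "tokens extracted")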

  2. Overview of the TREC 2014 Federated Web Search Track

    Science.gov (United States)

    2014-11-01

    East China Normal University (ECNUCS) [10] introduces the Search Engine Impact Factor (SEIF), a query-independent measure of a search... In addition, three of these runs (ecomsvz, ecomsv and ecomsvt) make use of external resources (Google Search, data from... The Search Engine Impact Factor (see ECNUCS' vertical selection submissions) has the biggest contribution to performance improvements, besides the vertical selection

  3. Dropout rates and response times of an occupation search tree in a web survey

    NARCIS (Netherlands)

    Tijdens, K.

    2014-01-01

    Occupation is key in socioeconomic research. As in other survey modes, most web surveys use an open-ended question for occupation, though the absence of interviewers elicits unidentifiable or aggregated responses. Unlike other modes, web surveys can use a search tree with an occupation database.

  4. A New Era of Search Engines: Not Just Web Pages Anymore.

    Science.gov (United States)

    Hock, Ran

    2002-01-01

    Discusses various types of information that can be retrieved from the Web via search engines. Highlights include Web pages; time frames, including historical coverage and currentness; text pages in formats other than HTML; directory sites; news articles; discussion groups; images; and audio and video. (LRW)

  5. Web Search Engines-How to Get What You Want from the World ...

    Indian Academy of Sciences (India)

    Web Search Engines - How to Get What You Want from the World Wide Web. T B Rajashekar. General Article, Resonance – Journal of Science Education, Volume 3, Issue 11, November 1998, pp. 40-53.

  6. Uncovering the Hidden Web, Part I: Finding What the Search Engines Don't. ERIC Digest.

    Science.gov (United States)

    Mardis, Marcia

    Currently, the World Wide Web contains an estimated 7.4 million sites (OCLC, 2001). Yet even the most experienced searcher, using the most robust search engines, can access only about 16% of these pages (Dahn, 2001). The other 84% of the publicly available information on the Web is referred to as the "hidden," "invisible," or…

  7. Searching places unknown: law enforcement jurisdiction on the dark web

    National Research Council Canada - National Science Library

    Ghappour, Ahmed

    2017-01-01

    The use of hacking tools by law enforcement to pursue criminal suspects who have anonymized their communications on the dark web presents a looming flashpoint between criminal procedure and international law...

  8. Web Camera Based Eye Tracking to Assess Visual Memory on a Visual Paired Comparison Task

    Directory of Open Access Journals (Sweden)

    Nicholas T. Bott

    2017-06-01

    Full Text Available Background: Web cameras are increasingly part of the standard hardware of most smart devices. Eye movements can often provide a noninvasive “window on the brain,” and the recording of eye movements using web cameras is a burgeoning area of research. Objective: This study investigated a novel methodology for administering a visual paired comparison (VPC) decisional task using a web camera. To further assess this method, we examined the correlation between a standard eye-tracking camera automated scoring procedure [obtaining images at 60 frames per second (FPS)] and a manually scored procedure using a built-in laptop web camera (obtaining images at 3 FPS). Methods: This was an observational study of 54 clinically normal older adults. Subjects completed three in-clinic visits with simultaneous recording of eye movements on a VPC decision task by a standard eye tracker camera and a built-in laptop-based web camera. Inter-rater reliability was analyzed using Siegel and Castellan's kappa formula. Pearson correlations were used to investigate the correlation between VPC performance using a standard eye tracker camera and a built-in web camera. Results: Strong associations were observed on VPC mean novelty preference score between the 60 FPS eye tracker and 3 FPS built-in web camera at each of the three visits (r = 0.88–0.92). Inter-rater agreement of web camera scoring at each time point was high (κ = 0.81–0.88). There were strong relationships on VPC mean novelty preference score between 10, 5, and 3 FPS training sets (r = 0.88–0.94). Significantly fewer data quality issues were encountered using the built-in web camera. Conclusions: Human scoring of a VPC decisional task using a built-in laptop web camera correlated strongly with automated scoring of the same task using a standard high frame rate eye tracker camera. While this method is not suitable for eye tracking paradigms requiring the collection and analysis of fine-grained metrics, such as

  9. Web Camera Based Eye Tracking to Assess Visual Memory on a Visual Paired Comparison Task.

    Science.gov (United States)

    Bott, Nicholas T; Lange, Alex; Rentz, Dorene; Buffalo, Elizabeth; Clopton, Paul; Zola, Stuart

    2017-01-01

    Background: Web cameras are increasingly part of the standard hardware of most smart devices. Eye movements can often provide a noninvasive "window on the brain," and the recording of eye movements using web cameras is a burgeoning area of research. Objective: This study investigated a novel methodology for administering a visual paired comparison (VPC) decisional task using a web camera. To further assess this method, we examined the correlation between a standard eye-tracking camera automated scoring procedure [obtaining images at 60 frames per second (FPS)] and a manually scored procedure using a built-in laptop web camera (obtaining images at 3 FPS). Methods: This was an observational study of 54 clinically normal older adults. Subjects completed three in-clinic visits with simultaneous recording of eye movements on a VPC decision task by a standard eye tracker camera and a built-in laptop-based web camera. Inter-rater reliability was analyzed using Siegel and Castellan's kappa formula. Pearson correlations were used to investigate the correlation between VPC performance using a standard eye tracker camera and a built-in web camera. Results: Strong associations were observed on VPC mean novelty preference score between the 60 FPS eye tracker and 3 FPS built-in web camera at each of the three visits (r = 0.88-0.92). Inter-rater agreement of web camera scoring at each time point was high (κ = 0.81-0.88). There were strong relationships on VPC mean novelty preference score between 10, 5, and 3 FPS training sets (r = 0.88-0.94). Significantly fewer data quality issues were encountered using the built-in web camera. Conclusions: Human scoring of a VPC decisional task using a built-in laptop web camera correlated strongly with automated scoring of the same task using a standard high frame rate eye tracker camera. While this method is not suitable for eye tracking paradigms requiring the collection and analysis of fine-grained metrics, such as fixation points, built
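    A minimal sketch of the two statistics the study reports, computed here with Cohen's kappa (a close relative of the Siegel and Castellan formula named above) and Pearson's r on made-up ratings and scores rather than the study's data.

        from collections import Counter
        import math

        def cohens_kappa(rater_a, rater_b):
            n = len(rater_a)
            po = sum(a == b for a, b in zip(rater_a, rater_b)) / n          # observed agreement
            counts_a, counts_b = Counter(rater_a), Counter(rater_b)
            pe = sum(counts_a[c] * counts_b[c] for c in counts_a) / (n * n)  # chance agreement
            return (po - pe) / (1 - pe)

        def pearson_r(x, y):
            n = len(x)
            mx, my = sum(x) / n, sum(y) / n
            cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
            sx = math.sqrt(sum((a - mx) ** 2 for a in x))
            sy = math.sqrt(sum((b - my) ** 2 for b in y))
            return cov / (sx * sy)

        # Hypothetical gaze judgements from two raters and two cameras' preference scores.
        rater_a = ["L", "L", "R", "R", "L", "R", "L", "L"]
        rater_b = ["L", "L", "R", "L", "L", "R", "L", "L"]
        eye_tracker_scores = [0.62, 0.71, 0.55, 0.68, 0.73, 0.60]
        web_camera_scores = [0.60, 0.69, 0.57, 0.66, 0.75, 0.58]

        print(round(cohens_kappa(rater_a, rater_b), 2))
        print(round(pearson_r(eye_tracker_scores, web_camera_scores), 2))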

  10. State-of-the-Art Review on Relevance of Genetic Algorithm to Internet Web Search

    Directory of Open Access Journals (Sweden)

    Kehinde Agbele

    2012-01-01

    Full Text Available People use search engines to find the information they desire, with the aim that their information needs will be met. Information retrieval (IR) is a field concerned primarily with searching and retrieving information in documents, and also with searching search engines, online databases, and the Internet. Genetic algorithms (GAs) are robust, efficient, and optimized methods for a wide range of search problems, motivated by Darwin's principles of natural selection and survival of the fittest. This paper describes the components of information retrieval systems (IRS). It looks at how GAs can be applied in the field of IR and specifically at the relevance of genetic algorithms to internet web search. Finally, the proposals surveyed show that GAs have been applied to diverse problem fields of internet web search.
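    A minimal sketch of a genetic algorithm applied to a retrieval-flavored toy problem: evolving query-term weights toward a hidden target weighting. The fitness function is an assumption purely for illustration and does not correspond to any system surveyed in the paper.

        import random

        random.seed(42)
        N_TERMS, POP, GENERATIONS = 6, 30, 60
        IDEAL = [0.9, 0.1, 0.7, 0.3, 0.8, 0.2]   # hidden target weighting (toy stand-in for relevance feedback)

        def fitness(weights):
            # Higher is better: negative squared distance to the hidden target.
            return -sum((w - t) ** 2 for w, t in zip(weights, IDEAL))

        def crossover(a, b):
            point = random.randrange(1, N_TERMS)
            return a[:point] + b[point:]

        def mutate(ind, rate=0.1):
            return [min(1.0, max(0.0, w + random.gauss(0, 0.1))) if random.random() < rate else w
                    for w in ind]

        population = [[random.random() for _ in range(N_TERMS)] for _ in range(POP)]
        for _ in range(GENERATIONS):
            population.sort(key=fitness, reverse=True)
            parents = population[: POP // 2]                 # truncation selection
            children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                        for _ in range(POP - len(parents))]
            population = parents + children

        best = max(population, key=fitness)
        print([round(w, 2) for w in best])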

  11. Efficient Top-k Spatial Locality Search for Co-located Spatial Web Objects

    DEFF Research Database (Denmark)

    Qu, Qiang; Liu, Siyuan; Yang, Bin

    2014-01-01

    In step with the web being used widely by mobile users, user location is becoming an essential signal in services, including local intent search. Given a large set of spatial web objects consisting of a geographical location and a textual description (e.g., online business directory entries...... sets with more textually relevant objects render these studies inapplicable. We propose localitySearch, a query that returns top-k sets of spatial web objects and integrates spatial distance and textual relevance in one ranking function. We show that computing the query is NP-hard, and we present two...
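    A minimal sketch of a ranking function that blends spatial proximity with textual relevance for a single object; the paper's query ranks sets of objects and is shown to be NP-hard, which is not reproduced here. All objects, coordinates and weights below are hypothetical.

        import math

        def haversine_km(lat1, lon1, lat2, lon2):
            r = 6371.0
            p1, p2 = math.radians(lat1), math.radians(lat2)
            dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
            a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
            return 2 * r * math.asin(math.sqrt(a))

        def text_relevance(query_terms, description):
            words = set(description.lower().split())
            return len(set(query_terms) & words) / len(query_terms)

        def score(obj, query_loc, query_terms, alpha=0.5, max_km=10.0):
            d = haversine_km(*query_loc, obj["lat"], obj["lon"])
            spatial = max(0.0, 1.0 - d / max_km)              # 1 when near, 0 beyond max_km
            return alpha * spatial + (1 - alpha) * text_relevance(query_terms, obj["text"])

        objects = [
            {"text": "vegan cafe and bakery", "lat": 55.68, "lon": 12.57},
            {"text": "hardware store", "lat": 55.67, "lon": 12.55},
        ]
        query_location = (55.676, 12.568)
        terms = ["vegan", "cafe"]
        ranked = sorted(objects, key=lambda o: score(o, query_location, terms), reverse=True)
        print(ranked[0]["text"])   # "vegan cafe and bakery"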

  12. Effects of task complexity on online search behavior of adolescents

    NARCIS (Netherlands)

    Walhout, Jaap; Oomen, Paula; Jarodzka, Halszka; Brand-Gruwel, Saskia

    2018-01-01

    Evaluation of information during information problem solving processes already starts when trying to select the appropriate search result on a search engine results page (SERP). Up to now, research has mainly focused on the evaluation of webpages while the evaluation of SERPs received less

  13. Building maps to search the web: the method Sewcom

    Directory of Open Access Journals (Sweden)

    Corrado Petrucco

    2002-01-01

    Full Text Available Seeking information on the Internet is becoming a necessity at school, at work and in every social sphere. Unfortunately, the difficulties inherent in the use of search engines and the unconscious use of inefficient cognitive approaches limit their effectiveness. In this respect, we present a method, called SEWCOM, that lets users create conceptual maps through interaction with search engines.

  14. Using metafeatures to increase the effectiveness of latent semantic models in web search

    NARCIS (Netherlands)

    Borisov, A.; Serdyukov, P.; de Rijke, M.

    2016-01-01

    In web search, latent semantic models have been proposed to bridge the lexical gap between queries and documents that is due to the fact that searchers and content creators often use different vocabularies and language styles to express the same concept. Modern search engines simply use the outputs

  15. Index Compression and Efficient Query Processing in Large Web Search Engines

    Science.gov (United States)

    Ding, Shuai

    2013-01-01

    The inverted index is the main data structure used by all the major search engines. Search engines build an inverted index on their collection to speed up query processing. As the size of the web grows, the length of the inverted list structures, which can easily grow to hundreds of MBs or even GBs for common terms (roughly linear in the size of…
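    A minimal sketch of the textbook baseline behind much inverted-list compression, delta gaps plus variable-byte coding; the dissertation itself covers far more advanced techniques.

        def encode_postings(doc_ids):
            out = bytearray()
            prev = 0
            for doc_id in doc_ids:                 # doc_ids must be sorted
                gap = doc_id - prev
                prev = doc_id
                chunk = []
                while True:
                    chunk.append(gap & 0x7F)       # 7-bit groups, least significant first
                    gap >>= 7
                    if not gap:
                        break
                chunk[0] |= 0x80                   # stop bit on the least-significant group (emitted last)
                out.extend(reversed(chunk))
            return bytes(out)

        def decode_postings(data):
            doc_ids, current, prev = [], 0, 0
            for byte in data:
                current = (current << 7) | (byte & 0x7F)
                if byte & 0x80:                    # stop bit reached: one gap decoded
                    prev += current
                    doc_ids.append(prev)
                    current = 0
            return doc_ids

        postings = [3, 7, 21, 150, 151, 4000]
        encoded = encode_postings(postings)
        assert decode_postings(encoded) == postings
        print(len(encoded), "bytes instead of", 4 * len(postings))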

  16. Scanning behaviour in natural scenes is influenced by a preceding unrelated visual search task.

    Science.gov (United States)

    Thompson, Catherine; Crundall, David

    2011-01-01

    Three experiments explored the transference of visual scanning behaviour between two unrelated tasks. Participants first viewed letters presented horizontally, vertically, or as a random array. They then viewed still images (experiments 1 and 2) or video clips (experiment 3) of driving scenes, under varying task conditions. Despite having no relevance to the driving images, layout of stimuli in the letter task influenced scanning behaviour in this subsequent task. In the still images, a vertical letter search increased vertical scanning, and in the dynamic clips, a horizontal letter search decreased vertical scanning. This indicated that (i) models of scanning behaviour should account for the influence of a preceding unrelated task; (ii) carry-over is modulated by demand in the current task; and (iii) in situations where particular scanning strategies are important for primary task performance (eg driving safety), secondary task information should be displayed in a manner likely to produce a congruent scanning strategy.

  17. Influence of social presence on eye movements in visual search tasks.

    Science.gov (United States)

    Liu, Na; Yu, Ruifeng

    2017-12-01

    This study employed an eye-tracking technique to investigate the influence of social presence on eye movements in visual search tasks. A total of 20 male subjects performed visual search tasks in a 2 (target presence: present vs. absent) × 2 (task complexity: complex vs. simple) × 2 (social presence: alone vs. a human audience) within-subject experiment. Results indicated that the presence of an audience could evoke a social facilitation effect on response time in visual search tasks. Compared with working alone, the participants made fewer and shorter fixations, larger saccades and shorter scan path in simple search tasks and more and longer fixations, smaller saccades and longer scan path in complex search tasks when working with an audience. The saccade velocity and pupil diameter in the audience-present condition were larger than those in the working-alone condition. No significant change in target fixation number was observed between two social presence conditions. Practitioner Summary: This study employed an eye-tracking technique to examine the influence of social presence on eye movements in visual search tasks. Results clarified the variation mechanism and characteristics of oculomotor scanning induced by social presence in visual search.

  18. Examining Mathematical Task and Pedagogical Usability of Web Contents Authored by Prospective Mathematics Teachers

    Science.gov (United States)

    Akayuure, Peter; Apawu, Jones

    2015-01-01

    The study was designed to engage prospective mathematics teachers in creating web learning modules. The aim was to examine the mathematical task and perceived pedagogical usability of the modules for mathematics instructions in Ghana. The study took place at University of Education, Winneba. Classes of 172 prospective mathematics teachers working…

  19. What and how children search on the web

    NARCIS (Netherlands)

    Duarte Torres, Sergio; Weber, Ingmar

    2011-01-01

    The Internet has become an important part of the daily life of children as a source of information and leisure activities. Nonetheless, given that most of the content available on the web is aimed at the general public, children are constantly exposed to inappropriate content, either because the

  20. Search Engine Optimization for Flash Best Practices for Using Flash on the Web

    CERN Document Server

    Perkins, Todd

    2009-01-01

    Search Engine Optimization for Flash dispels the myth that Flash-based websites won't show up in a web search by demonstrating exactly what you can do to make your site fully searchable -- no matter how much Flash it contains. You'll learn best practices for using HTML, CSS and JavaScript, as well as SWFObject, for building sites with Flash that will stand tall in search rankings.

  1. GoWeb: Semantic Search and Browsing for the Life Sciences

    OpenAIRE

    Dietze, Heiko

    2010-01-01

    Searching is a fundamental task to support research. Current search engines are keyword-based. Semantic technologies promise a next generation of semantic search engines, which will be able to answer questions. Current approaches either apply natural language processing to unstructured text or they assume the existence of structured statements over which they can reason. This work provides a system for combining the classical keyword-based search engines with semantic annotation. Conventi...

  2. A Picture is Worth a Thousand Keywords: Exploring Mobile Image-Based Web Searching

    Directory of Open Access Journals (Sweden)

    Konrad Tollmar

    2008-01-01

    Full Text Available Using images of objects as queries is a new approach to search for information on the Web. Image-based information retrieval goes beyond only matching images, as information in other modalities also can be extracted from data collections using an image search. We have developed a new system that uses images to search for web-based information. This paper has a particular focus on exploring users' experience of general mobile image-based web searches to find what issues and phenomena it contains. This was achieved in a multipart study by creating and letting respondents test prototypes of mobile image-based search systems and collect data using interviews, observations, video observations, and questionnaires. We observed that searching for information based only on visual similarity and without any assistance is sometimes difficult, especially on mobile devices with limited interaction bandwidth. Most of our subjects preferred a search tool that guides the users through the search result based on contextual information, compared to presenting the search result as a plain ranked list.

  3. Studying different tasks of implicit learning across multiple test sessions conducted on the web

    Directory of Open Access Journals (Sweden)

    Werner eSævland

    2016-06-01

    Full Text Available Implicit learning is usually studied through individual performance on a single task, the most common tasks being the Serial Reaction Time task (SRT; Nissen and Bullemer, 1987), the Dynamic System Control task (DSC; Berry and Broadbent, 1984) and the Artificial Grammar Learning task (AGL; Reber, 1967). Few attempts have been made to compare performance across different implicit learning tasks within the same experiment. The current experiment was designed to study the relationship between performance on the DSC Sugar Factory task (Berry and Broadbent, 1984) and the Alternating Serial Reaction Time task (ASRT; Howard and Howard, 1997). We also addressed another limitation of traditional implicit learning experiments, namely that implicit learning is usually studied in laboratory settings over a restricted time span lasting less than an hour (Berry and Broadbent, 1984; Nissen and Bullemer, 1987; Reber, 1967). In everyday situations, implicit learning is assumed to involve a gradual accumulation of knowledge across several learning episodes over a larger time span (Norman and Price, 2012). One way to increase the ecological validity of implicit learning experiments could be to present the learning material repeatedly across shorter experimental sessions (Howard and Howard, 1997; Cleeremans and McClelland, 1991). This can most easily be done by using a web-based setup that participants can access from home. We therefore created an online web-based system for measuring implicit learning that could be administered in either single or multiple sessions. Participants (n = 66) were assigned to either a single-session or a multi-session condition. Learning and the degree of conscious awareness of the learned regularities were compared across conditions (single vs. multiple sessions) and tasks (DSC vs. ASRT). Results showed that learning on the two tasks was not related. However, participants in the multiple sessions condition did show greater improvements in reaction

  4. Re-Ranking Microblogs Using Word2Vec in Microblog Search Task, University of Avignon

    OpenAIRE

    Quillot, Mathias; Delorme, Alexandre

    2017-01-01

    Working note proposing an improved system based on the Indri search engine for the Microblog Search Task of the MC2 CLEF 2017 lab. The improvement is attempted using a Word2Vec model to re-rank results by comparing them with the query.
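    A minimal sketch, under assumptions, of embedding-based re-ranking: average the word vectors of the query and of each retrieved microblog, then sort by cosine similarity. The tiny vector table below is a made-up stand-in for a trained Word2Vec model.

        import numpy as np

        VECTORS = {                                  # hypothetical 3-d "word embeddings"
            "flood": np.array([0.9, 0.1, 0.0]),
            "rain": np.array([0.8, 0.2, 0.1]),
            "storm": np.array([0.7, 0.3, 0.0]),
            "recipe": np.array([0.0, 0.1, 0.9]),
            "cake": np.array([0.1, 0.0, 0.8]),
        }

        def embed(text):
            vecs = [VECTORS[w] for w in text.lower().split() if w in VECTORS]
            return np.mean(vecs, axis=0) if vecs else np.zeros(3)

        def cosine(a, b):
            denom = np.linalg.norm(a) * np.linalg.norm(b)
            return float(a @ b / denom) if denom else 0.0

        def rerank(query, docs):
            q = embed(query)
            return sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)

        retrieved = ["cake recipe", "storm and rain warning", "flood in the city"]
        print(rerank("flood rain", retrieved))
        # ['flood in the city', 'storm and rain warning', 'cake recipe']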

  5. Adding a Visualization Feature to Web Search Engines: It’s Time

    Energy Technology Data Exchange (ETDEWEB)

    Wong, Pak C.

    2008-11-11

    Since the first world wide web (WWW) search engine quietly entered our lives in 1994, the “information need” behind web searching has rapidly grown into a multi-billion dollar business that dominates the internet landscape, drives e-commerce traffic, propels global economy, and affects the lives of the whole human race. Today’s search engines are faster, smarter, and more powerful than those released just a few years ago. With the vast investment pouring into research and development by leading web technology providers and the intense emotion behind corporate slogans such as “win the web” or “take back the web,” I can’t help but ask why are we still using the very same “text-only” interface that was used 13 years ago to browse our search engine results pages (SERPs)? Why has the SERP interface technology lagged so far behind in the web evolution when the corresponding search technology has advanced so rapidly? In this article I explore some current SERP interface issues, suggest a simple but practical visual-based interface design approach, and argue why a visual approach can be a strong candidate for tomorrow’s SERP interface.

  6. A Portrait of the Audience for Instruction in Web Searching: Results of a Survey Conducted at Two Canadian Universities.

    Science.gov (United States)

    Tillotson, Joy

    2003-01-01

    Describes a survey that was conducted involving participants in the library instruction program at two Canadian universities in order to describe the characteristics of students receiving instruction in Web searching. Examines criteria for evaluating Web sites, search strategies, use of search engines, and frequency of use. Questionnaire is…

  7. Subtle eye movement metrics reveal task-relevant representations prior to visual search.

    Science.gov (United States)

    van Loon, Anouk M; Olmos-Solis, Katya; Olivers, Christian N L

    2017-06-01

    Visual search is thought to be guided by an active visual working memory (VWM) representation of the task-relevant features, referred to as the search template. In three experiments using a probe technique, we investigated which eye movement metrics reveal which search template is activated prior to the search, and distinguish it from future relevant or no longer relevant VWM content. Participants memorized a target color for a subsequent search task, while being instructed to keep central fixation. Before the search display appeared, we briefly presented two task-irrelevant colored probe stimuli to the left and right from fixation, one of which could match the current target template. In all three experiments, participants made both more and larger eye movements towards the probe matching the target color. The bias was predominantly expressed in microsaccades, 100-250 ms after probe onset. Experiment 2 used a retro-cue technique to show that these metrics distinguish between relevant and dropped representations. Finally, Experiment 3 used a sequential task paradigm, and showed that the same metrics also distinguish between current and prospective search templates. Taken together, we show how subtle eye movements track task-relevant representations for selective attention prior to visual search.

  8. An assessment of the visibility of MeSH-indexed medical web catalogs through search engines.

    Science.gov (United States)

    Zweigenbaum, P; Darmoni, S J; Grabar, N; Douyère, M; Benichou, J

    2002-01-01

    Manually indexed Internet health catalogs such as CliniWeb or CISMeF provide resources for retrieving high-quality health information. Users of these quality-controlled subject gateways are most often referred to them by general search engines such as Google, AltaVista, etc. This raises several questions, among which the following: what is the relative visibility of medical Internet catalogs through search engines? This study addresses this issue by measuring and comparing the visibility of six major, MeSH-indexed health catalogs through four different search engines (AltaVista, Google, Lycos, Northern Light) in two languages (English and French). Over half a million queries were sent to the search engines; for most of these search engines, according to our measures at the time the queries were sent, the most visible catalog for English MeSH terms was CliniWeb and the most visible one for French MeSH terms was CISMeF.

  9. The Role of Exploratory Talk in Classroom Search Engine Tasks

    Science.gov (United States)

    Knight, Simon; Mercer, Neil

    2015-01-01

    While search engines are commonly used by children to find information, and in classroom-based activities, children are not adept in their information seeking or evaluation of information sources. Prior work has explored such activities in isolated, individual contexts, failing to account for the collaborative, discourse-mediated nature of search…

  10. Keynote Talk: Mining the Web 2.0 for Improved Image Search

    Science.gov (United States)

    Baeza-Yates, Ricardo

    There are several semantic sources that can be found in the Web that are either explicit, e.g. Wikipedia, or implicit, e.g. derived from Web usage data. Most of them are related to user generated content (UGC) or what is called today the Web 2.0. In this talk we show how to use these sources of evidence in Flickr, such as tags, visual annotations or clicks, which represent the wisdom of crowds behind UGC, to improve image search. These results are the work of the multimedia retrieval team at Yahoo! Research Barcelona and they are already being used in Yahoo! image search. This work is part of a larger effort to produce a virtuous data feedback circuit based on the right combination of many different technologies to leverage the Web itself.

  11. WEB-server for search of a periodicity in amino acid and nucleotide sequences

    Science.gov (United States)

    Frenkel, F. E.; Skryabin, K. G.; Korotkov, E. V.

    2017-12-01

    A new web server (http://victoria.biengi.ac.ru/splinter/login.php) was designed and developed to search for periodicity in nucleotide and amino acid sequences. The web server's operation is based upon a new mathematical method for searching for multiple alignments, founded on position weight matrix optimization and on two-dimensional dynamic programming. This approach allows the construction of multiple alignments of weakly similar amino acid and nucleotide sequences that have accumulated more than 1.5 substitutions per single amino acid or nucleotide, without performing pairwise comparisons of the sequences. The article examines the principles of the web server's operation and two examples of studying amino acid and nucleotide sequences, as well as information that could be obtained using the web server.
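
    The server's alignment method is not reproduced here; as a rough illustration of the position weight matrix scoring it is described as optimizing, the sketch below scores sliding windows of a nucleotide sequence against a toy log-odds PWM (the matrix values, period length and sequence are invented).

```python
# Hedged illustration of position weight matrix (PWM) scoring, the building block
# the web server's multiple-alignment method is described as optimizing.
import math

# Toy PWM for a period of length 3 over the DNA alphabet (values are invented).
PWM = [
    {"A": 0.7, "C": 0.1, "G": 0.1, "T": 0.1},
    {"A": 0.1, "C": 0.1, "G": 0.7, "T": 0.1},
    {"A": 0.1, "C": 0.7, "G": 0.1, "T": 0.1},
]
BACKGROUND = {"A": 0.25, "C": 0.25, "G": 0.25, "T": 0.25}

def pwm_score(window):
    """Log-odds score of a window of len(PWM) nucleotides against the PWM."""
    return sum(math.log(PWM[i][base] / BACKGROUND[base])
               for i, base in enumerate(window))

seq = "AGCAGCAGCTTT"  # invented example sequence
period = len(PWM)
scores = [pwm_score(seq[i:i + period]) for i in range(len(seq) - period + 1)]
print(max(scores))  # a strongly periodic region yields repeated high scores
```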

  12. University of Padua at TREC 2014: Federated Web Search Track

    Science.gov (United States)

    2014-11-01

    reported in this paper we did not compute IRF according to the Spärck Jones formulation of IDF — that corresponds to Eq. 1; we instead used the...on the Apache Lucene library and on an XML parser written in Java for extracting the document fields from the sample searches and the sample documents
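
    For reference, the classical Spärck Jones inverse document frequency, which the fragment equates with its Eq. 1, is commonly written as follows (this is the standard textbook form, not necessarily the exact variant used in the submitted runs):

$$
\mathrm{idf}(t) = \log \frac{N}{n_t}
$$

    where N is the number of documents in the collection and n_t is the number of documents containing term t.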

  13. A Web Service Protocol Realizing Interoperable Internet of Things Tasking Capability

    Science.gov (United States)

    Huang, Chih-Yuan; Wu, Cheng-Hung

    2016-01-01

    The Internet of Things (IoT) is an infrastructure that interconnects uniquely-identifiable devices using the Internet. By interconnecting everyday appliances, various monitoring, and physical mashup applications can be constructed to improve human’s daily life. In general, IoT devices provide two main capabilities: sensing and tasking capabilities. While the sensing capability is similar to the World-Wide Sensor Web, this research focuses on the tasking capability. However, currently, IoT devices created by different manufacturers follow different proprietary protocols and are locked in many closed ecosystems. This heterogeneity issue impedes the interconnection between IoT devices and damages the potential of the IoT. To address this issue, this research aims at proposing an interoperable solution called tasking capability description that allows users to control different IoT devices using a uniform web service interface. This paper demonstrates the contribution of the proposed solution by interconnecting different IoT devices for different applications. In addition, the proposed solution is integrated with the OGC SensorThings API standard, which is a Web service standard defined for the IoT sensing capability. Consequently, the Extended SensorThings API can realize both IoT sensing and tasking capabilities in an integrated and interoperable manner. PMID:27589759

  14. A Web Service Protocol Realizing Interoperable Internet of Things Tasking Capability

    Directory of Open Access Journals (Sweden)

    Chih-Yuan Huang

    2016-08-01

    Full Text Available The Internet of Things (IoT) is an infrastructure that interconnects uniquely-identifiable devices using the Internet. By interconnecting everyday appliances, various monitoring, and physical mashup applications can be constructed to improve human’s daily life. In general, IoT devices provide two main capabilities: sensing and tasking capabilities. While the sensing capability is similar to the World-Wide Sensor Web, this research focuses on the tasking capability. However, currently, IoT devices created by different manufacturers follow different proprietary protocols and are locked in many closed ecosystems. This heterogeneity issue impedes the interconnection between IoT devices and damages the potential of the IoT. To address this issue, this research aims at proposing an interoperable solution called tasking capability description that allows users to control different IoT devices using a uniform web service interface. This paper demonstrates the contribution of the proposed solution by interconnecting different IoT devices for different applications. In addition, the proposed solution is integrated with the OGC SensorThings API standard, which is a Web service standard defined for the IoT sensing capability. Consequently, the Extended SensorThings API can realize both IoT sensing and tasking capabilities in an integrated and interoperable manner.

  15. A Web Service Protocol Realizing Interoperable Internet of Things Tasking Capability.

    Science.gov (United States)

    Huang, Chih-Yuan; Wu, Cheng-Hung

    2016-08-31

    The Internet of Things (IoT) is an infrastructure that interconnects uniquely-identifiable devices using the Internet. By interconnecting everyday appliances, various monitoring, and physical mashup applications can be constructed to improve human's daily life. In general, IoT devices provide two main capabilities: sensing and tasking capabilities. While the sensing capability is similar to the World-Wide Sensor Web, this research focuses on the tasking capability. However, currently, IoT devices created by different manufacturers follow different proprietary protocols and are locked in many closed ecosystems. This heterogeneity issue impedes the interconnection between IoT devices and damages the potential of the IoT. To address this issue, this research aims at proposing an interoperable solution called tasking capability description that allows users to control different IoT devices using a uniform web service interface. This paper demonstrates the contribution of the proposed solution by interconnecting different IoT devices for different applications. In addition, the proposed solution is integrated with the OGC SensorThings API standard, which is a Web service standard defined for the IoT sensing capability. Consequently, the Extended SensorThings API can realize both IoT sensing and tasking capabilities in an integrated and interoperable manner.

  16. Crawling PubMed with web agents for literature search and alerting services

    Directory of Open Access Journals (Sweden)

    Sérgio DEUSDADO

    2013-05-01

    Full Text Available In this paper we present ASAP - Automated Search with Agents in PubMed, a web-based service aiming to manage and automate scientific literature search in the PubMed database. The system allows the creation and management of web agents, parameterized thematically and functionally, that crawl the PubMed database autonomously and periodically, aiming to search and retrieve relevant results according to the requirements provided by the user. The results, containing the list of publications retrieved, are emailed to the agent owner on a weekly basis, during the activity period defined for the web agent. The ASAP service is intended to help researchers, especially from the field of biomedicine and bioinformatics, increase their productivity, and can be accessed at: http://esa.ipb.pt/~agentes.
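
    ASAP's crawling agents are described only at the level of behaviour; a minimal sketch of the kind of periodic PubMed query such an agent could issue through the public NCBI E-utilities ESearch endpoint is shown below (the search term and result cap are placeholders, and a network connection is assumed).

```python
# Hedged sketch of a PubMed query via the NCBI E-utilities ESearch REST endpoint.
# The search term and retmax value are placeholders, not ASAP's actual parameters.
import json
from urllib.parse import urlencode
from urllib.request import urlopen

def pubmed_search(term, retmax=20):
    """Return a list of PubMed IDs matching `term`."""
    params = urlencode({
        "db": "pubmed",
        "term": term,
        "retmax": retmax,
        "retmode": "json",
    })
    url = f"https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi?{params}"
    with urlopen(url) as resp:
        data = json.load(resp)
    return data["esearchresult"]["idlist"]

# A weekly scheduled job could call this and e-mail any new IDs to the agent's owner.
print(pubmed_search("bioinformatics AND literature mining"))
```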

  17. ISI web of knowledge: new author searching capability and reports features.

    Science.gov (United States)

    Fitzpatrick, Roberta Bronson

    2007-01-01

    The Institute for Scientific Information (ISI), part of Thomson Scientific, produces the Web of Science database as part of its Web of Knowledge. Recently, ISI introduced some new features, among them a new Author Finder feature, which allows users to zero in on a specific author in a very guided way. In addition, search results may be analyzed and reports created by users, both at the click of a button. This column focuses on these recently introduced features.

  18. Search Techniques for the Web of Things: A Taxonomy and Survey

    Science.gov (United States)

    Zhou, Yuchao; De, Suparna; Wang, Wei; Moessner, Klaus

    2016-01-01

    The Web of Things aims to make physical world objects and their data accessible through standard Web technologies to enable intelligent applications and sophisticated data analytics. Due to the amount and heterogeneity of the data, it is challenging to perform data analysis directly; especially when the data is captured from a large number of distributed sources. However, the size and scope of the data can be reduced and narrowed down with search techniques, so that only the most relevant and useful data items are selected according to the application requirements. Search is fundamental to the Web of Things while challenging by nature in this context, e.g., mobility of the objects, opportunistic presence and sensing, continuous data streams with changing spatial and temporal properties, efficient indexing for historical and real time data. The research community has developed numerous techniques and methods to tackle these problems as reported by a large body of literature in the last few years. A comprehensive investigation of the current and past studies is necessary to gain a clear view of the research landscape and to identify promising future directions. This survey reviews the state-of-the-art search methods for the Web of Things, which are classified according to three different viewpoints: basic principles, data/knowledge representation, and contents being searched. Experiences and lessons learned from the existing work and some EU research projects related to Web of Things are discussed, and an outlook to the future research is presented. PMID:27128918

  19. Work Out the Semantic Web Search: The Cooperative Way

    Directory of Open Access Journals (Sweden)

    Dora Melo

    2012-01-01

    Full Text Available We propose a Cooperative Question Answering System that takes as input natural language queries and is able to return a cooperative answer based on semantic web resources, more specifically DBpedia, represented in OWL/RDF, as the knowledge base and WordNet to build similar questions. Our system resorts to ontologies not only for reasoning but also to find answers, and is independent of prior knowledge of the semantic resources by the user. The natural language question is translated into its semantic representation and then answered by consulting the semantic sources of information. The system is able to clarify the problems of ambiguity and helps finding the path to the correct answer. If there are multiple answers to the question posed (or to the similar questions for which DBpedia contains answers), they will be grouped according to their semantic meaning, providing a more cooperative and clarified answer to the user.

  20. The Web-Surf Task: A translational model of human decision-making.

    Science.gov (United States)

    Abram, Samantha V; Breton, Yannick-André; Schmidt, Brandy; Redish, A David; MacDonald, Angus W

    2016-02-01

    Animal models of decision-making are some of the most highly regarded psychological process models; however, there remains a disconnection between how these models are used for pre-clinical applications and the resulting treatment outcomes. This may be due to untested assumptions that different species recruit the same neural or psychological mechanisms. We propose a novel human foraging paradigm (Web-Surf Task) that we translated from a rat foraging paradigm (Restaurant Row) to evaluate cross-species decision-making similarities. We examined behavioral parallels in human and non-human animals using the respective tasks. We also compared two variants of the human task, one using videos and the other using photos as rewards, by correlating revealed and stated preferences. We demonstrate similarities in choice behaviors and decision reaction times in human and rat subjects. Findings also indicate that videos yielded more reliable and valid results. The joint use of the Web-Surf Task and Restaurant Row is therefore a promising approach for functional translational research, aiming to bridge pre-clinical and clinical lines of research using analogous tasks.

  1. A distributed content-based search engine based on mobile code and web service technology

    OpenAIRE

    Roth, V.; Pinsdorf, U.; Peters, J

    2006-01-01

    Current search engines crawl the Web, download content, and digest this content locally. For multimedia content, this involves considerable volumes of data. Furthermore, this process covers only publicly available content because content providers are concerned that they otherwise lose control over the distribution of their intellectual property. We present the prototype of our secure and distributed search engine, which dynamically pushes content-based feature extraction to image providers...

  2. DESIGN OF A WEB SEMI-INTELLIGENT METADATA SEARCH MODEL APPLIED IN DATA WAREHOUSING SYSTEMS

    OpenAIRE

    Luna Ramírez, Enrique; Ambriz Delgadillo, Humberto; Nungaray Ornelas, J. Antonio; Álvarez Rodríguez, Francisco Javier; Mondragón Reyes, Jorge N.

    2008-01-01

    In this paper, the design of a Web metadata search model with semi-intelligent features is proposed. The search model is oriented to retrieving the metadata associated with a data warehouse in a fast, flexible and reliable way. Our proposal includes a set of distinctive functionalities, which consist of the temporary storage of frequently used metadata in an exclusive store, separate from the global data warehouse metadata store, and of the use of control processes to retrieve information from...

  3. Using the open Web as an information resource and scholarly Web search engines as retrieval tools for academic and research purposes

    Directory of Open Access Journals (Sweden)

    Filistea Naude

    2010-08-01

    Full Text Available This study provided insight into the significance of the open Web as an information resource and Web search engines as research tools amongst academics. The academic staff establishment of the University of South Africa (Unisa) was invited to participate in a questionnaire survey and included 1188 staff members from five colleges. This study culminated in a PhD dissertation in 2008. One hundred and eighty-seven respondents participated in the survey, which gave a response rate of 15.7%. The results of this study show that academics have indeed accepted the open Web as a useful information resource and Web search engines as retrieval tools when seeking information for academic and research work. The majority of respondents used the open Web and Web search engines on a daily or weekly basis to source academic and research information. The main obstacles presented by using the open Web and Web search engines included lack of time to search and browse the Web, information overload, poor network speed and the slow downloading speed of webpages.

  4. Using the open Web as an information resource and scholarly Web search engines as retrieval tools for academic and research purposes

    Directory of Open Access Journals (Sweden)

    Filistea Naude

    2010-12-01

    Full Text Available This study provided insight into the significance of the open Web as an information resource and Web search engines as research tools amongst academics. The academic staff establishment of the University of South Africa (Unisa) was invited to participate in a questionnaire survey and included 1188 staff members from five colleges. This study culminated in a PhD dissertation in 2008. One hundred and eighty-seven respondents participated in the survey, which gave a response rate of 15.7%. The results of this study show that academics have indeed accepted the open Web as a useful information resource and Web search engines as retrieval tools when seeking information for academic and research work. The majority of respondents used the open Web and Web search engines on a daily or weekly basis to source academic and research information. The main obstacles presented by using the open Web and Web search engines included lack of time to search and browse the Web, information overload, poor network speed and the slow downloading speed of webpages.

  5. Binaural Sound Reduces Reaction Time in a Virtual Reality Search Task

    DEFF Research Database (Denmark)

    Høeg, Emil Rosenlund; Gerry, Lynda; Thomsen, Lui Albæk

    2017-01-01

    Salient features in a visual search task can direct attention and increase competency on these tasks. Simple cues, such as a color change in a salient feature, called the "pop-out effect", can increase task-solving efficiency [6]. Previous work has shown that nonspatial auditory signals temporally synched with a pop-out effect can improve reaction time in a visual search task, called the "pip and pop effect" [14]. This paper describes a within-group study on the effect of audiospatial attention in virtual reality given a 360-degree visual search. Three cue conditions were compared (no sound, stereo, and binaural), with rising degrees of difficulty by increasing the set size. The results (n=10) indicate a statistically significant difference in reaction time between the three conditions. Overall, in spite of the small sample size, our results seem to indicate that binaural audio renders a clear advantage...

  6. What Can Pictures Tell Us About Web Pages? Improving Document Search Using Images.

    Science.gov (United States)

    Rodriguez-Vaamonde, Sergio; Torresani, Lorenzo; Fitzgibbon, Andrew W

    2015-06-01

    Traditional Web search engines do not use the images in the HTML pages to find relevant documents for a given query. Instead, they typically operate by computing a measure of agreement between the keywords provided by the user and only the text portion of each page. In this paper we study whether the content of the pictures appearing in a Web page can be used to enrich the semantic description of an HTML document and consequently boost the performance of a keyword-based search engine. We present a Web-scalable system that exploits a pure text-based search engine to find an initial set of candidate documents for a given query. Then, the candidate set is reranked using visual information extracted from the images contained in the pages. The resulting system retains the computational efficiency of traditional text-based search engines with only a small additional storage cost needed to encode the visual information. We test our approach on one of the TREC Million Query Track benchmarks where we show that the exploitation of visual content yields improvement in accuracies for two distinct text-based search engines, including the system with the best reported performance on this benchmark. We further validate our approach by collecting document relevance judgements on our search results using Amazon Mechanical Turk. The results of this experiment confirm the improvement in accuracy produced by our image-based reranker over a pure text-based system.
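
    As a schematic of the late-fusion idea the abstract describes (not the paper's actual model), candidate pages retrieved by text can be re-ordered with a score that mixes the text ranker's score and an image-derived similarity score; both scoring functions and the weight alpha below are assumed placeholders.

```python
# Hedged sketch of re-ranking text-retrieved pages with an image-derived score.
# `text_score` and `visual_score` stand in for the engine's ranker and the image
# features described in the abstract; alpha is an arbitrary illustration value.
def rerank_with_images(candidates, text_score, visual_score, alpha=0.7):
    """candidates: iterable of page ids; returns ids sorted by fused score."""
    fused = {
        page: alpha * text_score(page) + (1 - alpha) * visual_score(page)
        for page in candidates
    }
    return sorted(fused, key=fused.get, reverse=True)
```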

  7. SA-Search: a web tool for protein structure mining based on a Structural Alphabet

    Science.gov (United States)

    Guyon, Frédéric; Camproux, Anne-Claude; Hochez, Joëlle; Tufféry, Pierre

    2004-01-01

    SA-Search is a web tool that can be used to mine for protein structures and extract structural similarities. It is based on a hidden Markov model derived Structural Alphabet (SA) that allows the compression of three-dimensional (3D) protein conformations into a one-dimensional (1D) representation using a limited number of prototype conformations. Using such a representation, classical methods developed for amino acid sequences can be employed. Currently, SA-Search permits the performance of fast 3D similarity searches such as the extraction of exact words using a suffix tree approach, and the search for fuzzy words viewed as a simple 1D sequence alignment problem. SA-Search is available at http://bioserv.rpbs.jussieu.fr/cgi-bin/SA-Search. PMID:15215446

  8. Comparative analysis of web search trends between experts and public for medicinal herbs in Korea.

    Science.gov (United States)

    Yea, Sang-Jun; Jang, Yunji; Seong, BoSeok; Kim, Chul

    2015-12-24

    Information and knowledge about ethno-medicinal herbs have been attracting growing interest, both globally and in Korea, since the agreement of the Nagoya Protocol. However, there is a serious asymmetry of ethno-medicinal information between experts and the public, so this study aimed to analyze the similarities and differences in interest between experts and the public for medicinal herbs in Korea through big data analysis. The medicinal herbs selected in this study were the top 10 herbs in terms of the amounts purchased by TKM centers, and two representative web search engines were selected to collect the web search logs, i.e. big data, of experts and the public for medicinal herbs in Korea. Comparative analysis was accomplished through descriptive statistical analysis, Pearson's correlation coefficient, and time-series graph analysis. The web search traffic logs were collected for the past three years (2012-2014) from OASIS and NAVER, which are the representative web search engines of experts and the public, respectively, in Korea. First, regarding OASIS, the most searched medicinal herb was Angelicae Gigantis Radix while the least searched was Alismatis Rhizoma; for NAVER, the most searched medicinal herb was Paeoniae Radix, unlike OASIS, and the least searched was Alismatis Rhizoma, as with OASIS. The coefficient between rank of herbs and OASIS was -0.401, that between rank of herbs and NAVER was -0.387, and the correlation coefficient for web search trends of OASIS and NAVER during the past three years was 0.438. Also, the correlation of interest between experts and the public for each herb on a monthly web trends basis was similar with regard to Glycyrrhizae Radix et Rhizoma and Angelicae Gigantis Radix, but different with regard to the other 8 medicinal herbs. Finally, significant outcomes and suggestions were figured out through time-series graph analysis. This study presents meaningful results concerning the similarities and differences in interest between experts and
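
    The comparison rests on Pearson's correlation coefficient between expert (OASIS) and public (NAVER) search volumes; purely as an illustration of that computation (the monthly counts below are invented, not the study's data):

```python
# Hedged illustration of the correlation computation described in the abstract.
# The monthly counts are invented placeholders; requires Python 3.10+ for
# statistics.correlation, which computes Pearson's r.
from statistics import correlation

oasis_monthly = [120, 98, 143, 110, 105, 160]          # expert search counts
naver_monthly = [4000, 3500, 5200, 3900, 4100, 6100]   # public search counts

print(round(correlation(oasis_monthly, naver_monthly), 3))
```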

  9. New Architectures for Presenting Search Results Based on Web Search Engines Users Experience

    Science.gov (United States)

    Martinez, F. J.; Pastor, J. A.; Rodriguez, J. V.; Lopez, Rosana; Rodriguez, J. V., Jr.

    2011-01-01

    Introduction: The Internet is a dynamic environment which is continuously being updated. Search engines have been, currently are and in all probability will continue to be the most popular systems in this information cosmos. Method: In this work, special attention has been paid to the series of changes made to search engines up to this point,…

  10. Engaging Student Interpreters in Vocabulary Building: Web Search with Computer Workbench

    Science.gov (United States)

    Lim, Lily

    2014-01-01

    This paper investigates the usefulness of Web portals in a workbench for assisting student interpreters in the search for and collection of vocabulary. The experiment involved a class of fifteen English as a Foreign Language (EFL) student interpreters, who were required to equip themselves with the appropriate English vocabulary to handle an…

  11. Information Retrieval Strategies of Millennial Undergraduate Students in Web and Library Database Searches

    Science.gov (United States)

    Porter, Brandi

    2009-01-01

    Millennial students make up a large portion of undergraduate students attending colleges and universities, and they have a variety of online resources available to them to complete academically related information searches, primarily Web based and library-based online information retrieval systems. The content, ease of use, and required search…

  12. The Use of Social Tags in Text and Image Searching on the Web

    Science.gov (United States)

    Kim, Yong-Mi

    2011-01-01

    In recent years, tags have become a standard feature on a diverse range of sites on the Web, accompanying blog posts, photos, videos, and online news stories. Tags are descriptive terms attached to Internet resources. Despite the rapid adoption of tagging, how people use tags during the search process is not well understood. There is little…

  13. Click models for web search and their applications to IR : WSDM 2016 Tutorial

    NARCIS (Netherlands)

    Chuklin, A.; Markov, I.; de Rijke, M.

    2016-01-01

    In this tutorial we give an overview of click models for web search. We show how the framework of probabilistic graphical models helps to explain user behavior, build new evaluation metrics and perform simulations. The tutorial discusses foundational aspects alongside experimental details and

  14. A geospatial search engine for discovering multi-format geospatial data across the web

    Science.gov (United States)

    Christopher Bone; Alan Ager; Ken Bunzel; Lauren Tierney

    2014-01-01

    The volume of publically available geospatial data on the web is rapidly increasing due to advances in server-based technologies and the ease at which data can now be created. However, challenges remain with connecting individuals searching for geospatial data with servers and websites where such data exist. The objective of this paper is to present a publically...

  15. Changes in users' mental models of Web search engines after ten ...

    African Journals Online (AJOL)

    Ward's cluster analyses, including the pseudo T² statistical analyses, were used to determine the mental model clusters for the seventeen salient design features of Web search engines at each time point. The cubic clustering criterion (CCC) and the dendrogram were examined for each sample to help determine the number ...

  16. Web-Searching to Learn: The Role of Internet Self-Efficacy in Pre-School Educators' Conceptions and Approaches

    Science.gov (United States)

    Kao, Chia-Pin; Chien, Hui-Min

    2017-01-01

    This study was conducted to explore the relationships between pre-school educators' conceptions of and approaches to learning by web-searching through Internet Self-efficacy. Based on data from 242 pre-school educators who had prior experience of participating in web-searching in Taiwan for path analyses, it was found in this study that…

  17. Dropout Rates and Response Times of an Occupation Search Tree in a Web Survey

    Directory of Open Access Journals (Sweden)

    Tijdens Kea

    2014-03-01

    Full Text Available Occupation is key in socioeconomic research. As in other survey modes, most web surveys use an open-ended question for occupation, though the absence of interviewers elicits unidentifiable or aggregated responses. Unlike other modes, web surveys can use a search tree with an occupation database. They are hardly ever used, but this may change due to technical advancements. This article evaluates a three-step search tree with 1,700 occupational titles, used in the 2010 multilingual WageIndicator web survey for the UK, Belgium and the Netherlands (22,990 observations). Dropout rates are high; in Step 1 due to unemployed respondents judging the question not to be adequate, and in Step 3 due to search tree item length. Median response times are substantial due to search tree item length, dropout in the next step and invalid occupations ticked. Overall, the validity of the occupation data is rather good; 1.7-7.5% of the respondents completing the search tree have ticked an invalid occupation.

  18. Improving Web image search by bag-based reranking.

    Science.gov (United States)

    Duan, Lixin; Li, Wen; Tsang, Ivor Wai-Hung; Xu, Dong

    2011-11-01

    Given a textual query in traditional text-based image retrieval (TBIR), relevant images are to be reranked using visual features after the initial text-based search. In this paper, we propose a new bag-based reranking framework for large-scale TBIR. Specifically, we first cluster relevant images using both textual and visual features. By treating each cluster as a "bag" and the images in the bag as "instances," we formulate this problem as a multi-instance (MI) learning problem. MI learning methods such as mi-SVM can be readily incorporated into our bag-based reranking framework. Observing that at least a certain portion of a positive bag is of positive instances while a negative bag might also contain positive instances, we further use a more suitable generalized MI (GMI) setting for this application. To address the ambiguities on the instance labels in the positive and negative bags under this GMI setting, we develop a new method referred to as GMI-SVM to enhance retrieval performance by propagating the labels from the bag level to the instance level. To acquire bag annotations for (G)MI learning, we propose a bag ranking method to rank all the bags according to the defined bag ranking score. The top ranked bags are used as pseudopositive training bags, while pseudonegative training bags can be obtained by randomly sampling a few irrelevant images that are not associated with the textual query. Comprehensive experiments on the challenging real-world data set NUS-WIDE demonstrate our framework with automatic bag annotation can achieve the best performances compared with existing image reranking methods. Our experiments also demonstrate that GMI-SVM can achieve better performances when using the manually labeled training bags obtained from relevance feedback.
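
    The GMI-SVM formulation is beyond a short snippet; a rough sketch of only the first step the abstract describes, grouping text-retrieved images into "bags" by clustering their combined textual and visual features, might look like the following (the feature matrices are assumed to be precomputed, and scikit-learn is assumed available).

```python
# Hedged sketch of the bag-forming step: cluster retrieved images into "bags"
# using combined textual and visual features. Feature matrices are assumed given.
import numpy as np
from sklearn.cluster import KMeans

def form_bags(text_feats, visual_feats, n_bags=10, seed=0):
    """text_feats, visual_feats: arrays of shape (n_images, d1) and (n_images, d2).
    Returns a dict mapping bag id -> list of image indices."""
    joint = np.hstack([text_feats, visual_feats])          # simple feature concatenation
    labels = KMeans(n_clusters=n_bags, random_state=seed, n_init=10).fit_predict(joint)
    bags = {}
    for idx, lab in enumerate(labels):
        bags.setdefault(int(lab), []).append(idx)
    return bags
```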

  19. PLAN: a web platform for automating high-throughput BLAST searches and for managing and mining results

    Directory of Open Access Journals (Sweden)

    Zhao Xuechun

    2007-02-01

    Full Text Available Abstract Background BLAST searches are widely used for sequence alignment. The search results are commonly adopted for various functional and comparative genomics tasks such as annotating unknown sequences, investigating gene models and comparing two sequence sets. Advances in sequencing technologies pose challenges for high-throughput analysis of large-scale sequence data. A number of programs and hardware solutions exist for efficient BLAST searching, but there is a lack of generic software solutions for mining and personalized management of the results. Systematically reviewing the results and identifying information of interest remains tedious and time-consuming. Results Personal BLAST Navigator (PLAN) is a versatile web platform that helps users to carry out various personalized pre- and post-BLAST tasks, including: (1) query and target sequence database management, (2) automated high-throughput BLAST searching, (3) indexing and searching of results, (4) filtering results online, (5) managing results of personal interest in favorite categories, and (6) automated sequence annotation (such as NCBI NR and ontology-based annotation). PLAN integrates, by default, the Decypher hardware-based BLAST solution provided by Active Motif Inc. with a greatly improved efficiency over conventional BLAST software. BLAST results are visualized by spreadsheets and graphs and are full-text searchable. BLAST results and sequence annotations can be exported, in part or in full, in various formats including Microsoft Excel and FASTA. Sequences and BLAST results are organized in projects, the data publication levels of which are controlled by the registered project owners. In addition, all analytical functions are provided to public users without registration. Conclusion PLAN has proved a valuable addition to the community for automated high-throughput BLAST searches, and, more importantly, for knowledge discovery, management and sharing based on sequence alignment results
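
    PLAN wraps a hardware BLAST appliance that cannot be reproduced here; as a stand-in, the sketch below shows the kind of plain command-line NCBI BLAST+ run that such a pipeline automates (file paths and the database name are placeholders, and BLAST+ must be installed locally).

```python
# Hedged sketch: batch BLAST searches with the standard NCBI BLAST+ command line,
# the conventional software route the paper contrasts with its hardware backend.
# Paths and the database name are placeholders.
import subprocess

def run_blastn(query_fasta, db_name, out_tsv):
    """Run blastn and write tabular results (outfmt 6: qseqid, sseqid, pident, ..., evalue)."""
    subprocess.run(
        ["blastn",
         "-query", query_fasta,
         "-db", db_name,
         "-outfmt", "6",
         "-out", out_tsv],
        check=True,
    )

run_blastn("queries.fasta", "nt_local", "hits.tsv")
```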

  20. Relationship between Usefulness Assessments and Perceptions of Work Task Complexity and Search Topic Specificity: An Exploratory Study

    DEFF Research Database (Denmark)

    Ingwersen, Peter; Wang, Peiling

    2012-01-01

    This research investigates the relations between the usefulness assessments of retrieved documents and the perceptions of task complexity and search topic specificity. Twenty-three academic researchers submitted 65 real task-based information search topics. These task topics were searched in an integrated document collection consisting of full text research articles in PDFs, abstracts, and bibliographic records (the iSearch Test Collection in Physics). The search results were provided to the researchers who, as task performers, made assessments of usefulness using a four-point scale (highly, fairly...

  1. Spatial Search Techniques for Mobile 3D Queries in Sensor Web Environments

    Directory of Open Access Journals (Sweden)

    James D. Carswell

    2013-03-01

    Full Text Available Developing mobile geo-information systems for sensor web applications involves technologies that can access linked geographical and semantically related Internet information. Additionally, in tomorrow’s Web 4.0 world, it is envisioned that trillions of inexpensive micro-sensors placed throughout the environment will also become available for discovery based on their unique geo-referenced IP address. Exploring these enormous volumes of disparate heterogeneous data on today’s location and orientation aware smartphones requires context-aware smart applications and services that can deal with “information overload”. 3DQ (Three Dimensional Query) is our novel mobile spatial interaction (MSI) prototype that acts as a next-generation base for human interaction within such geospatial sensor web environments/urban landscapes. It filters information using “Hidden Query Removal” functionality that intelligently refines the search space by calculating the geometry of a three dimensional visibility shape (Vista space) at a user’s current location. This 3D shape then becomes the query “window” in a spatial database for retrieving information on only those objects visible within a user’s actual 3D field-of-view. 3DQ reduces information overload and serves to heighten situation awareness on constrained commercial off-the-shelf devices by providing visibility space searching as a mobile web service. The effects of variations in mobile spatial search techniques in terms of query speed vs. accuracy are evaluated and presented in this paper.

  2. Semantic information retrieval for geoscience resources : results and analysis of an online questionnaire of current web search experiences

    OpenAIRE

    Nkisi-Orji, I.

    2016-01-01

    An online questionnaire “Semantic web searches for geoscience resources” was completed by 35 staff of British Geological Survey (BGS) between 28th July 2015 and 28th August 2015. The questionnaire was designed to better understand current web search habits, preferences, and the reception of semantic search features in order to inform PhD research into the use of domain ontologies for semantic information retrieval. The key findings were that relevance ranking is important in fo...

  3. The involvement of central attention in visual search is determined by task demands.

    Science.gov (United States)

    Han, Suk Won

    2017-04-01

    Attention, the mechanism by which a subset of sensory inputs is prioritized over others, operates at multiple processing stages. Specifically, attention enhances weak sensory signal at the perceptual stage, while it serves to select appropriate responses or consolidate sensory representations into short-term memory at the central stage. This study investigated the independence and interaction between perceptual and central attention. To do so, I used a dual-task paradigm, pairing a four-alternative choice task with a visual search task. The results showed that central attention for response selection was engaged in perceptual processing for visual search when the number of search items increased, thereby increasing the demand for serial allocation of focal attention. By contrast, central attention and perceptual attention remained independent as far as the demand for serial shifting of focal attention remained constant; decreasing stimulus contrast or increasing the set size of a parallel search did not evoke the involvement of central attention in visual search. These results suggest that the nature of concurrent visual search process plays a crucial role in the functional interaction between two different types of attention.

  4. Task relevance of emotional information affects anxiety-linked attention bias in visual search.

    Science.gov (United States)

    Dodd, Helen F; Vogt, Julia; Turkileri, Nilgun; Notebaert, Lies

    2017-01-01

    Task relevance affects emotional attention in healthy individuals. Here, we investigate whether the association between anxiety and attention bias is affected by the task relevance of emotion during an attention task. Participants completed two visual search tasks. In the emotion-irrelevant task, participants were asked to indicate whether a discrepant face in a crowd of neutral, middle-aged faces was old or young. Irrelevant to the task, target faces displayed angry, happy, or neutral expressions. In the emotion-relevant task, participants were asked to indicate whether a discrepant face in a crowd of middle-aged neutral faces was happy or angry (target faces also varied in age). Trait anxiety was not associated with attention in the emotion-relevant task. However, in the emotion-irrelevant task, trait anxiety was associated with a bias for angry over happy faces. These findings demonstrate that the task relevance of emotional information affects conclusions about the presence of an anxiety-linked attention bias. Copyright © 2016 Elsevier B.V. All rights reserved.

  5. Finding Business Information on the "Invisible Web": Search Utilities vs. Conventional Search Engines.

    Science.gov (United States)

    Darrah, Brenda

    Researchers for small businesses, which may have no access to expensive databases or market research reports, must often rely on information found on the Internet, which can be difficult to find. Although current conventional Internet search engines are now able to index over one billion documents, there are many more documents existing in…

  6. A Novel Framework for Medical Web Information Foraging Using Hybrid ACO and Tabu Search.

    Science.gov (United States)

    Drias, Yassine; Kechid, Samir; Pasi, Gabriella

    2016-01-01

    We present in this paper a novel approach based on multi-agent technology for Web information foraging. For this purpose, we propose an architecture in which we distinguish two important phases. The first one is a learning process for localizing the most relevant pages that might interest the user. This is performed on a fixed instance of the Web. The second takes into account the openness and dynamicity of the Web. It consists of an incremental learning process that starts from the result of the first phase and reshapes the outcomes to take into account the changes that the Web undergoes. The system was implemented using a colony of artificial ants hybridized with tabu search in order to achieve more effectiveness and efficiency. To validate our proposal, experiments were conducted on MedlinePlus, a real website dedicated to research in the domain of health, in contrast to other previous works where experiments were performed on web log datasets. The main results are promising, both for those related to strong Web regularities and for the response time, which is very short and hence complies with the real-time constraint.
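
    A highly simplified sketch of the hybrid idea, ant-colony-style selection of candidate pages combined with a tabu list that blocks recently visited ones, is given below; it is not the authors' algorithm, and all scores and parameters are invented for illustration.

```python
# Hedged, highly simplified sketch of combining ant-colony selection with a tabu
# list: pick a page by pheromone * relevance, reinforce it, evaporate the rest,
# and forbid revisiting it for a few steps. All values are invented.
import random

def foraging_step(pages, relevance, pheromone, tabu, evaporation=0.1, deposit=1.0):
    """Pick one page (excluding tabu pages), update pheromone, return the choice."""
    candidates = [p for p in pages if p not in tabu]
    weights = [pheromone[p] * relevance[p] for p in candidates]
    chosen = random.choices(candidates, weights=weights, k=1)[0]
    for p in pages:
        pheromone[p] *= (1 - evaporation)                 # evaporation
    pheromone[chosen] += deposit * relevance[chosen]      # reinforcement
    tabu.append(chosen)
    if len(tabu) > 5:                                     # fixed-length tabu memory
        tabu.pop(0)
    return chosen
```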

  7. Effects of display curvature, display zone, and task duration on legibility and visual fatigue during visual search task.

    Science.gov (United States)

    Park, Sungryul; Choi, Donghee; Yi, Jihhyeon; Lee, Songil; Lee, Ja Eun; Choi, Byeonghwa; Lee, Seungbae; Kyung, Gyouhyung

    2017-04-01

    This study examined the effects of display curvature (400, 600, 1200 mm, and flat), display zone (5 zones), and task duration (15 and 30 min) on legibility and visual fatigue. Each participant completed two 15-min visual search task sets at each curvature setting. The 600-mm and 1200-mm settings yielded better results than the flat setting in terms of legibility and perceived visual fatigue. Relative to the corresponding centre zone, the outermost zones of the 1200-mm and flat settings showed a decrease of 8%-37% in legibility, whereas those of the flat setting showed an increase of 26%-45% in perceived visual fatigue. Across curvatures, legibility decreased by 2%-8%, whereas perceived visual fatigue increased by 22% during the second task set. The two task sets induced an increase of 102% in the eye complaint score and a decrease of 0.3 Hz in the critical fusion frequency, both of which indicated an increase in visual fatigue. In summary, a curvature of around 600 mm, central display zones, and frequent breaks are recommended to improve legibility and reduce visual fatigue. Copyright © 2016 Elsevier Ltd. All rights reserved.

  8. A semantics-based method for clustering of Chinese web search results

    Science.gov (United States)

    Zhang, Hui; Wang, Deqing; Wang, Li; Bi, Zhuming; Chen, Yong

    2014-01-01

    Information explosion is a critical challenge to the development of modern information systems. In particular, when the application of an information system is over the Internet, the amount of information over the web has been increasing exponentially and rapidly. Search engines, such as Google and Baidu, are essential tools for people to find information on the Internet. Valuable information, however, is still likely submerged in the ocean of search results from those tools. By clustering the results into different groups based on subjects automatically, a search engine with the clustering feature allows users to select the most relevant results quickly. In this paper, we propose an online semantics-based method to cluster Chinese web search results. First, we employ the generalised suffix tree to extract the longest common substrings (LCSs) from search snippets. Second, we use HowNet to calculate the similarities of the words derived from the LCSs, and extract the most representative features by constructing the vocabulary chain. Third, we construct a vector of text features and calculate snippets' semantic similarities. Finally, we improve the Chameleon algorithm to cluster snippets. Extensive experimental results have shown that the proposed algorithm has outperformed the suffix tree clustering method and other traditional clustering methods.
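
    The pipeline's first step extracts longest common substrings (LCSs) from snippets with a generalised suffix tree; the sketch below illustrates the same LCS idea with a simple dynamic-programming routine instead of a suffix tree (adequate for short snippets, though asymptotically slower).

```python
# Hedged sketch of the first pipeline step: the longest common substring of two
# snippets, via dynamic programming rather than the paper's generalised suffix tree.
def longest_common_substring(a, b):
    best_len, best_end = 0, 0
    prev = [0] * (len(b) + 1)   # prev[j]: common-suffix length of a[:i-1], b[:j]
    for i in range(1, len(a) + 1):
        curr = [0] * (len(b) + 1)
        for j in range(1, len(b) + 1):
            if a[i - 1] == b[j - 1]:
                curr[j] = prev[j - 1] + 1
                if curr[j] > best_len:
                    best_len, best_end = curr[j], i
        prev = curr
    return a[best_end - best_len:best_end]

print(longest_common_substring("semantic web search results",
                               "clustering web search snippets"))
```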

  9. Searching in the middle-Capuchins' (Cebus apella) and bonobos' (Pan paniscus) behavior during a spatial search task.

    Science.gov (United States)

    Potì, Patrizia; Kanngiesser, Patricia; Saporiti, Martina; Amiconi, Alessandra; Bläsing, Bettina; Call, Josep

    2010-01-01

    In this study we show that bonobos and capuchin monkeys can learn to search in the middle of a landmark configuration in a small-scale space. Five bonobos (Pan paniscus) and 2 capuchin monkeys (Cebus apella) were tested in a series of experiments with the expansion test paradigm. The primates were trained to search in the middle of a 4- or 2-landmark configuration, and were then tested with the same configuration expanded. Neither species searched in the middle of the expanded 4-landmark configuration. When presented with a 2-landmark configuration and a constant or variable inter-landmark training distance, the subjects sometimes searched preferentially in the middle of the expanded configuration. We discuss 2 alternative explanations of the results: extracting a middle rule or averaging between different goal-landmark vectors. In any case, compared to adult humans, primates appear highly constrained in their abilities to search in the middle of a configuration of detached landmarks. We discuss some of the factors that may influence the primates' behavior in this task.

  10. In Search of the Optimal Path: How Learners at Task Use an Online Dictionary

    Science.gov (United States)

    Hamel, Marie-Josee

    2012-01-01

    We have analyzed circa 180 navigation paths followed by six learners while they performed three language encoding tasks at the computer using an online dictionary prototype. Our hypothesis was that learners who follow an "optimal path" while navigating within the dictionary, using its search and look-up functions, would have a high chance of…

  11. Differences in the Processing of Prefixes and Suffixes Revealed by a Letter-Search Task

    Science.gov (United States)

    Beyersmann, Elisabeth; Ziegler, Johannes C.; Grainger, Jonathan

    2015-01-01

    A letter-search task was used to test the hypothesis that affixes are chunked during morphological processing and that such chunking might operate differently for prefixes and suffixes. Participants had to detect a letter target that was embedded either in a prefix or suffix (e.g., "R" in "propoint" or "filmure") or…

  12. Hybrid self organizing migrating algorithm - Scatter search for the task of capacitated vehicle routing problem

    Science.gov (United States)

    Davendra, Donald; Zelinka, Ivan; Senkerik, Roman; Jasek, Roman; Bialic-Davendra, Magdalena

    2012-11-01

    One of the new emerging application strategies for optimization is the hybridization of existing metaheuristics. The research combines the unique paradigms of solution space sampling of SOMA and memory retention capabilities of Scatter Search for the task of capacitated vehicle routing problem. The new hybrid heuristic is tested on the Taillard sets and obtains good results.

  13. Attentional Capture by Salient Distractors during Visual Search Is Determined by Temporal Task Demands

    DEFF Research Database (Denmark)

    Kiss, Monika; Grubert, Anna; Petersen, Anders

    2012-01-01

    of capture by task-irrelevant color singletons in search arrays that could also contain a shape target. In Experiment 1, all displays were visible until response onset. In Experiment 2, display duration was limited to 200 msec. With long display durations, color singleton distractors elicited an N2pc...

  14. Search Engine Ranking, Quality, and Content of Web Pages That Are Critical Versus Noncritical of Human Papillomavirus Vaccine.

    Science.gov (United States)

    Fu, Linda Y; Zook, Kathleen; Spoehr-Labutta, Zachary; Hu, Pamela; Joseph, Jill G

    2016-01-01

    Online information can influence attitudes toward vaccination. The aim of the present study was to provide a systematic evaluation of the search engine ranking, quality, and content of Web pages that are critical versus noncritical of human papillomavirus (HPV) vaccination. We identified HPV vaccine-related Web pages with the Google search engine by entering 20 terms. We then assessed each Web page for critical versus noncritical bias and for the following quality indicators: authorship disclosure, source disclosure, attribution of at least one reference, currency, exclusion of testimonial accounts, and readability level less than ninth grade. We also determined Web page comprehensiveness in terms of mention of 14 HPV vaccine-relevant topics. Twenty searches yielded 116 unique Web pages. HPV vaccine-critical Web pages comprised roughly a third of the top, top 5- and top 10-ranking Web pages. The prevalence of HPV vaccine-critical Web pages was higher for queries that included term modifiers in addition to root terms. Compared with noncritical Web pages, Web pages critical of HPV vaccine overall had a lower quality score than those with a noncritical bias (p engine queries despite being of lower quality and less comprehensive than noncritical Web pages. Copyright © 2016 Society for Adolescent Health and Medicine. Published by Elsevier Inc. All rights reserved.

  15. SLIM: an alternative Web interface for MEDLINE/PubMed searches – a preliminary study

    Directory of Open Access Journals (Sweden)

    Ackerman Michael

    2005-12-01

    Full Text Available Abstract Background With the rapid growth of medical information and the pervasiveness of the Internet, online search and retrieval systems have become indispensable tools in medicine. The progress of Web technologies can provide expert searching capabilities to non-expert information seekers. The objective of the project is to create an alternative search interface for MEDLINE/PubMed searches using JavaScript slider bars. SLIM, or Slider Interface for MEDLINE/PubMed searches, was developed with PHP and JavaScript. Interactive slider bars in the search form controlled search parameters such as limits, filters and MeSH terminologies. Connections to PubMed were done using the Entrez Programming Utilities (E-Utilities). Custom scripts were created to mimic the automatic term mapping process of Entrez. Page generation times for both local and remote connections were recorded. Results Alpha testing by developers showed SLIM to be functionally stable. Page generation times to simulate loading times were recorded the first week of alpha and beta testing. Average page generation times for the index page, previews and searches were 2.94 milliseconds, 0.63 seconds and 3.84 seconds, respectively. Eighteen physicians from the US, Australia and the Philippines participated in the beta testing and provided feedback through an online survey. Most users found the search interface user-friendly and easy to use. Information on MeSH terms and the ability to instantly hide and display abstracts were identified as distinctive features. Conclusion SLIM can be an interactive time-saving tool for online medical literature research that improves user control and capability to instantly refine and refocus search strategies. With continued development and by integrating search limits, methodology filters, MeSH terms and levels of evidence, SLIM may be useful in the practice of evidence-based medicine.

  16. A salient and task-irrelevant collinear structure hurts visual search.

    Directory of Open Access Journals (Sweden)

    Chia-Huei Tseng

    Full Text Available Salient distractors draw our attention spontaneously, even when we intentionally want to ignore them. When this occurs, the real targets close to or overlapping with the distractors benefit from attention capture and thus are detected and discriminated more quickly. However, a puzzling opposite effect was observed in a search display with a column of vertical collinear bars presented as a task-irrelevant distractor [6]. In this case, it was harder to discriminate the targets overlapping with the salient distractor. Here we examined whether this effect originated from factors known to modulate attentional capture: (a) low probability - the probability occurrence of target location at the collinear column was much less (14%) than the rest of the display (86%), and observers might strategically direct their attention away from the collinear distractor; (b) attentional control setting - the distractor and target task interfered with each other because they shared the same continuity set in the attentional task; and/or (c) lack of time to establish the optional strategy. We tested these hypotheses by (a) increasing to 60% the trials in which targets overlapped with the same collinear distractor columns, (b) replacing the target task to be connectivity-irrelevant (i.e., luminance discrimination), and (c) having our observers practice the same search task for 10 days. Our results speak against all these hypotheses and lead us to conclude that a collinear distractor impairs search at a level that is unaffected by probabilistic information, attentional setting, and learning.

  17. Looking and listening: A comparison of intertrial repetition effects in visual and auditory search tasks.

    Science.gov (United States)

    Klein, Michael D; Stolz, Jennifer A

    2015-08-01

    Previous research shows that performance on pop-out search tasks is facilitated when the target and distractors repeat across trials compared to when they switch. This phenomenon has been shown for many different types of visual stimuli. We tested whether the effect would extend beyond visual stimuli to the auditory modality. Using a temporal search task that has previously been shown to elicit priming of pop-out with visual stimuli (Yashar & Lamy, Psychological Science, 21(2), 243-251, 2010), we showed that priming of pop-out does occur with auditory stimuli and has characteristics similar to those of an analogous visual task. These results suggest that either the same or similar mechanisms might underlie priming of pop-out in both modalities.

  18. A unified architecture for biomedical search engines based on semantic web technologies.

    Science.gov (United States)

    Jalali, Vahid; Matash Borujerdi, Mohammad Reza

    2011-04-01

    There has been huge growth in the volume of published biomedical research in recent years. Many medical search engines have been designed and developed to address the ever-growing information needs of biomedical experts and curators. Significant progress has been made in utilizing the knowledge embedded in medical ontologies and controlled vocabularies to assist these engines. However, the lack of a common architecture for the utilized ontologies and the overall retrieval process hampers evaluating different search engines, and interoperability between them, under unified conditions. In this paper, a unified architecture for medical search engines is introduced. The proposed model contains standard schemas declared in semantic web languages for the ontologies and documents used by search engines. Unified models for the annotation and retrieval processes are other parts of the introduced architecture. A sample search engine is also designed and implemented based on the proposed architecture. The search engine is evaluated using two test collections, and results are reported in terms of precision vs. recall and mean average precision for different approaches used by this search engine.
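
    The evaluation is reported in terms of precision vs. recall and mean average precision (MAP); as a reminder of the standard definition of the latter (this is the textbook computation, not the paper's code):

```python
# Hedged sketch of the standard average-precision / MAP computation the
# evaluation section refers to (standard IR definitions, not the paper's code).
def average_precision(ranked_ids, relevant_ids):
    """AP for one query: mean of precision@k over the ranks k of relevant documents."""
    relevant = set(relevant_ids)
    hits, precisions = 0, []
    for k, doc in enumerate(ranked_ids, start=1):
        if doc in relevant:
            hits += 1
            precisions.append(hits / k)
    return sum(precisions) / len(relevant) if relevant else 0.0

def mean_average_precision(runs):
    """runs: list of (ranked_ids, relevant_ids) pairs, one per query."""
    return sum(average_precision(r, q) for r, q in runs) / len(runs)

print(mean_average_precision([(["d1", "d3", "d2"], ["d1", "d2"])]))  # ~0.833
```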

  19. Simultaneous priming along multiple feature dimensions in a visual search task.

    Science.gov (United States)

    Kristjánsson, Arni

    2006-08-01

    What we have recently seen generally has a large effect on how we consequently perceive our visual environment. Such priming effects play a surprisingly large role in visual search tasks, for example. It is unclear, however, whether different features of an object show independent but simultaneous priming. For example, if the color and orientation of a target item are the same as on a previous trial, is performance better than if only one of those features is repeated? In other words this paper presents an attempt at assessing the capacity of priming for different feature dimensions. Observers searched for a three featured object (a gabor patch that was either redscale or greenscale, oriented either to the left or right of vertical and of high or low spatial frequency) among distractors with different values along these feature dimensions. Which feature was the target defining feature; which was the response defining feature and which was the irrelevant feature, was varied between the different experiments. Task relevant features (target defining, or response defining) always resulted in priming effects, while when spatial frequency or orientation were task irrelevant neither resulted in priming, but color always did, even when task irrelevant. Further experiments showed that priming from spatial frequency and orientation could occur when they were task irrelevant but only when the other feature of the two was kept constant across all display items. The results show that simultaneous priming for different features can occur simultaneously, but also that task relevance has a strong modulatory effect on the priming.

  20. University Students' Emotion During Online Search Task: A Multiple Achievement Goal Perspective.

    Science.gov (United States)

    Zhou, Mingming

    2016-07-03

    Adopting a multiple goal perspective, this study examined students' academic emotions under different goal profiles while they solved learning tasks online. One hundred and seven Chinese undergraduates were classified, based on the 2 × 2 achievement goal framework, into three groups: a Mastery-approach-focused, an Approach-oriented, and an Avoidance-oriented group. Participants' emotional states were assessed immediately prior to the task and following the task. Prior to the task, the Avoidance-oriented group reported significantly higher levels of deactivated negative emotion (i.e., bored and confused) than the Approach-oriented group. The Mastery-approach-focused group reported significantly higher levels of activated positive emotions (i.e., excited and eager) than the Avoidance-oriented group after the task. All three groups followed a similar pattern of change in deactivated positive emotion from before to after the search task, with a significant increase. In addition, the Mastery-approach-focused group reported a significantly higher level of happiness after completing the task, whereas the other two groups did not report much change. The Avoidance-oriented group also reported a significant drop in feelings of excitement, eagerness, anxiety, and nervousness, whereas the Approach-oriented group reported a significantly higher level of confusion after the task was finished. Implications of the findings are further discussed.

  1. Is Internet search better than structured instruction for web-based health education?

    Science.gov (United States)

    Finkelstein, Joseph; Bedra, McKenzie

    2013-01-01

    The Internet provides access to vast amounts of comprehensive information regarding any health-related subject. Patients increasingly use this information for health education, typically via a search engine to identify educational materials. An alternative approach to health education via the Internet is based on a verified web site which provides structured, interactive education guided by adult learning theories. These two approaches had not been compared systematically in older patients. The aim of this study was to compare the efficacy of a web-based computer-assisted education (CO-ED) system versus searching the Internet for learning about hypertension. Sixty hypertensive older adults (age 45+) were randomized into control or intervention groups. The control patients spent 30 to 40 minutes searching the Internet using a search engine for information about hypertension. The intervention patients spent 30 to 40 minutes using the CO-ED system, which provided computer-assisted instruction about major hypertension topics. Analysis of pre- and post-knowledge scores indicated a significant improvement among CO-ED users (14.6%) as opposed to Internet users (2%). Additionally, patients using the CO-ED program rated their learning experience more positively than those using the Internet.

  2. Distributed Web-Scale Infrastructure For Crawling, Indexing And Search With Semantic Support

    Directory of Open Access Journals (Sweden)

    Stefan Dlugolinsky

    2012-01-01

    Full Text Available In this paper, we describe our work in progress in the scope of web-scale information extraction and information retrieval utilizing distributed computing. We present a distributed architecture built on top of the MapReduce paradigm for information retrieval, information processing and intelligent search supported by spatial capabilities. The proposed architecture is focused on crawling documents in several different formats, information extraction, lightweight semantic annotation of the extracted information, indexing of extracted information and finally on indexing of documents based on the geo-spatial information found in a document. We demonstrate the architecture on two use cases, where the first is search in job offers retrieved from the LinkedIn portal and the second is search in BBC news feeds, and discuss several problems we had to face during the implementation. We also discuss spatial search applications for both cases because both LinkedIn job offer pages and BBC news feeds contain a lot of spatial information to extract and process.
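
    As a rough, single-process stand-in for the MapReduce-style indexing the architecture describes, the Python sketch below builds an inverted index from crawled documents with separate map and reduce steps; the document contents and identifiers are hypothetical, and a real deployment would run these phases on a distributed framework.

      from collections import defaultdict

      def map_phase(doc_id, text):
          # Map step: emit (term, doc_id) pairs for one crawled document.
          for term in text.lower().split():
              yield term, doc_id

      def reduce_phase(pairs):
          # Reduce step: group postings by term into an inverted index.
          index = defaultdict(set)
          for term, doc_id in pairs:
              index[term].add(doc_id)
          return index

      docs = {"job_offer_1": "python developer wanted in bratislava",
              "bbc_feed_1": "flooding reported in eastern europe"}
      pairs = (p for doc_id, text in docs.items() for p in map_phase(doc_id, text))
      print(reduce_phase(pairs)["in"])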

  3. GeNemo: a search engine for web-based functional genomic data.

    Science.gov (United States)

    Zhang, Yongqing; Cao, Xiaoyi; Zhong, Sheng

    2016-07-08

    A set of new data types emerged from functional genomic assays, including ChIP-seq, DNase-seq, FAIRE-seq and others. The results are typically stored as genome-wide intensities (WIG/bigWig files) or functional genomic regions (peak/BED files). These data types present new challenges to big data science. Here, we present GeNemo, a web-based search engine for functional genomic data. GeNemo searches user-input data against online functional genomic datasets, including the entire collection of ENCODE and mouse ENCODE datasets. Unlike text-based search engines, GeNemo's searches are based on pattern matching of functional genomic regions. This distinguishes GeNemo from text or DNA sequence searches. The user can input any complete or partial functional genomic dataset, for example, a binding intensity file (bigWig) or a peak file. GeNemo reports any genomic regions, ranging from hundred bases to hundred thousand bases, from any of the online ENCODE datasets that share similar functional (binding, modification, accessibility) patterns. This is enabled by a Markov Chain Monte Carlo-based maximization process, executed on up to 24 parallel computing threads. By clicking on a search result, the user can visually compare her/his data with the found datasets and navigate the identified genomic regions. GeNemo is available at www.genemo.org. © The Author(s) 2016. Published by Oxford University Press on behalf of Nucleic Acids Research.
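
    GeNemo's actual matching relies on a Markov Chain Monte Carlo-based maximization that is not reproduced here; the Python sketch below only illustrates the general idea of scoring datasets by how well their peak regions overlap a query's regions, using made-up track names and coordinates.

      def overlap(a, b):
          # Length of the overlap between two (start, end) intervals on one chromosome.
          return max(0, min(a[1], b[1]) - max(a[0], b[0]))

      def similarity(query_peaks, track_peaks):
          # Fraction of query bases covered by the track's best-matching peaks.
          covered = sum(max((overlap(q, t) for t in track_peaks), default=0) for q in query_peaks)
          total = sum(end - start for start, end in query_peaks)
          return covered / total if total else 0.0

      tracks = {"encode_track_a": [(100, 250), (900, 1200)],   # made-up peak coordinates
                "encode_track_b": [(5000, 5100)]}
      query = [(120, 220), (950, 1000)]
      print(sorted(tracks, key=lambda name: similarity(query, tracks[name]), reverse=True))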

  4. Searching the Web for Earth Science Data: Semiotics to Cybernetics and Back

    Directory of Open Access Journals (Sweden)

    Bruce R. Barkstrom

    2016-06-01

    Full Text Available This paper discusses a search paradigm for numerical data in Earth science that relies on the intrinsic structure of an archive's collection. Such non-textual data lies outside the normal textual basis for the Semantic Web. The paradigm tries to bypass some of the difficulties associated with keyword searches, such as semantic heterogeneity. The suggested collection structure uses a hierarchical taxonomy based on multidimensional axes of continuous variables. This structure fits the underlying 'geometry' of Earth science data better than sets of keywords in an ontology. The alternative paradigm views the search as a two-agent cooperative game that uses a dialog between the search engine and the data user. In this view, the search engine knows about the objects in the archive. It cannot read the user's mind to identify what the user needs. We assume the user has a clear idea of the search target. However, he or she may not have a clear idea of the archive's contents. The paper suggests how the user interface may provide information to deal with the user's difficulties in understanding items in the dialog.

  5. An Evidence-Based Review of Academic Web Search Engines, 2014-2016: Implications for Librarians' Practice and Research Agenda

    National Research Council Canada - National Science Library

    Jody Condit Fagan

    2017-01-01

    Academic web search engines have become central to scholarly research. While the fitness of Google Scholar for research purposes has been examined repeatedly, Microsoft Academic and Google Books have not received much attention...

  6. Measuring cognitive processes involved in the web search: log files, eye-movements and cued retrospective reports

    NARCIS (Netherlands)

    Argelagos, Esther; Jarodzka, Halszka; Pifarre, Manoli

    2011-01-01

    Argelagós, E., Jarodzka, H., & Pifarré, M. (2011, August). Measuring cognitive processes involved in web search: log files, eye-movements and cued retrospective reports compared. Presentation at EARLI, Exeter, UK.

  7. Measuring Spontaneous and Instructed Evaluation Processes during Web Search: Integrating Concurrent Thinking-Aloud Protocols and Eye-Tracking Data

    Science.gov (United States)

    Gerjets, Peter; Kammerer, Yvonne; Werner, Benita

    2011-01-01

    Web searching for complex information requires appropriately evaluating diverse sources of information. Information science studies have identified different criteria applied by searchers to evaluate Web information. However, the explicit evaluation instructions used in these studies might have resulted in a distortion of spontaneous evaluation…

  8. Ideal and visual-search observers: accounting for anatomical noise in search tasks with planar nuclear imaging

    Science.gov (United States)

    Sen, Anando; Gifford, Howard C.

    2015-03-01

    Model observers have frequently been used for hardware optimization of imaging systems. For model observers to reliably mimic human performance, it is important to account for the sources of variation in the images. Detection-localization tasks are complicated by anatomical noise present in the images. Several scanning observers have been proposed for such tasks. The most popular of these, the channelized Hotelling observer (CHO), incorporates anatomical variations through covariance matrices. We propose the visual-search (VS) observer as an alternative to the CHO to account for anatomical noise. The VS observer is a two-step process which first identifies suspicious tumor candidates and then performs a detailed analysis on them. The identification of suspicious candidates (search) implicitly accounts for anatomical noise. In this study we present a comparison of these two observers with human observers. The application considered is collimator optimization for planar nuclear imaging. Both observers show similar trends in performance, with the VS observer slightly closer to human performance.
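
    For context, a channelized Hotelling observer reduces each image to a few channel outputs and applies a Hotelling (prewhitened matched-filter) template in that channel space. The Python sketch below is a minimal, generic illustration of that computation, assuming pre-extracted training images and channel templates; it is not the study's implementation and omits the visual-search observer entirely.

      import numpy as np

      def cho_decision_variables(signal_imgs, noise_imgs, channels):
          # Project images (n_images x n_pixels) onto channel templates (n_pixels x n_channels).
          v_s, v_n = signal_imgs @ channels, noise_imgs @ channels
          # Average intra-class channel covariance and Hotelling template.
          s_v = 0.5 * (np.cov(v_s, rowvar=False) + np.cov(v_n, rowvar=False))
          w = np.linalg.solve(s_v, v_s.mean(axis=0) - v_n.mean(axis=0))
          return v_s @ w, v_n @ w   # decision variables for each class

      rng = np.random.default_rng(0)
      channels = rng.normal(size=(64, 4))            # 4 toy channels over 64 pixels
      noise = rng.normal(size=(200, 64))              # signal-absent images
      signal = rng.normal(size=(200, 64)) + 0.5       # signal-present images
      t_s, t_n = cho_decision_variables(signal, noise, channels)
      print(t_s.mean() > t_n.mean())                  # signal class scores higher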

  9. Ramakrishnan: Semantics on the Web

    Data.gov (United States)

    National Aeronautics and Space Administration — It is becoming increasingly clear that the next generation of web search and advertising will rely on a deeper understanding of user intent and task modeling, and a...

  10. Using mixed-initiative human-robot interaction to bound performance in a search task

    Energy Technology Data Exchange (ETDEWEB)

    Curtis W. Nielsen; Douglas A. Few; Devin S. Athey

    2008-12-01

    Mobile robots are increasingly used in dangerous domains because they can keep humans out of harm’s way. Despite their advantages in hazardous environments, their general acceptance in other, less dangerous domains has not been apparent and, even in dangerous environments, robots are often viewed as a “last-possible choice.” In order to increase the utility and acceptance of robots in hazardous domains, researchers at the Idaho National Laboratory have both developed and tested novel mixed-initiative solutions that support human-robot interaction. In a recent “dirty-bomb” experiment, participants exhibited different search strategies, making it difficult to determine any performance benefits. This paper presents a method for categorizing the search patterns and shows that the mixed-initiative solution decreased the time to complete the task and decreased the performance spread between participants, independent of prior training and of the individual strategies used to accomplish the task.

  11. Using the open Web as an information resource and scholarly Web search engines as retrieval tools for academic and research purposes

    OpenAIRE

    Filistea Naude; Chris Rensleigh; Adeline S.A. du Toit

    2010-01-01

    This study provided insight into the significance of the open Web as an information resource and Web search engines as research tools amongst academics. The academic staff establishment of the University of South Africa (Unisa) was invited to participate in a questionnaire survey and included 1188 staff members from five colleges. This study culminated in a PhD dissertation in 2008. One hundred and eighty seven respondents participated in the survey which gave a response rate of 15.7%. The re...

  12. World Wide Web Based Image Search Engine Using Text and Image Content Features

    Science.gov (United States)

    Luo, Bo; Wang, Xiaogang; Tang, Xiaoou

    2003-01-01

    Using both text and image content features, a hybrid image retrieval system for the World Wide Web is developed in this paper. We first use a text-based image meta-search engine to retrieve images from the Web based on the text information on the image host pages to provide an initial image set. Because of the high-speed and low-cost nature of the text-based approach, we can easily retrieve a broad coverage of images with a high recall rate and a relatively low precision. An image content based ordering is then performed on the initial image set. All the images are clustered into different folders based on the image content features. In addition, the images can be re-ranked by the content features according to the user feedback. Such a design makes it truly practical to use both text and image content for image retrieval over the Internet. Experimental results confirm the efficiency of the system.
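
    The content-based re-ranking step can be illustrated with a small Python sketch: given candidate images returned by the text-based meta-search, re-order them by the cosine similarity of their content feature vectors to a user-feedback example. The feature extraction itself is assumed to have happened elsewhere, and all names and vectors are hypothetical rather than taken from the paper.

      import numpy as np

      def rerank_by_content(candidates, feedback):
          # candidates: {image_id: content feature vector}; feedback: a liked example's features.
          def cosine(a, b):
              return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))
          return sorted(candidates, key=lambda k: cosine(candidates[k], feedback), reverse=True)

      # Hypothetical color-histogram-like features for images found by the text meta-search.
      candidates = {"img_01": np.array([0.9, 0.1, 0.0]),
                    "img_02": np.array([0.1, 0.8, 0.1]),
                    "img_03": np.array([0.8, 0.2, 0.1])}
      print(rerank_by_content(candidates, feedback=np.array([1.0, 0.1, 0.0])))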

  13. A salient and task-irrelevant collinear structure hurts visual search

    OpenAIRE

    Chia-Huei Tseng; Li Jingling

    2015-01-01

    Salient distractors draw our attention spontaneously, even when we intentionally want to ignore them. When this occurs, the real targets close to or overlapping with the distractors benefit from attention capture and thus are detected and discriminated more quickly. However, a puzzling opposite effect was observed in a search display with a column of vertical collinear bars presented as a task-irrelevant distractor [6]. In this case, it was harder to discriminate the targets overlapping with ...

  14. The Strategies WDK: a graphical search interface and web development kit for functional genomics databases.

    Science.gov (United States)

    Fischer, Steve; Aurrecoechea, Cristina; Brunk, Brian P; Gao, Xin; Harb, Omar S; Kraemer, Eileen T; Pennington, Cary; Treatman, Charles; Kissinger, Jessica C; Roos, David S; Stoeckert, Christian J

    2011-01-01

    Web sites associated with the Eukaryotic Pathogen Bioinformatics Resource Center (EuPathDB.org) have recently introduced a graphical user interface, the Strategies WDK, intended to make advanced searching and set and interval operations easy and accessible to all users. With a design guided by usability studies, the system helps motivate researchers to perform dynamic computational experiments and explore relationships across data sets. For example, PlasmoDB users seeking novel therapeutic targets may wish to locate putative enzymes that distinguish pathogens from their hosts, and that are expressed during appropriate developmental stages. When a researcher runs one of the approximately 100 searches available on the site, the search is presented as a first step in a strategy. The strategy is extended by running additional searches, which are combined with set operators (union, intersect or minus), or genomic interval operators (overlap, contains). A graphical display uses Venn diagrams to make the strategy's flow obvious. The interface facilitates interactive adjustment of the component searches with changes propagating forward through the strategy. Users may save their strategies, creating protocols that can be shared with colleagues. The strategy system has now been deployed on all EuPathDB databases, and successfully deployed by other projects. The Strategies WDK uses a configurable MVC architecture that is compatible with most genomics and biological warehouse databases, and is available for download at code.google.com/p/strategies-wdk. Database URL: www.eupathdb.org.
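
    The set and interval operators described above map naturally onto simple data-structure operations. The Python sketch below, using hypothetical gene identifiers and coordinates, shows how two search results might be combined with union/intersect/minus and how an overlap test between genomic intervals could work; it is only an illustration of the concept, not the Strategies WDK code.

      def overlaps(a, b):
          # True if two (chromosome, start, end) intervals overlap.
          return a[0] == b[0] and a[1] < b[2] and b[1] < a[2]

      # Hypothetical gene-ID result sets from two component searches of a strategy.
      enzymes   = {"gene_0001", "gene_0002", "gene_0003"}
      expressed = {"gene_0002", "gene_0003", "gene_0004"}
      print(enzymes & expressed)    # INTERSECT
      print(enzymes | expressed)    # UNION
      print(enzymes - expressed)    # MINUS

      # Genomic interval operator: keep regions from search 1 that overlap regions from search 2.
      regions_1 = [("chr1", 100, 500), ("chr2", 10, 90)]
      regions_2 = [("chr1", 450, 900)]
      print([r for r in regions_1 if any(overlaps(r, s) for s in regions_2)])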

  15. Lexical-Semantic Search Under Different Covert Verbal Fluency Tasks: An fMRI Study.

    Science.gov (United States)

    Li, Yunqing; Li, Ping; Yang, Qing X; Eslinger, Paul J; Sica, Chris T; Karunanayaka, Prasanna

    2017-01-01

    Background: Verbal fluency is a measure of cognitive flexibility and word search strategies that is widely used to characterize impaired cognitive function. Despite the wealth of research on identifying and characterizing distinct aspects of verbal fluency, the anatomic and functional substrates of retrieval-related search and post-retrieval control processes still have not been fully elucidated. Methods: Twenty-one native English-speaking, healthy, right-handed, adult volunteers (mean age = 31 years; range = 21-45 years; 9 F) took part in a block-design functional Magnetic Resonance Imaging (fMRI) study of free recall, covert word generation tasks when guided by phonemic (P), semantic-category (C), and context-based fill-in-the-blank sentence completion (S) cues. General linear model (GLM), Independent Component Analysis (ICA), and psychophysiological interaction (PPI) were used to further characterize the neural substrate of verbal fluency as a function of retrieval cue type. Results: Common localized activations across P, C, and S tasks occurred in the bilateral superior and left inferior frontal gyrus, left anterior cingulate cortex, bilateral supplementary motor area (SMA), and left insula. Differential task activations were centered in the occipital, temporal and parietal regions as well as the thalamus and cerebellum. The context-based fluency task, i.e., the S task, elicited higher differential brain activity in a lateralized frontal-temporal network typically engaged in complex language processing. P and C tasks elicited activation in limited pathways mainly within the left frontal regions. ICA and PPI results of the S task suggested that brain regions distributed across both hemispheres, extending beyond classical language areas, are recruited for lexical-semantic access and retrieval during sentence completion. Conclusion: Study results support the hypothesis of overlapping, as well as distinct, neural networks for covert word generation when guided by

  16. Identifying Evidence for Public Health Guidance: A Comparison of Citation Searching with Web of Science and Google Scholar

    Science.gov (United States)

    Levay, Paul; Ainsworth, Nicola; Kettle, Rachel; Morgan, Antony

    2016-01-01

    Aim: To examine how effectively forwards citation searching with Web of Science (WOS) or Google Scholar (GS) identified evidence to support public health guidance published by the National Institute for Health and Care Excellence. Method: Forwards citation searching was performed using GS on a base set of 46 publications and replicated using WOS.…

  17. Cardiac Resynchronization Therapy Online: What Patients Find when Searching the World Wide Web.

    Science.gov (United States)

    Modi, Minal; Laskar, Nabila; Modi, Bhavik N

    2016-06-01

    To objectively assess the quality of information available on the World Wide Web on cardiac resynchronization therapy (CRT). Patients frequently search the internet regarding their healthcare issues. It has been shown that patients seeking information can help or hinder their healthcare outcomes depending on the quality of information consulted. On the internet, this information can be produced and published by anyone, resulting in the risk of patients accessing inaccurate and misleading information. The search term "Cardiac Resynchronisation Therapy" was entered into the three most popular search engines and the first 50 pages on each were pooled and analyzed, after excluding websites inappropriate for objective review. The "LIDA" instrument (a validated tool for assessing quality of healthcare information websites) was used to generate scores on Accessibility, Reliability, and Usability. Readability was assessed using the Flesch Reading Ease Score (FRES). Of the 150 web-links, 41 sites met the eligibility criteria. The sites were assessed using the LIDA instrument and the FRES. The mean total LIDA score for all the websites assessed was 123.5 out of a possible 165 (74.8%). The average Accessibility score was 50.1 out of 60 (84.3%), the average Usability score 41.4 out of 54 (76.6%), the average Reliability score 31.5 out of 51 (61.7%), and the average FRES 41.8. There was significant variability among sites and, interestingly, there was no correlation between the sites' search engine ranking and their scores. This study has illustrated the variable quality of online material on the topic of CRT. Furthermore, there was also no apparent correlation between highly ranked, popular websites and their quality. Healthcare professionals should be encouraged to guide their patients toward the online material that contains reliable information. © 2016 Wiley Periodicals, Inc.
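
    The Flesch Reading Ease Score used here is a standard formula, 206.835 - 1.015 x (words per sentence) - 84.6 x (syllables per word). The Python sketch below computes it with a crude vowel-group heuristic for syllable counting (a simplifying assumption, not the instrument the authors used).

      import re

      def flesch_reading_ease(text):
          # 206.835 - 1.015 * (words / sentences) - 84.6 * (syllables / words),
          # with syllables estimated by counting vowel groups per word (a rough heuristic).
          sentences = max(1, len(re.findall(r"[.!?]+", text)))
          words = re.findall(r"[A-Za-z']+", text)
          syllables = sum(max(1, len(re.findall(r"[aeiouy]+", w.lower()))) for w in words)
          n = max(1, len(words))
          return 206.835 - 1.015 * (n / sentences) - 84.6 * (syllables / n)

      print(round(flesch_reading_ease("The device helps the heart beat in step. Ask your doctor."), 1))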

  18. HDAPD: a web tool for searching the disease-associated protein structures

    Science.gov (United States)

    2010-01-01

    Background The protein structures of disease-associated proteins are important for structure-based drug design against a particular disease. Until now, protein structures have usually been searched for through a PDB id or some sequence information. In the HDAPD database presented here, however, the protein structure of a disease-associated protein can be searched for directly by keying in the associated disease name. Description A search in HDAPD can be easily initiated by keying in some keywords of a disease, a protein name, a protein type, or a PDB id. The protein sequence can be presented in FASTA format and directly copied for a BLAST search. HDAPD is also interfaced with Jmol so that users can view and manipulate a protein structure with Jmol. Gene ontological data such as cellular components, molecular functions, and biological processes are provided once a hyperlink to Gene Ontology (GO) is clicked. Further, HDAPD provides a link to the KEGG map, from which users can find where the protein is placed in a metabolic pathway and its relationship with other proteins. The latest literature (titles, journals, authors, and abstracts retrieved from PubMed for the protein) is also presented as a length-controllable list. Conclusions Since the HDAPD data content can be routinely updated through a PHP-MySQL web page built for this purpose, the database is useful for searching the structures of disease-associated proteins that may play important roles in the disease-developing process, in order to perform structure-based drug design against these diseases. PMID:20158919

  19. Omicseq: a web-based search engine for exploring omics datasets.

    Science.gov (United States)

    Sun, Xiaobo; Pittard, William S; Xu, Tianlei; Chen, Li; Zwick, Michael E; Jiang, Xiaoqian; Wang, Fusheng; Qin, Zhaohui S

    2017-04-10

    The development and application of high-throughput genomics technologies has resulted in massive quantities of diverse omics data that continue to accumulate rapidly. These rich datasets offer unprecedented and exciting opportunities to address long-standing questions in biomedical research. However, our ability to explore and query the content of diverse omics data is very limited. Existing dataset search tools rely almost exclusively on the metadata. A text-based query for gene name(s) does not work well on datasets wherein the vast majority of their content is numeric. To overcome this barrier, we have developed Omicseq, a novel web-based platform that facilitates the easy interrogation of omics datasets holistically to improve 'findability' of relevant data. The core component of Omicseq is trackRank, a novel algorithm for ranking omics datasets that fully uses the numerical content of the dataset to determine relevance to the query entity. The Omicseq system is supported by a scalable and elastic NoSQL database that hosts a large collection of processed omics datasets. In the front end, a simple, web-based interface allows users to enter queries and instantly receive search results as a list of ranked datasets deemed to be the most relevant. Omicseq is freely available at http://www.omicseq.org. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.
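
    The trackRank algorithm itself is not described in enough detail here to reproduce, so the Python sketch below only illustrates the general idea of ranking datasets by their numerical content rather than metadata, using Pearson correlation between binned signals as a stand-in scoring rule and made-up track names.

      import numpy as np

      def rank_tracks(query_signal, tracks):
          # Score each track by Pearson correlation of its binned signal with the query's.
          score = lambda sig: float(np.corrcoef(query_signal, sig)[0, 1])
          return sorted(tracks, key=lambda name: score(tracks[name]), reverse=True)

      query = np.array([0.1, 2.5, 0.0, 3.1, 0.2])            # hypothetical binned query signal
      tracks = {"h3k4me3_liver": np.array([0.2, 2.0, 0.1, 2.8, 0.3]),
                "dnase_brain":   np.array([1.5, 0.1, 1.4, 0.0, 1.2])}
      print(rank_tracks(query, tracks))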

  20. Characterizing interdisciplinarity of researchers and research topics using web search engines.

    Science.gov (United States)

    Sayama, Hiroki; Akaishi, Jin

    2012-01-01

    Researchers' networks have been subject to active modeling and analysis. Earlier literature mostly focused on citation or co-authorship networks reconstructed from annotated scientific publication databases, which have several limitations. Recently, general-purpose web search engines have also been utilized to collect information about social networks. Here we reconstructed, using web search engines, a network representing the relatedness of researchers to their peers as well as to various research topics. Relatedness between researchers and research topics was characterized by visibility boost, i.e., the increase of a researcher's visibility when focusing on a particular topic. It was observed that researchers who had high visibility boosts by the same research topic tended to be close to each other in their network. We calculated correlations between visibility boosts by research topics and researchers' interdisciplinarity at the individual level (diversity of topics related to the researcher) and at the social level (his/her centrality in the researchers' network). We found that visibility boosts by certain research topics were positively correlated with researchers' individual-level interdisciplinarity despite their negative correlations with the general popularity of researchers. It was also found that visibility boosts by network-related topics had positive correlations with researchers' social-level interdisciplinarity. Research topics' correlations with researchers' individual- and social-level interdisciplinarities were found to be nearly independent from each other. These findings suggest that the notion of "interdisciplinarity" of a researcher should be understood as a multi-dimensional concept that should be evaluated using multiple assessment means.
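
    As a hedged illustration of the visibility-boost idea, the Python sketch below compares a researcher's share of search hits when the query is restricted to a topic against their share of hits overall; the exact formula in the paper may differ, and all hit counts are hypothetical.

      def visibility_boost(hits_name_and_topic, hits_topic, hits_name, hits_total):
          # Researcher's share of hits within the topic, relative to their share of hits overall.
          # This is one plausible operationalization, not necessarily the paper's exact formula.
          return (hits_name_and_topic / hits_topic) / (hits_name / hits_total)

      # Hypothetical web-search hit counts.
      print(visibility_boost(hits_name_and_topic=1_200, hits_topic=2_000_000,
                             hits_name=15_000, hits_total=10_000_000_000))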

  1. Web Content Search and Adaptation for IDTV: One Step Forward in the Mediamorphosis Process toward Personal-TV

    Directory of Open Access Journals (Sweden)

    Stefano Ferretti

    2007-01-01

    Full Text Available We are on the threshold of a mediamorphosis that will revolutionize the way we interact with our TV sets. The combination between interactive digital TV (IDTV and the Web fosters the development of new interactive multimedia services enjoyable even through a TV screen and a remote control. Yet, several design constraints complicate the deployment of this new pattern of services. Prominent unresolved issues involve macro-problems such as collecting information on the Web based on users' preferences and appropriately presenting retrieved Web contents on the TV screen. To this aim, we propose a system able to dynamically convey contents from the Web to IDTV systems. Our system presents solutions both for personalized Web content search and automatic TV-format adaptation of retrieved documents. As we demonstrate through two case study applications, our system merges the best of IDTV and Web domains spinning the TV mediamorphosis toward the creation of the personal-TV concept.

  2. The effects of visual realism on search tasks in mixed reality simulation.

    Science.gov (United States)

    Lee, Cha; Rincon, Gustavo A; Meyer, Greg; Höllerer, Tobias; Bowman, Doug A

    2013-04-01

    In this paper, we investigate the validity of Mixed Reality (MR) Simulation by conducting an experiment studying the effects of the visual realism of the simulated environment on various search tasks in Augmented Reality (AR). MR Simulation is a practical approach to conducting controlled and repeatable user experiments in MR, including AR. This approach uses a high-fidelity Virtual Reality (VR) display system to simulate a wide range of equal or lower fidelity displays from the MR continuum, for the express purpose of conducting user experiments. For the experiment, we created three virtual models of a real-world location, each with a different perceived level of visual realism. We designed and executed an AR experiment using the real-world location and repeated the experiment within VR using the three virtual models we created. The experiment looked into how fast users could search for both physical and virtual information that was present in the scene. Our experiment demonstrates the usefulness of MR Simulation and provides early evidence for the validity of MR Simulation with respect to AR search tasks performed in immersive VR.

  3. Rare disease diagnosis: A review of web search, social media and large-scale data-mining approaches

    DEFF Research Database (Denmark)

    Svenstrup, Dan Tito; Jørgensen, Henrik L; Winther, Ole

    2015-01-01

    ... in technology and access to high quality data have opened new possibilities for aiding the diagnostic process. Specialized search engines, data mining tools and social media are some of the areas that hold promise. ... on the use of web search, social media and data mining in data repositories for medical diagnosis. We compare the retrieval accuracy on 56 rare disease cases with known diagnosis for the web search tools google.com, pubmed.gov, omim.org and our own search tool findzebra.com. We give a detailed description ... % and 64%, respectively. Thus, FindZebra has a significantly (p ...) higher recall@10 than the other search engines. When tested under the same conditions, Watson and FindZebra showed similar recall@10 accuracy. However, the tests were performed on different subsets of Doctors dilemma questions.

  4. Hybrid Symbiotic Organisms Search Optimization Algorithm for Scheduling of Tasks on Cloud Computing Environment.

    Science.gov (United States)

    Abdullahi, Mohammed; Ngadi, Md Asri

    2016-01-01

    Cloud computing has attracted significant attention from the research community because of the rapid migration rate of Information Technology services to its domain. Advances in virtualization technology have made cloud computing very popular as a result of easier deployment of application services. Tasks are submitted to cloud datacenters to be processed in a pay-as-you-go fashion. Task scheduling is one of the significant research challenges in cloud computing environments. The current formulation of the task scheduling problem has been shown to be NP-complete, hence finding the exact solution, especially for large problem sizes, is intractable. The heterogeneous and dynamic nature of cloud resources makes optimum task scheduling non-trivial. Therefore, efficient task scheduling algorithms are required for optimum resource utilization. Symbiotic Organisms Search (SOS) has been shown to perform competitively with Particle Swarm Optimization (PSO). The aim of this study is to optimize task scheduling in a cloud computing environment using a proposed Simulated Annealing (SA) based SOS (SASOS), in order to improve the convergence rate and solution quality of SOS. The SOS algorithm has a strong global exploration capability and uses fewer parameters. The systematic reasoning ability of SA is employed to find better solutions in local solution regions, hence adding exploration ability to SOS. Also, a fitness function is proposed which takes into account the utilization level of virtual machines (VMs), which reduces makespan and the degree of imbalance among VMs. The CloudSim toolkit was used to evaluate the efficiency of the proposed method using both synthetic and standard workloads. Simulation results showed that the hybrid SOS performs better than SOS in terms of convergence speed, response time, degree of imbalance, and makespan.
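
    A minimal Python sketch of the scheduling objective and the simulated-annealing acceptance rule is given below; the weighting of makespan against degree of imbalance and all parameter values are assumptions for illustration, not the authors' exact fitness function or SASOS implementation.

      import math, random

      def makespan_and_load(assignment, task_len, vm_speed):
          # assignment[i] = index of the VM that task i is mapped to.
          load = [0.0] * len(vm_speed)
          for task, vm in enumerate(assignment):
              load[vm] += task_len[task] / vm_speed[vm]
          return max(load), load

      def fitness(assignment, task_len, vm_speed, w=0.5):
          # Weighted mix of makespan and degree of imbalance (weights are an assumption).
          ms, load = makespan_and_load(assignment, task_len, vm_speed)
          imbalance = (max(load) - min(load)) / (sum(load) / len(load) + 1e-12)
          return w * ms + (1 - w) * imbalance          # lower is better

      def sa_accept(current, candidate, temperature):
          # Simulated-annealing rule: always accept improvements, sometimes accept worse moves.
          if candidate <= current:
              return True
          return random.random() < math.exp((current - candidate) / temperature)

      print(fitness([0, 1, 0, 1], task_len=[8, 4, 6, 2], vm_speed=[2.0, 1.0]))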

  5. Hybrid Symbiotic Organisms Search Optimization Algorithm for Scheduling of Tasks on Cloud Computing Environment.

    Directory of Open Access Journals (Sweden)

    Mohammed Abdullahi

    Full Text Available Cloud computing has attracted significant attention from the research community because of the rapid migration rate of Information Technology services to its domain. Advances in virtualization technology have made cloud computing very popular as a result of easier deployment of application services. Tasks are submitted to cloud datacenters to be processed in a pay-as-you-go fashion. Task scheduling is one of the significant research challenges in cloud computing environments. The current formulation of the task scheduling problem has been shown to be NP-complete, hence finding the exact solution, especially for large problem sizes, is intractable. The heterogeneous and dynamic nature of cloud resources makes optimum task scheduling non-trivial. Therefore, efficient task scheduling algorithms are required for optimum resource utilization. Symbiotic Organisms Search (SOS) has been shown to perform competitively with Particle Swarm Optimization (PSO). The aim of this study is to optimize task scheduling in a cloud computing environment using a proposed Simulated Annealing (SA) based SOS (SASOS), in order to improve the convergence rate and solution quality of SOS. The SOS algorithm has a strong global exploration capability and uses fewer parameters. The systematic reasoning ability of SA is employed to find better solutions in local solution regions, hence adding exploration ability to SOS. Also, a fitness function is proposed which takes into account the utilization level of virtual machines (VMs), which reduces makespan and the degree of imbalance among VMs. The CloudSim toolkit was used to evaluate the efficiency of the proposed method using both synthetic and standard workloads. Simulation results showed that the hybrid SOS performs better than SOS in terms of convergence speed, response time, degree of imbalance, and makespan.

  6. Hand movement deviations in a visual search task with cross modal cuing

    Directory of Open Access Journals (Sweden)

    Hürol Aslan

    2007-01-01

    Full Text Available The purpose of this study is to demonstrate the cross-modal effects of an auditory organization on a visual search task and to investigate the influence of the level of detail in instructions describing or hinting at the associations between auditory stimuli and the possible locations of a visual target. In addition to measuring the participants’ reaction times, we paid special attention to tracking the hand movements toward the target. According to the results, the auditory stimuli unassociated with the target locations slightly, but significantly, increased the deviation of the hand movement from the path leading to the target location. The increase in the deviation depended on the degree of association between auditory stimuli and target locations, albeit not on the level of detail in the instructions about the task.

  7. Sagace: A web-based search engine for biomedical databases in Japan

    Directory of Open Access Journals (Sweden)

    Morita Mizuki

    2012-10-01

    Full Text Available Abstract Background In the big data era, biomedical research continues to generate a large amount of data, and the generated information is often stored in a database and made publicly available. Although combining data from multiple databases should accelerate further studies, the current number of life sciences databases is too large to grasp features and contents of each database. Findings We have developed Sagace, a web-based search engine that enables users to retrieve information from a range of biological databases (such as gene expression profiles and proteomics data) and biological resource banks (such as mouse models of disease and cell lines). With Sagace, users can search more than 300 databases in Japan. Sagace offers features tailored to biomedical research, including manually tuned ranking, a faceted navigation to refine search results, and rich snippets constructed with retrieved metadata for each database entry. Conclusions Sagace will be valuable for experts who are involved in biomedical research and drug development in both academia and industry. Sagace is freely available at http://sagace.nibio.go.jp/en/.

  8. Gender Asymmetries Encountered in the Search and Exploration of Mining Engineering Program Web Sites: a Portrayal of Posture and Roles

    Science.gov (United States)

    Banning, James H.; Sexton, Julie; Most, David E.; Maier, Shelby

    Photographs found in the search for and exploration of 13 university mining engineering department Web sites were studied for their asymmetries of power by analyzing the role (student, instructor, secretarial staff, miner, and honoree) and posture (sitting, standing) of men and women in the photographs. The Web site photographs showed a higher rate of women occupying student roles than men did. Women had a lower rate of occupying instructor and miner roles. No women were portrayed as being honored. Men exhibited a higher rate of occupying the standing posture than did women. Women were more often shown sitting than men were. Implications of portraying a nonequitable power structure between men and women in the search for and exploration of mining engineering Web sites are discussed, including a recommendation that all academic departments should examine the portrayal of gender on their Web sites.

  9. Utilizing mixed methods research in analyzing Iranian researchers’ information search behaviour in the Web and presenting current pattern

    Directory of Open Access Journals (Sweden)

    Maryam Asadi

    2015-12-01

    Full Text Available Using a mixed methods research design, the current study analyzed Iranian researchers' information searching behaviour on the Web, and a model of their information searching behaviour was then developed from the extracted concepts. Forty-four participants, including academic staff from universities and research centers, were selected by purposive sampling. Data were gathered from a questionnaire including ten questions and from semi-structured interviews. Each participant's memos were analyzed using grounded theory methods adapted from Strauss & Corbin (1998). Results showed that the main objectives of the subjects in using the Web were doing research, writing papers, studying, doing assignments, downloading files and acquiring public information. The most important ways of learning how to search and retrieve information among the subjects were trial and error and getting help from friends. Information resources were identified through searching in information resources (e.g. search engines, references in papers, and online databases ...), communication facilities and tools (e.g. contact with colleagues, seminars and workshops, social networking ...), and information services (e.g. RSS, alerting, and SDI). Findings also indicated that searching with search engines, reviewing references, searching in online databases, contacting colleagues and studying the latest issues of electronic journals were the most important approaches to searching. The most important strategies were using search engines and scholarly tools such as Google Scholar. In addition, the simple (quick) search method was the most common among subjects. Topic, keywords and the title of a paper were the most important elements for retrieving information. Analysis of the interviews showed that there were nine stages in the researchers' information searching behaviour: topic selection, initiating search, formulating search query, information retrieval, access to information

  10. Web 2.0 Tasks in Action: EFL Learning in the U.S. Embassy School Election Project 2012

    Directory of Open Access Journals (Sweden)

    Joannis Kaliampos

    2014-10-01

    Full Text Available Exploring topics that are personally relevant and interesting to young adult English as a foreign language (EFL) learners remains a core challenge in language teaching. At the same time, the advent of Web 2.0 applications has many repercussions for authentic language learning. The “U.S. Embassy School Election Project 2012” has addressed these questions by combining a close focus on the U.S. Presidential Election with an interactive project scenario. Over 1,400 students across Germany participated in this project and produced an election forecast for an assigned U.S. state based on a survey of regional news media and social network data. Their predictions were in many cases more accurate than those of major U.S. broadcasting networks. This paper discusses the general educational potential of such projects in the contexts of computer-assisted language learning (CALL), intercultural learning, and learning in a task-based project environment. The authors have applied a multimodal qualitative approach to analyze tasks and learner perceptions of tasks in the context of the election project. In a first step, the micro-perspective of the perception of web-based tasks is investigated by example of one selected task cycle and a focus group of three learners. The second part of the analysis represents a bird’s-eye view on the learner products arising out of such tasks.

  11. Database with web interface and search engine as a diagnostics tool for electromagnetic calorimeter

    CERN Document Server

    Paluoja, Priit

    2017-01-01

    During the 2016 data collection, the Compact Muon Solenoid Data Acquisition (CMS DAQ) system showed very good reliability. Nevertheless, the high complexity of the hardware and software involved is, by its nature, prone to occasional problems. As a CMS subdetector, the electromagnetic calorimeter (ECAL) is affected in the same way. Some of the issues are not predictable and can appear more than once during the year, such as components getting noisy, power cuts or failing communication between machines. The detection-diagnosis-intervention chain must be as fast as possible to minimise the downtime of the detector. The aim of this project was to create diagnostic software for the ECAL crew, consisting of a database and a web interface that allows users to search, add and edit the contents of the database.

  12. Neural Correlates of Changes in a Visual Search Task due to Cognitive Training in Seniors

    Directory of Open Access Journals (Sweden)

    Nele Wild-Wall

    2012-01-01

    Full Text Available This study aimed to elucidate the underlying neural sources of near transfer after a multidomain cognitive training in older participants in a visual search task. Participants were randomly assigned to a social control, a no-contact control and a training group, receiving a 4-month paper-pencil and PC-based trainer guided cognitive intervention. All participants were tested in a before and after session with a conjunction visual search task. Performance and event-related potentials (ERPs suggest that the cognitive training improved feature processing of the stimuli which was expressed in an increased rate of target detection compared to the control groups. This was paralleled by enhanced amplitudes of the frontal P2 in the ERP and by higher activation in lingual and parahippocampal brain areas which are discussed to support visual feature processing. Enhanced N1 and N2 potentials in the ERP for nontarget stimuli after cognitive training additionally suggest improved attention and subsequent processing of arrays which were not immediately recognized as targets. Possible test repetition effects were confined to processes of stimulus categorisation as suggested by the P3b potential. The results show neurocognitive plasticity in aging after a broad cognitive training and allow pinpointing the functional loci of effects induced by cognitive training.

  13. A new task format for investigating information search and organization in multiattribute decisions.

    Science.gov (United States)

    Ettlin, Florence; Bröder, Arndt; Henninger, Mirka

    2015-06-01

    In research on multiattribute decisions, information is typically preorganized in a well-structured manner (e.g., in attributes-by-options matrices). Participants can therefore conveniently identify the information needed for the decision strategy they are using. However, in everyday decision situations, we often face information that is not well-structured; that is, we not only have to search for, but also need to organize, the information. This latter aspect, subjective information organization, has so far largely been neglected in decision research. The few exceptions used crude experimental manipulations, and the assessment of subjective organization suffered from laborious methodology and a lack of objectivity. We introduce a new task format to overcome these methodological issues, and we provide an organization index (OI) to assess subjective organization of information objectively and automatically. The OI makes it possible to assess information organization on the same scale as the strategy index (SI) typically used for assessing information search behavior. A simulation study shows that the OI has a similar distribution to the SI but that the two indices are a priori largely independent. In a validation experiment with instructed strategy use, we demonstrate the usefulness of the task for tracing decision processes in multicue inference situations.
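
    For reference, the strategy index for search behavior is commonly computed from transitions between successive information acquisitions, with +1 indicating purely option-wise and -1 purely attribute-wise search. The Python sketch below follows that convention; the paper's OI is reported on the same scale, but its exact computation is not reproduced here.

      def strategy_index(acquisitions):
          # acquisitions: ordered (option, attribute) cells looked up by the participant.
          # +1 = purely option-wise transitions, -1 = purely attribute-wise (one common convention).
          option_wise = attribute_wise = 0
          for (o1, a1), (o2, a2) in zip(acquisitions, acquisitions[1:]):
              if o1 == o2 and a1 != a2:
                  option_wise += 1
              elif a1 == a2 and o1 != o2:
                  attribute_wise += 1
          total = option_wise + attribute_wise
          return (option_wise - attribute_wise) / total if total else 0.0

      print(strategy_index([("A", "price"), ("A", "quality"), ("B", "quality"), ("B", "price")]))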

  14. Where perception meets memory: a review of repetition priming in visual search tasks.

    Science.gov (United States)

    Kristjánsson, Arni; Campana, Gianluca

    2010-01-01

    What we have recently seen and attended to strongly influences how we subsequently allocate visual attention. A clear example is how repeated presentation of an object's features or location in visual search tasks facilitates subsequent detection or identification of that item, a phenomenon known as priming. Here, we review a large body of results from priming studies that suggest that a short-term implicit memory system guides our attention to recently viewed items. The nature of this memory system and the processing level at which visual priming occurs are still debated. Priming might be due to activity modulations of low-level areas coding simple stimulus characteristics or to higher level episodic memory representations of whole objects or visual scenes. Indeed, recent evidence indicates that only minor changes to the stimuli used in priming studies may alter the processing level at which priming occurs. We also review recent behavioral, neuropsychological, and neurophysiological evidence that indicates that the priming patterns are reflected in activity modulations at multiple sites along the visual pathways. We furthermore suggest that studies of priming in visual search may potentially shed important light on the nature of cortical visual representations. Our conclusion is that priming occurs at many different levels of the perceptual hierarchy, reflecting activity modulations ranging from lower to higher levels, depending on the stimulus, task, and context; in fact, at the neural loci that are involved in the analysis of the stimuli for which priming effects are seen.

  15. Increased Complexities in Visual Search Behavior in Skilled Players for a Self-Paced Aiming Task

    Directory of Open Access Journals (Sweden)

    Jingyi S. Chia

    2017-06-01

    Full Text Available The badminton serve is an important shot for winning a rally in a match. It combines good technique with the ability to accurately integrate visual information from the shuttle, racket, opponent, and intended landing point. Despite its importance and repercussive nature, to date no study has looked at the visual search behaviors during badminton service in the singles discipline. Unlike anticipatory tasks (e.g., shot returns), the serve presents an opportunity to explore the role of visual search behaviors in movement control for self-paced tasks. Accordingly, this study examined skill-related differences in visual behavior during the badminton singles serve. Skilled (n = 12) and less skilled (n = 12) participants performed 30 serves to a live opponent, while real-time eye movements were captured using a mobile gaze registration system. Frame-by-frame analyses of 662 serves were made, and the skilled players took a longer preparatory time before serving. Visual behavior of the skilled players was characterized by a significantly greater number of fixations on more areas of interest per trial than that of the less skilled. In addition, the skilled players spent a significantly longer time fixating on the court and net, whereas the less skilled players found the shuttle to be more informative. Quiet eye (QE) duration (indicative of superior sports performance), however, did not differ significantly between groups, which has implications for the perceived importance of QE in the badminton serve. Moreover, while visual behavior differed by skill level, considerable individual differences were also observed, especially within the skilled players. This augments the need for not just group-level analyses, but individualized analysis for a more accurate representation of visual behavior. Findings from this study thus provide an insight into the possible visual search strategies as players serve in net-barrier games. Moreover, this study highlighted an important aspect of

  16. The Fundamentals of iSPARQL: A Virtual Triple Approach for Similarity-Based Semantic Web Tasks

    Science.gov (United States)

    Kiefer, Christoph; Bernstein, Abraham; Stocker, Markus

    This research explores three SPARQL-based techniques to solve Semantic Web tasks that often require similarity measures, such as semantic data integration, ontology mapping, and Semantic Web service matchmaking. Our aim is to see how far it is possible to integrate customized similarity functions (CSF) into SPARQL to achieve good results for these tasks. Our first approach exploits virtual triples calling property functions to establish virtual relations among resources under comparison; the second approach uses extension functions to filter out resources that do not meet the requested similarity criteria; finally, our third technique applies new solution modifiers to post-process a SPARQL solution sequence. The semantics of the three approaches are formally elaborated and discussed. We close the paper with a demonstration of the usefulness of our iSPARQL framework in the context of a data integration and an ontology mapping experiment.

  17. Modeling Search Behaviors during the Acquisition of Expertise in a Sequential Decision-Making Task.

    Science.gov (United States)

    Moënne-Loccoz, Cristóbal; Vergara, Rodrigo C; López, Vladimir; Mery, Domingo; Cosmelli, Diego

    2017-01-01

    Our daily interaction with the world is plagued by situations in which we develop expertise through self-motivated repetition of the same task. In many of these interactions, and especially when dealing with computer and machine interfaces, we must deal with sequences of decisions and actions. For instance, when drawing cash from an ATM machine, choices are presented in a step-by-step fashion and a specific sequence of choices must be performed in order to produce the expected outcome. But, as we become experts in the use of such interfaces, is it possible to identify specific search and learning strategies? And if so, can we use this information to predict future actions? In addition to better understanding the cognitive processes underlying sequential decision making, this could allow building adaptive interfaces that can facilitate interaction at different moments of the learning curve. Here we tackle the question of modeling sequential decision-making behavior in a simple human-computer interface that instantiates a 4-level binary decision tree (BDT) task. We record behavioral data from voluntary participants while they attempt to solve the task. Using a Hidden Markov Model-based approach that capitalizes on the hierarchical structure of behavior, we then model their performance during the interaction. Our results show that partitioning the problem space into a small set of hierarchically related stereotyped strategies can potentially capture a host of individual decision making policies. This allows us to follow how participants learn and develop expertise in the use of the interface. Moreover, using a Mixture of Experts based on these stereotyped strategies, the model is able to predict the behavior of participants that master the task.
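
    As a much-simplified stand-in for the Hidden Markov Model-based approach (the hierarchical structure and the Mixture of Experts are not reproduced), the Python sketch below scores a sequence of binary choices under two hypothetical strategy models with the scaled forward algorithm and picks the better-fitting one; all probabilities are invented for illustration.

      import numpy as np

      def forward_loglik(obs, start, trans, emit):
          # Scaled forward algorithm for a discrete HMM: log p(obs | model).
          alpha = start * emit[:, obs[0]]
          loglik = np.log(alpha.sum())
          alpha = alpha / alpha.sum()
          for o in obs[1:]:
              alpha = (alpha @ trans) * emit[:, o]
              loglik += np.log(alpha.sum())
              alpha = alpha / alpha.sum()
          return loglik

      # Two invented "stereotyped strategy" models over binary choices (0 = left, 1 = right).
      models = {
          "random_exploration": (np.full(2, 0.5), np.full((2, 2), 0.5), np.full((2, 2), 0.5)),
          "learned_sequence":   (np.array([0.9, 0.1]),
                                 np.array([[0.9, 0.1], [0.1, 0.9]]),
                                 np.array([[0.95, 0.05], [0.05, 0.95]])),
      }
      choices = [0, 0, 0, 1, 0, 0]
      print(max(models, key=lambda m: forward_loglik(choices, *models[m])))   # better-fitting model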

  18. Modeling Search Behaviors during the Acquisition of Expertise in a Sequential Decision-Making Task

    Directory of Open Access Journals (Sweden)

    Cristóbal Moënne-Loccoz

    2017-09-01

    Full Text Available Our daily interaction with the world is plagued by situations in which we develop expertise through self-motivated repetition of the same task. In many of these interactions, and especially when dealing with computer and machine interfaces, we must deal with sequences of decisions and actions. For instance, when drawing cash from an ATM machine, choices are presented in a step-by-step fashion and a specific sequence of choices must be performed in order to produce the expected outcome. But, as we become experts in the use of such interfaces, is it possible to identify specific search and learning strategies? And if so, can we use this information to predict future actions? In addition to better understanding the cognitive processes underlying sequential decision making, this could allow building adaptive interfaces that can facilitate interaction at different moments of the learning curve. Here we tackle the question of modeling sequential decision-making behavior in a simple human-computer interface that instantiates a 4-level binary decision tree (BDT) task. We record behavioral data from voluntary participants while they attempt to solve the task. Using a Hidden Markov Model-based approach that capitalizes on the hierarchical structure of behavior, we then model their performance during the interaction. Our results show that partitioning the problem space into a small set of hierarchically related stereotyped strategies can potentially capture a host of individual decision making policies. This allows us to follow how participants learn and develop expertise in the use of the interface. Moreover, using a Mixture of Experts based on these stereotyped strategies, the model is able to predict the behavior of participants that master the task.

  19. Large area sheet task. Advanced dendritic web growth development. [silicon films]

    Science.gov (United States)

    Duncan, C. S.; Seidensticker, R. G.; Mchugh, J. P.; Hopkins, R. H.; Meier, D.; Frantti, E.; Schruben, J.

    1981-01-01

    The development of a silicon dendritic web growth machine is discussed. Several refinements to the sensing and control equipment for melt replenishment during web growth are described and several areas for cost reduction in the components of the prototype automated web growth furnace are identified. A circuit designed to eliminate the sensitivity of the detector signal to the intensity of the reflected laser beam used to measure melt level is also described. A variable speed motor for the silicon feeder is discussed which allows pellet feeding to be accomplished at a rate programmed to match exactly the silicon removed by web growth.

  20. How happy is your web browsing? A model to quantify satisfaction of an Internet user searching for desired information

    Science.gov (United States)

    Banerji, Anirban; Magarkar, Aniket

    2012-09-01

    We feel happy when web browsing operations provide us with necessary information; otherwise, we feel bitter. How can we measure this happiness (or bitterness)? How does the profile of happiness grow and decay during the course of web browsing? We propose a probabilistic framework that models the evolution of user satisfaction, on top of his/her continuous frustration at not finding the required information. It is found that the cumulative satisfaction profile of a web-searching individual can be modeled effectively as the sum of a random number of random terms, where each term is a mutually independent random variable originating from a ‘memoryless’ Poisson flow. Evolution of satisfaction over the entire time interval of a user’s browsing was modeled using auto-correlation analysis. A utilitarian marker, whose magnitude exceeding unity describes happy web-searching operations, and an empirical limit that connects the user’s satisfaction with his frustration level are also proposed. The presence of pertinent information in the very first page of a website and the magnitude of the decay parameter of user satisfaction (frustration, irritation, etc.) are found to be the two key aspects that dominate the web user’s psychology. The proposed model employed different combinations of the decay parameter, searching time and number of helpful websites. The obtained results are found to match the results from three real-life case studies.
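
    The "sum of a random number of random terms" driven by a Poisson flow is a compound Poisson quantity, which can be simulated directly. The Python sketch below does so under the additional assumption that each term is exponentially distributed; the arrival rate, decay parameter and gain distribution are illustrative choices, not the paper's calibration.

      import numpy as np

      rng = np.random.default_rng(0)

      def simulate_satisfaction(rate, duration, decay, n_runs=10_000):
          # Number of satisfying "hits" follows a Poisson flow; each hit adds an
          # exponentially distributed gain (the exponential form is an assumption here).
          n_hits = rng.poisson(rate * duration, size=n_runs)
          totals = np.array([rng.exponential(1.0 / decay, size=k).sum() for k in n_hits])
          return totals.mean(), totals.std()

      print(simulate_satisfaction(rate=0.2, duration=30.0, decay=0.5))   # illustrative parameters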

  1. Performance on the Hamilton search task, and the influence of lateralization, in captive orange-winged Amazon parrots (Amazona amazonica).

    Science.gov (United States)

    Cussen, Victoria A; Mench, Joy A

    2014-07-01

    Psittacines are generally considered to possess cognitive abilities comparable to those of primates. Most psittacine research has evaluated performance on standardized complex cognition tasks, but studies of basic cognitive processes are limited. We tested orange-winged Amazon parrots (Amazona amazonica) on a spatial foraging assessment, the Hamilton search task. This task is a standardized test used in human and non-human primate studies. It has multiple phases, which require trial and error learning, learning set breaking, and spatial memory. We investigated search strategies used to complete the task, cognitive flexibility, and long-term memory for the task. We also assessed the effects of individual strength of motor lateralization (foot preference) and sex on task performance. Almost all (92%) of the parrots acquired the task. All had significant foot preferences, with 69% preferring their left foot, and showed side preferences contralateral to their preferred limb during location selection. The parrots were able to alter their search strategies when reward contingencies changed, demonstrating cognitive flexibility. They were also able to remember the task over a 6-month period. Lateralization had a significant influence on learning set acquisition but no effect on cognitive flexibility. There were no sex differences. To our knowledge, this is the first cognitive study using this particular species and one of the few studies of cognitive abilities in any Neotropical parrot species.

  2. Urban networks among Chinese cities along "the Belt and Road": A case of web search activity in cyberspace.

    Science.gov (United States)

    Zhang, Lu; Du, Hongru; Zhao, Yannan; Wu, Rongwei; Zhang, Xiaolei

    2017-01-01

    "The Belt and Road" initiative has been expected to facilitate interactions among numerous city centers. This initiative would generate a number of centers, both economic and political, which would facilitate greater interaction. To explore how information flows are merged and the specific opportunities that may be offered, Chinese cities along "the Belt and Road" are selected for a case study. Furthermore, urban networks in cyberspace have been characterized by their infrastructure orientation, which implies that there is a relative dearth of studies focusing on the investigation of urban hierarchies by capturing information flows between Chinese cities along "the Belt and Road". This paper employs Baidu, the main web search engine in China, to examine urban hierarchies. The results show that urban networks become more balanced, shifting from a polycentric to a homogenized pattern. Furthermore, cities in networks tend to have both a hierarchical system and a spatial concentration primarily in regions such as Beijing-Tianjin-Hebei, Yangtze River Delta and the Pearl River Delta region. Urban hierarchy based on web search activity does not follow the existing hierarchical system based on geospatial and economic development in all cases. Moreover, urban networks, under the framework of "the Belt and Road", show several significant corridors and more opportunities for more cities, particularly western cities. Furthermore, factors that may influence web search activity are explored. The results show that web search activity is significantly influenced by the economic gap, geographical proximity and administrative rank of the city.

  3. Urban networks among Chinese cities along "the Belt and Road": A case of web search activity in cyberspace.

    Directory of Open Access Journals (Sweden)

    Lu Zhang

    Full Text Available "The Belt and Road" initiative has been expected to facilitate interactions among numerous city centers. This initiative would generate a number of centers, both economic and political, which would facilitate greater interaction. To explore how information flows are merged and the specific opportunities that may be offered, Chinese cities along "the Belt and Road" are selected for a case study. Furthermore, urban networks in cyberspace have been characterized by their infrastructure orientation, which implies that there is a relative dearth of studies focusing on the investigation of urban hierarchies by capturing information flows between Chinese cities along "the Belt and Road". This paper employs Baidu, the main web search engine in China, to examine urban hierarchies. The results show that urban networks become more balanced, shifting from a polycentric to a homogenized pattern. Furthermore, cities in networks tend to have both a hierarchical system and a spatial concentration primarily in regions such as Beijing-Tianjin-Hebei, Yangtze River Delta and the Pearl River Delta region. Urban hierarchy based on web search activity does not follow the existing hierarchical system based on geospatial and economic development in all cases. Moreover, urban networks, under the framework of "the Belt and Road", show several significant corridors and more opportunities for more cities, particularly western cities. Furthermore, factors that may influence web search activity are explored. The results show that web search activity is significantly influenced by the economic gap, geographical proximity and administrative rank of the city.

  4. Using web search query data to monitor dengue epidemics: a new model for neglected tropical disease surveillance.

    Directory of Open Access Journals (Sweden)

    Emily H Chan

    2011-05-01

    Full Text Available A variety of obstacles including bureaucracy and lack of resources have interfered with timely detection and reporting of dengue cases in many endemic countries. Surveillance efforts have turned to modern data sources, such as Internet search queries, which have been shown to be effective for monitoring influenza-like illnesses. However, few studies have evaluated the utility of web search query data for other diseases, especially those of high morbidity and mortality or for which a vaccine may not exist. In this study, we aimed to assess whether web search queries are a viable data source for the early detection and monitoring of dengue epidemics. Bolivia, Brazil, India, Indonesia and Singapore were chosen for analysis based on available data and adequate search volume. For each country, a univariate linear model was then built by fitting a time series of the fraction of Google search query volume for specific dengue-related queries from that country against a time series of official dengue case counts for a time-frame within 2003-2010. The specific combination of queries used was chosen to maximize model fit. Spurious spikes in the data were also removed prior to model fitting. The final models, fit using a training subset of the data, were cross-validated against both the overall dataset and a holdout subset of the data. All models were found to fit the data quite well, with validation correlations ranging from 0.82 to 0.99. Web search query data were found to be capable of tracking dengue activity in Bolivia, Brazil, India, Indonesia and Singapore. Whereas traditional dengue data from official sources are often not available until after some substantial delay, web search query data are available in near real-time. These data represent a valuable complement to traditional dengue surveillance.
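
    A minimal sketch of the modelling step the abstract outlines, with synthetic weekly series standing in for the real Google query fractions and official case counts: fit a univariate linear model on a training window and report the hold-out correlation. None of the numbers correspond to the study's data.

        import numpy as np
        from scipy.stats import pearsonr

        rng = np.random.default_rng(1)
        weeks = 200
        query_fraction = np.abs(np.sin(np.arange(weeks) / 10.0)) + 0.1 * rng.random(weeks)
        case_counts = 500.0 * query_fraction + 20.0 * rng.standard_normal(weeks)  # toy relationship

        train, hold = slice(0, 150), slice(150, weeks)
        slope, intercept = np.polyfit(query_fraction[train], case_counts[train], 1)
        predicted = slope * query_fraction[hold] + intercept

        r, p = pearsonr(predicted, case_counts[hold])
        print(f"hold-out validation correlation r = {r:.2f} (p = {p:.3g})")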

  5. Effects of organizational scheme and labeling on task performance in product-centered and user-centered retail Web sites.

    Science.gov (United States)

    Resnick, Marc L; Sanchez, Julian

    2004-01-01

    As companies increase the quantity of information they provide through their Web sites, it is critical that content is structured with an appropriate architecture. However, resource constraints often limit the ability of companies to apply all Web design principles completely. This study quantifies the effects of two major information architecture principles in a controlled study that isolates the incremental effects of organizational scheme and labeling on user performance and satisfaction. Sixty participants with a wide range of Internet and on-line shopping experience were recruited to complete a series of shopping tasks on a prototype retail shopping Web site. User-centered labels provided a significant benefit in performance and satisfaction over labels obtained through company-centered methods. User-centered organization did not result in improved performance except when the label quality was poor. Significant interactions suggest specific guidelines for allocating resources in Web site design. Applications of this research include the design of Web sites for any commercial application, particularly E-commerce.

  6. Rare disease diagnosis: A review of web search, social media and large-scale data-mining approaches.

    Science.gov (United States)

    Svenstrup, Dan; Jørgensen, Henrik L; Winther, Ole

    2015-01-01

    Physicians and the general public are increasingly using web-based tools to find answers to medical questions. The field of rare diseases is especially challenging and important, as shown by the long delays and many mistakes associated with diagnoses. In this paper we review recent initiatives on the use of web search, social media and data mining in data repositories for medical diagnosis. We compare the retrieval accuracy on 56 rare disease cases with known diagnosis for the web search tools google.com, pubmed.gov, omim.org and our own search tool findzebra.com. We give a detailed description of IBM's Watson system and make a rough comparison between findzebra.com and Watson on subsets of the Doctor's dilemma dataset. The recall@10 and recall@20 (fraction of cases where the correct result appears in the top 10 and top 20) for the 56 cases are found to be 29%, 16%, 27% and 59% and 32%, 18%, 34% and 64%, respectively. Thus, FindZebra has a significantly (p < 0.01) higher recall than the other 3 search engines. When tested under the same conditions, Watson and FindZebra showed similar recall@10 accuracy. However, the tests were performed on different subsets of the Doctor's dilemma questions. Advances in technology and access to high-quality data have opened new possibilities for aiding the diagnostic process. Specialized search engines, data mining tools and social media are some of the areas that hold promise.
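
    For readers unfamiliar with the recall@k figures quoted above, a small illustration of the measure (the fraction of cases whose known diagnosis appears among the top k results); the example lists are invented, not the study's data.

        def recall_at_k(ranked_results, correct_diagnoses, k):
            """Fraction of cases whose correct diagnosis appears in the top k results."""
            hits = sum(1 for results, truth in zip(ranked_results, correct_diagnoses)
                       if truth in results[:k])
            return hits / len(correct_diagnoses)

        # toy example: three cases, with the correct diagnosis at rank 2, rank 15, and absent
        ranked = [["influenza", "dengue"],
                  ["other"] * 14 + ["porphyria"],
                  ["a", "b", "c"]]
        truth = ["dengue", "porphyria", "gaucher disease"]
        print(recall_at_k(ranked, truth, 10), recall_at_k(ranked, truth, 20))  # ~0.33 and ~0.67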

  7. Visual search tasks: measurement of dynamic visual lobe and relationship with display movement velocity.

    Science.gov (United States)

    Yang, Lin-Dong; Yu, Rui-Feng; Lin, Xue-Lian; Xie, Ya-Qing; Ma, Liang

    2018-02-01

    Visual lobe is a useful tool for predicting visual search performance. Up till now, no study has focused on dynamic visual lobe. This study developed a dynamic visual lobe measurement system (DVLMS) that could effectively map dynamic visual lobe and calculate visual lobe shape indices. The effects of display movement velocity on lobe shape indices were examined under four velocity conditions: 0, 4, 8 and 16 deg/s. In general, with the increase of display movement velocity, visual lobe area and perimeter became smaller, whereas lobe shape roundness, boundary smoothness, symmetry and regularity deteriorated. The elongation index was not affected by velocity. Regression analyses indicated that display movement velocity was important in determining dynamic visual lobe shape indices. Dynamic visual lobe provides another option for better understanding dynamic vision, in addition to dynamic visual acuity. Findings of this study can provide guidelines for analysing and designing dynamic visual tasks. Practitioner Summary: Dynamic visual lobe is important in reflecting the visual ability of searching for a moving target. We developed a dynamic visual lobe measurement system (DVLMS) and examined display movement velocity's effects on lobe shape. Findings revealed that velocity was a key factor affecting dynamic visual lobe shape indices.

  8. Intrinsic motivation and attentional capture from gamelike features in a visual search task.

    Science.gov (United States)

    Miranda, Andrew T; Palmer, Evan M

    2014-03-01

    In psychology research studies, the goals of the experimenter and the goals of the participants often do not align. Researchers are interested in having participants who take the experimental task seriously, whereas participants are interested in earning their incentive (e.g., money or course credit) as quickly as possible. Creating experimental methods that are pleasant for participants and that reward them for effortful and accurate data generation, while not compromising the scientific integrity of the experiment, would benefit both experimenters and participants alike. Here, we explored a gamelike system of points and sound effects that rewarded participants for fast and accurate responses. We measured participant engagement at both cognitive and perceptual levels and found that the point system (which invoked subtle, anonymous social competition between participants) led to positive intrinsic motivation, while the sound effects (which were pleasant and arousing) led to attentional capture for rewarded colors. In a visual search task, points were awarded after each trial for fast and accurate responses, accompanied by short, pleasant sound effects. We adapted a paradigm from Anderson, Laurent, and Yantis (Proceedings of the National Academy of Sciences 108(25):10367-10371, 2011b), in which participants completed a training phase during which red and green targets were probabilistically associated with reward (a point bonus multiplier). During a test phase, no points or sounds were delivered, color was irrelevant to the task, and previously rewarded targets were sometimes presented as distractors. Significantly longer response times on trials in which previously rewarded colors were present demonstrated attentional capture, and positive responses to a five-question intrinsic-motivation scale demonstrated participant engagement.

  9. The effect of stimulus duration and motor response in hemispatial neglect during a visual search task.

    Directory of Open Access Journals (Sweden)

    Laura M Jelsone-Swain

    Full Text Available Patients with hemispatial neglect exhibit a myriad of profound deficits. A hallmark of this syndrome is the patients' absence of awareness of items located in their contralesional space. Many studies, however, have demonstrated that neglect patients exhibit some level of processing of these neglected items. It has been suggested that unconscious processing of neglected information may manifest as a fast denial. This theory of fast denial proposes that neglected stimuli are detected in the same way as non-neglected stimuli, but without overt awareness. We evaluated the fast denial theory by conducting two separate visual search task experiments, each differing by the duration of stimulus presentation. Specifically, in Experiment 1 each stimulus remained in the participants' visual field until a response was made. In Experiment 2 each stimulus was presented for only a brief duration. We further evaluated the fast denial theory by comparing verbal to motor task responses in each experiment. Overall, our results from both experiments and tasks showed no evidence for the presence of implicit knowledge of neglected stimuli. Instead, patients with neglect responded the same when they neglected stimuli as when they correctly reported stimulus absence. These findings thus cast doubt on the concept of the fast denial theory and its consequent implications for non-conscious processing. Importantly, our study demonstrated that the only behavior affected was during conscious detection of ipsilesional stimuli. Specifically, patients were slower to detect stimuli in Experiment 1 compared to Experiment 2, suggesting a duration effect occurred during conscious processing of information. Additionally, reaction time and accuracy were similar when reporting verbally versus motorically. These results provide new insights into the perceptual deficits associated with neglect and further support other work that falsifies the fast denial account of non

  10. The effect of stimulus duration and motor response in hemispatial neglect during a visual search task.

    Science.gov (United States)

    Jelsone-Swain, Laura M; Smith, David V; Baylis, Gordon C

    2012-01-01

    Patients with hemispatial neglect exhibit a myriad of profound deficits. A hallmark of this syndrome is the patients' absence of awareness of items located in their contralesional space. Many studies, however, have demonstrated that neglect patients exhibit some level of processing of these neglected items. It has been suggested that unconscious processing of neglected information may manifest as a fast denial. This theory of fast denial proposes that neglected stimuli are detected in the same way as non-neglected stimuli, but without overt awareness. We evaluated the fast denial theory by conducting two separate visual search task experiments, each differing by the duration of stimulus presentation. Specifically, in Experiment 1 each stimulus remained in the participants' visual field until a response was made. In Experiment 2 each stimulus was presented for only a brief duration. We further evaluated the fast denial theory by comparing verbal to motor task responses in each experiment. Overall, our results from both experiments and tasks showed no evidence for the presence of implicit knowledge of neglected stimuli. Instead, patients with neglect responded the same when they neglected stimuli as when they correctly reported stimulus absence. These findings thus cast doubt on the concept of the fast denial theory and its consequent implications for non-conscious processing. Importantly, our study demonstrated that the only behavior affected was during conscious detection of ipsilesional stimuli. Specifically, patients were slower to detect stimuli in Experiment 1 compared to Experiment 2, suggesting a duration effect occurred during conscious processing of information. Additionally, reaction time and accuracy were similar when reporting verbally versus motorically. These results provide new insights into the perceptual deficits associated with neglect and further support other work that falsifies the fast denial account of non-conscious processing in hemispatial

  11. Knowledge-based personalized search engine for the Web-based Human Musculoskeletal System Resources (HMSR) in biomechanics.

    Science.gov (United States)

    Dao, Tien Tuan; Hoang, Tuan Nha; Ta, Xuan Hien; Tho, Marie Christine Ho Ba

    2013-02-01

    Human musculoskeletal system resources are valuable for learning and medical purposes. Internet-based information from conventional search engines such as Google or Yahoo cannot respond to the need for useful, accurate, reliable and good-quality human musculoskeletal resources related to medical processes, pathological knowledge and practical expertise. In the present work, an advanced knowledge-based personalized search engine was developed. Our search engine is based on a client-server, multi-layer, multi-agent architecture and the principle of semantic web services, and dynamically acquires accurate and reliable HMSR information through a semantic processing and visualization approach. A security-enhanced mechanism was applied to protect the medical information. A multi-agent crawler was implemented to develop a content-based database of HMSR information. A new semantic-based PageRank score and related mathematical formulas were also defined and implemented. As a result, semantic web service descriptions are presented in OWL, WSDL and OWL-S formats. Operational scenarios with related web-based interfaces for personal computers and mobile devices are presented and analyzed. A functional comparison between our knowledge-based search engine, a conventional search engine and a semantic search engine showed the originality and robustness of our knowledge-based personalized search engine. Our knowledge-based personalized search engine allows different users, such as orthopedic patients and experts, healthcare system managers, or medical students, to remotely access useful, accurate, reliable and good-quality HMSR information for their learning and medical purposes. Copyright © 2012 Elsevier Inc. All rights reserved.

  12. Evidence of perceptive impairment in OSA patients investigated by means of a visual search task.

    Science.gov (United States)

    Giora, Enrico; Galbiati, Andrea; Marelli, Sara; Zucconi, Marco; Ferini-Strambi, Luigi

    2017-10-01

    Obstructive Sleep Apnoea (OSA) is a common sleep disorder characterized by episodes of complete or partial obstruction of the respiratory airways during sleep, leading to hypoxaemia and sleep fragmentation. One relevant daytime consequence of OSA is a negative impact on the neurocognitive domain, ranging from psychomotor performance to executive function. Despite abundant evidence of cognitive impairment, little is known about perceptual processing in these patients. The aim of this research is to investigate the effects of OSA on visual mechanisms by employing a visual search paradigm. 19 OSA patients and 19 age-matched healthy controls (HC) participated in a case-control study. After nocturnal cardiorespiratory monitoring, patients performed a visual search task in which they had to detect the presence/absence of a target (letter T) embedded, in 50% of trials, in a set of distractors (letters Os, Xs, or Ls). Target salience and distractor numerosity were manipulated as independent variables, whereas accuracy and reaction times (RT) were recorded as dependent variables. The HC, after the exclusion of any sleep disorder or sleepiness, performed the same experiments. Results generally confirmed the typical effects of visual search. OSA patients showed significantly slower RT than HC, indicating an overall perceptual deficit consisting of less efficient extraction of relevant information from noise. Neither patients' age nor the objective clinical indices were associated with RT. This study indicates the presence of an impairment in OSA patients involving basic mechanisms of visual processing, likely ascribable to the disorder per se. Copyright © 2017 Elsevier Ltd. All rights reserved.

  13. Electrophysiological revelations of trial history effects in a color oddball search task.

    Science.gov (United States)

    Shin, Eunsam; Chong, Sang Chul

    2016-12-01

    In visual oddball search tasks, viewing a no-target scene (i.e., no-target selection trial) leads to the facilitation or delay of the search time for a target in a subsequent trial. Presumably, this selection failure leads to biasing attentional set and prioritizing stimulus features unseen in the no-target scene. We observed attention-related ERP components and tracked the course of attentional biasing as a function of trial history. Participants were instructed to identify color oddballs (i.e., targets) shown in varied trial sequences. The number of no-target scenes preceding a target scene was increased from zero to two to reinforce attentional biasing, and colors presented in two successive no-target scenes were repeated or changed to systematically bias attention to specific colors. For the no-target scenes, the presentation of a second no-target scene resulted in an early selection of, and sustained attention to, the changed colors (mirrored in the frontal selection positivity, the anterior N2, and the P3b). For the target scenes, the N2pc indicated an earlier allocation of attention to the targets with unseen or remotely seen colors. Inhibitory control of attention, shown in the anterior N2, was greatest when the target scene was followed by repeated no-target scenes with repeated colors. Finally, search times and the P3b were influenced by both color previewing and its history. The current results demonstrate that attentional biasing can occur on a trial-by-trial basis and be influenced by both feature previewing and its history. © 2016 Society for Psychophysiological Research.

  14. Linking Annual Prescription Volume of Antidepressants to Corresponding Web Search Query Data: A Possible Proxy for Medical Prescription Behavior?

    Science.gov (United States)

    Gahr, Maximilian; Uzelac, Zeljko; Zeiss, René; Connemann, Bernhard J; Lang, Dirk; Schönfeldt-Lecuona, Carlos

    2015-12-01

    Persons using the Internet to retrieve medical information generate large amounts of health-related data, which are increasingly used in modern health sciences. We analyzed the relation between annual prescription volumes (APVs) of several antidepressants with marketing approval in Germany and corresponding web search query data generated in Google, to test whether web search query volume may be a proxy for medical prescription practice. We obtained APVs of several antidepressants, corresponding to prescriptions reimbursed by the statutory health insurance in Germany from 2004 to 2013. Web search query data generated in Germany and related to defined search terms (active substance or brand name) were obtained with Google Trends. We calculated correlations (Pearson's r) between the APVs of each substance and the respective annual "search share" values; coefficients of determination (R²) were computed to determine the amount of variability shared by the 2 variables. Significant and strong correlations between substance-specific APVs and corresponding annual query volumes were found for each substance during the observational interval: agomelatine (r = 0.968, R² = 0.932, P = 0.01), bupropion (r = 0.962, R² = 0.925, P = 0.01), citalopram (r = 0.970, R² = 0.941, P = 0.01), escitalopram (r = 0.824, R² = 0.682, P = 0.01), fluoxetine (r = 0.885, R² = 0.783, P = 0.01), paroxetine (r = 0.801, R² = 0.641, P = 0.01), and sertraline (r = 0.880, R² = 0.689, P = 0.01). Although the available data did not allow an analysis at higher temporal resolution (quarters, months), our results suggest that web search query volume may be a proxy for corresponding prescription behavior. However, further studies analyzing other pharmacologic agents and prescription data with higher temporal resolution are needed to confirm this hypothesis.
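
    A short sketch of the reported analysis with invented numbers: Pearson's r between a substance's annual prescription volume and its annual search share, and the corresponding R². None of the values below are the study's data.

        from scipy.stats import pearsonr

        # invented annual series for one substance (10 years)
        annual_prescription_volume = [1.2, 1.5, 1.9, 2.4, 2.8, 3.1, 3.6, 4.0, 4.3, 4.9]
        annual_search_share        = [0.8, 1.0, 1.3, 1.7, 1.9, 2.2, 2.6, 2.9, 3.1, 3.4]

        r, p = pearsonr(annual_prescription_volume, annual_search_share)
        print(f"r = {r:.3f}, R² = {r**2:.3f}, p = {p:.3g}")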

  15. Low cost silicon solar array project large area silicon sheet task: Silicon web process development

    Science.gov (United States)

    Duncan, C. S.; Seidensticker, R. G.; Mchugh, J. P.; Blais, P. D.; Davis, J. R., Jr.

    1977-01-01

    Growth configurations were developed which produced crystals having low residual stress levels. The properties of a 106 mm diameter round crucible were evaluated and it was found that this design had greatly enhanced temperature fluctuations arising from convection in the melt. Thermal modeling efforts were directed to developing finite element models of the 106 mm round crucible and an elongated susceptor/crucible configuration. Also, the thermal model for the heat loss modes from the dendritic web was examined for guidance in reducing the thermal stress in the web. An economic analysis was prepared to evaluate the silicon web process in relation to price goals.

  16. EVALUATION OF WEB SEARCHING METHOD USING A NOVEL WPRR ALGORITHM FOR TWO DIFFERENT CASE STUDIES

    Directory of Open Access Journals (Sweden)

    V. Lakshmi Praba

    2012-04-01

    Full Text Available The World-Wide Web provides every internet citizen with access to an abundance of information, but it becomes increasingly difficult to identify the relevant pieces of information. Research in web mining tries to address this problem by applying techniques from data mining and machine learning to web data and documents. Web content mining and web structure mining have important roles in identifying the relevant web page. The relevancy of a web page denotes how well a retrieved web page or set of web pages meets the information need of the user. Page Rank, Weighted Page Rank and Hypertext Induced Topic Selection (HITS) are existing algorithms that consider only web structure mining. The Vector Space Model (VSM), Cover Density Ranking (CDR), Okapi similarity measurement (Okapi) and the Three-Level Scoring method (TLS) are existing relevancy scoring methods that consider only web content mining. In this paper, we propose a new algorithm, Weighted Page with Relevant Rank (WPRR), which is a blend of both web content mining and web structure mining and which demonstrates the relevancy of a page with respect to a given query for two different case scenarios. It is shown that WPRR’s performance is better than that of the existing algorithms.
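
    The abstract does not give the WPRR formula itself; the sketch below only illustrates the general idea it describes, blending a link-based (structure mining) score with a query-dependent content relevance score through a tunable weight. All scores and page names are invented.

        def blended_score(link_rank, content_relevance, alpha=0.5):
            """Weighted blend of a structure-based rank and a content-based relevance score."""
            return alpha * link_rank + (1.0 - alpha) * content_relevance

        # toy scores, already normalized to [0, 1]
        pages = {
            "page_a": {"link_rank": 0.9, "content_relevance": 0.2},
            "page_b": {"link_rank": 0.4, "content_relevance": 0.8},
            "page_c": {"link_rank": 0.7, "content_relevance": 0.7},
        }
        ranking = sorted(pages, key=lambda p: blended_score(**pages[p]), reverse=True)
        print(ranking)   # -> ['page_c', 'page_b', 'page_a'] with the default alpha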

  17. A Semantic Web Application for the Air Tasking Order (ATO) (Briefing Charts)

    National Research Council Canada - National Science Library

    Frantz, Albert; Franco, Milvio

    2005-01-01

    .... We used existing Semantic Web tools to construct an ATO knowledge base. The knowledge base is used to select potential air missions to reassign to strike time sensitive targets by the computer...

  18. Agility and search and rescue training differently affects pet dogs' behaviour in socio-cognitive tasks.

    Science.gov (United States)

    Marshall-Pescini, Sarah; Passalacqua, Chiara; Barnard, Shanis; Valsecchi, Paola; Prato-Previde, Emanuela

    2009-07-01

    Both genetic factors and life experiences appear to be important in shaping dogs' responses in a test situation. One potentially highly relevant life experience may be the dog's training history, however few studies have investigated this aspect so far. This paper briefly reviews studies focusing on the effects of training on dogs' performance in cognitive tasks, and presents new, preliminary evidence on trained and untrained pet dogs' performance in an 'unsolvable task'. Thirty-nine adult dogs: 13 trained for search and rescue activities (S&R group), 13 for agility competition (Agility group) and 13 untrained pets (Pet group) were tested. Three 'solvable' trials in which dogs could obtain the food by manipulating a plastic container were followed by an 'unsolvable' trial in which obtaining the food became impossible. The dogs' behaviours towards the apparatus and the people present (owner and researcher) were analysed. Both in the first 'solvable' and in the 'unsolvable' trial the groups were comparable on actions towards the apparatus, however differences emerged in their human-directed gazing behaviour. In fact, results in the 'solvable' trial, showed fewer S&R dogs looking back at a person compared to agility dogs, and the latter alternating their gaze between person and apparatus more frequently than pet dogs. In the unsolvable trial no difference between groups emerged in the latency to look at the person however agility dogs looked longer at the owner than both pet and S&R dogs; whereas S&R dogs exhibited significantly more barking (always occurring concurrently to looking at the person or the apparatus) than both other groups. Furthermore, S&R dogs alternated their gaze between person and apparatus more than untrained pet dogs, with agility dogs falling in between these two groups. Thus overall, it seems that the dogs' human-directed communicative behaviours are significantly influenced by their individual training experiences.

  19. A Heuristic Distributed Task Allocation Method for Multivehicle Multitask Problems and Its Application to Search and Rescue Scenario.

    Science.gov (United States)

    Zhao, Wanqing; Meng, Qinggang; Chung, Paul W H

    2016-04-01

    Using distributed task allocation methods for cooperating multivehicle systems is becoming increasingly attractive. However, most effort has been placed on specific experimental work, and little has been done to systematically analyze the problem of interest and the existing methods. In this paper, a general scenario description and a system configuration are first presented for a search and rescue scenario. The objective of the problem is then analyzed together with its mathematical formulation extracted from the scenario. Considering the requirement of distributed computing, this paper then proposes a novel heuristic distributed task allocation method for multivehicle multitask assignment problems. The proposed method is simple and effective, and directly aims at optimizing the mathematical objective defined for the problem. A new concept of significance is defined for every task, measured by the task's contribution to the local cost incurred by a vehicle, which underlies the key idea of the algorithm. The algorithm iterates between a task inclusion phase and a consensus and task removal phase, running concurrently on all vehicles, with local communication between them. The former phase includes tasks into a vehicle's task list to optimize the overall objective, while the latter reaches consensus on the significance value of tasks for each vehicle and removes tasks that have been assigned to other vehicles. Numerical simulations demonstrate that the proposed method provides a conflict-free solution and achieves outstanding performance in comparison with the consensus-based bundle algorithm.
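
    The following is not the paper's algorithm, only a toy, single-vehicle version of a "task inclusion" step of the kind described: greedily add the task whose best insertion position increases the vehicle's route cost (Euclidean path length from a start position) the least. Coordinates and limits are made up.

        import math

        def route_cost(route, start):
            """Total Euclidean path length visiting the route's task locations in order."""
            pos, cost = start, 0.0
            for task in route:
                cost += math.dist(pos, task)
                pos = task
            return cost

        def include_tasks(start, tasks, max_tasks):
            route, remaining = [], list(tasks)
            while remaining and len(route) < max_tasks:
                best = None   # (cost increase, task, resulting route)
                for task in remaining:
                    for i in range(len(route) + 1):
                        trial = route[:i] + [task] + route[i:]
                        delta = route_cost(trial, start) - route_cost(route, start)
                        if best is None or delta < best[0]:
                            best = (delta, task, trial)
                _, chosen, route = best
                remaining.remove(chosen)
            return route

        print(include_tasks((0.0, 0.0), [(2.0, 1.0), (5.0, 5.0), (1.0, 4.0)], max_tasks=3))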

  20. Iterative Radial Basis Functions Neural Networks as Metamodels of Stochastic Simulations of the Quality of Search Engines in the World Wide Web.

    Science.gov (United States)

    Meghabghab, George

    2001-01-01

    Discusses the evaluation of search engines and uses neural networks in stochastic simulation of the number of rejected Web pages per search query. Topics include the iterative radial basis functions (RBF) neural network; precision; response time; coverage; Boolean logic; regression models; crawling algorithms; and implications for search engine…
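
    The record is only a brief summary; purely as an illustration of the metamodelling idea it mentions, the sketch below fits a Gaussian radial-basis-function model to a noisy stand-in for a stochastic simulation output. All settings are invented and unrelated to the study's data.

        import numpy as np

        rng = np.random.default_rng(3)
        x = np.linspace(0.0, 1.0, 80)
        y = np.sin(6.0 * x) + 0.1 * rng.standard_normal(x.size)   # noisy "simulation" output

        centres, width, lam = np.linspace(0.0, 1.0, 12), 0.08, 1e-3

        def design(xs):
            """Gaussian RBF design matrix: one column per centre."""
            return np.exp(-((xs[:, None] - centres[None, :]) ** 2) / (2.0 * width ** 2))

        Phi = design(x)
        weights = np.linalg.solve(Phi.T @ Phi + lam * np.eye(centres.size), Phi.T @ y)
        fit = Phi @ weights
        print("max abs error of the metamodel on the grid:", round(float(np.abs(fit - y).max()), 3))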

  1. Oral Proficiency Teaching with WebCERF and Skype: Scenarios for Online Production and Interaction Tasks

    Science.gov (United States)

    Jager, Sake; Meima, Estelle; Oggel, Gerdientje

    2013-01-01

    This article reports our findings on using WebCEF as a CEFR familiarization and self-assessment tool for oral proficiency. Furthermore, we outline how we have implemented Skype as a tool for telecollaboration in our language programmes. The primary purpose of our study was to explore how students and teachers would perceive the potential benefits…

  2. Large-area sheet task: Advanced dendritic-web-growth development

    Science.gov (United States)

    Duncan, C. S.; Seidensticker, R. G.; Mchugh, J. P.; Schruben, J.

    1983-01-01

    Thermally generated stresses in the growing web crystal were reduced. These stresses, which if too high cause the ribbon to degenerate, were reduced by a factor of three, resulting in the demonstrated growth of high-quality web crystals to widths of 5.4 cm. This progress was brought about chiefly by the application of thermal models to the development of low-stress growth configurations. A new temperature model was developed which can analyze the thermal effects of much more complex lid and top shield configurations than was possible with the old lumped shield model. Growth experiments which supplied input data such as actual shield temperature and melt levels were used to verify the modeling results. Desirable modifications in the melt level-sensing circuitry were made in the new experimental web growth furnace, and this furnace has been used to carry out growth experiments under steady-state conditions. New growth configurations were tested in long growth runs at Westinghouse AESD which produced wider, lower stress and higher quality web crystals than designs previously used.

  3. Oral proficiency teaching with WebCEF and Skype : Scenarios for online production and interaction tasks

    NARCIS (Netherlands)

    Jager, S.; Meima, Estelle; Oggel, Gerdientje

    2012-01-01

    This article reports our findings on using WebCEF as a CEFR familiarization and self-assessment tool for oral proficiency. Furthermore, we outline how we have implemented Skype as a tool for telecollaboration in our language programmes. The primary purpose of our study was to explore how students

  4. Selected results from a large study of Web searching: the Excite study

    Directory of Open Access Journals (Sweden)

    Amanda Spink

    2000-01-01

    Full Text Available This paper reports selected findings from an ongoing series of studies analyzing large-scale data sets containing queries posed by users of Excite, a major Internet search service. The findings presented report on: (1) query length and frequency, (2) Boolean queries, (3) query reformulation, (4) phrase searching, (5) search term distribution, (6) relevance feedback, (7) viewing pages of results, (8) successive searching, (9) sexually-related searching, (10) image queries and (11) multi-lingual aspects. Further research is discussed.

  5. Eysenbach, Tuische and Diepgen’s Evaluation of Web Searching for Identifying Unpublished Studies for Systematic Reviews: An Innovative Study Which is Still Relevant Today.

    Directory of Open Access Journals (Sweden)

    Simon Briscoe

    2016-09-01

    Full Text Available A Review of: Eysenbach, G., Tuische, J. & Diepgen, T.L. (2001). Evaluation of the usefulness of Internet searches to identify unpublished clinical trials for systematic reviews. Medical Informatics and the Internet in Medicine, 26(3), 203-218. http://dx.doi.org/10.1080/14639230110075459 Objective – To consider whether web searching is a useful method for identifying unpublished studies for inclusion in systematic reviews. Design – Retrospective web searches using the AltaVista search engine were conducted to identify unpublished studies – specifically, clinical trials – for systematic reviews which did not use a web search engine. Setting – The Department of Clinical Social Medicine, University of Heidelberg, Germany. Subjects – n/a Methods – Pilot testing of 11 web search engines was carried out to determine which could handle complex search queries. Pre-specified search requirements included the ability to handle Boolean and proximity operators, and truncation searching. A total of seven Cochrane systematic reviews were randomly selected from the Cochrane Library Issue 2, 1998, and their bibliographic database search strategies were adapted for the web search engine, AltaVista. Each adaptation combined search terms for the intervention, problem, and study type in the systematic review. Hints to planned, ongoing, or unpublished studies retrieved by the search engine, which were not cited in the systematic reviews, were followed up by visiting websites and contacting authors for further details when required. The authors of the systematic reviews were then contacted and asked to comment on the potential relevance of the identified studies. Main Results – Hints to 14 unpublished and potentially relevant studies, corresponding to 4 of the 7 randomly selected Cochrane systematic reviews, were identified. Out of the 14 studies, 2 were considered irrelevant to the corresponding systematic review by the systematic review authors. The

  6. Collaborating and delivering literature search results to clinical teams using web 2.0 tools.

    Science.gov (United States)

    Damani, Shamsha; Fulton, Stephanie

    2010-07-01

    This article describes the experiences of librarians at the Research Medical Library embedded within clinical teams at The University of Texas MD Anderson Cancer Center and their efforts to enhance communication within their teams using Web 2.0 tools. Pros and cons of EndNote Web, Delicious, Connotea, PBWorks, and SharePoint are discussed.

  7. The Invisible Web: Uncovering Information Sources Search Engines Can't See.

    Science.gov (United States)

    Sherman, Chris; Price, Gary

    This book takes a detailed look at the nature and extent of the Invisible Web, and offers pathfinders for accessing the valuable information it contains. It is designed to fit the needs of both novice and advanced Web searchers. Chapter One traces the development of the Internet and many of the early tools used to locate and share information via…

  8. Using anchor text, spam filtering and Wikipedia for web search and entity ranking

    NARCIS (Netherlands)

    Kamps, J.; Kaptein, R.; Koolen, M.; Voorhees, E.M.; Buckland, L.P.

    2010-01-01

    In this paper, we document our efforts in participating to the TREC 2010 Entity Ranking and Web Tracks. We had multiple aims: For the Web Track we wanted to compare the effectiveness of anchor text of the category A and B collections and the impact of global document quality measures such as

  9. Effects of Diacritics on Web Search Engines’ Performance for Retrieval of Yoruba Documents

    Directory of Open Access Journals (Sweden)

    Toluwase Victor Asubiaro

    2014-06-01

    Full Text Available This paper aims to find out the possible effect of the use or nonuse of diacritics in Yoruba search queries on the performance of major search engines, AOL, Bing, Google and Yahoo!, in retrieving documents. 30 Yoruba queries created from the most searched keywords from Nigeria on Google search logs were submitted to the search engines. The search queries were posed to the search engines without diacritics and then with diacritics. All of the search engines retrieved more sites in response to the queries without diacritics. Also, they all retrieved more precise results for queries without diacritics. The search engines also answered more queries without diacritics. There was no significant difference in the precision values of any two of the four search engines for diacritized and undiacritized queries. There was a significant difference in the effectiveness of AOL and Yahoo when diacritics were applied and when they were not applied. The findings of the study indicate that the search engines do not find a relationship between the diacritized Yoruba words and the undiacritized versions. Therefore, there is a need for search engines to add normalization steps to pre-process Yoruba queries and indexes. This study concentrates on a problem with search engines that has not been previously investigated.
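
    One possible normalization step of the kind the authors call for, sketched here with Python's standard unicodedata module (this is an illustration, not the study's procedure): decompose the text (NFD) and drop combining marks so that diacritized and undiacritized Yoruba forms index and match identically.

        import unicodedata

        def strip_diacritics(text):
            """Remove combining diacritical marks after canonical (NFD) decomposition."""
            decomposed = unicodedata.normalize("NFD", text)
            return "".join(ch for ch in decomposed if not unicodedata.combining(ch))

        print(strip_diacritics("ọmọdé"))   # -> "omode"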

  10. An exploratory study into perceived task complexity, topic specificity, and usefulness for integrated search

    DEFF Research Database (Denmark)

    Ingwersen, Peter; Lioma, Christina; Larsen, Birger

    2012-01-01

    We investigate the relations between user perceptions of work task complexity, specificity, and usefulness of retrieved results. 23 academic researchers submitted detailed descriptions of 65 real-life work tasks in the physics domain, and assessed documents retrieved from an integrated collection...... results, and high task specificity led to many useful documents....

  11. Library Catalogue Users Are Influenced by Trends in Web Searching Search Strategies. A review of: Novotny, Eric. “I Don’t Think I Click: A Protocol Analysis Study of Use of a Library Online Catalog in the Internet Age.” College & Research Libraries, 65.6 (Nov. 2004): 525-37.

    Directory of Open Access Journals (Sweden)

    Susan Haigh

    2006-09-01

    Full Text Available Objective – To explore how Web-savvy users think about and search an online catalogue. Design – Protocol analysis study. Setting – Academic library (Pennsylvania State University Libraries). Subjects – Eighteen users (17 students, 1 faculty member) of an online public access catalog, divided into two groups of nine first-time and nine experienced users. Method – The study team developed five tasks that represented a range of activities commonly performed by library users, such as searching for a specific item, identifying a library location, and requesting a copy. Seventeen students and one faculty member, divided evenly between novice and experienced searchers, were recruited to “think aloud” through the performance of the tasks. Data were gathered through audio recordings, screen capture software, and investigator notes. The time taken for each task was recorded, and investigators rated task completion as “successful,” “partially successful,” “fail,” or “search aborted.” After the searching session, participants were interviewed to clarify their actions and provide further commentary on the catalogue search. Main results – Participants in both test groups were relatively unsophisticated subject searchers. They made minimal use of Boolean operators, and tended not to repair failed searches by rethinking the search vocabulary and using synonyms. Participants did not have a strong understanding of library catalogue contents or structure and showed little curiosity in developing an understanding of how to utilize the catalogue. Novice users were impatient both in choosing search options and in evaluating their search results. They assumed search results were sorted by relevance, and thus would not typically browse past the initial screen. They quickly followed links, fearlessly tried different searches and options, and rapidly abandoned false trails. Experienced users were more effective and efficient searchers than

  12. SCANPS: a web server for iterative protein sequence database searching by dynamic programing, with display in a hierarchical SCOP browser.

    Science.gov (United States)

    Walsh, Thomas P; Webber, Caleb; Searle, Stephen; Sturrock, Shane S; Barton, Geoffrey J

    2008-07-01

    SCANPS performs iterative profile searching similar to PSI-BLAST but with full dynamic programming on each cycle and on-the-fly estimation of significance. This combination gives good sensitivity and selectivity that outperforms PSI-BLAST in domain-searching benchmarks. Although computationally expensive, SCANPS exploits on-chip parallelism (MMX and SSE2 instructions on Intel chips) as well as MPI parallelism to give acceptable turnaround times even for large databases. A web server developed to run SCANPS searches is now available at http://www.compbio.dundee.ac.uk/www-scanps. The server interface allows a range of different protein sequence databases to be searched, including the SCOP database of protein domains. The server provides the user with regularly updated versions of the main protein sequence databases and is backed up by significant computing resources which ensure that searches are performed rapidly. For SCOP searches, the results may be viewed in a new tree-based representation that reflects the structure of the SCOP hierarchy; this aids the user in placing each hit in the context of its SCOP classification and understanding its relationship to other domains in SCOP.

  13. The FOLDALIGN web server for pairwise structural RNA alignment and mutual motif search

    DEFF Research Database (Denmark)

    Havgaard, Jakob Hull; Lyngsø, Rune B.; Gorodkin, Jan

    2005-01-01

    FOLDALIGN is a Sankoff-based algorithm for making structural alignments of RNA sequences. Here, we present a web server for making pairwise alignments between two RNA sequences, using the recently updated version of FOLDALIGN. The server can be used to scan two sequences for a common structural RNA motif of limited size, or the entire sequences can be aligned locally or globally. The web server offers a graphical interface, which makes it simple to make alignments and manually browse the results. The web server can be accessed at http://foldalign.kvl.dk

  14. Finding people, papers, and posts: Vertical search algorithms and evaluation

    NARCIS (Netherlands)

    Berendsen, R.W.

    2015-01-01

    There is a growing diversity of information access applications. While general web search has been dominant in the past few decades, a wide variety of so-called vertical search tasks and applications have come to the fore. Vertical search is an often used term for search that targets specific

  15. Literature search for intermittent rivers research using ISI Web of Science

    Data.gov (United States)

    U.S. Environmental Protection Agency — The dataset is the bibliometric information included in the ISI Web of Science database of scientific literature. Table S2 accessible from the dataset link provides...

  16. The efficacy of using search engines in procuring information about orthopaedic foot and ankle problems from the World Wide Web.

    Science.gov (United States)

    Nogler, M; Wimmer, C; Mayr, E; Ofner, D

    1999-05-01

    This study has attempted to demonstrate the feasibility of obtaining information specific to foot and ankle orthopaedics from the World Wide Web (WWW). Six search engines (Lycos, AltaVista, Infoseek, Excite, Webcrawler, and HotBot) were used in scanning the Web for the following key words: "cavus foot," "diabetic foot," "hallux valgus," and "pes equinovarus." Matches were classified by language, provider, type, and relevance to medical professionals or to patients. Sixty percent (407 sites) of the visited websites contained information intended for use by physicians and other medical professionals; 30% (206 sites) were related to patient information; 10% of the sites were not easily classifiable. Forty-one percent (169 sites) of the websites were commercially oriented homepages that included advertisements.

  17. C-State: an interactive web app for simultaneous multi-gene visualization and comparative epigenetic pattern search.

    Science.gov (United States)

    Sowpati, Divya Tej; Srivastava, Surabhi; Dhawan, Jyotsna; Mishra, Rakesh K

    2017-09-13

    Comparative epigenomic analysis across multiple genes presents a bottleneck for bench biologists working with NGS data. Despite the development of standardized peak analysis algorithms, the identification of novel epigenetic patterns and their visualization across gene subsets remains a challenge. We developed a fast and interactive web app, C-State (Chromatin-State), to query and plot chromatin landscapes across multiple loci and cell types. C-State has an interactive, JavaScript-based graphical user interface and runs locally in modern web browsers that are pre-installed on all computers, thus eliminating the need for cumbersome data transfer, pre-processing and prior programming knowledge. C-State is unique in its ability to extract and analyze multi-gene epigenetic information. It allows for powerful GUI-based pattern searching and visualization. We include a case study to demonstrate its potential for identifying user-defined epigenetic trends in context of gene expression profiles.

  18. Developing a Data Discovery Tool for Interdisciplinary Science: Leveraging a Web-based Mapping Application and Geosemantic Searching

    Science.gov (United States)

    Albeke, S. E.; Perkins, D. G.; Ewers, S. L.; Ewers, B. E.; Holbrook, W. S.; Miller, S. N.

    2015-12-01

    The sharing of data and results is paramount for advancing scientific research. The Wyoming Center for Environmental Hydrology and Geophysics (WyCEHG) is a multidisciplinary group that is driving scientific breakthroughs to help manage water resources in the Western United States. WyCEHG is mandated by the National Science Foundation (NSF) to share its data. However, the infrastructure from which to share such diverse, complex and massive amounts of data did not exist within the University of Wyoming. We developed an innovative framework to meet the data organization, sharing, and discovery requirements of WyCEHG by integrating both open and closed source software, embedded metadata tags, semantic web technologies, and a web-mapping application. The infrastructure uses a Relational Database Management System as the foundation, providing a versatile platform to store, organize, and query myriad datasets, taking advantage of both structured and unstructured formats. Detailed metadata are fundamental to the utility of datasets. We tag data with Uniform Resource Identifiers (URIs) to specify concepts with formal descriptions (i.e. semantic ontologies), thus allowing users to search metadata based on the intended context rather than conventional keyword searches. Additionally, WyCEHG data are geographically referenced. Using the ArcGIS API for Javascript, we developed a web mapping application leveraging database-linked spatial data services, providing a means to visualize and spatially query available data in an intuitive map environment. Using server-side scripting (PHP), the mapping application, in conjunction with semantic search modules, dynamically communicates with the database and file system, providing access to available datasets. Our approach provides a flexible, comprehensive infrastructure from which to store and serve WyCEHG's highly diverse research-based data. This framework has not only allowed WyCEHG to meet its data stewardship

  19. An Exploratory Study into Perceived Task Complexity, Topic Specificity and Usefulness for Integrated Search

    DEFF Research Database (Denmark)

    Ingwersen, Peter; Lioma, Christina; Larsen, Birger

    2012-01-01

    We investigate the relations between user perceptions of work task complexity, topic specificity, and usefulness of retrieved results. 23 academic researchers submitted detailed descriptions of 65 real-life work tasks in the physics domain, and assessed documents retrieved from an integrated...

  20. Very slow search and reach: failure to maximize expected gain in an eye-hand coordination task.

    Directory of Open Access Journals (Sweden)

    Hang Zhang

    Full Text Available We examined an eye-hand coordination task where optimal visual search and hand movement strategies were inter-related. Observers were asked to find and touch a target among five distractors on a touch screen. Their reward for touching the target was reduced by an amount proportional to how long they took to locate and reach to it. Coordinating the eye and the hand appropriately would markedly reduce the search-reach time. Using statistical decision theory we derived the sequence of interrelated eye and hand movements that would maximize expected gain and we predicted how hand movements should change as the eye gathered further information about target location. We recorded human observers' eye movements and hand movements and compared them with the optimal strategy that would have maximized expected gain. We found that most observers failed to adopt the optimal search-reach strategy. We analyze and describe the strategies they did adopt.

  1. StarTracker: An Integrated, Web-based Clinical Search Engine

    OpenAIRE

    Gregg, William; Jirjis, Jim; Lorenzi, Nancy M.; Giuse, Dario

    2003-01-01

    This poster details the design and use of the StarTracker clinical search engine. This program is fully integrated within our electronic medical record system and allows users to enter simple rules that direct formatted searches of multiple legacy databases.

  2. Web Usage Mining Analysis of Federated Search Tools for Egyptian Scholars

    Science.gov (United States)

    Mohamed, Khaled A.; Hassan, Ahmed

    2008-01-01

    Purpose: This paper aims to examine the behaviour of the Egyptian scholars while accessing electronic resources through two federated search tools. The main purpose of this article is to provide guidance for federated search tool technicians and support teams about user issues, including the need for training. Design/methodology/approach: Log…

  3. An active registry for bioinformatics web services.

    Science.gov (United States)

    Pettifer, S; Thorne, D; McDermott, P; Attwood, T; Baran, J; Bryne, J C; Hupponen, T; Mowbray, D; Vriend, G

    2009-08-15

    The EMBRACE Registry is a web portal that collects and monitors web services according to test scripts provided by their administrators. Users are able to search for, rank and annotate services, enabling them to select the most appropriate working service for inclusion in their bioinformatics analysis tasks. The web site is implemented with PHP, Python, MySQL and Apache, with all major browsers supported. (www.embraceregistry.net).

  4. Web-scale near-duplicate search: Techniques and applications : Guest Editor’s Introduction

    NARCIS (Netherlands)

    Ngo, C.W.; Xu, C.; Kraaij, W.; El Saddik, A.

    2013-01-01

    As the bandwidth accessible to average users has increased, audiovisual material has become the fastest growing datatype on the Internet. The impressive growth of the social Web, where users can exchange user-generated content, contributes to the overwhelming number of multimedia files available.

  5. Chemical compound navigator: a web-based chem-BLAST, chemical taxonomy-based search engine for browsing compounds.

    Science.gov (United States)

    Prasanna, M D; Vondrasek, Jiri; Wlodawer, Alexander; Rodriguez, H; Bhat, T N

    2006-06-01

    A novel technique to annotate, query, and analyze chemical compounds has been developed and is illustrated by using the inhibitor data on HIV protease-inhibitor complexes. In this method, all chemical compounds are annotated in terms of standard chemical structural fragments. These standard fragments are defined using criteria such as chemical classification; structural, chemical, or functional groups; and commercial, scientific or common names or synonyms. These fragments are then organized into a data tree based on their chemical substructures. Search engines have been developed that use this data tree to enable queries on inhibitors of HIV protease (http://xpdb.nist.gov/hivsdb/hivsdb.html). These search engines use a novel technique, the Chemical Block Layered Alignment of Substructure Technique (Chem-BLAST), to search on the fragments of an inhibitor and find its chemical structural neighbors. This technique for annotating and querying compounds lays the foundation for applying the Semantic Web concept to chemical compounds, allowing end users to group, sort, and search structural neighbors accurately and efficiently. During annotation, it enables the attachment of "meaning" (i.e., semantics) to data in a manner that far exceeds the current practice of associating "metadata" with data, by creating a knowledge base (or ontology) associated with compounds. Intended users of the technique are the research community and the pharmaceutical industry, for which it will provide a new tool to better identify novel chemical structural neighbors to aid drug discovery. 2006 Wiley-Liss, Inc.
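
    The fragment-tree idea lends itself to a compact illustration. The sketch below is an illustrative toy only, assuming a hand-made parent map between structural fragments and made-up compound annotations; it is not the authors' Chem-BLAST implementation, but it shows how annotating compounds with fragments organized in a tree lets one rank "structural neighbors" by shared fragment lineages.

        # Toy fragment tree for structural-neighbor lookup. Fragment names and
        # compound IDs are hypothetical, not taken from the HIV protease data.

        # parent relationships between structural fragments (child -> parent)
        FRAGMENT_PARENT = {
            "benzenesulfonamide": "sulfonamide",
            "sulfonamide": "S-containing group",
            "hydroxyethylamine": "amine",
        }

        # compounds annotated by the fragments they contain
        COMPOUND_FRAGMENTS = {
            "cmpd_A": {"benzenesulfonamide", "hydroxyethylamine"},
            "cmpd_B": {"sulfonamide"},
            "cmpd_C": {"amine"},
        }

        def ancestors(fragment):
            """Yield a fragment and all of its ancestors in the data tree."""
            while fragment is not None:
                yield fragment
                fragment = FRAGMENT_PARENT.get(fragment)

        def structural_neighbors(query):
            """Rank other compounds by the number of shared fragment lineages."""
            expanded = {cid: {a for f in frags for a in ancestors(f)}
                        for cid, frags in COMPOUND_FRAGMENTS.items()}
            scores = {cid: len(expanded[query] & frags)
                      for cid, frags in expanded.items() if cid != query}
            return sorted(scores.items(), key=lambda kv: -kv[1])

        print(structural_neighbors("cmpd_A"))   # cmpd_B shares more lineages than cmpd_C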

  6. Feature processing asymmetry in a colour and orientation conjunction-search task

    NARCIS (Netherlands)

    Hannus, A.; Bekkering, H.; Drost, E.; Bontjer, R.; Cornelissen, F.W.

    2004-01-01

    Distinctive visual cortical areas process specific visual features of objects. Does this imply that individual features are also processed independently? To investigate this, visual-search performance for individual features was compared with performance for these same features in a

  7. MotifCombinator: a web-based tool to search for combinations of cis-regulatory motifs

    Directory of Open Access Journals (Sweden)

    Tsunoda Tatsuhiko

    2007-03-01

    Full Text Available Abstract Background A combination of multiple types of transcription factors and cis-regulatory elements is often required for gene expression in eukaryotes, and the combinatorial regulation confers specific gene expression to tissues or environments. To reveal the combinatorial regulation, computational methods are developed that efficiently infer combinations of cis-regulatory motifs that are important for gene expression as measured by DNA microarrays. One promising type of computational method is to utilize regression analysis between expression levels and scores of motifs in input sequences. This type takes full advantage of information on expression levels because it does not require that the expression level of each gene be dichotomized according to whether or not it reaches a certain threshold level. However, there is no web-based tool that employs regression methods to systematically search for motif combinations and that practically handles combinations of more than two or three motifs. Results We here introduced MotifCombinator, an online tool with a user-friendly interface, to systematically search for combinations composed of any number of motifs based on regression methods. The tool utilizes well-known regression methods (the multivariate linear regression, the multivariate adaptive regression spline or MARS, and the multivariate logistic regression method for this purpose, and uses the genetic algorithm to search for combinations composed of any desired number of motifs. The visualization systems in this tool help users to intuitively grasp the process of the combination search, and the backup system allows users to easily stop and restart calculations that are expected to require large computational time. This tool also provides preparatory steps needed for systematic combination search – i.e., selecting single motifs to constitute combinations and cutting out redundant similar motifs based on clustering analysis. Conclusion
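
    The regression step can be illustrated with a small sketch. The example below uses synthetic data and numpy least squares as a stand-in for the multivariate linear regression option described above, and replaces the genetic-algorithm search with exhaustive enumeration of motif pairs for brevity; the data and variable names are assumptions, not part of the tool.

        # Regression-based scoring of motif combinations on synthetic data.
        # numpy least squares stands in for the multivariate linear regression,
        # and exhaustive enumeration of pairs replaces the genetic algorithm.
        import itertools
        import numpy as np

        rng = np.random.default_rng(0)
        n_genes, n_motifs = 200, 6
        motif_scores = rng.random((n_genes, n_motifs))      # per-gene motif scores
        expression = (2.0 * motif_scores[:, 1] + 1.5 * motif_scores[:, 4]
                      + rng.normal(0.0, 0.1, n_genes))      # synthetic expression levels

        def fit_rss(combo):
            """Residual sum of squares of a linear fit on the chosen motif columns."""
            X = np.column_stack([np.ones(n_genes), motif_scores[:, list(combo)]])
            coef, *_ = np.linalg.lstsq(X, expression, rcond=None)
            return float(np.sum((expression - X @ coef) ** 2))

        best = min(itertools.combinations(range(n_motifs), 2), key=fit_rss)
        print("best-fitting motif pair:", best)              # expected: (1, 4)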

  8. Construction of web-based nutrition education contents and searching engine for usage of healthy menu of children

    Science.gov (United States)

    Lee, Tae-Kyong; Chung, Hea-Jung; Park, Hye-Kyung; Lee, Eun-Ju; Nam, Hye-Seon; Jung, Soon-Im; Cho, Jee-Ye; Lee, Jin-Hee; Kim, Gon; Kim, Min-Chan

    2008-01-01

    Dietary habits developed in childhood last a lifetime, so nutrition education and early exposure to healthy menus in childhood are important. Children these days have easy access to the internet, which makes a web-based nutrition education program an effective tool for the nutrition education of children. The site provides nutrition education material for children through characters that are personified nutrients. The 151 menus are stored on the site together with video scripts of the cooking process. The menus are classified by criteria based on age, menu type and the ethnic origin of the menu. The site provides a search function with three kinds of search conditions: keywords, menu type and a "between" expression on nutrients such as calories. The site was developed with Windows 2003 Server as the operating system, ZEUS 5 as the web server, JSP as the development language, and Oracle 10g as the database management system. PMID:20126375
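
    The "between" condition combined with the keyword and menu-type filters amounts to a simple conjunctive query. The sketch below is a minimal illustration over an in-memory list with made-up menu entries, standing in for the JSP/Oracle stack the abstract describes.

        # Keyword, menu-type and "between" nutrient filters over a toy menu list.
        MENUS = [
            {"name": "vegetable rice", "type": "main dish", "calories": 320},
            {"name": "fruit salad", "type": "side dish", "calories": 150},
            {"name": "milk pudding", "type": "dessert", "calories": 210},
        ]

        def search_menus(keyword=None, menu_type=None, calories_between=None):
            results = MENUS
            if keyword:
                results = [m for m in results if keyword in m["name"]]
            if menu_type:
                results = [m for m in results if m["type"] == menu_type]
            if calories_between:
                low, high = calories_between
                results = [m for m in results if low <= m["calories"] <= high]
            return results

        print(search_menus(calories_between=(100, 250)))   # fruit salad, milk pudding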

  9. Modeling of protein-peptide interactions using the CABS-dock web server for binding site search and flexible docking.

    Science.gov (United States)

    Blaszczyk, Maciej; Kurcinski, Mateusz; Kouza, Maksim; Wieteska, Lukasz; Debinski, Aleksander; Kolinski, Andrzej; Kmiecik, Sebastian

    2016-01-15

    Protein-peptide interactions play essential functional roles in living organisms and their structural characterization is a hot subject of current experimental and theoretical research. Computational modeling of the structure of protein-peptide interactions is usually divided into two stages: prediction of the binding site at a protein receptor surface, and then docking (and modeling) the peptide structure into the known binding site. This paper presents a comprehensive CABS-dock method for the simultaneous search of binding sites and flexible protein-peptide docking, available as a user-friendly web server. We present example CABS-dock results obtained in the default CABS-dock mode and using its advanced options, which enable the user to increase the range of flexibility for chosen receptor fragments or to exclude user-selected binding modes from the docking search. Furthermore, we demonstrate a strategy to improve CABS-dock performance by assessing the quality of models with classical molecular dynamics. Finally, we discuss the promising extensions and applications of the CABS-dock method and provide a tutorial appendix for the convenient analysis and visualization of CABS-dock results. The CABS-dock web server is freely available at http://biocomp.chem.uw.edu.pl/CABSdock/. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.

  10. Searching for Comets on the World Wide Web: The Orbit of 17P/Holmes from the Behavior of Photographers

    Science.gov (United States)

    Lang, Dustin; Hogg, David W.

    2012-08-01

    We performed an image search for "Comet Holmes," using the Yahoo! Web search engine, on 2010 April 1. Thousands of images were returned. We astrometrically calibrated—and therefore vetted—the images using the Astrometry.net system. The calibrated image pointings form a set of data points to which we can fit a test-particle orbit in the solar system, marginalizing over image dates and detecting outliers. The approach is Bayesian and the model is, in essence, a model of how comet astrophotographers point their instruments. In this work, we do not measure the position of the comet within each image, but rather use the celestial position of the whole image to infer the orbit. We find very strong probabilistic constraints on the orbit, although slightly off the Jet Propulsion Lab ephemeris, probably due to limitations of our model. Hyperparameters of the model constrain the reliability of date meta-data and where in the image astrophotographers place the comet; we find that ~70% of the meta-data are correct and that the comet typically appears in the central third of the image footprint. This project demonstrates that discoveries and measurements can be made using data of extreme heterogeneity and unknown provenance. As the size and diversity of astronomical data sets continues to grow, approaches like ours will become more essential. This project also demonstrates that the Web is an enormous repository of astronomical information, and that if an object has been given a name and photographed thousands of times by observers who post their images on the Web, we can (re-)discover it and infer its dynamical properties.

  11. SIFTER search: a web server for accurate phylogeny-based protein function prediction.

    Science.gov (United States)

    Sahraeian, Sayed M; Luo, Kevin R; Brenner, Steven E

    2015-07-01

    We are awash in proteins discovered through high-throughput sequencing projects. As only a minuscule fraction of these have been experimentally characterized, computational methods are widely used for automated annotation. Here, we introduce a user-friendly web interface for accurate protein function prediction using the SIFTER algorithm. SIFTER is a state-of-the-art sequence-based gene molecular function prediction algorithm that uses a statistical model of function evolution to incorporate annotations throughout the phylogenetic tree. Due to the resources needed by the SIFTER algorithm, running SIFTER locally is not trivial for most users, especially for large-scale problems. The SIFTER web server thus provides access to precomputed predictions on 16 863 537 proteins from 232 403 species. Users can explore SIFTER predictions with queries for proteins, species, functions, and homologs of sequences not in the precomputed prediction set. The SIFTER web server is accessible at http://sifter.berkeley.edu/ and the source code can be downloaded. © The Author(s) 2015. Published by Oxford University Press on behalf of Nucleic Acids Research.

  12. Using Web searches to track interest in synthetic cannabinoids (a/k/a 'herbal incense').

    Science.gov (United States)

    Curtis, Brenda; Alanis-Hirsch, Kelly; Kaynak, Övgü; Cacciola, John; Meyers, Kathy; McLellan, Anthony Thomas

    2015-01-01

    This article reports a content analysis of Internet websites related to an emerging designer drug, synthetic cannabinoids. The number of synthetic cannabinoid searches in the USA steadily increased from November 2008 to November 2011. To determine the information available on the Internet in relation to synthetic cannabinoids, sites were identified using the Google search engine and the search term 'herbal incense'. The first 100 consecutive sites were visited and classified by two coders. The websites were evaluated for type of content (retail, information, news, other). US unique monthly visitor data were examined for the top 10 retail sites, and these sites were coded for the quality of information available regarding the legality of synthetic cannabinoid sale and use. The Google search yielded 2,730,000 sites for 'herbal incense' (for comparison of search terms: 'synthetic marijuana', 1,170,000; 'K2 Spice', 247,000; and 'synthetic weed', 122,000). Moreover, in the Google search, 87% of the sites were retail sites, 5% news, 4% informational and 4% non-synthetic cannabinoid sites. Many tools found within Google's free services hold promise as a technique for identifying emerging drug markets. We recommend continued surveillance of the Internet using the online tools presented in this brief report by both drug researchers and policy-makers to identify emerging trends in synthetic drugs' availability and interest. © 2014 Australasian Professional Society on Alcohol and other Drugs.

  13. The effects of circadian phase, time awake, and imposed sleep restriction on performing complex visual tasks: evidence from comparative visual search.

    Science.gov (United States)

    Pomplun, Marc; Silva, Edward J; Ronda, Joseph M; Cain, Sean W; Münch, Mirjam Y; Czeisler, Charles A; Duffy, Jeanne F

    2012-07-26

    Cognitive performance not only differs between individuals, but also varies within them, influenced by factors that include sleep-wakefulness and biological time of day (circadian phase). Previous studies have shown that both factors influence accuracy rather than the speed of performing a visual search task, which can be hazardous in safety-critical tasks such as air-traffic control or baggage screening. However, prior investigations used simple, brief search tasks requiring little use of working memory. In order to study the effects of circadian phase, time awake, and chronic sleep restriction on the more realistic scenario of longer tasks requiring the sustained interaction of visual working memory and attentional control, the present study employed two comparative visual search tasks. In these tasks, participants had to detect a mismatch between two otherwise identical object distributions, with one of the tasks (mirror task) requiring an additional mental image transformation. Time awake and circadian phase both had significant influences on the speed, but not the accuracy of task performance. Over the course of three weeks of chronic sleep restriction, speed but not accuracy of task performance was impacted. The results suggest measures for safer performance of important tasks and point out the importance of minimizing the impact of circadian phase and sleep-wake history in laboratory vision experiments.

  14. PRESENTING SEARCH RESULT WITH REDUCED UNWANTED WEB ADDRESSES USING FUZZY BASED APPROACH

    Directory of Open Access Journals (Sweden)

    Nancy Jasmine Goldena

    2017-07-01

    Full Text Available Big Data is now one of the most discussed research subjects. Over the years, with the expansion of the internet and of storage capacity, vast swaths of data have become available to would-be searchers. About a decade ago, when content was searched, the limited amount of content often meant that an accurate set of results was returned. Nowadays, however, much of the retrieved data is vague and sometimes does not even pertain to the area the search was intended for. Hence, a novel approach is presented here that performs data cleaning using a simple but effective fuzzy rule to weed out results that would reduce accuracy.
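
    As a minimal sketch of the idea, assuming a simple term-overlap measure, a piecewise-linear membership function and a 0.5 cut-off (all assumptions for the example, not the rule proposed in the paper), unwanted addresses can be weeded out like this:

        # Fuzzy filtering of result URLs by a piecewise-linear relevance membership.
        def term_overlap(query, snippet):
            q, s = set(query.lower().split()), set(snippet.lower().split())
            return len(q & s) / max(len(q), 1)

        def fuzzy_relevance(overlap):
            """Membership: 0 below 0.2 overlap, 1 above 0.8, linear in between."""
            if overlap <= 0.2:
                return 0.0
            if overlap >= 0.8:
                return 1.0
            return (overlap - 0.2) / 0.6

        def filter_results(query, results, threshold=0.5):
            return [r for r in results
                    if fuzzy_relevance(term_overlap(query, r["snippet"])) >= threshold]

        results = [
            {"url": "http://example.org/a", "snippet": "big data cleaning with fuzzy rules"},
            {"url": "http://example.org/b", "snippet": "holiday photos and recipes"},
        ]
        print(filter_results("fuzzy rules for big data cleaning", results))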

  15. In search of design principles for developing digital learning & performance support for a student design task

    NARCIS (Netherlands)

    Bollen, Lars; Van der Meij, Hans; Leemkuil, Henny; McKenney, Susan

    2016-01-01

    A digital learning and performance support environment for university student design tasks was developed. This paper describes the design rationale, the development process, and the usage results, in order to arrive at a core set of design principles for the construction of such an environment. We present a collection of

  16. In search of design principles for developing digital learning & performance support for a student design task

    NARCIS (Netherlands)

    Bollen, Lars; van der Meij, Hans; Leemkuil, Hendrik H.; McKenney, Susan

    2015-01-01

    A digital learning and performance support environment for university student design tasks was developed. This paper describes the design rationale, the development process, and the usage results, in order to arrive at a core set of design principles for the construction of such an environment. We present a collection of

  17. The Effect of Task and Personal Relevance on Credibility Judgements while Searching on the Internet

    Science.gov (United States)

    Kirkyla, Adrius Viktoras

    2010-01-01

    People can view the Internet as an endless source of information although it is not known how individuals might evaluate the credibility of information that is presented on websites. A methodology is needed to incorporate how the information seeking task, as well as the level of personal relevance, influences the criteria individuals use to…

  18. Search for Autonomy in Motor Task Learning in Physical Education University Students

    Science.gov (United States)

    Moreno Murcia, Juan Antonio; Lacarcel, Jose Antonio Vera; Del Villar Alvarez, Fernando

    2010-01-01

    The study focused on discovering the influence that an autonomous motor task learning programme had on the improvement of perceived competence, intrinsic regulation, incremental belief and motivational orientations. The study was performed with two groups of participants (n = 22 and n = 20) aged between 19 and 35 years. The instruments used were…

  19. Are Expert Users Always Better Searchers? Interaction of Expertise and Semantic Grouping in Hypertext Search Tasks

    Science.gov (United States)

    Salmeron, L.; Canas, J. J.; Fajardo, I.

    2005-01-01

    The facilitative effect of expertise in hypertext information retrieval (IR) tasks has been widely reported in related literature. However, recent theories of human expertise question the robustness of this result, since previous works have not fully considered the interaction between user and system characteristics. In this study, the constraint…

  20. Searching for New Answers: The Application of Task-Technology Fit to E-Textbook Usage

    Science.gov (United States)

    Gerhart, Natalie; Peak, Daniel A.; Prybutok, Victor R.

    2015-01-01

    Students have been slow to adopt e-textbooks even though they are often less expensive than traditional textbooks. Prior e-textbook research has focused on adoption behavior, with little research to date on how students perceive e-textbooks fitting their needs. This work builds upon Task-Technology Fit (TTF) and Consumer Acceptance and Use of…

  1. Toward the influence of temporal attention on the selection of targets in a visual search task: An ERP study.

    Science.gov (United States)

    Rolke, Bettina; Festl, Freya; Seibold, Verena C

    2016-11-01

    We used ERPs to investigate whether temporal attention interacts with spatial attention and feature-based attention to enhance visual processing. We presented a visual search display containing one singleton stimulus among a set of homogenous distractors. Participants were asked to respond only to target singletons of a particular color and shape that were presented in an attended spatial position. We manipulated temporal attention by presenting a warning signal before each search display and varying the foreperiod (FP) between the warning signal and the search display in a blocked manner. We observed distinctive ERP effects of both spatial and temporal attention. The amplitudes for the N2pc, SPCN, and P3 were enhanced by spatial attention indicating a processing benefit of relevant stimulus features at the attended side. Temporal attention accelerated stimulus processing; this was indexed by an earlier onset of the N2pc component and a reduction in reaction times to targets. Most importantly, temporal attention did not interact with spatial attention or stimulus features to influence visual processing. Taken together, the results suggest that temporal attention fosters visual perceptual processing in a visual search task independently from spatial attention and feature-based attention; this provides support for the nonspecific enhancement hypothesis of temporal attention. © 2016 Society for Psychophysiological Research.

  2. Making Statistical Data More Easily Accessible on the Web Results of the StatSearch Case Study

    CERN Document Server

    Rajman, M; Boynton, I M; Fridlund, B; Fyhrlund, A; Sundgren, B; Lundquist, P; Thelander, H; Wänerskär, M

    2005-01-01

    In this paper we present the results of the StatSearch case study, which aimed at providing enhanced access to statistical data available on the Web. In the scope of this case study we developed a prototype of an information access tool combining a query-based search engine with semi-automated navigation techniques exploiting the hierarchical structuring of the available data. This tool enables better control of the information retrieval, improving the quality and ease of access to statistical information. The central part of the presented StatSearch tool is an algorithm for automated navigation through a tree-like hierarchical document structure. The algorithm relies on the computation of query-related relevance score distributions over the available database to identify the most relevant clusters in the data structure. These most relevant clusters are then proposed to the user for navigation or, alternatively, serve as the basis for the automated navigation process. Several appro...
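
    A minimal sketch of that navigation idea, under the assumption of a toy document hierarchy and made-up per-document relevance scores (the actual relevance model is not described here), would aggregate scores up the tree and surface the best-scoring clusters:

        # Aggregate per-document relevance up a toy hierarchy and propose the
        # best-scoring clusters for navigation.
        TREE = {
            "statistics": ["population", "economy"],
            "population": ["doc1", "doc2"],
            "economy": ["doc3"],
        }
        DOC_RELEVANCE = {"doc1": 0.1, "doc2": 0.7, "doc3": 0.4}

        def cluster_score(node):
            """Sum of document relevance scores over everything below a node."""
            if node in DOC_RELEVANCE:
                return DOC_RELEVANCE[node]
            return sum(cluster_score(child) for child in TREE.get(node, []))

        def propose_clusters(root, k=2):
            children = TREE.get(root, [])
            return sorted(children, key=cluster_score, reverse=True)[:k]

        print(propose_clusters("statistics"))   # ['population', 'economy']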

  3. Developing a Web Tool for Searching and Viewing Collections of High-Quality Cultural Images

    Science.gov (United States)

    Lazarinis, Fotis

    2010-01-01

    Purpose: Searching for information and viewing visual representations of products in e-organisations is a common activity of the e-visitors to these organisations. For example, in e-museums, users are shown images or other visual information of the existing objects. The aim of this paper is to present a tool which supports the effective searching…

  4. A Web-Based Search Engine for Chinese Calligraphic Manuscript Images

    Science.gov (United States)

    Zhuang, Yi; Jiang, Nan; Hu, Haiyang

    In this paper, we propose a novel framework for the web-based retrieval of Chinese calligraphic manuscript images that includes two main components: (1) a Shape-Similarity (SS)-based method that effectively supports retrieval over large Chinese calligraphic manuscript databases [19], in which the shapes of calligraphic characters are represented by approximate contour points extracted from the character images; and (2) a Composite-Distance-Tree (CD-Tree)-based high-dimensional indexing scheme that speeds up retrieval. Comprehensive experiments are conducted to demonstrate the effectiveness and efficiency of the proposed retrieval and indexing methods, respectively.

  5. The Hidden Web

    OpenAIRE

    Kautz, Henry; Selman, Bart; Shah, Mehul

    1997-01-01

    The difficulty of finding information on the World Wide Web by browsing hypertext documents has led to the development and deployment of various search engines and indexing techniques. However, many information-gathering tasks are better handled by finding a referral to a human expert rather than by simply interacting with online information sources. A personal referral allows a user to judge the quality of the information he or she is receiving as well as to potentially obtain information th...

  6. Development and Application of an Analyst Process Model for a Search Task Scenario

    Science.gov (United States)

    2013-12-01


  7. Multi-Robot Search for a Moving Target: Integrating World Modeling, Task Assignment and Context

    Science.gov (United States)

    2016-12-01

    The coordination system (CS) takes as input the set of sensory data D and external input events I, and generates a coordination strategy St for each different context; a procedure then locally computes and returns the most suitable task for the i-th robot (Algorithm 1: Context-Coordination). The experiments are carried out within the B-Human architecture, which provides a RoboCup-dedicated simulation platform written entirely in C++.

  8. Design of a Web-Based Decision Support System Application for Determining Assembled Computer Components Using the Depth First Search Algorithm

    OpenAIRE

    Budiarto, Bambang

    2014-01-01

    Nowadays, many brands and types of computers are sold in the market, which makes it difficult for users to select a combination of hardware specifications when assembling a computer within their budget. A decision support system for computer selection was therefore designed so that users can determine the right choice of computer according to their needs and abilities (budget). This application was built using the Depth First Search Algorithm and based on Decision Sup...
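
    The core search can be sketched in a few lines. The example below is an illustrative depth-first enumeration over made-up component categories and prices, pruning any branch that exceeds the budget; it is not the application's actual catalogue or code.

        # Depth-first enumeration of component combinations under a budget,
        # pruning branches as soon as the running cost exceeds it.
        CATALOG = {
            "cpu": [("budget-cpu", 120), ("fast-cpu", 250)],
            "ram": [("8GB", 40), ("16GB", 75)],
            "disk": [("hdd-1tb", 45), ("ssd-512gb", 60)],
        }

        def dfs_builds(budget, categories=None, chosen=(), spent=0):
            categories = list(CATALOG) if categories is None else categories
            if not categories:
                yield chosen, spent
                return
            head, rest = categories[0], categories[1:]
            for name, price in CATALOG[head]:
                if spent + price <= budget:          # prune over-budget branches
                    yield from dfs_builds(budget, rest, chosen + (name,), spent + price)

        for build, cost in dfs_builds(budget=300):
            print(build, cost)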

  9. Collaborative Ranking and Profiling: Exploiting the Wisdom of Crowds in Tailored Web Search

    OpenAIRE

    Felber, Pascal; Kropf, Peter; Leonini, Lorenzo; Luu, Toan; Rajman, Martin; Rivière, Etienne

    2010-01-01

    Popular search engines essentially rely on information about the structure of the graph of linked elements to find the most relevant results for a given query. While this approach is satisfactory for popular interest domains or when the user expectations follow the main trend, it is very sensitive to the case of ambiguous queries, where queries can have answers over several different domains. Elements pertaining to an implicitly targeted interest domain with low popula...

  10. Information-computational system for storage, search and analytical processing of environmental datasets based on the Semantic Web technologies

    Science.gov (United States)

    Titov, A.; Gordov, E.; Okladnikov, I.

    2009-04-01

    This report presents the results of work devoted to the development of a working model of a software system for the storage, semantically-enabled search and retrieval, processing, and visualization of environmental datasets containing the results of meteorological and air pollution observations and of mathematical climate modeling. A specially designed metadata standard for the machine-readable description of datasets related to the meteorology, climate and atmospheric pollution transport domains is introduced as one of the key system components. To provide semantic interoperability, the Resource Description Framework (RDF, http://www.w3.org/RDF/) technology has been chosen to realize the metadata description model in the form of an RDF Schema. The final version of the RDF Schema is implemented on the basis of widely used standards such as the Dublin Core Metadata Element Set (http://dublincore.org/), the Directory Interchange Format (DIF, http://gcmd.gsfc.nasa.gov/User/difguide/difman.html), ISO 19139, etc. At present the system is available as a Web server (http://climate.risks.scert.ru/metadatabase/) based on the web-portal ATMOS engine [1] and implements dataset management functionality, including SeRQL-based semantic search as well as statistical analysis and visualization of selected data archives [2,3]. The core of the system is the Apache web server in conjunction with the Tomcat Java Servlet Container (http://jakarta.apache.org/tomcat/) and the Sesame Server (http://www.openrdf.org/), used as a database for RDF and RDF Schema. At present, statistical analysis of meteorological and climatic data with subsequent visualization of results is implemented for such datasets as NCEP/NCAR Reanalysis, Reanalysis NCEP/DOE AMIP II, JMA/CRIEPI JRA-25, ECMWF ERA-40 and local measurements obtained from meteorological stations on the territory of Russia. This functionality is aimed primarily at finding the main characteristics of regional climate dynamics. The proposed system represents
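
    The metadata-search idea can be sketched with off-the-shelf tools. The example below uses rdflib and SPARQL as widely available stand-ins for the Sesame/SeRQL stack named above (so the query language differs from the system's), and the dataset descriptions are made-up Dublin Core entries, not records from the actual portal.

        # Dublin Core metadata for two made-up dataset entries, queried with SPARQL.
        from rdflib import Graph, Literal, Namespace
        from rdflib.namespace import DC

        EX = Namespace("http://example.org/datasets/")
        g = Graph()
        g.add((EX.reanalysis1, DC.title, Literal("NCEP/NCAR Reanalysis")))
        g.add((EX.reanalysis1, DC.subject, Literal("meteorology")))
        g.add((EX.stations1, DC.title, Literal("Local station measurements")))
        g.add((EX.stations1, DC.subject, Literal("air pollution")))

        results = g.query("""
            SELECT ?ds ?title WHERE {
                ?ds <http://purl.org/dc/elements/1.1/subject> "meteorology" .
                ?ds <http://purl.org/dc/elements/1.1/title> ?title .
            }
        """)
        for row in results:
            print(row.ds, row.title)    # returns the meteorological dataset only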

  11. What Top-Down Task Sets Do for Us: An ERP Study on the Benefits of Advance Preparation in Visual Search

    Science.gov (United States)

    Eimer, Martin; Kiss, Monika; Nicholas, Susan

    2011-01-01

    When target-defining features are specified in advance, attentional target selection in visual search is controlled by preparatory top-down task sets. We used ERP measures to study voluntary target selection in the absence of such feature-specific task sets, and to compare it to selection that is guided by advance knowledge about target features.…

  12. GLIDERS - A web-based search engine for genome-wide linkage disequilibrium between HapMap SNPs

    Directory of Open Access Journals (Sweden)

    Broxholme John

    2009-10-01

    Full Text Available Abstract Background A number of tools for the examination of linkage disequilibrium (LD) patterns between nearby alleles exist, but none are available for quickly and easily investigating LD at longer ranges (>500 kb). We have developed a web-based query tool (GLIDERS: Genome-wide LInkage DisEquilibrium Repository and Search engine) that enables the retrieval of pairwise associations with r2 ≥ 0.3 across the human genome for any SNP genotyped within HapMap phase 2 and 3, regardless of the distance between the markers. Description GLIDERS is an easy-to-use web tool that only requires the user to enter the rs numbers of the SNPs for which they want to retrieve genome-wide LD (both nearby and long-range). The intuitive web interface handles both manual entry of SNP IDs and upload of files of SNP IDs. The user can limit the resulting inter-SNP associations with easy-to-use menu options. These include MAF limit (5-45%), distance limits between SNPs (minimum and maximum), r2 (0.3 to 1), HapMap population sample (CEU, YRI and JPT+CHB combined) and HapMap build/release. All resulting genome-wide inter-SNP associations are displayed on a single output page, which has a link to a downloadable tab-delimited text file. Conclusion GLIDERS is a quick and easy way to retrieve genome-wide inter-SNP associations and to explore LD patterns for any number of SNPs of interest. GLIDERS can be useful in identifying SNPs with long-range LD. This can highlight mis-mapping or other potential association signal localisation problems.
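
    The filtering options listed above boil down to a conjunctive filter over a table of precomputed pairwise associations. The sketch below applies them to a small in-memory table with made-up SNP identifiers and values; it does not query HapMap or the GLIDERS service itself.

        # Conjunctive filter over a toy table of pairwise LD values.
        PAIRS = [
            # (snp_a, snp_b, distance_bp, r2, maf, population)
            ("rs0000001", "rs0000002", 750_000, 0.45, 0.12, "CEU"),
            ("rs0000001", "rs0000003", 20_000, 0.95, 0.30, "CEU"),
            ("rs0000004", "rs0000005", 900_000, 0.35, 0.04, "YRI"),
        ]

        def query_ld(snps, min_r2=0.3, min_dist=0, max_dist=None,
                     min_maf=0.05, population=None):
            hits = []
            for a, b, dist, r2, maf, pop in PAIRS:
                if a not in snps and b not in snps:
                    continue
                if r2 < min_r2 or maf < min_maf or dist < min_dist:
                    continue
                if max_dist is not None and dist > max_dist:
                    continue
                if population is not None and pop != population:
                    continue
                hits.append((a, b, dist, r2))
            return hits

        # long-range associations (>500 kb) for one query SNP in the CEU sample
        print(query_ld({"rs0000001"}, min_dist=500_000, population="CEU"))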

  13. Literature search, review, and compilation of data for chemical and radiochemical sensors: Task 1 report

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1993-01-01

    During the next several decades, the US Department of Energy is expected to spend tens of billions of dollars in the characterization, cleanup, and monitoring of DOE's current and former installations that have various degrees of soil and groundwater contamination made up of both hazardous and mixed wastes. Each of these phases will require site surveys to determine type and quantity of hazardous and mixed wastes. It is generally recognized that these required survey and monitoring efforts cannot be performed using traditional chemistry methods based on laboratory evaluation of samples from the field. For that reason, a tremendous push during the past decade or so has been made on research and development of sensors. This report contains the results of an extensive literature search on sensors that are used or have applicability in environmental and waste management. While restricting the search to a relatively small part of the total chemistry spectrum, a sizable body of reference material is included. Results are presented in tabular form for general references obtained from database searches, as narrative reviews of relevant chapters from proceedings, as book reviews, and as reviews of journal articles with particular relevance to the review. Four broad sensor types are covered: electrochemical processes, piezoelectric devices, fiber optics, and radiochemical processes. The topics of surface chemistry processes and biosensors are not treated separately because they often are an adjunct to one of the four sensors listed. About 1,000 tabular entries are listed, including selected journal articles, reviews of conference/meeting proceedings, and books. Literature to about mid-1992 is covered.

  14. Eye Movement Analysis and Cognitive Assessment. The Use of Comparative Visual Search Tasks in a Non-immersive VR Application.

    Science.gov (United States)

    Rosa, Pedro J; Gamito, Pedro; Oliveira, Jorge; Morais, Diogo; Pavlovic, Matthew; Smyth, Olivia; Maia, Inês; Gomes, Tiago

    2017-03-23

    An adequate behavioral response depends on attentional and mnesic processes. When these basic cognitive functions are impaired, the use of non-immersive Virtual Reality Applications (VRAs) can be a reliable technique for assessing the level of impairment. However, most non-immersive VRAs use indirect measures to make inferences about visual attention and mnesic processes (e.g., time to task completion, error rate). The aim was to examine whether eye movement analysis through eye tracking (ET) can be a reliable method to probe more effectively where and how attention is deployed and how it is linked with visual working memory during comparative visual search tasks (CVSTs) in non-immersive VRAs. The eye movements of 50 healthy participants were continuously recorded while CVSTs, selected from the set of cognitive tasks in the Systemic Lisbon Battery (SLB), a VRA designed to assess cognitive impairments, were randomly presented. The total fixation duration, the number of visits in the areas of interest and in the interstimulus space, and the total execution time were significantly different as a function of Mini Mental State Examination (MMSE) scores. The present study demonstrates that CVSTs in the SLB, when combined with ET, can be a reliable and unobtrusive method for assessing cognitive abilities in healthy individuals, opening up its potential use in clinical samples.

  15. Adaptive Baseline Enhances EM-Based Policy Search: Validation in a View-Based Positioning Task of a Smartphone Balancer

    Science.gov (United States)

    Wang, Jiexin; Uchibe, Eiji; Doya, Kenji

    2017-01-01

    EM-based policy search methods estimate a lower bound of the expected return from the histories of episodes and iteratively update the policy parameters using the maximum of a lower bound of expected return, which makes gradient calculation and learning rate tuning unnecessary. Previous algorithms like Policy learning by Weighting Exploration with the Returns, Fitness Expectation Maximization, and EM-based Policy Hyperparameter Exploration implemented the mechanisms to discard useless low-return episodes either implicitly or using a fixed baseline determined by the experimenter. In this paper, we propose an adaptive baseline method to discard worse samples from the reward history and examine different baselines, including the mean, and multiples of SDs from the mean. The simulation results of benchmark tasks of pendulum swing up and cart-pole balancing, and standing up and balancing of a two-wheeled smartphone robot showed improved performances. We further implemented the adaptive baseline with mean in our two-wheeled smartphone robot hardware to test its performance in the standing up and balancing task, and a view-based approaching task. Our results showed that with adaptive baseline, the method outperformed the previous algorithms and achieved faster, and more precise behaviors at a higher successful rate. PMID:28167910

  16. Adaptive Baseline Enhances EM-Based Policy Search: Validation in a View-Based Positioning Task of a Smartphone Balancer.

    Science.gov (United States)

    Wang, Jiexin; Uchibe, Eiji; Doya, Kenji

    2017-01-01

    EM-based policy search methods estimate a lower bound of the expected return from the histories of episodes and iteratively update the policy parameters using the maximum of a lower bound of expected return, which makes gradient calculation and learning rate tuning unnecessary. Previous algorithms like Policy learning by Weighting Exploration with the Returns, Fitness Expectation Maximization, and EM-based Policy Hyperparameter Exploration implemented the mechanisms to discard useless low-return episodes either implicitly or using a fixed baseline determined by the experimenter. In this paper, we propose an adaptive baseline method to discard worse samples from the reward history and examine different baselines, including the mean, and multiples of SDs from the mean. The simulation results of benchmark tasks of pendulum swing up and cart-pole balancing, and standing up and balancing of a two-wheeled smartphone robot showed improved performances. We further implemented the adaptive baseline with mean in our two-wheeled smartphone robot hardware to test its performance in the standing up and balancing task, and a view-based approaching task. Our results showed that with adaptive baseline, the method outperformed the previous algorithms and achieved faster, and more precise behaviors at a higher successful rate.
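
    The role of the adaptive baseline can be conveyed with a toy sketch. The example below is not the paper's algorithm or tasks: a one-dimensional quadratic return stands in for the balancing benchmarks, and a simple reward-weighted update of a Gaussian search distribution stands in for the EM-based policy search, with the mean return used as the adaptive baseline below which episodes are discarded.

        # Reward-weighted update of a Gaussian search distribution with an
        # adaptive baseline: episodes returning less than the mean are discarded.
        import numpy as np

        rng = np.random.default_rng(1)

        def episode_return(theta):
            return -(theta - 2.0) ** 2           # toy task: best return at theta = 2

        mean, sigma = 0.0, 1.0                   # search distribution over theta
        for _ in range(50):
            samples = rng.normal(mean, sigma, size=30)
            returns = np.array([episode_return(s) for s in samples])
            baseline = returns.mean()            # adaptive baseline (mean return)
            keep = returns > baseline            # discard below-baseline episodes
            weights = returns[keep] - baseline   # non-negative weights
            mean = np.sum(weights * samples[keep]) / np.sum(weights)
            sigma = max(0.05, float(np.sqrt(np.sum(weights * (samples[keep] - mean) ** 2)
                                            / np.sum(weights))))

        print("estimated optimum:", round(mean, 2))   # converges towards 2.0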

  17. Computer vision enhances mobile eye-tracking to expose expert cognition in natural-scene visual-search tasks

    Science.gov (United States)

    Keane, Tommy P.; Cahill, Nathan D.; Tarduno, John A.; Jacobs, Robert A.; Pelz, Jeff B.

    2014-02-01

    Mobile eye-tracking provides the fairly unique opportunity to record and elucidate cognition in action. In our research, we are searching for patterns in, and distinctions between, the visual-search performance of experts and novices in the geo-sciences. Traveling to regions resultant from various geological processes as part of an introductory field studies course in geology, we record the prima facie gaze patterns of experts and novices when they are asked to determine the modes of geological activity that have formed the scene-view presented to them. Recording eye video and scene video in natural settings generates complex imagery that requires advanced applications of computer vision research to generate registrations and mappings between the views of separate observers. By developing such mappings, we could then place many observers into a single mathematical space where we can spatio-temporally analyze inter- and intra-subject fixations, saccades, and head motions. While working towards perfecting these mappings, we developed an updated experiment setup that allowed us to statistically analyze intra-subject eye-movement events without the need for a common domain. Through such analyses we are finding statistical differences between novices and experts in these visual-search tasks. In the course of this research we have developed a unified, open-source, software framework for processing, visualization, and interaction of mobile eye-tracking and high-resolution panoramic imagery.

  18. 3D-SURFER 2.0: web platform for real-time search and characterization of protein surfaces.

    Science.gov (United States)

    Xiong, Yi; Esquivel-Rodriguez, Juan; Sael, Lee; Kihara, Daisuke

    2014-01-01

    The increasing number of uncharacterized protein structures necessitates the development of computational approaches for function annotation using protein tertiary structures. Protein structure database search is the basis of any structure-based functional elucidation of proteins. 3D-SURFER is a web platform for real-time protein surface comparison of a given protein structure against the entire PDB using 3D Zernike descriptors. It can smoothly navigate the protein structure space in real-time from one query structure to another. A major new feature of Release 2.0 is the ability to compare the protein surface of a single chain, a single domain, or a single complex against databases of protein chains, domains, complexes, or a combination of all three in the latest PDB. Additionally, two types of protein surfaces can now be compared: all-atom surfaces and backbone-atom surfaces. The server can also accept batch jobs for a large number of database searches. Pockets in protein surfaces can be identified by VisGrid and LIGSITEcsc. The server is available at http://kiharalab.org/3d-surfer/.
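
    Once each structure is reduced to a fixed-length descriptor vector, comparison against a database is essentially nearest-neighbour ranking in descriptor space. The sketch below illustrates only that step, using random 121-dimensional vectors as placeholders for real 3D Zernike descriptors (whose computation from a surface is not shown) and made-up entry names.

        # Nearest-neighbour ranking of fixed-length surface descriptors.
        import numpy as np

        rng = np.random.default_rng(2)
        database = {f"entry_{i}": rng.random(121) for i in range(1000)}  # placeholder vectors
        query = rng.random(121)

        def rank_by_distance(query_vec, db, top=5):
            dists = {name: float(np.linalg.norm(query_vec - vec)) for name, vec in db.items()}
            return sorted(dists.items(), key=lambda kv: kv[1])[:top]

        for name, dist in rank_by_distance(query, database):
            print(name, round(dist, 3))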

  19. In search of the why : developing a system for answering why-questions

    NARCIS (Netherlands)

    Verberne, S.

    2010-01-01

    Information searching was once a task performed almost exclusively by librarians and domain experts. With the rise of the Internet, searching for information by means of search engines has become a daily activity for many people. The most widespread form of web searching is ad-hoc document

  20. search GenBank: interactive orchestration and ad-hoc choreography of Web services in the exploration of the biomedical resources of the National Center For Biotechnology Information.

    Science.gov (United States)

    Mrozek, Dariusz; Małysiak-Mrozek, Bożena; Siążnik, Artur

    2013-03-01

    Due to the growing number of biomedical entries in the data repositories of the National Center for Biotechnology Information (NCBI), it is difficult for third-party software developers to collect, manage and process all of these entries in one place without significant investment in hardware and software infrastructure and in its maintenance and administration. Web services allow the development of software applications that integrate in one place the functionality and processing logic of distributed software components, without integrating the components themselves and without integrating the resources to which they have access. This is achieved by appropriate orchestration or choreography of available Web services and their shared functions. After the successful application of Web services in the business sector, this technology can now be used to build composite software tools that are oriented towards biomedical data processing. We have developed a new tool for efficient and dynamic data exploration in GenBank and other NCBI databases. The dedicated search GenBank system makes use of NCBI Web services and the package of Entrez Programming Utilities (eUtils) in order to provide extended searching capabilities in NCBI data repositories. In search GenBank users can follow one of three exploration paths: simple data searching based on a specified user query, advanced data searching based on a specified user query, and advanced data exploration with the use of macros. search GenBank orchestrates calls of particular tools available through the NCBI Web service to provide the requested functionality, while users interactively browse selected records in search GenBank and traverse between NCBI databases using the available links. On the other hand, by building macros in the advanced data exploration mode, users create choreographies of eUtils calls, which can lead to the automatic discovery of related data in the specified databases. search GenBank extends standard capabilities of the
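
    A flavour of the underlying eUtils orchestration can be given with Biopython's Entrez module, used here as a generic client for the same NCBI Web services rather than as part of search GenBank itself; the query term and e-mail address are placeholders.

        # Search the nucleotide database, then follow links into PubMed, using
        # Biopython's wrapper around the NCBI eUtils Web services.
        from Bio import Entrez

        Entrez.email = "your.name@example.org"    # NCBI requires a contact address

        # 1) esearch: find nucleotide records matching a query
        handle = Entrez.esearch(db="nucleotide",
                                term="BRCA1[Gene] AND human[Organism]", retmax=5)
        search = Entrez.read(handle)
        handle.close()
        ids = list(search["IdList"])

        # 2) elink: discover related PubMed entries for the first hit
        pubmed_ids = []
        if ids:
            handle = Entrez.elink(dbfrom="nucleotide", db="pubmed", id=ids[0])
            linksets = Entrez.read(handle)
            handle.close()
            pubmed_ids = [link["Id"]
                          for linkset in linksets
                          for linkdb in linkset.get("LinkSetDb", [])
                          for link in linkdb.get("Link", [])]

        print(ids, pubmed_ids[:5])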

  1. Web Searching for Health: Theoretical Foundations and Connections to Health Related Outcomes

    Science.gov (United States)

    Dutta, M. J.; Bodie, G. D.

    Increasingly, consumers are using the Internet to seek out health information. This increasing demand for health information on the Internet has been accompanied by an increase in the number of Websites delivering health information online. This rise in online health information search calls for a theoretical approach that explains consumer health information seeking on the Internet. Based on a review of the literature related to health information seeking, this chapter introduces an integrative model of online health information seeking, arguing that the motivation and ability to seek out health information are two key constructs in predicting health information seeking. Finally, the chapter highlights the implications of adopting the integrative model of online health information seeking in understanding the health outcomes associated with new communication technologies.

  2. Making the catalog of the UNICAMP library collections available on the web using AltaVista Search Intranet

    Directory of Open Access Journals (Sweden)

    Mariângela Pisoni Zanaga

    Full Text Available Development and implementation of a project aimed at making the automated catalog of monographs (books and theses) held in the UNICAMP libraries available on the Web, using the AltaVista Search Intranet search tool.

  3. A Study on Information Search and Commitment Strategies on Web Environment and Internet Usage Self-Efficacy Beliefs of University Students'

    Science.gov (United States)

    Geçer, Aynur Kolburan

    2014-01-01

    This study addresses university students' information search and commitment strategies in the web environment and their internet usage self-efficacy beliefs in terms of such variables as gender, department, grade level and frequency of internet use, and examines whether there is a significant relation between these beliefs. The descriptive method was used in the study.…

  4. FirstSearch and NetFirst--Web and Dial-up Access: Plus Ca Change, Plus C'est la Meme Chose?

    Science.gov (United States)

    Koehler, Wallace; Mincey, Danielle

    1996-01-01

    Compares and evaluates the differences between OCLC's dial-up and World Wide Web FirstSearch access methods and their interfaces with the underlying databases. Also examines NetFirst, OCLC's new Internet catalog, the only Internet tracking database from a "traditional" database service. (Author/PEN)

  5. Study of Search Engine Transaction Logs Shows Little Change in How Users Use Search Engines. A review of: Jansen, Bernard J., and Amanda Spink. "How Are We Searching the World Wide Web? A Comparison of Nine Search Engine Transaction Logs." Information Processing & Management 42.1 (2006): 248-263.

    Directory of Open Access Journals (Sweden)

    David Hook

    2006-09-01

    Full Text Available Objective – To examine the interactions between users and search engines, and how they have changed over time. Design – Comparative analysis of search engine transaction logs. Setting – Nine major analyses of search engine transaction logs. Subjects – Nine web search engine studies (4 European, 5 American) over a seven-year period, covering the search engines Excite, Fireball, AltaVista, BWIE and AllTheWeb. Methods – The results from individual studies are compared by year of study for percentages of single query sessions, one-term queries, operator (and, or, not, etc.) usage and single result page viewing. As well, the authors group the search queries into eleven different topical categories and compare how the breakdown has changed over time. Main Results – Based on the percentage of single query sessions, it does not appear that the complexity of interactions has changed significantly for either the U.S.-based or the European-based search engines. As well, there was little change observed in the percentage of one-term queries over the years of study for either the U.S.-based or the European-based search engines. Few users (generally less than 20%) use Boolean or other operators in their queries, and these percentages have remained relatively stable. One area of noticeable change is in the percentage of users viewing only one results page, which has increased over the years of study. Based on the studies of the U.S.-based search engines, the topical categories of 'People, Place or Things' and 'Commerce, Travel, Employment or Economy' are becoming more popular, while the categories of 'Sex and Pornography' and 'Entertainment or Recreation' are declining. Conclusions – The percentage of users viewing only one results page increased during the years of the study, while the percentages of single query sessions, one-term sessions and operator usage remained stable. The increase in single result page viewing
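
    The measures compared across these studies are straightforward to compute from a transaction log. The sketch below works on a made-up toy log of (session, query) rows purely to illustrate the definitions of single-query sessions, one-term queries and operator usage; the actual studies used much larger logs and richer parsing.

        # Percentages of single-query sessions, one-term queries and operator use
        # computed from a toy (session_id, query) transaction log.
        from collections import Counter

        LOG = [
            ("s1", "football scores"),
            ("s2", "cheap flights AND hotels"),
            ("s2", "flights paris"),
            ("s3", "weather"),
        ]
        OPERATORS = {"AND", "OR", "NOT"}

        queries_per_session = Counter(sid for sid, _ in LOG)
        single_query_sessions = sum(1 for c in queries_per_session.values() if c == 1)
        one_term_queries = sum(1 for _, q in LOG if len(q.split()) == 1)
        operator_queries = sum(1 for _, q in LOG if OPERATORS & set(q.split()))

        n_sessions, n_queries = len(queries_per_session), len(LOG)
        print(f"single-query sessions: {100 * single_query_sessions / n_sessions:.0f}%")
        print(f"one-term queries: {100 * one_term_queries / n_queries:.0f}%")
        print(f"queries with operators: {100 * operator_queries / n_queries:.0f}%")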

  6. Evolution of Web Services in EOSDIS: Search and Order Metadata Registry (ECHO)

    Science.gov (United States)

    Mitchell, Andrew; Ramapriyan, Hampapuram; Lowe, Dawn

    2009-01-01

    During 2005 through 2008, NASA defined and implemented a major evolutionary change in its Earth Observing System Data and Information System (EOSDIS) to modernize its capabilities. This implementation was based on a vision for 2015 developed during 2005. The EOSDIS 2015 Vision emphasizes increased end-to-end data system efficiency and operability; increased data usability; improved support for end users; and decreased operations costs. One key feature of the Evolution plan was achieving higher operational maturity (ingest, reconciliation, search and order, performance, error handling) for NASA's Earth Observing System Clearinghouse (ECHO). The ECHO system is an operational metadata registry through which the scientific community can easily discover and exchange NASA's Earth science data and services. ECHO contains metadata for 2,726 data collections comprising over 87 million individual data granules and 34 million browse images from NASA's EOSDIS Data Centers and the United States Geological Survey's Landsat Project holdings. ECHO is a middleware component based on a Service Oriented Architecture (SOA). The system is comprised of a set of infrastructure services that enable the fundamental SOA functions: publish, discover, and access Earth science resources. It also provides additional services such as user management, data access control, and order management. The ECHO system has a data registry and a services registry. The data registry enables organizations to publish EOS and other Earth-science related data holdings to a common metadata model. These holdings are described through metadata in terms of datasets (types of data) and granules (specific data items of those types). ECHO also supports browse images, which provide a visual representation of the data. The published metadata can be mapped to and from existing standards (e.g., FGDC, ISO 19115). With ECHO, users can find the metadata stored in the data registry and then access the data either

  7. Use of the DISCERN tool for evaluating web searches in childhood epilepsy.

    Science.gov (United States)

    Cerminara, Caterina; Santarone, Marta Elena; Casarelli, Livia; Curatolo, Paolo; El Malhany, Nadia

    2014-12-01

    Epilepsy is an important cause of neurological disability in children. Nowadays, an increasing number of parents or caregivers use the Internet as a source of health information concerning symptoms, therapy, and prognosis of epilepsy occurring during childhood. Therefore, high-quality websites are necessary to satisfy this request. Using the DISCERN tool, we evaluated online information on childhood epilepsy provided by the first 50 links displayed on the Google search engine. The same links were evaluated by a team of pediatric neurologists (PNs) and by a lay subject (LS). The evaluation performed by the PNs found out that only 9.6% of the websites showed good reliability, that only 7.2% of the websites had a good quality of information on treatment choices, and that only 21.5% of the websites showed good overall quality of the content. With regard to the evaluation performed by the neutral subject, it was found that 21.4% of the websites showed good reliability, that 59.5% of the websites showed poor quality of information on treatment choices, and that only 2% of the websites showed good overall quality of the content. Our conclusion is that online information about childhood epilepsy still lacks reliability, accuracy, and relevance as well as fails to provide a thorough review of treatment choices. Copyright © 2014 Elsevier Inc. All rights reserved.

  8. Web Survey Design in ASP.Net 2.0: A Simple Task with One Line of Code

    Science.gov (United States)

    Liu, Chang

    2007-01-01

    Over the past few years, more and more companies have been investing in electronic commerce (EC) by designing and implementing Web-based applications. In the world of practice, the importance of using Web technology to reach individual customers has been presented by many researchers. This paper presents an easy way of conducting marketing…

  9. A study of the influence of task familiarity on user behaviors and performance with a MeSH term suggestion interface for PubMed bibliographic search.

    Science.gov (United States)

    Tang, Muh-Chyun; Liu, Ying-Hsang; Wu, Wan-Ching

    2013-09-01

    Previous research has shown that information seekers in the biomedical domain need more support in formulating their queries. A user study was conducted to evaluate the effectiveness of a metadata-based query suggestion interface for PubMed bibliographic search. The study also investigated the impact of search task familiarity on search behaviors and on the effectiveness of the interface. A real-user, real-request and real-system approach was used for the study. Unlike traditional IR evaluation, where assigned tasks are used, the participants were asked to search requests of their own. Forty-four researchers in Health Sciences participated in the evaluation - each conducted two research requests of their own, alternately with the proposed interface and the PubMed baseline. Several performance criteria were measured to assess the potential benefits of the experimental interface, including users' assessment of their original and eventual queries, the perceived usefulness of the interfaces, satisfaction with the search results, and the average relevance score of the saved records. The results show that, when searching for an unfamiliar topic, users were more likely to change their queries, indicating the effect of familiarity on search behaviors. The results also show that the interface scored higher on several of the performance criteria, such as the "goodness" of the queries, perceived usefulness, and user satisfaction. Furthermore, in line with our hypothesis, the proposed interface was relatively more effective when less familiar search requests were attempted. Results indicate that there is a selective compatibility between search familiarity and search interface. One implication of the research for system evaluation is the importance of taking task familiarity into consideration when assessing the effectiveness of interactive IR systems. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  10. Design and Testing of BACRA, a Web-Based Tool for Middle Managers at Health Care Facilities to Lead the Search for Solutions to Patient Safety Incidents.

    Science.gov (United States)

    Carrillo, Irene; Mira, José Joaquín; Vicente, Maria Asuncion; Fernandez, Cesar; Guilabert, Mercedes; Ferrús, Lena; Zavala, Elena; Silvestre, Carmen; Pérez-Pérez, Pastora

    2016-09-27

    Lack of time, lack of familiarity with root cause analysis, or suspicion that reporting may result in negative consequences hinder involvement in the analysis of safety incidents and the search for preventive actions that can improve patient safety. The aim was to develop a tool that enables hospital and primary care professionals to immediately analyze the causes of incidents and to propose and implement measures intended to prevent their recurrence. The design of the Web-based tool (BACRA) considered research on the barriers to reporting, a review of incident analysis tools, and the experience of eight managers from the field of patient safety. BACRA's design was improved in successive versions (BACRA v1.1 and BACRA v1.2) based on feedback from 86 middle managers. BACRA v1.1 was used by 13 frontline professionals to analyze safety incidents; 59 professionals used BACRA v1.2 and assessed the respective usefulness and ease of use of both versions. BACRA contains seven tabs that guide the user through the process of analyzing a safety incident and proposing preventive actions for similar future incidents. BACRA does not identify the person completing each analysis, since the password introduced to hide each analysis is linked only to the information concerning the incident and not to any personal data. The tool was used by 72 professionals from hospitals and primary care centers. BACRA v1.2 was assessed more favorably than BACRA v1.1, both in terms of its usefulness (z=2.2, P=.03) and its ease of use (z=3.0, P=.003). BACRA helps to analyze safety incidents and to propose preventive actions. BACRA guarantees anonymity of the analysis and reduces the reluctance of professionals to carry out this task. BACRA is useful and easy to use.

  11. An Evidence-Based Review of Academic Web Search Engines, 2014-2016: Implications for Librarians’ Practice and Research Agenda

    Directory of Open Access Journals (Sweden)

    Jody Condit Fagan

    2017-06-01

    Full Text Available Academic web search engines have become central to scholarly research. While the fitness of Google Scholar for research purposes has been examined repeatedly, Microsoft Academic and Google Books have not received much attention. Recent studies have much to tell us about the coverage and utility of Google Scholar, its coverage of the sciences, and its utility for evaluating researcher impact. But other aspects have been woefully understudied, such as coverage of the arts and humanities, books, and non-Western, non-English publications. User research has also tapered off. A small number of articles hint at the opportunity for librarians to become expert advisors concerning opportunities of scholarly communication made possible or enhanced by these platforms. This article seeks to summarize research concerning Google Scholar, Google Books, and Microsoft Academic from the past three years with a mind to informing practice and setting a research agenda. Selected literature from earlier time periods is included to illuminate key findings and to help shape the proposed research agenda, especially in understudied areas.

  12. The effect of patient narratives on information search in a web-based breast cancer decision aid: an eye-tracking study.

    Science.gov (United States)

    Shaffer, Victoria A; Owens, Justin; Zikmund-Fisher, Brian J

    2013-12-17

    Previous research has examined the impact of patient narratives on treatment choices, but to our knowledge, no study has examined the effect of narratives on information search. Further, no research has considered the relative impact of their format (text vs video) on health care decisions in a single study. Our goal was to examine the impact of video and text-based narratives on information search in a Web-based patient decision aid for early stage breast cancer. Fifty-six women were asked to imagine that they had been diagnosed with early stage breast cancer and needed to choose between two surgical treatments (lumpectomy with radiation or mastectomy). Participants were randomly assigned to view one of four versions of a Web decision aid. Two versions of the decision aid included videos of interviews with patients and physicians or videos of interviews with physicians only. To distinguish between the effect of narratives and the effect of videos, we created two text versions of the Web decision aid by replacing the patient and physician interviews with text transcripts of the videos. Participants could freely browse the Web decision aid until they developed a treatment preference. We recorded participants' eye movements using the Tobii 1750 eye-tracking system equipped with Tobii Studio software. A priori, we defined 24 areas of interest (AOIs) in the Web decision aid. These AOIs were either separate pages of the Web decision aid or sections within a single page covering different content. We used multilevel modeling to examine the effect of narrative presence, narrative format, and their interaction on information search. There was a significant main effect of condition, P=.02; participants viewing decision aids with patient narratives spent more time searching for information than participants viewing the decision aids without narratives. The main effect of format was not significant, P=.10. However, there was a significant condition by format interaction on
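
    The analysis described above fits multilevel models of viewing time across areas of interest nested within participants. A minimal sketch with statsmodels follows; the synthetic data frame, column names, and formula are assumptions for illustration, not the authors' actual model specification.

```python
# Sketch: mixed-effects model of AOI dwell time with narrative presence and
# format (video vs text) as fixed effects and participant as a random intercept.
# The data are synthetic; real values would come from eye-tracking logs.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_participants, n_aois = 56, 24
df = pd.DataFrame({
    "participant": np.repeat(np.arange(n_participants), n_aois),
    "aoi": np.tile(np.arange(n_aois), n_participants),
    "narrative": np.repeat(rng.integers(0, 2, n_participants), n_aois),
    "video": np.repeat(rng.integers(0, 2, n_participants), n_aois),
})
df["dwell_time"] = 5 + 2 * df["narrative"] + rng.normal(0, 1, len(df))

model = smf.mixedlm("dwell_time ~ narrative * video", df, groups=df["participant"])
print(model.fit().summary())
```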

  13. Large-area sheet task: advanced dendritic web growth development. Quarterly report, October 23-December 31, 1980

    Energy Technology Data Exchange (ETDEWEB)

    Duncan, C. S.; Seidensticker, R. G.; McHugh, J. P.; Hopkins, R. H.; Meier, D.; Frantti, E.; Schruben, J.

    1981-01-31

    Silicon dendritic web is a single crystal ribbon form of silicon capable of fabrication into solar cells with AM1 conversion efficiency in excess of 15%. Progress on a study to demonstrate the technology readiness of the web process to meet the national goals for low cost photovoltaic power is reported. Several refinements were introduced into the sensing and control equipment for melt replenishment during web growth and also several areas were identified for cost reduction in the components of the prototype automated web growth furnace. A new circuit has been designed, assembled and tested to eliminate the sensitivity of the detector signal to the intensity of the reflected laser beam used to measure melt level. Noise due to vibrations on the silicon melt surface has also been eliminated. A new variable speed motor has been identified for the silicon feeder. Pellet feeding will be accomplished at a rate programmed to match exactly the silicon removed by web growth. A system to program the initiation of web growth automatically has been designed and first tests initiated. This should eventually result in reduced labor content and improved process reproducibility. Potential cost reductions in the furnace chamber and storage reel have been identified. A furnace controller providing a functional capability similar to our experimental hardware but at about one third the cost will shortly be tested.
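
    The pellet-feed requirement mentioned above follows from a simple mass balance: silicon must be fed at the same rate it leaves the melt as ribbon. The sketch below illustrates that balance; the ribbon dimensions and growth speed are hypothetical, not values from the report.

```python
# Sketch: silicon feed rate needed to match removal by web growth (mass balance).
# All dimensions and the growth speed are illustrative, not values from the report.
SILICON_DENSITY_G_PER_CM3 = 2.33

def feed_rate_g_per_min(width_cm, thickness_cm, growth_speed_cm_per_min):
    """Mass of silicon leaving the melt as ribbon per minute."""
    return SILICON_DENSITY_G_PER_CM3 * width_cm * thickness_cm * growth_speed_cm_per_min

# Example: a 5 cm wide, 150 micrometre thick web grown at 2 cm/min.
print(f"{feed_rate_g_per_min(5.0, 0.015, 2.0):.3f} g/min")
```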

  14. Using Web-Based Search Data to Study the Public's Reactions to Societal Events: The Case of the Sandy Hook Shooting.

    Science.gov (United States)

    Menachemi, Nir; Rahurkar, Saurabh; Rahurkar, Mandar

    2017-03-23

    Internet search is the most common activity on the World Wide Web and generates a vast amount of user-reported data regarding their information-seeking preferences and behavior. Although this data has been successfully used to examine outbreaks, health care utilization, and outcomes related to quality of care, its value in informing public health policy remains unclear. The aim of this study was to evaluate the role of Internet search query data in health policy development. To do so, we studied the public's reaction to a major societal event in the context of the 2012 Sandy Hook School shooting incident. Query data from the Yahoo! search engine regarding firearm-related searches was analyzed to examine changes in user-selected search terms and subsequent websites visited for a period of 14 days before and after the shooting incident. A total of 5,653,588 firearm-related search queries were analyzed. In the after period, queries increased for search terms related to "guns" (+50.06%), "shooting incident" (+333.71%), "ammunition" (+155.14%), and "gun-related laws" (+535.47%). The highest increase (+1054.37%) in Web traffic was seen by news websites following "shooting incident" queries whereas searches for "guns" (+61.02%) and "ammunition" (+173.15%) resulted in notable increases in visits to retail websites. Firearm-related queries generally returned to baseline levels after approximately 10 days. Search engine queries present a viable infodemiology metric on public reactions and subsequent behaviors to major societal events and could be used by policymakers to inform policy development.
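
    The reported figures are relative changes in query volume between the 14-day windows before and after the event. A small worked sketch of that calculation follows; the baseline counts are invented and chosen only so the arithmetic reproduces two of the reported percentages.

```python
# Sketch: percent change in query volume between the before and after windows.
# The counts are invented; the study reports only the resulting percentages.
def percent_change(before, after):
    return (after - before) / before * 100.0

query_counts = {"guns": (120_000, 180_072), "ammunition": (40_000, 102_056)}
for term, (before, after) in query_counts.items():
    print(f"{term}: {percent_change(before, after):+.2f}%")
```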

  15. Using Web-Based Search Data to Study the Public’s Reactions to Societal Events: The Case of the Sandy Hook Shooting

    Science.gov (United States)

    2017-01-01

    Background Internet search is the most common activity on the World Wide Web and generates a vast amount of user-reported data regarding their information-seeking preferences and behavior. Although this data has been successfully used to examine outbreaks, health care utilization, and outcomes related to quality of care, its value in informing public health policy remains unclear. Objective The aim of this study was to evaluate the role of Internet search query data in health policy development. To do so, we studied the public’s reaction to a major societal event in the context of the 2012 Sandy Hook School shooting incident. Methods Query data from the Yahoo! search engine regarding firearm-related searches was analyzed to examine changes in user-selected search terms and subsequent websites visited for a period of 14 days before and after the shooting incident. Results A total of 5,653,588 firearm-related search queries were analyzed. In the after period, queries increased for search terms related to “guns” (+50.06%), “shooting incident” (+333.71%), “ammunition” (+155.14%), and “gun-related laws” (+535.47%). The highest increase (+1054.37%) in Web traffic was seen by news websites following “shooting incident” queries whereas searches for “guns” (+61.02%) and “ammunition” (+173.15%) resulted in notable increases in visits to retail websites. Firearm-related queries generally returned to baseline levels after approximately 10 days. Conclusions Search engine queries present a viable infodemiology metric on public reactions and subsequent behaviors to major societal events and could be used by policymakers to inform policy development. PMID:28336508

  16. Analysis of Automated Modern Web Crawling and Testing Tools and Their Possible Employment for Information Extraction

    Directory of Open Access Journals (Sweden)

    Tomas Grigalis

    2012-04-01

    Full Text Available The World Wide Web has become an enormous repository of data. Extracting, integrating and reusing this kind of data has a wide range of applications, including meta-searching, comparison shopping, business intelligence tools and security analysis of information in websites. However, reaching information in modern WEB 2.0 web pages, where the HTML tree is often modified dynamically by JavaScript code, new data are added by asynchronous requests to the web server, and elements are positioned with the help of cascading style sheets, is a difficult task. The article reviews automated web testing tools for information extraction tasks. Article in Lithuanian.
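
    Because static HTML parsing misses content injected by JavaScript, browser-automation (testing) tools can render the page before extraction. A minimal sketch using Selenium, one of the kinds of tools the article reviews, is shown below; the URL and CSS selector are placeholders, and a Chrome driver is assumed to be installed.

```python
# Sketch: extract data from a JavaScript-rendered page with a headless browser.
# URL and selector are placeholders; requires Selenium and a Chrome driver.
from selenium import webdriver
from selenium.webdriver.chrome.options import Options
from selenium.webdriver.common.by import By

options = Options()
options.add_argument("--headless=new")     # run without a visible browser window
driver = webdriver.Chrome(options=options)
try:
    driver.get("https://example.com/listing")                      # placeholder URL
    driver.implicitly_wait(10)                                     # allow async content to load
    items = driver.find_elements(By.CSS_SELECTOR, ".result-item")  # placeholder selector
    for item in items:
        print(item.text)
finally:
    driver.quit()
```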

  17. DicoSE--a Web-interface based on XML for visualizing and searching the topology of the data structure defined by the DICOM 3 standard.

    Science.gov (United States)

    Prinz, Michael; Fischer, Georg; Schuster, Ernst

    2003-01-01

    To adequately visualize and interpret the data fields of DICOM 3 datasets, the data structure defined in the DICOM 3 standard has to be applied. The DICOM 3 data structure is very extensive and therefore costly to implement. We are working on an open source system which provides the data structure via a Java programming interface. The data is held in a freely available XML database. As a spin-off we are providing the web-based Dicom Search Engine (DicoSE), which will be available via the Internet soon. DicoSE allows for searching the DICOM standard data dictionary for defined data fields and visualizes the topology of the data present in DICOM datasets acquired by various types of modalities. Thus, the interpretation of the meaning of data fields is supported. For maintaining the data stored in the database, a web-based administration interface is provided.
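
    As an illustration of the kind of lookup DicoSE performs over an XML-encoded DICOM data dictionary, the sketch below searches a small, invented XML fragment with the Python standard library; the element and attribute names are assumptions, not the actual DicoSE schema.

```python
# Sketch: search an XML-encoded data dictionary for entries whose name matches
# a keyword. The XML structure here is invented for illustration only.
import xml.etree.ElementTree as ET

DICTIONARY_XML = """
<dictionary>
  <entry tag="(0010,0010)" vr="PN" name="Patient's Name"/>
  <entry tag="(0010,0030)" vr="DA" name="Patient's Birth Date"/>
  <entry tag="(0008,0060)" vr="CS" name="Modality"/>
</dictionary>
"""

def search_entries(xml_text, keyword):
    root = ET.fromstring(xml_text)
    keyword = keyword.lower()
    return [e.attrib for e in root.iter("entry") if keyword in e.attrib["name"].lower()]

for entry in search_entries(DICTIONARY_XML, "patient"):
    print(entry["tag"], entry["vr"], entry["name"])
```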

  18. The Face in the Crowd Effect Unconfounded: Happy Faces, Not Angry Faces, Are More Efficiently Detected in Single- and Multiple-Target Visual Search Tasks

    Science.gov (United States)

    Becker, D. Vaughn; Anderson, Uriah S.; Mortensen, Chad R.; Neufeld, Samantha L.; Neel, Rebecca

    2011-01-01

    Is it easier to detect angry or happy facial expressions in crowds of faces? The present studies used several variations of the visual search task to assess whether people selectively attend to expressive faces. Contrary to widely cited studies (e.g., Ohman, Lundqvist, & Esteves, 2001) that suggest angry faces "pop out" of crowds, our review of…

  19. Iconicity Influences How Effectively Minimally Verbal Children with Autism and Ability-Matched Typically Developing Children Use Pictures as Symbols in a Search Task

    Science.gov (United States)

    Hartley, Calum; Allen, Melissa L.

    2015-01-01

    Previous word learning studies suggest that children with autism spectrum disorder may have difficulty understanding pictorial symbols. Here we investigate the ability of children with autism spectrum disorder and language-matched typically developing children to contextualize symbolic information communicated by pictures in a search task that did…

  20. Information Interaction Criteria Among Students in Process of Task-Based Information Searching (Role of Objective Complexity and Type of Product)

    Directory of Open Access Journals (Sweden)

    Marziyeh Saeedizadeh

    2016-08-01

    Full Text Available Purpose: Human-information interactions must be considered in order to design Information Retrieval Systems (IRS) interactively. In this regard, the study of users’ interactions must be based on their socio-cultural context (specifically, work tasks). Accordingly, this paper aims to explore the use of information-interaction criteria among students in the information searching process according to different kinds of work tasks. Methodology: This is applied qualitative research using an exploratory study. The research population consisted of MSc students of Ferdowsi University of Mashhad enrolled in the 2012-13 academic year. In 3 stages of sampling (random stratified, quota, and voluntary sampling), 30 cases were selected. Each of these cases searched 6 different types of simulated work tasks. Interaction criteria were extracted through content analysis of think-aloud reports. Validity of the tools was verified by faculty members of KIS at Ferdowsi University of Mashhad. A Krippendorff’s alpha of 0.78 for inter-coder agreement indicates the dependability of the content analysis. Findings: The findings show that in addition to the ‘topic’ criterion, other interaction criteria affect users’ information interaction, such as ‘search results ranking’, ‘domain knowledge of user’, ‘layout’, and ‘type of information resource’. Information-interaction criteria change based on the level of objective complexity and the product of work tasks. Conclusion: Users pay attention to different information-interaction criteria in the process of information searching, depending on the variety of work tasks (level of objective complexity and product). So, it is necessary to pay attention to work task characteristics in order to design interactive and personalized IR systems.

  1. Web corpus construction

    CERN Document Server

    Schafer, Roland

    2013-01-01

    The World Wide Web constitutes the largest existing source of texts written in a great variety of languages. A feasible and sound way of exploiting this data for linguistic research is to compile a static corpus for a given language. There are several advantages of this approach: (i) Working with such corpora obviates the problems encountered when using Internet search engines in quantitative linguistic research (such as non-transparent ranking algorithms). (ii) Creating a corpus from web data is virtually free. (iii) The size of corpora compiled from the WWW may exceed by several orders of magnitude the size of language resources offered elsewhere. (iv) The data is locally available to the user, and it can be linguistically post-processed and queried with the tools preferred by her/him. This book addresses the main practical tasks in the creation of web corpora up to giga-token size. Among these tasks are the sampling process (i.e., web crawling) and the usual cleanups including boilerplate removal and rem...
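
    One of the cleanup steps named above, boilerplate removal, can be approximated by stripping script, style, and navigation elements before extracting text. The sketch below uses BeautifulSoup for a crude version of this step; production corpus pipelines rely on stronger, content-density-based heuristics.

```python
# Sketch: crude boilerplate removal for web corpus construction.
# Real pipelines for giga-token corpora use stronger heuristics; this simply
# drops obvious non-content elements and collapses whitespace.
from bs4 import BeautifulSoup

def extract_text(html: str) -> str:
    soup = BeautifulSoup(html, "html.parser")
    for tag in soup(["script", "style", "nav", "header", "footer", "aside"]):
        tag.decompose()
    lines = (line.strip() for line in soup.get_text("\n").splitlines())
    return "\n".join(line for line in lines if line)

html = "<html><body><nav>menu</nav><p>Actual sentence for the corpus.</p></body></html>"
print(extract_text(html))
```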

  2. Difficulties in finding DNA mutations and associated phenotypic data in web resources using simple, uncomplicated search terms, and a suggested solution

    Directory of Open Access Journals (Sweden)

    Webb Elizabeth A

    2011-03-01

    Full Text Available Abstract DNA mutation data currently reside in many online databases, which differ markedly in the terminology used to describe or define the mutation and also in completeness of content, potentially making it difficult both to locate a mutation of interest and to find sought-after data (eg, phenotypic effect). To highlight the current deficiencies in the accessibility of web-based genetic variation information, we examined the ease with which various resources could be interrogated for five model mutations, using a set of simple search terms relating to the change in amino acid or nucleotide. Fifteen databases were investigated for the time and/or number of mouse clicks required to find the mutations; availability of phenotype data; the procedure for finding information; and site layout. Google and PubMed were also examined. The three locus-specific databases (LSDBs) generally yielded positive outcomes, but the 12 genome-wide databases gave poorer results, with most proving not to be searchable and only three yielding successful outcomes. Google and PubMed searches found some mutations and provided patchy information on phenotype. The results show that many web-based resources are not currently configured for fast and easy access to comprehensive mutation data, with only the isolated LSDBs providing optimal outcomes. Centralising this information within a common repository, coupled with a simple, all-inclusive interrogation process, would improve searching for all gene variation data.

  3. Distributed search engine architecture based on topic specific searches

    Science.gov (United States)

    Abudaqqa, Yousra; Patel, Ahmed

    2015-05-01

    Indisputably, search engines (SEs) abound, and the number of users performing online searches on the Web continues to grow rapidly. For example, tens of billions of searches are performed every day, and they typically return many irrelevant results that are time consuming and costly for the user to sift through. Given this problem, it has become very difficult for existing Web SEs to provide complete, relevant and up-to-date responses to users' search queries. To overcome this problem, we developed the Distributed Search Engine Architecture (DSEA), a new means of smart information query and retrieval for the World Wide Web (WWW). In DSEAs, multiple autonomous search engines, owned by different organizations or individuals, cooperate and act as a single search engine. This paper reports the work in this research focusing on the development of DSEA based on topic-specific specialised search engines. In DSEA, the results to specific queries may be provided by any of the participating search engines, of which the user is unaware. An important design goal of using topic-specific search engines is to build systems that can effectively be used by a large number of users simultaneously. Efficient and effective usage with good response is important, because it involves leveraging the vast amount of searched data from the World Wide Web by categorising it into condensed, focused, topic-specific results that meet the user's queries. The design model and the development of the DSEA adopt a Service Directory (SD) to route queries towards topic-specific document-hosting SEs. The architecture displays acceptable performance consistent with the requirements of the users. The evaluation results of the model return a very high priority score associated with the frequency of each keyword.
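
    The Service Directory described above routes each query to the most appropriate topic-specific engines. A toy sketch of such routing, scoring keyword overlap against a hand-built topic registry, is shown below; the topics, keywords, and endpoints are invented placeholders, not part of the DSEA implementation.

```python
# Sketch: route a query to topic-specific search engines via a service directory.
# Topics, keywords, and endpoints are invented placeholders.
SERVICE_DIRECTORY = {
    "medicine": {"keywords": {"disease", "drug", "clinical", "symptom"},
                 "endpoint": "https://medsearch.example/api"},
    "law":      {"keywords": {"court", "statute", "contract", "liability"},
                 "endpoint": "https://lawsearch.example/api"},
    "travel":   {"keywords": {"flight", "hotel", "visa", "itinerary"},
                 "endpoint": "https://travelsearch.example/api"},
}

def route(query: str, top_n: int = 1):
    """Return the endpoints of the topic engines whose keywords best match the query."""
    terms = set(query.lower().split())
    scored = sorted(
        ((len(terms & spec["keywords"]), topic, spec["endpoint"])
         for topic, spec in SERVICE_DIRECTORY.items()),
        reverse=True,
    )
    return [(topic, endpoint) for score, topic, endpoint in scored[:top_n] if score > 0]

print(route("drug interactions for heart disease"))
```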

  4. Categorical and Specificity Differences between User-Supplied Tags and Search Query Terms for Images. An Analysis of "Flickr" Tags and Web Image Search Queries

    Science.gov (United States)

    Chung, EunKyung; Yoon, JungWon

    2009-01-01

    Introduction: The purpose of this study is to compare characteristics and features of user supplied tags and search query terms for images on the "Flickr" Website in terms of categories of pictorial meanings and level of term specificity. Method: This study focuses on comparisons between tags and search queries using Shatford's categorization…

  5. User modeling for exploratory search on the Social Web. Exploiting social bookmarking systems for user model extraction, evaluation and integration

    OpenAIRE

    Gontek, Mirko

    2011-01-01

    Exploratory search is an information seeking strategy that extends beyond the query-and-response paradigm of traditional Information Retrieval models. Users browse through information to discover novel content and to learn more about the newly discovered things. Social bookmarking systems integrate well with exploratory search, because they allow one to search, browse, and filter social bookmarks. Our contribution is an exploratory tag search engine that merges social bookmarking with ex...

  6. Categorical and specificity differences between user-supplied tags and search query terms for images. An analysis of Flickr tags and Web image search queries

    National Research Council Canada - National Science Library

    Yoon, JungWon; Chung, Eun Kyung

    2009-01-01

    Introduction. The purpose of this study is to compare characteristics and features of user-supplied tags and search query terms for images on the Flickr Website in terms of categories of pictorial meanings...

  7. Clinician search behaviors may be influenced by search engine design.

    Science.gov (United States)

    Lau, Annie Y S; Coiera, Enrico; Zrimec, Tatjana; Compton, Paul

    2010-06-30

    Searching the Web for documents using information retrieval systems plays an important part in clinicians' practice of evidence-based medicine. While much research focuses on the design of methods to retrieve documents, there has been little examination of the way different search engine capabilities influence clinician search behaviors. Previous studies have shown that use of task-based search engines allows for faster searches with no loss of decision accuracy compared with resource-based engines. We hypothesized that changes in search behaviors may explain these differences. In all, 75 clinicians (44 doctors and 31 clinical nurse consultants) were randomized to use either a resource-based or a task-based version of a clinical information retrieval system to answer questions about 8 clinical scenarios in a controlled setting in a university computer laboratory. Clinicians using the resource-based system could select 1 of 6 resources, such as PubMed; clinicians using the task-based system could select 1 of 6 clinical tasks, such as diagnosis. Clinicians in both systems could reformulate search queries. System logs unobtrusively capturing clinicians' interactions with the systems were coded and analyzed for clinicians' search actions and query reformulation strategies. The most frequent search action of clinicians using the resource-based system was to explore a new resource with the same query, that is, these clinicians exhibited a "breadth-first" search behaviour. Of 1398 search actions, clinicians using the resource-based system conducted 401 (28.7%, 95% confidence interval [CI] 26.37-31.11) in this way. In contrast, the majority of clinicians using the task-based system exhibited a "depth-first" search behavior in which they reformulated query keywords while keeping to the same task profiles. Of 585 search actions conducted by clinicians using the task-based system, 379 (64.8%, 95% CI 60.83-68.55) were conducted in this way. This study provides evidence that
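
    The proportions and confidence intervals quoted above (for example, 401 of 1398 actions, 95% CI 26.37-31.11) can be reproduced approximately with a standard binomial interval. The sketch below uses statsmodels; the interval method the authors used is not stated, so small differences from the reported bounds are expected.

```python
# Sketch: 95% confidence interval for the proportion of "breadth-first" actions.
# Counts come from the abstract; the interval method the authors used is unknown,
# so the result may differ slightly from the reported 26.37-31.11.
from statsmodels.stats.proportion import proportion_confint

successes, total = 401, 1398
low, high = proportion_confint(successes, total, alpha=0.05, method="normal")
print(f"{successes / total:.1%} (95% CI {low:.2%} - {high:.2%})")
```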

  8. Frontiers in ICT towards web 3.0

    CERN Document Server

    Levnajic, Zoran

    2014-01-01

    Life without the World Wide Web has become unthinkable, much like life without electricity or water supply. We rely on the web to check public transport schedules, buy a ticket for a concert or exchange photos with friends. However, many everyday tasks cannot be accomplished by the computer itself, since the websites are designed to be read by people, not machines. In addition, the online information is often unstructured and poorly organized, leaving the user with tedious work of searching and filtering. This book takes us to the frontiers of the emerging Web 3.0 or Semantic Web - a new gener

  9. Discussion, Cooperation, Collaboration: The Impact of Task Structure on Student Interaction in a Web-based Translation Exercise Module

    OpenAIRE

    Kenny, Mary Ann

    2017-01-01

    A major challenge facing the online translation instructor is to design learning opportunities that encourage communication and the sharing of ideas between students. This article asks how such group interaction may be facilitated and evaluates, in particular, the impact of task structure on student interaction in an online translation exercise module. Drawing on an empirical study carried out at Dublin City University during the academic year 2003/14, the article compares levels of intermessa...

  10. Log analysis to understand medical professionals' image searching behaviour.

    Science.gov (United States)

    Tsikrika, Theodora; Müller, Henning; Kahn, Charles E

    2012-01-01

    This paper reports on the analysis of the query logs of a visual medical information retrieval system that provides access to radiology resources. Our analysis shows that, despite sharing similarities with general Web search and also with biomedical text search, query formulation and query modification when searching for visual biomedical information have unique characteristics that need to be taken into account in order to enhance the effectiveness of the search support offered by such systems. Typical information needs of medical professionals searching radiology resources are also identified with the goal to create realistic search tasks for a medical image retrieval evaluation benchmark.

  11. EPA Web Taxonomy

    Data.gov (United States)

    U.S. Environmental Protection Agency — EPA's Web Taxonomy is a faceted hierarchical vocabulary used to tag web pages with terms from a controlled vocabulary. Tagging enables search and discovery of EPA's...

  12. DESIGN OF A WEB SEMI-INTELLIGENT METADATA SEARCH MODEL APPLIED IN DATA WAREHOUSING SYSTEMS

    Directory of Open Access Journals (Sweden)

    Enrique Luna Ramírez

    2008-12-01

    Full Text Available In this paper, the design of a Web metadata search model with semi-intelligent features is proposed. The search model is oriented to retrieving the metadata associated with a data warehouse in a fast, flexible and reliable way. Our proposal includes a set of distinctive functionalities, which consist of the temporary storage of frequently used metadata in an exclusive store, separate from the global data warehouse metadata store, and of the use of control processes to retrieve information from both stores through aliases of concepts.
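
    The distinctive functionality described, a temporary store of frequently used metadata queried through concept aliases, can be illustrated with a small cache-plus-alias lookup. The class below is a sketch under those assumptions, not the authors' implementation; the sample store, aliases, and size limit are invented.

```python
# Sketch: alias-aware lookup backed by a small cache of frequently used metadata,
# falling back to the (slower) global metadata store on a miss. Illustrative only.
class MetadataSearch:
    def __init__(self, global_store, aliases, cache_size=128):
        self.global_store = global_store          # concept -> metadata record
        self.aliases = aliases                    # alias -> canonical concept
        self.cache = {}                           # exclusive store of frequent metadata
        self.cache_size = cache_size

    def lookup(self, term):
        concept = self.aliases.get(term.lower(), term.lower())
        if concept in self.cache:                 # fast path: exclusive store
            return self.cache[concept]
        record = self.global_store.get(concept)   # slow path: global store
        if record is not None and len(self.cache) < self.cache_size:
            self.cache[concept] = record
        return record

store = {"sales_fact": {"table": "FACT_SALES", "grain": "order line"}}
search = MetadataSearch(store, aliases={"ventas": "sales_fact", "sales": "sales_fact"})
print(search.lookup("ventas"))
```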

  13. Assessing Ebola-related web search behaviour: insights and implications from an analytical study of Google Trends-based query volumes.

    Science.gov (United States)

    Alicino, Cristiano; Bragazzi, Nicola Luigi; Faccio, Valeria; Amicizia, Daniela; Panatto, Donatella; Gasparini, Roberto; Icardi, Giancarlo; Orsi, Andrea

    2015-12-10

    The 2014 Ebola epidemic in West Africa has attracted public interest worldwide, leading to millions of Ebola-related Internet searches being performed during the period of the epidemic. This study aimed to evaluate and interpret Google search queries for terms related to the Ebola outbreak both at the global level and in all countries where primary cases of Ebola occurred. The study also endeavoured to look at the correlation between the number of overall and weekly web searches and the number of overall and weekly new cases of Ebola. Google Trends (GT) was used to explore Internet activity related to Ebola. The study period was from 29 December 2013 to 14 June 2015. Pearson's correlation was performed to correlate Ebola-related relative search volumes (RSVs) with the number of weekly and overall Ebola cases. Multivariate regression was performed using Ebola-related RSV as a dependent variable, and the overall number of Ebola cases and the Human Development Index were used as predictor variables. The greatest RSV was registered in the three West African countries mainly affected by the Ebola epidemic. The queries varied in the different countries. Both quantitative and qualitative differences between the affected African countries and other Western countries with primary cases were noted, in relation to the different flux volumes and different time courses. In the affected African countries, web query search volumes were mostly concentrated in the capital areas. However, in Western countries, web queries were uniformly distributed over the national territory. In terms of the three countries mainly affected by the Ebola epidemic, the correlation between the number of new weekly cases of Ebola and the weekly GT index varied from weak to moderate. The correlation between the number of Ebola cases registered in all countries during the study period and the GT index was very high. Google Trends showed a coarse-grained nature, strongly correlating with global
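
    The core analysis correlates weekly relative search volumes with weekly case counts. A hedged sketch of that computation with pandas and SciPy follows; the two series are synthetic stand-ins, not Google Trends exports or reported case data.

```python
# Sketch: Pearson correlation between weekly Ebola-related relative search volume
# (RSV) and weekly new cases. The numbers are synthetic stand-ins, not study data.
import pandas as pd
from scipy.stats import pearsonr

weekly = pd.DataFrame({
    "new_cases": [10, 40, 120, 300, 500, 450, 380, 200, 90, 30],
    "rsv":       [5, 18, 55, 80, 100, 90, 70, 45, 20, 10],
})
r, p = pearsonr(weekly["new_cases"], weekly["rsv"])
print(f"Pearson r = {r:.2f}, p = {p:.4f}")
```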

  14. New evidence for strategic differences between static and dynamic search tasks: An individual observer analysis of eye movements

    Directory of Open Access Journals (Sweden)

    Christopher Dickinson

    2013-01-01

    Full Text Available Two experiments are reported that further explore the processes underlying dynamic search. In Experiment 1, observers’ oculomotor behavior was monitored while they searched for a randomly oriented T among oriented L distractors under static and dynamic viewing conditions. Despite similar search slopes, eye movements were less frequent and more spatially constrained under dynamic viewing relative to static, with misses also increasing more with target eccentricity in the dynamic condition. These patterns suggest that dynamic search involves a form of sit-and-wait strategy in which search is restricted to a small group of items surrounding fixation. To evaluate this interpretation, we developed a computational model of a sit-and-wait process hypothesized to underlie dynamic search. In Experiment 2 we tested this model by varying fixation position in the display and found that display positions optimized for a sit-and-wait strategy resulted in higher d' values relative to a less optimal location. We conclude that different strategies, and therefore underlying processes, are used to search static and dynamic displays.

  15. New Evidence for Strategic Differences between Static and Dynamic Search Tasks: An Individual Observer Analysis of Eye Movements

    Science.gov (United States)

    Dickinson, Christopher A.; Zelinsky, Gregory J.

    2013-01-01

    Two experiments are reported that further explore the processes underlying dynamic search. In Experiment 1, observers’ oculomotor behavior was monitored while they searched for a randomly oriented T among oriented L distractors under static and dynamic viewing conditions. Despite similar search slopes, eye movements were less frequent and more spatially constrained under dynamic viewing relative to static, with misses also increasing more with target eccentricity in the dynamic condition. These patterns suggest that dynamic search involves a form of sit-and-wait strategy in which search is restricted to a small group of items surrounding fixation. To evaluate this interpretation, we developed a computational model of a sit-and-wait process hypothesized to underlie dynamic search. In Experiment 2 we tested this model by varying fixation position in the display and found that display positions optimized for a sit-and-wait strategy resulted in higher d′ values relative to a less optimal location. We conclude that different strategies, and therefore underlying processes, are used to search static and dynamic displays. PMID:23372555

  16. RNA FRABASE 2.0: an advanced web-accessible database with the capacity to search the three-dimensional fragments within RNA structures.

    Science.gov (United States)

    Popenda, Mariusz; Szachniuk, Marta; Blazewicz, Marek; Wasik, Szymon; Burke, Edmund K; Blazewicz, Jacek; Adamiak, Ryszard W

    2010-05-06

    Recent discoveries concerning novel functions of RNA, such as RNA interference, have contributed towards the growing importance of the field. In this respect, a deeper knowledge of complex three-dimensional RNA structures is essential to understand their new biological functions. A number of bioinformatic tools have been proposed to explore two major structural databases (PDB, NDB) in order to analyze various aspects of RNA tertiary structures. One of these tools is RNA FRABASE 1.0, the first web-accessible database with an engine for automatic search of 3D fragments within PDB-derived RNA structures. This search is based upon the user-defined RNA secondary structure pattern. In this paper, we present and discuss RNA FRABASE 2.0. This second version of the system represents a major extension of this tool in terms of providing new data and a wide spectrum of novel functionalities. An intuitionally operated web server platform enables very fast user-tailored search of three-dimensional RNA fragments, their multi-parameter conformational analysis and visualization. RNA FRABASE 2.0 has stored information on 1565 PDB-deposited RNA structures, including all NMR models. The RNA FRABASE 2.0 search engine algorithms operate on the database of the RNA sequences and the new library of RNA secondary structures, coded in the dot-bracket format extended to hold multi-stranded structures and to cover residues whose coordinates are missing in the PDB files. The library of RNA secondary structures (and their graphics) is made available. A high level of efficiency of the 3D search has been achieved by introducing novel tools to formulate advanced searching patterns and to screen highly populated tertiary structure elements. RNA FRABASE 2.0 also stores data and conformational parameters in order to provide "on the spot" structural filters to explore the three-dimensional RNA structures. An instant visualization of the 3D RNA structures is provided. RNA FRABASE 2.0 is freely available
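
    The secondary-structure library described above is coded in dot-bracket notation. As a generic illustration of how that notation can be processed, the sketch below pairs brackets with a stack to recover base-pair indices; it is not the RNA FRABASE search engine itself.

```python
# Sketch: recover base-pair indices from a dot-bracket secondary structure string.
# Generic illustration of the notation; not the RNA FRABASE search algorithm.
def base_pairs(dot_bracket: str):
    stack, pairs = [], []
    for i, symbol in enumerate(dot_bracket):
        if symbol == "(":
            stack.append(i)
        elif symbol == ")":
            if not stack:
                raise ValueError(f"Unbalanced bracket at position {i}")
            pairs.append((stack.pop(), i))
    if stack:
        raise ValueError("Unclosed bracket(s) in structure")
    return sorted(pairs)

# A small hairpin: stem of 3 base pairs closing a 4-nucleotide loop.
print(base_pairs("(((....)))"))   # [(0, 9), (1, 8), (2, 7)]
```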

  17. RNA FRABASE 2.0: an advanced web-accessible database with the capacity to search the three-dimensional fragments within RNA structures

    Directory of Open Access Journals (Sweden)

    Wasik Szymon

    2010-05-01

    Full Text Available Abstract Background Recent discoveries concerning novel functions of RNA, such as RNA interference, have contributed towards the growing importance of the field. In this respect, a deeper knowledge of complex three-dimensional RNA structures is essential to understand their new biological functions. A number of bioinformatic tools have been proposed to explore two major structural databases (PDB, NDB) in order to analyze various aspects of RNA tertiary structures. One of these tools is RNA FRABASE 1.0, the first web-accessible database with an engine for automatic search of 3D fragments within PDB-derived RNA structures. This search is based upon the user-defined RNA secondary structure pattern. In this paper, we present and discuss RNA FRABASE 2.0. This second version of the system represents a major extension of this tool in terms of providing new data and a wide spectrum of novel functionalities. An intuitionally operated web server platform enables very fast user-tailored search of three-dimensional RNA fragments, their multi-parameter conformational analysis and visualization. Description RNA FRABASE 2.0 has stored information on 1565 PDB-deposited RNA structures, including all NMR models. The RNA FRABASE 2.0 search engine algorithms operate on the database of the RNA sequences and the new library of RNA secondary structures, coded in the dot-bracket format extended to hold multi-stranded structures and to cover residues whose coordinates are missing in the PDB files. The library of RNA secondary structures (and their graphics) is made available. A high level of efficiency of the 3D search has been achieved by introducing novel tools to formulate advanced searching patterns and to screen highly populated tertiary structure elements. RNA FRABASE 2.0 also stores data and conformational parameters in order to provide "on the spot" structural filters to explore the three-dimensional RNA structures. An instant visualization of the 3D RNA

  18. The Opera del Vocabolario Italiano Database: Full-Text Searching Early Italian Vernacular Sources on the Web.

    Science.gov (United States)

    DuPont, Christian

    2001-01-01

    Introduces and describes the functions of the Opera del Vocabolario Italiano (OVI) database, a powerful Web-based, full-text, searchable electronic archive that contains early Italian vernacular texts whose composition may be dated prior to 1375. Examples are drawn from scholars in various disciplines who have employed the OVI in support of their…

  19. Use of Web 2.0 Technologies in K-12 and Higher Education: The Search for Evidence-Based Practice

    Science.gov (United States)

    Hew, Khe Foon; Cheung, Wing Sum

    2013-01-01

    Evidence-based practice in education entails making pedagogical decisions that are informed by relevant empirical research evidence. The main purpose of this paper is to discuss evidence-based pedagogical approaches related to the use of Web 2.0 technologies in both K-12 and higher education settings. The use of such evidence-based practice would…

  20. ASCOT: a text mining-based web-service for efficient search and assisted creation of clinical trials

    Science.gov (United States)

    2012-01-01

    Clinical trials are mandatory protocols describing medical research on humans and among the most valuable sources of medical practice evidence. Searching for trials relevant to some query is laborious due to the immense number of existing protocols. Apart from search, writing new trials includes composing detailed eligibility criteria, which might be time-consuming, especially for new researchers. In this paper we present ASCOT, an efficient search application customised for clinical trials. ASCOT uses text mining and data mining methods to enrich clinical trials with metadata, that in turn serve as effective tools to narrow down search. In addition, ASCOT integrates a component for recommending eligibility criteria based on a set of selected protocols. PMID:22595088
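
    A common baseline for the kind of free-text search that ASCOT customises is TF-IDF retrieval ranked by cosine similarity. The sketch below implements that baseline over a few invented trial snippets; ASCOT's own text-mining enrichment and metadata-based narrowing are more elaborate.

```python
# Sketch: TF-IDF baseline retrieval over (invented) clinical trial snippets.
# ASCOT's actual enrichment and ranking pipeline is more sophisticated.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

trials = [
    "Randomised trial of metformin in adults with type 2 diabetes.",
    "Phase II study of immunotherapy for metastatic melanoma.",
    "Behavioural intervention for smoking cessation in pregnant women.",
]
vectorizer = TfidfVectorizer(stop_words="english")
trial_vectors = vectorizer.fit_transform(trials)

query_vector = vectorizer.transform(["diabetes drug trial in adults"])
scores = cosine_similarity(query_vector, trial_vectors).ravel()
for score, text in sorted(zip(scores, trials), reverse=True):
    print(f"{score:.2f}  {text}")
```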

  1. Google Ajax Search API

    CERN Document Server

    Fitzgerald, Michael

    2007-01-01

    Use the Google Ajax Search API to integrateweb search, image search, localsearch, and other types of search intoyour web site by embedding a simple, dynamicsearch box to display search resultsin your own web pages using a fewlines of JavaScript. For those who do not want to write code,the search wizards and solutions builtwith the Google Ajax Search API generatecode to accomplish common taskslike adding local search results to a GoogleMaps API mashup, adding videosearch thumbnails to your web site, oradding a news reel with the latest up todate stories to your blog. More advanced users can

  2. Web Metasearch Result Clustering System

    Directory of Open Access Journals (Sweden)

    Adina LIPAI

    2008-01-01

    Full Text Available The paper presents a web search result clustering algorithm that was integrated into a desktop application. The application aims to increase web search engines' performance by reducing the user effort in finding a web page in the list of results returned by the search engines.
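
    Clustering of returned result snippets is commonly done on a bag-of-words representation. A minimal sketch, TF-IDF vectors grouped with k-means over invented snippets, follows; it does not reproduce the specific algorithm integrated into the application described in the paper.

```python
# Sketch: cluster search result snippets with TF-IDF + k-means.
# Snippets are invented; the paper's own clustering algorithm is not shown here.
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

snippets = [
    "Jaguar unveils new electric sports car model",
    "Jaguar car dealership opens downtown showroom",
    "Jaguar habitat loss threatens big cat populations",
    "Conservationists track jaguars in the rainforest",
]
X = TfidfVectorizer(stop_words="english").fit_transform(snippets)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
for label, snippet in zip(labels, snippets):
    print(label, snippet)
```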

  3. Web Search Engines and Indexing and Ranking the Content Object Including Metadata Elements Available at the Dynamic Information Environments

    Directory of Open Access Journals (Sweden)

    Faezeh sadat Tabatabai Amiri

    2012-10-01

    Full Text Available The purpose of this research was to examine the indexing and ranking of XML content objects containing Dublin Core and MARC 21 metadata elements in dynamic online information environments by general search engines, and to compare them in a comparative-analytical approach. 100 XML content objects in two groups were analyzed: those with DCXML elements and those with MARCXML elements, published on the website http://www.marcdcmi.ir from late Mordad 1388 till Khordad 1389. The website was then introduced to the Google and Yahoo search engines. Google was able to retrieve all the content objects during the study period through their Dublin Core and MARC 21 metadata elements; Yahoo, however, did not respond at all. The indexing of metadata elements embedded in content objects in dynamic online information environments, and differences between their indexing and ranking, were examined. Findings showed that all Dublin Core and MARC 21 metadata elements were indexed by Google, and no difference was observed between the indexing and ranking of DCXML and MARCXML metadata elements in dynamic online information environments by Google.

  4. Using Open Web APIs in Teaching Web Mining

    Science.gov (United States)

    Chen, Hsinchun; Li, Xin; Chau, M.; Ho, Yi-Jen; Tseng, Chunju

    2009-01-01

    With the advent of the World Wide Web, many business applications that utilize data mining and text mining techniques to extract useful business information on the Web have evolved from Web searching to Web mining. It is important for students to acquire knowledge and hands-on experience in Web mining during their education in information systems…

  5. Trends in Web characteristics

    OpenAIRE

    Miranda, João; Gomes, Daniel

    2009-01-01

    Abstract—The Web is permanently changing, with new technologies and publishing behaviors emerging everyday. It is important to track trends on the evolution of the Web to develop efficient tools to process its data. For instance, Web trends influence the design of browsers, crawlers and search engines. This study presents trends on the evolution of the Web derived from the analysis of 3 characterizations performed within an interval of 5 years. The Web portion used as a c...

  6. The Influence of Surface and Deep Cues on Primary and Secondary School Students' Assessment of Relevance in Web Menus

    Science.gov (United States)

    Rouet, Jean-Francois; Ros, Christine; Goumi, Antonine; Macedo-Rouet, Monica; Dinet, Jerome

    2011-01-01

    Two experiments investigated primary and secondary school students' Web menu selection strategies using simulated Web search tasks. It was hypothesized that students' selections of websites depend on their perception and integration of multiple relevance cues. More specifically, students should be able to disentangle superficial cues (e.g.,…

  7. The effects of task difficulty, novelty and the size of the search space on intrinsically motivated exploration

    Directory of Open Access Journals (Sweden)

    Adrien Fredj Baranes

    2014-10-01

    Full Text Available Devising efficient strategies for exploration in large open-ended spaces is one of the most difficult computational problems of intelligent organisms. Because the available rewards are ambiguous or unknown during the exploratory phase, subjects must act in an intrinsically motivated fashion. However, a vast majority of behavioral and neural studies to date have focused on decision making in reward-based tasks, and the rules guiding intrinsically motivated exploration remain largely unknown. To examine this question we developed a paradigm for systematically testing the choices of human observers in a free play context. Adult subjects played a series of short computer games of variable difficulty, and freely chose which game they wished to sample without external guidance or physical rewards. Subjects performed the task in three distinct conditions where they sampled from a small or a large choice set (7 vs 64 possible levels of difficulty), and where they did or did not have the possibility to sample new games at a constant level of difficulty. We show that despite the absence of external constraints, the subjects spontaneously adopted a structured exploration strategy whereby they (1) started with easier games and progressed to more difficult games, (2) sampled the entire choice set including extremely difficult games that could not be learnt, (3) repeated moderately and highly difficult games much more frequently than was predicted by chance, and (4) had higher repetition rates and chose higher speeds if they could generate new sequences at a constant level of difficulty. The results suggest that intrinsically motivated exploration is shaped by several factors including task difficulty, novelty and the size of the choice set, and these come into play to serve two internal goals - maximize the subjects’ knowledge of the available tasks (exploring the limits of the task set), and maximize their competence (performance and skills) across the task set.

  8. Web-based citation management compared to EndNote: options for medical sciences.

    Science.gov (United States)

    Gomis, Melissa; Gall, Carole; Brahmi, Frances A

    2008-01-01

    The authors of this article analyzed the differences in output when searching MEDLINE directly and MEDLINE via citation management software: EndNote X1, EndNote Web, and RefWorks. Several searches were performed on Ovid MEDLINE and PubMed directly. These searches were compared against the same searches conducted in Ovid MEDLINE and PubMed using the search features in EndNote X1, EndNote Web, and RefWorks. Findings indicated that for in-depth research, users should search the databases directly rather than through the citation management software interface. The search results indicated it would be appropriate to search databases via citation management software for citation verification tasks and for cursory keyword searching.

  9. In Search of Design Principles for Developing Digital Learning and Performance Support for a Student Design Task

    Science.gov (United States)

    Bollen, Lars; van der Meij, Hans; Leemkuil, Henny; McKenney, Susan

    2015-01-01

    A digital learning and performance support environment for university student design tasks was developed. This paper describes on the design rationale, process, and the usage results to arrive at a core set of design principles for the construction of such an environment. We present a collection of organizational, technical, and course-related…

  10. Effects of menu structure and touch screen scrolling style on the variability of glance durations during in-vehicle visual search tasks.

    Science.gov (United States)

    Kujala, Tuomo; Saariluoma, Pertti

    2011-08-01

    The effects of alternative navigation device display features on drivers' visual sampling efficiency while searching for points of interest were studied in two driving simulation experiments with 40 participants. Given that the number of display items was sufficient, display features that facilitate resumption of visual search following interruptions were expected to lead to more consistent in-vehicle glance durations. As predicted, compared with a grid-style menu, searching information in a list-style menu while driving led to smaller variance in durations of in-vehicle glances, in particular with nine item displays. Kinetic touch screen scrolling induced a greater number of very short in-vehicle glances than scrolling with arrow buttons. The touch screen functionality did not significantly diminish the negative effects of the grid-menu compared with physical controls with list-style menus. The findings suggest that resumability of self-paced, in-vehicle visual search tasks could be assessed with the measures of variance of in-vehicle glance duration distributions. Statement of Relevance: The reported research reveals display design factors affecting safety-relevant variability of in-vehicle glance durations and provides a theoretical framework for explaining the effects. The research can have a significant methodical value for driver distraction research and practical value for the design and testing of in-vehicle user interfaces.

  11. Acquiring geographical data with web harvesting

    Science.gov (United States)

    Dramowicz, K.

    2016-04-01

    Many websites contain very attractive and up-to-date geographical information. This information can be extracted, stored, analyzed and mapped using web harvesting techniques. Poorly organized data from websites are transformed with web harvesting into a more structured format, which can be stored in a database and analyzed. Almost 25% of web traffic is related to web harvesting, mostly while using search engines. This paper presents how to harvest geographic information from web documents using the free tool called Beautiful Soup, one of the most commonly used Python libraries for pulling data from HTML and XML files. It is a relatively easy task to process one static HTML table. The more challenging task is to extract and save information from tables located in multiple and poorly organized websites. Legal and ethical aspects of web harvesting are discussed as well. The paper demonstrates two case studies. The first one shows how to extract various types of information about the Good Country Index from multiple web pages, load it into one attribute table and map the results. The second case study shows how script tools and GIS can be used to extract information from one hundred thirty six websites about Nova Scotia wines. In a little more than three minutes a database containing one hundred and six liquor stores selling these wines is created. Then the availability and spatial distribution of various types of wines (by grape types, by wineries, and by liquor stores) are mapped and analyzed.
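
    The first case study harvests HTML tables into a single attribute table. A minimal sketch of that step with Beautiful Soup is shown below; the table is inlined with invented values, whereas a real harvest would fetch each page over HTTP and cope with messier markup.

```python
# Sketch: pull rows out of an HTML table with Beautiful Soup.
# The table is inline with invented values; a real harvest would download pages.
from bs4 import BeautifulSoup

HTML = """
<table>
  <tr><th>Country</th><th>Rank</th></tr>
  <tr><td>Ireland</td><td>1</td></tr>
  <tr><td>Finland</td><td>2</td></tr>
</table>
"""

soup = BeautifulSoup(HTML, "html.parser")
rows = []
for tr in soup.find_all("tr"):
    cells = [cell.get_text(strip=True) for cell in tr.find_all(["th", "td"])]
    rows.append(cells)

header, data = rows[0], rows[1:]
records = [dict(zip(header, row)) for row in data]
print(records)   # [{'Country': 'Ireland', 'Rank': '1'}, {'Country': 'Finland', 'Rank': '2'}]
```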

  12. Systematizing Web Search through a Meta-Cognitive, Systems-Based, Information Structuring Model (McSIS)

    Science.gov (United States)

    Abuhamdieh, Ayman H.; Harder, Joseph T.

    2015-01-01

    This paper proposes a meta-cognitive, systems-based, information structuring model (McSIS) to systematize online information search behavior based on literature review of information-seeking models. The General Systems Theory's (GST) prepositions serve as its framework. Factors influencing information-seekers, such as the individual learning…

  13. The Effects of Visual Discriminability and Rotation Angle on 30-Month-Olds’ Search Performance in Spatial Rotation Tasks

    Science.gov (United States)

    Ebersbach, Mirjam; Nawroth, Christian

    2016-01-01

    Tracking objects that are hidden and then moved is a crucial ability related to object permanence, which develops across several stages in early childhood. In spatial rotation tasks, children observe a target object that is hidden in one of two or more containers before the containers are rotated around a fixed axis. Usually, 30-month-olds fail to find the hidden object after it was rotated by 180°. We examined whether visual discriminability of the containers improves 30-month-olds’ success in this task and whether children perform better after 90° than after 180° rotations. Two potential hiding containers with same or different colors were placed on a board that was rotated by 90° or 180° in a within-subjects design. Children (N = 29) performed above chance level in all four conditions. Their overall success in finding the object did not improve by differently colored containers. However, different colors prevented children from showing an inhibition bias in 90° rotations, that is, choosing the empty container more often when it was located close to them than when it was farther away: This bias emerged in the same colors condition but not in the different colors condition. Results are discussed in view of particular challenges that might facilitate or deteriorate spatial rotation tasks for young children. PMID:27812346

  14. Web document clustering using hyperlink structures

    Energy Technology Data Exchange (ETDEWEB)

    He, Xiaofeng; Zha, Hongyuan; Ding, Chris H.Q; Simon, Horst D.

    2001-05-07

    With the exponential growth of information on the World Wide Web there is a great demand for developing efficient and effective methods for organizing and retrieving the information available. Document clustering plays an important role in information retrieval and taxonomy management for the World Wide Web and remains an interesting and challenging problem in the field of web computing. In this paper we consider document clustering methods exploring textual information, hyperlink structure and co-citation relations. In particular we apply the normalized cut clustering method developed in computer vision to the task of hyperdocument clustering. We also explore some theoretical connections of the normalized-cut method to the K-means method. We then experiment with the normalized-cut method in the context of clustering query result sets for web search engines.
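
    Normalized-cut clustering of hyperlinked documents can be sketched with spectral clustering over a symmetric link or co-citation affinity matrix. The example below uses scikit-learn's implementation as a stand-in for the authors' method, with a tiny invented graph of six documents.

```python
# Sketch: spectral (normalized-cut style) clustering on a tiny co-citation graph.
# The affinity matrix is invented; it stands in for hyperlink/co-citation weights.
import numpy as np
from sklearn.cluster import SpectralClustering

# 6 documents: 0-2 link among themselves, 3-5 among themselves, one weak bridge.
affinity = np.array([
    [0, 3, 2, 0, 0, 0],
    [3, 0, 3, 0, 0, 0],
    [2, 3, 0, 1, 0, 0],
    [0, 0, 1, 0, 2, 3],
    [0, 0, 0, 2, 0, 2],
    [0, 0, 0, 3, 2, 0],
], dtype=float)

labels = SpectralClustering(
    n_clusters=2, affinity="precomputed", random_state=0
).fit_predict(affinity)
print(labels)   # two groups, e.g. [0 0 0 1 1 1]
```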

  15. Instant PHP web scraping

    CERN Document Server

    Ward, Jacob

    2013-01-01

    Filled with practical, step-by-step instructions and clear explanations for the most important and useful tasks. Short, concise recipes to learn a variety of useful web scraping techniques using PHP. This book is aimed at those new to web scraping, with little or no previous programming experience. Basic knowledge of HTML and the Web is useful, but not necessary.

  16. Querying Data Providing Web Services

    OpenAIRE

    Sabesan, Manivasakan

    2010-01-01

    Web services are often used for search computing where data is retrieved from servers providing information of different kinds. Such data providing web services return a set of objects for a given set of parameters without any side effects. There is a need to enable general and scalable search capabilities over data from data providing web services, which is the topic of this Thesis. The Web Service MEDiator (WSMED) system automatically provides relational views of any data providing web service ...

  17. PubMed-EX: a web browser extension to enhance PubMed search with text mining features.

    Science.gov (United States)

    Tsai, Richard Tzong-Han; Dai, Hong-Jie; Lai, Po-Ting; Huang, Chi-Hsin

    2009-11-15

    PubMed-EX is a browser extension that marks up PubMed search results with additional text-mining information. PubMed-EX's page mark-up, which includes section categorization and gene/disease and relation mark-up, can help researchers to quickly focus on key terms and provide additional information on them. All text processing is performed server-side, freeing up user resources. PubMed-EX is freely available at http://bws.iis.sinica.edu.tw/PubMed-EX and http://iisr.cse.yzu.edu.tw:8000/PubMed-EX/.

  18. Discovery of Surface Target Proteins Linking Drugs, Molecular Markers, Gene Regulation, Protein Networks, and Disease by Using a Web-Based Platform Targets-search.

    Science.gov (United States)

    Yan, Bin; Wang, Panwen; Wang, Junwen; Boheler, Kenneth R

    2018-01-01

    Integration and analysis of high content omics data have been critical to the investigation of molecule interactions (e.g., DNA-protein, protein-protein, chemical-protein) in biological systems. Human proteomic strategies that provide enriched information on cell surface proteins can be utilized for repurposing of drug targets and discovery of disease biomarkers. Although several published resources have proved useful to the analysis of these interactions, our newly developed web-based platform Targets-search has the capability of integrating multiple types of omics data to unravel their association with diverse molecule interactions and disease. Here, we describe how to use Targets-search, for the integrated and systemic exploitation of surface proteins to identify potential drug targets, which can further be used to analyze gene regulation, protein networks, and possible biomarkers for diseases and cancers. To illustrate this process, we have taken data from Ewing's sarcoma to identify surface proteins differentially expressed in Ewing's sarcoma cells. These surface proteins were then analyzed to determine which ones were known drug targets. The information suggested putative targets for drug repurposing and subsequent analyses illustrated their regulation by the transcription factor EWSR1.

  19. Study of order effects in the search for information on the Web: the case of an experiment about smoking cessation techniques

    Directory of Open Access Journals (Sweden)

    Stéphane AMATO

    2013-07-01

    Full Text Available This article deals with cognitive biases that could affect the judgment of net surfers while reading a list of answers after a query in a search engine. The hypothesis is that order effects, i.e. primacy and/or recency, could be observed in such contexts. The authors chose to test it by running an experiment in a controlled environment. They focused on the field of smoking cessation techniques and refined their question as follows: after a query in a search engine, does the position of a medication in a result list determine its perceived relevance for a student population? By comparing three different groups, the authors demonstrate a primacy effect and no recency effect. In addition, they highlight five moderating variables: the individual's sex, whether he or she is a smoker, whether he or she originally had any opinion about methods of smoking cessation, whether he or she is affected by health problems related to smoking, and reading speed on the Web interface. The authors conclude in favour of information literacy education. For them, in the case presented, it would be relevant from a medical point of view, in terms of public health, and from a socio-economic development perspective.

  20. Surging Seas Risk Finder: A Simple Search-Based Web Tool for Local Sea Level Rise Projections, Coastal Flood Risk Forecasts, and Inundation Exposure Analysis

    Science.gov (United States)

    Strauss, B.; Dodson, D.; Kulp, S. A.; Rizza, D. H.

    2016-12-01

    Surging Seas Risk Finder (riskfinder.org) is an online tool for accessing extensive local projections and analysis of sea level rise; coastal floods; and land, populations, contamination sources, and infrastructure and other assets that may be exposed to inundation. Risk Finder was first published in 2013 for Florida, New York and New Jersey, expanding to all states in the contiguous U.S. by 2016, when a major new version of the tool was released with a completely new interface. The revised tool was informed by hundreds of survey responses from and conversations with planners, local officials and other coastal stakeholders, plus consideration of modern best practices for responsive web design and user interfaces, and social science-based principles for science communication. Overarching design principles include simplicity and ease of navigation, leading to a landing page with Google-like sparsity and focus on search, and to an architecture based on search, so that each coastal zip code, city, county, state or other place type has its own webpage gathering all relevant analysis in modular, scrollable units. Millions of users have visited the Surging Seas suite of tools to date, and downloaded thousands of files, for stated purposes ranging from planning to business to education to personal decisions; and from institutions ranging from local to federal government agencies, to businesses, to NGOs, and to academia.

  1. Designing Effective Web Forms for Older Web Users

    Science.gov (United States)

    Li, Hui; Rau, Pei-Luen Patrick; Fujimura, Kaori; Gao, Qin; Wang, Lin

    2012-01-01

    This research aims to provide insight for web form design for older users. The effects of task complexity and information structure of web forms on older users' performance were examined. Forty-eight older participants with abundant computer and web experience were recruited. The results showed significant differences in task time and error rate…

  2. Web Mining and Social Networking

    DEFF Research Database (Denmark)

    Xu, Guandong; Zhang, Yanchun; Li, Lin

    mining, and the issue of how to incorporate web mining into web personalization and recommendation systems are also reviewed. Additionally, the volume explores web community mining and analysis to find the structural, organizational and temporal developments of web communities and reveal the societal...... sense of individuals or communities. The volume will benefit both academic and industry communities interested in the techniques and applications of web search, web data management, web mining and web knowledge discovery, as well as web community and social network analysis....

  3. Rare disease diagnosis as an information retrieval task

    DEFF Research Database (Denmark)

    Dragusin, Radu; Petcu, Paula; Lioma, Christina

    2011-01-01

    Increasingly more clinicians use web Information Retrieval (IR) systems to assist them in diagnosing difficult medical cases, for instance rare diseases that they may not be familiar with. However, web IR systems are not necessarily optimised for this task. For instance, clinicians’ queries tend...... to be long lists of symptoms, often containing phrases, whereas web IR systems typically expect very short keyword-based queries. Motivated by such differences, this work uses a preliminary study of 30 clinical cases to reflect on rare disease retrieval as an IR task. Initial experiments using both Google web...... search and offline retrieval from a rare disease collection indicate that the retrieval of rare diseases is an open problem with room for improvement....

  4. Rare disease diagnosis as an information retrieval task

    DEFF Research Database (Denmark)

    Dragusin, Radu; Petcu, Paula; Lioma, Christina

    2011-01-01

    Increasingly more clinicians use web Information Retrieval (IR) systems to assist them in diagnosing difficult medical cases, for instance rare diseases that they may not be familiar with. However, web IR systems are not necessarily optimised for this task. For instance, clinicians’ queries tend...... to be long lists of symptoms, often containing phrases, whereas web IR systems typically expect very short keyword-based queries. Motivated by such differences, this work uses a preliminary study of 30 clinical cases to reflect on rare disease retrieval as an IR task. Initial experiments using both Google...... web search and offline retrieval from a rare disease collection indicate that the retrieval of rare diseases is an open problem with room for improvement....

  5. CHANNELING: SEARCH OF THE METAANTHROPOLOGIC MEAS-UREMENTS OF MIND BEING IN THE UNIVERSE (based on the World Wide Web

    Directory of Open Access Journals (Sweden)

    Anatoly T. Tshedrin

    2014-06-01

    Full Text Available Relevance of the study. In the context of the religious and philosophical movements of the «New Age», the phenomenon of channeling has gained prominence: the «laying» or «transmission» of a channel of information from a consciousness that is not in human form to an individual or to humanity as a whole. In the socio-cultural environment of postmodernity, channeling reflects the problem of the search for extraterrestrial intelligence (ETI; the «ETC problem»; the SETI problem) and of establishing contact with it; this problem has several projections, including important philosophical and anthropological dimensions in culture. Investigating the mechanisms by which virtual superhuman personalities are constructed on the world web is of interest not only for further analysis of the problem of extraterrestrial intelligence, but also for extending the subject field of the anthropology of the Internet as an important area of philosophical and anthropological studies. Purpose of the study. To analyse the phenomenon of channeling as a projection of the fundamental problem of the existence of ETI, its representation on the World Wide Web, its impact on the archaism of postmodern culture, and the meta-anthropological dimensions of the existence of reason in the universe and of contact with it posited in the doctrinal grounds of channeling. Analysis of research on the problem and its empirical base. The clustered nature of the ETI problem, of which channeling is one element, involves the widespread use of the radio-astronomy paradigm of work on CETI, work in the anthropology of the Internet, and studies of the «New Age» phenomenon. The empirical basis of the study consists of network resources, as well as representative texts created and put into circulation by channelers and their predecessors. Research methodology. Channeling as an object of research, and its network representation as its subject matter, call for analytical hermeneutics, archaeographic commentary on texts, and fractal-logic cluster analysis. The main…

  6. L-Measure: a web-accessible tool for the analysis, comparison and search of digital reconstructions of neuronal morphologies.

    Science.gov (United States)

    Scorcioni, Ruggero; Polavaram, Sridevi; Ascoli, Giorgio A

    2008-01-01

    L-Measure (LM) is a freely available software tool for the quantitative characterization of neuronal morphology. LM computes a large number of neuroanatomical parameters from 3D digital reconstruction files starting from and combining a set of core metrics. After more than six years of development and use in the neuroscience community, LM enables the execution of commonly adopted analyses as well as of more advanced functions. This report illustrates several LM protocols: (i) extraction of basic morphological parameters, (ii) computation of frequency distributions, (iii) measurements from user-specified subregions of the neuronal arbors, (iv) statistical comparison between two groups of cells and (v) filtered selections and searches from collections of neurons based on any Boolean combination of the available morphometric measures. These functionalities are easily accessed and deployed through a user-friendly graphical interface and typically execute within a few minutes on a set of approximately 20 neurons. The tool is available at http://krasnow.gmu.edu/cn3 for either online use on any Java-enabled browser and platform or download for local execution under Windows and Linux.

  7. Search Engine for Antimicrobial Resistance: A Cloud Compatible Pipeline and Web Interface for Rapidly Detecting Antimicrobial Resistance Genes Directly from Sequence Data.

    Science.gov (United States)

    Rowe, Will; Baker, Kate S; Verner-Jeffreys, David; Baker-Austin, Craig; Ryan, Jim J; Maskell, Duncan; Pearce, Gareth

    2015-01-01

    Antimicrobial resistance remains a growing and significant concern in human and veterinary medicine. Current laboratory methods for the detection and surveillance of antimicrobial resistant bacteria are limited in their effectiveness and scope. With the rapidly developing field of whole genome sequencing beginning to be utilised in clinical practice, the ability to interrogate sequencing data quickly and easily for the presence of antimicrobial resistance genes will become increasingly important and useful for informing clinical decisions. Additionally, use of such tools will provide insight into the dynamics of antimicrobial resistance genes in metagenomic samples such as those used in environmental monitoring. Here we present the Search Engine for Antimicrobial Resistance (SEAR), a pipeline and web interface for detection of horizontally acquired antimicrobial resistance genes in raw sequencing data. The pipeline provides gene information, abundance estimation and the reconstructed sequence of antimicrobial resistance genes; it also provides web links to additional information on each gene. The pipeline utilises clustering and read mapping to annotate full-length genes relative to a user-defined database. It also uses local alignment of annotated genes to a range of online databases to provide additional information. We demonstrate SEAR's application in the detection and abundance estimation of antimicrobial resistance genes in two novel environmental metagenomes, 32 human faecal microbiome datasets and 126 clinical isolates of Shigella sonnei. We have developed a pipeline that contributes to the improved capacity for antimicrobial resistance detection afforded by next generation sequencing technologies, allowing for rapid detection of antimicrobial resistance genes directly from sequencing data. SEAR uses raw sequencing data via an intuitive interface so can be run rapidly without requiring advanced bioinformatic skills or resources. Finally, we show that SEAR
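
    Not the SEAR pipeline itself (which relies on clustering and read mapping against a user-defined database), but a toy Python sketch of the underlying idea: screening raw reads against resistance gene sequences by shared k-mers and counting hits as a crude abundance proxy. The gene names, sequences, reads and k value below are made up for illustration.

        # Simplified illustration of read-vs-resistance-gene screening (not SEAR itself).
        # The gene sequences and reads are toy placeholders.
        from collections import defaultdict

        K = 11  # k-mer length (illustrative choice)

        resistance_genes = {            # hypothetical mini-database
            "blaTEM-like": "ATGAGTATTCAACATTTCCGTGTCGCCCTTATTCC",
            "tetA-like":   "ATGAAACCCAACCCGCTGGTTACCGGTGGCGGCAT",
        }

        def kmers(seq, k=K):
            return {seq[i:i + k] for i in range(len(seq) - k + 1)}

        # Index: k-mer -> gene names containing it
        index = defaultdict(set)
        for gene, seq in resistance_genes.items():
            for km in kmers(seq):
                index[km].add(gene)

        def screen_reads(reads):
            """Count reads sharing at least one k-mer with each gene (crude abundance proxy)."""
            hits = defaultdict(int)
            for read in reads:
                matched = set()
                for km in kmers(read):
                    matched |= index.get(km, set())
                for gene in matched:
                    hits[gene] += 1
            return dict(hits)

        if __name__ == "__main__":
            toy_reads = ["TTCAACATTTCCGTGTCGCC", "CCAACCCGCTGGTTACCGGT", "GGGGGGGGGGGGGGGGGGGG"]
            print(screen_reads(toy_reads))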

  8. Search Engine for Antimicrobial Resistance: A Cloud Compatible Pipeline and Web Interface for Rapidly Detecting Antimicrobial Resistance Genes Directly from Sequence Data.

    Directory of Open Access Journals (Sweden)

    Will Rowe

    Full Text Available Antimicrobial resistance remains a growing and significant concern in human and veterinary medicine. Current laboratory methods for the detection and surveillance of antimicrobial resistant bacteria are limited in their effectiveness and scope. With the rapidly developing field of whole genome sequencing beginning to be utilised in clinical practice, the ability to interrogate sequencing data quickly and easily for the presence of antimicrobial resistance genes will become increasingly important and useful for informing clinical decisions. Additionally, use of such tools will provide insight into the dynamics of antimicrobial resistance genes in metagenomic samples such as those used in environmental monitoring. Here we present the Search Engine for Antimicrobial Resistance (SEAR), a pipeline and web interface for detection of horizontally acquired antimicrobial resistance genes in raw sequencing data. The pipeline provides gene information, abundance estimation and the reconstructed sequence of antimicrobial resistance genes; it also provides web links to additional information on each gene. The pipeline utilises clustering and read mapping to annotate full-length genes relative to a user-defined database. It also uses local alignment of annotated genes to a range of online databases to provide additional information. We demonstrate SEAR's application in the detection and abundance estimation of antimicrobial resistance genes in two novel environmental metagenomes, 32 human faecal microbiome datasets and 126 clinical isolates of Shigella sonnei. We have developed a pipeline that contributes to the improved capacity for antimicrobial resistance detection afforded by next generation sequencing technologies, allowing for rapid detection of antimicrobial resistance genes directly from sequencing data. SEAR uses raw sequencing data via an intuitive interface so can be run rapidly without requiring advanced bioinformatic skills or resources. Finally, we…

  9. GeoWeb Crawler: An Extensible and Scalable Web Crawling Framework for Discovering Geospatial Web Resources

    Directory of Open Access Journals (Sweden)

    Chih-Yuan Huang

    2016-08-01

    Full Text Available With the advance of World-Wide Web (WWW) technology, people can easily share content on the Web, including geospatial data and web services. Thus, “big geospatial data management” issues are starting to attract attention. Among these issues, this research focuses on discovering distributed geospatial resources. As resources are scattered across the WWW, users cannot find resources of interest efficiently. While the WWW has Web search engines addressing web resource discovery, we envision that the geospatial Web (i.e., the GeoWeb) also requires GeoWeb search engines. To realize a GeoWeb search engine, one of the first steps is to proactively discover GeoWeb resources on the WWW. Hence, in this study, we propose the GeoWeb Crawler, an extensible Web crawling framework that can find various types of GeoWeb resources, such as Open Geospatial Consortium (OGC) web services, Keyhole Markup Language (KML) files and Environmental Systems Research Institute, Inc. (ESRI) Shapefiles. In addition, we apply the distributed computing concept to improve the performance of the GeoWeb Crawler. The results show that for 10 targeted resource types, the GeoWeb Crawler discovered 7351 geospatial services and 194,003 datasets. The proposed GeoWeb Crawler framework is thus shown to be extensible and scalable enough to provide a comprehensive index of the GeoWeb.
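
    Not the GeoWeb Crawler itself, but a minimal Python sketch of the core idea: crawl outward from a seed page and flag links whose URLs look like OGC service requests or KML/Shapefile downloads. The seed URL, the URL patterns and the page limit are illustrative assumptions.

        # Minimal focused-crawl sketch: breadth-first fetch of pages, classifying links
        # that look like OGC services, KML or Shapefile resources.
        import re
        from collections import deque
        from urllib.parse import urljoin
        import requests  # third-party library; assumed available

        OGC_PATTERN = re.compile(r"service=(wms|wfs|wcs|sos)", re.IGNORECASE)
        FILE_PATTERN = re.compile(r"\.(kml|kmz|shp|zip)(\?|$)", re.IGNORECASE)
        HREF_PATTERN = re.compile(r"""href=["']([^"']+)["']""", re.IGNORECASE)

        def crawl(seed, max_pages=20):
            found, queue, seen = [], deque([seed]), {seed}
            while queue and len(seen) <= max_pages:
                url = queue.popleft()
                try:
                    html = requests.get(url, timeout=10).text
                except requests.RequestException:
                    continue
                for link in HREF_PATTERN.findall(html):
                    absolute = urljoin(url, link)
                    if OGC_PATTERN.search(absolute) or FILE_PATTERN.search(absolute):
                        found.append(absolute)          # candidate GeoWeb resource
                    elif absolute not in seen:
                        seen.add(absolute)
                        queue.append(absolute)
            return found

        if __name__ == "__main__":
            print(crawl("https://example.org/geoportal"))  # hypothetical seed URL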

  10. Iconicity influences how effectively minimally verbal children with autism and ability-matched typically developing children use pictures as symbols in a search task.

    Science.gov (United States)

    Hartley, Calum; Allen, Melissa L

    2015-07-01

    Previous word learning studies suggest that children with autism spectrum disorder may have difficulty understanding pictorial symbols. Here we investigate the ability of children with autism spectrum disorder and language-matched typically developing children to contextualize symbolic information communicated by pictures in a search task that did not involve word learning. Out of the participant's view, a small toy was concealed underneath one of four unique occluders that were individuated by familiar nameable objects or unfamiliar unnamable objects. Children were shown a picture of the hiding location and then searched for the toy. Over three sessions, children completed trials with color photographs, black-and-white line drawings, and abstract color pictures. The results reveal zero group differences; neither children with autism spectrum disorder nor typically developing children were influenced by occluder familiarity, and both groups' errorless retrieval rates were above-chance with all three picture types. However, both groups made significantly more errorless retrievals in the most-iconic photograph trials, and performance was universally predicted by receptive language. Therefore, our findings indicate that children with autism spectrum disorder and young typically developing children can contextualize pictures and use them to adaptively guide their behavior in real time and space. However, this ability is significantly influenced by receptive language development and pictorial iconicity. © The Author(s) 2014.

  11. Enhancing UCSF Chimera through web services.

    Science.gov (United States)

    Huang, Conrad C; Meng, Elaine C; Morris, John H; Pettersen, Eric F; Ferrin, Thomas E

    2014-07-01

    Integrating access to web services with desktop applications allows for an expanded set of application features, including performing computationally intensive tasks and convenient searches of databases. We describe how we have enhanced UCSF Chimera (http://www.rbvi.ucsf.edu/chimera/), a program for the interactive visualization and analysis of molecular structures and related data, through the addition of several web services (http://www.rbvi.ucsf.edu/chimera/docs/webservices.html). By streamlining access to web services, including the entire job submission, monitoring and retrieval process, Chimera makes it simpler for users to focus on their science projects rather than data manipulation. Chimera uses Opal, a toolkit for wrapping scientific applications as web services, to provide scalable and transparent access to several popular software packages. We illustrate Chimera's use of web services with an example workflow that interleaves use of these services with interactive manipulation of molecular sequences and structures, and we provide an example Python program to demonstrate how easily Opal-based web services can be accessed from within an application. Web server availability: http://webservices.rbvi.ucsf.edu/opal2/dashboard?command=serviceList. © The Author(s) 2014. Published by Oxford University Press on behalf of Nucleic Acids Research.
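
    The example Python program mentioned in the abstract is not reproduced here; instead, the sketch below shows the generic submit/poll/fetch pattern that wrapped web services of this kind typically follow over HTTP. The base URL, endpoint paths, field names and status values are hypothetical placeholders, not the documented Opal API.

        # Generic submit/poll/fetch pattern for a remote computation service, in the
        # spirit of the wrapped services used by Chimera. Endpoint paths and field
        # names are hypothetical placeholders.
        import time
        import requests  # third-party library; assumed available

        BASE = "https://webservices.example.org/opal2"   # placeholder service base URL

        def run_job(service_name, input_text, poll_seconds=5):
            # 1) submit the job (hypothetical endpoint and form field)
            job = requests.post(f"{BASE}/{service_name}/submit",
                                data={"input": input_text}, timeout=30).json()
            job_id = job["id"]

            # 2) poll until the service reports completion (hypothetical status values)
            while True:
                state = requests.get(f"{BASE}/{service_name}/status/{job_id}",
                                     timeout=30).json()["state"]
                if state in ("DONE", "FAILED"):
                    break
                time.sleep(poll_seconds)

            # 3) retrieve the output listing for the finished job
            return requests.get(f"{BASE}/{service_name}/outputs/{job_id}", timeout=30).json()

        if __name__ == "__main__":
            print(run_job("blast", ">query\nMKTAYIAKQR"))  # illustrative call only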

  12. WEB STRUCTURE MINING

    Directory of Open Access Journals (Sweden)

    CLAUDIA ELENA DINUCĂ

    2011-01-01

    Full Text Available The World Wide Web has become one of the most valuable resources for information retrieval and knowledge discovery due to the permanent increase in the amount of data available online. Given the Web's dimensions, users easily get lost in its rich hyper-structure. Applying data mining methods is the right solution for knowledge discovery on the Web. The knowledge extracted from the Web can be used to improve the performance of Web information retrieval, question answering and Web-based data warehousing. In this paper, I provide an introduction to the categories of Web mining and focus on one of them: Web structure mining. Web structure mining, one of the three categories of Web mining, is a tool used to identify the relationships between Web pages linked by information or by direct link connections. It offers information about how different pages are linked together to form this huge web. Web structure mining finds hidden basic structures and uses hyperlinks for further Web applications such as Web search.
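
    As a concrete illustration of the hyperlink analysis that web structure mining builds on, the following Python sketch runs a basic PageRank-style iteration over a toy link graph; the graph, damping factor and iteration count are made-up values.

        # Toy hyperlink graph and a basic PageRank iteration, illustrating link-based
        # structure mining (graph and parameters are invented for the example).
        links = {
            "A": ["B", "C"],
            "B": ["C"],
            "C": ["A"],
            "D": ["C"],
        }

        def pagerank(graph, damping=0.85, iterations=50):
            pages = list(graph)
            rank = {p: 1.0 / len(pages) for p in pages}
            for _ in range(iterations):
                new_rank = {p: (1.0 - damping) / len(pages) for p in pages}
                for page, outlinks in graph.items():
                    if not outlinks:            # dangling page: spread its rank evenly
                        share = damping * rank[page] / len(pages)
                        for p in pages:
                            new_rank[p] += share
                    else:
                        share = damping * rank[page] / len(outlinks)
                        for target in outlinks:
                            new_rank[target] += share
                rank = new_rank
            return rank

        if __name__ == "__main__":
            for page, score in sorted(pagerank(links).items(), key=lambda x: -x[1]):
                print(page, round(score, 3))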

  13. Exploring the academic invisible web

    OpenAIRE

    Lewandowski, Dirk; Mayr, Philipp

    2007-01-01

    Purpose: To provide a critical review of Bergman’s 2001 study on the Deep Web. In addition, we bring a new concept into the discussion, the Academic Invisible Web (AIW). We define the Academic Invisible Web as consisting of all databases and collections relevant to academia but not searchable by the general-purpose internet search engines. Indexing this part of the Invisible Web is central to scientific search engines. We provide an overview of approaches followed thus far. Design/methodol...

  14. Natural Language Search Interfaces: Health Data Needs Single-Field Variable Search

    Science.gov (United States)

    Smith, Sam; Sufi, Shoaib; Goble, Carole; Buchan, Iain

    2016-01-01

    Background Data discovery, particularly the discovery of key variables and their inter-relationships, is key to secondary data analysis, and in turn, the evolving field of data science. Interface designers have presumed that their users are domain experts, and so they have provided complex interfaces to support these “experts.” Such interfaces hark back to a time when searches needed to be accurate the first time, as there was a high computational cost associated with each search. Our work is part of a governmental research initiative between the medical and social research funding bodies to improve the use of social data in medical research. Objective The cross-disciplinary nature of data science means no assumptions can be made regarding the domain expertise of a particular scientist, whose interests may intersect multiple domains. Here we consider the common requirement for scientists to seek archived data for secondary analysis. This has more in common with the search needs of the “Google generation” than with their single-domain, single-tool forebears. Our study compares a Google-like interface with traditional ways of searching for noncomplex health data in a data archive. Methods Two user interfaces are evaluated for the same set of tasks in extracting data from surveys stored in the UK Data Archive (UKDA). One interface, Web search, is “Google-like,” enabling users to browse, search for, and view metadata about study variables, whereas the other, traditional search, has a standard multi-option user interface. Results Using a comprehensive set of tasks with 20 volunteers, we found that the Web search interface met data discovery needs and expectations better than the traditional search. A task × interface repeated measures analysis showed a main effect indicating that answers found through the Web search interface were more likely to be correct (F1,19=37.3), a second main effect favouring the Web search interface on task performance (F1,19=18.0), and that the Web search interface received significantly higher ratings than the traditional…

  15. MuZeeker - Adapting a music search engine for mobile phones

    DEFF Research Database (Denmark)

    Larsen, Jakob Eg; Halling, Søren Christian; Sigurdsson, Magnus Kristinn

    2010-01-01

    We describe MuZeeker, a search engine with domain knowledge based on Wikipedia. MuZeeker enables the user to refine a search in multiple steps by means of category selection. In the present version we focus on multimedia search related to music and we present two prototype search applications (web-based and mobile) and discuss the issues involved in adapting the search engine for mobile phones. A category based filtering approach enables the user to refine a search through relevance feedback by category selection instead of typing additional text, which is hypothesized to be an advantage in the mobile MuZeeker application. We report from two usability experiments using the think aloud protocol, in which N=20 participants performed tasks using MuZeeker and a customized Google search engine. In both experiments web-based and mobile user interfaces were used. The experiment shows that participants are capable...

  16. Risky Behavior in Gambling Tasks in Individuals with ADHD : A Systematic Literature Review

    NARCIS (Netherlands)

    Groen, Yvonne; Gaastra, Geraldina; Lewis-Evans, Ben; Tucha, Oliver

    2013-01-01

    Objective: The aim of this review was to gain insight into the relationship between Attention deficit hyperactivity disorder (ADHD) and risky performance in gambling tasks and to identify any potential alternate explanatory factors. Methods: PsycINFO, PubMed, and Web of Knowledge were searched for…

  17. An Empirical Comparison of Visualization Tools To Assist Information Retrieval on the Web.

    Science.gov (United States)

    Heo, Misook; Hirtle, Stephen C.

    2001-01-01

    Discusses problems with navigation in hypertext systems, including cognitive overload, and describes a study that tested information visualization techniques to see which best represented the underlying structure of Web space. Considers the effects of visualization techniques on user performance on information searching tasks and the effects of…

  18. Towards Distributed Information Retrieval in the Semantic Web: Query Reformulation Using the oMAP Framework

    NARCIS (Netherlands)

    U. Straccia; R. Troncy (Raphael)

    2006-01-01

    This paper introduces a general methodology for performing distributed search in the Semantic Web. We propose to define this task as a three-step process, namely resource selection, query reformulation/ontology alignment and rank aggregation/data fusion. For the second problem, we have…

  19. FindZebra: a search engine for rare diseases.

    Science.gov (United States)

    Dragusin, Radu; Petcu, Paula; Lioma, Christina; Larsen, Birger; Jørgensen, Henrik L; Cox, Ingemar J; Hansen, Lars Kai; Ingwersen, Peter; Winther, Ole

    2013-06-01

    The web has become a primary information resource about illnesses and treatments for both medical and non-medical users. Standard web search is by far the most common interface to this information. It is therefore of interest to find out how well web search engines work for diagnostic queries and what factors contribute to successes and failures. Among diseases, rare (or orphan) diseases represent an especially challenging and thus interesting class to diagnose as each is rare, diverse in symptoms and usually has scattered resources associated with it. We design an evaluation approach for web search engines for rare disease diagnosis which includes 56 real life diagnostic cases, performance measures, information resources and guidelines for customising Google Search to this task. In addition, we introduce FindZebra, a specialized (vertical) rare disease search engine. FindZebra is powered by open source search technology and uses curated freely available online medical information. FindZebra outperforms Google Search in both default set-up and customised to the resources used by FindZebra. We extend FindZebra with specialized functionalities exploiting medical ontological information and UMLS medical concepts to demonstrate different ways of displaying the retrieved results to medical experts. Our results indicate that a specialized search engine can improve the diagnostic quality without compromising the ease of use of the currently widely popular standard web search. The proposed evaluation approach can be valuable for future development and benchmarking. The FindZebra search engine is available at http://www.findzebra.com/. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  20. APLIKASI WEB CRAWLER UNTUK WEB CONTENT PADA MOBILE PHONE

    Directory of Open Access Journals (Sweden)

    Sarwosri Sarwosri

    2009-01-01

    Full Text Available Crawling is the process behind a search engine that traverses the World Wide Web in a structured way and according to certain ethics. An application that runs the crawling process is called a Web Crawler, also known as a web spider or web robot. The growth of mobile search service providers has been followed by the growth of web crawlers that can browse web pages with mobile content. The Web Crawler application described here can be accessed by mobile devices, and only web pages of the Mobile Content type are explored; the Web Crawler's duty is to collect a number of Mobile Content pages. A mobile application functions as a search application that uses the results from the Web Crawler. The Web Crawler server consists of a Servlet, a Mobile Content Filter and a datastore. The Servlet is the gateway connecting the client with the server. The datastore is the storage medium for crawling results. The Mobile Content Filter selects web pages, so that only pages appropriate for mobile devices, or with mobile content, are forwarded.
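
    A minimal Python sketch of what a "Mobile Content Filter" of this kind might check; the markers and content types used below are assumptions for illustration, not the rules of the application described above.

        # Illustrative heuristic for a mobile content filter: decide whether a fetched
        # page looks like mobile content from its content type or markup.
        MOBILE_MARKERS = (
            "//wapforum//dtd wml",          # WML doctype
            "xhtml-mobile",                 # XHTML Mobile Profile doctype
            'name="viewport"',              # mobile-aware / responsive pages
        )
        MOBILE_CONTENT_TYPES = ("text/vnd.wap.wml", "application/vnd.wap.xhtml+xml")

        def is_mobile_content(content_type, html):
            if content_type.split(";")[0].strip().lower() in MOBILE_CONTENT_TYPES:
                return True
            head = html[:4096].lower()      # doctype and meta tags live near the top
            return any(marker in head for marker in MOBILE_MARKERS)

        if __name__ == "__main__":
            sample = '<!DOCTYPE html><html><head><meta name="viewport" content="width=device-width"></head></html>'
            print(is_mobile_content("text/html; charset=utf-8", sample))  # True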

  1. A Caching Mechanism for Semantic Web Service Discovery

    Science.gov (United States)

    Stollberg, Michael; Hepp, Martin; Hoffmann, Jörg

    The discovery of suitable Web services for a given task is one of the central operations in Service-oriented Architectures (SOA), and research on Semantic Web services (SWS) aims at automating this step. For the large amount of available Web services that can be expected in real-world settings, the computational costs of automated discovery based on semantic matchmaking become important. To make a discovery engine a reliable software component, we must thus aim at minimizing both the mean and the variance of the duration of the discovery task. For this, we present an extension for discovery engines in SWS environments that exploits structural knowledge and previous discovery results for reducing the search space of consequent discovery operations. Our prototype implementation shows significant improvements when applied to the Stanford SWS Challenge scenario and dataset.
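
    A highly simplified Python sketch of the caching idea (not the authors' discovery engine): previously solved goals and their matches are cached, and a new goal that refines a cached goal is matched only against that cached candidate set, shrinking the search space. The registry, goals and keyword-set matching are illustrative assumptions.

        # Simplified illustration of caching for service discovery: reuse the matches
        # of earlier, more general goals to prune matchmaking for refined goals.
        SERVICES = {                      # hypothetical registry: service -> offered capabilities
            "BookFlight":  {"travel", "flight", "payment"},
            "BookHotel":   {"travel", "hotel", "payment"},
            "WeatherInfo": {"weather"},
        }

        _cache = {}                       # frozenset(goal) -> set of matching service names

        def discover(goal):
            goal = frozenset(goal)
            if goal in _cache:
                return _cache[goal]
            # Search space reduction: reuse results of any cached goal this one refines.
            candidates = set(SERVICES)
            for cached_goal, matches in _cache.items():
                if cached_goal <= goal:   # new goal is more specific than a solved one
                    candidates &= matches
            result = {name for name in candidates if goal <= SERVICES[name]}
            _cache[goal] = result
            return result

        if __name__ == "__main__":
            print(discover({"travel"}))              # full matchmaking over the registry
            print(discover({"travel", "flight"}))    # rechecks only the cached matches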

  2. Natural Language Search Interfaces: Health Data Needs Single-Field Variable Search.

    Science.gov (United States)

    Jay, Caroline; Harper, Simon; Dunlop, Ian; Smith, Sam; Sufi, Shoaib; Goble, Carole; Buchan, Iain

    2016-01-14

    Data discovery, particularly the discovery of key variables and their inter-relationships, is key to secondary data analysis, and, in turn, the evolving field of data science. Interface designers have presumed that their users are domain experts, and so they have provided complex interfaces to support these "experts." Such interfaces hark back to a time when searches needed to be accurate the first time, as there was a high computational cost associated with each search. Our work is part of a governmental research initiative between the medical and social research funding bodies to improve the use of social data in medical research. The cross-disciplinary nature of data science means no assumptions can be made regarding the domain expertise of a particular scientist, whose interests may intersect multiple domains. Here we consider the common requirement for scientists to seek archived data for secondary analysis. This has more in common with the search needs of the "Google generation" than with their single-domain, single-tool forebears. Our study compares a Google-like interface with traditional ways of searching for noncomplex health data in a data archive. Two user interfaces are evaluated for the same set of tasks in extracting data from surveys stored in the UK Data Archive (UKDA). One interface, Web search, is "Google-like," enabling users to browse, search for, and view metadata about study variables, whereas the other, traditional search, has a standard multi-option user interface. Using a comprehensive set of tasks with 20 volunteers, we found that the Web search interface met data discovery needs and expectations better than the traditional search. A task × interface repeated measures analysis showed a main effect indicating that answers found through the Web search interface were more likely to be correct (F1,19=37.3), and users experienced serendipity as part of the refinement. The results provide clear evidence that data science should adopt single-field natural language search interfaces for…

  3. Search Engine Optimization

    CERN Document Server

    Davis, Harold

    2006-01-01

    SEO--short for Search Engine Optimization--is the art, craft, and science of driving web traffic to web sites. Web traffic is food, drink, and oxygen--in short, life itself--to any web-based business. Whether your web site depends on broad, general traffic, or high-quality, targeted traffic, this PDF has the tools and information you need to draw more traffic to your site. You'll learn how to effectively use PageRank (and Google itself); how to get listed, get links, and get syndicated; and much more. The field of SEO is expanding into all the possible ways of promoting web traffic. This…

  4. On Building a Search Interface Discovery System

    Science.gov (United States)

    Shestakov, Denis

    A huge portion of the Web known as the deep Web is accessible via search interfaces to myriads of databases on the Web. While relatively good approaches for querying the contents of web databases have been recently proposed, one cannot fully utilize them having most search interfaces unlocated. Thus, the automatic recognition of search interfaces to online databases is crucial for any application accessing the deep Web. This paper describes the architecture of the I-Crawler, a system for finding and classifying search interfaces. The I-Crawler is intentionally designed to be used in the deep web characterization surveys and for constructing directories of deep web resources.
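
    As an illustration of what recognizing search interfaces can involve (not the I-Crawler's actual classifier), the following Python sketch scans HTML for forms that contain a text input and only a few fields, flagging them as candidate search interfaces; the heuristic thresholds are assumptions.

        # Crude heuristic for spotting candidate search interfaces in HTML:
        # a <form> with a text input and few fields overall.
        from html.parser import HTMLParser

        class FormScanner(HTMLParser):
            def __init__(self):
                super().__init__()
                self.forms = []           # list of dicts describing each form
                self.current = None

            def handle_starttag(self, tag, attrs):
                attrs = dict(attrs)
                if tag == "form":
                    self.current = {"action": attrs.get("action", ""), "text_inputs": 0, "inputs": 0}
                elif tag == "input" and self.current is not None:
                    self.current["inputs"] += 1
                    if attrs.get("type", "text").lower() in ("text", "search"):
                        self.current["text_inputs"] += 1

            def handle_endtag(self, tag):
                if tag == "form" and self.current is not None:
                    self.forms.append(self.current)
                    self.current = None

        def candidate_search_forms(html):
            scanner = FormScanner()
            scanner.feed(html)
            return [f for f in scanner.forms if f["text_inputs"] >= 1 and f["inputs"] <= 3]

        if __name__ == "__main__":
            page = '<form action="/search"><input type="text" name="q"><input type="submit"></form>'
            print(candidate_search_forms(page))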

  5. User Behavior Analysis from Web Log using Log Analyzer Tool

    OpenAIRE

    Brijesh Bakariya; Ghanshyam Singh Thakur

    2013-01-01

    Nowadays, the Internet plays the role of a huge database in which many websites, much information, and many search engines are available. But because the data in web pages is unstructured or semi-structured, extracting relevant information has become a challenging task. The main reason is that traditional knowledge-based techniques cannot use this knowledge efficiently, as it consists of many discovered patterns and contains a lot of noise and uncertainty. In this paper, the analysis of web usage mining…
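
    As a small illustration of web usage mining from server logs, the Python sketch below parses Common Log Format lines and counts the most requested paths; the sample lines and the success-only filter are illustrative assumptions, not the method of the paper above.

        # Parse Common Log Format lines and count the most requested paths.
        import re
        from collections import Counter

        LOG_PATTERN = re.compile(
            r'(?P<host>\S+) \S+ \S+ \[(?P<time>[^\]]+)\] "(?P<method>\S+) (?P<path>\S+) [^"]*" (?P<status>\d{3}) \S+'
        )

        def top_pages(lines, n=3):
            counts = Counter()
            for line in lines:
                match = LOG_PATTERN.match(line)
                if match and match.group("status").startswith("2"):   # successful requests only
                    counts[match.group("path")] += 1
            return counts.most_common(n)

        if __name__ == "__main__":
            sample = [
                '10.0.0.1 - - [01/Jan/2013:10:00:00 +0000] "GET /index.html HTTP/1.1" 200 512',
                '10.0.0.2 - - [01/Jan/2013:10:00:05 +0000] "GET /search?q=web HTTP/1.1" 200 1024',
                '10.0.0.1 - - [01/Jan/2013:10:00:09 +0000] "GET /missing HTTP/1.1" 404 128',
            ]
            print(top_pages(sample))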

  6. Changes in College Students' Perceptions of Use of Web-Based Resources for Academic Tasks with Wikipedia Projects: A Preliminary Exploration

    Science.gov (United States)

    Traphagan, Tomoko; Traphagan, John; Dickens, Linda Neavel; Resta, Paul

    2014-01-01

    Motivated by the need to facilitate Net Generation students' information literacy (IL), or more specifically, to promote student understanding of legitimate, effective use of Web-based resources, this exploratory study investigated how analyzing, writing, posting, and monitoring Wikipedia entries might help students develop critical…

  7. Development and tuning of an original search engine for patent libraries in medicinal chemistry.

    Science.gov (United States)

    Pasche, Emilie; Gobeill, Julien; Kreim, Olivier; Oezdemir-Zaech, Fatma; Vachon, Therese; Lovis, Christian; Ruch, Patrick

    2014-01-01

    The large increase in the size of patent collections has led to the need for efficient search strategies, but the development of advanced text-mining applications dedicated to patents in the biomedical field remains rare, in particular applications addressing the needs of the pharmaceutical & biotech industry, which uses patent libraries intensively for competitive intelligence and drug development. We describe here the development of an advanced retrieval engine to search information in patent collections in the field of medicinal chemistry. We investigate and combine different strategies and evaluate their respective impact on the performance of the search engine applied to various search tasks, which cover the putatively most frequent search behaviours of intellectual property officers in medicinal chemistry: 1) a prior art search task; 2) a technical survey task; and 3) a variant of the technical survey task, sometimes called a known-item search task, where a single patent is targeted. The optimal tuning of our engine resulted in a top-precision of 6.76% for the prior art search task, 23.28% for the technical survey task and 46.02% for the variant of the technical survey task. We observed that co-citation boosting was an appropriate strategy to improve prior art search tasks, while IPC classification of queries improved retrieval effectiveness for technical survey tasks. Surprisingly, the use of the full body of the patent was always detrimental to search effectiveness. It was also observed that normalizing biomedical entities using curated dictionaries had simply no impact on the search tasks we evaluated. The search engine was finally implemented as a web application within Novartis Pharma; the application is briefly described in the report. We have presented the development of a search engine dedicated to patent search, based on state-of-the-art methods applied to patent corpora, and have shown that a proper tuning of the system to adapt to the various search tasks…
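
    The abstract reports that co-citation boosting helped prior art search but does not give the formula; below is a minimal Python sketch of one plausible way to fold a co-citation signal into a base relevance score, with the logarithmic damping and the weight being purely illustrative assumptions.

        # Illustrative re-ranking with a co-citation boost (the weighting scheme is an
        # assumption; the abstract above does not specify the formula used).
        import math

        def boosted_score(base_score, cocitations, weight=0.2):
            """Multiply a base retrieval score by a damped co-citation factor."""
            return base_score * (1.0 + weight * math.log1p(cocitations))

        if __name__ == "__main__":
            # Two hypothetical patents: a slightly weaker text match that is heavily
            # co-cited with the query's citations can overtake a stronger text match.
            print(boosted_score(10.0, 0))    # 10.0
            print(boosted_score(9.5, 25))    # ~15.7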

  8. Advanced Visualization and Interactive Display Rapid Innovation and Discovery Evaluation Research (VISRIDER) Program Task 6: Point Cloud Visualization Techniques for Desktop and Web Platforms

    Science.gov (United States)

    2017-04-01

    Technical report documentation page only (no abstract text recovered); reporting period October 2013 – September 2014.

  9. Intelligent Agent Based Semantic Web in Cloud Computing Environment

    OpenAIRE

    Mukhopadhyay, Debajyoti; Sharma, Manoj; Joshi, Gajanan; Pagare, Trupti; Palwe, Adarsha

    2013-01-01

    Considering today's web scenario, there is a need for effective and meaningful search over the web, which is provided by the Semantic Web. Existing search engines are keyword based and are weak at answering intelligent queries from the user because their results depend on the information available in web pages. Semantic search engines, by contrast, provide efficient and relevant results, as the Semantic Web is an extension of the current web in which information is given well-defined meaning…

  10. Web-based collaboration tools.

    Science.gov (United States)

    Wink, Diane M

    2009-01-01

    In this bimonthly series, the author examines how nurse educators can use Internet and Web-based computer technologies such as search, communication, and collaborative writing tools; social networking and social bookmarking sites; virtual worlds; and Web-based teaching and learning programs. This article describes Web-based collaboration tools and techniques to increase their effectiveness.

  11. Digging Deeper: The Deep Web.

    Science.gov (United States)

    Turner, Laura

    2001-01-01

    Focuses on the Deep Web, defined as Web content in searchable databases of the type that can be found only by direct query. Discusses the problems of indexing; inability to find information not indexed in the search engine's database; and metasearch engines. Describes 10 sites created to access online databases or directly search them. Lists ways…

  12. Spelling and the Web

    Science.gov (United States)

    Varnhagen, Connie K.; McFall, G. Peggy; Figueredo, Lauren; Takach, Bonnie Sadler; Daniels, Jason; Cuthbertson, Heather

    2008-01-01

    Correct spelling is increasingly important in our technological world. We examined children's and adults' Web search behavior for easy and more difficult to spell target keywords. Grade 4 children and university students searched for the life cycle of the lemming (easy to spell target keyword) or the ptarmigan (difficult to spell target keyword).…

  13. Semantic Web Evaluation Challenge

    CERN Document Server

    2014-01-01

    This book constitutes the thoroughly refereed post conference proceedings of the first edition of the Semantic Web Evaluation Challenge, SemWebEval 2014, co-located with the 11th Extended Semantic Web conference, held in Anissaras, Crete, Greece, in May 2014. This book includes the descriptions of all methods and tools that competed at SemWebEval 2014, together with a detailed description of the tasks, evaluation procedures and datasets. The contributions are grouped in three areas: semantic publishing (sempub), concept-level sentiment analysis (ssa), and linked-data enabled recommender systems (recsys).

  14. Usability Evaluation of Public Web Mapping Sites

    Science.gov (United States)

    Wang, C.

    2014-04-01

    Web mapping sites are interactive maps accessed via web pages. With the rapid development of the Internet and of the Geographic Information System (GIS) field, public web mapping sites are familiar to many people, who use them for various reasons now that more and more maps and related map services are freely available to end users. The growing number of users of web mapping sites has in turn led to more usability studies. Usability Engineering (UE), for instance, is an approach for analyzing and improving the usability of websites by examining and evaluating an interface. In this research, the UE method was employed to explore usability problems of four public web mapping sites, analyze the problems quantitatively and provide guidelines for future design based on the test results. First, the development of usability studies is described, and several usability evaluation approaches such as Usability Engineering (UE), User-Centered Design (UCD) and Human-Computer Interaction (HCI) are briefly introduced. Then the method and procedure of the usability test are presented in detail. In this usability evaluation experiment, four public web mapping sites (Google Maps, Bing Maps, MapQuest, Yahoo Maps) were chosen as the test websites, and 42 people with different GIS skills (test users or experts), gender (male or female), age and nationality participated, completing several test tasks in different teams. The test comprised three parts: a pretest background information questionnaire, several test tasks for quantitative statistics and progress analysis, and a posttest questionnaire. The pretest and posttest questionnaires focused on gaining verbal explanations of the participants' actions qualitatively, while the test tasks were designed to gather quantitative data on the errors and problems of the websites. The results, mainly from the test part, were then analyzed…

  15. A Quantum Query Expansion Approach for Session Search

    Directory of Open Access Journals (Sweden)

    Peng Zhang

    2016-04-01

    Full Text Available Recently, Quantum Theory (QT) has been employed to advance the theory of Information Retrieval (IR). Various analogies between QT and IR have been established. Among them, a typical one is applying the idea of photon polarization in IR tasks, e.g., for document ranking and query expansion. In this paper, we aim to further extend this work by constructing a new superposed state of each document in the information need space, based on which we can incorporate the quantum interference idea in query expansion. We then apply the new quantum query expansion model to session search, which is a typical Web search task. Empirical evaluation on the large-scale Clueweb12 dataset has shown that the proposed model is effective in the session search tasks, demonstrating the potential of developing novel and effective IR models based on intuitions and formalisms of QT.
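
    The abstract does not give the model's equations; as a generic illustration of the photon-polarization analogy it draws on, a document can be written as a superposition over an information-need basis, so that combining two expansion sources yields an interference term absent from a classical mixture. The notation below is illustrative, not the paper's formalism.

        % Generic superposition and interference illustration (not the paper's exact model)
        \[
          |d\rangle = \alpha\,|q\rangle + \beta\,|q^{\perp}\rangle ,
          \qquad |\alpha|^{2} + |\beta|^{2} = 1 ,
        \]
        \[
          P_{12} = P_{1} + P_{2} + 2\sqrt{P_{1}P_{2}}\,\cos\theta ,
        \]
        % where P1 and P2 are the relevance probabilities contributed by two expansion
        % sources taken separately, and the cosine term is the quantum interference
        % contribution that a classical mixture would not produce.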

  16. Searching with Kids.

    Science.gov (United States)

    Valenza, Joyce Kasman

    2000-01-01

    Discusses student search tools for the World Wide Web and describes and compares seven that are appropriate for pre-kindergarten through secondary school. Highlights include access; age appropriateness; directories versus full search engines; filters; limits of some tools; and adult-oriented search tools that can be limited for students' access.…

  17. An Online Game Approach for Improving Students' Learning Performance in Web-Based Problem-Solving Activities

    Science.gov (United States)

    Hwang, Gwo-Jen; Wu, Po-Han; Chen, Chi-Chang

    2012-01-01

    In this paper, an online game was developed in the form of a competitive board game for conducting web-based problem-solving activities. The participants of the game determined their move by throwing a dice. Each location of the game board corresponds to a gaming task, which could be a web-based information-searching question or a mini-game; the…

  18. Working with Data: Discovering Knowledge through Mining and Analysis; Systematic Knowledge Management and Knowledge Discovery; Text Mining; Methodological Approach in Discovering User Search Patterns through Web Log Analysis; Knowledge Discovery in Databases Using Formal Concept Analysis; Knowledge Discovery with a Little Perspective.

    Science.gov (United States)

    Qin, Jian; Jurisica, Igor; Liddy, Elizabeth D.; Jansen, Bernard J; Spink, Amanda; Priss, Uta; Norton, Melanie J.

    2000-01-01

    These six articles discuss knowledge discovery in databases (KDD). Topics include data mining; knowledge management systems; applications of knowledge discovery; text and Web mining; text mining and information retrieval; user search patterns through Web log analysis; concept analysis; data collection; and data structure inconsistency. (LRW)

  19. Millennial Generation Students Search the Web Erratically, with Minimal Evaluation of Information Quality. A Review of: Taylor, A. (2012). A study of the information search behaviour of the millennial generation. Information Research, 17(1), paper 508. Retrieved from http://informationr.net/ir/17-1/paper508.html

    Directory of Open Access Journals (Sweden)

    Dominique Daniel

    2013-03-01

    Full Text Available Objective – To identify how millennial generation students proceed through the information search process and select resources on the web; to determine whether students evaluate the quality of web resources and how they use general information websites. Design – Longitudinal study. Setting – University in the United States. Subjects – 80 undergraduate students of the millennial generation enrolled in a business course. Methods – The students were required to complete a research report with a bibliography in five weeks. They also had to turn in interim assignments during that period (including an abstract, an outline, and a rough draft). Their search behaviour was monitored using a modified Yahoo search engine that allowed subjects to search, and then to fill out surveys integrated directly below their search results. The students were asked to indicate the relevance of the resources they found on the open web, to identify the criteria they used to evaluate relevance, and to specify the stage they were at in the search process. They could choose from five stages defined by the author, based on Wilson (1999): initiation, exploration, differentiation, extracting, and verifying. Data were collected using anonymous user IDs and included URLs for sources selected along with subject answers until completion of all assignments. The students provided 758 distinct web page evaluations. Main Results – Students did not progress in an orderly fashion through the search process, but rather proceeded erratically. A substantial number reported being in fewer than four of the five search stages. Only a small percentage ever declared being in the final stage of verifying previously gathered information, and during preparation of the final report a majority still declared being in the extracting stage. In fact, participants selected documents (extracting stage) throughout the process. In addition, students were not much concerned with the quality, validity, or…

  20. search GenBank: interactive orchestration and ad-hoc choreography of Web services in the exploration of the biomedical resources of the National Center For Biotechnology Information

    National Research Council Canada - National Science Library

    Mrozek, Dariusz; Małysiak-Mrozek, Bożena; Siążnik, Artur

    2013-01-01

    ..., its maintenance and administration. Web services allow development of software applications that integrate in one place the functionality and processing logic of distributed software components, without integrating the components...

  1. Citation Analysis using the Medline Database at the Web of Knowledge: Searching "Times Cited" with Medical Subject Headings (MeSH)

    CERN Document Server

    Leydesdorff, Loet

    2012-01-01

    Citation analysis of documents retrieved from the Medline database (at the Web of Knowledge) has been possible only on a case-by-case basis. A technique is here developed for citation analysis in batch mode using both Medical Subject Headings (MeSH) at the Web of Knowledge and the Science Citation Index at the Web of Science. This freeware routine is applied to the case of "Brugada Syndrome," a specific disease and field of research (since 1992). The journals containing these publications are attributed to Web-of-Science Categories (other than "Cardiac and Cardiovascular Systems"), perhaps because of the possibility of genetic testing for this syndrome in the clinic. With this routine, all the instruments available for citation analysis can be used on the basis of MeSH terms.

  2. Experience of Developing a Meta-Semantic Search Engine

    OpenAIRE

    Mukhopadhyay, Debajyoti; Sharma, Manoj; Joshi, Gajanan; Pagare, Trupti; Palwe, Adarsha

    2013-01-01

    Today's web search scenario, which is mainly keyword based, leads to the need for the effective and meaningful search provided by the Semantic Web. Existing search engines struggle to provide relevant answers to users' queries because they depend on the simple data available in web pages. Semantic search engines, on the other hand, provide efficient and relevant results, as the Semantic Web manages information with well-defined meaning using ontologies. A Meta-Search engine is a search tool that ...

  3. Discovering Land Cover Web Map Services from the Deep Web with JavaScript Invocation Rules

    Directory of Open Access Journals (Sweden)

    Dongyang Hou

    2016-06-01

    Full Text Available Automatic discovery of isolated land cover web map services (LCWMSs) can potentially help in sharing land cover data. Currently, various search engine-based and crawler-based approaches have been developed for finding services dispersed throughout the surface web. In fact, with the prevalence of geospatial web applications, a considerable number of LCWMSs are hidden in JavaScript code, which belongs to the deep web. However, discovering LCWMSs from JavaScript code remains an open challenge. This paper aims to solve this challenge by proposing a focused deep web crawler for finding more LCWMSs from deep web JavaScript code and the surface web. First, the names of a group of JavaScript links are abstracted as initial judgements. Through name matching, these judgements are utilized to judge whether or not the fetched webpages contain predefined JavaScript links that may prompt JavaScript code to invoke WMSs. Secondly, some JavaScript invocation functions and URL formats for WMS are summarized as JavaScript invocation rules from prior knowledge of how WMSs are employed and coded in JavaScript. These invocation rules are used to identify the JavaScript code for extracting candidate WMSs through rule matching. The above two operations are incorporated into a traditional focused crawling strategy situated between the tasks of fetching webpages and parsing webpages. Thirdly, LCWMSs are selected by matching services with a set of land cover keywords. Moreover, a search engine for LCWMSs is implemented that uses the focused deep web crawler to retrieve and integrate the LCWMSs it discovers. In the first experiment, eight online geospatial web applications serve as seed URLs (Uniform Resource Locators) and crawling scopes; the proposed crawler addresses only the JavaScript code in these eight applications. All 32 available WMSs hidden in JavaScript code were found using the proposed crawler, while not one WMS was discovered through the focused crawler…
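
    A minimal Python sketch in the spirit of the "JavaScript invocation rules" described above: scan JavaScript source for string literals that look like WMS endpoints or GetCapabilities requests. The regular expression and the sample snippet are assumptions for illustration, not the rules used in the paper.

        # Scan JavaScript source for string literals that look like WMS endpoints.
        import re

        WMS_URL = re.compile(
            r"""
            ["']                                            # opening quote of a JS string literal
            (https?://[^"']*?                               # any http(s) URL ...
             (?:service=wms|request=getcapabilities|/wms)   # ... that mentions WMS
             [^"']*)                                        # rest of the URL
            ["']                                            # closing quote
            """,
            re.IGNORECASE | re.VERBOSE,
        )

        def extract_wms_candidates(js_code):
            return sorted({m.group(1) for m in WMS_URL.finditer(js_code)})

        if __name__ == "__main__":
            sample_js = """
                var layer = new ol.layer.Tile({source: new ol.source.TileWMS({
                    url: 'http://maps.example.org/geoserver/wms',
                    params: {'LAYERS': 'landcover'}})});
                fetch("http://data.example.org/ows?service=WMS&request=GetCapabilities");
            """
            for url in extract_wms_candidates(sample_js):
                print(url)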

  4. EuroGOV: Engineering a Multilingual Web Corpus

    NARCIS (Netherlands)

    Sigurbjörnsson, B.; Kamps, J.; de Rijke, M.

    2005-01-01

    EuroGOV is a multilingual web corpus that was created to serve as the document collection for WebCLEF, the CLEF 2005 web retrieval task. EuroGOV is a collection of web pages crawled from the European Union portal, European Union member state governmental web sites, and Russian government web sites.

  5. Indexing and Retrieval for the Web.

    Science.gov (United States)

    Rasmussen, Edie M.

    2003-01-01

    Explores current research on indexing and ranking as retrieval functions of search engines on the Web. Highlights include measuring search engine stability; evaluation of Web indexing and retrieval; Web crawlers; hyperlinks for indexing and ranking; ranking for metasearch; document structure; citation indexing; relevance; query evaluation;…

  6. ACCOUNTING FOR USERS' INFLATED ASSESSMENTS OF ONLINE CATALOG SEARCH PERFORMANCE AND USEFULNESS: AN EXPERIMENTAL STUDY

    Directory of Open Access Journals (Sweden)

    Charles R. Hildreth

    2001-01-01

    Full Text Available User-oriented approaches to information retrieval (IR) system performance evaluation assign a major role to user satisfaction with search results and overall system performance. Expressed satisfaction with search results is often used as a measure of utility. Many research studies indicate that users of on-line library catalogs (OPACs) and other IR systems often express satisfaction with poor search results. This phenomenon of "false positives," inflated assessments of search results and system performance, has not been adequately explained. Non-performance factors such as interface style and ease of use may have an effect on a searcher's satisfaction with search results. The research described in this report investigates this phenomenon. This paper presents the findings of an experimental study which compared users' search performance and assessments of ease of use, system usefulness, and satisfaction with search results after use of a Web OPAC or its conventional counterpart. The primary questions addressed by this research center on the influence of two experimental factors, OPAC search interface style and search task level of difficulty, on the dependent variables: actual search performance, perceptions of ease of use and system usefulness, and assessments of satisfaction with search results. The study also investigated associations between perceived ease of use, system usefulness, and satisfaction with search results. Lastly, the study looked for associations between the dependent variables and personal characteristics. No association was found between satisfaction with search results and actual search performance. Web OPAC searchers outperformed Text OPAC searchers, but search task level of difficulty is a major determinant of search success. A strong positive correlation was found between perceptions of system ease of use and assessments of search results.

  7. Searching for strategies to reduce the mechanical demands of the sit-to-stand task with a muscle-actuated optimal control model

    NARCIS (Netherlands)

    Bobbert, M.F.; Kistemaker, D.A.; Vaz, M.A.; Ackermann, M

    2016-01-01

    Background The sit-to-stand task, which involves rising unassisted from sitting on a chair to standing, is important in daily life. Many people with muscle weakness, reduced range of motion or loading-related pain in a particular joint have difficulty performing the task. How should a person…

  8. Apports et limites des tâches web 2.0 dans un projet de télécollaboration asymétrique / Benefits and limitations of web 2.0 tasks in an asymmetrical tele-collaboration project

    Directory of Open Access Journals (Sweden)

    Charlotte Dejean-Thircuir

    2014-05-01

    Full Text Available This article focuses on an online exchange in which students in a Master FLE class (French as a Foreign Language; future French language teachers) had Cypriot and Latvian learners of French complete some sixty distance-learning tasks. One third of these tasks, which are the focus of this article, used Web 2.0 applications. The article first delimits and categorizes these tasks. It then tries to understand why they led neither to the expected interactions with a wider online audience nor, in some cases, to the anticipated dissemination of the final productions, turning to the question of who the intended recipient(s) of the content produced by these learners are.

  9. Extracting Macroscopic Information from Web Links.

    Science.gov (United States)

    Thelwall, Mike

    2001-01-01

    Discussion of Web-based link analysis focuses on an evaluation of Ingwersen's proposed external Web Impact Factor for the original use of the Web, namely the interlinking of academic research. Studies relationships between academic hyperlinks and research activities for British universities and discusses the use of search engines for Web link…
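
    The external Web Impact Factor discussed above is, as commonly defined, the number of pages outside a site that link to it divided by the number of pages in the site. A minimal sketch of that arithmetic, with invented counts rather than data from the study:

    # External Web Impact Factor: external inlinking pages / pages in the site.
    # The counts below are invented for illustration; in practice they would
    # come from link databases or search-engine link and site queries.
    def external_wif(external_inlink_pages: int, site_pages: int) -> float:
        if site_pages == 0:
            raise ValueError("the site must contain at least one indexed page")
        return external_inlink_pages / site_pages

    print(external_wif(external_inlink_pages=1250, site_pages=5000))  # 0.25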

  10. Evidence From Web-Based Dietary Search Patterns to the Role of B12 Deficiency in Non-Specific Chronic Pain: A Large-Scale Observational Study.

    Science.gov (United States)

    Giat, Eitan; Yom-Tov, Elad

    2018-01-05

    Profound vitamin B12 deficiency is a known cause of disease, but the role of low or intermediate levels of B12 in the development of neuropathy and other neuropsychiatric symptoms, as well as the relationship between eating meat and B12 levels, is unclear. The objective of our study was to investigate the role of low or intermediate levels of B12 in the development of neuropathy and other neuropsychiatric symptoms. We used food-related Internet search patterns from a sample of 8.5 million people based in the US as a proxy for B12 intake and correlated these searches with Internet searches related to possible effects of B12 deficiency. Food-related search patterns were highly correlated with known consumption and food-related searches (ρ=.69). Awareness of B12 deficiency was associated with a higher consumption of B12-rich foods and with queries for B12 supplements. Searches for terms related to neurological disorders were correlated with searches for B12-poor foods, in contrast with control terms. Popular medicines, those having fewer indications, and those which are predominantly used to treat pain, were more strongly correlated with the ability to predict neuropathic pain queries using the B12 contents of food. Our findings show that Internet search patterns are a useful way of investigating health questions in large populations, and suggest that low B12 intake may be associated with a broader spectrum of neurological disorders than previously thought.
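
    The correlations reported above (e.g., ρ=.69) are rank correlations between search-frequency series. A minimal sketch of that kind of computation with scipy.stats.spearmanr, using invented weekly query counts rather than the study's data:

    # Rank-correlate two invented weekly query-frequency series, in the spirit
    # of the food-related vs. symptom-related search comparison described above.
    from scipy.stats import spearmanr

    beef_queries    = [120, 135, 128, 150, 160, 155, 170, 180]    # placeholder counts
    b12_rich_intake = [0.8, 0.9, 0.85, 1.0, 1.1, 1.05, 1.2, 1.3]  # placeholder proxy

    rho, p_value = spearmanr(beef_queries, b12_rich_intake)
    print(f"Spearman rho = {rho:.2f}, p = {p_value:.3f}")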

  11. Search engines that learn from their users

    NARCIS (Netherlands)

    Schuth, A.G.

    2016-01-01

    More than half the world’s population uses web search engines, resulting in over half a billion search queries every single day. For many people web search engines are among the first resources they go to when a question arises. Moreover, search engines have for many become the most trusted route to

  12. Technical aspects of pediatric epilepsy surgery: Report of a multicenter, multinational web-based survey by the ILAE Task Force on Pediatric Epilepsy Surgery.

    Science.gov (United States)

    Cukiert, Arthur; Rydenhag, Bertil; Harkness, William; Cross, J Helen; Gaillard, William D

    2016-02-01

    Surgical techniques may vary extensively between centers. We report on a web-based survey aimed at evaluating the current technical approaches in different centers around the world performing epilepsy surgery in children. The intention of the survey was to establish technical standards. A request was made to 88 centers to complete a web-based survey comprising 51 questions. There were 14 questions related to general issues, 13 questions investigating the different technical aspects for children undergoing epilepsy surgery, and 24 questions investigating surgical strategies in pediatric epilepsy surgery. Fifty-two centers covering a wide geographic representation completed the questionnaire. The median number of resective procedures per center per year was 47. Some important technical practices appeared (>80% of the responses), such as the use of prophylactic antibiotics (98%), the use of high-speed drills for bone opening (88%), nonresorbable material for bone flap closure (85%), head fixation (90%), use of the surgical microscope (100%), and of free bone flaps. Other questions, such as the use of drains, electrocorticography (ECoG) and preoperative withdrawal of valproate, led to mixed, inconclusive results. Complications were noted in 3.8% of the patients submitted to cortical resection, 9.9% hemispheric surgery, 5% callosotomy, 1.8% depth electrode implantation, 5.9% subdural grid implantation, 11.9% hypothalamic hamartoma resection, 0.9% vagus nerve stimulation (VNS), and 0.5% deep brain stimulation. There were no major differences across regions or countries in any of the subitems above. The present data offer the first overview of the technical aspects of pediatric epilepsy surgery worldwide. Surprisingly, there seem to be more similarities than differences. That aside, many of the evaluated issues should be examined by adequately designed multicenter randomized controlled trials (RCTs). Further knowledge on these technical issues might lead to increased

  13. SearchResultFinder: federated search made easy

    NARCIS (Netherlands)

    Trieschnigg, Rudolf Berend; Tjin-Kam-Jet, Kien; Hiemstra, Djoerd

    Building a federated search engine based on a large number of existing web search engines is a challenge: implementing the programming interface (API) for each search engine is an exacting and time-consuming job. In this demonstration we present SearchResultFinder, a browser plugin which speeds up

  14. Dark Web 101

    Science.gov (United States)

    2016-07-21

    access the dark web? - Download an anonymizing software (like Tor) - Follow installation instructions - Launch browser (it should connect automatically...Projects Agency (DARPA) has been developing tools as part of its Memex program that access and catalog this mysterious online world. Researchers at...NASA's Jet Propulsion Laboratory in Pasadena, California, have joined the Memex effort to harness the benefits of deep Web searching for science

  15. BORDERLESS GEOSPATIAL WEB (BOLEGWEB)

    OpenAIRE

    V. Cetl; Kliment, T.; Kliment, M.

    2016-01-01

    The effective access and use of geospatial information (GI) resources is of critical importance in a modern knowledge-based society. Standard web services defined by the Open Geospatial Consortium (OGC) are frequently used within the implementations of spatial data infrastructures (SDIs) to facilitate discovery and use of geospatial data. These data are stored in databases located in a layer called the invisible web and are thus ignored by search engines. SDI uses a catalogue (discovery) ...

  16. An active visual search interface for Medline.

    Science.gov (United States)

    Xuan, Weijian; Dai, Manhong; Mirel, Barbara; Wilson, Justin; Athey, Brian; Watson, Stanley J; Meng, Fan

    2007-01-01

    Searching the Medline database is almost a daily necessity for many biomedical researchers. However, available Medline search solutions are mainly designed for the quick retrieval of a small set of most relevant documents. Because of this search model, they are not suitable for the large-scale exploration of literature and the underlying biomedical conceptual relationships, which are common tasks in the age of high throughput experimental data analysis and cross-discipline research. We try to develop a new Medline exploration approach by incorporating interactive visualization together with powerful grouping, summary, sorting and active external content retrieval functions. Our solution, PubViz, is based on the FLEX platform designed for interactive web applications and its prototype is publicly available at: http://brainarray.mbni.med.umich.edu/Brainarray/DataMining/PubViz.
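
    PubViz itself is a FLEX application, so the sketch below is only a generic illustration of the programmatic Medline retrieval such tools build on: it queries NCBI's public E-utilities esearch endpoint (the query string is an invented example).

    # Query PubMed/Medline through NCBI E-utilities and print matching PMIDs.
    # Generic illustration only; unrelated to the PubViz implementation above.
    import requests

    ESEARCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"
    params = {
        "db": "pubmed",
        "term": "dopamine receptor AND prefrontal cortex",  # example query
        "retmax": 20,
        "retmode": "json",
    }
    response = requests.get(ESEARCH, params=params, timeout=30)
    response.raise_for_status()
    pmids = response.json()["esearchresult"]["idlist"]
    print(pmids)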

  17. Programming .NET Web Services

    CERN Document Server

    Ferrara, Alex

    2007-01-01

    Web services are poised to become a key technology for a wide range of Internet-enabled applications, spanning everything from straight B2B systems to mobile devices and proprietary in-house software. While there are several tools and platforms that can be used for building web services, developers are finding a powerful tool in Microsoft's .NET Framework and Visual Studio .NET. Designed from scratch to support the development of web services, the .NET Framework simplifies the process--programmers find that tasks that took an hour using the SOAP Toolkit take just minutes. Programming .NET

  18. Next-Gen Search Engines

    Science.gov (United States)

    Gupta, Amardeep

    2005-01-01

    Current search engines--even the constantly surprising Google--seem unable to leap the next big barrier in search: the trillions of bytes of dynamically generated data created by individual web sites around the world, or what some researchers call the "deep web." The challenge now is not information overload, but information overlook.…

  19. Evaluating aggregated search using interleaving

    NARCIS (Netherlands)

    Chuklin, A.; Schuth, A.; Hofmann, K.; Serdyukov, P.; de Rijke, M.

    2013-01-01

    A result page of a modern web search engine is often much more complicated than a simple list of "ten blue links." In particular, a search engine may combine results from different sources (e.g., Web, News, and Images), and display these as grouped results to provide a better user experience. Such a
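
    The record above is truncated, but the underlying idea is interleaved evaluation: merge the results of two rankers into one list and credit whichever ranker contributed the clicked documents. A sketch of plain team-draft interleaving, a standard baseline rather than the vertical-aware method the paper studies:

    # Team-draft interleaving of two ranked lists A and B.
    import random

    def team_draft_interleave(ranking_a, ranking_b, length=10):
        """Return the interleaved list and a dict mapping each shown document
        to the ranker ("A" or "B") whose team contributed it; clicks on team-A
        documents later count as evidence in favour of ranker A."""
        interleaved, teams = [], {}
        remaining_a, remaining_b = list(ranking_a), list(ranking_b)
        picks = {"A": 0, "B": 0}
        while len(interleaved) < length and (remaining_a or remaining_b):
            # The team with fewer contributions picks next (ties broken at
            # random), unless its list is exhausted.
            if not remaining_a:
                turn = "B"
            elif not remaining_b:
                turn = "A"
            elif picks["A"] != picks["B"]:
                turn = "A" if picks["A"] < picks["B"] else "B"
            else:
                turn = random.choice(["A", "B"])
            remaining = remaining_a if turn == "A" else remaining_b
            # Take the team's highest-ranked document not already shown.
            while remaining and remaining[0] in teams:
                remaining.pop(0)
            if remaining:
                doc = remaining.pop(0)
                interleaved.append(doc)
                teams[doc] = turn
                picks[turn] += 1
        return interleaved, teams

    mixed, teams = team_draft_interleave(["d1", "d2", "d3"], ["d3", "d4", "d1"])
    print(mixed, teams)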

  20. Automatic search of geospatial features for disaster and emergency management

    Science.gov (United States)

    Zhang, Chuanrong; Zhao, Tian; Li, Weidong

    2010-12-01

    Although the fast development of OGC (Open Geospatial Consortium) WFS (Web Feature Service) technologies has undoubtedly improved the sharing and synchronization of feature-level geospatial information across diverse resources, literature shows that there are still apparent limitations in the current implementation of OGC WFSs. Currently, the implementation of OGC WFSs only emphasizes syntactic data interoperability via standard interfaces and cannot resolve semantic heterogeneity problems in geospatial data sharing. To help emergency responders and disaster managers find new ways of efficiently searching for needed geospatial information at the feature level, this paper aims to propose a framework for automatic search of geospatial features using Geospatial Semantic Web technologies and natural language interfaces. We focus on two major tasks: (1) intelligent geospatial feature retrieval using Geospatial Semantic Web technologies; (2) a natural language interface to a geospatial knowledge base and web feature services over the Semantic Web. Based on the proposed framework we implemented a prototype. Results show that it is practical to directly discover desirable geospatial features from multiple semantically heterogeneous sources using Geospatial Semantic Web technologies and natural language interfaces.
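
    As a rough illustration of the standard WFS calls such a framework sits on top of, the sketch below issues an OGC WFS 2.0 GetFeature request; the endpoint URL and feature type name are placeholders, not part of the paper's prototype.

    # Fetch features from a (hypothetical) OGC Web Feature Service endpoint.
    # Only the WFS key-value parameters follow the OGC standard; the URL and
    # typeNames value are invented for illustration.
    import requests

    WFS_ENDPOINT = "https://example.org/geoserver/wfs"   # placeholder endpoint
    params = {
        "service": "WFS",
        "version": "2.0.0",
        "request": "GetFeature",
        "typeNames": "emergency:shelters",               # hypothetical feature type
        "outputFormat": "application/json",              # GeoJSON, if the server supports it
        "count": 50,
    }
    response = requests.get(WFS_ENDPOINT, params=params, timeout=60)
    response.raise_for_status()
    for feature in response.json().get("features", []):
        print(feature["id"], feature["properties"])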

  1. How often people google for vaccination: Qualitative and quantitative insights from a systematic search of the web-based activities using Google Trends.

    Science.gov (United States)

    Bragazzi, Nicola Luigi; Barberis, Ilaria; Rosselli, Roberto; Gianfredi, Vincenza; Nucci, Daniele; Moretti, Massimo; Salvatori, Tania; Martucci, Gianfranco; Martini, Mariano

    2017-02-01

    Nowadays, more and more people surf the Internet seeking health-related information. Information and communication technologies (ICTs) can represent an important opportunity in the field of Public Health and vaccinology. The aim of our current research was to investigate a) how often people search the Internet for vaccination-related information, b) whether this search is spontaneous or induced by media, and c) which kind of information is searched for in particular. We used Google Trends (GT) for monitoring the interest in preventable infections and related vaccines. When looking for vaccine-preventable infectious diseases, the vaccine was not a popular topic, with some valuable exceptions, including the vaccine against Human Papillomavirus (HPV). Vaccine-related queries represented approximately one third of the volumes regarding preventable infections, greatly differing among the vaccines. However, the interest in vaccines is increasing over time: in particular, users seek information about possible vaccine-related side-effects. The five most searched vaccines are those against 1) influenza; 2) meningitis; 3) diphtheria, pertussis (whooping cough), and tetanus; 4) yellow fever; and 5) chickenpox. ICTs can have a positive influence on parental vaccine-related knowledge, attitudes, beliefs and vaccination willingness. GT can be used for monitoring the interest in vaccinations and the main information searched.
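
    As a sketch of how such monitoring can be scripted, assuming the unofficial third-party pytrends client for Google Trends is installed and behaves as documented (the keywords and timeframe are arbitrary examples):

    # Monitor relative search interest for vaccine-related queries with pytrends,
    # an unofficial Google Trends client (assumed installed via pip).
    from pytrends.request import TrendReq

    pytrends = TrendReq(hl="en-US", tz=0)
    keywords = ["HPV vaccine", "flu vaccine", "meningitis vaccine"]  # example queries
    pytrends.build_payload(keywords, timeframe="today 5-y", geo="")
    interest = pytrends.interest_over_time()   # pandas DataFrame, one column per keyword
    print(interest.tail())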

  2. Approche interactionnelle et didactique invisible – Deux concepts pour la conception et la mise en œuvre de tâches sur le web social The interaction-based approach and invisible didactics – Two concepts for the design and practice of tasks on the social web

    Directory of Open Access Journals (Sweden)

    Christian Ollivier

    2012-03-01

    Cet article commence par revenir sur ce qu'est la compétence d'action et de communication que nous concevons comme ayant lieu, toutes les deux, sous contrainte relationnelle. Sur cette base est proposée une approche interactionnelle de l'enseignement / apprentissage des langues qui ajoute aux catégories de tâches telles qu'elles sont généralement conçues (notamment par le CECRL) des tâches de la vie réelle à réaliser dans des interactions sociales réelles dépassant le cadre de la classe. Nous abordons ensuite le principe de didactique invisible qui a permis de construire les sites du projet Babelweb (Ollivier et al.) dans le but de créer des espaces d'interaction de type web social sur lesquels les apprenants peuvent se comporter en "acteurs sociaux" à part entière. La comparaison, croisée avec les résultats de recherches antérieures, du comportement langagier des apprenants sur un blogue de Babelweb et de francophones réalisant une tâche très similaire sur le web social permet de faire émerger les convergences et divergences entre le comportement de ces deux groupes et de faire ressortir les limites, mais aussi les avantages de la didactique invisible. This paper starts by going back to the notions of the actional and communicative competence, both of which we consider to be determined by the relationship between the persons who are involved in the action and/or communication. On this basis, we define an interaction-based approach to language learning/teaching that, in addition to the usual kinds of tasks, integrates real-life tasks, which learners accomplish within the framework of real social interactions outside the classroom. Then, we introduce the principle of invisible didactics, which was used to design the websites of the Babelweb project (Ollivier et al.) and allows us to create virtual interaction spaces that resemble other social media on which learners can act as real “social agents”. The comparison, cross

  3. With News Search Engines

    Science.gov (United States)

    Gunn, Holly

    2005-01-01

    Although there are many news search engines on the Web, finding the news items one wants can be challenging. Choosing appropriate search terms is one of the biggest challenges. Unless one has seen the article that one is seeking, it is often difficult to select words that were used in the headline or text of the article. The limited archives of…

  4. Building Internet Search Engines

    Directory of Open Access Journals (Sweden)

    Mustafa Akgül

    1996-09-01

    Internet search engines are powerful tools to find electronic objects such as addresses of individuals and institutions, documents, statistics of all kinds, dictionaries, catalogs, product information, etc. This paper explains how to build and run some very common search engines on Unix platforms, so as to serve documents through the Web.
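
    The core of any such engine is an inverted index mapping terms to the documents that contain them. A minimal sketch (in Python rather than the Unix tooling of the original article), with toy documents:

    # Minimal in-memory inverted index: map each term to the set of documents
    # containing it, then answer conjunctive (AND) keyword queries.
    from collections import defaultdict

    documents = {
        "doc1": "internet search engines are powerful tools",
        "doc2": "how to build and run search engines on unix",
        "doc3": "serving documents through the web",
    }

    index = defaultdict(set)
    for doc_id, text in documents.items():
        for term in text.lower().split():
            index[term].add(doc_id)

    def search(query):
        """Return the documents containing every query term."""
        terms = query.lower().split()
        if not terms:
            return set()
        results = index.get(terms[0], set()).copy()
        for term in terms[1:]:
            results &= index.get(term, set())
        return results

    print(search("search engines"))   # {'doc1', 'doc2'}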

  5. ElasticSearch cookbook

    CERN Document Server

    Paro, Alberto

    2015-01-01

    If you are a developer who implements ElasticSearch in your web applications and want to sharpen your understanding of the core elements and applications, this is the book for you. It is assumed that you've got working knowledge of JSON and, if you want to extend ElasticSearch, of Java and related technologies.
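
    For a feel of the core API the cookbook covers, the sketch below indexes one document and runs a match query against a local Elasticsearch node over its REST interface; the host, index name and document are assumptions for illustration, and secured clusters would additionally need authentication and TLS.

    # Index a document and run a full-text match query against a local
    # Elasticsearch node through its REST API. Host, index name and document
    # are illustrative only.
    import requests

    ES = "http://localhost:9200"
    doc = {"title": "ElasticSearch cookbook", "topic": "web search"}

    # Create/overwrite document id 1 in the 'books' index; refresh=true makes
    # it immediately searchable for the demo query below.
    requests.put(f"{ES}/books/_doc/1", params={"refresh": "true"}, json=doc, timeout=10).raise_for_status()

    # Full-text match query on the title field.
    query = {"query": {"match": {"title": "cookbook"}}}
    response = requests.post(f"{ES}/books/_search", json=query, timeout=10)
    response.raise_for_status()
    for hit in response.json()["hits"]["hits"]:
        print(hit["_score"], hit["_source"]["title"])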

  6. Federated Search in the Wild: the combined power of over a hundred search engines

    NARCIS (Netherlands)

    Nguyen, Dong-Phuong; Demeester, Thomas; Trieschnigg, Rudolf Berend; Hiemstra, Djoerd

    2012-01-01

    Federated search has the potential of improving web search: the user becomes less dependent on a single search provider and parts of the deep web become available through a unified interface, leading to a wider variety in the retrieved search results. However, a publicly available dataset for

  7. Personalization of Rule-based Web Services

    Science.gov (United States)

    Choi, Okkyung; Han, SangYong

    2008-01-01

    Nowadays Web users have clearly expressed their wish to receive personalized services directly. Personalization is the way to tailor services directly to the immediate requirements of the user. However, the current Web Services System does not provide any features supporting this, such as personalization of services or intelligent matchmaking. In this research, a flexible, personalized Rule-based Web Services System is proposed to address these problems and to enable efficient search, discovery and construction across general Web documents and Semantic Web documents in a Web Services System. This system utilizes matchmaking among service requesters', service providers' and users' preferences using a Rule-based Search Method, and subsequently ranks search results. A prototype of efficient Web Services search and construction for the suggested system is developed based on the current work. PMID:27879827

  8. Personalization of Rule-based Web Services.

    Science.gov (United States)

    Choi, Okkyung; Han, Sang Yong

    2008-04-04

    Nowadays Web users have clearly expressed their wish to receive personalized services directly. Personalization is the way to tailor services directly to the immediate requirements of the user. However, the current Web Services System does not provide any features supporting this, such as personalization of services or intelligent matchmaking. In this research, a flexible, personalized Rule-based Web Services System is proposed to address these problems and to enable efficient search, discovery and construction across general Web documents and Semantic Web documents in a Web Services System. This system utilizes matchmaking among service requesters', service providers' and users' preferences using a Rule-based Search Method, and subsequently ranks search results. A prototype of efficient Web Services search and construction for the suggested system is developed based on the current work.
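
    The records above do not specify the system in enough detail to reproduce it, so the sketch below only illustrates the general idea of rule-based matchmaking between user preferences and service descriptions; every rule, service and attribute in it is invented.

    # Toy rule-based matchmaking: score service descriptions against a user's
    # preference rules and rank them. Not a reconstruction of the system above.
    services = [
        {"name": "WeatherSvcA", "category": "weather", "language": "en", "free": True},
        {"name": "WeatherSvcB", "category": "weather", "language": "ko", "free": False},
        {"name": "NewsSvc",     "category": "news",    "language": "en", "free": True},
    ]

    # Each rule: (predicate over a service, weight added when it holds).
    user_rules = [
        (lambda s: s["category"] == "weather", 2.0),
        (lambda s: s["language"] == "en",      1.0),
        (lambda s: s["free"],                  0.5),
    ]

    def rank_services(services, rules):
        scored = [(sum(w for pred, w in rules if pred(s)), s["name"]) for s in services]
        return sorted(scored, reverse=True)

    for score, name in rank_services(services, user_rules):
        print(f"{name}: {score}")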

  9. Web Caching

    Indian Academy of Sciences (India)

    Web Caching - A Technique to Speedup Access to Web Contents. Harsha Srinath and Shiva Shankar Ramanna. General Article, Resonance – Journal of Science Education, Volume 7, Issue 7, July 2002, pp 54-62. Keywords: World Wide Web; data caching; internet traffic; web page access.
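
    Web and proxy caches of the kind discussed in this article typically evict the least recently used objects when they fill up. A minimal, illustrative LRU sketch:

    # Minimal least-recently-used (LRU) cache for web objects, the eviction
    # policy most commonly described for proxy/web caches.
    from collections import OrderedDict

    class LRUCache:
        def __init__(self, capacity: int):
            self.capacity = capacity
            self.store = OrderedDict()          # url -> cached response body

        def get(self, url):
            if url not in self.store:
                return None                     # cache miss: caller fetches from origin
            self.store.move_to_end(url)         # mark as most recently used
            return self.store[url]

        def put(self, url, body):
            self.store[url] = body
            self.store.move_to_end(url)
            if len(self.store) > self.capacity:
                self.store.popitem(last=False)  # evict least recently used entry

    cache = LRUCache(capacity=2)
    cache.put("http://example.org/a", "<html>A</html>")
    cache.put("http://example.org/b", "<html>B</html>")
    cache.get("http://example.org/a")                      # touch A
    cache.put("http://example.org/c", "<html>C</html>")    # evicts B
    print(list(cache.store))   # ['http://example.org/a', 'http://example.org/c']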

  10. WebScipio: An online tool for the determination of gene structures using protein sequences

    Directory of Open Access Journals (Sweden)

    Waack Stephan

    2008-09-01

    Background Obtaining the gene structure for a given protein encoding gene is an important step in many analyses. A software tool suited for this task should be readily accessible, accurate, easy to handle and should provide the user with a coherent representation of the most probable gene structure. It should be rigorous enough to optimise features on the level of single bases and at the same time flexible enough to allow for cross-species searches. Results WebScipio, a web interface to the Scipio software, allows a user to obtain the corresponding coding sequence structure for a given query protein sequence that belongs to an already assembled eukaryotic genome. The resulting gene structure is presented in various human-readable formats, such as a schematic representation and a detailed alignment of the query and the target sequence highlighting any discrepancies. WebScipio can also be used to identify and characterise the gene structures of homologs in related organisms. In addition, it offers a web service for integration with other programs. Conclusion WebScipio is a tool that allows users to get a high-quality gene structure prediction from a protein query. It offers more than 250 eukaryotic genomes that can be searched and produces predictions that are close to what can be achieved by manual annotation, for in-species and cross-species searches alike. WebScipio is freely accessible at http://www.webscipio.org.

  11. On the efficient determination of most near neighbors horseshoes, hand grenades, web search and other situations when close is close enough

    CERN Document Server

    Manasse, Mark S

    2012-01-01

    The time-worn aphorism "close only counts in horseshoes and hand-grenades" is clearly inadequate. Close also counts in golf, shuffleboard, archery, darts, curling, and other games of accuracy in which hitting the precise center of the target isn't to be expected every time, or in which we can expect to be driven from the target by skilled opponents. This lecture is not devoted to sports discussions, but to efficient algorithms for determining pairs of closely related web pages -- and a few other situations in which we have found that inexact matching is good enough; where proximity suffices.
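
    Near-duplicate web pages of the kind this lecture targets are commonly detected by comparing word shingles with Jaccard similarity, with MinHash as the scalable estimator; the sketch below shows the exact-Jaccard version and is not claimed to be the lecture's own algorithm.

    # Compare two pages by the Jaccard similarity of their word shingles.
    # This is the exact computation; MinHash approximates it at scale.
    def shingles(text, k=4):
        words = text.lower().split()
        return {" ".join(words[i:i + k]) for i in range(len(words) - k + 1)}

    def jaccard(a, b):
        return len(a & b) / len(a | b) if (a or b) else 1.0

    page1 = "close only counts in horseshoes and hand grenades says the old aphorism"
    page2 = "close only counts in horseshoes hand grenades and web search says the aphorism"

    print(f"{jaccard(shingles(page1), shingles(page2)):.2f}")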

  12. Linking Wikipedia to the web

    NARCIS (Netherlands)

    Kaptein, R.; Serdyukov, P.; Kamps, J.; Chen, H.-H.; Efthimiadis, E.N.; Savoy, J.; Crestani, F.; Marchand-Maillet, S.

    2010-01-01

    We investigate the task of finding links from Wikipedia pages to external web pages. Such external links significantly extend the information in Wikipedia with information from the Web at large, while retaining the encyclopedic organization of Wikipedia. We use a language modeling approach to create
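
    As a rough sketch of the language-modelling flavour of approach mentioned above, the snippet below scores candidate pages with a generic unigram query-likelihood model using Jelinek-Mercer smoothing; the documents, query and smoothing weight are invented, and this is not the authors' exact model.

    # Generic unigram query-likelihood scoring with Jelinek-Mercer smoothing:
    # score(d) = sum over query terms w of log((1-lam)*P(w|d) + lam*P(w|collection)).
    import math
    from collections import Counter

    docs = {
        "target1": "official site of the amsterdam museum collection",
        "target2": "wikipedia article about museums in europe",
    }
    query = "amsterdam museum"

    collection = Counter()
    for text in docs.values():
        collection.update(text.split())
    collection_size = sum(collection.values())

    def score(query, text, lam=0.1):
        terms = Counter(text.split())
        length = sum(terms.values())
        s = 0.0
        for w in query.split():
            p_doc = terms[w] / length
            p_col = collection[w] / collection_size
            s += math.log((1 - lam) * p_doc + lam * p_col + 1e-12)  # guard against log(0)
        return s

    for name, text in docs.items():
        print(name, round(score(query, text), 3))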

  13. Exploring the academic invisible web

    OpenAIRE

    Lewandowski, Dirk

    2006-01-01

    The Invisible Web is often discussed in the academic context, where its contents (mainly in the form of databases) are of great importance. But this discussion is mainly based on some seminal research done by Sherman and Price (2001) and Bergman (2001), respectively. We focus on the types of Invisible Web content relevant for academics and the improvements made by search engines to deal with these content types. In addition, we question the volume of the Invisible Web as stated by Bergman. Ou...

  14. Web Engineering

    OpenAIRE

    Deshpande, Yogesh; Murugesan, San; Ginige, Athula; Hansen, Steve; Schwabe, Daniel; Gaedke, Martin; White, Bebo

    2003-01-01

    Web Engineering is the application of systematic, disciplined and quantifiable approaches to development, operation, and maintenance of Web-based applications. It is both a pro-active approach and a growing collection of theoretical and empirical research in Web application development. This paper gives an overview of Web Engineering by addressing the questions: a) why is it needed? b) what is its domain of operation? c) how does it help and what should it do to improve Web application develo...

  15. The effects of brief visual interruption tasks on drivers' ability to resume their visual search for a pre-cued hazard.

    Science.gov (United States)

    Borowsky, Avinoam; Horrey, William J; Liang, Yulan; Garabet, Angela; Simmons, Lucinda; Fisher, Donald L

    2016-08-01

    Driver visual distraction is known to increase the likelihood of being involved in a crash, especially for long glances inside the vehicle. The detrimental impact of these in-vehicle glances may carry over and disrupt the ongoing processing of information after the driver glances back up on the road. This study explored the effect of different types of visual tasks inside the vehicle on the top-down processes that guide the detection and monitoring of road hazards after the driver glances back towards the road. Using a driving simulator, 56 participants were monitored with an eye tracking system while they navigated various hazardous scenarios in one of four experimental conditions. In all conditions, a potential hazard was visible 4-5s before the driver could strike the potential hazard were it to materialize. All interruptions were exactly two seconds in length. After the interruption the potential hazard again became visible for about a half-second after which the driver passed by the hazard. The nature of the in-vehicle visual interruption presented to the participants was varied across conditions: (1) Visual interruptions comprised of spatial, driving unrelated, tasks; (2) visual interruptions comprised of non-spatial, driving unrelated, tasks; (3) visual interruptions with no tasks added; and (4) no visual interruptions. In the first three conditions, the driver's glance at the forward roadway was interrupted for two seconds (either with or without an in-vehicle task) just after the potential hazard first became visible. In the last condition (no interruptions), the driver could not see the potential hazard after it just became visible because of obstructions in the built or natural environment. The obstruction (like the interruption) lasted for two seconds. In other words, across all conditions the hazard was visible, then became invisible, and finally became visible again. Importantly, the results show that the

  16. Design and Implementation of Domain based Semantic Hidden Web Crawler

    OpenAIRE

    Manvi; Bhatia, Komal Kumar; Dixit, Ashutosh

    2015-01-01

    Web is a wide term which mainly consists of surface web and hidden web. One can easily access the surface web using traditional web crawlers, but they are not able to crawl the hidden portion of the web. These traditional crawlers retrieve contents from web pages, which are linked by hyperlinks ignoring the information hidden behind form pages, which cannot be extracted using simple hyperlink structure. Thus, they ignore large amount of data hidden behind search forms. This paper emphasizes o...
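
    The surface/hidden-web distinction drawn above can be made concrete with a toy crawler step that extracts hyperlinks (which a traditional crawler follows) and merely reports the forms behind which hidden-web content sits; the URL is a placeholder, and real hidden-web crawling would additionally fill in and submit those forms.

    # Toy surface-web crawler step: extract hyperlinks (followable) and report
    # <form> elements (the hidden-web entry points plain crawlers skip).
    from html.parser import HTMLParser
    import requests

    class LinkAndFormParser(HTMLParser):
        def __init__(self):
            super().__init__()
            self.links, self.forms = [], []

        def handle_starttag(self, tag, attrs):
            attrs = dict(attrs)
            if tag == "a" and "href" in attrs:
                self.links.append(attrs["href"])
            elif tag == "form":
                self.forms.append(attrs.get("action", ""))

    page = requests.get("https://example.org/", timeout=30).text   # placeholder URL
    parser = LinkAndFormParser()
    parser.feed(page)
    print("hyperlinks to follow:", parser.links)
    print("search forms (hidden-web entry points):", parser.forms)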

  17. ElasticSearch cookbook

    CERN Document Server

    Paro, Alberto

    2013-01-01

    Written in an engaging, easy-to-follow style, the recipes will help you to extend the capabilities of ElasticSearch to manage your data effectively.If you are a developer who implements ElasticSearch in your web applications, manage data, or have decided to start using ElasticSearch, this book is ideal for you. This book assumes that you've got working knowledge of JSON and Java

  18. Collaborative web hosting challenges and research directions

    CERN Document Server

    Ahmed, Reaz

    2014-01-01

    This brief presents a peer-to-peer (P2P) web-hosting infrastructure (named pWeb) that can transform networked, home-entertainment devices into lightweight collaborating Web servers for persistently storing and serving multimedia and web content. The issues addressed include ensuring content availability, Plexus routing and indexing, naming schemes, web ID, collaborative web search, network architecture and content indexing. In pWeb, user-generated voluminous multimedia content is proactively uploaded to a nearby network location (preferably within the same LAN or at least, within the same ISP)

  19. Development of intelligent semantic search system for rubber research data in Thailand

    Science.gov (United States)

    Kaewboonma, Nattapong; Panawong, Jirapong; Pianhanuruk, Ekkawit; Buranarach, Marut

    2017-10-01

    Rubber production in Thailand has increased not only because of strong demand from the world market but also through the stimulus of the Thai Government's replanting program from 1961 onwards. With the continuous growth of rubber research data volume on the Web, the search for information has become a challenging task. Ontologies are used to improve the accuracy of information retrieval from the web by incorporating a degree of semantic analysis during the search. In this context, we propose an intelligent semantic search system for rubber research data in Thailand. The research methods included 1) analyzing domain knowledge, 2) developing ontologies, and 3) developing the intelligent semantic search system so that research data curated in trusted digital repositories may be shared among the wider Thailand rubber research community.
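
    As a generic illustration of the ontology-backed querying such a system relies on, the sketch below loads a tiny invented RDF graph and runs a SPARQL query with rdflib; the namespace, properties and data are placeholders, not the project's actual ontology.

    # Load a tiny invented RDF graph about rubber research and query it with
    # SPARQL using rdflib. All names and values are placeholders.
    from rdflib import Graph

    turtle_data = """
    @prefix ex: <http://example.org/rubber#> .
    ex:study1 ex:topic "latex yield" ; ex:province "Songkhla" .
    ex:study2 ex:topic "tapping systems" ; ex:province "Surat Thani" .
    """

    g = Graph()
    g.parse(data=turtle_data, format="turtle")

    results = g.query("""
        PREFIX ex: <http://example.org/rubber#>
        SELECT ?study ?topic WHERE {
            ?study ex:topic ?topic ;
                   ex:province "Songkhla" .
        }
    """)
    for study, topic in results:
        print(study, topic)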

  20. Effect of pre-task music on sports or exercise performance.

    Science.gov (United States)

    Smirmaul, Bruno P

    2017-01-01

    Pre-task music is a very common strategy among sports competitors. However, as opposed to in-task music, the scientific evidence to support its ergogenic effects on either sports or exercise performance is limited. This brief review critically addresses the existing literature investigating the effects of pre-task music on sports and exercise performance, focusing on the methods and results of experimental studies, and offers basic and practical recommendations. In July 2015, a comprehensive literature search was performed in Web of Science, PubMed, and Google Scholar using the following key words in combination: "pre-task music," "pre-test music," "pre-exercise music," "exercise performance," "sports performance." The literature search was further expanded by both hand searching review articles on the topic and by searching the reference lists from the articles retrieved for any relevant references. Overall, a total of 15 studies in 14 articles were included. Pre-task music research has been unsystematic, methodologically limited and infrequent. Using this review as a starting point to overcome previous methodological limitations when designing future experiments may contribute to the development of pre-task music research, which is still in its infancy. Currently, there is not sufficient evidence to support overall ergogenic effects of pre-task music on sports or exercise performance. Nonetheless, pre-task music has shown a likely ergogenic effect on shorter and predominantly anaerobic tasks such as grip strength, the Wingate test, and short-duration sports or sports-like tasks, in contrast to longer and predominantly aerobic tasks.