WorldWideScience

Sample records for radiology-centric search engine

  1. Utilization of a radiology-centric search engine.

    Science.gov (United States)

    Sharpe, Richard E; Sharpe, Megan; Siegel, Eliot; Siddiqui, Khan

    2010-04-01

    Internet-based search engines have become a significant component of medical practice. Physicians increasingly rely on information available from search engines as a means to improve patient care, provide better education, and enhance research. Specialized search engines have emerged to more efficiently meet the needs of physicians. Details about the ways in which radiologists utilize search engines have not been documented. The authors categorized every 25th search query in a radiology-centric vertical search engine by radiologic subspecialty, imaging modality, geographic location of access, time of day, use of abbreviations, misspellings, and search language. Musculoskeletal and neurologic imaging were the most frequently searched subspecialties. The least frequently searched were breast imaging, pediatric imaging, and nuclear medicine. Magnetic resonance imaging and computed tomography were the most frequently searched modalities. A majority of searches were initiated in North America, but all continents were represented. Searches occurred 24 h/day in converted local times, with a majority occurring during the normal business day. Misspellings and abbreviations were common. Almost all searches were performed in English. Search engine utilization trends are likely to mirror trends in diagnostic imaging in the region from which searches originate. Internet searching appears to function as a real-time clinical decision-making tool, a research tool, and an educational resource. A more thorough understanding of search utilization patterns can be obtained by analyzing phrases as actually entered as well as the geographic location and time of origination. This knowledge may contribute to the development of more efficient and personalized search engines.
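
    The sampling scheme in the study above (categorizing every 25th query) is classic systematic sampling. A minimal sketch of the idea, with hypothetical queries and a made-up keyword-to-subspecialty map (a real study would use a curated taxonomy):

    ```python
    from collections import Counter

    def systematic_sample(queries, step=25):
        """Take every `step`-th query from a log (systematic sampling)."""
        return queries[::step]

    def categorize(query, keyword_map):
        """Assign a query to the first matching subspecialty, else 'other'."""
        q = query.lower()
        for category, keywords in keyword_map.items():
            if any(k in q for k in keywords):
                return category
        return "other"

    # Hypothetical keyword map, for illustration only.
    KEYWORDS = {
        "musculoskeletal": ["meniscus", "rotator cuff", "fracture"],
        "neuro": ["glioma", "stroke", "mri brain"],
    }

    log = ["mri brain stroke", "rotator cuff tear"] * 30  # toy 60-query log
    sample = systematic_sample(log)
    counts = Counter(categorize(q, KEYWORDS) for q in sample)
    ```

    With a 60-entry log, a step of 25 keeps entries 0, 25, and 50, and the counter then tallies subspecialties over just that sample.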

  2. Development of a Google-based search engine for data mining radiology reports.

    Science.gov (United States)

    Erinjeri, Joseph P; Picus, Daniel; Prior, Fred W; Rubin, David A; Koppel, Paul

    2009-08-01

    The aim of this study is to develop a secure, Google-based data-mining tool for radiology reports using free and open source technologies and to explore its use within an academic radiology department. A Health Insurance Portability and Accountability Act (HIPAA)-compliant data repository, search engine and user interface were created to facilitate treatment, operations, and reviews preparatory to research. The Institutional Review Board waived review of the project, and informed consent was not required. A total of 2.9 million text reports, comprising 7.9 GB of disk space, were downloaded from our radiology information system to a fileserver. Extensible markup language (XML) representations of the reports were indexed using Google Desktop Enterprise search engine software. A hypertext markup language (HTML) form allowed users to submit queries to Google Desktop, and Google's XML response was interpreted by a Practical Extraction and Report Language (Perl) script, presenting ranked results in a web browser window. The query, reason for search, results, and documents visited were logged to maintain HIPAA compliance. Indexing averaged approximately 25,000 reports per hour. Keyword search of a common term like "pneumothorax" yielded the first ten most relevant results of 705,550 total results in 1.36 s. Keyword search of a rare term like "hemangioendothelioma" yielded the first ten most relevant results of 167 total results in 0.23 s; retrieval of all 167 results took 0.26 s. Data mining tools for radiology reports will improve the productivity of academic radiologists in clinical, educational, research, and administrative tasks. By leveraging existing knowledge of Google's interface, radiologists can quickly perform useful searches.
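
    In the study above, Google Desktop handled the indexing, but the underlying pattern (index report text, intersect term postings to answer keyword queries, and log every query for HIPAA auditing) can be sketched with a toy inverted index. All class and field names here are illustrative, not the authors' implementation:

    ```python
    import re
    from collections import defaultdict

    class ReportIndex:
        """Toy inverted index over radiology report text, with query auditing."""

        def __init__(self):
            self.index = defaultdict(set)  # term -> set of report ids
            self.reports = {}
            self.audit_log = []            # (user, query, reason) tuples

        def add(self, report_id, text):
            self.reports[report_id] = text
            for term in re.findall(r"[a-z]+", text.lower()):
                self.index[term].add(report_id)

        def search(self, user, query, reason):
            # Log who searched what and why, as HIPAA compliance requires.
            self.audit_log.append((user, query, reason))
            terms = re.findall(r"[a-z]+", query.lower())
            postings = [self.index[t] for t in terms if t in self.index]
            if not postings:
                return []
            return sorted(set.intersection(*postings))

    idx = ReportIndex()
    idx.add(1, "Small right pneumothorax after biopsy.")
    idx.add(2, "No pneumothorax. Lungs are clear.")
    idx.add(3, "Hepatic hemangioendothelioma is suspected.")
    results = idx.search("dr_smith", "pneumothorax", reason="research review")
    ```

    Note that a plain keyword index like this retrieves report 2 for "pneumothorax" even though it is a negated finding; handling negation is one reason real systems layer NLP on top of retrieval.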

  3. Transforming Systems Engineering through Model Centric Engineering

    Science.gov (United States)

    2017-08-08

    Contract No. HQ0034-13-D-0004. Report No. SERC-2017-TR-110, dated August 8, 2017 (update: August 8, 2017). Transforming Systems Engineering through Model-Centric Engineering, Technical Report SERC-2017-TR-110. Principal Investigator: Mark Blackburn, Stevens Institute of Technology. Co-...Evangelista. Sponsor: U.S. Army Armament Research, Development and Engineering Center (ARDEC), Office of the Deputy Assistant Secretary of Defense for...

  4. Transforming Systems Engineering through Model-Centric Engineering

    Science.gov (United States)

    2018-02-28

    Contract No. HQ0034-13-D-0004. Research Tasks: 48, 118, 141, 157, 170. Report No. SERC-2018-TR-103. Transforming Systems Engineering through Model-Centric Engineering, Technical Report SERC-2018-TR-103, February 28, 2018. Principal Investigator: Dr. Mark Blackburn, Stevens Institute of... Systems Engineering Research Center. This material is based upon work supported, in whole or in part, by the U.S. Department of Defense through the...

  5. Interactive Model-Centric Systems Engineering (IMCSE) Phase 5

    Science.gov (United States)

    2018-02-28

    Interactive Model-Centric Systems Engineering (IMCSE) Phase 5, Technical Report SERC-2018-TR-104, February 28, 2018. Principal Investigator... Date: February 28, 2018. Copyright © 2018 Stevens Institute of Technology, Systems Engineering Research Center. The Systems Engineering Research Center (SERC) is a federally funded University Affiliated Research Center managed by Stevens...

  6. A product feature-based user-centric product search model

    OpenAIRE

    Ben Jabeur, Lamjed; Soulier, Laure; Tamine, Lynda; Mousset, Paul

    2016-01-01

    During the online shopping process, users search for interesting products and want to quickly access those that fit their needs among a long tail of similar or closely related products. Our contribution addresses head queries that are frequently submitted on e-commerce Web sites. Head queries usually target featured products with several variations, accessories, and complementary products. We present in this paper a product feature-based user-centric model for product search involving in a...

  7. How do radiologists use the human search engine?

    International Nuclear Information System (INIS)

    Wolfe, Jeremy M.; Evans, Karla K.; Drew, Trafton; Aizenman, Avigael; Josephs, Emilie

    2016-01-01

    Radiologists perform many 'visual search tasks' in which they look for one or more instances of one or more types of target item in a medical image (e.g. cancer screening). To understand and improve how radiologists do such tasks, it must be understood how the human 'search engine' works. This article briefly reviews some of the relevant work on this aspect of medical image perception. Questions include: How are attention and the eyes guided in radiologic search? How is global (image-wide) information used in search? How might properties of human vision and human cognition lead to errors in radiologic search? (authors)

  8. Informatics in radiology: RADTF: a semantic search-enabled, natural language processor-generated radiology teaching file.

    Science.gov (United States)

    Do, Bao H; Wu, Andrew; Biswal, Sandip; Kamaya, Aya; Rubin, Daniel L

    2010-11-01

    Storing and retrieving radiology cases is an important activity for education and clinical research, but this process can be time-consuming. In the process of structuring reports and images into organized teaching files, incidental pathologic conditions not pertinent to the primary teaching point can be omitted, as when a user saves images of an aortic dissection case but disregards the incidental osteoid osteoma. An alternative strategy for identifying teaching cases is text search of reports in radiology information systems (RIS), but retrieved reports are unstructured, teaching-related content is not highlighted, and patient identifying information is not removed. Furthermore, searching unstructured reports requires sophisticated retrieval methods to achieve useful results. An open-source, RadLex(®)-compatible teaching file solution called RADTF, which uses natural language processing (NLP) methods to process radiology reports, was developed to create a searchable teaching resource from the RIS and the picture archiving and communication system (PACS). The NLP system extracts and de-identifies teaching-relevant statements from full reports to generate a stand-alone database, thus converting existing RIS archives into an on-demand source of teaching material. Using RADTF, the authors generated a semantic search-enabled, Web-based radiology archive containing over 700,000 cases with millions of images. RADTF combines a compact representation of the teaching-relevant content in radiology reports and a versatile search engine with the scale of the entire RIS-PACS collection of case material. ©RSNA, 2010
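
    The abstract does not show RADTF's NLP pipeline; as a flavor of the de-identification step it mentions, here is a minimal regex-based scrubber. The patterns and placeholder tags are illustrative only, not RADTF's actual rules, and real de-identification needs far broader coverage:

    ```python
    import re

    # Illustrative patterns only: record numbers, dates, and titled names.
    PATTERNS = [
        (re.compile(r"\b\d{7,10}\b"), "[MRN]"),
        (re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"), "[DATE]"),
        (re.compile(r"\b(?:Mr|Mrs|Ms|Dr)\.\s+[A-Z][a-z]+"), "[NAME]"),
    ]

    def deidentify(text):
        """Replace protected identifiers with placeholder tags."""
        for pattern, tag in PATTERNS:
            text = pattern.sub(tag, text)
        return text

    scrubbed = deidentify("Dr. Jones reviewed MRN 12345678 on 3/14/2009.")
    ```

    Clinical text untouched by the patterns passes through unchanged, so teaching-relevant findings survive the scrub.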

  9. HOW DO RADIOLOGISTS USE THE HUMAN SEARCH ENGINE?

    Science.gov (United States)

    Wolfe, Jeremy M; Evans, Karla K; Drew, Trafton; Aizenman, Avigael; Josephs, Emilie

    2016-06-01

    Radiologists perform many 'visual search tasks' in which they look for one or more instances of one or more types of target item in a medical image (e.g. cancer screening). To understand and improve how radiologists do such tasks, it must be understood how the human 'search engine' works. This article briefly reviews some of the relevant work on this aspect of medical image perception. Questions include: How are attention and the eyes guided in radiologic search? How is global (image-wide) information used in search? How might properties of human vision and human cognition lead to errors in radiologic search? © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  10. Comparing image search behaviour in the ARRS GoldMiner search engine and a clinical PACS/RIS.

    Science.gov (United States)

    De-Arteaga, Maria; Eggel, Ivan; Do, Bao; Rubin, Daniel; Kahn, Charles E; Müller, Henning

    2015-08-01

    Information search has changed the way we manage knowledge, and the ubiquity of information access has made search a frequent activity, whether via Internet search engines or increasingly via mobile devices. Medical information search is in this respect no different, and much research has been devoted to analyzing the way in which physicians aim to access information. Medical image search is a much smaller domain but has gained much attention as it has different characteristics from search for text documents. While web search log files have been analysed many times to better understand user behaviour, the log files of hospital internal systems for search in a PACS/RIS (Picture Archival and Communication System, Radiology Information System) have rarely been analysed. Such a comparison between a hospital PACS/RIS search and a web system for searching images of the biomedical literature is the goal of this paper. Objectives are to identify similarities and differences in search behaviour of the two systems, which could then be used to optimize existing systems and build new search engines. Log files of the ARRS GoldMiner medical image search engine (freely accessible on the Internet) containing 222,005 queries, and log files of Stanford's internal PACS/RIS search called radTF containing 18,068 queries were analysed. Each query was preprocessed and all query terms were mapped to the RadLex (Radiology Lexicon) terminology, a comprehensive lexicon of radiology terms created and maintained by the Radiological Society of North America, so the semantic content in the queries and the links between terms could be analysed, and synonyms for the same concept could be detected. RadLex was mainly created for use in radiology reports, to aid structured reporting and the preparation of educational material (Langlotz, 2006) [1]. 
In standard medical vocabularies such as MeSH (Medical Subject Headings) and UMLS (Unified Medical Language System) specific terms of radiology are often
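
    Mapping free-text query terms onto a controlled vocabulary such as RadLex, including synonym detection, can be sketched as a normalization lookup. The mini-lexicon below is hypothetical, not actual RadLex content:

    ```python
    # Hypothetical mini-lexicon: surface form -> canonical concept.
    LEXICON = {
        "ct": "computed tomography",
        "computed tomography": "computed tomography",
        "mri": "magnetic resonance imaging",
        "mr": "magnetic resonance imaging",
        "ptx": "pneumothorax",
        "pneumothorax": "pneumothorax",
    }

    def map_query(query):
        """Map each query term to its canonical concept; keep unknowns as-is."""
        return [LEXICON.get(t, t) for t in query.lower().split()]

    def same_concept(a, b):
        """Two terms are synonyms if both map to the same canonical concept."""
        ca, cb = LEXICON.get(a.lower()), LEXICON.get(b.lower())
        return ca is not None and ca == cb
    ```

    Once queries are normalized this way, "MRI" and "MR" count as the same concept, which is what makes cross-system comparison of query logs meaningful.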

  11. Internet Search Engines

    OpenAIRE

    Fatmaa El Zahraa Mohamed Abdou

    2004-01-01

    A general study about internet search engines. The study deals with seven main points: the difference between search engines and search directories, the components of search engines, the percentage of sites covered by search engines, the cataloging of sites, the time needed for sites to appear in search engines, search capabilities, and the types of search engines.

  12. Meta Search Engines.

    Science.gov (United States)

    Garman, Nancy

    1999-01-01

    Describes common options and features to consider in evaluating which meta search engine will best meet a searcher's needs. Discusses number and names of engines searched; other sources and specialty engines; search queries; other search options; and results options. (AEF)

  13. The internet and intelligent machines: search engines, agents and robots

    International Nuclear Information System (INIS)

    Achenbach, S.; Alfke, H.

    2000-01-01

    The internet plays an important role in a growing number of medical applications. Finding relevant information is not always easy, as the amount of information available on the Web is rising quickly. Even the best search engines can only collect links to a fraction of all existing Web pages, and many of these indexed documents have been changed or deleted. The vast majority of information on the Web is not searchable with conventional methods. New search strategies, technologies and standards are combined in Intelligent Search Agents (ISAs) and Robots, which can retrieve desired information in a targeted way. Conclusion: The article describes the differences between ISAs and conventional search engines and how communication between agents improves their ability to find information. Examples of existing ISAs are given, and the possible influence on current and future work in radiology is discussed. (orig.) [de

  14. A practical approach for inexpensive searches of radiology report databases.

    Science.gov (United States)

    Desjardins, Benoit; Hamilton, R Curtis

    2007-06-01

    We present a method to perform full-text searches of radiology reports for the large number of departments that do not have this ability as part of their radiology or hospital information system. A tool written in Microsoft Access (front end) has been designed to search a server (back end) containing the indexed weekly backup copy of the full relational database extracted from a radiology information system (RIS). This front-end/back-end approach has been implemented in a large academic radiology department and is used for teaching, research, and administrative purposes. The weekly second backup of the 80-GB, 4-million-record RIS database takes 2 hours; further indexing of the exported radiology reports takes 6 hours. Individual searches typically take less than 1 minute on the indexed database versus 30-60 minutes on the nonindexed database. Guidelines to properly address privacy and institutional review board issues are closely followed by all users. This method has the potential to improve teaching, research, and administrative programs within radiology departments that cannot afford more expensive technology.
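
    The published tool is built in Microsoft Access over an indexed RIS backup; the same back-end idea can be sketched with Python's bundled SQLite, with made-up report text and a plain LIKE substring scan standing in for the real indexed search (the full-table scan is exactly why the paper indexes the exported reports):

    ```python
    import sqlite3

    # In-memory stand-in for the weekly RIS backup copy.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE reports (id INTEGER PRIMARY KEY, body TEXT)")
    conn.executemany(
        "INSERT INTO reports (id, body) VALUES (?, ?)",
        [
            (1, "Small apical pneumothorax on the right."),
            (2, "Lungs clear. No pneumothorax or effusion."),
            (3, "MRI shows a T2-hyperintense hepatic lesion."),
        ],
    )

    def full_text_search(term):
        """Substring search over report bodies; SQLite's LIKE is
        case-insensitive for ASCII text, but it scans every row."""
        cur = conn.execute(
            "SELECT id FROM reports WHERE body LIKE ?", (f"%{term}%",)
        )
        return [row[0] for row in cur]

    hits = full_text_search("pneumothorax")
    ```

    A production version would replace the LIKE scan with a dedicated full-text index, which is what turns the 30-60 minute searches the paper reports into sub-minute ones.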

  15. SPARTex: A Vertex-Centric Framework for RDF Data Analytics

    KAUST Repository

    Abdelaziz, Ibrahim

    2015-08-31

    A growing number of applications require combining SPARQL queries with generic graph search on RDF data. However, the lack of procedural capabilities in SPARQL makes it inappropriate for graph analytics. Moreover, RDF engines focus on SPARQL query evaluation whereas graph management frameworks perform only generic graph computations. In this work, we bridge the gap by introducing SPARTex, an RDF analytics framework based on the vertex-centric computation model. In SPARTex, user-defined vertex-centric programs can be invoked from SPARQL as stored procedures. SPARTex allows the execution of a pipeline of graph algorithms without the need for multiple reads/writes of input data and intermediate results. We use a cost-based optimizer for minimizing the communication cost. SPARTex evaluates queries that combine SPARQL and generic graph computations orders of magnitude faster than existing RDF engines. We demonstrate a real system prototype of SPARTex running on a local cluster using real and synthetic datasets. SPARTex has a real-time graphical user interface that allows participants to write regular SPARQL queries, use our proposed SPARQL extension to declaratively invoke graph algorithms, or combine/pipeline both SPARQL querying and generic graph analytics.
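
    A vertex-centric program, in the Pregel-style model SPARTex builds on, runs a per-vertex compute function over supersteps and exchanges messages along edges. A minimal single-machine sketch (not SPARTex's actual API) computing connected components by label propagation:

    ```python
    def vertex_centric_components(edges, num_vertices):
        """Pregel-style label propagation: each vertex repeatedly adopts the
        smallest component label it has seen and messages its neighbors."""
        neighbors = {v: [] for v in range(num_vertices)}
        for u, v in edges:
            neighbors[u].append(v)
            neighbors[v].append(u)

        value = {v: v for v in range(num_vertices)}  # component label
        active = set(range(num_vertices))            # vertices to run

        while active:                                # one superstep per loop
            messages = {}
            for v in active:                         # send label to neighbors
                for n in neighbors[v]:
                    messages[n] = min(messages.get(n, value[n]), value[v])
            active = set()
            for v, m in messages.items():
                if m < value[v]:                     # smaller label received
                    value[v] = m
                    active.add(v)                    # stay active next step
        return value

    labels = vertex_centric_components([(0, 1), (1, 2), (3, 4)], 5)
    ```

    Computation halts when no vertex receives a smaller label, i.e. when every component has converged on its minimum vertex id.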

  16. Myanmar Language Search Engine

    OpenAIRE

    Pann Yu Mon; Yoshiki Mikami

    2011-01-01

    With the enormous growth of the World Wide Web, search engines play a critical role in retrieving information from the borderless Web. Although many search engines are available for the major languages, they are not proficient for less-computerized languages, including Myanmar. The main reason is that those search engines do not consider the specific features of those languages. A search engine capable of searching Web documents written in those languages is highly n...

  17. Teknik Perangkingan Meta-search Engine

    OpenAIRE

    Puspitaningrum, Diyah

    2014-01-01

    A meta-search engine organizes and merges the results of multiple search engines with the aim of improving the precision of web document retrieval. This survey of meta-search engine ranking techniques discusses preprocessing issues, ranking, and various techniques for merging the results of different search engines (multi-combination). Implementation issues in combining 2 and 3 search engines are also highlighted. The paper also discusses research direc...

  18. Sound Search Engine Concept

    DEFF Research Database (Denmark)

    2006-01-01

    Sound search is provided by the major search engines; however, indexing is text based, not sound based. We will establish a dedicated sound search service based on sound feature indexing. The current demo shows the concept of the sound search engine. The first engine will be released June...

  19. Radiological Engineering: A graduate engineering - based curriculum for radiation protection

    International Nuclear Information System (INIS)

    Kearfott, K.J.; Wepfer, W.J.

    1994-01-01

    Several U.S. universities maintain formal graduate health physics curricula within their Colleges of Engineering. The term radiological engineering was coined to describe the discipline of applying engineering principles to the radiation protection aspects of nuclear technology. Radiological engineering programmes may require a specific core group of courses such as radiation biology, radiation protection practice, nuclear physics, radiation detectors, and radiation dosimetry. Students might then specialise in environmental, nuclear facilities, or medical applications areas by selecting advanced courses and graduate design or research projects. In some instances the master's degree may be completed through remotely-delivered lectures. Such programmes promise to assist in educating a new group of engineering professionals dedicated to the safe utilisation of nuclear technology. The Georgia Institute of Technology's programme will serve as the specific example for this report. 8 refs., 1 fig

  20. Custom Search Engines: Tools & Tips

    Science.gov (United States)

    Notess, Greg R.

    2008-01-01

    Few have the resources to build a Google or Yahoo! from scratch. Yet anyone can build a search engine based on a subset of the large search engines' databases. Use Google Custom Search Engine or Yahoo! Search Builder or any of the other similar programs to create a vertical search engine targeting sites of interest to users. The basic steps to…

  1. BIOMedical Search Engine Framework: Lightweight and customized implementation of domain-specific biomedical search engines.

    Science.gov (United States)

    Jácome, Alberto G; Fdez-Riverola, Florentino; Lourenço, Anália

    2016-07-01

    Text mining and semantic analysis approaches can be applied to the construction of biomedical domain-specific search engines and provide an attractive alternative to create personalized and enhanced search experiences. Therefore, this work introduces the new open-source BIOMedical Search Engine Framework for the fast and lightweight development of domain-specific search engines. The rationale behind this framework is to incorporate core features typically available in search engine frameworks with flexible and extensible technologies to retrieve biomedical documents, annotate meaningful domain concepts, and develop highly customized Web search interfaces. The BIOMedical Search Engine Framework integrates taggers for major biomedical concepts, such as diseases, drugs, genes, proteins, compounds and organisms, and enables the use of domain-specific controlled vocabulary. Technologies from the Typesafe Reactive Platform, the AngularJS JavaScript framework and the Bootstrap HTML/CSS framework support the customization of the domain-oriented search application. Moreover, the RESTful API of the BIOMedical Search Engine Framework allows the integration of the search engine into existing systems or complete personalization of the web interface. The construction of the Smart Drug Search is described as proof-of-concept of the BIOMedical Search Engine Framework. This public search engine catalogs scientific literature about antimicrobial resistance, microbial virulence and similar topics. The keyword-based queries of the users are transformed into concepts and search results are presented and ranked accordingly. The semantic graph view portrays all the concepts found in the results, and the researcher may look into the relevance of different concepts, the strength of direct relations, and non-trivial, indirect relations. The number of occurrences of the concept shows its importance to the query, and the frequency of concept co-occurrence is indicative of biological relations
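
    The semantic graph view described above can be approximated by counting concept co-occurrence across retrieved documents. A sketch with made-up concept annotations (the framework's actual taggers and graph model are not shown in the abstract):

    ```python
    from collections import Counter
    from itertools import combinations

    def cooccurrence_graph(annotated_docs):
        """Count how often each concept pair appears in the same document;
        high pair counts suggest a possible biological relation."""
        pair_counts = Counter()
        concept_counts = Counter()
        for concepts in annotated_docs:
            unique = sorted(set(concepts))
            concept_counts.update(unique)
            pair_counts.update(combinations(unique, 2))
        return concept_counts, pair_counts

    # Toy annotations, as a concept tagger might emit per document.
    docs = [
        ["mecA", "methicillin", "S. aureus"],
        ["mecA", "S. aureus"],
        ["efflux pump", "tetracycline"],
    ]
    concepts, pairs = cooccurrence_graph(docs)
    ```

    Concept counts give the importance of each concept to the result set, while pair counts are the edge weights of the co-occurrence graph.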

  2. Start Your Engines: Surfing with Search Engines for Kids.

    Science.gov (United States)

    Byerly, Greg; Brodie, Carolyn S.

    1999-01-01

    Suggests that to be an effective educator and user of the Web it is essential to know the basics about search engines. Presents tips for using search engines. Describes several search engines for children and young adults, as well as some general filtered search engines for children. (AEF)

  3. Multimedia Search Engines : Concept, Performance, and Types

    OpenAIRE

    Sayed Rabeh Sayed

    2005-01-01

    A study of multimedia search engines: it begins with a definition of search engines in general and of multimedia search engines, then explains how they work and divides them into video search engines, image search engines, and audio search engines. Finally, it reviews a sample of multimedia search engines.

  4. Market Dominance and Search Quality in the Search Engine Market

    NARCIS (Netherlands)

    Lianos, I.; Motchenkova, E.I.

    2013-01-01

    We analyze a search engine market from a law and economics perspective and incorporate the choice of quality-improving innovations by a search engine platform in a two-sided model of the Internet search engine market. In the proposed framework, we first discuss the legal issues the search engine market raises

  5. INTERFACING GOOGLE SEARCH ENGINE TO CAPTURE USER WEB SEARCH BEHAVIOR

    OpenAIRE

    Fadhilah Mat Yamin; T. Ramayah

    2013-01-01

    The behaviour of the searcher when using the search engine, especially during query formulation, is crucial. Search engines capture users’ activities in the search log, which is stored at the search engine server. Because this search log is difficult to obtain, this paper proposes and develops an interface framework to interface with the Google search engine. This interface captures users’ queries before redirecting them to Google. The analysis of the search log shows that users are utili...

  6. Vertical Search Engines

    OpenAIRE

    Curran, Kevin; Mc Glinchey, Jude

    2017-01-01

    This paper outlines the growth in popularity of vertical search engines, their origins, the differences between them and well-known broad based search engines such as Google and Yahoo. We also discuss their use in business-to-business, their marketing and advertising costs, what the revenue streams are and who uses them.

  7. [Advanced online search techniques and dedicated search engines for physicians].

    Science.gov (United States)

    Nahum, Yoav

    2008-02-01

    In recent years search engines have become an essential tool in the work of physicians. This article will review advanced search techniques from the world of information specialists, as well as some advanced search engine operators that may help physicians improve their online search capabilities, and maximize the yield of their searches. This article also reviews popular dedicated scientific and biomedical literature search engines.

  8. Web Search Engines

    OpenAIRE

    Rajashekar, TB

    1998-01-01

    The World Wide Web is emerging as an all-in-one information source. Tools for searching Web-based information include search engines, subject directories and meta search tools. We take a look at key features of these tools and suggest practical hints for effective Web searching.

  9. HOW DO RADIOLOGISTS USE THE HUMAN SEARCH ENGINE?

    Science.gov (United States)

    Wolfe, Jeremy M.; Evans, Karla K.; Drew, Trafton; Aizenman, Avigael; Josephs, Emilie

    2016-01-01

    Radiologists perform many ‘visual search tasks’ in which they look for one or more instances of one or more types of target item in a medical image (e.g. cancer screening). To understand and improve how radiologists do such tasks, it must be understood how the human ‘search engine’ works. This article briefly reviews some of the relevant work on this aspect of medical image perception. Questions include: How are attention and the eyes guided in radiologic search? How is global (image-wide) information used in search? How might properties of human vision and human cognition lead to errors in radiologic search? PMID:26656078

  10. Da "Search engines" a "Shop engines"

    OpenAIRE

    Lupi, Mauro

    2001-01-01

    The change occurring in “search engines” is a move toward e-commerce, transforming all the main search engines into conveyors of information and commercial suggestions and basing their business on this activity. In the near future we will find two main kinds of search engines: on one side, portals that will offer a general orientation guide, acting as conveyors for services and products to buy; on the other side, vertical portals able to offer information and products on specifi...

  11. [Development of domain specific search engines].

    Science.gov (United States)

    Takai, T; Tokunaga, M; Maeda, K; Kaminuma, T

    2000-01-01

    As cyberspace explodes at a pace that nobody ever imagined, it becomes very important to search it efficiently and effectively. One solution to this problem is search engines, and a lot of commercial search engines have already been put on the market. However, these search engines return results so cumbersome that domain-specific experts cannot tolerate them. Using dedicated hardware and a commercial software package called OpenText, we have tried to develop several domain-specific search engines. These engines cover our institute's Web contents, drugs, chemical safety, endocrine disruptors, and emergency response for chemical hazards. The engines have been on our Web site for testing.

  12. Internet use in radiology: results of a nationwide survey

    Energy Technology Data Exchange (ETDEWEB)

    Vorbeck, F; Zimmermann, C; Vorbeck-Meister, I; Kainberger, F; Imhof, H

    1999-08-01

    Purpose: To determine the number of radiologists who currently have Internet access, their patterns of Internet use for radiology purposes, the web sites they would recommend, and the Internet resources they would like to see in the future. In addition, this study analyzed the best way to find nationwide radiological sites and their content. Materials and Methods: In a nationwide survey, 854 Austrian radiologists were asked to fill out and return a questionnaire about Internet access, current problems, current and future use, which web sites they recommend, and the use of e-mail. Next, the available nationwide radiological sites were searched with seven major search engines using 37 different keywords, as well as by category search and by searching for links on the homepages of the radiological departments of all Austrian universities. The information offered on the pages found was then classified into categories. Results: Of the 210 (24.6%) radiologists who returned the questionnaire, 154 (73%) had Internet access. Time expenditure was considered the main problem in using the Internet. The Internet was used for literature research by 69% of the radiologists with Internet access, for e-mail by 60%, and for congress information by 57%. In the future, 43% would like to read electronic journals more often and 39% would like to use the web more intensively for scientific congresses. At present, we found 17 radiological web sites in Austria. The most promising way to find these sites was to use the search engines Alta Vista and HotBot. Fifteen (88%) sites offered information for patients, seven (41%) for radiologists, five (29%) for students, and four (24%) for researchers. Summary: Many radiologists in Austria already have Internet access, although time expenditure was considered the main problem with Internet use. Survey responses showed a need for electronic journals. 
In our view, universities and radiological societies

  13. Internet use in radiology: results of a nationwide survey

    International Nuclear Information System (INIS)

    Vorbeck, F.; Zimmermann, C.; Vorbeck-Meister, I.; Kainberger, F.; Imhof, H.

    1999-01-01

    Purpose: To determine the number of radiologists who currently have Internet access, their patterns of Internet use for radiology purposes, the web sites they would recommend, and the Internet resources they would like to see in the future. In addition, this study analyzed the best way to find nationwide radiological sites and their content. Materials and Methods: In a nationwide survey, 854 Austrian radiologists were asked to fill out and return a questionnaire about Internet access, current problems, current and future use, which web sites they recommend, and the use of e-mail. Next, the available nationwide radiological sites were searched with seven major search engines using 37 different keywords, as well as by category search and by searching for links on the homepages of the radiological departments of all Austrian universities. The information offered on the pages found was then classified into categories. Results: Of the 210 (24.6%) radiologists who returned the questionnaire, 154 (73%) had Internet access. Time expenditure was considered the main problem in using the Internet. The Internet was used for literature research by 69% of the radiologists with Internet access, for e-mail by 60%, and for congress information by 57%. In the future, 43% would like to read electronic journals more often and 39% would like to use the web more intensively for scientific congresses. At present, we found 17 radiological web sites in Austria. The most promising way to find these sites was to use the search engines Alta Vista and HotBot. Fifteen (88%) sites offered information for patients, seven (41%) for radiologists, five (29%) for students, and four (24%) for researchers. Summary: Many radiologists in Austria already have Internet access, although time expenditure was considered the main problem with Internet use. Survey responses showed a need for electronic journals. 
In our view, universities and radiological societies

  14. TECHNIQUES USED IN SEARCH ENGINE MARKETING

    OpenAIRE

    Assoc. Prof. Liviu Ion Ciora Ph. D; Lect. Ion Buligiu Ph. D

    2010-01-01

    Search engine marketing (SEM) is a generic term covering a variety of marketing techniques intended to attract web traffic from search engines and directories. SEM is a popular tool since it offers the potential of substantial gains with minimal investment. On the one hand, most search engines and directories offer free or extremely cheap listings. On the other hand, the traffic coming from search engines and directories tends to be motivated for acquisitions, making these visitors some of the ...

  15. Trends in Publications in Radiology Journals Designated as Relating to Patient-Centered Care.

    Science.gov (United States)

    Rosenkrantz, Andrew B; Rawson, James V

    2017-05-01

    To assess trends in publications in radiology journals designated as dealing with patient-centered care. PubMed was searched for articles in radiology journals for which the article's record referenced patient-centered/patient-centric care. Among these, original research articles were identified and assigned major themes. Trends were assessed descriptively. A total of 115 articles in radiology journals designated as dealing with patient-centered care were identified, including 40 original research articles. The number of articles annually ranged from 0 to 4 in 2000-2008, 5 to 9 in 2010-2012, 14 to 15 in 2013-2014, and 25 in 2015. Only four radiology journals had published more than one of the original research articles. The original research articles' most common themes were: optimization of patients' access to reports and images (n=7); patients' examination experience (n=5); image evaluation (n=4); radiologists meeting with patients (n=4); improving patients' knowledge of imaging (n=3); examination wait times/efficiency (n=3); examination utilization/appropriateness (n=3); and IT enhancements (n=3). A total of 13 of the 40 original research articles solicited opinions from patients. One study involved patients in educating trainees regarding patient-centered care. No study involved patients in system-level decisions regarding health care design and delivery. Articles dealing with patient-centered care in radiology are increasing, though they remain concentrated in a limited number of journals. Though major themes included image/report access, patient experiences, and radiologists meeting with patients, many studies dealt with less clearly patient-centric topics such as examination interpretation, while inclusion of patients in systems design was lacking. Further research in radiology is encouraged to target a broader range of ideals of patient-centered care, such as diversity, autonomy, and compassion, and to incorporate greater patient engagement. Copyright © 2016

  16. Dyniqx: a novel meta-search engine for metadata based cross search

    OpenAIRE

    Zhu, Jianhan; Song, Dawei; Eisenstadt, Marc; Barladeanu, Cristi; Rüger, Stefan

    2008-01-01

    The effect of metadata in collection fusion has not been sufficiently studied. In response to this, we present a novel meta-search engine called Dyniqx for metadata-based cross search. Dyniqx exploits the availability of metadata in academic search services such as PubMed and Google Scholar for fusing search results from heterogeneous search engines. In addition, metadata from these search engines are used for generating dynamic query controls such as sliders and tick boxes, which are ...

  17. Search Engines: Gateway to a New ``Panopticon''?

    Science.gov (United States)

    Kosta, Eleni; Kalloniatis, Christos; Mitrou, Lilian; Kavakli, Evangelia

    Nowadays, Internet users depend on various search engines to find the information they request on the Web. Although most users feel that they are and remain anonymous when they place their search queries, reality proves otherwise. The increasing importance of search engines for locating desired information on the Internet usually leads to considerable inroads into the privacy of users. The scope of this paper is to study the main privacy issues with regard to search engines, such as the anonymisation of search logs and their retention period, and to examine the applicability of European data protection legislation to non-EU search engine providers. Ixquick, a privacy-friendly meta search engine, will be presented as an alternative to the privacy-intrusive existing practices of search engines.
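One common approach to the search-log anonymisation this abstract mentions is to replace user identifiers with salted one-way hashes before logs are retained. The sketch below is illustrative only and is not drawn from the paper; the salt value and token length are assumptions, and real schemes additionally truncate IP addresses and expire logs after the retention period.

```python
import hashlib

def anonymise_log_entry(ip, query, salt=b"rotate-me-daily"):
    """Replace the user's IP with a salted one-way hash so sessions can
    still be grouped for analytics without storing the raw address."""
    token = hashlib.sha256(salt + ip.encode()).hexdigest()[:16]
    return {"user": token, "query": query}

entry = anonymise_log_entry("203.0.113.7", "knee mri protocol")
print(entry["user"])  # pseudonymous token, not the raw IP
```

Rotating the salt (e.g. daily) limits how long pseudonymous identifiers can be linked across sessions.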

  18. Human-Centric Interfaces for Ambient Intelligence

    CERN Document Server

    Aghajan, Hamid; Delgado, Ramon Lopez-Cozar

    2009-01-01

    To create truly effective human-centric ambient intelligence systems both engineering and computing methods are needed. This is the first book to bridge data processing and intelligent reasoning methods for the creation of human-centered ambient intelligence systems. Interdisciplinary in nature, the book covers topics such as multi-modal interfaces, human-computer interaction, smart environments and pervasive computing, addressing principles, paradigms, methods and applications. This book will be an ideal reference for university researchers, R&D engineers, computer engineers, and graduate s

  19. Self-learning search engines

    NARCIS (Netherlands)

    Schuth, A.

    2015-01-01

    How does a search engine such as Google know which search results to display? There are many competing algorithms that generate search results, but which one works best? We developed a new probabilistic method for quickly comparing large numbers of search algorithms by examining the results users

  20. NASA Indexing Benchmarks: Evaluating Text Search Engines

    Science.gov (United States)

    Esler, Sandra L.; Nelson, Michael L.

    1997-01-01

    The current proliferation of on-line information resources underscores the requirement for the ability to index collections of information and search and retrieve them in a convenient manner. This study develops criteria for analytically comparing the index and search engines and presents results for a number of freely available search engines. A product of this research is a toolkit capable of automatically indexing, searching, and extracting performance statistics from each of the focused search engines. This toolkit is highly configurable and has the ability to run these benchmark tests against other engines as well. Results demonstrate that the tested search engines can be grouped into two levels. Level one engines are efficient on small to medium sized data collections, but show weaknesses when used for collections 100MB or larger. Level two search engines are recommended for data collections up to and beyond 100MB.

  1. ROLE AND IMPORTANCE OF SEARCH ENGINE OPTIMIZATION

    OpenAIRE

    Gurneet Kaur

    2017-01-01

    Search Engines are an indispensable platform for users all over the globe to search for relevant information online. Search Engine Optimization (SEO) is the exercise of improving the position of a website in search engine rankings for a chosen set of keywords. SEO is divided into two parts: On-Page and Off-Page SEO. In order to be successful, both areas require equal attention. This paper aims to explain the functioning of search engines along with the role and importance of search e...

  2. Search Engine Liability for Copyright Infringement

    Science.gov (United States)

    Fitzgerald, B.; O'Brien, D.; Fitzgerald, A.

    The chapter provides a broad overview to the topic of search engine liability for copyright infringement. In doing so, the chapter examines some of the key copyright law principles and their application to search engines. The chapter also provides a discussion of some of the most important cases to be decided within the courts of the United States, Australia, China and Europe regarding the liability of search engines for copyright infringement. Finally, the chapter will conclude with some thoughts for reform, including how copyright law can be amended in order to accommodate and realise the great informative power which search engines have to offer society.

  3. Search engines that learn from their users

    NARCIS (Netherlands)

    Schuth, A.G.

    2016-01-01

    More than half the world’s population uses web search engines, resulting in over half a billion search queries every single day. For many people web search engines are among the first resources they go to when a question arises. Moreover, search engines have for many become the most trusted route to

  4. New generation of the multimedia search engines

    Science.gov (United States)

    Mijes Cruz, Mario Humberto; Soto Aldaco, Andrea; Maldonado Cano, Luis Alejandro; López Rodríguez, Mario; Rodríguez Vázqueza, Manuel Antonio; Amaya Reyes, Laura Mariel; Cano Martínez, Elizabeth; Pérez Rosas, Osvaldo Gerardo; Rodríguez Espejo, Luis; Flores Secundino, Jesús Abimelek; Rivera Martínez, José Luis; García Vázquez, Mireya Saraí; Zamudio Fuentes, Luis Miguel; Sánchez Valenzuela, Juan Carlos; Montoya Obeso, Abraham; Ramírez Acosta, Alejandro Álvaro

    2016-09-01

    Current search engines are based upon search methods that involve the combination of words (text-based search), which has been efficient until now. However, the Internet's growing demand indicates that it becomes more diverse with each passing day. Text-based searches are becoming limited, as most of the information on the Internet is found in different types of content denominated multimedia content (images, audio files, video files). Indeed, what needs to be improved in current search engines is search content and precision, as well as an accurate display of the search results expected by the user. Any search can be made more precise by using more text parameters, but that does not improve the content or speed of the search itself. One solution is to improve search through characterization of the content of multimedia files. In this article, an analysis of new-generation multimedia search engines is presented, focusing on the needs arising from new technologies. Multimedia content has become a central part of the flow of information in our daily life. This reflects the necessity of having multimedia search engines, as well as knowing the real tasks they must fulfill. Through this analysis, it is shown that there are not many search engines that can perform content searches. The research area of new-generation multimedia search engines is a multidisciplinary area in constant growth, generating tools that satisfy the different needs of new-generation systems.

  5. Internet-centric collaborative design in a distributed environment

    International Nuclear Information System (INIS)

    Kim, Hyun; Kim, Hyoung Sun; Do, Nam Chul; Lee, Jae Yeol; Lee, Joo Haeng; Myong, Jae Hyong

    2001-01-01

    Recently, advanced information technologies, including Internet-related technology and distributed object technology, have opened new possibilities for collaborative design. In this paper, we discuss computer support for collaborative design in a distributed environment. The proposed system is an Internet-centric system composed of an engineering framework, a collaborative virtual workspace, and engineering services. It allows distributed designers to more efficiently and collaboratively carry out their engineering tasks throughout the design process

  6. Combining Search Engines for Comparative Proteomics

    Science.gov (United States)

    Tabb, David

    2012-01-01

    Many proteomics laboratories have found spectral counting to be an ideal way to recognize biomarkers that differentiate cohorts of samples. This approach assumes that proteins that differ in quantity between samples will generate different numbers of identifiable tandem mass spectra. Increasingly, researchers are employing multiple search engines to maximize the identifications generated from data collections. This talk evaluates four strategies to combine information from multiple search engines in comparative proteomics. The “Count Sum” model pools the spectra across search engines. The “Vote Counting” model combines the judgments from each search engine by protein. Two other models employ parametric and non-parametric analyses of protein-specific p-values from different search engines. We evaluated the four strategies in two different data sets. The ABRF iPRG 2009 study generated five LC-MS/MS analyses of “red” E. coli and five analyses of “yellow” E. coli. NCI CPTAC Study 6 generated five concentrations of Sigma UPS1 spiked into a yeast background. All data were identified with X!Tandem, Sequest, MyriMatch, and TagRecon. For both sample types, “Vote Counting” appeared to manage the diverse identification sets most effectively, yielding heightened discrimination as more search engines were added.
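The "Vote Counting" strategy described above can be sketched as follows: each search engine casts one vote per protein it identifies, and proteins are then compared by vote totals. This is a minimal illustration, not the authors' implementation; the engine assignments and protein accessions are invented for the example.

```python
from collections import Counter

def vote_count(engine_results):
    """Combine per-engine protein identifications by vote counting.

    engine_results: list of sets, one per search engine, each containing
    the protein accessions that engine identified in a sample.
    Returns a Counter mapping protein -> number of engines identifying it.
    """
    votes = Counter()
    for proteins in engine_results:
        votes.update(proteins)
    return votes

# Hypothetical identifications from three engines for one sample
engines = [
    {"P001", "P002", "P003"},   # e.g. X!Tandem
    {"P001", "P003"},           # e.g. Sequest
    {"P001", "P004"},           # e.g. MyriMatch
]
votes = vote_count(engines)
print(votes["P001"])  # 3 -- identified by all three engines
```

In a comparative setting, the per-protein vote totals from the two cohorts would then be contrasted, which is where the heightened discrimination reported in the talk comes from.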

  7. Analysis of Search Engines and Meta Search Engines\\\\\\' Position by University of Isfahan Users Based on Rogers\\\\\\' Diffusion of Innovation Theory

    Directory of Open Access Journals (Sweden)

    Maryam Akbari

    2012-10-01

    Full Text Available The present study investigated the adoption of search engines and meta search engines by University of Isfahan users during 2009-2010, based on Rogers' diffusion of innovation theory. The main aim of the research was to study the rate of adoption and to recognize the potentials and effective tools in the adoption of search engines and meta search engines among University of Isfahan users. The research method was a descriptive survey study. The population of the study was all postgraduate students of the University of Isfahan; 351 students were selected as the sample using a stratified random sampling method. A questionnaire was used for collecting data. The collected data were analyzed using SPSS 16 with both descriptive and analytic statistics. For descriptive statistics, frequency, percentage, and mean were used, while for analytic statistics the t-test and the Kruskal-Wallis nonparametric test (H-test) were used. The findings of the t-test and the Kruskal-Wallis test indicated that the mean adoption of search engines and meta search engines did not show statistically significant differences by gender, level of education, or faculty. The adoption process for special search engines differed by gender but not by level of education or faculty. Other results indicated that, among general search engines, Google had the highest adoption rate; among special search engines, Google Scholar, and among meta search engines, Mamma had the highest adoption rates. Findings also showed that friends played an important role in how students adopted general search engines, while professors played an important role in how students adopted special search engines and meta search engines. Moreover, results showed that the place where students became most acquainted with search engines and meta search engines was the university. The findings showed that the adoption-rate curve was neither normal nor S-shaped. Moreover

  8. Comparative analysis of some search engines

    Directory of Open Access Journals (Sweden)

    Taiwo O. Edosomwan

    2010-10-01

    Full Text Available We compared the information retrieval performance of some popular search engines (namely Google, Yahoo, AlltheWeb, Gigablast, Zworks, AltaVista, and Bing/MSN) in response to a list of ten queries varying in complexity. These queries were run on each search engine, and the precision and response time of the retrieved results were recorded. The first ten documents in each retrieval output were evaluated as being ‘relevant’ or ‘non-relevant’ to assess the search engine’s precision. To evaluate response time, normalised recall ratios were calculated at various cut-off points for each query and search engine. This study shows that Google appears to be the best search engine in terms of both average precision (70%) and average response time (2 s). Gigablast and AlltheWeb performed the worst overall in this study.
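The precision measure used in this kind of evaluation, the fraction of the first ten retrieved documents judged relevant, can be computed as follows. This is a generic sketch of precision-at-k; the relevance judgments shown are hypothetical, not data from the study.

```python
def precision_at_k(relevance_judgments, k=10):
    """Precision@k: fraction of the first k results judged relevant.

    relevance_judgments: list of booleans in rank order
    (True = relevant, False = non-relevant).
    """
    top = relevance_judgments[:k]
    return sum(top) / len(top) if top else 0.0

# Hypothetical judgments for one query on one engine:
judged = [True, True, False, True, True, True, False, True, False, True]
print(precision_at_k(judged))  # 0.7, i.e. 70% precision
```

Averaging this value over all ten queries yields the per-engine average precision figures reported in the abstract.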

  9. Search engine optimization

    OpenAIRE

    Marolt, Klemen

    2013-01-01

    Search engine optimization techniques, often shortened to “SEO,” should lead to first positions in organic search results. Some optimization techniques do not change over time, yet still form the basis for SEO. However, as the Internet and web design evolves dynamically, new optimization techniques flourish and flop. Thus, we looked at the most important factors that can help to improve positioning in search results. It is important to emphasize that none of the techniques can guarantee high ...

  10. Chemical-text hybrid search engines.

    Science.gov (United States)

    Zhou, Yingyao; Zhou, Bin; Jiang, Shumei; King, Frederick J

    2010-01-01

    As the amount of chemical literature increases, it is critical that researchers be enabled to accurately locate documents related to a particular aspect of a given compound. Existing solutions, based on text and chemical search engines alone, suffer from the inclusion of "false negative" and "false positive" results, and cannot accommodate the diverse repertoire of formats currently available for chemical documents. To address these concerns, we developed an approach called Entity-Canonical Keyword Indexing (ECKI), which converts a chemical entity embedded in a data source into its canonical keyword representation prior to being indexed by text search engines. We implemented ECKI using Microsoft Office SharePoint Server Search, and the resultant hybrid search engine not only supported complex mixed chemical and keyword queries but also was applied to both intranet and Internet environments. We envision that the adoption of ECKI will empower researchers to pose more complex search questions that were not readily attainable previously and to obtain answers at much improved speed and accuracy.
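The core idea of ECKI, rewriting every surface form of a chemical entity to a single canonical keyword before text indexing, might be sketched like this. The synonym table and the canonical token are invented purely for illustration; the actual system resolves entities chemically rather than through a hand-built lookup table.

```python
# Hypothetical synonym table; a real system would use a chemistry
# toolkit to canonicalize names, SMILES strings, and other forms.
CANONICAL = {
    "aspirin": "CHEM_ASPIRIN",
    "acetylsalicylic acid": "CHEM_ASPIRIN",
    "CC(=O)OC1=CC=CC=C1C(=O)O": "CHEM_ASPIRIN",  # SMILES form
}

def canonicalize_tokens(text):
    """Replace recognized chemical entities with canonical keywords,
    so a plain text index treats all synonyms as one searchable term."""
    lowered = text.lower()
    for name, canon in CANONICAL.items():
        lowered = lowered.replace(name.lower(), canon)
    return lowered

doc = "Acetylsalicylic acid (aspirin) inhibits COX enzymes."
print(canonicalize_tokens(doc))
```

After this rewrite, a query for any synonym can be mapped to the same canonical keyword, so documents mentioning the compound under any name match, which is exactly the false-negative problem the abstract describes.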

  11. The evolution of radiology information management

    International Nuclear Information System (INIS)

    Patacsil, L.

    2002-01-01

    Aim: The use of PACS and the need for images and clinical data is increasing exponentially. This presentation will provide a historical overview of radiology information systems (RIS) and their interaction with hospital information systems (HIS), PACS systems and modalities. Materials and Methods: The various approaches to interfacing and integrating RIS and PACS systems will be described as well as their impact on radiology departmental work-flow and the productivity of radiologists, technologists and other RIS users. Specifically, the image-centric and RIS-centric approaches to PACS will be compared. Results: Instant access to vital clinical information enables radiology departments to go beyond simply managing processes to making fundamental improvements in productivity and care processes. Unity of design translates into simplicity of operations. Integration of historically separate applications makes the environment easier to support, reduces turnaround time, and improves patient care. Conclusion: Finally, insight on new technologies and future trends in PACS, system integration and work-flow strategies will be presented in light of delivery of both clinical images and clinical data

  12. Regulating Search Engines: Taking Stock And Looking Ahead

    OpenAIRE

    Gasser, Urs

    2006-01-01

    Since the creation of the first pre-Web Internet search engines in the early 1990s, search engines have become almost as important as email as a primary online activity. Arguably, search engines are among the most important gatekeepers in today's digitally networked environment. Thus, it does not come as a surprise that the evolution of search technology and the diffusion of search engines have been accompanied by a series of conflicts among stakeholders such as search operators, content crea...

  13. Database Search Engines: Paradigms, Challenges and Solutions.

    Science.gov (United States)

    Verheggen, Kenneth; Martens, Lennart; Berven, Frode S; Barsnes, Harald; Vaudel, Marc

    2016-01-01

    The first step in identifying proteins from mass spectrometry based shotgun proteomics data is to infer peptides from tandem mass spectra, a task generally achieved using database search engines. In this chapter, the basic principles of database search engines are introduced with a focus on open source software, and the use of database search engines is demonstrated using the freely available SearchGUI interface. This chapter also discusses how to tackle general issues related to sequence database searching and shows how to minimize their impact.

  14. Game-centric pedagogy and curriculums in higher education

    DEFF Research Database (Denmark)

    Nørgård, Rikke Toft; Murray, John; Morgan, James

    2017-01-01

    This paper examines some recent trends in game-centric education for STEAM (science, technology, engineering, art and mathematics) fields, especially those that explore and promote collaboration among multiple disciplines. We discuss various multimodal design research activities that draw upon the a...

  15. Children's Search Engines from an Information Search Process Perspective.

    Science.gov (United States)

    Broch, Elana

    2000-01-01

    Describes cognitive and affective characteristics of children and teenagers that may affect their Web searching behavior. Reviews literature on children's searching in online public access catalogs (OPACs) and using digital libraries. Profiles two Web search engines. Discusses some of the difficulties children have searching the Web, in the…

  16. A fuzzy-match search engine for physician directories.

    Science.gov (United States)

    Rastegar-Mojarad, Majid; Kadolph, Christopher; Ye, Zhan; Wall, Daniel; Murali, Narayana; Lin, Simon

    2014-11-04

    A search engine to find physicians' information is a basic but crucial function of a health care provider's website. Inefficient search engines, which return no results or incorrect results, can lead to patient frustration and potential customer loss. A search engine that can handle misspellings and spelling variations of names is needed, as the United States (US) has culturally, racially, and ethnically diverse names. The Marshfield Clinic website provides a search engine for users to search for physicians' names. The current search engine provides an auto-completion function, but it requires an exact match. We observed that 26% of all searches yielded no results. The goal was to design a fuzzy-match algorithm to aid users in finding physicians more easily and quickly. Instead of an exact-match search, we used a fuzzy algorithm to find similar matches for searched terms. In the algorithm, we solved three types of search engine failure: "Typographic", "Phonetic spelling variation", and "Nickname". To solve these mismatches, we used a customized Levenshtein distance calculation that incorporated Soundex coding and a lookup table of nicknames derived from US census data. Using the "Challenge Data Set of Marshfield Physician Names," we evaluated the accuracy of the fuzzy-match engine's top-ten results (90%) and compared it with exact match (0%), Soundex (24%), Levenshtein distance (59%), and the fuzzy-match engine's top-one result (71%). We designed a fuzzy-match search engine for physician directories, created a reference implementation, and evaluated it. The open-source code is available at the codeplex website and a reference implementation is available for demonstration at the datamarsh website.
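A minimal sketch of the kind of matching described above, plain Levenshtein edit distance plus a nickname lookup, is shown below. The nickname table and the distance threshold are illustrative assumptions; the paper's actual algorithm customizes the distance calculation with Soundex coding and a census-derived nickname table.

```python
def levenshtein(a, b):
    """Classic dynamic-programming edit distance between two strings."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

# Tiny hypothetical nickname table (the paper derives one from US census data)
NICKNAMES = {"bill": "william", "bob": "robert", "dick": "richard"}

def fuzzy_match(query, directory, max_dist=2):
    """Return directory names within max_dist edits of the query,
    after expanding common nicknames to their formal form."""
    q = NICKNAMES.get(query.lower(), query.lower())
    return [name for name in directory
            if levenshtein(q, name.lower()) <= max_dist]

print(fuzzy_match("Bill", ["William", "Wilma", "Robert"]))  # ['William']
```

This handles the "Typographic" and "Nickname" failure types from the abstract; adding a phonetic code such as Soundex to the comparison would address "Phonetic spelling variation" as well.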

  17. Clinician search behaviors may be influenced by search engine design.

    Science.gov (United States)

    Lau, Annie Y S; Coiera, Enrico; Zrimec, Tatjana; Compton, Paul

    2010-06-30

    Searching the Web for documents using information retrieval systems plays an important part in clinicians' practice of evidence-based medicine. While much research focuses on the design of methods to retrieve documents, there has been little examination of the way different search engine capabilities influence clinician search behaviors. Previous studies have shown that use of task-based search engines allows for faster searches with no loss of decision accuracy compared with resource-based engines. We hypothesized that changes in search behaviors may explain these differences. In all, 75 clinicians (44 doctors and 31 clinical nurse consultants) were randomized to use either a resource-based or a task-based version of a clinical information retrieval system to answer questions about 8 clinical scenarios in a controlled setting in a university computer laboratory. Clinicians using the resource-based system could select 1 of 6 resources, such as PubMed; clinicians using the task-based system could select 1 of 6 clinical tasks, such as diagnosis. Clinicians in both systems could reformulate search queries. System logs unobtrusively capturing clinicians' interactions with the systems were coded and analyzed for clinicians' search actions and query reformulation strategies. The most frequent search action of clinicians using the resource-based system was to explore a new resource with the same query, that is, these clinicians exhibited a "breadth-first" search behaviour. Of 1398 search actions, clinicians using the resource-based system conducted 401 (28.7%, 95% confidence interval [CI] 26.37-31.11) in this way. In contrast, the majority of clinicians using the task-based system exhibited a "depth-first" search behavior in which they reformulated query keywords while keeping to the same task profiles. Of 585 search actions conducted by clinicians using the task-based system, 379 (64.8%, 95% CI 60.83-68.55) were conducted in this way. This study provides evidence that

  18. Building a semantic search engine with games and crowdsourcing

    OpenAIRE

    Wieser, Christoph

    2014-01-01

    Semantic search engines aim at improving conventional search with semantic information, or meta-data, on the data searched for and/or on the searchers. So far, approaches to semantic search exploit characteristics of the searchers like age, education, or spoken language for selecting and/or ranking search results. Such data allow to build up a semantic search engine as an extension of a conventional search engine. The crawlers of well established search engines like Google, Yahoo! or Bing ...

  19. Experience of Developing a Meta-Semantic Search Engine

    OpenAIRE

    Mukhopadhyay, Debajyoti; Sharma, Manoj; Joshi, Gajanan; Pagare, Trupti; Palwe, Adarsha

    2013-01-01

    Today's web search scenario, which is mainly keyword based, leads to the need for the effective and meaningful search provided by the Semantic Web. Existing search engines often fail to provide relevant answers to users' queries due to their dependency on the simple data available in web pages. On the other hand, semantic search engines provide efficient and relevant results, as the semantic web manages information with well-defined meaning using ontology. A Meta-Search engine is a search tool that ...

  20. Assessment and Comparison of Search capabilities of Web-based Meta-Search Engines: A Checklist Approach

    Directory of Open Access Journals (Sweden)

    Alireza Isfandiyari Moghadam

    2010-03-01

    Full Text Available   The present investigation concerns the evaluation, comparison, and analysis of search options existing within web-based meta-search engines. Sixty-four meta-search engines were identified; 19 that were free, accessible, and compatible with the objectives of the present study were selected. An author-constructed checklist was used for data collection. Findings indicated that all meta-search engines studied used the AND operator, phrase search, a setting for the number of results displayed, previous search query storage, and help tutorials. Nevertheless, none of them offered search options for hypertext searching or for displaying the size of the pages searched. 94.7% supported features such as truncation, keyword-in-title and URL search, and text summary display. The checklist used in the study could serve as a model for investigating search options in search engines, digital libraries, and other Internet search tools.

  1. The end of meta search engines in Europe?

    NARCIS (Netherlands)

    Husovec, Martin

    2015-01-01

    The technology behind meta search engines supports a countless number of Internet services, ranging from price- and quality-comparison websites to more sophisticated traffic connection finders and general search engines like Google. Meta search engines generally increase market transparency,

  2. A Literature Review of Indexing and Searching Techniques Implementation in Educational Search Engines

    Science.gov (United States)

    El Guemmat, Kamal; Ouahabi, Sara

    2018-01-01

    The objective of this article is to analyze the indexing and searching techniques implemented in educational search engines, while treating future challenges. Educational search engines could greatly help the effectiveness of e-learning if used correctly. However, these engines have several gaps which influence the performance of e-learning…

  3. How visual search relates to visual diagnostic performance: a narrative systematic review of eye-tracking research in radiology.

    Science.gov (United States)

    van der Gijp, A; Ravesloot, C J; Jarodzka, H; van der Schaaf, M F; van der Schaaf, I C; van Schaik, J P J; Ten Cate, Th J

    2017-08-01

    Eye tracking research has been conducted for decades to gain understanding of visual diagnosis such as in radiology. For educational purposes, it is important to identify visual search patterns that are related to high perceptual performance and to identify effective teaching strategies. This review of eye-tracking literature in the radiology domain aims to identify visual search patterns associated with high perceptual performance. Databases PubMed, EMBASE, ERIC, PsycINFO, Scopus and Web of Science were searched using 'visual perception' OR 'eye tracking' AND 'radiology' and synonyms. Two authors independently screened search results and included eye tracking studies concerning visual skills in radiology published between January 1, 1994 and July 31, 2015. Two authors independently assessed study quality with the Medical Education Research Study Quality Instrument, and extracted study data with respect to design, participant and task characteristics, and variables. A thematic analysis was conducted to extract and arrange study results, and a textual narrative synthesis was applied for data integration and interpretation. The search resulted in 22 relevant full-text articles. Thematic analysis resulted in six themes that informed the relation between visual search and level of expertise: (1) time on task, (2) eye movement characteristics of experts, (3) differences in visual attention, (4) visual search patterns, (5) search patterns in cross sectional stack imaging, and (6) teaching visual search strategies. Expert search was found to be characterized by a global-focal search pattern, which represents an initial global impression, followed by a detailed, focal search-to-find mode. Specific task-related search patterns, like drilling through CT scans and systematic search in chest X-rays, were found to be related to high expert levels. One study investigated teaching of visual search strategies, and did not find a significant effect on perceptual performance. Eye

  4. BioCarian: search engine for exploratory searches in heterogeneous biological databases.

    Science.gov (United States)

    Zaki, Nazar; Tennakoon, Chandana

    2017-10-02

    There are a large number of biological databases publicly available for scientists in the web. Also, there are many private databases generated in the course of research projects. These databases are in a wide variety of formats. Web standards have evolved in the recent times and semantic web technologies are now available to interconnect diverse and heterogeneous sources of data. Therefore, integration and querying of biological databases can be facilitated by techniques used in semantic web. Heterogeneous databases can be converted into Resource Description Format (RDF) and queried using SPARQL language. Searching for exact queries in these databases is trivial. However, exploratory searches need customized solutions, especially when multiple databases are involved. This process is cumbersome and time consuming for those without a sufficient background in computer science. In this context, a search engine facilitating exploratory searches of databases would be of great help to the scientific community. We present BioCarian, an efficient and user-friendly search engine for performing exploratory searches on biological databases. The search engine is an interface for SPARQL queries over RDF databases. We note that many of the databases can be converted to tabular form. We first convert the tabular databases to RDF. The search engine provides a graphical interface based on facets to explore the converted databases. The facet interface is more advanced than conventional facets. It allows complex queries to be constructed, and have additional features like ranking of facet values based on several criteria, visually indicating the relevance of a facet value and presenting the most important facet values when a large number of choices are available. For the advanced users, SPARQL queries can be run directly on the databases. Using this feature, users will be able to incorporate federated searches of SPARQL endpoints. 
We used the search engine to do an exploratory search
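The tabular-to-RDF conversion and pattern-based querying described in the BioCarian abstract can be illustrated with a minimal, stdlib-only sketch. The namespace, table, and column names below are invented for illustration and are not BioCarian's actual schema; a real system would emit proper RDF and use a SPARQL engine rather than this toy matcher.

```python
# Minimal sketch: each table row becomes a set of subject-predicate-object
# triples, which can then be queried by pattern (a SPARQL-like idea).

BASE = "http://example.org/"  # hypothetical namespace, not BioCarian's

def table_to_triples(table_name, rows, key_column):
    """Convert a list of row dicts into (subject, predicate, object) triples."""
    triples = []
    for row in rows:
        subject = f"{BASE}{table_name}/{row[key_column]}"
        for column, value in row.items():
            triples.append((subject, f"{BASE}{column}", str(value)))
    return triples

def match(triples, s=None, p=None, o=None):
    """Toy triple-pattern matcher: None acts like a SPARQL variable."""
    return [t for t in triples
            if (s is None or t[0] == s)
            and (p is None or t[1] == p)
            and (o is None or t[2] == o)]

genes = [{"id": "BRCA1", "chromosome": "17"},
         {"id": "TP53", "chromosome": "17"}]
triples = table_to_triples("gene", genes, "id")

# "Which subjects have chromosome 17?" -- analogous to a facet filter.
hits = match(triples, p=BASE + "chromosome", o="17")
print(sorted(t[0] for t in hits))
```

Faceted browsing then amounts to grouping the objects of a chosen predicate and counting matches per value.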

  5. Applying Systems Engineering Reduces Radiology Transport Cycle Times in the Emergency Department

    Science.gov (United States)

    White, Benjamin A.; Yun, Brian J.; Lev, Michael H.; Raja, Ali S.

    2017-01-01

    Introduction Emergency department (ED) crowding is widespread, and can result in care delays, medical errors, increased costs, and decreased patient satisfaction. Simultaneously, while capacity constraints on EDs are worsening, contributing factors such as patient volume and inpatient bed capacity are often outside the influence of ED administrators. Therefore, systems engineering approaches that improve throughput and reduce waste may hold the most readily available gains. Decreasing radiology turnaround times improves ED patient throughput and decreases patient waiting time. We sought to investigate the impact of systems engineering science targeting ED radiology transport delays and determine the most effective techniques. Methods This prospective, before-and-after analysis of radiology process flow improvements in an academic hospital ED was exempt from institutional review board review as a quality improvement initiative. We hypothesized that reorganization of radiology transport would improve radiology cycle time and reduce waste. The intervention included systems engineering science-based reorganization of ED radiology transport processes, largely using Lean methodologies, and adding no resources. The primary outcome was average transport time between study order and complete time. All patients presenting between 8/2013–3/2016 and requiring plain film imaging were included. We analyzed electronic medical record data using Microsoft Excel and SAS version 9.4, and we used a two-sample t-test to compare data from the pre- and post-intervention periods. Results Following the intervention, average transport time decreased significantly and sustainably. Average radiology transport time was 28.7 ± 4.2 minutes during the three months pre-intervention. It was reduced by 15% in the first three months (4.4 minutes [95% confidence interval [CI] 1.5–7.3]; to 24.3 ± 3.3 min, P=0.021), 19% in the following six months (5.4 minutes, 95% CI [2.7–8.2]; to 23.3 ± 3
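The pre/post comparison in the abstract above relies on a two-sample t-test. A stdlib-only sketch of Welch's t statistic is below; the sample values are invented for illustration only (chosen to resemble the reported means) and are not the study's data.

```python
# Illustrative sketch of the two-sample (Welch's) t statistic used to
# compare pre- and post-intervention transport times. Data are invented.
from statistics import mean, variance
from math import sqrt

def welch_t(a, b):
    """Welch's t statistic for two independent samples (unequal variances)."""
    na, nb = len(a), len(b)
    return (mean(a) - mean(b)) / sqrt(variance(a) / na + variance(b) / nb)

pre = [28.1, 29.5, 27.9, 30.2, 28.0, 28.5]   # minutes (invented)
post = [24.0, 23.5, 25.1, 24.8, 23.9, 24.4]  # minutes (invented)

t = welch_t(pre, post)
print(round(t, 2))  # positive t: pre-intervention times are larger
```

In practice one would obtain the p-value from the t distribution (e.g. via scipy) rather than stop at the statistic.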

  6. FindZebra: A search engine for rare diseases

    DEFF Research Database (Denmark)

    Dragusin, Radu; Petcu, Paula; Lioma, Christina Amalia

    2013-01-01

    Background: The web has become a primary information resource about illnesses and treatments for both medical and non-medical users. Standard web search is by far the most common interface for such information. It is therefore of interest to find out how well web search engines work for diagnostic...... approach for web search engines for rare disease diagnosis which includes 56 real life diagnostic cases, state-of-the-art evaluation measures, and curated information resources. In addition, we introduce FindZebra, a specialized (vertical) rare disease search engine. FindZebra is powered by open source...... medical concepts to demonstrate different ways of displaying the retrieved results to medical experts. Conclusions: Our results indicate that a specialized search engine can improve the diagnostic quality without compromising the ease of use of the currently widely popular web search engines. The proposed...

  7. Combining results of multiple search engines in proteomics.

    Science.gov (United States)

    Shteynberg, David; Nesvizhskii, Alexey I; Moritz, Robert L; Deutsch, Eric W

    2013-09-01

    A crucial component of the analysis of shotgun proteomics datasets is the search engine, an algorithm that attempts to identify the peptide sequence from the parent molecular ion that produced each fragment ion spectrum in the dataset. There are many different search engines, both commercial and open source, each employing a somewhat different technique for spectrum identification. The set of high-scoring peptide-spectrum matches for a defined set of input spectra differs markedly among the various search engine results; individual engines each provide unique correct identifications among a core set of correlative identifications. This has led to the approach of combining the results from multiple search engines to achieve improved analysis of each dataset. Here we review the techniques and available software for combining the results of multiple search engines and briefly compare the relative performance of these techniques.
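The core idea of the review above — each engine contributes unique identifications on top of a shared core, and agreement across engines raises confidence — can be sketched with a simple voting scheme. The data and the scheme are invented for illustration; published combiners (e.g. iProphet, mentioned in that literature) use probabilistic models, not raw vote counts.

```python
# Toy consensus combination of peptide-spectrum matches from multiple
# hypothetical search engines: union gives coverage, agreement gives confidence.

engine_results = {  # spectrum id -> identified peptide, per engine (invented)
    "EngineA": {"s1": "PEPTIDE", "s2": "SEQVENCE", "s3": "KLMNR"},
    "EngineB": {"s1": "PEPTIDE", "s2": "SEQVENCE", "s4": "ACDEFK"},
    "EngineC": {"s1": "PEPTIDE", "s3": "KLMNR", "s5": "GHILMK"},
}

def combine(results):
    """Count, for each (spectrum, peptide) match, how many engines agree."""
    votes = {}
    for matches in results.values():
        for spectrum, peptide in matches.items():
            votes[(spectrum, peptide)] = votes.get((spectrum, peptide), 0) + 1
    return votes

votes = combine(engine_results)

consensus = {m for m, v in votes.items() if v >= 2}  # higher confidence
union = set(votes)                                   # more coverage
print(len(union), len(consensus))
```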

  8. Combining Results of Multiple Search Engines in Proteomics*

    Science.gov (United States)

    Shteynberg, David; Nesvizhskii, Alexey I.; Moritz, Robert L.; Deutsch, Eric W.

    2013-01-01

    A crucial component of the analysis of shotgun proteomics datasets is the search engine, an algorithm that attempts to identify the peptide sequence from the parent molecular ion that produced each fragment ion spectrum in the dataset. There are many different search engines, both commercial and open source, each employing a somewhat different technique for spectrum identification. The set of high-scoring peptide-spectrum matches for a defined set of input spectra differs markedly among the various search engine results; individual engines each provide unique correct identifications among a core set of correlative identifications. This has led to the approach of combining the results from multiple search engines to achieve improved analysis of each dataset. Here we review the techniques and available software for combining the results of multiple search engines and briefly compare the relative performance of these techniques. PMID:23720762

  9. Next-Gen Search Engines

    Science.gov (United States)

    Gupta, Amardeep

    2005-01-01

    Current search engines--even the constantly surprising Google--seem unable to leap the next big barrier in search: the trillions of bytes of dynamically generated data created by individual web sites around the world, or what some researchers call the "deep web." The challenge now is not information overload, but information overlook.…

  10. Multiple Presents: How Search Engines Re-write the Past

    NARCIS (Netherlands)

    Hellsten, I; Leydesdorff, L.; Wouters, P.

    2006-01-01

    Internet search engines function in a present which changes continuously. The search engines update their indices regularly, overwriting webpages with newer ones, adding new pages to the index and losing older ones. Some search engines can be used to search for information on the internet for

  11. Using Internet search engines to estimate word frequency.

    Science.gov (United States)

    Blair, Irene V; Urland, Geoffrey R; Ma, Jennifer E

    2002-05-01

    The present research investigated Internet search engines as a rapid, cost-effective alternative for estimating word frequencies. Frequency estimates for 382 words were obtained and compared across four methods: (1) Internet search engines, (2) the Kucera and Francis (1967) analysis of a traditional linguistic corpus, (3) the CELEX English linguistic database (Baayen, Piepenbrock, & Gulikers, 1995), and (4) participant ratings of familiarity. The results showed that Internet search engines produced frequency estimates that were highly consistent with those reported by Kucera and Francis and those calculated from CELEX, highly consistent across search engines, and very reliable over a 6-month period of time. Additional results suggested that Internet search engines are an excellent option when traditional word frequency analyses do not contain the necessary data (e.g., estimates for forenames and slang). In contrast, participants' familiarity judgments did not correspond well with the more objective estimates of word frequency. Researchers are advised to use search engines with large databases (e.g., AltaVista) to ensure the greatest representativeness of the frequency estimates.
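The consistency check described above — comparing search engine hit counts against corpus frequency counts — can be sketched as a rank correlation. All numbers below are invented; real hit counts would come from querying an engine. Spearman's rho is a reasonable choice here because hit counts are only ordinally reliable.

```python
# Spearman rank correlation between invented engine hit counts and
# invented corpus frequencies (no-ties case for simplicity).

def ranks(values):
    """Rank values from 1 (smallest), assuming no ties."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0] * len(values)
    for rank, i in enumerate(order, start=1):
        r[i] = rank
    return r

def spearman(x, y):
    n = len(x)
    rx, ry = ranks(x), ranks(y)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

engine_hits = [9_100_000, 455_000, 120_000, 2_300_000, 18_000]  # invented
corpus_freq = [4_500, 210, 95, 1_200, 12]                       # invented

rho = spearman(engine_hits, corpus_freq)
print(rho)  # 1.0 here: the two toy rankings are identical
```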

  12. People searching for people: analysis of a people search engine log

    NARCIS (Netherlands)

    Weerkamp, W.; Berendsen, R.; Kovachev, B.; Meij, E.; Balog, K.; de Rijke, M.

    2011-01-01

    Recent years show an increasing interest in vertical search: searching within a particular type of information. Understanding what people search for in these "verticals" gives direction to research and provides pointers for the search engines themselves. In this paper we analyze the search logs of

  13. Web Feet Guide to Search Engines: Finding It on the Net.

    Science.gov (United States)

    Web Feet, 2001

    2001-01-01

    This guide to search engines for the World Wide Web discusses selecting the right search engine; interpreting search results; major search engines; online tutorials and guides; search engines for kids; specialized search tools for various subjects; and other specialized engines and gateways. (LRW)

  14. An Exploratory Survey of Student Perspectives Regarding Search Engines

    Science.gov (United States)

    Alshare, Khaled; Miller, Don; Wenger, James

    2005-01-01

    This study explored college students' perceptions regarding their use of search engines. The main objective was to determine how frequently students used various search engines, whether advanced search features were used, and how many search engines were used. Various factors that might influence student responses were examined. Results showed…

  15. IBRI-CASONTO: Ontology-based semantic search engine

    Directory of Open Access Journals (Sweden)

    Awny Sayed

    2017-11-01

    Full Text Available The vast amount of information added at a very fast pace to data repositories creates a challenge in extracting correct and accurate information, and has increased competition among developers seeking technology that understands the researcher's intent and the contextual meaning of terms. Arabic semantic search systems, however, are still in their infancy, which can be traced back to the complexity of the Arabic language: it has complex morphological, grammatical and semantic aspects, being a highly inflectional and derivational language. In this paper, we present an ontological search engine called IBRI-CASONTO for the Colleges of Applied Sciences, Oman. Our proposed engine supports both Arabic and English and employs two types of search: keyword-based and semantics-based. IBRI-CASONTO is built on technologies such as Resource Description Framework (RDF) data and an ontological graph. The experiments are presented in two sections: the first compares Entity-Search with Classical-Search inside IBRI-CASONTO itself; the second compares the Entity-Search of IBRI-CASONTO with currently used search engines, such as Kngine, Wolfram Alpha and the most popular engine nowadays, Google, in order to measure their performance and efficiency.

  16. Evaluating a Federated Medical Search Engine

    Science.gov (United States)

    Belden, J.; Williams, J.; Richardson, B.; Schuster, K.

    2014-01-01

    Summary Background Federated medical search engines are health information systems that provide a single access point to different types of information. Their efficiency as clinical decision support tools has been demonstrated through numerous evaluations. Despite their rigor, very few of these studies report holistic evaluations of medical search engines and even fewer base their evaluations on existing evaluation frameworks. Objectives To evaluate a federated medical search engine, MedSocket, for its potential net benefits in an established clinical setting. Methods This study applied the Human, Organization, and Technology (HOT-fit) evaluation framework in order to evaluate MedSocket. The hierarchical structure of the HOT-factors allowed for identification of a combination of efficiency metrics. Human fit was evaluated through user satisfaction and patterns of system use; technology fit was evaluated through the measurements of time-on-task and the accuracy of the found answers; and organization fit was evaluated from the perspective of system fit to the existing organizational structure. Results Evaluations produced mixed results and suggested several opportunities for system improvement. On average, participants were satisfied with MedSocket searches and confident in the accuracy of retrieved answers. However, MedSocket did not meet participants’ expectations in terms of download speed, access to information, and relevance of the search results. These mixed results made it necessary to conclude that in the case of MedSocket, technology fit had a significant influence on the human and organization fit. Hence, improving technological capabilities of the system is critical before its net benefits can become noticeable. Conclusions The HOT-fit evaluation framework was instrumental in tailoring the methodology for conducting a comprehensive evaluation of the search engine. Such multidimensional evaluation of the search engine resulted in recommendations for

  17. Adding a visualization feature to web search engines: it's time.

    Science.gov (United States)

    Wong, Pak Chung

    2008-01-01

    It's widely recognized that all Web search engines today are almost identical in presentation layout and behavior. In fact, the same presentation approach has been applied to depicting search engine results pages (SERPs) since the first Web search engine launched in 1993. In this Visualization Viewpoints article, I propose to add a visualization feature to Web search engines and suggest that the new addition can improve search engines' performance and capabilities, which in turn lead to better Web search technology.

  18. Comparative Study on Three Major Internet Search Engines ...

    African Journals Online (AJOL)

    , Google and ask.com search engines. An experimental method was used, with ten reference questions posed to each of the search engines. Yahoo obtained the highest results (521,801,043) among the three Web search ...

  19. Variability of patient spine education by Internet search engine.

    Science.gov (United States)

    Ghobrial, George M; Mehdi, Angud; Maltenfort, Mitchell; Sharan, Ashwini D; Harrop, James S

    2014-03-01

    Patients are increasingly reliant upon the Internet as a primary source of medical information, but the educational experience varies by search engine and search term, and changes daily; there are no tools for the critical evaluation of spinal surgery websites. Our objectives were to highlight the variability between common search engines for the same search terms; to detect bias, by the prevalence of specific kinds of websites for certain spinal disorders; and to demonstrate a simple scoring system of spinal disorder websites for patient use, to maximize the quality of information exposed to the patient. Ten common search terms were used to query three of the most common search engines. The top fifty results of each query were tabulated, and a negative binomial regression was performed to highlight the variation across each search engine. Google was more likely than the Bing and Yahoo search engines to return hospital ads (P=0.002) and more likely to return scholarly sites of peer-reviewed literature (P=0.003). Educational websites, surgical group sites, and online web communities had a significantly higher likelihood of returning on any search, regardless of search engine or search string (P=0.007). Likewise, professional websites, including hospital-run, industry-sponsored, legal, and peer-reviewed web pages, were less likely to be found on a search overall, regardless of engine and search string (P=0.078). The Internet is a rapidly growing body of medical information which can serve as a useful tool for patient education. High-quality information is readily available, provided that the patient uses a consistent, focused metric for evaluating online spine surgery information, as there is clear variability in the way search engines present information to the patient. Published by Elsevier B.V.

  20. An Innovative Approach for online Meta Search Engine Optimization

    OpenAIRE

    Manral, Jai; Hossain, Mohammed Alamgir

    2015-01-01

    This paper presents an approach to identifying efficient techniques used in Web Search Engine Optimization (SEO). Understanding the SEO factors which can influence page ranking in a search engine is significant for webmasters who wish to attract a large number of users to their website. Different from previous relevant research, in this study we developed an intelligent Meta search engine which aggregates results from various search engines and ranks them based on several important SEO parameters. The r...

  1. Search Engine : an effective tool for exploring the Internet

    OpenAIRE

    Ranasinghe, W.M. Tharanga Dilruk

    2006-01-01

    The Internet has become the largest source of information. Today, millions of websites exist, and this number continues to grow. Finding the right information at the right time is the challenge of the Internet age. A search engine is a searchable database which allows users to locate information on the Internet by submitting keywords. Search engines can be divided into two categories: individual search engines and meta search engines. This article discusses the features of these search engines in detail.

  2. Search Engine Advertising Effectiveness in a Multimedia Campaign

    NARCIS (Netherlands)

    Zenetti, German; Bijmolt, Tammo H. A.; Leeflang, Peter S. H.; Klapper, Daniel

    2014-01-01

    Search engine advertising has become a multibillion-dollar business and one of the dominant forms of advertising on the Internet. This study examines the effectiveness of search engine advertising within a multimedia campaign, with explicit consideration of the interaction effects between search

  3. Searching for Suicide Information on Web Search Engines in Chinese

    Directory of Open Access Journals (Sweden)

    Yen-Feng Lee

    2017-01-01

    Full Text Available Introduction: Suicide prevention has recently become an important public health issue. However, with growing access to information in cyberspace, harmful information is easily accessible online. To investigate the accessibility of potentially harmful suicide-related information on the internet, we discuss searching for suicide information online in order to draw attention to this issue. Methods: We used five search engines (Google, Yahoo, Bing, Yam, and Sina) and four suicide-related search queries (suicide, how to suicide, suicide methods, and want to die) in traditional Chinese in April 2016. A psychiatrist classified the first thirty links of the search results on each search engine into suicide prevention, pro-suicide, neutral, unrelated-to-suicide, or error websites. Results: Among the 352 unique websites generated, suicide prevention websites were the most frequent among the search results (37.8%), followed by websites unrelated to suicide (25.9%) and neutral websites (23.0%). However, pro-suicide websites were still easily accessible (9.7%). In addition, compared with those originating in the USA and China, the search engine originating in Taiwan had the lowest accessibility to pro-suicide information. The results of ANOVA showed a significant difference between the groups, F = 8.772, P < 0.001. Conclusions: These results suggest a need for further restrictions and regulations of pro-suicide information on the internet. Providing more supportive information online may be an effective plan for suicide prevention.

  4. A design perspective on aligning process-centric and technology-centric approaches

    DEFF Research Database (Denmark)

    Siurdyban, Artur Henryk; Svejvig, Per; Møller, Charles

    2012-01-01

    Enterprise systems management (ESM) and business process management (BPM), although highly correlated, have evolved as alternative approaches to operational transformation. As a result, companies struggle to find the right balance when prioritizing technology and processes as change drivers....... The purpose of this paper is to propose a direction towards aligning the process-centric and technology-centric approaches. Using the case study method, we gain insight into two implementation projects: one of an information technology (IT) system and one of a process. We compare them using design thinking...... and strategic alignment theories. Based on the discussion, we assess the shortcomings of the process-centric and technology-centric approaches and argue that a conjoint design approach is required to achieve alignment between processes and technology. From a theoretical stance, this paper offers design-informed...

  5. Google Patents: The global patent search engine

    OpenAIRE

    Noruzi, Alireza; Abdekhoda, Mohammadhiwa

    2014-01-01

    Google Patents (www.google.com/patents) includes over 8 million full-text patents. Google Patents works in the same way as the Google search engine. Google Patents is the global patent search engine that lets users search through patents from the USPTO (United States Patent and Trademark Office), EPO (European Patent Office), etc. This study begins with an overview of how to use Google Patent and identifies advanced search techniques not well-documented by Google Patent. It makes several sug...

  6. The Little Engines That Could: Modeling the Performance of World Wide Web Search Engines

    OpenAIRE

    Eric T. Bradlow; David C. Schmittlein

    2000-01-01

    This research examines the ability of six popular Web search engines, individually and collectively, to locate Web pages containing common marketing/management phrases. We propose and validate a model for search engine performance that is able to represent key patterns of coverage and overlap among the engines. The model enables us to estimate the typical additional benefit of using multiple search engines, depending on the particular set of engines being considered. It also provides an estim...

  7. Reflections on New Search Engine 新型搜索引擎畅想

    OpenAIRE

    Huang, Jiannian

    2007-01-01

    [English abstract] The rapid increase in demand for internet information resources has led to a rush of search engines. This article introduces some new types of search engines which are appearing or will appear, including: grey document search engines, invisible web search engines, knowledge discovery search engines, clustering meta search engines, academic clustering search engines, concept comparison and concept analogy search engines, consultation search engines, teachi...

  8. IntegromeDB: an integrated system and biological search engine.

    Science.gov (United States)

    Baitaluk, Michael; Kozhenkov, Sergey; Dubinina, Yulia; Ponomarenko, Julia

    2012-01-19

    With the growth of biological data in volume and heterogeneity, web search engines have become key tools for researchers. However, general-purpose search engines are not specialized for the search of biological data. Here, we present an approach to developing a biological web search engine based on Semantic Web technologies and demonstrate its implementation for retrieving gene- and protein-centered knowledge. The engine is available at http://www.integromedb.org. The IntegromeDB search engine allows scanning data on gene regulation, gene expression, protein-protein interactions, pathways, metagenomics, mutations, diseases, and other gene- and protein-related data that are automatically retrieved from publicly available databases and web pages using biological ontologies. To perfect the resource design and usability, we welcome and encourage community feedback.

  9. Subject Gateway Sites and Search Engine Ranking.

    Science.gov (United States)

    Thelwall, Mike

    2002-01-01

    Discusses subject gateway sites and commercial search engines for the Web and presents an explanation of Google's PageRank algorithm. The principle question addressed is the conditions under which a gateway site will increase the likelihood that a target page is found in search engines. (LRW)
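The PageRank algorithm mentioned in the abstract above can be sketched as a short power iteration. The three-page link graph is a toy example; the damping factor 0.85 follows the original formulation, and the sketch assumes every page has at least one out-link (real implementations must also handle dangling pages).

```python
# Minimal power-iteration sketch of PageRank on a toy three-page web.

def pagerank(links, damping=0.85, iterations=100):
    """links: page -> list of pages it links to (every page has out-links)."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        new = {p: (1 - damping) / n for p in pages}  # teleport mass
        for page, outs in links.items():
            share = damping * rank[page] / len(outs)  # split rank over out-links
            for target in outs:
                new[target] += share
        rank = new
    return rank

# Both A and C link to B, so B accumulates the most rank.
ranks = pagerank({"A": ["B"], "B": ["A", "C"], "C": ["B"]})
best = max(ranks, key=ranks.get)
print(best)  # B
```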

  10. Human Flesh Search Engine and Online Privacy.

    Science.gov (United States)

    Zhang, Yang; Gao, Hong

    2016-04-01

    Human flesh search engine can be a double-edged sword, bringing convenience on the one hand and leading to infringement of personal privacy on the other hand. This paper discusses the ethical problems brought about by the human flesh search engine, as well as possible solutions.

  11. Understanding and modeling users of modern search engines

    NARCIS (Netherlands)

    Chuklin, A.

    2017-01-01

    As search is being used by billions of people, modern search engines are becoming more and more complex. And complexity does not just come from the algorithms. Richer and richer content is being added to search engine result pages: news and sports results, definitions and translations, images and

  12. FindZebra: a search engine for rare diseases.

    Science.gov (United States)

    Dragusin, Radu; Petcu, Paula; Lioma, Christina; Larsen, Birger; Jørgensen, Henrik L; Cox, Ingemar J; Hansen, Lars Kai; Ingwersen, Peter; Winther, Ole

    2013-06-01

    The web has become a primary information resource about illnesses and treatments for both medical and non-medical users. Standard web search is by far the most common interface to this information. It is therefore of interest to find out how well web search engines work for diagnostic queries and what factors contribute to successes and failures. Among diseases, rare (or orphan) diseases represent an especially challenging and thus interesting class to diagnose as each is rare, diverse in symptoms and usually has scattered resources associated with it. We design an evaluation approach for web search engines for rare disease diagnosis which includes 56 real life diagnostic cases, performance measures, information resources and guidelines for customising Google Search to this task. In addition, we introduce FindZebra, a specialized (vertical) rare disease search engine. FindZebra is powered by open source search technology and uses curated freely available online medical information. FindZebra outperforms Google Search in both default set-up and customised to the resources used by FindZebra. We extend FindZebra with specialized functionalities exploiting medical ontological information and UMLS medical concepts to demonstrate different ways of displaying the retrieved results to medical experts. Our results indicate that a specialized search engine can improve the diagnostic quality without compromising the ease of use of the currently widely popular standard web search. The proposed evaluation approach can be valuable for future development and benchmarking. The FindZebra search engine is available at http://www.findzebra.com/. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  13. Music Search Engines: Specifications and Challenges

    DEFF Research Database (Denmark)

    Nanopoulos, Alexandros; Rafilidis, Dimitrios; Manolopoulos, Yannis

    2009-01-01

    Nowadays we have a proliferation of music data available over the Web. One of the imperative challenges is how to search these vast, global-scale musical resources to find preferred music. Recent research has envisaged the notion of music search engines (MSEs) that allow for searching preferred...

  14. Applying Russian Search Engines to market Finnish Corporates

    OpenAIRE

    Pankratovs, Vladimirs

    2013-01-01

    The goal of this thesis is to provide basic knowledge of the marketing capabilities of Russia-based search engines. After reading this material, the reader is able to distinguish different kinds of search engine marketing tools and can perform advertising campaigns. This study includes information about the majority of tools available to the user and provides up-to-date screenshots of Russian search engines' front-ends, which can be useful in further work. The study discusses the main principles and ba...

  15. Using declarative workflow languages to develop process-centric web applications

    NARCIS (Netherlands)

    Bernardi, M.L.; Cimitile, M.; Di Lucca, G.A.; Maggi, F.M.

    2012-01-01

    Nowadays, process-centric Web Applications (WAs) are extensively used in contexts where multi-user, coordinated work is required. Recently, Model Driven Engineering (MDE) techniques have been investigated for the development of this kind of applications. However, there are still some open issues.

  16. Group Centric Networking: Large Scale Over the Air Testing of Group Centric Networking

    Science.gov (United States)

    2016-11-01

    Large Scale Over-the-Air Testing of Group Centric Networking. Logan Mercer, Greg Kuperman, Andrew Hunter, Brian Proulx, MIT Lincoln Laboratory. ...performance of Group Centric Networking (GCN), a networking protocol developed for robust and scalable communications in lossy networks where users are... devices, and the ad-hoc nature of the network. Group Centric Networking (GCN) is a proposed networking protocol that addresses challenges specific to

  17. Use of search engine optimization factors for Google page rank prediction

    OpenAIRE

    Tvrdi, Barbara

    2012-01-01

    Over the years, search engines have become an important tool for finding information. It is known that users select a link on the first page of search results in 62% of cases. Search engine optimization techniques enable website improvement and therefore a better ranking in search engines. The exact specification of the factors that affect website ranking is not disclosed by search engine owners. In this thesis we tried to choose some of the most frequently mentioned search engine optimizatio...

  18. Searching Choices: Quantifying Decision-Making Processes Using Search Engine Data.

    Science.gov (United States)

    Moat, Helen Susannah; Olivola, Christopher Y; Chater, Nick; Preis, Tobias

    2016-07-01

    When making a decision, humans consider two types of information: information they have acquired through their prior experience of the world, and further information they gather to support the decision in question. Here, we present evidence that data from search engines such as Google can help us model both sources of information. We show that statistics from search engines on the frequency of content on the Internet can help us estimate the statistical structure of prior experience; and, specifically, we outline how such statistics can inform psychological theories concerning the valuation of human lives, or choices involving delayed outcomes. Turning to information gathering, we show that search query data might help measure human information gathering, and it may predict subsequent decisions. Such data enable us to compare information gathered across nations, where analyses suggest, for example, a greater focus on the future in countries with a higher per capita GDP. We conclude that search engine data constitute a valuable new resource for cognitive scientists, offering a fascinating new tool for understanding the human decision-making process. Copyright © 2016 The Authors. Topics in Cognitive Science published by Wiley Periodicals, Inc. on behalf of Cognitive Science Society.

  19. The Use of Web Search Engines in Information Science Research.

    Science.gov (United States)

    Bar-Ilan, Judit

    2004-01-01

    Reviews the literature on the use of Web search engines in information science research, including: ways users interact with Web search engines; social aspects of searching; structure and dynamic nature of the Web; link analysis; other bibliometric applications; characterizing information on the Web; search engine evaluation and improvement; and…

  20. Game-centric pedagogy and curriculums in higher education

    DEFF Research Database (Denmark)

    Nørgård, Rikke Toft; Murray, John; Morgan, James

    2017-01-01

    This paper examines some recent trends in game-centric education for STEAM (science, technology, engineering, art and mathematics) fields, especially those that explore and promote collaboration among multiple disciplines. We discuss various multimodal design research activities that draw upon the... applications and usage of popular technical hackathons and game design jams in educational environments. The intent of this work is to guide and inform new approaches to the core components of STEAM curriculums. Game-centric methods appear to be well-suited to a variety of education and training circumstances... a valuable vehicle for enhancing general education and long-term life skills. We conclude by describing some opportunities to undertake qualitative and quantitative research on teams of participants in popular game development events, such as the multinational Global Game Jam (GGJ) series. This process involves...

  1. Tasks of physicists and graduated engineers in diagnostic radiology

    International Nuclear Information System (INIS)

    Angerstein, W.

    1987-01-01

    The tasks of physicists and engineers in diagnostic radiology are compiled and trends of development are discussed. Specific duties can be selected from these tasks for each department and physicist individually. An attempt is made to characterize the specific tasks of medical physics. The most important tasks are concerning subjects of (1) investment planning, (2) quality control and quality assurance, (3) service and maintenance, (4) radiation protection and electrical safety, (5) development, testing and adaption of equipment, (6) assistance in running the radiologic department, (7) research, (8) pre- and postgraduate training, (9) educational training, (10) miscellaneous. (author)

  2. Basic Study on Data-Centric design information integration system framework development for adapting Nuclear Power Plant construction in Korea

    Energy Technology Data Exchange (ETDEWEB)

    Lim, Byung Ki [KHNP, Gyeongju (Korea, Republic of)

    2016-05-15

    This study established the concept of data-centric design, the latest design technique, by analyzing the existing literature so that data-centric design can be applied to nuclear power plant projects in Korea, and analyzed the status of data-centric design application by the advanced companies and the domestic design companies participating in nuclear power plant projects. By analyzing the functions of the commercial 3D CAD system and all design drawings used in nuclear power plant projects in Korea, a data-centric design integrated system model has been developed. This study established the concept of data-centric design technology and analyzed the functions of the plant architect engineering (A/E) software used globally in the plant field, as well as the design process status of nuclear power plant projects in Korea. A design information integration system building model capable of data-centric design, in place of the existing document-centric design based on documents such as P&IDs and SLDs, has been suggested through an investigation of the data-centric design cases of the advanced companies. The major functions of the suggested model required for application to the domestic industry were identified. The suggested framework builds the field design, previously performed in the constructor's 3D system, as an owner's field design system that can manage all design drawings generated from the field design and the related information in an integrated way. An as-built full model integrating plant architect engineering, supplier design and field design is built; it is handed over to the operation team at the O&M stage and utilized in maintenance and repair. As a power plant full model for future construction projects has been enabled, an improved design process has been suggested, in which only the design change information during plant architect engineering (A/E) and the design change information during the field design

  3. Semantic interpretation of search engine resultant

    Science.gov (United States)

    Nasution, M. K. M.

    2018-01-01

    In semantics, logical language can be interpreted in various forms, but certainty of meaning is embedded in uncertainty, which always directly influences the role of technology. One result of this uncertainty applies to search engines as user interfaces to information spaces such as the Web. Therefore, the behaviour of search engine results should be interpreted with certainty through semantic formulation as interpretation. The behaviour formulation shows that there are various interpretations that can be made semantically, whether temporary, by inclusion, or by repetition.

  4. Document Clustering Approach for Meta Search Engine

    Science.gov (United States)

    Kumar, Naresh, Dr.

    2017-08-01

    The size of the WWW is growing exponentially with every change in technology. This results in a huge amount of information with long lists of URLs. Manually, it is not possible to visit each page individually. If page ranking algorithms are used properly, the user's search space can be restricted to some pages of the searched results. But the available literature shows that no single search system can provide qualitative results from all domains. This paper provides a solution to this problem by introducing a new meta search engine that determines the relevancy of a query with respect to a web page and clusters the results accordingly. The proposed approach reduces user effort, improves the quality of results and improves the performance of the meta search engine.

  5. Using Internet Search Engines to Obtain Medical Information: A Comparative Study

    Science.gov (United States)

    Wang, Liupu; Wang, Juexin; Wang, Michael; Li, Yong; Liang, Yanchun

    2012-01-01

    Background: The Internet has become one of the most important means to obtain health and medical information. It is often the first step in checking for basic information about a disease and its treatment. The search results are often useful to general users. Various search engines such as Google, Yahoo!, Bing, and Ask.com can play an important role in obtaining medical information for both medical professionals and lay people. However, the usability and effectiveness of various search engines for medical information have not been comprehensively compared and evaluated. Objective: To compare major Internet search engines in their usability of obtaining medical and health information. Methods: We applied usability testing as a software engineering technique and a standard industry practice to compare the four major search engines (Google, Yahoo!, Bing, and Ask.com) in obtaining health and medical information. For this purpose, we searched the keyword breast cancer in Google, Yahoo!, Bing, and Ask.com and saved the results of the top 200 links from each search engine. We combined nonredundant links from the four search engines and gave them to volunteer users in an alphabetical order. The volunteer users evaluated the websites and scored each website from 0 to 10 (lowest to highest) based on the usefulness of the content relevant to breast cancer. A medical expert identified six well-known websites related to breast cancer in advance as standards. We also used five keywords associated with breast cancer defined in the latest release of Systematized Nomenclature of Medicine-Clinical Terms (SNOMED CT) and analyzed their occurrence in the websites. Results: Each search engine provided rich information related to breast cancer in the search results. All six standard websites were among the top 30 in search results of all four search engines. Google had the best search validity (in terms of whether a website could be opened), followed by Bing, Ask.com, and Yahoo!. The search…

  6. Using Internet search engines to obtain medical information: a comparative study.

    Science.gov (United States)

    Wang, Liupu; Wang, Juexin; Wang, Michael; Li, Yong; Liang, Yanchun; Xu, Dong

    2012-05-16

    The Internet has become one of the most important means to obtain health and medical information. It is often the first step in checking for basic information about a disease and its treatment. The search results are often useful to general users. Various search engines such as Google, Yahoo!, Bing, and Ask.com can play an important role in obtaining medical information for both medical professionals and lay people. However, the usability and effectiveness of various search engines for medical information have not been comprehensively compared and evaluated. To compare major Internet search engines in their usability of obtaining medical and health information. We applied usability testing as a software engineering technique and a standard industry practice to compare the four major search engines (Google, Yahoo!, Bing, and Ask.com) in obtaining health and medical information. For this purpose, we searched the keyword breast cancer in Google, Yahoo!, Bing, and Ask.com and saved the results of the top 200 links from each search engine. We combined nonredundant links from the four search engines and gave them to volunteer users in an alphabetical order. The volunteer users evaluated the websites and scored each website from 0 to 10 (lowest to highest) based on the usefulness of the content relevant to breast cancer. A medical expert identified six well-known websites related to breast cancer in advance as standards. We also used five keywords associated with breast cancer defined in the latest release of Systematized Nomenclature of Medicine-Clinical Terms (SNOMED CT) and analyzed their occurrence in the websites. Each search engine provided rich information related to breast cancer in the search results. All six standard websites were among the top 30 in search results of all four search engines. Google had the best search validity (in terms of whether a website could be opened), followed by Bing, Ask.com, and Yahoo!. The search results highly overlapped between the…

  7. An ontology-based search engine for protein-protein interactions.

    Science.gov (United States)

    Park, Byungkyu; Han, Kyungsook

    2010-01-18

    Keyword matching or ID matching is the most common searching method in a large database of protein-protein interactions. These are purely syntactic methods that retrieve the records in the database containing a keyword or ID specified in a query. Such syntactic search methods often retrieve too few search results or no results despite many potential matches present in the database. We have developed a new method for representing protein-protein interactions and the Gene Ontology (GO) using modified Gödel numbers. This representation is hidden from users but enables a search engine using it to search protein-protein interactions efficiently and in a biologically meaningful way. Given a query protein with optional search conditions expressed in one or more GO terms, the search engine finds all the interaction partners of the query protein by unique prime factorization of the modified Gödel numbers representing the query protein and the search conditions. Representing the biological relations of proteins and their GO annotations by modified Gödel numbers enables a search engine to find all protein-protein interactions efficiently by prime factorization of the numbers. Keyword matching or ID matching search methods often miss the interactions involving a protein that has no explicit annotations matching the search condition, but our search engine retrieves such interactions as well if they satisfy the search condition with a more specific term in the ontology.
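The encoding can be illustrated with a toy sketch (hypothetical GO term names and a tiny prime table; the paper's actual modified Gödel numbering is more elaborate): each GO term maps to a distinct prime, an annotation set maps to the product of its primes, and a search condition matches exactly when its product divides the protein's number.

```python
# Toy Gödel-number encoding of GO annotations (hypothetical terms/primes).
primes = {"GO:catalysis": 2, "GO:binding": 3, "GO:kinase": 5}

def encode(terms):
    """Encode a set of GO terms as the product of their primes."""
    n = 1
    for t in terms:
        n *= primes[t]
    return n

proteins = {
    "P1": encode(["GO:binding", "GO:kinase"]),   # 3 * 5 = 15
    "P2": encode(["GO:catalysis"]),              # 2
}

def matches(protein_code, condition_terms):
    """A protein satisfies a condition iff the condition's encoding divides
    the protein's encoding (all required primes are present)."""
    return protein_code % encode(condition_terms) == 0

print([p for p, code in proteins.items() if matches(code, ["GO:kinase"])])
# → ['P1']
```

Divisibility testing replaces set containment, so one arithmetic operation checks an entire annotation set.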

  8. Radiological protection. Responsibility of the Safety Engineering Company

    International Nuclear Information System (INIS)

    Netto, A.L.

    1987-01-01

    This contribution addresses safety engineering in the radiological protection area for X-ray and gamma-ray services. It mainly emphasizes the case of companies that do not have their own X-ray and gamma-ray services and therefore use part-time personnel in this area, but remain responsible for the safety of their employees should any accident occur. (author) [pt

  9. D-score: a search engine independent MD-score.

    Science.gov (United States)

    Vaudel, Marc; Breiter, Daniela; Beck, Florian; Rahnenführer, Jörg; Martens, Lennart; Zahedi, René P

    2013-03-01

    While peptides carrying PTMs are routinely identified in gel-free MS, the localization of the PTMs onto the peptide sequences remains challenging. Search engine scores of secondary peptide matches have been used in different approaches in order to infer the quality of site inference, by penalizing the localization whenever the search engine similarly scored two candidate peptides with different site assignments. In the present work, we show how the estimation of posterior error probabilities for peptide candidates allows the estimation of a PTM score called the D-score, for multiple search engine studies. We demonstrate the applicability of this score to three popular search engines: Mascot, OMSSA, and X!Tandem, and evaluate its performance using an already published high resolution data set of synthetic phosphopeptides. For those peptides with phosphorylation site inference uncertainty, the number of spectrum matches with correctly localized phosphorylation increased by up to 25.7% when compared to using Mascot alone, although the actual increase depended on the fragmentation method used. Since this method relies only on search engine scores, it can be readily applied to the scoring of the localization of virtually any modification at no additional experimental or in silico cost. © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
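As a rough illustration of the underlying idea (a hedged sketch with invented numbers, not the published D-score definition): when the best and second-best candidate site assignments have similar posterior error probabilities (PEPs), the localization is ambiguous, so a delta-style score can quantify the gap between them.

```python
# Sketch of a delta-style localization score over candidate PEPs.
# Assumed form for illustration only; the published D-score is derived
# from search-engine posterior error probabilities with more detail.
def localization_delta(peps):
    """peps: posterior error probabilities of candidate peptides that
    differ only in the PTM site assignment (lower PEP = better match).
    Returns a confidence gap; small values mean ambiguous localization."""
    s = sorted(peps)
    if len(s) < 2:
        return 1.0 - s[0]   # single candidate: unambiguous
    return s[1] - s[0]      # gap between best and runner-up

# A clear winner scores higher than two near-identical candidates.
print(localization_delta([0.01, 0.5]) > localization_delta([0.01, 0.02]))
# → True
```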

  10. Considerations for the development of task-based search engines

    DEFF Research Database (Denmark)

    Petcu, Paula; Dragusin, Radu

    2013-01-01

    Based on previous experience from working on a task-based search engine, we present a list of suggestions and ideas for an Information Retrieval (IR) framework that could inform the development of next generation professional search systems. The specific task that we start from is the clinicians' information need in finding rare disease diagnostic hypotheses at the time and place where medical decisions are made. Our experience from the development of a search engine focused on supporting clinicians in completing this task has provided us valuable insights in what aspects should be considered... by the developers of vertical search engines...

  11. Combining Vertex-centric Graph Processing with SPARQL for Large-scale RDF Data Analytics

    KAUST Repository

    Abdelaziz, Ibrahim

    2017-06-27

    Modern applications, such as drug repositioning, require sophisticated analytics on RDF graphs that combine structural queries with generic graph computations. Existing systems support either declarative SPARQL queries or generic graph processing, but not both. We bridge the gap by introducing Spartex, a versatile framework for complex RDF analytics. Spartex extends SPARQL to support programs that seamlessly combine generic graph algorithms (e.g., PageRank, shortest paths) with SPARQL queries. Spartex builds on existing vertex-centric graph processing frameworks, such as GraphLab or Pregel. It implements a generic SPARQL operator as a vertex-centric program that interprets SPARQL queries and executes them efficiently using a built-in optimizer. In addition, any graph algorithm implemented in the underlying vertex-centric framework can be executed in Spartex. We present various scenarios where our framework significantly simplifies the implementation of complex RDF data analytics programs. We demonstrate that Spartex scales to datasets with billions of edges, and show that our core SPARQL engine is at least as fast as state-of-the-art specialized RDF engines. For complex analytical tasks that combine generic graph processing with SPARQL, Spartex is at least an order of magnitude faster than existing alternatives.
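The vertex-centric model that Spartex builds on can be illustrated with a minimal Pregel-style PageRank loop (toy three-node graph, sequential simulation of the parallel supersteps; frameworks like Pregel or GraphLab run the same per-vertex logic distributed across workers):

```python
# Minimal vertex-centric (Pregel-style) PageRank sketch on a toy graph.
edges = {"a": ["b", "c"], "b": ["c"], "c": ["a"]}
rank = {v: 1.0 / len(edges) for v in edges}   # uniform initial rank

for _ in range(20):                            # supersteps
    msgs = {v: 0.0 for v in edges}
    for v, out in edges.items():               # each vertex sends rank/outdeg
        for w in out:
            msgs[w] += rank[v] / len(out)
    # each vertex updates its value from incoming messages (damping 0.85)
    rank = {v: 0.15 / len(edges) + 0.85 * msgs[v] for v in edges}

print(sorted(rank))  # vertices: ['a', 'b', 'c']
```

In a real framework the inner loop is the per-vertex compute function; the dictionary of messages stands in for the message-passing layer.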

  12. Grooker, KartOO, Addict-o-Matic and More: Really Different Search Engines

    Science.gov (United States)

    Descy, Don E.

    2009-01-01

    There are hundreds of unique search engines in the United States and thousands of unique search engines around the world. If people get into search engines designed just to search particular web sites, the number is in the hundreds of thousands. This article looks at: (1) clustering search engines, such as KartOO (www.kartoo.com) and Grokker…

  13. New Architectures for Presenting Search Results Based on Web Search Engines Users Experience

    Science.gov (United States)

    Martinez, F. J.; Pastor, J. A.; Rodriguez, J. V.; Lopez, Rosana; Rodriguez, J. V., Jr.

    2011-01-01

    Introduction: The Internet is a dynamic environment which is continuously being updated. Search engines have been, currently are and in all probability will continue to be the most popular systems in this information cosmos. Method: In this work, special attention has been paid to the series of changes made to search engines up to this point,…

  14. Searching for a New Way to Reach Patrons: A Search Engine Optimization Pilot Project at Binghamton University Libraries

    Science.gov (United States)

    Rushton, Erin E.; Kelehan, Martha Daisy; Strong, Marcy A.

    2008-01-01

    Search engine use is one of the most popular online activities. According to a recent OCLC report, nearly all students start their electronic research using a search engine instead of the library Web site. Instead of viewing search engines as competition, however, librarians at Binghamton University Libraries decided to employ search engine…

  15. Evaluation of Proteomic Search Engines for the Analysis of Histone Modifications

    Science.gov (United States)

    2015-01-01

    Identification of histone post-translational modifications (PTMs) is challenging for proteomics search engines. Including many histone PTMs in one search increases the number of candidate peptides dramatically, leading to low search speed and fewer identified spectra. To evaluate database search engines on identifying histone PTMs, we present a method in which one kind of modification is searched at a time (for example, unmodified, individually modified, or multimodified), each search result is filtered at a false discovery rate below 1%, and the identifications of multiple search engines are combined to obtain confident results. We apply this method to eight search engines on histone data sets. We find that two search engines, pFind and Mascot, identify most of the confident results at a reasonable speed, so we recommend using them to identify histone modifications. During the evaluation, we also find some important aspects for the analysis of histone modifications. Our evaluation of different search engines on identifying histone modifications will hopefully help those who are hoping to enter the histone proteomics field. The mass spectrometry proteomics data have been deposited to the ProteomeXchange Consortium with the data set identifier PXD001118. PMID:25167464

  16. Evaluation of proteomic search engines for the analysis of histone modifications.

    Science.gov (United States)

    Yuan, Zuo-Fei; Lin, Shu; Molden, Rosalynn C; Garcia, Benjamin A

    2014-10-03

    Identification of histone post-translational modifications (PTMs) is challenging for proteomics search engines. Including many histone PTMs in one search increases the number of candidate peptides dramatically, leading to low search speed and fewer identified spectra. To evaluate database search engines on identifying histone PTMs, we present a method in which one kind of modification is searched at a time (for example, unmodified, individually modified, or multimodified), each search result is filtered at a false discovery rate below 1%, and the identifications of multiple search engines are combined to obtain confident results. We apply this method to eight search engines on histone data sets. We find that two search engines, pFind and Mascot, identify most of the confident results at a reasonable speed, so we recommend using them to identify histone modifications. During the evaluation, we also find some important aspects for the analysis of histone modifications. Our evaluation of different search engines on identifying histone modifications will hopefully help those who are hoping to enter the histone proteomics field. The mass spectrometry proteomics data have been deposited to the ProteomeXchange Consortium with the data set identifier PXD001118.
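The combination step described in the record can be sketched as follows (hypothetical peptide identifiers and q-values; real pipelines compute the FDR from target-decoy statistics): each engine's results are filtered at FDR < 1% separately, and the surviving identifications are merged into one confident set.

```python
# Sketch of combining FDR-filtered identifications from multiple engines.
def filter_fdr(ids, fdr_cut=0.01):
    """ids: list of (spectrum, peptide, q_value) tuples.
    Keep only identifications below the FDR cutoff."""
    return {(spec, pep) for spec, pep, q in ids if q < fdr_cut}

# Hypothetical per-engine result lists (names and q-values invented).
pfind_ids  = [("sp1", "K9ac-pep", 0.001), ("sp2", "K14ac-pep", 0.05)]
mascot_ids = [("sp1", "K9ac-pep", 0.004), ("sp3", "K27me-pep", 0.002)]

# Union of the per-engine survivor sets yields the confident results.
confident = filter_fdr(pfind_ids) | filter_fdr(mascot_ids)
print(sorted(confident))
# → [('sp1', 'K9ac-pep'), ('sp3', 'K27me-pep')]
```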

  17. Chemical Information in Scirus and BASE (Bielefeld Academic Search Engine)

    Science.gov (United States)

    Bendig, Regina B.

    2009-01-01

    The author sought to determine to what extent the two search engines, Scirus and BASE (Bielefeld Academic Search Engines), would be useful to first-year university students as the first point of searching for chemical information. Five topics were searched and the first ten records of each search result were evaluated with regard to the type of…

  18. Estimating Search Engine Index Size Variability

    DEFF Research Database (Denmark)

    Van den Bosch, Antal; Bogers, Toine; De Kunder, Maurice

    2016-01-01

    One of the determining factors of the quality of Web search engines is the size of their index. In addition to its influence on search result quality, the size of the indexed Web can also tell us something about which parts of the WWW are directly accessible to the everyday user. We propose a novel method of estimating the size of a Web search engine's index by extrapolating from document frequencies of words observed in a large static corpus of Web pages. In addition, we provide a unique longitudinal perspective on the size of Google and Bing's indices over a nine-year period, from March 2006 until January 2015. We find that index size estimates of these two search engines tend to vary dramatically over time, with Google generally possessing a larger index than Bing. This result raises doubts about the reliability of previous one-off estimates of the size of the indexed Web. We find...
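The extrapolation idea can be sketched with made-up numbers: if a pivot word occurs in a known fraction of documents in a representative background corpus, the hit count a search engine reports for that word scales up to an estimate of the total index size.

```python
# Index-size extrapolation sketch (all counts below are invented):
# if word w appears in a fraction f_w of a representative corpus and the
# engine reports h_w hits for w, the index holds roughly h_w / f_w docs.
corpus_docs = 1_000_000
doc_freq = {"the": 940_000, "recipe": 31_000}            # background corpus counts
reported_hits = {"the": 47_000_000_000, "recipe": 1_600_000_000}

estimates = [reported_hits[w] / (doc_freq[w] / corpus_docs) for w in doc_freq]
index_size = sum(estimates) / len(estimates)             # average over pivot words
print(f"estimated index size: {index_size:.3e} documents")
```

Averaging over several pivot words smooths out the bias any single word's frequency would introduce.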

  19. The effective use of search engines on the Internet.

    Science.gov (United States)

    Younger, P

    This article explains how nurses can get the most out of researching information on the internet using the search engine Google. It also explores some of the other types of search engines that are available. Internet users are shown how to find text, images and reports and search within sites. Copyright issues are also discussed.

  20. Ontology-centric integration and navigation of the dengue literature.

    Science.gov (United States)

    Rajapakse, Menaka; Kanagasabai, Rajaraman; Ang, Wee Tiong; Veeramani, Anitha; Schreiber, Mark J; Baker, Christopher J O

    2008-10-01

    Uninhibited access to the unstructured information distributed across the web and in scientific literature databases continues to be beyond the reach of scientists and health professionals. To address this challenge we have developed a literature driven, ontology-centric navigation infrastructure consisting of a content acquisition engine, a domain-specific ontology (in OWL-DL) and an ontology instantiation pipeline delivering sentences derived by domain-specific text mining. A visual query tool for reasoning over A-box instances in the populated ontology is presented and used to build conceptual queries that can be issued to the knowledgebase. We have deployed this generic infrastructure to facilitate data integration and knowledge sharing in the domain of dengue, which is one of the most prevalent viral diseases that continue to infect millions of people in the tropical and subtropical regions annually. Using our unique methodology we illustrate simplified search and discovery on dengue information derived from distributed resources and aggregated according to dengue ontology. Furthermore we apply data mining to the instantiated ontology to elucidate trends in the mentions of dengue serotypes in scientific abstracts since 1974.

  1. Evidence-based Medicine Search: a customizable federated search engine.

    Science.gov (United States)

    Bracke, Paul J; Howse, David K; Keim, Samuel M

    2008-04-01

    This paper reports on the development of a tool by the Arizona Health Sciences Library (AHSL) for searching clinical evidence that can be customized for different user groups. The AHSL provides services to the University of Arizona's (UA's) health sciences programs and to the University Medical Center. Librarians at AHSL collaborated with UA College of Medicine faculty to create an innovative search engine, Evidence-based Medicine (EBM) Search, that provides users with a simple search interface to EBM resources and presents results organized according to an evidence pyramid. EBM Search was developed with a web-based configuration component that allows the tool to be customized for different specialties. Informal and anecdotal feedback from physicians indicates that EBM Search is a useful tool with potential in teaching evidence-based decision making. While formal evaluation is still being planned, a tool such as EBM Search, which can be configured for specific user populations, may help lower barriers to information resources in an academic health sciences center.

  2. Noesis: Ontology based Scoped Search Engine and Resource Aggregator for Atmospheric Science

    Science.gov (United States)

    Ramachandran, R.; Movva, S.; Li, X.; Cherukuri, P.; Graves, S.

    2006-12-01

    The goal for search engines is to return results that are both accurate and complete: they should find only what you really want, and find everything you really want. Search engines (even meta search engines) lack semantics. Search is based simply on string matching between the user's query term and the resource database, and the semantics associated with the search string are not captured. For example, if an atmospheric scientist searches for "pressure"-related web resources, most search engines return inaccurate results such as web resources related to blood pressure. This presentation describes Noesis, a meta-search engine and resource aggregator that uses domain ontologies to provide scoped search capabilities. Noesis uses domain ontologies to help the user scope the search query to ensure that the search results are both accurate and complete. The domain ontologies guide the user in refining the search query and thereby reduce the user's burden of experimenting with different search strings. Semantics are captured by refining the query terms to cover synonyms, specializations, generalizations and related concepts. Noesis also serves as a resource aggregator: it categorizes the search results from different online resources, such as education materials, publications, datasets and web search engines, that might be of interest to the user.
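The ontology-guided scoping described here can be sketched with a hypothetical mini-ontology (term names and structure invented for illustration): the query term is expanded with synonyms and specializations, while results matching the wrong sense of the word are filtered out.

```python
# Toy ontology-guided query scoping (hypothetical mini-ontology).
ontology = {
    "pressure": {
        "synonyms": ["atmospheric pressure", "barometric pressure"],
        "narrower": ["sea-level pressure", "vapor pressure"],
        "exclude":  ["blood pressure"],    # wrong-sense terms to filter out
    }
}

def scoped_query(term):
    """Expand a query term using the ontology; return (expanded, excluded)."""
    entry = ontology[term]
    return [term] + entry["synonyms"] + entry["narrower"], entry["exclude"]

terms, excluded = scoped_query("pressure")
docs = ["blood pressure and hypertension", "sea-level pressure charts"]
hits = [d for d in docs
        if any(t in d for t in terms) and not any(x in d for x in excluded)]
print(hits)
# → ['sea-level pressure charts']
```

The medical-sense document matches the bare string "pressure" but is dropped by the exclusion list, which is exactly the scoping behaviour the abstract motivates.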

  3. Radiological controls integrated into design

    Energy Technology Data Exchange (ETDEWEB)

    Kindred, G.W. [Cleveland Electric Illuminating Co., Perry, OH (United States)

    1995-03-01

    Radiological controls are required by law in the design of commercial nuclear power reactor facilities. These controls can be relatively minor or significant, relative to cost. To ensure that radiological controls are designed into a project, the health physicist (radiological engineer) must be involved from the beginning. This is especially true regarding keeping costs down. For every radiological engineer at a nuclear power plant there must be fifty engineers of other disciplines. The radiological engineer cannot be an expert on every discipline of engineering. However, he must be knowledgeable to the degree of how a design will impact the facility from a radiological perspective. This paper will address how to effectively perform radiological analyses with the goal of radiological controls integrated into the design package.

  4. The AXES-lite video search engine

    NARCIS (Netherlands)

    Chen, Shu; McGuinness, Kevin; Aly, Robin; de Jong, Franciska M.G.; O' Connor, Noel E.

    The aim of AXES is to develop tools that provide various types of users with new engaging ways to interact with audiovisual libraries, helping them discover, browse, navigate, search, and enrich archives. This paper describes the initial (lite) version of the AXES search engine, which is targeted at

  5. Dermatological image search engines on the Internet: do they work?

    Science.gov (United States)

    Cutrone, M; Grimalt, R

    2007-02-01

    Atlases on CD-ROM first replaced paediatric dermatology atlases printed on paper, permitting faster searches and practical comparison of differential diagnoses. The third step in the evolution of clinical atlases was the onset of the online atlas. Many doctors now use Internet image search engines to obtain clinical images directly. The aim of this study was to test the reliability of the image search engines compared to the online atlases. We tested seven Internet image search engines with three paediatric dermatology diseases. In general, the service offered by the search engines is good and continues to be free of charge. The agreement between what we searched for and what we found was generally excellent, and the results contained no advertisements. Most Internet search engines provided similar results, but some were more user friendly than others. It is not necessary to repeat the same search with Picsearch, Lycos and MSN, as the response would be the same; there is a possibility that they might share software. Image search engines are a useful, free and precise method of obtaining paediatric dermatology images for teaching purposes. There is still the matter of copyright to be resolved: what are the legal uses of these 'free' images? How do we define 'teaching purposes'? New watermark methods and encrypted electronic signatures might solve these problems and answer these questions.

  6. Concept of „long centric"

    Directory of Open Access Journals (Sweden)

    Martinović Željko

    2004-01-01

    The objective of this paper was to show the historical perspective of the „long centric" occlusal concept and its importance in modern dentistry, especially from the gnathological aspect. The „long centric" concept represents a therapeutic modality used in modern dentistry for occlusal adjustment in all patients showing differences between strong and weak closure of the lower jaw starting from the position of physiological rest. The „long centric" concept is applied only to anterior teeth and to occlusal movements from, rather than toward, the center. Whenever the „long centric" parameters are not adequate, occlusal disturbance, resulting from the „wedge" effect during the initial closure of the lower jaw, is present. Different degrees of abrasion or hypermobility of the teeth are often the result of the above-mentioned occlusal disturbances and can potentially trigger bruxism and malfunction. This procedure should be the regular approach of every dentist to any occlusion, because only a built-in „long centric" efficiently contributes to the occlusal stability of the anterior portion of the dentition. All occlusions should be routinely tested regarding their need for „long centric", especially when extensive therapeutic interventions (conservative, prosthetic) in the occlusal complex are required.

  7. Sundanese ancient manuscripts search engine using probability approach

    Science.gov (United States)

    Suryani, Mira; Hadi, Setiawan; Paulus, Erick; Nurma Yulita, Intan; Supriatna, Asep K.

    2017-10-01

    Today, Information and Communication Technology (ICT) has become routine in every aspect of life, including culture and heritage. Sundanese ancient manuscripts, part of the Sundanese heritage, are in a damaged condition, and so is the information they contain. To preserve the information in Sundanese ancient manuscripts and make it easier to search, a search engine has been developed. The search engine must have good computational performance. To obtain the best computation in the developed search engine, three probabilistic approaches were compared in this study: the Bayesian Networks Model, Divergence from Randomness (DFR) with the PL2 distribution, and DFR-PL2F, a derivative of DFR-PL2. The three probabilistic approaches are supported by a document index and three different weighting methods: term occurrence, term frequency, and TF-IDF. The experiment involved 12 Sundanese ancient manuscripts containing 474 distinct terms. The developed search engine was tested with 50 random queries of three types. The results showed that for both single and multiple queries, the best search performance was given by the combination of the PL2F approach and the TF-IDF weighting method, with an average response time of about 0.08 seconds and a Mean Average Precision (MAP) of about 0.33.
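
    The Mean Average Precision reported above can be computed as the mean, over all queries, of the average of precision-at-k taken at each rank where a relevant document appears. A minimal sketch follows; the query IDs and manuscript IDs are invented for illustration, not the paper's data.

```python
def average_precision(ranked, relevant):
    """Average precision for one query: mean of precision@k at each
    rank k where a relevant document appears in the ranked list."""
    hits, score = 0, 0.0
    for k, doc in enumerate(ranked, start=1):
        if doc in relevant:
            hits += 1
            score += hits / k
    return score / len(relevant) if relevant else 0.0

def mean_average_precision(results, judgments):
    """MAP over all queries: results maps query -> ranked doc list,
    judgments maps query -> set of relevant docs."""
    return sum(average_precision(results[q], judgments[q])
               for q in results) / len(results)

# Hypothetical queries and manuscript IDs:
results = {"q1": ["m3", "m1", "m7"], "q2": ["m2", "m5", "m1"]}
judgments = {"q1": {"m1", "m3"}, "q2": {"m5"}}
print(round(mean_average_precision(results, judgments), 3))  # 0.75
```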

  8. Architect engineer balance-of-plant radiological design considerations

    International Nuclear Information System (INIS)

    Piccot, A.R.

    1975-01-01

    Methods which are or may be used by Architect Engineers in dealing with the problems of radiological safety in the design of a nuclear power plant are discussed. The bases and basic requirements for a radiation protection program are briefly noted. Requirements in the areas of planning, organization, responsibilities and implementation of radiation protection are discussed. Lists of safety tasks which should be performed during the various design phases are presented.

  9. Short-term Internet search using makes people rely on search engines when facing unknown issues.

    Science.gov (United States)

    Wang, Yifan; Wu, Lingdan; Luo, Liang; Zhang, Yifen; Dong, Guangheng

    2017-01-01

    Internet search engines, with their powerful search and sort functions and ease of use, have become an indispensable tool for many individuals. The current study tests whether short-term Internet search training can make people more dependent on search engines. Thirty-one of forty subjects completed the study, which included a pre-test, six days of Internet search training, and a post-test. During the pre- and post-tests, subjects were asked to search online for the answers to 40 unusual questions, remember the answers, and recall them in the scanner. Unlearned questions were randomly presented at the recall stage in order to elicit a search impulse. Compared to the pre-test, subjects in the post-test reported a higher impulse to use search engines to answer unlearned questions. Consistently, subjects showed higher brain activations in the dorsolateral prefrontal cortex and anterior cingulate cortex in the post-test than in the pre-test. In addition, there were significant positive correlations between self-reported search impulse and brain responses in the frontal areas. The results suggest that as little as six days of Internet search training can make people dependent on search tools when facing unknown issues. People easily become dependent on Internet search engines.

  10. Short-term Internet search using makes people rely on search engines when facing unknown issues.

    Directory of Open Access Journals (Sweden)

    Yifan Wang

    Internet search engines, with their powerful search and sort functions and ease of use, have become an indispensable tool for many individuals. The current study tests whether short-term Internet search training can make people more dependent on search engines. Thirty-one of forty subjects completed the study, which included a pre-test, six days of Internet search training, and a post-test. During the pre- and post-tests, subjects were asked to search online for the answers to 40 unusual questions, remember the answers, and recall them in the scanner. Unlearned questions were randomly presented at the recall stage in order to elicit a search impulse. Compared to the pre-test, subjects in the post-test reported a higher impulse to use search engines to answer unlearned questions. Consistently, subjects showed higher brain activations in the dorsolateral prefrontal cortex and anterior cingulate cortex in the post-test than in the pre-test. In addition, there were significant positive correlations between self-reported search impulse and brain responses in the frontal areas. The results suggest that as little as six days of Internet search training can make people dependent on search tools when facing unknown issues. People easily become dependent on Internet search engines.

  11. Copyright over Works Reproduced and Published Online by Search Engines

    Directory of Open Access Journals (Sweden)

    Ernesto Rengifo García

    2016-12-01

    Search engines are an important technological tool that facilitates the dissemination of and access to information on the Internet. However, when it comes to works protected by author's rights, in the continental tradition, or copyright, in the Anglo-Saxon tradition, it is difficult to determine whether search engines infringe the rights of the owners of these works. Faced with this situation, the US and Europe have employed the exceptions to author's rights and fair use to decide whether search engines infringe owners' rights. This article carries out a comparative analysis of the different judicial decisions in the US and Europe on search engines and protected works.

  12. Automatic Planning of External Search Engine Optimization

    Directory of Open Access Journals (Sweden)

    Vita Jasevičiūtė

    2015-07-01

    This paper describes an investigation of an external search engine optimization (SEO) action planning tool, dedicated to automatically extracting a small set of the most important keywords for each month over a whole-year period. The keywords in the set are extracted according to externally measured parameters, such as the average number of searches during the year and for each month individually. Additionally, the position of the optimized web site for each keyword is taken into account. The generated optimization plan is similar to the optimization plans prepared manually by SEO professionals and can be successfully used as a support tool for web site search engine optimization.
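
    The kind of plan generation described can be sketched as a per-month ranking of candidate keywords by search volume, discounting keywords where the site already ranks near the top. The scoring rule, keywords, volumes, and positions below are all invented for illustration, not the tool's actual algorithm.

```python
def monthly_plan(volumes, positions, top_k=2):
    """volumes: {keyword: {month: searches}}; positions: {keyword: SERP rank}.
    Returns {month: [keywords]}, picking top_k keywords per month."""
    months = next(iter(volumes.values())).keys()
    plan = {}
    for m in months:
        # Prioritize high-volume keywords where the site ranks poorly
        # (rank capped at 10 so very deep positions don't dominate).
        scored = sorted(volumes,
                        key=lambda kw: volumes[kw][m] * min(positions[kw], 10),
                        reverse=True)
        plan[m] = scored[:top_k]
    return plan

volumes = {"seo tools": {"jan": 900, "feb": 400},
           "seo audit": {"jan": 300, "feb": 800},
           "backlinks": {"jan": 200, "feb": 100}}
positions = {"seo tools": 4, "seo audit": 12, "backlinks": 2}
print(monthly_plan(volumes, positions))
```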

  13. Study of Search Engine Transaction Logs Shows Little Change in How Users use Search Engines. A review of: Jansen, Bernard J., and Amanda Spink. “How Are We Searching the World Wide Web? A Comparison of Nine Search Engine Transaction Logs.” Information Processing & Management 42.1 (2006): 248‐263.

    Directory of Open Access Journals (Sweden)

    David Hook

    2006-09-01

    Objective – To examine the interactions between users and search engines, and how they have changed over time. Design – Comparative analysis of search engine transaction logs. Setting – Nine major analyses of search engine transaction logs. Subjects – Nine web search engine studies (4 European, 5 American) over a seven‐year period, covering the search engines Excite, Fireball, AltaVista, BWIE and AllTheWeb. Methods – The results from individual studies are compared by year of study for percentages of single query sessions, one‐term queries, operator (AND, OR, NOT, etc.) usage and single result page viewing. As well, the authors group the search queries into eleven different topical categories and compare how the breakdown has changed over time. Main Results – Based on the percentage of single query sessions, it does not appear that the complexity of interactions has changed significantly for either the U.S.‐based or the European‐based search engines. Likewise, there was little change observed in the percentage of one‐term queries over the years of study for either group. Few users (generally less than 20%) use Boolean or other operators in their queries, and these percentages have remained relatively stable. One area of noticeable change is the percentage of users viewing only one results page, which has increased over the years of study. Based on the studies of the U.S.‐based search engines, the topical categories of ‘People, Place or Things’ and ‘Commerce, Travel, Employment or Economy’ are becoming more popular, while the categories of ‘Sex and Pornography’ and ‘Entertainment or Recreation’ are declining. Conclusions – The percentage of users viewing only one results page increased during the years of the study, while the percentages of single query sessions, one‐term sessions and operator usage remained stable.

  14. Can electronic search engines optimize screening of search results in systematic reviews: an empirical study.

    Science.gov (United States)

    Sampson, Margaret; Barrowman, Nicholas J; Moher, David; Clifford, Tammy J; Platt, Robert W; Morrison, Andra; Klassen, Terry P; Zhang, Li

    2006-02-24

    Most electronic search efforts directed at identifying primary studies for inclusion in systematic reviews rely on the optimal Boolean search features of search interfaces such as DIALOG and Ovid. Our objective was to test the ability of the Ultraseek search engine to rank MEDLINE records of the included studies of Cochrane reviews within the top half of all the records retrieved by the Boolean MEDLINE search used by the reviewers. Collections were created using the MEDLINE bibliographic records of included and excluded studies listed in each review and all records retrieved by the MEDLINE search. Records were converted to individual HTML files. Collections of records were indexed and searched with a statistical search engine, Ultraseek, using review-specific search terms. Our data sources, systematic reviews published in the Cochrane Library, were included if they reported using at least one phase of the Cochrane Highly Sensitive Search Strategy (HSSS), provided citations for both included and excluded studies, and conducted a meta-analysis using a binary outcome measure. Reviews were selected if they yielded between 1000 and 6000 records when the MEDLINE search strategy was replicated. Nine Cochrane reviews were included. Included studies within the Cochrane reviews were found within the first 500 retrieved studies more often than would be expected by chance. Across all reviews, recall of included studies into the top 500 was 0.70. There was no statistically significant difference in ranking when comparing included studies with the subset of excluded studies listed as excluded in the published review. The relevance ranking provided by the search engine was better than expected by chance and shows promise for the preliminary evaluation of large result sets from Boolean searches. A statistical search engine does not, however, appear to be able to make fine discriminations concerning the relevance of bibliographic records that have been pre-screened by systematic reviewers.
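
    The headline statistic, recall of included studies into the top 500 ranked records, is simply the fraction of a review's included studies that the statistical ranking places in the first 500 of all retrieved records. A minimal sketch follows; the record IDs are invented, not the reviews' data.

```python
def recall_at_k(ranked_ids, included_ids, k=500):
    """Fraction of the review's included studies found in the top k
    records of the statistically ranked retrieval."""
    top = set(ranked_ids[:k])
    return len(top & set(included_ids)) / len(included_ids)

# Invented record IDs for illustration:
ranked = [f"rec{i}" for i in range(1, 2001)]          # 2000 retrieved records
included = ["rec12", "rec340", "rec499", "rec1500"]   # 4 included studies
print(recall_at_k(ranked, included))  # 3 of 4 fall in the top 500 -> 0.75
```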

  15. Concept of "long centric"

    OpenAIRE

    Martinović Željko; Obradović-Đuričić Kosovka; Teodorović Nevenka S.; Živković Rade

    2004-01-01

    The objective of this paper was to present the historical perspective of the "long centric" occlusal concept and its importance in modern dentistry, especially from the gnathological aspect. The "long centric" concept represents a therapeutic modality used in modern dentistry for occlusal adjustment in all patients showing differences between strong and weak closure of the lower jaw starting from the position of physiological rest. The "long centric" concept is applied only to anterior teeth and occlusa...

  16. Teen smoking cessation help via the Internet: a survey of search engines.

    Science.gov (United States)

    Edwards, Christine C; Elliott, Sean P; Conway, Terry L; Woodruff, Susan I

    2003-07-01

    The objective of this study was to assess Web sites related to teen smoking cessation on the Internet. Seven Internet search engines were searched using the keywords 'teen quit smoking'. The top 20 hits from each search engine were reviewed and categorized. The keywords produced between 35 and 400,000 hits depending on the search engine. Of 140 potential hits, 62% were active, unique sites; 85% were listed by only one search engine; and 40% focused on cessation. Findings suggest that legitimate online smoking cessation help for teens is constrained by search engine choice and the amount of time teens spend looking through potential sites. Resource listings should be updated regularly, smoking cessation Web sites need to be picked up by multiple search engines, and further evaluation of smoking cessation Web sites needs to be conducted to identify the most effective help for teens.

  17. Interest in Anesthesia as Reflected by Keyword Searches using Common Search Engines.

    Science.gov (United States)

    Liu, Renyu; García, Paul S; Fleisher, Lee A

    2012-01-23

    Since current general interest in anesthesia is unknown, we analyzed internet keyword searches to gauge general interest in anesthesia in comparison with surgery and pain. The trend of keyword searches related to anesthesia and anaesthesia from 2004 to 2010 was investigated using Google Insights for Search, as was the trend in the number of peer-reviewed articles on anesthesia cited in PubMed and Medline over the same period. The average cost of advertising on anesthesia, surgery and pain was estimated using Google AdWords, and results from other common search engines were also analyzed. Correlation between year and relative number of searches was determined. Although different search engines may report different total numbers of search results (available posts), the ratios of search results between some common keywords related to perioperative care are comparable, indicating a similar trend. The number of peer-reviewed manuscripts on "anesthesia" and the proportion of papers on "anesthesia and outcome" are trending up. Estimated advertising spending is lower for anesthesia-related terms than for pain or surgery, owing to relatively smaller search traffic. General interest in anesthesia (anaesthesia) as measured by internet searches appears to be decreasing. Pain, preanesthesia evaluation, anesthesia and outcome, and side effects of anesthesia are the critical areas on which anesthesiologists should focus to address the increasing concerns.

  18. Practical and Efficient Searching in Proteomics: A Cross Engine Comparison

    Science.gov (United States)

    Paulo, Joao A.

    2014-01-01

    Background: Analysis of large datasets produced by mass spectrometry-based proteomics relies on database search algorithms to sequence peptides and identify proteins. Several such scoring methods are available, each based on different statistical foundations and thereby not producing identical results. Here, the aim is to compare peptide and protein identifications using multiple search engines and examine the additional proteins gained by increasing the number of technical replicate analyses. Methods: A HeLa whole-cell lysate was analyzed on an Orbitrap mass spectrometer for 10 technical replicates. The data were combined and searched using Mascot, SEQUEST, and Andromeda. Comparisons were made of peptide and protein identifications among the search engines. In addition, searches using each engine were performed with an incrementing number of technical replicates. Results: The number and identity of peptides and proteins differed across search engines. For all three search engines, the differences in protein identifications were greater than the differences in peptide identifications, indicating that the major source of the disparity may be at the protein inference grouping level. The data also revealed that analysis of 2 technical replicates can increase protein identifications by up to 10-15%, while a third replicate results in an additional 4-5%. Conclusions: The data emphasize two practical methods of increasing the robustness of mass spectrometry data analysis: 1) using multiple search engines can expand the number of identified proteins (union) and validate protein identifications (intersection), and 2) analysis of 2 or 3 technical replicates can substantially expand protein identifications. Moreover, additional information can be extracted from a dataset by performing database searching with different engines and performing technical repeats, which requires no additional sample preparation and effectively utilizes research time and effort. PMID:25346847
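
    The union/intersection idea in the conclusions maps directly onto set operations: proteins reported by any engine expand coverage, while proteins reported by all engines are cross-validated. The protein accessions below are invented placeholders, not results from the study.

```python
# Hypothetical protein accessions reported by each search engine:
mascot    = {"P04637", "P38398", "Q9Y6K9"}
sequest   = {"P04637", "P38398", "O15350"}
andromeda = {"P04637", "Q9Y6K9", "O15350"}

union = mascot | sequest | andromeda       # expanded identifications
consensus = mascot & sequest & andromeda   # validated by all three engines
print(len(union), sorted(consensus))       # 4 ['P04637']
```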

  19. Development of flexible process-centric web applications: An integrated model driven approach

    NARCIS (Netherlands)

    Bernardi, M.L.; Cimitile, M.; Di Lucca, G.A.; Maggi, F.M.

    2012-01-01

    In recent years, Model Driven Engineering (MDE) approaches have been proposed and used to develop and evolve web applications (WAs). However, the definition of appropriate MDE approaches for the development of flexible process-centric WAs is still limited. In particular, (flexible) workflow models have never been

  20. A unified architecture for biomedical search engines based on semantic web technologies.

    Science.gov (United States)

    Jalali, Vahid; Matash Borujerdi, Mohammad Reza

    2011-04-01

    There has been huge growth in the volume of published biomedical research in recent years. Many medical search engines have been designed and developed to address the ever-growing information needs of biomedical experts and curators. Significant progress has been made in utilizing the knowledge embedded in medical ontologies and controlled vocabularies to assist these engines. However, the lack of a common architecture for the utilized ontologies and the overall retrieval process hampers evaluating different search engines and achieving interoperability between them under unified conditions. In this paper, a unified architecture for medical search engines is introduced. The proposed model contains standard schemas, declared in semantic web languages, for the ontologies and documents used by search engines. Unified models for the annotation and retrieval processes are other parts of the introduced architecture. A sample search engine was also designed and implemented based on the proposed architecture. The search engine was evaluated using two test collections, and results are reported in terms of precision vs. recall and mean average precision for the different approaches used by this search engine.

  1. Evaluating search effectiveness of some selected search engines ...

    African Journals Online (AJOL)

    With advancement in technology, many individuals are becoming familiar with the internet, and a lot of users seek information on the World Wide Web (WWW) using a variety of search engines. This research work evaluates the retrieval effectiveness of Google, Yahoo, Bing, AOL and Baidu. Precision, relative recall and response ...

  2. The Effect of Internet Searches on Afforestation: The Case of a Green Search Engine

    Directory of Open Access Journals (Sweden)

    Pedro Palos-Sanchez

    2018-01-01

    Ecosia is an Internet search engine that plants trees with the income obtained from advertising. This study explored the factors that affect the adoption of Ecosia.org from the perspective of technology adoption and trust, using the Unified Theory of Acceptance and Use of Technology (UTAUT2) and analyzing the results with PLS-SEM (Partial Least Squares-Structural Equation Modeling). A survey was conducted with a structured questionnaire on search engines, which yielded the following results: (1) the idea of a company helping to mitigate the effects of climate change by planting trees is well received by Internet users, although few people accept the idea of changing their habit of using traditional search engines; (2) Ecosia is a search engine believed to have higher compatibility rates and to need fewer hardware resources; and (3) ecological marketing is an appropriate strategy for the future that can increase the intention to use a technological product. Based on the results obtained, this study shows that a search engine or other Internet service that can be audited (visits, searches, files, etc.) can also contribute to curbing the effects of deforestation and climate change. In addition, companies, and especially technological start-ups, are advised to take into account that users feel better using these tools. Finally, this study urges foundations and non-governmental organizations to fight the effects of deforestation by supporting these initiatives, and urges companies to support technological services and follow the behavior of Ecosia.org in order to positively influence user satisfaction through ecological marketing strategies.

  3. Search Engine For Ebook Portal

    Directory of Open Access Journals (Sweden)

    Prashant Kanade

    2017-05-01

    The purpose of this paper is to establish the textual analytics involved in developing a search engine for an ebook portal. We extracted our dataset from Project Gutenberg using a robot harvester. Textual analytics is used for efficient search retrieval. The entire dataset is represented using the Vector Space Model, where each document is a vector in the vector space. For computational purposes, we represent our dataset in the form of a Term Frequency-Inverse Document Frequency (tf-idf) matrix. The first step involves obtaining the most coherent sequence of words from the search query entered. The entered query is processed using front-end algorithms, which include spell checking, text segmentation and language modeling. Back-end processing includes similarity modeling, clustering, indexing and retrieval. The relationship between documents and words is established using cosine similarity, measured between documents and words in the vector space. Clustering is used to suggest books that are similar to the search query entered by the user. Lastly, the Lucene-based Elasticsearch engine is used for indexing the documents, which allows faster retrieval of data. Elasticsearch returns a dictionary and creates a tf-idf matrix. The processed query is compared with the dictionary obtained, and the tf-idf matrix is used to calculate the score of each match in order to give the most relevant result.
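
    The core retrieval step described, tf-idf vectors compared by cosine similarity, can be sketched in a few lines. The tokenized documents and query below are invented for illustration; a real system would tokenize and normalize the harvested texts first.

```python
import math
from collections import Counter

def build_index(docs):
    """tf-idf vectors for tokenized documents, plus the idf table."""
    n = len(docs)
    df = Counter(t for doc in docs for t in set(doc))
    # +1 keeps terms present in every document from vanishing entirely.
    idf = {t: math.log(n / df[t]) + 1.0 for t in df}
    vecs = [{t: c * idf[t] for t, c in Counter(doc).items()} for doc in docs]
    return vecs, idf

def cosine(u, v):
    """Cosine similarity between two sparse vectors (dicts)."""
    dot = sum(w * v[t] for t, w in u.items() if t in v)
    norm = lambda vec: math.sqrt(sum(w * w for w in vec.values()))
    return dot / (norm(u) * norm(v)) if dot else 0.0

docs = [["whale", "sea", "voyage"], ["sea", "storm", "storm"], ["railway", "voyage"]]
vecs, idf = build_index(docs)
# Weight the query with the same idf table as the documents:
query = {t: c * idf.get(t, 0.0) for t, c in Counter(["sea", "voyage"]).items()}
scores = [cosine(v, query) for v in vecs]
print(scores.index(max(scores)))  # 0: the only doc matching both query terms
```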

  4. Real-time earthquake monitoring using a search engine method.

    Science.gov (United States)

    Zhang, Jie; Zhang, Haijiang; Chen, Enhong; Zheng, Yi; Kuang, Wenhuan; Zhang, Xiong

    2014-12-04

    When an earthquake occurs, seismologists want to use recorded seismograms to infer its location, magnitude and source-focal mechanism as quickly as possible. If such information could be determined immediately, timely evacuations and emergency actions could be undertaken to mitigate earthquake damage. Current advanced methods can report the initial location and magnitude of an earthquake within a few seconds, but estimating the source-focal mechanism may require minutes to hours. Here we present an earthquake search engine, similar to a web search engine, that we developed by applying a computer fast search method to a large seismogram database to find waveforms that best fit the input data. Our method is several thousand times faster than an exact search. For an Mw 5.9 earthquake on 8 March 2012 in Xinjiang, China, the search engine can infer the earthquake's parameters in <1 s after receiving the long-period surface wave data.
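
    The waveform-matching idea can be sketched as a scan over a template database, scoring each template by normalized cross-correlation with the input seismogram and returning the best fit. The templates, labels, and signal below are invented, and the real system's speed comes from a fast indexed search rather than this exhaustive loop.

```python
def correlate(a, b):
    """Normalized zero-lag cross-correlation of equal-length traces."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = lambda s: sum(x * x for x in s) ** 0.5
    return dot / (norm(a) * norm(b))

# Invented template waveforms, labeled with hypothetical source parameters:
templates = {
    "strike-slip, 10 km": [0.0, 1.0, 0.5, -0.5, -1.0, 0.0],
    "normal, 5 km":       [0.0, -1.0, -0.5, 0.5, 1.0, 0.0],
    "thrust, 20 km":      [1.0, 0.5, 0.0, -0.5, -1.0, -0.5],
}
observed = [0.1, 0.9, 0.6, -0.4, -1.1, 0.0]

# Report the source parameters of the best-fitting template:
best = max(templates, key=lambda k: correlate(templates[k], observed))
print(best)
```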

  5. Dynamics of a macroscopic model characterizing mutualism of search engines and web sites

    Science.gov (United States)

    Wang, Yuanshi; Wu, Hong

    2006-05-01

    We present a model to describe the mutualism relationship between search engines and web sites. In the model, search engines and web sites benefit from each other while the search engines are derived products of the web sites and cannot survive independently. Our goal is to show strategies for the search engines to survive in the internet market. From mathematical analysis of the model, we show that mutualism does not always result in survival. We show various conditions under which the search engines would tend to extinction, persist or grow explosively. Then by the conditions, we deduce a series of strategies for the search engines to survive in the internet market. We present conditions under which the initial number of consumers of the search engines has little contribution to their persistence, which is in agreement with the results in previous works. Furthermore, we show novel conditions under which the initial value plays an important role in the persistence of the search engines and deduce new strategies. We also give suggestions for the web sites to cooperate with the search engines in order to form a win-win situation.
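
    The abstract does not reproduce the authors' equations, so the sketch below is a generic mutualism model, not the paper's: web sites w grow logistically and gain a small benefit a*w*s from engines, while search engines s, as derived products, decline at rate d unless supported by enough sites (term b*w). With the invented parameters shown, b*K < d, so the engine tends to extinction even though the sites thrive; other parameter and initial-value choices give persistence or explosive growth, the regimes the abstract describes.

```python
def simulate(w0, s0, t_end=100.0, dt=0.01,
             r=0.5, K=100.0, a=0.002, b=0.005, d=0.6):
    """Forward-Euler integration of an illustrative mutualism ODE."""
    w, s = w0, s0
    for _ in range(int(t_end / dt)):
        dw = r * w * (1 - w / K) + a * w * s  # sites benefit from engines
        ds = s * (b * w - d)                  # engines need enough sites
        w, s = w + dw * dt, s + ds * dt
    return w, s

# Extinction regime: b*K = 0.5 < d = 0.6, sites alone cannot sustain the engine.
w, s = simulate(w0=50.0, s0=5.0)
print(round(w), s < 0.01)  # sites level off near K while the engine dies out
```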

  6. User-centric networking future perspectives

    CERN Document Server

    Aldini, Alessandro

    2014-01-01

    This work represents a milestone for the 'ULOOP User-centric Wireless Local Loop' project funded by the EU IST Seventh Framework Programme. ULOOP is focused on the robust, secure, and autonomic deployment of user-centric wireless networks. It contains contributions by ULOOP partners as well as invited tutorials by international experts in the field. The expected impact is to increase awareness of user-centric networking in terms of, e.g., business opportunities and quality of experience, and to present adequate technology to sustain the growth of user-friendly wireless architectures. Throughout the last 3

  7. The Theory of Planned Behaviour Applied to Search Engines as a Learning Tool

    Science.gov (United States)

    Liaw, Shu-Sheng

    2004-01-01

    Search engines have been developed to help learners seek online information. Based on a theory of planned behaviour approach, this research investigates the behaviour of using search engines as a learning tool. After factor analysis, the results suggest that perceived satisfaction with search engines, search engines as an information…

  8. MuZeeker - Adapting a music search engine for mobile phones

    DEFF Research Database (Denmark)

    Larsen, Jakob Eg; Halling, Søren Christian; Sigurdsson, Magnus Kristinn

    2010-01-01

    We describe MuZeeker, a search engine with domain knowledge based on Wikipedia. MuZeeker enables the user to refine a search in multiple steps by means of category selection. In the present version we focus on multimedia search related to music, and we present two prototype search applications (web-based and mobile) and discuss the issues involved in adapting the search engine for mobile phones. A category-based filtering approach enables the user to refine a search through relevance feedback by category selection instead of typing additional text, which is hypothesized to be an advantage in the mobile MuZeeker application. We report on two usability experiments using the think-aloud protocol, in which N=20 participants performed tasks using MuZeeker and a customized Google search engine. In both experiments web-based and mobile user interfaces were used. The experiment shows that participants are capable

  9. Enhancing discovery in spatial data infrastructures using a search engine

    Directory of Open Access Journals (Sweden)

    Paolo Corti

    2018-05-01

    A spatial data infrastructure (SDI) is a framework of geospatial data, metadata, users and tools intended to provide an efficient and flexible way to use spatial information. One of the key software components of an SDI is the catalogue service, which is needed to discover, query and manage the metadata. Catalogue services in an SDI are typically based on the Open Geospatial Consortium (OGC) Catalogue Service for the Web (CSW) standard, which defines common interfaces for accessing the metadata information. A search engine is a software system capable of supporting fast and reliable search, which may use 'any means necessary' to get users to the resources they need quickly and efficiently. These techniques may include full-text search, natural language processing, weighted results, fuzzy-tolerance results, faceting, hit highlighting, recommendations and many others. In this paper we present an example of a search engine being added to an SDI to improve search across large collections of geospatial datasets. The Centre for Geographic Analysis (CGA) at Harvard University re-engineered the search component of its public domain SDI (Harvard WorldMap), which is based on the GeoNode platform. A search engine was added to the SDI stack to enhance the CSW catalogue's discovery abilities. It is now possible to discover spatial datasets from metadata by using the standard search operations of the catalogue and to take advantage of the new abilities of the search engine to return relevant and reliable content to SDI users.

  10. Intelligent image retrieval based on radiology reports

    Energy Technology Data Exchange (ETDEWEB)

    Gerstmair, Axel; Langer, Mathias; Kotter, Elmar [University Medical Center Freiburg, Department of Diagnostic Radiology, Freiburg (Germany); Daumke, Philipp; Simon, Kai [Averbis GmbH, Freiburg (Germany)

    2012-12-15

    To create an advanced image retrieval and data-mining system based on in-house radiology reports. Radiology reports are semantically analysed using natural language processing (NLP) techniques and stored in a state-of-the-art search engine. Images referenced by sequence and image number in the reports are retrieved from the picture archiving and communication system (PACS) and stored for later viewing. A web-based front end is used as an interface to query for images and show the results with the retrieved images and report text. Using a comprehensive radiological lexicon for the underlying terminology, the search algorithm also finds results for synonyms, abbreviations and related topics. The test set was 108 manually annotated reports analysed by different system configurations. Best results were achieved using full syntactic and semantic analysis with a precision of 0.929 and recall of 0.952. Operating successfully since October 2010, 258,824 reports have been indexed and a total of 405,146 preview images are stored in the database. Data-mining and NLP techniques provide quick access to a vast repository of images and radiology reports with both high precision and recall values. Consequently, the system has become a valuable tool in daily clinical routine, education and research. (orig.)
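
    The synonym and abbreviation handling described can be sketched as lexicon-based query expansion: a query term is replaced by its set of known variants before matching report text. The lexicon entries and reports below are invented toy data, not the system's actual radiological lexicon, and the naive substring matching stands in for real tokenization.

```python
# Tiny invented lexicon mapping a canonical term to its variants:
LEXICON = {
    "myocardial infarction": {"mi", "heart attack", "myocardial infarction"},
    "computed tomography": {"ct", "computed tomography"},
}

def expand(term):
    """Return all known variants of a query term (or the term itself)."""
    term = term.lower()
    for variants in LEXICON.values():
        if term in variants:
            return variants
    return {term}

def search(query, reports):
    """Indices of reports matching any variant of the query term."""
    terms = expand(query)
    return [i for i, text in enumerate(reports)
            if any(t in text.lower() for t in terms)]

reports = ["CT of the chest, no acute findings.",
           "History of MI in 2009.",
           "Plain radiograph of the wrist."]
print(search("myocardial infarction", reports))  # [1]: matched via "MI"
```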

  11. Intelligent image retrieval based on radiology reports

    International Nuclear Information System (INIS)

    Gerstmair, Axel; Langer, Mathias; Kotter, Elmar; Daumke, Philipp; Simon, Kai

    2012-01-01

    To create an advanced image retrieval and data-mining system based on in-house radiology reports. Radiology reports are semantically analysed using natural language processing (NLP) techniques and stored in a state-of-the-art search engine. Images referenced by sequence and image number in the reports are retrieved from the picture archiving and communication system (PACS) and stored for later viewing. A web-based front end is used as an interface to query for images and show the results with the retrieved images and report text. Using a comprehensive radiological lexicon for the underlying terminology, the search algorithm also finds results for synonyms, abbreviations and related topics. The test set was 108 manually annotated reports analysed by different system configurations. Best results were achieved using full syntactic and semantic analysis with a precision of 0.929 and recall of 0.952. Operating successfully since October 2010, 258,824 reports have been indexed and a total of 405,146 preview images are stored in the database. Data-mining and NLP techniques provide quick access to a vast repository of images and radiology reports with both high precision and recall values. Consequently, the system has become a valuable tool in daily clinical routine, education and research. (orig.)

  12. PlateRunner: A Search Engine to Identify EMR Boilerplates.

    Science.gov (United States)

    Divita, Guy; Workman, T Elizabeth; Carter, Marjorie E; Redd, Andrew; Samore, Matthew H; Gundlapalli, Adi V

    2016-01-01

    Medical text contains boilerplated content, an artifact of pull-down forms in EMRs. Boilerplated content is a source of challenges for concept extraction from clinical text. This paper introduces PlateRunner, a search engine over boilerplates from the US Department of Veterans Affairs (VA) EMR. Boilerplates containing concepts should be identified and reviewed to recognize challenging formats, identify high-yield document titles, and fine-tune section zoning. The search engine can filter negated and asserted concepts and can save queries, search results, and the documents found for later analysis.

  13. Auditing Search Engines for Differential Satisfaction Across Demographics

    OpenAIRE

    Mehrotra, R.; Anderson, A.; Diaz, F.; Sharma, A.; Wallach, H. M.; Yilmaz, E.

    2017-01-01

    Many online services, such as search engines, social media platforms, and digital marketplaces, are advertised as being available to any user, regardless of their age, gender, or other demographic factors. However, there are growing concerns that these services may systematically underserve some groups of users. In this paper, we present a framework for internally auditing such services for differences in user satisfaction across demographic groups, using search engines as a case study. We fi...

  14. [Biomedical information on the internet using search engines. A one-year trial].

    Science.gov (United States)

    Corrao, Salvatore; Leone, Francesco; Arnone, Sabrina

    2004-01-01

    The internet is a communication medium and content distributor that provides information in the general sense, but it can also be of great utility for the search and retrieval of biomedical information. Search engines are a quick way to find information on the net; however, we do not know whether general search engines and meta-search engines are reliable tools for finding useful and validated biomedical information. The aim of our study was to verify the reproducibility of a keyword search ("pediatric" or "evidence") across 9 international search engines and 1 meta-search engine at baseline and after a one-year period. We analysed the first 20 citations returned by each search, evaluating the formal quality of the websites and their domain extensions. Moreover, we compared the output of each search at the start of the study and after one year, taking the number of websites cited again as a criterion of reliability. Our findings, reported throughout the text, point to the extreme dynamicity of information on the Web; for this reason, we advise great caution when using search and meta-search engines to find and retrieve reliable biomedical information. On the other hand, some search and meta-search engines can be very useful as a first step for refining a search and for finding institutional websites. This paper supports a more conscious approach to the universe of biomedical information on the internet.
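
    The study's reliability criterion (the number of websites cited again after one year) amounts to an overlap measure over the first 20 citations. A sketch of that measure, with invented URLs:

```python
def persistence(baseline, followup, k=20):
    """Fraction of the first k baseline results that are still returned
    after one year (the paper's criterion: websites cited again)."""
    base, later = set(baseline[:k]), set(followup[:k])
    return len(base & later) / len(base) if base else 0.0

baseline = ["who.int", "nlm.nih.gov", "example.org"]
followup = ["nlm.nih.gov", "other.net", "who.int"]
print(persistence(baseline, followup, k=3))  # 2 of 3 sites persist
```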

  15. An open-source, mobile-friendly search engine for public medical knowledge.

    Science.gov (United States)

    Samwald, Matthias; Hanbury, Allan

    2014-01-01

    The World Wide Web has become an important source of information for medical practitioners. To complement the capabilities of currently available web search engines we developed FindMeEvidence, an open-source, mobile-friendly medical search engine. In a preliminary evaluation, the quality of results from FindMeEvidence proved to be competitive with those from TRIP Database, an established, closed-source search engine for evidence-based medicine.

  16. Search engines and the production of academic knowledge

    OpenAIRE

    van Dijck, J.

    2010-01-01

    This article argues that search engines in general, and Google Scholar in particular, have become significant co-producers of academic knowledge. Knowledge is not simply conveyed to users, but is co-produced by search engines’ ranking systems and profiling systems, none of which are open to the rules of transparency, relevance and privacy in a manner known from library scholarship in the public domain. Inexperienced users tend to trust proprietary engines as neutral mediators of knowledge and...

  17. A real-time all-atom structural search engine for proteins.

    Science.gov (United States)

    Gonzalez, Gabriel; Hannigan, Brett; DeGrado, William F

    2014-07-01

    Protein designers use a wide variety of software tools for de novo design, yet their repertoire still lacks a fast and interactive all-atom search engine. To solve this, we have built the Suns program: a real-time, atomic search engine integrated into the PyMOL molecular visualization system. Users build atomic-level structural search queries within PyMOL and receive a stream of search results aligned to their query within a few seconds. This instant feedback cycle enables a new "designability"-inspired approach to protein design where the designer searches for and interactively incorporates native-like fragments from proven protein structures. We demonstrate the use of Suns to interactively build protein motifs, tertiary interactions, and to identify scaffolds compatible with hot-spot residues. The official web site and installer are located at http://www.degradolab.org/suns/ and the source code is hosted at https://github.com/godotgildor/Suns (PyMOL plugin, BSD license), https://github.com/Gabriel439/suns-cmd (command line client, BSD license), and https://github.com/Gabriel439/suns-search (search engine server, GPLv2 license).

  18. Virtual Reference Services through Web Search Engines: Study of Academic Libraries in Pakistan

    Directory of Open Access Journals (Sweden)

    Rubia Khan

    2017-03-01

    Web search engines (WSE) are powerful and popular tools in the field of information service management. This study examines the impact and usefulness of web search engines in providing virtual reference services (VRS) within academic libraries in Pakistan. It also investigates the expertise and skills of library professionals in providing digital reference services (DRS) efficiently using web search engines. The methodology is quantitative: data were collected from fifty public and private sector universities in Pakistan using a structured questionnaire, and Microsoft Excel and SPSS were used for data analysis. The study concludes that web search engines are commonly used by librarians to help users (especially research scholars) by providing digital reference services, and it finds a positive correlation between the use of web search engines and the quality of digital reference services provided to library users. Although search engines have raised users' expectations and compete strongly with a library's reference desk, they are not an alternative to reference service. Findings reveal that search engines pose numerous challenges for librarians, and the study brings together possible remedial measures. This study helps library professionals understand the importance of search engines in providing VRS, and it provides a comparison of different search engines, their capabilities, limitations, challenges and opportunities for providing VRS effectively in libraries.

  19. Information retrieval implementing and evaluating search engines

    CERN Document Server

    Büttcher, Stefan; Cormack, Gordon V

    2016-01-01

    Information retrieval is the foundation for modern search engines. This textbook offers an introduction to the core topics underlying modern search technologies, including algorithms, data structures, indexing, retrieval, and evaluation. The emphasis is on implementation and experimentation; each chapter includes exercises and suggestions for student projects. Wumpus -- a multiuser open-source information retrieval system developed by one of the authors and available online -- provides model implementations and a basis for student work. The modular structure of the book allows instructors to use it in a variety of graduate-level courses, including courses taught from a database systems perspective, traditional information retrieval courses with a focus on IR theory, and courses covering the basics of Web retrieval. In addition to its classroom use, Information Retrieval will be a valuable reference for professionals in computer science, computer engineering, and software engineering.

  20. LoyalTracker: Visualizing Loyalty Dynamics in Search Engines.

    Science.gov (United States)

    Shi, Conglei; Wu, Yingcai; Liu, Shixia; Zhou, Hong; Qu, Huamin

    2014-12-01

    The huge amount of user log data collected by search engine providers creates new opportunities to understand user loyalty and defection behavior at an unprecedented scale. However, it also poses a great challenge to analyze such behavior and glean insights from the complex, large-scale data. In this paper, we introduce LoyalTracker, a visual analytics system to track user loyalty and switching behavior towards multiple search engines from the vast amount of user log data. We propose a new interactive visualization technique (flow view) based on a flow metaphor, which conveys a proper visual summary of the loyalty dynamics of thousands of users over time. Two other visualization techniques, a density map and a word cloud, are integrated to enable analysts to gain further insights into the patterns identified by the flow view. Case studies and interviews with domain experts demonstrate the usefulness of our technique in understanding user loyalty and switching behavior in search engines.

  1. `Googling' Terrorists: Are Northern Irish Terrorists Visible on Internet Search Engines?

    Science.gov (United States)

    Reilly, P.

    In this chapter, the analysis suggests that Northern Irish terrorists are not visible on Web search engines when net users employ conventional Internet search techniques. Editors of mass media organisations traditionally have had the ability to decide whether a terrorist atrocity is `newsworthy,' controlling the `oxygen' supply that sustains all forms of terrorism. This process, also known as `gatekeeping,' is often influenced by the norms of social responsibility, or alternatively, with regard to the interests of the advertisers and corporate sponsors that sustain mass media organisations. The analysis presented in this chapter suggests that Internet search engines can also be characterised as `gatekeepers,' albeit without the ability to shape the content of Websites before it reaches net users. Instead, Internet search engines give priority retrieval to certain Websites within their directory, pointing net users towards these Websites rather than others on the Internet. Net users are more likely to click on links to the more `visible' Websites on Internet search engine directories, these sites invariably being the highest `ranked' in response to a particular search query. A number of factors including the design of the Website and the number of links to external sites determine the `visibility' of a Website on Internet search engines. The study suggests that Northern Irish terrorists and their sympathisers are unlikely to achieve a greater degree of `visibility' online than they enjoy in the conventional mass media through the perpetration of atrocities. Although these groups may have a greater degree of freedom on the Internet to publicise their ideologies, they are still likely to be speaking to the converted or members of the press. Although it is easier to locate Northern Irish terrorist organisations on Internet search engines by linking in via ideology, ideological description searches, such as `Irish Republican' and `Ulster Loyalist,' are more likely to

  2. Developing as new search engine and browser for libraries to search and organize the World Wide Web library resources

    OpenAIRE

    Sreenivasulu, V.

    2000-01-01

    Internet Granthalaya urges advocates worldwide to take up the task of creating a new search engine and a dedicated browser. Internet Granthalaya may be the ultimate search engine exclusively dedicated to library use, for searching and organizing World Wide Web library resources.

  3. Development of health information search engine based on metadata and ontology.

    Science.gov (United States)

    Song, Tae-Min; Park, Hyeoun-Ae; Jin, Dal-Lae

    2014-04-01

    The aim of the study was to develop a metadata- and ontology-based health information search engine ensuring semantic interoperability, to collect and provide health information using different application programs. Health information metadata ontology was developed using a distributed semantic Web content publishing model based on the vocabularies used to index the contents generated by information producers as well as those used by users to search the contents. Vocabulary for the health information ontology was mapped to the Systematized Nomenclature of Medicine Clinical Terms (SNOMED CT), and a list of about 1,500 terms was proposed. The metadata schema used in this study was developed by adding an element describing the target audience to the Dublin Core Metadata Element Set. A metadata schema and an ontology ensuring interoperability of health information available on the internet were developed. The metadata- and ontology-based health information search engine developed in this study produced better search results than existing search engines. A health information search engine based on metadata and ontology will provide reliable health information to both information producers and information consumers.
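
    A minimal sketch of the kind of record described above: the Dublin Core element set extended with a target-audience element, with the subject mapped to the ontology vocabulary. All field names and values below are illustrative assumptions, not the study's actual schema:

```python
# Hypothetical metadata record: Dublin Core elements plus an "audience" element.
record = {
    "dc:title": "Managing type 2 diabetes",
    "dc:subject": "diabetes mellitus type 2",  # term mapped to SNOMED CT
    "dc:language": "en",
    "audience": "patient",  # the element added to the Dublin Core set
}

def match(records, subject, audience):
    """Filter records by ontology subject and target audience, the two
    axes the extended schema makes searchable."""
    return [r for r in records
            if r["dc:subject"] == subject and r["audience"] == audience]

print(match([record], "diabetes mellitus type 2", "patient"))  # [record above]
```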

  4. Health literacy and usability of clinical trial search engines.

    Science.gov (United States)

    Utami, Dina; Bickmore, Timothy W; Barry, Barbara; Paasche-Orlow, Michael K

    2014-01-01

    Several web-based search engines have been developed to assist individuals to find clinical trials for which they may be interested in volunteering. However, these search engines may be difficult for individuals with low health and computer literacy to navigate. The authors present findings from a usability evaluation of clinical trial search tools with 41 participants across the health and computer literacy spectrum. The study consisted of 3 parts: (a) a usability study of an existing web-based clinical trial search tool; (b) a usability study of a keyword-based clinical trial search tool; and (c) an exploratory study investigating users' information needs when deciding among 2 or more candidate clinical trials. From the first 2 studies, the authors found that users with low health literacy have difficulty forming queries using keywords and have significantly more difficulty using a standard web-based clinical trial search tool compared with users with adequate health literacy. From the third study, the authors identified the search factors most important to individuals searching for clinical trials and how these varied by health literacy level.

  5. Large scale network-centric distributed systems

    CERN Document Server

    Sarbazi-Azad, Hamid

    2014-01-01

    A highly accessible reference offering a broad range of topics and insights on large scale network-centric distributed systems Evolving from the fields of high-performance computing and networking, large scale network-centric distributed systems continues to grow as one of the most important topics in computing and communication and many interdisciplinary areas. Dealing with both wired and wireless networks, this book focuses on the design and performance issues of such systems. Large Scale Network-Centric Distributed Systems provides in-depth coverage ranging from ground-level hardware issu

  6. Flexible patient information search and retrieval framework: pilot implementation

    Science.gov (United States)

    Erdal, Selnur; Catalyurek, Umit V.; Saltz, Joel; Kamal, Jyoti; Gurcan, Metin N.

    2007-03-01

    Medical centers collect and store significant amounts of valuable data pertaining to patients' visits in the form of medical free text. In addition, standardized diagnosis codes (International Classification of Diseases, Ninth Revision, Clinical Modification: ICD9-CM) related to those dictated reports are usually available. In this work, we have created a framework in which image searches can be initiated through a combination of free-text reports and ICD9 codes. This framework enables more comprehensive searches of existing large sets of patient data in a systematic way. The free-text search is enriched by computer-aided inclusion of additional search terms drawn from a thesaurus. This enriched search allows users to access a larger set of relevant results from a patient-centric PACS in a simpler way. Such a framework is therefore of particular use in tasks such as gathering images for desired patient populations, building disease models, and so on. As the motivating application of our framework, we implemented a search engine. This search engine processed two years of patient data from the OSU Medical Center's Information Warehouse and identified lung nodule location information using a combination of UMLS Metathesaurus-enhanced text report searches and ICD9 code searches on patients who had been discharged. Five different queries with various ICD9 codes involving lung cancer were carried out on 172,552 cases. Each search completed in under a minute on average per ICD9 code, and the inclusion of the UMLS thesaurus increased the number of relevant cases by 45% on average.
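
    The combined search could look roughly like the sketch below: an ICD9 code filter intersected with a thesaurus-expanded free-text match. The synonym table, field names, and reports are invented for illustration, not taken from the framework itself:

```python
# Invented synonym table standing in for the UMLS-derived thesaurus expansion.
SYNONYMS = {"lung nodule": ["lung nodule", "pulmonary nodule"]}

def search(reports, term, icd9_codes):
    """Return IDs of reports that carry one of the ICD9 codes AND mention
    the term or any thesaurus synonym in the free-text report."""
    variants = SYNONYMS.get(term, [term])
    return [r["id"] for r in reports
            if r["icd9"] in icd9_codes
            and any(v in r["text"].lower() for v in variants)]

reports = [
    {"id": 1, "icd9": "162.9", "text": "A pulmonary nodule in the right upper lobe."},
    {"id": 2, "icd9": "486",   "text": "Findings consistent with pneumonia."},
]
print(search(reports, "lung nodule", {"162.9"}))  # [1]: synonym and code both match
```

    Without the synonym expansion, report 1 would be missed entirely, which is the kind of gap the 45% gain in relevant cases reflects.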

  7. Archiving, ordering and searching: search engines, algorithms, databases and deep mediatization

    DEFF Research Database (Denmark)

    Andersen, Jack

    2018-01-01

    This article argues that search engines, algorithms, and databases can be considered as a way of understanding deep mediatization (Couldry & Hepp, 2016). They are embedded in a variety of social and cultural practices and as such they change our communicative actions to be shaped by their logic. Having reviewed recent trends in mediatization research, the argument is discussed and unfolded in-between the material and social constructivist-phenomenological interpretations of mediatization. In conclusion, it is discussed how deep this form of mediatization can be taken to be.

  8. Optimizing Online Suicide Prevention: A Search Engine-Based Tailored Approach.

    Science.gov (United States)

    Arendt, Florian; Scherr, Sebastian

    2017-11-01

    Search engines are increasingly used to seek suicide-related information online, which can serve both harmful and helpful purposes. Google acknowledges this fact and presents a suicide-prevention result for particular search terms. Unfortunately, the result is only presented to a limited number of visitors. Hence, Google is missing the opportunity to provide help to vulnerable people. We propose a two-step approach to a tailored optimization: First, research will identify the risk factors. Second, search engines will reweight algorithms according to the risk factors. In this study, we show that the query share of the search term "poisoning" on Google shows substantial peaks corresponding to peaks in actual suicidal behavior. Accordingly, thresholds for showing the suicide-prevention result should be set to the lowest levels during the spring, on Sundays and Mondays, on New Year's Day, and on Saturdays following Thanksgiving. Search engines can help to save lives globally by utilizing a more tailored approach to suicide prevention.
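
    The proposed reweighting can be sketched as a time-dependent threshold on the query share. Only the high-risk periods come from the study; the threshold values below are invented, and the Saturday-after-Thanksgiving rule is omitted for brevity:

```python
from datetime import date

def show_prevention_result(query_share, when):
    """Decide whether to display the suicide-prevention result, lowering the
    threshold during the high-risk periods identified in the study (spring,
    Sundays/Mondays, New Year's Day; Thanksgiving handling omitted here)."""
    threshold = 0.5                        # invented baseline threshold
    if when.month in (3, 4, 5):            # spring
        threshold -= 0.2
    if when.weekday() in (6, 0):           # Sunday (6) or Monday (0)
        threshold -= 0.1
    if (when.month, when.day) == (1, 1):   # New Year's Day
        threshold -= 0.2
    return query_share >= threshold

print(show_prevention_result(0.3, date(2018, 4, 1)))   # True: a spring Sunday
print(show_prevention_result(0.3, date(2018, 7, 10)))  # False: an ordinary Tuesday
```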

  9. Integrated Proteomic Pipeline Using Multiple Search Engines for a Proteogenomic Study with a Controlled Protein False Discovery Rate.

    Science.gov (United States)

    Park, Gun Wook; Hwang, Heeyoun; Kim, Kwang Hoe; Lee, Ju Yeon; Lee, Hyun Kyoung; Park, Ji Yeong; Ji, Eun Sun; Park, Sung-Kyu Robin; Yates, John R; Kwon, Kyung-Hoon; Park, Young Mok; Lee, Hyoung-Joo; Paik, Young-Ki; Kim, Jin Young; Yoo, Jong Shin

    2016-11-04

    In the Chromosome-Centric Human Proteome Project (C-HPP), false-positive identification by peptide spectrum matches (PSMs) after database searches is a major issue for proteogenomic studies using liquid-chromatography and mass-spectrometry-based large-scale proteomic profiling. Here we developed a simple strategy for protein identification, with a controlled false discovery rate (FDR) at the protein level, using an integrated proteomic pipeline (IPP) that consists of the following four steps. First, using three different search engines, SEQUEST, MASCOT, and MS-GF+, individual proteomic searches were performed against the neXtProt database. Second, the search results from the PSMs were combined using statistical evaluation tools including DTASelect and Percolator. Third, the peptide search scores were converted into E-scores normalized using an in-house program. Last, ProteinInferencer was used to filter for proteins containing two or more peptides, with a controlled FDR of 1.0% at the protein level. Finally, we compared the performance of the IPP to a conventional proteomic pipeline (CPP) for protein identification using a controlled FDR of <1% at the protein level. Using the IPP, a total of 5756 proteins (vs 4453 using the CPP), including 477 alternative splicing variants (vs 182 using the CPP), were identified from human hippocampal tissue. In addition, a total of 10 missing proteins (vs 7 using the CPP) were identified with two or more unique peptides, and their tryptic peptides were validated using MS/MS spectral patterns from a repository database or their corresponding synthetic peptides. This study shows that the IPP effectively improved the identification of proteins, including alternative splicing variants and missing proteins, in human hippocampal tissue for the C-HPP. All RAW files used in this study were deposited in ProteomeXchange (PXD000395).

  10. MSblender: A probabilistic approach for integrating peptide identifications from multiple database search engines.

    Science.gov (United States)

    Kwon, Taejoon; Choi, Hyungwon; Vogel, Christine; Nesvizhskii, Alexey I; Marcotte, Edward M

    2011-07-01

    Shotgun proteomics using mass spectrometry is a powerful method for protein identification but suffers limited sensitivity in complex samples. Integrating peptide identifications from multiple database search engines is a promising strategy to increase the number of peptide identifications and reduce the volume of unassigned tandem mass spectra. Existing methods pool statistical significance scores such as p-values or posterior probabilities of peptide-spectrum matches (PSMs) from multiple search engines after high scoring peptides have been assigned to spectra, but these methods lack reliable control of identification error rates as data are integrated from different search engines. We developed a statistically coherent method for integrative analysis, termed MSblender. MSblender converts raw search scores from search engines into a probability score for every possible PSM and properly accounts for the correlation between search scores. The method reliably estimates false discovery rates and identifies more PSMs than any single search engine at the same false discovery rate. Increased identifications increment spectral counts for most proteins and allow quantification of proteins that would not have been quantified by individual search engines. We also demonstrate that enhanced quantification contributes to improve sensitivity in differential expression analyses.

  11. A search engine for the engineering and equipment data management system (EDMS) at CERN

    International Nuclear Information System (INIS)

    Tsyganov, A; Amerigo, S M; Petit, S; Pettersson, T; Suwalska, A

    2008-01-01

    CERN, the European Laboratory for Particle Physics, located in Geneva, Switzerland, is currently building the LHC (Large Hadron Collider), a 27 km particle accelerator. The equipment life-cycle management of this project is provided by the Engineering and Equipment Data Management System (EDMS) Service. Using an Oracle database, it supports the management and follow-up of different kinds of documentation through the whole life cycle of the LHC project: design, manufacturing, installation, commissioning data, etc. The equipment data collection phase is now slowing down and the project is getting closer to the 'As-Built' phase: the phase of the project that consumes and explores the large volumes of data stored since 1996. Searching through millions of items of information (documents, equipment parts, operations, ...) multiplied by dozens of points of view (operators, maintainers, ...) requires an efficient and flexible search engine. This paper describes the process followed by the team to implement the search engine for the LHC As-Built project in the EDMS Service. The emphasis is put on the design decision to decouple the search engine from any user interface, potentially enabling other systems to use it as well. Projections, algorithms, and the planned implementation are described in this paper. The implementation of the first version started in early 2007.

  12. Building a better search engine for earth science data

    Science.gov (United States)

    Armstrong, E. M.; Yang, C. P.; Moroni, D. F.; McGibbney, L. J.; Jiang, Y.; Huang, T.; Greguska, F. R., III; Li, Y.; Finch, C. J.

    2017-12-01

    Free-text searching of earth science datasets has been implemented with varying degrees of success and completeness across the 12 NASA earth science data centers. At the JPL Physical Oceanography Distributed Active Archive Center (PO.DAAC), the search engine has been developed around the Solr/Lucene platform; others have chosen popular enterprise search platforms such as Elasticsearch. Regardless, the default implementations of these search engines, leveraging factors such as dataset popularity, term frequency, and inverse document frequency, do not fully meet the needs of precise relevancy and ranking of earth science search results. For the PO.DAAC, this shortcoming has been identified for several years by its external User Working Group, which has issued several recommendations to improve the relevancy and discoverability of datasets related to remotely sensed sea surface temperature, ocean wind, waves, salinity, height, and gravity, comprising over 500 publicly available datasets. Recently, the PO.DAAC has teamed with an effort led by George Mason University to improve the search and relevancy ranking of oceanographic data via a simple search interface and powerful backend services called MUDROD (Mining and Utilizing Dataset Relevancy from Oceanographic Datasets to Improve Data Discovery), funded by the NASA AIST program. MUDROD has mined and utilized the combination of PO.DAAC earth science dataset metadata, usage metrics, and user feedback and search history to objectively extract relevance for improved data discovery and access. In addition to improved dataset relevance and ranking, the MUDROD search engine also returns recommendations for related datasets and related user queries. This presentation will report on the use cases that drove the architecture and development, and the success metrics and improvements in search precision and recall that MUDROD has demonstrated over the existing PO.DAAC search engine.

  13. Search engines, the new bottleneck for content access

    NARCIS (Netherlands)

    van Eijk, N.; Preissl, B.; Haucap, J.; Curwen, P.

    2009-01-01

    The core function of a search engine is to make content and sources of information easily accessible (although the search results themselves may actually include parts of the underlying information). In an environment with unlimited amounts of information available on open platforms such as the

  14. Improving sensitivity in proteome studies by analysis of false discovery rates for multiple search engines.

    Science.gov (United States)

    Jones, Andrew R; Siepen, Jennifer A; Hubbard, Simon J; Paton, Norman W

    2009-03-01

    LC-MS experiments can generate large quantities of data, for which a variety of database search engines are available to make peptide and protein identifications. Decoy databases are becoming widely used to place statistical confidence in result sets, allowing the false discovery rate (FDR) to be estimated. Different search engines produce different identification sets, so employing more than one search engine could increase the number of peptides (and proteins) identified, if an appropriate mechanism for combining the data can be defined. We have developed a search-engine-independent score based on FDR, called the FDR Score, which allows peptide identifications from different search engines to be combined. The results demonstrate that the observed FDR is significantly different when analysing the set of identifications made by all three search engines, by each pair of search engines, or by a single search engine. Our algorithm assigns identifications to groups according to the set of search engines that have made the identification, and re-assigns the score (the combined FDR Score). The combined FDR Score can differentiate between correct and incorrect peptide identifications with high accuracy, allowing on average 35% more peptide identifications to be made at a fixed FDR than with a single search engine.
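
    The target-decoy FDR estimate on which the FDR Score builds can be sketched as follows; the PSM scores are invented, and a real run would compute this per search-engine group before re-scoring:

```python
def estimate_fdr(psms):
    """Target-decoy FDR down a score-ranked PSM list:
    FDR at each rank is approximately #decoy hits / #target hits so far."""
    ranked = sorted(psms, key=lambda p: p["score"], reverse=True)
    decoys = targets = 0
    fdrs = []
    for p in ranked:
        if p["decoy"]:
            decoys += 1
        else:
            targets += 1
        fdrs.append((p["id"], decoys / max(targets, 1)))
    return fdrs

psms = [
    {"id": "A", "score": 95, "decoy": False},
    {"id": "B", "score": 80, "decoy": False},
    {"id": "C", "score": 60, "decoy": True},
    {"id": "D", "score": 40, "decoy": False},
]
print(estimate_fdr(psms))  # FDR climbs once the decoy hit is passed
```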

  15. Evaluating Open-Source Full-Text Search Engines for Matching ICD-10 Codes.

    Science.gov (United States)

    Jurcău, Daniel-Alexandru; Stoicu-Tivadar, Vasile

    2016-01-01

    This research presents the results of evaluating multiple free, open-source engines on matching ICD-10 diagnostic codes via full-text searches. The study investigates what it takes to get an accurate match when searching for a specific diagnostic code. For each code, the evaluation starts by extracting the words that make up its text and continues by building full-text search queries from combinations of these words. The queries are then run against all the ICD-10 codes until a match returns the code in question with the highest relative score. This method identifies the minimum number of words that must be provided for the search engines to choose the desired entry. The engines analyzed include a popular Java-based full-text search engine, a lightweight engine written in JavaScript that can even run in the user's browser, and two popular open-source relational database management systems.
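
    The evaluation procedure can be sketched with a toy catalogue: enumerate word combinations of a code's text, from shortest up, until an AND query matches only that code. The ICD-10 descriptions below are real, but the exact-word matching is a simplified stand-in for the engines' relevance scoring:

```python
from itertools import combinations

ICD10 = {
    "J18.9": "pneumonia unspecified organism",
    "J12.9": "viral pneumonia unspecified",
    "I10":   "essential primary hypertension",
}

def min_query(code, catalogue=ICD10):
    """Smallest word combination from the code's own text that matches
    only that code under a simple AND full-text search."""
    words = catalogue[code].split()
    for n in range(1, len(words) + 1):
        for q in combinations(words, n):
            hits = [c for c, text in catalogue.items()
                    if all(w in text.split() for w in q)]
            if hits == [code]:
                return q
    return tuple(words)

print(min_query("J18.9"))  # ('organism',): one word already disambiguates
```

    "pneumonia" or "unspecified" alone would also match J12.9, so a single-word query only works when the word is unique to the target code.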

  16. DRUMS: a human disease related unique gene mutation search engine.

    Science.gov (United States)

    Li, Zuofeng; Liu, Xingnan; Wen, Jingran; Xu, Ye; Zhao, Xin; Li, Xuan; Liu, Lei; Zhang, Xiaoyan

    2011-10-01

    With the completion of the human genome project and the development of new methods for gene variant detection, the integration of mutation data and its phenotypic consequences has become more important than ever. Among all available resources, locus-specific databases (LSDBs) curate one or more specific genes' mutation data along with high-quality phenotypes. Although some genotype-phenotype data from LSDBs have been integrated into central databases, little effort has been made to integrate all these data through a search engine approach. In this work, we have developed the disease related unique gene mutation search engine (DRUMS), a search engine for human disease related unique gene mutations, as a convenient tool for biologists or physicians to retrieve gene variant and related phenotype information. Gene variant and phenotype information is stored in a gene-centred relational database. Moreover, the relationships between mutations and diseases are indexed by the uniform resource identifier from the LSDB or another central database. By querying DRUMS, users can access the most popular mutation databases under one interface. DRUMS can be treated as a domain-specific search engine. By using web crawling, indexing, and searching technologies, it provides a competitively efficient interface for searching and retrieving mutation data and their relationships to diseases. The system is freely accessible at http://www.scbit.org/glif/new/drums/index.html. © 2011 Wiley-Liss, Inc.

  17. The Effectiveness of Web Search Engines to Index New Sites from Different Countries

    Science.gov (United States)

    Pirkola, Ari

    2009-01-01

    Introduction: Investigates how effectively Web search engines index new sites from different countries. The primary interest is whether new sites are indexed equally or whether search engines are biased towards certain countries. If major search engines show biased coverage it can be considered a significant economic and political problem because…

  18. Search engines and the production of academic knowledge

    NARCIS (Netherlands)

    van Dijck, J.

    2010-01-01

    This article argues that search engines in general, and Google Scholar in particular, have become significant co-producers of academic knowledge. Knowledge is not simply conveyed to users, but is co-produced by search engines’ ranking systems and profiling systems, none of which are open to the

  19. Determination of geographic variance in stroke prevalence using Internet search engine analytics.

    Science.gov (United States)

    Walcott, Brian P; Nahed, Brian V; Kahle, Kristopher T; Redjal, Navid; Coumans, Jean-Valery

    2011-06-01

    Previous methods of determining stroke prevalence, such as nationwide surveys, are labor-intensive endeavors. Recent advances in search engine query analytics have led to a new metric for disease surveillance to evaluate symptomatic phenomena, such as influenza. The authors hypothesized that search engine query data can be used to determine the prevalence of stroke. The Google Insights for Search database was accessed to analyze anonymized search engine query data. The authors' search strategy utilized common search queries used when attempting either to identify the signs and symptoms of a stroke or to perform stroke education. The search logic was as follows: (stroke signs + stroke symptoms + mini stroke--heat) from January 1, 2005, to December 31, 2010. The relative number of searches performed (the interest level) for this search logic was established for all 50 states and the District of Columbia. A Pearson product-moment correlation coefficient was calculated from the state-specific stroke prevalence data previously reported. Web search engine interest level was available for all 50 states and the District of Columbia over the period January 1, 2005-December 31, 2010. The interest level was highest in Alabama and Tennessee (100 and 96, respectively) and lowest in California and Virginia (58 and 53, respectively). The Pearson correlation coefficient (r) was calculated to be 0.47 (p = 0.0005, 2-tailed). Search engine query data analysis allows for the determination of relative stroke prevalence. Further investigation will reveal the reliability of this metric for temporal pattern analysis and prevalence determination in this and other symptomatic diseases.
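
The statistic at the centre of this study, the Pearson product-moment correlation coefficient, can be computed directly from paired state-level observations. The interest and prevalence figures below are invented toy values, not the study's data.

```python
from math import sqrt

def pearson_r(xs, ys):
    """Pearson product-moment correlation coefficient of two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical state-level data: relative search interest vs. stroke prevalence (%).
interest = [100, 96, 80, 58, 53]
prevalence = [3.8, 3.6, 3.0, 2.5, 2.4]

print(round(pearson_r(interest, prevalence), 3))
```

With real data the coefficient would of course be weaker (the study reports r = 0.47); the toy series is deliberately near-linear so the computation is easy to check by hand.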

  20. A Longitudinal Analysis of Search Engine Index Size

    DEFF Research Database (Denmark)

    Van den Bosch, Antal; Bogers, Toine; De Kunder, Maurice

    2015-01-01

    One of the determining factors of the quality of Web search engines is the size of their index. In addition to its influence on search result quality, the size of the indexed Web can also tell us something about which parts of the WWW are directly accessible to the everyday user. We propose a novel method of estimating the size of a Web search engine's index by extrapolating from document frequencies of words observed in a large static corpus of Web pages. In addition, we provide a unique longitudinal perspective on the size of Google and Bing's indexes over a nine-year period, from March 2006 until January 2015. We find that index size estimates of these two search engines tend to vary dramatically over time, with Google generally possessing a larger index than Bing. This result raises doubts about the reliability of previous one-off estimates of the size of the indexed Web. We find…

  1. Brief Report: Consistency of Search Engine Rankings for Autism Websites

    Science.gov (United States)

    Reichow, Brian; Naples, Adam; Steinhoff, Timothy; Halpern, Jason; Volkmar, Fred R.

    2012-01-01

    The World Wide Web is one of the most common methods used by parents to find information on autism spectrum disorders and most consumers find information through search engines such as Google or Bing. However, little is known about how the search engines operate or the consistency of the results that are returned over time. This study presents the…

  2. An approach in building a chemical compound search engine in oracle database.

    Science.gov (United States)

    Wang, H; Volarath, P; Harrison, R

    2005-01-01

    Searching for and identifying chemical compounds is an important process in drug design and chemistry research. An efficient search engine involves a close coupling of the search algorithm and the database implementation. The database must process chemical structures, which demands approaches to represent, store, and retrieve structures in a database system. In this paper, a general database framework for a chemical compound search engine in an Oracle database is described. The framework is devoted to eliminating data-type constraints for potential search algorithms, which is a crucial step toward building a domain-specific query language on top of SQL. A search engine implementation based on the database framework is also demonstrated. The convenience of the implementation emphasizes the efficiency and simplicity of the framework.

  3. Task and Interruption Management in Activity-Centric Computing

    DEFF Research Database (Denmark)

    Jeuris, Steven

    to address these not in isolation, but by fundamentally reevaluating the current computing paradigm. To this end, activity-centric computing has been brought forward as an alternative computing paradigm, addressing the increasing strain put on modern-day computing systems. Activity-centric computing follows … the scalability and intelligibility of current research prototypes. In this dissertation, I postulate that such issues arise due to a lack of support for the full set of practices which make up activity management. Most notably, although task and interruption management are an integral part of personal information management, they have thus far been neglected in prior activity-centric computing systems. Advancing the research agenda of activity-centric computing, I (1) implement and evaluate an activity-centric desktop computing system, incorporating support for interruptions and long-term task management…

  4. Google chemtrails: a methodology to analyze topic representation in search engine results

    OpenAIRE

    Ballatore, Andrea

    2015-01-01

    Search engine results influence the visibility of different viewpoints in political, cultural, and scientific debates. Treating search engines as editorial products with intrinsic biases can help understand the structure of information flows in new media. This paper outlines an empirical methodology to analyze the representation of topics in search engines, reducing the spatial and temporal biases in the results. As a case study, the methodology is applied to 15 popular conspiracy theories, e...

  5. Theorizing Network-Centric Activity in Education

    Science.gov (United States)

    HaLevi, Andrew

    2011-01-01

    Networks and network-centric activity are increasingly prevalent in schools and school districts. In addition to ubiquitous social network tools like Facebook and Twitter, educational leaders deal with a wide variety of network organizational forms that include professional development, advocacy, informational networks and network-centric reforms.…

  6. 'Sciencenet'--towards a global search and share engine for all scientific knowledge.

    Science.gov (United States)

    Lütjohann, Dominic S; Shah, Asmi H; Christen, Michael P; Richter, Florian; Knese, Karsten; Liebel, Urban

    2011-06-15

    Modern biological experiments create vast amounts of data which are geographically distributed. These datasets consist of petabytes of raw data and billions of documents. Yet to the best of our knowledge, a search engine technology that searches and cross-links all different data types in life sciences does not exist. We have developed a prototype distributed scientific search engine technology, 'Sciencenet', which facilitates rapid searching over this large data space. By 'bringing the search engine to the data', we do not require server farms. This platform also allows users to contribute to the search index and publish their large-scale data to support e-Science. Furthermore, a community-driven method guarantees that only scientific content is crawled and presented. Our peer-to-peer approach is sufficiently scalable for the science web without performance or capacity tradeoff. The free to use search portal web page and the downloadable client are accessible at: http://sciencenet.kit.edu. The web portal for index administration is implemented in ASP.NET, the 'AskMe' experiment publisher is written in Python 2.7, and the backend 'YaCy' search engine is based on Java 1.6.

  7. The Gaze of the Perfect Search Engine: Google as an Infrastructure of Dataveillance

    Science.gov (United States)

    Zimmer, M.

    Web search engines have emerged as a ubiquitous and vital tool for the successful navigation of the growing online informational sphere. The goal of the world's largest search engine, Google, is to "organize the world's information and make it universally accessible and useful" and to create the "perfect search engine" that provides only intuitive, personalized, and relevant results. While intended to enhance intellectual mobility in the online sphere, this chapter reveals that the quest for the perfect search engine requires the widespread monitoring and aggregation of users' online personal and intellectual activities, threatening the very values the perfect search engine was designed to sustain. It argues that these search-based infrastructures of dataveillance contribute to a rapidly emerging "soft cage" of everyday digital surveillance, where they, like other dataveillance technologies before them, contribute to the curtailing of individual freedom, affect users' sense of self, and present issues of deep discrimination and social justice.

  8. GeoSearcher: Location-Based Ranking of Search Engine Results.

    Science.gov (United States)

    Watters, Carolyn; Amoudi, Ghada

    2003-01-01

    Discussion of Web queries with geospatial dimensions focuses on an algorithm that assigns location coordinates dynamically to Web sites based on the URL. Describes a prototype search system that uses the algorithm to re-rank search engine results for queries with a geospatial dimension, thus providing an alternative ranking order for search engine…

  9. Search Engine Optimization

    CERN Document Server

    Davis, Harold

    2006-01-01

    SEO--short for Search Engine Optimization--is the art, craft, and science of driving web traffic to web sites. Web traffic is food, drink, and oxygen--in short, life itself--to any web-based business. Whether your web site depends on broad, general traffic, or high-quality, targeted traffic, this PDF has the tools and information you need to draw more traffic to your site. You'll learn how to effectively use PageRank (and Google itself); how to get listed, get links, and get syndicated; and much more. The field of SEO is expanding into all the possible ways of promoting web traffic. This

  10. MOMFER: A Search Engine of Thompson's Motif-Index of Folk Literature

    NARCIS (Netherlands)

    Karsdorp, F.B.; van der Meulen, Marten; Meder, Theo; van den Bosch, Antal

    2015-01-01

    More than fifty years after the first edition of Thompson's seminal Motif-Index of Folk Literature, we present an online search engine tailored to fully disclose the index digitally. This search engine, called MOMFER, greatly enhances the searchability of the Motif-Index and provides exciting new

  11. Survey of formal and informal citation in Google search engine

    Directory of Open Access Journals (Sweden)

    Afsaneh Teymourikhani

    2016-03-01

    Aim: Informal citations are bibliographic information (title or Internet address) citing sources of information resources in informal scholarly communication, and they are usually neglected in traditional citation databases. This study was carried out to answer the question of whether informal citations in the web environment are traceable. The present research aims to determine what proportion of web citations in the Google search engine relates to formal and informal citation. Research method: Webometrics is the method used. The study covers 1,344 research articles from 98 open access journals, and web citations were extracted from the Google search engine by the "Web/URL citation extraction" method. Findings: The findings showed that ten percent of the web citations in the Google search engine are formal and informal citations. The highest share of formal citations (19.27%) is in the field of library and information science, and the lowest (1.54%) in civil engineering. The highest percentage of informal citations (3.57%) belongs to sociology and the lowest (0.39%) to civil engineering. Journal citation is highest (94.12%) in the surgical field and lowest (5.26%) in philosophy. Result: Since formal and informal citations in the Google search engine amount to about 10 percent, a reduction compared with previous research, it seems that tracking citations with this engine should be treated with caution. The amount of formal citation varies across disciplines. Cited journals are most frequent in surgery and least frequent in philosophy; this indicates that in philosophy, a subset of the social sciences, journals do not play a significant role in scholarly communication. On the other hand, the book has a key role in this field

  12. PubData: search engine for bioinformatics databases worldwide

    OpenAIRE

    Vand, Kasra; Wahlestedt, Thor; Khomtchouk, Kelly; Sayed, Mohammed; Wahlestedt, Claes; Khomtchouk, Bohdan

    2016-01-01

    We propose a search engine and file retrieval system for all bioinformatics databases worldwide. PubData searches biomedical data in a user-friendly fashion similar to how PubMed searches biomedical literature. PubData is built on novel network programming, natural language processing, and artificial intelligence algorithms that can patch into the file transfer protocol servers of any user-specified bioinformatics database, query its contents, retrieve files for download, and adapt to the use...

  13. Is the Internet a Suitable Patient Resource for Information on Common Radiological Investigations?: Radiology-Related Information on the Internet.

    Science.gov (United States)

    Bowden, Dermot J; Yap, Lee-Chien; Sheppard, Declan G

    2017-07-01

    This study aimed to assess the quality of Internet information about common radiological investigations. Four search engines (Google, Bing, Yahoo, and Duckduckgo) were searched using the terms "X-ray," "cat scan," "MRI," "ultrasound," and "pet scan." The first 10 webpage results returned for each search term were recorded, and their quality and readability were analyzed by two independent reviewers (DJB and LCY), with discrepancies resolved by consensus. Information quality was assessed using validated instruments for health-care information (the DISCERN score, a multi-domain tool for the assessment of health-care information quality by health-care professionals and laypeople; max 80 points) and readability (Flesch-Kincaid and SMOG, or Simple Measure of Gobbledygook, scores). The result pages were further classified as commercial, academic (educational/institutional), or news/magazine. Several organizations offer website accreditation for health-care information, recognized by the presence of a hallmark or logo on the website; the presence of any valid accreditation marks on each website was recorded. Mean scores between groups were compared for significance using the Student t test. A total of 200 webpages were returned (108 unique website addresses). The average DISCERN score was search engines. No significant difference was seen in readability between modalities or between search engines. Websites carrying validated accreditation marks were associated with higher average DISCERN scores: X-ray (39.36 vs 25.35), computed tomography (45.45 vs 31.33), and ultrasound (40.91 vs 27.62) (P information on the Internet is poor. High-quality online resources should be identified so that patients may avoid the use of poor-quality information derived from general search engine queries. Copyright © 2017 The Association of University Radiologists. Published by Elsevier Inc. All rights reserved.

  14. Querying archetype-based EHRs by search ontology-based XPath engineering.

    Science.gov (United States)

    Kropf, Stefan; Uciteli, Alexandr; Schierle, Katrin; Krücken, Peter; Denecke, Kerstin; Herre, Heinrich

    2018-05-11

    Legacy data and new structured data can be stored in a standardized format as XML-based EHRs in XML databases. Querying documents in these databases is crucial for answering research questions. Instead of using free-text searches, which lead to false-positive results, precision can be increased by constraining the search to certain parts of documents. A search-ontology-based specification of queries on XML documents defines search concepts and relates them to parts of the XML document structure. Such a query specification method is practically introduced and evaluated by applying concrete research questions, formulated in natural language, to a data collection for information retrieval purposes. The search is performed by search-ontology-based XPath engineering that reuses ontologies and XML-related W3C standards. The key result is that the specification of research questions can be supported by search-ontology-based XPath engineering. Deeper recognition of entities and a semantic understanding of the content are necessary for further improvement of precision and recall. A key limitation is that the application of the introduced process requires skills in ontology and software development. In the future, the time-consuming ontology development could be overcome by implementing a new clinical role: the clinical ontologist. The introduced Search Ontology XML extension connects search terms to certain parts of XML documents and enables an ontology-based definition of queries. Search-ontology-based XPath engineering can support research question answering through the specification of complex XPath expressions without deep knowledge of XPath syntax.
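
As a rough illustration of constraining a query to certain parts of an XML document rather than scanning free text, the toy EHR fragment and its element names below are invented, and Python's standard-library ElementTree stands in for a real XML database and its XPath engine.

```python
import xml.etree.ElementTree as ET

# Hypothetical XML-based EHR fragment (structure invented for illustration).
EHR = """
<ehr>
  <observation><code>BP</code><value unit="mmHg">142</value></observation>
  <observation><code>HR</code><value unit="bpm">71</value></observation>
</ehr>
"""

# A search concept such as "blood pressure" would be mapped by a search
# ontology to a constrained XPath instead of a free-text scan:
root = ET.fromstring(EHR)
values = [v.text for v in root.findall(".//observation[code='BP']/value")]
print(values)
```

The predicate `[code='BP']` restricts the match to observations of the right kind, which is the precision gain the abstract describes over free-text search.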

  15. Sexual information seeking on web search engines.

    Science.gov (United States)

    Spink, Amanda; Koricich, Andrew; Jansen, B J; Cole, Charles

    2004-02-01

    Sexual information seeking is an important element within human information behavior. Seeking sexually related information on the Internet takes many forms and channels, including chat room discussions, accessing Websites, or searching Web search engines for sexual materials. The study of sexual Web queries provides insight into sexually related information-seeking behavior, of value to Web users and providers alike. We qualitatively analyzed queries from logs of 1,025,910 Alta Vista and AlltheWeb.com Web user queries from 2001. We compared the differences in sexually related Web searching between Alta Vista and AlltheWeb.com users. Differences were found in session duration, query outcomes, and search term choices. Implications of the findings for sexual information seeking are discussed.

  16. Quality Dimensions of Internet Search Engines.

    Science.gov (United States)

    Xie, M.; Wang, H.; Goh, T. N.

    1998-01-01

    Reviews commonly used search engines (AltaVista, Excite, infoseek, Lycos, HotBot, WebCrawler), focusing on existing comparative studies; considers quality dimensions from the customer's point of view based on a SERVQUAL framework; and groups these quality expectations in five dimensions: tangibles, reliability, responsiveness, assurance, and…

  17. The physics and engineering aspects of radiology. Textbook with questions and answers

    International Nuclear Information System (INIS)

    Link, T.M.; Heppe, A.; Meier, N.; Fiebich, M.

    1994-01-01

    The textbook formulates and answers the questions encountered in practice by students in the radiology professions, covering the physics and engineering aspects as well as quality control and the relevant requirements set by the X-ray Ordinance and the Quality Assurance Guide issued by the Bundesaerztekammer for diagnostic radiography and computed tomography. The text is accompanied by simplified illustrations that are easy to remember. The book is intended to serve as a textbook for readers preparing for their examination as a medical specialist, or for participants of obligatory courses in radiological protection, or radiographers. Readers will also find it useful as a refresher course. (orig.) [de

  18. Information Retrieval for Education: Making Search Engines Language Aware

    Science.gov (United States)

    Ott, Niels; Meurers, Detmar

    2010-01-01

    Search engines have been a major factor in making the web the successful and widely used information source it is today. Generally speaking, they make it possible to retrieve web pages on a topic specified by the keywords entered by the user. Yet web searching currently does not take into account which of the search results are comprehensible for…

  19. A longitudinal analysis of search engine index size

    NARCIS (Netherlands)

    Bosch, A.P.J. van den; Bogers, T.; Kunder, M. de; Salah, A. A.; Tonta, Y.; Salah, A. A. A.; Sugimoto, C.; Al, U.

    2015-01-01

    One of the determining factors of the quality of Web search engines is the size and quality of their index. In addition to its influence on search result quality, the size of the indexed Web can also tell us something about which parts of the WWW are directly accessible to the everyday user. We

  20. Andromeda - a peptide search engine integrated into the MaxQuant environment

    DEFF Research Database (Denmark)

    Cox, Jurgen; Neuhauser, Nadin; Michalski, Annette

    2011-01-01

    A key step in mass spectrometry (MS)-based proteomics is the identification of peptides in sequence databases by their fragmentation spectra. Here we describe Andromeda, a novel peptide search engine using a probabilistic scoring model. On proteome data Andromeda performs as well as Mascot, a widely used commercial search engine, as judged by sensitivity and specificity analysis based on target-decoy searches. Furthermore, it can handle data with arbitrarily high fragment mass accuracy, is able to assign and score complex patterns of post-translational modifications, such as highly phosphorylated peptides, and accommodates extremely large databases. The algorithms of Andromeda are provided. Andromeda can function independently or as an integrated search engine of the widely used MaxQuant computational proteomics platform, and both are freely available at www.maxquant.org. The combination…

  1. Estimating search engine index size variability: a 9-year longitudinal study.

    Science.gov (United States)

    van den Bosch, Antal; Bogers, Toine; de Kunder, Maurice

    One of the determining factors of the quality of Web search engines is the size of their index. In addition to its influence on search result quality, the size of the indexed Web can also tell us something about which parts of the WWW are directly accessible to the everyday user. We propose a novel method of estimating the size of a Web search engine's index by extrapolating from document frequencies of words observed in a large static corpus of Web pages. In addition, we provide a unique longitudinal perspective on the size of Google and Bing's indices over a nine-year period, from March 2006 until January 2015. We find that index size estimates of these two search engines tend to vary dramatically over time, with Google generally possessing a larger index than Bing. This result raises doubts about the reliability of previous one-off estimates of the size of the indexed Web. We find that much, if not all of this variability can be explained by changes in the indexing and ranking infrastructure of Google and Bing. This casts further doubt on whether Web search engines can be used reliably for cross-sectional webometric studies.
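
The extrapolation idea can be sketched in a few lines: if a word occurs in a known fraction of a static reference corpus, an engine's reported hit count for that word implies an estimate of the index size. All corpus statistics and hit counts below are made-up illustrative numbers, not those of the study.

```python
# Hypothetical reference corpus: document frequencies of pivot words
# in a crawl of N_CORPUS pages.
N_CORPUS = 1_000_000
corpus_df = {"recipe": 12_000, "weather": 25_000, "jazz": 4_000}

# Hypothetical hit counts reported by a search engine for the same words.
engine_hits = {"recipe": 480_000_000, "weather": 1_000_000_000, "jazz": 160_000_000}

def estimate_index_size(corpus_df, engine_hits, n_corpus):
    """Each word yields the estimate hits / (df / n_corpus);
    average the per-word estimates into one index-size figure."""
    estimates = [engine_hits[w] * n_corpus / corpus_df[w] for w in corpus_df]
    return sum(estimates) / len(estimates)

print(f"{estimate_index_size(corpus_df, engine_hits, N_CORPUS):.3e}")
```

In practice many pivot words are averaged and the corpus must be representative of the engine's crawl, which is exactly where the variability discussed in the abstract enters.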

  2. 2012 International Conference on Human-centric Computing

    CERN Document Server

    Jin, Qun; Yeo, Martin; Hu, Bin; Human Centric Technology and Service in Smart Space, HumanCom 2012

    2012-01-01

    The theme of HumanCom is focused on the various aspects of human-centric computing for advances in computer science and its applications and provides an opportunity for academic and industry professionals to discuss the latest issues and progress in the area of human-centric computing. In addition, the conference will publish high quality papers which are closely related to the various theories and practical applications in human-centric computing. Furthermore, we expect that the conference and its publications will be a trigger for further related research and technology improvements in this important subject.

  3. Taking It to the Top: A Lesson in Search Engine Optimization

    Science.gov (United States)

    Frydenberg, Mark; Miko, John S.

    2011-01-01

    Search engine optimization (SEO), the promoting of a Web site so it achieves optimal position with a search engine's rankings, is an important strategy for organizations and individuals in order to promote their brands online. Techniques for achieving SEO are relevant to students of marketing, computing, media arts, and other disciplines, and many…

  4. TOWARDS ACTIVE SEO (SEARCH ENGINE OPTIMIZATION) 2.0

    Directory of Open Access Journals (Sweden)

    Charles-Victor Boutet

    2012-12-01

    In the age of the writable web, new skills and new practices are appearing. In an environment that allows everyone to communicate information globally, internet referencing (or SEO) is a strategic discipline that aims to generate visibility, internet traffic, and maximum exploitation of a site's publications. Often misperceived as fraud, SEO has evolved into a facilitating tool for anyone who wishes to reference their website with search engines. In this article we show that it is possible to achieve first rank in search results for keywords that are very competitive. We show methods that are quick, sustainable, and legal, while applying the principles of active SEO 2.0. This article also clarifies some working functions of search engines and some advanced referencing techniques (completely ethical and legal), and we lay the foundations for an in-depth reflection on the qualities and advantages of these techniques.

  5. Predicting Drug Recalls From Internet Search Engine Queries.

    Science.gov (United States)

    Yom-Tov, Elad

    2017-01-01

    Batches of pharmaceuticals are sometimes recalled from the market when a safety issue or a defect is detected in specific production runs of a drug. Such problems are usually detected when patients or healthcare providers report abnormalities to medical authorities. Here, we test the hypothesis that defective production lots can be detected earlier by monitoring queries to Internet search engines. We extracted queries from the USA to the Bing search engine that mentioned one of 5195 pharmaceutical drugs during 2015, together with all recall notifications issued by the Food and Drug Administration (FDA) during that year. Using attributes that quantify the change in query volume at the state level, we attempted to predict whether a recall of a specific drug would be ordered by the FDA in a time horizon ranging from 1 to 40 days. Our results show that future drug recalls can indeed be identified, with an AUC of 0.791 and a lift at 5% of approximately 6 when predicting a recall occurring one day ahead. This performance degrades as predictions are made for longer periods ahead. The most indicative attributes for prediction are sudden spikes in query volume about a specific medicine in each state. Recalls of prescription drugs and those estimated to be of medium risk are more likely to be identified using search query data. These findings suggest that aggregated Internet search engine data can be used to facilitate early warning of faulty batches of medicines.
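
A minimal version of the paper's most indicative attribute, a sudden spike in state-level query volume, can be expressed as a z-score against a trailing baseline. The daily counts below are invented, and the study's actual feature set is richer than this single attribute.

```python
from statistics import mean, stdev

def spike_score(history, today):
    """Z-score of today's query volume against the trailing history;
    large positive values flag a sudden state-level spike."""
    mu, sigma = mean(history), stdev(history)
    return (today - mu) / sigma if sigma else 0.0

# Hypothetical daily query counts for one drug in one state.
baseline = [10, 12, 9, 11, 10, 13, 10]
print(round(spike_score(baseline, 40), 2))
```

A classifier would then consume such per-state spike scores (among other attributes) to predict a recall 1 to 40 days ahead.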

  6. Radiology Teaching Files on the Internet

    International Nuclear Information System (INIS)

    Lim, Eun Chung; Kim, Eun Kyung

    1996-01-01

    There is increasing attention to radiology teaching files on the Internet in the field of diagnostic radiology. The purpose of this study was to aid the creation of new radiology teaching files by analysing the present radiology teaching file sites on the Internet in many respects and evaluating the images on those sites, using a Macintosh IIci computer, a 28.8 kbps TelePort Fax/Modem, and Netscape Navigator 2.0 software. The results were as follows: 1. Analysis of radiology teaching file sites: (1) Country distribution was highest for the USA (57.5%). (2) The average number of cases was 186, and 9 sites (22.5%) provided a search engine. (3) Regarding the method of case arrangement, the anatomic-area type and the diagnosis type were each found at 10 sites (25%), and the question-and-answer type at 9 sites (22.5%). (4) Radiology teaching file sites covering oro-maxillofacial disorders numbered 9 (22.5%). (5) Regarding image format, the GIF format was found at 14 sites (35%) and the JPEG format at 14 sites (35%). (6) The most common creation year was 1995 (43.7%). (7) Continuing case upload was found at 35 sites (87.5%). 2. Evaluation of images in the radiology teaching files: (1) The average file size of the GIF format (71 Kbyte) was greater than that of the JPEG format (24 Kbyte) (P<0.001). (2) The image quality of the GIF format was better than that of the JPEG format (P<0.001).

  7. The sharing of radiological images by professional mixed martial arts fighters on social media.

    Science.gov (United States)

    Rahmani, George; Joyce, Cormac W; McCarthy, Peter

    2017-06-01

    Mixed martial arts is a sport that has recently enjoyed a significant increase in popularity. This rise in popularity has catapulted many of these "cage fighters" into stardom, and many regularly use social media to reach out to their fans. An interesting result of this interaction on social media is that athletes share images of their radiological examinations when they sustain an injury. The aim was to review instances where mixed martial arts fighters shared images of their radiological examinations on social media and the context in which they were shared. An Internet search was performed using the Google search engine. Search terms included "MMA," "mixed martial arts," "injury," "scan," "X-ray," "fracture," and "break." Articles that discussed injuries to MMA fighters were examined, and those in which the fighters themselves shared a radiological image of their injury on social media were identified. During our search, we identified 20 MMA fighters who had shared radiological images of their injuries on social media. There were 15 different types of injury, with a fracture of the mid-shaft of the ulna being the most common. The most popular social media platform was Twitter. The most common imaging modality was X-ray (71%). The majority of injuries were sustained during competition (81%), and 35% of these fights resulted in a win for the fighter. Professional mixed martial artists are sharing radiological images of their injuries on social media. This may be an attempt to connect with fans and raise their profile among other fighters.

  8. Cross-system evaluation of clinical trial search engines.

    Science.gov (United States)

    Jiang, Silis Y; Weng, Chunhua

    2014-01-01

Clinical trials are fundamental to the advancement of medicine but constantly face recruitment difficulties. Various clinical trial search engines have been designed to help health consumers identify trials for which they may be eligible; unfortunately, knowledge of the usefulness and usability of their designs remains scarce. In this study, we used mixed methods, including time-motion analysis, a think-aloud protocol, and a survey, to evaluate five popular clinical trial search engines with 11 users. Differences in user preferences and time spent on each system were observed and correlated with user characteristics. In general, searching for applicable trials using these systems is a cognitively demanding task. Our results show that user perceptions of these systems are multifactorial. The survey indicated that eTACTS was the generally preferred system, but this finding did not persist across all of the mixed methods. This study confirms the value of mixed methods for a comprehensive system evaluation. Future system designers must be aware that different user groups expect different functionalities.

  9. Collection of Medical Original Data with Search Engine for Decision Support.

    Science.gov (United States)

    Orthuber, Wolfgang

    2016-01-01

Medicine is becoming more and more complex, and humans can capture total medical knowledge only partially. For specific access, a high-resolution search engine is demonstrated which allows, besides conventional text search, the search of precise quantitative data from medical findings, therapies and results. Users can define metric spaces ("Domain Spaces", DSs) containing all searchable quantitative data ("Domain Vectors", DVs). An implementation of the search engine is online at http://numericsearch.com. In future medicine the doctor could first make a rough diagnosis and check which fine diagnostics (quantitative data) colleagues had collected in such a case. The doctor then decides about fine diagnostics, and the results are sent (semi-automatically) to the search engine, which filters the group of patients that best fits these data. Within this specific group, the various therapies and their associated therapeutic results can be checked, like an individual scientific study for the current patient. The statistical (anonymous) results could be used for specific decision support. Conversely, the therapeutic decision (in the best case with later results) could be used to enhance the collection of precise pseudonymous medical original data, which in turn yields better and better statistical (anonymous) search results.
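The retrieval step described above, matching a patient's quantitative findings against previously collected "Domain Vectors" in a metric space, can be sketched as a nearest-neighbour search. This is a toy illustration with hypothetical findings, not the numericsearch.com implementation:

```python
import math

def nearest_patients(query, records, k=3):
    """Return the k records whose quantitative findings ("domain vectors")
    lie closest to the query vector under Euclidean distance."""
    ranked = sorted(records, key=lambda r: math.dist(query, r["vector"]))
    return ranked[:k]

# Hypothetical findings per patient: [systolic BP, HbA1c, BMI]
records = [
    {"id": "p1", "vector": [120.0, 5.4, 22.1]},
    {"id": "p2", "vector": [155.0, 7.9, 31.0]},
    {"id": "p3", "vector": [150.0, 7.5, 30.2]},
]
similar = nearest_patients([152.0, 7.7, 30.5], records, k=2)
print([r["id"] for r in similar])  # → ['p3', 'p2']
```

The therapies and outcomes recorded for the returned group could then be summarized anonymously, as the abstract suggests.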

  10. Enhancing the Internet with the CONVERGENCE system an information-centric network coupled with a standard middleware

    CERN Document Server

    Andrade, Maria; Melazzi, Nicola; Walker, Richard; Hussmann, Heinrich; Venieris, Iakovos

    2014-01-01

Convergence proposes the enhancement of the Internet with a novel, content-centric, publish–subscribe service model based on the versatile digital item (VDI): a common container for all kinds of digital content, including digital representations of real-world resources. VDIs will serve the needs of the future Internet, providing a homogeneous method for handling structured information and incorporating security and privacy mechanisms. CONVERGENCE subsumes the following areas of research: definition of the VDI as a new fundamental unit of distribution and transaction; content-centric networking functionality to complement or replace IP-address-based routing; security and privacy protection mechanisms; open-source middleware, including a community dictionary service to enable rich semantic searches; and applications, tested under real-life conditions. This book shows how CONVERGENCE allows publishing, searching and subscribing...

  11. IdentiPy: an extensible search engine for protein identification in shotgun proteomics.

    Science.gov (United States)

    Levitsky, Lev I; Ivanov, Mark V; Lobas, Anna A; Bubis, Julia A; Tarasova, Irina A; Solovyeva, Elizaveta M; Pridatchenko, Marina L; Gorshkov, Mikhail V

    2018-04-23

We present an open-source, extensible search engine for shotgun proteomics. Implemented in the Python programming language, IdentiPy shows competitive processing speed and sensitivity compared with state-of-the-art search engines. It is equipped with a user-friendly web interface, IdentiPy Server, enabling the use of a single server installation accessed from multiple workstations. Using a simplified version of the X!Tandem scoring algorithm and its novel "auto-tune" feature, IdentiPy outperforms the popular alternatives on high-resolution data sets. Auto-tune adjusts the search parameters for the particular data set, resulting in improved search efficiency and a simpler user experience. IdentiPy with the auto-tune feature shows higher sensitivity than the evaluated search engines. IdentiPy Server has built-in post-processing and protein inference procedures and provides graphic visualization of the statistical properties of the data set and the search results. It is open-source, can be freely extended to use third-party scoring functions or processing algorithms, and allows customization of the search workflow for specialized applications.

  12. Cognition to Collaboration: User-Centric Approach and Information Behaviour Theories/Models

    Directory of Open Access Journals (Sweden)

    Alperen M Aydin

    2016-12-01

Full Text Available Aim/Purpose: The objective of this paper is to review the vast literature of user-centric information science and report on emerging themes in information behaviour science. Background: The paradigmatic shift from the system-centric to the user-centric approach facilitates research on cognitive and individual information processing, and various information behaviour theories/models have emerged. Methodology: Recent information behaviour theories and models are presented. Features, strengths and weaknesses of the models are discussed through an analysis of the information behaviour literature. Contribution: This paper sheds light on the weaknesses in earlier information behaviour models and stresses (and advocates) the need for research on social information behaviour. Findings: Prominent information behaviour models deal with individual information behaviour. People live in a social world and sort out most of their daily or work problems in groups; however, only seven papers discuss social information behaviour (Scopus search). Recommendations for Practitioners: ICT tools used for inter-organisational sharing should be redesigned for effective information-sharing during disaster/emergency times. Recommendation for Researchers: Sources on the social side of information behaviour are scarce, yet most work tasks are carried out in groups/teams. Impact on Society: In dynamic work contexts like disaster management and health care settings, collaborative information-sharing may reduce losses. Future Research: Fieldwork will be conducted in a disaster management context investigating inter-organisational information-sharing.

  13. Andromeda: a peptide search engine integrated into the MaxQuant environment.

    Science.gov (United States)

    Cox, Jürgen; Neuhauser, Nadin; Michalski, Annette; Scheltema, Richard A; Olsen, Jesper V; Mann, Matthias

    2011-04-01

    A key step in mass spectrometry (MS)-based proteomics is the identification of peptides in sequence databases by their fragmentation spectra. Here we describe Andromeda, a novel peptide search engine using a probabilistic scoring model. On proteome data, Andromeda performs as well as Mascot, a widely used commercial search engine, as judged by sensitivity and specificity analysis based on target decoy searches. Furthermore, it can handle data with arbitrarily high fragment mass accuracy, is able to assign and score complex patterns of post-translational modifications, such as highly phosphorylated peptides, and accommodates extremely large databases. The algorithms of Andromeda are provided. Andromeda can function independently or as an integrated search engine of the widely used MaxQuant computational proteomics platform and both are freely available at www.maxquant.org. The combination enables analysis of large data sets in a simple analysis workflow on a desktop computer. For searching individual spectra Andromeda is also accessible via a web server. We demonstrate the flexibility of the system by implementing the capability to identify cofragmented peptides, significantly improving the total number of identified peptides.

  14. Device-Centric Monitoring for Mobile Device Management

    Directory of Open Access Journals (Sweden)

    Luke Chircop

    2016-03-01

Full Text Available The ubiquity of computing devices has led to an increased need to ensure not only that the applications deployed on them are correct with respect to their specifications, but also that the devices are used in an appropriate manner, especially in situations where the device is provided by a party other than the actual user. Much of the work done on runtime verification for mobile devices and operating systems is application-centric, so global, device-centric properties (e.g. the user may not send more than 100 messages per day across all applications) are difficult or impossible to verify. In this paper we present a device-centric approach to runtime verification of device behaviour against a device policy, with the different applications acting as independent components contributing to the overall behaviour of the device. We also present an implementation for Android devices and evaluate it on a number of device-centric policies, reporting the empirical results obtained.
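The example policy above, no more than 100 messages per day across all applications, shows why a device-centric monitor must aggregate events from every app rather than verify each app in isolation. A minimal sketch of such a monitor (hypothetical API, not the authors' Android implementation):

```python
from collections import defaultdict

class DevicePolicyMonitor:
    """Device-centric runtime monitor: every application reports its events,
    and the monitor checks a global policy (here: at most `limit` messages
    per day across ALL apps). Hypothetical sketch."""

    def __init__(self, limit=100):
        self.limit = limit
        self.sent = defaultdict(int)  # day -> total messages from all apps

    def on_message_sent(self, app, day):
        """Record one message; return False if the device policy is violated."""
        self.sent[day] += 1
        return self.sent[day] <= self.limit

mon = DevicePolicyMonitor(limit=2)
print(mon.on_message_sent("sms_app", "2016-03-01"))   # → True
print(mon.on_message_sent("chat_app", "2016-03-01"))  # → True
print(mon.on_message_sent("mail_app", "2016-03-01"))  # → False (3rd message that day)
```

An app-centric monitor inside any single application would see at most one of these three events and could never detect the violation.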

  15. EIIS: An Educational Information Intelligent Search Engine Supported by Semantic Services

    Science.gov (United States)

    Huang, Chang-Qin; Duan, Ru-Lin; Tang, Yong; Zhu, Zhi-Ting; Yan, Yong-Jian; Guo, Yu-Qing

    2011-01-01

    The semantic web brings a new opportunity for efficient information organization and search. To meet the special requirements of the educational field, this paper proposes an intelligent search engine enabled by educational semantic support service, where three kinds of searches are integrated into Educational Information Intelligent Search (EIIS)…

  16. Design of personalized search engine based on user-webpage dynamic model

    Science.gov (United States)

    Li, Jihan; Li, Shanglin; Zhu, Yingke; Xiao, Bo

    2013-12-01

    Personalized search engine focuses on establishing a user-webpage dynamic model. In this model, users' personalized factors are introduced so that the search engine is better able to provide the user with targeted feedback. This paper constructs user and webpage dynamic vector tables, introduces singular value decomposition analysis in the processes of topic categorization, and extends the traditional PageRank algorithm.
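One common way to "extend the traditional PageRank algorithm" with user factors, as the abstract describes, is to replace the uniform teleport distribution with the user's preference vector. The sketch below is illustrative; the paper's exact model is not specified here, and the link graph and preferences are invented:

```python
def personalized_pagerank(links, personalization, damping=0.85, iters=50):
    """Power-iteration PageRank whose teleport distribution is the user's
    preference vector instead of uniform. Assumes every page has at least
    one outlink and that personalization sums to 1."""
    pages = list(personalization)
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iters):
        new = {}
        for p in pages:
            inflow = sum(rank[q] / len(links[q]) for q in pages if p in links[q])
            new[p] = (1 - damping) * personalization[p] + damping * inflow
        rank = new
    return rank

links = {"a": ["b"], "b": ["a", "c"], "c": ["a"]}
prefs = {"a": 0.8, "b": 0.1, "c": 0.1}   # this user strongly favours page "a"
ranks = personalized_pagerank(links, prefs)
print(max(ranks, key=ranks.get))  # → a
```

With uniform preferences this reduces to classic PageRank; biasing the teleport vector pulls the ranking toward the pages (or topics) the user cares about.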

  17. Index Compression and Efficient Query Processing in Large Web Search Engines

    Science.gov (United States)

    Ding, Shuai

    2013-01-01

    The inverted index is the main data structure used by all the major search engines. Search engines build an inverted index on their collection to speed up query processing. As the size of the web grows, the length of the inverted list structures, which can easily grow to hundreds of MBs or even GBs for common terms (roughly linear in the size of…
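The data structure named above is simple to sketch: each term maps to a postings list of the documents containing it, and answering a multi-term query means intersecting those lists (the very lists that index compression keeps small). A toy illustration:

```python
from collections import defaultdict

def build_index(docs):
    """Map each term to the sorted postings list of doc ids containing it."""
    index = defaultdict(set)
    for doc_id, text in docs.items():
        for term in text.lower().split():
            index[term].add(doc_id)
    return {term: sorted(ids) for term, ids in index.items()}

def conjunctive_query(index, terms):
    """AND-query: intersect the postings lists of all query terms."""
    postings = [set(index.get(t, [])) for t in terms]
    return sorted(set.intersection(*postings)) if postings else []

docs = {1: "web search engines", 2: "inverted index search", 3: "web index"}
index = build_index(docs)
print(conjunctive_query(index, ["web", "index"]))  # → [3]
```

For common terms on a web-scale collection each postings list holds millions of entries, which is why the compression and query-processing techniques the dissertation studies matter.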

  18. Development and tuning of an original search engine for patent libraries in medicinal chemistry.

    Science.gov (United States)

    Pasche, Emilie; Gobeill, Julien; Kreim, Olivier; Oezdemir-Zaech, Fatma; Vachon, Therese; Lovis, Christian; Ruch, Patrick

    2014-01-01

The large increase in the size of patent collections has led to the need for efficient search strategies, but the development of advanced text-mining applications dedicated to patents in the biomedical field remains rare, in particular to address the needs of the pharmaceutical and biotech industry, which intensively uses patent libraries for competitive intelligence and drug development. We describe here the development of an advanced retrieval engine to search information in patent collections in the field of medicinal chemistry. We investigate and combine different strategies and evaluate their respective impact on the performance of the search engine applied to various search tasks, which cover the putatively most frequent search behaviours of intellectual property officers in medicinal chemistry: 1) a prior art search task; 2) a technical survey task; and 3) a variant of the technical survey task, sometimes called a known-item search task, where a single patent is targeted. The optimal tuning of our engine resulted in a top-precision of 6.76% for the prior art search task, 23.28% for the technical survey task and 46.02% for the variant of the technical survey task. We observed that co-citation boosting was an appropriate strategy to improve prior art search tasks, while IPC classification of queries improved retrieval effectiveness for technical survey tasks. Surprisingly, use of the full body of the patent was always detrimental to search effectiveness. It was also observed that normalizing biomedical entities using curated dictionaries had simply no impact on the search tasks we evaluated. The search engine was finally implemented as a web application within Novartis Pharma; the application is briefly described in the report. We have presented the development of a search engine dedicated to patent search, based on state-of-the-art methods applied to patent corpora, and have shown that a proper tuning of the system to adapt to the various search tasks

  19. Developing a search engine for pharmacotherapeutic information that is not published in biomedical journals.

    Science.gov (United States)

    Do Pazo-Oubiña, F; Calvo Pita, C; Puigventós Latorre, F; Periañez-Párraga, L; Ventayol Bosch, P

    2011-01-01

    To identify publishers of pharmacotherapeutic information not found in biomedical journals that focuses on evaluating and providing advice on medicines and to develop a search engine to access this information. Compiling web sites that publish information on the rational use of medicines and have no commercial interests. Free-access web sites in Spanish, Galician, Catalan or English. Designing a search engine using the Google "custom search" application. Overall 159 internet addresses were compiled and were classified into 9 labels. We were able to recover the information from the selected sources using a search engine, which is called "AlquimiA" and available from http://www.elcomprimido.com/FARHSD/AlquimiA.htm. The main sources of pharmacotherapeutic information not published in biomedical journals were identified. The search engine is a useful tool for searching and accessing "grey literature" on the internet. Copyright © 2010 SEFH. Published by Elsevier Espana. All rights reserved.

  20. Defining Patient Centric Pharmaceutical Drug Product Design.

    Science.gov (United States)

    Stegemann, Sven; Ternik, Robert L; Onder, Graziano; Khan, Mansoor A; van Riet-Nales, Diana A

    2016-09-01

The term "patient centered," "patient centric," or "patient centricity" is increasingly used in the scientific literature in a wide variety of contexts. Generally, patient centric medicines are recognized as an essential contributor to healthy aging and to the patient's overall quality of life and life expectancy. Besides the selection of the appropriate type of drug substance and strength for a particular indication in a particular patient, due attention must be paid to ensuring that the pharmaceutical drug product design also adequately addresses the particular patient's needs, i.e., assures adequate patient adherence and the anticipated drug safety and effectiveness. Relevant pharmaceutical design aspects may, for example, involve the selection of the route of administration, the tablet size and shape, the ease of opening the package, the ability to read the user instructions, or the ability to follow the recommended (in-use) storage conditions. Currently, a harmonized definition of patient centric drug development/design has not yet been established. To stimulate scientific research and discussions and the consistent interpretation of test results, it is essential that such a definition is established. We have developed a first draft definition through various rounds of discussions within an interdisciplinary AAPS focus group of experts. This publication summarizes the outcomes and is intended to stimulate further discussions with all stakeholders towards a common definition of patient centric pharmaceutical drug product design that is usable across all disciplines involved.

  1. The LAILAPS Search Engine: Relevance Ranking in Life Science Databases

    Directory of Open Access Journals (Sweden)

    Lange Matthias

    2010-06-01

Full Text Available Search engines and retrieval systems are popular tools on the life science desktop, and the manual inspection of hundreds of database entries that reflect a life science concept or fact is time-intensive daily work. Here it is not the number of query results that matters, but their relevance. In this paper, we present the LAILAPS search engine for life science databases. The concept is to combine a novel feature model for relevance ranking, a machine learning approach to model user relevance profiles, ranking improvement by user feedback tracking, and an intuitive and slim web user interface that estimates relevance rank by tracking user interactions. Queries are formulated as simple keyword lists and are expanded by synonyms. Supporting a flexible text index and a simple data import format, LAILAPS can easily be used both as a search engine for comprehensive integrated life science databases and for small in-house project databases.
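The query-expansion step mentioned above, turning a simple keyword list into a broader query via synonyms, can be sketched in a few lines (the synonym table here is a hypothetical stand-in, not LAILAPS's dictionary):

```python
def expand_query(keywords, synonyms):
    """Expand a keyword-list query with known synonyms,
    preserving the original keyword order."""
    expanded = []
    for kw in keywords:
        expanded.append(kw)
        expanded.extend(synonyms.get(kw, []))  # no-op for unknown keywords
    return expanded

syn = {"tumor": ["neoplasm", "tumour"]}  # hypothetical synonym table
print(expand_query(["tumor", "liver"], syn))  # → ['tumor', 'neoplasm', 'tumour', 'liver']
```

The expanded list is then matched against the text index, with the relevance-ranking model deciding the order of the hits.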

  2. Towards Identifying and Reducing the Bias of Disease Information Extracted from Search Engine Data.

    Science.gov (United States)

    Huang, Da-Cang; Wang, Jin-Feng; Huang, Ji-Xia; Sui, Daniel Z; Zhang, Hong-Yan; Hu, Mao-Gui; Xu, Cheng-Dong

    2016-06-01

The estimation of disease prevalence from online search engine data (e.g., Google Flu Trends (GFT)) has received a considerable amount of scholarly and public attention in recent years. While the utility of search engine data for disease surveillance has been demonstrated, the scientific community still seeks ways to identify and reduce the biases embedded in such data. The primary goal of this study is to explore new ways of improving the accuracy of disease prevalence estimations by combining traditional disease data with search engine data. A novel method, Biased Sentinel Hospital-based Area Disease Estimation (B-SHADE), is introduced to reduce search engine data bias from a geographical perspective. To monitor search trends on Hand, Foot and Mouth Disease (HFMD) in Guangdong Province, China, we tested our approach by selecting 11 keywords from the Baidu index platform, a Chinese big-data analytics platform similar to GFT. The correlation between the number of real cases and the composite index was 0.8. After decomposing the composite index at the city level, we found that only 10 cities presented a correlation close to 0.8 or higher. These cities were found to be more stable with respect to search volume, and they were selected as sample cities in order to estimate the search volume of the entire province. After the estimation, the correlation improved from 0.8 to 0.864. After fitting the revised search volume with historical cases, the mean absolute error was 11.19% lower than when the original search volume and historical cases were combined. To our knowledge, this is the first study to reduce search engine data bias through the use of rigorous spatial sampling strategies.
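The city-selection step described above, keeping only cities whose search volume correlates strongly with real case counts, can be sketched as follows (toy data and threshold, not the study's; the correlation is plain Pearson):

```python
def pearson(xs, ys):
    """Pearson correlation coefficient (no external dependencies)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

def select_sentinel_cities(search_volume, cases, threshold=0.8):
    """Keep cities whose search-volume series tracks the real case series,
    mirroring (in simplified form) how sample cities are picked."""
    return [city for city, series in search_volume.items()
            if pearson(series, cases[city]) >= threshold]

volume = {"CityA": [10, 20, 30, 40], "CityB": [40, 10, 30, 20]}  # weekly queries
cases = {"CityA": [12, 24, 33, 45], "CityB": [5, 10, 15, 20]}    # reported cases
print(select_sentinel_cities(volume, cases))  # → ['CityA']
```

Only the retained "sentinel" cities then contribute to the province-wide search-volume estimate, which is what reduces the geographic bias.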

  3. Application of SEO (Search Engine Optimization) Techniques to Websites in an Internet Marketing Strategy

    Directory of Open Access Journals (Sweden)

    Rony Baskoro Lukito

    2014-12-01

Full Text Available The purpose of this research is to show how to optimize a website design so that it can increase the number of visitors. The number of Internet users in the world continues to grow in line with advances in information technology, and the marketing of products and services no longer relies on print and electronic media alone. Moreover, the cost of using the Internet as a marketing medium is relatively low compared to television, and the Internet reaches different parts of the world 24 hours a day. But to turn an internet site into one visited by many internet users, it is not enough for the site to look good from the outside. Websites that serve as a marketing medium must be built according to the correct rules so that they become optimal marketing media. One of these rules is ensuring that the site's content is well indexed by search engines such as Google. This discussion of indexing focuses on Google, since 83% of internet users across the world use Google as their search engine. Search engine optimization, commonly known as SEO, is an important set of rules that makes an internet site easier for a user to find with the desired keywords.

  4. Research on the optimization strategy of web search engine based on data mining

    Science.gov (United States)

    Chen, Ronghua

    2018-04-01

With the wide application of search engines, websites have become an important way for people to obtain information, and the amount of such information is growing in an increasingly explosive manner. It has become very difficult for people to find the information they need, and current search engines cannot fully meet this need, so there is an urgent need for websites to provide personalized information services; data mining technology offers a breakthrough for this new challenge. In order to improve the accuracy with which people find information on websites, a website search engine optimization strategy based on data mining is proposed and verified by a website search engine optimization experiment. The results show that the proposed strategy improves the accuracy with which people find information and reduces the time needed to find it. It has important practical value.

  5. Modification site localization scoring integrated into a search engine.

    Science.gov (United States)

    Baker, Peter R; Trinidad, Jonathan C; Chalkley, Robert J

    2011-07-01

    Large proteomic data sets identifying hundreds or thousands of modified peptides are becoming increasingly common in the literature. Several methods for assessing the reliability of peptide identifications both at the individual peptide or data set level have become established. However, tools for measuring the confidence of modification site assignments are sparse and are not often employed. A few tools for estimating phosphorylation site assignment reliabilities have been developed, but these are not integral to a search engine, so require a particular search engine output for a second step of processing. They may also require use of a particular fragmentation method and are mostly only applicable for phosphorylation analysis, rather than post-translational modifications analysis in general. In this study, we present the performance of site assignment scoring that is directly integrated into the search engine Protein Prospector, which allows site assignment reliability to be automatically reported for all modifications present in an identified peptide. It clearly indicates when a site assignment is ambiguous (and if so, between which residues), and reports an assignment score that can be translated into a reliability measure for individual site assignments.

  6. Adding a Visualization Feature to Web Search Engines: It’s Time

    Energy Technology Data Exchange (ETDEWEB)

    Wong, Pak C.

    2008-11-11

Since the first world wide web (WWW) search engine quietly entered our lives in 1994, the “information need” behind web searching has rapidly grown into a multi-billion dollar business that dominates the internet landscape, drives e-commerce traffic, propels the global economy, and affects the lives of the whole human race. Today’s search engines are faster, smarter, and more powerful than those released just a few years ago. With the vast investment pouring into research and development by leading web technology providers and the intense emotion behind corporate slogans such as “win the web” or “take back the web,” I can’t help but ask why we are still using the very same “text-only” interface that was used 13 years ago to browse our search engine results pages (SERPs). Why has the SERP interface technology lagged so far behind in the web evolution when the corresponding search technology has advanced so rapidly? In this article I explore some current SERP interface issues, suggest a simple but practical visual-based interface design approach, and argue why a visual approach can be a strong candidate for tomorrow’s SERP interface.

  7. The invisible Web uncovering information sources search engines can't see

    CERN Document Server

    Sherman, Chris

    2001-01-01

    Enormous expanses of the Internet are unreachable with standard web search engines. This book provides the key to finding these hidden resources by identifying how to uncover and use invisible web resources. Mapping the invisible Web, when and how to use it, assessing the validity of the information, and the future of Web searching are topics covered in detail. Only 16 percent of Net-based information can be located using a general search engine. The other 84 percent is what is referred to as the invisible Web-made up of information stored in databases. Unlike pages on the visible Web, informa

  8. I-centric Communications

    CERN Document Server

    Arbanowski, S; Steglich, S; Popescu-Zeletin, R

    2001-01-01

    During the last years, a variety of concepts for service integration and corresponding systems have gained momentum. On the one hand, they aim for the interworking and integration of classical telecommunications and data communications services. On the other hand, they are focusing on universal service access from a variety of end user systems. Looking at humans' communication behavior and communication space, it is obvious that human beings interact frequently in a set of contexts in their environment (communication space). Following this view, we want to build communication systems on the analysis of the individual communication spaces. The results are communication systems adapted to the specific demands of each individual. The authors introduce I-centric Communication Systems, an approach to design communication systems which adapt to the individual communication space and individual environment and situation. In this context "I" means I, or individual, "Centric" means adaptable to I requirements and a ce...

  9. Engineering High Assurance Distributed Cyber Physical Systems

    Science.gov (United States)

    2015-01-15

engineering (MDE), model-centric software engineering (MCSE), and others have attempted to leverage and integrate techniques for requirements...

  10. The internet and intelligent machines: search engines, agents and robots; Radiologische Informationssuche im Internet: Datenbanken, Suchmaschinen und intelligente Agenten

    Energy Technology Data Exchange (ETDEWEB)

    Achenbach, S; Alfke, H [Marburg Univ. (Germany). Abt. fuer Strahlendiagnostik

    2000-04-01

The internet plays an important role in a growing number of medical applications. Finding relevant information is not always easy, as the amount of information available on the Web is rising quickly. Even the best search engines can only collect links to a fraction of all existing Web pages, and many of these indexed documents have since been changed or deleted; the vast majority of information on the Web is not searchable with conventional methods. New search strategies, technologies and standards are combined in Intelligent Search Agents (ISAs) and Robots, which can retrieve the desired information in a targeted way. Conclusion: The article describes the differences between ISAs and conventional search engines and how communication between agents improves their ability to find information. Examples of existing ISAs are given, and the possible influence on current and future work in radiology is discussed. (orig.)

  11. How visual search relates to visual diagnostic performance : a narrative systematic review of eye-tracking research in radiology

    NARCIS (Netherlands)

    van der Gijp, A; Ravesloot, C J; Jarodzka, H; van der Schaaf, M F; van der Schaaf, I C; van Schaik, J P J; ten Cate, Olle

    Eye tracking research has been conducted for decades to gain understanding of visual diagnosis such as in radiology. For educational purposes, it is important to identify visual search patterns that are related to high perceptual performance and to identify effective teaching strategies. This review

  12. ERRATUM: TOWARDS ACTIVE SEO (SEARCH ENGINE OPTIMIZATION 2.0

    Directory of Open Access Journals (Sweden)

    Charles-Victor Boutet

    2013-04-01

Full Text Available In the age of the writable web, new skills and new practices are appearing. In an environment that allows everyone to communicate information globally, internet referencing (or SEO) is a strategic discipline that aims to generate visibility, internet traffic and maximum exploitation of a site's publications. Often misperceived as fraud, SEO has evolved into a facilitating tool for anyone who wishes to reference their website with search engines. In this article we show that it is possible to achieve first rank in search results for keywords that are very competitive. We show methods that are quick, sustainable and legal, while applying the principles of active SEO 2.0. This article also clarifies some working functions of search engines and some advanced referencing techniques (which are completely ethical and legal), and we lay the foundations for an in-depth reflection on the qualities and advantages of these techniques.

  13. A study of medical and health queries to web search engines.

    Science.gov (United States)

    Spink, Amanda; Yang, Yin; Jansen, Jim; Nykanen, Pirrko; Lorence, Daniel P; Ozmutlu, Seda; Ozmutlu, H Cenk

    2004-03-01

    This paper reports findings from an analysis of medical or health queries to different web search engines. We report results: (i) comparing samples of 10,000 web queries taken randomly from 1.2 million query logs from the AlltheWeb.com and Excite.com commercial web search engines in 2001 for medical or health queries; (ii) comparing the 2001 findings from Excite and AlltheWeb.com users with results from a previous analysis of medical and health related queries from the Excite web search engine for 1997 and 1999; and (iii) examining medical or health advice-seeking queries beginning with the word 'should'. Findings suggest that: (i) a small percentage of web queries are medical or health related; (ii) the top five categories of medical or health queries were general health, weight issues, reproductive health and puberty, pregnancy/obstetrics, and human relationships; and (iii) over time, medical and health queries may have declined as a proportion of all web queries, as the use of specialized medical/health websites and e-commerce-related queries has increased. The findings provide insights into medical and health-related web querying and suggest some implications for the use of general web search engines when seeking medical/health information.

  14. Multi-Device to Multi-Device (MD2MD) Content-Centric Networking Based on Multi-RAT Device

    Directory of Open Access Journals (Sweden)

    Cheolhoon Kim

    2017-11-01

    This paper proposes a method whereby a device can transmit and receive information using a beacon, and also describes application scenarios for the proposed method. In a multi-device to multi-device (MD2MD) content-centric networking (CCN) environment, the main issue involves searching for and connecting to nearby devices. However, if a device cannot find another device that satisfies its requirements, the connection is delayed by the repetition of discovery processes. The proposed method makes it possible to connect rapidly, without repetition, by selecting the optimal device. Consequently, the proposed method and scenarios are advantageous in that they enable efficient content identification and delivery in a content-centric Internet of Things (IoT) environment in which multiple mobile devices coexist.

  15. Variability of centric relation position in TMD patients.

    NARCIS (Netherlands)

    Zonnenberg, A.J.J.; Mulder, J.

    2006-01-01

    Reproducibility of the centric relation position for patients with temporomandibular disorders (TMD) is not documented in the current literature. It was the objective of this study to assess clinical variability of the centric relation position for TMD patients with a muscle-determined technique by

  16. Research Trends with Cross Tabulation Search Engine

    Science.gov (United States)

    Yin, Chengjiu; Hirokawa, Sachio; Yau, Jane Yin-Kim; Hashimoto, Kiyota; Tabata, Yoshiyuki; Nakatoh, Tetsuya

    2013-01-01

    To help researchers in building a knowledge foundation of their research fields which could be a time-consuming process, the authors have developed a Cross Tabulation Search Engine (CTSE). Its purpose is to assist researchers in 1) conducting research surveys, 2) efficiently and effectively retrieving information (such as important researchers,…

  17. FOAMSearch.net: A custom search engine for emergency medicine and critical care.

    Science.gov (United States)

    Raine, Todd; Thoma, Brent; Chan, Teresa M; Lin, Michelle

    2015-08-01

    The number of online resources read by and pertinent to clinicians has increased dramatically. However, most healthcare professionals still use mainstream search engines as their primary port of entry to the resources on the Internet. These search engines use algorithms that do not make it easy to find clinician-oriented resources. FOAMSearch, a custom search engine (CSE), was developed to find relevant, high-quality online resources for emergency medicine and critical care (EMCC) clinicians. Using Google™ algorithms, it searches a vetted list of >300 blogs, podcasts, wikis, knowledge translation tools, clinical decision support tools and medical journals. Utilisation has increased progressively to >3000 users/month since its launch in 2011. Further study of the role of CSEs to find medical resources is needed, and it might be possible to develop similar CSEs for other areas of medicine. © 2015 Australasian College for Emergency Medicine and Australasian Society for Emergency Medicine.

  18. Win the game of Googleopoly unlocking the secret strategy of search engines

    CERN Document Server

    Bradley, Sean V

    2015-01-01

    Rank higher in search results with this guide to SEO and content building supremacy Google is not only the number one search engine in the world, it is also the number one website in the world. Only 5 percent of site visitors search past the first page of Google, so if you're not in those top ten results, you are essentially invisible. Winning the Game of Googleopoly is the ultimate roadmap to Page One Domination. The POD strategy is what gets you on that super-critical first page of Google results by increasing your page views. You'll learn how to shape your online presence for Search Engine

  19. Enhanced identification of eligibility for depression research using an electronic medical record search engine.

    Science.gov (United States)

    Seyfried, Lisa; Hanauer, David A; Nease, Donald; Albeiruti, Rashad; Kavanagh, Janet; Kales, Helen C

    2009-12-01

    Electronic medical records (EMRs) have become part of daily practice for many physicians. Attempts have been made to apply electronic search engine technology to speed EMR review. This was a prospective, observational study to compare the speed and clinical accuracy of a medical record search engine vs. manual review of the EMR. Three raters reviewed 49 cases in the EMR to screen for eligibility in a depression study using the electronic medical record search engine (EMERSE). One week later raters received a scrambled set of the same patients including 9 distractor cases, and used manual EMR review to determine eligibility. For both methods, accuracy was assessed for the original 49 cases by comparison with a gold standard rater. Use of EMERSE resulted in considerable time savings; chart reviews using EMERSE were significantly faster than traditional manual review (p=0.03). The percent agreement of raters with the gold standard (i.e., concurrent validity) using either EMERSE or manual review was not significantly different. Using a search engine optimized for finding clinical information in the free-text sections of the EMR can provide significant time savings while preserving clinical accuracy. The major power of this search engine is not from a more advanced and sophisticated search algorithm, but rather from a user interface designed explicitly to help users search the entire medical record in a way that protects health information.

  20. The History of the Internet Search Engine: Navigational Media and the Traffic Commodity

    Science.gov (United States)

    van Couvering, E.

    This chapter traces the economic development of the search engine industry over time, beginning with the earliest Web search engines and ending with the domination of the market by Google, Yahoo! and MSN. Specifically, it focuses on the ways in which search engines are similar to and different from traditional media institutions, and how the relations between traditional and Internet media have changed over time. In addition to its historical overview, a core contribution of this chapter is the analysis of the industry using a media value chain based on audiences rather than on content, and the development of traffic as the core unit of exchange. It shows that traditional media companies failed when they attempted to create vertically integrated portals in the late 1990s, based on the idea of controlling Internet content, while search engines succeeded in creating huge "virtually integrated" networks based on control of Internet traffic rather than Internet content.

  1. Ontology for customer centric digital services and analytics

    Science.gov (United States)

    Keat, Ng Wai; Shahrir, Mohammad Shazri

    2017-11-01

    In computer science research, ontologies are commonly utilised to create a unified abstraction across many rich and different fields. In this paper, we apply the concept to the customer-centric domain of digital services analytics and present an analytics solution ontology. The essence is based on a traditional Entity Relationship Diagram (ERD), which was then abstracted to cover wider areas of customer-centric digital services. The ontology we developed covers both static aspects (customer identifiers) and dynamic aspects (customers' temporal interactions). The structure of the customer-scape is modeled with classes that represent different types of customer touch points, ranging from digital ones to digital stamps that represent physical analogies. The dynamic aspects of customer-centric digital services are modeled with a set of classes, with their importance represented in different associations involving the establishment and termination of the target interaction. The realized ontology can be used in the development of frameworks for customer-centric applications, and for the specification of a common data format used by cooperating digital service applications.

  2. Radiology and fine art.

    Science.gov (United States)

    Marinković, Slobodan; Stošić-Opinćal, Tatjana; Tomić, Oliver

    2012-07-01

    The radiologic aesthetics of some body parts and internal organs have inspired certain artists to create specific works of art. Our aim was to describe the link between radiology and fine art. We explored 13,625 artworks in the literature produced by 2049 artists and found several thousand photographs in an online image search. The examination revealed 271 radiologic artworks (1.99%) created by 59 artists (2.88%) who mainly applied radiography, sonography, CT, and MRI. Some authors produced radiologic artistic photographs, and others used radiologic images to create artful compositions, specific sculptures, or digital works. Many radiologic artworks have symbolic, metaphoric, or conceptual connotations. Radiology is clearly becoming an original and important field of modern art.

  3. An assessment of the visibility of MeSH-indexed medical web catalogs through search engines.

    Science.gov (United States)

    Zweigenbaum, P; Darmoni, S J; Grabar, N; Douyère, M; Benichou, J

    2002-01-01

    Manually indexed Internet health catalogs such as CliniWeb or CISMeF provide resources for retrieving high-quality health information. Users of these quality-controlled subject gateways are most often referred to them by general search engines such as Google, AltaVista, etc. This raises several questions, among which the following: what is the relative visibility of medical Internet catalogs through search engines? This study addresses this issue by measuring and comparing the visibility of six major, MeSH-indexed health catalogs through four different search engines (AltaVista, Google, Lycos, Northern Light) in two languages (English and French). Over half a million queries were sent to the search engines; for most of these search engines, according to our measures at the time the queries were sent, the most visible catalog for English MeSH terms was CliniWeb and the most visible one for French MeSH terms was CISMeF.
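The visibility measurement described above, sending MeSH terms as queries and noting where each catalog first appears in the result list, can be sketched as follows. This is an illustrative reconstruction, not the authors' code; the function names and the two summary statistics (coverage and mean first-hit rank) are assumptions chosen for clarity.

```python
def first_hit_rank(result_urls, catalog_domain):
    """1-based rank of the first search result hosted by the catalogue, or None."""
    for rank, url in enumerate(result_urls, start=1):
        if catalog_domain in url:
            return rank
    return None

def visibility_score(per_query_results, catalog_domain):
    """Summarize visibility over many queries as (coverage, mean first-hit rank).

    per_query_results: one ranked list of result URLs per query sent."""
    ranks = [first_hit_rank(urls, catalog_domain) for urls in per_query_results]
    hits = [r for r in ranks if r is not None]
    coverage = len(hits) / len(ranks) if ranks else 0.0
    mean_rank = sum(hits) / len(hits) if hits else None
    return coverage, mean_rank
```

A catalogue that appears for more MeSH-term queries, and nearer the top of the results, would score as more visible under this sketch.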

  4. Research on the User Interest Modeling of Personalized Search Engine

    Institute of Scientific and Technical Information of China (English)

    LI Zhengwei; XIA Shixiong; NIU Qiang; XIA Zhanguo

    2007-01-01

    At present, how to enable a search engine to construct an initial model of a user's personal interests, capture the user's personalized information in a timely manner, and provide personalized services accurately has become a hotspot in search engine research. Aiming at the problems of user model construction, and combining techniques of manual customization modeling and automatic analytical modeling, a User Interest Model (UIM) is proposed in this paper. On the basis of it, the corresponding establishment and update algorithms for the User Interest Profile (UIP) are presented. Simulation tests showed that the proposed UIM and the corresponding algorithms can effectively enhance retrieval precision and have superior adaptability.

  5. New Capabilities in the Astrophysics Multispectral Archive Search Engine

    Science.gov (United States)

    Cheung, C. Y.; Kelley, S.; Roussopoulos, N.

    The Astrophysics Multispectral Archive Search Engine (AMASE) uses object-oriented database techniques to provide a uniform multi-mission and multi-spectral interface to search for data in the distributed archives. We describe our experience of porting AMASE from Illustra object-relational DBMS to the Informix Universal Data Server. New capabilities and utilities have been developed, including a spatial datablade that supports Nearest Neighbor queries.

  6. Development and Evaluation of Thesauri-Based Bibliographic Biomedical Search Engine

    Science.gov (United States)

    Alghoson, Abdullah

    2017-01-01

    Due to the large volume and exponential growth of biomedical documents (e.g., books, journal articles), it has become increasingly challenging for biomedical search engines to retrieve relevant documents based on users' search queries. Part of the challenge is the matching mechanism of free-text indexing that performs matching based on…

  7. GeneView: a comprehensive semantic search engine for PubMed.

    Science.gov (United States)

    Thomas, Philippe; Starlinger, Johannes; Vowinkel, Alexander; Arzt, Sebastian; Leser, Ulf

    2012-07-01

    Research results are primarily published in scientific literature and curation efforts cannot keep up with the rapid growth of published literature. The plethora of knowledge remains hidden in large text repositories like MEDLINE. Consequently, life scientists have to spend a great amount of time searching for specific information. The enormous ambiguity among most names of biomedical objects such as genes, chemicals and diseases often produces too large and unspecific search results. We present GeneView, a semantic search engine for biomedical knowledge. GeneView is built upon a comprehensively annotated version of PubMed abstracts and openly available PubMed Central full texts. This semi-structured representation of biomedical texts enables a number of features extending classical search engines. For instance, users may search for entities using unique database identifiers or they may rank documents by the number of specific mentions they contain. Annotation is performed by a multitude of state-of-the-art text-mining tools for recognizing mentions from 10 entity classes and for identifying protein-protein interactions. GeneView currently contains annotations for >194 million entities from 10 classes for ∼21 million citations with 271,000 full text bodies. GeneView can be searched at http://bc3.informatik.hu-berlin.de/.
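One of the GeneView features mentioned above, ranking documents by the number of mentions of a specific entity, can be illustrated with a minimal sketch. The data layout (a mapping from citation ID to the list of entity identifiers annotated in its text) and the function name are assumptions for illustration, not GeneView's actual implementation.

```python
def rank_by_mentions(docs, entity_id):
    """Rank citations by how often one annotated entity is mentioned.

    docs: mapping of citation ID -> list of entity IDs annotated in its text."""
    counts = {doc: ids.count(entity_id) for doc, ids in docs.items()}
    return sorted((doc for doc, c in counts.items() if c > 0),
                  key=lambda doc: counts[doc], reverse=True)
```

Searching by a unique database identifier rather than a name sidesteps the ambiguity among biomedical names that the abstract describes.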

  8. An empirical study on website usability elements and how they affect search engine optimisation

    Directory of Open Access Journals (Sweden)

    Eugene B. Visser

    2011-03-01

    The primary objective of this research project was to identify and investigate the website usability attributes which are in contradiction with search engine optimisation elements. The secondary objective was to determine if these usability attributes affect conversion. Although the literature review identifies the contradictions, experts disagree about their existence. An experiment was conducted whereby the conversion and/or traffic ratio results of an existing control website were compared to a usability-designed version of the control website, namely the experimental website. All optimisation elements were ignored, thus implementing only usability. The results clearly show that inclusion of the usability attributes positively affects conversion, indicating that usability is a prerequisite for effective website design. Search engine optimisation is also a prerequisite, for the very reason that if a website does not rank on the first page of the search engine results page for a given keyword, that website might as well not exist. According to this empirical work, usability is in contradiction to search engine optimisation best practices; therefore the two need to be weighed up in terms of their importance to search engines and visitors.

  9. A Full-Text-Based Search Engine for Finding Highly Matched Documents Across Multiple Categories

    Science.gov (United States)

    Nguyen, Hung D.; Steele, Gynelle C.

    2016-01-01

    This report demonstrates the full-text-based search engine that works on any Web-based mobile application. The engine has the capability to search databases across multiple categories based on a user's queries and to identify the most relevant or similar documents. The search results presented here were found using an Android (Google Co.) mobile device; however, the engine is also compatible with other mobile phones.

  10. Knowledge-based personalized search engine for the Web-based Human Musculoskeletal System Resources (HMSR) in biomechanics.

    Science.gov (United States)

    Dao, Tien Tuan; Hoang, Tuan Nha; Ta, Xuan Hien; Tho, Marie Christine Ho Ba

    2013-02-01

    Human musculoskeletal system resources of the human body are valuable for learning and medical purposes. Internet-based information from conventional search engines such as Google or Yahoo cannot respond to the need for useful, accurate, reliable and good-quality human musculoskeletal resources related to medical processes, pathological knowledge and practical expertise. In the present work, an advanced knowledge-based personalized search engine was developed. Our search engine was based on a client-server, multi-layer, multi-agent architecture and the principle of semantic web services to acquire dynamically accurate and reliable HMSR information through a semantic processing and visualization approach. A security-enhanced mechanism was applied to protect the medical information. A multi-agent crawler was implemented to develop a content-based database of HMSR information. A new semantic-based PageRank score, with related mathematical formulas, was also defined and implemented. As results, semantic web service descriptions were presented in OWL, WSDL and OWL-S formats. Operational scenarios with related web-based interfaces for personal computers and mobile devices were presented and analyzed. A functional comparison between our knowledge-based search engine, a conventional search engine and a semantic search engine showed the originality and robustness of our knowledge-based personalized search engine. In fact, our knowledge-based personalized search engine allows different users, such as orthopedic patients and experts, healthcare system managers or medical students, to access remotely useful, accurate, reliable and good-quality HMSR information for their learning and medical purposes. Copyright © 2012 Elsevier Inc. All rights reserved.
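The abstract mentions a new semantic-based PageRank score without giving its formulas. As a point of reference, a minimal sketch of classical PageRank (power iteration with damping and dangling-node handling) is shown below; the semantic variant would presumably reweight the link contributions by semantic relatedness, which is not shown here.

```python
def pagerank(links, d=0.85, tol=1e-9, max_iter=100):
    """Power-iteration PageRank. `links` maps each node to its outbound neighbours."""
    nodes = list(links)
    n = len(nodes)
    pr = {u: 1.0 / n for u in nodes}
    for _ in range(max_iter):
        # rank held by dangling nodes (no outlinks) is redistributed uniformly
        dangling = sum(pr[u] for u in nodes if not links[u])
        new = {}
        for u in nodes:
            rank = (1 - d) / n + d * dangling / n
            for v in nodes:
                if u in links[v]:
                    rank += d * pr[v] / len(links[v])
            new[u] = rank
        if max(abs(new[u] - pr[u]) for u in nodes) < tol:
            return new
        pr = new
    return pr
```

On a symmetric cycle the scores converge to a uniform distribution, which is a quick sanity check for any implementation.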

  11. How Visual Search Relates to Visual Diagnostic Performance: A Narrative Systematic Review of Eye-Tracking Research in Radiology

    Science.gov (United States)

    van der Gijp, A.; Ravesloot, C. J.; Jarodzka, H.; van der Schaaf, M. F.; van der Schaaf, I. C.; van Schaik, J. P.; ten Cate, Th. J.

    2017-01-01

    Eye tracking research has been conducted for decades to gain understanding of visual diagnosis such as in radiology. For educational purposes, it is important to identify visual search patterns that are related to high perceptual performance and to identify effective teaching strategies. This review of eye-tracking literature in the radiology…

  12. GTNDSE: The GA Tech nuclear data search engine

    International Nuclear Information System (INIS)

    Kulp, W.D.; Wood, J.L.

    2004-01-01

    The function of the search engine is to retrieve data from ENSDF-formatted files and to write the data in a user-selected format. Its purposes are horizontal systematics of the nuclear mass surface, comparison with experimental data, and assistance in data analysis and evaluation

  13. NEWordS: A News Search Engine for English Vocabulary Learning

    Directory of Open Access Journals (Sweden)

    Xuejing Huang

    2015-08-01

    Vocabulary is the first hurdle for English learners to overcome. Instead of simply showing a word again and again, we developed an English news article search engine based on users' word-reciting records on Shanbay.com. It is designed for advanced English learners to find suitable reading materials. The search engine consists of a Crawling Module, Document Normalizing Module, Indexing Module, Querying Module and Interface Module. We propose three sorting and ranking algorithms for the Querying Module. For the basic algorithm, five crucial principles are taken into consideration; term frequency, inverse document frequency, familiarity degree and article freshness degree are factors in this algorithm. We then devise an improved algorithm for the scenario in which a user reads multiple articles in the search result list. Here we adopt an iterative, greedy method: the essential idea is to select English news articles one by one according to the query, while dynamically updating the unfamiliarity of the words at each iterative step. Moreover, we develop an advanced algorithm that takes article difficulty into account. The Interface Module is designed as a website, and some data visualization technologies (e.g. word clouds) are applied. Furthermore, we conduct both an applicability check and a performance evaluation. Metrics such as searching time, word-covering ratio and the minimum number of articles that completely cover all the queried vocabulary are randomly sampled and thoroughly analyzed. The results show that our search engine works very well, with satisfying performance.
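The basic ranking described above (term frequency, inverse document frequency, word familiarity, article freshness) and the iterative greedy re-scoring can be sketched roughly as follows. The abstract does not give the exact formulas, so the linear weighting and the unfamiliarity decay scheme here are illustrative assumptions, not the authors' implementation.

```python
import math

def tfidf(word, doc, docs):
    """Plain TF-IDF of `word` in `doc` (documents are lists of tokens)."""
    tf = doc.count(word) / max(len(doc), 1)
    df = sum(1 for d in docs if word in d)
    idf = math.log(len(docs) / df) if df else 0.0
    return tf * idf

def score(doc, query_words, docs, unfamiliarity, freshness):
    """Favour fresh articles rich in query words the learner finds unfamiliar."""
    return freshness + sum(
        tfidf(w, doc, docs) * unfamiliarity.get(w, 0.0) for w in query_words)

def greedy_select(docs, query_words, unfamiliarity, freshness, k=2, decay=0.5):
    """Pick k articles one by one, decaying the unfamiliarity of words covered
    by each pick so later picks favour still-uncovered vocabulary."""
    unfam = dict(unfamiliarity)
    remaining = list(range(len(docs)))
    chosen = []
    for _ in range(min(k, len(remaining))):
        best = max(remaining,
                   key=lambda i: score(docs[i], query_words, docs, unfam, freshness[i]))
        chosen.append(best)
        remaining.remove(best)
        for w in query_words:
            if w in docs[best]:
                unfam[w] = unfam.get(w, 0.0) * decay
    return chosen
```

The decay step is what makes the method greedy and iterative: once an article covering a word is selected, that word counts for less in the next round.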

  14. Through the Google Goggles: Sociopolitical Bias in Search Engine Design

    Science.gov (United States)

    Diaz, A.

    Search engines like Google are essential to navigating the Web's endless supply of news, political information, and citizen discourse. The mechanisms and conditions under which search results are selected should therefore be of considerable interest to media scholars, political theorists, and citizens alike. In this chapter, I adopt a "deliberative" ideal for search engines and examine whether Google exhibits the "same old" media biases of mainstreaming, hypercommercialism, and industry consolidation. In the end, serious objections to Google are raised: Google may favor popularity over richness; it provides advertising that competes directly with "editorial" content; it so overwhelmingly dominates the industry that users seldom get a second opinion, and this is unlikely to change. Ultimately, however, the results of this analysis may speak less about Google than about contradictions in the deliberative ideal and the so-called "inherently democratic" nature of the Web.

  15. The EBI search engine: EBI search as a service—making biological data accessible for all

    Science.gov (United States)

    Park, Young M.; Squizzato, Silvano; Buso, Nicola; Gur, Tamer

    2017-01-01

    We present an update of the EBI Search engine, an easy-to-use fast text search and indexing system with powerful data navigation and retrieval capabilities. The interconnectivity that exists between data resources at EMBL–EBI provides easy, quick and precise navigation and a better understanding of the relationship between different data types that include nucleotide and protein sequences, genes, gene products, proteins, protein domains, protein families, enzymes and macromolecular structures, as well as the life science literature. EBI Search provides a powerful RESTful API that enables its integration into third-party portals, thus providing ‘Search as a Service’ capabilities, which are the main topic of this article. PMID:28472374

  16. Comparison of Four Search Engines and their efficacy With Emphasis on Literature Research in Addiction (Prevention and Treatment).

    Science.gov (United States)

    Samadzadeh, Gholam Reza; Rigi, Tahereh; Ganjali, Ali Reza

    2013-01-01

    Surveying valuable and recent information from the internet has become vital for researchers and scholars, because every day thousands, perhaps millions, of scientific works are published as digital resources; researchers cannot ignore this great resource when searching for documents related to their literature review, which may not be found in any library. Given the variety of documents presented on the internet, search engines are among the most effective tools for finding information. The aim of this study was to evaluate three criteria, recall, preciseness and importance, for four search engines, PubMed, Science Direct, Google Scholar and the federated search of the Iranian National Medical Digital Library, in addiction (prevention and treatment), in order to select the most effective search engine for literature research. This research was a cross-sectional study in which four popular search engines in the medical sciences were evaluated. To select keywords, medical subject headings (MeSH) were used. We entered the given keywords into the search engines and evaluated the first 10 entries of each search. Direct observation was used for data collection, and the data were analyzed by descriptive statistics (number, percentage and mean) and inferential statistics (one-way analysis of variance (ANOVA) and post hoc Tukey tests) in SPSS 15 statistical software. The search engines performed differently with regard to the evaluated criteria, and the differences were significant (p = 0.004). PubMed, Science Direct and Google Scholar were the best in recall, preciseness and importance, respectively. As literature research is one of the most important stages of research, it is better for researchers, especially Substance-Related Disorders scholars, to use several search engines with the best recall, preciseness and importance in their subject field to reach desirable results, rather than depending on just one.

  17. Revisiting video game ratings: Shift from content-centric to parent-centric approach

    Directory of Open Access Journals (Sweden)

    Jiow Hee Jhee

    2017-01-01

    The rapid adoption of video gaming among children has placed tremendous strain on parents' ability to manage their children's consumption. While parents refer to online video game ratings (VGR) information to support their mediation efforts, there are many difficulties associated with this practice. This paper explores the popular VGR sites and highlights the inadequacy of VGRs for capturing parents' concerns, such as time displacement, social interactions, financial spending and various video game effects, beyond the widespread panics over content issues, which are subjective, ever-changing and often irrelevant. As such, this paper argues for a shift from a content-centric to a parent-centric approach in VGRs, one that captures the evolving nature of video gaming and supports parents, the main users of VGRs, in their management of their young video-gaming children. This paper proposes a Video Games Repository for Parents to represent that shift.

  18. Seasonal trends in sleep-disordered breathing: evidence from Internet search engine query data.

    Science.gov (United States)

    Ingram, David G; Matthews, Camilla K; Plante, David T

    2015-03-01

    The primary aim of the current study was to test the hypothesis that there is a seasonal component to snoring and obstructive sleep apnea (OSA) through the use of Google search engine query data. Internet search engine query data were retrieved from Google Trends from January 2006 to December 2012. Monthly normalized search volume was obtained over that 7-year period in the USA and Australia for the following search terms: "snoring" and "sleep apnea". Seasonal effects were investigated by fitting cosinor regression models. In addition, the search terms "snoring children" and "sleep apnea children" were evaluated to examine seasonal effects in pediatric populations. Statistically significant seasonal effects were found using cosinor analysis for "snoring" in both countries and for "sleep apnea" in the USA, but not for the "sleep apnea" search term in Australia (p = 0.13). Seasonal patterns for "snoring children" and "sleep apnea children" were observed in the USA (p = 0.002 for the former), but there was insufficient search volume to examine these search terms in Australia. All searches peaked in the winter or early spring in both countries, with the magnitude of the seasonal effect ranging from 5 to 50%. Our findings indicate that there are significant seasonal trends for both snoring and sleep apnea internet search engine queries, with a peak in the winter and early spring. Further research is indicated to determine the mechanisms underlying these findings, whether they have clinical impact, and whether they are associated with other comorbid medical conditions that have similar patterns of seasonal exacerbation.
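Cosinor regression, as used above, fits a cosine curve of known period to time-series data and reports the mean level, amplitude and peak timing. A minimal sketch for equally spaced monthly data follows; it is a generic illustration rather than the authors' analysis code, and the closed-form coefficients are exact only when the series spans a whole number of periods.

```python
import math

def fit_cosinor(y, period=12):
    """Cosinor fit y[t] ~ M + A*cos(w*t) + B*sin(w*t), with w = 2*pi/period.

    The closed-form coefficients below are the least-squares solution when the
    samples are equally spaced and cover a whole number of periods."""
    n = len(y)
    w = 2 * math.pi / period
    M = sum(y) / n                                    # MESOR (rhythm-adjusted mean)
    A = 2 / n * sum(v * math.cos(w * t) for t, v in enumerate(y))
    B = 2 / n * sum(v * math.sin(w * t) for t, v in enumerate(y))
    amplitude = math.hypot(A, B)                      # half the peak-to-trough swing
    acrophase = math.atan2(-B, A)                     # phase of the peak, in radians
    return M, amplitude, acrophase
```

A significance test on the amplitude (e.g. an F-test against the flat model), which a full cosinor analysis would add, is omitted here.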

  19. Implications of Network Centric Warfare

    National Research Council Canada - National Science Library

    Bailey, Alvin

    2004-01-01

    .... These areas of dependence also provide numerous vulnerabilities. This paper will focus specifically on Network Centric Warfare's vulnerabilities in terms of sensors, cyberterrorism/Electro-Magnetic Pulse (EMP...

  20. Questionnaire surveys of dentists on radiology.

    Science.gov (United States)

    Shelley, A M; Brunton, P; Horner, K

    2012-05-01

    Survey by questionnaire is a widely used research method in dental radiology. A major concern in reviews of questionnaires is non-response. The objectives of this study were to review questionnaire studies in dental radiology with regard to potential survey errors and to develop recommendations to assist future researchers. A literature search with the software search package PubMed was used to obtain internet-based access to Medline through the website www.ncbi.nlm.nih.gov/pubmed. A search of the English language peer-reviewed literature was conducted of all published studies, with no restriction on date. The search strategy found articles with dates from 1983 to 2010. The medical subject heading terms used were "questionnaire", "dental radiology" and "dental radiography". The reference sections of articles retrieved by this method were hand-searched in order to identify further relevant papers. Reviews, commentaries and relevant studies from the wider literature were also included. 53 questionnaire studies were identified in the dental literature that concerned dental radiography and included a report of response rate. These were all published between 1983 and 2010. In total, 87 articles are referred to in this review, including the 53 dental radiology studies. Other cited articles include reviews, commentaries and examples of studies outside dental radiology where they are germane to the arguments presented. Non-response is only one of four broad areas of error to which questionnaire surveys are subject. This review considers coverage, sampling and measurement, as well as non-response. Recommendations are made to assist future research that uses questionnaire surveys.

  1. Developing a distributed HTML5-based search engine for geospatial resource discovery

    Science.gov (United States)

    ZHOU, N.; XIA, J.; Nebert, D.; Yang, C.; Gui, Z.; Liu, K.

    2013-12-01

With the explosive growth of data, Geospatial Cyberinfrastructure (GCI) components are developed to manage geospatial resources, supporting tasks such as data discovery and data publishing. However, efficient discovery of geospatial resources remains challenging in that: (1) existing GCIs are usually developed for users of specific domains, so users may have to visit a number of GCIs to find appropriate resources; (2) the complexity of the decentralized network environment often results in slow response and poor user experience; (3) users on different browsers and devices may have very different experiences because of the diversity of front-end platforms (e.g. Silverlight, Flash or HTML). To address these issues, we developed a distributed, HTML5-based search engine. Specifically, (1) the search engine adopts a brokering approach to retrieve geospatial metadata from various distributed GCIs; (2) an asynchronous record-retrieval mode enhances search performance and user interactivity; and (3) being based on HTML5, the search engine provides unified access for users on different devices (e.g. tablets and smartphones).
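The brokering and asynchronous-retrieval ideas in this record can be sketched in a few lines. The catalog names and latencies below are invented placeholders; a real broker would issue CSW/OpenSearch requests to each GCI endpoint rather than sleep:

```python
import asyncio

# Hypothetical catalog registry: name -> simulated response latency (seconds).
CATALOGS = {
    "catalog-a": 0.10,
    "catalog-b": 0.05,
    "catalog-c": 0.20,
}

async def query_catalog(name: str, latency: float, query: str) -> list:
    """Simulate one brokered metadata request to a remote catalog."""
    await asyncio.sleep(latency)  # stands in for the network round-trip
    return [{"source": name, "title": f"{query} record from {name}"}]

async def broker_search(query: str) -> list:
    """Fan out to all catalogs concurrently and merge the returned records."""
    tasks = [query_catalog(n, lat, query) for n, lat in CATALOGS.items()]
    batches = await asyncio.gather(*tasks)
    return [record for batch in batches for record in batch]

records = asyncio.run(broker_search("land cover"))
```

Because the requests run concurrently, total wall time tracks the slowest catalog rather than the sum of all latencies, which is the point of the asynchronous retrieval mode.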

  2. Finding Business Information on the "Invisible Web": Search Utilities vs. Conventional Search Engines.

    Science.gov (United States)

    Darrah, Brenda

Researchers for small businesses, which may have no access to expensive databases or market research reports, must often rely on information found on the Internet, which can be difficult to find. Although current conventional Internet search engines are now able to index over one billion documents, there are many more documents existing in…

  3. The EBI search engine: EBI search as a service-making biological data accessible for all.

    Science.gov (United States)

    Park, Young M; Squizzato, Silvano; Buso, Nicola; Gur, Tamer; Lopez, Rodrigo

    2017-07-03

    We present an update of the EBI Search engine, an easy-to-use fast text search and indexing system with powerful data navigation and retrieval capabilities. The interconnectivity that exists between data resources at EMBL-EBI provides easy, quick and precise navigation and a better understanding of the relationship between different data types that include nucleotide and protein sequences, genes, gene products, proteins, protein domains, protein families, enzymes and macromolecular structures, as well as the life science literature. EBI Search provides a powerful RESTful API that enables its integration into third-party portals, thus providing 'Search as a Service' capabilities, which are the main topic of this article. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.

  4. Distribution-centric 3-parameter thermodynamic models of partition gas chromatography.

    Science.gov (United States)

    Blumberg, Leonid M

    2017-03-31

If both parameters (the entropy, ΔS, and the enthalpy, ΔH) of the classic van't Hoff model of the dependence of distribution coefficients (K) of analytes on temperature (T) are treated as temperature-independent constants, then the accuracy of the model is known to be insufficient for the needed accuracy of retention time prediction. A more accurate 3-parameter Clarke-Glew model offers a way to treat ΔS and ΔH as functions, ΔS(T) and ΔH(T), of T. A known T-centric construction of these functions is based on relating them to the reference values (ΔS ref and ΔH ref ) corresponding to a predetermined reference temperature (T ref ). Choosing a single T ref for all analytes in a complex sample or in a large database might lead to practically irrelevant values of ΔS ref and ΔH ref for those analytes that have too small or too large retention factors at T ref . Breaking all analytes into several subsets, each with its own T ref , leads to discontinuities in the analyte parameters. These problems are avoided in the K-centric modeling, where ΔS(T), ΔH(T) and other analyte parameters are described in relation to their values corresponding to a predetermined reference distribution coefficient (K Ref ) - the same for all analytes. In this report, the mathematics of the K-centric modeling are described and the properties of several types of K-centric parameters are discussed. It has been shown that the earlier introduced characteristic parameters of the analyte-column interaction (the characteristic temperature, T char , and the characteristic thermal constant, θ char ) are a special, chromatographically convenient case of the K-centric parameters. Transformations of T-centric parameters into K-centric ones and vice versa, as well as transformations of one set of K-centric parameters into another set and vice versa, are described. Copyright © 2017 Elsevier B.V. All rights reserved.
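For readers unfamiliar with the baseline model being criticized, a minimal sketch of the classic 2-parameter van't Hoff relation, and of solving for the temperature at which K reaches a shared reference value (the idea behind the characteristic temperature T char), might look like this. The ΔH and ΔS values are illustrative placeholders, not measured parameters:

```python
import math

R = 8.314  # gas constant, J/(mol*K)

def vant_hoff_K(T, dH, dS):
    """Classic 2-parameter van't Hoff model: ln K = -dH/(R*T) + dS/R,
    with dH and dS treated as temperature-independent constants."""
    return math.exp(-dH / (R * T) + dS / R)

def characteristic_temperature(dH, dS, K_ref=1.0, lo=200.0, hi=800.0):
    """Temperature at which K(T) equals the shared reference value K_ref.
    Bisection; K decreases with T for exothermic partitioning (dH < 0)."""
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if vant_hoff_K(mid, dH, dS) > K_ref:
            lo = mid   # K still too large -> temperature must be higher
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Illustrative (not measured) parameters for a hypothetical analyte:
dH, dS = -40_000.0, -90.0   # J/mol and J/(mol*K)
T_char = characteristic_temperature(dH, dS, K_ref=1.0)
```

With these numbers the condition K = 1 reduces to T = dH/dS ≈ 444 K, so the bisection result can be checked by hand.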

  5. Users' Understanding of Search Engine Advertisements

    Directory of Open Access Journals (Sweden)

    Lewandowski, Dirk

    2017-12-01

In this paper, a large-scale study on users' understanding of search-based advertising is presented. It is based on (1) a survey, (2) a task-based user study, and (3) an online experiment. Data were collected from 1,000 users representative of the German online population. Findings show that users generally lack an understanding of Google's business model and the workings of search-based advertising. 42% of users self-report that they either do not know that it is possible to pay Google for preferred listings for one's company on the SERPs or do not know how to distinguish between organic results and ads. In the task-based user study, we found that only 1.3% of participants were able to mark all areas correctly; 9.6% had all their identifications correct but did not mark all the results they were required to mark. For none of the screenshots given were more than 35% of users able to mark all areas correctly. In the experiment, we found that users who are not able to distinguish between the two result types choose ads around twice as often as users who can recognize the ads. The implications are that models of search engine advertising and of information seeking need to be amended, and that there is a severe need for regulating search-based advertising.

  6. The Influence of Local Ethnic Diversity on Group-Centric Crime Attitudes

    DEFF Research Database (Denmark)

    Hjorth, Frederik

    2017-01-01

Several studies provide evidence of group-centric policy attitudes, that is, citizens evaluating policies based on linkages with visible social groups. The existing literature generally points to the role of media imagery, rhetoric and prominent political sponsors in driving group-centric attitudes… ‘Top-down’ influence on group-centric attitudes by elite actors is complemented by ‘bottom-up’ local processes of experiential learning about group–policy linkages.

  7. Featureous: infrastructure for feature-centric analysis of object-oriented software

    DEFF Research Database (Denmark)

    Olszak, Andrzej; Jørgensen, Bo Nørregaard

    2010-01-01

The decentralized nature of collaborations between objects in object-oriented software makes it difficult to understand how user-observable program features are implemented and how their implementations relate to each other. It is worthwhile to improve this situation, since feature-centric program understanding and modification are essential during software evolution and maintenance. In this paper, we present an infrastructure built on top of the NetBeans IDE, called Featureous, that allows for rapid construction of tools for feature-centric analysis of object-oriented software. Our infrastructure encompasses a lightweight feature location mechanism, a number of analytical views and an API allowing for the addition of third-party extensions. To form a common conceptual framework for future feature-centric extensions, we propose to structure feature-centric analysis along three dimensions: perspective…

  8. EVALUATING A CUSTOMER-CENTRIC APPROACH

    Directory of Open Access Journals (Sweden)

    Luigi-Nicolae DUMITRESCU

    2007-01-01

Customer focus is, at best, only one element of the relationship between a company and its customers. At worst it is a boardroom buzzword, which makes every board member feel a little more secure. Not unlike the phrase "working towards equal opportunities", it shows an awareness of a need but does not address the issues. Customer focus must lead to something meaningful, will probably require sacrifices, and is just one of the steps necessary to become truly customer-centric. A customer focus puts your customers high on your list of priorities. When you put your customers into the heart of your business and make customers part of the culture, then you become customer-centric.

  9. Complex dynamics of our economic life on different scales: insights from search engine query data.

    Science.gov (United States)

    Preis, Tobias; Reith, Daniel; Stanley, H Eugene

    2010-12-28

Search engine query data deliver insight into the behaviour of individuals, who constitute the smallest possible scale of our economic life. Individuals submit several hundred million search engine queries around the world each day. We study weekly search volume data for various search terms from 2004 to 2010, offered by the search engine Google for scientific use, providing information about our economic life at an aggregated collective level. We ask whether there is a link between search volume data and financial market fluctuations on a weekly time scale. Both the collective 'swarm intelligence' of Internet users and the group of financial market participants can be regarded as a complex system of many interacting subunits that react quickly to external changes. We find clear evidence that weekly transaction volumes of S&P 500 companies are correlated with the weekly search volume of the corresponding company names. Furthermore, we apply a recently introduced method for quantifying complex correlations in time series, with which we find a clear tendency for search volume time series and transaction volume time series to show recurring patterns.
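The headline result, a correlation between weekly search volume and weekly transaction volume, reduces to a Pearson correlation over two aligned weekly series. A self-contained sketch with toy numbers (not the actual Google or S&P 500 data):

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two equally long series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Toy weekly series (hypothetical): transaction volume loosely tracking
# search volume for a company name, plus noise.
search_volume = [10, 12, 9, 15, 20, 18, 14, 22]
transaction_volume = [105, 118, 95, 150, 198, 185, 140, 215]

r = pearson_r(search_volume, transaction_volume)
```

A value of r near 1 on real data would correspond to the co-movement the study reports; the toy series are constructed so that the correlation is strongly positive.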

  10. Semantic Search in E-Discovery: An Interdisciplinary Approach

    NARCIS (Netherlands)

    Graus, D.; Ren, Z.; de Rijke, M.; van Dijk, D.; Henseler, H.; van der Knaap, N.

    2013-01-01

    We propose an interdisciplinary approach to applying and evaluating semantic search in the e-discovery setting. By combining expertise from the fields of law and criminology with that of information retrieval and extraction, we move beyond "algorithm-centric" evaluation, towards evaluating the

  11. Web Spam, Social Propaganda and the Evolution of Search Engine Rankings

    Science.gov (United States)

    Metaxas, Panagiotis Takis

    Search Engines have greatly influenced the way we experience the web. Since the early days of the web, users have been relying on them to get informed and make decisions. When the web was relatively small, web directories were built and maintained using human experts to screen and categorize pages according to their characteristics. By the mid 1990's, however, it was apparent that the human expert model of categorizing web pages does not scale. The first search engines appeared and they have been evolving ever since, taking over the role that web directories used to play.

  12. A Search Engine That's Aware of Your Needs

    Science.gov (United States)

    2005-01-01

Internet research can be compared to trying to drink from a firehose. Such a wealth of information is available that even the simplest inquiry can sometimes generate tens of thousands of leads, more information than most people can handle, and more burdensome than most can endure. Like everyone else, NASA scientists rely on the Internet as a primary search tool. Unlike the average user, though, NASA scientists perform some pretty sophisticated, involved research. To help manage the Internet and to allow researchers at NASA to gain better, more efficient access to the wealth of information, the Agency needed a search tool that was more refined and intelligent than the typical search engine. Partnership: NASA funded Stottler Henke, Inc., of San Mateo, California, a cutting-edge software company, with a Small Business Innovation Research (SBIR) contract to develop the Aware software for searching through the vast stores of knowledge quickly and efficiently. The partnership was through NASA's Ames Research Center.

  13. Searching with Experience - A Search Engine for Product Information that Learns from its Users

    NARCIS (Netherlands)

    Leeuwen, van J.P.; Jessurun, A.J.; Jansen, G.; Martens, B.; Brown, A.

    2005-01-01

    This paper describes the motivation and development of a new algorithm for ranking web pages. This development aims to enable the implementation of a search engine that can provide highly personalised results to queries. It was initiated by a request from the Dutch CAD industry, but has generic

  14. Applying industrial engineering practices to radiology.

    Science.gov (United States)

    Rosen, Len

    2004-01-01

Seven hospitals in Oregon and Washington have successfully adopted the Toyota Production System (TPS). Developed by Taiichi Ohno, TPS focuses on finding efficiencies and cost savings in manufacturing processes. A similar effort has occurred in Canada, where Toronto's Hospital for Sick Children has developed a database for its diagnostic imaging department built on the principles of TPS applied to patient encounters. Developed over the last 5 years, the database currently manages all interventional patient procedures for quality assurance, inventory, equipment, and labor. By applying industrial engineering methodology to manufacturing processes, it is possible to manage these constraints, eliminate the obstacles to achieving streamlined processes, and keep the cost of delivering products and services under control. Industrial engineering methodology has encouraged all stakeholders in manufacturing plants to become participants in dealing with constraints. It has empowered those on the shop floor as well as management to become partners in the change process. Using a manufacturing process model to organize patient procedures enables imaging departments and imaging centers to generate reports that can help them understand utilization of labor, materials, equipment, and rooms. Administrators can determine the cost of individual procedures as well as the total and average cost of specific procedure types. When Toronto's Hospital for Sick Children first applied industrial engineering methodology to medical imaging interventional radiology patient encounters, it focused on materials management. Early in the process, the return on investment became apparent as the department improved its management of more than 500,000 dollars of inventory. The calculated accumulated savings over 4 years for 10,000 interventional procedures alone amounted to more than 140,000 dollars.
The medical imaging department in this hospital is only now beginning to apply what it has learned to

  15. Anatomy and evolution of database search engines-a central component of mass spectrometry based proteomic workflows.

    Science.gov (United States)

    Verheggen, Kenneth; Raeder, Helge; Berven, Frode S; Martens, Lennart; Barsnes, Harald; Vaudel, Marc

    2017-09-13

    Sequence database search engines are bioinformatics algorithms that identify peptides from tandem mass spectra using a reference protein sequence database. Two decades of development, notably driven by advances in mass spectrometry, have provided scientists with more than 30 published search engines, each with its own properties. In this review, we present the common paradigm behind the different implementations, and its limitations for modern mass spectrometry datasets. We also detail how the search engines attempt to alleviate these limitations, and provide an overview of the different software frameworks available to the researcher. Finally, we highlight alternative approaches for the identification of proteomic mass spectrometry datasets, either as a replacement for, or as a complement to, sequence database search engines. © 2017 Wiley Periodicals, Inc.

  16. Search Engines : Some Data Protection Issues

    OpenAIRE

    Unger, Patrick

    2009-01-01

The thesis elaborates on a topic which has attracted much discussion in recent years and is the subject of an ongoing debate. A big controversy has flourished between the company Google Inc. and several privacy groups, ostensibly led by the "Working Party on the Protection of Individuals with regard to the Processing of Personal Data". Deep data protection concerns have been raised in this debate by the use of a search engine and its storing of personal data belonging to the data subject....

  17. Respiratory syncytial virus tracking using internet search engine data.

    Science.gov (United States)

    Oren, Eyal; Frere, Justin; Yom-Tov, Eran; Yom-Tov, Elad

    2018-04-03

Respiratory Syncytial Virus (RSV) is the leading cause of hospitalization in children less than 1 year of age in the United States. Internet search engine queries may provide high-resolution temporal and spatial data to estimate and predict disease activity. After filtering an initial list of 613 symptoms using high-resolution Bing search logs, we used Google Trends data between 2004 and 2016 for a smaller list of 50 terms to build predictive models of RSV incidence for five states where long-term surveillance data was available. We then used domain adaptation to model RSV incidence for the 45 remaining US states. Surveillance data sources (hospitalization and laboratory reports) were highly correlated, as were laboratory reports with search engine data. The four terms most often statistically significantly correlated as time series with the surveillance data in the five state models were RSV, flu, pneumonia, and bronchiolitis. Using our models, we tracked the spread of RSV by observing the time of peak use of the search term in different states. In general, the RSV peak moved from the south-east (Florida) to the north-west US. Our study represents the first time that RSV has been tracked using Internet data and highlights the successful use of search filters and domain adaptation techniques, using data at multiple resolutions. Our approach may assist in identifying both local and more widespread RSV transmission and may be applicable to other seasonal conditions where comprehensive epidemiological data is difficult to collect or obtain.
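The peak-timing analysis used to track spread can be illustrated with a toy example. The state series below are invented, not Google Trends data; ordering states by their peak week reproduces the kind of south-east to north-west progression the study reports:

```python
# Hypothetical normalized weekly search volumes for the term "RSV"
# in three states (invented numbers, not Google Trends data).
state_series = {
    "FL": [20, 60, 95, 70, 30, 10, 5, 5],
    "TX": [10, 25, 55, 90, 65, 30, 10, 5],
    "WA": [5, 10, 20, 45, 80, 95, 60, 25],
}

def peak_week(series):
    """Index of the week with maximal search volume."""
    return max(range(len(series)), key=series.__getitem__)

# Ordering states by peak week approximates the geographic progression
# of the epidemic wave.
spread_order = sorted(state_series, key=lambda s: peak_week(state_series[s]))
```

Here Florida peaks first (week 2), then Texas (week 3), then Washington (week 5), so the sort yields the south-east-to-north-west ordering.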

  18. GEMINI: a computationally-efficient search engine for large gene expression datasets.

    Science.gov (United States)

    DeFreitas, Timothy; Saddiki, Hachem; Flaherty, Patrick

    2016-02-24

Low-cost DNA sequencing allows organizations to accumulate massive amounts of genomic data and use that data to answer a diverse range of research questions. Presently, users must search for relevant genomic data using a keyword, accession number, or meta-data tag. However, in this search paradigm the form of the query - a text-based string - is mismatched with the form of the target - a genomic profile. To improve access to massive genomic data resources, we have developed a fast search engine, GEMINI, that uses a genomic profile as a query to search for similar genomic profiles. GEMINI implements a nearest-neighbor search algorithm using a vantage-point tree to store a database of n profiles and in certain circumstances achieves an [Formula: see text] expected query time in the limit. We tested GEMINI on breast and ovarian cancer gene expression data from The Cancer Genome Atlas project and show that it achieves a query time that scales as the logarithm of the number of records in practice on genomic data. In a database with 10^5 samples, GEMINI identifies the nearest neighbor in 0.05 sec, compared to a brute-force search time of 0.6 sec. GEMINI is a fast search engine that uses a query genomic profile to search for similar profiles in a very large genomic database. It enables users to identify similar profiles independent of sample label, data origin or other meta-data information.
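A vantage-point tree of the kind GEMINI describes can be sketched compactly. This is a generic illustration over random Euclidean profiles, not GEMINI's actual implementation or distance metric:

```python
import math
import random

def dist(a, b):
    """Euclidean distance between two profiles."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def build_vptree(points):
    """Recursively build a vantage-point tree over the profiles."""
    if not points:
        return None
    vantage, rest = points[0], points[1:]
    if not rest:
        return {"vp": vantage, "mu": 0.0, "inside": None, "outside": None}
    dists = sorted(dist(vantage, p) for p in rest)
    mu = dists[len(dists) // 2]  # median distance splits the space
    inside = [p for p in rest if dist(vantage, p) < mu]
    outside = [p for p in rest if dist(vantage, p) >= mu]
    return {"vp": vantage, "mu": mu,
            "inside": build_vptree(inside), "outside": build_vptree(outside)}

def nearest(node, query, best=None):
    """Branch-and-bound nearest-neighbor search in the VP-tree."""
    if node is None:
        return best
    d = dist(query, node["vp"])
    if best is None or d < best[0]:
        best = (d, node["vp"])
    near, far = (("inside", "outside") if d < node["mu"]
                 else ("outside", "inside"))
    best = nearest(node[near], query, best)
    if abs(d - node["mu"]) < best[0]:  # other half may still hold a closer point
        best = nearest(node[far], query, best)
    return best

random.seed(7)
profiles = [tuple(random.random() for _ in range(5)) for _ in range(200)]
tree = build_vptree(profiles)
query = tuple(random.random() for _ in range(5))
d_tree, p_tree = nearest(tree, query)
```

The pruning test `abs(d - mu) < best[0]` is what gives the expected sublinear query time: whole subtrees are skipped whenever the query is provably farther from them than the best match found so far.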

  19. Grid Service for User-Centric Job

    Energy Technology Data Exchange (ETDEWEB)

    Lauret, Jerome

    2009-07-31

The User Centric Monitoring (UCM) project was aimed at developing a toolkit that provides the Virtual Organization (VO) with tools to build systems that serve a rich set of intuitive job and application monitoring information to the VO’s scientists so that they can be more productive. The tools help collect and serve the status and error information through a Web interface. The proposed UCM toolkit is composed of a set of library functions, a database schema, and a Web portal that will collect and filter available job monitoring information from various resources and present it to users in a user-centric view rather than an administrative-centric point of view. The goal is to create a set of tools that can be used to augment grid job scheduling systems, meta-schedulers, applications, and script sets in order to provide the UCM information. The system provides various levels of an application programming interface that is useful throughout the Grid environment and at the application level for logging messages, which are combined with the other user-centric monitoring information in an abstracted “data store”. A planned monitoring portal will also dynamically present the information to users in their web browser in a secure manner, and is easily integrated into any JSR-compliant portal deployment that a VO might employ. The UCM is meant to be flexible and modular in the ways that it can be adopted, giving the VO many choices to build a solution that works for them, with special attention to the smaller VOs that do not have the resources to implement home-grown solutions.

  20. Open meta-search with OpenSearch: a case study

    OpenAIRE

    O'Riordan, Adrian P.

    2007-01-01

The goal of this project was to demonstrate the possibilities of open source search engine and aggregation technology in a Web environment by building a meta-search engine which employs free open search engines and open protocols. In contrast, many meta-search engines on the Internet use proprietary search systems. The search engines employed in this case study are all based on the OpenSearch protocol. OpenSearch-compliant systems support XML technologies such as RSS and Atom for aggregation a...
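The aggregation step of such a meta-search engine, parsing RSS result feeds and merging them with duplicate removal, can be sketched as follows. The feeds are inlined toy payloads rather than live OpenSearch responses:

```python
import xml.etree.ElementTree as ET

# Minimal RSS 2.0 payloads, as OpenSearch-compliant engines might return
# them (inlined here instead of being fetched over HTTP).
RSS_A = """<rss version="2.0"><channel>
  <item><title>Result A1</title><link>http://a.example/1</link></item>
  <item><title>Result A2</title><link>http://a.example/2</link></item>
</channel></rss>"""
RSS_B = """<rss version="2.0"><channel>
  <item><title>Result B1</title><link>http://b.example/1</link></item>
</channel></rss>"""

def parse_rss(payload):
    """Extract (title, link) pairs from an RSS result feed."""
    root = ET.fromstring(payload)
    return [(item.findtext("title"), item.findtext("link"))
            for item in root.iter("item")]

def meta_search(feeds):
    """Aggregate results from several engines, dropping duplicate links."""
    seen, merged = set(), []
    for payload in feeds:
        for title, link in parse_rss(payload):
            if link not in seen:
                seen.add(link)
                merged.append((title, link))
    return merged

results = meta_search([RSS_A, RSS_B])
```

Because every engine speaks the same open response format, adding another source is just another feed in the list, which is the interoperability argument the record makes against proprietary meta-search systems.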

  1. The physics and engineering aspects of radiology.. Textbook with questions and answers. 2. enl. and rev. ed.

    International Nuclear Information System (INIS)

    Link, T.M.; Heppe, A.

    1998-01-01

The authors have chosen the form of questions and answers derived from practice in order to present and explain the fundamental physics and engineering aspects of radiology. The second, completely revised edition of the textbook has been updated to include recent legislation and the guidelines for specialized medical education and training of specialists in diagnostic radiology. One new chapter deals with the diagnostic method of magnetic resonance imaging (MRI), and the chapters on computed tomography (CT), digital radiography and ultrasonography have been enlarged to include recent developments. The text is accompanied by easily remembered illustrations showing the typical aspects and information, and the chapter containing and discussing diagnostic images has likewise been enlarged with representative CT and MRI images. The book is intended for readers preparing for their examination as specialists, for participants in courses on radiological protection, for radiological medical technicians and for medical students, and may also serve as a refresher course. (orig./CB) [de

  2. An advanced search engine for patent analytics in medicinal chemistry.

    Science.gov (United States)

    Pasche, Emilie; Gobeill, Julien; Teodoro, Douglas; Gaudinat, Arnaud; Vishnykova, Dina; Lovis, Christian; Ruch, Patrick

    2012-01-01

Patent collections contain an important amount of medical-related knowledge, but existing tools have been reported to lack useful functionalities. We present here the development of TWINC, an advanced search engine dedicated to patent retrieval in the domain of health and life sciences. Our tool embeds two search modes: an ad hoc search to retrieve relevant patents given a short query, and a related-patent search to retrieve similar patents given a patent. Both search modes rely on tuning experiments performed during several patent retrieval competitions. Moreover, TWINC is enhanced with interactive modules, such as chemical query expansion, which is of particular importance for coping with the various ways of naming biomedical entities. While the related-patent search showed promising performance, the ad hoc search produced rather mixed results. Nonetheless, TWINC performed well during the Chemathlon task of the PatOlympics competition and experts appreciated its usability.

  3. Evaluating a federated medical search engine: tailoring the methodology and reporting the evaluation outcomes.

    Science.gov (United States)

    Saparova, D; Belden, J; Williams, J; Richardson, B; Schuster, K

    2014-01-01

    Federated medical search engines are health information systems that provide a single access point to different types of information. Their efficiency as clinical decision support tools has been demonstrated through numerous evaluations. Despite their rigor, very few of these studies report holistic evaluations of medical search engines and even fewer base their evaluations on existing evaluation frameworks. To evaluate a federated medical search engine, MedSocket, for its potential net benefits in an established clinical setting. This study applied the Human, Organization, and Technology (HOT-fit) evaluation framework in order to evaluate MedSocket. The hierarchical structure of the HOT-factors allowed for identification of a combination of efficiency metrics. Human fit was evaluated through user satisfaction and patterns of system use; technology fit was evaluated through the measurements of time-on-task and the accuracy of the found answers; and organization fit was evaluated from the perspective of system fit to the existing organizational structure. Evaluations produced mixed results and suggested several opportunities for system improvement. On average, participants were satisfied with MedSocket searches and confident in the accuracy of retrieved answers. However, MedSocket did not meet participants' expectations in terms of download speed, access to information, and relevance of the search results. These mixed results made it necessary to conclude that in the case of MedSocket, technology fit had a significant influence on the human and organization fit. Hence, improving technological capabilities of the system is critical before its net benefits can become noticeable. The HOT-fit evaluation framework was instrumental in tailoring the methodology for conducting a comprehensive evaluation of the search engine. Such multidimensional evaluation of the search engine resulted in recommendations for system improvement.

  4. Comprehensive engineering and radiological survey of the Bochvar VNIINM site

    International Nuclear Information System (INIS)

    Bazhanov, M.S.; Belousov, S.V.; Grishin, E.Zh.; Kotov, A.L.; Kuznetsov, A.Yu.; Savin, S.K.; Sukhanov, L.P.; Chernikov, M.A.; Utrobin, D.V.

    2012-01-01

The comprehensive engineering and radiological survey (CERS) of Bochvar VNIINM was performed in 2010-2012. During the survey, radiometric measurements were taken to determine the total activity of α-emitting radionuclides in the air of work rooms, in gas releases and in the environment, and of α- and β-emitting radionuclides in the air of work rooms, in air releases from the site, in waste waters, in samples of snow, soil and vegetation, and in process oils. As a result of the work, experience was obtained in performing CERS of buildings and territory, standardized CERS programmes were developed, and essential information was collected about radioactive contamination of both the buildings and the territory of the Bochvar VNIINM [ru

  5. Global polar geospatial information service retrieval based on search engine and ontology reasoning

    Science.gov (United States)

    Chen, Nengcheng; E, Dongcheng; Di, Liping; Gong, Jianya; Chen, Zeqiang

    2007-01-01

In order to improve the access precision of polar geospatial information services on the web, a new methodology for retrieving global spatial information services based on geospatial service search and ontology reasoning is proposed: the geospatial service search finds coarse services on the web, and the ontology reasoning refines them. The proposed framework includes standardized distributed geospatial web services, a geospatial service search engine, an extended UDDI registry, and a multi-protocol geospatial information service client. Key technologies addressed include service discovery based on a search engine, and service ontology modeling and reasoning in the Antarctic geospatial context. Finally, an Antarctic multi-protocol OWS portal prototype based on the proposed methodology is introduced.
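The two-stage idea, a coarse keyword search followed by ontology-driven refinement, can be illustrated with a toy example; the service records and the ontology below are invented, not the paper's Antarctic data:

```python
# Hypothetical harvested service records (invented names and keywords).
SERVICES = [
    {"name": "ant-dem-wms", "keywords": {"elevation", "antarctica"}},
    {"name": "sea-ice-wfs", "keywords": {"sea ice", "antarctica"}},
    {"name": "land-cover-wms", "keywords": {"land cover", "africa"}},
]

# Toy ontology: concept -> narrower concepts it subsumes.
ONTOLOGY = {
    "cryosphere": {"sea ice", "elevation"},
}

def coarse_search(term):
    """Plain keyword match, as a web search engine might return it."""
    return [s for s in SERVICES if term in s["keywords"]]

def refined_search(concept):
    """Expand the concept via the ontology, then match against the union."""
    terms = {concept} | ONTOLOGY.get(concept, set())
    return [s for s in SERVICES if s["keywords"] & terms]

hits = refined_search("cryosphere")
```

A coarse search for the literal term "cryosphere" finds nothing, while the ontology-expanded search recovers both Antarctic services, which is the precision/recall gain the methodology targets.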

  6. The EBI Search engine: providing search and retrieval functionality for biological data from EMBL-EBI.

    Science.gov (United States)

    Squizzato, Silvano; Park, Young Mi; Buso, Nicola; Gur, Tamer; Cowley, Andrew; Li, Weizhong; Uludag, Mahmut; Pundir, Sangya; Cham, Jennifer A; McWilliam, Hamish; Lopez, Rodrigo

    2015-07-01

    The European Bioinformatics Institute (EMBL-EBI-https://www.ebi.ac.uk) provides free and unrestricted access to data across all major areas of biology and biomedicine. Searching and extracting knowledge across these domains requires a fast and scalable solution that addresses the requirements of domain experts as well as casual users. We present the EBI Search engine, referred to here as 'EBI Search', an easy-to-use fast text search and indexing system with powerful data navigation and retrieval capabilities. API integration provides access to analytical tools, allowing users to further investigate the results of their search. The interconnectivity that exists between data resources at EMBL-EBI provides easy, quick and precise navigation and a better understanding of the relationship between different data types including sequences, genes, gene products, proteins, protein domains, protein families, enzymes and macromolecular structures, together with relevant life science literature. © The Author(s) 2015. Published by Oxford University Press on behalf of Nucleic Acids Research.

  7. Data Centric Development Methodology

    Science.gov (United States)

    Khoury, Fadi E.

    2012-01-01

Data-centric applications, an important effort of software development in large organizations, have mostly adopted a software methodology, such as waterfall or the Rational Unified Process, as the framework for their development. These methodologies can work for structural, procedural, or object-oriented applications, but fail to capture…

  8. Categorization of web pages - Performance enhancement to search engine

    Digital Repository Service at National Institute of Oceanography (India)

    Lakshminarayana, S.


  9. Curating the Web: Building a Google Custom Search Engine for the Arts

    Science.gov (United States)

    Hennesy, Cody; Bowman, John

    2008-01-01

    Google's first foray onto the web made search simple and results relevant. With its Co-op platform, Google has taken another step toward dramatically increasing the relevancy of search results, further adapting the World Wide Web to local needs. Google Custom Search Engine, a tool on the Co-op platform, puts one in control of his or her own search…

  10. Introducing the religio-centric positional advantage to Indonesian small businesses

    Directory of Open Access Journals (Sweden)

    Hendar Hendar

    2017-03-01

    Full Text Available With a focus on small religion-based businesses in Indonesia, this research examines whether marketing innovativeness, customer responsiveness (CuR) and competitor responsiveness (CoR) can improve marketing performance. A conceptual model of the correlation of these three variables with religio-centric positional advantage and marketing performance is examined using a structural equation model. For this purpose, 335 small Islamic fashion businesses were studied using purposive sampling from 11 regencies/cities in Central Java (Indonesia). The results showed that (1) small companies in these religion-based market segments are likely to have better marketing performance when they have a religio-centric positional advantage, (2) increases in marketing innovativeness, CuR and CoR are required to improve and maintain religio-centric positional advantage, and (3) religio-centric positional advantage is a mediator in the correlation of marketing innovativeness, customer responsiveness and competitor responsiveness with marketing performance.

  11. SpEnD: Linked Data SPARQL Endpoints Discovery Using Search Engines

    OpenAIRE

    Yumusak, Semih; Dogdu, Erdogan; Kodaz, Halife; Kamilaris, Andreas

    2016-01-01

    In this study, a novel metacrawling method is proposed for discovering and monitoring linked data sources on the Web. We implemented the method in a prototype system, named SPARQL Endpoints Discovery (SpEnD). SpEnD starts with a "search keyword" discovery process for finding relevant keywords for the linked data domain and specifically SPARQL endpoints. Then, these search keywords are utilized to find linked data sources via popular search engines (Google, Bing, Yahoo, Yandex). By using this ...

  12. Exploring the Relevance of Search Engines: An Overview of Google as a Case Study

    Directory of Open Access Journals (Sweden)

    Ricardo Beltrán-Alfonso

    2017-08-01

    Full Text Available The huge amount of data on the Internet, and the diverse strategies used to link this information to relevant searches through Linked Data, have generated a revolution in data treatment and representation. Nevertheless, conventional search engines like Google remain widely used and well received for carrying out search processes. This article presents a study of the development and evolution of search engines and, more specifically, analyzes the relevance of findings based on the number of results displayed in paging systems, with Google as a case study. Finally, it is intended to contribute to indexing criteria in search results, based on an approach to the Semantic Web as a stage in the evolution of the Web.

  13. SearchResultFinder: federated search made easy

    NARCIS (Netherlands)

    Trieschnigg, Rudolf Berend; Tjin-Kam-Jet, Kien; Hiemstra, Djoerd

    Building a federated search engine based on a large number of existing web search engines is a challenge: implementing the programming interface (API) for each search engine is an exacting and time-consuming job. In this demonstration we present SearchResultFinder, a browser plugin which speeds up…

  14. Information access in the art history domain. Evaluating a federated search engine for Rembrandt research

    NARCIS (Netherlands)

    Verberne, S.; Boves, L.W.J.; Bosch, A.P.J. van den

    2016-01-01

    The art history domain is an interesting case for search engines tailored to the digital humanities, because the domain involves different types of sources (primary and secondary; text and images). One example of an art history search engine is RemBench, which provides access to information in four

  15. A Webometric Analysis of ISI Medical Journals Using Yahoo, AltaVista, and All the Web Search Engines

    Directory of Open Access Journals (Sweden)

    Zohreh Zahedi

    2010-12-01

    Full Text Available The World Wide Web is an important information source for scholarly communications. Examining inlinks via webometric studies has attracted particular interest among information researchers. In this study, the number of inlinks to 69 ISI medical journals retrieved by the Yahoo, AltaVista, and All the Web search engines was examined via a comparative webometric study. For data analysis, SPSS software was employed. Findings revealed that the British Medical Journal website attracted the most links of all in the three search engines. There is a significant correlation between the number of external links and the ISI impact factor. The strongest correlation across the three search engines exists between the external links of Yahoo and AltaVista (100%), and the weakest is found between the external links of All the Web and the number of pages of AltaVista (0.51). There is no significant difference between the internal links and the number of pages found by the three search engines. But in the case of impact factors, significant differences are found between these three search engines. So, the study shows that journals with a higher impact factor attract more links to their websites. It also indicates that the three search engines differ significantly in terms of total links, outlinks and web impact factors.
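The study's central statistic, the correlation between external inlink counts and ISI impact factors, is an ordinary Pearson coefficient. A minimal stdlib sketch (the per-journal numbers below are invented for illustration, not data from the study):

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical per-journal data: external inlinks vs. impact factor.
inlinks = [1200, 300, 45, 800, 150]
impact = [25.3, 6.1, 1.2, 14.8, 3.9]
r = pearson(inlinks, impact)  # strongly positive for these made-up data
```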

  16. Search and rescue in collapsed structures: engineering and social science aspects.

    Science.gov (United States)

    El-Tawil, Sherif; Aguirre, Benigno

    2010-10-01

    This paper discusses the social science and engineering dimensions of search and rescue (SAR) in collapsed buildings. First, existing information is presented on factors that influence the behaviour of trapped victims, particularly human, physical, socioeconomic and circumstantial factors. Trapped victims are most often discussed in the context of structural collapse and injuries sustained. Most studies in this area focus on earthquakes as the type of disaster that produces the most extensive structural damage. Second, information is set out on the engineering aspects of urban search and rescue (USAR) in the United States, including the role of structural engineers in USAR operations, training and certification of structural specialists, and safety and general procedures. The use of computational simulation to link the engineering and social science aspects of USAR is discussed. This could supplement training of local SAR groups and USAR teams, allowing them to understand better the collapse process and how voids form in a rubble pile. A preliminary simulation tool developed for this purpose is described. © 2010 The Author(s). Journal compilation © Overseas Development Institute, 2010.

  17. History of metaphoric signs in radiology

    Energy Technology Data Exchange (ETDEWEB)

    Baker, Stephen R., E-mail: bakersr@umdnj.edu; Noorelahi, Yasser M., E-mail: dr.ynoorelahi@gmail.com; Ghosh, Shanchita, E-mail: Ghoshs1@umdnj.edu; Yang, Lily C., E-mail: yangclily@gmail.com; Kasper, David J., E-mail: dkasp86@gmail.com

    2013-09-15

    Purpose: To survey the nearly 100 year history of metaphoric sign naming in radiology describing the pace of their overall accumulation in the radiology canon, their specific rates of growth by modality and subspecialty and the characteristics of the referents to which the signs are attached. Materials and methods: A comprehensive list of metaphoric signs was compiled from a search of articles in several major English language radiology journals, from a roster compiled in a monograph on the subject published in 1984 and from a search of several databases to find signs published in the first half of the 20th century. Results: The growth of metaphoric sign naming in radiology was slow for several decades after the first one was published in 1918. It then increased rapidly until the 1980s, encompassing all modalities and subspecialties. Recently the practice has shown a marked and steady decline. Conclusion: Metaphoric sign naming was a frequently reported contribution to the radiological literature in the second half of the 20th century corresponding with Radiology's growth as a descriptive discipline. Its decline since then may be a consequence of Radiology's evolution into a more analytic, data-driven field of inquiry.

  18. History of metaphoric signs in radiology

    International Nuclear Information System (INIS)

    Baker, Stephen R.; Noorelahi, Yasser M.; Ghosh, Shanchita; Yang, Lily C.; Kasper, David J.

    2013-01-01

    Purpose: To survey the nearly 100 year history of metaphoric sign naming in radiology describing the pace of their overall accumulation in the radiology canon, their specific rates of growth by modality and subspecialty and the characteristics of the referents to which the signs are attached. Materials and methods: A comprehensive list of metaphoric signs was compiled from a search of articles in several major English language radiology journals, from a roster compiled in a monograph on the subject published in 1984 and from a search of several databases to find signs published in the first half of the 20th century. Results: The growth of metaphoric sign naming in radiology was slow for several decades after the first one was published in 1918. It then increased rapidly until the 1980s, encompassing all modalities and subspecialties. Recently the practice has shown a marked and steady decline. Conclusion: Metaphoric sign naming was a frequently reported contribution to the radiological literature in the second half of the 20th century corresponding with Radiology's growth as a descriptive discipline. Its decline since then may be a consequence of Radiology's evolution into a more analytic, data-driven field of inquiry.

  19. Key word placing in Web page body text to increase visibility to search engines

    Directory of Open Access Journals (Sweden)

    W. T. Kritzinger

    2007-11-01

    Full Text Available The growth of the World Wide Web has spawned a wide variety of new information sources, which has also left users with the daunting task of determining which sources are valid. Many users rely on the Web as an information source because of the low cost of information retrieval. It is also claimed that the Web has evolved into a powerful business tool. Examples include highly popular business services such as Amazon.com and Kalahari.net. It is estimated that around 80% of users utilize search engines to locate information on the Internet. This, by implication, places emphasis on the underlying importance of Web pages being listed on search engine indices. Empirical evidence that the placement of key words in certain areas of the body text influences a Web site's visibility to search engines could not be found in the literature. The results of two experiments indicated that key words should be concentrated towards the top, and diluted towards the bottom, of a Web page to increase visibility. However, care should be taken in terms of key word density, to prevent search engine algorithms from raising the spam alarm.
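The finding, concentrate key words towards the top of the body text while keeping overall density moderate, can be expressed as a simple measurable profile. A sketch under invented conventions (the function name and the one-third cutoff are illustrative assumptions, not from the study):

```python
def keyword_profile(body_text, keyword):
    """Return (density, top_share) for a keyword in page body text.
    density: occurrences per 100 words (too high may trip spam filters);
    top_share: fraction of occurrences in the first third of the text."""
    words = body_text.lower().split()
    hits = [i for i, w in enumerate(words) if w == keyword.lower()]
    if not words or not hits:
        return 0.0, 0.0
    density = 100.0 * len(hits) / len(words)
    top_third = len(words) / 3
    top_share = sum(1 for i in hits if i < top_third) / len(hits)
    return density, top_share

text = ("search engine visibility depends on placement " * 3 +
        "body content continues with other topics " * 6)
density, top_share = keyword_profile(text, "placement")
```

A page scoring high on `top_share` with moderate `density` matches the pattern the experiments found to increase visibility.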

  20. Process-centric IT in Practice

    DEFF Research Database (Denmark)

    Siurdyban, Artur; Nielsen, Peter Axel

    2012-01-01

    This case illustrates and discusses the issues and challenges at Kerrtec Corporation in their effort to establish process-centric IT management. The case describes how a local business unit in Kerrtec managed their business processes and how that created a necessity for IT to be managed to match the business processes. It also describes how the central IT department at corporate headquarters responded to requests rooted in business processes. In discussing the challenges for Kerrtec, it is clear that they will have to map out the needed competences for process-centric IT management. In particular, they should find governance structures which ensure that there is a fruitful collaboration between the corporate IT department and the local business units. This collaboration should also include different competences with both IT and process management and competences differing because they are centralized…

  1. A World Wide Web Region-Based Image Search Engine

    DEFF Research Database (Denmark)

    Kompatsiaris, Ioannis; Triantafyllou, Evangelia; Strintzis, Michael G.

    2001-01-01

    In this paper the development of an intelligent image content-based search engine for the World Wide Web is presented. This system will offer a new form of media representation and access of content available in WWW. Information Web Crawlers continuously traverse the Internet and collect images...

  2. Pyndri: a Python Interface to the Indri Search Engine

    NARCIS (Netherlands)

    Van Gysel, C.; Kanoulas, E.; de Rijke, M.; Jose, J.M.; Hauff, C.; Altıngovde, I.S.; Song, D.; Albakour, D.; Watt, S.; Tait, J.

    2017-01-01

    We introduce pyndri, a Python interface to the Indri search engine. Pyndri allows Indri indexes to be accessed from Python at two levels: (1) the dictionary and tokenized document collection, and (2) evaluating queries on the index. We hope that with the release of pyndri, we will stimulate reproducible, open…
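The two access levels described for pyndri can be illustrated with a tiny pure-Python stand-in. This is not pyndri's actual API (the class and method names here are invented), and the term-frequency scoring is a drastic simplification of Indri's retrieval model:

```python
from collections import Counter, defaultdict

class ToyIndex:
    """Minimal stand-in exposing the two access levels described for
    pyndri: (1) dictionary + tokenized documents, (2) query evaluation."""
    def __init__(self, docs):
        self.docs = {doc_id: text.lower().split() for doc_id, text in docs.items()}
        self.dictionary = sorted({t for toks in self.docs.values() for t in toks})
        self.postings = defaultdict(dict)  # term -> {doc_id: term frequency}
        for doc_id, toks in self.docs.items():
            for term, tf in Counter(toks).items():
                self.postings[term][doc_id] = tf

    def tokenized(self, doc_id):
        """Level 1: the tokenized document collection."""
        return self.docs[doc_id]

    def query(self, q):
        """Level 2: rank documents by summed relative term frequency."""
        scores = Counter()
        for term in q.lower().split():
            for doc_id, tf in self.postings.get(term, {}).items():
                scores[doc_id] += tf / len(self.docs[doc_id])
        return scores.most_common()

idx = ToyIndex({"d1": "indri search engine", "d2": "python interface to indri"})
ranking = idx.query("indri python")
```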

  3. Advanced Technologies, Embedded and Multimedia for Human-Centric Computing

    CERN Document Server

    Chao, Han-Chieh; Deng, Der-Jiunn; Park, James; HumanCom and EMC 2013

    2014-01-01

    The theme of HumanCom is focused on the various aspects of human-centric computing for advances in computer science and its applications, and it provides an opportunity for academic and industry professionals to discuss the latest issues and progress in the area of human-centric computing. The theme of EMC (Advances in Embedded and Multimedia Computing) is focused on the various aspects of embedded systems, smart grid, cloud and multimedia computing, and it provides an opportunity for academic and industry professionals to discuss the latest issues and progress in the area of embedded and multimedia computing. Therefore this book includes various theories and practical applications in human-centric computing and embedded and multimedia computing.

  4. [On the seasonality of dermatoses: a retrospective analysis of search engine query data depending on the season].

    Science.gov (United States)

    Köhler, M J; Springer, S; Kaatz, M

    2014-09-01

    The volume of search engine queries about disease-relevant items reflects public interest and correlates with disease prevalence as proven by the example of flu (influenza). Other influences include media attention or holidays. The present work investigates if the seasonality of prevalence or symptom severity of dermatoses correlates with search engine query data. The relative weekly volume of dermatological relevant search terms was assessed by the online tool Google Trends for the years 2009-2013. For each item, the degree of seasonality was calculated via frequency analysis and a geometric approach. Many dermatoses show a marked seasonality, reflected by search engine query volumes. Unexpected seasonal variations of these queries suggest a previously unknown variability of the respective disease prevalence. Furthermore, using the example of allergic rhinitis, a close correlation of search engine query data with actual pollen count can be demonstrated. In many cases, search engine query data are appropriate to estimate seasonal variability in prevalence of common dermatoses. This finding may be useful for real-time analysis and formation of hypotheses concerning pathogenetic or symptom aggravating mechanisms and may thus contribute to improvement of diagnostics and prevention of skin diseases.
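The "degree of seasonality … calculated via frequency analysis" can be approximated by projecting a weekly series onto its annual Fourier component. A stdlib-only sketch on synthetic query volumes (the exact measure used in the study may differ):

```python
import math

def seasonality_score(weekly, period=52):
    """Amplitude of the period-`period` Fourier component of a weekly
    series, relative to the series' standard deviation. Near zero for
    a flat series; above 1 for a strongly annual-periodic one."""
    n = len(weekly)
    mean = sum(weekly) / n
    centered = [v - mean for v in weekly]
    c = sum(v * math.cos(2 * math.pi * i / period) for i, v in enumerate(centered))
    s = sum(v * math.sin(2 * math.pi * i / period) for i, v in enumerate(centered))
    amp = 2 * math.sqrt(c * c + s * s) / n
    sd = math.sqrt(sum(v * v for v in centered) / n)
    return amp / sd if sd else 0.0

# Synthetic weekly query volumes over five years (260 weeks):
seasonal = [100 + 50 * math.sin(2 * math.pi * i / 52) for i in range(260)]
flat = [100.0] * 260
```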

  5. PubMed vs. HighWire Press: a head-to-head comparison of two medical literature search engines.

    Science.gov (United States)

    Vanhecke, Thomas E; Barnes, Michael A; Zimmerman, Janet; Shoichet, Sandor

    2007-09-01

    PubMed and HighWire Press are both useful medical literature search engines available for free to anyone on the internet. We measured retrieval accuracy, number of results generated, retrieval speed, features and search tools on HighWire Press and PubMed using the quick search features of each. We found that using HighWire Press resulted in a higher likelihood of retrieving the desired article and higher number of search results than the same search on PubMed. PubMed was faster than HighWire Press in delivering search results regardless of search settings. There are considerable differences in search features between these two search engines.

  6. Radiology consultation in the era of precision oncology: A review of consultation models and services in the tertiary setting

    Energy Technology Data Exchange (ETDEWEB)

    DiPiro, Pamela J.; Krajewski, Katherine M.; Giardino, Angela A.; Braschi-Amirfarzan, Marta; Ramaiya, Nikhil H. [Dept. of Radiology, Brigham and Women' s Hospital, Harvard Medical School, Boston (United States)

    2017-01-15

    The purpose of the article is to describe the various radiology consultation models in the Era of Precision Medicine. Since the inception of our specialty, radiologists have served as consultants to physicians of various disciplines. A variety of radiology consultation services have been described in the literature, including clinical decision support, patient-centric, subspecialty interpretation, and/or some combination of these. In oncology care in particular, case complexity often merits open dialogue with clinical providers. To explore the utility and impact of radiology consultation services in the academic setting, this article will further describe existing consultation models and the circumstances that precipitated their development. The hybrid model successful at our tertiary cancer center is discussed. In addition, the contributions of a consultant radiologist in breast cancer care are reviewed as the archetype of radiology consultation services provided to oncology practitioners.

  7. Radiology Consultation in the Era of Precision Oncology: A Review of Consultation Models and Services in the Tertiary Setting.

    Science.gov (United States)

    DiPiro, Pamela J; Krajewski, Katherine M; Giardino, Angela A; Braschi-Amirfarzan, Marta; Ramaiya, Nikhil H

    2017-01-01

    The purpose of the article is to describe the various radiology consultation models in the Era of Precision Medicine. Since the inception of our specialty, radiologists have served as consultants to physicians of various disciplines. A variety of radiology consultation services have been described in the literature, including clinical decision support, patient-centric, subspecialty interpretation, and/or some combination of these. In oncology care in particular, case complexity often merits open dialogue with clinical providers. To explore the utility and impact of radiology consultation services in the academic setting, this article will further describe existing consultation models and the circumstances that precipitated their development. The hybrid model successful at our tertiary cancer center is discussed. In addition, the contributions of a consultant radiologist in breast cancer care are reviewed as the archetype of radiology consultation services provided to oncology practitioners.

  8. In Search of Search Engine Marketing Strategy Amongst SME's in Ireland

    Science.gov (United States)

    Barry, Chris; Charleton, Debbie

    Researchers have identified the Web as a searcher's first port of call for locating information. Search Engine Marketing (SEM) strategies have been noted as a key consideration when developing, maintaining and managing Websites. A study presented here of the SEM practices of Irish small to medium enterprises (SMEs) reveals they plan to spend more resources on SEM in the future. Most firms utilize an informal SEM strategy, where Website optimization is perceived as most effective in attracting traffic. Respondents cite the use of ‘keywords in title and description tags’ as the most used SEM technique, followed by the use of ‘keywords throughout the whole Website’, while ‘Pay for Placement’ was the most widely used Paid Search technique. In concurrence with the literature, measuring SEM performance remains a significant challenge, with many firms unsure if they measure it effectively. An encouraging finding is that Irish SMEs adopt a positive ethical posture when undertaking SEM.

  9. Quantitative evaluation of recall and precision of CAT Crawler, a search engine specialized on retrieval of Critically Appraised Topics

    Science.gov (United States)

    Dong, Peng; Wong, Ling Ling; Ng, Sarah; Loh, Marie; Mondry, Adrian

    2004-01-01

    Background Critically Appraised Topics (CATs) are a useful tool that helps physicians to make clinical decisions as healthcare moves towards the practice of Evidence-Based Medicine (EBM). The fast growing World Wide Web has provided a place for physicians to share their appraised topics online, but an increasing amount of time is needed to find a particular topic within such a rich repository. Methods A web-based application, namely the CAT Crawler, was developed by Singapore's Bioinformatics Institute to allow physicians to adequately access available appraised topics on the Internet. A meta-search engine, as the core component of the application, finds relevant topics following keyword input. The primary objective of the work presented here is to evaluate the quantity and quality of search results obtained from the meta-search engine of the CAT Crawler by comparing them with those obtained from two individual CAT search engines. From the CAT libraries at these two sites, all possible keywords were extracted using a keyword extractor. Of those common to both libraries, ten were randomly chosen for evaluation. All ten were submitted to the two search engines individually, and through the meta-search engine of the CAT Crawler. Search results were evaluated for relevance both by medical amateurs and professionals, and the respective recall and precision were calculated. Results While achieving an identical recall, the meta-search engine showed a precision of 77.26% (±14.45) compared to the individual search engines' 52.65% (±12.0). The results demonstrate the validity of the CAT Crawler meta-search engine approach. The improved precision due to inherent filters underlines the practical usefulness of this tool for clinicians. PMID:15588311
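Recall and precision, as used in this evaluation, reduce to set operations on retrieved versus judged-relevant items. A minimal sketch with hypothetical relevance judgments:

```python
def recall_precision(retrieved, relevant):
    """Recall = relevant items retrieved / all relevant items;
    precision = relevant items retrieved / all retrieved items."""
    retrieved, relevant = set(retrieved), set(relevant)
    hits = retrieved & relevant
    recall = len(hits) / len(relevant) if relevant else 0.0
    precision = len(hits) / len(retrieved) if retrieved else 0.0
    return recall, precision

# Hypothetical judgments for one keyword query against a CAT library.
meta_results = ["cat1", "cat2", "cat3", "cat4"]
judged_relevant = ["cat1", "cat2", "cat3", "cat5"]
r, p = recall_precision(meta_results, judged_relevant)
```

With these made-up judgments both measures come out at 0.75; the study's point is that the meta-search engine's filters raise precision without sacrificing recall.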

  10. Radiological effluents released from nuclear rocket and ramjet engine tests at the Nevada Test Site 1959 through 1969: Fact Book

    Energy Technology Data Exchange (ETDEWEB)

    Friesen, H.N.

    1995-06-01

    Nuclear rocket and ramjet engine tests were conducted on the Nevada Test Site (NTS) in Area 25 and Area 26, about 80 miles northwest of Las Vegas, Nevada, from July 1959 through September 1969. This document presents a brief history of the nuclear rocket engine tests, information on the off-site radiological monitoring, and descriptions of the tests.

  11. Improvement of natural image search engines results by emotional filtering

    Directory of Open Access Journals (Sweden)

    Patrice Denis

    2016-04-01

    Full Text Available With the Internet 2.0 era, managing user emotions is a problem that more and more actors are interested in. Historically, the first notions of emotion sharing were expressed with emoticons, which allowed users to show their emotional status to others in an otherwise impersonal and emotionless digital world. Now, in the Internet of social media, users share large amounts of content with each other every day on Facebook, Twitter, Google+ and so on. Several popular new web sites such as FlickR, Picasa, Pinterest, Instagram and DeviantArt are specifically based on sharing image content as well as personal emotional status. This kind of information is economically very valuable: it can, for instance, help commercial companies target their customers' needs more precisely and sell them more products. Research has been, and still is, interested in mining emotional information from user data. In this paper, we focus on the emotional impact of images collected from image search engines. More specifically, we propose a filtering layer applied to the results of such image search engines. To our knowledge, this is the first attempt to filter image search engine results with an emotional filtering approach.

  12. Understanding Mechanisms of Radiological Contamination

    Energy Technology Data Exchange (ETDEWEB)

    Rick Demmer; John Drake; Ryan James, PhD

    2014-03-01

    Over the last 50 years, the study of radiological contamination and decontamination has expanded significantly. This paper addresses the mechanisms of radiological contamination that have been reported and then discusses which methods have recently been used during performance testing of several different decontamination technologies. About twenty years ago the Idaho Nuclear Technology Engineering Center (INTEC) at the INL began a search for decontamination processes which could minimize secondary waste. In order to test the effectiveness of these decontamination technologies, a new simulated contamination, termed SIMCON, was developed. SIMCON was designed to replicate the types of contamination found on stainless steel, spent fuel processing equipment. Ten years later, the INL began research into methods for simulating urban contamination resulting from a radiological dispersal device (RDD). This work was sponsored by the Defense Advanced Research Projects Agency (DARPA) and included the initial development of an aqueous application of contaminant to substrate. Since 2007, research sponsored by the US Environmental Protection Agency (EPA) has advanced that effort and led to the development of a contamination method that simulates particulate fallout from an Improvised Nuclear Device (IND). The IND method diverges from previous efforts to create tenacious contamination by simulating a reproducible “loose” contamination. Examining these different types of contamination (and subsequent decontamination processes), which have included several different radionuclides and substrates, sheds light on contamination processes that occur throughout the nuclear industry and in the urban environment.

  13. GeNemo: a search engine for web-based functional genomic data.

    Science.gov (United States)

    Zhang, Yongqing; Cao, Xiaoyi; Zhong, Sheng

    2016-07-08

    A set of new data types emerged from functional genomic assays, including ChIP-seq, DNase-seq, FAIRE-seq and others. The results are typically stored as genome-wide intensities (WIG/bigWig files) or functional genomic regions (peak/BED files). These data types present new challenges to big data science. Here, we present GeNemo, a web-based search engine for functional genomic data. GeNemo searches user-input data against online functional genomic datasets, including the entire collection of ENCODE and mouse ENCODE datasets. Unlike text-based search engines, GeNemo's searches are based on pattern matching of functional genomic regions. This distinguishes GeNemo from text or DNA sequence searches. The user can input any complete or partial functional genomic dataset, for example, a binding intensity file (bigWig) or a peak file. GeNemo reports any genomic regions, ranging from hundred bases to hundred thousand bases, from any of the online ENCODE datasets that share similar functional (binding, modification, accessibility) patterns. This is enabled by a Markov Chain Monte Carlo-based maximization process, executed on up to 24 parallel computing threads. By clicking on a search result, the user can visually compare her/his data with the found datasets and navigate the identified genomic regions. GeNemo is available at www.genemo.org. © The Author(s) 2016. Published by Oxford University Press on behalf of Nucleic Acids Research.
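Unlike text search, GeNemo matches genomic regions. The elementary building block, overlap between BED-style half-open intervals, can be sketched as follows (GeNemo's actual matching adds Markov Chain Monte Carlo based pattern similarity on top of simple overlap; the names below are illustrative):

```python
def overlapping_regions(query, dataset):
    """Return regions (chrom, start, end) from `dataset` that overlap
    any region in `query`. Half-open [start, end) intervals, as in
    BED files; this is only the coarse first step of region matching."""
    out = []
    for chrom, start, end in dataset:
        for qchrom, qstart, qend in query:
            if chrom == qchrom and start < qend and qstart < end:
                out.append((chrom, start, end))
                break
    return out

query = [("chr1", 1000, 2000)]
data = [("chr1", 1500, 1800), ("chr1", 2500, 3000), ("chr2", 1000, 2000)]
matches = overlapping_regions(query, data)
```

Only the first dataset region overlaps the query here; the others fail on coordinates or chromosome.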

  14. Quantitative evaluation of recall and precision of CAT Crawler, a search engine specialized on retrieval of Critically Appraised Topics

    Directory of Open Access Journals (Sweden)

    Loh Marie

    2004-12-01

    Full Text Available Abstract Background Critically Appraised Topics (CATs) are a useful tool that helps physicians to make clinical decisions as healthcare moves towards the practice of Evidence-Based Medicine (EBM). The fast growing World Wide Web has provided a place for physicians to share their appraised topics online, but an increasing amount of time is needed to find a particular topic within such a rich repository. Methods A web-based application, namely the CAT Crawler, was developed by Singapore's Bioinformatics Institute to allow physicians to adequately access available appraised topics on the Internet. A meta-search engine, as the core component of the application, finds relevant topics following keyword input. The primary objective of the work presented here is to evaluate the quantity and quality of search results obtained from the meta-search engine of the CAT Crawler by comparing them with those obtained from two individual CAT search engines. From the CAT libraries at these two sites, all possible keywords were extracted using a keyword extractor. Of those common to both libraries, ten were randomly chosen for evaluation. All ten were submitted to the two search engines individually, and through the meta-search engine of the CAT Crawler. Search results were evaluated for relevance both by medical amateurs and professionals, and the respective recall and precision were calculated. Results While achieving an identical recall, the meta-search engine showed a precision of 77.26% (±14.45) compared to the individual search engines' 52.65% (±12.0). Conclusion The results demonstrate the validity of the CAT Crawler meta-search engine approach. The improved precision due to inherent filters underlines the practical usefulness of this tool for clinicians.

  15. Using internet search engines and library catalogs to locate toxicology information.

    Science.gov (United States)

    Wukovitz, L D

    2001-01-12

    The increasing importance of the Internet demands that toxicologists become acquainted with its resources. To find information, researchers must be able to effectively use Internet search engines, directories, subject-oriented websites, and library catalogs. The article will explain these resources, explore their benefits and weaknesses, and identify skills that help the researcher to improve search results and critically evaluate sources for their relevancy, validity, accuracy, and timeliness.

  16. GoFFish: A Sub-Graph Centric Framework for Large-Scale Graph Analytics

    Energy Technology Data Exchange (ETDEWEB)

    Simmhan, Yogesh; Kumbhare, Alok; Wickramaarachchi, Charith; Nagarkar, Soonil; Ravi, Santosh; Raghavendra, Cauligi; Prasanna, Viktor

    2014-08-25

    Large scale graph processing is a major research area for Big Data exploration. Vertex centric programming models like Pregel are gaining traction due to their simple abstraction that naturally allows for scalable execution on distributed systems. However, there are limitations to this approach which cause vertex centric algorithms to under-perform due to a poor compute to communication overhead ratio and slow convergence of iterative supersteps. In this paper we introduce GoFFish, a scalable sub-graph centric framework co-designed with a distributed persistent graph storage for large scale graph analytics on commodity clusters. We introduce a sub-graph centric programming abstraction that combines the scalability of a vertex centric approach with the flexibility of shared memory sub-graph computation. We map Connected Components, SSSP and PageRank algorithms to this model to illustrate its flexibility. Further, we empirically analyze GoFFish using several real world graphs and demonstrate its significant performance improvement, orders of magnitude in some cases, compared to Apache Giraph, the leading open source vertex centric implementation.
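The sub-graph centric idea, solve each partition's subgraph with a shared-memory algorithm and then exchange only boundary information, can be sketched for Connected Components (partitioning and message passing are heavily simplified relative to GoFFish):

```python
def local_components(vertices, edges):
    """Union-find connected components within one partition."""
    parent = {v: v for v in vertices}
    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]  # path halving
            v = parent[v]
        return v
    for a, b in edges:
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[max(ra, rb)] = min(ra, rb)
    return {v: find(v) for v in vertices}

def subgraph_centric_cc(partitions, boundary_edges):
    """Each partition solves its subgraph locally in one pass; then
    only boundary edges are merged globally on the component labels."""
    labels = {}
    for vertices, edges in partitions:
        labels.update(local_components(vertices, edges))
    merged = local_components(set(labels.values()),
                              [(labels[a], labels[b]) for a, b in boundary_edges])
    return {v: merged[root] for v, root in labels.items()}

# Two partitions joined by one boundary edge (3, 4).
parts = [({1, 2, 3}, [(1, 2)]), ({4, 5}, [(4, 5)])]
cc = subgraph_centric_cc(parts, [(3, 4)])
```

Contrast with the vertex centric model, which would need many label-propagation supersteps to converge; here each subgraph converges locally in a single step.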

  17. A search engine to find the best data?

    CERN Multimedia

    Katarina Anthony

    2014-01-01

    What if you could see your experiment’s results in a “page rank” style? How would your workflow change if you could collaborate with your colleagues on a single platform? What if you could search all your event data for certain specifications? All of these ideas (and more) are being explored at the LHCb experiment in collaboration with Internet giant Yandex.   An extremely rare B0s → μμ decay candidate event observed in the LHCb detector. As the leading search provider in Russia, with over 60% of the market share, Yandex is to the East what Google is to the West. Their collaboration with CERN began back in 2011, when Yandex co-founder Ilya Segalovich was approached by then-LHCb spokesperson Andrei Golutvin. “Just as Yandex's search engines sift through thousands of websites to find the right page, our experimentalists apply algorithms to find the best result in our data," says Andrei Golutvin. "Perhaps the techn...

  18. Al Hirschfeld's NINA as a prototype search task for studying perceptual error in radiology

    Science.gov (United States)

    Nodine, Calvin F.; Kundel, Harold L.

    1997-04-01

    Artist Al Hirschfeld has been hiding the word NINA (his daughter's name) in line drawings of theatrical scenes that have appeared in the New York Times for over 50 years. This paper shows how Hirschfeld's search task of finding the name NINA in his drawings illustrates basic perceptual principles of detection, discrimination and decision-making commonly encountered in radiology search tasks. Hirschfeld's hiding of NINA is typically accomplished by camouflaging the letters of the name and blending them into scenic background details such as wisps of hair and folds of clothing. In a similar way, pulmonary nodules and breast lesions are camouflaged by anatomic features of the chest or breast image. Hirschfeld's hidden NINAs are sometimes missed because they are integrated into a Gestalt overview rather than differentiated from background features during focal scanning. This may be similar to overlooking an obvious nodule behind the heart in a chest x-ray image. Because it is a search game, Hirschfeld assigns a number to each drawing to indicate how many NINAs he has hidden so as not to frustrate his viewers. In the radiologists' task, the number of targets detected in a medical image is determined by combining perceptual input with probabilities generated from clinical history and viewing experience. Thus, in the absence of truth, searching for abnormalities in x-ray images creates opportunities for recognition and decision errors (e.g. false positives and false negatives). We illustrate how camouflage decreases the conspicuity of both artistic and radiographic targets, compare detection performance of radiologists with lay persons searching for NINAs, and, show similarities and differences between scanning strategies of the two groups based on eye-position data.

  19. Free and open source enabling technologies for patient-centric, guideline-based clinical decision support: a survey.

    Science.gov (United States)

    Leong, T Y; Kaiser, K; Miksch, S

    2007-01-01

    Guideline-based clinical decision support is an emerging paradigm to help reduce error, lower cost, and improve quality in evidence-based medicine. The free and open source (FOS) approach is a promising alternative for delivering cost-effective information technology (IT) solutions in health care. In this paper, we survey the current FOS enabling technologies for patient-centric, guideline-based care, and discuss the current trends and future directions of their role in clinical decision support. We searched PubMed, major biomedical informatics websites, and the web in general for papers and links related to FOS health care IT systems. We also relied on our background and knowledge for specific subtopics. We focused on the functionalities of guideline modeling tools, and briefly examined the supporting technologies for terminology, data exchange and electronic health record (EHR) standards. To effectively support patient-centric, guideline-based care, computerized guidelines and protocols need to be integrated with existing clinical information systems or EHRs. Technologies that enable such integration should be accessible, interoperable, and scalable. A plethora of FOS tools and techniques is available to support the different knowledge management and quality assurance tasks involved. Many challenges, however, remain in their implementation. There are active and growing trends of deploying FOS enabling technologies for integrating clinical guidelines, protocols, and pathways into the main care processes. The continuing development and maturation of such technologies are likely to make increasingly significant contributions to patient-centric, guideline-based clinical decision support.

  20. Environmental engineering: Saving a threatened resource--In search of solutions

    International Nuclear Information System (INIS)

    Linaweaver, F.P.

    1992-01-01

    This proceedings, Environmental Engineering: Saving a Threatened Resource--In search of solutions, contains papers presented at the 1992 National Conference on Environmental Engineering, a component of Water Forum '92, Baltimore, Maryland, August 2-5, 1992. Some of the topics addressed include air quality; environmental assessment; sludge management and disposal; solid waste, toxic and hazardous materials; water supply and treatment; and water/wastewater infrastructure. In addition, key areas explored are toxicity reduction; urban nonpoint source pollution; incineration; landfills; leachate control; and VOC emissions from wastewater treatment plants. This publication provides the environmental engineer with state-of-the-art information on practical environmental engineering and results from recent advancements in scientific knowledge in this field. Individual papers are processed separately for inclusion in the appropriate data bases

  1. Crescendo: A Protein Sequence Database Search Engine for Tandem Mass Spectra.

    Science.gov (United States)

    Wang, Jianqi; Zhang, Yajie; Yu, Yonghao

    2015-07-01

    A search engine that reliably discovers more peptides is essential to the progress of computational proteomics. We propose two new scoring functions (L- and P-scores), which aim to capture characteristics of a peptide-spectrum match (PSM) similar to those captured by Sequest and Comet. Crescendo, introduced here, is a software program that implements these two scores for peptide identification. We applied Crescendo to test datasets and compared its performance with widely used search engines, including Mascot, Sequest, and Comet. The results indicate that Crescendo identifies a similar or larger number of peptides at various predefined false discovery rates (FDR). Importantly, it also provides a better separation between the true and decoy PSMs, warranting the future development of a companion post-processing filtering algorithm.

  2. A novel algorithm for validating peptide identification from a shotgun proteomics search engine.

    Science.gov (United States)

    Jian, Ling; Niu, Xinnan; Xia, Zhonghang; Samir, Parimal; Sumanasekera, Chiranthani; Mu, Zheng; Jennings, Jennifer L; Hoek, Kristen L; Allos, Tara; Howard, Leigh M; Edwards, Kathryn M; Weil, P Anthony; Link, Andrew J

    2013-03-01

    Liquid chromatography coupled with tandem mass spectrometry (LC-MS/MS) has revolutionized the proteomics analysis of complexes, cells, and tissues. In a typical proteomic analysis, the tandem mass spectra from an LC-MS/MS experiment are assigned to a peptide by a search engine that compares the experimental MS/MS peptide data to theoretical peptide sequences in a protein database. The peptide-spectrum matches are then used to infer a list of identified proteins in the original sample. However, search engines often fail to distinguish between correct and incorrect peptide assignments. In this study, we designed and implemented a novel algorithm called De-Noise to reduce the number of incorrect peptide matches and maximize the number of correct peptides at a fixed false discovery rate, using a minimal number of scoring outputs from the SEQUEST search engine. The novel algorithm uses a three-step process: data cleaning, data refining through an SVM-based decision function, and a final data refining step based on proteolytic peptide patterns. Using proteomics data generated on different types of mass spectrometers, we optimized the De-Noise algorithm on the basis of the resolution and mass accuracy of the mass spectrometer employed in the LC-MS/MS experiment. Our results demonstrate that De-Noise improves peptide identification compared to other methods used to process the peptide sequence matches assigned by SEQUEST. Because De-Noise uses a limited number of scoring attributes, it can be easily implemented with other search engines.
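
    Two of the three De-Noise steps can be illustrated generically (a hedged sketch, not the published implementation; the SVM-based second step is omitted, and the field names and score cutoff below are hypothetical): a hard score cut for data cleaning, and a tryptic-pattern check for the final refinement.

```python
# Sketch of two De-Noise-style filters (not the published code).
# Step 1: "data cleaning" as a hard cut on the search-engine score.
# Step 3: a proteolytic-pattern check -- tryptic peptides end in K/R,
# and the residue preceding the peptide should be K/R ('-' marks the
# protein N-terminus).

def is_tryptic(peptide, prev_residue):
    return prev_residue in ("K", "R", "-") and peptide[-1] in ("K", "R")

def de_noise(psms, min_xcorr=1.5):
    """psms: list of dicts with 'peptide', 'prev' and 'xcorr' keys."""
    cleaned = [p for p in psms if p["xcorr"] >= min_xcorr]        # step 1
    return [p for p in cleaned if is_tryptic(p["peptide"], p["prev"])]

hits = de_noise([
    {"peptide": "LVNELTEFAK", "prev": "K", "xcorr": 2.8},  # passes both
    {"peptide": "LVNELTEFAD", "prev": "K", "xcorr": 2.8},  # non-tryptic
    {"peptide": "AVLDKFR",    "prev": "R", "xcorr": 0.9},  # low score
])
# only the first PSM survives both filters
```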

  3. Preliminary Comparison of Three Search Engines for Point of Care Access to MEDLINE® Citations

    Science.gov (United States)

    Hauser, Susan E.; Demner-Fushman, Dina; Ford, Glenn M.; Jacobs, Joshua L.; Thoma, George

    2006-01-01

    Medical resident physicians used MD on Tap in real time to search for MEDLINE citations relevant to clinical questions using three search engines: Essie, Entrez and Google™, in order of performance. PMID:17238564

  4. Search Engine Customization and Data Set Builder

    OpenAIRE

    Arias Moreno, Fco Javier

    2009-01-01

    There are two core objectives in this work: firstly, to build a data set, and secondly, to customize a search engine. The first objective is to design and implement a data set builder. There are two steps required for this. The first step is to build a crawler. The second step is to include a cleaner. The crawler collects Web links. The cleaner extracts the main content and removes noise from the files crawled. The goal of this application is crawling Web news sites to find the...

  5. Reconsidering the Rhizome: A Textual Analysis of Web Search Engines as Gatekeepers of the Internet

    Science.gov (United States)

    Hess, A.

    Critical theorists have often drawn from Deleuze and Guattari's notion of the rhizome when discussing the potential of the Internet. While the Internet may structurally appear as a rhizome, its day-to-day usage by millions via search engines precludes experiencing the random interconnectedness and potential democratizing function. Through a textual analysis of four search engines, I argue that Web searching has grown hierarchies, or "trees," that organize data in tracts of knowledge and place users in marketing niches rather than assist in the development of new knowledge.

  6. Cooperative Human-Centric Sensing Cooperation

    DEFF Research Database (Denmark)

    Mihovska, Albena

    Human-Centric Sensing (HCS) is a new concept relevant to the Internet of Things (IoT). HCS connectivity, referred to as “smart connectivity”, enables applications that are highly personalized and often time-critical. In a typical HCS scenario, there may be many hundreds of sensor stream connections...

  7. Georgia Tech video-based MS program in health physics/radiological engineering

    International Nuclear Information System (INIS)

    Abdel-Khalik, S.I.; Kahn, B.

    1991-01-01

    For the past several years, the health physics/radiation protection field has experienced a significant shortage of qualified professionals. The shortage is expected to continue for the foreseeable future, given the continued demand by both nuclear and medical facilities and the expected growth in the areas of waste management and environmental remediation. In response to this shortage, beginning in the fall of 1984, Georgia Institute of Technology (Georgia Tech) established a video-based instruction program that enables professionals in the nuclear field to earn a master of science degree in health physics/radiological engineering while working at a distant nuclear facility. The admission criteria and curricular requirements for the program are identical to those for the resident (on-campus) students (except that weekly attendance at departmental seminars is excused). The program is designed for students with undergraduate degrees in health physics, engineering, or appropriate sciences such as physics, chemistry, or biology. A total of 50 quarter credit hours is required, so that a student who takes one course per quarter can complete the program in four years.

  8. Search Engine Marketing (SEM): Financial & Competitive Advantages of an Effective Hotel SEM Strategy

    OpenAIRE

    Leora Halpern Lanz

    2015-01-01

    Search Engine Marketing and Optimization (SEO, SEM) are keystones of a hotel's marketing strategy; in fact, research shows that 90% of travelers start their vacation planning with a Google search. Learn five strategies that can enhance a hotel's SEO and SEM strategies to boost bookings.

  9. Query Log Analysis of an Electronic Health Record Search Engine

    Science.gov (United States)

    Yang, Lei; Mei, Qiaozhu; Zheng, Kai; Hanauer, David A.

    2011-01-01

    We analyzed a longitudinal collection of query logs of a full-text search engine designed to facilitate information retrieval in electronic health records (EHR). The collection, 202,905 queries and 35,928 user sessions recorded over a course of 4 years, represents the information-seeking behavior of 533 medical professionals, including frontline practitioners, coding personnel, patient safety officers, and biomedical researchers for patient data stored in EHR systems. In this paper, we present descriptive statistics of the queries, a categorization of information needs manifested through the queries, as well as temporal patterns of the users’ information-seeking behavior. The results suggest that information needs in the medical domain are substantially more sophisticated than those that general-purpose web search engines need to accommodate. Therefore, we envision that there exists a significant challenge, along with significant opportunities, to provide intelligent query recommendations to facilitate information retrieval in EHR. PMID:22195150

  10. An engineering approach to an integrated value proposition design framework

    Directory of Open Access Journals (Sweden)

    Van Der Merwe, Carmen

    2015-05-01

    Numerous problems with product quality and time-to-market launches can be traced back to how the product lifecycle process is managed within the organisation. This article provides insight into how an integrated value proposition design framework shifts product lifecycle management from a product-centric view to a customer-centric view, through the use of good engineering practices as found in the systems engineering discipline. Combining this with methods and tools such as the Refined Kano model, Blue Ocean strategy, and the Generalised Bass model enables the organisation to enhance product and service quality while reducing the time-to-market for new value proposition launches.

  11. The LAILAPS search engine: a feature model for relevance ranking in life science databases.

    Science.gov (United States)

    Lange, Matthias; Spies, Karl; Colmsee, Christian; Flemming, Steffen; Klapperstück, Matthias; Scholz, Uwe

    2010-03-25

    Efficient and effective information retrieval in life sciences is one of the most pressing challenges in bioinformatics. The incredible growth of life science databases into a vast network of interconnected information systems is to the same extent a big challenge and a great chance for life science research. The knowledge found on the Web, and in particular in life-science databases, is a major and valuable resource. In order to bring it to the scientist's desktop, it is essential to have well-performing search engines. Here, neither the response time nor the number of results is decisive; for millions of query results, the most crucial factor is the relevance ranking. In this paper, we present a feature model for relevance ranking in life science databases and its implementation in the LAILAPS search engine. Motivated by the observation of user behavior during inspection of search engine results, we condensed a set of 9 relevance-discriminating features. These features are intuitively used by scientists, who briefly screen database entries for potential relevance. The features are both sufficient to estimate the potential relevance and efficiently quantifiable. The derivation of a relevance prediction function that computes the relevance from these features constitutes a regression problem. To solve this problem, we used artificial neural networks trained with a reference set of relevant database entries for 19 protein queries. Supporting a flexible text index and a simple data import format, these concepts are implemented in the LAILAPS search engine. It can easily be used both as a search engine for comprehensive integrated life science databases and for small in-house project databases. LAILAPS is publicly available for SWISSPROT data at http://lailaps.ipk-gatersleben.de.
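
    The regression idea, mapping a handful of per-entry features to a learned relevance score, can be sketched with a single linear neuron trained by stochastic gradient descent; the three features and judgments here are hypothetical stand-ins for the paper's nine, and the linear model stands in for its neural network.

```python
# Toy sketch of feature-based relevance ranking: fit a prediction
# function from per-entry features to user-judged relevance, then
# rank database entries by predicted score.

def train(feature_rows, targets, lr=0.1, epochs=500):
    n = len(feature_rows[0])
    w, b = [0.0] * n, 0.0
    for _ in range(epochs):
        for x, t in zip(feature_rows, targets):
            err = b + sum(wi * xi for wi, xi in zip(w, x)) - t
            b -= lr * err                                  # SGD update
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
    return w, b

def score(w, b, x):
    return b + sum(wi * xi for wi, xi in zip(w, x))

# hypothetical features: (query-term frequency, field weight, length penalty)
X = [(0.9, 1.0, 0.1), (0.2, 0.5, 0.8), (0.7, 0.8, 0.3)]
y = [1.0, 0.0, 0.8]                       # user-judged relevance
w, b = train(X, y)
ranked = sorted(X, key=lambda x: score(w, b, x), reverse=True)
# entries ranked by predicted relevance, best first
```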

  12. Durham Zoo: Powering a Search-&-Innovation Engine with Collective Intelligence

    Directory of Open Access Journals (Sweden)

    Richard Absalom

    2015-02-01

    Purpose – Durham Zoo (hereinafter DZ) is a project to design and operate a concept search engine for science and technology. In DZ, a concept includes a solution to a problem in a particular context. Design – Concept searching is rendered complex by the fuzzy nature of a concept, the many possible implementations of the same concept, and the many more ways that those implementations can be expressed in natural language. An additional complexity is the diversity of languages and formats in which the concepts can be disclosed. Humans understand language, inference, implication and abstraction, and hence concepts, much better than computers, which in turn are much better at storing and processing vast amounts of data. We are 7 billion on the planet, and we have the Internet as the backbone for Collective Intelligence. So, our concept search engine uses humans to store concepts via a shorthand that can be stored, processed and searched by computers: humans IN and computers OUT. The shorthand is classification: metadata in a structure that can define the content of a disclosure. The classification is designed to be powerful in terms of defining and searching concepts, while suited to a crowdsourcing effort. It is simple and intuitive to use. Most importantly, it is adapted to restrict ambiguity, which is the poison of classification, without imposing restrictive centralised management. In the classification scheme, each entity is shown in a graphical representation together with related entities. The entities are arranged on a sliding scale of similarity; this sliding scale is effectively fuzzy classification. Findings – The authors have been developing a first classification scheme for the technology of traffic cones, in preparation for a trial of a working system. The process has enabled the authors to further explore the practicalities of concept classification. The CmapTools knowledge modelling kit to develop the

  13. Hydra: a scalable proteomic search engine which utilizes the Hadoop distributed computing framework.

    Science.gov (United States)

    Lewis, Steven; Csordas, Attila; Killcoyne, Sarah; Hermjakob, Henning; Hoopmann, Michael R; Moritz, Robert L; Deutsch, Eric W; Boyle, John

    2012-12-05

    For shotgun mass spectrometry based proteomics, the most computationally expensive step is matching the spectra against an increasingly large database of sequences and their post-translational modifications with known masses. Each mass spectrometer can generate data at an astonishingly high rate, and the scope of what is searched for is continually increasing. Therefore, solutions for improving our ability to perform these searches are needed. We present a sequence database search engine that is specifically designed to run efficiently on the Hadoop MapReduce distributed computing framework. The search engine implements the K-score algorithm, generating comparable output for the same input files as the original implementation. The scalability of the system is shown, and the architecture required for the development of such distributed processing is discussed. The software is scalable in its ability to handle a large peptide database, numerous modifications and large numbers of spectra. Performance scales with the number of processors in the cluster, allowing throughput to expand with the available resources.
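
    The map/reduce decomposition that lets such a search engine scale can be shown in plain Python (Hadoop and the K-score function are replaced by trivial stand-ins here): scoring each spectrum-peptide pair is embarrassingly parallel map work, and picking the best match per spectrum is the reduce.

```python
# Sketch of the map/reduce split behind a distributed spectrum search.
# Map: score every (spectrum, candidate peptide) pair independently.
# Reduce: for each spectrum, keep the best-scoring peptide.

from collections import defaultdict

def shared_peaks(spectrum, predicted):
    """Stand-in scorer (not K-score): count of shared m/z peaks."""
    return len(set(spectrum) & set(predicted))

def map_phase(spectra, peptide_index):
    for sid, peaks in spectra.items():
        for peptide, predicted in peptide_index.items():
            yield sid, (shared_peaks(peaks, predicted), peptide)

def reduce_phase(pairs):
    best = defaultdict(lambda: (-1, None))
    for sid, scored in pairs:
        best[sid] = max(best[sid], scored)
    return dict(best)

spectra = {"s1": [100, 200, 300], "s2": [120, 240]}
index = {"PEPTIDE": [100, 200, 250], "PROTEIN": [120, 240, 360]}
matches = reduce_phase(map_phase(spectra, index))
# matches["s1"] -> (2, "PEPTIDE"); matches["s2"] -> (2, "PROTEIN")
```

    Because each map task touches only its own shard of spectra and peptides, adding cluster nodes multiplies throughput, which is the scaling behavior the abstract reports.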

  14. Ergonomics in radiology

    Energy Technology Data Exchange (ETDEWEB)

    Goyal, N. [Department of Radiology, University Hospital of Wales, Cardiff (United Kingdom)], E-mail: nimitgoyal@doctors.org.uk; Jain, N.; Rachapalli, V. [Department of Radiology, University Hospital of Wales, Cardiff (United Kingdom)

    2009-02-15

    The use of computers is increasing in every field of medicine, especially radiology. Filmless radiology departments, speech recognition software, electronic request forms and teleradiology are some of the recent developments that have substantially increased the amount of time a radiologist spends in front of a computer monitor. Computers are also needed for searching literature on the internet, communicating via e-mails, and preparing for lectures and presentations. It is well known that regular computer users can suffer musculoskeletal injuries due to repetitive stress. The role of ergonomics in radiology is to ensure that working conditions are optimized in order to avoid injury and fatigue. Adequate workplace ergonomics can go a long way in increasing productivity, efficiency, and job satisfaction. We review the current literature pertaining to the role of ergonomics in modern-day radiology especially with the development of picture archiving and communication systems (PACS) workstations.

  15. Ergonomics in radiology

    International Nuclear Information System (INIS)

    Goyal, N.; Jain, N.; Rachapalli, V.

    2009-01-01

    The use of computers is increasing in every field of medicine, especially radiology. Filmless radiology departments, speech recognition software, electronic request forms and teleradiology are some of the recent developments that have substantially increased the amount of time a radiologist spends in front of a computer monitor. Computers are also needed for searching literature on the internet, communicating via e-mails, and preparing for lectures and presentations. It is well known that regular computer users can suffer musculoskeletal injuries due to repetitive stress. The role of ergonomics in radiology is to ensure that working conditions are optimized in order to avoid injury and fatigue. Adequate workplace ergonomics can go a long way in increasing productivity, efficiency, and job satisfaction. We review the current literature pertaining to the role of ergonomics in modern-day radiology especially with the development of picture archiving and communication systems (PACS) workstations

  16. Balancing Efficiency and Effectiveness for Fusion-Based Search Engines in the "Big Data" Environment

    Science.gov (United States)

    Li, Jieyu; Huang, Chunlan; Wang, Xiuhong; Wu, Shengli

    2016-01-01

    Introduction: In the big data age, we have to deal with a tremendous amount of information, which can be collected from various types of sources. For information search systems such as Web search engines or online digital libraries, the collection of documents becomes larger and larger. For some queries, an information search system needs to…

  17. FPS-RAM: Fast Prefix Search RAM-Based Hardware for Forwarding Engine

    Science.gov (United States)

    Zaitsu, Kazuya; Yamamoto, Koji; Kuroda, Yasuto; Inoue, Kazunari; Ata, Shingo; Oka, Ikuo

    Ternary content addressable memory (TCAM) is becoming very popular for designing high-throughput forwarding engines on routers. However, TCAM has potential problems in terms of hardware and power costs, which limit its ability to deploy large capacities in IP routers. In this paper, we propose a new hardware architecture for fast forwarding engines, called fast prefix search RAM-based hardware (FPS-RAM). We designed the FPS-RAM hardware with the intent of maintaining the same search performance and physical user interface as TCAM, because our objective is to replace the TCAM in the market. Our RAM-based hardware architecture is completely different from that of TCAM and dramatically reduces cost and power consumption to 62% and 52%, respectively. We implemented FPS-RAM on an FPGA to examine its lookup operation.
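
    The problem FPS-RAM accelerates in hardware, longest-prefix match over a forwarding table, can be sketched in software with a binary trie walked bit by bit (illustrative only; this says nothing about the proposed RAM layout).

```python
# Longest-prefix match with a binary trie over IPv4 prefixes.
# Each trie level consumes one address bit; the deepest node carrying
# a next hop along the path wins.

import ipaddress

class PrefixTrie:
    def __init__(self):
        self.root = {}

    def insert(self, cidr, next_hop):
        net = ipaddress.ip_network(cidr)
        bits = format(int(net.network_address), "032b")[: net.prefixlen]
        node = self.root
        for b in bits:
            node = node.setdefault(b, {})
        node["hop"] = next_hop

    def lookup(self, addr):
        bits = format(int(ipaddress.ip_address(addr)), "032b")
        node, best = self.root, None
        for b in bits:
            if "hop" in node:
                best = node["hop"]      # remember longest match so far
            if b not in node:
                break
            node = node[b]
        else:
            if "hop" in node:
                best = node["hop"]
        return best

trie = PrefixTrie()
trie.insert("10.0.0.0/8", "A")
trie.insert("10.1.0.0/16", "B")
# 10.1.2.3 matches both prefixes; the longer /16 wins
```

    A TCAM answers this in one parallel compare across all entries; the trie walk above is the sequential-logic equivalent that a RAM-based design pipelines.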

  18. General vs health specialized search engine: a blind comparative evaluation of top search results.

    Science.gov (United States)

    Pletneva, Natalia; Ruiz de Castaneda, Rafael; Baroz, Frederic; Boyer, Celia

    2014-01-01

    This paper presents the results of a blind comparison of the top ten search results retrieved by Google.ch (French) and Khresmoi for everyone, a health-specialized search engine. Participants (students of the Faculty of Medicine of the University of Geneva) had to complete three tasks and select their preferred results. The majority of the participants largely preferred Google results, while Khresmoi results showed potential to compete on specific topics. The coverage of the results seems to be one of the reasons; the second is that participants do not know how to select quality, transparent health web pages. More awareness, tools, and education about the matter are required for students of medicine to be able to efficiently distinguish trustworthy online health information.

  19. Preface for the book: Antennas And Propagation for Body-Centric Wireless Communications

    DEFF Research Database (Denmark)

    Frederiksen, Flemming Bjerge; Prasad, Ramjee

    2006-01-01

    The book address the following subjects: Body Centric Wireless Communications possibilities, Electromagnetic properties of the body, On-body Communication Channels at high and low frequency bands, Body Centric UWB Communications, Wearable Antennas for cellular and WLAN communications, Body...

  20. Search Engine Marketing (SEM): Financial & Competitive Advantages of an Effective Hotel SEM Strategy

    Directory of Open Access Journals (Sweden)

    Leora Halpern Lanz

    2015-05-01

    Search Engine Marketing and Optimization (SEO, SEM) are keystones of a hotel's marketing strategy; in fact, research shows that 90% of travelers start their vacation planning with a Google search. Learn five strategies that can enhance a hotel's SEO and SEM strategies to boost bookings.

  1. Search Engine Optimization for Flash Best Practices for Using Flash on the Web

    CERN Document Server

    Perkins, Todd

    2009-01-01

    Search Engine Optimization for Flash dispels the myth that Flash-based websites won't show up in a web search by demonstrating exactly what you can do to make your site fully searchable -- no matter how much Flash it contains. You'll learn best practices for using HTML, CSS and JavaScript, as well as SWFObject, for building sites with Flash that will stand tall in search rankings.

  2. SDN Based User-Centric Framework for Heterogeneous Wireless Networks

    Directory of Open Access Journals (Sweden)

    Zhaoming Lu

    2016-01-01

    Due to the rapid growth of mobile data traffic, more and more base stations and access points (APs) have been densely deployed to provide users with ubiquitous network access, which makes the current wireless network a complex heterogeneous network (HetNet). However, traditional wireless networks are designed with network-centric approaches, where different networks have different quality of service (QoS) strategies and cannot easily cooperate with each other to serve network users. It is an indisputable fact that massive network infrastructure alone cannot assure users' perceived network and service quality. To address this issue, we design a new framework for heterogeneous wireless networks based on the principle of user-centricity, refactoring the network from the users' perspective to satisfy their requirements and preferences. Different from network-centric approaches, the proposed framework takes advantage of Software Defined Networking (SDN) and virtualization technology, which will bring better perceived service quality to wireless network users. In the proposed user-centric framework, the control plane and data plane are decoupled to manage the HetNets in a flexible and coadjutant way, and resource virtualization technology is introduced to abstract the physical resources of HetNets into unified virtualized resources. Hence, ubiquitous and undifferentiated network connectivity and QoE (quality of experience)-driven fine-grained resource management can be achieved for wireless network users.

  3. Maximizing the sensitivity and reliability of peptide identification in large-scale proteomic experiments by harnessing multiple search engines.

    Science.gov (United States)

    Yu, Wen; Taylor, J Alex; Davis, Michael T; Bonilla, Leo E; Lee, Kimberly A; Auger, Paul L; Farnsworth, Chris C; Welcher, Andrew A; Patterson, Scott D

    2010-03-01

    Despite recent advances in qualitative proteomics, the automatic identification of peptides with optimal sensitivity and accuracy remains a difficult goal. To address this deficiency, a novel algorithm, Multiple Search Engines, Normalization and Consensus, is described. The method employs six search engines and a re-scoring engine to search MS/MS spectra against protein and decoy sequences. After the peptide hits from each engine are normalized to error rates estimated from the decoy hits, peptide assignments are deduced using a minimum consensus model. These assignments are produced in a series of progressively relaxed false-discovery rates, thus enabling a comprehensive interpretation of the data set. Additionally, the estimated false-discovery rate was found to have good concordance with the observed false-positive rate calculated from known identities. Benchmarking against standard protein data sets (ISBv1, sPRG2006) and their published analyses demonstrated that the Multiple Search Engines, Normalization and Consensus algorithm consistently achieved significantly higher sensitivity in peptide identifications, which led to increased or more robust protein identifications in all data sets compared with prior methods. The sensitivity and false-positive rate of peptide identification exhibit an inverse-proportional and linear relationship with the number of participating search engines.
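
    The normalize-then-vote idea can be sketched generically (a hedged illustration, not the published algorithm: the decoy-based error estimate, the cutoff, and the vote count below are simplified stand-ins). Each engine's raw scores, on whatever scale, are converted into a decoy-estimated error rate, which makes the engines comparable; a peptide is then accepted only if enough engines report it below the threshold.

```python
# Sketch: per-engine decoy-estimated FDR normalization, then a
# minimum-consensus vote across engines.

def decoy_fdr(hits):
    """hits: list of (score, peptide, is_decoy) on any score scale.
    Returns {peptide: decoys/targets at that peptide's score cutoff}."""
    out, targets, decoys = {}, 0, 0
    for score, peptide, is_decoy in sorted(hits, reverse=True):
        if is_decoy:
            decoys += 1
        else:
            targets += 1
            out[peptide] = decoys / targets
    return out

def consensus(engines, fdr=0.05, min_votes=2):
    votes = {}
    for hits in engines:
        for peptide, q in decoy_fdr(hits).items():
            if q <= fdr:
                votes[peptide] = votes.get(peptide, 0) + 1
    return {p for p, v in votes.items() if v >= min_votes}

# two engines with incompatible raw score scales
engine_a = [(9.1, "PEPA", False), (8.0, "PEPB", False), (7.5, "DEC1", True)]
engine_b = [(310, "PEPA", False), (250, "DEC2", True), (180, "PEPB", False)]
accepted = consensus([engine_a, engine_b])
# only PEPA passes the FDR cut in both engines
```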

  4. Visibiome: an efficient microbiome search engine based on a scalable, distributed architecture.

    Science.gov (United States)

    Azman, Syafiq Kamarul; Anwar, Muhammad Zohaib; Henschel, Andreas

    2017-07-24

    Given the current influx of 16S rRNA profiles of microbiota samples, it is conceivable that large amounts of them will eventually be available for search, comparison and contextualization with respect to novel samples. This process facilitates the identification of similar compositional features in microbiota elsewhere and can therefore help to understand the driving factors of microbial community assembly. We present Visibiome, a microbiome search engine that can perform exhaustive, phylogeny-based similarity search and contextualization of user-provided samples against a comprehensive dataset of 16S rRNA profiles from diverse environments, while tackling several computational challenges. In order to scale to high demands, we developed a distributed system that combines web framework technology, task queueing and scheduling, cloud computing and a dedicated database server. To further ensure speed and efficiency, we deploy Nearest Neighbor search algorithms, capable of sublinear searches in high-dimensional metric spaces, in combination with an optimized Earth Mover Distance based implementation of weighted UniFrac. The search also incorporates pairwise (adaptive) rarefaction and, optionally, 16S rRNA copy number correction. The result of a query is the contextualization of the microbiome sample against a comprehensive database of microbiome samples from a diverse range of environments, visualized through a rich set of interactive figures and diagrams, including barchart-based compositional comparisons and a ranking of the closest matches in the database. Visibiome is a convenient, scalable and efficient framework to search microbiomes against a comprehensive database of environmental samples. The search engine leverages a popular but computationally expensive, phylogeny-based distance metric, while providing numerous advantages over the current state-of-the-art tool.
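
    One reason an Earth Mover Distance based implementation can be made fast: for normalized one-dimensional histograms, EMD reduces to the L1 distance between cumulative sums, avoiding a general transportation solve. This is a sketch of that metric shortcut only, not Visibiome's code.

```python
# 1-D Earth Mover's Distance between two normalized histograms:
# the L1 distance between their cumulative sums.

from itertools import accumulate

def emd_1d(p, q):
    assert len(p) == len(q)
    return sum(abs(a - b) for a, b in zip(accumulate(p), accumulate(q)))

# two hypothetical relative-abundance profiles over the same 4 bins
sample = [0.5, 0.3, 0.2, 0.0]
reference = [0.2, 0.3, 0.3, 0.2]
d = emd_1d(sample, reference)   # |0.3| + |0.3| + |0.2| + |0.0| = 0.8
```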

  5. Toward two-dimensional search engines

    International Nuclear Information System (INIS)

    Ermann, L; Shepelyansky, D L; Chepelianskii, A D

    2012-01-01

    We study the statistical properties of various directed networks using a ranking of their nodes based on the dominant vectors of the Google matrix, known as PageRank and CheiRank. On average, PageRank orders nodes proportionally to their number of ingoing links, while CheiRank orders nodes proportionally to their number of outgoing links. In this way, the ranking of nodes becomes two-dimensional, which paves the way for the development of two-dimensional search engines of a new type. Statistical properties of information flow on the PageRank–CheiRank plane are analyzed for networks of British, French and Italian universities, Wikipedia, the Linux kernel, gene regulation and other networks. Special emphasis is placed on British university networks, using the large database publicly available in the UK. Methods of spam-link control are also analyzed. (paper)
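
    CheiRank is simply PageRank computed on the network with all link directions inverted, so each node gets a two-dimensional (PageRank, CheiRank) coordinate. A minimal sketch on a toy three-node network (the damping factor α = 0.85 is the conventional choice; the network itself is made up for illustration):

```python
import numpy as np

def google_matrix(adj, alpha=0.85):
    """Build the Google matrix from a directed adjacency matrix.

    adj[i, j] = 1 if node j links to node i (column j holds j's outgoing links).
    Dangling columns (no outgoing links) are replaced by uniform columns.
    """
    n = adj.shape[0]
    cols = adj.sum(axis=0)
    S = np.where(cols > 0, adj / np.maximum(cols, 1), 1.0 / n)
    return alpha * S + (1 - alpha) / n

def pagerank_vector(G, tol=1e-10):
    """Dominant eigenvector of G by power iteration (G is column-stochastic)."""
    n = G.shape[0]
    v = np.full(n, 1.0 / n)
    while True:
        nxt = G @ v
        if np.abs(nxt - v).sum() < tol:
            return nxt
        v = nxt

# Toy directed network: 0 -> 1, 0 -> 2, 1 -> 2, 2 -> 0
adj = np.zeros((3, 3))
for src, dst in [(0, 1), (0, 2), (1, 2), (2, 0)]:
    adj[dst, src] = 1.0

pagerank = pagerank_vector(google_matrix(adj))    # ranks by ingoing links
cheirank = pagerank_vector(google_matrix(adj.T))  # inverted links -> outgoing
# Each node now has a 2-D coordinate (pagerank[i], cheirank[i]).
```

    Node 2 (most ingoing links) tops the PageRank axis, while node 0 (most outgoing links) tops the CheiRank axis, illustrating the two independent orderings.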

  6. Radionuclide radiology

    International Nuclear Information System (INIS)

    Scarsbrook, A.F.; Graham, R.N.J.; Perriss, R.W.; Bradley, K.M.

    2006-01-01

    This is the fourth in a series of short reviews of internet-based radiological educational resources, and will focus on radionuclide radiology and nuclear medicine. What follows is a list of carefully selected websites to save time in searching them out. Most of the sites cater for trainee or non-specialist radiologists, but may also be of interest to specialists for use in teaching. This article may be particularly useful to radiologists interested in the rapidly expanding field of positron emission tomography computed tomography (PET-CT). Hyperlinks are available in the electronic version of this article and were all active at the time of going to press (February 2006)

  7. Society of Interventional Radiology

    Science.gov (United States)

  8. The accuracy of Internet search engines to predict diagnoses from symptoms can be assessed with a validated scoring system.

    Science.gov (United States)

    Shenker, Bennett S

    2014-02-01

    To validate a scoring system that evaluates the ability of Internet search engines to correctly predict diagnoses when symptoms are used as search terms, we developed a five-point scoring system to evaluate the diagnostic accuracy of Internet search engines. We identified twenty diagnoses common to a primary care setting to validate the scoring system. One investigator entered the symptoms for each diagnosis into three Internet search engines (Google, Bing, and Ask) and saved the first five webpages from each search. Other investigators reviewed the webpages and assigned a diagnostic accuracy score. They rescored a random sample of webpages two weeks later. To validate the five-point scoring system, we calculated convergent validity and test-retest reliability using Kendall's W and Spearman's rho, respectively. We used the Kruskal-Wallis test to look for differences in accuracy scores among the three Internet search engines. A total of 600 webpages were reviewed. Kendall's W for the raters was 0.71. The scoring system for evaluating the diagnostic accuracy of Internet search engines is a valid and reliable instrument, and may be used in future Internet research. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
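
    Kendall's W (concordance across raters) and Spearman's rho (test-retest reliability) are both directly computable; a sketch with invented rater scores, not the study's data:

```python
import numpy as np
from scipy.stats import spearmanr, rankdata

def kendalls_w(ratings):
    """Kendall's coefficient of concordance for an (m raters x n items) matrix."""
    m, n = ratings.shape
    ranks = np.apply_along_axis(rankdata, 1, ratings)  # rank items per rater
    rank_sums = ranks.sum(axis=0)
    s = ((rank_sums - rank_sums.mean()) ** 2).sum()
    return 12 * s / (m ** 2 * (n ** 3 - n))

# Three hypothetical raters scoring five webpages on a 1-5 accuracy scale
scores = np.array([
    [5, 4, 3, 2, 1],
    [5, 3, 4, 2, 1],
    [4, 5, 3, 1, 2],
])
w = kendalls_w(scores)  # convergent validity across raters

# Test-retest reliability: the same rater's scores two weeks apart
rho, _ = spearmanr([5, 4, 3, 2, 1], [5, 4, 2, 3, 1])
```

    W ranges from 0 (no agreement) to 1 (perfect agreement among raters), matching the reported 0.71 in interpretation.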

  9. The search engine manipulation effect (SEME) and its possible impact on the outcomes of elections.

    Science.gov (United States)

    Epstein, Robert; Robertson, Ronald E

    2015-08-18

    Internet search rankings have a significant impact on consumer choices, mainly because users trust and choose higher-ranked results more than lower-ranked results. Given the apparent power of search rankings, we asked whether they could be manipulated to alter the preferences of undecided voters in democratic elections. Here we report the results of five relevant double-blind, randomized controlled experiments, using a total of 4,556 undecided voters representing diverse demographic characteristics of the voting populations of the United States and India. The fifth experiment is especially notable in that it was conducted with eligible voters throughout India in the midst of India's 2014 Lok Sabha elections just before the final votes were cast. The results of these experiments demonstrate that (i) biased search rankings can shift the voting preferences of undecided voters by 20% or more, (ii) the shift can be much higher in some demographic groups, and (iii) search ranking bias can be masked so that people show no awareness of the manipulation. We call this type of influence, which might be applicable to a variety of attitudes and beliefs, the search engine manipulation effect. Given that many elections are won by small margins, our results suggest that a search engine company has the power to influence the results of a substantial number of elections with impunity. The impact of such manipulations would be especially large in countries dominated by a single search engine company.

  10. GGRNA: an ultrafast, transcript-oriented search engine for genes and transcripts.

    Science.gov (United States)

    Naito, Yuki; Bono, Hidemasa

    2012-07-01

    GGRNA (http://GGRNA.dbcls.jp/) is a Google-like, ultrafast search engine for genes and transcripts. The web server accepts arbitrary words and phrases, such as gene names, IDs, gene descriptions, annotations of genes and even nucleotide/amino acid sequences through one simple search box, and quickly returns relevant RefSeq transcripts. A typical search takes just a few seconds, which dramatically enhances the usability of routine searching. In particular, GGRNA can search sequences as short as 10 nt or 4 amino acids, which cannot be handled easily by popular sequence analysis tools. Nucleotide sequences can be searched allowing up to three mismatches, or the query sequences may contain degenerate nucleotide codes (e.g. N, R, Y, S). Furthermore, Gene Ontology annotations, Enzyme Commission numbers and probe sequences of catalog microarrays are also incorporated into GGRNA, which may help users to conduct searches with various types of keywords. The GGRNA web server provides a simple and powerful interface for finding genes and transcripts for a wide range of users. All services at GGRNA are provided free of charge to all users.

  11. Decision making in family medicine: randomized trial of the effects of the InfoClinique and Trip database search engines.

    Science.gov (United States)

    Labrecque, Michel; Ratté, Stéphane; Frémont, Pierre; Cauchon, Michel; Ouellet, Jérôme; Hogg, William; McGowan, Jessie; Gagnon, Marie-Pierre; Njoya, Merlin; Légaré, France

    2013-10-01

    To compare the ability of users of 2 medical search engines, InfoClinique and the Trip database, to provide correct answers to clinical questions and to explore the perceived effects of the tools on the clinical decision-making process. Randomized trial. Three family medicine units of the family medicine program of the Faculty of Medicine at Laval University in Quebec city, Que. Fifteen second-year family medicine residents. Residents generated 30 structured questions about therapy or preventive treatment (2 questions per resident) based on clinical encounters. Using an Internet platform designed for the trial, each resident answered 20 of these questions (their own 2, plus 18 of the questions formulated by other residents, selected randomly) before and after searching for information with 1 of the 2 search engines. For each question, 5 residents were randomly assigned to begin their search with InfoClinique and 5 with the Trip database. The ability of residents to provide correct answers to clinical questions using the search engines, as determined by third-party evaluation. After answering each question, participants completed a questionnaire to assess their perception of the engine's effect on the decision-making process in clinical practice. Of 300 possible pairs of answers (1 answer before and 1 after the initial search), 254 (85%) were produced by 14 residents. Of these, 132 (52%) and 122 (48%) pairs of answers concerned questions that had been assigned an initial search with InfoClinique and the Trip database, respectively. Both engines produced an important and similar absolute increase in the proportion of correct answers after searching (26% to 62% for InfoClinique, for an increase of 36%; 24% to 63% for the Trip database, for an increase of 39%; P = .68). For all 30 clinical questions, at least 1 resident produced the correct answer after searching with either search engine. The mean (SD) time of the initial search for each question was 23.5 (7

  12. Design implications for task-specific search utilities for retrieval and re-engineering of code

    Science.gov (United States)

    Iqbal, Rahat; Grzywaczewski, Adam; Halloran, John; Doctor, Faiyaz; Iqbal, Kashif

    2017-05-01

    The importance of information retrieval systems is unquestionable in modern society, and both individuals and enterprises recognise the benefits of being able to find information effectively. Current code-focused information retrieval systems such as Google Code Search, Codeplex or Koders produce results based on specific keywords. However, these systems do not take into account developers' context, such as development language, technology framework, goal of the project, project complexity and the developer's domain expertise. They also impose an additional cognitive burden on users in switching between different interfaces and clicking through to find the relevant code. Hence, they are not used by software developers. In this paper, we discuss how software engineers interact with information and general-purpose information retrieval systems (e.g. Google, Yahoo!) and investigate to what extent domain-specific search and recommendation utilities can be developed to support their work-related activities. To investigate this, we conducted a user study and found that software engineers followed many identifiable and repeatable work tasks and behaviours. These behaviours can be used to develop implicit relevance-feedback-based systems built on the observed retention actions. Moreover, we discuss the implications for the development of task-specific search and collaborative recommendation utilities embedded within the Google standard search engine and Microsoft IntelliSense for retrieval and re-engineering of code. Based on implicit relevance feedback, we have implemented a prototype of the proposed collaborative recommendation system, which was evaluated in a controlled environment simulating the real-world situation of professional software engineers. The evaluation achieved promising initial results on the precision and recall performance of the system.

  13. Natural Language Processing Technologies in Radiology Research and Clinical Applications

    Science.gov (United States)

    Cai, Tianrun; Giannopoulos, Andreas A.; Yu, Sheng; Kelil, Tatiana; Ripley, Beth; Kumamaru, Kanako K.; Rybicki, Frank J.

    2016-01-01

    The migration of imaging reports to electronic medical record systems holds great potential in terms of advancing radiology research and practice by leveraging the large volume of data continuously being updated, integrated, and shared. However, there are significant challenges as well, largely due to the heterogeneity of how these data are formatted. Indeed, although there is movement toward structured reporting in radiology (ie, hierarchically itemized reporting with use of standardized terminology), the majority of radiology reports remain unstructured and use free-form language. To effectively “mine” these large datasets for hypothesis testing, a robust strategy for extracting the necessary information is needed. Manual extraction of information is a time-consuming and often unmanageable task. “Intelligent” search engines that instead rely on natural language processing (NLP), a computer-based approach to analyzing free-form text or speech, can be used to automate this data mining task. The overall goal of NLP is to translate natural human language into a structured format (ie, a fixed collection of elements), each with a standardized set of choices for its value, that is easily manipulated by computer programs to (among other things) order into subcategories or query for the presence or absence of a finding. The authors review the fundamentals of NLP and describe various techniques that constitute NLP in radiology, along with some key applications. ©RSNA, 2016 PMID:26761536

  14. Natural Language Processing Technologies in Radiology Research and Clinical Applications.

    Science.gov (United States)

    Cai, Tianrun; Giannopoulos, Andreas A; Yu, Sheng; Kelil, Tatiana; Ripley, Beth; Kumamaru, Kanako K; Rybicki, Frank J; Mitsouras, Dimitrios

    2016-01-01

    The migration of imaging reports to electronic medical record systems holds great potential in terms of advancing radiology research and practice by leveraging the large volume of data continuously being updated, integrated, and shared. However, there are significant challenges as well, largely due to the heterogeneity of how these data are formatted. Indeed, although there is movement toward structured reporting in radiology (ie, hierarchically itemized reporting with use of standardized terminology), the majority of radiology reports remain unstructured and use free-form language. To effectively "mine" these large datasets for hypothesis testing, a robust strategy for extracting the necessary information is needed. Manual extraction of information is a time-consuming and often unmanageable task. "Intelligent" search engines that instead rely on natural language processing (NLP), a computer-based approach to analyzing free-form text or speech, can be used to automate this data mining task. The overall goal of NLP is to translate natural human language into a structured format (ie, a fixed collection of elements), each with a standardized set of choices for its value, that is easily manipulated by computer programs to (among other things) order into subcategories or query for the presence or absence of a finding. The authors review the fundamentals of NLP and describe various techniques that constitute NLP in radiology, along with some key applications. ©RSNA, 2016.
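
    As a toy illustration of the free-text-to-structured-format idea, a few hand-written rules can map report sentences onto a fixed element with a standardized set of values; this is nowhere near the NLP systems the article reviews, and the report text and negation cues below are invented:

```python
import re

# Negation cues that mark a mentioned finding as absent (illustrative list only)
NEGATIONS = re.compile(r"\b(no|without|negative for|free of)\b", re.IGNORECASE)

def extract_finding(report, finding):
    """Map free text to one structured element: present / absent / not mentioned."""
    for sentence in re.split(r"(?<=[.!?])\s+", report):
        if finding.lower() in sentence.lower():
            return "absent" if NEGATIONS.search(sentence) else "present"
    return "not mentioned"

report = ("Lungs are clear. No pneumothorax or pleural effusion. "
          "Mild cardiomegaly is noted.")
status = {f: extract_finding(report, f)
          for f in ("pneumothorax", "cardiomegaly", "fracture")}
```

    Real systems additionally handle report sections, uncertainty ("cannot exclude"), coreference and misspellings, which is why statistical NLP is preferred over rule lists at scale.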

  15. PR Students' Perceptions and Readiness for Using Search Engine Optimization

    Science.gov (United States)

    Moody, Mia; Bates, Elizabeth

    2013-01-01

    Enough evidence is available to support the idea that public relations professionals must possess search engine optimization (SEO) skills to assist clients in a full-service capacity; however, little research exists on how much college students know about the tactic and best practices for incorporating SEO into course curriculum. Furthermore, much…

  16. A Competitive and Experiential Assignment in Search Engine Optimization Strategy

    Science.gov (United States)

    Clarke, Theresa B.; Clarke, Irvine, III

    2014-01-01

    Despite an increase in ad spending and demand for employees with expertise in search engine optimization (SEO), methods for teaching this important marketing strategy have received little coverage in the literature. Using Bloom's cognitive goals hierarchy as a framework, this experiential assignment provides a process for educators who may be new…

  17. The Role of Exploratory Talk in Classroom Search Engine Tasks

    Science.gov (United States)

    Knight, Simon; Mercer, Neil

    2015-01-01

    While search engines are commonly used by children to find information, and in classroom-based activities, children are not adept in their information seeking or evaluation of information sources. Prior work has explored such activities in isolated, individual contexts, failing to account for the collaborative, discourse-mediated nature of search…

  18. Radiation protection of the environment: anthropocentric and eco-centric principles

    International Nuclear Information System (INIS)

    Alexakhin, R.M.; Fesenko, S.V.

    2004-01-01

    The second half of the 20. century was dominated in the field of radiation protection by the anthropocentric concept stated by the International Commission on Radiological Protection (ICRP). According to this concept 'if radiation standards protect man then biota are also adequately protected from ionizing radiation'. At the end of the 20. beginning of the 21. centuries in the area of radiation protection of nature an eco-centric strategy is beginning to develop where emphasis has swung to the protection of biota in their environment. Inadequacy of ICRP's anthropocentric concept is reported. Issues are discussed such as ecological dosimetry, non-equi-dose irradiation of man and biota, criteria for estimating radiation induced changes in biota and man, as well as the need to harmonize permissible exposure doses to man and biota. An urgent need is stressed to develop a single (synthetic) concept of radiation protection which simultaneously ensures protection of human health and biota well-being in their environment. This concept is to be based on the recognition of the integrity of socio-natural ecosystems where man and biota are considered as a unity. (author)

  19. Data Centric Sensor Stream Reduction for Real-Time Applications in Wireless Sensor Networks

    Science.gov (United States)

    Aquino, Andre Luiz Lins; Nakamura, Eduardo Freire

    2009-01-01

    This work presents a data-centric strategy to meet deadlines in soft real-time applications in wireless sensor networks. This strategy considers three main aspects: (i) the design of real-time applications to obtain the minimum deadlines; (ii) an analytic model to estimate the ideal sample size used by data-reduction algorithms; and (iii) two data-centric stream-based sampling algorithms to perform data reduction whenever necessary. Simulation results show that our data-centric strategies meet deadlines without losing data representativeness. PMID:22303145
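
    The abstract does not spell out its sampling algorithms; reservoir sampling is a standard stream-based reduction technique of the kind described (shown here as a generic example, not the paper's exact algorithm), keeping a fixed-size uniform sample of an unbounded sensor stream in O(k) memory:

```python
import random

def reservoir_sample(stream, k, rng=random):
    """Keep a uniform random sample of k readings from a stream of unknown
    length, visiting each reading exactly once (classic reservoir sampling)."""
    reservoir = []
    for i, reading in enumerate(stream):
        if i < k:
            reservoir.append(reading)          # fill the reservoir first
        else:
            j = rng.randrange(i + 1)           # replace with probability k/(i+1)
            if j < k:
                reservoir[j] = reading
    return reservoir

random.seed(42)
sample = reservoir_sample(range(10_000), k=32)  # 10,000 readings -> 32 kept
```

    Because each reading is touched once and memory stays constant, the node can bound its processing time per reading, which is what makes deadline guarantees feasible.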

  20. Search engine imaginary: Visions and values in the co-production of search technology and Europe.

    Science.gov (United States)

    Mager, Astrid

    2017-04-01

    This article discusses the co-production of search technology and a European identity in the context of the EU data protection reform. The negotiations of the EU data protection legislation ran from 2012 until 2015 and resulted in a unified data protection legislation directly binding for all European member states. I employ a discourse analysis to examine EU policy documents and Austrian media materials related to the reform process. Using the concept 'sociotechnical imaginary', I show how a European imaginary of search engines is forming in the EU policy domain, how a European identity is constructed in the envisioned politics of control, and how national specificities contribute to the making and unmaking of a European identity. I discuss the roles that national technopolitical identities play in shaping both search technology and Europe, taking as an example Austria, a small country with a long history in data protection and a tradition of restrained technology politics.

  1. A Commander in Chief's Network-Centric Odyssey

    National Research Council Canada - National Science Library

    Copley, E

    2002-01-01

    .... Each Armed Service has begun training and equipping its force using the tenets of Network-Centric Operations, but those forces come together for the first time under the combatant Commander-in-Chief...

  2. Network Centric Information Structure - Crisis Information Management

    National Research Council Canada - National Science Library

    Aarholt, Eldar; Berg, Olav

    2004-01-01

    This paper presents a generic Network Centric Information Structure (NCIS) that can be used by civilian, military and public sectors, and that supports information handling applied to crises management and emergency response...

  3. SA Journal of Radiology

    African Journals Online (AJOL)

    SA Journal of Radiology, Vol 19, No 2 (2015).

  4. SA Journal of Radiology

    African Journals Online (AJOL)

    SA Journal of Radiology, Vol 21, No 1 (2017).

  5. REPTREE CLASSIFIER FOR IDENTIFYING LINK SPAM IN WEB SEARCH ENGINES

    Directory of Open Access Journals (Sweden)

    S.K. Jayanthi

    2013-01-01

    Search engines are used for retrieving information from the web. Most of the time, importance is placed on the top 10 results (sometimes shrinking to the top 5) because of time constraints and reliance on the search engines. Users believe that the top 10 or 5 of the total results are more relevant. Here arises the problem of spamdexing, a method of deceiving search result quality. Falsified metrics, such as inserting enormous numbers of keywords or links in a website, may take that website to the top 10 or 5 positions. This paper proposes a classifier based on REPTree (a regression tree representative). As an initial step, link-based features such as neighbors, PageRank, truncated PageRank, TrustRank and assortativity-related attributes are inferred. Based on these features, a tree is constructed. The tree uses the feature inference to differentiate spam sites from legitimate sites. The WEBSPAM-UK2007 dataset is taken as a base. It is preprocessed and converted into five datasets: FEATA, FEATB, FEATC, FEATD and FEATE. Only link-based features are used in the experiments; this paper focuses on link spam alone. Finally, a representative tree is created which more precisely classifies web spam entries. Results are given. Regression tree classification performs well, as shown through the experiments.
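
    The classification step can be sketched with scikit-learn's CART decision tree standing in for Weka's REPTree (the link-feature values below are fabricated for illustration; WEBSPAM-UK2007 supplies the real ones):

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# Toy link-based features per host: [pagerank, truncated_pagerank, trustrank,
# assortativity]. Two made-up clusters: legitimate hosts tend to earn trust and
# rank; spam hosts do not, but show unusual assortativity.
rng = np.random.default_rng(1)
legit = rng.normal([0.6, 0.6, 0.7, 0.5], 0.1, size=(50, 4))
spam = rng.normal([0.2, 0.1, 0.1, 0.9], 0.1, size=(50, 4))
X = np.vstack([legit, spam])
y = np.array([0] * 50 + [1] * 50)  # 0 = legitimate, 1 = spam

clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
pred = clf.predict([[0.65, 0.60, 0.70, 0.5],   # legitimate-looking host
                    [0.15, 0.10, 0.10, 0.9]])  # spam-looking host
```

    A shallow tree suffices here because the fabricated clusters are well separated; on real web-spam features, pruning (as REPTree does with reduced-error pruning) matters far more.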

  6. Informatics in radiology: radiology gamuts ontology: differential diagnosis for the Semantic Web.

    Science.gov (United States)

    Budovec, Joseph J; Lam, Cesar A; Kahn, Charles E

    2014-01-01

    The Semantic Web is an effort to add semantics, or "meaning," to empower automated searching and processing of Web-based information. The overarching goal of the Semantic Web is to enable users to more easily find, share, and combine information. Critical to this vision are knowledge models called ontologies, which define a set of concepts and formalize the relations between them. Ontologies have been developed to manage and exploit the large and rapidly growing volume of information in biomedical domains. In diagnostic radiology, lists of differential diagnoses of imaging observations, called gamuts, provide an important source of knowledge. The Radiology Gamuts Ontology (RGO) is a formal knowledge model of differential diagnoses in radiology that includes 1674 differential diagnoses, 19,017 terms, and 52,976 links between terms. Its knowledge is used to provide an interactive, freely available online reference of radiology gamuts (www.gamuts.net). A Web service allows its content to be discovered and consumed by other information systems. The RGO integrates radiologic knowledge with other biomedical ontologies as part of the Semantic Web. © RSNA, 2014.

  7. Radiology and the Internet: A systematic review of patient information resources

    Energy Technology Data Exchange (ETDEWEB)

    Smart, James M; Burling, David

    2001-11-01

    AIM: To determine whether the internet is a useful resource for patients seeking information on radiological procedures. MATERIALS AND METHODS: A systematic search of the world wide web was performed by means of four general search engines (AltaVista, Yahoo!, Infoseek and Excite). Twenty-eight suitable patient-directed websites on arteriography were identified for analysis. The value of this material was measured by establishing inclusion or exclusion of a number of factors relating to the procedure. Readability of the materials was evaluated using the Flesch reading ease score. RESULTS: Advice on preparation was included in 21 (75%) sites. Contraindications were found in 16 (57%) sites, risks in 6 (21%) and aftercare in 25 (89%). Result availability was discussed in 15 (54%) sites, with links to other radiology sites in 13 (46%). Visual aids were used in 6 (21%) sites and a contact address found in 27 (96%). Mean Flesch reading ease score was 57, with 46% of sites below the preferred minimum of 60. CONCLUSIONS: Few sites provide the range of information a patient needs before arriving for a procedure. In addition, the readability of the material on these sites is frequently set at a level incomprehensible to patients with lower levels of literacy. Smart, J.M. and Burling, D. (2001)
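
    The Flesch reading ease score used here is a closed-form formula: 206.835 − 1.015 × (words per sentence) − 84.6 × (syllables per word), with scores below 60 considered harder than plain English. A rough sketch (the vowel-group syllable counter is a crude approximation, and the two sample texts are invented):

```python
import re

def flesch_reading_ease(text):
    """Flesch reading ease: 206.835 - 1.015*(words/sentence) - 84.6*(syllables/word).

    Syllables are approximated by counting vowel groups, a crude heuristic
    that over- or under-counts words like 'safe' or 'idea'.
    """
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(max(1, len(re.findall(r"[aeiouy]+", w.lower())))
                    for w in words)
    n = len(words)
    return 206.835 - 1.015 * (n / sentences) - 84.6 * (syllables / n)

easy = flesch_reading_ease("The test is safe. You can go home the same day.")
hard = flesch_reading_ease(
    "Percutaneous transluminal angioplasty necessitates periprocedural "
    "anticoagulation monitoring.")
```

    Short sentences of short words score high; jargon-dense single sentences score far below the 60 threshold the study uses as its preferred minimum.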

  8. Radiology and the Internet: A systematic review of patient information resources

    International Nuclear Information System (INIS)

    Smart, James M.; Burling, David

    2001-01-01

    AIM: To determine whether the internet is a useful resource for patients seeking information on radiological procedures. MATERIALS AND METHODS: A systematic search of the world wide web was performed by means of four general search engines (AltaVista, Yahoo!, Infoseek and Excite). Twenty-eight suitable patient-directed websites on arteriography were identified for analysis. The value of this material was measured by establishing inclusion or exclusion of a number of factors relating to the procedure. Readability of the materials was evaluated using the Flesch reading ease score. RESULTS: Advice on preparation was included in 21 (75%) sites. Contraindications were found in 16 (57%) sites, risks in 6 (21%) and aftercare in 25 (89%). Result availability was discussed in 15 (54%) sites, with links to other radiology sites in 13 (46%). Visual aids were used in 6 (21%) sites and a contact address found in 27 (96%). Mean Flesch reading ease score was 57, with 46% of sites below the preferred minimum of 60. CONCLUSIONS: Few sites provide the range of information a patient needs before arriving for a procedure. In addition, the readability of the material on these sites is frequently set at a level incomprehensible to patients with lower levels of literacy. Smart, J.M. and Burling, D. (2001)

  9. LIVIVO - the Vertical Search Engine for Life Sciences.

    Science.gov (United States)

    Müller, Bernd; Poley, Christoph; Pössel, Jana; Hagelstein, Alexandra; Gübitz, Thomas

    2017-01-01

    The explosive growth of literature and data in the life sciences challenges researchers to keep track of current advancements in their disciplines. Novel approaches in the life sciences, like the One Health paradigm, require integrated methodologies in order to link and connect heterogeneous information from databases and literature resources. Current publications in the life sciences are increasingly characterized by the employment of trans-disciplinary methodologies comprising molecular and cell biology, genetics, and genomic, epigenomic, transcriptional and proteomic high-throughput technologies, with data from humans, plants, and animals. The literature search engine LIVIVO empowers retrieval functionality by incorporating various literature resources from medicine, health, environment, agriculture and nutrition. LIVIVO is developed in-house by ZB MED - Information Centre for Life Sciences. It provides a user-friendly and usability-tested search interface with a corpus of 55 million citations derived from 50 databases. Standardized application programming interfaces are available for data export and high-throughput retrieval. The search functions allow for semantic retrieval with filtering options based on life science entities. The service-oriented architecture of LIVIVO uses four different implementation layers to deliver search services. A Knowledge Environment is being developed by ZB MED to deal with the heterogeneity of data, as an integrative approach to model, store, and link semantic concepts within literature resources and databases. Future work will focus on the exploitation of life science ontologies and on the employment of NLP technologies in order to improve query expansion, filters in faceted search, and concept-based relevancy rankings in LIVIVO.

  10. Teaching Search Engine Marketing through the Google Ad Grants Program

    Science.gov (United States)

    Clarke, Theresa B.; Murphy, Jamie; Wetsch, Lyle R.; Boeck, Harold

    2018-01-01

    Instructors may find it difficult to stay abreast of the rapidly changing nature of search engine marketing (SEM) and to incorporate hands-on, practical classroom experiences. One solution is Google Ad Grants, a nonprofit edition of Google AdWords that provides up to $10,000 monthly in free advertising. A quasi-experiment revealed no differences…

  11. A Probabilistic, Facility-Centric Approach to Lightning Strike Location

    Science.gov (United States)

    Huddleston, Lisa L.; Roeder, William p.; Merceret, Francis J.

    2012-01-01

    A new probabilistic, facility-centric approach to lightning strike location has been developed. This process uses the bivariate Gaussian distribution of probability density provided by the current lightning location error ellipse for the most likely location of a lightning stroke and integrates it to determine the probability that the stroke is inside any specified radius of any location, even if that location is not centered on or even within the location error ellipse. This technique is adapted from a method of calculating the probability of debris collision with spacecraft. Such a technique is important in spaceport processing activities because it allows engineers to quantify the risk of induced-current damage to critical electronics due to nearby lightning strokes. This technique was tested extensively and is now in use by space launch organizations at Kennedy Space Center and Cape Canaveral Air Force Station. Future applications could include forensic meteorology.
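
    The integral of a bivariate Gaussian over a disk that is not centered on the error ellipse has no simple closed form; a Monte Carlo estimate illustrates the computation (the ellipse parameters, facility offset and radius below are invented, not from the study):

```python
import numpy as np

def strike_in_radius_prob(mean, cov, center, radius, n=200_000, seed=0):
    """Probability that a stroke, distributed as a bivariate Gaussian (the
    location error ellipse), fell within `radius` of an arbitrary facility
    `center` -- estimated by Monte Carlo rather than the study's integral."""
    rng = np.random.default_rng(seed)
    pts = rng.multivariate_normal(mean, cov, size=n)
    d2 = ((pts - np.asarray(center)) ** 2).sum(axis=1)
    return (d2 <= radius ** 2).mean()

# Reported stroke at the origin with a 200 m x 100 m (1-sigma) error ellipse;
# facility 150 m east; ask for the chance the stroke was within 100 m of it.
cov = np.diag([200.0 ** 2, 100.0 ** 2])
p = strike_in_radius_prob(mean=[0.0, 0.0], cov=cov,
                          center=[150.0, 0.0], radius=100.0)
```

    The facility-centric framing is exactly this: the disk is placed on the asset, not on the ellipse, so engineers can threshold p against an induced-current risk criterion.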

  12. A Taxonomic Search Engine: federating taxonomic databases using web services.

    Science.gov (United States)

    Page, Roderic D M

    2005-03-09

    The taxonomic name of an organism is a key link between different databases that store information on that organism. However, in the absence of a single, comprehensive database of organism names, individual databases lack an easy means of checking the correctness of a name. Furthermore, the same organism may have more than one name, and the same name may apply to more than one organism. The Taxonomic Search Engine (TSE) is a web application written in PHP that queries multiple taxonomic databases (ITIS, Index Fungorum, IPNI, NCBI, and uBIO) and summarises the results in a consistent format. It supports "drill-down" queries to retrieve a specific record. The TSE can optionally suggest alternative spellings the user can try. It also acts as a Life Science Identifier (LSID) authority for the source taxonomic databases, providing globally unique identifiers (and associated metadata) for each name. The Taxonomic Search Engine is available at http://darwin.zoology.gla.ac.uk/~rpage/portal/ and provides a simple demonstration of the potential of the federated approach to providing access to taxonomic names.

  13. RNA search engines empower the bacterial intranet.

    Science.gov (United States)

    Dendooven, Tom; Luisi, Ben F

    2017-08-15

    RNA acts not only as an information bearer in the biogenesis of proteins from genes, but also as a regulator that participates in the control of gene expression. In bacteria, small RNA molecules (sRNAs) play controlling roles in numerous processes and help to orchestrate complex regulatory networks. Such processes include cell growth and development, response to stress and metabolic change, transcription termination, cell-to-cell communication, and the launching of programmes for host invasion. All these processes require recognition of target messenger RNAs by the sRNAs. This review summarizes recent results that have provided insights into how bacterial sRNAs are recruited into effector ribonucleoprotein complexes that can seek out and act upon target transcripts. The results hint at how sRNAs and their protein partners act as pattern-matching search engines that efficaciously regulate gene expression, by performing with specificity and speed while avoiding off-target effects. The requirements for efficient searches of RNA patterns appear to be common to all domains of life. © 2017 The Author(s).

  14. Adverse Reactions Associated With Cannabis Consumption as Evident From Search Engine Queries.

    Science.gov (United States)

    Yom-Tov, Elad; Lev-Ran, Shaul

    2017-10-26

    Cannabis is one of the most widely used psychoactive substances worldwide, but adverse drug reactions (ADRs) associated with its use are difficult to study because of its prohibited status in many countries. Internet search engine queries have been used to investigate ADRs in pharmaceutical drugs. In this proof-of-concept study, we tested whether these queries can be used to detect the adverse reactions of cannabis use. We analyzed anonymized queries from US-based users of Bing, a widely used search engine, made over a period of 6 months and compared the results with the prevalence of cannabis use as reported in the US National Survey on Drug Use in the Household (NSDUH) and with ADRs reported in the Food and Drug Administration's Adverse Drug Reporting System. Predicted prevalence of cannabis use was estimated from the fraction of people making queries about cannabis, marijuana, and 121 additional synonyms. Predicted ADRs were estimated from queries containing layperson descriptions of 195 ICD-10 symptoms. Our results indicated that the predicted prevalence of cannabis use at the US census regional level reaches an R² of .71 with NSDUH data. Queries for ADRs made by people who also searched for cannabis reveal many of the known adverse effects of cannabis (eg, cough and psychotic symptoms), as well as plausible unknown reactions (eg, pyrexia). These results indicate that search engine queries can serve as an important tool for the study of adverse reactions of illicit drugs, which are difficult to study in other settings. ©Elad Yom-Tov, Shaul Lev-Ran. Originally published in JMIR Public Health and Surveillance (http://publichealth.jmir.org), 26.10.2017.
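    The study's headline number is an R² between query-based prevalence estimates and survey-measured prevalence. As a toy illustration (the regional figures below are invented, not NSDUH data), that fit reduces to a squared Pearson correlation:

```python
# Invented toy data per region: fraction of search users issuing
# cannabis-related queries (%) vs. survey-reported prevalence of use (%).
query_fraction = [0.8, 1.2, 1.5, 2.0]
survey_prevalence = [7.5, 11.0, 14.2, 19.0]

def r_squared(x, y):
    """Squared Pearson correlation between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy * sxy / (sxx * syy)

r2 = r_squared(query_fraction, survey_prevalence)
print(round(r2, 3))  # near 1.0 for this nearly linear toy data
```

    An R² of .71, as reported in the abstract, would mean roughly 71% of the regional variance in surveyed prevalence is explained by the query-fraction predictor.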

  15. Surfing for suicide methods and help: content analysis of websites retrieved with search engines in Austria and the United States.

    Science.gov (United States)

    Till, Benedikt; Niederkrotenthaler, Thomas

    2014-08-01

    The Internet provides a variety of resources for individuals searching for suicide-related information. Structured content-analytic approaches to assess intercultural differences in web contents retrieved with method-related and help-related searches are scarce. We used the 2 most popular search engines (Google and Yahoo/Bing) to retrieve US-American and Austrian search results for the term suicide, method-related search terms (e.g., suicide methods, how to kill yourself, painless suicide, how to hang yourself), and help-related terms (e.g., suicidal thoughts, suicide help) on February 11, 2013. In total, 396 websites retrieved with US search engines and 335 websites from Austrian searches were analyzed with content analysis on the basis of current media guidelines for suicide reporting. We assessed the quality of websites and compared findings across search terms and between the United States and Austria. In both countries, protective outweighed harmful website characteristics by approximately 2:1. Websites retrieved with method-related search terms (e.g., how to hang yourself) contained more harmful characteristics in both countries, and websites retrieved with US search engines generally had more protective characteristics than those retrieved with Austrian search engines. Resources with harmful characteristics were better ranked than those with protective characteristics (United States: P < .01, Austria: P < .05). The quality of suicide-related websites obtained depends on the search terms used. Preventive efforts to improve the ranking of preventive web content, particularly regarding method-related search terms, seem necessary. © Copyright 2014 Physicians Postgraduate Press, Inc.

  16. Firewall Mechanism in a User Centric Smart Card Ownership Model

    OpenAIRE

    Akram, Raja Naeem; Markantonakis, Konstantinos; Mayes, Keith

    2010-01-01

    Multi-application smart card technology facilitates applications to securely share their data and functionality. The security enforcement and assurance in application sharing is provided by the smart card firewall. The firewall mechanism is well defined and studied in the Issuer Centric Smart Card Ownership Model (ICOM), in which a smart card is under total control of its issuer. However, it is not analysed in the User Centric Smart Card Ownership Model (UCOM) that del...

  17. Eczema, Atopic Dermatitis, or Atopic Eczema: Analysis of Global Search Engine Trends.

    Science.gov (United States)

    Xu, Shuai; Thyssen, Jacob P; Paller, Amy S; Silverberg, Jonathan I

    The lack of standardized nomenclature for atopic dermatitis (AD) creates challenges for scientific communication, patient education, and advocacy. We sought to determine the relative popularity of the terms eczema, AD, and atopic eczema (AE) using global search engine volumes. A retrospective analysis of average monthly search volumes from 2014 to 2016 of Google, Bing/Yahoo, and Baidu was performed for eczema, AD, and AE in English and 37 other languages. Google Trends was used to determine the relative search popularity of each term from 2006 to 2016 in English and the top foreign languages, German, Turkish, Russian, and Japanese. Overall, eczema accounted for 1.5 million monthly searches (84%) compared with 247 000 searches for AD (14%) and 44 000 searches for AE (2%). For English language, eczema accounted for 93% of searches compared with 6% for AD and 1% for AE. Search popularity for eczema increased from 2006 to 2016 but remained stable for AD and AE. Given the ambiguity of the term eczema, we recommend the universal use of the next most popular term, AD.

  18. Internet Search Engines: Copyright's "Fair Use" in Reproduction and Public Display Rights

    National Research Council Canada - National Science Library

    Jeweler, Robin

    2007-01-01

    .... If so, is the activity a "fair use" protected by the Copyright Act? These issues frequently implicate search engines, which scan the web to allow users to find content for uses, both legitimate and illegitimate...

  19. E-learning and education in radiology

    International Nuclear Information System (INIS)

    Pinto, Antonio; Brunese, Luca; Pinto, Fabio; Acampora, Ciro; Romano, Luigia

    2011-01-01

    Purpose: To evaluate current applications of e-learning in radiology. Material and methods: A Medline search was performed using PubMed (National Library of Medicine, Bethesda, MD) for publications discussing the applications of e-learning in radiology. The search strategy employed a single combination of the following terms: (1) e-learning, and (2) education and (3) radiology. This review was limited to human studies and to English-language literature. We reviewed all the titles and subsequently the abstracts of the 29 articles that appeared pertinent. Additional articles were identified by reviewing the reference lists of relevant papers. Finally, the full text of 38 selected articles was reviewed. Results: Literature data show that with the constant development of technology and global spread of computer networks, in particular of the Internet, the integration of multimedia and interactivity introduced into electronic publishing has allowed the creation of multimedia applications that provide valuable support for medical teaching and continuing medical education, specifically for radiology. Such technologies are valuable tools for collaboration, interactivity, simulation, and self-testing. However, not everything on the World Wide Web is useful, accurate, or beneficial: the quality and veracity of medical information on the World Wide Web is variable and much time can be wasted as many websites do not meet basic publication standards. Conclusion: E-learning will become an important source of education in radiology.

  20. E-learning and education in radiology.

    Science.gov (United States)

    Pinto, Antonio; Brunese, Luca; Pinto, Fabio; Acampora, Ciro; Romano, Luigia

    2011-06-01

    To evaluate current applications of e-learning in radiology. A Medline search was performed using PubMed (National Library of Medicine, Bethesda, MD) for publications discussing the applications of e-learning in radiology. The search strategy employed a single combination of the following terms: (1) e-learning, and (2) education and (3) radiology. This review was limited to human studies and to English-language literature. We reviewed all the titles and subsequently the abstracts of the 29 articles that appeared pertinent. Additional articles were identified by reviewing the reference lists of relevant papers. Finally, the full text of 38 selected articles was reviewed. Literature data show that with the constant development of technology and global spread of computer networks, in particular of the Internet, the integration of multimedia and interactivity introduced into electronic publishing has allowed the creation of multimedia applications that provide valuable support for medical teaching and continuing medical education, specifically for radiology. Such technologies are valuable tools for collaboration, interactivity, simulation, and self-testing. However, not everything on the World Wide Web is useful, accurate, or beneficial: the quality and veracity of medical information on the World Wide Web is variable and much time can be wasted as many websites do not meet basic publication standards. E-learning will become an important source of education in radiology. Copyright © 2010 Elsevier Ireland Ltd. All rights reserved.

  1. E-learning and education in radiology

    Energy Technology Data Exchange (ETDEWEB)

    Pinto, Antonio, E-mail: antopin1968@libero.it [Department of Diagnostic Imaging, A. Cardarelli Hospital, I-80131 Naples (Italy); Brunese, Luca, E-mail: lucabrunese@libero.it [Department of Health Science, Faculty of Medicine and Surgery, University of Molise, I-86100 Campobasso (Italy); Pinto, Fabio, E-mail: fpinto1966@libero.it [Department of Diagnostic Imaging, A. Cardarelli Hospital, I-80131 Naples (Italy); Acampora, Ciro, E-mail: itrasente@libero.it [Department of Diagnostic Imaging, A. Cardarelli Hospital, I-80131 Naples (Italy); Romano, Luigia, E-mail: luigia.romano@fastwebnet.it [Department of Diagnostic Imaging, A. Cardarelli Hospital, I-80131 Naples (Italy)

    2011-06-15

    Purpose: To evaluate current applications of e-learning in radiology. Material and methods: A Medline search was performed using PubMed (National Library of Medicine, Bethesda, MD) for publications discussing the applications of e-learning in radiology. The search strategy employed a single combination of the following terms: (1) e-learning, and (2) education and (3) radiology. This review was limited to human studies and to English-language literature. We reviewed all the titles and subsequently the abstracts of the 29 articles that appeared pertinent. Additional articles were identified by reviewing the reference lists of relevant papers. Finally, the full text of 38 selected articles was reviewed. Results: Literature data show that with the constant development of technology and global spread of computer networks, in particular of the Internet, the integration of multimedia and interactivity introduced into electronic publishing has allowed the creation of multimedia applications that provide valuable support for medical teaching and continuing medical education, specifically for radiology. Such technologies are valuable tools for collaboration, interactivity, simulation, and self-testing. However, not everything on the World Wide Web is useful, accurate, or beneficial: the quality and veracity of medical information on the World Wide Web is variable and much time can be wasted as many websites do not meet basic publication standards. Conclusion: E-learning will become an important source of education in radiology.

  2. User-Centric Multi-Criteria Information Retrieval

    Science.gov (United States)

    Wolfe, Shawn R.; Zhang, Yi

    2009-01-01

    Information retrieval models usually represent content only, and not other considerations, such as authority, cost, and recency. How could multiple criteria be utilized in information retrieval, and how would they affect the results? In our experiments, using multiple user-centric criteria always produced better results than a single criterion.

  3. Origin of Disagreements in Tandem Mass Spectra Interpretation by Search Engines.

    Science.gov (United States)

    Tessier, Dominique; Lollier, Virginie; Larré, Colette; Rogniaux, Hélène

    2016-10-07

    Several proteomic database search engines that interpret LC-MS/MS data do not identify the same set of peptides. These disagreements occur even when the scores of the peptide-to-spectrum matches suggest good confidence in the interpretation. Our study shows that these disagreements observed for the interpretations of a given spectrum are almost exclusively due to the variation of what we call the "peptide space", i.e., the set of peptides that are actually compared to the experimental spectra. We discuss the potential difficulties of precisely defining the "peptide space." Indeed, although several parameters that are generally reported in publications can easily be set to the same values, many additional parameters-with much less straightforward user access-might impact the "peptide space" used by each program. Moreover, in a configuration where each search engine identifies the same candidates for each spectrum, the inference of the proteins may remain quite different depending on the false discovery rate selected.

  4. Patient-Centered Radiology Reporting: Using Online Crowdsourcing to Assess the Effectiveness of a Web-Based Interactive Radiology Report.

    Science.gov (United States)

    Short, Ryan G; Middleton, Dana; Befera, Nicholas T; Gondalia, Raj; Tailor, Tina D

    2017-11-01

    The aim of this study was to evaluate the effectiveness of a patient-centered web-based interactive mammography report. A survey was distributed on Amazon Mechanical Turk, an online crowdsourcing platform. One hundred ninety-three US women ≥18 years of age were surveyed and then randomized to one of three simulated BI-RADS® 0 report formats: standard report, Mammography Quality Standards Act-modeled patient letter, or web-based interactive report. Survey questions assessed participants' report comprehension, satisfaction with and perception of the interpreting radiologist, and experience with the presented report. Two-tailed t tests and χ² tests were used to evaluate differences among groups. Participants in the interactive web-based group spent more than double the time viewing the report than the standard report group (160.0 versus 64.2 seconds, P < .001). Report comprehension scores were significantly higher for the interactive web-based and patient letter groups than the standard report group (P < .05). Scores of satisfaction with the interpreting radiologist were significantly higher for the web-based interactive report and patient letter groups than the standard report group (P < .01). There were no significant differences between the patient letter and web-based interactive report groups. Radiology report format likely influences communication effectiveness. For result communication to a non-medical patient audience, patient-centric report formats, such as a Mammography Quality Standards Act-modeled patient letter or web-based interactive report, may offer advantages over the standard radiology report. Future work is needed to determine if these findings are reproducible in patient care settings and to determine how best to optimize radiology result communication to patients. Copyright © 2017 American College of Radiology. Published by Elsevier Inc. All rights reserved.

  5. The radiological technologist

    International Nuclear Information System (INIS)

    Bundy, A.L.

    1988-01-01

    Radiologists rely upon the talents of the technologists with whom they work. Indeed, a good technologist will only enhance the radiologist's performance. Radiological technologists no longer solely take radiographs, but are involved in many more detailed areas of imaging, such as computed tomography, magnetic resonance imaging, nuclear radiology, ultrasound, angiography, and special procedures. They are also required to make decisions that affect the radiological examination. Besides the degree in radiological technology (RT), advanced degrees in nuclear medicine technology (NMT) and diagnostic medical sonography (RDMS) are attainable. The liability of the technologist is not the same as that of the radiologist, but the liability is potentially real and governed by a subdivision of jurisprudence known as agency law. Since plaintiffs and attorneys are constantly searching for new frontiers of medical liability, it is wise for the radiologist and technologist to be aware of the legalities governing their working relationship and to behave accordingly. The legal principles that apply to this working relationship are discussed in this chapter, followed by a presentation of some relevant and interesting cases that have been litigated.

  6. Cytogenetic follow-up of patients exposed, 7.5 years after radiological accident in Goiania, Brazil

    International Nuclear Information System (INIS)

    Ramalho, Adriana T.

    1996-01-01

    Ten persons exposed to ¹³⁷Cs during the radiological accident in Goiania (Brazil) were reexamined for the frequency of unstable chromosomal aberrations (dicentric chromosomes, centric rings and acentric fragments) 7.5 years after the first examination. It was found that the frequencies fell to about 5% of the initial values for those individuals who had been exposed to moderate to high doses. For the subjects exposed to low doses, of the order of 0.2 Gy or less, the observed frequencies of chromosomal aberrations fell much more slowly. (author)

  7. Teaching customer-centric operations management - evidence from an experiential learning-oriented mass customisation class

    Science.gov (United States)

    Medini, Khaled

    2018-01-01

    The increase of individualised customer demands and tough competition in the manufacturing sector gave rise to more customer-centric operations management such as products and services (mass) customisation. Mass customisation (MC), which inherits the 'economy of scale' from mass production (MP), aims to meet specific customer demands with near MP efficiency. Such an overarching concept has multiple impacts on operations management. This requires highly qualified and multi-skilled engineers who are well prepared for managing MC. Therefore, this concept should be properly addressed by engineering education curricula, which need to keep up with emerging business trends. This paper introduces a novel course about MC and variety in operations management which draws on several Experiential Learning (EL) practices, consistent with the principles of active learning. The paper aims to analyse to what extent EL can improve the efficiency of the teaching methods and the retention rate in the context of operations management. The proposed course is given to engineering students whose perceptions are collected using semi-structured questionnaires and analysed quantitatively and qualitatively. The paper highlights the relevance (i) of teaching MC, and (ii) of active learning in engineering education, through the specific application in the domain of MC.

  8. Obstacles of Search Engines Used by Graduate Students at The Faculty of Education, The Islamic University in Gaza

    Directory of Open Access Journals (Sweden)

    Fayez Kamal Shaladan

    2018-03-01

    Full Text Available This study aimed to identify obstacles of search engines used by graduate students at the Faculty of Education, the Islamic University in Gaza, and ways to overcome them. The researchers utilized the analytical descriptive approach to achieve the goal of the study. They used the interview tool and designed a questionnaire to collect data for the study. The sample of the study was 164 male and female postgraduate students enrolled in the College of Education. The study results were as follows: The degree of obstacles to the use of search engines among postgraduate students at the Faculty of Education at the Islamic University in Gaza was high, with a percentage of 71.05%. There were no statistically significant differences between the averages of the study sample for the obstacles of the use of the search engines among the postgraduate students in the Faculty of Education, the Islamic University, due to the gender and academic variables or the cumulative average. An exception to this was the third theme, personal constraints, which showed differences in favor of students whose cumulative averages were less than 85%. The study concluded with these recommendations: the university should subscribe to various search engines and revise admission terms and conditions for postgraduate studies whereby English and computer courses can be included. Keywords: Search engines, Students, Postgraduate studies, Islamic University.

  9. SpEnD: Linked Data SPARQL Endpoints Discovery Using Search Engines

    Science.gov (United States)

    Yumusak, Semih; Dogdu, Erdogan; Kodaz, Halife; Kamilaris, Andreas; Vandenbussche, Pierre-Yves

    In this study, a novel metacrawling method is proposed for discovering and monitoring linked data sources on the Web. We implemented the method in a prototype system, named SPARQL Endpoints Discovery (SpEnD). SpEnD starts with a "search keyword" discovery process for finding relevant keywords for the linked data domain and specifically SPARQL endpoints. Then, these search keywords are utilized to find linked data sources via popular search engines (Google, Bing, Yahoo, Yandex). By using this method, most of the currently listed SPARQL endpoints in existing endpoint repositories, as well as a significant number of new SPARQL endpoints, have been discovered. Finally, we have developed a new SPARQL endpoint crawler (SpEC) for crawling and link analysis.
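    The discovery step the abstract outlines — collect candidate endpoint URLs from several engines, normalise them so duplicates collapse, then separate already-catalogued endpoints from new ones — reduces to set operations once results are fetched. The URLs, engine results, and repository list below are made up; real metacrawling would call the engines' search interfaces.

```python
from urllib.parse import urlsplit

# Hypothetical search results per engine (SpEnD queries Google, Bing,
# Yahoo, and Yandex with discovered keywords such as "sparql endpoint").
results = {
    "google": ["http://dbpedia.org/sparql", "http://example.org/sparql/"],
    "bing":   ["HTTP://DBpedia.org/sparql", "http://data.example.net/sparql"],
}

# Hypothetical existing endpoint repository to deduplicate against.
known = {"http://dbpedia.org/sparql"}

def normalise(url):
    """Lowercase scheme/host and drop a trailing slash so duplicates collapse."""
    parts = urlsplit(url)
    return f"{parts.scheme.lower()}://{parts.netloc.lower()}{parts.path.rstrip('/')}"

candidates = {normalise(u) for urls in results.values() for u in urls}
new_endpoints = sorted(candidates - {normalise(u) for u in known})
print(new_endpoints)
# ['http://data.example.net/sparql', 'http://example.org/sparql']
```

    A production crawler would additionally probe each candidate with a trivial SPARQL query to confirm it is a live endpoint before adding it to the repository.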

  10. What Major Search Engines Like Google, Yahoo and Bing Need to Know about Teachers in the UK?

    Science.gov (United States)

    Seyedarabi, Faezeh

    2014-01-01

    This article briefly outlines the current major search engines' approach to teachers' web searching. The aim of this article is to make Web searching easier for teachers when searching for relevant online teaching materials, in general, and UK teacher practitioners at primary, secondary and post-compulsory levels, in particular. Therefore, major…

  11. A cognitive evaluation of four online search engines for answering definitional questions posed by physicians.

    Science.gov (United States)

    Yu, Hong; Kaufman, David

    2007-01-01

    The Internet is having a profound impact on physicians' medical decision making. One recent survey of 277 physicians showed that 72% of physicians regularly used the Internet to research medical information and 51% admitted that information from web sites influenced their clinical decisions. This paper describes the first cognitive evaluation of four state-of-the-art Internet search engines: Google (i.e., Google and Scholar.Google), MedQA, Onelook, and PubMed for answering definitional questions (i.e., questions with the format of "What is X?") posed by physicians. Onelook is a portal for online definitions, and MedQA is a question answering system that automatically generates short texts to answer specific biomedical questions. Our evaluation criteria include quality of answer, ease of use, time spent, and number of actions taken. Our results show that MedQA outperforms Onelook and PubMed in most of the criteria, and that MedQA surpasses Google in time spent and number of actions, two important efficiency criteria. Our results show that Google is the best system for quality of answer and ease of use. We conclude that Google is an effective search engine for medical definitions, and that MedQA exceeds the other search engines in that it provides users direct answers to their questions; while the users of the other search engines have to visit several sites before finding all of the pertinent information.

  12. Additive manufacturing for consumer-centric business models

    DEFF Research Database (Denmark)

    Bogers, Marcel; Hadar, Ronen; Bilberg, Arne

    2016-01-01

    Digital fabrication—including additive manufacturing (AM), rapid prototyping and 3D printing—has the potential to revolutionize the way in which products are produced and delivered to the customer. Therefore, it challenges companies to reinvent their business model—describing the logic of creating and capturing value. In this paper, we explore the implications that AM technologies have for manufacturing systems in the new business models that they enable. In particular, we consider how a consumer goods manufacturer can organize the operations of a more open business model when moving from a manufacturer-centric to a consumer-centric value logic. A major shift includes a move from centralized to decentralized supply chains, where consumer goods manufacturers can implement a “hybrid” approach with a focus on localization and accessibility or develop a fully personalized model where the consumer effectively takes over...

  13. Analysis of nuclear and radiological events. Textbook for lecture in graduate school of engineering in the University of Tokyo

    International Nuclear Information System (INIS)

    Watanabe, Norio

    2007-02-01

    The Japan Atomic Energy Agency is carrying out the cooperative activity by providing specialized educational and training staff and making our facilities available for the graduate school of engineering in The University of Tokyo as part of developing human resources in nuclear technology. This report is prepared as a textbook for the lecture in the graduate school of engineering in The University of Tokyo and provides the outlines of activities on the analysis of nuclear and radiological events and analysis methods as well as the summaries of major incidents and accidents that occurred. (author)

  14. Whiplash Syndrome Reloaded: Digital Echoes of Whiplash Syndrome in the European Internet Search Engine Context

    Science.gov (United States)

    2017-01-01

    Background In many Western countries, after a motor vehicle collision, those involved seek health care for the assessment of injuries and for insurance documentation purposes. In contrast, in many less wealthy countries, there may be limited access to care and no insurance or compensation system. Objective The purpose of this infodemiology study was to investigate the global pattern of evolving Internet usage in countries with and without insurance and the corresponding compensation systems for whiplash injury. Methods We used the Internet search engine analytics via Google Trends to study the health information-seeking behavior concerning whiplash injury at national population levels in Europe. Results We found that the search for “whiplash” is strikingly and consistently often associated with the search for “compensation” in countries or cultures with a tort system. Frequent or traumatic painful injuries; diseases or disorders such as arthritis, headache, radius, and hip fracture; depressive disorders; and fibromyalgia were not associated similarly with searches on “compensation.” Conclusions In this study, we present evidence from the evolving viewpoint of naturalistic Internet search engine analytics that the expectations for receiving compensation may influence Internet search behavior in relation to whiplash injury. PMID:28347974

  15. CrossTalk. The Journal of Defense Software Engineering. Volume 23, Number 6, Nov/Dec 2010

    Science.gov (United States)

    2010-11-01

    such standards are International Standards Organization/International Electrotechnical Commission (ISO/IEC) standards 15288 for system engineering...and 12207 for software development. 13. Office of the DoD CIO. White Paper Phase I: A Competency Framework for the DoD Architect. Washington...processes in later phases. DoD-Centric Use Case: Current support for net-centric operations is based on isolated deployments of relevant services in

  16. Refining comparative proteomics by spectral counting to account for shared peptides and multiple search engines.

    Science.gov (United States)

    Chen, Yao-Yi; Dasari, Surendra; Ma, Ze-Qiang; Vega-Montoto, Lorenzo J; Li, Ming; Tabb, David L

    2012-09-01

    Spectral counting has become a widely used approach for measuring and comparing protein abundance in label-free shotgun proteomics. However, when analyzing complex samples, the ambiguity of matching between peptides and proteins greatly affects the assessment of peptide and protein inventories, differentiation, and quantification. Meanwhile, the configuration of database searching algorithms that assign peptides to MS/MS spectra may produce different results in comparative proteomic analysis. Here, we present three strategies to improve comparative proteomics through spectral counting. We show that comparing spectral counts for peptide groups rather than for protein groups forestalls problems introduced by shared peptides. We demonstrate the advantage and flexibility of this new method in two datasets. We present four models to combine four popular search engines that lead to significant gains in spectral counting differentiation. Among these models, we demonstrate a powerful vote counting model that scales well for multiple search engines. We also show that semi-tryptic searching outperforms tryptic searching for comparative proteomics. Overall, these techniques considerably improve protein differentiation on the basis of spectral count tables.
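    One of the combination strategies the abstract names — vote counting across search engines — can be sketched as: a peptide contributes to the spectral count table only if enough engines agree on the identification. The engine names, peptides, and counts below are invented for illustration, not the paper's data.

```python
from collections import Counter

# Hypothetical spectral counts per search engine: peptide -> count.
engine_counts = {
    "engineA": {"PEPTIDEK": 10, "SHAREDR": 4, "NOISEK": 1},
    "engineB": {"PEPTIDEK": 12, "SHAREDR": 3},
    "engineC": {"PEPTIDEK": 9,  "NOISEK": 2},
}

def vote_counting(engine_counts, min_votes=2):
    """Keep peptides identified by >= min_votes engines; sum their counts.
    Adding a fourth or fifth engine changes nothing structurally, which is
    the scaling property the vote model is credited with."""
    votes = Counter(p for counts in engine_counts.values() for p in counts)
    return {p: sum(counts.get(p, 0) for counts in engine_counts.values())
            for p, n in votes.items() if n >= min_votes}

print(vote_counting(engine_counts))               # {'PEPTIDEK': 31, 'SHAREDR': 7, 'NOISEK': 3}
print(vote_counting(engine_counts, min_votes=3))  # {'PEPTIDEK': 31}
```

    Counting at the peptide level, as here, also sidesteps the shared-peptide ambiguity the abstract raises for protein-level tables.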

  17. Featureous: A Tool for Feature-Centric Analysis of Java Software

    DEFF Research Database (Denmark)

    Olszak, Andrzej; Jørgensen, Bo Nørregaard

    2010-01-01

    Feature-centric comprehension of source code is necessary for incorporating user-requested modifications during software evolution and maintenance. However, such comprehension is difficult to achieve in the case of large object-oriented programs due to the size, complexity, and implicit character of mappings between features and source code. To support programmers in overcoming these difficulties, we present a feature-centric analysis tool, Featureous. Our tool extends the NetBeans IDE with mechanisms for efficient location of feature implementations in legacy source code, and an extensive analysis...

  18. Distribution of scholarly publications among academic radiology departments.

    Science.gov (United States)

    Morelli, John N; Bokhari, Danial

    2013-03-01

    The aim of this study was to determine whether the distribution of publications among academic radiology departments in the United States is Gaussian (ie, the bell curve) or Paretian. The search affiliation feature of the PubMed database was used to search for publications in 3 general radiology journals with high Impact Factors, originating at radiology departments in the United States affiliated with residency training programs. The distribution of the number of publications among departments was examined using χ² test statistics to determine whether it followed a Pareto or a Gaussian distribution more closely. A total of 14,219 publications contributed since 1987 by faculty members in 163 departments with residency programs were available for assessment. The data acquired were more consistent with a Pareto (χ² = 80.4) than a Gaussian (χ² = 659.5) distribution. The majority (>50%) of major radiology publications from academic departments with residency programs followed a Pareto rather than a normal distribution. The mean number of publications for departments was 79.9 ± 146 (range, 0-943). The median number of publications was 16.5. Copyright © 2013 American College of Radiology. Published by Elsevier Inc. All rights reserved.
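    The method in this abstract is a goodness-of-fit comparison: bin the per-department publication counts, compute expected bin counts under each candidate distribution, and prefer the model with the smaller χ² statistic. The observed and expected counts below are invented for illustration, not the paper's data, but they mimic its conclusion that a heavy-tailed model fits better.

```python
def chi_square(observed, expected):
    """Pearson's chi-square statistic: sum over bins of (O - E)^2 / E."""
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# Hypothetical bins of departments by publication count (e.g. 0-9, 10-49, ...).
observed        = [120, 25, 10, 5, 3]
expected_pareto = [118, 27, 11, 4, 3]   # heavy-tailed model tracks the skew
expected_gauss  = [60, 55, 30, 12, 6]   # bell curve expects a fat middle

chi2_pareto = chi_square(observed, expected_pareto)
chi2_gauss = chi_square(observed, expected_gauss)
# The smaller statistic indicates the better-fitting model, as with the
# paper's reported 80.4 (Pareto) vs. 659.5 (Gaussian).
print(chi2_pareto < chi2_gauss)  # True
```

    The mean far exceeding the median in the abstract (79.9 vs. 16.5) is itself a signature of exactly this kind of right-skewed, Pareto-like distribution.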

  19. Development of mobile radiological assessment laboratory

    International Nuclear Information System (INIS)

    Pujari, R.N.; Saindane, Shashank S.; Jain, Amit; Parmar, Jayesh; Narsaiah, M.V.R.; Pote, M.B.; Murali, S.; Chaudhury, Probal

    2018-01-01

During any emergency situation, real-time radiation measurements and fast analysis of the measured radiological data are of crucial importance. The newly developed mobile vehicle-based laboratory, known as the 'Radiological Assessment Laboratory' (RAL), can be used for real-time measurements in different radiation emergency scenarios, such as the release of radioactive materials from a radiological/nuclear incident, the search for an orphan source, or radioisotope transportation. RAL is equipped with several highly sensitive detectors/systems, such as NaI(Tl) gamma spectrometers, large-size plastic scintillators, and an air sampler, along with GPS and data-transfer capability through a GSM modem.

  20. Changes in users' mental models of Web search engines after ten ...

    African Journals Online (AJOL)

Ward's cluster analyses, including the pseudo-T² statistic, were used to determine the mental model clusters for the seventeen salient design features of Web search engines at each time point. The cubic clustering criterion (CCC) and the dendrogram were computed for each sample to help determine the number ...

  1. A Taxonomic Search Engine: Federating taxonomic databases using web services

    Directory of Open Access Journals (Sweden)

    Page Roderic DM

    2005-03-01

Full Text Available Abstract Background The taxonomic name of an organism is a key link between different databases that store information on that organism. However, in the absence of a single, comprehensive database of organism names, individual databases lack an easy means of checking the correctness of a name. Furthermore, the same organism may have more than one name, and the same name may apply to more than one organism. Results The Taxonomic Search Engine (TSE) is a web application written in PHP that queries multiple taxonomic databases (ITIS, Index Fungorum, IPNI, NCBI, and uBIO) and summarises the results in a consistent format. It supports "drill-down" queries to retrieve a specific record. The TSE can optionally suggest alternative spellings the user can try. It also acts as a Life Science Identifier (LSID) authority for the source taxonomic databases, providing globally unique identifiers (and associated metadata) for each name. Conclusion The Taxonomic Search Engine is available at http://darwin.zoology.gla.ac.uk/~rpage/portal/ and provides a simple demonstration of the potential of the federated approach to providing access to taxonomic names.

  2. Implementation of procedures of radiological protection in the section of Radiology of the emergency Hospital of Porto Alegre-Brazil

    Energy Technology Data Exchange (ETDEWEB)

    Lorenzini, F.; Rizzati, M.R. [Emergency Hospital of Porto Alegre, HPS (Brazil)

    1998-12-31

The Emergency Hospital of Porto Alegre (HPS) is one of the main reference centers for the population in the attendance of medical emergencies and urgencies. The Section of Radiology, which reports on patients' clinical conditions based on radiological images, is the most in-demand section of the hospital (radiological exams are requested in 81.43% of medical cases to aid diagnosis) and strives for quality in health care. This work presents the radiological protection procedures implemented in accordance with the norms in force, together with the methods, means, and conditions required to satisfy radiation workers and both internal and external patients. (Author)

  3. Implementation of procedures of radiological protection in the section of Radiology of the emergency Hospital of Porto Alegre-Brazil

    International Nuclear Information System (INIS)

    Lorenzini, F.; Rizzati, M.R.

    1998-01-01

The Emergency Hospital of Porto Alegre (HPS) is one of the main reference centers for the population in the attendance of medical emergencies and urgencies. The Section of Radiology, which reports on patients' clinical conditions based on radiological images, is the most in-demand section of the hospital (radiological exams are requested in 81.43% of medical cases to aid diagnosis) and strives for quality in health care. This work presents the radiological protection procedures implemented in accordance with the norms in force, together with the methods, means, and conditions required to satisfy radiation workers and both internal and external patients. (Author)

  4. Being or Becoming: Toward an Open-System, Process-Centric Model of Personality.

    Science.gov (United States)

    Giordano, Peter J

    2015-12-01

    Mainstream personality psychology in the West neglects the investigation of intra-individual process and variation, because it favors a Being over a Becoming ontology. A Being ontology privileges a structural (e.g., traits or selves) conception of personality. Structure-centric models in turn suggest nomothetic research strategies and the investigation of individual and group differences. This article argues for an open-system, process-centric understanding of personality anchored in an ontology of Becoming. A classical Confucian model of personality is offered as an example of a process-centric approach for investigating and appreciating within-person personality process and variation. Both quantitative and qualitative idiographic strategies can be used as methods of scientific inquiry, particularly the exploration of the Confucian exemplar of psychological health and well-being.

  5. The impact of search engine selection and sorting criteria on vaccination beliefs and attitudes: two experiments manipulating Google output.

    Science.gov (United States)

    Allam, Ahmed; Schulz, Peter Johannes; Nakamoto, Kent

    2014-04-02

During the past 2 decades, the Internet has evolved to become a necessity in our daily lives. The selection and sorting algorithms of search engines exert tremendous influence over the global spread of information and other communication processes. This study demonstrates the influence of the selection and sorting/ranking criteria operating in search engines on users' knowledge, beliefs, and attitudes toward websites about vaccination. In particular, it compares the effects of search engines that deliver websites emphasizing the pro side of vaccination with those focusing on the con side, and with normal Google as a control. We conducted 2 online experiments using manipulated search engines. A pilot study sought to verify the existence of dangerous health literacy gaps in connection with searching and using health information on the Internet by exploring the effect of 2 manipulated search engines that yielded either pro- or anti-vaccination sites only, with a group receiving normal Google as control. A pre-post test design was used; participants were American marketing students enrolled in a study-abroad program in Lugano, Switzerland. The second experiment manipulated the search engine by applying different ratios of con versus pro vaccination webpages displayed in the search results. Participants were recruited from Amazon's Mechanical Turk platform, where the study was published as a human intelligence task (HIT). Both experiments showed knowledge to be highest in the group offered only pro-vaccination sites (Z=-2.088, P=.03; Kruskal-Wallis H test [H₅]=11.30, P=.04). That group also acknowledged the importance/benefits (Z=-2.326, P=.02; H₅=11.34, P=.04) and effectiveness (Z=-2.230, P=.03) of vaccination more, whereas groups offered anti-vaccination sites only showed increased concern about effects (Z=-2.582, P=.01; H₅=16.88, P=.005) and harmful health outcomes (Z=-2.200, P=.02) of vaccination. Normal Google users perceived information quality to be positive despite a

  6. Which Search Engine Is the Most Used One among University Students?

    Science.gov (United States)

    Cavus, Nadire; Alpan, Kezban

    2010-01-01

The importance of information is increasing in the information age we live in, with the internet becoming the major information resource for people and the number of documents growing rapidly. This situation makes finding information on the internet without web search engines impossible. The aim of the study is to reveal the most widely used…

  7. Use of search engines for academic activities by the academic staff ...

    African Journals Online (AJOL)

    The research was designed to investigate the Internet Search Engine use behaviour and experiences of lecturers at the University of Jos, using the academics of the Faculty of Natural Sciences in the University as a focal population. The entire population of 148 academic staff members in the Faculty was adopted for the ...

  8. Towards a service centric contextualized vehicular cloud

    NARCIS (Netherlands)

    Hu, Xiping; Wang, Lei; Sheng, Zhengguo; TalebiFard, Peyman; Zhou, Li; Liu, Jia; Leung, Victor C.M.

    2014-01-01

    This paper proposes a service-centric contextualized vehicular (SCCV) cloud platform to facilitate the deployment and delivery of cloud-based mobile applications over vehicular networks. SCCV cloud employs a multi-tier architecture that consists of the network, mobile device, and cloud tiers. Based

  9. Inefficiency and Bias of Search Engines in Retrieving References Containing Scientific Names of Fossil Amphibians

    Science.gov (United States)

    Brown, Lauren E.; Dubois, Alain; Shepard, Donald B.

    2008-01-01

    Retrieval efficiencies of paper-based references in journals and other serials containing 10 scientific names of fossil amphibians were determined for seven major search engines. Retrievals were compared to the number of references obtained covering the period 1895-2006 by a Comprehensive Search. The latter was primarily a traditional…

  10. Seasonal trends in tinnitus symptomatology: evidence from Internet search engine query data.

    Science.gov (United States)

    Plante, David T; Ingram, David G

    2015-10-01

The primary aim of this study was to test the hypothesis that the symptom of tinnitus demonstrates a seasonal pattern, worsening in the winter relative to the summer, using Internet search engine query data. Normalized search volume for the term 'tinnitus' from January 2004 through December 2013 was retrieved from Google Trends. Seasonal effects were evaluated using cosinor regression models. The primary countries of interest were the United States and Australia; secondary exploratory analyses were also performed using data from Germany, the United Kingdom, Canada, Sweden, and Switzerland. Significant seasonal effects for 'tinnitus' search queries were found in both the United States and Australia, with increased search volume in the winter relative to the summer. Our findings indicate that there are significant seasonal trends in Internet search queries for tinnitus, with a zenith in the winter months. Further research is indicated to determine the biological mechanisms underlying these findings, as they may provide insights into the pathophysiology of this common and debilitating medical symptom.
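Cosinor regression, the method named in this record, fits a cosine of known period to time-series data by ordinary least squares. A minimal sketch on synthetic monthly data (the real Google Trends series is not reproduced; the mean, amplitude, and winter peak below are built into the toy data):

```python
import math
import random

random.seed(0)
OMEGA = 2 * math.pi / 12  # one cycle per 12 months

# Hypothetical normalized monthly search volumes, Jan 2004 to Dec 2013
# (t in months, t = 0 is a January), with a built-in winter peak.
t = list(range(120))
y = [50 + 10 * math.cos(OMEGA * ti) + random.gauss(0, 2) for ti in t]

# Cosinor model: y = M + A*cos(omega*t - phi).  Rewritten as the linear
# model y = M + b1*cos(omega*t) + b2*sin(omega*t); over whole cycles the
# cosine and sine regressors are orthogonal, so least squares reduces to:
n = len(t)
M = sum(y) / n                                                  # mesor
b1 = 2.0 / n * sum(yi * math.cos(OMEGA * ti) for ti, yi in zip(t, y))
b2 = 2.0 / n * sum(yi * math.sin(OMEGA * ti) for ti, yi in zip(t, y))

amplitude = math.hypot(b1, b2)        # A = sqrt(b1^2 + b2^2)
phase = math.atan2(b2, b1)            # acrophase phi (radians)
peak_month = (phase / OMEGA) % 12     # month index of the seasonal peak

print(f"mesor={M:.1f}  amplitude={amplitude:.1f}  peak near month {peak_month:.1f}")
```

The recovered mesor, amplitude, and acrophase are the three standard cosinor parameters; a significant amplitude with an acrophase in the winter months would correspond to the seasonal effect the study reports.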

  11. A comparative evaluation of static and functional methods for recording centric relation and condylar guidance: a clinical study.

    Science.gov (United States)

    Thakur, Mridul; Jain, Veena; Parkash, Hari; Kumar, Pravesh

    2012-09-01

To evaluate and compare the centric relation and horizontal condylar guidance recorded using interocclusal wax and extraoral Gothic arch methods, and to subjectively evaluate the dentures thus fabricated. Centric relation and horizontal condylar guidance were recorded using interocclusal wax and Gothic arch tracing in 28 completely edentulous patients. These records were transferred to the articulator and the difference between the two sets of values was recorded. Patients were then divided into two groups according to the centric relation and horizontal condylar guidance recording method used to achieve balanced occlusion. The dentures were subjectively evaluated using the Woelfel subjective evaluation criteria. Centric relation recorded by the two methods coincided in 7.14% of patients; the interocclusal wax record was posterior to the Gothic centric relation in 21.43% of patients and anterior to it in 71.42%. Gothic arch tracings gave higher mean guidance values on both sides than the protrusive wax record in all subjects, although the difference was statistically insignificant (P > 0.05). Subjective evaluation showed no statistically significant differences for any parameter between the two groups. The Gothic arch method records the centric relation at a more posterior position than the static method, but this makes no difference to the clinical performance of the complete denture. Horizontal condylar guidance angles were approximately similar with both methods.

  12. From unified messaging towards I-centric Services for the virtual home environment

    CERN Document Server

    van der Meer, S

    2001-01-01

    The vision of user centric information and an architecture for the realization of user centric services was discussed. The illustrated framework shows that these ideas can be assembled to set up an environment where services that allow users to interact with their environment are possible. The technologies integrated into the system were platform localization capabilities, environment awareness, communication capabilities and information storage capabilities. (Edited abstract)

  13. A Reference Architecture for Network-Centric Information Systems

    National Research Council Canada - National Science Library

    Renner, Scott; Schaefer, Ronald

    2003-01-01

    This paper presents the "C2 Enterprise Reference Architecture" (C2ERA), which is a new technical concept of operations for building information systems better suited to the Network-Centric Warfare (NCW) environment...

  14. Whiplash Syndrome Reloaded: Digital Echoes of Whiplash Syndrome in the European Internet Search Engine Context.

    Science.gov (United States)

    Noll-Hussong, Michael

    2017-03-27

    In many Western countries, after a motor vehicle collision, those involved seek health care for the assessment of injuries and for insurance documentation purposes. In contrast, in many less wealthy countries, there may be limited access to care and no insurance or compensation system. The purpose of this infodemiology study was to investigate the global pattern of evolving Internet usage in countries with and without insurance and the corresponding compensation systems for whiplash injury. We used the Internet search engine analytics via Google Trends to study the health information-seeking behavior concerning whiplash injury at national population levels in Europe. We found that the search for "whiplash" is strikingly and consistently often associated with the search for "compensation" in countries or cultures with a tort system. Frequent or traumatic painful injuries; diseases or disorders such as arthritis, headache, radius, and hip fracture; depressive disorders; and fibromyalgia were not associated similarly with searches on "compensation." In this study, we present evidence from the evolving viewpoint of naturalistic Internet search engine analytics that the expectations for receiving compensation may influence Internet search behavior in relation to whiplash injury. ©Michael Noll-Hussong. Originally published in JMIR Public Health and Surveillance (http://publichealth.jmir.org), 27.03.2017.

  15. A search engine to identify pathway genes from expression data on multiple organisms

    Directory of Open Access Journals (Sweden)

    Zambon Alexander C

    2007-05-01

Full Text Available Abstract Background The completion of several genome projects showed that most genes have not yet been characterized, especially in multicellular organisms. Although most genes have unknown functions, a large collection of data is available describing their transcriptional activities under many different experimental conditions. In many cases, the coregulation of a set of genes across a set of conditions can be used to infer roles for genes of unknown function. Results We developed a search engine, the Multiple-Species Gene Recommender (MSGR), which scans gene expression datasets from multiple organisms to identify genes that participate in a genetic pathway. The MSGR takes a query consisting of a list of genes that function together in a genetic pathway from one of six organisms: Homo sapiens, Drosophila melanogaster, Caenorhabditis elegans, Saccharomyces cerevisiae, Arabidopsis thaliana, and Helicobacter pylori. Using a probabilistic method to merge searches, the MSGR identifies genes that are significantly coregulated with the query genes in one or more of those organisms. The MSGR achieves its highest accuracy for many human pathways when searches are combined across species. We describe specific examples in which new genes were identified to be involved in a neuromuscular signaling pathway and a cell-adhesion pathway. Conclusion The search engine can scan large collections of gene expression data for new genes that are significantly coregulated with a pathway of interest. By integrating searches across organisms, the MSGR can identify pathway members whose coregulation is either ancient or newly evolved.

  16. From school centric to ‘material centric’ education

    DEFF Research Database (Denmark)

    Christensen, Suna Møller

    2015-01-01

Drawing on the distinction Dewey ([1934]2005) makes in 'Art as Experience' between art as an object and art as experience, this presentation examines the way in which education for particularly skilled students at vocational education schools in Denmark re-focuses teachers' attention on materials and student-material relations. A pilot project in spring 2014 revealed that designing talent education pushed teachers (who all have a personal background in vocational jobs) to challenge a theory-based, and in this sense school-centric, curriculum that objectifies materials as pre-existing elements or resources in an assignment, and to re-consider relations with materials as dimensions facilitating the professional development of vocational 'talent' as craftsman. The broader topic of school-centric versus 'material-centric' education ties this research into vocational skills and craftsmanship with inquiries into framing and structuring......

  17. GIGGLE: a search engine for large-scale integrated genome analysis.

    Science.gov (United States)

    Layer, Ryan M; Pedersen, Brent S; DiSera, Tonya; Marth, Gabor T; Gertz, Jason; Quinlan, Aaron R

    2018-02-01

    GIGGLE is a genomics search engine that identifies and ranks the significance of genomic loci shared between query features and thousands of genome interval files. GIGGLE (https://github.com/ryanlayer/giggle) scales to billions of intervals and is over three orders of magnitude faster than existing methods. Its speed extends the accessibility and utility of resources such as ENCODE, Roadmap Epigenomics, and GTEx by facilitating data integration and hypothesis generation.

  18. OrChem - An open source chemistry search engine for Oracle®

    Science.gov (United States)

    2009-01-01

    Background Registration, indexing and searching of chemical structures in relational databases is one of the core areas of cheminformatics. However, little detail has been published on the inner workings of search engines and their development has been mostly closed-source. We decided to develop an open source chemistry extension for Oracle, the de facto database platform in the commercial world. Results Here we present OrChem, an extension for the Oracle 11G database that adds registration and indexing of chemical structures to support fast substructure and similarity searching. The cheminformatics functionality is provided by the Chemistry Development Kit. OrChem provides similarity searching with response times in the order of seconds for databases with millions of compounds, depending on a given similarity cut-off. For substructure searching, it can make use of multiple processor cores on today's powerful database servers to provide fast response times in equally large data sets. Availability OrChem is free software and can be redistributed and/or modified under the terms of the GNU Lesser General Public License as published by the Free Software Foundation. All software is available via http://orchem.sourceforge.net. PMID:20298521

  19. OrChem - An open source chemistry search engine for Oracle(R).

    Science.gov (United States)

    Rijnbeek, Mark; Steinbeck, Christoph

    2009-10-22

    Registration, indexing and searching of chemical structures in relational databases is one of the core areas of cheminformatics. However, little detail has been published on the inner workings of search engines and their development has been mostly closed-source. We decided to develop an open source chemistry extension for Oracle, the de facto database platform in the commercial world. Here we present OrChem, an extension for the Oracle 11G database that adds registration and indexing of chemical structures to support fast substructure and similarity searching. The cheminformatics functionality is provided by the Chemistry Development Kit. OrChem provides similarity searching with response times in the order of seconds for databases with millions of compounds, depending on a given similarity cut-off. For substructure searching, it can make use of multiple processor cores on today's powerful database servers to provide fast response times in equally large data sets. OrChem is free software and can be redistributed and/or modified under the terms of the GNU Lesser General Public License as published by the Free Software Foundation. All software is available via http://orchem.sourceforge.net.
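OrChem's similarity search, described in the two records above, ranks database compounds against a query fingerprint and applies a similarity cut-off. The sketch below illustrates the general idea with Tanimoto similarity over hashed binary fingerprints; the feature sets, hashing scheme, and compound names are illustrative stand-ins, not OrChem's actual CDK fingerprints:

```python
import zlib

def fingerprint(features, nbits=1024):
    """Hash a set of structural features into a fixed-width bitset
    (a stand-in for a real chemical fingerprint)."""
    bits = 0
    for f in features:
        bits |= 1 << (zlib.crc32(f.encode()) % nbits)
    return bits

def tanimoto(a, b):
    """Tanimoto coefficient of two bitset fingerprints: |A & B| / |A | B|."""
    inter = bin(a & b).count("1")
    union = bin(a | b).count("1")
    return inter / union if union else 1.0

# Toy "structures" described by hypothetical fragment features.
database = {
    "ethanol":  {"C-C", "C-O", "O-H", "CH3"},
    "methanol": {"C-O", "O-H", "CH3"},
    "benzene":  {"c:c", "ring6", "aromatic"},
}
query = {"C-C", "C-O", "O-H"}
qfp = fingerprint(query)

cutoff = 0.5  # similarity cut-off, analogous to OrChem's search parameter
hits = sorted(
    ((name, tanimoto(qfp, fingerprint(feats))) for name, feats in database.items()),
    key=lambda kv: -kv[1],
)
for name, score in hits:
    print(f"{name}: {score:.2f}" + ("  (above cutoff)" if score >= cutoff else ""))
```

In a real system the fingerprints are precomputed and indexed in the database so that only candidates passing cheap bit-count bounds are scored, which is how sub-second response times over millions of compounds become possible.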

  20. Impact of Internet Search Engines on OPAC Users: A Study of Punjabi University, Patiala (India)

    Science.gov (United States)

    Kumar, Shiv

    2012-01-01

    Purpose: The aim of this paper is to study the impact of internet search engine usage with special reference to OPAC searches in the Punjabi University Library, Patiala, Punjab (India). Design/methodology/approach: The primary data were collected from 352 users comprising faculty, research scholars and postgraduate students of the university. A…

  1. Radiological Work Planning and Procedures

    CERN Document Server

    Kurtz, J E

    2000-01-01

    Each facility is tasked with maintaining personnel radiation exposure as low as reasonably achievable (ALARA). A continued effort is required to meet this goal by developing and implementing improvements to technical work documents (TWDs) and work performance. A review of selected TWDs from most facilities shows there is a need to incorporate more radiological control requirements into the TWD. The Radioactive Work Permit (RWP) provides a mechanism to place some of the requirements but does not provide all the information needed by the worker as he/she is accomplishing the steps of the TWD. Requiring the engineers, planners and procedure writers to put the radiological control requirements in the work steps would be very easy if all personnel had a strong background in radiological work planning and radiological controls. Unfortunately, many of these personnel do not have the background necessary to include these requirements without assistance by the Radiological Control organization at each facility. In add...

  2. Aplikasi Search Engine Perpustakaan Petra Berbasis Android dengan Apache SOLR

    Directory of Open Access Journals (Sweden)

    Andreas Handojo

    2016-07-01

Full Text Available Abstract: Education is an important need for people, both to improve their abilities and to raise their standard of living. Besides formal education, knowledge can also be obtained through print media and books, and the library is one of the important facilities supporting this. Although very useful, library services can be difficult to use because the collections are so large (books, journals, magazines, and so on) that it is hard to find the book one is looking for. Therefore, besides expanding its collection, a library must keep up with the times so that its services become easier to use. The Petra Christian University library currently holds roughly 230,000 physical and digital items (2014 data), and its physical holdings and digital documents can be accessed through the library website. This very large collection makes searching difficult for users. To extend the services offered, this study therefore built a library search-engine application using the Apache SOLR platform and a PostgreSQL database. To further improve ease of access, the application targets Android-based mobile devices. In addition to testing the application itself, a questionnaire was distributed to 50 prospective users; the results show that the implemented features meet user needs (78%). Keywords: SOLR, Search Engine, Library, PostgreSQL

  3. West African Journal of Radiology: Advanced Search

    African Journals Online (AJOL)

    Search tips: Search terms are case-insensitive; Common words are ignored; By default only articles containing all terms in the query are returned (i.e., AND is implied); Combine multiple words with OR to find articles containing either term; e.g., education OR research; Use parentheses to create more complex queries; e.g., ...
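The query conventions listed above (case-insensitive terms, ignored common words, implied AND, explicit OR, parentheses for grouping) can be evaluated with a small recursive-descent matcher. This is an illustrative sketch of such a grammar, not AJOL's actual implementation:

```python
import re

def matches(query, text, stopwords={"the", "a", "an", "of", "and"}):
    """Evaluate a search query against a text: terms are case-insensitive,
    common words are ignored, AND is implied between adjacent terms,
    OR is explicit, and parentheses group subexpressions."""
    words = set(re.findall(r"\w+", text.lower()))
    tokens = re.findall(r"\(|\)|\w+", query.lower())
    pos = 0

    def peek():
        return tokens[pos] if pos < len(tokens) else None

    def expr():            # expr := term ('or' term)*
        nonlocal pos
        result = term()
        while peek() == "or":
            pos += 1
            result = term() or result   # term() always runs, consuming tokens
        return result

    def term():            # term := factor+  (AND is implied)
        nonlocal pos
        result = True
        while peek() is not None and peek() not in (")", "or"):
            result = factor() and result
        return result

    def factor():          # factor := WORD | '(' expr ')'
        nonlocal pos
        tok = tokens[pos]
        pos += 1
        if tok == "(":
            inner = expr()
            pos += 1       # skip the closing ')'
            return inner
        if tok in stopwords:
            return True    # common words are ignored
        return tok in words

    return expr()
```

For example, `matches("(education OR chemistry) radiology", doc)` requires `radiology` plus at least one of the parenthesized terms, matching the "AND is implied" and grouping rules described in the record.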

  4. Future consumer mobile phone security: A case study using the data-centric security model

    NARCIS (Netherlands)

    van Cleeff, A.

    Consumer mobile phone security requires more attention, now that their data storage capacity is increasing. At the same time, much effort is spent on data-centric security for large enterprises. In this article we try to apply data-centric security to consumer mobile phones. We show a maturity model

  5. performance evaluation of a pilot paraplegic centricity mobility aid

    African Journals Online (AJOL)

... The result of the test showed a remarkable improvement in Wilcoxon's signed rank test. ...

  6. An end user evaluation of query formulation and results review tools in three medical meta-search engines.

    Science.gov (United States)

    Leroy, Gondy; Xu, Jennifer; Chung, Wingyan; Eggers, Shauna; Chen, Hsinchun

    2007-01-01

    Retrieving sufficient relevant information online is difficult for many people because they use too few keywords to search and search engines do not provide many support tools. To further complicate the search, users often ignore support tools when available. Our goal is to evaluate in a realistic setting when users use support tools and how they perceive these tools. We compared three medical search engines with support tools that require more or less effort from users to form a query and evaluate results. We carried out an end user study with 23 users who were asked to find information, i.e., subtopics and supporting abstracts, for a given theme. We used a balanced within-subjects design and report on the effectiveness, efficiency and usability of the support tools from the end user perspective. We found significant differences in efficiency but did not find significant differences in effectiveness between the three search engines. Dynamic user support tools requiring less effort led to higher efficiency. Fewer searches were needed and more documents were found per search when both query reformulation and result review tools dynamically adjust to the user query. The query reformulation tool that provided a long list of keywords, dynamically adjusted to the user query, was used most often and led to more subtopics. As hypothesized, the dynamic result review tools were used more often and led to more subtopics than static ones. These results were corroborated by the usability questionnaires, which showed that support tools that dynamically optimize output were preferred.

  7. Adaptive Engineering of an Embedded System, Engineered for use by Search and Rescue Canines

    Directory of Open Access Journals (Sweden)

    Cristina Ribeiro

    2011-06-01

Full Text Available In Urban Search and Rescue (US&R) operations, canine teams are deployed to find live patients and save lives. US&R may benefit from increased levels of situational awareness through information made available by embedded systems attached to the dogs. One of these is the Canine Pose Estimation (CPE) system. There are many challenges faced with such embedded systems, including the engineering of such devices for use in disaster environments. Durability and wireless connectivity in areas with materials that inhibit wireless communications, the safety of the dog wearing the devices, and form factor must all be accommodated. All of these factors must be weighed without compromising the accuracy of the application and the timely delivery of its data. This paper discusses the adaptive engineering process and how each of the unique challenges of emergency response embedded systems can be defined and overcome through effective design methods.

  8. GIGGLE: a search engine for large-scale integrated genome analysis

    Science.gov (United States)

    Layer, Ryan M; Pedersen, Brent S; DiSera, Tonya; Marth, Gabor T; Gertz, Jason; Quinlan, Aaron R

    2018-01-01

    GIGGLE is a genomics search engine that identifies and ranks the significance of genomic loci shared between query features and thousands of genome interval files. GIGGLE (https://github.com/ryanlayer/giggle) scales to billions of intervals and is over three orders of magnitude faster than existing methods. Its speed extends the accessibility and utility of resources such as ENCODE, Roadmap Epigenomics, and GTEx by facilitating data integration and hypothesis generation. PMID:29309061

  9. Needle Custom Search: Recall-oriented search on the Web using semantic annotations

    NARCIS (Netherlands)

    Kaptein, Rianne; Koot, Gijs; Huis in 't Veld, Mirjam A.A.; van den Broek, Egon; de Rijke, Maarten; Kenter, Tom; de Vries, A.P.; Zhai, Chen Xiang; de Jong, Franciska M.G.; Radinsky, Kira; Hofmann, Katja

Web search engines are optimized for early precision, which makes it difficult to perform recall-oriented tasks with them. In this article, we present our tool Needle Custom Search. This tool exploits semantic annotations of Web search results and thereby increases the efficiency

  10. Needle Custom Search : Recall-oriented search on the web using semantic annotations

    NARCIS (Netherlands)

    Kaptein, Rianne; Koot, Gijs; Huis in 't Veld, Mirjam A.A.; van den Broek, Egon L.

    2014-01-01

Web search engines are optimized for early precision, which makes it difficult to perform recall-oriented tasks with them. In this article, we present our tool Needle Custom Search. This tool exploits semantic annotations of Web search results and thereby increases the efficiency

  11. Radiation dose reduction: comparative assessment of publication volume between interventional and diagnostic radiology.

    Science.gov (United States)

    Hansmann, Jan; Henzler, Thomas; Gaba, Ron C; Morelli, John N

    2017-01-01

    We aimed to quantify and compare awareness regarding radiation dose reduction within the interventional radiology and diagnostic radiology communities. Abstracts accepted to the annual meetings of the Society of Interventional Radiology (SIR), the Cardiovascular and Interventional Radiological Society of Europe (CIRSE), the Radiological Society of North America (RSNA), and the European Congress of Radiology (ECR) between 2005 and 2015 were analyzed using the search terms "interventional/computed tomography" and "radiation dose/radiation dose reduction." A PubMed query using the above-mentioned search terms for the years of 2005-2015 was performed. Between 2005 and 2015, a total of 14 520 abstracts (mean, 660±297 abstracts) and 80 614 abstracts (mean, 3664±1025 abstracts) were presented at interventional and diagnostic radiology meetings, respectively. Significantly fewer abstracts related to radiation dose were presented at the interventional radiology meetings compared with the diagnostic radiology meetings (162 abstracts [1% of total] vs. 2706 [3% of total]; P radiology abstracts (range, 6-27) and 246±105 diagnostic radiology abstracts (range, 112-389) pertaining to radiation dose were presented at each meeting. The PubMed query revealed an average of 124±39 publications (range, 79-187) and 1205±307 publications (range, 829-1672) related to interventional and diagnostic radiology dose reduction per year, respectively (P radiology community over the past 10 years has not mirrored the increased volume seen within diagnostic radiology, suggesting that increased education and discussion about this topic may be warranted.

  12. Socioeconomic and political issues in radiology

    International Nuclear Information System (INIS)

    Stiles, R.G.; Belt, H.C.

    1990-01-01

    This paper compares editorials on socioeconomic and political issues published in the radiologic literature during 1920-1940 with those published during 1970-1990. Radiologic literature indexes were searched for editorials on socioeconomic and political issues published during two 20-year periods: 1920-1940 and 1970-1990. One hundred editorials from each period were chosen from two major journals. The editorials were organized into 20 categories including turf, subspecialization, radiologist as physician, public relations, governmental intervention (socialized medicine), future of radiology, overuse of studies

  13. Using the open Web as an information resource and scholarly Web search engines as retrieval tools for academic and research purposes

    Directory of Open Access Journals (Sweden)

    Filistea Naude

    2010-08-01

    Full Text Available This study provided insight into the significance of the open Web as an information resource and Web search engines as research tools amongst academics. The academic staff establishment of the University of South Africa (Unisa) was invited to participate in a questionnaire survey and included 1188 staff members from five colleges. This study culminated in a PhD dissertation in 2008. One hundred and eighty seven respondents participated in the survey which gave a response rate of 15.7%. The results of this study show that academics have indeed accepted the open Web as a useful information resource and Web search engines as retrieval tools when seeking information for academic and research work. The majority of respondents used the open Web and Web search engines on a daily or weekly basis to source academic and research information. The main obstacles presented by using the open Web and Web search engines included lack of time to search and browse the Web, information overload, poor network speed and the slow downloading speed of webpages.

  14. Using the open Web as an information resource and scholarly Web search engines as retrieval tools for academic and research purposes

    Directory of Open Access Journals (Sweden)

    Filistea Naude

    2010-12-01

    Full Text Available This study provided insight into the significance of the open Web as an information resource and Web search engines as research tools amongst academics. The academic staff establishment of the University of South Africa (Unisa) was invited to participate in a questionnaire survey and included 1188 staff members from five colleges. This study culminated in a PhD dissertation in 2008. One hundred and eighty seven respondents participated in the survey which gave a response rate of 15.7%. The results of this study show that academics have indeed accepted the open Web as a useful information resource and Web search engines as retrieval tools when seeking information for academic and research work. The majority of respondents used the open Web and Web search engines on a daily or weekly basis to source academic and research information. The main obstacles presented by using the open Web and Web search engines included lack of time to search and browse the Web, information overload, poor network speed and the slow downloading speed of webpages.

  15. iPixel: a visual content-based and semantic search engine for retrieving digitized mammograms by using collective intelligence.

    Science.gov (United States)

    Alor-Hernández, Giner; Pérez-Gallardo, Yuliana; Posada-Gómez, Rubén; Cortes-Robles, Guillermo; Rodríguez-González, Alejandro; Aguilar-Laserre, Alberto A

    2012-09-01

    Nowadays, traditional search engines such as Google, Yahoo and Bing facilitate the retrieval of information in the format of images, but the results are not always useful for the users. This is mainly due to two problems: (1) the semantic keywords are not taken into consideration and (2) it is not always possible to establish a query using the image features. This issue has been covered in different domains in order to develop content-based image retrieval (CBIR) systems. The expert community has focussed their attention on the healthcare domain, where a lot of visual information for medical analysis is available. This paper provides a solution called iPixel Visual Search Engine, which involves semantics and content issues in order to search for digitized mammograms. iPixel offers the possibility of retrieving mammogram features using collective intelligence and implementing a CBIR algorithm. Our proposal compares not only features with similar semantic meaning, but also visual features. In this sense, the comparisons are made in different ways: by the number of regions per image, by maximum and minimum size of regions per image and by average intensity level of each region. iPixel Visual Search Engine supports the medical community in differential diagnoses related to the diseases of the breast. The iPixel Visual Search Engine has been validated by experts in the healthcare domain, such as radiologists, in addition to experts in digital image analysis.
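
The region-level comparisons the abstract lists (number of regions per image, minimum and maximum region size, average intensity per region) can be sketched as a toy feature extractor plus a distance for ranking. The data layout below is a hypothetical stand-in for real mammogram segmentation output, not iPixel's actual representation.

```python
def region_features(regions):
    """Summarize an image's segmented regions with the coarse features the
    abstract mentions. `regions` is a list of (size_px, mean_intensity)
    pairs -- a simplified stand-in for real segmentation output."""
    sizes = [s for s, _ in regions]
    return {
        "count": len(regions),
        "min_size": min(sizes),
        "max_size": max(sizes),
        "mean_intensity": sum(i for _, i in regions) / len(regions),
    }

def feature_distance(a, b):
    """Normalized absolute difference over the feature summaries; smaller
    means more similar, so candidate images can be ranked for retrieval."""
    keys = ("count", "min_size", "max_size", "mean_intensity")
    return sum(abs(a[k] - b[k]) / ((abs(a[k]) + abs(b[k])) or 1) for k in keys)
```

A content-based retrieval system would combine such visual distances with the semantic (keyword) similarity the paper also uses.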

  16. User Centric Job Monitoring – a redesign and novel approach in the STAR experiment

    International Nuclear Information System (INIS)

    Arkhipkin, D; Lauret, J; Zulkarneeva, Y

    2014-01-01

    User Centric Monitoring (or UCM) has been a long awaited feature in STAR, whereby programs, workflows and system 'events' can be logged, broadcast and later analyzed. UCM collects and filters available job monitoring information from various resources and presents it to users in a user-centric view rather than an administrative-centric one. The first attempt at a UCM approach was made in STAR in 2004 using a log4cxx plug-in back-end; it then evolved with a push toward a scalable database back-end (2006) and finally a Web-Service approach (2010, CSW4DB SBIR). The latter proved incomplete and did not address the evolving needs of the experiment, where streamlined messages for online (data acquisition) purposes and continuous support for data mining and event analysis need to coexist in a seamless, unified approach. The code also proved hard to maintain. This paper presents the next evolutionary step of the UCM toolkit: a redesign and redirection of our latest attempt that acknowledges and integrates recent technologies in a simpler, maintainable and yet scalable manner. The extended version of the job logging package is built upon a three-tier approach based on Task, Job and Event, and features a Web-Service based logging API, a responsive AJAX-powered user interface, and a database back-end relying on MongoDB, which is uniquely suited to STAR's needs. In addition, we present details of the integration of this logging package with the STAR offline and online software frameworks. Leveraging the reported experience of the ATLAS and CMS experiments with the ESPER engine, we discuss and show how such an approach has been implemented in STAR for meta-data event-triggering stream processing and filtering. An ESPER-based solution fits well into the online data acquisition system, where many systems are monitored.
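
The Task → Job → Event hierarchy described in the abstract can be sketched as a minimal in-memory logging store. Class and method names here are illustrative assumptions, and a nested dict stands in for the Web-Service API and MongoDB back-end of the real system.

```python
import time
from collections import defaultdict

class UCMLog:
    """Toy three-tier (Task -> Job -> Event) logging store, mirroring the
    hierarchy the STAR UCM redesign describes. The real system exposes a
    web-service logging API backed by MongoDB; a nested dict stands in
    for the database here."""

    def __init__(self):
        # task -> job -> list of event records
        self._events = defaultdict(lambda: defaultdict(list))

    def log(self, task, job, message, level="INFO"):
        """Append one event under the given task/job."""
        self._events[task][job].append(
            {"ts": time.time(), "level": level, "msg": message}
        )

    def job_events(self, task, job, level=None):
        """User-centric view: events for one job, optionally filtered
        by severity level."""
        events = self._events[task][job]
        return [e for e in events if level is None or e["level"] == level]
```

The user-centric filtering in `job_events` is the point: the same store serves both a per-job view for the user and, in principle, an aggregate view for administrators.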

  17. Verification of Minimum Detectable Activity for Radiological Threat Source Search

    Science.gov (United States)

    Gardiner, Hannah; Myjak, Mitchell; Baciak, James; Detwiler, Rebecca; Seifert, Carolyn

    2015-10-01

    The Department of Homeland Security's Domestic Nuclear Detection Office is working to develop advanced technologies that will improve the ability to detect, localize, and identify radiological and nuclear sources from airborne platforms. The Airborne Radiological Enhanced-sensor System (ARES) program is developing advanced data fusion algorithms for analyzing data from a helicopter-mounted radiation detector. This detector platform provides a rapid, wide-area assessment of radiological conditions at ground level. The NSCRAD (Nuisance-rejection Spectral Comparison Ratios for Anomaly Detection) algorithm was developed to distinguish low-count sources of interest from benign naturally occurring radiation and irrelevant nuisance sources. It uses a number of broad, overlapping regions of interest to statistically compare each newly measured spectrum with the current estimate for the background to identify anomalies. We recently developed a method to estimate the minimum detectable activity (MDA) of NSCRAD in real time. We present this method here and report on the MDA verification using both laboratory measurements and simulated injects on measured backgrounds at or near the detection limits. This work is supported by the US Department of Homeland Security, Domestic Nuclear Detection Office, under competitively awarded contract/IAA HSHQDC-12-X-00376. This support does not constitute an express or implied endorsement on the part of the Gov't.
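
The idea of comparing broad, overlapping spectral regions of interest against a running background estimate can be illustrated with a toy ratio test. This is a loose conceptual sketch under assumed data shapes, not the published NSCRAD algorithm or its statistical treatment.

```python
def scr_anomaly(spectrum, background, roi_pairs, threshold=3.0):
    """Flag a measured spectrum as anomalous by comparing region-of-interest
    (ROI) count ratios against a background estimate -- a simplified
    illustration of the spectral-comparison-ratio idea.

    `spectrum` / `background` are per-channel count lists; `roi_pairs` is a
    list of ((lo1, hi1), (lo2, hi2)) channel windows whose count ratio is
    compared between measurement and background.
    """
    def window(counts, lo, hi):
        return sum(counts[lo:hi])

    for (lo1, hi1), (lo2, hi2) in roi_pairs:
        expected = window(background, lo1, hi1) / max(window(background, lo2, hi2), 1)
        observed = window(spectrum, lo1, hi1) / max(window(spectrum, lo2, hi2), 1)
        # Crude significance proxy: relative deviation of the two ratios.
        # The real algorithm uses a proper statistical test, which also
        # underlies its minimum-detectable-activity estimate.
        if abs(observed - expected) / max(expected, 1e-9) > threshold:
            return True
    return False
```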

  18. COORDINATION IN MULTILEVEL NETWORK-CENTRIC CONTROL SYSTEMS OF REGIONAL SECURITY: APPROACH AND FORMAL MODEL

    Directory of Open Access Journals (Sweden)

    A. V. Masloboev

    2015-01-01

    Full Text Available The paper deals with the development of methods and tools for mathematical and computer modeling of multilevel network-centric control systems for regional security. This research is carried out under the implementation of the development strategy of the Arctic zone of the Russian Federation and of national security for the period up to 2020 in the territory of the Murmansk region. The creation of a unified interdepartmental multilevel computer-aided system is proposed, intended for decision-making information support and socio-economic security monitoring of the Arctic regions of Russia. The distinctive features of the investigated system class are openness, self-organization, decentralization of management functions and decision-making, weak hierarchy in the decision-making circuit and an internal goal-generation capability. The research techniques include the functional-target approach, the mathematical apparatus of multilevel hierarchical system theory and the principles of network-centric control of distributed systems with pro-active components and variable structure. The work considers the problem of coordinating local network-centric management decisions within multilevel distributed systems intended for information support of regional security. An approach to this coordination problem and its formalization in multilevel network-centric control systems of regional security are proposed, based on a developed multilevel recurrent hierarchical model of the complex security of a regional socio-economic system. The model provides coordination of regional security indexes, optimized by the different elements of the multilevel control systems, subject to decentralized decision-making. The model's specificity consists in the application of functional-target technology and the mathematical apparatus of multilevel hierarchical system theory for implementing coordination procedures for local network-centric management decisions. The work-out and research results can find further

  19. Discourse-Centric Learning Analytics: Mapping the Terrain

    Science.gov (United States)

    Knight, Simon; Littleton, Karen

    2015-01-01

    There is an increasing interest in developing learning analytic techniques for the analysis, and support of, high-quality learning discourse. This paper maps the terrain of discourse-centric learning analytics (DCLA), outlining the distinctive contribution of DCLA and proposing a definition for the field moving forwards. It is our claim that DCLA…

  20. How Will Online Affiliate Marketing Networks Impact Search Engine Rankings?

    OpenAIRE

    Janssen, David; Heck, Eric

    2007-01-01

    In online affiliate marketing networks advertising web sites offer their affiliates revenues based on provided web site traffic and associated leads and sales. Advertising web sites can have a network of thousands of affiliates providing them with web site traffic through hyperlinks on their web sites. Search engines such as Google, MSN, and Yahoo, consider hyperlinks as a proof of quality and/or reliability of the linked web sites, and therefore use them to determine the relevanc...

  1. evaluating search effectiveness of some selected search engines

    African Journals Online (AJOL)

    Precision, relative recall and response time were considered for this ... a total of 24 search queries were sampled based on information queries, .... searching process and results, although there are other ... Q3.2 Software prototype model.

  2. Seasonal trends in hypertension in Poland: evidence from Google search engine query data.

    Science.gov (United States)

    Płatek, Anna E; Sierdziński, Janusz; Krzowski, Bartosz; Szymański, Filip M

    2018-01-01

    Various conditions, including arterial hypertension, exhibit seasonal trends in their occurrence and magnitude. Those trends correspond to the interest expressed in the number of Internet searches for the specific conditions per month. The aim of the study was to show how seasonal trends in hypertension prevalence in Poland relate to data from the Google Trends tool. Internet search engine query data were retrieved from Google Trends from January 2008 to November 2017. Data were calculated as a monthly normalised search volume over the nine-year period. Data were presented for specific geographic regions, including Poland, the United States of America, Australia, and worldwide, for the following search terms: "arterial hypertension (pol. nadciśnienie tętnicze)", "hypertension (pol. nadciśnienie)" and "hypertension medical condition". Seasonal effects were calculated using regression models and presented graphically. In Poland the search volume is highest between November and May, while patients exhibit the least interest in arterial hypertension during summer holidays (p Google.
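
A minimal way to expose such a seasonal profile from monthly search-volume data is per-month averaging across years. The study itself fits regression models; this stdlib-only sketch (with hypothetical function names) only illustrates the winter-versus-summer comparison.

```python
from collections import defaultdict

def monthly_profile(series):
    """Average normalised search volume per calendar month.

    `series` is a list of (month_1_to_12, volume) pairs spanning several
    years, in the spirit of a Google Trends export. Averaging per month
    is the simplest way to expose a seasonal profile.
    """
    buckets = defaultdict(list)
    for month, volume in series:
        buckets[month].append(volume)
    return {m: sum(v) / len(v) for m, v in buckets.items()}

def winter_over_summer(profile):
    """Ratio of mean winter (Dec-Feb) to mean summer (Jun-Aug) interest;
    a value above 1 matches the pattern the study reports for Poland."""
    winter = [profile[m] for m in (12, 1, 2) if m in profile]
    summer = [profile[m] for m in (6, 7, 8) if m in profile]
    return (sum(winter) / len(winter)) / (sum(summer) / len(summer))
```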

  3. Paediatric doses from diagnostic radiology in Victoria

    International Nuclear Information System (INIS)

    Boal, T.J.; Cardillo, I.; Einsiedel, P.F.

    1998-01-01

    This study examines doses to paediatric patients from diagnostic radiology. Measurements were made at 29 hospitals and private radiology practices in the state of Victoria. Entrance skin doses in air were measured for the exposure factors used by hospital radiology departments and private radiology practices for a standard size 1, 5, 10 and 15 year old child, for the following procedures: chest AP/PA, lat; abdomen AP; pelvis AP; lumbar spine AP, lat; and skull AP, lat. There was a large range of doses for each particular procedure and age group. Factors contributing to the range of doses were identified. Guidance levels for paediatric radiology based on the third quartile value of the skin entrance doses have been recommended and are compared with guidance levels. Copyright (1998) Australasian Physical and Engineering Sciences in Medicine

  4. Using Search Engine Query Data to Explore the Epidemiology of Common Gastrointestinal Symptoms.

    Science.gov (United States)

    Hassid, Benjamin G; Day, Lukejohn W; Awad, Mohannad A; Sewell, Justin L; Osterberg, E Charles; Breyer, Benjamin N

    2017-03-01

    Internet searches are an increasingly used tool in medical research. To date, no studies have examined Google search data in relation to common gastrointestinal symptoms. The aim of this study was to compare trends in Internet search volume with clinical datasets for common gastrointestinal symptoms. Using Google Trends, we recorded relative changes in volume of searches related to dysphagia, vomiting, and diarrhea in the USA between January 2008 and January 2011. We queried the National Inpatient Sample (NIS) and the National Hospital Ambulatory Medical Care Survey (NHAMCS) during this time period and identified cases related to these symptoms. We assessed the correlation between Google Trends and these two clinical datasets, as well as examined seasonal variation trends. Changes to Google search volume for all three symptoms correlated significantly with changes to NIS output (dysphagia: r = 0.5, P = 0.002; diarrhea: r = 0.79, P search engine query volume over time. These data demonstrate that the prevalence of common GI symptoms is rising over time.
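
The correlation analysis described here reduces to computing Pearson's r between two aligned monthly time series (search volume vs. clinical case counts). A stdlib-only sketch:

```python
import math

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length series --
    the statistic used to relate Google search volume to NIS/NHAMCS case
    counts (illustrative implementation, not the study's code)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)
```

In practice one would also compute a p-value and, as the study notes, check for seasonal confounding before interpreting the coefficient.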

  5. Searching for information on the World Wide Web with a search engine: a pilot study on cognitive flexibility in younger and older users.

    Science.gov (United States)

    Dommes, Aurelie; Chevalier, Aline; Rossetti, Marilyne

    2010-04-01

    This pilot study investigated the age-related differences in searching for information on the World Wide Web with a search engine. 11 older adults (6 men, 5 women; M age=59 yr., SD=2.76, range=55-65 yr.) and 12 younger adults (2 men, 10 women; M=23.7 yr., SD=1.07, range=22-25 yr.) had to conduct six searches differing in complexity, and for which a search method was or was not induced. The results showed that the younger and older participants provided with an induced search method were less flexible than the others and produced fewer new keywords. Moreover, older participants took longer than the younger adults, especially in the complex searches. The younger participants were flexible in the first request and spontaneously produced new keywords (spontaneous flexibility), whereas the older participants only produced new keywords when confronted by impasses (reactive flexibility). Aging may influence web searches, especially the nature of keywords used.

  6. Current evaluation of the information about Radiological Protection in Internet

    International Nuclear Information System (INIS)

    Ruiz-Cruces, R.; Marco, M.; Villanueva, I.

    2003-01-01

    To analyze the current situation regarding the pedagogic information on radiological protection training that can be found on the Internet. More than 756 web pages on radiological protection in the nuclear and medical fields were visited, providing information mainly aimed at the members of the public. The search used Internet search engines (such as Copernicus, Google and Scirus) with keywords related to this subject (such as "radiological protection" and "health safety"), collecting the Internet addresses of organizations, societies and research groups. Only a low percentage (less than 5 per cent) of these addresses contain information on radiological protection for the members of the public, including information about the regulatory organizations and the objectives of protecting the members of the public against ionizing radiation (from the point of view of the use of ionizing radiation in the medical and nuclear fields). This work proposes the use of the Internet as a tool for informing the members of the public on radiological protection, as the first link in the chain of training and education. (Author)

  7. Laboratory of environmental radiological surveillance

    International Nuclear Information System (INIS)

    Mendez G, A.; Marcial M, F.; Giber F, J.; Montiel R, E.; Leon del V, E.; Rivas C, I.; Leon G, M.V.; Lagunas G, E.; Aragon S, R.; Juarez N, A.; Alfaro L, M.M.

    1991-12-01

    The department of radiological protection of the ININ requested the collaboration of the Engineering Unit in drawing up the project plan for the laboratory of environmental radiological surveillance. The emission of radioactive substances to the atmosphere as a consequence of the normal operation of the Nuclear Center constitutes a source of human exposure to radiation that should be appropriately monitored and controlled, in order to determine the potential exposure of the population living in the installation's area of influence. (Author)

  8. Radiological Work Planning and Procedures

    International Nuclear Information System (INIS)

    KURTZ, J.E.

    2000-01-01

    Each facility is tasked with maintaining personnel radiation exposure as low as reasonably achievable (ALARA). A continued effort is required to meet this goal by developing and implementing improvements to technical work documents (TWDs) and work performance. A review of selected TWDs from most facilities shows there is a need to incorporate more radiological control requirements into the TWD. The Radioactive Work Permit (RWP) provides a mechanism to place some of the requirements but does not provide all the information needed by the worker as he/she is accomplishing the steps of the TWD. Requiring the engineers, planners and procedure writers to put the radiological control requirements in the work steps would be very easy if all personnel had a strong background in radiological work planning and radiological controls. Unfortunately, many of these personnel do not have the background necessary to include these requirements without assistance by the Radiological Control organization at each facility. In addition, there seems to be confusion as to what should be and what should not be included in the TWD

  9. Building Internet Search Engines Internet'te Tarama Sistemlerinin Kurulması

    Directory of Open Access Journals (Sweden)

    Mustafa Akgül

    1996-09-01

    Full Text Available Internet search engines are powerful tools to find electronic objects such as addresses of individuals and institutions, documents, statistics of all kinds, dictionaries, catalogs, product information etc. This paper explains how to build and run some very common search engines on Unix platforms, so as to serve documents through the Web. The various search mechanisms available on the Internet help users find and access electronic objects across a spectrum ranging from individual and institutional addresses to document addresses, from statistics to dictionaries, and from book catalogs to product prices. Together with hierarchically organized virtual libraries, search mechanisms help the user find their way through this very large distributed library. This article describes what must be done to install and run the most widely used search engines, particularly in a Unix environment.

  10. Predicting user click behaviour in search engine advertisements

    Science.gov (United States)

    Daryaie Zanjani, Mohammad; Khadivi, Shahram

    2015-10-01

    According to the specific requirements and interests of users, search engines select and display advertisements that match user needs and have a higher probability of attracting users' attention based on their previous search history. New objects such as a user, advertisement or query cause a deterioration of precision in targeted advertising due to their lack of history. This article surveys this challenge. In the case of new objects, we first extract observed objects similar to the new object and then use their history as the history of the new object. Similarity between objects is measured based on correlation, which is a relation between a user and an advertisement when the advertisement is displayed to the user. This method is used for all objects, so it has helped us to accurately select relevant advertisements for users' queries. In our proposed model, we assume that similar users behave in a similar manner. We find that users with few queries are similar to new users. We will show that the correlation between users and advertisements' keywords is high. Thus, users who pay attention to advertisements' keywords click similar advertisements. In addition, users who pay attention to specific brand names might have similar behaviours too.
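
The cold-start fallback the abstract describes, substituting the pooled history of similar objects for a new object's missing history, can be sketched as a click-through-rate estimate. `stats` and `similar` are hypothetical data structures, not the paper's actual model.

```python
def estimated_ctr(obj_id, stats, similar, min_views=50):
    """Estimate click-through rate for `obj_id`. If the object is new
    (too few recorded views), fall back to the pooled history of similar,
    already-observed objects -- the cold-start strategy the abstract
    proposes for new users, advertisements and queries.

    `stats` maps object id -> (clicks, views); `similar` maps object id ->
    list of ids judged similar (both structures are illustrative).
    """
    clicks, views = stats.get(obj_id, (0, 0))
    if views >= min_views:
        return clicks / views
    # Cold start: pool the histories of similar, observed objects.
    neighbors = similar.get(obj_id, [])
    pooled_clicks = clicks + sum(stats.get(s, (0, 0))[0] for s in neighbors)
    pooled_views = views + sum(stats.get(s, (0, 0))[1] for s in neighbors)
    return pooled_clicks / pooled_views if pooled_views else 0.0
```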

  11. Search engines, news wires and digital epidemiology: Presumptions and facts.

    Science.gov (United States)

    Kaveh-Yazdy, Fatemeh; Zareh-Bidoki, Ali-Mohammad

    2018-07-01

    Digital epidemiology tries to identify disease dynamics and spread behaviors using digital traces collected via search engine logs and social media posts. However, the impact of news on information-seeking behaviors has remained unknown. Data employed in this research were provided by two sources: (1) 48 months of Parsijoo search engine query logs, and (2) a set of documents from 28 months of Parsijoo's news service. Two classes of topics, i.e. macro-topics and micro-topics, were selected to be tracked in query logs and news. Keywords of the macro-topics were automatically generated using web-provided resources and exceeded 10k. The keyword set of the micro-topics was limited to a numerable list including terms related to diseases and health-related activities. The tests take the form of three studies. Study A includes temporal analyses of 7 macro-topics in query logs. Study B considers the seasonality of the searching patterns of 9 micro-topics, and Study C assesses the impact of news media coverage on users' health-related information-seeking behaviors. Study A showed that the hourly distribution of the various macro-topics followed changes in the social activity level. Conversely, the interestingness of the macro-topics did not follow the regularities of the topic distributions. Among the macro-topics, "Pharmacotherapy" had the highest interestingness level and the widest time window of popularity. In Study B, the seasonality of a limited number of diseases and health-related activities was analyzed. Trends of infectious diseases, such as flu, mumps and chicken pox, were seasonal. Due to the seasonality of most of the diseases covered in national vaccination plans, the trend belonging to "Immunization and Vaccination" was seasonal as well. Cancer awareness events caused peaks in the search trends of the "Cancer" and "Screening" micro-topics on specific days of each year that mimic repeated patterns, which may mistakenly be identified as seasonality. In study C, we assessed the co-integration and

  12. I-SG : Interactive Search Grouping - Search result grouping using Independent Component Analysis

    DEFF Research Database (Denmark)

    Lauritsen, Thomas; Kolenda, Thomas

    2002-01-01

    We present a computationally simple and efficient approach to unsupervised grouping of the search results from any search engine. Along with each group, a set of keywords is found to annotate its contents. This approach leads to an interactive search through a hierarchical structure that is built online. It is the user's task to improve the search by expanding the search query using the topic keywords representing the desired groups. In doing so, the search engine limits the space of possible search results, virtually moving down in the search hierarchy, and so refines the search.
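
The group-then-annotate loop can be illustrated with a deliberately simple keyword-based grouping of result snippets. This stands in for the Independent Component Analysis decomposition the paper actually uses; the point of the sketch is the interaction cycle (group → keyword → refined query), and all names here are illustrative.

```python
from collections import Counter, defaultdict

STOPWORDS = {"the", "a", "of", "and", "to", "in", "for"}

def group_results(snippets):
    """Group search-result snippets by their most distinctive term and use
    that term as the group's keyword annotation -- a toy stand-in for the
    ICA-based grouping in the paper.
    """
    df = Counter()  # document frequency of each term
    tokenized = []
    for s in snippets:
        words = {w for w in s.lower().split() if w not in STOPWORDS}
        tokenized.append(words)
        df.update(words)
    groups = defaultdict(list)
    for s, words in zip(snippets, tokenized):
        # Keyword: the rarest informative term in the snippet
        # (ties broken alphabetically for determinism).
        key = min(words, key=lambda w: (df[w], w))
        groups[key].append(s)
    return dict(groups)
```

A user would then append a chosen group keyword to the query and search again, descending one level in the hierarchy.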

  13. On bridging relational and document-centric data stores

    NARCIS (Netherlands)

    Roijackers, J.; Fletcher, G.H.L.; Gottlob, G.; Grasso, G.; Olteanu, D.; Schallhart, C.

    2013-01-01

    Big Data scenarios often involve massive collections of nested data objects, typically referred to as "documents." The challenges of document management at web scale have stimulated a recent trend towards the development of document-centric "NoSQL" data stores. Many query tasks naturally involve

  14. Guided interaction exploration in artifact-centric process models

    NARCIS (Netherlands)

    van Eck, M.L.; Sidorova, N.; van der Aalst, W.M.P.

    2017-01-01

    Artifact-centric process models aim to describe complex processes as a collection of interacting artifacts. Recent development in process mining allow for the discovery of such models. However, the focus is often on the representation of the individual artifacts rather than their interactions. Based

  15. Search engine ranking, quality, and content of webpages that are critical vs noncritical of HPV vaccine

    Science.gov (United States)

    Fu, Linda Y.; Zook, Kathleen; Spoehr-Labutta, Zachary; Hu, Pamela; Joseph, Jill G.

    2015-01-01

    Purpose Online information can influence attitudes toward vaccination. The aim of the present study is to provide a systematic evaluation of the search engine ranking, quality, and content of webpages that are critical versus noncritical of HPV vaccination. Methods We identified HPV vaccine-related webpages with the Google search engine by entering 20 terms. We then assessed each webpage for critical versus noncritical bias as well as for the following quality indicators: authorship disclosure, source disclosure, attribution of at least one reference, currency, exclusion of testimonial accounts, and readability level less than 9th grade. We also determined webpage comprehensiveness in terms of mention of 14 HPV vaccine relevant topics. Results Twenty searches yielded 116 unique webpages. HPV vaccine-critical webpages comprised roughly a third of the top, top 5 and top 10-ranking webpages. The prevalence of HPV vaccine-critical webpages was higher for queries that included term modifiers in addition to root terms. Compared with noncritical webpages, webpages critical of HPV vaccine overall had a lower quality score than those with a noncritical bias (psearch engine queries despite being of lower quality and less comprehensive than noncritical webpages. PMID:26559742
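
The six quality indicators the study checked translate naturally into a simple 0-6 score per webpage. The field names below are hypothetical labels for those indicators, not the study's coding scheme.

```python
# One point per indicator met, mirroring the six indicators assessed:
# authorship disclosure, source disclosure, at least one reference,
# currency, exclusion of testimonials, readability below 9th grade.
QUALITY_INDICATORS = (
    "authorship_disclosed",
    "sources_disclosed",
    "has_reference",
    "current",
    "no_testimonials",
    "readable_below_9th_grade",
)

def quality_score(page):
    """Sum of quality indicators met. `page` is a dict mapping indicator
    name -> bool (hypothetical representation of a coded webpage)."""
    return sum(1 for k in QUALITY_INDICATORS if page.get(k, False))
```

Comparing the mean score of vaccine-critical vs. noncritical pages is then a one-liner over two lists of coded pages.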

  16. Semantic association ranking schemes for information retrieval ...

    Indian Academy of Sciences (India)

    retrieval applications using term association graph representation ... Department of Computer Science and Engineering, Government College of ... Introduction ... leads to poor precision, e.g., model, python, and chip. ...... The approaches proposed in this paper focuses on the query-centric re-ranking of search results.

  17. Human Systems Integration Assessment of Network Centric Command and Control

    National Research Council Canada - National Science Library

    Quashnock, Dee; Kelly, Richard T; Dunaway, John; Smillie, Robert J

    2004-01-01

    .... FORCEnet is the operational construct and architectural framework for Naval Network Centric Warfare in the information age that integrates warriors, sensors, networks, command and control, platforms...

  18. Building 773-A, Lab F003 Glovebox Project Radiological Design Summary Report

    International Nuclear Information System (INIS)

    Gaul, W.C.

    2003-01-01

    Engineering Standards present the radiological design criteria and requirements that must be satisfied for all SRS facility designs. The radiological design criteria and requirements specified in the standard are based on the Code of Federal Regulations, DOE Orders, Site manuals, other applicable standards, and various DOE guides and handbooks. This report contains top-level requirements for the various areas of radiological protection for workers. For the purposes of demonstrating compliance with these requirements, the designer must examine each requirement for the design and either incorporate it or provide a technical justification as to why it is not incorporated. This document reports a radiological design review for the STREAK lab glovebox upgrades: inlet ventilation, additional mechanical and electrical services, and new glovebox instrumentation and alarms. This report demonstrates that the gloveboxes meet the radiological design requirements of the Engineering Standards.

  19. Pediatric radiology malpractice claims - characteristics and comparison to adult radiology claims

    Energy Technology Data Exchange (ETDEWEB)

    Breen, Micheal A.; Taylor, George A. [Boston Children's Hospital, Department of Radiology, Boston, MA (United States); Dwyer, Kathy; Yu-Moe, Winnie [CRICO Risk Management Foundation, Boston, MA (United States)

    2017-06-15

    Medical malpractice is the primary method by which people who believe they have suffered an injury in the course of medical care seek compensation in the United States and Canada. An increasing body of research demonstrates that failure to correctly diagnose is the most common allegation made in malpractice claims against radiologists. Since the 1994 survey by the Society of Chairmen of Radiology in Children's Hospitals (SCORCH), no other published studies have specifically examined the frequency or clinical context of malpractice claims against pediatric radiologists or arising from pediatric imaging interpretation. We hypothesize that the frequency, character and outcome of malpractice claims made against pediatric radiologists differ from those seen in general radiology practice. We searched the Controlled Risk Insurance Co. (CRICO) Strategies' Comparative Benchmarking System (CBS), a private repository of approximately 350,000 open and closed medical malpractice claims in the United States, for claims related to pediatric radiology. We further queried these cases for the major allegation, the clinical environment in which the claim arose, the clinical severity of the alleged injury, indemnity paid (if payment was made), primary imaging modality involved (if applicable) and primary International Classification of Diseases, 9th revision (ICD-9) diagnosis underlying the claim. There were a total of 27,056 fully coded claims of medical malpractice in the CBS database in the 5-year period between Jan. 1, 2010, and Dec. 31, 2014. Of these, 1,472 cases (5.4%) involved patients younger than 18 years. Radiology was the primary service responsible for 71/1,472 (4.8%) pediatric cases. There were statistically significant differences in average payout for pediatric radiology claims ($314,671) compared to adult radiology claims ($174,033). The allegations were primarily diagnosis-related in 70% of pediatric radiology claims. The most common imaging modality

  20. Pediatric radiology malpractice claims - characteristics and comparison to adult radiology claims

    International Nuclear Information System (INIS)

    Breen, Micheal A.; Taylor, George A.; Dwyer, Kathy; Yu-Moe, Winnie

    2017-01-01

    Medical malpractice is the primary method by which people who believe they have suffered an injury in the course of medical care seek compensation in the United States and Canada. An increasing body of research demonstrates that failure to correctly diagnose is the most common allegation made in malpractice claims against radiologists. Since the 1994 survey by the Society of Chairmen of Radiology in Children's Hospitals (SCORCH), no other published studies have specifically examined the frequency or clinical context of malpractice claims against pediatric radiologists or arising from pediatric imaging interpretation. We hypothesize that the frequency, character and outcome of malpractice claims made against pediatric radiologists differ from those seen in general radiology practice. We searched the Controlled Risk Insurance Co. (CRICO) Strategies' Comparative Benchmarking System (CBS), a private repository of approximately 350,000 open and closed medical malpractice claims in the United States, for claims related to pediatric radiology. We further queried these cases for the major allegation, the clinical environment in which the claim arose, the clinical severity of the alleged injury, indemnity paid (if payment was made), primary imaging modality involved (if applicable) and primary International Classification of Diseases, 9th revision (ICD-9) diagnosis underlying the claim. There were a total of 27,056 fully coded claims of medical malpractice in the CBS database in the 5-year period between Jan. 1, 2010, and Dec. 31, 2014. Of these, 1,472 cases (5.4%) involved patients younger than 18 years. Radiology was the primary service responsible for 71/1,472 (4.8%) pediatric cases. There were statistically significant differences in average payout for pediatric radiology claims ($314,671) compared to adult radiology claims ($174,033). The allegations were primarily diagnosis-related in 70% of pediatric radiology claims. The most common imaging modality implicated in

  1. Pediatric radiology malpractice claims - characteristics and comparison to adult radiology claims.

    Science.gov (United States)

    Breen, Micheál A; Dwyer, Kathy; Yu-Moe, Winnie; Taylor, George A

    2017-06-01

    Medical malpractice is the primary method by which people who believe they have suffered an injury in the course of medical care seek compensation in the United States and Canada. An increasing body of research demonstrates that failure to correctly diagnose is the most common allegation made in malpractice claims against radiologists. Since the 1994 survey by the Society of Chairmen of Radiology in Children's Hospitals (SCORCH), no other published studies have specifically examined the frequency or clinical context of malpractice claims against pediatric radiologists or arising from pediatric imaging interpretation. We hypothesize that the frequency, character and outcome of malpractice claims made against pediatric radiologists differ from those seen in general radiology practice. We searched the Controlled Risk Insurance Co. (CRICO) Strategies' Comparative Benchmarking System (CBS), a private repository of approximately 350,000 open and closed medical malpractice claims in the United States, for claims related to pediatric radiology. We further queried these cases for the major allegation, the clinical environment in which the claim arose, the clinical severity of the alleged injury, indemnity paid (if payment was made), primary imaging modality involved (if applicable) and primary International Classification of Diseases, 9th revision (ICD-9) diagnosis underlying the claim. There were a total of 27,056 fully coded claims of medical malpractice in the CBS database in the 5-year period between Jan. 1, 2010, and Dec. 31, 2014. Of these, 1,472 cases (5.4%) involved patients younger than 18 years. Radiology was the primary service responsible for 71/1,472 (4.8%) pediatric cases. There were statistically significant differences in average payout for pediatric radiology claims ($314,671) compared to adult radiology claims ($174,033). The allegations were primarily diagnosis-related in 70% of pediatric radiology claims. The most common imaging modality implicated in

  2. Search Engines for Tomorrow's Scholars

    Science.gov (United States)

    Fagan, Jody Condit

    2011-01-01

    Today's scholars face an outstanding array of search tools to choose from: Google Scholar, discipline-specific abstract and index databases, library discovery tools, and more recently, Microsoft's re-launch of their academic search tool, now dubbed Microsoft Academic Search. What are these tools' strengths for the emerging needs of…

  3. Value of lymphocyte cryo-preservation after a radiological or nuclear accident

    International Nuclear Information System (INIS)

    Laroche, P.; Lataillade, J.J.; Chambrette, V.; Voisin, Ph.

    1997-01-01

    The conventional cytogenetic method in biological dosimetry is most useful for the estimation of the received radiation dose. It scores the resulting unstable chromosomal aberrations (dicentrics, centric rings, and fragments) in peripheral blood lymphocytes. This method has been used over the past 30 years and is accepted in forensic medicine. Nevertheless, it is long and tedious. Accordingly, the number of simultaneous analyses of blood samples is limited and depends on the capacity of specialized laboratories. This capacity may be insufficient in the case of a large-scale radiological or nuclear accident. Cryo-preservation is the usual method to store cells before analysis or use, for instance for biological dosimetry purposes. Some investigations have shown that thawing following freezing may induce cell injury, but few studies have examined the effect of cryo-preservation on cells containing radiation-induced unstable chromosomal aberrations. In this work, lymphocytes were irradiated with 1 to 4 Gy gamma rays and stored in liquid nitrogen. The dicentric and centric ring yields were analysed after storage periods of 1 week, 1 month, 3 months, and 1 year. No difference in aberration frequency from control, unfrozen samples was observed over this period. Lymphocytes stored at -196 deg C for up to at least 1 year may therefore be used for chromosome aberration scoring when overexposure to ionizing radiation is suspected. (author)
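
    Dicentric and centric ring yields measured at graded doses like these are conventionally fitted with the linear-quadratic model Y = c + αD + βD², and the fitted curve is then inverted to estimate an unknown dose. A minimal sketch with NumPy; the yield values are illustrative, not the paper's data:

```python
import numpy as np

# Doses (Gy) and illustrative dicentric yields per cell.
doses = np.array([1.0, 2.0, 3.0, 4.0])
yields = np.array([0.09, 0.29, 0.61, 1.05])

# Fit Y = c + alpha*D + beta*D^2 (np.polyfit returns the coefficient
# of the highest power first).
beta, alpha, c = np.polyfit(doses, yields, 2)

def predicted_yield(d: float) -> float:
    """Dicentric yield predicted by the fitted dose-response curve."""
    return c + alpha * d + beta * d * d

print(round(predicted_yield(2.0), 2))  # 0.29
```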

  4. Evaluation of use of e-Learning in undergraduate radiology education: a review.

    Science.gov (United States)

    Zafar, Saad; Safdar, Saima; Zafar, Aasma N

    2014-12-01

    The aim of this review is to investigate the evaluative outcomes present in the literature according to Kirkpatrick's learning model and to examine the nature and characteristics of the e-Learning interventions in radiology education at the undergraduate level. Four databases (PubMed, MEDLINE, Embase, ERIC) were searched for publications related to the application of e-Learning in undergraduate radiology education. The search strategy combined e-Learning with MeSH and non-MeSH radiology- and undergraduate-related terms, established in consultation with experts of the respective domains. The full text of thirty pertinent articles was reviewed. Authors' country and study location data were extracted to identify the most active regions, and publication years were extracted to identify trends over time. Data regarding radiology subfields and the undergraduate year of radiology education were extracted along with e-Learning technologies, to identify the most prevalent or suitable technologies or tools with respect to radiology content. Kirkpatrick's learning evaluation model was used to categorize the evaluative outcomes reported in the identified studies. The results of this analysis reveal the emergence of highly interactive games, audience response systems, and the design of a wide range of tools customized according to learner needs assessment in radiology education at the undergraduate level. All these initiatives are leading toward highly interactive, self-directed learning environments that support the idea of life-long independent learners. Moreover, the majority of the studies in the literature regarding e-Learning in radiology at the undergraduate level are based on participant satisfaction, followed by participant results or outcomes either before or after an intervention or both. There was no research specifically demonstrating performance change in clinical practice or patient outcome, as these may be difficult to measure in medical education. Thus clinical competences and performances are

  5. Usability evaluation of an experimental text summarization system and three search engines: implications for the reengineering of health care interfaces.

    Science.gov (United States)

    Kushniruk, Andre W; Kan, Min-Yen; McKeown, Kathleen; Klavans, Judith; Jordan, Desmond; LaFlamme, Mark; Patel, Vimla L

    2002-01-01

    This paper describes the comparative evaluation of an experimental automated text summarization system, Centrifuser, and three conventional search engines: Google, Yahoo, and About.com. Centrifuser provides information to patients and families relevant to their questions about specific health conditions. It produces a multidocument summary of articles retrieved by a standard search engine, tailored to the user's question. Subjects, consisting of friends or family of hospitalized patients, were asked to "think aloud" as they interacted with the four systems. The evaluation involved audio- and video-recording of subject interactions with the interfaces in situ at a hospital. Results of the evaluation show that subjects found Centrifuser's summarization capability useful and easy to understand. In comparing Centrifuser to the three search engines, subjects' ratings varied; however, specific interface features were deemed useful across interfaces. We conclude with a discussion of the implications for engineering Web-based retrieval systems.

  6. Residential Consumer-Centric Demand-Side Management Based on Energy Disaggregation-Piloting Constrained Swarm Intelligence: Towards Edge Computing.

    Science.gov (United States)

    Lin, Yu-Hsiu; Hu, Yu-Chen

    2018-04-27

    The emergence of smart Internet of Things (IoT) devices has highly favored the realization of smart homes in a down-stream sector of a smart grid. The underlying objective of Demand Response (DR) schemes is to actively engage customers to modify their energy consumption on domestic appliances in response to pricing signals. Domestic appliance scheduling is widely accepted as an effective mechanism to manage domestic energy consumption intelligently. Moreover, for residential customers implementing DR, maintaining a balance between energy consumption cost and users' comfort satisfaction is a challenge. Hence, in this paper, a constrained Particle Swarm Optimization (PSO)-based residential consumer-centric load-scheduling method is proposed. The method can be further featured with edge computing. In contrast with cloud computing, edge computing—a method of optimizing cloud computing technologies by driving computing capabilities to the IoT edge of the Internet, and one of the emerging trends in engineering technology—addresses bandwidth-intensive content and latency-sensitive applications required among sensors and central data centers through data analytics at or near the source of data. A non-intrusive load-monitoring technique proposed previously is utilized for automatic determination of the physical characteristics of power-intensive home appliances from users' life patterns. The swarm intelligence, constrained PSO, is used to minimize the energy consumption cost while considering users' comfort satisfaction for DR implementation. The residential consumer-centric load-scheduling method proposed in this paper is evaluated under real-time pricing with inclining block rates and is demonstrated in a case study. The experimentation reported in this paper shows the proposed residential consumer-centric load-scheduling method can re-shape loads by home appliances in response to DR signals. Moreover, a phenomenal reduction in peak power consumption is achieved.
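
    The constrained-PSO scheduling idea can be sketched in a few lines: each particle encodes a candidate appliance start hour, the fitness is the tariff-weighted energy cost, and the user's comfort window is enforced with a penalty term. Everything below (prices, comfort window, appliance load) is a made-up toy instance, not the paper's model:

```python
import random

random.seed(42)

PRICES = [0.08]*7 + [0.15]*10 + [0.25]*5 + [0.10]*2  # 24 hourly rates ($/kWh)
RUN_HOURS, LOAD_KW = 2, 1.5   # appliance runs 2 h drawing 1.5 kW
COMFORT = range(17, 22)       # user-acceptable start hours
PENALTY = 10.0                # cost penalty for violating comfort

def cost(start: float) -> float:
    """Tariff-weighted energy cost plus comfort-violation penalty."""
    s = int(round(start)) % 24
    energy = sum(PRICES[(s + h) % 24] for h in range(RUN_HOURS)) * LOAD_KW
    return energy + (0.0 if s in COMFORT else PENALTY)

def pso(n=30, iters=60, w=0.7, c1=1.5, c2=1.5):
    """Constrained PSO over the start hour (the penalty handles the constraint)."""
    pos = [random.uniform(0, 23) for _ in range(n)]
    vel = [0.0] * n
    pbest = pos[:]
    gbest = min(pos, key=cost)
    for _ in range(iters):
        for i in range(n):
            r1, r2 = random.random(), random.random()
            vel[i] = w*vel[i] + c1*r1*(pbest[i]-pos[i]) + c2*r2*(gbest-pos[i])
            pos[i] = min(23.0, max(0.0, pos[i] + vel[i]))
            if cost(pos[i]) < cost(pbest[i]):
                pbest[i] = pos[i]
            if cost(pos[i]) < cost(gbest):
                gbest = pos[i]
    return int(round(gbest)) % 24

print(pso())  # a start hour inside the comfort window
```

    The penalty makes any comfort-violating schedule dominate in cost, so the swarm converges to the cheapest start hour the user will accept.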

  7. Residential Consumer-Centric Demand-Side Management Based on Energy Disaggregation-Piloting Constrained Swarm Intelligence: Towards Edge Computing

    Science.gov (United States)

    Hu, Yu-Chen

    2018-01-01

    The emergence of smart Internet of Things (IoT) devices has highly favored the realization of smart homes in a down-stream sector of a smart grid. The underlying objective of Demand Response (DR) schemes is to actively engage customers to modify their energy consumption on domestic appliances in response to pricing signals. Domestic appliance scheduling is widely accepted as an effective mechanism to manage domestic energy consumption intelligently. Moreover, for residential customers implementing DR, maintaining a balance between energy consumption cost and users' comfort satisfaction is a challenge. Hence, in this paper, a constrained Particle Swarm Optimization (PSO)-based residential consumer-centric load-scheduling method is proposed. The method can be further featured with edge computing. In contrast with cloud computing, edge computing—a method of optimizing cloud computing technologies by driving computing capabilities to the IoT edge of the Internet, and one of the emerging trends in engineering technology—addresses bandwidth-intensive content and latency-sensitive applications required among sensors and central data centers through data analytics at or near the source of data. A non-intrusive load-monitoring technique proposed previously is utilized for automatic determination of the physical characteristics of power-intensive home appliances from users' life patterns. The swarm intelligence, constrained PSO, is used to minimize the energy consumption cost while considering users' comfort satisfaction for DR implementation. The residential consumer-centric load-scheduling method proposed in this paper is evaluated under real-time pricing with inclining block rates and is demonstrated in a case study. The experimentation reported in this paper shows the proposed residential consumer-centric load-scheduling method can re-shape loads by home appliances in response to DR signals. Moreover, a phenomenal reduction in peak power consumption is achieved.

  8. Residential Consumer-Centric Demand-Side Management Based on Energy Disaggregation-Piloting Constrained Swarm Intelligence: Towards Edge Computing

    Directory of Open Access Journals (Sweden)

    Yu-Hsiu Lin

    2018-04-01

    Full Text Available The emergence of smart Internet of Things (IoT) devices has highly favored the realization of smart homes in a down-stream sector of a smart grid. The underlying objective of Demand Response (DR) schemes is to actively engage customers to modify their energy consumption on domestic appliances in response to pricing signals. Domestic appliance scheduling is widely accepted as an effective mechanism to manage domestic energy consumption intelligently. Moreover, for residential customers implementing DR, maintaining a balance between energy consumption cost and users' comfort satisfaction is a challenge. Hence, in this paper, a constrained Particle Swarm Optimization (PSO)-based residential consumer-centric load-scheduling method is proposed. The method can be further featured with edge computing. In contrast with cloud computing, edge computing—a method of optimizing cloud computing technologies by driving computing capabilities to the IoT edge of the Internet, and one of the emerging trends in engineering technology—addresses bandwidth-intensive content and latency-sensitive applications required among sensors and central data centers through data analytics at or near the source of data. A non-intrusive load-monitoring technique proposed previously is utilized for automatic determination of the physical characteristics of power-intensive home appliances from users' life patterns. The swarm intelligence, constrained PSO, is used to minimize the energy consumption cost while considering users' comfort satisfaction for DR implementation. The residential consumer-centric load-scheduling method proposed in this paper is evaluated under real-time pricing with inclining block rates and is demonstrated in a case study. The experimentation reported in this paper shows the proposed residential consumer-centric load-scheduling method can re-shape loads by home appliances in response to DR signals. Moreover, a phenomenal reduction in peak power consumption is achieved.

  9. Using search engine query data to track pharmaceutical utilization: a study of statins.

    Science.gov (United States)

    Schuster, Nathaniel M; Rogers, Mary A M; McMahon, Laurence F

    2010-08-01

    To examine temporal and geographic associations between Google queries for health information and healthcare utilization benchmarks. Retrospective longitudinal study. Using Google Trends and Google Insights for Search data, the search terms Lipitor (atorvastatin calcium; Pfizer, Ann Arbor, MI) and simvastatin were evaluated for change over time and for association with Lipitor revenues. The relationship between query data and community-based resource use per Medicare beneficiary was assessed for 35 US metropolitan areas. Google queries for Lipitor significantly decreased from January 2004 through June 2009, while queries for simvastatin significantly increased over the same period, which spanned the expiration of the simvastatin patent. Lipitor query volume was associated with Lipitor global revenues from 2004 to 2008. These findings indicate that search engine queries for medical information correlate with pharmaceutical revenue and with overall healthcare utilization in a community. This suggests that search query data can track community-wide characteristics in healthcare utilization and have the potential for informing payers and policy makers regarding trends in utilization.

  10. Metadata Effectiveness in Internet Discovery: An Analysis of Digital Collection Metadata Elements and Internet Search Engine Keywords

    Science.gov (United States)

    Yang, Le

    2016-01-01

    This study analyzed digital item metadata and keywords from Internet search engines to learn what metadata elements actually facilitate discovery of digital collections through Internet keyword searching and how significantly each metadata element affects the discovery of items in a digital repository. The study found that keywords from Internet…

  11. Google and Women's Health-Related Issues: What Does the Search Engine Data Reveal?

    Science.gov (United States)

    Baazeem, Mazin; Abenhaim, Haim

    2014-01-01

    Identifying the gaps in public knowledge of women's health-related issues has always been difficult. With the increasing number of Internet users in the United States, we sought to use the Internet as a tool to help us identify such gaps and to estimate women's most prevalent health concerns by examining commonly searched health-related keywords in the Google search engine. We collected a large pool of possible search keywords from two independent practicing obstetrician/gynecologists, classified them into five main categories (obstetrics, gynecology, infertility, urogynecology/menopause, and oncology), and measured the monthly average search volume within the United States for each keyword with all its possible combinations using the Google AdWords tool. We found that pregnancy-related keywords were less frequently searched in general compared to other categories, with an average of 145,400 hits per month for the top twenty keywords. Among the most common pregnancy-related keywords was "pregnancy and sex," while pregnancy-related diseases were uncommonly searched. HPV alone was searched 305,400 times per month. Of the cancers affecting women, breast cancer was the most commonly searched, with an average of 247,190 searches per month, followed by cervical cancer and then ovarian cancer. The commonly searched keywords are often issues that are not discussed in our daily practice or in public health messages. Search volume is broadly related to disease prevalence, with the exception of ovarian cancer, which could signify public fear.
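
    Aggregating per-keyword monthly volumes into the five categories is a one-pass reduction. A minimal sketch; the keyword rows are hypothetical stand-ins for AdWords output (only the HPV and breast cancer figures echo the abstract, the rest are made up):

```python
from collections import defaultdict

# (keyword, category, avg monthly US searches); volumes for "hpv" and
# "breast cancer" echo the abstract, the others are invented.
rows = [
    ("pregnancy and sex", "obstetrics", 60000),
    ("morning sickness",  "obstetrics", 40000),
    ("hpv",               "gynecology", 305400),
    ("breast cancer",     "oncology",   247190),
    ("cervical cancer",   "oncology",   90000),
]

volume = defaultdict(int)
for _keyword, category, monthly in rows:
    volume[category] += monthly

# Categories ranked by total monthly search volume.
ranked = sorted(volume.items(), key=lambda kv: kv[1], reverse=True)
print(ranked[0][0])  # oncology
```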

  12. Architecture design in global and model-centric software development

    NARCIS (Netherlands)

    Heijstek, Werner

    2012-01-01

    This doctoral dissertation describes a series of empirical investigations into representation, dissemination and coordination of software architecture design in the context of global software development. A particular focus is placed on model-centric and model-driven software development.

  13. Synovial sarcoma | Vlok | SA Journal of Radiology

    African Journals Online (AJOL)

    SA Journal of Radiology. Journal Home · ABOUT THIS JOURNAL · Advanced Search · Current Issue · Archives · Journal Home > Vol 18, No 2 (2014) >. Log in or Register to get access to full text downloads.

  14. Mandibulofacial dysostosis | Els | SA Journal of Radiology

    African Journals Online (AJOL)

    SA Journal of Radiology. Journal Home · ABOUT THIS JOURNAL · Advanced Search · Current Issue · Archives · Journal Home > Vol 16, No 1 (2012) >. Log in or Register to get access to full text downloads.

  15. Cerebral schistosomiasis | Ravi | SA Journal of Radiology

    African Journals Online (AJOL)

    SA Journal of Radiology. Journal Home · ABOUT THIS JOURNAL · Advanced Search · Current Issue · Archives · Journal Home > Vol 17, No 4 (2013) >. Log in or Register to get access to full text downloads.

  16. Pseudomyxoma peritonei | Sureka | SA Journal of Radiology

    African Journals Online (AJOL)

    SA Journal of Radiology. Journal Home · ABOUT THIS JOURNAL · Advanced Search · Current Issue · Archives · Journal Home > Vol 18, No 1 (2014) >. Log in or Register to get access to full text downloads.

  17. Torus palatinus | Naidoo | SA Journal of Radiology

    African Journals Online (AJOL)

    SA Journal of Radiology. Journal Home · ABOUT THIS JOURNAL · Advanced Search · Current Issue · Archives · Journal Home > Vol 17, No 4 (2013) >. Log in or Register to get access to full text downloads.

  18. Medical negligence | Otto | SA Journal of Radiology

    African Journals Online (AJOL)

    SA Journal of Radiology. Journal Home · ABOUT THIS JOURNAL · Advanced Search · Current Issue · Archives · Journal Home > Vol 8, No 2 (2004) >. Log in or Register to get access to full text downloads.

  19. Cardiothoracic radiology

    International Nuclear Information System (INIS)

    Scarsbrook, A.F.; Graham, R.N.J.; Perriss, R.W.

    2005-01-01

    A wealth of cardiothoracic websites exists on the internet. What follows is a list of the higher-quality resources currently available, which should save you time searching them out for yourself. Many of the sites listed cater for undergraduates and trainee or non-specialist radiologists; nevertheless, they may also be of interest to specialists in thoracic radiology, particularly for use in teaching. Hyperlinks are available in the electronic version of this article and were all active at the time of going to press (April 2005)

  20. The Effects of Fatigue From Overnight Shifts on Radiology Search Patterns and Diagnostic Performance.

    Science.gov (United States)

    Hanna, Tarek N; Zygmont, Matthew E; Peterson, Ryan; Theriot, David; Shekhani, Haris; Johnson, Jamlik-Omari; Krupinski, Elizabeth A

    2018-01-20

    The aim of this study was to assess the effect of overnight shifts (ONS) on radiologist fatigue, visual search pattern, and diagnostic performance. This experimental study was approved by the institutional review board. Twelve radiologists (five faculty members and seven residents) each completed two sessions: one during a normal workday ("not fatigued") and another in the morning after an ONS ("fatigued"). Each radiologist completed the Swedish Occupational Fatigue Inventory. During each session, radiologists viewed 20 bone radiographs consisting of normal and abnormal findings. Viewing time, diagnostic confidence, and eye-tracking data were recorded. Swedish Occupational Fatigue Inventory results demonstrated worsening in all five variables (lack of energy, physical exertion, physical discomfort, lack of motivation, and sleepiness) after ONS. When fatigued, radiologists showed worse diagnostic performance, a 45% increase in view time per case, a 60% increase in total gaze fixations, and a 34% increase in time to fixate on the fracture. The effects of fatigue were more pronounced in residents. Copyright © 2017 American College of Radiology. Published by Elsevier Inc. All rights reserved.

  1. RSL-Nellis Analysis of Radiological Data from the Las Vegas Motor Speedway

    International Nuclear Information System (INIS)

    Wasiolek, Piotr; Okada, Colin

    2009-01-01

    The Department of Energy (DOE) National Nuclear Security Administration (NNSA) is responsible for maintaining national-level radiological emergency response assets and for deploying those assets in the event of a radiological accident or incident. NNSA's emergency response assets include the Aerial Measuring System (AMS) and Search Response Team (SRT) operated by the Remote Sensing Laboratory (RSL) located at Nellis Air Force Base, Las Vegas, Nevada, and Andrews Air Force Base in Suitland, Maryland. The AMS mission is to provide a rapid response to radiological emergencies with helicopters and fixed-wing aircraft equipped to detect and measure radioactive material on the ground. The acquired data are stored and used to produce maps of radiation exposure and activity concentration per unit area. AMS also conducts surveys to create radiological baselines of major cities and nuclear facilities throughout the country. The SRT's Crisis Response (CR) mission is to provide scientific, technical, and operational support for NNSA-directed search activities involving radiological search and identification, through both field deployments and home team and reachback support. The AMS radiological mapping and SRT CR emergency response teams from the Remote Sensing Laboratory-Nellis (RSL-N) supported safety and security activities at a NASCAR race at the Las Vegas Motor Speedway (LVMS) on February 27-March 1, 2009. In support of this event, and for proficiency training, these teams carried out aerial and ground surveys of the area. The aerial radiological survey of the LVMS was conducted on February 17, 2009, and the CR ground survey was performed on February 23, 2009. RSL-N operations and appropriate law enforcement agencies selected this area as a suitable location to exercise AMS and CR capabilities for mapping environmental radiation and searching for man-made radioactive sources. The aerial surveys covered approximately 11 square miles. The survey required a 2.5-hour flight.

  2. Sagace: A web-based search engine for biomedical databases in Japan

    Directory of Open Access Journals (Sweden)

    Morita Mizuki

    2012-10-01

    Full Text Available Abstract Background In the big data era, biomedical research continues to generate a large amount of data, and the generated information is often stored in a database and made publicly available. Although combining data from multiple databases should accelerate further studies, the current number of life sciences databases is too large to grasp the features and contents of each database. Findings We have developed Sagace, a web-based search engine that enables users to retrieve information from a range of biological databases (such as gene expression profiles and proteomics data) and biological resource banks (such as mouse models of disease and cell lines). With Sagace, users can search more than 300 databases in Japan. Sagace offers features tailored to biomedical research, including manually tuned ranking, faceted navigation to refine search results, and rich snippets constructed with retrieved metadata for each database entry. Conclusions Sagace will be valuable for experts who are involved in biomedical research and drug development in both academia and industry. Sagace is freely available at http://sagace.nibio.go.jp/en/.
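
    Faceted navigation of the kind Sagace offers amounts to counting facet values over the current result set and filtering on a selected value. A minimal sketch; the entries and facet names are invented examples, not Sagace's actual schema or API:

```python
from collections import Counter

# Hypothetical database entries with facet metadata.
entries = [
    {"name": "expression profiles", "type": "gene expression", "org": "mouse"},
    {"name": "proteome set",        "type": "proteomics",      "org": "human"},
    {"name": "disease models",      "type": "resource bank",   "org": "mouse"},
    {"name": "cell lines",          "type": "resource bank",   "org": "human"},
]

def facet_counts(results, facet):
    """Counts shown next to each facet value in the navigation sidebar."""
    return Counter(e[facet] for e in results)

def refine(results, facet, value):
    """Narrow the result list by one facet selection."""
    return [e for e in results if e[facet] == value]

print(facet_counts(entries, "org"))        # Counter({'mouse': 2, 'human': 2})
print(len(refine(entries, "org", "mouse")))  # 2
```

    Chaining `refine` calls reproduces the drill-down behavior: each selection narrows the set the next facet counts are computed over.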

  3. Comparison of the most popular Czech and German lexemes in the global Internet search engine Google

    Directory of Open Access Journals (Sweden)

    Dana Gálová

    2017-11-01

    Full Text Available Google, the most widely used search engine, represents the most current language databank in the world. It annually publishes statistics of the most popular words, which indicate the interest preferences of a particular population in a given year. The aim of this paper is a comparative analysis of the most popular lexemes in the Czech and German versions of the Google search engine in 2015. Segmenting the lexemes into thematic fields enabled the identification of correspondences and differences, and thus a comparison of the interest preferences of the Czech and German populations. The paper also offers an interpretation of these differences and a reflection on the causes of the differing extents of civic engagement in the two populations.

  4. The Evolution of Web Searching.

    Science.gov (United States)

    Green, David

    2000-01-01

    Explores the interrelation between Web publishing and information retrieval technologies and lists new approaches to Web indexing and searching. Highlights include Web directories; search engines; portalisation; Internet service providers; browser providers; meta search engines; popularity based analysis; natural language searching; links-based…

  5. Management of oral and maxillofacial radiological images

    International Nuclear Information System (INIS)

    Kim, Eun Kyung

    2002-01-01

    To implement a database system for oral and maxillofacial radiological images using commercial medical image management software together with a personally developed classification code. The image database was built using slightly modified commercial medical image management software, Dr. Image v.2.1 (Bit Computer Co., Korea). A wildcard ('*') capability was added to the program's search function. Diagnosis classification codes were written as a three-digit number, with a radiographic technique classification letter immediately following the diagnosis code. 449 radiological films of 218 cases from January 2000 to December 2000, which had been specially stored for demonstration and education at the Dept. of OMF Radiology of Dankook University Dental Hospital, were scanned together with each patient's information. Cases could be efficiently accessed and analyzed by using the classification code. Search and statistics results were easily obtained according to sex, age, disease diagnosis and radiographic technique. Efficient image management was possible with this image database system. Application of this system to other departments or to personal image management can be made possible by utilizing an appropriate classification code system.
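
A code scheme of this shape (a three-digit diagnosis number followed by a technique letter) lends itself to simple wildcard matching. As a minimal sketch, assuming a hypothetical code table (the codes and labels below are invented, not the actual Dankook scheme) and Python's standard fnmatch module:

```python
from fnmatch import fnmatch

# Hypothetical records keyed by classification code: three digits for the
# diagnosis, then a letter for the radiographic technique (illustrative only).
records = {
    "101a": "odontogenic cyst / panoramic",
    "101b": "odontogenic cyst / periapical",
    "203a": "osteomyelitis / panoramic",
}

def search(pattern):
    """Return record descriptions whose code matches the wildcard pattern."""
    return [desc for code, desc in sorted(records.items()) if fnmatch(code, pattern)]

print(search("101*"))  # every technique filed under diagnosis code 101
print(search("*a"))    # every record taken with technique 'a'
```

The '*' search then retrieves either all techniques for one diagnosis (`"101*"`) or all diagnoses imaged with one technique (`"*a"`), which mirrors the per-diagnosis and per-technique statistics described in the abstract.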

  6. Revisiting the Robustness of PET-Based Textural Features in the Context of Multi-Centric Trials.

    Science.gov (United States)

    Bailly, Clément; Bodet-Milin, Caroline; Couespel, Solène; Necib, Hatem; Kraeber-Bodéré, Françoise; Ansquer, Catherine; Carlier, Thomas

    2016-01-01

    This study aimed to investigate the variability of textural features (TF) as a function of acquisition and reconstruction parameters within the context of multi-centric trials. The robustness of 15 selected TFs was studied as a function of the number of iterations, the post-filtering level, input data noise, the reconstruction algorithm and the matrix size. A combination of several reconstruction and acquisition settings was devised to mimic multi-centric conditions. We retrospectively studied data from 26 patients enrolled in a diagnostic study that aimed to evaluate the performance of 68Ga-DOTANOC PET/CT in gastro-entero-pancreatic neuroendocrine tumors. Forty-one tumors were extracted and served as the database. The coefficient of variation (COV) or the absolute deviation (for the noise study) was derived and compared statistically with SUVmax and SUVmean results. The majority of the investigated TFs can be used in a multi-centric context when each parameter is considered individually. The impact of voxel size and of noise in the input data was predominant, as only 4 TFs presented high or intermediate robustness relative to SUV-based metrics (Entropy, Homogeneity, RP and ZP). When combining several reconstruction settings to mimic multi-centric conditions, most of the investigated TFs were robust enough relative to SUVmax, except Correlation, Contrast, LGRE, LGZE and LZLGE. Considering previously published results on either reproducibility or sensitivity to the delineation approach, together with our findings, it is feasible to consider Homogeneity, Entropy, Dissimilarity, HGRE, HGZE and ZP as relevant for use in multi-centric trials.
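
The robustness metric named in the abstract, the coefficient of variation (COV), is simply the standard deviation relative to the mean. A minimal sketch with invented feature values (not data from the study), contrasting a stable and a sensitive texture feature across four hypothetical reconstruction settings:

```python
from statistics import mean, stdev

def cov(values):
    """Coefficient of variation in percent: sample std dev relative to the mean."""
    return 100.0 * stdev(values) / mean(values)

# A feature that is stable across reconstruction settings has a low COV...
stable_feature = [6.10, 6.05, 6.12, 6.08]
# ...while a sensitive one varies strongly between settings.
sensitive_feature = [140.0, 210.0, 95.0, 180.0]

print(f"stable feature COV:    {cov(stable_feature):.1f}%")
print(f"sensitive feature COV: {cov(sensitive_feature):.1f}%")
```

A low COV across settings is what qualifies a feature like Entropy or Homogeneity as usable in a multi-centric context; a high COV flags features such as Contrast as setting-dependent.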

  7. Stochastic background search correlating ALLEGRO with LIGO engineering data

    International Nuclear Information System (INIS)

    Whelan, John T; Daw, Edward; Heng, Ik Siong; McHugh, Martin P; Lazzarini, Albert

    2003-01-01

    We describe the role of correlation measurements between the LIGO interferometer in Livingston, LA, and the ALLEGRO resonant bar detector in Baton Rouge, LA, in searches for a stochastic background of gravitational waves. Such measurements provide a valuable complement to correlations between interferometers at the two LIGO sites, since they are sensitive in a different, higher, frequency band. Additionally, the variable orientation of the ALLEGRO detector provides a means to distinguish gravitational wave correlations from correlated environmental noise. We describe the analysis underway to set a limit on the strength of a stochastic background at frequencies near 900 Hz using ALLEGRO data and data from LIGO's E7 Engineering Run.

  8. In-depth analysis of protein inference algorithms using multiple search engines and well-defined metrics.

    Science.gov (United States)

    Audain, Enrique; Uszkoreit, Julian; Sachsenberg, Timo; Pfeuffer, Julianus; Liang, Xiao; Hermjakob, Henning; Sanchez, Aniel; Eisenacher, Martin; Reinert, Knut; Tabb, David L; Kohlbacher, Oliver; Perez-Riverol, Yasset

    2017-01-06

    In mass spectrometry-based shotgun proteomics, protein identifications are usually the desired result. However, most of the analytical methods are based on the identification of reliable peptides and not the direct identification of intact proteins. Thus, assembling peptides identified from tandem mass spectra into a list of proteins, referred to as protein inference, is a critical step in proteomics research. Currently, different protein inference algorithms and tools are available to the proteomics community. Here, we evaluated five software tools for protein inference (PIA, ProteinProphet, Fido, ProteinLP, MSBayesPro) using three popular database search engines: Mascot, X!Tandem, and MS-GF+. All the algorithms were evaluated using a highly customizable KNIME workflow on four different public datasets of varying complexity (different sample preparations, species and analytical instruments). We defined a set of quality control metrics to evaluate the performance of each combination of search engine, protein inference algorithm, and parameters on each dataset. We show that the results for complex samples vary not only in the actual numbers of reported protein groups but also in the actual composition of the groups. Furthermore, the robustness of reported proteins when using databases of differing complexities is strongly dependent on the applied inference algorithm. Finally, merging the identifications of multiple search engines does not necessarily increase the number of reported proteins, but does increase the number of peptides per protein and thus can generally be recommended. Protein inference is one of the major challenges in MS-based proteomics nowadays. Currently, there is a vast number of protein inference algorithms and implementations available to the proteomics community. Protein assembly impacts the final results of the research, the quantitation values and the final claims of the research manuscript. Even though protein
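
The merging result reported above (roughly the same proteins, but more peptides per protein) can be illustrated with a toy example; the engine outputs below are invented for illustration, not data from the study:

```python
# Each engine reports (protein, peptide) identifications; merging takes the
# union across engines (a simplification of real peptide-level FDR handling).
engine_hits = {
    "Mascot":   {("P1", "PEPTIDEA"), ("P1", "PEPTIDEB"), ("P2", "PEPTIDEC")},
    "X!Tandem": {("P1", "PEPTIDEA"), ("P2", "PEPTIDEC"), ("P2", "PEPTIDED")},
    "MS-GF+":   {("P1", "PEPTIDEB"), ("P2", "PEPTIDEC")},
}

merged = set().union(*engine_hits.values())
proteins = {prot for prot, _ in merged}
peptides_per_protein = {p: sum(1 for prot, _ in merged if prot == p)
                        for p in proteins}

print(sorted(proteins))         # same protein list as any single engine
print(peptides_per_protein)     # but more peptide evidence per protein
```

No single engine saw more than two proteins, and the union does not add any; what grows is the peptide support behind each protein, which is the basis for the abstract's recommendation to merge.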

  9. Dictionary of radiological engineering. [English, French, and German]. Fachwoerterbuch der radiologischen Technik

    Energy Technology Data Exchange (ETDEWEB)

    Neuder, G F; Ullrich, H M

    1980-01-01

    In the present book an attempt has been made to record the current terminology in the field of radiological technology, for the present in the three languages English, German and French. It is hoped that this will contribute to a world-wide understanding amongst all those practising radiology. This is meant to include in an equal way radiologists in hospitals and private practice as well as those engaged in planning, developing, manufacturing or distributing radiological units. (orig.)

  10. Search engine advertisements : The impact of advertising statements on click-through and conversion rates

    NARCIS (Netherlands)

    Haans, A.J.; Raassens, N.; van Hout, R.M.W.M.

    2013-01-01

    Search engine advertising has emerged as the predominant form of advertising on the Internet. Despite its increasing importance, academic research on this topic is scarce. Several authors have called for more research on how the content of an ad influences its evaluation. This exploratory study

  11. A survey on visual information search behavior and requirements of radiologists.

    Science.gov (United States)

    Markonis, D; Holzer, M; Dungs, S; Vargas, A; Langs, G; Kriewel, S; Müller, H

    2012-01-01

    The main objective of this study is to learn more about the image use and search requirements of radiologists. These requirements will then be taken into account to develop a new search system for images and associated metadata in the Khresmoi project. Observations of the radiology workflow, case discussions and a literature review were performed to construct a survey form that was given to radiologists online and in paper form. Eye tracking was performed on a radiology viewing station to analyze typical tasks and to complement the survey. In total, 34 radiologists answered the survey online or on paper. Image search was mentioned as a frequent and common task, particularly for finding cases of interest for differential diagnosis. Sources of information besides the Internet are books and discussions with colleagues. Searches for images are unsuccessful in around 25% of cases, with the search being abandoned after around 10 minutes. The most common reason for failure is that target images are considered rare. Important additions to search requested in the survey are filtering by pathology and modality, as well as search for visually similar images and cases. Few radiologists are familiar with visual retrieval, but they desire the option to upload images to search for similar ones. Image search is common in radiology, but few radiologists are fully aware of visual information retrieval. Taking into account the many unsuccessful searches and the time spent on them, a good image search system could improve the situation and help in clinical practice.

  12. Comparative case study on website traffic generated by search engine optimisation and a pay-per-click campaign, versus marketing expenditure

    Directory of Open Access Journals (Sweden)

    Wouter T. Kritzinger

    2015-09-01

    Full Text Available Background: No empirical work was found on how marketing expenses compare when used solely for one or the other of the two main types of search engine marketing. Objectives: This research set out to determine how the results of the implementation of a pay-per-click campaign compared to those of a search engine optimisation campaign, given the same website and environment. At the same time, the expenses incurred on both these marketing methods were recorded and compared. Method: The active website of an existing, successful e-commerce concern was used as the platform. The company had been using pay-per-click only for a period, whilst traffic was monitored. This system was decommissioned on a particular date and time, and an alternative search engine optimisation system was started at the same time. Again, both traffic and expenses were monitored. Results: The results indicate that the pay-per-click system did produce favourable results, but on the condition that a monthly fee had to be set aside to guarantee consistent traffic. The implementation of search engine optimisation required a relatively large investment at the outset, but it was once-off. After a drop in traffic owing to crawler visitation delays, the website traffic bypassed the average figure achieved during the pay-per-click period after a little over three months, whilst the expenditure crossed over after just six months. Conclusion: Whilst considering the specific parameters of this study, an investment in search engine optimisation rather than a pay-per-click campaign appears to produce better results at a lower cost, after a given period of time.

  13. L1000CDS2: LINCS L1000 characteristic direction signatures search engine.

    Science.gov (United States)

    Duan, Qiaonan; Reid, St Patrick; Clark, Neil R; Wang, Zichen; Fernandez, Nicolas F; Rouillard, Andrew D; Readhead, Ben; Tritsch, Sarah R; Hodos, Rachel; Hafner, Marc; Niepel, Mario; Sorger, Peter K; Dudley, Joel T; Bavari, Sina; Panchal, Rekha G; Ma'ayan, Avi

    2016-01-01

    The library of integrated network-based cellular signatures (LINCS) L1000 data set currently comprises over a million gene expression profiles of chemically perturbed human cell lines. Through several unique intrinsic and extrinsic benchmarking schemes, we demonstrate that processing the L1000 data with the characteristic direction (CD) method significantly improves signal-to-noise compared with the MODZ method currently used to compute L1000 signatures. The CD-processed L1000 signatures are served through a state-of-the-art web-based search engine application called L1000CDS2. The L1000CDS2 search engine provides prioritization of thousands of small-molecule signatures, and their pairwise combinations, predicted to either mimic or reverse an input gene expression signature using two methods. The L1000CDS2 search engine also predicts drug targets for all the small molecules profiled by the L1000 assay that we processed. Targets are predicted by computing the cosine similarity between the L1000 small-molecule signatures and a large collection of signatures extracted from the gene expression omnibus (GEO) for single-gene perturbations in mammalian cells. We applied L1000CDS2 to prioritize small molecules that are predicted to reverse expression in 670 disease signatures also extracted from GEO, and prioritized small molecules that can mimic expression of 22 endogenous ligand signatures profiled by the L1000 assay. As a case study, to further demonstrate the utility of L1000CDS2, we collected expression signatures from human cells infected with Ebola virus at 30, 60 and 120 min. Querying these signatures with L1000CDS2, we identified kenpaullone, a GSK3B/CDK2 inhibitor that we show, in subsequent experiments, has a dose-dependent efficacy in inhibiting Ebola infection in vitro without causing cellular toxicity in human cell lines. In summary, the L1000CDS2 tool can be applied in many biological and biomedical settings, while improving the extraction of
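
The matching step described above ranks signatures by cosine similarity to a query signature. A minimal sketch with made-up expression vectors (the compound names and values are hypothetical, not L1000 data): to reverse a disease signature, the most negative similarity wins.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length expression vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

query = [1.0, -2.0, 0.5, 3.0]  # disease signature we want to reverse
library = {
    "compound_A": [-1.0, 2.0, -0.5, -3.0],  # points the opposite way
    "compound_B": [1.1, -1.9, 0.4, 2.8],    # points the same way
}

# Rank compounds from most-reversing (similarity near -1) upward.
ranked = sorted(library, key=lambda name: cosine(query, library[name]))
print(ranked[0])  # compound_A
```

Mimicking an input signature is the same computation with the opposite sort order: the compound whose similarity is closest to +1 ranks first.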

  14. Learning from diagnostic errors: A good way to improve education in radiology

    Energy Technology Data Exchange (ETDEWEB)

    Pinto, Antonio, E-mail: antopin1968@libero.it [Department of Diagnostic Imaging, A. Cardarelli Hospital, I-80131 Naples (Italy); Acampora, Ciro, E-mail: itrasente@libero.it [Department of Diagnostic Imaging, A. Cardarelli Hospital, I-80131 Naples (Italy); Pinto, Fabio, E-mail: fpinto1966@libero.it [Department of Diagnostic Imaging, A. Cardarelli Hospital, I-80131 Naples (Italy); Kourdioukova, Elena, E-mail: Elena.Kourdioukova@UGent.be [Department of Radiology, Ghent University Hospital (UZG), MR/-1K12, De Pintelaan 185, B-9000 Ghent (Belgium); Romano, Luigia, E-mail: luigia.romano@fastwebnet.it [Department of Diagnostic Imaging, A. Cardarelli Hospital, I-80131 Naples (Italy); Verstraete, Koenraad, E-mail: Koenraad.Verstraete@UGent.be [Department of Radiology, Ghent University Hospital (UZG), MR/-1K12, De Pintelaan 185, B-9000 Ghent (Belgium)

    2011-06-15

    Purpose: To evaluate the causes and the main categories of diagnostic errors in radiology as a method for improving education in radiology. Material and methods: A Medline search was performed using PubMed (National Library of Medicine, Bethesda, MD) for original research publications discussing errors in diagnosis with specific reference to radiology. The search strategy employed different combinations of the following terms: (1) diagnostic radiology, (2) radiological error and (3) medical negligence. This review was limited to human studies and to English-language literature. Two authors reviewed all the titles and subsequently the abstracts of 491 articles that appeared pertinent. Additional articles were identified by reviewing the reference lists of relevant papers. Finally, the full text of 75 selected articles was reviewed. Results: Several studies show that the etiology of radiological error is multi-factorial. The main category of claims against radiologists includes the misdiagnoses. Radiologic 'misses' typically are one of two types: either missed fractures or missed diagnosis of cancer. The most commonly missed fractures include those in the femur, the navicular bone, and the cervical spine. The second type of 'miss' is failure to diagnose cancer. Lack of appreciation of lung nodules on chest radiographs and breast lesions on mammograms are the predominant problems. Conclusion: Diagnostic errors should be considered not as signs of failure, but as learning opportunities.

  15. Learning from diagnostic errors: A good way to improve education in radiology

    International Nuclear Information System (INIS)

    Pinto, Antonio; Acampora, Ciro; Pinto, Fabio; Kourdioukova, Elena; Romano, Luigia; Verstraete, Koenraad

    2011-01-01

    Purpose: To evaluate the causes and the main categories of diagnostic errors in radiology as a method for improving education in radiology. Material and methods: A Medline search was performed using PubMed (National Library of Medicine, Bethesda, MD) for original research publications discussing errors in diagnosis with specific reference to radiology. The search strategy employed different combinations of the following terms: (1) diagnostic radiology, (2) radiological error and (3) medical negligence. This review was limited to human studies and to English-language literature. Two authors reviewed all the titles and subsequently the abstracts of 491 articles that appeared pertinent. Additional articles were identified by reviewing the reference lists of relevant papers. Finally, the full text of 75 selected articles was reviewed. Results: Several studies show that the etiology of radiological error is multi-factorial. The main category of claims against radiologists includes the misdiagnoses. Radiologic 'misses' typically are one of two types: either missed fractures or missed diagnosis of cancer. The most commonly missed fractures include those in the femur, the navicular bone, and the cervical spine. The second type of 'miss' is failure to diagnose cancer. Lack of appreciation of lung nodules on chest radiographs and breast lesions on mammograms are the predominant problems. Conclusion: Diagnostic errors should be considered not as signs of failure, but as learning opportunities.

  16. Graph Theoretical Analysis of Network Centric Operations Using Multi-Layer Models

    National Research Council Canada - National Science Library

    Wong-Jiru, Ann

    2006-01-01

    .... The research incorporates the importance of understanding network topology for evaluating an environment for net-centricity and using network characteristics to help commanders assess the effects...

  17. Method and electronic database search engine for exposing the content of an electronic database

    NARCIS (Netherlands)

    Stappers, P.J.

    2000-01-01

    The invention relates to an electronic database search engine comprising an electronic memory device suitable for storing and releasing elements from the database, a display unit, a user interface for selecting and displaying at least one element from the database on the display unit, and control

  18. A Web Centric Architecture for Deploying Multi-Disciplinary Engineering Design Processes

    Science.gov (United States)

    Woyak, Scott; Kim, Hongman; Mullins, James; Sobieszczanski-Sobieski, Jaroslaw

    2004-01-01

    There are continuous needs for engineering organizations to improve their design processes. Current state-of-the-art techniques use computational simulations to predict design performance and optimize it through advanced design methods. These tools have been used mostly by individual engineers. This paper presents an architecture for achieving results at an organizational level, beyond the individual level. The next set of gains in process improvement will come from improving the effective use of computers and software within a whole organization, not just by an individual. The architecture takes advantage of state-of-the-art capabilities to produce a Web-based system to carry engineering design into the future. To illustrate deployment of the architecture, a case study of implementing advanced multidisciplinary design optimization processes such as Bi-Level Integrated System Synthesis is discussed. Another example, rolling out a design process for Design for Six Sigma, is also described. Each example explains how an organization can effectively infuse engineering practice with new design methods and retain the knowledge over time.

  19. An integrated framework for rural electrification: Adopting a user-centric approach to business model development

    International Nuclear Information System (INIS)

    Schillebeeckx, Simon J.D.; Parikh, Priti; Bansal, Rahul; George, Gerard

    2012-01-01

    Rural electrification (RE) has gained prominence over the past two decades as an effective means for improving living conditions. This growth has largely been driven by socio-economic and political imperatives to improve rural livelihood and by technological innovation. Based on a content analysis of 232 scholarly articles, the literature is categorized into four focal lenses: technology, institutional, viability and user-centric. We find that the first two dominate the RE debate. The viability lens has been used less frequently, whilst the user-centric lens began to engage scholars as late as 2007. We provide an overview of the technological, institutional and viability lenses, and elaborate upon the user-centric lens in greater detail. For energy policy and practice, we combine the four lenses to develop a business model framework that policy makers, practitioners and investors could use to assess RE projects or to design future rural electrification strategies. - Highlights: ► Review of two decades of rural electrification research. ► Content analysis of 232 scholarly articles. ► Literature is categorized into four focal lenses: technology, institutional, viability and user-centric. ► We develop a business model framework for rural electrification strategies.

  20. Are cannabis prevalence estimates comparable across countries and regions? A cross-cultural validation using search engine query data.

    Science.gov (United States)

    Steppan, Martin; Kraus, Ludwig; Piontek, Daniela; Siciliano, Valeria

    2013-01-01

    Prevalence estimation of cannabis use is usually based on self-report data. Although there is evidence on the reliability of this data source, its cross-cultural validity is still a major concern. External objective criteria are needed for this purpose. In this study, cannabis-related search engine query data are used as an external criterion. Data on cannabis use were taken from the 2007 European School Survey Project on Alcohol and Other Drugs (ESPAD). Provincial data came from three Italian nation-wide studies using the same methodology (2006-2008; ESPAD-Italia). Information on cannabis-related search engine query data was based on Google search volume indices (GSI). (1) Reliability analysis was conducted for GSI. (2) Latent measurement models of "true" cannabis prevalence were tested using perceived availability, web-based cannabis searches and self-reported prevalence as indicators. (3) Structure models were set up to test the influences of response tendencies and geographical position (latitude, longitude). In order to test the stability of the models, analyses were conducted on country level (Europe, US) and on provincial level in Italy. Cannabis-related GSI were found to be highly reliable and constant over time. The overall measurement model was highly significant in both data sets. On country level, no significant effects of response bias indicators and geographical position on perceived availability, web-based cannabis searches and self-reported prevalence were found. On provincial level, latitude had a significant positive effect on availability indicating that perceived availability of cannabis in northern Italy was higher than expected from the other indicators. Although GSI showed weaker associations with cannabis use than perceived availability, the findings underline the external validity and usefulness of search engine query data as external criteria. The findings suggest an acceptable relative comparability of national (provincial) prevalence