WorldWideScience

Sample records for database search strategies

  1. Metagenomic Taxonomy-Guided Database-Searching Strategy for Improving Metaproteomic Analysis.

    Science.gov (United States)

    Xiao, Jinqiu; Tanca, Alessandro; Jia, Ben; Yang, Runqing; Wang, Bo; Zhang, Yu; Li, Jing

    2018-04-06

    Metaproteomics provides a direct measure of the functional information by investigating all proteins expressed by a microbiota. However, due to the complexity and heterogeneity of microbial communities, it is very hard to construct a sequence database suitable for a metaproteomic study. Using a public database, researchers might not be able to identify proteins from poorly characterized microbial species, while a sequencing-based metagenomic database may not provide adequate coverage for all potentially expressed protein sequences. To address this challenge, we propose a metagenomic taxonomy-guided database-search strategy (MT), in which a merged database is employed, consisting of both taxonomy-guided reference protein sequences from public databases and proteins from metagenome assembly. By applying our MT strategy to a mock microbial mixture, about two times as many peptides were detected as with the metagenomic database only. According to the evaluation of the reliability of taxonomic attribution, the rate of misassignments was comparable to that obtained using an a priori matched database. We also evaluated the MT strategy with a human gut microbial sample, and we found 1.7 times as many peptides as using a standard metagenomic database. In conclusion, our MT strategy allows the construction of databases able to provide high sensitivity and precision in peptide identification in metaproteomic studies, enabling the detection of proteins from poorly characterized species within the microbiota.
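
    As an illustration of the merged-database idea, the sketch below concatenates a taxonomy-guided reference FASTA with a metagenome-assembly FASTA while skipping exact duplicate sequences; the file names are placeholders and this is not the authors' actual pipeline.

      # Minimal sketch: merge taxonomy-guided reference proteins with
      # metagenome-assembly proteins into one search database, skipping
      # exact duplicate sequences. File names are hypothetical.

      def read_fasta(path):
          """Yield (header, sequence) pairs from a FASTA file."""
          header, seq = None, []
          with open(path) as fh:
              for line in fh:
                  line = line.rstrip()
                  if line.startswith(">"):
                      if header is not None:
                          yield header, "".join(seq)
                      header, seq = line[1:], []
                  elif line:
                      seq.append(line)
          if header is not None:
              yield header, "".join(seq)

      def merge_databases(paths, out_path):
          seen = set()                      # sequences already written
          kept = 0
          with open(out_path, "w") as out:
              for path in paths:
                  for header, seq in read_fasta(path):
                      if seq in seen:       # drop exact duplicates
                          continue
                      seen.add(seq)
                      out.write(">%s\n%s\n" % (header, seq))
                      kept += 1
          return kept

      if __name__ == "__main__":
          n = merge_databases(["reference_taxa.fasta", "metagenome_assembly.fasta"],
                              "merged_search_db.fasta")
          print("wrote", n, "non-redundant sequences")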

  2. Keyword Search in Databases

    CERN Document Server

    Yu, Jeffrey Xu; Chang, Lijun

    2009-01-01

    It has become highly desirable to provide users with flexible ways to query and search information in databases as simply as with keyword search in a Web search engine such as Google. This book surveys recent developments in keyword search over databases and focuses on finding structural information among objects in a database using a set of keywords. The structural information to be returned can be either trees or subgraphs representing how the objects that contain the required keywords are interconnected in a relational or XML database. Structural keyword search is completely different from

  3. Integration of first-principles methods and crystallographic database searches for new ferroelectrics: Strategies and explorations

    International Nuclear Information System (INIS)

    Bennett, Joseph W.; Rabe, Karin M.

    2012-01-01

    In this concept paper, the development of strategies for the integration of first-principles methods with crystallographic database mining for the discovery and design of novel ferroelectric materials is discussed, drawing on the results and experience derived from exploratory investigations of three different systems: (1) the double perovskite Sr(Sb1/2Mn1/2)O3 as a candidate semiconducting ferroelectric; (2) polar derivatives of schafarzikite MSb2O4; and (3) ferroelectric semiconductors with formula M2P2(S,Se)6. A variety of avenues for further research and investigation are suggested, including automated structure-type classification, low-symmetry improper ferroelectrics, and high-throughput first-principles searches for additional representatives of structural families with desirable functional properties. Graphical abstract: Integration of first-principles methods with crystallographic database mining, for the discovery and design of novel ferroelectric materials, could potentially lead to new classes of multifunctional materials. Highlights: integration of first-principles methods and database mining; minor structural families with desirable functional properties; survey of polar entries in the Inorganic Crystal Structure Database.

  4. Testing search strategies for systematic reviews in the Medline literature database through PubMed.

    Science.gov (United States)

    Volpato, Enilze S N; Betini, Marluci; El Dib, Regina

    2014-04-01

    A high-quality electronic search is essential in ensuring accuracy and completeness of the retrieved records when conducting a systematic review. We analysed the available sample of search strategies to identify the best method for searching Medline through PubMed, considering the use or not of parentheses, double quotation marks, truncation, and the use of a simple search or the search history tool. In our cross-sectional study of search strategies, we selected and analysed the available searches performed during evidence-based medicine classes and in systematic reviews conducted at the Botucatu Medical School, UNESP, Brazil. We analysed 120 search strategies. With regard to phrase searching with parentheses, there was no difference between the results with and without parentheses, or between simple searches and the search history tool, in 100% of the sample analysed (P = 1.0). The number of results retrieved was smaller using double quotation marks and using truncation compared with the standard strategy (P = 0.04 and P = 0.08, respectively). There is no need to use parentheses for phrase searching to retrieve studies; however, we recommend the use of double quotation marks when an investigator attempts to retrieve articles in which a term appears exactly as proposed in the search form. Furthermore, we do not recommend the use of truncation in search strategies in Medline via PubMed. Although the results of simple searches and the search history tool were the same, we recommend using the latter.
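
    A comparison of this kind can be reproduced programmatically by submitting the same concept with and without double quotation marks or truncation and comparing the hit counts; the sketch below uses Biopython's Entrez module, and the queries and e-mail address are placeholders rather than those used in the study.

      # Sketch: compare PubMed hit counts for a phrase searched with and
      # without double quotation marks and with truncation.
      # Requires Biopython; queries and e-mail address are placeholders.
      from Bio import Entrez

      Entrez.email = "your.name@example.org"   # NCBI asks for a contact address

      def pubmed_count(query):
          handle = Entrez.esearch(db="pubmed", term=query, retmax=0)
          record = Entrez.read(handle)
          handle.close()
          return int(record["Count"])

      queries = [
          'low back pain rehabilitation',      # unquoted terms
          '"low back pain" rehabilitation',    # quoted phrase
          '"low back pain" rehabilitat*',      # truncation
      ]
      for q in queries:
          print(q, "->", pubmed_count(q), "records")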

  5. Search Databases and Statistics

    DEFF Research Database (Denmark)

    Refsgaard, Jan C; Munk, Stephanie; Jensen, Lars J

    2016-01-01

    …having strengths and weaknesses that must be considered for the individual needs. These are reviewed in this chapter. Equally critical for generating highly confident output datasets is the application of sound statistical criteria to limit the inclusion of incorrect peptide identifications from database searches. Additionally, careful filtering and use of appropriate statistical tests on the output datasets affects the quality of all downstream analyses and interpretation of the data. Our considerations and general practices on these aspects of phosphoproteomics data processing are presented here…

  6. Search strategies

    Science.gov (United States)

    Oliver, B. M.

    Attention is given to the approaches that would provide the greatest chance of success in the search for advanced extraterrestrial cultures in the Galaxy, taking into account the principle of least energy expenditure. The energetics of interstellar contact are explored, with attention given to the use of manned spacecraft, automatic probes, and beacons. The least expensive approach to a search for other civilizations is a listening program which attempts to detect signals emitted by such civilizations. The optimum part of the spectrum for such a search is found to lie in the range from 1 to 2 GHz. Antenna and transmission formulas are discussed, along with the use of matched gates and filters, the probable characteristics of the signals to be detected, the filter-signal mismatch loss, surveys of the radio sky, and the conduct of targeted searches.

  7. Information Retrieval Strategies of Millennial Undergraduate Students in Web and Library Database Searches

    Science.gov (United States)

    Porter, Brandi

    2009-01-01

    Millennial students make up a large portion of undergraduate students attending colleges and universities, and they have a variety of online resources available to them to complete academically related information searches, primarily Web-based and library-based online information retrieval systems. The content, ease of use, and required search…

  8. NBIC: Search Ballast Report Database

    Science.gov (United States)

    The National Ballast Information Clearinghouse (NBIC), a joint effort of the Smithsonian Environmental Research Center and the US Coast Guard, has developed an online database that can be queried through its website. Data are accessible for all coastal states, and Great Lakes data have been incorporated into the NBIC database as of August 2004. Information on data availability

  9. Database searches for qualitative research

    OpenAIRE

    Evans, David

    2002-01-01

    Interest in the role of qualitative research in evidence-based health care is growing. However, the methods currently used to identify quantitative research do not translate easily to qualitative research. This paper highlights some of the difficulties during searches of electronic databases for qualitative research. These difficulties relate to the descriptive nature of the titles used in some qualitative studies, the variable information provided in abstracts, and the differences in the ind...

  10. Intermittent search strategies

    Science.gov (United States)

    Bénichou, O.; Loverdo, C.; Moreau, M.; Voituriez, R.

    2011-01-01

    This review examines intermittent target search strategies, which combine phases of slow motion, allowing the searcher to detect the target, and phases of fast motion during which targets cannot be detected. It is first shown that intermittent search strategies are actually widely observed at various scales. At the macroscopic scale, this is, for example, the case of animals looking for food; at the microscopic scale, intermittent transport patterns are involved in a reaction pathway of DNA-binding proteins as well as in intracellular transport. Second, generic stochastic models are introduced, which show that intermittent strategies are efficient strategies that enable the minimization of search time. This suggests that the intrinsic efficiency of intermittent search strategies could justify their frequent observation in nature. Last, beyond these modeling aspects, it is proposed that intermittent strategies could also be used in a broader context to design and accelerate search processes.

  11. Database Search Engines: Paradigms, Challenges and Solutions.

    Science.gov (United States)

    Verheggen, Kenneth; Martens, Lennart; Berven, Frode S; Barsnes, Harald; Vaudel, Marc

    2016-01-01

    The first step in identifying proteins from mass spectrometry based shotgun proteomics data is to infer peptides from tandem mass spectra, a task generally achieved using database search engines. In this chapter, the basic principles of database search engines are introduced with a focus on open source software, and the use of database search engines is demonstrated using the freely available SearchGUI interface. This chapter also discusses how to tackle general issues related to sequence database searching and shows how to minimize their impact.

  12. Protein structure database search and evolutionary classification.

    Science.gov (United States)

    Yang, Jinn-Moon; Tung, Chi-Hua

    2006-01-01

    As more protein structures become available and structural genomics efforts provide structural models in a genome-wide strategy, there is a growing need for fast and accurate methods for discovering homologous proteins and evolutionary classifications of newly determined structures. We have developed 3D-BLAST, in part, to address these issues. 3D-BLAST is as fast as BLAST and calculates the statistical significance (E-value) of an alignment to indicate the reliability of the prediction. Using this method, we first identified 23 states of the structural alphabet that represent pattern profiles of the backbone fragments and then used them to represent protein structure databases as structural alphabet sequence databases (SADB). Our method enhanced BLAST as a search method, using a new structural alphabet substitution matrix (SASM) to find the longest common substructures with high-scoring structured segment pairs from an SADB database. Using personal computers with Intel Pentium4 (2.8 GHz) processors, our method searched more than 10 000 protein structures in 1.3 s and achieved a good agreement with search results from detailed structure alignment methods. [3D-BLAST is available at http://3d-blast.life.nctu.edu.tw].

  13. Quantum search of a real unstructured database

    Science.gov (United States)

    Broda, Bogusław

    2016-02-01

    A simple circuit implementation of the oracle for Grover's quantum search of a real unstructured classical database is proposed. The oracle contains a kind of quantumly accessible classical memory, which stores the database.
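
    As a rough illustration of the amplitude amplification behind such a search (not the oracle circuit proposed in the paper), the following NumPy simulation marks one entry of a classical list and applies Grover iterations; the database contents are arbitrary.

      # Sketch of Grover's search on a classical list, simulated with NumPy.
      # The "oracle" flips the amplitude of the index holding the target value,
      # standing in for a quantumly accessible classical memory.
      import numpy as np

      def grover_search(database, target):
          n = len(database)
          marked = database.index(target)          # known classically here, only to build the oracle
          state = np.full(n, 1 / np.sqrt(n))       # uniform superposition
          iterations = int(round(np.pi / 4 * np.sqrt(n)))
          for _ in range(iterations):
              state[marked] *= -1                  # oracle: phase flip on the target index
              state = 2 * state.mean() - state     # diffusion: inversion about the mean
          return int(np.argmax(state ** 2))        # most probable index

      db = [17, 4, 42, 8, 23, 15, 16, 9]
      print(grover_search(db, 42))                 # finds index 2 with high probability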

  14. Constructing Effective Search Strategies for Electronic Searching.

    Science.gov (United States)

    Flanagan, Lynn; Parente, Sharon Campbell

    Electronic databases have grown tremendously in both number and popularity since their development during the 1960s. Access to electronic databases in academic libraries was originally offered primarily through mediated search services by trained librarians; however, the advent of CD-ROM and end-user interfaces for online databases has shifted the…

  15. Optimal intermittent search strategies

    International Nuclear Information System (INIS)

    Rojo, F; Budde, C E; Wio, H S

    2009-01-01

    We study the search kinetics of a single fixed target by a set of searchers performing an intermittent random walk, jumping between different internal states. Exploiting concepts of multi-state and continuous-time random walks we have calculated the survival probability of a target up to time t, and have 'optimized' (minimized) it with regard to the transition probability among internal states. Our model shows that intermittent strategies always improve target detection, even for simple diffusion states of motion

  16. Optimal database combinations for literature searches in systematic reviews : a prospective exploratory study

    NARCIS (Netherlands)

    Bramer, W. M.; Rethlefsen, Melissa L.; Kleijnen, Jos; Franco, Oscar H.

    2017-01-01

    Background: Within systematic reviews, when searching for relevant references, it is advisable to use multiple databases. However, searching databases is laborious and time-consuming, as the syntax of search strategies is database-specific. We aimed to determine the optimal combination of databases

  17. Millennial Students’ Online Search Strategies are Associated With Their Mental Models of Search. A Review of: Holman, L. (2011). Millennial students’ mental models of search: Implications for academic librarians and database developers. Journal of Academic Librarianship, 37(1), 19-27. doi:10.1016/j.acalib.2010.10.003

    Directory of Open Access Journals (Sweden)

    Leslie Bussert

    2011-09-01

    Objective – To examine first-year college students’ information seeking behaviours and determine whether their mental models of the search process influence their ability to effectively search for and find scholarly materials. Design – Mixed methods including contextual inquiry, concept mapping, observation, and interviews. Setting – University of Baltimore, a public institution in Maryland, United States of America, offering undergraduate, graduate, and professional degrees. Subjects – A total of 21 first-year undergraduate students, ages 16 to 19 years, undertaking research assignments for which they chose to use online resources. Methods – First-year students were recruited in the fall of 2008 and met with the researcher in a university usability lab for about one hour over a three-week period. The researcher observed and videotaped the students as they conducted research in their chosen search engines or article databases. The searches were captured using software, and students were encouraged to think aloud about their research process, search strategies, and anticipated search results. Observation sessions concluded with a 10-question interview incorporating a review of the keywords the student used, the student’s reflection on the success of his or her searches, and possible alternate keywords. The interview also offered prompts to help the researcher learn about students’ conceptualizations of how search tools use keywords to generate results. The researcher then asked the students to provide a visual diagram of the relationship between their search terms and the items retrieved in the search tool. Data were analyzed by identifying the 21 different search tools used by the students and categorizing all 210 searches and student diagrams for further analysis. A scheme similar to Guinee, Eagleton, and Hall’s (2003) characterized the student searches into four categories: simple single-term searches, topic plus focus

  18. Optimal intermittent search strategies

    Energy Technology Data Exchange (ETDEWEB)

    Rojo, F; Budde, C E [FaMAF, Universidad Nacional de Cordoba, Ciudad Universitaria, X5000HUA Cordoba (Argentina); Wio, H S [Instituto de Fisica de Cantabria, Universidad de Cantabria and CSIC E-39005 Santander (Spain)

    2009-03-27

    We study the search kinetics of a single fixed target by a set of searchers performing an intermittent random walk, jumping between different internal states. Exploiting concepts of multi-state and continuous-time random walks we have calculated the survival probability of a target up to time t, and have 'optimized' (minimized) it with regard to the transition probability among internal states. Our model shows that intermittent strategies always improve target detection, even for simple diffusion states of motion.

  19. Interactive searching of facial image databases

    Science.gov (United States)

    Nicholls, Robert A.; Shepherd, John W.; Shepherd, Jean

    1995-09-01

    A set of psychological facial descriptors has been devised to enable computerized searching of criminal photograph albums. The descriptors have been used to encode image databases of up to twelve thousand images. Using a system called FACES, the databases are searched by translating a witness' verbal description into corresponding facial descriptors. Trials of FACES have shown that this coding scheme is more productive and efficient than searching traditional photograph albums. An alternative method of searching the encoded database using a genetic algorithm is currently being tested. The genetic search method does not require the witness to verbalize a description of the target but merely to indicate a degree of similarity between the target and a limited selection of images from the database. The major drawback of FACES is that it requires manual encoding of images. Research is being undertaken to automate the process; however, it will require an algorithm which can predict human descriptive values. Alternatives to human-derived coding schemes exist using statistical classifications of images. Since databases encoded using statistical classifiers do not have an obvious direct mapping to human-derived descriptors, a search method which does not require the entry of human descriptors is required. A genetic search algorithm is being tested for such a purpose.

  20. Fast Structural Search in Phylogenetic Databases

    Directory of Open Access Journals (Sweden)

    William H. Piel

    2005-01-01

    As the size of phylogenetic databases grows, the need for efficiently searching these databases arises. Thanks to previous and ongoing research, searching by attribute value and by text has become commonplace in these databases. However, searching by topological or physical structure, especially for large databases and especially for approximate matches, is still an art. We propose structural search techniques that, given a query or pattern tree P and a database of phylogenies D, find trees in D that are sufficiently close to P. The “closeness” is a measure of the topological relationships in P that are found to be the same or similar in a tree in D. We develop a filtering technique that accelerates searches and present algorithms for rooted and unrooted trees, where the trees can be weighted or unweighted. Experimental results on comparing the similarity measure with existing tree metrics and on evaluating the efficiency of the search techniques demonstrate that the proposed approach is promising.

  1. Switching strategies to optimize search

    International Nuclear Information System (INIS)

    Shlesinger, Michael F

    2016-01-01

    Search strategies are explored when the search time is fixed, success is probabilistic and the estimate for success can diminish with time if there is not a successful result. Under the time constraint the problem is to find the optimal time to switch a search strategy or search location. Several variables are taken into account, including cost, gain, rate of success if a target is present and the probability that a target is present. (paper: interdisciplinary statistical mechanics)

  2. Phonetic search methods for large speech databases

    CERN Document Server

    Moyal, Ami; Tetariy, Ella; Gishri, Michal

    2013-01-01

    “Phonetic Search Methods for Large Databases” focuses on Keyword Spotting (KWS) within large speech databases. The brief will begin by outlining the challenges associated with Keyword Spotting within large speech databases using dynamic keyword vocabularies. It will then continue by highlighting the various market segments in need of KWS solutions, as well as the specific requirements of each market segment. The work also includes a detailed description of the complexity of the task and the different methods that are used, including the advantages and disadvantages of each method and an in-depth comparison. The main focus will be on the Phonetic Search method and its efficient implementation. This will include a literature review of the various methods used for the efficient implementation of Phonetic Search Keyword Spotting, with an emphasis on the authors’ own research, which entails a comparative analysis of the Phonetic Search method and includes algorithmic details. This brief is useful for resea...

  3. WGDB: Wood Gene Database with search interface.

    Science.gov (United States)

    Goyal, Neha; Ginwal, H S

    2014-01-01

    Wood quality can be defined in terms of a particular end use and involves several traits. Over the last fifteen years researchers have assessed wood quality traits in forest trees. Wood quality has been categorized in terms of cell wall biochemical traits and fibre properties, including microfibril angle, density and stiffness, in loblolly pine [1]. A user-friendly, open-access database named Wood Gene Database (WGDB) has been developed to describe wood genes along with protein information and published research articles. It contains 720 wood genes from species including pine, deodar, and the fast-growing trees poplar and eucalyptus. WGDB is designed to encompass the majority of publicly accessible genes coding for cellulose, hemicellulose and lignin in tree species which are responsive to wood formation and quality. It is an interactive platform for collecting, managing and searching the specific wood genes; it also enables data mining related to genomic information, specifically in Arabidopsis thaliana, Populus trichocarpa, Eucalyptus grandis, Pinus taeda, Pinus radiata, Cedrus deodara and Cedrus atlantica. For user convenience, this database is cross-linked with the public databases NCBI, EMBL and Dendrome and with the Google search engine to make it more informative, and it provides the bioinformatics tools BLAST and COBALT. The database is freely available at www.wgdb.in.

  4. Audio stream classification for multimedia database search

    Science.gov (United States)

    Artese, M.; Bianco, S.; Gagliardi, I.; Gasparini, F.

    2013-03-01

    Search and retrieval of huge archives of multimedia data is a challenging task. A classification step is often used to reduce the number of entries on which to perform the subsequent search. In particular, when new entries are continuously added to the database, a fast classification based on simple threshold evaluation is desirable. In this work we present a CART-based (Classification And Regression Tree [1]) classification framework for audio streams belonging to multimedia databases. The database considered is the Archive of Ethnography and Social History (AESS) [2], which is mainly composed of popular songs and other audio records describing popular traditions handed down generation by generation, such as traditional fairs and customs. The peculiarities of this database are that it is continuously updated, the audio recordings are acquired in unconstrained environments, and it is difficult for non-expert human users to create the ground-truth labels. In our experiments, half of all the available audio files were randomly extracted and used as the training set; the remaining ones were used as the test set. The classifier was trained to distinguish among three different classes: speech, music, and song. All the audio files in the dataset had previously been labeled manually by domain experts into the three classes defined above.
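
    For readers unfamiliar with CART, the sketch below shows the shape of such a classifier using scikit-learn's decision tree; the feature vectors are random placeholders standing in for real audio descriptors, and this is not the authors' feature set or data split code.

      # Sketch: a CART-style classifier over audio feature vectors using scikit-learn.
      # Feature extraction is out of scope here; the random arrays below stand in
      # for per-recording descriptors (e.g., zero-crossing rate, spectral centroid).
      import numpy as np
      from sklearn.model_selection import train_test_split
      from sklearn.tree import DecisionTreeClassifier

      rng = np.random.default_rng(0)
      X = rng.normal(size=(60, 4))                      # 60 recordings, 4 features each
      y = np.repeat(["speech", "music", "song"], 20)    # placeholder labels

      # Half of the files as training set, half as test set, as in the paper.
      X_train, X_test, y_train, y_test = train_test_split(
          X, y, test_size=0.5, random_state=0, stratify=y)

      tree = DecisionTreeClassifier(max_depth=3, random_state=0)
      tree.fit(X_train, y_train)
      print("test accuracy:", tree.score(X_test, y_test))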

  5. The Development of a Combined Search for a Heterogeneous Chemistry Database

    Directory of Open Access Journals (Sweden)

    Lulu Jiang

    2015-05-01

    A combined search, which joins a slow molecule structure search with a fast compound property search, yields more accurate search results and has been applied in several chemistry databases. However, the difference in search speeds and the combination of the two separate result sets are two major challenges. In this paper, two kinds of search strategies, synchronous search and asynchronous search, are proposed to solve these problems for the heterogeneous structure database and property database found in ChemDB, a chemistry database owned by the Institute of Process Engineering, CAS. Their advantages and disadvantages under different conditions are discussed in detail. Furthermore, we applied these two searches to ChemDB and used them to screen for potential molecules that can work as CO2 absorbents. The results reveal that this combined search discovers reasonable target molecules within an acceptable time frame.
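
    The excerpt does not detail the two strategies, but the general idea of combining a fast property search with a slow structure search can be sketched as follows; all functions, identifiers and delays below are invented for illustration.

      # Sketch: run a fast property search and a slow structure search concurrently
      # and intersect the results. Both "searches" are stand-ins with fake delays.
      import asyncio

      async def property_search(query):
          await asyncio.sleep(0.1)                 # fast property lookup (simulated)
          return {"mol-001", "mol-002", "mol-003"}

      async def structure_search(query):
          await asyncio.sleep(2.0)                 # slow substructure match (simulated)
          return {"mol-002", "mol-003", "mol-007"}

      async def combined_search(query):
          props, structs = await asyncio.gather(
              property_search(query), structure_search(query))
          return props & structs                   # molecules passing both searches

      print(asyncio.run(combined_search("CO2 absorbent candidates")))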

  6. Tales from the Field: Search Strategies Applied in Web Searching

    Directory of Open Access Journals (Sweden)

    Soohyung Joo

    2010-08-01

    In their web search processes, users apply multiple types of search strategies, which consist of different search tactics. This paper identifies eight types of information search strategies, with associated cases, based on sequences of search tactics during the information search process. Thirty-one participants representing the general public were recruited for this study. Search logs and verbal protocols offered rich data for the identification of different types of search strategies. Based on the findings, the authors further discuss how to enhance web-based information retrieval (IR) systems to support each type of search strategy.

  7. Search pattern of databases by the undergraduate students of ...

    African Journals Online (AJOL)

    The main objective of this study is to assess the awareness and search patterns of databases, in order to determine the extent to which users are aware of and search for databases, by examining the relationship between their awareness and search patterns of databases and their information literacy skills. The methodology ...

  8. Winnowing sequences from a database search.

    Science.gov (United States)

    Berman, P; Zhang, Z; Wolf, Y I; Koonin, E V; Miller, W

    2000-01-01

    In database searches for sequence similarity, matches to a distinct sequence region (e.g., protein domain) are frequently obscured by numerous matches to another region of the same sequence. In order to cope with this problem, algorithms are developed to discard redundant matches. One model for this problem begins with a list of intervals, each with an associated score; each interval gives the range of positions in the query sequence that align to a database sequence, and the score is that of the alignment. If interval I is contained in interval J, and I's score is less than J's, then I is said to be dominated by J. The problem is then to identify each interval that is dominated by at least K other intervals, where K is a given level of "tolerable redundancy." An algorithm is developed to solve the problem in O(N log N) time and O(N*) space, where N is the number of intervals and N* is a precisely defined value that never exceeds N and is frequently much smaller. This criterion for discarding database hits has been implemented in the Blast program, as illustrated herein with examples. Several variations and extensions of this approach are also described.
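
    The domination criterion itself can be stated in a few lines of code. The quadratic sketch below is only a reference reading of the rule, not the paper's O(N log N) algorithm, and the example intervals are invented.

      # Sketch: discard query intervals dominated by at least K higher-scoring,
      # containing intervals. A straightforward O(N^2) check of the criterion;
      # the paper achieves the same result in O(N log N) time.

      def winnow(intervals, k):
          """intervals: list of (start, end, score); returns the kept intervals."""
          kept = []
          for i, (s_i, e_i, score_i) in enumerate(intervals):
              dominators = 0
              for j, (s_j, e_j, score_j) in enumerate(intervals):
                  if i == j:
                      continue
                  contained = s_j <= s_i and e_i <= e_j
                  if contained and score_i < score_j:
                      dominators += 1
              if dominators < k:
                  kept.append((s_i, e_i, score_i))
          return kept

      hits = [(10, 90, 300), (12, 80, 120), (15, 70, 110), (200, 260, 95)]
      print(winnow(hits, k=1))   # the two weaker nested hits are discarded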

  9. University Students' Online Information Searching Strategies in Different Search Contexts

    Science.gov (United States)

    Tsai, Meng-Jung; Liang, Jyh-Chong; Hou, Huei-Tse; Tsai, Chin-Chung

    2012-01-01

    This study investigates the role of search context played in university students' online information searching strategies. A total of 304 university students in Taiwan were surveyed with questionnaires in which two search contexts were defined as searching for learning, and searching for daily life information. Students' online search strategies…

  10. WAIS Searching of the Current Contents Database

    Science.gov (United States)

    Banholzer, P.; Grabenstein, M. E.

    The Homer E. Newell Memorial Library of NASA's Goddard Space Flight Center is developing capabilities to permit Goddard personnel to access electronic resources of the Library via the Internet. The Library's support services contractor, Maxima Corporation, and their subcontractor, SANAD Support Technologies have recently developed a World Wide Web Home Page (http://www-library.gsfc.nasa.gov) to provide the primary means of access. The first searchable database to be made available through the HomePage to Goddard employees is Current Contents, from the Institute for Scientific Information (ISI). The initial implementation includes coverage of articles from the last few months of 1992 to present. These records are augmented with abstracts and references, and often are more robust than equivalent records in bibliographic databases that currently serve the astronomical community. Maxima/SANAD selected Wais Incorporated's WAIS product with which to build the interface to Current Contents. This system allows access from Macintosh, IBM PC, and Unix hosts, which is an important feature for Goddard's multiplatform environment. The forms interface is structured to allow both fielded (author, article title, journal name, id number, keyword, subject term, and citation) and unfielded WAIS searches. The system allows a user to: Retrieve individual journal article records. Retrieve Table of Contents of specific issues of journals. Connect to articles with similar subject terms or keywords. Connect to other issues of the same journal in the same year. Browse journal issues from an alphabetical list of indexed journal names.

  11. Searching the ASRS Database Using QUORUM Keyword Search, Phrase Search, Phrase Generation, and Phrase Discovery

    Science.gov (United States)

    McGreevy, Michael W.; Connors, Mary M. (Technical Monitor)

    2001-01-01

    To support Search Requests and Quick Responses at the Aviation Safety Reporting System (ASRS), four new QUORUM methods have been developed: keyword search, phrase search, phrase generation, and phrase discovery. These methods build upon the core QUORUM methods of text analysis, modeling, and relevance-ranking. QUORUM keyword search retrieves ASRS incident narratives that contain one or more user-specified keywords in typical or selected contexts, and ranks the narratives on their relevance to the keywords in context. QUORUM phrase search retrieves narratives that contain one or more user-specified phrases, and ranks the narratives on their relevance to the phrases. QUORUM phrase generation produces a list of phrases from the ASRS database that contain a user-specified word or phrase. QUORUM phrase discovery finds phrases that are related to topics of interest. Phrase generation and phrase discovery are particularly useful for finding query phrases for input to QUORUM phrase search. The presentation of the new QUORUM methods includes: a brief review of the underlying core QUORUM methods; an overview of the new methods; numerous, concrete examples of ASRS database searches using the new methods; discussion of related methods; and, in the appendices, detailed descriptions of the new methods.

  12. PubData: search engine for bioinformatics databases worldwide

    OpenAIRE

    Vand, Kasra; Wahlestedt, Thor; Khomtchouk, Kelly; Sayed, Mohammed; Wahlestedt, Claes; Khomtchouk, Bohdan

    2016-01-01

    We propose a search engine and file retrieval system for all bioinformatics databases worldwide. PubData searches biomedical data in a user-friendly fashion similar to how PubMed searches biomedical literature. PubData is built on novel network programming, natural language processing, and artificial intelligence algorithms that can patch into the file transfer protocol servers of any user-specified bioinformatics database, query its contents, retrieve files for download, and adapt to the use...

  13. Searching the PASCAL database - A user's perspective

    Science.gov (United States)

    Jack, Robert F.

    1989-01-01

    The operation of PASCAL, a bibliographic data base covering broad subject areas in science and technology, is discussed. The data base includes information from about 1973 to the present, including topics in engineering, chemistry, physics, earth science, environmental science, biology, psychology, and medicine. Data from 1986 to the present may be searched using DIALOG. The procedures and classification codes for searching PASCAL are presented. Examples of citations retrieved from the data base are given and suggestions are made concerning when to use PASCAL.

  14. Two Search Techniques within a Human Pedigree Database

    OpenAIRE

    Gersting, J. M.; Conneally, P. M.; Rogers, K.

    1982-01-01

    This paper presents the basic features of two search techniques from MEGADATS-2 (MEdical Genetics Acquisition and DAta Transfer System), a system for collecting, storing, retrieving and plotting human family pedigrees. The individual search provides a quick method for locating an individual in the pedigree database. This search uses a modified soundex coding and an inverted file structure based on a composite key. The navigational search uses a set of pedigree traversal operations (individual...

  15. Using SQL Databases for Sequence Similarity Searching and Analysis.

    Science.gov (United States)

    Pearson, William R; Mackey, Aaron J

    2017-09-13

    Relational databases can integrate diverse types of information and manage large sets of similarity search results, greatly simplifying genome-scale analyses. By focusing on taxonomic subsets of sequences, relational databases can reduce the size and redundancy of sequence libraries and improve the statistical significance of homologs. In addition, by loading similarity search results into a relational database, it becomes possible to explore and summarize the relationships between all of the proteins in an organism and those in other biological kingdoms. This unit describes how to use relational databases to improve the efficiency of sequence similarity searching and demonstrates various large-scale genomic analyses of homology-related data. It also describes the installation and use of a simple protein sequence database, seqdb_demo, which is used as a basis for the other protocols. The unit also introduces search_demo, a database that stores sequence similarity search results. The search_demo database is then used to explore the evolutionary relationships between E. coli proteins and proteins in other organisms in a large-scale comparative genomic analysis. © 2017 by John Wiley & Sons, Inc.
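
    In the spirit of the search_demo idea, similarity-search hits can be loaded into a small relational database and summarized with SQL; the sketch below uses SQLite, and the table layout, hit records and thresholds are invented for illustration rather than taken from the unit.

      # Sketch: load similarity-search hits into SQLite and summarize them per
      # subject organism. Table and column names are invented for illustration.
      import sqlite3

      hits = [                      # (query, subject, subject_taxon, e_value)
          ("ECOLI_thrA", "STYPH_thrA", "Salmonella enterica", 1e-180),
          ("ECOLI_thrA", "YEAST_HOM3", "Saccharomyces cerevisiae", 3e-35),
          ("ECOLI_lacZ", "KLEPN_lacZ", "Klebsiella pneumoniae", 0.0),
      ]

      con = sqlite3.connect(":memory:")
      con.execute("""CREATE TABLE hit (
          query TEXT, subject TEXT, subject_taxon TEXT, e_value REAL)""")
      con.executemany("INSERT INTO hit VALUES (?, ?, ?, ?)", hits)

      # How many distinct E. coli queries find a homolog in each other organism?
      for taxon, n in con.execute("""
              SELECT subject_taxon, COUNT(DISTINCT query)
              FROM hit WHERE e_value < 1e-10
              GROUP BY subject_taxon ORDER BY 2 DESC"""):
          print(taxon, n)
      con.close()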

  16. Method and electronic database search engine for exposing the content of an electronic database

    NARCIS (Netherlands)

    Stappers, P.J.

    2000-01-01

    The invention relates to an electronic database search engine comprising an electronic memory device suitable for storing and releasing elements from the database, a display unit, a user interface for selecting and displaying at least one element from the database on the display unit, and control

  17. Effective Image Database Search via Dimensionality Reduction

    DEFF Research Database (Denmark)

    Dahl, Anders Bjorholm; Aanæs, Henrik

    2008-01-01

    Image search using the bag-of-words image representation is investigated further in this paper. This approach has shown promising results for large scale image collections making it relevant for Internet applications. The steps involved in the bag-of-words approach are feature extraction, vocabulary building, and searching with a query image. It is important to keep the computational cost low through all steps. In this paper we focus on the efficiency of the technique. To do that we substantially reduce the dimensionality of the features by the use of PCA and addition of color. Building of the visual vocabulary is typically done using k-means. We investigate a clustering algorithm based on the leader follower principle (LF-clustering), in which the number of clusters is not fixed. The adaptive nature of LF-clustering is shown to improve the quality of the visual vocabulary using this...
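
    A minimal reading of the leader-follower principle is sketched below: a descriptor joins the nearest existing cluster if it is close enough (nudging that centre toward it), otherwise it starts a new cluster, so the vocabulary size is not fixed in advance. The threshold, learning rate and data here are arbitrary and are not the paper's settings.

      # Sketch of leader-follower (LF) clustering for building a visual vocabulary.
      import numpy as np

      def lf_cluster(features, threshold, eta=0.05):
          leaders = []                               # cluster centres, grown on demand
          labels = []
          for f in features:
              if leaders:
                  dists = np.linalg.norm(np.asarray(leaders) - f, axis=1)
                  nearest = int(np.argmin(dists))
                  if dists[nearest] <= threshold:
                      # assign to the nearest leader and nudge it toward the sample
                      leaders[nearest] = leaders[nearest] + eta * (f - leaders[nearest])
                      labels.append(nearest)
                      continue
              leaders.append(f.copy())               # too far from every leader: new cluster
              labels.append(len(leaders) - 1)
          return np.asarray(leaders), labels

      rng = np.random.default_rng(1)
      descriptors = rng.normal(size=(200, 8))        # e.g., PCA-reduced image features
      vocabulary, assignment = lf_cluster(descriptors, threshold=3.0)
      print("visual words:", len(vocabulary))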

  18. Uncovering Web search strategies in South African higher education

    Directory of Open Access Journals (Sweden)

    Surika Civilcharran

    2016-11-01

    Background: In spite of the enormous amount of information available on the Web and the fact that search engines are continuously evolving to enhance the search experience, students are nevertheless faced with the difficulty of effectively retrieving information. It is, therefore, imperative for the interaction between students and search tools to be understood and search strategies to be identified, in order to promote successful information retrieval. Objectives: This study identifies the Web search strategies used by postgraduate students and forms part of a wider study into information retrieval strategies used by postgraduate students at the University of KwaZulu-Natal (UKZN), Pietermaritzburg campus, South Africa. Method: Largely underpinned by Thatcher’s cognitive search strategies, the mixed-methods approach was utilised for this study, in which questionnaires were employed in Phase 1 and structured interviews in Phase 2. This article reports and reflects on the findings of Phase 2, which focus on identifying the Web search strategies employed by postgraduate students. The Phase 1 results were reported in Civilcharran, Hughes and Maharaj (2015). Results: Findings reveal the Web search strategies used for academic information retrieval. In spite of easy access to the invisible Web and the advent of meta-search engines, Web search engines still remain the preferred search tool. The UKZN online library databases, and especially the UKZN online library Online Public Access Catalogue system, are being underutilised. Conclusion: Being ranked in the top three percent of the world’s universities, UKZN is investing in search tools that are not being used to their full potential. This evidence suggests an urgent need for students to be trained in Web searching and to have greater exposure to a variety of search tools. This article is intended to further contribute to the design of undergraduate training programmes in order to deal

  19. Simplified validation of borderline hits of database searches

    OpenAIRE

    Thomas, Henrik; Shevchenko, Andrej

    2008-01-01

    Along with unequivocal hits produced by matching multiple MS/MS spectra to database sequences, LC-MS/MS analysis often yields a large number of hits of borderline statistical confidence. To simplify their validation, we propose to use rapid de novo interpretation of all acquired MS/MS spectra and, with the help of a simple software tool, display the candidate sequences together with each database search hit. We demonstrate that comparing hit database sequences and independent de novo interpre...

  20. MICA: desktop software for comprehensive searching of DNA databases

    Directory of Open Access Journals (Sweden)

    Glick Benjamin S

    2006-10-01

    Background: Molecular biologists work with DNA databases that often include entire genomes. A common requirement is to search a DNA database to find exact matches for a nondegenerate or partially degenerate query. The software programs available for such purposes are normally designed to run on remote servers, but an appealing alternative is to work with DNA databases stored on local computers. We describe a desktop software program termed MICA (K-Mer Indexing with Compact Arrays) that allows large DNA databases to be searched efficiently using very little memory. Results: MICA rapidly indexes a DNA database. On a Macintosh G5 computer, the complete human genome could be indexed in about 5 minutes. The indexing algorithm recognizes all 15 characters of the DNA alphabet and fully captures the information in any DNA sequence, yet for a typical sequence of length L, the index occupies only about 2L bytes. The index can be searched to return a complete list of exact matches for a nondegenerate or partially degenerate query of any length. A typical search of a long DNA sequence involves reading only a small fraction of the index into memory. As a result, searches are fast even when the available RAM is limited. Conclusion: MICA is suitable as a search engine for desktop DNA analysis software.
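
    To make the k-mer indexing idea concrete (this is not MICA's compact-array layout, and degenerate bases are ignored), the sketch below builds a plain dictionary from each k-mer to its positions and uses it to seed exact-match lookups for queries of at least k bases.

      # Sketch of k-mer indexing for exact matching of a nondegenerate query.
      # MICA's compact-array index and degenerate-base support are not reproduced;
      # this only shows the seed-and-verify idea on a plain Python dict.
      from collections import defaultdict

      def build_index(seq, k):
          index = defaultdict(list)
          for i in range(len(seq) - k + 1):
              index[seq[i:i + k]].append(i)
          return index

      def find_exact(seq, index, k, query):
          hits = []
          for start in index.get(query[:k], []):          # seed on the first k-mer
              if seq[start:start + len(query)] == query:  # verify the full match
                  hits.append(start)
          return hits

      genome = "ACGTACGTTTACGTAGGACGTACG"
      k = 4
      idx = build_index(genome, k)
      print(find_exact(genome, idx, k, "ACGTA"))   # positions of exact matches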

  1. The LAILAPS Search Engine: Relevance Ranking in Life Science Databases

    Directory of Open Access Journals (Sweden)

    Lange Matthias

    2010-06-01

    Search engines and retrieval systems are popular tools on a life science desktop. The manual inspection of hundreds of database entries that reflect a life science concept or fact is time-intensive daily work. Here, it is not the number of query results that matters, but their relevance. In this paper, we present the LAILAPS search engine for life science databases. The concept is to combine a novel feature model for relevance ranking, a machine learning approach to model user relevance profiles, ranking improvement by user feedback tracking, and an intuitive and slim web user interface that estimates relevance rank by tracking user interactions. Queries are formulated as simple keyword lists and are expanded by synonyms. Supporting a flexible text index and a simple data import format, LAILAPS can easily be used both as a search engine for comprehensive integrated life science databases and for small in-house project databases.

  2. Searching mixed DNA profiles directly against profile databases.

    Science.gov (United States)

    Bright, Jo-Anne; Taylor, Duncan; Curran, James; Buckleton, John

    2014-03-01

    DNA databases have revolutionised forensic science. They are a powerful investigative tool as they have the potential to identify persons of interest in criminal investigations. Routinely, a DNA profile generated from a crime sample could only be searched for in a database of individuals if the stain was from a single contributor (single source) or if a contributor could unambiguously be determined from a mixed DNA profile. This meant that a significant number of samples were unsuitable for database searching. The advent of continuous methods for the interpretation of DNA profiles offers an advanced way to draw inferential power from the considerable investment made in DNA databases. Using these methods, each profile on the database may be considered a possible contributor to a mixture and a likelihood ratio (LR) can be formed. Those profiles which produce a sufficiently large LR can serve as an investigative lead. In this paper, empirical studies are described to determine what constitutes a large LR. We investigate the effect on a database search of complex mixed DNA profiles with contributors in equal proportions, with dropout as a consideration, and also the effect of an incorrect assignment of the number of contributors to a profile. In addition, we give, as a demonstration of the method, the results using two crime samples that were previously unsuitable for database comparison. We show that effective management of the selection of samples for searching and the interpretation of the output can be highly informative. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  3. Searching CLEF-IP by Strategy

    NARCIS (Netherlands)

    W. Alink (Wouter); R. Cornacchia (Roberto); A.P. de Vries (Arjen)

    2010-01-01

    Tasks performed by intellectual property specialists are often ad hoc, and continuously require new approaches to search a collection of documents. We therefore investigate the benefits of a visual ‘search strategy builder’ to allow IP search experts to express their approach to

  4. BioCarian: search engine for exploratory searches in heterogeneous biological databases.

    Science.gov (United States)

    Zaki, Nazar; Tennakoon, Chandana

    2017-10-02

    There are a large number of biological databases publicly available to scientists on the web. There are also many private databases generated in the course of research projects. These databases are in a wide variety of formats. Web standards have evolved in recent times, and semantic web technologies are now available to interconnect diverse and heterogeneous sources of data. Therefore, integration and querying of biological databases can be facilitated by techniques used in the semantic web. Heterogeneous databases can be converted into Resource Description Framework (RDF) and queried using the SPARQL language. Searching for exact queries in these databases is trivial. However, exploratory searches need customized solutions, especially when multiple databases are involved. This process is cumbersome and time consuming for those without a sufficient background in computer science. In this context, a search engine facilitating exploratory searches of databases would be of great help to the scientific community. We present BioCarian, an efficient and user-friendly search engine for performing exploratory searches on biological databases. The search engine is an interface for SPARQL queries over RDF databases. We note that many of the databases can be converted to tabular form. We first convert the tabular databases to RDF. The search engine provides a graphical interface based on facets to explore the converted databases. The facet interface is more advanced than conventional facets. It allows complex queries to be constructed, and has additional features like ranking of facet values based on several criteria, visually indicating the relevance of a facet value, and presenting the most important facet values when a large number of choices are available. For advanced users, SPARQL queries can be run directly on the databases. Using this feature, users will be able to incorporate federated searches of SPARQL endpoints. We used the search engine to do an exploratory search
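
    The conversion-plus-SPARQL workflow can be illustrated with rdflib; the vocabulary, toy rows and query below are invented and are not BioCarian's actual schema or interface.

      # Sketch: convert tabular rows to RDF triples with rdflib and run a SPARQL
      # query over them. The http://example.org/ vocabulary is made up; BioCarian
      # exposes this kind of query through its facet interface instead.
      from rdflib import Graph, Literal, Namespace

      EX = Namespace("http://example.org/")
      rows = [  # (gene id, organism, pathway) -- toy tabular database
          ("geneA", "Escherichia coli", "glycolysis"),
          ("geneB", "Escherichia coli", "TCA cycle"),
          ("geneC", "Homo sapiens", "glycolysis"),
      ]

      g = Graph()
      for gene, organism, pathway in rows:
          subject = EX[gene]
          g.add((subject, EX.organism, Literal(organism)))
          g.add((subject, EX.pathway, Literal(pathway)))

      query = """
          SELECT ?gene WHERE {
              ?gene ex:organism "Escherichia coli" ;
                    ex:pathway "glycolysis" .
          }"""
      for (gene,) in g.query(query, initNs={"ex": EX}):
          print(gene)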

  5. A search strategy for occupational health intervention studies

    NARCIS (Netherlands)

    Verbeek, J.; Salmi, J.; Pasternack, I.; Jauhiainen, M.; Laamanen, I.; Schaafsma, F.; Hulshof, C.; van Dijk, F.

    2005-01-01

    As a result of low numbers and diversity in study type, occupational health intervention studies are not easy to locate in electronic literature databases. To develop a search strategy that facilitates finding occupational health intervention studies in Medline, both for researchers and

  6. Searching Harvard Business Review Online. . . Lessons in Searching a Full Text Database.

    Science.gov (United States)

    Tenopir, Carol

    1985-01-01

    This article examines the Harvard Business Review Online (HBRO) database (bibliographic description fields, abstracts, extracted information, full text, subject descriptors) and reports on 31 sample HBRO searches conducted in Bibliographic Retrieval Services to test differences between searching the full text and searching the bibliographic record. Sample…

  7. Routine development of objectively derived search strategies

    Directory of Open Access Journals (Sweden)

    Hausner Elke

    2012-02-01

    Background: Over the past few years, information retrieval has become more and more professionalized, and information specialists are considered full members of a research team conducting systematic reviews. Research groups preparing systematic reviews and clinical practice guidelines have been the driving force in the development of search strategies, but open questions remain regarding the transparency of the development process and the available resources. An empirically guided approach to the development of a search strategy provides a way to increase transparency and efficiency. Methods: Our aim in this paper is to describe the empirically guided development process for search strategies as applied by the German Institute for Quality and Efficiency in Health Care (Institut für Qualität und Wirtschaftlichkeit im Gesundheitswesen, or "IQWiG"). This strategy consists of the following steps: generation of a test set, as well as the development, validation and standardized documentation of the search strategy. Results: We illustrate our approach by means of an example, that is, a search for literature on brachytherapy in patients with prostate cancer. For this purpose, a test set was generated, including a total of 38 references from 3 systematic reviews. The development set for the generation of the strategy included 25 references. After application of textual analytic procedures, a strategy was developed that included all references in the development set. To test the search strategy on an independent set of references, the remaining 13 references in the test set (the validation set) were used. The validation set was also completely identified. Discussion: Our conclusion is that an objectively derived approach similar to that used in search filter development is a feasible way to develop and validate reliable search strategies. Besides creating high-quality strategies, the widespread application of this approach will result in a
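
    The validation step can be expressed in a few lines: split the known relevant references into a development set and an independent validation set, then check that a candidate strategy retrieves all of them. The identifiers and the pretend search output below are placeholders; only the 38/25/13 split follows the example in the abstract.

      # Sketch: check a candidate search strategy against a development set and an
      # independent validation set of known relevant references (placeholder IDs).

      def recall(retrieved, relevant):
          relevant = set(relevant)
          return len(relevant & set(retrieved)) / len(relevant)

      test_set = ["pmid%02d" % i for i in range(1, 39)]   # 38 known relevant records
      development_set, validation_set = test_set[:25], test_set[25:]

      retrieved_by_strategy = set(test_set) | {"pmid99"}  # pretend search output

      print("development recall:", recall(retrieved_by_strategy, development_set))
      print("validation recall:",  recall(retrieved_by_strategy, validation_set))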

  8. STEPS: a grid search methodology for optimized peptide identification filtering of MS/MS database search results.

    Science.gov (United States)

    Piehowski, Paul D; Petyuk, Vladislav A; Sandoval, John D; Burnum, Kristin E; Kiebel, Gary R; Monroe, Matthew E; Anderson, Gordon A; Camp, David G; Smith, Richard D

    2013-03-01

    For bottom-up proteomics, there is a wide variety of database-searching algorithms in use for matching peptide sequences to tandem MS spectra. Likewise, there are numerous strategies being employed to produce a confident list of peptide identifications from the different search algorithm outputs. Here we introduce a grid-search approach for determining optimal database filtering criteria in shotgun proteomics data analyses that is easily adaptable to any search. Systematic Trial and Error Parameter Selection, referred to as STEPS, utilizes user-defined parameter ranges to test a wide array of parameter combinations to arrive at an optimal "parameter set" for data filtering, thus maximizing confident identifications. The benefits of this approach in terms of numbers of true-positive identifications are demonstrated using datasets derived from immunoaffinity-depleted blood serum and a bacterial cell lysate, two common proteomics sample types. © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
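
    A minimal sketch of such a grid search is shown below: it enumerates combinations of two hypothetical filtering thresholds and keeps the combination that maximizes identifications at an acceptable decoy-estimated FDR. The PSM fields, thresholds and FDR cut-off are invented and are not those used by STEPS.

      # Sketch: grid search over peptide-filtering thresholds, keeping the parameter
      # set that maximizes identifications at an acceptable decoy-estimated FDR.
      import itertools

      psms = [  # (score, mass_error_ppm, is_decoy) -- invented example PSMs
          (0.95, 1.2, False), (0.91, 4.8, False), (0.88, 0.7, False),
          (0.86, 9.5, True),  (0.83, 2.1, False), (0.75, 1.0, True),
          (0.93, 0.4, False), (0.81, 3.3, False), (0.79, 7.9, False),
      ]

      score_cuts = [0.75, 0.80, 0.85, 0.90]
      ppm_cuts = [2.0, 5.0, 10.0]
      best = None

      for score_cut, ppm_cut in itertools.product(score_cuts, ppm_cuts):
          kept = [p for p in psms if p[0] >= score_cut and p[1] <= ppm_cut]
          decoys = sum(1 for p in kept if p[2])
          targets = len(kept) - decoys
          fdr = decoys / targets if targets else 1.0
          if fdr <= 0.05 and (best is None or targets > best[0]):
              best = (targets, score_cut, ppm_cut, fdr)

      print("best parameter set (targets, score cut, ppm cut, FDR):", best)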

  9. Forensic utilization of familial searches in DNA databases.

    Science.gov (United States)

    Gershaw, Cassandra J; Schweighardt, Andrew J; Rourke, Linda C; Wallace, Margaret M

    2011-01-01

    DNA evidence is widely recognized as an invaluable tool in the process of investigation and identification, as well as one of the most sought after types of evidence for presentation to a jury. In the United States, the development of state and federal DNA databases has greatly impacted the forensic community by creating an efficient, searchable system that can be used to eliminate or include suspects in an investigation based on matching DNA profiles - the profile already in the database to the profile of the unknown sample in evidence. Recent changes in legislation have begun to allow for the possibility to expand the parameters of DNA database searches, taking into account the possibility of familial searches. This article discusses prospective positive outcomes of utilizing familial DNA searches and acknowledges potential negative outcomes, thereby presenting both sides of this very complicated, rapidly evolving situation. Copyright © 2010 Elsevier Ireland Ltd. All rights reserved.

  10. Molecule database framework: a framework for creating database applications with chemical structure search capability.

    Science.gov (United States)

    Kiener, Joos

    2013-12-11

    Research in organic chemistry generates samples of novel chemicals together with their properties and other related data. The involved scientists must be able to store this data and search it by chemical structure. There are commercial solutions for common needs like chemical registration systems or electronic lab notebooks. However, for the specific requirements of in-house databases and processes no such solutions exist. Another issue is that commercial solutions carry the risk of vendor lock-in and may require an expensive license for a proprietary relational database management system. To speed up and simplify the development of applications that require chemical structure search capabilities, I have developed Molecule Database Framework. The framework abstracts the storing and searching of chemical structures into method calls. Therefore software developers do not require extensive knowledge about chemistry and the underlying database cartridge. This decreases application development time. Molecule Database Framework is written in Java and I created it by integrating existing free and open-source tools and frameworks. The core functionality includes support for multi-component compounds (mixtures), import and export of SD-files, and optional security (authorization). For chemical structure searching, Molecule Database Framework leverages the capabilities of the Bingo Cartridge for PostgreSQL and provides type-safe searching, caching, transactions and optional method-level security. Furthermore, the design of the entity classes and the reasoning behind it are explained. By means of a simple web application I describe how the framework could be used. I then benchmarked this example application to create some basic performance expectations for chemical structure searches and import and export of SD-files. By using a simple web application it was shown that Molecule Database Framework
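
    The framework itself is Java code that delegates structure searching to the Bingo cartridge; as a stand-in illustration of what "search by chemical structure" means, the sketch below runs an RDKit substructure query over a few SMILES strings held in memory. The compound set and query are arbitrary.

      # Sketch: chemical structure search as substructure matching, using RDKit
      # over a few SMILES strings held in memory. The actual framework is Java
      # code delegating this work to the Bingo PostgreSQL cartridge.
      from rdkit import Chem

      compounds = {
          "aspirin":     "CC(=O)Oc1ccccc1C(=O)O",
          "benzene":     "c1ccccc1",
          "acetic acid": "CC(=O)O",
          "cyclohexane": "C1CCCCC1",
      }

      pattern = Chem.MolFromSmarts("c1ccccc1")      # query: an aromatic six-ring

      for name, smiles in compounds.items():
          mol = Chem.MolFromSmiles(smiles)
          if mol is not None and mol.HasSubstructMatch(pattern):
              print(name, "contains the query substructure")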

  11. PubMed searches: overview and strategies for clinicians.

    Science.gov (United States)

    Lindsey, Wesley T; Olin, Bernie R

    2013-04-01

    PubMed is a biomedical and life sciences database maintained by a division of the National Library of Medicine known as the National Center for Biotechnology Information (NCBI). It is a large resource with more than 5600 journals indexed and greater than 22 million total citations. Searches conducted in PubMed provide references that are more specific for the intended topic compared with other popular search engines. Effective PubMed searches allow the clinician to remain current on the latest clinical trials, systematic reviews, and practice guidelines. PubMed continues to evolve by allowing users to create a customized experience through the My NCBI portal, new arrangements and options in search filters, and supporting scholarly projects through exportation of citations to reference managing software. Prepackaged search options available in the Clinical Queries feature also allow users to efficiently search for clinical literature. PubMed also provides information regarding the source journals themselves through the Journals in NCBI Databases link. This article provides an overview of the PubMed database's structure and features as well as strategies for conducting an effective search.

  12. Collaborative Search Strategies for Green Innovation

    DEFF Research Database (Denmark)

    Ørding Olsen, Anders; Sofka, Wolfgang; Grimpe, Christoph

    Recent innovation and strategy research emphasizes the importance of firm’s search for external knowledge to improve innovation performance. We focus on such search strategies within the domain of sustainable innovation in which problems are inherently complex and the relevant knowledge is widely...... dispersed. Hence, firms need to collaborate. We shed new light on collaborative search strategies led by firms in general and for solving environmental problems in particular. Both topics are largely absent in the extant open innovation literature. Using data from the European Seventh Framework Program...... for Research and Technological Development (FP7), our results indicate that the problem-solving potential of a search strategy increases with the diversity of existing knowledge of the partners in a consortium and with the experience of the partners involved. Moreover, we identify a substantial negative effect...

  13. PFTijah: text search in an XML database system

    NARCIS (Netherlands)

    Hiemstra, Djoerd; Rode, H.; van Os, R.; Flokstra, Jan

    2006-01-01

    This paper introduces the PFTijah system, a text search system that is integrated with an XML/XQuery database management system. We present examples of its use, we explain some of the system internals, and discuss plans for future work. PFTijah is part of the open source release of MonetDB/XQuery.

  14. A practical approach for inexpensive searches of radiology report databases.

    Science.gov (United States)

    Desjardins, Benoit; Hamilton, R Curtis

    2007-06-01

    We present a method to perform full text searches of radiology reports for the large number of departments that do not have this ability as part of their radiology or hospital information system. A tool written in Microsoft Access (front-end) has been designed to search a server (back-end) containing the indexed weekly backup copy of the full relational database extracted from a radiology information system (RIS). This front-end/back-end approach has been implemented in a large academic radiology department, and is used for teaching, research and administrative purposes. The weekly second backup of the 80 GB, 4 million record RIS database takes 2 hours. Further indexing of the exported radiology reports takes 6 hours. Individual searches typically take less than 1 minute on the indexed database and 30-60 minutes on the nonindexed database. Guidelines to properly address privacy and institutional review board issues are closely followed by all users. This method has the potential to improve teaching, research, and administrative programs within radiology departments that cannot afford more expensive technology.
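
As an analogous low-cost setup (not the Access/RIS implementation described in the article), the sketch below indexes report text with SQLite's built-in FTS5 full-text extension; the table, columns and sample reports are made up.

```python
# Illustrative full-text report search using SQLite FTS5 (not the article's setup).
# FTS5 is included in the SQLite builds shipped with recent Python releases.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE VIRTUAL TABLE reports USING fts5(accession, report_text)")
db.executemany(
    "INSERT INTO reports VALUES (?, ?)",
    [
        ("A001", "CT chest: 6 mm pulmonary nodule in the right upper lobe."),
        ("A002", "MRI brain: no acute intracranial abnormality."),
    ],
)
# Indexed Boolean full-text query; stays fast even on millions of reports.
for accession, text in db.execute(
    "SELECT accession, report_text FROM reports WHERE reports MATCH 'pulmonary AND nodule'"
):
    print(accession, "->", text)
```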

  15. When is a search not a search? A comparison of searching the AMED complementary health database via EBSCOhost, OVID and DIALOG.

    Science.gov (United States)

    Younger, Paula; Boddy, Kate

    2009-06-01

    The researchers involved in this study work at Exeter Health Library and at the Complementary Medicine Unit, Peninsula School of Medicine and Dentistry (PCMD). Within this collaborative environment it is possible to access the electronic resources of three institutions. This includes access to AMED and other databases using different interfaces. The aim of this study was to investigate whether searching different interfaces to the AMED allied health and complementary medicine database produced the same results when using identical search terms. The following Internet-based AMED interfaces were searched: DIALOG DataStar, EBSCOhost and OVID SP_UI01.00.02. Search results from all three databases were saved in an EndNote database to facilitate analysis. A checklist was also compiled comparing interface features. In our initial search, DIALOG returned 29 hits, OVID 14 and EBSCOhost 8. If we assume that DIALOG returned 100% of potential hits, OVID initially returned only 48% of hits and EBSCOhost only 28%. In our search, a researcher using the EBSCOhost interface to carry out a simple search on AMED would miss over 70% of possible search hits. Subsequent EBSCOhost searches on different subjects failed to find between 21 and 86% of the hits retrieved using the same keywords via DIALOG DataStar. In two cases, the simple EBSCOhost search failed to find any of the results found via DIALOG DataStar. Depending on the interface, the number of hits retrieved from the same database with the same simple search can vary dramatically. Some simple searches fail to retrieve a substantial percentage of citations. This may result in an uninformed literature review, research funding application or treatment intervention. In addition to ensuring that keywords, spelling and medical subject headings (MeSH) accurately reflect the nature of the search, database users should include wildcards and truncation and adapt their search strategy substantially to retrieve the maximum number of appropriate...

  16. Combined semantic and similarity search in medical image databases

    Science.gov (United States)

    Seifert, Sascha; Thoma, Marisa; Stegmaier, Florian; Hammon, Matthias; Kramer, Martin; Huber, Martin; Kriegel, Hans-Peter; Cavallaro, Alexander; Comaniciu, Dorin

    2011-03-01

    The current diagnostic process at hospitals is mainly based on reviewing and comparing images from multiple time points and modalities in order to monitor disease progression over a period of time. However, for ambiguous cases the radiologist relies heavily on reference literature or a second opinion. Although there is a vast amount of acquired images stored in PACS systems which could be reused for decision support, these data sets suffer from weak search capabilities. Thus, we present a search methodology which enables the physician to carry out intelligent search scenarios on medical image databases, combining ontology-based semantic search and appearance-based similarity search. It enabled the elimination of 12% of the top ten hits which would have arisen without taking the semantic context into account.
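
The general two-stage idea (restrict candidates by semantic annotations, then rank by appearance similarity) can be sketched as below; the tags, feature vectors and ranking function are toy assumptions, not the paper's actual ontology or descriptors.

```python
# Toy sketch: semantic filter followed by appearance-similarity ranking.
import math

def cosine(a, b):
    """Cosine similarity between two non-zero feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

DATABASE = [
    {"id": "img1", "tags": {"liver", "lesion"}, "features": [0.9, 0.1, 0.3]},
    {"id": "img2", "tags": {"lung", "nodule"}, "features": [0.2, 0.8, 0.5]},
    {"id": "img3", "tags": {"liver", "cyst"}, "features": [0.8, 0.2, 0.4]},
]

def search(query_tags, query_features, top_k=5):
    # Semantic step: keep images whose annotations overlap the query concepts.
    candidates = [d for d in DATABASE if d["tags"] & query_tags]
    # Similarity step: rank the survivors by appearance-feature similarity.
    candidates.sort(key=lambda d: cosine(d["features"], query_features), reverse=True)
    return [d["id"] for d in candidates[:top_k]]

print(search({"liver"}, [0.85, 0.15, 0.35]))  # -> ['img1', 'img3']
```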

  17. A Taxonomic Search Engine: Federating taxonomic databases using web services

    Directory of Open Access Journals (Sweden)

    Page Roderic DM

    2005-03-01

    Full Text Available Abstract Background The taxonomic name of an organism is a key link between different databases that store information on that organism. However, in the absence of a single, comprehensive database of organism names, individual databases lack an easy means of checking the correctness of a name. Furthermore, the same organism may have more than one name, and the same name may apply to more than one organism. Results The Taxonomic Search Engine (TSE) is a web application written in PHP that queries multiple taxonomic databases (ITIS, Index Fungorum, IPNI, NCBI, and uBIO) and summarises the results in a consistent format. It supports "drill-down" queries to retrieve a specific record. The TSE can optionally suggest alternative spellings the user can try. It also acts as a Life Science Identifier (LSID) authority for the source taxonomic databases, providing globally unique identifiers (and associated metadata) for each name. Conclusion The Taxonomic Search Engine is available at http://darwin.zoology.gla.ac.uk/~rpage/portal/ and provides a simple demonstration of the potential of the federated approach to providing access to taxonomic names.

  18. A Taxonomic Search Engine: federating taxonomic databases using web services.

    Science.gov (United States)

    Page, Roderic D M

    2005-03-09

    The taxonomic name of an organism is a key link between different databases that store information on that organism. However, in the absence of a single, comprehensive database of organism names, individual databases lack an easy means of checking the correctness of a name. Furthermore, the same organism may have more than one name, and the same name may apply to more than one organism. The Taxonomic Search Engine (TSE) is a web application written in PHP that queries multiple taxonomic databases (ITIS, Index Fungorum, IPNI, NCBI, and uBIO) and summarises the results in a consistent format. It supports "drill-down" queries to retrieve a specific record. The TSE can optionally suggest alternative spellings the user can try. It also acts as a Life Science Identifier (LSID) authority for the source taxonomic databases, providing globally unique identifiers (and associated metadata) for each name. The Taxonomic Search Engine is available at http://darwin.zoology.gla.ac.uk/~rpage/portal/ and provides a simple demonstration of the potential of the federated approach to providing access to taxonomic names.
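
The federated pattern described in this record (fan one name query out to several sources and merge the replies into a consistent format) can be sketched as follows; the per-source functions are stubs standing in for real ITIS/IPNI/NCBI calls, and the record format is invented for the example.

```python
# Sketch of federated name lookup; the per-source fetchers are stubs.
from concurrent.futures import ThreadPoolExecutor

def query_itis(name):  # stub for a real ITIS web-service call
    return [{"source": "ITIS", "name": name, "id": "itis:12345"}]

def query_ipni(name):  # stub for a real IPNI web-service call
    return [{"source": "IPNI", "name": name, "id": "ipni:67890"}]

def query_ncbi(name):  # stub for a real NCBI taxonomy call
    return [{"source": "NCBI", "name": name, "id": "ncbi:9606"}]

SOURCES = [query_itis, query_ipni, query_ncbi]

def federated_search(name):
    """Query all sources in parallel and merge results into one list."""
    with ThreadPoolExecutor(max_workers=len(SOURCES)) as pool:
        batches = list(pool.map(lambda fn: fn(name), SOURCES))
    return [record for batch in batches for record in batch]

print(federated_search("Homo sapiens"))
```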

  19. The database search problem: a question of rational decision making.

    Science.gov (United States)

    Gittelson, S; Biedermann, A; Bozza, S; Taroni, F

    2012-10-10

    This paper applies probability and decision theory in the graphical interface of an influence diagram to study the formal requirements of rationality which justify the individualization of a person found through a database search. The decision-theoretic part of the analysis studies the parameters that a rational decision maker would use to individualize the selected person. The modeling part (in the form of an influence diagram) clarifies the relationships between this decision and the ingredients that make up the database search problem, i.e., the results of the database search and the different pairs of propositions describing whether an individual is at the source of the crime stain. These analyses evaluate the desirability associated with the decision of 'individualizing' (and 'not individualizing'). They point out that this decision is a function of (i) the probability that the individual in question is, in fact, at the source of the crime stain (i.e., the state of nature), and (ii) the decision maker's preferences among the possible consequences of the decision (i.e., the decision maker's loss function). We discuss the relevance and argumentative implications of these insights with respect to recent comments in specialized literature, which suggest points of view that are opposed to the results of our study. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
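
A toy numerical version of the decision-theoretic point (the choice to individualize depends on the posterior probability that the selected person is the source and on the decision maker's loss function) is sketched below; the loss values are arbitrary and purely illustrative, not the paper's model.

```python
# Toy expected-loss comparison for 'individualize' vs 'do not individualize'.
def expected_loss(decision: str, p_source: float, losses: dict) -> float:
    if decision == "individualize":
        # Loss incurred when individualizing someone who is not the source.
        return (1 - p_source) * losses["false_individualization"]
    # Loss incurred when failing to individualize the true source.
    return p_source * losses["missed_individualization"]

losses = {"false_individualization": 100.0, "missed_individualization": 1.0}
for p in (0.9, 0.99, 0.999):
    choice = min(("individualize", "do not individualize"),
                 key=lambda d: expected_loss(d, p, losses))
    print(f"P(source) = {p}: {choice}")
```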

  20. Search strategies on the Internet: general and specific.

    Science.gov (United States)

    Bottrill, Krys

    2004-06-01

    Some of the most up-to-date information on scientific activity is to be found on the Internet; for example, on the websites of academic and other research institutions and in databases of currently funded research studies provided on the websites of funding bodies. Such information can be valuable in suggesting new approaches and techniques that could be applicable in a Three Rs context. However, the Internet is a chaotic medium, not subject to the meticulous classification and organisation of classical information resources. At the same time, Internet search engines do not match the sophistication of search systems used by database hosts. Also, although some offer relatively advanced features, user awareness of these tends to be low. Furthermore, much of the information on the Internet is not accessible to conventional search engines, giving rise to the concept of the "Invisible Web". General strategies and techniques for Internet searching are presented, together with a comparative survey of selected search engines. The question of how the Invisible Web can be accessed is discussed, as well as how to keep up-to-date with Internet content and improve searching skills.

  1. Enriching Great Britain's National Landslide Database by searching newspaper archives

    Science.gov (United States)

    Taylor, Faith E.; Malamud, Bruce D.; Freeborough, Katy; Demeritt, David

    2015-11-01

    Our understanding of where landslide hazard and impact will be greatest is largely based on our knowledge of past events. Here, we present a method to supplement existing records of landslides in Great Britain by searching an electronic archive of regional newspapers. In Great Britain, the British Geological Survey (BGS) is responsible for updating and maintaining records of landslide events and their impacts in the National Landslide Database (NLD). The NLD contains records of more than 16,500 landslide events in Great Britain. Data sources for the NLD include field surveys, academic articles, grey literature, news, public reports and, since 2012, social media. We aim to supplement the richness of the NLD by (i) identifying additional landslide events, (ii) acting as an additional source of confirmation of events existing in the NLD and (iii) adding more detail to existing database entries. This is done by systematically searching the Nexis UK digital archive of 568 regional newspapers published in the UK. In this paper, we construct a robust Boolean search criterion by experimenting with landslide terminology for four training periods. We then apply this search to all articles published in 2006 and 2012. This resulted in the addition of 111 records of landslide events to the NLD over the 2 years investigated (2006 and 2012). We also find that we were able to obtain information about landslide impact for 60-90% of landslide events identified from newspaper articles. Spatial and temporal patterns of additional landslides identified from newspaper articles are broadly in line with those existing in the NLD, confirming that the NLD is a representative sample of landsliding in Great Britain. This method could now be applied to more time periods and/or other hazards to add richness to databases and thus improve our ability to forecast future events based on records of past events.
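
A highly simplified version of screening newspaper text with a Boolean criterion is sketched below; the include/exclude terms are examples only and do not reproduce the paper's validated search string.

```python
# Toy Boolean screening of newspaper articles for landslide reports.
import re

INCLUDE = re.compile(r"\b(landslide|landslip|mudslide|rockfall)\b", re.IGNORECASE)
EXCLUDE = re.compile(r"\blandslide (victory|win)\b", re.IGNORECASE)

def is_candidate(article_text: str) -> bool:
    """Keep articles with landslide terms, dropping the political idiom."""
    return bool(INCLUDE.search(article_text)) and not EXCLUDE.search(article_text)

articles = [
    "Heavy rain triggered a landslide that blocked the A83 yesterday.",
    "The party won the seat in a landslide victory.",
]
print([a for a in articles if is_candidate(a)])  # keeps only the first article
```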

  2. Formalized search strategies for human risk contributions

    International Nuclear Information System (INIS)

    Rasmussen, J.; Pedersen, O.M.

    1982-07-01

    For risk management, the results of a probabilistic risk analysis (PRA) as well as the underlying assumptions can be used as references in a closed-loop risk control; and the analyses of operational experiences as a means of feedback. In this context, the need for explicit definition and documentation of the PRA coverage, including the search strategies applied, is discussed and aids are proposed such as plant description in terms of a formal abstraction hierarchy and use of cause-consequence-charts for the documentation of not only the results of PRA but also of its coverage. Typical human risk contributions are described on the basis of general plant design features relevant for risk and accident analysis. With this background, search strategies for human risk contributions are treated: Under the designation ''work analysis'', procedures for the analysis of familiar, well trained, planned tasks are proposed. Strategies for identifying human risk contributions outside this category are outlined. (author)

  3. Supporting ontology-based keyword search over medical databases.

    Science.gov (United States)

    Kementsietsidis, Anastasios; Lim, Lipyeow; Wang, Min

    2008-11-06

    The proliferation of medical terms poses a number of challenges in the sharing of medical information among different stakeholders. Ontologies are commonly used to establish relationships between different terms, yet their role in querying has not been investigated in detail. In this paper, we study the problem of supporting ontology-based keyword search queries on a database of electronic medical records. We present several approaches to support this type of queries, study the advantages and limitations of each approach, and summarize the lessons learned as best practices.
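
One common way to realize such ontology-assisted keyword search is to expand each query term with related ontology terms before matching; the tiny synonym map and records below are illustrative stand-ins, not a real medical ontology or the paper's specific approach.

```python
# Toy ontology-assisted keyword search over free-text records.
ONTOLOGY = {
    "heart attack": {"myocardial infarction"},
    "high blood pressure": {"hypertension"},
}

RECORDS = [
    "Patient admitted with acute myocardial infarction.",
    "History of hypertension, managed with medication.",
    "Routine follow-up, no complaints.",
]

def expand(term: str) -> set[str]:
    """Add ontology-related terms to the original query term."""
    return {term} | ONTOLOGY.get(term, set())

def keyword_search(term: str) -> list[str]:
    variants = expand(term.lower())
    return [r for r in RECORDS if any(v in r.lower() for v in variants)]

print(keyword_search("heart attack"))  # matches the infarction record via the ontology
```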

  4. A stochastic model for intermittent search strategies

    International Nuclear Information System (INIS)

    Benichou, O; Coppey, M; Moreau, M; Suet, P H; Voituriez, R

    2005-01-01

    It is often necessary, in scientific or everyday life problems, to find a randomly hidden target. What is then the optimal strategy to reach it as rapidly as possible? In this article, we develop a stochastic theory for intermittent search behaviours, which are often observed: the searcher alternates phases of intensive search and slow motion with fast displacements. The first results of this theory have already been announced recently. Here we provide a detailed presentation of the theory, as well as the full derivation of the results. Furthermore, we explicitly discuss the minimization of the time needed to find the target

  5. Formalized Search Strategies for Human Risk Contributions

    DEFF Research Database (Denmark)

    Rasmussen, Jens; Pedersen, O. M.

    For risk management, the results of a probabilistic risk analysis (PRA) as well as the underlying assumptions can be used as references in a closed-loop risk control; and the analyses of operational experiences as a means of feedback. In this context, the need for explicit definition...... risk contributions are described on the basis of general plant design features relevant for risk and accident analysis. With this background, search strategies for human risk contributions are treated: Under the designation "work analysis", procedures for the analysis of familiar, well trained, planned...... tasks are proposed. Strategies for identifying human risk contributions outside this category are outlined....

  6. MSblender: A probabilistic approach for integrating peptide identifications from multiple database search engines.

    Science.gov (United States)

    Kwon, Taejoon; Choi, Hyungwon; Vogel, Christine; Nesvizhskii, Alexey I; Marcotte, Edward M

    2011-07-01

    Shotgun proteomics using mass spectrometry is a powerful method for protein identification but suffers limited sensitivity in complex samples. Integrating peptide identifications from multiple database search engines is a promising strategy to increase the number of peptide identifications and reduce the volume of unassigned tandem mass spectra. Existing methods pool statistical significance scores such as p-values or posterior probabilities of peptide-spectrum matches (PSMs) from multiple search engines after high scoring peptides have been assigned to spectra, but these methods lack reliable control of identification error rates as data are integrated from different search engines. We developed a statistically coherent method for integrative analysis, termed MSblender. MSblender converts raw search scores from search engines into a probability score for every possible PSM and properly accounts for the correlation between search scores. The method reliably estimates false discovery rates and identifies more PSMs than any single search engine at the same false discovery rate. Increased identifications increment spectral counts for most proteins and allow quantification of proteins that would not have been quantified by individual search engines. We also demonstrate that enhanced quantification contributes to improve sensitivity in differential expression analyses.
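
A drastically simplified sketch of the integration idea (pool PSMs from several engines, keep one consolidated match per spectrum, and estimate a decoy-based FDR) is given below; this is not MSblender's actual probabilistic model, and the scores are toy values.

```python
# Toy integration of PSMs from multiple search engines with decoy-based FDR.
def consolidate(psms):
    """Keep the best-scoring PSM per spectrum across all engines."""
    best = {}
    for p in psms:
        key = p["spectrum"]
        if key not in best or p["score"] > best[key]["score"]:
            best[key] = p
    return list(best.values())

def fdr_at_threshold(psms, threshold):
    """Estimate FDR as decoys/targets among PSMs above the score threshold."""
    kept = [p for p in psms if p["score"] >= threshold]
    decoys = sum(p["is_decoy"] for p in kept)
    targets = len(kept) - decoys
    return decoys / max(targets, 1)

psms = [
    {"spectrum": "s1", "peptide": "PEPTIDEK", "engine": "A", "score": 0.92, "is_decoy": False},
    {"spectrum": "s1", "peptide": "PEPTIDEK", "engine": "B", "score": 0.88, "is_decoy": False},
    {"spectrum": "s2", "peptide": "KEDITPEP", "engine": "A", "score": 0.40, "is_decoy": True},
]
merged = consolidate(psms)
print(len(merged), "consolidated PSMs; FDR at 0.5 =", fdr_at_threshold(merged, 0.5))
```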

  7. Archiving, ordering and searching: search engines, algorithms, databases and deep mediatization

    DEFF Research Database (Denmark)

    Andersen, Jack

    2018-01-01

    This article argues that search engines, algorithms, and databases can be considered as a way of understanding deep mediatization (Couldry & Hepp, 2016). They are embedded in a variety of social and cultural practices and as such they change our communicative actions to be shaped by their logic o...... reviewed recent trends in mediatization research, the argument is discussed and unfolded in-between the material and social constructivist-phenomenological interpretations of mediatization. In conclusion, it is discussed how deep this form of mediatization can be taken to be.......

  8. Expert Search Strategies: The Information Retrieval Practices of Healthcare Information Professionals.

    Science.gov (United States)

    Russell-Rose, Tony; Chamberlain, Jon

    2017-10-02

    Healthcare information professionals play a key role in closing the knowledge gap between medical research and clinical practice. Their work involves meticulous searching of literature databases using complex search strategies that can consist of hundreds of keywords, operators, and ontology terms. This process is prone to error and can lead to inefficiency and bias if performed incorrectly. The aim of this study was to investigate the search behavior of healthcare information professionals, uncovering their needs, goals, and requirements for information retrieval systems. A survey was distributed to healthcare information professionals via professional association email discussion lists. It investigated the search tasks they undertake, their techniques for search strategy formulation, their approaches to evaluating search results, and their preferred functionality for searching library-style databases. The popular literature search system PubMed was then evaluated to determine the extent to which their needs were met. The 107 respondents indicated that their information retrieval process relied on the use of complex, repeatable, and transparent search strategies. On average it took 60 minutes to formulate a search strategy, with a search task taking 4 hours and consisting of 15 strategy lines. Respondents reviewed a median of 175 results per search task, far more than they would ideally like (100). The most desired features of a search system were merging search queries and combining search results. Healthcare information professionals routinely address some of the most challenging information retrieval problems of any profession. However, their needs are not fully supported by current literature search systems and there is demand for improved functionality, in particular regarding the development and management of search strategies. ©Tony Russell-Rose, Jon Chamberlain. Originally published in JMIR Medical Informatics (http://medinform.jmir.org), 02.10.2017.

  9. Organizational Search Strategy: An Examination of Interdependencies, Locus and Temporality of Search

    NARCIS (Netherlands)

    Ebrahim, Mahdi

    2017-01-01

    Notwithstanding the ample research on organizational search and its performance implications, the factors that shape organizations’ search strategies are fairly under-explored. This study investigates how problems’ characteristics influence the managerial decision of searching jointly or

  10. Chance and strategy in search processes

    International Nuclear Information System (INIS)

    Moreau, M; Bénichou, O; Loverdo, C; Voituriez, R

    2009-01-01

    We consider a searcher in quest of a target in two situations: in the presence of an infinite number of identical, Poisson distributed targets, and in the presence of a unique target in a finite territory. The searcher alternates intensive search phases, during which it scans the neighbouring territory but does not move, and displacement phases with no target detection. We study the problem of determining the best strategy of displacement for minimizing the mean search time: either a deterministic or a stochastic trajectory. With a reasonable simplifying hypothesis, we show that for Poisson distributed targets, deterministic, self-avoiding trajectories are more efficient than stochastic ones if the detection process involves no memory skills and can be modelled by a Markov process. In contrast, if the detection process is not Markovian, it can be better for the searcher to follow a stochastic trajectory rather than a self-avoiding trajectory, and we give an explicit example of such a memory law. In the case of a unique target, self-avoiding trajectories are always better if an infinite time is available for the search, whereas stochastic trajectories can be more efficient if the searcher has to find the target before a given deadline. Moreover, we show that the gain due to a deterministic trajectory, compared to a stochastic one, is not significant in the case of a large network containing a unique target. Additionally, for various examples of displacement trajectories, we compute the overall mean search time and study its minimization as a function of the mean duration of the detection process

  11. Expert Search Strategies: The Information Retrieval Practices of Healthcare Information Professionals

    OpenAIRE

    Russell-Rose, Tony; Chamberlain, Jon

    2017-01-01

    Background Healthcare information professionals play a key role in closing the knowledge gap between medical research and clinical practice. Their work involves meticulous searching of literature databases using complex search strategies that can consist of hundreds of keywords, operators, and ontology terms. This process is prone to error and can lead to inefficiency and bias if performed incorrectly. Objective The aim of this study was to investigate the search behavior of healthcare inform...

  12. An approach in building a chemical compound search engine in oracle database.

    Science.gov (United States)

    Wang, H; Volarath, P; Harrison, R

    2005-01-01

    Searching for or identifying chemical compounds is an important process in drug design and in chemistry research. An efficient search engine involves a close coupling of the search algorithm and database implementation. The database must process chemical structures, which demands approaches to represent, store, and retrieve structures in a database system. In this paper, a general database framework for working as a chemical compound search engine in Oracle database is described. The framework is devoted to eliminating data type constraints for potential search algorithms, which is a crucial step toward building a domain specific query language on top of SQL. A search engine implementation based on the database framework is also demonstrated. The convenience of the implementation emphasizes the efficiency and simplicity of the framework.

  13. Database system selection for marketing strategies support in information systems

    Directory of Open Access Journals (Sweden)

    František Dařena

    2007-01-01

    Full Text Available In today’s dynamically changing environment, marketing has a significant role. Creating successful marketing strategies requires a large amount of high quality information of various kinds and data types. A powerful database management system is a necessary condition for supporting the creation of marketing strategies. The paper briefly describes the field of marketing strategies and specifies the features that should be provided by database systems in connection with supporting these strategies. Major commercial (Oracle, DB2, MS SQL, Sybase) and open-source (PostgreSQL, MySQL, Firebird) databases are then examined from the point of view of accordance with these characteristics, and their comparison is made. The results are useful for making the decision before acquisition of a database system during the specification of an information system’s hardware architecture.

  14. PIR search result - KOME | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available Data analysis method: Performed blastx searches against the PIR protein database. The results are filtered with Expect values lower than 1e-10. Number of data entries: 1,549,409

  15. Randomized Search Strategies With Imperfect Sensors

    National Research Council Canada - National Science Library

    Gage, Douglas W

    1993-01-01

    .... An important class of coverage applications are those that involve a search, in which a number of searching elements move about within a prescribed search area in order to find one or more target...

  16. pSort search result - KOME | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available File name: kome_psort_search_result.zip File URL: ftp://ftp.biosciencedbc.jp/archive/kome/LATEST/kome_psort_searc...

  17. Literature search strategies for conducting knowledge-building and theory-generating qualitative systematic reviews.

    Science.gov (United States)

    Finfgeld-Connett, Deborah; Johnson, E Diane

    2013-01-01

    To report literature search strategies for the purpose of conducting knowledge-building and theory-generating qualitative systematic reviews. Qualitative systematic reviews lie on a continuum from knowledge-building and theory-generating to aggregating and summarizing. Different types of literature searches are needed to optimally support these dissimilar reviews. Articles published between 1989-Autumn 2011. These documents were identified using a hermeneutic approach and multiple literature search strategies. Redundancy is not the sole measure of validity when conducting knowledge-building and theory-generating systematic reviews. When conducting these types of reviews, literature searches should be consistent with the goal of fully explicating concepts and the interrelationships among them. To accomplish this objective, a 'berry picking' approach is recommended along with strategies for overcoming barriers to finding qualitative research reports. To enhance integrity of knowledge-building and theory-generating systematic reviews, reviewers are urged to make literature search processes as transparent as possible, despite their complexity. This includes fully explaining and rationalizing what databases were used and how they were searched. It also means describing how literature tracking was conducted and grey literature was searched. In the end, the decision to cease searching also needs to be fully explained and rationalized. Predetermined linear search strategies are unlikely to generate search results that are adequate for purposes of conducting knowledge-building and theory-generating qualitative systematic reviews. Instead, it is recommended that iterative search strategies take shape as reviews evolve. © 2012 Blackwell Publishing Ltd.

  18. An effective suggestion method for keyword search of databases

    KAUST Repository

    Huang, Hai; Chen, Zonghai; Liu, Chengfei; Huang, He; Zhang, Xiangliang

    2016-01-01

    This paper solves the problem of providing high-quality suggestions for user keyword queries over databases. With the assumption that the returned suggestions are independent, existing query suggestion methods over databases score candidate

  19. MetaboSearch: tool for mass-based metabolite identification using multiple databases.

    Directory of Open Access Journals (Sweden)

    Bin Zhou

    Full Text Available Searching metabolites against databases according to their masses is often the first step in metabolite identification for a mass spectrometry-based untargeted metabolomics study. Major metabolite databases include the Human Metabolome DataBase (HMDB), the Madison Metabolomics Consortium Database (MMCD), Metlin, and LIPID MAPS. Since each one of these databases covers only a fraction of the metabolome, integration of the search results from these databases is expected to yield a more comprehensive coverage. However, the manual combination of multiple search results is generally difficult when identification of hundreds of metabolites is desired. We have implemented a web-based software tool that enables simultaneous mass-based search against the four major databases, and the integration of the results. In addition, more complete chemical identifier information for the metabolites is retrieved by cross-referencing multiple databases. The search results are merged based on IUPAC International Chemical Identifier (InChI) keys. Besides a simple list of m/z values, the software can accept ion annotation information as input for enhanced metabolite identification. The performance of the software is demonstrated on mass spectrometry data acquired in both positive and negative ionization modes. Compared with search results from individual databases, MetaboSearch provides better coverage of the metabolome and more complete chemical identifier information. The software tool is available at http://omics.georgetown.edu/MetaboSearch.html.
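
The merging step can be pictured with the sketch below, where hits from several mass-based lookups are pooled and de-duplicated on the InChIKey; the lookup tables, masses, names and key strings are placeholders, not real database content.

```python
# Toy mass-based lookup across two 'databases', merged on InChIKey (placeholders).
TOL_PPM = 10.0

DB_A = [{"mz": 180.0634, "name": "candidate X", "inchikey": "PLACEHOLDERKEY-A"}]
DB_B = [
    {"mz": 180.0634, "name": "candidate X (alt name)", "inchikey": "PLACEHOLDERKEY-A"},
    {"mz": 180.0423, "name": "candidate Y", "inchikey": "PLACEHOLDERKEY-B"},
]

def within_tolerance(query_mz, candidate_mz, tol_ppm=TOL_PPM):
    """True if the candidate mass lies within tol_ppm of the query mass."""
    return abs(query_mz - candidate_mz) / candidate_mz * 1e6 <= tol_ppm

def merged_search(query_mz):
    hits = {}
    for db in (DB_A, DB_B):
        for entry in db:
            if within_tolerance(query_mz, entry["mz"]):
                hits.setdefault(entry["inchikey"], entry)  # de-duplicate on InChIKey
    return list(hits.values())

print(merged_search(180.0634))  # one merged hit instead of two duplicates
```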

  20. Database search for safety information on cosmetic ingredients.

    Science.gov (United States)

    Pauwels, Marleen; Rogiers, Vera

    2007-12-01

    Ethical considerations with respect to experimental animal use and regulatory testing are worldwide under heavy discussion and are, in certain cases, taken up in legislative measures. The most explicit example is the European cosmetic legislation, establishing a testing ban on finished cosmetic products since 11 September 2004 and enforcing that the safety of a cosmetic product is assessed by taking into consideration "the general toxicological profile of the ingredients, their chemical structure and their level of exposure" (OJ L151, 32-37, 23 June 1993; OJ L066, 26-35, 11 March 2003). Therefore the availability of referenced and reliable information on cosmetic ingredients becomes a dire necessity. Given the high-speed progress of the World Wide Web services and the concurrent drastic increase in free access to information, identification of relevant data sources and evaluation of the scientific value and quality of the retrieved data, are crucial. Based upon own practical experience, a survey is put together of freely and commercially available data sources with their individual description, field of application, benefits and drawbacks. It should be mentioned that the search strategies described are equally useful as a starting point for any quest for safety data on chemicals or chemical-related substances in general.

  1. Using relational databases for improved sequence similarity searching and large-scale genomic analyses.

    Science.gov (United States)

    Mackey, Aaron J; Pearson, William R

    2004-10-01

    Relational databases are designed to integrate diverse types of information and manage large sets of search results, greatly simplifying genome-scale analyses. Relational databases are essential for management and analysis of large-scale sequence analyses, and can also be used to improve the statistical significance of similarity searches by focusing on subsets of sequence libraries most likely to contain homologs. This unit describes using relational databases to improve the efficiency of sequence similarity searching and to demonstrate various large-scale genomic analyses of homology-related data. This unit describes the installation and use of a simple protein sequence database, seqdb_demo, which is used as a basis for the other protocols. These include basic use of the database to generate a novel sequence library subset, how to extend and use seqdb_demo for the storage of sequence similarity search results and making use of various kinds of stored search results to address aspects of comparative genomic analysis.

  2. Web-based information search and retrieval: effects of strategy use and age on search success.

    Science.gov (United States)

    Stronge, Aideen J; Rogers, Wendy A; Fisk, Arthur D

    2006-01-01

    The purpose of this study was to investigate the relationship between strategy use and search success on the World Wide Web (i.e., the Web) for experienced Web users. An additional goal was to extend understanding of how the age of the searcher may influence strategy use. Current investigations of information search and retrieval on the Web have provided an incomplete picture of Web strategy use because participants have not been given the opportunity to demonstrate their knowledge of Web strategies while also searching for information on the Web. Using both behavioral and knowledge-engineering methods, we investigated searching behavior and system knowledge for 16 younger adults (M = 20.88 years of age) and 16 older adults (M = 67.88 years). Older adults were less successful than younger adults in finding correct answers to the search tasks. Knowledge engineering revealed that the age-related effect resulted from ineffective search strategies and amount of Web experience rather than age per se. Our analysis led to the development of a decision-action diagram representing search behavior for both age groups. Older adults had more difficulty than younger adults when searching for information on the Web. However, this difficulty was related to the selection of inefficient search strategies, which may have been attributable to a lack of knowledge about available Web search strategies. Actual or potential applications of this research include training Web users to search more effectively and suggestions to improve the design of search engines.

  3. Efficiency of Database Search for Identification of Mutated and Modified Proteins via Mass Spectrometry

    OpenAIRE

    Pevzner, Pavel A.; Mulyukov, Zufar; Dancik, Vlado; Tang, Chris L

    2001-01-01

    Although protein identification by matching tandem mass spectra (MS/MS) against protein databases is a widespread tool in mass spectrometry, the question about reliability of such searches remains open. Absence of rigorous significance scores in MS/MS database search makes it difficult to discard random database hits and may lead to erroneous protein identification, particularly in the case of mutated or post-translationally modified peptides. This problem is especially important for high-thr...

  4. Searching Databases without Query-Building Aids: Implications for Dyslexic Users

    Science.gov (United States)

    Berget, Gerd; Sandnes, Frode Eika

    2015-01-01

    Introduction: Few studies document the information searching behaviour of users with cognitive impairments. This paper therefore addresses the effect of dyslexia on information searching in a database with no tolerance for spelling errors and no query-building aids. The purpose was to identify effective search interface design guidelines that…

  5. Term Relevance Feedback and Mediated Database Searching: Implications for Information Retrieval Practice and Systems Design.

    Science.gov (United States)

    Spink, Amanda

    1995-01-01

    This study uses the human approach to examine the sources and effectiveness of search terms selected during 40 mediated interactive database searches and focuses on determining the retrieval effectiveness of search terms identified by users and intermediaries from retrieved items during term relevance feedback. (Author/JKP)

  6. Critical Assessment of Search Strategies in Systematic Reviews in Endodontics.

    Science.gov (United States)

    Yaylali, Ibrahim Ethem; Alaçam, Tayfun

    2016-06-01

    The aim of this study was to perform an overview of literature search strategies in systematic reviews (SRs) published in 2 endodontic journals, Journal of Endodontics and International Endodontic Journal. A search was done by using the MEDLINE (PubMed interface) database to retrieve the articles published between January 1, 2000 and December 31, 2015. The last search was on January 10, 2016. All the SRs published in the 2 journals were retrieved and screened. Eligible SRs were assessed by using 11 questions about search strategies in the SRs that were adapted from 2 guidelines (ie, AMSTAR checklist and the Cochrane Handbook). A total of 83 SRs were retrieved by electronic search. Of these, 55 were from the Journal of Endodontics, and 28 were from the International Endodontic Journal. After screening, 2 SRs were excluded, and 81 SRs were included in the study. Some issues, such as search of grey literature and contact with study authors, were not fully reported (30% and 25%, respectively). On the other hand, some issues, such as the use of index terms and key words and search in at least 2 databases, were reported in most of the SRs (97% and 95%, respectively). The overall quality of the search strategy in both journals was 61%. No significant difference was found between the 2 journals in terms of evaluation criteria (P > .05). There exist areas for improving the quality of reporting of search strategies in SRs; for example, grey literature should be searched for unpublished studies, no language limitation should be applied to databases, and authors should make an attempt to contact the authors of included studies to obtain further relevant information. Copyright © 2016 American Association of Endodontists. Published by Elsevier Inc. All rights reserved.

  7. A student's guide to searching the literature using online databases

    Science.gov (United States)

    Miller, Casey W.; Belyea, Dustin; Chabot, Michelle; Messina, Troy

    2012-02-01

    A method is described to empower students to efficiently perform general and specific literature searches using online resources [Miller et al., Am. J. Phys. 77, 1112 (2009)]. The method was tested on multiple groups, including undergraduate and graduate students with varying backgrounds in scientific literature searches. Students involved in this study showed marked improvement in their awareness of how and where to find scientific information. Repeated exposure to literature searching methods appears worthwhile, starting early in the undergraduate career, and even in graduate school orientation.

  8. The Effect of Teaching Search Strategies on Perceptual Performance.

    Science.gov (United States)

    van der Gijp, Anouk; Vincken, Koen L; Boscardin, Christy; Webb, Emily M; Ten Cate, Olle Th J; Naeger, David M

    2017-06-01

    Radiology expertise is dependent on the use of efficient search strategies. The aim of this study is to investigate the effect of teaching search strategies on trainees' accuracy in detecting lung nodules at computed tomography. Two search strategies, "scanning" and "drilling," were tested with a randomized crossover design. Nineteen junior radiology residents were randomized into two groups. Both groups first completed a baseline lung nodule detection test allowing a free search strategy, followed by a test after scanning instruction and drilling instruction or vice versa. True positive (TP) and false positive (FP) scores and scroll behavior were registered. A mixed-design analysis of variance was applied to compare the three search conditions. Search strategy instruction had a significant effect on scroll behavior, F(1.3) = 54.2. TP scores for drilling were significantly higher than for free search (M = 15.3, SD = 4.6), t(18) = 4.44. FP scores for drilling (M = 7.3, SD = 5.6) were significantly lower than for free search (M = 12.5, SD = 7.8), t(18) = 4.86, P < 0.001. Teaching a drilling strategy is preferable to teaching a scanning strategy for finding lung nodules. Copyright © 2017 The Association of University Radiologists. Published by Elsevier Inc. All rights reserved.

  9. Searching for religion and mental health studies required health, social science, and grey literature databases.

    Science.gov (United States)

    Wright, Judy M; Cottrell, David J; Mir, Ghazala

    2014-07-01

    To determine the optimal databases to search for studies of faith-sensitive interventions for treating depression. We examined 23 health, social science, religious, and grey literature databases searched for an evidence synthesis. Databases were prioritized by yield of (1) search results, (2) potentially relevant references identified during screening, (3) included references contained in the synthesis, and (4) included references that were available in the database. We assessed the impact of databases beyond MEDLINE, EMBASE, and PsycINFO by their ability to supply studies identifying new themes and issues. We identified pragmatic workload factors that influence database selection. PsycINFO was the best performing database within all priority lists. ArabPsyNet, CINAHL, Dissertations and Theses, EMBASE, Global Health, Health Management Information Consortium, MEDLINE, PsycINFO, and Sociological Abstracts were essential for our searches to retrieve the included references. Citation tracking activities and the personal library of one of the research teams made significant contributions of unique, relevant references. Religion studies databases (Am Theo Lib Assoc, FRANCIS) did not provide unique, relevant references. Literature searches for reviews and evidence syntheses of religion and health studies should include social science, grey literature, non-Western databases, personal libraries, and citation tracking activities. Copyright © 2014 Elsevier Inc. All rights reserved.

  10. Comparison of search strategies in systematic reviews of adverse effects to other systematic reviews.

    Science.gov (United States)

    Golder, Su; Loke, Yoon K; Zorzela, Liliane

    2014-06-01

    Research indicates that the methods used to identify data for systematic reviews of adverse effects may need to differ from other systematic reviews. To compare search methods in systematic reviews of adverse effects with other reviews. The search methodologies in 849 systematic reviews of adverse effects were compared with other reviews. Poor reporting of search strategies is apparent in both systematic reviews of adverse effects and other types of systematic reviews. Systematic reviews of adverse effects are less likely to restrict their searches to MEDLINE or include only randomised controlled trials (RCTs). The use of other databases is largely dependent on the topic area and the year the review was conducted, with more databases searched in more recent reviews. Adverse effects search terms are used by 72% of reviews and despite recommendations only two reviews report using floating subheadings. The poor reporting of search strategies in systematic reviews is universal, as is the dominance of searching MEDLINE. However, reviews of adverse effects are more likely to include a range of study designs (not just RCTs) and search beyond MEDLINE. © 2014 Crown Copyright.

  11. Chapter 51: How to Build a Simple Cone Search Service Using a Local Database

    Science.gov (United States)

    Kent, B. R.; Greene, G. R.

    The cone search service protocol will be examined from the server side in this chapter. A simple cone search service will be setup and configured locally using MySQL. Data will be read into a table, and the Java JDBC will be used to connect to the database. Readers will understand the VO cone search specification and how to use it to query a database on their local systems and return an XML/VOTable file based on an input of RA/DEC coordinates and a search radius. The cone search in this example will be deployed as a Java servlet. The resulting cone search can be tested with a verification service. This basic setup can be used with other languages and relational databases.
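
The chapter's service is Java/MySQL, but the core query logic (select catalogue rows within an angular radius of the requested RA/Dec and wrap them in a VOTable) can be sketched in a few lines; the in-memory catalogue and minimal XML skeleton below are simplified assumptions, not the chapter's code.

```python
# Minimal cone-search core: radius filter plus a bare-bones VOTable wrapper.
import math

CATALOGUE = [  # (name, ra_deg, dec_deg) -- toy rows
    ("ObjA", 10.68, 41.27),
    ("ObjB", 10.75, 41.10),
    ("ObjC", 150.00, 2.20),
]

def angular_sep(ra1, dec1, ra2, dec2):
    """Great-circle separation in degrees (spherical law of cosines)."""
    ra1, dec1, ra2, dec2 = map(math.radians, (ra1, dec1, ra2, dec2))
    cos_sep = (math.sin(dec1) * math.sin(dec2)
               + math.cos(dec1) * math.cos(dec2) * math.cos(ra1 - ra2))
    return math.degrees(math.acos(max(-1.0, min(1.0, cos_sep))))

def cone_search(ra, dec, sr):
    """Return a minimal VOTable string with all rows within sr degrees of (ra, dec)."""
    rows = [r for r in CATALOGUE if angular_sep(ra, dec, r[1], r[2]) <= sr]
    body = "".join(f"<TR><TD>{n}</TD><TD>{a}</TD><TD>{d}</TD></TR>" for n, a, d in rows)
    return ("<VOTABLE><RESOURCE><TABLE><DATA><TABLEDATA>"
            + body + "</TABLEDATA></DATA></TABLE></RESOURCE></VOTABLE>")

print(cone_search(10.7, 41.2, 0.3))  # returns ObjA and ObjB
```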

  12. Enabling Searches on Wavelengths in a Hyperspectral Indices Database

    Science.gov (United States)

    Piñuela, F.; Cerra, D.; Müller, R.

    2017-10-01

    Spectral indices derived from hyperspectral reflectance measurements are powerful tools to estimate physical parameters in a non-destructive and precise way for several fields of application, among others vegetation health analysis, coastal and deep water constituents, geology, and atmosphere composition. In recent years, several micro-hyperspectral sensors have appeared, with both full-frame and push-broom acquisition technologies, while in the near future several hyperspectral spaceborne missions are planned to be launched. This is fostering the use of hyperspectral data in basic and applied research, causing a large number of spectral indices to be defined and used in various applications. Ad hoc search engines are therefore needed to retrieve the most appropriate indices for a given application. In traditional systems, query input parameters are limited to alphanumeric strings, while characteristics such as spectral range/bandwidth are not used in any existing search engine. Such information would be relevant, as it enables an inverse type of search: given the spectral capabilities of a given sensor or a specific spectral band, find all indices which can be derived from it. This paper describes a tool which enables such a search by using the central wavelength or spectral range used by a given index as a search parameter. This offers the ability to manage numeric wavelength ranges in order to select indices which work best in a given set of wavelengths or wavelength ranges.
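
The wavelength-aware lookup can be illustrated with the toy sketch below: given a sensor's spectral range, return the indices whose required wavelengths all fall inside it; the index table (wavelengths in nm) is a small example, not the project's actual database or schema.

```python
# Toy inverse search: which indices can a sensor covering [min_nm, max_nm] derive?
INDICES = {            # index name -> required wavelengths in nm (illustrative)
    "NDVI": [660, 860],
    "PRI": [531, 570],
    "NDWI": [860, 1240],
}

def indices_for_sensor(min_nm: float, max_nm: float) -> list[str]:
    """Return indices whose required wavelengths are all covered by the sensor."""
    return [name for name, bands in INDICES.items()
            if all(min_nm <= w <= max_nm for w in bands)]

print(indices_for_sensor(400, 1000))  # -> ['NDVI', 'PRI']
```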

  13. Social Work Literature Searching: Current Issues with Databases and Online Search Engines

    Science.gov (United States)

    McGinn, Tony; Taylor, Brian; McColgan, Mary; McQuilkan, Janice

    2016-01-01

    Objectives: To compare the performance of a range of search facilities; and to illustrate the execution of a comprehensive literature search for qualitative evidence in social work. Context: Developments in literature search methods and comparisons of search facilities help facilitate access to the best available evidence for social workers.…

  14. Usability Testing of a Large, Multidisciplinary Library Database: Basic Search and Visual Search

    Directory of Open Access Journals (Sweden)

    Jody Condit Fagan

    2006-09-01

    Full Text Available Visual search interfaces have been shown by researchers to assist users with information search and retrieval. Recently, several major library vendors have added visual search interfaces or functions to their products. For public service librarians, perhaps the most critical area of interest is the extent to which visual search interfaces and text-based search interfaces support research. This study presents the results of eight full-scale usability tests of both the EBSCOhost Basic Search and Visual Search in the context of a large liberal arts university.

  15. Modelling antibody side chain conformations using heuristic database search.

    Science.gov (United States)

    Ritchie, D W; Kemp, G J

    1997-01-01

    We have developed a knowledge-based system which models the side chain conformations of residues in the variable domains of antibody Fv fragments. The system is written in Prolog and uses an object-oriented database of aligned antibody structures in conjunction with a side chain rotamer library. The antibody database provides 3-dimensional clusters of side chain conformations which can be copied en masse into the model structure. The object-oriented database architecture facilitates a navigational style of database access, necessary to assemble side chains clusters. Around 60% of the model is built using side chain clusters and this eliminates much of the combinatorial complexity associated with many other side chain placement algorithms. Construction and placement of side chain clusters is guided by a heuristic cost function based on a simple model of side chain packing interactions. Even with a simple model, we find that a large proportion of side chain conformations are modelled accurately. We expect our approach could be used with other homologous protein families, in addition to antibodies, both to improve the quality of model structures and to give a "smart start" to the side chain placement problem.

  16. Real-Time Ligand Binding Pocket Database Search Using Local Surface Descriptors

    Science.gov (United States)

    Chikhi, Rayan; Sael, Lee; Kihara, Daisuke

    2010-01-01

    Due to the increasing number of structures of unknown function accumulated by ongoing structural genomics projects, there is an urgent need for computational methods for characterizing protein tertiary structures. As functions of many of these proteins are not easily predicted by conventional sequence database searches, a legitimate strategy is to utilize structure information in function characterization. Of particular interest is prediction of ligand binding to a protein, as ligand molecule recognition is a major part of the molecular function of proteins. Predicting whether a ligand molecule binds a protein is a complex problem due to the physical nature of protein-ligand interactions and the flexibility of both binding sites and ligand molecules. However, geometric and physicochemical complementarity is observed between the ligand and its binding site in many cases. Therefore, ligand molecules which bind to a local surface site in a protein can be predicted by finding similar local pockets of known binding ligands in the structure database. Here, we present two representations of ligand binding pockets and utilize them for ligand binding prediction by pocket shape comparison. These representations are based on mapping of surface properties of binding pockets, which are compactly described either by two-dimensional pseudo-Zernike moments or by 3D Zernike descriptors. These compact representations allow fast, real-time pocket searching against a database. A thorough benchmark study employing two different datasets shows that our representations are competitive with other existing methods. Limitations and potentials of the shape-based methods as well as possible improvements are discussed. PMID:20455259

  17. Searching for evidence or approval? A commentary on database search in systematic reviews and alternative information retrieval methodologies.

    Science.gov (United States)

    Delaney, Aogán; Tamás, Peter A

    2018-03-01

    Despite recognition that database search alone is inadequate even within the health sciences, it appears that reviewers in fields that have adopted systematic review are choosing to rely primarily, or only, on database search for information retrieval. This commentary reminds readers of factors that call into question the appropriateness of default reliance on database searches particularly as systematic review is adapted for use in new and lower consensus fields. It then discusses alternative methods for information retrieval that require development, formalisation, and evaluation. Our goals are to encourage reviewers to reflect critically and transparently on their choice of information retrieval methods and to encourage investment in research on alternatives. Copyright © 2017 John Wiley & Sons, Ltd.

  18. Federated or cached searches: providing expected performance from multiple invasive species databases

    Science.gov (United States)

    Graham, Jim; Jarnevich, Catherine S.; Simpson, Annie; Newman, Gregory J.; Stohlgren, Thomas J.

    2011-01-01

    Invasive species are a universal global problem, but the information to identify them, manage them, and prevent invasions is stored around the globe in a variety of formats. The Global Invasive Species Information Network is a consortium of organizations working toward providing seamless access to these disparate databases via the Internet. A distributed network of databases can be created using the Internet and a standard web service protocol. There are two options to provide this integration. First, federated searches are being proposed to allow users to search “deep” web documents such as databases for invasive species. A second method is to create a cache of data from the databases for searching. We compare these two methods, and show that federated searches will not provide the performance and flexibility required from users and a central cache of the datum are required to improve performance.

  19. Content Based Retrieval Database Management System with Support for Similarity Searching and Query Refinement

    National Research Council Canada - National Science Library

    Ortega-Binderberger, Michael

    2002-01-01

    ... as a critical area of research. This thesis explores how to enhance database systems with content based search over arbitrary abstract data types in a similarity based framework with query refinement...

  20. The new ENSDF search system NESSY: IBM/PC nuclear spectroscopy database

    International Nuclear Information System (INIS)

    Boboshin, I.N.; Varlamov, V.V.

    1996-01-01

    The universal relational nuclear structure and decay database NESSY (New ENSDF Search SYstem) developed for the IBM/PC and compatible PCs, and based on the international file ENSDF (Evaluated Nuclear Structure Data File), is described. The NESSY provides the possibility of high efficiency processing (the search and retrieval of any kind of physical data) of the information from ENSDF. The principles of the database development are described and examples of applications are presented. (orig.)

  1. Online Information Search Performance and Search Strategies in a Health Problem-Solving Scenario.

    Science.gov (United States)

    Sharit, Joseph; Taha, Jessica; Berkowsky, Ronald W; Profita, Halley; Czaja, Sara J

    2015-01-01

    Although access to Internet health information can be beneficial, solving complex health-related problems online is challenging for many individuals. In this study, we investigated the performance of a sample of 60 adults ages 18 to 85 years in using the Internet to resolve a relatively complex health information problem. The impact of age, Internet experience, and cognitive abilities on measures of search time, amount of search, and search accuracy was examined, and a model of Internet information seeking was developed to guide the characterization of participants' search strategies. Internet experience was found to have no impact on performance measures. Older participants exhibited longer search times and lower amounts of search but similar search accuracy performance as their younger counterparts. Overall, greater search accuracy was related to an increased amount of search but not to increased search duration and was primarily attributable to higher cognitive abilities, such as processing speed, reasoning ability, and executive function. There was a tendency for those who were younger, had greater Internet experience, and had higher cognitive abilities to use a bottom-up (i.e., analytic) search strategy, although use of a top-down (i.e., browsing) strategy was not necessarily unsuccessful. Implications of the findings for future studies and design interventions are discussed.

  2. An effective suggestion method for keyword search of databases

    KAUST Repository

    Huang, Hai

    2016-09-09

    This paper solves the problem of providing high-quality suggestions for user keyword queries over databases. With the assumption that the returned suggestions are independent, existing query suggestion methods over databases score candidate suggestions individually and return the top-k best of them. However, the top-k suggestions have high redundancy with respect to the topics. To provide informative suggestions, the returned k suggestions are expected to be diverse, i.e., they should simultaneously maximize the relevance to the user query and the diversity with respect to topics that the user might be interested in. In this paper, an objective function considering both factors is defined for evaluating a suggestion set. We show that maximizing the objective function is a submodular function maximization problem subject to n matroid constraints, which is an NP-hard problem. A greedy approximation algorithm with an approximation ratio of O((Formula presented.)) is also proposed. Experimental results show that our suggestion method outperforms other methods in providing relevant and diverse suggestions. © 2016 Springer Science+Business Media New York
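
    The following sketch is not the paper's algorithm; it only illustrates the general idea of greedily selecting suggestions that trade off relevance against topical diversity. The candidate suggestions, relevance scores, topic labels, and trade-off weight are invented for the example.

      # Schematic greedy selection of k keyword suggestions balancing
      # relevance to the query against coverage of new topics.
      def greedy_diverse_suggestions(candidates, k=3, lam=0.5):
          # candidates: list of (suggestion, relevance, topic)
          selected, covered_topics = [], set()
          remaining = list(candidates)
          while remaining and len(selected) < k:
              def gain(c):
                  _, rel, topic = c
                  diversity_gain = 0.0 if topic in covered_topics else 1.0
                  return (1 - lam) * rel + lam * diversity_gain
              best = max(remaining, key=gain)
              selected.append(best[0])
              covered_topics.add(best[2])
              remaining.remove(best)
          return selected

      candidates = [
          ("database keyword search", 0.90, "search"),
          ("keyword query processing", 0.85, "search"),
          ("query suggestion diversity", 0.70, "diversification"),
          ("top-k ranking", 0.60, "ranking"),
      ]
      print(greedy_diverse_suggestions(candidates))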

  3. PLAST: parallel local alignment search tool for database comparison

    Directory of Open Access Journals (Sweden)

    Lavenier Dominique

    2009-10-01

    Full Text Available Abstract Background Sequence similarity searching is an important and challenging task in molecular biology and next-generation sequencing should further strengthen the need for faster algorithms to process such vast amounts of data. At the same time, the internal architecture of current microprocessors is tending towards more parallelism, leading to the use of chips with two, four and more cores integrated on the same die. The main purpose of this work was to design an effective algorithm to fit with the parallel capabilities of modern microprocessors. Results A parallel algorithm for comparing large genomic banks and targeting middle-range computers has been developed and implemented in PLAST software. The algorithm exploits two key parallel features of existing and future microprocessors: the SIMD programming model (SSE instruction set and the multithreading concept (multicore. Compared to multithreaded BLAST software, tests performed on an 8-processor server have shown speedup ranging from 3 to 6 with a similar level of accuracy. Conclusion A parallel algorithmic approach driven by the knowledge of the internal microprocessor architecture allows significant speedup to be obtained while preserving standard sensitivity for similarity search problems.
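
    PLAST itself combines SSE vector instructions with multithreading in C; the sketch below only illustrates the coarse-grained part of the idea, splitting a sequence bank across CPU cores with Python's standard library. The placeholder scoring kernel and the synthetic bank are not PLAST's alignment algorithm.

      # Coarse-grained bank splitting across CPU cores. A trivial shared
      # 3-mer count stands in for the real alignment kernel.
      from concurrent.futures import ProcessPoolExecutor

      def score_chunk(args):
          query, chunk = args
          qkmers = {query[i:i + 3] for i in range(len(query) - 2)}
          return [(name, sum(seq[i:i + 3] in qkmers for i in range(len(seq) - 2)))
                  for name, seq in chunk]

      def parallel_search(query, bank, workers=4):
          chunks = [bank[i::workers] for i in range(workers)]
          with ProcessPoolExecutor(max_workers=workers) as pool:
              results = list(pool.map(score_chunk, [(query, c) for c in chunks]))
          return sorted((hit for part in results for hit in part),
                        key=lambda x: x[1], reverse=True)

      if __name__ == "__main__":
          bank = [("seq%d" % i, "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ" * 2)
                  for i in range(100)]
          print(parallel_search("MKTAYIAKQR", bank)[:3])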

  4. muBLASTP: database-indexed protein sequence search on multicore CPUs.

    Science.gov (United States)

    Zhang, Jing; Misra, Sanchit; Wang, Hao; Feng, Wu-Chun

    2016-11-04

    The Basic Local Alignment Search Tool (BLAST) is a fundamental program in the life sciences that searches databases for sequences that are most similar to a query sequence. Currently, the BLAST algorithm utilizes a query-indexed approach. Although many approaches suggest that sequence search with a database index can achieve much higher throughput (e.g., BLAT, SSAHA, and CAFE), they cannot deliver the same level of sensitivity as the query-indexed BLAST, i.e., NCBI BLAST, or they can only support nucleotide sequence search, e.g., MegaBLAST. Due to the different challenges and characteristics of query indexing and database indexing, existing techniques for query-indexed search cannot be applied directly to database-indexed search. muBLASTP, a novel database-indexed BLAST for protein sequence search, delivers hits identical to those returned by NCBI BLAST. On Intel Haswell multicore CPUs, for a single query, the single-threaded muBLASTP achieves up to a 4.41-fold speedup for the alignment stages, and up to a 1.75-fold end-to-end speedup over single-threaded NCBI BLAST. For a batch of queries, the multithreaded muBLASTP achieves up to a 5.7-fold speedup for the alignment stages, and up to a 4.56-fold end-to-end speedup over multithreaded NCBI BLAST. With a newly designed index structure for the protein database and associated optimizations to the BLASTP algorithm, we re-factored BLASTP for modern multicore processors, achieving much higher throughput with an acceptable memory footprint for the database index.
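
    A toy illustration of database-indexed seeding (not muBLASTP's actual index layout): every k-mer of every database protein is mapped to (sequence id, offset), and the query's k-mers are looked up to collect candidate hits. The sequences and k value are arbitrary; real BLAST-like tools add neighborhood words, two-hit seeding, and gapped extension on top.

      from collections import defaultdict

      def build_index(db, k=3):
          # Map every k-mer in every database sequence to (sequence id, offset).
          index = defaultdict(list)
          for seq_id, seq in db.items():
              for pos in range(len(seq) - k + 1):
                  index[seq[pos:pos + k]].append((seq_id, pos))
          return index

      def seed_hits(query, index, k=3):
          # Look up the query's k-mers to collect candidate (query, db) offsets.
          hits = defaultdict(list)
          for qpos in range(len(query) - k + 1):
              for seq_id, dpos in index.get(query[qpos:qpos + k], ()):
                  hits[seq_id].append((qpos, dpos))
          return hits

      db = {"P1": "MKVLAAGITKV", "P2": "GITKVMMQRS"}
      idx = build_index(db)
      print(dict(seed_hits("AAGITK", idx)))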

  5. Heterogeneous Biomedical Database Integration Using a Hybrid Strategy: A p53 Cancer Research Database

    Directory of Open Access Journals (Sweden)

    Vadim Y. Bichutskiy

    2006-01-01

    Full Text Available Complex problems in life science research give rise to multidisciplinary collaboration, and hence, to the need for heterogeneous database integration. The tumor suppressor p53 is mutated in close to 50% of human cancers, and a small drug-like molecule with the ability to restore native function to cancerous p53 mutants is a long-held medical goal of cancer treatment. The Cancer Research DataBase (CRDB) was designed in support of a project to find such small molecules. As a cancer informatics project, the CRDB involved small molecule data, computational docking results, functional assays, and protein structure data. As an example of the hybrid strategy for data integration, it combined the mediation and data warehousing approaches. This paper uses the CRDB to illustrate the hybrid strategy as a viable approach to heterogeneous data integration in biomedicine, and provides a design method for those considering similar systems. More efficient data sharing implies increased productivity, and, hopefully, improved chances of success in cancer research. (Code and database schemas are freely downloadable, http://www.igb.uci.edu/research/research.html.)

  6. Designing Search UX Strategies for eCommerce Success

    CERN Document Server

    Nudelman, Greg

    2011-01-01

    Best practices, practical advice, and design ideas for successful ecommerce search. A glaring gap has existed in the market for a resource that offers comprehensive, actionable design patterns and design strategies for ecommerce search - but no longer. With this invaluable book, user experience designer and user researcher Greg Nudelman shares his years of experience working on popular ecommerce sites as he tackles even the most difficult ecommerce search design problems. Nudelman helps you create highly effective and intuitive ecommerce search design solutions, and he takes a unique forward-thi

  7. A searching and reporting system for relational databases using a graph-based metadata representation.

    Science.gov (United States)

    Hewitt, Robin; Gobbi, Alberto; Lee, Man-Ling

    2005-01-01

    Relational databases are the current standard for storing and retrieving data in the pharmaceutical and biotech industries. However, retrieving data from a relational database requires specialized knowledge of the database schema and of the SQL query language. At Anadys, we have developed an easy-to-use system for searching and reporting data in a relational database to support our drug discovery project teams. This system is fast and flexible and allows users to access all data without having to write SQL queries. This paper presents the hierarchical, graph-based metadata representation and SQL-construction methods that, together, are the basis of this system's capabilities.
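
    The Anadys system itself is not shown in the record above; the sketch below only illustrates the general idea of a graph-based metadata layer in which tables are nodes, foreign-key links are edges, and a join path found by breadth-first search is translated into SQL. All table and column names are hypothetical.

      from collections import deque

      EDGES = {  # table -> {neighbor: (column, neighbor_column)}
          "compound": {"assay_result": ("compound_id", "compound_id")},
          "assay_result": {"compound": ("compound_id", "compound_id"),
                           "assay": ("assay_id", "assay_id")},
          "assay": {"assay_result": ("assay_id", "assay_id")},
      }

      def join_path(start, goal):
          # Breadth-first search over the schema graph for a join path.
          queue, seen = deque([[start]]), {start}
          while queue:
              path = queue.popleft()
              if path[-1] == goal:
                  return path
              for nxt in EDGES[path[-1]]:
                  if nxt not in seen:
                      seen.add(nxt)
                      queue.append(path + [nxt])
          return None

      def build_sql(select_cols, start, goal):
          # Translate the join path into a SQL statement.
          path = join_path(start, goal)
          sql = f"SELECT {', '.join(select_cols)} FROM {path[0]}"
          for a, b in zip(path, path[1:]):
              ca, cb = EDGES[a][b]
              sql += f" JOIN {b} ON {a}.{ca} = {b}.{cb}"
          return sql

      print(build_sql(["compound.name", "assay.name"], "compound", "assay"))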

  8. Search for Directed Networks by Different Random Walk Strategies

    Science.gov (United States)

    Zhu, Zi-Qi; Jin, Xiao-Ling; Huang, Zhi-Long

    2012-03-01

    A comparative study is carried out on the efficiency of five different random walk strategies searching on directed networks constructed from several typical complex networks. Because the differences in search efficiency among the strategies are rooted in network clustering, the clustering coefficient as seen by a random walker on directed networks is defined and computed to be half of that of the corresponding undirected networks. The search processes are performed on directed networks based on the Erdős-Rényi model, the Watts-Strogatz model, the Barabási-Albert model, and a clustered scale-free network model. It is found that the self-avoiding random walk strategy is the best search strategy for such directed networks. Compared to the unrestricted random walk strategy, path-iteration-avoiding random walks can also make the search process much more efficient. However, no-triangle-loop and no-quadrangle-loop random walks do not improve the search efficiency as expected, which differs from the situation on undirected networks, since the clustering coefficient of directed networks is smaller than that of undirected networks.
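
    A bare-bones sketch of a self-avoiding random walk search on a small directed graph (not the study's simulation code); the adjacency list, target node, and restart rule are arbitrary illustration choices.

      import random

      def self_avoiding_search(adj, target, max_steps=1000, seed=1):
          # Walker never revisits a node; if stuck, it restarts from a random node.
          rng = random.Random(seed)
          current, visited, steps = rng.choice(list(adj)), set(), 0
          while steps < max_steps:
              visited.add(current)
              if current == target:
                  return steps
              steps += 1
              choices = [v for v in adj[current] if v not in visited]
              if not choices:                       # dead end: restart the walk
                  current, visited = rng.choice(list(adj)), set()
              else:
                  current = rng.choice(choices)
          return None

      adj = {0: [1, 2], 1: [2, 3], 2: [0, 3], 3: [4], 4: [0]}
      print(self_avoiding_search(adj, target=4))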

  9. Google Scholar Out-Performs Many Subscription Databases when Keyword Searching. A Review of: Walters, W. H. (2009). Google Scholar search performance: Comparative recall and precision. portal: Libraries and the Academy, 9(1), 5-24.

    Directory of Open Access Journals (Sweden)

    Giovanna Badia

    2010-09-01

    title search in Google Scholar using the same keywords, elderly and migration. Compared to the standard search on the same topic, there was almost no difference in recall or precision when a title search was performed and the first 50 results were viewed. Conclusion – Database search performance differs significantly from one field to another, so that a comparative study using a different search topic might produce different search results from those summarized above. Nevertheless, Google Scholar out-performs many subscription databases – in terms of recall and precision – when using keyword searches for some topics, as was the case for the multidisciplinary topic of later-life migration. Google Scholar’s recall and precision rates were high within the first 10 to 100 search results examined. According to the author, “these findings suggest that a searcher who is unwilling to search multiple databases or to adopt a sophisticated search strategy is likely to achieve better than average recall and precision by using Google Scholar” (p. 16). The author concludes the paper by discussing the relevancy of search results obtained by undergraduate students. All of the 155 relevant journal articles on the topic of later-life migration were pre-selected based on an expert critique of the complete articles, rather than by looking at only the titles or abstracts of references as most searchers do. Instructors and librarians may wish to support the use of databases that increase students’ contact with high-quality research documents (i.e., documents that are authoritative, well written, contain a strong analysis, or demonstrate quality in other ways). The study’s findings indicate that Google Scholar is an example of one such database, since it obtained a large number of references to the relevant papers on the topic searched.

  10. Optimal intermittent search strategies: smelling the prey

    International Nuclear Information System (INIS)

    Revelli, J A; Wio, H S; Rojo, F; Budde, C E

    2010-01-01

    We study the kinetics of the search of a single fixed target by a searcher/walker that performs an intermittent random walk, characterized by different states of motion. In addition, we assume that the walker has the ability to detect the scent left by the prey/target in its surroundings. Our results, in agreement with intuition, indicate that the prey's survival probability could be strongly reduced (increased) if the predator is attracted (or repelled) by the trace left by the prey. We have also found that, for a positive trace (the predator is guided towards the prey), increasing the inhomogeneity's size reduces the prey's survival probability, while the optimal value of α (the parameter that regulates intermittency) ceases to exist. The agreement between theory and numerical simulations is excellent.

  11. Optimal intermittent search strategies: smelling the prey

    Energy Technology Data Exchange (ETDEWEB)

    Revelli, J A; Wio, H S [Instituto de Fisica de Cantabria, Universidad de Cantabria and CSIC, E-39005 Santander (Spain); Rojo, F; Budde, C E [Fa.M.A.F., Universidad Nacional de Cordoba, Ciudad Universitaria, X5000HUA Cordoba (Argentina)

    2010-05-14

    We study the kinetics of the search of a single fixed target by a searcher/walker that performs an intermittent random walk, characterized by different states of motion. In addition, we assume that the walker has the ability to detect the scent left by the prey/target in its surroundings. Our results, in agreement with intuition, indicate that the prey's survival probability could be strongly reduced (increased) if the predator is attracted (or repelled) by the trace left by the prey. We have also found that, for a positive trace (the predator is guided towards the prey), increasing the inhomogeneity's size reduces the prey's survival probability, while the optimal value of α (the parameter that regulates intermittency) ceases to exist. The agreement between theory and numerical simulations is excellent.

  12. MIDAS: a database-searching algorithm for metabolite identification in metabolomics.

    Science.gov (United States)

    Wang, Yingfeng; Kora, Guruprasad; Bowen, Benjamin P; Pan, Chongle

    2014-10-07

    A database searching approach can be used for metabolite identification in metabolomics by matching measured tandem mass spectra (MS/MS) against the predicted fragments of metabolites in a database. Here, we present the open-source MIDAS algorithm (Metabolite Identification via Database Searching). To evaluate a metabolite-spectrum match (MSM), MIDAS first enumerates possible fragments from a metabolite by systematic bond dissociation, then calculates the plausibility of the fragments based on their fragmentation pathways, and finally scores the MSM to assess how well the experimental MS/MS spectrum from collision-induced dissociation (CID) is explained by the metabolite's predicted CID MS/MS spectrum. MIDAS was designed to search high-resolution tandem mass spectra acquired on time-of-flight or Orbitrap mass spectrometer against a metabolite database in an automated and high-throughput manner. The accuracy of metabolite identification by MIDAS was benchmarked using four sets of standard tandem mass spectra from MassBank. On average, for 77% of original spectra and 84% of composite spectra, MIDAS correctly ranked the true compounds as the first MSMs out of all MetaCyc metabolites as decoys. MIDAS correctly identified 46% more original spectra and 59% more composite spectra at the first MSMs than an existing database-searching algorithm, MetFrag. MIDAS was showcased by searching a published real-world measurement of a metabolome from Synechococcus sp. PCC 7002 against the MetaCyc metabolite database. MIDAS identified many metabolites missed in the previous study. MIDAS identifications should be considered only as candidate metabolites, which need to be confirmed using standard compounds. To facilitate manual validation, MIDAS provides annotated spectra for MSMs and labels observed mass spectral peaks with predicted fragments. The database searching and manual validation can be performed online at http://midas.omicsbio.org.
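
    MIDAS scores metabolite-spectrum matches using systematic bond dissociation and fragmentation-pathway plausibility; the sketch below shows only the elementary spectrum-matching step of counting observed peaks that fall within a mass tolerance of predicted fragment m/z values. All masses, intensities, and the tolerance are hypothetical.

      def match_score(observed_peaks, predicted_mz, tol_ppm=10.0):
          # Fraction of observed intensity explained by predicted fragments.
          matched_intensity = 0.0
          total_intensity = sum(inten for _, inten in observed_peaks)
          for mz, inten in observed_peaks:
              tol = mz * tol_ppm * 1e-6
              if any(abs(mz - p) <= tol for p in predicted_mz):
                  matched_intensity += inten
          return matched_intensity / total_intensity if total_intensity else 0.0

      observed = [(91.0542, 1200.0), (119.0491, 800.0), (147.0441, 350.0)]
      predicted = [91.0542, 119.0497, 165.0546]   # fragments of a candidate metabolite
      print(round(match_score(observed, predicted), 3))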

  13. Global search tool for the Advanced Photon Source Integrated Relational Model of Installed Systems (IRMIS) database

    International Nuclear Information System (INIS)

    Quock, D.E.R.; Cianciarulo, M.B.

    2007-01-01

    The Integrated Relational Model of Installed Systems (IRMIS) is a relational database tool that has been implemented at the Advanced Photon Source to maintain an updated account of approximately 600 control system software applications, 400,000 process variables, and 30,000 control system hardware components. To effectively display this large amount of control system information to operators and engineers, IRMIS was initially built with nine Web-based viewers: Applications Organizing Index, IOC, PLC, Component Type, Installed Components, Network, Controls Spares, Process Variables, and Cables. However, since each viewer is designed to provide details from only one major category of the control system, the necessity for a one-stop global search tool for the entire database became apparent. The user requirements for extremely fast database search time and ease of navigation through search results led to the choice of Asynchronous JavaScript and XML (AJAX) technology in the implementation of the IRMIS global search tool. Unique features of the global search tool include a two-tier level of displayed search results, and a database data integrity validation and reporting mechanism.

  14. Strategy for a transparent, accessible, and sustainable national claims database.

    Science.gov (United States)

    Gelburd, Robin

    2015-03-01

    The article outlines the strategy employed by FAIR Health, Inc, an independent nonprofit, to maintain a national database of over 18 billion private health insurance claims to support consumer education, payer and provider operations, policy makers, and researchers with standard and customized data sets on an economically self-sufficient basis. It explains how FAIR Health conducts all operations in-house, including data collection, security, validation, information organization, product creation, and transmission, with a commitment to objectivity and reliability in data and data products. It also describes the data elements available to researchers and the diverse studies that FAIR Health data facilitate.

  15. Sagace: A web-based search engine for biomedical databases in Japan

    Directory of Open Access Journals (Sweden)

    Morita Mizuki

    2012-10-01

    Full Text Available Abstract Background In the big data era, biomedical research continues to generate a large amount of data, and the generated information is often stored in a database and made publicly available. Although combining data from multiple databases should accelerate further studies, the current number of life sciences databases is too large to grasp features and contents of each database. Findings We have developed Sagace, a web-based search engine that enables users to retrieve information from a range of biological databases (such as gene expression profiles and proteomics data) and biological resource banks (such as mouse models of disease and cell lines). With Sagace, users can search more than 300 databases in Japan. Sagace offers features tailored to biomedical research, including manually tuned ranking, a faceted navigation to refine search results, and rich snippets constructed with retrieved metadata for each database entry. Conclusions Sagace will be valuable for experts who are involved in biomedical research and drug development in both academia and industry. Sagace is freely available at http://sagace.nibio.go.jp/en/.

  16. The impact of PICO as a search strategy tool on literature search quality

    DEFF Research Database (Denmark)

    Eriksen, Mette Brandt; Frandsen, Tove Faber

    2018-01-01

    Objective: This review aimed to determine, if the use of the PICO model (Patient Intervention Comparison Outcome) as a search strategy tool affects the quality of the literature search. Methods: A comprehensive literature search was conducted in: PubMed, Embase, CINAHL, PsycInfo, Cochrane Library...... and three studies were included, data was extracted, risk of bias was assessed and a qualitative analysis was conducted. The included studies compared PICO to PIC or link to related articles in PubMed; PICOS and SPIDER. One study compared PICO to unguided searching. Due to differences in intervention...

  17. STRATEGIES IN SEARCH FOR INTERNATIONAL PARTNERSHIPS

    Directory of Open Access Journals (Sweden)

    Denise de Freitas

    Full Text Available Introduction: Internationalization is the process that integrates a universal, intercultural, or global dimension into a program, in this case a postgraduate program. It can be understood as the process of increasing participation in international operations. Method: To offer a design and motivational logistics for how to run a process of international scientific partnership. Results: Several different aspects must be developed for internationalization to be achieved: knowing the fundamentals of internationalization; having an international vision; promoting a strategy for internationalization; knowing the characteristics of an institutional center of internationalization; and understanding the institutional advantages of internationalization. Conclusion: Internationalization is essential to invigorate Brazilian postgraduate education and does not mischaracterize or weaken the process; on the contrary, it contributes to increasing its vitality and capacity for innovation. Today it is not possible to imagine science without internationalization.

  18. The LAILAPS search engine: a feature model for relevance ranking in life science databases.

    Science.gov (United States)

    Lange, Matthias; Spies, Karl; Colmsee, Christian; Flemming, Steffen; Klapperstück, Matthias; Scholz, Uwe

    2010-03-25

    Efficient and effective information retrieval in the life sciences is one of the most pressing challenges in bioinformatics. The incredible growth of life science databases into a vast network of interconnected information systems is both a major challenge and a great opportunity for life science research. The knowledge found on the Web, and in particular in life-science databases, is a valuable major resource. In order to bring it to the scientist's desktop, it is essential to have well-performing search engines. For this, neither the response time nor the number of results is the decisive factor; for millions of query results, the most crucial factor is the relevance ranking. In this paper, we present a feature model for relevance ranking in life science databases and its implementation in the LAILAPS search engine. Motivated by the observation of user behavior during the inspection of search engine results, we condensed a set of 9 relevance-discriminating features. These features are intuitively used by scientists who briefly screen database entries for potential relevance. The features are both sufficient to estimate potential relevance and efficiently quantifiable. The derivation of a relevance prediction function that computes the relevance from these features constitutes a regression problem. To solve this problem, we used artificial neural networks trained with a reference set of relevant database entries for 19 protein queries. Supporting a flexible text index and a simple data import format, these concepts are implemented in the LAILAPS search engine. It can easily be used both as a search engine for comprehensive integrated life science databases and for small in-house project databases. LAILAPS is publicly available for SWISSPROT data at http://lailaps.ipk-gatersleben.de.
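
    LAILAPS trains an artificial neural network on nine relevance-discriminating features; the sketch below is only a schematic stand-in in which a single-layer logistic model is fitted by gradient descent to a few hypothetical feature vectors.

      import numpy as np

      def sigmoid(z):
          return 1.0 / (1.0 + np.exp(-z))

      def train(features, labels, lr=0.5, epochs=2000):
          # Fit a single-layer logistic relevance predictor by gradient descent.
          w, b = np.zeros(features.shape[1]), 0.0
          for _ in range(epochs):
              pred = sigmoid(features @ w + b)
              grad = pred - labels
              w -= lr * features.T @ grad / len(labels)
              b -= lr * grad.mean()
          return w, b

      # Hypothetical per-entry features, e.g. term frequency in the title,
      # term frequency in the description, and query coverage, scaled to [0, 1].
      X = np.array([[0.9, 0.7, 1.0], [0.1, 0.2, 0.3], [0.8, 0.4, 0.9], [0.0, 0.1, 0.2]])
      y = np.array([1.0, 0.0, 1.0, 0.0])
      w, b = train(X, y)
      print(np.round(sigmoid(X @ w + b), 2))      # predicted relevance per entry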

  19. Search strategies in practice: Influence of information and task constraints.

    Science.gov (United States)

    Pacheco, Matheus M; Newell, Karl M

    2018-01-01

    The practice of a motor task has been conceptualized as a process of search through a perceptual-motor workspace. The present study investigated the influence of information and task constraints on the search strategy as reflected in the sequential relations of the outcome in a discrete movement virtual projectile task. The results showed that the relation between the changes of trial-to-trial movement outcome to performance level was dependent on the landscape of the task dynamics and the influence of inherent variability. Furthermore, the search was in a constrained parameter region of the perceptual-motor workspace that depended on the task constraints. These findings show that there is not a single function of trial-to-trial change over practice but rather that local search strategies (proportional, discontinuous, constant) adapt to the level of performance and the confluence of constraints to action. Copyright © 2017 Elsevier B.V. All rights reserved.

  20. Computational assessment of visual search strategies in volumetric medical images.

    Science.gov (United States)

    Wen, Gezheng; Aizenman, Avigael; Drew, Trafton; Wolfe, Jeremy M; Haygood, Tamara Miner; Markey, Mia K

    2016-01-01

    When searching through volumetric images [e.g., computed tomography (CT)], radiologists appear to use two different search strategies: "drilling" (restrict eye movements to a small region of the image while quickly scrolling through slices), or "scanning" (search over large areas at a given depth before moving on to the next slice). To computationally identify the type of image information that is used in these two strategies, 23 naïve observers were instructed with either "drilling" or "scanning" when searching for target T's in 20 volumes of faux lung CTs. We computed saliency maps using both classical two-dimensional (2-D) saliency, and a three-dimensional (3-D) dynamic saliency that captures the characteristics of scrolling through slices. Comparing observers' gaze distributions with the saliency maps showed that search strategy alters the type of saliency that attracts fixations. Drillers' fixations aligned better with dynamic saliency and scanners with 2-D saliency. The computed saliency was greater for detected targets than for missed targets. Similar results were observed in data from 19 radiologists who searched five stacks of clinical chest CTs for lung nodules. Dynamic saliency may be superior to the 2-D saliency for detecting targets embedded in volumetric images, and thus "drilling" may be more efficient than "scanning."

  1. A Bayesian network approach to the database search problem in criminal proceedings

    Science.gov (United States)

    2012-01-01

    Background The ‘database search problem’, that is, the strengthening of a case - in terms of probative value - against an individual who is found as a result of a database search, has been approached during the last two decades with substantial mathematical analyses, accompanied by lively debate and centrally opposing conclusions. This represents a challenging obstacle in teaching but also hinders a balanced and coherent discussion of the topic within the wider scientific and legal community. This paper revisits and tracks the associated mathematical analyses in terms of Bayesian networks. Their derivation and discussion for capturing probabilistic arguments that explain the database search problem are outlined in detail. The resulting Bayesian networks offer a distinct view on the main debated issues, along with further clarity. Methods As a general framework for representing and analyzing formal arguments in probabilistic reasoning about uncertain target propositions (that is, whether or not a given individual is the source of a crime stain), this paper relies on graphical probability models, in particular, Bayesian networks. This graphical probability modeling approach is used to capture, within a single model, a series of key variables, such as the number of individuals in a database, the size of the population of potential crime stain sources, and the rarity of the corresponding analytical characteristics in a relevant population. Results This paper demonstrates the feasibility of deriving Bayesian network structures for analyzing, representing, and tracking the database search problem. The output of the proposed models can be shown to agree with existing but exclusively formulaic approaches. Conclusions The proposed Bayesian networks allow one to capture and analyze the currently most well-supported but reputedly counter-intuitive and difficult solution to the database search problem in a way that goes beyond the traditional, purely formulaic expressions
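
    The Bayesian networks themselves are not reproduced here; the sketch below only illustrates the kind of calculation such a model encodes, namely a posterior probability that a matching individual is the source of a crime stain, computed from a flat prior over a population of potential sources and a random-match probability. The model is deliberately simplified and the numbers are hypothetical.

      def posterior_source_given_match(population_size, gamma):
          # Flat prior that a given individual is the source; a true source
          # always matches, a non-source matches with probability gamma.
          prior_source = 1.0 / population_size
          p_match_if_source = 1.0
          p_match_if_not = gamma
          numerator = p_match_if_source * prior_source
          denominator = numerator + p_match_if_not * (1.0 - prior_source)
          return numerator / denominator

      print(posterior_source_given_match(population_size=1_000_000, gamma=1e-9))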

  2. Search extension transforms Wiki into a relational system: a case for flavonoid metabolite database.

    Science.gov (United States)

    Arita, Masanori; Suwa, Kazuhiro

    2008-09-17

    In computer science, database systems are based on the relational model founded by Edgar Codd in 1970. On the other hand, in the area of biology the word 'database' often refers to loosely formatted, very large text files. Although such bio-databases may describe conflicts or ambiguities (e.g., that a protein pair does and does not interact, or unknown parameters) in a positive sense, the flexibility of the data format sacrifices a systematic query mechanism equivalent to the widely used SQL. To overcome this disadvantage, we propose embeddable string-search commands on a Wiki-based system and have designed a half-formatted database. As proof of principle, a database of flavonoids with 6902 molecular structures from over 1687 plant species was implemented on MediaWiki, the background system of Wikipedia. Registered users can describe any information in an arbitrary format. The structured part is subject to text-string searches to realize relational operations. The system was written in the PHP language as an extension of MediaWiki. All modifications are open-source and publicly available. This scheme benefits from both the free-formatted Wiki style and the concise and structured relational-database style. MediaWiki supports multi-user environments for document management, and the cost of database maintenance is alleviated.

  3. Locating qualitative studies in dementia on MEDLINE, EMBASE, CINAHL, and PsycINFO: A comparison of search strategies.

    Science.gov (United States)

    Rogers, Morwenna; Bethel, Alison; Abbott, Rebecca

    2017-10-28

    Qualitative research in dementia improves understanding of the experience of people affected by dementia. Searching databases for qualitative studies is problematic. Qualitative-specific search strategies might help with locating studies. To examine the effectiveness (sensitivity and precision) of 5 qualitative strategies in locating qualitative research studies in dementia in 4 major databases (MEDLINE, EMBASE, PsycINFO, and CINAHL). Qualitative dementia studies were checked for inclusion on MEDLINE, EMBASE, PsycINFO, and CINAHL. Five qualitative search strategies (subject headings, simple free-text terms, complex free-text terms, and 2 broad-based strategies) were tested for study retrieval. Sensitivity, precision, and number needed to read were calculated. Two hundred fourteen qualitative studies in dementia were included. PsycINFO and CINAHL held the most qualitative studies of the 4 databases studied (N = 171 and 166, respectively), and both held unique records (N = 14 and 7, respectively). The controlled vocabulary strategy in CINAHL returned 96% (N = 192) of studies held; by contrast, controlled vocabulary in PsycINFO returned 7% (N = 13) of studies held. The broad-based strategies returned more studies (93-99%) than the other free-text strategies (22-82%). Precision ranged from 0.061 to 0.004, resulting in a number needed to read to obtain 1 relevant study ranging from 16 (simple free-text search in CINAHL) to 239 (broad-based search in EMBASE). Qualitative search strategies using 3 broad terms were more sensitive than long complex searches. The controlled vocabulary for qualitative research in CINAHL was particularly effective. Furthermore, the results indicate that MEDLINE and EMBASE offer little benefit for locating qualitative dementia research if CINAHL and PsycINFO are also searched. Copyright © 2017 John Wiley & Sons, Ltd.

  4. Parallel database search and prime factorization with magnonic holographic memory devices

    Energy Technology Data Exchange (ETDEWEB)

    Khitun, Alexander [Electrical and Computer Engineering Department, University of California - Riverside, Riverside, California 92521 (United States)

    2015-12-28

    In this work, we describe the capabilities of Magnonic Holographic Memory (MHM) for parallel database search and prime factorization. MHM is a type of holographic device which utilizes spin waves for data transfer and processing. Its operation is based on the correlation between the phases and the amplitudes of the input spin waves and the output inductive voltage. The input of MHM is provided by a phased array of spin wave generating elements, allowing the production of phase patterns of arbitrary form. The latter makes it possible to code logic states into the phases of propagating waves and to exploit wave superposition for parallel data processing. We present the results of numerical modeling illustrating parallel database search and prime factorization. The results of numerical simulations on the database search are in agreement with the available experimental data. The use of classical wave interference may result in a significant speedup over conventional digital logic circuits in special-task data processing (e.g., √n in database search). Potentially, magnonic holographic devices can be implemented as complementary logic units to digital processors. Physical limitations and technological constraints of the spin wave approach are also discussed.

  5. Parallel database search and prime factorization with magnonic holographic memory devices

    Science.gov (United States)

    Khitun, Alexander

    2015-12-01

    In this work, we describe the capabilities of Magnonic Holographic Memory (MHM) for parallel database search and prime factorization. MHM is a type of holographic device which utilizes spin waves for data transfer and processing. Its operation is based on the correlation between the phases and the amplitudes of the input spin waves and the output inductive voltage. The input of MHM is provided by a phased array of spin wave generating elements, allowing the production of phase patterns of arbitrary form. The latter makes it possible to code logic states into the phases of propagating waves and to exploit wave superposition for parallel data processing. We present the results of numerical modeling illustrating parallel database search and prime factorization. The results of numerical simulations on the database search are in agreement with the available experimental data. The use of classical wave interference may result in a significant speedup over conventional digital logic circuits in special-task data processing (e.g., √n in database search). Potentially, magnonic holographic devices can be implemented as complementary logic units to digital processors. Physical limitations and technological constraints of the spin wave approach are also discussed.

  6. Parallel database search and prime factorization with magnonic holographic memory devices

    International Nuclear Information System (INIS)

    Khitun, Alexander

    2015-01-01

    In this work, we describe the capabilities of Magnonic Holographic Memory (MHM) for parallel database search and prime factorization. MHM is a type of holographic device which utilizes spin waves for data transfer and processing. Its operation is based on the correlation between the phases and the amplitudes of the input spin waves and the output inductive voltage. The input of MHM is provided by a phased array of spin wave generating elements, allowing the production of phase patterns of arbitrary form. The latter makes it possible to code logic states into the phases of propagating waves and to exploit wave superposition for parallel data processing. We present the results of numerical modeling illustrating parallel database search and prime factorization. The results of numerical simulations on the database search are in agreement with the available experimental data. The use of classical wave interference may result in a significant speedup over conventional digital logic circuits in special-task data processing (e.g., √n in database search). Potentially, magnonic holographic devices can be implemented as complementary logic units to digital processors. Physical limitations and technological constraints of the spin wave approach are also discussed.

  7. Toward a public analysis database for LHC new physics searches using MadAnalysis 5

    Science.gov (United States)

    Dumont, B.; Fuks, B.; Kraml, S.; Bein, S.; Chalons, G.; Conte, E.; Kulkarni, S.; Sengupta, D.; Wymant, C.

    2015-02-01

    We present the implementation, in the MadAnalysis 5 framework, of several ATLAS and CMS searches for supersymmetry in data recorded during the first run of the LHC. We provide extensive details on the validation of our implementations and propose to create a public analysis database within this framework.

  8. A Web-based Tool for SDSS and 2MASS Database Searches

    Science.gov (United States)

    Hendrickson, M. A.; Uomoto, A.; Golimowski, D. A.

    We have developed a web site using HTML, PHP, Python, and MySQL that extracts, processes, and displays data from the Sloan Digital Sky Survey (SDSS) and the Two-Micron All-Sky Survey (2MASS). The goal is to locate brown dwarf candidates in the SDSS database by looking at color cuts; however, the site could also be useful for targeted searches of other databases. MySQL databases are created from broad searches of SDSS and 2MASS data. Broad queries on the SDSS and 2MASS database servers are run weekly so that observers have the most up-to-date information from which to select candidates for observation. Observers can look at detailed information about specific objects, including finding charts, images, and available spectra. In addition, updates from previous observations can be added by any collaborator; this format makes observational collaboration simple. Observers can also restrict the database search, just before or during an observing run, to select objects of special interest.
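
    The site's actual queries are not given in the record above; the sketch below only illustrates a color-cut selection over photometric rows that have already been retrieved. The column names and cut thresholds are hypothetical and are not the survey's recommended brown-dwarf criteria.

      def select_candidates(rows, iz_min=1.7, ri_min=1.0):
          # Keep objects whose i-z and r-i colors exceed the (hypothetical) cuts.
          picked = []
          for r in rows:
              i_z = r["i"] - r["z"]
              r_i = r["r"] - r["i"]
              if i_z > iz_min and r_i > ri_min:
                  picked.append({**r, "i_z": round(i_z, 2), "r_i": round(r_i, 2)})
          return picked

      rows = [
          {"objid": 1, "r": 22.9, "i": 21.4, "z": 19.5},
          {"objid": 2, "r": 20.1, "i": 19.8, "z": 19.6},
      ]
      for cand in select_candidates(rows):
          print(cand["objid"], cand["i_z"], cand["r_i"])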

  9. Identifying complications of interventional procedures from UK routine healthcare databases: a systematic search for methods using clinical codes.

    Science.gov (United States)

    Keltie, Kim; Cole, Helen; Arber, Mick; Patrick, Hannah; Powell, John; Campbell, Bruce; Sims, Andrew

    2014-11-28

    Several authors have developed and applied methods to routine data sets to identify the nature and rate of complications following interventional procedures. But, to date, there has been no systematic search for such methods. The objective of this article was to find, classify and appraise published methods, based on analysis of clinical codes, which used routine healthcare databases in a United Kingdom setting to identify complications resulting from interventional procedures. A literature search strategy was developed to identify published studies that referred, in the title or abstract, to the name or acronym of a known routine healthcare database and to complications from procedures or devices. The following data sources were searched in February and March 2013: Cochrane Methods Register, Conference Proceedings Citation Index - Science, Econlit, EMBASE, Health Management Information Consortium, Health Technology Assessment database, MathSciNet, MEDLINE, MEDLINE in-process, OAIster, OpenGrey, Science Citation Index Expanded and ScienceDirect. Of the eligible papers, those which reported methods using clinical coding were classified and summarised in tabular form using the following headings: routine healthcare database; medical speciality; method for identifying complications; length of follow-up; method of recording comorbidity. The benefits and limitations of each approach were assessed. From 3688 papers identified from the literature search, 44 reported the use of clinical codes to identify complications, from which four distinct methods were identified: 1) searching the index admission for specified clinical codes, 2) searching a sequence of admissions for specified clinical codes, 3) searching for specified clinical codes for complications from procedures and devices within the International Classification of Diseases 10th revision (ICD-10) coding scheme which is the methodology recommended by NHS Classification Service, and 4) conducting manual clinical

  10. A Competitive and Experiential Assignment in Search Engine Optimization Strategy

    Science.gov (United States)

    Clarke, Theresa B.; Clarke, Irvine, III

    2014-01-01

    Despite an increase in ad spending and demand for employees with expertise in search engine optimization (SEO), methods for teaching this important marketing strategy have received little coverage in the literature. Using Bloom's cognitive goals hierarchy as a framework, this experiential assignment provides a process for educators who may be new…

  11. The Development of Visual Search Strategies in Biscriptal Readers.

    Science.gov (United States)

    Liow, Susan Rikard; Green, David; Tam, Melissa

    1999-01-01

    To test whether cognitive processing in bilinguals depends on script combinations and language proficiency, this study investigated the development of alphabetic and logographic visual search strategies in two kinds of biscriptal readers: (1) Malay-English and (2) Chinese-English readers. Results support the view that there are script implications of…

  12. Strategies to optimize MEDLINE and EMBASE search strategies for anesthesiology systematic reviews. An experimental study.

    Science.gov (United States)

    Volpato, Enilze de Souza Nogueira; Betini, Marluci; Puga, Maria Eduarda; Agarwal, Arnav; Cataneo, Antônio José Maria; Oliveira, Luciane Dias de; Bazan, Rodrigo; Braz, Leandro Gobbo; Pereira, José Eduardo Guimarães; El Dib, Regina

    2018-01-15

    A high-quality electronic search is essential for ensuring accuracy and comprehensiveness among the records retrieved when conducting systematic reviews. Therefore, we aimed to identify the most efficient method for searching in both MEDLINE (through PubMed) and EMBASE, covering search terms with variant spellings, direct and indirect orders, and associations with MeSH and EMTREE terms (or lack thereof). Experimental study. UNESP, Brazil. We selected and analyzed 37 search strategies that had specifically been developed for the field of anesthesiology. These search strategies were adapted in order to cover all potentially relevant search terms, with regard to variant spellings and direct and indirect orders, in the most efficient manner. When the strategies included variant spellings and direct and indirect orders, these adapted versions of the search strategies selected retrieved the same number of search results in MEDLINE (mean of 61.3%) and a higher number in EMBASE (mean of 63.9%) in the sample analyzed. The numbers of results retrieved through the searches analyzed here were not identical with and without associated use of MeSH and EMTREE terms. However, association of these terms from both controlled vocabularies retrieved a larger number of records than did the use of either one of them. In view of these results, we recommend that the search terms used should include both preferred and non-preferred terms (i.e. variant spellings and direct/indirect order of the same term) and associated MeSH and EMTREE terms, in order to develop highly-sensitive search strategies for systematic reviews.
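
    As a simple illustration of the recommendation above (not one of the 37 strategies analysed in the study), the sketch below assembles a PubMed-style search string that combines free-text variants, covering spelling variants and direct/indirect word order, with a MeSH term; the terms and field tags shown are only an example.

      def build_query(free_text_variants, mesh_terms):
          # Combine free-text variants (title/abstract) with controlled-vocabulary terms.
          free_block = " OR ".join(f'"{t}"[tiab]' for t in free_text_variants)
          mesh_block = " OR ".join(f'"{t}"[mh]' for t in mesh_terms)
          return f"({free_block}) OR ({mesh_block})"

      variants = [
          "anesthesia, epidural", "epidural anesthesia",   # indirect and direct order
          "anaesthesia, epidural", "epidural anaesthesia", # spelling variants
      ]
      print(build_query(variants, ["Anesthesia, Epidural"]))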

  13. Searching for evidence: Knowledge and search strategies used by forensic scientists

    NARCIS (Netherlands)

    Schraagen, J.M.C.; Leijenhorst, H.

    2001-01-01

    The Forensic Science Laboratory of The Netherlands is suffering from a growing backlog due to an increasing number of cases, which results in long delivery times of research reports within a number of departments. The project, "Strategies for Searching Trace Evidence," was started by the Forensic

  14. IMPROVED SEARCH OF PRINCIPAL COMPONENT ANALYSIS DATABASES FOR SPECTRO-POLARIMETRIC INVERSION

    International Nuclear Information System (INIS)

    Casini, R.; Lites, B. W.; Ramos, A. Asensio; Ariste, A. López

    2013-01-01

    We describe a simple technique for the acceleration of spectro-polarimetric inversions based on principal component analysis (PCA) of Stokes profiles. This technique involves the indexing of the database models based on the sign of the projections (PCA coefficients) of the first few relevant orders of principal components of the four Stokes parameters. In this way, each model in the database can be attributed a distinctive binary number of 2^(4n) bits, where n is the number of PCA orders used for the indexing. Each of these binary numbers (indices) identifies a group of "compatible" models for the inversion of a given set of observed Stokes profiles sharing the same index. The complete set of the binary numbers so constructed evidently determines a partition of the database. The search of the database for the PCA inversion of spectro-polarimetric data can profit greatly from this indexing. In practical cases it becomes possible to approach the ideal acceleration factor of 2^(4n) as compared to the systematic search of a non-indexed database for a traditional PCA inversion. This indexing method relies on the existence of a physical meaning in the sign of the PCA coefficients of a model. For this reason, the presence of model ambiguities and of spectro-polarimetric noise in the observations limits in practice the number n of relevant PCA orders that can be used for the indexing.
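
    A compact sketch of the indexing idea, with synthetic data: each model is projected onto the first n principal components of each of the four Stokes parameters, the signs of the projections are kept, and they are packed into a 4n-bit integer key so that compatible models can be grouped in advance.

      import numpy as np

      def sign_index(model_stokes, pcs, n_orders):
          # Pack the signs of the first n_orders PCA projections of I, Q, U, V
          # into a single integer key.
          key = 0
          for stokes in "IQUV":
              coeffs = pcs[stokes][:n_orders] @ model_stokes[stokes]
              for c in coeffs:
                  key = (key << 1) | (1 if c >= 0 else 0)
          return key

      rng = np.random.default_rng(0)
      n_orders, npix = 2, 64
      pcs = {s: rng.standard_normal((n_orders, npix)) for s in "IQUV"}
      database = [{s: rng.standard_normal(npix) for s in "IQUV"} for _ in range(500)]

      index = {}
      for i, model in enumerate(database):
          index.setdefault(sign_index(model, pcs, n_orders), []).append(i)
      print(len(index), "buckets for", len(database), "models (at most 2**8 = 256)")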

  15. Tandem Mass Spectrum Sequencing: An Alternative to Database Search Engines in Shotgun Proteomics.

    Science.gov (United States)

    Muth, Thilo; Rapp, Erdmann; Berven, Frode S; Barsnes, Harald; Vaudel, Marc

    2016-01-01

    Protein identification via database searches has become the gold standard in mass spectrometry based shotgun proteomics. However, as the quality of tandem mass spectra improves, direct mass spectrum sequencing gains interest as a database-independent alternative. In this chapter, the general principle of this so-called de novo sequencing is introduced along with pitfalls and challenges of the technique. The main tools available are presented with a focus on user friendly open source software which can be directly applied in everyday proteomic workflows.

  16. Efficient Similarity Search Using the Earth Mover's Distance for Large Multimedia Databases

    DEFF Research Database (Denmark)

    Assent, Ira; Wichterich, Marc; Meisen, Tobias

    2008-01-01

    Multimedia similarity search in large databases requires efficient query processing. The Earth mover's distance, introduced in computer vision, is successfully used as a similarity model in a number of small-scale applications. Its computational complexity hindered its adoption in large multimedia...... databases. We enable directly indexing the Earth mover's distance in structures such as the R-tree and the VA-file by providing the accurate 'MinDist' function to any bounding rectangle in the index. We exploit the computational structure of the new MinDist to derive a new lower bound for the EMD Min...

  17. Quantum Partial Searching Algorithm of a Database with Several Target Items

    International Nuclear Information System (INIS)

    Pu-Cha, Zhong; Wan-Su, Bao; Yun, Wei

    2009-01-01

    Choi and Korepin [Quantum Information Processing 6 (2007) 243] presented a quantum partial search algorithm for a database with several target items which can find a target block quickly when each target block contains the same number of target items. In practice, however, the number of target items in each target block is arbitrary. Aiming at this case, we give a condition that guarantees that the partial search algorithm can be performed and that the number of queries to the oracle is minimized. In addition, by further numerical computation we come to the conclusion that the more uniform the distribution of target items, the smaller the number of queries.

  18. Indexing Bibliographic Database Content Using MariaDB and Sphinx Search Server

    Directory of Open Access Journals (Sweden)

    Arie Nugraha

    2014-07-01

    Full Text Available Fast retrieval of digital content has become mandatory for library and archive information systems. Many software applications have emerged to handle the indexing of digital content, from low-level ones such as Apache Lucene, to more RESTful and web-services-ready ones such as Apache Solr and ElasticSearch. Solr’s popularity among library software developers makes it the “de facto” standard software for indexing digital content. For content (full-text content or bibliographic descriptions) already stored inside a relational DBMS such as MariaDB (a fork of MySQL) or PostgreSQL, Sphinx Search Server (Sphinx) is a suitable alternative. This article covers an introduction on how to use Sphinx with MariaDB databases to index database content, as well as some examples of Sphinx API usage.

  19. Global Search Strategies for Solving Multilinear Least-Squares Problems

    Directory of Open Access Journals (Sweden)

    Mats Andersson

    2012-04-01

    Full Text Available The multilinear least-squares (MLLS) problem is an extension of the linear least-squares problem. The difference is that a multilinear operator is used in place of a matrix-vector product. The MLLS is typically a large-scale problem characterized by a large number of local minimizers. It originates, for instance, from the design of filter networks. We present a global search strategy that allows for moving from one local minimizer to a better one. The efficiency of this strategy is illustrated by the results of numerical experiments performed for some problems related to the design of filter networks.

  20. Identification of Alternative Splice Variants Using Unique Tryptic Peptide Sequences for Database Searches.

    Science.gov (United States)

    Tran, Trung T; Bollineni, Ravi C; Strozynski, Margarita; Koehler, Christian J; Thiede, Bernd

    2017-07-07

    Alternative splicing is a mechanism in eukaryotes by which different forms of mRNAs are generated from the same gene. Identification of alternative splice variants requires the identification of peptides specific for alternative splice forms. For this purpose, we generated a human database that contains only unique tryptic peptides specific for alternative splice forms from Swiss-Prot entries. Using this database allows an easy access to splice variant-specific peptide sequences that match to MS data. Furthermore, we combined this database without alternative splice variant-1-specific peptides with human Swiss-Prot. This combined database can be used as a general database for searching of LC-MS data. LC-MS data derived from in-solution digests of two different cell lines (LNCaP, HeLa) and phosphoproteomics studies were analyzed using these two databases. Several nonalternative splice variant-1-specific peptides were found in both cell lines, and some of them seemed to be cell-line-specific. Control and apoptotic phosphoproteomes from Jurkat T cells revealed several nonalternative splice variant-1-specific peptides, and some of them showed clear quantitative differences between the two states.
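
    A toy version of the database-construction idea, with invented protein sequences: each splice form is digested in silico with trypsin (cleaving after K or R, except before P) and only the peptides absent from the canonical variant-1 sequence are kept.

      import re

      def tryptic_peptides(sequence, min_len=6):
          # In-silico tryptic digest: cleave after K or R unless followed by P.
          peptides = re.split(r"(?<=[KR])(?!P)", sequence)
          return {p for p in peptides if len(p) >= min_len}

      canonical = "MSTAVLKENPQWERTYIPASDFGHK"          # hypothetical variant 1
      splice_form = "MSTAVLKGGGLLQWERTYIPASDFGHK"      # hypothetical variant 2

      unique_to_variant = tryptic_peptides(splice_form) - tryptic_peptides(canonical)
      print(sorted(unique_to_variant))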

  1. Strategies to improve sleep during extended search and rescue operations.

    Science.gov (United States)

    Jenkins, Jennifer Lee; Fredericksen, Kim; Stone, Roger; Tang, Nelson

    2007-01-01

    This study investigated strategies to improve sleeping conditions during search and rescue operations during disaster response. Forty members of the Montgomery County (Maryland) Urban Search and Rescue Team were surveyed for individual sleep habits and sleeping aids used during extended deployments. Team members were also asked to suggest methods to improve sleep on future deployments. The average amount of sleep during field operations was 5.4 hours with a range of 4-8 hours. Eight percent surveyed would prefer another schedule besides the 12-hour work day, all of whom proposed three 8-hour shifts. Fifteen percent of participants were interested in a pharmacological sleeping aid. Fifty percent of search and rescue members interviewed would consider using nonpharmacological sleeping aids. Furthermore, 40% of participants stated they had successfully devised self-employed methods of sleep aids for previous deployments, such as ear plugs, massage, mental imagery, personal routines, music and headphones, reading, and blindfolds. This study suggests that availability of both pharmacological and nonpharmacological sleeping aids to search and rescue workers via the team cache could impact the quantity of sleep. Further investigation into methods of optimizing sleep during field missions could theoretically show enhanced performance through various aspects of missions including mitigation of errors, improved productivity, and improved overall physiological and emotional well-being of search and rescue personnel.

  2. Accelerating Smith-Waterman Algorithm for Biological Database Search on CUDA-Compatible GPUs

    Science.gov (United States)

    Munekawa, Yuma; Ino, Fumihiko; Hagihara, Kenichi

    This paper presents a fast method capable of accelerating the Smith-Waterman algorithm for biological database search on a cluster of graphics processing units (GPUs). Our method is implemented using compute unified device architecture (CUDA), which is available on the nVIDIA GPU. As compared with previous methods, our method has four major contributions. (1) The method efficiently uses on-chip shared memory to reduce the data amount being transferred between off-chip video memory and processing elements in the GPU. (2) It also reduces the number of data fetches by applying a data reuse technique to query and database sequences. (3) A pipelined method is also implemented to overlap GPU execution with database access. (4) Finally, a master/worker paradigm is employed to accelerate hundreds of database searches on a cluster system. In experiments, the peak performance on a GeForce GTX 280 card reaches 8.32 giga cell updates per second (GCUPS). We also find that our method reduces the amount of data fetches to 1/140, achieving approximately three times higher performance than a previous CUDA-based method. Our 32-node cluster version is approximately 28 times faster than a single GPU version. Furthermore, the effective performance reaches 75.6 giga instructions per second (GIPS) using 32 GeForce 8800 GTX cards.
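
    The GPU kernels are not reproduced here; the sketch below is a plain single-threaded reference for Smith-Waterman local alignment scoring with a linear gap penalty, which is the computation that the GPU method accelerates. The match, mismatch, and gap scores are arbitrary rather than substitution-matrix based.

      def smith_waterman_score(query, subject, match=2, mismatch=-1, gap=-2):
          # Local alignment DP with a linear gap penalty; only the best score
          # is kept (two rows at a time), no traceback.
          rows, cols = len(query) + 1, len(subject) + 1
          prev = [0] * cols
          best = 0
          for i in range(1, rows):
              curr = [0] * cols
              for j in range(1, cols):
                  sub = match if query[i - 1] == subject[j - 1] else mismatch
                  curr[j] = max(0,
                                prev[j - 1] + sub,   # diagonal: (mis)match
                                prev[j] + gap,       # up: gap in subject
                                curr[j - 1] + gap)   # left: gap in query
                  best = max(best, curr[j])
              prev = curr
          return best

      print(smith_waterman_score("MKTAYIAK", "MKTYIAK"))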

  3. What is lost when searching only one literature database for articles relevant to injury prevention and safety promotion?

    Science.gov (United States)

    Lawrence, D W

    2008-12-01

    To assess what is lost if only one literature database is searched for articles relevant to injury prevention and safety promotion (IPSP) topics. Serial textword (keyword, free-text) searches using multiple synonym terms for five key IPSP topics (bicycle-related brain injuries, ethanol-impaired driving, house fires, road rage, and suicidal behaviors among adolescents) were conducted in four of the bibliographic databases that are most used by IPSP professionals: EMBASE, MEDLINE, PsycINFO, and Web of Science. Through a systematic procedure, an inventory of articles on each topic in each database was conducted to identify the total unduplicated count of all articles on each topic, the number of articles unique to each database, and the articles available if only one database is searched. No single database included all of the relevant articles on any topic, and the database with the broadest coverage differed by topic. A search of only one literature database will return 16.7-81.5% (median 43.4%) of the available articles on any of five key IPSP topics. Each database contributed unique articles to the total bibliography for each topic. A literature search performed in only one database will, on average, lead to a loss of more than half of the available literature on a topic.

  4. In Search of Search Engine Marketing Strategy Amongst SME's in Ireland

    Science.gov (United States)

    Barry, Chris; Charleton, Debbie

    Researchers have identified the Web as a searcher's first port of call for locating information. Search Engine Marketing (SEM) strategies have been noted as a key consideration when developing, maintaining and managing Websites. A study presented here of the SEM practices of Irish small to medium enterprises (SMEs) reveals they plan to spend more resources on SEM in the future. Most firms utilize an informal SEM strategy, where Website optimization is perceived as most effective in attracting traffic. Respondents cite the use of ‘keywords in title and description tags’ as the most used SEM technique, followed by the use of ‘keywords throughout the whole Website’; ‘Pay for Placement’ was the most widely used Paid Search technique. In concurrence with the literature, measuring SEM performance remains a significant challenge, with many firms unsure whether they measure it effectively. An encouraging finding is that Irish SMEs adopt a positive ethical posture when undertaking SEM.

  5. Colil: a database and search service for citation contexts in the life sciences domain.

    Science.gov (United States)

    Fujiwara, Toyofumi; Yamamoto, Yasunori

    2015-01-01

    To promote research activities in a particular research area, it is important to efficiently identify current research trends, advances, and issues in that area. Although review papers in the research area can generally suffice for this purpose, researchers are not always able to obtain such papers, covering the research aspects of interest to them, at the time they are required. Therefore, the utilization of the citation contexts of papers in a research area has been considered as another approach. However, there are few search services to retrieve citation contexts in the life sciences domain; furthermore, efficiently obtaining citation contexts is becoming difficult due to the large volume and rapid growth of life sciences papers. Here, we introduce the Colil (Comments on Literature in Literature) database to store citation contexts in the life sciences domain. By using the Resource Description Framework (RDF) and a newly compiled vocabulary, we built the Colil database and made it available through the SPARQL endpoint. In addition, we developed a web-based search service called Colil that searches for a cited paper in the Colil database and then returns a list of citation contexts for it along with papers relevant to it based on co-citations. The citation contexts in the Colil database were extracted from full-text papers of the PubMed Central Open Access Subset (PMC-OAS), which includes 545,147 papers indexed in PubMed. These papers are distributed across 3,171 journals and cite 5,136,741 unique papers that correspond to approximately 25% of all PubMed entries. By utilizing Colil, researchers can easily refer to a set of citation contexts and relevant papers based on co-citations for a target paper. Colil helps researchers to comprehend life sciences papers in a research area more efficiently and thereby supports more efficient biological research.
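
    Since the abstract notes that the data are exposed through a SPARQL endpoint, a generic query sketch is given below. The endpoint URL, prefix, and predicate names are placeholders for illustration; the actual vocabulary should be taken from the Colil documentation.

      # Generic SPARQL query sketch for retrieving citation contexts.
      # Endpoint URL and predicate names are hypothetical placeholders.
      from SPARQLWrapper import SPARQLWrapper, JSON

      ENDPOINT = "http://example.org/colil/sparql"  # placeholder, not the real endpoint

      sparql = SPARQLWrapper(ENDPOINT)
      sparql.setQuery("""
          PREFIX ex: <http://example.org/colil-vocab#>
          SELECT ?citingPaper ?context WHERE {
              ?citation ex:cites       <http://example.org/paper/PMID12345> ;
                        ex:citingPaper ?citingPaper ;
                        ex:contextText ?context .
          } LIMIT 10
      """)
      sparql.setReturnFormat(JSON)

      for row in sparql.query().convert()["results"]["bindings"]:
          print(row["citingPaper"]["value"], "-", row["context"]["value"][:80])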

  6. CUDASW++: optimizing Smith-Waterman sequence database searches for CUDA-enabled graphics processing units

    Directory of Open Access Journals (Sweden)

    Maskell Douglas L

    2009-05-01

    Full Text Available Abstract Background The Smith-Waterman algorithm is one of the most widely used tools for searching biological sequence databases due to its high sensitivity. Unfortunately, the Smith-Waterman algorithm is computationally demanding, which is further compounded by the exponential growth of sequence databases. The recent emergence of many-core architectures, and their associated programming interfaces, provides an opportunity to accelerate sequence database searches using commonly available and inexpensive hardware. Findings Our CUDASW++ implementation (benchmarked on a single-GPU NVIDIA GeForce GTX 280 graphics card and a dual-GPU GeForce GTX 295 graphics card) provides a significant performance improvement compared to other publicly available implementations, such as SWPS3, CBESW, SW-CUDA, and NCBI-BLAST. CUDASW++ supports query sequences of length up to 59K. For query sequences ranging in length from 144 to 5,478 in Swiss-Prot release 56.6, the single-GPU version achieves an average performance of 9.509 GCUPS with a lowest performance of 9.039 GCUPS and a highest performance of 9.660 GCUPS, and the dual-GPU version achieves an average performance of 14.484 GCUPS with a lowest performance of 10.660 GCUPS and a highest performance of 16.087 GCUPS. Conclusion CUDASW++ is publicly available open-source software. It provides a significant performance improvement for Smith-Waterman-based protein sequence database searches by fully exploiting the compute capability of commonly used CUDA-enabled low-cost GPUs.
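
    The GCUPS figure used throughout these benchmarks is simply the number of alignment-matrix cells evaluated per second; a small helper with illustrative numbers is sketched below.

      # GCUPS = (query length x total database residues) / (seconds x 1e9)
      def gcups(query_length, database_residues, seconds):
          return query_length * database_residues / (seconds * 1e9)

      # Illustrative values only, not the benchmark figures reported above:
      print(f"{gcups(375, 120_000_000, 4.7):.2f} GCUPS")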

  7. Galaxy Strategy for Ligo-Virgo Gravitational Wave Counterpart Searches

    Science.gov (United States)

    Gehrels, Neil; Cannizzo, John K.; Kanner, Jonah; Kasliwal, Mansi M.; Nissanke, Samaya; Singer, Leo P.

    2016-01-01

    In this work we continue a line of inquiry begun in Kanner et al. which detailed a strategy for utilizing telescopes with narrow fields of view, such as the Swift X-Ray Telescope (XRT), to localize gravitational wave (GW) triggers from LIGO (Laser Interferometer Gravitational-Wave Observatory) / Virgo. If one considers the brightest galaxies that produce 50 percent of the light, then the number of galaxies inside typical GW error boxes will be several tens. We have found that this result applies both in the early years of Advanced LIGO, when the range is small and the error boxes large, and in the later years, when the error boxes will be small and the range large. This strategy has the beneficial property of reducing the number of telescope pointings by a factor of 10 to 100 compared with tiling the entire error box. Additional galaxy count reduction will come from a GW rapid distance estimate which will restrict the radial slice in search volume. Combining the bright galaxy strategy with a convolution based on anticipated GW localizations, we find that the searches can be restricted to about 18 plus or minus 5 galaxies for 2015, about 23 plus or minus 4 for 2017, and about 11 plus or minus for 2020. This assumes a distance localization at the putative neutron star-neutron star (NS-NS) merger range mu for each target year, and these totals are integrated out to the range. Integrating out to the horizon would roughly double the totals. For localizations with r (rotation) much less than mu the totals would decrease. The galaxy strategy we present in this work will enable numerous sensitive optical and X-ray telescopes with small fields of view to participate meaningfully in searches wherein the prospects for rapidly fading afterglow place a premium on a fast response time.

  8. Dialysis search filters for PubMed, Ovid MEDLINE, and Embase databases.

    Science.gov (United States)

    Iansavichus, Arthur V; Haynes, R Brian; Lee, Christopher W C; Wilczynski, Nancy L; McKibbon, Ann; Shariff, Salimah Z; Blake, Peter G; Lindsay, Robert M; Garg, Amit X

    2012-10-01

    Physicians frequently search bibliographic databases, such as MEDLINE via PubMed, for best evidence for patient care. The objective of this study was to develop and test search filters to help physicians efficiently retrieve literature related to dialysis (hemodialysis or peritoneal dialysis) from all other articles indexed in PubMed, Ovid MEDLINE, and Embase. A diagnostic test assessment framework was used to develop and test robust dialysis filters. The reference standard was a manual review of the full texts of 22,992 articles from 39 journals to determine whether each article contained dialysis information. Next, 1,623,728 unique search filters were developed, and their ability to retrieve relevant articles was evaluated. The high-performance dialysis filters consisted of up to 65 search terms in combination. These terms included the words "dialy" (truncated), "uremic," "catheters," and "renal transplant wait list." These filters reached peak sensitivities of 98.6% and specificities of 98.5%. The filters' performance remained robust in an independent validation subset of articles. These empirically derived and validated high-performance search filters should enable physicians to effectively retrieve dialysis information from PubMed, Ovid MEDLINE, and Embase.
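
    The sensitivity and specificity figures quoted for these filters follow the standard diagnostic-test definitions; a minimal helper is sketched below, with invented counts chosen only to be roughly consistent with the reported percentages.

      # Standard diagnostic-test metrics for a search filter, from a 2x2 table.
      def filter_metrics(tp, fp, fn, tn):
          sensitivity = tp / (tp + fn)   # share of relevant articles retrieved
          specificity = tn / (tn + fp)   # share of irrelevant articles excluded
          precision   = tp / (tp + fp)   # share of retrieved articles that are relevant
          return sensitivity, specificity, precision

      # Invented counts for illustration only (not the study's data):
      sens, spec, prec = filter_metrics(tp=680, fp=340, fn=10, tn=21962)
      print(f"sensitivity={sens:.1%} specificity={spec:.1%} precision={prec:.1%}")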

  9. Protein backbone angle restraints from searching a database for chemical shift and sequence homology

    Energy Technology Data Exchange (ETDEWEB)

    Cornilescu, Gabriel; Delaglio, Frank; Bax, Ad [National Institutes of Health, Laboratory of Chemical Physics, National Institute of Diabetes and Digestive and Kidney Diseases (United States)

    1999-03-15

    Chemical shifts of backbone atoms in proteins are exquisitely sensitive to local conformation, and homologous proteins show quite similar patterns of secondary chemical shifts. The inverse of this relation is used to search a database for triplets of adjacent residues with secondary chemical shifts and sequence similarity which provide the best match to the query triplet of interest. The database contains 13Cα, 13Cβ, 13C', 1Hα and 15N chemical shifts for 20 proteins for which a high resolution X-ray structure is available. The computer program TALOS was developed to search this database for strings of residues with chemical shift and residue type homology. The relative importance of the weighting factors attached to the secondary chemical shifts of the five types of resonances relative to that of sequence similarity was optimized empirically. TALOS yields the 10 triplets which have the closest similarity in secondary chemical shift and amino acid sequence to those of the query sequence. If the central residues in these 10 triplets exhibit similar φ and ψ backbone angles, their averages can reliably be used as angular restraints for the protein whose structure is being studied. Tests carried out for proteins of known structure indicate that the root-mean-square difference (rmsd) between the output of TALOS and the X-ray derived backbone angles is about 15 deg. Approximately 3% of the predictions made by TALOS are found to be in error.
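
    The core matching step described above can be pictured as a weighted distance between a query triplet and each database triplet over the secondary shifts, plus a sequence-similarity term. The sketch below is a simplified illustration with arbitrary weights, not the empirically optimized TALOS function.

      # Simplified triplet-matching score in the spirit of the approach above:
      # weighted squared secondary-shift differences plus a crude residue-type
      # penalty. Weights and the penalty are arbitrary illustrative assumptions.
      SHIFT_WEIGHTS = {"CA": 1.0, "CB": 1.0, "CO": 0.8, "HA": 2.0, "N": 0.5}

      def triplet_score(query, candidate, seq_penalty=1.5):
          score = 0.0
          for q_res, c_res in zip(query, candidate):   # three adjacent residues
              for atom, w in SHIFT_WEIGHTS.items():
                  score += w * (q_res["shifts"][atom] - c_res["shifts"][atom]) ** 2
              if q_res["aa"] != c_res["aa"]:           # stand-in for sequence similarity
                  score += seq_penalty
          return score  # lower is better; keep the 10 best-scoring database triplets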

  10. Optimal swimming strategies in mate searching pelagic copepods

    DEFF Research Database (Denmark)

    Kiørboe, Thomas

    2008-01-01

    Male copepods must swim to find females, but swimming increases the risk of meeting predators and is expensive in terms of energy expenditure. Here I address the trade-offs between gains and risks and the question of how much and how fast to swim using simple models that optimise the number of lifetime mate encounters. Radically different swimming strategies are predicted for different feeding behaviours, and these predictions are tested experimentally using representative species. In general, male swimming speeds and the difference in swimming speeds between the genders are predicted and observed to increase with increasing conflict between mate searching and feeding. It is high in ambush feeders, where searching (swimming) and feeding are mutually exclusive, and low in species where the matured males do not feed at all. Ambush feeding males alternate between stationary ambush feeding...

  11. Can collective searches profit from Levy walk strategies?

    Energy Technology Data Exchange (ETDEWEB)

    Santos, M C; Da Luz, M G E [Departamento de Fisica, Universidade Federal do Parana, Curitiba-PR, 81531-990 (Brazil); Raposo, E P [Laboratorio de Fisica Teorica e Computacional, Departamento de Fisica, Universidade Federal de Pernambuco, Recife-PE, 50670-901 (Brazil); Viswanathan, G M [Instituto de Fisica, Universidade Federal de Alagoas, Maceio-AL, 57072-970 (Brazil)], E-mail: luz@fisica.ufpr.br

    2009-10-30

    We address the problem of collective searching in which a group of walkers, guided by a leader, looks for randomly located target sites. In such a process, the necessity to maintain the group aggregated imposes a constraint in the foraging dynamics. We discuss four different models for the system collective behavior, with the leader and followers performing Gaussian as well as truncated Levy walks. In environments with low density of targets we show that Levy foraging is advantageous for the whole group, when compared with Gaussian strategy. Furthermore, certain extra rules must be incorporated in the individuals' dynamics, so that a compromise between the trend to keep the group together and the global efficiency of search is met. The exact character of these rules depends on specific details of the foraging process, such as regeneration of target sites and energy costs.
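
    For readers who want a concrete picture of the leader's movement model, a truncated Levy walk can be sketched by drawing power-law distributed step lengths, capping them at a maximum, and choosing a random direction for each step. The exponent and cut-off below are illustrative assumptions.

      import math, random

      # One step of a truncated Levy walk: step length l drawn from p(l) ~ l**(-mu)
      # for l >= l_min (inverse-CDF sampling), capped at l_max, random direction.
      # mu, l_min and l_max are illustrative assumptions.
      def levy_step(mu=2.0, l_min=1.0, l_max=100.0):
          u = random.random()
          length = min(l_min * (1.0 - u) ** (-1.0 / (mu - 1.0)), l_max)
          angle = random.uniform(0.0, 2.0 * math.pi)
          return length * math.cos(angle), length * math.sin(angle)

      x = y = 0.0
      for _ in range(1000):  # a 1000-step walk for the leader
          dx, dy = levy_step()
          x, y = x + dx, y + dy
      print(f"final position: ({x:.1f}, {y:.1f})")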

  12. mirPub: a database for searching microRNA publications.

    Science.gov (United States)

    Vergoulis, Thanasis; Kanellos, Ilias; Kostoulas, Nikos; Georgakilas, Georgios; Sellis, Timos; Hatzigeorgiou, Artemis; Dalamagas, Theodore

    2015-05-01

    Identifying, amongst millions of publications available in MEDLINE, those that are relevant to specific microRNAs (miRNAs) of interest based on keyword search faces major obstacles. References to miRNA names in the literature often deviate from standard nomenclature for various reasons, since even the official nomenclature evolves. For instance, a single miRNA name may identify two completely different molecules or two different names may refer to the same molecule. mirPub is a database with a powerful and intuitive interface, which facilitates searching for miRNA literature, addressing the aforementioned issues. To provide effective search services, mirPub applies text mining techniques on MEDLINE, integrates data from several curated databases and exploits data from its user community following a crowdsourcing approach. Other key features include an interactive visualization service that illustrates intuitively the evolution of miRNA data, tag clouds summarizing the relevance of publications to particular diseases, cell types or tissues and access to TarBase 6.0 data to oversee genes related to miRNA publications. mirPub is freely available at http://www.microrna.gr/mirpub/. vergoulis@imis.athena-innovation.gr or dalamag@imis.athena-innovation.gr Supplementary data are available at Bioinformatics online. © The Author 2014. Published by Oxford University Press.

  13. Faster Smith-Waterman database searches with inter-sequence SIMD parallelisation

    Directory of Open Access Journals (Sweden)

    Rognes Torbjørn

    2011-06-01

    Full Text Available Abstract Background The Smith-Waterman algorithm for local sequence alignment is more sensitive than heuristic methods for database searching, but also more time-consuming. The fastest approach to parallelisation with SIMD technology has previously been described by Farrar in 2007. The aim of this study was to explore whether further speed could be gained by other approaches to parallelisation. Results A faster approach and implementation is described and benchmarked. In the new tool SWIPE, residues from sixteen different database sequences are compared in parallel to one query residue. Using a 375 residue query sequence a speed of 106 billion cell updates per second (GCUPS was achieved on a dual Intel Xeon X5650 six-core processor system, which is over six times more rapid than software based on Farrar's 'striped' approach. SWIPE was about 2.5 times faster when the programs used only a single thread. For shorter queries, the increase in speed was larger. SWIPE was about twice as fast as BLAST when using the BLOSUM50 score matrix, while BLAST was about twice as fast as SWIPE for the BLOSUM62 matrix. The software is designed for 64 bit Linux on processors with SSSE3. Source code is available from http://dna.uio.no/swipe/ under the GNU Affero General Public License. Conclusions Efficient parallelisation using SIMD on standard hardware makes it possible to run Smith-Waterman database searches more than six times faster than before. The approach described here could significantly widen the potential application of Smith-Waterman searches. Other applications that require optimal local alignment scores could also benefit from improved performance.

  14. Faster Smith-Waterman database searches with inter-sequence SIMD parallelisation.

    Science.gov (United States)

    Rognes, Torbjørn

    2011-06-01

    The Smith-Waterman algorithm for local sequence alignment is more sensitive than heuristic methods for database searching, but also more time-consuming. The fastest approach to parallelisation with SIMD technology has previously been described by Farrar in 2007. The aim of this study was to explore whether further speed could be gained by other approaches to parallelisation. A faster approach and implementation is described and benchmarked. In the new tool SWIPE, residues from sixteen different database sequences are compared in parallel to one query residue. Using a 375 residue query sequence a speed of 106 billion cell updates per second (GCUPS) was achieved on a dual Intel Xeon X5650 six-core processor system, which is over six times more rapid than software based on Farrar's 'striped' approach. SWIPE was about 2.5 times faster when the programs used only a single thread. For shorter queries, the increase in speed was larger. SWIPE was about twice as fast as BLAST when using the BLOSUM50 score matrix, while BLAST was about twice as fast as SWIPE for the BLOSUM62 matrix. The software is designed for 64 bit Linux on processors with SSSE3. Source code is available from http://dna.uio.no/swipe/ under the GNU Affero General Public License. Efficient parallelisation using SIMD on standard hardware makes it possible to run Smith-Waterman database searches more than six times faster than before. The approach described here could significantly widen the potential application of Smith-Waterman searches. Other applications that require optimal local alignment scores could also benefit from improved performance.

  15. Database with web interface and search engine as a diagnostics tool for electromagnetic calorimeter

    CERN Document Server

    Paluoja, Priit

    2017-01-01

    During the 2016 data collection, the Compact Muon Solenoid Data Acquisition (CMS DAQ) system has shown very good reliability. Nevertheless, the high complexity of the hardware and the software involved is, by its nature, prone to occasional problems. As a CMS subdetector, the electromagnetic calorimeter (ECAL) is affected in the same way. Some of the issues are not predictable and can appear more than once during the year, such as components becoming noisy, power cuts or failing communication between machines. The chain detection-diagnosis-intervention must be as fast as possible to minimise the downtime of the detector. The aim of this project was to create diagnostic software for the ECAL crew, consisting of a database and a web interface that allow users to search, add and edit the contents of the database.

  16. The VirusBanker database uses a Java program to allow flexible searching through Bunyaviridae sequences

    Directory of Open Access Journals (Sweden)

    Gibbs Mark J

    2008-02-01

    Full Text Available Abstract Background Viruses of the Bunyaviridae have segmented negative-stranded RNA genomes and several of them cause significant disease. Many partial sequences have been obtained from the segments so that GenBank searches give complex results. Sequence databases usually use HTML pages to mediate remote sorting, but this approach can be limiting and may discourage a user from exploring a database. Results The VirusBanker database contains Bunyaviridae sequences and alignments and is presented as two spreadsheets generated by a Java program that interacts with a MySQL database on a server. Sequences are displayed in rows and may be sorted using information that is displayed in columns and includes data relating to the segment, gene, protein, species, strain, sequence length, terminal sequence and date and country of isolation. Bunyaviridae sequences and alignments may be downloaded from the second spreadsheet with titles defined by the user from the columns, or viewed when passed directly to the sequence editor, Jalview. Conclusion VirusBanker allows large datasets of aligned nucleotide and protein sequences from the Bunyaviridae to be compiled and winnowed rapidly using criteria that are formulated heuristically.

  17. The VirusBanker database uses a Java program to allow flexible searching through Bunyaviridae sequences.

    Science.gov (United States)

    Fourment, Mathieu; Gibbs, Mark J

    2008-02-05

    Viruses of the Bunyaviridae have segmented negative-stranded RNA genomes and several of them cause significant disease. Many partial sequences have been obtained from the segments so that GenBank searches give complex results. Sequence databases usually use HTML pages to mediate remote sorting, but this approach can be limiting and may discourage a user from exploring a database. The VirusBanker database contains Bunyaviridae sequences and alignments and is presented as two spreadsheets generated by a Java program that interacts with a MySQL database on a server. Sequences are displayed in rows and may be sorted using information that is displayed in columns and includes data relating to the segment, gene, protein, species, strain, sequence length, terminal sequence and date and country of isolation. Bunyaviridae sequences and alignments may be downloaded from the second spreadsheet with titles defined by the user from the columns, or viewed when passed directly to the sequence editor, Jalview. VirusBanker allows large datasets of aligned nucleotide and protein sequences from the Bunyaviridae to be compiled and winnowed rapidly using criteria that are formulated heuristically.

  18. Quantum Query Complexity for Searching Multiple Marked States from an Unsorted Database

    International Nuclear Information System (INIS)

    Shang Bin

    2007-01-01

    An important and common type of search problem is to find all marked states in an unsorted database with a large number of states. Grover's original quantum search algorithm finds a single marked state with some uncertainty; it has been generalized to the case of multiple marked states, and it has also been modified to find a single marked state with certainty. However, the query complexity of finding all of the multiple marked states has not been addressed. We use a generalized Long's algorithm with high precision to solve such a problem. We calculate the approximate query complexity, which increases with the number of marked states and with the precision that we demand. Finally, we introduce an algorithm for the problem on a 'duality computer' and show its advantage over other algorithms.
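
    As background, the textbook Grover estimate for the number of oracle queries needed to amplify M marked states out of N is roughly (pi/4)*sqrt(N/M); the snippet below evaluates that standard formula and is not the generalized-Long analysis carried out in the paper.

      import math

      # Textbook Grover estimate: ~ (pi/4) * sqrt(N / M) oracle queries to find
      # one of M marked states among N (background only, not the paper's result).
      def grover_queries(n_states, n_marked):
          return math.pi / 4 * math.sqrt(n_states / n_marked)

      for m in (1, 4, 16):
          print(f"N=2**20, M={m}: ~{grover_queries(2**20, m):.0f} queries")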

  19. Making a search engine for Indocean - A database of abstracts: An experience

    Digital Repository Service at National Institute of Oceanography (India)

    Tapaswi, M.P.; Haravu, L.J.

    Book chapter in: Information Management: Trends and Issues (Festschrift in honour of Prof. S. Seetharama).

  20. Allie: a database and a search service of abbreviations and long forms

    Science.gov (United States)

    Yamamoto, Yasunori; Yamaguchi, Atsuko; Bono, Hidemasa; Takagi, Toshihisa

    2011-01-01

    Many abbreviations are used in the literature especially in the life sciences, and polysemous abbreviations appear frequently, making it difficult to read and understand scientific papers that are outside of a reader’s expertise. Thus, we have developed Allie, a database and a search service of abbreviations and their long forms (a.k.a. full forms or definitions). Allie searches for abbreviations and their corresponding long forms in a database that we have generated based on all titles and abstracts in MEDLINE. When a user query matches an abbreviation, Allie returns all potential long forms of the query along with their bibliographic data (i.e. title and publication year). In addition, for each candidate, co-occurring abbreviations and a research field in which it frequently appears in the MEDLINE data are displayed. This function helps users learn about the context in which an abbreviation appears. To deal with synonymous long forms, we use a dictionary called GENA that contains domain-specific terms such as gene, protein or disease names along with their synonymic information. Conceptually identical domain-specific terms are regarded as one term, and then conceptually identical abbreviation-long form pairs are grouped taking into account their appearance in MEDLINE. To keep up with new abbreviations that are continuously introduced, Allie has an automatic update system. In addition, the database of abbreviations and their long forms with their corresponding PubMed IDs is constructed and updated weekly. Database URL: The Allie service is available at http://allie.dbcls.jp/. PMID:21498548
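
    A crude flavour of pairing an abbreviation with the long form that precedes it in parentheses can be captured with a regular expression; the toy heuristic below is for illustration only and is not the text-mining pipeline behind Allie.

      import re

      # Toy long-form/abbreviation pairing: for each parenthesised ALL-CAPS token,
      # take as many preceding words as the abbreviation has letters.
      # A simplistic heuristic for illustration, not Allie's actual method.
      def find_abbreviations(text):
          pairs = []
          for match in re.finditer(r"\(([A-Z]{2,10})\)", text):
              abbr = match.group(1)
              words = text[:match.start()].split()
              pairs.append((abbr, " ".join(words[-len(abbr):])))
          return pairs

      sentence = ("Magnetic resonance imaging (MRI) and positron emission "
                  "tomography (PET) are routinely combined.")
      print(find_abbreviations(sentence))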

  1. Protein structure determination by exhaustive search of Protein Data Bank derived databases.

    Science.gov (United States)

    Stokes-Rees, Ian; Sliz, Piotr

    2010-12-14

    Parallel sequence and structure alignment tools have become ubiquitous and invaluable at all levels in the study of biological systems. We demonstrate the application and utility of this same parallel search paradigm to the process of protein structure determination, benefitting from the large and growing corpus of known structures. Such searches were previously computationally intractable. Through the method of Wide Search Molecular Replacement, developed here, they can be completed in a few hours with the aid of national-scale federated cyberinfrastructure. By dramatically expanding the range of models considered for structure determination, we show that small (less than 12% structural coverage) and low sequence identity (less than 20% identity) template structures can be identified through multidimensional template scoring metrics and used for structure determination. Many new macromolecular complexes can benefit significantly from such a technique due to the lack of known homologous protein folds or sequences. We demonstrate the effectiveness of the method by determining the structure of a full-length p97 homologue from Trichoplusia ni. Example cases with the MHC/T-cell receptor complex and the EmoB protein provide systematic estimates of minimum sequence identity, structure coverage, and structural similarity required for this method to succeed. We describe how this structure-search approach and other novel computationally intensive workflows are made tractable through integration with the US national computational cyberinfrastructure, allowing, for example, rapid processing of the entire Structural Classification of Proteins protein fragment database.

  2. Database searching and accounting of multiplexed precursor and product ion spectra from the data independent analysis of simple and complex peptide mixtures.

    Science.gov (United States)

    Li, Guo-Zhong; Vissers, Johannes P C; Silva, Jeffrey C; Golick, Dan; Gorenstein, Marc V; Geromanos, Scott J

    2009-03-01

    A novel database search algorithm is presented for the qualitative identification of proteins over a wide dynamic range, both in simple and complex biological samples. The algorithm has been designed for the analysis of data originating from data independent acquisitions, whereby multiple precursor ions are fragmented simultaneously. Measurements used by the algorithm include retention time, ion intensities, charge state, and accurate masses on both precursor and product ions from LC-MS data. The search algorithm uses an iterative process whereby each iteration incrementally increases the selectivity, specificity, and sensitivity of the overall strategy. Increased specificity is obtained by utilizing a subset database search approach, whereby for each subsequent stage of the search, only those peptides from securely identified proteins are queried. Tentative peptide and protein identifications are ranked and scored by their relative correlation to a number of models of known and empirically derived physicochemical attributes of proteins and peptides. In addition, the algorithm utilizes decoy database techniques for automatically determining the false positive identification rates. The search algorithm has been tested by comparing the search results from a four-protein mixture, the same four-protein mixture spiked into a complex biological background, and a variety of other "system" type protein digest mixtures. The method was validated independently by data dependent methods, while concurrently relying on replication and selectivity. Comparisons were also performed with other commercially and publicly available peptide fragmentation search algorithms. The presented results demonstrate the ability to correctly identify peptides and proteins from data independent acquisition strategies with high sensitivity and specificity. They also illustrate a more comprehensive analysis of the samples studied; providing approximately 20% more protein identifications, compared to
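
    The decoy-database accounting mentioned above is commonly implemented as the ratio of decoy to target matches that pass a given score threshold; the sketch below shows the generic technique, not the authors' exact scoring model, and the score lists are invented.

      # Generic target-decoy false-discovery-rate estimate at a score threshold.
      # Illustrates the general technique, not this algorithm's exact accounting.
      def estimate_fdr(target_scores, decoy_scores, threshold):
          targets = sum(s >= threshold for s in target_scores)
          decoys = sum(s >= threshold for s in decoy_scores)
          return decoys / targets if targets else 0.0

      # Invented score lists for illustration only:
      targets = [12.1, 9.8, 15.3, 7.2, 11.0, 8.9, 14.4]
      decoys = [6.1, 7.5, 5.9, 8.1, 4.2, 6.6, 7.0]
      print(f"FDR at threshold 8.0: {estimate_fdr(targets, decoys, 8.0):.2%}")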

  3. Online strategies to facilitate health-related knowledge transfer: a systematic search and review.

    Science.gov (United States)

    Mairs, Katie; McNeil, Heather; McLeod, Jordache; Prorok, Jeanette C; Stolee, Paul

    2013-12-01

    Health interventions and practices often lag behind the available research, and the need for timely translation of new health knowledge into practice is becoming increasingly important. The objective of this study was to conduct a systematic search and review of the literature on online knowledge translation techniques that foster the interaction between various stakeholders and assist in the sharing of ideas and knowledge within the health field. The search strategy included all published literature in the English language since January 2003 and used the MEDLINE, Cumulative Index to Nursing and Allied Health Literature (CINAHL), EMBASE and Inspec databases. The results of the review indicate that online strategies are diverse, yet all are applicable in facilitating online health-related knowledge translation. The method of knowledge sharing ranged from use of wikis, discussion forums, blogs, and social media to data/knowledge management tools, virtual communities of practice and conferencing technology - all of which can encourage online health communication and knowledge translation. Online technologies are a key facilitator of health-related knowledge translation. This review of online strategies to facilitate health-related knowledge translation can inform the development and improvement of future strategies to expedite the translation of research to practice. © 2013 Health Libraries Group of CILIP and John Wiley & Sons Ltd.

  4. A highly sensitive search strategy for clinical trials in Literatura Latino Americana e do Caribe em Ciências da Saúde (LILACS) was developed.

    Science.gov (United States)

    Manríquez, Juan J

    2008-04-01

    Systematic reviews should include as many articles as possible. However, many systematic reviews use only databases with high English language content as sources of trials. Literatura Latino Americana e do Caribe em Ciências da Saúde (LILACS) is an underused source of trials, and there is not a validated strategy for searching clinical trials to be used in this database. The objective of this study was to develop a sensitive search strategy for clinical trials in LILACS. An analytical survey was performed. Several single and multiple-term search strategies were tested for their ability to retrieve clinical trials in LILACS. Sensitivity, specificity, and accuracy of each single and multiple-term strategy were calculated using the results of a hand-search of 44 Chilean journals as gold standard. After combining the most sensitive, specific, and accurate single and multiple-term search strategy, a strategy with a sensitivity of 97.75% (95% confidence interval [CI]=95.98-99.53) and a specificity of 61.85 (95% CI=61.19-62.51) was obtained. LILACS is a source of trials that could improve systematic reviews. A new highly sensitive search strategy for clinical trials in LILACS has been developed. It is hoped this search strategy will improve and increase the utilization of LILACS in future systematic reviews.

  5. Databases

    Digital Repository Service at National Institute of Oceanography (India)

    Kunte, P.D.

    Information on bibliographic as well as numeric/textual databases relevant to coastal geomorphology has been included in a tabular form. Databases cover a broad spectrum of related subjects like coastal environment and population aspects, coastline...

  6. The DNA database search controversy revisited: bridging the Bayesian-frequentist gap.

    Science.gov (United States)

    Storvik, Geir; Egeland, Thore

    2007-09-01

    Two different quantities have been suggested for quantification of evidence in cases where a suspect is found by a search through a database of DNA profiles. The likelihood ratio, typically motivated from a Bayesian setting, is preferred by most experts in the field. The so-called np rule has been motivated through frequentist arguments and has been advocated by the American National Research Council and Stockmarr (1999, Biometrics 55, 671-677). The two quantities differ substantially and have given rise to the DNA database search controversy. Although several authors have criticized the different approaches, a full explanation of why these differences appear is still lacking. In this article we show that a P-value in a frequentist hypothesis setting is approximately equal to the result of the np rule. We argue, however, that a more reasonable procedure in this case is to use conditional testing, in which case a P-value directly related to posterior probabilities and the likelihood ratio is obtained. This way of viewing the problem bridges the gap between the Bayesian and frequentist approaches. At the same time it indicates that the np rule should not be used to quantify evidence.
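
    To make the contrast concrete, the snippet below evaluates both quantities for an arbitrary random-match probability and database size; the numbers are invented, and which quantity is the appropriate measure of evidence is exactly the controversy discussed in the paper.

      # Illustrative comparison of the likelihood ratio (~1/p) with the np rule.
      # p and n are arbitrary example values, not figures from the paper.
      p = 1e-9         # random match probability of the DNA profile
      n = 5_000_000    # number of profiles in the searched database

      likelihood_ratio = 1 / p     # Bayesian-style weight of evidence
      np_value = n * p             # np rule: expected number of chance matches
      p_value = 1 - (1 - p) ** n   # P(at least one chance match), ~ np when np is small

      print(f"likelihood ratio ~ {likelihood_ratio:.2e}")
      print(f"np = {np_value:.2e}, P(>=1 chance match) = {p_value:.2e}")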

  7. Current Comparative Table (CCT) automates customized searches of dynamic biological databases.

    Science.gov (United States)

    Landsteiner, Benjamin R; Olson, Michael R; Rutherford, Robert

    2005-07-01

    The Current Comparative Table (CCT) software program enables working biologists to automate customized bioinformatics searches, typically of remote sequence or HMM (hidden Markov model) databases. CCT currently supports BLAST, hmmpfam and other programs useful for gene and ortholog identification. The software is web based, has a BioPerl core and can be used remotely via a browser or locally on Mac OS X or Linux machines. CCT is particularly useful to scientists who study large sets of molecules in today's evolving information landscape because it color-codes all result files by age and highlights even tiny changes in sequence or annotation. By empowering non-bioinformaticians to automate custom searches and examine current results in context at a glance, CCT allows a remote database submission in the evening to influence the next morning's bench experiment. A demonstration of CCT is available at http://orb.public.stolaf.edu/CCTdemo and the open source software is freely available from http://sourceforge.net/projects/orb-cct.

  8. Analysis of Users' Searches of CD-ROM Databases in the National and University Library in Zagreb.

    Science.gov (United States)

    Jokic, Maja

    1997-01-01

    Investigates the search behavior of CD-ROM database users in Zagreb (Croatia) libraries: one group needed a minimum of technical assistance, and the other was completely independent. Highlights include the use of questionnaires and transaction log analysis and the need for end-user education. The questionnaire and definitions of search process…

  9. Fine-grained Database Field Search Using Attribute-Based Encryption for E-Healthcare Clouds.

    Science.gov (United States)

    Guo, Cheng; Zhuang, Ruhan; Jie, Yingmo; Ren, Yizhi; Wu, Ting; Choo, Kim-Kwang Raymond

    2016-11-01

    An effectively designed e-healthcare system can significantly enhance the quality of access and experience of healthcare users, including facilitating medical and healthcare providers in ensuring a smooth delivery of services. Ensuring the security of patients' electronic health records (EHRs) in the e-healthcare system is an active research area. EHRs may be outsourced to a third party, such as a community healthcare cloud service provider, for storage due to cost-saving measures. Generally, encrypting the EHRs when they are stored in the system (i.e. data-at-rest) or prior to outsourcing the data is used to ensure data confidentiality. A searchable encryption (SE) scheme is a promising technique that can ensure the protection of private information without compromising on performance. In this paper, we propose a novel framework for controlling access to EHRs stored in semi-trusted cloud servers (e.g. a private cloud or a community cloud). To achieve fine-grained access control for EHRs, we leverage the ciphertext-policy attribute-based encryption (CP-ABE) technique to encrypt tables published by hospitals, including patients' EHRs, and the table is stored in the database with the primary key being the patient's unique identity. Our framework can enable different users with different privileges to search on different database fields. Differing from previous attempts to secure the outsourcing of data, we emphasize control over the searches of the fields within the database. We demonstrate the utility of the scheme by evaluating it using datasets from the University of California, Irvine.

  10. Seismic Search Engine: A distributed database for mining large scale seismic data

    Science.gov (United States)

    Liu, Y.; Vaidya, S.; Kuzma, H. A.

    2009-12-01

    The International Monitoring System (IMS) of the CTBTO collects terabytes worth of seismic measurements from many receiver stations situated around the earth with the goal of detecting underground nuclear testing events and distinguishing them from other benign, but more common events such as earthquakes and mine blasts. The International Data Center (IDC) processes and analyzes these measurements, as they are collected by the IMS, to summarize event detections in daily bulletins. Thereafter, the data measurements are archived into a large format database. Our proposed Seismic Search Engine (SSE) will facilitate a framework for data exploration of the seismic database as well as the development of seismic data mining algorithms. Analogous to GenBank, the annotated genetic sequence database maintained by NIH, through SSE, we intend to provide public access to seismic data and a set of processing and analysis tools, along with community-generated annotations and statistical models to help interpret the data. SSE will implement queries as user-defined functions composed from standard tools and models. Each query is compiled and executed over the database internally before reporting results back to the user. Since queries are expressed with standard tools and models, users can easily reproduce published results within this framework for peer-review and making metric comparisons. As an illustration, an example query is “what are the best receiver stations in East Asia for detecting events in the Middle East?” Evaluating this query involves listing all receiver stations in East Asia, characterizing known seismic events in that region, and constructing a profile for each receiver station to determine how effective its measurements are at predicting each event. The results of this query can be used to help prioritize how data is collected, identify defective instruments, and guide future sensor placements.

  11. Anatomy and evolution of database search engines-a central component of mass spectrometry based proteomic workflows.

    Science.gov (United States)

    Verheggen, Kenneth; Raeder, Helge; Berven, Frode S; Martens, Lennart; Barsnes, Harald; Vaudel, Marc

    2017-09-13

    Sequence database search engines are bioinformatics algorithms that identify peptides from tandem mass spectra using a reference protein sequence database. Two decades of development, notably driven by advances in mass spectrometry, have provided scientists with more than 30 published search engines, each with its own properties. In this review, we present the common paradigm behind the different implementations, and its limitations for modern mass spectrometry datasets. We also detail how the search engines attempt to alleviate these limitations, and provide an overview of the different software frameworks available to the researcher. Finally, we highlight alternative approaches for the identification of proteomic mass spectrometry datasets, either as a replacement for, or as a complement to, sequence database search engines. © 2017 Wiley Periodicals, Inc.

  12. The Relationship between Searches Performed in Online Databases and the Number of Full-Text Articles Accessed: Measuring the Interaction between Database and E-Journal Collections

    Science.gov (United States)

    Lamothe, Alain R.

    2011-01-01

    The purpose of this paper is to report the results of a quantitative analysis exploring the interaction and relationship between the online database and electronic journal collections at the J. N. Desmarais Library of Laurentian University. A very strong relationship exists between the number of searches and the size of the online database…

  13. Optimal search strategies for detecting cost and economic studies in EMBASE

    Directory of Open Access Journals (Sweden)

    Haynes R Brian

    2006-06-01

    Full Text Available Abstract Background Economic evaluations in the medical literature compare competing diagnosis or treatment methods for their use of resources and their expected outcomes. The best evidence currently available from research regarding both cost and economic comparisons will continue to expand as this type of information becomes more important in today's clinical practice. Researchers and clinicians need quick, reliable ways to access this information. A key source of this type of information is large bibliographic databases such as EMBASE. The objective of this study was to develop search strategies that optimize the retrieval of health costs and economics studies from EMBASE. Methods We conducted an analytic survey, comparing hand searches of journals with retrievals from EMBASE for candidate search terms and combinations. Six research assistants read all issues of 55 journals indexed by EMBASE for the publishing year 2000. We rated all articles using purpose and quality indicators and categorized them into clinically relevant original studies, review articles, general papers, or case reports. The original and review articles were then categorized for purpose (i.e., cost and economics and other clinical topics) and, depending on the purpose, as 'pass' or 'fail' for methodologic rigor. Candidate search strategies were developed for economic and cost studies, then run in the 55 EMBASE journals, the retrievals being compared with the hand search data. The sensitivity, specificity, precision, and accuracy of the search strategies were calculated. Results Combinations of search terms for detecting both cost and economic studies attained levels of 100% sensitivity with specificity levels of 92.9% and 92.3% respectively. When maximizing for both sensitivity and specificity, the combination of terms for detecting cost studies (sensitivity increased 2.2% over the single term but at a slight decrease in specificity of 0.9%). The maximized combination of terms

  14. Databases

    Directory of Open Access Journals (Sweden)

    Nick Ryan

    2004-01-01

    Full Text Available Databases are deeply embedded in archaeology, underpinning and supporting many aspects of the subject. However, as well as providing a means for storing, retrieving and modifying data, databases themselves must be a result of a detailed analysis and design process. This article looks at this process, and shows how the characteristics of data models affect the process of database design and implementation. The impact of the Internet on the development of databases is examined, and the article concludes with a discussion of a range of issues associated with the recording and management of archaeological data.

  15. Internet Databases of the Properties, Enzymatic Reactions, and Metabolism of Small Molecules—Search Options and Applications in Food Science

    Directory of Open Access Journals (Sweden)

    Piotr Minkiewicz

    2016-12-01

    Full Text Available Internet databases of small molecules, their enzymatic reactions, and metabolism have emerged as useful tools in food science. Database searching is also introduced as part of chemistry or enzymology courses for food technology students. Such resources support the search for information about single compounds and facilitate the introduction of secondary analyses of large datasets. Information can be retrieved from databases by searching for the compound name or structure, annotated with the help of chemical codes or drawn using molecule editing software. Data mining options may be enhanced by navigating through a network of links and cross-links between databases. Exemplary databases reviewed in this article belong to two classes: tools concerning small molecules (including general and specialized databases annotating food components) and tools annotating enzymes and metabolism. Some problems associated with database application are also discussed. Data summarized in computer databases may be used for calculation of the daily intake of bioactive compounds, prediction of the metabolism of food components and their biological activity, as well as for prediction of interactions between food components and drugs.

  16. Protocol: a systematic review of studies developing and/or evaluating search strategies to identify prognosis studies.

    Science.gov (United States)

    Corp, Nadia; Jordan, Joanne L; Hayden, Jill A; Irvin, Emma; Parker, Robin; Smith, Andrea; van der Windt, Danielle A

    2017-04-20

    Prognosis research is on the rise, its importance recognised because chronic health conditions and diseases are increasingly common and costly. Prognosis systematic reviews are needed to collate and synthesise these research findings, especially to help inform effective clinical decision-making and healthcare policy. A detailed, comprehensive search strategy is central to any systematic review. However, within prognosis research, this is challenging due to poor reporting and inconsistent use of available indexing terms in electronic databases. Whilst many published search filters exist for finding clinical trials, this is not the case for prognosis studies. This systematic review aims to identify and compare existing methodological filters developed and evaluated to identify prognosis studies of any of the three main types: overall prognosis, prognostic factors, and prognostic [risk prediction] models. Primary studies reporting the development and/or evaluation of methodological search filters to retrieve any type of prognosis study will be included in this systematic review. Multiple electronic bibliographic databases will be searched, grey literature will be sought from relevant organisations and websites, experts will be contacted, and citation tracking of key papers and reference list checking of all included papers will be undertaken. Titles will be screened by one person, and abstracts and full articles will be reviewed for inclusion independently by two reviewers. Data extraction and quality assessment will also be undertaken independently by two reviewers with disagreements resolved by discussion or by a third reviewer if necessary. Filters' characteristics and performance metrics reported in the included studies will be extracted and tabulated. To enable comparisons, filters will be grouped according to database, platform, type of prognosis study, and type of filter for which it was intended. This systematic review will identify all existing validated

  17. Two Complementary Strategies for New Physics Searches at Lepton Colliders

    Energy Technology Data Exchange (ETDEWEB)

    Hooberman, Benjamin Henry [Univ. of California, Berkeley, CA (United States)

    2009-07-06

    In this thesis I present two complementary strategies for probing beyond-the-Standard Model physics using data collected in e+e- collisions at lepton colliders. One strategy involves searching for effects at low energy mediated by new particles at the TeV mass scale, at which new physics is expected to manifest. Several new physics scenarios, including Supersymmetry and models with leptoquarks or compositeness, may lead to observable rates for charged lepton-flavor violating processes, which are forbidden in the Standard Model. I present a search for lepton-flavor violating decays of the Υ(3S) using data collected with the BABAR detector. This study establishes the 90% confidence level upper limits BF(Υ(3S) → eτ) < 5.0 x 10-6 and BF(Υ(3S) → μτ) < 4.1 x 10-6 which are used to place constraints on new physics contributing to lepton-flavor violation at the TeV mass scale. An alternative strategy is to increase the collision energy above the threshold for new particles and produce them directly. I discuss research and development efforts aimed at producing a vertex tracker which achieves the physics performance required of a high energy lepton collider. A small-scale vertex tracker prototype is constructed using Silicon sensors of 50 μm thickness and tested using charged particle beams. This tracker achieves the targeted impact parameter resolution of σLP = (5⊕10 GeV/pT) as well as a longitudinal vertex resolution of (260 ± 10) μm, which is consistent with the requirements of a TeV-scale lepton collider. This detector research and development effort must be motivated and directed by simulation studies of physics processes. Investigation of a dark matter-motivated Supersymmetry scenario is presented, in which the dark matter is composed of Supersymmetric neutralinos. In this scenario, studies of the e+e- → H0A0 production process allow for

  18. The Open Spectral Database: an open platform for sharing and searching spectral data.

    Science.gov (United States)

    Chalk, Stuart J

    2016-01-01

    A number of websites make spectral data available for download (typically as JCAMP-DX text files), and one (ChemSpider) also allows users to contribute spectral files. As a result, searching and retrieving such spectral data can be time consuming, and the data can be difficult to reuse if they are compressed in the JCAMP-DX file. What is needed is a single resource that allows submission of JCAMP-DX files, export of the raw data in multiple formats, searching based on multiple chemical identifiers, and is open in terms of license and access. To address these issues a new online resource called the Open Spectral Database (OSDB) http://osdb.info/ has been developed and is now available. Built using open source tools, using open code (hosted on GitHub), providing open data, and open to community input about design and functionality, the OSDB is available for anyone to submit spectral data, making it searchable and available to the scientific community. This paper details the concept and coding, internal architecture, export formats, Representational State Transfer (REST) Application Programming Interface and options for submission of data. The OSDB website went live in November 2015. Concurrently, the GitHub repository was made available at https://github.com/stuchalk/OSDB/, and is open for collaborators to join the project, submit issues, and contribute code. The combination of a scripting environment (PHPStorm), a PHP Framework (CakePHP), a relational database (MySQL) and a code repository (GitHub) provides all the capabilities to easily develop REST based websites for ingestion, curation and exposure of open chemical data to the community at all levels.
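
    Because the abstract advertises a REST API, a generic client sketch is shown below. The route and parameter names are hypothetical placeholders (the real ones should be taken from the OSDB API documentation); the InChIKey used is aspirin's.

      import requests

      # Sketch of querying a REST endpoint for spectra by chemical identifier.
      # The /spectra route and the "inchikey" parameter are assumed placeholders,
      # not the documented OSDB API.
      BASE_URL = "http://osdb.info/api"  # assumed base path

      response = requests.get(
          f"{BASE_URL}/spectra",
          params={"inchikey": "BSYNRYMUTXBXSQ-UHFFFAOYSA-N"},  # aspirin
          timeout=30,
      )
      response.raise_for_status()
      for spectrum in response.json():  # assumes a JSON list of spectrum records
          print(spectrum.get("id"), spectrum.get("technique"))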

  19. Searching the protein structure database for ligand-binding site similarities using CPASS v.2

    Directory of Open Access Journals (Sweden)

    Caprez Adam

    2011-01-01

    Full Text Available Abstract Background A recent analysis of protein sequences deposited in the NCBI RefSeq database indicates that ~8.5 million protein sequences are encoded in prokaryotic and eukaryotic genomes, where ~30% are explicitly annotated as "hypothetical" or "uncharacterized" protein. Our Comparison of Protein Active-Site Structures (CPASS) v.2 database and software compares the sequence and structural characteristics of experimentally determined ligand binding sites to infer a functional relationship in the absence of global sequence or structure similarity. CPASS is an important component of our Functional Annotation Screening Technology by NMR (FAST-NMR) protocol and has been successfully applied to aid the annotation of a number of proteins of unknown function. Findings We report a major upgrade to our CPASS software and database that significantly improves its broad utility. CPASS v.2 is designed with a layered architecture to increase flexibility and portability that also enables job distribution over the Open Science Grid (OSG) to increase speed. Similarly, the CPASS interface was enhanced to provide more user flexibility in submitting a CPASS query. CPASS v.2 now allows for both automatic and manual definition of ligand-binding sites and permits pair-wise, one versus all, one versus list, or list versus list comparisons. Solvent accessible surface area, ligand root-mean square difference, and Cβ distances have been incorporated into the CPASS similarity function to improve the quality of the results. The CPASS database has also been updated. Conclusions CPASS v.2 is more than an order of magnitude faster than the original implementation, and allows for multiple simultaneous job submissions. Similarly, the CPASS database of ligand-defined binding sites has increased in size by ~38%, dramatically increasing the likelihood of a positive search result. The modification to the CPASS similarity function is effective in reducing CPASS similarity scores

  20. Protein backbone chemical shifts predicted from searching a database for torsion angle and sequence homology

    International Nuclear Information System (INIS)

    Shen Yang; Bax, Ad

    2007-01-01

    Chemical shifts of nuclei in or attached to a protein backbone are exquisitely sensitive to their local environment. A computer program, SPARTA, is described that uses this correlation with local structure to predict protein backbone chemical shifts, given an input three-dimensional structure, by searching a newly generated database for triplets of adjacent residues that provide the best match in φ/ψ/χ1 torsion angles and sequence similarity to the query triplet of interest. The database contains 15N, 1HN, 1Hα, 13Cα, 13Cβ and 13C' chemical shifts for 200 proteins for which a high resolution X-ray (≤2.4 Å) structure is available. The relative importance of the weighting factors for the φ/ψ/χ1 angles and sequence similarity was optimized empirically. The weighted, average secondary shifts of the central residues in the 20 best-matching triplets, after inclusion of nearest neighbor, ring current, and hydrogen bonding effects, are used to predict chemical shifts for the protein of known structure. Validation shows good agreement between the SPARTA-predicted and experimental shifts, with standard deviations of 2.52, 0.51, 0.27, 0.98, 1.07 and 1.08 ppm for 15N, 1HN, 1Hα, 13Cα, 13Cβ and 13C', respectively, including outliers.

  1. Matrix-product-state simulation of an extended Brueschweiler bulk-ensemble database search

    International Nuclear Information System (INIS)

    SaiToh, Akira; Kitagawa, Masahiro

    2006-01-01

    Brueschweiler's database search in a spin Liouville space can be efficiently simulated on a conventional computer without error, as long as the simulation cost of the internal circuit of an oracle function is polynomial, whereas in true NMR experiments it suffers from an exponential decrease in the variation of the signal intensity. We perform such a simulation with the matrix-product-state method proposed by Vidal [G. Vidal, Phys. Rev. Lett. 91, 147902 (2003)]. We also show extensions of the algorithm that do not utilize the J-coupling or DD-coupling splitting of frequency peaks in observation: in one extension, searching can be completed with a single query at polynomial post-oracle circuit complexity; in another, multiple solutions of an oracle can be found with a query complexity linear in the key length and in the number of solutions (this extension finds all marked keys). These extended algorithms are also simulated with the same simulation method

  2. Decision making in family medicine: randomized trial of the effects of the InfoClinique and Trip database search engines.

    Science.gov (United States)

    Labrecque, Michel; Ratté, Stéphane; Frémont, Pierre; Cauchon, Michel; Ouellet, Jérôme; Hogg, William; McGowan, Jessie; Gagnon, Marie-Pierre; Njoya, Merlin; Légaré, France

    2013-10-01

    To compare the ability of users of 2 medical search engines, InfoClinique and the Trip database, to provide correct answers to clinical questions and to explore the perceived effects of the tools on the clinical decision-making process. Randomized trial. Three family medicine units of the family medicine program of the Faculty of Medicine at Laval University in Quebec city, Que. Fifteen second-year family medicine residents. Residents generated 30 structured questions about therapy or preventive treatment (2 questions per resident) based on clinical encounters. Using an Internet platform designed for the trial, each resident answered 20 of these questions (their own 2, plus 18 of the questions formulated by other residents, selected randomly) before and after searching for information with 1 of the 2 search engines. For each question, 5 residents were randomly assigned to begin their search with InfoClinique and 5 with the Trip database. The ability of residents to provide correct answers to clinical questions using the search engines, as determined by third-party evaluation. After answering each question, participants completed a questionnaire to assess their perception of the engine's effect on the decision-making process in clinical practice. Of 300 possible pairs of answers (1 answer before and 1 after the initial search), 254 (85%) were produced by 14 residents. Of these, 132 (52%) and 122 (48%) pairs of answers concerned questions that had been assigned an initial search with InfoClinique and the Trip database, respectively. Both engines produced an important and similar absolute increase in the proportion of correct answers after searching (26% to 62% for InfoClinique, for an increase of 36%; 24% to 63% for the Trip database, for an increase of 39%; P = .68). For all 30 clinical questions, at least 1 resident produced the correct answer after searching with either search engine. The mean (SD) time of the initial search for each question was 23.5 (7

  3. Search Engine Marketing (SEM: Financial & Competitive Advantages of an Effective Hotel SEM Strategy

    Directory of Open Access Journals (Sweden)

    Leora Halpern Lanz

    2015-05-01

    Full Text Available Search Engine Marketing and Optimization (SEO, SEM) are keystones of a hotel's marketing strategy; in fact, research shows that 90% of travelers start their vacation planning with a Google search. Learn five strategies that can enhance a hotel's SEO and SEM strategies to boost bookings.

  4. Search Engine Marketing (SEM): Financial & Competitive Advantages of an Effective Hotel SEM Strategy

    OpenAIRE

    Leora Halpern Lanz

    2015-01-01

    Search Engine Marketing and Optimization (SEO, SEM) are keystones of a hotel's marketing strategy; in fact, research shows that 90% of travelers start their vacation planning with a Google search. Learn five strategies that can enhance a hotel's SEO and SEM strategies to boost bookings.

  5. Development of a marketing strategy for the Coal Research Establishment's emissions monitoring database

    Energy Technology Data Exchange (ETDEWEB)

    Beer, A.D.; Hughes, I.S.C. [British Coal Corporation, Stoke Orchard (United Kingdom). Coal Research Establishment

    1995-06-01

    A summary is presented of the results of work conducted by the UK's Coal Research Establishment (CRE) between April 1994 and December 1994 following the completion of a project on the utilisation and publication of an emissions monitoring database. The database contains emissions data for most UK combustion plant, gathered over the past 10 years. The aim of this further work was to identify the strengths and weaknesses of CRE's database, to investigate potential additional sources of data, and to develop a strategy for marketing the information contained within the database to interested parties. 3 figs.

  6. Improving Search Strategies of Auditors – A Focus Group on Reflection Interventions

    OpenAIRE

    Fessl, Angela; Pammer, Viktoria; Wiese, Michael; Thalmann, Stefan

    2017-01-01

    Financial auditors routinely search internal as well as public knowledge bases as part of the auditing process. Efficient search strategies are crucial for knowledge workers in general and for auditors in particular. Modern search technology evolves quickly, and features beyond keyword search, like facetted search or visual overviews of knowledge bases such as graph visualisations, emerge. It is therefore desirable for auditors to learn about new innovations and to explore and experiment with such...

  7. Beyond MEDLINE for literature searches.

    Science.gov (United States)

    Conn, Vicki S; Isaramalai, Sang-arun; Rath, Sabyasachi; Jantarakupt, Peeranuch; Wadhawan, Rohini; Dash, Yashodhara

    2003-01-01

    To describe strategies for a comprehensive literature search. MEDLINE searches result in limited numbers of studies that are often biased toward statistically significant findings. Diversified search strategies are needed. Empirical evidence about the recall and precision of diverse search strategies is presented. Challenges and strengths of each search strategy are identified. Search strategies vary in recall and precision. Often sensitivity and specificity are inversely related. Valuable search strategies include examination of multiple diverse computerized databases, ancestry searches, citation index searches, examination of research registries, journal hand searching, contact with the "invisible college," examination of abstracts, Internet searches, and contact with sources of synthesized information. Extending searches beyond MEDLINE enables researchers to conduct more systematic comprehensive searches.

  8. Information retrieval from the INIS database. Is the new online search system poorer than the old one?

    International Nuclear Information System (INIS)

    Adamek, Petr

    2011-01-01

    A brief overview of the search options for the INIS database is presented, categorized into offline and online systems, and their assets and drawbacks are described. In the Online section, the old system on the BASIS platform and the new system on the Google Search Appliance platform are compared. The capabilities of the new system seem to be more limited than those of the old system. (author)

  9. Similarity Digest Search: A Survey and Comparative Analysis of Strategies to Perform Known File Filtering Using Approximate Matching

    Directory of Open Access Journals (Sweden)

    Vitor Hugo Galhardo Moia

    2017-01-01

    Full Text Available Digital forensics is a branch of Computer Science aiming at investigating and analyzing electronic devices in the search for crime evidence. There are several ways to perform this search. Known File Filter (KFF) is one of them, where a list of interest objects is used to reduce/separate data for analysis. Holding a database of hashes of such objects, the examiner performs lookups for matches against the target device. However, due to limitations of hash functions (inability to detect similar objects), new methods have been designed, called approximate matching. This sort of function has interesting characteristics for KFF investigations but suffers mainly from high costs when dealing with huge data sets, as the search is usually done by brute force. To mitigate this problem, strategies have been developed to better perform lookups. In this paper, we present the state of the art of similarity digest search strategies, along with a detailed comparison involving several aspects, such as time complexity, memory requirement, and search precision. Our results show that none of the approaches addresses all of these main aspects. Finally, we discuss future directions and present requirements for a new strategy aiming to fulfill current limitations.
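
    As a concrete (if deliberately oversimplified) picture of the brute-force lookup that these strategies try to improve on, the sketch below scores a target against every entry of a known-file database and keeps matches above a threshold; the similarity function here is only a stand-in for a real approximate-matching digest, not an implementation of any scheme surveyed in the paper.

```python
# Brute-force known-file-filtering sketch: compare a target against every
# entry in a reference set and keep matches above a similarity threshold.
# difflib is only a stand-in for a real approximate-matching digest.
from difflib import SequenceMatcher

def similarity(a: bytes, b: bytes) -> float:
    return SequenceMatcher(None, a, b).ratio()

def kff_lookup(target: bytes, database: dict, threshold: float = 0.7):
    hits = []
    for name, reference in database.items():
        score = similarity(target, reference)
        if score >= threshold:
            hits.append((name, score))
    return sorted(hits, key=lambda h: h[1], reverse=True)

known_files = {"contract.doc": b"alpha beta gamma delta", "notes.txt": b"lorem ipsum"}
print(kff_lookup(b"alpha beta gamma delt!", known_files))
```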

  10. A framework for intelligent data acquisition and real-time database searching for shotgun proteomics.

    Science.gov (United States)

    Graumann, Johannes; Scheltema, Richard A; Zhang, Yong; Cox, Jürgen; Mann, Matthias

    2012-03-01

    In the analysis of complex peptide mixtures by MS-based proteomics, many more peptides elute at any given time than can be identified and quantified by the mass spectrometer. This makes it desirable to optimally allocate peptide sequencing and narrow mass range quantification events. In computer science, intelligent agents are frequently used to make autonomous decisions in complex environments. Here we develop and describe a framework for intelligent data acquisition and real-time database searching and showcase selected examples. The intelligent agent is implemented in the MaxQuant computational proteomics environment, termed MaxQuant Real-Time. It analyzes data as it is acquired on the mass spectrometer, constructs isotope patterns and SILAC pair information as well as controls MS and tandem MS events based on real-time and prior MS data or external knowledge. Re-implementing a top10 method in the intelligent agent yields similar performance to the data dependent methods running on the mass spectrometer itself. We demonstrate the capabilities of MaxQuant Real-Time by creating a real-time search engine capable of identifying peptides "on-the-fly" within 30 ms, well within the time constraints of a shotgun fragmentation "topN" method. The agent can focus sequencing events onto peptides of specific interest, such as those originating from a specific gene ontology (GO) term, or peptides that are likely modified versions of already identified peptides. Finally, we demonstrate enhanced quantification of SILAC pairs whose ratios were poorly defined in survey spectra. MaxQuant Real-Time is flexible and can be applied to a large number of scenarios that would benefit from intelligent, directed data acquisition. Our framework should be especially useful for new instrument types, such as the quadrupole-Orbitrap, that are currently becoming available.
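
    The decision step described above, picking which precursors to fragment next given prior knowledge, can be caricatured as a filtered top-N selection over the current survey scan. The sketch below is only a schematic with hypothetical field names; it is not MaxQuant Real-Time code.

```python
# Schematic of an "intelligent" top-N precursor selection: rank peaks in the
# current survey scan by intensity, skip ones already identified, and prefer
# those flagged as interesting (e.g. matching a GO term of interest).
# All field names are hypothetical.

def select_precursors(survey_peaks, already_identified, interesting_mz, n=10):
    candidates = [p for p in survey_peaks if p["mz"] not in already_identified]
    candidates.sort(
        key=lambda p: (p["mz"] in interesting_mz, p["intensity"]), reverse=True
    )
    return candidates[:n]

peaks = [
    {"mz": 445.12, "intensity": 3.2e6},
    {"mz": 612.33, "intensity": 8.9e5},
    {"mz": 733.01, "intensity": 1.1e6},
]
print(select_precursors(peaks, already_identified={445.12}, interesting_mz={733.01}, n=2))
```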

  11. Heart research advances using database search engines, Human Protein Atlas and the Sydney Heart Bank.

    Science.gov (United States)

    Li, Amy; Estigoy, Colleen; Raftery, Mark; Cameron, Darryl; Odeberg, Jacob; Pontén, Fredrik; Lal, Sean; Dos Remedios, Cristobal G

    2013-10-01

    This Methodological Review is intended as a guide for research students who may have just discovered a human "novel" cardiac protein, but it may also help hard-pressed reviewers of journal submissions on a "novel" protein reported in an animal model of human heart failure. Whether you are an expert or not, you may know little or nothing about this particular protein of interest. In this review we provide a strategic guide on how to proceed. We ask: How do you discover what has been published (even in an abstract or research report) about this protein? Everyone knows how to undertake literature searches using PubMed and Medline but these are usually encyclopaedic, often producing long lists of papers, most of which are either irrelevant or only vaguely relevant to your query. Relatively few will be aware of more advanced search engines such as Google Scholar and even fewer will know about Quertle. Next, we provide a strategy for discovering if your "novel" protein is expressed in the normal, healthy human heart, and if it is, we show you how to investigate its subcellular location. This can usually be achieved by visiting the website "Human Protein Atlas" without doing a single experiment. Finally, we provide a pathway to discovering if your protein of interest changes its expression level with heart failure/disease or with ageing. Crown Copyright © 2013. Published by Elsevier B.V. All rights reserved.

  12. Identifying nurse staffing research in Medline: development and testing of empirically derived search strategies with the PubMed interface.

    Science.gov (United States)

    Simon, Michael; Hausner, Elke; Klaus, Susan F; Dunton, Nancy E

    2010-08-23

    The identification of health services research in databases such as PubMed/Medline is a cumbersome task. This task becomes even more difficult if the field of interest involves the use of diverse methods and data sources, as is the case with nurse staffing research. This type of research investigates the association between nurse staffing parameters and nursing and patient outcomes. A comprehensively developed search strategy may help identify nurse staffing research in PubMed/Medline. A set of relevant references in PubMed/Medline was identified by means of three systematic reviews. This development set was used to detect candidate free-text and MeSH terms. The frequency of these terms was compared to a random sample from PubMed/Medline in order to identify terms specific to nurse staffing research, which were then used to develop a sensitive, precise and balanced search strategy. To determine their precision, the newly developed search strategies were tested against a) the pool of relevant references extracted from the systematic reviews, b) a reference set identified from an electronic journal screening, and c) a sample from PubMed/Medline. Finally, all newly developed strategies were compared to PubMed's Health Services Research Queries (PubMed's HSR Queries). The sensitivities of the newly developed search strategies were almost 100% in all of the three test sets applied; precision ranged from 6.1% to 32.0%. PubMed's HSR queries were less sensitive (83.3% to 88.2%) than the new search strategies. Only minor differences in precision were found (5.0% to 32.0%). As with other literature on health services research, nurse staffing studies are difficult to identify in PubMed/Medline. Depending on the purpose of the search, researchers can choose between high sensitivity, with retrieval of a large number of references, and high precision, with an increased risk of missing relevant references. More standardized terminology (e.g. by consistent use of the
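
    For readers unfamiliar with how such filters are scored, the short sketch below computes sensitivity and precision of a search strategy against a gold-standard reference set; the PMID sets are invented for illustration.

```python
# Sensitivity = relevant records retrieved / all relevant records
# Precision   = relevant records retrieved / all records retrieved
# The PMID sets below are invented for illustration.

relevant = {101, 102, 103, 104, 105}          # gold-standard references
retrieved = {101, 102, 103, 200, 201, 202}    # what the search strategy returned

true_positives = relevant & retrieved
sensitivity = len(true_positives) / len(relevant)
precision = len(true_positives) / len(retrieved)
print(f"sensitivity={sensitivity:.1%}, precision={precision:.1%}")
```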

  13. Search strategies to identify diagnostic accuracy studies in MEDLINE and EMBASE

    NARCIS (Netherlands)

    Beynon, Rebecca; Leeflang, Mariska M. G.; McDonald, Steve; Eisinga, Anne; Mitchell, Ruth L.; Whiting, Penny; Glanville, Julie M.

    2013-01-01

    A systematic and extensive search for as many eligible studies as possible is essential in any systematic review. When searching for diagnostic test accuracy (DTA) studies in bibliographic databases, it is recommended that terms for disease (target condition) are combined with terms for the

  14. Complementary Value of Databases for Discovery of Scholarly Literature: A User Survey of Online Searching for Publications in Art History

    Science.gov (United States)

    Nemeth, Erik

    2010-01-01

    Discovery of academic literature through Web search engines challenges the traditional role of specialized research databases. Creation of literature outside academic presses and peer-reviewed publications expands the content for scholarly research within a particular field. The resulting body of literature raises the question of whether scholars…

  15. Usefulness of systematic review search strategies in finding child health systematic reviews in MEDLINE

    NARCIS (Netherlands)

    Boluyt, Nicole; Tjosvold, Lisa; Lefebvre, Carol; Klassen, Terry P.; Offringa, Martin

    2008-01-01

    OBJECTIVE: To determine the sensitivity and precision of existing search strategies for retrieving child health systematic reviews in MEDLINE using PubMed. DESIGN: Filter (diagnostic) accuracy study. We identified existing search strategies for systematic reviews, combined them with a filter that

  16. How to automatically test and validate your database backup and recovery strategy

    International Nuclear Information System (INIS)

    Gaspar Aparicio, Ruben

    2011-01-01

    The major challenge we solve with this software project is the automated validation of backups sent to tape for Oracle databases. While Oracle Recovery Manager (RMAN) provides tools like 'restore validate', the real and only certain proof is a restore. This initial aim evolved to provide a recovery platform capable of covering more complex use cases, such as validation of the backup strategy of Very Large Databases (VLDB), and schema recoveries to cure logical errors or to provide database snapshots by means of exports.
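
    A minimal sketch of scripting such a validation check is shown below; it drives RMAN's 'restore validate' from Python and flags failures. The connection handling and error detection are simplifying assumptions, not the project's actual implementation.

```python
# Sketch of driving an RMAN "restore validate" check from a script and
# flagging failures. Connection details and log handling are simplified
# assumptions, not the CERN project's actual recovery platform.
import subprocess

RMAN_SCRIPT = """
run {
  restore database validate;
}
exit;
"""

def validate_backup() -> bool:
    proc = subprocess.run(
        ["rman", "target", "/"], input=RMAN_SCRIPT,
        capture_output=True, text=True,
    )
    # Heuristic: success if RMAN exits cleanly and prints no RMAN- error codes.
    return proc.returncode == 0 and "RMAN-" not in proc.stdout

if __name__ == "__main__":
    print("backup validation passed" if validate_backup() else "validation FAILED")
```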

  17. The relationship between emotion regulation strategies and job search behavior among fourth-year university students.

    Science.gov (United States)

    Wang, Ling; Xu, Huihui; Zhang, Xue; Fang, Ping

    2017-08-01

    The job search process is a stressful experience. This study investigated the effect of emotion regulation strategies on job search behavior in combination with anxiety and job search self-efficacy among Chinese university fourth-year students (N = 816, mean age = 21.98, 31.5% male, 34.9% majored in science, 18.0% from "211 Project" universities). Results showed that cognitive reappraisal was positively related to job search behavior, while expressive suppression was negatively related to job search behavior. Additionally, anxiety was negatively related to job search behavior, while job search self-efficacy was positively associated with job search behavior. Moreover, both anxiety and job search self-efficacy mediated the relationship between emotion regulation strategies and job search behavior. In general, emotion regulation strategies played an important role in job search behavior. Implications include the notion that emotion regulation interventions may be helpful to increase job search behavior among university students. Copyright © 2017 The Foundation for Professionals in Services for Adolescents. Published by Elsevier Ltd. All rights reserved.

  18. Preparing College Students To Search Full-Text Databases: Is Instruction Necessary?

    Science.gov (United States)

    Riley, Cheryl; Wales, Barbara

    Full-text databases allow Central Missouri State University's clients to access some of the serials that libraries have had to cancel due to escalating subscription costs; EbscoHost, the subject of this study, is one such database. The database is available free to all Missouri residents. A survey was designed consisting of 21 questions intended…

  19. Search strategies for top partners in composite Higgs models

    Science.gov (United States)

    Gripaios, Ben; Müller, Thibaut; Parker, M. A.; Sutherland, Dave

    2014-08-01

    We consider how best to search for top partners in generic composite Higgs models. We begin by classifying the possible group representations carried by top partners in models with and without a custodial SU(2) × SU(2) ⋊ 2 symmetry protecting the rate for Z → decays. We identify a number of minimal models whose top partners only have electric charges of , , or and thus decay to top or bottom quarks via a single Higgs or electroweak gauge boson. We develop an inclusive search for these based on a top veto, which we find to be more effective than existing searches. Less minimal models feature light states that can be sought in final states with like-sign leptons and so we find that 2 straightforward LHC searches give a reasonable coverage of the gamut of composite Higgs models.

  20. Oracle Database 10g: a platform for BLAST search and Regular Expression pattern matching in life sciences.

    Science.gov (United States)

    Stephens, Susie M; Chen, Jake Y; Davidson, Marcel G; Thomas, Shiby; Trute, Barry M

    2005-01-01

    As database management systems expand their array of analytical functionality, they become powerful research engines for biomedical data analysis and drug discovery. Databases can hold most of the data types commonly required in life sciences and consequently can be used as flexible platforms for the implementation of knowledgebases. Performing data analysis in the database simplifies data management by minimizing the movement of data from disks to memory, allowing pre-filtering and post-processing of datasets, and enabling data to remain in a secure, highly available environment. This article describes the Oracle Database 10g implementation of BLAST and Regular Expression Searches and provides case studies of their usage in bioinformatics. http://www.oracle.com/technology/software/index.html.
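
    To illustrate pushing a Regular Expression search down into the database, as discussed above, the sketch below issues a REGEXP_LIKE query through a Python DB-API driver; the driver choice, table and column names are assumptions made for illustration, not the article's code.

```python
# Sketch of a Regular Expression search pushed down into the database.
# The driver, table and column names are assumptions for illustration;
# REGEXP_LIKE is the Oracle SQL function discussed in the record above.
import oracledb  # assumed driver; any DB-API connection would look similar

PATTERN = "C..H"  # toy motif: C, any two residues, then H

def find_matching_sequences(dsn, user, password):
    with oracledb.connect(user=user, password=password, dsn=dsn) as conn:
        with conn.cursor() as cur:
            cur.execute(
                "SELECT protein_id, sequence FROM proteins "
                "WHERE REGEXP_LIKE(sequence, :pat)",
                pat=PATTERN,
            )
            return cur.fetchall()
```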

  1. Academic Users' Information Searching on Research Topics: Characteristics of Research Tasks and Search Strategies

    Science.gov (United States)

    Du, Jia Tina; Evans, Nina

    2011-01-01

    This project investigated how academic users search for information on their real-life research tasks. This article presents the findings of the first of two studies. The study data were collected in the Queensland University of Technology (QUT) in Brisbane, Australia. Eleven PhD students' searching behaviors on personal research topics were…

  2. The effect of wild card designations and rare alleles in forensic DNA database searches

    DEFF Research Database (Denmark)

    Tvedebrink, Torben; Bright, Jo-Anne; Buckleton, John S

    2015-01-01

    Forensic DNA databases are powerful tools used for the identification of persons of interest in criminal investigations. Typically, they consist of two parts: (1) a database containing DNA profiles of known individuals and (2) a database of DNA profiles associated with crime scenes. The risk of adventitious or chance matches between crimes and innocent people increases as the number of profiles within a database grows and more data is shared between various forensic DNA databases, e.g. from different jurisdictions. The DNA profiles obtained from crime scenes are often partial because crime samples
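
    To make the wild card issue concrete, the toy sketch below checks which database profiles are compatible with a partial crime-scene profile in which untyped alleles are recorded as wild cards; the loci, alleles and matching rule are simplified placeholders, not the paper's statistical model.

```python
# Toy illustration of searching with a partial profile: "F" marks a wild card
# (failed/untyped allele) that is allowed to match anything. Loci, alleles and
# the all-loci matching rule are simplified placeholders.
WILDCARD = "F"

def locus_matches(crime, candidate):
    return all(a == WILDCARD or a in candidate for a in crime)

def profile_matches(crime_profile, db_profile):
    return all(locus_matches(crime_profile[locus], db_profile[locus])
               for locus in crime_profile)

crime = {"D3S1358": ("15", WILDCARD), "vWA": ("17", "18")}
database = {
    "person_A": {"D3S1358": ("15", "16"), "vWA": ("17", "18")},
    "person_B": {"D3S1358": ("14", "16"), "vWA": ("17", "18")},
}
print([name for name, prof in database.items() if profile_matches(crime, prof)])
```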

  3. The topography of the environment alters the optimal search strategy for active particles

    Science.gov (United States)

    Volpe, Giorgio; Volpe, Giovanni

    2017-10-01

    In environments with scarce resources, adopting the right search strategy can make the difference between succeeding and failing, even between life and death. At different scales, this applies to molecular encounters in the cell cytoplasm, to animals looking for food or mates in natural landscapes, to rescuers during search and rescue operations in disaster zones, and to genetic computer algorithms exploring parameter spaces. When looking for sparse targets in a homogeneous environment, a combination of ballistic and diffusive steps is considered optimal; in particular, more ballistic Lévy flights with exponent α≤1 are generally believed to optimize the search process. However, most search spaces present complex topographies. What is the best search strategy in these more realistic scenarios? Here, we show that the topography of the environment significantly alters the optimal search strategy toward less ballistic and more Brownian strategies. We consider an active particle performing a blind cruise search for nonregenerating sparse targets in a 2D space with steps drawn from a Lévy distribution with the exponent varying from α=1 to α=2 (Brownian). We show that, when boundaries, barriers, and obstacles are present, the optimal search strategy depends on the topography of the environment, with α assuming intermediate values in the whole range under consideration. We interpret these findings using simple scaling arguments and discuss their robustness to varying searcher's size. Our results are relevant for search problems at different length scales from animal and human foraging to microswimmers' taxis to biochemical rates of reaction.
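
    A minimal sketch of the kind of searcher considered here is given below: a 2D walker whose step lengths are drawn from a power-law (Lévy-like) distribution with exponent α, with uniformly random headings and positions clipped at the walls of a finite box. The sampling and boundary handling are simplifications for illustration only.

```python
# Minimal 2D Levy-like walker: step lengths follow a power law with exponent
# alpha (larger alpha gives shorter-tailed, more Brownian-like steps),
# headings are uniform, positions are clipped to a finite box. Illustrative only.
import math
import random

def levy_step(alpha, l_min=1.0):
    u = random.random()
    return l_min * u ** (-1.0 / alpha)   # inverse-transform sampling

def simulate(alpha, n_steps=1000, box=100.0):
    x = y = box / 2.0
    path = [(x, y)]
    for _ in range(n_steps):
        theta = random.uniform(0.0, 2.0 * math.pi)
        step = levy_step(alpha)
        x = min(max(x + step * math.cos(theta), 0.0), box)  # clip at walls
        y = min(max(y + step * math.sin(theta), 0.0), box)
        path.append((x, y))
    return path

trajectory = simulate(alpha=1.5)
print(len(trajectory), trajectory[-1])
```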

  4. NIMS structural materials databases and cross search engine - MatNavi

    Energy Technology Data Exchange (ETDEWEB)

    Yamazaki, M.; Xu, Y.; Murata, M.; Tanaka, H.; Kamihira, K.; Kimura, K. [National Institute for Materials Science, Tokyo (Japan)

    2007-06-15

    The Materials Database Station (MDBS) of the National Institute for Materials Science (NIMS) owns the world's largest Internet materials database for academic and industry purposes, which is composed of twelve databases: five concerning structural materials, five concerning basic physical properties, one for superconducting materials and one for polymers. All of these databases are open to Internet access at the website http://mits.nims.go.jp/en. Online tools for predicting properties of polymers and composite materials are also available. The NIMS structural materials databases are composed of the structural materials data sheet online version (creep, fatigue, corrosion and space use materials strength), the microstructure for crept material database, the pressure vessel materials database and CCT diagrams for welding. (orig.)

  5. Validation of SmartRank: A likelihood ratio software for searching national DNA databases with complex DNA profiles.

    Science.gov (United States)

    Benschop, Corina C G; van de Merwe, Linda; de Jong, Jeroen; Vanvooren, Vanessa; Kempenaers, Morgane; Kees van der Beek, C P; Barni, Filippo; Reyes, Eusebio López; Moulin, Léa; Pene, Laurent; Haned, Hinda; Sijen, Titia

    2017-07-01

    Searching a national DNA database with complex and incomplete profiles usually yields very large numbers of possible matches that can present many candidate suspects to be further investigated by the forensic scientist and/or police. Current practice in most forensic laboratories consists of ordering these 'hits' based on the number of matching alleles with the searched profile. Thus, candidate profiles that share the same number of matching alleles are not differentiated and due to the lack of other ranking criteria for the candidate list it may be difficult to discern a true match from the false positives or notice that all candidates are in fact false positives. SmartRank was developed to put forward only relevant candidates and rank them accordingly. The SmartRank software computes a likelihood ratio (LR) for the searched profile and each profile in the DNA database and ranks database entries above a defined LR threshold according to the calculated LR. In this study, we examined for mixed DNA profiles of variable complexity whether the true donors are retrieved, what the number of false positives above an LR threshold is and the ranking position of the true donors. Using 343 mixed DNA profiles over 750 SmartRank searches were performed. In addition, the performance of SmartRank and CODIS were compared regarding DNA database searches and SmartRank was found complementary to CODIS. We also describe the applicable domain of SmartRank and provide guidelines. The SmartRank software is open-source and freely available. Using the best practice guidelines, SmartRank enables obtaining investigative leads in criminal cases lacking a suspect. Copyright © 2017 Elsevier B.V. All rights reserved.
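
    Once a likelihood ratio is available per candidate, the ranking logic described above reduces to thresholding and sorting by LR rather than by allele count. The sketch below assumes a placeholder likelihood_ratio function; computing the LR for complex mixtures, which is the core of SmartRank, is not reproduced here.

```python
# Rank database candidates by likelihood ratio instead of by the number of
# matching alleles. `likelihood_ratio` is a placeholder; real LR computation
# for complex mixtures (the core of SmartRank) is not reproduced here.

def rank_candidates(crime_profile, database, likelihood_ratio, lr_threshold=1e4):
    scored = [
        (name, likelihood_ratio(crime_profile, profile))
        for name, profile in database.items()
    ]
    hits = [(name, lr) for name, lr in scored if lr >= lr_threshold]
    return sorted(hits, key=lambda h: h[1], reverse=True)

# Toy usage with a fake LR function that just rewards shared "alleles".
fake_lr = lambda crime, cand: 10 ** len(crime & cand)
db = {"A": {"a1", "a2", "a3", "a4"}, "B": {"a1", "a9"}}
print(rank_candidates({"a1", "a2", "a3", "a4", "a5"}, db, fake_lr))
```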

  6. Searching the Literatura Latino Americana e do Caribe em Ciências da Saúde (LILACS) database improves systematic reviews.

    Science.gov (United States)

    Clark, Otavio Augusto Camara; Castro, Aldemar Araujo

    2002-02-01

    An unbiased systematic review (SR) should analyse as many articles as possible in order to provide the best evidence available. However, many SR use only databases with high English-language content as sources for articles. Literatura Latino Americana e do Caribe em Ciências da Saúde (LILACS) indexes 670 journals from the Latin American and Caribbean health literature but is seldom used in these SR. Our objective is to evaluate if LILACS should be used as a routine source of articles for SR. First we identified SR published in 1997 in five medical journals with a high impact factor. Then we searched LILACS for articles that could match the inclusion criteria of these SR. We also checked if the authors had already identified these articles located in LILACS. In all, 64 SR were identified. Two had already searched LILACS and were excluded. In 39 of 62 (63%) SR a LILACS search identified articles that matched the inclusion criteria. In 5 (8%) our search was inconclusive and in 18 (29%) no articles were found in LILACS. Therefore, in 71% (44/62) of cases, a LILACS search could have been useful to the authors. This proportion remains the same if we consider only the 37 SR that performed a meta-analysis. In only one case had the article identified in LILACS already been located elsewhere by the authors' strategy. LILACS is an under-explored and unique source of articles whose use can improve the quality of systematic reviews. This database should be used as a routine source to identify studies for systematic reviews.

  7. Protein-Level Integration Strategy of Multiengine MS Spectra Search Results for Higher Confidence and Sequence Coverage.

    Science.gov (United States)

    Zhao, Panpan; Zhong, Jiayong; Liu, Wanting; Zhao, Jing; Zhang, Gong

    2017-12-01

    Multiple search engines based on various models have been developed to search MS/MS spectra against a reference database, providing different results for the same data set. How to integrate these results efficiently with minimal compromise on false discoveries is an open question due to the lack of an independent, reliable, and highly sensitive standard. We took advantage of the translating mRNA sequencing (RNC-seq) result as a standard to evaluate the integration strategies of the protein identifications from various search engines. We used seven mainstream search engines (Andromeda, Mascot, OMSSA, X!Tandem, pFind, InsPecT, and ProVerB) to search the same label-free MS data sets of human cell lines Hep3B, MHCCLM3, and MHCC97H from the Chinese C-HPP Consortium for Chromosomes 1, 8, and 20. As expected, the union of the seven engines resulted in a boosted false identification rate, whereas the intersection of the seven engines remarkably decreased the identification power. We found that requiring identification by at least two out of the seven engines maximized the protein identification power while minimizing the ratio of suspicious/translation-supported identifications (STR), as monitored by our STR index based on RNC-seq. Furthermore, this strategy also significantly improves the peptide coverage of the protein amino acid sequence. In summary, we demonstrated a simple strategy to significantly improve the performance of shotgun mass spectrometry by integrating multiple search engines at the protein level, maximizing the utilization of the current MS spectra without additional experimental work.
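
    The integration rule the authors settle on, keeping identifications supported by at least two of the seven engines, is straightforward to express; the sketch below shows that counting step on invented engine names and protein accessions.

```python
# Keep protein identifications reported by at least `min_engines` of the
# search engines. Engine result sets and accessions are invented examples.
from collections import Counter

def integrate(engine_results: dict, min_engines: int = 2):
    counts = Counter(acc for hits in engine_results.values() for acc in set(hits))
    return {acc for acc, n in counts.items() if n >= min_engines}

results = {
    "Mascot":    {"P12345", "P67890", "Q11111"},
    "X!Tandem":  {"P12345", "Q22222"},
    "Andromeda": {"P12345", "P67890"},
}
print(integrate(results))  # {'P12345', 'P67890'}
```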

  8. Astrophysical search strategies for accelerator blind dark matter

    International Nuclear Information System (INIS)

    Wells, J.D.

    1998-04-01

    A weakly interacting dark matter particle may be very difficult to discover at an accelerator because it either (1) is too heavy, (2) has no standard model gauge interactions, or (3) is almost degenerate with other states. In each of these cases, searches for annihilation products in the galactic halo are useful probes of dark matter properties. Using the example of supersymmetric dark matter, the author demonstrates how astrophysical searches for dark matter may provide discovery and mass information inaccessible to collider physics programs such as the Tevatron and LHC

  9. Methods and pitfalls in searching drug safety databases utilising the Medical Dictionary for Regulatory Activities (MedDRA).

    Science.gov (United States)

    Brown, Elliot G

    2003-01-01

    The Medical Dictionary for Regulatory Activities (MedDRA) is a unified standard terminology for recording and reporting adverse drug event data. Its introduction is widely seen as a significant improvement on the previous situation, where a multitude of terminologies of widely varying scope and quality were in use. However, there are some complexities that may cause difficulties, and these will form the focus for this paper. Two methods of searching MedDRA-coded databases are described: searching based on term selection from all of MedDRA and searching based on terms in the safety database. There are several potential traps for the unwary in safety searches. There may be multiple locations of relevant terms within a system organ class (SOC) and lack of recognition of appropriate group terms; the user may think that group terms are more inclusive than is the case. MedDRA may distribute terms relevant to one medical condition across several primary SOCs. If the database supports the MedDRA model, it is possible to perform multiaxial searching: while this may help find terms that might have been missed, it is still necessary to consider the entire contents of the SOCs to find all relevant terms and there are many instances of incomplete secondary linkages. It is important to adjust for multiaxiality if data are presented using primary and secondary locations. Other sources for errors in searching are non-intuitive placement and the selection of terms as preferred terms (PTs) that may not be widely recognised. Some MedDRA rules could also result in errors in data retrieval if the individual is unaware of these: in particular, the lack of multiaxial linkages for the Investigations SOC, Social circumstances SOC and Surgical and medical procedures SOC and the requirement that a PT may only be present under one High Level Term (HLT) and one High Level Group Term (HLGT) within any single SOC. Special Search Categories (collections of PTs assembled from various SOCs by

  10. Palaeo sea-level and ice-sheet databases: problems, strategies and perspectives

    Science.gov (United States)

    Rovere, Alessio; Düsterhus, André; Carlson, Anders; Barlow, Natasha; Bradwell, Tom; Dutton, Andrea; Gehrels, Roland; Hibbert, Fiona; Hijma, Marc; Horton, Benjamin; Klemann, Volker; Kopp, Robert; Sivan, Dorit; Tarasov, Lev; Törnqvist, Torbjorn

    2016-04-01

    Databases of palaeoclimate data have driven many major developments in understanding the Earth system. The measurement and interpretation of palaeo sea-level and ice-sheet data that form such databases pose considerable challenges to the scientific communities that use them for further analyses. In this paper, we build on the experience of the PALSEA (PALeo constraints on SEA level rise) community, which is a working group inside the PAGES (Past Global Changes) project, to describe the challenges and best strategies that can be adopted to build a self-consistent and standardised database of geological and geochemical data related to palaeo sea levels and ice sheets. Our aim in this paper is to identify key points that need attention and subsequent funding when undertaking the task of database creation. We conclude that any sea-level or ice-sheet database must be divided into three instances: i) measurement; ii) interpretation; iii) database creation. Measurement should include position, age, description of geological features, and quantification of uncertainties. All must be described as objectively as possible. Interpretation can be subjective, but it should always include uncertainties and all possible interpretations, without unjustified a priori exclusions. We propose that, in the creation of a database, an approach based on Accessibility, Transparency, Trust, Availability, Continued updating, Completeness and Communication of content (ATTAC3) must be adopted. It is also essential to consider the community structure that creates and benefits from a database. We conclude that funding sources should consider addressing not only the creation of original data in specific research-question-oriented projects, but also including the possibility of using part of the funding for IT-related and database-creation tasks, which are essential to guarantee accessibility and maintenance of the collected data.
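
    To illustrate the measurement/interpretation split advocated above, a record schema might keep the objective measurement fields separate from one or more interpretations, as in the sketch below; the field names are illustrative assumptions, not the PALSEA standard.

```python
# Minimal sketch of a sea-level index point record that keeps objective
# measurements separate from (possibly multiple) interpretations.
# Field names are illustrative assumptions, not the PALSEA standard.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Measurement:
    latitude: float
    longitude: float
    elevation_m: float
    elevation_uncertainty_m: float
    age_ka: float
    age_uncertainty_ka: float
    description: str

@dataclass
class Interpretation:
    indicative_meaning: str          # e.g. "mean high water"
    relative_sea_level_m: float
    uncertainty_m: float

@dataclass
class SeaLevelRecord:
    measurement: Measurement
    interpretations: List[Interpretation] = field(default_factory=list)
```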

  11. Review and Comparison of the Search Effectiveness and User Interface of Three Major Online Chemical Databases

    Science.gov (United States)

    Bharti, Neelam; Leonard, Michelle; Singh, Shailendra

    2016-01-01

    Online chemical databases are the largest source of chemical information and, therefore, the main resource for retrieving results from published journals, books, patents, conference abstracts, and other relevant sources. Various commercial, as well as free, chemical databases are available. SciFinder, Reaxys, and Web of Science are three major…

  12. Win-Win transport strategies: searching for synergies.

    OpenAIRE

    Valdés Serrano, Cristina; Monzón de Cáceres, Andrés; García Benítez, Francisco

    2014-01-01

    The need for an urban transport strategy in urban areas that solves the environmental problems derived from traffic without decreasing the trip attraction of these urban areas is taken for granted. Besides, there is also a clear consensus among researchers and institutions on the need for integrated transport strategies (May et al., 2006; Zhang et al., 2006). But there is still a lack of knowledge on the policy measures to be implemented. This research aims to deepen the understanding of h...

  13. Knowledge in Artificial Intelligence Systems: Searching the Strategies for Application

    OpenAIRE

    Kornienko, Alla A.; Kornienko, Anatoly V.; Fofanov, Oleg B.; Chubik, Maxim P.

    2015-01-01

    Studies based on auto-epistemic logic are pointed out as an advanced direction for the development of artificial intelligence (AI). Artificial intelligence is taken as a system that imitates the solution of complicated problems by humans during the course of life. The structure of symbols and operations, by which an intellectual solution is performed, as well as searching for the strategic reference points for those solutions, which are caused by certain structures of symbols and operations, – are co...

  14. Strategies for the search of life in the universe

    OpenAIRE

    Schneider, Jean

    1996-01-01

    The discovery of an increasing number of Jupiter-like planets in orbit around other stars (or extra-solar planets) is a promising first step toward the search for Life in the Universe. We review all aspects of the question: - definition of Life - definition and characterization of the 'habitable zone' around a star - overview of detection methods of planets, with special attention to habitable planets - present findings - future projects.

  15. Fast quantum search algorithm for databases of arbitrary size and its implementation in a cavity QED system

    International Nuclear Information System (INIS)

    Li, H.Y.; Wu, C.W.; Liu, W.T.; Chen, P.X.; Li, C.Z.

    2011-01-01

    We propose a method for implementing the Grover search algorithm directly in a database containing any number of items, based on multi-level systems. Compared with the searching procedure in a database with qubit encoding, our modified algorithm needs fewer iteration steps to find the marked item and uses the carriers of the information more economically. Furthermore, we illustrate how to realize our idea in cavity QED using the Zeeman level structure of atoms. Numerical simulation under the influence of cavity and atom decays shows that the scheme could be achieved efficiently within current state-of-the-art technology. -- Highlights: ► A modified Grover algorithm is proposed for searching in an arbitrary dimensional Hilbert space. ► Our modified algorithm requires fewer iteration steps to find the marked item. ► The proposed method uses the carriers of the information more economically. ► A scheme for a six-item Grover search in cavity QED is proposed. ► Numerical simulation under decays shows that the scheme can be achieved with enough fidelity.
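
    As a reminder of what a Grover search over an N-item space looks like numerically, independently of how the levels are physically encoded, the sketch below iterates the oracle and the inversion-about-the-mean operator on an N-dimensional state vector for roughly (π/4)√N steps. It is a classical simulation of the generic algorithm, not the cavity-QED scheme of the paper.

```python
# Classical simulation of Grover iterations on an N-dimensional search space:
# oracle phase flip on the marked item, then inversion about the mean.
# This illustrates the generic algorithm, not the cavity-QED implementation.
import math
import numpy as np

def grover(n_items: int, marked: int):
    state = np.full(n_items, 1.0 / math.sqrt(n_items))   # uniform superposition
    n_iter = max(1, int(math.pi / 4.0 * math.sqrt(n_items)))
    for _ in range(n_iter):
        state[marked] *= -1.0                            # oracle phase flip
        state = 2.0 * state.mean() - state               # inversion about the mean
    return n_iter, float(state[marked] ** 2)

iters, p_success = grover(n_items=6, marked=4)           # six-item example
print(iters, round(p_success, 3))                        # high success probability
```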

  16. Talk as a Metacognitive Strategy during the Information Search Process of Adolescents

    Science.gov (United States)

    Bowler, Leanne

    2010-01-01

    Introduction: This paper describes a metacognitive strategy related to the social dimension of the information search process of adolescents. Method: A case study that used naturalistic methods to explore the metacognitive thinking and associated emotions of ten adolescents. The study was framed by Kuhlthau's Information Search Process model and…

  17. A rational analysis of alternating search and reflection strategies in problem solving

    NARCIS (Netherlands)

    Taatgen, N; Shafto, MG; Langley, P

    1997-01-01

    In this paper two approaches to problem solving, search and reflection, are discussed, and combined in two models, both based on rational analysis (Anderson, 1990). The first model is a dynamic growth model, which shows that alternating search and reflection is a rational strategy. The second model

  18. Alumni Job Search Strategies, Class of 2011. GMAC[R] Data-to-Go Series

    Science.gov (United States)

    Graduate Management Admission Council, 2012

    2012-01-01

    Examining the job search strategies and employment outcomes for Class of 2011 graduate business school alumni sheds light on current job market trends and the effort required to secure a first job after earning a graduate business degree. This fact sheet highlights the job search methods used by Class of 2011 business school graduates as reported…

  19. Websites for children: search strategies and interface design. Three studies on children's search performance and evaluation

    NARCIS (Netherlands)

    Jochmann-Mannak, Hanna

    2014-01-01

    Children experience all kinds of problems using search interfaces for adults such as Google. The research reported in this dissertation is about the design of informational interfaces for children between 8 and 12 years old. The goal of the research was to learn more about interfaces that ‘work’ for

  20. How Interface Design and Search Strategy Influence Children’s Search Performance and Evaluation

    NARCIS (Netherlands)

    Jochmann-Mannak, Hanna; Lentz, Leo; Huibers, Theo W.C.; Sanders, Ted

    This chapter presents an experiment with 158 children, aged 10 to 12, in which search performance and attitudes towards an informational Website are investigated. The same Website was designed in 3 different types of interface design varying in playfulness of navigation structure and in playfulness

  1. Supervised learning of tools for content-based search of image databases

    Science.gov (United States)

    Delanoy, Richard L.

    1996-03-01

    A computer environment, called the Toolkit for Image Mining (TIM), is being developed with the goal of enabling users with diverse interests and varied computer skills to create search tools for content-based image retrieval and other pattern matching tasks. Search tools are generated using a simple paradigm of supervised learning that is based on the user pointing at mistakes of classification made by the current search tool. As mistakes are identified, a learning algorithm uses the identified mistakes to build up a model of the user's intentions, construct a new search tool, apply the search tool to a test image, display the match results as feedback to the user, and accept new inputs from the user. Search tools are constructed in the form of functional templates, which are generalized matched filters capable of knowledge-based image processing. The ability of this system to learn the user's intentions from experience contrasts with other existing approaches to content-based image retrieval that base searches on the characteristics of a single input example or on a predefined and semantically-constrained textual query. Currently, TIM is capable of learning spectral and textural patterns, but should be adaptable to the learning of shapes, as well. Possible applications of TIM include not only content-based image retrieval, but also quantitative image analysis, the generation of metadata for annotating images, data prioritization or data reduction in bandwidth-limited situations, and the construction of components for larger, more complex computer vision algorithms.

  2. Mascot search results - CREATE portal | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)


  3. Blackout risk prevention in a smart grid based flexible optimal strategy using Grey Wolf-pattern search algorithms

    International Nuclear Information System (INIS)

    Mahdad, Belkacem; Srairi, K.

    2015-01-01

    Highlights: • A generalized optimal security power system planning strategy for blackout risk prevention is proposed. • A Grey Wolf Optimizer dynamically coordinated with a Pattern Search algorithm is proposed. • A useful optimized database is dynamically generated considering margin loading stability under severe faults. • The robustness and feasibility of the proposed strategy are validated on the standard IEEE 30-Bus system. • The proposed planning strategy will be useful for power system protection coordination and control. - Abstract: Developing a flexible and reliable power system planning strategy under critical situations is of great importance to experts and industrials to minimize the probability of blackout occurrence. This paper introduces the first stage of this practical strategy by applying a Grey Wolf Optimizer coordinated with a pattern search algorithm to solve the security management of a smart grid power system under critical situations. The main objective of this proposed planning strategy is to protect the practical power system against blackout due to the appearance of faults in generating units or important transmission lines. In the first stage the system is pushed to its margin stability limit, and the critical loads to shed are selected using a voltage stability index. In the second stage the generator control variables and the reactive power of shunt and dynamic compensators are adjusted, in coordination with minimizing the active and reactive power at critical loads, to keep the system in a secure state and ensure service continuity. The feasibility and efficiency of the proposed strategy are demonstrated on the IEEE 30-Bus test system. Results are promising and prove the practical efficiency of the proposed strategy to ensure system security under critical situations
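
    For readers unfamiliar with the metaheuristic named above, the sketch below runs a plain Grey Wolf Optimizer loop on a toy sphere objective; the coordination with Pattern Search and the power-system security objective used in the paper are not reproduced.

```python
# Plain Grey Wolf Optimizer on a toy sphere objective, to illustrate the
# metaheuristic named above; the paper couples it with Pattern Search and a
# power-system security objective, which are not reproduced here.
import numpy as np

def gwo(objective, dim=4, n_wolves=20, n_iter=200, lb=-10.0, ub=10.0, seed=0):
    rng = np.random.default_rng(seed)
    wolves = rng.uniform(lb, ub, size=(n_wolves, dim))
    for t in range(n_iter):
        fitness = np.apply_along_axis(objective, 1, wolves)
        order = np.argsort(fitness)
        alpha, beta, delta = (wolves[j].copy() for j in order[:3])
        a = 2.0 - 2.0 * t / n_iter                   # control parameter: 2 -> 0
        for i in range(n_wolves):
            new_pos = np.zeros(dim)
            for leader in (alpha, beta, delta):
                r1, r2 = rng.random(dim), rng.random(dim)
                A, C = 2.0 * a * r1 - a, 2.0 * r2
                D = np.abs(C * leader - wolves[i])
                new_pos += leader - A * D            # pull toward each leader
            wolves[i] = np.clip(new_pos / 3.0, lb, ub)
    fitness = np.apply_along_axis(objective, 1, wolves)
    best = wolves[np.argmin(fitness)]
    return best, float(objective(best))

best_x, best_f = gwo(lambda x: float(np.sum(x ** 2)))
print(best_f)  # close to zero on this toy sphere objective
```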

  4. Search strategy for theorem proving in artificial systems. II

    Energy Technology Data Exchange (ETDEWEB)

    Lovitskii, V A; Barenboim, M S

    1981-01-01

    For Pt.I see IBID., p.28 (1981). An algorithm is presented, realizing the strategy of part I, by constructing a graph of disproofs. The idea central to the algorithm is to choose pairs of literals with the highest probability at the given moment, rather than to choose literals at random. 3 references.

  5. Search strategy for theorem proving in artificial systems. I

    Energy Technology Data Exchange (ETDEWEB)

    Lovitskii, V A; Barenboim, M S

    1981-01-01

    A strategy is contrived, employing the language of finite-order predicate calculus, for finding proofs of theorems. A theorem is formulated, based on 2 known theorems on purity and absorption, and used to determine 5 properties of a set of propositions. 3 references.

  6. Verification of Single-Peptide Protein Identifications by the Application of Complementary Database Search Algorithms

    National Research Council Canada - National Science Library

    Rohrbough, James G; Breci, Linda; Merchant, Nirav; Miller, Susan; Haynes, Paul A

    2005-01-01

    .... One such technique, known as the Multi-Dimensional Protein Identification Technique, or MudPIT, involves the use of computer search algorithms that automate the process of identifying proteins...

  7. Users’ Perceived Difficulties and Corresponding Reformulation Strategies in Google Voice Search

    Directory of Open Access Journals (Sweden)

    Wei Jeng

    2016-06-01

    Full Text Available In this article, we report users' perceptions of query input errors and query reformulation strategies in voice search, using data collected through a laboratory user study. Our results reveal that: 1) users' perceived obstacles during a voice search can be related to speech recognition errors and topic complexity; 2) users naturally develop different strategies to deal with various types of words (e.g., acronyms, single-worded queries, non-English words) that have high error rates in speech recognition; and 3) users can have various emotional reactions when encountering voice input errors, and they develop preferred usage occasions for voice search.

  8. Win the game of Googleopoly unlocking the secret strategy of search engines

    CERN Document Server

    Bradley, Sean V

    2015-01-01

    Rank higher in search results with this guide to SEO and content building supremacy Google is not only the number one search engine in the world, it is also the number one website in the world. Only 5 percent of site visitors search past the first page of Google, so if you're not in those top ten results, you are essentially invisible. Winning the Game of Googleopoly is the ultimate roadmap to Page One Domination. The POD strategy is what gets you on that super-critical first page of Google results by increasing your page views. You'll learn how to shape your online presence for Search Engine

  9. Confirming preferences or collecting data? Information search strategies and romantic partner selection.

    Science.gov (United States)

    Hennessy, Michael H; Fishbein, Marty; Curtis, Brenda; Barrett, Daniel

    2008-03-01

    This article investigates two kinds of information search strategies in the context of selecting romantic partners. Confirmatory searching occurs when people ask for more information about a romantic partner in order to validate or confirm their assessment. Balanced searches are characterized by a search for risk information for partners rated as attractive and for attractiveness information about partners rated as risky in order to attain a more complete evaluation. A factorial survey computer program randomly constructed five types of partner descriptions and college-age respondents evaluated nine descriptions in terms of both health risk and romantic attractiveness outcomes. The results show little evidence of balanced search strategies: for all vignette types the respondents searched for attractiveness information. Regression analysis of the search outcomes showed no difference between males and females in the desire for attractiveness or risk information, the amount of additional information desired, or the proportion of descriptions for which more information was desired. However, an attractive physical appearance did increase the amount of additional information desired and the proportion of vignettes for which more information was desired. The results were generally inconsistent with a balanced search hypothesis; a better characterization of the respondents' strategy might be "confirmatory bias."

  10. Intermittent random walks: transport regimes and implications on search strategies

    International Nuclear Information System (INIS)

    Gomez Portillo, Ignacio; Campos, Daniel; Méndez, Vicenç

    2011-01-01

    We construct a transport model for particles that alternate rests of random duration and flights with random velocities. The model provides a balance equation for the mesoscopic particle density obtained from the continuous-time random walk framework. By assuming power laws for the distributions of waiting times and flight durations (for any velocity distribution with finite moments) we have found that the model can yield all the transport regimes ranging from subdiffusion to ballistic depending on the values of the characteristic exponents of the distributions. In addition, if the exponents satisfy a simple relationship it is shown how the competition between the tails of the distributions gives rise to a diffusive transport. Finally, we explore how the details of this intermittent transport process affect the success probability in an optimal search problem where an individual searcher looks for a target distributed (heterogeneously) in space. All the results are conveniently checked with numerical simulations

  11. An automated supernova search and the design strategy

    International Nuclear Information System (INIS)

    Colgate, S.A.

    1987-01-01

    The design considerations for an automated supernova search are reviewed. If supernovae are to be found a week after explosion, well before the light maximum of both Types I and II, and if a discovery rate of 52 per year is to be justified, then one needs to keep roughly 5000 galaxies under surveillance out of a full set of 15,000 galaxies at ≅50 Mpc distance. Detection at 1% of Type I maximum light requires a 30-inch telescope, a 10 photoelectron per pixel threshold, a 128 x 128 pixel photodetector operating with a 3-second integration time, and 2 seconds to slew and settle ≅1°. A system designed to perform this function in real-time is described. 7 refs
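
    A back-of-the-envelope check of the cadence implied by these numbers (the arithmetic below is an inference from the quoted figures, not taken from the paper): at about 3 seconds of integration plus 2 seconds of slewing per galaxy, one pass over 5000 galaxies takes roughly 7 hours, i.e. about one observing night.

```python
# Back-of-the-envelope cadence check using the numbers quoted above
# (integration + slew time per galaxy, number of galaxies monitored).
integration_s = 3.0
slew_s = 2.0
n_galaxies = 5000

total_hours = n_galaxies * (integration_s + slew_s) / 3600.0
print(f"one pass over {n_galaxies} galaxies ~ {total_hours:.1f} hours")  # ~6.9 h
```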

  12. SimShiftDB; local conformational restraints derived from chemical shift similarity searches on a large synthetic database

    International Nuclear Information System (INIS)

    Ginzinger, Simon W.; Coles, Murray

    2009-01-01

    We present SimShiftDB, a new program to extract conformational data from protein chemical shifts using structural alignments. The alignments are obtained in searches of a large database containing 13,000 structures and corresponding back-calculated chemical shifts. SimShiftDB makes use of chemical shift data to provide accurate results even in the case of low sequence similarity, and with even coverage of the conformational search space. We compare SimShiftDB to HHSearch, a state-of-the-art sequence-based search tool, and to TALOS, the current standard tool for the task. We show that for a significant fraction of the predicted similarities, SimShiftDB outperforms the other two methods. Particularly, the high coverage afforded by the larger database often allows predictions to be made for residues not involved in canonical secondary structure, where TALOS predictions are both less frequent and more error prone. Thus SimShiftDB can be seen as a complement to currently available methods

  13. SimShiftDB; local conformational restraints derived from chemical shift similarity searches on a large synthetic database

    Energy Technology Data Exchange (ETDEWEB)

    Ginzinger, Simon W. [Center of Applied Molecular Engineering, University of Salzburg, Department of Molecular Biology, Division of Bioinformatics (Austria)], E-mail: simon@came.sbg.ac.at; Coles, Murray [Max-Planck-Institute for Developmental Biology, Department of Protein Evolution (Germany)], E-mail: Murray.Coles@tuebingen.mpg.de

    2009-03-15

    We present SimShiftDB, a new program to extract conformational data from protein chemical shifts using structural alignments. The alignments are obtained in searches of a large database containing 13,000 structures and corresponding back-calculated chemical shifts. SimShiftDB makes use of chemical shift data to provide accurate results even in the case of low sequence similarity, and with even coverage of the conformational search space. We compare SimShiftDB to HHSearch, a state-of-the-art sequence-based search tool, and to TALOS, the current standard tool for the task. We show that for a significant fraction of the predicted similarities, SimShiftDB outperforms the other two methods. Particularly, the high coverage afforded by the larger database often allows predictions to be made for residues not involved in canonical secondary structure, where TALOS predictions are both less frequent and more error prone. Thus SimShiftDB can be seen as a complement to currently available methods.

  14. Development and use of a content search strategy for retrieving studies on patients' views and preferences.

    Science.gov (United States)

    Selva, Anna; Solà, Ivan; Zhang, Yuan; Pardo-Hernandez, Hector; Haynes, R Brian; Martínez García, Laura; Navarro, Tamara; Schünemann, Holger; Alonso-Coello, Pablo

    2017-08-30

    Identifying scientific literature addressing patients' views and preferences is complex due to the wide range of studies that can be informative and the poor indexing of this evidence. Given the lack of guidance, we developed a search strategy to retrieve this type of evidence. We assembled an initial list of terms from several sources, including the revision of the terms and indexing of topic-related studies, methods research literature, and other relevant projects and systematic reviews. We used the relative recall approach, evaluating the capacity of the designed search strategy to retrieve studies included in relevant systematic reviews on the topic. We then implemented the final version of the search strategy in practice for conducting systematic reviews and guidelines, and calculated the search's precision and the number of references needed to read (NNR). The initial version of the search strategy had a relative recall of 87.4% (yield of 132 out of 151 studies). We then added some terms from the studies not initially identified and re-tested this improved version against the studies included in a new set of systematic reviews, reaching a relative recall of 85.8% (151 out of 176 studies, 95% CI 79.9 to 90.2). This final version of the strategy includes two sets of terms related to two domains: "Patient Preferences and Decision Making" and "Health State Utilities Values". When we used the search strategy for the development of systematic reviews and clinical guidelines, we obtained low precision values (ranging from 2% to 5%) and NNR values from 20 to 50. This search strategy fills an important research gap in this field. It will help systematic reviewers, clinical guideline developers, and policy-makers to retrieve published research on patients' views and preferences. In turn, this will facilitate the inclusion of this critical aspect when formulating health care decisions, including recommendations.
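
    As an illustration of the evaluation measures used above (relative recall against reference systematic reviews, precision, and the number of references needed to read), the following minimal Python sketch computes them for a hypothetical search output; it is not the authors' code, and the study identifiers are placeholders.

      # Minimal sketch of the evaluation metrics discussed above (hypothetical data).

      def relative_recall(retrieved_ids, reference_ids):
          """Share of studies from the reference systematic reviews that the search retrieves."""
          retrieved, reference = set(retrieved_ids), set(reference_ids)
          return len(retrieved & reference) / len(reference)

      def precision_and_nnr(retrieved_ids, relevant_ids):
          """Precision of the search output and number of references needed to read (NNR = 1/precision)."""
          retrieved, relevant = set(retrieved_ids), set(relevant_ids)
          hits = len(retrieved & relevant)
          precision = hits / len(retrieved) if retrieved else 0.0
          nnr = len(retrieved) / hits if hits else float("inf")
          return precision, nnr

      if __name__ == "__main__":
          # Hypothetical numbers echoing the scale reported above: 151 reference studies, 132 retrieved.
          reference = [f"study{i}" for i in range(151)]
          output = [f"study{i}" for i in range(132)] + [f"noise{i}" for i in range(4000)]
          print(f"relative recall = {relative_recall(output, reference):.1%}")
          precision, nnr = precision_and_nnr(output, reference)
          print(f"precision = {precision:.1%}, NNR = {nnr:.0f}")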

  15. Identification of risk conditions for the development of adrenal disorders: how optimized PubMed search strategies makes the difference.

    Science.gov (United States)

    Guaraldi, Federica; Parasiliti-Caprino, Mirko; Goggi, Riccardo; Beccuti, Guglielmo; Grottoli, Silvia; Arvat, Emanuela; Ghizzoni, Lucia; Ghigo, Ezio; Giordano, Roberta; Gori, Davide

    2014-12-01

    The exponential growth of scientific literature available through electronic databases (namely PubMed) has increased the chance of finding interesting articles. At the same time, searching has become more complicated, time consuming, and at risk of missing important information. Therefore, optimized strategies have to be adopted to maximize searching impact. The aim of this study was to formulate efficient strings to search PubMed for etiologic associations between adrenal disorders (ADs) and other conditions. A comprehensive list of terms identifying endogenous conditions primarily affecting the adrenals was compiled. An ad hoc analysis was performed to find the best way to express each term in order to retrieve the highest number of potentially pertinent articles in PubMed. A predefined number of retrieved abstracts were read to assess their association with the etiology of ADs. A more sensitive string (providing the largest literature coverage) and a more specific string (including only those terms retrieving >40% of potentially pertinent articles) were formulated. Various searches were performed to assess the strings' ability to identify articles of interest in comparison with non-optimized literature searches. We formulated optimized, readily applicable tools for identifying the literature assessing etiologic associations in the field of ADs using PubMed, and demonstrated the advantages deriving from their application. A detailed description of the methodological process is also provided, so that this work can easily be translated to other fields of practice.

  16. Content Based Retrieval Database Management System with Support for Similarity Searching and Query Refinement

    Science.gov (United States)

    2002-01-01

    Only fragmentary text is available for this record; the surviving fragments discuss the OODBMS and ORDBMS approaches, citing research prototypes such as Postgres and Starburst and related commercial products.

  17. Ariadne: a database search engine for identification and chemical analysis of RNA using tandem mass spectrometry data.

    Science.gov (United States)

    Nakayama, Hiroshi; Akiyama, Misaki; Taoka, Masato; Yamauchi, Yoshio; Nobe, Yuko; Ishikawa, Hideaki; Takahashi, Nobuhiro; Isobe, Toshiaki

    2009-04-01

    We present here a method to correlate tandem mass spectra of sample RNA nucleolytic fragments with an RNA nucleotide sequence in a DNA/RNA sequence database, thereby allowing tandem mass spectrometry (MS/MS)-based identification of RNA in biological samples. Ariadne, a unique web-based database search engine, identifies RNA by two probability-based evaluation steps of MS/MS data. In the first step, the software evaluates the matches between the masses of product ions generated by MS/MS of an RNase digest of sample RNA and those calculated from a candidate nucleotide sequence in a DNA/RNA sequence database, which then predicts the nucleotide sequences of these RNase fragments. In the second step, the candidate sequences are mapped for all RNA entries in the database, and each entry is scored for a function of occurrences of the candidate sequences to identify a particular RNA. Ariadne can also predict post-transcriptional modifications of RNA, such as methylation of nucleotide bases and/or ribose, by estimating mass shifts from the theoretical mass values. The method was validated with MS/MS data of RNase T1 digests of in vitro transcripts. It was applied successfully to identify an unknown RNA component in a tRNA mixture and to analyze post-transcriptional modification in yeast tRNA(Phe-1).

  18. Federated Search Tools in Fusion Centers: Bridging Databases in the Information Sharing Environment

    Science.gov (United States)

    2012-09-01

    Only fragmentary text is available for this record. Fusion centers are encouraged to explore all available information sources to enhance the intelligence analysis process. WSIC also utilizes ACCURINT, a web-based subscription service that searches open source information and is able to collect and collate it.

  19. Combining history of medicine and library instruction: an innovative approach to teaching database searching to medical students.

    Science.gov (United States)

    Timm, Donna F; Jones, Dee; Woodson, Deidra; Cyrus, John W

    2012-01-01

    Library faculty members at the Health Sciences Library at the LSU Health Shreveport campus offer a database searching class for third-year medical students during their surgery rotation. For a number of years, students completed "ten-minute clinical challenges," but the instructors decided to replace the clinical challenges with innovative exercises using The Edwin Smith Surgical Papyrus to emphasize concepts learned. The Surgical Papyrus is an online resource that is part of the National Library of Medicine's "Turning the Pages" digital initiative. In addition, vintage surgical instruments and historic books are displayed in the classroom to enhance the learning experience.

  20. Search strategy has influenced the discovery rate of human viruses.

    Science.gov (United States)

    Rosenberg, Ronald; Johansson, Michael A; Powers, Ann M; Miller, Barry R

    2013-08-20

    A widely held concern is that the pace of infectious disease emergence has been increasing. We have analyzed the rate of discovery of pathogenic viruses, the preeminent source of newly discovered causes of human disease, from 1897 through 2010. The rate was highest during 1950-1969, after which it moderated. This general picture masks two distinct trends: for arthropod-borne viruses, which comprised 39% of pathogenic viruses, the discovery rate peaked at three per year during 1960-1969, but subsequently fell nearly to zero by 1980; however, the rate of discovery of nonarboviruses remained stable at about two per year from 1950 through 2010. The period of highest arbovirus discovery coincided with a comprehensive program supported by The Rockefeller Foundation of isolating viruses from humans, animals, and arthropod vectors at field stations in Latin America, Africa, and India. The productivity of this strategy illustrates the importance of location, approach, long-term commitment, and sponsorship in the discovery of emerging pathogens.

  1. DOT Online Database

    Science.gov (United States)

    Only fragmentary text is available for this record: a document database website provided by MicroSearch, offering full-text web search of Advisory Circulars and related records, including data collection and distribution policies.

  2. Research on the optimization strategy of web search engine based on data mining

    Science.gov (United States)

    Chen, Ronghua

    2018-04-01

    With the wide application of search engines, web sites have become an important way for people to obtain information. However, web content is growing in an increasingly explosive manner, making it very difficult for people to find the information they need, and current search engines cannot fully meet this need. There is therefore an urgent need for personalized web information services, and data mining technology offers a breakthrough for this new challenge. In order to improve the accuracy with which people find information on websites, a website search engine optimization strategy based on data mining is proposed and verified through a search engine optimization experiment. The results show that the proposed strategy improves the accuracy with which people find information and reduces the time needed to find it. It has important practical value.

  3. Crescendo: A Protein Sequence Database Search Engine for Tandem Mass Spectra.

    Science.gov (United States)

    Wang, Jianqi; Zhang, Yajie; Yu, Yonghao

    2015-07-01

    A search engine that reliably discovers more peptides is essential to the progress of computational proteomics. We propose two new scoring functions (L- and P-scores), which aim to capture characteristics of a peptide-spectrum match (PSM) similar to those captured by Sequest and Comet. Crescendo, introduced here, is a software program that implements these two scores for peptide identification. We applied Crescendo to test datasets and compared its performance with widely used search engines, including Mascot, Sequest, and Comet. The results indicate that Crescendo identifies a similar or larger number of peptides at various predefined false discovery rates (FDR). Importantly, it also provides a better separation between true and decoy PSMs, warranting the future development of a companion post-processing filtering algorithm.
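
    The predefined false discovery rates mentioned above are commonly estimated with a target-decoy strategy. The sketch below is a generic illustration of that idea under the assumption of a concatenated target-decoy search; it is not Crescendo's scoring or filtering code, and the scores are invented.

      # Generic target-decoy FDR thresholding for peptide-spectrum matches (PSMs); illustrative only.

      def score_threshold_at_fdr(target_scores, decoy_scores, fdr=0.01):
          """Return the lowest score at which the estimated FDR (decoys/targets at or above it) stays <= fdr."""
          scored = [(s, False) for s in target_scores] + [(s, True) for s in decoy_scores]
          scored.sort(reverse=True)                      # best scores first
          targets = decoys = 0
          threshold = None
          for score, is_decoy in scored:
              if is_decoy:
                  decoys += 1
              else:
                  targets += 1
              if targets and decoys / targets <= fdr:
                  threshold = score                      # FDR still acceptable down to this score
          return threshold

      if __name__ == "__main__":
          targets = [12.1, 9.8, 7.5, 6.9, 5.2, 4.8, 3.1]  # invented PSM scores
          decoys = [4.9, 3.0, 2.2]
          print("accept PSMs scoring >=", score_threshold_at_fdr(targets, decoys, fdr=0.1))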

  4. Remediation strategies after nuclear or radiological accidents: part 1 - database development

    International Nuclear Information System (INIS)

    Silva, Diogo N.G.; Wasserman, Maria Angelica V.; Rochedo, Elaine R.R.

    2009-01-01

    The selection of protective measures and of remediation strategies for areas affected by a nuclear or radiological accident needs to be based on previously established criteria, in a way that minimizes the public's emotional stress and the exposure of workers involved in cleanup operations due to the implementation of procedures that are not effective in reducing doses to the public. Thus, this work intended to develop a database that supports the decision-making process after such accidents by describing the foreseen strategies according to the type of accident and the type of affected environment, so that they can be used in a multi-criteria selection process. To achieve that, in this first stage, the database has been developed to include the following aspects: the type of environment (urban, rural or aquatic); the contamination removal efficiency of each strategy, as a function of the time elapsed since the contamination event; the type and amount of waste generated by applying the strategy; the expected doses to the work team; and basic needs such as specific materials, equipment, training, IPE, among others. Protective measures are usually described in the literature in terms of their efficiency in removing activity from a given surface or environment. In order to determine their efficiency in reducing doses, a second stage is foreseen, involving the simulation of the implementation of the measures at different moments after the contamination, based on pre-defined accidents and scenarios, with a focus on the surroundings of the Brazilian Nuclear Power Plants in Angra dos Reis. (author)

  5. Remediation strategies after nuclear or radiological accidents: part 1 - database development

    Energy Technology Data Exchange (ETDEWEB)

    Silva, Diogo N.G.; Wasserman, Maria Angelica V. [Instituto de Radioprotecao e Dosimetria (IRD/CNEN-RJ), Rio de Janeiro, RJ (Brazil)], e-mail: dneves@ird.gov.br, e-mail: angelica@ird.gov.br, e-mail: lfcconti@ird.gov.br; Rochedo, Elaine R.R. [Comissao Nacional de Energia Nuclear (CNEN-RJ), Rio de Janeiro, RJ (Brazil). Coordenacao de Instalacoes Nucleares], e-mail: erochedo@cnen.gov.br

    2009-07-01

    The selection of protective measures and of remediation strategies for areas affected by a nuclear or radiological accident needs to be based on previously established criteria, in a way that minimizes the public's emotional stress and the exposure of workers involved in cleanup operations due to the implementation of procedures that are not effective in reducing doses to the public. Thus, this work intended to develop a database that supports the decision-making process after such accidents by describing the foreseen strategies according to the type of accident and the type of affected environment, so that they can be used in a multi-criteria selection process. To achieve that, in this first stage, the database has been developed to include the following aspects: the type of environment (urban, rural or aquatic); the contamination removal efficiency of each strategy, as a function of the time elapsed since the contamination event; the type and amount of waste generated by applying the strategy; the expected doses to the work team; and basic needs such as specific materials, equipment, training, IPE, among others. Protective measures are usually described in the literature in terms of their efficiency in removing activity from a given surface or environment. In order to determine their efficiency in reducing doses, a second stage is foreseen, involving the simulation of the implementation of the measures at different moments after the contamination, based on pre-defined accidents and scenarios, with a focus on the surroundings of the Brazilian Nuclear Power Plants in Angra dos Reis. (author)

  6. A comparison of information functions and search strategies for sensor planning in target classification.

    Science.gov (United States)

    Zhang, Guoxian; Ferrari, Silvia; Cai, Chenghui

    2012-02-01

    This paper investigates the comparative performance of several information-driven search strategies and decision rules using a canonical target classification problem. Five sensor models are considered: one obtained from classical estimation theory and four obtained from Bernoulli, Poisson, binomial, and mixture-of-binomial distributions. A systematic approach is presented for deriving information functions that represent the expected utility of future sensor measurements from mutual information, Rényi divergence, Kullback-Leibler divergence, information potential, quadratic entropy, and the Cauchy-Schwarz distance. The resulting information-driven strategies are compared to direct-search, alert-confirm, task-driven (TS), and log-likelihood-ratio (LLR) search strategies. Extensive numerical simulations show that quadratic entropy typically leads to the most effective search strategy with respect to correct-classification rates. In the presence of prior information, the quadratic-entropy-driven strategy also displays the lowest rate of false alarms. However, when prior information is absent or very noisy, TS and LLR strategies achieve the lowest false-alarm rates for the Bernoulli, mixture-of-binomial, and classical sensor models.
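
    For reference, generic textbook definitions (discrete case) of three of the information measures named above are sketched in LaTeX below; these are standard forms, not the paper's sensor-model-specific derivations.

      % Mutual information, Kullback-Leibler divergence, and Renyi divergence of order alpha.
      \begin{align}
        I(X;Z) &= \sum_{x}\sum_{z} p(x,z)\,\log\frac{p(x,z)}{p(x)\,p(z)} \\
        D_{\mathrm{KL}}(p\,\|\,q) &= \sum_{x} p(x)\,\log\frac{p(x)}{q(x)} \\
        D_{\alpha}(p\,\|\,q) &= \frac{1}{\alpha-1}\,\log\sum_{x} p(x)^{\alpha}\,q(x)^{1-\alpha},
        \qquad \alpha > 0,\ \alpha \neq 1
      \end{align}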

  7. Sleuth: A quasi-model-independent search strategy for new physics

    International Nuclear Information System (INIS)

    Bruce O. Knuteson

    2001-01-01

    How can we search for new physics when we only vaguely know what it should look like? How can we perform an unbiased yet data-driven search? If we see apparently anomalous events in our data, how can we quantify their ''interestingness'' a posteriori? We present an analysis strategy (SLEUTH) that simultaneously addresses each of these questions, and we demonstrate its application to over thirty exclusive final states in data collected by D0 in Run I of the Fermilab Tevatron

  8. The Magnetics Information Consortium (MagIC) Online Database: Uploading, Searching and Visualizing Paleomagnetic and Rock Magnetic Data

    Science.gov (United States)

    Minnett, R.; Koppers, A.; Tauxe, L.; Constable, C.; Pisarevsky, S. A.; Jackson, M.; Solheid, P.; Banerjee, S.; Johnson, C.

    2006-12-01

    The Magnetics Information Consortium (MagIC) is commissioned to implement and maintain an online portal to a relational database populated by both rock and paleomagnetic data. The goal of MagIC is to archive all measurements and the derived properties for studies of paleomagnetic directions (inclination, declination) and intensities, and for rock magnetic experiments (hysteresis, remanence, susceptibility, anisotropy). MagIC is hosted under EarthRef.org at http://earthref.org/MAGIC/ and has two search nodes, one for paleomagnetism and one for rock magnetism. Both nodes provide query building based on location, reference, methods applied, material type and geological age, as well as a visual map interface to browse and select locations. The query result set is displayed in a digestible tabular format allowing the user to descend through hierarchical levels such as from locations to sites, samples, specimens, and measurements. At each stage, the result set can be saved and, if supported by the data, can be visualized by plotting global location maps, equal area plots, or typical Zijderveld, hysteresis, and various magnetization and remanence diagrams. User contributions to the MagIC database are critical to achieving a useful research tool. We have developed a standard data and metadata template (Version 2.1) that can be used to format and upload all data at the time of publication in Earth Science journals. Software tools are provided to facilitate population of these templates within Microsoft Excel. These tools allow for the import/export of text files and provide advanced functionality to manage and edit the data, and to perform various internal checks to maintain data integrity and prepare for uploading. The MagIC Contribution Wizard at http://earthref.org/MAGIC/upload.htm executes the upload and takes only a few minutes to process several thousand data records. The standardized MagIC template files are stored in the digital archives of EarthRef.org where they

  9. Optimal Search Strategy of Robotic Assembly Based on Neural Vibration Learning

    Directory of Open Access Journals (Sweden)

    Lejla Banjanovic-Mehmedovic

    2011-01-01

    Full Text Available This paper presents the implementation of an optimal search strategy (OSS) in the verification of an assembly process based on neural vibration learning. The application problem is the complex robotic assembly of miniature parts, exemplified by mating the gears of a multistage planetary speed reducer. Assembly of the tube over the planetary gears was identified as the most difficult step of the overall assembly. The favourable influence of vibration and rotation movement on the compensation of tolerances was also observed. With the proposed neural-network-based learning algorithm, it is possible to find an extended scope of the vibration state parameters. Using an optimal search strategy based on the minimal-distance path between vibration parameter stage sets (amplitude and frequencies of the robot grip vibration) and a recovery parameter algorithm, we can improve the robot assembly behaviour, that is, allow the fastest possible mating. We have verified by simulation that the search strategy is suitable for situations with unexpected events due to uncertainties.

  10. A Semidefinite Programming Based Search Strategy for Feature Selection with Mutual Information Measure.

    Science.gov (United States)

    Naghibi, Tofigh; Hoffmann, Sarah; Pfister, Beat

    2015-08-01

    Feature subset selection, as a special case of the general subset selection problem, has been the topic of a considerable number of studies due to the growing importance of data-mining applications. In the feature subset selection problem there are two main issues that need to be addressed: (i) Finding an appropriate measure function that can be fairly fast and robustly computed for high-dimensional data. (ii) A search strategy to optimize the measure over the subset space in a reasonable amount of time. In this article mutual information between features and class labels is considered to be the measure function. Two series expansions for mutual information are proposed, and it is shown that most heuristic criteria suggested in the literature are truncated approximations of these expansions. It is well-known that searching the whole subset space is an NP-hard problem. Here, instead of the conventional sequential search algorithms, we suggest a parallel search strategy based on semidefinite programming (SDP) that can search through the subset space in polynomial time. By exploiting the similarities between the proposed algorithm and an instance of the maximum-cut problem in graph theory, the approximation ratio of this algorithm is derived and is compared with the approximation ratio of the backward elimination method. The experiments show that it can be misleading to judge the quality of a measure solely based on the classification accuracy, without taking the effect of the non-optimum search strategy into account.
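
    As context for the truncated approximations mentioned above, many heuristic feature-selection criteria score a candidate feature with a first-order expression of the following form (a generic MIFS/mRMR-style criterion shown for illustration, not the paper's own series expansion), where S is the already-selected subset, c the class label, and beta a redundancy weight:

      % Generic low-order approximation behind many greedy mutual-information criteria.
      J(x_k) \;=\; I(x_k;\,c)\;-\;\beta \sum_{x_j \in S} I(x_k;\,x_j), \qquad \beta \ge 0,
      \qquad x^{*} = \arg\max_{x_k \notin S} J(x_k)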

  11. Improved quantum-behaved particle swarm optimization with local search strategy

    Directory of Open Access Journals (Sweden)

    Maolong Xi

    2017-03-01

    Full Text Available Quantum-behaved particle swarm optimization, which was motivated by analysis of particle swarm optimization and quantum systems, has shown performance comparable to other evolutionary algorithms in finding optimal solutions for many optimization problems. To address the problem of premature convergence, a local search strategy is proposed to improve the performance of quantum-behaved particle swarm optimization. In the proposed local search strategy, a super particle is presented, which is a collection of dimension information from randomly selected particles in the swarm. The selection probability of particles in the swarm differs and is determined by their fitness values. For minimization problems, the smaller a particle's fitness value, the higher its selection probability and the more information it contributes to constructing the super particle. In addition, in order to investigate the influence of different local search spaces on algorithm performance, four methods of computing the local search radius are applied in the local search strategy, yielding four variants of local-search quantum-behaved particle swarm optimization. Empirical studies on a suite of well-known benchmark functions are undertaken in order to make an overall performance comparison among the proposed methods and other quantum-behaved particle swarm optimization variants. The simulation results show that the proposed quantum-behaved particle swarm optimization variants have advantages over the original quantum-behaved particle swarm optimization.
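
    A minimal sketch of the super-particle construction described above is given below; it is a simplified illustration (assuming a minimization problem with non-negative fitness values), not the authors' implementation.

      import random

      def build_super_particle(positions, fitnesses):
          """Assemble a 'super particle' by drawing each dimension from a swarm member
          chosen with probability that grows as its fitness shrinks (minimization),
          so better particles contribute more dimension information."""
          eps = 1e-12
          weights = [1.0 / (f + eps) for f in fitnesses]   # assumes non-negative fitness values
          dimensions = len(positions[0])
          super_particle = []
          for d in range(dimensions):
              donor = random.choices(range(len(positions)), weights=weights, k=1)[0]
              super_particle.append(positions[donor][d])
          return super_particle

      if __name__ == "__main__":
          swarm = [[0.1, 2.0, -1.3], [0.4, 1.1, 0.2], [3.0, -0.5, 0.9]]   # invented positions
          fitness = [5.0, 1.2, 9.7]                                       # smaller is better
          print(build_super_particle(swarm, fitness))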

  12. Heat pumps: Industrial applications. (Latest citations from the NTIS bibliographic database). Published Search

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1997-04-01

    The bibliography contains citations concerning design, development, and applications of heat pumps for industrial processes. Included are thermal energy exchanges based on air-to-air, ground-coupled, air-to-water, and water-to-water systems. Specific applications include industrial process heat, drying, district heating, and waste processing plants. Other Published Searches in this series cover heat pump technology and economics, and heat pumps for residential and commercial applications. (Contains 50-250 citations and includes a subject term index and title list.) (Copyright NERAC, Inc. 1995)

  13. Heat pumps: Industrial applications. (Latest citations from the NTIS bibliographic database). Published Search

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1996-01-01

    The bibliography contains citations concerning design, development, and applications of heat pumps for industrial processes. Included are thermal energy exchanges based on air-to-air, ground-coupled, air-to-water, and water-to-water systems. Specific applications include industrial process heat, drying, district heating, and waste processing plants. Other Published Searches in this series cover heat pump technology and economics, and heat pumps for residential and commercial applications. (Contains 50-250 citations and includes a subject term index and title list.) (Copyright NERAC, Inc. 1995)

  14. Using a Native XML Database for Encoded Archival Description Search and Retrieval

    Directory of Open Access Journals (Sweden)

    Alan Cornish

    2017-09-01

    Full Text Available This article is an attempt to develop Geographic Information Systems (GIS) technology into an analytical tool for examining the relationships between the height of the bookshelves and the behavior of library readers in utilizing books within a library. The tool would contain a database to store book-use information and some GIS maps to represent bookshelves. Upon analyzing the data stored in the database, different frequencies of book use across bookshelf layers are displayed on the maps. The tool would provide a wonderful means of visualization through which analysts can quickly realize the spatial distribution of books used in a library. This article reveals that readers tend to pull books out of the bookshelf layers that are easily reachable by human eyes and hands, and thus opens some issues for librarians to reconsider the management of library collections.

  15. A Critical Review of Search Strategies Used in Recent Systematic Reviews Published in Selected Prosthodontic and Implant-Related Journals: Are Systematic Reviews Actually Systematic?

    Science.gov (United States)

    Layton, Danielle

    The aim of this study was to outline how search strategies can be systematic, to examine how the searches in recent systematic reviews in prosthodontic and implant-related journals were structured, and to determine whether the search strategies used in those articles were systematic. A total of 103 articles published as systematic reviews and indexed in Medline between January 2013 and May 2016 were identified from eight prosthodontic and implant journals and reviewed. The search strategies were considered systematic when they met the following criteria: (1) more than one electronic database was searched, (2) more than one searcher was clearly involved, (3) both text words and indexing terms were clearly included in the search strategy, (4) a hand search of selected journals or reference lists was undertaken, (5) gray research was specifically sought, and (6) the articles were published in English and at least one other language. The data were tallied and qualitatively assessed. The majority of articles reported on implants (54%), followed by tooth-supported fixed prosthodontics (13%). A total of 23 different electronic resources were consulted, including Medline (by 100% of articles), the Cochrane Library (52%), and Embase (37%). The majority consulted more than one electronic resource (71%), clearly included more than one searcher (73%), and employed a hand search of either selected journals or reference lists (86%). Less than half used both text words and indexing terms to identify articles (42%), while 15% actively sought gray research. Articles published in languages other than English were considered in 63 reviews, but only 14 had no language restrictions. Of the 103 articles, 5 completed search strategies that met all 6 criteria, and a further 12 met 5 criteria. Two articles did not fulfill any of the criteria. More than 95% of recent prosthodontic and implant review articles published in the selected journals failed to use search strategies that were

  16. Centrifuge enrichment plants. (Latest citations from the NTIS bibliographic database). Published Search

    International Nuclear Information System (INIS)

    1993-09-01

    The bibliography contains citations concerning the design, control, monitoring, and safety of centrifuge enrichment plants. Power supplies, enrichment plant safeguards, facility design, cascade heater test loops to monitor the enrichment process, inspection strategies, and the socioeconomic effects of centrifuge enrichment plants are examined. Radioactive waste disposal problems are considered. (Contains a minimum of 171 citations and includes a subject term index and title list.)

  17. Searching the databases: a quick look at Amazon and two other online catalogues.

    Science.gov (United States)

    Potts, Hilary

    2003-01-01

    The Amazon Online Catalogue was compared with the Library of Congress Catalogue and the British Library Catalogue, both also available online, by searching on both neutral (Gay, Lesbian, Homosexual) and pejorative (Perversion, Sex Crime) subject terms, and also by searches using Boolean logic in an attempt to identify Lesbian Fiction items and religion-based anti-gay material. Amazon was much more likely to be the first port of call for non-academic enquiries. Although excluding much material necessary for academic research, it carried more information about the individual books and less historical homophobic baggage in its terminology than the great national catalogues. Its back catalogue of second-hand books outnumbered those in print. Current attitudes may partially be gauged by the relative numbers of titles published under each heading, e.g., there may be an inverse relationship between concern about child sex abuse and homophobia, more noticeable in the U.S. because of the activities of the religious right.

  18. On-line biomedical databases-the best source for quick search of the scientific information in the biomedicine.

    Science.gov (United States)

    Masic, Izet; Milinovic, Katarina

    2012-06-01

    Most medical journals now have an electronic version available over public networks. Although printed and electronic versions often exist in parallel, the two forms need not be published simultaneously: the electronic version of a journal can appear a few weeks before the printed form and need not have identical content. The electronic form of a journal may include features that the printed form does not, such as animations or 3D displays, and may offer full text (mostly in PDF or XML format), only the table of contents, or only summaries. Access to the full text is usually not free and is typically possible only if the institution (library or host) enters into an access agreement. Many medical journals, however, provide free access to some articles, or to the complete content after a certain time (6 months or a year). Such journals can be found through network archives such as HighWire Press and Free Medical Journals.com. PubMed and PubMed Central deserve special mention as the first public digital archives offering unrestricted collections of the available medical literature, operated by the National Library of Medicine in Bethesda (USA). There are also so-called online medical journals published only in electronic form, which can be searched through online databases. In this paper the authors briefly describe about 30 databases and give short instructions on how to access and search the papers published in indexed medical journals.

  19. The search for dark matter in xenon: Innovative calibration strategies and novel search channels

    Science.gov (United States)

    Reichard, Shayne Edward

    The direct detection dark matter experiment XENON1T became operational in early 2016, heralding the era of tonne-scale dark matter detectors. Direct detection experiments typically search for elastic scatters of dark matter particles off target nuclei. XENON1T's larger xenon target provides the advantage of stronger dark matter signals and lower background rates compared to its predecessors, XENON10 and XENON100; but, at the same time, calibration of the detector's response to backgrounds with traditional external sources becomes exceedingly more difficult. A 220Rn source is deployed on the XENON100 dark matter detector in order to address the challenges in calibration of tonne-scale liquid noble element detectors. I show that the subsequent 212Pb beta emission can be used for low-energy electronic recoil calibration in searches for dark matter. The isotope spreads throughout the entire active region of the detector, and its activity naturally decays below background level within a week after the source is closed. I find no increase in the activity of the troublesome 222Rn background after calibration. Alpha emitters are also distributed throughout the detector and facilitate calibration of its response to 222Rn. Using the delayed coincidence of 220Rn/216Po, I map for the first time the convective motion of particles in the XENON100 detector. Additionally, I make a competitive measurement of the half-life of 212Po, t1/2 = 293.9 ± 1.0 (stat) ± 0.6 ns. In contrast to the elastic scattering of dark matter particles off nuclei, I explore inelastic scattering where the nucleus is excited to a low-lying state of 10-100 keV, with a subsequent prompt de-excitation. I use the inelastic structure factors for the odd-mass xenon isotopes based on state-of-the-art large-scale shell-model calculations with chiral effective field theory WIMP-nucleon currents, finding that the inelastic channel is comparable to or can dominate the elastic channel for momentum transfers around 150 MeV.

  20. Developing optimal search strategies for detecting clinically sound prognostic studies in MEDLINE: an analytic survey

    Directory of Open Access Journals (Sweden)

    Haynes R Brian

    2004-06-01

    Full Text Available Abstract Background Clinical end users of MEDLINE have a difficult time retrieving articles that are both scientifically sound and directly relevant to clinical practice. Search filters have been developed to assist end users in increasing the success of their searches. Many filters have been developed for the literature on therapy and reviews but little has been done in the area of prognosis. The objective of this study is to determine how well various methodologic textwords, Medical Subject Headings, and their Boolean combinations retrieve methodologically sound literature on the prognosis of health disorders in MEDLINE. Methods An analytic survey was conducted, comparing hand searches of journals with retrievals from MEDLINE for candidate search terms and combinations. Six research assistants read all issues of 161 journals for the publishing year 2000. All articles were rated using purpose and quality indicators and categorized into clinically relevant original studies, review articles, general papers, or case reports. The original and review articles were then categorized as 'pass' or 'fail' for methodologic rigor in the areas of prognosis and other clinical topics. Candidate search strategies were developed for prognosis and run in MEDLINE – the retrievals being compared with the hand search data. The sensitivity, specificity, precision, and accuracy of the search strategies were calculated. Results 12% of studies classified as prognosis met basic criteria for scientific merit for testing clinical applications. Combinations of terms reached peak sensitivities of 90%. Compared with the best single term, multiple terms increased sensitivity for sound studies by 25.2% (absolute increase), and increased specificity, but by a much smaller amount (1.1%), when sensitivity was maximized. Combining terms to optimize both sensitivity and specificity achieved sensitivities and specificities of approximately 83% for each. Conclusion Empirically derived
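
    The retrieval performance measures reported above come from the 2 x 2 cross-classification of MEDLINE retrieval against the hand-search gold standard; their standard definitions (given here for reference, not quoted from the paper) are:

      % TP, FP, FN, TN: retrieved vs. not retrieved against the hand-search gold standard.
      \mathrm{sensitivity} = \frac{TP}{TP+FN}, \qquad
      \mathrm{specificity} = \frac{TN}{TN+FP}, \qquad
      \mathrm{precision}   = \frac{TP}{TP+FP}, \qquad
      \mathrm{accuracy}    = \frac{TP+TN}{TP+FP+FN+TN}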

  1. [Method of traditional Chinese medicine formula design based on 3D-database pharmacophore search and patent retrieval].

    Science.gov (United States)

    He, Yu-su; Sun, Zhi-yi; Zhang, Yan-ling

    2014-11-01

    By using the pharmacophore model of mineralocorticoid receptor antagonists as a starting point, this experiment studies a method of traditional Chinese medicine formula design for anti-hypertensive use. Pharmacophore models were generated by the 3D-QSAR pharmacophore (Hypogen) program of DS3.5, based on a training set composed of 33 mineralocorticoid receptor antagonists. The best pharmacophore model consisted of two hydrogen-bond acceptors, three hydrophobic features, and four excluded volumes. Its correlation coefficients for the training set and test set, N, and CAI value were 0.9534, 0.6748, 2.878, and 1.119, respectively. From the database screening, 1700 active compounds from 86 source plants were obtained. Because traditional theory lacks an applicable anti-hypertensive medication strategy, this article takes advantage of patent retrieval in the world traditional medicine patent database in order to design drug formulae. Finally, two formulae were obtained for anti-hypertensive use.

  2. Enhancing Islamic Students’ Reading Comprehension through Predict Organize Search Summarize Evaluate Strategy

    Directory of Open Access Journals (Sweden)

    Darmayenti Darmayenti

    2017-02-01

    Full Text Available This paper is a report of an experimental research project conducted in a reading comprehension course for first-year students of the Adab Faculty of the State Institute for Islamic Studies Imam Bonjol Padang, West Sumatera, Indonesia, during the academic year 2015/2016. The "Predict Organize Search Summarize Evaluate" (POSSE) strategy is one strategy that can enhance students' comprehension in reading. Two classes of Arabic and History students, chosen through a cluster random sampling technique, were used as the sample of the research. Reading tests, given to both classes as a pre-test and a post-test, were used to collect the data. The results of the research showed that the implementation of the Predict Organize Search Summarize Evaluate strategy produced a significant difference in learning outcomes between the students taught with the POSSE strategy and those taught with a traditional one. The findings showed that teaching reading using the POSSE strategy had a significant effect on students' reading comprehension, in particular improving students' ability to find the topic of a text. It can be concluded that using the POSSE strategy improved these Indonesian students' reading comprehension. It is also recommended that English lecturers use the POSSE strategy as one of their teaching strategies for reading comprehension.

  3. Wayfinding search strategies and matching familiarity in the built environment through virtual navigation

    NARCIS (Netherlands)

    Dijkstra, J.; Vries, de B.; Jessurun, A.J.

    2014-01-01

    There is an underestimation of the conscious and unconscious wayfinding search strategies used in a virtual built environment without signage information. Wayfinding is the process of determining and following a path or route between an origin and a destination. This is the basis of the experiment discussed

  4. Design and Implementation of Cancellation Tasks for Visual Search Strategies and Visual Attention in School Children

    Science.gov (United States)

    Wang, Tsui-Ying; Huang, Ho-Chuan; Huang, Hsiu-Shuang

    2006-01-01

    We propose a computer-assisted cancellation test system (CACTS) to understand the visual attention performance and visual search strategies in school children. The main aim of this paper is to present our design and development of the CACTS and demonstrate some ways in which computer techniques can allow the educator not only to obtain more…

  5. WANDERER IN THE MIST: THE SEARCH FOR INTELLIGENCE, SURVEILLANCE, AND RECONNAISSANCE (ISR) STRATEGY

    Science.gov (United States)

    2017-06-01

    Only fragmentary text is available for this record: a thesis by Major Ryan D. Skaggs, USAF, a career intelligence officer with over 13 years of experience, on the search for an intelligence, surveillance, and reconnaissance (ISR) strategy; the fragments also mention the production of over 383,000 photographic prints to support intelligence and mapping.

  6. The “I’m Feeling Lucky Syndrome”: Teacher-Candidates’ Knowledge of Web Searching Strategies

    Directory of Open Access Journals (Sweden)

    Corinne Laverty

    2008-06-01

    Full Text Available The need for web literacy has become increasingly important with the exponential growth of learning materials on the web that are freely accessible to educators. Teachers need the skills to locate these tools and also the ability to teach their students web search strategies and the evaluation of websites so they can effectively explore the web by themselves. This study examined the web searching strategies of 253 teachers-in-training using both a survey (247 participants) and live screen capture with think-aloud audio recording (6 participants). The results present a picture of the strategic, syntactic, and evaluative search abilities of these students that librarians and faculty can use to plan how instruction can target information skill deficits in university student populations.

  7. Institutional Embeddedness of Search Strategies and the Implications for Innovation Performance

    DEFF Research Database (Denmark)

    Grimpe, Christoph; Sofka, Wolfgang

    2013-01-01

    ... ignored the institutional context that provides or denies access to external knowledge at the country level. Combining institutional and knowledge search theory, we suggest that the market orientation of the institutional environment and the magnitude of institutional change influence when firms begin to experience the negative performance effects of oversearch. Based on a comprehensive sample of almost 8,000 firms from ten European countries, we find that institutions matter considerably for firms’ search activity. Higher market orientation of institutions increases the effectiveness of firms’ search for external knowledge while higher magnitudes of institutional change decrease it. Our results provide important insights for management on how to adapt search strategies to the institutional context.

  8. A fast search strategy for gravitational waves from low-mass x-ray binaries

    International Nuclear Information System (INIS)

    Messenger, C; Woan, G

    2007-01-01

    We present a new type of search strategy designed specifically to find continuously emitting gravitational wave sources in known binary systems. A component of this strategy is based on the incoherent summation of frequency-modulated binary signal sidebands, a method previously employed in the detection of electromagnetic pulsar signals from radio observations. The search pipeline can be divided into three stages: the first is a wide bandwidth, F-statistic search demodulated for sky position. This is followed by a fast second stage in which areas in frequency space are identified as signal candidates through the frequency domain convolution of the F-statistic with an approximate signal template. For this second stage only precise information on the orbit period and approximate information on the orbital semi-major axis are required a priori. For the final stage we propose a fully coherent Markov chain Monte Carlo based follow-up search on the frequency subspace defined by the candidates identified by the second stage. This search is particularly suited to the low-mass x-ray binaries, for which orbital period and sky position are typically well known and additional orbital parameters and neutron star spin frequency are not. We note that for the accreting x-ray millisecond pulsars, for which spin frequency and orbital parameters are well known, the second stage can be omitted and the fully coherent search stage can be performed. We describe the search pipeline with respect to its application to a simplified phase model and derive the corresponding sensitivity of the search

  9. Information search and decision making: effects of age and complexity on strategy use.

    Science.gov (United States)

    Queen, Tara L; Hess, Thomas M; Ennis, Gilda E; Dowd, Keith; Grühn, Daniel

    2012-12-01

    The impact of task complexity on information search strategy and decision quality was examined in a sample of 135 young, middle-aged, and older adults. We were particularly interested in the competing roles of fluid cognitive ability and domain knowledge and experience, with the former being a negative influence and the latter being a positive influence on older adults' performance. Participants utilized 2 decision matrices, which varied in complexity, regarding a consumer purchase. Using process tracing software and an algorithm developed to assess decision strategy, we recorded search behavior, strategy selection, and final decision. Contrary to expectations, older adults were not more likely than the younger age groups to engage in information-minimizing search behaviors in response to increases in task complexity. Similarly, adults of all ages used comparable decision strategies and adapted their strategies to the demands of the task. We also examined decision outcomes in relation to participants' preferences. Overall, it seems that older adults utilize simpler sets of information primarily reflecting the most valued attributes in making their choice. The results of this study suggest that older adults are adaptive in their approach to decision making and that this ability may benefit from accrued knowledge and experience.

  10. High serum folate is associated with reduced biochemical recurrence after radical prostatectomy: Results from the SEARCH Database

    Directory of Open Access Journals (Sweden)

    Daniel M. Moreira

    2013-06-01

    Full Text Available Introduction To analyze the association between serum levels of folate and the risk of biochemical recurrence after radical prostatectomy among men from the Shared Equal Access Regional Cancer Hospital (SEARCH) database. Materials and Methods Retrospective analysis of 135 subjects from the SEARCH database treated between 1991-2009 with available preoperative serum folate levels. Patients' characteristics at the time of surgery were analyzed with rank-sum tests and linear regression. Uni- and multivariable analyses of folate levels (log-transformed) and time to biochemical recurrence were performed with Cox proportional hazards. Results The median preoperative folate level was 11.6 ng/mL (reference = 1.5-20.0 ng/mL). Folate levels were significantly lower among African-American men than Caucasians (P = 0.003). In univariable analysis, higher folate levels were associated with more recent year of surgery (P < 0.001) and lower preoperative PSA (P = 0.003). In univariable analysis, there was a trend towards lower risk of biochemical recurrence among men with high folate levels (HR = 0.61, 95%CI = 0.37-1.03, P = 0.064). After adjustment for patient characteristics and pre- and post-operative clinical and pathological findings, higher serum levels of folate were independently associated with lower risk of biochemical recurrence (HR = 0.42, 95%CI = 0.20-0.89, P = 0.023). Conclusion In a cohort of men undergoing radical prostatectomy at several VAs across the country, higher serum folate levels were associated with lower PSA and lower risk of biochemical failure. While the source of the folate in the serum in this study is unknown (i.e., diet vs. supplement), these findings, if confirmed, suggest a potential role for folic acid supplementation or increased consumption of folate-rich foods to reduce the risk of recurrence.
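
    A minimal sketch of the kind of survival model described above is shown below using the open-source lifelines library; the data are synthetic, the column names are hypothetical, and this is not the SEARCH analysis code.

      # Hedged sketch: Cox proportional hazards for biochemical recurrence vs. log-folate (synthetic data).
      import numpy as np
      import pandas as pd
      from lifelines import CoxPHFitter

      rng = np.random.default_rng(0)
      n = 120
      folate = rng.uniform(4, 20, n)                    # serum folate, ng/mL (hypothetical)
      psa = rng.lognormal(mean=2.0, sigma=0.5, size=n)  # preoperative PSA (hypothetical)
      # Synthetic data-generating process in which higher log-folate lowers the hazard.
      hazard = 0.02 * np.exp(-0.8 * np.log(folate) + 0.05 * psa)
      time_to_event = rng.exponential(1.0 / hazard)
      censor_time = rng.uniform(12, 120, n)

      df = pd.DataFrame({
          "months": np.minimum(time_to_event, censor_time),
          "recurred": (time_to_event <= censor_time).astype(int),
          "log_folate": np.log(folate),
          "preop_psa": psa,
      })

      cph = CoxPHFitter()
      cph.fit(df, duration_col="months", event_col="recurred")
      cph.print_summary()   # in this synthetic setup the hazard ratio for log_folate comes out below 1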

  11. Database Objects vs Files: Evaluation of alternative strategies for managing large remote sensing data

    Science.gov (United States)

    Baru, Chaitan; Nandigam, Viswanath; Krishnan, Sriram

    2010-05-01

    Increasingly, the geoscience user community expects modern IT capabilities to be available in service of their research and education activities, including the ability to easily access and process large remote sensing datasets via online portals such as GEON (www.geongrid.org) and OpenTopography (opentopography.org). However, serving such datasets via online data portals presents a number of challenges. In this talk, we will evaluate the pros and cons of alternative storage strategies for management and processing of such datasets using binary large object implementations (BLOBs) in database systems versus implementation in Hadoop files using the Hadoop Distributed File System (HDFS). The storage and I/O requirements for providing online access to large datasets dictate the need for declustering data across multiple disks, for capacity as well as bandwidth and response time performance. This requires partitioning larger files into a set of smaller files, and is accompanied by the concomitant requirement for managing large numbers of file. Storing these sub-files as blobs in a shared-nothing database implemented across a cluster provides the advantage that all the distributed storage management is done by the DBMS. Furthermore, subsetting and processing routines can be implemented as user-defined functions (UDFs) on these blobs and would run in parallel across the set of nodes in the cluster. On the other hand, there are both storage overheads and constraints, and software licensing dependencies created by such an implementation. Another approach is to store the files in an external filesystem with pointers to them from within database tables. The filesystem may be a regular UNIX filesystem, a parallel filesystem, or HDFS. In the HDFS case, HDFS would provide the file management capability, while the subsetting and processing routines would be implemented as Hadoop programs using the MapReduce model. Hadoop and its related software libraries are freely available
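
    To make the declustering idea above concrete, the small Python sketch below partitions a large raster into tiles and assigns them round-robin across storage nodes; it is an illustration only (array size, tile size, and node count are invented), standing in for the sub-files stored either as database BLOBs or as HDFS blocks.

      # Hypothetical illustration of declustering a large raster across storage nodes.
      import numpy as np

      def decluster(raster, tile_size=1024, num_nodes=4):
          """Split a 2-D array into tiles and assign each tile to a node round-robin.
          Returns {node_id: [(row_offset, col_offset, tile_array), ...]}."""
          assignments = {node: [] for node in range(num_nodes)}
          rows, cols = raster.shape
          tile_index = 0
          for r in range(0, rows, tile_size):
              for c in range(0, cols, tile_size):
                  tile = raster[r:r + tile_size, c:c + tile_size]
                  assignments[tile_index % num_nodes].append((r, c, tile))
                  tile_index += 1
          return assignments

      if __name__ == "__main__":
          dem = np.random.rand(4096, 4096).astype(np.float32)   # stand-in for a large LiDAR-derived raster
          layout = decluster(dem, tile_size=1024, num_nodes=4)
          for node, tiles in layout.items():
              print(f"node {node}: {len(tiles)} tiles")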

  12. Global strategies to reduce the price of antiretroviral medicines: evidence from transactional databases.

    Science.gov (United States)

    Waning, Brenda; Kaplan, Warren; King, Alexis C; Lawrence, Danielle A; Leufkens, Hubert G; Fox, Matthew P

    2009-07-01

    To estimate the impact of global strategies, such as pooled procurement arrangements, third-party price negotiation and differential pricing, on reducing the price of antiretrovirals (ARVs), which currently hinders universal access to HIV/AIDS treatment. We estimated the impact of global strategies to reduce ARV prices using data on 7253 procurement transactions (July 2002-October 2007) from databases hosted by WHO and the Global Fund to Fight AIDS, Tuberculosis and Malaria. For 19 of 24 ARV dosage forms, we detected no association between price and volume purchased. For the other five ARVs, high-volume purchases were 4-21% less expensive than medium- or low-volume purchases. Nine of 13 generic ARVs were priced 6-36% lower when purchased under the Clinton Foundation HIV/AIDS Initiative (CHAI). Fifteen of 18 branded ARVs were priced 23-498% higher for differentially priced purchases compared with non-CHAI generic purchases. However, two branded, differentially priced ARVs were priced 63% and 73% lower, respectively, than generic non-CHAI equivalents. Large purchase volumes did not necessarily result in lower ARV prices. Although current plans for pooled procurement will further increase purchase volumes, savings are uncertain and should be balanced against programmatic costs. Third-party negotiation by CHAI resulted in lower generic ARV prices. Generics were less expensive than differentially priced branded ARVs, except where little generic competition exists. Alternative strategies for reducing ARV prices, such as streamlining financial management systems, improving demand forecasting and removing barriers to generics, should be explored.

  13. Identifying quality improvement intervention publications - A comparison of electronic search strategies

    Directory of Open Access Journals (Sweden)

    Rubenstein Lisa V

    2011-08-01

    Full Text Available Abstract Background The evidence base for quality improvement (QI) interventions is expanding rapidly. The diversity of the initiatives and the inconsistency in labeling these as QI interventions makes it challenging for researchers, policymakers, and QI practitioners to access the literature systematically and to identify relevant publications. Methods We evaluated search strategies developed for MEDLINE (Ovid) and PubMed based on free text words, Medical subject headings (MeSH), QI intervention components, continuous quality improvement (CQI) methods, and combinations of the strategies. Three sets of pertinent QI intervention publications were used for validation. Two independent expert reviewers screened publications for relevance. We compared the yield, recall rate, and precision of the search strategies for the identification of QI publications and for a subset of empirical studies on effects of QI interventions. Results The search yields ranged from 2,221 to 216,167 publications. Mean recall rates for reference publications ranged from 5% to 53% for strategies with yields of 50,000 publications or fewer. The 'best case' strategy, a simple text word search with high face validity ('quality' AND 'improv*' AND 'intervention*') identified 44%, 24%, and 62% of influential intervention articles selected by Agency for Healthcare Research and Quality (AHRQ) experts, a set of exemplar articles provided by members of the Standards for Quality Improvement Reporting Excellence (SQUIRE) group, and a sample from the Cochrane Effective Practice and Organization of Care Group (EPOC) register of studies, respectively. We applied the search strategy to a PubMed search for articles published in 10 pertinent journals in a three-year period which retrieved 183 publications. Among these, 67% were deemed relevant to QI by at least one of two independent raters. Forty percent were classified as empirical studies reporting on a QI intervention. Conclusions The presented

  14. Evaluating random search strategies in three mammals from distinct feeding guilds.

    Science.gov (United States)

    Auger-Méthé, Marie; Derocher, Andrew E; DeMars, Craig A; Plank, Michael J; Codling, Edward A; Lewis, Mark A

    2016-09-01

    Searching allows animals to find food, mates, shelter and other resources essential for survival and reproduction and is thus among the most important activities performed by animals. Theory predicts that animals will use random search strategies in highly variable and unpredictable environments. Two prominent models have been suggested for animals searching in sparse and heterogeneous environments: (i) the Lévy walk and (ii) the composite correlated random walk (CCRW) and its associated area-restricted search behaviour. Until recently, it was difficult to differentiate between the movement patterns of these two strategies. Using a new method that assesses whether movement patterns are consistent with these two strategies and two other common random search strategies, we investigated the movement behaviour of three species inhabiting sparse northern environments: woodland caribou (Rangifer tarandus caribou), barren-ground grizzly bear (Ursus arctos) and polar bear (Ursus maritimus). These three species vary widely in their diets and thus allow us to contrast the movement patterns of animals from different feeding guilds. Our results showed that although more traditional methods would have found evidence for the Lévy walk for some individuals, a comparison of the Lévy walk to CCRWs showed stronger support for the latter. While a CCRW was the best model for most individuals, there was a range of support for its absolute fit. A CCRW was sufficient to explain the movement of nearly half of herbivorous caribou and a quarter of omnivorous grizzly bears, but was insufficient to explain the movement of all carnivorous polar bears. Strong evidence for CCRW movement patterns suggests that many individuals may use a multiphasic movement strategy rather than one-behaviour strategies such as the Lévy walk. The fact that the best model was insufficient to describe the movement paths of many individuals suggests that some animals living in sparse environments may use
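
    For orientation, the two candidate movement models contrasted above are usually characterized by their step-length distributions; generic forms (not the fitted models from this study) are a truncated power law for the Lévy walk and a two-mode exponential mixture for the composite correlated random walk, the latter also carrying correlated turning angles within each mode.

      % Levy walk: heavy-tailed step lengths.
      P(\ell) \propto \ell^{-\mu}, \qquad 1 < \mu \le 3, \quad \ell \ge \ell_{\min}
      % CCRW: mixture of an intensive (short-step) and an extensive (long-step) exponential mode.
      P(\ell) = \gamma\,\lambda_{1} e^{-\lambda_{1}\ell} + (1-\gamma)\,\lambda_{2} e^{-\lambda_{2}\ell},
      \qquad 0 \le \gamma \le 1, \quad \lambda_{1} > \lambda_{2}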

  15. PubMed search strategies for the identification of etiologic associations between hypothalamic-pituitary disorders and other medical conditions.

    Science.gov (United States)

    Guaraldi, Federica; Grottoli, Silvia; Arvat, Emanuela; Mattioli, Stefano; Ghigo, Ezio; Gori, Davide

    2013-12-01

    Biomedical literature has grown enormously in recent decades and has become broadly available through online databases. Ad-hoc search methods, created on the basis of research field and goals, are required to enhance the quality of searching. The aim of this study was to formulate efficient, evidence-based PubMed search strategies to retrieve articles assessing etiologic associations between a condition of interest and hypothalamic-pituitary disorders (HPD). Based on expert knowledge, 17 MeSH (Medical Subject Headings) and 79 free terms related to HPD were identified to search PubMed. Using random samples of abstracts retrieved by each term, we estimated the proportion of articles containing pertinent information and formulated two strings (one more specific, one more sensitive) for the detection of articles focusing on the etiology of HPD. These strings were then applied to retrieve articles identifying possible etiologic associations between HPD and three diseases (malaria, LHON and celiac disease) considered not to be associated with HPD, and to define the number of abstracts needed to read (NNR) to find one potentially pertinent article. We propose two strings: one sensitive string derived from the combination of terms providing the largest literature coverage in the field and one specific string including combined terms retrieving ≥40% of potentially pertinent articles. NNR values were 2.1 and 1.6 for malaria, 3.36 and 2.29 for celiac disease, and 2.8 and 2.2 for LHON, respectively. For the first time, two reliable, readily applicable strings are proposed for the retrieval of medical literature assessing putative etiologic associations between HPD and other medical conditions of interest.
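
    The number needed to read reported above is simply the reciprocal of a string's precision. A minimal sketch of that calculation follows; the counts are made up, chosen only to land near the malaria figure quoted in the abstract.

```python
# Number needed to read (NNR) is the reciprocal of precision: if a search string
# retrieves `pertinent` relevant abstracts out of `retrieved` screened ones,
# NNR = retrieved / pertinent. Counts below are invented for illustration only.
def nnr(pertinent: int, retrieved: int) -> float:
    precision = pertinent / retrieved
    return 1.0 / precision

# e.g. 48 pertinent abstracts among 100 screened -> NNR ~ 2.1,
# in the range reported for the malaria test case in the abstract.
print(round(nnr(48, 100), 1))
```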

  16. Sensitivity and predictive value of 15 PubMed search strategies to answer clinical questions rated against full systematic reviews.

    Science.gov (United States)

    Agoritsas, Thomas; Merglen, Arnaud; Courvoisier, Delphine S; Combescure, Christophe; Garin, Nicolas; Perrier, Arnaud; Perneger, Thomas V

    2012-06-12

    Clinicians perform searches in PubMed daily, but retrieving relevant studies is challenging due to the rapid expansion of medical knowledge. Little is known about the performance of search strategies when they are applied to answer specific clinical questions. To compare the performance of 15 PubMed search strategies in retrieving relevant clinical trials on therapeutic interventions, we used Cochrane systematic reviews to identify relevant trials for 30 clinical questions. Search terms were extracted from the abstract using a predefined procedure based on the population, interventions, comparison, outcomes (PICO) framework and combined into queries. We tested 15 search strategies that varied in their query (PIC or PICO), use of PubMed's Clinical Queries therapeutic filters (broad or narrow), search limits, and PubMed links to related articles. We assessed sensitivity (recall) and positive predictive value (precision) of each strategy on the first 2 PubMed pages (40 articles) and on the complete search output. The performance of the search strategies varied widely according to the clinical question. Unfiltered searches and those using the broad filter of Clinical Queries produced large outputs and retrieved few relevant articles within the first 2 pages, resulting in a median sensitivity of only 10%-25%. In contrast, all searches using the narrow filter performed significantly better, with a median sensitivity of about 50% within the first 2 PubMed pages. These results can help clinicians apply effective strategies to answer their questions at the point of care.

  17. Uploading, Searching and Visualizing of Paleomagnetic and Rock Magnetic Data in the Online MagIC Database

    Science.gov (United States)

    Minnett, R.; Koppers, A.; Tauxe, L.; Constable, C.; Donadini, F.

    2007-12-01

    The Magnetics Information Consortium (MagIC) is commissioned to implement and maintain an online portal to a relational database populated by both rock and paleomagnetic data. The goal of MagIC is to archive all available measurements and derived properties from paleomagnetic studies of directions and intensities, and for rock magnetic experiments (hysteresis, remanence, susceptibility, anisotropy). MagIC is hosted under EarthRef.org at http://earthref.org/MAGIC/ and will soon implement two search nodes, one for paleomagnetism and one for rock magnetism. Currently the PMAG node is operational. Both nodes provide query building based on location, reference, methods applied, material type and geological age, as well as a visual map interface to browse and select locations. Users can also browse the database by data type or by data compilation to view all contributions associated with well known earlier collections like PINT, GMPDB or PSVRL. The query result set is displayed in a digestible tabular format allowing the user to descend from locations to sites, samples, specimens and measurements. At each stage, the result set can be saved and, where appropriate, can be visualized by plotting global location maps, equal area, XY, age, and depth plots, or typical Zijderveld, hysteresis, magnetization and remanence diagrams. User contributions to the MagIC database are critical to achieving a useful research tool. We have developed a standard data and metadata template (version 2.3) that can be used to format and upload all data at the time of publication in Earth Science journals. Software tools are provided to facilitate population of these templates within Microsoft Excel. These tools allow for the import/export of text files and provide advanced functionality to manage and edit the data, and to perform various internal checks to maintain data integrity and prepare for uploading. The MagIC Contribution Wizard at http://earthref.org/MAGIC/upload.htm executes the upload

  18. Search strategy in a complex and dynamic environment (the Indian Ocean case)

    Science.gov (United States)

    Loire, Sophie; Arbabi, Hassan; Clary, Patrick; Ivic, Stefan; Crnjaric-Zic, Nelida; Macesic, Senka; Crnkovic, Bojan; Mezic, Igor; UCSB Team; Rijeka Team

    2014-11-01

    The disappearance of Malaysia Airlines Flight 370 (MH370) in the early morning hours of 8 March 2014 has exposed the disconcerting lack of efficient methods for identifying where to look and how to look for missing objects in a complex and dynamic environment. The search area for plane debris is a remote part of the Indian Ocean. Searches, of the lawnmower type, have been unsuccessful so far. Lagrangian kinematics of mesoscale features are visible in hypergraph maps of the Indian Ocean surface currents. Without a precise knowledge of the crash site, these maps give an estimate of the time evolution of any initial distribution of plane debris and permit the design of a search strategy. The Dynamic Spectral Multiscale Coverage search algorithm is modified to search for a spatial distribution of targets that evolves with time following the dynamics of ocean surface currents. Trajectories are generated for multiple search agents such that their spatial coverage converges to the target distribution. Central to this DSMC algorithm is a metric for ergodicity.
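
    The sketch below is not the Dynamic Spectral Multiscale Coverage algorithm itself, only a toy greedy stand-in for the same idea: an agent allocates its time so that coverage tracks a target (debris) distribution on a grid. All quantities are invented for illustration.

```python
# A toy stand-in (not the DSMC algorithm) for coverage-based search:
# an agent on a grid repeatedly moves to the neighbouring cell where the target
# probability most exceeds the coverage it has already spent there.
import numpy as np

rng = np.random.default_rng(1)
N = 20
target = rng.random((N, N))
target /= target.sum()          # stand-in for a debris probability map
coverage = np.zeros((N, N))     # time the agent has spent in each cell

pos = (0, 0)
for step in range(500):
    i, j = pos
    coverage[i, j] += 1.0 / 500
    # candidate moves: stay put or step to a 4-neighbour
    candidates = [(i, j)] + [(i + di, j + dj)
                             for di, dj in [(-1, 0), (1, 0), (0, -1), (0, 1)]
                             if 0 <= i + di < N and 0 <= j + dj < N]
    # pick the candidate with the largest coverage deficit
    pos = max(candidates, key=lambda c: target[c] - coverage[c])

print("fraction of target mass visited:", round(float(target[coverage > 0].sum()), 2))
```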

  19. Students are Confident Using Federated Search Tools as much as Single Databases. A Review of: Armstrong, A. (2009). Student perceptions of federated searching vs. single database searching. Reference Services Review, 37(3), 291-303. doi:10.1108/00907320910982785

    Directory of Open Access Journals (Sweden)

    Deena Yanofsky

    2011-09-01

    Full Text Available Objective – To measure students’ perceptions of the ease-of-use and efficacy of a federated search tool versus a single multidisciplinary database. Design – An evaluation worksheet, employing a combination of quantitative and qualitative questions. Setting – A required, first-year English composition course taught at the University of Illinois at Chicago (UIC). Subjects – Thirty-one undergraduate students completed and submitted the worksheet. Methods – Students attended two library instruction sessions. The first session introduced participants to basic Boolean searching (using AND only), selecting appropriate keywords and searching for books in the library catalogue. In the second library session, students were handed an evaluation worksheet and, with no introduction to the process of searching article databases, were asked to find relevant articles on a research topic of their own choosing using both a federated search tool and a single multidisciplinary database. The evaluation worksheet was divided into four sections: step-by-step instructions for accessing the single multidisciplinary database and the federated search tool; space to record search strings in both resources; space to record the titles of up to five relevant articles; and a series of quantitative and qualitative questions regarding ease-of-use, relevancy of results, overall preference (if any) between the two resources, likeliness of future use and other preferred research tools. Half of the participants received a worksheet with instructions to search the federated search tool before the single database; the order was reversed for the other half of the students. The evaluation worksheet was designed to be completed in one hour. Participant responses to qualitative questions were analyzed, codified and grouped into thematic categories. If a student mentioned more than one factor in responding to a question, their response was recorded in multiple categories. Main Results

  20. Accelerating Smith-Waterman Alignment for Protein Database Search Using Frequency Distance Filtration Scheme Based on CPU-GPU Collaborative System.

    Science.gov (United States)

    Liu, Yu; Hong, Yang; Lin, Chun-Yuan; Hung, Che-Lun

    2015-01-01

    The Smith-Waterman (SW) algorithm has been widely utilized for searching biological sequence databases in bioinformatics. Recently, several works have adopted the graphic card with Graphic Processing Units (GPUs) and their associated CUDA model to enhance the performance of SW computations. However, these works mainly focused on the protein database search by using the intertask parallelization technique, and only using the GPU capability to do the SW computations one by one. Hence, in this paper, we will propose an efficient SW alignment method, called CUDA-SWfr, for the protein database search by using the intratask parallelization technique based on a CPU-GPU collaborative system. Before doing the SW computations on GPU, a procedure is applied on CPU by using the frequency distance filtration scheme (FDFS) to eliminate the unnecessary alignments. The experimental results indicate that CUDA-SWfr runs 9.6 times and 96 times faster than the CPU-based SW method without and with FDFS, respectively.
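
    For reference, a minimal CPU-only version of the Smith-Waterman recurrence that CUDA-SWfr accelerates is sketched below; it omits the GPU parallelization and the FDFS pre-filtering described in the abstract, and uses a simple linear gap penalty.

```python
# A minimal, CPU-only Smith-Waterman local alignment score (no GPU, no FDFS
# filtration) to illustrate the recurrence the paper accelerates.
def smith_waterman_score(a: str, b: str, match=2, mismatch=-1, gap=-2) -> int:
    rows, cols = len(a) + 1, len(b) + 1
    H = [[0] * cols for _ in range(rows)]
    best = 0
    for i in range(1, rows):
        for j in range(1, cols):
            diag = H[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            H[i][j] = max(0, diag, H[i - 1][j] + gap, H[i][j - 1] + gap)
            best = max(best, H[i][j])
    return best

print(smith_waterman_score("HEAGAWGHEE", "PAWHEAE"))
```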

  1. Native Health Research Database

    Science.gov (United States)

    Web interface for the Native Health Database (Indian Health Board), offering basic and advanced search of Native health research records, guidance on searching the database, and a tutorial video.

  2. Use of recommended search strategies in systematic reviews and the impact of librarian involvement: a cross-sectional survey of recent authors.

    Science.gov (United States)

    Koffel, Jonathan B

    2015-01-01

    Previous research looking at published systematic reviews has shown that their search strategies are often suboptimal and that librarian involvement, though recommended, is low. Confidence in the results, however, is limited due to poor reporting of search strategies in the published articles. This study aimed to more accurately measure the use of recommended search methods in systematic reviews, the levels of librarian involvement, and whether librarian involvement predicts the use of recommended methods. A survey was sent to all authors of English-language systematic reviews indexed in the Database of Abstracts of Reviews of Effects (DARE) from January 2012 through January 2014. The survey asked about their use of search methods recommended by the Institute of Medicine, Cochrane Collaboration, and the Agency for Healthcare Research and Quality and if and how a librarian was involved in the systematic review. Rates of use of recommended methods and librarian involvement were summarized. The impact of librarian involvement on use of recommended methods was examined using a multivariate logistic regression. 1560 authors completed the survey. Use of recommended search methods ranged widely, from 98% for use of keywords to 9% for registration in PROSPERO, and was generally higher than in previous studies. 51% of studies involved a librarian, but only 64% of these acknowledged their assistance. Librarian involvement was significantly associated with the use of 65% of recommended search methods after controlling for other potential predictors. Odds ratios ranged from 1.36 (95% CI 1.06 to 1.75) for including multiple languages to 3.07 (95% CI 2.06 to 4.58) for using controlled vocabulary. Use of recommended search strategies is higher than previously reported, but many methods are still under-utilized. Librarian involvement predicts the use of most methods, but their involvement is under-reported within the published article.

  3. Improving the basic skills of teaching mathematics through learning with search-solve-create-share strategy

    Science.gov (United States)

    Rahayu, D. V.; Kusumah, Y. S.; Darhim

    2018-05-01

    This study examined the improvement of prospective teachers’ basic skills of teaching mathematics through the search-solve-create-share learning strategy, overall and by Mathematical Prior Knowledge (MPK), as well as the interaction of both. A quasi-experiment with a non-equivalent control group design involved 67 students in the mathematics program of STKIP Garut. The instruments used in this study included a pre-test and a post-test. The results showed that: (1) the improvement and achievement in the basic skills of teaching mathematics of prospective teachers who received the search-solve-create-share strategy are better than those of prospective teachers who received conventional learning, both overall and by MPK; (2) there is no interaction between the learning strategy used and MPK on improving and achieving basic skills of teaching mathematics.

  4. Martian methane plume models for defining Mars rover methane source search strategies

    Science.gov (United States)

    Nicol, Christopher; Ellery, Alex; Lynch, Brian; Cloutis, Ed

    2018-07-01

    The detection of atmospheric methane on Mars implies an active methane source. This introduces the possibility of a biotic source with the implied need to determine whether the methane is indeed biotic in nature or geologically generated. There is a clear need for robotic algorithms which are capable of manoeuvring a rover through a methane plume on Mars to locate its source. We explore aspects of Mars methane plume modelling to reveal complex dynamics characterized by advection and diffusion. A statistical analysis of the plume model has been performed and compared to analyses of terrestrial plume models. Finally, we consider a robotic search strategy to find a methane plume source. We find that gradient-based techniques are ineffective, but that more sophisticated model-based search strategies are unlikely to be available in near-term rover missions.

  5. Evolution of optimal Lévy-flight strategies in human mental searches

    Science.gov (United States)

    Radicchi, Filippo; Baronchelli, Andrea

    2012-06-01

    Recent analysis of empirical data [Radicchi, Baronchelli, and Amaral, PLoS ONE 7, e29910 (2012), doi:10.1371/journal.pone.0029910] showed that humans adopt Lévy-flight strategies when exploring the bid space in online auctions. A game theoretical model proved that the observed Lévy exponents are nearly optimal, being close to the exponent value that guarantees the maximal economical return to players. Here, we rationalize these findings by adopting an evolutionary perspective. We show that a simple evolutionary process is able to account for the empirical measurements with the only assumption that the reproductive fitness of the players is proportional to their search ability. Contrary to previous modeling, our approach describes the emergence of the observed exponent without resorting to any strong assumptions on the initial searching strategies. Our results generalize earlier research, and open novel questions in cognitive, behavioral, and evolutionary sciences.

  6. Emergence of an optimal search strategy from a simple random walk.

    Science.gov (United States)

    Sakiyama, Tomoko; Gunji, Yukio-Pegio

    2013-09-06

    In reports addressing animal foraging strategies, it has been stated that Lévy-like algorithms represent an optimal search strategy in an unknown environment, because of their super-diffusion properties and power-law-distributed step lengths. Here, starting with a simple random walk algorithm, which offers the agent a randomly determined direction at each time step with a fixed move length, we investigated how flexible exploration is achieved if an agent alters its randomly determined next step forward and the rule that controls its random movement based on its own directional moving experiences. We showed that our algorithm led to an effective food-searching performance compared with a simple random walk algorithm and exhibited super-diffusion properties, despite the uniform step lengths. Moreover, our algorithm exhibited a power-law distribution independent of uniform step lengths.
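
    A quick way to see whether a walk is diffusive or super-diffusive, as discussed above, is to fit the scaling exponent of the mean squared displacement. The sketch below (not the authors' model) does this for a plain fixed-step random walk, which should return an exponent near 1.

```python
# Illustrative check (not the authors' algorithm): estimate the diffusion exponent
# of a fixed-step-length 2D random walk from its mean squared displacement.
# Normal diffusion gives MSD ~ t^1; super-diffusion would give an exponent > 1.
import numpy as np

rng = np.random.default_rng(2)
walkers, steps, step_len = 500, 1000, 1.0

angles = rng.uniform(0, 2 * np.pi, size=(walkers, steps))
dx = step_len * np.cos(angles)
dy = step_len * np.sin(angles)
x, y = dx.cumsum(axis=1), dy.cumsum(axis=1)

t = np.arange(1, steps + 1)
msd = (x ** 2 + y ** 2).mean(axis=0)

# slope of log(MSD) vs log(t) estimates the diffusion exponent
exponent = np.polyfit(np.log(t), np.log(msd), 1)[0]
print(f"estimated exponent: {exponent:.2f} (about 1 for a simple random walk)")
```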

  7. Testing the effectiveness of simplified search strategies for updating systematic reviews.

    Science.gov (United States)

    Rice, Maureen; Ali, Muhammad Usman; Fitzpatrick-Lewis, Donna; Kenny, Meghan; Raina, Parminder; Sherifali, Diana

    2017-08-01

    The objective of the study was to test the overall effectiveness of a simplified search strategy (SSS) for updating systematic reviews. We identified nine systematic reviews undertaken by our research group for which both comprehensive and SSS updates were performed. Three relevant performance measures were estimated, that is, sensitivity, precision, and number needed to read (NNR). The update reference searches for all nine included systematic reviews identified a total of 55,099 citations that were screened resulting in final inclusion of 163 randomized controlled trials. As compared with reference search, the SSS resulted in 8,239 hits and had a median sensitivity of 83.3%, while precision and NNR were 4.5 times better. During analysis, we found that the SSS performed better for clinically focused topics, with a median sensitivity of 100% and precision and NNR 6 times better than for the reference searches. For broader topics, the sensitivity of the SSS was 80% while precision and NNR were 5.4 times better compared with reference search. SSS performed well for clinically focused topics and, with a median sensitivity of 100%, could be a viable alternative to a conventional comprehensive search strategy for updating this type of systematic reviews particularly considering the budget constraints and the volume of new literature being published. For broader topics, 80% sensitivity is likely to be considered too low for a systematic review update in most cases, although it might be acceptable if updating a scoping or rapid review. Copyright © 2017 Elsevier Inc. All rights reserved.

  8. Looking for trouble: a description of oculomotor search strategies during live CCTV operation.

    Science.gov (United States)

    Stainer, Matthew J; Scott-Brown, Kenneth C; Tatler, Benjamin W

    2013-01-01

    Recent research has begun to address how CCTV operators in the modern control room attempt to search for crime (e.g., Howard et al., 2011). However, an often-neglected element of the CCTV task is that the operators have at their disposal a multiplexed wall of scenes, and a single spot-monitor on which they can select any of these feeds for inspection. Here we examined how 2 trained CCTV operators used these sources of information to search from crime during a morning, afternoon, and night-time shift. We found that they spent surprisingly little time viewing the multiplex wall, instead preferentially spending most of their time searching on the single-scene spot-monitor. Such search must require a sophisticated understanding of the surveilled environment, as the operators must make their selection of which screen to view based on their prediction of where crime is likely to occur. This seems to be reflected in the difference in the screens that they selected to view at different times of the day. For example, night-clubs received close monitoring at night, but were seldom viewed in mid-morning. Such narrowing of search based on a contextual understanding of an environment is not a new idea (e.g., Torralba et al., 2006), and appears to contribute to operator's selection strategy. This research prompts new questions regarding the nature of representation that operators have of their environment, and how they might develop expectation-based search strategies to countermand the demands of the large influx of visual information. Future research should ensure not to neglect examination of operator behavior "in the wild" (Hutchins, 1995a), as such insights are difficult to gain from laboratory based paradigms alone.

  9. Looking for trouble: A description of oculomotor search strategies during live CCTV operation.

    Directory of Open Access Journals (Sweden)

    Matthew James Stainer

    2013-09-01

    Full Text Available Recent research has begun to address how CCTV operators in the modern control room attempt to search for crime (e.g., Howard et al., 2011. However, an often-neglected element of the CCTV task is that the operators have at their disposal a multiplexed wall of scenes, and a single spot-monitor on which they can select any of these feeds for inspection. Here we examined how 2 trained CCTV operators used these sources of information to search from crime during a morning, afternoon and night-time shift. We found that they spent surprisingly little time viewing the multiplex wall, instead preferentially spending most of their time searching on the single-scene spot-monitor. Such search must require a sophisticated understanding of the surveilled environment, as the operators must make their selection of which screen to view based on their prediction of where crime is likely to occur. This seems to be reflected in the difference in the screens that they selected to view at different times of the day. For example, night-clubs received close monitoring at night, but were seldom viewed in mid-morning. Such narrowing of search based on a contextual understanding of an environment is not a new idea (e.g., Torralba et al., 2006, and appears to contribute to operator’s selection strategy. This research prompts new questions regarding the nature of representation that operators have of their environment, and how they might develop expectation-based search strategies to countermand the demands of the large influx of visual information. Future research should ensure not to neglect examination of operator behavior ‘in the wild’ (Hutchins, 1995a, as such insights are difficult to gain from laboratory based paradigms alone.

  10. Search Strategies Used by Older Adults in a Virtual Reality Place Learning Task.

    Science.gov (United States)

    Davis, Rebecca L; Weisbeck, Catherine

    2015-06-01

    Older adults often have problems finding their way in novel environments such as senior living residences and hospitals. The purpose of this study was to examine the types of self-reported search strategies and cues that older adults use to find their way in a virtual maze. Healthy, independently living older adults (n = 129) aged 55-96 were tested in a virtual maze task over a period of 3 days in which they had to repeatedly find their way to a specified goal. They were interviewed about their strategies on days 1 and 3. Content analysis was used to identify the strategies and cues described by the participants in order to find their way. Strategies and cues used were compared among groups. The participants reported the use of multiple spatial and non-spatial strategies, and some of the strategies differed among age groups and over time. The oldest age group was less likely to use strategies such as triangulation and distance strategies. All participants used visual landmarks to find their way, but the use of geometric cues (corners) was used less by the older participants. These findings add to the theoretical understanding of how older adults find their way in complex environments. The understanding of how wayfinding changes with age is essential in order to design more supportive environments. © The Author 2015. Published by Oxford University Press on behalf of The Gerontological Society of America. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  11. An Improved Harmony Search Based on Teaching-Learning Strategy for Unconstrained Optimization Problems

    Directory of Open Access Journals (Sweden)

    Shouheng Tuo

    2013-01-01

    Full Text Available The harmony search (HS) algorithm is an emerging population-based metaheuristic algorithm, which is inspired by the music improvisation process. The HS method has been developed rapidly and applied widely during the past decade. In this paper, an improved global harmony search algorithm, named harmony search based on teaching-learning (HSTL), is presented for high-dimension complex optimization problems. In the HSTL algorithm, four strategies (harmony memory consideration, teaching-learning strategy, local pitch adjusting, and random mutation) are employed to maintain a proper balance between convergence and population diversity, and a dynamic strategy is adopted to change the parameters. The proposed HSTL algorithm is investigated and compared with three other state-of-the-art HS optimization algorithms. Furthermore, to demonstrate robustness and convergence, the success rate and convergence analysis are also studied. The experimental results on 31 complex benchmark functions demonstrate that the HSTL method has strong convergence and robustness and a better balance between space exploration and local exploitation on high-dimension complex optimization problems.
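
    For readers unfamiliar with the base algorithm, a bare-bones harmony search (without the teaching-learning and dynamic-parameter extensions of HSTL) is sketched below on a simple test function; all parameter values are illustrative only.

```python
# A bare-bones harmony search minimizing the sphere function, illustrating the
# memory-consideration and pitch-adjustment steps the abstract builds on.
import numpy as np

rng = np.random.default_rng(3)

def sphere(x):
    return float(np.sum(x ** 2))

dim, hms, hmcr, par, bw, iters = 5, 20, 0.9, 0.3, 0.05, 2000
low, high = -5.0, 5.0

memory = rng.uniform(low, high, size=(hms, dim))
fitness = np.array([sphere(x) for x in memory])

for _ in range(iters):
    new = np.empty(dim)
    for d in range(dim):
        if rng.random() < hmcr:                      # harmony memory consideration
            new[d] = memory[rng.integers(hms), d]
            if rng.random() < par:                   # pitch adjustment
                new[d] += bw * rng.uniform(-1, 1)
        else:                                        # random selection
            new[d] = rng.uniform(low, high)
    new = np.clip(new, low, high)
    f = sphere(new)
    worst = int(np.argmax(fitness))
    if f < fitness[worst]:                           # replace the worst harmony
        memory[worst], fitness[worst] = new, f

print("best value found:", round(float(fitness.min()), 6))
```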

  12. Librarian co-authors correlated with higher quality reported search strategies in general internal medicine systematic reviews.

    Science.gov (United States)

    Rethlefsen, Melissa L; Farrell, Ann M; Osterhaus Trzasko, Leah C; Brigham, Tara J

    2015-06-01

    To determine whether librarian and information specialist authorship was associated with better reported systematic review (SR) search quality, SRs from high-impact general internal medicine journals were reviewed for search quality characteristics and reporting quality by independent reviewers using three instruments, including a checklist of Institute of Medicine Recommended Standards for the Search Process and a scored modification of the Peer Review of Electronic Search Strategies instrument. The level of librarian and information specialist participation was significantly associated with search reproducibility from reported search strategies (Χ(2) = 23.5). Librarian co-authored SRs had significantly higher odds of meeting 8 of 13 analyzed search standards than those with no librarian participation and six more than those with mentioned librarian participation. One-way ANOVA showed that differences in total search quality scores between all three groups were statistically significant (F2,267 = 10.1233). SRs with librarian or information specialist co-authors are correlated with significantly higher quality reported search strategies. To minimize bias in SRs, authors and editors could encourage librarian engagement in SRs, including authorship, as a potential way to help improve documentation of the search strategy. Copyright © 2015 Elsevier Inc. All rights reserved.

  13. Automatic sorting of toxicological information into the IUCLID (International Uniform Chemical Information Database) endpoint-categories making use of the semantic search engine Go3R.

    Science.gov (United States)

    Sauer, Ursula G; Wächter, Thomas; Hareng, Lars; Wareing, Britta; Langsch, Angelika; Zschunke, Matthias; Alvers, Michael R; Landsiedel, Robert

    2014-06-01

    The knowledge-based search engine Go3R, www.Go3R.org, has been developed to assist scientists from industry and regulatory authorities in collecting comprehensive toxicological information with a special focus on identifying available alternatives to animal testing. The semantic search paradigm of Go3R makes use of expert knowledge on 3Rs methods and regulatory toxicology, laid down in the ontology, a network of concepts, terms, and synonyms, to recognize the contents of documents. Search results are automatically sorted into a dynamic table of contents presented alongside the list of documents retrieved. This table of contents allows the user to quickly filter the set of documents by topics of interest. Documents containing hazard information are automatically assigned to a user interface following the endpoint-specific IUCLID5 categorization scheme required, e.g. for REACH registration dossiers. For this purpose, complex endpoint-specific search queries were compiled and integrated into the search engine (based upon a gold standard of 310 references that had been assigned manually to the different endpoint categories). Go3R sorts 87% of the references concordantly into the respective IUCLID5 categories. Currently, Go3R searches in the 22 million documents available in the PubMed and TOXNET databases. However, it can be customized to search in other databases including in-house databanks. Copyright © 2013 Elsevier Ltd. All rights reserved.

  14. Antibiotic distribution channels in Thailand: results of key-informant interviews, reviews of drug regulations and database searches.

    Science.gov (United States)

    Sommanustweechai, Angkana; Chanvatik, Sunicha; Sermsinsiri, Varavoot; Sivilaikul, Somsajee; Patcharanarumol, Walaiporn; Yeung, Shunmay; Tangcharoensathien, Viroj

    2018-02-01

    To analyse how antibiotics are imported, manufactured, distributed and regulated in Thailand, we gathered information on antibiotic distribution in Thailand through in-depth interviews with 43 key informants from farms, health facilities, pharmaceutical and animal feed industries, private pharmacies and regulators, and through database and literature searches. In 2016-2017, licensed antibiotic distribution in Thailand involved over 700 importers and about 24 000 distributors - e.g. retail pharmacies and wholesalers. Thailand imports antibiotics and active pharmaceutical ingredients. There is no system for monitoring the distribution of active ingredients, some of which are used directly on farms, without being processed. Most antibiotics can be bought from pharmacies, for home or farm use, without a prescription. Although the 1987 Drug Act classified most antibiotics as "dangerous drugs", it only classified a few of them as prescription-only medicines and placed no restrictions on the quantities of antibiotics that could be sold to any individual. Pharmacists working in pharmacies are covered by some of the Act's regulations, but the quality of their dispensing and prescribing appears to be largely reliant on their competences. In Thailand, most antibiotics are easily and widely available from retail pharmacies, without a prescription. If the inappropriate use of active pharmaceutical ingredients and antibiotics is to be reduced, we need to reclassify and restrict access to certain antibiotics and to develop systems to audit the dispensing of antibiotics in the retail sector and track the movements of active ingredients.

  15. First postoperative PSA is associated with outcomes in patients with node positive prostate cancer: Results from the SEARCH database.

    Science.gov (United States)

    McDonald, Michelle L; Howard, Lauren E; Aronson, William J; Terris, Martha K; Cooperberg, Matthew R; Amling, Christopher L; Freedland, Stephen J; Kane, Christopher J

    2018-05-01

    To analyze factors associated with metastases, prostate cancer-specific mortality, and all-cause mortality in pN1 patients. We analyzed 3,642 radical prostatectomy patients within the Shared Equal Access Regional Cancer Hospital (SEARCH) database. Pathologic Gleason grade, number of lymph nodes (LN) removed, and first postoperative prostate-specific antigen (PSA; <0.2 vs. ≥0.2 ng/ml) were evaluated. Of 3,642 patients, 124 (3.4%) had pN1. There were 71 (60%) patients with 1 positive LN, 32 (27%) with 2 positive LNs, and 15 (13%) with ≥3. Among men with pN1, a first postoperative PSA ≥0.2 ng/ml (P = 0.005) was associated with metastases, and first postoperative PSA ≥0.2 ng/ml remained associated with metastasis on multivariable analysis (P = 0.046). Log-rank analysis revealed a more favorable metastases-free survival in patients with a first postoperative PSA <0.2 ng/ml; patients with a first postoperative PSA ≥0.2 ng/ml were more likely to develop metastases. First postoperative PSA may be useful in identifying pN1 patients who harbor distant disease and aid in secondary treatment decisions. Copyright © 2018 Elsevier Inc. All rights reserved.

  16. Optimal Control Strategy Search Using a Simplest 3-D PWR Xenon Oscillation Simulator

    International Nuclear Information System (INIS)

    Yoichiro, Shimazu

    2004-01-01

    Power spatial oscillations due to transient xenon spatial distributions, known as xenon oscillations, are well known in large PWRs. When the reactor size becomes larger than in current designs, even radial oscillations can become divergent. Even if the radial oscillation is convergent, when a control rod malfunction occurs it is necessary to suppress the oscillation in as short a time as possible. In such cases, an optimal control strategy is required. Generally speaking, optimality searches based on modern control theory require a lot of calculation for the evaluation of state variables. In the case of control rod malfunctions the xenon oscillation could be three dimensional, and in such cases direct core calculations would be inevitable. From this point of view a very simple model, a four-point reactor model, has been developed and verified. In this paper, an example of a procedure and the results for an optimal control strategy search are presented. It is shown that there is only one optimal strategy within a half cycle of the oscillation with fixed control strength. It is also shown that a 3-D xenon oscillation introduced by a control rod malfunction cannot be controlled by only one control step as can be done for axial oscillations. These might be quite strong limitations for the operators. Thus it is recommended that a strategy generator, which is quick in analyzing and easy to use, be installed in a monitoring system or operator guiding system. (author)

  17. Validation of a search strategy to identify nutrition trials in PubMed using the relative recall method.

    Science.gov (United States)

    Durão, Solange; Kredo, Tamara; Volmink, Jimmy

    2015-06-01

    To develop, assess, and maximize the sensitivity of a search strategy to identify diet and nutrition trials in PubMed using relative recall. We developed a search strategy to identify diet and nutrition trials in PubMed. We then constructed a gold standard reference set to validate the identified trials using the relative recall method. Relative recall was calculated by dividing the number of references from the gold standard our search strategy identified by the total number of references in the gold standard. Our gold standard comprised 298 trials, derived from 16 included systematic reviews. The initial search strategy identified 242 of 298 references, with a relative recall of 81.2% [95% confidence interval (CI): 76.3%, 85.5%]. We analyzed titles and abstracts of the 56 missed references for possible additional terms. We then modified the search strategy accordingly. The relative recall of the final search strategy was 88.6% (95% CI: 84.4%, 91.9%). We developed a search strategy to identify diet and nutrition trials in PubMed with a high relative recall (sensitivity). This could be useful for establishing a nutrition trials register to support the conduct of future research, including systematic reviews. Copyright © 2015 The Authors. Published by Elsevier Inc. All rights reserved.
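
    Relative recall, as used above, is the fraction of gold-standard references a strategy retrieves. A minimal sketch of the calculation follows; it uses a normal-approximation confidence interval, so the bounds differ slightly from the published (likely exact) values.

```python
# Relative recall = (gold-standard references retrieved) / (total gold-standard
# references), here with a simple normal-approximation 95% CI; the published
# figures may use an exact binomial interval and can differ slightly.
from math import sqrt

def relative_recall(retrieved: int, gold_total: int, z: float = 1.96):
    p = retrieved / gold_total
    half = z * sqrt(p * (1 - p) / gold_total)
    return p, max(0.0, p - half), min(1.0, p + half)

# figures quoted in the abstract for the initial strategy: 242 of 298 references
p, lo, hi = relative_recall(242, 298)
print(f"relative recall {p:.1%} (95% CI {lo:.1%} to {hi:.1%})")
```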

  18. Preference vs. Authority: A Comparison of Student Searching in a Subject-Specific Indexing and Abstracting Database and a Customized Discovery Layer

    Science.gov (United States)

    Dahlen, Sarah P. C.; Hanson, Kathlene

    2017-01-01

    Discovery layers provide a simplified interface for searching library resources. Libraries with limited finances make decisions about retaining indexing and abstracting databases when similar information is available in discovery layers. These decisions should be informed by student success at finding quality information as well as satisfaction…

  19. The Impact of Competencies, Information Search, and Competitive Strategy on the Export Performance

    Directory of Open Access Journals (Sweden)

    Lena Elitan

    2011-04-01

    Full Text Available This confirmatory study is aimed at analyzing the impact of relationships, information search, and competencies on competitive strategy and SME performance in Indonesia. The study used a sample of 100 SMEs obtained through mailed questionnaires. The results show the following: First, the perception of information, information sources, and export competence has no significant effect on competitive strategy; however, the findings for SMEs in Indonesia indicate that export-related information has a negative effect on competitive strategy. Second, export competencies have an enormous influence on the company’s capacity and ability to use information appropriately, whether the company must deal with challenges or is eager to take advantage of opportunities to increase growth and profitability. Third, competitive strategy does not directly affect export performance; the relationship is moderated by environmental uncertainty, indicating that the influence of competitive strategy would be greater in an uncertain business environment. An uncertain business environment will encourage companies to explore competitive strategy to improve their performance.

  1. The efficiency of different search strategies in estimating parsimony jackknife, bootstrap, and Bremer support

    Directory of Open Access Journals (Sweden)

    Müller Kai F

    2005-10-01

    Full Text Available Abstract Background For parsimony analyses, the most common way to estimate confidence is by resampling plans (nonparametric bootstrap, jackknife) and Bremer support (Decay indices). The recent literature reveals that parameter settings that are quite commonly employed are not those that are recommended by theoretical considerations and by previous empirical studies. The optimal search strategy to be applied during resampling was previously addressed solely via standard search strategies available in PAUP*. The question of a compromise between search extensiveness and improved support accuracy for Bremer support received even less attention. A set of experiments was conducted on different datasets to find an empirical cut-off point at which increased search extensiveness does not significantly change Bremer support and jackknife or bootstrap proportions any more. Results For the number of replicates needed for accurate estimates of support in resampling plans, a diagram is provided that helps to address the question whether apparently different support values really differ significantly. It is shown that the use of random addition cycles and parsimony ratchet iterations during bootstrapping does not translate into higher support, nor does any extension of the search extensiveness beyond the rather moderate effort of TBR (tree bisection and reconnection) branch swapping plus saving one tree per replicate. Instead, in case of very large matrices, saving more than one shortest tree per iteration and using a strict consensus tree of these yields decreased support compared to saving only one tree. This can be interpreted as a small risk of overestimating support but should be more than compensated by other factors that counteract an enhanced type I error. With regard to Bremer support, a rule of thumb can be derived stating that not much is gained relative to the surplus computational effort when searches are extended beyond 20 ratchet iterations per

  2. Evidential significance of automotive paint trace evidence using a pattern recognition based infrared library search engine for the Paint Data Query Forensic Database.

    Science.gov (United States)

    Lavine, Barry K; White, Collin G; Allen, Matthew D; Fasasi, Ayuba; Weakley, Andrew

    2016-10-01

    A prototype library search engine has been further developed to search the infrared spectral libraries of the paint data query database to identify the line and model of a vehicle from the clear coat, surfacer-primer, and e-coat layers of an intact paint chip. For this study, search prefilters were developed from 1181 automotive paint systems spanning 3 manufacturers: General Motors, Chrysler, and Ford. The best match between each unknown and the spectra in the hit list generated by the search prefilters was identified using a cross-correlation library search algorithm that performed both a forward and backward search. In the forward search, spectra were divided into intervals and further subdivided into windows (which corresponds to the time lag for the comparison) within those intervals. The top five hits identified in each search window were compiled; a histogram was computed that summarized the frequency of occurrence for each library sample, with the IR spectra most similar to the unknown flagged. The backward search computed the frequency and occurrence of each line and model without regard to the identity of the individual spectra. Only those lines and models with a frequency of occurrence greater than or equal to 20% were included in the final hit list. If there was agreement between the forward and backward search results, the specific line and model common to both hit lists was always the correct assignment. Samples assigned to the same line and model by both searches are always well represented in the library and correlate well on an individual basis to specific library samples. For these samples, one can have confidence in the accuracy of the match. This was not the case for the results obtained using commercial library search algorithms, as the hit quality index scores for the top twenty hits were always greater than 99%. Copyright © 2016 Elsevier B.V. All rights reserved.
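
    The frequency-of-occurrence voting described above can be illustrated with a toy sketch: tally the line/model labels appearing in the per-window hit lists and keep only those reaching the 20% threshold. The hit lists below are invented; the real engine derives them from infrared cross-correlation searches.

```python
# Simplified sketch of the voting idea described in the abstract: tally the
# line/model labels of the top hits returned in each search window and keep
# only labels whose frequency of occurrence is at least 20%.
from collections import Counter

def vote(window_hits, threshold=0.20):
    # window_hits: list of per-window hit lists, each hit a (line, model) label
    counts = Counter(label for hits in window_hits for label in hits)
    total = sum(counts.values())
    return {label: n / total for label, n in counts.items() if n / total >= threshold}

# Invented hit lists for three search windows (labels are hypothetical examples)
windows = [
    [("GM", "Malibu"), ("GM", "Impala"), ("Ford", "Fusion")],
    [("GM", "Malibu"), ("GM", "Malibu"), ("Chrysler", "300")],
    [("GM", "Malibu"), ("Ford", "Fusion"), ("GM", "Impala")],
]
print(vote(windows))
```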

  3. Search strategies for pair production of heavy Higgs bosons decaying invisibly at the LHC

    Science.gov (United States)

    Arganda, E.; Diaz-Cruz, J. L.; Mileo, N.; Morales, R. A.; Szynkman, A.

    2018-04-01

    The search for heavy Higgs bosons at the LHC represents an intense experimental program, carried out by the ATLAS and CMS collaborations, which includes the hunt for invisible Higgs decays and dark matter candidates. No significant deviations from the SM backgrounds have been observed in any of these searches, imposing significant constraints on the parameter space of different new physics models with an extended Higgs sector. Here we discuss an alternative search strategy for heavy Higgs bosons decaying invisibly at the LHC, focusing on the pair production of a heavy scalar H together with a pseudoscalar A, through the production mode qq̄ → Z* → HA. We identify as the most promising signal the final state made up of 4b + ETmiss, coming from the heavy scalar decay mode H → hh → bb̄bb̄, with h being the discovered SM-like Higgs boson with mh = 125 GeV, together with the invisible channel of the pseudoscalar. We work within the context of simplified MSSM scenarios that contain quite heavy sfermions of most types with O(10) TeV masses, while the stops are heavy enough to reproduce the 125 GeV mass for the lightest SM-like Higgs boson. By contrast, the gauginos/higgsinos and the heavy MSSM Higgs bosons have masses near the EW scale. Our search strategies, for an LHC center-of-mass energy of √s = 14 TeV, allow us to obtain statistical significances of the signal over the SM backgrounds with values up to ∼1.6σ and ∼3σ, for total integrated luminosities of 300 fb⁻¹ and 1000 fb⁻¹, respectively.
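
    As a back-of-the-envelope check (not the paper's statistical analysis), if signal and background both scale linearly with integrated luminosity, significance grows roughly as the square root of the luminosity ratio, which is consistent with the ∼1.6σ and ∼3σ figures quoted above.

```python
# Rough luminosity scaling of a counting-experiment significance: with S and B
# both proportional to L, Z ~ S/sqrt(B) scales as sqrt(L1/L0). Starting from
# ~1.6 sigma at 300 fb^-1, this predicts roughly 3 sigma at 1000 fb^-1.
from math import sqrt

def scaled_significance(z0: float, lumi0: float, lumi1: float) -> float:
    return z0 * sqrt(lumi1 / lumi0)

print(round(scaled_significance(1.6, 300.0, 1000.0), 1))  # ~2.9
```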

  4. The development of PubMed search strategies for patient preferences for treatment outcomes

    Directory of Open Access Journals (Sweden)

    Ralph van Hoorn

    2016-07-01

    Full Text Available Abstract Background The importance of respecting patients’ preferences when making treatment decisions is increasingly recognized. Efficiently retrieving papers from the scientific literature reporting on the presence and nature of such preferences can help to achieve this goal. The objective of this study was to create a search filter for PubMed to help retrieve evidence on patient preferences for treatment outcomes. Methods A total of 27 journals were hand-searched for articles on patient preferences for treatment outcomes published in 2011. Selected articles served as a reference set. To develop optimal search strategies to retrieve this set, all articles in the reference set were randomly split into a development and a validation set. MeSH-terms and keywords retrieved using PubReMiner were tested individually and as combinations in PubMed and evaluated for retrieval performance (e.g. sensitivity (Se) and specificity (Sp)). Results Of 8238 articles, 22 were considered to report empirical evidence on patient preferences for specific treatment outcomes. The best search filters reached Se of 100 % [95 % CI 100-100] with Sp of 95 % [94–95 %] and Sp of 97 % [97–98 %] with 75 % Se [74–76 %]. In the validation set these queries reached values of Se of 90 % [89–91 %] with Sp 94 % [93–95 %] and Se of 80 % [79–81 %] with Sp of 97 % [96–96 %], respectively. Conclusions Narrow and broad search queries were developed which can help in retrieving literature on patient preferences for treatment outcomes. Identifying such evidence may in turn enhance the incorporation of patient preferences in clinical decision making and health technology assessment.

  5. The development of PubMed search strategies for patient preferences for treatment outcomes.

    Science.gov (United States)

    van Hoorn, Ralph; Kievit, Wietske; Booth, Andrew; Mozygemba, Kati; Lysdahl, Kristin Bakke; Refolo, Pietro; Sacchini, Dario; Gerhardus, Ansgar; van der Wilt, Gert Jan; Tummers, Marcia

    2016-07-29

    The importance of respecting patients' preferences when making treatment decisions is increasingly recognized. Efficiently retrieving papers from the scientific literature reporting on the presence and nature of such preferences can help to achieve this goal. The objective of this study was to create a search filter for PubMed to help retrieve evidence on patient preferences for treatment outcomes. A total of 27 journals were hand-searched for articles on patient preferences for treatment outcomes published in 2011. Selected articles served as a reference set. To develop optimal search strategies to retrieve this set, all articles in the reference set were randomly split into a development and a validation set. MeSH-terms and keywords retrieved using PubReMiner were tested individually and as combinations in PubMed and evaluated for retrieval performance (e.g. sensitivity (Se) and specificity (Sp)). Of 8238 articles, 22 were considered to report empirical evidence on patient preferences for specific treatment outcomes. The best search filters reached Se of 100 % [95 % CI 100-100] with Sp of 95 % [94-95 %] and Sp of 97 % [97-98 %] with 75 % Se [74-76 %]. In the validation set these queries reached values of Se of 90 % [89-91 %] with Sp 94 % [93-95 %] and Se of 80 % [79-81 %] with Sp of 97 % [96-96 %], respectively. Narrow and broad search queries were developed which can help in retrieving literature on patient preferences for treatment outcomes. Identifying such evidence may in turn enhance the incorporation of patient preferences in clinical decision making and health technology assessment.

  6. Literature search strategies for interdisciplinary research a sourcebook for scientists and engineers

    CERN Document Server

    Ackerson, Linda G

    2006-01-01

    The amount of published literature can be overwhelming for scientists and researchers moving from a broad disciplinary research area to a more specialized one, particularly in fields that use information from more than one discipline. Without a focused inquiry, the researcher may find too little information or may be overcome by too much. Striking the correct balance of information is the focus of Literature Search Strategies for Interdisciplinary Research. This useful reference tool studies diverse interdisciplinary areas revealing the general and individual qualities that dictate the strateg

  7. Making sense of the future: The information search strategies of construction practitioners in exploring the risk landscape

    DEFF Research Database (Denmark)

    Stingl, Verena; Maytorena-Sanchez, Eunice

    This paper explores the cognitive strategies that construction practitioners rely on when searching to identify risks in a simulated project. By using the active information search methodology in interviews with 45 industry practitioners, we were able to distinguish three stereotypical information...

  8. Web-Based Undergraduate Chemistry Problem-Solving: The Interplay of Task Performance, Domain Knowledge and Web-Searching Strategies

    Science.gov (United States)

    She, Hsiao-Ching; Cheng, Meng-Tzu; Li, Ta-Wei; Wang, Chia-Yu; Chiu, Hsin-Tien; Lee, Pei-Zon; Chou, Wen-Chi; Chuang, Ming-Hua

    2012-01-01

    This study investigates the effect of Web-based Chemistry Problem-Solving, with the attributes of Web-searching and problem-solving scaffolds, on undergraduate students' problem-solving task performance. In addition, the nature and extent of Web-searching strategies students used and its correlation with task performance and domain knowledge also…

  9. Enhancing Clinical Content and Race/Ethnicity Data in Statewide Hospital Administrative Databases: Obstacles Encountered, Strategies Adopted, and Lessons Learned.

    Science.gov (United States)

    Pine, Michael; Kowlessar, Niranjana M; Salemi, Jason L; Miyamura, Jill; Zingmond, David S; Katz, Nicole E; Schindler, Joe

    2015-08-01

    Eight grant teams used Agency for Healthcare Research and Quality infrastructure development research grants to enhance the clinical content of and improve race/ethnicity identifiers in statewide all-payer hospital administrative databases. Grantees faced common challenges, including recruiting data partners and ensuring their continued effective participation, acquiring and validating the accuracy and utility of new data elements, and linking data from multiple sources to create internally consistent enhanced administrative databases. Successful strategies to overcome these challenges included aggressively engaging with providers of critical sources of data, emphasizing potential benefits to participants, revising requirements to lessen burdens associated with participation, maintaining continuous communication with participants, being flexible when responding to participants' difficulties in meeting program requirements, and paying scrupulous attention to preparing data specifications and creating and implementing protocols for data auditing, validation, cleaning, editing, and linking. In addition to common challenges, grantees also had to contend with unique challenges from local environmental factors that shaped the strategies they adopted. The creation of enhanced administrative databases to support comparative effectiveness research is difficult, particularly in the face of numerous challenges with recruiting data partners such as competing demands on information technology resources. Excellent communication, flexibility, and attention to detail are essential ingredients in accomplishing this task. Additional research is needed to develop strategies for maintaining these databases when initial funding is exhausted. © Health Research and Educational Trust.

  10. Blind Channel Equalization Using Constrained Generalized Pattern Search Optimization and Reinitialization Strategy

    Directory of Open Access Journals (Sweden)

    Charles Tatkeu

    2008-12-01

    Full Text Available We propose a global convergence baud-spaced blind equalization method in this paper. This method is based on the application of both generalized pattern optimization and channel surfing reinitialization. The potentially used unimodal cost function relies on higher-order statistics, and its optimization is achieved using a pattern search algorithm. Since the convergence to the global minimum is not unconditionally warranted, we make use of the channel surfing reinitialization (CSR) strategy to find the right global minimum. The proposed algorithm is analyzed, and simulation results using a severe frequency selective propagation channel are given. Detailed comparisons with the constant modulus algorithm (CMA) are highlighted. The proposed algorithm performances are evaluated in terms of intersymbol interference, normalized received signal constellations, and root mean square error vector magnitude. In case of nonconstant modulus input signals, our algorithm significantly outperforms the CMA algorithm with full channel surfing reinitialization strategy. However, comparable performances are obtained for constant modulus signals.

  11. Blind Channel Equalization Using Constrained Generalized Pattern Search Optimization and Reinitialization Strategy

    Science.gov (United States)

    Zaouche, Abdelouahib; Dayoub, Iyad; Rouvaen, Jean Michel; Tatkeu, Charles

    2008-12-01

    We propose a global convergence baud-spaced blind equalization method in this paper. This method is based on the application of both generalized pattern optimization and channel surfing reinitialization. The potentially used unimodal cost function relies on higher-order statistics, and its optimization is achieved using a pattern search algorithm. Since the convergence to the global minimum is not unconditionally warranted, we make use of the channel surfing reinitialization (CSR) strategy to find the right global minimum. The proposed algorithm is analyzed, and simulation results using a severe frequency selective propagation channel are given. Detailed comparisons with the constant modulus algorithm (CMA) are highlighted. The proposed algorithm performances are evaluated in terms of intersymbol interference, normalized received signal constellations, and root mean square error vector magnitude. In case of nonconstant modulus input signals, our algorithm significantly outperforms the CMA algorithm with full channel surfing reinitialization strategy. However, comparable performances are obtained for constant modulus signals.
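
    The generalized pattern search family used here can be illustrated with a generic compass search on a toy quadratic cost; the sketch below is not the paper's blind-equalization cost function, which is built from higher-order statistics of the received signal.

```python
# Generic compass-style pattern search on a toy 2-D function, illustrating the
# derivative-free optimization family the equalizer relies on.
import numpy as np

def cost(x):
    # toy quadratic cost with minimum at (1, -2); not the equalization criterion
    return float((x[0] - 1.0) ** 2 + 10.0 * (x[1] + 2.0) ** 2)

def pattern_search(x0, step=1.0, tol=1e-6, max_iter=10_000):
    x = np.asarray(x0, dtype=float)
    fx = cost(x)
    directions = [np.array(d) for d in ((1, 0), (-1, 0), (0, 1), (0, -1))]
    for _ in range(max_iter):
        improved = False
        for d in directions:                 # poll step along coordinate directions
            trial = x + step * d
            ft = cost(trial)
            if ft < fx:
                x, fx, improved = trial, ft, True
                break
        if not improved:
            step *= 0.5                      # contract the mesh when polling fails
            if step < tol:
                break
    return x, fx

x_best, f_best = pattern_search([5.0, 5.0])
print(x_best.round(4), round(f_best, 8))
```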

  12. Family Health Strategy: assessment and reasons for searching of health service by users

    Directory of Open Access Journals (Sweden)

    Loeste de Arruda-Barbosa

    2011-12-01

    Full Text Available Objective: To assess users' evaluation of family health services and identify the main reasons that led them to seek such services. Methods: A descriptive study with a qualitative approach, carried out in 5 Family Health Units with 25 users of the Family Health Strategy (FHS) of the city of Crato-CE, Brazil. The study took place from March to April 2009. Semi-structured interviews were applied and recorded, and the technique of thematic content analysis was used. Results: Users of the FHS expressed great dissatisfaction, especially with the organization of and access to health services, evaluating family health care as inefficient, although it brings care closer to the population, primarily through home visits. It was also clear that the service is sought mainly from a curative perspective and for the acquisition of medicines. Conclusions: The subjects evaluate the organization of and access to healthcare services as unsatisfactory, but value the actions when there is a bond with the health team. However, there is still a demand for health services based on the search for medicines and medical consultations. Thus, it is necessary to improve the services of the Family Health Strategy in Crato, with a view to ensuring quality, accessibility and greater resolution of health services.

  13. Face Recognition and Visual Search Strategies in Autism Spectrum Disorders: Amending and Extending a Recent Review by Weigelt et al.

    Directory of Open Access Journals (Sweden)

    Julia Tang

    Full Text Available The purpose of this review was to build upon a recent review by Weigelt et al. which examined visual search strategies and face identification between individuals with autism spectrum disorders (ASD) and typically developing peers. Seven databases (CINAHL Plus, EMBASE, ERIC, Medline, ProQuest, PsycINFO and PubMed) were used to locate published scientific studies matching our inclusion criteria. A total of 28 articles not included in Weigelt et al. met criteria for inclusion into this systematic review. Of these 28 studies, 16 were available and met criteria at the time of the previous review, but were mistakenly excluded, and 12 were recently published. Weigelt et al. found quantitative, but not qualitative, differences in face identification in individuals with ASD. In contrast, the current systematic review found both qualitative and quantitative differences in face identification between individuals with and without ASD. There is a large inconsistency in findings across the eye tracking and neurobiological studies reviewed. Recommendations for future research in face recognition in ASD were discussed.

  14. A Fast, Background-Independent Retrieval Strategy for Color Image Databases

    National Research Council Canada - National Science Library

    Das, M; Draper, B. A; Lim, W. J; Manmatha, R; Riseman, E. M

    1996-01-01

    .... The method is fast and has low storage overhead. Good retrieval results are obtained with multi-colored query objects even when they occur in arbitrary sizes, rotations and locations in the database images...

  15. The Service Status and Development Strategy of the Mobile Application Service of Ancient Books Database

    Directory of Open Access Journals (Sweden)

    Yang Siluo

    2017-12-01

    Full Text Available [Purpose/significance] The mobile application of the ancient books database represents a shift of ancient books databases from the online version to a mobile one. At present, mobile applications of ancient books databases are in an initial stage of development, so it is necessary to investigate the current situation and provide suggestions for their development. [Method/process] This paper selected two kinds of ancient books databases, namely a WeChat platform and a mobile phone client, and analyzed their operation modes and main functions. [Result/conclusion] We conclude that mobile applications of ancient books databases have some defects: resources are small in scale, content and data forms are limited, the functions of individual platforms are incomplete, and users pay inadequate attention to these issues. We then put forward corresponding suggestions and point out that, to construct mobile applications of ancient books databases, it is necessary to improve platform construction, enrich data forms and quantity, optimize functions, and emphasize communication and interaction with users.

  16. RNA FRABASE 2.0: an advanced web-accessible database with the capacity to search the three-dimensional fragments within RNA structures

    Directory of Open Access Journals (Sweden)

    Wasik Szymon

    2010-05-01

    Full Text Available Abstract Background Recent discoveries concerning novel functions of RNA, such as RNA interference, have contributed towards the growing importance of the field. In this respect, a deeper knowledge of complex three-dimensional RNA structures is essential to understand their new biological functions. A number of bioinformatic tools have been proposed to explore two major structural databases (PDB, NDB) in order to analyze various aspects of RNA tertiary structures. One of these tools is RNA FRABASE 1.0, the first web-accessible database with an engine for automatic search of 3D fragments within PDB-derived RNA structures. This search is based upon the user-defined RNA secondary structure pattern. In this paper, we present and discuss RNA FRABASE 2.0. This second version of the system represents a major extension of this tool in terms of providing new data and a wide spectrum of novel functionalities. An intuitively operated web server platform enables very fast user-tailored search of three-dimensional RNA fragments, their multi-parameter conformational analysis and visualization. Description RNA FRABASE 2.0 has stored information on 1565 PDB-deposited RNA structures, including all NMR models. The RNA FRABASE 2.0 search engine algorithms operate on the database of the RNA sequences and the new library of RNA secondary structures, coded in the dot-bracket format extended to hold multi-stranded structures and to cover residues whose coordinates are missing in the PDB files. The library of RNA secondary structures (and their graphics) is made available. A high level of efficiency of the 3D search has been achieved by introducing novel tools to formulate advanced searching patterns and to screen highly populated tertiary structure elements. RNA FRABASE 2.0 also stores data and conformational parameters in order to provide "on the spot" structural filters to explore the three-dimensional RNA structures. An instant visualization of the 3D RNA
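
    Since the record describes a search driven by a user-defined secondary-structure pattern in dot-bracket notation, a minimal sketch of that matching step is shown below. The stored entries, the 'N' single-position wildcard and the sliding-window scan are illustrative assumptions and do not reproduce the actual RNA FRABASE 2.0 query engine.

```python
# Each stored structure pairs a sequence with its dot-bracket secondary structure.
# The entries are invented toy data, not real PDB-derived records.
structures = {
    "TOY_TRNA": ("GCGGAUUUA", "((((....)"),
    "TOY_HAIRPIN": ("GGGAAAUCCC", "(((....)))"),
}

def find_fragments(pattern, db):
    """Return (entry id, start index, sequence fragment) for every window whose
    dot-bracket string matches the pattern ('.', '(' and ')' are literal,
    'N' in the pattern acts as a single-position wildcard)."""
    hits = []
    w = len(pattern)
    for entry_id, (seq, dot_bracket) in db.items():
        for i in range(len(dot_bracket) - w + 1):
            window = dot_bracket[i:i + w]
            if all(p == "N" or p == c for p, c in zip(pattern, window)):
                hits.append((entry_id, i, seq[i:i + w]))
    return hits

print(find_fragments("(((....)))", structures))
```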

  17. Comparing the Precision of Information Retrieval of MeSH-Controlled Vocabulary Search Method and a Visual Method in the Medline Medical Database.

    Science.gov (United States)

    Hariri, Nadjla; Ravandi, Somayyeh Nadi

    2014-01-01

    Medline is one of the most important databases in the biomedical field. One of the most important hosts for Medline is Elton B. Stephens CO. (EBSCO), which has presented different search methods that can be used based on the needs of the users. Visual search and MeSH-controlled search methods are among the most common methods. The goal of this research was to compare the precision of the retrieved sources in the EBSCO Medline database using MeSH-controlled and visual search methods. This research was a semi-empirical study. By holding training workshops, 70 students of higher education in different educational departments of Kashan University of Medical Sciences were taught MeSH-controlled and visual search methods in 2012. Then, the precision of 300 searches made by these students was calculated based on the Best Precision, Useful Precision, and Objective Precision formulas and analyzed in SPSS software using the independent-samples t-test; the three precisions obtained with the three formulas were compared between the two search methods. The mean precision of the visual method was greater than that of the MeSH-controlled search for all three types of precision, i.e. Best Precision, Useful Precision, and Objective Precision, and the mean precisions of the two methods differed significantly across the searches. Fifty-three percent of the participants in the research also mentioned that the use of the combination of the two methods produced better results. For users, it is more appropriate to use a natural-language-based method, such as the visual method, in the EBSCO Medline host than to use the controlled method, which requires users to use special keywords. The potential reason for their preference was that the visual method allowed them more freedom of action.
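
    Precision here is simply the share of retrieved records judged relevant. The record does not spell out how the Best, Useful and Objective variants differ, so the sketch below codes only the common form, and the document identifiers and relevance judgments are made up for illustration.

```python
def precision(retrieved, relevant):
    """Fraction of retrieved records that were judged relevant."""
    retrieved = list(retrieved)
    if not retrieved:
        return 0.0
    relevant = set(relevant)
    return sum(1 for r in retrieved if r in relevant) / len(retrieved)

# Hypothetical result sets for the two search methods.
visual_hits = ["d1", "d2", "d3", "d4"]
mesh_hits = ["d2", "d5", "d6"]
judged_relevant = {"d1", "d2", "d4"}

print(precision(visual_hits, judged_relevant))   # 0.75
print(precision(mesh_hits, judged_relevant))     # 0.333...
```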

  18. Accelerating Smith-Waterman Alignment for Protein Database Search Using Frequency Distance Filtration Scheme Based on CPU-GPU Collaborative System

    Directory of Open Access Journals (Sweden)

    Yu Liu

    2015-01-01

    Full Text Available The Smith-Waterman (SW) algorithm has been widely utilized for searching biological sequence databases in bioinformatics. Recently, several works have adopted the graphic card with Graphic Processing Units (GPUs) and their associated CUDA model to enhance the performance of SW computations. However, these works mainly focused on the protein database search by using the intertask parallelization technique, and only using the GPU capability to do the SW computations one by one. Hence, in this paper, we will propose an efficient SW alignment method, called CUDA-SWfr, for the protein database search by using the intratask parallelization technique based on a CPU-GPU collaborative system. Before doing the SW computations on GPU, a procedure is applied on CPU by using the frequency distance filtration scheme (FDFS) to eliminate the unnecessary alignments. The experimental results indicate that CUDA-SWfr runs 9.6 times and 96 times faster than the CPU-based SW method without and with FDFS, respectively.
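
    As a reference point for what is being accelerated, here is a plain CPU Smith-Waterman scoring routine together with a crude frequency-distance style prefilter. The scoring parameters, the L1 distance between residue-frequency vectors and the filter threshold are illustrative assumptions; they do not reproduce the paper's FDFS criterion or its CUDA implementation.

```python
from collections import Counter

def smith_waterman(a, b, match=2, mismatch=-1, gap=-2):
    """Plain O(len(a)*len(b)) Smith-Waterman local alignment score."""
    H = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    best = 0
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            diag = H[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            H[i][j] = max(0, diag, H[i - 1][j] + gap, H[i][j - 1] + gap)
            best = max(best, H[i][j])
    return best

def frequency_distance(a, b):
    """L1 distance between residue-frequency vectors: a cheap dissimilarity
    measure used here to skip obviously unrelated pairs before full alignment."""
    ca, cb = Counter(a), Counter(b)
    return sum(abs(ca[r] - cb[r]) for r in set(ca) | set(cb))

query = "HEAGAWGHEE"
database = ["PAWHEAE", "KKKKKKKKKK", "HEAGAWGHE"]
for subject in database:
    if frequency_distance(query, subject) > 12:   # threshold is an arbitrary choice
        continue                                  # filtered out, no alignment computed
    print(subject, smith_waterman(query, subject))
```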

  19. Literature searches on Ayurveda: An update.

    Science.gov (United States)

    Aggithaya, Madhur G; Narahari, Saravu R

    2015-01-01

    The journals that publish on Ayurveda have increasingly been indexed by popular medical databases in recent years. However, many Eastern journals are not indexed in biomedical databases such as PubMed. Literature searches for Ayurveda continue to be challenging due to the nonavailability of active, unbiased dedicated databases for Ayurvedic literature. In 2010, the authors identified 46 databases that can be used for systematic searches of Ayurvedic papers and theses. This update reviewed our previous recommendation and identified current and relevant databases. The aim was to update the Ayurveda literature search and the strategy needed to retrieve the maximum number of publications. The authors used psoriasis as an example to search the previously listed databases and identify new ones. The population, intervention, control, and outcome table included keywords related to psoriasis and Ayurvedic terminologies for skin diseases. Current citation update status, search results, and search options of previous databases were assessed. Eight search strategies were developed. One hundred and five journals, both biomedical and Ayurveda, which publish on Ayurveda, were identified. Variability in databases was explored to identify bias in journal citation. Five among the 46 databases are now relevant - AYUSH research portal, Annotated Bibliography of Indian Medicine, Digital Helpline for Ayurveda Research Articles (DHARA), PubMed, and Directory of Open Access Journals. Search options in these databases are not uniform, and only PubMed allows a complex search strategy. "The Researches in Ayurveda" and "Ayurvedic Research Database" (ARD) are important grey resources for hand searching. About 44/105 (41.5%) journals publishing Ayurvedic studies are not indexed in any database. Only 11/105 (10.4%) exclusive Ayurveda journals are indexed in PubMed. AYUSH research portal and DHARA are two major portals after 2010. It is mandatory to search PubMed and four other databases because all five carry citations from different groups of journals. The hand

  20. A comparison of three design tree based search algorithms for the detection of engineering parts constructed with CATIA V5 in large databases

    Directory of Open Access Journals (Sweden)

    Robin Roj

    2014-07-01

    Full Text Available This paper presents three different search engines for the detection of CAD parts in large databases. The analysis of the contained information is performed by exporting the data stored in the structure trees of the CAD models. A preparation program generates one XML file for every model which, in addition to the data of the structure tree, also contains certain physical properties of each part. The first search engine specializes in the discovery of standard parts, like screws or washers. The second program uses certain user input as search parameters, and therefore has the ability to perform personalized queries. The third one compares a given reference part with all parts in the database, and locates files that are identical, or similar, to the reference part. All approaches run automatically, and have the analysis of the structure tree in common. Files constructed with CATIA V5 and search engines written in Python have been used for the implementation. The paper also includes a short comparison of the advantages and disadvantages of each program, as well as a performance test.
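
    Because the record already names Python and per-model XML exports, a small sketch of the attribute-filter style of query is given below. The XML layout, attribute names and filter arguments are invented for illustration; they are not the exporter format or query engines used by the authors.

```python
import xml.etree.ElementTree as ET

# Hypothetical export: one XML document per CAD model, with one element per part
# carrying its name and a few physical properties taken from the structure tree.
SAMPLE = """<model name="assembly_01">
  <part name="screw_M6x20" mass="0.012" volume="1.5e-6"/>
  <part name="washer_M6"   mass="0.002" volume="2.0e-7"/>
  <part name="bracket_A"   mass="0.310" volume="4.1e-5"/>
</model>"""

def find_parts(xml_text, name_contains=None, max_mass=None):
    """Return names of parts whose attributes satisfy the user-supplied filters,
    mimicking the 'personalized query' engine described in the record."""
    root = ET.fromstring(xml_text)
    hits = []
    for part in root.iter("part"):
        if name_contains and name_contains not in part.get("name", ""):
            continue
        if max_mass is not None and float(part.get("mass", "inf")) > max_mass:
            continue
        hits.append(part.get("name"))
    return hits

print(find_parts(SAMPLE, name_contains="M6"))   # standard-part style query
print(find_parts(SAMPLE, max_mass=0.05))        # physical-property query
```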

  1. Intelligent energy allocation strategy for PHEV charging station using gravitational search algorithm

    Science.gov (United States)

    Rahman, Imran; Vasant, Pandian M.; Singh, Balbir Singh Mahinder; Abdullah-Al-Wadud, M.

    2014-10-01

    Recent research towards the use of green technologies to reduce pollution and increase the penetration of renewable energy sources in the transportation sector is gaining popularity. The development of a smart grid environment focusing on PHEVs may also heal some of the prevailing grid problems by enabling the implementation of the Vehicle-to-Grid (V2G) concept. Intelligent energy management is an important issue which has already drawn much attention from researchers. Most of these works require the formulation of mathematical models which extensively use computational intelligence-based optimization techniques to solve many technical problems. Higher penetration of PHEVs requires adequate charging infrastructure as well as smart charging strategies. We used the Gravitational Search Algorithm (GSA) to intelligently allocate energy to the PHEVs considering constraints such as energy price, remaining battery capacity, and remaining charging time.
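
    To make the optimization step concrete, here is a compact, generic Gravitational Search Algorithm sketch applied to a toy allocation objective. The fitness-to-mass mapping, the exponentially decaying gravitational constant and the five-vehicle quadratic cost are common textbook choices and assumptions; they do not encode the price, battery-capacity and charging-time constraints considered in the record.

```python
import numpy as np

def gsa_minimize(objective, dim, bounds, n_agents=20, iters=100, g0=100.0, seed=0):
    """Compact Gravitational Search Algorithm: agents attract one another with a
    'gravity' whose strength is derived from their fitness."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, size=(n_agents, dim))
    v = np.zeros_like(x)
    best_x, best_f = None, np.inf
    for t in range(iters):
        f = np.array([objective(xi) for xi in x])
        if f.min() < best_f:
            best_f, best_x = float(f.min()), x[f.argmin()].copy()
        m = (f.max() - f) / (f.max() - f.min() + 1e-12)   # best agent -> largest mass
        M = m / (m.sum() + 1e-12)
        G = g0 * np.exp(-20.0 * t / iters)                # decaying gravitational constant
        acc = np.zeros_like(x)
        for i in range(n_agents):
            for j in range(n_agents):
                if i == j:
                    continue
                diff = x[j] - x[i]
                dist = np.linalg.norm(diff) + 1e-12
                acc[i] += rng.random() * G * M[j] * diff / dist
        v = rng.random(x.shape) * v + acc
        x = np.clip(x + v, lo, hi)
    return best_x, best_f

# Toy stand-in for the charging-station objective: allocate power to 5 vehicles,
# penalizing deviation from each vehicle's (hypothetical) requested energy.
requested = np.array([6.0, 3.5, 8.0, 2.0, 5.0])
cost = lambda alloc: float(np.sum((alloc - requested) ** 2))
print(gsa_minimize(cost, dim=5, bounds=(0.0, 10.0)))
```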

  2. An High Resolution Near-Earth Objects Population Enabling Next-Generation Search Strategies

    Science.gov (United States)

    Tricaico, Pasquale; Beshore, E. C.; Larson, S. M.; Boattini, A.; Williams, G. V.

    2010-01-01

    Over the past decade, the dedicated search for kilometer-size near-Earth objects (NEOs), potentially hazardous objects (PHOs), and potential Earth impactors has led to a boost in the rate of discoveries of these objects. The catalog of known NEOs is the fundamental ingredient used to develop a model for the NEOs population, either by assessing and correcting for the observational bias (Jedicke et al., 2002), or by evaluating the migration rates from the NEOs source regions (Bottke et al., 2002). The modeled NEOs population is a necessary tool used to track the progress in the search of large NEOs (Jedicke et al., 2003) and to try to predict the distribution of the ones still undiscovered, as well as to study the sky distribution of potential Earth impactors (Chesley & Spahr, 2004). We present a method to model the NEOs population in all six orbital elements, on a finely grained grid, allowing us to design and test targeted and optimized search strategies. This method relies on the observational data routinely reported to the Minor Planet Center (MPC) by the Catalina Sky Survey (CSS) and by other active NEO surveys over the past decade, to determine on a nightly basis the efficiency in detecting moving objects as a function of observable quantities including apparent magnitude, rate of motion, airmass, and galactic latitude. The cumulative detection probability is then computed for objects within a small range in orbital elements and absolute magnitude, and the comparison with the number of known NEOs within the same range allows us to model the population. When propagated to the present epoch and projected on the sky plane, this provides the distribution of the missing large NEOs, PHOs, and potential impactors.

  3. Reduction rules-based search algorithm for opportunistic replacement strategy of multiple life-limited parts

    Directory of Open Access Journals (Sweden)

    Xuyun FU

    2018-01-01

    Full Text Available The opportunistic replacement of multiple Life-Limited Parts (LLPs) is a problem that exists widely in industry. The replacement strategy for LLPs has a great impact on the total maintenance cost of much equipment. This article focuses on finding a quick and effective algorithm for this problem. To improve algorithm efficiency, six reduction rules are suggested from the perspectives of solution feasibility, determination of the replacement of LLPs, determination of the maintenance occasion and solution optimality. Based on these six reduction rules, a search algorithm is proposed. This search algorithm can identify one or several optimal solutions. A numerical experiment shows that these six reduction rules are effective, and the time consumed by the algorithm is less than 38 s if the total life of the equipment is shorter than 55000 and the number of LLPs is less than 11. A specific case shows that the algorithm can obtain, within 10 s, optimal solutions that are much better than the result of the traditional method, and it can provide support for determining the to-be-replaced LLPs when defining the maintenance workscope of an aircraft engine. Therefore, the algorithm is applicable to engineering applications concerning the opportunistic replacement of multiple LLPs in aircraft engines.

  4. Group search optimiser-based optimal bidding strategies with no Karush-Kuhn-Tucker optimality conditions

    Science.gov (United States)

    Yadav, Naresh Kumar; Kumar, Mukesh; Gupta, S. K.

    2017-03-01

    The general strategic bidding procedure has been formulated in the literature as a bi-level search problem, in which the offer curve tends to minimise the market clearing function and to maximise the profit. Computationally, this is complex, and hence researchers have adopted Karush-Kuhn-Tucker (KKT) optimality conditions to transform the model into a single-level maximisation problem. However, the profit maximisation problem with KKT optimality conditions poses a great challenge to classical optimisation algorithms. The problem has become more complex after the inclusion of transmission constraints. This paper simplifies the profit maximisation problem into a minimisation function, in which the transmission constraints, the operating limits and the ISO market clearing functions are considered with no KKT optimality conditions. The derived function is solved using the group search optimiser (GSO), a robust population-based optimisation algorithm. Experimental investigation is carried out on the IEEE 14- and IEEE 30-bus systems, and the performance is compared against differential evolution-based, genetic algorithm-based and particle swarm optimisation-based strategic bidding methods. The simulation results demonstrate that the profit obtained through GSO-based bidding strategies is higher than with the other three methods.

  5. Optimization of refueling-shuffling scheme in PWR core by random search strategy

    International Nuclear Information System (INIS)

    Wu Yuan

    1991-11-01

    A random method for simulating the optimization of refueling management in a pressurized water reactor (PWR) core is described. The main purpose of the optimization was to select the 'best' refueling arrangement scheme, the one that would produce maximum economic benefits under certain imposed conditions. To fulfill this goal, an effective optimization strategy, a two-stage random search method, was developed. First, the search is made in a manner similar to the stratified sampling technique, and a local optimum is reached by comparison of successive results. Then further random trials are carried out between different strata to try to find the global optimum. In general, it can be used as a practical tool for conventional fuel management schemes. However, it can also be used in studies on the optimization of low-leakage fuel management. Some calculations were done for a typical PWR core on a CYBER-180/830 computer. The results show that the proposed method can obtain a satisfactory approximation at reasonably low computational cost
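
    The two-stage idea described above (stratified random sampling to locate local optima, then random trials across strata to look for the global one) can be sketched generically as follows. The candidate 'schemes', the stratification key and the toy cost are stand-ins, since the actual loading-pattern model and constraints are not given in the record.

```python
import random

def two_stage_random_search(cost, candidates, strata_key, n_local=50, n_cross=200, seed=1):
    """Stage 1: sample within each stratum and keep its best scheme.
    Stage 2: draw further random trials from the promising strata."""
    rng = random.Random(seed)
    strata = {}
    for c in candidates:
        strata.setdefault(strata_key(c), []).append(c)
    local_best = [min(rng.sample(group, min(n_local, len(group))), key=cost)
                  for group in strata.values()]
    pool = [c for b in local_best for c in strata[strata_key(b)]]
    trials = [rng.choice(pool) for _ in range(n_cross)]
    return min(local_best + trials, key=cost)

# Toy usage: "schemes" are permutations scored by a made-up cost function.
schemes = [tuple(random.Random(i).sample(range(8), 8)) for i in range(500)]
best = two_stage_random_search(lambda s: sum(i * v for i, v in enumerate(s)),
                               schemes, strata_key=lambda s: s[0])
print(best)
```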

  6. Climate change on the Colorado River: a method to search for robust management strategies

    Science.gov (United States)

    Keefe, R.; Fischbach, J. R.

    2010-12-01

    The Colorado River is a principal source of water for the seven Basin States, providing approximately 16.5 maf per year to users in the southwestern United States and Mexico. Though the dynamics of the river ensure Upper Basin users a reliable supply of water, the three Lower Basin states (California, Nevada, and Arizona) are in danger of delivery interruptions as Upper Basin demand increases and climate change threatens to reduce future streamflows. In light of the recent drought and uncertain effects of climate change on Colorado River flows, we evaluate the performance of a suite of policies modeled after the shortage sharing agreement adopted in December 2007 by the Department of the Interior. We build on the current literature by using a simplified model of the Lower Colorado River to consider future streamflow scenarios given climate change uncertainty. We also generate different scenarios of parametric consumptive use growth in the Upper Basin and evaluate alternate management strategies in light of these uncertainties. Uncertainty associated with climate change is represented with a multi-model ensemble from the literature, using a nearest neighbor perturbation to increase the size of the ensemble. We use Robust Decision Making to compare near-term or long-term management strategies across an ensemble of plausible future scenarios with the goal of identifying one or more approaches that are robust to alternate assumptions about the future. This method entails using search algorithms to quantitatively identify vulnerabilities that may threaten a given strategy (including the current operating policy) and characterize key tradeoffs between strategies under different scenarios.

  7. CUDASW++2.0: enhanced Smith-Waterman protein database search on CUDA-enabled GPUs based on SIMT and virtualized SIMD abstractions

    Directory of Open Access Journals (Sweden)

    Schmidt Bertil

    2010-04-01

    Full Text Available Abstract Background Due to its high sensitivity, the Smith-Waterman algorithm is widely used for biological database searches. Unfortunately, the quadratic time complexity of this algorithm makes it highly time-consuming. The exponential growth of biological databases further deteriorates the situation. To accelerate this algorithm, many efforts have been made to develop techniques in high performance architectures, especially the recently emerging many-core architectures and their associated programming models. Findings This paper describes the latest release of the CUDASW++ software, CUDASW++ 2.0, which makes new contributions to Smith-Waterman protein database searches using compute unified device architecture (CUDA). A parallel Smith-Waterman algorithm is proposed to further optimize the performance of CUDASW++ 1.0 based on the single instruction, multiple thread (SIMT) abstraction. For the first time, we have investigated a partitioned vectorized Smith-Waterman algorithm using CUDA based on the virtualized single instruction, multiple data (SIMD) abstraction. The optimized SIMT and the partitioned vectorized algorithms were benchmarked, and remarkably, have similar performance characteristics. CUDASW++ 2.0 achieves performance improvement over CUDASW++ 1.0 as much as 1.74 (1.72) times using the optimized SIMT algorithm and up to 1.77 (1.66) times using the partitioned vectorized algorithm, with a performance of up to 17 (30) billion cells update per second (GCUPS) on a single-GPU GeForce GTX 280 (dual-GPU GeForce GTX 295) graphics card. Conclusions CUDASW++ 2.0 is publicly available open-source software, written in CUDA and C++ programming languages. It obtains significant performance improvement over CUDASW++ 1.0 using either the optimized SIMT algorithm or the partitioned vectorized algorithm for Smith-Waterman protein database searches by fully exploiting the compute capability of commonly used CUDA-enabled low-cost GPUs.

  8. CAZymes Analysis Toolkit (CAT): web service for searching and analyzing carbohydrate-active enzymes in a newly sequenced organism using CAZy database.

    Science.gov (United States)

    Park, Byung H; Karpinets, Tatiana V; Syed, Mustafa H; Leuze, Michael R; Uberbacher, Edward C

    2010-12-01

    The Carbohydrate-Active Enzyme (CAZy) database provides a rich set of manually annotated enzymes that degrade, modify, or create glycosidic bonds. Despite rich and invaluable information stored in the database, software tools utilizing this information for annotation of newly sequenced genomes by CAZy families are limited. We have employed two annotation approaches to fill the gap between manually curated high-quality protein sequences collected in the CAZy database and the growing number of other protein sequences produced by genome or metagenome sequencing projects. The first approach is based on a similarity search against the entire nonredundant sequences of the CAZy database. The second approach performs annotation using links or correspondences between the CAZy families and protein family domains. The links were discovered using the association rule learning algorithm applied to sequences from the CAZy database. The approaches complement each other and in combination achieved high specificity and sensitivity when cross-evaluated with the manually curated genomes of Clostridium thermocellum ATCC 27405 and Saccharophagus degradans 2-40. The capability of the proposed framework to predict the function of unknown protein domains and of hypothetical proteins in the genome of Neurospora crassa is demonstrated. The framework is implemented as a Web service, the CAZymes Analysis Toolkit, and is available at http://cricket.ornl.gov/cgi-bin/cat.cgi.

  9. Evidence-based librarianship: searching for the needed EBL evidence.

    Science.gov (United States)

    Eldredge, J D

    2000-01-01

    This paper discusses the challenges of finding evidence needed to implement Evidence-Based Librarianship (EBL). Focusing first on database coverage for three health sciences librarianship journals, the article examines the information contents of different databases. It then considers the strategies needed to search for relevant evidence in the library literature via these databases, and the problems associated with searching the grey literature of librarianship. Database coverage, plausible search strategies, and the grey literature of library science all pose challenges to finding the needed research evidence for practicing EBL. Health sciences librarians need to ensure that systems are designed that can track and provide access to needed research evidence to support Evidence-Based Librarianship (EBL).

  10. Mate choice and the evolutionary stability of a fixed threshold in a sequential search strategy

    Directory of Open Access Journals (Sweden)

    Raymond Cheng

    2014-06-01

    Full Text Available The sequential search strategy is a prominent model of searcher behavior, derived as a rule by which females might sample and choose a mate from a distribution of prospective partners. The strategy involves a threshold criterion against which prospective mates are evaluated. The optimal threshold depends on the attributes of prospective mates, which are likely to vary across generations or within the lifetime of searchers due to stochastic environmental events. The extent of this variability and the cost to acquire information on the distribution of the quality of prospective mates determine whether a learned or environmentally canalized threshold is likely to be favored. In this paper, we determine conditions on cross-generational perturbations of the distribution of male phenotypes that allow for the evolutionary stability of an environmentally canalized threshold. In particular, we derive conditions under which there is a genetically determined threshold that is optimal over an evolutionary time scale in comparison to any other unlearned threshold. These considerations also reveal a simple algorithm by which the threshold could be learned.
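
    To illustrate the strategy being analyzed, the sketch below simulates a searcher who samples prospective mates one at a time and accepts the first whose quality exceeds a fixed threshold. The uniform quality distribution, the per-encounter search cost and the threshold sweep are assumptions chosen only to show how the expected payoff varies with the threshold.

```python
import random

def sequential_search(threshold, quality_dist, search_cost=0.01, max_candidates=1000, rng=random):
    """Accept the first prospective mate whose quality exceeds the fixed threshold;
    return the net payoff (quality minus accumulated search costs)."""
    for n in range(1, max_candidates + 1):
        q = quality_dist(rng)
        if q >= threshold:
            return q - search_cost * n
    return -search_cost * max_candidates          # never found an acceptable mate

def mean_payoff(threshold, trials=10000, seed=0):
    rng = random.Random(seed)
    dist = lambda r: r.random()                   # qualities uniform on [0, 1] (an assumption)
    return sum(sequential_search(threshold, dist, rng=rng) for _ in range(trials)) / trials

# Sweep a few thresholds to see where the expected payoff peaks for this toy setting.
for th in (0.5, 0.7, 0.9, 0.95):
    print(th, round(mean_payoff(th), 3))
```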

  11. Lévy-taxis: a novel search strategy for finding odor plumes in turbulent flow-dominated environments

    Science.gov (United States)

    Pasternak, Zohar; Bartumeus, Frederic; Grasso, Frank W.

    2009-10-01

    Locating chemical plumes in aquatic or terrestrial environments is important for many economic, conservation, security and health related human activities. The localization process is composed mainly of two phases: finding the chemical plume and then tracking it to its source. Plume tracking has been the subject of considerable study whereas plume finding has received little attention. We address here the latter issue, where the searching agent must find the plume in a region often many times larger than the plume and devoid of the relevant chemical cues. The probability of detecting the plume not only depends on the movements of the searching agent but also on the fluid mechanical regime, shaping plume intermittency in space and time; this is a basic, general problem when exploring for ephemeral resources (e.g. moving and/or concealing targets). Here we present a bio-inspired search strategy named Lévy-taxis that, under certain conditions, located odor plumes significantly faster and with a better success rate than other search strategies such as Lévy walks (LW), correlated random walks (CRW) and systematic zig-zag. These results are based on computer simulations which contain, for the first time ever, digitalized real-world water flow and chemical plume instead of their theoretical model approximations. Combining elements of LW and CRW, Lévy-taxis is particularly efficient for searching in flow-dominated environments: it adaptively controls the stochastic search pattern using environmental information (i.e. flow) that is available throughout the course of the search and shows correlation with the source providing the cues. This strategy finds natural application in real-world search missions, both by humans and autonomous robots, since it accommodates the stochastic nature of chemical mixing in turbulent flows. In addition, it may prove useful in the field of behavioral ecology, explaining and predicting the movement patterns of various animals searching for

  12. Levy-taxis: a novel search strategy for finding odor plumes in turbulent flow-dominated environments

    International Nuclear Information System (INIS)

    Pasternak, Zohar; Grasso, Frank W; Bartumeus, Frederic

    2009-01-01

    Locating chemical plumes in aquatic or terrestrial environments is important for many economic, conservation, security and health related human activities. The localization process is composed mainly of two phases: finding the chemical plume and then tracking it to its source. Plume tracking has been the subject of considerable study whereas plume finding has received little attention. We address here the latter issue, where the searching agent must find the plume in a region often many times larger than the plume and devoid of the relevant chemical cues. The probability of detecting the plume not only depends on the movements of the searching agent but also on the fluid mechanical regime, shaping plume intermittency in space and time; this is a basic, general problem when exploring for ephemeral resources (e.g. moving and/or concealing targets). Here we present a bio-inspired search strategy named Levy-taxis that, under certain conditions, located odor plumes significantly faster and with a better success rate than other search strategies such as Levy walks (LW), correlated random walks (CRW) and systematic zig-zag. These results are based on computer simulations which contain, for the first time ever, digitalized real-world water flow and chemical plume instead of their theoretical model approximations. Combining elements of LW and CRW, Levy-taxis is particularly efficient for searching in flow-dominated environments: it adaptively controls the stochastic search pattern using environmental information (i.e. flow) that is available throughout the course of the search and shows correlation with the source providing the cues. This strategy finds natural application in real-world search missions, both by humans and autonomous robots, since it accommodates the stochastic nature of chemical mixing in turbulent flows. In addition, it may prove useful in the field of behavioral ecology, explaining and predicting the movement patterns of various animals searching for food

  13. Levy-taxis: a novel search strategy for finding odor plumes in turbulent flow-dominated environments

    Energy Technology Data Exchange (ETDEWEB)

    Pasternak, Zohar; Grasso, Frank W [BioMimetic and Cognitive Robotics Laboratory, Department of Psychology, Brooklyn College, The City University of New York, 2900 Bedford Avenue, Brooklyn 11210, NY (United States); Bartumeus, Frederic [Department of Ecology and Evolutionary Biology and Princeton Environmental Institute, 106 Guyot Hall, Princeton University, Princeton 08544, NJ (United States)], E-mail: zpast@yahoo.com

    2009-10-30

    Locating chemical plumes in aquatic or terrestrial environments is important for many economic, conservation, security and health related human activities. The localization process is composed mainly of two phases: finding the chemical plume and then tracking it to its source. Plume tracking has been the subject of considerable study whereas plume finding has received little attention. We address here the latter issue, where the searching agent must find the plume in a region often many times larger than the plume and devoid of the relevant chemical cues. The probability of detecting the plume not only depends on the movements of the searching agent but also on the fluid mechanical regime, shaping plume intermittency in space and time; this is a basic, general problem when exploring for ephemeral resources (e.g. moving and/or concealing targets). Here we present a bio-inspired search strategy named Levy-taxis that, under certain conditions, located odor plumes significantly faster and with a better success rate than other search strategies such as Levy walks (LW), correlated random walks (CRW) and systematic zig-zag. These results are based on computer simulations which contain, for the first time ever, digitalized real-world water flow and chemical plume instead of their theoretical model approximations. Combining elements of LW and CRW, Levy-taxis is particularly efficient for searching in flow-dominated environments: it adaptively controls the stochastic search pattern using environmental information (i.e. flow) that is available throughout the course of the search and shows correlation with the source providing the cues. This strategy finds natural application in real-world search missions, both by humans and autonomous robots, since it accommodates the stochastic nature of chemical mixing in turbulent flows. In addition, it may prove useful in the field of behavioral ecology, explaining and predicting the movement patterns of various animals searching for food
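
    A rough flavor of the strategy described in these records (heavy-tailed Lévy step lengths combined with a correlated random walk whose heading is biased toward the local flow) can be sketched as follows. The power-law exponent, the angle-blending rule and the constant easterly flow are simplifying assumptions, not the digitalized flow and plume data used in these studies.

```python
import math
import random

def levy_step(mu=2.0, l_min=1.0, rng=random):
    """Power-law distributed step length with exponent mu (1 < mu <= 3)."""
    u = 1.0 - rng.random()                     # uniform in (0, 1]
    return l_min * u ** (-1.0 / (mu - 1.0))

def next_heading(current, flow_direction, kappa=0.5, sigma=0.6, rng=random):
    """Correlated-random-walk turn biased toward the upstream direction:
    a weighted blend of persistence and flow bias plus Gaussian noise
    (a deliberate simplification of the Levy-taxis rule)."""
    target = (1.0 - kappa) * current + kappa * flow_direction
    return target + rng.gauss(0.0, sigma)

# Generate a short 2D trajectory with a constant, hypothetical flow from the east.
x, y, heading = 0.0, 0.0, 0.0
upstream = math.pi                             # head into the flow to find the source
path = [(x, y)]
for _ in range(50):
    heading = next_heading(heading, upstream)
    step = levy_step()
    x += step * math.cos(heading)
    y += step * math.sin(heading)
    path.append((round(x, 2), round(y, 2)))
print(path[:5])
```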

  14. Replication and load balancing strategy of STAR's relational database management system (RDBM)

    Energy Technology Data Exchange (ETDEWEB)

    DePhillips, M; Lauret, J; Kopytine, M [Brookhaven National Laboratory, Upton NY 11973 (United States); Kent State University, Kent Ohio 44242 (United States)], E-mail: jlauret@bnl.gov

    2008-07-15

    Database demand resulting from offline analysis and production of data at the STAR experiment at Brookhaven National Laboratory's Relativistic Heavy-Ion Collider has steadily increased over the last six years of data taking activities. With each year, STAR more than doubles the number of events recorded with an anticipation of reaching a billion event capabilities as early as next year. The challenges faced from producing and analyzing this magnitude of events in parallel have raised issues with regard to the distribution of calibrations and geometry data, via databases, to STAR's growing global collaboration. Rapid distribution, availability, ensured synchronization and load balancing have become paramount considerations. Both conventional technology and novel approaches are used in parallel to realize these goals. This paper discusses how STAR uses load balancing to optimize database usage. It discusses distribution methods via MySQL master-slave replication, the synchronization issues that arise from this type of distribution, and the solutions, mostly homegrown, put forth to overcome these issues. A novel approach toward load balancing between slave nodes that assists in maintaining a high availability rate for a veracious community is discussed in detail. This load balancing addresses both pools of nodes internal to a given location and the balancing of load for remote users between different available locations. Challenges, trade-offs, rationale for decisions and paths forward will be discussed in all cases, presenting a solid production environment with a vision for scalable growth.

  15. Replication and load balancing strategy of STAR's relational database management system (RDBM)

    International Nuclear Information System (INIS)

    DePhillips, M; Lauret, J; Kopytine, M

    2008-01-01

    Database demand resulting from offline analysis and production of data at the STAR experiment at Brookhaven National Laboratory's Relativistic Heavy-Ion Collider has steadily increased over the last six years of data taking activities. With each year, STAR more than doubles the number of events recorded with an anticipation of reaching a billion event capabilities as early as next year. The challenges faced from producing and analyzing this magnitude of events in parallel have raised issues with regard to the distribution of calibrations and geometry data, via databases, to STAR's growing global collaboration. Rapid distribution, availability, ensured synchronization and load balancing have become paramount considerations. Both conventional technology and novel approaches are used in parallel to realize these goals. This paper discusses how STAR uses load balancing to optimize database usage. It discusses distribution methods via MySQL master-slave replication, the synchronization issues that arise from this type of distribution, and the solutions, mostly homegrown, put forth to overcome these issues. A novel approach toward load balancing between slave nodes that assists in maintaining a high availability rate for a veracious community is discussed in detail. This load balancing addresses both pools of nodes internal to a given location and the balancing of load for remote users between different available locations. Challenges, trade-offs, rationale for decisions and paths forward will be discussed in all cases, presenting a solid production environment with a vision for scalable growth.
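
    As a generic illustration of the kind of replica load balancing described in these records (prefer nodes at the user's own site, fall back to remote sites, skip nodes that are down), here is a toy round-robin selector. The node names, health tracking and bounded scan are invented for the sketch and do not reflect STAR's actual production tooling.

```python
import itertools

class ReplicaBalancer:
    """Toy load balancer for read-only database replicas: rotate round-robin
    within the local pool first, then fall back to the remote pool."""
    def __init__(self, local_nodes, remote_nodes):
        self.pools = {"local": itertools.cycle(local_nodes),
                      "remote": itertools.cycle(remote_nodes)}
        self.healthy = set(local_nodes) | set(remote_nodes)

    def mark_down(self, node):
        self.healthy.discard(node)

    def pick(self):
        for scope in ("local", "remote"):
            for _ in range(8):                 # bounded scan of the rotating pool
                node = next(self.pools[scope])
                if node in self.healthy:
                    return node
        raise RuntimeError("no healthy replica available")

lb = ReplicaBalancer(["db1.site-a", "db2.site-a"], ["db1.site-b"])
lb.mark_down("db2.site-a")
print([lb.pick() for _ in range(4)])           # stays on the remaining local node
```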

  16. Search strategies for Higgs Bosons at high energy e+e- colliders

    International Nuclear Information System (INIS)

    Alexander, J.; Burke, D.L.; Jung, C.K.; Komamiya, S.; Burchat, P.R.

    1989-01-01

    We have used detailed Monte Carlo simulations to study search strategies for Higgs bosons at high energy e+e- colliders. We extend an earlier study of the minimal single-Higgs-doublet model at a center-of-mass energy of 1 TeV to examine the effects of b-quark tagging and jet counting. It is found that these techniques can increase the signal-to-noise ratio substantially in the mass range around the W mass. In addition, we have studied this model at a center-of-mass energy of 400 GeV and found that an e+e- collider in this region would be sensitive to a Higgs boson with mass up to twice the Z0 mass. We have also considered a nonminimal two-doublet model for the Higgs sector by extending a study of charged Higgs boson searches to include a mass very close to the mass of the W±. We demonstrate that techniques which include b-quark tagging can be utilized to extract a significant signal. In addition, we have examined the prospects for detecting nonminimal neutral Higgs bosons at 1 TeV. We conclude that it would be possible to detect the CP-even and CP-odd neutral Higgs bosons when they are pair-produced in e+e- annihilation over a limited mass range. However, in some scenarios of supersymmetry, the charged Higgs boson constitutes a significant background to the CP-odd and the more massive CP-even neutral Higgs boson. 16 refs., 13 figs., 2 tabs

  17. Cognitive biases and heuristics in medical decision making: a critical review using a systematic search strategy.

    Science.gov (United States)

    Blumenthal-Barby, J S; Krieger, Heather

    2015-05-01

    The role of cognitive biases and heuristics in medical decision making is of growing interest. The purpose of this study was to determine whether studies on cognitive biases and heuristics in medical decision making are based on actual or hypothetical decisions and are conducted with populations that are representative of those who typically make the medical decision; to categorize the types of cognitive biases and heuristics found and whether they are found in patients or in medical personnel; and to critically review the studies based on standard methodological quality criteria. Data sources were original, peer-reviewed, empirical studies on cognitive biases and heuristics in medical decision making found in Ovid Medline, PsycINFO, and the CINAHL databases published in 1980-2013. Predefined exclusion criteria were used to identify 213 studies. During data extraction, information was collected on type of bias or heuristic studied, respondent population, decision type, study type (actual or hypothetical), study method, and study conclusion. Of the 213 studies analyzed, 164 (77%) were based on hypothetical vignettes, and 175 (82%) were conducted with representative populations. Nineteen types of cognitive biases and heuristics were found. Only 34% of studies (n = 73) investigated medical personnel, and 68% (n = 145) confirmed the presence of a bias or heuristic. Each methodological quality criterion was satisfied by more than 50% of the studies, except for sample size and validated instruments/questions. Limitations are that existing terms were used to inform search terms, and study inclusion criteria focused strictly on decision making. Most of the studies on biases and heuristics in medical decision making are based on hypothetical vignettes, raising concerns about applicability of these findings to actual decision making. Biases and heuristics have been underinvestigated in medical personnel compared with patients. © The Author(s) 2014.

  18. Variation in Perfusion Strategies for Neonatal and Infant Aortic Arch Repair: Contemporary Practice in the STS Congenital Heart Surgery Database.

    Science.gov (United States)

    Meyer, David B; Jacobs, Jeffrey P; Hill, Kevin; Wallace, Amelia S; Bateson, Brian; Jacobs, Marshall L

    2016-09-01

    Regional cerebral perfusion (RCP) is used as an adjunct or alternative to deep hypothermic circulatory arrest (DHCA) for neonates and infants undergoing aortic arch repair. Clinical studies have not demonstrated clear superiority of either strategy, and multicenter data regarding current use of these strategies are lacking. We sought to describe the variability in contemporary practice patterns for use of these techniques. The Society of Thoracic Surgeons Congenital Heart Surgery Database (2010-2013) was queried to identify neonates and infants whose index operation involved aortic arch repair with cardiopulmonary bypass. Perfusion strategy was classified as isolated DHCA, RCP (with less than or equal to ten minutes of DHCA), or mixed (RCP with more than ten minutes of DHCA). Data were analyzed for the entire cohort and stratified by operation subgroups. Overall, 4,523 patients (105 centers) were identified; median age seven days (interquartile range: 5.0-13.0). The most prevalent perfusion strategy was RCP (43%). Deep hypothermic circulatory arrest and mixed perfusion accounted for 32% and 16% of cases, respectively. In all, 59% of operations involved some period of RCP. Regional cerebral perfusion was the most prevalent perfusion strategy for each operation subgroup. Neither age nor weight was associated with perfusion strategy, but reoperations were less likely to use RCP (31% vs 45%). The combined duration of RCP and DHCA in the RCP group was longer than the DHCA time in the DHCA group (45 vs 36 minutes). Perfusion strategies for aortic arch repair vary considerably among neonates and infants. In contemporary practice, RCP is the most prevalent perfusion strategy for these procedures. Use of DHCA is also common. Further investigation is warranted to ascertain possible relative merits of the various perfusion techniques. © The Author(s) 2016.

  19. Utilizing Multimedia Database Access: Teaching Strategies Using the iPad in the Dance Classroom

    Science.gov (United States)

    Ostashewski, Nathaniel; Reid, Doug; Ostashewski, Marcia

    2016-01-01

    This article presents action research that identified iPad tablet technology-supported teaching strategies in a dance classroom context. Dance classrooms use instructor-accessed music as a regular element of lessons, but video is both challenging and time-consuming to produce or display. The results of this study highlight how the Apple iPad…

  20. New Search Strategies Successfully Optimize Retrieval of Clinically Sound Treatment Studies in EMBASE. A review of: Wong, Sharon S‐L, Nancy L. Wilczynski, and R. Brian Haynes. “Developing Optimal Search Strategies for Detecting Clinically Sound Treatment Studies in EMBASE.” Journal of the Medical Library Association 94.1 (Jan. 2006): 41‐47. 14 May 2007 http://www.pubmedcentral.nih.gov/articlerender.fcgi?artid=1324770.

    Directory of Open Access Journals (Sweden)

    John Loy

    2007-06-01

    Full Text Available Objective – To develop and test the sensitivity, specificity, precision and accuracy of search strategies to retrieve clinically sound treatment studies in the EMBASE database. Design – Analytical study. Setting – Methodologically sound studies of treatment from 55 journals indexed in EMBASE for the year 2000. Subjects – EMBASE and hand searches performed at the Health Information Research Unit of McMaster University, Ontario, Canada. Methods – The authors compare the results of EMBASE searches using their search strategies with the “gold standard” of articles retrieved by hand search. Research assistants initially hand searched each issue of 55 selected journals published in 2000 to identify articles detailing studies on healthcare treatment of humans. Subject coverage of the journals was wide ranging and included obstetrics and gynaecology, psychiatry, oncology, neurology, surgery and general practice. Studies were then assessed to ensure they met the qualifying criteria: random allocation of participants to groups, outcome assessment of at least 80% of participants who began the study, and analysis consistent with study design. Initially, 3850 articles on treatment were identified, of which 1256 (32.6%) were methodologically sound. To construct a comprehensive set of search terms, input was sought from librarians and researchers in the US and Canada. This initially produced a list of 5385 terms, of which 4843 were unique and 3524 produced hits. Individual search terms with sensitivity greater than 25% and specificity greater than 75% were incorporated into search strategies for use within the OVID interface for the EMBASE database to retrieve articles meeting the same criteria. These strategies were developed using all 27,769 articles published in the 55 journals in 2000. This all-inclusive approach was used to test the search strategies’ ability to identify high quality treatment articles from a larger pool of material
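
    The filter-evaluation arithmetic in this kind of study reduces to comparing a strategy's retrieved set against the hand-search gold standard. The sketch below computes sensitivity, specificity and precision from sets of record identifiers; the numbers are chosen purely for illustration and are not the study's data.

```python
def diagnostic_stats(retrieved, gold_positive, all_records):
    """Sensitivity, specificity and precision of a search strategy measured
    against a hand-search gold standard (all arguments are sets of record ids)."""
    retrieved, gold_positive, all_records = set(retrieved), set(gold_positive), set(all_records)
    tp = len(retrieved & gold_positive)
    fp = len(retrieved - gold_positive)
    fn = len(gold_positive - retrieved)
    tn = len(all_records - retrieved - gold_positive)
    sensitivity = tp / (tp + fn) if (tp + fn) else 0.0
    specificity = tn / (tn + fp) if (tn + fp) else 0.0
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    return sensitivity, specificity, precision

# Hypothetical numbers, just to show the calculation.
all_ids = set(range(100))
sound_treatment_studies = set(range(0, 20))            # "gold standard" articles
strategy_hits = set(range(0, 15)) | set(range(60, 70)) # what the search strategy returned
print(diagnostic_stats(strategy_hits, sound_treatment_studies, all_ids))
```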

  1. Hiding and Searching Strategies of Adult Humans in a Virtual and a Real-Space Room

    Science.gov (United States)

    Talbot, Katherine J.; Legge, Eric L. G.; Bulitko, Vadim; Spetch, Marcia L.

    2009-01-01

    Adults searched for or cached three objects in nine hiding locations in a virtual room or a real-space room. In both rooms, the locations selected by participants differed systematically between searching and hiding. Specifically, participants moved farther from origin and dispersed their choices more when hiding objects than when searching for…

  2. A Content Analysis of Strategies and Tactics Observed among MLIS Students in an Online Searching Course

    Science.gov (United States)

    Ondrusek, Anita L.; Ren, Xiaoai; Yang, Changwoo

    2017-01-01

    Online searching is a skill that all professional programs educating librarians consider an essential part of their curricula. However, investigations of online searching behavior have centered almost exclusively on end users, and there have been no recent formal studies that explore the online searching behaviors of MLIS students. In this study,…

  3. Use of Cognitive and Metacognitive Strategies in Online Search: An Eye-Tracking Study

    Science.gov (United States)

    Zhou, Mingming; Ren, Jing

    2016-01-01

    This study used eye-tracking technology to track students' eye movements while searching information on the web. The research question guiding this study was "Do students with different search performance levels have different visual attention distributions while searching information online? If yes, what are the patterns for high and low…

  4. Landslide databases to compare regional repair and mitigation strategies of transportation infrastructure

    Science.gov (United States)

    Wohlers, Annika; Damm, Bodo

    2017-04-01

    Regional data of the Central German Uplands are extracted from the German landslide database in order to understand the complex interactions between landslide risks and public risk awareness considering transportation infrastructure. Most information within the database is gathered by means of archive studies from inventories of emergency agencies, state, press and web archives, company and department records as well as scientific and (geo)technical literature. The information includes land use practices, repair and mitigation measures with resultant costs of the German road network as well as railroad and waterway networks. It therefore contains valuable information of historical and current landslide impacts, elements at risk and provides an overview of spatiotemporal changes in social exposure and vulnerability to landslide hazards over the last 120 years. On a regional scale the recorded infrastructure damages, and consequential repair or mitigation measures were categorized and classified, according to relevant landslide types, processes and types of infrastructure. In a further step, the data of recent landslides are compared with historical and modern repair and mitigation measures and are correlated with socioeconomic concepts. As a result, it is possible to identify some complex interactions between landslide hazard, risk perception, and damage impact, including time lags and intensity thresholds. The data reveal distinct concepts of repairing respectively mitigating landslides on different types of transportation infrastructure, which are not exclusively linked to higher construction efforts (e.g. embankments on railroads and channels), but changing levels of economic losses and risk perception as well. In addition, a shift from low cost prevention measures such as the removal of loose rock and vegetation, rock blasting, and catch barriers towards expensive mitigation measures such as catch fences, soil anchoring and rock nailing over time can be noticed

  5. Preservation of biological information in thermal spring deposits - Developing a strategy for the search for fossil life on Mars

    Science.gov (United States)

    Walter, M. R.; Des Marais, David J.

    1993-01-01

    Paleobiological experience on earth is used here to develop a search strategy for fossil life on Mars. In particular, the exploration of thermal spring deposits is proposed as a way to maximize the chance of finding fossil life on Mars. As a basis for this suggestion, the characteristics of thermal springs are discussed in some detail.

  6. Closing the Wedge: Search Strategies for Extended Higgs Sectors with Heavy Flavor Final States

    CERN Document Server

    Gori, Stefania; Shah, Nausheen R.; Zurek, Kathryn M.

    2016-01-01

    We consider search strategies for an extended Higgs sector at the high-luminosity LHC14 utilizing multi-top final states. In the framework of a Two Higgs Doublet Model, the purely top final states ($t\bar t, \, 4t$) are important channels for heavy Higgs bosons with masses in the wedge above $2\,m_t$ and at low values of $\tan\beta$, while a $2b2t$ final state is most relevant at moderate values of $\tan \beta$. We find, in the $t\bar t H$ channel, with $H \rightarrow t \bar t$, that both single and 3 lepton final states can provide statistically significant constraints at low values of $\tan \beta$ for $m_A$ as high as $\sim 750$ GeV. When systematics on the $t \bar t$ background are taken into account, however, the 3 lepton final state is more powerful, though the precise constraint depends fairly sensitively on lepton fake rates. We also find that neither $2b2t$ nor $t \bar t$ final states provide constraints on additional heavy Higgs bosons with couplings to tops smaller than the top Yukawa due to expec...

  7. Prediction of shot success for basketball free throws: visual search strategy.

    Science.gov (United States)

    Uchida, Yusuke; Mizuguchi, Nobuaki; Honda, Masaaki; Kanosue, Kazuyuki

    2014-01-01

    In ball games, players have to pay close attention to visual information in order to predict the movements of both the opponents and the ball. Previous studies have indicated that players primarily utilise cues concerning the ball and opponents' body motion. The information acquired must be effective for observing players to select the subsequent action. The present study evaluated the effects of changes in the video replay speed on the spatial visual search strategy and ability to predict free throw success. We compared eye movements made while observing a basketball free throw by novices and experienced basketball players. Correct response rates were close to chance (50%) at all video speeds for the novices. The correct response rate of experienced players was significantly above chance (and significantly above that of the novices) at the normal speed, but was not different from chance at both slow and fast speeds. Experienced players gazed more on the lower part of the player's body when viewing a normal speed video than the novices. The players likely detected critical visual information to predict shot success by properly moving their gaze according to the shooter's movements. This pattern did not change when the video speed was decreased, but changed when it was increased. These findings suggest that temporal information is important for predicting action outcomes and that such outcomes are sensitive to video speed.

  8. Response Time, Visual Search Strategy, and Anticipatory Skills in Volleyball Players

    Directory of Open Access Journals (Sweden)

    Alessandro Piras

    2014-01-01

    Full Text Available This paper aimed at comparing expert and novice volleyball players in a visuomotor task using realistic stimuli. Videos of a volleyball setter performing offensive action were presented to participants, while their eye movements were recorded by a head-mounted video-based eye tracker. Participants were asked to foresee the direction (forward or backward) of the setter’s toss by pressing one of two keys. Key-press response time, response accuracy, and gaze behaviour were measured from the first frame showing the setter’s hand-ball contact to the button pressed by the participants. Experts were faster and more accurate in predicting the direction of the setting than novices, showing accurate predictions when they used a search strategy involving fewer fixations of longer duration, as well as spending less time in fixating all display areas from which they extract critical information for the judgment. These results are consistent with the view that superior performance in experts is due to their ability to efficiently encode domain-specific information that is relevant to the task.

  9. Operation management of daily economic dispatch using novel hybrid particle swarm optimization and gravitational search algorithm with hybrid mutation strategy

    Science.gov (United States)

    Wang, Yan; Huang, Song; Ji, Zhicheng

    2017-07-01

    This paper presents a hybrid particle swarm optimization and gravitational search algorithm based on a hybrid mutation strategy (HGSAPSO-M) to optimize economic dispatch (ED) including distributed generations (DGs) under market-based energy pricing. A daily ED model was formulated and a hybrid mutation strategy was adopted in HGSAPSO-M. The hybrid mutation strategy combines two mutation operators: chaotic mutation and Gaussian mutation. The proposed algorithm was tested on the IEEE 33-bus system, and the results show that the approach is effective for this problem.
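
    The record does not reproduce the update rules, so as a minimal illustration of the hybrid mutation idea only (perturbing a candidate dispatch vector with either a logistic-map chaotic operator or a Gaussian operator chosen at random), the two operators might be sketched as follows; all function names, bounds and probabilities are illustrative assumptions, not values from the paper.

```python
import random

def chaotic_mutation(position, lower, upper, z=0.7):
    """Replace each coordinate with a logistic-map chaos value remapped into its bounds."""
    mutated = []
    for _, lo, hi in zip(position, lower, upper):
        z = 4.0 * z * (1.0 - z)             # logistic map, stays in (0, 1)
        mutated.append(lo + z * (hi - lo))  # remap the chaos value into [lo, hi]
    return mutated

def gaussian_mutation(position, lower, upper, sigma=0.1):
    """Add bounded Gaussian noise scaled to each variable's range."""
    return [min(hi, max(lo, x + random.gauss(0.0, sigma * (hi - lo))))
            for x, lo, hi in zip(position, lower, upper)]

def hybrid_mutation(position, lower, upper, p_chaotic=0.5):
    """Apply one of the two operators, chosen at random on each call."""
    if random.random() < p_chaotic:
        return chaotic_mutation(position, lower, upper)
    return gaussian_mutation(position, lower, upper)

# Example: mutate a 3-unit dispatch vector (MW outputs of three hypothetical DGs).
print(hybrid_mutation([10.0, 25.0, 40.0], [0.0, 0.0, 0.0], [50.0, 50.0, 50.0]))
```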

  10. Subject search study. Final report

    International Nuclear Information System (INIS)

    Todeschini, C.

    1995-01-01

    The study gathered information on how users search the database of the International Nuclear Information System (INIS), using indicators such as Subject categories, Controlled terms, Subject headings, Free-text words, and combinations of the above. Users participated from the Australian, French, Russian and Spanish INIS Centres, which have different national languages. Participants, both intermediaries and end users, replied to a questionnaire and executed search queries. The INIS Secretariat at the IAEA also participated. A protocol of all search strategies used in actual searches in the database was kept. Both the thought process and the initial search formulation are predominantly non-English among Russian and Spanish users, while the initial formulation tends to be more often in English among French users. A total of 1002 searches were executed by the five INIS centres including the IAEA. The search protocols indicate the following search behaviour: 1) free text words represent about 40% of search points on an average query; 2) descriptors used as search keys have the widest range as percentage of search points, from a low of 25% to a high of 48%; 3) search keys consisting of free text that coincides with a descriptor account for about 15% of search points; 4) Subject Categories are not used in many searches; 5) free text words are present as search points in about 80% of all searches; 6) controlled terms (descriptors) are used very extensively and appear in about 90% of all searches; 7) Subject Headings were used in only a few percent of searches. From the results of the study one can conclude that there is a greater reluctance on the part of non-native English speakers to initiate their searches with free-text word searches. Also: Subject Categories are little used in searching the database; both free text terms and controlled terms are the predominant types of search keys used, whereby the controlled terms are used more

  11. Some aspects of the file organization and retrieval strategy in large data-bases

    International Nuclear Information System (INIS)

    Arnaudov, D.D.; Govorun, N.N.

    1977-01-01

    Methods of organizing a large information retrieval system are described, with special attention paid to file organization. An adaptive file structure is described in more detail. The method discussed makes it possible to organize large files in such a way that the response time of the system is minimized as the file grows. In connection with the retrieval strategy, a method is proposed which uses the frequencies of the descriptors and of descriptor pairs to forecast the expected number of relevant documents. Programmes based on these methods are used in the information retrieval systems of JINR.
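
    The abstract only outlines the forecasting idea, so the following is a rough sketch of how the expected number of relevant documents for a two-descriptor query could be estimated from stored descriptor and descriptor-pair frequencies, falling back on an independence assumption when no pair count is available; the collection size and counts are hypothetical.

```python
# Hypothetical descriptor statistics for a collection of N documents.
N = 100_000
freq = {"REACTOR": 12_000, "SAFETY": 8_000}      # single-descriptor document counts
pair_freq = {("REACTOR", "SAFETY"): 1_500}       # observed co-occurrence counts

def expected_hits(d1, d2):
    """Forecast how many documents match the conjunction d1 AND d2."""
    key = tuple(sorted((d1, d2)))
    if key in pair_freq:                 # use the stored pair frequency if available
        return pair_freq[key]
    # Otherwise fall back on an independence estimate: N * p(d1) * p(d2).
    return freq[d1] * freq[d2] / N

print(expected_hits("REACTOR", "SAFETY"))   # 1500, taken from the pair table
```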

  12. International patent applications for non-injectable naloxone for opioid overdose reversal: Exploratory search and retrieve analysis of the PatentScope database.

    Science.gov (United States)

    McDonald, Rebecca; Danielsson Glende, Øyvind; Dale, Ola; Strang, John

    2018-02-01

    Non-injectable naloxone formulations are being developed for opioid overdose reversal, but only limited data have been published in the peer-reviewed domain. Through examination of a hitherto-unsearched database, we expand public knowledge of non-injectable formulations, tracing their development and novelty, with the aim to describe and compare their pharmacokinetic properties. (i) The PatentScope database of the World Intellectual Property Organization was searched for relevant English-language patent applications; (ii) Pharmacokinetic data were extracted, collated and analysed; (iii) PubMed was searched using Boolean search query '(nasal OR intranasal OR nose OR buccal OR sublingual) AND naloxone AND pharmacokinetics'. Five hundred and twenty-two PatentScope and 56 PubMed records were identified: three published international patent applications and five peer-reviewed papers were eligible. Pharmacokinetic data were available for intranasal, sublingual, and reference routes. Highly concentrated formulations (10-40 mg/mL) had been developed and tested. Sublingual bioavailability was very low (1%; relative to intravenous). Non-concentrated intranasal spray (1 mg/mL; 1 mL per nostril) had low bioavailability (11%). Concentrated intranasal formulations (≥10 mg/mL) had bioavailability of 21-42% (relative to intravenous) and 26-57% (relative to intramuscular), with peak concentrations (dose-adjusted Cmax = 0.8-1.7 ng/mL) reached in 19-30 min (tmax). Exploratory analysis identified intranasal bioavailability as associated positively with dose and negatively with volume. We find consistent direction of development of intranasal sprays to high-concentration, low-volume formulations with bioavailability in the 20-60% range. These have potential to deliver a therapeutic dose in 0.1 mL volume. [McDonald R, Danielsson Glende Ø, Dale O, Strang J. International patent applications for non-injectable naloxone for opioid overdose reversal
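
    For readers unfamiliar with the measure, relative bioavailability of this kind is conventionally computed from dose-normalised exposure; the formula below is the standard textbook definition and is not quoted from the patent applications themselves:

```latex
\[
F_{\text{rel}} = \frac{\mathrm{AUC}_{\text{test}} / D_{\text{test}}}{\mathrm{AUC}_{\text{ref}} / D_{\text{ref}}} \times 100\%,
\]
```

    where AUC is the area under the plasma concentration-time curve, D the administered dose, "test" the intranasal or sublingual product, and "ref" the intravenous or intramuscular reference route.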

  13. Where the bugs are: analyzing distributions of bacterial phyla by descriptor keyword search in the nucleotide database.

    Science.gov (United States)

    Squartini, Andrea

    2011-07-26

    The associations between bacteria and environment underlie their preferential interactions with given physical or chemical conditions. Microbial ecology aims at extracting conserved patterns of occurrence of bacterial taxa in relation to defined habitats and contexts. In the present report the NCBI nucleotide sequence database is used as a dataset to extract information relative to the distribution of each of the 24 phyla of the bacteria superkingdom and of the Archaea. Over two and a half million records are filtered in their cross-association with each of 48 sets of keywords, defined to cover natural or artificial habitats, interactions with plant, animal or human hosts, and physical-chemical conditions. The results are processed showing: (a) how the different descriptors enrich or deplete the proportions at which the phyla occur in the total database; (b) in which order of abundance the different keywords score for each phylum (preferred habitats or conditions), and to what extent phyla are clustered on few descriptors (specific) or spread across many (cosmopolitan); (c) which keywords individuate the communities ranking highest for diversity and evenness. A number of cues emerge from the results, helping to sharpen the picture of the functional and systematic diversity of prokaryotes. Suggestions are given for a future automated service dedicated to refining and updating this kind of analysis via public bioinformatic engines.
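
    The study itself worked on downloaded records, but the kind of cross-association it describes can be approximated with live NCBI E-utilities hit counts. The sketch below, which assumes Biopython is installed and uses an arbitrarily chosen phylum and keyword, estimates how strongly a descriptor enriches a phylum relative to that phylum's share of all bacterial records; it illustrates the idea and is not the author's pipeline.

```python
from Bio import Entrez

Entrez.email = "you@example.org"   # NCBI asks for a contact address (placeholder)

def count(term):
    """Return the number of nucleotide records matching an Entrez query."""
    handle = Entrez.esearch(db="nucleotide", term=term, retmax=0)
    result = Entrez.read(handle)
    handle.close()
    return int(result["Count"])

phylum, keyword = "Cyanobacteria[Organism]", "hot spring"
n_total   = count("Bacteria[Organism]")
n_phylum  = count(phylum)
n_keyword = count(f'Bacteria[Organism] AND "{keyword}"')
n_both    = count(f'{phylum} AND "{keyword}"')

# Enrichment: the phylum's share among keyword-matching records divided by
# its share of all bacterial records; values > 1 indicate over-representation.
enrichment = (n_both / n_keyword) / (n_phylum / n_total)
print(f"{phylum} enrichment for '{keyword}': {enrichment:.2f}")
```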

  14. Search for 5'-leader regulatory RNA structures based on gene annotation aided by the RiboGap database.

    Science.gov (United States)

    Naghdi, Mohammad Reza; Smail, Katia; Wang, Joy X; Wade, Fallou; Breaker, Ronald R; Perreault, Jonathan

    2017-03-15

    The discovery of noncoding RNAs (ncRNAs) and their importance for gene regulation led us to develop bioinformatics tools to pursue the discovery of novel ncRNAs. Finding ncRNAs de novo is challenging, first due to the difficulty of retrieving large numbers of sequences for given gene activities, and second due to exponential demands on calculation needed for comparative genomics on a large scale. Recently, several tools for the prediction of conserved RNA secondary structure were developed, but many of them are not designed to uncover new ncRNAs, or are too slow for conducting analyses on a large scale. Here we present various approaches using the database RiboGap as a primary tool for finding known ncRNAs and for uncovering simple sequence motifs with regulatory roles. This database also can be used to easily extract intergenic sequences of eubacteria and archaea to find conserved RNA structures upstream of given genes. We also show how to extend analysis further to choose the best candidate ncRNAs for experimental validation. Copyright © 2017 Elsevier Inc. All rights reserved.

  15. The Search for Suitable Strategy: Threat-Based and Capabilities-Based Strategies in a Complex World

    Science.gov (United States)

    2016-05-26

    the 1973 Arab-Israeli War show that the true path to suitable strategy is a measure of forethought and theoretical planning exercises to shape habits of thought and identify risks or shortcomings inherent in a chosen... political direction to the military instrument of power, while military strategy links military means to those...

  16. Creating a data exchange strategy for radiotherapy research: Towards federated databases and anonymised public datasets

    International Nuclear Information System (INIS)

    Skripcak, Tomas; Belka, Claus; Bosch, Walter; Brink, Carsten; Brunner, Thomas; Budach, Volker; Büttner, Daniel; Debus, Jürgen; Dekker, Andre; Grau, Cai; Gulliford, Sarah; Hurkmans, Coen; Just, Uwe

    2014-01-01

    Disconnected cancer research data management and lack of information exchange about planned and ongoing research are complicating the utilisation of internationally collected medical information for improving cancer patient care. Rapidly collecting/pooling data can accelerate translational research in radiation therapy and oncology. The exchange of study data is one of the fundamental principles behind data aggregation and data mining. The possibilities of reproducing the original study results, performing further analyses on existing research data to generate new hypotheses or developing computational models to support medical decisions (e.g. risk/benefit analysis of treatment options) represent just a fraction of the potential benefits of medical data-pooling. Distributed machine learning and knowledge exchange from federated databases can be considered as one beyond other attractive approaches for knowledge generation within “Big Data”. Data interoperability between research institutions should be the major concern behind a wider collaboration. Information captured in electronic patient records (EPRs) and study case report forms (eCRFs), linked together with medical imaging and treatment planning data, are deemed to be fundamental elements for large multi-centre studies in the field of radiation therapy and oncology. To fully utilise the captured medical information, the study data have to be more than just an electronic version of a traditional (un-modifiable) paper CRF. Challenges that have to be addressed are data interoperability, utilisation of standards, data quality and privacy concerns, data ownership, rights to publish, data pooling architecture and storage. This paper discusses a framework for conceptual packages of ideas focused on a strategic development for international research data exchange in the field of radiation therapy and oncology

  17. Creating a data exchange strategy for radiotherapy research: towards federated databases and anonymised public datasets.

    Science.gov (United States)

    Skripcak, Tomas; Belka, Claus; Bosch, Walter; Brink, Carsten; Brunner, Thomas; Budach, Volker; Büttner, Daniel; Debus, Jürgen; Dekker, Andre; Grau, Cai; Gulliford, Sarah; Hurkmans, Coen; Just, Uwe; Krause, Mechthild; Lambin, Philippe; Langendijk, Johannes A; Lewensohn, Rolf; Lühr, Armin; Maingon, Philippe; Masucci, Michele; Niyazi, Maximilian; Poortmans, Philip; Simon, Monique; Schmidberger, Heinz; Spezi, Emiliano; Stuschke, Martin; Valentini, Vincenzo; Verheij, Marcel; Whitfield, Gillian; Zackrisson, Björn; Zips, Daniel; Baumann, Michael

    2014-12-01

    Disconnected cancer research data management and lack of information exchange about planned and ongoing research are complicating the utilisation of internationally collected medical information for improving cancer patient care. Rapidly collecting/pooling data can accelerate translational research in radiation therapy and oncology. The exchange of study data is one of the fundamental principles behind data aggregation and data mining. The possibilities of reproducing the original study results, performing further analyses on existing research data to generate new hypotheses or developing computational models to support medical decisions (e.g. risk/benefit analysis of treatment options) represent just a fraction of the potential benefits of medical data-pooling. Distributed machine learning and knowledge exchange from federated databases can be considered as one beyond other attractive approaches for knowledge generation within "Big Data". Data interoperability between research institutions should be the major concern behind a wider collaboration. Information captured in electronic patient records (EPRs) and study case report forms (eCRFs), linked together with medical imaging and treatment planning data, are deemed to be fundamental elements for large multi-centre studies in the field of radiation therapy and oncology. To fully utilise the captured medical information, the study data have to be more than just an electronic version of a traditional (un-modifiable) paper CRF. Challenges that have to be addressed are data interoperability, utilisation of standards, data quality and privacy concerns, data ownership, rights to publish, data pooling architecture and storage. This paper discusses a framework for conceptual packages of ideas focused on a strategic development for international research data exchange in the field of radiation therapy and oncology. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  18. Establishing the ACORN National Practitioner Database: Strategies to Recruit Practitioners to a National Practice-Based Research Network.

    Science.gov (United States)

    Adams, Jon; Steel, Amie; Moore, Craig; Amorin-Woods, Lyndon; Sibbritt, David

    2016-10-01

    The purpose of this paper is to report on the recruitment and promotion strategies employed by the Australian Chiropractic Research Network (ACORN) project aimed at helping recruit a substantial national sample of participants and to describe the features of our practice-based research network (PBRN) design that may provide key insights to others looking to establish a similar network or draw on the ACORN project to conduct sub-studies. The ACORN project followed a multifaceted recruitment and promotion strategy drawing on distinct branding, a practitioner-focused promotion campaign, and a strategically designed questionnaire and distribution/recruitment approach to attract sufficient participation from the ranks of registered chiropractors across Australia. From the 4684 chiropractors registered at the time of recruitment, the project achieved a database response rate of 36% (n = 1680), resulting in a large, nationally representative sample across age, gender, and location. This sample constitutes the largest proportional coverage of participants from any voluntary national PBRN across any single health care profession. It does appear that a number of key promotional and recruitment features of the ACORN project may have helped establish the high response rate for the PBRN, which constitutes an important sustainable resource for future national and international efforts to grow the chiropractic evidence base and research capacity. Further rigorous enquiry is needed to help evaluate the direct contribution of specific promotional and recruitment strategies in attaining high response rates from practitioner populations who may be invited to participate in future PBRNs. Copyright © 2016. Published by Elsevier Inc.

  19. Developing the "Compendium of Strategies to Reduce Teacher Turnover in the Northeast and Islands Region." A Companion to the Database. Issues & Answers. REL 2008-No. 052

    Science.gov (United States)

    Ellis, Pamela; Grogan, Marian; Levy, Abigail Jurist; Tucker-Seeley, Kevon

    2008-01-01

    This report provides state-, regional-, and district-level decisionmakers in the Northeast and Islands Region with a description of the "Compendium of Strategies to Reduce Teacher Turnover in the Northeast and Islands Region," a searchable database of selected profiles of retention strategies implemented in Connecticut, Maine,…

  20. Statistical Measures Alone Cannot Determine Which Database (BNI, CINAHL, MEDLINE, or EMBASE) Is the Most Useful for Searching Undergraduate Nursing Topics. A Review of: Stokes, P., Foster, A., & Urquhart, C. (2009). Beyond relevance and recall: Testing new user-centred measures of database performance. Health Information and Libraries Journal, 26(3), 220-231.

    Directory of Open Access Journals (Sweden)

    Giovanna Badia

    2011-03-01

    Full Text Available Objective – The research project sought to determine which of four databases was the most useful for searching undergraduate nursing topics. Design – Comparative database evaluation. Setting – Nursing and midwifery students at Homerton School of Health Studies (now part of Anglia Ruskin University), Cambridge, United Kingdom, in 2005-2006. Subjects – The subjects were four databases: British Nursing Index (BNI), CINAHL, MEDLINE, and EMBASE. Methods – This was a comparative study using title searches to compare BNI (British Nursing Index), CINAHL, MEDLINE and EMBASE. According to the authors, this is the first study to compare BNI with other databases. BNI is a database produced by British libraries that indexes the nursing and midwifery literature. It covers over 240 British journals, and includes references to articles from health sciences journals that are relevant to nurses and midwives (British Nursing Index, n.d.). The researchers performed keyword searches in the title field of the four databases for the dissertation topics of nine nursing and midwifery students enrolled in undergraduate dissertation modules. The lists of titles of journal articles on their topics were given to the students, who were asked to judge the relevancy of the citations. The title searches were evaluated in each of the databases using the following criteria:
    • precision (the number of relevant results obtained in the database for a search topic, divided by the total number of results obtained in the database search);
    • recall (the number of relevant results obtained in the database for a search topic, divided by the total number of relevant results obtained on that topic from all four database searches);
    • novelty (the number of relevant results that were unique in the database search, calculated as a percentage of the total number of relevant results found in the database);
    • originality (the number of unique relevant results obtained in the
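
    Using the definitions quoted above (the record is cut off before the originality denominator, so that measure is omitted), the three fully stated measures can be computed from per-database result sets; the citation identifiers and counts below are invented for illustration.

```python
# Hypothetical relevant-result sets retrieved by each database for one topic.
relevant = {
    "BNI":     {"a1", "a2", "a3"},
    "CINAHL":  {"a2", "a3", "a4", "a5"},
    "MEDLINE": {"a3", "a5", "a6"},
    "EMBASE":  {"a7"},
}
retrieved_total = {"BNI": 6, "CINAHL": 8, "MEDLINE": 5, "EMBASE": 4}  # all hits, relevant or not

pooled = set().union(*relevant.values())         # relevant results found by any database

for db, rel in relevant.items():
    others = set().union(*(r for d, r in relevant.items() if d != db))
    unique = rel - others                         # relevant hits only this database found
    precision = len(rel) / retrieved_total[db]
    recall    = len(rel) / len(pooled)
    novelty   = len(unique) / len(rel) if rel else 0.0
    print(f"{db:8s} precision={precision:.2f} recall={recall:.2f} novelty={novelty:.0%}")
```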

  1. Three-dimensional high-precision indoor positioning strategy using Tabu search based on visible light communication

    Science.gov (United States)

    Peng, Qi; Guan, Weipeng; Wu, Yuxiang; Cai, Ye; Xie, Canyu; Wang, Pengfei

    2018-01-01

    This paper proposes a three-dimensional (3-D) high-precision indoor positioning strategy using Tabu search based on visible light communication. Tabu search is a powerful global optimization algorithm, and 3-D indoor positioning can be formulated as an optimization problem, so the optimal receiver coordinates can be obtained by the Tabu search algorithm. To the best of our knowledge, this is the first time the Tabu search algorithm has been applied to visible light positioning. Each light-emitting diode (LED) in the system broadcasts a unique identity (ID) and transmits the ID information. When the receiver detects optical signals carrying ID information from different LEDs, the global optimization of the Tabu search algorithm realizes 3-D high-precision indoor positioning once the fitness value meets certain conditions. Simulation results show that the average positioning error is 0.79 cm and the maximum error is 5.88 cm. An extended trajectory-tracking experiment also shows that 95.05% of positioning errors are below 1.428 cm. It can be concluded from the data that 3-D indoor positioning based on the Tabu search algorithm meets the requirements of centimeter-level indoor positioning. The algorithm is effective and practical, and is superior to other existing methods for visible light indoor positioning.
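
    The paper's fitness function and neighbourhood definition are not given in the record, so the following is a hedged sketch of the general approach: a Tabu search over candidate 3-D receiver positions that minimises the mismatch between measured and model-predicted distances to known LED anchors. The room geometry, step size, tabu tenure and noise model are all invented for the example.

```python
import itertools, math, random

LEDS = [(0.0, 0.0, 3.0), (4.0, 0.0, 3.0), (0.0, 4.0, 3.0), (4.0, 4.0, 3.0)]  # LED coordinates (m)
TRUE_POS = (1.2, 2.5, 0.9)
MEASURED = [math.dist(TRUE_POS, led) + random.gauss(0, 0.01) for led in LEDS]  # noisy ranges

def fitness(p):
    """Sum of squared errors between measured and predicted LED distances."""
    return sum((math.dist(p, led) - d) ** 2 for led, d in zip(LEDS, MEASURED))

def neighbours(p, step=0.05):
    """Axis-aligned moves of +/- step around the current position."""
    for dx, dy, dz in itertools.product((-step, 0.0, step), repeat=3):
        if (dx, dy, dz) != (0.0, 0.0, 0.0):
            yield (p[0] + dx, p[1] + dy, p[2] + dz)

def tabu_search(start, iters=300, tabu_len=50):
    current = best = start
    tabu = []                                        # recently visited positions
    for _ in range(iters):
        candidates = [n for n in neighbours(current)
                      if n not in tabu or fitness(n) < fitness(best)]  # aspiration rule
        if not candidates:
            candidates = list(neighbours(current))
        current = min(candidates, key=fitness)
        tabu.append(current)
        if len(tabu) > tabu_len:
            tabu.pop(0)
        if fitness(current) < fitness(best):
            best = current
    return best

estimate = tabu_search(start=(2.0, 2.0, 1.0))
print("estimated position:", [round(c, 3) for c in estimate],
      "error (cm):", round(100 * math.dist(estimate, TRUE_POS), 2))
```

    In a real VLC system the measured quantities would be received signal strengths decoded per LED ID rather than the noiseless ranges simulated here.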

  2. Random searching

    International Nuclear Information System (INIS)

    Shlesinger, Michael F

    2009-01-01

    There are a wide variety of searching problems from molecules seeking receptor sites to predators seeking prey. The optimal search strategy can depend on constraints on time, energy, supplies or other variables. We discuss a number of cases and especially remark on the usefulness of Levy walk search patterns when the targets of the search are scarce.
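
    As a minimal illustration of the Lévy-walk pattern mentioned here (many short steps interleaved with rare long relocations), step lengths can be drawn from a power law by inverse-transform sampling; the exponent and cutoff below are arbitrary choices, not values from the article.

```python
import math, random

def levy_step(mu=2.0, l_min=1.0):
    """Draw a step length with P(l) ~ l^(-mu) for l >= l_min (valid for 1 < mu <= 3)."""
    u = 1.0 - random.random()                 # uniform in (0, 1]
    return l_min * u ** (-1.0 / (mu - 1.0))   # inverse-transform sampling

def levy_walk(n_steps=1000, mu=2.0):
    """2-D walk with heavy-tailed step lengths and uniformly random directions."""
    x = y = 0.0
    path = [(x, y)]
    for _ in range(n_steps):
        theta = random.uniform(0.0, 2.0 * math.pi)
        l = levy_step(mu)
        x, y = x + l * math.cos(theta), y + l * math.sin(theta)
        path.append((x, y))
    return path

path = levy_walk()
print("final displacement:", math.hypot(*path[-1]))
```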

  3. Robots for hazardous duties: Military, space, and nuclear facility applications. (Latest citations from the NTIS bibliographic database). Published Search

    International Nuclear Information System (INIS)

    1993-09-01

    The bibliography contains citations concerning the design and application of robots used in place of humans where the environment could be hazardous. Military applications include autonomous land vehicles, robotic howitzers, and battlefield support operations. Space operations include docking, maintenance, mission support, and intra-vehicular and extra-vehicular activities. Nuclear applications include operations within the containment vessel, radioactive waste operations, fueling operations, and plant security. Many of the articles reference control techniques and the use of expert systems in robotic operations. Applications involving industrial manufacturing, walking robots, and robot welding are cited in other published searches in this series. (Contains a minimum of 183 citations and includes a subject term index and title list.)

  4. Undergraduates Prefer Federated Searching to Searching Databases Individually. A Review of: Belliston, C. Jeffrey, Jared L. Howland, & Brian C. Roberts. “Undergraduate Use of Federated Searching: A Survey of Preferences and Perceptions of Value-Added Functionality.” College & Research Libraries 68.6 (Nov. 2007): 472-86.

    Directory of Open Access Journals (Sweden)

    Genevieve Gore

    2008-09-01

    Full Text Available Objective – To determine whether use of federated searching by undergraduates saves time, meets their information needs, is preferred over searching databases individually, and provides results of higher quality. Design – Crossover study. Setting – Three American universities, all members of the Consortium of Church Libraries & Archives (CCLA): BYU (Brigham Young University), a large research university; BYUH (Brigham Young University – Hawaii), a small baccalaureate college; and BYUI (Brigham Young University – Idaho), a large baccalaureate college. Subjects – Ninety-five participants recruited via e-mail invitations sent to a random sample of currently enrolled undergraduates at BYU, BYUH, and BYUI. Methods – Participants were given written directions to complete a literature search for journal articles on two biology-related topics using two search methods: 1. federated searching with WebFeat® (implemented in the same way for this study at the three universities) and 2. a hyperlinked list of databases to search individually. Both methods used the same set of seven databases. Each topic was assigned in random order to one of the two search methods, also assigned in random order, for a total of two searches per participant. The time to complete the searches was recorded. Students compiled their list of citations, which were later normalized and graded. To analyze the quality of the citations, one quantitative rubric was created by librarians and one qualitative rubric was approved by a faculty member at BYU. The librarian-created rubric included the journal impact factor (from ISI’s Journal Citation Reports®), the proportion of citations from peer-reviewed journals (determined from Ulrichsweb.com™) to total citations, and the timeliness of the articles. The faculty-approved rubric included three criteria: relevance to the topic, quality of the individual citations (good quality: primary research results, peer-reviewed sources, and

  5. More than Just Finding Color: Strategy in Global Visual Search Is Shaped by Learned Target Probabilities

    Science.gov (United States)

    Williams, Carrick C.; Pollatsek, Alexander; Cave, Kyle R.; Stroud, Michael J.

    2009-01-01

    In 2 experiments, eye movements were examined during searches in which elements were grouped into four 9-item clusters. The target (a red or blue "T") was known in advance, and each cluster contained different numbers of target-color elements. Rather than color composition of a cluster invariantly guiding the order of search though…

  6. Search Strategy Development in a Flipped Library Classroom: A Student-Focused Assessment

    Science.gov (United States)

    Goates, Michael C.; Nelson, Gregory M.; Frost, Megan

    2017-01-01

    Librarians at Brigham Young University compared search statement development between traditional lecture and flipped instruction sessions. Students in lecture sessions scored significantly higher on developing search statements than those in flipped sessions. However, student evaluations show a strong preference for pedagogies that incorporate…

  7. Online-Expert: An Expert System for Online Database Selection.

    Science.gov (United States)

    Zahir, Sajjad; Chang, Chew Lik

    1992-01-01

    Describes the design and development of a prototype expert system called ONLINE-EXPERT that helps users select online databases and vendors that meet users' needs. Search strategies are discussed; knowledge acquisition and knowledge bases are described; and the Analytic Hierarchy Process (AHP), a decision analysis technique that ranks databases,…
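
    The record only names the Analytic Hierarchy Process, so the following is a generic sketch of how AHP ranks alternatives: the priority vector is taken from the principal eigenvector of a pairwise comparison matrix. The three databases and the comparison values are hypothetical, not taken from ONLINE-EXPERT.

```python
import numpy as np

# Pairwise comparison matrix (Saaty 1-9 scale) for three hypothetical databases:
# entry [i, j] states how strongly database i is preferred over database j.
A = np.array([
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 2.0],
    [1/5, 1/2, 1.0],
])

eigvals, eigvecs = np.linalg.eig(A)
principal = np.argmax(eigvals.real)               # index of the largest eigenvalue
weights = np.abs(eigvecs[:, principal].real)
weights /= weights.sum()                          # normalise into a priority vector

# Consistency check: CR below about 0.1 is usually acceptable; RI for n=3 is 0.58.
n = A.shape[0]
ci = (eigvals.real[principal] - n) / (n - 1)
print("priorities:", np.round(weights, 3), "CR:", round(ci / 0.58, 3))
```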

  8. Citation searches are more sensitive than keyword searches to identify studies using specific measurement instruments.

    Science.gov (United States)

    Linder, Suzanne K; Kamath, Geetanjali R; Pratt, Gregory F; Saraykar, Smita S; Volk, Robert J

    2015-04-01

    To compare the effectiveness of two search methods in identifying studies that used the Control Preferences Scale (CPS), a health care decision-making instrument commonly used in clinical settings. We searched the literature using two methods: (1) keyword searching using variations of "Control Preferences Scale" and (2) cited reference searching using two seminal CPS publications. We searched three bibliographic databases [PubMed, Scopus, and Web of Science (WOS)] and one full-text database (Google Scholar). We report precision and sensitivity as measures of effectiveness. Keyword searches in bibliographic databases yielded high average precision (90%) but low average sensitivity (16%). PubMed was the most precise, followed closely by Scopus and WOS. The Google Scholar keyword search had low precision (54%) but provided the highest sensitivity (70%). Cited reference searches in all databases yielded moderate sensitivity (45-54%), but precision ranged from 35% to 75% with Scopus being the most precise. Cited reference searches were more sensitive than keyword searches, making it a more comprehensive strategy to identify all studies that use a particular instrument. Keyword searches provide a quick way of finding some but not all relevant articles. Goals, time, and resources should dictate the combination of which methods and databases are used. Copyright © 2015 Elsevier Inc. All rights reserved.

  9. Database Description - ASTRA | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available Database Description – General information of database. Database name: ASTRA. Alternative n...tics Journal Search: Contact address. Database classification: Nucleotide Sequence Databases - Gene structure,...3702 Taxonomy Name: Oryza sativa Taxonomy ID: 4530. Database description: The database represents classified p...(10):1211-6. External Links: Original website information. Database maintenance site: National Institute of Ad... For user registration: Not available.

  10. Managing search strategies for open innovation : the role of environmental munificence as well as internal and external R&D

    OpenAIRE

    Sofka, Wolfgang; Grimpe, Christoph

    2008-01-01

    Firms compete increasingly in an open innovation environment. Search strategies for external knowledge therefore become crucial for firm success. Existing research differentiates between the breadth (diversity) and depth (intensity) with which firms pursue external knowledge sources. A consensus exists that resource constraints force firms to balance both dimensions. However, relatively little is known about how managers can selectively strengthen one of these dimensions. We argue conceptually tha...

  11. the Ĝ infrared search for extraterrestrial civilizations with large energy supplies. II. Framework, strategy, and first result

    Energy Technology Data Exchange (ETDEWEB)

    Wright, J. T.; Griffith, R. L.; Sigurdsson, S. [Department of Astronomy and Astrophysics, 525 Davey Lab, The Pennsylvania State University, University Park, PA, 16802 (United States); Povich, M. S. [Department of Physics and Astronomy, California State Polytechnic University, Pomona, 3801 West Temple Avenue, Pomona, CA 91768 (United States); Mullan, B. [Blue Marble Space Institution of Science, P.O. Box 85561, Seattle, WA 98145-1561 (United States)

    2014-09-01

    We describe the framework and strategy of the Ĝ infrared search for extraterrestrial civilizations with large energy supplies, which will use the wide-field infrared surveys of WISE and Spitzer to search for these civilizations' waste heat. We develop a formalism for translating mid-infrared photometry into quantitative upper limits on extraterrestrial energy supplies. We discuss the likely sources of false positives, how dust can and will contaminate our search, and prospects for distinguishing dust from alien waste heat. We argue that galaxy-spanning civilizations may be easier to distinguish from natural sources than circumstellar civilizations (i.e., Dyson spheres), although GAIA will significantly improve our capability to identify the latter. We present a zeroth order null result of our search based on the WISE all-sky catalog: we show, for the first time, that Kardashev Type III civilizations (as Kardashev originally defined them) are very rare in the local universe. More sophisticated searches can extend our methodology to smaller waste heat luminosities, and potentially entirely rule out (or detect) both Kardashev Type III civilizations and new physics that allows for unlimited 'free' energy generation.

  12. the Ĝ infrared search for extraterrestrial civilizations with large energy supplies. II. Framework, strategy, and first result

    International Nuclear Information System (INIS)

    Wright, J. T.; Griffith, R. L.; Sigurdsson, S.; Povich, M. S.; Mullan, B.

    2014-01-01

    We describe the framework and strategy of the Ĝ infrared search for extraterrestrial civilizations with large energy supplies, which will use the wide-field infrared surveys of WISE and Spitzer to search for these civilizations' waste heat. We develop a formalism for translating mid-infrared photometry into quantitative upper limits on extraterrestrial energy supplies. We discuss the likely sources of false positives, how dust can and will contaminate our search, and prospects for distinguishing dust from alien waste heat. We argue that galaxy-spanning civilizations may be easier to distinguish from natural sources than circumstellar civilizations (i.e., Dyson spheres), although GAIA will significantly improve our capability to identify the latter. We present a zeroth order null result of our search based on the WISE all-sky catalog: we show, for the first time, that Kardashev Type III civilizations (as Kardashev originally defined them) are very rare in the local universe. More sophisticated searches can extend our methodology to smaller waste heat luminosities, and potentially entirely rule out (or detect) both Kardashev Type III civilizations and new physics that allows for unlimited 'free' energy generation.

  13. The Ĝ Infrared Search for Extraterrestrial Civilizations with Large Energy Supplies. II. Framework, Strategy, and First Result

    Science.gov (United States)

    Wright, J. T.; Griffith, R. L.; Sigurdsson, S.; Povich, M. S.; Mullan, B.

    2014-09-01

    We describe the framework and strategy of the Ĝ infrared search for extraterrestrial civilizations with large energy supplies, which will use the wide-field infrared surveys of WISE and Spitzer to search for these civilizations' waste heat. We develop a formalism for translating mid-infrared photometry into quantitative upper limits on extraterrestrial energy supplies. We discuss the likely sources of false positives, how dust can and will contaminate our search, and prospects for distinguishing dust from alien waste heat. We argue that galaxy-spanning civilizations may be easier to distinguish from natural sources than circumstellar civilizations (i.e., Dyson spheres), although GAIA will significantly improve our capability to identify the latter. We present a zeroth order null result of our search based on the WISE all-sky catalog: we show, for the first time, that Kardashev Type III civilizations (as Kardashev originally defined them) are very rare in the local universe. More sophisticated searches can extend our methodology to smaller waste heat luminosities, and potentially entirely rule out (or detect) both Kardashev Type III civilizations and new physics that allows for unlimited "free" energy generation.

  14. Application of SEO (Search Engine Optimization) Techniques to Websites in an Internet Marketing Strategy

    Directory of Open Access Journals (Sweden)

    Rony Baskoro Lukito

    2014-12-01

    Full Text Available The purpose of this research is to determine how to optimize a web design so that it increases the number of visitors. The number of Internet users in the world continues to grow in line with advances in information technology. Marketing of products and services is no longer limited to printed and electronic media. Moreover, the cost of using the Internet as a marketing medium is relatively inexpensive compared with television. The Internet reaches users 24 hours a day in different parts of the world. However, for a website to be visited by many Internet users, it is not enough for the site to look good from the outside. Websites that serve as a marketing medium must be built according to the correct rules so that they become optimal marketing media. One of these rules is ensuring that the site's content is indexed well in search engines such as Google. Indexing optimization here focuses on Google, since 83% of Internet users across the world use Google as their search engine. Search engine optimization, commonly known as SEO, is an important set of rules that make a site easier for users to find with the desired keywords.

  15. Download - Trypanosomes Database | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available Trypanosomes Database – Download. First of all, please read the license of this database. Data ...1.4 KB) Simple search and download. Download via FTP: the FTP server is sometimes jammed; if it is, access [here].

  16. Inability to acquire spatial information and deploy spatial search strategies in mice with lesions in dorsomedial striatum.

    Science.gov (United States)

    Pooters, Tine; Gantois, Ilse; Vermaercke, Ben; D'Hooge, Rudi

    2016-02-01

    Dorsal striatum has been shown to contribute to spatial learning and memory, but the role of striatal subregions in this important aspect of cognitive functioning remains unclear. Moreover, the spatial-cognitive mechanisms that underlie the involvement of these regions in spatial navigation have scarcely been studied. We therefore compared spatial learning and memory performance in mice with lesions in dorsomedial (DMS) and dorsolateral striatum (DLS) using the hidden-platform version of the Morris water maze (MWM) task. Compared to sham-operated controls, animals with DMS damage were impaired during MWM acquisition training. These mice displayed delayed spatial learning, increased thigmotaxis, and increased search distance to the platform, in the absence of major motor dysfunction, working memory defects or changes in anxiety or exploration. They failed to show a preference for the target quadrant during probe trials, which further indicates that spatial reference memory was impaired in these animals. Search strategy analysis moreover demonstrated that DMS-lesioned mice were unable to deploy cognitively advanced spatial search strategies. Conversely, MWM performance was barely affected in animals with lesions in DLS. In conclusion, our results indicate that DMS and DLS display differential functional involvement in spatial learning and memory. Our results show that DMS, but not DLS, is crucial for the ability of mice to acquire spatial information and their subsequent deployment of spatial search strategies. These data clearly identify DMS as a crucial brain structure for spatial learning and memory, which could explain the occurrence of neurocognitive impairments in brain disorders that affect the dorsal striatum. Copyright © 2015 Elsevier B.V. All rights reserved.

  17. Database Description - TMFunction | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available ...sidue (or mutant) in a protein. The experimental data are collected from the literature both by searching th... the sequence database UniProt, the structural database PDB, and the literature database

  18. License - Trypanosomes Database | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available Trypanoso... Attribution-Share Alike 2.1 Japan. If you use data from this database, please be sure to attribute this database as follows: Trypanoso...

  19. Looking for trouble: a description of oculomotor search strategies during live CCTV operation

    OpenAIRE

    Stainer, Matthew J.; Scott-Brown, Kenneth C.; Tatler, Benjamin W.

    2013-01-01

    Recent research has begun to address how CCTV operators in the modern control room attempt to search for crime (e.g., Howard et al., 2011). However, an often-neglected element of the CCTV task is that the operators have at their disposal a multiplexed wall of scenes, and a single spot-monitor on which they can select any of these feeds for inspection. Here we examined how two trained CCTV operators used these sources of information to search for crime during a morning, afternoon, and night-tim...

  20. Children’s information retrieval: beyond examining search strategies and interfaces

    NARCIS (Netherlands)

    Jochmann-Mannak, Hanna; Huibers, Theo W.C.; Sanders, T.J.M.

    2008-01-01

    The study of children’s information retrieval is still for the greater part untouched territory. Meanwhile, children can become lost in the digital information world, because they are confronted with search interfaces, both designed by and for adults. Most current research on children’s information

  1. Hybrid Multiple Soft-Sensor Models of Grinding Granularity Based on Cuckoo Searching Algorithm and Hysteresis Switching Strategy

    Directory of Open Access Journals (Sweden)

    Jie-Sheng Wang

    2015-01-01

    Full Text Available According to the characteristics of the grinding process and the accuracy requirements of its technical indicators, a hybrid multiple soft-sensor modeling method for grinding granularity is proposed based on the cuckoo search (CS) algorithm and a hysteresis switching (HS) strategy. Firstly, a mechanism soft-sensor model of grinding granularity is derived from the process characteristics and a large amount of experimental data from the grinding process. Meanwhile, a BP neural network soft-sensor model and a wavelet neural network (WNN) soft-sensor model are set up. Then, the hybrid multiple soft-sensor model based on the hysteresis switching strategy is realized; that is, the optimum model is selected as the current predictive model according to the switching performance index at each sampling instant. Finally, the cuckoo search algorithm is adopted to optimize the performance parameters of the hysteresis switching strategy. Simulation results show that the proposed model has better generalization and prediction precision, and can satisfy the real-time control requirements of the grinding classification process.
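
    The switching performance index used in the paper is not given in the record; the sketch below therefore only illustrates the general hysteresis-switching idea, namely that the active predictor is replaced only when a rival model's recent error undercuts the active model's error by a fixed margin. The two toy models, the margin and the data are invented.

```python
# Two toy soft-sensor models predicting granularity from a single input feature.
models = {
    "BP-like":  lambda u: 0.50 * u + 1.0,
    "WNN-like": lambda u: 0.45 * u + 1.6,
}

def hysteresis_switch(samples, margin=0.05, window=5):
    """Pick the active model for each sample; switch only when a rival's recent
    mean absolute error undercuts the active model's by `margin`."""
    errors = {name: [] for name in models}
    active = "BP-like"
    picks = []
    for u, y_true in samples:
        for name, f in models.items():
            errors[name].append(abs(f(u) - y_true))
            errors[name] = errors[name][-window:]          # keep a sliding window
        score = {name: sum(e) / len(e) for name, e in errors.items()}
        for name in models:
            if name != active and score[name] + margin < score[active]:
                active = name                               # hysteresis threshold crossed
        picks.append(active)
    return picks

# Hypothetical (input, measured granularity) pairs.
data = [(1.0, 1.52), (2.0, 2.05), (3.0, 2.91), (4.0, 3.42), (5.0, 3.85)]
print(hysteresis_switch(data))
```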

  2. Pathological and Biochemical Outcomes among African-American and Caucasian Men with Low Risk Prostate Cancer in the SEARCH Database: Implications for Active Surveillance Candidacy.

    Science.gov (United States)

    Leapman, Michael S; Freedland, Stephen J; Aronson, William J; Kane, Christopher J; Terris, Martha K; Walker, Kelly; Amling, Christopher L; Carroll, Peter R; Cooperberg, Matthew R

    2016-11-01

    Racial disparities in the incidence and risk profile of prostate cancer at diagnosis among African-American men are well reported. However, it remains unclear whether African-American race is independently associated with adverse outcomes in men with clinical low risk disease. We retrospectively analyzed the records of 895 men in the SEARCH (Shared Equal Access Regional Cancer Hospital) database in whom clinical low risk prostate cancer was treated with radical prostatectomy. Associations of African-American and Caucasian race with pathological biochemical recurrence outcomes were examined using chi-square, logistic regression, log rank and Cox proportional hazards analyses. We identified 355 African-American and 540 Caucasian men with low risk tumors in the SEARCH cohort who were followed a median of 6.3 years. Following adjustment for relevant covariates African-American race was not significantly associated with pathological upgrading (OR 1.33, p = 0.12), major upgrading (OR 0.58, p = 0.10), up-staging (OR 1.09, p = 0.73) or positive surgical margins (OR 1.04, p = 0.81). Five-year recurrence-free survival rates were 73.4% in African-American men and 78.4% in Caucasian men (log rank p = 0.18). In a Cox proportional hazards analysis model African-American race was not significantly associated with biochemical recurrence (HR 1.11, p = 0.52). In a cohort of patients at clinical low risk who were treated with prostatectomy in an equal access health system with a high representation of African-American men we observed no significant differences in the rates of pathological upgrading, up-staging or biochemical recurrence. These data support continued use of active surveillance in African-American men. Upgrading and up-staging remain concerning possibilities for all men regardless of race. Copyright © 2016 American Urological Association Education and Research, Inc. Published by Elsevier Inc. All rights reserved.

  3. Radiotherapy Planning Using an Improved Search Strategy in Particle Swarm Optimization.

    Science.gov (United States)

    Modiri, Arezoo; Gu, Xuejun; Hagan, Aaron M; Sawant, Amit

    2017-05-01

    Evolutionary stochastic global optimization algorithms are widely used in large-scale, nonconvex problems. However, enhancing the search efficiency and repeatability of these techniques often requires well-customized approaches. This study investigates one such approach. We use the particle swarm optimization (PSO) algorithm to solve a 4D radiation therapy (RT) inverse planning problem, where the key idea is to use respiratory motion as an additional degree of freedom in lung cancer RT. The primary goal is to administer a lethal dose to the tumor target while sparing surrounding healthy tissue. Our optimization iteratively adjusts radiation fluence-weights for all beam apertures across all respiratory phases. We implement three PSO-based approaches: the conventionally used unconstrained approach, a hard-constrained approach, and our proposed virtual search. As proof of concept, five lung cancer patient cases are optimized over ten runs using each PSO approach. For comparison, a dynamically penalized likelihood (DPL) algorithm, a popular RT optimization technique, is also implemented and used. The proposed technique significantly improves the robustness to random initialization while requiring fewer iteration cycles to converge across all cases. DPL manages to find the global optimum in 2 out of 5 RT cases over significantly more iterations. The proposed virtual search approach boosts the swarm search efficiency, and consequently improves the optimization convergence rate and robustness for PSO. RT planning is a large-scale, nonconvex optimization problem, where finding optimal solutions in a clinically practical time is critical. Our proposed approach can potentially improve the optimization efficiency in similar time-sensitive problems.
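
    For readers unfamiliar with PSO, the canonical velocity and position updates that all three variants build on are shown below; this is the textbook form, not necessarily the exact parametrisation used in the study.

```latex
\[
v_i^{k+1} = w\, v_i^{k}
          + c_1 r_1 \bigl(p_i^{\text{best}} - x_i^{k}\bigr)
          + c_2 r_2 \bigl(g^{\text{best}} - x_i^{k}\bigr),
\qquad
x_i^{k+1} = x_i^{k} + v_i^{k+1},
\]
```

    where $x_i^k$ and $v_i^k$ are particle $i$'s fluence-weight vector and velocity at iteration $k$, $w$ is the inertia weight, $c_1$ and $c_2$ are acceleration coefficients, $r_1, r_2 \sim U(0,1)$, and $p_i^{\text{best}}$ and $g^{\text{best}}$ are the personal and global best positions found so far.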

  4. Intermittent random walks for an optimal search strategy: one-dimensional case

    International Nuclear Information System (INIS)

    Oshanin, G; Wio, H S; Lindenberg, K; Burlatsky, S F

    2007-01-01

    We study the search kinetics of an immobile target by a concentration of randomly moving searchers. The object of the study is to optimize the probability of detection within the constraints of our model. The target is hidden on a one-dimensional lattice in the sense that searchers have no a priori information about where it is, and may detect it only upon encounter. The searchers perform random walks in discrete time n = 0,1,2,...,N, where N is the maximal time the search process is allowed to run. With probability α the searchers step on a nearest-neighbour, and with probability (1-α) they leave the lattice and stay off until they land back on the lattice at a fixed distance L away from the departure point. The random walk is thus intermittent. We calculate the probability P_N that the target remains undetected up to the maximal search time N, and seek to minimize this probability. We find that P_N is a non-monotonic function of α, and show that there is an optimal choice α_opt(N) of α well within the intermittent regime, 0 < α_opt(N) < 1, at which P_N can be orders of magnitude smaller compared to the 'pure' random walk cases α = 0 and α = 1
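
    A direct Monte Carlo estimate of the non-detection probability P_N is easy to sketch. The simulation below simplifies the model in two declared ways (a single searcher instead of a concentration of searchers, and an off-lattice relocation that is assumed to cost one time step), and the lattice size, N, L and trial count are arbitrary illustration values.

```python
import random

def undetected_probability(alpha, N=200, L=10, size=201, trials=5000):
    """Monte Carlo estimate of P_N, the probability that a single intermittent
    random walker fails to find a hidden target within N steps (periodic lattice)."""
    misses = 0
    for _ in range(trials):
        target = random.randrange(size)
        x = random.randrange(size)                 # searcher starts at a uniform site
        found = (x == target)
        for _ in range(N):
            if found:
                break
            if random.random() < alpha:            # nearest-neighbour step
                x = (x + random.choice((-1, 1))) % size
            else:                                  # off-lattice relocation by +/- L
                x = (x + random.choice((-L, L))) % size
            found = (x == target)
        misses += not found
    return misses / trials

for a in (0.0, 0.3, 0.6, 0.9, 1.0):
    print(f"alpha={a:.1f}  P_N ~ {undetected_probability(a):.3f}")
```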

  5. Personalized Search

    CERN Document Server

    AUTHOR|(SzGeCERN)749939

    2015-01-01

    As the volume of electronically available information grows, relevant items become harder to find. This work presents an approach to personalizing search results in scientific publication databases. It focuses on re-ranking search results from existing search engines like Solr or ElasticSearch, and includes the development of Obelix, a new recommendation system used to re-rank search results. The project was proposed and performed at CERN, using the scientific publications available on the CERN Document Server (CDS). This work experiments with re-ranking using offline and online evaluation of users and documents in CDS. The experiments conclude that the personalized search results outperform both latest-first and word-similarity ranking in terms of click position in the search results for global search in CDS.

  6. "Gone are the days of mass-media marketing plans and short term customer relationships": tobacco industry direct mail and database marketing strategies.

    Science.gov (United States)

    Lewis, M Jane; Ling, Pamela M

    2016-07-01

    As limitations on traditional marketing tactics and scrutiny by tobacco control have increased, the tobacco industry has benefited from direct mail marketing which transmits marketing messages directly to carefully targeted consumers utilising extensive custom consumer databases. However, research in these areas has been limited. This is the first study to examine the development, purposes and extent of direct mail and customer databases. We examined direct mail and database marketing by RJ Reynolds and Philip Morris utilising internal tobacco industry documents from the Legacy Tobacco Document Library employing standard document research techniques. Direct mail marketing utilising industry databases began in the 1970s and grew from the need for a promotional strategy to deal with declining smoking rates, growing numbers of products and a cluttered media landscape. Both RJ Reynolds and Philip Morris started with existing commercial consumer mailing lists, but subsequently decided to build their own databases of smokers' names, addresses, brand preferences, purchase patterns, interests and activities. By the mid-1990s both RJ Reynolds and Philip Morris databases contained at least 30 million smokers' names each. These companies valued direct mail/database marketing's flexibility, efficiency and unique ability to deliver specific messages to particular groups as well as direct mail's limited visibility to tobacco control, public health and regulators. Database marketing is an important and increasingly sophisticated tobacco marketing strategy. Additional research is needed on the prevalence of receipt and exposure to direct mail items and their influence on receivers' perceptions and smoking behaviours. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/

  7. “Gone are the days of mass-media marketing plans and short term customer relationships”: tobacco industry direct mail and database marketing strategies

    Science.gov (United States)

    Lewis, M Jane; Ling, Pamela M

    2015-01-01

    Background As limitations on traditional marketing tactics and scrutiny by tobacco control have increased, the tobacco industry has benefited from direct mail marketing which transmits marketing messages directly to carefully targeted consumers utilising extensive custom consumer databases. However, research in these areas has been limited. This is the first study to examine the development, purposes and extent of direct mail and customer databases. Methods We examined direct mail and database marketing by RJ Reynolds and Philip Morris utilising internal tobacco industry documents from the Legacy Tobacco Document Library employing standard document research techniques. Results Direct mail marketing utilising industry databases began in the 1970s and grew from the need for a promotional strategy to deal with declining smoking rates, growing numbers of products and a cluttered media landscape. Both RJ Reynolds and Philip Morris started with existing commercial consumer mailing lists, but subsequently decided to build their own databases of smokers’ names, addresses, brand preferences, purchase patterns, interests and activities. By the mid-1990s both RJ Reynolds and Philip Morris databases contained at least 30 million smokers’ names each. These companies valued direct mail/database marketing’s flexibility, efficiency and unique ability to deliver specific messages to particular groups as well as direct mail’s limited visibility to tobacco control, public health and regulators. Conclusions Database marketing is an important and increasingly sophisticated tobacco marketing strategy. Additional research is needed on the prevalence of receipt and exposure to direct mail items and their influence on receivers’ perceptions and smoking behaviours. PMID:26243810

  8. HMMerThread: detecting remote, functional conserved domains in entire genomes by combining relaxed sequence-database searches with fold recognition.

    Directory of Open Access Journals (Sweden)

    Charles Richard Bradshaw

    Full Text Available Conserved domains in proteins are one of the major sources of functional information for experimental design and genome-level annotation. Though search tools for conserved domain databases such as Hidden Markov Models (HMMs) are sensitive in detecting conserved domains in proteins when they share sufficient sequence similarity, they tend to miss more divergent family members, as they lack a reliable statistical framework for the detection of low sequence similarity. We have developed a greatly improved HMMerThread algorithm that can detect remotely conserved domains in highly divergent sequences. HMMerThread combines relaxed conserved domain searches with fold recognition to eliminate false positive, sequence-based identifications. With an accuracy of 90%, our software is able to automatically predict highly divergent members of conserved domain families with an associated 3-dimensional structure. We give additional confidence to our predictions by validation across species. We have run HMMerThread searches on eight proteomes including human and present a rich resource of remotely conserved domains, which adds significantly to the functional annotation of entire proteomes. We find ∼4500 cross-species validated, remotely conserved domain predictions in the human proteome alone. As an example, we find a DNA-binding domain in the C-terminal part of the A-kinase anchor protein 10 (AKAP10), a PKA adaptor that has been implicated in cardiac arrhythmias and premature cardiac death, which upon stress likely translocates from mitochondria to the nucleus/nucleolus. Based on our prediction, we propose that with this HLH-domain, AKAP10 is involved in the transcriptional control of stress response. Further remotely conserved domains we discuss are examples from areas such as sporulation, chromosome segregation and signalling during immune response. The HMMerThread algorithm is able to automatically detect the presence of remotely conserved domains in

  9. A database of linear codes over F_13 with minimum distance bounds and new quasi-twisted codes from a heuristic search algorithm

    Directory of Open Access Journals (Sweden)

    Eric Z. Chen

    2015-01-01

    Full Text Available Error control codes have been widely used in data communications and storage systems. One central problem in coding theory is to optimize the parameters of a linear code and construct codes with best possible parameters. There are tables of best-known linear codes over finite fields of sizes up to 9. Recently, there has been a growing interest in codes over $\mathbb{F}_{13}$ and other fields of size greater than 9. The main purpose of this work is to present a database of best-known linear codes over the field $\mathbb{F}_{13}$ together with upper bounds on the minimum distances. To find good linear codes to establish lower bounds on minimum distances, an iterative heuristic computer search algorithm is employed to construct quasi-twisted (QT) codes over the field $\mathbb{F}_{13}$ with high minimum distances. A large number of new linear codes have been found, improving previously best-known results. Tables of $[pm, m]$ QT codes over $\mathbb{F}_{13}$ with best-known minimum distances as well as a table of lower and upper bounds on the minimum distances for linear codes of length up to 150 and dimension up to 6 are presented.
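
    As a small illustration of the parameters being tabulated, the minimum distance of a linear $[n, k]$ code over $\mathbb{F}_{13}$ can be brute-forced from its generator matrix by enumerating all non-zero messages (feasible only for small $k$, since there are $13^k - 1$ of them); the generator matrix below is an arbitrary toy example, not one of the new QT codes from the paper.

```python
import itertools

Q = 13  # field size

# Toy generator matrix of a [6, 2] code over F_13 (rows are basis codewords).
G = [
    [1, 0, 1, 2, 3, 4],
    [0, 1, 5, 6, 7, 8],
]

def min_distance(G, q=Q):
    """Brute-force the minimum Hamming distance of the code generated by G over F_q."""
    k, n = len(G), len(G[0])
    best = n
    for msg in itertools.product(range(q), repeat=k):
        if all(m == 0 for m in msg):
            continue                                     # skip the zero codeword
        codeword = [sum(m * g for m, g in zip(msg, col)) % q
                    for col in zip(*G)]                  # encode: msg * G over F_q
        best = min(best, sum(c != 0 for c in codeword))  # Hamming weight
    return best

# For a linear code, the minimum distance equals the minimum non-zero codeword weight.
print("minimum distance:", min_distance(G))
```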

  10. Commissioning of the CMS muon detector and development of generic search strategies for new physics

    International Nuclear Information System (INIS)

    Biallass, Philipp Alexander

    2009-01-01

    The detection and reconstruction of cosmic ray muons is important for the commissioning phase and alignment of the Compact Muon Solenoid experiment (CMS), in particular during the early phases of operation with physics collisions. In this context the Magnet Test/Cosmic Challenge (MTCC), with its comprehensive cosmic data taking periods including the presence of the magnetic field, served as a dress rehearsal of detector hardware and software for the upcoming start-up of the CMS detector. In addition to data taking, the comparison with simulated events is a crucial part of physics analyses. The first part of this thesis introduces a new cosmic muon generator, CMSCGEN, and presents its validation by comparing its predictions with data from the MTCC. As an example, results from a reconstruction study using the barrel muon system are shown, comparing data and Monte Carlo prediction at the level of single chambers up to reconstructed tracks including momentum measurements. Since leptons (electrons, muons) constitute very clean signatures for signals of new physics, these commissioning and alignment procedures are also vital to most physics analyses. In the second part of this thesis a model-independent search approach for new physics within CMS is presented, utilizing events with leptons and relying only on the knowledge of the Standard Model simulation. Such an analysis can contribute to the understanding of the detector and the tuning of the event generators. Due to the absence of a theoretical bias, this approach is sensitive to a variety of models, including those not yet thought of. Within this feasibility study, events are classified according to their particle content (muons, electrons, photons, jets, missing energy) into so-called event classes. A broad scan of various distributions is performed, identifying significant deviations from the SM Monte Carlo simulation. The importance of systematic uncertainties is outlined, which are taken into account rigorously

  11. Commissioning of the CMS muon detector and development of generic search strategies for new physics

    Energy Technology Data Exchange (ETDEWEB)

    Biallass, Philipp Alexander

    2009-03-27

    The detection and reconstruction of cosmic ray muons is important for the commissioning phase and alignment of the Compact Muon Solenoid experiment (CMS), in particular during the early phases of operation with physics collisions. In this context the Magnet Test/Cosmic Challenge (MTCC), with its comprehensive cosmic data taking periods including the presence of the magnetic field, served as a dress rehearsal of detector hardware and software for the upcoming start-up of the CMS detector. In addition to data taking, the comparison with simulated events is a crucial part of physics analyses. The first part of this thesis introduces a new cosmic muon generator, CMSCGEN, and presents its validation by comparing its predictions with data from the MTCC. As an example, results from a reconstruction study using the barrel muon system are shown, comparing data and Monte Carlo prediction at the level of single chambers up to reconstructed tracks including momentum measurements. Since leptons (electrons, muons) constitute very clean signatures for signals of new physics, these commissioning and alignment procedures are also vital to most physics analyses. In the second part of this thesis a model-independent search approach for new physics within CMS is presented, utilizing events with leptons and relying only on the knowledge of the Standard Model simulation. Such an analysis can contribute to the understanding of the detector and the tuning of the event generators. Due to the absence of a theoretical bias, this approach is sensitive to a variety of models, including those not yet thought of. Within this feasibility study, events are classified according to their particle content (muons, electrons, photons, jets, missing energy) into so-called event classes. A broad scan of various distributions is performed, identifying significant deviations from the SM Monte Carlo simulation. The importance of systematic uncertainties is outlined, which are taken into account rigorously

  12. Towards a new strategy of searching for QCD phase transition in heavy ion collisions

    International Nuclear Information System (INIS)

    Ploszajczak, M.; Shanenko, A.A.; Toneev, V.D.; Joint Inst. for Nuclear Research, Dubna

    1995-01-01

    The Hung and Shuryak arguments are reconsidered in favour of searching for the deconfinement phase transition in heavy ion collisions downward from the nominal SPS energy, at E_lab ∼ 30 GeV/A, where the fireball lifetime is the longest. Using the recent lattice QCD data and the mixed phase model, it is shown that the deconfinement transition might occur at bombarding energies as low as E_lab = 3-5 GeV/A. Attention is drawn to the study of the mixed phase of nuclear matter in the collision energy range E_lab = 2-10 GeV/A. (author)

  13. The use of a genetic algorithm-based search strategy in geostatistics: application to a set of anisotropic piezometric head data

    Science.gov (United States)

    Abedini, M. J.; Nasseri, M.; Burn, D. H.

    2012-04-01

    In any geostatistical study, an important consideration is the choice of an appropriate, repeatable, and objective search strategy that controls the nearby samples to be included in the location-specific estimation procedure. Almost all geostatistical software available in the market puts the onus on the user to supply search strategy parameters in a heuristic manner. These parameters are solely controlled by geographical coordinates that are defined for the entire area under study, and the user has no guidance as to how to choose these parameters. The main thesis of the current study is that the selection of search strategy parameters has to be driven by data—both the spatial coordinates and the sample values—and cannot be chosen beforehand. For this purpose, a genetic-algorithm-based ordinary kriging with moving neighborhood technique is proposed. The search capability of a genetic algorithm is exploited to search the feature space for appropriate, either local or global, search strategy parameters. Radius of circle/sphere and/or radii of standard or rotated ellipse/ellipsoid are considered as the decision variables to be optimized by GA. The superiority of GA-based ordinary kriging is demonstrated through application to the Wolfcamp Aquifer piezometric head data. Assessment of numerical results showed that definition of search strategy parameters based on both geographical coordinates and sample values improves cross-validation statistics when compared with that based on geographical coordinates alone. In the case of a variable search neighborhood for each estimation point, optimization of local search strategy parameters for an elliptical support domain—the orientation of which is dictated by anisotropic axes—via GA was able to capture the dynamics of piezometric head in west Texas/New Mexico in an efficient way.
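
    As a loose illustration of the idea that search-neighbourhood parameters should be tuned by the data rather than fixed a priori, the sketch below uses a tiny real-coded genetic algorithm to select a single search radius by minimizing a leave-one-out error. It deliberately substitutes a simple inverse-distance estimator and synthetic data for the paper's ordinary kriging and Wolfcamp data, so it mirrors only the general strategy, not the published method.

      # Simplified sketch only: a tiny real-coded GA tunes one search radius for a
      # leave-one-out inverse-distance estimator. This stands in for the paper's
      # GA-driven ordinary kriging with moving neighbourhood; data and objective
      # are synthetic placeholders.
      import numpy as np

      rng = np.random.default_rng(0)
      xy = rng.uniform(0, 100, size=(60, 2))                        # sample coordinates
      z = np.sin(xy[:, 0] / 20.0) + 0.1 * rng.standard_normal(60)   # sample values

      def loo_error(radius):
          """Leave-one-out error of inverse-distance weighting restricted to
          neighbours lying within `radius` of the estimation point."""
          err = 0.0
          for i in range(len(z)):
              d = np.linalg.norm(xy - xy[i], axis=1)
              mask = (d < radius) & (d > 0)
              if not mask.any():
                  err += (z[i] - z.mean()) ** 2        # penalty for empty neighbourhood
                  continue
              w = 1.0 / d[mask] ** 2
              err += (z[i] - np.sum(w * z[mask]) / np.sum(w)) ** 2
          return err / len(z)

      # Real-coded GA: elitism, tournament selection, blend crossover, Gaussian mutation.
      pop = rng.uniform(1.0, 100.0, size=30)
      for gen in range(40):
          fitness = np.array([loo_error(r) for r in pop])
          new_pop = [pop[np.argmin(fitness)]]                        # keep the elite
          while len(new_pop) < len(pop):
              i, j = rng.integers(0, len(pop), 2)
              a = pop[i] if fitness[i] < fitness[j] else pop[j]
              i, j = rng.integers(0, len(pop), 2)
              b = pop[i] if fitness[i] < fitness[j] else pop[j]
              child = 0.5 * (a + b) + rng.normal(0.0, 2.0)           # crossover + mutation
              new_pop.append(np.clip(child, 1.0, 100.0))
          pop = np.array(new_pop)

      best = pop[np.argmin([loo_error(r) for r in pop])]
      print("GA-selected search radius:", round(float(best), 2))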

  14. Building with Nature: in search of resilient storm surge protection strategies

    NARCIS (Netherlands)

    Slobbe, van E.J.J.; Vriend, de H.J.; Aarninkhof, S.G.J.; Lulofs, K.; Vries, de M.; Dircke, P.

    2013-01-01

    Low-lying, densely populated coastal areas worldwide are under threat, requiring coastal managers to develop new strategies to cope with land subsidence, sea-level rise and the increasing risk of storm-surge-induced floods. Traditional engineering approaches optimizing for safety are often

  15. Intermittent search strategies revisited: effect of the jump length and biased motion

    Energy Technology Data Exchange (ETDEWEB)

    Rojo, F; Budde, C E [Fa.M.A.F., Universidad Nacional de Cordoba, Ciudad Universitaria, X5000HUA Cordoba (Argentina); Revelli, J; Wio, H S [Instituto de Fisica de Cantabria, Universidad de Cantabria and CSIC, E-39005 Santander (Spain); Oshanin, G [Laboratoire de Physique Theorique de la Matiere Condensee, Universite Pierre et Marie Curie, 4 place Jussieu, 75252 Paris Cedex 5 (France); Lindenberg, Katja [Department of Chemistry and Biochemistry and BioCircuits Institute, University of California, San Diego, La Jolla, CA 92093-0340 (United States)

    2010-08-27

    We study the kinetics of a search of a single fixed target by a large number of searchers performing an intermittent biased random walk in a homogeneous medium. Our searchers carry out their walks in one of two states between which they switch randomly. One of these states (search phase) is a nearest-neighbor walk characterized by the probability of stepping in a given direction (i.e. the walks in this state are not necessarily isotropic). The other (relocation phase) is characterized by the length of the jumps (i.e. when in this state a walker does not perform a nearest-neighbor walk). Within such a framework, we propose a model to describe the searchers' dynamics, generalizing results of our previous work. We have obtained, and numerically evaluated, analytic results for the mean number of distinct sites visited up to a maximum evolution time. We have studied the dependence of this quantity on both the transition probability between the states and the parameters that characterize each state. In addition to our theoretical approach, we have implemented Monte Carlo simulations, finding excellent agreement between the theoretical-numerical and simulations results.

  16. Intermittent search strategies revisited: effect of the jump length and biased motion

    International Nuclear Information System (INIS)

    Rojo, F; Budde, C E; Revelli, J; Wio, H S; Oshanin, G; Lindenberg, Katja

    2010-01-01

    We study the kinetics of a search of a single fixed target by a large number of searchers performing an intermittent biased random walk in a homogeneous medium. Our searchers carry out their walks in one of two states between which they switch randomly. One of these states (search phase) is a nearest-neighbor walk characterized by the probability of stepping in a given direction (i.e. the walks in this state are not necessarily isotropic). The other (relocation phase) is characterized by the length of the jumps (i.e. when in this state a walker does not perform a nearest-neighbor walk). Within such a framework, we propose a model to describe the searchers' dynamics, generalizing results of our previous work. We have obtained, and numerically evaluated, analytic results for the mean number of distinct sites visited up to a maximum evolution time. We have studied the dependence of this quantity on both the transition probability between the states and the parameters that characterize each state. In addition to our theoretical approach, we have implemented Monte Carlo simulations, finding excellent agreement between the theoretical-numerical and simulations results.
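
    A minimal Monte Carlo sketch of the two-state walk described in the abstract is given below: a nearest-neighbour "search" phase and a fixed-length-jump "relocation" phase with random switching, on a one-dimensional lattice. The parameter values are illustrative and the sketch estimates only the mean number of distinct sites visited; it does not reproduce the paper's analytic results.

      # Minimal Monte Carlo sketch of an intermittent biased walk on a 1D lattice:
      # the "search" state steps to nearest neighbours with bias p_right, the
      # "relocation" state jumps `jump` sites; the walker switches state with
      # probability q_switch per step. Parameters are illustrative only.
      import random

      def distinct_sites(steps=1000, p_right=0.6, jump=5, q_switch=0.1):
          pos, state = 0, "search"
          visited = {0}
          for _ in range(steps):
              if random.random() < q_switch:
                  state = "relocation" if state == "search" else "search"
              if state == "search":
                  pos += 1 if random.random() < p_right else -1
              else:
                  pos += jump if random.random() < p_right else -jump
              visited.add(pos)
          return len(visited)

      runs = 2000
      mean_visited = sum(distinct_sites() for _ in range(runs)) / runs
      print("mean number of distinct sites visited:", round(mean_visited, 1))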

  17. Orwell's 1984: Natural Language Searching and the Contemporary Metaphor.

    Science.gov (United States)

    Dadlez, Eva M.

    1984-01-01

    Describes a natural language searching strategy for retrieving current material which has bearing on George Orwell's "1984," and identifies four main themes (technology, authoritarianism, press and psychological/linguistic implications of surveillance, political oppression) which have emerged from cross-database searches of the "Big…

  18. Searching PubMed for molecular epidemiology studies: the case of chromosome aberrations

    DEFF Research Database (Denmark)

    Ugolini, Donatella; Neri, Monica; Knudsen, Lisbeth E

    2006-01-01

    to environmental pollutants. The search, done on the PubMed/MedLine database, was based on a strategy combining descriptors listed in the PubMed Medical Subject Headings (MeSH) Thesaurus and other available tools (free text or phrase search tools). 178 articles were retrieved by searching the period from January 1...
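
    For readers who want to issue comparable MeSH-plus-free-text queries programmatically, the snippet below uses the NCBI E-utilities esearch endpoint via the Python requests package (assumed to be installed). The query term is an illustrative guess in the spirit of the record, not the authors' published search strategy.

      # Illustrative only: querying PubMed through the NCBI E-utilities "esearch"
      # endpoint. The term below is a guessed MeSH + free-text combination in the
      # spirit of the record, NOT the authors' published strategy.
      import requests

      ESEARCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"
      term = '"Chromosome Aberrations"[MeSH] AND ("molecular epidemiology" OR biomarker)'
      params = {"db": "pubmed", "term": term, "retmode": "json", "retmax": 20}

      resp = requests.get(ESEARCH, params=params, timeout=30)
      resp.raise_for_status()
      result = resp.json()["esearchresult"]
      print("hits:", result["count"])
      print("first PMIDs:", result["idlist"])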

  19. Towards a new strategy of searching for QCD phase transition in heavy ion collisions

    Energy Technology Data Exchange (ETDEWEB)

    Ploszajczak, M. [Grand Accelerateur National d`Ions Lourds (GANIL), 14 - Caen (France); Shanenko, A.A. [Joint Inst. for Nuclear Research, Dubna (Russian Federation). Lab. of Theoretical Physics; Toneev, V.D. [Grand Accelerateur National d`Ions Lourds (GANIL), 14 - Caen (France)]|[Joint Inst. for Nuclear Research, Dubna (Russian Federation). Lab. of Theoretical Physics

    1995-12-31

    The Hung and Shuryak arguments are reconsidered in favour of searching for the deconfinement phase transition in heavy ion collisions downward from the nominal SPS energy, at E{sub lab} {approx} 30 GeV/A where the fireball lifetime is the longest one. Using the recent lattice QCD data and the mixed phase model, it is shown that the deconfinement transition might occur at the bombarding energies as low as E{sub lab} = 3-5 GeV/A. Attention is drawn to the study of the mixed phase of nuclear matter in the collision energy range E{sub lab} = 2-10 GeV/A. (author). 18 refs.

  20. Search for the best timing strategy in high-precision drift chambers

    International Nuclear Information System (INIS)

    Va'vra, J.

    1983-06-01

    Computer-simulated drift chamber pulses are used to investigate various possible timing strategies in drift chambers. In particular, the leading-edge, multiple-threshold and flash ADC timing methods are compared. Although the presented method is general for any drift geometry, we concentrate our discussion on jet chambers, where the drift velocity is about 3 to 5 cm/μsec and the individual ionization clusters are not resolved due to the finite speed of our electronics
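
    A toy comparison of two of the timing strategies mentioned above is sketched below: single-threshold leading-edge timing versus a multiple-threshold fit extrapolated back to the baseline, applied to a synthetic pulse. The pulse shape, noise level and thresholds are arbitrary placeholders rather than the simulated chamber pulses of the record.

      # Toy sketch: leading-edge vs. multiple-threshold timing on a synthetic pulse.
      # The pulse shape, noise level and thresholds are arbitrary placeholders.
      import numpy as np

      rng = np.random.default_rng(1)
      t = np.arange(0.0, 200.0, 1.0)                                    # time in ns
      pulse = np.clip((t - 50.0) / 30.0, 0.0, None) * np.exp(-(t - 50.0) / 60.0)
      pulse = pulse + 0.01 * rng.standard_normal(t.size)                # small noise

      def threshold_crossing(t, v, thr):
          """Linearly interpolated time of the first upward crossing of `thr`."""
          idx = np.where((v[:-1] < thr) & (v[1:] >= thr))[0]
          if idx.size == 0:
              return None
          i = idx[0]
          frac = (thr - v[i]) / (v[i + 1] - v[i])
          return t[i] + frac * (t[i + 1] - t[i])

      # Leading edge: a single threshold on the rising edge.
      t_le = threshold_crossing(t, pulse, 0.05)

      # Multiple thresholds: fit the rising edge and extrapolate back to baseline.
      crossings = [(thr, threshold_crossing(t, pulse, thr)) for thr in (0.05, 0.10, 0.15)]
      thrs, times = zip(*[(thr, tc) for thr, tc in crossings if tc is not None])
      slope, intercept = np.polyfit(times, thrs, 1)                     # thr = slope*t + b
      t_mt = -intercept / slope                                         # time where fit hits 0

      print("leading-edge time estimate      :", round(float(t_le), 2), "ns")
      print("multiple-threshold time estimate:", round(float(t_mt), 2), "ns")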

  1. Turkey in Search of Relevant Foreign Policy Strategy (2002-2016

    Directory of Open Access Journals (Sweden)

    Urmanov Dayan R.

    2016-06-01

    Full Text Available The main idea of this article is to describe the process of Turkish foreign policy evolution during the rule of the Justice and Development Party (JDP). Starting from a weak economy and an unstable political situation in 2001, the JDP quickly formulated a new foreign policy strategy and stabilized the economy. In the article, Turkish foreign policy in the 21st century is divided into several stages which respond to different international threats and circumstances. The first stage was a peacekeeping stage, when Turkey tried to stabilize the situation near its borders and implement peace initiatives in order to find new markets and allies. As a result, Turkey formulated a new foreign policy strategy, called the “Zero Problems Policy”, which aimed to create a ring of friendly countries on its borders. In the second stage, Turkish foreign policy was more active: Turkey tried to balance among regional power centers and to confront one of the most powerful actors, Israel. The confrontation with Tel Aviv was a preface to the third stage, and today, under the influence of the “Arab Spring” and a desire to change its role in international relations, Turkey has abandoned the “Zero Problems Policy” strategy and turned to a new aggressive and revanchist idea: neo-Ottomanism. Ankara tries to build a new regional set of rules in which Turkey will play a leading role.

  2. Database Description - SKIP Stemcell Database | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available General information of database. Database name: SKIP Stemcell Database. Contact address: http://www.skip.med.keio.ac.jp/en/contact/. Database classification: Human Genes and Diseases; Stem cell. Organism: Taxonomy Name: Homo sapiens, Taxonomy ID: 9606. Database maintenance site: Center for Medical Genetics, School of Medicine. Web services: Not available. Need for user registration: Not available.

  3. Accessing and using chemical databases

    DEFF Research Database (Denmark)

    Nikolov, Nikolai Georgiev; Pavlov, Todor; Niemelä, Jay Russell

    2013-01-01

    Computer-based representation of chemicals makes it possible to organize data in chemical databases: collections of chemical structures and associated properties. Databases are widely used wherever efficient processing of chemical information is needed, including search, storage, retrieval, and dissemination. Structure and functionality of chemical databases are considered. The typical kinds of information found in a chemical database are considered: identification, structural, and associated data. Functionality of chemical databases is presented, with examples of search and access types. More details are included about the OASIS database and platform and the Danish (Q)SAR Database online. Various types of chemical database resources are discussed, together with a list of examples.

  4. Sexual Orientation-Related Differences in Virtual Spatial Navigation and Spatial Search Strategies.

    Science.gov (United States)

    Rahman, Qazi; Sharp, Jonathan; McVeigh, Meadhbh; Ho, Man-Ling

    2017-07-01

    Spatial abilities are generally hypothesized to differ between men and women, and people with different sexual orientations. According to the cross-sex shift hypothesis, gay men are hypothesized to perform in the direction of heterosexual women and lesbian women in the direction of heterosexual men on cognitive tests. This study investigated sexual orientation differences in spatial navigation and strategy during a virtual Morris water maze task (VMWM). Forty-four heterosexual men, 43 heterosexual women, 39 gay men, and 34 lesbian/bisexual women (aged 18-54 years) navigated a desktop VMWM and completed measures of intelligence, handedness, and childhood gender nonconformity (CGN). We quantified spatial learning (hidden platform trials), probe trial performance, and cued navigation (visible platform trials). Spatial strategies during hidden and probe trials were classified into visual scanning, landmark use, thigmotaxis/circling, and enfilading. In general, heterosexual men scored better than women and gay men on some spatial learning and probe trial measures and used more visual scan strategies. However, some differences disappeared after controlling for age and estimated IQ (e.g., in visual scanning heterosexual men differed from women but not gay men). Heterosexual women did not differ from lesbian/bisexual women. For both sexes, visual scanning predicted probe trial performance. More feminine CGN scores were associated with lower performance among men and greater performance among women on specific spatial learning or probe trial measures. These results provide mixed evidence for the cross-sex shift hypothesis of sexual orientation-related differences in spatial cognition.

  5. Proteomic analysis of Pinus radiata needles: 2-DE map and protein identification by LC/MS/MS and substitution-tolerant database searching.

    Science.gov (United States)

    Valledor, Luis; Castillejo, Maria A; Lenz, Christof; Rodríguez, Roberto; Cañal, Maria J; Jorrín, Jesús

    2008-07-01

    Pinus radiata is one of the most economically important forest tree species, with a worldwide production of around 370 million m³ of wood per year. Current selection of elite trees to be used in conservation and breeding programs requires the physiological and molecular characterization of available populations. To identify key proteins related to tree growth, productivity and responses to environmental factors, a proteomic approach is being utilized. In this paper, we present the first report of the 2-DE protein reference map of physiologically mature P. radiata needles, as a basis for subsequent differential expression proteomic studies related to growth, development, biomass production and responses to stresses. After TCA/acetone protein extraction of needle tissue, 549 ± 21 well-resolved spots were detected in Coomassie-stained gels within the 5-8 pH and 10-100 kDa M(r) ranges. The analytical and biological variances determined for 450 spots were 31 and 42%, respectively. After LC/MS/MS analysis of in-gel tryptic digested spots, proteins were identified by using the novel Paragon algorithm, which tolerates amino acid substitutions in the first-pass search. It allowed the confident identification of 115 out of the 150 protein spots subjected to MS, an unusually high percentage for a poorly covered sequence database, as is the case for P. radiata. Proteins were classified into 12 or 18 groups based on their corresponding cell component or biological process/pathway categories, respectively. Carbohydrate metabolism and photosynthetic enzymes predominate in the 2-DE protein profile of P. radiata needles.

  6. Race and time from diagnosis to radical prostatectomy: does equal access mean equal timely access to the operating room?--Results from the SEARCH database.

    Science.gov (United States)

    Bañez, Lionel L; Terris, Martha K; Aronson, William J; Presti, Joseph C; Kane, Christopher J; Amling, Christopher L; Freedland, Stephen J

    2009-04-01

    African American men with prostate cancer are at higher risk for cancer-specific death than Caucasian men. We determine whether significant delays in management contribute to this disparity. We hypothesize that in an equal-access health care system, time interval from diagnosis to treatment would not differ by race. We identified 1,532 African American and Caucasian men who underwent radical prostatectomy (RP) from 1988 to 2007 at one of four Veterans Affairs Medical Centers that comprise the Shared Equal-Access Regional Cancer Hospital (SEARCH) database with known biopsy date. We compared time from biopsy to RP between racial groups using linear regression adjusting for demographic and clinical variables. We analyzed risk of potential clinically relevant delays by determining odds of delays >90 and >180 days. Median time interval from diagnosis to RP was 76 and 68 days for African Americans and Caucasian men, respectively (P = 0.004). After controlling for demographic and clinical variables, race was not associated with the time interval between diagnosis and RP (P = 0.09). Furthermore, race was not associated with increased risk of delays >90 (P = 0.45) or >180 days (P = 0.31). In a cohort of men undergoing RP in an equal-access setting, there was no significant difference between racial groups with regard to time interval from diagnosis to RP. Thus, equal-access includes equal timely access to the operating room. Given our previous finding of poorer outcomes among African Americans, treatment delays do not seem to explain these observations. Our findings need to be confirmed in patients electing other treatment modalities and in other practice settings.

  7. Pharmacovigilance database search discloses ClC-K channels as a novel target of the AT1 receptor blockers valsartan and olmesartan.

    Science.gov (United States)

    Imbrici, Paola; Tricarico, Domenico; Mangiatordi, Giuseppe Felice; Nicolotti, Orazio; Lograno, Marcello Diego; Conte, Diana; Liantonio, Antonella

    2017-07-01

    Human ClC-K chloride channels are highly attractive targets for drug discovery as they have a variety of important physiological functions and are associated with genetic disorders. These channels are crucial in the kidney as they control chloride reabsorption and water diuresis. In addition, loss-of-function mutations of the CLCNKB and BSND genes cause Bartter's syndrome (BS), whereas CLCNKA and CLCNKB gain-of-function polymorphisms predispose to a rare form of salt-sensitive hypertension. Both disorders lack a personalized therapy; treatment is in most cases only symptomatic. The aim of this study was to identify novel ClC-K ligands from drugs already on the market, by exploiting the pharmacological side activity of drug molecules available from the FDA Adverse Event Reporting System database. We searched for drugs having a Bartter-like syndrome as a reported side effect, with the assumption that BS could be causatively related to the block of ClC-K channels. The ability of the selected BS-causing drugs to bind and block ClC-K channels was then validated through an integrated experimental and computational approach based on patch clamp electrophysiology in HEK293 cells and molecular docking simulations. Valsartan and olmesartan were able to block ClC-Ka channels, and the molecular requirements for effective inhibition of these channels have been identified. These results suggest additional mechanisms of action for these sartans further to their primary AT1 receptor antagonism and propose these compounds as leads for designing new potent ClC-K ligands. © 2017 The British Pharmacological Society.

  8. Delayed radical prostatectomy for intermediate-risk prostate cancer is associated with biochemical recurrence: possible implications for active surveillance from the SEARCH database.

    Science.gov (United States)

    Abern, Michael R; Aronson, William J; Terris, Martha K; Kane, Christopher J; Presti, Joseph C; Amling, Christopher L; Freedland, Stephen J

    2013-03-01

    Active surveillance (AS) is increasingly accepted as appropriate management for low-risk prostate cancer (PC) patients. It is unknown whether delaying radical prostatectomy (RP) is associated with increased risk of biochemical recurrence (BCR) for men with intermediate-risk PC. We performed a retrospective analysis of 1,561 low- and intermediate-risk men from the Shared Equal Access Regional Cancer Hospital (SEARCH) database treated with RP between 1988 and 2011. Patients were stratified by interval between diagnosis and RP (≤3, 3-6, 6-9, or >9 months) and by risk using the D'Amico classification. Cox proportional hazard models were used to analyze BCR. Logistic regression was used to analyze positive surgical margins (PSM), extracapsular extension (ECE), and pathologic upgrading. Overall, 813 (52%) men were low-risk, and 748 (48%) intermediate-risk. Median follow-up among men without recurrence was 52.9 months, during which 437 men (38.9%) recurred. For low-risk men, RP delays were unrelated to BCR, ECE, PSM, or upgrading (all P > 0.05). For intermediate-risk men, however, delays >9 months were significantly related to BCR (HR: 2.10, P = 0.01) and PSM (OR: 4.08). Delays >9 months were also associated with BCR in the subset of intermediate-risk men with biopsy Gleason score ≤ 3 + 4 (HR: 2.51). Thus, for intermediate-risk men, delays >9 months predicted greater BCR and PSM risk. If confirmed in future studies, this suggests delayed RP for intermediate-risk PC may compromise outcomes. Copyright © 2012 Wiley Periodicals, Inc.

  9. Scopus database: a review.

    Science.gov (United States)

    Burnham, Judy F

    2006-03-08

    The Scopus database provides access to STM journal articles and the references included in those articles, allowing the searcher to search both forward and backward in time. The database can be used for collection development as well as for research. This review provides information on the key points of the database and compares it to Web of Science. Neither database is all-inclusive; rather, they complement each other. If a library can only afford one, the choice must be based on institutional needs.

  10. Multiplicity of Mathematical Modeling Strategies to Search for Molecular and Cellular Insights into Bacteria Lung Infection.

    Science.gov (United States)

    Cantone, Martina; Santos, Guido; Wentker, Pia; Lai, Xin; Vera, Julio

    2017-01-01

    Even today, two bacterial lung infections, namely pneumonia and tuberculosis, are among the 10 most frequent causes of death worldwide. These infections still lack effective treatments in many developing countries and in immunocompromised populations like infants, elderly people and transplanted patients. The interaction between bacteria and the host is a complex system of interlinked intercellular and intracellular processes, enriched in regulatory structures like positive and negative feedback loops. Severe pathological conditions can emerge when the immune system of the host fails to neutralize the infection. This failure can result in systemic spreading of pathogens or an overwhelming immune response followed by a systemic inflammatory response. Mathematical modeling is a promising tool to dissect the complexity underlying the pathogenesis of bacterial lung infection at the molecular, cellular and tissue levels, and also at the interfaces among levels. In this article, we introduce mathematical and computational modeling frameworks that can be used for investigating molecular and cellular mechanisms underlying bacterial lung infection. Then, we compile and discuss published results on the modeling of regulatory pathways and cell populations relevant for lung infection and inflammation. Finally, we discuss how to make use of this multiplicity of modeling approaches to open new avenues in the search for the molecular and cellular mechanisms underlying bacterial infection in the lung.

  11. Multiphasic on/off pheromone signalling in moths as neural correlates of a search strategy.

    Directory of Open Access Journals (Sweden)

    Dominique Martinez

    Full Text Available Insects and robots searching for odour sources in turbulent plumes face the same problem: the random nature of mixing causes fluctuations and intermittency in perception. Pheromone-tracking male moths appear to deal with discontinuous flows of information by surging upwind, upon sensing a pheromone patch, and casting crosswind, upon losing the plume. Using a combination of neurophysiological recordings, computational modelling and experiments with a cyborg, we propose a neuronal mechanism that promotes a behavioural switch between surge and casting. We show how multiphasic On/Off pheromone-sensitive neurons may guide action selection based on signalling presence or loss of the pheromone. A Hodgkin-Huxley-type neuron model with a small-conductance calcium-activated potassium (SK) channel reproduces physiological On/Off responses. Using this model as a command neuron and the antennae of tethered moths as pheromone sensors, we demonstrate the efficiency of multiphasic patterning in driving a robotic searcher toward the source. Taken together, our results suggest that multiphasic On/Off responses may mediate olfactory navigation and that SK channels may account for these responses.

  12. Multiphasic on/off pheromone signalling in moths as neural correlates of a search strategy.

    Science.gov (United States)

    Martinez, Dominique; Chaffiol, Antoine; Voges, Nicole; Gu, Yuqiao; Anton, Sylvia; Rospars, Jean-Pierre; Lucas, Philippe

    2013-01-01

    Insects and robots searching for odour sources in turbulent plumes face the same problem: the random nature of mixing causes fluctuations and intermittency in perception. Pheromone-tracking male moths appear to deal with discontinuous flows of information by surging upwind, upon sensing a pheromone patch, and casting crosswind, upon losing the plume. Using a combination of neurophysiological recordings, computational modelling and experiments with a cyborg, we propose a neuronal mechanism that promotes a behavioural switch between surge and casting. We show how multiphasic On/Off pheromone-sensitive neurons may guide action selection based on signalling presence or loss of the pheromone. A Hodgkin-Huxley-type neuron model with a small-conductance calcium-activated potassium (SK) channel reproduces physiological On/Off responses. Using this model as a command neuron and the antennae of tethered moths as pheromone sensors, we demonstrate the efficiency of multiphasic patterning in driving a robotic searcher toward the source. Taken together, our results suggest that multiphasic On/Off responses may mediate olfactory navigation and that SK channels may account for these responses.
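
    The Hodgkin-Huxley/SK-channel neuron model is not reproduced here; the sketch below only illustrates, at the behavioural level, the surge-on-detection / cast-on-loss switch that such On/Off signalling is proposed to drive. Detection events are fed in as a plain boolean sequence and the timeout value is arbitrary.

      # Behavioural sketch only: the surge/cast switch that the On/Off responses are
      # proposed to drive. The Hodgkin-Huxley/SK-channel neuron model of the paper is
      # not reproduced; detection events arrive as a plain boolean sequence.
      import random

      def select_action(detections, loss_timeout=5):
          """Return one action per time step ('surge upwind' or 'cast crosswind'),
          given per-step pheromone detections (True/False)."""
          actions, steps_since_hit = [], loss_timeout
          for hit in detections:
              steps_since_hit = 0 if hit else steps_since_hit + 1
              if steps_since_hit < loss_timeout:
                  actions.append("surge upwind")     # recent plume contact ("On")
              else:
                  actions.append("cast crosswind")   # plume lost ("Off")
          return actions

      # Intermittent detections, as expected in a turbulent plume.
      detections = [random.random() < 0.2 for _ in range(20)]
      for hit, act in zip(detections, select_action(detections)):
          print(int(hit), act)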

  13. Enhancing Artificial Bee Colony Algorithm with Self-Adaptive Searching Strategy and Artificial Immune Network Operators for Global Optimization

    Directory of Open Access Journals (Sweden)

    Tinggui Chen

    2014-01-01

    Full Text Available The artificial bee colony (ABC) algorithm, inspired by the intelligent foraging behavior of honey bees, was proposed by Karaboga. It has been shown to be superior to some conventional intelligent algorithms such as the genetic algorithm (GA), ant colony optimization (ACO), and particle swarm optimization (PSO). However, the ABC still has some limitations. For example, ABC can easily get trapped in a local optimum when handling functions that have a narrow curving valley or a highly eccentric ellipse, or complex multimodal functions. As a result, we propose an enhanced ABC algorithm called EABC, which introduces a self-adaptive searching strategy and artificial immune network operators to improve exploitation and exploration. The simulation results, tested on a suite of unimodal and multimodal benchmark functions, illustrate that the EABC algorithm outperforms ACO, PSO, and the basic ABC in most of the experiments.
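
    For orientation, the sketch below shows only the basic ABC employed-bee neighbourhood update (greedy replacement on the sphere benchmark), i.e. the step that EABC builds on; the self-adaptive searching strategy, the artificial immune network operators and the onlooker/scout phases are not included.

      # Basic ABC employed-bee step on the sphere benchmark; the paper's self-adaptive
      # strategy and artificial immune network operators are NOT included -- this only
      # illustrates the candidate-solution update that EABC modifies.
      import numpy as np

      rng = np.random.default_rng(2)
      dim, n_food = 10, 20
      foods = rng.uniform(-5.0, 5.0, size=(n_food, dim))     # food sources = solutions
      fitness = np.array([np.sum(x ** 2) for x in foods])    # sphere function (minimise)

      for _ in range(200):
          for i in range(n_food):
              k = rng.choice([j for j in range(n_food) if j != i])   # random partner
              d = rng.integers(dim)                                   # random dimension
              phi = rng.uniform(-1.0, 1.0)
              candidate = foods[i].copy()
              candidate[d] = foods[i, d] + phi * (foods[i, d] - foods[k, d])
              f = np.sum(candidate ** 2)
              if f < fitness[i]:                                      # greedy selection
                  foods[i], fitness[i] = candidate, f

      print("best sphere value after basic ABC updates:", float(fitness.min()))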

  14. Search Strategy of Detector Position For Neutron Source Multiplication Method by Using Detected-Neutron Multiplication Factor

    International Nuclear Information System (INIS)

    Endo, Tomohiro

    2011-01-01

    In this paper, an alternative definition of a neutron multiplication factor, the detected-neutron multiplication factor k_det, is introduced for the neutron source multiplication method (NSM). Using k_det, a search strategy for an appropriate detector position for NSM is also proposed. The NSM is one of the practical subcritical measurement techniques, i.e., the NSM does not require any special equipment other than a stationary external neutron source and an ordinary neutron detector. Additionally, the NSM is based on steady-state analysis, so this technique is very suitable for quasi-real-time measurement. It is noted that correction factors play important roles in accurately estimating subcriticality from the measured neutron count rates. The present paper aims to clarify how to correct the subcriticality measured by the NSM, the physical meaning of the correction factors, and how to reduce the impact of the correction factors by setting a neutron detector at an appropriate detector position
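
    The detected-neutron multiplication factor k_det defined in the paper is not reproduced here. As background, the sketch below only illustrates the textbook, uncorrected NSM relation in a point-model approximation, where the count rate scales as C ∝ S/(1 − k) so that a known reference state (k_ref, C_ref) yields an estimate of k; the correction factors that are the subject of the paper are omitted.

      # Hedged sketch of the textbook (uncorrected) NSM relation, not the k_det
      # formulation of the record: with a fixed source and detector, the count rate
      # scales as C ~ S/(1 - k), so a reference state (k_ref, C_ref) gives
      #     1 - k = (1 - k_ref) * C_ref / C.
      # The correction factors discussed in the paper are omitted here.

      def k_from_count_rate(c, c_ref, k_ref):
          """Point-model estimate of k_eff for the measured state from count rates."""
          return 1.0 - (1.0 - k_ref) * c_ref / c

      # Illustrative numbers only.
      k_ref, c_ref = 0.95, 1000.0      # known reference state
      for c in (1000.0, 2000.0, 4000.0):
          print(f"count rate {c:6.0f} /s  ->  k ~= {k_from_count_rate(c, c_ref, k_ref):.4f}")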

  15. The Use of Copper Pesticides in Germany and the Search for Minimization and Replacement Strategies

    Directory of Open Access Journals (Sweden)

    Stefan Kuehne

    2017-02-01

    Full Text Available Copper pesticides used to control fungal and bacterial diseases such as grape downy mildew (Plasmopara viticola), downy mildew of hops (Pseudoperonospora humili), apple scab (Venturia spp.), fireblight (Erwinia amylovora) and potato late blight (Phytophthora infestans) play an important role in plant protection. In a 2013 survey of copper application in Germany we found that, while the amounts of copper used per hectare in conventional grape (0.8 kg ha−1), hop (1.7 kg ha−1) and potato farming (0.8 kg ha−1) were well below those used in organic farming (2.3, 2.6 and 1.4 kg ha−1, respectively), they were nearly identical to those used in apple growing (1.4 kg ha−1). Due to the smaller farming area, only 24% (26.5 tonnes) of the total amount of copper was applied in organic farming, compared to 76% (84.8 tonnes) in conventional farming. Since 2001, the Federal Agency for Agriculture and Food (BLE) has promoted a copper research and minimization strategy, which was funded with a total of €10.2 million. Our status quo analysis of research in this field shows that some progress is being made concerning alternative compounds, resistant varieties and decision support systems. However, it also shows that new approaches are not yet able to replace copper pesticides completely, especially in organic farming. In integrated pest management, copper preparations are important for the necessary active substance rotation and successful resistance management. The availability of such products is often essential for organic grape, hop and fruit production and for extending the organic farming of these crops. We conclude that the complete elimination of copper pesticides is not yet practicable in organic farming, as the production of several organic crops would become unprofitable and may lead to organic farmers reverting to conventional production. Several existing copper reduction strategies were, however, identified, and some, like modified forecast models adapted to

  16. A Search Strategy of Level-Based Flooding for the Internet of Things

    Science.gov (United States)

    Qiu, Tie; Ding, Yanhong; Xia, Feng; Ma, Honglian

    2012-01-01

    This paper deals with the query problem in the Internet of Things (IoT). Flooding is an important query strategy. However, original flooding is prone to cause heavy network loads. To address this problem, we propose a variant of flooding, called Level-Based Flooding (LBF). With LBF, the whole network is divided into several levels according to the distances (i.e., hops) between the sensor nodes and the sink node. The sink node knows the level information of each node. Query packets are broadcast in the network according to the levels of nodes. Upon receiving a query packet, sensor nodes decide how to process it according to the percentage of neighbors that have processed it. When the target node receives the query packet, it sends its data back to the sink node via random walk. We show by extensive simulations that the performance of LBF in terms of cost and latency is much better than that of original flooding, and LBF can be used in IoT of different scales. PMID:23112594
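
    A minimal sketch of the level-assignment step that LBF relies on is given below: each node's level is its hop count from the sink, computed by breadth-first search over an adjacency list. The neighbour-percentage forwarding rule and the random-walk reply described in the abstract are not reproduced.

      # Minimal sketch of LBF's level assignment: each node's level is its hop count
      # from the sink, computed here by breadth-first search over an adjacency list.
      # The forwarding rule and random-walk reply of the paper are not reproduced.
      from collections import deque

      def assign_levels(adjacency, sink):
          """Return {node: hop distance from sink} for a connected sensor graph."""
          levels = {sink: 0}
          queue = deque([sink])
          while queue:
              node = queue.popleft()
              for neighbour in adjacency[node]:
                  if neighbour not in levels:
                      levels[neighbour] = levels[node] + 1
                      queue.append(neighbour)
          return levels

      # Toy topology: sink 0 connected to a small multi-hop network.
      adjacency = {0: [1, 2], 1: [0, 3], 2: [0, 3, 4], 3: [1, 2, 5], 4: [2, 5], 5: [3, 4]}
      print(assign_levels(adjacency, sink=0))   # {0: 0, 1: 1, 2: 1, 3: 2, 4: 2, 5: 3}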

  17. Towards Improved Airborne Fire Detection Systems Using Beetle Inspired Infrared Detection and Fire Searching Strategies

    Directory of Open Access Journals (Sweden)

    Herbert Bousack

    2015-06-01

    Full Text Available Every year forest fires cause severe financial losses in many countries of the world. Additionally, the lives of humans as well as of countless animals are often lost. Due to global warming, the problem of wildfires is getting out of control, and the burning of thousands of hectares is clearly increasing. Most important, therefore, is the early detection of an emerging fire before its intensity becomes too high. More than ever, there is a need for early warning systems capable of detecting small fires from distances as large as possible. A look at nature shows that pyrophilous “fire beetles” of the genus Melanophila can be regarded as natural airborne fire detection systems, because their larvae can only develop in the wood of fire-killed trees. There is evidence that Melanophila beetles can detect large fires from distances of more than 100 km by visual and infrared cues. In a biomimetic approach, a concept has been developed to use the surveying strategy of the “fire beetles” for the reliable detection of the smoke plume of a fire from large distances by means of a basal infrared emission zone. Future infrared sensors necessary for this ability are also inspired by the natural infrared receptors of Melanophila beetles.

  18. From Marx to Marcos - Search of the Subject and the Strategy of Revolution

    Directory of Open Access Journals (Sweden)

    Ilya L. Morozov

    2017-12-01

    Full Text Available The article discusses the evolution of theories of social revolution from the mid-nineteenth to the late twentieth centuries. The author analyzes the basic concepts of theorists and practitioners of armed revolutionary struggle, from the founder of classical Communist theory, Karl Marx, to the Mexican guerrilla leader Subcomandante Marcos. The author focuses on the analysis of changes in the understanding of the subject (the “driving forces”) of the left political revolution, as well as the strategy of armed revolutionary struggle. The author comes to a conclusion about the historical evolution of the subject of revolutionary struggle: from major sustainable macro-groups (“classes”) targeted at armed struggle, to self-organized (on the network principle) unstructured protest groups, situational leaders, and mild forms of revolutionary struggle that minimize armed violence, though do not eliminate it completely. The author substantiates the conclusion that the modern protest movement lacks social forces able to become the subject of a revolution of socialist orientation. This increases the danger of dominance of social protest by extremist nationalist and religious political spectra. The author offers two models of response to this threat: the growing influence of the reigning centre-right conservative parties of Russia, and a return to centre-left positions by the social democratic movement of the countries of the European Union.

  19. Searching out the hydrogen absorption/desorption limiting reaction factors: Strategies allowing to increase kinetics

    Energy Technology Data Exchange (ETDEWEB)

    Zeaiter, Ali, E-mail: ali.zeaiter@femto-st.fr; Chapelle, David; Nardin, Philippe

    2015-10-05

    Highlights: • A macro-scale thermodynamic model that simulates the response of a FeTi-X hydride tank is developed and validated experimentally. • A sensitivity study identifies the most influential input variables that can change the reaction rate very strongly. - Abstract: Hydrogen gas has become one of the most promising energy carriers. The main breakthrough concerns solid-state hydrogen storage, especially based on the use of intermetallic materials. Regarding raw material abundance and cost, the AB-type alloy FeTi is an auspicious candidate to store hydrogen. Its absorption/desorption kinetics is a basic hindrance to common use, compared with more usual hydrides. First, discussions based on the literature help us identify the successive steps leading to metal hydriding, and allow us to introduce the physical parameters which drive or limit the reaction. This analysis leads us to suggest strategies to increase absorption/desorption kinetics. Attention is then paid to a thermofluidodynamic model describing a macroscopic solid storage reactor. Thus, we can achieve a simulation which describes the overall reaction inside the hydrogen reactor and, by varying the above-mentioned parameters (thermal conductivity, powder granularity, environment heat exchange…), we attempt to rank the reaction-limiting factors. These simulations are correlated with absorption/desorption experiments for which pressure, temperature and hydrogen flow are recorded.

  20. STRATEGIES IN SEARCHING HOMOGENEITY IN A FACULTY OF A POSTGRADUATE PROGRAM.

    Science.gov (United States)

    Cecatti, José G; Fernandes, Karayna G; Souza, Renato T; Silveira, Carla; Surita, Fernanda G

    2015-01-01

    The professor plays a fundamental role in a graduate program, considering that he/she plans and performs a great part of the tasks and is also responsible for spreading knowledge among students. The professor should use didactic resources for his/her continuous qualification, being responsible for situations favoring the development of students, who should learn in the best and easiest way. Homogeneity in a postgraduate program consists of having subgroups of research corresponding to the Areas of Concentration, where each subgroup works on some distinct topics of research. It is desirable that the staff of a postgraduate program have a significant and high-quality scientific production, homogeneously distributed among them. The professors must systematically search for research funding from agencies supporting research, not only for sponsoring the studies, but also for adding value to the researchers involved in the whole set of activities. The postgraduate programs need to support the professional qualification of their staff, who should improve their knowledge of epidemiology for clinical studies, ethics in research, and teaching skills. Two characteristics of the postgraduate system in Brazil are nucleation and solidarity, based on the capacity and/or interest of the more structured programs to help beginners, cooperating with their activities. Capes (the national governmental agency responsible for coordinating and evaluating all postgraduate programs in Brazil) values social insertion in the context of postgraduate programs' activities. This includes the recognition of activities with technological, cultural, educational and social impact as criteria for evaluation of the programs. Does an ideal model of postgraduate program exist? We think that there is neither a mathematical formula nor an ideal model for a postgraduate program. Each institution should make adaptations and search for improvements of their faculty and

  1. In Search for Anti-Aging Strategy: Can We Rejuvenate Our Aging Stem Cells?

    Directory of Open Access Journals (Sweden)

    Anna Meiliana

    2015-08-01

    Full Text Available BACKGROUND: Recent evidence suggests that we grow old partly because our stem cells grow old, as a result of mechanisms that suppress the development of cancer over a lifetime. We believe that a further, more precise mechanistic understanding of this process will be required before this knowledge can be translated into human anti-aging therapies. CONTENT: A diminished capacity to maintain tissue homeostasis is a central physiological characteristic of aging. As stem cells regulate tissue homeostasis, depletion of stem cell reserves and/or diminished stem cell function have been postulated to contribute to aging. It has further been suggested that accumulated DNA damage could be a principal mechanism underlying age-dependent stem cell decline. It is interesting that many of the rejuvenating interventions act on the stem cell compartments, perhaps reflecting shared genetic and biochemical pathways controlling stem cell function and longevity. One strategy to slow down the aging process is caloric restriction, a dietary regimen low in calories but without undernutrition. Sirtuins (SIRT1 and SIRT3) increase longevity by mimicking the beneficial effects of caloric restriction. SIRT3 regulates stress-responsive mitochondrial homeostasis, and more importantly, SIRT3 upregulation rejuvenates aged stem cells in tissues. Resveratrol (3,5,4’-trihydroxystilbene), a natural polyphenol found in grapes and wine, is the most powerful natural activator of SIRT1. In fact, resveratrol treatment has been demonstrated to rescue adult stem cell decline, slow down bodyweight loss, improve trabecular bone structure and mineral density, and significantly extend lifespan. SUMMARY: Tissue-specific stem cells persist throughout the entire lifespan to repair and maintain tissues, but their self-renewal and differentiation potential become dysregulated with aging. Given that adult stem cells are thought to be central to tissue maintenance and organismal

  2. Optimal Power Flow Using Gbest-Guided Cuckoo Search Algorithm with Feedback Control Strategy and Constraint Domination Rule

    Directory of Open Access Journals (Sweden)

    Gonggui Chen

    2017-01-01

    Full Text Available The optimal power flow (OPF) is well known as a significant optimization tool for the secure and economic operation of power systems, and the OPF problem is a complex nonlinear, nondifferentiable programming problem. This paper therefore proposes a Gbest-guided cuckoo search algorithm with a feedback control strategy and a constraint domination rule, named the FCGCS algorithm, for solving the OPF problem and obtaining optimal solutions. The FCGCS algorithm is guided by the global best solution to strengthen its exploitation ability. The feedback control strategy is devised to dynamically regulate the control parameters according to actual and specific feedback values during the simulation process. The constraint domination rule can efficiently handle inequality constraints on state variables, which is superior to the traditional penalty function method. The performance of the FCGCS algorithm is tested and validated on the IEEE 30-bus and IEEE 57-bus example systems, and simulation results are compared with those of different methods reported in the recent literature. The comparison results indicate that the FCGCS algorithm can provide high-quality feasible solutions for different OPF problems.
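
    The sketch below is a generic gbest-guided cuckoo-search update on a toy unconstrained objective, intended only to show the kind of Levy-flight search the paper starts from; the feedback control of parameters, the constraint-domination rule and the actual power-flow model of FCGCS are not reproduced.

      # Hedged sketch of a gbest-guided cuckoo-search update on the sphere function.
      # Levy steps use Mantegna's algorithm; the feedback control and constraint
      # domination rule of the FCGCS paper are NOT reproduced, and the power-flow
      # model is replaced by a toy unconstrained objective.
      import math
      import numpy as np

      rng = np.random.default_rng(3)

      def levy_step(size, beta=1.5):
          sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
                   / (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
          u = rng.normal(0.0, sigma, size)
          v = rng.normal(0.0, 1.0, size)
          return u / np.abs(v) ** (1 / beta)

      def sphere(x):
          return float(np.sum(x ** 2))

      dim, n_nests, p_abandon, alpha = 8, 15, 0.25, 0.01
      nests = rng.uniform(-5.0, 5.0, size=(n_nests, dim))
      fit = np.array([sphere(x) for x in nests])

      for _ in range(300):
          gbest = nests[np.argmin(fit)]
          for i in range(n_nests):
              # Levy flight plus a pull toward the global best solution (gbest guidance).
              new = (nests[i] + alpha * levy_step(dim) * (nests[i] - gbest)
                     + rng.uniform(0.0, 1.0) * (gbest - nests[i]))
              f = sphere(new)
              if f < fit[i]:
                  nests[i], fit[i] = new, f
          # Abandon a fraction of the worst nests and rebuild them at random.
          worst = np.argsort(fit)[-int(p_abandon * n_nests):]
          nests[worst] = rng.uniform(-5.0, 5.0, size=(len(worst), dim))
          fit[worst] = [sphere(x) for x in nests[worst]]

      print("best objective value found:", float(fit.min()))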

  3. Modelling sensory limitation: the role of tree selection, memory and information transfer in bats' roost searching strategies.

    Science.gov (United States)

    Ruczyński, Ireneusz; Bartoń, Kamil A

    2012-01-01

    Sensory limitation plays an important role in the evolution of animal behaviour. Animals have to find objects of interest (e.g. food, shelters, predators). When sensory abilities are strongly limited, animals adjust their behaviour to maximize chances for success. Bats are nocturnal, live in complex environments, are capable of flight and must confront numerous perceptual challenges (e.g. limited sensory range, interfering clutter echoes). This makes them an excellent model for studying the role of compensating behaviours to decrease costs of finding resources. Cavity roosting bats are especially interesting because the availability of tree cavities is often limited, and their quality is vital for bats during the breeding season. From a bat's sensory point of view, cavities are difficult to detect and finding them requires time and energy. However, tree cavities are also long lasting, allowing information transfer among conspecifics. Here, we use a simple simulation model to explore the benefits of tree selection, memory and eavesdropping (compensation behaviours) to searches for tree cavities by bats with short and long perception range. Our model suggests that memory and correct discrimination of tree suitability are the basic strategies decreasing the cost of roost finding, whereas perceptual range plays a minor role in this process. Additionally, eavesdropping constitutes a buffer that reduces the costs of finding new resources (such as roosts), especially when they occur in low density. We conclude that natural selection may promote different strategies of roost finding in relation to habitat conditions and cognitive skills of animals.

  4. Modelling sensory limitation: the role of tree selection, memory and information transfer in bats' roost searching strategies.

    Directory of Open Access Journals (Sweden)

    Ireneusz Ruczyński

    Full Text Available Sensory limitation plays an important role in the evolution of animal behaviour. Animals have to find objects of interest (e.g. food, shelters, predators). When sensory abilities are strongly limited, animals adjust their behaviour to maximize chances for success. Bats are nocturnal, live in complex environments, are capable of flight and must confront numerous perceptual challenges (e.g. limited sensory range, interfering clutter echoes). This makes them an excellent model for studying the role of compensating behaviours to decrease costs of finding resources. Cavity roosting bats are especially interesting because the availability of tree cavities is often limited, and their quality is vital for bats during the breeding season. From a bat's sensory point of view, cavities are difficult to detect and finding them requires time and energy. However, tree cavities are also long lasting, allowing information transfer among conspecifics. Here, we use a simple simulation model to explore the benefits of tree selection, memory and eavesdropping (compensation behaviours) to searches for tree cavities by bats with short and long perception range. Our model suggests that memory and correct discrimination of tree suitability are the basic strategies decreasing the cost of roost finding, whereas perceptual range plays a minor role in this process. Additionally, eavesdropping constitutes a buffer that reduces the costs of finding new resources (such as roosts), especially when they occur in low density. We conclude that natural selection may promote different strategies of roost finding in relation to habitat conditions and cognitive skills of animals.

  5. Astronomical databases of Nikolaev Observatory

    Science.gov (United States)

    Protsyuk, Y.; Mazhaev, A.

    2008-07-01

    Several astronomical databases were created at Nikolaev Observatory during recent years. The databases are built using MySQL and PHP scripts and are available on the NAO web-site http://www.mao.nikolaev.ua.

  6. Refining search terms for nanotechnology

    International Nuclear Information System (INIS)

    Porter, Alan L.; Youtie, Jan; Shapira, Philip; Schoeneck, David J.

    2008-01-01

    The ability to delineate the boundaries of an emerging technology is central to obtaining an understanding of the technology's research paths and commercialization prospects. Nowhere is this more relevant than in the case of nanotechnology (hereafter identified as 'nano') given its current rapid growth and multidisciplinary nature. (Under the rubric of nanotechnology, we also include nanoscience and nanoengineering.) Past efforts have utilized several strategies, including simple term search for the prefix nano, complex lexical and citation-based approaches, and bootstrapping techniques. This research introduces a modularized Boolean approach to defining nanotechnology which has been applied to several research and patenting databases. We explain our approach to downloading and cleaning data, and report initial results. Comparisons of this approach with other nanotechnology search formulations are presented. Implications for search strategy development and profiling of the nanotechnology field are discussed

  7. Refining search terms for nanotechnology

    Energy Technology Data Exchange (ETDEWEB)

    Porter, Alan L. [Georgia Institute of Technology (United States); Youtie, Jan [Georgia Institute of Technology, Enterprise Innovation Institute (United States)], E-mail: jan.youtie@innovate.gatech.edu; Shapira, Philip [Georgia Institute of Technology (United States); Schoeneck, David J. [Search Technology, Inc. (United States)

    2008-05-15

    The ability to delineate the boundaries of an emerging technology is central to obtaining an understanding of the technology's research paths and commercialization prospects. Nowhere is this more relevant than in the case of nanotechnology (hereafter identified as 'nano') given its current rapid growth and multidisciplinary nature. (Under the rubric of nanotechnology, we also include nanoscience and nanoengineering.) Past efforts have utilized several strategies, including simple term search for the prefix nano, complex lexical and citation-based approaches, and bootstrapping techniques. This research introduces a modularized Boolean approach to defining nanotechnology which has been applied to several research and patenting databases. We explain our approach to downloading and cleaning data, and report initial results. Comparisons of this approach with other nanotechnology search formulations are presented. Implications for search strategy development and profiling of the nanotechnology field are discussed.
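
    Mechanically, a modularized Boolean definition can be assembled from inclusion and exclusion term modules as in the sketch below; the term lists are placeholders and do not reproduce the published nanotechnology search formulation.

      # Illustrative sketch of assembling a modular Boolean query; the term modules
      # below are placeholders and do NOT reproduce the published nanotechnology
      # search definition.
      inclusion_modules = {
          "prefix":   ['nano*'],
          "concepts": ['"quantum dot*"', '"self-assembl*"', 'fullerene*'],
      }
      exclusion_module = ['nano2', 'nanosecond*', '"nano-second*"']   # false hits to remove

      def or_block(terms):
          return "(" + " OR ".join(terms) + ")"

      query = ("(" + " OR ".join(or_block(t) for t in inclusion_modules.values()) + ")"
               + " NOT " + or_block(exclusion_module))
      print(query)
      # ((nano*) OR ("quantum dot*" OR "self-assembl*" OR fullerene*)) NOT (nano2 OR ...)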

  8. Database Description - Trypanosomes Database | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available General information of database. Database name: Trypanosomes Database. Database maintenance site: Institute of Genetics, Research Organization of Information and Systems, Yata 1111, Mishima, Shizuoka 411-8540, Japan. Organism: Taxonomy Name: Trypanosoma, Taxonomy ID: 5690; Taxonomy Name: Homo sapiens, Taxonomy ID: 9606. External links: PDB (Protein Data Bank), KEGG PATHWAY Database, DrugPort. Entry list: Available. Query search: Available.

  9. Strategies for medical data extraction and presentation part 2: creating a customizable context and user-specific patient reference database.

    Science.gov (United States)

    Reiner, Bruce

    2015-06-01

    One of the greatest challenges facing healthcare professionals is the ability to directly and efficiently access relevant data from the patient's healthcare record at the point of care; specific to both the context of the task being performed and the specific needs and preferences of the individual end-user. In radiology practice, the relative inefficiency of imaging data organization and manual workflow requirements serves as an impediment to historical imaging data review. At the same time, clinical data retrieval is even more problematic due to the quality and quantity of data recorded at the time of order entry, along with the relative lack of information system integration. One approach to address these data deficiencies is to create a multi-disciplinary patient referenceable database which consists of high-priority, actionable data within the cumulative patient healthcare record; in which predefined criteria are used to categorize and classify imaging and clinical data in accordance with anatomy, technology, pathology, and time. The population of this referenceable database can be performed through a combination of manual and automated methods, with an additional step of data verification introduced for data quality control. Once created, these referenceable databases can be filtered at the point of care to provide context and user-specific data specific to the task being performed and individual end-user requirements.
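
    As a purely hypothetical illustration of such a referenceable database, the sketch below stores items categorized by anatomy, technology, pathology and time, and filters them for a given point-of-care context; every field name and record is invented and does not correspond to an actual EHR or RIS schema.

      # Hypothetical sketch of a context/user-specific filter over a patient
      # "referenceable database"; every field name and record below is invented for
      # illustration and is not an actual EHR or RIS schema.
      from dataclasses import dataclass
      from datetime import date

      @dataclass
      class ReferenceItem:
          anatomy: str        # e.g. "chest"
          technology: str     # e.g. "CT", "radiograph", "lab"
          pathology: str      # e.g. "nodule"
          recorded: date
          summary: str

      database = [
          ReferenceItem("chest", "CT", "nodule", date(2014, 3, 2), "6 mm RUL nodule"),
          ReferenceItem("chest", "radiograph", "normal", date(2015, 1, 10), "clear lungs"),
          ReferenceItem("abdomen", "CT", "cyst", date(2013, 7, 22), "simple renal cyst"),
      ]

      def filter_for_context(items, anatomy=None, technologies=None, since=None):
          """Return items matching the current task context and user preferences."""
          out = items
          if anatomy:
              out = [r for r in out if r.anatomy == anatomy]
          if technologies:
              out = [r for r in out if r.technology in technologies]
          if since:
              out = [r for r in out if r.recorded >= since]
          return sorted(out, key=lambda r: r.recorded, reverse=True)

      # A chest-CT interpretation context might request prior chest imaging only.
      for r in filter_for_context(database, anatomy="chest", technologies={"CT", "radiograph"}):
          print(r.recorded, r.technology, "-", r.summary)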

  10. Analysis of a Bibliographic Database Enhanced with a Library Classification.

    Science.gov (United States)

    Drabenstott, Karen Markey; And Others

    1990-01-01

    Describes a project that examined the effects of incorporating subject terms from the Dewey Decimal Classification (DDC) into a bibliographic database. It is concluded that the incorporation of DDC and possibly other library classifications into online catalogs can enhance subject access and provide additional subject searching strategies. (11…

  11. Definition and construction of a first database for assessing the impacts on health and the environment of different strategies for the back end of the fuel cycle

    International Nuclear Information System (INIS)

    Muller, O.; Ouzounian, G.

    1998-01-01

    The life cycle assessment framework has been applied to the management of the used fuel cycle to establish a general methodology for studying the health and environmental impacts of the back end of the fuel cycle. System definition starts with a defined waste fuel composition and covers all the industrial steps until every element of the waste is stored. Electricity generation is recommended as the functional unit, especially for comparing different strategies. In this case, as some parts of the nuclear waste may be recycled to produce electricity, the systems have to be expanded to cover both the front and back ends of the fuel cycle. A first bibliographical database covering the different stages of the nuclear cycle has been constructed and stored in the standard Ecobilan format developed for environmental analysis and management. Data collection includes all steps from mining extraction to ultimate disposal. Together with the constitution of this database, several typical strategies for PWR fuels have been assessed. A first list of criteria has been chosen to best represent the impacts of each strategy on the health of both the public and workers and on the environment. The data gathered for each step are ready to be reused for designing and assessing simulations of alternative nuclear cycles. (author)

  12. Atomic Spectra Database (ASD)

    Science.gov (United States)

    SRD 78 NIST Atomic Spectra Database (ASD) (Web, free access)   This database provides access and search capability for NIST critically evaluated data on atomic energy levels, wavelengths, and transition probabilities that are reasonably up-to-date. The NIST Atomic Spectroscopy Data Center has carried out these critical compilations.

  13. Database in Artificial Intelligence.

    Science.gov (United States)

    Wilkinson, Julia

    1986-01-01

    Describes a specialist bibliographic database of literature in the field of artificial intelligence created by the Turing Institute (Glasgow, Scotland) using the BRS/Search information retrieval software. The subscription method for end-users--i.e., an annual fee entitles the user to unlimited access to the database, document provision, and printed awareness…

  14. Online Patent Searching: The Realities.

    Science.gov (United States)

    Kaback, Stuart M.

    1983-01-01

    Considers patent subject searching capabilities of major online databases, noting patent claims, "deep-indexed" files, test searches, retrieval of related references, multi-database searching, improvements needed in indexing of chemical structures, full text searching, improvements needed in handling numerical data, and augmenting a…

  15. Literature database aid

    International Nuclear Information System (INIS)

    Wanderer, J.A.

    1991-01-01

    The booklet is intended to help with the acquisition of original literature, either after a conventional literature search or, in particular, after a database search. It bridges the gap between abbreviated (short) and original (long) titles. This, together with information on the holdings of technical/scientific libraries, facilitates document delivery. 1500 short titles are listed alphabetically. (orig.) [de

  16. A Study on Information Search and Commitment Strategies on Web Environment and Internet Usage Self-Efficacy Beliefs of University Students'

    Science.gov (United States)

    Geçer, Aynur Kolburan

    2014-01-01

    This study addresses university students' information search and commitment strategies in the web environment and their internet usage self-efficacy beliefs in terms of such variables as gender, department, grade level and frequency of internet use, and examines whether there is a significant relation between these beliefs. The descriptive method was used in the study.…

  17. Students' Scientific Epistemic Beliefs, Online Evaluative Standards, and Online Searching Strategies for Science Information: The Moderating Role of Cognitive Load Experience

    Science.gov (United States)

    Hsieh, Ya-Hui; Tsai, Chin-Chung

    2014-01-01

    The purpose of this study is to examine the moderating role of cognitive load experience between students' scientific epistemic beliefs and information commitments, which refer to online evaluative standards and online searching strategies. A total of 344 science-related major students participated in this study. Three questionnaires were…

  18. Analysis of Students' Online Information Searching Strategies, Exposure to Internet Information Pollution and Cognitive Absorption Levels Based on Various Variables

    Science.gov (United States)

    Kurt, Adile Askim; Emiroglu, Bülent Gürsel

    2018-01-01

    The objective of the present study was to examine students' online information searching strategies, their cognitive absorption levels and the information pollution levels on the Internet based on different variables and to determine the correlation between these variables. The study was designed with the survey model, the study group included 198…

  19. Using Direct Policy Search to Identify Robust Strategies in Adapting to Uncertain Sea Level Rise and Storm Surge

    Science.gov (United States)

    Garner, G. G.; Keller, K.

    2017-12-01

    Sea-level rise poses considerable risks to coastal communities, ecosystems, and infrastructure. Decision makers are faced with deeply uncertain sea-level projections when designing a strategy for coastal adaptation. The traditional methods have provided tremendous insight into this decision problem, but are often silent on tradeoffs as well as the effects of tail-area events and of potential future learning. Here we reformulate a simple sea-level rise adaptation model to address these concerns. We show that Direct Policy Search yields improved solution quality, with respect to Pareto-dominance in the objectives, over the traditional approach under uncertain sea-level rise projections and storm surge. Additionally, the new formulation produces high quality solutions with less computational demands than the traditional approach. Our results illustrate the utility of multi-objective adaptive formulations for the example of coastal adaptation, the value of information provided by observations, and point to wider-ranging application in climate change adaptation decision problems.
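
    The following sketch shows the general shape of direct policy search for a stylized, single-objective version of this problem: a two-parameter heightening rule is simulated over sampled sea-level trajectories and the parameters are tuned by plain random search. The sea-level model, cost figures and policy form are assumptions for illustration only, not the model or numbers used in the study.

      # Stylized direct policy search for a coastal-protection rule (illustrative only).
      import random

      random.seed(1)

      def sample_slr_trajectory(years=100):
          # Crude stochastic sea-level path (metres above today): uncertain trend + noise
          rate = random.gauss(0.005, 0.002)
          return [max(0.0, rate * t + random.gauss(0.0, 0.01)) for t in range(years)]

      def evaluate_policy(trigger, freeboard, n_runs=100):
          # Average cost of the rule: "when sea level comes within `trigger` metres of the
          # current dike height, raise the dike to sea level plus `freeboard` metres"
          total = 0.0
          for _ in range(n_runs):
              dike, cost = 0.5, 0.0
              for level in sample_slr_trajectory():
                  if level > dike - trigger:
                      raise_by = max(0.0, level + freeboard - dike)
                      dike += raise_by
                      cost += 1.0 + 2.0 * raise_by      # fixed + variable heightening cost
                  if level > dike:
                      cost += 50.0                      # flood damage penalty
              total += cost
          return total / n_runs

      # Direct policy search: optimize the two policy parameters by random search
      candidates = [(random.uniform(0.0, 0.5), random.uniform(0.1, 1.0)) for _ in range(50)]
      best = min(candidates, key=lambda p: evaluate_policy(*p))
      print("best (trigger, freeboard):", best)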

  20. How to perform a systematic search

    DEFF Research Database (Denmark)

    Bartels, Else Marie

    2013-01-01

    All medical practice and research must be evidence-based, as far as this is possible. With medical knowledge constantly growing, it has become necessary to possess a high level of information literacy to stay competent and professional. Furthermore, as patients can now search for information on the Internet, clinicians must be able to respond to this type of information in a professional way, when needed. Here, the development of viable systematic search strategies for journal articles, books, book chapters and other sources, the selection of appropriate databases, search tools and selection methods...

  1. [The practice of systematic reviews. II. Searching and selection of studies

    DEFF Research Database (Denmark)

    Assendelft, W J; van Tulder, M W; Scholten, R J

    1999-01-01

    Structured searching and selection of studies is an important component of a systematic review. It is recommended to record the various steps in a protocol in advance. The thoroughness of the searching and selection will partially depend on the available resources, like manpower and funds. A search action should be based on an unequivocally formulated research or clinical question that is operationalized into clear inclusion and exclusion criteria. The actual start of a search strategy is a search in, preferably, multiple databases like Medline and EMBASE-Excerpta Medica. Additional search actions can be performed in trial registers and printed indexes and by correspondence with experts and hand searching of journals. Storage of the search results in a bibliographic database is recommended. Various methodological problems may play a role in searching and selecting studies for a review: studies...
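
    For the Medline step of such a strategy, a search can also be run and documented programmatically. The sketch below, which assumes the Biopython package and NCBI E-utilities access, records the query, the hit count and the first PMIDs; the query string itself is a toy fragment, not a validated systematic-review strategy.

      # Illustrative sketch of running and documenting one PubMed/Medline search step.
      from Bio import Entrez

      Entrez.email = "reviewer@example.org"   # NCBI asks for a contact address

      query = '("low back pain"[MeSH Terms]) AND (randomized controlled trial[Publication Type])'
      handle = Entrez.esearch(db="pubmed", term=query, retmax=100)
      record = Entrez.read(handle)
      handle.close()

      # Log the protocol details (database, query, hit count) so the step can be reported
      print("PubMed query :", query)
      print("Total hits   :", record["Count"])
      print("First PMIDs  :", record["IdList"][:10])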

  2. License - SSBD | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available

  3. Download - PSCDB | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available

  4. License - ASTRA | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available

  5. License - JSNP | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available

  6. License - KOME | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available

  7. Download - ASTRA | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available

  8. License - RGP gmap | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available

  9. License - SAHG | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available

  10. Download - RED | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available

  11. Download - GRIPDB | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available

  12. License - RPSD | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available

  13. License - RMOS | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available

  14. Search of medical literature for indoor carbon monoxide exposure

    Energy Technology Data Exchange (ETDEWEB)

    Brennan, T.; Ivanovich, M.

    1995-12-01

    This report documents a literature search on carbon monoxide. The search was limited to the medical and toxicological databases at the National Library of Medicine (MEDLARS). The databases searched were Medline, Toxline and TOXNET. Searches were performed using a variety of strategies. Combinations of the following keywords were used: carbon, monoxide, accidental, residential, occult, diagnosis, misdiagnosis, heating, furnace, and indoor. The literature was searched from 1966 to the present. Over 1000 references were identified and summarized using a set of abbreviations. The major findings of the search are: (1) Acute and subacute carbon monoxide exposures result in a large number of symptoms affecting the brain, kidneys, respiratory system, retina, and motor functions. (2) Acute and subacute carbon monoxide (CO) poisonings have been misdiagnosed on many occasions. (3) Very few systematic investigations have been made into the frequency and consequences of carbon monoxide poisonings.

  15. Building strategies for tsunami scenarios databases to be used in a tsunami early warning decision support system: an application to western Iberia

    Science.gov (United States)

    Tinti, S.; Armigliato, A.; Pagnoni, G.; Zaniboni, F.

    2012-04-01

    One of the most challenging goals that the geo-scientific community is facing after the catastrophic tsunami occurred on December 2004 in the Indian Ocean is to develop the so-called "next generation" Tsunami Early Warning Systems (TEWS). Indeed, the meaning of "next generation" does not refer to the aim of a TEWS, which obviously remains to detect whether a tsunami has been generated or not by a given source and, in the first case, to send proper warnings and/or alerts in a suitable time to all the countries and communities that can be affected by the tsunami. Instead, "next generation" identifies with the development of a Decision Support System (DSS) that, in general terms, relies on 1) an integrated set of seismic, geodetic and marine sensors whose objective is to detect and characterise the possible tsunamigenic sources and to monitor instrumentally the time and space evolution of the generated tsunami, 2) databases of pre-computed numerical tsunami scenarios to be suitably combined based on the information coming from the sensor environment and to be used to forecast the degree of exposition of different coastal places both in the near- and in the far-field, 3) a proper overall (software) system architecture. The EU-FP7 TRIDEC Project aims at developing such a DSS and has selected two test areas in the Euro-Mediterranean region, namely the western Iberian margin and the eastern Mediterranean (Turkish coasts). In this study, we discuss the strategies that are being adopted in TRIDEC to build the databases of pre-computed tsunami scenarios and we show some applications to the western Iberian margin. In particular, two different databases are being populated, called "Virtual Scenario Database" (VSDB) and "Matching Scenario Database" (MSDB). The VSDB contains detailed simulations of few selected earthquake-generated tsunamis. The cases provided by the members of the VSDB are computed "real events"; in other words, they represent the unknowns that the TRIDEC

  16. SU-E-J-129: A Strategy to Consolidate the Image Database of a VERO Unit Into a Radiotherapy Management System

    International Nuclear Information System (INIS)

    Yan, Y; Medin, P; Yordy, J; Zhao, B; Jiang, S

    2014-01-01

    Purpose: To present a strategy to integrate the imaging database of a VERO unit with a treatment management system (TMS) to improve clinical workflow and consolidate image data to facilitate clinical quality control and documentation. Methods: A VERO unit is equipped with both kV and MV imaging capabilities for IGRT treatments. It has its own imaging database behind a firewall. It has been a challenge to transfer images on this unit to a TMS in a radiation therapy clinic so that registered images can be reviewed remotely with an approval or rejection record. In this study, a software system, iPump-VERO, was developed to connect VERO and a TMS in our clinic. The patient database folder on the VERO unit was mapped to a read-only folder on a file server outside the VERO firewall. The application runs on a regular computer with read access to the patient database folder. It finds the latest registered images and fuses them in one of six predefined patterns before sending them via a DICOM connection to the TMS. The residual image registration errors will be overlaid on the fused image to facilitate image review. Results: The fused images of either registered kV planar images or CBCT images are fully DICOM compatible. A sentinel module is built to sense newly registered images with negligible computing resources from the VERO ExacTrac imaging computer. It takes a few seconds to fuse registered images and send them to the TMS. The whole process is automated without any human intervention. Conclusion: Transferring images over a DICOM connection is the easiest way to consolidate images of various sources in your TMS. Technically, the attending does not have to go to the VERO treatment console to review image registration prior to delivery. It is a useful tool for a busy clinic with a VERO unit

  17. BLAST and FASTA similarity searching for multiple sequence alignment.

    Science.gov (United States)

    Pearson, William R

    2014-01-01

    BLAST, FASTA, and other similarity searching programs seek to identify homologous proteins and DNA sequences based on excess sequence similarity. If two sequences share much more similarity than expected by chance, the simplest explanation for the excess similarity is common ancestry (homology). The most effective similarity searches compare protein sequences, rather than DNA sequences, for sequences that encode proteins, and use expectation values, rather than percent identity, to infer homology. The BLAST and FASTA packages of sequence comparison programs provide programs for comparing protein and DNA sequences to protein databases (the most sensitive searches). Protein and translated-DNA comparisons to protein databases routinely allow evolutionary look back times from 1 to 2 billion years; DNA:DNA searches are 5-10-fold less sensitive. BLAST and FASTA can be run on popular web sites, but can also be downloaded and installed on local computers. With local installation, target databases can be customized for the sequence data being characterized. With today's very large protein databases, search sensitivity can also be improved by searching smaller comprehensive databases, for example, a complete protein set from an evolutionarily neighboring model organism. By default, BLAST and FASTA use scoring strategies targeted at distant evolutionary relationships; for comparisons involving short domains or queries, or searches that seek relatively close homologs (e.g. mouse-human), shallower scoring matrices will be more effective. Both BLAST and FASTA provide very accurate statistical estimates, which can be used to reliably identify protein sequences that diverged more than 2 billion years ago.
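
    A hedged example of such a local protein search is sketched below, assuming a local NCBI BLAST+ installation and a protein database already formatted with makeblastdb; the file names and E-value cutoff are placeholders.

      # Sketch of a local protein-vs-protein search with NCBI BLAST+ (blastp), assuming
      # BLAST+ is installed; file names and the E-value cutoff are illustrative.
      import subprocess

      cmd = [
          "blastp",
          "-query", "query_proteins.fasta",   # protein sequences to characterize
          "-db", "neighbor_proteome",         # smaller, curated target database
          "-evalue", "1e-6",                  # expectation-value threshold for reporting
          "-outfmt", "6",                     # tabular output (query, subject, %id, E-value, ...)
          "-out", "blastp_hits.tsv",
      ]
      subprocess.run(cmd, check=True)
      print("search finished; tabular hits written to blastp_hits.tsv")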

  18. Custom Search Engines: Tools & Tips

    Science.gov (United States)

    Notess, Greg R.

    2008-01-01

    Few have the resources to build a Google or Yahoo! from scratch. Yet anyone can build a search engine based on a subset of the large search engines' databases. Use Google Custom Search Engine or Yahoo! Search Builder or any of the other similar programs to create a vertical search engine targeting sites of interest to users. The basic steps to…

  19. A search for pre-main sequence stars in the high-latitude molecular clouds. II - A survey of the Einstein database

    Science.gov (United States)

    Caillault, Jean-Pierre; Magnani, Loris

    1990-01-01

    The preliminary results are reported of a survey of every EINSTEIN image which overlaps any high-latitude molecular cloud in a search for X-ray emitting pre-main sequence stars. This survey, together with complementary KPNO and IRAS data, will allow the determination of how prevalent low mass star formation is in these clouds in general and, particularly, in the translucent molecular clouds.

  20. A review of strategies to stimulate dental professionals to integrate smoking cessation interventions into primary care.

    NARCIS (Netherlands)

    Rosseel, J.P.; Jacobs, J.E.; Plasschaert, A.J.M.; Grol, R.P.T.M.

    2012-01-01

    OBJECTIVE: To summarise evidence regarding the effectiveness of various implementation strategies to stimulate the delivery of smoking cessation advice and support during daily dental care. BASIC RESEARCH DESIGN: Search of online medical and psychological databases, correspondence with authors and

  1. Students' Scientific Epistemic Beliefs, Online Evaluative Standards, and Online Searching Strategies for Science Information: The Moderating Role of Cognitive Load Experience

    Science.gov (United States)

    Hsieh, Ya-Hui; Tsai, Chin-Chung

    2014-06-01

    The purpose of this study is to examine the moderating role of cognitive load experience between students' scientific epistemic beliefs and information commitments, which refer to online evaluative standards and online searching strategies. A total of 344 science-related major students participated in this study. Three questionnaires were used to ascertain the students' scientific epistemic beliefs, information commitments, and cognitive load experience. Structural equation modeling was then used to analyze the moderating effect of cognitive load, with the results revealing its significant moderating effect. The relationships between sophisticated scientific epistemic beliefs and the advanced evaluative standards used by the students were significantly stronger for low than for high cognitive load students. Moreover, considering the searching strategies that the students used, the relationships between sophisticated scientific epistemic beliefs and advanced searching strategies were also stronger for low than for high cognitive load students. However, for the high cognitive load students, only one of the sophisticated scientific epistemic belief dimensions was found to positively associate with advanced evaluative standard dimensions.

  2. JICST Factual DatabaseJICST Chemical Substance Safety Regulation Database

    Science.gov (United States)

    Abe, Atsushi; Sohma, Tohru

    The JICST Chemical Substance Safety Regulation Database is based on the Database of Safety Laws for Chemical Compounds constructed by the Japan Chemical Industry Ecology-Toxicology & Information Center (JETOC), sponsored by the Science and Technology Agency, in 1987. JICST has modified the JETOC database system, added data and started the online service through JOIS-F (JICST Online Information Service-Factual database) in January 1990. The JICST database comprises eighty-three laws and fourteen hundred compounds. The authors outline the database, data items, files and search commands. An example of an online session is presented.

  3. Specialist Bibliographic Databases.

    Science.gov (United States)

    Gasparyan, Armen Yuri; Yessirkepov, Marlen; Voronov, Alexander A; Trukhachev, Vladimir I; Kostyukova, Elena I; Gerasimov, Alexey N; Kitas, George D

    2016-05-01

    Specialist bibliographic databases offer essential online tools for researchers and authors who work on specific subjects and perform comprehensive and systematic syntheses of evidence. This article presents examples of the established specialist databases, which may be of interest to those engaged in multidisciplinary science communication. Access to most specialist databases is through subscription schemes and membership in professional associations. Several aggregators of information and database vendors, such as EBSCOhost and ProQuest, facilitate advanced searches supported by specialist keyword thesauri. Searches of items through specialist databases are complementary to those through multidisciplinary research platforms, such as PubMed, Web of Science, and Google Scholar. Familiarizing with the functional characteristics of biomedical and nonbiomedical bibliographic search tools is mandatory for researchers, authors, editors, and publishers. The database users are offered updates of the indexed journal lists, abstracts, author profiles, and links to other metadata. Editors and publishers may find particularly useful source selection criteria and apply for coverage of their peer-reviewed journals and grey literature sources. These criteria are aimed at accepting relevant sources with established editorial policies and quality controls.

  4. Specialist Bibliographic Databases

    Science.gov (United States)

    2016-01-01

    Specialist bibliographic databases offer essential online tools for researchers and authors who work on specific subjects and perform comprehensive and systematic syntheses of evidence. This article presents examples of the established specialist databases, which may be of interest to those engaged in multidisciplinary science communication. Access to most specialist databases is through subscription schemes and membership in professional associations. Several aggregators of information and database vendors, such as EBSCOhost and ProQuest, facilitate advanced searches supported by specialist keyword thesauri. Searches of items through specialist databases are complementary to those through multidisciplinary research platforms, such as PubMed, Web of Science, and Google Scholar. Familiarizing with the functional characteristics of biomedical and nonbiomedical bibliographic search tools is mandatory for researchers, authors, editors, and publishers. The database users are offered updates of the indexed journal lists, abstracts, author profiles, and links to other metadata. Editors and publishers may find particularly useful source selection criteria and apply for coverage of their peer-reviewed journals and grey literature sources. These criteria are aimed at accepting relevant sources with established editorial policies and quality controls. PMID:27134485

  5. A search for pre-main-sequence stars in high-latitude molecular clouds. 3: A survey of the Einstein database

    Science.gov (United States)

    Caillault, Jean-Pierre; Magnani, Loris; Fryer, Chris

    1995-01-01

    In order to discern whether the high-latitude molecular clouds are regions of ongoing star formation, we have used X-ray emission as a tracer of youthful stars. The entire Einstein database yields 18 images which overlap 10 of the clouds mapped partially or completely in the CO (1-0) transition, providing a total of approximately 6 deg squared of overlap. Five previously unidentified X-ray sources were detected: one has an optical counterpart which is a pre-main-sequence (PMS) star, and two have normal main-sequence stellar counterparts, while the other two are probably extragalactic sources. The PMS star is located in a high Galactic latitude Lynds dark cloud, so this result is not too surprising. The translucent clouds, though, have yet to reveal any evidence of star formation.

  6. A supersymmetry search strategy with single-lepton events at 13 TeV by the CMS experiment

    Energy Technology Data Exchange (ETDEWEB)

    Lobanov, Artur; Seitz, Claudia; Melzer-Pellmann, Isabell; Singh, Akshansh [DESY, Hamburg (Germany)

    2016-07-01

    We present an inclusive search for supersymmetry in the single-lepton channel at 13 TeV. To optimise the sensitivity to various new-physics topologies, we search in several exclusive categories which differ in the number of jets and b-tagged jets. We determine the background from data, exploiting the fact that the main background is located at small values of the azimuthal angle between the W-boson candidate and the charged lepton. To be less dependent on the new-physics scale, we also introduce separate search categories based on the scalar sum of the jet transverse momenta and on the scalar sum of the missing transverse momentum and the transverse momentum of the lepton. Depending on the signal model, the signal regions have varying sensitivity. Here we concentrate on gluino-gluino production, where the pair-produced gluinos decay to a top-antitop pair and the lightest neutralino.

  7. JICST Factual Database(2)

    Science.gov (United States)

    Araki, Keisuke

    A computer program that builds atom-bond connection tables from nomenclature has been developed. Chemical substances are entered together with their nomenclature and a variety of trivial names or experimental code numbers. The chemical structures in the database are stored stereospecifically and can be searched and displayed according to stereochemistry. Source data come from laws and regulations of Japan, the US RTECS, and so on. The database plays a central role within the integrated fact database service of JICST and makes interrelational retrieval possible.

  8. Disbiome database: linking the microbiome to disease.

    Science.gov (United States)

    Janssens, Yorick; Nielandt, Joachim; Bronselaer, Antoon; Debunne, Nathan; Verbeke, Frederick; Wynendaele, Evelien; Van Immerseel, Filip; Vandewynckel, Yves-Paul; De Tré, Guy; De Spiegeleer, Bart

    2018-06-04

    Recent research has provided fascinating indications and evidence that a host's health is linked to its microbial inhabitants. Due to the development of high-throughput sequencing technologies, more and more data covering microbial composition changes in different disease types are emerging. However, this information is dispersed over a wide variety of medical and biomedical disciplines. Disbiome is a database which collects and presents published microbiota-disease information in a standardized way. The diseases are classified using the MedDRA classification system and the micro-organisms are linked to their NCBI and SILVA taxonomies. Finally, each study included in the Disbiome database is assessed for its reporting quality using a standardized questionnaire. Disbiome is the first database giving a clear, concise and up-to-date overview of microbial composition differences in diseases, together with the relevant information on the studies published. The strength of this database lies in the combination of references to other databases, which enables both specific and diverse search strategies within Disbiome, and human annotation, which ensures a simple and structured presentation of the available data.

  9. Did online publishers "get it right"? Using a naturalistic search strategy to review cognitive health promotion content on internet webpages.

    Science.gov (United States)

    Hunter, P V; Delbaere, M; O'Connell, M E; Cammer, A; Seaton, J X; Friedrich, T; Fick, F

    2017-06-15

    One of the most common uses of the Internet is to search for health-related information. Although scientific evidence pertaining to cognitive health promotion has expanded rapidly in recent years, it is unclear how much of this information has been made available to Internet users. Thus, the purpose of our study was to assess the reliability and quality of information about cognitive health promotion encountered by typical Internet users. To generate a list of relevant search terms employed by Internet users, we entered seed search terms in Google Trends and recorded any terms consistently used in the prior 2 years. To further approximate the behaviour of typical Internet users, we entered each term in Google and sampled the first two relevant results. This search, completed in October 2014, resulted in a sample of 86 webpages, 48 of which had content related to cognitive health promotion. An interdisciplinary team rated the information reliability and quality of these webpages using a standardized measure. We found that information reliability and quality were moderate, on average. Just one retrieved page mentioned best practice, national recommendations, or consensus guidelines by name. Commercial orientation (i.e., product promotion, advertising content, or non-commercial) was associated with differences in reliability and quality, with product promoter webpages having the lowest mean reliability and quality ratings. As efforts to communicate the association between lifestyle and cognitive health continue to expand, we offer these results as a baseline assessment of the reliability and quality of cognitive health promotion on the Internet.

  10. Hybrid and Cooperative Strategies Using Harmony Search and Artificial Immune Systems for Solving the Nurse Rostering Problem

    Directory of Open Access Journals (Sweden)

    Suk Ho Jin

    2017-06-01

    Full Text Available The nurse rostering problem is an important search problem that features many constraints. In a nurse rostering problem, these constraints are defined by processes such as maintaining work regulations, assigning nurse shifts, and considering nurse preferences. A number of approaches to address these constraints, such as penalty function methods, have been investigated in the literature. We propose two types of hybrid metaheuristic approaches for solving the nurse rostering problem, which are based on combining harmony search techniques and artificial immune systems to balance local and global searches and to prevent slow convergence and premature convergence. The proposed algorithms are evaluated against a benchmarking dataset of nurse rostering problems; the results show that they identify better or best-known solutions compared with those identified in other studies for most instances. The results also show that the combination of harmony search and artificial immune systems is better suited than using a single metaheuristic or other hybridization methods for finding upper-bound solutions for nurse rostering problems and discrete optimization problems.
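
    The sketch below shows the core improvisation and memory-update mechanics of harmony search on a toy continuous objective; it is a generic illustration and does not reproduce the hybrid harmony search/artificial immune system algorithm or the nurse-rostering constraint handling described in the article.

      # Generic harmony search sketch on a toy objective (sphere function).
      import random

      random.seed(0)

      def objective(x):                       # toy objective: minimize the sum of squares
          return sum(v * v for v in x)

      DIM, HMS, HMCR, PAR, BW, ITERS = 5, 10, 0.9, 0.3, 0.05, 2000
      LOW, HIGH = -5.0, 5.0

      # Initialize harmony memory with random solutions
      memory = [[random.uniform(LOW, HIGH) for _ in range(DIM)] for _ in range(HMS)]

      for _ in range(ITERS):
          new = []
          for d in range(DIM):
              if random.random() < HMCR:                  # memory consideration
                  value = random.choice(memory)[d]
                  if random.random() < PAR:               # pitch adjustment
                      value += random.uniform(-BW, BW)
              else:                                       # random selection
                  value = random.uniform(LOW, HIGH)
              new.append(min(max(value, LOW), HIGH))
          worst = max(range(HMS), key=lambda i: objective(memory[i]))
          if objective(new) < objective(memory[worst]):   # replace the worst harmony
              memory[worst] = new

      best = min(memory, key=objective)
      print("best value found:", objective(best))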

  11. Optimization of partial search

    International Nuclear Information System (INIS)

    Korepin, Vladimir E

    2005-01-01

    A quantum Grover search algorithm can find a target item in a database faster than any classical algorithm. One can trade accuracy for speed and find a part of the database (a block) containing the target item even faster; this is partial search. A partial search algorithm was recently suggested by Grover and Radhakrishnan. Here we optimize it. Efficiency of the search algorithm is measured by the number of queries to the oracle. The author suggests a new version of the Grover-Radhakrishnan algorithm which uses a minimal number of such queries. The algorithm can run on the same hardware that is used for the usual Grover algorithm. (letter to the editor)
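
    For reference, the query count of the standard full Grover search, which partial search improves upon, can be written as below; the optimized coefficient for partial search is derived in the letter itself and is not restated here.

      % Baseline for comparison: the standard full Grover search locates one marked item
      % among N unsorted items with roughly (pi/4)*sqrt(N) oracle queries, versus O(N)
      % queries classically; the optimized query count for partial search is the
      % quantity minimized in the letter.
      \[
        q_{\text{full}} \approx \left\lfloor \tfrac{\pi}{4}\sqrt{N} \right\rfloor ,
        \qquad
        q_{\text{classical}} = O(N).
      \]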

  12. Detection and identification of drugs and toxicants in human body fluids by liquid chromatography-tandem mass spectrometry under data-dependent acquisition control and automated database search.

    Science.gov (United States)

    Oberacher, Herbert; Schubert, Birthe; Libiseller, Kathrin; Schweissgut, Anna

    2013-04-03

    Systematic toxicological analysis (STA) is aimed at detecting and identifying all substances of toxicological relevance (i.e. drugs, drugs of abuse, poisons and/or their metabolites) in biological material. In particular, gas chromatography-mass spectrometry (GC/MS) represents a competent and commonly applied screening and confirmation tool. Herein, we present an untargeted liquid chromatography-tandem mass spectrometry (LC/MS/MS) assay intended to complement existing GC/MS screening for the detection and identification of drugs in blood, plasma and urine samples. Solid-phase extraction was accomplished on mixed-mode cartridges. LC was based on gradient elution on a miniaturized C18 column. High resolution electrospray ionization-MS/MS in positive ion mode with data-dependent acquisition control was used to generate tandem mass spectral information that enabled compound identification via automated library search in the "Wiley Registry of Tandem Mass Spectral Data, MSforID". The fitness of the developed LC/MS/MS method for application in STA in terms of selectivity, detection capability and reliability of identification (sensitivity/specificity) was demonstrated with blank samples, certified reference materials, proficiency test samples, and authentic casework samples. Copyright © 2013 Elsevier B.V. All rights reserved.

  13. Relational databases

    CERN Document Server

    Bell, D A

    1986-01-01

    Relational Databases explores the major advances in relational databases and provides a balanced analysis of the state of the art in relational databases. Topics covered include capture and analysis of data placement requirements; distributed relational database systems; data dependency manipulation in database schemata; and relational database support for computer graphics and computer aided design. This book is divided into three sections and begins with an overview of the theory and practice of distributed systems, using the example of INGRES from Relational Technology as illustration. The

  14. NSGA-II Algorithm with a Local Search Strategy for Multiobjective Optimal Design of Dry-Type Air-Core Reactor

    Directory of Open Access Journals (Sweden)

    Chengfen Zhang

    2015-01-01

    Full Text Available The dry-type air-core reactor is now widely applied in electrical power distribution systems, for which optimization of the design is a crucial issue. In the optimization design problem of the dry-type air-core reactor, the objectives of minimizing the production cost and minimizing the operation cost are both important. In this paper, a multiobjective optimization model is established that simultaneously considers the two objectives of minimizing the production cost and minimizing the operation cost. To solve the multi-objective optimization problem, a memetic evolutionary algorithm is proposed, which combines the elitist nondominated sorting genetic algorithm version II (NSGA-II) with a local search strategy based on the covariance matrix adaptation evolution strategy (CMA-ES). NSGA-II can provide the decision maker with flexible choices among the different trade-off solutions, while the local-search strategy, which is applied to nondominated individuals randomly selected from the current population in a given generation and quantity, can accelerate the convergence speed. Furthermore, another modification is that an external archive is set in the proposed algorithm to increase the evolutionary efficiency. The proposed algorithm is tested on a dry-type air-core reactor made of rectangular cross-section litz wire. Simulation results show that the proposed algorithm has high efficiency and converges to a better Pareto front.
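
    The nondominated-sorting step at the heart of NSGA-II can be sketched generically as below, on toy two-objective (production cost, operation cost) points; the reactor model, the CMA-ES local search and the external archive described in the article are not reproduced.

      # Generic sketch of the nondominated-sorting step used in NSGA-II (minimize both objectives).
      def dominates(a, b):
          # True if a is at least as good as b in all objectives and strictly better in one
          return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

      def nondominated_sort(points):
          # Return a list of fronts; front 0 is the current Pareto-optimal set
          remaining = list(range(len(points)))
          fronts = []
          while remaining:
              front = [i for i in remaining
                       if not any(dominates(points[j], points[i]) for j in remaining if j != i)]
              fronts.append(front)
              remaining = [i for i in remaining if i not in front]
          return fronts

      # Toy (production cost, operation cost) pairs for candidate reactor designs
      objs = [(3.0, 9.0), (4.0, 5.0), (6.0, 4.0), (5.0, 6.0), (8.0, 8.0)]
      for rank, front in enumerate(nondominated_sort(objs)):
          print("front", rank, "->", [objs[i] for i in front])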

  15. Manipulating Google's Knowledge Graph Box to Counter Biased Information Processing During an Online Search on Vaccination: Application of a Technological Debiasing Strategy.

    Science.gov (United States)

    Ludolph, Ramona; Allam, Ahmed; Schulz, Peter J

    2016-06-02

    One of people's major motives for going online is the search for health-related information. Most consumers start their search with a general search engine but are unaware of the fact that its sorting and ranking criteria do not mirror information quality. This misconception can lead to distorted search outcomes, especially when the information processing is characterized by heuristic principles and resulting cognitive biases instead of a systematic elaboration. As vaccination opponents are vocal on the Web, the chance of encountering their non‒evidence-based views on immunization is high. Therefore, biased information processing in this context can cause subsequent impaired judgment and decision making. A technological debiasing strategy could counter this by changing people's search environment. This study aims at testing a technological debiasing strategy to reduce the negative effects of biased information processing when using a general search engine on people's vaccination-related knowledge and attitudes. This strategy is to manipulate the content of Google's knowledge graph box, which is integrated in the search interface and provides basic information about the search topic. A full 3x2 factorial, posttest-only design was employed with availability of basic factual information (comprehensible vs hardly comprehensible vs not present) as the first factor and a warning message as the second factor of experimental manipulation. Outcome variables were the evaluation of the knowledge graph box, vaccination-related knowledge, as well as beliefs and attitudes toward vaccination, as represented by three latent variables emerged from an exploratory factor analysis. Two-way analysis of variance revealed a significant main effect of availability of basic information in the knowledge graph box on participants' vaccination knowledge scores (F2,273=4.86, P=.01), skepticism/fear of vaccination side effects (F2,273=3.5, P=.03), and perceived information quality (F2

  16. Manipulating Google’s Knowledge Graph Box to Counter Biased Information Processing During an Online Search on Vaccination: Application of a Technological Debiasing Strategy

    Science.gov (United States)

    Allam, Ahmed; Schulz, Peter J

    2016-01-01

    Background One of people’s major motives for going online is the search for health-related information. Most consumers start their search with a general search engine but are unaware of the fact that its sorting and ranking criteria do not mirror information quality. This misconception can lead to distorted search outcomes, especially when the information processing is characterized by heuristic principles and resulting cognitive biases instead of a systematic elaboration. As vaccination opponents are vocal on the Web, the chance of encountering their non‒evidence-based views on immunization is high. Therefore, biased information processing in this context can cause subsequent impaired judgment and decision making. A technological debiasing strategy could counter this by changing people’s search environment. Objective This study aims at testing a technological debiasing strategy to reduce the negative effects of biased information processing when using a general search engine on people’s vaccination-related knowledge and attitudes. This strategy is to manipulate the content of Google’s knowledge graph box, which is integrated in the search interface and provides basic information about the search topic. Methods A full 3x2 factorial, posttest-only design was employed with availability of basic factual information (comprehensible vs hardly comprehensible vs not present) as the first factor and a warning message as the second factor of experimental manipulation. Outcome variables were the evaluation of the knowledge graph box, vaccination-related knowledge, as well as beliefs and attitudes toward vaccination, as represented by three latent variables emerged from an exploratory factor analysis. Results Two-way analysis of variance revealed a significant main effect of availability of basic information in the knowledge graph box on participants’ vaccination knowledge scores (F2,273=4.86, P=.01), skepticism/fear of vaccination side effects (F2,273=3.5, P=.03

  17. Design of a Bioactive Small Molecule that Targets the Myotonic Dystrophy Type 1 RNA Via an RNA Motif-Ligand Database & Chemical Similarity Searching

    Science.gov (United States)

    Parkesh, Raman; Childs-Disney, Jessica L.; Nakamori, Masayuki; Kumar, Amit; Wang, Eric; Wang, Thomas; Hoskins, Jason; Tran, Tuan; Housman, David; Thornton, Charles A.; Disney, Matthew D.

    2012-01-01

    Myotonic dystrophy type 1 (DM1) is a triplet repeat disorder caused by expanded CTG repeats in the 3′ untranslated region of the dystrophia myotonica protein kinase (DMPK) gene. The transcribed repeats fold into an RNA hairpin with multiple copies of a 5′CUG/3′GUC motif that binds the RNA splicing regulator muscleblind-like 1 protein (MBNL1). Sequestration of MBNL1 by expanded r(CUG) repeats causes splicing defects in a subset of pre-mRNAs including the insulin receptor, the muscle-specific chloride ion channel, Sarco(endo)plasmic reticulum Ca2+ ATPase 1 (Serca1/Atp2a1), and cardiac troponin T (cTNT). Based on these observations, small molecule ligands that specifically target expanded DM1 repeats could serve as therapeutics. In the present study, computational screening was employed to improve the efficacy of pentamidine and Hoechst 33258 ligands that have been shown previously to target the DM1 triplet repeat. A series of inhibitors of the RNA-protein complex with low micromolar IC50 values, which are >20-fold more potent than the query compounds, was identified. Importantly, a bis-benzimidazole identified from the Hoechst query improves DM1-associated pre-mRNA splicing defects in cell and mouse models of DM1 (when dosed with 1 mM and 100 mg/kg, respectively). Since Hoechst 33258 was identified as a DM1 binder through analysis of an RNA motif-ligand database, these studies suggest that lead ligands targeting RNA with improved biological activity can be identified by using a synergistic approach that combines analysis of known RNA-ligand interactions with virtual screening. PMID:22300544
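
    As a generic illustration of the chemical similarity searching component, the sketch below computes Tanimoto similarities between Morgan fingerprints, assuming the RDKit toolkit is installed; the SMILES strings are placeholders, not the Hoechst- or pentamidine-derived DM1 ligands from the study.

      # Generic 2-D chemical similarity search with RDKit (placeholder molecules).
      from rdkit import Chem, DataStructs
      from rdkit.Chem import AllChem

      query_smiles = "c1ccc2[nH]c(-c3ccc(O)cc3)nc2c1"     # benzimidazole-like query
      library_smiles = ["c1ccc2[nH]cnc2c1", "CCOc1ccccc1", "c1ccc2[nH]c(-c3ccccc3)nc2c1"]

      def fingerprint(smiles):
          # Morgan (circular) fingerprint, radius 2, 2048 bits
          mol = Chem.MolFromSmiles(smiles)
          return AllChem.GetMorganFingerprintAsBitVect(mol, 2, nBits=2048)

      query_fp = fingerprint(query_smiles)
      for smi in library_smiles:
          sim = DataStructs.TanimotoSimilarity(query_fp, fingerprint(smi))
          print(f"{smi:40s} Tanimoto = {sim:.2f}")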

  18. Identification of specific markers for amphetamine synthesised from the pre-precursor APAAN following the Leuckart route and retrospective search for APAAN markers in profiling databases from Germany and the Netherlands.

    Science.gov (United States)

    Hauser, Frank M; Rößler, Thorsten; Hulshof, Janneke W; Weigel, Diana; Zimmermann, Ralf; Pütz, Michael

    2018-04-01

    α-Phenylacetoacetonitrile (APAAN) is one of the most important pre-precursors for amphetamine production in recent years. This assumption is based on seizure data but there is little analytical data available showing how much amphetamine really originated from APAAN. In this study, several syntheses of amphetamine following the Leuckart route were performed starting from different organic compounds including APAAN. The organic phases were analysed using gas chromatography-mass spectrometry (GC-MS) to search for signals caused by possible APAAN markers. Three compounds were discovered, isolated, and based on the performed syntheses it was found that they are highly specific for the use of APAAN. Using mass spectra, high resolution MS and nuclear magnetic resonance (NMR) data the compounds were characterised and identified as 2-phenyl-2-butenenitrile, 3-amino-2-phenyl-2-butenenitrile, and 4-amino-6-methyl-5-phenylpyrimidine. To investigate their significance, they were searched in data from seized amphetamine samples to determine to what extent they were present in illicitly produced amphetamine. Data of more than 580 cases from amphetamine profiling databases in Germany and the Netherlands were used for this purpose. These databases allowed analysis of the yearly occurrence of the markers going back to 2009. The markers revealed a trend that was in agreement with seizure reports and reflected an increasing use of APAAN from 2010 on. This paper presents experimental proof that APAAN is indeed the most important pre-precursor of amphetamine in recent years. It also illustrates how important it is to look for new ways to identify current trends in drug production since such trends can change within a few years. Copyright © 2017 John Wiley & Sons, Ltd.

  19. Standardization of Keyword Search Mode

    Science.gov (United States)

    Su, Di

    2010-01-01

    In spite of its popularity, keyword search mode has not been standardized. Though information professionals are quick to adapt to various presentations of keyword search mode, novice end-users may find keyword search confusing. This article compares keyword search mode in some major reference databases and calls for standardization. (Contains 3…

  20. Biofuel Database

    Science.gov (United States)

    Biofuel Database (Web, free access)   This database brings together structural, biological, and thermodynamic data for enzymes that are either in current use or are being considered for use in the production of biofuels.

  1. Community Database

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This excel spreadsheet is the result of merging at the port level of several of the in-house fisheries databases in combination with other demographic databases such...

  2. Searching for the most cost-effective strategy for controlling epidemics spreading on regular and small-world networks.

    Science.gov (United States)

    Kleczkowski, Adam; Oleś, Katarzyna; Gudowska-Nowak, Ewa; Gilligan, Christopher A

    2012-01-07

    We present a combined epidemiological and economic model for control of diseases spreading on local and small-world networks. The disease is characterized by a pre-symptomatic infectious stage that makes detection and control of cases more difficult. The effectiveness of local (ring-vaccination or culling) and global control strategies is analysed by comparing the net present values of the combined cost of preventive treatment and illness. The optimal strategy is then selected by minimizing the total cost of the epidemic. We show that three main strategies emerge, with treating a large number of individuals (global strategy, GS), treating a small number of individuals in a well-defined neighbourhood of a detected case (local strategy) and allowing the disease to spread unchecked (null strategy, NS). The choice of the optimal strategy is governed mainly by a relative cost of palliative and preventive treatments. If the disease spreads within the well-defined neighbourhood, the local strategy is optimal unless the cost of a single vaccine is much higher than the cost associated with hospitalization. In the latter case, it is most cost-effective to refrain from prevention. Destruction of local correlations, either by long-range (small-world) links or by inclusion of many initial foci, expands the range of costs for which the NS is most cost-effective. The GS emerges for the case when the cost of prevention is much lower than the cost of treatment and there is a substantial non-local component in the disease spread. We also show that local treatment is only desirable if the disease spreads on a small-world network with sufficiently few long-range links; otherwise it is optimal to treat globally. In the mean-field case, there are only two optimal solutions, to treat all if the cost of the vaccine is low and to treat nobody if it is high. The basic reproduction ratio, R(0), does not depend on the rate of responsive treatment in this case and the disease always invades
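
    A much cruder, stylized version of the cost comparison between the null and local (ring-treatment) strategies can be sketched as below, assuming the networkx package; the rates, costs and discrete-time infection rules are illustrative assumptions, not the model analysed in the paper.

      # Stylized cost comparison of null vs local (ring-treatment) control on a small-world network.
      import random
      import networkx as nx

      random.seed(2)

      def run(local_control, n=500, k=4, p=0.01, beta=0.3, c_treat=1.0, c_ill=5.0):
          G = nx.watts_strogatz_graph(n, k, p)           # small-world contact network
          status = {v: "S" for v in G}                   # S, I, R (removed/treated)
          status[0] = "I"
          cost = c_ill                                   # cost of the index case
          for _ in range(200):
              infected = [v for v in G if status[v] == "I"]
              if not infected:
                  break
              for v in infected:
                  for u in G.neighbors(v):
                      if status[u] == "S" and random.random() < beta:
                          status[u] = "I"
                          cost += c_ill
                  if local_control:                      # treat the ring around a detected case
                      for u in G.neighbors(v):
                          if status[u] == "S":
                              status[u] = "R"
                              cost += c_treat
                  status[v] = "R"                        # the case recovers / is removed
          return cost

      print("null strategy cost :", run(local_control=False))
      print("local strategy cost:", run(local_control=True))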

  3. Database Administrator

    Science.gov (United States)

    Moore, Pam

    2010-01-01

    The Internet and electronic commerce (e-commerce) generate lots of data. Data must be stored, organized, and managed. Database administrators, or DBAs, work with database software to find ways to do this. They identify user needs, set up computer databases, and test systems. They ensure that systems perform as they should and add people to the…

  4. Update History of This Database - SSBD | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available Update history of this database: 2016/07/25, the SSBD English archive site is opened; 2013/09/03, SSBD ( http://ssbd.qbic.riken.jp/ ) is opened.

  5. Update History of This Database - SAHG | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available Update history of this database: 2016/05/09, the SAHG English archive site is opened; 2009/10, SAHG ( http://bird.cbrc.jp/sahg ) is opened.

  6. Update History of This Database - DMPD | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available Update history of this database: 2010/03/29, the DMPD English archive site is opened; DMPD ( ...jp/macrophage/ ) is released.

  7. Update History of This Database - RMOS | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available Update history of this database: 2015/10/27, the RMOS English archive site is opened; RMOS (http://cdna01.dna.affrc.go.jp/RMOS/) is opened.

  8. Ocean Drilling Program: Janus Web Database

    Science.gov (United States)

    Janus Web Database: ODP and IODP data are stored in the Janus database. The site also provides the Janus Data Model, an overview of data migration in Janus, and data types and examples.

  9. Online Petroleum Industry Bibliographic Databases: A Review.

    Science.gov (United States)

    Anderson, Margaret B.

    This paper discusses the present status of the bibliographic database industry, reviews the development of online databases of interest to the petroleum industry, and considers future developments in online searching and their effect on libraries and information centers. Three groups of databases are described: (1) databases developed by the…

  10. Database Description - tRNADB-CE | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available General information of the database. Database name: tRNADB-CE. License: CC BY-SA. Background and funding: MEXT Integrated Database Project. Reference(s): article on tRNADB-CE, 2009 Jan;37(Database issue):D163-8. External links: article "tRNADB-CE 2011: tRNA gene database curat…".

  11. Variation in number of hits for complex searches in Google Scholar

    Directory of Open Access Journals (Sweden)

    Wichor Matthijs Bramer, BSc

    2016-11-01

    Full Text Available Objective: Google Scholar is often used to search for medical literature, and the numbers of results it reports exceed those reported by traditional databases. How reliable are these numbers, and why are the available 1,000 references often not all shown? Methods: For several complex search strategies used in systematic review projects, the number of citations and the total number of versions were calculated. Several search strategies were followed over a two-year period, registering fluctuations in the reported search results. Results: Changes in the numbers of reported search results varied enormously between search strategies and dates. Hypotheses about how the reported and displayed numbers of hits are calculated could not be confirmed. Conclusions: The number of hits reported in Google Scholar is an unreliable measure; its repeatability is therefore problematic, at least when equal results are needed.
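
    The methodology described in record 11 (logging the hit count that Google Scholar reports for the same saved strategies at successive dates and examining the fluctuations) can be illustrated with a short script. The sketch below is a hypothetical illustration rather than the authors' code: the search strings, dates, and counts are placeholder values, and the summary simply reports how widely the reported totals spread for each strategy.

```python
from collections import defaultdict
from datetime import date

# Hypothetical log of reported hit counts: one row per (strategy, check date).
# In a real study these would be recorded manually on each re-run of the search.
hit_log = [
    ('("exercise therapy" OR physiotherapy) AND "low back pain"', date(2015, 3, 2), 16800),
    ('("exercise therapy" OR physiotherapy) AND "low back pain"', date(2015, 9, 14), 15200),
    ('("exercise therapy" OR physiotherapy) AND "low back pain"', date(2016, 3, 7), 17900),
    ('melanoma AND "immune checkpoint" AND survival', date(2015, 3, 2), 9230),
    ('melanoma AND "immune checkpoint" AND survival', date(2016, 3, 7), 11400),
]

def summarize_fluctuations(log):
    """Group reported totals per search strategy and report their spread."""
    per_strategy = defaultdict(list)
    for strategy, when, reported in log:
        per_strategy[strategy].append((when, reported))
    for strategy, observations in per_strategy.items():
        counts = [c for _, c in observations]
        lo, hi = min(counts), max(counts)
        spread_pct = 100.0 * (hi - lo) / lo if lo else float("inf")
        print(strategy)
        print(f"  checks: {len(observations)}, min: {lo}, max: {hi}, spread: {spread_pct:.1f}%")

if __name__ == "__main__":
    summarize_fluctuations(hit_log)
```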

  12. Download - SAHG | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available Download page for the SAHG database in the LSDB Archive.

  13. License - GRIPDB | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available License page for the GRIPDB database in the LSDB Archive.

  14. License - GETDB | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available License page for the GETDB database in the LSDB Archive.

  15. Download - Metabolonote | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available Download page for the Metabolonote database in the LSDB Archive.

  16. Search Improvement Process-Chaotic Optimization-Particle Swarm Optimization-Elite Retention Strategy and Improved Combined Cooling-Heating-Power Strategy Based Two-Time Scale Multi-Objective Optimization Model for Stand-Alone Microgrid Operation

    Directory of Open Access Journals (Sweden)

    Fei Wang

    2017-11-01

    Full Text Available The optimal dispatching model for a stand-alone microgrid (MG) is of great importance to its operational reliability and economy. This paper addresses the difficulties of improving operational economy and maintaining the power balance under uncertain load demand and renewable generation, conditions that can worsen during abnormal events such as storms or abnormally low or high temperatures. A new two-time-scale multi-objective optimization model, comprising day-ahead cursory scheduling and real-time scheduling for finer adjustments, is proposed to optimize the operational cost, load-shedding compensation, and environmental benefit of a stand-alone MG through controllable load (CL) and multiple distributed generations (DGs). The main novelty of the proposed model is that the synergetic response of CL and the energy storage system (ESS) in real-time scheduling quickly offsets operational uncertainty, while the improved dispatch strategy for combined cooling-heating-power (CCHP) enhances system economy with comfort guaranteed. An improved algorithm, the Search Improvement Process-Chaotic Optimization-Particle Swarm Optimization-Elite Retention Strategy (SIP-CO-PSO-ERS) algorithm, with strong search capability and fast convergence, is presented to handle the increased errors between actual renewable generation and load and their prior predictions. Four typical scenarios are designed according to combinations of day type (workday or weekend) and weather category (sunny or rainy) to verify the performance of the presented dispatch strategy. The simulation results show that the proposed two-time-scale model and the SIP-CO-PSO-ERS algorithm exhibit better adaptability, convergence speed, and search ability than conventional methods for stand-alone MG operation.
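
    The abstract above names two of the algorithmic ingredients, chaotic optimization and an elite retention strategy, without giving details. The sketch below is not the authors' SIP-CO-PSO-ERS implementation; it is a minimal single-objective particle swarm optimizer showing how chaotic (logistic-map) initialization and a simple elite-retention step can be wired into a standard PSO loop, with a placeholder sphere objective standing in for the microgrid dispatch cost.

```python
import random

def chaotic_sequence(n, x0=0.7, mu=4.0):
    """Logistic-map sequence in (0, 1), used here for chaotic initialization."""
    seq, x = [], x0
    for _ in range(n):
        x = mu * x * (1.0 - x)
        seq.append(x)
    return seq

def pso_chaotic_elite(cost, dim, bounds, n_particles=30, iters=200, elite_k=3):
    lo, hi = bounds
    # Chaotic initialization: map logistic-map values into the search box.
    chaos = chaotic_sequence(n_particles * dim)
    pos = [[lo + chaos[i * dim + d] * (hi - lo) for d in range(dim)] for i in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_cost = [cost(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_cost[i])
    gbest, gbest_cost = pbest[g][:], pbest_cost[g]

    w, c1, c2 = 0.72, 1.49, 1.49  # standard inertia and acceleration constants
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            c = cost(pos[i])
            if c < pbest_cost[i]:
                pbest[i], pbest_cost[i] = pos[i][:], c
                if c < gbest_cost:
                    gbest, gbest_cost = pos[i][:], c
        # Elite retention: re-inject the global best into the particles whose
        # personal bests are currently the worst, so the best-known solution
        # is never lost between generations.
        worst = sorted(range(n_particles), key=lambda i: pbest_cost[i], reverse=True)[:elite_k]
        for i in worst:
            pos[i] = gbest[:]
    return gbest, gbest_cost

# Placeholder objective standing in for the microgrid dispatch cost.
sphere = lambda x: sum(v * v for v in x)
best, best_cost = pso_chaotic_elite(sphere, dim=5, bounds=(-10.0, 10.0))
print(best_cost)
```

    The elite-retention variant used here simply overwrites the positions of the worst-ranked particles with the global best each generation; the operator in the paper may differ.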

  17. Evaluation of Federated Searching Options for the School Library

    Science.gov (United States)

    Abercrombie, Sarah E.

    2008-01-01

    Three hosted federated search tools, Follett One Search, Gale PowerSearch Plus, and WebFeat Express, were configured and implemented in a school library. Databases from five vendors and the OPAC were systematically searched. Federated search results were compared with each other and to the results of the same searches in the database's native…
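
    Comparisons like the one in record 17, where federated search results are matched against each other and against searches run in each database's native interface, are commonly quantified as set overlap over deduplicated record identifiers. The snippet below is a hypothetical sketch with made-up result sets and tool names; it computes a Jaccard index for each pair of tools and lists records found by one tool but missed by another.

```python
# Hypothetical deduplicated result identifiers (e.g., normalized title strings
# or accession numbers) returned by each tool for the same query.
results = {
    "native_db": {"rec001", "rec002", "rec003", "rec007", "rec009"},
    "federated_tool_a": {"rec001", "rec003", "rec004", "rec009"},
    "federated_tool_b": {"rec002", "rec003", "rec009", "rec011"},
}

def jaccard(a, b):
    """Overlap between two result sets: size of intersection over size of union."""
    return len(a & b) / len(a | b) if (a or b) else 1.0

tools = sorted(results)
for i, t1 in enumerate(tools):
    for t2 in tools[i + 1:]:
        overlap = jaccard(results[t1], results[t2])
        missed = results[t1] - results[t2]
        print(f"{t1} vs {t2}: Jaccard {overlap:.2f}, in {t1} but not {t2}: {sorted(missed)}")
```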

  18. Comparison of characteristics of international and national databases for rheumatoid arthritis: a systematic literature review

    NARCIS (Netherlands)

    Gvozdenović, E.; Koevoets, R.; Langenhoff, J.; Allaart, C. F.; Landewé, R. B. M.

    2014-01-01

    To evaluate current (inter)national registers and observational cohorts in Europe, and to compare their inclusion criteria, aims, collected data, and participation in the European League Against Rheumatism (EULAR) repository. We performed a systematic search in six literature databases for…

  19. GMDD: a database of GMO detection methods.

    Science.gov (United States)

    Dong, Wei; Yang, Litao; Shen, Kailin; Kim, Banghyun; Kleter, Gijs A; Marvin, Hans J P; Guo, Rong; Liang, Wanqi; Zhang, Dabing

    2008-06-04

    Since more than one hundred genetically modified organism (GMO) events have been developed and approved for commercialization worldwide, GMO analysis methods are essential for the enforcement of GMO labelling regulations. Protein- and nucleic acid-based detection techniques have been developed and used for GMO identification and quantification. However, information supporting harmonization and standardization of GMO analysis methods at the global level is needed. The GMO Detection method Database (GMDD) has collected almost all previously developed and reported GMO detection methods, grouped by strategy (screen-, gene-, construct-, and event-specific), and provides a user-friendly search service over the detection methods by GMO event name, exogenous gene, protein information, etc. In this database, users can obtain the sequences of exogenous integrations, which facilitates the design of PCR primers and probes. Information on endogenous genes, certified reference materials, reference molecules, and the validation status of developed methods is also included. Furthermore, registered users can submit new detection methods and sequences to the database, and newly submitted information is released soon after being checked. GMDD contains comprehensive information on GMO detection methods and will make GMO analysis much easier.
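
    To illustrate the kind of search service the GMDD abstract describes (filtering detection methods by GMO event name, targeted element, or method strategy), the sketch below uses a small hypothetical in-memory data model. The records, field names, and the search function are assumptions for illustration only; they are not GMDD's actual schema or API.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class DetectionMethod:
    event: str       # GMO event name the method applies to ("any" for screening)
    strategy: str    # "screen", "gene", "construct", or "event"
    target: str      # exogenous gene or element targeted by the assay
    technique: str   # e.g. "qPCR", "ELISA"
    validated: bool

# Illustrative entries only; not records taken from GMDD.
methods = [
    DetectionMethod("MON810", "event", "cry1Ab junction", "qPCR", True),
    DetectionMethod("MON810", "gene", "cry1Ab", "qPCR", True),
    DetectionMethod("any", "screen", "P-35S promoter", "qPCR", True),
    DetectionMethod("GTS 40-3-2", "construct", "P-35S/CTP4 junction", "qPCR", False),
]

def search(event: Optional[str] = None, strategy: Optional[str] = None,
           target_contains: Optional[str] = None):
    """Filter detection methods by event name, strategy, and/or targeted element."""
    hits = methods
    if event:
        hits = [m for m in hits if m.event.lower() == event.lower()]
    if strategy:
        hits = [m for m in hits if m.strategy == strategy]
    if target_contains:
        hits = [m for m in hits if target_contains.lower() in m.target.lower()]
    return hits

for m in search(event="MON810", strategy="event"):
    print(m)
```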

  20. The CAPEC Database

    DEFF Research Database (Denmark)

    Nielsen, Thomas Lund; Abildskov, Jens; Harper, Peter Mathias

    2001-01-01

    The Computer-Aided Process Engineering Center (CAPEC) database of measured data was established with the aim to promote greater data exchange in the chemical engineering community. The target properties are pure component properties, mixture properties, and special drug solubility data. The database divides pure component properties into primary, secondary, and functional properties. Mixture properties are categorized in terms of the number of components in the mixture and the number of phases present. The compounds in the database have been classified on the basis of the functional groups in the compound. This classification makes the CAPEC database a very useful tool, for example, in the development of new property models, since properties of chemically similar compounds are easily obtained. A program with efficient search and retrieval functions of properties has been developed.
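
    The key design point in the CAPEC record, classifying compounds by functional groups so that properties of chemically similar compounds are easy to retrieve, can be sketched as a simple group-indexed lookup. The data model, compound entries, and property values below are hypothetical illustrations, not contents of the CAPEC database or its retrieval program.

```python
from collections import defaultdict

# Hypothetical compound records: name, functional groups, and a couple of
# pure-component properties (normal boiling point in K, molecular weight).
# Values are illustrative only.
compounds = [
    {"name": "ethanol",     "groups": {"hydroxyl"},             "Tb_K": 351.4, "MW": 46.07},
    {"name": "1-propanol",  "groups": {"hydroxyl"},             "Tb_K": 370.3, "MW": 60.10},
    {"name": "acetone",     "groups": {"carbonyl"},             "Tb_K": 329.2, "MW": 58.08},
    {"name": "acetic acid", "groups": {"carboxyl", "hydroxyl"}, "Tb_K": 391.0, "MW": 60.05},
]

# Index compounds by functional group so chemically similar compounds
# (sharing at least one group) can be retrieved quickly.
by_group = defaultdict(list)
for c in compounds:
    for g in c["groups"]:
        by_group[g].append(c)

def similar_compounds(query_groups, prop):
    """Return (name, property value) for compounds sharing a functional group
    with the query, e.g. to support development of a new property model."""
    seen, hits = set(), []
    for g in query_groups:
        for c in by_group.get(g, []):
            if c["name"] not in seen:
                seen.add(c["name"])
                hits.append((c["name"], c[prop]))
    return hits

print(similar_compounds({"hydroxyl"}, "Tb_K"))
```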